Source table = Staging table = Cube in a single mapping?

I want to extract data from some source tables and load it into a staging table. Then, using the staging table as a source, I want to load a cube.
I have tried doing all that in a single mapping, with the staging table operator in the middle of the mapping.
Apparently, it does not work. Only the second part of the mapping is generated, that is, the merge statement that loads the cube using the staging table as the source.
Of course, I can build two mappings, and execute one after the other.
My question is: is the first approach feasible? Can I somehow load a staging table, and then use that as the source to load a cube, all in a single mapping?
Best regards
Juan Algaba

Hi,
doing all this in one mapping is very bad design - but possible. What does the Runtime Audit Browser say? How do you know that only the second part of the mapping was executed?
Regards,
Detlef

Similar Messages

  • Load into a single target table from multiple source tables in a single interface

    Hi
    I have four source tables and a single target table.
    I need to move data from one of these tables into the target table, and the source table has to be decided based on user input.
    Example :
    Lets say there are four tables A,B,C,D and one target table T.
    If the user input says A, then the data from table A will move to table T.
    And again, if the user says table C, then the data from table C will move to table T.
    And we have to create only one interface to achieve this in Oracle Data Integrator (ODI).
    You can make assumptions about the source and target tables.

    Hi ,
    In ODI 11g there is a new feature called Datasets. It allows the use of UNION, MINUS, etc.
    Google it and you will find many tutorials on Datasets. Check the link:
    http://www.rittmanmead.com/2011/06/odi-11g-new-mapping-and-interface-features-part-1/
    In your case, you can provide filter conditions on your tables, i.e.
    say my target table is EMPLOYEE and my source tables are EMPLOYEE and DEPARTMENT:
    INSERT INTO EMPLOYEE (CUSTOMER_ID, CUSTOMER_NAME)
    SELECT CUSTOMER_ID, CUSTOMER_NAME FROM employee WHERE 'EMPLOYEE' = :EMP
    UNION
    SELECT DEPARTMENT_ID, DEPARTMENT_NAME FROM departments WHERE 'DEPARTMENT' = :EMP;
    I just pasted the screenshots on the following page: http://oracoholic.blogspot.in/ . Have a look.
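    For the original four-table case the same pattern generalizes. A minimal sketch, assuming tables A, B, C and D all expose the target's columns (ID and NAME here are placeholders) and :SRC carries the user's choice:
    INSERT INTO T (ID, NAME)
    SELECT ID, NAME FROM A WHERE :SRC = 'A'
    UNION ALL
    SELECT ID, NAME FROM B WHERE :SRC = 'B'
    UNION ALL
    SELECT ID, NAME FROM C WHERE :SRC = 'C'
    UNION ALL
    SELECT ID, NAME FROM D WHERE :SRC = 'D';
    Only the branch whose literal matches :SRC returns rows, so a single interface can serve all four sources.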

  • Seeing multiple node names under the Import Manager source table of a single XML

    Hi,
    I have an XML file with different nodes in the following structure. When it comes to Import Manager, it shows multiple table names where I can choose Business Partner, Partner, RemoteSystem, etc., but my maps are based on Partner only. Now I need to update the lookup qualifier table, whose values are in Type, and which I am unable to see under the Partner node in Import Manager. I don't see the fields of the Type node under the Map Field/Values tab.
    <Business_Partner xmlns=...>
      <Partner>
        <Address>
          <Name>
          <Name1>
        <Role>
          <A>
          <B>
      <RemoteSystem>
        <Name>
        <Code>
      <Type>
        <Type>
        <Type2>
    Can anyone please comment?
    Thanks
    Rajeev

    Hello Rajeev
    "Import manager guide"
    page 36:
    "For XML source files, the source table list displays the nested elements defined in the XML schema."
    page 44:
    "The XML Schema list includes an entry for each XML schema defined in the MDM Console."
    page 86:
    "Ways in which the Source Hierarchy tree now reflects an XML fileu2019s structure include:
    u2022 Top-level node is the source XML file
    u2022 Tables in the tree represent nested XML structures
    u2022All tables are nested under the root element
    u2022 Fields in the tree represent data-storing XML elements
    u2022 Joins and _ID fields are no longer added or required
    With these changes, users no longer have to manually recreate the relationships implicit in the XML schema, as they are preserved by MDM and accurately reflected in the source hierarchy tree.
    NOTE ►► To ensure that Import Manager correctly interprets the structure of an XML file, specify its corresponding XML schema file in the Connect to Source dialog (see u201CStarting and Exiting the Import Manageru201D on page 43 for more information)."
    page 188:
    "When the source is an XML file, the Available Fields list is limited to nodes which are siblings (on the same level as), or children of (nested below) the currently selected Source Hierarchy tree node."
    page 213:
    "For example, if a source file is in XML format, Import Manager uses a tree in the Source Fields grid to depict the nested structure of fields within the associated XML schema."
    Use the XML schema so that the nodes are shown correctly.
    Regards
    Kanstantsin Chernichenka

  • Best approach to delete records that are not in the source table anymore.

    I have a situation where I need to remove records from dimensions when they are no longer in the source data. Right now we are not maintaining history, i.e. not using SCDs, but we are planning to in the next release; if we did, it would be easy to identify the latest records. The load is nightly, and records are updated and new ones added.
    The approach I am considering is to join the dimension tables to the sources on their keys and delete whatever doesn't join. However, is there perhaps some function in OWB that would do this automatically on import, so it would also be in place for the future?
    Thanks!

    Bear in mind that deleting dimension records becomes problematic if you have facts attached to them. Just because a record is no longer in the active set doesn't mean it wasn't used historically, and it may therefore have foreign key constraints on it in your database. If this is the case, a short-term solution would be to add an expiry_date field to the dimension and update the load to set this value when the record disappears, rather than deleting it.
    To do that, use the target dimension as a source table and outer join it to the actual source table on the natural key; on all records where the outer join fails, your update then sets expiry_date = nvl(expiry_date, sysdate), i.e. to sysdate if the record has not already been expired (see the sketch below).
    Further consideration: what do you do if the record is re-inserted into the source table? Create a new dimension key, or remove the expiry date?
    But I will say that I am not a fan of deleting records in most circumstances. What do you do if you discover a calculation error and need to fix it and republish historical cubes? Without the historical data, you lose the ability to do things like that.
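    To make the expiry-date approach concrete, here is a minimal sketch in plain SQL, assuming a dimension DIM_CUSTOMER and a source table SRC_CUSTOMER joined on the natural key CUST_NK (all names hypothetical):
    -- Expire dimension rows whose natural key no longer exists in the source
    UPDATE dim_customer d
       SET d.expiry_date = NVL(d.expiry_date, SYSDATE)  -- keeps an earlier expiry date if one is already set
     WHERE NOT EXISTS (SELECT 1
                         FROM src_customer s
                        WHERE s.cust_nk = d.cust_nk);
    The NVL guard means re-running the load never moves an existing expiry date forward.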

  • Join two source tables and replicate into a target table with BLOB

    Hi,
    I am working on an integration to source transaction data from a legacy application to an ESB using GG.
    What I need to do is join two source tables (to de-normalize the area_id) to form the transaction detail, then transform it by concatenating the transaction detail fields into a values-only CSV, and replicate that into the target ESB IN_DATA table's BLOB content field.
    Based on what I have researched, a lookup that joins two source tables requires SQLEXEC, which doesn't support BLOB.
    What alternatives are there, and what does GG recommend in such a use case?
    Any helpful advice is much appreciated.
    thanks,
    Xiaocun

    Xiaocun,
    Not sure what your data looks like, but it's possible that the comma-separated value (CSV) requirement may be solved by something like this in your MAP statement:
    colmap (usedefaults,
    my_blob = @STRCAT (col02, ",", col03, ",", col04));
    Since this is not 1:1 you'll be using a sourcedefs file, which is nice because it will do the datatype conversion for you under the covers (also a nice trick when migrating LONG RAWs to BLOBs). So col02 can be a VARCHAR2, col03 a NUMBER, and col04 a CLOB, and they'll convert in real time.
    Mapping two tables to one is simple enough with two MAP statements; the harder challenge is joining operations from separate transactions, because OGG is operation-based and doesn't work on aggregates. You could end up using a combination of built-in parameters and functions with SQLEXEC and SQL/PL/SQL for more complicated scenarios, all depending on the design of the target table. But you have several scenarios to address.
    For example, is the target table really a history table, or are you actually going to delete from it? If just the child is deleted but you don't want to delete the whole row yet, you may want to use NOCOMPRESSDELETES & UPDATEDELETES and COLMAP a new flag column to denote that it was deleted. It's likely that an insert on the child may really mean an update to the target (see UPDATEINSERTS).
    If you need to update the LOB by appending or prepending new data then that's going to require some custom work, staging tables and a looping script, or a user exit.
    Some parameters you may want to become familiar with if not already:
    COLS | COLSEXCEPT
    COLMAP
    OVERRIDEDUPS
    INSERTDELETES
    INSERTMISSINGUPDATES
    INSERTUPDATES
    GETDELETES | IGNOREDELETES
    GETINSERTS | IGNOREINSERTS
    GETUPDATES | IGNOREUPDATES
    Good luck,
    -joe

  • Mapping Issue with 7 source tables and one target table in one step

    Hi,
    Requesting a little help with OWB.
    I am trying to map 7 source tables to a single target table in one step. These source tables are in an Oracle 10g database but don't have PK and FK relationships.
    I am able to link one table to the target by connecting some of the columns. But when it comes to the second table, it gives an error message:
    Ap18003: Connection target attribute group is already connected to an incompatible data source. Use a joiner or set operator to join the upstream data first before connecting it to this operator.
    As per the error message I used a Joiner operator and tried to map the second table to the target, but it still gives me the same error message. Could somebody give me a hand to get past this step?
    Thanks for your help in advance.
    Cheers,
    Krishna.

    Hi,
    like this:
    Ingroup1
    - id -> Number(9,0)
    - name -> VARCHAR2(500)
    Ingroup2
    - my_id -> Number(9,0)
    - name -> VARCHAR2(500)
    Outgroup
    - id -> point to target_table.id
    - name -> point to target_table.name
    Not:
    Ingroup1
    - id -> Number(9,0)
    - name -> VARCHAR2(500)
    Ingroup2
    - name -> VARCHAR2(500)
    - my_id -> VARCHAR2(9)
    Regards
    Detlef

  • Import of Main and Lookup table in a single Map

    Hey Guys,
    I am developing a proof of concept to import the Main and Lookup-Flat tables in a single Import Map (using a single Excel file).
    Below is my Table structure:
    Main Table: Customer
    --->Customer_Number (Text, Unique Field, Display Field)
    --->Sales_Area (Lookup Flat)
    Lookup Table: Sales_Area
    --->Sales_Area_ID (Text, Unique Field, Display Field)
    --->Sales_Area_Desc (Text, Display Field)
    The import file (Excel) has the following attributes:
    Customer_Number, Sales_Area_ID, Sales_Area_Desc
    When I start, both the Main table and the Lookup table are empty (there is no data in Data Manager for either of the tables).
    Now in the Import Map, I selected the Excel file as the source and the Main table as the target.
    I did the mapping of Customer_Number as usual; after that I created a compound field for Sales_Area_ID + Sales_Area_Desc and mapped that compound field, then did the mapping for Sales_Area_ID and Sales_Area_Desc.
    Now since there is no data in the lookup table, I select the "Add" button in the "Value Mapping" section. When I execute this map it works perfectly, and data is loaded into both the Main table and the lookup table. But if a new value comes in the Excel file (a value which does not yet exist in the lookup table), the map fails; when I open it, it says that I need to redo the Value Mapping. Again I click on the "Add" button and it starts working. So basically the Import Map fails whenever I get a value in the Excel file which does not yet exist in the Lookup table.
    Now my question is: is there a way to automate my import map? I thought clicking on the "Add" button would take care of all the lookup values which are not already present.
    Can anyone please help me in this regard.
    Thanks
    Saif

    Hi Saif,
    You can try the following option.
    Right-click on the lookup field/compound field in the destination fields, and select the option 'SET MDIS Unmapped Value Handling' as 'ADD'.
    Cheers,
    Cherry.

  • ORA-30926: unable to get a stable set of rows in source table

    Dear All
    When I try to load my cube I get the error "ORA-30926: unable to get a stable set of rows in source table".
    Any idea? Googling for this error did not return any solutions.
    My env:
    source: Oracle 10g (10.2.x)
    Target: Oracle 11g (11.1.0.7)
    I am using warehouse builder 11.1.0.7 on Linux
    thank you

    Carsten / neashton
    Thanks for your help. Duplicate rows were the issue.
    I finally traced the problem to my time dimension.
    The OWB wizard-generated time dimension contains only the date, but no time.
    Unfortunately, to uniquely identify my data I need to include the time as well (detailed in: how do I include 'Time' in a time dimension?).
    Since Carsten was the first one to answer this, I am awarding the points to him.
    thanks a lot both of you
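    For anyone else debugging ORA-30926: a quick way to confirm duplicate keys in the source is a GROUP BY ... HAVING check. A sketch with hypothetical names (staging table STG_SALES keyed by TIME_ID):
    SELECT time_id, COUNT(*)
      FROM stg_sales
     GROUP BY time_id
    HAVING COUNT(*) > 1;
    Any rows returned mean the MERGE can match one target row to several source rows, which is exactly what the error complains about.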

  • How to identify the source column and source table for a measure

    Does anyone have a query that I can use to positively identify the source column and source table for a cube measure in an SSAS cube?  Visual Studio shows ID, Name, and Source, but it is nearly worthless in a large cube and database.
    Also - the same for a dimension would be great.
    If no query exists for this, can someone please explain how to find the source column/table for a measure and for a dimension?
    Thanks.

    The DMVs don't expose the DataSourceView content. AMO is much better suited for object-model operations like this than the DMVs. PowerShell is also sometimes an option, but in this case C# code would be much easier, because analyzing the contents of the DataSourceView is much easier using the .Net DataSet class.
    Hope this helps.
    Reeves
    Denver, CO

  • References between source tables and mappings

    Hi,
    I'm looking for a table, view, etc. in the repository which shows me the references between source tables and mappings.
    cheers,
    Bernhard

    Here's another one:
    http://www.nicholasgoodman.com/bt/blog/2005/04/01/owb-sources-and-targets-sql/
    select distinct
      'TARGET',
      comp.map_name,
      comp.data_entity_name,
      comp.operator_type
    from
      all_iv_xform_map_components comp,
      all_iv_xform_map_parameters param
    where
      lower(comp.operator_type) in ('table', 'view', 'dimension', 'cube')
      and param.map_component_id = comp.map_component_id
      and param.source_parameter_id is not null
    UNION
    select distinct
      'SOURCE',
      t1.c1,
      t1.c2,
      t1.c3
    from
      (select
         comp.map_name c1,
         comp.data_entity_name c2,
         comp.operator_type c3,
         max(param.source_parameter_id) c4
       from
         all_iv_xform_map_components comp,
         all_iv_xform_map_parameters param
       where
         lower(comp.operator_type) in ('table', 'view', 'dimension', 'cube')
         and param.map_component_id = comp.map_component_id
       group by
         comp.map_name, comp.data_entity_name, comp.operator_type) t1
    where t1.c4 is null
    order by 2, 1;

  • Create a business model when we have only one source table

    Hi,
    How do I create a business model when we have only one source table in the Physical layer?
    Regards
    Swathi

    This is very much possible and feasible. It's called a Single Table Model. A good example is the SA System subject area, where we have just a single physical source. There is no need to create an alias in the Physical layer. Simply use the same table twice in the BMM, with one logical table as a dummy fact, say a Count of Users (aggregated). Then apply a normal complex join in the BMM and present it in the Presentation layer.
    http://gerardnico.com/wiki/dat/obiee/single_table_model
    http://gerardnico.com/wiki/dat/obiee/sasystem

  • Dimension Mapping in 11g. Can Where clause be used to filter source table?

    Hi,
    Is it possible to use a WHERE clause filter when mapping a dimension to a source table in AWM 11g?
    My understanding of the user guide is that filters can only be used in cube mappings.
    I am using AWM 11.2.0.1.0A on DB 11g R2.
    I presume I could use a database view on the source table to filter down to the records for the dimension; however, I understand this would then prevent me from refreshing any cube that uses this dimension via materialized view refresh?
    Thanks

    Yes, you can apply a where clause on a dimension map, but it is not exposed through AWM. You would need to add an attribute to the XML of the form
    WhereClause="<source table condition>"
    For example, you could add this to a HierarchyLevelMap:
    <HierarchyLevelMap
      WhereClause="CUSTOMER_DIM.SHIP_TO_ID = 123"
      KeyExpression="CUSTOMER_DIM.SHIP_TO_ID"
      Query="CUSTOMER_DIM">
    </HierarchyLevelMap>
    Make sure you add it to all relevant levels in a hierarchy, e.g. all levels that share the same source table. This is compatible with MV refresh. You can also map it to a view containing the where clause, and that, too, should work with MV refresh.
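    The view alternative would look something like this; a sketch assuming the same CUSTOMER_DIM source and filter:
    -- A view carrying the filter, as an alternative to the XML WhereClause attribute
    CREATE OR REPLACE VIEW customer_dim_filtered AS
    SELECT *
      FROM customer_dim
     WHERE ship_to_id = 123;
    You would then map the dimension levels to CUSTOMER_DIM_FILTERED instead of CUSTOMER_DIM.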

  • How to Find Source Table Updated Rows using a T-SQL Script

    Hi Folks,
    I have 2 tables, a Source table and a Staging table. Yesterday I imported 24 million records from the Source table into the Staging table. These tables contain approximately 42 columns, and ID is the unique column.
    So today some updates may have happened to rows in the Source table.
    May I know which rows were updated, comparing the two tables, using a T-SQL query? (All 42 columns might have been updated.)
    Usually new rows also appear in the source table. I want only the rows that were updated, not the new rows!
    Thanks in advance.

     SELECT Source.*, Stage.*
       FROM Source
            FULL OUTER JOIN
            Stage
            ON Source.c1 = Stage.c1
               AND Source.c2 = Stage.c2
               AND Source.cn = Stage.cn
     WHERE Source.key IS NULL 
        OR Stage.key IS NULL; 
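    Since you want updated rows only (not new ones), a variant of the above may help; a sketch assuming ID is the unique key and both tables share the same column layout:
     -- Rows present in both tables (joined on ID) where any column value differs
     SELECT s.*
       FROM Source AS s
            INNER JOIN Stage AS st
            ON st.ID = s.ID            -- the inner join excludes brand-new rows
      WHERE NOT EXISTS (SELECT s.*     -- INTERSECT compares every column, NULL-safely
                        INTERSECT
                        SELECT st.*);
    The INTERSECT trick saves writing 42 nullable-column comparisons by hand.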
    Best regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Source table for info object

    Hi,
    Can anybody tell me the source table for the InfoObject Plant (technical name "0PLANT"), which comes under the InfoCube "Purchasing Data"?
    Thanks in advance
    Umesh mc

    Barbara,
    Did you do the replication of the DataSources?
    Go to Source System -> select your BW system -> right-click -> Replicate DataSources.
    Then refresh your object tree; it should show you the DataSource.
    Hope this helps,
    GSM.

  • How to get Materialized View to ignore unused columns in source table

    When updating a column in a source table, records are generated in the corresponding materialized view log table. This happens even if the column being updated is not used in any MV that references the source table. That could be OK, so long as those updates are ignored. However, they are not ignored, so when the MV is fast-refreshed, I find it can take over a minute even though no changes are required or made. Is there some way of configuring the materialized view log such that the materialized view refresh ignores these updates?
    So for example, if I have table TEST:
    CREATE TABLE test (
      d_id   NUMBER(10) PRIMARY KEY,
      d_name VARCHAR2(100),
      d_desc VARCHAR2(256)
    );
    This has an MV log MLOG$_TEST:
    CREATE MATERIALIZED VIEW LOG ON test WITH ROWID, SEQUENCE, PRIMARY KEY;
    CREATE MATERIALIZED VIEW test_mv
    REFRESH FAST ON DEMAND
    AS
    SELECT d_id, d_name
    FROM test;
    -- insert 200,000 records, then:
    exec dbms_mview.refresh('TEST_MV','f');
    UPDATE test SET d_desc = UPPER(d_desc);
    exec dbms_mview.refresh('TEST_MV','f'); -- This takes 37 seconds, yet no changes are required.
    Oracle 10g/11g

    I would love to hear a positive answer to this question - I have the exact same issue :-)
    In the "old" days (version 8 I think it was) populating the materialized view logs was done by Oracle auto-creating triggers on the base table. A "trick" could then make that trigger become "FOR UPDATE OF <used_column_list>". Now-a-days it has been internalized so such "triggers" are not visible and modifiable by us mere mortals.
    I have not found a way to explicitly tell Oracle "only populate MV log for updates of these columns." I think the underlying reason is that the MV log potentially could be used for several different materialized views at possibly several different target databases. So to be safe that the MV log can be used for any MV created in the future - Oracle always populates MV log at any update (I think.)
    One way around the problem is to migrate to STREAMS replication rather than materialized views - but it seems to me like swatting a fly with a bowling ball...
    One thing to be aware of: once the MV log has been "bloated" with a lot of unnecessary logging, you may see that all your FAST REFRESHes afterwards become slow, even after the one that checked all the 200,000 unnecessary updates. We have seen that Oracle can decide to full-table-scan the MV log when it does a fast refresh, which usually makes sense. But after a "bloat" has happened, the high-water mark of the MV log is unnaturally high, which can make the full table scan slow by scanning a lot of empty blocks.
    We have a nightly job that checks each MV log if it is empty. If it is empty, it locks the MV log and locks the base table, checks for emptiness again, and truncates the MV log if it is still empty, before finally unlocking the tables. That way if an update during the day has happened to bloat the MV log, all the empty space in the MV log will be reclaimed at night.
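    As a rough sketch of that nightly job (hypothetical names, Oracle syntax, locking details simplified):
    DECLARE
      v_cnt NUMBER;
    BEGIN
      SELECT COUNT(*) INTO v_cnt FROM mlog$_test WHERE ROWNUM = 1;
      IF v_cnt = 0 THEN
        LOCK TABLE test, mlog$_test IN EXCLUSIVE MODE;   -- block writers while we re-check
        SELECT COUNT(*) INTO v_cnt FROM mlog$_test WHERE ROWNUM = 1;
        IF v_cnt = 0 THEN
          EXECUTE IMMEDIATE 'TRUNCATE TABLE mlog$_test'; -- DDL commits, resetting the high-water mark and releasing the locks
        END IF;
      END IF;
      COMMIT;  -- release the locks if we did not truncate
    END;
    /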
    But I hope someone can answer both you and me with a better solution ;-)
