Insert a query on the source table from ODI to Planning

Hi, I'm creating an ODI Interface. As source I have a table and as target a Planning dimension.
I'd like to extract records from my source table in a specified order (select * from MYTABLE order by ORD).
Where can I write this command?
Thanks in advance,
Roberta

Hi,
Would it not be easier to create a view on top of your table that extracts the rows in the order you want, and then use that view as the source in ODI?
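For example, a minimal sketch (MYTABLE_ORDERED is just an illustrative name):
    CREATE OR REPLACE VIEW MYTABLE_ORDERED AS
    SELECT * FROM MYTABLE ORDER BY ORD;
Reverse-engineer the view into your source model and use it in the interface in place of MYTABLE.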
Cheers
John
http://john-goodwin.blogspot.com/

Similar Messages

  • SELECT * cannot be used in an INSERT INTO query when the source or destination table contains a multivalued field

    Hi,
    I am using Access 2013 and I have the following VBA code:
    strSQL = "INSERT INTO Master SELECT * FROM Master WHERE ID = 1"
    DoCmd.RunSQL strSQL
    When the SQL statement is run, I get this error:
    SELECT * cannot be used in an INSERT INTO query when the source or destination table contains a multivalued field.
    Any suggestion on how to get around this?
    Please advise; your help would be greatly appreciated!

    Rather than modelling the many-to-many relationship type by means of a multi-valued field, do so by the conventional means: a table which resolves it into two one-to-many relationship types. You give no indication of what is being modelled here, so let's assume a generic model where there is a many-to-many relationship type between Masters and Slaves, for which you'd have the following tables:
    Masters
    ....MasterID  (PK)
    ....Master
    Slaves
    ....SlaveID  (PK)
    ....Slave
    and to model the relationship type:
    SlaveMastership
    ....SlaveID  (FK)
    ....MasterID  (FK)
    The primary key of the last is a composite one of the two foreign keys SlaveID and MasterID.
    You appear to be trying to insert duplicates of a subset of rows from the same table. With the above structure, you would first insert rows into the referenced table Masters for all columns bar the key; presuming the key to be an autonumber column, new values would be assigned automatically. To map these new rows to the same rows in Slaves as the original subset, you would then insert rows into SlaveMastership carrying the same SlaveID values as the rows which referenced the original subset of Masters, paired with the MasterID values generated by the first insert. This requires joining the original and new subsets of rows, in two instances of Masters, on other columns which constitute a candidate key of Masters, so that the corresponding rows of SlaveMastership can be identified, as sketched below.
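    A rough sketch of that final insert (assuming, for illustration, that Master is the candidate-key column joined on, that MasterID = 1 identifies the original subset, and that the duplicate Masters rows have already been inserted; Access-style join parentheses):
    INSERT INTO SlaveMastership (SlaveID, MasterID)
    SELECT sm.SlaveID, mNew.MasterID
    FROM (SlaveMastership AS sm
    INNER JOIN Masters AS mOld ON sm.MasterID = mOld.MasterID)
    INNER JOIN Masters AS mNew ON mOld.Master = mNew.Master
    WHERE mOld.MasterID = 1
    AND mNew.MasterID <> mOld.MasterID;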
    You'll find examples of this sort of insert operation in DecomposerDemo.zip in my public databases folder at:
    https://onedrive.live.com/?cid=44CC60D7FEA42912&id=44CC60D7FEA42912!169
    If you have difficulty opening the link copy its text (NB, not the link location) and paste it into your browser's address bar.
    In this little demo file non-normalized data from Excel is decomposed into a set of normalized tables.  Unlike your situation this does not involve duplication of rows into the same table, but the methodology for the insertion of rows into a table which
    models a many-to-many relationship type is broadly the same.
    The fact that you have this requirement to duplicate a subset of rows into the same table, however, does make me wonder about the validity of the underlying logical model.  I think it would help us if you could describe in detail just what in real world
    terms is being modelled by this table, and the purpose of the insert operation which you are attempting.
    Ken Sheridan, Stafford, England

  • I want to define the Source Table from the Oracle Database. 11.1.1.3

    Dear all,
    How can I define the source table to take data from an Oracle database, with Essbase as the target?
    Another question:
    Do I have to use ERPi in FDQM? If so, why should I use ERPi?

    Please note that the example in the admin guide loops through a recordset and does an append. If you are importing a large amount of data from a source table, this script is not efficient and will result in out-of-memory errors. You would instead want to do a set-based insert into the work table during the import process; this can be done using the FDM API.
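    For illustration only (the table and column names here are hypothetical, not the actual FDM work-table layout), the set-based pattern looks like:
    INSERT INTO WORK_TABLE (ACCOUNT, ENTITY, PERIOD, AMOUNT)
    SELECT ACCOUNT, ENTITY, PERIOD, AMOUNT
    FROM   SOURCE_TABLE;
    A single set-based statement keeps the rows inside the database engine instead of pulling them through a client-side recordset one row at a time.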

  • DBMS_CHANGE_NOTIFICATION and accessing data values part of the source table

    Hi there,
    I am using DBMS_CHANGE_NOTIFICATION following the steps mentioned in this link:
    http://www.oracle-base.com/articles/10g/dbms_change_notification_10gR2.php
    Everything works fine: I am able to capture the insert event on the specific table I have set the notification listener on, and subsequently trigger the follow-on course of activities.
    One thing I am looking for is getting the actual values used in the insert event on the source table.
    E.g. DBMS_CHANGE_NOTIFICATION (insert operation) applied on table TEST, which has 2 columns (col1 varchar2(200), col2 number(10)).
    Fire an insert operation: insert into TEST values ('testing notification', 201); commit;
    As part of the event I get: Table (SCOTT.TEST) - Records Inserted. Rows=
    But I need to know the specific value committed to the database table as part of the insert, i.e. 201 in this case. How can I capture this as part of the event notification mechanism?
    The workaround would be to query the source table in the notification body workflow and retrieve the most recent record, but I wanted to know if this can be achieved via the regular notification event process.
    Regards,

    Hi
    it is exactly the way you describe. Procedures are called with the privileges of the owner...
    Paul
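    On the original question of capturing the inserted values: the notification itself does not carry column values, but if the registration is created with the DBMS_CHANGE_NOTIFICATION.QOS_ROWIDS option, the descriptor passed to the callback should carry the rowids of the affected rows, which the handler can use to fetch the values. A hedged sketch only, following the SYS.CHNF$_DESC structure used in the linked article (verify the attribute names against your version):
    CREATE OR REPLACE PROCEDURE test_callback (ntfnds IN SYS.CHNF$_DESC) IS
      l_col2 test.col2%TYPE;
    BEGIN
      FOR t IN 1 .. ntfnds.table_desc_array.COUNT LOOP
        -- row_desc_array is only populated when QOS_ROWIDS was requested
        IF ntfnds.table_desc_array(t).row_desc_array IS NOT NULL THEN
          FOR r IN 1 .. ntfnds.table_desc_array(t).row_desc_array.COUNT LOOP
            SELECT col2 INTO l_col2
            FROM   test
            WHERE  ROWID = ntfnds.table_desc_array(t).row_desc_array(r).row_id;
            -- l_col2 now holds the inserted value (201 in your example)
          END LOOP;
        END IF;
      END LOOP;
    END;
    /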

  • SLT Replication for the same table from Multiple Source Systems

    Hello,
    With HANA 1.0 SP03 it is now possible to connect multiple source systems to the same SLT Replication Server, and from there to the same schema in SAP HANA - does this mean the same table as well? Or will they be different tables?
    My doubt:
    Consider that I am replicating KNA1 from 2 source systems - say SourceA and SourceB.
    If I have different records in SourceA.KNA1 and SourceB.KNA1, I believe the records will be added during replication, and as a result the final table will hold 2 different records.
    Now, if the same record appears in KNA1 in both sources, the final table should hold only 1 record.
    Also, if the same customer's record is posted in both systems with different values, it should add both records.
    How does HANA handle this situation?
    Please throw some light on this.

    Hi Vishal,
    I suggest you take a look at the SAP HANA SPS03 Master Guide. There is a comparison table for the three replication technologies available (see page 25).
    For multi-system support it lists:
    - Trigger-Based Replication (SLT Replication): multiple source systems to multiple SAP HANA instances (one source system can be connected to one SAP HANA schema only)
    So I think that in your case you should consider BO Data Services (losing the real-time analytics capabilities, of course).
    Regards
    Leopoldo Capasso

  • Analyse big data in Excel? Why doesn't the dynamic table take all the data from the source table?

    Hi,
    I'm doing an internship on a production line.
    My job is to recover production data (input data) and test data (output data) using various types of software (Excel, SAP BusinessObjects, etc.).
    To this day, I have recovered hundreds of production records and have organized them in Excel, but I need to analyze and plot them.
    I would like ideas on how I could plot and analyze this much data.
    Right now I am trying to use dynamic (pivot) tables and charts to plot some of the data, but I have not gotten acceptable results.
    How could I compare, analyze and graph, for example, five columns of production (input) data against five columns of test (output) data?
    Can someone suggest a technique to analyze the data? I.e., should I compare column by column, analyze the data as one conglomerate, or use some other technique?
    To give you an idea of the context: I am doing my internship at a turbine manufacturer, and my job is to analyze the input (production) data to estimate the likely behaviour of the turbines in the tests.
    As I said, I use dynamic tables in Excel, but I have no idea why the dynamic tables don't take all the data from the source table.
    I appreciate your advice.
    Thanks

    You can declare whole columns [$A:$E] as the PivotTable source, without row numbers.
    Then you'll always have all the current data.
    Oskar Shon, Office System MVP - www.VBATools.pl
    if Helpful; Answer when a problem solved

  • How to query the count of temporary tables from user_objects

    e.g. select count(*) from user_objects where object_type='TABLE'
    This query counts all tables, including temporary ones. How can I count only the temporary tables in user_objects, excluding the permanent tables?
    Thanks a lot!

    select count(*) from user_objects where object_type = 'TABLE' and temporary = 'Y';
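    Equivalently, user_tables exposes the same flag, so this should return the same count:
    select count(*) from user_tables where temporary = 'Y';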

  • ORA-30926: unable to get a stable set of rows in the source tables

    Hi,
    I am loading data from a source table to a target table in an interface, using LKM Incremental Update.
    In the Merge Rows step, I am getting the error below:
    30926 : 99999 : java.sql.SQLException: ORA-30926: unable to get a stable set of rows in the source tables
    Please help: what should be done to resolve this?

    Below is the query in the merge step. When I run it directly in SQL, I get the same error:
    SQL Error: ORA-30926: unable to get a stable set of rows in the source tables
    30926. 00000 - "unable to get a stable set of rows in the source tables"
    *Cause:    A stable set of rows could not be got because of large dml
               activity or a non-deterministic where clause.
    *Action:   Remove any non-deterministic where clauses and reissue the dml.
    merge into TFR.INVENTORIES T
    using TFR.I$_INVENTORIES S
    on (     T.ORGANIZATION_ID = S.ORGANIZATION_ID
         and T.ITEM_ID         = S.ITEM_ID )
    when matched then update set
         T.ITEM_TYPE              = S.ITEM_TYPE,
         T.SEGMENT1               = S.SEGMENT1,
         T.DESCRIPTION            = S.DESCRIPTION,
         T.LIST_PRICE_PER_UNIT    = S.LIST_PRICE_PER_UNIT,
         T.CREATED_BY             = S.CREATED_BY,
         T.DEFAULT_SO_SOURCE_TYPE = S.DEFAULT_SO_SOURCE_TYPE,
         T.MATERIAL_BILLABLE_FLAG = S.MATERIAL_BILLABLE_FLAG,
         T.LAST_UPDATED_BY        = S.LAST_UPDATED_BY,
         T.ID                     = TFR.INVENTORIES_SEQ.NEXTVAL,
         T.CREATION_DATE          = CURRENT_DATE,
         T.LAST_UPDATE_DATE       = CURRENT_DATE
    when not matched then insert (
         T.ORGANIZATION_ID,
         T.ITEM_ID,
         T.ITEM_TYPE,
         T.SEGMENT1,
         T.DESCRIPTION,
         T.LIST_PRICE_PER_UNIT,
         T.CREATED_BY,
         T.DEFAULT_SO_SOURCE_TYPE,
         T.MATERIAL_BILLABLE_FLAG,
         T.LAST_UPDATED_BY,
         T.ID,
         T.CREATION_DATE,
         T.LAST_UPDATE_DATE )
    values (
         S.ORGANIZATION_ID,
         S.ITEM_ID,
         S.ITEM_TYPE,
         S.SEGMENT1,
         S.DESCRIPTION,
         S.LIST_PRICE_PER_UNIT,
         S.CREATED_BY,
         S.DEFAULT_SO_SOURCE_TYPE,
         S.MATERIAL_BILLABLE_FLAG,
         S.LAST_UPDATED_BY,
         TFR.INVENTORIES_SEQ.NEXTVAL,
         CURRENT_DATE,
         CURRENT_DATE )
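    A common cause of ORA-30926 in a merge step is that the staging table contains more than one row per target row, so the MERGE cannot decide which source row should apply. As a first check (a hedged suggestion, using the join keys from the ON clause above), look for duplicates in the staging table:
    SELECT ORGANIZATION_ID, ITEM_ID, COUNT(*)
    FROM   TFR.I$_INVENTORIES
    GROUP  BY ORGANIZATION_ID, ITEM_ID
    HAVING COUNT(*) > 1;
    If this returns rows, deduplicate the source data (or extend the ON clause with additional key columns) before re-running the interface.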

  • MERGE error : unable to get a stable set of rows in the source tables

    Hi,
    For an update, the following MERGE statement throws the error "unable to get a stable set of rows in the source tables":
    MERGE INTO table2t INT
    USING (SELECT DISTINCT NULL bdl_inst_id, .......
           FROM table1 ftp
           WHERE ftp.gld_business_date = g_business_date
           AND ftp.underlying_instrument_id IS NOT NULL) ui
    ON (   (INT.inst_id = ui.inst_id
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date)
        OR (INT.ric = ui.ric
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date)
        OR (INT.isin = ui.isin
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date)
        OR (INT.sedol = ui.sedol
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date)
        OR (INT.cusip = ui.cusip
            AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date))
    WHEN MATCHED THEN
    UPDATE SET INT.inst_id = ui.inst_id, INT.ric = ui.ric
    WHEN NOT MATCHED THEN
    INSERT (inst_key, ......)
    VALUES (inst_key, ......);
    To determine the existence of a record, we first check if any match is found on the first key; if not, we search on the second key, and so on.
    Now two records with different first keys, i.e. inst_id, can have the same ric (the second key). On a rerun, with the target table already populated, the code fails because it finds duplicate entries for the second key.
    Any suggestions on how to make this work?
    Thanks in advance.
    Annie

    Annie
    You've spotted the problem (that two records have the same RIC). MERGE doesn't allow that; each record in the data being updated is only allowed to be updated once.
    Is there a PK column (or columns) that we can rely on?
    What you can try is to outer join FTP to INT. Something like:
    MERGE INTO INT int1
    USING (
        select columns_you_need
        from (
            select ftp.columns -- whatever they are
                 , int2.columns
                 , row_number() over (partition by int2.pk_columns order by int2.somecolumns) as rn
            from   ftp
            left join int int2
              on (the condition you used in your query)
        )
        where rn = 1
    ) s
    ON (the matching condition)
    WHEN MATCHED THEN UPDATE ...
    WHEN NOT MATCHED THEN INSERT ...
    So if you can restrict the driving query so that only the first one of the possible updates actually gets presented to the MERGE operation, you might be in with a chance :-)
    And of course this error is nothing to do with any triggers.
    HTH
    Regards Nigel

  • How do I create a target table with the same PK as the source table?

    I am trying to create a target table in a mapping that will end up with the same primary key as the source table.
    It is a simple map that uses a subset of the columns of the source table in the target table. I wanted to create and bind a new table by dragging the columns I want from the source to the initially blank target table operator, changing the column names, and creating a primary key to match the source table.
    I can't seem to create a constraint on the table in the mapping. I can create the constraint after the table is created and bound to the database object, but the PK doesn't carry back into the mapping.
    I need it in the mapping so I can use the UPDATE/INSERT operation and use the 'All Constraints' implementation. The mapping won't let me validate the object without the PK on it in the map.
    Believe it or not folks, I am getting better at this.
    Thanks very much for the guidance.
    Gary

    Hi Gary
    You are close, you are really close... :-))
    You need to do exactly as you propose, plus one extra step. Build the map as you describe, binding the new table to the target. Then edit the table definition to add the primary key and any other constraints you need. After this comes the step that you are missing.
    You need to do the following:
    1. Go back and re-edit the map
    2. Right click on the table
    3. From the pop up menu, select Reconcile Inbound
    4. Set any operators that you need for the UPDATE/INSERT
    5. Save the map
    6. Commit your changes
    The first three steps above make the map read in the indexes and constraints that you set on the table. Finally, you need to deploy the table and then deploy the map.
    Hope this helps
    Regards
    Michael

  • Best approach to delete records that are not in the source table anymore.

    I have a situation where I need to remove records from dimensions that are no longer in the source data. Right now we are not maintaining history, i.e. not using SCD, but we are planning to for the next release. If we did that, it would be easy to figure out the latest records. The load is nightly, and records are updated and new ones added.
    The approach that I am considering is to join the dimension tables to the sources on keys and delete whatever doesn't join. However, is there perhaps some function in OWB that would do this automatically on import, so it would also be in place for the future?
    Thanks!

    Bear in mind that deleting dimension records becomes problematic if you have facts attached to them. Just because a record is no longer in the active set doesn't mean that it wasn't used historically, and so it may have foreign key constraints on it in your database. If this is the case, a short-term solution would be to add an expiry_date field to the dimension and update the load to set this value when the record disappears, rather than deleting it.
    To do that, use the target dimension as a source table and outer join it to the actual source table on the natural key; on all records where the outer join fails, your update sets expiry_date = nvl(expiry_date, sysdate), i.e. to sysdate unless the record has already been expired. A sketch follows.
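    A hedged sketch of that update (table and key names here are hypothetical; NOT EXISTS is used as the equivalent of "the outer join fails"):
    UPDATE dim_customer d
    SET    d.expiry_date = NVL(d.expiry_date, SYSDATE)
    WHERE  NOT EXISTS (SELECT 1
                       FROM   src_customer s
                       WHERE  s.natural_key = d.natural_key);
    The NVL keeps the original expiry date on records that were already expired in an earlier run.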
    Further consideration: what do you do if the record is re-inserted into the source table? Create a new dimension key, or remove the expiry date?
    But I will say that I am not a fan of deleting records in most circumstances. What do you do if you discover a calculation error and need to fix that and republish historical cubes? Without the historical data, you lose the ability to do things like that.

  • ORA-30926: unable to get a stable set of rows in the source tables

    When users try to open a form, they get the error below.
    com.retek.platform.exception.RetekUnknownSystemException:ORA-30926: unable to get a stable set of rows in the source tables
    Please advise.
    Edited by: user13382934 on Jul 9, 2011 1:32 PM

    Please try this:
    create table UPDTE_DEFERRED_MAILING_RECORDS nologging as
    SELECT DISTINCT a.CUST_ID,
           a.EMP_ID,
           a.PURCHASE_DATE,
           a.DRANK,
           c.CONTACT_CD,
           c.NEW_CUST_CD,
           a.DM_ROW_ID
    FROM  (SELECT a.ROWID AS DM_ROW_ID,
                  a.CUST_ID,
                  a.EMP_ID,
                  a.PURCHASE_DATE,
                  DENSE_RANK() OVER (PARTITION BY a.CUST_ID, a.EMP_ID
                                     ORDER BY a.PURCHASE_DATE DESC, a.ROWID) DRANK
           FROM deferred_mailing a) a,
          customer c
    WHERE a.CUST_ID = c.CUST_ID
      AND a.EMP_ID = c.EMP_ID
      AND (a.PURCHASE_DATE <= c.PURCHASE_DATE OR c.PURCHASE_DATE IS NULL)
      AND a.DRANK = 1;
    The query you've posted is behaving as expected. The inner select is returning one row and the outer is returning two, because the conditions
    WHERE a.CUST_ID = c.CUST_ID
      AND a.EMP_ID = c.EMP_ID
      AND (a.PURCHASE_DATE <= c.PURCHASE_DATE OR c.PURCHASE_DATE IS NULL)
    are seeing two rows in the customer table.
    I've added the a.drank=1 clause to skip the duplicates from the inner table, and distinct in the final result to remove duplicates from the overall query result.
    For example, if you have one more row in DEFERRED_MAILING like this:
    SQL> select * from DEFERRED_MAILING;
    CUST_ID  EMP_ID  PURCHASE_
    -------  ------  ---------
        444      10  11-JAN-11
        444      10  11-JAN-11
    then the query without "a.drank=1" will return 4 rows like this from the outer query:
    CUST_ID  EMP_ID  PURCHASE_  DM_ROW_ID           DRANK  C  N
    -------  ------  ---------  ------------------  -----  -  -
        444      10  11-JAN-11  AAATi2AAGAAAACcAAB      2  Y  Y
        444      10  11-JAN-11  AAATi2AAGAAAACcAAA      1  Y  Y
        444      10  11-JAN-11  AAATi2AAGAAAACcAAB      2  Y  Y
        444      10  11-JAN-11  AAATi2AAGAAAACcAAA      1  Y  Y
    It'll return the below even if we use distinct on the same query (i.e. without a.drank=1):
    CUST_ID  EMP_ID  PURCHASE_  DM_ROW_ID           DRANK  C  N
    -------  ------  ---------  ------------------  -----  -  -
        444      10  11-JAN-11  AAATi2AAGAAAACcAAB      2  Y  Y
        444      10  11-JAN-11  AAATi2AAGAAAACcAAA      1  Y  Y
    which contains duplicates again.
    So we need a combination of DISTINCT and DENSE_RANK here.
    By the way, please mark the thread as 'answered' if you feel your question has been answered. This will save the time of others who search for open questions to answer.
    Regards,
    CSM

  • Parse SQL query and extract source tables and columns

    Hello,
    I have a set of SQL queries and I have to extract the source tables and columns from them.
    For example:
    Let's imagine that we have two tables
    CREATE TABLE T1 (col1 number, col2 number, col3 number)
    CREATE TABLE T2 (col1 number, col2 number, col3 number)
    We have the following query:
    SELECT
    T1.col1,
    T1.col2 + T1.col3 as field2
    FROM T1 INNER JOIN T2 ON T1.col2=T2.col2
    WHERE T2.col1 = 1
    So, as a result I would like to have:
    Order  Table  Column
    -----  -----  ------
    1      T1     col1
    2      T1     col2
    2      T1     col3
    Optionally, I would like to have a list of all dependency columns (columns used in the ON, WHERE and GROUP BY clauses):
    Table  Column
    -----  ------
    T1     col2
    T2     col1
    T2     col2
    I have tried different approaches but without any success. Any help is appreciated. Thank you in advance.
    Best regards,
    Beroetz

    In a recent DB version you can use the approach described in the thread "Re: sql injection question" for this.
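    If that thread is unavailable, a hedged alternative for recovering the table list (though not column-level lineage) is to let the database parse the statement with EXPLAIN PLAN and read the referenced objects back from PLAN_TABLE:
    EXPLAIN PLAN FOR
    SELECT T1.col1, T1.col2 + T1.col3 AS field2
    FROM   T1 INNER JOIN T2 ON T1.col2 = T2.col2
    WHERE  T2.col1 = 1;
    SELECT DISTINCT object_owner, object_name
    FROM   plan_table
    WHERE  object_type LIKE 'TABLE%';
    Column-level dependencies still require a real SQL parser, since the plan does not preserve the select-list expressions.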

  • Dynamically passing the source table name to OWB mapping

    I am building a mapping wherein one of the source tables is a view. The view name varies with the time parameter I pass in. I am looking at ways to pass in the time parameter to the mapping procedure such that it first gets the view name from a table and uses that view as the source table to fetch data. Any directions?
    In normal PL/SQL coding, I can first get the view name and use it to build a dynamic query, which can then be executed.

    This is a common question. The best way to do this is to use a synonym.
    Create the synonym in the database and import it into OWB. Use the synonym in your mapping. Have your mapping accept a mapping input for the table you want the synonym to point to. Set up a pre-mapping process to re-create the synonym pointing at the table you want to use.
    Here is the procedure that I use. It defaults to a private synonym. Remember, the synonym will be created in the same schema that the mapping is deployed to.
    CREATE OR REPLACE PROCEDURE cawdata.create_synonym_prc (
      p_synonym_name      IN VARCHAR2,
      p_object_name       IN VARCHAR2,
      p_is_public_synonym IN BOOLEAN DEFAULT FALSE
    ) IS
    BEGIN
      IF p_is_public_synonym THEN
        EXECUTE IMMEDIATE 'create or replace public synonym ' || p_synonym_name || ' for ' || p_object_name;
      ELSE
        EXECUTE IMMEDIATE 'create or replace synonym ' || p_synonym_name || ' for ' || p_object_name;
      END IF;
    EXCEPTION
      WHEN OTHERS THEN
        RAISE;  -- re-raise as-is; RAISE_APPLICATION_ERROR only accepts error numbers -20000..-20999
    END;
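    A hypothetical pre-mapping call (the view name is invented for illustration):
    BEGIN
      cawdata.create_synonym_prc('SRC_SALES', 'SALES_V_201101');
    END;
    /
    The mapping always reads from SRC_SALES; each run just repoints the synonym before the main mapping executes.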

  • Row should get added in the target as soon as the data in the source table changes

    I have done the following:
    * The source table is part of the CDC process.
    * I have started the journal on the source table.
    Whenever I change the data in the source, I expect the target to get a new row added with a new sequence number as the surrogate key. I find that even though the source data changes, the new row does not get added.
    Could someone point out to me why the new row is not getting added?

    Step 1 - Sequence Number
    Create a sequence in your RDBMS, namely:
    CREATE SEQUENCE sequence_name
    MINVALUE 1
    MAXVALUE 99999
    START WITH 1
    INCREMENT BY 1;
    You can use the above sequence in your mapping as schema_name.sequence_name.NEXTVAL, executed on the target. Then select only the Insert option for the sequence column.
    Step 2 - Journalized Data Only
    Click on the source datastore, and in the Properties panel you will find an option called "Journalized Data Only". Now whenever this interface runs, only the journalized data gets transferred.
    The other way to see the journalized data on the source side is to right-click the source datastore under the journalized model, then go to "Changed Data Capture" and "Journal Data...". Now you can see only the journalized data.
    As CDC creates a trigger at the source, any change in the source is captured and carried to the target whenever you run the interface with the Journalized Data Only option.
    I hope this is clear now.
    Thanks
