Re: getting the source tables into models in designer

Hi all
I need help extracting the source tables into ODI Designer.
My source: Oracle
Question:
I was given the source schema information, and with it I created the logical and physical schemas in Topology Manager.
I am now trying to create a model to reverse-engineer the source tables into ODI.
Since not all the tables are in the same schema (some tables come from different users, and I don't have the details of those users), I am unable to see those tables in the Selective Reverse tab.
I have requested SELECT privileges on those tables for the schema I am using.
Once I have the SELECT privileges, will I be able to see those tables in the Selective Reverse tab?
Could someone guide me through the steps for this?
Thanks

917704 wrote:
Hi Alastair,
Firstly, thank you for your reply.
My source is Oracle ERP. I cannot create the physical/logical schemas for that user because it is the head user in Oracle ERP and I don't have access to it.

Hi, I've done change data capture from E-Business Suite using ODI. What we did was this:
Get a 'read-only' database account set up in the E-Business Suite database; this is your connecting user and your work schema (for the CDC objects).
Grant SELECT ANY TABLE (or be more specific on the objects you need to read data from, if you wish) to that account, then connect as your read-only user but map the physical schemas as you wish.
Back to your original question: a model can only have one logical schema, which in turn maps to one physical schema, so I think you're stuck if you need to read across more than one schema on the source system.
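For reference, the grant setup described above might look something like this (a rough sketch; the user name, password and object names are placeholders, not from this thread):

-- create the read-only connecting / work-schema user (placeholder names)
CREATE USER odi_src_reader IDENTIFIED BY some_password;
GRANT CREATE SESSION TO odi_src_reader;
-- either the broad grant mentioned above...
GRANT SELECT ANY TABLE TO odi_src_reader;
-- ...or be specific, granting object by object from each owning schema:
GRANT SELECT ON other_owner.some_source_table TO odi_src_reader;

With those grants in place, point each physical schema's data schema at the owning schema, as the reply suggests, and the tables should then appear in the Selective Reverse tab.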

Similar Messages

  • Best approach to get the source tables into Target

    Hi
    I am new to GoldenGate and would like to know the best approach for getting the source tables replicated into the target (Oracle to Oracle) before performing the initial load, without using exp/expdp. Is there any native GoldenGate utility I can use during, or before, the initial load that will create the tables on the target before loading the data?
    Thanks

    I don't think so; for the initial-load replication the structures should already be available on the target machine. Once your machines are in sync, you can use the GoldenGate DDL setup to replicate new tables together with their data automatically.
    The better approach for you is to create the structures on the target machine using export/import. In the export, use CONTENT=metadata_only to copy the structure only, like:
    EXPDP <<user>>/<<password>>@connection_string schemas=abc directory=TEST_DIR dumpfile=gg.dmp content=metadata_only logfile=gg.log
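    The matching import on the target side (a sketch with placeholder connection details, not taken from the original reply) would be along these lines; no CONTENT parameter is needed because the dump file already contains metadata only:
    IMPDP <<user>>/<<password>>@target_connection_string schemas=abc directory=TEST_DIR dumpfile=gg.dmp logfile=gg_imp.log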

  • ORA-30926: unable to get a stable set of rows in the source tables

    hi,
    I am loading data from a source table to a target table in an interface, using LKM Incremental Update.
    In the merge-rows step, I get the error below:
    30926 : 99999 : java.sql.SQLException: ORA-30926: unable to get a stable set of rows in the source tables
    Please advise what should be done to resolve this.

    Below is the query in the merge step. When I run it directly in SQL, I get the same error:
    SQL Error: ORA-30926: unable to get a stable set of rows in the source tables
    30926. 00000 - "unable to get a stable set of rows in the source tables"
    *Cause:  A stable set of rows could not be got because of large dml activity or a non-deterministic where clause.
    *Action: Remove any non-deterministic where clauses and reissue the dml.
    merge into TFR.INVENTORIES T
    using TFR.I$_INVENTORIES S
    on (    T.ORGANIZATION_ID = S.ORGANIZATION_ID
        and T.ITEM_ID         = S.ITEM_ID )
    when matched then update set
         T.ITEM_TYPE              = S.ITEM_TYPE,
         T.SEGMENT1               = S.SEGMENT1,
         T.DESCRIPTION            = S.DESCRIPTION,
         T.LIST_PRICE_PER_UNIT    = S.LIST_PRICE_PER_UNIT,
         T.CREATED_BY             = S.CREATED_BY,
         T.DEFAULT_SO_SOURCE_TYPE = S.DEFAULT_SO_SOURCE_TYPE,
         T.MATERIAL_BILLABLE_FLAG = S.MATERIAL_BILLABLE_FLAG,
         T.LAST_UPDATED_BY        = S.LAST_UPDATED_BY,
         T.ID                     = TFR.INVENTORIES_SEQ.NEXTVAL,
         T.CREATION_DATE          = CURRENT_DATE,
         T.LAST_UPDATE_DATE       = CURRENT_DATE
    when not matched then insert (
         T.ORGANIZATION_ID,
         T.ITEM_ID,
         T.ITEM_TYPE,
         T.SEGMENT1,
         T.DESCRIPTION,
         T.LIST_PRICE_PER_UNIT,
         T.CREATED_BY,
         T.DEFAULT_SO_SOURCE_TYPE,
         T.MATERIAL_BILLABLE_FLAG,
         T.LAST_UPDATED_BY,
         T.ID,
         T.CREATION_DATE,
         T.LAST_UPDATE_DATE )
    values (
         S.ORGANIZATION_ID,
         S.ITEM_ID,
         S.ITEM_TYPE,
         S.SEGMENT1,
         S.DESCRIPTION,
         S.LIST_PRICE_PER_UNIT,
         S.CREATED_BY,
         S.DEFAULT_SO_SOURCE_TYPE,
         S.MATERIAL_BILLABLE_FLAG,
         S.LAST_UPDATED_BY,
         TFR.INVENTORIES_SEQ.NEXTVAL,
         CURRENT_DATE,
         CURRENT_DATE )
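    A quick way to confirm that duplicate keys in the staging table are the cause (a hedged diagnostic sketch based on the ON clause above; adjust owner and table names to your environment):
    SELECT organization_id, item_id, COUNT(*)
    FROM   tfr.i$_inventories
    GROUP  BY organization_id, item_id
    HAVING COUNT(*) > 1;
    Any rows returned mean two or more staging rows would update the same target row, which is exactly what ORA-30926 complains about; deduplicate the staging data (or extend the update key so the match is unique) and the merge should succeed.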

  • MERGE error : unable to get a stable set of rows in the source tables

    Hi,
    For an update, the following MERGE statement throws the error 'unable to get a stable set of rows in the source tables':
    MERGE INTO table2t INT
    USING (SELECT DISTINCT NULL bdl_inst_id,.......
           FROM table1 ftp
           WHERE ftp.gld_business_date = g_business_date
           AND ftp.underlying_instrument_id IS NOT NULL) ui
    ON (   ( INT.inst_id = ui.inst_id
             AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date )
        OR ( INT.ric = ui.ric
             AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date )
        OR ( INT.isin = ui.isin
             AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date )
        OR ( INT.sedol = ui.sedol
             AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date )
        OR ( INT.cusip = ui.cusip
             AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date ) )
    WHEN MATCHED THEN
    UPDATE
    SET INT.inst_id = ui.inst_id, INT.ric = ui.ric
    WHEN NOT MATCHED THEN
    INSERT (inst_key, ......)
    VALUES (inst_key, ......);
    The intent is: to determine whether a record already exists, first check for a match on the first key; if none is found, search on the second key, and so on.
    Now two records with different first keys (inst_id) can have the same ric (the second key). On a rerun, with the target table already populated, the code fails because it finds duplicate entries for the second key.
    Any suggestions on how to make this work?
    Thanks in advance.
    Annie

    Annie
    You've spotted the problem (that two records have the same RIC). MERGE doesn't allow that: each row in the table being updated may only be updated once.
    Is there a PK column (or columns) that we can rely on?
    What you can try is to outer join FTP to INT. Something like:
    MERGE INTO INT int1
    USING (
        select columns_you_need
        from (
            select ftp.columns -- whatever they are
                 , int2.columns
                 , row_number() over (partition by int2.pk_columns order by int2.somecolumns) as rn
            from   ftp
            left join int int2
              on (the condition you used in your query)
        )
        where rn = 1
    ) s
    ON (the join condition between int1 and s)
    WHEN MATCHED THEN UPDATE ...
    WHEN NOT MATCHED THEN INSERT ...
    So if you can restrict the driving query so that only the first of the possible updates is actually presented to the MERGE operation, you might be in with a chance :-)
    And of course this error has nothing to do with any triggers.
    HTH
    Regards Nigel

  • Row should get added in the target as soon as the data in the source table

    I have done the following:
    * The source table is part of the CDC process.
    * I have started the journal on the source table.
    Whenever I change the data in the source, I expect the target to get a new row added with a new sequence number as the surrogate key. I find that even though the source data changes, the new row does not get added.
    Could someone point out to me why the new row is not getting added?

    Step 1 - Sequence number
    Create a sequence in your RDBMS, for example:
    CREATE SEQUENCE sequence_name
    MINVALUE 1
    MAXVALUE 99999
    START WITH 1
    INCREMENT BY 1;
    You can use this sequence in your mapping as schema_name.sequence_name.NEXTVAL, executed on the target. Then select only the Insert option for the sequence column.
    Click on the source datastore, and in the Properties panel you will find an option called "Journalized Data Only". With that checked, whenever this interface runs, only the journalized data gets transferred.
    The other way to see the journalized data on the source side is to right-click the source datastore under the journalized model, then go to "Changed Data Capture" and then "Journal Data...". Now you can see only the journalized data.
    As CDC creates a trigger at the source, any change in the source gets captured and carried to the target whenever you run the above interface with the "Journalized Data Only" option.
    I hope I am clear and elaborate now.
    Thanks

  • MERGE Statement - unable to get a stable set of rows in the source tables

    OWB Client: 10.1.0.2.0
    OWB Repository: 10.1.0.1.0
    I am trying to create a MERGE in OWB.
    I get the following error:
    ORA-12801: error signaled in parallel query server P004 ORA-30926: unable to get a stable set of rows in the source tables
    I have read the other posts regarding this and can't seem to get a fix.
    The target table has a unique index on the field that I am matching on.
    The "incoming" data doesn't have a unique index, but I have checked and confirmed that it is unique on the appropriate key.
    The "incoming" data is created by a join and filter in the mapping and I'd rather avoid having to load this data into a new table and add a unique index on this.
    Any help would be great.
    Thanks
    Laura

    Hello Laura,
    The MERGE statement does not require any constraints on its target table or source table. The only requirement is that two input rows cannot update the same target row; in other words, every existing target row may be matched by at most one input row (otherwise the MERGE would be nondeterministic, since you wouldn't know which of the input rows you would end up with in the target).
    If a table takes ages to load (and is not really big), I suspect that your mapping is not running in set mode and that it performs a full table scan of the source data for each target row it produces.
    If you ARE running in set mode, you should run EXPLAIN PLAN to get a hint about what is wrong.
    Regarding your original mapping, try to set the target operator property:
    Match by constraint=no constraints
    and then check the Loading properties on each target column.
    Regards, Hans Henrik
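    As a quick sanity check on that "at most one input row per target row" requirement, something along these lines can be run against the data feeding the MERGE (a hedged sketch; the table/inline view and the match column are placeholders for your actual join result and key):
    SELECT match_col, COUNT(*)
    FROM   incoming_data   -- i.e. the join/filter result feeding the target
    GROUP  BY match_col
    HAVING COUNT(*) > 1;
    If this returns any rows, the incoming set is not unique on the match key after all, and the MERGE (especially in parallel) can raise ORA-30926.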

  • Getting the Source File name Info into Target Message

    Hi all,
    I want to get the source file name into one of the fields of the target message.
    I followed Michal's blog: /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
    Requirement:
    1) I am able to get the target file name to be the same as the source file name when I check ASMA in the sender and receiver adapters, without any UDF. This part is OK.
    2) I added one extra field, "FileName", to the target structure and mapped it like:
                              Constant(" ")--UDF-----FileName
    I checked the ASMA option in both the sender and receiver adapters.
    With this I get the target file name the same as the source file name, plus the source file name in the target field "FileName".
    I don't want the target file name to be the same as the source file name; I want something like Out.xml as the target file name.
    If I deselect the ASMA option in the adapters, a "null" value shows up in the target field "FileName".
    Please provide a solution for this.
    Regards
    Bopanna

    Hi All,
    I am able to do this by checking the ASMA option in the sender adapter only.
    Regards
    Bopanna

  • How to get the source of a strange posted pic into my camera roll?

    I got a strange pic put into my camera roll. This pic was most likely put there by an app which has access to my photo gallery. All I want to know is how to find the source that put this image into my camera roll. I have a bunch of apps with photo-access permission, and I really don't want to disable all of them because of only one.
    P.S.: the apps with photo access on my device are ProCam, CameraArtFXFree, Photo Editor-, Instagram, Poster++, Photo Vault, Facebook, Tango, Viber, Y! Messenger, Ipadio, Line and WhatsApp, and the only apps open when this photo was pushed to my Camera Roll were Viber, Tango, Line and WhatsApp.
    Thanks in advance.

    >
    Nitesh Kumar wrote:
    > Hi,
    >
    > FM to get the program source code: RPY_PROGRAM_READ
    >
    > By using this FM you can get the program name(say report_name) and then you can use
    >
    > READ REPORT report_name INTO itab
    >
    > Thanks
    > Nitesh
    You don't need the last statement; the FM itself returns an itab with the code in it.

  • How will get the source code of all the tables in a given schema using SQL?

    Hi All,
    How can we get the source code of all the tables in a given schema using SQL?
    Thanks in Adv.
    Junu

    Try something like...
    set heading off
    set pagesize 0
    col meta_data for a96 word_wrapped
    set long 100000
    SELECT DBMS_METADATA.GET_DDL(object_type, object_name, owner) ||';' AS meta_data
    FROM dba_objects
    WHERE owner = '<SCHEMA NAME>'
      AND object_type not in (<list of stuff you do not want>);

  • ORA-30926: unable to get a stable set of rows in the source table

    When users try to open a form, they get the error below.
    com.retek.platform.exception.RetekUnknownSystemException:ORA-30926: unable to get a stable set of rows in the source tables
    Please advise.
    Edited by: user13382934 on Jul 9, 2011 1:32 PM

    Please try this
    create table UPDTE_DEFERRED_MAILING_RECORDS nologging as
    SELECT distinct a.CUST_ID,
           a.EMP_ID,
           a.PURCHASE_DATE,
           a.drank,
           c.CONTACT_CD,
           c.NEW_CUST_CD,
           a.DM_ROW_ID
    FROM  (SELECT a.ROWID AS DM_ROW_ID,
                  a.CUST_ID,
                  a.EMP_ID,
                  a.PURCHASE_DATE,
                  dense_rank() over(PARTITION BY a.CUST_ID, a.EMP_ID
                                    ORDER BY a.PURCHASE_DATE DESC, a.ROWID) DRANK
           FROM deferred_mailing a) a,
          customer c
    WHERE a.CUST_ID = c.CUST_ID
      AND a.EMP_ID = c.EMP_ID
      AND (a.PURCHASE_DATE <= c.PURCHASE_DATE OR c.PURCHASE_DATE IS NULL)
      AND a.drank = 1;
    The query you've posted is behaving as expected. The inner select returns one row and the outer returns two, because the conditions
    WHERE a.CUST_ID = c.CUST_ID
      AND a.EMP_ID = c.EMP_ID
      AND (a.PURCHASE_DATE <= c.PURCHASE_DATE OR c.PURCHASE_DATE IS NULL)
    see two rows in the CUSTOMER table.
    I've added the a.drank=1 clause to skip the duplicates from the inner table, and DISTINCT in the final result to remove duplicates from the overall query result.
    For example, if you have one more row in DEFERRED_MAILING like this:
    SQL> select * from DEFERRED_MAILING;
    CUST_ID  EMP_ID  PURCHASE_
    444      10      11-JAN-11
    444      10      11-JAN-11
    then the query without "a.drank=1" will return 4 rows from the outer query:
    CUST_ID  EMP_ID  PURCHASE_  DM_ROW_ID           DRANK  C  N
    444      10      11-JAN-11  AAATi2AAGAAAACcAAB  2      Y  Y
    444      10      11-JAN-11  AAATi2AAGAAAACcAAA  1      Y  Y
    444      10      11-JAN-11  AAATi2AAGAAAACcAAB  2      Y  Y
    444      10      11-JAN-11  AAATi2AAGAAAACcAAA  1      Y  Y
    It will return the rows below even if we use DISTINCT on the same query (i.e. without a.drank=1):
    CUST_ID  EMP_ID  PURCHASE_  DM_ROW_ID           DRANK  C  N
    444      10      11-JAN-11  AAATi2AAGAAAACcAAB  2      Y  Y
    444      10      11-JAN-11  AAATi2AAGAAAACcAAA  1      Y  Y
    which contains duplicates again.
    So we need a combination of DISTINCT and DENSE_RANK here.
    btw, Please mark the thread as 'answered', if you feel you got your question answered. This will save the time of others who search for open questions to answer.
    Regards,
    CSM

  • SLT - Splitting one source table into two tables in the destination

    Hi,
    I am wondering whether we can split the content of one source table into two different tables in the destination (HANA DB in my case) with SLT, based on codified mapping rules.
    We have the VBAK table in ERP, which holds the header information for various business objects (quote, sales order, invoice, outbound delivery, to name a few). I want this to be replicated into tables specific to each business object (like VBAK_QUOT, VBAK_SO, VBAK_INV, etc.) based on the document type column.
    There is one way to do it as far as I know: have multiple configurations and replicate to different schemas. But then we would be limited to 4 different configurations at the most.
    Any help here will be highly appreciated
    Regards,
    Sesh

    Please take a look at these links related to your query.
    http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers
    http://stackoverflow.com/questions/7037228/joining-two-tables-together-in-one-database

  • How do I create a target table with the same PK as the source table?

    I am trying to create a target table in a mapping that will end up with the same primary key as the source table.
    It is a simple map that uses a subset of the columns of the source table in the target table. I wanted to create and bind a new table by dragging the columns I want from the source to the initially blank target table operator, change the column names, and create a primary key to match the source table.
    I can't seem to create a constraint on the table in the mapping. I can create the constraint after the table is created and bound to the database object, but the PK doesn't carry back into the mapping.
    I need it in the mapping so I can use the UPDATE/INSERT operation with the 'All Constraints' implementation. The mapping won't let me validate the object without the PK on it in the map.
    Believe it or not folks, I am getting better at this.
    Thanks very much for the guidance.
    Gary

    Hi Gary
    You are close, you are really close... :-))
    You need to do exactly as you propose, plus one extra step. Build the map as you describe, binding the new table to the target. Then edit the table definition to add the primary key and any other constraints you need. After that comes the step you are missing.
    You need to do the following:
    1. Go back and re-edit the map
    2. Right click on the table
    3. From the pop up menu, select Reconcile Inbound
    4. Set any operators that you need for the UPDATE/INSERT
    5. Save the map
    6. Commit your changes
    The first three steps above make the map read in the indexes and constraints that you set on the table. Finally, you need to deploy the table and then deploy the map.
    Hope this helps
    Regards
    Michael

  • Dynamically passing the source table name to OWB mapping

    I am building a mapping in which one of the source tables is a view. The view name varies with the time parameter I pass in. I am looking for a way to pass the time parameter to the mapping procedure so that it first gets the view name from a table and then uses that view as the source table to fetch data. Any directions?
    In normal PL/SQL coding, I can first get the view name and use it to build a dynamic query, which can then be executed.

    This is a common question. The best way to do this is to use a synonym.
    Create the synonym in the database and import it into OWB. Use the synonym in your mapping. Have your mapping accept a mapping input parameter for the table you want the synonym to point to. Set up a pre-mapping process to re-create the synonym pointing at the table you want to use.
    Here is the procedure that I use. It defaults to a private synonym. Remember, the synonym will be created in the same schema that the mapping is deployed to.
    CREATE OR REPLACE PROCEDURE "CAWDATA"."CREATE_SYNONYM_PRC"
      ("P_SYNONYM_NAME"      IN VARCHAR2,
       "P_OBJECT_NAME"       IN VARCHAR2,
       "P_IS_PUBLIC_SYNONYM" IN BOOLEAN DEFAULT FALSE)
    IS
    BEGIN
      IF p_is_public_synonym = TRUE THEN
        EXECUTE IMMEDIATE 'create or replace public synonym ' || p_synonym_name || ' for ' || p_object_name;
      ELSE
        EXECUTE IMMEDIATE 'create or replace synonym ' || p_synonym_name || ' for ' || p_object_name;
      END IF;
    EXCEPTION
      WHEN OTHERS THEN
        -- note: raise_application_error needs an error number in -20000..-20999,
        -- so re-raising with SQLCODE would itself fail; use a fixed number instead
        raise_application_error(-20001, sqlerrm);
    END;
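    A hypothetical call from the pre-mapping process might look like this (synonym and view names are placeholders; in practice the view name would come from the mapping input parameter or your lookup table):
    BEGIN
      cawdata.create_synonym_prc('SRC_DATA_SYN', 'MY_TIME_SLICE_VIEW');
    END;
    /
    The mapping then selects from SRC_DATA_SYN, so the same deployed map can read whichever view the synonym currently points to.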

  • DBMS_CHANGE_NOTIFICATION and accessing data values part of the source table

    Hi there,
    I am using DBMS_CHANGE_NOTIFICATION following the steps mentioned in this link
    http://www.oracle-base.com/articles/10g/dbms_change_notification_10gR2.php
    Everything works fine and I am able to capture the insert event on the specific table I have set the notification listener on, and subsequently trigger the follow-on activities.
    One thing I am looking for is getting the actual values used in the insert event happening on the source table.
    e.g.) DBMS_CHANGE_NOTIFICATION (insert operation) applied on table TEST.
    This table TEST has 2 columns (col1 varchar2(200), col2 number(10)).
    Fire an insert operation: insert into TEST values ('testing notification', 201); commit;
    As part of the event I get Table (SCOTT.TEST) - Records Inserted. Rows=
    But I need to know the specific value committed to the database table as part of the insert, i.e. 201 in this case. How can I capture this as part of the event notification mechanism?
    The workaround would be to query the source table in the notification callback and retrieve the most recent record, but I wanted to know whether this can be achieved via the regular notification event process.
    Regards,

    Hi
    it is exactly the way you describe. Procedures are called with privs of the owner...
    Paul
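    Regarding the workaround mentioned in the question (reading the row back inside the callback), here is a rough sketch. It assumes the registration asked for ROWIDs (DBMS_CHANGE_NOTIFICATION.QOS_ROWIDS) and follows the descriptor type/field names used in the oracle-base article linked above; these are assumptions to verify against your database version, not something confirmed in this thread.
    CREATE OR REPLACE PROCEDURE test_notify_callback (ntfnds IN SYS.CHNF$_DESC) IS
      l_col1 scott.test.col1%TYPE;
      l_col2 scott.test.col2%TYPE;
    BEGIN
      -- walk the tables reported in this notification
      FOR t IN 1 .. ntfnds.numtables LOOP
        -- only interested in inserts (this sketch ignores the ALL_ROWS case,
        -- where no per-row ROWIDs are delivered)
        IF BITAND(ntfnds.table_desc_array(t).opflags,
                  DBMS_CHANGE_NOTIFICATION.INSERTOP) != 0 THEN
          FOR r IN 1 .. ntfnds.table_desc_array(t).numrows LOOP
            -- the event itself carries only the ROWID, never the column values,
            -- so read the inserted row back from the source table
            SELECT col1, col2
            INTO   l_col1, l_col2
            FROM   scott.test
            WHERE  ROWID = ntfnds.table_desc_array(t).row_desc_array(r).row_id;
            -- ... kick off the follow-on processing with l_col1 / l_col2 ...
          END LOOP;
        END IF;
      END LOOP;
    END;
    /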

  • How to programmatically get the source for a class provided the class name?

    Hello,
    As a quick background, I am providing some tools to potential users of an in-house framework. One is the ability to generate quick prototypes from our existing demo applications. Assume a user downloads our jars and uses them in their project (we are using Eclipse, but that detail should not greatly affect my question). Included in the jars is a demos package that contains ready-to-run classes that serve to exhibit certain functionality. Since many users may just need quick extensions of these demos, I am trying to provide a way for them to be able to create a new project that starts with a copy of the demo class.
    So, the user is provided a list of the existing demos (each one uses a single class). When the user makes their selection, with knowledge of our framework I can translate that into the demo class they need (returned as a string of the form package.subpack1.subpackn.DemoClassName). What I now want to do is use that complete class name to get the source (look up the file) for the corresponding class, and copy it into a new file in their project (the copying into the project can be done easily in Eclipse, so what I need help with is obtaining the source). Is there a simple way to get the source given a class name as described above? You may assume the source files are included in the jars for the framework.
    Thanks in advance.

    If there's a file named "package.subpack1.subpackn.DemoClassName.java" in a "demos" directory in the jar, then yes. You'd just use:
    InputStream code = getClass().getResourceAsStream("/demos/package.subpack1.subpackn.DemoClassName.java");
    Or, if those dots in the name actually separate directory names, i.e. you have a "package" directory under "demos", a "subpack1" directory under that, and so on, then:
    InputStream code = getClass().getResourceAsStream("/demos/package/subpack1/subpackn/DemoClassName.java");
