Oracle Data Integrator, reverse the tables

Hi all,
I am new to ODI. Is there any possibility of passing the table name as a parameter to reverse it? I am facing issues reversing tables: since there is a large number of views and tables in the schema, the import takes a long time. I even tried using selective reverse.
Kindly let me know.
Thanks in advance,
nithya

Hi All,
I need to compare two tables, say A in the source and A1 in the target, based on id: if the id exists in A1, then I need to load the contents of table C in the source into table C1 in the target; if not, I need to throw an error message. This is the scenario.
Please let me know how to do this in ODI. I am completely new to ODI. The problem is that I don't know how to compare the ids before using the IKM Incremental Update knowledge module.
Thanks in advance
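
One way to express the id check in plain SQL, as a hedged sketch (the table and column names follow the post; a single id column, and a connection that can see both source and target, e.g. via a database link, are assumptions):

     -- Load only the C rows whose id already exists in target A1;
     -- the remaining rows can be reported as errors instead.
     INSERT INTO c1
     SELECT c.*
       FROM c
      WHERE EXISTS (SELECT 1 FROM a1 WHERE a1.id = c.id);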

Similar Messages

  • How to generate test data for all the tables in Oracle

    I am planning to use PL/SQL to generate test data for all the tables in a schema. The schema name is given as an input parameter, along with the minimum number of records for master tables and the minimum number of records for child tables. The data should be consistent in the columns that are used for constraints, i.e. reusing the same column values.
    I am planning to implement something like
    execute sp_schema_data_gen (schemaname, minrecinmstrtbl, minrecsforchildtable);
    schemaname = owner,
    minrecinmstrtbl = minimum records to insert into each parent table,
    minrecsforchildtable = minimum records to insert into each child table of each master table;
    reading all_tables where owner = schemaname,
    plus all_tab_columns and all_constraints where owner = schemaname,
    and using the dbms_random package.
    Does anyone have a better idea for doing this? Is this functionality already available in the Oracle DB?
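    A minimal skeleton of the driving loop such a procedure might use (only the procedure and parameter names from the post are taken as given; the body is left as a stub):

        -- Hypothetical skeleton: walk the schema's tables and generate rows.
        CREATE OR REPLACE PROCEDURE sp_schema_data_gen (
            schemaname            IN VARCHAR2,
            minrecinmstrtbl       IN PLS_INTEGER,
            minrecsforchildtable  IN PLS_INTEGER
        )
        IS
        BEGIN
            FOR t IN (SELECT table_name
                        FROM all_tables
                       WHERE owner = UPPER(schemaname))
            LOOP
                -- For each table: read all_tab_columns and all_constraints,
                -- then build and EXECUTE IMMEDIATE an INSERT whose values
                -- come from the dbms_random package.
                NULL;  -- generation logic goes here
            END LOOP;
        END sp_schema_data_gen;
        /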

    Ah, damorgan, data, test data, metadata and table-driven processes. Love the stuff!
    There are two approaches you can take with this. I'll mention both and then ask which
    one you think you would find most useful for your requirements.
    One approach I would call the generic bottom-up approach which is the one I think you
    are referring to.
    This system is a generic test data generator. It isn't designed to generate data for any
    particular existing table or application but is the general case solution.
    Building on damorgan's advice, define the basic hierarchy: table collection, tables, data; then start at the data level.
    1. Identify/document the data types that you need to support. Start small (NUMBER, VARCHAR2, DATE) and add as you go along
    2. For each data type identify the functionality and attributes that you need. For instance for VARCHAR2
    a. min length - the minimum length to generate
    b. max length - the maximum length
    c. prefix - a prefix for the generated data; e.g. for an address field you might want an 'add1' prefix
    d. suffix - a suffix for the generated data; see prefix
    e. whether to generate NULLs
    3. For NUMBER you will probably want at least precision and scale but might want minimum and maximum values or even min/max precision,
    min/max scale.
    4. Store the attribute combinations in Oracle tables.
    5. Build functionality for each data type that can create the range and type of data that you need (see the sketch after this list). These functions should take parameters that can be used to control the attributes and the amount of data generated.
    6. At the table level you will need business rules that control how the different columns of the table relate to each other. For example, for ADDRESS information your business rule might be that ADDRESS1, CITY, STATE, ZIP are required and ADDRESS2 is optional.
    7. Add table-level processes, driven by the saved metadata, that can generate data at the record level by leveraging the data type functionality you have built previously.
    8. Then add the metadata, business rules and functionality to control the TABLE-TO-TABLE relationships; that is, the data model. You need the same DEPTNO values in the SCOTT.EMP table that exist in the SCOTT.DEPT table.
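    As a rough illustration of step 5, a VARCHAR2 generator driven by the attributes from step 2 might look like the sketch below (the function name and parameter list are hypothetical, not from this thread):

        -- Hypothetical VARCHAR2 generator; the parameters correspond to
        -- attributes 2a-2e above (min/max length, prefix, NULL frequency).
        CREATE OR REPLACE FUNCTION gen_varchar2 (
            p_min_len  IN PLS_INTEGER,
            p_max_len  IN PLS_INTEGER,
            p_prefix   IN VARCHAR2 DEFAULT NULL,
            p_null_pct IN NUMBER   DEFAULT 0   -- 0..100
        ) RETURN VARCHAR2
        IS
            l_len PLS_INTEGER;
        BEGIN
            -- Attribute 2e: sometimes generate a NULL.
            IF dbms_random.value(0, 100) < p_null_pct THEN
                RETURN NULL;
            END IF;
            -- Pick a random length between the min and max attributes.
            l_len := TRUNC(dbms_random.value(p_min_len, p_max_len + 1));
            -- Attributes 2c/2d: prepend the prefix, pad with random
            -- alphabetic characters, and trim back to the maximum length.
            RETURN SUBSTR(p_prefix || dbms_random.string('a', l_len), 1, p_max_len);
        END gen_varchar2;
        /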
    The second approach I have used more often. I would call it the top-down approach, and I use
    it when test data is needed for an existing system. The main use case here is to avoid
    having to copy production data to QA, TEST or DEV environments.
    QA people want to test with data that they are familiar with: names, companies, code values.
    I've found they aren't often fond of random character strings for names of things.
    The second approach I use for mature systems where there is already plenty of data to choose from.
    It involves selecting subsets of data from each of the existing tables and saving that data in a
    set of test tables. This data can then be used for regression testing and for automated unit testing of
    existing functionality and functionality that is being developed.
    QA can use data they are already familiar with and can test the application (GUI?) interface on that
    data to see if they get the expected changes.
    For each table to be tested (e.g. DEPT) I create two test system tables: a BEFORE table and an EXPECTED table.
    1. DEPT_TEST_BEFORE
         This table has all DEPT table columns plus a TESTCASE column.
         It holds DEPT-image rows for each test case that show the row as it should look BEFORE the
         test for that test case is performed.
         CREATE TABLE DEPT_TEST_BEFORE (
             TESTCASE NUMBER,
             DEPTNO NUMBER(2),
             DNAME VARCHAR2(14 BYTE),
             LOC VARCHAR2(13 BYTE)
         );
    2. DEPT_TEST_EXPECTED
         This table also has all DEPT table columns plus a TESTCASE column.
         It holds DEPT-image rows for each test case that show the row as it should look AFTER the
         test for that test case is performed.
    Each of these tables is a mirror image of the actual application table with one new column
    added, TESTCASE, that identifies the test case a row belongs to.
    To create test case #3 identify or create the DEPT records you want to use for test case #3.
    Insert these records into DEPT_TEST_BEFORE:
         INSERT INTO DEPT_TEST_BEFORE
         SELECT 3, D.* FROM DEPT D WHERE DEPTNO = 20;
    Insert records for test case #3 into DEPT_TEST_EXPECTED that show the rows as they should
    look after test #3 is run. For example, if test #3 creates one new record, add all the
    records from the BEFORE data set plus one new row for the record the test should create.
    When you want to run test case #3 the process is basically (ignore for this illustration that
    there is a foreign key between DEPT and EMP):
    1. Delete the records from SCOTT.DEPT that correspond to the test case #3 DEPT records.
              DELETE FROM DEPT
              WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3);
    2. Insert the test data set records for SCOTT.DEPT for test case #3.
              INSERT INTO DEPT
              SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3;
    3. Perform the test.
    4. Compare the actual results with the expected results.
         This is done by a function that compares the records in DEPT with the records
         in DEPT_TEST_EXPECTED for test #3 (see the sketch after this list).
         I usually store these results in yet another table or just report them out.
    5. Report out the differences.
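    A minimal sketch of that comparison for test #3, using a symmetric difference (it assumes both tables share the DEPT column list; the diff labels are illustrative):

        -- Rows expected but missing, plus rows present but unexpected.
        SELECT 'MISSING FROM ACTUAL' AS diff, DEPTNO, DNAME, LOC
          FROM (SELECT DEPTNO, DNAME, LOC
                  FROM DEPT_TEST_EXPECTED WHERE TESTCASE = 3
                MINUS
                SELECT DEPTNO, DNAME, LOC FROM DEPT)
        UNION ALL
        SELECT 'UNEXPECTED IN ACTUAL' AS diff, DEPTNO, DNAME, LOC
          FROM (SELECT DEPTNO, DNAME, LOC FROM DEPT
                MINUS
                SELECT DEPTNO, DNAME, LOC
                  FROM DEPT_TEST_EXPECTED WHERE TESTCASE = 3);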
    This second approach uses data the users (QA) are already familiar with, is scalable, and
    makes it easy to add new data that meets business requirements.
    It is also easy to automatically generate the necessary tables and test setup/breakdown
    using a table-driven metadata approach. Adding a new test table is as easy as calling
    a stored procedure; the procedure can generate the DDL or create the actual tables needed
    for the BEFORE and AFTER snapshots.
    The main disadvantage is that existing data will almost never cover the corner cases.
    But you can add data for these. By corner cases I mean data that defines the limits
    for a data type: a VARCHAR2(30) name field should have at least one test record that
    has a name that is 30 characters long.
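    For instance, a corner-case row for the DEPT test tables above might be (the test case number and values are illustrative):

        -- DNAME padded to the full VARCHAR2(14) limit of the column.
        INSERT INTO DEPT_TEST_BEFORE (TESTCASE, DEPTNO, DNAME, LOC)
        VALUES (4, 99, RPAD('X', 14, 'X'), 'CORNER');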
    Which of these approaches makes the most sense for you?

  • I read that Oracle Data Integrator provides more than 100 KMs out-of-the-box

    I read that
    Oracle Data Integrator provides more than 100 KMs out-of-the-box.
    Does anybody have any idea where I can find, view or use them?

    I got it. It's under <Oraclehome>/oracledi/impexp

  • Need help on how to create a simple mapping using ORACLE DATA INTEGRATOR

    Hi guys,
    I am new to ODI. Please share the steps for developing a simple mapping in ODI.

    Hi,
    I am a newbie to Oracle Data Integrator as well. You should have a look here first: http://www.business-intelligence-quotient.com/?p=379
    Try to play around with ODI and then come back if you have specific questions. You would be better off moving to the dedicated ODI forum: Data Integrator
    Good Luck,
    Daan Bakboord
    http://obibb.wordpress.com

  • Oracle Data Integrator 11.1.1.5 Work Schema - List of Privileges

    Hi All,
    Oracle Data Integrator 11.1.1.5.
    Extracting data from Oracle DB for Oracle EBS 12.1.3.
    Customer created a read-only schema (XXAPPS) to extract the data from EBS.
    For the ODI work schema we have now created one schema, 'XBOL_ODI_TEMP', on the source DB. We are now looking for the appropriate privileges that need to be granted to XXAPPS and 'XBOL_ODI_TEMP' so that we won't face any permission-related error messages when we run an ODI scenario.
    We are now facing the error message: ODI-1227: Task SrcSet0 (Loading) fails on the source ORACLE connection VTB_ORACLE_EBS_1213.
    Caused By: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist.
    Similar privileges can be granted to the work schema on the target.
    Venkat

    I think it would be fine with only one schema (user) created at the source system which has read access to the tables of the EBS DB. Now, to resolve this error, assuming XXAPPS is the user being used:
    In the topology --> data server (for EBS) --> physical schema, the EBS schema name could be selected for Schema and XXAPPS as the work schema (for all ODI work-related objects, e.g. CDC).
    Also, in the data server the user XXAPPS needs to be used, which has read access to the EBS tables.
    Now every time ODI generates a query it will access a table, let's say DUMMY, as <EBS Schema>.DUMMY; thus the reference is made.
    Alternatively, you can create synonyms for the EBS tables in the XXAPPS schema.
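    For example, run as XXAPPS (the table name is illustrative; GL_JE_HEADERS is just one EBS table owned by GL):

        -- A local synonym so that unqualified references from XXAPPS
        -- resolve to the EBS-owned table.
        CREATE SYNONYM GL_JE_HEADERS FOR GL.GL_JE_HEADERS;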

  • "Oracle Data Integrator" with "Total Recall"

    Hi all,
    We are planning to use Oracle Data Integrator 11g for performing ELT in Oracle 11g database. We are also planning to enable the "total recall" (flashback) technology and house all our tables on it.
    Question I have in my mind right now is, will ODI and Total Recall work well together?
    Background
    Say we have an interface defined with the target data store on a tablespace with flashback enabled. Say there are 100 rows in the source, of which 10 rows violate a check constraint. The "bad" data, violating the constraint, will be moved to the E$ table while the remaining 90 rows are loaded into the target.
    Questions
    1) If our business rule dictates zero tolerance for errors and a ROLLBACK is issued, what will happen to the data in the E$ table?
    2) Say we have committed the 90 rows and want to use a flashback transaction query to undo the changes; how will it affect the E$ table?
    3) Will the rows be deleted from the E$ table also along with rolling back of the changes in the target?
    4) If the errors in E$ are recycled and this interface is restarted after the rollback is performed, will the I$ table contain 110 rows i.e. source data + data from E$?
    5) How does ODI handle recycling / reprocessing of the violations in E$ table?
    Please advise.
    Thank you.
    CC

    1.) The data in E$ will remain there
    2.) The data in E$ will remain there
    3.) The data in E$ will remain there
    4.) 90 rows. The recycled rows will still error out.
    5.) To me the recycling feature is pretty lame. You need to fix the errors in the E$ table, and then the recycle step will load the data.
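    For what it's worth, "fixing the errors" usually means correcting the offending values directly in the E$ table before the next run. A purely illustrative sketch (the E$ table name, column and constraint name are all hypothetical):

        -- Correct the column that violated the check constraint so the
        -- IKM's recycle phase can pick these rows up on the next run.
        UPDATE E$_TARGET_TABLE
           SET qty = 0
         WHERE cons_name = 'CK_QTY_POSITIVE';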

  • Oracle Data Integrator - Real Time Integration

    Hi,
    I want to know whether there is any possibility of integrating data in real time using Oracle Data Integrator.
    If yes, does it affect the OLTP system's performance? (Could it read from DB logs, etc.?)
    Thanks..

    Using ODI with Logminer-based CDC will affect performance on the source system more than using Oracle Goldengate, let me explain why:
    When using ODI's Logminer-based journalisation, the Logminer functionality moves the primary key of each affected row into the journal table. When you are ready to move the changed data, running an interface reads the journal view, which joins the journal table (primary keys) to the source data (and deals with deleted rows), optimising out duplicate rows in order to bring across the then-current state of the data, which can then be loaded into the target system; on completion, the moved rows are removed from the journal table. The data appears in the journal table as soon as Logminer puts it there, which may be a lag of up to two minutes with the asynchronous setting, whereas synchronous Logminer applies system triggers to the table, with the consequent overhead.
    With Oracle Goldengate, the committed transactions are read from the log as soon as the log writer puts the commit into the log. All the data is picked up from the log at that point. It is then written to Oracle Goldengate's trail-file system, which can be propagated to multiple other systems, potentially with sub-second latency and minimal impact on the source system thanks to the efficient reading and writing mechanisms. One other consequence of using Oracle Goldengate is that you get every change of the data, not the optimised then-current state of the data at the time it is moved.
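    To illustrate the journal-view join described above in schematic form (the table and column names here are made up; ODI's real J$/JV$ objects carry additional columns such as the subscriber and a flag distinguishing deletes):

        -- Schematic: join journalised primary keys back to the source
        -- rows to get the then-current state of each changed row.
        SELECT j.jrn_flag, s.*
          FROM j$orders j
          LEFT JOIN orders s ON s.order_id = j.order_id;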
    Hope this explanation helps.

  • Unable to download Oracle Data Integrator-Version 11.1.1.6(Important)

    Unable to download Oracle Data Integrator version 11.1.1.6. I hope this can be resolved ASAP.

    966234 wrote:
    Unable to download Oracle Data Integrator with version 11.1.1.6. Hope this could be resolved ASAP.
    What is the file you are trying to download? Is it for Windows or Linux or All Platforms?
    Thanks,
    Hussein

  • Problem in Scheduling a Package/Interface in Oracle data Integrator.

    Hi all,
    I have a problem with scheduling in ODI.
    I have followed these steps:
    1) Launched a scheduler agent from the command line using the command
    agentscheduler "-port=20300" "-v=5"
    2) Created a physical agent and a logical agent on this port.
    3) Created a scenario for the package and scheduled it.
    But the schedule is not running.
    Please provide me with a solution. Is any step missing, or am I going wrong anywhere?
    Message was edited by:
    user567803

    Hi,
    I have read this thread, and it seems the solution you mentioned may work for me as well. But whenever I try to launch the agent scheduler I get the following errors:
    OracleDI: Starting Scheduler Agent ...
    Starting Oracle Data Integrator Agent...
    Version : 10.1.3.2.0 - 03/01/2007
    com.sunopsis.tools.core.exception.g: java.sql.SQLException: socket creation error
    at com.sunopsis.dwg.cmd.n.a(n.java)
    at com.sunopsis.c.f.run(f.java)
    at com.sunopsis.dwg.cmd.i.y(i.java)
    at com.sunopsis.dwg.cmd.i.run(i.java)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.sql.SQLException: socket creation error
    at org.hsqldb.jdbc.jdbcUtil.sqlException(jdbcUtil.java:67)
    at org.hsqldb.jdbc.jdbcConnection.<init>(jdbcConnection.java:2451)
    at org.hsqldb.jdbcDriver.getConnection(jdbcDriver.java:188)
    at org.hsqldb.jdbcDriver.connect(jdbcDriver.java:166)
    at com.sunopsis.sql.SnpsConnection.u(SnpsConnection.java)
    at com.sunopsis.sql.SnpsConnection.c(SnpsConnection.java)
    at com.sunopsis.sql.h.run(h.java)
    Caused by:
    java.sql.SQLException: socket creation error
    at org.hsqldb.jdbc.jdbcUtil.sqlException(jdbcUtil.java:67)
    at org.hsqldb.jdbc.jdbcConnection.<init>(jdbcConnection.java:2451)
    at org.hsqldb.jdbcDriver.getConnection(jdbcDriver.java:188)
    at org.hsqldb.jdbcDriver.connect(jdbcDriver.java:166)
    at com.sunopsis.sql.SnpsConnection.u(SnpsConnection.java)
    at com.sunopsis.sql.SnpsConnection.c(SnpsConnection.java)
    at com.sunopsis.sql.h.run(h.java)
    Thanks in advance
    HA

  • Generic Java for Oracle Data Integrator 10g (10.1.3.5.0)

    Hi,
    How can I install the Oracle Data Integrator client on my MacBook? The generic version should be the right one,
    but there are only versions for Windows, Linux, Solaris, HP-UX and AIX, and no generic Java version.
    Version 10.1.3.4.0 has a generic version.
    How can I fix this issue? I tried the Linux version, but it doesn't install.
    Thanks.

    Hi,
    Install the 10.1.3.4 version and do a "manual install" after that; that means copying the new oracledi directory over the old oracledi directory.
    There are detailed instructions in the install manual...
    Have you already tried this?
    Cezar Santos
    [www.odiexperts.com]

  • Select data from all the table names in the view

    Hi,
    "I have some tables with names T_SRI_MMYYYY in my database.
    I created a view ,Say "Summary_View" for all the table names
    with "T_SRI_%".
    Now i want to select data from all the tables in the view
    Summary_View.
    How can i do that ? Please throw some light on the same?
    Thanks and Regards
    Srinivas Chebolu

    Srinivas,
    There are a couple of things that I am unsure of here.
    Firstly, does your view definition say something like ...
    Select ...
    From "T_SRI_%"
    If so, it is not valid. Oracle won't allow this.
    The second thing is that your naming convention for the
    tables suggests to me that each table is the same except
    that they store data for different time periods. This would be
    a very bad design methodology. You should have a single
    table with an extra column to state what period is referred to,
    although you can partition it into segments for each period if
    appropriate.
    Apologies if I am misinterpreting your question, but perhaps
    you could post your view definition and table definitions
    here.
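    If the tables must stay as they are, one workaround is to generate a UNION ALL view over the matching tables. A sketch, assuming they all share the same column list:

        -- Build and execute the DDL for Summary_View from the current
        -- set of T_SRI_% tables owned by the connected user.
        DECLARE
            l_sql VARCHAR2(32767) := 'CREATE OR REPLACE VIEW Summary_View AS ';
            l_sep VARCHAR2(16);
        BEGIN
            FOR t IN (SELECT table_name
                        FROM user_tables
                       WHERE table_name LIKE 'T\_SRI\_%' ESCAPE '\')
            LOOP
                l_sql := l_sql || l_sep || 'SELECT * FROM ' || t.table_name;
                l_sep := ' UNION ALL ';
            END LOOP;
            EXECUTE IMMEDIATE l_sql;
        END;
        /

    Note that the view has to be regenerated whenever a new T_SRI_ table is created.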

  • How to set the Oracle Data Integrator Timeout parameter value to unlimited

    Hi
    Can anyone help me set the Oracle Data Integrator Timeout (ODI menu > User Parameters > Oracle Data Integrator Timeout) parameter value to unlimited?
    By default it is 30 and I want to change it to unlimited.
    I connect to a Linux box from Windows using Citrix, open ODI and start the scenario execution (my scenario executes in a loop and runs continuously). After execution starts I log out from Citrix (I am not closing ODI), and after some time, about 50 minutes, my ODI execution is stopped due to the timeout.
    My ODI execution should continue in my absence, indefinitely.
    Please help me; it is urgent.
    Regards,
    Phanikanth

    Thanks Bhabani.
    Does it work as unlimited on the Linux box? I had set the operator display limit (0 = no limit) to 1000000 and I was unable to see the execution session details under Operator > Session List;
    later I changed it to 10000, clicked OK, refreshed, and it is working fine.
    If I use the value mentioned below, will it have any impact on ODI?
    On Windows it is working, but I have doubts about the Linux version.
    Please help me.
    Regards,
    Phanikanth

  • Regarding Oracle Data Integrator Companion 11gR1 (11.1.1.7.0)

    Hi Experts,
    I would like to know about the
    Oracle Data Integrator Companion 11gR1 (11.1.1.7.0).
    It includes the Oracle Data Integrator Application Adapters for Hadoop, Oracle Applications, and SAP ERP and BW (Hadoop is newly added compared with the older versions).
    I found it in the Oracle downloads link, and so I want to know the points below:
    1) What is the use of the Oracle Data Integrator Companion 11gR1 (11.1.1.7.0)?
    2) If it is something like an additional adapter tool that needs to be installed, where and how do we install it?
    I would like to get the complete details and usage of this particular Oracle Data Integrator Companion 11gR1 (11.1.1.7.0).
    Request you to kindly guide me.
    Awaiting your inputs.
    Many Thanks,
    Pavan Kumar

    Hi Pavan,
    Oracle Data Integrator Companion 11gR1 (11.1.1.7.0) contains additional adapters for use with Hadoop, SAP etc. which do not come with the normal installation. However, you will need a license for industry use of these adapters. This is especially good for a developer. For server installs, however, you must use one of the normal installers, which additionally come with Java EE agent capabilities that are not part of the Companion.
    Speaking of the installation procedure, the Companion works with a simple extraction of the zip file and does not come with an installer.
    Additionally, it comes with an ODI demo environment with a local HSQL repository, which can be used for training and learning purposes.
    Functionally, for a developer there is no difference between a normal and a Companion install.
    Regards,
    Rickson Lewis

  • Oracle Data Integration

    Hi all,
    With Oracle promoting ODI as a tool for integration with Hyperion tools as well, has anyone used this tool for integration? If so, apart from ODI, what adapters do I need, and from where do I install them, to make it work with Hyperion Essbase System 9?
    cheers

    Hi,
    You may want to read the below thread:
    Re: Oracle Data integration
    Hope this helps.
    Seb

  • ODI-1241: Oracle Data Integrator tool execution fails.

    Hi
    I'm getting the following error while running the OdiOSCommand tool. I'm running the dos2unix command to convert text files from DOS to Unix format.
    The application tier is on a different host from the ODI setup. I am getting the following error; please help resolve this issue.
    Error : ODI-1226: Step OdiOSCommand fails after 1 attempt(s).
    ODI-1241: Oracle Data Integrator tool execution fails.
    Caused By: com.sunopsis.dwg.function.SnpsFunctionBaseException: ODI-30038: OS command returned 1.

    The issue was with the value set for the OUTPUT_DIR variable. It was pointing to the wrong location.
    After setting it correctly, the package completed successfully.
    Thanks for all your replies.
    To answer your question: we are finding junk data and need to run the command to remove it from the input files, which come from a different source.
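    For reference, such a step might be set up roughly as below (an assumption on my part: the -OUT_FILE/-ERR_FILE parameters are from my reading of the ODI tools reference, and the file names are hypothetical; #OUTPUT_DIR is the variable that was mis-set):

        OdiOSCommand "-OUT_FILE=#OUTPUT_DIR/dos2unix.out" "-ERR_FILE=#OUTPUT_DIR/dos2unix.err"
        dos2unix #OUTPUT_DIR/input_file.txt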
    Edited by: user761125 on Jun 3, 2012 11:38 PM
