ODI Documentation

Hi.
Any suggestions on how to write documentation for ODI interfaces and packages? Is there a feature within ODI that generates documentation templates?
Regards

Hi there,
There are "print" options available in the ODI client tools. In Designer you can right-click on a project or model object and it will generate a PDF for you automatically. In Topology Manager you can print off the physical architecture, logical architecture, or contexts through the File | Print menu, which will also generate a PDF for you.
Thanks,
OracleSeeker

Similar Messages

  • Please point me to ODI documentation regarding production implementation?

    Hi
    First of all, thank you very much for your kind reply.
    I also need the ODI development documentation to understand the migration mechanism from development to production.
    I hope to hear from you.
    With Warm Regards,
    Ankush

    Hi ,
    Export the existing master and work repositories to a dump file using the Oracle database export feature.
    Import the master and work repositories into your new schemas.
    Create a new connection to the imported master repository when you connect to Topology Manager in ODI.
    Create a new connection to the imported work repository when you connect to Designer in ODI.
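    For example, with the classic export/import utilities the export and import steps might look like this (schema names, passwords, connect strings, and file names are placeholders; this is an untested sketch rather than the exact procedure from the ODI documentation):
    exp odim_dev/password@dev_db FILE=odim.dmp OWNER=ODIM_DEV
    exp odiw_dev/password@dev_db FILE=odiw.dmp OWNER=ODIW_DEV
    imp system/password@prod_db FILE=odim.dmp FROMUSER=ODIM_DEV TOUSER=ODIM_PROD
    imp system/password@prod_db FILE=odiw.dmp FROMUSER=ODIW_DEV TOUSER=ODIW_PROD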
    Please also see the thread below:
    http://kr.forums.oracle.com/forums/thread.jspa?threadID=929534
    and this link:
    http://iadviseblog.wordpress.com/2007/05/31/migrate-existing-odi-from-dev-to-test-to-production/
    I think we will follow the same process during the migration.
    I hope this helps.

  • What is the minimum file system access needed to run ODI 10.1.3.4.0 client?

    Hi ODI discussion folks,
    I have a couple of questions from an Oracle partner that I'm trying to find a definitive answer for if possible. The partner is setting up ODI 10.1.3.4.0 for a customer who insists that the absolute minimum amount of access to the file system is granted due to corporate security policies.
    I have checked the bundled ODI documentation but couldn't really find anything about file system permissions needed to run the ODI client. I was pointed towards the "Setting Up Security for an Integration Project — What to Consider" document but this does not mention a great deal about how much access to the file system is needed for the ODI client to function.
    What the partner is asking is the following:
    "1. What are the minimum file/folder permissions needed for the ODI client installation? I'm installing at xxxxx
    and their machines have to be locked down as much as possible.
    2. Say you have 3 users all wanting to run integrations etc and the Master and Work
    repositories have been set up. An admin installs the ODI client but doesn't
    create the connection to the Master repository. What are the minimum
    file/folder permissions required on the client machine to:
    a) create the connection to the repository
    b) run any subsequent integrations?"
    If anyone can advise on this then that would be much appreciated.
    Regards
    Craig Huggans
    Oracle Hyperion Support
    Message was edited by:
    user648991

    Hi Craig,
    How are you?
    Let me try to contribute a little....
    1) The minimum requirement is access to ODI's own installation directory; there is no reason to grant access to other directories unless the client needs to read files from somewhere else.
    2) Again, only its own install directory. The connection settings are recorded in the \bin folder of the install directory. After that, all information is stored in the repository, so there is nothing else to write on the client.
    Feel free to contact me by email or phone if you have any further questions. You can get my email address from my profile.
    Does that answer your questions?
    Cezar Santos

  • Connect to an OpenLDAP server using ODI

    Hello,
    I need to use OpenLDAP as a data server in ODI, but I'm not sure how to provide a valid connection string. My test LDAP server:
    - host: 172.18.0.106
    - port: 389
    - base dn: dc=example,dc=com
    And (optionally), root access (simple authentication):
    - Bind dn: cn=jimbob, dc=example, dc=com
    - password: pass
    So I'm trying to begin with anonymous access (read only); this is my connection string:
    jdbc:snps:ldap?ldap_url=ldap://localhost:389/&ldap_basedn='dc=example,dc=com'
    Unfortunately, it does not work:
    java.sql.SQLException: ODI-40528: A com.sunopsis.ldap.connection.SnpsLdapDriverPropertyException occurred saying: ODI-40546: A user has not been provided
    Why does it ask me for a user? The ODI documentation says that user/password are not mandatory.
    What are the correct connection strings for anonymous access and root (or other user) access in my case? Both methods work with Apache Directory Studio.

    You can set ldap_auth=none for anonymous access.
    A user/password is mandatory otherwise. Please refer to http://docs.oracle.com/cd/E21764_01/integrate.1111/e12644/appendix_ldap_driver.htm#CHDHCABH
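    For example, an anonymous URL for the server above might look like the following (an untested sketch: whether ldap_basedn needs to be quoted, and the exact property names, should be checked against the LDAP driver appendix linked above):
    jdbc:snps:ldap?ldap_url=ldap://172.18.0.106:389/&ldap_basedn=dc=example,dc=com&ldap_auth=none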
    Thanks,

  • Pluggable mapping in ODI?

    Hi guys!
    I was wondering whether ODI has functionality similar to OWB pluggable mappings? If yes, please tell me how ;)
    With regards,
    PsmakR

    Hi,
    You can find a great blog post about ODI User Functions @ http://blogs.oracle.com/dataintegration/2009/09/odi_user_functions_a_case_stud.html.
    It should give you enough details, the User Functions are also covered in the ODI documentation.
    Thanks,
    Julien

  • Would like to start learning ODI

    HI,
    I have learned Essbase, Planning and Budgeting and HFR.
    Now I would like to learn ODI. Please provide some good study material links (PDFs, etc.). Any guidelines on how to get started with ODI are really appreciated.
    Thanks,
    Mani

    Firstly, check out the Oracle ODI documentation - it's very good. The quick start guide is a great place to start.
    Oracle Data Integrator QuickStart
    Oracle Data Integrator Documentation
    There's also a 'Getting Started' guide (PDF):
    http://www.oracle.com/technetwork/middleware/data-integrator/overview/odigs-11g-168072.pdf
    Also check out the various blogs that some ODI gurus have written - there's a ton of really excellent stuff there - a quick Google search will find most of what you're looking for.  Some of note are:
    http://odiexperts.com/
    http://gurcanorhan.wordpress.com/
    https://blogs.oracle.com/dataintegration/
    More to life than this...
    Message was edited by: _Phil (Added getting started guide)

  • OdiWaitForLogData CDC problem

    Hi,
    I'm trying to create a package that includes the OdiWaitForLogData tool. The package works fine as long as the table name does not include an underscore '_'.
    Example:
    If the table name is 'SUBSECTORS' it works fine, but if it is 'SUB_SECTORS' the OdiWaitForLogData step fails with a 'table or view does not exist' error.
    Thanks in advance.

    This is normal, expected behavior.
    The ODI 10.1.3.5 documentation states the following:
    -TABLE_NAME=<tableName>
    Note: This option works ONLY for tables in a model journalized in simple mode.
    -CDC_SET_NAME=<cdcSetName>
    Note: This option works ONLY for tables in a model journalized in consistent mode.
    This means that the -TABLE_NAME parameter of OdiWaitForLogData is not expected to work when the datastores have been journalized in consistent mode.
    The current behavior is consistent with the ODI documentation.
    To resolve the case, use the -CDC_SET_NAME parameter instead of -TABLE_NAME.
    Remark:
    The reason table names without an underscore "_" appear to work in consistent journalization mode with the -TABLE_NAME parameter is that, in that case, the code does not need to access the SNP_SUBSCRIBERS table.
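    For illustration, a package step calling the tool in consistent mode might look like the call below. The logical schema and subscriber names are placeholders, -CDC_SET_NAME comes from the documentation excerpt above, and the other parameters should be verified against your version's tool reference:
    OdiWaitForLogData -LSCHEMA=MY_SRC_SCHEMA -SUBSCRIBER_NAME=SUNOPSIS -CDC_SET_NAME=<cdcSetName> -GLOBAL_ROWCOUNT=1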
    - Laura
    Edited by: marmouton on Sep 30, 2009 12:54 AM
    Edited by: marmouton on Sep 30, 2009 12:55 AM

  • ODI 10.1.0.35 OdiXmlConcat: how does this work?

    Hi all,
    I trying "get on" OdiXmlConact but it simply does not work as well.
    A just make four copies from this file (http://www.w3schools.com/xml/note.xml) and created an new package in my test project folder and set OdiXmlConcat operator like ODI Documentation for concat files into on new file but I received this error msg:
    com.sunopsis.tools.core.exception.SnpsRuntimeException: Error while writting output xml
         at com.sunopsis.dwg.tools.xml.XMLJoiner.writtingException(XMLJoiner.java)
         at com.sunopsis.dwg.tools.xml.XMLJoiner$JoinerXMLFilterOutput.endOfLastInputStream(XMLJoiner.java)
         at com.sunopsis.dwg.tools.xml.XMLFilter.process(XMLFilter.java)
         at com.sunopsis.dwg.tools.xml.XMLJoiner.join(XMLJoiner.java)
         at com.sunopsis.dwg.tools.XMLConcat.actionExecute(XMLConcat.java)
         at com.sunopsis.dwg.function.SnpsFunctionBase.execute(SnpsFunctionBase.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execIntegratedFunction(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlS.treatTaskTrt(SnpSessTaskSqlS.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: javax.xml.stream.XMLStreamException: Trying to write END_DOCUMENT when document has no root (ie. trying to output empty document).
         at com.ctc.wstx.sw.BaseStreamWriter.throwOutputError(BaseStreamWriter.java:1396)
         at com.ctc.wstx.sw.BaseStreamWriter.reportNwfStructure(BaseStreamWriter.java:1425)
         at com.ctc.wstx.sw.BaseStreamWriter.finishDocument(BaseStreamWriter.java:1586)
         at com.ctc.wstx.sw.BaseStreamWriter.writeEndDocument(BaseStreamWriter.java:508)
         at com.ctc.wstx.evt.WstxEventWriter.add(WstxEventWriter.java:97)
         at com.sunopsis.dwg.tools.xml.XMLWrapper.writerFooter(XMLWrapper.java)
         ... 17 more
    Caused by:
    javax.xml.stream.XMLStreamException: Trying to write END_DOCUMENT when document has no root (ie. trying to output empty document).
         at com.ctc.wstx.sw.BaseStreamWriter.throwOutputError(BaseStreamWriter.java:1396)
         at com.ctc.wstx.sw.BaseStreamWriter.reportNwfStructure(BaseStreamWriter.java:1425)
         at com.ctc.wstx.sw.BaseStreamWriter.finishDocument(BaseStreamWriter.java:1586)
         at com.ctc.wstx.sw.BaseStreamWriter.writeEndDocument(BaseStreamWriter.java:508)
         at com.ctc.wstx.evt.WstxEventWriter.add(WstxEventWriter.java:97)
         at com.sunopsis.dwg.tools.xml.XMLWrapper.writerFooter(XMLWrapper.java)
         at com.sunopsis.dwg.tools.xml.XMLJoiner$JoinerXMLFilterOutput.endOfLastInputStream(XMLJoiner.java)
         at com.sunopsis.dwg.tools.xml.XMLFilter.process(XMLFilter.java)
         at com.sunopsis.dwg.tools.xml.XMLJoiner.join(XMLJoiner.java)
         at com.sunopsis.dwg.tools.XMLConcat.actionExecute(XMLConcat.java)
         at com.sunopsis.dwg.function.SnpsFunctionBase.execute(SnpsFunctionBase.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execIntegratedFunction(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlS.treatTaskTrt(SnpSessTaskSqlS.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Thread.java:619)
    Can somebody help me?
    Thanks in advance
    Edited by: user12099666 on 30/09/2011 11:49

    All patches can be downloaded from "My Oracle Support" and instructions for applying the patch are included.
    Do you not mean 10.1.3.6?
    Patch 9377717: ORACLE DATA INTEGRATOR 10.1.3.6.0 PATCH SET
    There are newer patches than 10.1.3.6 also on My Oracle Support
    If it is 10.1.3.5.6 you want then it is
    Patch 9327111: ORACLE DATA INTEGRATOR 10.1.3.5.6 CUMULATIVE PATCH
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • How to get source table inside Template Mapping code template

    Hi guys,
    I have the following scenario: I have a table in an external database and want to map it to an Oracle table. This is done with template mapping, and I selected a Load code template on the execution unit that holds only the external table; this Load code template reads row by row from the source table and inserts into the flow table. I know that Oracle uses odiRef.getFrom() to construct the select statement against the external table. Because I need to do something custom, I need a list of the source tables inside the Load code template.
    Is this possible?
    P.S. I use OWB 11gR2.
    Regards,
    Cipi
    Edited by: Iancu Ciprian on Jan 11, 2011 10:58 AM

    Hi Suraj,
    Thanks for your answer!
    After posting the message I found other odiRef functions in the ODI documentation, and I'm now trying to see whether they work; I will let you know my results...
    I implemented a custom iterator that retrieves the data from an external source and passes it to INSERT commands executed against the flow table. For this iterator to work, I need the source table name of the current execution unit. The iterator then uses that name to get the data from the external entity and return it as an array of objects, which is inserted into the flow table.
    Regards,
    Cipi

  • How to use RECNUM special field in a file bulk load interface (sqlldr)

    Hi,
    I'm trying to load an ordered set of full-text lines from a flat file using SQL*Loader 11.2 with the ODI 11.1.1 bulk LKM (LKM File to Oracle - SQLLDR).
    I have to keep track of each line number in a separate target table column NUM_SEQ and feed it with sqlldr RECNUM special field.
    I haven't found any other way to do that than to manually tweak the generated sqlldr .ctl control file (ugly, but it works):
    NUM_SEQ RECNUM,
    FULL_LINE CHAR(4000)
    I've tried to map "RECNUM" as an expression in the map tab of the loading interface but the column itself gets discarded at .ctl generation.
    I haven't found any mention of RECNUM in the ODI documentation, on this forum, or on the Web.
    Using an internal Oracle sequence in the subsequent steps of the ETL breaks the guarantee of ordered lines.
    Any hints?

    You will have to enhance the KM so that this clause gets added to the CTL file each time.
    Add an option to the KM in which you can specify the name of the column that should act as the line number.
    Then, in the KM, change the "Generate CTL File" step and add
    <%=odiRef.getOption("RECNUM") %> RECNUM
    after the call to the <%=odiRef.getColList(" ") %> API.
    This will add the RECNUM column to the list of generated columns.
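    To sketch the idea, the field list in the generated .ctl file could be produced by something like the fragment below. The getColList pattern here is a simplified stand-in for the expression the shipped LKM already uses (keep that expression and only append the last line), and RECNUM is the KM option described above, holding the name of the target column:
    (
    <%=odiRef.getColList("", "[CX_COL_NAME]", ",\n    ", "", "")%>,
    <%=odiRef.getOption("RECNUM")%> RECNUM
    )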

  • Help required - How to flush memory for XML files?

    Hi,
    I am receiving files with filenames like FILE_001.xml, FILE_002.xml, etc. all with the same XML format.
    I need to process each file by order of their filename and load them into the same table.
    I created a loop that renames each file to a standard filename, FILE.xml, and loads it into the table.
    However, when I process the 2nd file, i.e. FILE_002.XML renamed to FILE.xml, the data from FILE_001.xml is loaded again. The data from FILE_001.XML still exists in memory.
    I need your advice on how to get past this problem.
    Thanks.
    Philip

    Hi Philip,
    You can use a TRUNCATE SCHEMA command on the XML Logical schema to remove the content previously loaded and then a SYNCHRONIZE FROM FILE to load the content of the new file in the backend. Refer to the XML Driver documentation in the ODI Documentation Library (Detailed Drivers Command).
    These commands should be included in an ODI procedure that you can use in your loop.
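    For instance, the procedure could contain two steps executed against the XML logical schema, one per driver command (the variable holding the file path is a placeholder, and the exact command syntax should be checked in the driver reference mentioned above):
    TRUNCATE SCHEMA
    SYNCHRONIZE FROM FILE "#XML_FILE_PATH"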
    Thanks,
    Julien

  • How to use Changed Data Capture?

    Hello,
    I want to use CDC to load my DW in real time. I am testing ODI in that mode.
    I have two tables:
    SRC Employe (empl_ID, name, surname, salaire, age) and TRG Employe (empl_ID, name, surname, CreDate, UpdDate) on SQL Server 2005. I wish to load data from SRC Employe to TRG Employe, and I want this data to be loaded in real time.
    How do I do this? I know that I must use OdiWaitForData and OdiWaitForLogData, but the explanations in the ODI documentation are not sufficient.
    How do I use OdiWaitForData and OdiWaitForLogData?
    thanks in advance
    Billyrose

    Hello,
    but when I created a package and added OdiWaitForLogData, the problem began: the step with OdiWaitForLogData succeeds, but my interface fails at the "Load Data" step. Why?
    There is no reason for that. WaitForXXX does not affect the data stored in the CDC infrastructure; it just detects it.
    - What type of CDC are you using: simple or consistent set?
    - Does your interface use journalized data only?
    - What error did you get, precisely, on the "Load Data" task?
    Could you tell me how to schedule a package? For scheduling, I think you should have a look at "Generating a Scenario" in the user manual. Under the generated scenario node you'll see the schedule node, and you can start scheduling from there.
    Regards,
    -FX

  • Split records into two files based on lookup table

    Hi,
    I'm new to ODI and want to know how I could split records into two files based on a value in one of the columns of a table.
    Example:
    Table:
    my columns are account and country:
    100 USA
    200 USA
    300 UK
    200 AUS
    From these 4 records, I maintain a list of countries in a lookup file and want to split the records into 2 different files based on the values in that file...
    Say I have the records AUS and UK in my lookup file...
    Then my ODI routine should send all records whose country is in the lookup file to file1 and the rest to file2.
    So from above records
    File1:
    300 UK
    200 AUS
    File2:
    100 USA
    200 USA
    Can you help me achieve this?
    Thanks,
    Sam

    1. Where and how do I create a filter to restrict countries? In the source or the target? Should I include some kind of filter operator in the interface?
    You need to have the filter on the source side so that records are filtered before being written to the file. To create a filter, click a column in the source datastore and drag it outside the datastore; a funnel (cone) shaped icon appears, and you can then click it and type the filter condition.
    Please see the ODI documentation here: http://www.oracle.com/technetwork/middleware/data-integrator/documentation/index.html
    Also look into this Getting Started guide: http://download.oracle.com/docs/cd/E15985_01/doc.10136/getstart/GSETL.pdf - it explains how to create a filter.
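    For example, assuming the source column is called COUNTRY (the actual table alias and column name will differ in your model), the two interfaces could simply use:
    File1 interface filter: SRC_TABLE.COUNTRY IN ('UK', 'AUS')
    File2 interface filter: SRC_TABLE.COUNTRY NOT IN ('UK', 'AUS')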
    2. If I have multiple countries (USA, CANADA, UK) that should go to one file and the rest to another file, can I use some kind of lookup file instead of modifying the filter inside the interface? Can I update entries in the file?
    There are two ways of handling your situation.
    Solution 1.
    1. Create a variable Country_Variable.
    2. Create a filter on the source datastore in the first interface: SOURCE.COLUMN = #Country_Variable
    3. Create a new package, Country File Unload.
    4. Call the variable Country_Variable in Set mode and provide the country (e.g. USA).
    5. Next call the first interface.
    6. Next call the second interface, where the filter condition will be SOURCE.COLUMN != #Country_Variable
    7. Now run the package.
    Solution 2.
    If you need a solution driven by a file:
    1. Use this method (http://odiexperts.com/how-to-refresh-odi-variables-from-file-%E2%80%93-part-1-%E2%80%93-just-one-value) to read the file in which you store the country name into the variable Country_Variable.
    2. Much the same as above, create a filter on the source datastore in the first interface: SOURCE.COLUMN = #Country_Variable
    3. Create a new package, Country File Unload.
    4. Next call the second interface, where the filter condition will be SOURCE.COLUMN != #Country_Variable
    5. Now run the package.
    This way you can control the split through the file.
    Please try it and let us know if you need any other help.

  • Data Integration from Oracle GL to Hyperion Planning

    Hi all,
    Can I know the steps for extracting data from the Oracle GL tables to Hyperion Planning?
    We have installed Hyperion 9.3.1 and ODI 10.
    Please let me know how we can do this process using ODI.
    Thanks

    Hi User##########,
    In a nutshell you need to do the following...
    1. Create a Project in ODI
    2. Define your Source and Target data stores in Topology
    3. Import the appropriate Knowledge Modules
    4. Reverse engineer your Source and Target schemas to create models
    5. Create an integration to load from your Source (Oracle GL) to your Target (Hyperion Planning -- I actually prefer to load the data directly to Essbase)
    I just did this last month. When loading the data I used an Essbase load rule to ensure Essbase knew the column order.
    I was also able to load data via ODI from a flat file WITHOUT a load rule but the order of the data in the file was very important. I was also not able to substitute or alter the information in the integration when this was done. So in the end I still needed to have a load rule handy.
    I'd recommend reading up on the ODI documentation for this -- some of it is actually quite good. http://download.oracle.com/docs/cd/E10530_01/doc/nav/portal_6.htm Check out the "Oracle Data Integrator Adapter for Hyperion xxx" section.
    -Chris
    P.S. Check out John Goodwin's blog -- there's some neat stuff in there and you'll probably figure out some neat stuff on your own too.

  • Sun Studio patches on Linux: how to deal with them?

    Hello all,
    On Solaris 10, I'm using a tool like pca (1) to keep Sun Studio up to date.
    But on Linux, I don't know how to do the same.
    I have CentOS on a Sun x2270; I've installed Sun Studio 12u1 from RPMs, and sometimes I go to the patches website and do the painful job of installing the latest patches.
    Is there another way, or a better process to get the latest RPMs?
    For instance, on Solaris 10, pca downloads a file named patchdiag.xref, in which I can find the patches related to Sun Studio on Linux:
    grep -i 'Studio 12' /var/tmp/patchdiag.xref | grep -i Linux | awk -F\| '{print $1}'
    124874
    124877
    126497
    126997
    127145
    127149
    127154
    127158
    141856
    141859
    142367
    After that, I have to use pca with the -r (README) flag to find the names of the RPMs:
    Files included with this patch:
    Note:
        sun-cpl-12.1-5.i386.rpm
        sun-cplx-12.1-5.x86_64.rpm
        sun-scl-12.1-5.i386.rpm
        sun-sclx-12.1-5.x86_64.rpm
        sun-stl4o-12.1-5.i386.rpm
        sun-stl4x-12.1-5.x86_64.rpm
        sun-tll7-12.1-5.i386.rpm
        sun-tll7x-12.1-5.x86_64.rpm
    Then, on Linux, I have to query whether the RPM is installed:
    bash-3.2# rpm -qa|grep sun-cpl
    sun-cpl-12.1-4
    sun-cplx-12.1-4
    and install (freshen) it if needed:
    sudo rpm -Fvh /tmp/128230-04/*.rpm
    Thanks in advance for any help,
    gerard
    (1) [http://www.par.univie.ac.at/solaris/pca/] and a review by a sun guy: [http://blogs.sun.com/quenelle/entry/patch_check_advanced]
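    As an aside, the manual sequence above could be wrapped in a small script; here is a rough, untested sketch built only from the commands already shown (the unpacked patch directory is a placeholder):
    # print the README of every Sun Studio patch for Linux to find the rpm names
    for p in $(grep -i 'Studio 12' /var/tmp/patchdiag.xref | grep -i Linux | awk -F'|' '{print $1}'); do
        pca -r "$p"
    done
    # after downloading and unpacking a patch, freshen it; rpm -F only touches packages that are already installed
    sudo rpm -Fvh /tmp/<patch-id>/*.rpm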

    Thanks Julien for this information.
    For an SCD Type 1 dimension I used the IKM Oracle Incremental Update module, and when I execute the interface I get an ORA-01031: insufficient privileges error because I have only read access on the source system. Based on your suggestion, ODI shouldn't need to access the source system for staging purposes.
    What did I do wrong then?
    According to the ODI documentation, the IKM Oracle Incremental Update module "integrates data in an Oracle target table in incremental update mode. This IKM creates a temporary staging table to stage the data flow. It then compares its content to the target table to guess which records should be inserted and which others should be updated. It also allows performing data integrity checks by invoking the CKM. Inserts and updates are done in bulk set-based processing to maximize performance. Therefore, this IKM is optimized for large volumes of data. Consider using this IKM if you plan to load your Oracle target table to insert missing records and to update existing ones. If you plan to use this IKM with very large tables, you should consider enhancing this IKM by removing the MINUS set-based operator. To use this IKM, the staging area must be on the same data server as the target."
