Handling IKM in ODI

Hi
I have a requirement where I have to execute 9 interfaces to load 9 tables.
Either all the tables should be loaded or none should be loaded.
For this I am using IKM SQL Control Append and have modified the commit value to 'No' both in the IKM as well as in the interfaces.
But even after this the data is getting loaded to the tables.
Is there a way to roll back the transaction?
Kindly help me resolve this issue.
Thanks

Hi Nidhi,
You don't need to modify the IKM.
Setting the commit value in the interfaces will do.
You have 9 interfaces in one transaction, right?
You need to run them all in one transaction. Don't worry: by default the transaction name in all the IKMs is "Transaction 1", so if you set commit = No in the first 8 interfaces and commit = Yes in the ninth, then if any of the interfaces fails, the data for all the interfaces will be rolled back.
Note: setting commit to No and running an interface independently will not show you the effect of the rollback, as I guess the developers put a commit on Execute for the transaction (because the transaction ends, and I guess they didn't want to put an extra commit button on the screen ;) )
Anyway, it would have been of no use.
So the whole point is that setting commit to No will only show its effect in a package.
This question was raised earlier in the following thread, but I guess it was never marked as answered, so your search never found it.
Re: Transaction Issue
Hope it helps!
Regards,
Amit
The write-up below is just for the interest of others; you may choose to read it or ignore it.
The interesting question would have been whether ODI will support 2 different transactions in a single package or not.
I haven't tried it, but I guess it can if we change the transaction name in the IKMs. I will try it and let us (forum users) all know.
Also, what about maintaining a transaction across different technologies, like one interface on Oracle and another on DB2 or Teradata?
I guess even this will be supported, as ODI is a Java-based application and Java does support distributed transactions. I always admire Java for its capabilities in distributed computing (my roomie will argue that no, the best part of Java is threads) :D :D
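The all-or-nothing behaviour described above can be sketched outside ODI as a single database transaction with one commit at the end. This is an illustrative Python/sqlite3 sketch, not ODI-generated code; the table names and the loader function are hypothetical stand-ins for the 9 interfaces:

```python
import sqlite3

def load_all_or_nothing(conn, loads):
    """Load every table inside one transaction; commit only once at the end,
    so a failure in any load rolls back all tables (the same effect as
    commit = No on the first 8 interfaces and commit = Yes on the last)."""
    cur = conn.cursor()
    try:
        for table, rows in loads:          # each entry plays the role of one interface
            cur.executemany(f"INSERT INTO {table} VALUES (?)", rows)
        conn.commit()                       # the single commit = Yes step
    except Exception:
        conn.rollback()                     # any failure undoes every table
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (c INTEGER)")
conn.execute("CREATE TABLE t2 (c INTEGER NOT NULL)")

# The second load violates NOT NULL, so t1 ends up empty as well.
try:
    load_all_or_nothing(conn, [("t1", [(1,), (2,)]), ("t2", [(None,)])])
except sqlite3.IntegrityError:
    pass
```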

Similar Messages

  • ODI Error Handling: IKM for Essbase (Data), Check rejects at Commit Intervals

    All,
    I am trying to see if there is a way I can handle errors in the ODI IKM for SQL to Hyperion Essbase (Data), so I can switch to using a load rule interface if there are rejects.
    I am thinking that if we can check for rejects after every commit interval (right now using the default of 1000 records), we could continue to the next set of 1000 records only if there are no rejects.
    If there is even a way of aborting the interface run, i.e. preventing it from switching to loading records line-by-line when a reject occurs, I can check a log and kick off an interface which will use an Essbase load rule to continue loading.
    I don't know if it all sounds too hypothetical, but I want to see if anyone has ideas around this approach.
    Please share any thoughts.
    Thanks,
    Anindyo

    Thanks John, I was thinking on those lines.
    But it would help if there is a way of collecting information on what was rejected, without having to set up a new physical object to pull from the work repository or from the file.
    We are trying to get away from any KM customization.
    Do you know what we can check for here? Is there a way of refreshing a variable in case of a failure, which we can check in the next step?
    Thanks,
    Anindyo

  • ODI Error: Unable to load data

    Hi Friends,
    I'm new to ODI. I need your assistance in resolving the following issue:
    My source has 6 rows of data.
    My target has no rows.
    Both source and target are Oracle, and I used the KMs SQL to Oracle, SQL Control Append, and Oracle Incremental Update,
    but it was still unable to load the data.
    In the Operator window, I was able to see that the data had been loaded in step 3.
    But step 6 (integration - insert new rows) is showing an error.
    The Return code of error : 4098
    Message: ora-04098: trigger is invalid and failed revalidation.
    Even when I tried using Toad to enter the data into the target database manually by writing a query, it showed the same error.
    Is the issue with ODI or with the Oracle database?
    Is there any method to resolve the issue ? How do we handle triggers in ODI ?
    Thanks in advance,
    Raj.

    Nobody would know the answer to your question.
    Yes you can drop the trigger. You can also drop the table too. And there will be no need to create an interface or move data.
    Sorry, I am being sarcastic here.
    We will not know if that trigger is needed or not. Maybe it belongs to an ERP app that is maintaining some business rules, or it's an OBIEE staging area and needs the trigger to validate data. Who knows.
    You should ask your administrator as to why the trigger is invalid. What is the use of the trigger. Try to find out the source of the problem.
    You should not simply drop the trigger.

  • Implementing WITH clause in ODI

    Hi,
    I want to convert certain 'WITH' clauses and 'inline' views of a SQL query into ODI interfaces. Is there any way I can do this?
    Regards,
    RAshmik

    Hi Rashmik,
    I know that this is an old thread, but did you manage to accomplish this?
    I tried to dig into the IKMs and ODI substitution methods, but from what I understand the "odiRef.getFrom()" method deals with the sources that are marked as subselect, and there is no way to break these subselects away from the "FROM clause" to use them in a "WITH clause".
    Does anyone have other thoughts?
    Thanks,
    Murilo
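    For what it's worth, a WITH clause over a single subquery is usually equivalent to the inline view that the subselect option generates in the FROM clause. A small sketch (Python with sqlite3; the table and data are illustrative) showing the two forms returning the same result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("N", 10), ("N", 20), ("S", 5)])

# The WITH-clause form of a query...
with_clause = """
    WITH totals AS (SELECT region, SUM(amount) AS total FROM sales GROUP BY region)
    SELECT region, total FROM totals WHERE total > 10
"""
# ...and the equivalent inline view, the shape a subselect source produces.
inline_view = """
    SELECT region, total
    FROM (SELECT region, SUM(amount) AS total FROM sales GROUP BY region)
    WHERE total > 10
"""
assert conn.execute(with_clause).fetchall() == conn.execute(inline_view).fetchall()
```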

  • Parallel Execution in ODI

    Hi
    I want to club my 10 scenarios into one package and execute them asynchronously. I did it using OdiStartScen and OdiWaitForChildSession, but I am not able to track which interfaces failed and which succeeded. Can you please let me know how to handle this in ODI?
    Thanks

    Hi
    Place all the interfaces together with --> OK in a package, generate a scenario, and run it.
    Before that, run all the interfaces individually and check which interface is getting an error; if all the interfaces execute without error, then you can do as I mentioned above.
    Hope this helps and is what you were expecting.
    Thanks,
    Phani
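    As an aside, the track-which-failed part can be sketched in plain Python with concurrent.futures. The scenario names and the run_scenario function below are hypothetical stand-ins for asynchronous OdiStartScen calls:

```python
from concurrent.futures import ThreadPoolExecutor

def run_scenario(name):
    # Stand-in for launching one ODI scenario; a real package would
    # run an OdiStartScen step asynchronously here.
    if name == "SCN_BAD":
        raise RuntimeError("load failed")
    return "done"

scenarios = ["SCN_A", "SCN_B", "SCN_BAD"]
results = {}
with ThreadPoolExecutor() as pool:
    futures = {pool.submit(run_scenario, s): s for s in scenarios}
    for fut, name in futures.items():
        try:
            fut.result()                 # blocks until this scenario finishes
            results[name] = "succeeded"
        except Exception:
            results[name] = "failed"     # record the failure instead of losing it
```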

  • Fast Reader vs ODI

    Hi,
    Has anybody worked on Fast Reader, or does anyone have an idea of how ODI and Fast Reader compare?
    Thanks,
    Ramesh

    You mean this one ?
    http://www.wisdomforce.com/dweb/resources/docs/fastreader_data_sheet.pdf
    http://www.wisdomforce.com/dweb/resources/docs/FastReader_business_data_sheet.pdf
    To me, Fast Reader looks lighter than ODI, and I don't know if it gives you the flexibility to create your own way of handling data as ODI does.
    Also, there is no perfect product; you have to find out what is most suitable for your needs.

  • EPMA 11.1.2.1 dimension build from FlatFile to EPMA to EssbaseASO using ODI, member sorting order issues

    We are building our Essbase ASO cube using flat files which are pushed to EPMA via interface tables, and then the EPMA Essbase app is deployed to Essbase; this entire job is done through an ODI interface/package. The problem I am facing is the order in which members appear in EPMA: even though the flat file has the right sort order, by the time the hierarchies arrive in the EPMA interface table the sort order has changed randomly.
    I am using the File to SQL Control Append IKM in ODI, and somewhere along the way I saw a suggestion to add a new option to insert an "ORDER BY" into the IKM. I successfully did this and it did change the sort order, but even this is not the right order (the order in which my flat file has the dimensions).
    I can't understand why Oracle/ODI needs to randomize the rows without being asked to! Please let me know if anyone has faced this issue before and whether they were able to resolve it.

    The EPMA interface tables have a SORTORDER column. Make sure this is populated with numeric values sequencing the order in which you want your members to appear in the EPMA hierarchies; when you import from the interface tables, this order will be respected. Before this feature was introduced, the workaround was to create a view referencing the interface tables that imposed the required ORDER BY clause, but this isn't required in 11.1.2.1; just use the SORTORDER column.
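    A minimal sketch of populating SORTORDER in file order before loading the interface table (the member names and the row layout below are illustrative, not the actual EPMA interface table columns):

```python
# Assign a sequential SORTORDER to members in the order they appear in the
# flat file, so EPMA preserves that order on import.
members = ["Revenue", "COGS", "Margin"]   # illustrative member names
rows = [{"MEMBER": m, "SORTORDER": i} for i, m in enumerate(members, start=1)]
```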

  • Need flexibility on using Excel as datasource

    I found this tutorial: http://blogs.oracle.com/dataintegration/2010/03/connecting_to_microsoft_excel.html
    Is it just me, or is there simply no flexibility if you need to predefine the data area (named range) beforehand? In most use cases you'd want to automate the ETL, and so the data source must support a varying number of entries (rows) at the very least.
    Is there any work around for this?
    TIA

    For Excel, these are the steps.
    Look at this link - http://odiexperts.com/step-by-step-procedure-to-read-excel-xls
    1. Create a DSN in ODBC.
    2. Create a connection in the data server (topology) using the Sun ODBC-JDBC bridge driver.
    3. Reverse-engineer the Excel workbook using selective reverse. When handling multiple sheets, ODI defines each sheet like a table (datastore), and the names are populated based on the sheet names: if the sheets are Sheet1, Sheet2, and so on, the datastores become Sheet1$, Sheet2$, and so on.
    Now you can use them as source datastores in an interface, just as we do with other RDBMS datastores.
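    As a small illustration of step 2: the JDBC-ODBC bridge data server only needs the DSN name from step 1 in its URL. The DSN name below is an example, not a required value:

```python
# The topology data server for an Excel DSN uses the Sun JDBC-ODBC bridge:
# driver class "sun.jdbc.odbc.JdbcOdbcDriver", URL "jdbc:odbc:" + DSN name.
def excel_jdbc_url(dsn_name):
    return "jdbc:odbc:" + dsn_name

driver_class = "sun.jdbc.odbc.JdbcOdbcDriver"
url = excel_jdbc_url("EXCEL_SRC")   # "EXCEL_SRC" is an example DSN name
```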

  • How to pass one lookup value to another lookups

    Hi All,
    I've two relational sources with a join condition; I added a lookup and kept the join condition too. I have to pass this lookup's value to another lookup.
    Let me know how we can handle this in ODI.
    Thanks

    If you don't use "expression in the SELECT clause" option, why don't you simply drag and drop your file datastores in your canvas and do a standard join?
    The result will be the same.
    See here for more details : Re: Lookup Vs Join

  • Urgent: How to separate bad records from a load and put them into a separate table

    We have an error-handling requirement in ODI 11g from the client: whenever a bad record is encountered, the execution flow should not stop; rather, it should separate those records into an error table, so that at the end of the load all the records (except the bad ones) are in the target table and the bad records are in a separate error table.
    The definition of a bad record may include a column-size or datatype mismatch between the source and target table. How do we implement this error-handling strategy in ODI, or is there any out-of-the-box solution that we can leverage? Please help.
    Thanks & Regards,
    SBV
    Edited by: user13133733 on Dec 23, 2011 4:45 AM

    Hi SBV,
    Please find my responses below,
    I have tried the steps suggested; however, I have some doubts:
    1. What data exceptions (e.g. primary key constraint violations) are covered in this mechanism?
    You can handle PK, FK, and check constraint violations etc. using the CKM.
    2. If there is a column size mismatch between the source and target table, will this work? (I think not, because I tried it and it gives an error before populating the I$ table, since I$ is created according to the source.)
    You are right; column size mismatches will not be captured by the default CKM behaviour.
    Also, I am getting an error in the creation of the SNP_CHECK_TAB step. In my case ODI is by default generating a query like "create table .SNP_CHECK_TAB", and this dot (.) before SNP_CHECK_TAB makes it an invalid table name. This step only gives a warning (not an error), but the next step (delete previous checksum) throws an error, as it also looks for the .SNP_CHECK_TAB table, which is not there.
    Please help me find where the issue lies. I have no idea why it generates that query by default; I freshly imported the CKM Oracle and used it.
    This is because there is no DEFAULT physical schema defined for your target data server.
    Go to Topology Manager -> Physical Architecture -> <Your Technology> -> <Your target data server>, expand it, open your physical schema, and check DEFAULT.
    Thanks,
    Guru
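    For readers interested in what the CKM's flow control does conceptually: constraint-violating rows are diverted to an E$ error table rather than stopping the load. A toy Python sketch, with an illustrative length check standing in for a real declared constraint:

```python
def flow_control(rows, max_len=10):
    """Split rows into loadable rows and error rows, the way CKM flow
    control routes violations to an E$ table instead of failing the load."""
    good, errors = [], []
    for row in rows:
        if row["PLACE"] is not None and len(row["PLACE"]) <= max_len:
            good.append(row)
        else:
            # mimic the E$ table: keep the row plus an error message column
            errors.append({**row, "ERR_MESS": "PLACE too long or null"})
    return good, errors

good, errors = flow_control([{"PLACE": "kol"}, {"PLACE": "x" * 20}])
```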

  • Opening and saving .mhtml files to .txt or .csv

    Hello All,
    I need to accomplish the following and it's way beyond my immediate skill set.  I need ODI to do the following:
    1.     Receive an attachment in email and drop it in a folder (got this one)
    2.     The attachment is a report in .mhtml format.  I need to get the content of the file, minus some of the header info, loaded into a table.  To handle this manually, the user opens the .mhtml file, saves it as a .csv file, and deletes the unwanted header info. I'd like to find a way to handle this in ODI without user intervention.
    3.     Once the contents are in a loadable format, load the data into tables.
    My stumbling point is working with the .mhtml file.  Never done this before.  I'm sure there are options and some smart people have already figured this out.  I'm hoping someone will share their solution.
    Thanks in advance for the help!

    Ok, I did figure this out.  For those who are interested, here is how I did it:
    $from = "8/31/2014 12:53:56 AM " #Get-Date –f "dd-MMMM-yyyy 00:01:00"
    $to = "8/31/2014 3:00:56 AM "#Get-Date –f "dd-MMMM-yyyy HH:mm:ss"
    $pg = Get-ProtectionGroup -DPMServerName backup01
    $ds = Get-Datasource -ProtectionGroup $pg[0]
    $so = New-SearchOption -SearchString "MASTER_.bak" -FromRecoveryPoint "$from" -ToRecoveryPoint "$to" -SearchDetail filesfolders -SearchType contains -Recursive -Location "T:\"
    $rp = Get-Recoverypoint -Datasource $ds[0]
    $ri = Get-RecoverableItem -Datasource $ds[0] -SearchOption $so
    #If you want to see what Items match the above search, just run $ri
    #Here is the part i was missing..getting the correct library.
    $lib = Get-DPMLibrary -DPMServerName backup01
    $rop = New-RecoveryOption -TargetServer SQL03 -RecoveryLocation CopyToFolder -FileSystem -AlternateLocation "T:\SQLsafe Backups\Offlined Backups\Master\" -RecoveryType Restore -OverwriteType overwrite -dpmlibrary $lib[2]
    Recover-RecoverableItem -RecoverableItem $ri -RecoveryOption $rop
    #This will just give you the status if you don't want to use the GUI
    Get-DPMJob -DPMServerName "backup01" -JobCategory RecoveryTape #-Status InProgress
    So the issue was getting the DPM library and then adding the reference to that library in the recovery option (-dpmlibrary $lib[x]).
    Also, when using the -SearchType contains, you can be creative in matching the kinds of files you want it to find ie. -SearchString "*abc*MASTER_.bak"
    This script will recover all of the matching files to the specified location, though it does recreate the original file structure.
    Also, contrary to my title, I didn't end up having to iterate through a list for the restoration of a list of files.  There would still be a place for someone to do that easily enough by doing an exact match and feeding in a list of the files, but for what I needed, it worked with the wildcard.

  • How to handle error for a file to file transform in ODI

    I am doing a lab for file to file transformation where source = CSV file and target = Flat file.
    1) When I change the datatype in the source, two files get created: one holding the errored-out data and the other holding the error message. How do I handle the errored data?
    2) If the target path is changed, the session in ODI shows as completed, when it should error out. In this case no files are created in the source as before. How do I handle this type of error?

    Hi,
    I have used the following KMs in my transformation with the following options:
    IKM SQL Incremental Update
    INSERT    <Default>:true
    UPDATE    <Default>:true
    COMMIT    <Default>:true
    SYNC_JRN_DELETE    <Default>:true
    FLOW_CONTROL    <Default>:true
    RECYCLE_ERRORS    <Default>:false
    STATIC_CONTROL    <Default>:false
    TRUNCATE    <Default>:false
    DELETE_ALL    <Default>:false
    CREATE_TARG_TABLE    <Default>:false
    DELETE_TEMPORARY_OBJECTS     <Default>:true
    LKM SQL to SQL
    DELETE_TEMPORARY_OBJECTS    <Default>:true
    CKM Oracle
    DROP_ERROR_TABLE    <Default>:false
    DROP_CHECK_TABLE    <Default>:false
    CREATE_ERROR_INDEX    <Default>:true
    COMPATIBLE    <Default>:9
    VALIDATE    <Default>:false
    ENABLE_EDITION_SUPPORT    <Default>:false
    UPGRADE_ERROR_TABLE    true

  • How to handle error for a Db Table to Db table transform in ODI

    Hi,
    I have created two tables in two different schemas, source and target, where there is a field (e.g. "place") whose datatype is varchar2 and whose inserted data is a string.
    In the ODI designer model I have set the type of "place" as number in both source and target and done the mapping accordingly.
    When it is executed it should give an error, but it completed, and no data was inserted into either the target table or the error table in the target schema (E$_TARGET_TEST, which is created automatically).
    Why is the error not given, and how do I handle this type of error?
    Please help.
    The codes for source and target tables are as follows:
    source table code:
    CREATE TABLE "DEF"."SOURCE_TEST" (
        "EMP_ID"   NUMBER(9,0),
        "EMP_NAME" VARCHAR2(20 BYTE),
        "SAL"      NUMBER(9,0),
        "PLACE"    VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    )
    inserted data:
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('1', 'ani', '12000', 'kol')
    INSERT INTO "DEF"."SOURCE_TEST" (EMP_ID, EMP_NAME, SAL, PLACE) VALUES ('2', 'priya', '15000', 'jad')
    target table code:
    CREATE TABLE "ABC"."TARGET_TEST" (
        "EMP_ID"     NUMBER(9,0),
        "EMP_NAME"   VARCHAR2(20 BYTE),
        "YEARLY_SAL" NUMBER(9,0),
        "BONUS"      NUMBER(9,0),
        "PLACE"      VARCHAR2(10 BYTE),
        PRIMARY KEY ("EMP_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" ENABLE
    )

    Hi,
    I have used the following KMs in my transformation with the following options:
    IKM SQL Incremental Update
    INSERT    <Default>:true
    UPDATE    <Default>:true
    COMMIT    <Default>:true
    SYNC_JRN_DELETE    <Default>:true
    FLOW_CONTROL    <Default>:true
    RECYCLE_ERRORS    <Default>:false
    STATIC_CONTROL    <Default>:false
    TRUNCATE    <Default>:false
    DELETE_ALL    <Default>:false
    CREATE_TARG_TABLE    <Default>:false
    DELETE_TEMPORARY_OBJECTS     <Default>:true
    LKM SQL to SQL
    DELETE_TEMPORARY_OBJECTS    <Default>:true
    CKM Oracle
    DROP_ERROR_TABLE    <Default>:false
    DROP_CHECK_TABLE    <Default>:false
    CREATE_ERROR_INDEX    <Default>:true
    COMPATIBLE    <Default>:9
    VALIDATE    <Default>:false
    ENABLE_EDITION_SUPPORT    <Default>:false
    UPGRADE_ERROR_TABLE    true

  • Not able to see IKM Oracle Incremental Update and IKM Oracle Slowly Changing Dimensions under the Physical tab in ODI 12c

    I am not able to see IKM Oracle Incremental Update or IKM Oracle Slowly Changing Dimensions under the Physical tab in ODI 12c,
    but I am able to see the other IKMs. Please help me; how can I see them?

    Nope, It has not been altered.
    COMPONENT NAME: LKM Oracle to Oracle (datapump)
    COMPONENT VERSION: 11.1.2.3
    AUTHOR: Oracle
    COMPATIBILITY: ODI 11.1.2 and above
    Description:
    - Loading Knowledge Module
    - Loads data from an Oracle Server to an Oracle Server using external tables in the datapump format.
    - This module is recommended when developing interfaces between two Oracle servers when DBLINK is not an option.
    - An External table definition is created on the source and target servers.
    - When using this module on a journalized source table, the Journaling table is first updated to flag the records consumed and then cleaned from these records at the end of the interface.

  • Odi 11g - IKM SQL to Hyperion Essbase (DATA) log file always empty

    In ODI 11g, when using *"IKM SQL to Hyperion Essbase (DATA)"* with "LOG_ENABLED" = true,
    only an empty log file is generated.
    Just the "LOG_ERRORS" file (if errors occur) is created.
    Is this just my issue?
    Can someone help me?
    P.S.: I get the same issue even with *"IKM SQL to Hyperion Planning"*.
    Thanks in advance, Paolo

    Thanks John for your suggestion.
    here the patch *"Patch 10302682: IKM SQL TO PLANNING: LOG FILE IS CREATED BUT NOTHING INSIDE."*
    I didn't see any other patch about Essbase...
    I'll keep checking the support site.
    Paolo
    Edited by: Paolo on 19-apr-2011 8.44
