ODI Interface Performance Improvement

Hi,
ODI Version 11G
I have created some interfaces and a package to upload data from an Oracle database to MS SQL Server. However, the interfaces are running very slowly and take 8-10 minutes to complete.
Source – Oracle SQL
Target - MS SQL
IKM – IKM MS SQL Incremental Update
CKM – CKM SQL
LKM – LKM SQL to MS SQL
Could you please suggest some performance tuning tips or a faster alternative way to get this task done?
Any suggestions are highly appreciated.

You can always try the LKM SQL to MSSQL (BULK) loading knowledge module, which uses SQL Server's native bulk load. The load itself should be much faster, though of course this cannot help if the source query has a performance issue.
Here's a reference to the available LKMs for SQL Server: http://docs.oracle.com/cd/E21043_01/integrate.1111/e12644/ms_sqlserver.htm
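For reference, this KM stages the source result set to a file and then loads it with the BULK INSERT command; the effect is roughly the following T-SQL (file path, delimiters, and table name are illustrative):

    BULK INSERT dbo.STG_TARGET
    FROM 'C:\odi\work\target.txt'
    WITH (
        FIELDTERMINATOR = '\t',  -- column delimiter of the staged file
        ROWTERMINATOR   = '\n',  -- row delimiter
        TABLOCK                  -- table lock, enables minimal logging
    );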
Enjoy!
Michael R.

Similar Messages

  • Slow User Interface Performance on a Mac Pro

    I've recently purchased a late 2013 6-core Mac Pro with 16GB of memory and the 256GB SSD.  While the general performance of Final Cut Pro X is outstanding (applying filters, transitions, analyzing for dominant motion, optical flow retiming, rendering, etc.), the user interface still starts to slow down the further along I am in my project.  It is most noticeable when I click a clip in my timeline to modify a video property in the inspector, then hit play (spacebar) again to resume editing.  At the start of a project, I can fly through my clips and edit without any hesitation, but as I near the 90-second-ish mark (with an edit/clip every 5-10 seconds), the responsiveness of the user interface starts to waver, just like on my previous early 2011 MacBook Pro (something I thought was related to the graphics and/or system performance of computers from that era).
    I've tried creating proxy and optimized media projects and even edited my h.264 footage in native, compressed format - regardless of what I try, the performance of the user interface is nearly identical until I start to hit a few minutes with some colour corrections on 25% of the clips.  I've also experimented with the playback quality during editing (i.e., best performance vs. quality) which resulted in minimal performance improvements.
    Clearly, this Mac Pro has been designed to tear through video footage and process mass amounts of data, but I wonder if there is something else looming in the code of Final Cut Pro X such that, regardless of the computer in use, the user interface just gets sluggish once a critical mass of edits/clips is in a project.
    Any comments or feedback would be greatly welcomed, as I may decide to return this beast and settle on an iMac if this symptom exists regardless of what Mac I use.
    Sincerely,
    Ryan Smith

    1.  All of my media was located on the SSD installed inside the Mac Pro, which is also the startup disk of the computer
    This will slow you down.  Get a secondary drive for the media and Libraries.
    2.  The SSD had between 50GB and 150GB (depending on the project) available
    That drive is becoming too full, and slowing down.
    3.  The project consisted of a single event from one import, about 100 files totalling 40GB
    A good USB 3 or Thunderbolt external RAID will fly through that without issue.  The system drive will get bogged down.
    4.  I haven't paid close enough attention to Activity Monitor; I will start doing that, but FCPX is the only application I ever have open when editing, and it consumes about 12GB-16GB.
    16GB RAM should be plenty for FCPX.  I've never seen it eat up that much RAM, even on very large projects.  But more RAM won't hurt if the system is really using that much.  How are you getting those RAM usage numbers?
    The issue seems to be your drive.  First, don't use the system drive, that will bog things down, no matter what type of drive it is.  Second, get your media and Libraries on a fast secondary (external) drive. 
    Finally, are you running with the dual D500 or D700 GPUs?  There is a significant performance difference between the two in FCPX and Motion 5.  But I highly doubt that's your issue.  I think you simply are eating up too much of your drive, and running off the system drive, both are huge no-no's.

  • Aperture 3.2.3 performance improvement

    Hello,
    Yesterday I installed the Aperture update 3.2.3.
    I noticed a performance improvement.
    I'm curious what your experiences are with this update.

    I had thought of starting my own post to ask just how many issues people were experiencing - rather than what improvements, if any, people have seen.
    Since updating to 3.2.3, I've been completely afraid to use it!  Right out of the gate, it completely hung up my Mac and I had to force restart while using it, which then of course required repairing permissions, and I eventually had to do the more invasive Aperture library repair.  I could be having some other kind of issues, but it's funny - I only start having problems after an Aperture update!
    Things in Aperture are now VERY slow.  I just imported a large project of over 2000 photos that might be affecting my library and Aperture's speed, but it shouldn't affect it this much.  And now anything, and I mean ANYTHING, that has to do ANY kind of processing takes forever.  And I'm talking about the simplest of edits giving me the spinning beach ball.
    Here's one thing that JUST freaked me out, and why I came to the forums now.  I double-clicked on a photo I had been working on so that I could get back to the full browser view, showing all the pics I had in a project, and all of Aperture went weird and broken up.  I can only describe it as looking like Aperture's interface broke up into a puzzle on my display.  I'm afraid to try to duplicate what happened.
    Suffice it to say, I've skimmed the support forums enough to know there's something seriously wrong with the recent 3.2.3 update.  3.2.1 was just as horrible, so I was happy running along fine on 3.2.2, but now 3.2.3 is horrible - crashes, hangs, and takes forever brushing any kind of effects into shots.
    I saw that Michael L. was going to try to revert back to 3.2.2.  I'm also using a 2009 iMac 27" Core i7 with 10.6.8 and 16GB of RAM.  I refuse to upgrade to buggy Lion.  I would like to know what process you used to revert, if you did - just Time Machine, or how?  And how has Aperture been running since you reverted?

  • Tabular Model Performance Improvements

    Hi !
    We have built an inline tabular model which has a fact table and 2 dimension tables. The performance of the SSRS report is very slow, and this is a bottleneck in deciding on SSRS as our reporting tool.
    Can you help us with performance improvements for the inline tabular model?
    Regards,

    Hi Bhadri,
    As Sorna said, it is hard to give you detailed tips to improve the tabular model performance from the limited information. Here is a useful link about performance tuning of tabular models in SQL Server 2012 Analysis Services; please refer to the link below.
    http://msdn.microsoft.com/en-us/library/dn393915.aspx
    If this is not what you want, please elaborate with more detail so that we can analyze further.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Error when trying to execute ODI Interface

    Hello,
    I receive the following error when I try to execute an ODI interface. The interface contains two models trying to load a .csv file into an Oracle table. The models seem to be working and I can right-click on the source file and "View Data". I have specified LKM File To Oracle (SQLLDR), IKM Oracle Incremental Update, and CKM Oracle. But I still receive this error on the fourth step of the execution, "Loading - SS_0 - call sqlldr via Jython":
    org.apache.bsf.BSFException: exception from Jython: Traceback (innermost last):
    File "<string>", line 3, in ?
    OS command has signalled errors
         at org.apache.bsf.engines.jython.JythonEngine.exec(Unknown Source)
         at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
         at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
         at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
         at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
         at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
         at com.sunopsis.dwg.cmd.e.i(e.java)
         at com.sunopsis.dwg.cmd.g.y(g.java)
         at com.sunopsis.dwg.cmd.e.run(e.java)
         at java.lang.Thread.run(Unknown Source)
    Thank you for your help

    Almost...
    You need SQL*Loader installed on the machine where the "OS command" will execute, that is, where the agent is installed (since you will execute it from an agent once in production).
    AND that machine where the agent runs needs to "see" the target database through the Oracle listener (tnsnames.ora).
    It is just like executing SQL*Loader from your own machine, except that ODI writes the command for you.
    Does that make sense to you?
    Message was edited by:
    Cezar Santos

  • What is the use of active mapping in ODI interfaces?

    hi,
    What is the use of active mapping in ODI interfaces?
    What are the advantages of using active mapping?
    If active mapping is not used in ODI, would it affect the ODI code in some sense?
    Thanks,
    Mahesh

    MaheshGx wrote:
    What is the use of active mapping in ODI interfaces?
    Making a mapping active asks ODI to generate code for the mapped column; otherwise no code is generated for it.
    What are the advantages of using active mapping?
    If you want ODI to generate code for a mapping, make it active.
    If active mapping is not used in ODI, would it affect the ODI code in some sense?
    Yes - ODI will simply not generate the code for that mapping.

  • How to Compare Row by Row in ODI Interface

    Hi all
    I want to compare my source table record by record.
    Can anyone explain the logic - how can I do this through an ODI interface?

    If you want row-by-row processing (rather than set-based) then look at using Knowledge Modules that implicitly use a cursor via the agent to load the staging table (e.g. LKM SQL to SQL) or a KM that explicitly uses a cursor to load the target table (e.g. IKM Incremental Update row by row).
    It won't be as fast as set-based though!
    What do you want to do with each row? Can you load them all into the staging area and then process row by row?
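    If the comparison itself is the goal (finding rows that differ between two tables), a set-based approach is usually much faster than a cursor. A minimal sketch, assuming two like-structured tables SRC_TAB and TGT_TAB (names are illustrative; use EXCEPT instead of MINUS on SQL Server):

        -- Rows in the source that are missing or different in the target
        SELECT COL1, COL2, COL3 FROM SRC_TAB
        MINUS
        SELECT COL1, COL2, COL3 FROM TGT_TAB;

        -- Rows in the target that are missing or different in the source
        SELECT COL1, COL2, COL3 FROM TGT_TAB
        MINUS
        SELECT COL1, COL2, COL3 FROM SRC_TAB;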

  • How to set a group by clause in ODI interface?

    How to set a group by clause in ODI interface?

    In ODI, a GROUP BY clause is generated automatically when any of your mappings contains an aggregate function.
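    For example, with a hypothetical source table ORDERS mapped to CUST_ID and SUM(AMOUNT), ODI would generate something like:

        SELECT ORDERS.CUST_ID,
               SUM(ORDERS.AMOUNT)
        FROM   ORDERS
        GROUP  BY ORDERS.CUST_ID;

    Every non-aggregated mapping is placed in the GROUP BY clause automatically.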
    Thanks,
    Saravanan Rajavel

  • Dynamic Column Name in ODI Interface

    Hi Everyone
    I have a requirement to read a set of columns from source table and move the data to target table. Set of columns are decided dynamically based on parameter say PVV_PREFIX.
    For example, assume the following tables
    Source: Employee (OFFICE_ADDRESS1, OFFICE_ADDRESS2, OFFICE_CITY, HOME_ADDRESS1, HOME_ADDRESS2, HOME_CITY)
    Target: Address(TYPE, ADDRESS1, ADDRESS2, CITY)
    Now, if scenario is called PVV_PREFIX=OFFICE, then OFFICE_* columns should be mapped to target table
    if scenario is called PVV_PREFIX=HOME, then HOME_* columns should be mapped to target table
    In the actual requirement there are more than 30 such columns. Declaring that many variables is really not practical; we would rather prefer a #PVV_PREFIX + '_ADDRESS1' kind of approach, but that is not supported in an ODI interface.
    Please let me know whether this requirement is feasible via an ODI interface, or whether we need to do it using an ODI procedure. Thank you
    Regards
    Prasad

    Prasad,
    You can do it using a single interface, but you will need to customize the KM.
    Step 1. Create the interface mapping with the appropriate source-to-target mappings.
    Step 2. Customize the KM so that it adds the PVV_PREFIX prefix to the source column names. This can be done using an ODI variable or Java; see the sketch below.
    Step 3. Execute and test your interface.
    Please let me know if you have any questions about the above approach.
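    To illustrate the idea with the tables from the question: with the variable refreshed to OFFICE, the statement the customized KM ultimately needs to generate would look roughly like this (a sketch, not the KM code itself):

        SELECT 'OFFICE'        AS TYPE,
               OFFICE_ADDRESS1 AS ADDRESS1,
               OFFICE_ADDRESS2 AS ADDRESS2,
               OFFICE_CITY     AS CITY
        FROM   EMPLOYEE;

    The customization's job is simply to splice the #PVV_PREFIX value in front of each source column name when the statement is built.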

  • DS 5.2 P4 performance improvement

    We have +/- 300,000 users that regularly authenticate using our DS. The user ou is divided into ou=internal (20,000 uids) and ou=external (280,000 uids). Approximately 85-90% of the traffic happens on the internal ou. The question is: could I get any performance improvement by separating the internal branch into its own suffix/database? Would running two databases adversely affect performance instead? We see performance impacts when big searches are performed on the ou=external branch. Would the separation isolate the issue, or would those searches most likely affect the DS as a whole?
    Thanks for your help!
    Enrique.

    Thank you for the info. Are you a Sun guy - do you work for Sun?
    Yes I am. I'm the Architect for Directory Server Enterprise Edition 6.0. Previously I worked on all DS 5 releases (mostly on Replication).
    You are getting the Dukes!
    Thanks.
    Ludovic.

  • Performance improvement in a function module

    Hi All,
    I am using SAP version 6.0. I have a function module to retrieve the POs; for just 10,000 records it is taking a long time.
    Can anyone suggest ways to improve the performance?
    Thanks in advance.

    Moderator message - Welcome to SCN.
    But
    Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting
    Just 10,000 records? The first rule in performance improvement is to reduce the amount of selected data. If you cannot do that, it's going to take time.
    I wouldn't bother with a BAPI for so many records. Write some custom code to get only the data you need.
    Tob

  • Basic Issue with ODI interface

    Hi All,
    I have a very strange problem.
    I am coding my second interface in ODI.
    It's very simple:
    it joins 4 tables, with one filter, and connects them to the target table.
    The interface, when executed, runs fine without any error.
    But the target table doesn't get populated.
    And when I check the ODI interface at every step via the Operator navigator, I get no error at any step.
    Step 1) drop a table (this comes up as a warning, as no table exists)
    Step 2) create a view (this step succeeds, but when I check the database the view has not been created)
    Step 2 is the real issue, and I can't understand why this view is not
    getting created in the database when the step completes successfully.
    Please can someone suggest where I should look?
    rgds
    s

    I can relate to your issue.
    ODI does this to me all the time.
    The Operator logs show all greens and show no error, but no data makes its way to the Target.
    I usually remove one mapping at a time and re-run the Interface. That helps me pin-point what might be the issue. Very frustrating.
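    One quick check worth adding, assuming an Oracle target (the LIKE pattern below is illustrative): query the data dictionary to see which schema the view actually landed in. ODI creates its work objects in the work schema configured on the physical schema, which is often not the schema you are inspecting.

        SELECT owner, view_name
        FROM   all_views
        WHERE  view_name LIKE 'C$%';

    If the view shows up under a different owner, that is the schema (or work schema setting) to look at.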

  • Pls help me to modify the query for performance improvement

    Hi,
    I have the below initialization
    DECLARE @Active bit = 1;
    DECLARE @id int;
    SELECT @Active = CASE WHEN id = @id AND [Rank] = 'Good' THEN 0 ELSE 1 END
    FROM dbo.Students;
    I have to change this query so that the conditions id = @id and [Rank] = 'Good' go into the WHERE clause. In that case, how can I use a CASE statement to retrieve 1 or 0? Can you please help me modify this initialization?

    I don't understand your query... maybe the below? Or provide us with sample data and your expected output...
    SELECT * FROM dbo.Students
    WHERE @Active = CASE WHEN id = @id AND [Rank] = 'Good' THEN 0 ELSE 1 END;
    But I doubt you will get a performance improvement here.
    Do you have an index on id?
    If you are looking to get the data for @id with [Rank] = 'Good', then use the below. Make sure you have an index on the (id, [Rank]) combination; a sketch of it is below.
    SELECT * FROM dbo.Students
    WHERE id = @id
      AND [Rank] = 'Good';
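    A sketch of that index (the name is illustrative):

        CREATE NONCLUSTERED INDEX IX_Students_Id_Rank
        ON dbo.Students (id, [Rank]);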

  • Performance improvement in OBIEE 11.1.1.5

    Hi all,
    In OBIEE 11.1.1.5 reports take a long time to load. Kindly provide me with some performance improvement guidance.
    Thanks,
    Haree.

    Hi Haree,
    Steps to improve the performance.
    1. implement caching mechanism
    2. use aggregates
    3. use aggregate navigation
    4. limit the number of initialisation blocks
    5. turn off logging
    6. carry out calculations in database
    7. use materialized views if possible
    8. use database hints
    9. alter the NQSConfig.ini parameters
    Note: calculate all the aggregates in the repository itself and create fast-refresh materialized views (MVs); a minimal sketch follows.
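    For the note above, a sketch of a fast-refreshable MV (all names are illustrative; fast refresh of an aggregate MV also needs a materialized view log on the base table and the COUNT columns shown):

        CREATE MATERIALIZED VIEW LOG ON SALES_FACT
          WITH SEQUENCE, ROWID (CUST_ID, AMOUNT)
          INCLUDING NEW VALUES;

        CREATE MATERIALIZED VIEW SALES_AGG_MV
          BUILD IMMEDIATE
          REFRESH FAST ON COMMIT
        AS
          SELECT CUST_ID,
                 SUM(AMOUNT)   AS TOTAL_AMOUNT,
                 COUNT(AMOUNT) AS CNT_AMOUNT,
                 COUNT(*)      AS ROW_CNT
          FROM   SALES_FACT
          GROUP  BY CUST_ID;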
    You can also schedule an iBot to run the report every hour or so, so that the report data is cached and the BI Server serves subsequent requests from the cache.
    This is the latest tuning guide for OBIEE 11g:
    http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
    Report level:
    1. Enable caching - in NQSConfig.ini, change ENABLE = NO to ENABLE = YES.
    2. Go to the Physical layer --> right-click the table --> Properties --> check Cacheable.
    3. Try to implement an aggregate mechanism.
    4. Create indexes/partitions at the database level.
    There are multiple other ways to fine-tune reports from the OBIEE side itself:
    1) You can check your measures' granularity in reports and have level-based measures created in the RPD using the OBIEE utility.
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    This will pick your aggregate tables rather than the detailed tables.
    2) You can use cache-seeding options, using an iBot or the NQCMD command-line utility:
    http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
    http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
    OR
    http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
    Using one of the above 2 methods, you can fine-tune your reports and reduce the query time.
    Also, on the safe side, take the physical SQL from the log and run it directly on the DB to see the time taken, and check the explain plan with the help of a DBA.
    Hope this helps.
    Thanks,
    Satya

  • MV Refresh Performance Improvements in 11g

    Hi there,
    the 11g new features guide, says in section "1.4.1.8 Refresh Performance Improvements":
    "Refresh operations on materialized views are now faster with the following improvements:
    1. Refresh statement combinations (merge and delete)
    2. Removal of unnecessary refresh hint
    3. Index creation for UNION ALL MV
    4. PCT refresh possible for UNION ALL MV"
    While I understand (3) and (4), I don't quite understand (1) and (2). Has there been a change in the internal implementation of the refresh (away from a single MERGE statement)? If so, what changed? Is there a note or something in the knowledge base about these enhancements in 11g? I couldn't find any.
    This matters for our decision on whether or not to migrate to 11g...
    Thanks in advance.

    I am not quite sure what you mean. Do you mean perhaps that the MV logs work correctly when you perform MERGE statements with DELETE on the detail tables of the MV?
    And where is the performance improvement? What is the refresh hint?
    Though I am using MVs and MV logs at the moment, our app performs deletes and inserts in the background (no merges). The MV-log-based fast refresh scales very, very badly: performance drops quickly as the changed data set grows.
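    (For anyone trying to quantify the 11g improvements: the simplest approach is to time a manual fast refresh of the same MV on both versions. The MV name below is illustrative; DBMS_MVIEW.REFRESH is the standard refresh API.)

        -- Time a fast ('F') refresh of a single MV on 10g and again on 11g
        EXEC DBMS_MVIEW.REFRESH(list => 'SALES_AGG_MV', method => 'F');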
