OWB 10.2 Performance

Hi all,
We are using OWB 10.2 on AIX 5.3L, with Oracle DB 10.2.0.
We are experiencing performance issues, in particular during mapping deployment.
Sometimes a deploy of a single mapping takes more than 5 minutes.
We have also discovered that when the DB is performing I/O-bound operations, OWB becomes very slow. If we launch a query or procedure via SQL*Plus, it runs immediately, even under high I/O.
Do you have similar experiences?
Can you suggest a strategy for the solution of this issue?
Bye,
Andrea

Did you try creating the following indexes on your repository schema?
create index wb_rt_idx_ao_ou on wb_rt_audit_objects (object_uoid) tablespace &tndex.;
create index wb_rt_idx_ao_pa on wb_rt_audit_objects (parent_audit_object_id) tablespace &tndex.;
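Before creating them, it is worth checking which indexes already exist on that table. A minimal data-dictionary query (run against the runtime repository schema):
-- List existing indexes on the runtime audit table
select index_name, column_name, column_position
from all_ind_columns
where table_name = 'WB_RT_AUDIT_OBJECTS'
order by index_name, column_position;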
Regards
Matthias

Similar Messages

  • OWB vs Informatica, Performance

    Hi
    I am working on a Proof of Concept which is looking to replace Informatica with OWB. A key factor to the success of the proof of concept is performance.
    At present, the OWB mappings seem to be considerably slower, which does not make much sense to me. I have compared the SQL generated by Informatica with the SQL generated by OWB, and they are pretty much the same. The key difference between the two solutions is the initial processing of the data file. In the OWB solution, the file is used as an external table and a number of mappings perform validation functions to prepare the data for further processing. In the Informatica solution, this validation is performed at file-system level, using data caching.
    My experience of data caching is fairly limited. Is it a key way to reduce processing runtime? Is this an advantage of Informatica over OWB, given that it is performed outside the database?
    Are there any other performance-related areas in which Informatica beats OWB? Surely OWB should always win.
    Any feedback would be gratefully received.
    Thanks in advance.

    LS,
    I do not have any more hints for you than stated earlier, only more explicitly maybe. Forget the proof of concept.
    Either you choose the one (and only OWB) or the other. The choice has already been made in the past, and the development/test costs have been spent.
    If you do not know:
    - how much money you can actually spend during production with OWB
    - how much time/skills/money you can spend during a proof of concept of OWB
    - how much time/skills/money you can spend on rewriting and converting the ETL process
    your battle (the proof of concept) is lost beforehand.
    Solving any detail like performance is then a waste of time.
    OWB and Oracle can function really fast, if
    - the right hardware is available
    - the right skills are available at ETL-design time.
    Do not try to build an even more beautiful castle than the current one with only a toothbrush during a proof of concept. Keep the comparison fair.
    I would give this assignment back to where it came from.
    Regards,
    André

  • OWB Design Client Performance Problems

    I am currently testing OWB 9.2.0.4 and I quite often experience what I can only describe as the program hanging. If I wait approximately 30-90 seconds, the program begins to respond again. This is quite annoying, and I would like to know if there is any configuration I can check or change to improve the performance of the OWB client.
    My Desktop PC configuration:
    Windows XP Pro
    Dual Pentium Xeon 2.4 GHz
    512 MB RAM
    30GB SCSI Hard Drive
    All my repositories are on a single server on my network. Network connections are 100 Mbps.
    Server configuration:
    Windows 2000 Advanced Server
    Dual Xeon 3.00 GHz
    5 GB RAM
    300 GB Hard drive space on RAID 1 and RAID 5 arrays.
    Oracle 9i Enterprise (9.2.0.4.0)

    Hi Mark
    I too have OWB and a 9i database on my laptop and it works fine. My laptop has 1 GB RAM and a very fast processor. I have run it on a machine with less memory and it definitely goes slower.
    About 9 months ago I paid a lot of money to have Oracle University come to the company I then worked for to give private OWB training. The instructor told us that 1 GB should be considered the minimum for a machine running OWB in a real-world situation. He said that if we wanted to run OWB against a database on the same machine, with a smaller amount of data and maps, and without many other applications open, then less memory would suffice.
    I also definitely agree with you about network latency, because I have seen OWB "hang" on even a small map on a congested network. However, I have also seen it hang because of insufficient memory. It happened to me a couple of days ago on my own laptop, which as I say has 1 GB RAM. On the day it happened I opened my Task Manager and it showed 1.5 GB of memory being consumed by various processes, and therefore it was paging to my hard disk.
    Now granted, OWB was not the only process on my machine, because I had an email system, several Word documents, an Excel spreadsheet, and a couple of browser windows open. This is what I mean by a real-world situation.
    I think I should therefore clarify that it is not necessarily OWB that needs 1 GB RAM but, because it has to share resources with several other typical office tools, it is the machine that needs to have plenty.
    Hope this helps
    Regards
    Michael

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance-tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used properly. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
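    For reference, the manual explain-plan step described above amounts to something like this (a minimal sketch; the inner SELECT and table name are placeholders for the SQL extracted from a mapping):
    -- Capture the plan for the main SQL statement extracted from a mapping
    EXPLAIN PLAN FOR
      SELECT /* placeholder: paste the generated mapping SQL here */ *
      FROM   some_source_table;
    -- Display it (DBMS_XPLAN is available from 9iR2 onwards)
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);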
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst being sure that everything still compiles and runs.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested and changes the test status of mappings when you make changes to ones that they depend on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimizing Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
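    To illustrate the kind of session- and instance-level settings meant here (a sketch only; the values are placeholders, not recommendations):
    -- Instance memory parameters often reviewed for ETL workloads on 10g
    ALTER SYSTEM SET sga_target = 4G SCOPE = BOTH;
    ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;
    -- With workarea_size_policy = AUTO (the 10g default), sort and hash
    -- work areas are sized automatically from pga_aggregate_target
    ALTER SESSION SET workarea_size_policy = AUTO;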
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping (see the sketch after this list)
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - at an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - other ideas around testing?
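    On the event 10046 point above, switching extended SQL trace on and off for the current session is straightforward (a minimal sketch; level 12 records both bind variables and wait events):
    -- Make the resulting trace file easy to find
    ALTER SESSION SET tracefile_identifier = 'owb_map_trace';
    -- Level 12 = binds (4) + waits (8)
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
    -- ... execute the mapping ...
    ALTER SESSION SET EVENTS '10046 trace name context off';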
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (see the sketch after this list).
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
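    As a sketch of the pre-/post-mapping trigger idea above (procedure names are hypothetical; they could be called from OWB pre- and post-mapping process operators):
    -- Hypothetical procedures to switch extended SQL trace on and off
    CREATE OR REPLACE PROCEDURE trace_on (p_label IN VARCHAR2) IS
    BEGIN
      EXECUTE IMMEDIATE
        'ALTER SESSION SET tracefile_identifier = ''' || p_label || '''';
      EXECUTE IMMEDIATE
        'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 12''';
    END trace_on;
    /
    CREATE OR REPLACE PROCEDURE trace_off IS
    BEGIN
      EXECUTE IMMEDIATE
        'ALTER SESSION SET EVENTS ''10046 trace name context off''';
    END trace_off;
    /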
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace" Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users on how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing? Have you tried using SQL trace and TKPROF to profile mappings and process flows, or used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitively: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all, just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole (stuff like recovery/restart, late-arriving data, and so on).
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • OWB 9i is available.  Is anyone successfully using it?

    I've upgraded my MDL as per the docs, but I continually error out on import to 9i. Before going to Metalink, I was just wondering if anyone had gotten it to work at all.
    Thanks,
    Lewis

    I've moved to OWB 9i by performing a repository upgrade.
    Everything completed well...

  • Developers are complaining OWB 10.2 is slow

    Hi,
    Our shop has been using OWB 10g Release 1. Recently I upgraded to OWB 10g Release 2 (10.2.0.1.31 per inventory). This week our developers began to toy with it. Today, the developers who use it are telling me everything is very slow: the Design Center, the Control Center Manager and the Runtime Audit Browser, all of it. It is unusable! Very slow compared to OWB 10g R1.
    I searched this forum for help.... and I see many folks are complaining about OWB 10.2 performance.
    I have two questions:
    1) If I patch our OWB installation from 10.2.0.1.31 to 10.2.0.2 can I expect an improvement in performance? Can anyone comment on their own experiences please?
    2) I saw someone wrote in this forum "if you have upgraded to 10gr2 I have pity on you...most of us are waiting until september unless we have dual core client pcs and stupider management..." Can someone tell me what is happening in September?
    Note: Our environment meets the minimum requirements per the Warehouse Builder Installation and Administration Guide
    Thanks,
    Phany

    Hi
    We also had the same issue, so we upgraded our hardware:
    1 GB RAM at the client and 4 GB RAM at the server.
    2.5 GHz processor.
    Also, we optimized the repository. I do not know exactly what kind of optimization was done, as it was done by our DBA.
    This improved the performance of OWB considerably.
    Also, try using OEM for some statistics. This might help in identifying the performance-related issues.
    I'm not sure, but I don't think that the number of mappings can affect the performance. The slowness of OWB can be attributed to (as per my understanding):
    1. The Java GUI (since you have a very good hardware combination, this shouldn't be the main problem).
    2. Services/sessions: if there are too many sessions or services running on the same server, eating into the available memory, then performance will suffer.
    I request others to correct me if I'm wrong.
    Hope this helps!
    Regards
    Vibhuti

  • OWB Backups with normal exports

    Hi,
    Could someone please confirm whether a normal database export will suffice for a Warehouse Builder repository backup?
    That is, NOT using the Design Centre (or any other OWB front-end) but a plain exp.
    I want to schedule regular daily backups of the repository.
    What would be the best process to follow?
    Regards,
    Steven.

    Hi,
    Thank you - this what I expected ...
    However, how can this be scheduled to run daily? Do you have to log into the OWB application to perform the backup manually, or can the export be scheduled?
    What does the Datapump export do: expdp system/manager DIRECTORY=dpump_dir FULL=y
    I found this in the docs : http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/cdc.htm
    Regards,
    Steven
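    On the scheduling question: besides cron, a 10g option is DBMS_SCHEDULER running an external script that wraps exp/expdp (a minimal sketch; the script path and job name are hypothetical):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'OWB_REPOS_DAILY_EXPORT',
        job_type        => 'EXECUTABLE',
        job_action      => '/home/oracle/scripts/export_owb_repos.sh',  -- wraps expdp
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',
        enabled         => TRUE,
        comments        => 'Nightly export of the OWB repository schema');
    END;
    /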

  • How to get the last run date.

    We intend to develop an incremental data load mapping using this strategy:
    1) The mapping reads the date it was last run from an auxiliary table.
    2) It selects from the source only those rows that were inserted or updated after said date.
    3) Then, a post-mapping process updates the last run date in the auxiliary table, using SYSDATE.
    The problem with this logic is that there is a gap: if the mapping starts running at 1:00 and ends at 2:00, the rows that are inserted in between will never be loaded.
    Is there any way to get the value when the mapping started running? Is there a better way to do this?
    Any help would be appreciated.
    Juan Algaba

    There is always the possibility of some record updates slipping through the cracks if you are depending on dates, unless you are very careful. All of the audit tasks that the OWB-generated code performs take time. Any pre- or post-process that needs to run takes time. So which date is the best cutoff point to equate to "when the last run of the merge (or insert or update) statement completed"?
    Plus, how do you handle reloads if the previous load failed and your mapping had incremental commits?
    Is your source on another server? If so, are the dates in perfect sync? The audit tables are populated with the SYSDATE of your runtime schema. Is that the same as the SYSDATE on your remote source database?
    I would qualify my query to look for all updates since the start of the last run that finished successfully - adjusted if necessary for SYSDATE differences if the source is a remote schema. And make sure that your code handles any reloads gracefully in the event that this brings back data that you have already loaded once.
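    A minimal sketch of that approach (table, column and mapping names are hypothetical): capture the run start before reading the source, filter on the previous successful start, and only advance the watermark on success.
    -- Assumes an auxiliary table: etl_run_control(mapping_name, last_run_start DATE)
    DECLARE
      v_this_run_start DATE := SYSDATE;  -- captured BEFORE reading the source
      v_last_run_start DATE;
    BEGIN
      SELECT last_run_start INTO v_last_run_start
      FROM   etl_run_control
      WHERE  mapping_name = 'LOAD_CUSTOMERS';
      -- The load selects rows with src.last_update >= v_last_run_start;
      -- overlap is handled by making the load idempotent (e.g. a MERGE).
      -- ... run the load here ...
      -- Advance the watermark only if the load succeeded
      UPDATE etl_run_control
      SET    last_run_start = v_this_run_start
      WHERE  mapping_name = 'LOAD_CUSTOMERS';
      COMMIT;
    END;
    /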
    Because we use Oracle Streams to load a local staging area, we also have custom code to dump the primary keys of all data changes to utility staging tables while streams is updating the local copy. So, our Person table has an st_Person_delta table that just holds the primary keys that have been updated by Streams since the last ETL run.
    During the datamart load we disable the Streams apply to stabilize our environment, and join these lists of PKs to their source tables to drive our ETL. So we only select data where Streams has performed an update to the row since our last run. When our ETL is done, we truncate the primary key staging tables and then turn Streams back on to start loading our new delta into the staging tables again.
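    In outline, the delta-driven extract described above is just a join from the PK staging table back to the source (a sketch using the names from the example; the PK column is hypothetical):
    -- Only rows Streams has touched since the last ETL run
    SELECT p.*
    FROM   person p
    JOIN   st_person_delta d ON d.person_id = p.person_id;
    -- After a successful load, reset the delta list
    TRUNCATE TABLE st_person_delta;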
    The ETL does get pretty complex when many tables join together in one mapping and you need to check all possible source table deltas to see if any of them were updated, in order to determine the delta for a given dimension or fact record, but it works great once you get it all done.

  • Multiple Files Loading

    Hi
    I have a requirement to load multiple files into an Oracle database. I am unsure whether I should use SQL*Loader scripts or create a mapping in OWB for better performance.
    Could someone suggest which one gives the best solution?
    P.S.: All files have the same format.

    If you use a file as a source in a mapping, OWB will generate a SQL*Loader mapping. You can configure the file operator to load many files; the file name can also be supplied as a parameter.
    Cheers
    David
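    For reference, a single SQL*Loader control file can list several input files directly (a minimal sketch; file, table and column names are hypothetical):
    LOAD DATA
    INFILE 'feed_01.dat'
    INFILE 'feed_02.dat'
    INFILE 'feed_03.dat'
    APPEND
    INTO TABLE stg_feed
    FIELDS TERMINATED BY ','
    (customer_id, order_date DATE 'YYYY-MM-DD', amount)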

  • OWB Performance Tuning

    Hi Every body,
    I have been searching for OWB performance tuning guidelines for OWB 11gR2.
    1) The posted link http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    does not pull up the desired white paper. It points to the Oracle OWB resource page, and I did not find any links related to performance tuning there. Any idea?
    2) I reviewed https://blogs.oracle.com/warehousebuilder/entry/performance_tuning_mappings
    ("Performance tuning mappings" by David Allan).
    Two links in the blog, (a) "There are reports in the utility exchange (see here)" and (b) "There is a viewlet describing some of this here", are not working. Could you post the working links?
    Regards
    Ram Iyer

    Hi Ram
    The blog links should be fixed now, let me know if not. The blog has been rehosted a zillion times and each time stuff is broken in the migration - sound familiar?
    Cheers
    David

  • OWB Performance Bottleneck

    Is there any session log produced by the OWB mapping execution, other than seeing the results in the OWB Runtime Audit Browser?
    Suppose the mapping is doing a hash join which is consuming too much time, and I would like to see which tables are being joined at that instant. This would help me identify the exact problem area in the mapping. Does OWB provide a session log that can help me get that information, or is there any other place where I can get some information about the operation causing a performance bottleneck?
    regards
    -AP

    Thanks for all your suggestions. The mapping was using a join between some 4-5 tables, and I think this was where the mapping was getting stuck during execution in set-based mode. Moreover, the mapping loads some 70 million records into the target table. Perhaps, loading such a huge volume of data in set-based mode, with a massive join at the beginning, the mapping was bound to get stuck somewhere.
    The solution that came up was to create a table from the join condition and use that table as input to the mapping. This gets rid of the joiner at the very beginning and also lets the mapping run in row-based (target only) mode. The data (70 million rows) got loaded in some 4 hours.
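    In outline, that pre-join amounts to materialising the result set once, outside the mapping (a sketch; table and column names are hypothetical):
    -- Materialise the expensive multi-table join once, then feed the
    -- resulting table into the mapping as a single source
    CREATE TABLE stg_prejoined PARALLEL NOLOGGING AS
    SELECT s.order_key, s.amount, c.customer_key, p.product_key
    FROM   stg_orders s, stg_customers c, stg_products p
    WHERE  s.customer_id = c.customer_id
    AND    s.product_id  = p.product_id;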
    regards
    -AP

  • Owb client+performance+x-windows

    Hi,
    We are using 11.2.0.1 and finding the performance of OWB slow, e.g. opening mappings, the control center, etc.
    The machine has enough memory (2.5 GB). We checked the memory used by owb.exe in Task Manager (about 400 MB); the total used on the PC at any time for all applications is about 1.5 GB. We have ruled out network latency, as other client/server applications accessing the same machine and database perform fine.
    The server is located in the same building as the client and is a UNIX server.
    We are using the cumulative patch recommended by Oracle for performance reasons - still the same problem.
    Does anybody have any other tips to get the Design Center/Control Center to open quickly, etc.?
    We have already used previous suggestions such as Tools/Optimise Repository and purging of the audit tables.
    Another question:
    Is it possible to run the OWB client (i.e. open/amend/deploy/run mappings etc. with the windowed interface) directly on the UNIX server (e.g. using X Windows), as opposed to on the client Windows PC?
    Many Thanks

    Hi
    There are patches with performance-related fixes in this area. If you can upgrade to 11.2.0.2, that is advised; otherwise there is a mega patch for 11.2.0.1:
    BEST --> OWB 11.2.0.2 + megapatch v3 (12874883)
    ALTERNATE --> OWB 11.2.0.1 + patch for bug 10270220: Mega Patch v2 (supersedes patch 9802120)
    Cheers
    David

  • OWB Performance

    Can someone give me pointers on how mapping execution performance can be improved? Settings at O/S level, database, mappings, etc.
    Thanks.

    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html has pointers to the specific places in the OWB documentation explaining Mapping Operating Modes, Configuration Settings, Commit settings and Partition Exchange Loading. All of these are directly related to the performance of your mapping execution.
    Nikolai Rochnik

  • CBO: OWB Dimension Performance Issue (DIMENSION_KEY=DIM_LEVEL_ID)

    Hi
    In my opinion the OWB dimensions are very useful, but sometimes there are performance issues.
    I work with OWB dimensions quite a lot, and with the big dimensions (> 100,000 rows) I often get performance problems when OWB generates the code to load (merge step) or look up these dimensions.
    OWB dimensions have a PK on DIMENSION_KEY, and level surrogate IDs which are equal to the DIMENSION_KEY if the row is an element of that level (and not a parent hierarchy element).
    I have hunted the problem down to the condition DIMENSION_KEY = (DETAIL_)LEVEL_SURROGATE_ID. OWB does that to get only the rows with (detail) level attributes.
    But it seems that the CBO isn't able to predict the cardinality correctly. The CBO always assumes that the result cardinality of that condition is 1 row. So I assume that condition is the reason for the "bad" execution plans, such as the
    "NESTED LOOPS OUTER" with the inline view with cardinality = 1.
    Example:
    SELECT COUNT(*) FROM DIM_KONTO_TAB WHERE DIMENSION_KEY = KONTO_ID;
    -- 2506194
    Explain plan for:
    SELECT DIMENSION_KEY, KONTO_ID
    FROM DIM_KONTO_TAB WHERE DIMENSION_KEY = KONTO_ID;
    | Id | Operation                  | Name          | Rows | Bytes | Cost (%CPU) | Time     | Pstart | Pstop |
    |  0 | SELECT STATEMENT           |               |    1 |    12 |   12568 (3) | 00:00:01 |        |       |
    |  1 |  PARTITION HASH ALL        |               |    1 |    12 |   12568 (3) | 00:00:01 |      1 |     8 |
    |* 2 |  TABLE ACCESS STORAGE FULL | DIM_KONTO_TAB |    1 |    12 |   12568 (3) | 00:00:01 |      1 |     8 |
    Predicate Information (identified by operation id):
    2 - storage("DIMENSION_KEY"="KONTO_ID")
        filter("DIMENSION_KEY"="KONTO_ID")
    Or: for loading an SCD2 dimension:
    |* 12 | FILTER                       |                      |      |       |          |          | Q1,01 | PCWC |
    |  13 |  NESTED LOOPS OUTER          |                      | 328K | 3792M | 3968 (2) | 00:00:01 | Q1,01 | PCWP |
    |  14 |   PX BLOCK ITERATOR          |                      |      |       |          |          | Q1,01 | PCWC |
    |  15 |    TABLE ACCESS STORAGE FULL | OWB$KONTO_STG_D35414 | 328K | 2136M |   27 (4) | 00:00:01 | Q1,01 | PCWP |
    |  16 |   VIEW                       |                      |    1 |  5294 |          |          | Q1,01 | PCWP |
    |* 17 |    TABLE ACCESS STORAGE FULL | DIM_KONTO_TAB        |    1 |   247 |  489 (2) | 00:00:01 | Q1,01 | PCWP |
    I have tried a lot:
    - statistics are gathered often, with monitoring information and (frequency) histograms on the condition columns
    - created extended statistics: DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'DIM_KONTO_TAB', '(DIMENSION_KEY, KONTO_ID)')
    - created a combined index on DIMENSION_KEY, LEVEL_SURROGATE_ID
    - read a lot
    - hinted the queries in OWB (but it seems the inline view is too complex to use a hash join)
    Next step:
    - tracing the optimizer (CBO events)
    Does someone have an idea how to help the CBO get the cardinality right?
    If you need more information, please tell me.
    Thanks a lot.
    Moritz
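    One avenue worth trying for a correlated same-table predicate like DIMENSION_KEY = KONTO_ID is dynamic sampling: per-column histograms cannot capture the correlation between two columns, but from level 4 upwards the optimizer samples tables whose predicates reference two or more columns of the same table (a sketch; the level is indicative only):
    SELECT /*+ dynamic_sampling(d 4) */ dimension_key, konto_id
    FROM   dim_konto_tab d
    WHERE  dimension_key = konto_id;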

    Hi Patrick,
    For a relational dimension, these values must be unique within the level. It is not required to be a numeric ID (although a numeric ID best follows surrogate-key best practice).
    If you use the same sequence for the whole dimension, you have ensured that each entry in the entire dimension is unique, which means that you can move your data as-is into OLAP solutions. We will do this as well in the next major release.
    Hope that helps,
    Jean-Pierre

  • OWB 10.2 Dimension 'remove' performance

    Has anyone had much luck with using the "Remove" option against dimensions for processing deletions? I am using relationally implemented dimensions and cubes.
    My first attempt failed because of fact table constraints. I then tried to do a "remove" against the cube with the business key of one of the dimensions. The mapping ran and did nothing at all - apparently, to delete from a fact table, you have to provide every single dimension key, which is annoying.
    I then decided to change the FK options to CASCADE (which cannot be done in OWB; I had to alter the constraints outside the tool). This seems to work, but performance is somewhat awful.
    The deletion code generated against the dimension is weird. It compares for null values as well as for the key that I specified, which stops the optimizer from using indexes well. I created a compound index to help a little, but it is still slow.
    Anyone have better experiences or advice?
    Thanks in advance,
    Mike

    Did you try creating the following indexes on your repository schema?
    create index wb_rt_idx_ao_ou on wb_rt_audit_objects (object_uoid) tablespace &tndex.;
    create index wb_rt_idx_ao_pa on wb_rt_audit_objects (parent_audit_object_id) tablespace &tndex.;
    Regards
    Matthias
