OWB Performance

Can someone give me pointers on how mapping execution performance can be improved? Settings at the O/S level, database level, mappings, etc.
Thanks.

http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html has pointers to the specific places in the OWB documentation explaining Mapping Operating Modes, Configuration Settings, Commit Settings, and Partition Exchange Loading. All of these are directly related to the performance of your mapping executions.
Nikolai Rochnik

Similar Messages

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap and Jeff Holt's book "Optimizing Oracle Performance" to profile and performance-tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile, test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if the mapping runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then to check that indexes etc. are being used. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
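    For reference, the extended trace in question is switched on and off at the session level; a minimal sketch (level 12 captures both bind values and wait events):
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
    -- ... run the mapping ...
    ALTER SESSION SET EVENTS '10046 trace name context off';
    -- the raw trace file lands in user_dump_dest and can be profiled with tkprof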
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that everything compiles and runs.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often the mapping executes quickly when built against development data, yet problems occur when it is run against the full dataset. The mapping is built "in isolation" from its effect on the database, and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimizing Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings, at both session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue: this is only relevant for mappings that are set-based, and what about pre- and post-mapping triggers?)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? Reporting? Exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - Identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - Put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported (see the STATSPACK sketch after this list)
    - Incorporate any existing "performance best practices" for OWB development
    - Define a lightweight regime for unit testing (as per agile methodologies), a way of automating it (utPLSQL?), and a way of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
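    To make the STATSPACK item above concrete, a minimal sketch (assuming STATSPACK has been installed with spcreate.sql; snapshot level 7 adds segment-level statistics):
    EXECUTE statspack.snap(i_snap_level => 7);  -- snapshot before the ETL batch
    -- ... run the process flow ...
    EXECUTE statspack.snap(i_snap_level => 7);  -- snapshot after the batch
    -- then run ?/rdbms/admin/spreport.sql against the two snapshot ids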
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (a sketch of the on/off procedures follows this list)
    - For process flows, something that does the same at the start and end of the process. Issue: how might this conflict with mapping-level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, and have an exception report that tells you when a mapping execution time varies by a certain amount
    - Get the standard set of preferred initialisation parameters for a DW and use these as the starting point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - Identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc.) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisors are available with 10g
    - Investigate the effect of system statistics and come up with recommendations.
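    As a starting point for the pre-/post-mapping trigger idea in the first item, a minimal sketch; the procedure names and the choice of level 12 are assumptions, not a settled design:
    CREATE OR REPLACE PROCEDURE trace_on (p_mapping_name IN VARCHAR2) IS
    BEGIN
      -- tag the trace file so it is easy to find in user_dump_dest
      EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = ''' || p_mapping_name || '''';
      -- level 12 captures bind values and wait events
      EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 12''';
    END trace_on;
    /
    CREATE OR REPLACE PROCEDURE trace_off IS
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 trace name context off''';
    END trace_off;
    /
    These could be registered as OWB transformations and called from pre- and post-mapping process operators.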
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing, has anyone tried using SQL trace and TKPROF to profile mappings and process flows, or used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up a project?
    If you have any feedback, add it to this forum posting or send it directly to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitively: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
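    (As an aside, the fix for that particular monstrosity is simply an index on the lookup key; a one-line sketch with hypothetical names:)
    -- without this, every lookup probe scans dim_customer in full
    CREATE INDEX dim_customer_src_idx ON dim_customer (source_key);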
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it before.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • OWB Performance Tuning

    Hi Every body,
    I searched for OWB performance tuning guidelines for OWB 11gR2.
    1) The posted link (http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf) is not pulling up the desired white paper; it points to the Oracle OWB resource page, and I did not find any links related to performance tuning there. Any ideas?
    2) I reviewed https://blogs.oracle.com/warehousebuilder/entry/performance_tuning_mappings
    Performance tuning mappings By David Allan
    The links in the blog, (a) "There are reports in the utility exchange (see here)" and (b) "There is a viewlet describing some of this here", are not working. Could you post the working links?
    Regards
    Ram Iyer

    Hi Ram
    The blog links should be fixed now, let me know if not. The blog has been rehosted a zillion times and each time stuff is broken in the migration - sound familiar?
    Cheers
    David

  • OWB Performance Whitepaper on OTN

    Some people were asking for OWB performance tips. Please check: http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    Regards:
    Igor

    Hi there,
    thanks for reporting this glitch; will take care of this shortly, Peter

  • OWB Performance Bottleneck

    Is there any session log produced by an OWB mapping execution, other than the results visible in the OWB Runtime Audit Browser?
    Suppose the mapping is doing some hash join which is consuming too much time, and I would like to see which tables are being joined at that instant. This would help me identify the exact area of the problem in the mapping. Does OWB provide a session log which can help me get that information, or is there any other place where I can get some information about the operation that is causing a performance bottleneck?
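    (One general Oracle diagnostic sketch, not OWB-specific: while the mapping is running, long operations such as hash joins and full scans usually surface in v$session_longops:)
    SELECT sid, opname, target, sofar, totalwork, time_remaining
    FROM   v$session_longops
    WHERE  sofar < totalwork;
    -- joining sid back to v$session and v$sql shows the statement and tables involved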
    regards
    -AP

    Thanks for all your suggestions. The mapping was using a join between some 4-5 tables, and I think this was where the mapping was getting stuck during execution in Set Based mode. Moreover, the mapping loads some 70 million records into the target table. Perhaps, loading such a huge volume of data in set-based mode, with a massive join at the beginning, the mapping was bound to get stuck somewhere.
    The solution that came up was to create a table with the join condition and use that table as input to the mapping. This gets rid of the joiner at the very beginning and also lets the mapping run in Row Based (Target Only) mode. The data (70 million records) got loaded in some 4 hours.
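    (A sketch of that staging step; all table and column names here are made up:)
    CREATE TABLE stg_joined NOLOGGING AS
    SELECT a.key_col, a.amount, b.attribute_col
    FROM   big_table_a a, big_table_b b
    WHERE  b.key_col = a.key_col;
    -- the mapping then reads stg_joined, so the joiner disappears and the
    -- mapping can run in Row Based (Target Only) mode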
    regards
    -AP

  • OWB performance with repository browser

    Hi,
    I just want to know: is there some way in the Repository Browser or OWB to detect how much time was spent executing each record in a query?
    For example, if 20 records are processed in a single mapping, I want to know how much time the mapping needs overall and the timing for each record.
    Anyone has suggestion?
    Thanks in advance,
    Davis

    I'm also not quite sure how you could do this in a way that guarantees accurate measurement.
    One possible approach is to append a timestamp column to the target table and populate it with SYSTIMESTAMP (for the sub-second granularity) in the mapping. Then, after the load, you could sort all your records by this column, find chunks that loaded together, and compare load times.
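    (Sketched with hypothetical names; LAG is a standard analytic function, so the gap between consecutive rows falls out of a single query:)
    ALTER TABLE target_t ADD (load_ts TIMESTAMP);
    -- map SYSTIMESTAMP to LOAD_TS in the mapping, then after the load:
    SELECT load_ts,
           load_ts - LAG(load_ts) OVER (ORDER BY load_ts) AS gap_from_prev_row
    FROM   target_t
    ORDER  BY load_ts;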
    But even this would be like swatting a mosquito with a bat, and may not even be fully accurate itself under certain loading scenarios (not to mention the fact that technically this would make the mapping run slower than it otherwise would, since you've added a whole new column populated by a function call!)
    -J

  • OWB Performance Issue

    Hi
    I have a performance issue with OWB.
    OWB 9.0.4.8.21
    Oracle 9.2.0.1.0
    I designed some mappings with business rules/transformations on a Windows system (AMD Athlon, 1 x 1.8 GHz CPU, 1 GB RAM). When I run these mappings, my CPU usage goes to 100% while my RAM usage stays at 70%.
    My mapping loads data from a flat file to tables using external tables. As my CPU usage is 100%, the upload time statistics are not reliable, so I transferred the .mdl from Windows to a higher-end Unix machine (HP-UX 11.00, 6 x 450 MHz CPU, 6 GB RAM, OWB 9.2 (Unix), Oracle 9.2.0.1.0).
    Logically my mappings should run faster on Unix, but they are taking 3 times longer than on the Windows system. Here also my CPU usage goes to 100% while RAM usage stays at 30%.
    One more observation on the Unix machine is that the Oracle process which runs my mapping uses only 1 CPU while the other CPUs are not utilized.
    Is there a way I can fork/thread the Oracle process which runs the mapping so that it uses all the CPUs instead of only 1?
    Do I need to make some changes in my mapping configuration/properties or in Oracle (init.ora) to improve the performance of my upload?
    Thanks in advance.
    Manoj

    Manoj,
    If you use a Process Flow that includes several mappings, then use of the FORK activity ensures parallel execution of multiple mappings. See more on this in the OWB 9.2 User Guide, page 10-24, "FORK".
    If you wish to have a single mapping execute on multiple CPUs in parallel, then it all depends on the nature of the mapping and corresponding configuration options set:
    - You mentioned using External Tables. External tables have configuration options "Parallel Access Mode" and "Parallel Access Drivers" that control parallelism. See more on this in OWB 9.2 User Guide, page 5-18 "Parallel".
    Other cases:
    - If the mapping is run in row-based mode with the Parallel Row Code option set to 'True'. This takes advantage of the database's table function feature. See more on this in the OWB 9.2 User Guide, page 11-7, "Parallel Row Code".
    - If the mapping inserts into multiple tables using a Splitter operator with the Optimized Code option set to 'True'. This takes advantage of the database's multi-table insert feature. See more on this in the OWB 9.2 User Guide, page 11-8, "Optimized Code".
    The fact that a more powerful Unix server takes much longer to execute the same mappings suggests a problem with database configuration. The parallelism is usually most optimally achieved with PARALLEL_AUTOMATIC_TUNING set to 'true'. Beyond that, many manual options of configuring the database for parallelism are available. See "Oracle9i Data Warehousing Guide", Chapter 21 "Using Parallel Execution".
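    (A minimal sketch of those settings; PARALLEL_AUTOMATIC_TUNING is a static parameter, hence the SPFILE scope, and the table name and degree of parallelism are assumptions:)
    ALTER SYSTEM SET parallel_automatic_tuning = TRUE SCOPE = SPFILE;  -- takes effect after restart
    ALTER TABLE sales_fact PARALLEL 4;   -- allow parallel scans/DML on a hypothetical target
    ALTER SESSION ENABLE PARALLEL DML;   -- required before parallel inserts/updates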
    Nikolai

  • OWB Performance issue (mapping execution always takes min 60 sec)

    Hi All,
    For any OWB mapping we execute in one of our environments, the execution seems to hang for some time around the "Attempting to create native operator 'class.RuntimePlatform.0.NativeExecution.PLSQL'" statement. The log file shows a constant gap of 30 sec before the <map>.main() function executes. The data extraction is very small, something like 10-1000 records.
    Actions taken: increased the SGA pool size at the DB level to allow more resources,
    and changed -Xms64M -Xmx256M to -Xms335M -Xmx440M.
    But no help.
    An extract from the OWB log follows.
    2006/03/15-09:29:09-WST [1E0BF3BF] Initializing execution for auditId= 28339 parentAuditId= null topLevelAuditId=28339 taskName=XXIF_OUT_CSV_TRANS
    2006/03/15-09:29:09-WST [1E0BF3BF] Attempting to create adapter 'class.RuntimePlatform.0.NativeExecution'
    2006/03/15-09:29:09-WST [1E0BF3BF] Attempting to create native operator 'class.RuntimePlatform.0.NativeExecution.PLSQL'
    2006/03/15-09:29:39-WST [1E0D73BF] PLSQL callspec: declare l_env wb_rt_mapaudit.wb_rt_name_values; l_IN_BATCH_ID null........
    Kindly note the difference of 30 sec between 'create native operator' and the actual execution of the PL/SQL code.
    The same set of mappings works fine (5-15 secs) in our Dev environment but takes additional time (on the order of 1-3 mins) in the Test environment, and the execution of the mappings does not go into parallel mode (is this expected behaviour?). If we have 10 separate executions of the mapping, it takes 30 mins to complete in the TEST environment compared to 3 mins in DEV.
    The noticeable difference between these two environments is that the mappings were created using the 10.1.0.2.0 client and deployed to a 10.1.0.1.0 repository in the DEV environment, but the TEST environment uses a 10.1.0.4.0 repository.
    When checking the Audit Browser, I can see the total elapsed time is 61 sec but the actual mapping execution time is only 1 sec.
    Are there any configuration settings which could cause this delay?
    Any pointers on this will be of great help.
    Regards,
    njain

    I am having exactly the same issue as you have described here, i.e. my mappings take some time to initialize before they run. Did you or anyone else find a solution to this problem?

  • OWB Slow Response at logon and General Client Performance

    Hi all, I wonder if anyone can confirm whether what I am seeing is normal. I am relatively new to OWB; we are running with 10g R2, but I have noticed very poor response from a client perspective, particularly at logon. It takes around six minutes for the Design Center logon box to appear after you have clicked on the icon, which seems very slow. I have also noticed that performance degrades within the client the longer you use the product in one session.
    Question: is what I am seeing just normal performance, or can OWB performance be improved by some form of configuration, either client or Java settings? My PC spec is detailed below; it has 2 GB of memory, which was recommended by Oracle.
    OS Name: Microsoft Windows XP Professional
    Version: 5.1.2600 Service Pack 2 Build 2600
    Total Physical Memory: 2,048.50 MB
    Available Physical Memory: 1.02 GB
    Total Virtual Memory: 2.00 GB
    Available Virtual Memory: 1.89 GB
    Page File Space: 2.82 GB

    Hey all, thanks for your responses. I think you might have identified the problem with the antivirus. I rebooted the machine and just started OWB on its own; McAfee was hitting the CPU hard for five minutes and then the logon box suddenly appeared, so it looks like that's the issue. Not sure I will be able to get that changed, as it is a centrally administered global setting for all machines, I think. At least that explains why it took so long.
    Many thanks
    Will double check the DB side of things as well

  • Problem in OWB: connecting to a non-Oracle database

    Good day,
    I'm working on a Windows 32-bit machine; I have an Oracle database and OWB 11.2.0.1 installed on it.
    I'm trying to make OWB connect to a Sybase source database on another server, using ODBC.
    I've tested the connection from SQL*Plus and it's working perfectly,
    but when I test the connection from OWB locations, it gives me this error:
    "ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    [OpenLink][ODBC][SQL Server]Statement(s) could not be prepared (0)
    {42000}[OpenLink][ODBC][SQL Server]"DUAL" not found. Specify owner.objectname
    or use sp_help to check whether the object exists (sp_help may produce lots of
    output).
    (208) {42S02,NativeErr = 208}
    My questions are:
    - Why would OWB check for a DUAL table on a Sybase database? Is that a bug in OWB?
    - Can I bypass this step and just import metadata from the Sybase source database? (I tried importing, but OWB performs the same test before importing.)
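    (The error arises because the gateway's connection test selects from DUAL, which has no counterpart on Sybase. A commonly cited workaround, offered here as an assumption rather than a fix verified against this OpenLink setup, is to create a one-row DUAL table on the Sybase side:)
    CREATE TABLE DUAL (DUMMY VARCHAR(1));
    INSERT INTO DUAL VALUES ('X');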

    Hi Chris,
    I haven't tried it, and these are just some of my thoughts, but I think you would need to load the JDBC driver for the other database into the Oracle database (using the "loadjava" tool, of course). Then your Java stored procedure should be able to instantiate the third-party JDBC driver and obtain a connection to the other database.
    Hope this helps.
    Good Luck,
    Avi.

  • OWB or SQL Loader?

    Is there some way to create a SQL*Loader control file using the OWB tool?

    Yes, OWB creates the entire ETL process, including SQL*Loader scripts, temporary tables and transformations, but the answer is pretty long; creating SQL*Loader scripts is just one of many functions OWB performs.
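    (For a feel of what such a script looks like, here is a minimal hand-written control file of the same general shape; it is illustrative only, not actual OWB output, and all names are made up:)
    -- customers.ctl
    LOAD DATA
    INFILE 'customers.dat'
    APPEND
    INTO TABLE stg_customers
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (customer_id, customer_name, created_date DATE "YYYY-MM-DD")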
    As previously stated, your thread will not have much echo here, as this is an RDBMS-specialized forum. By the way, I suggest you narrow the scope of your question; even in the proper forum, the answer as asked would be too broad.

  • OWB connection to Teradata through ODBC driver

    Hi:
    I am seeking answers about my OWB connection settings. I have installed OWB 10.2.0.4 on a Windows 2003 server; the Oracle 10g database is on the same server. Right now, I want to configure an ODBC connection from OWB to the source database (all on Teradata) on my Windows 2003 server. However, in Control Panel / Administrative Tools / Data Sources (ODBC), there is no Teradata driver in the ODBC administrator (it is available on Windows XP). How can I solve this problem?
    OWB performs its ETL workflow through the OWB server and the source database, so the connection between the OWB server and the source database must be set up. If there is no Teradata ODBC driver, is there any other way to work around this, or should I patch the Teradata ODBC driver onto Windows 2003 Server? Please give me your ideas and thoughts. Thanks in advance.

    Hi,
    look at this thread
    SQLServer access from AIX Warehouse builder
    What is the problem with installing the Teradata ODBC driver on your Windows 2003 server?
    Regards,
    Oleg

  • How to trace erroneous records in a mapping or a process flow?

    Hi All,
    I read the following document by Rittman.
    http://www.rittman.net/work_stuff/tracing_owb_mappings_pt1.htm
    I am using Oracle Warehouse Builder 10G R1.
    I feel it may be relevant to my problem. My problem scenario is as follows.
    Here I would like to know how to trace the records which are valid as per business rules, but not counted in the output due to some functional errors, as follows.
    For example a variable contains value Region = "R01".
    So as per the rule, we need to retrieve the number 01.
    I implemented this as to_number(substr(Region, 2)).
    Unfortunately, in one record the field data was "RRR".
    As per the rule, applying that logic to this value returns an error/warning,
    so this record is not counted in the output.
    I would like to trace these types of records into a table or a file while executing the mapping.
    Is it possible using Oracle Warehouse Builder or Oracle?
    When we are dealing with an external table, we can create a log or bad file, which holds all bad records by default. Is there any way to do this in a mapping?
    Has anyone implemented this kind of tracing file containing all the bad records?
    Any suggestions are welcome.
    Thank you,
    Regards,
    Gowtham Sen.

    Hi,
    I have never used this before, but I know that in the mapping configuration for table operators there is a Constraint Management property where you can specify an exceptions table name. Alternatively, you could add an additional field to your target table, add a CASE expression, and mark each record as valid or invalid (a sketch of such an expression follows). You can then select which records are invalid.
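    Sketched out with hypothetical names, and using a pre-10g TRANSLATE trick so it does not depend on regular expressions, such an expression could look like:
    CASE
      WHEN TRIM(TRANSLATE(SUBSTR(region, 2), '0123456789', '          ')) IS NULL
        THEN 'VALID'     -- e.g. 'R01': everything after position 1 is numeric
      ELSE 'INVALID'     -- e.g. 'RRR': would raise ORA-01722 in TO_NUMBER
    END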
    Take a look at this thread: Some Thoughts On An OWB Performance/Testing Framework
    Cheers,
    Ricardo

  • Information About Default Operating Mode

    Hello,
    Can someone help me with good documentation explaining what the SET BASED, ROW BASED, and ROW BASED (target only) modes do? With diagrams it would be excellent.
    I know a little about the differences between them, but I would like to learn about them in detail.
    I ask because we have to make our programs as optimized as possible (in execution time), and the operating mode is very important.

    When a mapping runs in a set-based mode, the rows are processed as a single dataset, with a single SQL statement that is either completed successfully for all the rows or fails if there is a problem with even one row. This is generally the fastest way of processing data.
    When a mapping runs in a row-based mode, the data is extracted from the source one row at a time, processed and inserted into the target. If there is a problem with some rows, this is logged and the processing continues.
    Needless to say, the set-based operating mode is generally faster, while the row-based operating mode gives more control to the user, both in terms of logging row-by-row processing information and in terms of control over the management of incorrect data.
    The default operating mode is Set Based Fail Over to Row Based, in which Warehouse Builder will attempt to use the better-performing set-based operating mode but will fall back to the more granular row-based mode if data errors are encountered. This mode gives the user the speed of set-based processing while allowing a revert to row-based mode when an unexpected error occurs.
    The operating modes Row Based (Target Only) and Set Based Fail Over to Row Based (Target Only) are a compromise between the set-based and row-based modes. The 'target only' modes use a cursor to rapidly extract all the data from the source, but then use a row-based mode to insert data into the target, where errors are more likely to occur. These modes should be used if there is a need for the fast, set-based mode to extract and transform the data as well as a need to extensively monitor data errors. A simplified illustration of the set-based/row-based contrast follows.
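    (This is not actual OWB-generated code, which also writes to the runtime audit tables; it is a simplified sketch of the two shapes:)
    -- set-based: one statement over the whole dataset
    INSERT INTO target_t (id, amount)
    SELECT id, amount FROM source_t;
    -- row-based: rows processed individually, errors logged and skipped
    BEGIN
      FOR r IN (SELECT id, amount FROM source_t) LOOP
        BEGIN
          INSERT INTO target_t (id, amount) VALUES (r.id, r.amount);
        EXCEPTION
          WHEN OTHERS THEN
            NULL;  -- real generated code records the error and carries on
        END;
      END LOOP;
    END;
    /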
    I am currently writing a paper on OWB performance, and if you are interested, drop me an e-mail ([email protected]) and I will send you a draft version of it.
    Regards:
    Igor

  • [SOLVED] Doing an update

    Hi,
    I'd like OWB to perform a simple update like this:
    UPDATE my_table
    SET col_tag = SYSDATE;
    my_table has no constraints (primary key, foreign key...) at all.
    Even if I use a FILTER operator or a constant, I always get the same error: VLD-2750 No matching criteria...
    Any advice would be appreciated.
    Stephan

    Hi Stephan,
    OWB always generates a MERGE statement (even if loading type is UPDATE), so you need a workaround:
    Target table loading type: UPDATE
    Target table match by constraint: no constraint
    Make sure that for every attribute in the target table, load column when updating is set to true and match column for updating is set to false. Then (in the mapping editor) add another column to the target table, e.g. MATCH_COL. Set load column when updating and inserting to false, but match column when updating to true. Set the bound name to some existing column that contains no null values, e.g. A.
    Add the table a second time to your mapping. Connect attribute A with MATCH_COL. Use a constant to set the SYSDATE.
    Not a very straightforward way to do a simple update. Maybe it is simpler to do the update in a procedure, as sketched below...
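    (A minimal sketch of that procedure alternative; the procedure name is made up:)
    CREATE OR REPLACE PROCEDURE tag_my_table IS
    BEGIN
      UPDATE my_table SET col_tag = SYSDATE;  -- the plain update OWB will not generate
    END tag_my_table;
    /
    -- import it as an OWB transformation and call it from a process flow
    -- or a pre/post-mapping process operator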
    Regards,
    Carsten.
