Cost of OWB

Could somebody please let me know the cost of OWB 9.2 / OWB 10g R1 / OWB 10g R2? I understand it comes free along with the Oracle server, but what is the cost of OWB separately?
Regards,
Anirban.

Hi,
you cannot license OWB without the Enterprise Edition of the database, and some functions of OWB (such as data quality, lineage, pluggable mappings, and some adapters) must be licensed separately (see http://www.oracle.com/corporate/pricing/technology-price-list.pdf).
Regards
Detlef

Similar Messages

  • OWB Maintenance Cost

    Hi,
    We all know the licensing cost of OWB is zero, as it comes free with the Oracle DB...
    Can anyone help me in calculating the maintenance cost of OWB? What all needs to be considered?
    Thanks

    Hi
    Not all of OWB is free, see the following for details:
    http://www.oracle.com/technology/products/warehouse/htdocs/licensing.html
    There is a free component, but there are also paid options, including ODIEE (which was Enterprise ETL), Application Adapters, Data Profiling and Quality, and Data Watch and Repair.
    I'd be interested to hear real-world comments on maintenance.
    Cheers
    David

  • OWB License Cost

    Hi All,
    Can anybody let me know the license cost for OWB 11g R2 and 10g R2, other than the built-in part?
    Thanks & Regards,
    Ankit Rana

    Hi Ankit
    OWB enterprise ETL is licensed under the Data Integrator Enterprise Edition option which can be found in the Oracle Technology price list;
    http://www.oracle.com/us/corporate/pricing/price-lists/index.html
    There was a blog post from Antonio covering the licensing change back in 2009 below;
    http://blogs.oracle.com/warehousebuilder/2009/02/oracle_data_integrator_enterprise_edition_and_the_future_of_warehouse_builder.html
    Cheers
    David

  • OWB job taking too much time to execute

    While creating a job in OWB, I am using three tables, a joiner and an aggregator, which are all joined through another joiner to load into the final table. The output is correct, but the generated SQL query is very complex, with many sub-queries, so it takes a long time to execute. Please help me reduce the cost.
    -KC

    It depends on what kind of code it generates at each stage. The first step would be to collect stats for all the tables used and check the generated SQL using EXPLAIN PLAN. See which sub-query or inline view creates the most cost.
    Generate SQL at various stages and see if you can achieve the same with a different operator.
    The other option would be to pass HINTS to the selected tables.
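    For example, the first two steps might look like this (MY_TARGET_TBL is a placeholder, and you would paste the generated mapping SQL in place of the COUNT query):
    BEGIN
      -- gather fresh optimizer statistics so the CBO can cost the generated SQL realistically
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'MY_TARGET_TBL',
        cascade => TRUE);   -- include index statistics
    END;
    /
    -- then check the plan of the statement OWB generated for the mapping
    EXPLAIN PLAN FOR
    SELECT COUNT(*) FROM my_target_tbl;   -- substitute the generated mapping SQL here
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);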
    - K

  • New licensing for OWB 10g R2 (Paris)

    Hi,
    Does anyone know how much the new licenses (per DB server CPU) for OWB 10g R2 (Paris) cost?
    Is it correct that the features for modeling SCD Type 1 and Type 2 are not included in the basic "Core ETL Features"?
    If I'm using the SCD Type 2 features to develop the OWB mappings, do I also need to license the "Enterprise ETL" option for the production server?
    Related to the previous question: for the production DB where I only need the runtime part of the repository is it required to license any options?
    Regards
    Maurice
    PS: 2 links related to these questions:
    http://www.rittman.net/archives/2006/05/owb_10g_release_2_now_availabl.html
    http://www.oracle.com/technology/products/warehouse/htdocs/owb_10gr2_faq.htm#HowisOWBPackaged

    Maurice, not sure what to say. If you've been using OWB, then you don't lose any functionality going to the new version, and it will be "free". You simply won't get any of the new functionality (all of the old functionality is included in the "core" features)
    However, as I said in the other post, I hope Oracle reconsiders the SCD 1 / 2 / 3 licensing. To me, SCD Type 2 is not "enterprise level" functionality - that is base-level functionality for ANY data mart or data warehouse. I have no problems with paying for options the other ETL vendors are charging for (data quality comes to mind...), but if we deploy a large DW on a 32-processor box, paying for the Oracle licenses AND an additional $300,000+ for OWB functionality just to simplify SCD Type 2 seems WAY overkill. Actually, in our project, we had pretty much settled on Oracle as the RDBMS for our DW, but if OWB is up in the air, we will probably open this up for an RFQ.
    Scott

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap and Jeff Holt's book "Optimizing Oracle Performance" to profile and performance-tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile, test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if the mapping runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then to check that indexes etc. are being used properly. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can later be used to replay the SQL commands, review the explain plans that relate to the SQL and the details of what wait events occurred during execution, and finally provides a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications, as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst being sure that everything still compiles and runs.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we know will happen is that after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL? - see the sketch after this list) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
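    As a taster on the utPLSQL point, a test package for a mapping might look roughly like this (the package, target table and expected count are invented for illustration - see the utPLSQL docs for the naming conventions):
    CREATE OR REPLACE PACKAGE ut_load_customers AS
      PROCEDURE ut_setup;
      PROCEDURE ut_teardown;
      PROCEDURE ut_rowcount;
    END ut_load_customers;
    /
    CREATE OR REPLACE PACKAGE BODY ut_load_customers AS
      PROCEDURE ut_setup IS
      BEGIN
        NULL;  -- seed a known set of source rows and execute the mapping here
      END;
      PROCEDURE ut_teardown IS
      BEGIN
        NULL;  -- remove the test data again
      END;
      -- one logical check: did the mapping load exactly the rows we seeded?
      PROCEDURE ut_rowcount IS
        l_cnt PLS_INTEGER;
      BEGIN
        SELECT COUNT(*) INTO l_cnt FROM customers_dim;
        utassert.eq('all seeded rows were loaded', l_cnt, 10);
      END;
    END ut_load_customers;
    /
    -- and would be run with: EXEC utplsql.test('load_customers')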
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables (see the sketch after this list).
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
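    To make the first point concrete, the pre-/post-mapping procedures could be as simple as this (procedure names are mine; level 8 adds wait events to the trace, level 12 adds bind values as well):
    CREATE OR REPLACE PROCEDURE mapping_trace_on (p_mapping IN VARCHAR2) IS
    BEGIN
      -- tag the trace file so it is easy to find in user_dump_dest
      EXECUTE IMMEDIATE
        'ALTER SESSION SET tracefile_identifier = ''' || p_mapping || '''';
      -- switch on extended SQL trace (event 10046) including wait events
      EXECUTE IMMEDIATE
        'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 8''';
    END;
    /
    CREATE OR REPLACE PROCEDURE mapping_trace_off IS
    BEGIN
      EXECUTE IMMEDIATE
        'ALTER SESSION SET EVENTS ''10046 trace name context off''';
    END;
    /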
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, do you have any existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it back then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Can I create a BI Beans compliant cube using OWB?

    Can I create a cube that I can browse using BI Beans through OWB 9.0.4, or are there additional steps that I need to take using other tools such as Enterprise Manager?
    Are there any known incompatibilities between OWB 9.0.4 and BI Beans 9.0.3.1?
    I will also pose this question in the BI Beans forum.
    Thanks for any replies.
    Cor

    Hi,
    I am trying to build an analytic workspace using the OWB (9.0.4.8) Transfer Bridge and I got a similar error.
    None of the view/MV SQL scripts were generated, and the analytic workspace was not created either.
    FYI.
    **! Transfer logging started at Wed May 14 18:07:41 EDT 2003 !**
    OWB Bridge processed arguments
    Default local= en_US
    Exporting project:OM_SAMPLE
    initializing project:OM_SAMPLE
    Initializing module :WH
    Exporting cube:SALES
    Exporting dimension:CHANNELS
    Exporting dimension:COUNTRIES
    Exporting dimension:CUSTOMERS
    Exporting dimension:PRODUCTS
    Exporting mappings
    Exporting table:CHANNELS
    Exporting table:COUNTRIES
    Exporting table:CUSTOMERS
    Exporting table:PRODUCTS
    Exporting table:SALES
    Exporting datatypes
    Exporting project OM_SAMPLE complete.
    setting parameter: olapimp.deploytoaw = Y
    setting parameter: olapimp.awname = OWBTARDEMO
    setting parameter: olapimp.awobjprefix = OWBTAR_
    setting parameter: olapimp.awuser =
    setting parameter: olapimp.createviews = Y
    setting parameter: olapimp.viewprefix = OWBTAR_
    setting parameter: olapimp.viewaccesstype = OLAP
    setting parameter: olapimp.creatematviews = Y
    setting parameter: olapimp.viewscriptdir = /opt/oracle
    setting parameter: olapimp.deploy = N
    setting parameter: olapimp.username = OLAPSYS
    setting parameter: olapimp.password = manager
    setting parameter: olapimp.host = 10.215.79.139
    setting parameter: olapimp.port = 1521
    setting parameter: olapimp.sid = INDEXDB
    setting parameter: olapimp.inputfilename = C:\TEMP\bridges\null-nullMy_Metadata_Transfer1052950061353.XMI
    setting parameter: olapimp.outputfilename = C:\Panneer\owbtardemo.sql
    Loading Metadata
    Loading XMI input file
    processing dim: CHANNELS
    processing level: CHANNELin dimension CHANNELS
    processing level attribute use: CHL_ID in level CHANNEL for level attribute ID
    processing level attribute : ID in level CHANNEL
    processing level attribute use: CHL_LLABEL in level CHANNEL for level attribute LLABEL
    processing level attribute : LLABEL in level CHANNEL
    processing level attribute use: CHL_SLABEL in level CHANNEL for level attribute SLABEL
    processing level attribute : SLABEL in level CHANNEL
    processing level: CLASSin dimension CHANNELS
    processing level attribute use: CLS_ID in level CLASS for level attribute ID
    processing level attribute : ID in level CLASS
    processing level attribute use: CLS_LLABEL in level CLASS for level attribute LLABEL
    processing level attribute : LLABEL in level CLASS
    processing level attribute use: CLS_SLABEL in level CLASS for level attribute SLABEL
    processing level attribute : SLABEL in level CLASS
    processing hierarchy: CHANNEL_HIERARCHY in dimension CHANNELS
    processing dim: COUNTRIES
    processing level: REGIONin dimension COUNTRIES
    processing level attribute use: RGN_ID in level REGION for level attribute ID
    processing level attribute : ID in level REGION
    processing level attribute use: RGN_LLABEL in level REGION for level attribute LLABEL
    processing level attribute : LLABEL in level REGION
    processing level attribute use: RGN_SLABEL in level REGION for level attribute SLABEL
    processing level attribute : SLABEL in level REGION
    processing level: COUNTRYin dimension COUNTRIES
    processing level attribute use: CTY_ID in level COUNTRY for level attribute ID
    processing level attribute : ID in level COUNTRY
    processing level attribute use: CTY_LLABEL in level COUNTRY for level attribute LLABEL
    processing level attribute : LLABEL in level COUNTRY
    processing level attribute use: CTY_SLABEL in level COUNTRY for level attribute SLABEL
    processing level attribute : SLABEL in level COUNTRY
    processing hierarchy: COUNTRY_HIERARCHY in dimension COUNTRIES
    processing dim: CUSTOMERS
    processing level: CUSTOMERin dimension CUSTOMERS
    processing level attribute use: CTR_CREDIT_LIMIT in level CUSTOMER for level attribute CREDIT_LIMIT
    processing level attribute : CREDIT_LIMIT in level CUSTOMER
    processing level attribute use: CTR_EMAIL in level CUSTOMER for level attribute EMAIL
    processing level attribute : EMAIL in level CUSTOMER
    processing level attribute use: CTR_ID in level CUSTOMER for level attribute ID
    processing level attribute : ID in level CUSTOMER
    processing level attribute use: CTR_NAME in level CUSTOMER for level attribute NAME
    processing level attribute : NAME in level CUSTOMER
    processing dim: PRODUCTS
    processing level: PRODUCTin dimension PRODUCTS
    processing level attribute use: PDT_DESCRIPTION in level PRODUCT for level attribute DESCRIPTION
    processing level attribute : DESCRIPTION in level PRODUCT
    processing level attribute use: PDT_ID in level PRODUCT for level attribute ID
    processing level attribute : ID in level PRODUCT
    processing level attribute use: PDT_LIST_PRICE in level PRODUCT for level attribute LIST_PRICE
    processing level attribute : LIST_PRICE in level PRODUCT
    processing level attribute use: PDT_MIN_PRICE in level PRODUCT for level attribute MIN_PRICE
    processing level attribute : MIN_PRICE in level PRODUCT
    processing level attribute use: PDT_NAME in level PRODUCT for level attribute NAME
    processing level attribute : NAME in level PRODUCT
    processing level: CATEGORYin dimension PRODUCTS
    processing level attribute use: CTY_ID in level CATEGORY for level attribute ID
    processing level attribute : ID in level CATEGORY
    processing level attribute use: CTY_LLABEL in level CATEGORY for level attribute LLABEL
    processing level attribute : LLABEL in level CATEGORY
    processing level attribute use: CTY_SLABEL in level CATEGORY for level attribute SLABEL
    processing level attribute : SLABEL in level CATEGORY
    processing hierarchy: PRODUCT_HIERARCHY in dimension PRODUCTS
    processing cube: SALES
    processing classification type is := Warehouse Builder Business Area
    processing catalog name := SALESCOLLECTION ,and description is := null
    processing catalog entry element name := SALES
    processing Cube
    processing catalog entity cube := SALES
    processing measure := COSTS , in a cube := SALES
    processing measure := SALES , in a cube := SALES
    processing catalog entry element name := CHANNELS
    processing catalog entry element name := COUNTRIES
    processing catalog entry element name := CUSTOMERS
    processing catalog entry element name := PRODUCTS
    processing catalog entry element name := CHANNELS
    Class Name CHANNELS is TableImpl@405ffd not supported
    processing catalog entry element name := COUNTRIES
    Class Name COUNTRIES is TableImpl@5e1b8a not supported
    processing catalog entry element name := CUSTOMERS
    Class Name CUSTOMERS is TableImpl@6232b5 not supported
    processing catalog entry element name := PRODUCTS
    Class Name PRODUCTS is TableImpl@6f144c not supported
    processing catalog entry element name := SALES
    Class Name SALES is TableImpl@14013 not supported
    processing classification type is := Dimensional Attribute Descriptor
    Classification type Dimensional Attribute Descriptor is not supported
    closing output file
    closing log stream
    **! Transfer process 2 of 2 completed with status = 0 !**
    **! Transfer logging stopped at Wed May 14 18:07:47 EDT 2003 !**
    But when I ran "select * from dba_registry", everything seemed to be valid.
    CATALOG     Oracle9i Catalog Views     9.2.0.2.0     VALID     24-APR-2003 09:39:24     SYS     SYS     DBMS_REGISTRY_SYS.VALIDATE_CATALOG
    CATPROC     Oracle9i Packages and Types     9.2.0.2.0     VALID     24-APR-2003 09:39:24     SYS     SYS     DBMS_REGISTRY_SYS.VALIDATE_CATPROC
    OWM     Oracle Workspace Manager     9.2.0.1.0     VALID     24-APR-2003 09:39:27     SYS     WMSYS     OWM_VALIDATE
    JAVAVM     JServer JAVA Virtual Machine     9.2.0.2.0     VALID     23-APR-2003 22:19:09     SYS     SYS     [NULL]
    XML     Oracle XDK for Java     9.2.0.2.0     VALID     24-APR-2003 09:39:32     SYS     SYS     XMLVALIDATE
    CATJAVA     Oracle9i Java Packages     9.2.0.2.0     VALID     24-APR-2003 09:39:32     SYS     SYS     DBMS_REGISTRY_SYS.VALIDATE_CATJAVA
    ORDIM     Oracle interMedia     9.2.0.2.0     LOADED     23-APR-2003 23:16:42     SYS     SYS     [NULL]
    SDO     Spatial     9.2.0.2.0     LOADED     23-APR-2003 23:17:06     SYS     MDSYS     [NULL]
    CONTEXT     Oracle Text     9.2.0.2.0     VALID     23-APR-2003 23:17:26     SYS     SYS     [NULL]
    XDB     Oracle XML Database     9.2.0.2.0     VALID     24-APR-2003 09:39:39     SYS     XDB     DBMS_REGXDB.VALIDATEXDB
    WK     Oracle Ultra Search     9.2.0.2.0     VALID     24-APR-2003 09:39:42     SYS     WKSYS     WK_UTIL.VALID
    OLS     Oracle Label Security     9.2.0.2.0     VALID     24-APR-2003 09:39:43     SYS     LBACSYS     LBAC_UTL.VALIDATE
    ODM     Oracle Data Mining     9.2.0.1.0     LOADED     12-MAY-2002 17:59:03     SYS     ODM     [NULL]
    APS     OLAP Analytic Workspace     9.2.0.2.0     LOADED     23-APR-2003 22:49:51     SYS     SYS     [NULL]
    XOQ     Oracle OLAP API     9.2.0.2.0     LOADED     23-APR-2003 22:51:49     SYS     SYS     [NULL]
    AMD     OLAP Catalog     9.2.0.2.0     VALID     02-MAY-2003 15:00:13     SYS     OLAPSYS     CWM2_OLAP_INSTALLER.VALIDATE_CWM2_INSTALL
    Your help is appreciated!
    Thanks
    Panneer

  • BPEL instead of OWF in OWB

    Hi
    We have implemented an 11g R1 database + OWB and are going to migrate to BPEL instead of using Oracle Workflow for our pre-11g OWB projects.
    Has anyone used BPEL to implement process flows for OWB? If so, could you provide some examples of how to do it?
    We have numerous process flows that simply run extracts from source systems, run dimension mappings and then run mappings for cubes. Any help on how to use BPEL would be appreciated, as I am completely new to BPEL.
    Regards

    Hi,
    you could already use BPEL with OWB 10.2, but it is more complex than OWF, and the licensing costs are much higher.
    I did a presentation (in German) about BPEL and OWB two years ago. Here are some findings:
    - Use a partner link in JDeveloper
    - One generic web service for all mappings
    - Use WB_RT_API_EXEC
    Parameters for 10.2:
    - BACKGROUND = 0
    - OEM_FRIENDLY = 0
    Return Values:
    - Error
    - Warning
    - Success
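    A call through that API might look roughly like this - parameter names and order are from memory, so check sqlplus_exec_template.sql in your OWB home for the exact signature:
    DECLARE
      l_result NUMBER;
    BEGIN
      -- execute a deployed mapping synchronously via the OWB runtime
      l_result := wb_rt_api_exec.run_task(
                    'MY_LOCATION',     -- deployment location (placeholder)
                    'PLSQLMAP',        -- task type: a PL/SQL mapping
                    'MAP_LOAD_SALES',  -- mapping name (placeholder)
                    NULL,              -- custom parameters (no overrides)
                    NULL,              -- system parameters (no overrides)
                    0,                 -- OEM_FRIENDLY = 0
                    0);                -- BACKGROUND = 0, i.e. wait for completion
      DBMS_OUTPUT.PUT_LINE('run_task returned: ' || l_result);
    END;
    /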
    But you can still stick with OWF, as it is not desupported for use within OWB.
    HTH
    Oliver

  • Staging Area with OWB 10.2 - necessary or not?

    Hi to all,
    I have read so much about staging areas and OWB 10.2 that I am totally confused: some documents and PowerPoints on the web say you do not need one, others say you do. The thing is, I am planning a DWH and now I am not sure whether a staging area is necessary, because the mappings do the ETL jobs internally. Most of my data sources are tables/views/MViews in a database.
    Thank you very much for any help concerning this question!
    Regards
    Thomas

    Would you prefer the answer that you MAY need one? Then again, you may just WANT one!
    For example, if you are building against a high-transaction-volume, busy 24/7 OLTP system, then you may find that you need a local snapshot in order to do a complete build with a consistent set of source data, so that all your numbers are consistent.
    Then again, you may also find that bringing over just the delta data into a local snapshot makes for a much more efficient load, rather than running against huge full remote tables if they are not well partitioned and/or indexed.
    Then again, complex joins run against a remote system may run more efficiently if you bring the data across with simple table dumps into a staging area that you can index to optimize your queries, rather than having to deal with the poor performance of complex joins over a dblink - especially if you need to perform complex joins across more than one db link to multiple source systems. How big a Cartesian product do you want bouncing around the network in that sort of scenario? Sure, maybe you can do it - but how much are you going to impact performance across the board doing things like that?
    Is the source system already stressed to the max and sitting on a vintage piece of equipment, while your shiny new DW environment is blessed with tons of resources that will make the ETL run faster by several factors if you first copy the data over locally?
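    As a trivial illustration of the delta-snapshot idea (table names, the dblink and the delta column are all made up):
    -- pull only yesterday's changes across the dblink into a local staging table
    INSERT /*+ APPEND */ INTO stg_orders
    SELECT *
    FROM   orders@oltp_link o
    WHERE  o.last_updated >= TRUNC(SYSDATE) - 1;
    COMMIT;
    -- index the staging table locally so complex joins run here, not over the link
    CREATE INDEX stg_orders_cust_ix ON stg_orders (customer_id);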
    So, do you need a staging area?
    Fact is that there is no generic correct answer to this question.
    You have to look at the specifics of your data requirements and your environment to answer that question. There are costs and benefits to having a staging area, and you have to determine which way the cost/benefit analysis comes out for your specific project.
    Mike

  • OWB bugs, missing functionality and the future of OWB

    I've been working with OWB for some time now and there are a lot of rough edges to discover. Functionality and stability leave a lot to be desired. Here's a small and incomplete list of things that annoy me:
    Some annoying OWB bugs (OWB 10g 10.1.0.2.0):
    - The debugger doesn't display the output parameters of procedures called in pre-mapping processes (displays nothing, treats values as NULL). The mapping itself works fine though.
    - When calling self-made functions within an expression, OWB precedes the function call with a constant "Functions." prefix, which prevents the function from being executed and results in an error message
    - Occasionally OWB cannot open mappings and displays an error message (null pointer exception). In this case the mapping cannot be opened anymore.
    - Occasionally when executing mappings OWB doesn't remember changes in mappings even when the changes were committed and deployed
    - When using aggregators in mappings OWB scrambles the order of the output attributes
    - The deployment of mappings sometimes doesn't work. After n retries it works without having changed anything in the mapping
    - When recreating an external table directly after dropping the table OWB recreates the external table but always displays both an error message and a success message.
    - In Key Lookups the screen always gets garbled when selecting an attribute as a join condition
    - Usage of constants results in aborts in the debugger
    - When you reconcile a table used in a key lookup the lookup condition sometimes changes. OWB seems to remember only the position of the lookup condition attribute but not the name.
    - In the process of validating a mapping often changes in the mapping get lost and errors occur like 'Internal Errors' or 'Null Pointer Exceptions'.
    - When you save the definition of external tables OWB always adds 2 whitespace columns to the beginning of all the lines following 'ORGANISATION EXTERNAL'. If you save a lot of external table definitions you get files with hundreds of leading whitespaces.
    Poor or missing functionality:
    - No logging on the level of single records possible. I'd like the possibility to see the status of each single record in each operator like using 'verbose data' in PowerCenter
    - The order of the attributes cannot be changed. This really pisses me off, especially when operators like the aggregator scramble the order of attributes.
    - No variables in expressions possible
    - Almost unusable lookup functionality (no cascading lookups, no lookup overrides, no unconnected lookups, only equality conditions in key lookups)
    - No SQL overrides in sources possible
    - No mapplets, shared containers or any other kind of reusable transformation
    - No overview functionality for mappings. Often it's very hard to find a leftover operator in a big mapping.
    - No copy function for attributes
    - Printing functionality is completely useless
    - No documentation functionality for mappings (reports)
    - Debugger itself needs debugging
    - It's very difficult to mark connections between attributes of different operations. It's almost impossible to mark a group of connections without marking connections you don't want to mark.
    I really wonder which of the above bugs and missing functionality 'Paris' will address. From what I read about 'Paris', not many, if any at all. If Oracle really wants to be a competitor (with regard to functionality) to Informatica, IBM/Ascential etc., they have a whole lot of work to do, or should purchase Informatica or another of the leading ETL tool vendors.
    What do you think about OWB? Will it be a competitor for the leading ETL tools, or just a cheap database add-on that becomes widely used like SAP BW - not for reasons of technology or functionality, but because it's cheap?
    Looking forward to your opinions.
    Jörg Menker

    Thanks to you two for entertaining my thoughts so far. Let me respond to your latest comments.
    "Okay, let's not argue which one is better... when a tool is there, then there are some reasons for it to be there... But the points raised by Jorg and me are really very annoying."
    Overall I agree with both your and Jorg's points (and I did not think it was an argument... merely sharing our observations with each other (;^)
    The OWB tool is not as mature as Informatica. However, Informatica has no foothold in the database engine itself and as I mentioned earlier, is still "on the outside looking in..." The efficiency and power of set-based activity versus row-based activity is substantial.
    Looking at it from another way lets take a look at Microstrategy as a way of observing a technical strategy for product development. Microstrategy focused on the internals (the engine) and developed it into the "heavy-lifting" tool in the industry. It did this primarily by leveraging the power of the backend...the database and the hosting server. For sheer brute force, it was champion of the day. It was less concerned with the pretty presentation and more concerned with getting the data out of the back-end so the user didn't have to sit there for a day and wait. Now they have begun to focus on the presentation part.
    Likewise this seems to be the strategy that Oracle has used for OWB. It is designed around the database engine and leverages the power of the database to do its work. Informatica (probably because it needs to be all things to all people) has tended to view the technical offerings of the database engine as a secondary consideration in its architectural approach and has probably been forced to do so more now that Oracle has put themselves in direct competition with Informatica. To do otherwise would make their product too complex to maintain and more vendor-specific.
    "I am into the third data warehousing/data migration project and my previous two have been on Informatica (3 years on it)."
    I respect your experience and your opinions... you are not a first-timer. The tasks we have both had to solve, and how we solved them with these tools, are not necessarily the same. Could be similar in instances; could be quite different.
    "So the general tendency is to evaluate the tool and try to see how things that needed to be done in my previous projects can be done with this tool. I am afraid to say... I am still not sure how these can be implemented in OWB. The points raised by us are probably the fallout of this deficiency."
    One observation that I would make is that in my experience, calls to the procedural language in the database engine have tended to perform very poorly with Informatica. Informatica's scripting language is weak. Therefore, if you do not have direct usability of a good, strong procedural language to tackle some complicated tasks, then you will be in a pickle when the solution is not well suited to a relational-based approach. Informatica wants you to do most things outside of the database (in the map, primarily). It is how you implement the transformation logic. OWB is built entirely around the relational, procedural, and ETL components in the Oracle database engine. That is what the tool is all about.
    "If cost is the major factor for deciding on a tool, then OWB stands far ahead..."
    Depends entirely on the client and the situation. I have implemented solutions for large companies and small companies. I don't use a table saw to cut cake, and I don't use a pen knife to fell trees. Right tool for the right job.
    "...that's what most managers do... without even looking at how, in turn, by selecting such a tool they make life tough for the developers."
    Been there many times. Few non-technical managers understand the process of tool evaluation and selection and the value a good process adds to the project. Nor do they understand the implications of making a bad choice (cost, productivity, maintainability).
    "The functionality of OWB stands way below Informatica."
    If you are primarily a GUI-based implementer, that is true. However, I have often found, when I have been brought in to fix performance problems with Informatica implementations, that the primary problem is usually the way the developer implemented it. Too often I have found that the developer understands how to implement logic in the GUI component (the Designer/Maps and Sessions) with a complete lack of understanding of how all this activity will impact load performance (they don't understand how the database engine works). For example, a strong feature in Informatica is the ability to override the default SQL statement generated by Informatica. This was a smart design decision on Informatica's part. I have frequently had to go into the "code" and fix bad joins, split up complex operations, and rip out convoluted logic to get the maps to perform within a reasonable load window. Too often these developers are only viewing the problem through the "window" of the tool. They are not stepping back and looking at the problem in the context of the overall architecture. In part, Informatica forces them to do this. Another possible factor is that they probably don't know better.
    "One tool...one solution"
    Microstrategy until recently had been suffering from that same condition of not allowing the developer to create the actual query. OWB engineers need to rethink their strategy on overriding the SQL.
    "The functionality of OWB stands way below Informatica."
    In some ways, yes. If you do a head-to-head comparison of the GUI, then yes. In other ways OWB is better (Informatica does not measure up when you compare it with all of the architectural features that the Oracle database engine offers). They need to fix the bugs and annoyances, though.
    "...but even the GUI of Informatica is better than OWB and gives the developer some satisfaction of working in it."
    Believe me, I feel your pain. On the other hand, I have suffered from Informatica bugs. Ever do a port from one database engine to another, just to have it convert everything into multi-byte? Ever have it redefine your maps to parallel processing threads when you didn't ask it to?
    "Looking at the technical side of things, I can give you one fine example... there is no function in Oracle doing to_integer (to_number is there) but Informatica does that..."
    Hmm-m-m... sorry, I don't get the point.
    "The style of ETL approach of Informatica is far more appealing."
    I find it unnecessarily over-engineered.
    "OWB has two advantages: it is basically free of cost, and it has a big brother in Oracle."
    When you are another "Microsoft", you can throw your weight around. The message for Informatica is "don't bite the hand that feeds you." Bad decisions at the top.
    Regards,
    Dan Phillips

  • OWB vs Informatica, Performance

    Hi
    I am working on a Proof of Concept which is looking to replace Informatica with OWB. A key factor to the success of the proof of concept is performance.
    At present, the OWB mappings seem to be considerably slower, which does not make much sense to me. I have compared the SQL generated by Informatica with the SQL generated by OWB, and they are pretty much the same. The key difference between the two solutions is an initial processing of the data file. In the OWB solution, this is used as an external table and a number of mappings perform validation functions to prepare the data for further processing. In the Informatica solution, this validation is performed at file-system level, using data caching.
    My experience of data caching is fairly limited. Is this a key way to reduce the runtime of processing? Is it an advantage of Informatica over OWB, as it is performed outside the database?
    Are there any other performance-related areas in which Informatica beats OWB? Surely OWB should always win.
    Any feedback would be gratefully received.
    Thanks in advance.

    LS,
    I do not have any more hints for you than stated earlier, only more explicit ones maybe. Forget the proof of concept.
    Either you choose the one (and only OWB) or the other. The choice has already been made in the past, and the development/test costs have been spent.
    If you do not know:
    - how much money you can actually spend during production with OWB
    - how much time/skills/money you can spend during a proof of concept of OWB
    - how much time/skills/money you can spend during rewriting and conversion of the ETL process
    then your battle (proof of concept) is lost beforehand.
    Solving any single detail like performance is a waste of time.
    OWB and Oracle can function really fast, if
    - the right hardware is available
    - the right skills are available at ETL-design time.
    Do not try to build an even more beautiful castle than the current one with only a toothbrush during a proof of concept. Keep the comparison fair.
    I would give this assignment back to where it came from.
    Regards,
    André

  • CBO: OWB Dimension Performance Isssue (DIMENSION_KEY=DIM_LEVEL_ID)

    Hi
    In my opinion the OWB dimensions are very useful, but sometimes there are performance issues.
    I work with OWB dimensions quite a lot, and with the big dimensions (> 100,000 rows) I often get performance problems when OWB generates the code to load (merge step) or look up these dimensions.
    OWB dimensions have a PK on DIMENSION_KEY, and level surrogate IDs which are equal to the DIMENSION_KEY if the row is an element of that level (and not a parent hierarchy element).
    I have hunted the problem down to the condition DIMENSION_KEY = (DETAIL_)LEVEL_SURROGATE_ID. OWB does that to get only the rows with (detail-)level attributes.
    But it seems that the CBO isn't able to predict the cardinality correctly. The CBO always assumes that the result cardinality of that condition is 1 row, so I assume that condition is the reason for the "bad" execution plans: a
    "NESTED LOOPS OUTER" with an inline view of cardinality = 1.
    Example:
    SELECT COUNT(*) FROM DIM_KONTO_TAB  WHERE DIMENSION_KEY= KONTO_ID;
    --2506194
    Explain Plan for:
    SELECT DIMENSION_KEY, KONTO_ID
    FROM DIM_KONTO_TAB where DIMENSION_KEY= KONTO_ID;
    | Id | Operation                  | Name          | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |  0 | SELECT STATEMENT           |               |    1 |    12 |  12568 (3) | 00:00:01 |       |       |
    |  1 |  PARTITION HASH ALL        |               |    1 |    12 |  12568 (3) | 00:00:01 |     1 |     8 |
    |* 2 |   TABLE ACCESS STORAGE FULL| DIM_KONTO_TAB |    1 |    12 |  12568 (3) | 00:00:01 |     1 |     8 |
    Predicate Information (identified by operation id):
    2 - STORAGE("DIMENSION_KEY"="KONTO_ID")
        filter("DIMENSION_KEY"="KONTO_ID")
    Or, for loading an SCD2 dimension:
    |* 12 | FILTER                      |                      |      |       |         |          | Q1,01 | PCWC |
    |  13 |  NESTED LOOPS OUTER         |                      | 328K | 3792M | 3968 (2)| 00:00:01 | Q1,01 | PCWP |
    |  14 |   PX BLOCK ITERATOR         |                      |      |       |         |          | Q1,01 | PCWC |
    |  15 |    TABLE ACCESS STORAGE FULL| OWB$KONTO_STG_D35414 | 328K | 2136M |   27 (4)| 00:00:01 | Q1,01 | PCWP |
    |  16 |   VIEW                      |                      |    1 |  5294 |         |          | Q1,01 | PCWP |
    |* 17 |    TABLE ACCESS STORAGE FULL| DIM_KONTO_TAB        |    1 |   247 |  489 (2)| 00:00:01 | Q1,01 | PCWP |
    I have tried a lot:
    - statistics are gathered often, with monitoring information and (frequency) histograms on the condition columns
    - created extended statistics: DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'DIM_KONTO_TAB', '(DIMENSION_KEY, KONTO_ID)')
    - created a combined index on DIMENSION_KEY, LEVEL_SURROGATE_ID
    - read a lot
    - hinted the queries in OWB (but it seems the inline view is too complex to use a hash join)
    Next step:
    - tracing the optimizer (CBO events).
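    One more idea on my list: since a column-to-column predicate like this defeats the stored statistics, a dynamic sampling hint might give the CBO a sampled (and therefore more realistic) estimate - untested on my side so far:
    SELECT /*+ dynamic_sampling(t 4) */ DIMENSION_KEY, KONTO_ID
    FROM   DIM_KONTO_TAB t
    WHERE  DIMENSION_KEY = KONTO_ID;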
    Does someone have an idea how to help the CBO get the cardinality right?
    If you need more Information, please tell me.
    Thanks a lot.
    Moritz

    Hi Patrick,
    For a relational dimension, these values must be unique within the level. They are not required to be numeric IDs (although numeric IDs follow the best practice for surrogate keys most closely).
    If you use the same sequence for the whole dimension, you have ensured that each entry in the entire dimension is unique, which means that you can move your data as-is into OLAP solutions. We will do this as well in the next major release.
    Hope that helps,
    Jean-Pierre

  • OWB 10.2.0.4 really bad set-based delete performance

    Hi, we recently upgraded to OWB 10.2.0.4, one of the reasons being the ability to do set-based deletes instead of row-based ones. However, upon testing this, we're seeing maps whose row-based deletes took 30-40 seconds now taking literally 1.5 to 2 HOURS to run.
    I expected the SQL from the set based to take the form of:
    delete from my_table
    where (col_a, col_b, col_c) in (select a, b, c from ....)
    but instead the format is different:
    delete from my_table
    where exists (select 1 from ....)
    I don't quite understand what the SQL is trying to accomplish - and truthfully, it performs horribly compared to the hand-written version (explain plan shows estimated cost of 14,000 for my query, and over 5 million for the OWB query).
    Has anyone else seen this - and is there a solution? Part of me wants to say I'm doing something wrong, but the other part says "sure, but it works fine in row-based mode(target only)" - exact same map.
    Any ideas?
    Thanks!
    Scott

    Hi everyone - well, I've figured out what is causing the problem and how to fix it... but I still don't understand why it causes the problem.
    Here's a high-level overview of the ETL: we find deleted records by selecting the business key columns from our existing dimension table and doing a MINUS on the matching columns from the source table. If any records come out of this, it means the record was deleted on the source, and we go ahead and do a matching delete on the dimension table.
    Here's where the odd thing happens though: there's a column called "source system name" that is part of the dimension business key. This column does NOT exist on the source system - it's just a hard-coded constant (put in just in case we ever add an additional system in the future).
    Basically, if we do the MINUS logic on all the columns EXCEPT this one, and then connect a constant with this hard-coded value to the delete operator, the delete takes FOREVER. On the other hand, if we actually put this field into the MINUS operator, by simply repointing the existing constant there instead of directly to the delete table, the deletes magically start taking 30 seconds instead of 10 minutes to run.
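    In SQL terms, the working version amounts to something like this (table and column names made up; note the hard-coded source system name inside the MINUS):
    DELETE FROM dim_account d
    WHERE (d.account_no, d.source_system) IN (
            SELECT account_no, source_system FROM dim_account
            MINUS
            SELECT account_no, 'BILLING' FROM src_account  -- the constant lives inside the MINUS
          );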
    No idea (at all) why this makes a difference, but it seems to - and it's a night and day difference.
    Hopefully this can help someone else out who runs into the same issue.
    Thanks!
    Scott

  • ODBC Connectivity in OWB 10g

    Hi,
    I have one of my data sources on AS400, and I have set up a DSN to connect to it. I want to know how I can connect OWB 10g to the AS400 data source using this DSN.
    Is there any other way that I could connect to the AS400 data source using OWB 10g without any additional drivers that involve a cost :-) ?
    Thanks for your help in advance.
    Thanks & Regards,
    Harshad Borgaonkar

    Hi Harshad,
    You can do this with the heterogeneous services.
    http://www.datadirect.com/developer/odbc/oracle_heterogeneous/index.ssp
    has a good description of what to do.
    You'll have to set up the heterogeneous service on an Oracle database on the same platform you have ODBC drivers for, presumably Windows.
    You should then be able to connect to this database in OWB and create a connection under Databases -> Other -> ODBC or Other.
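    The generic setup looks roughly like this - file locations, host and DSN names are placeholders, and on 10g the agent executable is hsodbc (it became dg4odbc in 11g):
    # $ORACLE_HOME/hs/admin/initas400.ora
    HS_FDS_CONNECT_INFO = AS400_DSN     # the ODBC DSN you created
    HS_FDS_TRACE_LEVEL  = OFF
    # listener.ora: add a SID_DESC that spawns the agent
    (SID_DESC =
      (SID_NAME = as400)
      (ORACLE_HOME = /oracle/product/10.2)
      (PROGRAM = hsodbc)
    )
    # tnsnames.ora: note the (HS = OK) clause
    AS400 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = winhost)(PORT = 1521))
        (CONNECT_DATA = (SID = as400))
        (HS = OK)
      )
    -- and finally, in the Oracle database:
    CREATE DATABASE LINK as400 CONNECT TO "odbcuser" IDENTIFIED BY "pwd" USING 'AS400';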
    Cheers,
    Colin

  • OWB Paris, $10k/CPU for Enterprise ETL and $15k/CPU for Data Quality option

    Hi,
    I just heard that the licensing model changed completely with OWB 10g R2 (Paris).
    As far as I have been informed, Warehouse Builder is now shipped as part of the database, and for each database (development, test, production...) you have to license the required option.
    The 2 main options are:
    - the Enterprise ETL Option, which costs $10,000 per CPU
    - the Data Quality Option, which costs $15,000 per CPU
    This means that from now on you pay as much for OWB with the Enterprise ETL and Data Quality Options as 50% of what you already pay for the EE DB license with the partitioning option.
    So what's your opinion to this new licensing model?
    Regards
    Maurice

    Hi Maurice. It seems like Oracle is trying to reduce the number of licenses with the OLAP option, since it should be possible to have both OLTP and OLAP systems in the very same instance.
    On the other hand, Oracle seems to want more money for "advanced ETL options", such as scheduling, metadata management, bla bla bla...
    But those "advanced" functions seem to me like something every ETL tool should provide. For free...
