Why use/sell OWB 9.0.4

Can somebody tell me how I can convince customers to use OWB 9.0.4 when this tool isn't capable of creating a simple data warehouse similar to the example that ships with the Oracle 9i database (the SH schema)?
To test the 9.0.4 version of OWB I tried to import that data warehouse example into OWB, but the only objects I got were tables without the dimension information. Even the partition and tablespace information of the tables got lost.
So I tried to recreate the exact same dimensions as the example, but OWB failed again. After three years of listening to every promise made by Oracle, I had great hopes that OWB 9.0.4 would finally be capable of creating dimensions that are physically stored in multiple tables. I was quite disappointed to find out that the dimension modeller in OWB 9.0.4 hasn't changed a bit. Even primary and unique key constraints still can't be assigned to a specific index tablespace. So why should I use dimensions in OWB if Oracle still believes that every dimension is always one table? Even their own example proves otherwise.
It would be a lot easier if all the target objects were listed as tables and you could use the dimension and cube objects to add specific characteristics to those tables. Maybe the OWB development team could take a look at how Oracle Enterprise Manager defines dimension objects. That is why I prefer to use Oracle Designer and, if necessary, Oracle Enterprise Manager to create the physical database structure, and only use OWB or a third-party ETL tool to create the mappings and process flows.
My opinion after seeing OWB 9.0.4 is still that if the mapping builder, a decent dimension modeller and the other OWB functionality had been added to Oracle Designer three years ago, Oracle would now have a far better data warehouse development tool than OWB will ever be. Even the GUI would be better off and easier to use.
With kind regards,
Rob Nijland
Intelligence in Information BV
The Netherlands

Rob,
Let me address some of the issues below.
You state:
"Much to my regret I must say that I have already heard these kind of promises for the last three years."
Warehouse Builder's first release (release 2) became available in February 2000, so the product has existed for three years now. I cannot comment on promises that have been made in the past, but we are definitely working on the feature to implement snowflake modeling. I understand your frustration about the fact that it is not available today, but if you read Kimball he does state: unless you have a very good reason to create a snowflake implementation, you should never do so.
Page 95 of Kimball's 'The Data Warehouse Toolkit' starts with 'Resisting the Urge to Snowflake'. Page 97 states: 'Do not snowflake your dimensions, even if they are large. If you do snowflake your dimensions, be prepared to live with poor browsing performance.'
Irrespective of what Kimball states, I hope you can agree that OWB has moved forward in the last three years. OWB development has a limited number of resources, and we can only address so many areas of the product in a given amount of time. Please do understand that we do our utmost to create a product that suits our users' needs.
You state: "I have tried to import a converted OWB 9.0.2 project into OWB 9.0.4. The import went well, but the problems started when I tried to validate and generate the mappings. I received a lot of warnings that parameters of custom as well as pre-defined functions were invalid. I reconciled the functions but as soon as you try to reconcile a pre-defined function the link to the target object is gone. This means that I have to walk through 40 or more mappings, reconcile everything and partially have to rebuild the mappings."
When links between objects and implementations in a mapping used to exist but get removed upon migration, you are hitting a serious bug. Please do submit your case to MetaLink or work with me directly ([email protected]) to get these problems resolved.
You state: "Furthermore in the mappings all table aliasses have been replaced. For example I have used aliasses, such as 'CUSTOMERS_NEW_VERSION' and 'CUSTOMERS_OLD_VERSION' after a split object to quickly identify what happens during a slowly changing dimension type 2 build. Now the aliases have been renamed to 'CUSTOMERS' and 'CUSTOMERS_1'. The only option is to open the splitter to find out what is happening and to rename the aliases again to keep the documentation valid."
Again, if a rename takes place upon migration then you are hitting a bug.
You state: "After validation I also got some warnings that I should not set the schema name. But this means that the only option I have is to create an connection between locations (eq. database link), even if my source and target are on the same database but in different schemas."
In the 9.0.4 release we introduce two new concepts, locations and connectors, which we recommend you use. These concepts bring benefits that you may not see at first. Please take a look at the Runtime update whitepaper that has been posted to OTN.
With respect to set-based updates using OWB: our experts state that bulk row-based updates in general are quicker than comparable set-based updates. If you disagree for your business case(s) then I would like to understand your requirements better so that we can work on these. Please contact me directly.
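For readers unfamiliar with the distinction, here is a minimal sketch of the two update styles in plain JDBC; the table and column names are hypothetical, and this is hand-written illustration code, not anything generated by OWB.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UpdateStyles {

    // Row-based: fetch the candidate rows and issue one UPDATE per row,
    // batching the statements to reduce round trips.
    static void rowBasedUpdate(Connection conn) throws SQLException {
        String select = "SELECT customer_id, new_status FROM customers_stage";
        String update = "UPDATE customers SET status = ? WHERE customer_id = ?";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(select);
             PreparedStatement upd = conn.prepareStatement(update)) {
            while (rs.next()) {
                upd.setString(1, rs.getString("new_status"));
                upd.setLong(2, rs.getLong("customer_id"));
                upd.addBatch();
            }
            upd.executeBatch();
        }
    }

    // Set-based: let the database apply the whole change in one statement.
    static void setBasedUpdate(Connection conn) throws SQLException {
        String update =
            "UPDATE customers c SET c.status = "
            + "(SELECT s.new_status FROM customers_stage s WHERE s.customer_id = c.customer_id) "
            + "WHERE EXISTS (SELECT 1 FROM customers_stage s WHERE s.customer_id = c.customer_id)";
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(update);
        }
    }
}

Which of the two wins for a given load depends on data volumes, indexing and the database, which is why it is worth testing both against your own case.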
Thanks.
Mark.

Similar Messages

  • Why use Oracle 8.1.5 for Linux?

    Dear,
    If it is really a pain in the ass to install Oracle on Linux, why use it? I have spent more than 4 full days (more than what I expected) trying to install Oracle on a Linux box.
    I guess I will go for MySQL since I can just install the RPM package and do not have to worry about those critical issues.
    Just a note for Oracle: please design a better installer next time, one which is going to work! Thank you.
    Best Regards,
    Alex Yu

    I have some sympathy for your plight. I have 8.0.5 installed on
    RedHat Linux. Apart from patching the Pro*C configuration file,
    everything is fine.
    It does seem 8.1.5 is more problematic. If you're purely
    evaluating Oracle, maybe you should consider 8.0.5 until 8i
    stabilises unless you really are gagging for some 8i feature.

  • Why Using Top Link is best in DB Adapter?

    Hi All,
    Can anyone suggest why using TopLink (the built-in insert, select, etc. operations) is better in the DB Adapter than using a custom query?
    Thanks

    Hi Vikky,
    for insert/select it depends on what kind of user you are. TopLink lets you browse and click on tables and have everything generated for you. If you are more of a DBA or a 'show me the SQL' type, then you can just type SQL directly.
    Some advantages of TopLink would be:
    -The range of SQL generated by TopLink is limited, but if you hard code complex SQL into your service you need to maintain it.
    -TopLink can generate at runtime the correct SQL for a given database, making switching from say DB2 to Oracle easy.
    -The merge operation will compare the input XML to the columns on the database and update only what has changed. It can also do a sparse merge. If only 4 columns in the XML were set, only those 4 columns in the database will be updated.
    -For inbound polling the strategy used (LogicalDelete, Sequencing Table, etc) is a configuration property and then at runtime multiple SQL statements are generated. The SQL also takes advantage of advanced syntax like the Oracle-only FOR UPDATE SKIP LOCKED, writing it all yourself may be tedious and error prone.
    -The main benefit of TopLink is when you go beyond thinking about a single table. If you import multiple related tables at once, TopLink will generate the SQL to select from and maintain multiple tables, establish a commit order, and generate a hierarchical XSD. With custom SQL the matching XSD is always flat. I.e. if you just need to insert an emp you could get away with custom SQL. If you need to insert a dept and emps, I would use TopLink.
    -This is also when the intermediary abstraction of an object/table makes more sense, as you only need to import a complex relational schema once, then generate inserts, selects, etc.
    So I hope that helps. They are each equally viable and can do something that the other can't. Where you see an overlap I would go with personal preference.
    Thanks
    Steve
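    To make the 'sparse merge' point above concrete, here is a minimal sketch of the idea in plain JDBC. It is not TopLink's implementation; the table, the columns and the input map are hypothetical, and in real code the column names would have to come from a trusted list rather than user input.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class SparseMergeSketch {

        // Update only the columns that were actually present in the input,
        // leaving every other column on the row untouched.
        static int sparseUpdate(Connection conn, long empId,
                                Map<String, Object> changedColumns) throws SQLException {
            if (changedColumns.isEmpty()) {
                return 0; // nothing was set in the input, so no SQL at all
            }
            StringBuilder sql = new StringBuilder("UPDATE emp SET ");
            int i = 0;
            for (String column : changedColumns.keySet()) {
                if (i++ > 0) {
                    sql.append(", ");
                }
                sql.append(column).append(" = ?"); // column names from a trusted list only
            }
            sql.append(" WHERE emp_id = ?");

            try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
                int idx = 1;
                for (Object value : changedColumns.values()) {
                    ps.setObject(idx++, value);
                }
                ps.setLong(idx, empId);
                return ps.executeUpdate();
            }
        }

        // Example input: only two of the table's columns were set.
        static Map<String, Object> exampleInput() {
            Map<String, Object> changed = new LinkedHashMap<>();
            changed.put("salary", 4200);
            changed.put("dept_id", 30);
            return changed;
        }
    }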

  • Why use interlaced ?

    We shoot with DVCpro50 at 24p.
    A few questions.
    1. Is there ever a time when there is an advantage to using field dominance, or a time when you have to use it?
    Interlaced footage looks like crap. Stills look horrible in an interlaced timeline - why ever use it?
    It seems I can just switch the field dominance from "lower/even" to "none" and everything looks 10 times better on the computer and the NTSC monitor.
    2. Is there any time when there is a disadvantage to removing the pulldown and editing in 24 fps? It looks 10 times better without the screwed-up "B/C & C/D" frames you get if you leave it in and edit at 29.97 fps.
    3. Is there any advantage or quality enhancement to removing the pulldown in Shake rather than letting FCP do it while capturing? And after removing it, should your sequence timeline be set to 24 fps or 23.98 fps, and why?
    4. I have a G5, OS X 10.4.8, and an AJA IO hooked up to an NTSC monitor, and I can't view a 24 fps timeline on the external NTSC monitor - only frames when parked. Is that because these monitors will only accept 29.97 field-dominated footage?

    First off - thanks for the help on part 4.
    I know how TV works and why NTSC was invented over half a century ago.
    Now that I can monitor on the external I will always edit in 24. I'm only the editor - they shoot with an SPX900 in 24p - and if you don't remove the pulldown, frames 3 & 4 out of the 5-frame cycle combined with the interlacing give really bad results. So out of curiosity I switched the sequence dominance setting from "lower/even" to "none". At this point is it still treated as interlaced, or if it is set to "none" is it now progressive, like when you are editing in 24 and the field dominance window is greyed out? I would think that this would look jittery on a TV because the field order isn't right - but it doesn't. I can make a DVD or VHS (not sure why) and monitor it, and the only difference is that the stills look incredibly better - there's no banding and stair-stepping on the shoulders - on either the computer monitor or the NTSC monitor. And with it set to "lower/even", the bigger the computer or TV screen the worse it looks - but with it set to "none" the stills just get softer as you view them on bigger screens, as if the right amount of Gaussian blur was being added automatically. Isn't that what you want?
    So that is why I ask "why use interlace" if you have a choice?
    I'm not trying to be cool - these are legitimate questions. I consider this doing my homework - isn't that what forums are for? I help people with answers all of the time.
    As for part 3 - does "standard pulldown" refer to going back and forth between 24 fps (film) and 29.97 (video), and "advanced pulldown" from 23.98 to 29.97?
    Are all of these modern video cameras actually shooting at 23.98?
    2 G5s 1.8Ghz single & 2.7Ghz Dual (PPC)   Mac OS X (10.4.8)   FCP Studio 5.0.4, Shake 4.1, AJA IO, 1.5G RAM & 3G RAM

  • Why use symbol {   } in the following script?

    Why use the symbol { } in the following script?
        <read-write-backing-map-scheme>
          <scheme-name>SampleDatabaseScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme>
              <scheme-ref>SampleMemoryScheme</scheme-ref>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.tangosol.examples.coherence.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>  <!-- the { } in question -->
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
    Thank you very much
    Edited by: jetq on Jun 24, 2009 6:26 PM

    Hi Frank,
    In the example, the "{cache-name}" is supposed to be replaced by the name of the database table or view that will be queried for the data to be cached. Its purpose is to demonstrate how to pass parameters to the class constructor.
    Regards,
    Harv
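    For context, here is a minimal sketch of what such a cache store class might look like, showing how the expanded {cache-name} value arrives through the String constructor parameter. It is a simplified stand-in for the DBCacheStore example, with the JDBC work reduced to comments.

    import com.tangosol.net.cache.CacheStore;

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    public class SimpleDbCacheStore implements CacheStore {

        private final String tableName;

        // Coherence resolves <param-value>{cache-name}</param-value> to the
        // name of the cache being configured and passes it in here; the
        // example then uses that name as the database table name.
        public SimpleDbCacheStore(String tableName) {
            this.tableName = tableName;
        }

        public Object load(Object key) {
            // e.g. SELECT value FROM <tableName> WHERE id = key (JDBC omitted)
            return null;
        }

        public Map loadAll(Collection keys) {
            Map results = new HashMap();
            for (Object key : keys) {
                Object value = load(key);
                if (value != null) {
                    results.put(key, value);
                }
            }
            return results;
        }

        public void store(Object key, Object value) {
            // e.g. INSERT or UPDATE into <tableName> (JDBC omitted)
        }

        public void storeAll(Map entries) {
            for (Object entry : entries.entrySet()) {
                Map.Entry e = (Map.Entry) entry;
                store(e.getKey(), e.getValue());
            }
        }

        public void erase(Object key) {
            // e.g. DELETE FROM <tableName> WHERE id = key (JDBC omitted)
        }

        public void eraseAll(Collection keys) {
            for (Object key : keys) {
                erase(key);
            }
        }
    }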

  • Why use layer masks and adjustment layers?

    I've been using PSE and CS successfully for years.
    One thing I have never understood is: why use layer masks and adjustment layers, instead of simply creating a copy of the subject layer (the one I want to make changes to) and experimenting with that?  It's quick (Ctrl-J), I can do it as many times as I want, I'm not affecting my Background layer.  If I like the changes, I can keep them.  I can switch the copy on and off to compare with the Background layer.  I can do any type of blend or combination I desire.  I can insert Gradient layer(s), select any part of the copy and (Ctrl-J) create a new layer containing only the selected part.  I can adjust size, rotate, do anything.
    It almost seems that "layer mask" and "adjustment layer" are mainly another layer of terminology; can anyone explain (preferably in 50 words or less) how they are intrinsically different from or superior to working with copies of the Background layer?  What can be done with them that can't be done simply using copies of the Background layer?

    Here's a very basic example of the advantage of using a layer mask.
    I have this picture of a sunflower and I want to convert the background to black & white, leaving just the flower in color.  I duplicated the Background layer, converted it to B&W and proceeded to use the Eraser to uncover the flower color. But I made a mistake and erased outside the flower.  There is no way to correct this other than deleting the layer and starting again.
    Now let's use a layer mask on the B&W layer. Set the Foreground/Background colors to the defaults black/white. Using the Brush tool paint on the mask with black to reveal the color.  Here I painted too far, revealing a green leaf in the background.  No need to start over.  Simply switch to white and paint the excess to convert back to the B&W.
    Tip: while painting you can type "X" to toggle between black and white.
    You could also select the flower using the various selection tools and then fill the selection with black. If it turns out the selection was not 100% accurate you can then fine-tune the result by painting on the mask with black or white as necessary.

  • Why using workarea for internal table is better in performance

    Please tell me why using a work area for an internal table is better for performance.

    Hi Vineet,
    Why would we choose to use an internal table without a header line when it is easier to code one with a header line? There are several reasons.
    1) Separate internal table work area:
    The work area (staging area) defined for the internal table is not limited to use with just one internal table.
    For example, suppose you want two internal tables for EMPLOYEE – one to contain all records and one to contain only those records where country = ‘USA’. You could create both of these internal tables without header lines and use only one work area to load data into both of them. You would append all records from the work area into the first internal table, and conditionally append the ‘USA’ records from the same work area into the second internal table.
    2) Performance: using an internal table without a header line is more efficient than using one with a header line.
    3) Nested internal tables: if you want to include an internal table within a structure or another internal table, you must use one without a header line.
    If this is helpful, then reward me.
    Regards
    Shambhu

  • Why use go:title etc...

    I've just seen this in the meta data of a website:
    <meta property="og:title" content="foobar...
    There's loads of them… go:description… go:audio… go:keywords… and so on
    Apparently it's to do with Open Graph and Facebook, which I don't fully understand, but why use it and what are the advantages? I've googled it, but I'm finding little more than what I already mentioned. Just wondering if anybody here knew a little more detail than the info I've found.
    Thanks.
    Mat

    From this page - Facebook Content Sharing Best Practices
    I read:
    Use proper Open Graph tags
    Open Graph tags are included in your page’s HTML and allow the Facebook Crawler to generate previews when your content is shared on Facebook.
    We give examples below, but the basic Open Graph tags you should implement are:
    og:title – The title of your article, excluding any branding.
    og:site_name - The name of your website. Not the URL, but the name. (i.e. "IMDb" not "imdb.com".)
    og:url – This URL serves as the unique identifier for your post. It should match your canonical URL used for SEO, and it should not include any session variables, user identifying parameters, or counters. If you use this improperly, likes and shares will not be aggregated for this URL and will be spread across all of the variations of the URL. 
    og:description – A detailed description of the piece of content, usually between 2 and 4 sentences. This tag is technically optional, but can improve the rate at which links are read and shared.
    etc.
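    For illustration, here is roughly what those tags look like in a page's <head>; the values below are made up for this example.

    <head>
      <meta property="og:title"       content="How to Grow Sunflowers" />
      <meta property="og:site_name"   content="Example Gardening" />
      <meta property="og:url"         content="https://www.example.com/articles/grow-sunflowers" />
      <meta property="og:description" content="A short guide to planting and caring for sunflowers, from seed to bloom." />
    </head>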

  • Why use “synchronized” to decorate an object which its type is Vector

    Hello, guys.
    Recently, I read the source code of java.util.Observable. Unfortunately, I encountered some problems. In the source code there is an object named “obs”; its type is Vector. As we all know, Vector is thread-safe, so why use “synchronized” in the code below?
        public synchronized void deleteObservers() {
            obs.removeAllElements();
        }
    Thanks.
    Edited by: qiao123 on Dec 21, 2009 7:07 PM

    My " NewBie" Definition of Thread Safe :Is of no interest. It already has a definition and that isn't it.I wanted to make clear what my definition is.
    The [JLS DEF FOR COLLECTION|http://java.sun.com/j2se/1.4.2/docs/api/java/util/Collections.html] says:
    That is the Javadoc for Collections actually, nothing to do with the JLS or Collection.They are authoritative links for Java Language and the discussion in hand, not Fantasies of Lucy Aunty.
    So,how do we draw any inference(s) here?Yes, that you have selectively quoted the Javadoc, which goes on to talk about how you have to use it when iterating.Still not answering my question,Sir.
    ejp,you seem to have knowledge but you don't have the attitude to forward the same to others or bear what others say,even when initiative has been taken by others to put forward a problem.
    P.S : Not all hackers are arrogant and not all arrogant are hackers.
    Another P.S : Waste of my time really !!!
    Edited by: punter on Dec 22, 2009 1:09 AM
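    Stepping back from the argument, the usual technical answer to the original question is this: Vector only makes each individual call atomic, while Observable's methods also need to be atomic relative to each other (for example, notifyObservers checks the changed flag and snapshots the observer list under the same lock that addObserver and deleteObservers use). A small sketch of the general pitfall, with a hypothetical class:

    import java.util.Vector;

    public class Subscribers {

        private final Vector<String> names = new Vector<>();

        // Not atomic: another thread can add the same name between the
        // contains() call and the add() call, despite Vector's own locking.
        public void addIfAbsentBroken(String name) {
            if (!names.contains(name)) {
                names.add(name);
            }
        }

        // Atomic with respect to the other synchronized methods of this
        // class: the whole compound operation runs under one lock, which is
        // why Observable marks its methods synchronized as well.
        public synchronized void addIfAbsent(String name) {
            if (!names.contains(name)) {
                names.add(name);
            }
        }

        public synchronized void clear() {
            names.removeAllElements();
        }
    }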

  • Why use KeyExtractor here? How about remove it?

    Please see the following script:
        setResults = cache.entrySet(new AndFilter(
            new LikeFilter(new KeyExtractor("getLastName"), "S%",
                (char) 0, false),
            new EqualsFilter("getHomeAddress.getState", "MA")));
    Why use KeyExtractor here? How about removing it and making it like:
        setResults = cache.entrySet(new AndFilter(
            new LikeFilter("getLastName"), "S%"),
            new EqualsFilter("getHomeAddress.getState", "MA")));
    Is this script right?
    Edited by: jetq on Oct 23, 2009 2:08 PM

    I Googled for KeyExtractor and the likeliest-looking thing was javadocs for com.tangosol.util.extractor.KeyExtractor on an Oracle site.
    Why don't you check around the Oracle developer site for a forum specifically about whatever product this is?

  • Why use daemon?

    Hello friends.
    Could anyone help me with the use of a daemon?
    How does it work?
    What is it used for?
    Why use a daemon instead of a process chain?
    I researched a few things, but nothing answered these questions.
    Many thanks.

    Go through the document below on Real-Time Data Acquisition. It will explain to you what the daemon is and how it works.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/706af360-ba17-2c10-debe-8b0e6e1417c9?QuickLink=index&overridelayout=true

  • Why use customized / particular exceptions instead java.lang.Exception

    Hi,
    Do any of you guys know where I can find a theoretical statement / explanation about why to use particular / customized exceptions instead of java.lang.Exception? I am aware that it consumes more resources and becomes a heavier object, as well as the clearness when coding and all that stuff; however, my boss wants to see a tech document where all this is clearly stated. Any resource out there?
    Regards

    It is better to throw specific--or at least module- or package-specific--exceptions, rather than Exception, because then the caller knows what to expect. However, if you declare "throws Exception", the caller can still catch IOException, SQLException, etc., separately. He'll just have to be a really good guesser as to which ones he should catch.
    True.
    Also, there's no point in declaring "throws NullPointerException." Any method can throw it without declaring it. If you generate your own NPE (or other unchecked exception) inside the method, then you should document it in the javadoc comments, but you don't need to put it in the throws clause. It doesn't do any harm, but it's redundant and cluttersome.
    It was an example and I wasn't feeling too creative... excuse me. ;-) But yeah, I've never thrown NPE ever.
    I'm assuming overhead caused by extending classes. At the worst, it would be minuscule, or so I would think. What overhead is created by extending classes?
    Hmm, I was under the impression that a class that extends another class inherits all of the other class's data (variables, etc.). Hence, you get something like this:
    SuperClass1 + SubClass1 + SubClassOfSubClass1 = memory usage for SubClassOfSubClass1.
    Since adding positive numbers will always end up increasing data, SubClassOfSubClass1 will use more data than SuperClass1.
    Am I wrong?
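    To give your boss something concrete, here is a small sketch of the usual argument in code: a specific exception type carries the failure information in the method signature, so the caller can catch and handle exactly that case instead of guessing. The class and method names are made up for illustration.

    // Hypothetical domain-specific exception: callers can catch exactly this.
    public class InsufficientFundsException extends Exception {
        private final double shortfall;

        public InsufficientFundsException(double shortfall) {
            super("Short by " + shortfall);
            this.shortfall = shortfall;
        }

        public double getShortfall() {
            return shortfall;
        }
    }

    class Account {
        private double balance = 100.0;

        // The signature tells the caller precisely what can go wrong.
        public void withdraw(double amount) throws InsufficientFundsException {
            if (amount > balance) {
                throw new InsufficientFundsException(amount - balance);
            }
            balance -= amount;
        }
    }

    class Caller {
        void pay(Account account) {
            try {
                account.withdraw(250.0);
            } catch (InsufficientFundsException e) {
                // Specific recovery is possible because the type is specific.
                System.out.println("Need " + e.getShortfall() + " more");
            }
            // Had withdraw() been declared 'throws Exception', the only safe
            // option here would be to catch Exception and guess at the cause.
        }
    }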

  • Why use Dreamweaver for create web pages

    Hi, I am new to Dreamweaver and I want to know: why use Dreamweaver to create web pages?

    Twitter
    http://twitter.com/altweb
    Blog
    http://alt-web.blogspot.com/
    Site
    http://alt-web.com/

  • Why use mail client over web site mail inteface?

    Hello.
    I have a question for all email client users:
    Why use a mail client over a web site's mail interface?
    And I have one more question:
    Is there any option to back up or compress your mail in mutt or thunderbird?

    brisbin33 wrote:
    briest wrote:
    brisbin33 wrote: Also, imap in mutt for me is sooooo slow (proportional to the size of the mailbox, of course)
    Do you use header caching? It can accelerate remote imap access, but is disabled by default. Also, playing with imap_idle (yes, if the server supports it), imap_keepalive (low) and timeout (high) may produce nice results.
    I've tried everything (always caching, and setting various imap_* options). Trust me. It's funny, it wasn't bad at first, then one day I'd hit j/k and 3 seconds later the indicator would move... that's when I set up offlineimap.
    I'm very happy with this maildir setup now. Thanks anyway, though.
    OMG, mutt is flying right now! Even with header and message caching, mutt + imap is not the fastest. I recommend offlineimap to everyone.

  • Why use oracle fail safe???

    hi
    I have one question:
    why use Oracle Fail Safe?
    In the case of a Windows HA system, we operate an Oracle database using only Windows Cluster (MSCS), without Oracle Fail Safe!!
    Our systems were installed with the Oracle database without Oracle Fail Safe.
    If we were to use Oracle Fail Safe, what would the merit be?
    Thanks...

    Hi Kim,
    why use oracle fail safe???
    Refer to http://download.oracle.com/docs/html/B12070_01/intro.htm#i1005996
    Hope that helps.
    Regards,
    Xaheer
