OWB 9.0.2.56 Dump

Hi
We are using Oracle Warehouse Builder 9.0.2.56.
We also want to implement the same version at the client's site. This dump is not currently available on the net.
Could you please tell me how I can get this version? It's very urgent.
Thanks
Narasimha.

Hi Narasimha,
This is the version that is available in the iDS 902 CD pack. So I guess the simplest is to order that from the Oracle store.
Jean-Pierre

Similar Messages

  • OWB 9.03.35.3 + Oracle 9.2 + Core dumps

    We have big problems: after we migrated our packages to the Oracle 9.2 database, nothing works. We use a self-implemented flow manager like the Oracle one, based on a web application using Oracle-specific functions, but after we migrated to 9.2 no package works; most of them cause core dumps!!!!!
    Here is the info we sent to Metalink.
    Does anyone have an idea what may cause these problems?!
    Erik Heindel (DBA)
    Metalink Infos:
    Resolution History
    13-MAR-03 16:58:03
    Can you easily recover from, bypass or work around the problem? = NO
    Does your system or application continue normally after the problem occurs? = YES
    Are the standard features of the system or application still available; is the loss of service minor? = NO
    ### Platform and O/S version, including patchset or service pack level? ###
    HP-UX 64Bit
    ### What version and patchset level of the database are you running? ###
    9.2.0.2
    ### Are you running the most recent patchset? ###
    Yes
    ### Please describe your problem: ###
    Jobs/packages generated with OWB result in a core dump.
    ### If you are receiving errors, please list exact error messages and text: ###
    see attached file 'core'
    ### Did the error generate a trace file? ###
    No
    ### Please list all files that you plan to upload: ###
    core dump file
    ### What was being done at time of error? Any changes since this last worked? ###
    Database was upgraded from 8.1.7.4 to 9.2.0.2
    ### Can error be generated if SQL is run in SQL*Plus or Server Manager? ###
    Yes
    ### What is the frequency of the error? ###
    Consistently
    ### What is the impact to your business because of this problem? ###
    DWH can't go live!!! Packages are needed to load the data into the data warehouse DB.
    ### Are you running any third-party applications? ###
    NO
    ### If your current issue involves 3rd party software, has this vendor been contacted? ###
    Does Not Apply
    Contact me via : E-mail -> [email protected]
    13-MAR-03 17:02:05
    Country: GERMANY
    The customer has uploaded the following file via MetaLink:
    C:\test\core

    Erik,
    Can you identify whether the problem is Warehouse Builder specific or whether it is caused by the database? I will contact the email address that is mentioned in your message.
    Thanks,
    Mark.

  • Problem writing external file to externally mounted disk in Windows

    Folks,
    I've got a puzzling problem with a simple OWB mapping where I'm dumping the contents of a table to an external file.
    Versions: OWB 11.2.0.2 64-bit on Oracle RDBMS 11.2.0.2, Windows 2007 64-bit Enterprise Server.
    When the external files module is hooked up to a location that points to a local disk and directory on the OWB-server, everything works fine - files are created and written.
    When the external files module is hooked up to a location that points to a mounted disk on another Windows 2007 64-bit Enterprise Server, I get "Invalid Path for target file, check if connector is deployed correctly".
    The "File System Location Path" in OWB is set to "N:" (no slashes either way). "Test Connection" reports OK.
    I've given both the Oracle OS user and "Everyone" (for good measure) all rights on the mounted disk, and I can see that the generated package code is using the correct directory, and the directory path is the correct one on the server. The mounted disk (N:) should appear as a local disk to Oracle as far as I can see. I'm able to create and delete files on the disk using the command line on the OWB/DB server.
    I'm scratching my head on this one....
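    (Side note: an OWB file location ultimately resolves to an Oracle directory object, so one way to take OWB out of the equation is to test a server-side write directly; a minimal sketch, with hypothetical directory and grantee names.)
    CREATE OR REPLACE DIRECTORY owb_ext_files AS 'N:\';
    GRANT READ, WRITE ON DIRECTORY owb_ext_files TO owb_runtime_user;
    -- if the Oracle service account cannot see the mapped drive, this fails the same way the mapping does
    DECLARE
      f UTL_FILE.FILE_TYPE;
    BEGIN
      f := UTL_FILE.FOPEN('OWB_EXT_FILES', 'write_test.txt', 'w');
      UTL_FILE.PUT_LINE(f, 'hello');
      UTL_FILE.FCLOSE(f);
    END;
    /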

    "... then mapped that share as a network drive (N:) on server A (the OWB/DB-server)" - I think the problem is the difference between the account used to run the Oracle database (usually the database instance runs under the SYSTEM account) and the account you used to map the share (an interactive session). Even when you make this mapping persistent (by enabling the "Reconnect at logon" option), you don't grant access to the drive to other accounts (including SYSTEM) - the drive will not be visible to other users.
    I think it is possible to solve this by running the Oracle database instance (and the Oracle Listener service) under a non-SYSTEM account (domain or server-local) and creating the persistent network drive mapping for that account.
    Also, it seems there is a workaround to access a mapped network drive under the SYSTEM account:
    http://stackoverflow.com/questions/182750/how-to-map-a-network-drive-to-be-used-by-a-service
    Regards,
    Oleg

  • OWB packages runtime dumps

    Hi everyone,
    I would like to know how to prevent OWB packages from writing dumps to the $ORACLE_HOME/admin/.../udump directory.
    When I run a package from OEM it sometimes writes quite a large dump file like <DBname>ora99999.trc.
    We have a problem with space, so I'd like to stop it.
    Is it a problem of OWB (2i) or of the Oracle database (8.1.7)?
    Thanks for help
    Petr Benes
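    (Not a full answer, but one hedged mitigation for the space problem, assuming the .trc files are ordinary trace files: cap the size any single trace file can reach. On 8.1.7 the value is given in OS blocks.)
    -- e.g. with 512-byte blocks this caps each trace file at about 5 MB
    ALTER SYSTEM SET max_dump_file_size = 10240;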

    Hi David,
    We are close to releasing the windows version. Then the porting team will start on the ports. A rough estimate would be that this will take some 4 - 8 weeks (I'm being conservative here).
    What you may be able to do to overcome this wait is to develop on Win2k and then export/import the runtime schemas onto the AIX box. That will allow all PL/SQL stuff to work. You can then later add the software.
    Note I would only do this in development, not in production unless you really, really have to...
    Jean-Pierre

  • Error API0259 while importing a process flow into OWB 11g

    Hi All,
    I am using OWB 11.1.0.7.0. I am getting the following error while importing a process flow into OWB (the MDL dump was taken from OWB 10gR2):
    error: API0259: The object cannot be edited in Read-Only mode.
    Can anybody please tell me the solution? It's urgent.
    Thanks in advance,
    Siva

    Hi Sutirtha,
    Thanks for your reply; the problem is solved. What I was doing was trying to import only one process flow which is used as a subprocess in another process flow; that was giving me the error.
    So I deleted all the process flows and imported the whole process flow package in one go. That solved the problem.
    Regards,
    Siva

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance-tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc are being used ok. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
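    (A minimal sketch of switching the extended SQL trace described above on and off around a mapping execution; the tracefile identifier and TKPROF options are illustrative.)
    -- tag the trace file so it is easy to find in user_dump_dest
    ALTER SESSION SET tracefile_identifier = 'owb_map_trace';
    -- level 8 captures wait events; level 12 adds bind values
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    -- ... execute the mapping in this session ...
    ALTER SESSION SET EVENTS '10046 trace name context off';
    -- then profile the resulting trace file at the OS prompt, e.g.:
    --   tkprof <sid>_ora_<spid>_owb_map_trace.trc map_profile.prf sys=no sort=exeela,fchela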
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones they depend on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post-mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping (see the sketch after this list)
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
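    (As a starting point for the 10046 on/off item above, the switch could be packaged as PL/SQL procedures invoked from a pre-/post-mapping process; a hedged sketch, procedure names hypothetical.)
    CREATE OR REPLACE PROCEDURE map_trace_on (p_tag IN VARCHAR2) AS
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = ''' || p_tag || '''';
      EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 8''';
    END map_trace_on;
    /
    CREATE OR REPLACE PROCEDURE map_trace_off AS
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 trace name context off''';
    END map_trace_off;
    /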
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, and have an exception report that tells you when a mapping execution time varies by a certain amount (see the sketch after this list)
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
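    (A hedged sketch of the exception report idea above, against a hypothetical repository table map_exec_log(map_name, exec_date, elapsed_secs): flag any run more than 50% slower than that mapping's average.)
    SELECT l.map_name, l.exec_date, l.elapsed_secs, ROUND(a.avg_secs) avg_secs
    FROM   map_exec_log l,
           (SELECT map_name, AVG(elapsed_secs) avg_secs
            FROM   map_exec_log
            GROUP  BY map_name) a
    WHERE  l.map_name = a.map_name
    AND    l.elapsed_secs > a.avg_secs * 1.5
    ORDER  BY l.exec_date;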
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing? Have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a Critical Path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole (stuff like recovery/restart, late-arriving data, and so on).
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Staging Area with OWB 10.2 - necessary or not?

    Hi to all,
    I have read so much about staging areas and OWB 10.2 that I am totally confused: some documents and presentations on the web say you do not need one, others say you do. The thing is, I am planning a DWH and now I am not sure if a staging area is necessary, because the mappings do the ETL jobs internally. Most of my data sources are tables/views/MViews in a database.
    Thank you very much for any help concerning this question!
    Regards
    Thomas

    Would you prefer the answer that you MAY need one? Then again, you may just WANT one!
    For example, if you are building against a high transaction volume, busy 24/7 OLTP system then you may find that you need a local snapshot in order to do a complete build with a consistent set of source data for all your numbers to be consistent.
    Then again, you also may just find that bringing over just delta data into a local snapshot makes for a much more efficient load, rather than running against huge full remote tables if they are not well partitioned and/or indexed.
    Then again, complex joins run against a remote system may run more efficiently if you bring the data across with simple table dumps into a staging area that you can index to optimize your queries, rather than having to deal with the poor performance of complex joins over a dblink. Especially if you need to perform complex joins across more than one db link to multiple source systems. How big a cartesian product do you want bouncing around the network in that sort of scenario? Sure, maybe you can do it - but how much are you going to impact performance across the board doing things like that?
    Is the source system already stressed to the max and sitting on a vintage piece of equipment, while your shiny new DW environment is blessed with tons of resources that will make the ETL run faster by several factors if you first copy the data over locally?
    So, do you need a staging area?
    Fact is that there is no generic correct answer to this question.
    You have to look at the specifics of your data requirements and your environment to answer that question. There are costs and benefits to having a staging area, and you have to determine which way the cost/benefit analysis comes out for your specific project.
    Mike

  • OWB 9.2.0.2.8 download for Windows

    Hi,
    We need the software dump or a download link for OWB 9.2.0.2.8 for Windows.
    If anyone can help us in this regard...
    Thanks in Advance.
    Sachin Kumar

    Hi there...
    It's not available anymore on OTN. Maybe you should contact your Oracle representative or try on Metalink. I'm not sure metalink will be of any help in this case, though....
    Why exactly do you need such an out-of-date version? If all you have developed is in OWB 9.2, maybe you should try a later release and upgrade your OWB repository. Did you back up your OWB projects as MDL files?
    Good luck!
    Marcos

  • Splitter operator doesn't use multi-table inserts in OWB... very very urgent

    Hi,
    I am using OWB 9i to carry out transformations. I want to copy the same sequence numbers to two target tables.
    Scenario:
    I have a source table, source_table, which is connected to a splitter, and the splitter is used to dump the records into two target tables, namely target1_table and target2_table. I have a sequence which is also an input to the splitter, so that I can have the same sequence number in the two output groups of the splitter. I then map the sequence number from the two output groups to the two target tables, expecting to have the same sequence number in the target tables. But when I look at the generated code, it creates two procedures and effectively inserts sequence numbers into the target tables which are not consistent. Please help me get the same, consistent sequence numbers in both target tables.
    The above example works in row-based operating mode but not in set-based mode. Please give me a valid explanation.
    The OWB PDF says that the splitter uses multi-table inserts for multiple targets. After seeing the generated code for set-based operations, I don't agree.
    Its very urgent.
    thanks a lot in advance.
    -Sharat

    Hi Mark,
    You got me wrong; let me explain the problem again.
    RDBMS oracle 9.2.0.4
    OWB 9.2.0.2.8
    I have three tables T1,T2 and T3.
    T1 is the source table and the remaining two tables T2 and T3 are target tables.
    Following are the contents of table T1 -
    SQL> select * from T1;
    DEPTNAME LOCATION
    COMP PUNE
    MECH BOMBAY
    ELEC A.P
    Now I want to populate the two destination tables T2 and T3 with the records in T1.
    For this I am using the splitter operator in OWB, which is supposed to generate multi-table inserts, but unfortunately it's not doing so when I generate the SQL. There is no "insert all" command in the SQL it generates.
    What I want is: when I populate T2 and T3 I use a sequence generator, and I want the same sequence values in T2 and T3, e.g.
    SQL> select * from T2;
    NEXT_VAL DEPTNAME LOCATION
    1 COMP PUNE
    2 MECH BOMBAY
    3 ELEC A.P
    SQL> select * from T3;
    NEXT_VAL DEPTNAME LOCATION
    1 COMP PUNE
    2 MECH BOMBAY
    3 ELEC A.P
    I am able to achieve this when I set the operating mode to ROW BASED. I am not getting the same result when I set the operating mode to SET BASED.
    Help me....
    -Sharat
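    (For reference, the multi-table insert the poster expects OWB to generate might look like the following hand-written sketch; my_seq is a hypothetical sequence. In an INSERT ALL statement Oracle increments NEXTVAL once per row returned by the subquery, so both branches receive the same value for each row - the set-based behaviour being asked for.)
    INSERT ALL
      INTO t2 (next_val, deptname, location) VALUES (my_seq.NEXTVAL, deptname, location)
      INTO t3 (next_val, deptname, location) VALUES (my_seq.NEXTVAL, deptname, location)
    SELECT deptname, location FROM t1;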

  • Load SQL Server data with OWB 11.2

    Hi,
    We are using OWB 11.2 on Oracle Exadata Database Machine X2-2.
    We want to load data from multiple SQL Server tables into our DWH on Oracle 11.2 with OWB 11.2.
    We want to load approx. 100 million records a day.
    I've read some articles about this, and the advice was to dump the data from SQL Server to files and load the files with OWB.
    We've tried to make a connection to SQL Server with OWB; this only works partially.
    In the OWB client I can import the database objects and sample the data from the SQL Server tables (all done on an MS Windows client).
    From the server I am not able to run a very basic mapping which has a SQL Server source table and an Oracle target table, without any difficult transformations.
    Is there another method/best practice to get the data over?
    I've read something about a database link from Oracle to SQL Server, but I don't know the details.
    Is there a tool that runs under Linux that can export data from SQL Server to a text file?
    Regards,
    Emile

    Hi Emile,
    regarding extracting data from MSSQL with OWB on a Unix platform (using Generic Connectivity):
    Re: SQLServer access from AIX Warehouse builder
    Re: OWB on Solaris Connectivity with SQL SERVER on Windows
    "We want to load aprox. 100 million records a day. I've read some articles about this and the advise was to dump the data from SQL Server to files and load the files with OWB." - 100 million records per day is not a problem for daily extraction from MSSQL Server if you have 1-2 hours.
    In my opinion dumping to text files is bad practice and unnecessary unless the customer has special requirements (for example, for security reasons).
    "SQL Server source table and a Oracle target table without any difficult transformations" - In my opinion the best way to process data from MSSQL is to extract the data into a staging area (schema) on the Oracle DB with mappings as simple as possible (ONLY filters, without any joins), and to perform most of the data processing in the staging area or while moving from staging to the DWH.
    Also look at the OWB user guide (how to use Generic Connectivity in OWB):
    http://download.oracle.com/docs/cd/E11882_01/owb.112/e10582/loading_ms_data.htm#i1064950
    Regards,
    Oleg
    Edited by: added link to OWB doc with description of using Generic Connectivity
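    (A hedged outline of the Generic Connectivity route described above, for 11.2 on Linux; the DSN, gateway SID and link names are hypothetical, and the gateway configuration is summarised in comments.)
    -- 1. Configure an ODBC DSN for SQL Server on the Oracle server (e.g. unixODBC + FreeTDS).
    -- 2. Create $ORACLE_HOME/hs/admin/initMSSQL.ora containing at least:
    --      HS_FDS_CONNECT_INFO = mssql_dsn
    -- 3. Add a listener SID entry with PROGRAM=dg4odbc and a tnsnames entry with (HS=OK),
    --    then in the Oracle database:
    CREATE DATABASE LINK mssql_link
      CONNECT TO "sqlserver_user" IDENTIFIED BY "password"
      USING 'MSSQL';
    -- identifiers are usually case-sensitive through the gateway, hence the quotes
    SELECT COUNT(*) FROM "dbo"."SomeTable"@mssql_link;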

  • Best way to JOIN with OWB 10.2

    Hi Gurus,
    I'm a newbie at working with OWB and I also apologize for my English. I have the following issue: I'm using Oracle Warehouse Builder and I want to load data into a new table, Table3, from two distinct source tables, Table1 and Table2.
    It should look like this: Table3.col1 = Table1.col2; Table3.col2 = Table1.col3; Table3.col3 = Table2.colb.
    I mean it is a kind of JOIN, but the columns of Table3 have to be just the same as the columns from Table1 and Table2, and in no other combination.
    Table1 : col1 | col2 | col3
    Table2: cola | colb | colc
    Table3: Table1.col2 | Table1.col3 | Table2.colb
    Can someone help me?
    Thx

    Vinzsanity,
    The problem is that you are trying to model a query that requires something more than the simple join you seem to be hoping for.
    Joining based on simple table row order is not something relational databases are really designed to do well. They join on values. For example, if you simply try to write basic SQL to make your join, you discover the problem:
    select a.col1, b.colb
    from   table1 a,
           table2 b
    where  a.rownum = b.rownum;
    What you get is an ORA-01747 error, as the rownum pseudocolumn is not valid in this context. But if you don't find a way to join on row order you get the cross product - which is NOT what you want.
    The simplest pure SQL join to get your results as described in your sample data is:
    select a.col1, b.cola
    from   (select rownum rnum, s.* from table1 s) a,
           (select rownum rnum, t.* from table2 t) b
    where  a.rnum(+) = b.rnum;
    but this only implies a possible one-way outer join to meet your described data sample. If you want a full outer join to handle the possibility that either table could have more or fewer rows than the other, then:
    select a.col1, b.cola
    from   (select rownum rnum, s.* from table1 s) a
           full outer join
           (select rownum rnum, t.* from table2 t) b
           on (a.rnum = b.rnum);
    Now, clearly this is NOT going to be modelled as just a simple join in OWB. First, you need to find a way to add the rownum pseudocolumn to both table result sets in order to be able to join on them. For example, you could use an expression object for each table, pass all of the fields straight from the expression input to the expression output, and then add a rownum column to the output as well. Then join the two with a joiner defined as a full outer join on the two rownum fields and map the required columns to the target table.
    Or, you could create a view:
    Create or replace view table1_and_2_outer_join as
    select a.*, b.*
    from   (select rownum rnum1, s.* from table1 s) a
           full outer join
           (select rownum rnum2, t.* from table2 t) b
           on (a.rnum1 = b.rnum2);
    And use that view as the source object in your mapping.
    Or create two views:
    Create or replace view table1_with_rownum as
    select rownum rnum1, s.* from table1 s;
    Create or replace view table2_with_rownum as
    select rownum rnum2, t.* from table2 t;
    And then use the two views as your sources and join them together using the joiner object, defined as a full outer join on the row number values.
    Any of those three options will get you what you seem to want.
    Cheers,
    Mike

  • OWB Error: Connection lost when assigning a schedule to a process flow

    I have:
    - OWB 11Gr2
    - Oracle DB 10Gr2 Source/Target
    I have created a schedule module and set the location to be one of the target DB locations.
    Created a one-off schedule, set for daily execution.
    Opened the configuration of a process flow and assigned the schedule to it.
    After this, I tried to save the repository, and got the error message:
    "Repository Connection Error: The connection to the repository was lost, because of the following database error: Closed Connection
    Exit OWB without committing."
    The connection with the repository was lost and I can't do anything more.
    I tried to create a separate location for the schedule, but it didn't make a difference.
    What's happening?
    Thanks,
    Afonso

    We are running 11.2.0.2, and when looking at the trace log:
    Dump continued from file: /data02/oramisc/diag/rdbms/dpmdbp/dpmdbp/trace/dpmdbp_ora_13503.trc
    ORA-00600: internal error code, arguments: [15570], [], [], [], [], [], [], [], [], [], [], []
    with this query:
    "========= Dump for incident 16022 (ORA 600 [15570]) ========
    *** 2011-10-18 09:52:25.445
    dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
    ----- Current SQL Statement for this session (sql_id=7a76281h0tr2p) -----
    SELECT usage.locuoid, usage.locid, cal_property.elementid, cal_property.classname, schedulable.name || '_JOB' name, schedulable.name || '_JOB' logicalname, cal_property.uoid, cal_property.classname
    , cal_property.notm, cal_property.editable editable, cal_property.customerrenamable, cal_property.customerdeletable, cal_property.seeded, cal_property.UpdateTimestamp, cal_property.strongTypeName,
    cal_property.metadatasignature signature, 0 iconobject FROM Schedulable_v schedulable, CMPStringPropertyValue_v cal_property, CMPCalendarInstalledModule_v cal_mod, CMPPhysicalObject_v sched_phys, CM
    PCalendar_v cal, ( select prop.firstclassobject moduleid, prop.value locuoid, 0 locid from CMPPhysicalObject_v phys, CMPStringPropertyValue_v prop where prop.logicalname = 'SCHEDULE_MODULE_CONFIG.DE
    PLOYMENT.LOCATION' and prop.propertyOwner = phys.elementid and phys.namedconfiguration = 42636 union select installedmodule, '' locuoid, location locid from CMPLocationUsage_v locUse where deployme
    ntdefault = 1 and not exists ( select null from CMPPhysicalObject_v phys, CMPStringPropertyValue_v prop where prop.logicalname = 'SCHEDULE_MODULE_CONFIG.DEPLOYMENT.LOCATION' and prop.firstclassobjec
    t = locUse.installedmodule and prop.propertyOwner = phys.elementid and phys.namedconfiguration = 42636) ) usage WHERE cal_mod.elementid = usage.moduleid and cal_mod.elementid = cal.calendarmodule a
    nd substr(cal_property.value,0,32) = cal.uoid and cal_property.logicalname='SCHEDULABLE.PROPERTY' and (cal_property.firstclassobject = schedulable.elementid or cal_property.firstclassobject = sched
    _phys.elementid and sched_phys.logicalobject = schedulable.elementid) and cal_mod.owningproject = 42631 and cal_property.propertyowner = sched_phys.elementid and sched_phys.namedconfiguration=42636
    ORDER BY schedulable.name || '_JOB'"
    Could this be the following bug?
    Bug 12368039 - ORA-600 [15570] with UNION ALL views with Parallel steps
    This note gives a brief overview of bug 12368039.
    The content was last updated on: 18-MAY-2011
    Fixed:
    This issue is fixed in
    * 11.2.0.3

  • How to import Process Flows without OWB

    Hi,
    I'm having trouble deploying my process flows to my PROD environment.
    Process flows compile and run OK in the TEST environment, and we are supposed to deploy to both environments (TEST and PROD) from the same OWB repository, which is correct.
    But due to technical problems with the network, at the moment I cannot see the PROD server from the OWB repository installation, so I need to deploy manually (using scripts via SQL*Plus or some other tool).
    I can export the PFs from the 'generate' option provided by OWB, so I can get the file with the PFs' source code.
    What I don't know is:
    1) how to import/create those process flows in the new instance manually (I mean from outside OWB) from the exported file?
    2) where should I create them - in the Workflow instance or in the OWB repository instance?
    I'm working with OWB 10gR2
    Can somebody help me with this?
    Thanks in advance
    Max

    Many kinds of errors. For instance:
    IMP-00017: following statement failed with ORACLE error 23327:
    "BEGIN SYS.DBMS_DEFER_IMPORT_INTERNAL.QUEUE_IMPORT_CHECK('ORACN230.WORLD',"
    "'IBMPC/WIN_NT-8.1.0'); END;"
    IMP-00003: ORACLE error 23327 encountered
    ORA-23327: imported deferred rpc data does not match GLOBAL NAME and platform of importing db
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_DEFER_IMPORT_INTERNAL", line 30
    ORA-06512: at line 1
    I dropped all users except for schemas related to the database structure, such as SYS, SYSTEM, OUTLN, SYSMAN, XDB, WMSYS. I do not have access to the original database because it belongs to another organization. All I got is the dump file and the log, from which I know the export is a full database export done as user SYS. The original database and the destination database (which I handle) probably have different features (the optional scripts run from rdbms/admin at database creation might be different). I did use IGNORE=Y every time. I tried SYS or SYSTEM to import but am always scared off by too many errors.
    Regards,
    Richard

  • Error Installing OWB Repository

    I exported a 9.2.0.1 Database and imported the user into a new Database Instance.
    However, not all the runtime and repository components were reimported into the new schema.
    I dropped and reinstalled the runtime successfully.
    I also dropped the repository but have been unable to reinstall the repository.
    OWB Client NT Client Version: 9.0.3.36.2
    DB OS: Sun Solaris
    DB Version 9.2.0.1.0
    The following is the error in the repository error log:
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility: java.sql.SQLException: ORA-00904: "OWM_VIEW_UTILITIES"."CLASSIFIED_OBJ_TYPE": invalid identifier
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility: ORA-06512: at line 2
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility: [ at runSqlScript(RuntimeInstaller.java ) ].
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility: this is SQL error
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility: java.sql.SQLException: ORA-00904: "OWM_VIEW_UTILITIES"."CLASSIFIED_OBJ_TYPE": invalid identifier
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility: ORA-06512: at line 2
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility: java.sql.SQLException: ORA-00904: "OWM_VIEW_UTILITIES"."CLASSIFIED_OBJ_TYPE": invalid identifier
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility: ORA-06512: at line 2
    Sat Jun 21 17:50:12 VET 2003
    oracle.wh.util.DebugUtility:

    Kirt,
    Are you rerunning the repository assistant, or are you trying to restore from a dump? Officially we do not support restoring from a dump (not yet, at least) ... but there are some notes on metalink on how to do it anyway:
    Repository, note 228918.1:
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=228918.1
    Runtime, note 235658.1:
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=235658.1
    Mark.

  • How to import OWB 10.0.2 mdl file to 11g

    Hi ,
    I have an OWB MDL file from 10.0.2. I want to import the same MDL file into 11g. Is it possible to import it? If yes, please guide me on the steps I have to follow.
    Any related documents or links would be appreciated.
    Thanks in Advance
    Regards
    Kumar

    Dan wrote:
    Hi all,
    DB 11G,
    Oracle software 11g on 2 servers (same version)
    I exported using expdp and now I want to import to a server with only the Oracle database software installed (no DB yet).
    Can I do that?
    Thanks
    First you need to have the DB created at the target; only then will you be able to import the dump taken from the source. You can create the same tablespaces as in the source to avoid errors (if the directory structure is different at the target).
    You can create database by dbca or manually.
    http://docs.oracle.com/cd/E11882_01/server.112/e10595/create003.htm
    http://www.oracle-base.com/articles/11g/OracleDB11gR2InstallationOnEnterpriseLinux5.php
    For importing database
    http://docs.oracle.com/cd/E11882_01/server.112/e23633/expimp.htm
    http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm
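    (A minimal sketch of the import steps described in the reply above; the directory path, dump file name and credentials are illustrative.)
    -- on the target database, point a directory object at the dump file location:
    CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/exports';
    GRANT READ, WRITE ON DIRECTORY dp_dir TO system;
    -- then run the import at the OS prompt:
    --   impdp system/<password> directory=dp_dir dumpfile=expfull.dmp logfile=impfull.log full=y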
