Triggers in OWB

Hi all,
Does anybody know how database triggers can be added through OWB?
regards,
Zahid Naseer

Hi Zahid
I don't believe that is a supported option within OWB at this release.
I understand you want to keep all the logic associated with a table in a single repository, but although I haven't tried it, I can't recall any place within the current product to do that.
I'd also like to see the addition of analytics that's coming in the next release.
Good luck.
/gary

Similar Messages

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if the mapping runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then to check that indexes etc. are being used properly. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings while staying sure that everything still compiles and runs.
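    To make the utPLSQL idea concrete, a unit test for a mapping could be as simple as the following sketch. It uses utPLSQL v1-style conventions (a ut_-prefixed test package run via utplsql.test); the package name and the SALES_STG / SALES_FACT tables are invented for illustration.
    CREATE OR REPLACE PACKAGE ut_load_sales AS
      PROCEDURE ut_setup;
      PROCEDURE ut_teardown;
      PROCEDURE ut_row_counts_match;
    END ut_load_sales;
    /
    CREATE OR REPLACE PACKAGE BODY ut_load_sales AS
      PROCEDURE ut_setup IS BEGIN NULL; END;     -- e.g. seed SALES_STG with known test rows
      PROCEDURE ut_teardown IS BEGIN NULL; END;  -- e.g. clean the test rows back out
      PROCEDURE ut_row_counts_match IS
        l_src NUMBER;
        l_tgt NUMBER;
      BEGIN
        SELECT COUNT(*) INTO l_src FROM sales_stg;
        SELECT COUNT(*) INTO l_tgt FROM sales_fact;
        utassert.eq('all staging rows reached the fact table', l_tgt, l_src);
      END;
    END ut_load_sales;
    /
    -- after executing the mapping, run the test with:
    -- EXEC utplsql.test('load_sales')
    After a change to the mapping or data model, re-running the suite tells you straight away whether the load logic still holds.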
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimizing Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post-mapping triggers?)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping (see the sketch after this list)
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - Identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - Put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - Define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
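    As a first cut, the pre- and post-mapping trace switches mentioned above could be as simple as the following pair of procedures (a minimal sketch; using tracefile_identifier to tag the trace file is just one way of making it easy to find):
    CREATE OR REPLACE PROCEDURE trace_on (p_mapping_name IN VARCHAR2) IS
    BEGIN
      -- tag the trace file so it is easy to locate in user_dump_dest
      EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = ''' || p_mapping_name || '''';
      -- level 12 = level 4 (bind variables) plus level 8 (wait events)
      EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 trace name context forever, level 12''';
    END trace_on;
    /
    CREATE OR REPLACE PROCEDURE trace_off IS
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 trace name context off''';
    END trace_off;
    /
    The resulting trace file can then be formatted with TKPROF or a Method R-style profiler, and the output loaded into the proposed repository tables.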
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - Get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - Identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, do you have existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    For any feedback, add it to this forum posting or send it directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitively: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all, just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!). (OK, I'll accept MS Project)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it back then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Setting END_DATE in OWB 11g R1/R2 SCD type 2

    Hi
    I have this issue which needs an immediate solution. The SCD2 design in OWB 11g R1/R2 works well except that it does not allow a programmer to populate the value of END_DATE with a field from source.
    I observed that one can set the value of end_date in the dimension properties (via inspector) for initial records/open records.
    a. default effective time of initial record and
    b. default effective time of open record
    My question is: for the above two properties, set in the SCD history logging properties at target level in mappings, can we set them to a particular date field coming in from the source?
    The only workaround I found was to have a function that returns a particular value from the source, which can be called within the properties of these two fields; other than that, we cannot directly populate the END_DATE field, as it is a derived field within the SCD2 logic generated in the background.
    If you could answer this it would be great.
    Birdy

    Hi Oleg
    I want the STA_FROM to drive my DIM_TO as well
    So I may have sets of records say
    Business/Natural key : Account number
    Triggering column: Name
    Suppose I load all of this today
    Account_number STA_FROM Name
    1001 01-aug-2010 Dummy1
    1001 03-aug-2010 Dummy2
    1001 05-aug-2010 Dummy3
    1001 07-aug-2010 Dummy4
    1002 01-sept-2010 Dummy_1
    Hence Dimension data will be
    Account_key Account_number EFFECTIVE_DATE END_DATE Name
    1 1001 01-aug-2010 03-aug-2010 Dummy1
    2 1001 03-aug-2010 05-aug-2010 Dummy2
    3 1001 05-aug-2010 07-aug-2010 Dummy3
    4 1001 07-aug-2010 31-dec-9999 Dummy4
    5 1002 01-sept-2010 31-dec-9999 Dummy_1
    If you notice, I am driving both Effective and end date by the STA_FROM
    I have already achieved this in the hard-coded SCD logic by using the LEAD function to derive my end dates for "force-closing" records.
    But the SCD wizard does not allow us to drive the value of the end date based on a field coming from the source. Now in my case, the STA_FROM can also be the value of an input load date parameter in some cases.
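    For reference, the hand-coded derivation looks roughly like this (a sketch only; STA_ACCOUNTS is an assumed name for the staging table holding the rows above):
    SELECT account_number,
           sta_from AS effective_date,
           -- the next STA_FROM for the same account closes this record;
           -- open records default to the high date
           LEAD(sta_from, 1, DATE '9999-12-31')
             OVER (PARTITION BY account_number ORDER BY sta_from) AS end_date,
           name
    FROM   sta_accounts;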
    It would have been nice to allow population of END_DATE on target dimension operator instead of giving only the option of setting it in the operator properties.
    Hope it's much clearer now
    Birdy

  • Hang in Mapping editor Canvas in OWB 10G r2

    For the second time I have a problem with the mapping editor. All objects and mappings are invisible. The background color is grey, and it's not possible to drag new objects onto the canvas. The only workaround I have found is a full new installation of the client software.
    Any ideas why this happens, and possible workarounds?


  • No commit and error triggers

    Hi,
    In OWB 10.1.0.4.0, I see the parameters NO COMMIT and ERROR TRIGGERS in the mapping configuration properties, under code generation options. I was not able to find any details regarding these.
    Can anyone tell me the function of these two parameters?

    Error Trigger - name a procedure to be fired when the map fails. You may want to execute some DDLs here.
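    For example, something along these lines (purely illustrative - OWB just needs the name of an existing procedure, and the my_etl_errors table is invented):
    CREATE OR REPLACE PROCEDURE log_map_failure IS
      PRAGMA AUTONOMOUS_TRANSACTION;  -- so the log row survives any rollback of the failed map
    BEGIN
      -- assumes a custom table: my_etl_errors(logged_at DATE, message VARCHAR2(200))
      INSERT INTO my_etl_errors (logged_at, message)
      VALUES (SYSDATE, 'Mapping failed - check the runtime audit browser for details');
      COMMIT;
    END log_map_failure;
    /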
    I don't know the NO COMMIT parameter. Usually, when you click on a parameter you can see a brief description of it in the lower part of the configuration window.
    Isn't there one for NO COMMIT?
    Regards
    Marcos

  • OWB general questions for effective use.

    Hi all,
    I have been using OWB for a while now, and am getting to the point where I want to make sure I am using it effectively.
    For example, how does one decide what to include in one project, or whether to split it up into multiple projects? I am loading a warehouse, and so far I am only loading raw data into tables.
    My next step will be to perform ETL on the raw data and start forming more structured warehouse data. Would that step be better contained in a separate project? Would I need to repeat the definitions of the tables in the loading project? Should I just keep the whole thing in one project? The loading project is quite large, as we have raw data from many sources, and it seems that getting one file in takes about 5 - 7 OWB objects (flat file, external table, 2 - 3 mappings, process flow, 1 - 2 tables).
    So I have dozens of mappings, tables, etc.
    Even though much of the data comes from different places, it is generally used together by the end users, and the ETL will likely also need to use most of it together.
    Is there any "Best Practices" posted anywhere?
    Another question that has come up is this: it seems the idea is to create the warehouse structures completely in OWB and deploy them to the DB. However, OWB doesn't allow for a full table definition, for things like triggers, or for advanced features from a later DB version.
    So does one just create a "phantom" entry in OWB that is never deployed and then create the actual table manually, or deploy and then modify manually to add the trigger?
    Or are we not supposed to be using DB triggers, and instead control everything through OWB?
    Any insight would be appreciated.
    Thanks

    Hi
    I think the kind of questions you are asking are aimed more at methodologies than at OWB. There are plenty of sites where you can get this kind of info; one, though not necessarily the best, being <http://www.ittoolbox.com/>
    In any case we use three projects and multiple schemas
    project & schema 1 is used to collect data quickly from multiple sources
    project & schema 2 normalizes the data (acts as the storage repository)
    project & schema 3 is where the datamarts exist (de-normalized data)
    This approach allows you to isolate your integration layer from your reporting layer. Most changes only affect one of the layers, not all.
    As far as creating your structures in OWB is concerned, I see no problem, provided you are using a good ER tool and have ironed out any potential problems.
    I have certainly created triggers manually and added them after deployment, but in most cases you can use Transformations and post-mapping and pre-mapping processes to do the same thing; after all, the data should only get into the target through a mapping. If it gets in any other way, you have a hole in your bucket.
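    For instance, the kind of housekeeping a trigger might otherwise do can often live in a procedure attached as a Post-Mapping Process operator. A rough sketch (the procedure and both tables are invented for illustration):
    CREATE OR REPLACE PROCEDURE refresh_sales_summary IS
    BEGIN
      -- maintain a summary table that a row-level trigger might otherwise keep current
      MERGE INTO sales_summary s
      USING (SELECT region, SUM(amount) AS total
             FROM   sales
             GROUP BY region) x
      ON (s.region = x.region)
      WHEN MATCHED THEN UPDATE SET s.total = x.total
      WHEN NOT MATCHED THEN INSERT (region, total) VALUES (x.region, x.total);
    END refresh_sales_summary;
    /
    Because it runs once, after the map has loaded the target, it avoids the row-by-row overhead a trigger would add.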
    Chris

  • Using OWB mappings with Oracle CDC/Streams and LCRs

    Hi,
    Has anyone worked with Oracle Streams and OWB? We're looking to leverage Streams to update our data warehouse using Streams to apply changes from the transactional/source DB. At some point we seem to remember hearing that OWB could leverage Streams, perhaps even using the Logical Change Records (LCRs) from Streams as input to mappings?
    Any thoughts much appreciated.
    Thanks,
    Jim Carter

    Hi Jim,
    We've built a fairly complex solution based on streams. We wanted to break up the various components into separate entities so that any network failure or individual component failure wouldn't cause issues for the other components. So, here goes:
    1) The OLTP source database is streaming LCRs to our data warehouse, where we keep an operational copy of production, updated daily from those streams. This allows various operational reports to be run/rerun in a given day with the end-of-yesterday picture without impacting the performance of the source system.
    2) Our apply process on the datamart side actually updates TWO copies of the data. It does a default apply to our operational copy of production, and each of those tables has triggers that put a second copy of the data into daily partitioned tables. So, yesterday's partition has only the data that was actually changed yesterday. After the default apply, we walk the Oracle dependency tree to fill in all of the supporting information so that yesterday's partition includes all the data needed to run our ETL queries for that day.
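    As a rough sketch of that second-copy mechanism (all table and column names here are invented; the dependency-tree walk described below is a separate, later step):
    CREATE OR REPLACE TRIGGER trg_address_second_copy
    AFTER INSERT OR UPDATE OR DELETE ON ops_address
    FOR EACH ROW
    DECLARE
      l_change CHAR(1);
    BEGIN
      l_change := CASE WHEN INSERTING THEN 'I'
                       WHEN UPDATING  THEN 'U'
                       ELSE 'D' END;
      -- write the changed row into today's partition of the staging copy
      INSERT INTO stg_address_daily (load_date, address_id, street, city, change_type)
      VALUES (TRUNC(SYSDATE),
              NVL(:NEW.address_id, :OLD.address_id),
              :NEW.street, :NEW.city, l_change);
    END;
    /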
    Example: Suppose yesterday an address for a customer was updated. Streams only knows about the change to the address record, so the automated process would only put that address record into the daily partition. The dependency walk fills in the associated customer, date of birth, etc. data into that partition so that the partition holds all of the related data to that address record for updates, without having to query against the complete tables. By the same token, a change to some other customer info will backfill the address record for this customer too.
    Now, our ETL queries run against views created over these partitioned tables so that they are only looking at the data for that day (the view s_address joins from our control tables to the partitioned address table so that we are only seeing one day's address records). This means that the ETL is running against the minimal subset of data required to update dimensions and create facts. It also means that, for example, if there is a problem with the ETL we can suspend running ETL while we fix the problem, and the streaming process will just go on filling partitions until we are ready to re-launch ETL and catch up - one day at a time. We also back up the data mart after each load so that, if we discover an error in ETL logic and need to rebuild, we can restore the datamart to a given day and then reprocess the daily partitions in order very simply.
    We have added control fields in those partitioned tables that show which record was inserted/updated/deleted in production, and which was added by the dependency walk so, if necessary, our ETL can determine which data elements were the ones that changed. As we do daily updates to the data mart as our finest grain, this process may update a given record in a given partition multiple times, so that the status of this record at the end of the day in that daily partition shows the final version of that record for the day. So, for example, if you add an address record and then update it on the same day, the partition for that day will show the final updated version of the record, and the control field will show this to be a newly inserted record for the day.
    This satisfies our business requirements. Yours may be different.
    We have a set of control tables which manage what partition is being loaded from streams, and which have been loaded via ETL to the datamart. The only limitation is that, of course, the ETL load can only go as far as the last partition completely loaded and closed from streams. And we manage the sizing of this staging system by pruning partitions.
    Now, this process IS complex, and requires a fair chunk of storage, but it provides us with the local daily static copy of the OLTP system for running operational reports against without impacting production, and a guaranteed minimal subset of the OLTP system for speedy ETL runs.
    As for referencing LCRs themselves, we did not go that route due to the dependency issues (one single LCR will almost never include all of the dependent data from which to update a dimension record or build a fact record, so we would have had to constantly link each one with the full data set to get all of that other info).
    Anyway - just thought our approach might give you some ideas as you work out your own approach.
    Cheers,
    Mike

  • Starting and stopping triggered samples

    The EXS24 sampler claims to be a plug-in that functions exactly like a sampler should. Prior to Logic Pro 7, I used an SP-606 sampler that had three different ways a sample could be played:
    1. Drum - this function played the sample as long as the button (or note) was held down, then stopped playing once released.
    2. Trigger - this started the sample and let it play until the end of the entire audio region.
    3. Loop - this looped the sample based on the specified start and stop points.
    However, under the second function, trigger, you could trigger the sample to start and stop and restart and stop every time you hit the sample button or sent a note.
    Does Logic have this function anywhere in the EXS24?
    For instance, if I had a 15-second-long sample and sent a MIDI note from a controller to start it, how do I stop it and restart it at will?
    The only functions I see available in Logic are the 'drum' function (where the sample releases and stops depending on how long you hold down the note) and the trigger function (but in Logic a second trigger does not stop the sample -
    it triggers a second sample to play again, so that multiple samples are playing simultaneously).
    Any help would be greatly appreciated.

    Start Oracle RAC Database and Listener
    Follow the Oracle Database documentation on starting and stopping Oracle Database.
    To verify the Oracle Warehouse Builder Runtime Services:
    Log on to the database machine as the runtime repository owner.
    Run the ORACLE_HOME\owb\rtp\sql\service_doctor.sql script.
    Note: The runtime repository owner is dwh_ws_owner and it is a database user.
    Starting Oracle BI Infrastructure
    •     Login to Oracle BI Server
    •     Navigate to location /home/oracle/..
    •     Execute Script startBI.sh for Starting Oracle BI Services
    If you have to start Services Manually -
    Order of Starting Processes on BI Servers
    •     Oracle Business Intelligence Server
    •     Oracle Business Intelligence Presentation Services
    •     Oracle Business Intelligence Scheduler
    •     Oracle Business Intelligence Cluster Controller
    Use the scripts in sequence with parameter start
    •     Navigate to location /home/oracle/biserver/OracleBI/server/bin directory
    Example:
    •     ./run-sa.sh start
    •     ./run-saw.sh start
    •     ./run-sch.sh start
    •     ./run-ccs.sh start
    Stopping Oracle BI Infrastructure
    •     Login to Oracle BI Server
    •     Navigate to location /home/oracle/scripts
    •     Execute Script stopBI.sh for Stopping Oracle BI Services
    If you have to stop Services Manually -
    Order of Stopping Processes on BI Servers
    •     Oracle Business Intelligence Cluster Controller
    •     Oracle Business Intelligence Server
    •     Oracle Business Intelligence Presentation Services
    •     Oracle Business Intelligence Scheduler
    Use the scripts in sequence with parameter stop
    •     Navigate to location /home/oracle/biserver/OracleBI/setup directory
    Example:
    •     ./run-ccs.sh stop
    •     ./run-sa.sh stop
    •     ./run-saw.sh stop
    •     ./run-sch.sh stop

  • Upgrade to OWB 10.2.0.2 - worth it?

    Has anyone put in this upgrade? I had an open SR due to the issue that I could not perform very many dimension role attachments to a cube without the product entering an infinite loop and effectively hanging the client. Metalink said that this patch would correct that, but I saw nothing in the documentation to indicate as much. This is a rather large upgrade and I am not anxious to install it without specific knowledge that it will fix my problem. Also, the OWB patch before that, correcting null comparison on triggering dimension columns, has caused my dimension maps for Type 2 dimensions to run almost 3 times longer than before. (Thanks Oracle.)
    Paul Gilbo
    VACO Technology
    Richmond, VA

    We upgraded to get round bug 5263515 (TYPE II DIMENSION TRIGGER ATTRIBUTE ISSUE WITH NULL VALUE).
    The upgrade was technically simple; however, it corrupted all of our dimension objects. We ended up in a position where any map that used a dimension wouldn't validate.
    We've a TAR open with support, but so far the only solution we've found is to re-create the dimension objects from scratch.
    My advice would be to make sure you implement this patch in a test environment first.

  • OWB 10g R2 with Advanced Queues

    All,
    We are currently using 9.2.0.2.8 and make extensive use of Advanced Queues in many of our mappings (we've put a lot of effort into getting this to work "real time"). We are now looking to upgrade from 9.2.0.2.8 to 10.2.0.2 - we've upgraded the mappings OK and the queues are there OK.
    Going forward, it looks like AQ has been removed as a source for new mappings in OWB 10g R2. Can anyone shed any light on this - will it be included in a future patch, or is there a workaround?
    I've looked at the OpenWorld presentations and making OWB process data "near real time" seems to be a big new feature.
    Thanks
    Craig

    I raised a TAR on Metalink for this issue; here is the reply I got:
    This is what I received from Development:
    The real-time feature was pulled from OWB 10gR2 and this also included the AQ import which was in 10.1; I think they must have been tightly coupled. The workaround involves coding: the pre-Paris solution basically created a temporary table where messages were staged, and this queue table was used for the map; the map had pre/post-map triggers for initializing/finalizing the queue. To use it effectively it would be best to create some scripts that generate the appropriate SQL scripts for the supporting map queue table and also the PL/SQL procedures representing the pre/post-map triggers. Similar manual coding is applicable for using CDC within OWB for OWB 10gR2.
    These features are planned for a future release.
    Implementation of this workaround is something you can do or Oracle Consulting can do. Support cannot assist in developing a solution for this.
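    To flesh that workaround out a little, the pre-map procedure might look something like the sketch below. Everything here is assumed rather than taken from the reply: the queue name src_changes_q, the payload object type my_msg_type and the staging table map_queue_stage are all invented.
    CREATE OR REPLACE PROCEDURE premap_stage_queue IS
      l_dq_opts     DBMS_AQ.DEQUEUE_OPTIONS_T;
      l_msg_prop    DBMS_AQ.MESSAGE_PROPERTIES_T;
      l_msg_id      RAW(16);
      l_payload     my_msg_type;                  -- assumed payload object type
      e_no_messages EXCEPTION;
      PRAGMA EXCEPTION_INIT(e_no_messages, -25228);
    BEGIN
      l_dq_opts.wait := DBMS_AQ.NO_WAIT;          -- don't block when the queue is empty
      LOOP
        DBMS_AQ.DEQUEUE(queue_name         => 'src_changes_q',
                        dequeue_options    => l_dq_opts,
                        message_properties => l_msg_prop,
                        payload            => l_payload,
                        msgid              => l_msg_id);
        INSERT INTO map_queue_stage (id, data)
        VALUES (l_payload.id, l_payload.data);
      END LOOP;
    EXCEPTION
      WHEN e_no_messages THEN
        COMMIT;  -- queue drained; the map can now read the staging table
    END premap_stage_queue;
    /
    The post-map counterpart would then truncate or delete the staged rows once the map has processed them.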

  • Triggering process flows (run it whenever source changes)

    Hello everyone,
    I have created several process flows which include many mappings. Now I want to run a process flow automatically whenever new data is inserted into the source tables of its mappings. Is there a way to do this in OWB? I am using OWB R2 and Oracle 10g. Any help would be greatly appreciated.
    Thank You

    Hi,
    one way is to create (outside OWB) an insert/update/delete trigger on the table and start the mapping in the begin-end block of the trigger after an insert or update.
    Look at the Oracle documentation on triggers and you'll get it.
    CREATE OR REPLACE TRIGGER test_trigger
    AFTER INSERT OR UPDATE OR DELETE
    ON table_test
    DECLARE
      result_num NUMBER;
    BEGIN
      -- call the mapping or process flow here:
      result_num := WB_RT_API_EXEC.RUN_TASK(p_location_name
                                          , p_task_type
                                          , p_task_name
                                          , p_custom_params
                                          , p_system_params
                                          , p_oem_friendly
                                          , p_background);
      -- some code to analyze the result code
      -- exception handler
    END;
    /
    Regards,
    Detlef

  • OWB: How can automatic updates be performed in the staging database using OWB?

    I am using OWB ETL to fetch data from a source database and store it in a staging DB.
    In the target table operator I am using the insert operation.
    It is inserting data fine.
    But my requirement is that the target database must be automatically updated with whatever modifications are made to the source database.
    Can you please help me with how to achieve this?
    Thanks a lot ...
    k azamtulla khan

    Hi,
    why do you want to do this using OWB? Is it not easier to create a before/after insert/update trigger on the source table so that your target table is updated automatically? You benefit from using OWB in scenarios where you need to transform and load data and then schedule that process; for the rest of it I would recommend using triggers for row-level activities such as updates.
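    A sketch of such a row-level sync trigger (src_customer, stg_customer and their columns are invented names):
    CREATE OR REPLACE TRIGGER trg_src_to_stg
    AFTER INSERT OR UPDATE ON src_customer
    FOR EACH ROW
    BEGIN
      -- upsert the changed row into the staging copy
      MERGE INTO stg_customer t
      USING (SELECT :NEW.customer_id AS id, :NEW.name AS name FROM dual) s
      ON (t.customer_id = s.id)
      WHEN MATCHED THEN UPDATE SET t.name = s.name
      WHEN NOT MATCHED THEN INSERT (customer_id, name) VALUES (s.id, s.name);
    END;
    /
    Note that this assumes the staging table is reachable from the source database (e.g. over a database link); for cross-database capture, Streams/CDC is the usual route.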

  • Incremental aggregation in OWB

    Hi all
    I have a strange requirement to get an incremental aggregation done and stored in one particular column based on changes to other columns of a dimension. Suppose I have a dimension COUNTRY_DIM as follows
    COUNTRY_SK
    COUNTRY_CODE *( BUSINESS KEY)*
    COUNTRY_LOCATION *( TRIGGERING COLUMN)*
    POPULATION *( TRIGGERING COLUMN)*
    COLUMN_INDICATOR
    START_DATE
    END_DATE
    My source is
    COUNTRY_CODE
    COUNTRY_LOCATION
    POPULATION
    While loading the dimension, if there is a change in either country location or population, the column indicator will be accordingly set.
    Case 1: If there is no change or new record being loaded into dimension, column indicator will be 0
    Case 2: If country location has changed for a particular code say GB, column indicator should be set to 1.
    Case 3: If both country location and population have changed, column indicator needs be set to 2.
    This seems to be an incremental aggregation, where each triggering column will be checked for a change and the indicator appropriately set. I have done this in Informatica, where we can easily set a variable within an expression, but I am not sure how we can do this in OWB.
    Any ideas?
    Birdy

    Here also you can take an Expression operator and, in the output group column (say, INDICATOR), define the logic:
    CASE WHEN LEAD(country_location, 1, 0) OVER (ORDER BY country_location) = country_location
          AND LEAD(population, 1, 0) OVER (ORDER BY population) = population
         THEN '0'
         WHEN LEAD(country_location, 1, 0) OVER (ORDER BY country_location) <> country_location
          AND LEAD(population, 1, 0) OVER (ORDER BY population) <> population
         THEN '2'
         WHEN LEAD(country_location, 1, 0) OVER (ORDER BY country_location) <> country_location
         THEN '1'
    END
    (Mark the answer as helpful or Correct if it is (top right))
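    As an aside, the comparison probably needs to be anchored to the previous row for the same business key rather than to the next value in column order. A variant along those lines (LOAD_DATE is an assumed ordering column and SRC_COUNTRY an invented source name):
    SELECT country_code,
           country_location,
           population,
           CASE
             WHEN LAG(country_location) OVER (PARTITION BY country_code ORDER BY load_date) = country_location
              AND LAG(population)       OVER (PARTITION BY country_code ORDER BY load_date) = population
             THEN '0'
             WHEN LAG(country_location) OVER (PARTITION BY country_code ORDER BY load_date) <> country_location
              AND LAG(population)       OVER (PARTITION BY country_code ORDER BY load_date) <> population
             THEN '2'
             WHEN LAG(country_location) OVER (PARTITION BY country_code ORDER BY load_date) <> country_location
             THEN '1'
             ELSE '0'   -- first row for a key (LAG is NULL) counts as a new record
           END AS column_indicator
    FROM   src_country;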
    Cheers
    Nawneet
    Edited by: Nawneet on Jun 14, 2010 5:58 AM

  • Error while deploying a workflow in OWB

    Hi,
    I am getting the below error while deploying a workflow in Control Center.
    ORA-29532: Java call terminated by uncaught Java exception: java.sql.SQLException: The file /u01/app/oracle/product/11.2.0.2/dbhome_1/owb/bin/admin/rtrepos.properties cannot be accessed or has not been properly created on the server XXXXXX. If the file does not exist or if the database owner (normally user 'oracle') does not have the required file permissions or if the file has not been properly created then the file can be recreated by running the SQL*Plus script /u01/app/oracle/product/11.2.0.2/dbhome_1/owb/rtp/sql/reset_repository.sql (in a RAC environment the file must be manually copied to each server which is used for OWB). Otherwise if using a 10.2 database instance, then please run the SQL*Plus script /u01/app/oracle/product/11.2.0.2/dbhome_1/owb/UnifiedRepos/reset_owbcc_home.sql.
    Did anyone face this issue before?
    Kindly let me know the steps to resolve it.
    Thanks.

    Hi Vidyanand,
    Did you create the runtime access user using the runtime assistant? Did you select the correct runtime repository (if you have more than one) to associate your runtime access user with?
    Note that there are 4 database roles being created when you create a runtime repository owner:
    - OWB_A_<runtime repository owner>
    - OWB_D_<runtime repository owner>
    - OWB_R_<runtime repository owner>
    - OWB_U_<runtime repository owner>
    If you grant those roles to a user, that user becomes an access user for the user with username <runtime repository owner>.
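    For example, with placeholder names (RT_OWNER as the runtime repository owner, REPORT_USER as the user being given access):
    -- make REPORT_USER an access user for runtime repository owner RT_OWNER
    GRANT owb_a_rt_owner, owb_d_rt_owner, owb_r_rt_owner, owb_u_rt_owner TO report_user;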
    Note that you can also use the runtime repository credentials to connect to the runtime repository for deployment purposes, but you may not want that because of security concerns.
    Thanks,
    Mark.

  • How to find out which event is triggered in SDK

    Hi all
    From the SDK, I would like to know which event is triggered when the user selects the navigation menu Follow up -> Create Lead (screen 1).
    I am guessing it calls the LeadCreateWithRef outport event, but I don't see any ABSL code (screen 2).
    When I try to switch from Display to Edit, I get the error "Component which you are trying to edit comes from a lower layer. Please use Extensibility Explorer to edit" (screen 3).
    Also, in Extensibility Explorer, I can't open the button details (I can't even see it in the button group).
    Any advice is welcomed.
    Thanks
    Anthony

    Hi Meghna
    Thanks for the info. I am trying to do some reverse engineering, to understand how the existing UI screen is built, how and what events get called when a button is pressed, and which screen will be opened.
    For example, in Ticket screen, when I select Follow up then Create Lead:
    I cannot drill down to see the button and its properties??
    nb. Also, there is no left/right scroll bar to see the rest of the button group. Is that a bug? I am using Windows 8.1.
    And in the outport setting, I don't see which action/event is triggered, or what screen to show?
    Thanks again,
    Anthony
