Correlated commit in OWB

Hi,
Can anyone explain how correlated commit works in OWB? Suppose I enable correlated commit on a mapping, and when I run that mapping I
pass runtime parameters such as commit frequency = 10000.
Will it commit every 10000 records in the target because of the commit frequency, or will it commit once at the end of loading all records because of the correlated commit? Please comment on this issue.
Regards
naren

Commit Frequency and Correlated Commit are two independent configuration parameters. Setting the commit frequency to 10000 ensures that a commit happens after every 10000 records, irrespective of whether Correlated Commit is set or not.
Commit frequency is applicable only in row-based and row-based (target) mode.
Correlated commit is applicable when you have a single source populating multiple target tables in a single mapping. The OWB User Guide, chapter 11 (page 11-16), has a very good explanation of how correlated commit works.
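To see how the two settings interact, here is a conceptual sketch of what row-based generated code does. This is an illustration only, not actual OWB-generated code, and SRC_TAB/TGT_TAB are placeholder names:

DECLARE
   c_commit_frequency CONSTANT PLS_INTEGER := 10000;   -- runtime parameter
   v_rows             PLS_INTEGER := 0;
BEGIN
   FOR rec IN (SELECT * FROM src_tab) LOOP
      INSERT INTO tgt_tab VALUES rec;   -- simplistic 1:1 row-based load
      v_rows := v_rows + 1;
      IF MOD (v_rows, c_commit_frequency) = 0 THEN
         COMMIT;   -- fires every 10000 rows regardless of Correlated Commit
      END IF;
   END LOOP;
   COMMIT;          -- final commit for the remaining rows
END;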
Hope this helps.

Similar Messages

  • New Features for OWB 9.2

    Mark Van De Wiel mentioned yesterday in this forum that OWB 9.2 is now available at http://otn.oracle.com/software/products/warehouse/content.html . From the version number it would appear that this is a major new release, and it's a separate install rather than a patch on OWB9.0.4.
    Would Mark, Igor or Shauna be able to give us some background on the new features in this release? From quickly looking at the release notes, they appear to be:
    - Correlated Commit (committing changes across all targets uniformly)
    - Ability to create public database links
    - Direct PEL (removes the need for temporary tables when exchanging partitions)
    - Enhanced Flat File Support
    - Mapping Debugger
    - Incorporation of ETL functionality previously found in standalone Pure*Integrate
    - Metadata change management using the OWB GUI rather than using OMB*Plus
    - Multiple Name and Address Software Providers
    - Name/Address Wizard
    - New Public API for OWB
    - Better support for Real Application Clusters
    - Advanced Repository Security and Audit options
    - Support for MITI metadata bridges to third-party products
    It would be good to know the thinking behind these and what areas they think will be particularly of interest to us. Also, has anything been removed or changed that we've got used to using?
    Also, any day now (fingers crossed) the patch to make the OLAP functionality in OWB9.0.4 work (the 9i 9.2.0.3.x patch?) should be out - will this work with OWB9.2 as well?
    Many thanks
    Mark Rittman
    [email protected]

    Hi Mark,
    While I'm not in your list of people to respond, I figured I'd give it a go anyway :-)
    The list of features you mentioned is a nice summary. One of the main release themes is data quality, as you noticed. Apart from changes to the name & address functionality, we also added advanced matching and merging to this release, so you can use custom rules to match and merge data. This also allows you to do things like householding for customer data records. In essence, we have now completed the integration of Pure*Integrate into Warehouse Builder.
    The mapping debugger is another big thing. It allows you to walk through your mappings and see in detail what happens to the data.
    The metadata change management is now in the UI. This means there is now object-level version management in Warehouse Builder via the UI. You can also compare versions and get a difference report.
    As you mentioned, there are a number of smaller and bigger features next to these. The idea for this release is to make OWB a top data quality product and allow you to do your ETL and DQ in one tool. Apart from that, we have added usability features (debugger, N&A wizard) and some features to solve specific problems.
    The correlated commit is especially interesting in that respect. You can now, in a map with multiple targets, control the commit and choose to either commit all targets or none, or commit them individually (the current behavior).
    As usual there is a lot more, as this is indeed a major release. Keep an eye on OTN for more collateral regarding the new features. We are currently updating the site to reflect the new stuff.
    On the OLAP patch: both 9.0.4 and 9.2 will work equally well with the OLAP patch (when it is available). In 9.2 we added some small things around composites. More about that when there is an OLAP patch to test against at a customer site.
    I hope this answered the question, if not let me know.
    Jean-Pierre

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have a number of mappings, and we have built process flows for them as well.
    I would like to know the best approaches to incorporate re-start options in our process, i.e. handling the failure of a mapping in a process flow.
    How do we re-cycle failed rows?
    Are there any built-in features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a re-start process?
    If not, do we need to maintain our own (custom) tables to hold such data?
    How have our forum members handled the above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow? - Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails? - Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the execution of mapping m2 from the Workflow monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the Repeat option.
    On re-start, will it run m1 again and then m2 and so on, or will it re-start at row 1 of m2? - You can specify the restart point. "At row 1 of m2" - I don't understand what you mean: all mappings run in set-based mode, so in case of error all table updates are rolled back (but there are several exceptions - for example, multiple target tables in a mapping without correlated commit, or an error in a post-mapping procedure - so you must carefully analyze the results of an error).
    What will happen if m3 fails? - The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you achieve zero recycling of failed rows. These settings guarantee only two possible results for a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data? - In my opinion, for large volumes set-based mode is the preferred processing mode.
    With this mode you have the full range of enterprise features of the Oracle database - parallel query, parallel DML, nologging, etc.
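    For example, a set-based load can use those features directly (illustrative SQL only; SRC and TGT are placeholder names):

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL (tgt, 4) */ INTO tgt
       SELECT /*+ PARALLEL (src, 4) */ * FROM src;

    COMMIT;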
    Oleg

  • IBP 4.0 FP1 Patch 2 - Need more info on new features in Supply Planning

    Hi All,
    I was reading the SAP Help and some of the new features are not very clear to me; if you have clarity on them, please read on and help me understand.
    As per SAP Help, the following features are now available in IBP for supply:
    Separate lead time units of measure can be defined for use in multiple planning areas depending on different time granularities.
    My Query - Where do we define this? Lead time is present in the MDT, but I do not see any additional attribute to define lead time units, nor any other relevant setting in the planning area, so how can we define this?
    Static periods of supply are taken into account as input for the inventory target, with projected periods of coverage provided as output.
    My Query - What exactly has changed and which are the new key figures here to support this? I could not understand this from reading the above text.
    Independent demand is taken into account as input at the product location, in addition to the distribution demand from other nodes and customer demand.
    The optimizer algorithm supports the Non-Delivery Cost Rate key figure as input for the independent demand at the product location.
    My Query - A new key figure "INDEPENDENTDEMAND" is now available at the PERPRODLOC level, so the first point is clear. The second point is confusing, because "Non-Delivery Cost Rate" is only defined at the PERPRODCUST level. How do we define the non-delivery cost rate for independent demand, which sits at a different planning level? And if we cannot define it, how will the optimizer algorithm support it? Ideally both would be at the same level, like consensus demand and non-delivery cost rate, which are both at the PERPRODCUST level.
    Thanks
    Girish

  • OWB and COMMIT

    OWB version: 9.0.2.
    I noticed that OWB puts several COMMIT statements in the generated PL/SQL packages.
    I need the COMMIT to execute only at the end of the PL/SQL procedure, or when I decide, or to ROLLBACK if an error occurs, so my question is:
    Is it possible to configure an OWB mapping to completely remove the COMMIT statements from the generated code?
    I removed the COMMIT statements manually, but it seems some part of the generated code still commits: when the transaction finishes I issue a rollback, yet all the inserts remain in the tables.
    TIA

    Javier,
    OWB indeed always includes commit statements in its code. However, even if you removed all commits (assuming you run the 904 or 92 version of OWB), you would still end up with commits after every execution, because every execution runs in a separate session, and when a session exits cleanly the database implicitly commits.
    In OWB92 we introduced a feature called correlated commit (a configuration parameter on a mapping) that helps you control commits in a multi-target mapping. With that feature you can make commits happen only at the end of the mapping in set-based mode. In row-based or row-based bulk mode, the behavior is that rows from a PL/SQL cursor are applied to all targets together rather than one target at a time.
    The commit behavior is the following:
    - In set-based mode, there is one commit per statement; with correlated commit set to true, there is a single commit at the end of the mapping.
    - In row-based bulk mode, the commit frequency is the bulk size. To force row-based mode, set the bulk size to 1.
    - In row-based mode, the commit frequency can be configured via the commit frequency configuration parameter.
    So, if possible, run in set-based mode (this is fastest anyway) and set correlated commit to true. This will commit the mapping once at the end. In case of any error, the entire mapping will roll back.
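    To make the set-based case concrete, with correlated commit the generated code reduces conceptually to one statement per target and a single commit at the end. This is an illustration only, not the actual generated package, and SOURCE_VIEW/TARGET_A/TARGET_B are placeholder names:

    BEGIN
       INSERT INTO target_a SELECT c1, c2 FROM source_view;   -- statement 1
       INSERT INTO target_b SELECT c1, c3 FROM source_view;   -- statement 2
       COMMIT;        -- correlated commit: one commit at the end
    EXCEPTION
       WHEN OTHERS THEN
          ROLLBACK;   -- any error rolls back the whole mapping
          RAISE;
    END;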
    Hope this helps,
    Mark.

  • Concept of Commit Unit in OWB

    Hi All,
    Asking this question again.
    How does OWB cope with the concept of a commit unit?
    In PL/SQL, you have rollback, and savepoint.
    What happens if you have to insert child records and then roll back the changes?
    - Jojo

    I don't know how to do this (include rollbacks and savepoints) in OWB.
    Since it is an ETL tool designed to create batch data loads, OWB assumes you already know the whole process, so you shouldn't need to roll back to a savepoint. That's how I see it; as I said, I don't know how to include savepoints using OWB.
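    For reference, the plain PL/SQL mechanics the question refers to look like this outside OWB (a generic sketch; PARENT_T and CHILD_T are hypothetical tables):

    BEGIN
       INSERT INTO parent_t (id) VALUES (1);
       SAVEPOINT after_parent;
       BEGIN
          INSERT INTO child_t (id, parent_id) VALUES (10, 1);
       EXCEPTION
          WHEN OTHERS THEN
             -- undo only the child insert; the parent row survives
             ROLLBACK TO SAVEPOINT after_parent;
       END;
       COMMIT;   -- commit whatever remains of the unit of work
    END;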
    Regards,
    Marcos

  • OWB 10gR2: Commit everything or nothing

    Hello.
    We have a mapping which must run in row-based mode because it deletes records. There is one source and multiple targets. One flow feeds a log table (LOG_LT). The other flow feeds 4 targets (MOD_LT for all targets) depending on the mode given in the records (Update/Insert/Merge/Delete). The log table gets part of the primary key of each incoming row, which is used by a replication application.
    It is important that nothing ends up in the 4 targets that is not stored (the pk part, as described) in the log table; otherwise information will be lost for the replication. Therefore we set the target order so that LOG_LT is loaded first.
    a) We started with autocommit and a commit frequency larger than the number of rows in the source. If the data flow had an error (a pk violation, e.g.), only that flow, and not the log flow, was rolled back (table-based rollback).
    b) We used autocommit with correlated commit. The log flow and the data flow are OK, but the mapping was committing too early (not at the value set for the commit frequency).
    Why?
    c) The next idea was to set the mapping to manual commit and to commit in a post-mapping procedure if everything was OK. The mapping can be deployed, but when we start it we get the warning:
    RTC-5380: Manual commit control configured on maping xy is ignored.
    Why?
    Does anybody have a good practice for a row-based mapping (with multiple targets and one source) to commit everything or nothing?
    Thanks for your help
    Stephan

    Hello Stephan,
    For row-based mappings with a single source and multiple targets, the following are the different ways of loading the targets.
    1. Correlated Commit = true and Row Based - Suppose there are 50 rows in the source and there are 4 target tables. When the mapping is run in this mode and it encounters an error while loading row 10 into target table TGT_3, then this row will be rolled back from all four targets and processing will continue, finally loading 49 records into all 4 tables.
    When you run in row-based mode, please also check the bulk processing mode. If it is set to true, the bulk size overrides the commit frequency you set. It is thus advisable to make the bulk size and the commit frequency equal, or to set the bulk processing mode to false and then set the commit frequency.
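    If the requirement really is all-or-nothing across LOG_LT and all four targets, one fallback outside the mapping configuration is a hand-written PL/SQL wrapper that performs the whole load in a single transaction. This is a sketch only; STG_SRC, MODE_COL and the target names are placeholders, and it shows only the insert flows:

    BEGIN
       INSERT INTO log_lt (pk_part)
          SELECT pk_part FROM stg_src;
       INSERT INTO mod_lt_ins SELECT * FROM stg_src WHERE mode_col = 'I';
       INSERT INTO mod_lt_upd SELECT * FROM stg_src WHERE mode_col = 'U';
       -- ... remaining target flows ...
       COMMIT;          -- everything becomes visible at once
    EXCEPTION
       WHEN OTHERS THEN
          ROLLBACK;     -- or nothing at all
          RAISE;
    END;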
    HTH
    -AP

  • Splitting comma separated column data into multiple rows

    Hi Gurus,
    Please help me solve the scenario below. I have multiple comma-separated values in a single column, and my requirement is to load that data into multiple rows.
    Below is the example:
    Source Data:
    Product   Size             Stock
    ABC       X,XL,XXL,M,L,S   1,2,3,4,5,6
    Target Data:
    Product   Size   Stock
    ABC       X      1
    ABC       XL     2
    ABC       XXL    3
    ABC       M      4
    ABC       L      5
    ABC       S      6
    Which transformation do we need to use to get this output?
    Thanks in advance !

    Hello,
    Do you need to do this transformation through an OWB mapping only? And can you please tell us what type of source you are using? Is it a flat file or a table?
    Thanks
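    If the source turns out to be a table (or an external table over the flat file), one plain SQL approach that could back an OWB view or expression looks like the sketch below. The WITH clause fakes the source data, REGEXP_SUBSTR needs Oracle 10g or later, and the row generator limit of 50 assumes no list has more than 50 items:

    WITH src AS (
       SELECT 'ABC' AS product,
              'X,XL,XXL,M,L,S' AS size_list,
              '1,2,3,4,5,6' AS stock_list
         FROM dual
    )
    SELECT s.product,
           REGEXP_SUBSTR (s.size_list, '[^,]+', 1, n.pos) AS prod_size,
           REGEXP_SUBSTR (s.stock_list, '[^,]+', 1, n.pos) AS stock
      FROM src s
           CROSS JOIN (SELECT LEVEL AS pos FROM dual CONNECT BY LEVEL <= 50) n
     WHERE n.pos <= LENGTH (s.size_list) - LENGTH (REPLACE (s.size_list, ',')) + 1
     ORDER BY s.product, n.pos;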

  • Owbsys.wb_rt_api_exec.open fails after upgrade to OWB 11gR2

    The following code is used as a PL/SQL wrapper to execute OWB mappings and is based on the good old run_my_owb_stuff.sql. We have been mandated to use Tivoli as the corporate scheduler, meaning we do not have Workflow as a solution. We have implemented the audit_execution_id as an input parameter to all the mappings, to be able to link the data to the OWBSYS audit tables as well as to return mapping performance and success info to the execution process/session. I have implemented this exact same procedure in 10gR1, 10gR2 and 11gR1 (our current dev environment) with no problems at all - the code ports easily. However, following an upgrade (actually an export/import of the repository from 11gR1 on 64-bit Solaris to 11gR2 on Exadata running Enterprise Linux 5) on the test server (I know, I know, I said the same thing!), the code now fails on the wb_rt_api_exec.open call shown below.
    CREATE OR REPLACE PROCEDURE bi_ref_data.map (p_map_name IN VARCHAR2)
    -- Procedure to execute ETL mapping package via command line call
    -- Mapping names are held in the BI_REF_DATA.MAP_NAME table
    -- with the mapping type and location data
    AS
       v_repos_owner              VARCHAR2 (30) := <repository_owner>;
       v_workspace_owner          VARCHAR2 (30) := <workspace_owner>;
       v_workspace_name           VARCHAR2 (30) := <workspace_name>;
       v_loc_name                 VARCHAR2 (30);
       v_map_type                 VARCHAR2 (30);
       v_map_name                 VARCHAR2 (30) := UPPER (p_map_name);
       v_retval                   VARCHAR2 (255);
       v_audit_execution_id       NUMBER;          -- Audit Execution Id
       v_audit_result             NUMBER;
       v_start_time               TIMESTAMP := LOCALTIMESTAMP;
       v_end_time                 TIMESTAMP;
       v_execution_time           NUMBER;
       v_record_rate              NUMBER := 0;
       v_records_selected         NUMBER;
       v_records_inserted         NUMBER;
       v_records_updated          NUMBER;
       v_records_deleted          NUMBER;
       v_records_merged           NUMBER;
       v_errors                   NUMBER;
       v_failure                  VARCHAR2 (4000);
       e_no_data_found_in_audit   EXCEPTION;
       v_audit_exec_count         NUMBER;
       e_execution_id_error       EXCEPTION;
    BEGIN
       SELECT UPPER (loc_name), UPPER (map_type)
         INTO v_loc_name, v_map_type
         FROM bi_ref_data.owb_map_table
        WHERE UPPER (map_name) = UPPER (v_map_name);

       IF UPPER (v_map_type) = 'PLSQL'
       THEN
          v_map_type := 'PLSQL';
       ELSIF UPPER (v_map_type) = 'SQL_LOADER'
       THEN
          v_map_type := 'SQLLoader';
       ELSIF UPPER (v_map_type) = 'SAP'
       THEN
          v_map_type := 'SAP';
       ELSIF UPPER (v_map_type) = 'DATA_AUDITOR'
       THEN
          v_map_type := 'DataAuditor';
       ELSIF UPPER (v_map_type) = 'PROCESS'
       THEN
          v_map_type := 'ProcessFlow';
       END IF;

       -- Changed code for owb11gr2
       -- owbsys.wb_workspace_management.set_workspace (v_workspace_name, v_workspace_owner);
       owbsys.wb_rt_script_util.set_workspace (v_workspace_owner || '.' || v_workspace_name);

       v_audit_execution_id := owbsys.wb_rt_api_exec.open (v_map_type, v_map_name, v_loc_name);

       IF v_audit_execution_id IS NULL OR v_audit_execution_id = 0
       THEN
          RAISE e_execution_id_error;
       END IF;

       v_retval := v_retval || 'audit_execution_id=' || TO_CHAR (v_audit_execution_id);

       v_audit_result := owbsys.wb_rt_api_exec.execute (v_audit_execution_id);

       IF v_audit_result = owbsys.wb_rt_api_exec.result_success
       THEN
          v_retval := v_retval || ' --> SUCCESS';
       ELSIF v_audit_result = owbsys.wb_rt_api_exec.result_warning
       THEN
          v_retval := v_retval || ' --> WARNING';
       ELSIF v_audit_result = owbsys.wb_rt_api_exec.result_failure
       THEN
          v_retval := v_retval || ' --> FAILURE';
       ELSE
          v_retval := v_retval || ' --> UNKNOWN';
       END IF;

       DBMS_OUTPUT.put_line (v_retval);
       owbsys.wb_rt_api_exec.close (v_audit_execution_id);

       v_end_time := LOCALTIMESTAMP;
       v_execution_time := bi_ref_data.get_seconds_from_interval (v_end_time - v_start_time);
       v_retval := 'Execution time = ' || v_execution_time || ' seconds.';
       DBMS_OUTPUT.put_line (v_retval);

       SELECT COUNT (w.rta_select)
         INTO v_audit_exec_count
         FROM owbsys.owb$wb_rt_audit w
        WHERE w.rte_id = v_audit_execution_id;

       IF v_audit_exec_count = 0
       THEN
          RAISE e_no_data_found_in_audit;
       END IF;

       SELECT w.rta_select,
              w.rta_insert,
              w.rta_update,
              w.rta_delete,
              w.rta_merge,
              w.rta_errors
         INTO v_records_selected,
              v_records_inserted,
              v_records_updated,
              v_records_deleted,
              v_records_merged,
              v_errors
         FROM owbsys.owb$wb_rt_audit w
        WHERE w.rte_id = v_audit_execution_id;

       v_retval := v_records_selected || ' records selected';
       DBMS_OUTPUT.put_line (v_retval);

       IF v_records_inserted > 0
       THEN
          v_retval := v_records_inserted || ' inserted';
          DBMS_OUTPUT.put_line (v_retval);
       END IF;

       IF v_records_updated > 0
       THEN
          v_retval := v_records_updated || ' updated';
          DBMS_OUTPUT.put_line (v_retval);
       END IF;

       IF v_records_deleted > 0
       THEN
          v_retval := v_records_deleted || ' deleted';
          DBMS_OUTPUT.put_line (v_retval);
       END IF;

       IF v_records_merged > 0
       THEN
          v_retval := v_records_merged || ' merged';
          DBMS_OUTPUT.put_line (v_retval);
       END IF;

       IF v_errors > 0
       THEN
          v_retval := v_errors || ' errors';
          DBMS_OUTPUT.put_line (v_retval);
       END IF;

       IF v_execution_time > 0
       THEN
          v_record_rate := TRUNC ((v_records_inserted + v_records_updated + v_records_deleted + v_records_merged) / v_execution_time, 2);
          v_retval := v_record_rate || ' records/sec';
          DBMS_OUTPUT.put_line (v_retval);
       END IF;

       IF (   v_audit_result = owbsys.wb_rt_api_exec.result_failure
           OR v_audit_result = owbsys.wb_rt_api_exec.result_warning)
       THEN
          FOR cursor_error
             IN (SELECT DISTINCT aml.plain_text
                   FROM owbsys.owb$wb_rt_audit_messages am
                        INNER JOIN owbsys.owb$wb_rt_audit_message_lines aml
                           ON am.audit_message_id = aml.audit_message_id
                  WHERE am.audit_execution_id = v_audit_execution_id)
          LOOP
             DBMS_OUTPUT.put_line (cursor_error.plain_text);
          END LOOP;
       END IF;

       -- OWBSYS.wb_rt_api_exec.close (v_audit_execution_id);
       COMMIT;
    EXCEPTION
       WHEN e_execution_id_error
       THEN
          raise_application_error (-20011, 'Invalid execution ID returned from OWB');
          -- RAISE;
       WHEN e_no_data_found_in_audit
       THEN
          raise_application_error (-20010, 'No data found in audit table for execution_id - ' || v_audit_execution_id);
          -- RAISE;
       WHEN NO_DATA_FOUND
       THEN
          raise_application_error (-20001, 'Error in reading data from OWBSYS tables.');
          -- RAISE;
    END;
    Does anyone out there know if there is a difference between 11gR1 and R2 in the way that the wb_rt_api_exec function works?
    Is there a simple way to retrieve the audit_id before executing the mapping, or at a push during the mapping so that we can maintain the link between the session data and the OWBSYS audit data?
    Martin

    Hi David, I have been reading some of your posts and blogs around OWB and I still have not found the answer.
    OK, there is/was a script that Oracle Support/forums/OTN sent out a while ago called "run_my_owb_stuff" - I am sure you will be familiar with it. I based the code I uploaded on it and added additional functionality. In essence, I wanted to use the audit_id as an input parameter to the mapping, so that I can register the audit_id in the management tables and associate each row of loaded data with a specific mapping_id, which would allow a simple link to the OWBSYS audit tables to complete the audit circle. To that end, I used the owbsys.wb_rt_api_exec.open procedure to register the mapping execution, and then on the execute procedure of the same package I passed this audit_id in as a custom parameter:
    <<snip>>
    owbsys.wb_workspace_management.set_workspace (v_workspace_name, v_workspace_owner);
    v_audit_execution_id := owbsys.wb_rt_api_exec.open (v_map_type, v_map_name, v_loc_name, 'PLSQL');
    IF v_audit_execution_id IS NULL OR v_audit_execution_id = 0
    THEN
       RAISE e_execution_id_error;
    END IF;
    v_retval := v_retval || 'audit_execution_id=' || TO_CHAR (v_audit_execution_id);
    IF v_include_mapping_id > 0 -- if non-zero, submit owb execution id as an input parameter to the map process
    THEN
       owbsys.wb_rt_api_exec.override_input_parameter (
          v_audit_execution_id,
          'p_execution_id',
          TO_CHAR (v_audit_execution_id),
          owbsys.wb_rt_api_exec.parameter_kind_custom);
    END IF;
    <<snip>>
    The execution is closed, also by the use of the audit_id ( "owbsys.wb_rt_api_exec.close (v_audit_execution_id)" )
    I can also use the audit_id to inspect the audit tables to retrieve the records processed, as well as any associated error messages, and format them for the calling application (SQL*Plus, which is normally the context of our current use).
    This procedure had been working well up to now, until we moved over to 11gR2, when all of a sudden the audit_id is not returned when executing "v_audit_execution_id := owbsys.wb_rt_api_exec.open (v_map_type, v_map_name, v_loc_name);". Prior to 11gR2 this worked like a charm - now it has crashed to a halt.
    As an interesting twist, I tried to substitute a sequence number for the audit_id, and then to get the audit_id after the mapping completes, so that I can put both the sequence and the audit id in a table to maintain the link. However, in attempting to use the owbsys.wb_rt_script_util.run_task procedure, which now appears to be the only thing left working, I was astonished to see the following output in SQL*Plus:
    SQL> exec map1('stg_brand')
    Stage 1: Decoding Parameters
    | location_name=STAGE_MOD
    | task_type=PLSQLMAP
    | task_name=STG_BRAND
    Stage 2: Opening Task
    | l_audit_execution_id=2135
    Stage 3: Overriding Parameters
    Stage 4: Executing Task
    | l_audit_result=1 (SUCCESS)
    Stage 5: Closing Task
    Stage 6: Processing Result
    | exit=1
    --> SUCCESS
    Execution time = .647362 seconds.
    records/sec
    PL/SQL procedure successfully completed.
    SQL>
    This output is so similar to "run_my_owb_stuff" that either Oracle Support generated "run_my_owb_stuff" as a lightweight version of the owbsys.wb_rt_script_util.run_task procedure, or Oracle incorporated the "run_my_owb_stuff" script into their owbsys.wb_rt_script_util.run_task procedure! Which way round I cannot say, but it is surely one or the other. To make matters worse, I have raised this with Oracle Support, and they have the temerity to claim that they do not support the "run_my_owb_stuff" script, yet think enough of it to incorporate it into their own package in a production release!
    To overcome my problems in the short term, I need to be able to access the audit_id either during or after the execution of the mapping, so that I can at least associate it with the sequence number I am having to pass in as a parameter to each mapping. In the longer term, I would like a way to access the audit_id before I execute the mapping, as I could by calling the "owbsys.wb_rt_api_exec.open" procedure. Ideally this would be solved first and I would not need a sequence at all.
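    As a stopgap for the short-term need: since OWB$WB_RT_AUDIT exposes rte_id (as used in the wrapper above), the latest audit id can be read immediately after the mapping completes, assuming rte_id is assigned in ascending order and no other mapping runs concurrently (both assumptions are worth verifying on your system):

    SELECT MAX (w.rte_id) AS last_audit_id
      FROM owbsys.owb$wb_rt_audit w;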
    Hope this clarifies things a bit.
    Regards
    Martin

  • Dynamic Lookup in OWB 10.1g

    Can we execute dynamic lookup in OWB 10.1g?
    I want to update the columns of the target table based on the previous values of the columns.
    Suppose there is a record in the target table with previous-status and current-status columns.
    The source table consists of 10 records which need to be processed one at a time in a single batch. We need to compare the status of each source record with the current status in the target table. If the source contains the next higher status, then the current status of the target record needs to move to the previous-status column, and the new status coming from the source needs to overwrite the current status of the target record.
    We have tried using the row-based option as well as setting the commit frequency to 1, but we are not able to get the required result.
    How can we implement this in OWB 10.1g?

    OK, now what I would do in an odd case like this is to look at the desired FINAL result of a run rather than worry so much about the intermediate steps.
    Based on your statement of the status incrementing upward, and only upward, your logic can actually be distilled down to the following:
    At the end of the load, the current status for a given primary key is the maximum status, and the previous status will be the second highest status. All the intermediate status values are transitional status values that have no real bearing on the desired final result.
    So, let's try a simple prototype:
    --drop table mb_tmp_src; /* SOURCE TABLE */
    --drop table mb_tmp_tgt; /*TARGET TABLE */
    create table mb_tmp_src (pk number, val number);
    insert into mb_tmp_src (pk, val) values (1,1);
    insert into mb_tmp_src (pk, val) values (1,2);
    insert into mb_tmp_src (pk, val) values (1,3);
    insert into mb_tmp_src (pk, val) values (2,2);
    insert into mb_tmp_src (pk, val) values (2,3);
    insert into mb_tmp_src (pk, val) values (3,1);
    insert into mb_tmp_src (pk, val) values (4,1);
    insert into mb_tmp_src (pk, val) values (4,3);
    insert into mb_tmp_src (pk, val) values (4,4);
    insert into mb_tmp_src (pk, val) values (4,5);
    insert into mb_tmp_src (pk, val) values (4,6);
    insert into mb_tmp_src (pk, val) values (5,5);
    commit;
    create table mb_tmp_tgt (pk number, val number, prv_val number);
    insert into mb_tmp_tgt (pk, val, prv_val) values (2,1,null);
    insert into mb_tmp_tgt (pk, val, prv_val) values (5,4,2);
    commit;
    -- for PK=1 we will want a current status of 3, prev =2
    -- for PK=2 we will want a current status of 3, prev =2
    -- for PK=3 we will want a current status of 1, prev = null
    -- for PK=4 we will want a current status of 6, prev = 5
    -- for PK=5 we will want a current status of 5, prev = 4
    Now, let's create a pure SQL query that gives us this result:
    select pk, val, lastval
    from (
       select pk,
              val,
              max(val) over (partition by pk) maxval,
              lag(val) over (partition by pk order by val) lastval
       from (
          select pk, val
          from mb_tmp_src mts
          union
          select pk, val
          from mb_tmp_tgt mtt
       )
    )
    where val = maxval
    (NOTE: UNION, not UNION ALL, to avoid duplicates where tgt = src; you would also want a DISTINCT in the union if multiple instances of the same value can occur in the source table.)
    OK, now I'm not at my work right now, but you can see how unioning (SET operator) the target with the source, passing the union through an expression to get the analytics, and then through a filter to get the final rows before updating the target table will get you what you want. And the bonus is that you don't have to commit per row. If you can get OWB to generate this sort of statement, then it can go set-based.
    EDIT: And if you can't figure out how to get OWB to generate this entirely within the mapping editor, then use it to create a view from the main subquery with the analytics, and then use that as the source in your mapping.
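    A minimal sketch of that view, built from the prototype tables above (the view name is made up):

    create or replace view mb_tmp_status_v as
    select pk, val, maxval, lastval
    from (
       select pk,
              val,
              max(val) over (partition by pk) maxval,
              lag(val) over (partition by pk order by val) lastval
       from (
          select pk, val from mb_tmp_src
          union
          select pk, val from mb_tmp_tgt
       )
    );

    The mapping then only needs a filter on val = maxval before updating the target.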
    If your problem was time-based where the code values could go up or down, then you would do pretty much the same thing except you want to grab the last change and have that become the current value in your dimension. The only time you would care about the intermediate values is if you were coding for a type 2 SCD, in which case you would need to track all the changes.
    Hope this helps.
    Mike
    Edited by: zeppo on Oct 25, 2008 10:46 AM

  • Error While Importing Process Flow in OWB Repository.

    Hi All,
    OWB Config Details is as follows:
    Oracle 9i Warehouse Builder Client: 9.2.0.2.8
    Oracle 9i Warehouse Builder Repository: 9.2.0.2.0
    The following error messages are displayed when I try to import a process flow from one OWB repository to another:
    Warning at line 23: MDL1312: Referenced logical location with name <PF_STG_INC_LOC> and UOID <F3940F1D5EB84985E0340003BA0AF737> not found for PROCESSMODULE <SALES_INC>. Logical location not set for this module.
    Warning at line 28: MDL1320: PROCESS PF_C_C_CALLS_INC in STG_INC.SALES_INC.SAL_INC cannot be imported because referenced STANDALONEPROCEDURE
    FLSTG.PROC_GET_ETL_DATA_DATE does not exist.
    Warning at line 28: MDL1320: PROCESS PF_STG_CALL_CENTER_CALLS_INC in FLSTG_INC.SALES_INC.SAL_INC cannot be imported because referenced STANDALONEPROCEDURE
    FL_TRG_FLSTG.PROC_UPDATE_RECO_CTL does not exist.
    Informational: MDL1134: COMMIT issued at end of import data file.

    Hello, all the errors you've been given are related to missing objects.
    You should re-create/import the locations for your process flows before importing the process flows themselves.
    Also verify that you have imported the procedures FLSTG.PROC_GET_ETL_DATA_DATE and FL_TRG_FLSTG.PROC_UPDATE_RECO_CTL, because your process flows use them: OWB refuses to import a process flow if some referenced objects are missing, because that would break links.
    In general, process flows should be imported once all the other objects are in the repository, because they are the final step and they reference many other objects.
    Hope this helps - Antonio

  • Bulk Insert - Commit Frequency

    Hi,
    I am working on OWB 9.2.
    I have a mapping that inserts > 90000 records into the target table.
    When I execute the SQL query generated by OWB directly in the database, it returns results in 6 minutes.
    But when I executed this OWB mapping in 'Set based fail over to row based' mode with bulk size = 50, commit frequency = 1000 and max errors = 50, it took almost 40 minutes. While this mapping was running I checked the target table record count: it was increasing by 50 at a time and taking a long time to finish.
    What changes do I need to make to speed up these inserts?

    If it is inserting 50 rows at a time, it means that set-based mode is failing and the mapping is switching to row-based mode.
    If it always fails in set-based mode, it is better to run in row-based mode from the start, so as not to spend time in set-based mode first.
    In row-based mode, you can increase the bulk size (to 5000 or 10000 or more) for efficient execution. Keep the commit frequency the same as the bulk size.
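    For background, row-based bulk processing is conceptually a BULK COLLECT/FORALL loop, which is why the bulk size ends up driving the commit granularity. This is a sketch only; SRC and TGT are placeholders and the real generated code differs:

    DECLARE
       CURSOR c IS SELECT * FROM src;
       TYPE t_rows IS TABLE OF c%ROWTYPE;
       l_rows   t_rows;
    BEGIN
       OPEN c;
       LOOP
          FETCH c BULK COLLECT INTO l_rows LIMIT 5000;   -- bulk size
          EXIT WHEN l_rows.COUNT = 0;
          FORALL i IN 1 .. l_rows.COUNT
             INSERT INTO tgt VALUES l_rows (i);
          COMMIT;   -- one commit per batch, i.e. per bulk-size rows
       END LOOP;
       CLOSE c;
    END;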

  • OWB Process Flow - what is the best version control tool?

    Hi all,
    I have just started working with OWB and I have a question about the best way to do something.
    Imagine the scenario below. I have 2 or more requests, for example:
    Request 1: Create a Dimension City.
    Request 2: Create a Dimension Products.
    I have ONE process flow and I need to put my changes inside it. This is my problem:
    in my scenario I don't know which request goes to Prod first.
    If I put Request 1 and Request 2 in my process flow, maybe I will need to change it if someone decides to change my request priority.
    Is there something in OWB to control versions or changes? For a mapping I export the MDL and commit it to SVN, but I don't know how I can do that with a process flow.
    Is there something that lets multiple people work on different mappings and the SAME process flow?
    What is the best way to work with process flows and version control?
    What are the best practices when it comes to version control?
    Thanks.

    Amit,
    Are you really doing this in 10.1.3.x and not 11g?
    At any rate, I don't see how #2 and #3 relate whatsoever to your choice of a version control system. OK, maybe in #2 if there is some "maintenance" activity to be done against the version control server. Subversion is the open source alternative that you listed there and is pretty commonly used. If your company is already using one of the mentioned tools, why change? About the only thing I'd mention is to advise you NOT to use CVS for well documented reasons (JDev does support it) - if you would have picked CVS otherwise, choose Subversion. As far as question #1 - I've only used Subversion (well, I did use CVS for a while) with JDeveloper, so I can say it was "effective enough for me." In 10.1.3.x, I also used the external svn tools for doing lots of things like merging and so forth; in 11g, the support is much much better.
    Best,
    John

  • Process flow fails to complete when another in the same package is deployed - OWB 11.2.0.2

    Hi,
    I have a problem with OWB process flows whereby whenever I deploy a process flow within an existing package, another process flow becomes (unknown to me) invalid and will not complete execution.
    When deploying the first process flow, I synchronise all mappings within it, save, validate and save again before deploying. That process flow then functions OK, but a random process flow within the same package malfunctions. By this I mean the process flow does not complete, as if it has lost some connectors. Has anybody else had this problem?

    Hi Richard,
    Thanks for your reply. Yes, I agree with your answer, but sadly this does not seem to be the cause of the problem in my case, as I always stop any running processes and double-check with the list_requests.sql provided, and where necessary execute deactivate_execution.sql and/or abort_exec_request.sql, but only if absolutely required. Finally I check again with list_requests.sql and, if necessary, run WF_PURGE.TOTAL in the OWF_MGR schema and/or the following SQL (again, only if required):
    BEGIN
       FOR cur_rec IN (SELECT ITEM_TYPE, ITEM_KEY
                       FROM OWF_MGR.WF_ITEM_ACTIVITY_STATUSES
                       WHERE ACTIVITY_STATUS = 'ACTIVE')
       LOOP
          OWF_MGR.WF_PURGE.MOVE_TO_HISTORY (cur_rec.ITEM_TYPE, cur_rec.ITEM_KEY);
          COMMIT;
       END LOOP;
    END;
    So I am sure that no processes are running before I begin deployment. Thanks again for your reply.
