Implementing COMMIT frequency

I've been asked to create a procedure that deletes from a table any records older than a specified period (an input parameter), with a user-specified commit frequency (another input parameter).
I have a column called created_time in mytable which I could use in the WHERE clause of the DELETE statement.
create or replace procedure delete_mytable (
       p_commitfreq     in number,
       p_no_of_days_old in number
)
as
begin
   DELETE FROM mytable WHERE created_time < sysdate - p_no_of_days_old;
   -- code to implement COMMIT frequency
end;
How can I implement COMMIT frequency in this proc? The client is going to COMMIT after every 35,000 rows, but they don't want this to be hard-coded. I have seen other threads on OTN where gurus say that including COMMIT frequency is a bad thing. This just happens to be in the specs, which I don't get to design (we are actually converting a purge script written in C++ to PL/SQL).

"I have seen other threads on OTN where gurus say that including COMMIT frequency is a bad thing."

We consider the use of COMMIT in the middle of a business transaction to be bad practice for a number of reasons:
(1) Multiple commits are generally slower.
(2) It is frequently associated with an implementation which consists of record-by-record processing inside a PL/SQL cursor loop when set-based processing is far more efficient.
(3) It can be harder to recover from a failed business transaction if the data is in an inconsistent state.
(4) Such implementations are prone to ORA-01002 (fetch out of sequence) and ORA-01555 (snapshot too old) errors.
Now the question which occurs to me in your case is, "Why do your users care?" In fact, would they even notice if you implemented the procedure as you show in your post and just ignored p_commitfreq?
However, if your conscience won't let you do that, the best solution would be something like this:
  << commit_loop >>
  loop
       delete from mytable
       where created_time < sysdate - p_no_of_days_old
       and rownum <= p_commitfreq;
       exit when sql%rowcount = 0;
       commit;
  end loop commit_loop;
  commit;

This solution will be slower than a single-commit approach but avoids the other issues.
Cheers, APC
Blog : http://radiofreetooting.blogspot.com/
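
For reference, here is a minimal sketch of the pieces above assembled into the procedure the original poster describes (same parameters as in the question; purely illustrative, with no error handling or logging):

create or replace procedure delete_mytable (
   p_commitfreq     in number,
   p_no_of_days_old in number
)
as
begin
   << commit_loop >>
   loop
      -- delete at most p_commitfreq rows per pass
      delete from mytable
      where  created_time < sysdate - p_no_of_days_old
      and    rownum <= p_commitfreq;

      -- stop once a pass deletes nothing
      exit commit_loop when sql%rowcount = 0;
      commit;
   end loop commit_loop;
   commit;  -- harmless final commit after the last (empty) pass
end delete_mytable;
/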

Similar Messages

  • Bulk Insert - Commit Frequency

    Hi,
    I am working on OWB 9.2.
    I have a mapping that would be inserting > 90000 records in target table.
    When I execute the SQL query generated by OWB directly in the database, it returns results in 6 minutes.
    But when I execute this OWB mapping in 'Set based fail over to row based' mode with bulk size = 50, commit frequency = 1000 and max errors = 50, it takes almost 40 minutes. While this mapping was running I checked the target table record count: it was increasing by 50 rows at a time, and it takes a long time to finish.
    What changes do I need to make to speed up this insert?

    If it is inserting 50 rows at a time, it means the set-based load is failing and the mapping is switching to row-based mode.
    If it always fails in set-based mode, it is better to run in row-based mode from the start so that no time is wasted on the set-based attempt.
    In row-based mode you can increase the bulk size (to 5000, 10000 or more) for more efficient execution. Keep the commit frequency the same as the bulk size.

  • Commit Frequency

    Hi Gurus,
    Can anybody help me with how to work with commit frequency in ODI? I have 5 million records to load, and it has to commit after every 10,000 records.

    You can refer to my blog; I believe you can still optimize the KM further. This is just the beginning.
    http://dwteam.in/commit-frequency-in-odi/

  • Implement commit,rollback,cancel popup when swapping task in dynamic region

    Hello,
    I have been trying to implement this functionality when I swap task flows in a dynamic region....
    A popup/dialog to be displayed with the following options (assuming there are changes to be saved on the task flow being swapped out)
    Commit - save the changes on the outgoing task flow and continue to bring in the new task flow
    Rollback - cancel the changes on the outgoing task flow and continue to bring in the new task flow
    Cancel - remain on the outgoing task flow and do not bring in the new task flow
    If I click a button within the outgoing task flow, it is possible to examine the condition of the view object, determine whether there are pending changes, and display such a dialog. The problem is that the navigation within the dynamic region is performed from outside the region, and we are advised against using task flow return activities when using bounded task flows within regions (section 17.1.8 of the Fusion dev guide for 11.1.1.1.0). So the only thing I can think of is building a finalizer routine for the task flow. While this does get executed when the task flow is swapped out, how can I display the dialog from there, stop execution of the finalizer, and return to the original task flow if ultimately I want to cancel?
    Any ideas? other solutions?
    Thanks.

    Thanks for the reply Richard,
    I think this will probably be an important feature for anyone using the single-page model, unless that model is being discouraged. Unfortunately I'm not aware of the "How-to" you mentioned, but I would dearly LOVE to be. The project I am working on is at a relatively early stage, and this would be the time to put things in extension libraries, templates and the like, especially if there is no official line on when a formal fix might be available, the plans not being firm and all...
    But thanks for the reply nonetheless... I'd still appreciate any further information if anyone has some form of solution.
    Cheers

  • Can we control the Materializd view commit frequency

    Hi
    I am creating a huge MV that requires a lot of undo space during the initial build or whenever I do a complete refresh. Can we control the commit interval, e.g. have the MV commit after every 1 million records during initialization?
    Any suggestions will be highly appreciated.

    No, you cannot do incremental commits when refreshing a materialized view. At least not if you want Oracle to refresh it for you. Even if you could, you almost certainly wouldn't want to. You would generate more undo in total trying to refresh 1 million rows at a time. And your data would be inconsistent. Plus, you'd have to build additional structures to track which rows had already been processed/updated so that Oracle could recover in the event that there is a failure in the middle of refreshing a materialized view.
    If you're not concerned by the inconsistent data, depending on the Oracle version and how you've configured the environment, along with whether you're doing incremental or complete refreshes, you could set atomic_refresh to FALSE so that Oracle truncates and reloads the data each time, which generates less UNDO but causes the materialized view to be empty during the refresh. Or you could set the materialized view to refresh incrementally, which may be substantially faster than doing a complete refresh.
    Justin
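    To illustrate the atomic_refresh option Justin describes, a non-atomic complete refresh can be requested roughly like this (a sketch only; MY_BIG_MV is a hypothetical materialized view name, and this applies to versions where DBMS_MVIEW.REFRESH supports the atomic_refresh parameter):
    begin
       -- 'C' requests a complete refresh; atomic_refresh => false lets Oracle
       -- truncate and reload rather than delete and re-insert, which generates
       -- far less undo but leaves the MV empty while the refresh runs.
       dbms_mview.refresh(list           => 'MY_BIG_MV',
                          method         => 'C',
                          atomic_refresh => false);
    end;
    /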

  • COMMIT FREQUENCY /BULK SIZE ignored ?

    OWB Client 9.2.0.2.8
    OWB Repository: 9.2.0.2.0
    I have a simple SRC to TRG toy map which does an update (row based)
    I was hoping to achieve the following: if any error is encountered , then 1) abort and 2) rollback any changes
    I could achieve 1) by setting MAX_ERRORS=0
    For 2) I tried COMMIT_FREQUENCY=BULKSIZE= some_very_high_number.
    But the map always insists on updating at least some rows before encountering the first error (and then aborting)
    I tried changing bulk processing to true and got the same results. I changed UPDATE to INSERT => similar behavior.
    why is the COMMIT_FREQUENCY=BULK SIZE setting being ignored? Or is this just a bug with my version ?

    Hi,
    Good question... it should be the case as you describe (I think). I'd need to look at the code for this, but you may want to contact support on this one so they can reproduce this...
    Jean-Pierre

  • Best way to implement a word frequency counter (input = textfile)?

    I had this as an interview question and basically came up with a solution that uses a hash table...
    //create hash table
    //bufferedreader
    //read file in,
    //for each word encountered, create an object that has (String word, int count) and push into hash table
    //then loop and read out all the hash table entries
    ===skip this stuff if you don't feel like reading too much
    then the interviewer proceeded to grill me on why I shouldn't use a tree or any other data structure for that matter... I was kinda stumped on that.
    also he asked me what happens if the number of words exceeds the capacity of the hash table? I said you can increase the capacity of the hash table, but it doesn't sound too efficient and I'm not sure how much to increase it by. I had some OK solutions:
    1. read the file thru once, and get the number of words in the file, set the hashtable capacity to that number
    2. do #1, but run another algorithm that will figure out the distinct # of words
    3. separate chaining
    ===
    anyhow, what kind of answers/algorithms would you guys have come up with? Thanks in advance.

    "for each word encountered, create an object that has (String word, int count) and push into hash table"
    Well, first you need to check whether the word is already in the hashtable, right? And if it is there, you need to increment the count.
    "then the interviewer proceeded to grill me on why I shouldn't use a tree or any other data structure for that matter... I was kinda stumped on that."
    A hashtable has amortized O(1) time for insert and search. A balanced binary search tree has O(log n) complexity for the same operations, so a hashtable will be faster for a large number of words. The other option is a so-called "trie" (google for more), whose operations cost O(m), where m is the length of the word; so if your words aren't too long, a trie may be just as fast as a hashtable, and it may also use less memory.
    "also he asked me what happens if the number of words exceeds the capacity of the hash table?"
    The hashmap implementation that comes with Java grows automatically; you don't need to worry about it. It may not "sound" efficient to have to copy the entire data structure, but the copy happens quickly and occurs relatively infrequently compared with the number of words you'll be inserting.
    "anyhow, what kind of answers/algorithms would you guys have come up with?"
    I would do anything to avoid making two passes over the data. Assuming you're reading it from disk, most of the time will be spent reading from disk, not inserting into the hashtable. If you really want to size the hashtable a priori, you can make it big enough to hold all the words in the English language, which IIRC is about 20,000.
    And relax, you had the right answer. I used to work in this field and this is exactly how we implemented our frequency counter and it worked perfectly well. Don't let these interviewers push you around; just tell them why you thought a hashtable was the best choice and show off your analytical skills!
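    For what it's worth, the same check-then-increment logic can be sketched in PL/SQL (the language used elsewhere on this page) with an associative array as the hash-table equivalent; WORDS_STAGE is a hypothetical staging table holding one word per row:
    declare
       type t_counts is table of pls_integer index by varchar2(4000);
       l_counts t_counts;
       l_word   varchar2(4000);
    begin
       for r in (select lower(word) as w from words_stage) loop
          if l_counts.exists(r.w) then
             l_counts(r.w) := l_counts(r.w) + 1;   -- word seen before: increment
          else
             l_counts(r.w) := 1;                   -- first occurrence
          end if;
       end loop;
       -- walk the associative array and print each word with its count
       l_word := l_counts.first;
       while l_word is not null loop
          dbms_output.put_line(l_word || ' = ' || l_counts(l_word));
          l_word := l_counts.next(l_word);
       end loop;
    end;
    /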

  • Solving "COMMIT business rules" on the database server

    Headstart Oracle Designer related white paper
    "CDM RuleFrame Overview: 6 Reasons to get Framed"
    (at //otn.oracle.com/products/headstart/content.html) says:
    "For a number of business rules it is not possible to implement these in the server
    using traditional check constraints and database triggers. Below you can find two examples:
    Example rule 1: An Order must have at least one Order Line ..."
    But, one method exists that allows solving "COMMIT rules" completely on the database level.
    That method consists of the possibility of delaying the checking of the declarative constraints (NOT NULL, Primary Key, Unique Key, Foreign Key, Check Constraints) until the commit
    (that method was introduced first in the version 8.0.).
    E.g. we add the column "num_emps" to the DEPT table, which always holds the number of EMP rows belonging to that department, and add a DEFERRED check constraint that uses the values from that column:
    ALTER TABLE dept ADD num_emps NUMBER DEFAULT 0 NOT NULL;
    UPDATE dept
    SET num_emps = (SELECT COUNT (*) FROM emp WHERE emp.deptno = dept.deptno);
    DELETE dept WHERE num_emps = 0;
    ALTER TABLE dept ADD CONSTRAINT dept_num_emps_ck CHECK (num_emps > 0) INITIALLY DEFERRED;
    Triggers that ensure the enforcement of the server-side "COMMIT rules" are fairly simple.
    We need a packaged variable that is set and reset in the EMP triggers and whose value
    is read in the bur_dept trigger (of course, we could have placed the variable in the package
    specification and changed/read it directly, thus not needing the package body,
    but this is a "cleaner" way to do it):
    CREATE OR REPLACE PACKAGE pack IS
    PROCEDURE set_flag;
    PROCEDURE reset_flag;
    FUNCTION dml_from_emp RETURN BOOLEAN;
    END;
    CREATE OR REPLACE PACKAGE BODY pack IS
    m_dml_from_emp BOOLEAN := FALSE;
    PROCEDURE set_flag IS
    BEGIN
    m_dml_from_emp := TRUE;
    END;
    PROCEDURE reset_flag IS
    BEGIN
    m_dml_from_emp := FALSE;
    END;
    FUNCTION dml_from_emp RETURN BOOLEAN IS
    BEGIN
    RETURN m_dml_from_emp;
    END;
    END;
    CREATE OR REPLACE TRIGGER bir_dept
    BEFORE INSERT ON dept
    FOR EACH ROW
    BEGIN
    :NEW.num_emps := 0;
    END;
    CREATE OR REPLACE TRIGGER bur_dept
    BEFORE UPDATE ON dept
    FOR EACH ROW
    BEGIN
    IF :OLD.deptno <> :NEW.deptno THEN
    RAISE_APPLICATION_ERROR (-20001, 'Can''t change deptno in DEPT!');
    END IF;
    -- only EMP trigger can change "num_emps" column
    IF NOT pack.dml_from_emp THEN
    :NEW.num_emps := :OLD.num_emps;
    END IF;
    END;
    CREATE OR REPLACE TRIGGER air_emp
    AFTER INSERT ON emp
    FOR EACH ROW
    BEGIN
    pack.set_flag;
    UPDATE dept
    SET num_emps = num_emps + 1
    WHERE deptno = :NEW.deptno;
    pack.reset_flag;
    END;
    CREATE OR REPLACE TRIGGER aur_emp
    AFTER UPDATE ON emp
    FOR EACH ROW
    BEGIN
    IF NVL (:OLD.deptno, 0) <> NVL (:NEW.deptno, 0) THEN
    pack.set_flag;
    UPDATE dept
    SET num_emps = num_emps - 1
    WHERE deptno = :OLD.deptno;
    UPDATE dept
    SET num_emps = num_emps + 1
    WHERE deptno = :NEW.deptno;
    pack.reset_flag;
    END IF;
    END;
    CREATE OR REPLACE TRIGGER adr_emp
    AFTER DELETE ON emp
    FOR EACH ROW
    BEGIN
    pack.set_flag;
    UPDATE dept
    SET num_emps = num_emps - 1
    WHERE deptno = :OLD.deptno;
    pack.reset_flag;
    END;
    If we insert a new DEPT without the belonging EMP, or delete all EMPs belonging to a certain DEPT, or move all EMPs of a certain DEPT, when the COMMIT is issued we get the following error:
    ORA-02091: transaction rolled back
    ORA-02290: check constraint (SCOTT.DEPT_NUM_EMPS_CK) violated
    The disadvantage is that one "auxiliary" column is (mostly) needed for each "COMMIT rule".
    If we'd like to add another "COMMIT rule" to the DEPT table, like:
    "SUM (sal) FROM emp WHERE deptno = p_deptno must be <= p_max_dept_sal"
    we would have to add another column, like "dept_sal".
    CDM RuleFrame advantage is that it does not force us to add "auxiliary" columns.
    We must emphasize that in real life we would not write PL/SQL code directly in the database triggers, but in packages, nor would we directly use RAISE_APPLICATION_ERROR.
    It is written this way in this sample only for the code clarity purpose.
    Regards
    Zlatko Sirotic

    Zlatko,
    You are right, your method is a way to implement "COMMIT rules" completely on the database level.
    As you said yourself, disadvantage is that you need an extra column for each such rule,
    while with CDM RuleFrame this is not necessary.
    A few remarks:
    - By adding an auxiliary column (like NUM_EMPS in the DEPT table) for each "COMMIT rule",
    you effectively change the type of the rule from Dynamic (depending on the type of operation)
    to a combination of Change Event (for updating NUM_EMPS) and Static (deferred check constraint on NUM_EMPS).
    - Deferred database constraints have the following disadvantages:
      - When something goes wrong within the transaction, the complete transaction is rolled back, not just the piece that went wrong. Therefore, it becomes more important to use appropriate commit units.
      - There is no report of the exact row responsible for the violation, nor are further violations, either by other rows or of other constraints, reported.
      - If you use Oracle Forms as a front-end application, the errors raised from deferred constraints are not handled very well.
    - CDM discourages the use of check constraints. One of the reasons is, that when all tuple rules are placed in the CAPI,
    any violations can be reported at the end of the transaction level together with all other rule violations.
    A violated check constraint would abort the transaction right away, without the possibility of reporting back other rule violations.
    So I think your tip is a good alternative if for some reason you cannot use CDM RuleFrame,
    but you'd miss out on all the other advantages of RuleFrame that are mentioned in the paper!
    kind regards, Sandra

  • Commit after every 1000 records

    Hi dears ,
    I have to update or insert around 100,000 (1 lakh) records every day on an incremental basis.
    Currently the commit happens only after all the records have been processed, so if there is a problem part-way through, all my processed records get rolled back.
    I need to commit after every N records, say every 1000 records.
    Does anyone know how to do it?
    Thanks in advance
    Regards
    Raja

    Raja,
    There is an option in the configuration of a mapping in which you can set the Commit Frequency. The Commit Frequency only applies to non-bulk-mode mappings. Bulk-mode mappings commit according to the bulk size (which is also a configuration setting of the mapping).
    When you set the Default Operating Mode to row based and Bulk Processing Code to false, Warehouse Builder uses the Commit Frequency parameter when executing the package. Warehouse Builder commits data to the database after processing the number of rows specified in this parameter.
    If you set Bulk Processing Code to true, set the Commit Frequency equal to the Bulk Size. If the two values are different, Bulk Size overrides the commit frequency and Warehouse Builder implicitly performs a commit for every bulk size.
    Regards,
    Ilona
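    Outside OWB, the generic PL/SQL pattern for "commit every N rows" is BULK COLLECT with a LIMIT plus FORALL. A minimal sketch, shown as an insert-only load for brevity (SRC_DATA and TGT_DATA are hypothetical tables with identical column layouts):
    declare
       cursor c_src is
          select * from src_data;
       type t_rows is table of c_src%rowtype;
       l_rows t_rows;
    begin
       open c_src;
       loop
          fetch c_src bulk collect into l_rows limit 1000;   -- batch size = 1000
          exit when l_rows.count = 0;
          forall i in 1 .. l_rows.count
             insert into tgt_data values l_rows(i);          -- load one batch
          commit;                                            -- commit once per batch
       end loop;
       close c_src;
    end;
    /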

  • Corelated commit in owb

    Hi,
    Can anyone explain how correlated commit works in OWB? Suppose I enable correlated commit in a mapping and, when running that mapping, I pass a runtime parameter such as commit frequency = 10000.
    Will it commit every 10,000 records in the target because of the commit frequency, or will it commit only at the end of loading all records into the target because of the correlated commit? Please comment on this.
    Regards
    naren

    Commit frequency and Correlated Commit are two independent configuration parameters. Setting the commit frequency to 10000 will make sure that a commit happens after every 10,000 records, irrespective of whether Correlated Commit is set or not.
    Commit frequency is applicable only to row-based and row-based (target) modes.
    Correlated commit is applicable when you have a single source populating multiple target tables in a single mapping. OWB User guide chapter 11 (page 11-16) has very good explanation of how correlated commit works.
    Hope this helps.

  • OWB and COMMIT

    OWB version: 9.0.2.
    I realized that OWB puts several COMMIT statements in the generated PL/SQL package code.
    I need to execute the COMMIT statement only at the end of the PL/SQL procedure, or when I decide, or issue a ROLLBACK if an error occurs, so my question is:
    Is it possible to configure an OWB mapping in order to completely remove the COMMIT statements from the generated code?
    I removed the COMMIT statements manually, but it seems that some part of the generated code still commits, because when the transaction finishes and I issue a rollback, all the inserts remain in the tables.
    TIA

    Javier,
    OWB indeed always includes commit statements in its code. However, even if you removed all commits (assuming you run the 904 or 92 version of OWB) you would still end up with commits after every execution, because every execution runs in a separate session, and when a session exits cleanly the database implicitly commits.
    In OWB92 we introduced a feature called correlated commit (a configuration parameter on a mapping) that helps you control commits in a multi-target mapping. With that feature you can control commits to only happen at the end of the mapping in set-based mode. In row-based mode or row-based bulk mode, the behavior is that each row from the PL/SQL cursor is applied to all targets together, rather than one target at a time.
    The commit behavior is following:
    - in set-based mode, there is one commit per statement and with correlated commit set to true, it will be once at the end of the mapping.
    - in row-based bulk mode, the commit frequency is the bulk size. To force row-based mode, set the bulk size to 1.
    - in row-based mode, the commit frequency can be configured by the commit frequency configuration parameter.
    So, if possible, run in set-based mode (this is fastest anyway) and set correlated commit to true. This will commit the mapping at the end. In case of any error, the entire mapping will rollback.
    Hope this helps,
    Mark.

  • OWB 10gR2: Commit everything or nothing

    Hello.
    We have a mapping which must run in row-based mode because it deletes records. There is one source and multiple targets. One flow feeds a log table (LOG_LT); the other flow feeds 4 targets (MOD_LT for all targets) depending on the mode given in the records (Update/Insert/Merge/Delete). The log table gets part of the primary key of each incoming row, which is used by a replication application.
    It is important that there is nothing in the 4 targets which is not stored (as part of the PK, as described) in the log table; otherwise information will be lost for the replication. Therefore we set the target load order to use LOG_LT first.
    a) We started with autocommit and a commit frequency higher than the number of rows in the source. If the data flow had an error (a PK violation, for example), only this flow, and not the log flow, was rolled back (table-based rollback).
    b) We used autocommit with correlated commit. The log flow and the data flow are OK, but the mapping was committing too early (not at the value set in commit frequency).
    Why?
    c) The next idea was to set the mapping to manual commit and to commit in a post-mapping procedure if everything was OK. The mapping can be deployed, but when we start the mapping we get the warning:
    RTC-5380: Manual commit control configured on maping xy is ignored.
    Why?
    Does anybody have a good practice for a row-based mapping (with multiple targets and one source) to commit everything or nothing?
    Thanks for your help
    Stephan

    Hello Stephan,
    In row-based mappings with a single source and multiple targets, the targets are loaded in the following ways.
    1. Correlated Commit = true and Row Based - Suppose there are 50 rows in the source and there are 4 target tables. When the mapping is run in this mode and it encounters an error while loading row 10 into, say, target table TGT_3, then this row will be rolled back from all four targets and processing will continue, finally loading 49 records into all 4 tables.
    When you run in row-based mode, please also check the bulk processing mode. If it is set to true, then the bulk size overrides the commit frequency you set. It is thus advisable to make the bulk size and the commit frequency equal, or to set the bulk processing mode to false and then set the commit frequency.
    HTH
    -AP

  • Optimizing a looping procedure - when to commit

    Does anyone have a link to an article that explains when a commit should be timed to be optimal (i.e. after X statements, or after inserting/updating Y MB)?
    I have a procedure that loops through and inserts (ugly, but inherited, and I have no time to re-code) and I'm not sure how to go about optimizing the commit point other than by trial and error. Right now I'm committing every 1000 iterations of the loop, which is faster than committing every iteration but slower than not committing inside the loop at all; how can I figure out what is best?
    Connected to:
    Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
    With the Partitioning option
    JServer Release 8.1.7.4.0 - Production
    Lemme know if more info is needed.
    Thanks,
    Pete

    Yes, this is obvious: each commit has a small amount of overhead, and the more you do, the more the overhead adds up.
    So the most optimal approach available to you is to commit once at the end.
    If you have 27 million inserts this may not be possible due to resource issues. Do the tables have a lot of indexes? Try for a commit every 2 million; 1000 is ridiculously tiny.
    Try the query in this post to see how much undo you use before committing after 1000 rows, and then multiply up to see how much you can get away with.
    Re: Suggestions for improving this update in batches of 100 records
    But again, I have run tests before that show LOOP vs SQL is hundreds of times slower, while committing after every row in the LOOP versus committing once after the LOOP makes a difference of a few percent, so I do not think you are going to squeeze much of a runtime improvement just by reducing the commit frequency.
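    The query referred to above is not reproduced here, but a typical way to watch how much undo an open transaction has generated is to join v$transaction to v$session (a sketch; run it from another session while the batch job is mid-transaction, and note that SELECT access on the V$ views is required):
    select s.sid, s.username, t.used_ublk, t.used_urec
    from   v$transaction t, v$session s
    where  s.taddr = t.addr;
    -- used_ublk = undo blocks consumed so far, used_urec = undo records consumed so far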

  • Dynamic Lookup in OWB 10.1g

    Can we execute dynamic lookup in OWB 10.1g?
    I want to update the columns of the target table based on the previous values of the columns.
    Suppose there is a record in the target table with previous-status and current-status columns.
    The source table consists of 10 records which need to be processed one at a time in a single batch. We need to compare the status of each record with the current status in the target table. If the source contains the next higher status, then the current status of the target record needs to move to the previous status, and the new status coming from the source needs to overwrite the current status of the target record.
    We have tried using the row-based option as well as setting the commit frequency equal to 1, but we are not able to get the required result.
    How can we implement this in OWB 10.1g?

    OK, now what I would do in an odd case like this is to look at the desired FINAL result of a run rather than worry so much about the intermediate steps.
    Based on your statement of the status incrementing upward, and only upward, your logic can actually be distilled down to the following:
    At the end of the load, the current status for a given primary key is the maximum status, and the previous status will be the second highest status. All the intermediate status values are transitional status values that have no real bearing on the desired final result.
    So, let's try a simple prototype:
    --drop table mb_tmp_src; /* SOURCE TABLE */
    --drop table mb_tmp_tgt; /*TARGET TABLE */
    create table mb_tmp_src (pk number, val number);
    insert into mb_tmp_src (pk, val) values (1,1);
    insert into mb_tmp_src (pk, val) values (1,2);
    insert into mb_tmp_src (pk, val) values (1,3);
    insert into mb_tmp_src (pk, val) values (2,2);
    insert into mb_tmp_src (pk, val) values (2,3);
    insert into mb_tmp_src (pk, val) values (3,1);
    insert into mb_tmp_src (pk, val) values (4,1);
    insert into mb_tmp_src (pk, val) values (4,3);
    insert into mb_tmp_src (pk, val) values (4,4);
    insert into mb_tmp_src (pk, val) values (4,5);
    insert into mb_tmp_src (pk, val) values (4,6);
    insert into mb_tmp_src (pk, val) values (5,5);
    commit;
    create table mb_tmp_tgt (pk number, val number, prv_val number);
    insert into mb_tmp_tgt (pk, val, prv_val) values (2,1,null);
    insert into mb_tmp_tgt (pk, val, prv_val) values (5,4,2);
    commit;
    -- for PK=1 we will want a current status of 3, prev =2
    -- for PK=2 we will want a current status of 3, prev =2
    -- for PK=3 we will want a current status of 1, prev = null
    -- for PK=4 we will want a current status of 6, prev = 5
    -- for PK=5 we will want a current status of 5, prev = 4
    Now, let's create a pure SQL query that gives us this result:
    select pk, val, lastval
    from (
       select pk,
              val,
              max(val) over (partition by pk) maxval,
              lag(val) over (partition by pk order by val) lastval
       from (
          select pk, val
          from mb_tmp_src mts
          union
          select pk, val
          from mb_tmp_tgt mtt
       )
    )
    where val = maxval;
    (NOTE: UNION, not UNION ALL to avoid multiples where tgt = src, and would want a distinct in the union if multiple instances of same value can occur in source table too)
    OK, now I'm not at my work right now, but you can see how unioning (SET operator) the target with the source, passing the union through an expression to get the analytics, and then through a filter to get the final rows before updating the target table will get you what you want. And the bonus is that you don't have to commit per row. If you can get OWB to generate this sort of statement, then it can go set-based.
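    As an illustration of the kind of set-based statement described above (not OWB-generated code, just a hand-written sketch against the prototype tables from this post), the final update of the target could be expressed as a single MERGE:
    merge into mb_tmp_tgt t
    using (
       select pk, val, lastval
       from (
          select pk, val,
                 max(val) over (partition by pk) maxval,
                 lag(val) over (partition by pk order by val) lastval
          from (select pk, val from mb_tmp_src
                union
                select pk, val from mb_tmp_tgt)
       )
       where val = maxval
    ) s
    on (t.pk = s.pk)
    when matched then
       update set t.val = s.val, t.prv_val = s.lastval
    when not matched then
       insert (pk, val, prv_val) values (s.pk, s.val, s.lastval);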
    EDIT: And if you can't figure out how to get OWB to generate this entirely within the mapping editor, then use it to create a view from the main subquery with the analytics, and then use that as the source in your mapping.
    If your problem was time-based where the code values could go up or down, then you would do pretty much the same thing except you want to grab the last change and have that become the current value in your dimension. The only time you would care about the intermediate values is if you were coding for a type 2 SCD, in which case you would need to track all the changes.
    Hope this helps.
    Mike
    Edited by: zeppo on Oct 25, 2008 10:46 AM

  • Tuning Log Buffer

    Gurus,
    My database release is 9.2.0.6, running on SPARC Solaris 5.9 64-bit.
    When I ran Statspack, I found a log file sync problem:
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~
    Event                         Waits    Time (s)   % Total Ela Time
    log file sync                63,909      29,611              48.07
    log file parallel write      71,223      10,045              16.31
    db file parallel write       11,614       6,134               9.96
    enqueue                      23,812       4,391               7.13
    CPU time                                  3,807               6.18
    In my opinion, the way to reduce log file sync waits is to increase log_buffer (CMIIW).
    My question is: how do I set the best value for log_buffer? It is currently 2097664.
    regards
    abip

    Why has this question not been answered so far? :)
    In my opinion:
    1. Reduce commit frequency
    If your program code looks like this:
    for (...) {
        do_DML;
        commit;
    }
    then the following is better practice:
    for (...) {
        do_DML;
    }
    commit;
    If possible, implement your business logic in PL/SQL. PL/SQL uses group/async commit itself.
    2. Redo I/O performance
    Your wait time is dominated by log file sync and log file parallel write. This means that your LGWR is really busy and has difficulty keeping up with your redo I/O requests. Some basic questions:
    - Is your redo log separated physically from your data files?
    - Is your redo log located on a fast device?
    - Is your redo log separated physically from your archive log files?
    I should admit that in many situations there is no way to reduce the commit frequency. In that case, enhancing redo I/O performance might be plan "B".
    There are also some tricky parameters to tune LGWR performance, but in general these should never be touched, especially on a production system.
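    To check whether the picture improves after reducing the commit frequency, the cumulative waits can be watched directly (a sketch; on 9i, time_waited is reported in centiseconds):
    select event, total_waits, time_waited
    from   v$system_event
    where  event in ('log file sync', 'log file parallel write')
    order  by time_waited desc;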
