Need to commit after every 10,000 records inserted?

What would be the best way to commit after every 10,000 records inserted from one table to the other, using the following script:
DECLARE
   l_max_repa_id   x_received_p.repa_id%TYPE;
   l_max_rept_id   x_received_p_trans.rept_id%TYPE;
BEGIN
   SELECT MAX (repa_id)
     INTO l_max_repa_id
     FROM x_received_p
    WHERE repa_modifieddate <= ADD_MONTHS (SYSDATE, -6);

   SELECT MAX (rept_id)
     INTO l_max_rept_id
     FROM x_received_p_trans
    WHERE rept_repa_id = l_max_repa_id;

   INSERT INTO x_p_requests_arch
      SELECT *
        FROM x_p_requests
       WHERE pare_repa_id <= l_max_rept_id;

   DELETE FROM x_p_requests
    WHERE pare_repa_id <= l_max_rept_id;

   COMMIT;
END;

1006377 wrote:
We are moving between 5 and 10 million records from one table to the other, and it takes forever. Please could you provide me with a script just to commit after every x amount of records? :)

I concur with the other responses.
Committing every N records will slow down the process, not speed it up.
The fastest way to move your data (and 10 million rows is nothing; we handle those sorts of volumes frequently ourselves) is a single INSERT ... SELECT ... statement (or a CREATE TABLE ... AS SELECT ... statement, as appropriate).
If that SQL runs slowly, then you need to look at what is causing the performance problem in the SELECT statement itself and tackle that issue. It may be a case of simply getting the database statistics up to date, applying a new index to a table, or rewriting the SELECT to tackle the query in a different way.
So deal with the cause of the performance issue; don't try to fudge your way around it, which will only create further problems.
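
To illustrate, the whole move above stays a two-statement job with one commit at the end, inside the poster's own PL/SQL block so l_max_rept_id is in scope. The /*+ APPEND */ direct-path hint is an optional extra I am adding here, not something from the original post:

   INSERT /*+ APPEND */ INTO x_p_requests_arch
      SELECT *
        FROM x_p_requests
       WHERE pare_repa_id <= l_max_rept_id;

   DELETE FROM x_p_requests
    WHERE pare_repa_id <= l_max_rept_id;

   COMMIT;  -- one commit once the whole business transaction is complete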

Similar Messages

  • SE16N Export Local File - Header Line Every 60,000 Records

    Good Day,
       After a recent system upgrade to SAP ECC 6.0 Release 701, Level 007 (from Level 004), we received word that exporting to a local file from SE16N now creates a header line after every 60,000 records.  This occurs on both SAP GUI 7.10 Patch 15 and SAP GUI 7.20.
       We found notes 1309128 and 1017586, but our service pack level already implements the correction instructions listed. Is there a parameter (either system wide or SAP GUI specific) that needs to be set to disable the header lines? Is there anything else that we can check?
    Thank you for any help you may be able to provide!

    Hi,
    Sure it is unusual.
    Have you checked whether the "header" row is repeated in the internal table itself?
    Best regards,
    Guillaume

  • COMMIT after every 10000 rows

    I'm getting problems with the following procedure. Is there anything I can do to commit after every 10,000 rows of deletion? Or is there any other alternative? The DBAs are not willing to increase the undo tablespace size!
    create or replace procedure delete_rows(v_days number)
    is
       l_sql_stmt varchar2(32767) := 'DELETE TABLE_NAME WHERE ROWID IN (SELECT ROWID FROM TABLE_NAME WHERE ';
       where_cond varchar2(32767);
    begin
       where_cond := 'DATE_THRESHOLD < (sysdate - '|| v_days ||' )) ';
       l_sql_stmt := l_sql_stmt || where_cond;
       IF v_days IS NOT NULL THEN
          EXECUTE IMMEDIATE l_sql_stmt;
       END IF;
    end;

    I think I can use cursors and commit at every 10,000 in %ROWCOUNT, but even before posting the thread, I feel I will get bounces! ;-)
    Please help me out in this!
    Cheers
    Sarma!

    Hello
    In the event that you can't persuade the DBA to configure the database properly, why not just use rownum?
    SQL> CREATE TABLE dt_test_delete AS SELECT object_id, object_name, last_ddl_time FROM dba_objects;
    Table created.
    SQL>
    SQL> select count(*) from dt_test_delete WHERE last_ddl_time < SYSDATE - 100;
      COUNT(*)
    ----------
         35726
    SQL>
    SQL> DECLARE
           ln_DelSize  NUMBER := 10000;
           ln_DelCount NUMBER;
         BEGIN
           LOOP
             DELETE FROM dt_test_delete
             WHERE last_ddl_time < SYSDATE - 100
             AND rownum <= ln_DelSize;
             ln_DelCount := SQL%ROWCOUNT;
             dbms_output.put_line(ln_DelCount);
             EXIT WHEN ln_DelCount = 0;
             COMMIT;
           END LOOP;
         END;
         /
    10000
    10000
    10000
    5726
    0
    PL/SQL procedure successfully completed.
    SQL>
    HTH
    David
    David
    Message was edited by:
    david_tyler

  • Commit after every 1000 records

    Hi dears,
    I have to insert or update around 1 lakh (100,000) records every day on an incremental basis.
    Currently the commit happens only after all the records are processed, so if something goes wrong partway through, all my processed records are rolled back.
    I need to commit after every frequency of records, say every 1,000 records.
    Any one know how to do it??
    Thanks in advance
    Regards
    Raja

    Raja,
    There is an option in the configuration of a mapping in which you can set the Commit Frequency. The Commit Frequency only applies to non-bulk mode mappings. Bulk mode mappings commit according to the bulk size (which is also a configuration setting of the mapping).
    When you set the Default Operating Mode to row based and Bulk Processing Code to false, Warehouse Builder uses the Commit Frequency parameter when executing the package. Warehouse Builder commits data to the database after processing the number of rows specified in this parameter.
    If you set Bulk Processing Code to true, set the Commit Frequency equal to the Bulk Size. If the two values are different, Bulk Size overrides the commit frequency and Warehouse Builder implicitly performs a commit for every bulk size.
    Regards,
    Ilona

  • Avoid Commit after every Insert that requires a SELECT

    Hi everybody,
    Here is the problem:
    I have a table of generator alarms which is populated daily; approximately 50,000 rows are inserted into it every day.
    Currently I have one month's data in it: approximately 900,000 rows.
    Here is the main problem:
    Before each INSERT, the whole table is checked to see whether the record already exists. Two columns, "SiteName" and "OccuranceDate", are checked together (with an AND in the WHERE clause); the combination of these two columns identifies a unique record.
    We have also partitioned this table on OccuranceDate, with each partition holding 5 days' data,
    say
    01-Jun to 06 Jun
    07-Jun to 11 Jun
    12-Jun to 16 Jun
    and so on
    26-Jun to 30 Jun
    NOW:
    We have a COMMIT inside the insertion loop, and each row is committed as soon as it is inserted, making approximately 50,000 commits daily.
    Question:
    Can we commit after, say, every 500 inserted rows? But my real question is: can we query, with SELECT, records which have just been inserted but are not yet committed?
    A friend told me that you can query records inserted in the same connection session even before they are committed.
    Can anyone help?
    Sorry for the long question, but it was needed to make you understand the real issue. :(
    Khalid Mehmood Awan
    khalidmehmoodawan @ gmail.com
    Edited by: user5394434 on Jun 30, 2009 11:28 PM

    Don't worry about it - I just said that because the experts over there will help you much better. If you post your code details there they will give suggestions on optimizing it.
    Doing a SELECT between every INSERT doesn't seem very natural to me, but it all depends on the details of your code.
    Also, not committing on time may cause loss of the uncommitted changes. Depending on how critical the data is and the dependency of the changes, you have to commit after every INSERT, in between, or at the end.
    Regards,
    K.
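
    For what it's worth, the friend is right: an Oracle session always sees its own uncommitted changes, so the duplicate check keeps working without intermediate commits. A minimal sketch; the table name generator_alarms is made up for illustration, while SiteName and OccuranceDate are the poster's columns:

    DECLARE
       l_cnt NUMBER;
    BEGIN
       INSERT INTO generator_alarms (SiteName, OccuranceDate)
       VALUES ('SITE-1', TRUNC (SYSDATE));
       -- No COMMIT yet: this same session still sees the row it just inserted.
       SELECT COUNT (*)
         INTO l_cnt
         FROM generator_alarms
        WHERE SiteName = 'SITE-1'
          AND OccuranceDate = TRUNC (SYSDATE);
       DBMS_OUTPUT.PUT_LINE ('Rows visible to this session: ' || l_cnt);
       ROLLBACK;  -- demo only; leave no side effects
    END;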

  • Is it possible to automatically add a comma after every 6 characters has been typed in a text field?

    Hope someone can help me.
    What I basically need help with is how to make Acrobat add a comma after every 6 characters in a text field:
    XXXXXX,YYYYYY, ZZZZZZ etc

    I'm sorry, but I did not understand that (I'm using Acrobat Pro X).
    Am I supposed to go to:
    Text Field Properties > Format > Custom
    and then use Custom Format Script or Custom Keystroke Script?
    I tried both and it did not work.
    And does the Text Field have to be named "chunkSize"?
    Seems like it works. I had to move to the next formfield in order to see the effect.
    Is it possible to make it happen in real time (as you type, the comma is inserted)?

  • SQL: System locks up and runs slow after performing simple DML record insert

    SQL Version:  2008 R2
    I am having a serious problem. I ran the following code to perform a simple table record insert, which ran successfully. However, after running this code I could no longer access the related table; I couldn't run a query against it.
    I tried to delete all the records and that wouldn't work either. When attempting to run any DML statement (e.g. SELECT * FROM vPCCertificateTypes) against this table, SQL Server locks up, never returns anything, and gives no error messages. I have to stop the query manually. Now the entire SQL Server instance is running slow.
    Any help would be greatly appreciated.  The code I ran to originally insert the records is below.
    Regards,
    Bob Sutor
    CODE: 
    Begin TRAN
     INSERT INTO vPCCertificateTypes (VendorGroup, CertificateType, Description, Active, Category)
     SELECT HQCo, 'MBE', 'Minority-owned Business Enterprise', 'Y', 'Affirmative Action' 
     FROM HQCO
     Where HQCo IS NOT NULL
      AND HQCo <> 100
      AND HQCo <> 99
     IF @@ROWCOUNT <> 44 ROLLBACK ELSE COMMIT
    Bob Sutor

    The problem was an open uncommitted transaction against the table, as you suspected. I ran your code and it cleared the transaction. In the past I would intentionally leave out the COMMIT statement and then, if the transaction ran fine, I would simply highlight and run 'COMMIT' to close the transaction. Apparently that is not the correct approach. I'll use your suggested code from now on.
    Thanks for the help.
    Bob Sutor
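
    For anyone who hits the same symptom: before assuming anything worse, it is worth checking for an open transaction. Two standard SQL Server checks (the database name below is a placeholder):

    -- Number of open transactions in the current session
    SELECT @@TRANCOUNT AS open_transactions;

    -- Oldest active transaction in a given database
    DBCC OPENTRAN ('YourDatabase');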

  • Commit after every three UPDATEs in CURSOR FOR loop

    DB Version: 11g
    I know that experts in here despise the concept of COMMITting inside a loop.
    But most of the UPDATEs being fired by the code below update around 1 million records each, and the job is blowing out our UNDO tablespace.
    begin
    for rec in
          (select owner, table_name, column_name
          from dba_tab_cols where column_name like 'ABCD%' and owner = p_schema_name)
          loop
            begin
            execute immediate 'update '||rec.owner||'.'||rec.table_name||' set '||rec.column_name||' = '''||rec.owner||'''';
            end;
          end loop;
    end;

    We are not expecting an ORA-01555 error, as these are just batch updates.
    I was thinking of implementing something like
    FOR i IN 1..myarray.count
    LOOP
                             DBMS_OUTPUT.PUT_LINE('event_key at ' || i || ' is: ' || myarray(i));
                             INSERT INTO emp
                             (empid,
                             event_id,
                             dept,
                             event_key)
                             VALUES
                             (v_empid,
                             3423,
                             p_dept,
                             myarray(i));
                             if(MOD(i, p_CommitFreq) = 0)  -- when the loop counter becomes fully divisible by the commit frequency, it COMMITs
                             then
                                       commit;
                             end if;
    END LOOP;

    (Found on an OTN thread.)
    But I don't know how to access the loop counter value in a CURSOR FOR loop.

    To be fair, what is really despised is code that takes an operation that could have been performed in a single SQL statement and steps through it in the slowest possible way, committing pointlessly as it goes along (exactly like the example you found). Your original version doesn't do that - it looks more like some sort of one-off migration where you have to set every value of every column that matches some naming standard pattern to a constant. If that's the case, and if there are huge volumes involved and you can't simply add a bit more undo, then I don't see much wrong with committing after each update, especially if you track how far you've got so you can restart cleanly if it fails.
    If you really want an incrementing counter in an unnamed cursor, apart from the explicit variable others have suggested, you could add rownum to the cursor (alias it to something that isn't an Oracle keyword), although it could complicate the ORDER BY that you might be considering for the restart logic. You could have that instead of the redundant 'owner' column in your example which is always the same as the constant p_schema_name.
    Another approach would be to keep track of SQL%ROWCOUNT after each update by adding it to a variable, and commit when the total number of rows updated so far reaches, say, a million.
    Could the generated statement use a WHERE clause, or does it really have to update every row in every table it finds?
    Could there be more than one column to update per table? If so it might be worth generating one multi-column update statement per table, although it'll complicate things a bit.
    btw you don't need the inner 'begin' and 'end' keywords, and whoever supplied the MOD example you found should know that three or four spaces usually make a good indent and you don't need brackets around IF conditions.
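
    To make the SQL%ROWCOUNT suggestion concrete, here is a minimal sketch of the original block with a running total and a commit roughly every million rows. The threshold and the final COMMIT are illustrative choices, and p_schema_name is assumed, as in the original, to come from an enclosing scope:

    DECLARE
       ln_total PLS_INTEGER := 0;
    BEGIN
       FOR rec IN (SELECT owner, table_name, column_name
                     FROM dba_tab_cols
                    WHERE column_name LIKE 'ABCD%'
                      AND owner = p_schema_name)
       LOOP
          EXECUTE IMMEDIATE 'update ' || rec.owner || '.' || rec.table_name ||
                            ' set ' || rec.column_name || ' = ''' || rec.owner || '''';
          ln_total := ln_total + SQL%ROWCOUNT;  -- rows updated so far
          IF ln_total >= 1000000 THEN           -- commit once the running total reaches a million
             COMMIT;
             ln_total := 0;
          END IF;
       END LOOP;
       COMMIT;  -- pick up whatever is left in the final partial batch
    END;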

  • BPELs need to be redeployed after every stop/start of WL in 11g - Urgent

    Version: 11.1.1.3
    All,
    Whenever I start/restart the managed server, the already deployed BPEL processes need to be redeployed. This becomes a pain if I have to deploy 10-15 BPELs again and again. Why does WebLogic have this issue, and how do I overcome it?
    In BPEL 10g, once I deployed a BPEL process the first time, I did not have to redeploy it on every start/stop.
    Thanks,
    sen

    Sen
    If you go through the link you will get an answer to your question. It is about best known practices.
    You can resolve this error by just following Oracle's best known practices, so follow that link and let us know if you still face issues with the same.
    Thanks
    AJ

  • Commit after 2000 records in update statement but am not using loop

    Hi
    My Oracle version is Oracle 9i.
    I need to commit after every 2,000 records. Currently I am using the statement below, without a loop. How can I do this?
    Do I need to use rownum?
    BEGIN
       UPDATE (SELECT A.SKU, M.TO_SKU, A.TO_STORE
                 FROM RT_TEMP_IN_CARTON A,
                      CD_SKU_CONV M
                WHERE A.SKU = M.FROM_SKU
                  AND A.SKU <> M.TO_SKU
                  AND M.APPROVED_FLAG = 'Y')
          SET SKU = TO_SKU,
              TO_STORE = (SELECT DECODE (TO_STORE,
                                         5931, '931',
                                         5935, '935',
                                         5928, '928',
                                         5936, '936')
                            FROM RT_TEMP_IN_CARTON
                           WHERE TO_STORE IN ('5931', '5935', '5928', '5936'));
       COMMIT;
    END;
    Thanks for your help

    "I need to commit after every 2000 records"
    Why? Committing every n rows is not recommended....
    "Currently am using the below statement without using the loop. How to do this?"
    Use a loop? (Not recommended.)
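
    If the business requirement really cannot be negotiated away, one possibility (an untested sketch, workable only because updated rows fall out of the A.SKU <> M.TO_SKU predicate) is to batch the SKU update with rownum, along the lines of the DELETE loop shown in an earlier reply; the TO_STORE fix-up is left out here for brevity:

    BEGIN
       LOOP
          UPDATE (SELECT A.SKU, M.TO_SKU
                    FROM RT_TEMP_IN_CARTON A,
                         CD_SKU_CONV M
                   WHERE A.SKU = M.FROM_SKU
                     AND A.SKU <> M.TO_SKU
                     AND M.APPROVED_FLAG = 'Y')
             SET SKU = TO_SKU
           WHERE ROWNUM <= 2000;        -- at most 2,000 rows per pass
          EXIT WHEN SQL%ROWCOUNT = 0;   -- nothing left that matches
          COMMIT;                       -- commit each batch
       END LOOP;
       COMMIT;
    END;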

  • Commit in procedures after every 100000 records possible?

    Hi All,
    I am using an ODI procedure to insert data into a table.
    I checked that in the ODI procedure there is an option of selecting a transaction and setting the commit option to 'Commit after every 1000 records'.
    Since the record count to be inserted is 38,489,152, I would like to know whether this option is configurable.
    Can I ensure that commits are made at a logical step of 100,000 records instead of 1,000?
    Thank You.
    Prerna

    Recently added on this:
    http://dwteam.in/commit-interval-in-odi/
    Thanks
    Bhabani
    http://dwteam.in

  • Calling Delta Merge in DS after every commit

    Hi Folks,
    I am using delta extraction logic in DS to extract a large table (50 million rows) from ECC to the HANA database. The commits in the DS job have been configured for every 10,000 records. Three questions:
    1) Should I disable the delta merge in HANA for this target table prior to the initial load, and then, once the initial load is complete, manually perform the delta merge in HANA? Is that the right approach, or
    2) Should I manually perform the delta merge in the DS job to make sure the table is merged after every commit? If yes, how do I call the delta merge command in DS jobs, and how can I do it per commit?
    3) Can I invoke the delta merge in DS as part of the delta extraction logic after the initial load is completed?
    Any advice will definitely be appreciated.
    Thanks,
    -Hari

    Hi Jim
    If your big table requires a merge, AUTOMERGE will pick it up. The mergedog process checks every 60 seconds, so that should be alright for your requirement.
    If the table doesn't need to be merged, it won't be.
    Manually handling the delta merge is a fine-tuning action that is most often not required or recommendable.
    - Lars
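
    For completeness: if a manual merge ever is needed (for instance right after the initial load, as in option 1), it can be triggered with a single HANA SQL statement; the schema and table names below are placeholders:

    MERGE DELTA OF "MY_SCHEMA"."MY_BIG_TABLE";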

  • Commit after 10000 rows

    Hi
    I am inserting around 2.5 million records in a conversion project.
    Let me know how I can commit after every 10,000 rows. Also, can you tell me whether I can use bulk insert or bulk bind? I have never used them. Please help me resolve this.
    Thanks
    Madhu

    As Sundar said, per the link to Tom you are better off not committing in the loop; otherwise it can give you a "snapshot too old" error.
    Still, if you want to:
    1. Set a counter to 0: ct number := 0;
    Increment the counter in the loop: ct := ct + 1;
    IF ct = 10000 THEN
    COMMIT;
    ct := 0;
    END IF;
    2. You can also use BULK COLLECT and FORALL, and commit per batch.
    But still follow the thread as per Tom.
    typo
    Message was edited by:
    devmiral
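
    A minimal sketch of the BULK COLLECT / FORALL approach from point 2 above, committing once per 10,000-row batch; the source and target table names are made up for illustration:

    DECLARE
       CURSOR c_src IS
          SELECT * FROM source_table;                       -- hypothetical source
       TYPE t_rows IS TABLE OF source_table%ROWTYPE;
       l_rows t_rows;
    BEGIN
       OPEN c_src;
       LOOP
          FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;  -- one 10,000-row batch per fetch
          EXIT WHEN l_rows.COUNT = 0;
          FORALL i IN 1 .. l_rows.COUNT
             INSERT INTO target_table VALUES l_rows (i);     -- hypothetical target with the same shape
          COMMIT;                                            -- one commit per batch
       END LOOP;
       CLOSE c_src;
    END;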

  • Commit after n rows while update

    I have one UPDATE query:
    UPDATE security
    SET display_security_alias = CONCAT ('BAD_ALIAS_', display_security_alias)
    WHERE security_id IN (
        SELECT DISTINCT security_id
        FROM security_xref
        WHERE security_alias_type IN ('BADSYMBOL', 'ERRORSYMBOL', 'BADCUSIP', 'ERRORCUSIP'));

    I want to perform a commit after every 500 rows (due to a business requirement). How can we achieve this? Please help.

    J99 wrote:
    As mentioned by karthic_arp, not a good idea; still, if you want it you can do it this way:
    DECLARE
       CURSOR c
       IS
          SELECT DISTINCT security_id
          FROM security_xref
          WHERE security_alias_type IN ('BADSYMBOL', 'ERRORSYMBOL', 'BADCUSIP', 'ERRORCUSIP');
       -- counter to count records
       counter NUMBER(4) DEFAULT 0;
    BEGIN
       FOR rec IN c
       LOOP
          UPDATE security
          SET display_security_alias = CONCAT ('BAD_ALIAS_', display_security_alias)
          WHERE security_id = rec.security_id;
          counter := counter + SQL%ROWCOUNT;
          IF counter >= 500 THEN
             counter := 0;
             COMMIT;
          END IF;
       END LOOP;
    END;
    /
    Not tested.

    Apart from committing every N rows being a completely technically stupid idea, committing inside a cursor loop just takes it one step further to stupiddom.

  • Commit after insertion

    I use Oracle 10g Release 2. I'm trying to improve the performance of my database insertions. Every two days I run a process that inserts 10,000 rows into a table, calling a PL/SQL procedure for each row; the procedure checks the data and performs the INSERT on the table.
    Should I commit after every call to the procedure?
    Or is it better to perform one commit at the end, after calling the insertion procedure 10,000 times? So the question is: is COMMIT a cheap operation?
    Any ideas to improve performance with this operation?

    > So the question is : is "commit" a cheap operation ??
    Yes. A commit for a billion rows is as fast as a commit for a single row.
    So there is no commit overhead for a commit on a large transaction versus a commit on a small transaction. So is this the right question to ask? The commit itself does not impact performance.
    But HOW you use the commit in your code does, which is why the points raised by Daniel are important: how the commit is used. In Oracle, the "best place" is at the end of the business transaction. When the business transaction is done and dusted, commit. That is, after all, the very purpose of the COMMIT command: protecting the integrity of the data and the business transaction.
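
    Applied to the original question, that advice boils down to something like this sketch, where load_one_row stands in for the poster's unnamed checking/inserting procedure and staging_rows for wherever the 10,000 rows come from (both names are hypothetical):

    BEGIN
       FOR r IN (SELECT * FROM staging_rows)  -- hypothetical source of the 10,000 rows
       LOOP
          load_one_row (r.id, r.payload);     -- hypothetical procedure that validates and inserts
       END LOOP;
       COMMIT;  -- one commit when the whole business transaction is complete
    END;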
