Update 1 million records in batches of 1000

Hello,
I have to update 1 million records in groups of 1000. This is the best code I've got, but it takes about 1 hour and 15 minutes. (I'm also doing a commit every 5,000 rows for rollback purposes.) Does anybody have any better ideas?
thanks,
-Kevin
set time on
spool c:\testrowbatch.log
declare
  vrow_id varchar2(15);
  pcount  number := 0;
  icount  number := 0;
  vbnum   number := 1;
  CURSOR cs_01 IS
    SELECT row_id FROM eim_activity WHERE if_row_batch_num = '10';
BEGIN
  OPEN cs_01;
  LOOP
    FETCH cs_01 INTO vrow_id;
    IF cs_01%NOTFOUND THEN
      dbms_output.put_line('End of Data');
      CLOSE cs_01;
      EXIT; -- without this, the loop keeps running against a closed cursor
    END IF;
    UPDATE eim_activity SET if_row_batch_num = vbnum WHERE row_id = vrow_id;
    icount := icount + 1;
    pcount := pcount + 1;
    IF icount = 5000 THEN
      COMMIT;
      icount := 0;
      CLOSE cs_01;
      OPEN cs_01;
    END IF;
    IF pcount = 1000 THEN
      vbnum := vbnum + 1;
      pcount := 0;
    END IF;
  END LOOP;
  COMMIT;
end;

There are three problems with committing inside the loop, particularly when you are updating the table that you have created the cursor on.
First, as everyone pointed out, it makes the job slower.
Second, you run a serious risk of an ORA-01555 "snapshot too old" error, in which case you may not be able to reliably restart the procedure.
Third, it actually consumes more rollback that way than doing it in a single transaction.
The problem with Todd's approach is that you will update some records multiple times, whether you commit or not. You will set if_row_batch_num = 10 for 1000 records during the 10th iteration of the loop, and those records will then be found again by a subsequent iteration.
If you are doing this in PL/SQL just to increment the batch number for every 1000 records, then this single update solves that problem. It will be faster than your procedure, it will update each row only once per run, and it will always be an all-or-nothing update. (Note that 1000 of the rows will still end up with if_row_batch_num = 10, since 10 is one of the batch numbers being assigned.)
UPDATE eim_activity
   SET if_row_batch_num = FLOOR(rownum / 1001) + 1
 WHERE if_row_batch_num = 10;
TTFN
John

Similar Messages

  • Tune for updating 1 million records

    I want to update a main table that has 1 million records with details from a staging table that has 1 million distinct records. I have used BULK COLLECT and FORALL, but it still takes 1 minute and 10 seconds for just 1000 records.
    This is the statement used inside the procedure; please let me know what more can be done to optimize it (a set-based alternative is sketched at the end of this message):
    OPEN C_ORDERINFO;
    LOOP
      FETCH C_ORDERINFO BULK COLLECT INTO C_A123, C_B123, C_D123 LIMIT 100;
      EXIT WHEN C_A123.COUNT = 0;
      FORALL J IN C_A123.FIRST .. C_A123.LAST
        UPDATE tableA
           SET B123 = C_B123(J),
               UPDATEDATETIME = systimestamp
         WHERE D123 = C_D123(J)
           AND B123 = C_A123(J);
    END LOOP;
    CLOSE C_ORDERINFO;

    varchar2(200000000)? A SQL VARCHAR2 is limited to 4000;
    a PL/SQL VARCHAR2 is limited to 32767.
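    For reference, a single set-based statement often beats even a tuned BULK COLLECT/FORALL loop, because the whole update happens in one pass. A minimal sketch, assuming the staging rows live in a table called ORDER_STAGING whose columns A123, B123 and D123 correspond to the collections in the post (the table and column names are guesses), and that (D123, A123) is unique there:
    UPDATE tableA t
       SET (t.B123, t.UPDATEDATETIME) =
           (SELECT s.B123, systimestamp   -- new value plus audit timestamp
              FROM order_staging s
             WHERE s.D123 = t.D123
               AND s.A123 = t.B123)
     WHERE EXISTS                         -- touch only rows that have a staging match
           (SELECT 1
              FROM order_staging s
             WHERE s.D123 = t.D123
               AND s.A123 = t.B123);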

  • Fast Updates for 8 million records..!!

    Hi All,
    I was wondering, is there any fast method for updating 8 million records out of a 10 million row table?
    For example:
    I have a customer table of 10M records, with columns cust_id, cust_num and cust_name.
    I need to update 8M records of that 10M-row customer table as follows:
    update customer set cust_id=46 where cust_id=75;
    The above statement will update 8M records, and cust_id is indexed.
    But if I fire the above update statement, we'll hit rollback segment problems.
    Even if I use ROWNUM and commit after every 100K records, it's still going to take a huge amount of time. I also know CTAS would be a lot faster, but for this scenario I guess it's not possible, right?
    Any help is much appreciated...
    Thanks.

    You didn't specify what version you're on, but have you looked at dbms_redefinition?
    create table cust (cust_num number, constraint cust_pk primary key (cust_num), cust_id number, name varchar2(10));
    create index cust_id_idx on cust(cust_id);
    insert into cust values( 1, 1, 'a');
    insert into cust values( 2, 2, 'b');
    insert into cust values( 3, 1, 'c');
    insert into cust values( 4, 4, 'd');
    insert into cust values( 5, 1, 'e');
    select * From cust;
    create table cust_int (cust_num number, cust_id number, name varchar2(10));
    exec dbms_redefinition.start_redef_table(user,'cust', 'cust_int', 'cust_num cust_num, decode(cust_id, 1, 99, cust_id) cust_id, name name');
    declare
      i pls_integer;
    begin
      dbms_redefinition.copy_table_dependents(user, 'cust', 'cust_int',
        copy_indexes     => dbms_redefinition.cons_orig_params,
        copy_triggers    => true,
        copy_constraints => true,
        copy_privileges  => true,
        ignore_errors    => false,
        num_errors       => i);
      dbms_output.put_line('Errors: ' || i);
    end;
    exec dbms_redefinition.finish_redef_table(user, 'cust', 'cust_int');
    select * From cust;
    select table_name, index_name from user_indexes;
    You would probably want to run a sync_interim_table in there before the finish.
    Good luck.

  • Updating table with more than 50 million records

    Hi Team,
    Updating three columns in a table which has 60 million records is taking a lot of time. How can the performance of the query be improved? The table has to be updated based on values from different database tables.
    Thanks,
    Eshwar.

    Split the update into small batches (see the sketch below).
    When you are using multiple tables, combine them with JOINs.
    Have the corresponding indexes on the joining columns.
    Disable triggers, if any (if possible and it doesn't affect your logic - this would be the least favourable step).
    These should also help:
    http://blogs.msdn.com/b/repltalk/archive/2011/10/10/lessons-learned-updating-100-millions-rows.aspx
    http://www.sqlservergeeks.com/blogs/AhmadOsama/personal/450/sql-server-optimizing-update-queries-for-large-data-volumes
    http://stackoverflow.com/questions/7344984/update-large-number-of-rows-sql-server-2005
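    A minimal sketch of the batching idea in T-SQL - the table, columns, and the "not yet updated" filter are all hypothetical and have to come from your own schema and logic:
    -- Update in chunks of 10,000 rows, each pass its own small transaction,
    -- which keeps the log and lock footprint manageable.
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) t
           SET t.Col1 = s.Col1,
               t.Col2 = s.Col2,
               t.Col3 = s.Col3
          FROM dbo.BigTable AS t
          JOIN dbo.SourceTable AS s
            ON s.KeyCol = t.KeyCol
         WHERE t.Col1 IS NULL;        -- hypothetical marker for rows still to do

        IF @@ROWCOUNT = 0 BREAK;      -- stop when a pass touches no rows
    END;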
    Thanks,
    Jay

  • How to update a table that has millions of records

    Hi,
    Let's consider the basic EMP table, and let's assume it has around 20 million records. We need an update statement; a normal UPDATE may hang the system or take a lot of time.
    The basic or normal update statement goes like this, and I suspect it may not work:
    update emp set hiredate = sysdate where comm is null and hiredate is null;
    The basic statement may not work. Suggestions needed.
    Regards,
    Vinesh

    sri wrote:
    I heard Bulk Collect will resolve these types of issues, and I am really poor at Bulk Collect concepts.
    Exactly what type of issue are you concerned with? The business requirements here are pretty important - what problem is the UPDATE causing, specifically, that you are trying to work around?
    so looking for a solution to the problem using Bulk Collect.
    Without knowing the problem, it's very tough to suggest a solution. If you process data in batches using BULK COLLECT, your UPDATE statement will take longer to run and will consume more resources on the database. If the problem you are trying to solve is that your UPDATE is not fast enough, this is a poor approach.
    On the other hand, if you process data in batches and do interim commits, you can probably hold locks on individual rows for a shorter amount of time. That would only be a concern, though, if some other process is trying to update the same rows at the same time that you're updating them, which is pretty rare. And breaking your update into multiple transactions introduces a whole bunch of complexity. You now have to write a bunch of code to ensure that your process is restartable should the update fail midway through, leaving some number of updates committed and some rolled back. You have to have a very detailed understanding of the data and data consistency to ensure that breaking up the transaction isn't going to negatively impact any process, report, etc. Doing it correctly is a pile of work, and then it's something that is constantly at risk of creating problems in the future when requirements change.
    In the vast majority of cases, you're better off issuing a simple SQL statement during a time when the system isn't particularly busy.
    Justin

  • LC Output - problem with a big batch of 1000+ records

    Hi,
    I am creating a prototype with LC Output. The calling application will produce a big XML file - a batch with more than 1000 records.
    Example XML data:
    <?xml version="1.0" encoding="utf-8"?>
    <BATCH>
    <formDataRecords>
    <LC_KOMA002>
    <customerNameAddress>Jensens Biludlejning</customerNameAddress>
    <customerNameAddress>Vestergade 21c</customerNameAddress>
    <customerNameAddress>7100 Vejle</customerNameAddress>
    </LC_KOMA002>
    <LC_KOMA002>
    <customerNameAddress>Pete Petersen</customerNameAddress>
    <customerNameAddress>Vestergade 10</customerNameAddress>
    <customerNameAddress>7100 Copenhagen</customerNameAddress>
    </LC_KOMA002>
    </formDataRecords>
    </BATCH>
    In a setVariables step I extract the XML data to be merged with the form template; I extract all nodes under <formDataRecords> into a new XML variable.
    I use this XML as input data to a GeneratePDFOutput (LC Output) step. The GeneratePDFOutput is set up with 'multiple streams', 'record level' = 2 and 'record name' = <LC_KOMA002>. I use the 'Output Location URI' to save the PDFs with incremental filenames xxx1,2,3,4.pdf.
    The workflow works fine with 500 records - it produces 500 single-page PDF files.
    Problem:
    When running with 1000 records or more, my process does not produce any PDFs. The watched folder picks up my XML data file, but after 3-4 minutes it returns this error in the server ERROR log:
    2010-08-12 11:52:00,484 WARN [com.arjuna.ats.arjuna.logging.arjLoggerI18N] [com.arjuna.ats.arjuna.coordinator.BasicAction_58] - Abort of action id a10ed08:ce4d:4c596be4:15f4fee invoked while multiple threads active within it.
    It seems like GeneratePDFOutput runs out of memory in some way when it handles more than 500 records.
    I hope you have ideas for configuration/settings on the Adobe LC server or LC Output so that it can handle large batch jobs.
    I look forward to all your clever solutions - I really want to show the customer Adobe LC can do this one :)
    /Thomas Groenbaek, Jyske Bank

    Thanks Neal,
    You are always a life saver... and wow, your second post - it takes the tough questions to get you out :)
    Great info. I have now successfully produced batches with 1000 and 2000 records. My GeneratePDFOutput creates 1000/2000 single-page PDFs plus one big PDF of 1000/2000 pages.
    Just want to share with everybody what settings I adjusted on my Adobe LC server:
    1) In C:\Adobe\Adobe LiveCycle ES2\jboss\bin\run.bat
    set -XX:PermSize=512m -XX:MaxPermSize=512m -Xms2048m -Xmx2048m
    2) In C:\Adobe\Adobe LiveCycle ES2\jboss\server\lc_turnkey\conf\jboss-service.xml
    set <attribute name="TransactionTimeout">900</attribute> (default 300)
    3) In Home > Services > Applications and Services > Service Management,
    find the service 'outputservice1.1' and click the link to open its settings;
    change 'Transaction Time out' from the default 180 to 900
    When I produced a batch with 2000 records it created all 2000 PDFs, but I got an error right after it finished:
    2010-08-13 15:16:43,076 ERROR [org.jboss.ejb.plugins.LogInterceptor] TransactionRolledbackLocalException in method: public abstract java.lang.Object com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionCMTAdapterLocal.doRequiresNew(com.adobe.idp.dsc.transaction.TransactionDefinition,com.adobe.idp.dsc.transaction.TransactionCallback) throws com.adobe.idp.dsc.DSCException, causedBy:
    java.lang.IllegalStateException: [com.arjuna.ats.internal.jta.transaction.arjunacore.inactive] [com.arjuna.ats.internal.jta.transaction.arjunacore.inactive] The transaction is not active!
    Does anybody know what caused this error and how to solve it?
    /Thomas Groenbaek

  • How to update more than 5 million records without error message ORA-00257:

    Hi ,
    I need to update some columns in my table, which contains about 5 million records.
    I've already tried this:
    Update AAA_CDR
    Set RoamFload = Null;
    but the problem is that I got the error message ("ORA-00257: archiver error. Connect internal only, until freed.") and the update consumed about 6 hours with no results.
    Then I ran the command (ALTER SYSTEM SET db_recovery_file_dest_size = 50G) and the problem was solved.
    But I need to update about 15 columns of this table to be null. What should I do to overcome this message and update the table in reasonable time?
    Please Help Me ,

    The best way would be to allocate sufficient disk space for your archive log destination. Your database is not sized properly. The NOLOGGING option will not do much for you, because it only applies to direct load operations, when the data inserted into a nologging table is selected from another table. UPDATE will be logged, regardless of the NOLOGGING status. Here is the quote from the manual:
    <quote>
    LOGGING|NOLOGGING
    LOGGING|NOLOGGING specifies that subsequent Direct Loader (SQL*Loader) and direct-load
    INSERT operations against a nonpartitioned index, a range or hash index partition, or
    all partitions or subpartitions of a composite-partitioned index will be logged (LOGGING)
    or not logged (NOLOGGING) in the redo log file.
    In NOLOGGING mode, data is modified with minimal logging (to mark new extents invalid
    and to record dictionary changes). When applied during media recovery, the extent
    invalidation records mark a range of blocks as logically corrupt, because the redo data
    is not logged. Therefore, if you cannot afford to lose this index, you must take a backup
    after the operation in NOLOGGING mode.
    If the database is run in ARCHIVELOG mode, media recovery from a backup taken before an
    operation in LOGGING mode will re-create the index. However, media recovery from a backup
    taken before an operation in NOLOGGING mode will not re-create the index.
    An index segment can have logging attributes different from those of the base table and
    different from those of other index segments for the same base table.
    </quote>
    If you are really desperate, you can try the following undocumented/unsupported command:
    ALTER DATABASE ARCHIVELOG COMPRESS ENABLE;
    That will cause the database to compress your archive logs and consume less space. This command is not documented or supported, not even in version 11.2.0.3, and it causes the database to start spewing ORA-00600 in version 10g. DO NOT USE IN A PRODUCTION ENVIRONMENT!!!!
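    Since NOLOGGING does apply to direct-path operations, one workaround worth testing (a sketch only - the kept columns are hypothetical, because the real table structure isn't shown) is to rebuild the table with a direct-path CTAS that selects NULL for the 15 columns, then swap it in:
    -- A NOLOGGING CTAS generates minimal redo, so the archiver pressure
    -- largely disappears. Take a backup afterwards, as the manual quote
    -- above warns, and recreate indexes/constraints/grants before the swap.
    CREATE TABLE aaa_cdr_new NOLOGGING AS
    SELECT call_id,                            -- hypothetical columns kept as-is
           call_date,
           CAST(NULL AS NUMBER) AS roamfload   -- repeat for the other 14 columns
      FROM aaa_cdr;
    DROP TABLE aaa_cdr;
    RENAME aaa_cdr_new TO aaa_cdr;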

  • Reg update of a 10 million record table from 1 million record table

    I have 2 tables
    Table 1: 10 million records
    21 indexes, including:
    1) acct_id, acct_seq_no -- index_1
    2) c1, c2, c3 -- index_2
    Table 2: 1.5 million records
    1 index on (acct_id, acct_seq_no) -- idx_1
    The common keys are acct_id and acct_seq_no.
    I'm updating table 1 from table 2.
    I need to use index_1 from table 1 and idx_1 from table 2.
    How can I make my query use only these particular indexes?
    My query is as follows:
    UPDATE csban_&1 csb
       SET (duns_no,
            hdqtrs_duns_no,
            us_ultmt_duns_no,
            sci_id,
            blg_cl_id,
            cl_id) =
           (SELECT duns_no,
                   hdqtrs_duns_no,
                   us_ultmt_duns_no,
                   sci_id,
                   blg_cl_id,
                   cl_id
              FROM csban_abi_temp temp
             WHERE csb.acct_id = temp.acct_id
               AND csb.acct_seq_no = temp.acct_seq_no
               AND rownum < 2)
     WHERE EXISTS
           (SELECT 1
              FROM csban_abi_temp temp1
             WHERE csb.acct_id = temp1.acct_id
               AND csb.acct_seq_no = temp1.acct_seq_no);
    Do I need to put an index hint after this?
    UPDATE csban_&1 csb --???????? /*+ index (csb.index_1) */
    Thanks in advance

    Thanks a lot david and rob for sharing the info.
    Please find the details
    SQL> EXPLAIN PLAN FOR
    UPDATE csban_2 csb
       SET (duns_no,
            hdqtrs_duns_no,
            us_ultmt_duns_no,
            sci_id,
            blg_cl_id,
            cl_id) =
           (SELECT duns_no,
                   hdqtrs_duns_no,
                   us_ultmt_duns_no,
                   sci_id,
                   blg_cl_id,
                   cl_id
              FROM csban_abi_temp temp
             WHERE csb.acct_id = temp.acct_id
               AND csb.acct_seq_no = temp.acct_seq_no
               AND rownum < 2)
     WHERE EXISTS
           (SELECT 1
              FROM csban_abi_temp temp1
             WHERE csb.acct_id = temp1.acct_id
               AND csb.acct_seq_no = temp1.acct_seq_no);
    Explained.
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 584770029
    | Id | Operation                      | Name            | Rows | Bytes | Cost |    TQ | IN-OUT | PQ Distrib |
    |  0 | UPDATE STATEMENT               |                 | 530K |   19M | 8213 |       |        |            |
    |  1 |  UPDATE                        | CSBAN_2         |      |       |      |       |        |            |
    |* 2 |   FILTER                       |                 |      |       |      |       |        |            |
    |  3 |    PX COORDINATOR              |                 |      |       |      |       |        |            |
    |  4 |     PX SEND QC (RANDOM)        | :TQ10000        | 530K |   19M | 8213 | Q1,00 | P->S   | QC (RAND)  |
    |  5 |      PX BLOCK ITERATOR         |                 | 530K |   19M | 8213 | Q1,00 | PCWC   |            |
    |  6 |       TABLE ACCESS FULL        | CSBAN_2         | 530K |   19M | 8213 | Q1,00 | PCWP   |            |
    |* 7 |    INDEX RANGE SCAN            | IDX_CSB_ABI_TMP |    1 |    10 |    3 |       |        |            |
    |* 8 |   COUNT STOPKEY                |                 |      |       |      |       |        |            |
    |  9 |    TABLE ACCESS BY INDEX ROWID | CSBAN_ABI_TEMP  |    1 |    38 |    4 |       |        |            |
    |*10 |     INDEX RANGE SCAN           | IDX_CSB_ABI_TMP |    1 |       |    3 |       |        |            |
    Predicate Information (identified by operation id):
    2 - filter( EXISTS (SELECT 0 FROM "CSBAN_ABI_TEMP" "TEMP1"
        WHERE "TEMP1"."ACCT_SEQ_NO"=:B1 AND "TEMP1"."ACCT_ID"=:B2))
    7 - access("TEMP1"."ACCT_ID"=:B1 AND "TEMP1"."ACCT_SEQ_NO"=:B2)
    8 - filter(ROWNUM<2)
    10 - access("TEMP"."ACCT_ID"=:B1 AND "TEMP"."ACCT_SEQ_NO"=:B2)
    Note
    - cpu costing is off (consider enabling it)
    30 rows selected.
    The query completed and took 1 hr 47 min.
    SQL> Updating CSBAN from TEMP table
    old 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_&1 csb
    new 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_1 csb
    1611807 rows updated.
    Elapsed: 01:47:16.40
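    One note on the transcript above: an INDEX hint takes a table alias and an index name, not a column list, so /*+ INDEX(acct_id,acct_seq_no) */ is silently ignored as an invalid hint. A sketch of the intended hints, using the index names from the post:
    -- Hint each table by its alias and index name; the subqueries get
    -- their own hints, because hints do not propagate into subqueries.
    UPDATE /*+ INDEX(csb index_1) */ csban_1 csb
       SET (duns_no, hdqtrs_duns_no, us_ultmt_duns_no,
            sci_id, blg_cl_id, cl_id) =
           (SELECT /*+ INDEX(temp idx_1) */
                   duns_no, hdqtrs_duns_no, us_ultmt_duns_no,
                   sci_id, blg_cl_id, cl_id
              FROM csban_abi_temp temp
             WHERE csb.acct_id = temp.acct_id
               AND csb.acct_seq_no = temp.acct_seq_no
               AND rownum < 2)
     WHERE EXISTS
           (SELECT /*+ INDEX(temp1 idx_1) */ 1
              FROM csban_abi_temp temp1
             WHERE csb.acct_id = temp1.acct_id
               AND csb.acct_seq_no = temp1.acct_seq_no);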

  • Update Query is Performing Full Table Scan of 1 Million Records

    Hello everybody, I have one update query:
    UPDATE tablea SET
              task_status = 12
              WHERE tablea.link_id >0
              AND tablea.task_status <> 0
              AND tablea.event_class='eventexception'
              AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
              AND ltask.task_status = 0)
    When I do an explain plan, it shows the following result:
    Execution Plan
    0 UPDATE STATEMENT Optimizer=CHOOSE
    1 0 UPDATE OF 'tablea'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'tablea'
    4 2 TABLE ACCESS (BY INDEX ROWID) OF 'tablea'
    5 4 INDEX (UNIQUE SCAN) OF 'PK_tablea' (UNIQUE)
    Now tablea may have more than 10 million records. This would take a lot of time even if it only has to update 2 records. Please suggest some optimal solutions.
    Regards
    Mahesh

    I see your point, but my logic says I have an index on all the columns used in the where clauses, so I see no reason for Oracle to do a full table scan.
    UPDATE tablea SET
    task_status = 12
    WHERE tablea.link_id >0
    AND tablea.task_status <> 0
    AND tablea.event_class='eventexception'
    AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
    AND ltask.task_status = 0)
    I am clearly stating where task_status <> 0 and event_class = something and tablea.link_id > 0,
    so the ideal plan for the optimizer should be:
    Step 1) Select all the rowids matching these conditions.
    Step 2) For each such rowid, get all the rows where task_status = 0 and task_id = the link_id of the row selected above.
    Step 3) While looping over each rowid, if the condition matches for the row obtained from ltask in step 2, update that record.
    I want to get this kind of plan. Does anyone know how to make Oracle produce it?
    Is it true that a FULL TABLE SCAN is harmful, or at least no better than an index scan?

  • Split the file in the batches of 1000 records

    Hi,
    I need to read a fixed-length file from an SAP system and then send the file, which contains a huge number of records (e.g. 100,000), to the target system in batches of 1000 records each. The target system, which is a third party, has a constraint of accepting only 1000 records per file.
    What is the best approach we may use here to split the file into batches of 1000 records and then send the files to the target?
    Kindly suggest.
    Thank You.
    Regards,
    Indu Khurana.

    The adapter will take care of this and split the input file (message) into smaller message batches (set to the number of records defined by you, say 1000). Each of these smaller batches will be processed as per your design and configuration. For example, a straight-through sender and receiver will have the sender splitting the input file into batches of 1000 messages, which will be delivered to your receiver. You could also add a timestamp or the like to stop the file being overwritten at the destination.
    Regards,
    Mike

  • Best way to update 8 out of 10 million records

    Hi friends,
    I want to update 8 million records in a table which has 10 million records, and the table has a BLOB column with 600GB worth of data (the BLOB itself is 550GB). What could be the best strategy, given that I am not updating the BLOB column?
    Usually with non-BLOB data I have tried the "CREATE TABLE new_table AS SELECT <do the update 'here'> FROM old_table;" method.
    How should I approach this one?

    @Mark D Powell
    To give you some background, my client faced this problem a week ago. This is part of a daily cleanup activity.
    Right now I don't have access to the system due to a security issue; I could only take a few AWR reports and stats when the access window was open. So basically, next time I get access I want to close the issue once and for all.
    Coming to your questions:
    So what is wrong with just issuing an update to update all 8 million rows?
    In a previous run of a single update, with a full table scan in the plan and no parallel degree, it started reading from UNDO (current_obj#=-1 on the "db file sequential read" wait event) and errored out after 24 hours with a tablespace-full error on the tablespace which contains the BLOB data (a separate tablespace).
    To add to the problem, the redo log files were sized too small, only about 50MB.
    The wait events (from DBA_HIST_ACTIVE_SESS_HISTORY) for the problematic SQL id show:
    - log file switch (checkpoint incomplete) and log file switch completion comprising 62% of the waits
    - CPU 29%
    - db file sequential read 6%
    - direct path read 2%, and others contributing a little
    30% of the "db file sequential read" samples had current_obj#=-1 and p1 showing an undo file id.
    Is there any concurrent DML against this table? If not, parallel DML would be an option, though it may not really be needed.
    I think there was in the previous run, and I have asked for it to be avoided in the next run.
    How large are the base table rows?
    AVG_ROW_LEN is 227.
    How many indexes are affected by the update, if any?
    The last column of the primary key is the only column to be updated (I mean, used in the SET clause of the update).
    Do you expect the update to cause any row migration?
    Yes, I think so, because the only column which is going to be updated is the same column on which the table is partitioned.
    Now if there is a lot of concurrent DML on the table, you probably want to use PL/SQL so you can loop through the data issuing a commit every N rows, so as to not lock other concurrent sessions out of the table for too long a period of time. This may well depend on whether you can write a driving cursor that can be restarted in the event of interruption and would skip over rows that have already been updated. If not, you might want to use a driving table to control the processing (see the sketch below).
    Right now, to avoid the UNDO issue, I have suggested the PL/SQL approach and have asked for the redo size to be increased to at least 10 times more.
    My big question after seeing the wait-event profile for the session is:
    Which was the main issue here, the redo log size or the reading from UNDO which hit the update statement? The buffer gets had shot up to 600 million, yet there are only 220K blocks in the table.
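    A minimal sketch of the restartable, batched approach discussed above, assuming the updated column itself can distinguish processed from unprocessed rows (big_table and part_col are hypothetical names; add OR part_col IS NULL if the column is nullable):
    BEGIN
      LOOP
        -- Each pass updates up to 10,000 not-yet-processed rows and commits.
        -- A failed run can simply be restarted: already-updated rows no
        -- longer match the WHERE clause, so no driving table is needed.
        UPDATE big_table
           SET part_col = 'NEW'
         WHERE part_col <> 'NEW'
           AND ROWNUM <= 10000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
      COMMIT;
    END;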

  • How to update millions of records in an Oracle database?

    How do you update millions of records in an Oracle database?
    The table has constraints and indexes. How do I do this mass update? A normal update takes several hours.

    LostWorld wrote:
    How to update millions of records in an Oracle database? The table has constraints and indexes. How do I do this mass update? A normal update takes several hours.
    Please refer to Tom Kyte's answer to your question:
    [How to Update millions or records in a table|http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6407993912330]
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
    [Step by Step install Oracle on Linux and Automate the installation using Shell Script |http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]

  • Update performance on a 38 million record table

    Hi all,
    I'm trying to create a script to update a table that has around 38 million records. The table isn't partitioned, and I just have to update one CHAR(1 byte) field and set it to 'N'.
    The database is 10g R2 running on Unix Tru64.
    The script I created loops over a cursor that bulk collects 200,000 records per pass and does a FORALL to update the table by ROWID.
    The problem is that in the performance tests this method took about 20 minutes to update 1 million rows, so it should take about 13 hours to update the whole table.
    My question is: is there any way to improve the performance?
    The Script:
    DECLARE
      CURSOR c1 IS
        SELECT ROWID
          FROM RTG.TCLIENTE_RTG;
      TYPE rowidtab IS TABLE OF ROWID;
      d_rowid rowidtab;
      v_char  CHAR(1) := 'N';
    BEGIN
      OPEN c1;
      LOOP
        FETCH c1 BULK COLLECT INTO d_rowid LIMIT 200000;
        EXIT WHEN d_rowid.COUNT = 0; -- guard: an empty final fetch would make the FORALL fail
        FORALL i IN d_rowid.FIRST .. d_rowid.LAST
          UPDATE RTG.TCLIENTE_RTG
             SET CLI_VALID_IND = v_char
           WHERE ROWID = d_rowid(i);
        COMMIT;
        EXIT WHEN c1%NOTFOUND;
      END LOOP;
      CLOSE c1;
    END;
    Kind Regards,
    Fabio

    I'm just curious... Is this a new varchar2(1) column that has been added to the table? If so, will the value of this column remain 'N' into the future for the majority of the rows in the table?
    Has this column been introduced specifically to support one of the business functions in your application, i.e. will it not be used everywhere the table is currently in use?
    If your answers to the above questions contain many yeses, then why did you choose to add a column that needs to be initialized to 'N' for all existing rows?
    Why not add a new single-column table for this requirement, the single column being the pk-column(s) of the existing table? The meaning would be: if a pk is present in this new table, then the "CLI_VALID_IND" for this client is 'yes'; if it is not present, it is 'no'.
    That way you only have to add the new table and do nothing more. Of course, the SQL statements supporting the business logic of this new function will have to use, and maybe join, this new table. But is that really a huge disadvantage? (A sketch of this design follows.)
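    A minimal sketch of that design, assuming the existing table's primary key is a single NUMBER column called CLI_ID (a hypothetical name):
    CREATE TABLE tcliente_rtg_valid (
      cli_id NUMBER PRIMARY KEY
             REFERENCES rtg.tcliente_rtg (cli_id)
    );
    -- "Is this client valid?" becomes an existence test instead of a flag column:
    SELECT c.cli_id,
           CASE WHEN v.cli_id IS NULL THEN 'N' ELSE 'Y' END AS cli_valid_ind
      FROM rtg.tcliente_rtg c
      LEFT JOIN tcliente_rtg_valid v
        ON v.cli_id = c.cli_id;
    The 38-million-row initialization then disappears entirely: an empty side table already means everyone is 'N'.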

  • How can I update a particular column in a 7 million record table, when there are many conditions involved?

    I am designing a table and loading its data from different tables with joins. I have a Status column for which there are about 16 different statuses from different tables; for each case there is a condition, and if it is satisfied, that particular status should show in the Status column, so I need to write the query as 16 different cases.
    Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data gets to the table quickly? The data comes mostly from big tables of about 7 million records, and if I write the logic as a CASE it will scan the table for each case, about 16 times. How can I do this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which take records from the 7 million row table, filtering records of year 2013. This is taking more than an hour to run. I am posting the part of the code which is running slow, mainly the Status column (a JOIN-based rewrite of one bucket is sketched after the query).
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN
    and z.Hosp_Ind = 'P'))
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physicians****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please suggest on this
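    One pattern that usually helps with this shape of query (a sketch only - the driving table behind alias z is not shown in the post, so main_table stands in for it): turn each correlated IN/SELECT pair into a single LEFT JOIN against a de-duplicated derived table, so every lookup table is read once instead of once per CASE branch and per row. For example, the SubSystemName lookup becomes:
    SELECT z.SYSTEMNAME,
           pt.[Subsystem Name] AS SubSystemName    -- NULL when there is no match
      FROM main_table AS z                         -- hypothetical driving table
      LEFT JOIN (SELECT DISTINCT TIN, [Subsystem Name]
                   FROM dbo.SQS_Provider_Tracking
                  WHERE [SubSystem Name] <> 'NULL') AS pt
        ON pt.TIN = z.TAX_ID;
    The status buckets can get the same treatment: join each lookup once, then let the CASE simply test the joined columns for NULL.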

  • Updating and deleting records in an Access DB

    I'm trying to make multiple updates and deletes in an Access DB and it doesn't work.
    My program executes lots of update and delete statements when saving, but I only commit at the end, when I know all the statements finished OK.
    I can't make it a batch update, since I need to retrieve the auto-number that Access issues me on some of my tables.
    I know it's kinda fuzzy, but my program is SO big already, and I can't really put the code in here - it wouldn't do much good.
    What I will do is write what my log shows (I issue a log output whenever there's an executeQuery/executeUpdate and before commit/rollback):
    DBG : 29/04/03 : UPDATE CASES_T SET CASE_office_case_id = 1 ,CASE_year = 2 ,CASE_total_debt = 10009 ,CASE_is_limited = true ,CASE_monthly_payment = 400 ,CASE_hotzlap_case_id = ' - - - ' ,CASE_status = 26 WHERE CASE_tech_id = 5
    DBG : 29/04/03 : SELECT count(*) from CASES_OWEES_T where CAOW_Tech_Id = 23
    DBG : 29/04/03 : UPDATE CASES_OWEES_T SET CAOW_case_tech_id = 5 ,CAOW_id_number = '046259990' ,CAOW_first_name = '���' ,CAOW_last_name = '������' ,CAOW_work_place = 'jjjjjjjjjjjjj' ,CAOW_address = '1jjjjjjjjjjjj' ,CAOW_telephone = '09-8099999' ,CAOW_pelephone = '000-000000' ,CAOW_hotzlap_case_id = ' - - - ' ,CAOW_is_valid_address = false WHERE CAOW_TECH_ID = 23
    DBG : 29/04/03 : SELECT count(*) from CASES_OWEES_T where CAOW_Tech_Id = 24
    DBG : 29/04/03 : UPDATE CASES_OWEES_T SET CAOW_case_tech_id = 5 ,CAOW_id_number = ' ' ,CAOW_first_name = '�����' ,CAOW_last_name = '���' ,CAOW_work_place = '����' ,CAOW_address = '��?' ,CAOW_telephone = ' - ' ,CAOW_pelephone = ' - ' ,CAOW_hotzlap_case_id = ' - - - ' ,CAOW_is_valid_address = true WHERE CAOW_TECH_ID = 24
    DBG : 29/04/03 : SELECT count(*) from CASES_INNER_CASES_T where CICA_Tech_Id = 10
    DBG : 29/04/03 : UPDATE CASES_INNER_CASES_T SET CICA_case_tech_id = 5 ,CICA_debt = 5000897 ,CICA_winner_name = '���' ,CICA_lawyer_tech_id = '6' ,CICA_hotzlap_case_id= '02-22122-22-2' WHERE CICA_tech_id= 10
    DBG : 29/04/03 : SELECT count(*) from CASES_INNER_CASES_T where CICA_Tech_Id = 11
    DBG : 29/04/03 : UPDATE CASES_INNER_CASES_T SET CICA_case_tech_id = 5 ,CICA_debt = 20008 ,CICA_winner_name = '������' ,CICA_lawyer_tech_id = '2' ,CICA_hotzlap_case_id= '02-22222-22-2' WHERE CICA_tech_id= 11
    DBG : 29/04/03 : SELECT count(*) from CASES_INNER_CASES_T where CICA_Tech_Id = 14
    DBG : 29/04/03 : UPDATE CASES_INNER_CASES_T SET CICA_case_tech_id = 5 ,CICA_debt = 129 ,CICA_winner_name = '' ,CICA_lawyer_tech_id = '4' ,CICA_hotzlap_case_id= '02-22222-22-2' WHERE CICA_tech_id= 14
    DBG : 29/04/03 : SELECT count(*) from CASES_CUSTOMER_CARDS_T where CUCA_Tech_Id = 23
    DBG : 29/04/03 : UPDATE CASES_CUSTOMER_CARDS_T SET CUCA_case_tech_id = 5 , CUCA_sum = 324 , CUCA_fee_plus_maam = -38.0 , CUCA_for_division = 285.0 , CUCA_voucher = '3222 ' , CUCA_date = '2003-04-24' , CUCA_in_or_out = true WHERE CUCA_tech_id = 23
    DBG : 29/04/03 : SELECT count(*) from CASES_CUSTOMER_CARDS_T where CUCA_Tech_Id = 22
    DBG : 29/04/03 : UPDATE CASES_CUSTOMER_CARDS_T SET CUCA_case_tech_id = 5 , CUCA_sum = 324 , CUCA_fee_plus_maam = -38.0 , CUCA_for_division = 285.0 , CUCA_voucher = '2222 ' , CUCA_date = '2003-04-23' , CUCA_in_or_out = true WHERE CUCA_tech_id = 22
    DBG : 29/04/03 : SELECT count(*) from CASES_CUSTOMER_CARDS_T where CUCA_Tech_Id = 21
    DBG : 29/04/03 : UPDATE CASES_CUSTOMER_CARDS_T SET CUCA_case_tech_id = 5 , CUCA_sum = 100 , CUCA_fee_plus_maam = -11.0 , CUCA_for_division = 0.0 , CUCA_voucher = '2222 ' , CUCA_date = '2003-04-23' , CUCA_in_or_out = true WHERE CUCA_tech_id = 21
    DBG : 29/04/03 : SELECT count(*) from CASES_CUSTOMER_CARDS_T where CUCA_Tech_Id = 20
    DBG : 29/04/03 : UPDATE CASES_CUSTOMER_CARDS_T SET CUCA_case_tech_id = 5 , CUCA_sum = 1000 , CUCA_fee_plus_maam = -118.0 , CUCA_for_division = 882.0 , CUCA_voucher = '1111 ' , CUCA_date = '2003-04-23' , CUCA_in_or_out = true WHERE CUCA_tech_id = 20
    DBG : 29/04/03 : SELECT count(*) from CASES_INVESTIGATIONS_T where CAIN_Tech_Id = 16
    DBG : 29/04/03 : UPDATE CASES_INVESTIGATIONS_T SET CAIN_case_tech_id = 5 ,CAIN_text = '' ,CAIN_was_declared_limited = false ,CAIN_date = '2003-03-01' ,CAIN_rulling_effective_date = '2003-02-01' ,CAIN_payment_amount = 400 WHERE CAIN_TECH_ID = 16
    DBG : 29/04/03 : SELECT count(*) from CASES_INVESTIGATIONS_T where CAIN_Tech_Id = 12
    DBG : 29/04/03 : UPDATE CASES_INVESTIGATIONS_T SET CAIN_case_tech_id = 5 ,CAIN_text = '' ,CAIN_was_declared_limited = true ,CAIN_date = '1970-01-01' ,CAIN_rulling_effective_date = '1905-03-06' ,CAIN_payment_amount = 90 WHERE CAIN_TECH_ID = 12
    DBG : 29/04/03 : SELECT count(*) from CASES_INVESTIGATIONS_T where CAIN_Tech_Id = 10
    DBG : 29/04/03 : UPDATE CASES_INVESTIGATIONS_T SET CAIN_case_tech_id = 5 ,CAIN_text = '' ,CAIN_was_declared_limited = false ,CAIN_date = '1970-01-01' ,CAIN_rulling_effective_date = '1990-01-30' ,CAIN_payment_amount = 89 WHERE CAIN_TECH_ID = 10
    DBG : 29/04/03 : DELETE FROM CASES_INVESTIGATIONS_T where CAIN_tech_id = 9
    DBG : 29/04/03 : DELETE FROM CASES_INVESTIGATIONS_T where CAIN_tech_id = 9
    DBG : 29/04/03 : DELETE FROM CASES_INVESTIGATIONS_T where CAIN_tech_id = 14
    DBG : 29/04/03 : DELETE FROM CASES_INVESTIGATIONS_T where CAIN_tech_id = 14
    DBG : 29/04/03 : DELETE FROM CASES_INVESTIGATIONS_T where CAIN_tech_id = 13
    DBG : 29/04/03 : DELETE FROM CASES_INVESTIGATIONS_T where CAIN_tech_id = 13
    DBG : 29/04/03 : DELETE FROM CASES_INVESTIGATIONS_T where CAIN_tech_id = 11
    DBG : 29/04/03 : DELETE FROM CASES_INVESTIGATIONS_T where CAIN_tech_id = 11
    DBG : 29/04/03 : SELECT count(*) from CASES_PAYMENTS_T where CAPY_Tech_Id = 71
    DBG : 29/04/03 : UPDATE CASES_PAYMENTS_T SET CAPY_case_tech_id = 5 ,CAPY_sum = 400 ,CAPY_exception_text = '' ,CAPY_is_exception = true ,CAPY_voucher = '' ,CAPY_date = '2003-04-01' ,CAPY_is_paid_in_hotzlap = false WHERE CAPY_TECH_ID = 71
    DBG : 29/04/03 : SELECT count(*) from CASES_PAYMENTS_T where CAPY_Tech_Id = 70
    DBG : 29/04/03 : UPDATE CASES_PAYMENTS_T SET CAPY_case_tech_id = 5 ,CAPY_sum = 400 ,CAPY_exception_text = '' ,CAPY_is_exception = true ,CAPY_voucher = '' ,CAPY_date = '2003-03-01' ,CAPY_is_paid_in_hotzlap = false WHERE CAPY_TECH_ID = 70
    DBG : 29/04/03 : SELECT count(*) from CASES_PAYMENTS_T where CAPY_Tech_Id = 69
    DBG : 29/04/03 : UPDATE CASES_PAYMENTS_T SET CAPY_case_tech_id = 5 ,CAPY_sum = 400 ,CAPY_exception_text = '' ,CAPY_is_exception = true ,CAPY_voucher = '' ,CAPY_date = '2003-03-01' ,CAPY_is_paid_in_hotzlap = false WHERE CAPY_TECH_ID = 69
    DEV : 29/04/03 : commit

    Sure - I don't get any exception. The data just doesn't show in the DB (no updating/deletion of records).
    As I wrote before: when the program commits the changes and does the next select command, it seems as if the data was changed/deleted, but if I check the DB with Access, or if I restart the program, I see that the data didn't change.
    Access/JDBC-ODBC has a problem where modifications are not 'committed' when the statement completes. Instead one must do one of the following:
    1. Explicit commit.
    2. Simple select after the statement.
    3. Close the connection.
    Presumably you are doing 1.
    Since other than this it does work, it suggests one of the following:
    1. Something is wrong with your environment. For instance, you are looking at the wrong database, or not refreshing, or something else like that.
    2. You are using something besides a simple connection - like opening it with 'scroll insensitive'.
    3. The complexity is causing it to lose an error message. This can be tested by running each statement individually and verifying that none produces an error.
    4. Maybe you found a bug. You can turn on ODBC tracing via the ODBC control-panel applet and see if digging through all of the detail provides any clues (you can also do this with 3 above).
