Fast Updates for 8 million records..!!

Hi All,
I was wondering, is there any fast method for updating 8 million records in a 10 million row table?
For example:
I have a customer table of 10M records, with columns cust_id, cust_num and cust_name.
I need to update 8M of those 10M records as follows:
update customer set cust_id=46 where cust_id=75;
The above statement will update 8M records, and cust_id is indexed.
But if I fire the above update statement, we'll run into rollback segment problems.
Even if I use ROWNUM and commit after every 100K records, it's still going to take a huge amount of time. I also know CTAS would be a lot faster, but for this scenario I guess it's not possible... right?
Any help is much appreciated...
Thanks.
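For reference, here is roughly what the CTAS route would look like (a sketch only; it assumes you can take an outage on the table, and you would still have to recreate the index, constraints, grants and triggers afterwards, which is probably why it doesn't fit your scenario):

create table customer_new nologging as
select decode(cust_id, 75, 46, cust_id) cust_id,  -- rewrite 75 to 46 during the copy
       cust_num,
       cust_name
from   customer;
-- then: drop table customer; rename customer_new to customer;
-- recreate the cust_id index and any grants/constraints.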

You didn't specify what version you're on, but have you looked at dbms_redefinition?
create table cust (cust_num number, constraint cust_pk primary key (cust_num), cust_id number, name varchar2(10));
create index cust_id_idx on cust(cust_id);
insert into cust values( 1, 1, 'a');
insert into cust values( 2, 2, 'b');
insert into cust values( 3, 1, 'c');
insert into cust values( 4, 4, 'd');
insert into cust values( 5, 1, 'e');
select * From cust;
create table cust_int (cust_num number, cust_id number, name varchar2(10));
exec dbms_redefinition.start_redef_table(user,'cust', 'cust_int', 'cust_num cust_num, decode(cust_id, 1, 99, cust_id) cust_id, name name');
declare
i pls_integer;
begin
dbms_redefinition.copy_table_dependents( user, 'cust', 'cust_int', copy_indexes=>dbms_redefinition.cons_orig_params, copy_triggers=>true, copy_constraints=>true, copy_privileges=>true, ignore_errors=>false, num_errors=>i);
dbms_output.put_line('Errors: ' || i);
end;
exec dbms_redefinition.finish_redef_table(user, 'cust', 'cust_int');
select * From cust;
select table_name, index_name from user_indexes;
You would probably want to run a sync_interim_table in there before the finish.
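For example, a sketch of that sync step (same names as above):
exec dbms_redefinition.sync_interim_table(user, 'cust', 'cust_int');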
Good luck.

Similar Messages

  • Update 1 million records in batches of 1000

    Hello,
    I have to update 1 million records in groups of 1000. This is the best code I've got but it takes about 1hr and 15 minutes. (I'm also doing a commit every 5k for rollback purposes.) Does anybody have any better ideas?
    thanks,
    -Kevin
    set time on
    spool c:\testrowbatch.log
declare
  vrow_id varchar2(15);
  pcount number := 0;
  icount number := 0;
  vbnum number := 1;
  CURSOR cs_01 is
    select row_id from eim_activity where if_row_batch_num = '10';
BEGIN
  OPEN cs_01;
  LOOP
    FETCH cs_01 INTO vrow_id;
    IF cs_01%NOTFOUND THEN
      dbms_output.put_line('End of Data');
      CLOSE cs_01;
      EXIT; -- without this, the next FETCH hits a closed cursor
    END IF;
    Update eim_activity set if_row_batch_num = vbnum where row_id = vrow_id;
    icount := icount + 1;
    pcount := pcount + 1;
    IF icount = 5000 THEN
      commit;
      icount := 0;
      CLOSE cs_01;
      OPEN cs_01; -- reopening restarts the fetch from the first row
    END IF;
    IF pcount = 1000 THEN
      vbnum := vbnum + 1;
      pcount := 0;
    END IF;
  END LOOP;
  commit;
end;

There are three problems with committing inside the loop, particularly when you are updating the table that you have created the cursor on.
First, as everyone pointed out, it makes it slower.
Second, you run a serious risk of getting an ORA-01555 "snapshot too old" error, in which case you may not be able to reliably restart the procedure.
Third, it actually takes more rollback doing it that way than doing it in a single transaction.
    The problem with Todd's approach is that you will update some records multiple times, whether you commit or not. You will set if_row_batch_num = 10 for 1000 records during the 10th iteration of the loop. These records will then be found in a subsequent iteration.
    If you are doing this in PL/SQL just to increment the variable for every 1000 records, then this single update will solve that problem. It will be faster than your procedure, it will only update each row once per run, and it will always be an all or nothing update. You will still set if_row_batch_num = 10 for 1000 existing records.
    UPDATE eim_activity
    SET if_row_batch_num = FLOOR(rownum/1001) +1
WHERE if_row_batch_num = 10;
TTFN
John

  • Tune for updating 1 million record

I want to update a main table that has 1 million records with details from a staging table that has 1 million distinct records. I have used BULK COLLECT and FORALL, but it still takes 1 min and 10 sec for just 1000 records.
This is the statement used inside the procedure; please let me know what more can be done to optimize it:
OPEN C_ORDERINFO;
LOOP
    FETCH C_ORDERINFO BULK COLLECT INTO C_A123, C_B123, C_D123 LIMIT 100;
    EXIT WHEN C_A123.COUNT = 0;
    FORALL J IN C_A123.FIRST..C_A123.LAST
        UPDATE tableA
           SET B123 = C_B123(J),
               UPDATEDATETIME = systimestamp
         WHERE D123 = C_D123(J) AND
               B123 = C_A123(J);
END LOOP;
CLOSE C_ORDERINFO;
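If the logic permits, a single set-based statement is usually worth trying before tuning LIMIT sizes. A sketch of the same update in one pass; the staging table name order_staging is an assumption, since the C_ORDERINFO cursor definition isn't shown:

UPDATE tableA t
   SET (B123, UPDATEDATETIME) =
       (SELECT s.B123, systimestamp
          FROM order_staging s   -- hypothetical staging table behind C_ORDERINFO
         WHERE s.D123 = t.D123
           AND s.A123 = t.B123)
 WHERE EXISTS
       (SELECT 1
          FROM order_staging s
         WHERE s.D123 = t.D123
           AND s.A123 = t.B123);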


  • Cross Form updates for searched record

Hi All
How do I update form fields using a searched value from another table, in APEX 4.1.1 using the PL/SQL default gateway for the web server?
I have a form (tabular form) that records student data, with a drop-down to select the 'student registration No' that comes from a different table.
When inserting or updating a student record, we first select the student registration No from the drop-down to see if he is an existing student.
If he is an already registered student, then the relevant student name and other details should be shown in the form, and we should be able to enter a few other details as well.
Please, can anyone help me with this?
Thanks

  • Delta update for deleted records

    Hi,
I use delta update for my data. As the destination I use an ODS object, and this ODS has two keys: key1 and key2. At the step of preparing data for extraction, key2 is lost (we can't determine it); only key1 is known.
I'm looking for a way to update the ODS with deleted records ('D' update mode) when these records contain only the key1 value (key2 is empty). I need to delete all records with that key1; key2 is not important for this update.
Is it possible to set key2 to some BW-specific value which will be accepted as equal to all old key2 values?
Or are other standard ways known for this situation?
    Thanks in advance,
    Fiodar.

    I would use the update rules to read the active data table of the ODS. There you can read all the key2 values for a given key1.
If you get one row with key1 filled and key2 = '*' (for example), you could split it into multiple records, one per key2. A sample coding might look like:
LOOP AT DATA_PACKAGE INTO l_d_data_package.
  IF l_d_data_package-key2 = '*'.
    DELETE DATA_PACKAGE.  " inside LOOP AT this deletes the current line
    SELECT * FROM /BIC/AMY_ODS00 INTO l_d_my_ods
      WHERE key1 = l_d_data_package-key1.
      l_d_data_package-key2 = l_d_my_ods-key2.
      APPEND l_d_data_package TO DATA_PACKAGE.
    ENDSELECT.
  ENDIF.
ENDLOOP.
    Best regards
       Dirk

  • Create batches for million record table

I have taken this out for some reasons...
    Thanks for your support guys...
    Edited by: Srichan on Jun 17, 2009 2:19 PM

    Well, OK then:
    SQL> select * from t
      2  /
           COL
             1
            20
            56
            78
            80
            82
            88
            91
            95
            99
           100
           110
    12 rows selected.
    Elapsed: 00:00:00.03
    SQL> select min(col) stkey
      2  ,      max(col) endkey
      3  ,      count(*) recs
      4  ,      grpid
      5  from ( select col
      6         ,      ntile(5) over (order by col) grpid
      7         from t
      8       )
      9  group by grpid
    10  order by stkey;
         STKEY     ENDKEY       RECS      GRPID
             1         56          3          1
            78         82          3          2
            88         91          2          3
            95         99          2          4
           100        110          2          5
    5 rows selected.
    Elapsed: 00:00:00.01
    SQL> select min(col) stkey
      2   ,      max(col) endkey
      3   ,      count(*) recs
      4   ,      grpid
      5   from ( select col
      6  ,      ntile(3) over (order by col) grpid
      7          from t
      8        )
      9   group by grpid
    10   order by stkey;
         STKEY     ENDKEY       RECS      GRPID
             1         78          4          1
            80         91          4          2
            95        110          4          3
    3 rows selected.
    Elapsed: 00:00:00.06
    edit
    All credits to Salim here, I like the 'rownum-thing' to get the sets into 5-5-2, very nice and just had to run it ;) :
    SQL> select min(col) stkey
      2  ,      max(col) endkey
      3  ,      count(*) recs
      4  ,      rn
      5  from ( select col
      6        ,       trunc((rownum -1)/5)+1 rn
      7         from t
      8       )
      9  group by rn
    10  order by stkey;
         STKEY     ENDKEY       RECS         RN
             1         80          5          1
            82         99          5          2
       100        110          2          3
Edited by: hoek on Jun 16, 2009 6:24 PM

  • Xsd validation in Database for 1 million record

    Hello All,
I would like to know the pros and cons of doing XSD validation of a million records in an 11g database and, if possible, the processing time taken to do XSD validation for a million records.
What would be a good datatype for loading this XML file of a million records: should it be BLOB/CLOB or varchar2(200000000)?
    Thanks.

varchar2(200000000)?
SQL VARCHAR2 is limited to 4000; PL/SQL VARCHAR2 is limited to 32767.
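For the storage question, in 11g the usual route is XMLType rather than a giant VARCHAR2. A sketch of registering the XSD and validating staged documents; the table, directory and schema URL names here are made up for illustration:

-- register the XSD once (hypothetical names throughout)
begin
  dbms_xmlschema.registerSchema(
    schemaurl => 'http://example.com/rec.xsd',
    schemadoc => xmltype(bfilename('XML_DIR', 'rec.xsd'), nls_charset_id('AL32UTF8')),
    local     => true);
end;
/
-- stage the documents as XMLType and count the invalid ones
create table xml_stage (id number, doc xmltype);
select count(*)
from   xml_stage t
where  t.doc.isSchemaValid('http://example.com/rec.xsd') = 0;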

  • Issue regarding 0MATERIAL_ATTR  - Attribute not Updated for Some Materials

    Hi All,
As per the requirement, I have enhanced the 0MATERIAL_ATTR DS with one custom field, MRP Controller.
After enhancing 0MATERIAL_ATTR, I made the necessary changes in related objects like the 0MATERIAL InfoObject and the related transformation for this new attribute.
After making the changes, when I ran a Repair Full request for 0MATERIAL_ATTR, the newly added attribute was not updated for all materials; the other attributes were updated correctly, but MRP Controller was not updated for some records.
I am confused about why the attribute (MRP Controller) is updated for some materials while for others it is not.
I checked in the PSA: data for this added attribute arrives in the PSA but is not updated in the data target (0MATERIAL).
I also activated the master data by running an attribute change run, but the problem still persists.
    Please help.
    Regards,
    Divyesh Khambhati

    Hi Venkatesh,
I have written code in CMOD for MRP Controller.
The data is coming through fine up to the PSA, but it is not updated in the data target (0MATERIAL).
To take one example: Material XYZ has two entries in the PSA table, one updated during the delta load and a recent one which came through the Repair Full request.
The newly added attribute MRP Controller is available in the recent Repair Full request, but it is blank in the prior delta update.
I am confused because the MRP Controller attribute is updated for some materials, but for others it remains blank.
    please help.
    regards,
    Divyesh Khambhati

  • How to update a table that has  Million Records

    Hi,
Let's consider the basic EMP table, and let's assume it has around 20 million records. We need an update statement, and a normal UPDATE statement may hang the system or take a lot of time.
The basic or normal update statement goes like this, and I suspect it may not work:
update emp set hiredate = sysdate where comm is null and hiredate is null;
The basic statement may not work. Suggestions needed.
    Regards,
    Vinesh

sri wrote:
I heard Bulk Collect will resolve these types of issues, and I am really poor at Bulk Collect concepts.
Exactly what type of issue are you concerned with? The business requirements here are pretty important: what problem is the UPDATE causing, specifically, that you are trying to work around?
so looking for a solution to the problem using Bulk Collect.
Without knowing the problem, it's very tough to suggest a solution. If you process data in batches using BULK COLLECT, your UPDATE statement will take longer to run and will consume more resources on the database. If the problem you are trying to solve is that your UPDATE is not fast enough, this is a poor approach.
    On the other hand, if you process data in batches, and do interim commits, you can probably hold locks on individual rows for a shorter amount of time. That would only be a concern, though, if you have some other process that is trying to update the same rows that you are updating at the same time that you're updating them, which is pretty rare. And breaking your update into multiple transactions introduces a whole bunch of complexity. You now have to write a bunch of code to ensure that your process is restartable should the update fail mid-way through leaving some number of updates committed and some number rolled back. You have to have a very detailed understanding of the data and data consistency to ensure that breaking up the transaction isn't going to negatively impact any process, report, etc. To do it correctly is a pile of work and then it's something that is constantly at risk of creating problems in the future when requirements change.
    In the vast majority of cases, you're better off issuing a simple SQL statement during a time when the system isn't particularly busy.
    Justin

  • How can I update a particular column in a 7 million record table, where it has many conditions to go.

I am designing a table and loading its data from different tables using joins. I have a Status column with about 16 different statuses drawn from different tables; for each case I have a condition, and when the condition is satisfied, that particular status should show in the Status column. Written that way, the query needs 16 different CASE branches.
Now, my question is: what is the best way to write these cases so they satisfy all the conditions and also load the data quickly? The source data comes mostly from big tables of about 7 million records, and if the logic is written as 16 separate cases, it will scan the table about 16 times. How can I make this faster? Can anyone help me out?

Here is the code I have written to get the data from temp tables, which take records from the 7-million-row table filtered to year 2013. This takes more than an hour to run. I am posting the part of the code that runs slow, mainly the Status column part.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN
and z.Hosp_Ind = 'P'))
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
--****Rejecting Physicians****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please suggest on this
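One common rewrite for this pattern is to LEFT JOIN each lookup table once and let the CASE test the joined columns, so the base table is scanned a single time instead of once per bucket. A sketch covering just two of the sixteen buckets; z's base table isn't shown in the post, so dbo.BaseData is a placeholder, and watch for row duplication if a TIN appears more than once in a lookup table:

SELECT z.TAX_ID,
       CASE
           WHEN z.SRC_PAR_CD IN ('E','O','S','W') THEN 'Nonpar Waiver'
           WHEN pr.Bucket IS NOT NULL             THEN pr.Bucket
           ELSE 'Market Review/Preparing to Mail'
       END AS [Status Column]
FROM dbo.BaseData AS z                        -- placeholder for z's real table
LEFT JOIN dbo.SQS_NonPar_PR_LS_TINs AS pr
       ON pr.TAX_ID = z.TAX_ID
      AND pr.Bucket <> 'Nonpar';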

  • Help needed in PL/SQL for updating the column for multiple records

    Hi,
I am new to PL/SQL and need some help. What is the most efficient way to update a field in a table, given that I may need to update thousands of records? I have a column groupid that can have multiple records tied to it. All the records attached to a groupid also have a priority field.
How can I update the priority field value for all the groupids in an efficient way? Here is some sample data:
    GroupId     Priority
    1            1
    1            2
    1            3
    1            4
    1            5
    2            1
    2            2
    2            3
3            1
Here I have three groups: 1, 2 and 3. If a group contains only one record, the priority stays the same, e.g. groupid=3 above. If a group contains more than one record, e.g. groupid=1 & 2, I want to rearrange the priority fields. For example, for groupid=1: if I change the priority of 2 to 5 (making it the last), I want to rearrange the remaining records' priorities, i.e. since I have 5 rows for groupid=1, 5 becomes 4, 4 becomes 3, 3 becomes 2, and 1 stays the same.
The same way, if I want to change priority 1 to 3 for groupid=2, then 2 needs to become 1 and 3 to become 2, etc.
    Any help is appreciated.
    Thanks

    Hi,
    You don't need PL/SQL to do this (though you can put the following in PL/SQL if you want to):
    UPDATE     table_x
    SET     priority = CASE
                   WHEN  groupid     = 1
                   AND   priority = 2
                        THEN  5
WHEN  groupid     = 1
                   AND   priority     BETWEEN 3 AND 5
                        THEN  priority - 1
                   WHEN  groupid = 2
                   THEN
                        CASE
WHEN  priority = 1
                             THEN  3
                             ELSE  priority - 1
                        END
                 END
    WHERE     groupId          IN (1, 2)
    AND     (     priority     BETWEEN 2 AND 5
         OR     groupid          = 2
     );
There are lots of different techniques that can reduce your coding: for example, the nested CASE statement used for groupid=2 above.
    You could do several smaller UPDATEs (for example, one just for groupid=1). Execution will be slower, but coding and testing will be faster.
    You could make the "magic numbers" 2 (for groupid=1) and 1 (for groupid=2) variables, even outside of PL/SQL.
    If you need more help, post the information that Satyaki requested.
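If the real requirement is "move one row, then renumber each group 1..n with no gaps", an analytic MERGE can do the whole renumbering in one pass once the moved row has been given its new slot; a sketch against the same table_x (the ORDER BY is where you encode the move):

MERGE INTO table_x t
USING (SELECT rowid AS rid,
              ROW_NUMBER() OVER (PARTITION BY groupid ORDER BY priority) AS rn
         FROM table_x) v
ON (t.rowid = v.rid)
WHEN MATCHED THEN UPDATE SET t.priority = v.rn;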

  • Updating table with more than 50 million records

    Hi Team,
Updating three columns in a table which has 60 million records is taking a lot of time. How can the performance of the query be improved? The table has to be updated based on values from different database tables.
    Thanks,
    Eshwar.

Split them into small batches (see the sketch below).
When you are using multiple tables, access them with JOINs.
Have the corresponding indexes on the joining columns.
Disable triggers, if any (if possible and it doesn't affect your logic; this would be the least favourable step).
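A sketch of the batching idea, with assumed table, key and column names (tune the TOP value to what your log can absorb):

WHILE 1 = 1
BEGIN
    UPDATE TOP (100000) t
       SET t.col1 = s.col1,
           t.col2 = s.col2,
           t.col3 = s.col3
      FROM dbo.BigTable AS t
      JOIN dbo.SourceTable AS s
        ON s.id = t.id
     WHERE t.col1 <> s.col1;        -- only rows that still need the change
    IF @@ROWCOUNT = 0 BREAK;        -- stop when nothing is left to update
END;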
These should also help:
    http://blogs.msdn.com/b/repltalk/archive/2011/10/10/lessons-learned-updating-100-millions-rows.aspx
    http://www.sqlservergeeks.com/blogs/AhmadOsama/personal/450/sql-server-optimizing-update-queries-for-large-data-volumes
    http://stackoverflow.com/questions/7344984/update-large-number-of-rows-sql-server-2005
    Thanks,
    Jay

  • How to update more than 5 million records without error message ORA-00257:

    Hi ,
I need to update some columns in my table, which contains about 5 million records.
I've already tried this:
Update AAA_CDR
Set RoamFload = Null;
but the problem is I got the error message "ORA-00257: archiver error. Connect internal only, until freed." and the update consumed about 6 hours with no results.
Then I ran the command (alter system set db_recovery_file_dest_size=50G) and the problem was solved.
But I need to set about 15 columns of this table to null. What should I do to avoid this message and update the table in a reasonable time?
Please help me.
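One thing that helps regardless of the archiver issue: set all 15 columns in a single UPDATE rather than in 15 separate passes, so each row is touched (and logged) only once. A sketch with placeholder column names:

Update AAA_CDR
   Set RoamFload = Null,
       Col_02    = Null,   -- placeholder names; list all 15 columns here
       Col_03    = Null;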

The best way would be to allocate sufficient disk space for your archive log destination; your database is not sized properly. The NOLOGGING option will not do much for you, because it only applies to direct load operations where the data inserted into a nologging table is selected from another table. An UPDATE will be logged regardless of the NOLOGGING status. Here is the quote from the manual:
    <quote>
    LOGGING|NOLOGGING
    LOGGING|NOLOGGING specifies that subsequent Direct Loader (SQL*Loader) and direct-load
    INSERT operations against a nonpartitioned index, a range or hash index partition, or
    all partitions or subpartitions of a composite-partitioned index will be logged (LOGGING)
    or not logged (NOLOGGING) in the redo log file.
    In NOLOGGING mode, data is modified with minimal logging (to mark new extents invalid
    and to record dictionary changes). When applied during media recovery, the extent
    invalidation records mark a range of blocks as logically corrupt, because the redo data
    is not logged. Therefore, if you cannot afford to lose this index, you must take a backup
    after the operation in NOLOGGING mode.
    If the database is run in ARCHIVELOG mode, media recovery from a backup taken before an
    operation in LOGGING mode will re-create the index. However, media recovery from a backup
    taken before an operation in NOLOGGING mode will not re-create the index.
    An index segment can have logging attributes different from those of the base table and
    different from those of other index segments for the same base table.
    </quote>
    If you are really desperate, you can try the following undocumented/unsupported command:
    ALTER DATABASE ARCHIVELOG COMPRESS ENABLE;
That will cause the database to compress your archive logs and consume less space. This command is not documented or supported, not even in version 11.2.0.3, and it causes the database to start spewing ORA-00600 in version 10g. DO NOT USE IN A PRODUCTION ENVIRONMENT!!!!

  • Reg update of a 10 million record table from 1 million record table

I have 2 tables.
Table 1: 10 million records.
21 indexes --> 1) acct_id, acct_seq_no -- index_1
2) c1, c2, c3 -- index_2
Table 2: 1.5 million records.
1 index on (acct_id, acct_seq_no) -- idx_1
The common keys are acct_id and acct_seq_no.
I'm updating table 1 from table 2.
I need to use index_1 from table 1 and idx_1 from table 2.
How can I make my query use only these particular indexes?
My query is as follows:
UPDATE csban_&1 csb
SET (
    duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
) =
( SELECT duns_no,
         hdqtrs_duns_no,
         us_ultmt_duns_no,
         sci_id,
         blg_cl_id,
         cl_id
  FROM csban_abi_temp temp
  WHERE csb.acct_id = temp.acct_id
  AND csb.acct_seq_no = temp.acct_seq_no
  AND rownum < 2
)
WHERE EXISTS
( SELECT 1
  FROM csban_abi_temp temp1
  WHERE csb.acct_id = temp1.acct_id
  AND csb.acct_seq_no = temp1.acct_seq_no
);
Do I need to put an index hint after this?
    UPDATE csban_&1 csb --???????? /*+ indedx (csb.index_1) */
    Thanks in advance
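On the hint syntax: the INDEX hint goes right after the UPDATE or SELECT keyword and takes the table alias followed by the index name, not alias.index. A sketch with the names from the post (Oracle can still ignore a hint if the plan isn't viable):

UPDATE /*+ INDEX(csb index_1) */ csban_1 csb
SET (duns_no, hdqtrs_duns_no, us_ultmt_duns_no, sci_id, blg_cl_id, cl_id) =
    (SELECT /*+ INDEX(temp idx_1) */
            duns_no, hdqtrs_duns_no, us_ultmt_duns_no, sci_id, blg_cl_id, cl_id
       FROM csban_abi_temp temp
      WHERE csb.acct_id = temp.acct_id
        AND csb.acct_seq_no = temp.acct_seq_no
        AND rownum < 2)
WHERE EXISTS
    (SELECT 1
       FROM csban_abi_temp temp1
      WHERE csb.acct_id = temp1.acct_id
        AND csb.acct_seq_no = temp1.acct_seq_no);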

Thanks a lot David and Rob for sharing the info.
    Please find the details
    SQL> EXPLAIN PLAN FOR
    UPDATE csban_2 csb
    SET (
    duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    ) =
    ( SELECT duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    FROM csban_abi_temp temp
    WHERE csb.acct_id = temp.acct_id
    AND csb.acct_seq_no = temp.acct_seq_no
    AND rownum < 2
    WHERE
    EXISTS
    SELECT 1
    FROM csban_abi_temp temp1
    WHERE 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
    21 22 23 24 25 26 27 28 29
    Explained.
    SQL>
    SQL>
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 584770029
    | Id | Operation | Name | Rows | Bytes | Cost
    | TQ |IN-OUT| PQ Distrib |
    PLAN_TABLE_OUTPUT
    | 0 | UPDATE STATEMENT | | 530K| 19M| 8213
    | | | |
    | 1 | UPDATE | CSBAN_2 | | |
    | | | |
    |* 2 | FILTER | | | |
    | | | |
    | 3 | PX COORDINATOR | | | |
    | | | |
    PLAN_TABLE_OUTPUT
    | 4 | PX SEND QC (RANDOM) | :TQ10000 | 530K| 19M| 8213
    | Q1,00 | P->S | QC (RAND) |
    | 5 | PX BLOCK ITERATOR | | 530K| 19M| 8213
    | Q1,00 | PCWC | |
    | 6 | TABLE ACCESS FULL | CSBAN_2 | 530K| 19M| 8213
    | Q1,00 | PCWP | |
    |* 7 | INDEX RANGE SCAN | IDX_CSB_ABI_TMP | 1 | 10 | 3
    PLAN_TABLE_OUTPUT
    | | | |
    |* 8 | COUNT STOPKEY | | | |
    | | | |
    | 9 | TABLE ACCESS BY INDEX ROWID| CSBAN_ABI_TEMP | 1 | 38 | 4
    | | | |
    |* 10 | INDEX RANGE SCAN | IDX_CSB_ABI_TMP | 1 | | 3
    | | | |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    2 - filter( EXISTS (SELECT 0 FROM "CSBAN_ABI_TEMP" "TEMP1" WHERE "TEMP1"."ACC
    T_SEQ_NO"=:B1 AND
    "TEMP1"."ACCT_ID"=:B2))
    PLAN_TABLE_OUTPUT
    7 - access("TEMP1"."ACCT_ID"=:B1 AND "TEMP1"."ACCT_SEQ_NO"=:B2)
    8 - filter(ROWNUM<2)
    10 - access("TEMP"."ACCT_ID"=:B1 AND "TEMP"."ACCT_SEQ_NO"=:B2)
    Note
    - cpu costing is off (consider enabling it)
    30 rows selected.
The query had completed, and it took 1 hr 47 min.
    SQL> SQL> SQL> Updating CSBAN from TEMP table
    old 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_&1 csb
    new 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_1 csb
    1611807 rows updated.
    Elapsed: 01:47:16.40

  • Update Query is Performing Full table Scan of 1 Millions Records

Hello everybody, I have one update query:
    UPDATE tablea SET
              task_status = 12
              WHERE tablea.link_id >0
              AND tablea.task_status <> 0
              AND tablea.event_class='eventexception'
              AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
              AND ltask.task_status = 0)
    When I do explain plan it shows following result...
    Execution Plan
    0 UPDATE STATEMENT Optimizer=CHOOSE
    1 0 UPDATE OF 'tablea'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'tablea'
    4 2 TABLE ACCESS (BY INDEX ROWID) OF 'tablea'
    5 4 INDEX (UNIQUE SCAN) OF 'PK_tablea' (UNIQUE)
Now tablea may have more than 10 MILLION records.... This would take a hell of a long time even if it only has to update 2 records.... Please suggest some optimal solutions.
    Regards
    Mahesh

I see your point, but my logic says: I have indexes on all the columns used in the WHERE clauses, so I find no reason for Oracle to do a full table scan.
    UPDATE tablea SET
    task_status = 12
    WHERE tablea.link_id >0
    AND tablea.task_status <> 0
    AND tablea.event_class='eventexception'
    AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
    AND ltask.task_status = 0)
I am clearly stating where task_status <> 0 and event_class = something and tablea.link_id > 0,
so the ideal case for the optimizer should be:
Step 1) Select all the rowids having this condition.
Step 2) For each rowid, get all the rows where task_status = 0 and where task_id = link_id of the row selected above.
Step 3) While looping over each rowid, if the condition holds for the row obtained from ltask in step 2, update that record.
I want to have this kind of plan. Does anyone know how to make Oracle produce this kind of plan?
It is true that a FULL TABLE SCAN is harmful here, or at least no better than an index scan.
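For what it's worth, one way to invite an index-driven plan here is a composite index that leads with the equality predicate (a sketch; it assumes no such composite index already exists):

create index idx_tablea_evt_status
    on tablea (event_class, task_status, link_id);
-- event_class is the only equality predicate, so it leads the index;
-- the task_status <> 0 and link_id > 0 filters can then be applied within the index.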
