Update performance on a 38 million records table

Hi all,
I'm trying to create a script to update a table that has around 38 million records. The table isn't partitioned, and I just have to update one CHAR(1) field and set it to 'N'.
The database is 10g R2 running on Tru64 UNIX.
The script I created has a LOOP over a CURSOR that bulk collects 200,000 records per pass and does a FORALL to update the table by ROWID.
The problem is that in performance tests this method took about 20 minutes to update 1 million rows, so it would take about 13 hours to update the whole table.
My question is: is there any way to improve the performance?
The Script:
DECLARE
  CURSOR c1 IS
    SELECT ROWID
    FROM   RTG.TCLIENTE_RTG;

  TYPE rowidtab IS TABLE OF ROWID;
  d_rowid rowidtab;
  v_char  CHAR(1) := 'N';
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 BULK COLLECT INTO d_rowid LIMIT 200000;

    -- Guard against an empty final fetch: FORALL over a NULL
    -- FIRST..LAST range raises an error.
    EXIT WHEN d_rowid.COUNT = 0;

    FORALL i IN d_rowid.FIRST .. d_rowid.LAST
      UPDATE RTG.TCLIENTE_RTG
      SET    CLI_VALID_IND = v_char
      WHERE  ROWID = d_rowid(i);

    -- Note: committing inside the loop while the cursor is still open
    -- risks ORA-01555 (snapshot too old) on a run this long.
    COMMIT;

    EXIT WHEN c1%NOTFOUND;
  END LOOP;
  CLOSE c1;
END;
Kind Regards,
Fabio

I'm just curious... Is this a new VARCHAR2(1) column that has been added to that table? If so, will the value of this column remain 'N' for the majority of the rows in that table in the future?
Has this column been introduced specifically to support one of the business functions in your application, i.e. will it not be used everywhere the table is currently in use?
If your answers to the above questions are mostly yes, then why did you choose to add a column for this that needs to be initialized to 'N' for all existing rows?
Why not add a new single-column table for this requirement, the single column being the PK column(s) of the existing table? The meaning: if a PK is present in this new table, then the "CLI_VALID_IND" for that client is 'yes'; if a PK is not present, then the "CLI_VALID_IND" for that client is 'no'.
That way you only have to add the new table, and do nothing more. Of course, the SQL statements supporting the business logic of this new business function will have to use, and maybe join, this new table. But is that really a huge disadvantage?
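A minimal sketch of that design, assuming for illustration a single-column numeric primary key named cli_id (hypothetical; substitute the table's real PK columns):

CREATE TABLE rtg.tcliente_rtg_valid (
  cli_id NUMBER NOT NULL,
  CONSTRAINT tcliente_rtg_valid_pk PRIMARY KEY (cli_id)
);

-- "Is this client valid?" becomes an EXISTS probe instead of a column read:
SELECT c.cli_id
FROM   rtg.tcliente_rtg c
WHERE  EXISTS (SELECT 1
               FROM   rtg.tcliente_rtg_valid v
               WHERE  v.cli_id = c.cli_id);

With this design the 38-million-row initialization disappears entirely: absence from the new table already means 'N'.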

Similar Messages

  • Selecting Records from 125 million record table to insert into smaller table

    Oracle 11g
    I have a large table of 125 million records - t3_universe. This table never gets updated or altered once loaded, but holds data that we receive from a lead company.
    I need to select records from this large table that fit certain demographic criteria and insert those into a smaller table - T3_Leads - that will be updated with regard to when the lead is mailed and with other relevant information.
    My question is: what is the best (fastest) approach to select records from this 125 million record table and insert them into the smaller table? I have tried a variety of things - views, materialized views, direct insert into the smaller table... I think I am probably missing other approaches.
    My current attempt has been to create a view using the query that selects the records, as shown below, then use a second query that inserts into T3_Leads from this view V_Market. This is very slow. Can I just use an INSERT INTO T3_Leads with this query? It did not seem to work with the WITH clause. My index on the large table is t3_universe_composite and includes zip_code, address_key, household_key.
    CREATE VIEW V_Market as
    WITH got_pairs AS
    (    SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */  l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state, l.household_key, l.hh_type as l_hh_type, l.address_key, l.narrowband_income, l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type as filler_data, l.p1_seq_no, l.p2_seq_no
         ,      ROW_NUMBER () OVER ( PARTITION BY  l.address_key
                                     ORDER BY      l.hh_verification_date  DESC
                                   ) AS r_num
         FROM   t3_universe  e
         JOIN   t3_universe  l  ON
                l.address_key  = e.address_key
                AND l.zip_code = e.zip_code
                AND l.p1_gender != e.p1_gender
                AND l.household_key != e.household_key
                AND l.hh_verification_date >= e.hh_verification_date
    )
    SELECT *
    FROM   got_pairs
    WHERE  l_hh_type != 1 AND l_hh_type != 2 AND filler_data != 1 AND filler_data != 2 AND zip_code IN (select * from M_mansfield_02048) AND p1_exact_age BETWEEN 25 AND 70 AND narrowband_income >= '8' AND r_num = 1;
    Then
    INSERT INTO T3_leads(zip, zip4, firstname, lastname, address, city, state, household_key, hh_type, address_key, income, relationship_status, gender, age, person_key, filler_data, p1_seq_no, p2_seq_no)
    select zip_code, zip_plus_4, p1_givenname, surname, address, city, state, household_key, l_hh_type, address_key, narrowband_income, p1_ms, p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
    from V_Market;

    I had no trouble creating the view exactly as you posted it.  However, be careful here:
    and zip_code in (select * from M_mansfield_02048)
    You should name the column explicitly rather than select *.  (do you really have separate tables for different zip codes?)
    About the performance, it's hard to tell because you haven't posted anything we can use, like explain plans or traces, but simply encapsulating your query in a view is not likely to make it any faster.
    Depending on the size of the subset of rows you're selecting, the /*+ INDEX_FFS */ hint may be doing you more harm than good.
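    As a side note on the WITH-clause question: Oracle does accept a subquery factoring clause inside the query of an INSERT ... SELECT, so the intermediate view is not strictly needed. A minimal sketch, with the column lists shortened for readability:

    INSERT INTO t3_leads (zip, zip4, firstname, lastname)
    WITH got_pairs AS (
      SELECT l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname,
             ROW_NUMBER() OVER (PARTITION BY l.address_key
                                ORDER BY l.hh_verification_date DESC) AS r_num
      FROM   t3_universe l
    )
    SELECT zip_code, zip_plus_4, p1_givenname, surname
    FROM   got_pairs
    WHERE  r_num = 1;

    The WITH simply has to be part of the subquery, after the INSERT INTO ... (column list).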

  • Help with querying a 200 million record table

    Hi ,
    I need to query a 200 million record table which is partitioned by monthly activity.
    My problem is that I need to see how many activities occurred for one account in a given time frame.
    If there are 200 partitions, I need to go into all the partitions, get the activities of the account in each partition, and at the end give the total number of activities.
    Fortunately, only one activity is expected per account in a partition, and it may be present or absent.
    If this table had 100 records, I would use this:
    select account_no, count(*)
    from Acct_actvy
    group by account_no;

    Must stress that it is critical that you not write code (SQL or PL/SQL) that uses hardcoded partition names to find data.
    That approach is very risky, prone to runtime errors, difficult to maintain, and it does not scale. It is not worth it.
    From the developer's side, there should be total ignorance of the fact that a table is partitioned. A developer must treat a partitioned table no differently than any other table.
    To give you an idea - this is a copy-and-paste from a SQL*Plus session doing what you want to do, against a partitioned table at least 3x bigger than yours. It covers about a 12-month period. There's a partition per day - and empty daily partitions for the next 2 years. The SQL aggregation is monthly. I selected a random network address to illustrate.
    SQL> select count(*) from x25_calls;
      COUNT(*)
    619491919
    Elapsed: 00:00:19.68
    SQL>
    SQL>  select TRUNC(callendtime,'MM') AS MONTH, sourcenetworkaddress, count(*) from x25_calls where sourcenetworkaddress = '3103165962'
      2  group by TRUNC(callendtime,'MM'), sourcenetworkaddress;
    MONTH               SOURCENETWORKADDRESS   COUNT(*)
    2005/09/01 00:00:00 3103165962                 3599
    2005/10/01 00:00:00 3103165962                 1184
    2005/12/01 00:00:00 3103165962                    4
    2005/06/01 00:00:00 3103165962                    1
    2005/04/01 00:00:00 3103165962                  560
    2005/08/01 00:00:00 3103165962                  101
    2005/03/01 00:00:00 3103165962                 3330
    7 rows selected.
    Elapsed: 00:00:19.72
    As you can see - not a single reference to any partitioning. Excellent performance, despite running on an old K-class HP server.
    The reason for the performance is simple: a correctly designed and implemented partitioning scheme that caters for most of the queries against the table, and correctly designed and implemented indexes - especially local bitmap indexes. Without any hacks like partition names and the like...
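    For illustration only, a minimal sketch of that kind of design, with hypothetical names and only a couple of the daily partitions shown; the monthly aggregation above then prunes through the local index without naming a single partition:

    -- Range-partitioned call table, one partition per day.
    CREATE TABLE x25_calls_demo (
      callendtime          DATE         NOT NULL,
      sourcenetworkaddress VARCHAR2(20) NOT NULL
    )
    PARTITION BY RANGE (callendtime) (
      PARTITION p20050901 VALUES LESS THAN (DATE '2005-09-02'),
      PARTITION p20050902 VALUES LESS THAN (DATE '2005-09-03'),
      PARTITION pmax      VALUES LESS THAN (MAXVALUE)
    );

    -- Local bitmap index: one small index segment per partition.
    CREATE BITMAP INDEX x25_calls_demo_src_bx
      ON x25_calls_demo (sourcenetworkaddress) LOCAL;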

  • Reg update of a 10 million record table from 1 million record table

    I have 2 tables.
    Table 1: 10 million records
    21 indexes --> 1) acct_id, acct_seq_no -- index_1
    2) c1, c2, c3 -- index_2
    Table 2: 1.5 million records
    1 index on (acct_id, acct_seq_no) - idx_1
    The common keys are acct_id and acct_seq_no.
    I'm updating table 1 from table 2.
    I need to use index_1 from table 1 and idx_1 from table 2.
    How can I make my query use only these particular indexes?
    My query is as follows:
    UPDATE csban_&1 csb
    SET (
    duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    ) =
    ( SELECT duns_no,
      hdqtrs_duns_no,
      us_ultmt_duns_no,
      sci_id,
      blg_cl_id,
      cl_id
      FROM csban_abi_temp temp
      WHERE csb.acct_id = temp.acct_id
      AND csb.acct_seq_no = temp.acct_seq_no
      AND rownum < 2 )
    WHERE EXISTS
    ( SELECT 1
      FROM csban_abi_temp temp1
      WHERE csb.acct_id = temp1.acct_id
      AND csb.acct_seq_no = temp1.acct_seq_no )
    Do I need to put an index hint after this?
    UPDATE csban_&1 csb --???????? /*+ index (csb.index_1) */
    Thanks in advance

    Thanks a lot David and Rob for sharing the info.
    Please find the details
    SQL> EXPLAIN PLAN FOR
    UPDATE csban_2 csb
    SET (
    duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    ) =
    ( SELECT duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    FROM csban_abi_temp temp
    WHERE csb.acct_id = temp.acct_id
    AND csb.acct_seq_no = temp.acct_seq_no
    AND rownum < 2 )
    WHERE EXISTS
    ( SELECT 1
      FROM csban_abi_temp temp1
      WHERE csb.acct_id = temp1.acct_id
      AND csb.acct_seq_no = temp1.acct_seq_no );
    Explained.
    SQL>
    SQL>
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 584770029

    | Id  | Operation                      | Name            | Rows  | Bytes | Cost |    TQ | IN-OUT | PQ Distrib |
    |   0 | UPDATE STATEMENT               |                 |  530K |   19M | 8213 |       |        |            |
    |   1 |  UPDATE                        | CSBAN_2         |       |       |      |       |        |            |
    |*  2 |   FILTER                       |                 |       |       |      |       |        |            |
    |   3 |    PX COORDINATOR              |                 |       |       |      |       |        |            |
    |   4 |     PX SEND QC (RANDOM)        | :TQ10000        |  530K |   19M | 8213 | Q1,00 | P->S   | QC (RAND)  |
    |   5 |      PX BLOCK ITERATOR         |                 |  530K |   19M | 8213 | Q1,00 | PCWC   |            |
    |   6 |       TABLE ACCESS FULL        | CSBAN_2         |  530K |   19M | 8213 | Q1,00 | PCWP   |            |
    |*  7 |    INDEX RANGE SCAN            | IDX_CSB_ABI_TMP |     1 |    10 |    3 |       |        |            |
    |*  8 |   COUNT STOPKEY                |                 |       |       |      |       |        |            |
    |   9 |    TABLE ACCESS BY INDEX ROWID | CSBAN_ABI_TEMP  |     1 |    38 |    4 |       |        |            |
    |* 10 |     INDEX RANGE SCAN           | IDX_CSB_ABI_TMP |     1 |       |    3 |       |        |            |

    Predicate Information (identified by operation id):
       2 - filter( EXISTS (SELECT 0 FROM "CSBAN_ABI_TEMP" "TEMP1"
                   WHERE "TEMP1"."ACCT_SEQ_NO"=:B1 AND "TEMP1"."ACCT_ID"=:B2))
       7 - access("TEMP1"."ACCT_ID"=:B1 AND "TEMP1"."ACCT_SEQ_NO"=:B2)
       8 - filter(ROWNUM<2)
      10 - access("TEMP"."ACCT_ID"=:B1 AND "TEMP"."ACCT_SEQ_NO"=:B2)

    Note
       - cpu costing is off (consider enabling it)

    30 rows selected.
    The query completed, and it took 1 hr 47 min.
    SQL> SQL> SQL> Updating CSBAN from TEMP table
    old 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_&1 csb
    new 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_1 csb
    1611807 rows updated.
    Elapsed: 01:47:16.40
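    As an aside, the hint in that run - /*+ INDEX(acct_id,acct_seq_no) */ - is not in the documented form (it names columns rather than an alias and an index name), so the optimizer will have ignored it. The INDEX hint takes the table alias followed by the index name. A sketch using the names from the question, with a single SET column shown for brevity:

    UPDATE /*+ INDEX(csb index_1) */ csban_1 csb
    SET    duns_no = (SELECT /*+ INDEX(temp idx_1) */ temp.duns_no
                      FROM   csban_abi_temp temp
                      WHERE  csb.acct_id     = temp.acct_id
                      AND    csb.acct_seq_no = temp.acct_seq_no
                      AND    ROWNUM < 2)
    WHERE  EXISTS (SELECT /*+ INDEX(temp1 idx_1) */ 1
                   FROM   csban_abi_temp temp1
                   WHERE  csb.acct_id     = temp1.acct_id
                   AND    csb.acct_seq_no = temp1.acct_seq_no);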

  • Performance issues in million records table

    I have a scenario wherein I have some 20 tables, each with a million or more records. [ Historical ]
    On average I add 1,500 - 2,500 records a day, i.e. I would add about a million records every year.
    I am looking for archival solutions for these master tables.
    Operations on the archival tables would be limited to reads.
    Expected benefits
    The user base would be around 2,500 users in total, but expect 300 - 500 parallel users at the max.
    Very limited usage of historical data, compared to operations on current data.
    Performance of operations on current data is more important than performance on historical data.
    Environment - Oracle 9i - should be migrating to Oracle 10g soon.
    Some solutions I could think of:
    [ 1 ] Put every archived record into an archival table and fetch it from there,
    i.e. clearly distinguish searches as current or archival prior to searching.
    The impact, I feel, is that the archival tables again keep growing by approximately a million records a year.
    [ 2 ] Put records into various archival tables, each differentiated by year.
    For instance, every year I replicate the set of tables, and that year's data goes into those tables.
    But how do I do a fetch?
    Note - I do have a unique way of identifying each record in my master table - the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
    The major concern is that I currently have very good response times thanks to indexing and other common measures, but I would not want this to degrade in a year or more; rather, I expect to improve on the current response times and to sustain them over time.
    Also, I don't want to change every query in my app - unless there is no way out.

    Hi,
    Read the following documentation link about Partitioning in Oracle.
    Best Regards,
    Alex
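    Given the YYYYMMXXXXXXXXXX key mentioned in the question, range partitioning on that key would give one segment per year without renaming tables and without changing a single application query - Oracle prunes to the right partition from the key value alone. A minimal sketch (table and column names hypothetical):

    CREATE TABLE master_hist (
      rec_id  NUMBER(16) NOT NULL,   -- e.g. 2008070000562330
      payload VARCHAR2(100),
      CONSTRAINT master_hist_pk PRIMARY KEY (rec_id)
    )
    PARTITION BY RANGE (rec_id) (
      PARTITION p2007 VALUES LESS THAN (2008000000000000),
      PARTITION p2008 VALUES LESS THAN (2009000000000000),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );

    Queries keep addressing master_hist as an ordinary table; older partitions can be moved to cheaper storage or made read-only as they age.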

  • How can I update a particular column in a 7 million record table, where there are many conditions to apply?

    I am designing a table for which I am loading data from different tables via joins. But I have a Status column with about 16 different statuses drawn from different tables; for each case there is a condition, and if it is satisfied, that particular status should appear in the status column, so I need to write the query as 16 different cases.
    Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data gets into the table quickly? The data comes mostly from big tables of about 7 million records. If I write the logic as a CASE, it will scan the table for each case - about 16 scans. How can I do this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which take records from the 7 million row table, filtering for records of year 2013. This takes more than an hour to run. I am posting the part of the code which runs slow, mainly the Status column.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN
    and z.Hosp_Ind = 'P'))
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physciains****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please advise on this.
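    One common pattern for this kind of bucketing is to join each lookup table once (deduplicated) and let the CASE test the joined columns, instead of running a correlated IN subquery per branch; that way the big table is scanned only once. A minimal sketch with simplified, hypothetical names - it assumes each lookup key maps to a single bucket value, so the DISTINCT subqueries cannot multiply rows:

    SELECT z.tax_id,
           CASE
               WHEN pt.tin    IS NOT NULL THEN pt.subsystem_name
               WHEN pr.tax_id IS NOT NULL THEN pr.bucket
               ELSE 'Market Review/Preparing to Mail'
           END AS status_column
    FROM   dbo.big_table z
    LEFT JOIN (SELECT DISTINCT tin, subsystem_name
               FROM dbo.provider_tracking
               WHERE subsystem_name <> 'NULL') pt ON pt.tin = z.tax_id
    LEFT JOIN (SELECT DISTINCT tax_id, bucket
               FROM dbo.nonpar_pr_ls_tins
               WHERE bucket <> 'Nonpar') pr ON pr.tax_id = z.tax_id;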

  • Create batches for million record table

    I have taken this out for some reasons...
    Thanks for your support guys...
    Edited by: Srichan on Jun 17, 2009 2:19 PM

    Well, OK then:
    SQL> select * from t
      2  /
           COL
             1
            20
            56
            78
            80
            82
            88
            91
            95
            99
           100
           110
    12 rows selected.
    Elapsed: 00:00:00.03
    SQL> select min(col) stkey
      2  ,      max(col) endkey
      3  ,      count(*) recs
      4  ,      grpid
      5  from ( select col
      6         ,      ntile(5) over (order by col) grpid
      7         from t
      8       )
      9  group by grpid
    10  order by stkey;
         STKEY     ENDKEY       RECS      GRPID
             1         56          3          1
            78         82          3          2
            88         91          2          3
            95         99          2          4
           100        110          2          5
    5 rows selected.
    Elapsed: 00:00:00.01
    SQL> select min(col) stkey
      2   ,      max(col) endkey
      3   ,      count(*) recs
      4   ,      grpid
      5   from ( select col
      6  ,      ntile(3) over (order by col) grpid
      7          from t
      8        )
      9   group by grpid
    10   order by stkey;
         STKEY     ENDKEY       RECS      GRPID
             1         78          4          1
            80         91          4          2
            95        110          4          3
    3 rows selected.
    Elapsed: 00:00:00.06
    edit
    All credits to Salim here, I like the 'rownum-thing' to get the sets into 5-5-2, very nice and just had to run it ;) :
    SQL> select min(col) stkey
      2  ,      max(col) endkey
      3  ,      count(*) recs
      4  ,      rn
      5  from ( select col
      6        ,       trunc((rownum -1)/5)+1 rn
      7         from t
      8       )
      9  group by rn
    10  order by stkey;
         STKEY     ENDKEY       RECS         RN
             1         80          5          1
            82         99          5          2
           100        110          2          3
    Edited by: hoek on Jun 16, 2009 6:24 PM

  • Issue of handling 10 million records

    Dear all,
    The program is executed online and needs to handle 10 million records.
    If I select the data from the database directly, a TSV_TNEW_PAGE_ALLOC_FAILED short dump occurs.
    ex: select xxx
            from xxx
            into table xxx.
    If I select 1 million records from the database at a time and then process them, a TIME_OUT short dump occurs.
    ex: select xxx
           from xxx
           into table xxx
           package size 10000000.
            perform xxxx     " handle 1 million records,
         endselect.
    Could you please suggest a solution?
    Thanks in advance.
    Best regards
    Lily

    Please start with basic ABAP training before you try to SELECT 1 million records.
    If it is good training, you will not SELECT 1 million records afterwards.
    I would recommend starting with the chapter about WHERE conditions.
    And please forget the parallel cursor; it comes at the end of the advanced training under the title 'special and obsolete commands'.
    Siegfried

  • Update Query is Performing Full Table Scan of 1 Million Records

    Hello everybody, I have one update query:
    UPDATE tablea SET
              task_status = 12
              WHERE tablea.link_id >0
              AND tablea.task_status <> 0
              AND tablea.event_class='eventexception'
              AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
              AND ltask.task_status = 0)
    When I do an explain plan, it shows the following result...
    Execution Plan
    0 UPDATE STATEMENT Optimizer=CHOOSE
    1 0 UPDATE OF 'tablea'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'tablea'
    4 2 TABLE ACCESS (BY INDEX ROWID) OF 'tablea'
    5 4 INDEX (UNIQUE SCAN) OF 'PK_tablea' (UNIQUE)
    Now tablea may have more than 10 MILLION records... this would take a hell of a lot of time even if it only has to
    update 2 records... please suggest some optimal solutions.
    Regards
    Mahesh

    I see your point, but my question, or rather my logic, is: I have an index on all columns used in the WHERE clause, so I see no reason for Oracle to do a full table scan.
    UPDATE tablea SET
    task_status = 12
    WHERE tablea.link_id >0
    AND tablea.task_status <> 0
    AND tablea.event_class='eventexception'
    AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
    AND ltask.task_status = 0)
    I am clearly stating: where task_status <> 0 and event_class = something and tablea.link_id > 0,
    so the ideal case for the optimizer should be:
    Step 1) Select all the rowids having this condition...
    Step 2) For each row in that rowid set, get all the rows where task_status = 0
    and where task_id = link_id of the row selected above...
    Step 3) While looping over each rowid, if the condition holds for the row obtained from ltask in step 2, update that record...
    I want to have this kind of plan... does anyone know how to make Oracle produce this kind of plan?
    My point is that a FULL TABLE SCAN is harmful, or at least no better than an index scan...
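    Separate single-column indexes rarely serve a combined predicate like this one. What would give the optimizer a chance at an index-driven plan is a composite index covering the filter columns - a sketch, with hypothetical index names, and no guarantee the optimizer uses it if event_class and task_status are not selective enough:

    CREATE INDEX tablea_evt_stat_link_ix ON tablea (event_class, task_status, link_id);

    -- The correlated EXISTS probe becomes index-driven once task_id is indexed:
    CREATE INDEX tablea_task_ix ON tablea (task_id, task_status);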

  • Updating table with more than 50 million records

    Hi Team,
    Updating three columns in a table which has 60 million records is taking a lot of time. How can I improve the performance of the query? The table has to be updated based on values from different database tables.
    Thanks,
    Eshwar.

    Split the update into small batches (see the sketch after the links below).
    When you are using multiple tables, combine them with JOINs.
    Have the corresponding indexes on the joining columns.
    Disable triggers, if any (if possible and if it doesn't affect your logic - this would be the least favourable step).
    These should also help:
    http://blogs.msdn.com/b/repltalk/archive/2011/10/10/lessons-learned-updating-100-millions-rows.aspx
    http://www.sqlservergeeks.com/blogs/AhmadOsama/personal/450/sql-server-optimizing-update-queries-for-large-data-volumes
    http://stackoverflow.com/questions/7344984/update-large-number-of-rows-sql-server-2005
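    A minimal sketch of the batching idea, assuming SQL Server, a key column id, and a source table new_vals holding the new values (all names hypothetical, non-NULL columns assumed):

    -- Update in chunks of 10,000 rows until nothing is left to change.
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) t
        SET    t.col1 = v.col1,
               t.col2 = v.col2,
               t.col3 = v.col3
        FROM   dbo.big_table t
        JOIN   dbo.new_vals  v ON v.id = t.id
        WHERE  t.col1 <> v.col1 OR t.col2 <> v.col2 OR t.col3 <> v.col3;

        IF @@ROWCOUNT = 0 BREAK;   -- done when no row still qualifies
    END;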
    Thanks,
    Jay

  • How to update a table that has 20 Million Records

    Hi,
    Let's consider the basic EMP table and assume that it has around 20 million records. We need an UPDATE statement; a normal UPDATE may hang the system or take a lot of time.
    The basic or normal update statement goes like this:
    update emp set hiredate = sysdate where comm is null and hiredate is null;
    This basic statement may not work. Suggestions needed.
    Regards,
    Vinesh

    sri wrote:
    I heard Bulk Collect will resolve these types of issues and I am really poor at Bulk Collect concepts.
    Exactly what type of issue are you concerned with? The business requirements here are pretty important - what problem is the UPDATE causing, specifically, that you are trying to work around?
    so looking for a solution to the problem using Bulk Collect.
    Without knowing the problem, it's very tough to suggest a solution. If you process data in batches using BULK COLLECT, your UPDATE will take longer to run and will consume more resources on the database. If the problem you are trying to solve is that your UPDATE is not fast enough, this is a poor approach.
    On the other hand, if you process data in batches, and do interim commits, you can probably hold locks on individual rows for a shorter amount of time. That would only be a concern, though, if you have some other process that is trying to update the same rows that you are updating at the same time that you're updating them, which is pretty rare. And breaking your update into multiple transactions introduces a whole bunch of complexity. You now have to write a bunch of code to ensure that your process is restartable should the update fail mid-way through leaving some number of updates committed and some number rolled back. You have to have a very detailed understanding of the data and data consistency to ensure that breaking up the transaction isn't going to negatively impact any process, report, etc. To do it correctly is a pile of work and then it's something that is constantly at risk of creating problems in the future when requirements change.
    In the vast majority of cases, you're better off issuing a simple SQL statement during a time when the system isn't particularly busy.
    Justin

  • Table has 80 million records - Performance impact if we stop archiving

    HI All,
    I have a table (Oracle 11g) which has around 80 million records; till now we have done weekly archiving to keep its size down. But now one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records, with just a little performance tuning.
    I was just wondering: is that true? And moreover, what kind of effect would there be on querying and insertion if the table size is 80 million and increasing every day?
    Any comments welcomed.

    What is true is that the Oracle database can manage tables with billions of rows, but when talking about data size you should give the table size instead of the number of rows, because you won't have the same table size if the average row size is 50 bytes as when it is 5K.
    About performance impact, it depends on the queries that access this table: the more data queries need to process and/or to return as result set, the more this can have an impact on performance for these queries.
    You don't give enough input for a good answer. Ideally you should give the DDL statements that create this table and its indexes, and the SQL queries that use them.
    In some cases table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition and additional licensing).
    Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records, and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the code using sorted/hashed tables, but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the SELECT query inside the condition shown below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, the SELECT with FOR ALL ENTRIES was fetching all the records from the database. With this condition in place, it will not select any records if the driver table is empty.
    Regards,
    Pravin

  • Problem with Fetching Million Records from Table COEP into an Internal Table

    Hi everyone! Hope things are going well.
    Table COEP has 6 million records.
    I am trying to get records based on certain criteria; there are at least 5 conditions in the WHERE clause.
    I've noticed it takes about 15 minutes to populate the internal table. How can I improve the performance to less than a minute for a fetch of 500 records from a database set of 6 million?
    Regards,
    Owais...

    The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got a substantial performance improvement by adding "where LEDNR EQ '00'" to the WHERE clause.
    Here is my select:
              SELECT kokrs
                     belnr
                     buzei
                     ebeln
                     ebelp
                     wkgbtr
                     refbn
                     bukrs
                     gjahr
                FROM covp CLIENT SPECIFIED
                INTO TABLE i_coep
                 FOR ALL ENTRIES IN i_objnr
               WHERE mandt EQ sy-mandt
                 AND lednr EQ '00'
                 AND objnr = i_objnr-objnr
                 AND kokrs = c_conarea.

  • How to update more than 5 million records without error message ORA-00257:

    Hi,
    I need to update some columns in my table, which contains about 5 million records.
    I've already tried this:
    Update AAA_CDR
    Set RoamFload = Null;
    but the problem is I got the error message ("ORA-00257: archiver error. Connect internal only, until freed.") and the update consumed about 6 hours with no results.
    Then I ran the command (Alter system set db_recovery_file_dest_size=50G) and the problem was solved.
    But I need to update about 15 columns of this table to be null. What should I do to overcome this message and update the table in a reasonable time?
    Please Help Me ,

    The best way would be to allocate sufficient disk space for your archive log destination. Your database is not sized properly. The NOLOGGING option will not do much for you because it only applies to direct load operations, when the data inserted into a NOLOGGING table is selected from another table. An UPDATE will be logged regardless of the NOLOGGING status. Here is the quote from the manual:
    <quote>
    LOGGING|NOLOGGING
    LOGGING|NOLOGGING specifies that subsequent Direct Loader (SQL*Loader) and direct-load
    INSERT operations against a nonpartitioned index, a range or hash index partition, or
    all partitions or subpartitions of a composite-partitioned index will be logged (LOGGING)
    or not logged (NOLOGGING) in the redo log file.
    In NOLOGGING mode, data is modified with minimal logging (to mark new extents invalid
    and to record dictionary changes). When applied during media recovery, the extent
    invalidation records mark a range of blocks as logically corrupt, because the redo data
    is not logged. Therefore, if you cannot afford to lose this index, you must take a backup
    after the operation in NOLOGGING mode.
    If the database is run in ARCHIVELOG mode, media recovery from a backup taken before an
    operation in LOGGING mode will re-create the index. However, media recovery from a backup
    taken before an operation in NOLOGGING mode will not re-create the index.
    An index segment can have logging attributes different from those of the base table and
    different from those of other index segments for the same base table.
    </quote>
    If you are really desperate, you can try the following undocumented/unsupported command:
    ALTER DATABASE ARCHIVELOG COMPRESS ENABLE;
    That will cause the database to compress your archive logs and consume less space. This command is not documented or supported, not even in version 11.2.0.3, and it causes the database to start spewing ORA-00600 errors in version 10g. DO NOT USE IN A PRODUCTION ENVIRONMENT!!!!
