How to process millions of records

Hi,
How would you process 50 million records in BDC without exceeding the allotted background processing time? Please help; is there any other way of doing this?
Moderator message: too vague, help not possible, please describe problems in all technical detail when posting again.
[Asking Good Questions in the Forums to get Good Answers|/people/rob.burbank/blog/2010/05/12/asking-good-questions-in-the-forums-to-get-good-answers]
Edited by: Thomas Zloch on Dec 9, 2010 9:40 AM

Hi,
I am not sure, but please check the link below; it might be useful for you.
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/c0d1e9c4-bfb4-2c10-76b4-d5e2912b83be
Thanks,
Jaten Sangal

Similar Messages

  • How to process each record in a derived table created with a CTE in SQL Server

    I want to process each row from the CTE table I created. How can I traverse from the first row to the second row and so on?

    Ideally you would do set-based processing rather than traversing row by row, as that is more efficient. To answer this specifically for your scenario we may need more information. Can you explain your exact requirement with some sample data?
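    As an illustration of the set-based approach (a minimal sketch with hypothetical table and column names; adapt it to your schema), the whole derived set can be applied in one UPDATE instead of being traversed in a loop:
    -- Set-based processing: the CTE is joined to the target table once,
    -- so every qualifying row is updated in a single statement.
    -- dbo.orders, order_id, qty, unit_price and order_total are placeholders.
    ;WITH cte AS (
        SELECT order_id, qty, unit_price
        FROM   dbo.orders
        WHERE  order_status = 'NEW'
    )
    UPDATE o
    SET    o.order_total = c.qty * c.unit_price
    FROM   dbo.orders AS o
    JOIN   cte AS c ON c.order_id = o.order_id;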
    Visakh

  • How to process NAST records

    Hi all,
    Can anyone tell me how to process NAST records with dispatch time 3 (send periodically with own transaction) and with dispatch time 2?
    Thanks in advance.
    Vinod.

    I think you can process them using program RSNAST00.
    Regards
    MD

  • How to send millions of records using Open Hub to my own file server

    Hi,
    I am using the Open Hub process to send 6 million records to my own network server. Is there any limitation, and how should I go about this? My process fails when I execute the Open Hub DTP, although I am able to send a few records to the SAP application server.
    Has anybody sent more than a million records to an application server outside SAP? How can this be achieved? Please help me out.
    I am trying to send the ODS records to a file server.

    I'm glad you solved your problem.
    Generally it is nice to post how you solved your problem so that when others have a similar problem and search the archives, they can see your solution.
    Thanks

  • How to process the next record in Oracle PL/SQL

    Hi,
    I am processing the record set below with the help of BULK COLLECT in an Oracle PL/SQL procedure. While processing, I check whether the model is one that need not be substituted. If it is 'NA' or 'N/A', I need to move on to the next record (marked in the code snippet below).
    Please guide me on how to do it.
    TYPE t_get_money IS TABLE OF c_get_money%ROWTYPE INDEX BY BINARY_INTEGER;
    L_money           t_get_money;
    L_subst_model     VARCHAR2(40);
    L_Notify_Manager  VARCHAR2(1);
    L_grade           VARCHAR2(20);
    L_Error_Message   VARCHAR2(1);
    BEGIN
    OPEN c_get_money;
    FETCH c_get_money BULK COLLECT INTO L_money;
    CLOSE c_get_money;
    FOR i IN 1 .. L_money.COUNT LOOP
    -- check if the model is one that need not be substituted
    IF UPPER(L_money(i).subst_model) IN ('N/A', 'NA') THEN
    L_NOTIFY_MANAGER(i) := 'Y';
    L_GRADE(i) := 'ERROR';
    L_error_message(i) := 'substitute Model is not N/A or NA';
    -------Here I want to process the NEXT RECORD--------
    END IF;
    END LOOP;
    END;

    One solution for versions below 11g...
    DECLARE
         TYPE t_get_money IS TABLE OF c_get_money%ROWTYPE
                                       INDEX BY BINARY_INTEGER;
         L_money              t_get_money;
         L_subst_model        VARCHAR2 (40);
         L_Notify_Manager   VARCHAR2 (1);
         L_grade              VARCHAR2 (20);
         L_Error_Message    VARCHAR2 (1);
    BEGIN
         OPEN c_get_money;
         FETCH c_get_money
         BULK COLLECT INTO L_money;
         CLOSE c_get_money;
         FOR I IN 1 .. L_money.COUNT LOOP
              IF UPPER (L_money (i).subst_model) IN ('N/A', 'NA') THEN
                   GOTO Nextrecord;
              END IF;
              L_NOTIFY_MANAGER (I)   := 'Y';
              L_GRADE (I)              := 'ERROR';
              L_error_message (i)    := 'substitute Model is not N/A or NA';
            <<Nextrecord>>
              NULL;
         END LOOP;
     END;

     One solution for 11gR1 and above...
    DECLARE
         TYPE t_get_money IS TABLE OF c_get_money%ROWTYPE
                                       INDEX BY BINARY_INTEGER;
         L_money              t_get_money;
         L_subst_model        VARCHAR2 (40);
         L_Notify_Manager   VARCHAR2 (1);
         L_grade              VARCHAR2 (20);
         L_Error_Message    VARCHAR2 (1);
    BEGIN
         OPEN c_get_money;
         FETCH c_get_money
         BULK COLLECT INTO L_money;
         CLOSE c_get_money;
         FOR I IN 1 .. L_money.COUNT LOOP
              IF UPPER (L_money (i).subst_model) IN ('N/A', 'NA') THEN
                   CONTINUE;
              END IF;
              L_NOTIFY_MANAGER (I)   := 'Y';
              L_GRADE (I)              := 'ERROR';
              L_error_message (i)    := 'substitute Model is not N/A or NA';
         END LOOP;
    END;

  • Sender JMS Content Conversion - How to process multiple records

    Hi All,
    I use a Sender JMS Channel with Content Conversion.
    My message structure is like this
    <root>
        <rec>    </rec>
        <rec>    </rec>
    </root>
    I have a fixed-length flat file with multiple records.
    I have given the parameters FixedFieldLength, FieldNames and StructureTitle.
    Which parameter do I need to use to specify the RecordDelimiter?
    My input file will have more than one record, for example:
    xxxx
    yyyy
    If I don't specify any delimiter value in the module parameters, then a new message is created for each new line of the file:
    <root>
      <rec>xxxx</rec>
    </root>
    <root>
      <rec>yyyy</rec>
    </root>
    But I want the output to be like this:
    <root>
      <rec>xxxx</rec>
      <rec>yyyy</rec>
    </root>

    Hi,
    You can set up the FCC for the sender JMS adapter by going through page 5 of this document.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50061bd9-e56e-2910-3495-c5faa652b710

  • How can I update a particular column in a 7-million-record table when there are many conditions involved?

    I am designing a table for which I am loading data from different tables using joins. I have a Status column with about 16 different statuses coming from different tables; for each case there is a condition, and when it is satisfied that particular status should appear in the Status column, so I need to write the query as 16 different cases.
    Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data is loaded into the table quickly? The data mostly comes from big tables of about 7 million records, and if the logic is written as a CASE it will scan the table once for each case, about 16 times. How can I do this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which take records from the 7-million-row table filtered to records from year 2013. It is taking more than an hour to run. I am posting the part of the code that is running slow, mainly the Status column logic.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN
    and z.Hosp_Ind = 'P'))
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physicians****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please suggest how to improve this.
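    One common way to reduce the repeated scans is to replace the correlated IN subqueries with outer joins that are evaluated once, and derive the status from the joined columns. A minimal sketch of the pattern, covering only two of the buckets above (the source table name is a placeholder; the join keys are assumed from the posted predicates):
    -- Pattern sketch only: pre-join each lookup once instead of probing it in
    -- every CASE branch. The DISTINCT derived tables avoid row multiplication
    -- when a TIN appears more than once in a lookup table.
    SELECT  z.SYSTEMNAME,
            z.TAX_ID,
            CASE
                WHEN z.SRC_PAR_CD IN ('E','O','S','W')         THEN 'Nonpar Waiver'
                WHEN pr.TAX_ID IS NOT NULL                     THEN pr.Bucket
                WHEN obj.TIN IS NOT NULL AND z.Hosp_Ind = 'P'  THEN 'Objecting Physicians'
                ELSE 'Market Review/Preparing to Mail'
            END AS [STATUS Column]
    FROM    dbo.SourceTable AS z                 -- placeholder for the 7-million-row table
    LEFT JOIN (SELECT DISTINCT TAX_ID, Bucket
               FROM dbo.SQS_NonPar_PR_LS_TINs
               WHERE Bucket <> 'Nonpar') AS pr
           ON pr.TAX_ID = z.TAX_ID
    LEFT JOIN (SELECT DISTINCT TIN
               FROM dbo.SQS_Provider_Tracking
               WHERE [Objector?] IN ('Objector','Top Objector')) AS obj
           ON obj.TIN = z.TAX_ID;
    The remaining buckets can be added as further outer joins, and indexing the TIN/TAX_ID columns in the lookup tables helps those joins; the SUM aggregation from the full query is omitted here to keep the sketch short.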

  • Approx how much time should the JDBC adapter take to insert 1.5 million records?

    Hi All,
    What is the optimum time for inserting 1.5 million records into an Oracle staging table? My ECC-to-Oracle scenario is taking 3 hours.
    From your previous experience, what do you think about this? Is there scope for improvement?
    We have a simple insert through the JDBC datatype, i.e. Action = INSERT.
    Kindly advise.
    Regards,
    XIer
    Edited by: XIer on Mar 27, 2008 9:20 AM
    Edited by: XIer on Mar 27, 2008 10:02 AM

    Hi,
    > What do you think is the optimum time, from your experience?
    We had a similar situation; after adding an application server the time was reduced to 1 hour. Now, how many application servers are available in your XI system?
    Regards
    Sangeetha

  • How can I create a detail cube with millions of records

    Hello everyone,
    I now need to create a cube for detail data. The problem is that the detail data is very large; there are several million records.
    How can I design such a cube in Essbase? Can one create such a cube in Essbase at all?
    I need your suggestions. Thank you very much!
    Ming

    Hello Sandeep,
    Thank you for your reply.
    Our situation is that we have BIEE + Essbase, and the users want to see the detail data, which is very large. The users want to get all of the data from Excel (Hyperion), so there are many problems with speed and performance.
    How can I design this so that the performance is better?
    Ming

  • How to update a table that has millions of records

    Hi,
    Let's consider the basic EMP table and assume that it has around 20 million records. We need an update statement; a normal UPDATE statement may hang the system or take a lot of time.
    The basic update statement goes like this, and I suspect it may not work:
    update emp set hiredate = sysdate where comm is null and hiredate is null;
    Suggestions needed.
    Regards,
    Vinesh

    sri wrote:
    > I heard Bulk Collect will resolve these types of issues and I am really poor at Bulk Collect concepts.
    Exactly what type of issue are you concerned with? The business requirements here are pretty important: what problem is the UPDATE causing, specifically, that you are trying to work around?
    > so looking for a solution to the problem using Bulk Collect.
    Without knowing the problem, it's very tough to suggest a solution. If you process data in batches using BULK COLLECT, your UPDATE statement will take longer to run and will consume more resources on the database. If the problem you are trying to solve is that your UPDATE is not fast enough, this is a poor approach.
    On the other hand, if you process data in batches, and do interim commits, you can probably hold locks on individual rows for a shorter amount of time. That would only be a concern, though, if you have some other process that is trying to update the same rows that you are updating at the same time that you're updating them, which is pretty rare. And breaking your update into multiple transactions introduces a whole bunch of complexity. You now have to write a bunch of code to ensure that your process is restartable should the update fail mid-way through leaving some number of updates committed and some number rolled back. You have to have a very detailed understanding of the data and data consistency to ensure that breaking up the transaction isn't going to negatively impact any process, report, etc. To do it correctly is a pile of work and then it's something that is constantly at risk of creating problems in the future when requirements change.
    In the vast majority of cases, you're better off issuing a simple SQL statement during a time when the system isn't particularly busy.
    Justin
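    If, after weighing the trade-offs Justin describes, the update still has to be broken into batches with interim commits, a minimal sketch of that approach (using the EMP table and predicate from the question; the batch size is an assumption) might look like this:
    -- Batched update with interim commits. Because the predicate
    -- "hiredate IS NULL" excludes rows already updated, the loop is naturally
    -- restartable after a failure; note Justin's caveat that this runs longer
    -- and uses more resources overall than a single UPDATE.
    DECLARE
      l_rows PLS_INTEGER;
    BEGIN
      LOOP
        UPDATE emp
           SET hiredate = SYSDATE
         WHERE comm IS NULL
           AND hiredate IS NULL
           AND ROWNUM <= 50000;      -- assumed batch size
        l_rows := SQL%ROWCOUNT;
        COMMIT;
        EXIT WHEN l_rows = 0;
      END LOOP;
    END;
    /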

  • How to trace erroneous records in a mapping or a process flow?

    Hi All,
    I read the following document by Rittman.
    http://www.rittman.net/work_stuff/tracing_owb_mappings_pt1.htm
    I am using Oracle Warehouse Builder 10G R1.
    But I feel it may solve my problem. My problem scenario is as follows.
    I would like to know how to trace records which are valid as per the business rules but are not counted in the output because of functional errors, as follows.
    For example, a variable contains the value Region = "R01".
    As per the rule, we need to retrieve the number 01.
    I implemented this as to_number( substr (Region, 2) ).
    Unfortunately, in one record the field data was "RRR".
    If that logic is applied to such a record, it returns an error/warning, so the record is not counted in the output.
    I would like to trace these types of records into a table or a file while executing the mapping.
    Is this possible using Oracle Warehouse Builder or Oracle?
    When dealing with an external table we can create a log or bad file, which holds all bad records by default. Is there any way to do this in a mapping?
    Has anyone implemented this kind of tracing file containing all bad records?
    Any suggestions are welcome.
    Thank you,
    Regards,
    Gowtham Sen.

    Hi,
    I have never used this before, but I know that inside the mapping configuration, in the table operators, there is a property where you can specify the exceptions table name under constraints management. Alternatively, you might add an additional field to your target table, add a case expression and mark each record as valid or invalid, or something like that. You can then select the records that are invalid.
    Take a look at this thread: Some Thoughts On An OWB Performance/Testing Framework
    Re: Some Thoughts On An OWB Performance/Testing Framework
    Cheers,
    Ricardo
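    A minimal sketch of the kind of case expression Ricardo suggests, using Oracle's REGEXP_LIKE to test whether the Region value is numeric after the leading character (the source table name is a placeholder; the Region rule is the one from the question):
    -- Flag rows whose Region value cannot be converted instead of letting
    -- TO_NUMBER raise an error inside the mapping. REGEXP_LIKE is available
    -- from Oracle 10g, which matches the OWB 10gR1 environment.
    SELECT region,
           CASE
             WHEN REGEXP_LIKE(SUBSTR(region, 2), '^[0-9]+$')
               THEN TO_NUMBER(SUBSTR(region, 2))
             ELSE NULL                           -- invalid value such as 'RRR'
           END AS region_number,
           CASE
             WHEN REGEXP_LIKE(SUBSTR(region, 2), '^[0-9]+$') THEN 'VALID'
             ELSE 'INVALID'
           END AS region_status                  -- the extra flag column idea
    FROM   source_table;                         -- placeholder source name
    Rows flagged INVALID can then be routed by a splitter operator into an error table or file rather than being silently dropped.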

  • How to update more than 5 million records without getting error ORA-00257

    Hi,
    I need to update some columns in my table, which contains about 5 million records.
    I have already tried this:
    Update AAA_CDR
    Set RoamFload = Null;
    but the problem is that I got the error message "ORA-00257: archiver error. Connect internal only, until freed." and the update ran for about 6 hours with no result.
    Then I ran the command (Alter system set db_recovery_file_dest_size=50G) and the problem was solved.
    But I need to set about 15 columns of this table to null. What should I do to avoid this message and update the table in a reasonable time?
    Please help me.

    The best way would be to allocate sufficient disk space for your archive log destination; your database is not sized properly. The NOLOGGING option will not do much for you because it only applies to direct-load operations, where the data inserted into a NOLOGGING table is selected from another table. An UPDATE will be logged regardless of the NOLOGGING status. Here is the quote from the manual:
    <quote>
    LOGGING|NOLOGGING
    LOGGING|NOLOGGING specifies that subsequent Direct Loader (SQL*Loader) and direct-load
    INSERT operations against a nonpartitioned index, a range or hash index partition, or
    all partitions or subpartitions of a composite-partitioned index will be logged (LOGGING)
    or not logged (NOLOGGING) in the redo log file.
    In NOLOGGING mode, data is modified with minimal logging (to mark new extents invalid
    and to record dictionary changes). When applied during media recovery, the extent
    invalidation records mark a range of blocks as logically corrupt, because the redo data
    is not logged. Therefore, if you cannot afford to lose this index, you must take a backup
    after the operation in NOLOGGING mode.
    If the database is run in ARCHIVELOG mode, media recovery from a backup taken before an
    operation in LOGGING mode will re-create the index. However, media recovery from a backup
    taken before an operation in NOLOGGING mode will not re-create the index.
    An index segment can have logging attributes different from those of the base table and
    different from those of other index segments for the same base table.
    </quote>
    If you are really desperate, you can try the following undocumented/unsupported command:
    ALTER DATABASE ARCHIVELOG COMPRESS ENABLE;
    That will cause the database to compress your archive logs and consume less space. This command is not documented or supported, not even in version 11.2.0.3, and it causes the database to start spewing ORA-00600 errors in version 10g. DO NOT USE IT IN A PRODUCTION ENVIRONMENT!!!!
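    Separately, since the goal is to set about 15 columns to NULL, doing it in one UPDATE rather than 15 separate ones means the rows are rewritten, and the redo generated, only once. A minimal sketch against the AAA_CDR table from the question (the extra column names are placeholders):
    -- Clear all target columns in a single pass over AAA_CDR.
    -- RoamFload comes from the question; the other names are placeholders.
    UPDATE aaa_cdr
       SET roamfload  = NULL,
           other_col1 = NULL,
           other_col2 = NULL;   -- list the remaining columns here the same way
    COMMIT;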

  • How to process the records in a table with multiple threads using PL/SQL & Java

    I have a table containing millions of records, and the number of records keeps increasing because a high-speed process populates this table.
    I want to process this table using multiple Java threads, but the condition is that each record should be processed only once, by only one of the threads. After processing I need to delete that record from the table.
    Here is what I am thinking: I will put the code that processes the records in a PL/SQL procedure and call it from multiple Java threads to make the processing concurrent.
    Java Thread.1 }
    Java Thread.2 }
    .....................} -------------> PL/SQL procedure to process and delete records ------> <<<Table>>>
    Java Thread.n }
    But the problem is: how can I prevent a record from being picked up by another thread while it is being processed (so that it is not processed multiple times)?
    I am very familiar with PL/SQL. The only issue I am facing is how to fetch/process/delete each record exactly once.
    I can change the structure of the table to add a new column if needed.
    Thanks in advance.
    Edited by: abhisheak123 on Aug 2, 2009 11:29 PM

    Check whether you can use bucket logic in your PL/SQL code.
    By bucket I mean making multiple buckets of the data to be processed, so that each bucket contains different rows, and then calling the PL/SQL process in parallel for each bucket.
    Let's say there are columns create_date and processed_flag in your table.
    Your PL/SQL code should take two parameters, start_date and end_date.
    Now if you want to process data, say, between 01-Jan and 06-Jan, a wrapper program should first create 6 buckets of one day each and then call the PL/SQL procedure in parallel for these 6 different buckets, as sketched below.
    Regards
    Arun
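    A minimal sketch of the bucketed procedure described above (the table name, column names and processing step are placeholders; the date-range predicate is what keeps parallel callers on disjoint sets of rows):
    -- Process and delete one bucket of rows bounded by a date range.
    -- work_queue and create_date are placeholder names.
    CREATE OR REPLACE PROCEDURE process_bucket (
        p_start_date IN DATE,
        p_end_date   IN DATE
    ) IS
    BEGIN
        FOR rec IN (SELECT t.rowid AS rid, t.*
                      FROM work_queue t
                     WHERE t.create_date >= p_start_date
                       AND t.create_date <  p_end_date) LOOP
            -- ... application-specific processing of rec goes here ...
            DELETE FROM work_queue WHERE rowid = rec.rid;  -- remove once processed
        END LOOP;
        COMMIT;
    END process_bucket;
    /
    If the buckets cannot be made strictly disjoint, Oracle's SELECT ... FOR UPDATE SKIP LOCKED is another common way to keep concurrent sessions from picking up the same rows.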

  • Delete over a million records

    I want to delete over 1 million records from a user table. This table also has 10 related tables. I tried a cursor to loop through the records, but I couldn't finish it and had to kill the process.
    I have copied all user names to a temp table and I am planning to join it with each table and delete.
    Do you think this approach is the right one for deleting this many records?

    Sometimes it is appropriate to use a where clause in export to extract the desired rows and tables, then recreate tables with appropriate storage parameters and import. Other times CTAS is appropriate. Other times plain old delete plus special undo. And there are other options like ETL software.
    Details determine appropriateness, including if this is a one-time thing, how long until that many records come back, time frames, scope and so forth. Row-by-row processing is seldom the right way, though that often is used in over-generalized schemes, and may be right if there are complicated business rules determining deletion. At times I've used all of the above in single projects like splitting out subsidiaries from an enterprise db or creating test schemata.
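    As an illustration of the CTAS option mentioned above, one way to "delete" the bulk of a table is to keep only the surviving rows (a sketch with placeholder table and column names; indexes, constraints, triggers and grants must be recreated on the new table before the swap):
    -- Create-table-as-select keeps the rows you want instead of deleting the
    -- ones you do not. users, username and users_to_delete_tmp are placeholders.
    CREATE TABLE users_keep AS
        SELECT *
          FROM users u
         WHERE u.username NOT IN (SELECT username FROM users_to_delete_tmp);

    -- After validating users_keep, swap it in.
    DROP TABLE users;
    ALTER TABLE users_keep RENAME TO users;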

  • Best way to update 8 out of 10 million records

    Hi friends,
    I want to update 8 million records of a table which has 10 million records. What would be the best strategy if the table has a BLOB column with 600 GB worth of data? The BLOB itself is 550 GB, and I am not updating the BLOB column.
    Usually with non-BLOB data I have used the "CREATE TABLE new_table AS SELECT <do the update here> FROM old_table;" method.
    How should I approach this one?

    @Mark D Powell
    To give you some background, my client faced this problem a week ago; it is part of a daily cleanup activity.
    Right now I don't have access to the system due to a security issue. I could only take a few AWR reports and stats while the access window was open, so the next time I get access I want to close the issue once and for all.
    Coming to your questions:
    So what is wrong with just issuing an update to update all 8 million rows?
    In a previous run of a single update, with a full table scan in the plan and no parallel degree, it started reading from UNDO (current_obj#=-1 on the "db file sequential read" wait event) and errored out after 24 hours with "tablespace full" on the tablespace that contains the BLOB data (a separate tablespace).
    To add to the problem, the redo log files were sized too small, only about 50 MB.
    The wait events (from DBA_HIST_ACTIVE_SESS_HISTORY) for the problematic SQL id show:
    - log file switch (checkpoint incomplete) and log file switch completion comprising 62% of the wait events
    - CPU 29%
    - db file sequential read 6%
    - direct path read 2%, with others contributing a little
    In 30% of the "db file sequential read" samples, current_obj# was -1 and p1 showed the undo file id.
    Is there any concurrent DML against this table? If not, parallel DML would be an option, though it may not really be needed.
    I think there was in the previous run, and I have asked for it to be avoided in the next run.
    How large are the base table rows?
    AVG_ROW_LEN is 227
    How many indexes are affected by the update, if any?
    The last column of the primary key is the only column to be updated (I mean, used in the SET clause of the update).
    Do you expect the update will cause any row migration?
    Yes, I think so, because the only column being updated is the same column on which the table is partitioned.
    Now if there is a lot of concurrent DML on the table you probably want to use pl/sql so you can loop through the data issuing a commit every N rows so as to not lock other concurrent sessions out of the table for too long a period of time.  This may well depend on if you can write a driving cursor that can be restarted in the event of interruption and would skip over rows that have already been updated.  If not you might want to use a driving table to control the processing.
    Right now, to avoid the UNDO issue, I have suggested using a PL/SQL approach and have asked for the redo logs to be sized at least 10 times larger.
    My big question, after seeing the wait event profile for the session, is:
    Which was the main issue here, the redo log size or the reading from UNDO that hit the update statement? The buffer gets had shot up to 600 million, and there are only 220k blocks in the table.
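    For the case Mark D Powell raises, where there is no concurrent DML, a minimal sketch of the single parallel update (table name, column names, predicate and degree are all assumptions, since the real ones are not posted):
    -- A single parallel update, applicable only if no other session modifies
    -- the table. Because the discussion says the partition-key column is the
    -- one being updated, the table must also have row movement enabled.
    ALTER TABLE big_table ENABLE ROW MOVEMENT;
    ALTER SESSION ENABLE PARALLEL DML;

    UPDATE /*+ PARALLEL(t 8) */ big_table t
       SET t.part_key_col = TRUNC(SYSDATE)   -- placeholder for the real SET expression
     WHERE t.cleanup_flag = 'Y';             -- placeholder predicate for the 8 million rows

    COMMIT;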
