Deleting 168 million rows.

Our main application table holds the last 5 years of data, which is causing performance problems: the recommended retention period for this data is only 3 months.
Right now we plan to make a copy of the table using the NOLOGGING option, but the problem is deleting the 5 years of data from this table, as it has around 168,789,200 rows.
I don't want to use the CTAS option on the main application table and then re-create the indexes and recompile all the PL/SQL procedures, as I feel this is quite risky.
When we asked our DBA team to take up the activity, they pushed it onto our team (application support), saying it is not their duty!
Any kind of help is highly appreciated.

First, are you sure that you cannot do an ordinary delete, and then shrink the table? 168m rows in one transaction will generate some undo and redo, but not necessarily an inordinate amount.
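For illustration, a minimal sketch of that route (app_data and created_date are made-up names; SHRINK SPACE needs the segment to be in an ASSM tablespace):
-- Delete everything older than the 3-month retention window.
DELETE FROM app_data
 WHERE created_date < ADD_MONTHS(SYSDATE, -3);
COMMIT;
-- Reclaim the space the delete left behind.
ALTER TABLE app_data ENABLE ROW MOVEMENT;
ALTER TABLE app_data SHRINK SPACE CASCADE;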
Secondly, if undo is the problem, you could use dbms_parallel_execute. If you set the chunk size to (for example)  one thousandth of the table and use parallel_level=0 then you will delete and commit on average 168000 rows in each of a thousand consecutive transactions. The redo would still be generated though.
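A hedged sketch of that dbms_parallel_execute route (again using the assumed names app_data and created_date; with by_row => TRUE the chunk_size is a row count):
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'purge_old_rows');
  -- Carve the table into rowid chunks of roughly 168,000 rows each.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'purge_old_rows',
    table_owner => USER,
    table_name  => 'APP_DATA',
    by_row      => TRUE,
    chunk_size  => 168000);
  -- parallel_level => 0 processes the chunks one after another in this
  -- session, committing after each chunk, so undo stays small.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'purge_old_rows',
    sql_stmt       => 'DELETE FROM app_data
                        WHERE rowid BETWEEN :start_id AND :end_id
                          AND created_date < ADD_MONTHS(SYSDATE, -3)',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 0);
  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'purge_old_rows');
END;
/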
Either way, this would be an online operation, no downtime. If you have Enterprise Edition licences you could use the Resource Manager to slow the job down, to ensure that no-one will notice, and spread the redo over a long time.
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Similar Messages

  • Delete from 95 million rows table ...

Hi folks, I need to delete from a regular table with 95 million rows. What are my best options? I have tried CTAS in parallel, but it failed after 1+ hrs ... it was due to a bad query, but I am checking whether there is any other way to achieve this.
    Thanks in advance.

    user8604530 wrote:
    Hi folks, I need to delete from a regular table with 95 million rows. What are my best options? I have tried CTAS in parallel, but it failed after 1+ hrs ... it was due to a bad query, but I am checking whether there is any other way to achieve this. Thanks in advance.
    How many rows are in the table BEFORE the DELETE?
    How many rows will be in the table AFTER the DELETE?
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
    Handle:     user8604530
    Status Level:     Newbie
    Registered:     Mar 10, 2010
    Total Posts:     64
    Total Questions:     26 (22 unresolved)
    I extend to you my condolences since you rarely get your questions answered.

  • DELETE QUERY FOR A TABLE WITH MILLION ROWS

    Hello,
    I have a requirement where I have to compare 2 tables - both having around a million rows - and delete data based on a single column.
    DELETE FROM TABLE_A WHERE COLUMN_A NOT IN
    (SELECT COLUMN_A FROM TABLE_B)
    COLUMN_A has an index defined on it in both tables. Still, it is taking a long time. What is the best way to achieve this? Any workaround?
    thanks

    How many rows are you deleting from this table? If the percentage is large then the better option is
    1) Create a new table holding the rows you want to keep, i.e. WHERE COLUMN_A IN
    (SELECT COLUMN_A FROM TABLE_B)
    2) TRUNCATE table_A
    3) Insert into table_A (select * from the new table)
    4) If you have any constraints then maybe they can be disabled.
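    A minimal sketch of those steps (names taken from the thread; remember that NOT IN treats NULLs in COLUMN_A differently, so check for them before switching approaches):
    -- 1) Keep only the rows whose COLUMN_A exists in TABLE_B.
    CREATE TABLE table_a_keep AS
      SELECT * FROM table_a
       WHERE column_a IN (SELECT column_a FROM table_b);
    -- 2) Empty the original table (DDL: fast, but it cannot be rolled back).
    TRUNCATE TABLE table_a;
    -- 3) Put the kept rows back with a direct-path insert.
    INSERT /*+ APPEND */ INTO table_a
      SELECT * FROM table_a_keep;
    COMMIT;
    DROP TABLE table_a_keep;
    If only a small percentage is being deleted, rewriting the original DELETE with NOT EXISTS instead of NOT IN is often enough, since the optimizer can then use a hash anti-join.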
    thanks

  • Migration of 15 million rows from a remote table using merge

    I need to migrate (using MERGE) almost 15 million rows from a remote database table that has a unique index on (DAY, Key), with the following data setup.
    DAY1 -- Key1 -- NKey11
    DAY1 -- Key2 -- NKey12
    DAY2 -- Key1 -- NKey21
    DAY2 -- Key2 -- NKey22
    DAY3 -- Key1 -- NKey31
    DAY3 -- Key2 -- NKey32
    In my database, I have to merge all these 15 million rows into a table having unique index (Key); no DAY in destination table.
    First, it would be executed for DAY1, and the merge command will insert the following two rows. For DAY2, it would update those two rows with the NKey2 values (one for each Key), and so on.
    Key1 -- NKey11
    Key2 -- NKey12
    I am looking for the best possible approach. Please note that I cannot make any changes at the remote database.
    Right now, I am using the following one which is taking huge time for DAY2 and so on (mainly update).
    MERGE INTO destination D
      USING (SELECT /*+ DRIVING_SITE(A) */ DAY, Key, NKey
                   FROM source@dblink A WHERE DAY = v_day) S
      ON (D.Key = S.Key)
    WHEN MATCHED THEN
       UPDATE SET D.NKey = S.NKey
    WHEN NOT MATCHED THEN
       INSERT (D.Key, D.NKey) VALUES (S.Key, S.NKey)
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;

    MERGE INTO destination D
      USING (SELECT /*+ DRIVING_SITE(A) */ DAY, Key, NKey
                   FROM source@dblink A WHERE DAY = v_day) S
      ON (D.Key = S.Key)
    WHEN MATCHED THEN
       UPDATE SET D.NKey = S.NKey
    WHEN NOT MATCHED THEN
       INSERT (D.Key, D.NKey) VALUES (S.Key, S.NKey)
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;
    The first remark I have to emphasize here is that the hint /*+ DRIVING_SITE(A) */ is silently ignored, because in the case of insert/update/delete/merge the driving site is always the site where the DML is done.
    http://jonathanlewis.wordpress.com/2008/12/05/distributed-dml/#more-809
    Right now, I am using the following one which is taking huge time for DAY2 and so on (mainly update).
    The second remark is that you've realised that your MERGE is taking time, but you didn't trace it to see where the time is being spent. For that you can either use the 10046 trace event or, as a first step, get the execution plan of your MERGE statement.
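    For that tracing step, one possible sequence (a sketch; the gather_plan_statistics hint is only there so DBMS_XPLAN can report actual row counts):
    -- 10046 trace at level 8 (includes wait events):
    ALTER SESSION SET tracefile_identifier = 'merge_day2';
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- ... run the MERGE here, optionally with /*+ gather_plan_statistics */ ...
    ALTER SESSION SET events '10046 trace name context off';
    -- Or inspect the actual plan of the last statement run in this session:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));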
    LOG ERRORS INTO err$_destination REJECT LIMIT UNLIMITED;
    The third remark is related to the DML error logging: be aware that unique keys will prevent the DML error logging from working correctly.
    http://hourim.wordpress.com/?s=DML+error
    And finally I advise you to look at the following blog article I wrote about enhancing an insert/select over db-link
    http://hourim.wordpress.com/?s=insert+select
    Mohamed Houri
    www.hourim.wordpress.com

  • Delete 3 million records!

    I would like to ask if anyone has a good strategy (e.g. fallback plan, implementation) for deleting 3 million records from 3 tables using PL/SQL.
    How long would it take to do that???
    Many thanks in advance!

    Sorry, I'm on a surrealistic tip today.
    What I'm getting at is this:
    why PL/SQL?
    SQL is normally the most effective way of zapping records. However, deleting 80% of a 3.5 million row table is quite slow. It may be quicker to insert the rows you want to keep into a separate table, truncate the original table and then insert them back. Of course, TRUNCATE is DDL and so can't be rolled back - that affects your fallback strategy (i.e. take a backup!)
    why three tables?
    What is the relationship between these tables? A DELETE in SQL would work a lot faster if the tables were linked by foreign keys with ON DELETE CASCADE instead of using sub-queries (see the sketch below).
    why three million?
    The question you haven't answered: three million out of how many?
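    A sketch of the cascade idea (parent_tab, child_tab and the retention filter are all made-up names):
    ALTER TABLE child_tab
      ADD CONSTRAINT fk_child_parent
      FOREIGN KEY (parent_id) REFERENCES parent_tab (id)
      ON DELETE CASCADE;
    -- Deleting the parents now removes the matching child rows in one statement:
    DELETE FROM parent_tab
     WHERE created_date < ADD_MONTHS(SYSDATE, -36);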
    Cheers, APC

  • How to Load 100 Million Rows in a Partitioned Table

    Dear all,
    I am working on a VLDB application.
    I have a table with 5 columns,
    for example A, B, C, D, DATE_TIME.
    I created a range (daily) partitioned table on column DATE_TIME,
    and also created a number of indexes, for example:
    an index on A
    a composite index on DATE_TIME, B, C
    Requirement
    I need to load approximately 100 million records into this table every day (it will be loaded via SQL*Loader or from a temp table with INSERT INTO orig SELECT * FROM temp).
    Question
    The table is indexed, so I am not able to use the SQL*Loader option DIRECT=TRUE.
    So let me know the best available way to load the data into this table.
    Note: please remember I can't drop and re-create the indexes daily due to the huge data quantity.

    Actually, this is a simpler issue than you seem to think it is.
    Q. What is the most expensive and slow operation on a database server?
    A. I/O. The more I/O, the more latency there is, the longer the wait times are, the bigger the response times are, etc.
    So how do you deal with VLT's? By minimizing I/O. For example, using direct loads/inserts (see the SQL APPEND hint) means less I/O, as we are only using empty data blocks. Doing one pass through the data (e.g. applying transformations as part of the INSERT and not afterwards via UPDATEs) means less I/O. Applying proper filter criteria. Etc.
    Okay, what do you do when you cannot minimize I/O any more? In that case, you need to look at processing that I/O volume in parallel. Instead of serially reading and writing 100 million rows, you (for example) use 10 processes that each read and write 10 million rows. I/O bandwidth is there to be used. It is extremely unlikely that a single process can fully utilise the available I/O bandwidth. So use more processes, each processing a chunk of data, to use more of that available I/O bandwidth.
    Lastly, think DDL before DML when dealing with VLT's. For example, a CTAS to create a new data set, followed by a partition exchange to make that new data set part of the destination table, is a lot faster than deleting that partition's data directly and then running an INSERT to refresh it.
    That in a nutshell is about it - think I/O and think of ways to use it as effectively as possible. With VLT's and VLDB's one cannot afford to waste I/O.
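    As a sketch of that DDL-before-DML pattern applied to the daily-load question above (all names, including the partition p20240101, are hypothetical):
    -- Build the day's data set once, with a direct-path parallel CTAS:
    CREATE TABLE day_load NOLOGGING AS
      SELECT /*+ PARALLEL(8) */ a, b, c, d, date_time
        FROM temp_tab;
    -- After creating matching local indexes on day_load, swap it in:
    ALTER TABLE big_tab
      EXCHANGE PARTITION p20240101 WITH TABLE day_load
      INCLUDING INDEXES WITHOUT VALIDATION;
    The exchange is a data-dictionary operation, so it completes near-instantly regardless of row counts.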

  • Inserting 320 million rows... the speed changes whilst the query is running

    Hi guys, I've noticed some strange behaviour in my data warehouse. When I insert data into a table I take note of the speed: at the beginning it was 3 million rows per minute (312 million rows are expected in total), but now, after five hours, it is 83,000 rows per minute and the table already has 234 million rows. I'm wondering whether this behaviour is normal and how I can improve the performance (if I can, whilst the insert is running).
    Many Thanks

    change Database recovery mode to bulklogged (preferably) / simple.
    No - that will not solve the problem, because INSERT INTO isn't bulk logged automatically. To force a bulk-logged operation the target table needs to be locked exclusively!
    Greg Robidoux has a great matrix for that!
    http://www.mssqltips.com/sqlservertip/1185/minimally-logging-bulk-load-inserts-into-sql-server/
    @DIEGOCTIN,
    I assume nobody can really give you an answer because there could be so many reasons for it! As Josh has written, file growth could be a problem if Instant File Initialization isn't set up. But 320 million records in one transaction could cause heavy growth of the log file too, and the log file cannot benefit from Instant File Initialization.
    Another good tip came from Olaf, too!
    I would suggest inserting the data with a TABLOCK hint. In this case the transaction is minimally logged and the operation copies pages rather than rows into the target table. I've written a wiki article about that topic here:
    http://social.technet.microsoft.com/wiki/contents/articles/20651.sql-server-delete-a-huge-amount-of-data-from-a-table.aspx
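    A minimal T-SQL sketch of that suggestion (dbo.source_tab and dbo.target_tab are made-up names; minimal logging also requires SIMPLE or BULK_LOGGED recovery and a suitable target, such as a heap):
    -- WITH (TABLOCK) takes an exclusive table lock, enabling minimal logging:
    INSERT INTO dbo.target_tab WITH (TABLOCK)
    SELECT col1, col2, col3
      FROM dbo.source_tab;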
    Wish you all a merry christmas and a happy new year!
    MCM - SQL Server 2008
    MCSE - SQL Server 2012
    db Berater GmbH
    SQL Server Blog (german only)

  • Left joins on multi-million rows

    I have a simple query doing left joins on several tables, upwards of 7 tables. Each table has several hundred million rows.
    tblA is 1:M to tblB and tblB is 1:M to tblC, and so on.
    How do I tune a query like that?
    A sample query is:
    select distinct
    a.col,b.col,c.col
    from tblA a left join tblB b
    on a.id=b.id
    and a.col is not null
    left join tblC c
    on b.id=c.id
    and c.col > criteria
    thanks.

    Hi,
    a simple query is like:
    SELECT my_DEP.description,
           my_DEP.addr_id,
           hundredRowsTbl.address,
           5MillTbl.checkin_TIME,
           5MillTbl.checkout_TIME,
           hundredRowsTbl.ID2,
           5MillTbl.ID,
           5MillTbl.col2,
           my_DEP.col3,
           5MillTbl.col13,
           hundreds.desc,
           50mmTbl.col6,
           50mmTbl.col5,
           5MillTbl.col33
    FROM my.5MillTbl 5MillTbl
    LEFT OUTER JOIN my.50mmTbl 50mmTbl
      ON 5MillTbl.ID = 50mmTbl.ID
    LEFT OUTER JOIN my.hundreds hundreds
      ON 5MillTbl.banding = hundreds.banding
    INNER JOIN my.my_DEP my_DEP
      ON 5MillTbl.organization_ID = my_DEP.organization_ID
    INNER JOIN my.my_40millTbl
      ON 5MillTbl.seqID = my_40millTbl.seqID
    LEFT OUTER JOIN my.hundredRowsTbl hundredRowsTbl
      ON my_DEP.addr_id = hundredRowsTbl.ID2
    LEFT OUTER JOIN my.30millTbl 30millTbl
      ON my_DEP.organization_ID = 30millTbl.dept_id
    WHERE 1 = 1
      AND 5MillTbl.ID IS NOT NULL
      AND 5MillTbl.checkout_TIME >= TO_DATE ('01-01-2009 00:00:00', 'DD-MM-YYYY HH24:MI:SS')
      AND 5MillTbl.checkout_TIME <  TO_DATE ('31-12-2010 00:00:00', 'DD-MM-YYYY HH24:MI:SS')
      AND (5MillTbl.col2 IS NULL
           OR NOT (5MillTbl.col2 = 5 OR 5MillTbl.col2 = 6))
      AND 30millTbl.TYPE = '30'
      AND my_DEP.addr_id = 61

  • Inserting 10 million rows into a table hangs

    Hi, through TOAD I am using a simple FOR loop to insert 10 million rows into a table, along the lines of:
    for i in 1 .. 10000000 loop
      insert ...;
    end loop;
    It hangs for a long time.
    Is there a better way to insert the rows into the table? I have to test for performance, and I also have to insert 50 million rows into its child table.
    When the code moves to production it will really have this many rows (maybe more), which is why I have to test with these volumes.
    Please suggest a better way to do this.
    Regards
    raj

    Must be a 'hardware thing'.
    My ancient desktop (Pentium IV, 1.8 GHz, 512 MB), running XE, needs:
    MHO%xe> desc t
    Name                                      Null?    Type
    N                                                  NUMBER
    A                                                  VARCHAR2(10)
    B                                                  VARCHAR2(10)
    MHO%xe> insert /*+ APPEND */ into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:04:09.71
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:31.50
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:01.04
    MHO%xe> insert into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:02:44.12
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:09.46
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:00.15
    MHO%xe> insert /*+ APPEND */ into t
      2   with my_data as (
      3   select level n, 'abc' a, 'def' b from dual
      4   connect by level <= 10000000
      5   )
      6   select * from my_data;
    10000000 rows created.
    Elapsed: 00:01:03.89
    MHO%xe> drop table t;
    Table dropped.
    Elapsed: 00:00:27.17
    MHO%xe> create table t (n number, a varchar2(10), b varchar2(10));
    Table created.
    Elapsed: 00:00:01.15
    MHO%xe> insert into t
      2  with my_data as (
      3  select level n, 'abc' a, 'def' b from dual
      4  connect by level <= 10000000
      5  )
      6  select * from my_data;
    10000000 rows created.
    Elapsed: 00:01:56.10
    Yea, 'cached' it a bit (of course ;) )
    But the APPEND hint seems to shave about 50 seconds off anyway (using NO indexes at all) on my 'configuration'.

  • How to delete the selected rows with a condition in ALV

    Dear all,
    I am using the following code in an object-oriented ALV.
    WHEN 'DEL'.
    PERFORM delete_rows.
    FORM delete_rows.
    DATA : lv_rows LIKE lvc_s_row.
    data : wa_ROWs like LVC_S_ROW.
    FREE : gt_rows.
    CALL METHOD alv_grid->get_selected_rows
    IMPORTING
    et_index_rows = gt_rows.
    IF gt_rows[] IS INITIAL.
    MESSAGE s000 WITH text-046.
    EXIT.
    ENDIF.
    loop at gt_rows into wa_ROWs .
    if sy-tabix ne 1.
    wa_ROWs-INDEX = wa_ROWs-INDEX - ( sy-tabix - 1 ).
    endif.
    delete gt_sim INDEX wa_ROWs-INDEX .
    endloop.
    The rows are to be deleted from the internal table gt_sim, not only from the ALV display.
    None of the rows should be deleted if one of the fields in gt_sim equals 'R'.
    How do I check this condition?

    Dear Jayanthi,
    OK, if I code it the way you mentioned, it will exit the loop the first time the field value is 'R'.
    But if any of the selected rows contains the field value 'R', it should not delete any of the selected rows.
    With your suggestion it merely stops deleting after the first row whose field value is 'R'.
    I am deleting by table index, so suppose I select a row without the field value 'R' whose tabix is 1,
    and the next row, with tabix 2, has the field value 'R'.
    It then deletes the first row and exits; it should not have deleted the first row either.

  • How to delete an empty table row in the form using FormCalc

    Hi All,
    I am displaying a table in PDF which has a few empty rows in between.  I need to delete those specific rows when the form is generated, so that they do not appear on the form.
    Has anyone worked on this before? If so, can you please share the code or advise.
    Regards
    Aditi

    Hello,
    first: there must be some backend you get the data from, right? So in this backend there is some data-extraction coding, right? So the result of this coding is bad, i.e. some unwanted extra rows are returned within the proper set of rows, right? So why don't you change this backend coding not to return the unwanted rows?
    If that is not possible (I don't believe that!), then just to describe other possibilities: you can place a script on a row subform event like initialize, test whether something is missing there, and if so set the presence of that row to hidden.
    JS: if (this.fieldA.rawValue == "") { this.presence = "hidden"; }
    Regards, Otto

  • How to delete a table row in the context?

    Hi,
    I've got a table in my context that I access with <TABLE-NAME>-<TABLE-COLUMN>[index].
    For example <TABLE-NAME>-<TABLE-COLUMN>.dim delivers the amount of entries in this table.
    Now I want to delete a specific row in this table without any ABAP-code. Is there a possibility to do this?
    I tried to set the .dim = <old dimvalue - 1>, but the row seems to still exist, it's just empty.
    There must be a method like 'delete', or am I wrong...?
    kr, achim

    Alexander,
    could you provide an example, please?
    For example, if I have code like this:
    `repeat j from 1 to MYTABLE-MYCOLUMN1.dim`
       MYTABLE-MYCOLUMN1[j] = "value1";
       MYTABLE-MYCOLUMN2[j] = "value2";
    `end`
    Now assume I want to delete the last row of the table - what would that look like?
    `MYTABLE.deleteValue(MYCOLUMN1, j)` ???
    kind regards, achim

  • How to delete a particular row in ALV table

    Hi,
    How do I delete a particular row in an ALV table based on some condition (by checking the value of one of the columns in the row)?
    Thanks
    Bala Duvvuri

    Hello Bala,
    Can you please be a bit clearer about how you intend to delete the rows from your ALV? By the way, deleting rows from an ALV is no different from deleting rows from a normal table. Suppose you have enabled the selection property in the ALV, then select multiple rows and click on a button to delete them; the coding would be as below. (Also keep in mind that you would have to set the selection property of the context node that you are binding to your ALV to 0..n.)
    data : lr_table_settings  TYPE REF TO if_salv_wd_table_settings,
           lr_config          TYPE REF TO cl_salv_wd_config_table.
    * lr_config must first be obtained from the ALV component usage,
    * e.g. lr_config = wd_this->wd_cpifc_alv( )->get_model( ).
      lr_table_settings  ?= lr_config.
    * Set the ALV selection to multiple selection with no lead selection
      lr_table_settings->set_selection_mode( value = cl_wd_table=>e_selection_mode-multi_no_lead ).
    Next delete the selected rows in the action triggered by the button:
    METHOD onactiondelete_rows .
      DATA:  wd_node TYPE REF TO if_wd_context_node,
             lt_node1 TYPE ig_componentcontroller=>elements_node,
             wa_temp  TYPE REF TO if_wd_context_element,
             lt_temp  TYPE wdr_context_element_set,
             row_number TYPE i VALUE 0.
      wd_node = wd_context->get_child_node( name = 'NODE' ).
      CALL METHOD wd_node->get_selected_elements
        RECEIVING
          set = lt_temp.
      LOOP AT lt_temp INTO wa_temp.
        wd_node->remove_element( EXPORTING element = wa_temp ).
      ENDLOOP.
      CALL METHOD wd_node->get_static_attributes_table
        EXPORTING
          from  = 1
          to    = 2147483647
        IMPORTING
          table = lt_node1.
      wd_node->bind_table( new_items = lt_node1 ).
    ENDMETHOD.
    If this isn't your requirement, please do let me know so that I can try to come up with another analysis.
    Regards,
    Uday

  • How to delete the selected rows in a JTable on pressing a button?

    How to delete the selected rows in a JTable on pressing a button?

    You are right, I did the same. The following code might be useful to others in future:
    // Fetch the model once, then walk the selected row indices backwards so
    // that removing a row does not shift the indices still to be removed.
    DefaultTableModel model = (DefaultTableModel) jTable1.getModel();
    int[] array = jTable1.getSelectedRows();
    for (int i = array.length - 1; i >= 0; i--) {
        model.removeRow(array[i]);
    }

  • Can we delete a single row in SID table?

    I am having a problem with a conversion exit in a SID table.
    These are the error messages:
    Value in SID table is TPV; correct value is TPV; SID in SID table is 875
    Message no. RSRV200
    Diagnosis
    The following data record either has an incorrect internal format or the characteristic value that is in the correct format appears as a corrected value of another incorrect value:
    ·     Characteristic value: TPV
    ·     SID: 875
    ·     Correct characteristic value: (see below) TPV
    ·     SID after correction: 875
    Value in SID table is TPV 2008; correct value is TPV; SID in SID table is 2887
    Message no. RSRV200
    Diagnosis
    The following data record either has an incorrect internal format or the characteristic value that is in the correct format appears as a corrected value of another incorrect value:
    ·     Characteristic value: TPV 2008
    ·     SID: 2887
    ·     Correct characteristic value: (see below) TPV
    ·     SID after correction: 875
    Now the row with SID 875 is causing the problem. Is it possible to delete only this row in the SID table?
    Thanks for your help
    Subra

    Hi,
    Procedure:
    RSA1 >> InfoObjects >> right-click on the InfoObject >> Delete Master Data >> you will see the option of deleting SIDs.
    Otherwise, you can delete a value from the SID table using transaction SE14 to delete the entries.
    Check this: deleting contents of SID table.
    But I would still suggest you not delete the SID; it may lead to inconsistency of data.
    Try to repair the SID using the program RSDMD_CHECKPRG_ALL or RSRV.
    Regards,
    Debjani
