Deleting 5 million records (slowness issue)

Hi guys,
We are trying to delete 5 million records with the following query, and it is taking more than 2 hours:
delete from <table_name> where date < condition_DT;
FYI:
* The table is partitioned
* A primary key exists
Please assist us with this.

Nothing much you can do.
About the only alternatives are:
1) Create a new table that copies the records you want to keep, then drop the old table and rename the new one to the old name. If you are deleting most of the records this is a good approach.
2) Create a new table that copies the records you want to keep, then truncate the partitions of the old table and use partition exchange to put the data back.
3) Delete the data in smaller batches of 100K records or so each. You could do this by using a different date value in the WHERE clause: delete data < 2003, then delete data < 2004, and so on (see the sketch after this list).
4) If you want to delete all the data in a partition you can just truncate that partition. That is the approach to use if you partition by date and are trying to remove older data.
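A minimal sketch of option 3 as an anonymous PL/SQL block, assuming a hypothetical table BIG_TABLE with a DATE column TRANS_DT (adjust the names and the cutoff date to your schema):
BEGIN
  LOOP
    DELETE FROM big_table
     WHERE trans_dt < DATE '2004-01-01'
       AND ROWNUM <= 100000;       -- cap each pass at roughly 100K rows
    EXIT WHEN SQL%ROWCOUNT = 0;    -- stop once nothing matches any more
    COMMIT;                        -- keep each transaction (and its undo) small
  END LOOP;
  COMMIT;
END;
/
Note that this does the same total work as one big DELETE; it only avoids a single huge transaction and the undo and lock pressure that comes with it.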

Similar Messages

  • Delete 3 million records!

    I would like to ask if anyone has a good strategy (e.g. fallback plan, implementation) to delete 3 million records from 3 tables using PL/SQL.
    How long would it take to do that?
    Many thanks in advance!

    Sorry, I'm on a surrealistic tip today.
    What I'm getting at is this:
    Why PL/SQL? SQL is normally the most effective way of zapping records. However, deleting 80% of a 3.5 million row table is quite slow. It may be quicker to insert the rows you want to keep into a separate table, truncate the original table and then insert them back (see the sketch at the end of this reply). Of course, TRUNCATE is DDL and so can't be rolled back - that affects your fallback strategy (i.e. take a backup!).
    Why three tables? What is the relationship between these tables? A DELETE would work a lot faster if the tables were linked by foreign keys with CASCADE DELETE instead of using sub-queries.
    Why three million? The question you haven't answered: three million out of how many?
    Cheers, APC
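    A rough sketch of that keep-truncate-reload approach, assuming a hypothetical table BIG_TABLE where only rows with STATUS = 'ACTIVE' are kept (take a backup first, since TRUNCATE cannot be rolled back):
    -- park the rows to keep
    CREATE TABLE big_table_keep AS
      SELECT * FROM big_table WHERE status = 'ACTIVE';
    -- empty the original table (DDL: fast, but not reversible with ROLLBACK)
    TRUNCATE TABLE big_table;
    -- put the kept rows back with a direct-path insert
    INSERT /*+ APPEND */ INTO big_table SELECT * FROM big_table_keep;
    COMMIT;
    DROP TABLE big_table_keep;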

  • Delete over a million records

    I want to delete over 1 million records from a user table. This table also has 10 related tables. I tried a cursor to loop through the records but I couldn't finish it and I had to kill the process.
    I have copied all the user names to a temp table and I am planning to join it with each table and delete.
    Do you think this approach would be the right one to delete this many records?

    Sometimes it is appropriate to use a WHERE clause in export to extract the desired rows and tables, then recreate the tables with appropriate storage parameters and import. Other times CTAS is appropriate; other times a plain old delete (with suitably sized undo); and there are other options like ETL software.
    Details determine appropriateness, including if this is a one-time thing, how long until that many records come back, time frames, scope and so forth. Row-by-row processing is seldom the right way, though that often is used in over-generalized schemes, and may be right if there are complicated business rules determining deletion. At times I've used all of the above in single projects like splitting out subsidiaries from an enterprise db or creating test schemata.
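    If you do go the temp-table route described in the question, a sketch of one pass against a single child table might look like this (USERS_TO_PURGE and USER_ORDERS are invented names; an index on the join column of each child table matters far more than the exact syntax):
    DELETE FROM user_orders o
     WHERE EXISTS (SELECT 1
                     FROM users_to_purge p
                    WHERE p.username = o.username);
    COMMIT;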

  • Deleting 110 million records

    I have got a table with 120 million records, of which only 10 million are useful. Now I want to delete the remaining 110 million records with a WHERE condition. I spoke to my DBA and he said it will take around 2 weeks or more, but I need to get this done quickly because it has been affecting our daily rollup process for the generation of alerts for a high-priority application.
    I want to delete based on this condition:
    delete from tabA where colA=0;
    Any kind of help is highly appreciated.
    Oracle version: 11g

    >
    3.) insert /*+ append */ into taba select * from taba_temp;
    >
    That's the 'old' way that should be used ONLY if OP does not have the partitioning option licensed.
    >
    1.) create table taba_temp as select * from taba where cola != 0;
    >
    That 'temp' table should be created in the desired tablespace as a RANGE partitioned table with one partition: VALUES LESS THAN (MAXVALUE)
    Then step 3 can just do an 'ALTER TABLE ... EXCHANGE PARTITION' to swap the data in. That is a metadata-only operation and takes a fraction of a second.
    No need to query the data again.
    DROP TABLE EMP_COPY;
    CREATE TABLE EMP_COPY AS SELECT * FROM EMP;  -- this is a copy of EMP and acts as the MAIN table that we want to keep
    DROP TABLE EMP_TEMP;
    -- create a partitioned temp table with the same structure as the actual table
    -- we only want to keep emp records for deptno = 20 for this example
    CREATE TABLE EMP_TEMP
    PARTITION BY RANGE (empno)
    (PARTITION ALL_DATA VALUES LESS THAN (MAXVALUE))
    AS SELECT * FROM EMP_COPY WHERE DEPTNO = 20;
    -- truncate our 'real' table - very fast
    TRUNCATE TABLE EMP_COPY;
    -- swap in the 'deptno=20' data from the temp table - very fast
    ALTER TABLE EMP_TEMP EXCHANGE PARTITION ALL_DATA WITH TABLE EMP_COPY;

  • Delete 50 Million records from a table with 60 Million records

    Hi,
    I'm using Oracle 9.2.0.7 on Win2k3 32-bit.
    I need to delete 50M rows from a table that contains 60M records. This DB was just passed on to me. I tried to use a DELETE statement but it takes too long. From the articles and forums I have read, the best way to delete that many records from a table is to create a temp table, transfer the data needed to the temp table, drop the big table, then rename the temp table to the big table's name. But the key here is creating an exact replica of the big table. I have the create table, indexes and constraints scripts in the export file from my production DB, but I noticed that I don't have the create grant script. Is there a view I could use to get this? Can dbms_metadata get this?
    When I need to create an exact replica of my big table, I only need:
    create table, indexes, constraints, and grants script right? Did I miss anything?
    I just want to make sure that I haven't left anything out. Kindly help.
    Thanks and Best Regards

    Can dbms_metadata get this?
    Yes, dbms_metadata can get the grants.
    YAS@10GR2 > select dbms_metadata.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST') from dual;
    DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST')
      GRANT SELECT ON "YAS"."TEST" TO "SYS"
    When I need to create an exact replica of my big table, I only need:
    create table, indexes, constraints, and grants script right? Did I miss anything?
    There are triggers, foreign keys referencing this table (which will not permit you to drop the table if you do not take care of them), snapshot logs on the table, snapshots based on the table, etc...
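    If you want to pull the remaining DDL the same way, DBMS_METADATA can return it per dependent object type (the owner and table name follow the example above; each call raises an error if the table has no dependent object of that type):
    SELECT dbms_metadata.get_ddl('TABLE', 'TEST', 'YAS') FROM dual;                   -- the table itself
    SELECT dbms_metadata.get_dependent_ddl('INDEX', 'TEST', 'YAS') FROM dual;         -- its indexes
    SELECT dbms_metadata.get_dependent_ddl('TRIGGER', 'TEST', 'YAS') FROM dual;       -- its triggers
    SELECT dbms_metadata.get_dependent_ddl('OBJECT_GRANT', 'TEST', 'YAS') FROM dual;  -- its grants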

  • Deleting records from a table with 12 million records

    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise
    Regards,
    Narayan

    user7202581 wrote:
    We need to delete some records on this table ... I only need to delete the records based on this condition ... This table has 12 million records.
    DELETE from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
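    If no existing index covers those two columns and this kind of range delete recurs, a supporting index makes the optimizer's job much easier; a sketch, assuming you are free to add one (the index name is made up):
    CREATE INDEX mdl_clm_pmt_chk_ix
        ON CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak (CLM_PMT_CHCK_ACCT, CLM_PMT_CHCK_NUM);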

  • Deleting 3 million duplicate records from 4 million records

    Hi,
    I want to delete 3 million duplicate records from a 4 million record table in production. We don't have partitioning and I can't recreate the table using the CTAS option, so I have to delete the data in batches, because this is a shared environment and people are accessing the data.
    Is there any faster way to delete the data instead of a bulk delete using MAX(ROWID)?
    please help me out.
    Regards,
    Venkat.
    Edited by: ramanamadhav on Aug 16, 2011 8:41 AM

    After deleting the data (with the suggestion given by Justin), make sure that fresh statistics are also gathered.
    After such a heavy delete there is always a mismatch between NUM_ROWS and the actual count.
    The best way is to shrink the space and then gather table statistics; otherwise the optimizer will be working from incorrect statistics.
    For example:
    16:41:55 SQL> create table test_tab_sm
    as
    select level col1, rpad('*888',200,'###') col2
    from dual connect by level <= 100000;
    Table created.
    Elapsed: 00:00:00.49
    16:42:08 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.51
    16:42:17 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                        100000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:42:28 SQL> delete from TEST_TAB_SM where mod(col1,2) =1;
    50000 rows deleted.
    Elapsed: 00:00:01.09
    16:42:39 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                        100000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:42:47 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.26
    16:42:55 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:43:27 SQL> alter table TEST_TAB_SM move;
    Table altered.
    Elapsed: 00:00:00.46
    16:43:59 SQL>  select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.03
    16:44:06 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.24
    16:44:17 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       1471         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:44:24 SQL>
    We can see how the number of blocks changes: it dropped from 2942 to 1471, which is half; in your case it will be about a quarter of what it was using.
    There are other options for reclaiming space besides MOVE.
    alter table <table_name> shrink space;
    Try it out and you can see the difference yourself.
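    For completeness, SHRINK SPACE needs row movement enabled on the table (and a tablespace using automatic segment space management), and statistics should be gathered again afterwards; a sketch using the table from the example above:
    alter table TEST_TAB_SM enable row movement;
    alter table TEST_TAB_SM shrink space;   -- compacts the rows and lowers the high-water mark
    exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    Unlike MOVE, a shrink keeps the existing indexes usable, so there is nothing to rebuild afterwards.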

  • I want to delete approx 100,000 million records and free the space

    Hi,
    I want to delete approx. 100,000 million records and free the space.
    How do I do that?
    Can somebody suggest an optimized way of archiving the data?

    user8731258 wrote:
    I want to delete approx. 100,000 million records and free the space ... Can somebody suggest an optimized way of archiving the data?
    To archive, back up the database.
    To delete and free up the space, truncate the table/partitions and then shrink the datafile(s) associated with the tablespace.
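    A hedged sketch of that sequence (the table, partition and datafile names are placeholders; look up the real file name and its used space in DBA_DATA_FILES / DBA_FREE_SPACE first, since resizing below the file's high-water mark will fail):
    ALTER TABLE sales_hist TRUNCATE PARTITION p_2008 DROP STORAGE;   -- or TRUNCATE TABLE sales_hist DROP STORAGE;
    ALTER DATABASE DATAFILE '/u01/oradata/PROD/sales_hist01.dbf' RESIZE 10G;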

  • What happens when we delete a trillion records and issue a commit?

    Q1. What happens when we delete a trillion records and issue a commit? Also, is there any way to calculate how much time the commit will take to complete?
    Q2. How do I interpret an Oracle execution plan?
         Cost, cardinality, rows, etc.

    dba wrote:
    Q1. What happens when we delete a trillion records and issue a commit? Also, is there any way to calculate how much time the commit will take?
    Since you're modifying the blocks, undo will be generated for the modified data and the change vectors will be recorded in the redo log buffer (and surely your records will be deleted ;) ).
    The timing of a COMMIT doesn't depend on the size of the transaction.
    dba wrote:
    Q2. How do I interpret an Oracle execution plan? Cost, cardinality, rows, etc.
    The Oracle documentation should help you; just google for it. You should also visit asktom.oracle.com for this topic. I'm going through the book 'Troubleshooting Oracle Performance' by Christian Antognini and I feel it contains very good explanations of such topics.
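    For Q2, the quickest way to see a plan in readable form is EXPLAIN PLAN plus DBMS_XPLAN (the query here is just an example):
    EXPLAIN PLAN FOR
      SELECT * FROM emp WHERE deptno = 20;
    SELECT * FROM TABLE(dbms_xplan.display);
    -- COST is the optimizer's estimate of the work for each step, and ROWS (cardinality) its
    -- estimated row count; read the plan from the most deeply indented operations outwards.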
    Regards,
    S.K.

  • Strange issue with a huge SSRS 2008 report (supposed to return 3 million records)

    Hi,
    NOTE: The strange part (as mentioned in title) will come in the end.
    I am running an SSRS 2008 report which fetches 3 million records from a remote server. After around 1 hour the report processing stops and I see an error icon at the bottom left of the browser window. When I click on it to see the error details it shows a PageRequestManagerSQLErrorException with an unknown error message with code 12029 (sometimes 12002).
    When I see the reportserver logs there is an error message logged in it which says "Microsoft.ReportingServices.Library.ReportServerDatabaseUnavailableException: The report server cannot open a connection to the report server database. A connection to the
    database is required for all requests and processing. ---> Microsoft.ReportingServices.Library.ReportServerDatabaseUnavailableException: The report server cannot open a connection to the report server database. A connection to the database is required for
    all requests and processing. ---> System.InvalidOperationException: Timeout expired.  The timeout period elapsed prior to obtaining a connection from the pool.  This may have occurred because all pooled connections were in use and max pool size
    was reached."
    The <DatabaseQueryTimeOut> value in the report server configuration file is already having a value set to 7200 seconds(2 hours).
    NOW, the strange part: when I open the ExecutionLog2 table in the ReportServer database, there is an entry for the same report with the status "Success"!
    My head is spinning over this issue, somebody please rescue.
    Regards.

    Not sure if this will help but you might give it a try:
    1. Open the Registry Editor.
    2. Navigate to the registry path below.
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\
    3. Create a DWORD value named MaxUserPort with value data 10000 (decimal).
    4. Restart the server.
    5. Check and confirm that the registry key ‘MaxUserPort’ is added.
    Do the same process as above for TCPTimedWaitDelay and reduce the value to 60.  The above steps are to increase the maxuserports and reduce the time_wait in order to free up resources on the server.

  • Performance issues in million records table

    I have a scenario wherein I have some 20 tables, each with a million or more records. [Historical]
    On average I add 1,500 - 2,500 records a day, i.e. I would add about a million records every year on average.
    Am looking for archival solutions for these master tables.
    Operations on Archival Tables, would be limited to read.
    Expected benefits
    User base would be around 2500 users on the whole - but expect 300 - 500 parallel users at the max.
    Very limited usage on Historical data - compared to operations on current data
    Performance on operations over current data is important compared over that on historical data
    Environment - Oracle 9i - Should be migrating to Oracle 10g sooner.
    Some solutions I could think of:
    [1] Put every archived record into an archival table and fetch it from there,
    i.e. clearly distinguish searches as current or archival prior to searching.
    The impact, I feel, is that the archival tables again keep growing by approx. a million rows a year.
    [2] Put records into separate archival tables, one per year.
    For instance, every year I replicate the set of tables and that year's data goes into those tables.
    But how do I do a fetch?
    Note: I do have a unique way of identifying each record in my master table - the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
    The major concern: I currently get very good response times thanks to indexing and other common techniques, but I do not want this to degrade in a year or more; I expect to improve on the current response times and to sustain them over time.
    Also, I don't want to change every query in my app unless there is no way out.

    Hi,
    Read the following documentation link about Partitioning in Oracle.
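    As a very rough sketch of what partitioning could look like here (all names are invented; it requires the partitioning option, and since your key already starts with YYYYMM you can range-partition on it so old years sit in their own partitions and current-data queries prune them automatically):
    CREATE TABLE master_hist (
      record_id   NUMBER(16) PRIMARY KEY,    -- e.g. 2008070000562330
      created_dt  DATE,
      payload     VARCHAR2(100)
    )
    PARTITION BY RANGE (record_id) (
      PARTITION p2007 VALUES LESS THAN (2008000000000000),
      PARTITION p2008 VALUES LESS THAN (2009000000000000),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );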
    Best Regards,
    Alex

  • Issue of handling 10 million records

    Dears,
    The program is executed online, and needs to handle 10 million records.
    If I select the data from the database directly, a TSV_TNEW_PAGE_ALLOC_FAILED short dump occurs.
    ex: select xxx
            from xxx
            into table xxx.
    If I select 1 million records from the database at a time and then process them, a TIME_OUT short dump occurs.
    ex: select xxx
           from xxx
           into table xxx
           package size 10000000.
            perform xxxx     " handle 1 million records,
         endselect.
    Could you please suggest a solution?
    Thanks in advance.
    Best regards
    Lily

    Please start with basic ABAP training before you want to SELECT 1 Mio records.
    If it is a good training you will not SELECT 1 Mio records afterwards.
    I would recommend starting with the chapter about WHERE conditions.
    And please forget the parallel cursor, it comes at the end of the advanced training under the title 'special and obsolete commands'.
    Siegfried

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the SELECT query inside the condition shown below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, the FOR ALL ENTRIES select was fetching all the records from the database. With this condition in place it will not select any records if the table is empty.
    Regards,
    Pravin

  • Maintaining a huge volume of data (around 60 million records)

    I've a requirement to load the data from an ODS to a cube with a full load. This ODS will receive 50 million records over the next 6 months, which we have to maintain in BW.
    Can you please advise on the following things?
         Can we accommodate 50 million records in the ODS?
    If so, can we run the load of 50 million records from the ODS to the cube? Each record also has to be looked up in another ODS to get the value for another InfoObject. Hence, is the load going to be successful for the 50 million records? I'm not sure. Or do we get a time-out error?

    Harsha,
    The data load should go through ... some things to do / check...
    Delete the indices on cube before loading and then rebuild the same later after the load completes.
    regarding the lookup - if you are looking up specific values in another DSO - build a suitable secondary index on the DSO for the same ( preferably unique index )
    A DSO or cube can definitely hold 50 million records - we have had cases where we had 50 million records for 1 month, with the DSO holding data for 6 to 10 months, and the same with the cube. Only the reporting on the cube might be slow at a very detailed level.
    Also please state your version - 3.x or 7.0...
    Also, if you are on Oracle, plan for providing / backing up archive logs, since loading generates a lot of archive logs...
    Edited by: Arun Varadarajan on Apr 21, 2009 2:30 AM

  • APP-PAY-07201 Cannot perform a delete when child record exists in future

    I am trying to put an end date on a payment method of an employee in HR/Payroll 11.5.10.2,
    but receiving the following error message:
    APP-PAY-07201 Cannot perform a delete when child record exists in future
    Can you advise what steps I should follow to resolve this issue?
    Regards /Ali

    This note relates to terminating an employee, whereas our employee is still on the payroll and we just want to change his payment method. But in the presence of the existing payment method we cannot attach another one, because we are receiving an error:
    APP-PAY-07041: Priority must be unique within an organizational payment method
