Delete over 1 million records

I want to delete over 1 million records from a user table. The table also has 10 related tables. I tried using a cursor to loop through the records, but it couldn't finish and I had to kill the process.
I have copied all the user names to a temp table and I am planning to join it with each table and delete.
Do you think this approach is the right one for deleting this many records?

Sometimes it is appropriate to use a WHERE clause in export to extract the desired rows and tables, then recreate the tables with appropriate storage parameters and import. Other times CTAS (CREATE TABLE ... AS SELECT) is appropriate. Other times a plain old DELETE plus specially sized undo. And there are other options, like ETL software.
The details determine what is appropriate, including whether this is a one-time thing, how long until that many records come back, time frames, scope, and so forth. Row-by-row processing is seldom the right way, though it often appears in over-generalized schemes, and it may be right if complicated business rules determine what gets deleted. At times I've used all of the above in a single project, for example when splitting out subsidiaries from an enterprise DB or creating test schemata.
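For the common case where you keep a minority of the rows, the CTAS route looks roughly like this (a minimal sketch; USER_TBL and USERS_TO_DELETE are hypothetical stand-ins for the poster's tables, and indexes, constraints, grants and triggers still have to be recreated afterwards):

  CREATE TABLE user_tbl_new AS
    SELECT *
    FROM   user_tbl u
    WHERE  u.username NOT IN (SELECT username FROM users_to_delete);  -- rows to keep
  DROP TABLE user_tbl;
  RENAME user_tbl_new TO user_tbl;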

Similar Messages

  • Deleting 5 million records (slowness issue)

    Hi guys ,
    we are trying to delete 5 million records with the following query. It is taking too long (more than 2 hours):
    delete from <table_name> where date<condition_DT;
    FYI:
    * The table is partitioned
    * There is a primary key
    Please assist us with this.

    >
    we are trying to delete 5 million records with the following query; it is taking more than 2 hours.
    >
    Nothing much you can do.
    About the only alternatives are:
    1) Create a new table that copies the records you want to keep, then drop the old table and rename the new one to the old name. If you are deleting most of the records, this is a good approach.
    2) Create a new table that copies the records you want to keep, then truncate the partitions of the old table and use partition exchange to put the data back.
    3) Delete the data in smaller batches of 100K records or so each. You could do this by using a different date value in the WHERE clause: delete data < 2003, then delete data < 2004, and so on (see the sketch below).
    4) If you want to delete all the data in a partition, you can just truncate the partition. That is the approach to use if you partition by date and are trying to remove older data.
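    A minimal sketch of option 3, assuming the date column is named TRADE_DT (the post's real column name isn't shown) and yearly slices:
      delete from <table_name> where trade_dt < DATE '2003-01-01';
      commit;
      delete from <table_name> where trade_dt < DATE '2004-01-01';
      commit;
      -- ...and so on, one slice at a time, up to condition_DT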

  • Delete 3 million records!

    I would like to ask if you guys have a good strategy (e.g. fallback plan, implementation) for deleting 3 million records from 3 tables using PL/SQL.
    How long would it take to do that?
    Many thanks in advance!

    Sorry, I'm on a surrealistic tip today.
    What I'm getting at is this:
    Why PL/SQL? SQL is normally the most effective way of zapping records. However, deleting 80% of a 3.5 million row table is quite slow. It may be quicker to insert the rows you want to keep into a separate table, truncate the original table and then insert them back. Of course, TRUNCATE is DDL and so can't be rolled back - that affects your regression strategy (i.e. take a backup!).
    Why three tables? What is the relationship between these tables? DELETE statements would work a lot faster if the tables were linked by foreign keys with CASCADE DELETE instead of using sub-queries (see the sketch below).
    Why three million? The question you haven't answered: three million out of how many?
    Cheers, APC
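    For reference, the cascading foreign key APC describes looks like this (a sketch; PARENT_T, CHILD_T and IDS_TO_DELETE are hypothetical names). Deleting a parent row then removes its children automatically, with no sub-queries needed:
      CREATE TABLE parent_t (
        id NUMBER PRIMARY KEY
      );
      CREATE TABLE child_t (
        id        NUMBER PRIMARY KEY,
        parent_id NUMBER REFERENCES parent_t (id) ON DELETE CASCADE
      );
      -- one statement; matching CHILD_T rows are deleted along with their parents
      DELETE FROM parent_t WHERE id IN (SELECT id FROM ids_to_delete);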

  • Deleting 110 million records!

    I have got a table which has 120 million records, of which only 10 million are useful. Now I want to delete the remaining 110 million records with a WHERE condition. I spoke to my DBA and he said this would take around 2 weeks or more, but I need to get it done quickly because it has been affecting our daily rollup process, which generates alerts for high-priority applications.
    I want to delete based on this condition:
    delete from tabA where colA=0;
    Any kind of help is highly appreciated.
    Oracle version: 11g

    >
    3.) insert /*+ append */ into taba select * from taba_temp;
    >
    That's the 'old' way that should be used ONLY if OP does not have the partitioning option licensed.
    >
    1.) create table taba_temp as select * from taba where cola != 0;
    >
    That 'temp' table should be created in the desired tablespace as a RANGE-partitioned table with one partition: VALUES LESS THAN (MAXVALUE).
    Then step 3 can just do 'ALTER TABLE ... EXCHANGE PARTITION' to swap the data in. That is a metadata-only operation and takes a fraction of a second.
    No need to query the data again.
    DROP TABLE EMP_COPY;
    CREATE TABLE EMP_COPY AS SELECT * FROM EMP;  -- this is a copy of EMP and acts as the MAIN table that we want to keep
    DROP TABLE EMP_TEMP;
    -- create a partitioned temp table with the same structure as the actual table
    -- we only want to keep emp records for deptno = 20 for this example
    CREATE TABLE EMP_TEMP
    PARTITION BY RANGE (EMPNO)
    (PARTITION ALL_DATA VALUES LESS THAN (MAXVALUE))
    AS SELECT * FROM EMP_COPY WHERE DEPTNO = 20;
    -- truncate our 'real' table - very fast
    TRUNCATE TABLE EMP_COPY;
    -- swap in the 'deptno=20' data from the temp table - very fast
    ALTER TABLE EMP_TEMP EXCHANGE PARTITION ALL_DATA WITH TABLE EMP_COPY;

  • Delete 50 Million records from a table with 60 Million records

    Hi,
    I'm using Oracle 9.2.0.7 on Win2k3 32-bit.
    I need to delete 50M rows from a table that contains 60M records. This DB was just passed on to me. I tried to use a DELETE statement but it takes too long. From the articles and forums I've read, the best way to delete that many records is to create a temp table, transfer the needed data into it, drop the big table, then rename the temp table to the big table's name. But the key here is creating an exact replica of the big table. I have the create table, index and constraint scripts from the export file of my production DB, but I noticed I haven't got the create grant script. Is there a view I could use to get this? Can dbms.metadata get this?
    When I need to create an exact replica of my big table, I only need the create table, indexes, constraints, and grants scripts, right? Did I miss anything?
    I just want to make sure that I haven't left anything out. Kindly help.
    Thanks and Best Regards

    Can dbms.metadata get this?
    Yes, dbms_metadata can get the grants.
    YAS@10GR2 > select dbms_metadata.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST') from dual;
    DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST')
      GRANT SELECT ON "YAS"."TEST" TO "SYS"
    When I need to create an exact replica of my big table, I only need:
    create table, indexes, constraints, and grants script right? Did I miss anything?
    There are triggers, foreign keys referencing this table (which will not permit you to drop the table if you do not take care of them), snapshot logs on the table, snapshots based on the table, etc...
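    dbms_metadata can extract most of those dependent objects too, for example (a sketch; I believe these object types are valid for GET_DEPENDENT_DDL, though foreign keys pointing at the table from other tables may be easier to find via DBA_CONSTRAINTS):
      SELECT dbms_metadata.get_dependent_ddl('TRIGGER', 'TEST') FROM dual;
      SELECT dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', 'TEST') FROM dual;
      SELECT dbms_metadata.get_dependent_ddl('INDEX', 'TEST') FROM dual;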

  • Deleting records from a table with 12 million records

    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise
    Regards,
    Narayan

    DELETE FROM CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    WHERE CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;

  • Deleting 3 million duplicated records from 4 million records

    Hi,
    I want to delete 3 million duplicate records from 4 million records in production. We don't have partitions, and I can't recreate the table using the CTAS option. So I have to delete the data in batches, because this is a shared environment and people are accessing the data.
    Is there any faster way to delete the data instead of using bulk delete by max ROWID?
    Please help me out.
    Regards,
    Venkat.
    Edited by: ramanamadhav on Aug 16, 2011 8:41 AM

    After deleting the data (with the suggestion given by Justin), make sure that fresh statistics are also gathered.
    After such a heavy delete there is always a mismatch between NUM_ROWS and the actual count.
    The best way is to shrink space and then gather table statistics; otherwise the optimizer will get confused and will not be working from correct statistics.
    For e.g.
    16:41:55 SQL> create table test_tab_sm
    16:42:05   2  as
    16:42:05   3  select level col1, rpad('*888',200,'###') col2
    16:42:05   4  from dual connect by level <= 100000;
    Table created.
    Elapsed: 00:00:00.49
    16:42:08 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.51
    16:42:17 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                        100000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:42:28 SQL> delete from TEST_TAB_SM where mod(col1,2) =1;
    50000 rows deleted.
    Elapsed: 00:00:01.09
    16:42:39 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                        100000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:42:47 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.26
    16:42:55 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:43:27 SQL> alter table TEST_TAB_SM move;
    Table altered.
    Elapsed: 00:00:00.46
    16:43:59 SQL>  select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       2942         205          0
    1 row selected.
    Elapsed: 00:00:00.03
    16:44:06 SQL> exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.24
    16:44:17 SQL> select table_name,num_rows,blocks,avg_row_len,chain_cnt from user_tables where table_name = 'TEST_TAB_SM';
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN  CHAIN_CNT
    TEST_TAB_SM                         50000       1471         205          0
    1 row selected.
    Elapsed: 00:00:00.01
    16:44:24 SQL>
    We can see how the number of blocks changes. It changed from 2942 to 1471, which is half; in your case it will be about a quarter of what it was using.
    There are other options for shrinking space besides MOVE:
    alter table <table_name> shrink space;
    Try it out and you can see the difference yourself.
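    One caveat (from Oracle's documented requirements, not stated in the post above): segment shrink only works on tables in ASSM tablespaces and needs row movement enabled first, so the full sequence is:
      alter table test_tab_sm enable row movement;
      alter table test_tab_sm shrink space;
      exec dbms_stats.gather_table_stats(ownname => 'KDM', tabname => 'TEST_TAB_SM', cascade => true);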

  • I want to delete approx 100,000 million records and free the space

    Hi,
    I want to delete approx 100,000 million records and free the space. How do I do it?
    Can somebody suggest an optimized way of archiving the data?

    user8731258 wrote:
    i want to delete approx 100,000 million records and free the space.
    To archive, back up the database.
    To delete and free up the space, truncate the table/partitions and then shrink the datafile(s) associated with the tablespace.
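    In concrete terms that looks something like this (a sketch; BIG_TAB and the datafile path are hypothetical, and a datafile can only be resized down to its high-water mark):
      TRUNCATE TABLE big_tab DROP STORAGE;  -- releases the segment's space back to the tablespace
      ALTER DATABASE DATAFILE '/u01/oradata/db/data01.dbf' RESIZE 10G;  -- then shrink the file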

  • Procedure to delete multiple table records with 1000 rows at a time

    Hello Champs,
    I have a requirement to create procedures to achieve the deletes from 5 big tables: 1 master table and 4 child tables. Each table has 28 million records. Now I have to schedule a procedure to run on a daily basis to delete approximately 500k records from each table. But the condition is: separate procedures for each table, deleting only 1000 rows at a time. The plan is below. Since I am new to writing complicated procedures, I don't have much idea how to code them; can someone help me design the procedure queries? Many thanks.
    1. Fetch 1000 rows from the master table with a WHERE clause condition.
    2. Match them against each child table and delete the records.
    3. At last, delete those 1000 rows from the master table.
    4. Overall commit for all 5 x 1000 records.
    5. Repeat steps 1 to 4 till there are no rows left to fetch from the master table.
    Below is the detailed procedure plan:
    ----- Main procedure to fetch 1000 rows at a time, providing them as input to 5 different procedures that delete the records, repeating the same steps till there are no rows left to fetch, i.e. (i=0) -----
    Create procedure fetch_record_from_master_table:
    loop
    i = Select column1 from mastertable where <somecondition> and rowcount <= 1000;
    call procedure 1 and i rows as input;
    call procedure 2 and i rows as input;
    call procedure 3 and i rows as input;
    call procedure 4 and i rows as input;
    call procedure 5 and i rows as input;
    commit;
    exit when i=0
    end loop;
    ----- Separate procedures to delete records from each child table ------
    Create procedure 1 with input:
    delete from child_table1 where column1=input rows of master table;
    Create procedure 2 with input:
    delete from child_table2 where column1=input rows of master table;
    Create procedure 3 with input:
    delete from child_table3 where column1=input rows of master table;
    Create procedure 4 with input:
    delete from child_table4 where column1=input rows of master table;
    --- procedure to delete records from the master table last ---
    Create procedure 5 with input:
    delete from Master_table where column1=input rows of master table;

    Oops, but this will work, won't it?
    begin
      execute immediate 'truncate table child_table1';
      execute immediate 'truncate table child_table2';
      execute immediate 'truncate table child_table3';
      execute immediate 'truncate table child_table4';
      -- disable the foreign keys that reference MASTER_TABLE so it can be truncated
      for r_fk in ( select table_name, constraint_name
                    from user_constraints
                    where constraint_type = 'R'
                    and   r_constraint_name in ( select constraint_name
                                                 from user_constraints
                                                 where table_name = 'MASTER_TABLE' ) )
      loop
        execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' DISABLE';
      end loop;
      execute immediate 'truncate table master_table';
      for r_fk in ( select table_name, constraint_name
                    from user_constraints
                    where constraint_type = 'R'
                    and   r_constraint_name in ( select constraint_name
                                                 from user_constraints
                                                 where table_name = 'MASTER_TABLE' ) )
      loop
        execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' ENABLE';
      end loop;
    end;
    /
    Or:
    truncate table child_table1;
    truncate table child_table2;
    truncate table child_table3;
    truncate table child_table4;
    begin
      for r_fk in ( select table_name, constraint_name
                    from user_constraints
                    where constraint_type = 'R'
                    and   r_constraint_name in ( select constraint_name
                                                 from user_constraints
                                                 where table_name = 'MASTER_TABLE' ) )
      loop
        execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' DISABLE';
      end loop;
    end;
    /
    truncate table master_table;
    begin
      for r_fk in ( select table_name, constraint_name
                    from user_constraints
                    where constraint_type = 'R'
                    and   r_constraint_name in ( select constraint_name
                                                 from user_constraints
                                                 where table_name = 'MASTER_TABLE' ) )
      loop
        execute immediate 'ALTER TABLE ' || r_fk.table_name || ' MODIFY CONSTRAINT ' || r_fk.constraint_name || ' ENABLE';
      end loop;
    end;
    /
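    If truncating isn't acceptable and the 1000-rows-per-batch plan must be implemented as specified, one procedure can drive all five deletes per batch. A minimal sketch, assuming hypothetical names: MASTER_TABLE and CHILD_TABLE1..4 joined on COLUMN1, with a hypothetical STATUS = 'PURGE' flag standing in for the real WHERE condition:
    create or replace procedure purge_batches is
      type t_keys is table of master_table.column1%type;
      l_keys t_keys;
    begin
      loop
        -- step 1: fetch up to 1000 candidate keys from the master table
        select column1 bulk collect into l_keys
        from   master_table
        where  status = 'PURGE'
        and    rownum <= 1000;
        exit when l_keys.count = 0;
        -- step 2: delete the matching rows from each child table
        forall i in 1 .. l_keys.count
          delete from child_table1 where column1 = l_keys(i);
        forall i in 1 .. l_keys.count
          delete from child_table2 where column1 = l_keys(i);
        forall i in 1 .. l_keys.count
          delete from child_table3 where column1 = l_keys(i);
        forall i in 1 .. l_keys.count
          delete from child_table4 where column1 = l_keys(i);
        -- step 3: delete those rows from the master table last
        forall i in 1 .. l_keys.count
          delete from master_table where column1 = l_keys(i);
        -- step 4: one commit per 5 x 1000-row batch
        commit;
      end loop;  -- step 5: repeat until no candidate rows remain
    end purge_batches;
    /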

  • How to delete the duplicate records in a table without primary key

    I have a table that contains around 1 million records and there is no primary key or auto-number column. I need to delete the duplicate records from this table. What is a simple, effective way to do this?

    Please see this link:
    Remove duplicate records ...
    sqldevelop.wordpress.com
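    The standard trick when there is no key is ROWID, which every row has. A minimal sketch, assuming a hypothetical table T whose rows count as duplicates when COL1 and COL2 match:
      DELETE FROM t a
      WHERE  a.rowid > ( SELECT MIN(b.rowid)   -- keep the 'first' copy of each group
                         FROM   t b
                         WHERE  b.col1 = a.col1
                         AND    b.col2 = a.col2 );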

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please wrap the SELECT query in the condition shown below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, the SELECT ... FOR ALL ENTRIES was fetching all the records from the database. With this condition in place, no SELECT is issued when the driver table is empty.
    Regards,
    Pravin

  • Performance issues in million records table

    I have a scenario with some 20 tables, each holding a million or more records [historical].
    On average I add 1500 - 2500 records a day, i.e. I would add about a million records every year on average.
    I am looking for archival solutions for these master tables.
    Operations on Archival Tables, would be limited to read.
    Expected benefits
    User base would be around 2500 users on the whole - but expect 300 - 500 parallel users at the max.
    Very limited usage on Historical data - compared to operations on current data
    Performance on operations over current data is important compared over that on historical data
    Environment - Oracle 9i - Should be migrating to Oracle 10g sooner.
    Some solutions I could think of:
    [ 1 ] Put every archived record into an archival table and fetch from there,
    i.e. clearly distinguish searches as current or archival prior to searching.
    The impact, I feel, is that the archival tables are again ever-increasing, by approximately a million rows a year.
    [ 2 ] Put records into separate archival tables, one per year.
    For instance, every year I replicate the set of tables and that year's data goes into them.
    But then how do I do a fetch?
    Note - I do have a unique way of identifying each record in my master table - the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
    The major concern: response is currently very good thanks to indexing and other common measures, but I would not want this to degrade in a year or more; I expect to improve on the current response times and to sustain them over time.
    Also I don't want to change every query in my app - until there is no way out.

    Hi,
    Read the following documentation link about Partitioning in Oracle.
    Best Regards,
    Alex
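    Partitioning fits this case because each year lives in its own partition while the application keeps querying a single table. A minimal sketch with hypothetical names (MASTER_HIST, CREATED_DT):
      CREATE TABLE master_hist (
        record_id  VARCHAR2(16),   -- e.g. 2008070000562330 (YYYYMMXXXXXXXXXX)
        created_dt DATE
      )
      PARTITION BY RANGE (created_dt) (
        PARTITION p2007 VALUES LESS THAN (DATE '2008-01-01'),
        PARTITION p2008 VALUES LESS THAN (DATE '2009-01-01'),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
      );
    Queries filtering on CREATED_DT are pruned to the relevant partitions, and an old year can be archived or dropped as a whole partition without touching current data or changing the application's SQL.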

  • Tuning the sql query when we have 30 Million records

    Hi Friends,
    I have a query which takes around 25 to 30 minutes to retrieve 9 million records.
    Oracle version=11.2.0.2
    OS=Solaris 10 64bit
    query details
    CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
    AS 
    SELECT  A."ORDERID", A."USERORDERID", A."ORDERSIDE", A."ORDERTYPE",
              A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
              A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
              A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
              A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
              A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
              A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
              A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
              A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
              A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
              A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
              A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
              A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
              A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
              A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
              a.lastExecutionRole, a.MDEntryID, a.PegOffset,
              a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
              A.ISPEX, A.CLEARINGHANDLING, B.qsid
        FROM  tibex_Order A,
              tibex_Participant b
        WHERE a.participantID = b.participantID
      AND (A.MessageSequence, A.OrderID) IN (
            SELECT  max(C.MessageSequence), C.OrderID
              FROM  tibex_Order C
              WHERE LastInstRejectCode = 'OK'
              GROUP BY C.OrderID )
      AND a.OrderStatus IN (
            SELECT OrderStatus
              FROM  tibex_orderStatusEnum
              WHERE ShortDesc IN (
                      'ORD_OPEN', 'ORD_EXPIRE', 'ORD_CANCEL', 'ORD_FILLED','ORD_CREATE','ORD_PENDAMD','ORD_PENDCAN' ) )
  UNION ALL
      SELECT  A.ORDERID, A.USERORDERID, A.ORDERSIDE, A.ORDERTYPE,
              A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
              A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
              A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
              A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
              A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
              A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
              A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
              A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
              A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
              A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
              A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
              A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
              A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
              A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
              a.lastExecutionRole, A.MDEntryID, a.PegOffset,
              a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
              A.ISPEX, A.CLEARINGHANDLING, B.qsid
        FROM  tibex_Order A,
              tibex_Participant b
        WHERE a.participantID = b.participantID
      AND orderstatus in (
            SELECT  orderstatus
              FROM  tibex_orderStatusEnum
              WHERE ShortDesc in ('ORD_REJECT') )
      AND 1 IN (
              SELECT count(*)
                FROM tibex_order c
                WHERE c.orderid=a.orderid
                  AND c.instrumentID=a.instrumentID );
    /
    I tried modifying the query as below; it did not return the same results, but it was quicker (6 min). Can somebody check where I am going wrong?
    CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
    AS   
    WITH REJ AS
    ( SELECT ROWID RID
      FROM   TIBEX_ORDER
      WHERE  ORDERSTATUS = (SELECT ORDERSTATUS
                            FROM   TIBEX_ORDERSTATUSENUM
                            WHERE  SHORTDESC = 'ORD_REJECT') ),
    REJ1 AS
    ( SELECT ROWID RID
      FROM   TIBEX_ORDER
      WHERE  ORDERSTATUS NOT IN (SELECT ORDERSTATUS
                                 FROM   TIBEX_ORDERSTATUSENUM
                                 WHERE  SHORTDESC = 'ORD_NOTFND'
                                 OR     SHORTDESC = 'ORD_REJECT') )
    SELECT O.*,
           P.QSID
    FROM   TIBEX_ORDER O,
           TIBEX_PARTICIPANT P
    WHERE  O.PARTICIPANTID = P.PARTICIPANTID
    AND    O.ROWID IN (
                       SELECT RID
                       FROM   (
                               SELECT   ROWID RID,
                                        ORDERSTATUS,
                                        RANK () OVER (PARTITION BY ORDERID ORDER BY MESSAGESEQUENCE ASC) R
                                FROM     TIBEX_ORDER
                               )
                        WHERE  R = 1
                        AND    RID IN (SELECT RID FROM REJ)
                       )
    UNION ALL
    SELECT O.*,
           P.QSID
    FROM   TIBEX_ORDER O,
           TIBEX_PARTICIPANT P
    WHERE  O.PARTICIPANTID = P.PARTICIPANTID
    AND    O.ROWID IN (
                       SELECT RID
                       FROM   (
                               SELECT   ROWID RID,
                                        ORDERSTATUS,
                                        RANK () OVER (PARTITION BY ORDERID ORDER BY MESSAGESEQUENCE DESC) R
                                FROM     TIBEX_ORDER
                               )
                        WHERE  R = 1
                        AND    RID IN (SELECT RID FROM REJ1)
                       );
    Regards
    NM

    Hi Satish,
    CREATE OR REPLACE VIEW TIBEX_ORDERSBYQSIDVIEW
    (ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
    BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID, PRICETYPE,
    PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL, DISCLOSEDQTY,
    REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE, ACCOUNTNO,
    CLEARINGAGENCY, LASTINSTRESULT, LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE,
    TIMESTAMP, QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE,
    LASTEXECQTY, LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY, STOPPRICE,
    LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO, LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
    BOOKTIMESTAMP, PARTICIPANTIDMM, MARKETSTATE, PARTNEREXID, LASTEXECSETTLEMENTCYCLE,
    LASTEXECPOSTTRADEVENUETYPE, PRICELEVELPOSITION, PREVREFERENCEID, EXPIRYTIMESTAMP, MATCHTYPE,
    LASTEXECUTIONROLE, MDENTRYID, PEGOFFSET, HALTREASON, COMPARISONPRICE,
    ENTEREDPRICETYPE, ISPEX, CLEARINGHANDLING, QSID)
    AS
    SELECT  A."ORDERID", A."USERORDERID", A."ORDERSIDE", A."ORDERTYPE",
              A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
              A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
              A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
              A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
              A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
              A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
              A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
              A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
              A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
              A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
              A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
              A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
              A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
              A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
              a.lastExecutionRole, a.MDEntryID, a.PegOffset,
              a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
              A.ISPEX, A.CLEARINGHANDLING, B.qsid
        FROM  tibex_Order A,
              tibex_Participant b
        WHERE a.participantID = b.participantID
          AND (A.MessageSequence, A.OrderID) IN ( SELECT MAX (C.MessageSequence), C.OrderID
               FROM tibex_Order C
                 WHERE c.LastInstRejectCode = 'OK'
                 and a.OrderID=c.OrderID
                   GROUP BY C.OrderID)
          AND a.OrderStatus IN (2,4,5,6,1,9,10)
      UNION ALL
      SELECT  A.ORDERID, A.USERORDERID, A.ORDERSIDE, A.ORDERTYPE,
              A.ORDERSTATUS, A.BOARDID, A.TIMEINFORCE, A.INSTRUMENTID,
              A.REFERENCEID, A.PRICETYPE, A.PRICE, A.AVERAGEPRICE,
              A.QUANTITY, A.MINIMUMFILL, A.DISCLOSEDQTY, A.REMAINQTY,
              A.AON, A.PARTICIPANTID, A.ACCOUNTTYPE, A.ACCOUNTNO,
              A.CLEARINGAGENCY, A.LASTINSTRESULT, A.LASTINSTMESSAGESEQUENCE,
              A.LASTEXECUTIONID, A.NOTE, A.TIMESTAMP, A.QTYFILLED, A.MEID,
              A.LASTINSTREJECTCODE, A.LASTEXECPRICE, A.LASTEXECQTY,
              A.LASTINSTTYPE, A.LASTEXECUTIONCOUNTERPARTY, A.VISIBLEQTY,
              A.STOPPRICE, A.LASTEXECCLEARINGAGENCY, A.LASTEXECACCOUNTNO,
              A.LASTEXECCPCLEARINGAGENCY, A.MESSAGESEQUENCE,
              A.LASTINSTUSERALIAS, A.BOOKTIMESTAMP, A.PARTICIPANTIDMM,
              A.MARKETSTATE, A.PARTNEREXID, A.LastExecSETTLEMENTCYCLE,
              A.LASTEXECPOSTTRADEVENUETYPE, A.PRICELEVELPOSITION,
              A.PREVREFERENCEID, A.EXPIRYTIMESTAMP, matchType,
              a.lastExecutionRole, A.MDEntryID, a.PegOffset,
              a.haltReason, A.COMPARISONPRICE, A.ENTEREDPRICETYPE,
              A.ISPEX, A.CLEARINGHANDLING, B.qsid
        FROM  tibex_Order A,
              tibex_Participant b
        WHERE a.participantID = b.participantID
          AND orderstatus=3
          AND 1 IN (
                  SELECT count(*)
                    FROM tibex_order c
                    WHERE c.orderid=a.orderid
                  AND c.instrumentID=a.instrumentID );
    select * from TIBEX_ORDERSBYQSIDVIEW where participantid='NITE';
    Current SQL using a temp segment - look at column TEMPSEG_SIZE_MB:
           SID TIME                OPERATION                 ESIZE        MEM    MAX MEM       PASS TEMPSEG_SIZE_MB
           183 11/10/2011:13:38:44 HASH-JOIN                    43         43       1556          1            1024
           183 11/10/2011:13:38:44 GROUP BY (HASH)            2043       2072       2072          0            4541
    Edited by: NM on 11-Oct-2011 04:38
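    For reference, monitoring output like the above can be pulled from V$SQL_WORKAREA_ACTIVE (a sketch; the original report's exact column list is an assumption):
      SELECT sid,
             operation_type                        operation,
             ROUND(expected_size   / 1024 / 1024)  esize,
             ROUND(actual_mem_used / 1024 / 1024)  mem,
             ROUND(max_mem_used    / 1024 / 1024)  max_mem,
             number_passes                         pass,
             ROUND(tempseg_size    / 1024 / 1024)  tempseg_size_mb
      FROM   v$sql_workarea_active
      WHERE  tempseg_size IS NOT NULL;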

  • Maintaining a huge volume of data (around 60 million records)

    I have a requirement to load data from an ODS to a Cube via full load. This ODS will receive 50 million records over the next 6 months, which we have to maintain in BW.
    Can you please advise on the following?
    Can we accommodate 50 million records in the ODS?
    If so, can we run the load of 50 million records from the ODS to the Cube? Each record also has to be checked against another ODS to get the value for another InfoObject. So is the load going to be successful for the 50 million records? I'm not sure - or do we get a timeout error?

    Harsha,
    The data load should go through ... some things to do / check...
    Delete the indices on cube before loading and then rebuild the same later after the load completes.
    Regarding the lookup - if you are looking up specific values in another DSO, build a suitable secondary index on that DSO (preferably a unique index).
    A DSO or cube can definitely hold 50 million records - we have had cases with 50 million records per month, with the DSO holding data for 6 to 10 months, and the same for the cube. Only reporting on the cube might be slow at a very detailed level.
    Also please state your version - 3.x or 7.0...
    Also, if you are on Oracle, plan for providing / backing up archive logs, since loading generates a lot of archive logs...
    Edited by: Arun Varadarajan on Apr 21, 2009 2:30 AM

  • Best way to update 8 out of 10 million records

    Hi friends,
    I want to update 8 million records in a table which has 10 million records. What could be the best strategy, given that the table has a BLOB column holding 600GB worth of data (the BLOB itself is 550GB)? I am not updating the BLOB column.
    With non-BLOB data I have usually used the "CREATE TABLE new_table AS SELECT <do the update here> FROM old_table;" method.
    How should I approach this one?

    @Mark D Powell
    To give you some background: my client faced this problem a week ago; it is part of a daily cleanup activity.
    Right now I don't have access due to a security issue; I could only take a few AWR reports and stats while the access window was open. So next time I get access I want to close the issue once and for all.
    Coming to your questions:
    So what is wrong with just issuing an update to update all 8 Million rows?
    In a previous run of a single update, with a full table scan in the plan and no parallel degree, it started reading from UNDO (current_obj#=-1 on the "db file sequential read" wait event) and errored out after 24 hours with the tablespace full - the tablespace which contains the BLOB data (a separate tablespace).
    To add to the problem, the redo log files were sized too small, only about 50MB.
    The wait events (from DBA_HIST_ACTIVE_SESS_HISTORY) for the problematic SQL ID show:
    - log file switch (checkpoint incomplete) and log file switch completion as the events comprising 62% of the waits
    - CPU 29%
    - db file sequential read 6%
    - direct path read 2%, and others contributing a little
    30% of the "db file sequential read" samples had current_obj#=-1 and p1 showing an undo file id.
    Is there any concurrent DML against this table? If not, the parallel DML would be an option though it may not really be needed. 
    I think there was in the previous run, and I have asked for it to be avoided in the next run.
    How large are the base table rows?
    AVG_ROW_LEN is 227
    How many indexes are affected by the update, if any?
    The last column of the primary key is the only column to be updated (I mean, used in the SET clause of the update).
    Do you expect the update will cause any row migration?
    Yes, I think so, because the only column being updated is the same column on which the table is partitioned.
    Now if there is a lot of concurrent DML on the table you probably want to use pl/sql so you can loop through the data issuing a commit every N rows so as to not lock other concurrent sessions out of the table for too long a period of time.  This may well depend on if you can write a driving cursor that can be restarted in the event of interruption and would skip over rows that have already been updated.  If not you might want to use a driving table to control the processing.
    Right now, to avoid the UNDO issue, I have suggested the PL/SQL approach and have asked for the redo logs to be resized at least 10 times larger.
    My big question after seeing the wait-event profile for the session is:
    Which was the main issue here - the redo log size, or the reading from UNDO that hit the update statement? The buffer gets had shot up to 600 million, yet there are only 220k blocks in the table.
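    For what it's worth, the commit-every-N-rows loop suggested above can be as simple as this sketch (hypothetical names: BIG_TAB with a NEEDS_FIX flag marking rows still to be updated; the flag is what makes the loop restartable after an interruption):
      begin
        loop
          update big_tab
             set part_col  = part_col + 1,   -- stand-in for the real SET clause
                 needs_fix = 'N'
           where needs_fix = 'Y'
             and rownum   <= 10000;
          exit when sql%rowcount = 0;
          commit;   -- keeps the undo and redo footprint of each batch small
        end loop;
        commit;
      end;
      /
    Note that updating the partitioning column requires ENABLE ROW MOVEMENT on the table, since rows will migrate between partitions.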
