Delete from a 95-million-row table ...

Hi folks, I need to delete from a regular table with 95 million rows. What are my best options? I tried CTAS with parallel, but it failed after 1+ hrs ... that was due to a bad query, but I'm checking whether there is any other way to achieve this.
Thanks in advance.

user8604530 wrote:
Hi folks, need to delete from a 95 millions rows regular table, what should be my best options, have tried CTAS using parallel, but it failed after 1+ hrs ... it was due to bad query, but checking is there any other way to achieve this.
Thanks in advance.

How many rows in the table BEFORE the DELETE?
how many rows in the table AFTER the DELETE?
How do I ask a question on the forums?
SQL and PL/SQL FAQ
Handle:     user8604530
Status Level:     Newbie
Registered:     Mar 10, 2010
Total Posts:     64
Total Questions:     26 (22 unresolved)
I extend to you my condolences since you rarely get your questions answered.
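
If most of the 95 million rows are going away (which is why the before/after counts above matter), the usual alternative to a big DELETE is to keep the survivors with CTAS and swap the tables. A rough sketch only; the table name and the keep-predicate are assumptions for illustration:

```sql
-- Keep only the rows that should survive; NOLOGGING + PARALLEL for speed.
CREATE TABLE big_table_keep NOLOGGING PARALLEL 8 AS
  SELECT /*+ PARALLEL(t 8) */ *
  FROM   big_table t
  WHERE  created_date >= ADD_MONTHS(SYSDATE, -3);

-- After verifying the counts:
-- DROP TABLE big_table;
-- ALTER TABLE big_table_keep RENAME TO big_table;
-- then re-create indexes, constraints, grants, and triggers.
```

If only a small fraction of the rows is going, a plain DELETE is usually the simpler and safer choice.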

Similar Messages

  • How to remove/delete from a mysql table

    Hello!
    Please can someone help me with this one; I need to remove/delete a row in a MySQL table. When I use the query below, it removes this line from other tables containing the same numbers too. I only want to remove it from this table alone.
    String update = "DELETE FROM info WHERE Number = '" + Nr + "' AND week = " + week;
    THX

    I don't understand this. You're saying that running this query removes values from other tables besides your info table? I don't believe it.
    One thing I'd caution you on is using names like "info" and "number" for tables and columns. They sound suspiciously close to keywords for your database. You'd be better off using less generic, more application specific names.
    But I don't think that explains the behavior you're describing. Either this is a very serious bug in MySQL or a very serious misunderstanding on your part.

  • Best way to refresh 5 million row table

    Hello,
    I have a table with 5 million rows that needs to be refreshed every 2 weeks.
    Currently I am dropping and re-creating the table, which takes a very long time and gives a tablespace-related warning at the end of execution. It does create the table with the actual number of rows, but I am not sure why I get the tablespace warning at the end.
    Any help is greatly appreciated.
    Thanks.

    Can you please post your query?
    1. What is the size of the temporary tablespace?
    2. Is your query performing any sorts?
    Monitor the TEMP tablespace usage with the query below after executing your SQL query:
    SELECT TABLESPACE_NAME, BYTES_USED, BYTES_FREE
    FROM V$TEMP_SPACE_HEADER;

  • Fetch only tables with 10 million or more rows

    Hi all,
    How can I fetch the tables that have more than 10 million rows with PL/SQL? I got this from some other site I can't remember.
    Can somebody help me with this please? Your help is greatly appreciated.
    declare
      counter number;
    begin
      for x in (select segment_name, owner
                from dba_segments
                where segment_type = 'TABLE'
                and owner = 'KOMAKO') loop
        execute immediate 'select count(*) from ' || x.owner || '.' || x.segment_name into counter;
        dbms_output.put_line(rpad(x.owner, 30, ' ') || '.' || rpad(x.segment_name, 30, ' ') || ' : ' || counter || ' row(s)');
      end loop;
    end;
    Thank you,
    gg

    1) This code appears to work, though there seems to be no need to select from DBA_SEGMENTS when DBA_TABLES would be more straightforward. And, of course, you'd have to do something to report only the tables whose count exceeds 10 million.
    2) If you are using the cost-based optimizer (CBO) and your statistics are reasonably accurate and you can tolerate a degree of staleness/ approximation in the row counts, you could just select the NUM_ROWS column from DBA_TABLES.
    Justin
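
    A minimal sketch of option 2 above (the KOMAKO owner is taken from the original snippet; NUM_ROWS is only as fresh as the last statistics gathering):

    ```sql
    SELECT owner, table_name, num_rows
    FROM   dba_tables
    WHERE  owner = 'KOMAKO'
    AND    num_rows >= 10000000;
    ```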

  • Deleting from Dynamic Internal table

    Hi,
    How can we delete data from a dynamic internal table?
    I have a dynamic internal table <fs_dyn_table> which is of type ANY and can have any fields.
    I want to delete all those records which have a value of '10' in a field named field1.
    I have written my delete statement in the following manner:
    DELETE <fs_dyn_table> WHERE field1 = '10'. ...but it is not working; it gives me an error stating that the line type of the table must be statically defined.

    Hi,
    Loop over the internal table into a field symbol with the same line type as the internal table.
    Use the ASSIGN COMPONENT statement and then delete the corresponding record.
    Regards,
    Ankur Parab

  • Duplicate entry is not deleted from the TCA tables

    Hi
    We are using Oracle Customer Online to find duplicate data and Oracle Data Librarian to remove the duplication. The request to remove the duplicates is taken up by ODL and the task is performed. When checked from OCO it shows that the duplicate entry has been erased, but when checked in the TCA table HZ_PARTIES, the entry is still there. Can anyone please help with why this is so: after removing the duplicate entry, why is it still in the table? And if the entry is still there, why can't it be accessed through OCO?
    Regards
    Sourav Biswas

    Hi,
    Did you check the value of the "STATUS" column in the HZ_PARTIES table for the deleted records? It should be "D", which represents "Deleted". See the "REGISTRY_STATUS" AR lookup, which validates the party statuses.
    When duplicates are eliminated using Data Librarian/Party Merge, the records are not actually deleted from the database, instead the party status will be changed to "D". This must be the reason for not being able to see the records from Customers Online.
    I guess the same thing might have happened to your entries.
    Wishes,
    RK Goud
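
    For reference, a hypothetical check along the lines RK Goud describes (it assumes the standard TCA schema):

    ```sql
    -- Merged parties are not physically deleted; their status is set to 'D'.
    SELECT party_id, party_name, status
    FROM   hz_parties
    WHERE  status = 'D';
    ```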

  • How to run insert/update/delete from CDC Change table to target using OWB

    I am planning to set up CDC and publish the CDC change table as source data into OWB. I am confused about how to apply changes from the CDC change table to the target database using OWB. For example, the change table contains information like
    operation$, cscn$,commit_timestamp$,xidusn$,....,list of column name
    D,12323223,8/28/2008 1:44:32PM,24,.....,list of column value that have to be deleted from target
    UO,12323224,8/28/2008 1:45:23PM,24,.....,list of column value that have to be updated in target.
    Please advise or give me some hints. Thank you.

    Hi,
    you can wait for 11gR2 with CDC integration, or build most of the code outside OWB. To use CDC you must do these things (http://www.oracle.com/technology/oramag/oracle/03-nov/o63tech_bi.html):
    1. Identify the source tables.
    2. Set up a publisher.
    3. Create change tables.
    4. Set up a subscriber.
    5. Subscribe to the source tables, and activate the subscription.
    6. Set up the CDC window.
    7. Prepare a subscriber view.
    8. Access data from the change tables.
    9. Drop the subscriber view, and purge the CDC window.
    10. Repeat steps 6 through 9 to see new data.
    You can do only a few of these steps inside OWB; most of them must be done outside.
    Regards,
    Detlef
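
    The subscriber-side steps (4 through 9) can be sketched in PL/SQL roughly as follows. All names here (change set CHG_SET, source table APP.ORDERS, view ORDERS_CHG_V) are assumptions for illustration:

    ```sql
    BEGIN
      -- Steps 4-5: set up a subscriber and subscribe to the source table.
      DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
        change_set_name   => 'CHG_SET',
        description       => 'Demo subscription',
        subscription_name => 'OWB_SUB');
      DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
        subscription_name => 'OWB_SUB',
        source_schema     => 'APP',
        source_table      => 'ORDERS',
        column_list       => 'ORDER_ID, STATUS',
        subscriber_view   => 'ORDERS_CHG_V');
      DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION('OWB_SUB');
      -- Step 6: set up the CDC window.
      DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW('OWB_SUB');
      -- Steps 7-8: read ORDERS_CHG_V and apply D/I/UO/UN rows to the target.
      -- Step 9: purge the window when done.
      DBMS_CDC_SUBSCRIBE.PURGE_WINDOW('OWB_SUB');
    END;
    /
    ```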

  • MINUS on a 4 million row table - how to enhance speed?

    I have a query like this:
    select col1, col2, col3
    from table1
    MINUS
    select col1, col2, col3
    from EXTERNAL_TABLE
    table1 has approximately 4 million records and the external file has roughly 4 million rows.
    MINUS takes around 25 mins here. How can I speed it up?
    Thanks in Advance!!!

    To make something go faster, you first need to know what makes it slow.
    Simple, actually: to solve a problem we need to know what the problem is. We can't answer a question without knowing what the question is.
    Reading a total of 8 million rows means a lot more I/O than usual, so that is likely the culprit for the slow performance. If that is the case, you will need to find a way to reduce the time it takes to perform all that I/O, or do less I/O.
    But you need to pop the hood and take a look at just what is causing what you think is slow performance. (It may not even be slow; it may be as fast as it can be, given the hardware and other limitations imposed on Oracle and this MINUS process.)
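
    One thing worth testing alongside the MINUS (a sketch; it assumes col1-col3 are NOT NULL) is rewriting it as an anti-join, which avoids the two sort/unique passes. Note it is not exactly equivalent: MINUS also removes duplicates from table1 and treats NULLs as equal.

    ```sql
    SELECT t.col1, t.col2, t.col3
    FROM   table1 t
    WHERE  NOT EXISTS (SELECT 1
                       FROM   external_table e
                       WHERE  e.col1 = t.col1
                       AND    e.col2 = t.col2
                       AND    e.col3 = t.col3);
    ```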

  • Google-style autosuggest on a table with millions of rows

    Hi All,
    I'm exploring ways of implementing a "Google-style autosuggest" on a table with no less than 30 million rows. It has a field with an address (varchar), and I'd like to create an Ajax call while the user is typing that would suggest a few addresses.
    I was thinking about using CONTAINS + fuzzy, but I'm not sure whether it will be fast enough and return the right results.
    Any suggestions ?
    thanks

    The thread "2 million rows with XML type data" may be of interest.
    HTH
    Girish Sharma
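
    The CONTAINS + fuzzy idea from the question could be sketched like this (table and column names are assumptions; whether it is fast enough at 30 million rows needs testing):

    ```sql
    CREATE INDEX addresses_txt_idx ON addresses (address)
      INDEXTYPE IS CTXSYS.CONTEXT;

    -- fuzzy(term, similarity score, max expansions, weight)
    SELECT address
    FROM   addresses
    WHERE  CONTAINS(address, 'fuzzy(streat, 60, 100, weight)', 1) > 0
    AND    ROWNUM <= 10;
    ```

    A CTXCAT index with prefix indexing is another option often used for type-ahead lookups.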

  • Data deletion from a Global Temporary table when the Clear command is given

    Dear All,
    How do I delete data from a global temporary table when the Clear command is given in Forms?
    Please suggest the syntax.
    Please help.
    Regards,
    Gokul.B

    http://psoug.org/reference/gtt.html
    Francois
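
    For what it's worth, a minimal sketch (the table name and the Forms trigger are assumptions): rows in a global temporary table are private to the session, so a plain DELETE issued from the Clear trigger removes only that session's data.

    ```sql
    CREATE GLOBAL TEMPORARY TABLE gtt_scratch (
      id  NUMBER,
      val VARCHAR2(100)
    ) ON COMMIT PRESERVE ROWS;

    -- e.g. in the form's KEY-CLRFRM trigger:
    -- DELETE FROM gtt_scratch;
    ```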

  • Deleting multiple rows from an array

    I have an array that contains about 3700 rows and 4 columns. Every 37 rows a cycle of data begins. I would like to delete the first row of every cycle of data (i.e. row 1, row 38, row 75, row 112, etc., until I have deleted the first row of every 37-row cycle in the entire array). Then, if possible, I would like to take an average of rows 2 through 37, 39 through 74, 76 through 111, etc.
    Any help would be greatly appreciated.

    > After I modify the arrays, I am displaying them both in LabVIEW (in
    > multiple different graphs and a table), and then also using ActiveX
    > to transfer the data to Excel (where it will be re-arranged and
    > plotted accordingly).
    >
    > I ideally need to get an average of the points in the array about
    > every 36 rows, and then display this, since I am looking to track the
    > output decay over time. If you have any suggestions as to how I might
    > find an average of every x number of rows in an array, and either
    > put these into another array or a table, that would be greatly
    > appreciated.
    >
    I can't see the original post about the data shape, but if you have a 2D
    array, wire it into a For loop. Use "i mod 36 equals 0" to select whether
    you add the row to the current total in the shift register, or whether
    you divide the total by 36 and append it to the averages array, then
    overwrite the total to restart the process. If the array doesn't contain
    an integer multiple of 36 rows, you need to deal with the excess data,
    either ignoring it or making an average with a different denominator.
    I'd assume you do this outside the loop.
    Greg McKaskle

  • Snapshot too old when deleting from a "big" table

    Hello.
    I think this is a basic thing (release 8.1.7.4). I must say I don't know how rollback segments really work.
    A table, where new records are continuously inserted and the old ones can be updated in short transactions, should be purged every day by deleting old records.
    This purge has never been done, and as a result the table now has almost 4 million records. When I launch the stored procedure that deletes the old records, I get the "snapshot too old" error because of read consistency.
    If I launch the procedure after stopping the application that inserts and updates the table, then I don't get the error. I guess the problem is that while the procedure is executing, other transactions also need the rollback segments, so the rollback-segment space that the snapshot needs isn't enough. Do you think this is the problem?
    If this is the case, then I suppose the only solution is increasing the size of the only datafile of the only tablespace for my 4 rollback segments. Am I wrong?
    (Three more questions:
    - Could the problem be solved by locking some rollback segments for the snapshot? How could I do that?
    - What is a discrete transaction?
    I'm a developer, not a dba, but don't tell me to ask my dba because it isn't that easy. Thanks in advance.

    "Snapshot too old indicates the undo tablespace does not have enough free space for a long-running query": what does this mean? Why do I get the same error in two different databases, when in the first the datafile of the undo tablespace is 2GB while in the second it is only 2MB? How can I know how big the datafile has to be?
    One possible solution could be to delete not the whole table at once but only a few records at a time. Would this work? And why, when I try "select count(*) from my_table where rownum = 1", do I also get "snapshot too old" while other transactions are running?
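
    The "few records at a time" idea is the classic workaround on 8i. A sketch, assuming the purge predicate is on a CREATED column (with the caveat that fetch-across-commit loops are themselves a known cause of ORA-01555):

    ```sql
    DECLARE
      rows_deleted PLS_INTEGER;
    BEGIN
      LOOP
        DELETE FROM my_table
        WHERE  created < ADD_MONTHS(SYSDATE, -3)
        AND    ROWNUM <= 10000;   -- small batches limit rollback usage
        rows_deleted := SQL%ROWCOUNT;
        COMMIT;
        EXIT WHEN rows_deleted = 0;
      END LOOP;
    END;
    /
    ```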

  • Deleting 168 million rows.

    Our main application table has data for the last 5 years, which has caused a performance issue, as the recommended duration of data to be kept is 3 months.
    Right now we have planned to make a copy of the table using the NOLOGGING option, but the problem is deleting the 5 years of old data from this table, as it has around 168,789,200 rows.
    I don't want to use the CTAS option for the main application table and then re-create indexes and recompile all PL/SQL procedures, as I feel this is quite risky.
    When we asked our DBA to take up the activity, they pushed it onto our team (application support), saying it's not their duty!
    Any kind of help is highly appreciated.

    First, are you sure that you cannot do an ordinary delete, and then shrink the table? 168m rows in one transaction will generate some undo and redo, but not necessarily an inordinate amount.
    Secondly, if undo is the problem, you could use dbms_parallel_execute. If you set the chunk size to (for example)  one thousandth of the table and use parallel_level=0 then you will delete and commit on average 168000 rows in each of a thousand consecutive transactions. The redo would still be generated though.
    Either way, this would be an online operation, no downtime. If you have Enterprise Edition licences you could use the Resource manager to slow the job down to ensure that no-one will notice and spread the redo over a long time.
    John Watson
    Oracle Certified Master DBA
    http://skillbuilders.com
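
    John's dbms_parallel_execute suggestion might look roughly like this (the owner, table, and purge predicate are assumptions for illustration; requires 11g):

    ```sql
    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'PURGE_BIG_TABLE');
      -- Roughly 1/1000th of 168M rows per chunk.
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => 'PURGE_BIG_TABLE',
        table_owner => 'APP',
        table_name  => 'BIG_TABLE',
        by_row      => TRUE,
        chunk_size  => 170000);
      -- parallel_level => 0 runs the chunks serially in this session,
      -- committing after each one.
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => 'PURGE_BIG_TABLE',
        sql_stmt       => 'DELETE FROM app.big_table
                           WHERE created_date < ADD_MONTHS(SYSDATE, -3)
                           AND rowid BETWEEN :start_id AND :end_id',
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 0);
      DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'PURGE_BIG_TABLE');
    END;
    /
    ```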

  • Deleting from 2 different tables

    Hi,
    I have a database with 2 tables that are identical:
    products (main product table)
    products_update (every morning this one is filled automatically with the list of products)
    So I need to insert/update/delete the products.
    Add and update work fine.
    <cfquery name="update_data" datasource="mydatasource">
        SELECT *
        FROM products_update
        WHERE exists
    (select * from products where products.no_client = products_update.no_client AND products.no_stock = products_update.no_stock)
    </cfquery>
    <cfquery name="insert_data" datasource="mydatasource">
        SELECT *
        FROM products_update
        WHERE not exists
    (select * from products where products.no_client = products_update.no_client AND products.no_stock = products_update.no_stock)
    </cfquery>
    But I can't figure out how to do the query for the delete, so I created a SELECT query to see the list of products I want to delete, but it's giving me the list of products to update. I'm confused :-) Can someone please point out the obvious to me?
    <cfquery name="delete_data" datasource="mydatasource">
        SELECT *
        FROM products_update
        WHERE exists
    (select * from products where products.no_client = products_update.no_client AND products.no_stock <> products_update.no_stock)
    </cfquery>
    <cfoutput query="delete_data">
    #delete_data.currentrow# : #no_stock# <P>
    </cfoutput>
    Thank you

    Guys it is the weekend. Let us give the sarcasm a rest ..
    I am not sure why you are using SELECTs. Normally you can do this kind of thing with just three (3) SQL statements. But we need more information. The basic idea is clear, but the rules of your application are not. For example, is no_stock a product id, a number in stock, ...?
    - What are the structures of your tables, including PKs and relationships?
    - What are the rules? I.e., in plain English, what determines that a particular record needs to be inserted instead of updated or deleted?
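
    To make the three-statement idea concrete, the delete would typically be driven from products, not products_update (a sketch, assuming (no_client, no_stock) identifies a product):

    ```sql
    -- Delete products that no longer appear in this morning's feed.
    DELETE FROM products p
    WHERE NOT EXISTS (SELECT 1
                      FROM   products_update u
                      WHERE  u.no_client = p.no_client
                      AND    u.no_stock  = p.no_stock);
    ```

    (Exact syntax varies by database; MySQL, for example, needs the `DELETE p FROM products p ...` form to use an alias.)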

  • Change Run - 'M' entries deleted from /BI0/M* table

    Hello everyone,
    I came across the following problem with master data activation...
    After master data loading, new entries appear in all tables (M, P, X, Q, Y) with OBJVERS = 'M'. The next step is to execute the Change Run, which should result in entries with OBJVERS = 'A'. Instead, in the M, P, and X tables these entries are deleted, and in the time-dependent tables (Q, Y) these entries remain in the modified 'M' version. Do you have any idea why the Change Run is working that way?
    The problem concerns 0UCINSTALLA object in BW 3.5.
    Thank you in advance,
    Aleksandra

    Hello again,
    1) Activation - impossible, master data already active.
    2) RSRV - 2 tests failed:
    Time intervals in Q table for a characteristic with time-dep. master data:
    - Characteristic 0UCINSTALLA: Checking consistency of time intervals in Q table
    - Characteristic 0UCINSTALLA: Chain of active intervals: 8 737 gaps (show max. 50):
    - Char. 0UCINSTALLA: Chain of changed intervals: 12 overlapping (show max.25):
    Compare characteristic values in SID/P/and Q tables for characteristic 0UCINSTALLA
    - Characteristic 0UCINSTALLA : Check for all values existing in P, Q and SID tables
    - Characteristic 0UCINSTALLA: Errors found during this test
    - Characteristic 0UCINSTALLA: 8 730 values from table /BI0/SUCINSTALLA do not exist in table /BI0/QUCINSTALLA
    - Characteristic 0UCINSTALLA : Following versions are incorrect: (Display max. 50)
    - Characteristic 0UCINSTALLA: 8 730 values from table /BI0/PUCINSTALLA do not exist in table /BI0/QUCINSTALLA
    - Characteristic 0UCINSTALLA : Following versions are incorrect: (Display max. 50)
