Deleting a large number of records

Hi Gurus,
I need to delete millions of records from a table based on the value of a specific column.
This column is of type BLOB, and the text data in the column looks like:
#29/07/2010 2:20 PM#Eevent#MyEvent#MyClass_classToSchedule=abcd
I need to delete records based on a partial value of the above column (based on 'MyClass_classToSchedule=abcd').
Please help me write a query for this.
Do I need to convert the BLOB to VARCHAR2, or is there a more efficient way?
Note: I don't want to use 'like' or 'nvl' in the query (as those are very costly operations).
Thanks and Regards,
Nitesh

How many rows are in the table, and approximately how many do you want to delete?
Sometimes it is better to copy the remaining rows to a temporary table and truncate the original table instead of deleting from it.
That helps if the delete itself is the slow part.
In your case the search logic inside the BLOB might also be slow. Is it really a BLOB, or is it a CLOB?
Some ways to speed this up have already been suggested.
I think an Oracle Text index queried with CONTAINS is a very promising one.
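For example, a rough sketch of the Oracle Text idea (the table, column, and index names below are placeholders, not from your system; try it on a copy first):
-- create a CONTEXT index so the BLOB can be searched without LIKE
CREATE INDEX event_blob_ctx ON event_log (event_data)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- the braces make Oracle Text treat the whole string literally
DELETE FROM event_log
WHERE CONTAINS(event_data, '{MyClass_classToSchedule=abcd}') > 0;

COMMIT;
If the delete would remove most of the table, the copy-and-truncate route is usually faster: create a table containing only the rows you want to keep, truncate the original, and copy or rename back, instead of running one huge DELETE.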

Similar Messages

  • Best way to delete large number of records but not interfere with tlog backups on a schedule

    I've inherited a system with multiple databases, and there are DB and tlog backups that run on schedules. There is a list of tables that need a lot of records purged from them. What would be a good approach to use for deleting the old records?
    I've been digging through old posts, reading best practices, etc., but I'm still not sure of the best way to attack it.
    Approach #1
    A one-time delete that does everything. Delete all the old records, in batches of say 50,000 at a time.
    After each run through all the tables for that DB, execute a tlog backup.
    Approach #2
    Create a job that does a similar process as above, except don't loop. Only do the batch once. Have the job scheduled to start, say, on the half hour, assuming the tlog backups run every hour.
    Note:
    Some of these (well, most) are going to have relations on them.

    Hi shiftbit,
    According to your description, I have changed the type of this question to a discussion so that more experts will focus on this issue and assist you.
    When deleting a large number of records from tables, use bulk (batched) deletions so that the transaction log does not keep growing and run out of disk space. If you can take the table offline for maintenance, a complete reorganization is usually best, because it does the delete and places the table back into a pristine state.
    For more information about deleting a large number of records without affecting the transaction log, see:
    http://www.virtualobjectives.com.au/sqlserver/deleting_records_from_a_large_table.htm
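    And here is a minimal sketch of such a batched purge (the table, column, and cutoff below are made-up placeholders, not from your system; the 50,000 batch size is the one mentioned in your question):
    -- delete old rows in 50,000-row batches so each transaction stays small,
    -- and take the scheduled tlog backups between runs
    DECLARE @rows INT = 1;
    WHILE (@rows > 0)
    BEGIN
        DELETE TOP (50000) FROM dbo.SomeLargeTable          -- hypothetical table
        WHERE CreatedDate < DATEADD(YEAR, -1, GETDATE());   -- hypothetical cutoff
        SET @rows = @@ROWCOUNT;
    END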
    Hope it can help.
    Regards,
    Sofiya Li
    TechNet Community Support

  • What is the best practice of deleting large amount of records?

    hi,
    I need your suggestions on best practice of deleting large amount of records of SQL Azure regularly.
    Scenario:
    I have a SQL Azure database (P1) into which I insert data every day. To prevent the database size from growing too fast, I need a way to remove all the records that are older than 3 days, every day.
    For an on-premise SQL Server I could use a SQL Server Agent job, but since SQL Azure does not support SQL Agent jobs yet, I have to use a web job scheduled to run every day to delete all old records.
    To prevent table locking when deleting a very large number of records, my web job code limits the deleted records to 5000 per run and the batch delete count to 1000 each time it calls the purge stored procedure:
    1. Get total amount of old records (older then 3 days)
    2. Get the total iterations: iteration = (total count/5000)
    3. Call SP in a loop:
    for(int i=0;i<iterations;i++)
       Exec PurgeRecords @BatchCount=1000, @MaxCount=5000
    And the stored procedure is something like this (the procedure header and the @table declaration are reconstructed; the RecordId type is an assumption):
    CREATE PROCEDURE PurgeRecords @BatchCount INT, @MaxCount INT
    AS
    BEGIN
      -- table variable holding the ids of the rows to purge
      -- (the INT type for RecordId is an assumption)
      DECLARE @table TABLE ([RecordId] INT PRIMARY KEY);

      INSERT INTO @table
      SELECT TOP (@MaxCount) [RecordId] FROM [MyTable] WHERE [CreateTime] < DATEADD(DAY, -3, GETDATE());

      DECLARE @RowsDeleted INTEGER;
      SET @RowsDeleted = 1;
      WHILE (@RowsDeleted > 0)
      BEGIN
        WAITFOR DELAY '00:00:01';
        DELETE TOP (@BatchCount) FROM [MyTable] WHERE [RecordId] IN (SELECT [RecordId] FROM @table);
        SET @RowsDeleted = @@ROWCOUNT;
      END
    END
    It basically works, but the performance is not good. For example, it took around 11 hours to delete around 1.7 million records, which is far too long...
    Following is the web job log for deleting around 1.7 million records:
    [01/12/2015 16:06:19 > 2f578e: INFO] Start getting the total counts which is older than 3 days
    [01/12/2015 16:06:25 > 2f578e: INFO] End getting the total counts to be deleted, total count:
    1721586
    [01/12/2015 16:06:25 > 2f578e: INFO] Max delete count per iteration: 5000, Batch delete count
    1000, Total iterations: 345
    [01/12/2015 16:06:25 > 2f578e: INFO] Start deleting in iteration 1
    [01/12/2015 16:09:50 > 2f578e: INFO] Successfully finished deleting in iteration 1. Elapsed time:
    00:03:25.2410404
    [01/12/2015 16:09:50 > 2f578e: INFO] Start deleting in iteration 2
    [01/12/2015 16:13:07 > 2f578e: INFO] Successfully finished deleting in iteration 2. Elapsed time:
    00:03:16.5033831
    [01/12/2015 16:13:07 > 2f578e: INFO] Start deleting in iteration 3
    [01/12/2015 16:16:41 > 2f578e: INFO] Successfully finished deleting in iteration 3. Elapsed time:
    00:03:33.6439434
    Per the log, SQL Azure takes more than 3 minutes to delete 5000 records in each iteration, and the total time is around
    11 hours.
    Any suggestion to improve the deleting records performance?

    This is one approach:
    Assume:
    1. There is an index on 'createtime'
    2. Peak-time inserts (avgN) are N times higher than the average (avg); e.g. suppose the average per hour is 10,000 and peak time is 5 times more, which gives 50,000. This doesn't have to be precise.
    3. The desirable maximum number of records to delete per batch is 5,000; it doesn't have to be exact.
    Steps:
    1. Find count of records more than 3 days old (TotalN), say 1,000,000.
    2. Dividing TotalN (1,000,000) by 5,000 gives the number of delete batches (200) if inserts are perfectly even. But since they are not even and inserts can be 5 times higher per period at peak, set the number of delete batches to 200 * 5 = 1,000.
    3. Dividing 3 days (4,320 minutes) by 1,000 gives 4.32 minutes.
    4. Create a delete statement and a loop that, on iteration I (I from 1 to 1,000), deletes records with creation time earlier than (now - 3 days) - (4,320 - 4.32 * I) minutes. The cutoff starts roughly six days back and advances by 4.32 minutes per iteration until it reaches the 3-day boundary.
    In this way the number of records deleted in each batch is not even and not known in advance, but it should mostly stay within 5,000; you run a lot more batches, but each batch is very fast.
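    A minimal T-SQL sketch of this time-sliced loop (the table and column names are carried over from the stored procedure above; the constants come from steps 2-4 and would normally be computed from the actual counts):
    -- the cutoff starts roughly six days back and advances 4.32 minutes per
    -- iteration until it reaches the 3-day boundary
    DECLARE @i INT = 1;
    WHILE (@i <= 1000)
    BEGIN
        DELETE FROM [MyTable]
        WHERE [CreateTime] < DATEADD(MINUTE, -CAST(4320 - 4.32 * @i AS INT),
                                     DATEADD(DAY, -3, GETDATE()));
        SET @i = @i + 1;
    END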
    Frank

  • Problem fetching a large number of records

    Hi
    I want to fetch a large number of records from the database, and I use a secondary index database to improve performance. For example, my database has 100,000 records and the query fetches 10,000 of them. I use the secondary database as an index and iterate over it until I have fetched all of the records that match my condition, but the performance of this loop is terrible.
    I know that when I use DB_MULTIPLE it fetches all of the information and the performance improves, but
    I read that I cannot use this flag when I use a secondary database as an index.
    Please tell me which flag or approach would fetch all of the matching records together, so that I can then handle the data in my language.
    Thanks a lot
    regards
    saeed

    Hi Saeed,
    Could you post here your source code, that is compiled and ready to be executed, so we can take a look at the loop section ?
    You won't be able to do a bulk fetch, that is, retrieval with DB_MULTIPLE, given that the records in the primary are unordered by master (you don't have 40K consecutive records with master='master1'). So the only way to do things in this situation is to position a cursor in the secondary on the first record with the secondary key 'master1', retrieve the duplicate data (the primary keys in the primary db) one by one, and do the corresponding gets in the primary database based on the retrieved keys.
    There may be another option worth considering, if you are willing to handle more work in your source code: keep a database that acts as a secondary, in which you update the records manually to mirror the modifications performed in the primary db, without ever associating it with the primary database. This "secondary" would have <master> as key, and <std_id>, <name> (and other fields if you want) as data. Note that for every modification you perform on the std_info database you'll have to perform the corresponding modification on this database as well. You'll then be able to do the DBC->c_get() calls on this database with the DB_MULTIPLE flag specified.
    I have another question: is there any way to fetch information by record number? For example, fetch the record located at the third position in my database.
    I guess you're referring to logical record numbers, like a relational database's row id. Since your databases are organized as BTrees (without the DB_RECNUM flag specified), this is not possible directly. You could do it by using a cursor and iterating through the records, stopping on the record whose number is the one you want (using an incrementing counter to keep track of the position). If your database had been configured with logical record numbers (BTree with DB_RECNUM, Queue or Recno), this would have been possible directly:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/logrec.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_conf/renumber.html
    Regards,
    Andrei

  • Lookups with large number of records do not return the page

    Hi,
    I am developing an application using Oracle JHeadstart 10.1.3 Preview Version 10.1.3.0.78
    In my application I created a lookup under Domains and used that lookup for an attribute (the Display Type for this attribute is dropDownList) in a group, to get the translation for this attribute. The group has around 14,800 records and the lookup has around 7,400 records.
    When I try to open this group (tab), the progress indicator shows that it is working, but the group does not open even after a long time.
    If I change the Display Type for the attribute from dropDownList to textInput then it works fine.
    I have other lookups with a lower number of records. Those lookups work fine with the dropDownList Display Type.
    I only have this kind of problem when a lookup has a large number of records.
    Is there any limit on the number of records for lookups under Domains?
    How can I solve this?
    I need to translate the attribute (get the description from another table using the code).
    Your help would be appreciated.
    Thanks
    Syed

    We have also faced a similar issue, but for us it was happening when we were using the dropDownList in a table, while the same dropDownList was working in form layout. In our case the JVM just used to crash, and after googling it here in the forums we found that it might be related to a JVM issue on Windows XP machines without Service Pack 2.
    Anyway... the workaround we took to get around the issue is to use an LOV instead of a dropDownList in JHeadstart.
    Hope this helps...
    - rutwik

  • Delete large number of accounts from GL master

    Hello All:
    I'm doing a large amount of account deletion. There is no DI support for this, so I was trying to use the UI on form 750.
    I tried both methods: using the Find button and then Delete, and searching through the matrix -> clicking on the row to bring up the account and then activating the menu item manually.
    The problem is that, for some reason, after a certain number of records are found the Delete menu item (5910) no longer works. It doesn't throw an exception, nor does it do anything. If you try it manually it still works. I suppose the program just runs so fast that the UI cannot catch up?
    I don't know if anyone has a better solution, because I have tried several ways with no improvement. It still locates the record fine, but won't delete the account; it just goes on to the next record without doing anything!
    for (int i = 0; i < Global.orecord.RecordCount; i++)
    {
        Global.formatcode = Global.orecord.Fields.Item("formatcode").Value.ToString();
        Global.omatrix = (SAPbouiCOM.Matrix)(Global.oform.Items.Item("3").Specific);
        for (int i2 = lastnum; i2 <= Global.omatrix.RowCount; i2++)
        {
            Global.oedit = (SAPbouiCOM.EditText)(Global.omatrix.Columns.Item("1").Cells.Item(lastnum + 1).Specific);
            formatcode1 = Global.oedit.Value.ToString();
            if (formatcode1.IndexOf(" - ") != -1)
                formatcode1 = formatcode1.Substring(0, formatcode1.IndexOf(" - "));
            formatcode1 = formatcode1.Replace("-", "").ToString();
            if (formatcode1.Length == 21)
            {
                try
                {
                    if (formatcode1 == Global.formatcode)
                    {
                        // select the matching row and trigger the Delete menu item (5910)
                        Global.omatrix.Columns.Item("1").Cells.Item(lastnum + 1).Click(SAPbouiCOM.BoCellClickType.ct_Regular, 0);
                        Global.oapplication.ActivateMenuItem("5910");
                        break;
                    }
                    else if (System.Convert.ToInt64(Global.formatcode.Substring(0, 13)) <= System.Convert.ToInt64(formatcode1.Substring(0, 13)))
                    {
                        break;
                    }
                }
                catch { }
            }
            lastnum = i2;
        }
    }

    Thanks, rkaufmann87, I've tried that many times thinking I can sneak up on it (funny when we're to the point to try ANYTHING)----AB not cooperating on that one, either. The group will delete, but the names/cards are still on the main list and won't delete from the main Address Book. If you "select all" it goes back to the Address Book main list, not the group which is wanting to be deleted.

  • How do I delete large number of email messages?

    I get a large number of emails. Is there a way to select them on my iPod Touch 32g in groups rather than one at a time when I delete many messages?

    Did you ever find a solution to this?
    I, too, would like to DELETE lots of emails at the same time, even ALL of my old emails, without having to use EDIT, then touching each email one at a time, then selecting DELETE. Painfully slow if you have 200+ emails ... this should take 5 seconds, not 10 minutes.

  • How do I delete large number of duplicates on my Itunes w/o Ctrl+Click

    I have a large number of duplicates that were loaded into my iTunes and I would like to delete them. So far the only way I have found is to go down the list one at a time, Ctrl+Click, and then delete. Since iTunes can identify duplicates, is there a function for removing all the duplicates before I sync my iPod???
      Windows XP  

    iTunes can't mass-delete duplicates, but one of the forum members has written a script to do it, see:
    http://home.comcast.net/~teridon73/itunesscripts/
    If you prefer to go commercial take a look at iTsync
    http://www.itsyncsoftware.com/itsync.htm

  • Analyze table after insert a large number of records?

    For performance purposes, is it a good practice to execute an 'analyze table' command after inserting a large number of records into a table in Oracle 10g, if a complex query follows the insert?
    For example:
    insert into foo ......                  -- insert one million records into table foo
    analyze table foo COMPUTE STATISTICS;   -- analyze table foo
    select * from foo, bar, car ......      -- execute a complex query without hints
                                            -- after 1 million records are inserted into foo
    Does this strategy help to improve the overall performance?
    Thanks.

    Different execution plans will most frequently occur when the ratio of the numbers of records in the various tables involved in the select has changed tremendously. This happens above all when 'fact' tables are growing and 'lookup' tables stay constant.
    This is why you shouldn't test an application with a small number of 'fact' records.
    This can happen with both analyze table and dbms_stats.
    The advantage of dbms_stats is that it can export the current statistics to a stats table, so you can always revert to them later using the dbms_stats import procedures.
    You can even overrule individual table and column statistics by artificial values.
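    For example, the dbms_stats equivalent of the ANALYZE in the question could look like this (the sampling options are only illustrative):
    BEGIN
      -- gather table (and, with cascade, index) statistics on FOO after the bulk insert
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'FOO',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);
    END;
    /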
    Hth
    Sybrand Bakker
    Senior Oracle DBA

  • Which is the best way for posting a large number of records?

    I have around 12000 registers to commit to the database.
    Which is the best way of doing it?
    What does it depend on?
    Nowadays I can't commit such a large number of register... The database seems to hang!!!
    Thanks in advance

    Xavi wrote:
    Nowadays I can't commit such a large number of register
    It should be possible to insert tens of thousands of rows in a few seconds using a single insert statement, even with a complex query such as the all_objects view, and commit at the end.
    SQL> create table t as select * from all_objects where 0 = 1;
    Table created.
    Elapsed: 00:00:00.03
    SQL> insert into t select * from all_objects;
    32151 rows created.
    Elapsed: 00:00:09.01
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.00
    I meant RECORDS instead of REGISTERS.
    Maybe that is where you are going wrong; records are for putting on turntables.

  • CLIENT_TEXT_IO - Hanging on "PUT" for large number of records

    I have successfully used CLIENT_TEXT_IO, but my users have run into an error where the Form hangs and spits out details such as:
    "oracle.forms.net.HTTPNStream.doFlush"
    etc....
    This happens when the number of records in the datablock is high (ex: 70,000 recs). So my question is: Is there a limit on how many lines you can write to a file?
    I'm just creating a CSV file on the client's machine using CLIENT_TEXT_IO.PUT_LINE. It works fine on say a few thousand recs but after that it hangs.
    I'm on Oracle Application Server 10g, Release 9.0.4 on Windows Server 2003, and forms compiled using Oracle Developer Suite 9.0.4.
    Thanks,
    Gio

    Hello,
    When working with huge amounts of data, it is better to generate the file on the application server and then transfer it back to the client.
    Read this article.
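    A rough sketch of the server-side variant (TEXT_IO has essentially the same interface as CLIENT_TEXT_IO but writes on the application server; the block name, columns, and file path below are made-up placeholders):
    DECLARE
      out_file TEXT_IO.FILE_TYPE;
    BEGIN
      -- write the CSV on the application server instead of the client PC
      out_file := TEXT_IO.FOPEN('/tmp/export.csv', 'w');
      GO_BLOCK('MY_BLOCK');
      FIRST_RECORD;
      LOOP
        TEXT_IO.PUT_LINE(out_file, :MY_BLOCK.COL1 || ',' || :MY_BLOCK.COL2);
        EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
        NEXT_RECORD;
      END LOOP;
      TEXT_IO.FCLOSE(out_file);
      -- afterwards, transfer the finished file back to the client
      -- (for example with WebUtil's file transfer package) if it is needed locally
    END;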
    Francois

  • How to delete fixed number of records at a time

    Hi,
    I have millions of records in one table. I have to purge the first 15 days of data per day, deleting 10,000 records at a time.
    Appreciate any ideas from you.
    Thanks in Advance.
    regards,
    Suresh

    Hi,
    I have millions of records in one table. I have to purge the first 15 days of data per day, deleting 10,000 records at a time.
    Appreciate any ideas from you.
    Obviously you will need a timestamp.
    I have one column which holds the record creation time.
    Why would you limit it to 10,000 at a time?
    I am using Oracle 9i as the back end. My requirement is not to delete more than 10,000 at a time, as the load on this table will be very high.
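    A minimal PL/SQL sketch of that kind of limited delete (the table name, timestamp column, and cutoff are assumptions based on the thread; whether to pause or commit between passes depends on the load requirements):
    BEGIN
      LOOP
        -- delete at most 10,000 of the old rows per pass
        DELETE FROM my_table
        WHERE  record_created < SYSDATE - 15
        AND    ROWNUM <= 10000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
      COMMIT;
    END;
    /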

  • Deleting large number of rows_urgent

    Hi Friends,
    I need to write a procedure which will purge a table having more than 8 crore records, based on a parameter passed by the user. We open a cursor and in a loop delete the rows one by one; when the count reaches 50,000 we issue a commit so the rollback segment problem can be avoided. But this is taking plenty of time... very, very slow. Is there any other way to improve the performance? Instead of deleting one by one, is it possible to delete 20,000 rows in one fetch?
    Please suggest.

    Try the BULK COLLECT method of PL/SQL.
    It gives you the option to process a set of records with a single execution.
    here is an example.
    declare
      type ty_dept is table of number index by binary_integer;
      dept_list ty_dept;
      cursor cur_dept is select deptno from dept;
    begin
      open cur_dept;
      loop
        -- do bulk collect 1000 at a time
        fetch cur_dept bulk collect into dept_list limit 1000;
        -- check the collection count rather than %notfound,
        -- so the last (partial) batch is still processed
        exit when dept_list.count = 0;
        -- process the whole batch with a single FORALL delete
        forall i in 1..dept_list.count
          delete from emp where deptno = dept_list(i);
      end loop;
      close cur_dept;
    end;
    In the above example it processes 1,000 rows at a time, so instead of looping 1,000 times and executing 1,000 delete statements, it does it once per batch.
    Maybe this will improve your performance.
    Benchmark with an initial set of records before applying it to the entire 8 crore.

  • Slow record selection in tableView component with large number of records

    Hi experts,
    we have a Business Server Page (flow logic) with several htmlb:inputField's. As known from SAP standard we would like to offer value helper (F4) to the users for the ease of record selection.
    We use the onValueHelp() method of the inputField to open a extra browser window through JavaScript. In the popup another html-website is called, containing a tableView component with all available records. We use the SINGLESELECT mode for the table view.
    Everything works perfectly and efficiently unless the tableView contains too many entries. If the number of possible entries is large, the whole component performs very, very slowly. For example, selecting a record can take more than one minute. Navigating between pages through the buttons at the bottom of the component also takes a lot of time. It seems that the tableView component cannot handle that many entries.
    We tried switching between stateful and stateless mode, without success. Is there a way to perform the tableView selection without doing a server round trip? Any ideas and comments will be appreciated.
    Best regards,
    Sebastian

    Hi Raja,
    thank you for your hint. I took a look at sbspext_table/TableViewClient.bsp but did not really understand how the JavaScript coding works. Where is the JavaScript code in that example? Which file contains it?
    Meanwhile I implemented another way to avoid the server round trip:
    - Switch page mode of the popup window to "Stateful"
    - Use OnInitialization method like OnCreate (as shown in [using OnInitialization like OnCreate])
    - Limit the results of the SELECT statement with UP TO 1000 ROWS
    Best regards,
    Sebastian

  • How to delete large number of email notifications from facebook

    I have been receiving email notifications from Facebook for years. I only know how to delete them one at a time. I now want to clean up my computer but do not know how to delete these unneeded emails; they number in the thousands.

    Holding Shift while you click selects everything between the two mouse clicks.
    Using Ctrl you can fine-tune what is selected by clicking on items to unselect them.
    Got the idea?
