Want suggestion regarding a bulk deletion

Hi,
I need some suggestions regarding a deletion; I have the following scenario.
tab1 contains 100 items.
For each item, tables tab2..tab6 contain 4,000 rows, so the loop runs once per item, deletes 20,000 rows, and commits.
Currently, deleting 500,000 rows takes 1 hour. All the tables and indexes are analyzed.
DECLARE
  CURSOR C_CHECK_DELETE_IND IS
    SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y';
  TYPE p_item IS TABLE OF tab1.item%TYPE;
  act_p_item p_item;
BEGIN
  OPEN C_CHECK_DELETE_IND;
  LOOP
    FETCH C_CHECK_DELETE_IND BULK COLLECT INTO act_p_item LIMIT 5000;
    FOR i IN 1..act_p_item.COUNT
    LOOP
      DELETE FROM tab2 WHERE item = act_p_item(i);
      DELETE FROM tab3 WHERE item = act_p_item(i);
      DELETE FROM tab4 WHERE item = act_p_item(i);
      DELETE FROM tab5 WHERE item = act_p_item(i);
      DELETE FROM tab6 WHERE item = act_p_item(i);
      COMMIT;
    END LOOP;
    EXIT WHEN C_CHECK_DELETE_IND%NOTFOUND;
  END LOOP;
  CLOSE C_CHECK_DELETE_IND;
END;
I hope I have explained the scenario. Can you please suggest the right approach?
Thanks in advance.

Hi,
Why not just use straight SQL? i.e.
DELETE FROM tabn
WHERE  item IN (
  SELECT api.item
  FROM   tab1 api
  WHERE  api.delete_item_ind = 'Y');
For bulk deletes, other techniques include:
disabling constraints then reenabling them
making indexes unusable and then rebuilding them
creating temporary tables with the data you want left, dropping the source table and renaming the temp table
partitioning
Which of these is most useful depends on a lot of factors such as data volumes versus delete volumes, system outage availability, concurrency issues etc.
Regards
Andre
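
As a sketch of the keep-and-rename technique listed above (assuming tab2 carries the same item column as tab1, and that indexes, constraints, triggers and grants are recreated afterwards):

```sql
-- Sketch only: rebuild tab2 with the rows to keep, then swap names.
CREATE TABLE tab2_keep AS
  SELECT t.*
  FROM   tab2 t
  WHERE  t.item NOT IN (SELECT api.item
                        FROM   tab1 api
                        WHERE  api.delete_item_ind = 'Y');

DROP TABLE tab2;
ALTER TABLE tab2_keep RENAME TO tab2;
-- Recreate indexes, constraints, triggers and grants on the new tab2.
```

This avoids generating undo/redo for every deleted row, which is why it tends to win when the delete volume is a large fraction of the table.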

Similar Messages

  • Regarding bulk delete..

    Hi,
    I have around 9,400k records that should be deleted. I am using a FORALL delete with BULK COLLECT. Here is the code for reference.
    declare
      type cust_array_type is table of number
        index by binary_integer;
      employee_array cust_array_type;
      v_index number;
    begin
      select empl_no bulk collect
        into employee_array from employee_history;

      FORALL i IN employee_array.FIRST..employee_array.LAST
        delete from ord where empl_no = employee_array(i);

      v_index := employee_array.FIRST;
      for i in employee_array.FIRST..employee_array.LAST loop
        dbms_output.put_line('delete for employee '
          || employee_array(v_index)
          || ' deleted '
          || SQL%BULK_ROWCOUNT(v_index)
          || ' rows.');
        v_index := employee_array.NEXT(v_index);
      end loop;
    end;
    Still, the data is not deleting. Please advise.

    user13301356 wrote:
    but normal delete is taking more time so to improve performance using bulk collect delete.
    so what is best approach to delete to go by bulk delete or normal delete.Look at it in simple terms...
    Method 1: Delete all Rows
    Method 2: Query all Rows then Delete all Rows
    which one, logically to you, is doing more work than the other?
    If your delete is taking a long time, that's because:-
    a) you haven't got suitable indexes on the target table to determine the records to be deleted
    b) you haven't designed the table to use partitions, which could then simply be truncated (a costed option; a licence is required for partitioning)
    c) you are just deleting a lot of data and it's going to take time.
    No amount of PL/SQL coding around the basic SQL of a delete is going to improve the performance. You can't make any code delete the rows faster than a delete statement itself.
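
    To make the point concrete, the whole PL/SQL block above collapses to one statement (table and column names taken from the post):

    ```sql
    -- One pass; no fetching into a collection first.
    DELETE FROM ord
    WHERE  empl_no IN (SELECT empl_no FROM employee_history);
    COMMIT;
    ```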

  • Forall bulk delete is too slow to work,seek advice.

    I use a PL/SQL stored procedure to do some ETL work. It picks up refreshed records from a staging table, checks whether the same record exists in the target table, then does a FORALL bulk deletion first, then a FORALL insert of all refreshed records into the target table. The insert part is working fine; only the deletion part is too slow to get the job done. My code is listed below. Please advise where the problem is. Thanks.
    Declare
      TYPE t_distid IS TABLE OF VARCHAR2(15) INDEX BY BINARY_INTEGER;
      v_distid t_distid;
      CURSOR dist_delete IS
        select distinct distid FROM DIST_STG where data_type = 'H';
    Begin
      OPEN dist_delete;
      LOOP
        FETCH dist_delete BULK COLLECT INTO v_distid LIMIT 1000;
        EXIT WHEN v_distid.COUNT = 0;  -- without this the loop never ends
        FORALL i IN v_distid.FIRST..v_distid.LAST
          DELETE DIST_TARGET WHERE distid = v_distid(i);
      END LOOP;
      CLOSE dist_delete;
      COMMIT;
    end;
    /

    citicbj wrote:
    Justin:
    The answers to your questions are:
    1. Why would I not use a single DELETE statement? Because this PL/SQL procedure is part of an ETL process. The procedure is scheduled by the Oracle scheduler; it will automatically run to refresh the data. Putting the DELETE in a stored procedure is better for execution by the scheduler.
    You can compile SQL inside a PL/SQL procedure / function just as easily as coding it the way you have, so that's really not an excuse. As Justin pointed out, the straight SQL approach is what you want to use.
    >
    2. The records in dist_stg with data_type = 'H' vary each month, ranging from 120 to 5,000 records. These records were inserted into the target table before, but they have since been updated in the transactional database. We need to delete the old records in the target and insert the updated ones to replace them. The distid is the same and unique: I use distid to delete the old row and insert the updated record with the same distid into the target again. When a user runs a report, the updated records then show up on it. A plain SQL statement deletes 5,000 records in a few seconds; my code above takes forever. The database keeps going without any error message, and there are no triggers or FKs involved.
    3. Merge. I haven't tried that yet; I may give it a try.
    Quite likely a good idea based on what you've outlined above, but at the very least, replace the procedural code with the straight delete as suggested by Justin.
    >
    Thanks.
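
    As a sketch of the point about compiling the SQL inside a procedure (table and column names come from the post; the procedure name is an assumption), the scheduler can simply call:

    ```sql
    CREATE OR REPLACE PROCEDURE refresh_dist_target IS
    BEGIN
      -- Straight SQL delete; the scheduler calls this procedure directly.
      DELETE FROM dist_target t
      WHERE  t.distid IN (SELECT s.distid
                          FROM   dist_stg s
                          WHERE  s.data_type = 'H');
      -- The insert of the refreshed rows would follow here.
      COMMIT;
    END refresh_dist_target;
    /
    ```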

  • Which is fast bulk delete or id's in a table and a where exists ....?

    I have some parent objects that I fetch with BULK COLLECT and a fetch limit, and I currently store the primary keys of these parent objects to identify their child objects later, using WHERE EXISTS with a correlated subquery.
    I'm essentially moving object graphs that span partitions from table to table.
    When I've done my INSERT INTO ... SELECT, I eventually do a delete.
    Currently the delete uses the parent objects in this working table to identify the children to delete later.
    Q. What is likely to be faster:
    using a "temporary" table to requery for child objects based on the parents that I have for each batch, or
    using a RETURNING clause from my insert so that I have rowids or primary keys to work with later on
    when I want to perform my delete operation?
    I essentially have A's that have child B's, which in turn have child C's.
    I store a batch of A pks in a table and use those to identify the B's.
    Currently I don't store the B pks but use the A pks again to identify the B's, which in turn are used to identify the C's later.
    I'm thinking that if I remember the pks I'm using at each level, I can then use those later when it comes to the deletes.
    Typically that's done with a RETURNING clause and a bulk delete from that collection later.
    thoughts?
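
    A minimal sketch of the RETURNING idea (all table and column names here are hypothetical, and it assumes no FK forces a child-first delete order):

    ```sql
    DECLARE
      TYPE t_ids IS TABLE OF b_objects.b_id%TYPE;
      v_b_ids t_ids;
    BEGIN
      -- Delete the B's for one batch of A's, remembering their keys.
      DELETE FROM b_objects b
      WHERE  b.a_id IN (SELECT a_id FROM a_batch)
      RETURNING b.b_id BULK COLLECT INTO v_b_ids;

      -- Use the remembered keys to delete the C's without requerying the A's.
      FORALL i IN 1 .. v_b_ids.COUNT
        DELETE FROM c_objects WHERE b_id = v_b_ids(i);

      COMMIT;
    END;
    /
    ```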

    Parallel DML is one option. Another is to create a procedure (or package) that does a discrete unit of work (e.g. process a parent and its children as a single business transaction), and then write a "thread manager" that runs x copies of these at the same time (via DBMS_JOB for example).
    Let's say the procedure's signature is as follows:
    create or replace procedure ProcessFamily( parentID number ) is ..
    --// processes a family (parent and children)
    ..
    Using DBMS_JOB is pretty easy: when you start a job you get a job number for it. Looking at USER_JOBS will tell you whether that job is still in the job queue or has completed (once-off jobs are removed from the queue). The core of this code will be a loop that checks how many jobs (threads) are running and, if fewer than the ceiling (e.g. it may only use 20 threads), starts more ProcessFamily jobs.
    If the total number of threads/jobs to execute are known up front, then this ThreadManager can manually create a long operation entry. Such an entry contains the number of unit of works to do and then is updated with the number of units done thus far. Oracle provides time estimates for completion and percentage progress. This long operation can be tracked by most Oracle-based monitoring software and provide visibility as to what the progress is of the processing.
    The ProcessFamily procedure can also use parallel DML (if that makes sense), or bulk processing (if needed). This approach also scales as the h/w increases (server upgrades, new server h/w): so too does your ability to run more threads (aka jobs) at the same time.
    Now I'm not suggesting that you write a ProcessFamily() proc - I do not know the actual data and problem you're trying to solve. What I'm trying to convey is the basic principle for writing multi-threaded/parallel processing software in PL/SQL. And it is not that complex. The critical thing is simply that the parallel procedure or thread be entirely thread safe - meaning that multiple copies of the same code can be started and these copies will not cause serialisation, deadlocking, and other (application-design) problems.
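
    A minimal sketch of such a thread manager (the driver table name, the ceiling of 20, and the ProcessFamily signature follow the description above; treat it as a starting point, not production code):

    ```sql
    DECLARE
      v_job     BINARY_INTEGER;
      v_running PLS_INTEGER;
    BEGIN
      FOR r IN (SELECT parent_id FROM parents_to_process) LOOP  -- hypothetical driver table
        -- Wait until fewer than 20 jobs are queued or running.
        LOOP
          SELECT COUNT(*) INTO v_running FROM user_jobs;
          EXIT WHEN v_running < 20;
          DBMS_LOCK.SLEEP(5);
        END LOOP;
        -- Submit one ProcessFamily job for this parent.
        DBMS_JOB.SUBMIT(v_job, 'ProcessFamily(' || r.parent_id || ');');
        COMMIT;  -- the job only becomes visible to the job queue after commit
      END LOOP;
    END;
    /
    ```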

  • Bulk delete Secure Zones subscribers

    Hi, does anyone know if there's a way to bulk delete Secure Zone subscribers rather than clicking "delete" for each in the list?
    My client needs to delete all and import a new set of Secure Zone subscribers each year. We don't necessarily want to remove them as customers in the CRM, just remove their access to the Secure Zone in bulk.
    Thanks
    Rob

    Hello,
    That is possible. On the literature page, go to Actions > Make Media Download Secure, and assign all the Secure Zones targeted.
    Regarding the payment option, the monthly payment is set up in the membership cost amount field when setting up the secure zone, but when the first invoice is generated, you can access it in the admin interface and modify its value.
    Also, you can consider using products instead of literature items, making them downloadable and assigning 2 prices, standard and wholesaler. At that point you just make sure that subscribers of the 2nd secure zone are made wholesalers so that they access the smaller price.

  • Is there a way to bulk delete records

    It seems that I have a lot of duplicated records in my "Central" area, so I want to either filter by Area and then delete the duplicates, if there is a way to do that, or bulk delete every record that has "Central" in the Area column.
    Is that possible?

    Are you able to select more than 100 through the Content and Structure manager?
    OR
    I found a technet article that uses powershell to perform a bulk-delete, it might be your best bet to start here:
    http://social.technet.microsoft.com/wiki/contents/articles/19036.sharepoint-using-powershell-to-perform-a-bulk-delete-operation.aspx
    Edit: is this you?
    http://sharepoint.stackexchange.com/questions/136778/is-there-a-way-to-bulk-delete-records ;)

  • Need suggestion regarding file compression and splitting

    Hi,
    I want to split and compress big files into small chunks, so that later any standard zip utility (e.g. WinZip, 7-Zip) can merge (extract) all the chunks to regenerate the original file. I am looking for a Java library which provides this split-and-compress functionality.
    Java also has built-in compression support. Should I use those libraries, or is there another open-source option? I welcome your suggestions on how to start.
    Thanks

    If you're just looking for something to be used internally, it'd be pretty simple:
    1. Open your source InputStream.
    2. Create a standard read/write loop to pump the stream.
    3. Add a counter that determines how much you've pushed into your current target. After you reach a certain value, close your current target, create a new target stream, and reset the counter.
    4. Conclude by closing your source and your current target.
    For compression, you can use the built-in GZIPOutputStream or a third-party library of your choice. I usually find GZIP sufficient for my needs.
    Of course, if you want the output to be compatible with other programs like 7-Zip, you've got a lot more work on your hands complying with the file format spec. :)

  • Can I bulk delete duplicate pictures in Elements 11?

    I imported about 2600 pictures from iPhoto to Elements 11 on my iMac OS 10.8.2.  Now there are hundreds of duplicates; deleting one at a time is a real pain.  Can I identify and bulk delete duplicates in Elements 11?

    Yes, you can select the thumbnails in the Organizer and hit Delete, then choose to also delete them from the hard disk.
    If you want to use iPhoto it’s best to set up the Elements Editor application for external editing in iPhoto preferences. That will avoid duplication on your hd. See link for notes:
    http://helpx.adobe.com/photoshop-elements/kb/photoshop-elements-iphoto-mac-os.html

  • How can I bulk delete songs from the playlist, if original file could not be found.

    I deleted the original files from my iTunes library, but the songs are still in the playlist. How can I bulk delete songs from the playlist if the original file cannot be found? Thanks.

    Take a look at my FindTracks script. Download it and test it out on a few tracks first. Use the No option first time around to manually confirm each path that it wants to fix so you can see how it works. When you're happy it does what you want select a larger group of tracks to process automatically.
    tt2

  • Suggestion regarding Oracle import & export utility

    hi
    I am Navneet. I want to send some suggestions about the Oracle import & export utility to Oracle Corporation. Can you tell me where to send these suggestions?
    regards
    navneet

    It would seem to me that if they are enhancement requests, then the right way would be to file an enhancement request. If they are bugs, then file SRs so bugs can be entered.
    Dean

  • FORALL bulk delete statement it requires lot of time

    Hi,
    When I execute a FORALL bulk delete statement, it requires a lot of time even for a 10-row deletion. How can I avoid this problem? I checked SGA_TARGET and PGA_TARGET; current_size and max_size are the same for both. Is there a memory problem?
    thanks in advance
    Vaibhav

    Please look at this:
    http://psoug.org/definition/LIMIT.htm
    If you add a LIMIT 100 clause, your bulk collect / FORALL construction will process 100 records per batch.
    This will avoid filling up undo tablespaces / PGA memory.
    There's also a discussion whether to use LIMIT 100 or LIMIT 1000 (or greater).
    It depends on what your system can handle; if it's a busy production statement I'd stick to a value of 100.
    The reason is that your query will then not take a huge amount of PGA.
    DECLARE
      CURSOR c_test IS
        SELECT c1 FROM test WHERE c1 <= 150000;
      TYPE t_id_tab IS TABLE OF test.c1%TYPE;
      l_id_tab t_id_tab;
    Begin
      dbms_output.put_line(DBMS_UTILITY.get_time);
      OPEN c_test;
      LOOP
        -- LIMIT belongs on the FETCH ... BULK COLLECT, not on a SELECT INTO
        FETCH c_test BULK COLLECT INTO l_id_tab LIMIT 100;
        EXIT WHEN l_id_tab.COUNT = 0;
        FORALL i IN l_id_tab.first .. l_id_tab.last
          delete from TEST where c1 = l_id_tab(i);
        commit;
      END LOOP;
      CLOSE c_test;
      dbms_output.put_line(DBMS_UTILITY.get_time);
    End;
    We use this construction to load data into our warehouse and bumped into the LIMIT 100 part while testing.
    It did speed up things significantly.
    Another (risky!) trick is to do:
    -- exclude table from archivelogging
    ALTER TABLE TEST NOLOGGING;
    -- execute plsql block
    <your code comes here>
    -- enable logging on the table
    ALTER TABLE TEST LOGGING;
    The risky part is that you get inconsistent archive logging this way, because your deletes aren't logged.
    When you do a restore from backup and want to roll forward to the latest change in your archive logs, you'll miss these deletes and your table will not be consistent with the rest of your database.
    In our warehouse situation we don't mind; the data gets refreshed every night.
    Hope this helps!
    Robin

  • Okay so I set up my Time Capsule already and is now backing up 2 of my iMacs. Works great. What I want to know is how to use the TC to directly store files? I want to do this to delete some files but still have them on the TC for future reference..

    Okay, so I set up my Time Capsule already and it is now backing up 2 of my iMacs. Works great. What I want to know is how to use the TC to directly store files. I want to do this so I can delete some files on the iMac 20-inch but still have them on the TC for future reference, e.g. some movies in iTunes. I want to save them directly on the drive so I can delete them from iTunes and gain some storage. (P.S. On the iMac 20-inch (it's almost full - 320 GB), when I enter Time Machine, a tab comes up in Finder which reads "Time Machine Backups"; it can be ejected like a disc or a connected device. On the iMac 20-inch, I dragged some files onto there as if using it like a hard drive. Is this the correct method? Then I went to my 27-inch iMac and saw the "Time Machine Backups", hoping to see the files I dragged from the 20-inch iMac. But the files were not there, except a folder that said "Backups.backupdb". Can someone help me?

    It's not a good idea to use a network disk for both Time Machine backups and other things.  By design Time Machine will eventually consume all the space on its output disk, which will then cause problems for your other files.  I'd store those other files on an external disk connected to the Time Capsule.  The problem with that is that Time Machine will only back up files that are local to your Mac.  That means that you'll only have one copy of the files on or attached to your Time Capsule.
    By the way, you've been misled by poor field labeling on this forum into typing a large part of your message into the field intended for the subject.  In the future just type a short summary of your post into that field and type the whole message into the field below that.

  • My videos have moved a few times, so many of the movies show up 4 times, but only one of the 4 is the real path. How do I get rid of the invalid paths en masse? I know I can remove them one-by-one, but I want to do a bulk clean-up.

    I have many movies. I have moved them a few times. Now each movie shows up about 4 times in the video list, but only 1 of the 4 is a valid path. How can I get rid of the invalid ones en masse? I know I can do them individually, but it would take too long. So I want to do a bulk clean-up so all the links are valid, but I can't find a way to do it.

    Install iTunes Folder Watch and set its option to check for dead tracks on startup. This will root out any tracks that are no longer in your folders, and can add any that are in your folders, but not in iTunes.
    tt2

  • I have some content in my library that I do not want to sync with my ipod touch. The content has previously been synced and I want to remove it without deleting it from my itunes library. Is this possible?

    I have some music content in my itunes library that I do not want to sync with my ipod touch. The content has previously been synced and I want to remove it without deleting it from my itunes library. Is this possible?

    Hi Pez,
    iTunes in the Cloud is not iCloud. Any songs that you have purchased but have not downloaded (or have deleted), go to iTunes in the cloud. The setting I mentioned will show all of your purchases in the various views in the Music app, whether they are on your device or not. If you turn that setting off, you will only see items that are actually on your device.
    Glad to hear the second sync worked! But if they all start appearing again, make sure that setting is turned off
    Cheers,
    GB

  • I borrowed a hard drive from a friend that contains many thousands of songs. Somehow when tranferring these to iTunes I managed to get several copies of each song. How can i do a bulk delete of all the copies leaving just one version of each song?

    I need help please. I borrowed a hard drive from a friend which contains many thousands of songs. When adding them to my iTunes library I somehow managed to make multiple copies of each song. Is there a way to do a bulk delete of all the duplicates? If I have to go one by one it will take a lifetime.

    I have the same problem. I keep backups on a home server and this is where my library pointed to. I changed it to the local drive and now I have two copies in the library. I need to either bulk delete or clear the library (without deleting the files) and then rebuild. How do I do either of these things?

Maybe you are looking for

  • How to delete a blackberryid when you don't know the password or answer to the security question

    I have forgotten my password and answer to security question.  I would like to delete and restablish my blackberry id account.  Is this possible

  • Gui_download issue - trailing spaces getting truncated for fixed length fil

    Hi All, I have a requirement where I need to download an internal table as a fixed length file. The code is as follows: CALL FUNCTION 'GUI_DOWNLOAD' EXPORTING BIN_FILESIZE = FILENAME = L_FILE FILETYPE = 'ASC' APPEND = 'X' WRITE_FIELD_SEPARATOR = ' '

  • Overriding the DTD reference in XML

    Hi I have an xml file which references a dtd within the DOCType tag. The problem I have is that currently the reference is as follows: file:///D:/castor/castor-0.9.3.21/castor-0.9.3.21/doc/mapping.dtd which makes it specific to my machine. This file

  • Inserting feedback captions

    In my project, somehow the incorrect feedback button has gone missing. How do I re-insert it? It is not part of the physical elements that make up a slide. I tried editing the question but i cannot find the option to turn it back on again. Can anyone

  • Error when running XJC-generated code

    The following part of an XML schema      <element name="Parameter">           <annotation>                <documentation xml:lang="de">generische Schlüssel-Wert-Parameter.</documentation>           </annotation>           <complexType>