FORALL bulk delete statement requires a lot of time

Hi,
When I execute a FORALL bulk delete statement, it takes a long time even to delete 10 rows. How can I avoid this problem? I checked SGA_TARGET and PGA_TARGET; current_size and max_size are the same for both. Is there a memory problem?
thanks in advance
Vaibhav

Please look at this:
http://psoug.org/definition/LIMIT.htm
If you add a LIMIT 100 clause to your bulk fetch, your bulk collect / FORALL construction will process 100 records per batch.
This avoids filling up your undo tablespaces / PGA memory.
There's also a discussion whether to use LIMIT 100 or LIMIT 1000 (or greater).
It depends on what your system can handle; if it's a busy production system I'd stick to a value of 100.
The reason is that your query will then not take a huge amount of PGA.
DECLARE
  TYPE t_id_tab IS TABLE OF test.c1%TYPE;
  l_id_tab t_id_tab := t_id_tab();
  CURSOR c_ids IS
    SELECT c1 FROM test WHERE c1 <= 150000;
BEGIN
  dbms_output.put_line(DBMS_UTILITY.get_time);
  OPEN c_ids;
  LOOP
    -- LIMIT belongs on FETCH ... BULK COLLECT; it is not valid on a plain
    -- SELECT ... BULK COLLECT INTO
    FETCH c_ids BULK COLLECT INTO l_id_tab LIMIT 100;
    EXIT WHEN l_id_tab.COUNT = 0;
    FORALL i IN l_id_tab.FIRST .. l_id_tab.LAST
      DELETE FROM test WHERE c1 = l_id_tab(i);
    COMMIT;
  END LOOP;
  CLOSE c_ids;
  dbms_output.put_line(DBMS_UTILITY.get_time);
END;
We use this construction to load data into our warehouse and bumped into the LIMIT 100 part while testing.
It did speed up things significantly.
Another (risky!) trick is to do:
-- exclude the table from logging
ALTER TABLE TEST NOLOGGING;
-- execute the plsql block
<your code comes here>
-- re-enable logging on the table
ALTER TABLE TEST LOGGING;
The risky part is that you get inconsistent archive logging this way, because the changes aren't fully logged (strictly speaking, NOLOGGING only suppresses redo for direct-path operations; conventional deletes are still logged).
When you do a restore from backup and want to roll forward to the latest change in your archive logs, you'll miss these changes and your table will not be consistent with the rest of your database.
In our warehouse situation we don't mind; the data gets refreshed every night.
Hope this helps!
Robin

Similar Messages

  • FORALL bulk delete statement requires a lot of time even for a 10-row deletion

    Hi,
    When I execute a FORALL bulk delete statement, it takes a long time even to delete 10 rows. How can I avoid this problem? I checked SGA_TARGET and PGA_TARGET; current_size and max_size are the same for both. Is there a memory problem?
    I execute the following code:
    DECLARE
      TYPE t_id_tab IS TABLE OF test.c1%TYPE;
      l_id_tab t_id_tab := t_id_tab();
    BEGIN
      SELECT c1 BULK COLLECT INTO l_id_tab FROM test WHERE c1 <= 10;
      dbms_output.put_line(DBMS_UTILITY.get_time);
      FORALL i IN l_id_tab.FIRST .. l_id_tab.LAST
        DELETE FROM test WHERE c1 = l_id_tab(i);
      dbms_output.put_line(DBMS_UTILITY.get_time);
      COMMIT;
    END;
    thanks in advance
    Vaibhav

    hi
    I am working on Oracle 11g. I have to test which is the faster method to delete 150,000 records:
    1st: using a FOR loop in batches of 10,000 records
    2nd: using a FORALL delete
    Kindly find the FORALL delete code below:
    DECLARE
      TYPE t_id_tab IS TABLE OF test.c1%TYPE;
      l_id_tab t_id_tab := t_id_tab();
    BEGIN
      SELECT c1 BULK COLLECT INTO l_id_tab FROM test WHERE c1 <= 10;
      dbms_output.put_line(DBMS_UTILITY.get_time);
      FORALL i IN l_id_tab.FIRST .. l_id_tab.LAST
        DELETE FROM test WHERE c1 = l_id_tab(i);
      dbms_output.put_line(DBMS_UTILITY.get_time);
      COMMIT;
    END;
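    For comparison, the baseline both methods should be measured against is a single SQL DELETE; as a later reply on this page puts it, no PL/SQL code can delete rows faster than the DELETE statement itself. A minimal sketch, using the same timing calls as above:
    BEGIN
      dbms_output.put_line(DBMS_UTILITY.get_time);
      DELETE FROM test WHERE c1 <= 150000;
      dbms_output.put_line(DBMS_UTILITY.get_time);
      COMMIT;
    END;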

  • Forall bulk delete is too slow to work, seeking advice

    I used a PL/SQL stored procedure to do some ETL work. It picks up refreshed records from a staging table, checks whether the same record exists in the target table, then does a FORALL bulk delete first, followed by a FORALL insert of all refreshed records into the target table. The insert part is working fine. Only the deletion part is too slow to get the job done. My code is listed below. Please advise where the problem is. Thanks.
    DECLARE
      TYPE t_distid IS TABLE OF VARCHAR2(15) INDEX BY BINARY_INTEGER;
      v_distid t_distid;
      CURSOR dist_delete IS
        SELECT DISTINCT distid FROM dist_stg WHERE data_type = 'H';
    BEGIN
      OPEN dist_delete;
      LOOP
        -- without a LIMIT the first fetch reads every row at once, and without
        -- an EXIT condition this loop never terminates
        FETCH dist_delete BULK COLLECT INTO v_distid LIMIT 1000;
        EXIT WHEN v_distid.COUNT = 0;
        FORALL i IN v_distid.FIRST .. v_distid.LAST
          DELETE dist_target WHERE distid = v_distid(i);
      END LOOP;
      CLOSE dist_delete;
      COMMIT;
    END;
    /

    citicbj wrote:
    > Justin:
    > The answers to your questions are:
    > 1. Why would I not use a single DELETE statement? Because this PL/SQL procedure is part of an ETL process. The procedure is scheduled by the Oracle scheduler, which automatically runs it to refresh data. Putting the DELETE in a stored procedure is better for execution by the scheduler.
    You can compile SQL inside a PL/SQL procedure / function just as easily as coding it the way you have, so that's really not an excuse. As Justin pointed out, the straight SQL approach is what you want to use.
    > 2. The records in dist_stg with data_type = 'H' vary by month, from 120 to 5,000 records. These records were inserted into the target table before, but have since been updated in the transactional database. We need to delete the old records in the target and insert the updated ones to replace them. The distID is the same and unique: I use distID to delete the old record and insert the updated record with the same distID into the target again. When users run a report, the updated records show up. As a plain SQL statement, deleting 5,000 records takes seconds; in my code above, it takes forever. The database keeps going without any error message. There is no trigger or FK associated.
    > 3. Merge. I haven't tried that yet. I may give it a try.
    Quite likely a good idea based on what you've outlined above, but at the very least, replace the procedural code containing the delete as suggested by Justin.
    Thanks.
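    A minimal sketch of that straight-SQL approach (table and column names taken from the post); a MERGE could similarly replace the delete + insert pair in a single statement:
    DELETE FROM dist_target
     WHERE distid IN (SELECT distid
                        FROM dist_stg
                       WHERE data_type = 'H');
    COMMIT;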

  • Select statement takes a lot of time on first execution?

    Hi Experts,
    I am facing the following issue. I am using one SELECT statement to retrieve all the contracts from the table CACS_CTRTBU with a FOR ALL ENTRIES restriction.
    if p_lt_zcacs[] is not initial.
      SELECT appl ctrtbu_id version gpart
             busi_begin busi_end tech_begin tech_end
             flg_cancel_obj flg_cancel_vers int_title
        FROM cacs_ctrtbu INTO TABLE lt_cacs
        FOR ALL ENTRIES IN p_lt_zcacs
        WHERE appl EQ gv_appl
          AND ctrtbu_id EQ p_lt_zcacs-ctrtbu_id
          AND ( flg_cancel_vers EQ '' OR version EQ '000000' )
          AND flg_cancel_obj EQ ''
          AND busi_begin LE p_busbegin
          AND busi_end GT p_busbegin.
    endif.
    The WHERE condition follows the order of the available index. The index has APPL, CTRTBU_ID, FLG_CANCEL_VERS and FLG_CANCEL_OBJ.
    The technical settings of table CACS_CTRTBU say that "Buffering is not allowed".
    Now the problem: on the first execution of this SELECT statement, with 150,000 entries in the P_LT_ZCACS table, the statement takes 3 minutes.
    If I execute it again in another run, with exactly the same parameter values and number of entries in P_LT_ZCACS (i.e. 150,000 entries), it completes in 3-4 seconds.
    What can be the issue in this case? Why does the first execution take longer? Or is there any way to modify the SELECT statement to get better performance?
    Thanks in advance
    Sreejith A P

    Hi,
    sree jith wrote:
    > What can be the issue in this case? Why does the first execution take longer?
    Sounds like caching or buffering in some layer down the I/O stack. Your first execution
    seems to do the physical I/O, while the following executions can use the caches / buffers
    that were filled by the first execution.
    sree jith wrote:
    > Or is there any way to modify the SELECT statement to get better performance?
    Whether modifying your SELECT statement or your indexes could help depends on your access details:
    Does your internal table P_LT_ZCACS contain duplicates?
    What do your indexes look like?
    What does your execution plan look like?
    What are your execution figures in ST05 - Statement Summary?
    (nr. of executions, records in total, total time, time per execution, records per execution, time per record, ...)
    Kind regards,
    Hermann

  • DELETE statement takes a long time to run

    Hi,
    DELETE statements take a very long time to complete.
    Can you advise me on how to diagnose the slow performance?
    Thanks

    Deleting rows can be an expensive operation.
    Oracle stores the entire row as the 'before image' of the deleted information (table and index data) in rollback segments, generates redo (keeping the archiver busy), updates free lists for blocks that fall below the PCTUSED setting, etc.
    Select count(*) runs long because Oracle scans all blocks (up to the high water mark) whether or not there are any rows in those blocks.
    These operations take more and more time if the tables are loaded with the APPEND hint or with SQL*Loader in direct mode.
    A long time ago, I ran into a similar situation. Data was "deleted" selectively, and SQL*Loader was used (in DIRECT mode) to add new rows to the table. I had a few tables using more blocks than the number of rows in the table!
    Needless to say, the process was changed to truncate the tables after exporting/extracting the required data first, and then loading the data back. Worked much better.
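    A minimal sketch of that truncate-and-reload pattern (table name and keep-predicate are placeholders):
    CREATE TABLE keep_rows AS
      SELECT * FROM big_table WHERE must_keep = 'Y';  -- rows to keep
    TRUNCATE TABLE big_table;
    INSERT /*+ APPEND */ INTO big_table SELECT * FROM keep_rows;
    DROP TABLE keep_rows;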

  • FORALL bulk insert: strange behaviour

    Hi all..
    I have the following problem:
    I use a FORALL bulk INSERT statement to insert a set of values using a collection that has only one row. The thing is, I get an 'ORA-01400: cannot insert NULL into <schema>.<table>.<column>' error message, and yet the row has been inserted into the table!
    Any ideas why this is happening?

    Here is the sample code.
    The strange thing is that the cursor has 1 row and the array also gets 1 row:
    FUNCTION main RETURN BOOLEAN IS
      -- This cursor retrieves all necessary values from the CRD table to be
      -- inserted into the PDCS_DEFERRED_RELATIONSHIP table
      CURSOR mycursor IS
        SELECT key1, key2, column1, date1, date2, txn_date
          FROM mytable pc
         WHERE ...;  -- predicate truncated in the post
      -- an array and a type for the cursor
      TYPE t_arraysample IS TABLE OF mycursor%ROWTYPE;
      myarrayofvalues t_arraysample;
      TYPE t_target IS TABLE OF mytable%ROWTYPE;
      la_target t_target := t_target();
    BEGIN
      OPEN mycursor;
      FETCH mycursor BULK COLLECT INTO myarrayofvalues LIMIT 1000;
      myarrayofvalues.extend(1000);  -- appends 1000 empty rows to the fetched data
      FOR x IN 1 .. myarrayofvalues.COUNT LOOP
        -- copy the fetched values into the target array
        gn_index := gn_index + 1;
        la_target(gn_index).key1     := myarrayofvalues(x).key1;
        la_target(gn_index).key2     := myarrayofvalues(x).key2;
        la_target(gn_index).column1  := myarrayofvalues(x).column1;
        la_target(gn_index).date1    := myarrayofvalues(x).date1;
        la_target(gn_index).date2    := myarrayofvalues(x).date2;
        la_target(gn_index).txn_date := myarrayofvalues(x).txn_date;
      END LOOP;
      -- call function to insert/update the table
      IF NOT MyFunction(la_target) THEN
        ROLLBACK;
        RAISE genericError;
      ELSE
        COMMIT;
      END IF;
      CLOSE mycursor;
      RETURN TRUE;
    END;

    FUNCTION MyFunction(la_target IN t_target) RETURN BOOLEAN IS
    BEGIN
      FORALL x IN la_target.FIRST .. la_target.LAST
        INSERT INTO mytable VALUES la_target(x);
      RETURN TRUE;
    END;
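    For what it's worth, a likely cause given the code above: COLLECTION.EXTEND(n) appends n empty (NULL) rows, so calling myarrayofvalues.extend(1000) after the fetch pads the collection with NULLs, and when the FORALL insert hits one of them it raises ORA-01400 while the earlier, valid iteration has already been inserted (FORALL stops at the first failing iteration but does not roll back the preceding ones). A minimal sketch reproducing the symptom, assuming a table t_notnull with a NOT NULL column:
    DECLARE
      TYPE t_tab IS TABLE OF NUMBER;
      l_tab t_tab := t_tab();
    BEGIN
      SELECT 1 BULK COLLECT INTO l_tab FROM dual;  -- l_tab now holds 1 row
      l_tab.EXTEND(2);                             -- appends 2 NULL rows
      FORALL i IN l_tab.FIRST .. l_tab.LAST
        INSERT INTO t_notnull VALUES (l_tab(i));   -- row 1 goes in, row 2 raises ORA-01400
    END;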

  • Regarding bulk delete..

    Hi,
    I have around 9,400k records that should be deleted. I am using a FORALL delete with BULK COLLECT; here is the code for reference:
    declare
      type cust_array_type is table of number
        index by binary_integer;
      employee_array cust_array_type;
      v_index number;
    begin
      select empl_no bulk collect
        into employee_array from employee_history;

      FORALL i IN employee_array.FIRST..employee_array.LAST
        delete from ord where empl_no = employee_array(i);

      v_index := employee_array.FIRST;
      for i in employee_array.FIRST..employee_array.LAST loop
        dbms_output.put_line('delete for employee '
          || employee_array(v_index)
          || ' deleted '
          || SQL%BULK_ROWCOUNT(v_index)
          || ' rows.');
        v_index := employee_array.NEXT(v_index);
      end loop;
    end;
    Still the data is not deleting. Please advise.

    user13301356 wrote:
    > but a normal delete is taking more time, so to improve performance I am using bulk collect delete.
    > So what is the best approach: bulk delete or normal delete?
    Look at it in simple terms...
    Method 1: Delete all rows.
    Method 2: Query all rows, then delete all rows.
    Which one, logically to you, is doing more work than the other?
    If your delete is taking a long time, that's because:
    a) you haven't got suitable indexes to determine the records to be deleted on the target table
    b) you haven't designed the table to use partitions, which could then simply be truncated (a costed option; a licence is required for partitioning)
    c) you are just deleting a lot of data and it's going to take time.
    No amount of PL/SQL coding around the basic SQL of a delete is going to improve the performance. You can't make any code delete the rows faster than a delete statement itself.
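    A minimal sketch of the single-statement delete this reply recommends, using the table names from the post:
    delete from ord
     where empl_no in (select empl_no from employee_history);
    commit;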

  • DELETE Statements [SOLVED]

    This is a simple question.
    I have one big table. select * from big_table takes a lot of time.
    Then I delete some records from big_table: delete from big_table where no > 1000.
    The table's row count is thus reduced significantly, yet when I run the same select statement again, the query still takes a long time.
    Why?
    When I truncate big_table with the drop storage option and then re-insert the records I saved previously, it runs quickly.
    Question: how do I use DELETE in a way that drops storage? Does DELETE not release the storage?
    regards n thanks

    No. DELETE does not release storage that is already allocated to a table. It just puts those blocks on the free list of blocks for that segment.
    However, the high water mark (HWM) for the segment is reset only by the TRUNCATE/MOVE commands (DROP also works, but is not always an option ;)).
    Since a full table scan against such a table reads all blocks below the HWM, it is bound to take time when the table was cleaned up using DELETE.
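    A short sketch of reclaiming the space below the HWM after a mass delete (table and index names are placeholders; SHRINK needs an ASSM tablespace, and MOVE leaves indexes unusable until rebuilt):
    ALTER TABLE big_table ENABLE ROW MOVEMENT;
    ALTER TABLE big_table SHRINK SPACE;   -- shrink in place (10g and later)
    -- or, alternatively:
    ALTER TABLE big_table MOVE;           -- rebuilds the segment, resets the HWM
    ALTER INDEX big_table_pk REBUILD;     -- each index must be rebuilt after MOVE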

  • How to improve a DELETE statement that removes millions of rows?

    The following query takes a lot of time when executed, even after I dropped the indexes. Is there a better way to write it?
    DELETE from pwr_part
    where ft_src_ref_id in (select ft_src_ref_id
    from pwr_purge_ft);
    --Table:pwr_part
    --UIP10371 foreign key (FT_SRC_REF_ID, FT_DTL_SEQ)
    --Table: pwr_purge_ft
    --PWR_PURGE_FT_PK Primary key (FT_SRC_REF_ID, FT_DTL_SEQ)
    select count(*) from pwr_part;
    --27,248,294
    select count(*) from pwr_purge_ft;
    --23,803,770
    Explain Plan:
    Description                         Object owner  Object name      Cost    Cardinality  Bytes
    SELECT STATEMENT, GOAL = ALL_ROWS                                  224993  5492829      395483688
      HASH JOIN RIGHT SEMI                                             224993  5492829      395483688
        INDEX FAST FULL SCAN            PWR_OWNER     PWR_PURGE_FT_PK  43102   23803770     142822620
        PARTITION HASH ALL                                             60942   27156200     1792309200
          TABLE ACCESS FULL             PWR_OWNER     PWR_PART         60942   27156200     1792309200

    Helio Dias wrote:
    > Have you ever thought about bulk collection?
    > http://heliodias.wordpress.com/2010/01/07/best-way-to-delete-millions-rows-from-hundred-millions-table/
    One reason why I would hate that suggestion:
    Regular Delete vs Bulk Delete
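    Since roughly 24 of the 27 million rows are being removed here, the keep-and-truncate approach described in the "DELETE statement takes a long time to run" reply above may well beat any DELETE. A hedged sketch (constraints, grants, indexes and the hash partitioning would all need handling too):
    CREATE TABLE pwr_part_keep AS
      SELECT *
        FROM pwr_part p
       WHERE NOT EXISTS (SELECT 1
                           FROM pwr_purge_ft f
                          WHERE f.ft_src_ref_id = p.ft_src_ref_id);
    TRUNCATE TABLE pwr_part;
    INSERT /*+ APPEND */ INTO pwr_part SELECT * FROM pwr_part_keep;
    DROP TABLE pwr_part_keep;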

  • How to delete data from KNVP without using a DELETE statement

    Hello friends,
    I have a requirement to delete data from the KNVP table without using any DELETE statement. For this I have to use a standard BAPI or a standard program.
    Can you please tell me the name of the standard program or BAPI to delete the data?
    Thanks in Advance
    Kuldeep

    Hello Raymond,
    I have used the function 'CUSTOMER_UPDATE', passing data only in the T_XKNVP table, but the data still does not get deleted. Please see the code below.
    =============================================================
    REPORT  ZK_TEST2                                .
    data :
        I_KNA1     LIKE     KNA1,
        I_KNB1     LIKE     KNB1,
        I_KNVV     LIKE     KNVV,
        I_YKNA1     LIKE     KNA1,
        I_YKNB1     LIKE     KNB1.
    Data :
    T_XKNAS       LIKE     FKNAS occurs 0,
    T_XKNB5     LIKE     FKNB5 occurs 0,
    T_XKNBK     LIKE     FKNBK occurs 0,
    T_XKNVA     LIKE     FKNVA occurs 0,
    T_XKNVD     LIKE     FKNVD occurs 0,
    T_XKNVI     LIKE     FKNVI occurs 0,
    T_XKNVK     LIKE     FKNVK occurs 0,
    T_XKNVL     LIKE     FKNVL occurs 0,
    T_XKNVP     LIKE     FKNVP occurs 0 with header line,
    T_XKNVS     LIKE     FKNVS occurs 0,
    T_XKNEX     LIKE     FKNEX occurs 0,
    T_XKNZA     LIKE     FKNZA occurs 0,
    T_YKNAS     LIKE     FKNAS occurs 0,
    T_YKNB5     LIKE     FKNB5 occurs 0,
    T_YKNBK     LIKE     FKNBK occurs 0,
    T_YKNVA     LIKE     FKNVA occurs 0,
    T_YKNVD     LIKE     FKNVD occurs 0,
    T_YKNVI     LIKE     FKNVI occurs 0,
    T_YKNVK     LIKE     FKNVK occurs 0,
    T_YKNVL     LIKE     FKNVL occurs 0,
    T_YKNVP     LIKE     FKNVP  occurs 0 with header line,
    T_YKNVS     LIKE     FKNVS occurs 0,
    T_YKNEX     LIKE     FKNEX occurs 0,
    T_YKNZA     LIKE     FKNZA occurs 0.
    T_XKNVP-KUNNR = '7000002648'.
    *T_XKNVP-VKORG = '0001'.
    *T_XKNVP-VTWEG = '01'.
    *T_XKNVP-SPART = '01'.
    T_XKNVP-KZ    = 'D'.
    APPEND T_XKNVP.
    CALL FUNCTION 'CUSTOMER_UPDATE'
      EXPORTING
        I_KNA1        = I_KNA1
        I_KNB1        = I_KNB1
        I_KNVV        = I_KNVV
        I_YKNA1       = I_YKNA1
        I_YKNB1       = I_YKNB1
      TABLES
        T_XKNAS       = T_XKNAS
        T_XKNB5       = T_XKNB5
        T_XKNBK       = T_XKNBK
        T_XKNVA       = T_XKNVA
        T_XKNVD       = T_XKNVD
        T_XKNVI       = T_XKNVI
        T_XKNVK       = T_XKNVK
        T_XKNVL       = T_XKNVL
        T_XKNVP       = T_XKNVP
        T_XKNVS       = T_XKNVS
        T_XKNEX       = T_XKNEX
        T_XKNZA       = T_XKNZA
        T_YKNAS       = T_YKNAS
        T_YKNB5       = T_YKNB5
        T_YKNBK       = T_YKNBK
        T_YKNVA       = T_YKNVA
        T_YKNVD       = T_YKNVD
        T_YKNVI       = T_YKNVI
        T_YKNVK       = T_YKNVK
        T_YKNVL       = T_YKNVL
        T_YKNVP       = T_YKNVP
        T_YKNVS       = T_YKNVS
        T_YKNEX       = T_YKNEX
        T_YKNZA       = T_YKNZA.
    =============================================================

  • Is there a way to bulk delete records

    It seems that I have a lot of duplicated records in my "Central" area, so I want to either filter by Area and then delete the duplicates, if there is a way to do that, or bulk delete every record with "Central" in the Area column.
    Is that possible?

    Are you able to select more than 100 through the Content and Structure manager?
    OR
    I found a TechNet article that uses PowerShell to perform a bulk delete; it might be your best bet to start here:
    http://social.technet.microsoft.com/wiki/contents/articles/19036.sharepoint-using-powershell-to-perform-a-bulk-delete-operation.aspx
    Edit: is this you?
    http://sharepoint.stackexchange.com/questions/136778/is-there-a-way-to-bulk-delete-records ;)

  • Problems with StoredProcedures that use INSERT/DELETE statements

    Hello
    I am using Hyperion Intelligence Explorer 8.5, and the data source is a MS SQL Server 2000 database connected via ODBC.
    I have this problem: when I use, in a Query section, a stored procedure that (in its code) uses only SELECT statements, I get the result of the query in the Results section. But when I use a stored procedure that does some work (executing INSERT or DELETE or other SQL statements) and ends with a SELECT statement to return data to the caller, Hyperion hangs.
    I mean: first I select the stored procedure (Query\StoredProcedure... menu), then I start it using the Process command in Hyperion.
    If the stored procedure contains only SELECT statements I get the results, but if it contains INSERT or DELETE (and the last statement is a SELECT), Hyperion does not return any data, and if I try to repeat the Process command I get an error that says "the connection is busy with results from another hstmt".
    Before you ask whether the stored procedure works correctly: I can confirm it does, because it was tested and returns the correct data when used from the db manager application.
    Any suggestions? Have you ever successfully used stored procedures that process data (by INSERT or DELETE statements) and then return the result with a SELECT statement?
    Thank you for your help

    Hi Chris,
    Could you please tell us in which version of IR Hyperion is not going to support Stored Procedures...
    Regards,
    Manmohan
    Manmohan,
    Did you even read what I just wrote? This is NOT happening. Stored procedures are an important part of the Intelligence product, and Oracle is continuing to enhance and support this functionality. I work for Oracle, I worked for Hyperion, and for Brio before that, so believe me when I tell you this.
    Chris (whoever he is) is giving out incorrect information in this regard. His suggestion of the workaround for the original issue is completely accurate but, as I added, the issue was corrected as of 8.5 Service Pack 2, so even the workaround is not required as long as you've upgraded. (Note that some versions of 9.x will also require the workaround due to the timing of the releases overlapping.)
    Thanks.
    Larry Johnson

  • Creating delete statements out of insert statements

    Hi, I'm running an insert script with lots of columns. Before that script runs, I want to run a delete script for the records that match the ones in the insert.
    The thing is that I have generated all these insert scripts, and I do not want to manually create delete statements for all these records (there are many...).
    So for an insert like this:
    Insert into ANSWERS(A,B,C,X,Y,Z) values ('Yes','No','No','No',null,'Yes')
    I want to create a delete somewhat like this:
    Delete from ANSWERS where (A,B,C,X,Y,Z) = ('Yes','No','No','No',null,'Yes')
    This way I can simply use find-and-replace to do the task. Is there any way to write a delete statement somewhat like this?
    Thanks!

    A very unusual request.
    > the thing is that I have generated all these insert scripts
    Why can't you generate a script to delete in a similar manner?
    > Is there any way to write a delete statement somewhat like this?
    Something like this may help you, but remember that this script will fail when you have column values that are NULL:
    SQL> create table answers(a varchar2(3), b varchar2(3), c varchar2(3), x varchar2(3) , y varchar2(3),
      2  z varchar2(3));
    Table created.
    SQL> Insert into ANSWERS(A,B,C,X,Y,Z) values ('Yes','No','No','No','Yes','Yes');
    1 row created.
    SQL> select substr(txt, 1, length(txt)-1) || ' from dual)' from (
      2  select
      3  replace(replace(replace(
      4  trim('Insert into ANSWERS(A,B,C,X,Y,Z) values (''Yes'',''No'',''No'',''No'',''Yes'',''Yes'')'),
      5           'Insert into', 'Delete From'), '(A', ' where (A'), 'values (', ' in (select ') txt
      6  from dual);
    SUBSTR(TXT,1,LENGTH(TXT)-1)||'FROMDUAL)'
    Delete From ANSWERS where (A,B,C,X,Y,Z)  in (select 'Yes','No','No','No','Yes','Yes' from dual)
    SQL> Delete From ANSWERS where (A,B,C,X,Y,Z)  in (select 'Yes','No','No','No','Yes','Yes' from dual);
    1 row deleted.
    SQL>
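    For the NULL case the reply warns about, a hedged variant: DECODE treats two NULLs as equal, so the generated delete can be made NULL-safe (same ANSWERS table and values as above):
    Delete From ANSWERS
     where DECODE(A, 'Yes', 0, 1) = 0
       and DECODE(B, 'No',  0, 1) = 0
       and DECODE(C, 'No',  0, 1) = 0
       and DECODE(X, 'No',  0, 1) = 0
       and DECODE(Y, NULL,  0, 1) = 0
       and DECODE(Z, 'Yes', 0, 1) = 0;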

  • Oracle deadlocks on delete statement

    I had a package procedure that deletes from an inline view. It worked well and didn't create any database locks. It looked like this:
    PROCEDURE serverdisconnect(pCode1 NUMERIC, pCode2 NUMERIC) IS
      BEGIN
        DELETE FROM
          (SELECT cl.* FROM CurrentLogins cl, Accounts a
            WHERE cl.Code1 = pCode1
              AND cl.Code2 = pCode2
              AND cl.Code = a.code
              AND a.Type = 'lplayer'
            ORDER BY a.code);
        COMMIT;
      END serverdisconnect;
    I slightly changed the procedure to look like the following, and deadlocks started to appear:
    PROCEDURE ServerDisconnect(pCode1 NUMERIC, pCode2 NUMERIC, pChannelServerCode CurrentLogins.ChannelServerCode%TYPE, pDeleteList OUT cursor_type)
      IS
        vDeleteList sys.ODCINumberList;
      BEGIN
        DELETE FROM
          (SELECT cl.* FROM CurrentLogins cl, Accounts a
            WHERE cl.Code1 = pCode1
              AND cl.Code2 = pCode2
              AND cl.Code = a.code
              AND cl.ChannelServerCode = pChannelServerCode
              AND a.Type = 'lplayer')
        RETURNING Code BULK COLLECT INTO vDeleteList;
        OPEN pDeleteList FOR
          SELECT * FROM TABLE(vDeleteList);
        COMMIT;
      END ServerDisconnect;
    As you can see, the main difference in the delete statement is that I removed the ORDER BY clause. Can such row ordering really play a role in deadlocking? Do the rows always have to be ordered the same way with an ORDER BY clause to avoid deadlocks? Why did I start to get deadlocks after changing the procedure?
    I have Oracle 10g.

    Yes, typo, I fixed the initial post now.
    The delete will technically be done on the table CurrentLogins, using that inline view.
    I will move the COMMIT to the proper place as you suggested,
    but I still don't understand why the deadlocks started to occur.
    Maybe the answer really is the "order by" clause, which resolves the deadlocks?
    See this link:
    http://www.oracle-base.com/articles/misc/Deadlocks.php
    > To resolve the issue, make sure that rows in tables are always locked in the same order.
    Does that mean I always have to include an "order by" clause in my delete statement?
    Relationships between the tables:
    alter table CURRENTLOGINS
      add constraint FK_CURRENTLOGINS_ACCOUNTS foreign key (CODE)
      references ACCOUNTS (CODE);
    Maybe ORDER BY really does solve the deadlocks then, see:
    http://www.dbasupport.com/forums/archive/index.php/t-50438.html
    > Add an explicit ORDER BY to the select, and it will probably go away.

  • Delete taking a lot of time

    Hi
    I have a delete statement which is taking a lot of time. If I run the same criteria as a select, only 500 records come back, but the delete takes a long time.
    Please advise.
    delete from whs_bi.TRACK_PLAY_EVENT a
     where a.time_stamp >= to_date('5/27/2013','mm/dd/yyyy')
       and a.time_stamp <  to_date('5/28/2013','mm/dd/yyyy')
    Thanks in adv.
    KPR

    Let's check the wait events.
    Open 2 sessions: one for running the DELETE and another for monitoring wait events. In the session where you will run the DELETE, find its SID ( SELECT userenv('SID') FROM dual ).
    Now run the DELETE in that session (whose SID we have already found),
    and run the following query in the other session:
    select w.sid sid,
           p.spid PID,
           w.event event,
           substr(s.username,1,10) username,
           substr(s.osuser, 1,10) osuser,
           w.state state,
           w.wait_time wait_time,
           w.seconds_in_wait wis,
           substr(w.p1text||' '||to_char(w.P1)||'-'||
                  w.p2text||' '||to_char(w.P2)||'-'||
                  w.p3text||' '||to_char(w.P3), 1, 45) P1_P2_P3_TEXT
    from v$session_wait w, v$session s, v$process p
    where s.sid=w.sid
      and p.addr  = s.paddr
      and w.event not in ('SQL*Net message from client', 'pipe get')
      and s.username is not null
      and s.sid = &your_SID;
    While the DELETE is running in the first session, run the above query in the second session 5-6 times, with a gap of (say) 10 seconds. If you can give us the output of the monitoring query (from all 5-6 runs), that might throw more light on what's going on under the hood.
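    Since the earlier replies on this page point at missing indexes as a classic cause of slow deletes, another quick check worth running is for child tables that reference TRACK_PLAY_EVENT through unindexed foreign keys (a sketch; it only checks the leading index column):
    select c.table_name, cc.column_name
      from user_constraints c
      join user_cons_columns cc on cc.constraint_name = c.constraint_name
     where c.constraint_type = 'R'
       and c.r_constraint_name in (select constraint_name
                                     from user_constraints
                                    where table_name = 'TRACK_PLAY_EVENT')
       and not exists (select 1
                         from user_ind_columns ic
                        where ic.table_name = c.table_name
                          and ic.column_name = cc.column_name
                          and ic.column_position = 1);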
