Suggestions needed in a huge bulk delete operation

Hi,
I have the following scenario.
begin
  open c1;
  fetch c1 bulk collect into L_item;                     -- cursor fetch: 20,000 (x) rows
  forall i in 1..L_item.count
    delete tab7 where col1 = L_item(i);                  -- deletes 900*x rows
  forall i in 1..L_item.count
    delete tab1 where item = L_item(i);                  -- deletes 900*x rows
  forall i in 1..L_item.count
    delete tab2 where col1 = L_item(i);                  -- deletes 900*x rows
  forall i in 1..L_item.count
    delete tab3 where col1 = L_item(i);                  -- deletes 4*x rows
  forall i in 1..L_item.count
    delete tab4 where col1 = L_item(i);                  -- deletes 4*x rows
  forall i in 1..L_item.count
    update tab5 set col1 = 'N' where col2 = L_item(i);   -- updates 1*x rows
  forall i in 1..L_item.count
    insert into tab6 values (L_item(i), sysdate);        -- inserts 1*x rows
end;
If I do a commit here, the redo buffer may overflow.
Given the scenario above, please suggest a better way to do this, or confirm whether this is already the best approach.
My database version is below
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
PL/SQL Release 9.2.0.4.0 - Production
CORE 9.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production
Thanks in advance
Edited by: bp on Mar 24, 2009 10:29 PM

If it is feasible for your requirement, instead of deleting data from your table, back up the data that is to be retained into a temporary table, then truncate your main table and reload it from the temporary table. This will be faster and avoids the high-water-mark problem that large deletes leave behind.
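A minimal sketch of that keep-and-reload approach for one of the tables in the question (tab7), assuming the rows to remove are identified by a driving query that stands in for cursor c1 here (items_to_delete and its item column are placeholder names, not anything defined in the thread):

-- 1. keep only the rows you want to retain (the subquery stands in for
--    whatever drives cursor c1; items_to_delete is a placeholder name)
CREATE TABLE tab7_keep AS
  SELECT * FROM tab7 t
  WHERE  NOT EXISTS (SELECT 1 FROM items_to_delete d WHERE d.item = t.col1);

-- 2. truncate resets the high-water mark instead of generating undo for every row
TRUNCATE TABLE tab7;

-- 3. reload the retained rows; direct-path insert minimises undo generation
INSERT /*+ APPEND */ INTO tab7 SELECT * FROM tab7_keep;
COMMIT;

DROP TABLE tab7_keep;

The same pattern would be repeated for each table in the scenario, during an outage window, since the truncate is not transactional.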

Similar Messages

  • Is there a way to bulk delete records

It seems that I have a lot of duplicated records in my "Central" area, so I wanted to either filter by Area and then delete the duplicates, if there is a way to do that, or bulk delete every record that has "Central" in the Area column.
Is that possible?

    Are you able to select more than 100 through the Content and Structure manager?
    OR
    I found a TechNet article that uses PowerShell to perform a bulk delete; it might be your best bet to start here:
    http://social.technet.microsoft.com/wiki/contents/articles/19036.sharepoint-using-powershell-to-perform-a-bulk-delete-operation.aspx
    Edit: is this you?
    http://sharepoint.stackexchange.com/questions/136778/is-there-a-way-to-bulk-delete-records ;)

  • How can I bulk delete 50,000+ photos from Revel online?

    I have already deleted the photos in my catalog that were associated with my Revel online Carousel, but they are still showing up on the Revel online Carousel via the web or iPad. I can delete individual photos or by date through Revel online via the web or iPad, but that takes forever. I need a way to bulk delete all Revel Carousel photos that are showing up online.

    charleston spottail
    With so many photos (50,000+) involved, I think it would be in your best interests to post your thread in the Adobe Revel Forum here.
    Revel
    The top participants in the Adobe Revel Forum are identified as Staff.
    Please let us know the outcome.
    Thank you.
    ATR

  • Want suggestion regarding a bulk deletion

    Hi,
    I need some suggestions regarding a deletion; I have the following scenario.
    tab1 contains 100 items.
    For one item, tables tab2..tab6 contain 4,000 rows, so the loop runs for each item, deletes 20,000 rows and then commits.
    Currently a 500,000-row deletion takes about 1 hour. All the tables and indexes are analyzed.
    DECLARE
      CURSOR C_CHECK_DELETE_IND IS
        SELECT api.item FROM tab1 api WHERE api.delete_item_ind = 'Y';
      TYPE p_item IS TABLE OF tab1.item%TYPE;
      act_p_item p_item;
    BEGIN
      OPEN C_CHECK_DELETE_IND;
      LOOP
        FETCH C_CHECK_DELETE_IND BULK COLLECT INTO act_p_item LIMIT 5000;
        FOR i IN 1..act_p_item.COUNT
        LOOP
          DELETE FROM tab2 WHERE item = act_p_item(i);
          DELETE FROM tab3 WHERE item = act_p_item(i);
          DELETE FROM tab4 WHERE item = act_p_item(i);
          DELETE FROM tab5 WHERE item = act_p_item(i);
          DELETE FROM tab6 WHERE item = act_p_item(i);
          COMMIT;
        END LOOP;
        EXIT WHEN C_CHECK_DELETE_IND%NOTFOUND;
      END LOOP;
      CLOSE C_CHECK_DELETE_IND;
    END;
    Hope I have explained the scenario. Can you please suggest the right approach?
    Thanks in advance.

    Hi,
    why not just use straight SQL, i.e.
    DELETE FROM tabn
    WHERE  item IN (
      SELECT api.item
      FROM   tab1 api
      WHERE  api.delete_item_ind = 'Y');
    For bulk deletes, other techniques include:
    disabling constraints then reenabling them
    making indexes unusable and then rebuilding them
    creating temporary tables with the data you want left, dropping the source table and renaming the temp table
    partitioning
    Which of these is most useful depends on a lot of factors such as data volumes versus delete volumes, system outage availability, concurrency issues etc.
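    As a rough illustration of the unusable-index technique from that list, assuming a non-unique index tabn_item_ix on tabn.item (both names are placeholders), the delete could be wrapped like this:

    alter session set skip_unusable_indexes = true;
    alter index tabn_item_ix unusable;

    delete from tabn
    where  item in (select api.item
                    from   tab1 api
                    where  api.delete_item_ind = 'Y');
    commit;

    alter index tabn_item_ix rebuild nologging;

    The saving comes from not maintaining the index row by row during the delete, at the cost of a full rebuild (and possibly a full scan during the delete) afterwards, so it only pays off for genuinely large deletes.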
    Regards
    Andre

  • Need to Set Current Row when Using Built-in Data Control Delete Operation?

    I have an af:table bound to a ViewObject (VO) collection (no Entity Object) - within each row, I include a column that contains a 'Remove' command button so the user can remove the row. I add the command button by dragging/dropping the built-in Delete operation from the VO on the Data Control Palette. When I use this as is (no changes), the Remove button always deletes the first row in the collection, not the selected row. Do I need to add code to set the current row, and if so, can someone please provide an example and specify where I need to add it? Thanks.
    ------ .jspx af:table with command button to remove each row ------
    <af:table value="#{bindings.ListView1.collectionModel}" var="row"
    rows="#{bindings.ListView1.rangeSize}"
    first="#{bindings.ListView1.rangeStart}"
    // note: I don't have any code added for selectedRow or makeCurrent - assuming this is built-in?
    selectionState="#{bindings.ListView1.collectionModel.selectedRow}"
    selectionListener="#{bindings.ListView1.collectionModel.makeCurrent}">
    <af:column>
    <af:commandButton actionListener="#{bindings.Delete.execute}"
    text="Remove"
    disabled="#{!bindings.Delete.enabled}"/>
    </af:column>
    ---------- corresponding pagedef file ------------
    <bindings>.....
    <action id="Delete" IterBinding="ListView1Iterator"
    InstanceName="SrchDataControl.ListView1"
    DataControl="SrchDataControl" RequiresUpdateModel="false"
    Action="30"/>
    </bindings>
    Note: I also tried the solution posted on the following thread, but again only the first row is deleted, not the selected row: Delete and Commit
    Message was edited by:
    javaX

    I just want to delete (or remove) it from the VO. Data for this VO is not in the database.
    The function is doing what I want it to do (delete from the VO); it's just always deleting the first row instead of the selected row. I click the command button in the column next to an item further down in the list and it still deletes the first row. The problem is setting the selected row as the one to be removed - I thought setting the current row would be taken care of by the selectionListener?
    selectionState="#{bindings.MyIspListView1.collectionModel.selectedRow}"
    selectionListener="#{bindings.MyIspListView1.collectionModel.makeCurrent}"

  • How can I bulk delete contacts from my iPhone 4S? I've imported contacts from gmail and hotmail, many of which I don't need. Deleting them individually will take ages, there must be a quicker way. Also, are my contacts taking up space on my iPhone or

    How can I bulk delete contacts from my iPhone 4S? I've imported them from gmail and hotmail and there are many more than I need. Deleting them one by one will take ages, there must be a quicker way! Also, do contacts take up space on the phone or are they stored remotely?
    Thanks.. J

    You should be syncing your contacts with an app on your computer or a cloud service (iCloud, Gmail, Yahoo, etc.), and not relying on a backup.  If you haven't been doing this, start now and then restore your old backup.  You will then be able to sync the new contacts back into the phone.  However, you will lose all messages, etc. newer than the backup.

  • Need Performance tuning in delete operation

    Hi Gurus,
    I am performing delete operation by following SQL query.
    delete from gl_account where bu_id = -99
    but it takes a long time to execute. The table has 1 trigger and 5 indexes. I have disabled the trigger and rebuilt the indexes, but it still does not finish.
    Here is my explain plan.
    Operation     Object Name     Rows     Bytes     Cost     Object Node     In/Out     PStart     PStop
    DELETE STATEMENT Optimizer Mode=ALL_ROWS          561             19
    DELETE     OFFLINETESTDB.GL_ACCOUNT
    INDEX RANGE SCAN     OFFLINETESTDB.BU_ID     561       27 K     2      
    Pls help me out to solve this.

    Hi All,
    I am still facing the same performance problem when deleting rows from this table.
    I have attached my TKPROF output below for your consideration.
    TKPROF: Release 10.2.0.1.0 - Production on Tue Oct 12 14:01:13 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Trace file: rubikon_s002_3952.trc
    Sort options: exeela  exerow 
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    DELETE FROM GL_ACCOUNT
    WHERE
    GL_ACCT_ID IN (16908,16909,16456)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.13          0          0          0           0
    Execute      1      0.03       0.26          0          6        221           3
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.04       0.40          0          6        221           3
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 40  (OFFLINETESTDB)
    Rows     Row Source Operation
          0  DELETE  GL_ACCOUNT (cr=177742 pr=160538 pw=0 time=31518664 us)
          3   INLIST ITERATOR  (cr=6 pr=0 pw=0 time=103 us)
          3    INDEX RANGE SCAN GL_ACCOUNT_PK (cr=6 pr=0 pw=0 time=86 us)(object id 65637)
    Rows     Execution Plan
          0  DELETE STATEMENT   MODE: ALL_ROWS
          0   DELETE OF 'GL_ACCOUNT'
          3    INLIST ITERATOR
          3     INDEX   MODE: ANALYZED (RANGE SCAN) OF 'GL_ACCOUNT_PK'
                    (INDEX (UNIQUE))
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_SUMMARY" where "GL_ACCT_ID" = :1 and
      "GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.06          0          0          0           0
    Fetch        3      0.00       0.00          0          6          0           3
    total        7      0.00       0.06          0          6          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=6 pr=0 pw=0 time=236 us)
          0   VIEW  index$_join$_001 (cr=6 pr=0 pw=0 time=185 us)
          0    HASH JOIN  (cr=6 pr=0 pw=0 time=172 us)
          0     INDEX RANGE SCAN GL_ACCOUNT_SUMMARY_IX2 (cr=6 pr=0 pw=0 time=82 us)(object id 65648)
          0     INDEX RANGE SCAN GL_ACCOUNT_SUMMARY_IX1 (cr=0 pr=0 pw=0 time=0 us)(object id 65647)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_QUARTERLY_STAT" where "GL_ACCT_ID" = :1 and
      "GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.06          0          0          0           0
    Fetch        3      1.64      20.79     108398     109500          0           3
    total        7      1.64      20.86     108398     109500          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=109500 pr=108398 pw=0 time=20797344 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_QUARTERLY_STAT (cr=109500 pr=108398 pw=0 time=20797279 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_MONTHLY_STAT" where "GL_ACCT_ID" = :1 and
      "GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      4      0.00       0.06          0          0          1           0
    Fetch        3      0.75      10.11      52140      59532          0           3
    total        9      0.75      10.18      52140      59532          1           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Parsing user id: SYS
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=59532 pr=52140 pw=0 time=10116280 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_MONTHLY_STAT (cr=59532 pr=52140 pw=0 time=10116221 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_RECON_TXN_JOURNAL" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.02          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.02          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=138 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_RECON_TXN_JOURNAL (cr=9 pr=0 pw=0 time=97 us)
    select text
    from
    view$ where rowid=:1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        3      0.01       0.00          0          0          0           0
    Execute      3      0.01       0.00          0          0          2           0
    Fetch        3      0.00       0.00          0          6          0           3
    total        9      0.03       0.00          0          6          2           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS BY USER ROWID VIEW$ (cr=1 pr=0 pw=0 time=34 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_BULK_CRITERIA" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=109 us)
          0   TABLE ACCESS FULL GL_BULK_CRITERIA (cr=9 pr=0 pw=0 time=71 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_HISTORY" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.01       0.02          0       5070          0           3
    total        7      0.01       0.02          0       5070          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=5070 pr=0 pw=0 time=22519 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_HISTORY (cr=5070 pr=0 pw=0 time=22472 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_BULK_HISTORY" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=106 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_BULK_HISTORY (cr=9 pr=0 pw=0 time=69 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ALLOTMENT" where "POOL_ACCT_ID" = :1 and "POOL_ACCT_NO" =
       :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.02          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          3          0           3
    total        7      0.00       0.02          0          3          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=3 pr=0 pw=0 time=113 us)
          0   TABLE ACCESS BY INDEX ROWID GL_ALLOTMENT (cr=3 pr=0 pw=0 time=68 us)
          0    INDEX RANGE SCAN GL_ALLOTMENT_IX1 (cr=3 pr=0 pw=0 time=50 us)(object id 65651)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_YEARLY_STAT" where "GL_ACCT_ID" = :1 and
      "GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.01       0.01          0       3453          0           3
    total        7      0.01       0.01          0       3453          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=3453 pr=0 pw=0 time=10485 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_YEARLY_STAT (cr=3453 pr=0 pw=0 time=10440 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "CASH_GL_ACCT_ID" = :1 and
      "CASH_GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=110 us)
          0   TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=71 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "DEPOT_GL_ACCT_ID" = :1 and
      "DEPOT_GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=107 us)
          0   TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=71 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_TXN_ALLOTTEE" where "GL_ALLOTTEE_ACCT_ID" = :1 and
      "GL_ALLOTTEE_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=122 us)
          0   TABLE ACCESS FULL GL_TXN_ALLOTTEE (cr=9 pr=0 pw=0 time=84 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "POSN_GL_ACCT_ID" = :1 and
      "POSN_GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=110 us)
          0   TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=69 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ALLOTTEE" where "RECIPIENT_ACCT_ID" = :1 and
      "RECIPIENT_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=155 us)
          0   TABLE ACCESS FULL GL_ALLOTTEE (cr=9 pr=0 pw=0 time=119 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "INTER_BU_GL_ACCT_ID" = :1
      and "INTER_BU_GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=102 us)
          0   TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=67 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_BUDGET_ITEM_DATA" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=151 us)
          0   TABLE ACCESS FULL GL_BUDGET_ITEM_DATA (cr=9 pr=0 pw=0 time=108 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_TOTALLING_ACCOUNT_LINE" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.02          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.02          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=119 us)
          0   TABLE ACCESS FULL GL_TOTALLING_ACCOUNT_LINE (cr=9 pr=0 pw=0 time=82 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."SWEEP_FUNDS_XFER" where "TO_GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.01       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.01       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=123 us)
          0   TABLE ACCESS FULL SWEEP_FUNDS_XFER (cr=9 pr=0 pw=0 time=84 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCESS_ACCOUNT_LIST" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=117 us)
          0   TABLE ACCESS FULL GL_ACCESS_ACCOUNT_LIST (cr=9 pr=0 pw=0 time=79 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."SETTLEMENT_BANK_ACCOUNT" where "MIRROR_GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=121 us)
          0   TABLE ACCESS FULL SETTLEMENT_BANK_ACCOUNT (cr=9 pr=0 pw=0 time=86 us)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        8      0.01       0.29          0          0          8           0
    Execute     11      0.03       0.32          0          6        222           3
    Fetch        7      0.75      10.11      52140      59532          0           7
    total       26      0.79      10.73      52140      59538        230          10
    Misses in library cache during parse: 7
    Misses in library cache during execute: 2
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       47      0.01       0.06          0          0          0           0
    Execute     85      0.03       0.18          0          0          2           0
    Fetch       85      1.67      20.83     108398     118214          0          85
    total      217      1.71      21.08     108398     118214          2          85
    Misses in library cache during parse: 21
    Misses in library cache during execute: 20
        7  user  SQL statements in session.
       48  internal SQL statements in session.
       55  SQL statements in session.
        5  statements EXPLAINed in this session.
    Trace file: rubikon_s002_3952.trc
    Trace file compatibility: 10.01.00
    Sort options: exeela  exerow 
           1  session in tracefile.
           7  user  SQL statements in trace file.
          48  internal SQL statements in trace file.
          55  SQL statements in trace file.
          29  unique SQL statements in trace file.
           5  SQL statements EXPLAINed using schema:
               OFFLINETESTDB.prof$plan_table
                 Default table was used.
                 Table was created.
                 Table was dropped.
         548  lines in trace file.
           32  elapsed seconds in trace file.
    Thanks & Regards
    Sami

  • Need help in Delete operation using blazeDS

    Please find the flex client code and servlet code given below
    I am trying to call a DELETE method on the servlet using BlazeDS. The configuration in proxy-config.xml and services-config.xml is fine.
    When DELETE is called with the parameter user="krishna", the servlet prints
    received DELETE operation with parameter null
    My question is: why does the servlet print a null value for the user when it should print "someuser"? Can someone help me with this?
    FLEX CLIENT
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="vertical">
    <mx:Script><![CDATA[
        import mx.controls.Alert;
        import mx.rpc.http.HTTPService;
        import mx.rpc.events.ResultEvent;
        import mx.rpc.events.FaultEvent;

        public function callServletDELETE():void {
            var service:HTTPService = new HTTPService();
            //service.url = "http://localhost:8080/examples/blazeDS";
            service.destination = "BlazeDSHTTP"; // this is configured in proxy-config.xml file
            service.useProxy = true;
            service.method = "DELETE";
            service.resultFormat = "e4x";
            service.addEventListener("result", billingCarrierResult);
            service.addEventListener("fault", httpFault);
            service.send({user: 'someuser'});
        }

        protected function billingCarrierResult(event:ResultEvent):void {
            serviceResultsTextArea.text = "Success with BlazeDS!\n" + event.result;
        }

        protected function httpFault(event:FaultEvent):void {
            serviceResultsTextArea.text = "Failure trying to access service.\n"
                + event.fault.faultString + "\n" + event.fault.faultDetail;
        }
    ]]>
    </mx:Script>
    <mx:TextArea id="serviceResultsTextArea" width="50%" height="50%" />
    <mx:Button label="DELETE" click="callServletDELETE()"/>
    </mx:Application>
    servlet
    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;

    /**
     * Simple example intended to demonstrate BlazeDS with HttpService.
     */
    public class BlazeHttpExample extends HttpServlet
    {
        // excluded other methods (GET, POST) as they are working fine

        /**
         * Handles the HTTP <code>DELETE</code> method.
         * @param request  servlet request
         * @param response servlet response
         */
        @Override
        protected void doDelete(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException
        {
            System.out.println("received DELETE operation with parameter "
                + request.getParameter("user"));
        }

        @Override
        public String getServletInfo()
        {
            return "Simple intended to illustrate BlazeDS HttpService support.";
        }
    }

    In your Java code you are reading user as a request parameter, but it is not being sent as one.
    When making the HTTP call in Flex, try something like this:
    var obj:Object = new Object();
    obj["user"] = 'someuser';
    service.send(obj);
    Hopefully this should work.

  • Which is fast bulk delete or id's in a table and a where exists ....?

    I have some parent objects and I use bulk collect with a fetch limit; I currently store the primary keys of these parent objects to identify their child objects later, using WHERE EXISTS with a correlated subquery.
    I'm essentially moving object graphs that span partitions from table to table.
    When I've done my INSERT INTO ... SELECT, I eventually do a delete.
    Currently the delete uses the parent objects in this working table to identify the children to delete later.
    Q. What is likely to be faster?
    Using a "temporary" table to requery for child objects based on the parents that I have for each batch, or
    using a RETURNING clause from my INSERT INTO ... SELECT so that I have rowids or primary keys to work with later on
    when I want to perform my delete operation?
    I essentially have A's that have child B's, which in turn have child C's.
    I store a batch of A PKs in a table and use those to identify the B's.
    Currently I don't store the B PKs but use the A PKs again to identify the B's, which in turn are used to identify the C's later.
    I'm thinking that if I remember the PKs I'm using at each level, I can then use those later when it comes to the deletes.
    Typically that's done with a RETURNING clause and a bulk delete from that collection later.
    Thoughts?

    Parallel DML is one option. Another is to create a procedure (or package) that does a discrete unit of work (e.g. processes a parent and its children as a single business transaction), and then write a "thread manager" that runs x number of copies of these at the same time (via DBMS_JOB for example).
    Let's say the procedure's signature is as follows:
    create or replace procedure ProcessFamily( parentID number ) is ..
    --// processes a family (parent and children)
    Using DBMS_JOB is pretty easy - when you start a job you get a job number for it. Looking at USER_JOBS will tell you whether that job is still in the job queue, or has completed (once-off jobs are removed from the queue). The core of this code will be a loop that checks how many jobs (threads) are running, and if fewer than the ceiling (e.g. it may only use 20 threads), starts more ProcessFamily jobs.
    If the total number of threads/jobs to execute is known up front, then this thread manager can manually create a long operation entry. Such an entry contains the number of units of work to do and is then updated with the number of units done so far. Oracle provides time estimates for completion and percentage progress. This long operation can be tracked by most Oracle-based monitoring software and provides visibility into the progress of the processing.
    The ProcessFamily procedure can also use parallel DML (if that makes sense), or bulk processing (if needed). This approach also scales with hardware: as the server grows (upgrades, new server hardware), so does your ability to run more threads (aka jobs) at the same time.
    Now I'm not suggesting that you write a ProcessFamily() proc - I do not know the actual data and problem you're trying to solve. What I'm trying to convey is the basic principle for writing multi-threaded/parallel processing software using PL/SQL. And it is not that complex. The critical thing is simply that the parallel procedure or thread be entirely thread safe - meaning that multiple copies of the same code can be started and these copies will not cause serialisation, deadlocking, and other (application design) problems.
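    A minimal sketch of that thread-manager loop, assuming a ProcessFamily procedure already exists and that the parent IDs to process sit in a hypothetical parent_queue table (both names are placeholders, not anything defined in this thread):

    DECLARE
      c_max_jobs CONSTANT PLS_INTEGER := 20;  -- the job ceiling mentioned above
      v_job      BINARY_INTEGER;
      v_running  PLS_INTEGER;
    BEGIN
      FOR r IN (SELECT parent_id FROM parent_queue) LOOP
        -- wait until fewer than c_max_jobs ProcessFamily jobs are on the queue
        LOOP
          SELECT COUNT(*) INTO v_running
          FROM   user_jobs
          WHERE  what LIKE 'ProcessFamily(%';
          EXIT WHEN v_running < c_max_jobs;
          DBMS_LOCK.SLEEP(5);   -- requires execute privilege on DBMS_LOCK
        END LOOP;
        -- submit one once-off job per parent; the commit makes it visible to the queue
        DBMS_JOB.SUBMIT(job  => v_job,
                        what => 'ProcessFamily(' || r.parent_id || ');');
        COMMIT;
      END LOOP;
    END;
    /

    Each submitted job runs ProcessFamily for one parent, so the degree of parallelism is simply the ceiling used in the loop.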

  • ADF delete from more than one table with single delete operation

    Hi all,
    I have a scenario in which I am trying to delete record(s) from 3 tables.
    My jspx page looks like this:
    Column1:drop down from 1st table
    Column2:drop down from 2nd table
    Column3:drop down from 3rd table
    Delete Commit
    I have created a view object which has these three tables as entities. I dragged and dropped the view object and displayed these three columns.
    Now when I select any one, any two, or all three of these and click delete, only "column 1" gets deleted from the 1st table. The remaining two tables remain unaffected.
    Why is this so?
    Can't I use one delete operation for all three tables' DML in a single go?
    Thanks.

    If you have a business case that requires deleting from 3 tables at once when removing a single row in a view object, then you may want to look for the problem in your data model...
    Until then I'd suggest creating a database view providing the information you need (plus the PKs from the individual tables) and creating entity and view objects based on this view. An INSTEAD OF DELETE trigger attached to the view can do the actual delete operations on the 3 tables.
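    A minimal sketch of such a trigger, assuming a combined view v_three_tables that exposes the primary key of each base table (all object names here are placeholders):

    create or replace trigger v_three_tables_iod
      instead of delete on v_three_tables
      for each row
    begin
      -- fan the single row delete out to the three underlying tables
      delete from table1 where pk1 = :old.pk1;
      delete from table2 where pk2 = :old.pk2;
      delete from table3 where pk3 = :old.pk3;
    end;
    /

    ADF then issues its normal single-row delete against the view, and the trigger performs the three base-table deletes.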
    bye
    TPD

  • Forall bulk delete is too slow to work,seek advice.

    I used a PL/SQL stored procedure to do some ETL work. It picks up refreshed records from a staging table, checks whether the same records exist in the target table, then does a FORALL bulk deletion first, followed by a FORALL insert of all refreshed records into the target table. The insert part is working fine. Only the deletion part is too slow to get the job done. My code is listed below. Please advise where the problem is. Thanks.
    DECLARE
      TYPE t_distid IS TABLE OF VARCHAR2(15) INDEX BY BINARY_INTEGER;
      v_distid t_distid;
      CURSOR dist_delete IS
        SELECT DISTINCT distid FROM DIST_STG WHERE data_type = 'H';
    BEGIN
      OPEN dist_delete;
      LOOP
        FETCH dist_delete BULK COLLECT INTO v_distid;
        EXIT WHEN v_distid.COUNT = 0;
        FORALL i IN v_distid.FIRST..v_distid.LAST
          DELETE DIST_TARGET WHERE distid = v_distid(i);
      END LOOP;
      CLOSE dist_delete;
      COMMIT;
    END;
    /

    citicbj wrote:
    Justin:
    The answers to your questions are:
    1. Why would I not use a single DELETE statement? Because this PL/SQL procedure is part of an ETL process. The procedure is scheduled by the Oracle scheduler and runs automatically to refresh the data. Putting the DELETE in a stored procedure makes it easier to execute from the scheduler.
    You can compile SQL inside a PL/SQL procedure/function just as easily as coding it the way you have, so that's really not an excuse. As Justin pointed out, the straight SQL approach is what you want to use.
    2. The records in dist_stg with data_type = 'H' vary each month, ranging from 120 to 5,000 records. These records were inserted into the target table before, but they have since been updated in the transactional database. We need to delete the old records in the target and insert the updated ones in their place. The distID is the same and unique, so I use distID to delete the old rows and insert the updated records with the same distID into the target again. When users run a report, the updated records show up. As a plain SQL statement, deleting 5,000 records takes seconds; in my code above, it takes forever. The session just keeps running without any error message. There are no triggers or foreign keys involved.
    3. Merge. I haven't tried that yet. I may give it a try.
    Quite likely a good idea based on what you've outlined above, but at the very least, remove the procedural code around the delete as suggested by Justin.
    Thanks.
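    A minimal sketch of the straight-SQL delete suggested here, using the staging and target tables from the original post:

    DELETE FROM dist_target t
    WHERE  t.distid IN (SELECT s.distid
                        FROM   dist_stg s
                        WHERE  s.data_type = 'H');
    COMMIT;

    One statement, one commit: no cursor loop, no collection, and the optimizer can join the two tables in a single pass instead of probing the target once per distid.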

  • I have just upgraded to Mavericks and have been using Time Machine on an external disk with Snow Leopard.  Can I continue to backup with Time Machine on the same external disk or do I need a new disk since the operating system has changed?

    I have just upgraded to Mavericks and have been using Time Machine on an external disk with Snow Leopard.  Can I continue to backup with Time Machine on the same external disk or do I need a new disk since the operating system has changed?

    Hi there,
    I found that Time Machine in Mavericks will sort it all out for you. You shouldn't need to buy another backup drive, unless you have insufficient space left and can't afford to delete what's on there. It should just work fine.

  • I borrowed a hard drive from a friend that contains many thousands of songs. Somehow when tranferring these to iTunes I managed to get several copies of each song. How can i do a bulk delete of all the copies leaving just one version of each song?

    I need help please. I borrowed a hard drive from a friend which contains many thousands of songs. When adding them to my iTunes library I somehow managed to make multiple copies of each song. Is there a way to do a bulk delete of all the duplicates? If I have to go one by one it will take a lifetime.

    I have the same problem. I keep backups on a home server and this is where my library pointed to. I changed it to the local drive and now I have two copies in the library. I need to either bulk delete or clear the library (without deleting the files) and then rebuild. How do I do either of these things?

  • Table in which deleted operations are stored.

    Hi,
    We are writing a report and for that I need the work hours value, so I would like to know in which table the deleted operations of a maintenance order are stored.
    regards
    pm

    Hi,
    All the operation time values are found in AFVV; this includes the deleted operations. Also read AFVC for the operation number, etc. The key to both these tables is the AUFPL field (available in AFKO via the order number).
    To decide whether a particular AFVV record is deleted or not, it is necessary to look at the JEST table. JEST can be accessed via the object number, which has the format OV + AUFPL + APLZL. If the entry with status I0013 is active (no inactive flag), then the operation is deleted.
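    Expressed as a rough SQL-style sketch of that lookup (an ABAP report would issue equivalent SELECTs; the join keys and status fields follow the description above and should be verified in your system):

    SELECT v.*
    FROM   afvv v
    JOIN   afvc c
           ON  c.aufpl = v.aufpl
           AND c.aplzl = v.aplzl
    WHERE  EXISTS (SELECT 1
                   FROM   jest j
                   WHERE  j.objnr = 'OV' || c.aufpl || c.aplzl   -- operation object number
                   AND    j.stat  = 'I0013'                      -- deletion status
                   AND    j.inact = ' ');                        -- status still active

    This returns the operation time records (AFVV) belonging to deleted operations; drop the EXISTS to get all operations and use it as a flag instead.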
    -Paul

  • Bulk Delete in Oracle 8i

    Hi,
    We have a requirement where we need to archive the old records in production into history tables. After archiving, we need to delete the records from the production tables, around 50 in number. From our experience we have found that the delete operation consumes the most time; because of this, a large backlog of old records has piled up in the production tables. We have found ways to expedite the process of moving the data to the history tables.
    Can someone give us advice on the different techniques available to expedite the delete process along with the pros and cons of the respective technique?
    Thanks in advance!!!

    Hi,
    1. How do I partition only the set of data that I want to historise?
    For example, take a table SALES. This table contains date_sale, on which we can base the partitioning: one partition per month, and each month a new partition is created.
    If you want to keep a history of the last 12 months, you only drop (or take offline the corresponding tablespace of) the sales older than 12 months (drop partition n-12).
    All the data in the table is spread across the partitions; partitioned tables are not only for historisation but also to balance access.
    2. There are After Delete triggers on some of these tables. Will these triggers get fired when we use this concept? (or) Do we need to find some other way to trigger them?
    No change to your actual code. A partitioned table is still a table, even if there are many segments behind the scenes.
    3. Will the other DB objects be affected when we use this concept?
    Like point 2, no change for any other objects.
    4. Some tables are accessed by other applications using DB links. Will those applications be impacted in any way?
    Like points 2 and 3, no change for application programs or DB links.
    I advise you to read the Oracle documentation about this concept:
    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/partconc.htm
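    A minimal sketch of that monthly scheme (table, column and partition names are only illustrative):

    CREATE TABLE sales (
      sale_id   NUMBER,
      date_sale DATE,
      amount    NUMBER
    )
    PARTITION BY RANGE (date_sale) (
      PARTITION sales_2004_01 VALUES LESS THAN (TO_DATE('2004-02-01','YYYY-MM-DD')),
      PARTITION sales_2004_02 VALUES LESS THAN (TO_DATE('2004-03-01','YYYY-MM-DD'))
    );

    -- each month: add the next partition and drop the one that falls out of the
    -- 12-month window, instead of running a large DELETE
    ALTER TABLE sales ADD PARTITION sales_2004_03
      VALUES LESS THAN (TO_DATE('2004-04-01','YYYY-MM-DD'));
    ALTER TABLE sales DROP PARTITION sales_2003_03;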
    Hope this helps you,
    Nicolas.
    Note: Any table can be partitioned except those tables containing columns with LONG or LONG RAW datatypes.
    Message was edited by:
    N. Gasparotto
