Database performance degrades - delete operation

Hi,
I have a big database. One of the tables contains 120 million records, and many other tables (more than 50) have referential integrity to this table.
Table structure
Customer (Cust_ID, and other columns). Cust_ID is the primary key. Other tables have referential integrity to Customer.Cust_ID.
There are around 100 thousand records that have an entry only in this (Customer) table. These records have been identified and stored in a table Temp_cust(Cust_ID).
I am running a PL/SQL block that fetches a Cust_ID from Temp_cust and deletes that record from Customer.
It is observed that each delete takes a long time and overall system performance degrades. Even an on-line service that inserts rows into this table appears to be almost in a hung state.
The system is 24/7 and I have no option to disable any constraint.
Can someone explain why such a simple operation degrades system performance? Please also suggest how to complete the operation without affecting the performance of other operations.
Regards
Karim

Hi antti.koskinen
There is no ON DELETE rule. All are simple
referential integrity constraints,
like REFERENCES CUSTOMER (Cust_ID).
Regards,
Karim

Can you run the following snippet just to make sure (params are name and owner of the Customer table):
select table_name, constraint_name, constraint_type, delete_rule
from dba_constraints
where r_constraint_name in
      (select constraint_name
       from dba_constraints
       where owner = upper('&owner')
       and table_name = upper('&table_name')
       and constraint_type = 'P')
/

Also check the last time the table was rebuilt - deletes without rebuilds tend to raise the high-water mark.
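
A common cause of exactly this symptom is unindexed foreign-key columns in the child tables: every delete from Customer forces a check of each referencing table, and any child without an index on the referencing column is scanned in full, so the repeated full scans alone can drag down the whole instance. A hedged sketch for spotting the usual single-column cases (adjust the owner filter as needed):

select c.owner, c.table_name, cc.column_name
from dba_constraints c
join dba_cons_columns cc
  on cc.owner = c.owner
 and cc.constraint_name = c.constraint_name
where c.constraint_type = 'R'
and c.r_constraint_name in
      (select constraint_name
       from dba_constraints
       where owner = upper('&owner')
       and table_name = upper('&table_name')
       and constraint_type = 'P')
and not exists
      (select null
       from dba_ind_columns ic
       where ic.table_owner = c.owner
       and ic.table_name = c.table_name
       and ic.column_name = cc.column_name
       and ic.column_position = 1);

Any child table this returns is scanned end-to-end for every row deleted from Customer, which would match both the slow deletes and the stalled inserts.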

Similar Messages

  • On performing continuous delete operations, database hangs

    Dear All,
    On performing continuous delete operations, the instance on which the delete is running hangs, i.e. it does not allow new connections, and sometimes we get an ORA-3136 error in the alert log.
    Database version 10.2.0.3 , OS : HP-UXvi3
    Regards

    Refer this thread
    ORA-3136 while performing bulk delete

  • How to increase performance of a delete operation

    Hi,
    How can I increase the performance of a delete operation? This delete is done on a table which has millions of records and is loaded back every day.
    The statement is in a procedure and is as follows.
    #$%%$#$;
    commit;
    delete from TVRBC_SITE_ROLLUP_T;
    commit;

    Hi,
    execute immediate 'truncate table TVRBC_SITE_ROLLUP_T';
    Regards,
    Oleg
    Message was edited by:
    tsiboleg
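
    A note on why this works, with a minimal hedged sketch (procedure name hypothetical): TRUNCATE is DDL, so inside PL/SQL it must be issued via EXECUTE IMMEDIATE; unlike DELETE it resets the high-water mark and generates almost no undo/redo, but it commits implicitly and cannot be rolled back.
    create or replace procedure reload_site_rollup is
    begin
      -- DDL inside PL/SQL requires dynamic SQL; note the implicit commit
      execute immediate 'truncate table TVRBC_SITE_ROLLUP_T';
      -- ... reload the table here ...
    end reload_site_rollup;
    /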

  • Database performance degradation issue

    Hi,
    We are having a database performance problem.
    Oracle database 8.1.7.0
    When we run the statement
    SQL> select name,value from v$sysstat where name ='redo buffer allocation retries';
    NAME                              VALUE
    redo buffer allocation retries     2540
    Here, the redo buffer allocation retries value shown above is far too big; ideally it should be close to zero.
    Currently we have log_buffer = 65536 bytes (64 KB).
    Is it necessary to increase the size of log_buffer? Will increasing the size of log_buffer improve database performance to some extent?
    Also, regarding database buffer cache,
    SQL> SELECT NAME, VALUE FROM V$SYSSTAT WHERE NAME IN ('db block gets', 'consistent gets', 'physical reads');
    NAME               VALUE
    db block gets      4365099
    consistent gets    1309280457
    physical reads     103708616
    From the above values, buffer cache hit ratio is 0.921052817
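    That figure comes from the classic formula BCHR = 1 - physical reads / (db block gets + consistent gets); a hedged sketch of the same calculation run directly against v$sysstat:
    select 1 - (phy.value / (dbg.value + con.value)) as buffer_cache_hit_ratio
    from v$sysstat phy, v$sysstat dbg, v$sysstat con
    where phy.name = 'physical reads'
    and dbg.name = 'db block gets'
    and con.name = 'consistent gets';
    -- 1 - 103708616 / (4365099 + 1309280457) = 0.921052817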
    So, is it necessary to increase the size of database buffer cache ?
    With Regards

    Log_buffer at 64 KB is likely too small; the default is 512 KB per CPU.
    Increasing the log buffer will decrease the number of redo allocation retries.
    You need to set it to 512K or 1M.
    The buffer cache hit ratio is a meaningless indicator of system performance, as Connor McDonald has demonstrated on http://www.oracledba.co.uk
    You'd better strive to reduce I/O.
    Also, you will notice you need very large amounts of memory to get very little improvement.
    Personally I would probably do something if the BCHR were below 80 percent, but I know of situations where the problem is in the application and no value of db_block_buffers will be big enough.
    Hth
    Sybrand Bakker
    Senior Oracle DBA

  • SCOM reports "A significant portion of the database buffer cache has been written out to the system paging file. This may result in severe performance degradation"

    This was discussed here, with no resolution
    http://social.technet.microsoft.com/Forums/en-US/exchange2010/thread/bb073c59-b88f-471b-a209-d7b5d9e5aa28?prof=required
    I have the same issue.  This is a single-purpose physical mailbox server with 320 users and 72GB of RAM.  That should be plenty.  I've checked and there are no manual settings for the database cache.  There are no other problems with
    the server, nothing reported in the logs, except for the aforementioned error (see below).
    The server is sluggish.  A reboot will clear up the problem temporarily.  The only processes using any significant amount of memory are store.exe (using 53 GB), regsvc (using 5 GB) and W3 and Monitoringhost.exe using 1 GB each.  Does anyone have
    any ideas on this?
    Warning ESE Event ID 906. 
    Information Store (1497076) A significant portion of the database buffer cache has been written out to the system paging file.  This may result in severe performance degradation. See help link for complete details of possible causes. Resident cache
    has fallen by 213107 buffers (or 11%) in the last 207168 seconds. Current Total Percent Resident: 79% (1574197 of 1969409 buffers)

    Brian,
    We had this event log entry as well which SCOM picked up on, and 10 seconds before it the Forefront Protection 2010 for Exchange updated all of its engines.
    We are running Exchange 2010 SP2 RU3 with no file system antivirus (the boxes are restricted and have UAC turned on as mitigations). We are running the servers primarily as Hub Transport servers with 16GB of RAM, but they do have the mailbox role installed
    for the sole purpose of serving as our public folder servers.
    So we theorized the STORE process was just grabbing a ton of RAM, and occasionally it was told to dump the memory so the other processes could grab some - thus generating the alert. Up until last night we thought nothing of it, but ~25 seconds after the
    cache flush to paging file, we got the following alert:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:14 AM
    Event ID:      17012
    Task Category: Storage
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: The database could not allocate memory. Please close some applications to make sure you have enough memory for Exchange Server. The exception is Microsoft.Exchange.Isam.IsamOutOfMemoryException: Out of Memory (-1011)
       at Microsoft.Exchange.Isam.JetInterop.CallW(Int32 errFn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, String connect, MJET_GRBIT grbit, MJET_WRN& wrn)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file, MJET_GRBIT grbit)
       at Microsoft.Exchange.Isam.JetInterop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Isam.Interop.MJetOpenDatabase(MJET_SESID sesid, String file)
       at Microsoft.Exchange.Transport.Storage.DataConnection..ctor(MJET_INSTANCE instance, DataSource source).
    Followed by:
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:08:15 AM
    Event ID:      17106
    Task Category: Storage
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error, updated the registry key (SOFTWARE\Microsoft\ExchangeServer\v14\Transport\QueueDatabase) and as a result, will attempt self-healing after process restart.
    Log Name:      Application
    Source:        MSExchangeTransport
    Date:          8/2/2012 2:13:50 AM
    Event ID:      17102
    Task Category: Storage
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      HTS1.company.com
    Description:
    Transport Mail Database: MSExchangeTransport has detected a critical storage error and has taken an automated recovery action.  This recovery action will not be repeated until the target folders are renamed or deleted. Directory path:E:\EXCHSRVR\TransportRoles\Data\Queue
    is moved to directory path:E:\EXCHSRVR\TransportRoles\Data\Queue\Queue.old.
    So it seems as if Forefront Protection 2010 for Exchange inadvertently triggered the cache flush, which didn't appear to happen quickly or thoroughly enough for the transport service to do what it needed to do, so it freaked out and performed the subsequent
    actions.
    Do you have any ideas on how to prevent this 906 warning, which cascaded into a transport service outage?
    Thanks!

  • ORA-01456 : may not perform insert/delete/update operation

    When I use following stored procedure with crystal reports, following error occurs.
    ORA-01456 : may not perform insert/delete/update operation inside a READ ONLY transaction
    Kindly help me on this, please.
    My stored procedure is as under:-
    create or replace
    PROCEDURE PROC_FIFO
    (CV IN OUT TB_DATA.CV_TYPE,FDATE1 DATE, FDATE2 DATE,
    MSHOLD_CODE IN NUMBER,SHARE_ACCNO IN VARCHAR)
    IS
    --DECLARE VARIABLES
    V_QTY NUMBER(10):=0;
    V_RATE NUMBER(10,2):=0;
    V_AMOUNT NUMBER(12,2):=0;
    V_DATE DATE:=NULL;
    --DECLARE CURSORS
    CURSOR P1 IS
    SELECT *
    FROM FIFO
    WHERE SHARE_TYPE IN ('P','B','R')
    ORDER BY VOUCHER_DATE,
    CASE WHEN SHARE_TYPE='P' THEN 1
    ELSE
    CASE WHEN SHARE_TYPE='R' THEN 2
    ELSE
    CASE WHEN SHARE_TYPE='B' THEN 3
    END
    END
    END,
    TRANS_NO;
    RECP P1%ROWTYPE;
    CURSOR S1 IS
    SELECT * FROM FIFO
    WHERE SHARE_TYPE='S'
    AND TRANS_COST IS NULL
    ORDER BY VOUCHER_DATE,TRANS_NO;
    RECS S1%ROWTYPE;
    --BEGIN QUERIES
    BEGIN
    DELETE FROM FIFO;
    --OPENING BALANCES
    INSERT INTO FIFO (
    VOUCHER_NO,VOUCHER_TYPE,VOUCHER_DATE,TRANS_QTY,TRANS_AMT,TRANS_RATE,
    SHOLD_CODE,SHARE_TYPE,ACC_NO,SHARE_CODE,TRANS_NO)
    SELECT TO_CHAR(FDATE1,'YYYYMM')||'001' VOUCHER_NO,'OP' VOUCHER_TYPE,
    FDATE1-1 VOUCHER_DATE,
    SUM(
    CASE WHEN
    --((SHARE_TYPE ='S' AND DTAG='Y')
    SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    ) TRANS_QTY,
    SUM(TRANS_AMT),
    NVL(CASE WHEN SUM(TRANS_AMT)<>0
    AND
    SUM(
    CASE WHEN SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    )<>0 THEN
    SUM(TRANS_AMT)/
    SUM(
    CASE WHEN SHARE_TYPE IN ('P','B','R','S') THEN
    TRANS_QTY
    ELSE
    0
    END
    ) END,0) TRANS_RATE,
    MSHOLD_CODE SHOLD_CODE,'P' SHARE_TYPE,SHARE_ACCNO ACC_NO,
    SHARE_CODE,0 TRANS_NO
    FROM TRANS
    WHERE ACC_NO=SHARE_ACCNO
    AND SHOLD_CODE= MSHOLD_CODE
    AND VOUCHER_DATE<FDATE1
    --AND
    --(SHARE_TYPE='S' AND DTAG='Y')
    --OR SHARE_TYPE IN ('P','R','B'))
    group by TO_CHAR(FDATE1,'YYYYMM')||'001', MSHOLD_CODE,SHARE_ACCNO, SHARE_CODE;
    COMMIT;
    --TRANSACTIONS BETWEEND DATES
    INSERT INTO FIFO (
    TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE)
    SELECT TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    CASE WHEN SHARE_TYPE='S' THEN
    NVL(TRANS_RATE-COMM_PER_SHARE,0)
    ELSE
    NVL(TRANS_RATE+COMM_PER_SHARE,0)
    END
    ,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,NULL TRANS_COST,SHARE_TYPE
    FROM TRANS
    WHERE ACC_NO=SHARE_ACCNO
    AND SHOLD_CODE= MSHOLD_CODE
    AND VOUCHER_DATE BETWEEN FDATE1 AND FDATE2
    AND
    ((SHARE_TYPE='S' AND DTAG='Y')
    OR SHARE_TYPE IN ('P','R','B'));
    COMMIT;
    --PURCHASE CURSOR
    IF P1%ISOPEN THEN
    CLOSE P1;
    END IF;
    OPEN P1;
    LOOP
    FETCH P1 INTO RECP;
    V_QTY:=RECP.TRANS_QTY;
    V_RATE:=RECP.TRANS_RATE;
    V_DATE:=RECP.VOUCHER_DATE;
    dbms_output.put_line('V_RATE OPENING:'||V_RATE);
    dbms_output.put_line('OP.QTY2:'||V_QTY);
    EXIT WHEN P1%NOTFOUND;
    --SALES CURSOR
    IF S1%ISOPEN THEN
    CLOSE S1;
    END IF;
    OPEN S1;
    LOOP
    FETCH S1 INTO RECS;
    EXIT WHEN S1%NOTFOUND;
    dbms_output.put_line('OP.QTY:'||V_QTY);
    dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    IF ABS(RECS.TRANS_QTY)<=V_QTY
    AND V_QTY<>0
    AND RECS.TRANS_COST IS NULL THEN
    --IF RECS.TRANS_COST IS NULL THEN
    --dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    --dbms_output.put_line('BAL1:'||V_QTY);
    UPDATE FIFO
    SET TRANS_COST=V_RATE,
    PUR_DATE=V_DATE
    WHERE TRANS_NO=RECS.TRANS_NO
    AND TRANS_COST IS NULL;
    COMMIT;
    dbms_output.put_line('UPDATE TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('OP.QTY:'||V_QTY);
    dbms_output.put_line('SOLD:'||RECS.TRANS_QTY);
    dbms_output.put_line('TRANS_NO:'||RECS.TRANS_NO);
    dbms_output.put_line('BAL2:'||TO_CHAR(RECS.TRANS_QTY+V_QTY));
    END IF;
    IF ABS(RECS.TRANS_QTY)>ABS(V_QTY)
    AND V_QTY<>0 AND RECS.TRANS_COST IS NULL THEN
    UPDATE FIFO
    SET
    TRANS_QTY=-V_QTY,
    TRANS_COST=V_RATE,
    TRANS_AMT=ROUND(V_QTY*V_RATE,2),
    PUR_DATE=V_DATE
    WHERE TRANS_NO=RECS.TRANS_NO;
    --AND TRANS_COST IS NULL;
    COMMIT;
    dbms_output.put_line('UPDATING 100000:'||TO_CHAR(V_QTY));
    dbms_output.put_line('UPDATING 100000 TRANS_NO:'||TO_CHAR(RECS.TRANS_NO));
    INSERT INTO FIFO (
    TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,SHARE_CODE,ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE,PUR_DATE)
    VALUES (
    MCL.NEXTVAL,RECS.VOUCHER_NO,RECS.VOUCHER_TYPE,
    RECS.VOUCHER_DATE,RECS.TRANS_QTY+V_QTY,
    RECS.TRANS_RATE,(RECS.TRANS_QTY+V_QTY)*RECS.TRANS_RATE,RECS.SHOLD_CODE,
    RECS.SHARE_CODE,RECS.ACC_NO,
    RECS.DTAG,NULL,'S',V_DATE);
    dbms_output.put_line('INSERTED RECS.QTY:'||TO_CHAR(RECS.TRANS_QTY));
    dbms_output.put_line('INSERTED QTY:'||TO_CHAR(RECS.TRANS_QTY+V_QTY));
    dbms_output.put_line('INSERTED V_QTY:'||TO_CHAR(V_QTY));
    dbms_output.put_line('INSERTED RATE:'||TO_CHAR(V_RATE));
    COMMIT;
    V_QTY:=0;
    V_RATE:=0;
    EXIT;
    END IF;
    IF V_QTY>0 THEN
    V_QTY:=V_QTY+RECS.TRANS_QTY;
    ELSE
    V_QTY:=0;
    V_RATE:=0;
    EXIT;
    END IF;
    --dbms_output.put_line('BAL3:'||V_QTY);
    END LOOP;
    V_QTY:=0;
    V_RATE:=0;
    END LOOP;
    CLOSE S1;
    CLOSE P1;
    OPEN CV FOR
    SELECT TRANS_NO,VOUCHER_NO,VOUCHER_TYPE,
    VOUCHER_DATE,TRANS_QTY,
    TRANS_RATE,TRANS_AMT,SHOLD_CODE,B.SHARE_CODE,B.ACC_NO,
    DTAG,TRANS_COST,SHARE_TYPE, B.SHARE_NAME,A.PUR_DATE
    FROM FIFO A, SHARES B
    WHERE A.SHARE_CODE=B.SHARE_CODE
    --AND A.SHARE_TYPE IS NOT NULL
    ORDER BY VOUCHER_DATE,SHARE_TYPE,TRANS_NO;
    END PROC_FIFO;
    Thanks and Regards,
    Luqman

    Copy from TODOEXPERTOS.COM
    Problem Description
    When running a RAM build you get the following error as seen in the RAM build
    log file:
    14:52:50 2> Updating warehouse tables with build information...
    Process Terminated In Error:
    [Oracle][ODBC][Ora]ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
    (SIGENG02) ([Oracle][ODBC][Ora]ORA-01456: may not perform insert/delete/update operation inside a READ ONLY transaction
    ) Please contact the administrator of your Oracle Express Server application.
    Solution Description
    Here are the following suggestions to try out:
    1. You may want to use oci instead of odbc for your RAM build, provided you
    are running an Oracle database. This is set up through the RAA (relational access
    administrator) maintenance procedure.
    Also make sure your tnsnames.ora file is setup correctly in either net80/admin
    or network/admin directory, to point to the correct instance of Oracle.
    2. Commit or roll back the current transaction, then retry running your
    RAM build. It seems one or more of your lookup or fact tables has a
    read-only lock on it. This occurs if you modify or add some records to your
    lookup or fact tables but forget to issue a commit or rollback. You need to do
    this through SQL*Plus.
    3. You may need to check what permissions have been given to the relational user.
    The error could be a permissions issue.
    You must give the 'connect' permission or role to the RAM/relational user. You may
    also try giving the 'dba' and 'resource' privileges/roles to this user as a test. In order to
    keep it simple, make sure all your lookup, fact and wh_ tables are created on
    a single new tablespace. Create a new user with the above privileges as the
    owner of the tablespace, as well as the owner of the lookup, fact and wh_
    tables, in order to see if this is a permissions issue.
    In this particular case, the problem was resolved by using oci instead of odbc,
    as explained in suggestion #1.
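
    For what it's worth, the error itself is easy to reproduce outside Crystal Reports; a hedged sketch (the ODBC driver presumably opens the transaction in this read-only state on your behalf):
    SET TRANSACTION READ ONLY;
    DELETE FROM FIFO;   -- raises ORA-01456
    ROLLBACK;           -- ends the read-only transaction
    DELETE FROM FIFO;   -- now succeeds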

  • Need Performance tuning in delete operation

    Hi Gurus,
    I am performing delete operation by following SQL query.
    delete from gl_account where bu_id = -99
    but it takes a long time to execute. The table has 1 trigger and 5 indexes. I have disabled the trigger and rebuilt the indexes, but it still does not complete.
    Here is my explain plan.
    Operation           Object Name                  Rows   Bytes   Cost
    DELETE STATEMENT    Optimizer Mode=ALL_ROWS      561            19
     DELETE             OFFLINETESTDB.GL_ACCOUNT
      INDEX RANGE SCAN  OFFLINETESTDB.BU_ID          561    27 K    2
    Pls help me out to solve this.

    Hi All,
    I am still facing the same performance problem when deleting rows from this table.
    I have attached my TKPROF output for your consideration.
    TKPROF: Release 10.2.0.1.0 - Production on Tue Oct 12 14:01:13 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Trace file: rubikon_s002_3952.trc
    Sort options: exeela  exerow 
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    DELETE FROM GL_ACCOUNT
    WHERE
    GL_ACCT_ID IN (16908,16909,16456)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.13          0          0          0           0
    Execute      1      0.03       0.26          0          6        221           3
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.04       0.40          0          6        221           3
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 40  (OFFLINETESTDB)
    Rows     Row Source Operation
          0  DELETE  GL_ACCOUNT (cr=177742 pr=160538 pw=0 time=31518664 us)
          3   INLIST ITERATOR  (cr=6 pr=0 pw=0 time=103 us)
          3    INDEX RANGE SCAN GL_ACCOUNT_PK (cr=6 pr=0 pw=0 time=86 us)(object id 65637)
    Rows     Execution Plan
          0  DELETE STATEMENT   MODE: ALL_ROWS
          0   DELETE OF 'GL_ACCOUNT'
          3    INLIST ITERATOR
          3     INDEX   MODE: ANALYZED (RANGE SCAN) OF 'GL_ACCOUNT_PK'
                    (INDEX (UNIQUE))
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_SUMMARY" where "GL_ACCT_ID" = :1 and
      "GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.06          0          0          0           0
    Fetch        3      0.00       0.00          0          6          0           3
    total        7      0.00       0.06          0          6          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=6 pr=0 pw=0 time=236 us)
          0   VIEW  index$_join$_001 (cr=6 pr=0 pw=0 time=185 us)
          0    HASH JOIN  (cr=6 pr=0 pw=0 time=172 us)
          0     INDEX RANGE SCAN GL_ACCOUNT_SUMMARY_IX2 (cr=6 pr=0 pw=0 time=82 us)(object id 65648)
          0     INDEX RANGE SCAN GL_ACCOUNT_SUMMARY_IX1 (cr=0 pr=0 pw=0 time=0 us)(object id 65647)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_QUARTERLY_STAT" where "GL_ACCT_ID" = :1 and
      "GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.06          0          0          0           0
    Fetch        3      1.64      20.79     108398     109500          0           3
    total        7      1.64      20.86     108398     109500          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=109500 pr=108398 pw=0 time=20797344 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_QUARTERLY_STAT (cr=109500 pr=108398 pw=0 time=20797279 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_MONTHLY_STAT" where "GL_ACCT_ID" = :1 and
      "GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      4      0.00       0.06          0          0          1           0
    Fetch        3      0.75      10.11      52140      59532          0           3
    total        9      0.75      10.18      52140      59532          1           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Parsing user id: SYS
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=59532 pr=52140 pw=0 time=10116280 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_MONTHLY_STAT (cr=59532 pr=52140 pw=0 time=10116221 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_RECON_TXN_JOURNAL" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.02          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.02          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=138 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_RECON_TXN_JOURNAL (cr=9 pr=0 pw=0 time=97 us)
    select text
    from
    view$ where rowid=:1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        3      0.01       0.00          0          0          0           0
    Execute      3      0.01       0.00          0          0          2           0
    Fetch        3      0.00       0.00          0          6          0           3
    total        9      0.03       0.00          0          6          2           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS BY USER ROWID VIEW$ (cr=1 pr=0 pw=0 time=34 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_BULK_CRITERIA" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=109 us)
          0   TABLE ACCESS FULL GL_BULK_CRITERIA (cr=9 pr=0 pw=0 time=71 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_HISTORY" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.01       0.02          0       5070          0           3
    total        7      0.01       0.02          0       5070          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=5070 pr=0 pw=0 time=22519 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_HISTORY (cr=5070 pr=0 pw=0 time=22472 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_BULK_HISTORY" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=106 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_BULK_HISTORY (cr=9 pr=0 pw=0 time=69 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ALLOTMENT" where "POOL_ACCT_ID" = :1 and "POOL_ACCT_NO" =
       :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.02          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          3          0           3
    total        7      0.00       0.02          0          3          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=3 pr=0 pw=0 time=113 us)
          0   TABLE ACCESS BY INDEX ROWID GL_ALLOTMENT (cr=3 pr=0 pw=0 time=68 us)
          0    INDEX RANGE SCAN GL_ALLOTMENT_IX1 (cr=3 pr=0 pw=0 time=50 us)(object id 65651)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCOUNT_YEARLY_STAT" where "GL_ACCT_ID" = :1 and
      "GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.01       0.01          0       3453          0           3
    total        7      0.01       0.01          0       3453          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=3453 pr=0 pw=0 time=10485 us)
          0   TABLE ACCESS FULL GL_ACCOUNT_YEARLY_STAT (cr=3453 pr=0 pw=0 time=10440 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "CASH_GL_ACCT_ID" = :1 and
      "CASH_GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=110 us)
          0   TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=71 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "DEPOT_GL_ACCT_ID" = :1 and
      "DEPOT_GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=107 us)
          0   TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=71 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_TXN_ALLOTTEE" where "GL_ALLOTTEE_ACCT_ID" = :1 and
      "GL_ALLOTTEE_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=122 us)
          0   TABLE ACCESS FULL GL_TXN_ALLOTTEE (cr=9 pr=0 pw=0 time=84 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "POSN_GL_ACCT_ID" = :1 and
      "POSN_GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=110 us)
          0   TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=69 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ALLOTTEE" where "RECIPIENT_ACCT_ID" = :1 and
      "RECIPIENT_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=155 us)
          0   TABLE ACCESS FULL GL_ALLOTTEE (cr=9 pr=0 pw=0 time=119 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "INTER_BU_GL_ACCT_ID" = :1
      and "INTER_BU_GL_ACCT_NO" = :2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=102 us)
          0   TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=67 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_BUDGET_ITEM_DATA" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=151 us)
          0   TABLE ACCESS FULL GL_BUDGET_ITEM_DATA (cr=9 pr=0 pw=0 time=108 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_TOTALLING_ACCOUNT_LINE" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.02          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.02          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=119 us)
          0   TABLE ACCESS FULL GL_TOTALLING_ACCOUNT_LINE (cr=9 pr=0 pw=0 time=82 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."SWEEP_FUNDS_XFER" where "TO_GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.01       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.01       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=123 us)
          0   TABLE ACCESS FULL SWEEP_FUNDS_XFER (cr=9 pr=0 pw=0 time=84 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."GL_ACCESS_ACCOUNT_LIST" where "GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=117 us)
          0   TABLE ACCESS FULL GL_ACCESS_ACCOUNT_LIST (cr=9 pr=0 pw=0 time=79 us)
    select /*+ all_rows */ count(1)
    from
    "OFFLINETESTDB"."SETTLEMENT_BANK_ACCOUNT" where "MIRROR_GL_ACCT_ID" = :1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.00       0.00          0          0          0           0
    Fetch        3      0.00       0.00          0          9          0           3
    total        7      0.00       0.00          0          9          0           3
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          3  SORT AGGREGATE (cr=9 pr=0 pw=0 time=121 us)
          0   TABLE ACCESS FULL SETTLEMENT_BANK_ACCOUNT (cr=9 pr=0 pw=0 time=86 us)
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        8      0.01       0.29          0          0          8           0
    Execute     11      0.03       0.32          0          6        222           3
    Fetch        7      0.75      10.11      52140      59532          0           7
    total       26      0.79      10.73      52140      59538        230          10
    Misses in library cache during parse: 7
    Misses in library cache during execute: 2
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       47      0.01       0.06          0          0          0           0
    Execute     85      0.03       0.18          0          0          2           0
    Fetch       85      1.67      20.83     108398     118214          0          85
    total      217      1.71      21.08     108398     118214          2          85
    Misses in library cache during parse: 21
    Misses in library cache during execute: 20
        7  user  SQL statements in session.
       48  internal SQL statements in session.
       55  SQL statements in session.
        5  statements EXPLAINed in this session.
    Trace file: rubikon_s002_3952.trc
    Trace file compatibility: 10.01.00
    Sort options: exeela  exerow 
           1  session in tracefile.
           7  user  SQL statements in trace file.
          48  internal SQL statements in trace file.
          55  SQL statements in trace file.
          29  unique SQL statements in trace file.
           5  SQL statements EXPLAINed using schema:
               OFFLINETESTDB.prof$plan_table
                 Default table was used.
                 Table was created.
                 Table was dropped.
         548  lines in trace file.
           32  elapsed seconds in trace file.
    Thanks & Regards
    Sami
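
    A hedged reading of this trace: the recursive SYS statements are the foreign-key checks fired by the delete, and the expensive ones run as full table scans - GL_ACCOUNT_QUARTERLY_STAT (~20 s) and GL_ACCOUNT_MONTHLY_STAT (~10 s) - which suggests those child tables have no index leading with the referencing columns. A sketch of the kind of indexes that would turn those scans into cheap probes (index names hypothetical):
    CREATE INDEX GL_ACCT_QTR_STAT_FK_IX ON GL_ACCOUNT_QUARTERLY_STAT (GL_ACCT_ID, GL_ACCT_NO);
    CREATE INDEX GL_ACCT_MON_STAT_FK_IX ON GL_ACCOUNT_MONTHLY_STAT (GL_ACCT_ID, GL_ACCT_NO);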

  • Database Auditing to record DELETE operation on a schema for all tables.

    Hi,
    I am using ORACLE DATABASE 11g. I want to apply the AUDIT feature to record all the DELETE operations happening on the schema tables.
    I did the following steps but didn't get the proper output:
    I logged in as SYS (as sysdba) and set
    alter system set audit_trail=DB,EXTENDED scope=spfile;
    Then I executed this command to record the SQL which uses the DELETE privilege:
    AUDIT DELETE ANY TABLE;
    Then I bounced my DB, and for testing purposes I created a table in the SCOTT schema, inserted 10 rows into it and then deleted all the rows from it.
    As expected, I checked the view:
    select * from aud$
    where spare1 like '%MACHINE1%'
    and USERID='SCOTT'
    order by ntimestamp#;
    The output I got is:
    34     168368     1     1          SCOTT     I-DOMAIN\MACHINE1     MACHINE1     100     0     Authenticated by: DATABASE; Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=2565))          MACHINE1     5          21-DEC-11 07.02.58.621000 AM     0     928:5024     0000000000000000     983697018     <CLOB>     <CLOB>
    But here I don't see the SQL recorded in the last column.
    What I was expecting is that if I fire a DELETE statement in the schema, it will get logged here, and with the help of this view I will be able to see which user from which machine executed a DELETE statement and what that statement was.
    Please let me know which step I have missed.
    PS: The ACTION# column shows 100; is that the code for the DELETE action? I also checked the DBA_AUDIT_TRAIL view but didn't find any useful info there.
    Thanks in advance.

    Try instead:
    audit delete table;
    AUDIT DELETE ANY TABLE audits use of the DELETE ANY TABLE privilege, not ordinary deletes on your own tables.
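
    Once AUDIT DELETE TABLE is in effect (with AUDIT_TRAIL=DB,EXTENDED and an instance bounce), the statement text should be visible through the documented view rather than aud$; a hedged sketch:
    audit delete table by access;
    select username, userhost, action_name, timestamp, sql_text
    from dba_audit_trail
    where username = 'SCOTT'
    and action_name = 'DELETE'
    order by timestamp;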

  • When a table with a clustered columnstore index is partitioned, performance degrades if data is located in multiple partitions

    Hello,
    Below I provide a complete code to re-produce the behavior I am observing.  You could run it in tempdb or any other database, which is not important.  The test query provided at the top of the script is pretty silly, but I have observed the same
    performance degradation with about a dozen of various queries of different complexity, so this is just the simplest one I am using as an example here. Note that I also included approximate run times in the script comments (this is obviously based on what I
    observed on my machine).  Here are the steps with numbers corresponding to the numbers in the script:
    1. Run script from #1 to #7.  This will create the two test tables, populate them with records (40 mln. and 10 mln.) and build regular clustered indexes.
    2. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Main'. Scan count 5, logical reads 151435, physical reads 0, read-ahead reads 4, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Txns'. Scan count 5, logical reads 74155, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 5514 ms, 
    elapsed time = 1389 ms.
    3. Run script from #8 to #9. This will replace regular clustered indexes with columnstore clustered indexes.
    4. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54850, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 828 ms, 
    elapsed time = 392 ms.
    As you can see the query is clearly faster.  Yay for columnstore indexes!.. But let's continue.
    5. Run script from #10 to #12 (note that this might take some time to execute).  This will move about 80% of the data in both tables to a different partition.  You should be able to see that the data has been moved when running Step #11.
    6. Run test query (at the top of the script).  Here are the execution statistics:
    Table 'Txns'. Scan count 4, logical reads 44563, physical reads 0, read-ahead reads 37186, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 4, logical reads 54817, physical reads 2, read-ahead reads 96862, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
     SQL Server Execution Times:
       CPU time = 8172 ms, 
    elapsed time = 3119 ms.
    And now look, the I/O stats look the same as before, but the performance is the slowest of all our tries!
    I am not going to paste here execution plans or the detailed properties for each of the operators.  They show up as expected -- column store index scan, parallel/partitioned = true, both estimated and actual number of rows is less than during the second
    run (when all of the data resided on the same partition).
    So the question is: why is it slower?
    Thank you for any help!
    Here is the code to re-produce this:
    --==> Test Query - begin --<===
    DBCC DROPCLEANBUFFERS
    DBCC FREEPROCCACHE
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    SELECT COUNT(1)
    FROM Txns AS z WITH(NOLOCK)
    LEFT JOIN Main AS mmm WITH(NOLOCK) ON mmm.ColBatchID = 70 AND z.TxnID = mmm.TxnID AND mmm.RecordStatus = 1
    WHERE z.RecordStatus = 1
    --==> Test Query - end --<===
    --===========================================================
    --1. Clean-up
    IF OBJECT_ID('Txns') IS NOT NULL DROP TABLE Txns
    IF OBJECT_ID('Main') IS NOT NULL DROP TABLE Main
    IF EXISTS (SELECT 1 FROM sys.partition_schemes WHERE name = 'PS_Scheme') DROP PARTITION SCHEME PS_Scheme
    IF EXISTS (SELECT 1 FROM sys.partition_functions WHERE name = 'PF_Func') DROP PARTITION FUNCTION PF_Func
    --2. Create partition function
    CREATE PARTITION FUNCTION PF_Func(tinyint) AS RANGE LEFT FOR VALUES (1, 2, 3)
    --3. Partition scheme
    CREATE PARTITION SCHEME PS_Scheme AS PARTITION PF_Func ALL TO ([PRIMARY])
    --4. Create Main table
    CREATE TABLE dbo.Main(
    SetID int NOT NULL,
    SubSetID int NOT NULL,
    TxnID int NOT NULL,
    ColBatchID int NOT NULL,
    ColMadeId int NOT NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --5. Create Txns table
    CREATE TABLE dbo.Txns(
    TxnID int IDENTITY(1,1) NOT NULL,
    GroupID int NULL,
    SiteID int NULL,
    Period datetime NULL,
    Amount money NULL,
    CreateDate datetime NULL,
    Descr varchar(50) NULL,
    RecordStatus tinyint NOT NULL DEFAULT ((1))
    ) ON PS_Scheme(RecordStatus)
    --6. Populate data (credit to Jeff Moden: http://www.sqlservercentral.com/articles/Data+Generation/87901/)
    -- 40 mln. rows - approx. 4 min
    --6.1 Populate Main table
    DECLARE @NumberOfRows INT = 40000000
    INSERT INTO Main (
    SetID,
    SubSetID,
    TxnID,
    ColBatchID,
    ColMadeID,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    SetID = ABS(CHECKSUM(NEWID())) % 500 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SubSetID = ABS(CHECKSUM(NEWID())) % 3 + 1,
    TxnID = ABS(CHECKSUM(NEWID())) % 1000000 + 1,
    ColBatchId = ABS(CHECKSUM(NEWID())) % 100 + 1,
    ColMadeID = ABS(CHECKSUM(NEWID())) % 500000 + 1,
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --6.2 Populate Txns table
    -- 10 mln. rows - approx. 1 min
    SET @NumberOfRows = 10000000
    INSERT INTO Txns (
    GroupID,
    SiteID,
    Period,
    Amount,
    CreateDate,
    Descr,
    RecordStatus)
    SELECT TOP (@NumberOfRows)
    GroupID = ABS(CHECKSUM(NEWID())) % 5 + 1, -- ABS(CHECKSUM(NEWID())) % @Range + @StartValue,
    SiteID = ABS(CHECKSUM(NEWID())) % 56 + 1,
    Period = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'), -- DATEADD(dd,ABS(CHECKSUM(NEWID())) % @Days, @StartDate)
    Amount = CAST(RAND(CHECKSUM(NEWID())) * 250000 + 1 AS MONEY),
    CreateDate = DATEADD(dd,ABS(CHECKSUM(NEWID())) % 365, '05-04-2012'),
    Descr = REPLICATE(CHAR(65 + ABS(CHECKSUM(NEWID())) % 26), ABS(CHECKSUM(NEWID())) % 20),
    RecordStatus = 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2
    --7. Add PK's
    -- 1 min
    ALTER TABLE Txns ADD CONSTRAINT PK_Txns PRIMARY KEY CLUSTERED (RecordStatus ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED INDEX CDX_Main ON Main(RecordStatus ASC, SetID ASC, SubSetId ASC, TxnID ASC) ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Replace regular indexes with clustered columnstore indexes
    --===========================================================
    --8. Drop existing indexes
    ALTER TABLE Txns DROP CONSTRAINT PK_Txns
    DROP INDEX Main.CDX_Main
    --9. Create clustered columnstore indexes (on partition scheme!)
    -- 1 min
    CREATE CLUSTERED COLUMNSTORE INDEX PK_Txns ON Txns ON PS_Scheme(RecordStatus)
    CREATE CLUSTERED COLUMNSTORE INDEX CDX_Main ON Main ON PS_Scheme(RecordStatus)
    --==> Run test Query --<===
    --===========================================================
    -- Move about 80% of the data into a different partition
    --===========================================================
    --10. Update "RecordStatus", so that data is moved to a different partition
    -- 14 min (32002557 row(s) affected)
    UPDATE Main
    SET RecordStatus = 2
    WHERE TxnID < 800000 -- range of values is from 1 to 1 mln.
    -- 4.5 min (7999999 row(s) affected)
    UPDATE Txns
    SET RecordStatus = 2
    WHERE TxnID < 8000000 -- range of values is from 1 to 10 mln.
    --11. Check data distribution
    SELECT
    OBJECT_NAME(SI.object_id) AS PartitionedTable
    , DS.name AS PartitionScheme
    , SI.name AS IdxName
    , SI.index_id
    , SP.partition_number
    , SP.rows
    FROM sys.indexes AS SI WITH (NOLOCK)
    JOIN sys.data_spaces AS DS WITH (NOLOCK)
    ON DS.data_space_id = SI.data_space_id
    JOIN sys.partitions AS SP WITH (NOLOCK)
    ON SP.object_id = SI.object_id
    AND SP.index_id = SI.index_id
    WHERE DS.type = 'PS'
    AND OBJECT_NAME(SI.object_id) IN ('Main', 'Txns')
    ORDER BY 1, 2, 3, 4, 5;
    PartitionedTable  PartitionScheme  IdxName   index_id  partition_number  rows
    Main              PS_Scheme        CDX_Main  1         1                 7997443
    Main              PS_Scheme        CDX_Main  1         2                 32002557
    Main              PS_Scheme        CDX_Main  1         3                 0
    Main              PS_Scheme        CDX_Main  1         4                 0
    Txns              PS_Scheme        PK_Txns   1         1                 2000001
    Txns              PS_Scheme        PK_Txns   1         2                 7999999
    Txns              PS_Scheme        PK_Txns   1         3                 0
    Txns              PS_Scheme        PK_Txns   1         4                 0
    --12. Update statistics
    EXEC sys.sp_updatestats
    --==> Run test Query --<===

    Hello Michael,
    I just simulated the situation and got the same results as in your description. However, I did one more test - I rebuilt the two columnstore indexes after the update (and test run). I got the following details:
    Table 'Txns'. Scan count 8, logical reads 12922, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Main'. Scan count 8, logical reads 57042, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Workfile'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    SQL Server Execution Times:
    CPU time = 251 ms, elapsed time = 128 ms.
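    For reference, a minimal sketch of the rebuild that produced the numbers above (standard T-SQL; a partition-level REBUILD would also work):
    ALTER INDEX CDX_Main ON Main REBUILD
    ALTER INDEX PK_Txns ON Txns REBUILD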
    As an explanation of the behavior: because an UPDATE statement against a clustered columnstore index is executed as a DELETE plus INSERT, you ended up with all the original row groups having almost all of their data flagged as deleted, plus almost the same amount of new row groups holding the new data (coming from the update). I suppose scanning the deleted bitmap caused the additional slowness at your end, or something related to that "fragmentation".
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • Performance degradation when using foreign keys

    Hi,
    I face drastic performance degradation when I add foreign keys to a table and perform insert / update on that table.
    I have a row store table into which I need to insert around 150,000 records.
    If the table has no foreign key references, it takes at most 5 seconds; but if the same table has references to other tables (in my case there are 3 references), the processing speed drops drastically and the load takes 2 minutes.
    Is there any solution / best practice that can help me in gaining performance (processing speed) in this situation?
    Thanks
    S.Srivatsan

    Hi Sri,
    When you perform an insert into any database table which has foreign key relationships, the database checks the corresponding parent tables to verify that the master data is available. If your table has 2 foreign key relationships, this happens twice per insert. Hence the performance degrades. This is one of the reasons why ECC doesn't establish foreign key relationships in the back-end database. The same applies not just to INSERT but to UPDATE and DELETE as well.
    Sreehari
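
    A generic SQL sketch of the mechanism described above (table names hypothetical): every child-row insert forces one parent-key lookup per foreign key, so three references mean three extra probes per row.
    create table parent (id integer primary key);
    create table child (
      id integer primary key,
      p1 integer references parent(id),
      p2 integer references parent(id),
      p3 integer references parent(id)  -- 3 FKs => 3 parent lookups per insert
    );
    insert into parent values (1);
    -- this single insert must probe parent three times before it succeeds
    insert into child values (1, 1, 1, 1);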

  • Delete Operation in JDBC Sender Adapter does not work

    Hi,
    I have one student table which contains the fields ID, Name, BirthMonth, BirthYear and ReadFlag. ReadFlag is a character field of length 1 which contains only the values 'Y' or ' '.
    I want to execute a delete operation on this table, i.e. to delete the records where readflag = 'Y'. So I set the following values for these parameters:
    DELETE FROM student WHERE READFLAG = ' Y '
    Query SQL Statement : SELECT * FROM student WHERE readflag = 'Y'
    Update SQL Statement : DELETE FROM student WHERE READFLAG = 'Y'
    Poll Interval : 60 Seconds.
    There are many records in this table with readflag = 'Y'. But the adapter does not delete those records from the table, i.e. the delete operation is not executed. At the same time, communication channel monitoring does not show any error, yet the delete is not carried out on the table.
    I tried after a 'COMMIT' on the table as well, but it does not work. What could be the reason, and how can the delete operation be used effectively on this table?
    Kindly help me solve this problem, friends.
    Thanking you.
    Kind Regards,
    Jeg.

    http://help.sap.com/saphelp_nw04/helpdata/en/7e/5df96381ec72468a00815dd80f8b63/content.htm says:
    "Adapter Work Method
    You must add an indicator that specifies the processing status of each data record in the adapter (data record processed/data record not processed) to the database table.
    The UPDATE statement must alter exactly those data records that have been selected by the SELECT statement. You can ensure this is the case by using an identical WHERE clause. (See Processing Parameters, SQL Statement for Query, and SQL Statement for Update below.)
    Processing can only be performed correctly when the isolation level for transaction is set to repeatable_read or serializable.
    Example
    SQL statement for query: SELECT * FROM table WHERE processed = 0;
    SQL statement for update: UPDATE table SET processed = 1 WHERE processed = 0;
    processed is the indicator in the database."
    Try with repeatable_read or serializable.
    Also go through this thread - DELETE Querey in JDBC SENDER
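    Applied to the student table from your post, the paired statements would look like this (a sketch of the same pattern, not a confirmed fix):
    -- The WHERE clauses must match exactly so the update statement
    -- affects precisely the rows the query statement selected:
    SELECT * FROM student WHERE readflag = 'Y';
    DELETE FROM student WHERE readflag = 'Y';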

  • How to override the default delete operation

    Hi,
    I am new to JHeadstart, and to Java coding for that matter.
    Here's my situation:
    I have a view which is based on a function (the function returns a collection).
    I have created INSTEAD OF triggers on this view to perform insert/update/delete operations.
    All these DML operations work as expected in the Oracle database.
    Now, I created an entity object and a view object on this view in my JHeadstart project.
    When I run this JHeadstart application, my insert and search operations run fine, but update and delete operations fail with a JBO-26080 error.
    The underlying Oracle error is "ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc."
    I know that delete and update operations work fine in Oracle, and hence I would like to override the default JHeadstart operations. Can anybody tell me how to do that, or point me in the right direction?

    Hi,
    From the JHeadstart Developer's Guide, chapter TroubleShooting - Problem Assessment:
    If you are getting a JBO error (Business Components for Java error), try to perform the same data retrieval or data manipulation action using the BC4J Tester. You can
    invoke the tester through a right-mouse-click on the BC4J application module. If you get the same error using the BC4J tester, the problem is in the BC4J object definitions. If you added business rules, or other custom code to your BC4J objects that executes during your data retrieval or data manipulation action, you can debug this code line-by-line by running the tester in debug mode. You can also look up the JBO error in the JDeveloper online help, for each error possible causes and how to solve them are described.
    It sounds to me like you will also get this error in the BC4J Tester. This means that the problem is not related to JHeadstart. You can go to the JDeveloper discussion forum http://otn.oracle.com/discussionforums/jdev.html and ask your question there without mentioning JHeadstart. Maybe there is some switch you can set in the BC4J object to let BC4J not use SELECT FOR UPDATE.
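    As background on the underlying ORA-02014: it is raised whenever SELECT FOR UPDATE is attempted against a view whose rows cannot be locked in a base table. A minimal illustration (hypothetical objects, using the classic demo schema):
    -- A view over DISTINCT (or over a function returning a collection,
    -- as in your case) has no stable row-to-base-table mapping:
    CREATE VIEW v_demo AS SELECT DISTINCT deptno FROM emp;
    SELECT deptno FROM v_demo FOR UPDATE;
    -- ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
    That is why a BC4J switch that avoids SELECT FOR UPDATE (the locking mode, for instance) is the direction to investigate.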
    Hope this helps,
    Sandra Muller
    JHeadstart Team

  • Performance degrading CPU utilization 100%

    Hello,
    RHEL 4
    Oracle 10.2.0.4
    Attached to a DAS (partition is 91% full) RAID 5
    Over the past few weeks my production database performance has degraded severely. I have not made any application, OS, or database changes (I was on vacation!). I have started troubleshooting, but need some more tips as to what else I can check.
    My users run a query against the database, and for a table with only 40,000 rows, it will take about 2 minutes before the results return. For a table with 12 million records, it takes about 10 minutes or more for the query to complete. If I run a script that counts/displays a total record count for each table in the database as well as a total count of all records in the database (~15,000,000 records total), the script either takes about 45 minutes to complete or sometimes it just never completes. The Linux partition on my DAS is currently 91% full. I do not have Flashback or auditing enabled.
    These are some things I tried/observed:
    I shut down all applications/servers/connections to the database and then restarted the database. After starting the database, I monitored the DAS interface, and the CPU utilization spiked to 100% and never goes down, even with no users/application trying to connect to the database. The alert.log file contains these errors:
    ORA-00603: ORACLE server session terminated by fatal error
    ORA-00600: internal error code, arguments: [ttcdrv-recursivecall]
    ORA-03135: connection lost contact
    ORA-06512: at "CTXSYS.SYNCRN", line 1
    The database still starts, but the performance is bad. From the error above, and after checking performance in EM, I see there are a lot of index sync jobs running for each of the schemas, and db file sequential read is high. There is a job to resync the indexes every 5 minutes. I am going to try disabling these jobs this afternoon to see what happens to the CPU utilization. If that helps, I will try adjusting the job from every 5 minutes to something like every 30 minutes. Is there a way to defragment the CONTEXT indexes? REBUILD?
    I'm not sure if I am running down the right path or not. Does anyone have any other suggestions as to what I can check? My SGA_TARGET is currently set to 880M and SGA_MAX_SIZE is 2032M. Would it also help to increase SGA_TARGET to SGA_MAX_SIZE, thus increasing the amount of space allocated to the buffer cache? I have ASMM enabled, and currently this is what is allocated:
    Shared Pool = 18.2%
    Buffer Cache = 61.8%
    Large Pool = 16.4%
    Java Pool = 1.8%
    Other = 1.8%
    I also ran ADDM and these were the results of my Performance Analysis:
    34.7% The throughput of the I/O subsystem was significantly lower than expected (when I clicked on this it said to either implement ASM or stripe using SAME methodology...we are already using RAID5)
    31% SQL statements consuming significant database time were found (I cannot make application code changes, and my database workload consists entirely of INSERT statements...there are never any deletes or updates. The updates that do occur come from the index resync job against the various DR$ tables)
    18% Individual database segments responsible for significant user I/O wait were found
    15.9% Individual SQL statements responsible for significant user I/O wait were found
    8.4% PL/SQL execution consumed significant database time
    I also recently ran a SHRINK on all possible tablespaces, as recommended in EM, but that did not seem to help either.
    Please let me know if I can provide any other pertinent information to solve the poor I/O problem. I am leaning toward thinking it has to do with the index sync job stepping on itself...the job cannot complete in 5 minutes before it tries to kick off again...but I could be completely wrong! What else can I check to figure out why I have 100% CPU utilization, with no users/applications connected? Thank you!
    Mimi
    Edited by: Mimi Miami on Jul 25, 2009 10:22 AM

    Tables/Indexes last analyzed today.
    I figured out that it was the Oracle Text indexes syncing too frequently that was causing the problem. I disabled all the jobs that kicked off those index syncs and my CPU utilization dropped to almost 0%. I will work on tuning the interval and re-enabling the indexes for my dynamic datasources.
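    For the follow-up tuning, a hedged sketch (the index name and job number are hypothetical placeholders):
    -- Defragment an Oracle Text index in place, answering the earlier
    -- "is there a way to defrag the CONTEXT indexes?" question:
    BEGIN
      CTX_DDL.OPTIMIZE_INDEX('MY_CTX_INDEX', 'FULL');
    END;
    /
    -- Widen an existing DBMS_JOB sync interval from 5 to 30 minutes:
    BEGIN
      DBMS_JOB.INTERVAL(123, 'SYSDATE + 30/1440');  -- 123 = job number from DBA_JOBS
      COMMIT;
    END;
    /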
    Thank you for everyone's suggestions!
    Mimi

  • JDBC, SQL*Net wait interface, performance degradation on 10g vs. 9i

    Hi All,
    I came across a performance issue that I think results from a misconfiguration of something between Oracle and JDBC. The logic of my system runs 12 threads in Java. Each thread performs a simple 'select a,b,c...f from table_xyz' on a different table (so I have 12 different tables, with cardinality from 3 to 48 million rows, and one working thread per table).
    In each thread I create a result set that is explicitly marked forward-only, the transaction is set read-only, and the fetch size is set to 100,000 records. The Java logic processes records in a standard while(rs.next()) {...} loop.
    I'm experiencing performance degradation between executions of the same Java code on Oracle 9i and Oracle 10g, on the same machine, on the same data. The difference is enormous: the 9i run takes 26 hours while the 10g run takes 39 hours.
    I have collected a Statspack report for 9i and an AWR report for 10g. Below are the top wait events for each.
    ===== 9i ===================
    Event                          Waits       Timeouts  Total Wait Time (s)  Avg wait (ms)  Waits/txn
    db file sequential read        22,939,988  0         6,240                0              0.7
    control file parallel write    6,152       0         296                  48             0.0
    SQL*Net more data to client    2,877,154   0         280                  0              0.1
    db file scattered read         26,842      0         91                   3              0.0
    log file parallel write        3,528       0         83                   23             0.0
    latch free                     94,845      0         50                   1              0.0
    process startup                93          0         5                    50             0.0
    log file sync                  34          0         2                    46             0.0
    log file switch completion     2           0         0                    215            0.0
    db file single write           9           0         0                    33             0.0
    control file sequential read   4,912       0         0                    0              0.0
    wait list latch free           15          0         0                    12             0.0
    LGWR wait for redo copy        84          0         0                    1              0.0
    log file single write          2           0         0                    18             0.0
    async disk IO                  263         0         0                    0              0.0
    direct path read               2,058       0         0                    0              0.0
    slave TJ process wait          1           1         0                    12             0.0
    ===== 10g ==================
    Event                          Waits        %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
    db file scattered read         268,314      .0          2,776                10             0.0
    SQL*Net message to client      278,082,276  .0          813                  0              7.1
    io done                        20,715       .0          457                  22             0.0
    control file parallel write    10,971       .0          336                  31             0.0
    db file parallel write         15,904       .0          294                  18             0.0
    db file sequential read        66,266       .0          257                  4              0.0
    log file parallel write        3,510        .0          145                  41             0.0
    SQL*Net more data to client    2,221,521    .0          102                  0              0.1
    SGA: allocation forcing comp   2,489        99.9        27                   11             0.0
    log file sync                  564          .0          23                   41             0.0
    os thread startup              176          4.0         19                   106            0.0
    latch: shared pool             372          .0          11                   29             0.0
    latch: library cache           537          .0          5                    10             0.0
    rdbms ipc reply                57           .0          3                    49             0.0
    log file switch completion     5            40.0        3                    552            0.0
    latch free                     4,141        .0          2                    0              0.0
    I put full blame for the slowdown on the SQL*Net message to client wait event. All I could find about this event is that it indicates a network-related problem. I would accept that if the database and client were on different machines; however, in my case they are on the very same machine.
    I'd be very grateful if someone could point me in the right direction, i.e. give a hint as to which statistics I should analyze further, what might cause this event to appear, and why a probable cause that is said to be outside the database affects only the 10g instance.
    Thanks in advance,
    Rafi.

    Hi Steven,
    Thanks for the input. It's a fact that I did not gather statistics on my tables. My understanding is that statistics are useful for queries more complex than a simple select * from table_xxx. In my case the tables don't have indexes, and there's no filtering condition either. A full table scan is what I actually want, as all the logic is inside the Java code.
    Explain plans are as follows:
    ======= 10g ================================
    PLAN_TABLE_OUTPUT
    Plan hash value: 1141003974
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 259 | 2 (0)| 00:00:01 |
    | 1 | TABLE ACCESS FULL| xxx | 1 | 259 | 2 (0)| 00:00:01 |
    In sqlplus I get:
    SQL> set autotrace traceonly explain statistics;
    SQL> select * from xxx;
    36184384 rows selected.
    Elapsed: 00:38:44.35
    Execution Plan
    Plan hash value: 1141003974
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 259 | 2 (0)| 00:00:01 |
    | 1 | TABLE ACCESS FULL| xxx | 1 | 259 | 2 (0)| 00:00:01 |
    Statistics
    1 recursive calls
    0 db block gets
    3339240 consistent gets
    981517 physical reads
    116 redo size
    26535700 bytes received via SQL*Net from client
    2412294 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    36184384 rows processed
    ======= 9i =================================
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | | | |
    | 1 | TABLE ACCESS FULL | xxx | | | |
    Note: rule based optimization
    In sqlplus I get:
    SQL> set autotrace traceonly explain statistics;
    SQL> select * from xxx;
    36184384 rows selected.
    Elapsed: 00:17:43.06
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE
    1 0 TABLE ACCESS (FULL) OF 'xxx'
    Statistics
    0 recursive calls
    1 db block gets
    3306118 consistent gets
    957515 physical reads
    100 redo size
    23659424 bytes sent via SQL*Net to client
    26535867 bytes received via SQL*Net from client
    2412294 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    36184384 rows processed
    Thanks for pointing out the difference in table scans. I infer that 9i is doing single-block full table scans (db file sequential read) while 10g is using multi-block full table scans (db file scattered read).
    I now have a theory that 9i is faster because sequential reads use contiguous buffer space while scattered reads use discontiguous buffer space. Since I'm accessing data row by row in JDBC, 10g might have an overhead in serving data from discontiguous buffer space, and this overhead shows up as the SQL*Net message to client wait. Does that make any sense?
    Is there any way I could force 10g (e.g. with a hint) to use sequential reads instead of scattered reads for a full table scan?
    I'll experiment with FTS tuning in 10g by enabling automatic multi-block read tuning (i.e. db_file_multiblock_read_count=0 instead of 32 as it is now). I'll also check whether response time improves after statistics are gathered.
    Please advise if you have any other ideas.
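    For the statistics step, a minimal sketch (the owner and table names are placeholders for your actual schema):
    -- Gather optimizer statistics so the 10g cost-based optimizer
    -- sees real row counts instead of defaults:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'MY_SCHEMA',   -- placeholder owner
        tabname          => 'XXX',         -- placeholder table
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);         -- also gather index stats, if any
    END;
    /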
    Thanks & regards,
    Rafi.

  • Delete operation is not working to delete selected row from ADF table

    Hi All,
    We are working on JDev 11.1.1.5.3. We have one ADF table, as shown below. My requirement is to delete the selected row from the table, but it deletes the first row only.
    <af:table value="#{bindings.EventCalendarVO.collectionModel}" var="row"
    rows="#{bindings.EventCalendarVO.rangeSize}"
    emptyText="#{bindings.EventCalendarVO.viewable ? applcoreBundle.TABLE_EMPTY_TEXT_NO_ROWS_YET : applcoreBundle.TABLE_EMPTY_TEXT_ACCESS_DENIED}"
    fetchSize="#{bindings.EventCalendarVO.rangeSize}"
    rowBandingInterval="0"
    selectedRowKeys="#{bindings.EventCalendarVO.collectionModel.selectedRow}"
    selectionListener="#{bindings.EventCalendarVO.collectionModel.makeCurrent}"
    rowSelection="single" id="t2" partialTriggers="::ctb1 ::ctb3"
    >
    To perform the delete operation I have a delete button:
    <af:commandToolbarButton
    text="Delete"
    disabled="#{!bindings.Delete.enabled}"
    id="ctb3" accessKey="d"
    actionListener="#{AddNewEventBean.deleteCurrentRow}"/>
    As the normal delete operation is not working, I am using a programmatic approach from a bean method. This approach works with JDev 11.1.1.5.0 but fails on version 11.1.1.5.3.
    public void deleteCurrentRow(ActionEvent actionEvent) {
        DCBindingContainer bindings =
            (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        DCIteratorBinding dcIteratorBindings =
            bindings.findIteratorBinding("EventCalendarVOIterator");
        // Get the view object behind the table and whatever is selected in it
        ViewObject eventCalVO = dcIteratorBindings.getViewObject();
        // Remove the current row (which turns out to be the first row, not the selected one)
        eventCalVO.removeCurrentRow();
    }
    It still removes the first row from the table. The main problem is that the selected row is not being made the current row. Can anyone point out where the mistake is?
    We have also tried the code below in the deleteCurrentRow() method:
    RowKeySet rowKeySet = (RowKeySet) this.getT1().getSelectedRowKeys();
    CollectionModel cm = (CollectionModel) this.getT1().getValue();
    for (Object facesTreeRowKey : rowKeySet) {
        cm.setRowKey(facesTreeRowKey);
        JUCtrlHierNodeBinding rowData = (JUCtrlHierNodeBinding) cm.getRowData();
        rowData.getRow().remove();
    }
    Still the same behavior.
    Thanks in advance.
    Rechin

    JDev 11.1.1.5.3 sounds like you are using Oracle Apps, as this is not a normal JDev version.
    As it works in 11.1.1.5.0, you have probably hit a bug, which you should file with support.oracle.com.
    Somehow you get the first row instead of the current row (I guess). You should debug your code and make sure you get the currently selected row in your bean code, and not the first row.
    This might be a problem with the bean scope, too. Do you have the button (or table) inside a region? Which scope does the bean have?
    Anyway, you can try to remove the row directly through the iterator binding:
    public void deleteCurrentRow(ActionEvent actionEvent) {
        DCBindingContainer bindings =
            (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
        DCIteratorBinding dcIteratorBindings =
            bindings.findIteratorBinding("EventCalendarVOIterator");
        // Let the iterator binding itself remove its current row
        dcIteratorBindings.removeCurrentRow();
    }
    Timo
