Table Scan, Insert, Delete

I have one view (A_VW) and one table (B_TBL).
I want to be efficient about table scans and would prefer a single scan.
Sample data is below.
If Amount > 500 and a new record is added to A_VW, the same row should also be added to the table.
If a record is deleted from A_VW, we delete the corresponding record from B_TBL only if its STATUS is N.
If the expiry date is the same as sysdate, change the status from Y to N. There are around, say, a million records.
I am working out the pseudo code for this, and any help would be great.
The default status is Y, and the default expiry date can be assumed to be 30 days from the insert date.
A_VW
USERNAME   AMOUNT
ABC1       200
abc2       700

B_TBL
USERNAME   STATUS   EXPIRY_DATE
abc2       N        1/12/2004
sscs       N        1/31/2006
...        Y        9/21/2007
Thanks in advance
Tiger
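
Not the single-pass MERGE asked about below, but a minimal set-based sketch of the three rules, assuming the column names shown in the sample data (USERNAME, AMOUNT, STATUS, EXPIRY_DATE); each rule becomes one statement and no row-by-row looping is needed, even at a million rows:

-- 1) New view rows with AMOUNT > 500 that are not yet in B_TBL.
INSERT INTO b_tbl (username, status, expiry_date)
SELECT v.username, 'Y', TRUNC(SYSDATE) + 30
  FROM a_vw v
 WHERE v.amount > 500
   AND NOT EXISTS (SELECT 1 FROM b_tbl b WHERE b.username = v.username);

-- 2) Flip STATUS from Y to N for rows expiring today.
UPDATE b_tbl
   SET status = 'N'
 WHERE status = 'Y'
   AND expiry_date >= TRUNC(SYSDATE)
   AND expiry_date <  TRUNC(SYSDATE) + 1;

-- 3) Remove rows that no longer exist in the view, but only when STATUS = 'N'.
DELETE FROM b_tbl b
 WHERE b.status = 'N'
   AND NOT EXISTS (SELECT 1 FROM a_vw v WHERE v.username = b.username);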

Hi, I pretty much solved it, but I would like help with the syntax of MERGE for multiple source and destination tables in Oracle 9i.
URGENT: ORACLE 9i MERGE, MULTIPLE SOURCE AND DESTINATION TABLES SYNTAX?
I see people suggesting it for multiple tables but can't find an example, and I get errors.
Link to the new thread.
Thanks in advance
Tiger
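
For what it is worth: MERGE (in 9i as well) takes exactly one destination table, but the USING clause can be any query, so several source tables can be joined in an inline view; one MERGE per destination is still needed. A hedged sketch with made-up table and column names:

-- SRC1 and SRC2 are hypothetical source tables joined into a single source row set.
MERGE INTO dest d
USING (SELECT s1.id, s1.amount, s2.status
         FROM src1 s1, src2 s2
        WHERE s2.id = s1.id) s
ON (d.id = s.id)
WHEN MATCHED THEN
  UPDATE SET d.amount = s.amount, d.status = s.status
WHEN NOT MATCHED THEN
  INSERT (id, amount, status)
  VALUES (s.id, s.amount, s.status);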

Similar Messages

  • To know when data in a table is inserted or deleted

    I have to execute a report as soon as data in a specific table is inserted or deleted.
    Please guide me on how to do this.

    Hi,
    A value is inserted into table dbtablog as soon as someone logs in.
    I have to execute a report as soon as changes are recorded in dbtablog.
    Please guide me on how to do this.

  • Insert, delete or update an entry in a custom table when a KONV entry changes

    Hi All,
    I have a custom table ZKONV with only a few required columns, and it should have the same number of records as KONV at any point in time.
    KONV is a cluster table, so it is not readable at the ORACLE level; that is why ZKONV was created. But I don't know how to keep these two tables in sync.
    I need to insert, delete or update an entry in the custom table whenever an insert, delete or update happens on the cluster table KONV from any transaction.
    As KONV is a cluster table and does not have a change timestamp, I am not able to tell how many records changed in a particular time period.
    Thanks,

    Thanks for the reply.
    There is a system outside SAP which needs to read KONV data to feed into their system, but as KONV is a cluster table they are not able to read it at the ORACLE level.
    To solve this we are thinking of creating a transparent Z-table, filling it from KONV, and catching every update, delete or insert so we can do the same on ZKONV.
    Is this possible somehow, by some database event or something?

  • How do I insert/delete/update a row in the DB table from the Business Component Browser?

    I used the wizard to create a project containing a Business Component that contains some tables.
    When I run the project I see the "Oracle Business Component Browser (local)", and when I select a table from "View Object Member" I get a window displaying all the fields of that table and can browse all the info.
    My problem is that when I try to insert a new record, delete an existing record or update a record, the change never gets reflected in the database.
    When I insert a new row I save, and a dialog box is displayed saying "Transaction ID is 1", but in the end the row is not in the database.
    Can someone guide me on how to do insert/delete/update operations from the Oracle Business Component Browser so that the changes reach the original database?
    Thanks in advance
    Jitendra

    Jitendra,
    This may be a problem of caching. If you do an update, insert, or delete and do not receive an error, then the transaction should indeed be posted.
    I assume you are hitting the Save icon after your changes if you are getting a transaction ID. Are you checking for the updates through another session (i.e. SQL*Plus), or do you then requery the View Object in the tester? Do you exit the tester and come back in and not see the changes?

  • Insert, Delete and Update options in Table Control

    Experts,
    I have written code for the Insert, Delete and Update options in a table control, but it is not working properly...
    Can anyone send the code for the above, please?
    Thanks in advance..

    Hi,
    The following steps will help you.
    1. TOP include:
    DATA: ITAB1 LIKE KNA1 OCCURS 0 WITH HEADER LINE.
    DATA: ITAB2 LIKE KNA1 OCCURS 0 WITH HEADER LINE.
    DATA: WA LIKE KNA1.
    DATA: CNT TYPE I, CUR TYPE I.
    DATA: OK_CODE TYPE SY-UCOMM.
    CONTROLS: TABCTRL TYPE TABLEVIEW USING SCREEN 100.
    2. Flow logic:
    PROCESS BEFORE OUTPUT.
      LOOP AT ITAB1 CURSOR CUR WITH CONTROL TABCTRL.
      ENDLOOP.
    PROCESS AFTER INPUT.
      MODULE CLEAR_DATA.
      LOOP AT ITAB1.
        MODULE MOVE_DATA.
      ENDLOOP.
    3. Add OK_CODE to the element list. In the layout, add a table control (name it TABCTRL) and pushbuttons as follows, select the fields from the program, then save, check and activate.
    4. Click the Flow Logic editor on the application toolbar and double-click module CLEAR_DATA. Write the code in this module as below:
    CLEAR ITAB2. REFRESH ITAB2.
    5. Double-click module MOVE_DATA and write the code in this module as below:
    APPEND ITAB1 TO ITAB2.
    6. Activate the PAI module and write the code as below:
    CASE OK_CODE.
      WHEN 'FETCH'.
        SELECT * FROM KNA1 INTO TABLE ITAB1 UP TO 20 ROWS.
        TABCTRL-LINES = SY-DBCNT.
      WHEN 'ADD'.
        GET CURSOR LINE CNT.
        CNT = TABCTRL-TOP_LINE + CNT - 1.
        CLEAR WA.
        INSERT WA INTO ITAB1 INDEX CNT.
      WHEN 'MODIFY'.
        GET CURSOR LINE CNT.
        READ TABLE ITAB2 INDEX CNT.
        LOOP AT ITAB2.
          MODIFY KNA1 FROM ITAB2.
        ENDLOOP.
        SELECT * FROM KNA1 INTO TABLE ITAB1.
      WHEN 'EXIT'.
        LEAVE PROGRAM.
    ENDCASE.
    7. Save, check and activate everything, then create a transaction code and execute.
    Contact me if you have any issues with this code.
    Reward points if it is useful.
    Thanks,
    Chandu.

  • Oracle auditing for insert/delete on a table

    Dear Oracle gurus,
    I have a master table whose records are added manually using the INSERT command, not by any front-end tool.
    I have no auditing for this table, i.e. nothing that records when an insert/delete is made against it.
    Does Oracle provide any other tables/views, like v$session, to find out when such events happen?
    Kindly guide me.
    With warm regards,
    ssr

    Probably not.
    If your database is in ARCHIVELOG mode and you have the archived logs from the point in time that the DML happened, you could potentially use LogMiner to read the redo logs and get information about when the DML happened and who was responsible. That tends, however, to be a relatively painful manual process that is frequently complicated by the fact that most shops get rid of archived logs as soon as they are no longer necessary for database recovery which is frequently a matter of days or weeks.
    Justin
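
    Going forward, standard Oracle auditing (or a simple DML trigger) can record such events. A hedged sketch, assuming AUDIT_TRAIL=DB is set for the instance and using a made-up table name:
    -- Audit future inserts and deletes on the master table.
    AUDIT INSERT, DELETE ON scott.master_tbl BY ACCESS;
    -- Review the recorded statements later.
    SELECT username, action_name, timestamp
      FROM dba_audit_trail
     WHERE obj_name = 'MASTER_TBL';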

  • Deleted all records in a table, but the insert speed does not change

    I have an empty table, and inserting a record takes 100 ms.
    When the table has 400,000 records, inserting a record takes 1 s. That is OK, because I have to do a comparison based on an index before inserting, so more records means more time.
    The problem is that after I delete all records in this table, the insert time is still 1 s and does not drop back to 100 ms. Why?

    Hello,
    Read through this portion of the Oracle documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/logical.htm#CNCPT004
    The insert still takes 1 s because when you inserted the 400K records the high water mark (HWM, the boundary between used and unused space in a segment) moved up. When you deleted all the records the HWM stayed where it was; a DELETE does not reset it to 0. So when you insert one record Oracle still searches the space below the HWM for a free block before inserting the data. If you truncate the table and try again it will be faster, because TRUNCATE resets the HWM to 0.
    Regards
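
    To illustrate the difference (hedged sketch, table name made up):
    -- DELETE removes the rows but leaves the segment's high water mark in place.
    DELETE FROM big_tab;
    COMMIT;
    -- TRUNCATE deallocates the blocks and resets the high water mark,
    -- so subsequent inserts and scans behave as on a freshly created table.
    TRUNCATE TABLE big_tab;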

  • Insert data from a source table into a destination table depending on a criterion; once inserted, delete from the source table

    Hi,
    I have a source table with millions of records. I need to insert some of the data (depending on a condition) into a repository table.
    Once the rows are inserted they can be deleted from the source table.
    The deletion is taking a lot of time.
    I need to reduce the time taken to delete the records,
    e.g. 1 million records in 8 seconds.
    I have already tried bulk collect and cursors but could not succeed.
    Please suggest how to increase the performance.
    Thanks & Regards

    APPROACH 1:
    CREATE OR REPLACE PROCEDURE SP_BC
    AS
      DETAILS_REC SOURCETBL%ROWTYPE;
      COUNTER     NUMBER := 1;
      START_TIME  PLS_INTEGER;
      CURSOR C1 IS
        SELECT * FROM SOURCETBL WHERE DOJ < SYSDATE;
    BEGIN
      START_TIME := DBMS_UTILITY.GET_TIME;
      DBMS_OUTPUT.PUT_LINE(START_TIME / 100);
      OPEN C1;
      LOOP
        FETCH C1 INTO DETAILS_REC;
        EXIT WHEN C1%NOTFOUND;
        EXIT WHEN COUNTER > 10000;
        INSERT INTO DESTINATIONTBL VALUES DETAILS_REC;
        IF SQL%FOUND THEN
          DELETE FROM SOURCETBL WHERE ID = DETAILS_REC.ID;
          COUNTER := COUNTER + 1;
        END IF;
        COMMIT;
      END LOOP;
      CLOSE C1;
      COMMIT;
    END;
    APPROACH 2:
    CREATE OR REPLACE PROCEDURE SP_BC1 (P_NAME IN SOURCETBL.NAME%TYPE)
    IS
      TYPE T_DET IS TABLE OF SOURCETBL%ROWTYPE;
      T_REC T_DET;
    BEGIN
      SELECT * BULK COLLECT INTO T_REC
        FROM SOURCETBL
       WHERE NAME = P_NAME;
      FOR I IN 1 .. T_REC.COUNT LOOP
        INSERT INTO DESTINATIONTBL VALUES T_REC(I);
        IF SQL%FOUND THEN
          DELETE FROM SOURCETBL WHERE ID = T_REC(I).ID;
        END IF;
      END LOOP;
      COMMIT;
    END;
    APPROACH 3:
    CREATE OR REPLACE PROCEDURE SP_BC2
    AS
      TYPE REC_TYPE IS TABLE OF SOURCETBL%ROWTYPE;
      DETAILS_ROW REC_TYPE;
      CURSOR C1 IS
        SELECT * FROM SOURCETBL WHERE END_DATE < SYSDATE;
    BEGIN
      OPEN C1;
      LOOP
        /* A batch of 999 records is considered per round of data movement. */
        FETCH C1 BULK COLLECT INTO DETAILS_ROW LIMIT 999;
        FORALL I IN 1 .. DETAILS_ROW.COUNT
          INSERT INTO DESTINATIONTBL VALUES DETAILS_ROW(I);
        -- The matching bulk DELETE is the open question: FORALL here cannot reference
        -- an individual record field such as DETAILS_ROW(I).ID, so it is left out.
        EXIT WHEN C1%NOTFOUND;
        COMMIT;
      END LOOP;
      CLOSE C1;
      COMMIT;
    END;
    The 3rd approach seems better, but I have an issue with referring to the fields of a record type inside FORALL (see the sketch below).
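
    One way around that record-field restriction, offered as a hedged sketch with the same placeholder names: bulk-collect the key column into its own scalar collection alongside the rows, then drive both the FORALL insert and the FORALL delete from each batch. (If every row that is inserted is also deleted, a single INSERT ... SELECT followed by one DELETE with the same predicate is usually simpler and faster still.)
    CREATE OR REPLACE PROCEDURE SP_BC3
    AS
      TYPE REC_TAB IS TABLE OF SOURCETBL%ROWTYPE;
      TYPE ID_TAB  IS TABLE OF SOURCETBL.ID%TYPE;
      L_ROWS REC_TAB;
      L_IDS  ID_TAB;
      CURSOR C1 IS
        SELECT * FROM SOURCETBL WHERE DOJ < SYSDATE;
    BEGIN
      OPEN C1;
      LOOP
        FETCH C1 BULK COLLECT INTO L_ROWS LIMIT 999;
        EXIT WHEN L_ROWS.COUNT = 0;
        -- Copy the keys into a scalar collection so FORALL can bind them directly.
        L_IDS := ID_TAB();
        L_IDS.EXTEND(L_ROWS.COUNT);
        FOR I IN 1 .. L_ROWS.COUNT LOOP
          L_IDS(I) := L_ROWS(I).ID;
        END LOOP;
        FORALL I IN 1 .. L_ROWS.COUNT
          INSERT INTO DESTINATIONTBL VALUES L_ROWS(I);
        FORALL I IN 1 .. L_IDS.COUNT
          DELETE FROM SOURCETBL WHERE ID = L_IDS(I);
        COMMIT;
      END LOOP;
      CLOSE C1;
    END;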

  • Tuning an insert sql that inserts a million rows doing a full table scan

    Hi Experts,
    I am on Oracle 11.2.0.3 on Linux. I have a SQL statement that inserts data into a history/archive table from a main application table based on a date. The application table has 3 million rows, and all rows older than 6 months should go into the history/archive table. This was decided recently, and 1 million rows satisfy the criterion. The insert into the archive table takes about 3 minutes. The explain plan shows a full table scan on the main table, which is the right thing, as we are pulling 1 million rows out of the main table into the history table.
    My question is that, is there a way I can make this sql go faster?
    Here is the query plan (I changed the table names etc.)
       INSERT INTO EMP_ARCH
       SELECT *
    FROM EMP M
    where HIRE_date < (sysdate - :v_num_days);
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2     96.22     165.59      92266     147180    8529323     1441230
    Fetch        0      0.00       0.00          0          0          0           0
    total        4     96.22     165.59      92266     147180    8529323     1441230
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: FIRST_ROWS
    Parsing user id: 166
    Rows     Row Source Operation
    1441401   TABLE ACCESS FULL EMP (cr=52900 pr=52885 pw=0 time=21189581 us)
    I heard there is a way to use the opt_param hint to increase the multiblock read count, but it didn't seem to work for me. I would be thankful for suggestions on this. Also, could collections, or changing this to PL/SQL, make it faster?
    Thanks,
    OrauserN

    Also I wish experts would share their insight on how to make a full table scan go faster (apart from the parallel suggestion, I mean).
    Please make up your mind about what question you actually want help with.
    First you said you want help making the INSERT query go faster but the rest of your replies, including the above statement, imply that you are obsessed with making full table scans go faster.
    You also said:
    our management would like us to come forth with the best way to do it
    But when people make suggestions you make replies about you and your abilities:
    I do not have the liberty to do the "alter session enable parallel dml". I have to work within these constraints.
    Does 'management' want the best way to do whichever question you are asking?
    Or is it just YOU that wants the best way (whatever you mean by best) based on some unknown set of constraints?
    As SB already said, you clearly do NOT have an actual problem, since you have already completed the task of inserting the data, several times in fact. So the time it takes to do it is irrelevant.
    There is no universal agreement on what the word 'best' means for any given use case and you haven't given us your definition either. So how would we know what might be 'best'?
    So until you provide the COMPLETE list of constraints you are just wasting our time asking for suggestions that you refute with a comment about some 'constraint' you have.
    You also haven't provided ANY information that indicates that it is the full table scan that is the root of the problem. It is far more likely to be the INSERT into the table and a simple use of NOLOGGING with an APPEND hint might be all that is needed.
    IMHO the 'best' way would be to partition both the source and target tables and just use EXCHANGE PARTITION to move the data. That operation would only take a millisecond or two.
    But, let me guess, that would not conform to one of your 'unknown' constraints would it?
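
    The NOLOGGING/APPEND suggestion above, as a hedged sketch (table names follow the example in the question; whether NOLOGGING is acceptable depends on the backup strategy, since direct-path loaded data is not protected by redo until the next backup):
    -- Direct-path insert: writes above the high water mark and, with the target
    -- set to NOLOGGING, generates minimal redo.
    ALTER TABLE emp_arch NOLOGGING;
    INSERT /*+ APPEND */ INTO emp_arch
    SELECT *
      FROM emp m
     WHERE hire_date < (SYSDATE - :v_num_days);
    COMMIT;  -- the loaded segment cannot be queried in this session until commit
    ALTER TABLE emp_arch LOGGING;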

  • Bitmap index column goes for full table scan

    Hi all,
    Database : 10g R2
    OS : Windows xp
    my select query is :
    SELECT tran_id, city_id, valid_records
    FROM transaction_details
    WHERE type_id=101;
    And the Explain Plan is :
    Plan
    SELECT STATEMENT ALL_ROWS  Cost: 29  Bytes: 8,876  Cardinality: 634
      1 TABLE ACCESS FULL TABLE TRANSACTION_DETAILS  Cost: 29  Bytes: 8,876  Cardinality: 634
    The total number of rows in the table is 1800;
    the distinct values of type_id are 101, 102 and 103,
    so I created a bitmap index on it.
    CREATE BITMAP INDEX btmp_typeid ON transaction_details
    (type_id)
    LOGGING
    NOPARALLEL;
    After creating the index, the explain plan stays the same. Why does it still go for a full table scan?
    Kindly share your ideas on this.

    >
    I am sorry for being ignorant, can you please cite any scenario of locking due to bitmap indices? A link can be useful as well.
    >
    See my full reply in this thread
    Bitmap index for FKs on Fact tables
    >
    ETL is affected because DML operations (INSERT/UPDATE/DELETE) on tables with bitmap indexes can have serious performance issues due to the serialization involved. Updating a single bitmap-indexed column value (e.g. from 'M' to 'F' for gender) requires both bitmap index entries to be locked until the update is complete. A bitmap index stores ROWID ranges (min rowid - max rowid) that can span many, many records. The entire 'range' of rowids is locked in order to change just one value.
    To change from 'M', the 'M' rowid range covering that one row is locked and the ROWID must be removed from the range by clearing the bit. To change to 'F', the 'F' rowid range needs to be found and locked and the bit corresponding to that rowid set. No other rows with rowids in the range can be changed, since this is a serial operation. If the range includes 1000 rows and they all need to be changed, it takes 1000 serial operations.
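
    A small illustration of that serialization, as a hedged sketch on a made-up table: two sessions updating different rows of the same bitmap-indexed column can block each other, which would not happen with a B-tree index.
    CREATE TABLE people (id NUMBER PRIMARY KEY, gender VARCHAR2(1));
    CREATE BITMAP INDEX people_gender_bix ON people (gender);
    -- Session 1 (transaction left open):
    UPDATE people SET gender = 'F' WHERE id = 1;
    -- Session 2, a different row but the same bitmap key values:
    UPDATE people SET gender = 'F' WHERE id = 2;
    -- Session 2 can end up waiting on the bitmap index entry lock
    -- until session 1 commits or rolls back.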

  • Query occasionally causes table scans (db file sequential read)

    Dear all,
    we periodically issue a query on a huge table via an oracle job.
    Whenever I invoke the query manually, the response time is good. When I start the periodic job, initially the response times are good as well. After some days, however, the query suddenly takes almost forever.
    My vague guess is that for some reason the query suddenly changes the execution plan from using the primary key index to a full table scan (or huge index range scan). Maybe because of some problems with the primary key index (fragmentation? Other?).
    - Could it be the case that the execution plan for a query changes (automatically) like this? For what reasons?
    - Do you have any hints where to look for further information for analysis? (logs, special event types, ...)?
    - Of course, the query was designed having involved indexes in mind. Also, I studied the execution plan and did not find hints for problematic table/range scans.
    - It is not a lock contention problem
    - When the query "takes forever", there is a "db file sequential read" event in v$session_event for the query with an ever increasing wait time. That's why I guess a (unreasonable) table scan is happening.
    Some characteristics of the table in question:
    - ~30 million rows
    - There are only insertions to the table, as well as updates on a single, indexed field. No deletes.
    - There is an integer primary key field with a B-tree index.
    Characteristics of the query:
    The main structure of the query is very simple and as follows: I select a range of about 100 rows via primary key "id", like:
    Select * from TheTable where id>11222300 and id <= 11222400
    There are several joins with rather small tables, which make the overall query more complicated.
    However, the few (100) rows of the huge table in question should always be fetched using the primary key index, shouldn't it?
    Please let me know if some relevant information about the problem is missing.
    Thanks!
    Best regards,
    Nang.

    user2560749 wrote:
    Dear all,
    we periodically issue a query on a huge table via an oracle job.
    Whenever I invoke the query manually, the response time is good. When I start the periodic job, initially the response times are good as well. After some days, however, the query suddenly takes almost forever.
    My vague guess is that for some reason the query suddenly changes the execution plan from using the primary key index to a full table scan (or huge index range scan). Maybe because of some problems with the primary key index (fragmentation? Other?).
    - Could it be the case that the execution plan for a query changes (automatically) like this? For what reasons?
    Yes, that is possible. One reason is that the table's statistics have changed, i.e. somebody ran dbms_stats. If you are worried about the execution plan changing, you have two options: 1) lock the stats, or 2) use a hint in the query.
    - Do you have any hints where to look for further information for analysis? (logs, special event types, ...)?
    Enable an ORA-10053 trace whenever the query plan changes and analyse it.
    - Of course, the query was designed having involved indexes in mind. Also, I studied the execution plan and did not find hints for problematic table/range scans.
    - It is not a lock contention problem
    - When the query "takes forever", there is a "db file sequential read" event in v$session_event for the query with an ever increasing wait time. That's why I guess a (unreasonable) table scan is happening.
    If it is db file sequential read then I see two possibilities: 1) it is doing an index range scan (not a full table scan), or 2) it is scanning the undo tablespace.
    Some charachteristics of the table in question:
    - ~ 30 Mio rows
    - There are only insertions to the table, as well as updates on a single, indexed field. No deletes.
    - There is an integer primary key field with an B-tree index.
    Charachteristics of the query:
    The main structure of the query is very simple and as follows: I select a range of about 100 rows via primary key "id", like:
    Select * from TheTable where id>11222300 and id <= 11222400
    There are several joins with rather small tables, which make the overall query more complicated.
    However, the few (100) rows of the huge table in question should always be fetched using the primary key index, shouldn't it?
    Yes, theoretically it should; in practice we can only tell by looking at the run-time execution plan (through a 10053 or 10046 trace).
    Please let me know if some relevant information about the problem is missing.
    Thanks!
    Best regards,
    Nang.
    I am still not sure in which direction you are looking for a solution.
    Is your query performing badly once in a fortnight, and the next day it is all the same again?
    I suggest you
    1) Check whether the query is scanning the undo tablespace. You mentioned there are a lot of inserts; Oracle could be scanning undo because of delayed block cleanout.
    2) Check whether the number of records is higher on that particular day compared to other days.
    Or, once it starts performing badly, is there no change in response time for the next couple of days? Then
    1) Check whether the explain plan has changed.
    And what action do you take to bring the response time back to normal?
    Regards
    Anurag
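
    The two suggestions above (lock the statistics, trace the optimizer when the plan flips), as a hedged sketch; schema and table names are placeholders:
    -- Freeze the statistics of the volatile table so gathering jobs
    -- cannot silently change the plan.
    BEGIN
      DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP', tabname => 'THETABLE');
    END;
    /
    -- When the bad plan shows up, capture an optimizer trace for one execution.
    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
    -- ... run the slow query here ...
    ALTER SESSION SET EVENTS '10053 trace name context off';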

  • Associative Array vs Table Scan

    Still new to PL/SQL, but very keen to learn. I wondered if somebody could advise me whether I should use a collection (such as an associative array) instead of repeating a table scan within a loop for the example below. I need to read from an input table of experiment data and, if the EXPERIMENT_ID does not already exist in my EXPERIMENTS table, add it. Here is the code I have so far. My instinct is that my code is inefficient. Would it be more efficient to scan the EXPERIMENTS table only once, store the list of IDs in a collection, and then scan the collection within the loop?
    -- Create any new Experiment IDs if needed
    open CurExperiments;
    loop
      -- Fetch the explicit cursor
      fetch CurExperiments into vExpId, dExpDate;
      exit when CurExperiments%notfound;
      -- Check whether the experiment already exists
      select count(id)
        into iCheckExpExists
        from experiments
       where id = vExpId;
      if iCheckExpExists = 0 then
        -- Experiment ID is not already in the table, so add a row
        insert into experiments (id, experiment_date)
        values (vExpId, dExpDate);
      end if;
    end loop;
    close CurExperiments;

    Except that rownum is assigned after the result set
    is computed, so the whole table will have to be
    scanned.
    Really?
    SQL> explain plan for select * from i;
    Explained.
    SQL> select * from table( dbms_xplan.display );
    PLAN_TABLE_OUTPUT
    Plan hash value: 1766854993
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |   910K|  4443K|   630   (3)| 00:00:08 |
    |   1 |  TABLE ACCESS FULL| I    |   910K|  4443K|   630   (3)| 00:00:08 |
    8 rows selected.
    SQL> explain plan for select * from i where rownum=1;
    Explained.
    SQL> select * from table( dbms_xplan.display );
    PLAN_TABLE_OUTPUT
    Plan hash value: 2766403234
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |     5 |     2   (0)| 00:00:01 |
    |*  1 |  COUNT STOPKEY     |      |       |       |            |          |
    |   2 |   TABLE ACCESS FULL| I    |     1 |     5 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM=1)
    14 rows selected.

  • Inserting/deleting a line item in the MIGO transaction (Goods Issue)?

    Hi,
    Can anyone help me with the logic for inserting/deleting a line item in the MIGO transaction?
    Thanks,
    cs

    Hi,
    The following user exits and BAdIs exist for MIGO.
    Check the BAdI MB_MIGO_BADI, and check its method 'LINE_MODIFY' for your purpose.
    For understanding, see the documentation of the BAdI and the example implementation
    class CL_EXM_IM_MB_MIGO_BADI.
                                                                                    Enhancement/ Business Add-in            Description                                                                               
    Enhancement                                                                               
    MB_CF001                                Customer Function Exit in the Case of Updating a Art. Doc.      
    MBCF0011                                Read from RESB and RKPF for print list in  MB26                 
    MBCF0010                                Customer exit: Create reservation BAPI_RESERVATION_CREATE1      
    MBCF0009                                Filling the storage location field                              
    MBCF0007                                Customer function exit: Updating a reservation                  
    MBCF0006                                Customer function for WBS element                               
    MBCF0005                                Article document item for goods receipt/issue slip              
    MBCF0002                                Customer function exit: Segment text in article doc. item                                                                               
    Business Add-in                                                                               
    MB_RESERVATION_BADI                     MB21/MB22: Check and Complete Dialog Data                       
    MB_QUAN_CHECK_BADI                      BAdI: Item Data at Time of Quantity Check                       
    MB_PHYSINV_INTERNAL                     Connection: Core Inventory and Retail AddOn                     
    MB_MIGO_ITEM_BADI                       BAdI in MIGO for Changing Item Data                             
    MB_MIGO_BADI                            BAdI in MIGO for External Detail Subscreens                     
    MB_DOC_BADI_INTERNAL                    BAdIs when Creating an Article Document (SAP Internal)          
    MB_DOCUMENT_UPDATE                      BADI when updating article document: MSEG and MKPF              
    MB_DOCUMENT_BADI                        BAdIs when Creating an Article Document                         
    MB_CIN_MM07MFB7_QTY                     Proposal of quantity from Excise invoice in GR                  
    MB_CIN_MM07MFB7                         BAdI for India Version exit in include MM07MFB7                 
    MB_CIN_LMBMBU04                         posting of gr                                                   
    MB_CHECK_LINE_BADI                      BAdI: Check Line Before Copying to the Blocking Tables          
    ARC_MM_MATBEL_WRITE                     Check Add-On-Specific Data for MM_MATBEL                        
    ARC_MM_MATBEL_CHECK                     Check Add-On-Specific Criteria for MM_MATBEL    
    If it is helpful, reward points.
    Regards
    Pratap.M

  • Tables in subquery resulting in full table scans

    Hi,
    This is related to a P1 bug, 13009447. The customer recently upgraded to 10g and has reported this type of problem for the second time.
    Problem Description:
    All the tables in the sub-queries are resulting in full table scans, and hence the query runs for hours.
    Here is the query
    SELECT /*+ PARALLEL*/
    act.assignment_action_id
    , act.assignment_id
    , act.tax_unit_id
    , as1.person_id
    , as1.effective_start_date
    , as1.primary_flag
    FROM pay_payroll_actions pa1
    , pay_population_ranges pop
    , per_periods_of_service pos
    , per_all_assignments_f as1
    , pay_assignment_actions act
    , pay_payroll_actions pa2
    , pay_action_classifications pcl
    , per_all_assignments_f as2
    WHERE pa1.payroll_action_id = :b2
    AND pa2.payroll_id = pa1.payroll_id
    AND pa2.effective_date
    BETWEEN pa1.start_date
    AND pa1.effective_date
    AND act.payroll_action_id = pa2.payroll_action_id
    AND act.action_status IN ('C', 'S')
    AND pcl.classification_name = :b3
    AND pa2.consolidation_set_id = pa1.consolidation_set_id
    AND pa2.action_type = pcl.action_type
    AND nvl (pa2.future_process_mode, 'Y') = 'Y'
    AND as1.assignment_id = act.assignment_id
    AND pa1.effective_date
    BETWEEN as1.effective_start_date
    AND as1.effective_end_date
    AND as2.assignment_id = act.assignment_id
    AND pa2.effective_date
    BETWEEN as2.effective_start_date
    AND as2.effective_end_date
    AND as2.payroll_id = as1.payroll_id
    AND pos.period_of_service_id = as1.period_of_service_id
    AND pop.payroll_action_id = :b2
    AND pop.chunk_number = :b1
    AND pos.person_id = pop.person_id
    AND (
    as1.payroll_id = pa1.payroll_id
    OR pa1.payroll_id IS NULL
    AND NOT EXISTS
    SELECT /*+ PARALLEL*/ NULL
    FROM pay_assignment_actions ac2
    , pay_payroll_actions pa3
    , pay_action_interlocks int
    WHERE int.locked_action_id = act.assignment_action_id
    AND ac2.assignment_action_id = int.locking_action_id
    AND pa3.payroll_action_id = ac2.payroll_action_id
    AND pa3.action_type IN ('P', 'U')
    AND NOT EXISTS
    SELECT /*+ PARALLEL*/
    NULL
    FROM per_all_assignments_f as3
    , pay_assignment_actions ac3
    WHERE :b4 = 'N'
    AND ac3.payroll_action_id = pa2.payroll_action_id
    AND ac3.action_status NOT IN ('C', 'S')
    AND as3.assignment_id = ac3.assignment_id
    AND pa2.effective_date
    BETWEEN as3.effective_start_date
    AND as3.effective_end_date
    AND as3.person_id = as2.person_id
    ORDER BY as1.person_id
    , as1.primary_flag DESC
    , as1.effective_start_date
    , act.assignment_id
    FOR UPDATE OF as1.assignment_id
    , pos.period_of_service_id
    Here is the execution plan for this query. We tried adding hints in the sub-queries to force the indexes to be picked up, but it is still doing full table scans.
    We suspect some DB parameter is causing this issue.
    In the plan there are:
    - Full table scans on the tables in the first sub-query:
      PAY_PAYROLL_ACTIONS, PAY_ASSIGNMENT_ACTIONS, PAY_ACTION_INTERLOCKS
    - Full table scans on the tables in the second sub-query:
      PER_ALL_ASSIGNMENTS_F, PAY_ASSIGNMENT_ACTIONS
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 29 398.80 2192.99 238706 4991924 2383 0
    Fetch 1136 378.38 1921.39 0 4820511 0 1108
    total 1166 777.19 4114.38 238706 9812435 2383 1108
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 41 (APPS) (recursive depth: 1)
    Rows Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 FOR UPDATE
    0 PX COORDINATOR
    0 PX SEND (QC (ORDER)) OF ':TQ10009' [:Q1009]
    0 SORT (ORDER BY) [:Q1009]
    0 PX RECEIVE [:Q1009]
    0 PX SEND (RANGE) OF ':TQ10008' [:Q1008]
    0 HASH JOIN (ANTI BUFFERED) [:Q1008]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10006' [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE) [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 NESTED LOOPS [:Q1006]
    0 HASH JOIN (ANTI) [:Q1006]
    0 BUFFER (SORT) [:Q1006]
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10002'
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE)
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 NESTED LOOPS
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_POPULATION_RANGES_N4' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_PERIODS_OF_SERVICE' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_PERIODS_OF_SERVICE_N3' (INDEX)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_N4' (INDEX)
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PAY_ASSIGNMENT_ACTIONS_N51' (INDEX)
    0 PX RECEIVE [:Q1006]
    0 PX SEND (HASH) OF ':TQ10005' [:Q1005]
    0 VIEW OF 'VW_SQ_1' (VIEW) [:Q1005]
    0 HASH JOIN [:Q1005]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (BROADCAST) OF ':TQ10000'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_PAYROLL_ACTIONS' (TABLE)
    0 HASH JOIN [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10004' [:Q1004]
    0 PX BLOCK (ITERATOR) [:Q1004]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1004]
    0 BUFFER (SORT) [:Q1005]
    0 PX RECEIVE [:Q1005]
    0 PX SEND (HASH) OF ':TQ10001'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ACTION_INTERLOCKS' (TABLE)
    0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'PAY_PAYROLL_ACTIONS' (TABLE) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_PAYROLL_ACTIONS_PK' (INDEX (UNIQUE)) [:Q1006]
    0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PAY_ACTION_CLASSIFICATIONS_PK' (INDEX (UNIQUE))[:Q1006]
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PER_ASSIGNMENTS_F_PK' (INDEX (UNIQUE)) [:Q1006]
    0 PX RECEIVE [:Q1008]
    0 PX SEND (HASH) OF ':TQ10007' [:Q1007]
    0 VIEW OF 'VW_SQ_2' (VIEW) [:Q1007]
    0 FILTER [:Q1007]
    0 HASH JOIN [:Q1007]
    0 BUFFER (SORT) [:Q1007]
    0 PX RECEIVE [:Q1007]
    0 PX SEND (BROADCAST) OF ':TQ10003'
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PER_ALL_ASSIGNMENTS_F' (TABLE)
    0 PX BLOCK (ITERATOR) [:Q1007]
    0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'PAY_ASSIGNMENT_ACTIONS' (TABLE) [:Q1007]
    Elapsed times include waiting on following events:
    Event waited on Times Max. Wait Total Waited
    ---------------------------------------- Waited ---------- ------------
    enq: KO - fast object checkpoint 32 0.02 0.12
    os thread startup 8 0.02 0.19
    PX Deq: Join ACK 198 0.00 0.04
    PX Deq Credit: send blkd 167116 1.95 1103.72
    PX Deq Credit: need buffer 327389 1.95 266.30
    PX Deq: Parse Reply 148 0.01 0.03
    PX Deq: Execute Reply 11531 1.95 1901.50
    PX qref latch 23060 0.00 0.60
    db file sequential read 108199 0.17 22.11
    db file scattered read 9272 0.19 51.74
    PX Deq: Table Q qref 78 0.00 0.03
    PX Deq: Signal ACK 1165 0.10 10.84
    enq: PS - contention 73 0.00 0.00
    reliable message 27 0.00 0.00
    latch free 218 0.00 0.01
    latch: session allocation 11 0.00 0.00
    Thanks in advance
    Suresh PV

    Hi,
    welcome,
    How does the query perform if you remove all the PARALLEL hints? Most of the waits are related to parallel execution.
    Herald ten Dam
    http://htendam.wordpress.com
    PS: use "{code}" tags for showing your code and explain plans; it looks nicer.

  • Trunc causing Full Table Scans

    I have a situation here where my query is as follows.
    SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate);
    COUNT(1)
    6
    PLAN_TABLE_OUTPUT
    Plan hash value: 3951750498
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 10 | 13904 (1)| 00:02:47 | | |
    | 1 | SORT AGGREGATE | | 1 | 10 | | | | |
    | 2 | PARTITION LIST SINGLE| | 1 | 10 | 13904 (1)| 00:02:47 | 12 | 12 |
    |* 3 | TABLE ACCESS FULL | HBSM_SM_ACCOUNT_INFO | 1 | 10 | 13904 (1)| 00:02:47 | 12 | 12 |
    Predicate Information (identified by operation id):
    3 - filter(("CUST_STATUS"='UP' OR "CUST_STATUS"='UUP') AND
    TO_DATE(INTERNAL_FUNCTION("FIRST_ACTVN_DATE"))=TO_DATE(TO_CHAR(SYSDATE@!)))
    16 rows selected.
    If I remove the trunc from the query the performance definitely improves, but the results are wrong.
    SQL> select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and FIRST_ACTVN_DATE = trunc(sysdate);
    COUNT(1)
    0
    PLAN_TABLE_OUTPUT
    Plan hash value: 454529511
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
    | 0 | SELECT STATEMENT | | 1 | 40 | 47 (0)| 00:00:01 | | |
    |* 1 | TABLE ACCESS BY GLOBAL INDEX ROWID| HBSM_SM_ACCOUNT_INFO | 1 | 40 | 47 (0)| 00:00:01 | 12 | 12 |
    |* 2 | INDEX RANGE SCAN | IND_FIRST_ACTVN_DATE | 51 | | 4 (0)| 00:00:01 | | |
    Can someone please help me so that I can get the right data and also prevent these full table scans?

    Unless you are using a function-based index, applying any function to an indexed column prevents the use of the index.
    The way round it in your case is to realise that
    select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP') and trunc(FIRST_ACTVN_DATE) = trunc(sysdate)
    is really asking that FIRST_ACTVN_DATE should be some time today. You could therefore rewrite it as
    select count(1) from HBSM_SM_ACCOUNT_INFO where OPTIONAL_PARM5='MH' and CUST_STATUS in ('UP','UUP')
    and FIRST_ACTVN_DATE >= trunc(sysdate)
    and FIRST_ACTVN_DATE < trunc(sysdate) + 1
    Note, this still might not use the index, depending on how many rows fall within today's date versus how many fall outside it.
    Also, when posting, remember to put your code between code tags and to post create-table scripts and sample-data inserts.
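
    If the predicate really must stay written as trunc(FIRST_ACTVN_DATE), the function-based index mentioned above is the other option; a hedged sketch (index name made up, and the optimizer still needs fresh statistics and a selective enough predicate to choose it):
    -- Index the truncated value so trunc(FIRST_ACTVN_DATE) = trunc(sysdate)
    -- can use an index range scan instead of a full table scan.
    CREATE INDEX ind_first_actvn_trunc
      ON hbsm_sm_account_info (TRUNC(first_actvn_date));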
