Query takes a long time to execute in exceptional cases

Hi Tuning Experts,
My query runs against the production database daily at a fixed scheduled time. On 1-2 exceptional days out of seven it takes 7-10 hours, whereas on the remaining days it finishes in 10-15 minutes.
I want to do an RCA for this, so please advise how I should proceed. Should I go with
AUTOTRACE, TKPROF, EXPLAIN PLAN or STATSPACK? Which is the best method to find the real culprit?
Regards
Asif

To answer your question...
Running AUTOTRACE and EXPLAIN PLAN against the query in your development environment won't help diagnose the abnormal executions in production. Statspack works at the instance level; it's a good thing to run, but it is probably of insufficient granularity here.
TKPROF is an analysis tool, so you need something to analyze. Read the article on Interpreting Wait Events to find out how to set the 10046 event and gather evidence from the slow runs.
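As a rough sketch of what setting that event looks like (assuming the scheduled job can be wrapped in ALTER SESSION calls; the trace file identifier is just an illustrative label), an extended SQL trace for one run could be enabled like this:

ALTER SESSION SET tracefile_identifier = 'slow_daily_job';  -- hypothetical label to find the file later
ALTER SESSION SET events '10046 trace name context forever, level 12';  -- level 12 = waits + binds

-- ... run the scheduled query here ...

ALTER SESSION SET events '10046 trace name context off';

-- The raw trace file in the user dump destination can then be formatted, e.g.:
--   tkprof <tracefile>.trc slow_run.prf sort=exeela,fchela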
Cheers, APC

Similar Messages

  • Query takes long time to execute.

    Hi All,
    I have one query that takes 5 minutes to execute.
    The query depends on four tables.
    The tables IBS_WORK_BANKDATA and IBS_ORG_BANKDATA contain 25 lakh (2.5 million) records. IBS_CURRENCYMASTER has 250 records and IBS_CURRENCYEXCHANGERATE has 50 records.
    The output of the query contains 3500 records.
    Oracle version is 9.0.1.1 and OS is windows 2003 server.
    Query:
    select distinct trim(wrk.bd_alcd) as ALCD, wrk.bd_typecd as TypeCD, wrk.bd_forcd as FORCD, wrk.bd_curcd as CURCD,
    wrk.bd_councd as COUNCD, wrk.bd_sectcd as SECCD,
    wrk.bd_matcd as MATCD, wrk.bd_c_u_cd as C_U_CD, wrk.bd_s_u_cd as S_U_CD,
    0 as Org_FCBal,0 as ORG_Bal,case when wrk.bd_type='O' then wrk.bd_fc_bal else 0 end as Main_FCBal,
    case when wrk.bd_type='O' then (wrk.bd_fc_bal * nvl(exchg.cer_exchangerate, 1)) else 0 end as main_Bal,
    wrk.bd_rs_int,wrk.bd_rs_bal,wrk.bd_fc_int,wrk.bd_fc_bal,
    ' ' as TrackChangs
    from ibs_work_bankdata wrk inner join ibs_org_bankdata org ON org.bd_yrqtr = wrk.bd_yrqtr and org.bd_bkcode=wrk.bd_bkcode and org.bd_forcd = wrk.bd_forcd
    and wrk.BD_YRQTR=20044 and wrk.BD_BKCODE ='000'
    and wrk.BD_ALCD = '51' and wrk.BD_FORCD ='IN' and wrk.BD_TYPECD = '11'
    left join ibs_currencymaster curmst on curmst.cur_code = wrk.bd_curcd
    left join ibs_currencyexchangerate exchg on exchg.cer_currencyid = curmst.cur_id
    and exchg.cer_yearqtr = 20051 and exchg.CER_ACTIVE=1
    Explain Plan:
    SELECT STATEMENT, GOAL = CHOOSE               Cost=26     Cardinality=1     Bytes=157
      SORT UNIQUE               Cost=26     Cardinality=1     Bytes=157
        TABLE ACCESS BY INDEX ROWID     Object owner=RBI     Object name=IBS_ORG_BANKDATA     Cost=2     Cardinality=204     Bytes=2856
          NESTED LOOPS               Cost=26     Cardinality=1     Bytes=157
            NESTED LOOPS OUTER               Cost=24     Cardinality=1     Bytes=143
              NESTED LOOPS OUTER               Cost=23     Cardinality=1     Bytes=93
                TABLE ACCESS BY INDEX ROWID     Object owner=RBI     Object name=IBS_WORK_BANKDATA     Cost=22     Cardinality=1     Bytes=52
                  INDEX SKIP SCAN     Object owner=RBI     Object name=IBS_WORK_BANKDATA_IDX     Cost=7     Cardinality=1
                TABLE ACCESS FULL     Object owner=RBI     Object name=IBS_CURRENCYMASTER     Cost=1     Cardinality=178     Bytes=7298
              TABLE ACCESS FULL     Object owner=RBI     Object name=IBS_CURRENCYEXCHANGERATE     Cost=1     Cardinality=19     Bytes=950
            INDEX RANGE SCAN     Object owner=RBI     Object name=IBS_ORG_BANKDATA_IDX     Cost=1     Cardinality=204
    Please help me.
    Thanks in advance,
    Prathamesh.

    Hi Prathamesh,
    Check whether the tables accessed by the query have been analyzed recently.
    Thanks,
    Sathis.
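    As a minimal sketch of that check (assuming the 9i DBMS_STATS package is available; RBI is the owner shown in the plan above), you could look at LAST_ANALYZED and regather statistics where they are stale:

    -- When were the four tables last analyzed, and how many rows does the optimizer think they have?
    SELECT table_name, num_rows, last_analyzed
    FROM   all_tables
    WHERE  owner = 'RBI'
    AND    table_name IN ('IBS_WORK_BANKDATA', 'IBS_ORG_BANKDATA',
                          'IBS_CURRENCYMASTER', 'IBS_CURRENCYEXCHANGERATE');

    -- Regather statistics (including index statistics) for a stale table
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'RBI',
                                    tabname => 'IBS_WORK_BANKDATA',
                                    cascade => TRUE);
    END;
    /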

  • Why does the update query take a long time?

    Hello everyone;
    My update query takes a long time. The emp table (a self test) has just 2 records.
    When I issue an update query, it takes a long time:
    SQL> select  *  from  emp;
      EID  ENAME     EQUAL     ESALARY     ECITY    EPERK       ECONTACT_NO
          2   rose              mca                  22000   calacutta                   9999999999
          1   sona             msc                  17280    pune                          9999999999
    Elapsed: 00:00:00.05
    SQL> update emp set esalary=12000 where eid='1';
    update emp set esalary=12000 where eid='1'
    * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:01:11.72
    SQL> update emp set esalary=15000;
    update emp set esalary=15000
      * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:02:22.27

    Hi BCV,
    Thanks for your reply, but it doesn't give me the output I need; please see this:
    SQL> update emp set esalary=15000;
    ........... a lock has already occurred.
    >> Trying to trace >>
    SQL> select HOLDING_SESSION from dba_blockers;
    HOLDING_SESSION
                144
    SQL> select sid , username, event from v$session where username='HR';
    SID USERNAME     EVENT
       144   HR    SQL*Net message from client
       151   HR    enq: TX - row lock contention
       159   HR    SQL*Net message from client
    >> It doesn't provide clear output about the transaction lock >>
    SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
      2  BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
      3  request
      4  FROM v$lock, v$session
      5  WHERE v$lock.TYPE = 'TX'
      6  AND v$lock.SID = v$session.SID
      7  AND v$session.username = USER;
      no rows selected
    SQL> select MACHINE from v$session where sid = :sid;
    SP2-0552: Bind variable "SID" not declared.
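    As a hedged follow-up sketch (assuming 10g or later, where V$SESSION exposes BLOCKING_SESSION, and using session 144 reported by DBA_BLOCKERS above), the blocker and its victims can be listed, and the bind-variable error at the end is simply a missing declaration in SQL*Plus:

    -- Who is session 144, and which sessions is it blocking?
    SELECT sid, serial#, username, machine, event, blocking_session
    FROM   v$session
    WHERE  sid = 144
    OR     blocking_session = 144;

    -- A bind variable must be declared before it can be used in SQL*Plus
    VARIABLE sid NUMBER
    EXEC :sid := 144
    SELECT machine FROM v$session WHERE sid = :sid;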

  • Procedure takes long time to execute...

    Hi all
    I wrote the procedure below, but it takes a long time to execute.
    The INTERDATA table contains 300 records.
    Here is the procedure:
    create or replace procedure inter_filter
    is
         /*v_sessionid interdata.sessionid%type;
         v_clientip interdata.clientip%type;
         v_userid interdata.userid%type;
         v_logindate interdata.logindate%type;
         v_createddate interdata.createddate%type;
         v_sourceurl interdata.sourceurl%type;
         v_destinationurl interdata.destinationurl%type;*/
         v_sessionid filter.sessionid%type;
         v_filterid filter.filterid%type;
         cursor c1 is
         select sessionid,clientip,browsertype,userid,logindate,createddate,sourceurl,destinationurl
         from interdata;
         cursor c2 is
         select sessionid,filterid
         from filter;
    begin
         open c2;
         loop
              fetch c2 into v_sessionid,v_filterid;
              exit when c2%notfound; -- without this exit the outer loop never terminates
              for i in c1 loop
                   if i.sessionid = v_sessionid then
                        insert into filterdetail(filterdetailid,filterid,sourceurl,destinationurl,createddate)
                        values (filterdetail_seq.nextval,v_filterid,i.sourceurl,i.destinationurl,i.createddate);
                   else
                        insert into filter (filterid,sessionid,clientip,browsertype,userid,logindate,createddate)
                        values (filter_seq.nextval,i.sessionid,i.clientip,i.browsertype,i.userid,i.logindate,i.createddate);
                        insert into filterdetail(filterdetailid,filterid,sourceurl,destinationurl,createddate)
                        values (filterdetail_seq.nextval,filter_seq.currval,i.sourceurl,i.destinationurl,i.createddate);
                   end if;
              end loop;
         end loop;
         close c2;
         commit;
    end;
    /
    Please Help!
    Prathamesh

    "I wrote the procedure below, but it takes a long time to execute."
    Please define "long time". How long does it take? What are you expecting it to take?
    "The INTERDATA table contains 300 records."
    But how many records are there in the FILTER table? As that is the table you are driving off, it is going to determine how long the procedure takes to complete. Also, this solution inserts every row of the INTERDATA table for each row in the FILTER table - in other words, if the FILTER table has twenty rows to start with, you are going to end up with 6000 rows in FILTERDETAIL. No wonder it takes a long time. Is that what you want?
    Also, of course, you are using PL/SQL cursors when you ought to be using set operations. Did you try the solution I posted in Re: Confusion in this scenario>>>>>>> on this topic?
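    As a hedged illustration of the set-based shape being suggested (not necessarily the exact business rule the poster needs, and assuming the sequences and tables shown above), the two branches of the IF could become two INSERT ... SELECT statements:

    -- Sessions that already exist in FILTER: add their detail rows in one statement
    insert into filterdetail (filterdetailid, filterid, sourceurl, destinationurl, createddate)
    select filterdetail_seq.nextval, f.filterid, i.sourceurl, i.destinationurl, i.createddate
    from   interdata i
           join filter f on f.sessionid = i.sessionid;

    -- Sessions not yet in FILTER: create the FILTER rows set-wise first,
    -- then their detail rows can be inserted the same way
    insert into filter (filterid, sessionid, clientip, browsertype, userid, logindate, createddate)
    select filter_seq.nextval, i.sessionid, i.clientip, i.browsertype, i.userid, i.logindate, i.createddate
    from   interdata i
    where  not exists (select 1 from filter f where f.sessionid = i.sessionid);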
    Cheers, APC

  • My query takes a long time..

    The output of tkprof of my trace file is :
    SELECT ENEXT.NUM_PRSN_EMPLY ,ENEXT.COD_BUSUN ,ENEXT.DAT_CALDE ,ENEXT.COD_SHFT
    FROM
    AAC_EMPLOYEE_ENTRY_EXITS5_VIW ENEXT ,PDS.PDS_EMPLOYEES EMPL ,
    PDS.PDS_EMPLOYMENT_TYPES EMPTYP ,PDS.PDS_PAY_CONDITIONS PAYCON WHERE
    ENEXT.DAT_CALDE BETWEEN :B6 AND :B5 AND ENEXT.NUM_PRSN_EMPLY IN (SELECT
    ATT21 FROM APPS.GLOBAL_TEMPS WHERE ATT1 = 'PRSN') AND ENEXT.NUM_PRSN_EMPLY =
    EMPL.NUM_PRSN_EMPLY AND EMPL.EMTYP_COD_EMTYP = EMPTYP.COD_EMTYP AND
    EMPTYP.LKP_COD_STA_PAY_EMTYP <> 3 AND
    NVL(EMPL.LKP_MNTLY_WITHOUT_ENEXT_EMPLY,2) <> 1 AND EMPL.PCOND_COD_STA_PCOND
    = PAYCON.COD_STA_PCOND AND NVL(EMPL.LKP_MNTLY_WITHOUT_ENEXT_EMPLY,2) <> 1
    AND PAYCON.LKP_FLG_STA_PAY_PCOND = 1 AND ENEXT.DAT_CALDE >=
    EMPL.DAT_EMPLT_EMPLY AND ENEXT.DAT_CALDE <= NVL(EMPL.DAT_DSMSL_EMPLY,
    TO_DATE('15001229','YYYYMMDD')) AND 1 = (CASE WHEN
    ENEXT.LKP_STA_HOLIDAY_CALNR = 2 AND ENEXT.LKP_CAT_SHFT_SHTAB = 1 AND
    ENEXT.TYP_DAY BETWEEN 4 AND 6 THEN 0 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 2
    AND ENEXT.LKP_CAT_SHFT_SHTAB = 1 AND ENEXT.TYP_DAY NOT BETWEEN 4 AND 6 THEN
    1 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 2 AND ENEXT.LKP_CAT_SHFT_SHTAB = 2
    THEN 0 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 1 AND ENEXT.LKP_CAT_SHFT_SHTAB =
    1 THEN 1 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 1 AND ENEXT.LKP_CAT_SHFT_SHTAB =
    2 THEN 0 END) AND ENEXT.LKP_COD_DPUT_BUSUN = NVL(:B4 ,
    ENEXT.LKP_COD_DPUT_BUSUN) AND ENEXT.LKP_COD_MANAG_BUSUN = NVL(:B3 ,
    ENEXT.LKP_COD_MANAG_BUSUN) AND ENEXT.COD_BUSUN = NVL(:B2 , ENEXT.COD_BUSUN)
    AND ENEXT.COD_CAL = NVL(COD_CAL, ENEXT.COD_CAL) AND ENEXT.NUM_PRSN_EMPLY =
    NVL(:B1 , ENEXT.NUM_PRSN_EMPLY) AND ENEXT.COD_SHFT IN (SELECT
    SHFTBL.COD_SHTAB FROM AAC_SHIFT_TABLES SHFTBL WHERE
    SHFTBL.LKP_CAT_SHFT_SHTAB = 1) AND ENEXT.DAT_CALDE NOT IN (SELECT ABN.DAT
    FROM APPS.AAC_EMPL_EN_EX_ABNORMAL_VIW ABN WHERE ABN.PRSN =
    ENEXT.NUM_PRSN_EMPLY AND ABN.DAT BETWEEN :B6 AND :B5 ) AND ENEXT.DAT_CALDE
    IN (SELECT EMPENEXT.DAT_STR_SHFT_ENEXT FROM AAC.AAC_EMPLOYEE_ENTRY_EXITS
    EMPENEXT WHERE EMPENEXT.EMPLY_NUM_PRSN_EMPLY = EMPL.NUM_PRSN_EMPLY AND
    EMPENEXT.DAT_STR_SHFT_ENEXT BETWEEN :B6 AND :B5 AND
    EMPENEXT.LKP_FLG_STA_ENEXT <> 3) ORDER BY ENEXT.NUM_PRSN_EMPLY,
    ENEXT.DAT_CALDE
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch        2     40.45      40.30        306   17107740          0          24
    total        6     40.45      40.30        306   17107740          0          24
    What is wrong with my query?
    Why does it take such a long time?

    user13344656 wrote:
    "What is wrong with my query? Why does it take such a long time?"
    See the PL/SQL forum FAQ:
    https://forums.oracle.com/forums/ann.jspa?annID=1535
    *3. How to improve the performance of my query? / My query is running slow.*
    See the SQL and PL/SQL FAQ for instructions on what information to post and how to format it.
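    For reference, the kind of evidence that FAQ asks for can be gathered with DBMS_XPLAN once the statement has been run; a minimal sketch (assuming a 10g database and that the statement can be re-executed with a hint) is:

    -- Re-run the statement once with row-source statistics enabled
    SELECT /*+ GATHER_PLAN_STATISTICS */ ...  -- the original query text goes here

    -- Then show the actual plan, estimated vs. actual rows, and buffer gets per step
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));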

  • What is the reason a query takes more time to execute?

    Hi,
    What is the reason that the query takes more time inside the procedure,
    but if I execute that query alone it executes within a minute?
    The query includes an update and an insert.

    "I have an insert and update query. When I execute it without the procedure, the query executes faster, but if I execute it inside the procedure then it takes 2 hours to execute."
    Put your watch 2 hours back and the problem will disappear.
    "Do you understand what I want to say?"
    I understood what you wanted to say, and I understood that you didn't understand what I said.
    What does the procedure do? What does the query do? How can you say the query does the same thing as the procedure when the procedure takes longer? You haven't said anything useful enough to give an idea of what you're talking about.
    Everyone knows what it means for something to be slower than something else, but it means nothing if you don't say what you're talking about.
    To begin with, take a look at this:
    When your query takes too long ...
    especially the part regarding the trace.
    Bye Alessandro
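    A minimal sketch of the tracing step being referred to (assuming 10g or later and that you can run the procedure interactively; my_procedure is a placeholder name) is to trace the session that executes the procedure, including waits and bind values, and compare that with a trace of the standalone statements:

    -- Enable extended tracing for the current session, run the procedure, then disable it
    EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
    EXEC my_procedure;   -- hypothetical: the procedure containing the slow insert/update
    EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;
    -- Format the trace file with tkprof and compare plans and wait events
    -- against a trace of the same statements run outside the procedure.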

  • Query takes a Long Time to execute.

    Hi All,
    I have a query that joins 4 tables.
    The query takes 12 minutes to execute.
    The SGA size is 50m.
    Please tell me what to do now.
    Thanks in advance.
    Prathamesh.

    Prathamesh,
    Please go through the Performance Tuning Guide to isolate your problem:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/toc.htm
    Try to find out:
    Is it a new query, or did it used to return results quickly before?
    Check the explain plan, etc., to find out what exactly is going on behind the query.
    If the SGA is not properly tuned then it will affect all queries; it won't be isolated to a single one.
    Regards,
    Sayan
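    A minimal sketch of that explain-plan check (assuming 9.2 or later so DBMS_XPLAN is available; on older releases the utlxpls.sql script serves the same purpose) would be:

    -- Generate the plan for the 4-table join (the SELECT below is a placeholder for the real query)
    EXPLAIN PLAN FOR
    SELECT ...;  -- your four-table join goes here

    -- Display the plan from PLAN_TABLE
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);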

  • Query takes long time on multiprovider

    Hi,
    When I execute a query on the MultiProvider, it takes a very long time and doesn't show any results; it just keeps processing. I have executed the report for only one day, but it still doesn't show any result. When I execute it on the cube, however, it executes quickly and shows the result.
    Actually, I added one more cube to the MultiProvider and then transported the MultiProvider to QA and PRD. The transport went through successfully. After this I am unable to execute the reports on that MultiProvider. What might be the cause? Your help is appreciated.
    Thanks
    Annie

    Hi Annie.......
    Checklist for the performance of a query........from a doc........
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries - for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Also check this.........Recommendations for Modeling MultiProviders
    http://help.sap.com/saphelp_nw70/helpdata/EN/43/5617d903f03e2be10000000a1553f6/frameset.htm
    Hope this helps......
    Regards,
    Debjani......

  • Query takes a long time fetching when used within a procedure

    The database is: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64-bit.
    The query takes just a second from TOAD, but when used inside a procedure as a cursor it takes 3 to 5 minutes.
    Following is the tkprof information when running from the procedure.
    SELECT CHCLP.CLM_PRVDR_TYPE_LKPCD, CHCLP.PRVDR_LCTN_IID, TO_CHAR
    (CHCLP.MODIFIED_DATE, 'MM-dd-yyyy hh24:mi:ss') MODIFIED_DATE,
    CHCLP.PRVDR_LCTN_IDENTIFIER, CHCLP.CLM_HDR_CLM_LN_X_PVDR_LCTN_SID
    FROM
    CLM_HDR_CLM_LN_X_PRVDR_LCTN CHCLP WHERE CHCLP.CLAIM_HEADER_SID = :B1 AND
    CHCLP.CLAIM_LINE_SID IS NULL AND CHCLP.IDNTFR_TYPE_CID = 7
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1    110.79     247.79     568931     576111          0           3
    total        2    110.79     247.79     568931     576111          0           3
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93 (CMSAPP) (recursive depth: 1)
    Rows     Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 PARTITION RANGE (SINGLE) PARTITION:KEYKEY
    0 TABLE ACCESS MODE: ANALYZED (BY LOCAL INDEX ROWID) OF
    'CLM_HDR_CLM_LN_X_PRVDR_LCTN' (TABLE) PARTITION:KEYKEY
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF
    'XAK1CLM_HDR_CLM_LN_X_PRVDR_LCT' (INDEX (UNIQUE))
    PARTITION:KEYKEY
    Execution plan when running just the query from TOAD is: (it comes out in a second)
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 6 Bytes: 100 Cardinality: 2
         3 PARTITION RANGE SINGLE Cost: 6 Bytes: 100 Cardinality: 2 Partition #: 1 Partitions accessed #13          
              2 TABLE ACCESS BY LOCAL INDEX ROWID TABLE CMSAPP.CLM_HDR_CLM_LN_X_PRVDR_LCTN Cost: 6 Bytes: 100 Cardinality: 2 Partition #: 2 Partitions accessed #13     
    Why would fetching take such a long time? Please let me know if you need any other information.
    Thank You.

    "Query just takes a second from TOAD"
    It's possible that the query starts returning rows in a second, but that's not the time required to complete the entire query.
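    One hedged way to compare like with like (a SQL*Plus sketch; the statement shown is a placeholder for the cursor's query) is to time a run that fetches every row without displaying it, rather than stopping at the first screenful the GUI shows:

    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    -- the cursor's SELECT goes here; all rows are fetched but not displayed
    SELECT ... FROM clm_hdr_clm_ln_x_prvdr_lctn WHERE ...;

    This measures the full fetch, which is closer to what the procedure actually does with the cursor.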

  • Query takes a long time from one machine but 1 sec from another machine

    I have an update query, which is like an application patch, that takes 1 sec from one machine. I need to apply it on the other machine where the application is installed.
    Both applications are the same and connect to the same DB server. The query run from the second machine takes a very long time,
    but I can update other things from the second machine.
    Does this have anything to do with page size or line size?
    Urgent, please.

    Hi,
    Everything is the same except for the different machine.
    Could it be a client version issue, because the script is very wide - around 240 characters?
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | UPDATE STATEMENT | | | | |
    | 1 | UPDATE | IDI_INTERFACE_MST | | | |
    | 2 | INDEX UNIQUE SCAN | PK_IDI_INTMST | | | |
    Note: rule based optimization, 'PLAN_TABLE' is old version
    10 rows selected.
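    The note about the old PLAN_TABLE (and the rule-based optimization) means the plan output above is missing detail. A hedged fix, assuming access to the script shipped under $ORACLE_HOME, is to recreate the plan table and re-run the explain plan:

    -- Drop the outdated plan table and recreate it from the standard script
    DROP TABLE plan_table;
    @?/rdbms/admin/utlxplan.sql
    -- then EXPLAIN PLAN FOR <the update statement>; and query the plan again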

  • Why does it take longer to execute on Production than on Staging?

    Hi Experts,
    Any help is appreciated on the issue below.
    I have one anonymous block that updates around 1 million records by joining 9 tables.
    It was promoted to production through the following environments, and all environments have exactly the same volume of data:
    development -> Testing -> Staging -> Production.
    The odd problem is that while it takes 5 minutes to execute in all the other environments, it takes 30 minutes on production.
    Why does this happen, and what can be the action points for the future?
    Thanks
    -J
    ==============
    If the performance is that different in the different environments, one or more statements must have different query plans in the different environments. The first step would be to get the query plans and compare them to figure out which statement(s) is/are running slowly.
    If there are different query plans, that implies that something is different between the environments. That could be any of
    - Oracle version
    - initialization parameters
    - data
    - object statistics
    - system statistics
    If you guarantee that the data is the same, I would tend to expect that the object statistics are different. How have you gathered statistics in the various environments? Can you move statistics from an environment where performance is acceptable to the environment where performance is unacceptable?
    I would also recommend following the advice others have given you. You don't want to commit in a loop and you want to do as much processing in SQL as possible.
    Justin
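    A minimal sketch of moving statistics between environments as suggested (using DBMS_STATS; SCOTT and MY_STATS are placeholder names) would be:

    -- On the environment where the plans are good:
    EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATS');
    EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');
    -- Copy the MY_STATS table to production (export/import or a database link), then:
    EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');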
    ===============
    Thanks Steve for your inputs.
    My investigation resulted in the following 2 points.
    There are 2 main reasons why some scripts might take longer in live than on Staging:
    1: Weekend backups were running on the live server, slowing the server down a lot.
    2: The tables are re-orged when they are imported into staging/Dev – so the table and index layout is optimal; on live the tables and indexes are not necessarily contiguous, so in order to do the same work the server needs to do many more I/O operations.
    Can we have some action points to address these issues?
    I think it may help if the data can be made contiguous.
    Best Regards
    -J
    ===============
    But before that, can you raise this in a separate thread, as there is a different issue going on in this thread?
    Cheers
    Sarma.
    ===========
    Hey Sarma,
    Extreme apologies, but I don't know how to raise a new thread.
    Thanks in advance for your help.
    -J
    ===============
    Hi User 527345,
    Please follow these steps to raise a request in this forum:
    1. Register yourself.
    2. Go to the forum home and select the technology for which you want to raise a request.
    e.g.: if it is related to Oracle Database general, then select Oracle Database General...
    3. Click on "post new thread".
    4. Give a summary of your issue.
    5. Then submit the issue.
    Please let me know if you need more information.
    Thank you

    Jayashree Mohanty wrote:
    My investigation resulted in the following 2 points.
    There are 2 main reasons why some scripts might take longer in live than on Staging:
    1: Weekend backups were running on the live server, slowing the server down a lot.
    2: The tables are re-orged when they are imported into staging/Dev – so the table and index layout is optimal; on live the tables and indexes are not necessarily contiguous, so in order to do the same work the server needs to do many more I/O operations.
    Can we have some action points to address these issues?
    I think it may help if the data can be made contiguous.
    First, I didn't understand at all what that was when you copied part of I-don't-know-which post into your question. Please read this; it will help you post a proper question and get a proper answer:
    http://www.catb.org/~esr/faqs/smart-questions.html
    Now, how did you come to the conclusion that the backups are actually making your query slower? What benchmark led you to this? And what is the meaning of the 2nd point - can you please explain it?
    As others have also mentioned, please post the plan of the query on both staging and production; only that can tell what's going on.
    HTH
    Aman....

  • DELETE query takes a long time - more than 30 min

    Hi
    I have a delete query as below. It takes 1300 seconds to execute. Please let me know how to reduce the execution time. I have indexes on both of the columns, jdoid and jdoversion, used in the deletion.
    DELETE FROM SIT7016.DEVICEINTERFACE WHERE JDOID = :1 AND JDOVERSION = :2
    Regards
    Pandu

    udnapp wrote:
    "I have a delete query as below. It takes 1300 seconds to execute. Please let me know how to reduce the execution time. I have indexes on both of the columns, jdoid and jdoversion, used in the deletion.
    DELETE FROM SIT7016.DEVICEINTERFACE WHERE JDOID = :1 AND JDOVERSION = :2"
    Pretty meaningless information without the execution plan.

  • Query takes long time - Please help!

    I've a query like below (not actual query)
    update (select eds.title eds_title,edv.title edv_title from mia_data_staging eds, mia_doc_Versions edv where eds.id = edv.id and eds.title != edv.title) set edv_title = eds_title;
    In the above query I have more than 70 columns to select, compare and update (I've shown only one above). The explain plan for the query is below and does not show any significant time, but the query never returns when executed, even after a very long time. Any ideas?
    Plan hash value: 2242214163
    | Id  | Operation                      | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT               |                  | 1627K | 1405M |       |   18E  (1)|          |
    |   1 |  UPDATE                        | MIA_DOC_VERSIONS |       |       |       |           |          |
    |*  2 |   HASH JOIN                    |                  | 1627K | 1405M |  720M |   18E  (1)|          |
    |   3 |    TABLE ACCESS BY INDEX ROWID | MIA_DOC_VERSIONS | 1627K |  701M |       |   18E  (1)|          |
    |   4 |     BITMAP CONVERSION TO ROWIDS|                  |       |       |       |           |          |
    |   5 |      BITMAP INDEX FULL SCAN    | IDX_30           |       |       |       |           |          |
    |   6 |    TABLE ACCESS FULL           | MIA_DATA_STAGING | 1628K |  705M |       |  7184  (3)| 00:02:29 |
    Predicate Information (identified by operation id):
    ---------------------------------------------------

    user652494 wrote:
    I've a query like below (not actual query)
    |   3 |    TABLE ACCESS BY INDEX ROWID | MIA_DOC_VERSIONS |  1627K|   701M|       |    18E  (1)|          |
    |   4 |     BITMAP CONVERSION TO ROWIDS|                  |       |       |       |            |          |
    |   5 |      BITMAP INDEX FULL SCAN    | IDX_30           |       |       |       |            |          |
    This part of your execution plan looks very suspicious: it performs a bitmap index full scan and then a single-row access by rowid, apparently for all rows of the table, which seems to be a very inefficient operation. It also shows an unreasonable cost for that operation. The question is why it is not using a full table scan to access the MIA_DOC_VERSIONS table.
    You might want to simply try the FULL hint to request a full table scan on the MIA_DOC_VERSIONS table, in order to find out what the execution plan then looks like:
    update (select /*+ FULL(EDV) */ eds.title eds_title,edv.title edv_title from mia_data_staging eds, mia_doc_Versions edv where eds.id = edv.id and eds.title != edv.title) set edv_title = eds_title;
    or
    update /*+ FULL(a.EDV) */ (select eds.title eds_title,edv.title edv_title from mia_data_staging eds, mia_doc_Versions edv where eds.id = edv.id and eds.title != edv.title) a set edv_title = eds_title;
    Looking at the execution plan of the hinted statement, one might get a clue as to why the optimizer favors an index access path instead.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Query takes long time to return results.

    I am on Oracle database 10g Enterprise Edition Release 10.2.0.4.0 – 64 bit
    This query takes about 58 seconds to return 180 rows...
             SELECT order_num,
                    order_date,
                    company_num,
                    customer_num,
                    address_type,
                    create_date as address_create_date,
                    contact_name,
                    first_name,
                    middle_init,
                    last_name,
                    company_name,
                    street_address_1,
                    customer_class,
                    city,
                    state,
                    zip_code,
                    country_code,
                    MAX(decode(media_type,
                               'PHH',
                               phone_area_code || '''' || phone_number,
                               NULL)) home_phone,
                    MAX(decode(media_type,
                               'PHW',
                               phone_area_code || '''' || phone_number,
                               NULL)) work_phone,
                    address_seq_num,
                    street_address_2
               FROM (SELECT oh.order_num order_num,
                            oh.order_datetime order_date,
                            oh.company_num company_num,
                            oh.customer_num customer_num,
                            ad.address_type address_type,
                            c.create_date create_date,
                            con.first_name || '''' || con.last_name contact_name,
                            con.first_name first_name,
                            con.middle_init middle_init,
                            con.last_name last_name,
                            ad.company_name company_name,
                            ad.street_address_1 street_address_1,
                            c.customer_class customer_class,
                            ad.city city,
                            ad.state state,
                            ad.zip_code zip_code,
                            ad.country_code,
                            cph.media_type media_type,
                            cph.phone_area_code phone_area_code,
                            cph.phone_number phone_number,
                            ad.address_seq_num address_seq_num,
                            ad.street_address_2 street_address_2
                       FROM reporting_base.gt_gaft_orders gt,
                            doms.us_ordhdr   oh,
                            doms.us_address  ad,
                            doms.us_customer c,
                            doms.us_contact  con,
                            doms.us_contph   cph
                      WHERE oh.customer_num = c.customer_num(+)
                        AND oh.customer_num = ad.customer_num(+)
                        AND (
                               ad.customer_num = c.customer_num
                        AND
                               ad.address_type = 'B'
                         OR   (
                                ad.customer_num = c.customer_num
                        AND
                                ad.address_type = 'S'
                        AND
                            ad.address_seq_num = oh.ship_to_seq_num
                        AND ad.customer_num = con.customer_num(+)
                        AND ad.address_type = con.address_type(+)
                        AND ad.address_seq_num = con.address_seq_num(+)
                        AND con.customer_num = cph.customer_num(+)
                        AND con.contact_id = cph.contact_id(+)
                        AND oh.order_num = gt.order_num
                        AND oh.business_unit_id = gt.business_unit_id)
              GROUP BY order_num,
                       order_date,
                       company_num,
                       customer_num,
                       address_type,
                       create_date,
                       contact_name,
                       first_name,
                       middle_init,
                       last_name,
                       company_name,
                       street_address_1,
                       customer_class,
                       city,
                       state,
                       zip_code,
                       country_code,
                       address_seq_num,
                        street_address_2;
    This is the explain plan for the query:
    Plan
    SELECT STATEMENT FIRST_ROWS Cost: 21 Bytes: 207 Cardinality: 1
         18 HASH GROUP BY Cost: 21 Bytes: 207 Cardinality: 1
               17 NESTED LOOPS OUTER Cost: 20 Bytes: 207 Cardinality: 1
                     14 NESTED LOOPS OUTER Cost: 16 Bytes: 183 Cardinality: 1
                           11 FILTER
                                 10 NESTED LOOPS OUTER Cost: 12 Bytes: 152 Cardinality: 1
                                       7 NESTED LOOPS OUTER Cost: 8 Bytes: 74 Cardinality: 1
                                             4 NESTED LOOPS OUTER Cost: 5 Bytes: 56 Cardinality: 1
                                                   1 TABLE ACCESS FULL TABLE (TEMP) REPORTING_BASE.GT_GAFT_ORDERS Cost: 2 Bytes: 26 Cardinality: 1
                                                   3 TABLE ACCESS BY INDEX ROWID TABLE DOMS.US_ORDHDR Cost: 3 Bytes: 30 Cardinality: 1
                                                         2 INDEX UNIQUE SCAN INDEX (UNIQUE) DOMS.USORDHDR_IXUPK_ORDNUMBUID Cost: 2 Cardinality: 1
                                             6 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CUSTOMER Cost: 3 Bytes: 18 Cardinality: 1 Partition #: 11
                                                   5 INDEX UNIQUE SCAN INDEX (UNIQUE) DOMS.USCUSTOMER_IXUPK_CUSTNUM Cost: 2 Cardinality: 1
                                       9 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_ADDRESS Cost: 4 Bytes: 156 Cardinality: 2 Partition #: 13
                                             8 INDEX RANGE SCAN INDEX (UNIQUE) DOMS.USADDR_IXUPK_CUSTATYPASEQ Cost: 3 Cardinality: 2
                           13 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CONTACT Cost: 4 Bytes: 31 Cardinality: 1 Partition #: 15
                                 12 INDEX RANGE SCAN INDEX DOMS.USCONT_IX_CNATAS Cost: 3 Cardinality: 1
                     16 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CONTPH Cost: 4 Bytes: 24 Cardinality: 1 Partition #: 17
                            15 INDEX RANGE SCAN INDEX (UNIQUE) DOMS.USCONTPH_IXUPK_CUSTCONTMEDSEQ Cost: 3 Cardinality: 1
    The cost looks good and all indexes are used, yet the time to return the data is very high.
    Any ideas to make the query faster?
    Thanks

    Hi, here is the tkprof output as requested by Rob..
    TKPROF: Release 10.2.0.4.0 - Production on Mon Jul 13 09:07:09 2009
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Trace file: axispr1_ora_15293.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    SELECT ORDER_NUM, ORDER_DATE, COMPANY_NUM, CUSTOMER_NUM, ADDRESS_TYPE,
      CREATE_DATE AS ADDRESS_CREATE_DATE, CONTACT_NAME, FIRST_NAME, MIDDLE_INIT,
      LAST_NAME, COMPANY_NAME, STREET_ADDRESS_1, CUSTOMER_CLASS, CITY, STATE,
      ZIP_CODE, COUNTRY_CODE, MAX(DECODE(MEDIA_TYPE, 'PHH', PHONE_AREA_CODE ||
      '''' || PHONE_NUMBER, NULL)) HOME_PHONE, MAX(DECODE(MEDIA_TYPE, 'PHW',
      PHONE_AREA_CODE || '''' || PHONE_NUMBER, NULL)) WORK_PHONE, ADDRESS_SEQ_NUM,
       STREET_ADDRESS_2
    FROM
    (SELECT OH.ORDER_NUM ORDER_NUM, OH.ORDER_DATETIME ORDER_DATE, OH.COMPANY_NUM
      COMPANY_NUM, OH.CUSTOMER_NUM CUSTOMER_NUM, AD.ADDRESS_TYPE ADDRESS_TYPE,
      C.CREATE_DATE CREATE_DATE, CON.FIRST_NAME || '''' || CON.LAST_NAME
      CONTACT_NAME, CON.FIRST_NAME FIRST_NAME, CON.MIDDLE_INIT MIDDLE_INIT,
      CON.LAST_NAME LAST_NAME, AD.COMPANY_NAME COMPANY_NAME, AD.STREET_ADDRESS_1
      STREET_ADDRESS_1, C.CUSTOMER_CLASS CUSTOMER_CLASS, AD.CITY CITY, AD.STATE
      STATE, AD.ZIP_CODE ZIP_CODE, AD.COUNTRY_CODE, CPH.MEDIA_TYPE MEDIA_TYPE,
      CPH.PHONE_AREA_CODE PHONE_AREA_CODE, CPH.PHONE_NUMBER PHONE_NUMBER,
      AD.ADDRESS_SEQ_NUM ADDRESS_SEQ_NUM, AD.STREET_ADDRESS_2 STREET_ADDRESS_2
      FROM REPORTING_BASE.GT_GAFT_ORDERS GT, DOMS.US_ORDHDR OH, DOMS.US_ADDRESS
      AD, DOMS.US_CUSTOMER C, DOMS.US_CONTACT CON, DOMS.US_CONTPH CPH WHERE
      OH.ORDER_NUM = GT.ORDER_NUM AND OH.BUSINESS_UNIT_ID = GT.BUSINESS_UNIT_ID
      AND OH.CUSTOMER_NUM = C.CUSTOMER_NUM(+) AND OH.CUSTOMER_NUM =
      AD.CUSTOMER_NUM(+) AND AD.CUSTOMER_NUM = C.CUSTOMER_NUM AND (
      AD.ADDRESS_TYPE = 'B' OR ( AD.ADDRESS_TYPE = 'S' AND AD.ADDRESS_SEQ_NUM =
      OH.SHIP_TO_SEQ_NUM ) ) AND AD.CUSTOMER_NUM = CON.CUSTOMER_NUM(+) AND
      AD.ADDRESS_TYPE = CON.ADDRESS_TYPE(+) AND AD.ADDRESS_SEQ_NUM =
      CON.ADDRESS_SEQ_NUM(+) AND CON.CUSTOMER_NUM = CPH.CUSTOMER_NUM(+) AND
      CON.CONTACT_ID = CPH.CONTACT_ID(+) ) GROUP BY ORDER_NUM, ORDER_DATE,
      COMPANY_NUM, CUSTOMER_NUM, ADDRESS_TYPE, CREATE_DATE, CONTACT_NAME,
      FIRST_NAME, MIDDLE_INIT, LAST_NAME, COMPANY_NAME, STREET_ADDRESS_1,
      CUSTOMER_CLASS, CITY, STATE, ZIP_CODE, COUNTRY_CODE, ADDRESS_SEQ_NUM,
      STREET_ADDRESS_2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch      257      0.04       0.05         45          0          0        6421
    total      257      0.04       0.05         45          0          0        6421
    Misses in library cache during parse: 0
    Parsing user id: 126
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch      257      0.04       0.05         45          0          0        6421
    total      257      0.04       0.05         45          0          0        6421
    Misses in library cache during parse: 0
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        1  user  SQL statements in session.
        0  internal SQL statements in session.
        1  SQL statements in session.
    Trace file: axispr1_ora_15293.trc
    Trace file compatibility: 10.01.00
    Sort options: default
           1  session in tracefile.
           1  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           1  SQL statements in trace file.
           1  unique SQL statements in trace file.
         289  lines in trace file.
           83  elapsed seconds in trace file.
    Thanks in advance!

  • Query Takes Longer time

    SELECT CAL_EMPCALENDAR.START_DATE as main,
    bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' ||
    CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
    TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
    TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
    bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' ||
    CAL_EMPCALENDAR.EMPLOYEE_ID as name,
    CAL_EMPCALENDAR.START_DATE as sdate,
    CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
    CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
    TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
    TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
    TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
    CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
    CAL_EMPCALENDAR.LATE_IN,
    CAL_EMPCALENDAR.EARLY_OUT,
    CAL_EMPCALENDAR.UNDER_TIME,
    CAL_EMPCALENDAR.OVERTIME,
    TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
    CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
    HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
    BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
    (SELECT shift_id
    FROM CAL_GRPWORKDAY
    WHERE CAL_GRPWORKDAY.calgrp_id =
    (SELECT calgrp_id
    FROM CAL_CALASSIGNMENT
    WHERE employee_id = CAL_EMPCALENDAR.employee_id
    AND CAL_CALASSIGNMENT.START_DATE <=
    CAL_EMPCALENDAR.START_DATE
    AND (CAL_CALASSIGNMENT.END_DATE is null or
    CAL_CALASSIGNMENT.END_DATE >=
    CAL_EMPCALENDAR.START_DATE))
    AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
    (SELECT max(entry_dt)
    FROM LV_APPSTATUSHIST, LV_TXN txn, CAL_EMPDAILYEVENT cale
    WHERE status = 'Approved'
    AND LV_APPSTATUSHIST.application_id = txn.application_id
    AND cale.reference_id = txn.txn_id
    AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id
    ) AS entry_dt,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 1
    and BIZUNIT_ID like 'SG')) F1,
    --TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,                            
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 2
    and bizunit_id like 'SG')) F2,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 3
    and bizunit_id like 'SG')) F3,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 4
    and bizunit_id like 'SG')) F4,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 5
    and bizunit_id like 'SG')) F5
    From CAL_EMPCALENDAR, HRM_CURR_CAREER_V, CAL_SHIFT, HRM_EMPLOYEE
    Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
    AND (CAL_EMPCALENDAR.WF_STATUS = 'Approved' Or
    CAL_EMPCALENDAR.WF_STATUS = 'No Action')
    AND CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
    --and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
    AND CAL_EMPCALENDAR.START_DATE BETWEEN
    GREATEST(HRM_EMPLOYEE.COMMENCE_DATE,
    TO_DATE('1-4-2006', 'DD-MM-YYYY')) AND
    LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'),
    NVL(HRM_EMPLOYEE.CESSATION_DATE,
    TO_DATE('30-4-2006', 'DD-MM-YYYY')))
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
    And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
    -- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
    --AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
    --$P!{ExceptionSQL}
    --$P!{iHRFilterClause}
    --order by $P!{OrderBy}
    order by main
    Hi all, this query takes a very long time to run.
    In the explain plan, the table shown in bold is accessed with a full table scan; all the rest use index scans.
    The table has indexes on the columns referred to.
    Oracle version 9.2.0.6

    Maran,
    With code tags and indentation it should be easier to analyze, at least for you:
    SELECT CAL_EMPCALENDAR.START_DATE as main,
           bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' || CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
           TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
           TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
           bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' || CAL_EMPCALENDAR.EMPLOYEE_ID as name,
           CAL_EMPCALENDAR.START_DATE as sdate,
           CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
           CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
           TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
           TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
           TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
           CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
           CAL_EMPCALENDAR.LATE_IN,
           CAL_EMPCALENDAR.EARLY_OUT,
           CAL_EMPCALENDAR.UNDER_TIME,
           CAL_EMPCALENDAR.OVERTIME,
           TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
           CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
           HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
           BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
           (SELECT shift_id
            FROM   CAL_GRPWORKDAY
            WHERE  CAL_GRPWORKDAY.calgrp_id = (SELECT calgrp_id
                                               FROM   CAL_CALASSIGNMENT
                                               WHERE employee_id = CAL_EMPCALENDAR.employee_id
                                               AND CAL_CALASSIGNMENT.START_DATE <= CAL_EMPCALENDAR.START_DATE
                                               AND (   CAL_CALASSIGNMENT.END_DATE is null
                                                    or CAL_CALASSIGNMENT.END_DATE >= CAL_EMPCALENDAR.START_DATE))
            AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
           (SELECT max(entry_dt)
            FROM   LV_TXN txn, CAL_EMPDAILYEVENT cale
            WHERE status = 'Approved'
            AND LV_APPSTATUSHIST.application_id = txn.application_id
            AND cale.reference_id = txn.txn_id
            AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id) AS entry_dt,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE  SEQUENCE = 1
                           and BIZUNIT_ID like 'SG')) F1,
           --TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE  SEQUENCE = 2
                           and    bizunit_id like 'SG')) F2,
           (SELECT ENTITLEMENT + ADJUST
            FROM   TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 3
                           and   bizunit_id like 'SG')) F3,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 4
                           and bizunit_id like 'SG')) F4,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 5
                           and bizunit_id like 'SG')) F5
    From CAL_EMPCALENDAR,
         HRM_CURR_CAREER_V,
         CAL_SHIFT,
         HRM_EMPLOYEE
    Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
    AND   (   CAL_EMPCALENDAR.WF_STATUS = 'Approved'
           Or CAL_EMPCALENDAR.WF_STATUS = 'No Action')
    AND   CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
    --and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
    AND   CAL_EMPCALENDAR.START_DATE BETWEEN GREATEST(HRM_EMPLOYEE.COMMENCE_DATE, TO_DATE('1-4-2006', 'DD-MM-YYYY'))
                                         AND LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'), NVL(HRM_EMPLOYEE.CESSATION_DATE, TO_DATE('30-4-2006', 'DD-MM-YYYY')))
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
    And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
    -- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
    --AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
    --$P!{ExceptionSQL}
    --$P!{iHRFilterClause}
    --order by $P!{OrderBy}
    order by main
    Nicolas.
