Query on Cube jumps to Query on ODS; Query on ODS takes a long time

Hi All,
Performance issue: a query on a Cube jumps to a query on an ODS, and the query on the ODS takes a long time (jump query).
Specific to the ODS query: when I check the query on the ODS individually, it also takes a long time.
The ODS actually contains quite a lot of data, and indexes are already maintained.
I have also checked RSRT with the Execute SQL and Debug options; the indexes are maintained perfectly.
The order of objects in the ODS indexes matches the order of objects in the SQL statement shown in RSRT. In spite of that, it takes a long time.
I have checked it both ways, as a jump query as well as individually.
My question is: when the query jumps from the Cube to the query on the ODS, how do I check the performance, and how is the query executed in the background when switching over to the second query? Moreover, a calculated key figure is used for jumping to the target query.
How can the ODS query's runtime be optimized, or its performance improved, when jumping from the query on the Cube?
Can anybody help?
Rgds,
C.V.
Message was edited by:
        C.V. P

What I understand is that you want to optimise the query jump time. But this will be very small compared to the time taken by the query on the ODS.
Ideally you shouldn't be building a BEx query on an ODS, as this takes a long time. What you can do is execute the BEx query on the ODS on its own to find out where the issue lies. If this query is taking a long time, there is not much that you can do here.

Similar Messages

  • Why does an update query take a long time?

    Hello everyone;
    My update query takes a long time. The emp table (self testing) has just 2 records.
    When I issue the update query, it takes a long time:
    SQL> select  *  from  emp;
      EID  ENAME  EQUAL  ESALARY  ECITY      EPERK  ECONTACT_NO
        2  rose   mca      22000  calacutta         9999999999
        1  sona   msc      17280  pune              9999999999
    Elapsed: 00:00:00.05
    SQL> update emp set esalary=12000 where eid='1';
    update emp set esalary=12000 where eid='1'
    * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:01:11.72
    SQL> update emp set esalary=15000;
    update emp set esalary=15000
      * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:02:22.27

    Hi BCV;
    Thanks for your reply, but it doesn't provide the output; please see this:
    SQL> update emp set esalary=15000;
    ........... Lock already occurred.
    >> trying to trace  >>
    SQL> select HOLDING_SESSION from dba_blockers;
    HOLDING_SESSION
                144
    SQL> select sid , username, event from v$session where username='HR';
    SID USERNAME     EVENT
       144   HR    SQL*Net message from client
       151   HR    enq: TX - row lock contention
       159   HR    SQL*Net message from client
    >> It doesn't provide clear output about the transaction lock >>
    SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
      2  BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
      3  request
      4  FROM v$lock, v$session
      5  WHERE v$lock.TYPE = 'TX'
      6  AND v$lock.SID = v$session.SID
      7  AND v$session.username = USER;
      no rows selected
    SQL> select MACHINE from v$session where sid = :sid;
    SP2-0552: Bind variable "SID" not declared.
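    The symptoms above point to an uncommitted transaction in another session holding the row lock. A minimal sketch, assuming Oracle 10g or later, of how the blocker could be confirmed and released; the serial# placeholder is hypothetical and must be replaced with the value returned by the first query:
    select sid, serial#, username, status, blocking_session
      from v$session
     where blocking_session is not null;
    -- Either commit/rollback in the blocking session (SID 144 above), or, as a
    -- last resort, kill it (replace <serial#> with the SERIAL# returned above):
    alter system kill session '144,<serial#>';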

  • CV04N takes long time to process select query on DRAT table

    Hello Team,
    While using CV04N to display DIRs, it takes a long time to process the select query on the DRAT table. This query includes all the key fields. Any idea how to analyse this?
    Thanks and best regards,
    Bobby
    Moderator message: please read the sticky threads of this forum, there is a lot of information on what you can do.
    Edited by: Thomas Zloch on Feb 24, 2012

    Be aware that XP takes approximately 1 GB of your RAM, leaving you with 1 GB for whatever else is running. MS Outlook is also a memory hog.
    To check the Virtual Memory settings:
    Control Panel -> System
    System Properties -> Advanced Tab -> Performance Settings
    Performance Options -> Advanced Tab -> Virtual Memory section
    Virtual Memory -
    what are
    * Initial Size
    * Maximum Size
    In a presentation at one of the Hyperion conferences years ago, Mark Ostroff suggested that the initial size be set to the same as the maximum. (The maximum is typically 2x physical RAM.)
    These changes may provide some improvement.

  • My query takes a long time..

    The output of tkprof of my trace file is :
    SELECT ENEXT.NUM_PRSN_EMPLY ,ENEXT.COD_BUSUN ,ENEXT.DAT_CALDE ,ENEXT.COD_SHFT
    FROM
    AAC_EMPLOYEE_ENTRY_EXITS5_VIW ENEXT ,PDS.PDS_EMPLOYEES EMPL ,
    PDS.PDS_EMPLOYMENT_TYPES EMPTYP ,PDS.PDS_PAY_CONDITIONS PAYCON WHERE
    ENEXT.DAT_CALDE BETWEEN :B6 AND :B5 AND ENEXT.NUM_PRSN_EMPLY IN (SELECT
    ATT21 FROM APPS.GLOBAL_TEMPS WHERE ATT1 = 'PRSN') AND ENEXT.NUM_PRSN_EMPLY =
    EMPL.NUM_PRSN_EMPLY AND EMPL.EMTYP_COD_EMTYP = EMPTYP.COD_EMTYP AND
    EMPTYP.LKP_COD_STA_PAY_EMTYP <> 3 AND
    NVL(EMPL.LKP_MNTLY_WITHOUT_ENEXT_EMPLY,2) <> 1 AND EMPL.PCOND_COD_STA_PCOND
    = PAYCON.COD_STA_PCOND AND NVL(EMPL.LKP_MNTLY_WITHOUT_ENEXT_EMPLY,2) <> 1
    AND PAYCON.LKP_FLG_STA_PAY_PCOND = 1 AND ENEXT.DAT_CALDE >=
    EMPL.DAT_EMPLT_EMPLY AND ENEXT.DAT_CALDE <= NVL(EMPL.DAT_DSMSL_EMPLY,
    TO_DATE('15001229','YYYYMMDD')) AND 1 = (CASE WHEN
    ENEXT.LKP_STA_HOLIDAY_CALNR = 2 AND ENEXT.LKP_CAT_SHFT_SHTAB = 1 AND
    ENEXT.TYP_DAY BETWEEN 4 AND 6 THEN 0 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 2
    AND ENEXT.LKP_CAT_SHFT_SHTAB = 1 AND ENEXT.TYP_DAY NOT BETWEEN 4 AND 6 THEN
    1 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 2 AND ENEXT.LKP_CAT_SHFT_SHTAB = 2
    THEN 0 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 1 AND ENEXT.LKP_CAT_SHFT_SHTAB =
    1 THEN 1 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 1 AND ENEXT.LKP_CAT_SHFT_SHTAB =
    2 THEN 0 END) AND ENEXT.LKP_COD_DPUT_BUSUN = NVL(:B4 ,
    ENEXT.LKP_COD_DPUT_BUSUN) AND ENEXT.LKP_COD_MANAG_BUSUN = NVL(:B3 ,
    ENEXT.LKP_COD_MANAG_BUSUN) AND ENEXT.COD_BUSUN = NVL(:B2 , ENEXT.COD_BUSUN)
    AND ENEXT.COD_CAL = NVL(COD_CAL, ENEXT.COD_CAL) AND ENEXT.NUM_PRSN_EMPLY =
    NVL(:B1 , ENEXT.NUM_PRSN_EMPLY) AND ENEXT.COD_SHFT IN (SELECT
    SHFTBL.COD_SHTAB FROM AAC_SHIFT_TABLES SHFTBL WHERE
    SHFTBL.LKP_CAT_SHFT_SHTAB = 1) AND ENEXT.DAT_CALDE NOT IN (SELECT ABN.DAT
    FROM APPS.AAC_EMPL_EN_EX_ABNORMAL_VIW ABN WHERE ABN.PRSN =
    ENEXT.NUM_PRSN_EMPLY AND ABN.DAT BETWEEN :B6 AND :B5 ) AND ENEXT.DAT_CALDE
    IN (SELECT EMPENEXT.DAT_STR_SHFT_ENEXT FROM AAC.AAC_EMPLOYEE_ENTRY_EXITS
    EMPENEXT WHERE EMPENEXT.EMPLY_NUM_PRSN_EMPLY = EMPL.NUM_PRSN_EMPLY AND
    EMPENEXT.DAT_STR_SHFT_ENEXT BETWEEN :B6 AND :B5 AND
    EMPENEXT.LKP_FLG_STA_ENEXT <> 3) ORDER BY ENEXT.NUM_PRSN_EMPLY,
    ENEXT.DAT_CALDE
    call     count       cpu    elapsed       disk      query    current       rows
    Parse        2      0.00       0.00          0          0          0          0
    Execute      2      0.00       0.00          0          0          0          0
    Fetch        2     40.45      40.30        306   17107740          0         24
    total        6     40.45      40.30        306   17107740          0         24
    What is wrong in my query?
    Why does it take a long time?

    user13344656 wrote:
    what is wrong in my query?
    why it take long time?
    See the PL/SQL forum FAQ:
    https://forums.oracle.com/forums/ann.jspa?annID=1535
    *3. How to improve the performance of my query? / My query is running slow.*
    SQL and PL/SQL FAQ
    For instructions on what information to post and how to format it.
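    As a starting point for the information the FAQ asks for, here is a minimal sketch (assuming Oracle 10g or later) of how the actual execution plan and row-source statistics could be captured for the slow statement:
    alter session set statistics_level = all;
    -- ... execute the slow SELECT here, unchanged ...
    select *
      from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    -- DISPLAY_CURSOR(null, null, ...) reports on the last statement run in this
    -- session, including estimated vs. actual row counts for each plan step.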

  • Analyze a query which takes a long time in the production server with ST03 only

    Hi,
    I want to analyze a query which takes a long time in the production server, using the ST03 t-code only.
    Please provide detailed steps on how to do this with ST03.
    ST03 - Expert mode - then I need to know the steps after this. I have checked many threads, so please don't send me links.
    Please write the steps in detail.
    <REMOVED BY MODERATOR>
    Regards,
    Sameer
    Edited by: Alvaro Tejada Galindo on Jun 12, 2008 12:14 PM

    Then please close the thread.
    Greetings,
    Blag.

  • Query takes long time on multiprovider

    Hi,
    When I execute a query on the MultiProvider, it takes a very long time and does not show the results either; it just keeps processing. I have executed the report for only one day, but it still does not show any result. However, when I execute it on the cube, it runs quickly and shows the result.
    Actually, I added one more cube to the MultiProvider and then transported that MultiProvider to QA and PRD. The transport went through successfully. After this I am unable to execute reports on that MultiProvider. What might be the cause? Your help is appreciated.
    Thanks
    Annie

    Hi Annie.......
    Checklist for the performance of a query, from a doc:
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance, when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Also check this.........Recommendations for Modeling MultiProviders
    http://help.sap.com/saphelp_nw70/helpdata/EN/43/5617d903f03e2be10000000a1553f6/frameset.htm
    Hope this helps......
    Regards,
    Debjani......

  • Query takes a long time (41 seconds) to run; how to tune it?

    My query is simple, as follows... I don't know how to tune it:
    cur_memb_count (p_as_of_date IN date)
    is
    select
    count(ip.individual_id) membercount,
    --lpad(re.region_id,2,'0')||lpad('000',3,'0')||lpad( pb.plan_cd,3,'0') group_id,
    substr(pb.plan_cd,1,1)||lpad(re.region_id, 2,'0')||'0000' group_id,
    ipp.legal_entity_id,
    bus.gl_bus_unit_a,
    bus.lob,
    loc.gl_loc_nbr_a,
    prod.gl_product_cd_a,
    prod.gl_fin_argmt_a,pb.plan_type_id ,pb.plan_cd
    from
    plan pb ,region_map re , state_plan_billing spb,
    insured_plan_profile ipp , insured_plan ip ,
    ods.oods_gl_bus_unit bus, ods.oods_gl_loc_nbr loc,
    ods.oods_gl_product_line prod,
    household h,
    employer_household eh
    where
    ipp.residence_st_plan_billing_id = spb.state_plan_billing_id
    and ipp.insured_plan_id = ip.insured_plan_id
    and ip.plan_cd = pb.plan_cd
    and pb.plan_cd=spb.plan_cd
    -- and pb.plan_type_id = loc.lob
    and spb.state_cd = re.state_cd
    and p_as_of_date between ip.insured_plan_effective_date and
    nvl(ip.insured_plan_termination_date,'31-dec-9999')
    and ip.insured_plan_effective_date <>
    nvl(ip.insured_plan_termination_date,'31-dec-9999')
    -- the condition below is necessary, but there is not enough data to test it. When
    -- uncommented it will only give a few records. Try testing it just by uncommenting it.
    --and p_as_of_date between re.region_map_start_date and re.region_map_stop_date
    and loc.lob=prod.lob and loc.lob=bus.lob(+) and loc.company_cd=bus.company_cd(+)
    and p_as_of_date between pb.plan_start_date and pb.plan_stop_date
    and p_as_of_date between ipp.ins_plan_profile_start_date and
    ipp.ins_plan_profile_stop_date
    -- and lpad(re.region_id,2,'0')||lpad('000',3,'0')||lpad(pb.plan_cd,3,'0') = loc.group_id
    and substr(pb.plan_cd,1,
    1)||lpad(re.region_id,2,'0')||nvl(employee_id,'0000') =loc.group_id
    and p_household_id_param = h.household_id
    and h.household_id = eh.employer_household_id
    and p_date_param between eh.emp_hhold_start_date and eh.emp_hhold_stop_date
    and insplan.individual_id=housmemb.individual_id(+)
    and eh.delete_ind = 'N'
    group by
    --lpad(re.region_id,2,'0')||lpad('000',3,'0')||lpad(pb.plan_cd,3,'0'),
    substr(pb.plan_cd ,1,1)||lpad(re.region_id,2, '0')||nvl(employee_id,'0000'),

    If many full table scans on big tables consider creating indexes. Or if many index reads consider forcing full table scans :)
    Ah, I just love these tuning questions. "My query is slow. Please make it go fast". Sure, put on these red shoes, click your heels three times and make a wish. Alas, tuning is rather more complicated than that, more of a science than a voodoo ritual. We would like to help. But we need more data, some concrete figures. Otherwise we're just guessing.
    So, first off, please read the Performance Tuning Guide. Apply some of its techniques. If you still don't understand why your query is running slow, come back to us with table descriptions, volumetrics, indexes, explain plans, stats, timings and tkprof output.
    Good luck, APC
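    As a concrete way to produce the tkprof output requested above, here is a minimal sketch (assuming Oracle 10g or later, where DBMS_MONITOR is available) of tracing the session that runs the cursor:
    exec dbms_monitor.session_trace_enable(waits => true, binds => true);
    -- ... open and fully fetch cur_memb_count here ...
    exec dbms_monitor.session_trace_disable;
    -- Then, at the operating system level (not SQL), format the trace file:
    --   tkprof <tracefile>.trc report.txt sys=no sort=exeela
    -- and post the resulting report along with the execution plan.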

  • Oracle SQL select query takes a longer time than expected

    Hi,
    I am facing a problem with a SQL select statement: the select query takes a long time against the database.
    The query is as follows.
    select /*+rule */ f1.id,f1.fdn,p1.attr_name,p1.attr_value from fdnmappingtable f1,parametertable p1 where p1.id = f1.id and ((f1.object_type ='ne_sub_type.780' )) and ( (f1.id in(select id from fdnmappingtable where fdn like '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')))order by f1.id asc
    This query takes more than 4 seconds to return the results on a system where the DB has been running for more than 1 month.
    The same query takes only a few milliseconds (50-100 ms) on a system where the DB is freshly installed, and the data in the tables is the same on both systems.
    Kindly advise what is going wrong?
    Regards,
    Purushotham

    SQL> @/alcatel/omc1/data/query.sql
    2 ;
    9 rows selected.
    Execution Plan
    Plan hash value: 3745571015
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | SORT ORDER BY | |
    | 2 | NESTED LOOPS | |
    | 3 | NESTED LOOPS | |
    | 4 | TABLE ACCESS FULL | PARAMETERTABLE |
    |* 5 | TABLE ACCESS BY INDEX ROWID| FDNMAPPINGTABLE |
    |* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
    |* 7 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
    |* 8 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
    Predicate Information (identified by operation id):
    5 - filter("F1"."OBJECT_TYPE"='ne_sub_type.780')
    6 - access("P1"."ID"="F1"."ID")
    7 - filter("FDN" LIKE '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#
    8 - access("F1"."ID"="ID")
    Note
    - rule based optimizer used (consider using cbo)
    Statistics
    0 recursive calls
    0 db block gets
    0 consistent gets
    0 physical reads
    0 redo size
    0 bytes sent via SQL*Net to client
    0 bytes received via SQL*Net from client
    0 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    9 rows processed
    SQL>
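    The plan note above shows that the rule-based optimizer is used because of the /*+ rule */ hint. A minimal sketch of one direction that could be tested: gather optimizer statistics and run the same statement without the hint, so the cost-based optimizer can choose the access path. The owner name APP_OWNER is a placeholder for the real schema that owns these tables.
    begin
      dbms_stats.gather_table_stats(ownname => 'APP_OWNER', tabname => 'FDNMAPPINGTABLE', cascade => true);
      dbms_stats.gather_table_stats(ownname => 'APP_OWNER', tabname => 'PARAMETERTABLE', cascade => true);
    end;
    /
    select f1.id, f1.fdn, p1.attr_name, p1.attr_value
      from fdnmappingtable f1, parametertable p1
     where p1.id = f1.id
       and f1.object_type = 'ne_sub_type.780'
       and f1.id in (select id from fdnmappingtable
                      where fdn like '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')
     order by f1.id asc;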

  • Creation of Universe takes long time on BW Query

    Hello Gurus,
    I'm trying to create my universe on a BW Query and it takes a long time to open. When I parse my objects, it also takes at least 20 minutes to parse one object. The same thing happens from Web Intelligence. When I export this universe, create a Web Intelligence query and run the report, it takes at least 15 minutes to show the data even though there is very little data. Any ideas on how to improve this performance?
    Regards,
    Vijay

    Hi Ingo,
    Thank you for your comments. Is it the SAP Integration Kit FP1.6 you are talking about? If yes, then I was able to install it on the server, but I don't have the SAP Integration Kit installed on the client system. Why would I need to install the SAP Integration Kit on the client when I'm using my server CMS name to connect to the server? Once it has the necessary components, isn't that good enough?
    I tried creating a report and the performance did improve just by installing it on the server. But if I just pull any one object into WebI, I receive the error below when answering one of the prompts:
    A database error occured. The database error text is: . (WIS 10901)
    When I click on help, the cause and action are as below:
    Cause
    The database that provides the data to this document has generated an error. Details about the error are provided in the section of the message indicated by the field code: .
    Action
    Contact your BusinessObjects administrator with the error message information or consult the documentation provided by the supplier of the database.
    Regards,
    Vijay

  • Query takes long time - Please help!

    I have a query like the one below (not the actual query):
    update (select eds.title eds_title,edv.title edv_title from mia_data_staging eds, mia_doc_Versions edv where eds.id = edv.id and eds.title != edv.title) set edv_title = eds_title;
    In the above query I have more than 70 columns to select, compare and update (I have shown only one above). The explain plan for the query is below and does not show any significant time, but the query never returns when executed, even after a very long time. Any ideas?
    Plan hash value: 2242214163
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | UPDATE STATEMENT | | 1627K| 1405M| | 18E (1)| |
    | 1 | UPDATE | MIA_DOC_VERSIONS | | | | | |
    |* 2 | HASH JOIN | | 1627K| 1405M| 720M| 18E (1)| |
    | 3 | TABLE ACCESS BY INDEX ROWID | MIA_DOC_VERSIONS | 1627K| 701M| | 18E (1)| |
    | 4 | BITMAP CONVERSION TO ROWIDS| | | | | | |
    | 5 | BITMAP INDEX FULL SCAN | IDX_30 | | | | | |
    PLAN_TABLE_OUTPUT
    | 6 | TABLE ACCESS FULL | MIA_DATA_STAGING | 1628K| 705M| | 7184 (3)| 00:02:29 |
    Predicate Information (identified by operation id):
    ---------------------------------------------------

    user652494 wrote:
    I've a query like below (not actual query)
    |   3 |    TABLE ACCESS BY INDEX ROWID | MIA_DOC_VERSIONS |  1627K|   701M|       |    18E  (1)|          |
    |   4 |     BITMAP CONVERSION TO ROWIDS|                  |       |       |       |            |          |
    |   5 |      BITMAP INDEX FULL SCAN    | IDX_30           |       |       |       |            |          |
    This part of your execution plan looks very suspicious: it performs a bitmap index full scan and then a single-row access by rowid, apparently for all rows of the table, which seems to be a very inefficient operation. It also shows an unreasonable cost for that operation. The question is why it is not using a full table scan to access the MIA_DOC_VERSIONS table.
    You might want to try simply the FULL hint to request a full table scan on the MIA_DOC_VERSIONS table in order to find out how the execution plan then is going to look like:
    update (select /*+ FULL(EDV) */ eds.title eds_title,edv.title edv_title from mia_data_staging eds, mia_doc_Versions edv where eds.id = edv.id and eds.title != edv.title) set edv_title = eds_title;
    or
    update /*+ FULL(a.EDV) */ (select eds.title eds_title,edv.title edv_title from mia_data_staging eds, mia_doc_Versions edv where eds.id = edv.id and eds.title != edv.title) a set edv_title = eds_title;
    Looking at the execution plan of the hinted statement one might get a clue why the optimizer favors an index access path instead.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
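    To see how the hinted update would be executed before actually running it, a minimal sketch (reusing the hinted statement suggested above; EXPLAIN PLAN and DBMS_XPLAN.DISPLAY are standard Oracle features):
    explain plan for
    update /*+ FULL(a.EDV) */ (select eds.title eds_title, edv.title edv_title
                                 from mia_data_staging eds, mia_doc_versions edv
                                where eds.id = edv.id
                                  and eds.title != edv.title) a
       set edv_title = eds_title;
    select * from table(dbms_xplan.display);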

  • Select query takes long time....

    Hi Experts,
    I am using a select query in which the inspection lot is in one table and the order number is in another table. This select query takes a very long time. What is the problem with this query? Please guide us.
    select b~PRUEFLOS b~MBLNR b~CPUDT a~AUFNR a~matnr a~LGORT a~bwart
    a~menge a~ummat a~sgtxt a~xauto
    into corresponding fields of table itab
    *into table itab
    from mseg as a inner join qamb as b
    on a~mblnr = b~mblnr
    and a~zeile = b~zeile
    where b~PRUEFLOS in insp
    and  b~cpudt in date1
    and b~typ = '3'
    and a~bwart = '321'
    and a~aufnr in aufnr1.
    Yusuf

    Hi,
    Instead of using 'into corresponding fields of table itab', use 'into table itab', because with 'corresponding fields' the system has to search for all the matching fields before it can place your data. Instead, declare an appropriate internal table and use 'into table itab'.
    And one more thing: don't use joins, because joins will decrease your performance. Instead, use 'for all entries' and mention all the key fields in the where condition.
    OK?
    Reward points for helpful answers.

  • Query takes a long time in fetching when used within a procedure

    The database is: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit
    The query takes just a second from Toad, but when used inside a procedure as a cursor it takes 3 to 5 minutes.
    Following is the tkprof information when running it from the procedure.
    SELECT CHCLP.CLM_PRVDR_TYPE_LKPCD, CHCLP.PRVDR_LCTN_IID, TO_CHAR
    (CHCLP.MODIFIED_DATE, 'MM-dd-yyyy hh24:mi:ss') MODIFIED_DATE,
    CHCLP.PRVDR_LCTN_IDENTIFIER, CHCLP.CLM_HDR_CLM_LN_X_PVDR_LCTN_SID
    FROM
    CLM_HDR_CLM_LN_X_PRVDR_LCTN CHCLP WHERE CHCLP.CLAIM_HEADER_SID = :B1 AND
    CHCLP.CLAIM_LINE_SID IS NULL AND CHCLP.IDNTFR_TYPE_CID = 7
    call     count       cpu    elapsed       disk      query    current       rows
    Parse        0      0.00       0.00          0          0          0          0
    Execute      1      0.00       0.00          0          0          0          0
    Fetch        1    110.79     247.79     568931     576111          0          3
    total        2    110.79     247.79     568931     576111          0          3
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93 (CMSAPP) (recursive depth: 1)
    Rows     Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 PARTITION RANGE (SINGLE) PARTITION:KEYKEY
    0 TABLE ACCESS MODE: ANALYZED (BY LOCAL INDEX ROWID) OF
    'CLM_HDR_CLM_LN_X_PRVDR_LCTN' (TABLE) PARTITION:KEYKEY
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF
    'XAK1CLM_HDR_CLM_LN_X_PRVDR_LCT' (INDEX (UNIQUE))
    PARTITION:KEYKEY
    Execution plan when running just the query from TOAD is: (it comes out in a second)
    Plan
    SELECT STATEMENT ALL_ROWSCost: 6 Bytes: 100 Cardinality: 2                
         3 PARTITION RANGE SINGLE Cost: 6 Bytes: 100 Cardinality: 2 Partition #: 1 Partitions accessed #13          
              2 TABLE ACCESS BY LOCAL INDEX ROWID TABLE CMSAPP.CLM_HDR_CLM_LN_X_PRVDR_LCTN Cost: 6 Bytes: 100 Cardinality: 2 Partition #: 2 Partitions accessed #13     
    Why would fetching take such a long time? Please let me know if you need any other information.
    Thank You.
    Edited by: spur230 on Apr 1, 2009 10:23 AM
    Edited by: spur230 on Apr 1, 2009 10:26 AM
    Edited by: spur230 on Apr 1, 2009 10:28 AM
    Edited by: spur230 on Apr 1, 2009 10:30 AM

    Query just takes a second from toad
    It's possible that the query starts returning rows in a second, but that's not the time required for the entire query.
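    To compare like-for-like, a minimal sketch (SQL*Plus) of timing the complete fetch rather than just the first rows; the bind value 12345 is a placeholder for the same CLAIM_HEADER_SID the procedure uses:
    set timing on
    set autotrace traceonly statistics   -- fetches every row but suppresses the output
    variable b1 number
    exec :b1 := 12345   -- placeholder value
    SELECT CHCLP.CLM_PRVDR_TYPE_LKPCD, CHCLP.PRVDR_LCTN_IID,
           TO_CHAR(CHCLP.MODIFIED_DATE, 'MM-dd-yyyy hh24:mi:ss') MODIFIED_DATE,
           CHCLP.PRVDR_LCTN_IDENTIFIER, CHCLP.CLM_HDR_CLM_LN_X_PVDR_LCTN_SID
      FROM CLM_HDR_CLM_LN_X_PRVDR_LCTN CHCLP
     WHERE CHCLP.CLAIM_HEADER_SID = :B1
       AND CHCLP.CLAIM_LINE_SID IS NULL
       AND CHCLP.IDNTFR_TYPE_CID = 7;
    If the full fetch in SQL*Plus also takes minutes, the difference is not Toad versus the procedure but simply how much of the result set each client actually fetches.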

  • Query Takes Longer time

    SELECT CAL_EMPCALENDAR.START_DATE as main,
    bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' ||
    CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
    TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
    TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
    bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' ||
    CAL_EMPCALENDAR.EMPLOYEE_ID as name,
    CAL_EMPCALENDAR.START_DATE as sdate,
    CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
    CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
    TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
    TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
    TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
    CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
    CAL_EMPCALENDAR.LATE_IN,
    CAL_EMPCALENDAR.EARLY_OUT,
    CAL_EMPCALENDAR.UNDER_TIME,
    CAL_EMPCALENDAR.OVERTIME,
    TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
    CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
    HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
    BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
    (SELECT shift_id
    FROM CAL_GRPWORKDAY
    WHERE CAL_GRPWORKDAY.calgrp_id =
    (SELECT calgrp_id
    FROM CAL_CALASSIGNMENT
    WHERE employee_id = CAL_EMPCALENDAR.employee_id
    AND CAL_CALASSIGNMENT.START_DATE <=
    CAL_EMPCALENDAR.START_DATE
    AND (CAL_CALASSIGNMENT.END_DATE is null or
    CAL_CALASSIGNMENT.END_DATE >=
    CAL_EMPCALENDAR.START_DATE))
    AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
    (SELECT max(entry_dt)
    FROM , LV_TXN txn, CAL_EMPDAILYEVENT cale
    WHERE status = 'Approved'
    AND LV_APPSTATUSHIST.application_id = txn.application_id
    AND cale.reference_id = txn.txn_id
    AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id
    ) AS entry_dt,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 1
    and BIZUNIT_ID like 'SG')) F1,
    --TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,                            
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 2
    and bizunit_id like 'SG')) F2,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 3
    and bizunit_id like 'SG')) F3,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 4
    and bizunit_id like 'SG')) F4,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 5
    and bizunit_id like 'SG')) F5
    From CAL_EMPCALENDAR, HRM_CURR_CAREER_V, CAL_SHIFT, HRM_EMPLOYEE
    Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
    AND (CAL_EMPCALENDAR.WF_STATUS = 'Approved' Or
    CAL_EMPCALENDAR.WF_STATUS = 'No Action')
    AND CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
    --and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
    AND CAL_EMPCALENDAR.START_DATE BETWEEN
    GREATEST(HRM_EMPLOYEE.COMMENCE_DATE,
    TO_DATE('1-4-2006', 'DD-MM-YYYY')) AND
    LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'),
    NVL(HRM_EMPLOYEE.CESSATION_DATE,
    TO_DATE('30-4-2006', 'DD-MM-YYYY')))
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
    And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
    -- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
    --AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
    --$P!{ExceptionSQL}
    --$P!{iHRFilterClause}
    --order by $P!{OrderBy}
    order by main
    Hi all, this query takes a very long time to run.
    In the explain plan, the table in bold is accessed with a full table scan; all the rest use index scans.
    The table has indexes on the columns referred to.
    Oracle version 9.2.0.6
    Message was edited by:
    Maran.E

    Maran,
    With tags and indentation it should be easier to analyze, at least for you:
    SELECT CAL_EMPCALENDAR.START_DATE as main,
           bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' || CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
           TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
           TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
           bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' || CAL_EMPCALENDAR.EMPLOYEE_ID as name,
           CAL_EMPCALENDAR.START_DATE as sdate,
           CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
           CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
           TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
           TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
           TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
           CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
           CAL_EMPCALENDAR.LATE_IN,
           CAL_EMPCALENDAR.EARLY_OUT,
           CAL_EMPCALENDAR.UNDER_TIME,
           CAL_EMPCALENDAR.OVERTIME,
           TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
           CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
           HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
           BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
           (SELECT shift_id
            FROM   CAL_GRPWORKDAY
            WHERE  CAL_GRPWORKDAY.calgrp_id = (SELECT calgrp_id
                                               FROM   CAL_CALASSIGNMENT
                                               WHERE employee_id = CAL_EMPCALENDAR.employee_id
                                               AND CAL_CALASSIGNMENT.START_DATE <= CAL_EMPCALENDAR.START_DATE
                                               AND (   CAL_CALASSIGNMENT.END_DATE is null
                                                    or CAL_CALASSIGNMENT.END_DATE >= CAL_EMPCALENDAR.START_DATE))
            AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
           (SELECT max(entry_dt)
            FROM   LV_TXN txn, CAL_EMPDAILYEVENT cale
            WHERE status = 'Approved'
            AND LV_APPSTATUSHIST.application_id = txn.application_id
            AND cale.reference_id = txn.txn_id
            AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id) AS entry_dt,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE  SEQUENCE = 1
                           and BIZUNIT_ID like 'SG')) F1,
           --TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE  SEQUENCE = 2
                           and    bizunit_id like 'SG')) F2,
           (SELECT ENTITLEMENT + ADJUST
            FROM   TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 3
                           and   bizunit_id like 'SG')) F3,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 4
                           and bizunit_id like 'SG')) F4,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 5
                           and bizunit_id like 'SG')) F5
    From CAL_EMPCALENDAR,
         HRM_CURR_CAREER_V,
         CAL_SHIFT,
         HRM_EMPLOYEE
    Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
    AND   (   CAL_EMPCALENDAR.WF_STATUS = 'Approved'
           Or CAL_EMPCALENDAR.WF_STATUS = 'No Action')
    AND   CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
    --and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
    AND   CAL_EMPCALENDAR.START_DATE BETWEEN GREATEST(HRM_EMPLOYEE.COMMENCE_DATE, TO_DATE('1-4-2006', 'DD-MM-YYYY'))
                                         AND LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'), NVL(HRM_EMPLOYEE.CESSATION_DATE, TO_DATE('30-4-2006', 'DD-MM-YYYY')))
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
    And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
    -- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
    --AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
    --$P!{ExceptionSQL}
    --$P!{iHRFilterClause}
    --order by $P!{OrderBy}
    order by main
    Nicolas.
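    One direction that could also be tested (a hedged sketch, not part of the reformatted statement above): the five scalar subqueries against TAM_ALLOWANCE differ only in the TAM_CLAIM_FORMAT sequence number, so they could potentially be collapsed into a single outer-joined conditional aggregate, avoiding five per-row lookups. Table and column names are taken from the posted query; the inline-view alias ALW is illustrative.
    select cal.empcalendar_id,
           alw.f1, alw.f2, alw.f3, alw.f4, alw.f5
      from cal_empcalendar cal,
           (select a.empcalendar_id,
                   max(case when cf.sequence = 1 then a.entitlement + a.adjust end) f1,
                   max(case when cf.sequence = 2 then a.entitlement + a.adjust end) f2,
                   max(case when cf.sequence = 3 then a.entitlement + a.adjust end) f3,
                   max(case when cf.sequence = 4 then a.entitlement + a.adjust end) f4,
                   max(case when cf.sequence = 5 then a.entitlement + a.adjust end) f5
              from tam_allowance a, tam_claim_format cf
             where a.item_id = cf.item_id
               and cf.bizunit_id like 'SG'
               and (a.wf_status in ('Pending', 'Approved', 'Verified', 'No Action')
                    or a.wf_status is null)
             group by a.empcalendar_id) alw
     where cal.empcalendar_id = alw.empcalendar_id (+);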

  • Query prediction takes a long time after upgrading the DB from 9i to 10g

    Hi all, thanks for all your help.
    We've got an issue in Discoverer. We are using Discoverer 10g (10.1.2.2) with APPS, and recently we upgraded the Oracle database from 9i to 10g.
    After the database upgrade, when we try to run reports in Discoverer Plus, the query prediction takes much longer than it used to (double/triple), and only then does the query itself run.
    Has anyone seen this kind of issue before? Could you share your ideas/thoughts so that I can ask the DBA or sysadmin to change any settings on the Discoverer server side?
    Thanks in advance
    skat

    Hi skat
    Did you also upgrade your Discoverer from 9i to 10g or did you always have 10g?
    If you weren't always on 10g, take a look inside the EUL5_QPP_STATS table by running SELECT COUNT(*) FROM EUL5_QPP_STATS on both the old and new systems
    I suspect you may well find that there are far more records in the old system than the new one. What this table stores is the statistics for the queries that have been run before. Using those statistics is how Discoverer can estimate how long queries will take to run. If you have few statistics then for some time Discoverer will not know how long previous queries will take. Also, the statistics table used by 9i is incompatible with the one used by 10g so you can't just copy them over, just in case you were thinking about it.
    Personally, unless you absolutely rely on it, I would turn the query predictor off. You do this by editing your PREF.TXT (located on the middle tier server at $ORACLE_HOME\Discoverer\util) and changing the value of QPPEnable to 0. After you have done this you need to run the Applypreferences script located in the same folder and then stop and start your Discoverer service. From that point on queries will no longer try to predict how long they will take and they will just start running.
    There is something else to check. Please run a query and look at the SQL. Do you by chance see a database hint called NOREWRITE? If you do then this will also cause poor performance. Should you see this let me know and I will let you know how to override it.
    If you have always been on 10g and you have only upgraded your database it could be that you have not generated your database statistics for the tables that Discoverer is using. You will need to speak with your DBA to see about having the statistics generated. Without statistics, the query predictor will be very, very slow.
    Best wishes
    Michael

  • Query takes a long time

    I am running a query that takes more than 20 minutes. But if I change the values in the
    where clause, it runs fast. I am not changing the query, only the numeric value used in the
    where condition. I think it is a matter of a LOCK. How do I resolve it? How do I make the query return a result
    even if a row is being locked? Thanks.

    QUERY 1:
    PROD> select count(*) from patient_ad a,patient_master_data p , patient_contracts c
    2 where a.patient_id=p.patient_id and c.patient_id = a.patient_id and
    3 to_date(a.admit_date,'dd/mm/yyyy') >= '29/12/2008' and
    4 to_date(a.admit_date,'dd/mm/yyyy') <= '17/12/2009' and
    5 p.nationality_code <> 16 and c.CONTRACT_NO= 2207;
    Execution Plan
    Plan hash value: 801996662
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | SORT AGGREGATE | |
    | 2 | NESTED LOOPS | |
    | 3 | NESTED LOOPS | |
    |* 4 | INDEX RANGE SCAN | PATIENT_CONTRACTS_NDX2 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PATIENT_AD |
    |* 6 | INDEX RANGE SCAN | PATIENT_AD_NDX1 |
    |* 7 | TABLE ACCESS BY INDEX ROWID | PATIENT_MASTER_DATA |
    |* 8 | INDEX UNIQUE SCAN | PK_PATIENT_MASTER_DATA |
    Predicate Information (identified by operation id):
    4 - access("C"."CONTRACT_NO"=2207)
    5 - filter(TO_DATE(INTERNAL_FUNCTION("A"."ADMIT_DATE"),'dd/mm/yyyy')<
    ='17/12/2009' AND TO_DATE(INTERNAL_FUNCTION("A"."ADMIT_DATE"),'dd/mm/yyy
    y')>='29/12/2008')
    6 - access("C"."PATIENT_ID"="A"."PATIENT_ID")
    7 - filter("P"."NATIONALITY_CODE"<>16)
    8 - access("A"."PATIENT_ID"="P"."PATIENT_ID")
    Note
    - rule based optimizer used (consider using cbo)
    THIS QUERY TAKES A LONG TIME; EVEN AFTER 24 HOURS IT HAS NOT YIELDED ANY RESULT.
    QUERY2:
    PROD> select count(*) from patient_ad a,patient_master_data p , patient_contracts c
    2 where a.patient_id=p.patient_id and c.patient_id = a.patient_id and
    3 to_date(a.admit_date,'dd/mm/yyyy') >= '29/12/2008' and
    4 to_date(a.admit_date,'dd/mm/yyyy') <= '17/12/2009' and
    5 p.nationality_code <> 16 and c.CONTRACT_NO= 2207;
    Execution Plan
    Plan hash value: 801996662
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | SORT AGGREGATE | |
    | 2 | NESTED LOOPS | |
    | 3 | NESTED LOOPS | |
    |* 4 | INDEX RANGE SCAN | PATIENT_CONTRACTS_NDX2 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PATIENT_AD |
    |* 6 | INDEX RANGE SCAN | PATIENT_AD_NDX1 |
    |* 7 | TABLE ACCESS BY INDEX ROWID | PATIENT_MASTER_DATA |
    |* 8 | INDEX UNIQUE SCAN | PK_PATIENT_MASTER_DATA |
    Predicate Information (identified by operation id):
    4 - access("C"."CONTRACT_NO"=2207)
    5 - filter(TO_DATE(INTERNAL_FUNCTION("A"."ADMIT_DATE"),'dd/mm/yyyy')<
    ='17/12/2009' AND TO_DATE(INTERNAL_FUNCTION("A"."ADMIT_DATE"),'dd/mm/yyy
    y')>='29/12/2008')
    6 - access("C"."PATIENT_ID"="A"."PATIENT_ID")
    7 - filter("P"."NATIONALITY_CODE"<>16)
    8 - access("A"."PATIENT_ID"="P"."PATIENT_ID")
    Note
    - rule based optimizer used (consider using cbo)
    THIS QUERY RETURNS THE RESULT WITHIN 1 MINUTE.
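    One thing worth testing (a hedged sketch, assuming ADMIT_DATE is stored as a DATE column): compare the column directly against date literals instead of wrapping it in TO_DATE and comparing against strings, so an index on ADMIT_DATE stays usable and no row-by-row conversion is needed. Dropping the rule-based behaviour noted in the plan is a separate consideration.
    select count(*)
      from patient_ad a, patient_master_data p, patient_contracts c
     where a.patient_id = p.patient_id
       and c.patient_id = a.patient_id
       and a.admit_date >= to_date('29/12/2008', 'dd/mm/yyyy')
       and a.admit_date <  to_date('18/12/2009', 'dd/mm/yyyy')   -- one day past 17/12/2009 keeps the upper bound inclusive
       and p.nationality_code <> 16
       and c.contract_no = 2207;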
