Batches taking time

Hi,
We are using a production database on 9.2.0.6.0.
A lot of batches (functions, procedures, etc.) run in this database daily.
For the last few days these batches have been taking more time than before. There have been no changes to the database structure. I also checked for blocking sessions during the batch runs, but there are none.
Where can I check to find the cause?
Any ideas?

Sorry if my original post wasn't clearer.
Tracing a session with, for example, execute dbms_system.set_sql_trace_in_session(sid, serial#, true) will trace a currently logged-in session from the current point onward.
In the past I have set up a database logon trigger and traced all the sessions. The problems with doing this: the user has to have the ALTER SESSION privilege to enable tracing; if sessions connect through a connection pool, the pool needs to be restarted/recycled (not always possible) to capture the logon; and the tracing may generate a lot of trace files (filesystem issues). I've also come across some bugs with logon tracing (building trace file names with sys_context('USERENV','SESSION_USER') has caused problems on UNIX with OPS$ users; it does not like $ in the filename and will error, possibly locking users out).
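For reference, a rough sketch of that kind of logon trigger (BATCH_USER is a placeholder schema; 10046 level 12 captures waits and binds):
CREATE OR REPLACE TRIGGER trace_batch_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'BATCH_USER' THEN  -- placeholder: only trace the batch schema
    EXECUTE IMMEDIATE
      'ALTER SESSION SET events ''10046 trace name context forever, level 12''';
  END IF;
END;
/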
The other approach you can take is to enable tracing in the PL/SQL code at the beginning of the batch job.
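Something like this at the top of the batch, for example (a sketch only; the tracefile identifier is illustrative):
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = ''nightly_batch''';
  EXECUTE IMMEDIATE
    'ALTER SESSION SET events ''10046 trace name context forever, level 12''';
  -- ... batch work ...
  EXECUTE IMMEDIATE 'ALTER SESSION SET events ''10046 trace name context off''';
END;
/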
Another approach altogether is to forget about tracing and take a Statspack snapshot before and after the job, then compare the two.
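Roughly (assuming Statspack is installed under the default PERFSTAT schema):
-- snapshot before the batch
EXEC statspack.snap;
-- ... run the batch ...
-- snapshot after the batch
EXEC statspack.snap;
-- then run ?/rdbms/admin/spreport.sql and pick the two snapshot ids to compare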

Similar Messages

  • Publishing and overwriting a Universe: new measures update quickly in Business Explorer / Information Spaces but take time to update in the dashboard

    Hi gurus:
    I have a continuous problem when publishing and overwriting a Universe from IDT: new measures update quickly in Business Explorer / Information Spaces but take time to update in the dashboard. It takes approximately half an hour to update when accessing the universe from dashboards.
    Regards:
    Jawad Khalid


  • One website takes too much time to load the page while other sites do not; do I need to enable or change any settings?

    One website is taking time to load the page; on another PC it loads without delay (with Internet Explorer). On my PC other websites open quickly, but this website takes too much time with Firefox.

    Zepo wrote:
    My iMac has been overwhelmed almost since I bought it new. After some digging the Genius Bar suggested it is my Aperture library being on the same
    internal terabyte drive as my operating system.
    Having a single internal hard drive overfilled (drives slow as they fill) is very likely contributing to your problems, but IMO "my Aperture library being on the same internal terabyte drive as my operating system" is very unlikely to be contributing. In fact the Library should stay on an underfilled internal drive (roughly, for speed, I would call ~half full "underfilled"), not on the Drobo.
    Instead build a Referenced-Masters workflow with the Library and OS on an internal drive, Masters on the Drobo, OS 10.6.8 (there have been issues reported with OS 10.7 Lion). Keep Vault backup of the Library on the Drobo, and of course back up all Drobo data off site.
    No matter what you do with i/o your C2D Mac is not a strong box for Aperture performance. If you want to really rock Aperture move to one of the better 2011 Sandy Bridge Macs, install 8 GB or more of RAM and build a Referenced-Masters workflow with the Library and OS on an internal solid state drive (SSD).
    Personally I would prefer investing in a Thunderbolt RAID rather than in a Drobo but each individual makes his/her own network speed/cost decisions. The Drobo should work OK for referenced Masters even though i/o is limited by the Firewire connection.
    Do not forget the need for off-site backup. And I suggest that in the process of moving to your new setup it is most important to get the data safely and redundantly copied, and generally best to disregard how long that may take.
    HTH
    -Allen Wicks

  • Procedure taking time

    Hello Experts,
    I am running a procedure which is taking too much time to execute. How can I find out which statement or query in the procedure is taking the time? Is there any package available to check this?
    Thanks & Regards,
    Manish

    Execute the query below to find the session ID (SID) of the session in which your procedure is executing.
    SELECT s.sid
    ,s.serial#
    ,ddl.name
    ,ddl.type
    ,ddl.owner
    ,s.status
    ,s.osuser
    ,s.machine
    FROM dba_ddl_locks ddl
    ,v$session s
    WHERE ddl.session_id = s.sid;
    Once you get the SID, execute the query below to see which SQL it is executing:
    SELECT /*+ leading(s) */
    s.sid
    ,s.serial#
    ,st.piece
    ,st.sql_text
    ,s.status
    ,s.osuser
    ,s.machine
    FROM v$session s
    ,v$sqltext st
    WHERE s.sql_hash_value = st.hash_value(+)
    AND s.sid = &sid_from_above_query
    ORDER BY st.piece;
    Regards
    Arun
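    As a package-based alternative (not part of the reply above), DBMS_PROFILER reports the time spent per PL/SQL line. A minimal sketch, assuming the profiler tables already exist (created with ?/rdbms/admin/proftab.sql) and my_procedure stands in for the procedure being investigated:
    DECLARE
      l_ret BINARY_INTEGER;
    BEGIN
      l_ret := DBMS_PROFILER.start_profiler(run_comment => 'manish_run');
      my_procedure;   -- hypothetical: the procedure under test
      l_ret := DBMS_PROFILER.stop_profiler;
    END;
    /
    Then join plsql_profiler_units and plsql_profiler_data for that run to see where the time goes.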

  • JSP taking time to load

    Hi All
    I am having a strange problem. The JSP file is taking too much time to load, and IE does not respond properly even after the page is loaded. I agree that the JSP has a significant amount of Java code in it. What could be the problem? The response from the server comes back quickly; the page only takes time after receiving the response from the server.
    Regards
    Prashant

    What could be the problem?
    Slow hardware, slow code, perhaps some other app is eating your resources; it could be anything, really. I think it's time for a profiler.
    The JSP file is taking too much time to load and IE does not respond properly even after the page is loaded
    IE is the best browser for improper behavior, but how exactly is it misbehaving in your case? It might be a bad piece of JavaScript that is causing it headaches.

  • Using a materialized view in a query; the query is taking time

    Hi, I have a query:
    SELECT rownum as id, u.last_name, u.first_name, u.phone phone, u.empid, u.supervisor_id
    FROM emp_view u -- using view
    CONNECT BY PRIOR u.empid = u.supervisor_id
    START WITH u.sbcuid = 'ph2755';
    Here emp_view is a view.
    The above query takes 3 seconds to execute.
    Then I created a materialized view emp_mv whose query is the same as the emp_view query.
    After this I executed the following SQL:
    SELECT rownum as id, u.last_name, u.first_name, u.phone phone, u.empid, u.supervisor_id
    FROM emp_mv u -- using materialized view
    CONNECT BY PRIOR u.empid = u.supervisor_id
    START WITH u.sbcuid = 'ph2755';
    This query takes 15 seconds to execute.
    Can anyone please tell me why the MV query is taking so long?

    Hi,
    In your first case you query a view, which means you query the underlying tables. Those probably have indexes, and their stats are up to date.
    In your second case you query a materialized view, which means you query the underlying base table of that mview.
    That base table probably does not have the same indexes to support the query.
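    For example, something along these lines (a sketch only; the column names are taken from the posted query, so adjust them to the real MV definition):
    CREATE INDEX emp_mv_sbcuid_idx ON emp_mv (sbcuid);          -- START WITH column
    CREATE INDEX emp_mv_sup_idx    ON emp_mv (supervisor_id);   -- CONNECT BY column
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP_MV');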
    But of course, I'm just guessing based on the little information provided.
    If you want to take this further, please search for "When your query takes too long" and "How to post a tuning request".
    These two threads holds valuable information, not only on how to ask this kind of question, but also how to start solving it on your own.
    Regards
    Peter

  • GL posting is taking time

    Hi All,
    Every day we run a GL process. GL posting takes more than 1 hour, and four GL postings run during the GL process, which causes the
    performance delay.
    I can see in the log file that the stage below is taking the time.
    glpibr.concurrency() 02-MAR-2013 08:13:03
    glpibr.concurrency() 02-MAR-2013 08:13:03
    glpidb() 02-MAR-2013 08:46:41
    glpidb() 02-MAR-2013 08:49:59
    glpibr() 02-MAR-2013 08:49:59
    This process itself takes more than 30 hours. It varies every day; sometimes it takes only 1 hour.
    insert into gl_balances
    ( set_of_books_id, code_combination_id, currency_code, period_name,
    actual_flag, budget_version_id, encumbrance_type_id, last_update_date,
    last_updated_by, period_type, period_year, period_num, period_net_dr, period_net_cr,
    period_to_date_adb, quarter_to_date_dr, quarter_to_date_cr, quarter_to_date_adb, year_to_date_adb,
    project_to_date_dr, project_to_date_cr, project_to_date_adb, begin_balance_dr, begin_balance_cr,
    period_net_dr_beq, period_net_cr_beq, begin_balance_dr_beq, begin_balance_cr_beq, template_id, translated_flag )
    select pi.set_of_books_id, pi.code_combination_id, pi.currency_code, pi.period_name, pi.actual_flag, pi.budget_version_id, pi.encumbrance_type_id, sysdate, :fnd_user_id, pi.period_type, pi.period_year, pi.period_num, 0, 0, NULL, 0, 0, NULL, NULL,
    0, 0, NULL, 0, 0, NULL, NULL, NULL, NULL, pi.template_id, pi.translated_flag from gl_posting_interim_50130 pi where not exists ( select 'X' from gl_balances b where b.set_of_books_id = pi.set_of_books_id and b.code_combination_id = pi.code_combination_id and b.currency_code = pi.currency_code and
    b.period_name = pi.period_name and b.actual_flag = pi.actual_flag and
    nvl(b.encumbrance_type_id, -1) = nvl(pi.encumbrance_type_id, -1) and
    nvl(b.budget_version_id, -1) = nvl(pi.budget_version_id, -1) and decode(b.translated_flag, '', '-1', 'Y', '0', 'N', '0', 'R', '1', b.translated_flag) = decode(pi.translated_flag, '', '-1', 'Y', '0', 'N', '0', 'R', '1', pi.translated_flag) )
    The query above takes more than 30 minutes, as I can see in the AWR report. As per MOS note 1096873.1 this is due to the gl_posting_interim tables locking the other gl_interim_posting tables. I don't see any locks, but I can see lots of gl_posting_interim tables present in the database. As far as I know these tables should be dropped after posting completes.
    Env details -
    Apps version - 11.5.10.2
    DB version - 11.2.0.1
    OS - IBM AIX 6.1
    Please suggest.
    Thanks

    Please see these docs.
    R11i GLTTRN Translation Performance Issue In INSERT INTO GL_BALANCES [ID 761898.1]
    Information Center: Optimizing Performance for Oracle General Ledger [ID 1489537.2]
    Performance Issue With Translation Program After Upgrading To 10G Database [ID 742025.1]
    Deleting Summary Accounts Has Poor Performance [ID 1088585.1]
    GL Posting Performance Issue at glpip2() [ID 280641.1]
    GLPPOS Performance Issue After 9i To 10g Upgrade in gluddl.lpc [ID 1262020.1]
    Thanks,
    Hussein

  • How to know which SQL query is taking time for a concurrent program

    Hi sir,
    I am running a concurrent program that is taking time to execute. I want to know which SQL query is causing the performance problem.
    Thanks,
    Sreekanth

    Hi,
    My Learning: Diagnosing Oracle Applications Concurrent Programmes - 11i/R12
    How to run a Trace for a Concurrent Program? (Doc ID 415640.1)
    FAQ: Common Tracing Techniques in Oracle E-Business Applications 11i and R12 (Doc ID 296559.1)
    How To Get Level 12 Trace And FND Debug File For Concurrent Programs (Doc ID 726039.1)
    How To Trace a Concurrent Request And Generate TKPROF File (Doc ID 453527.1)
    Regards
    Yoonas

  • Select query taking time

    The following query is taking time. Is there a better way to write it?
    SELECT PROGRAM_NAME_ID ,PROGRAM_NAME,sum(balance)"Unpaid Balance"
        FROM (
    SELECT DISTINCT
    PROGRAM_NAME_ID ,PROGRAM_NAME,
    t.billing_key billing_key,
    (TUFF_GENERIC_PKG.GET_TOTAL(t.billing_key,t.program_key)+
    nvl(PENALTY_INTEREST(t.billing_key,t.program_key,b.company_id,b.report_period ),0))
    -PAYMENT_AMOUNT(B.COMPANY_ID,T.PROGRAM_KEY,B.REPORT_PERIOD) Balance,
    Report_period,company_id
    FROM  BILLING B,
    PROG_SURCH T ,
    mv_program_dict P
    WHERE
    B.BILLING_KEY=T.BILLING_KEY
    AND  p.program_key= t.program_key(+)
    and company_id=:p3_hide_comp
    and b.SUBMIT_STATUS='S'
    union
    SELECT DISTINCT
    PROGRAM_NAME_ID ,PROGRAM_NAME,
    t.billing_key billing_key,
    (TUFF_GENERIC_PKG.GET_TOTAL(t.billing_key,t.program_key)+
    nvl(PENALTY_INTEREST(t.billing_key,t.program_key,b.company_id,b.report_period ),0))
    -PAYMENT_AMOUNT(B.COMPANY_ID,T.PROGRAM_KEY,B.REPORT_PERIOD) Balance,
    Report_period,company_id
    FROM  MV_BILLING B,
    MV_PROG_SURCH T ,
    mv_program_dict P
    WHERE
    B.BILLING_KEY=T.BILLING_KEY
    AND  p.program_key= t.program_key(+)
    and company_id=:p3_hide_comp
    order by report_period,program_name_id )
    where balance>=0
    GROUP BY PROGRAM_NAME_ID,PROGRAM_NAME
    ORDER BY PROGRAM_NAME_ID

    Hi,
    This is totally right.
    >
    Being one such call. The price for calling pl/sql functions in SQL can be quite high. I'd highly recommend you find a way to incorporate the pl/sql code into the SQL query.
    >
    but try this query. I hope it helps you and returns the rows you want.
    SELECT   program_name_id, program_name,
             SUM (  tuff_generic_pkg.get_total (billing_key, program_key)
                  + NVL (penalty_interest (billing_key,
                                           program_key,
                                           company_id,
                                           report_period),
                         0)
                  - payment_amount (company_id, program_key, report_period)) balance
        FROM (SELECT program_name_id, program_name, t.billing_key, t.program_key,
                     b.company_id, b.report_period
                FROM billing b, prog_surch t, mv_program_dict p
               WHERE b.billing_key = t.billing_key
                 AND p.program_key = t.program_key(+)
                 AND company_id = :p3_hide_comp
                 AND b.submit_status = 'S'
              UNION
              SELECT program_name_id, program_name, t.billing_key, t.program_key,
                     b.company_id, b.report_period
                FROM mv_billing b, mv_prog_surch t, mv_program_dict p
               WHERE b.billing_key = t.billing_key
                 AND p.program_key = t.program_key(+)
                 AND company_id = :p3_hide_comp) sub
       WHERE   (  tuff_generic_pkg.get_total (billing_key, program_key)
                + NVL (penalty_interest (billing_key,
                                         program_key,
                                         company_id,
                                         report_period),
                       0)
                - payment_amount (company_id, program_key, report_period)) >= 0
    GROUP BY program_name_id, program_name
    Obviously I cannot test this.
    HTH -- johnxjean --

  • Insert statement taking time on Oracle 10g

    Hi,
    My procedure is taking time in the following statement after the database was upgraded from Oracle 9i to Oracle 10g.
    I am using Oracle version 10.2.0.4.0.
    cust_item is a materialized view that is refreshed in the procedure.
    The index on cust_item_tbl is dropped before inserting the data and recreated after the insert.
    There are almost 6 lakh (600,000) records in the MV to be inserted into the table.
    In 9i the insert statement below took 1 hour; in 10g it takes 2.5 hours.
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL QUERY';
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
    INSERT INTO /*+ APPEND PARALLEL */ cust_item_tbl  NOLOGGING
             (SELECT /*+ PARALLEL */
                     ctry_code, co_code, srce_loc_nbr, srce_loc_type_code,
                     cust_nbr, item_nbr, lu_eff_dt,
                     0, 0, 0, lu_end_dt,
                     bus_seg_code, 0, rt_nbr, 0, '', 0, '', SYSDATE, '', SYSDATE,
                     '', 0, ' ',
                                   case
                                 when cust_nbr in (select distinct cust_nbr from aml.log_t where CTRY_CODE = p_country_code and co_code = p_company_code)
                                 THEN
                                         case
                                            when trunc(sysdate) NOT BETWEEN trunc(lu_eff_dt) AND trunc(lu_end_dt)
                                            then NVL((select cases_per_pallet from cust_item c where c.ctry_code = a.ctry_code and c.co_code = a.co_code
                                                          and c.cust_nbr = a.cust_nbr and c.GTIN_CO_PREFX = a.GTIN_CO_PREFX and c.GTIN_ITEM_REF_NBR = a.GTIN_ITEM_REF_NBR
                                                          and c.GTIN_CK_DIGIT = a.GTIN_CK_DIGIT and trunc(sysdate) BETWEEN trunc(c.lu_eff_dt) AND trunc(c.lu_end_dt) and rownum = 1),
                                                          a.cases_per_pallet)
                                      else cases_per_pallet
                                  end
                          else cases_per_pallet
                     END cases_per_pallet,
                     cases_per_layer
                FROM cust_item a
               WHERE a.ctry_code = p_country_code ----varible passing by procedure
                 AND a.co_code = p_company_code   ----varible passing by procedure
                 AND a.ROWID =
                        (SELECT MAX (b.ROWID)
                           FROM cust_item b
                          WHERE b.ctry_code = a.ctry_code
                            AND b.co_code = a.co_code
                            AND b.ctry_code = p_country_code ----varible passing by procedure
                            AND b.co_code = p_company_code   ----varible passing by procedure
                            AND b.srce_loc_nbr = a.srce_loc_nbr
                            AND b.srce_loc_type_code = a.srce_loc_type_code
                            AND b.cust_nbr = a.cust_nbr
                            AND b.item_nbr = a.item_nbr
                             AND b.lu_eff_dt = a.lu_eff_dt));
    Explain plan from Oracle 10g:
    Plan
    INSERT STATEMENT  CHOOSECost: 133,310  Bytes: 248  Cardinality: 1                      
         5 FILTER                 
              4 HASH GROUP BY  Cost: 133,310  Bytes: 248  Cardinality: 1            
                   3 HASH JOIN  Cost: 132,424  Bytes: 1,273,090,640  Cardinality: 5,133,430       
                        1 INDEX FAST FULL SCAN INDEX MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV Cost: 10,026  Bytes: 554,410,440  Cardinality: 5,133,430 
                         2 MAT_VIEW ACCESS FULL MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV Cost: 24,570  Bytes: 718,680,200  Cardinality: 5,133,430
    Can you please look into this issue?
    Thanks.

    According to the execution plan you posted, parallelism is not taking place; no parallel operations are listed.
    Check the hint syntax. In particular, "PARALLEL" does not look right.
    Running queries in parallel can help performance, hurt performance, or do nothing for performance. In your case a parallel index scan on MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV using the PARALLEL_INDEX hint, together with the PARALLEL hint on the table for MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV, might help; something like (untested)
    select /*+ PARALLEL_INDEX(INDX_TEMP_CUST_AUTH_PERF_MV) PARALLEL(TEMP_CUST_AUTH_PERF_MV) */
    Is query rewrite causing the MVs to be read? If so, hinting the query will be tricky.
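    For what it's worth, a sketch of the usual hint placement for a direct-path parallel insert (the degree of 4 is purely illustrative): hints are only recognized immediately after the INSERT or SELECT keyword, so a hint written after INTO has no effect.
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(cust_item_tbl, 4) */ INTO cust_item_tbl
    SELECT /*+ PARALLEL(a, 4) */
           ctry_code, co_code, srce_loc_nbr     -- ... rest of the original select list
      FROM cust_item a
     WHERE a.ctry_code = p_country_code
       AND a.co_code   = p_company_code;        -- ... rest of the original predicates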

  • Transfer of XML through a firewall is taking time

    Hi,
    I have following scenario:
    Publisher BPEL --> || Firewall || Subscriber BPEL
    The XML being transferred through the firewall is around 9 MB.
    When these two BPEL processes are deployed on the local intranet, the XML transfer is very fast and the Subscriber BPEL completes within seconds. But when the two BPEL processes are deployed over the internet with a firewall in between, the XML transfer takes about 15 minutes.
    Is there a solution to reduce the XML transfer time, or some workaround to handle this issue?
    Thanks,
    Bhavnesh.

    A 9 MB file is quite large; I think this will come down to latency and bandwidth. Test from your environment how long it takes to download a 9 MB file, then test from the remote location. If that takes around 15 minutes, there is little you can do except get a better ISP.
    I doubt the issue will be with the firewall.
    cheers
    James

  • Application Server 10g R2 taking time to load

    Hi all,
    I am using Oracle Application Server 10g R2. When I boot my Windows Server 2003 R2 machine,
    it takes more than 5 minutes to load the application server. What is the problem?
    Why is it taking so long to load?
    Does anyone know?

    JVM (Java virtual machine). If you have the full stack (infrastructure + BI + Forms) you need at least 2 GB just to install it; if you plan to do real work on it, the machine can become very slow.
    In the opmn.xml file, in the section for the home container (or whichever container you are using), there is a line for start parameters; there you can use -Xmx1024M to set the maximum JVM heap size to 1 GB. You will have to restart the process for this to take effect.
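    Roughly, the relevant opmn.xml fragment looks something like this (values and container id are illustrative for a default OC4J home; check your own file):
    <process-type id="home" module-id="OC4J" status="enabled">
      <module-data>
        <category id="start-parameters">
          <data id="java-options"
                value="-server -XX:MaxPermSize=128M -Xms512M -Xmx1024M"/>
        </category>
      </module-data>
    </process-type>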
    For Forms there are also processes where you manage the memory size and the number of processes; you can see this in the Configuration section of the EM Console.
    Regards

  • I want to know what I should do if SPID 12 (Checkpoint) is taking time

    Hi Team,
    I want to know what I should do if SPID 12 (Checkpoint) is taking time.
    I can't kill it either.
    Thanks

    It was taking quite long, so I have pasted the comments here; there was a pending database restore on that database, because of which I had to restart the instance service.
    If it occurs next time, what should I do?
    Nothing; you cannot do anything, because it is a system process. I believe I said the same thing in my earlier comment. It is just there because the system is running a checkpoint. I cannot suggest up front what you should do, but I have seen this on many production databases and
    it is normal. Don't restart SQL Server services if a database is in the middle of a restore.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Create View taking time

    Hi,
    We are creating a view using the following query, which is taking time. The query is as follows:
    SELECT
    A.ICD_CODE AS ICD_CODE,
    A.ICD_DESC AS ICD_DESC,
    B.COMPL_ICD_CODE AS COMPL_ICD_CODE,
    B.COMPL_GRP_TXT AS COMPL_GRP_TXT,
    C.PROC_TYPE AS PROC_TYPE ,
    C.I_O_IND AS I_O_IND,
    C.DISC_MON AS QUARTER ,
    B.PAT_KEY AS PAT_KEY ,
    D.COMPL_TYPE_TXT AS COMPL_TYPE_TXT ,
    C.PROV_ID AS PROV_ID ,
    A.SPECIALTY AS SPECIALTY
    FROM
    EES_ICD_9_CODE A ,
    EES_CLINICAL_COMPL_DATA B,
    EES_CLINICAL_DATA C ,
    EES_CLINCL_COMPL_ICD D                                                                            
    WHERE A.ICD_CODE=     B.ICD_CODE
                    AND  A. ICD_CODE= C.ICD_CODE
                    AND B.ICD_CODE=D.ICD_9_CD
                    AND B.COMPL_GRP_TXT=D.COMPL_GRP_TXT
                    AND B.COMPL_ICD_CODE<>B.ICD_CODE
                    AND C.PROC_TYPE <>'L'
                    AND B.COMPL_GRP_TXT<>'Reoperations'
                    AND D.COMPL_TYPE_TXT<>'Intra-operative Misadventure'
                    UNION
                    SELECT
    A.ICD_CODE AS ICD_CODE,
    A.ICD_DESC AS ICD_DESC,
    B.COMPL_ICD_CODE AS COMPL_ICD_CODE,
    B.COMPL_GRP_TXT AS COMPL_GRP_TXT,
    C.PROC_TYPE AS PROC_TYPE ,
    C.I_O_IND AS I_O_IND,
    C.DISC_MON AS QUARTER ,
    B.PAT_KEY AS PAT_KEY ,
    D.COMPL_TYPE_TXT AS COMPL_TYPE_TXT ,
    C.PROV_ID AS PROV_ID ,
    A.SPECIALTY AS SPECIALTY
    FROM
    EES_ICD_9_CODE A ,
    EES_CLINICAL_COMPL_DATA B,
    EES_CLINICAL_DATA C ,
    EES_CLINCL_COMPL_ICD D                                                                            
    WHERE A.ICD_CODE=     B.ICD_CODE
                    AND  A. ICD_CODE= C.ICD_CODE
                    AND B.ICD_CODE=D.ICD_9_CD
                    AND B.COMPL_GRP_TXT=D.COMPL_GRP_TXT
                    AND B.COMPL_ICD_CODE<>B.ICD_CODE
                    AND C.PROC_TYPE <>'L'
                    AND D.COMPL_TYPE_TXT = 'Intra-operative Misadventure'
                    AND B.PROC_DAY=C.PROC_DAY
    UNION
                    SELECT
    A.ICD_CODE AS ICD_CODE,
    A.ICD_DESC AS ICD_DESC,
    B.COMPL_ICD_CODE AS COMPL_ICD_CODE,
    B.COMPL_GRP_TXT AS COMPL_GRP_TXT,
    C.PROC_TYPE AS PROC_TYPE ,
    C.I_O_IND AS I_O_IND,
    C.DISC_MON AS QUARTER ,
    B.PAT_KEY AS PAT_KEY ,
    D.COMPL_TYPE_TXT AS COMPL_TYPE_TXT ,
    C.PROV_ID AS PROV_ID ,
    A.SPECIALTY AS SPECIALTY
    FROM
    EES_ICD_9_CODE A ,
    EES_CLINICAL_COMPL_DATA B,
    EES_CLINICAL_DATA C ,
    EES_CLINCL_COMPL_ICD D                                                                            
    WHERE A.ICD_CODE=     B.ICD_CODE
                    AND  A. ICD_CODE= C.ICD_CODE
                    AND B.ICD_CODE=D.ICD_9_CD
                    AND B.COMPL_GRP_TXT=D.COMPL_GRP_TXT
                    AND B.COMPL_ICD_CODE<>B.ICD_CODE
                    AND C.PROC_TYPE <>'L'
                    AND B.COMPL_GRP_TXT='Reoperations'
                    AND B.PROC_DAY>C.PROC_DAY 
                    AND (B.COMPL_ICD_CODE LIKE '45.%' OR   B.COMPL_ICD_CODE LIKE '46.%'           OR          B.COMPL_ICD_CODE LIKE '48.%' OR B.COMPL_ICD_CODE LIKE '49.%')
                    )
    Here is the explain plan:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                         |  Name                    | Rows  | Bytes |TempSpc| Cost  |
    |   0 | SELECT STATEMENT                  |                          |  5443M|   710G|       |  1687M|
    |   1 |  SORT UNIQUE                      |                          |  5443M|   710G|  1598G|  1687M|
    |   2 |   UNION-ALL                       |                          |       |       |       |       |
    |*  3 |    HASH JOIN                      |                          |  5371M|   700G|    34M| 11709 |
    |*  4 |     HASH JOIN                     |                          |   281K|    31M|  4568K|  4679 |
    |   5 |      MERGE JOIN CARTESIAN         |                          | 50225 |  3972K|       |   248 |
    |   6 |       TABLE ACCESS FULL           | EES_ICD_9_CODE           |   123 |  5289 |       |     2 |
    |   7 |       BUFFER SORT                 |                          |   408 | 15504 |       |   246 |
    |*  8 |        TABLE ACCESS FULL          | EES_CLINCL_COMPL_ICD     |   408 | 15504 |       |     2 |
    |*  9 |      TABLE ACCESS FULL            | EES_CLINICAL_COMPL_DATA  |  2088K|    71M|       |  1860 |
    |* 10 |     TABLE ACCESS FULL             | EES_CLINICAL_DATA        |  1911K|    41M|       |  2855 |
    |* 11 |    HASH JOIN                      |                          |  5973K|   831M|  7216K|  9081 |
    |  12 |     TABLE ACCESS BY INDEX ROWID   | EES_ICD_9_CODE           |     1 |    43 |       |     1 |
    |  13 |      NESTED LOOPS                 |                          | 55969 |  6558K|       |  4265 |
    |  14 |       NESTED LOOPS                |                          | 55970 |  4208K|       |  3146 |
    |* 15 |        TABLE ACCESS FULL          | EES_CLINCL_COMPL_ICD     |    36 |  1368 |       |     2 |
    |* 16 |        TABLE ACCESS BY INDEX ROWID| EES_CLINICAL_COMPL_DATA  |  1555 | 60645 |       |    88 |
    |* 17 |         INDEX RANGE SCAN          | COMPL_IND_ICD_CODE       | 56875 |       |       |   248 |
    |* 18 |       INDEX RANGE SCAN            | ICD_CODE_INDEX_1         |     1 |       |       |     1 |
    |* 19 |     TABLE ACCESS FULL             | EES_CLINICAL_DATA        |  1911K|    47M|       |  2855 |
    |* 20 |    HASH JOIN                      |                          |    65M|  9142M|  8864K|  8943 |
    |* 21 |     HASH JOIN                     |                          | 68740 |  8055K|  5432K|  2781 |
    |  22 |      MERGE JOIN CARTESIAN         |                          | 59762 |  4727K|       |   248 |
    |  23 |       TABLE ACCESS FULL           | EES_ICD_9_CODE           |   123 |  5289 |       |     2 |
    |  24 |       BUFFER SORT                 |                          |   486 | 18468 |       |   246 |
    |* 25 |        TABLE ACCESS FULL          | EES_CLINCL_COMPL_ICD     |   486 | 18468 |       |     2 |
    |* 26 |      TABLE ACCESS FULL            | EES_CLINICAL_COMPL_DATA  |   429K|    15M|       |  1860 |
    |* 27 |     TABLE ACCESS FULL             | EES_CLINICAL_DATA        |  1911K|    47M|       |  2855 |
    Predicate Information (identified by operation id):
       3 - access("A"."ICD_CODE"="C"."ICD_CODE")
       4 - access("A"."ICD_CODE"="B"."ICD_CODE" AND "B"."ICD_CODE"="D"."ICD_9_CD" AND
                  "B"."COMPL_GRP_TXT"="D"."COMPL_GRP_TXT")
       8 - filter("D"."COMPL_TYPE_TXT"<>'Intra-operative Misadventure' AND
                  "D"."COMPL_GRP_TXT"<>'Reoperations')
       9 - filter("B"."COMPL_ICD_CODE"<>"B"."ICD_CODE" AND "B"."COMPL_GRP_TXT"<>'Reoperations')
      10 - filter("C"."PROC_TYPE"<>'L')
      11 - access("A"."ICD_CODE"="C"."ICD_CODE" AND "B"."PROC_DAY"="C"."PROC_DAY")
      15 - filter("D"."COMPL_TYPE_TXT"='Intra-operative Misadventure')
      16 - filter("B"."COMPL_GRP_TXT"="D"."COMPL_GRP_TXT" AND "B"."COMPL_ICD_CODE"<>"B"."ICD_CODE")
      17 - access("B"."ICD_CODE"="D"."ICD_9_CD")
      18 - access("A"."ICD_CODE"="B"."ICD_CODE")
      19 - filter("C"."PROC_TYPE"<>'L')
      20 - access("A"."ICD_CODE"="C"."ICD_CODE")
           filter("B"."PROC_DAY">"C"."PROC_DAY")
      21 - access("A"."ICD_CODE"="B"."ICD_CODE" AND "B"."ICD_CODE"="D"."ICD_9_CD" AND
                  "B"."COMPL_GRP_TXT"="D"."COMPL_GRP_TXT")
      25 - filter("D"."COMPL_GRP_TXT"='Reoperations')
      26 - filter("B"."COMPL_ICD_CODE"<>"B"."ICD_CODE" AND "B"."COMPL_GRP_TXT"='Reoperations' AND
                  ("B"."COMPL_ICD_CODE" LIKE '45.%' OR "B"."COMPL_ICD_CODE" LIKE '46.%' OR "B"."COMPL_ICD_CODE"
                  '48.%' OR "B"."COMPL_ICD_CODE" LIKE '49.%'))
      27 - filter("C"."PROC_TYPE"<>'L')
    Note: cpu costing is off
    61 rows selected.
    Please suggest any changes to the query; where are the bottlenecks?

    Same question as above:
    But how are you using the view?
    You're presumably not planning to just do a select * from the view with no additional joins or predicates, are you? Views that are this big and resource intensive would rarely be used in a plain select * from view.
    Presumably that explain plan is from the entire view source query on its own, rather than from the view actually being used in a SQL statement with further joins or predicates?
    Those statements are what you need to get execution plans from, to check that you're getting predicate pushdown, etc.
    Otherwise, if the stats are accurate, the estimates are accurate, and you really are just doing a select * from the view, where's the performance surprise?

  • Table is taking time to load

    Hi,
    JDeveloper 11g.
    I have a VO (user-defined query) and I have dropped it as a table on my JSPX page.
    By default it shows no rows; on a button click it fetches the data and loads the table. The issue is that on page load it takes a lot of time to render the empty table.
    Please let me know what I should do to make the table load quickly.
    Thanks.

    Hi,
    I am executing the following statement in the beforePhase listener of the page, using a backing-bean method.
    Jspx Page:
    <f:view beforePhase="#{BackingBean.method}">
    AM:
    this.getEmpView().executeEmptyRowSet();
    Only the table is taking time to load, and only the first time.
    Thanks.
