Performance of Query

Hi BW Folks,
I am working on the virtual cube 0bcs_vc10 for BCS (Business Consolidation); the base cube is 0bcs_c10. We compressed and partitioned the base cube since it had a huge performance issue. The queries which I developed earlier are running fine and are in production.
Now, when I develop new queries and run them in DEV, they take 20 to 25 minutes to run, i.e. all the partitioning and compressing of the cube is not helping us.
I went to RSRV and checked the indexes of the cube, and I got this
yellow signal: ORACLE: Index /BI0/ICS_ITEM~0 has possibly degenerated
I need your suggestions on what my next step should be. Will assign full points.
Thanks

Hi Ravi,
I ran RSRV and corrected the error, but it still shows me the same error and there is a yellow signal. Could you please tell me where else I should look in order to get the query performance right? I have already done partitioning and compression.
It was running fine until 2 days ago, and all of a sudden there is a huge runtime for the queries.
Your suggestions will be appreciated with full points.
Thanks
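For reference, if the degenerated-index warning persists after the RSRV repair, below is a minimal sketch (assuming you have DBA access to the underlying Oracle database, and that the index name is exactly the one RSRV reported) of rebuilding the index directly on the database:
    -- Manual rebuild of the index RSRV flagged as possibly degenerated.
    -- The double quotes are required because SAP index names contain '/' and '~'.
    ALTER INDEX "/BI0/ICS_ITEM~0" REBUILD ONLINE;
Afterwards the table's optimizer statistics are usually refreshed (for example via the standard BRCONNECT statistics run) so the optimizer sees the rebuilt index. In a BW system this kind of repair is normally driven from RSRV or the standard index-maintenance jobs rather than done by hand.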

Similar Messages

  • Perform a query on tree node click

    This is probably very easy and I'm just having brain freeze..
    I have an application that is using ColdFusion to provide
    data for various controls. One control is a tree showing two levels
    of data (main level shows the process/job number, then you open
    that to see all the event numbers for that job). This all works
    fine.
    Now what I want to do is, when you select an event number, Flex
    will use a ColdFusion page I set up to perform a query on the
    database using the event number and job number to get all the rest
    of the data about that particular job (description, run times, etc.).
    I have the click event from the tree control working fine, I
    have the page in coldfusion working fine, it outputs an XML format
    file with all the relevant details based on the parameters you send
    it. Now the question is, how do I populate the fields on the form
    with the data returned from the query?
    I am using a HTTPService call to get the data to/from
    ColdFusion (I have version 6.1 of Coldfusion).
    Thanks for any help.

    Well, I answered my own question... Here is what I did in
    case anyone else wants to know. If there is a better way, please
    advise.
    I have a click event on the tree control. Since the tree will
    only be two levels deep (parent and child) I test for whether the
    item clicked has a parent. If it does, I know they clicked on a
    child node. I then fire off the HTTPService with the parameters
    from the child and parent nodes of the tree item that was clicked.
    In the result parameter of the HTTPService, I populate the
    various fields using the lastresult.root.fieldname syntax of the
    HTTPService.
    It works as expected, but perhaps there is a better
    way?

  • How to improve performance of query

    Hi all,
    How to improve performance of query.
    Please send to:
    [email protected]
    Thanks in advance,
    Bhaskar

    Hi,
    Go through the following links for performance:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    http://www.asug.com/client_files/Calendar/Upload/ASUG%205-mar-2004%20BW%20Performance%20PDF.pdf
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2

  • Improving performance of query with View

    Hi ,
    I'm working on a stored procedure where certain records have to be eliminated; unfortunately, the tables involved in this exception query are in a different database, which will lead to a performance issue. Is there any way in SQL Server to store this query
    in a view, store its execution plan, and make it work like an SP? While I believe it's kind of a crazy thought, is there any better way to improve the performance of a query when it accesses data across databases?
    Thanks,
    Vishal.

    Do not try to solve problems that you have not yet confirmed to exist.  There is no general reason why a query (regardless of whether it involves a view) that refers to a table in a different database (NB - DATABASE not INSTANCE) will perform poorly. 
    As a suggestion, write a working query using a duplicate of the table in the current database.  Once it is working, then worry about performance.  Once that is working as efficiently as it can, change the query to use the "remote" table rather
    than the duplicate. Then determine if you have an issue.  If you cannot get the level of performance you desire with a local table, then you most likely have a much larger issue to address.  In that case, perhaps you need to change your perspective
    and approach to accomplishing your goal. 
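    As a minimal sketch of that approach (the database, schema, table and column names below are hypothetical, not from the question): reference the other database with three-part naming and, if you like, hide it behind a view in the current database. Note that a view is only stored query text; it does not store an execution plan - the plan is compiled for whichever statement uses the view.
    -- View in the current database over a table that lives in another database.
    CREATE VIEW dbo.vExcludedRecords
    AS
    SELECT e.RecordId
    FROM OtherDb.dbo.ExcludedRecords AS e;
    GO
    -- The exception logic can then treat the view as if it were a local table.
    SELECT t.*
    FROM dbo.TargetTable AS t
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.vExcludedRecords AS x
                      WHERE x.RecordId = t.RecordId);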

  • Does Coloring of Rows and Columns in a Query Affect the Performance of the Query

    Hi to all,
    Does coloring of rows and columns in a query, via WAD or Report Designer, affect the performance of the query?
    If yes, then how?
    What are the key factors we should consider while designing a query with regard to query performance?
    I shall be thankful to you for this.
    Regards
    Pavneet Rana

    There will not be any significant performance impact from colouring the rows or columns...
    But there are various performance parameters which should be looked into while designing a query...
    Hope this PPT helps you.
    http://www.comeritinc.com/UserFiles/file/tips%20tricks%20to%20speed%20%20NW%20BI%20%202009.ppt
    rgds, Ghuru

  • Performance of query in report..

    Hi All,
    We have the below SQL which is taking 6-8 hrs to execute. Of the tables involved, 'analysis_results_details' is a huge table with 329 million records; it is an "IOT - TOP" (index-organized) table.
    Can you please suggest some ideas on how to improve the performance of the query? Apologies if the formatting is not correct; I tried my best.
    DB details :- 10g (10.1.0.5.0) - 64 bit.
    SELECT   CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK/Ireland'
                ELSE ctr.region
             END AS region_name,
             t.reinsurance_treatment_desc AS reinsurance_treatment,
             CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK Runoff'
                ELSE rlc.country_code
             END AS country_code,       
             ars.currency_code, rln.management_line_of_business AS aog_line,
             rln.reserving_class, rln.reserving_line, rln.reserving_line_id,
            CASE
                WHEN UPPER (a.NAME) NOT LIKE 'PCSUMM%'
                   THEN ''
                ELSE CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUMM1 %'
                   THEN 'Attritional'
                WHEN UPPER (a.NAME) LIKE 'PCSUMM2 %'
                   THEN 'Large Loss'
                WHEN UPPER (a.NAME) LIKE 'PCSUMM4 %'
                   THEN 'Total'
                ELSE 'Cat'
             END
             END AS claim_type,
            CASE
                WHEN MOD (ard.exposure_period, 100) IN (1, 2, 3)
                   THEN TO_DATE
                          (TO_CHAR (TRUNC (ard.exposure_period / 100)) || '0101',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (4, 5, 6)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0401',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (7, 8, 9)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0701',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (10, 11, 12)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '1001',
                                  'yyyymmdd')
             END AS origin_date,
            CASE
                WHEN MOD (ard.evaluation_period, 100) IN
                                                   (1, 2, 3)
                   THEN TO_DATE
                          (TO_CHAR (TRUNC (ard.evaluation_period / 100)) || '0331',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (4, 5, 6)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0630',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (7, 8, 9)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0930',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (10, 11, 12)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '1231',
                                  'yyyymmdd')
             END AS development_date,
             SUM (DECODE (ars.data_type_id, 1, NVL (ard.VALUE, 0), 0)
                 ) AS earned_premium,
             SUM (DECODE (ars.data_type_id, 12, NVL (ard.VALUE, 0), 0)
                 ) AS paid_losses_and_alae,
             SUM (DECODE (ars.data_type_id, 27, NVL (ard.VALUE, 0), 0)
                 ) AS total_incurred_losses_inc_alae,
             SUM (DECODE (ars.data_type_id, 6, NVL (ard.VALUE, 0), 0)
                 ) AS paid_loss_total,
             SUM (DECODE (ars.data_type_id, 9, NVL (ard.VALUE, 0), 0)
                 ) AS alae_total,
             SUM (DECODE (ars.data_type_id, 13, NVL (ard.VALUE, 0), 0)
                 ) AS case_os_total,
             SUM (DECODE (ars.data_type_id, 3, NVL (ard.VALUE, 0), 0)
                 ) AS written_premium,                       
             SUM (DECODE (ars.data_type_id, 33, NVL (ard.VALUE, 0), 0)
                 ) AS total_claim_counts,                 
             SUM (DECODE (ars.data_type_id, 31, NVL (ard.VALUE, 0), 0)
                 ) AS open_claim_counts,                 
             SUM (DECODE (ars.data_type_id, 32, NVL (ard.VALUE, 0), 0)
                 ) AS closed_claim_counts,               
             SUM (DECODE (ars.data_type_id, 21, NVL (ard.VALUE, 0), 0)
                 ) AS total_case_inc_loss,                     
             SYSDATE AS export_date
        FROM res_line_country rlc,
             res_line_names rln,
             risk_history.analysis_results_criteria arc,
             (SELECT   al.analysis_id,
                       MAX (al.lob_type_id) AS lob_type_id
                  FROM analysis_lobs al
              GROUP BY analysis_id) al_types,
             analyses a,
             treatments t,
             risk_history.analysis_lobs al,
             analysis_results_summary ars,
             analysis_results_details ard,
             countries_to_regions ctr,
                      --Created separate table to materialize the inline view - VM
             country_to_bu_map ctbm
       WHERE rlc.lob_value = rln.lob_value
         AND rlc.lob_value = arc.lob_value
         AND al_types.lob_type_id = rln.lob_type_id
         AND a.analysis_id = al_types.analysis_id
         AND a.analysis_id = arc.analysis_id
         AND a.analysis_id = t.analysis_id
         AND al.analysis_id = arc.analysis_id
         AND arc.lob_value = al.lob_value
         AND arc.analysis_criteria_id = ars.analysis_criteria_id
         AND ars.analysis_results_id = ard.analysis_results_id
         AND ctr.country_code = rlc.country_code
         AND ctbm.business_unit = arc.business_unit
         AND rlc.country_code = ctbm.country_code
         AND a.run_frequency !=2
         AND ars.is_non_zero_triangle = 'Y'   
         AND ars.data_type_id IN (1, 12, 27, 6, 9, 13, 3, 33, 31, 32, 21)
         AND NOT UPPER (a.NAME) LIKE 'PCSUMM88%'
         AND arc.lob_value NOT IN  ('ALL', 'SEL') -- get data only for the actual reserving lines
         AND al_types.lob_type_id IN (2, 4)
         AND arc.management_unit != 'SEL'
         AND arc.business_unit NOT IN  ('ALL', 'SEL')
         AND arc.rcc = 'ALL'                    
    GROUP BY CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK/Ireland'
                ELSE ctr.region
             END,
             t.reinsurance_treatment_desc,
             CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK Runoff'
                ELSE rlc.country_code
             END,
             ars.currency_code,
             rln.management_line_of_business,
             rln.reserving_class,
             rln.reserving_line,
             rln.reserving_line_id,
             CASE
                WHEN UPPER (a.NAME) NOT LIKE 'PCSUMM%'
                   THEN ''
                ELSE CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUMM1 %'
                   THEN 'Attritional'
                WHEN UPPER (a.NAME) LIKE 'PCSUMM2 %'
                   THEN 'Large Loss'
                WHEN UPPER (a.NAME) LIKE 'PCSUMM4 %'
                   THEN 'Total'
                ELSE 'Cat'
             END
             END,
             CASE
                WHEN MOD (ard.exposure_period, 100) IN (1, 2, 3)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0101',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (4, 5, 6)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0401',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (7, 8, 9)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0701',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (10, 11, 12)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '1001',
                                  'yyyymmdd')
             END,                                             
             CASE
                WHEN MOD (ard.evaluation_period, 100) IN (1, 2, 3)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0331',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (4, 5, 6)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0630',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (7, 8, 9)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0930',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (10, 11, 12)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '1231',
                                 'yyyymmdd') END,SYSDATE
      HAVING (   SUM (DECODE (ars.data_type_id, 1, NVL (ard.VALUE, 0), 0)) <> 0
              OR SUM (DECODE (ars.data_type_id, 12, NVL (ard.VALUE, 0), 0)) <>0
              OR SUM (DECODE (ars.data_type_id, 27, NVL (ard.VALUE, 0), 0)) <>0
              OR SUM (DECODE (ars.data_type_id, 6, NVL (ard.VALUE, 0), 0)) <> 0
              OR SUM (DECODE (ars.data_type_id, 9, NVL (ard.VALUE, 0), 0)) <> 0
              OR SUM (DECODE (ars.data_type_id, 13, NVL (ard.VALUE, 0), 0)) <> 0
              OR SUM (DECODE (ars.data_type_id, 3, NVL (ard.VALUE, 0), 0)) <>0
              OR SUM (DECODE (ars.data_type_id, 33, NVL (ard.VALUE, 0), 0)) <>0
              OR SUM (DECODE (ars.data_type_id, 31, NVL (ard.VALUE, 0), 0)) <>0
              OR SUM (DECODE (ars.data_type_id, 32, NVL (ard.VALUE, 0), 0)) <>0
              OR SUM (DECODE (ars.data_type_id, 21, NVL (ard.VALUE, 0), 0)) <> 0 )
    Below is the EXPLAIN PLAN for it
    PLAN_TABLE_OUTPUT
    Plan hash value: 771711397
    | Id  | Operation                                | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |                              |     1 |   256 |   356   (5)| 00:00:03 |
    |*  1 |  FILTER                                  |                              |       |       |            |          |
    |   2 |   SORT GROUP BY                          |                              |     1 |   256 |   356   (5)| 00:00:03 |
    |   3 |    NESTED LOOPS                          |                              |     1 |   256 |   355   (4)| 00:00:03 |
    |   4 |     NESTED LOOPS                         |                              |     1 |   234 |   355   (4)| 00:00:03 |
    |   5 |      NESTED LOOPS                        |                              |     1 |   217 |   355   (4)| 00:00:03 |
    |   6 |       NESTED LOOPS                       |                              |     2 |   318 |   355   (5)| 00:00:03 |
    |   7 |        NESTED LOOPS                      |                              |     1 |   139 |   142  (10)| 00:00:02 |
    |   8 |         NESTED LOOPS                     |                              |     1 |   131 |   142  (10)| 00:00:02 |
    |   9 |          NESTED LOOPS                    |                              |     1 |   121 |   141  (10)| 00:00:02 |
    |* 10 |           HASH JOIN                      |                              |     1 |   111 |   140  (10)| 00:00:02 |
    |* 11 |            TABLE ACCESS BY INDEX ROWID   | ANALYSES                     |     1 |    43 |     0   (0)| 00:00:01 |
    |  12 |             NESTED LOOPS                 |                              |     1 |    94 |   115   (8)| 00:00:01 |
    |  13 |              NESTED LOOPS                |                              |     1 |    51 |   115   (8)| 00:00:01 |
    |* 14 |               TABLE ACCESS FULL          | ANALYSIS_RESULTS_SUMMARY     |     1 |    22 |   114   (8)| 00:00:01 |
    |* 15 |               TABLE ACCESS BY INDEX ROWID| ANALYSIS_RESULTS_CRITERIA    |     1 |    29 |     1   (0)| 00:00:01 |
    |* 16 |                INDEX UNIQUE SCAN         | ANALYSIS_RESULTS_CRITERIA_PK |     1 |       |     0   (0)| 00:00:01 |
    |* 17 |              INDEX RANGE SCAN            | ANALYSES_PK                  |     1 |       |     0   (0)| 00:00:01 |
    |  18 |            VIEW                          |                              |    14 |   238 |    24  (13)| 00:00:01 |
    |* 19 |             FILTER                       |                              |       |       |            |          |
    |  20 |              SORT GROUP BY               |                              |    14 |    98 |    24  (13)| 00:00:01 |
    |  21 |               TABLE ACCESS FULL          | ANALYSIS_LOBS                | 22909 |   156K|    21   (0)| 00:00:01 |
    |* 22 |           INDEX RANGE SCAN               | ANALYSIS_LOBS_PK             |     1 |    10 |     1   (0)| 00:00:01 |
    |* 23 |          INDEX FULL SCAN                 | COUNTRY_TO_BU_MAP_PK         |     1 |    10 |     1   (0)| 00:00:01 |
    |* 24 |         INDEX UNIQUE SCAN                | RES_LINE_COUNTRY_PK          |     1 |     8 |     0   (0)| 00:00:01 |
    |* 25 |        INDEX RANGE SCAN                  | ANALYSIS_RESULTS_DETAILS_PK  | 59280 |  1157K|   213   (1)| 00:00:02 |
    |* 26 |       TABLE ACCESS BY INDEX ROWID        | RES_LINE_NAMES               |     1 |    58 |     1   (0)| 00:00:01 |
    |* 27 |        INDEX UNIQUE SCAN                 | RES_LINE_NAME_PK             |     1 |       |     0   (0)| 00:00:01 |
    |  28 |      TABLE ACCESS BY INDEX ROWID         | COUNTRIES_TO_REGIONS         |     1 |    17 |     0   (0)| 00:00:01 |
    |* 29 |       INDEX UNIQUE SCAN                  | COUNTRIES_TO_REG_PK          |     1 |       |     0   (0)| 00:00:01 |
    |  30 |     TABLE ACCESS BY INDEX ROWID          | TREATMENTS                   |     1 |    22 |     0   (0)| 00:00:01 |
    |* 31 |      INDEX UNIQUE SCAN                   | PK_ANALYSIS_ID               |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(SUM(DECODE("ARS"."DATA_TYPE_ID",1,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",12,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",27,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",6,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",9,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",13,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",3,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",33,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",31,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",32,NVL("ARD"."VALUE",0),0))<>0 OR
                  SUM(DECODE("ARS"."DATA_TYPE_ID",21,NVL("ARD"."VALUE",0),0))<>0)
      10 - access("A"."ANALYSIS_ID"="AL_TYPES"."ANALYSIS_ID")
      11 - filter("A"."RUN_FREQUENCY"<>2 AND UPPER("A"."NAME") NOT LIKE 'PCSUMM88%')
      14 - filter("ARS"."IS_NON_ZERO_TRIANGLE"='Y' AND ("ARS"."DATA_TYPE_ID"=1 OR "ARS"."DATA_TYPE_ID"=3 OR
                  "ARS"."DATA_TYPE_ID"=6 OR "ARS"."DATA_TYPE_ID"=9 OR "ARS"."DATA_TYPE_ID"=12 OR "ARS"."DATA_TYPE_ID"=13 OR
                  "ARS"."DATA_TYPE_ID"=21 OR "ARS"."DATA_TYPE_ID"=27 OR "ARS"."DATA_TYPE_ID"=31 OR "ARS"."DATA_TYPE_ID"=32 OR
                  "ARS"."DATA_TYPE_ID"=33))
      15 - filter("ARC"."BUSINESS_UNIT"<>'ALL' AND "ARC"."BUSINESS_UNIT"<>'SEL' AND "ARC"."LOB_VALUE"<>'SEL' AND
                  "ARC"."MANAGEMENT_UNIT"<>'SEL' AND "ARC"."LOB_VALUE"<>'ALL' AND "ARC"."RCC"='ALL')
      16 - access("ARC"."ANALYSIS_CRITERIA_ID"="ARS"."ANALYSIS_CRITERIA_ID")
      17 - access("A"."ANALYSIS_ID"="ARC"."ANALYSIS_ID")
      19 - filter(MAX("AL"."LOB_TYPE_ID")=2 OR MAX("AL"."LOB_TYPE_ID")=4)
      22 - access("AL"."ANALYSIS_ID"="ARC"."ANALYSIS_ID" AND "ARC"."LOB_VALUE"="AL"."LOB_VALUE")
           filter("AL"."LOB_VALUE"<>'ALL' AND "AL"."LOB_VALUE"<>'SEL' AND "ARC"."LOB_VALUE"="AL"."LOB_VALUE")
      23 - access("CTBM"."BUSINESS_UNIT"="ARC"."BUSINESS_UNIT")
           filter("CTBM"."BUSINESS_UNIT"<>'ALL' AND "CTBM"."BUSINESS_UNIT"<>'SEL' AND
                  "CTBM"."BUSINESS_UNIT"="ARC"."BUSINESS_UNIT")
      24 - access("RLC"."LOB_VALUE"="ARC"."LOB_VALUE" AND "RLC"."COUNTRY_CODE"="CTBM"."COUNTRY_CODE")
           filter("RLC"."LOB_VALUE"<>'ALL' AND "RLC"."LOB_VALUE"<>'SEL')
      25 - access("ARS"."ANALYSIS_RESULTS_ID"="ARD"."ANALYSIS_RESULTS_ID")
      26 - filter((TO_NUMBER("RLN"."LOB_TYPE_ID")=4 OR TO_NUMBER("RLN"."LOB_TYPE_ID")=2) AND
                  "AL_TYPES"."LOB_TYPE_ID"=TO_NUMBER("RLN"."LOB_TYPE_ID"))
      27 - access("RLN"."LOB_VALUE"=TO_NUMBER("RLC"."LOB_VALUE"))
      29 - access("CTR"."COUNTRY_CODE"="RLC"."COUNTRY_CODE")
      31 - access("A"."ANALYSIS_ID"="T"."ANALYSIS_ID")Regards,
    Sundeep K

    Hi Boneist,
    I have checked the query and it doesn't change much; the execution time is the same.
    Hi Charles,
    I have tried using the hint /*+ GATHER_PLAN_STATISTICS */ as below, but the query is still executing; it's taking more than 4 hrs. Is there any other way I can get these statistics?
    set serveroutput off
    SELECT  /*+ GATHER_PLAN_STATISTICS */ CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK/Ireland'
                ELSE ctr.region
             END AS region_name,
             t.reinsurance_treatment_desc AS reinsurance_treatment,
             CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
    Will let you know if I get these. In the meantime I have added a new WHERE clause to the query (the ard.VALUE <> 0 filter marked below), and it took 35 mins to execute. Please find it below along with the plan.
    explain plan for
    SELECT  CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK/Ireland'
                ELSE ctr.region
             END AS region_name,
             t.reinsurance_treatment_desc AS reinsurance_treatment,
             CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK Runoff'
                ELSE rlc.country_code
             END AS country_code,       
             ars.currency_code, rln.management_line_of_business AS aog_line,
             rln.reserving_class, rln.reserving_line, rln.reserving_line_id,
            CASE
                WHEN UPPER (a.NAME) NOT LIKE 'PCSUMM%'
                   THEN ''
                ELSE CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUMM1 %'
                   THEN 'Attritional'
                WHEN UPPER (a.NAME) LIKE 'PCSUMM2 %'
                   THEN 'Large Loss'
                WHEN UPPER (a.NAME) LIKE 'PCSUMM4 %'
                   THEN 'Total'
                ELSE 'Cat'
             END
             END AS claim_type,
            CASE
                WHEN MOD (ard.exposure_period, 100) IN (1, 2, 3)
                   THEN TO_DATE
                          (TO_CHAR (TRUNC (ard.exposure_period / 100)) || '0101',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (4, 5, 6)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0401',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (7, 8, 9)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0701',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (10, 11, 12)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '1001',
                                  'yyyymmdd')
             END AS origin_date,
            CASE
                WHEN MOD (ard.evaluation_period, 100) IN
                                                   (1, 2, 3)
                   THEN TO_DATE
                          (TO_CHAR (TRUNC (ard.evaluation_period / 100)) || '0331',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (4, 5, 6)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0630',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (7, 8, 9)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0930',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (10, 11, 12)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '1231',
                                  'yyyymmdd')
             END AS development_date,
             SUM (DECODE (ars.data_type_id, 1, NVL (ard.VALUE, 0), 0)
                 ) AS earned_premium,
             SUM (DECODE (ars.data_type_id, 12, NVL (ard.VALUE, 0), 0)
                 ) AS paid_losses_and_alae,
             SUM (DECODE (ars.data_type_id, 27, NVL (ard.VALUE, 0), 0)
                 ) AS total_incurred_losses_inc_alae,
             SUM (DECODE (ars.data_type_id, 6, NVL (ard.VALUE, 0), 0)
                 ) AS paid_loss_total,
             SUM (DECODE (ars.data_type_id, 9, NVL (ard.VALUE, 0), 0)
                 ) AS alae_total,
             SUM (DECODE (ars.data_type_id, 13, NVL (ard.VALUE, 0), 0)
                 ) AS case_os_total,
             SUM (DECODE (ars.data_type_id, 3, NVL (ard.VALUE, 0), 0)
                 ) AS written_premium,                       
             SUM (DECODE (ars.data_type_id, 33, NVL (ard.VALUE, 0), 0)
                 ) AS total_claim_counts,                 
             SUM (DECODE (ars.data_type_id, 31, NVL (ard.VALUE, 0), 0)
                 ) AS open_claim_counts,                 
             SUM (DECODE (ars.data_type_id, 32, NVL (ard.VALUE, 0), 0)
                 ) AS closed_claim_counts,               
             SUM (DECODE (ars.data_type_id, 21, NVL (ard.VALUE, 0), 0)
                 ) AS total_case_inc_loss,                     
             SYSDATE AS export_date
        FROM res_line_country rlc,
             res_line_names rln,
             risk_history.analysis_results_criteria arc,
             (SELECT   al.analysis_id,
                       MAX (al.lob_type_id) AS lob_type_id
                  FROM analysis_lobs al
              GROUP BY analysis_id) al_types,
             analyses a,
             treatments t,
             risk_history.analysis_lobs al,
             analysis_results_summary ars,
             analysis_results_details ard,
             countries_to_regions ctr,
                      --Created separate table to materialize the inline view - VM
             country_to_bu_map ctbm
       WHERE rlc.lob_value = rln.lob_value
         AND rlc.lob_value = arc.lob_value
         AND al_types.lob_type_id = rln.lob_type_id
         AND a.analysis_id = al_types.analysis_id
         AND a.analysis_id = arc.analysis_id
         AND a.analysis_id = t.analysis_id
         AND al.analysis_id = arc.analysis_id
         AND arc.lob_value = al.lob_value
         AND arc.analysis_criteria_id = ars.analysis_criteria_id
         AND ars.analysis_results_id = ard.analysis_results_id
         AND ctr.country_code = rlc.country_code
         AND ctbm.business_unit = arc.business_unit
         AND rlc.country_code = ctbm.country_code
         AND a.run_frequency !=2
         AND ars.is_non_zero_triangle = 'Y'   
         AND ars.data_type_id IN (1, 12, 27, 6, 9, 13, 3, 33, 31, 32, 21)
         AND NOT UPPER (a.NAME) LIKE 'PCSUMM88%'
         AND arc.lob_value NOT IN  ('ALL', 'SEL') -- get data only for the actual reserving lines
         AND al_types.lob_type_id IN (2, 4)
         AND arc.management_unit != 'SEL'
         AND arc.business_unit NOT IN  ('ALL', 'SEL')
         AND arc.rcc = 'ALL' 
          AND ard.VALUE <> 0                    -- newly added filter
    GROUP BY CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK/Ireland'
                ELSE ctr.region
             END,
             t.reinsurance_treatment_desc,
             CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUM%UK%RUNOFF%'
                   THEN 'UK Runoff'
                ELSE rlc.country_code
             END,
             ars.currency_code,
             rln.management_line_of_business,
             rln.reserving_class,
             rln.reserving_line,
             rln.reserving_line_id,
             CASE
                WHEN UPPER (a.NAME) NOT LIKE 'PCSUMM%'
                   THEN ''
                ELSE CASE
                WHEN UPPER (a.NAME) LIKE 'PCSUMM1 %'
                   THEN 'Attritional'
                WHEN UPPER (a.NAME) LIKE 'PCSUMM2 %'
                   THEN 'Large Loss'
                WHEN UPPER (a.NAME) LIKE 'PCSUMM4 %'
                   THEN 'Total'
                ELSE 'Cat'
             END
             END,
             CASE
                WHEN MOD (ard.exposure_period, 100) IN (1, 2, 3)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0101',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (4, 5, 6)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0401',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (7, 8, 9)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '0701',
                                  'yyyymmdd')
                WHEN MOD (ard.exposure_period, 100) IN (10, 11, 12)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.exposure_period / 100))
                                 || '1001',
                                  'yyyymmdd')
             END,
             CASE
                WHEN MOD (ard.evaluation_period, 100) IN (1, 2, 3)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0331',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (4, 5, 6)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0630',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (7, 8, 9)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '0930',
                                  'yyyymmdd')
                WHEN MOD (ard.evaluation_period, 100) IN (10, 11, 12)
                   THEN TO_DATE (   TO_CHAR (TRUNC (ard.evaluation_period / 100))
                                 || '1231',
                                 'yyyymmdd') END,SYSDATE
    HAVING   (SUM (DECODE (ars.data_type_id, 1, NVL (ard.VALUE, 0), 
                                          12, NVL (ard.VALUE, 0),
                                          27, NVL (ard.VALUE, 0),
                                           6, NVL (ard.VALUE, 0),
                                           9, NVL (ard.VALUE, 0),
                                          13, NVL (ard.VALUE, 0),
                                           3, NVL (ard.VALUE, 0),
                                          33, NVL (ard.VALUE, 0),
                                          31, NVL (ard.VALUE, 0),
                                          32, NVL (ard.VALUE, 0),
                                          21, NVL (ard.VALUE, 0), 0)) != 0 );
    select * from table (dbms_xplan.DISPLAY)
    PLAN_TABLE_OUTPUT
    Plan hash value: 2043043506
    | Id  | Operation                                | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |                              |     1 |   256 |   355   (5)| 00:00:03 |
    |*  1 |  FILTER                                  |                              |       |       |            |          |
    |   2 |   SORT GROUP BY                          |                              |     1 |   256 |   355   (5)| 00:00:03 |
    |   3 |    NESTED LOOPS                          |                              |     1 |   256 |   354   (4)| 00:00:03 |
    |   4 |     NESTED LOOPS                         |                              |     1 |   198 |   354   (5)| 00:00:03 |
    |   5 |      NESTED LOOPS                        |                              |     1 |   188 |   354   (5)| 00:00:03 |
    |   6 |       NESTED LOOPS                       |                              |     1 |   166 |   354   (5)| 00:00:03 |
    |   7 |        NESTED LOOPS                      |                              |     1 |   149 |   354   (5)| 00:00:03 |
    |   8 |         NESTED LOOPS                     |                              |     1 |   129 |   141  (10)| 00:00:02 |
    |   9 |          NESTED LOOPS                    |                              |     1 |   121 |   141  (10)| 00:00:02 |
    |* 10 |           HASH JOIN                      |                              |     1 |   111 |   140  (10)| 00:00:02 |
    |* 11 |            TABLE ACCESS BY INDEX ROWID   | ANALYSES                     |     1 |    43 |     0   (0)| 00:00:01 |
    |  12 |             NESTED LOOPS                 |                              |     1 |    94 |   115   (8)| 00:00:01 |
    |  13 |              NESTED LOOPS                |                              |     1 |    51 |   115   (8)| 00:00:01 |
    |* 14 |               TABLE ACCESS FULL          | ANALYSIS_RESULTS_SUMMARY     |     1 |    22 |   114   (8)| 00:00:01 |
    |* 15 |               TABLE ACCESS BY INDEX ROWID| ANALYSIS_RESULTS_CRITERIA    |     1 |    29 |     1   (0)| 00:00:01 |
    |* 16 |                INDEX UNIQUE SCAN         | ANALYSIS_RESULTS_CRITERIA_PK |     1 |       |     0   (0)| 00:00:01 |
    |* 17 |              INDEX RANGE SCAN            | ANALYSES_PK                  |     1 |       |     0   (0)| 00:00:01 |
    |  18 |            VIEW                          |                              |    14 |   238 |    24  (13)| 00:00:01 |
    |* 19 |             FILTER                       |                              |       |       |            |          |
    |  20 |              SORT GROUP BY               |                              |    14 |    98 |    24  (13)| 00:00:01 |
    |  21 |               TABLE ACCESS FULL          | ANALYSIS_LOBS                | 22909 |   156K|    21   (0)| 00:00:01 |
    |* 22 |           INDEX FULL SCAN                | COUNTRY_TO_BU_MAP_PK         |     1 |    10 |     1   (0)| 00:00:01 |
    |* 23 |          INDEX UNIQUE SCAN               | RES_LINE_COUNTRY_PK          |     1 |     8 |     0   (0)| 00:00:01 |
    |* 24 |         INDEX RANGE SCAN                 | ANALYSIS_RESULTS_DETAILS_PK  |  2964 | 59280 |   213   (1)| 00:00:02 |
    |  25 |        TABLE ACCESS BY INDEX ROWID       | COUNTRIES_TO_REGIONS         |     1 |    17 |     0   (0)| 00:00:01 |
    |* 26 |         INDEX UNIQUE SCAN                | COUNTRIES_TO_REG_PK          |     1 |       |     0   (0)| 00:00:01 |
    |  27 |       TABLE ACCESS BY INDEX ROWID        | TREATMENTS                   |     1 |    22 |     0   (0)| 00:00:01 |
    |* 28 |        INDEX UNIQUE SCAN                 | PK_ANALYSIS_ID               |     1 |       |     0   (0)| 00:00:01 |
    |* 29 |      INDEX RANGE SCAN                    | ANALYSIS_LOBS_PK             |     1 |    10 |     1   (0)| 00:00:01 |
    |* 30 |     TABLE ACCESS BY INDEX ROWID          | RES_LINE_NAMES               |     1 |    58 |     1   (0)| 00:00:01 |
    |* 31 |      INDEX UNIQUE SCAN                   | RES_LINE_NAME_PK             |     1 |       |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter(SUM(DECODE("ARS"."DATA_TYPE_ID",1,NVL("ARD"."VALUE",0),12,NVL("ARD"."VALUE",0),27,NVL("ARD"."VALUE"
                  ,0),6,NVL("ARD"."VALUE",0),9,NVL("ARD"."VALUE",0),13,NVL("ARD"."VALUE",0),3,NVL("ARD"."VALUE",0),33,NVL("ARD"."VA
                  LUE",0),31,NVL("ARD"."VALUE",0),32,NVL("ARD"."VALUE",0),21,NVL("ARD"."VALUE",0),0))<>0)
      10 - access("A"."ANALYSIS_ID"="AL_TYPES"."ANALYSIS_ID")
      11 - filter("A"."RUN_FREQUENCY"<>2 AND UPPER("A"."NAME") NOT LIKE 'PCSUMM88%')
      14 - filter("ARS"."IS_NON_ZERO_TRIANGLE"='Y' AND ("ARS"."DATA_TYPE_ID"=1 OR "ARS"."DATA_TYPE_ID"=3 OR
                  "ARS"."DATA_TYPE_ID"=6 OR "ARS"."DATA_TYPE_ID"=9 OR "ARS"."DATA_TYPE_ID"=12 OR "ARS"."DATA_TYPE_ID"=13 OR
                  "ARS"."DATA_TYPE_ID"=21 OR "ARS"."DATA_TYPE_ID"=27 OR "ARS"."DATA_TYPE_ID"=31 OR "ARS"."DATA_TYPE_ID"=32 OR
                  "ARS"."DATA_TYPE_ID"=33))
      15 - filter("ARC"."BUSINESS_UNIT"<>'ALL' AND "ARC"."BUSINESS_UNIT"<>'SEL' AND "ARC"."LOB_VALUE"<>'SEL' AND
                  "ARC"."MANAGEMENT_UNIT"<>'SEL' AND "ARC"."LOB_VALUE"<>'ALL' AND "ARC"."RCC"='ALL')
      16 - access("ARC"."ANALYSIS_CRITERIA_ID"="ARS"."ANALYSIS_CRITERIA_ID")
      17 - access("A"."ANALYSIS_ID"="ARC"."ANALYSIS_ID")
      19 - filter(MAX("AL"."LOB_TYPE_ID")=2 OR MAX("AL"."LOB_TYPE_ID")=4)
      22 - access("CTBM"."BUSINESS_UNIT"="ARC"."BUSINESS_UNIT")
           filter("CTBM"."BUSINESS_UNIT"<>'ALL' AND "CTBM"."BUSINESS_UNIT"<>'SEL' AND
                  "CTBM"."BUSINESS_UNIT"="ARC"."BUSINESS_UNIT")
      23 - access("RLC"."LOB_VALUE"="ARC"."LOB_VALUE" AND "RLC"."COUNTRY_CODE"="CTBM"."COUNTRY_CODE")
           filter("RLC"."LOB_VALUE"<>'ALL' AND "RLC"."LOB_VALUE"<>'SEL')
      24 - access("ARS"."ANALYSIS_RESULTS_ID"="ARD"."ANALYSIS_RESULTS_ID")
           filter("ARD"."VALUE"<>0)
      26 - access("CTR"."COUNTRY_CODE"="RLC"."COUNTRY_CODE")
      28 - access("A"."ANALYSIS_ID"="T"."ANALYSIS_ID")
      29 - access("AL"."ANALYSIS_ID"="ARC"."ANALYSIS_ID" AND "ARC"."LOB_VALUE"="AL"."LOB_VALUE")
           filter("AL"."LOB_VALUE"<>'ALL' AND "AL"."LOB_VALUE"<>'SEL' AND "ARC"."LOB_VALUE"="AL"."LOB_VALUE")
      30 - filter((TO_NUMBER("RLN"."LOB_TYPE_ID")=4 OR TO_NUMBER("RLN"."LOB_TYPE_ID")=2) AND
                  "AL_TYPES"."LOB_TYPE_ID"=TO_NUMBER("RLN"."LOB_TYPE_ID"))
      31 - access("RLN"."LOB_VALUE"=TO_NUMBER("RLC"."LOB_VALUE"))Kindly let me know how to go forward with this.
    Regards,
    Sunny
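    (A side note on the runtime statistics discussed above: once a /*+ GATHER_PLAN_STATISTICS */ execution finally completes, the actual-versus-estimated row counts are normally pulled in the same session as sketched below; whether the 'ALLSTATS LAST' format is accepted depends on the exact 10.1 patch level, so treat this as a sketch rather than a guaranteed recipe.)
    set serveroutput off
    -- ... run the hinted statement to completion here, in this session ...
    select * from table (dbms_xplan.display_cursor (NULL, NULL, 'ALLSTATS LAST'));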

  • Performance problem: Query explain plan changes in pl/sql vs. literal args

    I have a complex query with 5+ table joins on large (million+ row) tables. In its most simplified form, it's essentially
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between 123 and 456;
    Its performance was excellent with literal arguments (1 sec per execution).
    But when I used PL/SQL bind variables instead of the literals 123 and 456, the explain plan changed drastically and the query ran for 10+ minutes.
    Ex:
    CREATE OR REPLACE PROCEDURE runQuery(param1 INTEGER, param2 INTEGER) IS
    CURSOR LT_CURSOR IS
    select * from largeTable large
    join anotherLargeTable anothr on (anothr.id_2 = large.pk_id)
    join...(other aux tables)
    where large.pk_id between param1 AND param2;
    BEGIN
    FOR aRecord IN LT_CURSOR
    LOOP
    (print timestamp...)
    END LOOP;
    END runQuery;
    Rewriting the query 5 different ways was unfruitful. DB hints were also unfruitful in this particular case. LargeTable.pk_id was an indexed field as were all other join fields.
    Solution:
    Lacking other options, I wrote a literal query that concatenated the variable args. Open a cursor for the literal query.
    Upside: It changed the explain plan to the only really fast option and performed at 1 second instead of 10mins.
    Downside: Query not cached for future use. Perfectly fine for this query's purpose.
    Other suggestions are welcome.

    Best wild guess based on what you've posted is a bind variable mismatch (your column is declared as a NUMBER data type and your bind variable is declared as a VARCHAR, for example). Unless you have histograms on the columns in question... which, if you're using bind variables, is usually a really bad idea.
    A basic illustration of my guess
    http://blogs.oracle.com/optimizer/entry/how_do_i_get_sql_executed_from_an_application_to_uses_the_same_execution_plan_i_get_from_sqlplus
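    For illustration only, a minimal sketch of the implicit-conversion trap being described (the table is hypothetical; which side gets converted depends on the datatypes involved, and the harmful case is when the conversion lands on the indexed column):
    -- Hypothetical table: pk_id is stored as VARCHAR2 but always holds digits.
    CREATE TABLE large_table_demo (pk_id VARCHAR2(10) PRIMARY KEY, payload VARCHAR2(100));
    -- NUMBER binds force an implicit TO_NUMBER(pk_id) on the column,
    -- so the primary key index cannot be used for the range predicate:
    VARIABLE lo NUMBER
    VARIABLE hi NUMBER
    EXEC :lo := 123
    EXEC :hi := 456
    SELECT * FROM large_table_demo WHERE pk_id BETWEEN :lo AND :hi;
    -- Binds declared with the column's own datatype keep the index range scan
    -- (string comparison semantics differ from numeric; this only shows the plan effect):
    VARIABLE clo VARCHAR2(10)
    VARIABLE chi VARCHAR2(10)
    EXEC :clo := '123'
    EXEC :chi := '456'
    SELECT * FROM large_table_demo WHERE pk_id BETWEEN :clo AND :chi;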

  • Low performance when querying the data dictionary on Oracle 11g

    Query:
    SELECT
    ucc_fk.column_name FK_COLUMN_NAME,
    ucc_pk.table_name PK_TABLE_NAME,
    ucc_pk.column_name PK_COLUMN_NAME
    FROM user_constraints uc_fk
    INNER JOIN user_cons_columns ucc_fk ON (ucc_fk.constraint_name = uc_fk.constraint_name)
    INNER JOIN user_constraints uc_pk ON (uc_pk.constraint_name = uc_fk.r_constraint_name)
    INNER JOIN user_cons_columns ucc_pk ON (ucc_pk.constraint_name = uc_fk.r_constraint_name AND ucc_pk.position = ucc_fk.position)
    WHERE
    uc_fk.constraint_type = 'R' AND
    uc_pk.constraint_type = 'P' AND
    uc_fk.table_name = 'TABLE_NAME';
    works OK on 10g but is very slow on 11g. How can I improve performance?

    You don't need to join to user_constraints again unless you are trying to avoid references to a unique key.
    SELECT ucc_fk.column_name FK_COLUMN_NAME, ucc_pk.table_name PK_TABLE_NAME, ucc_pk.column_name PK_COLUMN_NAME
      FROM user_constraints uc_fk
           JOIN user_cons_columns ucc_fk
              ON (ucc_fk.constraint_name = uc_fk.constraint_name)
           JOIN user_cons_columns ucc_pk
              ON (ucc_pk.constraint_name = uc_fk.r_constraint_name
              AND ucc_pk.position = ucc_fk.position)
    WHERE uc_fk.constraint_type = 'R'
       AND uc_fk.table_name = 'TABLE_NAME';
    As you can see, I have removed the join to user_constraints and hence the condition uc_pk.constraint_type = 'P'.
    The reason being that r_constraint_name is already validated to reference a constraint of type 'P' or 'U'.
    G.
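    A further remedy that is commonly suggested for data-dictionary queries that slow down after an upgrade (not mentioned in the reply above, so verify it applies to your system and run it with DBA privileges) is refreshing dictionary and fixed-object statistics; a sketch:
    -- Refresh optimizer statistics for the SYS-owned dictionary tables and for
    -- the X$ fixed objects that many dictionary views are built on.
    EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
    EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;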

  • Performance problem querying multiple CLOBS

    We are running Oracle 8.1.6 Standard Edition on Sun E420r, 2 X 450Mhz processors, 2 Gb memory
    Solaris 7. I have created Oracle Text indexes on several columns in a large table, including VARCHAR2 and CLOB columns. I am simulating search-engine queries where the user chooses to find matches on the exact phrase, all of the words (AND), or any of the words (OR). I am hitting performance problems when querying on multiple CLOBs using OR, e.g.
    select count(*) from articles
    where contains (abstract , 'matter OR dark OR detection') > 0
    or contains (subject , 'matter OR dark OR detection') > 0
    Columns abstract and subject are CLOBs. However, this query works fine with AND:
    select count(*) from articles
    where contains (abstract , 'matter AND dark AND detection') > 0
    or contains (subject , 'matter AND dark AND detection') > 0
    The explain plan gives a cost of 2157 for OR and 14.3 for AND.
    I realise that multiple contains are not a good thing, but the AND returns sub-second, and the OR is taking minutes! The indexes are created thus:
    create index article_abstract_search on article(abstract)
    INDEXTYPE IS ctxsys.context parameters ('STORAGE mystore memory 52428800');
    The data and index tables are on separate tablespaces.
    Can anyone suggest what is going on here, and any alternatives?
    Many thanks,
    Geoff Robinson

    Thanks for your reply, Omar.
    I have read the performance FAQ already, and it points out single CONTAINS clauses are preferred, but I need to check 2 columns. Also, I don't just want a count(*), I will need to select field values. As you can see from my 2 queries, the first has multiple CLOB columns using OR, and the second AND, with the second taking that much longer. Even with only a single CONTAINS, the cost estimate is 5 times more for OR than for AND.
    Add an extra CONTAINS and it becomes 300 times more costly!
    The root table is 3 million rows, the 2 token tables have 6.5 and 3 million rows respectively. All tables have been fully analyzed.
    Regards
    Geoff
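    One alternative sometimes used to avoid two CONTAINS calls altogether is a MULTI_COLUMN_DATASTORE, which feeds both columns into a single Text index; treat the sketch below as an assumption to verify, since that datastore type may not be available on an 8.1.6 installation (the preference and index names are made up, and the storage clause mirrors the one in the post):
    BEGIN
      ctx_ddl.create_preference('article_mcds', 'MULTI_COLUMN_DATASTORE');
      ctx_ddl.set_attribute('article_mcds', 'COLUMNS', 'abstract, subject');
    END;
    /
    CREATE INDEX article_text_search ON articles(abstract)
      INDEXTYPE IS ctxsys.context
      PARAMETERS ('DATASTORE article_mcds STORAGE mystore MEMORY 52428800');
    -- A single CONTAINS on the indexed column now searches both columns:
    SELECT COUNT(*)
      FROM articles
     WHERE CONTAINS (abstract, 'matter OR dark OR detection') > 0;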

  • Time Out and slow performance of query

    Hi Experts,
    We have a MultiProvider giving sales and stock data. With 1000 articles and 200 sites this query doesn't respond at all; it either times out or the Analyzer status shows "not responding". However, with a single article as input, the query responds in 60 sec. Please help.
    Best gds
    SumaMani

    Hi,
    Did you run the query in RSRT and click on "Performance Info"?
    It will give some messages so that you can take steps to improve the performance of the query.
    Regards,
    Rama Murthy.

  • Performance optimization: query taking 7 minutes

    Hi All ,
    Requirement: I need to improve the performance of a custom program (the program takes more than 7 minutes and then dumps). I checked the runtime analysis and the query mentioned below is taking the most time.
    Please let me know the approach to minimize the query time.
    TYPES: BEGIN OF lty_dberchz1,
               belnr    TYPE dberchz1-belnr,
               belzeile TYPE dberchz1-belzeile,
               belzart  TYPE dberchz1-belzart,
               buchrel  TYPE dberchz1-buchrel,
               tariftyp TYPE dberchz1-tariftyp,
               tarifnr  TYPE dberchz1-tarifnr,
               v_zahl1  TYPE dberchz1-v_zahl1,
               n_zahl1  TYPE dberchz1-n_zahl1,
               v_zahl3  TYPE dberchz1-v_zahl3,
               n_zahl3  TYPE dberchz1-n_zahl3,
               nettobtr TYPE dberchz3-nettobtr,
               twaers   TYPE dberchz3-twaers,
             END   OF lty_dberchz1.
      DATA: lt_dberchz1 TYPE SORTED TABLE OF lty_dberchz1
            WITH NON-UNIQUE KEY belnr belzeile
            INITIAL SIZE 0 WITH HEADER LINE.
    DATA: lt_dberchz1a LIKE TABLE OF lt_dberchz1 WITH HEADER LINE.
    *** ***********************************Taking more time*************************************************
    *Individual line items
        SELECT dberchz1~belnr dberchz1~belzeile
               belzart buchrel tariftyp tarifnr
               v_zahl1 n_zahl1 v_zahl3 n_zahl3
               nettobtr twaers
          INTO TABLE lt_dberchz1
          FROM dberchz1 JOIN dberchz3
          ON dberchz1~belnr = dberchz3~belnr
          AND dberchz1~belzeile = dberchz3~belzeile
          WHERE buchrel  EQ 'X'.
        DELETE lt_dberchz1 WHERE belzart NOT IN r_belzart.     
        LOOP AT lt_dberchz1.
          READ TABLE lt_dberdlb BINARY SEARCH
          WITH KEY billdoc = lt_dberchz1-belnr.
          IF sy-subrc NE 0.
            DELETE lt_dberchz1.
          ENDIF.
        ENDLOOP.
        lt_dberchz1a[] = lt_dberchz1[].
        DELETE lt_dberchz1 WHERE belzart EQ 'ZUTAX1'
                              OR belzart EQ 'ZUTAX2'
                              OR belzart EQ 'ZUTAX3'.
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
    ***************************second query************************************
    *  SELECT opbel budat vkont partner sto_opbel
        INTO CORRESPONDING FIELDS OF TABLE lt_erdk
        FROM erdk
        WHERE budat IN r_budat
          AND druckdat   NE '00000000'
          AND stokz      EQ space
          AND intopbel   EQ space
          AND total_amnt GT 40000.
    **************************taking more time*********************************
      SORT lt_erdk BY opbel.
      IF lt_erdk[] IS NOT INITIAL.
        SELECT DISTINCT printdoc billdoc vertrag
          INTO CORRESPONDING FIELDS OF TABLE lt_dberdlb
          FROM dberdlb
    * begin of code change by vishal
          FOR ALL ENTRIES IN lt_erdk
          WHERE printdoc = lt_erdk-opbel.
        IF lt_dberdlb[] IS NOT INITIAL.
          SELECT belnr belzart ab bis aus01
                 v_zahl1 n_zahl1 v_zahl3 n_zahl3
            INTO CORRESPONDING FIELDS OF TABLE lt_dberchz1
            FROM dberchz1
            FOR ALL ENTRIES IN lt_dberdlb
            WHERE belnr   EQ lt_dberdlb-billdoc
              AND belzart IN ('ZUTAX1', 'ZUTAX2', 'ZUTAX3').
        ENDIF. "lt_dberdlb
       endif.
    Regards
    Rahul

    Run the SQL Trace and tell us where the time is spent,
    see here how to use it:
    SELECT dberchz1~belnr dberchz1~belzeile
               belzart buchrel tariftyp tarifnr
               v_zahl1 n_zahl1 v_zahl3 n_zahl3
               nettobtr twaers
          INTO TABLE lt_dberchz1
          FROM dberchz1 JOIN dberchz3
          ON dberchz1~belnr = dberchz3~belnr
          AND dberchz1~belzeile = dberchz3~belzeile
          WHERE buchrel  EQ 'X'.
    I assume it is this SELECT, but without data this is quite useless.
    How large are the two tables dberchz1 and dberchz3?
    What are the key fields?
    Is there an index on buchrel?
    Please use aliases:  dberchz1 AS a
                                  INNER JOIN dberchz3 AS b
    To which table does buchrel belong?
    I don't know your tables, but buchrel EQ 'X' seems not selective, so a lot of data
    might be selected.
    lt_dberchz1 TYPE SORTED TABLE OF lty_dberchz1
            WITH NON-UNIQUE KEY belnr belzeile
            INITIAL SIZE 0 WITH HEADER LINE.
        DELETE lt_dberchz1 WHERE belzart NOT IN r_belzart.     
        LOOP AT lt_dberchz1.
          READ TABLE lt_dberdlb BINARY SEARCH
          WITH KEY billdoc = lt_dberchz1-belnr.
          IF sy-subrc NE 0.
            DELETE lt_dberchz1.
          ENDIF.
        ENDLOOP.
        lt_dberchz1a[] = lt_dberchz1[].
        DELETE lt_dberchz1 WHERE belzart EQ 'ZUTAX1'
                              OR belzart EQ 'ZUTAX2'
                              OR belzart EQ 'ZUTAX3'.
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
    This is really poor coding: there is a sorted table ... nice, but a completely different key is
    needed and used ... useless.
    Then there is a loop which is anyway full processing; no sort is necessary.
    Where is the read if you use binary search on table lt_dberdlb?
    Then the tables are again processed completely ...
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
    What is that ???? Are you sure that anything can survive this delete???
    Siegfried

  • Performance Tuning : Query session

    Dear Friends,
    I am working with Hyperion Interactive Reporting and I am very new to this environment. I have a query session with 11 tables, all simple joins. Whenever I process the query it takes a long time to fetch the data; of course it has millions of records. Do you have any idea how I can reduce the query processing time? Also, please tell me what things I need to do and what things I should avoid - any query performance tips in Brio.
    Best Regards,
    S.Murugan

    Query Performance is based on a variety of factors.
    - Network speed
    - size of the dataset returned -- Are you really bringing back 1 million rows?
    - properly tuned database -- Capture the SQL and have a DBA review it
    - properly created query - correct order of tables in the FROM clause -- This is based on the order they were brought into the data model section
    Wayne Van Sluys
    TopDown Consulting

  • Performance Tuning Query on Large Tables

    Hi All,
    I am new to the forums and have a very specific use case which requires performance tuning, but there are some limitations on what changes I am actually able to make to the underlying data. Essentially I have two tables which contain what should be identical data, but for reasons of a less than optimal operational nature, the datasets are different in a number of ways.
    Essentially I am querying call record detail data. Table 1 (referred to in my test code as TIME_TEST) is what I want to consider the master data, or the "ultimate truth" if you will. Table 1 contains the CALLED_NUMBER, which is always in a consistent format. It also contains the CALLED_DATE_TIME and DURATION (in seconds).
    Table 2 (TIME_TEST_COMPARE) is a reconciliation table taken from a different source but there is no consistent unique identifiers or PK-FK relations. This table contains a wide array of differing CALLED_NUMBER formats, hugely different to that in the master table. There is also scope that the time stamp may be out by up to 30 seconds, crazy I know, but that's just the way it is and I have no control over the source of this data. Finally the duration (in seconds) can be out by up to 5 seconds +/-.
    I want to create a join returning all of the master data and matching the master table to the reconciliation table on CALLED_NUMBER / CALL_DATE_TIME / DURATION. I have written the query, which works from a logic perspective but performs very badly (master table = 200,000 records, rec table = 6,000,000+ records). I am able to add partitions (currently the tables are partitioned by month of CALL_DATE_TIME) and can also apply indexes. I cannot make any changes at this time to the ETL process loading the data into these tables.
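    For concreteness, here is a sketch of one possible shape for that join, using the table and column names from the scripts below; the number-normalisation rule (strip non-digits, compare the last 10 digits) is purely an assumption to adapt to your real formats, and the tolerances mirror the +/-60 second and +/-5 second allowances noted in the script comments:
    SELECT m.*
      FROM time_test m
      LEFT JOIN time_test_compare c
        ON  SUBSTR(REGEXP_REPLACE(c.called_number, '[^0-9]'), -10) = m.called_number
        AND c.called_date_time BETWEEN m.called_date_time - 60/86400
                                   AND m.called_date_time + 60/86400
        AND c.duration BETWEEN m.duration - 5 AND m.duration + 5;
    -- A function-based index on the normalised compare-side number would help this join.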
    I paste below the create table and insert scripts to recreate my scenario & the query that I am using. Any practical suggestions for query / table optimisation would be greatly appreciated.
    Kind regards
    Mike
    -------------- NOTE: ALL DATA HAS BEEN DE-SENSITISED
    /* --- CODE TO CREATE AND POPULATE TEST TABLES ---- */
    --CREATE MAIN "TIME_TEST" TABLE: THIS TABLE HOLDS CALLED NUMBERS IN A SPECIFIED/PRE-DEFINED FORMAT
    CREATE TABLE TIME_TEST ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                            CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    -- CREATE THE COMPARISON TABLE "TIME_TEST_COMPARE": THIS TABLE HOLDS WHAT SHOULD BE (BUT ISN'T) IDENTICAL CALL DATA.
    -- THE DATA CONTAINS DIFFERING NUMBER FORMATS, SLIGHTLY DIFFERENT CALL TIMES (ALLOW +/-60 SECONDS - THIS IS FOR A GOOD, ALBEIT UNHELPFUL, REASON)
    -- AND DURATIONS (ALLOW +/- 5 SECS)                                        
    CREATE TABLE TIME_TEST_COMPARE ( CALLED_NUMBER VARCHAR2(50 BYTE),
                                       CALLED_DATE_TIME DATE, DURATION NUMBER );
    COMMIT;
    --CREATE INSERT DATA FOR THE MAIN TEST TIME TABLE
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 202);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 08:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 19);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 07:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 35);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 09:10:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 30);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:18:47 AM', 'MM/DD/YYYY HH:MI:SS AM'), 6);
    INSERT INTO TIME_TEST ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:20:21 AM', 'MM/DD/YYYY HH:MI:SS AM'), 20);
    COMMIT;
    -- CREATE INSERT DATA FOR THE TABLE WHICH NEEDS TO BE COMPARED:
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '7721345675', TO_DATE( '11/09/2011 06:10:51 AM', 'MM/DD/YYYY HH:MI:SS AM'), 200);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '00447721345675', TO_DATE( '11/09/2011 08:10:59 AM', 'MM/DD/YYYY HH:MI:SS AM'), 21);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '07721345675', TO_DATE( '11/09/2011 07:11:20 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675', TO_DATE( '11/09/2011 09:10:01 AM', 'MM/DD/YYYY HH:MI:SS AM'), 33);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '+447721345675#181345', TO_DATE( '11/09/2011 06:18:35 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 6);
    INSERT INTO TIME_TEST_COMPARE ( CALLED_NUMBER, CALLED_DATE_TIME,
    DURATION ) VALUES (
    '004477213456759777799', TO_DATE( '11/09/2011 06:19:58 AM', 'MM/DD/YYYY HH:MI:SS AM')
    , 17);
    COMMIT;
    /* --- QUERY TO UNDERTAKE MATCHING WHICH REQUIRES OPTIMISATION --------- */
    SELECT MAIN.CALLED_NUMBER AS MAIN_CALLED_NUMBER, MAIN.CALLED_DATE_TIME AS MAIN_CALL_DATE_TIME, MAIN.DURATION AS MAIN_DURATION,
         COMPARE.CALLED_NUMBER AS COMPARE_CALLED_NUMBER, COMPARE.CALLED_DATE_TIME AS COMPARE_CALLED_DATE_TIME,
         COMPARE.DURATION AS COMPARE_DURATION
    FROM
    (
    SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
    FROM TIME_TEST
    ) MAIN
    LEFT JOIN
    (
    SELECT CALLED_NUMBER, CALLED_DATE_TIME, DURATION
    FROM TIME_TEST_COMPARE
    ) COMPARE
    ON INSTR(COMPARE.CALLED_NUMBER, MAIN.CALLED_NUMBER) <> 0
    AND MAIN.CALLED_DATE_TIME BETWEEN COMPARE.CALLED_DATE_TIME-(60/86400) AND COMPARE.CALLED_DATE_TIME+(60/86400)
    AND MAIN.DURATION BETWEEN COMPARE.DURATION-5 AND COMPARE.DURATION+5;  -- duration is in seconds, allowed to differ by up to 5 seconds from the reconciliation row

    What does your execution plan look like?
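    One direction that may be worth exploring alongside the plan (a sketch only, not tested against the real data): the INSTR join condition is not sargable, so every master row has to be compared against every reconciliation row. If the messy numbers can be normalised to the 10-digit master format up front, the join becomes a plain equality plus a date window, which the optimiser can hash-join or drive from an index. The REGEXP_REPLACE patterns below are assumptions based only on the sample formats shown in the post ('+44...', '0044...', a leading '0', and '#...' suffixes), and compare_norm is just an illustrative name:
    WITH compare_norm AS (
      SELECT SUBSTR(
               REGEXP_REPLACE(
                 REGEXP_REPLACE(called_number, '#.*$', ''),    -- drop '#181345'-style suffixes
                 '^(0044|\+44|0)', ''),                        -- strip the UK prefixes seen in the sample data
               1, 10) AS called_number_norm,                   -- keep the first 10 digits (handles trailing extra digits)
             called_date_time,
             duration
        FROM time_test_compare
    )
    SELECT m.called_number      AS main_called_number,
           m.called_date_time   AS main_called_date_time,
           m.duration           AS main_duration,
           c.called_number_norm AS compare_called_number,
           c.called_date_time   AS compare_called_date_time,
           c.duration           AS compare_duration
      FROM time_test m
      LEFT JOIN compare_norm c
        ON c.called_number_norm = m.called_number
       AND m.called_date_time BETWEEN c.called_date_time - 60/86400
                                  AND c.called_date_time + 60/86400
       AND m.duration BETWEEN c.duration - 5 AND c.duration + 5;
    Since the normalisation is deterministic, the same expression could also back a virtual column or function-based index on TIME_TEST_COMPARE if the equality join needs index support.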

  • CONNECT BY PRIOR and performance of Query Plan

    Anyone,
    I have an SQL statement that is performing rather slowly and I am trying to figure out whether I can optimize it. Here is the SQL:
       SELECT/*+ index(MAXIMO.EQNDX99) */
            maximo.equipment.eqnum, maximo.equipment.parent, LEVEL
       FROM maximo.equipment@maxi_dblink
       WHERE parent = :b1 CONNECT BY PRIOR eqnum = parent
        ORDER BY eqnum, LEVEL
    After some research on this board I followed the advice to create indexes on the table for both (eqnum, parent) and (parent, eqnum) -- EQNDX99 and EQNDX999 respectively.
    Now the query plan for this query shows the following:
    SELECT STATEMENT (REMOTE)
       SORT (ORDER BY)
          FILTER
             CONNECT BY
                 INDEX (FAST FULL SCAN) EQNDX99 (NON-UNIQUE)
                  TABLE ACCESS (BY USER ROWID) EQUIPMENT
                  INDEX (RANGE SCAN) EQNDX999 (NON-UNIQUE)
    Now it appears to be using both indexes, but it is operating through a DB link. Is there anything else I can do to increase performance? It appears to be using the hint through the link as well.
    Thanks for any help I can get,
    David Miller

    How long does it take to complete the query?
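    A hedged observation on the statement itself, based only on the fragment posted (a sketch, not a drop-in fix): as written there is no START WITH clause, so Oracle treats every row as a root of the hierarchy and only then applies the WHERE parent = :b1 filter. If the intent is simply the subtree underneath :b1, the conventional form seeds the traversal instead, which gives the (parent, eqnum) index something to drive from:
    SELECT eqnum, parent, LEVEL
      FROM maximo.equipment@maxi_dblink
     START WITH parent = :b1
     CONNECT BY PRIOR eqnum = parent
     ORDER BY eqnum, LEVEL;
    Whether that helps across the database link depends on where the CONNECT BY is evaluated; if the remote table is reasonably static, copying it locally (for example into a materialized view) and walking the hierarchy there is another option worth testing.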

  • Performance of query , What if db_recycle_cache_size is zero

    Hi
    In our 11g database we have some objects for which the default buffer pool is the recycle pool, but I observed that the recycle pool size is zero (db_recycle_cache_size = 0).
    Now if we issue a SQL statement which needs to access these objects, what happens? I strongly think that, as there is no recycle bin, it will go into the traditional default buffer cache and obey the normal LRU algorithm. Am I missing something here?
    The issue we face is that we have a query which picks the correct index but takes around 3 minutes. I see that the step which takes the most time is an index range scan, which fetches around 50k records and accounts for 95% of the whole query execution time. Then I observed that the index is configured with the recycle pool as its default pool. If I rerun the same query again and again, execution times are close to zero (no wonder: no physical reads in the subsequent executions).
    I am thinking of setting up the recycle pool. What else may I need to consider in tuning this query?
    Thanks and Regards
    Pram

    >
    Now if we issue a SQL statement which needs to access these objects, what happens? I strongly think that, as there is no recycle bin, it will go into the traditional default buffer cache and obey the normal LRU algorithm. Am I missing something here?
    >
    Recycle bin? What does that have to do with anything?
    You are correct - with no keep or recycle cache the default buffer cache and LRU are used for aging.
    >
    I am thinking of setting up the recycle pool.
    >
    Why - it doesn't sound like you know the purpose of the recycle pool. See '7.2.4 Considering Multiple Buffer Pools' in the Performance Tuning Guide
    http://docs.oracle.com/cd/B28359_01/server.111/b28274/memory.htm
    >
    With segments that have atypical access patterns, store blocks from those segments in two different buffer pools: the KEEP pool and the RECYCLE pool. A segment's access pattern may be atypical if it is constantly accessed (that is, hot) or infrequently accessed (for example, a large segment accessed by a batch job only once a day).
    Multiple buffer pools let you address these differences. You can use a KEEP buffer pool to maintain frequently accessed segments in the buffer cache, and a RECYCLE buffer pool to prevent objects from consuming unnecessary space in the cache. When an object is associated with a cache, all blocks from that object are placed in that cache. Oracle maintains a DEFAULT buffer pool for objects that have not been assigned to a specific buffer pool. The default buffer pool is of size DB_CACHE_SIZE. Each buffer pool uses the same LRU replacement policy (for example, if the KEEP pool is not large enough to store all of the segments allocated to it, then the oldest blocks age out of the cache).
    By allocating objects to appropriate buffer pools, you can:
    •Reduce or eliminate I/Os
    •Isolate or limit an object to a separate cache
    >
    Using a recycle pool isn't going to affect that initial 3-minute time. It would keep other things from being aged out of the default cache when the index blocks are loaded.
    Further, using a recycle pool could cause those 'close to zero' times for the second and third accesses to increase, if the index blocks from the first query were 'recycled' so another query could use the buffers. Recycle means: throw it away when you are done, if you want.
    >
    What else may I need to consider in tuning this query?
    >
    If it ain't broke, don't fix it. You haven't shown that there is anything wrong with the query you are talking about. How could we possibly know whether 3 minutes is really slow or really fast? You haven't posted the query, an execution plan, row counts for the tables, or counts for the filter predicates.
    See this AskTom article for his take on the RECYCLE and other pools.
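    If, after reviewing the plan and row counts, the index's recycle-pool assignment does turn out to matter, checking and changing the assignment is straightforward. A minimal sketch, using a hypothetical index name (take the real segment name from the execution plan):
    -- Which buffer pool is the segment currently assigned to?
    SELECT segment_name, segment_type, buffer_pool
      FROM dba_segments
     WHERE segment_name = 'IDX_CALLS_RANGE';   -- hypothetical index name
    -- Move the index back to the DEFAULT pool so its blocks age out under the normal LRU
    ALTER INDEX idx_calls_range STORAGE (BUFFER_POOL DEFAULT);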
