Optimizer parameters different in 10053 trace

Hello,
The optimizer settings and the ones reported in the 10053 trace do not match. Is this a known issue? The version is shown in the output below.
Here, optimizer_mode is set to ALL_ROWS, but the 10053 trace reports it as first_rows_100. Similarly, optimizer_index_cost_adj is 1, but the trace shows 25.
The query does not use any hints.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> show parameter opti
NAME                                 TYPE        VALUE
filesystemio_options                 string      none
object_cache_optimal_size            integer     102400
optimizer_dynamic_sampling           integer     2
optimizer_features_enable            string      10.2.0.3
optimizer_index_caching              integer     100
optimizer_index_cost_adj             integer     1
optimizer_mode                       string      ALL_ROWS
optimizer_secure_view_merging        boolean     TRUE
plsql_optimize_level                 integer     2
SQL>
Contents of the 10053 trace:
PARAMETERS USED BY THE OPTIMIZER
  PARAMETERS WITH ALTERED VALUES
  sort_area_retained_size             = 65535
  optimizer_mode                      = first_rows_100
  optimizer_index_cost_adj            = 25
  optimizer_index_caching             = 100
  *********************************
I can see the same values used here as well.
Content of other_xml column
===========================
  db_version     : 10.2.0.3
  parse_schema   : COT_PLUS
  plan_hash      : 733167152
  Outline Data:
  /*+
    BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
      OPT_PARAM('optimizer_index_cost_adj' 25)
      OPT_PARAM('optimizer_index_caching' 100)
      FIRST_ROWS(100)
      OUTLINE_LEAF(@"SEL$5DA710D3")
      UNNEST(@"SEL$2")
      OUTLINE(@"SEL$1")
      OUTLINE(@"SEL$2")
      FULL(@"SEL$5DA710D3" "CDW"@"SEL$1")
      INDEX_RS_ASC(@"SEL$5DA710D3" "O"@"SEL$2" ("ORDERS"."STATUS_ID"))
      LEADING(@"SEL$5DA710D3" "CDW"@"SEL$1" "O"@"SEL$2")
      USE_NL(@"SEL$5DA710D3" "O"@"SEL$2")
    END_OUTLINE_DATA
  */
Rgds,
Gokul
Edited by: Gokul Gopal on 13-Jun-2012 03:14

Gokul,
Please report the output of the following, which checks the V$SES_OPTIMIZER_ENV view for the current session:
SELECT
  NAME,
  VALUE,
  ISDEFAULT
FROM
  V$SES_OPTIMIZER_ENV
WHERE
  SID=(SELECT SID FROM V$MYSTAT WHERE ROWNUM=1)
  AND NAME IN ('optimizer_mode','optimizer_index_cost_adj','optimizer_index_caching')
ORDER BY
  NAME;
In the same session, execute the following (your SQL statement with 1=1 added in the WHERE clause to produce a hard parse):
ALTER SESSION SET TRACEFILE_IDENTIFIER='OPTIMIZER_TEST';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
select * from A where 1=1 AND col1 = (select to_char(col1) from B where status in (16,12,22));
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
Take a look in the generated 10053 trace file. Are the values for optimizer_mode, optimizer_index_cost_adj, and optimizer_index_caching found in the OPTIMIZER_TEST 10053 trace file the same as those returned by the above SELECT from V$SES_OPTIMIZER_ENV?
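If you are unsure where that file is written, on 10.2 the trace goes to the directory pointed to by the user_dump_dest parameter, and the TRACEFILE_IDENTIFIER value ('OPTIMIZER_TEST') appears in the file name. A quick way to find the directory:
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'user_dump_dest';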
Charles Hooper
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.

Similar Messages

  • Optimizer choosing different plans when ROWNUM filter. [UPDATED: 11.2.0.1]

    I'm having a couple of issues with a query, and I can't figure out the best way to reach a solution.
    Platform Information
    Windows Server 2003 R2
    Oracle 10.2.0.4
    Optimizer Settings
    SQL > show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     2
    optimizer_features_enable            string      10.2.0.4
    optimizer_index_caching              integer     90
    optimizer_index_cost_adj             integer     30
    optimizer_mode                       string      ALL_ROWS
    optimizer_secure_view_merging        boolean     TRUE
    The query below is a simple "Top N" query, where the top result is returned. Here it is, with bind variables in the same locations as in the application code:
    SELECT     PRODUCT_DESC
    FROM (
         SELECT     PRODUCT_DESC
         ,     COUNT(*)     AS CNT
         FROM     USER_VISITS
         JOIN     PRODUCT     ON PRODUCT.PRODUCT_OID = USER_VISITS.PRODUCT_OID
         WHERE     PRODUCT.PRODUCT_DESC != 'Home'
         AND     VISIT_DATE
              BETWEEN
                   ADD_MONTHS(
                        TRUNC(
                             TO_DATE(
                                  :vCurrentYear
                             ,     'YYYY'
                             )
                        ,     'YEAR'
                        )
                   ,     3*(:vCurrentQuarter-1)
                   )
              AND
                   ADD_MONTHS(
                        TRUNC(
                             TO_DATE(
                                  :vCurrentYear
                             ,     'YYYY'
                             )
                        ,     'YEAR'
                        )
                   ,     3*:vCurrentQuarter
                   ) - INTERVAL '1' DAY
         GROUP BY PRODUCT_DESC
         ORDER BY CNT DESC
         )
    WHERE     ROWNUM <= 1;
    Explain Plan
    The explain plan I receive when running the query above.
    | Id  | Operation                         | Name                          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    |*  1 |  COUNT STOPKEY                    |                               |      1 |        |      1 |00:00:34.92 |   66343 |       |       |          |
    |   2 |   VIEW                            |                               |      1 |      1 |      1 |00:00:34.92 |   66343 |       |       |          |
    |*  3 |    FILTER                         |                               |      1 |        |      1 |00:00:34.92 |   66343 |       |       |          |
    |   4 |     SORT ORDER BY                 |                               |      1 |      1 |      1 |00:00:34.92 |   66343 |  2048 |  2048 | 2048  (0)|
    |   5 |      SORT GROUP BY NOSORT         |                               |      1 |      1 |     27 |00:00:34.92 |   66343 |       |       |          |
    |   6 |       NESTED LOOPS                |                               |      1 |      2 |  12711 |00:00:34.90 |   66343 |       |       |          |
    |   7 |        TABLE ACCESS BY INDEX ROWID| PRODUCT                       |      1 |     74 |     77 |00:00:00.01 |      44 |       |       |          |
    |*  8 |         INDEX FULL SCAN           | PRODUCT_PRODDESCHAND_UNQ      |      1 |      1 |     77 |00:00:00.01 |       1 |       |       |          |
    |*  9 |        INDEX FULL SCAN            | USER_VISITS#PK                |     77 |      2 |  12711 |00:00:34.88 |   66299 |       |       |          |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=1)
       3 - filter(ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1))<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURR
                  ENTYEAR),'YYYY'),'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
       8 - filter("PRODUCT"."PRODUCT_DESC"<>'Home')
       9 - access("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1)) AND
                  "USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID" AND "USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY')
                  ,'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
           filter(("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1)) AND
                  "USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2)
                  TO SECOND(0) AND "USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID"))
    Row Source Generation
    TKPROF Row Source Generation
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.01          0          0          0           0
    Fetch        2     35.10      35.13          0      66343          0           1
    total        4     35.10      35.14          0      66343          0           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 62 
    Rows     Row Source Operation
          1  COUNT STOPKEY (cr=66343 pr=0 pw=0 time=35132008 us)
          1   VIEW  (cr=66343 pr=0 pw=0 time=35131996 us)
          1    FILTER  (cr=66343 pr=0 pw=0 time=35131991 us)
          1     SORT ORDER BY (cr=66343 pr=0 pw=0 time=35131936 us)
         27      SORT GROUP BY NOSORT (cr=66343 pr=0 pw=0 time=14476309 us)
      12711       NESTED LOOPS  (cr=66343 pr=0 pw=0 time=22921810 us)
         77        TABLE ACCESS BY INDEX ROWID PRODUCT (cr=44 pr=0 pw=0 time=3674 us)
         77         INDEX FULL SCAN PRODUCT_PRODDESCHAND_UNQ (cr=1 pr=0 pw=0 time=827 us)(object id 52355)
       12711        INDEX FULL SCAN USER_VISITS#PK (cr=66299 pr=0 pw=0 time=44083746 us)(object id 52949)
    However, when I run the query with an ALL_ROWS hint I receive this explain plan (the reasoning for this can be found in Jonathan Lewis' response: http://www.freelists.org/post/oracle-l/ORDER-BY-and-first-rows-10-madness,4):
    | Id  | Operation                  | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT           |                |     1 |    39 |   223  (25)| 00:00:03 |
    |*  1 |  COUNT STOPKEY             |                |       |       |            |          |
    |   2 |   VIEW                     |                |     1 |    39 |   223  (25)| 00:00:03 |
    |*  3 |    FILTER                  |                |       |       |            |          |
    |   4 |     SORT ORDER BY          |                |     1 |    49 |   223  (25)| 00:00:03 |
    |   5 |      HASH GROUP BY         |                |     1 |    49 |   223  (25)| 00:00:03 |
    |*  6 |       HASH JOIN            |                |   490 | 24010 |   222  (24)| 00:00:03 |
    |*  7 |        TABLE ACCESS FULL   | PRODUCT        |    77 |  2849 |     2   (0)| 00:00:01 |
    |*  8 |        INDEX FAST FULL SCAN| USER_VISITS#PK |   490 |  5880 |   219  (24)| 00:00:03 |
    Predicate Information (identified by operation id):
       1 - filter(ROWNUM<=1)
       3 - filter(ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),3*(TO_NUMBER(:
                  VCURRENTQUARTER)-1))<=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),3*TO_N
                  UMBER(:VCURRENTQUARTER))-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
       6 - access("USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID")
       7 - filter("PRODUCT"."PRODUCT_DESC"<>'Home')
       8 - filter("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYY
                  Y'),'fmyear'),3*(TO_NUMBER(:VCURRENTQUARTER)-1)) AND
                  "USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),
               3*TO_NUMBER(:VCURRENTQUARTER))-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
    And the TKPROF Row Source Generation:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        3      0.51       0.51          0        907          0          27
    total        5      0.51       0.51          0        907          0          27
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 62 
    Rows     Row Source Operation
         27  FILTER  (cr=907 pr=0 pw=0 time=513472 us)
         27   SORT ORDER BY (cr=907 pr=0 pw=0 time=513414 us)
         27    HASH GROUP BY (cr=907 pr=0 pw=0 time=512919 us)
      12711     HASH JOIN  (cr=907 pr=0 pw=0 time=641130 us)
         77      TABLE ACCESS FULL PRODUCT (cr=5 pr=0 pw=0 time=249 us)
       22844      INDEX FAST FULL SCAN USER_VISITS#PK (cr=902 pr=0 pw=0 time=300356 us)(object id 52949)
    The query with the ALL_ROWS hint returns data instantly, while the other one takes about 70 times as long.
    Interestingly enough BOTH queries generate plans with estimates that are WAY off. The first plan is estimating 2 rows, while the second plan is estimating 490 rows. However the real number of rows is correctly reported in the Row Source Generation as 12711 (after the join operation).
    TABLE_NAME                       NUM_ROWS     BLOCKS
    USER_VISITS                        196044       1049
    INDEX_NAME                         BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR LAST_ANALYZED
    USER_VISITS#PK                          2         860        196002          57761 07/24/2009 13:17:59
    COLUMN_NAME                    NUM_DISTINCT LOW_VALUE            HIGH_VALUE                                 DENSITY     NUM_NULLS HISTOGRAM
    VISIT_DATE                           195900 786809010E0910       786D0609111328                      .0000051046452272          0 NONE
    I don't know how the first plan is estimating 2 rows, but I can compute the second plan's cardinality estimate by assuming a 5% selectivity for each of the TO_DATE() predicates:
    SQL > SELECT ROUND(0.05*0.05*196044) FROM DUAL;
    ROUND(0.05*0.05*196044)
                        490
    However, removing the bind variables (and clearing the shared pool) does not change the cardinality estimates at all.
    I would like to avoid hinting this plan if possible and that is why I'm looking for advice. I also have a followup question.
    Edited by: Centinul on Sep 20, 2009 4:10 PM
    See my last post for 11.2.0.1 update.

    Centinul wrote:
    You could potentially perform testing with either a CARDINALITY or OPT_ESTIMATE hint to see if the execution plan changes dramatically to improve performance. The question then becomes whether this will be sufficient to over-rule the first rows optimizer so that it does not use an index access path that avoids a sort.
    I tried doing that this morning by increasing the cardinality estimate for the USER_VISITS table to a value close to the real amount of data. However, the plan did not change.
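    For reference, a cardinality-hint test of the kind described above might look roughly like this (the hint value and the abbreviated date-range predicate are only illustrative; CARDINALITY is an undocumented hint):
    SELECT PRODUCT_DESC
    FROM (
      SELECT /*+ CARDINALITY(USER_VISITS 12711) */
             PRODUCT_DESC, COUNT(*) AS CNT
      FROM   USER_VISITS
      JOIN   PRODUCT ON PRODUCT.PRODUCT_OID = USER_VISITS.PRODUCT_OID
      WHERE  PRODUCT.PRODUCT_DESC != 'Home'
      AND    VISIT_DATE BETWEEN :vStartDate AND :vEndDate  -- date-range expressions shortened here
      GROUP BY PRODUCT_DESC
      ORDER BY CNT DESC
    )
    WHERE ROWNUM <= 1;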
    Could you use the ROW_NUMBER analytic function instead of ROWNUM?
    Interestingly enough, when I tried this it generated the same plan as was used with the ALL_ROWS hint, so I may implement this query for now.
    I do have two more followup questions:
    1. Even though a better plan is picked the optimizer estimates are still off by a large margin because of bind variables and 5%* 5% * NUM_ROWS. How do I get the estimates in-line with the actual values? Should I really fudge statistics?
    2. Should I raise a bug report with Oracle over the behavior of the original query?
    That is great that the ROW_NUMBER analytic function worked. You may want to perform some testing with it before implementing it in production to see whether Oracle performs significantly more logical or physical I/Os with the ROW_NUMBER analytic function compared to the ROWNUM solution with the ALL_ROWS hint.
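    One possible shape of the ROW_NUMBER rewrite (a sketch only; the exact statement Centinul used is not shown, and the date-range predicate is abbreviated):
    SELECT PRODUCT_DESC
    FROM (
      SELECT PRODUCT_DESC,
             COUNT(*) AS CNT,
             ROW_NUMBER() OVER (ORDER BY COUNT(*) DESC) AS RN
      FROM   USER_VISITS
      JOIN   PRODUCT ON PRODUCT.PRODUCT_OID = USER_VISITS.PRODUCT_OID
      WHERE  PRODUCT.PRODUCT_DESC != 'Home'
      AND    VISIT_DATE BETWEEN :vStartDate AND :vEndDate
      GROUP BY PRODUCT_DESC
    )
    WHERE RN = 1;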
    As Timur suggests, seeing a 10053 trace during a hard parse of both queries (with and without the ALL_ROWS hint) would help determine what is happening. It could be that a histogram exists which is feeding bad information to the optimizer, causing distorted cardinality in the plan. If bind peeking is used, the 5% * 5% rule might not apply, especially if a histogram is involved. Also, the WHERE clause includes "PRODUCT.PRODUCT_DESC != 'Home'" which might affect the cardinality in the plan.
    Your question may have prompted the starting of a thread in the SQL forum yesterday on the topic of ROWNUM, but it appears that thread was removed from the forum within the last couple hours.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Interpreting 10053 trace output

    Hi,
    I am trying to analyze one sql performance issue. In 10053 trace of the SQL, one analyzed index(IDX_OCI_INSTRMNT_ID) is reflected as Unanalyzed:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table: OCI  Alias: OCI  (Using composite stats)
        #Rows: 119379620  #Blks:  4412959  AvgRowLen:  205.00
    Index Stats::
      Index: IDX_OCI_BANK_BR_CODE  Col#: 15 16
        LVLS: 3  #LB: 578655  #DK: 20668  LB/K: 27.00  DB/K: 4191.00  CLUF: 86632520.00
      Index: IDX_OCI_INSTRMNT_AMT_ZDATE  Col#: 12 2
        LVLS: 3  #LB: 601715  #DK: 4917119  LB/K: 1.00  DB/K: 21.00  CLUF: 107389165.00
      Index: IDX_OCI_INSTRMNT_ID  Col#: 9
        PARTITION [6]    LVLS: 3  #LB: 55130  #DK: 24310  LB/K: 2.00  DB/K: 375.00  CLUF: 9132500.00
        (NOT ANALYZED)===========================================> The index is locally partitioned and analyzed.
        LVLS: 3  #LB: 55130  #DK: 24310  LB/K: 2.00  DB/K: 375.00  CLUF: 9132500.00
      Index: IDX_OUT_CLG_INSTRMNT_TABLE  Col#: 1 2 3 4 5 58
        USING COMPOSITE STATS
        LVLS: 3  #LB: 977360  #DK: 118952050  LB/K: 1.00  DB/K: 1.00  CLUF: 95676615.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#9): INSTRMNT_ID(VARCHAR2)
        AvgLen: 17.00 NDV: 1081566 Nulls: 0 Density: 9.2459e-07
      Column (#19): STATUS_FLG(CHARACTER)
        AvgLen: 2.00 NDV: 5 Nulls: 2459035 Density: 0.2
      Column (#12): INSTRMNT_AMT(NUMBER)
        AvgLen: 5.00 NDV: 1061850 Nulls: 0 Density: 9.4175e-07 Min: 0 Max: 4000020025000
      Table: OCI  Alias: OCI    
        Card: Original: 119379620  Rounded: 1  Computed: 0.00  Non Adjusted: 0.00
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  791920.71  Resp: 791920.71  Degree: 0
          Cost_io: 716556.00  Cost_cpu: 128124159092
          Resp_io: 716556.00  Resp_cpu: 128124159092
      Access Path: index (RangeScan)
        Index: IDX_OCI_INSTRMNT_AMT_ZDATE
        resc_io: 106.00  resc_cpu: 868831
        ix_sel: 9.4401e-07  ix_sel_with_filters: 9.4401e-07
        Cost: 21.30  Resp: 21.30  Degree: 1
      Access Path: index (AllEqRange)
        Index: IDX_OCI_INSTRMNT_ID
        resc_io: 114.00  resc_cpu: 934714
        ix_sel: 9.2459e-07  ix_sel_with_filters: 9.2459e-07
        Cost: 22.91  Resp: 22.91  Degree: 1
    ****** trying bitmap/domain indexes ******
      Access Path: index (AllEqRange)
        Index: IDX_OCI_INSTRMNT_ID
        resc_io: 4.00  resc_cpu: 50686
        ix_sel: 9.2459e-07  ix_sel_with_filters: 9.2459e-07
        Cost: 1.01  Resp: 1.01  Degree: 0
      Access Path: index (IndexOnly)
        Index: IDX_OCI_INSTRMNT_AMT_ZDATE
        resc_io: 4.00  resc_cpu: 51086
        ix_sel: 9.4401e-07  ix_sel_with_filters: 9.4401e-07
        Cost: 1.01  Resp: 1.01  Degree: 0
        SORT resource      Sort statistics
          Sort width:        3070 Area size:     1048576 Max Area size:   536862720
          Degree:               1
          Blocks to Sort:       1 Row size:           21 Total Rows:            112
          Initial runs:         1 Merge passes:        0 IO Cost / pass:          0
          Total IO sort cost: 0      Total CPU sort cost: 1734406
          Total Temp space used: 0
      ****** finished trying bitmap/domain indexes ******
      Best:: AccessPath: IndexRange  Index: IDX_OCI_INSTRMNT_AMT_ZDATE
           Cost: 21.30  Degree: 1  Resp: 21.30  Card: 0.00  Bytes: 0
    The query is using the IDX_OCI_INSTRMNT_AMT_ZDATE index, which results in 542,193.8 gets per execution, whereas if the query used the IDX_OCI_INSTRMNT_ID index the gets would drop to about 1,660 per execution.
    Since the IDX_OCI_INSTRMNT_ID index appears to the CBO as not analyzed, it is not picked for the plan. What could be the reason that the 10053 trace is reporting the index as not analyzed?
    Regards,
    S.K.
    Miscellaneous Information:
    The query:
    SELECT /*+ SANTU */ COUNT(1)
    FROM      oci
    where      oci.INSTRMNT_ID = LPAD( :b2 , :b3 , :b4 )
    AND      OCI.STATUS_FLG in ( :b5 , :b6 )
    AND       OCI.INSTRMNT_AMT = LPAD( :b7 , :b8 , :b9 )
    -------------------------------------------------------------------------+-----------------------------------+---------------+
    | Id  | Operation                            | Name                      | Rows  | Bytes | Cost  | Time      | Pstart| Pstop |
    -------------------------------------------------------------------------+-----------------------------------+---------------+
    | 0   | SELECT STATEMENT                     |                           |       |       |    21 |           |       |       |
    | 1   |  SORT AGGREGATE                      |                           |     1 |    24 |       |           |       |       |
    | 2   |   TABLE ACCESS BY GLOBAL INDEX ROWID | OUT_CLG_INSTRMNT_TABLE    |     1 |    24 |    21 |  00:00:01 | ROW LOCATION| ROW LOCATION|
    | 3   |    INDEX RANGE SCAN                  | IDX_OCI_INSTRMNT_AMT_ZDATE|   112 |       |     1 |  00:00:01 |       |       |
    -------------------------------------------------------------------------+-----------------------------------+---------------+
    Predicate Information:
    2 - filter(("OCI"."INSTRMNT_ID"=LPAD(:B2,:B3,:B4) AND INTERNAL_FUNCTION("OCI"."STATUS_FLG")))
    3 - access("OCI"."INSTRMNT_AMT"=TO_NUMBER(LPAD(:B7,:B8,:B9)))
    INDEX_NAME                     COLUMN_NAME                    COLUMN_POSITION
    IDX_OCI_INSTRMNT_AMT_ZDATE     INSTRMNT_AMT                                 1
    IDX_OCI_INSTRMNT_AMT_ZDATE     CLG_ZONE_DATE                                2
    IDX_OCI_INSTRMNT_ID            INSTRMNT_ID                                  1
    COLUMN_NAME          DATA_TYPE       NUM_DISTINCT  NUM_NULLS    DENSITY HISTOGRAM
    CLG_ZONE_DATE        DATE                     822          0 .001216545 NONE
    INSTRMNT_AMT         NUMBER               1010184          0 9.8992E-07 NONE
    INSTRMNT_ID          VARCHAR2             1077892          0 9.2774E-07 NONE

    INDEX_NAME                     COM PARTITION_NAME                 SUBPARTITION_COUNT     BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR LAST_ANALY GLO
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART1                                 0          3      134940          25616770 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART10                                0          2        2047            370408 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART11                                0          1          51              8503 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART2                                 0          2       67630          12927440 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART3                                 0          2       57235          10958505 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART4                                 0          2       51985           9934295 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART5                                 0          2       48395           9260565 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART6                                 0          2       50560           9680680 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART7                                 0          2       44935           8600995 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART8                                 0          2       41030           7858735 29-07-2010 YES
    IDX_OCI_INSTRMNT_ID            NO  IDXG1_OCI_PART9                                 0          3       73820          14134540 29-07-2010 YES
    COM = Composite
    GLO = Global_stats
    @Girish,
    I have already gone through the article by Wolfgang. The index should not be reflected as 'Not analyzed' in 10053 trace files.
    BTW, the database version is 10.2.0.4.
    Regards,
    S.K.
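    A dictionary check that might help here (a sketch, assuming access to the DBA views) is to compare the index-level (global) statistics row with the partition-level rows shown above, since the 10053 output prints the partition figures but flags the index-level figures as NOT ANALYZED:
    SELECT INDEX_NAME, NUM_ROWS, DISTINCT_KEYS, LEAF_BLOCKS,
           CLUSTERING_FACTOR, GLOBAL_STATS, USER_STATS, LAST_ANALYZED
    FROM   DBA_INDEXES
    WHERE  INDEX_NAME = 'IDX_OCI_INSTRMNT_ID';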

  • Optimizer parameters

    Hi,
    One of the our OLTP database has default settings for OPTIMIZER parameters.
    INDEX_COST_ADJ,MAX_PERMUTATIONS,DB_MULTIBLOCK values.
    Its been on production for some time now.
    What will happen if I change these parameters? How much will it cost me, and will my query plans change?
    Is it advisable to change them once the system is in production?
    Regards
    MMU

    Hi MMU,
    What will happen if I change these parameters?
    What release are you on? It makes a BIG difference!
    OICA is a "silver bullet" parm, one whose setting will have a profound impact on performance, both good and bad:
    http://www.amazon.com/Oracle-Silver-Bullets-Performance-Focus/dp/0975913522
    On 10g and later, ALWAYS attempt to address the root cause of the performance issue (usually with dbms_stats) before resorting to changing OICA. Here are my notes:
    http://www.dba-oracle.com/t_global_sql_optimization.htm
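    For example, a schema-level statistics refresh might look like this (a sketch only; the schema name is a placeholder and the METHOD_OPT choice depends on your data):
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'APP_OWNER',                     -- placeholder schema name
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE);                           -- also gather index statistics
    END;
    /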

  • Same Effects Panel parameters-different results on Tiff & Psd file format?

    Just a quick question to all the users and the Adobe Lightroom team & see if anybody has the same problem.
    I have one file saved in a Tiff format & the exact duplicate saved in a PSD; however, when I sync the Effects Panel settings between the two, I get different results. The difference isn't black & white but is clearly visible. The grain in the PSD file is less coarse & pronounced, and the post-crop vignetting is much more subtle. The color balance is also different. In other words, every adjustment seems to be more subtle & less pronounced on a PSD file compared to a Tiff file.
    Not the end of the world but clearly something to keep in mind when syncing settings between the two formats and something that shouldn't be there in the first place.
    Thanks

    Now you are providing more detail.
    "... but lesser quality at least in terms of natural movement."
    It sounds very much like 11 is exporting 25p/30p, which is why you see non-smooth motion.
    25p/30p is the result of deinterlacing 50i/60i. When you take two FIELDS that are 1/50th or 1/60th apart and create 1 image, these images will be 1/25th or 1/30th second apart.
    You can create 1 frame through several methods.
    With iM09, deinterlacing occurred only if you imported as LARGE, OPTIMIZED video, or used a function or FX that scales video. Perhaps with 11, Apple has decided that it's time to FORCE any kind of interlaced video to be deinterlaced.
    In other words, LARGE and FULL only relate to size.
    One possible advantage is that Apple has finally decided to deinterlace using BLEND during import. This would look better, but it creates 25p or 30p video.
    They may be hinting it's time to buy a progressive camera.

  • Showing parameters different in Chrome and IE in Report Server

    Hi All
    I have created a report by SSDT ( Sql Server Data Tools) and deploy it on Report Server.
    I can see the report in chrome and IE
    but the problem that I have is about the parameters.
    I have two parameters for Start Date and End Date.
    In Chrome I can't select the dates to run the report for a period of time, but it works correctly in IE, exactly as I designed it and as it appears in the report viewer in SSDT.

    Hi Ensy,
    Simply put, we are preventing spam. Typical users will not need to take any additional action in order to have their account verified. However, if you would like to expedite this process, please reply below:
    https://social.msdn.microsoft.com/Forums/en-US/d03e16a7-e911-463c-b86c-02c79a6398a2/verify-your-account-23?forum=reportabug
    Furthermore, you'll be verified automatically if you've contributed to the forum with recognitions (points, replies, answers, etc.) since registering.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Why is optimizer different in sql than in pl/sql?

    Hello, I am running Oracle database 9i, it is running on an On-Demand hosted server, its purpose is to serve our Oracle Payroll application.
    I have a certain query, and I first tried it as a stand-alone query. It displayed the results almost immediately.
    Then I inserted the exact same query into a PL/SQL block. It took more than one hour and I canceled it. By looking at the session browser in TOAD, I saw that it was stuck on that query.
    I also saw that the explain-plan for the query changed drastically from running as stand-alone sql to running inside pl/sql.
    Why is this happening? How can I avoid this?
    Please note that I do not intend to use optimizer hints, since the indexes are built by Oracle On-Demand and not by me, so I don't know which indexes exist.
    Here is the explain plan when running from stand-alone SQL:
    Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
    SELECT STATEMENT Optimizer Mode=CHOOSE          1           65
      SORT AGGREGATE          1      72
        TABLE ACCESS BY INDEX ROWID     HR.PAY_ELEMENT_ENTRY_VALUES_F     1      13      4
          NESTED LOOPS          1      72      65
            MERGE JOIN CARTESIAN          1      59      61
              TABLE ACCESS BY INDEX ROWID     HR.PAY_ELEMENT_ENTRY_VALUES_F     1      13      4
                NESTED LOOPS          1      36      33
                  TABLE ACCESS BY INDEX ROWID     HR.PAY_ELEMENT_ENTRIES_F     1      23      29
                    INDEX RANGE SCAN     HR.PAY_ELEMENT_ENTRIES_F_N50     44           3
                  INDEX RANGE SCAN     HR.PAY_ELEMENT_ENTRY_VALUES_F_N50     9           3
              BUFFER SORT          1      23      57
                TABLE ACCESS BY INDEX ROWID     HR.PAY_ELEMENT_ENTRIES_F     1      23      28
                  INDEX RANGE SCAN     HR.PAY_ELEMENT_ENTRIES_F_N50     44           2
            INDEX RANGE SCAN     HR.PAY_ELEMENT_ENTRY_VALUES_F_N50     9           3
    And here is the plan when running inside pl/sql:
    Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
    SELECT STATEMENT Optimizer Mode=CHOOSE 1 293463
      SORT AGGREGATE 1 48
        MERGE JOIN CARTESIAN 1 M 56 M 293463
          MERGE JOIN CARTESIAN 78 3 K 3771
            MERGE JOIN CARTESIAN 1 34 57
              TABLE ACCESS BY INDEX ROWID HR.PAY_ELEMENT_ENTRIES_F 1 17 29
                INDEX RANGE SCAN HR.PAY_ELEMENT_ENTRIES_F_N50 44 3
            BUFFER SORT 1 17 28
              TABLE ACCESS BY INDEX ROWID HR.PAY_ELEMENT_ENTRIES_F 1 17 28
                INDEX RANGE SCAN HR.PAY_ELEMENT_ENTRIES_F_N50 44 2
          BUFFER SORT 15 K 107 K 3743
            TABLE ACCESS BY INDEX ROWID HR.PAY_ELEMENT_ENTRY_VALUES_F 15 K 107 K 3714
              INDEX RANGE SCAN HR.PAY_ELEMENT_ENTRY_VALUES_F_N1 15 K 39
        BUFFER SORT 15 K 107 K 289749
          TABLE ACCESS BY INDEX ROWID HR.PAY_ELEMENT_ENTRY_VALUES_F 15 K 107 K 3714
            INDEX RANGE SCAN HR.PAY_ELEMENT_ENTRY_VALUES_F_N1 15 K 39
    As you can see, the cost goes way up when running inside a pl/sql block.
    Thank you very much for your help.
    Eduardo Schñadower
    Edited by: shinaco on Dec 12, 2008 4:10 PM
    Added indentation. Sorry for that, I didn't realize copy-paste didn't work well here.

    I meant to add,
    Why is optimizer different in sql than in pl/sql? It is not different, it is the same optimizer.
    If you replace bind variables with literals then you have a totally different query, with much more information available to the optimizer. If you are seeing the same statement with the same literal values get a different plan within PL/SQL than in SQL*Plus or TOAD etc then there must be something else different. Unfortunately there is a lot less diagnostic info available in 9i (and I don't have 9i around to test on). Perhaps you can get a 10053 trace from each session and see if you can see what is different.
    btw I still can't read the execution plan. There is more information in the DBMS_XPLAN output, and it needs to be formatted using the forum's code tags or equivalent HTML.
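    For example, a readable plan can be produced with DBMS_XPLAN (available from 9iR2; replace the statement below with the actual query):
    EXPLAIN PLAN FOR
    SELECT COUNT(*) FROM dual;   -- placeholder statement
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);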

  • 10053 - no trace file generated

    Hi,
    no 10053 trace file is generated in the diag directory.
    sql_trace = true
    trace_enabled = true
    i set
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST';
    ALTER SESSION SET EVENTS='10053 trace name context forever, level 1';
    but there is no trace file generated.
    Something seems to be missing.
    Any help would be much appreciated!
    Best Regards
    user11368124

    thanks for your messages.
    @Dom Brooks
    the Oracle release is 11.2 running on Ubuntu.
    Added flushing pool. That statement was missing.
    But unfortunately the 10053 trace file is still not generated.
    I am running the following script, mentioned in the article "Examining the Oracle Database 10053 Trace Event Dump File" by Steve Callan:
    alter system set TRACE_ENABLED = true;
    alter system set SQL_TRACE = true;
    alter session set statistics_level=all;
    --alter session set max_dump_file_size = unlimited;
    --oradebug setmypid
    --oradebug unlimit
    --oradebug event 10053 trace name context forever, level 1
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST';
    alter session set events '10046 trace name context forever, level 12';
    alter session set events '10053 trace name context forever, level 1';
    -- plan_table exists
    select * from plan_table
    -- flushing pool
    alter system flush shared_pool;
    explain plan for
    SELECT ch.channel_class,
    c.cust_city,
    t.calendar_quarter_desc,
    SUM(s.amount_sold) sales_amount
    FROM sh.sales s,
    sh.times t,
    sh.customers c,
    sh.channels ch
    WHERE s.time_id = t.time_id
    AND s.cust_id = c.cust_id
    AND s.channel_id = ch.channel_id
    AND c.cust_state_province = 'CA'
    AND ch.channel_desc in ('Internet','Catalog')
    AND t.calendar_quarter_desc IN ('1999-01','1999-02')
    GROUP BY ch.channel_class, c.cust_city, t.calendar_quarter_desc
    ORDER by 1,2,3,4;
    Best Regards
    user11368124
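    One quick check on 11.2 is to ask the session where its trace file will be written; after the hard parse, the 10053 output should appear in that file (a sketch):
    SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';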

  • SQL query with different plans

    Hi,
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    I cannot paste the query plan due to company policies. I can see an issue with the query plan: on one environment it does a partition range scan with a hash join (effectively a full table scan), while on the second environment it uses a nested loop with an index range scan, which is fast compared to the partition range scan.
    I have checked the stats; they look up to date.
    Any thoughts...

    How, exactly, did you check the statistics? Did you just check when they were generated? Or did you check whether they were actually (more or less) the same?
    Is it an issue with bind variable peeking?
    Are there optimizer-related parameters that are different either for the databases or for different systems?
    Have you looked at a 10053 trace to see how the slower system is evaluating the faster plan?
    Unfortunately, without more information, it's going to be nearly impossible for us to help you narrow down the problem. We can speculate about things that might cause one environment to generate a different plan but we're blind to the details that would help us figure out which factors are actually influencing your particular systems.
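    As a starting point, a side-by-side comparison of the basic statistics on both environments might look like this (a sketch; owner and table names are placeholders):
    SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, LAST_ANALYZED, STALE_STATS
    FROM   DBA_TAB_STATISTICS
    WHERE  OWNER = 'APP_OWNER'
    AND    TABLE_NAME IN ('TABLE_A', 'TABLE_B');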
    Justin

  • Many different user authorisation while query runtime

    Hello experts,
    I've got a request for query authorization where about 100 people need to have a different view of a query result, and I'm not sure which way to go.
    There is a BEx query (SAP BW 3.5) which contains the IO 0bp_respper and a key figure which gives you back the number of activities.
    The requirement is that about 100 users can have access, but they are not allowed to see all 0bp_respper values.
    One user can see the details of 0bp_respper (1,2,3), the next user is allowed to see the details of 0bp_respper (30,76,55,60), and so on. And some are allowed to see all 0bp_respper.
    You see that in the worst case I would actually need 100 roles. Within the query I would use the authorisation variable to read the authorisation at runtime from each user profile.
    It seems, and please correct me if I'm wrong, that the authorisation variable is only feasible as long as the number of different combinations behind the authorisation check is manageable (fewer than 10).
    There is also the option to use an exit variable at runtime. Should I rather use that one and do some coding? But in that case I would need a separate table to define each user's rights for 0bp_respper, or not?
    Or do you have another idea to implement the requirement?
    Thanks for your help
    Michael

    A good way to check this out would be to do a 10053 trace.
    Try this:
    connect user1/pwd@whateverdb
    alter session set events '10053 trace name context forever, level 1';
    <run query here>
    connect user1/pwd@whateverdb
    alter session set events '10053 trace name context forever, level 1';
    <run query here>
    exit;
    Now compare the trace files to see what the CBO is doing differently. Pay particular attention to the optimizer parameters at the beginning of the trace file. Are all the values the same?
    -Mark

  • Different explain plan between 10.2.0.3 and 10.2.0.4

    Had a problem with an explain plan changing after upgrade from 10.2.0.3 to 10.2.0.4. Managed to simplify as much as possible for now:
    Query is :
    SELECT * FROM m_promo_chk_str
    WHERE (m_promo_chk_str.cust_cd) IN (
    SELECT cust_cd
    FROM s_usergrp_pda
    GROUP BY cust_cd)
    On 10.2.0.3 explain plan is:
    | 0 | SELECT STATEMENT | | 1 | 1227 | 26 (16)| 00:00:01 |
    |* 1 | HASH JOIN SEMI | | 1 | 1227 | 26 (16)| 00:00:01 |
    | 2 | TABLE ACCESS FULL | M_PROMO_CHK_STR | 1 | 1185 | 14 (0)| 00:00:01 |
    | 3 | VIEW | VW_NSO_1 | 137 | 5754 | 11 (28)| 00:00:01 |
    | 4 | HASH GROUP BY | | 137 | 548 | 11 (28)| 00:00:01 |
    | 5 | TABLE ACCESS FULL| S_USERGRP_PDA | 5219 | 20876 | 9 (12)| 00:00:01 |
    On 10.2.0.4 with same data is:
    | 0 | SELECT STATEMENT | | 1 | 1201 | 46 (5)| 00:00:01 |
    | 1 | HASH GROUP BY | | 1 | 1201 | 46 (5)| 00:00:01 |
    |* 2 | HASH JOIN | | 1 | 1201 | 45 (3)| 00:00:01 |
    | 3 | TABLE ACCESS FULL| M_PROMO_CHK_STR | 1 | 1197 | 29 (0)| 00:00:01 |
    | 4 | TABLE ACCESS FULL| S_USERGRP_PDA | 5219 | 20876 | 15 (0)| 00:00:01 |
    The explain plan is reasonable for when M_PROMO_CHK_STR is empty; however, we have the case where stats are gathered while the table is empty, but the table is then populated and the query runs slowly. I understand that this is not exactly a problem with the database, but I want to try to understand the different behaviour.
    Will look into the CBO trace tomorrow, but for now, does anyone want to share any thoughts?

    PatHK wrote:
    Here is further simplification to reproduce the different behaviour - I think about as simple as I can get it!
    SELECT * FROM dual WHERE (dummy) IN (SELECT dummy FROM dual GROUP BY dummy);
    On 10.2.0.3
    |   0 | SELECT STATEMENT     |          |     1 |     4 |     5  (20)| 00:00:01 |
    |   1 |  NESTED LOOPS SEMI   |          |     1 |     4 |     5  (20)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL  | DUAL     |     1 |     2 |     2   (0)| 00:00:01 |
    |*  3 |   VIEW               | VW_NSO_1 |     1 |     2 |     3  (34)| 00:00:01 |
    |   4 |    SORT GROUP BY     |          |     1 |     2 |     3  (34)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| DUAL     |     1 |     2 |     2   (0)| 00:00:01 |
    On 10.2.0.4
    |   0 | SELECT STATEMENT     |      |     1 |     4 |     4   (0)| 00:00:01 |
    |   1 |  SORT GROUP BY NOSORT|      |     1 |     4 |     4   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS       |      |     1 |     4 |     4   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
    Timur's suggestion to look at a 10053 trace file is a good idea. It might be the case that someone disabled complex view merging in the 10.2.0.3 database instance. See the following:
    _complex_view_merging
    http://jonathanlewis.wordpress.com/2007/03/08/transformation-and-optimisation/
    Here is a test you might try on both database versions:
    ALTER SESSION SET "_COMPLEX_VIEW_MERGING"=TRUE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST1';
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SELECT * FROM DUAL WHERE (DUMMY) IN (SELECT DUMMY FROM DUAL GROUP BY DUMMY);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
    ALTER SESSION SET "_COMPLEX_VIEW_MERGING"=FALSE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST2';
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SELECT * FROM DUAL WHERE (DUMMY) IN (SELECT DUMMY FROM DUAL GROUP BY DUMMY);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
    The first plan output:
    | Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |      |       |       |     8 (100)|          |
    |   1 |  SORT GROUP BY NOSORT|      |     1 |     4 |     8   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS       |      |     1 |     4 |     8   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     4   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("DUMMY"="DUMMY")
    The second plan output:
    | Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |          |       |       |     9 (100)|          |
    |   1 |  NESTED LOOPS SEMI   |          |     1 |     4 |     9  (12)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL  | DUAL     |     1 |     2 |     4   (0)| 00:00:01 |
    |*  3 |   VIEW               | VW_NSO_1 |     1 |     2 |     5  (20)| 00:00:01 |
    |   4 |    SORT GROUP BY     |          |     1 |     2 |     5  (20)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| DUAL     |     1 |     2 |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("DUMMY"="$nso_col_1")
    From the first 10053 trace file:
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
      _pga_max_size                       = 368640 KB
    _pga_max_size is the only parameter with a non-default value that could affect the optimizer.
    From the second 10053 trace file:
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
      _pga_max_size                       = 368640 KB
      _complex_view_merging               = false
      *********************************
    This section in the first 10053 trace seems to show the complex view merging:
    SU: Considering interleaved complex view merging
    SU:   Transform an ANY subquery to semi-join or distinct.
    CVM: Considering view merge (candidate phase) in query block SEL$5DA710D3 (#1)
    CVM: Considering view merge (candidate phase) in query block SEL$683B0107 (#2)
    CVM: CBQT Marking query block SEL$683B0107 (#2)as valid for CVM.
    CVM:   Merging complex view SEL$683B0107 (#2) into SEL$5DA710D3 (#1).
    qbcp:******* UNPARSED QUERY IS *******
    SELECT /*+ */ "DUAL"."DUMMY" "DUMMY" FROM  (SELECT /*+ */ DISTINCT "DUAL"."DUMMY" "$nso_col_1" FROM "SYS"."DUAL" "DUAL" GROUP BY "DUAL"."DUMMY") "VW_NSO_2","SYS"."DUAL" "DUAL" WHERE "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    vqbcp:******* UNPARSED QUERY IS *******
    SELECT /*+ */ DISTINCT "DUAL"."DUMMY" "$nso_col_1" FROM "SYS"."DUAL" "DUAL" GROUP BY "DUAL"."DUMMY"
    CVM: result SEL$5DA710D3 (#1).
    ******* UNPARSED QUERY IS *******
    SELECT /*+ */ "DUAL"."DUMMY" "DUMMY" FROM "SYS"."DUAL" "DUAL","SYS"."DUAL" "DUAL" WHERE "DUAL"."DUMMY"="DUAL"."DUMMY" GROUP BY "DUAL"."DUMMY","DUAL".ROWID,"DUAL"."DUMMY"
    Registered qb: SEL$C9C6826C 0x155e2020 (VIEW MERGE SEL$5DA710D3; SEL$683B0107)
      signature (): qb_name=SEL$C9C6826C nbfros=2 flg=0
        fro(0): flg=0 objn=258 hint_alias="DUAL"@"SEL$1"
        fro(1): flg=0 objn=258 hint_alias="DUAL"@"SEL$2"
    FPD: Considering simple filter push in SEL$C9C6826C (#1)
    FPD:   Current where clause predicates in SEL$C9C6826C (#1) :
             "DUAL"."DUMMY"="DUAL"."DUMMY"
    kkogcp: try to generate transitive predicate from check constraints for SEL$C9C6826C (#1)
    predicates with check contraints: "DUAL"."DUMMY"="DUAL"."DUMMY"
    after transitive predicate generation: "DUAL"."DUMMY"="DUAL"."DUMMY"
    finally: "DUAL"."DUMMY"="DUAL"."DUMMY"
    CVM: Costing transformed query.
    kkoqbc-start
                : call(in-use=25864, alloc=65448), compile(in-use=115280, alloc=118736)
    kkoqbc-subheap (create addr=000000001556CD70)
    This is the same section from the second 10053 trace:
    SU: Considering interleaved complex view merging
    SU:   Transform an ANY subquery to semi-join or distinct.
    CVM: Considering view merge (candidate phase) in query block SEL$5DA710D3 (#1)
    CVM: Considering view merge (candidate phase) in query block SEL$683B0107 (#2)
    FPD: Considering simple filter push in SEL$5DA710D3 (#1)
    FPD:   Current where clause predicates in SEL$5DA710D3 (#1) :
             "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    kkogcp: try to generate transitive predicate from check constraints for SEL$5DA710D3 (#1)
    predicates with check contraints: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    after transitive predicate generation: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    finally: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    FPD: Considering simple filter push in SEL$683B0107 (#2)
    FPD:   Current where clause predicates in SEL$683B0107 (#2) :
             CVM: Costing transformed query.
    kkoqbc-start
                : call(in-use=25656, alloc=65448), compile(in-use=113992, alloc=114592)
    kkoqbc-subheap (create addr=00000000157E9078)
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Event 10053

    Hi,
    Yesterday I was reading one of the performance optimization books about event 10053, but I could not understand what to look for in that trace file.
    I could only understand that it will show different execution plans.
    Kindly tell me what I should actually look for in the 10053 trace.
    Regards
    MMU

    Basically this is the cost based optimizer trace. I usually look there for errors and other deficiencies surrounding the CBO's calculation of the execution plans. But this comes after other tuning steps, and only if you have discovered that you don't like the current execution plan of an offending query.
    Vlad Sadilovskiy
    Oracle Database Tools
    http://www.fourthelephant.com

  • Why optimizer select plan with higher cost?

    Why does the optimizer select a plan with a higher cost?
    SQL with hint:
    SELECT /*+ index(ordm ORDA_PK) */
    ordm.orders_id h_docid, ordm.customer_nr h_clientid,
    ordm.cl_doc_type_code h_doctype,
    ordm.cl_doc_status_code cl_doc_status_code,
    ordm.cl_external_error_code h_errorcode, ordm.sys_version_id h_version,
    ordm.doc_number po_number, ordm.curdate po_curdate,
    ordm.cl_currency_code po_curr,
    TO_CHAR (ordm.amount, 'FM999999999999990.00') po_amount,
    ordm.account_nr po_cust_accnum, ordm.customer_name po_cust_name,
    ordd.cl_currency_cust_code po_cust_curr,
    TO_CHAR (ordd.cust_rate, 'FM999999999990.0099999999') po_cust_rate,
    ordd.cust_confirm po_cust_conf, ordd.ben_name po_ben_name,
    ordd.ben_accnum po_ben_accnum,
    ordd.cl_external_payment_code po_cust_amk, ordd.ben_info po_ben_info,
    ordd.comments po_comments
    FROM FINIX_IB.orders_archive ordm, FINIX_IB.orders_archive_fields ordd
    WHERE ordm.orders_id = ordd.orders_id (+)
    AND ordm.orders_id = NVL (4353, ordm.orders_id)
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4918 Card=1 Bytes=185)
    1 0 NESTED LOOPS (OUTER) (Cost=4918 Card=1 Bytes=185)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'ORDERS_ARCHIVE' (TABLE) (Cost=4916 Card=1 Bytes=87)
    3 2 INDEX (FULL SCAN) OF 'ORDA_PK' (INDEX (UNIQUE)) (Cost=4915 Card=1)
    4 1 TABLE ACCESS (BY INDEX ROWID) OF 'ORDERS_ARCHIVE_FIELDS' (TABLE) (Cost=2 Card=1 Bytes=98)
    5 4 INDEX (RANGE SCAN) OF 'ORDAF_ORDA_FK' (INDEX) (Cost=1 Card=1)
    Statistics
    0 recursive calls
    0 db block gets
    4792 consistent gets
    4786 physical reads
    0 redo size
    1020 bytes sent via SQL*Net to client
    237 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    SQL without hint:
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=9675 Card=1 Bytes=185)
    1 0 NESTED LOOPS (OUTER) (Cost=9675 Card=1 Bytes=185)
    2 1 TABLE ACCESS (FULL) OF 'ORDERS_ARCHIVE' (TABLE) (Cost=9673 Card=1 Bytes=87)
    3 1 TABLE ACCESS (BY INDEX ROWID) OF 'ORDERS_ARCHIVE_FIELDS' (TABLE) (Cost=2 Card=1 Bytes=98)
    4 3 INDEX (RANGE SCAN) OF 'ORDAF_ORDA_FK' (INDEX) (Cost=1 Card=1)
    Statistics
    1 recursive calls
    0 db block gets
    39706 consistent gets
    39694 physical reads
    0 redo size
    1037 bytes sent via SQL*Net to client
    237 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    The way you are comparing costs is not the right way, as Billy already told you. The costs of different access paths can only be compared within a single query, as can be seen in a 10053 trace.
    However, your problem seems to arise from the fact that you use the NVL function in the predicate "ordm.orders_id = NVL (4353, ordm.orders_id)". The NVL function always evaluates both expressions, so it has to do an "ordm.orders_id = 4353" and an "ordm.orders_id = ordm.orders_id". This is why both alternatives show a full scan on your orders_archive table or orda_pk index, respectively.
    You are probably supplying a bind variable to this statement which in some cases contains a null and in other cases contains a number. If by any chance you always supply a number, then the solution is easy: drop the NVL function and change the predicate to "ordm.orders_id = :<your bind variable>". If not, then you have combined two queries into one, where each variant has its own optimal plan, but they have to share a plan due to bind variable peeking.
    To solve this, I think you have two options:
    1) Make sure your statement is reparsed everytime. If your statement doesn't get executed often, this strategy might work. You might implement this by doing unnecessary dynamic sql.
    2) Split your query into two queries where one handles the constant number input and the other handles the null/no input.
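    A sketch of option 2, inside a procedure that returns a ref cursor (all names are placeholders):
    IF p_orders_id IS NOT NULL THEN
      OPEN p_result FOR
        SELECT ordm.orders_id, ordd.orders_id
        FROM   orders_archive ordm, orders_archive_fields ordd
        WHERE  ordm.orders_id = ordd.orders_id (+)
        AND    ordm.orders_id = p_orders_id;          -- can use the ORDA_PK index
    ELSE
      OPEN p_result FOR
        SELECT ordm.orders_id, ordd.orders_id
        FROM   orders_archive ordm, orders_archive_fields ordd
        WHERE  ordm.orders_id = ordd.orders_id (+);   -- full scan is the sensible plan here
    END IF;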
    Below are some test results I used for research:
    SQL> create table orders_archive
      2  as
      3  select l orders_id, lpad('*',100,'*') filler from (select level l from dual connect by level <= 10000)
      4  /
    Table created.
    SQL> create table orders_archive_fields
      2  as
      3  select l field_id, l+500 orders_id, lpad('*',100,'*') filler from (select level l from dual connect by level <= 9000)
      4  /
    Table created.
    SQL> alter table orders_archive add constraint orda_pk primary key (orders_id)
      2  /
    Table altered.
    SQL> alter table orders_archive_fields add constraint ordaf_pk primary key (field_id)
      2  /
    Table altered.
    SQL> alter table orders_archive_fields add constraint ordaf_orda_fk foreign key (orders_id) references orders_archive(orders_id)
      2  /
    Table altered.
    SQL> create index ordaf_orda_fk on orders_archive_fields(orders_id)
      2  /
    Index created.
    SQL> exec dbms_stats.gather_table_stats(user,'ORDERS_ARCHIVE',cascade=>true)
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user,'ORDERS_ARCHIVE_FIELDS',cascade=>true)
    PL/SQL procedure successfully completed.
    SQL> explain plan
      2  for
      3  SELECT /*+ index(ordm ORDA_PK) */
      4  ordm.orders_id h_docid, ordm.filler, ordd.filler
      5  FROM orders_archive ordm, orders_archive_fields ordd
      6  WHERE ordm.orders_id = ordd.orders_id (+)
      7  AND ordm.orders_id = NVL(4353,ordm.orders_id)
      8  /
    Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    | Id  | Operation                    |  Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT             |                        |     1 |   209 |     8   (0)|
    |   1 |  NESTED LOOPS OUTER          |                        |     1 |   209 |     8   (0)|
    |   2 |   TABLE ACCESS BY INDEX ROWID| ORDERS_ARCHIVE         |     1 |   104 |     7   (0)|
    |*  3 |    INDEX FULL SCAN           | ORDA_PK                |     1 |       |    22   (5)|
    |   4 |   TABLE ACCESS BY INDEX ROWID| ORDERS_ARCHIVE_FIELDS  |     1 |   105 |     2  (50)|
    |*  5 |    INDEX RANGE SCAN          | ORDAF_ORDA_FK          |     1 |       |            |
    Predicate Information (identified by operation id):
       3 - filter("ORDM"."ORDERS_ID"=NVL(4353,"ORDM"."ORDERS_ID"))
       5 - access("ORDM"."ORDERS_ID"="ORDD"."ORDERS_ID"(+))
    17 rows selected.
    SQL> exec dbms_lock.sleep(1)
    PL/SQL procedure successfully completed.
    SQL> explain plan
      2  for
      3  SELECT
      4  ordm.orders_id h_docid, ordm.filler, ordd.filler
      5  FROM orders_archive ordm, orders_archive_fields ordd
      6  WHERE ordm.orders_id = ordd.orders_id (+)
      7  AND ordm.orders_id = NVL(4353,ordm.orders_id)
      8  /
    Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    | Id  | Operation                    |  Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT             |                        |     1 |   209 |    47   (7)|
    |   1 |  NESTED LOOPS OUTER          |                        |     1 |   209 |    47   (7)|
    |*  2 |   TABLE ACCESS FULL          | ORDERS_ARCHIVE         |     1 |   104 |    46   (7)|
    |   3 |   TABLE ACCESS BY INDEX ROWID| ORDERS_ARCHIVE_FIELDS  |     1 |   105 |     2  (50)|
    |*  4 |    INDEX RANGE SCAN          | ORDAF_ORDA_FK          |     1 |       |            |
    Predicate Information (identified by operation id):
       2 - filter("ORDM"."ORDERS_ID"=NVL(4353,"ORDM"."ORDERS_ID"))
       4 - access("ORDM"."ORDERS_ID"="ORDD"."ORDERS_ID"(+))
    16 rows selected.
    So this shows I reproduced your situation. Because the DECODE function doesn't evaluate all its arguments, but evaluates the first argument to decide which arguments to evaluate, you'll see different behaviour now.
    SQL> exec dbms_lock.sleep(1)
    PL/SQL procedure successfully completed.
    SQL> explain plan
      2  for
      3  SELECT
      4  ordm.orders_id h_docid, ordm.filler, ordd.filler
      5  FROM orders_archive ordm, orders_archive_fields ordd
      6  WHERE ordm.orders_id = ordd.orders_id (+)
      7  AND ordm.orders_id = decode(4353,null,ordm.orders_id,4353)
      8  /
    Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    | Id  | Operation                    |  Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT             |                        |     1 |   209 |     3  (34)|
    |   1 |  NESTED LOOPS OUTER          |                        |     1 |   209 |     3  (34)|
    |   2 |   TABLE ACCESS BY INDEX ROWID| ORDERS_ARCHIVE         |     1 |   104 |     2  (50)|
    |*  3 |    INDEX UNIQUE SCAN         | ORDA_PK                |     1 |       |     2  (50)|
    |   4 |   TABLE ACCESS BY INDEX ROWID| ORDERS_ARCHIVE_FIELDS  |     1 |   105 |     2  (50)|
    |*  5 |    INDEX RANGE SCAN          | ORDAF_ORDA_FK          |     1 |       |            |
    Predicate Information (identified by operation id):
       3 - access("ORDM"."ORDERS_ID"=4353)
       5 - access("ORDM"."ORDERS_ID"="ORDD"."ORDERS_ID"(+))
    17 rows selected.
    SQL> exec dbms_lock.sleep(1)
    PL/SQL procedure successfully completed.
    SQL> explain plan
      2  for
      3  SELECT
      4  ordm.orders_id h_docid, ordm.filler, ordd.filler
      5  FROM orders_archive ordm, orders_archive_fields ordd
      6  WHERE ordm.orders_id = ordd.orders_id (+)
      7  AND ordm.orders_id = decode(null,null,ordm.orders_id,null)
      8  /
    Explained.
    SQL> select * from table(dbms_xplan.display)
      2  /
    PLAN_TABLE_OUTPUT
    | Id  | Operation                    |  Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT             |                        |     1 |   209 |    46   (5)|
    |   1 |  NESTED LOOPS OUTER          |                        |     1 |   209 |    46   (5)|
    |*  2 |   TABLE ACCESS FULL          | ORDERS_ARCHIVE         |     1 |   104 |    45   (5)|
    |   3 |   TABLE ACCESS BY INDEX ROWID| ORDERS_ARCHIVE_FIELDS  |     1 |   105 |     2  (50)|
    |*  4 |    INDEX RANGE SCAN          | ORDAF_ORDA_FK          |     1 |       |            |
    Predicate Information (identified by operation id):
       2 - filter("ORDM"."ORDERS_ID"="ORDM"."ORDERS_ID")
       4 - access("ORDM"."ORDERS_ID"="ORDD"."ORDERS_ID"(+))
    16 rows selected.
    So there are two different plans depending on the input.
    Regards,
    Rob.

  • 10053 lower cost bad plan

    Hi all
    I am wondering about the Metalink document "Case Study: Analyzing 10053 Trace Files": there is a SQL statement that is considered to have a bad plan with a cost of 20762, and the same SQL with a NO_INDEX hint is considered to have a good plan with a cost of 58201.
    The difference is more than 50%; how can that be? Can the CBO choose a plan with a higher cost instead of the lower-cost one? Am I thinking wrong, or have I misunderstood the document?
    Best Regards

    The optimizer's calculated cost is based on a large number of statistics and parameters. If, according to the section "D) Calculate the multiblock read divisor", the "Mdivisor" (MBRC?) is 1.07, that would suggest to the optimizer that when it requests a full table scan or fast full index scan, Oracle will on average be able to perform multiblock reads of only 1.07 blocks. This leads the optimizer to believe that a full table scan or fast full index scan will operate very slowly, possibly reading 8KB in each read request rather than a more efficient 512KB to 1MB. The document also suggests that someone may have inappropriately changed other parameters, which would affect the calculated costs. Consider what might happen if someone adjusts the OPTIMIZER_INDEX_COST_ADJ parameter to a value of 1: suddenly index access paths have calculated costs that are 0.01 times their original value, making index access paths appear very cheap to the optimizer, yet such a change will not make index access paths complete 100 times faster than before.
    In short, the optimizer is capable of being fooled by poorly set statistics and parameters.
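    To see that scaling effect, one could cost the same indexed statement under the two settings and compare the plans (a sketch; table T and its indexed column ID are hypothetical):
    ALTER SESSION SET optimizer_index_cost_adj = 100;
    EXPLAIN PLAN FOR SELECT * FROM t WHERE id = 42;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ALTER SESSION SET optimizer_index_cost_adj = 1;
    EXPLAIN PLAN FOR SELECT * FROM t WHERE id = 42;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- the index access cost in the second plan should be roughly 1/100th of the first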
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Using a different Plan Version in WIP calculation

    Hi experts.
    I am using version 97 as the plan version in a WIP calculation. The problem is that, to recognize planned incomings, my system uses version 0; in the results analysis they always appear as 0 because the version the system takes to recover the planned incomings is 97, and it is empty.
    Is there any standard way to make it take version 97 only for costs?
    Thanks.
    Alex

    Yes, this is true, the predicates are not preserved; I found that out after posting. OK, I'm really thinking bug here, so I'll explain further.
    I'm outside of prod time so I can run the mview. The plan is pretty big, so I'm not going to post it just yet; it's just behavioural at this stage.
    explain for QUERY = good plan.
    create table t1 as QUERY = good plan.
    insert into table as QUERY = good plan.
    create mview as QUERY = bad plan. Not just the timings are bad, the plan has completely changed. I stop the mview creation at this stage. Good old ctrl+c.
    The only thing that's different is that I'm running the MVIEW create. So I hit my ADDM again and can see the bad plan and bad timings. So I get the SQL_ID from the ADDM for the bad plan and run a 10053 on it using the 11.2 sqldiag dump. I'm hoping to see why it's choosing the bad path:
    execute DBMS_SQLDIAG.DUMP_TRACE(p_sql_id=>'576wpd5g73q9c', p_child_number=>0, p_component=>'Compiler', p_file_id=>'BAD1');
    I edit my BAD1 10053 trace file and I see... the plan is GOOD. It's estimated at 2 seconds to run for the SQL_ID that's actually bad when running.
    This is why I'm thinking bug.
