Different Explain Plans

Hi,
In 11.2.0.3, the same query on two different DBs (on the same server) produces different explain plans (different estimated row counts: 9690K vs 14M):
DBDEV :
| Id  | Operation                | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | INSERT STATEMENT         |                  |  9690K|  4482M|   344K  (1)| 01:08:49 |
|   1 |  LOAD TABLE CONVENTIONAL | PS_PROJ_RES_TA14 |       |       |            |          |
|*  2 |   TABLE ACCESS FULL      | PS_PROJ_RESOURCE |  9690K|  4482M|   344K  (1)| 01:08:49 |
DBTST
| Id  | Operation                | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | INSERT STATEMENT         |                  |    14M|  6534M|   344K  (1)| 01:08:50 |
|   1 |  LOAD TABLE CONVENTIONAL | PS_PROJ_RES_TA14 |       |       |            |          |
|*  2 |   TABLE ACCESS FULL      | PS_PROJ_RESOURCE |    14M|  6534M|   344K  (1)| 01:08:50 |
The optimizer parameters are the same:
NAME                                  TYPE        VALUE
optimizer_capture_sql_plan_baselines  boolean     FALSE
optimizer_dynamic_sampling            integer     2
optimizer_features_enable             string      11.2.0.3
optimizer_index_caching               integer     0
optimizer_index_cost_adj              integer     100
optimizer_mode                        string      ALL_ROWS
optimizer_secure_view_merging         boolean     TRUE
optimizer_use_invisible_indexes       boolean     FALSE
optimizer_use_pending_statistics      boolean     FALSE
optimizer_use_sql_plan_baselines      boolean     TRUE
And the actual row counts:
In DBTST
select count(*) from ps_proj_resource
  COUNT(*)
  18072893
In DBDEV
  COUNT(*)
  18070581
Thanks for any explanations and ideas.

Thanks.
Here are the Predicate Information sections from both plans:
DEV
Predicate Information (identified by operation id):
   2 - filter(("SRC"."CST_DISTRIB_STATUS"='N' OR ("SRC"."BI_DISTRIB_STATUS"='N' OR
              "SRC"."BI_DISTRIB_STATUS"='U')) AND "SRC"."SYSTEM_SOURCE"<>'PRC' AND
              "SRC"."SYSTEM_SOURCE"<>'PRP' AND "SRC"."SYSTEM_SOURCE"<>'PRR')
TST
Predicate Information (identified by operation id):
   1 - filter("SRC"."SYSTEM_SOURCE"<>'PRC' AND "SRC"."SYSTEM_SOURCE"<>'PRP'
              AND ("SRC"."CST_DISTRIB_STATUS"='N' OR ("SRC"."BI_DISTRIB_STATUS"='N' OR
                "SRC"."BI_DISTRIB_STATUS"='U')) AND "SRC"."SYSTEM_SOURCE"<>'PRR')

Similar Messages

  • Same query, same dataset, same ddl setup, but wildly different explain plan

    Hello, O fountains of Oracle knowledge!
    We have a problem that caused a full stop when rolling out a new version of our system to a customer, and cost us a whole Sunday to boot.
    The scenario is as follows:
    1. A previous-version database schema
    2. The current-version database schema
    3. A migration script to migrate the old schema to the new one
    So we perform the following migration:
    1. Export the previous version database schema
    2. Import into a new schema called schema_old
    3. Create a new schema called schema_new
    4. Run migration script which creates objects, copies data, creates indexes etc etc in schema_new
    The migration runs fine in all environments (development, test and production).
    In our development and test environments performance is stellar; on the customer's production server the performance is terrible.
    This is using the exact same export file (from the production environment) and performing the exact same steps with the exact same migration script.
    Database version is 10.2.0.1.0 EE on all databases. OS is Microsoft Windows Server 2003 EE SP2 on all servers.
    The system is not in any sense under a heavy load (we have tested with no other load than ourselves).
    Looking at the explain plan for a query that is run frequently and does not use bind variables we see wildly different explain plans.
    The explain plan cost on our development and test servers is estimated to *7* for this query and there are no full table scans.
    On the production server the cost is *8433* and there are two full table scans of which one is on the largest table.
    We have tried to run analyse on all objects with very little effect. The plan changed very slightly, but still includes the two full table scans on the problem server and the cost is still the same.
    All tables and indexes are identical (including storage options), created from the same migration script.
    I am currently at a loss for where to look. What could be causing this? I assume it could be caused by some parameter that is set on the server, but I don't know what to look for.
    I would be very grateful for any pointers.
    Thanks,
    Håkon

    Thank you for your answer.
    We collected statistics only after we determined that the production server was not behaving according to expectations.
    In this case we used TOAD and the tool within it to collect statistics for all objects, using the 'Analyze' and 'Compute Statistics' options.
    I am not an expert, so sorry if this is too naive an approach.
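    For what it's worth, the usual alternative to TOAD's 'Analyze' option is DBMS_STATS; a hedged sketch of gathering fresh statistics for one of the tables in the plans further down (the owner defaults to the current user here, and the sampling options are illustrative assumptions, not the poster's settings):
    BEGIN
      -- Gather table, column and index statistics in one pass;
      -- method_opt and estimate_percent are illustrative choices.
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => USER,
        tabname          => 'WEB_SCORECARD',
        cascade          => TRUE,                        -- include indexes
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;
    /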
    Here is the query:
    SELECT count(0)
    FROM score_result sr, web_scorecard sc, product p
    WHERE sr.score_final_decision like 'VENT%'  
    AND sc.CREDIT_APPLICATION_ID = sr.CREDIT_APPLICATION_ID  
    AND sc.application_complete='Y'   
    AND p.product = sc.web_product   
    AND p.inactive_product = '2';
    I use this as an example, but the problem exists for virtually all queries.
    The output from the 'good' server:
    | Id  | Operation                      | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT               |                       |     1 |    39 |     7   (0)|
    |   1 |  SORT AGGREGATE                |                       |     1 |    39 |            |
    |   2 |   NESTED LOOPS                 |                       |     1 |    39 |     7   (0)|
    |   3 |    NESTED LOOPS                |                       |     1 |    30 |     6   (0)|
    |   4 |     TABLE ACCESS BY INDEX ROWID| SCORE_RESULT          |     1 |    17 |     4   (0)|
    |   5 |      INDEX RANGE SCAN          | SR_FINAL_DECISION_IDX |     1 |       |     3   (0)|
    |   6 |     TABLE ACCESS BY INDEX ROWID| WEB_SCORECARD         |     1 |    13 |     2   (0)|
    |   7 |      INDEX UNIQUE SCAN         | WEB_SCORECARD_PK      |     1 |       |     1   (0)|
    |   8 |    TABLE ACCESS BY INDEX ROWID | PRODUCT               |     1 |     9 |     1   (0)|
    |   9 |     INDEX UNIQUE SCAN          | PK_PRODUCT            |     1 |       |     0   (0)|
    ---------------------------------------------------------------------------------------------
    The output from the 'bad' server:
    | Id  | Operation                 | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT          |                       |     1 |    32 |  8344   (3)|
    |   1 |  SORT AGGREGATE           |                       |     1 |    32 |            |
    |   2 |   HASH JOIN               |                       | 10887 |   340K|  8344   (3)|
    |   3 |    TABLE ACCESS FULL      | PRODUCT               |     6 |    42 |     3   (0)|
    |   4 |    HASH JOIN              |                       | 34381 |   839K|  8340   (3)|
    |   5 |     VIEW                  | index$_join$_001      | 34381 |   503K|  2193   (3)|
    |   6 |      HASH JOIN            |                       |       |       |            |
    |   7 |       INDEX RANGE SCAN    | SR_FINAL_DECISION_IDX | 34381 |   503K|   280   (3)|
    |   8 |       INDEX FAST FULL SCAN| SCORE_RESULT_PK       | 34381 |   503K|  1371   (2)|
    |   9 |     TABLE ACCESS FULL     | WEB_SCORECARD         |   489K|  4782K|  6137   (4)|
    ----------------------------------------------------------------------------------------
    I hope the formatting makes this readable.
    Stats (from SQL Developer), good table:
    NUM_ROWS     489716
    BLOCKS     27198
    AVG_ROW_LEN     312
    SAMPLE_SIZE     489716
    LAST_ANALYZED     15.12.2009
    LAST_ANALYZED_SINCE     15.12.2009
    Stats (from SQL Developer), bad table:
    NUM_ROWS     489716
    BLOCKS     27199
    AVG_ROW_LEN     395
    SAMPLE_SIZE     489716
    LAST_ANALYZED     17.12.2009
    LAST_ANALYZED_SINCE     17.12.2009
    I'm unsure what would cause the difference in average row length.
    I could obviously try to tune our SQL statements to run better on the misbehaving server, but I would rather understand why the plans are different and make sure that we can expect similar behaviour between environments.
    Thank you again for trying to help me.
    Håkon
    Edited by: ergates on 17.des.2009 05:57
    Edited by: ergates on 17.des.2009 06:02
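    When the data and DDL really are identical, the remaining optimizer inputs are the instance parameters and the system (CPU/IO) statistics, which can legitimately differ between servers. A hedged sketch for comparing them across the two environments (standard dictionary views, nothing schema-specific):
    -- Optimizer-related parameters as the CBO sees them
    SELECT name, value, isdefault
      FROM v$sys_optimizer_env
     ORDER BY name;
    -- System statistics (CPU speed, IO times, multiblock read count)
    SELECT pname, pval1
      FROM sys.aux_stats$
     WHERE sname = 'SYSSTATS_MAIN';
    Differences in multiblock read counts or CPU cost figures here can make full table scans look much cheaper on one server than on the other.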

  • Query returns different Explain Plan

    Hi,
    I have two databases that are identical to each other, but the same query returns a different explain plan on each, and it runs slower on one than on the other. How can I troubleshoot this further to get faster times in the slower database?
    THanks,
    Prachi

    Query:
    =====
    SELECT sum(total_contacts), sum(email_partner_news), sum(email_tech_news),
    sum(email_enduser_news), sum(email), sum(email_gmo), sum(gmo_nophone),
    sum(email_gmo_noaddress), sum(email_gmo_address), sum(
    email_gmo_nocertaddr), sum(email_gmo_certaddress), sum(
    email_gmo_nophone_noaddr), sum(phone_number), sum(gmo_with_phone),
    sum(phone_number_email_gmo_noaddr), sum(phone_number_email_gmo_nocert),
    sum(phone_number_address), sum(phone_number_address_noemail), sum(
    phone_address_email_nogmo), sum(phone_number_address_email), sum(
    phone_number_certaddr), sum(phone_number_certaddr_noemail), sum(
    phone_certaddr_email_nogmo), sum(phone_number_address_gmo), sum(
    phone_certaddr_email_gmo), sum(address), sum(
    address_nophone_number_nogmo), sum(address_certified), sum(
    certaddr_nophone_nogmo), sum(address_email_nophone_gmo), sum(
    certaddr_email_nophone_gmo), sum(phone_gmo), sum(smail_gmo), sum(fax_gmo
    ), sum(email_wcast), sum(email_inews), sum(email_salrt), sum(
    phone_gmo_phone), sum(smail_gmo_address), sum(fax_gmo_fax)
    FROM contact_values_vw_tc_2 cv, ((SELECT gcd_contact_id
    FROM contact_values)
    INTERSECT
    ((SELECT gcd_contact_id
    FROM gcddata.contact_country s, gcd.country c,
    segmentation_query_values v
    WHERE s.country_id = c.country_id
    AND c.region_id = v.query_value
    AND v.selection_type = 'I'
    AND v.query_id = 2088
    AND v.query_sequence = 1)
    INTERSECT
    (SELECT gcd_contact_id
    FROM gcddata.contact_country s, gcd.country c,
    segmentation_query_values v
    WHERE s.country_id = c.country_id
    AND c.sub_region_id = v.query_value
    AND v.selection_type = 'I'
    AND v.query_id = 2088
    AND v.query_sequence = 2)) ) sl
    WHERE cv.gcd_contact_id = sl.gcd_contact_id
    AND cv.country_id IN (SELECT cm.country_id
    FROM segmentation_user_country_map cm
    WHERE cm.user_name = 'E30590')
    ========================================
    Execution Plan - FAST
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=32931 Card=1 Bytes=73)
    1 0 SORT (AGGREGATE)
    2 1 HASH JOIN (Cost=32931 Card=386388 Bytes=28206324)
    3 2 VIEW (Cost=29617 Card=1142043 Bytes=14846559)
    4 3 INTERSECTION
    5 4 SORT (UNIQUE)
    6 5 INDEX (FAST FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=5 Card=1907737 Bytes=11446422)
    7 4 INTERSECTION
    8 7 SORT (UNIQUE)
    9 8 HASH JOIN (Cost=655 Card=1998575 Bytes=57958675)
    10 9 HASH JOIN (Cost=6 Card=110 Bytes=2200)
    11 10 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=7 Bytes=91)
    12 11 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=15)
    13 10 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
    14 9 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=643 Card=2199855 Bytes=19798695)
    15 7 SORT (UNIQUE)
    16 15 HASH JOIN (Cost=655 Card=1142043 Bytes=33119247)
    17 16 HASH JOIN (Cost=6 Card=63 Bytes=1260)
    18 17 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=7 Bytes=91)
    19 18 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=15)
    20 17 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
    21 16 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=643 Card=2199855 Bytes=19798695)
    22 2 HASH JOIN (Cost=2174 Card=645445 Bytes=38726700)
    23 22 SORT (UNIQUE)
    24 23 TABLE ACCESS (FULL) OF 'SEGMENTATION_USER_COUNTRY_MAP' (Cost=5 Card=43 Bytes=473)
    25 22 TABLE ACCESS (FULL) OF 'CONTACT_VALUES1' (Cost=2151 Card=1907737 Bytes=93479113)
    =====================================================
    Execution Plan -SLOW
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=8751 Card=1 Bytes=73 )
    1 0 SORT (AGGREGATE)
    2 1 HASH JOIN (SEMI) (Cost=8751 Card=14477 Bytes=1056821)
    3 2 MERGE JOIN (Cost=8583 Card=89527 Bytes=5550674)
    4 3 TABLE ACCESS (BY INDEX ROWID) OF 'CONTACT_VALUES1' ( Cost=826 Card=1852109 Bytes=90753341)
    5 4 INDEX (FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=26 Card=1852109)
    6 3 SORT (JOIN) (Cost=7757 Card=89527 Bytes=1163851)
    7 6 VIEW (Cost=7454 Card=89527 Bytes=1163851)
    8 7 INTERSECTION
    9 8 SORT (UNIQUE)
    10 9 INDEX (FAST FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=5 Card=1852109 Bytes=11112654)
    11 8 INTERSECTION
    12 11 SORT (UNIQUE)
    13 12 HASH JOIN (Cost=640 Card=250676 Bytes=7269604)
    14 13 HASH JOIN (Cost=6 Card=28 Bytes=560)
    15 14 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=1 Bytes=13)
    16 15 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=2)
    17 14 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
    18 13 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=628 Card=2135370 Bytes=19218330)
    19 11 SORT (UNIQUE)
    20 19 HASH JOIN (Cost=640 Card=89527 Bytes=2596283)
    21 20 HASH JOIN (Cost=6 Card=10 Bytes=200)
    22 21 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=1 Bytes=13)
    23 22 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=2)
    24 21 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
    25 20 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=628 Card=2135370 Bytes=19218330)
    26 2 TABLE ACCESS (FULL) OF 'SEGMENTATION_USER_COUNTRY_MAP'
    (Cost=5 Card=45 Bytes=495)
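    If both databases are 10g or later, one hedged way to troubleshoot further is to compare the optimizer's estimates with the actual row counts from a real execution, rather than relying on EXPLAIN PLAN alone. A sketch of the pattern (shown on DUAL so it is self-contained; substitute the real query):
    SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM dual;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    Large gaps between E-Rows and A-Rows on the slow database point at the statistics (or bind values) that are steering it toward the different join order.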

  • 'Identical' schemas on different servers - different explain plan costs

    Hello,
    I have two servers, 1 development and 1 production. I have a query which produces wildly different explain plan costs on the two servers:
    The development server gives a cost of just over 800 and the production server over 100000. I have 2-3 different versions of the schema (these are data warehouse schemas) on both servers and the cost numbers are similar regardless of the version used. Whenever I run the query on development, it's around 800. On production the same query is over 100000.
    The data on both servers is (should be) identical - I used impdp and expdp to transfer the data between the servers. I have run:
    DBMS_STATS.GATHER_SCHEMA_STATS ('SCHEMAV26', cascade=>TRUE);
    on the production server after importing the data. As far as I can see, the indices are identical on both servers. The difference in the execution plan is one additional line:
    Filter Predicates CE.ID < 5
    Can anyone help me figure out why the explain plans are different? The servers have similar hardware specs, and are running the same version of Oracle (11.2.0.2.0)
    Thanks,
    Dan Scott
    http://danieljamesscott.org
    Edited by: danscott on Mar 4, 2011 11:43 AM

    Thanks for all the help/suggestions - as you've probably guessed, I'm a little new to all this.
    A little background first:
    We have an items table and an events table. itemid is the primary key of the items table and a foreign key in the events table. The events table contains itemids, timestamps and data values (along with a few other IDs). The query I'm running is used to create a materialized view which provides statistics for each itemid, to assist users in finding a particular itemid containing the data they're interested in. Generally, we create the view on the full list of itemids (and so the indexes are not used, as expected). However, we occasionally run the query for a small number of itemids, and the index on events.itemid is used on one server, but not on the other.
    Here's the SQL (Apologies for the length).
    WITH ChartItems as (
      select distinct ci.itemid, ci.label, ci.category, ci.description,
             case
                when
                   (count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2) over (partition by ci.itemid) > 0)
                   AND
                   (count(distinct ce.value1num) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2num) over (partition by ci.itemid) > 0)
                then 'H'
                when
                   count(distinct ce.value1num) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2num) over (partition by ci.itemid) > 0
                then 'N'
                when
                   count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2) over (partition by ci.itemid) > 0
                then 'S'
                else
                    'X'
             end as value_type,
             -- The value column
             case
                when
                   (count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value1num) over (partition by ci.itemid) > 0)
                   and
                   (count(distinct ce.value2) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2num) over (partition by ci.itemid) > 0)
                then 'both'
                when
                   count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value1num) over (partition by ci.itemid) > 0
                then 'value1'
                when
                   count(distinct ce.value2) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2num) over (partition by ci.itemid) > 0
                then 'value2'
                else
                    'none'
             end as value_column
        from items ci,
             events ce
       where ce.itemid = ci.itemid
       and ci.itemid < 5
    )
    , RawData as (
        select distinct ci.itemid, ci.label, ci.category, ci.description,
              ci.value_type, ci.value_column,
              count(*)
                over (partition by ci.itemid) as rows_num,
              count(distinct ce.subject_id)
                over (partition by ci.itemid) as subjects_num,
              avg(abs(cast(ce.realtime as date) - cast(ce.charttime as date)) * 24 * 60)
                over (partition by ci.itemid) as chart_vs_realtime_delay_mean,
              stddev(abs(cast(ce.realtime as date) - cast(ce.charttime as date)) * 24 * 60)
                over (partition by ci.itemid) as chart_vs_realtime_delay_stddev,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1uom)
                                over (partition by ci.itemid
                                      order by ce.value1uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value1uom)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value1uom)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value1_uom_num,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1uom)
                                over (partition by ci.itemid
                                      order by ce.value1uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value1_uom_has_nulls,
              first_value(ce.value1uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_uom_sample1,
              last_value(ce.value1uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_uom_sample2,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1)
                                over (partition by ci.itemid
                                      order by ce.value1 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value1)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value1)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value1_distinct_num,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1)
                                over (partition by ci.itemid
                                      order by ce.value1 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value1_has_nulls,
             first_value(ce.value1) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_sample1,
             last_value(ce.value1) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_sample2,
             min(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_min,
             max(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_max,
             avg(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_mean,
             min(ce.value1num)
                 over (partition by ci.itemid) as value1num_min,
             max(ce.value1num)
                 over (partition by ci.itemid) as value1num_max,
             avg(ce.value1num)
                 over (partition by ci.itemid) as value1num_mean,
             stddev(ce.value1num)
                 over (partition by ci.itemid) as value1num_stddev,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2uom)
                                over (partition by ci.itemid
                                      order by ce.value2uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value2uom)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value2uom)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value2_uom_num,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2uom)
                                over (partition by ci.itemid
                                      order by ce.value2uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value2_uom_has_nulls,
             first_value(ce.value2uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_uom_sample1,
             last_value(ce.value2uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_uom_sample2,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2)
                                over (partition by ci.itemid
                                      order by ce.value2 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value2)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value2)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value2_distinct_num,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2)
                                over (partition by ci.itemid
                                      order by ce.value2 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value2_has_nulls,
             first_value(ce.value2)
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN  UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_sample1,
             last_value(ce.value2)
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN  UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_sample2,
             min(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_min,
             max(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_max,
             avg(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_mean,
             min(ce.value2num)
                 over (partition by ci.itemid) as value2num_min,
             max(ce.value2num)
                 over (partition by ci.itemid) as value2num_max,
             avg(ce.value2num)
                 over (partition by ci.itemid) as value2num_mean,
             stddev(ce.value2num)
                 over (partition by ci.itemid) as value2num_stddev
        from ChartItems ci,
             events ce
       where ce.itemid = ci.itemid
    --   order by ci.itemid, ci.label
    )
    select label, trim(lower(label)) label_lower, itemid, category, description,
           value_type, value_column,
           rows_num, subjects_num,
           round(chart_vs_realtime_delay_mean, 2) as chart_vs_realtime_delay_mean,
           round(chart_vs_realtime_delay_stddev, 2) as chart_vs_realtime_delay_stddev,
           value1_uom_num, value1_uom_has_nulls,
           value1_uom_sample1, value1_uom_sample2,
           value1_distinct_num, value1_has_nulls,
           value1_sample1, value1_sample2,
           value1_length_min, value1_length_max,
           round(value1_length_mean, 2) as value1_length_mean,
           round(value1num_min, 2) as value1num_min,
           round(value1num_max, 2) as value1num_max,
           round(value1num_mean, 2) as value1num_mean,
           round(value1num_stddev, 2) as value1num_stddev,
           value2_uom_num, value2_uom_has_nulls,
           value2_uom_sample1, value2_uom_sample2,
           value2_distinct_num, value2_has_nulls,
           value2_sample1, value2_sample2,
           value2_length_min, value2_length_max,
           round(value2_length_mean, 2) as value2_length_mean,
           round(value2num_min, 2) as value2num_min,
           round(value2num_max, 2) as value2num_max,
           round(value2num_mean, 2) as value2num_mean,
           round(value2num_stddev, 2) as value2num_stddev
      from RawData
    order by label, itemid;
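    Since the core question is why the index on events.itemid is used on one server and not the other, a hedged sketch for comparing the statistics that drive that choice (standard dictionary views; EVENTS and ITEMID are the names used in the query above, adjust the owner filter as needed):
    SELECT index_name, blevel, leaf_blocks, distinct_keys,
           clustering_factor, num_rows, sample_size, last_analyzed
      FROM all_ind_statistics
     WHERE table_name = 'EVENTS';
    SELECT column_name, num_distinct, density, num_nulls, histogram, low_value, high_value
      FROM all_tab_col_statistics
     WHERE table_name = 'EVENTS'
       AND column_name = 'ITEMID';
    A much higher clustering_factor, or low/high values that make the itemid < 5 predicate look unselective, on one server would be enough to push the optimizer toward the full scan there.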

  • Same query with different explain plans

    Hi All,
    I have a select query with different explain plans in two separate environments; the query uses the expected index in the test environment, whereas in prod it does not pick the same index.
    The structure, indexes and number of rows of the CRMINFO table are similar in both, with up-to-date statistics.
    Env Details :
    OS - Sun Solaris 5.10
    DB - 10.2.0.4
    Init param:
    optimizer_mode = ALL_ROWS
    optimizer_dynamic_sampling = 5
    optimizer_features_enable = 10.2.0.4
    optimizer_index_caching = 90
    optimizer_index_cost_adj = 30
    Query :*
    SELECT COUNT (*)
    FROM CRMINFO
    WHERE RETAILER = :1
    AND STATUS = 20
    AND EXC = :1
    AND SUBNO IS NULL
    Explain Plan (TST):
    SELECT STATEMENT ALL_ROWSCost: 916 Bytes: 19 Cardinality: 1                
    3 SORT AGGREGATE Bytes: 19 Cardinality: 1           
    2 TABLE ACCESS BY INDEX ROWID TABLE TST.CRMINFO Cost: 916 Bytes: 16,663 Cardinality: 877      
    1 INDEX RANGE SCAN INDEX TST.CRMINFO_X1 Cost: 42 Cardinality: 12,549
    Index (TST):
    CRMINFO_X1(EXC, RETAILER, STATUS)
    Explain Plan (PROD):
    SELECT STATEMENT ALL_ROWSCost: 1,832 Bytes: 19 Cardinality: 1                
    3 SORT AGGREGATE Bytes: 19 Cardinality: 1           
    2 TABLE ACCESS BY INDEX ROWID TABLE PROD.CRMINFO Cost: 1,832 Bytes: 2,052 Cardinality: 108      
    1 INDEX RANGE SCAN INDEX PROD.CRMINFO_X2 Cost: 117 Cardinality: 42,519
    Index (PROD):
    CRMINFO_X2 (RETAILER)
    How does Oracle calculate the cost and decide which index it should use? Why didn't it choose the same index as in the test environment? How should I approach this, and where do I need to dig in to find the cause?
    I did try playing with the above mentioned init parameters but it didn't help at all.
    Thanks.
    Regards,
    ~Pointer

    Pointer wrote:
    Hmm, my worry is how do I force Oracle to grab the proper index on prod, i.e. CRMINFO_X1. I certainly believe it's a bad approach to add a hint to the select statement to force Oracle to use that index.
    Why do you believe that the index you mention is the "proper" index versus what Oracle is choosing? Can you prove with hinting that the "proper" index results in a faster and more efficient execution plan? If it does, then the next place I would look is the statistics for the tables and columns of interest. From there you could try to estimate why Oracle thinks the other index is better. Another option is to run a 10053 (CBO) trace and see why Oracle thinks it is better.
    I would not support a hint in a production environment, except in the most extreme cases. Usually the CBO makes the right choice, but it only can if the statistics match the distribution of data.
    Refreshing the data may help me simulate the issue on TST, but it wouldn't help me understand why prod uses CRMINFO_X2 instead of CRMINFO_X1, which has all three columns of the query's WHERE clause.
    It would help because it's a test environment and you wouldn't have to run any queries directly on your production system to achieve the same results.
    A bad thought here :( , but if I create a new index with the columns in a different order, say (RETAILER, STATUS, EXC) instead of (EXC, RETAILER, STATUS), will Oracle use it? Or would it help in reducing the cost and cardinality of the select query?
    It's not that easy. There is a lot that goes into the cost calculation; some of it is known (through the great work by Jonathan Lewis and Richard Foote), and some of it is purely internal to Oracle. If you are more interested in the internals, Cost-Based Oracle Fundamentals by Jonathan Lewis is a great book.
    HTH!
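    A hedged sketch of the two diagnostics suggested above, i.e. prove with a hint whether CRMINFO_X1 really is cheaper, then let a 10053 trace show the optimizer's own costing (standard hint syntax and trace event; the bind values still have to be supplied by the caller):
    -- 1) Force the candidate index purely as a test
    SELECT /*+ INDEX(c CRMINFO_X1) */ COUNT(*)
      FROM CRMINFO c
     WHERE RETAILER = :1
       AND STATUS = 20
       AND EXC = :1
       AND SUBNO IS NULL;
    -- 2) Capture the optimizer's reasoning for the unhinted statement
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'CRMINFO_10053';
    ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
    EXPLAIN PLAN FOR
      SELECT COUNT(*) FROM CRMINFO
       WHERE RETAILER = :1 AND STATUS = 20 AND EXC = :1 AND SUBNO IS NULL;
    ALTER SESSION SET EVENTS '10053 trace name context off';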

  • Why two different explain plan for same objects?

    Believe it or not, there are two different databases, one for processing and one for reporting, and the plan comes out different for the same query. Table structure and indexes are the same. It's 11g.
    Thanks
    Good explain plan .. works fine..
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 12,775  Bytes: 184  Cardinality: 1                                                                        
         27 SORT UNIQUE  Cost: 12,775  Bytes: 184  Cardinality: 1                                                                   
              26 NESTED LOOPS                                                              
                   24 NESTED LOOPS  Cost: 12,774  Bytes: 184  Cardinality: 1                                                         
                        22 HASH JOIN  Cost: 12,772  Bytes: 178  Cardinality: 1                                                    
                             20 NESTED LOOPS SEMI  Cost: 30  Bytes: 166  Cardinality: 1                                               
                                  17 NESTED LOOPS  Cost: 19  Bytes: 140  Cardinality: 1                                          
                                       14 NESTED LOOPS OUTER  Cost: 16  Bytes: 84  Cardinality: 1                                     
                                            11 VIEW DSSADM. Cost: 14  Bytes: 37  Cardinality: 1                                
                                                 10 NESTED LOOPS                           
                                                      8 NESTED LOOPS  Cost: 14  Bytes: 103  Cardinality: 1                      
                                                           6 NESTED LOOPS  Cost: 13  Bytes: 87  Cardinality: 1                 
                                                                3 INLIST ITERATOR            
                                                                     2 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_FAMILY_TBL Cost: 10  Bytes: 51  Cardinality: 1       
                                                                          1 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9  Cardinality: 1 
                                                                5 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3  Bytes: 36  Cardinality: 1            
                                                                     4 INDEX RANGE SCAN INDEX DSSADM.STAN_JB_FN_IDX Cost: 2  Cardinality: 1       
                                                           7 INDEX UNIQUE SCAN INDEX (UNIQUE) DSSODS.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0  Cardinality: 1                 
                                                      9 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOBCODE_TBL_RPT Cost: 1  Bytes: 16  Cardinality: 1                      
                                            13 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PSXLATITEM_RPT Cost: 2  Bytes: 47  Cardinality: 1                                
                                                 12 INDEX RANGE SCAN INDEX DSSODS.PK_DRV_RIXLATITEM_RPT Cost: 1  Cardinality: 1                           
                                       16 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3  Bytes: 56  Cardinality: 1                                     
                                            15 INDEX RANGE SCAN INDEX DSSADM.DIM_JOBCODE_EXPDT1 Cost: 2  Cardinality: 1                                
                                  19 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_RPT Cost: 11  Bytes: 438,906  Cardinality: 16,881                                          
                                       18 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_JOBCODE_RPT Cost: 2  Cardinality: 8                                     
                             21 INDEX FAST FULL SCAN INDEX (UNIQUE) DSSADM.Z_PK_JOBCODE_PROMPT_TBL Cost: 12,699  Bytes: 66,790,236  Cardinality: 5,565,853                                               
                        23 INDEX RANGE SCAN INDEX DSSADM.DIM_PERSON_EMPL_RCD_SEQ_KEY Cost: 1  Cardinality: 1                                                    
    25 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_PERSON_EMPL_RCD Cost: 2  Bytes: 6  Cardinality: 1
    This bad plan shows a merge join cartesian and a full table scan:
    Plan
    SELECT STATEMENT  ALL_ROWSCost: 3,585  Bytes: 237  Cardinality: 1                                                              
         26 SORT UNIQUE  Cost: 3,585  Bytes: 237  Cardinality: 1                                                         
              25 NESTED LOOPS SEMI  Cost: 3,584  Bytes: 237  Cardinality: 1                                                    
                   22 NESTED LOOPS  Cost: 3,573  Bytes: 211  Cardinality: 1                                               
                        20 MERGE JOIN CARTESIAN  Cost: 2,864  Bytes: 70,446  Cardinality: 354                                          
                             17 NESTED LOOPS                                     
                                  15 NESTED LOOPS  Cost: 51  Bytes: 191  Cardinality: 1                                
                                       13 NESTED LOOPS OUTER  Cost: 50  Bytes: 180  Cardinality: 1                           
                                            10 HASH JOIN  Cost: 48  Bytes: 133  Cardinality: 1                      
                                                 6 NESTED LOOPS                 
                                                      4 NESTED LOOPS  Cost: 38  Bytes: 656  Cardinality: 8            
                                                           2 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 14  Bytes: 448  Cardinality: 8       
                                                                1 INDEX RANGE SCAN INDEX REPORT2.STAN_PROM_JB_IDX Cost: 6  Cardinality: 95 
                                                           3 INDEX RANGE SCAN INDEX REPORT2.SETID_JC_IDX Cost: 2  Cardinality: 1       
                                                      5 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 3  Bytes: 26  Cardinality: 1            
                                                 9 INLIST ITERATOR                 
                                                      8 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_FAMILY_TBL Cost: 10  Bytes: 51  Cardinality: 1            
                                                           7 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9  Cardinality: 1       
                                            12 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PSXLATITEM_RPT Cost: 2  Bytes: 47  Cardinality: 1                      
                                                 11 INDEX RANGE SCAN INDEX REPORT2.PK_DRV_RIXLATITEM_RPT Cost: 1  Cardinality: 1                 
                                       14 INDEX UNIQUE SCAN INDEX (UNIQUE) REPORT2.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0  Cardinality: 1                           
                                  16 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOBCODE_TBL_RPT Cost: 1  Bytes: 11  Cardinality: 1                                
                             19 BUFFER SORT  Cost: 2,863  Bytes: 4,295,552  Cardinality: 536,944                                     
                                  18 TABLE ACCESS FULL TABLE REPORT2.DIM_PERSON_EMPL_RCD Cost: 2,813  Bytes: 4,295,552  Cardinality: 536,944                                
                        21 INDEX RANGE SCAN INDEX (UNIQUE) REPORT2.Z_PK_JOBCODE_PROMPT_TBL Cost: 2  Bytes: 12  Cardinality: 1                                          
                   24 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_RPT Cost: 11  Bytes: 1,349,920  Cardinality: 51,920                                               
                        23 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_JOBCODE_RPT Cost: 2  Cardinality: 8                                          

    user550024 wrote:
    I am really surprised that the stats for the good SQL are a little old. I just computed the stats for the bad SQL, so they are up to date.
    There is something terribly wrong.
    Not necessarily. Just using the default stats collection, I've seen a few cases of things suddenly going wrong. As the data increases, it gets closer to an edge case where the inadequacy of the statistics convinces the optimizer to choose a wrong plan. To fix it, I could just go into dbconsole, set the stats back to a time when they worked, and lock them. In most cases it's definitely better to figure out what is really going on, though, to give the optimizer better information to work with. Aside from the value of learning how to do it, for some cases it's not so simple. Also, many think the default settings of the database statistics collection may be wrong in general (in 10.2.x, at least). So much depends on your application and data that you can't make too many generalizations. You have to look at the evidence and figure it out. There is still a steep learning curve for the tools to look at the evidence. People are here to help with that.
    Most of the time it works better than a dumb rule based optimizer, but at the cost of a few situations where people are smarter than computers. It's taken a lot of years to get to this point.
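    The reply mentions setting statistics back to a time when the plans were good and locking them; a hedged sketch of doing that with DBMS_STATS directly (the procedures are standard, while the owner, table and timestamp below are placeholders taken from the bad plan purely for illustration):
    BEGIN
      -- Roll the table's statistics back to a point when the plan was good
      DBMS_STATS.RESTORE_TABLE_STATS(
        ownname         => 'REPORT2',
        tabname         => 'DIM_PERSON_EMPL_RCD',
        as_of_timestamp => SYSTIMESTAMP - INTERVAL '7' DAY);
      -- Keep the regular stats job from overwriting them again
      DBMS_STATS.LOCK_TABLE_STATS(
        ownname => 'REPORT2',
        tabname => 'DIM_PERSON_EMPL_RCD');
    END;
    /
    Restored statistics are only available within the retention window; DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY shows how far back you can go.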

  • Multiple DB env with different explain plan

    Hi,
    I have a DEV DB and a QA DB, both located in the USA. There is a table called Customer, with the respective indexes, PK, constraints, etc., which is available in
    both environments. In DEV DB the Customer table has 1000 records, but in QA DB it has about 50000 records.
    Now in DEV DB, when I take an explain plan, the query does not go for a FULL table scan, but in QA DB it does. Why is there a discrepancy? Is it because the number of records in QA is larger than in DEV? I had hoped that would not be a problem, as the Oracle CBO/RBO would optimize the query by itself in both cases using the index, etc. If that is the case, I am wondering what the problem might be. Do we need to inform Oracle using hints? Please shed some light on this.
    Thanks.

    First of all, it is a good idea to compare the execution plans from different environments; not all developers do that. The cost or time cannot be properly compared, but the plan itself can. If the plan is different, then the behaviour of the query very likely is different as well.
    What you didn't show us was the query and the two different plans, perhaps in a broken-down test version.
    What is strange is that I would have expected the reverse behaviour. In the dev DB you have only 1000 records; usually the database will do a FULL table scan on this, simply because the amount of data is so small that it is often faster to read the table in one big chunk and then deal with it. The parameter db_file_multiblock_read_count plays a role in that. If it instead does a table access by index rowid, that is fine too.
    If your QA DB now uses a full table scan instead of the table access by index rowid, that is something to think about. A possible reason is stale statistics: for example, do you have an automatic job that collects statistics? Maybe that job hasn't run yet since you imported the data? Using hints as the solution would be wrong; however, you can use hints to see whether the performance of the query improves with the hint. If it does, then the CBO didn't have good enough information. If it doesn't, then the CBO already made the right decision.
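    A quick way to apply that diagnostic is to compare the plan Oracle picks on its own with the plan it produces under a hint (the Customer table comes from the post; the column and index names here are hypothetical placeholders):
    -- Plan chosen by the optimizer
    EXPLAIN PLAN FOR SELECT * FROM customer WHERE customer_id = :id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- Same statement forced to the index, purely as a test
    EXPLAIN PLAN FOR SELECT /*+ INDEX(c idx_customer_id) */ *
      FROM customer c WHERE c.customer_id = :id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);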

  • Different explain plan between 10.2.0.3 and 10.2.0.4

    Had a problem with an explain plan changing after upgrade from 10.2.0.3 to 10.2.0.4. Managed to simplify as much as possible for now:
    Query is :
    SELECT * FROM m_promo_chk_str
    WHERE (m_promo_chk_str.cust_cd) IN (
    SELECT cust_cd
    FROM s_usergrp_pda
    GROUP BY cust_cd)
    On 10.2.0.3 explain plan is:
    | 0 | SELECT STATEMENT | | 1 | 1227 | 26 (16)| 00:00:01 |
    |* 1 | HASH JOIN SEMI | | 1 | 1227 | 26 (16)| 00:00:01 |
    | 2 | TABLE ACCESS FULL | M_PROMO_CHK_STR | 1 | 1185 | 14 (0)| 00:00:01 |
    | 3 | VIEW | VW_NSO_1 | 137 | 5754 | 11 (28)| 00:00:01 |
    | 4 | HASH GROUP BY | | 137 | 548 | 11 (28)| 00:00:01 |
    | 5 | TABLE ACCESS FULL| S_USERGRP_PDA | 5219 | 20876 | 9 (12)| 00:00:01 |
    On 10.2.0.4 with same data is:
    | 0 | SELECT STATEMENT | | 1 | 1201 | 46 (5)| 00:00:01 |
    | 1 | HASH GROUP BY | | 1 | 1201 | 46 (5)| 00:00:01 |
    |* 2 | HASH JOIN | | 1 | 1201 | 45 (3)| 00:00:01 |
    | 3 | TABLE ACCESS FULL| M_PROMO_CHK_STR | 1 | 1197 | 29 (0)| 00:00:01 |
    | 4 | TABLE ACCESS FULL| S_USERGRP_PDA | 5219 | 20876 | 15 (0)| 00:00:01 |
    The explain plan is reasonable for when M_PROMO_CHK_STR is empty; however, we have a case where stats are gathered while the table is empty, but the table is then populated and the query runs slowly. I understand that this is not exactly a problem with the database, but I want to try to understand the different behaviour.
    Will look into a CBO trace tomorrow, but for now does anyone want to share any thoughts?

    PatHK wrote:
    Here is further simplification to reproduce the different behaviour - I think about as simple as I can get it!
    SELECT * FROM dual WHERE (dummy) IN (SELECT dummy FROM dual GROUP BY dummy);
    On 10.2.0.3
    |   0 | SELECT STATEMENT     |          |     1 |     4 |     5  (20)| 00:00:01 |
    |   1 |  NESTED LOOPS SEMI   |          |     1 |     4 |     5  (20)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL  | DUAL     |     1 |     2 |     2   (0)| 00:00:01 |
    |*  3 |   VIEW               | VW_NSO_1 |     1 |     2 |     3  (34)| 00:00:01 |
    |   4 |    SORT GROUP BY     |          |     1 |     2 |     3  (34)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| DUAL     |     1 |     2 |     2   (0)| 00:00:01 |
    On 10.2.0.4
    |   0 | SELECT STATEMENT     |      |     1 |     4 |     4   (0)| 00:00:01 |
    |   1 |  SORT GROUP BY NOSORT|      |     1 |     4 |     4   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS       |      |     1 |     4 |     4   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
    Timur's suggestion to look at a 10053 trace file is a good idea. It might be the case that someone disabled complex view merging in the 10.2.0.3 database instance. See the following:
    _complex_view_merging
    http://jonathanlewis.wordpress.com/2007/03/08/transformation-and-optimisation/
    Here is a test you might try on both database versions:
    ALTER SESSION SET "_COMPLEX_VIEW_MERGING"=TRUE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST1';
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SELECT * FROM DUAL WHERE (DUMMY) IN (SELECT DUMMY FROM DUAL GROUP BY DUMMY);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
    ALTER SESSION SET "_COMPLEX_VIEW_MERGING"=FALSE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST2';
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SELECT * FROM DUAL WHERE (DUMMY) IN (SELECT DUMMY FROM DUAL GROUP BY DUMMY);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
    The first plan output:
    | Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |      |       |       |     8 (100)|          |
    |   1 |  SORT GROUP BY NOSORT|      |     1 |     4 |     8   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS       |      |     1 |     4 |     8   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     4   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("DUMMY"="DUMMY")The second plan output:
    | Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |          |       |       |     9 (100)|          |
    |   1 |  NESTED LOOPS SEMI   |          |     1 |     4 |     9  (12)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL  | DUAL     |     1 |     2 |     4   (0)| 00:00:01 |
    |*  3 |   VIEW               | VW_NSO_1 |     1 |     2 |     5  (20)| 00:00:01 |
    |   4 |    SORT GROUP BY     |          |     1 |     2 |     5  (20)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| DUAL     |     1 |     2 |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("DUMMY"="$nso_col_1")From the first 10053 trace file:
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
      _pga_max_size                       = 368640 KB
    _pga_max_size is the only parameter with a non-default value which could affect the optimizer.
    From the second 10053 trace file:
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
      _pga_max_size                       = 368640 KB
      _complex_view_merging               = false
      *********************************
    This section in the first 10053 trace seems to show the complex view merging:
    SU: Considering interleaved complex view merging
    SU:   Transform an ANY subquery to semi-join or distinct.
    CVM: Considering view merge (candidate phase) in query block SEL$5DA710D3 (#1)
    CVM: Considering view merge (candidate phase) in query block SEL$683B0107 (#2)
    CVM: CBQT Marking query block SEL$683B0107 (#2)as valid for CVM.
    CVM:   Merging complex view SEL$683B0107 (#2) into SEL$5DA710D3 (#1).
    qbcp:******* UNPARSED QUERY IS *******
    SELECT /*+ */ "DUAL"."DUMMY" "DUMMY" FROM  (SELECT /*+ */ DISTINCT "DUAL"."DUMMY" "$nso_col_1" FROM "SYS"."DUAL" "DUAL" GROUP BY "DUAL"."DUMMY") "VW_NSO_2","SYS"."DUAL" "DUAL" WHERE "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    vqbcp:******* UNPARSED QUERY IS *******
    SELECT /*+ */ DISTINCT "DUAL"."DUMMY" "$nso_col_1" FROM "SYS"."DUAL" "DUAL" GROUP BY "DUAL"."DUMMY"
    CVM: result SEL$5DA710D3 (#1).
    ******* UNPARSED QUERY IS *******
    SELECT /*+ */ "DUAL"."DUMMY" "DUMMY" FROM "SYS"."DUAL" "DUAL","SYS"."DUAL" "DUAL" WHERE "DUAL"."DUMMY"="DUAL"."DUMMY" GROUP BY "DUAL"."DUMMY","DUAL".ROWID,"DUAL"."DUMMY"
    Registered qb: SEL$C9C6826C 0x155e2020 (VIEW MERGE SEL$5DA710D3; SEL$683B0107)
      signature (): qb_name=SEL$C9C6826C nbfros=2 flg=0
        fro(0): flg=0 objn=258 hint_alias="DUAL"@"SEL$1"
        fro(1): flg=0 objn=258 hint_alias="DUAL"@"SEL$2"
    FPD: Considering simple filter push in SEL$C9C6826C (#1)
    FPD:   Current where clause predicates in SEL$C9C6826C (#1) :
             "DUAL"."DUMMY"="DUAL"."DUMMY"
    kkogcp: try to generate transitive predicate from check constraints for SEL$C9C6826C (#1)
    predicates with check contraints: "DUAL"."DUMMY"="DUAL"."DUMMY"
    after transitive predicate generation: "DUAL"."DUMMY"="DUAL"."DUMMY"
    finally: "DUAL"."DUMMY"="DUAL"."DUMMY"
    CVM: Costing transformed query.
    kkoqbc-start
                : call(in-use=25864, alloc=65448), compile(in-use=115280, alloc=118736)
    kkoqbc-subheap (create addr=000000001556CD70)
    This is the same section from the second 10053 trace:
    SU: Considering interleaved complex view merging
    SU:   Transform an ANY subquery to semi-join or distinct.
    CVM: Considering view merge (candidate phase) in query block SEL$5DA710D3 (#1)
    CVM: Considering view merge (candidate phase) in query block SEL$683B0107 (#2)
    FPD: Considering simple filter push in SEL$5DA710D3 (#1)
    FPD:   Current where clause predicates in SEL$5DA710D3 (#1) :
             "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    kkogcp: try to generate transitive predicate from check constraints for SEL$5DA710D3 (#1)
    predicates with check contraints: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    after transitive predicate generation: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    finally: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    FPD: Considering simple filter push in SEL$683B0107 (#2)
    FPD:   Current where clause predicates in SEL$683B0107 (#2) :
             CVM: Costing transformed query.
    kkoqbc-start
                : call(in-use=25656, alloc=65448), compile(in-use=113992, alloc=114592)
    kkoqbc-subheap (create addr=00000000157E9078)
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Problems with explain plan and statement

    Hi community,
    I have migrated a j2ee application from DB2 to Oracle.
    First some facts of our application and database instance:
    We are using oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
    Our application is using Tomcat as web container and jboss as application server. We are only using prepared statements. So if I talk about statements I always mean prepared statements. Also our application is setting the defaultNChar property to true because every char and varchar field has been created as an nchar and nvarchar.
    We have some JSP pages that contain lists with search forms. Every time I enter a value into a form that returns a filled result set, the lists perform great. But every time I enter a value that returns an empty result set, the lists are 100 times slower. The JSP pages run in the Tomcat environment and submit their statements directly to the database. The connections are pooled by DBCP. So what can cause this behaviour?
    To analyze this problem I started logging all statements and the filled-in search field values and combinations executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database and generates an explain plan for every statement. But now a strange situation appears: every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some JSP pages within our application to force an explain plan from there (Tomcat environment). When I execute the same statement with exactly the same code, I get two completely different explain plans.
    First the statement itself:
    select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
    Now the explain plan from our application:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
    | 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("LINVIN"."BBASE"=:1)
    filter("LINVIN"."BBASE"=:1)
    5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    Now the one from the standalone tool:
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
    | 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
    |* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
    | 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
    |* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
    | 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
    |* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    4 - access("LSUPPLIER"."BBASE"=:1)
    6 - access("LINVIN"."BBASE"=:1)
    The size of the tables are: LINVIN - 383.692 Rows, LSUPPLIER - 115.782 Rows
    As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different explain plan for the same statement? And why is the hash join so much slower than the nested loop? If I'm right, a nested loop should only be used when the tables are pretty small.
    I also tried to play with some parameters:
    I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
    I would really appreciate it if somebody could help me with this issue, because I'm getting more and more distressed...
    Thanks in advance,
    Tobias
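    One way to see which plan the application's already-parsed cursor actually used at run time, rather than what EXPLAIN PLAN predicts, is to pull it from the shared pool. A hedged sketch for 10.2 follows; the LIKE pattern and &sql_id are only illustrative placeholders:
    -- Locate the cursor in the shared pool (illustrative filter only)
    SELECT sql_id, child_number, plan_hash_value
      FROM v$sql
     WHERE sql_text LIKE 'select LINVIN.BBASE%';
    -- Show the runtime plan of that cursor (NULL child number = all children)
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));
    If the plan reported here matches the fast plan from the application but not the one from the helper tool, the two sessions are parsing under different environments.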

    tobiwan wrote:
    Hi again,
    Here ist the answer:
    The problem, because I got two different explain plans, was that the external tool uses the NLS sesssion parameters coming from the OS which are in my case "de/DE".
    Within our application these parameters are changed to "en/US"!! So if I call the Java function Locale.setDefault(new Locale("en","US")) in my external tool before connecting to the database, the explain plans are finally equal.
    That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan required a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses the NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which was probably set to 'GERMAN' in your "external tool" case. You can check the "NLS_SESSION_PARAMETERS" view to see your current NLS_SORT setting.
    For more information regarding this issue, see my blog note I've written about this some time ago:
    http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
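    A quick way to see the sort-relevant settings of the current session is the view mentioned above; a minimal sketch follows, with the binary-sort ALTER SESSION shown only for illustration:
    -- Session NLS settings that influence sorting and comparison
    SELECT parameter, value
      FROM nls_session_parameters
     WHERE parameter IN ('NLS_LANGUAGE', 'NLS_SORT', 'NLS_COMP');
    -- Force binary sorting for the session so an index can satisfy ORDER BY
    ALTER SESSION SET NLS_SORT = BINARY;
    ALTER SESSION SET NLS_COMP = BINARY;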
    Now let me make a guess as to why you observe that it takes so long when your result set is empty:
    The plan avoiding the SORT ORDER BY is able to return the first rows of the result set very quickly, but it could take quite a while until all rows are processed, since it potentially requires many iterations of the loop until everything has been processed. Your front end probably displays only the first n rows of the result set by default and therefore works fine with this execution plan.
    Now if the result set is empty, then depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data exists; and since your application attempts to fetch the first n records but none will be found, it has to wait until all the data has been processed.
    You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
    Note that you seem to use bind variables and 10g; therefore you might be interested to know that, due to the "bind variable peeking" functionality, you might end up with "unstable" plans depending on the values "peeked" when the statement is parsed.
    For more information, see this comprehensive description of the issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
    Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
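    On 10g the values actually peeked at hard parse time can also be inspected directly; a hedged sketch, where '&sql_id' is a placeholder for the statement's SQL_ID from V$SQL:
    -- One row per captured bind per child cursor
    SELECT child_number, name, datatype_string, value_string
      FROM v$sql_bind_capture
     WHERE sql_id = '&sql_id'
     ORDER BY child_number, position;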
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Execution Time & Explain Plan Variations

    I have a scenario here. I have two schemas, schema1 and schema2. I executed a lengthy SELECT statement joining five tables in both schemas. I am getting totally different execution times (one runs in 0.3 seconds, the other in 4 seconds) and a different Explain Plan. I assumed that, since it is the same SELECT statement in both schemas, I should get the same Explain Plan. What could be the reason for these dissimilarities? Oracle Version: 9.2.0.8.0. I am ready to share the Explain Plans from the two schemas, but they are around 300 lines long.
    Thank you.

    There are many factors that come into play here.
    1.) The sizes of all tables involved are the same
    2.) The structures are also the same
    3.) The indexes are also the same
    4.) The stats are also up to date on both
    5.) Constraints and other factors are also the same.
    regards
    Pravin
    And a few more.
    6) session environments are the same
    7) bind variable values are the same - or were at first execution
    I'd change 4 to read "Optimizer statistics are the same" (not "up to date", which is a bit vague).
    In short, if you are using the CBO and feed in the exact same inputs, you will get the exact same plan; however, typically you won't get the exact same inputs and so may get a different plan. If your query has a time element and the two queries are hard parsed at different times, it may actually be impossible to get the same input - for example, the percentage of a table returned by a predicate like
    timestamp_col > sysdate - 1 will be estimated differently depending on the time of parsing, even for the same data.
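    One concrete way to verify that the statistical inputs really are the same in both schemas is to compare the dictionary statistics directly; a hedged sketch, where the view columns exist in 9.2 and the owner and table names are placeholders:
    -- Table-level statistics the CBO sees in each schema
    SELECT owner, table_name, num_rows, blocks, avg_row_len, last_analyzed
      FROM dba_tables
     WHERE table_name = 'YOUR_TABLE'
       AND owner IN ('SCHEMA1', 'SCHEMA2');
    -- Index statistics, which drive the index-access costing
    SELECT owner, index_name, distinct_keys, clustering_factor, last_analyzed
      FROM dba_indexes
     WHERE table_name = 'YOUR_TABLE'
       AND owner IN ('SCHEMA1', 'SCHEMA2');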
    That said, looking at the plans might reveal some obvious differences, though given the length you say they are, it might be better to point at a URL that holds them.
    Niall Litchfield
    http://www.orawin.info/

  • Explain plan changing for the same sql

    Hi All,
    In a E Business suite application, we have the 10.2.0.4 Database.
    One of the programs runs a SELECT statement that about once a month picks a different explain plan, which causes the program to run for a much longer time.
    For example: when it uses index A, it runs fine; when it uses index B, it runs for a much longer time.
    Can you please advise on the possible reasons for the same SQL to sometimes choose index B instead of index A?
    Thanks,
    Rakesh

    It could be that the SQL in question got aged out of the shared pool, and when it came to be reparsed the values in the bind variables were such that access via index B was more attractive than access via index A.
    Could you please send the query, the good and bad plans, and any other information that might help diagnose the problem.
    Note: we had a similar case where plans suddenly changed for no apparent reason (on 10.2.0.2) - we found that under certain circumstances the optimizer would not peek into the bind variables to derive the execution path.
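    If the AWR history is available (10.2, and it requires the Diagnostics Pack licence), whether the statement really flips between plans over time can be confirmed with something like the following hedged sketch, where '&sql_id' is a placeholder:
    -- One row per plan the statement has used in the retained snapshots
    SELECT sql_id, plan_hash_value,
           MIN(snap_id) AS first_snap, MAX(snap_id) AS last_snap,
           SUM(executions_delta) AS execs
      FROM dba_hist_sqlstat
     WHERE sql_id = '&sql_id'
     GROUP BY sql_id, plan_hash_value;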

  • Explain plans differ:  same query, cloned databases.

    oracle 10.2.0.2.0
    solaris 10
    same query, different databases (cloned from same source), different explain plans.
    db parameters are identical.
    Any ideas?
    DB1 (production)
    SQL> explain plan for
      2  SELECT  V__101.*,   SHAPE.fid,SHAPE.numofpts,SHAPE.entity,SHAPE.points,
      3    SHAPE.rowid
      4  FROM
      5   (SELECT b.OBJECTID,b.CRT_DTE,b.CRT_SRC_CDE,b.CRT_AQUI_MTHD_CDE,b.CRT_USR_ID,
      6    b.ADT_ACTN_CDE,b.ADT_ACTN_DTE,b.ADT_ACTN_SRC_CDE,b.ADT_ACTN_USR_ID,
      7    b.TP_SITE_POINT_ID,b.SHAPE,b.SRC_ID,b.SRC_CRT_DTE,b.SRC_TYPE_CDE,
      8    b.LCTN_STAT_CDE,b.FRZN_LCTN_IND,b.TP_NOTES   FROM AMSOWNER.TP_SITE_POINT b
      9    WHERE b.OBJECTID NOT IN (SELECT /*+ HASH_AJ */ SDE_DELETES_ROW_ID FROM
    10    AMSOWNER.D101 WHERE DELETED_AT IN (SELECT l.lineage_id FROM
      SDE.state_lineages l WHERE l.lineage_name = :source_lineage_name AND
    11   12    l.lineage_id <= :source_state_id) AND SDE_STATE_ID = :"SYS_B_0") UNION ALL
      SELECT a.OBJECTID,a.CRT_DTE,a.CRT_SRC_CDE,a.CRT_AQUI_MTHD_CDE,a.CRT_USR_ID,
    13   14    a.ADT_ACTN_CDE,a.ADT_ACTN_DTE,a.ADT_ACTN_SRC_CDE,a.ADT_ACTN_USR_ID,
    15    a.TP_SITE_POINT_ID,a.SHAPE,a.SRC_ID,a.SRC_CRT_DTE,a.SRC_TYPE_CDE,
    16    a.LCTN_STAT_CDE,a.FRZN_LCTN_IND,a.TP_NOTES   FROM AMSOWNER.A101 a,
    17    SDE.state_lineages SL WHERE (a.OBJECTID, a.SDE_STATE_ID) NOT IN (SELECT /*+
    18    HASH_AJ */ SDE_DELETES_ROW_ID,SDE_STATE_ID  FROM AMSOWNER.D101 WHERE
    19    DELETED_AT IN (SELECT l.lineage_id FROM SDE.state_lineages l WHERE
    20    l.lineage_name = :source_lineage_name AND l.lineage_id <= :source_state_id)
    21    AND SDE_STATE_ID > :"SYS_B_1") AND a.SDE_STATE_ID = SL.lineage_id AND
    22    SL.lineage_name = :source_lineage_name AND SL.lineage_id <=
    23    :source_state_id ) V__101 , AMSOWNER.F15 SHAPE where SHAPE.FID(+) =
    24    V__101.SHAPE;
    Explained.
    SQL>  @?/rdbms/admin/utlxplp.sql;
    PLAN_TABLE_OUTPUT
    Plan hash value: 254376361
    | Id  | Operation                         | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                  |                  |   113K|    49M|       |  2926   (1)| 00:00:41 |
    |*  1 |  HASH JOIN RIGHT OUTER            |                  |   113K|    49M|    17M|  2926   (1)| 00:00:41 |
    |   2 |   TABLE ACCESS FULL               | F15              |   113K|    15M|       |   344   (2)| 00:00:05 |
    |   3 |   VIEW                            |                  |   113K|    34M|       |   331   (2)| 00:00:05 |
    |   4 |    UNION-ALL                      |                  |       |       |       |            |          |
    |*  5 |     HASH JOIN RIGHT ANTI          |                  |   113K|  8948K|       |   332   (2)| 00:00:05 |
    |   6 |      VIEW                         | VW_NSO_1         |     1 |    13 |       |     2   (0)| 00:00:01 |
    |   7 |       NESTED LOOPS                |                  |     1 |    50 |       |     2   (0)| 00:00:01 |
    |   8 |        TABLE ACCESS BY INDEX ROWID| D101             |     1 |    39 |       |     1   (0)| 00:00:01 |
    |*  9 |         INDEX SKIP SCAN           | D101_IDX1        |     1 |       |       |     1   (0)| 00:00:01 |
    |* 10 |        INDEX UNIQUE SCAN          | LINEAGES_PK      |     1 |    11 |       |     1   (0)| 00:00:01 |
    |  11 |      TABLE ACCESS FULL            | TP_SITE_POINT    |   113K|  7512K|       |   329   (2)| 00:00:05 |
    |* 12 |     HASH JOIN ANTI                |                  |     1 |   375 |       |     5  (20)| 00:00:01 |
    |  13 |      NESTED LOOPS                 |                  |     1 |   349 |       |     2   (0)| 00:00:01 |
    |  14 |       TABLE ACCESS BY INDEX ROWID | A101             |     1 |   338 |       |     1   (0)| 00:00:01 |
    |* 15 |        INDEX RANGE SCAN           | A101_STATEID_IX1 |     1 |       |       |     1   (0)| 00:00:01 |
    |* 16 |       INDEX UNIQUE SCAN           | LINEAGES_PK      |     1 |    11 |       |     1   (0)| 00:00:01 |
    |* 17 |      VIEW                         | VW_NSO_2         |     1 |    26 |       |     2   (0)| 00:00:01 |
    |  18 |       NESTED LOOPS                |                  |     1 |    50 |       |     2   (0)| 00:00:01 |
    |* 19 |        INDEX FULL SCAN            | D101_PK          |     1 |    39 |       |     1   (0)| 00:00:01 |
    |* 20 |        INDEX UNIQUE SCAN          | LINEAGES_PK      |     1 |    11 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - access("SHAPE"."FID"(+)="V__101"."SHAPE")
       5 - access("B"."OBJECTID"="$nso_col_1")
       9 - access("SDE_STATE_ID"=TO_NUMBER(:SYS_B_0))
           filter("SDE_STATE_ID"=TO_NUMBER(:SYS_B_0))
      10 - access("L"."LINEAGE_NAME"=TO_NUMBER(:SOURCE_LINEAGE_NAME) AND "DELETED_AT"="L"."LINEAGE_ID")
           filter("L"."LINEAGE_ID"<=TO_NUMBER(:SOURCE_STATE_ID))
      12 - access("A"."OBJECTID"="$nso_col_1" AND "A"."SDE_STATE_ID"="$nso_col_2")
      15 - access("A"."SDE_STATE_ID"<=TO_NUMBER(:SOURCE_STATE_ID))
      16 - access("SL"."LINEAGE_NAME"=TO_NUMBER(:SOURCE_LINEAGE_NAME) AND
                  "A"."SDE_STATE_ID"="SL"."LINEAGE_ID")
           filter("SL"."LINEAGE_ID"<=TO_NUMBER(:SOURCE_STATE_ID))
      17 - filter("$nso_col_2"<=TO_NUMBER(:SOURCE_STATE_ID))
      19 - access("SDE_STATE_ID">TO_NUMBER(:SYS_B_1))
           filter("SDE_STATE_ID">TO_NUMBER(:SYS_B_1))
      20 - access("L"."LINEAGE_NAME"=TO_NUMBER(:SOURCE_LINEAGE_NAME) AND "DELETED_AT"="L"."LINEAGE_ID")
           filter("L"."LINEAGE_ID"<=TO_NUMBER(:SOURCE_STATE_ID))
    47 rows selected.
    DB2 (staging)
    SQL> explain plan for
      2  SELECT  V__101.*,   SHAPE.fid,SHAPE.numofpts,SHAPE.entity,SHAPE.points,
      SHAPE.rowid
    FROM
      3    4    5   (SELECT b.OBJECTID,b.CRT_DTE,b.CRT_SRC_CDE,b.CRT_AQUI_MTHD_CDE,b.CRT_USR_ID,
      b.ADT_ACTN_CDE,b.ADT_ACTN_DTE,b.ADT_ACTN_SRC_CDE,b.ADT_ACTN_USR_ID,
      6    7    b.TP_SITE_POINT_ID,b.SHAPE,b.SRC_ID,b.SRC_CRT_DTE,b.SRC_TYPE_CDE,
      8    b.LCTN_STAT_CDE,b.FRZN_LCTN_IND,b.TP_NOTES   FROM AMSOWNER.TP_SITE_POINT b
      9    WHERE b.OBJECTID NOT IN (SELECT /*+ HASH_AJ */ SDE_DELETES_ROW_ID FROM
    10    AMSOWNER.D101 WHERE DELETED_AT IN (SELECT l.lineage_id FROM
    11    SDE.state_lineages l WHERE l.lineage_name = :source_lineage_name AND
    12    l.lineage_id <= :source_state_id) AND SDE_STATE_ID = :"SYS_B_0") UNION ALL
    13    SELECT a.OBJECTID,a.CRT_DTE,a.CRT_SRC_CDE,a.CRT_AQUI_MTHD_CDE,a.CRT_USR_ID,
    14    a.ADT_ACTN_CDE,a.ADT_ACTN_DTE,a.ADT_ACTN_SRC_CDE,a.ADT_ACTN_USR_ID,
    15    a.TP_SITE_POINT_ID,a.SHAPE,a.SRC_ID,a.SRC_CRT_DTE,a.SRC_TYPE_CDE,
    16    a.LCTN_STAT_CDE,a.FRZN_LCTN_IND,a.TP_NOTES   FROM AMSOWNER.A101 a,
    17    SDE.state_lineages SL WHERE (a.OBJECTID, a.SDE_STATE_ID) NOT IN (SELECT /*+
      HASH_AJ */ SDE_DELETES_ROW_ID,SDE_STATE_ID  FROM AMSOWNER.D101 WHERE
    18   19    DELETED_AT IN (SELECT l.lineage_id FROM SDE.state_lineages l WHERE
    20    l.lineage_name = :source_lineage_name AND l.lineage_id <= :source_state_id)
    21    AND SDE_STATE_ID > :"SYS_B_1") AND a.SDE_STATE_ID = SL.lineage_id AND
    22    SL.lineage_name = :source_lineage_name AND SL.lineage_id <=
    23    :source_state_id ) V__101 , AMSOWNER.F15 SHAPE where SHAPE.FID(+) =
    24    V__101.SHAPE;
    Explained.
    SQL>  @?/rdbms/admin/utlxplp.sql;
    PLAN_TABLE_OUTPUT
    Plan hash value: 4287458713
    | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                  |                  |   145 | 68150 |   401   (1)| 00:00:06 |
    |   1 |  NESTED LOOPS OUTER               |                  |   145 | 68150 |   401   (1)| 00:00:06 |
    |   2 |   VIEW                            |                  |   145 | 47125 |   328   (1)| 00:00:05 |
    |   3 |    UNION-ALL                      |                  |       |       |            |          |
    |*  4 |     HASH JOIN ANTI                |                  |   144 | 10224 |   324   (1)| 00:00:05 |
    |   5 |      TABLE ACCESS FULL            | TP_SITE_POINT    |   145 |  8410 |   322   (1)| 00:00:05 |
    |   6 |      VIEW                         | VW_NSO_1         |     1 |    13 |     2   (0)| 00:00:01 |
    |   7 |       NESTED LOOPS                |                  |     1 |    50 |     2   (0)| 00:00:01 |
    |*  8 |        TABLE ACCESS BY INDEX ROWID| D101             |     1 |    39 |     1   (0)| 00:00:01 |
    |*  9 |         INDEX SKIP SCAN           | D101_IDX1        |     1 |       |     1   (0)| 00:00:01 |
    |* 10 |        INDEX UNIQUE SCAN          | LINEAGES_PK      |     1 |    11 |     1   (0)| 00:00:01 |
    |* 11 |     FILTER                        |                  |       |       |            |          |
    |  12 |      NESTED LOOPS                 |                  |     1 |   349 |     2   (0)| 00:00:01 |
    |  13 |       TABLE ACCESS BY INDEX ROWID | A101             |     1 |   338 |     1   (0)| 00:00:01 |
    |* 14 |        INDEX RANGE SCAN           | A101_STATEID_IX1 |     1 |       |     1   (0)| 00:00:01 |
    |* 15 |       INDEX UNIQUE SCAN           | LINEAGES_PK      |     1 |    11 |     1   (0)| 00:00:01 |
    |* 16 |      FILTER                       |                  |       |       |            |          |
    |  17 |       NESTED LOOPS                |                  |     1 |    50 |     2   (0)| 00:00:01 |
    |* 18 |        TABLE ACCESS BY INDEX ROWID| D101             |     1 |    39 |     1   (0)| 00:00:01 |
    |* 19 |         INDEX RANGE SCAN          | D101_IDX1        |     1 |       |     1   (0)| 00:00:01 |
    |* 20 |        INDEX UNIQUE SCAN          | LINEAGES_PK      |     1 |    11 |     1   (0)| 00:00:01 |
    |  21 |   TABLE ACCESS BY INDEX ROWID     | F15              |     1 |   145 |     1   (0)| 00:00:01 |
    |* 22 |    INDEX UNIQUE SCAN              | F15_UK1          |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("B"."OBJECTID"="$nso_col_1")
       8 - filter("DELETED_AT"<=TO_NUMBER(:SOURCE_STATE_ID))
       9 - access("SDE_STATE_ID"=TO_NUMBER(:SYS_B_0))
           filter("SDE_STATE_ID"=TO_NUMBER(:SYS_B_0))
      10 - access("L"."LINEAGE_NAME"=TO_NUMBER(:SOURCE_LINEAGE_NAME) AND
                  "DELETED_AT"="L"."LINEAGE_ID")
           filter("L"."LINEAGE_ID"<=TO_NUMBER(:SOURCE_STATE_ID))
      11 - filter( NOT EXISTS (SELECT /*+ HASH_AJ */ 0 FROM "AMSOWNER"."D101"
                  "D101","SDE"."STATE_LINEAGES" "L" WHERE :B1>TO_NUMBER(:SYS_B_1) AND
                  "DELETED_AT"="L"."LINEAGE_ID" AND "L"."LINEAGE_NAME"=TO_NUMBER(:SOURCE_LINEAGE_NAME) AND
                  "L"."LINEAGE_ID"<=TO_NUMBER(:SOURCE_STATE_ID) AND "SDE_STATE_ID"=:B2 AND
                  "SDE_DELETES_ROW_ID"=:B3 AND "DELETED_AT"<=TO_NUMBER(:SOURCE_STATE_ID) AND
                  "SDE_STATE_ID">TO_NUMBER(:SYS_B_1)))
      14 - access("A"."SDE_STATE_ID"<=TO_NUMBER(:SOURCE_STATE_ID))
      15 - access("SL"."LINEAGE_NAME"=TO_NUMBER(:SOURCE_LINEAGE_NAME) AND
                  "A"."SDE_STATE_ID"="SL"."LINEAGE_ID")
           filter("SL"."LINEAGE_ID"<=TO_NUMBER(:SOURCE_STATE_ID))
      16 - filter(:B1>TO_NUMBER(:SYS_B_1))
      18 - filter("DELETED_AT"<=TO_NUMBER(:SOURCE_STATE_ID))
      19 - access("SDE_DELETES_ROW_ID"=:B1 AND "SDE_STATE_ID"=:B2)
           filter("SDE_STATE_ID">TO_NUMBER(:SYS_B_1))
      20 - access("L"."LINEAGE_NAME"=TO_NUMBER(:SOURCE_LINEAGE_NAME) AND
                  "DELETED_AT"="L"."LINEAGE_ID")
           filter("L"."LINEAGE_ID"<=TO_NUMBER(:SOURCE_STATE_ID))
      22 - access("SHAPE"."FID"(+)="V__101"."SHAPE")
    58 rows selected.

    rocr wrote:
    oracle 10.2.0.2.0
    solaris 10
    same query, different databases (cloned from same source), different explain plans.
    db parameters are identical.
    DB1 (production)
    | Id  | Operation                         | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                  |                  |   113K|    49M|       |  2926   (1)| 00:00:41 |
    DB2 (staging)
    | Id  | Operation                         | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                  |                  |   145 | 68150 |   401   (1)| 00:00:06 |
    If you check the execution plans it's quite obvious that the two environments cannot be identical. If you have the same (amount of) data, then one of the two estimates is way off. Which one is in the right ballpark - how many rows does the query actually return? The same applies to the remaining operation ids of the plan: which estimate is correct?
    It's very likely that the underlying table/index statistics are not the same, and therefore you get different plans.
    By the way, you're using bind variables, therefore the output of the EXPLAIN PLAN is only of limited use, since the actual plan at run time might be completely different due to bind variable peeking and potentially histograms created on some of the columns (which is the default in 10g due to the "FOR ALL COLUMNS SIZE AUTO" method_opt default parameter value).
    If you haven't disabled the pre-configured statistics collection job that runs every night in 10g, you might get different statistics due to this job already.
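    Whether that nightly job is still active can be checked with a query along these lines (a hedged sketch; the job and view names are as documented for 10.2):
    -- The pre-configured 10g statistics collection job
    SELECT job_name, enabled, last_start_date
      FROM dba_scheduler_jobs
     WHERE job_name = 'GATHER_STATS_JOB';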
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Strange behavior of explain plan

    Hello i don't know if this is the correct category for posting my issue.
    I am facing strange behavior of the explain plan. Here is what I mean:
    I have a query which runs very fast and is well optimized; the explain plan shows very good results in cost and execution time.
    When I run the query a second or third time, a different explain plan appears, with not such a good cost and execution time.
    Could you please guide me as to what might be wrong?
    thanks a lot

    Hi, I found the correct category.
    thanks

  • Explain plan results are different in SQL Developer than SQL Plus

    My Environment:
    SQL Developer 1.0.0.15.27
    Platform where SQL Developer is running: Windows XP 2002 SP2
    Oracle Database and Client 9.2.0.7
    Optimizer_mode: FIRST_ROWS
    I have the following SQL statement:
    SELECT a1.comp_id
    FROM temp_au_company a0, au_company a1
    WHERE :b2 = a0.temp_emp_code
    AND a0.comp_id = a1.comp_id
    AND a0.sls_terr_code != a1.sls_terr_code
    AND a1.last_mdfy_date > :b1
    When I run an Explain in SQL Developer I get the following access path (which is the one I really want):
    SELECT STATEMENT
      TABLE ACCESS (BY INDEX ROWID) FEDLINK.AU_COMPANY
        NESTED LOOPS
          INDEX (RANGE SCAN) FEDLINK.UX2_TEMP_AU_COMPANY
          INDEX (RANGE SCAN) FEDLINK.PX1_COMPANY
    However, when I execute the statement with sql_trace turned on and use tkprof to generate the actual access path, the statement executes as follows (which is WAY more expensive):
    call         count    cpu  elapsed   disk  query  current   rows
    Parse            1   0.00     0.00      0      0        0      0
    Execute          1   0.00     0.00      0      0        0      0
    Fetch            1   3.58     6.68  28136  29232        0      0
    total            3   3.58     6.69  28136  29232        0      0
    Misses in library cache during parse: 1
    Optimizer goal: FIRST_ROWS
    Parsing user id: 979 (FEDLINK) (recursive depth: 1)
    Rows Row Source Operation
    0 NESTED LOOPS
    0 TABLE ACCESS FULL AU_COMPANY
    0 INDEX RANGE SCAN UX2_TEMP_AU_COMPANY (object id 49783)
    Notice the FULL access of au_company.
    I understand that SQL Developer has nothing to do with why the statement executed the way it did, but why is the Explain in SQL Developer different than the actual execution plan?
    Added note: when I run the explain in SQL*Plus it is the same as the actual execution. Here is the explain from SQL*Plus:
    explain plan for SELECT a1.comp_id
    FROM temp_au_company a0, au_company a1
    WHERE '1' = a0.temp_emp_code
    AND a0.comp_id = a1.comp_id
    AND a0.sls_terr_code != a1.sls_terr_code
    AND a1.last_mdfy_date > '01-MAY-2006';
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 2 | 76 | 2597 |
    | 1 | NESTED LOOPS | | 2 | 76 | 2597 |
    | 2 | TABLE ACCESS FULL | AU_COMPANY | 2 | 42 | 2595 |
    | 3 | INDEX RANGE SCAN | UX2_TEMP_AU_COMPANY | 1 | 17 | 2
    Thanks,
    Brenda

    The explain is different (full scan of au_company in SQL Plus / index access in SQL Developer) even when I use variables in SQL Plus. Here is the output for SQL Plus using variables instead of literals:
    SQL> variable b1 varchar2
    SQL> variable b2 char
    SQL> explain plan for SELECT a1.comp_id
    2 FROM temp_au_company a0, au_company a1
    3 WHERE :b2 = a0.temp_emp_code
    4 AND a0.comp_id = a1.comp_id
    5 AND a0.sls_terr_code != a1.sls_terr_code
    6 AND a1.last_mdfy_date > :b1
    7 /
    Explained.
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | SELECT STATEMENT | | 3184 | 118K| 2995 |
    | 1 | HASH JOIN | | 3184 | 118K| 2995 |
    | 2 | INDEX RANGE SCAN | UX2_TEMP_AU_COMPANY | 3187 | 54179 | 3 |
    | 3 | TABLE ACCESS FULL | AU_COMPANY | 24009 | 492K| 2983 |
    Any other ideas? They should be the same.
    Brenda
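    On 9.2 the plan a cursor actually used can also be read straight from the shared pool, which sidesteps the EXPLAIN PLAN versus runtime mismatch altogether. A hedged sketch follows; the LIKE filter is only an illustration and assumes a single matching cursor:
    -- Runtime plan of the parsed cursor, from V$SQL_PLAN
    SELECT child_number, id,
           LPAD(' ', depth) || operation || ' ' || options AS plan_step,
           object_name, cost
      FROM v$sql_plan
     WHERE hash_value = (SELECT hash_value
                           FROM v$sql
                          WHERE sql_text LIKE 'SELECT a1.comp_id%'
                            AND ROWNUM = 1)
     ORDER BY child_number, id;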

  • Explain plan different for same query

    Hi all,
    I have a query, which basically selects some columns from a remote database view. The query is as follows:
    select * from tab1@remotedb, tab2@remotedb
    where tab1.cash_id = tab2.id
    and tab1.date = '01-JAN-2003'
    and tab2.country_code = 'GB';
    Now, i am working on two environments, one is production and other is development. Production environment has following specification:
    1. Remotedb = Oracle9i, Linux OS
    2. Database on which query is running = Oracle10g, Linux OS
    Development environment has following specification:
    1. Remotedb = Oracle10g, Windows OS
    2. Database on which query is running = Oracle10g, Linux OS
    Both databases in development and production environments are on different machines.
    When I execute the above query on production, I see full table scans on both tables in the execution plan (TOAD), but when I execute the query in development, I see that both remote database tables use indexes.
    Why am I getting different execution plans on the two databases? Is there any difference in user rights/privileges, or is there a difference in statistics between the two databases? I have checked the statistics for both tables on the Production and Development databases; they are up to date.
    This issue is creating a performance disaster in our Production system. Any kind of help or knowledge sharing is appreciated.
    Thank you and Best Regards.

    We ran into a similar situation yesterday morning, though our implementation was easier than yours. Different plans in development and production, though both systems were 10gR2 at the time. Production was doing a Merge Join Cartesian (!) instead of nested loop joins. Our DBA figured out that the production stats had been locked for some tables, preventing the stat refresh; she unlocked them and re-analyzed, which fixed our problem.
    Of some interest was discovering that I got different execution plans from the same UPDATE via EXPLAIN PLAN and SQL*Plus AUTOTRACE in development. The issue appears to have been bind peeking. Converting bind variables to constants yielded the AUTOTRACE plan, as did turning bind peeking off while using the bind variables. CURSOR_SHARING was set to EXACT too.
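    Checking for and clearing locked statistics, as described above, could look like this on 10g (a hedged sketch; 'APP_OWNER' and 'TAB1' are placeholder names):
    -- Tables whose statistics are currently locked
    SELECT owner, table_name, stattype_locked, last_analyzed
      FROM dba_tab_statistics
     WHERE owner = 'APP_OWNER'
       AND stattype_locked IS NOT NULL;
    -- Unlock and re-gather, as the DBA in this case did
    EXEC DBMS_STATS.UNLOCK_TABLE_STATS('APP_OWNER', 'TAB1');
    EXEC DBMS_STATS.GATHER_TABLE_STATS('APP_OWNER', 'TAB1', cascade => TRUE);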
