Query returns different Explain Plan

Hi,
I have two databases that are identical to each other, but the same query produces a different explain plan on each and runs slower on one than on the other. How can I troubleshoot this further to get faster times on the slower database?
Thanks,
Prachi

Query:
=====
SELECT sum(total_contacts), sum(email_partner_news), sum(email_tech_news),
       sum(email_enduser_news), sum(email), sum(email_gmo), sum(gmo_nophone),
       sum(email_gmo_noaddress), sum(email_gmo_address), sum(email_gmo_nocertaddr),
       sum(email_gmo_certaddress), sum(email_gmo_nophone_noaddr), sum(phone_number),
       sum(gmo_with_phone), sum(phone_number_email_gmo_noaddr),
       sum(phone_number_email_gmo_nocert), sum(phone_number_address),
       sum(phone_number_address_noemail), sum(phone_address_email_nogmo),
       sum(phone_number_address_email), sum(phone_number_certaddr),
       sum(phone_number_certaddr_noemail), sum(phone_certaddr_email_nogmo),
       sum(phone_number_address_gmo), sum(phone_certaddr_email_gmo), sum(address),
       sum(address_nophone_number_nogmo), sum(address_certified),
       sum(certaddr_nophone_nogmo), sum(address_email_nophone_gmo),
       sum(certaddr_email_nophone_gmo), sum(phone_gmo), sum(smail_gmo), sum(fax_gmo),
       sum(email_wcast), sum(email_inews), sum(email_salrt), sum(phone_gmo_phone),
       sum(smail_gmo_address), sum(fax_gmo_fax)
FROM contact_values_vw_tc_2 cv, ((SELECT gcd_contact_id
FROM contact_values)
INTERSECT
((SELECT gcd_contact_id
FROM gcddata.contact_country s, gcd.country c,
segmentation_query_values v
WHERE s.country_id = c.country_id
AND c.region_id = v.query_value
AND v.selection_type = 'I'
AND v.query_id = 2088
AND v.query_sequence = 1)
INTERSECT
(SELECT gcd_contact_id
FROM gcddata.contact_country s, gcd.country c,
segmentation_query_values v
WHERE s.country_id = c.country_id
AND c.sub_region_id = v.query_value
AND v.selection_type = 'I'
AND v.query_id = 2088
AND v.query_sequence = 2)) ) sl
WHERE cv.gcd_contact_id = sl.gcd_contact_id
AND cv.country_id IN (SELECT cm.country_id
FROM segmentation_user_country_map cm
WHERE cm.user_name = 'E30590')
========================================
Execution Plan - FAST
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=32931 Card=1 Bytes=73)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=32931 Card=386388 Bytes=28206324)
3 2 VIEW (Cost=29617 Card=1142043 Bytes=14846559)
4 3 INTERSECTION
5 4 SORT (UNIQUE)
6 5 INDEX (FAST FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=5 Card=1907737 Bytes=11446422)
7 4 INTERSECTION
8 7 SORT (UNIQUE)
9 8 HASH JOIN (Cost=655 Card=1998575 Bytes=57958675)
10 9 HASH JOIN (Cost=6 Card=110 Bytes=2200)
11 10 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=7 Bytes=91)
12 11 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=15)
13 10 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
14 9 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=643 Card=2199855 Bytes=19798695)
15 7 SORT (UNIQUE)
16 15 HASH JOIN (Cost=655 Card=1142043 Bytes=33119247)
17 16 HASH JOIN (Cost=6 Card=63 Bytes=1260)
18 17 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=7 Bytes=91)
19 18 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=15)
20 17 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
21 16 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=643 Card=2199855 Bytes=19798695)
22 2 HASH JOIN (Cost=2174 Card=645445 Bytes=38726700)
23 22 SORT (UNIQUE)
24 23 TABLE ACCESS (FULL) OF 'SEGMENTATION_USER_COUNTRY_MAP' (Cost=5 Card=43 Bytes=473)
25 22 TABLE ACCESS (FULL) OF 'CONTACT_VALUES1' (Cost=2151 Card=1907737 Bytes=93479113)
=====================================================
Execution Plan - SLOW
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=8751 Card=1 Bytes=73)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (SEMI) (Cost=8751 Card=14477 Bytes=1056821)
3 2 MERGE JOIN (Cost=8583 Card=89527 Bytes=5550674)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'CONTACT_VALUES1' (Cost=826 Card=1852109 Bytes=90753341)
5 4 INDEX (FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=26 Card=1852109)
6 3 SORT (JOIN) (Cost=7757 Card=89527 Bytes=1163851)
7 6 VIEW (Cost=7454 Card=89527 Bytes=1163851)
8 7 INTERSECTION
9 8 SORT (UNIQUE)
10 9 INDEX (FAST FULL SCAN) OF 'CONTACT_VALUES1_PK' (UNIQUE) (Cost=5 Card=1852109 Bytes=11112654)
11 8 INTERSECTION
12 11 SORT (UNIQUE)
13 12 HASH JOIN (Cost=640 Card=250676 Bytes=7269604)
14 13 HASH JOIN (Cost=6 Card=28 Bytes=560)
15 14 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=1 Bytes=13)
16 15 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=2)
17 14 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
18 13 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=628 Card=2135370 Bytes=19218330)
19 11 SORT (UNIQUE)
20 19 HASH JOIN (Cost=640 Card=89527 Bytes=2596283)
21 20 HASH JOIN (Cost=6 Card=10 Bytes=200)
22 21 TABLE ACCESS (BY INDEX ROWID) OF 'SEGMENTATION_QUERY_VALUES' (Cost=3 Card=1 Bytes=13)
23 22 INDEX (RANGE SCAN) OF 'SEG_QUERY_VALUES_PK' (UNIQUE) (Cost=2 Card=2)
24 21 TABLE ACCESS (FULL) OF 'COUNTRY' (Cost=2 Card=120 Bytes=840)
25 20 TABLE ACCESS (FULL) OF 'CONTACT_COUNTRY1' (Cost=628 Card=2135370 Bytes=19218330)
26 2 TABLE ACCESS (FULL) OF 'SEGMENTATION_USER_COUNTRY_MAP' (Cost=5 Card=45 Bytes=495)
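
A first thing to check, and something the replies in the similar threads below keep coming back to, is whether both databases are optimizing with comparable statistics. A minimal sketch of that comparison, run on both databases by a user who can see the DBA views (the table names are the ones appearing in the plans above; adjust owners as needed):

SELECT owner, table_name, num_rows, blocks, sample_size, last_analyzed
  FROM dba_tables
 WHERE table_name IN ('CONTACT_VALUES1', 'CONTACT_COUNTRY1',
                      'COUNTRY', 'SEGMENTATION_QUERY_VALUES',
                      'SEGMENTATION_USER_COUNTRY_MAP');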

Similar Messages

  • Same query with different explain plans

    Hi All,
    I have a select query with different explain plans on two separate environments; the query picks up the correct index on the test environment, whereas on prod it does not pick up the same index.
    The structure, indexes, and number of rows are similar in the CRMINFO table, with up-to-date statistics.
    Env Details :
    OS - Sun Solaris 5.10
    DB - 10.2.0.4
    Init param:
    Optimizer_mode = ALL_ROWS
    optimizer_dynamic_sampling integer 5
    optimizer_features_enable string 10.2.0.4
    optimizer_index_caching integer 90
    optimizer_index_cost_adj integer 30
    Query:
    SELECT COUNT (*)
    FROM CRMINFO
    WHERE RETAILER = :1
    AND STATUS = 20
    AND EXC = :1
    AND SUBNO IS NULL
    Explain Plan (TST):
    SELECT STATEMENT ALL_ROWS Cost: 916 Bytes: 19 Cardinality: 1
    3 SORT AGGREGATE Bytes: 19 Cardinality: 1           
    2 TABLE ACCESS BY INDEX ROWID TABLE TST.CRMINFO Cost: 916 Bytes: 16,663 Cardinality: 877      
    1 INDEX RANGE SCAN INDEX TST.CRMINFO_X1 Cost: 42 Cardinality: 12,549
    Index (TST):
    CRMINFO_X1(EXC, RETAILER, STATUS)
    Explain Plan (PROD):
    SELECT STATEMENT ALL_ROWS Cost: 1,832 Bytes: 19 Cardinality: 1
    3 SORT AGGREGATE Bytes: 19 Cardinality: 1           
    2 TABLE ACCESS BY INDEX ROWID TABLE PROD.CRMINFO Cost: 1,832 Bytes: 2,052 Cardinality: 108      
    1 INDEX RANGE SCAN INDEX PROD.CRMINFO_X2 Cost: 117 Cardinality: 42,519
    Index (PROD):
    CRMINFO_X2 (RETAILER)
    How does Oracle calculate the cost and decide which index it should use? Why didn't it choose the same index as on the test environment? How should I approach this, and which areas do I need to dig into to find the cause?
    I did try playing with the above-mentioned init parameters, but it didn't help at all.
    Thanks.
    Regards,
    ~Pointer
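    As an aside, one way to test whether CRMINFO_X1 really is the better index on prod (which is exactly what the reply below asks to verify) is to force it with a hint purely as an experiment and compare the plan and runtime against the unhinted run; a minimal sketch, reusing the predicates from the posted query:
        -- experiment only, not a fix: force CRMINFO_X1 and compare against the unhinted run
        SELECT /*+ INDEX(c CRMINFO_X1) */ COUNT(*)
          FROM CRMINFO c
         WHERE c.RETAILER = :1
           AND c.STATUS   = 20
           AND c.EXC      = :1
           AND c.SUBNO IS NULL;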

    Pointer wrote:
    > Hmm, my worry is how do I force Oracle to grab the proper index on prod, i.e. CRMINFO_X1. I certainly believe it's a bad approach to add a hint to the select statement and force Oracle to use that index.
    Why do you believe that the index you mention is the "proper" index versus what Oracle is choosing? Can you prove with hinting that the "proper" index results in a faster and more efficient execution plan? If it does, then the next place I would look is the statistics for the tables and columns of interest. From there you could try to estimate why Oracle thinks the other index is better. Another option is to run a 10053 (CBO) trace and see why Oracle thinks it is better.
    I would not support a hint in a production environment, except in the most extreme cases. Usually the CBO makes the right choice, but it can only do so if the statistics match the distribution of the data.
    > Refreshing the data may help me simulate the issue on TST, but it wouldn't help me understand why prod uses CRMINFO_X2 instead of CRMINFO_X1, which has all three of the columns in the query's WHERE clause.
    It would help because it's a test environment and you wouldn't have to run any queries directly on your production system to achieve the same results.
    > A bad thought here :( if I create a new index with the columns in a different order, say (RETAILER, STATUS, EXC) instead of (EXC, RETAILER, STATUS), will Oracle use it? Or would it help in reducing the cost and cardinality of the select query?
    It's not that easy. There is a lot that goes into the cost calculation, some of it known (through the great work of Jonathan Lewis and Richard Foote), and some of it purely internal to Oracle. If you are more interested in the internals, Cost-Based Oracle Fundamentals by Jonathan Lewis is a great book.
    HTH!
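    For reference, a minimal sketch of how the 10053 (CBO) trace mentioned above can be captured at session level; the trace file appears in the user dump/diag trace directory, and the tracefile_identifier line is just an optional tag for the file name:
        ALTER SESSION SET tracefile_identifier = 'crminfo_10053';
        ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
        -- the optimization of the statement is what gets traced; EXPLAIN PLAN is enough to trigger it
        EXPLAIN PLAN FOR
        SELECT COUNT(*) FROM CRMINFO
         WHERE RETAILER = :1 AND STATUS = 20 AND EXC = :1 AND SUBNO IS NULL;
        ALTER SESSION SET EVENTS '10053 trace name context off';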

  • Query with diff. explain plans

    Hi,
    Our query returns different execution plans in Prod and non-prod. It is slow in PROD. The data size is the same in both DBs and stats are gathered at 50% estimate for both schemas:
    Prod (slow) explain plan:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 852 | 33 (4)| 00:00:01 |
    |* 1 | FILTER | | | | | |
    | 2 | HASH GROUP BY | | 1 | 852 | 33 (4)| 00:00:01 |
    | 3 | NESTED LOOPS | | 1 | 852 | 32 (0)| 00:00:01 |
    | 4 | NESTED LOOPS | | 1 | 802 | 31 (0)| 00:00:01 |
    | 5 | NESTED LOOPS OUTER | | 1 | 785 | 30 (0)| 00:00:01 |
    | 6 | NESTED LOOPS OUTER | | 1 | 742 | 29 (0)| 00:00:01 |
    | 7 | NESTED LOOPS | | 1 | 732 | 29 (0)| 00:00:01 |
    | 8 | NESTED LOOPS | | 1 | 678 | 26 (0)| 00:00:01 |
    | 9 | NESTED LOOPS | | 1 | 666 | 26 (0)| 00:00:01 |
    | 10 | NESTED LOOPS | | 1 | 623 | 25 (0)| 00:00:01 |
    | 11 | NESTED LOOPS | | 1 | 580 | 24 (0)| 00:00:01 |
    | 12 | NESTED LOOPS | | 1 | 576 | 24 (0)| 00:00:01 |
    | 13 | NESTED LOOPS | | 2 | 1076 | 13 (0)| 00:00:01 |
    | 14 | NESTED LOOPS | | 2 | 1040 | 13 (0)| 00:00:01 |
    | 15 | NESTED LOOPS | | 2 | 1028 | 13 (0)| 00:00:01 |
    | 16 | NESTED LOOPS | | 2 | 996 | 13 (0)| 00:00:01 |
    | 17 | NESTED LOOPS | | 2 | 988 | 13 (0)| 00:00:01 |
    | 18 | NESTED LOOPS | | 2 | 954 | 13 (0)| 00:00:01 |
    | 19 | NESTED LOOPS | | 2 | 944 | 13 (0)| 00:00:01 |
    | 20 | NESTED LOOPS | | 2 | 920 | 13 (0)| 00:00:01 |
    | 21 | NESTED LOOPS | | 2 | 912 | 13 (0)| 00:00:01 |
    | 22 | NESTED LOOPS | | 2 | 826 | 11 (0)| 00:00:01 |
    | 23 | NESTED LOOPS | | 1 | 370 | 9 (0)| 00:00:01 |
    | 24 | NESTED LOOPS | | 1 | 306 | 8 (0)| 00:00:01 |
    | 25 | NESTED LOOPS | | 1 | 263 | 7 (0)| 00:00:01 |
    | 26 | NESTED LOOPS | | 1 | 220 | 6 (0)| 00:00:01 |
    | 27 | NESTED LOOPS | | 1 | 177 | 5 (0)| 00:00:01 |
    | 28 | NESTED LOOPS | | 1 | 129 | 4 (0)| 00:00:01 |
    | 29 | NESTED LOOPS | | 1 | 86 | 3 (0)| 00:00:01 |
    | 30 | TABLE ACCESS BY INDEX ROWID| SYMMETADATA | 1 | 43 | 2 (0)| 00:00:01 |
    |* 31 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 1 (0)| 00:00:01 |
    | 32 | TABLE ACCESS BY INDEX ROWID| SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 33 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 34 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 35 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 36 | TABLE ACCESS BY INDEX ROWID | TPRODUCT | 1 | 48 | 1 (0)| 00:00:01 |
    |* 37 | INDEX UNIQUE SCAN | TPRODUCT_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 38 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 39 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 40 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 41 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 42 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 43 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 44 | TABLE ACCESS BY INDEX ROWID | TPRODUCT | 1 | 64 | 1 (0)| 00:00:01 |
    |* 45 | INDEX UNIQUE SCAN | TPRODUCT_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 46 | INLIST ITERATOR | | | | | |
    | 47 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 2 | 86 | 2 (0)| 00:00:01 |
    |* 48 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 2 | | 1 (0)| 00:00:01 |
    | 49 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 50 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    |* 51 | INDEX UNIQUE SCAN | SYMUSERCOUNT_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 52 | INDEX UNIQUE SCAN | SYMUSERCOUNT_PK | 1 | 12 | 0 (0)| 00:00:01 |
    |* 53 | INDEX UNIQUE SCAN | SYMSKUTYPE_PK | 1 | 5 | 0 (0)| 00:00:01 |
    |* 54 | INDEX UNIQUE SCAN | SYMSKUTYPE_PK | 1 | 17 | 0 (0)| 00:00:01 |
    |* 55 | INDEX UNIQUE SCAN | SYMSKULANGUAGE_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 56 | INDEX UNIQUE SCAN | SYMSKULANGUAGE_PK | 1 | 16 | 0 (0)| 00:00:01 |
    |* 57 | INDEX UNIQUE SCAN | SYMVENDOR_PK | 1 | 6 | 0 (0)| 00:00:01 |
    |* 58 | INDEX UNIQUE SCAN | SYMVENDOR_PK | 1 | 18 | 0 (0)| 00:00:01 |
    | 59 | TABLE ACCESS BY INDEX ROWID | SYMPRODUCTSKU | 1 | 38 | 6 (0)| 00:00:01 |
    |* 60 | INDEX RANGE SCAN | I_PSKU_MERCH_LOOKUP | 1 | | 5 (0)| 00:00:01 |
    |* 61 | INDEX UNIQUE SCAN | SYMMEDIATYPE_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 62 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 63 | INDEX UNIQUE SCAN | SYMMETADATA_PK | 1 | | 0 (0)| 00:00:01 |
    | 64 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 65 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    |* 66 | INDEX UNIQUE SCAN | SYMMEDIATYPE_PK | 1 | 12 | 0 (0)| 00:00:01 |
    | 67 | TABLE ACCESS BY INDEX ROWID | SYMPRODUCTSKU | 1 | 54 | 3 (0)| 00:00:01 |
    |* 68 | INDEX RANGE SCAN | I_PSKU_MERCH_LOOKUP | 1 | | 2 (0)| 00:00:01 |
    |* 69 | INDEX UNIQUE SCAN | SYMPCCOUNT_PK | 1 | 10 | 0 (0)| 00:00:01 |
    | 70 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 43 | 1 (0)| 00:00:01 |
    |* 71 | INDEX UNIQUE SCAN | SYMMETADATA_PK | 1 | | 0 (0)| 00:00:01 |
    |* 72 | TABLE ACCESS BY INDEX ROWID | TPRODUCTSKU | 1 | 17 | 1 (0)| 00:00:01 |
    |* 73 | INDEX UNIQUE SCAN | TPRODUCTSKU_PK | 1 | | 0 (0)| 00:00:01 |
    |* 74 | TABLE ACCESS BY INDEX ROWID | TPRODUCTSKU | 1 | 50 | 1 (0)| 00:00:01 |
    |* 75 | INDEX UNIQUE SCAN | TPRODUCTSKU_PK | 1 | | 0 (0)| 00:00:01 |
    Non Prod (Fast) Plan:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 383 | 28 (0)| 00:00:01 |
    |* 1 | FILTER | | | | | |
    | 2 | NESTED LOOPS | | 1 | 383 | 17 (0)| 00:00:01 |
    | 3 | NESTED LOOPS OUTER | | 1 | 350 | 16 (0)| 00:00:01 |
    | 4 | NESTED LOOPS | | 1 | 308 | 15 (0)| 00:00:01 |
    | 5 | NESTED LOOPS | | 1 | 266 | 14 (0)| 00:00:01 |
    | 6 | NESTED LOOPS OUTER | | 1 | 262 | 14 (0)| 00:00:01 |
    | 7 | NESTED LOOPS | | 1 | 258 | 14 (0)| 00:00:01 |
    | 8 | NESTED LOOPS | | 2 | 438 | 7 (0)| 00:00:01 |
    | 9 | NESTED LOOPS | | 2 | 428 | 7 (0)| 00:00:01 |
    | 10 | NESTED LOOPS | | 2 | 420 | 7 (0)| 00:00:01 |
    | 11 | NESTED LOOPS | | 2 | 410 | 7 (0)| 00:00:01 |
    | 12 | NESTED LOOPS | | 2 | 402 | 7 (0)| 00:00:01 |
    | 13 | NESTED LOOPS | | 1 | 159 | 5 (0)| 00:00:01 |
    | 14 | NESTED LOOPS | | 1 | 126 | 4 (0)| 00:00:01 |
    | 15 | NESTED LOOPS | | 1 | 84 | 3 (0)| 00:00:01 |
    | 16 | TABLE ACCESS BY INDEX ROWID| SYMMETADATA | 1 | 42 | 2 (0)| 00:00:01 |
    |* 17 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 1 (0)| 00:00:01 |
    | 18 | TABLE ACCESS BY INDEX ROWID| SYMMETADATA | 1 | 42 | 1 (0)| 00:00:01 |
    |* 19 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 20 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 42 | 1 (0)| 00:00:01 |
    |* 21 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 22 | TABLE ACCESS BY INDEX ROWID | TPRODUCT | 1 | 33 | 1 (0)| 00:00:01 |
    |* 23 | INDEX UNIQUE SCAN | TPRODUCT_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 24 | INLIST ITERATOR | | | | | |
    | 25 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 2 | 84 | 2 (0)| 00:00:01 |
    |* 26 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 2 | | 1 (0)| 00:00:01 |
    |* 27 | INDEX UNIQUE SCAN | SYMUSERCOUNT_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 28 | INDEX UNIQUE SCAN | SYMSKUTYPE_PK | 1 | 5 | 0 (0)| 00:00:01 |
    |* 29 | INDEX UNIQUE SCAN | SYMSKULANGUAGE_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 30 | INDEX UNIQUE SCAN | SYMVENDOR_PK | 1 | 5 | 0 (0)| 00:00:01 |
    | 31 | TABLE ACCESS BY INDEX ROWID | SYMPRODUCTSKU | 1 | 39 | 4 (0)| 00:00:01 |
    |* 32 | INDEX RANGE SCAN | I_PSKU_MERCH_LOOKUP | 1 | | 3 (0)| 00:00:01 |
    |* 33 | INDEX UNIQUE SCAN | SYMPCCOUNT_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 34 | INDEX UNIQUE SCAN | SYMMEDIATYPE_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 35 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 42 | 1 (0)| 00:00:01 |
    |* 36 | INDEX UNIQUE SCAN | SYMMETADATA_PK | 1 | | 0 (0)| 00:00:01 |
    | 37 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 42 | 1 (0)| 00:00:01 |
    |* 38 | INDEX UNIQUE SCAN | SYMMETADATA_PK | 1 | | 0 (0)| 00:00:01 |
    |* 39 | TABLE ACCESS BY INDEX ROWID | TPRODUCTSKU | 1 | 33 | 1 (0)| 00:00:01 |
    |* 40 | INDEX UNIQUE SCAN | TPRODUCTSKU_PK | 1 | | 0 (0)| 00:00:01 |
    | 41 | SORT AGGREGATE | | 1 | 252 | | |
    | 42 | NESTED LOOPS | | 1 | 252 | 11 (0)| 00:00:01 |
    | 43 | NESTED LOOPS | | 1 | 240 | 10 (0)| 00:00:01 |
    | 44 | NESTED LOOPS | | 1 | 205 | 7 (0)| 00:00:01 |
    | 45 | NESTED LOOPS | | 1 | 200 | 7 (0)| 00:00:01 |
    | 46 | NESTED LOOPS | | 1 | 196 | 7 (0)| 00:00:01 |
    | 47 | NESTED LOOPS | | 1 | 191 | 7 (0)| 00:00:01 |
    | 48 | NESTED LOOPS | | 1 | 187 | 7 (0)| 00:00:01 |
    | 49 | NESTED LOOPS | | 1 | 183 | 7 (0)| 00:00:01 |
    | 50 | NESTED LOOPS | | 1 | 150 | 6 (0)| 00:00:01 |
    | 51 | NESTED LOOPS | | 1 | 120 | 5 (0)| 00:00:01 |
    | 52 | NESTED LOOPS | | 1 | 90 | 4 (0)| 00:00:01 |
    | 53 | NESTED LOOPS | | 1 | 60 | 3 (0)| 00:00:01 |
    | 54 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 2 (0)| 00:00:01 |
    |* 55 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 1 (0)| 00:00:01 |
    | 56 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 1 (0)| 00:00:01 |
    |* 57 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 58 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 1 (0)| 00:00:01 |
    |* 59 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 60 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 1 (0)| 00:00:01 |
    |* 61 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 62 | TABLE ACCESS BY INDEX ROWID | SYMMETADATA | 1 | 30 | 1 (0)| 00:00:01 |
    |* 63 | INDEX UNIQUE SCAN | SYMMETADATA_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    | 64 | TABLE ACCESS BY INDEX ROWID | TPRODUCT | 1 | 33 | 1 (0)| 00:00:01 |
    |* 65 | INDEX UNIQUE SCAN | TPRODUCT_UNIQUE | 1 | | 0 (0)| 00:00:01 |
    |* 66 | INDEX UNIQUE SCAN | SYMUSERCOUNT_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 67 | INDEX UNIQUE SCAN | SYMMEDIATYPE_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 68 | INDEX UNIQUE SCAN | SYMSKUTYPE_PK | 1 | 5 | 0 (0)| 00:00:01 |
    |* 69 | INDEX UNIQUE SCAN | SYMSKULANGUAGE_PK | 1 | 4 | 0 (0)| 00:00:01 |
    |* 70 | INDEX UNIQUE SCAN | SYMVENDOR_PK | 1 | 5 | 0 (0)| 00:00:01 |
    | 71 | TABLE ACCESS BY INDEX ROWID | SYMPRODUCTSKU | 1 | 35 | 3 (0)| 00:00:01 |
    |* 72 | INDEX RANGE SCAN | I_PSKU_MERCH_LOOKUP | 1 | | 2 (0)| 00:00:01 |
    |* 73 | TABLE ACCESS BY INDEX ROWID | TPRODUCTSKU | 1 | 12 | 1 (0)| 00:00:01 |
    |* 74 | INDEX UNIQUE SCAN | TPRODUCTSKU_PK | 1 | | 0 (0)| 00:00:01 |
    Database version is 10.2.0.4. Can anyone help me understand what else to look for to make it work faster?

    Please see the following threads for the ideal information required:
    How to post a sql tuning request:
    HOW TO: Post a SQL statement tuning request - template posting
    When your query takes too long:
    When your query takes too long ...
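    One piece of information those threads typically ask for, and which usually explains this kind of difference, is the optimizer's estimated row counts versus the actual ones. A minimal sketch of capturing that (the statement shown is only a placeholder; the real query from this post would go in its place):
        -- run the slow statement once with rowsource statistics collected
        SELECT /*+ gather_plan_statistics */ COUNT(*) FROM symmetadata;   -- placeholder statement
        -- then display the last plan for this session with estimates vs. actuals
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));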

  • Same query, same dataset, same ddl setup, but wildly different explain plan

    Hello, o fountains of Oracle knowledge!
    We have a problem that caused a full stop when rolling out a new version of our system to a customer, and cost us a whole Sunday to boot.
    The scenario is as follows:
    1. A previous version database schema
    2. The current version database schema
    3. A migration script to migrate the old schema to the new
    So we perform the following migration:
    1. Export the previous version database schema
    2. Import into a new schema called schema_old
    3. Create a new schema called schema_new
    4. Run migration script which creates objects, copies data, creates indexes etc etc in schema_new
    The migration runs fine in all environments (development, test and production)
    In our development and test environments performance is stellar, on the customer production server the performance is terrible.
    This using the exact same export file (from the production environment) and performing the exact same steps with the exact same migration script.
    Database version is 10.2.0.1.0 EE on all databases. OS is Microsoft Windows Server 2003 EE SP2 on all servers.
    The system is not in any sense under a heavy load (we have tested with no other load than ourselves).
    Looking at the explain plan for a query that is run frequently and does not use bind variables we see wildly different explain plans.
    The explain plan cost on our development and test servers is estimated at 7 for this query and there are no full table scans.
    On the production server the cost is 8433 and there are two full table scans, one of which is on the largest table.
    We have tried to run analyse on all objects with very little effect. The plan changed very slightly, but still includes the two full table scans on the problem server and the cost is still the same.
    All tables and indexes are identical (including storage options), created from the same migration script.
    I am currently at a loss for where to look. What could be causing this? I assume it could be caused by some parameter set on the server, but I don't know what to look for.
    I would be very grateful for any pointers.
    Thanks,
    Håkon

    Thank you for your answer.
    We collected statistics only after we determined that the production server was not behaving according to expectations.
    In this case we used TOAD and the tool within to collect statistics for all objects. We used 'Analyze' and 'Compute Statistics' options.
    I am not an expert, so sorry if this is too naive an approach.
    Here is the query:
    SELECT count(0)
    FROM score_result sr, web_scorecard sc, product p
    WHERE sr.score_final_decision like 'VENT%'  
    AND sc.CREDIT_APPLICATION_ID = sr.CREDIT_APPLICATION_ID  
    AND sc.application_complete='Y'   
    AND p.product = sc.web_product   
    AND p.inactive_product = '2';
    I use this as an example, but the problem exists for virtually all queries.
    The output from the 'good' server:
    | Id  | Operation                      | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT               |                       |     1 |    39 |     7   (0)|
    |   1 |  SORT AGGREGATE                |                       |     1 |    39 |            |
    |   2 |   NESTED LOOPS                 |                       |     1 |    39 |     7   (0)|
    |   3 |    NESTED LOOPS                |                       |     1 |    30 |     6   (0)|
    |   4 |     TABLE ACCESS BY INDEX ROWID| SCORE_RESULT          |     1 |    17 |     4   (0)|
    |   5 |      INDEX RANGE SCAN          | SR_FINAL_DECISION_IDX |     1 |       |     3   (0)|
    |   6 |     TABLE ACCESS BY INDEX ROWID| WEB_SCORECARD         |     1 |    13 |     2   (0)|
    |   7 |      INDEX UNIQUE SCAN         | WEB_SCORECARD_PK      |     1 |       |     1   (0)|
    |   8 |    TABLE ACCESS BY INDEX ROWID | PRODUCT               |     1 |     9 |     1   (0)|
    |   9 |     INDEX UNIQUE SCAN          | PK_PRODUCT            |     1 |       |     0   (0)|
    ---------------------------------------------------------------------------------------------
    The output from the 'bad' server:
    | Id  | Operation                 | Name                  | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT          |                       |     1 |    32 |  8344   (3)|
    |   1 |  SORT AGGREGATE           |                       |     1 |    32 |            |
    |   2 |   HASH JOIN               |                       | 10887 |   340K|  8344   (3)|
    |   3 |    TABLE ACCESS FULL      | PRODUCT               |     6 |    42 |     3   (0)|
    |   4 |    HASH JOIN              |                       | 34381 |   839K|  8340   (3)|
    |   5 |     VIEW                  | index$_join$_001      | 34381 |   503K|  2193   (3)|
    |   6 |      HASH JOIN            |                       |       |       |            |
    |   7 |       INDEX RANGE SCAN    | SR_FINAL_DECISION_IDX | 34381 |   503K|   280   (3)|
    |   8 |       INDEX FAST FULL SCAN| SCORE_RESULT_PK       | 34381 |   503K|  1371   (2)|
    |   9 |     TABLE ACCESS FULL     | WEB_SCORECARD         |   489K|  4782K|  6137   (4)|
    ----------------------------------------------------------------------------------------
    I hope the formatting makes this readable.
    Stats (from SQL Developer), good table:
    NUM_ROWS     489716
    BLOCKS     27198
    AVG_ROW_LEN     312
    SAMPLE_SIZE     489716
    LAST_ANALYZED     15.12.2009
    LAST_ANALYZED_SINCE     15.12.2009
    Stats (from SQL Developer), bad table:
    NUM_ROWS     489716
    BLOCKS     27199
    AVG_ROW_LEN     395
    SAMPLE_SIZE     489716
    LAST_ANALYZED     17.12.2009
    LAST_ANALYZED_SINCE     17.12.2009
    I'm unsure what would cause the difference in average row length.
    I could obviously try to tune our sql-statements to work on the server not behaving better, but I would rather understand why they are different and make sure that we can expect similar behaviour between environments.
    Thank you again for trying to help me.
    Håkon
    Edited by: ergates on 17.des.2009 05:57
    Edited by: ergates on 17.des.2009 06:02
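    A note on the statistics collection described above: TOAD's 'Analyze'/'Compute Statistics' options may issue the old ANALYZE command, whereas the 10g CBO is designed around DBMS_STATS. A minimal sketch of regathering with DBMS_STATS for the table that goes to a full scan in the bad plan (the schema is assumed to be the migrated schema_new from the steps above; adjust names as needed):
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => 'SCHEMA_NEW',                -- assumed schema name from the migration steps
            tabname          => 'WEB_SCORECARD',             -- table fully scanned in the bad plan
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
            method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
            cascade          => TRUE);                       -- also refresh the index statistics
        END;
        /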

  • Different Explain Plans

    Hi,
    in 11.2.0.3, the same query on two different DBs (on the same server) has different explain plans (with different estimated row counts: 9690K vs 14M):
    DBDEV :
    | Id  | Operation                | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT         |                  |  9690K|  4482M|   344K  (1)| 01:08:49 |
    |   1 |  LOAD TABLE CONVENTIONAL | PS_PROJ_RES_TA14 |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL      | PS_PROJ_RESOURCE |  9690K|  4482M|   344K  (1)| 01:08:49 |
    DBTST
    | Id  | Operation                | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | INSERT STATEMENT         |                  |    14M|  6534M|   344K  (1)| 01:08:50 |
    |   1 |  LOAD TABLE CONVENTIONAL | PS_PROJ_RES_TA14 |       |       |            |          |
    |*  2 |   TABLE ACCESS FULL      | PS_PROJ_RESOURCE |    14M|  6534M|   344K  (1)| 01:08:50 |
    The optimizer parameters are the same :
    NAME                                  TYPE        VALUE
    optimizer_capture_sql_plan_baselines  boolean     FALSE
    optimizer_dynamic_sampling            integer     2
    optimizer_features_enable             string      11.2.0.3
    optimizer_index_caching               integer     0
    optimizer_index_cost_adj              integer     100
    optimizer_mode                        string      ALL_ROWS
    optimizer_secure_view_merging         boolean     TRUE
    optimizer_use_invisible_indexes       boolean     FALSE
    optimizer_use_pending_statistics      boolean     FALSE
    optimizer_use_sql_plan_baselines      boolean     TRUE
    And the number of rows :
    In DBTST
    select count(*) from ps_proj_resource
      COUNT(*)
      18072893
    In DBDEV
      COUNT(*)
      18070581
    Thanks for explanation and ideas.

    Thanks.
    Here they are :
    DEV
    Predicate Information (identified by operation id):
       2 - filter(("SRC"."CST_DISTRIB_STATUS"='N' OR ("SRC"."BI_DISTRIB_STATUS"='N' OR
                  "SRC"."BI_DISTRIB_STATUS"='U')) AND "SRC"."SYSTEM_SOURCE"<>'PRC' AND
                  "SRC"."SYSTEM_SOURCE"<>'PRP' AND "SRC"."SYSTEM_SOURCE"<>'PRR')
    TST
    Predicate Information (identified by operation id):
       1 - filter("SRC"."SYSTEM_SOURCE"<>'PRC' AND "SRC"."SYSTEM_SOURCE"<>'PRP'
                  AND ("SRC"."CST_DISTRIB_STATUS"='N' OR ("SRC"."BI_DISTRIB_STATUS"='N' OR
                    "SRC"."BI_DISTRIB_STATUS"='U')) AND "SRC"."SYSTEM_SOURCE"<>'PRR')

  • 'Identical' schemas on different servers - different explain plan costs

    Hello,
    I have two servers, 1 development and 1 production. I have a query which produces wildly different explain plan costs on the two servers:
    The development server gives a cost of just over 800 and the production server gives over 100,000. I have 2-3 different versions of the schema (these are data warehouse schemas) on both servers, and the cost numbers are similar regardless of the version used. Whenever I run the query on development, it's around 800. On production the same query is over 100,000.
    The data on both servers is (should be) identical - I used impdp and expdp to transfer the data between the servers. I have run:
    DBMS_STATS.GATHER_SCHEMA_STATS ('SCHEMAV26', cascade=>TRUE);
    on the production server after importing the data. As far as I can see, the indices are identical on both servers. The difference in the execution plan is one additional line:
    Filter Predicates CE.ID < 5
    Can anyone help me figure out why the explain plans are different? The servers have similar hardware specs, and are running the same version of Oracle (11.2.0.2.0)
    Thanks,
    Dan Scott
    http://danieljamesscott.org
    Edited by: danscott on Mar 4, 2011 11:43 AM
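    Given that the only visible plan difference is the extra filter predicate, one thing worth comparing on the two servers is the column-level statistics (including any histogram) on the filtered column; a sketch, where the table and column names are assumptions based on the follow-up below:
        -- run on both servers and compare
        SELECT column_name, num_distinct, num_nulls, density, histogram, last_analyzed
          FROM dba_tab_col_statistics
         WHERE owner = 'SCHEMAV26'              -- schema used in the GATHER_SCHEMA_STATS call above
           AND table_name = 'EVENTS'            -- assumed table behind the filtered column
           AND column_name = 'ITEMID';          -- assumed filter column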

    Thanks for all the help/suggestions - as you've probably guessed, I'm a little new to all this.
    A little background first:
    We have an items table and an events table. itemid is the primary key of the items table and a foreign key in the events table. The events table contains itemids, timestamps and data values (along with a few other IDs). The query I'm running is used to create a materialized view which provides statistics for each itemid, to assist users in finding a particular itemid containing the data they're interested in. Generally we create the view on the full list of itemids (and so the indices are not used, as expected). However, we occasionally run the query for a small number of itemids, and the index on events.itemid is used on one server but not on the other.
    Here's the SQL (Apologies for the length).
    WITH ChartItems as (
      select distinct ci.itemid, ci.label, ci.category, ci.description,
             case
                when
                   (count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2) over (partition by ci.itemid) > 0)
                   AND
                   (count(distinct ce.value1num) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2num) over (partition by ci.itemid) > 0)
                then 'H'
                when
                   count(distinct ce.value1num) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2num) over (partition by ci.itemid) > 0
                then 'N'
                when
                   count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2) over (partition by ci.itemid) > 0
                then 'S'
                else
                    'X'
             end as value_type,
             -- The value column
             case
                when
                   (count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value1num) over (partition by ci.itemid) > 0)
                   and
                   (count(distinct ce.value2) over (partition by ci.itemid) > 0 OR
                    count(distinct ce.value2num) over (partition by ci.itemid) > 0)
                then 'both'
                when
                   count(distinct ce.value1) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value1num) over (partition by ci.itemid) > 0
                then 'value1'
                when
                   count(distinct ce.value2) over (partition by ci.itemid) > 0 OR
                   count(distinct ce.value2num) over (partition by ci.itemid) > 0
                then 'value2'
                else
                    'none'
             end as value_column
        from items ci,
             events ce
       where ce.itemid = ci.itemid
      and ci.itemid < 5
    )
    , RawData as (
        select distinct ci.itemid, ci.label, ci.category, ci.description,
              ci.value_type, ci.value_column,
              count(*)
                over (partition by ci.itemid) as rows_num,
              count(distinct ce.subject_id)
                over (partition by ci.itemid) as subjects_num,
              avg(abs(cast(ce.realtime as date) - cast(ce.charttime as date)) * 24 * 60)
                over (partition by ci.itemid) as chart_vs_realtime_delay_mean,
              stddev(abs(cast(ce.realtime as date) - cast(ce.charttime as date)) * 24 * 60)
                over (partition by ci.itemid) as chart_vs_realtime_delay_stddev,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1uom)
                                over (partition by ci.itemid
                                      order by ce.value1uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value1uom)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value1uom)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value1_uom_num,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1uom)
                                over (partition by ci.itemid
                                      order by ce.value1uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value1_uom_has_nulls,
              first_value(ce.value1uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_uom_sample1,
              last_value(ce.value1uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_uom_sample2,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1)
                                over (partition by ci.itemid
                                      order by ce.value1 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value1)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value1)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value1_distinct_num,
              case
                when ci.value_column in ('value1', 'both') then
                    case
                      when (last_value(ce.value1)
                                over (partition by ci.itemid
                                      order by ce.value1 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value1_has_nulls,
             first_value(ce.value1) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_sample1,
             last_value(ce.value1) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value1_sample2,
             min(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_min,
             max(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_max,
             avg(length(ce.value1))
                 over (partition by ci.itemid) as value1_length_mean,
             min(ce.value1num)
                 over (partition by ci.itemid) as value1num_min,
             max(ce.value1num)
                 over (partition by ci.itemid) as value1num_max,
             avg(ce.value1num)
                 over (partition by ci.itemid) as value1num_mean,
             stddev(ce.value1num)
                 over (partition by ci.itemid) as value1num_stddev,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2uom)
                                over (partition by ci.itemid
                                      order by ce.value2uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value2uom)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value2uom)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value2_uom_num,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2uom)
                                over (partition by ci.itemid
                                      order by ce.value2uom nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value2_uom_has_nulls,
             first_value(ce.value2uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_uom_sample1,
             last_value(ce.value2uom) ignore nulls
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_uom_sample2,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2)
                                over (partition by ci.itemid
                                      order by ce.value2 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          count(distinct ce.value2)
                            over (partition by ci.itemid) + 1
                      else
                          count(distinct ce.value2)
                            over (partition by ci.itemid)
                    end
                else
                    0
              end as value2_distinct_num,
              case
                when ci.value_column in ('value2', 'both') then
                    case
                      when (last_value(ce.value2)
                                over (partition by ci.itemid
                                      order by ce.value2 nulls last
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND
                                      UNBOUNDED FOLLOWING)
                           ) is null then
                          'Y'
                      else
                          'N'
                    end
                else
                    null
              end as value2_has_nulls,
             first_value(ce.value2)
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN  UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_sample1,
             last_value(ce.value2)
                 over (partition by ci.itemid
                       order by ce.charttime ROWS BETWEEN  UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as value2_sample2,
             min(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_min,
             max(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_max,
             avg(length(ce.value2))
                 over (partition by ci.itemid) as value2_length_mean,
             min(ce.value2num)
                 over (partition by ci.itemid) as value2num_min,
             max(ce.value2num)
                 over (partition by ci.itemid) as value2num_max,
             avg(ce.value2num)
                 over (partition by ci.itemid) as value2num_mean,
             stddev(ce.value2num)
                 over (partition by ci.itemid) as value2num_stddev
        from ChartItems ci,
             events ce
       where ce.itemid = ci.itemid
    --   order by ci.itemid, ci.label
    )
    select label, trim(lower(label)) label_lower, itemid, category, description,
           value_type, value_column,
           rows_num, subjects_num,
           round(chart_vs_realtime_delay_mean, 2) as chart_vs_realtime_delay_mean,
           round(chart_vs_realtime_delay_stddev, 2) as chart_vs_realtime_delay_stddev,
           value1_uom_num, value1_uom_has_nulls,
           value1_uom_sample1, value1_uom_sample2,
           value1_distinct_num, value1_has_nulls,
           value1_sample1, value1_sample2,
           value1_length_min, value1_length_max,
           round(value1_length_mean, 2) as value1_length_mean,
           round(value1num_min, 2) as value1num_min,
           round(value1num_max, 2) as value1num_max,
           round(value1num_mean, 2) as value1num_mean,
           round(value1num_stddev, 2) as value1num_stddev,
           value2_uom_num, value2_uom_has_nulls,
           value2_uom_sample1, value2_uom_sample2,
           value2_distinct_num, value2_has_nulls,
           value2_sample1, value2_sample2,
           value2_length_min, value2_length_max,
           round(value2_length_mean, 2) as value2_length_mean,
           round(value2num_min, 2) as value2num_min,
           round(value2num_max, 2) as value2num_max,
           round(value2num_mean, 2) as value2num_mean,
           round(value2num_stddev, 2) as value2num_stddev
      from RawData
    order by label, itemid;

  • Same Query returning different result (Different execution plan)

    Hi all,
    Today I discovered a strange thing: a query that returns a different result when using a different execution plan.
    The query :
    SELECT  *
      FROM schema.table@database a
    WHERE     column1 IN ('3')
           AND column2 = '101'
           AND EXISTS
                  (SELECT null
                     FROM schema.table2 c
    WHERE a.column3 = SUBSTR (c.column1, 2, 12));
    where schema.table@database is a remote table.
    When executed with the hint /*+ ordered use_nl(a c) */ this query returns no result, and its execution plan is:
    Rows     Row Source Operation
          0  NESTED LOOPS  (cr=31 r=0 w=0 time=4894659 us)
       4323   SORT UNIQUE (cr=31 r=0 w=0 time=50835 us)
       4336    TABLE ACCESS FULL TABLE2 (cr=31 r=0 w=0 time=7607 us)
    0   REMOTE  (cr=0 r=0 w=0 time=130536 us)
    When I changed the execution plan with the hint /*+ use_hash(c a) */ :
    Rows     Row Source Operation
       3702  HASH JOIN SEMI (cr=35 r=0 w=0 time=497839 us)
      22556   REMOTE  (cr=0 r=0 w=0 time=401176 us)
    4336   TABLE ACCESS FULL TABLE2 (cr=35 r=0 w=0 time=7709 us)
    It seems that when the execution plan changes, the remote query returns no result.
    Is this a bug, or have I missed something?
    PS: The two tables are not subject to any insert or update statements.
    Oracle version : 9.2.0.2.0
    System version : HP-UX v1
    Thanks.

    H.Mahmoud wrote:
    Oracle version : 9.2.0.2.0
    System version : HP-UX v1
    Hard to say. You're using a very old and deprecated version of the database, and one that was known to contain bugs.
    9.2.0.7 was really the lowest version of 9i that was considered 'stable', but even so, it's old and lacking in many ways.
    Consider upgrading to the latest database version at your earliest opportunity (or at least apply patches up to the latest 9i patch set before asking whether there are bugs in your very old, buggy version).

  • Why two different explain plan for same objects?

    Believe it or not, there are two different databases, one for processing and one for reporting, and the plan comes out different for the same query. Table structure and indexes are the same. It's 11g.
    Thanks
    Good explain plan .. works fine..
    Plan
    SELECT STATEMENT  ALL_ROWS  Cost: 12,775  Bytes: 184  Cardinality: 1
         27 SORT UNIQUE  Cost: 12,775  Bytes: 184  Cardinality: 1                                                                   
              26 NESTED LOOPS                                                              
                   24 NESTED LOOPS  Cost: 12,774  Bytes: 184  Cardinality: 1                                                         
                        22 HASH JOIN  Cost: 12,772  Bytes: 178  Cardinality: 1                                                    
                             20 NESTED LOOPS SEMI  Cost: 30  Bytes: 166  Cardinality: 1                                               
                                  17 NESTED LOOPS  Cost: 19  Bytes: 140  Cardinality: 1                                          
                                       14 NESTED LOOPS OUTER  Cost: 16  Bytes: 84  Cardinality: 1                                     
                                            11 VIEW DSSADM. Cost: 14  Bytes: 37  Cardinality: 1                                
                                                 10 NESTED LOOPS                           
                                                      8 NESTED LOOPS  Cost: 14  Bytes: 103  Cardinality: 1                      
                                                           6 NESTED LOOPS  Cost: 13  Bytes: 87  Cardinality: 1                 
                                                                3 INLIST ITERATOR            
                                                                     2 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_FAMILY_TBL Cost: 10  Bytes: 51  Cardinality: 1       
                                                                          1 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9  Cardinality: 1 
                                                                5 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3  Bytes: 36  Cardinality: 1            
                                                                     4 INDEX RANGE SCAN INDEX DSSADM.STAN_JB_FN_IDX Cost: 2  Cardinality: 1       
                                                           7 INDEX UNIQUE SCAN INDEX (UNIQUE) DSSODS.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0  Cardinality: 1                 
                                                      9 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOBCODE_TBL_RPT Cost: 1  Bytes: 16  Cardinality: 1                      
                                            13 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PSXLATITEM_RPT Cost: 2  Bytes: 47  Cardinality: 1                                
                                                 12 INDEX RANGE SCAN INDEX DSSODS.PK_DRV_RIXLATITEM_RPT Cost: 1  Cardinality: 1                           
                                       16 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_JOBCODE Cost: 3  Bytes: 56  Cardinality: 1                                     
                                            15 INDEX RANGE SCAN INDEX DSSADM.DIM_JOBCODE_EXPDT1 Cost: 2  Cardinality: 1                                
                                  19 TABLE ACCESS BY INDEX ROWID TABLE DSSODS.DRV_PS_JOB_RPT Cost: 11  Bytes: 438,906  Cardinality: 16,881                                          
                                       18 INDEX RANGE SCAN INDEX DSSODS.DRV_PS_JOB_JOBCODE_RPT Cost: 2  Cardinality: 8                                     
                             21 INDEX FAST FULL SCAN INDEX (UNIQUE) DSSADM.Z_PK_JOBCODE_PROMPT_TBL Cost: 12,699  Bytes: 66,790,236  Cardinality: 5,565,853                                               
                        23 INDEX RANGE SCAN INDEX DSSADM.DIM_PERSON_EMPL_RCD_SEQ_KEY Cost: 1  Cardinality: 1                                                    
    25 TABLE ACCESS BY INDEX ROWID TABLE DSSADM.DIM_PERSON_EMPL_RCD Cost: 2  Bytes: 6  Cardinality: 1
    This bad plan shows a merge join cartesian and a full table scan:
    Plan
    SELECT STATEMENT  ALL_ROWS  Cost: 3,585  Bytes: 237  Cardinality: 1
         26 SORT UNIQUE  Cost: 3,585  Bytes: 237  Cardinality: 1                                                         
              25 NESTED LOOPS SEMI  Cost: 3,584  Bytes: 237  Cardinality: 1                                                    
                   22 NESTED LOOPS  Cost: 3,573  Bytes: 211  Cardinality: 1                                               
                        20 MERGE JOIN CARTESIAN  Cost: 2,864  Bytes: 70,446  Cardinality: 354                                          
                             17 NESTED LOOPS                                     
                                  15 NESTED LOOPS  Cost: 51  Bytes: 191  Cardinality: 1                                
                                       13 NESTED LOOPS OUTER  Cost: 50  Bytes: 180  Cardinality: 1                           
                                            10 HASH JOIN  Cost: 48  Bytes: 133  Cardinality: 1                      
                                                 6 NESTED LOOPS                 
                                                      4 NESTED LOOPS  Cost: 38  Bytes: 656  Cardinality: 8            
                                                           2 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 14  Bytes: 448  Cardinality: 8       
                                                                1 INDEX RANGE SCAN INDEX REPORT2.STAN_PROM_JB_IDX Cost: 6  Cardinality: 95 
                                                           3 INDEX RANGE SCAN INDEX REPORT2.SETID_JC_IDX Cost: 2  Cardinality: 1       
                                                      5 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DIM_JOBCODE Cost: 3  Bytes: 26  Cardinality: 1            
                                                 9 INLIST ITERATOR                 
                                                      8 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_FAMILY_TBL Cost: 10  Bytes: 51  Cardinality: 1            
                                                           7 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_FAMILY_TBL_CL_SETID Cost: 9  Cardinality: 1       
                                            12 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PSXLATITEM_RPT Cost: 2  Bytes: 47  Cardinality: 1                      
                                                 11 INDEX RANGE SCAN INDEX REPORT2.PK_DRV_RIXLATITEM_RPT Cost: 1  Cardinality: 1                 
                                       14 INDEX UNIQUE SCAN INDEX (UNIQUE) REPORT2.DRV_PS_JOBCODE_TBL_SEQ_KEY_RPT Cost: 0  Cardinality: 1                           
                                  16 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOBCODE_TBL_RPT Cost: 1  Bytes: 11  Cardinality: 1                                
                             19 BUFFER SORT  Cost: 2,863  Bytes: 4,295,552  Cardinality: 536,944                                     
                                  18 TABLE ACCESS FULL TABLE REPORT2.DIM_PERSON_EMPL_RCD Cost: 2,813  Bytes: 4,295,552  Cardinality: 536,944                                
                        21 INDEX RANGE SCAN INDEX (UNIQUE) REPORT2.Z_PK_JOBCODE_PROMPT_TBL Cost: 2  Bytes: 12  Cardinality: 1                                          
                   24 TABLE ACCESS BY INDEX ROWID TABLE REPORT2.DRV_PS_JOB_RPT Cost: 11  Bytes: 1,349,920  Cardinality: 51,920                                               
                        23 INDEX RANGE SCAN INDEX REPORT2.DRV_PS_JOB_JOBCODE_RPT Cost: 2  Cardinality: 8                                          

    user550024 wrote:
    > I am really surprised that the stats for the good SQL are a little old. I just computed the stats for the bad SQL, so they are up to date. There is something terribly wrong.
    Not necessarily. Just using the default stats collection, I've seen a few cases of things suddenly going wrong: as the data grows, it gets closer to an edge case where the inadequacy of the statistics convinces the optimizer to pick a wrong plan. To fix it, I could just go into dbconsole, set the stats back to a time when they worked, and lock them. In most cases it's definitely better to figure out what is really going on, though, to give the optimizer better information to work with. Aside from the value of learning how to do it, in some cases it's not so simple. Also, many think the default settings of the database statistics collection may be wrong in general (in 10.2.x, at least). So much depends on your application and data that you can't make too many generalizations. You have to look at the evidence and figure it out. There is still a steep learning curve for the tools to look at the evidence. People are here to help with that.
    Most of the time it works better than a dumb rule based optimizer, but at the cost of a few situations where people are smarter than computers. It's taken a lot of years to get to this point.
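    For reference, the 'set the stats back to a time when they worked, and lock them' step described above can also be done directly with DBMS_STATS; a minimal sketch, where the owner, table, and timestamp are illustrative assumptions taken from the bad plan:
        BEGIN
          -- restore the statistics that were in effect when the plan was still good
          DBMS_STATS.RESTORE_TABLE_STATS(
            ownname         => 'REPORT2',                       -- assumed owner from the bad plan
            tabname         => 'DIM_PERSON_EMPL_RCD',           -- table that went to a full scan
            as_of_timestamp => TO_TIMESTAMP('2011-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS'));  -- illustrative
          -- then lock them so the next automatic collection does not overwrite them
          DBMS_STATS.LOCK_TABLE_STATS(ownname => 'REPORT2', tabname => 'DIM_PERSON_EMPL_RCD');
        END;
        /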

  • Multiple DB env with different explain plan

    Hi,
    I have a DEV DB and a QA DB, both located in the USA. There is a table called Customer, with the same indexes, PK, constraints, etc., available in both environments. In the DEV DB the Customer table has 1000 records, but in the QA DB it has about 50000 records.
    Now, when I take an explain plan in the DEV DB the query does not go for a FULL table scan, but in the QA DB it does. Why is there a discrepancy? Is it because the number of records in QA is higher than in DEV? I had hoped that wouldn't be a problem, as the Oracle CBO/RBO would optimize the query by itself in both cases using the index, etc. If that is the case, I am wondering what the problem might be. Do we need to guide Oracle using hints? Please shed some light on this.
    Thanks.

    First of all, it is a good idea to compare the execution plans from different environments. Not all developers do that. The cost or time cannot be properly compared, but the plan itself can. If the plan is different, then the behaviour of the query very likely is also different.
    What you didn't show us was the query and the two different plans. Maybe in a cut-down test version.
    What is strange is that I would have expected the reverse behaviour. In the dev DB you have only 1,000 records. Usually the DB will do a FULL table scan on that, simply because the amount of data is so small that it is often faster to read the table in one big chunk and then deal with it. The parameter db_file_multiblock_read_count plays a role in that. If it instead does a table access by index rowid, then that is fine.
    If your QA DB now uses a "full table scan" instead of the "table access by index rowid", that is something to think about. Possible reasons include stale statistics. For example, do you have an automatic job that collects statistics? Maybe this job hasn't run yet since you imported the data? Using hints as the solution would be wrong. However, you can use a hint to see whether the performance of the query improves with it. If it does, then the CBO didn't have good enough information. If it doesn't, then the CBO already made the right decision.
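    As a sketch of that kind of test (the Customer table comes from the question above; the schema name, bind variable and index name CUSTOMER_PK are made up for illustration):

    -- Refresh the statistics after the QA data load, including the indexes.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APPUSER',
        tabname => 'CUSTOMER',
        cascade => TRUE);
    END;
    /

    -- Plan the CBO chooses on its own ...
    EXPLAIN PLAN FOR
      SELECT * FROM customer WHERE customer_id = :id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- ... versus the plan forced with an index hint, purely as a diagnostic.
    EXPLAIN PLAN FOR
      SELECT /*+ INDEX(c customer_pk) */ * FROM customer c WHERE c.customer_id = :id;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);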

  • Different explain plan between 10.2.0.3 and 10.2.0.4

    Had a problem with an explain plan changing after upgrade from 10.2.0.3 to 10.2.0.4. Managed to simplify as much as possible for now:
    Query is :
    SELECT * FROM m_promo_chk_str
    WHERE (m_promo_chk_str.cust_cd) IN (
    SELECT cust_cd
    FROM s_usergrp_pda
    GROUP BY cust_cd)
    On 10.2.0.3 explain plan is:
    | 0 | SELECT STATEMENT | | 1 | 1227 | 26 (16)| 00:00:01 |
    |* 1 | HASH JOIN SEMI | | 1 | 1227 | 26 (16)| 00:00:01 |
    | 2 | TABLE ACCESS FULL | M_PROMO_CHK_STR | 1 | 1185 | 14 (0)| 00:00:01 |
    | 3 | VIEW | VW_NSO_1 | 137 | 5754 | 11 (28)| 00:00:01 |
    | 4 | HASH GROUP BY | | 137 | 548 | 11 (28)| 00:00:01 |
    | 5 | TABLE ACCESS FULL| S_USERGRP_PDA | 5219 | 20876 | 9 (12)| 00:00:01 |
    On 10.2.0.4 with same data is:
    | 0 | SELECT STATEMENT | | 1 | 1201 | 46 (5)| 00:00:01 |
    | 1 | HASH GROUP BY | | 1 | 1201 | 46 (5)| 00:00:01 |
    |* 2 | HASH JOIN | | 1 | 1201 | 45 (3)| 00:00:01 |
    | 3 | TABLE ACCESS FULL| M_PROMO_CHK_STR | 1 | 1197 | 29 (0)| 00:00:01 |
    | 4 | TABLE ACCESS FULL| S_USERGRP_PDA | 5219 | 20876 | 15 (0)| 00:00:01 |
    The explain plan is reasonable for the case where M_PROMO_CHK_STR is empty; however, we have a case where stats are gathered while the table is empty, but the table is then populated and the query runs slowly. I understand that this is not exactly a problem with the database, but I want to understand why the behaviour is different.
    Will look into the CBO trace tomorrow, but for now, does anyone want to share any thoughts?

    PatHK wrote:
    Here is further simplification to reproduce the different behaviour - I think about as simple as I can get it!
    SELECT * FROM dual WHERE (dummy) IN (SELECT dummy FROM dual GROUP BY dummy);
    On 10.2.0.3
    |   0 | SELECT STATEMENT     |          |     1 |     4 |     5  (20)| 00:00:01 |
    |   1 |  NESTED LOOPS SEMI   |          |     1 |     4 |     5  (20)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL  | DUAL     |     1 |     2 |     2   (0)| 00:00:01 |
    |*  3 |   VIEW               | VW_NSO_1 |     1 |     2 |     3  (34)| 00:00:01 |
    |   4 |    SORT GROUP BY     |          |     1 |     2 |     3  (34)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| DUAL     |     1 |     2 |     2   (0)| 00:00:01 |
    On 10.2.0.4
    |   0 | SELECT STATEMENT     |      |     1 |     4 |     4   (0)| 00:00:01 |
    |   1 |  SORT GROUP BY NOSORT|      |     1 |     4 |     4   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS       |      |     1 |     4 |     4   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     2   (0)| 00:00:01 |
    Timur's suggestion to look at a 10053 trace file is a good idea. It might be the case that someone disabled complex view merging in the 10.2.0.3 database instance. See the following:
    _complex_view_merging
    http://jonathanlewis.wordpress.com/2007/03/08/transformation-and-optimisation/
    Here is a test you might try on both database versions:
    ALTER SESSION SET "_COMPLEX_VIEW_MERGING"=TRUE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST1';
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SELECT * FROM DUAL WHERE (DUMMY) IN (SELECT DUMMY FROM DUAL GROUP BY DUMMY);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
    ALTER SESSION SET "_COMPLEX_VIEW_MERGING"=FALSE;
    ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST2';
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SELECT * FROM DUAL WHERE (DUMMY) IN (SELECT DUMMY FROM DUAL GROUP BY DUMMY);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
    The first plan output:
    | Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |      |       |       |     8 (100)|          |
    |   1 |  SORT GROUP BY NOSORT|      |     1 |     4 |     8   (0)| 00:00:01 |
    |   2 |   NESTED LOOPS       |      |     1 |     4 |     8   (0)| 00:00:01 |
    |   3 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     4   (0)| 00:00:01 |
    |*  4 |    TABLE ACCESS FULL | DUAL |     1 |     2 |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - filter("DUMMY"="DUMMY")The second plan output:
    | Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |          |       |       |     9 (100)|          |
    |   1 |  NESTED LOOPS SEMI   |          |     1 |     4 |     9  (12)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL  | DUAL     |     1 |     2 |     4   (0)| 00:00:01 |
    |*  3 |   VIEW               | VW_NSO_1 |     1 |     2 |     5  (20)| 00:00:01 |
    |   4 |    SORT GROUP BY     |          |     1 |     2 |     5  (20)| 00:00:01 |
    |   5 |     TABLE ACCESS FULL| DUAL     |     1 |     2 |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("DUMMY"="$nso_col_1")From the first 10053 trace file:
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
      _pga_max_size                       = 368640 KB
    _pga_max_size is the only parameter with a non-default value that could affect the optimizer.
    From the second 10053 trace file:
    PARAMETERS USED BY THE OPTIMIZER
      PARAMETERS WITH ALTERED VALUES
      _pga_max_size                       = 368640 KB
      _complex_view_merging               = false
      *********************************
    This section in the first 10053 trace seems to show the complex view merging:
    SU: Considering interleaved complex view merging
    SU:   Transform an ANY subquery to semi-join or distinct.
    CVM: Considering view merge (candidate phase) in query block SEL$5DA710D3 (#1)
    CVM: Considering view merge (candidate phase) in query block SEL$683B0107 (#2)
    CVM: CBQT Marking query block SEL$683B0107 (#2)as valid for CVM.
    CVM:   Merging complex view SEL$683B0107 (#2) into SEL$5DA710D3 (#1).
    qbcp:******* UNPARSED QUERY IS *******
    SELECT /*+ */ "DUAL"."DUMMY" "DUMMY" FROM  (SELECT /*+ */ DISTINCT "DUAL"."DUMMY" "$nso_col_1" FROM "SYS"."DUAL" "DUAL" GROUP BY "DUAL"."DUMMY") "VW_NSO_2","SYS"."DUAL" "DUAL" WHERE "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    vqbcp:******* UNPARSED QUERY IS *******
    SELECT /*+ */ DISTINCT "DUAL"."DUMMY" "$nso_col_1" FROM "SYS"."DUAL" "DUAL" GROUP BY "DUAL"."DUMMY"
    CVM: result SEL$5DA710D3 (#1).
    ******* UNPARSED QUERY IS *******
    SELECT /*+ */ "DUAL"."DUMMY" "DUMMY" FROM "SYS"."DUAL" "DUAL","SYS"."DUAL" "DUAL" WHERE "DUAL"."DUMMY"="DUAL"."DUMMY" GROUP BY "DUAL"."DUMMY","DUAL".ROWID,"DUAL"."DUMMY"
    Registered qb: SEL$C9C6826C 0x155e2020 (VIEW MERGE SEL$5DA710D3; SEL$683B0107)
      signature (): qb_name=SEL$C9C6826C nbfros=2 flg=0
        fro(0): flg=0 objn=258 hint_alias="DUAL"@"SEL$1"
        fro(1): flg=0 objn=258 hint_alias="DUAL"@"SEL$2"
    FPD: Considering simple filter push in SEL$C9C6826C (#1)
    FPD:   Current where clause predicates in SEL$C9C6826C (#1) :
             "DUAL"."DUMMY"="DUAL"."DUMMY"
    kkogcp: try to generate transitive predicate from check constraints for SEL$C9C6826C (#1)
    predicates with check contraints: "DUAL"."DUMMY"="DUAL"."DUMMY"
    after transitive predicate generation: "DUAL"."DUMMY"="DUAL"."DUMMY"
    finally: "DUAL"."DUMMY"="DUAL"."DUMMY"
    CVM: Costing transformed query.
    kkoqbc-start
                : call(in-use=25864, alloc=65448), compile(in-use=115280, alloc=118736)
    kkoqbc-subheap (create addr=000000001556CD70)
    This is the same section from the second 10053 trace:
    SU: Considering interleaved complex view merging
    SU:   Transform an ANY subquery to semi-join or distinct.
    CVM: Considering view merge (candidate phase) in query block SEL$5DA710D3 (#1)
    CVM: Considering view merge (candidate phase) in query block SEL$683B0107 (#2)
    FPD: Considering simple filter push in SEL$5DA710D3 (#1)
    FPD:   Current where clause predicates in SEL$5DA710D3 (#1) :
             "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    kkogcp: try to generate transitive predicate from check constraints for SEL$5DA710D3 (#1)
    predicates with check contraints: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    after transitive predicate generation: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    finally: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
    FPD: Considering simple filter push in SEL$683B0107 (#2)
    FPD:   Current where clause predicates in SEL$683B0107 (#2) :
             CVM: Costing transformed query.
    kkoqbc-start
                : call(in-use=25656, alloc=65448), compile(in-use=113992, alloc=114592)
    kkoqbc-subheap (create addr=00000000157E9078)
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Query with same explain-plan but slower in one env

    Hi there
    I have a stored procedure which is executed from a web application. It contains a query (an insert-select-from statement). When this stored procedure is called by the web application, it takes 13 sec in PROD but 19 sec in the TEST env. I checked the explain plan for this insert statement in both instances and it is the same (see below). Actually, the cost is lower in the TEST env.
    ENV: Oracle 10gR2 EE, on ASM - RHEL 64bit
    The TEST server is on better/faster hardware and will become the new PROD in the near future (16 CPUs vs 8 in PROD, a high-performance SAN, 132GB RAM vs 96GB in PROD, etc.). The TEST database has exactly the same init parameters and version/patch level as the current PROD, so the application is being tested against it at the moment.
    Here are the explain-plans from both environments:
    From PROD Server
    Plan
    INSERT STATEMENT ALL_ROWS Cost: 143 Bytes: 696 Cardinality: 3
    18 SORT ORDER BY Cost: 143 Bytes: 696 Cardinality: 3
    17 HASH UNIQUE Cost: 142 Bytes: 696 Cardinality: 3
    16 WINDOW SORT Cost: 143 Bytes: 696 Cardinality: 3
    15 HASH JOIN Cost: 141 Bytes: 696 Cardinality: 3
    13 HASH JOIN Cost: 128 Bytes: 519 Cardinality: 3
    11 TABLE ACCESS BY INDEX ROWID TABLE MKTG.SATDATAIMPORT Cost: 125 Bytes: 1,728 Cardinality: 12
    10 NESTED LOOPS Cost: 125 Bytes: 1,992 Cardinality: 12
    3 HASH JOIN Cost: 5 Bytes: 22 Cardinality: 1
    1 TABLE ACCESS FULL TABLE MKTG.TMPG_CLICKS_HDGS Cost: 2 Bytes: 12 Cardinality: 1
    2 TABLE ACCESS FULL TABLE MKTG.TMPG_CLICKS_DIRS Cost: 2 Bytes: 10 Cardinality: 1
    9 BITMAP CONVERSION TO ROWIDS
    8 BITMAP AND
    5 BITMAP CONVERSION FROM ROWIDS
    4 INDEX RANGE SCAN INDEX MKTG.SATDATAIMPORT_HEADINGNO Cost: 19 Cardinality: 4,920
    7 BITMAP CONVERSION FROM ROWIDS
    6 INDEX RANGE SCAN INDEX MKTG.SATDATAIMPORT_DIRNO Cost: 89 Cardinality: 4,920
    12 TABLE ACCESS FULL TABLE MKTG.MONTHS12 Cost: 2 Bytes: 84 Cardinality: 12
    14 TABLE ACCESS FULL TABLE MKTG.REF_WEST_CATEGORY Cost: 12 Bytes: 191,809 Cardinality: 3,251
    From TEST Server
    Plan
    INSERT STATEMENT ALL_ROWS Cost: 107 Bytes: 232 Cardinality: 1
    18 SORT ORDER BY Cost: 107 Bytes: 232 Cardinality: 1
    17 HASH UNIQUE Cost: 106 Bytes: 232 Cardinality: 1
    16 WINDOW SORT Cost: 107 Bytes: 232 Cardinality: 1
    15 HASH JOIN Cost: 105 Bytes: 232 Cardinality: 1
    13 HASH JOIN Cost: 93 Bytes: 173 Cardinality: 1
    11 TABLE ACCESS BY INDEX ROWID TABLE MKTG.SATDATAIMPORT Cost: 89 Bytes: 864 Cardinality: 6
    10 NESTED LOOPS Cost: 89 Bytes: 996 Cardinality: 6
    3 HASH JOIN Cost: 7 Bytes: 22 Cardinality: 1
    1 TABLE ACCESS FULL TABLE MKTG.TMPG_CLICKS_HDGS Cost: 3 Bytes: 12 Cardinality: 1
    2 TABLE ACCESS FULL TABLE MKTG.TMPG_CLICKS_DIRS Cost: 3 Bytes: 10 Cardinality: 1
    9 BITMAP CONVERSION TO ROWIDS
    8 BITMAP AND
    5 BITMAP CONVERSION FROM ROWIDS
    4 INDEX RANGE SCAN INDEX MKTG.SATDATAIMPORT_HEADINGNO Cost: 9 Cardinality: 2,977
    7 BITMAP CONVERSION FROM ROWIDS
    6 INDEX RANGE SCAN INDEX MKTG.SATDATAIMPORT_DIRNO Cost: 59 Cardinality: 2,977
    12 TABLE ACCESS FULL TABLE MKTG.MONTHS12 Cost: 3 Bytes: 84 Cardinality: 12
    14 TABLE ACCESS FULL TABLE MKTG.REF_WEST_CATEGORY Cost: 12 Bytes: 191,868 Cardinality: 3,252
    What else can I check to find out why the query is slower in TEST env?
    Please advise.
    Best regards

    Here is some more info. The query is below:
    select distinct dr.line_num 
                     ,row_number() over (partition by di.HEADINGNO,di.DIRECTORYNO order by reportyear,to_number(di.monthno)) monthposition
                     ,di.SATID,di.REPORTYEAR,di.MONTHNO,di.MONTHEN,di.MONTHFR,di.HEADINGNO,hn.NAME_EN,hn.NAME_FR,di.DIRECTORYNO
                     ,di.SUPERDIRECTORYNO,di.PRINTDIRCODE,di.DIRECTORYNAME,round(to_number(di.IMPTTOTAL)) imptotal
                     ,round(to_number(di.IMPBEST)) impbest ,round(to_number(di.IMPTAVERAGE)) imptaverage
                     ,round(to_number(di.CLICKTOTAL)) clicktotal,round(to_number(di.CLICKBEST)) clickbest
                     ,round(to_number(di.CLICKAVERAGE)) clickaverage
                     ,round(avg(to_number(impttotal)) over(partition by di.HEADINGNO,di.DIRECTORYNO)) avgimp
               from satdataimport di,tmpg_clicks_hdgs hd,tmpg_clicks_dirs dr, months12 m12, ref_west_category hn
               where di.headingno   = hd.id
                 and di.directoryno = dr.id
                 and dr.line_num=hd.line_num
                 and di.reportyear  = m12.year
                 and di.monthno     = m12.month
                 and hn.CATEGORY_CODE = di.headingno
               order by di.headingno, di.directoryno,di.reportyear,to_number(di.monthno)
    The largest table in the query, "satdataimport", has 12,274,818 rows. The rest of the tables are very small, containing from a few rows up to fewer than 4,000 rows.
    I have refreshed the statistics of the large table, but this did not help either. Even a simple query like "select count(*) from satdataimport" takes 15 sec in TEST while it takes 4 sec in PROD when I run it from TOAD.
    The other strange thing is that when I run this stored procedure from TOAD, it takes 200 milliseconds to complete. There is a logging table to which the stored procedure records the elapsed time taken by this INSERT statement.
    Since this query is in a stored procedure called from the web app, the QA team wants a quicker response. The current PROD is faster.
    The tables have the same indexes, etc., and contain identical data to PROD (they were refreshed from PROD yesterday).
    What else can I check?
    Best regards
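    One way to gather more evidence than a bare explain plan — a sketch only, assuming the GATHER_PLAN_STATISTICS hint and DBMS_XPLAN.DISPLAY_CURSOR are usable in this 10gR2 environment — is to capture actual row-source statistics for the same statement in both databases and compare estimated vs actual rows and buffer gets:

    -- Execute the statement once with row-source statistics collection enabled.
    SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM satdataimport;

    -- Show the actual plan of the last statement run in this session,
    -- including E-Rows, A-Rows and Buffers for each step.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));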

  • Query needs tuning; Explain plan attached

    DB version:10gR2
    Currently, the below query is taking more than 28 secs to complete. The table stats are up-to-date.
    Is there a way to rewrite/tune this query?
    SELECT DISTINCT TASK_HDR.TASK_ID,
                     TASK_HDR.WHSE,
                     TASK_HDR.TASK_DESC,
                     TASK_HDR.INVN_TYPE,
                     TASK_HDR.INVN_NEED_TYPE,
                     TASK_HDR.DFLT_TASK_PRTY,
                     TASK_HDR.CURR_TASK_PRTY,
                     TASK_HDR.XPECTD_DURTN,
                     TASK_HDR.ACTL_DURTN,
                     TASK_HDR.ERLST_START_DATE_TIME,
                     TASK_HDR.LTST_START_DATE_TIME,
                     TASK_HDR.LTST_CMPL_DATE_TIME,
                     TASK_HDR.BEGIN_AREA,
                     TASK_HDR.BEGIN_ZONE,
                     TASK_HDR.BEGIN_AISLE,
                     TASK_HDR.END_AREA,
                     TASK_HDR.END_ZONE,
                     TASK_HDR.END_AISLE,
                     TASK_HDR.START_CURR_WORK_GRP,
                     TASK_HDR.START_CURR_WORK_AREA,
                     TASK_HDR.END_CURR_WORK_GRP,
                     TASK_HDR.END_CURR_WORK_AREA,
                     TASK_HDR.START_DEST_WORK_GRP,
                     TASK_HDR.START_DEST_WORK_AREA,
                     TASK_HDR.END_DEST_WORK_GRP,
                     TASK_HDR.END_DEST_WORK_AREA,
                     TASK_HDR.TASK_TYPE,
                     TASK_HDR.TASK_GENRTN_REF_CODE,
                     TASK_HDR.TASK_GENRTN_REF_NBR,
                     TASK_HDR.NEED_ID,
                     TASK_HDR.TASK_BATCH,
                     TASK_HDR.STAT_CODE,
                     TASK_HDR.CREATE_DATE_TIME,
                     TASK_HDR.MOD_DATE_TIME,
                     TASK_HDR.USER_ID,
                     TASK_HDR.RLS_DATE_TIME,
                     TASK_HDR.SKU_ID,
                     TASK_HDR.TASK_CMPL_REF_CODE,
                     TASK_HDR.TASK_CMPL_REF_NBR,
                     TASK_HDR.OWNER_USER_ID,
                     TASK_HDR.ONE_USER_PER_GRP,
                     TASK_HDR.NEXT_TASK_ID,
                     TASK_HDR.EXCEPTION_CODE,
                     TASK_HDR.CURR_LOCN_ID,
                     TASK_HDR.TASK_PARM_ID,
                     TASK_HDR.RULE_ID,
                     TASK_HDR.VOCOLLECT_ASSIGN_ID,
                     TASK_HDR.CURR_USER_ID,
                     TASK_HDR.MHE_FLAG,
                     TASK_HDR.PICK_TO_TOTE_FLAG,
                     TASK_HDR.MHE_ORD_STATE,
                     TASK_HDR.PRT_TASK_LIST_FLAG,
                     TASK_HDR.RPT_PRTR_REQSTR,
                     TASK_HDR.ORIG_TASK_ID
       FROM INVN_NEED_TYPE, TASK_DTL, TASK_HDR
        WHERE (TASK_HDR.TASK_ID = TASK_DTL.TASK_ID(+))
          AND TASK_HDR.WHSE = '01' AND TASK_HDR.STAT_CODE >= 99 AND
              TASK_HDR.INVN_NEED_TYPE = INVN_NEED_TYPE.INVN_NEED_TYPE AND
              INVN_NEED_TYPE.WHSE = TASK_HDR.WHSE AND
              (INVN_NEED_TYPE.CO = '88') AND (INVN_NEED_TYPE.DIV = '51') AND
              (TASK_DTL.PKT_CTRL_NBR IS NULL OR TASK_DTL.INVN_NEED_TYPE = 1) AND
              TASK_HDR.MOD_DATE_TIME <= sysdate-85
      ORDER BY TASK_HDR.CREATE_DATE_TIME DESC
      The explain plan:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                        | Name              | Rows  | Bytes |TempSpc| Cost (%CPU)|
    |   0 | SELECT STATEMENT                 |                   | 10032 |  1969K|       |  3143   (2)|
    |   1 |  SORT ORDER BY                   |                   | 10032 |  1969K|  4232K|  3143   (2)|
    |   2 |   HASH UNIQUE                    |                   | 10032 |  1969K|  4232K|  2689   (2)|
    |   3 |    FILTER                        |                   |       |       |       |            |
    |   4 |     NESTED LOOPS OUTER           |                   | 10032 |  1969K|       |  2235   (1)|
    |   5 |      NESTED LOOPS                |                   |  3226 |   570K|       |   284   (3)|
    |   6 |       TABLE ACCESS BY INDEX ROWID| TASK_HDR          |  3412 |   559K|       |   282   (2)|
    |   7 |        INDEX RANGE SCAN          | TASK_HDR_IND_3    |  9042 |       |       |     8  (13)|
    |   8 |       INDEX UNIQUE SCAN          | PK_INVN_NEED_TYPE |     1 |    13 |       |     1   (0)|
    |   9 |      TABLE ACCESS BY INDEX ROWID | TASK_DTL          |     3 |    60 |       |     1   (0)|
    |  10 |       INDEX RANGE SCAN           | PK_TASK_DTL       |     3 |       |       |     1   (0)|
    ---------------------------------------------------------------------------------------------------

    My apologies, yes, it is:
    SELECT TASK_HDR.TASK_ID,
                     TASK_HDR.WHSE,
                     TASK_HDR.TASK_DESC,
                     TASK_HDR.INVN_TYPE,
                     TASK_HDR.INVN_NEED_TYPE,
                     TASK_HDR.DFLT_TASK_PRTY,
                     TASK_HDR.CURR_TASK_PRTY,
                     TASK_HDR.XPECTD_DURTN,
                     TASK_HDR.ACTL_DURTN,
                     TASK_HDR.ERLST_START_DATE_TIME,
                     TASK_HDR.LTST_START_DATE_TIME,
                     TASK_HDR.LTST_CMPL_DATE_TIME,
                     TASK_HDR.BEGIN_AREA,
                     TASK_HDR.BEGIN_ZONE,
                     TASK_HDR.BEGIN_AISLE,
                     TASK_HDR.END_AREA,
                     TASK_HDR.END_ZONE,
                     TASK_HDR.END_AISLE,
                     TASK_HDR.START_CURR_WORK_GRP,
                     TASK_HDR.START_CURR_WORK_AREA,
                     TASK_HDR.END_CURR_WORK_GRP,
                     TASK_HDR.END_CURR_WORK_AREA,
                     TASK_HDR.START_DEST_WORK_GRP,
                     TASK_HDR.START_DEST_WORK_AREA,
                     TASK_HDR.END_DEST_WORK_GRP,
                     TASK_HDR.END_DEST_WORK_AREA,
                     TASK_HDR.TASK_TYPE,
                     TASK_HDR.TASK_GENRTN_REF_CODE,
                     TASK_HDR.TASK_GENRTN_REF_NBR,
                     TASK_HDR.NEED_ID,
                     TASK_HDR.TASK_BATCH,
                     TASK_HDR.STAT_CODE,
                     TASK_HDR.CREATE_DATE_TIME,
                     TASK_HDR.MOD_DATE_TIME,
                     TASK_HDR.USER_ID,
                     TASK_HDR.RLS_DATE_TIME,
                     TASK_HDR.SKU_ID,
                     TASK_HDR.TASK_CMPL_REF_CODE,
                     TASK_HDR.TASK_CMPL_REF_NBR,
                     TASK_HDR.OWNER_USER_ID,
                     TASK_HDR.ONE_USER_PER_GRP,
                     TASK_HDR.NEXT_TASK_ID,
                     TASK_HDR.EXCEPTION_CODE,
                     TASK_HDR.CURR_LOCN_ID,
                     TASK_HDR.TASK_PARM_ID,
                     TASK_HDR.RULE_ID,
                     TASK_HDR.VOCOLLECT_ASSIGN_ID,
                     TASK_HDR.CURR_USER_ID,
                     TASK_HDR.MHE_FLAG,
                     TASK_HDR.PICK_TO_TOTE_FLAG,
                     TASK_HDR.MHE_ORD_STATE,
                     TASK_HDR.PRT_TASK_LIST_FLAG,
                     TASK_HDR.RPT_PRTR_REQSTR,
                     TASK_HDR.ORIG_TASK_ID
       FROM INVN_NEED_TYPE,TASK_HDR
        WHERE TASK_HDR.WHSE = '01' AND TASK_HDR.STAT_CODE >= 99 AND
              TASK_HDR.INVN_NEED_TYPE = INVN_NEED_TYPE.INVN_NEED_TYPE AND
              INVN_NEED_TYPE.WHSE = TASK_HDR.WHSE AND
              (INVN_NEED_TYPE.CO = '88') AND (INVN_NEED_TYPE.DIV = '51') AND
              TASK_HDR.MOD_DATE_TIME <= sysdate-85
         AND EXISTS (SELECT 1
                               FROM TASK_DTL
                             WHERE TASK_HDR.TASK_ID = TASK_DTL.TASK_ID
                                AND (TASK_DTL.PKT_CTRL_NBR IS NULL OR TASK_DTL.INVN_NEED_TYPE = 1))
      ORDER BY TASK_HDR.CREATE_DATE_TIME DESC
    Although you have an OUTER JOIN on TASK_DTL.TASK_ID, it is converted to an INNER JOIN by the predicate (TASK_DTL.PKT_CTRL_NBR IS NULL OR TASK_DTL.INVN_NEED_TYPE = 1), which makes it equivalent to the 'EXISTS' version above.
    I haven't been able to test this.

  • SQL Tuning- slow query on GL_BALANCES- Explain plan provided

    Hi- I really need some help here.
    The following SQL statement has been identified to perform poorly.
    It currently takes from 2-3 minutes to execute. I see it is because GL_BALANCES has so many rows.
    Is there any way around this? Explain and info below. Thanks gurus!
    This is the SQL statement:
    SELECT DISTINCT GLB.CODE_COMBINATION_ID CCID
    FROM gl_balances GLB, gl_code_combinations GCC
    WHERE GLB.ACTUAL_FLAG = 'A'
    AND GLB.Last_Update_Date > to_date('11-JAN-2010','DD-MON-YYYY')
    AND GLB.code_combination_id = GCC.code_combination_id
    AND EXISTS (
                  SELECT 1
                  FROM fnd_flex_value_sets A, fnd_flex_values B
                  WHERE A.flex_value_set_name = 'XXX_XXX'
                  AND UPPER(B.ATTRIBUTE3) = 'APPORTIONMENT'
                  AND A.flex_value_set_id = b.flex_value_set_id
                  AND GCC.segment11 = b.flex_value
               );
    The version of the database is 11.1.0.7.
    These are the parameters relevant to the optimizer:
    NAME                                   TYPE    VALUE
    _optimizer_autostats_job               boolean FALSE
    _optimizer_extended_cursor_sharing_rel string  NONE
    optimizer_capture_sql_plan_baselines   boolean FALSE
    optimizer_dynamic_sampling             integer 2
    optimizer_features_enable              string  11.1.0.7
    optimizer_index_caching                integer 0
    optimizer_index_cost_adj               integer 100
    optimizer_mode                         string  ALL_ROWS
    optimizer_secure_view_merging          boolean FALSE
    optimizer_use_invisible_indexes        boolean FALSE
    optimizer_use_pending_statistics       boolean FALSE
    optimizer_use_sql_plan_baselines       boolean TRUE
    SQL> show parameter db_file_multi
    NAME                          TYPE    VALUE
    db_file_multiblock_read_count integer 128
    SQL> show parameter db_block_size
    NAME          TYPE    VALUE
    db_block_size integer 8192
    SQL> show parameter cursor_sharing
    NAME                                   TYPE    VALUE
    _optimizer_extended_cursor_sharing_rel string  NONE
    cursor_sharing                         string  EXACT
    select sname, pname, pval1, pval2
    from sys.aux_stats$;
    SNAME          PNAME       PVAL1        PVAL2
    SYSSTATS_INFO  STATUS                   COMPLETED
    SYSSTATS_INFO  DSTART                   09-02-2006 14:35
    SYSSTATS_INFO  DSTOP                    09-02-2006 14:35
    SYSSTATS_INFO  FLAGS       1
    SYSSTATS_MAIN  CPUSPEEDNW  451.262136
    SYSSTATS_MAIN  IOSEEKTIM   10
    SYSSTATS_MAIN  IOTFRSPEED  4096
    SYSSTATS_MAIN  SREADTIM
    SYSSTATS_MAIN  MREADTIM
    SYSSTATS_MAIN  CPUSPEED
    SYSSTATS_MAIN  MBRC
    SYSSTATS_MAIN  MAXTHR
    SYSSTATS_MAIN  SLAVETHR
    13 rows selected.
    SQL> explain plan for
    2 SELECT DISTINCT GLB.CODE_COMBINATION_ID CCID
    3 FROM gl_balances GLB, gl_code_combinations GCC
    4 WHERE GLB.code_combination_id = GCC.code_combination_id
    5 AND GLB.ACTUAL_FLAG = 'A'
    6 AND GLB.Last_Update_Date > '11-JAN-2010'
    7 AND EXISTS (SELECT 1
    8 FROM fnd_flex_value_sets A, fnd_flex_values B
    9 WHERE A.flex_value_set_id = b.flex_value_set_id
    10 AND A.flex_value_set_name = 'XXX_XXX'
    11 AND UPPER(B.ATTRIBUTE3) = 'APPORTIONMENT'
    12 AND GCC.segment11 = b.flex_value);
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 1839014065
    | Id | Operation                         | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
    |  0 | SELECT STATEMENT                  |                          |  4102 |  296K |  955K  (3) | 03:11:03 |
    |  1 |  HASH UNIQUE                      |                          |  4102 |  296K |  955K  (3) | 03:11:03 |
    |* 2 |   HASH JOIN                       |                          |  4102 |  296K |  955K  (3) | 03:11:03 |
    |  3 |    NESTED LOOPS                   |                          |       |       |            |          |
    |  4 |     NESTED LOOPS                  |                          | 23907 | 1354K |  2598  (1) | 00:00:32 |
    |  5 |      NESTED LOOPS                 |                          |     1 |    45 |     5  (0) | 00:00:01 |
    |  6 |       TABLE ACCESS BY INDEX ROWID | FND_FLEX_VALUE_SETS      |     1 |    28 |     2  (0) | 00:00:01 |
    |* 7 |        INDEX UNIQUE SCAN          | FND_FLEX_VALUE_SETS_U2   |     1 |       |     1  (0) | 00:00:01 |
    |* 8 |       TABLE ACCESS BY INDEX ROWID | FND_FLEX_VALUES          |     1 |    17 |     3  (0) | 00:00:01 |
    |* 9 |        INDEX RANGE SCAN           | FND_FLEX_VALUES_N2       |    53 |       |     1  (0) | 00:00:01 |
    |*10 |      INDEX RANGE SCAN             | GL_CODE_COMBINATIONS_N11 | 25427 |       |   106  (1) | 00:00:02 |
    | 11 |     TABLE ACCESS BY INDEX ROWID   | GL_CODE_COMBINATIONS     | 18664 |  236K |  2593  (1) | 00:00:32 |
    |*12 |    TABLE ACCESS FULL              | GL_BALANCES              | 1022K |   15M |  952K  (3) | 03:10:32 |
    Predicate Information (identified by operation id):
    2 - access("GLB"."CODE_COMBINATION_ID"="GCC"."CODE_COMBINATION_ID")
    7 - access("A"."FLEX_VALUE_SET_NAME"='SSA_SGL')
    8 - filter(UPPER("B"."ATTRIBUTE3")='APPORTIONMENT')
    9 - access("A"."FLEX_VALUE_SET_ID"="B"."FLEX_VALUE_SET_ID")
    10 - access("GCC"."SEGMENT11"="B"."FLEX_VALUE")
    12 - filter("GLB"."LAST_UPDATE_DATE">TO_DATE(' 2010-01-11 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "GLB"."ACTUAL_FLAG"='A')
    30 rows selected.

    As per the other replies, you've not really given enough information to go on - what are you trying to achieve, versions, etc.
    On my old apps 11.5.8 system, the explain plan for your query uses GL_CODE_COMBINATIONS_U1 rather than a full scan of gl_code_combinations.
    Check your stats are up to date (select table_name, num_rows, last_analyzed from dba_tables where ...)
    See if you can also use period_name and/or period_set_name (or period_num) from GL_Periods rather than period_year (i.e. use P_YEAR to look up the period_name/period_set_name/period_num from gl_periods). It might be faster to do it per period and then consolidate for the whole year, as there are indexes on gl_balances for the columns period_name, period_set_name and period_num.
    regards, Ivan
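    For instance, a quick staleness check along the lines suggested above might look like this (the owner and table names are assumptions based on the standard E-Business Suite GL schema):

    -- Check when the tables involved were last analysed and how many rows the optimizer thinks they have.
    SELECT table_name, num_rows, last_analyzed
      FROM dba_tables
     WHERE owner = 'GL'
       AND table_name IN ('GL_BALANCES', 'GL_CODE_COMBINATIONS');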

  • Same query with different execution plan

    Hello All,
    I wonder why does sql server create different execution plan for these below queries ?
    Thanks.

    You can look at the estimated query plan, either visually in SSMS, or by running the query after the instruction SET SHOWPLAN_TEXT ON.
    The optimizer is the component of SQL Server that determines how the query is executed. It is cost based: it assesses different execution plans, estimates the cost of each of them and then selects the cheapest. In this context, cheapest means the one with the shortest estimated runtime.
    In your particular case, the estimate for the second query is that scanning just a small part of the nonclustered index and then looking up the table data for the qualifying rows is the cheapest approach, because the estimated number of qualifying rows is low.
    In the first query, it estimated that looking up the many qualifying rows would be too expensive, and that it would be cheaper to simply scan the entire clustered index and filter out all unwanted rows. Note that the clustered index includes the actual table data.
    Gert-Jan
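    A minimal sketch of the SET SHOWPLAN_TEXT route mentioned above (the table and column names are placeholders; the SET statement must be in its own batch):

    -- Ask SQL Server to return the estimated plan as text instead of executing the query.
    SET SHOWPLAN_TEXT ON;
    GO
    -- The query under investigation (hypothetical names).
    SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42;
    GO
    SET SHOWPLAN_TEXT OFF;
    GO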

  • TUNING A SQL QUERY AND UNDERSTANDING EXPLAIN PLAN

    I was trying my hand at tuning SQL queries.
    Though I have managed to reduce the cost a little by creating some indexes, I have developed a great interest in tuning. Can some experts (I know there are lots available :-) ) help me with the approach to "HOW TO TUNE A QUERY"?
    Moreover, I would also like to understand how to read an explain plan. Please help ...
    Regards..

    Hi,
    Welcome to this forum...
    I would suggest you first read the official documentation:
    - The concepts of an Oracle DBMS (this is important to know, because Oracle structures, processes, objects, etc. are the building blocks of your database)
    - SQL reference guide
    - PL/SQL reference guide
    and then, at a later stage, the Performance Tuning Guide.
    You can start here:
    http://download.oracle.com/docs/cd/B14117_01/nav/portal_1.htm
    Once that's done, read the Performance Tuning Guide:
    http://download.oracle.com/docs/cd/B14117_01/server.101/b10752/toc.htm
    (Chapter 19 explains how to interpret the explain plan.)
    I wish you good luck, and be patient: I have been working with Oracle for more than 10 years and I'm still learning! ...
    Remark: the Oracle website is full of interesting articles and examples; just search for your specific points of interest.
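    As a small illustration of generating a plan to practise reading (EMPLOYEES and DEPARTMENT_ID are just the familiar sample-schema names, not from the question):

    -- Produce an execution plan without running the statement ...
    EXPLAIN PLAN FOR
      SELECT * FROM employees WHERE department_id = 50;

    -- ... and display it, including the predicate section, from the plan table.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);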
