Wrong optimizer estimate?
Hi all,
while doing performance tests of some new queries in our application, I noticed
one query where the estimated rows differ greatly from the actual rows in the
execution plan (line 7 in the attached execution plan). I kindly ask for your
help in finding out whether this is a bug in the optimizer's estimation or
whether there is another reason (maybe strange data) that could explain the difference.
Fortunately there is no performance problem with the query yet, but I've
learned that even small differences in data combined with small
miscalculations may have a huge impact on query performance in the future.
For better readability I attached the tests I did so far at the end of this message.
If there are any numbers, statistics, etc. that would help debug this further, please let
me know and I'll post them to the list.
In the tests you'll find the query in question along with the
execution plan gathered with the gather_plan_statistics hint. There are two
execution plans with two different sets of optimizer statistics (the first test
without histograms, the second test with histograms). Also attached are the column
stats for the main table involved. The table t_sendung_import1 has only 1 row.
I did the tests on 11.2.0.2.
Thank you very much in advance for helping me deepen my Oracle knowledge.
Very best regards
Michael
PLAN_TABLE_OUTPUT (without histograms)
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS (ownname          => 'xxx',
                                  tabname          => 'produkttitelinstanzen',
                                  estimate_percent => NULL,
                                  method_opt       => 'for all columns size 1',
                                  cascade          => TRUE);
END;
SQL_ID 0x4sfjdj0wshv, child number 0
select /*+ gather_plan_statistics */
pt1.id quellprodukt_id,
pt2.id zielprodukt_id
from
t_sendung_import1 timp,
produkttitelinstanzen pt1,
produkttitelinstanzen pt2
where
timp.id = pt1.produkt_id
and pt1.produkt_id <> pt2.produkt_id
and pt1.titelsuche_id = pt2.titelsuche_id
and pt1.objektbereich_id = 0
and pt1.objektbereich_id = pt2.objektbereich_id
and pt1.produkttitelart_id = 1
and pt1.produkttitelart_id = pt2.produkttitelart_id
and pt1.reihenfolge = 0
and pt1.reihenfolge = pt2.reihenfolge
and timp.jobid = 1111
Plan hash value: 2472102189
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 0 | SELECT STATEMENT | | 1 | | 197 |00:00:00.01 | 125 |
| 1 | NESTED LOOPS | | 1 | | 197 |00:00:00.01 | 125 |
| 2 | NESTED LOOPS | | 1 | 1 | 198 |00:00:00.01 | 25 |
| 3 | NESTED LOOPS | | 1 | 1 | 1 |00:00:00.01 | 7 |
|* 4 | INDEX RANGE SCAN | T_SENDUNG_IMPORT1_PK | 1 | 1 | 1 |00:00:00.01 | 2 |
| 5 | TABLE ACCESS BY INDEX ROWID| PRODUKTTITELINSTANZEN | 1 | 1 | 1 |00:00:00.01 | 5 |
|* 6 | INDEX RANGE SCAN | PRTI_UK2_I | 1 | 1 | 1 |00:00:00.01 | 4 |
|* 7 | INDEX RANGE SCAN | PRTI_TISU_FK_I | 1 | 12 | 198 |00:00:00.01 | 18 |
|* 8 | TABLE ACCESS BY INDEX ROWID | PRODUKTTITELINSTANZEN | 198 | 1 | 197 |00:00:00.01 | 100 |
Predicate Information (identified by operation id):
4 - access("TIMP"."JOBID"=1111)
6 - access("TIMP"."ID"="PT1"."PRODUKT_ID" AND "PT1"."PRODUKTTITELART_ID"=1 AND
"PT1"."OBJEKTBEREICH_ID"=0 AND "PT1"."REIHENFOLGE"=0)
7 - access("PT1"."TITELSUCHE_ID"="PT2"."TITELSUCHE_ID")
8 - filter(("PT2"."REIHENFOLGE"=0 AND "PT2"."OBJEKTBEREICH_ID"=0 AND "PT2"."PRODUKTTITELART_ID"=1 AND
"PT1"."PRODUKT_ID"<>"PT2"."PRODUKT_ID"))
PLAN_TABLE_OUTPUT (with histograms)
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS (ownname          => 'xxx',
                                  tabname          => 'produkttitelinstanzen',
                                  estimate_percent => NULL,
                                  method_opt       => 'for all columns size auto',
                                  cascade          => TRUE);
END;
SQL_ID 6k2stg2srtq2w, child number 0
select /*+ gather_plan_statistics */
pt1.id quellprodukt_id,
pt2.id zielprodukt_id
from
t_sendung_import1 timp,
produkttitelinstanzen pt1,
produkttitelinstanzen pt2
where
timp.id = pt1.produkt_id
and pt1.produkt_id <> pt2.produkt_id
and pt1.titelsuche_id = pt2.titelsuche_id
and pt1.objektbereich_id = 0
and pt1.objektbereich_id = pt2.objektbereich_id
and pt1.produkttitelart_id = 1
and pt1.produkttitelart_id = pt2.produkttitelart_id
and pt1.reihenfolge = 0
and pt1.reihenfolge = pt2.reihenfolge
and timp.jobid = 1111
Plan hash value: 2472102189
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
| 0 | SELECT STATEMENT | | 1 | | 197 |00:00:00.01 | 129 |
| 1 | NESTED LOOPS | | 1 | | 197 |00:00:00.01 | 129 |
| 2 | NESTED LOOPS | | 1 | 39799 | 198 |00:00:00.01 | 29 |
| 3 | NESTED LOOPS | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 4 | INDEX RANGE SCAN | T_SENDUNG_IMPORT1_PK | 1 | 1 | 2 |00:00:00.01 | 3 |
| 5 | TABLE ACCESS BY INDEX ROWID| PRODUKTTITELINSTANZEN | 2 | 1 | 1 |00:00:00.01 | 8 |
|* 6 | INDEX RANGE SCAN | PRTI_UK2_I | 2 | 1 | 1 |00:00:00.01 | 7 |
|* 7 | INDEX RANGE SCAN | PRTI_TISU_FK_I | 1 | 11 | 198 |00:00:00.01 | 18 |
|* 8 | TABLE ACCESS BY INDEX ROWID | PRODUKTTITELINSTANZEN | 198 | 57767 | 197 |00:00:00.01 | 100 |
Predicate Information (identified by operation id):
4 - access("TIMP"."JOBID"=1111)
6 - access("TIMP"."ID"="PT1"."PRODUKT_ID" AND "PT1"."PRODUKTTITELART_ID"=1 AND
"PT1"."OBJEKTBEREICH_ID"=0 AND "PT1"."REIHENFOLGE"=0)
7 - access("PT1"."TITELSUCHE_ID"="PT2"."TITELSUCHE_ID")
8 - filter(("PT2"."PRODUKTTITELART_ID"=1 AND "PT2"."REIHENFOLGE"=0 AND "PT2"."OBJEKTBEREICH_ID"=0 AND
"PT1"."PRODUKT_ID"<>"PT2"."PRODUKT_ID"))
39 rows selected.
COLUMN_NAME NUM_DISTINCT LOW_VALUE HIGH_VALUE DENSITY NUM_NULLS AVG_COL_LEN NUM_BUCKETS HISTOGRAM
ID 22878255 C102 C432034B63 4,37096273295319E-8 0 6 1 NONE
OBJEKTBEREICH_ID 113 80 C403011A04 2,1854813664766E-8 0 3 113 FREQUENCY
TITELTEXT_ID 2129449 C415010102 C419190123 0,000602515818749885 0 6 254 HEIGHT BALANCED
TITELSUCHE_ID 1992312 C415010102 C41909231C 0,000740863626550299 0 6 254 HEIGHT BALANCED
REIHENFOLGE 393 80 C2045D 0,0020138646157787 0 3 254 HEIGHT BALANCED
PRODUKT_ID 8581895 C40B1B5923 C50329633844 1,16524380687482E-7 0 7 1 NONE
PRODUKTTITELART_ID 9 80 C109 2,1854813664766E-8 0 3 9 FREQUENCY
ORIGINALTITELINSTANZ_ID 22874695 C42A563F05
Yes, these are two different test cases with the same query, one without histograms (size 1) and one with histograms (size auto), to see if the optimizer makes better estimates with histograms, which does not seem to be the case.
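For what it's worth, the E-Rows of 12 (respectively 11) at plan line 7 looks consistent with the optimizer's basic 1/NDV equality selectivity for the join predicate on TITELSUCHE_ID: it estimates the average number of rows per key, while your data apparently has one key with ~198 matching rows. A quick arithmetic sketch, under the assumption (not stated in the thread) that the table row count roughly equals NUM_DISTINCT of the unique-looking ID column:

```python
# Sketch of the estimate at plan line 7 for PT1.TITELSUCHE_ID = PT2.TITELSUCHE_ID.
# Assumption: row count of produkttitelinstanzen ~= NUM_DISTINCT of column ID,
# which has density ~1/NUM_DISTINCT and zero nulls in the posted stats.
num_rows = 22_878_255           # assumed table row count
ndv_titelsuche_id = 1_992_312   # NUM_DISTINCT for TITELSUCHE_ID from the stats

avg_rows_per_key = num_rows / ndv_titelsuche_id
print(round(avg_rows_per_key, 1))  # ~11.5, close to the E-Rows of 11/12 at line 7
```

If that reading is right, the "miss" is not a bug but skew: the average key returns ~11 rows, the key you actually hit returns 198.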
Similar Messages
-
Wrong cardinality estimate for range scan
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL: select * from GC_FULFILLMENT_ITEMS where MARKETPLACE_ID=:b1 and GC_FULFILLMENT_STATUS_ID=:b2;
Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 474K| 99M| 102 (85)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| GC_FULFILLMENT_ITEMS | 474K| 99M| 102 (85)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | I_GCFI_GCFS_ID_SDOC_MKTPLID | 474K| | 91 (95)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("GC_FULFILLMENT_STATUS_ID"=TO_NUMBER(:B2) AND "MARKETPLACE_ID"=TO_NUMBER(:B1))
filter("MARKETPLACE_ID"=TO_NUMBER(:B1))
If I use literals, the CBO uses cardinality = 1 (I believe this is due to fix control 5483301, which I set to off in my environment):
select * from GC_FULFILLMENT_ITEMS where MARKETPLACE_ID=5 and GC_FULFILLMENT_STATUS_ID=2;
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 220 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| GC_FULFILLMENT_ITEMS | 1 | 220 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | I_GCFI_GCFS_ID_SDOC_MKTPLID | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("GC_FULFILLMENT_STATUS_ID"=2 AND "MARKETPLACE_ID"=5)
filter("MARKETPLACE_ID"=5)
Here is the column distribution and histogram information:
Enter value for column_name: MARKETPLACE_ID
COLUMN_NAME ENDPOINT_VALUE CUMMULATIVE_FREQUENCY FREQUENCY ENDPOINT_ACTUAL_VALU
MARKETPLACE_ID 1 1 1
MARKETPLACE_ID 3 8548 8547
MARKETPLACE_ID 4 15608 7060
MARKETPLACE_ID 5 16385 777 --->
MARKETPLACE_ID 35691 16398 13
MARKETPLACE_ID 44551 16407 9
6 rows selected.
Enter value for column_name: GC_FULFILLMENT_STATUS_ID
COLUMN_NAME ENDPOINT_VALUE CUMMULATIVE_FREQUENCY FREQUENCY ENDPOINT_ACTUAL_VALU
GC_FULFILLMENT_STATUS_ID 5 19602 19602
GC_FULFILLMENT_STATUS_ID 6 19612 10
GC_FULFILLMENT_STATUS_ID 8 19802 190
3 rows selected.
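A small side observation on the literal-value behaviour described above: MARKETPLACE_ID = 5 is an endpoint of its frequency histogram, but GC_FULFILLMENT_STATUS_ID = 2 does not appear in its histogram at all, so the optimizer has to treat it as a value it never sampled. A trivial check using the endpoint values posted above:

```python
# Histogram endpoints copied from the query output above.
marketplace_endpoints = {1, 3, 4, 5, 35691, 44551}
status_endpoints = {5, 6, 8}

# The peeked/literal values from the question:
print(5 in marketplace_endpoints)  # True  - 5 was sampled into the histogram
print(2 in status_endpoints)       # False - 2 is an out-of-histogram ("rare") value
```

How the optimizer then costs such an out-of-histogram value depends on the version and on fix controls like the 5483301 one mentioned above, which is presumably why the literal and bind plans differ so much here.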
Actual distribution
select MARKETPLACE_ID,count(*) from GC_FULFILLMENT_ITEMS group by MARKETPLACE_ID order by 1;
MARKETPLACE_ID COUNT(*)
1 2099
3 16339936
4 13358682
5 1471839 --->
35691 33623
44551 19881
78931 40273
101611 1
6309408
9 rows selected.
BHAVIK_DBA: GC1EU> select GC_FULFILLMENT_STATUS_ID,count(*) from GC_FULFILLMENT_ITEMS group by GC_FULFILLMENT_STATUS_ID order by 1;
GC_FULFILLMENT_STATUS_ID COUNT(*)
1 880
2 63 --->
3 24
5 37226908
6 22099
7 18
8 325409
9 343
8 rows selected.
10053 trace:
SINGLE TABLE ACCESS PATH
Table: GC_FULFILLMENT_ITEMS Alias: GC_FULFILLMENT_ITEMS
Card: Original: 36703588.000000 Rounded: 474909 Computed: 474909.06 Non Adjusted: 474909.06
Best:: AccessPath: IndexRange
Index: I_GCFI_GCFS_ID_SDOC_MKTPLID
Cost: 102.05 Degree: 1 Resp: 102.05 Card: 474909.06 Bytes: 0
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
DB_VERSION('11.2.0.2')
OPT_PARAM('_b_tree_bitmap_plans' 'false')
OPT_PARAM('_optim_peek_user_binds' 'false')
OPT_PARAM('_fix_control' '5483301:0')
ALL_ROWS
OUTLINE_LEAF(@"SEL$F5BB74E1")
MERGE(@"SEL$2")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
INDEX_RS_ASC(@"SEL$F5BB74E1" "GC_FULFILLMENT_ITEMS"@"SEL$2" ("GC_FULFILLMENT_ITEMS"."GC_FULFILLMENT_STATUS_ID" "GC_FULFILLMENT_ITEMS"."SHIP_DELIVERY_OPTION_CODE" "GC_FULFILLMENT_ITEMS"."MARKETPLACE_ID"))
END_OUTLINE_DATA
*/
Is there any reason why the CBO is using card=474909.06? With the fix control in place, it should have set card=1 if it considers GC_FULFILLMENT_STATUS_ID=2 a "rare" value, shouldn't it?
OraDBA02 wrote:
You are right Charles.
I was reading one of your blog and saw that.
As you said, it is an issue with SQL*Plus.
However, the plan for the SQL coming from the application still shows the same (wrong cardinality) plan. It does not have the TO_NUMBER function because it does not experience the data-type conversion that SQL*Plus introduces.
But YES... the plan is exactly the same with/without TO_NUMBER.
OraDBA02,
I believe that some of the other people responding to this thread might have already described why the execution plan in the library cache is the same plan that you are seeing. One of the goals of using bind variables in SQL statements is to reduce the number of time-consuming (and resource-intensive) hard parses. That also means that a second goal is to share the same execution plan for future executions of the same SQL statement, even though bind variable values have changed. The catch here is that bind variable peeking, introduced with Oracle Database 9.0.1 (it may be disabled by modifying a hidden parameter), helps the optimizer select the "best" (lowest calculated cost) execution plan for those specific bind variable values - the same plan may not be the "best" execution plan for other sets of bind variable values on future executions.
Histograms on one or more of the columns in the WHERE clause could either help or hinder the situation further. It might further help the first execution, but might further hinder future executions with different bind variable values. Oracle Database 11.1 introduced something called adaptive cursor sharing (and 11.2 introduced cardinality feedback) that in theory addresses issues where the execution plan should change for later executions with different bind variable values (but the SQL statement must execute poorly at least once).
There might be multiple child cursors in the library cache for the same SQL statement, each potentially with a different execution plan. I suggest finding the SQL_ID of the SQL statement that the application is submitting (you can do this by checking V$SQL or V$SQLAREA). Once you have the SQL_ID, go back to the SQL statement that I suggested for displaying the execution plan:
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));
The first NULL in the above SQL statement is where you would specify the SQL_ID. If you leave the second NULL in place, the above SQL statement will retrieve the execution plan for all child cursors with that SQL_ID.
For instance, if the SQL_ID was 75chksrfa5fbt, you would execute the following:
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR('75chksrfa5fbt',NULL,'TYPICAL'));
Usually, you can take it a step further to see the bind variables that were used during the optimization phase. To do that, you would add the +PEEKED_BINDS format parameter:
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR('75chksrfa5fbt',NULL,'TYPICAL +PEEKED_BINDS'));
Note that there are various optimizer parameters that affect the optimizer's decisions; for instance, maybe the optimizer mode is set to FIRST_ROWS. Also possibly helpful is the +OUTLINE format parameter, which might provide a clue regarding the value of some of the parameters affecting the optimizer. The SQL statement that you would then enter is similar to the following:
SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR('75chksrfa5fbt',NULL,'TYPICAL +PEEKED_BINDS +OUTLINE'));
Additional information might be helpful. Please see the following two forum threads to see what kind of information you should gather:
When your query takes too long… : When your query takes too long ...
How to post a SQL statement tuning request: HOW TO: Post a SQL statement tuning request - template posting
Charles Hooper
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Wrong time estimate when downloading with pulsed downloads
When downloading from file servers (using 'free' download), with some servers the displayed download time estimate is horribly wrong. This seems to stem from their method of 'pulsed' fast packet downloads with long pauses between the packets. It would be nice if this issue could be corrected in future versions (if possible at all). It is very annoying.
If the estimated time doesn't get corrected by Firefox over time because the server sends the file as separate chunks that arrive at irregular intervals rather than constantly, then there is probably not much you can do other than be patient.
-
CBO - Wrong Cardinality Estimate
Hello,
Version 10.2.0.3
I am trying to understand the figures in the explain plan. I am not able to explain the cardinality of 70 on step 4.
The query takes very long to execute (about 400 secs). I would expect a HASH JOIN SEMI instead of NESTED LOOPS SEMI.
I have tried to provide as much information as possible. I have just requested the 10053 trace; I don't have it yet.
There is a primary key on the ORDERS.ORDER_ID (NUMBER) column. However, we are forced to use to_char(order_id) to accommodate COT_EXTERNAL_ID being a VARCHAR2 field.
1 select cdw.* from cdw_orders cdw where cdw.cot_external_id in
2 (
3 select to_char(order_id) from orders o where o.status_id in (12,16,22)
4* )
SQL> /
Execution Plan
Plan hash value: 733167152
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 280 | 326 (1)| 00:00:04 |
| 1 | NESTED LOOPS SEMI | | 2 | 280 | 326 (1)| 00:00:04 |
| 2 | TABLE ACCESS FULL | CDW_ORDERS | 3362 | 433K| 293 (1)| 00:00:04 |
| 3 | INLIST ITERATOR | | | | | |
|* 4 | TABLE ACCESS BY INDEX ROWID| ORDERS | 70 | 560 | 1 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | ORDERS_STATUS_ID_IDX | 2 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter("CDW"."COT_EXTERNAL_ID"=TO_CHAR("ORDER_ID"))
5 - access("O"."STATUS_ID"=12 OR "O"."STATUS_ID"=16 OR "O"."STATUS_ID"=22)
Here are some details on the table columns and data.
SQL> select column_name,num_distinct,density,num_nulls,num_buckets from all_tab_columns where table_name = 'ORDERS'
2 and column_name in ('STATUS_ID','ORDER_ID');
COLUMN_NAME NUM_DISTINCT DENSITY NUM_NULLS NUM_BUCKETS
ORDER_ID 177951 .00000561952447584 0 254
STATUS_ID 23 .00000275335899280 0 23
SQL> select num_rows from all_tables where table_name = 'ORDERS';
NUM_ROWS
177951
SQL> select index_name,distinct_keys,clustering_factor,num_rows,sample_size from all_indexes where index_name = 'ORDERS_STATUS_ID_IDX'
2 /
INDEX_NAME DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE
ORDERS_STATUS_ID_IDX 25 35893 177951 177951
Histograms on column STATUS_ID:
SQL> select * from (
2 select column_name,endpoint_value,endpoint_number- nvl(lag(endpoint_number) over (order by endpoint_value),0) count
3 from all_tab_histograms where column_name = 'STATUS_ID' and table_name = 'ORDERS'
4 ) where endpoint_value in (12,16,22);
COLUMN_NAME ENDPOINT_VALUE COUNT
STATUS_ID 12 494
STATUS_ID 16 24
STATUS_ID 22 3064
SQL> select max(endpoint_number) from all_tab_histograms where column_name = 'STATUS_ID' and table_name = 'ORDERS' ;
MAX(ENDPOINT_NUMBER)
5641
I tried to run the query for individual values instead of the inlist to check the numbers.
1 select cdw.* from cdw_orders cdw where cdw.cot_external_id in
2 (
3 select to_char(order_id) from orders o where o.status_id = 12
4* )
SQL> /
Execution Plan
Plan hash value: 3178043291
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 280 | 33 (19)| 00:00:01 |
| 1 | MERGE JOIN SEMI | | 2 | 280 | 33 (19)| 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID| CDW_ORDERS | 3362 | 433K| 21 (0)| 00:00:01 |
| 3 | INDEX FULL SCAN | CDW_ORD_COT_EXT_ID | 3362 | | 2 (0)| 00:00:01 |
|* 4 | SORT UNIQUE | | 15584 | 121K| 11 (46)| 00:00:01 |
|* 5 | VIEW | index$_join$_002 | 15584 | 121K| 9 (34)| 00:00:01 |
|* 6 | HASH JOIN | | | | | |
|* 7 | INDEX RANGE SCAN | ORDERS_STATUS_ID_IDX | 15584 | 121K| 1 (0)| 00:00:01 |
| 8 | INDEX FAST FULL SCAN | PK_ORDERS | 15584 | 121K| 5 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("CDW"."COT_EXTERNAL_ID"=TO_CHAR("ORDER_ID"))
filter("CDW"."COT_EXTERNAL_ID"=TO_CHAR("ORDER_ID"))
5 - filter("O"."STATUS_ID"=12)
6 - access(ROWID=ROWID)
7 - access("O"."STATUS_ID"=12)
For status_id = 12, the cardinality on step 7 for ORDERS_STATUS_ID_IDX is 15584, which is in line with the expectation, i.e. (494/5641)*177951 = 15583.7 ~ 15584.
Now, I continue the same with status_id = 16.
1 select cdw.* from cdw_orders cdw where cdw.cot_external_id in
2 (
3 select to_char(order_id) from orders o where o.status_id = 16
4* )
SQL> /
Execution Plan
Plan hash value: 43581000
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1363 | 186K| 10 (10)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID | CDW_ORDERS | 2 | 264 | 1 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 1363 | 186K| 10 (10)| 00:00:01 |
| 3 | SORT UNIQUE | | 757 | 6056 | 2 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| ORDERS | 757 | 6056 | 2 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | ORDERS_STATUS_ID_IDX | 757 | | 1 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | CDW_ORD_COT_EXT_ID | 2 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
5 - access("O"."STATUS_ID"=16)
6 - access("CDW"."COT_EXTERNAL_ID"=TO_CHAR("ORDER_ID"))
Here also the cardinality on step 5 for ORDERS_STATUS_ID_IDX is as expected, i.e. (24/5641)*177951 = 757.1 ~ 757.
Finally, running the same for status_id = 22 surprises me:
1 select cdw.* from cdw_orders cdw where cdw.cot_external_id in
2 (
3 select to_char(order_id) from orders o where o.status_id = 22
4* )
SQL> /
Execution Plan
Plan hash value: 3496542905
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 280 | 326 (1)| 00:00:04 |
| 1 | NESTED LOOPS SEMI | | 2 | 280 | 326 (1)| 00:00:04 |
| 2 | TABLE ACCESS FULL | CDW_ORDERS | 3362 | 433K| 293 (1)| 00:00:04 |
|* 3 | TABLE ACCESS BY INDEX ROWID| ORDERS | 60 | 480 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | ORDERS_STATUS_ID_IDX | 2 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("CDW"."COT_EXTERNAL_ID"=TO_CHAR("ORDER_ID"))
4 - access("O"."STATUS_ID"=22)
Like the ones for 12 and 16, I would have expected the cardinality on step 4 to be (3064/5641)*177951 = 96657, but I see only 2.
This is where my doubt is. Does this have to do with 22 being a popular value? Can someone explain this behaviour?
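The three expected cardinalities quoted above all come from the same frequency-histogram formula, card = (bucket rows / total histogram sample) * num_rows. A quick sketch of that arithmetic, using only the histogram counts and NUM_ROWS already posted in this thread:

```python
# Frequency-histogram cardinality: card = bucket_rows / sample_size * num_rows
num_rows = 177_951  # NUM_ROWS of ORDERS (from all_tables above)
sample   = 5_641    # max(endpoint_number) of the STATUS_ID histogram

def expected_card(bucket_rows):
    """Rows the optimizer should estimate for a value with this bucket count."""
    return round(bucket_rows / sample * num_rows, 1)

print(expected_card(494))   # status_id = 12 -> 15583.7, matches the plan (15584)
print(expected_card(24))    # status_id = 16 -> 757.1,  matches the plan (757)
print(expected_card(3064))  # status_id = 22 -> ~96657, yet the plan shows only 2
```

So the formula reproduces the estimates for 12 and 16 exactly; the anomaly is confined to 22, which suggests the optimizer is not using the plain frequency-histogram arithmetic for that value in this plan.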
As a solution I am thinking of creating a function-based index on to_char(order_id), hoping that the step 3 predicate CDW.COT_EXTERNAL_ID = TO_CHAR(ORDER_ID) changes to an access predicate instead of a filter. Let me know your thoughts on the index creation as well.
Thanks,
Rgds,
Gokul
Edited by: Gokul Gopal on 24-May-2012 02:40
Hello Jonathan,
Apologies, I was wrong about the optimizer_index_cost_adj value being set to 100. I gather from the DBA that the value is currently set to 1.
I have pasted the 10053 trace file for value 22. I was not able to figure out the "jsel=min(1, 6.1094e-04)" bit.
/dborafiles/COTP/bycota2/udump/bycota2_ora_2147_values_22.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
ORACLE_HOME = /dboracle/orabase/product/10.2.0
System name: Linux
Node name: byl945d002
Release: 2.6.9-55.ELsmp
Version: #1 SMP Fri Apr 20 16:36:54 EDT 2007
Machine: x86_64
Instance name: bycota2
Redo thread mounted by this instance: 2
Oracle process number: 37
Unix process pid: 2147, image: oracle@byl945d002 (TNS V1-V3)
*** 2012-05-28 14:00:59.737
*** ACTION NAME:() 2012-05-28 14:00:59.737
*** MODULE NAME:(SQL*Plus) 2012-05-28 14:00:59.737
*** SERVICE NAME:(SYS$USERS) 2012-05-28 14:00:59.737
*** SESSION ID:(713.51629) 2012-05-28 14:00:59.737
Registered qb: SEL$1 0x973e5458 (PARSER)
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=51893 hint_alias="CDW"@"SEL$1"
Registered qb: SEL$2 0x973e6058 (PARSER)
signature (): qb_name=SEL$2 nbfros=1 flg=0
fro(0): flg=4 objn=51782 hint_alias="O"@"SEL$2"
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
CBQT: Validity checks passed for 5r4bhr2yrt5gz.
apadrv-start: call(in-use=704, alloc=16344), compile(in-use=60840, alloc=63984)
Current SQL statement for this session:
select cdw.* from cdw_orders cdw where cdw.cot_external_id in (select to_char(o.order_id) from orders o where status_id = 22)
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
ST - star transformation
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUCSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
1: no partitioning required
2: value partitioned
4: right is random (round-robin)
512: left is random (round-robin)
8: broadcast right and partition left
16: broadcast left and partition right
32: partition left using partitioning of right
64: partition right using partitioning of left
128: use hash partitioning dimension
256: use range partitioning dimension
2048: use list partitioning dimension
1024: run the join in serial
0: invalid distribution method
sel - selectivity
ptn - partition
Peeked values of the binds in SQL statement
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
sort_area_retained_size = 65535
optimizer_mode = first_rows_100
optimizer_index_cost_adj = 25
optimizer_index_caching = 100
Bug Fix Control Environment
fix 4611850 = enabled
fix 4663804 = enabled
fix 4663698 = enabled
fix 4545833 = enabled
fix 3499674 = disabled
fix 4584065 = enabled
fix 4602374 = enabled
fix 4569940 = enabled
fix 4631959 = enabled
fix 4519340 = enabled
fix 4550003 = enabled
fix 4488689 = enabled
fix 3118776 = enabled
fix 4519016 = enabled
fix 4487253 = enabled
fix 4556762 = 15
fix 4728348 = enabled
fix 4723244 = enabled
fix 4554846 = enabled
fix 4175830 = enabled
fix 4722900 = enabled
fix 5094217 = enabled
fix 4904890 = enabled
fix 4483286 = disabled
fix 4969880 = disabled
fix 4711525 = enabled
fix 4717546 = enabled
fix 4904838 = enabled
fix 5005866 = enabled
fix 4600710 = enabled
fix 5129233 = enabled
fix 5195882 = enabled
fix 5084239 = enabled
fix 4595987 = enabled
fix 4134994 = enabled
fix 5104624 = enabled
fix 4908162 = enabled
fix 5015557 = enabled
PARAMETERS WITH DEFAULT VALUES
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
parallel_execution_enabled = true
parallel_query_forced_dop = 0
parallel_dml_forced_dop = 0
parallel_ddl_forced_degree = 0
parallel_ddl_forced_instances = 0
_query_rewrite_fudge = 90
optimizer_features_enable = 10.2.0.3
_optimizer_search_limit = 5
cpu_count = 8
active_instance_count = 2
parallel_threads_per_cpu = 2
hash_area_size = 131072
bitmap_merge_area_size = 1048576
sort_area_size = 65536
_sort_elimination_cost_ratio = 0
_optimizer_block_size = 8192
_sort_multiblock_read_count = 2
_hash_multiblock_io_count = 0
_db_file_optimizer_read_count = 32
_optimizer_max_permutations = 2000
pga_aggregate_target = 602112 KB
_pga_max_size = 204800 KB
_query_rewrite_maxdisjunct = 257
_smm_auto_min_io_size = 56 KB
_smm_auto_max_io_size = 248 KB
_smm_min_size = 602 KB
_smm_max_size = 102400 KB
_smm_px_max_size = 301056 KB
_cpu_to_io = 0
_optimizer_undo_cost_change = 10.2.0.3
parallel_query_mode = enabled
parallel_dml_mode = disabled
parallel_ddl_mode = enabled
sqlstat_enabled = false
_optimizer_percent_parallel = 101
_always_anti_join = choose
_always_semi_join = choose
_optimizer_mode_force = true
_partition_view_enabled = true
_always_star_transformation = false
_query_rewrite_or_error = false
_hash_join_enabled = true
cursor_sharing = exact
_b_tree_bitmap_plans = true
star_transformation_enabled = false
_optimizer_cost_model = choose
_new_sort_cost_estimate = true
_complex_view_merging = true
_unnest_subquery = true
_eliminate_common_subexpr = true
_pred_move_around = true
_convert_set_to_join = false
_push_join_predicate = true
_push_join_union_view = true
_fast_full_scan_enabled = true
_optim_enhance_nnull_detection = true
_parallel_broadcast_enabled = true
_px_broadcast_fudge_factor = 100
_ordered_nested_loop = true
_no_or_expansion = false
_system_index_caching = 0
_disable_datalayer_sampling = false
query_rewrite_enabled = true
query_rewrite_integrity = enforced
_query_cost_rewrite = true
_query_rewrite_2 = true
_query_rewrite_1 = true
_query_rewrite_expression = true
_query_rewrite_jgmigrate = true
_query_rewrite_fpc = true
_query_rewrite_drj = true
_full_pwise_join_enabled = true
_partial_pwise_join_enabled = true
_left_nested_loops_random = true
_improved_row_length_enabled = true
_index_join_enabled = true
_enable_type_dep_selectivity = true
_improved_outerjoin_card = true
_optimizer_adjust_for_nulls = true
_optimizer_degree = 0
_use_column_stats_for_function = true
_subquery_pruning_enabled = true
_subquery_pruning_mv_enabled = false
_or_expand_nvl_predicate = true
_like_with_bind_as_equality = false
_table_scan_cost_plus_one = true
_cost_equality_semi_join = true
_default_non_equality_sel_check = true
_new_initial_join_orders = true
_oneside_colstat_for_equijoins = true
_optim_peek_user_binds = true
_minimal_stats_aggregation = true
_force_temptables_for_gsets = false
workarea_size_policy = auto
_smm_auto_cost_enabled = true
_gs_anti_semi_join_allowed = true
_optim_new_default_join_sel = true
optimizer_dynamic_sampling = 2
_pre_rewrite_push_pred = true
_optimizer_new_join_card_computation = true
_union_rewrite_for_gs = yes_gset_mvs
_generalized_pruning_enabled = true
_optim_adjust_for_part_skews = true
_force_datefold_trunc = false
statistics_level = typical
_optimizer_system_stats_usage = true
skip_unusable_indexes = true
_remove_aggr_subquery = true
_optimizer_push_down_distinct = 0
_dml_monitoring_enabled = true
_optimizer_undo_changes = false
_predicate_elimination_enabled = true
_nested_loop_fudge = 100
_project_view_columns = true
_local_communication_costing_enabled = true
_local_communication_ratio = 50
_query_rewrite_vop_cleanup = true
_slave_mapping_enabled = true
_optimizer_cost_based_transformation = linear
_optimizer_mjc_enabled = true
_right_outer_hash_enable = true
_spr_push_pred_refspr = true
_optimizer_cache_stats = false
_optimizer_cbqt_factor = 50
_optimizer_squ_bottomup = true
_fic_area_size = 131072
_optimizer_skip_scan_enabled = true
_optimizer_cost_filter_pred = false
_optimizer_sortmerge_join_enabled = true
_optimizer_join_sel_sanity_check = true
_mmv_query_rewrite_enabled = true
_bt_mmv_query_rewrite_enabled = true
_add_stale_mv_to_dependency_list = true
_distinct_view_unnesting = false
_optimizer_dim_subq_join_sel = true
_optimizer_disable_strans_sanity_checks = 0
_optimizer_compute_index_stats = true
_push_join_union_view2 = true
_optimizer_ignore_hints = false
_optimizer_random_plan = 0
_query_rewrite_setopgrw_enable = true
_optimizer_correct_sq_selectivity = true
_disable_function_based_index = false
_optimizer_join_order_control = 3
_optimizer_cartesian_enabled = true
_optimizer_starplan_enabled = true
_extended_pruning_enabled = true
_optimizer_push_pred_cost_based = true
_sql_model_unfold_forloops = run_time
_enable_dml_lock_escalation = false
_bloom_filter_enabled = true
_update_bji_ipdml_enabled = 0
_optimizer_extended_cursor_sharing = udo
_dm_max_shared_pool_pct = 1
_optimizer_cost_hjsmj_multimatch = true
_optimizer_transitivity_retain = true
_px_pwg_enabled = true
optimizer_secure_view_merging = true
_optimizer_join_elimination_enabled = true
flashback_table_rpi = non_fbt
_optimizer_cbqt_no_size_restriction = true
_optimizer_enhanced_filter_push = true
_optimizer_filter_pred_pullup = true
_rowsrc_trace_level = 0
_simple_view_merging = true
_optimizer_rownum_pred_based_fkr = true
_optimizer_better_inlist_costing = all
_optimizer_self_induced_cache_cost = false
_optimizer_min_cache_blocks = 10
_optimizer_or_expansion = depth
_optimizer_order_by_elimination_enabled = true
_optimizer_outer_to_anti_enabled = true
_selfjoin_mv_duplicates = true
_dimension_skip_null = true
_force_rewrite_enable = false
_optimizer_star_tran_in_with_clause = true
_optimizer_complex_pred_selectivity = true
_optimizer_connect_by_cost_based = true
_gby_hash_aggregation_enabled = true
_globalindex_pnum_filter_enabled = true
_fix_control_key = 0
_optimizer_skip_scan_guess = false
_enable_row_shipping = false
_row_shipping_threshold = 80
_row_shipping_explain = false
_optimizer_rownum_bind_default = 10
_first_k_rows_dynamic_proration = true
_optimizer_native_full_outer_join = off
PARAMETERS IN OPT_PARAM HINT
Column Usage Monitoring is ON: tracking level = 1
COST-BASED QUERY TRANSFORMATIONS
FPD: Considering simple filter push (pre rewrite) in SEL$1 (#0)
FPD: Current where clause predicates in SEL$1 (#0) :
"CDW"."COT_EXTERNAL_ID"=ANY (SELECT TO_CHAR("O"."ORDER_ID") FROM "ORDERS" "O")
Registered qb: SEL$1 0x974658b0 (COPY SEL$1)
signature(): NULL
Registered qb: SEL$2 0x9745e408 (COPY SEL$2)
signature(): NULL
Cost-Based Subquery Unnesting
SU: No subqueries to consider in query block SEL$2 (#2).
SU: Considering subquery unnesting in query block SEL$1 (#1)
SU: Performing unnesting that does not require costing.
SU: Considering subquery unnest on SEL$1 (#1).
SU: Checking validity of unnesting subquery SEL$2 (#2)
SU: Passed validity checks.
SU: Transforming ANY subquery to a join.
Registered qb: SEL$5DA710D3 0x974658b0 (SUBQUERY UNNEST SEL$1; SEL$2)
signature (): qb_name=SEL$5DA710D3 nbfros=2 flg=0
fro(0): flg=0 objn=51893 hint_alias="CDW"@"SEL$1"
fro(1): flg=0 objn=51782 hint_alias="O"@"SEL$2"
Cost-Based Complex View Merging
CVM: Finding query blocks in SEL$5DA710D3 (#1) that are valid to merge.
SU: Transforming ANY subquery to a join.
Set-Join Conversion (SJC)
SJC: Considering set-join conversion in SEL$5DA710D3 (#1).
Query block (0x2a973e5458) before join elimination:
SQL:******* UNPARSED QUERY IS *******
SELECT "CDW".* FROM "COT_PLUS"."ORDERS" "O","COT_PLUS"."CDW_ORDERS" "CDW" WHERE "CDW"."COT_EXTERNAL_ID"=TO_CHAR("O"."ORDER_ID") AND "O"."STATUS_ID"=22
Query block (0x2a973e5458) unchanged
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$5DA710D3 (#1).
PM: Checking validity of predicate move-around in SEL$5DA710D3 (#1).
PM: PM bypassed: Outer query contains no views.
JPPD: Applying transformation directives
JPPD: Checking validity of push-down in query block SEL$5DA710D3 (#1)
JPPD: No view found to push predicate into.
FPD: Considering simple filter push in SEL$5DA710D3 (#1)
FPD: Current where clause predicates in SEL$5DA710D3 (#1) :
"CDW"."COT_EXTERNAL_ID"=TO_CHAR("O"."ORDER_ID") AND "O"."STATUS_ID"=22
kkogcp: try to generate transitive predicate from check constraints for SEL$5DA710D3 (#1)
predicates with check contraints: "CDW"."COT_EXTERNAL_ID"=TO_CHAR("O"."ORDER_ID") AND "O"."STATUS_ID"=22
after transitive predicate generation: "CDW"."COT_EXTERNAL_ID"=TO_CHAR("O"."ORDER_ID") AND "O"."STATUS_ID"=22
finally: "CDW"."COT_EXTERNAL_ID"=TO_CHAR("O"."ORDER_ID") AND "O"."STATUS_ID"=22
First K Rows: Setup begin
kkoqbc-start
: call(in-use=1592, alloc=16344), compile(in-use=101000, alloc=134224)
QUERY BLOCK TEXT
select cdw.* from cdw_orders cdw where cdw.cot_external_id in (select to_char(o.order_id) from orders o where status_id = 22)
QUERY BLOCK SIGNATURE
qb name was generated
signature (optimizer): qb_name=SEL$5DA710D3 nbfros=2 flg=0
fro(0): flg=0 objn=51893 hint_alias="CDW"@"SEL$1"
fro(1): flg=0 objn=51782 hint_alias="O"@"SEL$2"
SYSTEM STATISTICS INFORMATION
Using NOWORKLOAD Stats
CPUSPEED: 714 millions instruction/sec
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
BASE STATISTICAL INFORMATION
Table Stats::
Table: CDW_ORDERS Alias: CDW
#Rows: 3375 #Blks: 1504 AvgRowLen: 132.00
Index Stats::
Index: CDW_ORD_COT_EXT_ID Col#: 10
LVLS: 1 #LB: 232 #DK: 1878 LB/K: 1.00 DB/K: 1.00 CLUF: 1899.00
Index: CDW_ORD_REFERENCE_IDX Col#: 13
LVLS: 0 #LB: 0 #DK: 0 LB/K: 0.00 DB/K: 0.00 CLUF: 0.00
Index: COMMITTED_IDX Col#: 12
LVLS: 1 #LB: 171 #DK: 1673 LB/K: 1.00 DB/K: 1.00 CLUF: 1657.00
Index: OBJID_IDX Col#: 16 17
LVLS: 2 #LB: 318 #DK: 3372 LB/K: 1.00 DB/K: 1.00 CLUF: 1901.00
Index: ORDID_IDX Col#: 14
LVLS: 0 #LB: 0 #DK: 0 LB/K: 0.00 DB/K: 0.00 CLUF: 0.00
Table Stats::
Table: ORDERS Alias: O
#Rows: 178253 #Blks: 7300 AvgRowLen: 282.00
Index Stats::
Index: IDX_ORDERS_CONFIG Col#: 80
LVLS: 1 #LB: 215 #DK: 452 LB/K: 1.00 DB/K: 130.00 CLUF: 59161.00
Index: IDX_ORDERS_REFRENCE_NUMBER Col#: 6
LVLS: 1 #LB: 428 #DK: 68698 LB/K: 1.00 DB/K: 1.00 CLUF: 115830.00
Index: ORDERS_BILLING_SI_IDX Col#: 13
LVLS: 1 #LB: 84 #DK: 3049 LB/K: 1.00 DB/K: 8.00 CLUF: 27006.00
Index: ORDERS_LATEST_ORD_IDX Col#: 3
LVLS: 0 #LB: 0 #DK: 0 LB/K: 0.00 DB/K: 0.00 CLUF: 0.00
Index: ORDERS_ORDER_TYPE_IDX Col#: 4
LVLS: 2 #LB: 984 #DK: 64 LB/K: 15.00 DB/K: 932.00 CLUF: 59702.00
Index: ORDERS_ORD_MINOR__IDX Col#: 43 5
LVLS: 2 #LB: 784 #DK: 112 LB/K: 7.00 DB/K: 375.00 CLUF: 42012.00
Index: ORDERS_OWNING_ORG_IDX Col#: 37
LVLS: 0 #LB: 0 #DK: 0 LB/K: 0.00 DB/K: 0.00 CLUF: 0.00
Index: ORDERS_PARENT_ORD_IDX Col#: 2
LVLS: 1 #LB: 206 #DK: 37492 LB/K: 1.00 DB/K: 1.00 CLUF: 58051.00
Index: ORDERS_SD_CONFIG__IDX Col#: 42
LVLS: 2 #LB: 604 #DK: 10 LB/K: 60.00 DB/K: 3638.00 CLUF: 36389.00
Index: ORDERS_SPECIAL_OR_IDX Col#: 36
LVLS: 1 #LB: 63 #DK: 2 LB/K: 31.00 DB/K: 556.00 CLUF: 1113.00
Index: ORDERS_STATUS_ID_IDX Col#: 5
LVLS: 2 #LB: 635 #DK: 25 LB/K: 25.00 DB/K: 1440.00 CLUF: 36015.00
Index: PK_ORDERS Col#: 1
LVLS: 1 #LB: 408 #DK: 178253 LB/K: 1.00 DB/K: 1.00 CLUF: 131025.00
SINGLE TABLE ACCESS PATH
Column (#5): STATUS_ID(NUMBER)
AvgLen: 3.00 NDV: 20 Nulls: 0 Density: 2.7653e-06 Min: 2 Max: 33
Histogram: Freq #Bkts: 20 UncompBkts: 5567 EndPtVals: 20
Table: ORDERS Alias: O
Card: Original: 178253 Rounded: 95450 Computed: 95450.37 Non Adjusted: 95450.37
Access Path: TableScan
Cost: 1419.89 Resp: 1419.89 Degree: 0
Cost_io: 1408.00 Cost_cpu: 101897352
Resp_io: 1408.00 Resp_cpu: 101897352
kkofmx: index filter:"O"."STATUS_ID"=22
Access Path: index (skip-scan)
SS sel: 0.53548 ANDV (#skips): 60
SS io: 419.81 vs. table scan io: 1408.00
Skip Scan chosen
Access Path: index (SkipScan)
Index: ORDERS_ORD_MINOR__IDX
resc_io: 22918.81 resc_cpu: 204258888
ix_sel: 0.53548 ix_sel_with_filters: 0.53548
Cost: 5735.66 Resp: 5735.66 Degree: 1
Access Path: index (AllEqRange)
Index: ORDERS_STATUS_ID_IDX
resc_io: 19629.00 resc_cpu: 180830676
ix_sel: 0.53548 ix_sel_with_filters: 0.53548
Cost: 4912.53 Resp: 4912.53 Degree: 1
****** trying bitmap/domain indexes ******
Best:: AccessPath: TableScan
Cost: 1419.89 Degree: 1 Resp: 1419.89 Card: 95450.37 Bytes: 0
SINGLE TABLE ACCESS PATH
Table: CDW_ORDERS Alias: CDW
Card: Original: 3375 Rounded: 3375 Computed: 3375.00 Non Adjusted: 3375.00
Access Path: TableScan
Cost: 292.51 Resp: 292.51 Degree: 0
Cost_io: 291.00 Cost_cpu: 12971896
Resp_io: 291.00 Resp_cpu: 12971896
Best:: AccessPath: TableScan
Cost: 292.51 Degree: 1 Resp: 292.51 Card: 3375.00 Bytes: 0
OPTIMIZER STATISTICS AND COMPUTATIONS
GENERAL PLANS
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: CDW_ORDERS[CDW]#0 ORDERS[O]#1
Now joining: ORDERS[O]#1
NL Join
Outer table: Card: 3375.00 Cost: 292.51 Resp: 292.51 Degree: 1 Bytes: 132
Inner table: ORDERS Alias: O
Access Path: TableScan
NL Join: Cost: 4788284.86 Resp: 4788284.86 Degree: 0
Cost_io: 4748144.00 Cost_cpu: 343916534896
Resp_io: 4748144.00 Resp_cpu: 343916534896
kkofmx: index filter:"O"."STATUS_ID"=22
OPTIMIZER PERCENT INDEX CACHING = 100
Access Path: index (FullScan)
Index: ORDERS_ORD_MINOR__IDX
resc_io: 22497.00 resc_cpu: 217815366
ix_sel: 1 ix_sel_with_filters: 0.53548
NL Join: Cost: 19004464.41 Resp: 19004464.41 Degree: 1
Cost_io: 18982134.75 Cost_cpu: 191314735126
Resp_io: 18982134.75 Resp_cpu: 191314735126
OPTIMIZER PERCENT INDEX CACHING = 100
Access Path: index (AllEqJoin)
Index: ORDERS_STATUS_ID_IDX
resc_io: 1.00 resc_cpu: 7981
ix_sel: 1.0477e-05 ix_sel_with_filters: 1.0477e-05
NL Join: Cost: 1137.05 Resp: 1137.05 Degree: 1
Cost_io: 1134.75 Cost_cpu: 19706236
Resp_io: 1134.75 Resp_cpu: 19706236
****** trying bitmap/domain indexes ******
Best NL cost: 1137.05
resc: 1137.05 resc_io: 1134.75 resc_cpu: 19706236
resp: 1137.05 resp_io: 1134.75 resp_cpu: 19706236
adjusting AJ/SJ sel based on min/max ranges: jsel=min(1, 6.1094e-04)
Semi Join Card: 2.06 = outer (3375.00) * sel (6.1094e-04)
Join Card - Rounded: 2 Computed: 2.06
SM Join
Outer table:
resc: 292.51 card 3375.00 bytes: 132 deg: 1 resp: 292.51
Inner table: ORDERS Alias: O
resc: 1419.89 card: 95450.37 bytes: 8 deg: 1 resp: 1419.89
using dmeth: 2 #groups: 1
SORT resource Sort statistics
Sort width: 598 Area size: 616448 Max Area size: 104857600
Degree: 1
Blocks to Sort: 65 Row size: 156 Total Rows: 3375
Initial runs: 1 Merge passes: 0 IO Cost / pass: 0
Total IO sort cost: 0 Total CPU sort cost: 10349977
Total Temp space used: 0
SORT resource Sort statistics
Sort width: 598 Area size: 616448 Max Area size: 104857600
Degree: 1
Blocks to Sort: 223 Row size: 19 Total Rows: 95450
Initial runs: 2 Merge passes: 1 IO Cost / pass: 122
Total IO sort cost: 345 Total CPU sort cost: 85199490
Total Temp space used: 3089000
SM join: Resc: 2068.56 Resp: 2068.56 [multiMatchCost=0.00]
SM cost: 2068.56
resc: 2068.56 resc_io: 2044.00 resc_cpu: 210418716
resp: 2068.56 resp_io: 2044.00 resp_cpu: 210418716
SM Join (with index on outer)
Access Path: index (FullScan)
Index: CDW_ORD_COT_EXT_ID
resc_io: 2132.00 resc_cpu: 18119160
ix_sel: 1 ix_sel_with_filters: 1
Cost: 533.53 Resp: 533.53 Degree: 1
Outer table:
resc: 533.53 card 3375.00 bytes: 132 deg: 1 resp: 533.53
Inner table: ORDERS Alias: O
resc: 1419.89 card: 95450.37 bytes: 8 deg: 1 resp: 1419.89
using dmeth: 2 #groups: 1
SORT resource Sort statistics
Sort width: 598 Area size: 616448 Max Area size: 104857600
Degree: 1
Blocks to Sort: 223 Row size: 19 Total Rows: 95450
Initial runs: 2 Merge passes: 1 IO Cost / pass: 122
Total IO sort cost: 345 Total CPU sort cost: 85199490
Total Temp space used: 3089000
SM join: Resc: 2308.37 Resp: 2308.37 [multiMatchCost=0.00]
HA Join
Outer table:
resc: 292.51 card 3375.00 bytes: 132 deg: 1 resp: 292.51
Inner table: ORDERS Alias: O
resc: 1419.89 card: 95450.37 bytes: 8 deg: 1 resp: 1419.89
using dmeth: 2 #groups: 1
Cost per ptn: 1.67 #ptns: 1
hash_area: 151 (max=25600) Hash join: Resc: 1714.08 Resp: 1714.08 [multiMatchCost=0.00]
HA cost: 1714.08
resc: 1714.08 resc_io: 1699.00 resc_cpu: 129204369
resp: 1714.08 resp_io: 1699.00 resp_cpu: 129204369
Best:: JoinMethod: NestedLoopSemi
Cost: 1137.05 Degree: 1 Resp: 1137.05 Card: 2.06 Bytes: 140
Best so far: Table#: 0 cost: 292.5140 card: 3375.0000 bytes: 445500
Table#: 1 cost: 1137.0501 card: 2.0619 bytes: 280
Number of join permutations tried: 1
(newjo-save) [0 1 ]
Final - All Rows Plan: Best join order: 1
Cost: 1137.0501 Degree: 1 Card: 2.0000 Bytes: 280
Resc: 1137.0501 Resc_io: 1134.7500 Resc_cpu: 19706236
Resp: 1137.0501 Resp_io: 1134.7500 Resp_cpu: 19706236
kkoipt: Query block SEL$5DA710D3 (#1)
kkoqbc-end
: call(in-use=156048, alloc=164408), compile(in-use=103696, alloc=134224)
First K Rows: Setup end
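The headline cardinality in the trace above reduces to simple arithmetic: the semi-join card is the outer cardinality times the adjusted join selectivity. A minimal sketch of that computation, using only values copied from the trace:

```python
# Values copied from the 10053 trace above.
outer_card = 3375.00      # CDW_ORDERS single-table cardinality
join_sel   = 6.1094e-04   # adjusted AJ/SJ semi-join selectivity

semi_join_card = outer_card * join_sel
print(round(semi_join_card, 2))  # 2.06, rounded to 2 in the final plan
```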
*********************** -
Wrong cost estimate due to info record SA reference field.
Hi
I have an issue during cost estimate.
When we create a scheduling agreement for a material/vendor/plant combination, the reference field in the purchasing organisation view of the info record is updated with the latest SA number.
Here I created a second scheduling agreement and later decided to delete that latest SA and use the previous one, but the info record was not updated with the previous SA number.
As configured, the reference field is used for the cost estimate, so it picks up the latest (deleted) SA number, which is wrong.
Can anyone guide me how to get the live SA number in the reference field in Info record?
Thanks in advance,
Sasi
Hi,
SAP's idea when retrieving information for a PO/SA is to use the last document number stored in the info record. Since you have just created the SA, whose information is not yet stored in the info record, you can simply update the info record (select the info update indicator), which is available somewhere in the ME31L menu bar options (sorry, I don't know precisely where). I hope this solves your problem.
Regards
Sidd -
Hi,
While running a cost estimate through transaction CK11N, the amount seems to be incorrect. The moving price in the material master differs from what the cost estimate shows.
If the material master shows a moving price of 100.00, why does the cost run show 150.00?
Please suggest. Please see the screenshot below.
Actual Material price is 376.08 and in Cost Estimate it is showing 260.06.
Costing View....
Cost estimate
Please guide... -
Wrong size estimate in DBSIZE.XML
Hi,
We are doing unicode export of 300gb database.
After the export, the dbsize.xml file shows most of the data in sapdata2.
For that reason, a space problem arises when we plan the import.
Below are the mount points; the export and import will happen on the same machine.
/dev/md/dsk/d116 39G 37G 1.8G 96% /oracle/PRD/sapdata7
/dev/md/dsk/d118 39G 37G 1.8G 96% /oracle/PRD/sapdata9
/dev/md/dsk/d110 39G 37G 1.6G 96% /oracle/PRD/sapdata1
/dev/md/dsk/d114 39G 37G 1.8G 96% /oracle/PRD/sapdata5
/dev/md/dsk/d117 39G 37G 2.1G 95% /oracle/PRD/sapdata8
/dev/md/dsk/d119 39G 36G 2.8G 93% /oracle/PRD/sapdata10
/dev/md/dsk/d111 39G 37G 1.8G 96% /oracle/PRD/sapdata2
/dev/md/dsk/d113 39G 37G 1.8G 96% /oracle/PRD/sapdata4
/dev/md/dsk/d125 98G 78G 20G 80% /oracle/PRD/sapdata11
/dev/md/dsk/d115 39G 37G 1.8G 96% /oracle/PRD/sapdata6
/dev/md/dsk/d112 39G 37G 1.8G 96% /oracle/PRD/sapdata3
New kernel and R3 tools are used.
Oracle database and Solaris OS.
SAP ECC 6 EHP4.
Regards
Ashok Dalai
Hi Nils,
Thank you very much for your response.
As you can see from the mount points above, summing all 11 mount points comes to about 500 GB.
I exported the 300 GB database, and after import the database size is 370 GB. We edited the dbsize.xml file, added extra space, and completed the import in our development system.
Now we are planning it for our production box.
My question is: when we do the import, it asks for 220 GB of free space in sapdata2 even though other mount points have free space. Is there anything we can do to skip this manual work?
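The 500 GB figure can be sanity-checked by summing the file-system sizes from the df listing above; a quick sketch (sizes read off that listing):

```python
# File-system sizes from the df listing above: ten 39 GB sapdata
# mount points plus the 98 GB sapdata11.
sizes_gb = [39] * 10 + [98]
print(sum(sizes_gb))  # 488, i.e. roughly the 500 GB quoted above
```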
Regards
Ashok Dalai -
I'm having a couple of issues with a query, and I can't figure out the best way to reach a solution.
Platform Information
Windows Server 2003 R2
Oracle 10.2.0.4
Optimizer Settings
SQL > show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 90
optimizer_index_cost_adj integer 30
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
The query below is a simple "Top N" query, where only the top result is returned. Here it is, with bind variables in the same locations as in the application code:
SELECT PRODUCT_DESC
FROM
(
  SELECT PRODUCT_DESC
       , COUNT(*) AS CNT
    FROM USER_VISITS
    JOIN PRODUCT ON PRODUCT.PRODUCT_OID = USER_VISITS.PRODUCT_OID
   WHERE PRODUCT.PRODUCT_DESC != 'Home'
     AND VISIT_DATE
         BETWEEN ADD_MONTHS(TRUNC(TO_DATE(:vCurrentYear, 'YYYY'), 'YEAR'), 3*(:vCurrentQuarter-1))
             AND ADD_MONTHS(TRUNC(TO_DATE(:vCurrentYear, 'YYYY'), 'YEAR'), 3*:vCurrentQuarter) - INTERVAL '1' DAY
   GROUP BY PRODUCT_DESC
   ORDER BY CNT DESC
)
WHERE ROWNUM <= 1;
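The BETWEEN bounds simply compute the first and last day of the requested quarter. A rough Python analogue of the ADD_MONTHS/TRUNC arithmetic (the function name is mine, not from the original query):

```python
import datetime

def quarter_range(year: int, quarter: int):
    """First and last day of a calendar quarter, mirroring
    ADD_MONTHS(TRUNC(d, 'YEAR'), 3*(q-1)) ... minus one day."""
    # Start: Jan 1 of the year advanced by 3*(quarter-1) months.
    start = datetime.date(year, 3 * (quarter - 1) + 1, 1)
    # End: first day of the next quarter, minus one day.
    if quarter == 4:
        end = datetime.date(year + 1, 1, 1) - datetime.timedelta(days=1)
    else:
        end = datetime.date(year, 3 * quarter + 1, 1) - datetime.timedelta(days=1)
    return start, end

print(quarter_range(2009, 3))  # (datetime.date(2009, 7, 1), datetime.date(2009, 9, 30))
```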
Explain Plan
The explain plan I receive when running the query above.
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
|* 1 | COUNT STOPKEY | | 1 | | 1 |00:00:34.92 | 66343 | | | |
| 2 | VIEW | | 1 | 1 | 1 |00:00:34.92 | 66343 | | | |
|* 3 | FILTER | | 1 | | 1 |00:00:34.92 | 66343 | | | |
| 4 | SORT ORDER BY | | 1 | 1 | 1 |00:00:34.92 | 66343 | 2048 | 2048 | 2048 (0)|
| 5 | SORT GROUP BY NOSORT | | 1 | 1 | 27 |00:00:34.92 | 66343 | | | |
| 6 | NESTED LOOPS | | 1 | 2 | 12711 |00:00:34.90 | 66343 | | | |
| 7 | TABLE ACCESS BY INDEX ROWID| PRODUCT | 1 | 74 | 77 |00:00:00.01 | 44 | | | |
|* 8 | INDEX FULL SCAN | PRODUCT_PRODDESCHAND_UNQ | 1 | 1 | 77 |00:00:00.01 | 1 | | | |
|* 9 | INDEX FULL SCAN | USER_VISITS#PK | 77 | 2 | 12711 |00:00:34.88 | 66299 | | | |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=1)
3 - filter(ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1))<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURR
ENTYEAR),'YYYY'),'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
8 - filter("PRODUCT"."PRODUCT_DESC"<>'Home')
9 - access("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1)) AND
"USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID" AND "USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY')
,'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
filter(("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1)) AND
"USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2)
TO SECOND(0) AND "USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID"))
Row Source Generation
TKPROF Row Source Generation
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 0 0 0
Fetch 2 35.10 35.13 0 66343 0 1
total 4 35.10 35.14 0 66343 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 62
Rows Row Source Operation
1 COUNT STOPKEY (cr=66343 pr=0 pw=0 time=35132008 us)
1 VIEW (cr=66343 pr=0 pw=0 time=35131996 us)
1 FILTER (cr=66343 pr=0 pw=0 time=35131991 us)
1 SORT ORDER BY (cr=66343 pr=0 pw=0 time=35131936 us)
27 SORT GROUP BY NOSORT (cr=66343 pr=0 pw=0 time=14476309 us)
12711 NESTED LOOPS (cr=66343 pr=0 pw=0 time=22921810 us)
77 TABLE ACCESS BY INDEX ROWID PRODUCT (cr=44 pr=0 pw=0 time=3674 us)
77 INDEX FULL SCAN PRODUCT_PRODDESCHAND_UNQ (cr=1 pr=0 pw=0 time=827 us)(object id 52355)
12711 INDEX FULL SCAN USER_VISITS#PK (cr=66299 pr=0 pw=0 time=44083746 us)(object id 52949)
However, when I run the query with an ALL_ROWS hint I receive this explain plan (the reasoning for this can be found in Jonathan Lewis' response: http://www.freelists.org/post/oracle-l/ORDER-BY-and-first-rows-10-madness,4):
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 39 | 223 (25)| 00:00:03 |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | VIEW | | 1 | 39 | 223 (25)| 00:00:03 |
|* 3 | FILTER | | | | | |
| 4 | SORT ORDER BY | | 1 | 49 | 223 (25)| 00:00:03 |
| 5 | HASH GROUP BY | | 1 | 49 | 223 (25)| 00:00:03 |
|* 6 | HASH JOIN | | 490 | 24010 | 222 (24)| 00:00:03 |
|* 7 | TABLE ACCESS FULL | PRODUCT | 77 | 2849 | 2 (0)| 00:00:01 |
|* 8 | INDEX FAST FULL SCAN| USER_VISITS#PK | 490 | 5880 | 219 (24)| 00:00:03 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=1)
3 - filter(ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),3*(TO_NUMBER(:
VCURRENTQUARTER)-1))<=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),3*TO_N
UMBER(:VCURRENTQUARTER))-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
6 - access("USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID")
7 - filter("PRODUCT"."PRODUCT_DESC"<>'Home')
8 - filter("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYY
Y'),'fmyear'),3*(TO_NUMBER(:VCURRENTQUARTER)-1)) AND
"USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),
3*TO_NUMBER(:VCURRENTQUARTER))-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
And the TKPROF Row Source Generation:
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.51 0.51 0 907 0 27
total 5 0.51 0.51 0 907 0 27
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 62
Rows Row Source Operation
27 FILTER (cr=907 pr=0 pw=0 time=513472 us)
27 SORT ORDER BY (cr=907 pr=0 pw=0 time=513414 us)
27 HASH GROUP BY (cr=907 pr=0 pw=0 time=512919 us)
12711 HASH JOIN (cr=907 pr=0 pw=0 time=641130 us)
77 TABLE ACCESS FULL PRODUCT (cr=5 pr=0 pw=0 time=249 us)
22844 INDEX FAST FULL SCAN USER_VISITS#PK (cr=902 pr=0 pw=0 time=300356 us)(object id 52949)
The query with the ALL_ROWS hint returns data instantly, while the other one takes about 70 times as long.
Interestingly enough BOTH queries generate plans with estimates that are WAY off. The first plan is estimating 2 rows, while the second plan is estimating 490 rows. However the real number of rows is correctly reported in the Row Source Generation as 12711 (after the join operation).
TABLE_NAME NUM_ROWS BLOCKS
USER_VISITS 196044 1049
INDEX_NAME BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR LAST_ANALYZED
USER_VISITS#PK 2 860 196002 57761 07/24/2009 13:17:59
COLUMN_NAME NUM_DISTINCT LOW_VALUE HIGH_VALUE DENSITY NUM_NULLS HISTOGRAM
VISIT_DATE 195900 786809010E0910 786D0609111328 .0000051046452272 0 NONE
I don't know how the first one is estimating 2 rows, but I can compute the second's cardinality estimates by assuming a 5% selectivity for the TO_DATE() functions:
SQL > SELECT ROUND(0.05*0.05*196044) FROM DUAL;
ROUND(0.05*0.05*196044)
490
However, removing the bind variables (and clearing the shared pool) does not change the cardinality estimates at all.
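That 490 follows from the optimizer's default 5% selectivity for each range predicate whose bounds it cannot evaluate; the BETWEEN contributes two such predicates, hence 5% x 5%. The arithmetic:

```python
num_rows = 196044          # NUM_ROWS for USER_VISITS
default_range_sel = 0.05   # default selectivity for an unknown range predicate

estimate = default_range_sel * default_range_sel * num_rows
print(round(estimate))     # 490, matching the E-Rows in the ALL_ROWS plan
```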
I would like to avoid hinting this plan if possible and that is why I'm looking for advice. I also have a followup question.
Edited by: Centinul on Sep 20, 2009 4:10 PM
See my last post for the 11.2.0.1 update.
Centinul wrote:
You could potentially perform testing with either a CARDINALITY or OPT_ESTIMATE hint to see if the execution plan changes dramatically to improve performance. The question then becomes whether this would be sufficient to over-rule the first-rows optimizer so that it does not use an index access to avoid a sort.
I tried doing that this morning by increasing the cardinality from the USER_VISITS table to a value such that the estimate was about that of the real amount of data. However, the plan did not change.
Could you use the ROW_NUMBER analytic function instead of ROWNUM?
Interestingly enough, when I tried this it generated the same plan as was used with the ALL_ROWS hint, so I may implement this query for now.
I do have two more followup questions:
1. Even though a better plan is picked, the optimizer estimates are still off by a large margin because of the bind variables and 5% * 5% * NUM_ROWS. How do I get the estimates in line with the actual values? Should I really fudge statistics?
2. Should I raise a bug report with Oracle over the behavior of the original query?
It is great that the ROW_NUMBER analytic function worked. You may want to perform some testing with this before implementing it in production to see whether Oracle performs significantly more logical or physical I/Os with the ROW_NUMBER analytic function compared to the ROWNUM solution with the ALL_ROWS hint.
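The ROW_NUMBER rewrite discussed above is engine-agnostic; as a hedged illustration (table and data invented, SQLite used here merely because it is handy, not the original Oracle schema), the Top-1 shape looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE visits (product_desc TEXT);
    INSERT INTO visits VALUES ('A'),('A'),('A'),('B'),('B'),('C');
""")

# Top-1 product by visit count, using ROW_NUMBER instead of ROWNUM.
row = conn.execute("""
    SELECT product_desc
      FROM (SELECT product_desc,
                   ROW_NUMBER() OVER (ORDER BY COUNT(*) DESC) AS rn
              FROM visits
             GROUP BY product_desc)
     WHERE rn <= 1
""").fetchone()
print(row[0])  # 'A' -- the most visited product
```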
As Timur suggests, seeing a 10053 trace during a hard parse of both queries (with and without the ALL_ROWS hint) would help determine what is happening. It could be that a histogram exists which is feeding bad information to the optimizer, causing distorted cardinality in the plan. If bind peeking is used, the 5% * 5% rule might not apply, especially if a histogram is involved. Also, the WHERE clause includes "PRODUCT.PRODUCT_DESC != 'Home'" which might affect the cardinality in the plan.
Your question may have prompted the starting of a thread in the SQL forum yesterday on the topic of ROWNUM, but it appears that thread was removed from the forum within the last couple hours.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Cost Estimate on BOM set up with discontinuation group/Follow up group
Hi FI Experts,
Is there anyone there who has worked on BOM that consists of a material that has a followup material? Ex. BOX1 is to be replaced by BOX2 when BOX1 has no stocks.
It is giving us wrong cost estimates because it still reads BOX1 instead of BOX2. Please help, I'm struggling with this.
Thanks,
Keb
Hi friend,
Check the production version in the material master and click on the CHECK button.
Also check whether you have assigned that BOM to the production version.
I hope this solves your problem.
ASSIGN POINTS IF USEFUL
Regards,
Jigar -
8.1.7.4 Performance Issue
Hi, I'm not sure I'm posting this question in the right place, but here is better than nowhere. We recently upgraded all our DB environments (in the last month) from 8.0.6 to 8.1.7.4, and we took a performance hit. SQL that used to do rather well is doing very poorly. We're doing full table scans instead of using indexes. Here's one example of a SQL statement that is doing badly:
SELECT ALL COMPANY.ED_NO, COMPANY.CURR_IND, COMPANY.NAME, COMPANY.CITY,
COMPANY.STATE, COMPANY.ZIP, COMPANY.ACCT_TYPE, COMPANY.VERSION_NO,
COMPANY.AUDIT_DATE, COMPANY.AUDIT_USER, CONTRACT.CONTRACT_NO,
EDITORNO.SOURCE, EDITORNO.ACCT_AUDIT_DATE, EDITORNO.ACCT_AUDIT_USER,
COMPANY.SEARCH_KEY, EDITORNO.DONT_PUB_IND
FROM COMPANY, CONTRACT, EDITORNO, TACS_CONTRACT
WHERE (COMPANY.SEARCH_KEY LIKE 'DAWN%' OR COMPANY.SEARCH_KEY LIKE 'DAWN%')
AND COMPANY.ED_NO = CONTRACT.ED_NO(+) AND COMPANY.ED_NO = EDITORNO.ED_NO(+)
AND COMPANY.ED_NO = TACS_CONTRACT.ED_NO(+) AND TACS_CONTRACT.PUB_CODE(+) = '01'
AND CONTRACT.PUB_CODE(+) = '01' AND COMPANY.CURR_IND = '1'
ORDER BY COMPANY.NAME
The explain on this is:
SELECT STATEMENT Cost = 1380
2.1 SORT ORDER BY
3.1 HASH JOIN OUTER
4.1 HASH JOIN OUTER
5.1 HASH JOIN OUTER
6.1 TABLE ACCESS FULL COMPANY
6.2 TABLE ACCESS FULL EDITORNO
5.2 TABLE ACCESS FULL TACS_CONTRACT
4.2 TABLE ACCESS FULL CONTRACT
Low cost, but our database has never done hash joins very well. This can take up to 3 minutes to return a result.
If we disable the hash joins then we get:
SELECT STATEMENT Cost = 2546
2.1 SORT ORDER BY
3.1 MERGE JOIN OUTER
4.1 MERGE JOIN OUTER
5.1 MERGE JOIN OUTER
6.1 SORT JOIN
7.1 TABLE ACCESS FULL COMPANY
6.2 SORT JOIN
7.1 TABLE ACCESS FULL TACS_CONTRACT
5.2 SORT JOIN
6.1 TABLE ACCESS FULL CONTRACT
4.2 SORT JOIN
5.1 TABLE ACCESS FULL EDITORNO
This query runs in about the same amount of time as the one above (3 mins).
So we go the hint route and add a hint:
SELECT /*+ INDEX (company company_ie6) USE_NL(contract) USE_NL(editorno) USE_NL(tacs_contract)*/
ALL COMPANY.ED_NO, COMPANY.CURR_IND, COMPANY.NAME, COMPANY.CITY,
COMPANY.STATE, COMPANY.ZIP, COMPANY.ACCT_TYPE, COMPANY.VERSION_NO,
COMPANY.AUDIT_DATE, COMPANY.AUDIT_USER, CONTRACT.CONTRACT_NO,
EDITORNO.SOURCE, EDITORNO.ACCT_AUDIT_DATE, EDITORNO.ACCT_AUDIT_USER,
COMPANY.SEARCH_KEY, EDITORNO.DONT_PUB_IND
FROM COMPANY, CONTRACT, EDITORNO, TACS_CONTRACT
WHERE (COMPANY.SEARCH_KEY LIKE 'DAWN%' OR COMPANY.SEARCH_KEY LIKE 'DAWN%')
AND COMPANY.ED_NO = CONTRACT.ED_NO(+) AND COMPANY.ED_NO = EDITORNO.ED_NO(+)
AND COMPANY.ED_NO = TACS_CONTRACT.ED_NO(+) AND TACS_CONTRACT.PUB_CODE(+) = '01'
AND CONTRACT.PUB_CODE(+) = '01' AND COMPANY.CURR_IND = '1'
ORDER BY COMPANY.NAME;
Here is the explain on this:
SELECT STATEMENT Cost = 50743
2.1 SORT ORDER BY
3.1 CONCATENATION
4.1 NESTED LOOPS OUTER
5.1 NESTED LOOPS OUTER
6.1 NESTED LOOPS OUTER
7.1 TABLE ACCESS BY INDEX ROWID COMPANY
8.1 INDEX RANGE SCAN COMPANY_IE6 NON-UNIQUE
7.2 TABLE ACCESS BY INDEX ROWID TACS_CONTRACT
8.1 INDEX RANGE SCAN TACS_CONTRACT_IE1 NON-UNIQUE
6.2 TABLE ACCESS BY INDEX ROWID CONTRACT
7.1 INDEX UNIQUE SCAN CONTRACT_PK UNIQUE
5.2 TABLE ACCESS BY INDEX ROWID EDITORNO
6.1 INDEX UNIQUE SCAN EDITORNO_PK UNIQUE
4.2 NESTED LOOPS OUTER
5.1 NESTED LOOPS OUTER
6.1 NESTED LOOPS OUTER
7.1 TABLE ACCESS BY INDEX ROWID COMPANY
8.1 INDEX RANGE SCAN COMPANY_IE6 NON-UNIQUE
7.2 TABLE ACCESS BY INDEX ROWID EDITORNO
8.1 INDEX UNIQUE SCAN EDITORNO_PK UNIQUE
6.2 TABLE ACCESS BY INDEX ROWID CONTRACT
7.1 INDEX UNIQUE SCAN CONTRACT_PK UNIQUE
5.2 TABLE ACCESS BY INDEX ROWID TACS_CONTRACT
6.1 INDEX RANGE SCAN TACS_CONTRACT_IE1 NON-UNIQUE
This query runs in a few seconds. So why does the query with the worst cost run the best? I'm concerned that we are going to alter our production application to add hints and I'm not even sure how to evaluate those hints because "Cost" no longer seems as reliable as before. Is anyone else experiencing this?
Thank you for any help you can provide.
Dawn
[email protected]
You can ignore the cost= part of an explain statement. This is something used internally by Oracle when calculating explain plans and doesn't indicate which plan is better. I don't know why it's included in the output except to confuse people.
Really? This indicator (while not perfect) has always worked pretty well for me in the past.
I think I may have been wrong about this after reading the 8.1.7 documentation. I'd seen other messages saying to ignore the cost of explain plans before, and I took those posts as being right.
Anyway, here's what the 8.1.7 documentation says about analyzing tables. Maybe you should try analyzing your tables using the DBMS_STATS package mentioned below. There's also DBMS_UTILITY.ANALYZE_SCHEMA, which we use and haven't had any problems with.
From Oracle 8i Designing and Tuning for Performance Ch. 4:
The CBO consists of the following steps:
1. The optimizer generates a set of potential plans for the SQL statement based on its available access paths and hints.
2. The optimizer estimates the cost of each plan based on statistics in the data dictionary for the data distribution and storage characteristics of the tables, indexes, and partitions accessed by the statement.
The cost is an estimated value proportional to the expected resource use needed to execute the statement with a particular plan. The optimizer calculates the cost of each possible access method and join order based on the estimated computer resources, including (but not limited to) I/O and memory, that are required to execute the statement using the plan.
Serial plans with greater costs take more time to execute than those with smaller costs. When using a parallel plan, however, resource use is not directly related to elapsed time.
3. The optimizer compares the costs of the plans and chooses the one with the smallest cost.
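Step 3 is just a minimum over the estimated costs. A toy sketch using the costs reported earlier in this thread shows why the hash-join plan was chosen even though it ran slowest here: the model minimizes its own estimates, not actual run time (the plan labels are descriptive only):

```python
# EXPLAIN costs reported earlier in the thread.
candidate_costs = {
    "hash join plan": 1380,
    "merge join plan": 2546,
    "hinted nested loops plan": 50743,
}

best_plan = min(candidate_costs, key=candidate_costs.get)
print(best_plan)  # 'hash join plan' -- cheapest by estimate, slowest in practice
```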
To maintain the effectiveness of the CBO, you must gather statistics and keep them current. Gather statistics on your objects using either of the following:
For releases prior to Oracle8i, use the ANALYZE statement.
For Oracle8i releases, use the DBMS_STATS package.
For table columns which contain skewed data (i.e., values with large variations in the number of duplicates), you must collect histograms.
The resulting statistics provide the CBO with information about data uniqueness and distribution. Using this information, the CBO is able to compute plan costs with a high degree of accuracy. This enables the CBO to choose the best execution plan based on the least cost.
See Also:
For detailed information on gathering statistics, see Chapter 8, "Gathering Statistics". -
Hiya, all
A short while ago this thread {thread:id=2441850} was posted, which triggered some thoughts...
I have never really used LNNVL much, as I generally avoid function calls on a column being filtered in the where clause (for example, never use trunc(date_column) = date '2012-09-21'), because that precludes the use of normal, non-function-based indexes.
But then LNNVL is kind of different as it is not a function on a column, but a function on a boolean expression. Maybe it was not so much a function evaluation as a shortcut method of writing something that the optimizer would understand and rewrite?
So I got curious whether the LNNVL would be evaluated like any other function call or if there was a special rewrite going on by the optimizer. I tried to do a short test:
SQL> /* create table - col1 with not null, col2 allows null */
SQL>
SQL> create table test1 (
2 col1 integer not null
3 , col2 integer
4 , col3 varchar2(30)
5 )
6 /
Table created.
SQL>
SQL> /* insert 10000 rows with a couple nulls in col2 */
SQL>
SQL> insert into test1
2 select rownum col1
3 , nullif(mod(rownum,5000),0) col2
4 , object_name col3
5 from all_objects
6 where rownum <= 10000
7 /
10000 rows created.
SQL>
SQL> /* index that includes col2 null values because col1 is not null */
SQL>
SQL> create index test1_col1_col2 on test1 (
2 col2, col1
3 )
4 /
Index created.
SQL>
SQL> /* gather stats */
SQL>
SQL> begin
2 dbms_stats.gather_table_stats(USER, 'TEST1');
3 end;
4 /
PL/SQL procedure successfully completed.
SQL>
SQL> set autotrace on
SQL>
SQL> /* using lnnvl to select rows with col2 values 4999 and null */
SQL>
SQL> select *
2 from test1
3 where lnnvl(col2 <= 4998)
4 order by col1
5 /
COL1 COL2 COL3
4999 4999 DBA_LOGMNR_SESSION
5000 DBA_LOGMNR_SESSION
9999 4999 /69609d2d_OracleTypeHierarchy
10000 /1cef5dbd_Oracle8TypePropertie
Execution Plan
Plan hash value: 4009883541
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 27 | 17 (12)| 00:00:01 |
| 1 | SORT ORDER BY | | 1 | 27 | 17 (12)| 00:00:01 |
|* 2 | TABLE ACCESS FULL| TEST1 | 1 | 27 | 16 (7)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter(LNNVL("COL2"<=4998))
Statistics
1 recursive calls
0 db block gets
53 consistent gets
0 physical reads
0 redo size
413 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
4 rows processed
SQL>
SQL> /* using or statement to select the same rows */
SQL>
SQL> select *
2 from test1
3 where not col2 <= 4998 or col2 is null
4 order by col1
5 /
COL1 COL2 COL3
4999 4999 DBA_LOGMNR_SESSION
5000 DBA_LOGMNR_SESSION
9999 4999 /69609d2d_OracleTypeHierarchy
10000 /1cef5dbd_Oracle8TypePropertie
Execution Plan
Plan hash value: 2198096298
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 4 | 108 | 10 (10)| 00:00:01 |
| 1 | SORT ORDER BY | | 4 | 108 | 10 (10)| 00:00:01 |
| 2 | CONCATENATION | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| TEST1 | 2 | 54 | 5 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | TEST1_COL1_COL2 | 2 | | 2 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| TEST1 | 2 | 54 | 4 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | TEST1_COL1_COL2 | 2 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("COL2">4998 AND "COL2" IS NOT NULL)
6 - access("COL2" IS NULL)
filter(LNNVL("COL2">4998))
Statistics
1 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
413 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
4 rows processed
As far as I understand LNNVL, this:
where lnnvl(col2 <= 4998)
should be the logical equivalent of this:
where not col2 <= 4998 or col2 is null
They do return the same rows, but using LNNVL forces evaluation of the function for every row (full table scan), while the longer OR expression allows the optimizer to rewrite the predicate and use the index twice (because the index includes a NOT NULL column). LNNVL also gives a wrong cardinality estimate, while the OR version's estimate is spot on.
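The equivalence above only holds because 4998 is a literal that can never be NULL. If the right-hand side could itself be NULL (a bind, or another column), three-valued logic makes the two predicates diverge. A quick sketch against the same test1 table (the NULL literal stands in for a nullable bind):

```sql
-- LNNVL(expr) returns TRUE when expr is FALSE or UNKNOWN.
-- With a NULL comparison value, "col2 <= null" is UNKNOWN for every row:

-- LNNVL turns UNKNOWN into TRUE, so every row qualifies:
select count(*) from test1 where lnnvl(col2 <= null);
-- counts all 10000 rows

-- The OR rewrite keeps only rows where col2 itself is null:
select count(*) from test1 where not col2 <= null or col2 is null;
-- counts only the 2 rows with col2 null
```

This is why a full logical equivalent needs the extra "or 4998 is null" branch when the comparison value is not a known-not-null literal.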
<provocative_mode>
So did I misunderstand something, or is LNNVL merely a way to allow a lazy developer to save a few keystrokes at the expense of possibly bad execution plans? ;-)
</provocative_mode>
Kim Berg Hansen wrote:
As far as I understand LNNVL, this:
where lnnvl(col2 <= 4998)
should be the logical equivalent of this:
where not col2 <= 4998 or col2 is null
I think it is
where not col2 <= 4998 or col2 is null or 4998 is null
They do return the same rows, but using LNNVL forces evaluation of the function for every row (full table scan), while the longer OR expression allows the optimizer to rewrite the predicate and use the index twice (because the index includes a NOT NULL column). LNNVL also gives a wrong cardinality estimate, while the OR version's estimate is spot on.
<provocative_mode>
So did I misunderstand something, or is LNNVL merely a way to allow a lazy developer to save a few keystrokes at the expense of possibly bad execution plans? ;-)
</provocative_mode>
You can also say that it is a way to confuse 95% of all PL/SQL developers. -
Tuning needed for SQL: EXPLAIN PLAN attached
DB Version:10gR2
The below sql was running slow, so i took an explain plan
SQL> explain plan for
2 SELECT COUNT(1) FROM SHIP_DTL WHERE
3 SHIP_DTL.PLT_ID = 'AM834'
4 AND SHIP_DTL.WHSE = '34' AND
5 SHIP_DTL.STAT_CODE != '845'
6 ORDER BY SHIP_DTL.LOAD_SEQ ASC;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 1 | 18 | 5 (20)|
| 1 | SORT AGGREGATE | | 1 | 18 | |
|* 2 | TABLE ACCESS BY INDEX ROWID| SHIP_DTL | 200 | 3600 | 5 (20)|
|* 3 | INDEX RANGE SCAN | SHIP_DTL_IND_4 | 203 | | 3 (0)|
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - filter("SHIP_DTL"."WHSE"='34' AND "SHIP_DTL"."STAT_CODE"<>845)
3 - access("SHIP_DTL"."PLT_ID"='AM834')
Why is there an INDEX RANGE SCAN when there is no BETWEEN operator in the query? What are the various options (indexes, rewriting the query) for tuning this query?
james_p wrote:
DB Version:10gR2
The below SQL was running slow, so I took an explain plan.
Check your plan: the optimizer estimates that the following query:
select count(*)
from SHIP_DTL
where "SHIP_DTL"."PLT_ID"='AM834';
only returns 200 records. Is this correct? Please post the result of the above query.
It probably isn't the case, because retrieving 200 records per index range scan and single row random table access shouldn't take long, at maximum a couple of seconds if you need to read each block actually from disk rather than from the cache.
If the estimate is wrong you need to check the statistics on the table and index that were used by the optimizer to come to that conclusion.
Are you sure that this plan is the actual plan used at execution time? You can check for the actual plans used to execute by using the DBMS_XPLAN.DISPLAY_CURSOR function in 10g if the SQL is still cached in the Shared Pool. You need to pass the SQL_ID and SQL_CHILD_NUMBER which you can retrieve from V$SESSION while the statement is executing.
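For completeness, a sketch of the approach Randolf describes (the LIKE pattern used to locate the cursor is an assumption; adjust it to your statement text):

```sql
-- Find the cached cursor for the slow statement
select sql_id, child_number
from   v$sql
where  sql_text like 'SELECT COUNT(1) FROM SHIP_DTL%';

-- Display the plan actually used at execution time
select *
from   table(dbms_xplan.display_cursor('&sql_id', &child_number));
```

If the statement has aged out of the Shared Pool, you would need to re-execute it first (or look in AWR on a licensed system).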
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Normal index in oracle.
I have a city_tbl with columns (city_cd, city_name, state_cd) in Oracle 11g Release 2.
city_cd is the primary key.
I ran the query
select city_name from city_tbl where state_cd = 'AL'
and the explain plan shows a full table scan.
Now I create a normal index on the state_cd column.
After running the query, the explain plan still shows a full table scan.
Why is there a full table scan after creating the index?
Hi,
it looks like the optimizer is doing the right thing by ignoring your index and doing the full table scan. Full table scan is a perfectly valid way of retrieving data and in many cases is more efficient than an index access, because a) it uses multiblock reads b) doesn't have to go through index blocks first before reading the data.
In your case, the optimizer estimates that the cost of reading the table via a FTS is equivalent to 34 single-block reads. Index access in your case is more expensive, because while it's pretty cheap to acquire rowids of desired rows (only 2 reads are needed), the desired table rows are scattered around 115 blocks, so Oracle would have to make 117 reads to retrieve the data via an index.
Of course, this is only true if the optimizer is right in its assumptions about table size, index structure, predicate selectivity and clustering factor. If it's not, then you need to find where exactly the optimizer is wrong and correct it (e.g. by collecting a histogram on the column in WHERE clause).
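A sketch of the histogram suggestion above, assuming the table is in your own schema (the bucket count of 254 is just the classic maximum, not a requirement):

```sql
-- Gather stats with a histogram on state_cd so the optimizer
-- sees the skew in state codes rather than assuming uniformity
begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'CITY_TBL',
    method_opt => 'for columns state_cd size 254',
    cascade    => true);
end;
/
```

After regathering, re-run the explain plan; if 'AL' is genuinely rare, the index should now look attractive to the optimizer.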
Best regards,
Nikolay -
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
show parameter opt
NAME TYPE VALUE
_optim_peek_user_binds boolean TRUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.2
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean FALSE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean FALSE
I am asking a broad question here, but any help/insight will be greatly appreciated.
I want to understand how (and which) dynamic buffer cache statistics are exposed to the CBO to let it form an optimal execution plan. I am under the impression that the CBO never looks at buffer cache statistics while making an execution plan, as buffer cache statistics are highly dynamic and considering them would be intrusive and hurt plan stability. (With 11g's "feedback loop" mechanism between the optimizer and the execution engine, apparently my understanding is wrong.)
I am running workload statistics, and since my underlying storage is SAN, I am explicitly setting SREAD=4 and MREAD=10 for a hybrid database. The DB has 65% OLTP access (between 8:00 AM and 10:00 PM) with 35% batch processing 24x7.
958830 wrote:
Can you share your experience on the "advantage of having Workload system statistics over NOWORKLOAD statistics"? (SREAD < MREAD)
I have written several notes on my blog about system stats ( http://jonathanlewis.wordpress.com/category/oracle/statistics/system-stats/ ). In general my preference is to leave as many parameters and related values as possible to default. I apply this guideline to system stats even though, in absolute terms, the values for sreadtim and mreadtim are about 30 years out of date when derived from the ioseektim and the other one (transfer rate, but I can't remember the proper name). For further comments about when I would ignore this to give Oracle a better model of the truth, I'll leave you to read the notes.
(Considering Oracle's recommendation on 10g/11g, I have not set the DBMBRC parameter with NOWORKLOAD system stats.)
As above - I approve of not setting db_file_multiblock_read_count in any circumstances (in the latest versions).
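For reference, the default-leaning setup Jonathan describes boils down to something like this sketch (run as a suitably privileged user; whether to delete existing workload stats first is a judgment call):

```sql
-- Remove any previously gathered workload system statistics,
-- then let Oracle derive sreadtim/mreadtim from noworkload values
begin
  dbms_stats.delete_system_stats;
  dbms_stats.gather_system_stats('NOWORKLOAD');
end;
/

-- Inspect the resulting values
select sname, pname, pval1 from sys.aux_stats$;
```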
Let me also take this opportunity to ask if you can shed some light on positive results of exposing cache statistics to the CBO?
I wouldn't do it (I think I made a comment in the book about how scary it might be). In some ways it sounds like a very good idea - if the optimizer estimates that it needs to visit 5,000 random table blocks (based on calculations related to the clustering_factor), but also has information that the "object-level cache hit ratio" for that table was 98% over the last hour, then it seems reasonable to cost that at 0.02 * 5000 = 100, rather than the 5,000 that would currently appear.
The problem (as I probably mentioned in the book) is that the code you activate tries to keep a smoothed rolling average over time for object-level cache hit ratios - which means that the DBA ends up with the problem of saying: "why did this query use plan A at 9:45, and plan B at 14:30?" - with the only answer being "it depends what was going on in the preceding 4 or 5 hours".
Regards
Jonathan Lewis -
Need help to understand awrsqrpt
Hi,
Following is the content of awrsqrpt:
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 4,013,371 4,013,371.0 39.7
CPU Time (ms) 4,013,407 4,013,406.5 44.5
Executions 1 N/A N/A
Buffer Gets 1.11E+09 1.111433E+09 54.2
Disk Reads 0 0.0 0.0
Parse Calls 1 1.0 0.0
*Rows 5,749 5,749.0 N/A*
User I/O Wait Time (ms) 0 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 0 N/A N/A
Invalidations 0 N/A N/A
Version Count 2 N/A N/A
Sharable Mem(KB) 80 N/A N/A
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | | | 7 (100)| |
| 1 | FILTER | | | | | |
*| 2 | HASH GROUP BY | | 1 | 66 | 7 (15)| 00:00:01 |*
| 3 | NESTED LOOPS | | 1 | 66 | 6 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 61 | 5 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | TT_TMP1 | 1 | 20 | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 41 | 3 (0)| 00:00:01 |
| 7 | INDEX RANGE SCAN | IDX$$_01150001 | 1 | | 2 (0)| 00:00:01 |
| 8 | INDEX RANGE SCAN | INDX_T2 | 1 | 5 | 1 (0)| 00:00:01 |
INSERT INTO TT_TMP2 (
SELECT
TRAN_DAT,
CARD_NUM,
SUM(MPU_AMT_1 + MPU_AMT_2) TOT_AMT
FROM
T1,
T2,
TT_TMP1
WHERE
MPU_MER_REF = MMR_MER_REF
AND
MPU_TRAN_DAT = TRAN_DAT
AND
MPU_CRD_NUM = CARD_NUM
AND
MPU_SETTL_FLAG = 'Y'
AND
MMR_RISK_TYPE IN ('B')
AND
MPU_CHANNEL_ID = 0
GROUP BY
TRAN_DAT,
CARD_NUM,
MMR_RISK_TYPE
HAVING
SUM(MPU_AMT_1 + MPU_AMT_2) > CASE MMR_RISK_TYPE WHEN 'B' THEN 0 ELSE NULL END)
Where 'TT_TMP1' and 'TT_TMP2' are temporary tables. Now if we look at Rows in the Plan Statistics it shows 5,749, whereas the execution plan shows 1 row processed. Actually the DML selects about 900,000 (9 lakh) rows and inserts them into the temporary table TT_TMP2.
Why the awrsqrpt is showing wrong amount of rows?
Platform: Windows
Database: 10.2.0.5
Regards,
The plan shows the optimizer estimates - the estimates which led to that particular plan being chosen - rather than the actuals.
The difference between actual and estimate is what gives you excellent leads to investigate poor plan choices.
Given that all the estimates are for 1 row - perhaps ( ! ) you have a statistics issue.
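To compare estimates against actuals line by line, one option (a sketch; the placeholder stands for the INSERT ... SELECT above) is to re-run the statement with rowsource statistics enabled and then pull the plan with the ALLSTATS format:

```sql
-- Re-execute with rowsource statistics collection
insert /*+ gather_plan_statistics */ into tt_tmp2 ( ... );

-- E-Rows = optimizer estimate, A-Rows = actual rows per operation;
-- large gaps point at the statistics problem suggested above
select *
from   table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Given that the temporary tables are involved, check in particular whether TT_TMP1 had representative statistics (or dynamic sampling) when the plan was parsed.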