Optimizer chooses slow parallel plan in Oracle 9.2
We have an application that we are migrating from Oracle8 (8.1.7) to Oracle9 (9.2.0). We've also changed from DMTs to LMTs. We've come across a performance issue when running an application benchmark that we've isolated down to a simple one-row lookup query on a rather small table, initially 8 rows and growing to several thousand. The table has 2 columns, an integer ID value and a VARCHAR2(254) name column. There is a unique index on the name column. The prepared query is of the form:
select id from table_name where name = <bind_variable>;
In Oracle8 the CBO chooses a serial plan of unique index lookup followed by table access by rowid to get the result from the table row. Oracle9 chooses a parallel table scan of degree default (effective degree 4) for the same query even when the table only has 8 rows.
Over thousands of executions the Oracle8 query averages 2-3 msec each. However, with exactly the same data and the same sequence of queries, the Oracle9 query averages 150 msec each. Clearly the setup and tear-down cost of executing a parallel query here is too expensive.
My question is why is the Oracle9 optimizer making such a different, and in my opinion poorer, choice of query plan than the Oracle8 optimizer?
I should note the optimizer stats are the same and result from exactly the same DBMS_STATS.GATHER_TABLE_STATS calls, and Oracle9 behaves exactly the same with OPTIMIZER_FEATURES_ENABLE set to either 9.2.0 or 8.1.7.
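One diagnostic worth trying here (my suggestion, not from the original post) is to check whether the table or its index picked up a DEFAULT parallel DEGREE during the migration, since the dictionary setting drives the optimizer's parallel costing:

```sql
-- Check the dictionary DEGREE settings that drive parallel costing.
-- TABLE_NAME below is a placeholder for the real table.
SELECT table_name, degree, instances
  FROM user_tables
 WHERE table_name = 'TABLE_NAME';

SELECT index_name, degree
  FROM user_indexes
 WHERE table_name = 'TABLE_NAME';

-- If DEGREE shows DEFAULT or a value greater than 1, forcing serial
-- execution should restore the unique-index lookup plan:
ALTER TABLE table_name NOPARALLEL;
```

A PARALLEL clause can slip into regenerated DDL unnoticed when objects are rebuilt, for example during a DMT-to-LMT move.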
Gabriele,
As indicated in the original post, the following work-around might work for you for bug 9743250. What this does is essentially force the optimizer to ALWAYS use the spatial index. That is usually good, unless the query is actually more efficient not using the index, such as when it is much cheaper to just do a full table scan (FTS). Anyhow, this is what you can do from 11.2.0.1 to 11.2.0.3 (fixed in 11.2.0.4 and 12.1.0.1):
connect /as sysdba
alter session set current_schema=MDSYS;
DISASSOCIATE STATISTICS FROM INDEXTYPES spatial_index FORCE;
DISASSOCIATE STATISTICS FROM PACKAGES sdo_3gl FORCE;
DISASSOCIATE STATISTICS FROM PACKAGES prvt_idx FORCE;
Bryan
Similar Messages
-
SQL query with Bind variable with slower execution plan
I have a 'normal' SQL select-insert statement (not using bind variables) and it yields the following execution plan:
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
(Cost=2 Card=135 Bytes=6480)
Statistics
0 recursive calls
18 db block gets
15558 consistent gets
47 physical reads
9896 redo size
423 bytes sent via SQL*Net to client
1095 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
I have the same query but run using bind variables (I tested it with both Oracle Forms and SQL*Plus); it takes considerably longer with a different execution plan:
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
3 2 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
Statistics
0 recursive calls
12 db block gets
3003199 consistent gets
54 physical reads
9448 redo size
423 bytes sent via SQL*Net to client
1258 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with bind variables? I have DBA access to the database.
Regards
Ivan

Many thanks for your reply.
I have already run the statistics for both TABLEA and TABLEB, as well as all the indexes associated with both tables (using dbms_stats; I am on a 9i db), but not the indexed columns.
For the table I use:
begin
dbms_stats.gather_table_stats(ownname=> 'IVAN', tabname=> 'TABLEA', partname=> NULL);
end;
For the index I use:
begin
dbms_stats.gather_index_stats(ownname=> 'IVAN', indname=> 'TABLEB_IDX_003', partname=> NULL);
end;
Could you show me a sample of how to collect statistics for indexed columns?
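For what it's worth (a sketch, not from the original replies), column-level statistics, including histograms on indexed columns, are normally gathered through the METHOD_OPT parameter of GATHER_TABLE_STATS rather than a separate call:

```sql
BEGIN
  -- Gather table and column statistics in one call; CASCADE => TRUE
  -- also refreshes the associated index statistics.
  -- 'FOR ALL INDEXED COLUMNS SIZE AUTO' collects column stats, with
  -- histograms where Oracle judges them useful, for indexed columns.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'IVAN',
    tabname    => 'TABLEA',
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    cascade    => TRUE);
END;
/
```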
regards
Ivan -
Description required on DOC Planning Your Oracle E-Business Suite Upgrade..
Hi All
I'm planning to upgrade Oracle Applications from 11i to 12i. While reading the white paper "Planning Your Oracle E-Business Suite Upgrade from Release 11i to Release 12.1", I would like some description of the following points:
How can we "develop a robust testing strategy" during the Oracle Applications upgrade from 11i to 12i?
What possible "Oracle support resources are available"?
Which tools can we use that can improve the upgrade and maintenance experience?
How do we become familiar with the Oracle information available for planning the upgrade from Release 11i to Release 12.1?
Steps to minimize downtime.
Regards,
Muhammad Salahuddin Manzoor

Hi,
Please see these threads for a similar discussion.
How can we reduced the patch time for u6394500.drv
How can we reduced the patch time for u6394500.drv
Oracle Applications upgradation from 11.5.10.2 to 12.1.1 Parallel Upgrade
Re: Oracle Applications upgradation from 11.5.10.2 to 12.1.1 Parallel Upgrade
Thanks,
Hussein -
SQL slow after upgrading to Oracle Database 10g Enterprise Edition Release
Hi all:
We have recently upgraded our database from Oracle9i Enterprise Edition Release 9.2.0.6.0 to Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
After that, we found that some of our SQL statements have become very slow.
For example, a query that showed results in 4 seconds on 9i takes 28 seconds on 10g.
Following is the execution plan of my query in Oracle9i
Operation Object PARTITION_START PARTITION_STOP COST
SELECT STATEMENT () 9458
NESTED LOOPS () 9458
SORT (UNIQUE)
INDEX (RANGE SCAN) BL_EQ_PK_N 2
VIEW () CONTAINER_INFO 2
UNION-ALL (PARTITION)
TABLE ACCESS (BY INDEX ROW SERVICE_EVENTS 1
NESTED LOOPS () 11
NESTED LOOPS () 10
NESTED LOOPS (OUTER) 9
NESTED LOOPS () 8
NESTED LOOPS () 7
NESTED LOOPS () 6
NESTED LOOPS () 5
NESTED LOOPS () 4
NESTED LOOPS (OUT 3
TABLE ACCESS (BY EQUIPMENT_USES 2
INDEX (UNIQUE S EQUSE_PK 1
TABLE ACCESS (BY SHIPPING_LINES 1
INDEX (UNIQUE S LINE_PK
INDEX (UNIQUE SCA EQHT_PK
TABLE ACCESS (BY I EQUIPMENT_TYPES 1
INDEX (UNIQUE SCA EQTP_PK
TABLE ACCESS (BY IN EQUIPMENT_SIZES 1
INDEX (UNIQUE SCAN EQSZ_PK
TABLE ACCESS (BY IND SHIP_VISITS 2
INDEX (RANGE SCAN) SVISIT_UK 1
TABLE ACCESS (BY INDE SHIPS 1
INDEX (UNIQUE SCAN) SHIP_PK
TABLE ACCESS (BY INDEX CARE_VIR_MAP 1
INDEX (UNIQUE SCAN) VIR_VESVOY
TABLE ACCESS (BY INDEX EQUIPMENT 1
INDEX (RANGE SCAN) EQ_EQUSE_FK
INDEX (RANGE SCAN) SEVENTS_EQUSE_FK_N
NESTED LOOPS () 7
NESTED LOOPS () 6
NESTED LOOPS () 5
NESTED LOOPS () 4
NESTED LOOPS (OUTER) 3
TABLE ACCESS (BY INDE EQUIPMENT_USES 2
INDEX (UNIQUE SCAN) EQUSE_PK 1
TABLE ACCESS (BY INDE SHIPPING_LINES 1
INDEX (UNIQUE SCAN) LINE_PK
INDEX (UNIQUE SCAN) EQHT_PK
TABLE ACCESS (BY INDEX EQUIPMENT_TYPES 1
INDEX (UNIQUE SCAN) EQTP_PK
TABLE ACCESS (BY INDEX R EQUIPMENT_SIZES 1
INDEX (UNIQUE SCAN) EQSZ_PK
TABLE ACCESS (BY INDEX RO EQUIPMENT 1
INDEX (RANGE SCAN) EQ_EQUSE_FK

and the following is my query plan in Oracle 10g:
Operation Object PARTITION_START PARTITION_STOP COST
SELECT STATEMENT () 2881202
NESTED LOOPS () 2881202
SORT (UNIQUE) 2
INDEX (RANGE SCAN) BL_EQ_PK_N 2
VIEW () CONTAINER_INFO 2881199
UNION-ALL ()
NESTED LOOPS (OUTER) 2763680
NESTED LOOPS () 2718271
NESTED LOOPS () 2694552
NESTED LOOPS () 2623398
NESTED LOOPS (OUTER) 2623380
NESTED LOOPS () 2393965
NESTED LOOPS () 2393949
NESTED LOOPS () 2164536
NESTED LOOPS () 1706647
NESTED LOOPS () 854120
TABLE ACCESS (FU BL_EQUIPMENT 1515
TABLE ACCESS (BY EQUIPMENT_USES 1
INDEX (UNIQUE S EQUSE_PK 1
TABLE ACCESS (BY EQUIPMENT 1
INDEX (RANGE SCA EQ_EQUSE_FK 1
TABLE ACCESS (BY I EQUIPMENT_TYPES 1
INDEX (UNIQUE SCA EQTP_PK 1
TABLE ACCESS (BY IN EQUIPMENT_SIZES 1
INDEX (UNIQUE SCAN EQSZ_PK 1
INDEX (UNIQUE SCAN) EQHT_PK 1
TABLE ACCESS (BY INDE SHIPPING_LINES 1
INDEX (UNIQUE SCAN) LINE_PK 1
INDEX (RANGE SCAN) SEVENTS_TSERV_FK_N 1
TABLE ACCESS (BY INDEX SHIP_VISITS 2
INDEX (RANGE SCAN) SVISIT_UK 2
TABLE ACCESS (BY INDEX R SHIPS 1
INDEX (UNIQUE SCAN) SHIP_PK 1
TABLE ACCESS (BY INDEX RO CARE_VIR_MAP 2
INDEX (UNIQUE SCAN) VIR_VESVOY 1
NESTED LOOPS (OUTER) 117519
NESTED LOOPS () 98158
NESTED LOOPS () 78798
NESTED LOOPS () 78795
NESTED LOOPS () 59432
TABLE ACCESS (FULL) EQUIPMENT_USES 20788
TABLE ACCESS (BY INDE EQUIPMENT_TYPES 1
INDEX (UNIQUE SCAN) EQTP_PK 1
TABLE ACCESS (BY INDEX EQUIPMENT 1
INDEX (RANGE SCAN) EQ_EQUSE_FK 1
INDEX (UNIQUE SCAN) EQHT_PK 1
TABLE ACCESS (BY INDEX R EQUIPMENT_SIZES 1
INDEX (UNIQUE SCAN) EQSZ_PK 1
TABLE ACCESS (BY INDEX RO SHIPPING_LINES 1
INDEX (UNIQUE SCAN) LINE_PK 1

Can somebody help me regarding this?
Thanks
Hassan

I would say: gather stats on 9i/10g for the required tables and indexes, then post the explain plan.
--Girish -
Good Day to all,
I have little experience working in environments with Oracle databases, and I have been asked to produce a capacity plan with Oracle Database 10g for a data warehouse project that the company I work for is leading. The request is to make a plan specifying, among other things: the size and number of tablespaces and datafiles, and projected growth taking into account the initial load and the weekly (incremental) load. The truth is this kind of sizing is a bit complicated given my inexperience, so I ask for your valuable cooperation. Are there mathematical formulas that allow me to make those projections taking into account the data types and their lengths? Is there a standard for creating the tablespaces and datafiles?
In advance, thank you for your contributions.

The first thing you need to do is get management to give you four things:
1. The cost to the organization for downtime, rated in dollars/hour.
2. The service level agreement for the system's customers.
3. The amount of data to be loaded into the system and the retention time.
4. What version of RAID or what ASM redundancy is planned.
With that you can start at the grossest level, which is planning for database + archived redo logs + online backup files.
I generally figure the database, itself, at about 25% of required storage because I like to have at least two full backups, a bunch of incremental backups, plus all of the archived redo logs that support them. And all on separate shelves.
The number of tablespaces and data files is really just a question of maintenance. Ease of transport. Ease of movement. Ease of backing up.
If you want to get down to the actual sizes of tables and indexes the best place to go is the DBMS_SPACE built-in package. Play with these two procedures:
CREATE_TABLE_COST and CREATE_INDEX_COST which do the job far more efficiently and accurately than any formulas you will receive. You can find demos
of this capability here: http://www.morganslibrary.org/reference/dbms_space.html. -
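As an illustration of the DBMS_SPACE.CREATE_TABLE_COST procedure mentioned above (a sketch only; the tablespace name, row size, and row count are made-up values):

```sql
DECLARE
  l_used_bytes  NUMBER;
  l_alloc_bytes NUMBER;
BEGIN
  -- Estimate space for a hypothetical fact table of 10 million rows
  -- in a tablespace named USERS (adjust to your own environment).
  DBMS_SPACE.CREATE_TABLE_COST(
    tablespace_name => 'USERS',
    avg_row_size    => 120,        -- assumed average row length in bytes
    row_count       => 10000000,
    pct_free        => 10,
    used_bytes      => l_used_bytes,
    alloc_bytes     => l_alloc_bytes);
  DBMS_OUTPUT.PUT_LINE('Used:      ' || l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Allocated: ' || l_alloc_bytes);
END;
/
```

USED_BYTES reflects the data itself, while ALLOC_BYTES accounts for the tablespace's extent sizing, which is the number you actually provision for.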
If I buy a locked iPhone 5s (Verizon) from the USA, can I use it in India? And when I buy this phone, is it mandatory that I choose a carrier plan? One more question: after unlocking, is the warranty still valid? Please tell me the official unlocking procedure in detail. Thank you in advance.
Your question has been answered. Jailbreaking cannot be discussed here. If you buy a phone locked to a carrier, it would be down to them to unlock it, and that's very unlikely if you've only just got the phone. Locked phones are "cheap" for a reason - you're signing up for a contract and that subsidises the cost of the phone.
-
Optimizer creating high production plan in few buckets
Hi,
The optimizer is creating a high production plan in a few buckets even though there is less demand.
Findings
1. The system is not preplanning to cover future demand.
2. There is one resource used by three different products; for one product the optimizer is doing excess planning while it is showing a supply shortage for the other products.
3. If I again run the optimizer for all three products at all the locations, the system gives correct results (all receipt elements in line with the demand).
Now I need to understand the reasons why the optimizer gave a high production plan in the last run.
Regards
Abhi

1. The system is not preplanning to cover future demand.

Check the storage cost and penalty cost; I think the storage cost is more than the penalty cost.
2. There is one resource used by three different products; for one product the optimizer is doing excess planning while it is showing a supply shortage for the other products.

If you run all three one by one, it plans only the first product on that particular resource. If you run all three at the same time, the system plans according to the cost optimisation logic; that means it checks all costs and calculates which product needs to be manufactured on that resource, and how much.
You can check the log of the SNP optimisation run in
/SAPAPO/SNPOPLOG - SNP Optimizer Log Data -
Loading Metadata from planning to oracle table
Hi
I am trying to load one dimension's metadata from Planning to an Oracle table. We are on 10.1.3.
I selected LKM SQL to SQL to load from the source to the staging area, and IKM SQL to SQL Append to load from staging to the target.
I got the below error
0 : null : java.sql.SQLException: Driver must be specified
java.sql.SQLException: Driver must be specified
at com.sunopsis.sql.SnpsConnection.a(SnpsConnection.java)
at com.sunopsis.sql.SnpsConnection.t(SnpsConnection.java)
at com.sunopsis.sql.SnpsConnection.connect(SnpsConnection.java)
at com.sunopsis.sql.SnpsQuery.updateExecStatement(SnpsQuery.java)
at com.sunopsis.sql.SnpsQuery.executeQuery(SnpsQuery.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execCollOrders(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSqlC.treatTaskTrt(SnpSessTaskSqlC.java)
at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
at com.sunopsis.dwg.cmd.e.i(e.java)
at com.sunopsis.dwg.cmd.h.y(h.java)
at com.sunopsis.dwg.cmd.e.run(e.java)
at java.lang.Thread.run(Unknown Source)
Please suggest..
Thanks,

Hi John,
Thank you for the response
I am trying to load accounts back to an Oracle table.
My plan is to load the existing accounts in Planning to an Oracle table, i.e. PLANACNT, so that when I load new accounts from Oracle to Planning through ODI, I can say in the source filter: load Table.column not in (select accounts from PLANACNT).
Please suggest if there is a better method for this. My plan is to load new accounts created in Oracle to Planning by comparing against the accounts already in Planning.
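The source filter described above would look roughly like this (a sketch; SRC_ACCOUNTS and ACCOUNT_ID are placeholder names):

```sql
-- Select only accounts that do not yet exist in Planning,
-- as mirrored in the PLANACNT snapshot table.
SELECT s.account_id
  FROM src_accounts s
 WHERE s.account_id NOT IN (SELECT p.account_id FROM planacnt p);
```

If PLANACNT's account column can ever be NULL, a NOT EXISTS correlated subquery is safer than NOT IN, since NOT IN returns no rows when the subquery produces a NULL.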
Thanks, -
Hyperion Planning to Oracle GL
Hi all,
We have requirement to load data from hyperion planning to oracle GL.
How can i do this?
Please guide me
Thanks

Anyone?

I read your blog, John, describing how to extract data from Essbase using ODI,
and implemented the same method. Now, as described, we have ODI on Windows and Essbase on Linux;
can it be resolved:
1. Using a MaxL script, where I'll write the right path to the Windows server; will ODI
then know to pull this? Will it work using LKM Essbase to SQL?
2. Should I use a report script instead of a calc script?
Pls advice,
Ido -
Pre-requisites for Drill-Down from Hyperion Planning to Oracle GL
I am using Hyperion Planning 11.1.2 for maintaining budgets. I do comparison of actual vs budget in Hyperion. I export actual results from Oracle General Ledger (11.5.10) in flat file and upload to Hyperion planning using FDM. I maintain mapping in FDM for flat file and the Hyperion members.
Q1) Is it possible to drill down actual numbers from Hyperion planning to Oracle General Ledger via FDM?
Q2) If yes, what are prerequisites for using drill-down functionality?
Can anyone please answer and refer some link/material that may help me in setting this up?
Thanks in advance.

Thanks
But when I try to log in I'm getting the error below:
"Failure of server APACHE bridge: No backend server available for connection: timed out after 10 seconds or idempotent set to OFF."
So do you have any other reference, please?
thanks -
If i choose the monthly plan can i cancel anytime
If I choose the monthly plan, can I cancel anytime? I got a new job and want to brush up on Illustrator. If I purchase the monthly plan for 29.99, can I cancel anytime? What about the 19.99 plan that is paid monthly but on an annual contract? That one makes you pay every month for a year, correct?
Tyler311

I would recommend reviewing the following discussions where this topic has been discussed:
Need to cancel my free 1 month membership CC trial before I get charged for 6 months contract period.
cancel month to month
Cancel month to month membership
cancel month to month
Creative Cloud Month-to-Month Cancellation -
Optimizer choosing hash joins even when slower
We have several queries where joins are being evaluated by full scans / hash joins, even when forcing index use results in an execution time about a quarter of the duration of the hash-join plan. It still happens if I run DBMS_STATS.GATHER_TABLE_STATS with FOR ALL COLUMNS.
Is there a stats-gathering option which is more likely to result in an indexed join, without having to get developers to put optimizer hints in their queries?
11g on SuSE 10.
Many thanks.

user10400178 wrote:
That would require me to post a large amount of schema information as well to be of any added value.
Surely there are some general recommendations one could make as to how to allow the optimizer to realise that joining through an index is going to be quicker than doing a full scan and hash join to a table.

If you don't want to post the plans, then as a first step you basically need to verify yourself whether the cardinality estimates returned by the execution plan correspond roughly to the actual cardinalities.
E.g. in your execution plan there are steps like "FULL TABLE SCAN" and these operations likely have a corresponding "FILTER" predicate in the "Predicate Information" section below the plan.
As a first step you should run simple count queries ("select count(*) from ... where <FILTER/ACCESS predicates>") on the tables involved, using the "FILTER" and "ACCESS" predicates mentioned, to compare whether the returned number of rows is in the same ballpark as the estimates mentioned in the plan.
If these estimates are already way off then you know that for some reason the optimizer makes wrong assumptions and that's probably the reason why the suboptimal access pattern is preferred.
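As a concrete sketch of that check (the table and predicate are invented examples), compare the count against the E-Rows value of the corresponding plan step:

```sql
-- Suppose the plan shows TABLE ACCESS FULL on ORDERS with
-- filter("STATUS"='OPEN') and an estimate of 50 rows; verify with:
SELECT COUNT(*)
  FROM orders
 WHERE status = 'OPEN';
-- A count that is orders of magnitude away from the estimate points
-- to a statistics problem on that column.
```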
One potential reason could be correlated column values, but since you're already on 11g you could make use of extended column statistics. See here for more details:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/stats.htm#BEIEEIJA
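For illustration, the 11g extended statistics mentioned above are created with DBMS_STATS.CREATE_EXTENDED_STATS (the table and column names here are placeholders):

```sql
-- Create a column group so the optimizer knows CITY and ZIP are
-- correlated; the function returns the system-generated extension name.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => USER,
         tabname   => 'CUSTOMERS',
         extension => '(CITY, ZIP)')
  FROM dual;

-- Regather statistics so the new column group is populated.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'CUSTOMERS',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```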
Another reason might simply be that you're choosing too low an "estimate" sample size for the statistics collection. In 11g you should always use DBMS_STATS.AUTO_SAMPLE_SIZE for the "estimate_percent" parameter of the DBMS_STATS.GATHER_*_STATS procedures. It should generate accurate statistics without the need to analyze all of the data. See here in Greg Rahn's blog for an example:
http://structureddata.org/2007/09/17/oracle-11g-enhancements-to-dbms_stats/
Regarding the histograms: Oracle 11g by default generates histograms if it deems them to be beneficial. It is controlled by the parameter "METHOD_OPT" which has the default value of "FOR ALL COLUMNS SIZE AUTO". The "SIZE" keyword determines the generation of histograms. You could use "SIZE 1" to prevent histogram generation, "SIZE <n>" to control the number of buckets to use for the histogram or "SIZE AUTO" to let Oracle decide itself when and how to generate histograms.
Regarding the stored outlines: You could have so called "stored outlines" that force the optimizer to stick to a certain plan. That features was introduced a long time ago and is sometimes also referred to as "plan stability", its main purpose was an attempt to smooth the transition from the rule based optimizer (RBO) to the cost based optimizer (CBO) introduced in Oracle 7 (although you can use it for other purposes, too, of course). Oracle 11g offers now the new "SQL plan management" feature that is supposed to supersede the "plan stability" feature. For more information, look here:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28274/outlines.htm#PFGRF707
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Edited by: Randolf Geist on Oct 16, 2008 4:20 PM
Sample size note added
Edited by: Randolf Geist on Oct 16, 2008 6:54 PM
Outline info added -
I'm having a couple of issues with a query, and I can't figure out the best way to reach a solution.
Platform Information
Windows Server 2003 R2
Oracle 10.2.0.4
Optimizer Settings
SQL > show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 90
optimizer_index_cost_adj integer 30
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE

The query below is a simple "Top N" query, where the top result is returned. Here it is, with bind variables in the same locations as in the application code:
SELECT PRODUCT_DESC
FROM
(
    SELECT PRODUCT_DESC
         , COUNT(*) AS CNT
    FROM USER_VISITS
    JOIN PRODUCT ON PRODUCT.PRODUCT_OID = USER_VISITS.PRODUCT_OID
    WHERE PRODUCT.PRODUCT_DESC != 'Home'
    AND VISIT_DATE
        BETWEEN
            ADD_MONTHS(
                TRUNC(
                    TO_DATE(:vCurrentYear, 'YYYY')
                  , 'YEAR')
              , 3*(:vCurrentQuarter-1))
        AND
            ADD_MONTHS(
                TRUNC(
                    TO_DATE(:vCurrentYear, 'YYYY')
                  , 'YEAR')
              , 3*:vCurrentQuarter) - INTERVAL '1' DAY
    GROUP BY PRODUCT_DESC
    ORDER BY CNT DESC
)
WHERE ROWNUM <= 1;
Explain Plan
The explain plan I receive when running the query above.
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
|* 1 | COUNT STOPKEY | | 1 | | 1 |00:00:34.92 | 66343 | | | |
| 2 | VIEW | | 1 | 1 | 1 |00:00:34.92 | 66343 | | | |
|* 3 | FILTER | | 1 | | 1 |00:00:34.92 | 66343 | | | |
| 4 | SORT ORDER BY | | 1 | 1 | 1 |00:00:34.92 | 66343 | 2048 | 2048 | 2048 (0)|
| 5 | SORT GROUP BY NOSORT | | 1 | 1 | 27 |00:00:34.92 | 66343 | | | |
| 6 | NESTED LOOPS | | 1 | 2 | 12711 |00:00:34.90 | 66343 | | | |
| 7 | TABLE ACCESS BY INDEX ROWID| PRODUCT | 1 | 74 | 77 |00:00:00.01 | 44 | | | |
|* 8 | INDEX FULL SCAN | PRODUCT_PRODDESCHAND_UNQ | 1 | 1 | 77 |00:00:00.01 | 1 | | | |
|* 9 | INDEX FULL SCAN | USER_VISITS#PK | 77 | 2 | 12711 |00:00:34.88 | 66299 | | | |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=1)
3 - filter(ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1))<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURR
ENTYEAR),'YYYY'),'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
8 - filter("PRODUCT"."PRODUCT_DESC"<>'Home')
9 - access("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1)) AND
"USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID" AND "USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY')
,'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
filter(("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*(:VCURRENTQUARTER-1)) AND
"USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(TO_CHAR(:VCURRENTYEAR),'YYYY'),'fmyear'),3*:VCURRENTQUARTER)-INTERVAL'+01 00:00:00' DAY(2)
TO SECOND(0) AND "USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID"))
Row Source Generation
TKPROF Row Source Generation
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 0 0 0
Fetch 2 35.10 35.13 0 66343 0 1
total 4 35.10 35.14 0 66343 0 1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 62
Rows Row Source Operation
1 COUNT STOPKEY (cr=66343 pr=0 pw=0 time=35132008 us)
1 VIEW (cr=66343 pr=0 pw=0 time=35131996 us)
1 FILTER (cr=66343 pr=0 pw=0 time=35131991 us)
1 SORT ORDER BY (cr=66343 pr=0 pw=0 time=35131936 us)
27 SORT GROUP BY NOSORT (cr=66343 pr=0 pw=0 time=14476309 us)
12711 NESTED LOOPS (cr=66343 pr=0 pw=0 time=22921810 us)
77 TABLE ACCESS BY INDEX ROWID PRODUCT (cr=44 pr=0 pw=0 time=3674 us)
77 INDEX FULL SCAN PRODUCT_PRODDESCHAND_UNQ (cr=1 pr=0 pw=0 time=827 us)(object id 52355)
12711 INDEX FULL SCAN USER_VISITS#PK (cr=66299 pr=0 pw=0 time=44083746 us)(object id 52949)

However, when I run the query with an ALL_ROWS hint I receive this explain plan (reasoning for this can be found in Jonathan Lewis' response: http://www.freelists.org/post/oracle-l/ORDER-BY-and-first-rows-10-madness,4):
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 39 | 223 (25)| 00:00:03 |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | VIEW | | 1 | 39 | 223 (25)| 00:00:03 |
|* 3 | FILTER | | | | | |
| 4 | SORT ORDER BY | | 1 | 49 | 223 (25)| 00:00:03 |
| 5 | HASH GROUP BY | | 1 | 49 | 223 (25)| 00:00:03 |
|* 6 | HASH JOIN | | 490 | 24010 | 222 (24)| 00:00:03 |
|* 7 | TABLE ACCESS FULL | PRODUCT | 77 | 2849 | 2 (0)| 00:00:01 |
|* 8 | INDEX FAST FULL SCAN| USER_VISITS#PK | 490 | 5880 | 219 (24)| 00:00:03 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=1)
3 - filter(ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),3*(TO_NUMBER(:
VCURRENTQUARTER)-1))<=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),3*TO_N
UMBER(:VCURRENTQUARTER))-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))
6 - access("USER_VISITS"."PRODUCT_OID"="PRODUCT"."PRODUCT_OID")
7 - filter("PRODUCT"."PRODUCT_DESC"<>'Home')
8 - filter("USER_VISITS"."VISIT_DATE">=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYY
Y'),'fmyear'),3*(TO_NUMBER(:VCURRENTQUARTER)-1)) AND
"USER_VISITS"."VISIT_DATE"<=ADD_MONTHS(TRUNC(TO_DATE(:VCURRENTYEAR,'YYYY'),'fmyear'),
3*TO_NUMBER(:VCURRENTQUARTER))-INTERVAL'+01 00:00:00' DAY(2) TO SECOND(0))

And the TKPROF Row Source Generation:
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.51 0.51 0 907 0 27
total 5 0.51 0.51 0 907 0 27
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 62
Rows Row Source Operation
27 FILTER (cr=907 pr=0 pw=0 time=513472 us)
27 SORT ORDER BY (cr=907 pr=0 pw=0 time=513414 us)
27 HASH GROUP BY (cr=907 pr=0 pw=0 time=512919 us)
12711 HASH JOIN (cr=907 pr=0 pw=0 time=641130 us)
77 TABLE ACCESS FULL PRODUCT (cr=5 pr=0 pw=0 time=249 us)
22844 INDEX FAST FULL SCAN USER_VISITS#PK (cr=902 pr=0 pw=0 time=300356 us)(object id 52949)

The query with the ALL_ROWS hint returns data instantly, while the other one takes about 70 times as long.
Interestingly enough BOTH queries generate plans with estimates that are WAY off. The first plan is estimating 2 rows, while the second plan is estimating 490 rows. However the real number of rows is correctly reported in the Row Source Generation as 12711 (after the join operation).
TABLE_NAME NUM_ROWS BLOCKS
USER_VISITS 196044 1049
INDEX_NAME BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR LAST_ANALYZED
USER_VISITS#PK 2 860 196002 57761 07/24/2009 13:17:59
COLUMN_NAME NUM_DISTINCT LOW_VALUE HIGH_VALUE DENSITY NUM_NULLS HISTOGRAM
VISIT_DATE 195900 786809010E0910 786D0609111328 .0000051046452272 0 NONE

I don't know how the first one is estimating 2 rows, but I can compute the second's cardinality estimate by assuming a 5% selectivity for the TO_DATE() functions:
SQL > SELECT ROUND(0.05*0.05*196044) FROM DUAL;
ROUND(0.05*0.05*196044)
490

However, removing the bind variables (and clearing the shared pool) does not change the cardinality estimates at all.
I would like to avoid hinting this plan if possible and that is why I'm looking for advice. I also have a followup question.
Edited by: Centinul on Sep 20, 2009 4:10 PM
See my last post for the 11.2.0.1 update.

Centinul wrote:
You could potentially perform testing with either a CARDINALITY or OPT_ESTIMATE hint to see if the execution plan changes dramatically to improve performance. The question then becomes whether this would be sufficient to over-rule the first-rows optimizer so that it does not use an index access which will avoid a sort.

I tried doing that this morning by increasing the cardinality from the USER_VISITS table to a value such that the estimate was about that of the real amount of data. However, the plan did not change.
Could you use the ROW_NUMBER analytic function instead of ROWNUM?

Interestingly enough, when I tried this it generated the same plan as was used with the ALL_ROWS hint, so I may implement this query for now.
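A ROW_NUMBER rewrite of the Top-N query would look roughly like this (my sketch based on the query earlier in the thread, not the poster's final code):

```sql
SELECT product_desc
FROM
(
    SELECT product_desc
         , COUNT(*) AS cnt
         , ROW_NUMBER() OVER (ORDER BY COUNT(*) DESC) AS rn
    FROM user_visits
    JOIN product ON product.product_oid = user_visits.product_oid
    WHERE product.product_desc != 'Home'
    AND visit_date
        BETWEEN ADD_MONTHS(TRUNC(TO_DATE(:vCurrentYear, 'YYYY'), 'YEAR'), 3*(:vCurrentQuarter-1))
            AND ADD_MONTHS(TRUNC(TO_DATE(:vCurrentYear, 'YYYY'), 'YEAR'), 3*:vCurrentQuarter) - INTERVAL '1' DAY
    GROUP BY product_desc
)
WHERE rn = 1;
```

Because the ranking happens inside the view rather than as a ROWNUM STOPKEY over a sorted result, the optimizer is not steered toward the sort-avoiding index access path.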
I do have two more followup questions:
1. Even though a better plan is picked, the optimizer estimates are still off by a large margin because of the bind variables and 5% * 5% * NUM_ROWS. How do I get the estimates in line with the actual values? Should I really fudge the statistics?
2. Should I raise a bug report with Oracle over the behavior of the original query?

That is great that the ROW_NUMBER analytic function worked. You may want to perform some testing with this before implementing it in production, to see whether Oracle performs significantly more logical or physical I/Os with the ROW_NUMBER analytic function compared to the ROWNUM solution with the ALL_ROWS hint.
As Timur suggests, seeing a 10053 trace during a hard parse of both queries (with and without the ALL_ROWS hint) would help determine what is happening. It could be that a histogram exists which is feeding bad information to the optimizer, causing distorted cardinality in the plan. If bind peeking is used, the 5% * 5% rule might not apply, especially if a histogram is involved. Also, the WHERE clause includes "PRODUCT.PRODUCT_DESC != 'Home'" which might affect the cardinality in the plan.
Your question may have prompted the starting of a thread in the SQL forum yesterday on the topic of ROWNUM, but it appears that thread was removed from the forum within the last couple hours.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Hi,
I've found a discussion that ended last year.
I have Oracle 11.2.0.3 with the same problem: some queries are very slow (in the execution plan I found "PX COORDINATOR").
Other installations of Oracle 11.2.0.3 don't have the same problem: their execution plans show "DOMAIN INDEX".
I solved it using the statement "ALTER TABLE table_name NOPARALLEL;", which disables the parallel feature on the table.
Can anyone tell me if this is a good (or bad) solution? Thank you
Bye
Gabriele
Can the CBO optimizer be slow ?
Hi,
I have a strange problem with a query. The first time I execute the query it takes 3 or 4 seconds; the second time I execute it the response is almost immediate. The query has a correct plan (I use the FIRST_ROWS hint).
I have tried using the RULE hint; the plan is the same as with the FIRST_ROWS hint, but the query is very fast the first time too. I've noted that the time the tkprof utility takes to obtain the query plan (with the FIRST_ROWS hint) is similar to the difference between the first and second executions.
My question is : can the CBO be so slow ?
TIA
Sergio Sette

Neither one of those queries is using the CBO. /* RULE */ and /* FIRST_ROWS */ are hints and use the rule-based optimizer.

This is what the Oracle documentation says:
---8<---
FIRST_ROWS
The optimizer uses a cost-based approach for all SQL statements in the session regardless of the presence of statistics and optimizes with a goal of best
response time (minimum resource use to return the first row of the result set).
--->8---
FIRST_ROWS (at least with 8.1.6 or later) uses the CBO and can be very slow. I have solved the problem by reanalyzing (using the dbms_utility package) all the objects in my schema.
Regards
sergio sette