Avoid execution plans that resolve unused joins
There are two tables, Master and LookUp.
Master references LookUp through its indexed primary key.
CREATE TABLE "LookUp" (
ID_LU NUMBER NOT NULL,
DATA VARCHAR2(100) );
CREATE UNIQUE INDEX LOOKUP_PK ON "LookUp"(ID_LU);
ALTER TABLE "LookUp" ADD (
CONSTRAINT LOOKUP_PK
PRIMARY KEY (ID_LU)
USING INDEX );
CREATE TABLE "Master" (
ID NUMBER NOT NULL,
DATA VARCHAR2(100),
ID_LU NUMBER );
CREATE UNIQUE INDEX MASTER_PK ON "Master"(ID);
ALTER TABLE "Master" ADD (
CONSTRAINT MASTER_PK
PRIMARY KEY (ID)
USING INDEX );
ALTER TABLE "Master" ADD (
CONSTRAINT FK_MASTER
FOREIGN KEY (ID_LU)
REFERENCES "LookUp" (ID_LU));
Selecting rows from LookUp with a LEFT OUTER JOIN to Master produces an execution plan that does not consider Master, as it is not used.
SELECT t1.ID_LU FROM "LookUp" t1
LEFT OUTER JOIN "Master" t2
ON t1.ID_LU = t2.ID_LU;
PLAN_ID ID PARENT_ID DEPTH OPERATION OPTIMIZER OPTIONS OBJECT_NAME OBJECT_ALIAS OBJECT_TYPE
2 0 0 SELECT STATEMENT ALL_ROWS
2 1 0 1 TABLE ACCESS FULL Master T1@SEL$2 TABLE
But selecting rows from Master with a LEFT OUTER JOIN to LookUp does not produce the mirror-image plan: this one still considers the LookUp table although it is not used.
SELECT t1.ID_LU FROM "Master" t1
LEFT OUTER JOIN "LookUp" t2
ON t1.ID_LU = t2.ID_LU;
PLAN_ID ID PARENT_ID DEPTH OPERATION OPTIMIZER OPTIONS OBJECT_NAME OBJECT_ALIAS OBJECT_TYPE
1 0 0 SELECT STATEMENT ALL_ROWS
1 1 0 1 HASH JOIN OUTER
1 2 1 2 INDEX FAST FULL SCAN LOOKUP_PK T1@SEL$2 INDEX (UNIQUE)
1 3 1 2 TABLE ACCESS FULL Master T2@SEL$1 TABLE
For example, SQL Server 2005 makes no distinction between the two execution plans.
I would like to know why the SQL optimizer behaves this way, and especially whether there is a hint or an option that helps the optimizer avoid involving unused joined tables.
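For reference, the optimizer can only drop a table via join elimination when a constraint proves the join cannot change the result. A sketch of the two shapes that qualify, using the tables above (the NOT NULL change is an assumption for illustration, not part of the original schema):

```sql
-- Outer join to LookUp's primary key: each Master row matches at most one
-- LookUp row and is preserved regardless, so LookUp can be eliminated.
SELECT t1.ID_LU
FROM "Master" t1
LEFT OUTER JOIN "LookUp" t2 ON t1.ID_LU = t2.ID_LU;

-- Inner join: elimination additionally needs the FK column to be NOT NULL
-- (and the constraint validated), otherwise the join could filter rows.
ALTER TABLE "Master" MODIFY (ID_LU NOT NULL);
SELECT t1.ID_LU
FROM "Master" t1
JOIN "LookUp" t2 ON t1.ID_LU = t2.ID_LU;
```

The reverse direction (LookUp LEFT JOIN Master) is not a candidate: Master sits on the many side of the foreign key, so joining it can duplicate LookUp rows and it cannot be removed without changing the result.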
Actually, something does not add up. A left outer join selects all rows from the left table even if there is no matching row in the right table. The left table in the first query is the LookUp table. So I cannot understand how this execution plan:
SELECT t1.ID_LU FROM "LookUp" t1
LEFT OUTER JOIN "Master" t2
ON t1.ID_LU = t2.ID_LU;
PLAN_ID ID PARENT_ID DEPTH OPERATION OPTIMIZER OPTIONS OBJECT_NAME OBJECT_ALIAS OBJECT_TYPE
2 0 0 SELECT STATEMENT ALL_ROWS
2 1 0 1 TABLE ACCESS FULL Master T1@SEL$2 TABLE
bypasses the LookUp table. On my 10.2.0.4.0 I get:
SQL> SELECT t1.ID_LU FROM "LookUp" t1
2 LEFT OUTER JOIN "Master" t2
3 ON t1.ID_LU = t2.ID_LU;
no rows selected
Execution Plan
Plan hash value: 3482147238
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 26 | 5 (20)| 00:00:01 |
|* 1 | HASH JOIN OUTER | | 1 | 26 | 5 (20)| 00:00:01 |
| 2 | TABLE ACCESS FULL| LookUp | 1 | 13 | 2 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL| Master | 1 | 13 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("T1"."ID_LU"="T2"."ID_LU"(+))
Note
- dynamic sampling used for this statement
Statistics
209 recursive calls
0 db block gets
48 consistent gets
0 physical reads
0 redo size
274 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
0 rows processed
I do question this plan. I would expect a FULL INDEX SCAN of the LOOKUP_PK index. And for the second query I get the same plan as the OP:
SQL> SELECT t1.ID_LU FROM "Master" t1
2 LEFT OUTER JOIN "LookUp" t2
3 ON t1.ID_LU = t2.ID_LU;
no rows selected
Execution Plan
Plan hash value: 3856835961
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 26 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS OUTER| | 1 | 26 | 2 (0)| 00:00:01 |
| 2 | TABLE ACCESS FULL| Master | 1 | 13 | 2 (0)| 00:00:01 |
|* 3 | INDEX UNIQUE SCAN| LOOKUP_PK | 1 | 13 | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("T1"."ID_LU"="T2"."ID_LU"(+))
Note
- dynamic sampling used for this statement
Statistics
1 recursive calls
0 db block gets
7 consistent gets
0 physical reads
0 redo size
274 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
SY.
Similar Messages
-
How to correct an execution plan that shows the wrong number of rows?
Using Oracle 10gR2 RAC (10.2.0.3) on SUSE Linux 9 (x86_64).
I have a partitioned table that has 5 million rows (5,597,831). However, an execution plan against the table shows the table as having 10 million rows.
Execution plan:
SELECT STATEMENT ALL_ROWS Cost : 275,751 Bytes : 443 Cardinality : 1
3 HASH GROUP BY Cost : 275,751 Bytes : 443 Cardinality : 1
2 PARTITION RANGE ALL Cost : 275,018 Bytes : 4,430,000,000 Cardinality : *10,000,000* Partition # : 2 Partitions accessed #1 - #6
1 TABLE ACCESS FULL TRACESALES.TRACE_BUSINESS_AREA Cost : 275,018 Bytes : 4,430,000,000 Cardinality : 10,000,000 Partition # : 2 Partitions accessed #1 - #6
Plan hash value: 322783426
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 443 | 275K (2)| 00:55:10 | | |
| 1 | HASH GROUP BY | | 1 | 443 | 275K (2)| 00:55:10 | | |
| 2 | PARTITION RANGE ALL| | 10M| 4224M| 275K (2)| 00:55:01 | 1 | 6 |
| 3 | TABLE ACCESS FULL | TRACE_BUSINESS_AREA | 10M| 4224M| 275K (2)| 00:55:01 | 1 | 6 |
How does one correct the explain plan?
The problem: queries against the table are taking hours to complete. The problem started when the table was dropped and then recreated with a new partition.
I have done this drop and recreate against several tables for several years without problems until now.
I have done the following: analyzed statistics against the table, flushed the buffer cache, and created a materialized view.
However, users' queries are taking several hours to complete, whereas before the addition of the partition the queries were taking 5 minutes to complete.
Thanks. BL.
Yes, a complete analysis of statistics was completed on the indexes and against the partitions.
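One thing that may be worth re-running after the drop/recreate, since partition-level and global statistics can disagree: a full gather with explicit granularity. Schema and table names are taken from the plan above; the exact options are a suggestion, not a diagnosis:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'TRACESALES',
    tabname          => 'TRACE_BUSINESS_AREA',
    granularity      => 'ALL',                        -- global + partition stats
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);                        -- gather index stats too
END;
/
```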
Table creation statement:
CREATE TABLE TRACESALES.TRACE_BUSINESS_AREA
... *(400 columns)*
TABLESPACE "trace_OLAPTS"
PCTUSED 0
PCTFREE 15
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL KEEP
PARTITION BY RANGE (YEAR)
PARTITION TRACE_06 VALUES LESS THAN ('2007')
NOLOGGING
NOCOMPRESS
TABLESPACE TRACE_2006
PCTFREE 15
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
PARTITION TRACE_07 VALUES LESS THAN ('2008')
NOLOGGING
NOCOMPRESS
TABLESPACE TRACE_2007
PCTFREE 15
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
PARTITION TRACE_08 VALUES LESS THAN ('2009')
NOLOGGING
NOCOMPRESS
TABLESPACE TRACE_2008
PCTFREE 15
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
PARTITION TRACE_09 VALUES LESS THAN ('2010')
NOLOGGING
NOCOMPRESS
TABLESPACE TRACE_2009
PCTFREE 15
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
PARTITION TRACE_10 VALUES LESS THAN ('2011')
NOLOGGING
NOCOMPRESS
TABLESPACE TRACE_2010
PCTFREE 15
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
PARTITION TRACE_11 VALUES LESS THAN (MAXVALUE)
NOLOGGING
NOCOMPRESS
TABLESPACE TRACE_2011
PCTFREE 15
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 1M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
NOCOMPRESS
CACHE
PARALLEL ( DEGREE DEFAULT INSTANCES DEFAULT )
MONITORING;
*(index statements, constraints, triggers and security)*
Table caching is on and running in parallel degree 4 instances 1. -
What does cost mean in an execution plan?
What does a cost increase actually mean, compared to the decrease in the amount of data that has to be processed?
More details:
I have 3 tables:
- users
- transaction types
- user transactions
I'm joining user transactions with transaction types for a particular user and aggregating some result from the transaction type table.
Originally there was no index on the user transactions table containing both the user_id (on which I filter) and the transaction id. This led to a TABLE ACCESS and joining a lot of data for nothing. I created an index containing both fields so that no TABLE ACCESS is needed any more. It indeed reduced the amount of data to be merged and aggregated considerably, but the COST of the query went up! This I don't understand. (The query itself did not seem to take more or less time.)
What does a cost increase actually mean, compared to the decrease in the amount of data that has to be processed?
Below are the two execution plans:
Execution plan with low cost and table access.
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=51 Card=1 Bytes=33)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=51 Card=283759 Bytes=9364047)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'THIRD_PARTY_TRANSACTIONS' (Cost=2 Card=448 Bytes=8960)
4 3 INDEX (RANGE SCAN) OF 'X_TP_TRANSACTIONS_USER' (NON-UNIQUE) (Cost=1 Card=448)
5 2 INDEX (FAST FULL SCAN) OF 'X_ISP_TRANSACTIONS_TRID_FINAL' (NON-UNIQUE) (Cost=4 Card=63339 Bytes=823407)
Execution plan with only index join but high cost
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=272 Card=1 Bytes=28)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=272 Card=3287 Bytes=92036)
3 2 INDEX (FAST FULL SCAN) OF 'X_TP_TRANSACTIONS_TRID_USERID' (NON-UNIQUE) (Cost=21 Card=3287 Bytes=49305)
4 2 INDEX (FAST FULL SCAN) OF 'X_ISP_TRANSACTIONS_TRID_FINAL' (NON-UNIQUE) (Cost=4 Card=63339 Bytes=823407)
When Oracle parses and optimises a query, it creates several execution plans and assigns a cost to each plan.
It then uses this cost to compare these plans.
But you can't use this cost as an indication of how "heavy" a query is, nor can you compare the costs of different queries.
It is only a value used internally by Oracle.
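If you want something more tangible than cost to compare the two variants, logical I/O is usually a better yardstick; for example, in SQL*Plus (the table name is taken from the plans above, and the COUNT(*) is just a stand-in for the real aggregate query):

```sql
-- Run each variant and compare "consistent gets" / "physical reads"
-- in the statistics section, rather than the optimizer's cost figure.
SET AUTOTRACE TRACEONLY STATISTICS
SELECT COUNT(*) FROM third_party_transactions;  -- stand-in for the real query
SET AUTOTRACE OFF
```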
greetings
Freek D -
Locked table stats on volatile IOT result in suboptimal execution plan
Hello,
since upgrading to 10gR2 we have been experiencing weird behaviour in the execution plans of queries that join tables with a volatile IOT on which we deleted and locked statistics.
Execution plan of the example query running ok (SYS_IOT... is the volatile IOT):
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=12 Card=1 Bytes=169)
1 0 SORT AGGREGATE (Card=1 Bytes=169)
2 1 NESTED LOOPS OUTER (Cost=12 Card=1 Bytes=169)
3 2 NESTED LOOPS OUTER (Cost=10 Card=1 Bytes=145)
4 3 NESTED LOOPS (Cost=6 Card=1 Bytes=121)
5 4 NESTED LOOPS OUTER (Cost=5 Card=1 Bytes=100)
6 5 NESTED LOOPS (Cost=5 Card=1 Bytes=96)
7 6 INDEX FAST FULL SCAN ...SYS_IOT_TOP_76973 (Cost=2 Card=1 Bytes=28)
8 6 TABLE ACCESS BY INDEX ROWID ...VSUC (Cost=3 Card=1 Bytes=68)
9 8 INDEX RANGE SCAN ...VSUC_VORG (Cost=2 Card=1)
Since 10gR2 the index on the joined table is not used:
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=857 Card=1 Bytes=179)
1 0 SORT AGGREGATE (Card=1 Bytes=179)
2 1 NESTED LOOPS OUTER (Cost=857 Card=1 Bytes=179)
3 2 NESTED LOOPS OUTER (Cost=855 Card=1 Bytes=155)
4 3 NESTED LOOPS (Cost=851 Card=1 Bytes=131)
5 4 NESTED LOOPS OUTER (Cost=850 Card=1 Bytes=110)
6 5 NESTED LOOPS (Cost=850 Card=1 Bytes=106)
7 6 TABLE ACCESS FULL ...VSUC (Cost=847 Card=1 Bytes=68)
8 6 INDEX RANGE SCAN ...SYS_IOT_TOP_76973 (Cost=3 Card=1 Bytes=38)
I did an UNLOCK_TABLE_STATS and GATHER_TABLE_STATS on the IOT and everything worked fine - the database used the first execution plan.
Also, setting OPTIMIZER_FEATURES_ENABLE to 10.1.0.4 results in the correct execution plan, whereas 10.2.0.2 (the default on 10gR2) doesn't use the index - so I suppose it's an optimizer problem/bug/whatever.
I've also tried forcing the index with a hint - it scans the index but the costs are extremely high.
Any help would be greatly appreciated,
regards
-sd
sdeng,
The first thing you should do is to switch to using the dbms_xplan package for generating execution plans. Amongst other things, this will give you the filter and access predicates as they were when Oracle produced the execution plan. It will also report comments like: 'dynamic sampling used for this statement'.
If you have deleted and locked stats on the IOT, then 10gR2 will (by default) be using dynamic sampling on that object - which means (in theory) it gets a better picture of how many rows really are there, and how well they might join to the next table. This may be enough to explain the change in plan.
What you might try, if the first plan is guaranteed to be good, is to collect stats on the IOT when there is NO data in it, then lock the stats. (Alternatively, fake some stats that say the table is empty, if it is never really empty.)
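A sketch of that approach (the IOT's table name is a placeholder here - the SYS_IOT_TOP_76973 shown in the plans is its index, not the table):

```sql
-- While the IOT is empty:
BEGIN
  DBMS_STATS.UNLOCK_TABLE_STATS(user, 'MY_VOLATILE_IOT');
  DBMS_STATS.GATHER_TABLE_STATS(user, 'MY_VOLATILE_IOT');
  DBMS_STATS.LOCK_TABLE_STATS(user, 'MY_VOLATILE_IOT');
END;
/
-- Or, if it is never really empty, fake the "empty" statistics directly:
BEGIN
  DBMS_STATS.SET_TABLE_STATS(user, 'MY_VOLATILE_IOT', numrows => 0, numblks => 1);
  DBMS_STATS.LOCK_TABLE_STATS(user, 'MY_VOLATILE_IOT');
END;
/
```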
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
SQL Query C# Using Execution Plan Cache Without SP
I have a situation where I am executing an SQL query through C# code. I cannot use a stored procedure because the database is hosted by another company and I'm not allowed to create any new procedures. If I run my query in SQL Management Studio, the first time takes approx 3 secs, then every query after that is instant. My query is looking for date ranges and accounts, so if I loop through accounts, each one takes approx 3 secs in my code. If I close the program and run it again, the accounts that originally took 3 secs are now instant in my code. So my conclusion was that it is using a cached execution plan. I cannot find how to make the execution plan apply to non-stored-procedure code. I have created a SqlCommand object with my query and 3 params. I loop through each one, keeping the same command object and only changing the 3 params. It seems that each version with the different params is getting cached in the execution plan cache, so they are now fast for that particular query. My question is: how can I get SQL to not do this, either by loading the execution plan or by making SQL think that my query is the same execution plan as the previous one? I have found multiple questions on this that pertain to stored procedures, but nothing I can find about direct text query code.
Bob;
I did the query running different accounts and different dates, with instant results AFTER the very first query, which took the expected 3 secs. I changed all 3 fields that I've got code parameters for, and it still remains instant in Management Studio but still remains slow in my code. I'm providing a sample of the base query I'm using.
select i.Field1, i.Field2,
d.Field3 'Field3',
ip.Field4 'Field4',
k.Field5 'Field5'
from SampleDataTable1 i,
SampleDataTable2 k,
SampleDataTable3 ip,
SampleDataTable4 d
where i.Field1 = k.Field1 and i.Field4 = ip.Field4
and i.FieldDate between '<fromdate>' and '<thrudate>'
and k.Field6 = <Account>
Obviously the field names have been altered because the database is not mine, but other than the actual names it is accurate. It works; it just takes too long in code, as described in the initial post.
My params setup during the init for the connection and the command.
sqlCmd.Parameters.Add("@FromDate", SqlDbType.DateTime);
sqlCmd.Parameters.Add("@ThruDate", SqlDbType.DateTime);
sqlCmd.Parameters.Add("@Account", SqlDbType.Decimal);
Each loop thru the code changes these 3 fields.
sqlCommand.Parameters["@FromDate"].Value = dtFrom;
sqlCommand.Parameters["@ThruDate"].Value = dtThru;
sqlCommand.Parameters["@Account"].Value = sAccountNumber;
SqlDataReader reader = sqlCommand.ExecuteReader();
while (reader.Read())
{
    // process the row here
}
reader.Close();
One thing I have noticed is that the account field is decimal(20,0), and by default the init I'm using defaults to decimal(10), so I'm going to change the init to:
sqlCmd.Parameters["@Account"].Precision = 20;
sqlCmd.Parameters["@Account"].Scale = 0;
I don't believe this will change anything, but at this point I'm ready to try anything to get the query running faster.
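One detail worth checking before anything else: for the cached plan to be shared, the command text sent from C# has to reference the parameters themselves rather than spliced-in literals. With the sample names from above, the text given to the SqlCommand would look like this (a sketch, not the real schema):

```sql
select i.Field1, i.Field2,
       d.Field3 'Field3',
       ip.Field4 'Field4',
       k.Field5 'Field5'
from SampleDataTable1 i,
     SampleDataTable2 k,
     SampleDataTable3 ip,
     SampleDataTable4 d
where i.Field1 = k.Field1 and i.Field4 = ip.Field4
  and i.FieldDate between @FromDate and @ThruDate
  and k.Field6 = @Account
```

If the code builds the string with '<fromdate>'-style substitution instead, every account/date combination is a distinct ad-hoc statement and gets its own plan.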
Bob; -
I have 2 Oracle instances with nearly identical sets of objects - one for development and one for production use. I am about to deploy some upgraded functionality from development to production, and I have spent hours optimizing application SQL statements with Enterprise Manager's SQL Tuner; it has found several optimized execution plans that make a significant difference in performance. Is there a way to migrate these optimized execution plans from development to production when I deploy the rest of my application upgrade, or will I need to spend time again running things through the SQL Tuner on production?
Thanks for your help.
I think you can export and import the histogram statistics of that schema or those tables (DBMS_STATS.EXPORT_SCHEMA_STATS or DBMS_STATS.EXPORT_TABLE_STATS) from your development to production.
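A sketch of the mechanics, assuming a schema named APPOWNER and a transfer table named STATS_XFER (both placeholders):

```sql
-- On development:
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE('APPOWNER', 'STATS_XFER');
  DBMS_STATS.EXPORT_SCHEMA_STATS('APPOWNER', 'STATS_XFER');
END;
/
-- Copy the STATS_XFER table to production (exp/imp or Data Pump), then:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS('APPOWNER', 'STATS_XFER');
END;
/
```

Note this copies optimizer statistics, not the plans themselves, so production should arrive at the same plans only insofar as its data matches.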
http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#sthref7903
Optimizing SQL Statements
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/part4.htm
Exporting and Importing Statistics
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41854 -
How to write SQL in a Crystal Report that can reuse the SQL execution plan cache?
I wrote the following SQL with Crystal Report parameter fields, and it is connecting to SQL 2005:
Select Name from Customer where CustID = '{?CustID}'
The SQL Profiler shows that it is an ad-hoc query; how do I write parameterized SQL which can reuse the execution plan?
Edited by: Chan Yue Wah on May 14, 2009 3:17 AM
Since there are too many reports, it is not possible to rewrite them all. Does Crystal Reports not have an option to change how it queries the database?
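For comparison, this is what a parameterized version of that statement looks like on the SQL Server side - one plan is compiled and reused for every CustID value (the varchar(20) type is an assumption):

```sql
EXEC sp_executesql
     N'select Name from Customer where CustID = @CustID',
     N'@CustID varchar(20)',
     @CustID = 'ABC123';
```

Crystal's {?CustID} substitution, by contrast, pastes the literal into the text, so each value produces a distinct ad-hoc statement.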
-
SQL - Need tuning tips for GROUP BY [LATEST EXECUTION PLAN IS ATTACHED]
Hi All Experts,
My SQL is taking a lot of time to execute. If I remove the GROUP BY clause it runs within a minute, but as soon as I put the sum() and GROUP BY clause in, it takes ages to run. The number of records without the GROUP BY clause is almost 85 lakhs (8 million). Is the huge dataset killing this? Is there any way to tune the GROUP BY? Below are my SELECT hints and the execution plan. Please help.
SQL
SELECT /*+ CURSOR_SHARING_EXACT gather_plan_statistics all_rows no_index(atm) no_expand
leading (src cpty atm)
index(bk WBKS_PK) index(src WSRC_UK1) index(acct WACC_UK1)
use_nl(acct src ccy prd cpty grate sb) */
EXECUTION PLAN
PLAN_TABLE_OUTPUT
SQL_ID 1y5pdhnb9tks5, child number 0
SELECT /*+ CURSOR_SHARING_EXACT gather_plan_statistics all_rows no_index(atm) no_expand leading (src cpty atm) index(bk
WBKS_PK) index(src WSRC_UK1) index(acct WACC_UK1) use_nl(acct src ccy prd cpty grate sb) */ atm.business_date,
atm.entity legal_entity, TO_NUMBER (atm.set_of_books) setofbooksid, atm.source_system_id sourcesystemid,
ccy.ccy_currency_code ccy_currency_code, acct.acct_account_code, 0 gl_bal, SUM (atm.amount)
atm_bal, 0 gbp_equ, ROUND (SUM (atm.amount * grate.rate), 4) AS
atm_equ, prd.prd_product_code, glacct.parentreportingclassification parentreportingclassification,
cpty_counterparty_code FROM wh_sources_d src,
Plan hash value: 4193892926
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | HASH GROUP BY | | 1 | 1 | 471 |00:31:38.26 | 904M| 76703 | 649K| 649K| 1149K (0)|
| 2 | NESTED LOOPS | | 1 | 1 | 8362K|00:47:06.06 | 904M| 76703 | | | |
| 3 | NESTED LOOPS | | 1 | 1 | 10M|00:28:48.84 | 870M| 17085 | | | |
| 4 | NESTED LOOPS | | 1 | 1 | 10M|00:27:56.05 | 849M| 17084 | | | |
| 5 | NESTED LOOPS | | 1 | 8 | 18M|00:14:10.93 | 246M| 17084 | | | |
| 6 | NESTED LOOPS | | 1 | 22 | 18M|00:11:58.96 | 189M| 17084 | | | |
| 7 | NESTED LOOPS | | 1 | 22 | 18M|00:10:24.69 | 152M| 17084 | | | |
| 8 | NESTED LOOPS | | 1 | 1337 | 18M|00:06:00.74 | 95M| 17083 | | | |
| 9 | NESTED LOOPS | | 1 | 1337 | 18M|00:02:52.20 | 38M| 17073 | | | |
|* 10 | HASH JOIN | | 1 | 185K| 18M|00:03:46.38 | 1177K| 17073 | 939K| 939K| 575K (0)|
| 11 | NESTED LOOPS | | 1 | 3 | 3 |00:00:00.01 | 11 | 0 | | | |
| 12 | TABLE ACCESS BY INDEX ROWID | WH_SOURCES_D | 1 | 3 | 3 |00:00:00.01 | 3 | 0 | | | |
|* 13 | INDEX RANGE SCAN | WSRC_UK1 | 1 | 3 | 3 |00:00:00.01 | 2 | 0 | | | |
|* 14 | TABLE ACCESS BY INDEX ROWID | WH_COUNTERPARTIES_D | 3 | 1 | 3 |00:00:00.01 | 8 | 0 | | | |
|* 15 | INDEX UNIQUE SCAN | WCPY_U1 | 3 | 1 | 3 |00:00:00.01 | 5 | 0 | | | |
| 16 | PARTITION RANGE SINGLE | | 1 | 91M| 91M|00:00:00.08 | 1177K| 17073 | | | |
|* 17 | TABLE ACCESS FULL | WH_ATM_BALANCES_F | 1 | 91M| 91M|00:00:00.04 | 1177K| 17073 | | | |
|* 18 | TABLE ACCESS BY INDEX ROWID | WH_PRODUCTS_D | 18M| 1 | 18M|00:01:43.88 | 37M| 0 | | | |
|* 19 | INDEX UNIQUE SCAN | WPRD_UK1 | 18M| 1 | 18M|00:00:52.13 | 18M| 0 | | | |
|* 20 | TABLE ACCESS BY GLOBAL INDEX ROWID| WH_BOOKS_D | 18M| 1 | 18M|00:02:53.01 | 56M| 10 | | | |
|* 21 | INDEX UNIQUE SCAN | WBKS_PK | 18M| 1 | 18M|00:01:08.32 | 37M| 10 | | | |
|* 22 | TABLE ACCESS BY INDEX ROWID | T_SDM_SOURCEBOOK | 18M| 1 | 18M|00:03:43.66 | 56M| 1 | | | |
|* 23 | INDEX RANGE SCAN | TSSB_N5 | 18M| 2 | 23M|00:01:11.50 | 18M| 1 | | | |
|* 24 | TABLE ACCESS BY INDEX ROWID | WH_CURRENCIES_D | 18M| 1 | 18M|00:01:51.21 | 37M| 0 | | | |
|* 25 | INDEX UNIQUE SCAN | WCUR_PK | 18M| 1 | 18M|00:00:49.26 | 18M| 0 | | | |
| 26 | TABLE ACCESS BY INDEX ROWID | WH_GL_DAILY_RATES_F | 18M| 1 | 18M|00:01:55.84 | 56M| 0 | | | |
|* 27 | INDEX UNIQUE SCAN | WGDR_U2 | 18M| 1 | 18M|00:01:10.89 | 37M| 0 | | | |
| 28 | INLIST ITERATOR | | 18M| | 10M|00:22:40.03 | 603M| 0 | | | |
|* 29 | TABLE ACCESS BY INDEX ROWID | WH_ACCOUNTS_D | 150M| 1 | 10M|00:20:19.05 | 603M| 0 | | | |
|* 30 | INDEX UNIQUE SCAN | WACC_UK1 | 150M| 5 | 150M|00:10:16.81 | 452M| 0 | | | |
| 31 | TABLE ACCESS BY INDEX ROWID | T_SDM_GLACCOUNT | 10M| 1 | 10M|00:00:50.64 | 21M| 1 | | | |
|* 32 | INDEX UNIQUE SCAN | TSG_PK | 10M| 1 | 10M|00:00:26.17 | 10M| 0 | | | |
|* 33 | TABLE ACCESS BY INDEX ROWID | WH_COMMON_TRADES_D | 10M| 1 | 8362K|00:18:52.56 | 33M| 59618 | | | |
|* 34 | INDEX UNIQUE SCAN | WCTD_PK | 10M| 1 | 10M|00:03:06.56 | 21M| 5391 | | | |
Edited by: user535789 on Mar 17, 2011 9:45 PM
Edited by: user535789 on Mar 20, 2011 8:33 PM
user535789 wrote:
Hi All Experts,
My SQL is taking so much time to execute. If I remove the group by clause it is running within a minute but as soon as I am putting sum() and group by clause it is taking ages to run the sql. Number of records are wihout group by clause is almost 85 Lachs (*8 Million*). Is hugh dataset is killing this? Is there any way to tune the data on Group by clause. Below is my Select hints and execution plan. Please help.
I doubt that your 8 million records are shown within minutes.
I guess that the output started after a few minutes. But this does not mean that the full result set is there; it just means the database is able to return the first few records to you after a few minutes.
Once you add a GROUP BY (or an ORDER BY), all the rows need to be fetched before the database can start showing them to you.
But maybe you could run some tests to compare the full output. I find it useful to SET AUTOTRACE TRACEONLY for such a purpose (in SQL*Plus). This avoids printing the selection on the screen. -
Execution plans stored in the cache
Hello Friends and Oracle Gurus...
I'm not much of an expert on execution plans, but I have a select on Oracle 9.2.0.1.0 on a Windows 2003 server that has been taking too long since a few days ago...
When I look in Enterprise Manager, Session Details, it is doing a full table scan on one of my tables....
But just above the execution plan is a message saying:
EXECUTION PLAN STORED ON CACHE: MODE: ALL_ROWS
However, my OPTIMIZER_MODE is CHOOSE and all tables in the select have up-to-date statistics...
How did Oracle come to use this plan, and how do I eliminate it and have it use another one that respects CHOOSE, given the statistics?
Tks for everyone
Guys,
here is the select I was talking about:
SELECT SUM(NOTAS_ITENS.QUANTIDADE),NOTAS_ITENS.CHAVE_NOTA, NOTAS_ITENS.NUMPED,SUM(NOTAS_ITENS.PESO_BRUTO),NOTAS_ITENS.CHAVE_ALMOXARIFADO,
NOTAS_ITENS.VALOR_TOTAL, NOTAS_ITENS.CHAVE,
PRODUTOS.CPROD, PRODUTOS.CODIGO, PRODUTOS.DESCRICAO, PRODUTOS.LOCACAO, PRODUTOS.VASILHAME,PRODUTOS.PESO_LIQUIDO,
UNIDADES.UNIDADE,
PERICULOSIDADE.DESCRICAO
FROM NOTAS_ITENS, PRODUTOS, UNIDADES, PERICULOSIDADE
WHERE (NOTAS_ITENS.CHAVE_PRODUTO = PRODUTOS.CPROD)
AND (NOTAS_ITENS.QUANTIDADE > 0)
AND (PRODUTOS.CHAVE_UNIDADE = UNIDADES.CHAVE)
AND (PRODUTOS.CHAVE_PERICULOSIDADE = PERICULOSIDADE.CHAVE(+))
AND ( CHAVE_NOTA IN
(SELECT CHAVE FROM NOTAS WHERE CHAVE = CHAVE AND (NOTAS.ATIVA = 'SIM') AND (NOTAS.IMPRESSO_ROMANEIO = 'NAO')))
GROUP BY PRODUTOS.CPROD, PRODUTOS.CODIGO, PRODUTOS.DESCRICAO, PRODUTOS.LOCACAO, PRODUTOS.VASILHAME, PRODUTOS.PESO_LIQUIDO,
UNIDADES.UNIDADE,
PERICULOSIDADE.DESCRICAO,
NOTAS_ITENS.CHAVE_NOTA, NOTAS_ITENS.NUMPED, NOTAS_ITENS.CHAVE_ALMOXARIFADO, NOTAS_ITENS.CHAVE, NOTAS_ITENS.VALOR_TOTAL
ORDER BY NOTAS_ITENS.CHAVE;
and here is the execution plan for him..
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=10615 Card=66372 Bytes=11747844)
1 0 SORT (GROUP BY) (Cost=10615 Card=66372 Bytes=11747844)
2 1 HASH JOIN (Cost=8855 Card=66372 Bytes=11747844)
3 2 TABLE ACCESS (FULL) OF 'UNIDADES' (Cost=2 Card=30 Bytes=240)
4 2 HASH JOIN (OUTER) (Cost=8851 Card=66372 Bytes=11216868)
5 4 HASH JOIN (Cost=8696 Card=66372 Bytes=9225708)
6 5 TABLE ACCESS (FULL) OF 'PRODUTOS' (Cost=98 Card=1901 Bytes=171090)
7 5 HASH JOIN (Cost=8584 Card=66387 Bytes=3252963)
8 7 VIEW OF 'index$_join$_005' (Cost=347 Card=13193 Bytes=171509)
9 8 HASH JOIN
10 9 HASH JOIN
11 10 INDEX (RANGE SCAN) OF 'NOTAS_ATIVA_IDX' (NON-UNIQUE) (Cost=140 Card=13193 Bytes=171509)
12 10 INDEX (RANGE SCAN) OF 'NOTAS_IMPRESSO_ROMANEIO' (NON-UNIQUE) (Cost=140 Card=13193 Bytes=171509)
13 9 INDEX (FAST FULL SCAN) OF 'NOTAS_PK' (UNIQUE) (Cost=140 Card=13193 Bytes=171509)
14 7 TABLE ACCESS (FULL) OF 'NOTAS_ITENS' (Cost=8170 Card=265547 Bytes=9559692)
15 4 TABLE ACCESS (FULL) OF 'PERICULOSIDADE' (Cost=2 Card=1 Bytes=30)
Statistics
0 recursive calls
0 db block gets
855476 consistent gets
83917 physical reads
0 redo size
1064 bytes sent via SQL*Net to client
368 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
Note that the cost for the HASH JOIN is high - is there any way to get it lower?
Note that Oracle performs an FTS on the NOTAS_ITENS table, which is quite big for us... I tried a lot of hints but none of them avoided the FTS...
Any tips ? -
Two different HASH GROUP BY in execution plan
Hi ALL;
Oracle version
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
SQL
select company_code, account_number, transaction_id,
decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
(last_day(to_date('04/21/2010','MM/DD/YYYY')) - min(z.accounting_date) ) age,sum(z.amount)
from (
select /*+ PARALLEL(use, 2) */ company_code,substr(account_number, 1, 5) account_number,transaction_id,
decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,use.amount,use.accounting_date
from financials.unbalanced_subledger_entries use
where use.accounting_date >= to_date('04/21/2010','MM/DD/YYYY')
and use.accounting_date < to_date('04/21/2010','MM/DD/YYYY') + 1
UNION ALL
select /*+ PARALLEL(se, 2) */ company_code, substr(se.account_number, 1, 5) account_number,transaction_id,
decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,se.amount,se.accounting_date
from financials.temp2_sl_snapshot_entries se,financials.account_numbers an
where se.account_number = an.account_number
and an.subledger_type in ('C', 'AC')
) z
group by company_code,account_number,transaction_id,decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type)
having abs(sum(z.amount)) >= 0.01
Explain plan:
Plan hash value: 1993777817
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | 76718 (100)| | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,02 | P->S | QC (RAND) |
|* 3 | FILTER | | | | | | Q1,02 | PCWC | |
| 4 | HASH GROUP BY | | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,02 | PCWP | |
| 5 | PX RECEIVE | | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,02 | PCWP | |
| 6 | PX SEND HASH | :TQ10001 | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,01 | P->P | HASH |
| 7 | HASH GROUP BY | | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,01 | PCWP | |
| 8 | VIEW | | 15M| 2055M| 76116 (1)| 00:15:14 | Q1,01 | PCWP | |
| 9 | UNION-ALL | | | | | | Q1,01 | PCWP | |
| 10 | PX BLOCK ITERATOR | | 11 | 539 | 1845 (1)| 00:00:23 | Q1,01 | PCWC | |
|* 11 | TABLE ACCESS FULL | UNBALANCED_SUBLEDGER_ENTRIES | 11 | 539 | 1845 (1)| 00:00:23 | Q1,01 | PCWP | |
|* 12 | HASH JOIN | | 15M| 928M| 74270 (1)| 00:14:52 | Q1,01 | PCWP | |
| 13 | BUFFER SORT | | | | | | Q1,01 | PCWC | |
| 14 | PX RECEIVE | | 21 | 210 | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10000 | 21 | 210 | 2 (0)| 00:00:01 | | S->P | BROADCAST |
|* 16 | TABLE ACCESS FULL| ACCOUNT_NUMBERS | 21 | 210 | 2 (0)| 00:00:01 | | | |
| 17 | PX BLOCK ITERATOR | | 25M| 1250M| 74183 (1)| 00:14:51 | Q1,01 | PCWC | |
|* 18 | TABLE ACCESS FULL | TEMP2_SL_SNAPSHOT_ENTRIES | 25M| 1250M| 74183 (1)| 00:14:51 | Q1,01 | PCWP | |
Predicate Information (identified by operation id):
3 - filter(ABS(SUM(SYS_OP_CSR(SYS_OP_MSR(SUM("Z"."AMOUNT"),MIN("Z"."ACCOUNTING_DATE")),0)))>=.01)
11 - access(:Z>=:Z AND :Z<=:Z)
filter(("USE"."ACCOUNTING_DATE"<TO_DATE(' 2010-04-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"USE"."ACCOUNTING_DATE">=TO_DATE(' 2010-04-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
12 - access("SE"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
16 - filter(("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C'))
18 - access(:Z>=:Z AND :Z<=:Z)
I have a few doubts regarding this execution plan and I am sure my questions will get answered here.
Q-1: Why am I getting two different HASH GROUP BY operations (operation ids 4 & 7) even though there is only a single GROUP BY clause? Is that due to the UNION ALL operator merging two different row sources, with HASH GROUP BY applied to each of them individually?
Q-2: What does 'BUFFER SORT' (operation id 13) indicate? Sometimes I get this operation and sometimes I don't. For some other queries, I have observed around 10GB of TEMP space and a high cost against this operation. So I'm just curious whether it is really helpful; if not, how do I avoid it?
Q-3: Under the PREDICATE section, what does step 18 suggest? I am not using any filter like this: access(:Z>=:Z AND :Z<=:Z)
aychin wrote:
Hi,
About BUFFER SORT, first of all it is not specific to parallel executions. This step in the plan indicates that internal sorting takes place. It doesn't mean that rows will be returned sorted; in other words, it doesn't guarantee that rows will be sorted in the resulting row set, because that is not the main purpose of this operation. I've previously suggested that the "buffer sort" should really simply say "buffering", but that it hijacks the buffering mechanism of sorting and therefore gets reported completely spuriously as a sort (see http://jonathanlewis.wordpress.com/2006/12/17/buffer-sorts/ ).
In this case, I think the buffer sort may be a consequence of the broadcast distribution - it tells us that the entire broadcast is being buffered before the hash join starts. It's interesting to note that in the more recent of the two plans with a buffer sort, the second (probe) table in the hash join seems to be accessed first and broadcast before the first table is scanned to allow the join to occur.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
Execution plan with Concatenation
Hi All,
Could anyone help me find out why CONCATENATION is being used by the optimizer and how I can avoid it?
Oracle Version : 10.2.0.4
select * from (
select distinct EntityType, EntityID, DateModified, DateCreated, IsDeleted
from ife.EntityIDs i
join (select orgid from equifaxnormalize.org_relationships where orgid is not null and related_orgid is not null
and ((Date_Modified >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and Date_Modified < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss'))
OR (Date_Created >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and Date_Created < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss')))
) r on (r.orgid = i.entityid)
where EntityType = 1
and ((DateModified >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and DateModified < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss'))
OR (DateCreated >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss') and DateCreated < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss')))
and ( IsDeleted = 0)
and IsDistributable = 1
and EntityID >= 0
order by EntityID
--order by NLSSORT(EntityID,'NLS_SORT=BINARY')
) where rownum <= 10;
Execution Plan
Plan hash value: 227906424
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 10 | 570 | 39 (6)| 00:00:01 | | |
|* 1 | COUNT STOPKEY | | | | | | | |
| 2 | VIEW | | 56 | 3192 | 39 (6)| 00:00:01 | | |
|* 3 | SORT ORDER BY STOPKEY | | 56 | 3416 | 39 (6)| 00:00:01 | | |
| 4 | HASH UNIQUE | | 56 | 3416 | 38 (3)| 00:00:01 | | |
| 5 | CONCATENATION | | | | | | | |
|* 6 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 29 | 1 (0)| 00:00:01 | | |
| 7 | NESTED LOOPS | | 27 | 1647 | 17 (0)| 00:00:01 | | |
| 8 | TABLE ACCESS BY GLOBAL INDEX ROWID| ENTITYIDS | 27 | 864 | 4 (0)| 00:00:01 | ROWID | ROWID |
|* 9 | INDEX RANGE SCAN | UX_TYPE_MOD_DIST_DEL_ENTITYID | 27 | | 2 (0)| 00:00:01 | | |
|* 10 | INDEX RANGE SCAN | IX_EFX_ORGRELATION_ORGID | 1 | | 1 (0)| 00:00:01 | | |
|* 11 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 29 | 1 (0)| 00:00:01 | | |
| 12 | NESTED LOOPS | | 29 | 1769 | 20 (0)| 00:00:01 | | |
| 13 | PARTITION RANGE ALL | | 29 | 928 | 5 (0)| 00:00:01 | 1 | 3 |
|* 14 | TABLE ACCESS BY LOCAL INDEX ROWID| ENTITYIDS | 29 | 928 | 5 (0)| 00:00:01 | 1 | 3 |
|* 15 | INDEX RANGE SCAN | IDX_ENTITYIDS_ETYPE_DC | 29 | | 4 (0)| 00:00:01 | 1 | 3 |
|* 16 | INDEX RANGE SCAN | IX_EFX_ORGRELATION_ORGID | 1 | | 1 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=10)
3 - filter(ROWNUM<=10)
6 - filter(("DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13
14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')) AND "RELATED_ORGID" IS NOT NULL)
9 - access("I"."ENTITYTYPE"=1 AND "I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0 AND "I"."DATEMODIFIED"<=TO_DATE(' 2011-06-13 14:00:00',
'syyyy-mm-dd hh24:mi:ss'))
filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0)
10 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
11 - filter(("DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13
14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')) AND "RELATED_ORGID" IS NOT NULL)
14 - filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND (LNNVL("I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("I"."DATEMODIFIED"<=TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
"I"."ENTITYID">=0)
15 - access("I"."ENTITYTYPE"=1 AND "I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."DATECREATED"<=TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
16 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
The ife.entityids table has been range-partitioned on the data_provider column.
Is there any better way to rewrite this SQL, or any way to eliminate the CONCATENATION?
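For background: CONCATENATION is Oracle's OR-expansion. The optimizer splits the OR'd date predicates into two index-driven branches and adds LNNVL filters to the second branch so rows matching both predicates are not returned twice (that is the LNNVL visible at step 14). The same rewrite can be expressed by hand as a UNION ALL with an exclusion predicate. A rough sketch of the equivalence using SQLite through Python - table and column names are made up for illustration, and plain NOT (...) only matches LNNVL's semantics when the columns are not null:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entityids (id INTEGER, datemodified TEXT, datecreated TEXT)")
conn.executemany("INSERT INTO entityids VALUES (?,?,?)", [
    (1, "2011-06-12 15:00:00", "2011-06-01 09:00:00"),  # matches DateModified branch
    (2, "2011-06-01 09:00:00", "2011-06-12 16:00:00"),  # matches DateCreated branch
    (3, "2011-06-12 15:00:00", "2011-06-12 16:00:00"),  # matches both
    (4, "2011-06-01 09:00:00", "2011-06-01 09:00:00"),  # matches neither
])
lo, hi = "2011-06-12 14:00:00", "2011-06-13 14:00:00"

# Original OR form.
or_form = conn.execute(
    "SELECT id FROM entityids "
    "WHERE (datemodified >= ? AND datemodified < ?) "
    "   OR (datecreated  >= ? AND datecreated  < ?) "
    "ORDER BY id", (lo, hi, lo, hi)).fetchall()

# Hand-written OR-expansion: two branches, the second excluding rows
# already returned by the first (the role LNNVL plays in the plan).
expanded = conn.execute(
    "SELECT id FROM entityids WHERE datemodified >= ? AND datemodified < ? "
    "UNION ALL "
    "SELECT id FROM entityids WHERE (datecreated >= ? AND datecreated < ?) "
    "  AND NOT (datemodified >= ? AND datemodified < ?) "
    "ORDER BY id", (lo, hi, lo, hi, lo, hi)).fetchall()

print(or_form == expanded)  # both return ids 1, 2, 3 exactly once
```

If the goal is to stop the transformation rather than reproduce it, the NO_EXPAND hint is the usual switch; USE_CONCAT forces it the other way.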
Thanks
We can't use data_provider in the given query. We need to pull the data irrespective of data_provider, based on ENTITYID.
Yes table has only three partitions...
Not sure the issue is due to concatenation... but we are in the process of creating the desired indexes, which should help this SQL.
In development we have created a multicolumn index; below is the execution plan. In development it takes just 4-5 seconds to execute, but in production it takes more than 8-9 minutes.
Below is the execution plan from Dev which seems to perform fast:
Execution Plan
Plan hash value: 3121857971
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 57 | 353 (1)| 00:00:05 | | |
|* 1 | COUNT STOPKEY | | | | | | | |
| 2 | VIEW | | 1 | 57 | 353 (1)| 00:00:05 | | |
|* 3 | SORT ORDER BY STOPKEY | | 1 | 58 | 353 (1)| 00:00:05 | | |
| 4 | HASH UNIQUE | | 1 | 58 | 352 (1)| 00:00:05 | | |
| 5 | CONCATENATION | | | | | | | |
|* 6 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 26 | 3 (0)| 00:00:01 | | |
| 7 | NESTED LOOPS | | 1 | 58 | 170 (1)| 00:00:03 | | |
| 8 | PARTITION RANGE ALL | | 56 | 1792 | 16 (0)| 00:00:01 | 1 | 3 |
|* 9 | TABLE ACCESS BY LOCAL INDEX ROWID| ENTITYIDS | 56 | 1792 | 16 (0)| 00:00:01 | 1 | 3 |
|* 10 | INDEX RANGE SCAN | IDX_ENTITYIDS_ETYPE_DC | 56 | | 7 (0)| 00:00:01 | 1 | 3 |
|* 11 | INDEX RANGE SCAN | EFX_ORGID | 2 | | 2 (0)| 00:00:01 | | |
|* 12 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 26 | 3 (0)| 00:00:01 | | |
| 13 | NESTED LOOPS | | 1 | 58 | 181 (0)| 00:00:03 | | |
| 14 | PARTITION RANGE ALL | | 57 | 1824 | 10 (0)| 00:00:01 | 1 | 3 |
|* 15 | INDEX RANGE SCAN | UX_TYPE_MOD_DIST_DEL_ENTITYID | 57 | 1824 | 10 (0)| 00:00:01 | 1 | 3 |
|* 16 | INDEX RANGE SCAN | EFX_ORGID | 2 | | 2 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=10)
3 - filter(ROWNUM<=10)
6 - filter("RELATED_ORGID" IS NOT NULL AND ("DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')))
9 - filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0)
10 - access("I"."ENTITYTYPE"=1 AND "I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."DATECREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
11 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
12 - filter("RELATED_ORGID" IS NOT NULL AND ("DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')))
15 - access("I"."ENTITYTYPE"=1 AND "I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0 AND "I"."DATEMODIFIED"<TO_DATE(' 2011-06-13 14:00:00',
'syyyy-mm-dd hh24:mi:ss'))
filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND (LNNVL("I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("I"."DATECREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
"I"."ENTITYID">=0)
16 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
Thanks -
Query optimization - query is taking a long time even though there is no table scan in the execution plan
Hi All,
The below query takes a very long time to execute even though all the required indexes are present.
Also, there is no table scan in the execution plan. I did a lot of research but I am unable to find a solution.
Please help; this is required very urgently. Thanks in advance. :)
WITH cte
AS (
SELECT Acc_ex1_3
FROM Acc_ex1
INNER JOIN Acc_ex5 ON (
Acc_ex1.Acc_ex1_Id = Acc_ex5.Acc_ex5_Id
AND Acc_ex1.OwnerID = Acc_ex5.OwnerID
)
WHERE (
cast(Acc_ex5.Acc_ex5_92 AS DATETIME) >= '12/31/2010 18:30:00'
AND cast(Acc_ex5.Acc_ex5_92 AS DATETIME) < '01/31/2014 18:30:00'
)
)
SELECT DISTINCT R.ReportsTo AS directReportingUserId
,UC.UserName AS EmpName
,UC.EmployeeCode AS EmpCode
,UEx1.Use_ex1_1 AS PortfolioCode
,(
SELECT TOP 1 TerritoryName
FROM UserTerritoryLevelView
WHERE displayOrder = 6
AND UserId = R.ReportsTo
) AS BranchName
,GroupsNotContacted AS groupLastContact
,GroupCount AS groupTotal
FROM ReportingMembers R
INNER JOIN TeamMembers T ON (
T.OwnerID = R.OwnerID
AND T.MemberID = R.ReportsTo
AND T.ReportsTo = 1
)
INNER JOIN UserContact UC ON (
UC.CompanyID = R.OwnerID
AND UC.UserID = R.ReportsTo
)
INNER JOIN Use_ex1 UEx1 ON (
UEx1.OwnerId = R.OwnerID
AND UEx1.Use_ex1_Id = R.ReportsTo
)
INNER JOIN (
SELECT Accounts.AssignedTo
,count(DISTINCT Acc_ex1_3) AS GroupCount
FROM Accounts
INNER JOIN Acc_ex1 ON (
Accounts.AccountID = Acc_ex1.Acc_ex1_Id
AND Acc_ex1.Acc_ex1_3 > '0'
AND Accounts.OwnerID = 109
)
GROUP BY Accounts.AssignedTo
) TotalGroups ON (TotalGroups.AssignedTo = R.ReportsTo)
INNER JOIN (
SELECT Accounts.AssignedTo
,count(DISTINCT Acc_ex1_3) AS GroupsNotContacted
FROM Accounts
INNER JOIN Acc_ex1 ON (
Accounts.AccountID = Acc_ex1.Acc_ex1_Id
AND Acc_ex1.OwnerID = Accounts.OwnerID
AND Acc_ex1.Acc_ex1_3 > '0'
)
INNER JOIN Acc_ex5 ON (
Accounts.AccountID = Acc_ex5.Acc_ex5_Id
AND Acc_ex5.OwnerID = Accounts.OwnerID
)
WHERE Accounts.OwnerID = 109
AND Acc_ex1.Acc_ex1_3 NOT IN (
SELECT Acc_ex1_3
FROM cte
)
GROUP BY Accounts.AssignedTo
) TotalGroupsNotContacted ON (TotalGroupsNotContacted.AssignedTo = R.ReportsTo)
WHERE R.OwnerID = 109
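One thing to check before tuning anything else: the NOT IN against the CTE. If Acc_ex1_3 can be NULL in the subquery's rows, NOT IN silently matches nothing at all, and optimizers also tend to plan NOT IN worse than NOT EXISTS. A small SQLite-via-Python illustration with made-up tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acc (grp TEXT)")
conn.execute("CREATE TABLE contacted (grp TEXT)")
conn.executemany("INSERT INTO acc VALUES (?)", [("A",), ("B",), ("C",)])
# The subquery's column contains a NULL alongside a real value.
conn.executemany("INSERT INTO contacted VALUES (?)", [("A",), (None,)])

# NOT IN against a set containing NULL matches nothing:
not_in = conn.execute(
    "SELECT grp FROM acc WHERE grp NOT IN (SELECT grp FROM contacted)").fetchall()

# NOT EXISTS gives the intended answer and typically plans better:
not_exists = conn.execute(
    "SELECT grp FROM acc a WHERE NOT EXISTS "
    "(SELECT 1 FROM contacted c WHERE c.grp = a.grp) ORDER BY grp").fetchall()

print(not_in)      # []
print(not_exists)  # [('B',), ('C',)]
```

Rewriting the NOT IN as NOT EXISTS (or as a LEFT JOIN ... IS NULL) is usually both safer and faster.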
Please mark it as an answer/helpful if you find it useful. Thanks, Satya Prakash Jugran
Hi All,
Thanks for the replies.
I have optimized that query to make it run in a few seconds.
Here is my final query.
select ReportsTo as directReportingUserId,
UserName AS EmpName,
EmployeeCode AS EmpCode,
Use_ex1_1 AS PortfolioCode,
BranchName,
GroupInfo.groupTotal,
GroupInfo.groupLastContact,
case when exists
(select 1 from ReportingMembers RM
where RM.ReportsTo = UserInfo.ReportsTo
and RM.MemberID <> UserInfo.ReportsTo
) then 0 else UserInfo.ReportsTo end as memberid1,
(select code from Regions where ownerid=109 and name=UserInfo.BranchName) as BranchCode,
ROW_NUMBER() OVER (ORDER BY directReportingUserId) AS ROWNUMBER
FROM
(select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
(select top 1 TerritoryName
from UserTerritoryLevelView
where displayOrder = 6
and UserId = R.ReportsTo) as BranchName,
Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
from ReportingMembers R
INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo )
inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
union
select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode,UEx1.Use_ex1_1,
(select top 1 TerritoryName
from UserTerritoryLevelView
where displayOrder = 6
and UserId = R.ReportsTo) as BranchName,
Case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
from ReportingMembers R
--INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo)
inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
where R.MemberID = 1
) UserInfo
inner join
(select directReportingUserId, sum(Groups) as groupTotal, SUM(GroupsNotContacted) as groupLastContact
from
(select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
FROM ReportingMembers R
INNER JOIN TeamMembers T
ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
--where TerritoryID in ( select ChildRegionID RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
union
select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
case when Acc_ex5.Acc_ex5_92 between GETDATE()-365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
FROM ReportingMembers R
INNER JOIN TeamMembers T
ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID )
--where TerritoryID in ( select ChildRegionID RegionID from RegionWithSubRegions where OwnerID =109 and RegionID = 729)
where R.MemberID = 1
) GroupWiseInfo
group by directReportingUserId
) GroupInfo
on UserInfo.ReportsTo = GroupInfo.directReportingUserId
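The key improvement here is collapsing the two aggregating joins of the original (TotalGroups / TotalGroupsNotContacted) into one pass with conditional aggregation: each row carries a 0/1 flag and a single GROUP BY sums the flags. A minimal sketch of that pattern in SQLite via Python (illustrative table and column names, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE grp_contact (assigned_to INTEGER, grp TEXT, contacted INTEGER)")
conn.executemany(
    "INSERT INTO grp_contact VALUES (?,?,?)",
    [(1, "A", 1), (1, "B", 0), (1, "C", 0), (2, "A", 1)])

# One pass over the data computes both totals per user, replacing the
# two separate aggregating joins of the original query.
rows = conn.execute(
    "SELECT assigned_to, "
    "       COUNT(*) AS group_total, "
    "       SUM(CASE WHEN contacted = 0 THEN 1 ELSE 0 END) AS group_last_contact "
    "FROM grp_contact GROUP BY assigned_to ORDER BY assigned_to").fetchall()

print(rows)  # [(1, 3, 2), (2, 1, 0)]
```

Scanning the detail rows once and deriving every aggregate with CASE flags is generally much cheaper than joining the same tables once per aggregate.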
Please mark it as an answer/helpful if you find it as useful. Thanks, Satya Prakash Jugran -
Same query at same time, but different execution plans from two schemas
Hi!
We just had some performance problems in our production system and I would like to ask for some advice from you.
We had a select query that was run by USER1 on SCHEMA1, and it did a table scan on a huge table.
Using the session browser in TOAD I copied the SQL statement, logged on as SCHEMA1 and ran the same query. I got a different execution plan that avoided the table scan.
So my question is:
How is it possible that the same query gets different execution plans when run in two different schemas at the same time?
Some more information:
The user USER1 runs "alter session set current_schema=SCHEMA1;" when it logs on. Besides that it doesn't do anything so session parameter values are the same for USER1 and SCHEMA1.
SCHEMA1 is the schema owning the tables.
ALL_ROWS is used for both USER1 and SCHEMA1
Our database:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
PL/SQL Release 9.2.0.8.0 - Production
CORE 9.2.0.8.0 Production
TNS for Linux: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production
Does anybody have some suggestions as to why I see different execution plans for the same query, run at the same time, but by different users?
Thanks for the clarification of the schema structure.
What happens if, instead of setting the current session schema to SCHEMA1, you simply add the schema name to all tables, views and other objects inside your select statement?
As in: select * from schema1.dual;
I know that this is not what you want eventually, but it might help to find any misleading objects.
Furthermore, it is not clear what you meant by "avoided a table scan".
Did you avoid a full table scan (FTS) or was the table completely removed from the execution plan?
Can you post both plans?
Edited by: Sven W. on Mar 30, 2010 5:27 PM -
Error while running ETL Execution plan in DAC(BI Financial Analytics)
Hi All,
I have installed and configured BI Analytics and everything went well, but when I run the ETL in DAC to load the data from the source into the Analytics Warehouse, the execution plan fails with the error "Error while creating Server connections Unable to ping repository server." Can anyone please help me resolve this error? Here is the error description.
Error message description:
ETL Process Id : 4
ETL Name : New_Tessco_Financials_Oracle R12
Run Name : New_Tessco_Financials_Oracle R12: ETL Run - 2009-02-06 16:08:48.169
DAC Server : oratestbi(oratestbi.tessco.com)
DAC Port : 3141
Status: Failed
Log File Name: New_Tessco_Financials_Oracle_R12.4.log
Database Connection(s) Used :
DataWarehouse jdbc:oracle:thin:@oratestbi:1521:DEVBI
ORA_R12 jdbc:oracle:thin:@oratestr12:1531:DEV
Informatica Server(s) Used :
Start Time: 2009-02-06 16:08:48.177
Message: Error while creating Server connections Unable to ping repository server.
Actual Start Time: 2009-02-06 16:08:48.177
End Time: 2009-02-06 16:08:51.785
Total Time Taken: 0 Minutes
Thanks in Advance,
Prashanth
Edited by: user10719430 on Feb 6, 2009 2:08 PM
I am facing a similar error. Can you please help me fix it?
Following is the log from DAC server:
31 SEVERE Fri Oct 16 17:22:18 EAT 2009
START OF ETL
32 SEVERE Fri Oct 16 17:22:21 EAT 2009 MESSAGE:::Unable to ping :'ebsczc9282brj', because '
=====================================
STD OUTPUT
=====================================
Informatica(r) PMCMD, version [8.1.1 SP5], build [135.0129], Windows 32-bit
Copyright (c) Informatica Corporation 1994 - 2008
All Rights Reserved.
Invoked at Fri Oct 16 17:22:20 2009
The command: [pingserver] is deprecated. Please use the command [pingservice] in the future.
ERROR: Cannot connect to Integration Service [ebsczc9282brj:6006].
Completed at Fri Oct 16 17:22:21 2009
=====================================
ERROR OUTPUT
=====================================
' Make sure that the server is up and running.
EXCEPTION CLASS::: com.siebel.etl.gui.core.MetaDataIllegalStateException
com.siebel.etl.engine.bore.ServerTokenPool.populate(ServerTokenPool.java:231)
com.siebel.etl.engine.core.ETL.thisETLProcess(ETL.java:225)
com.siebel.etl.engine.core.ETL.run(ETL.java:604)
com.siebel.etl.engine.core.ETL.execute(ETL.java:840)
com.siebel.etl.etlmanager.EtlExecutionManager$1.executeEtlProcess(EtlExecutionManager.java:211)
com.siebel.etl.etlmanager.EtlExecutionManager$1.run(EtlExecutionManager.java:165)
java.lang.Thread.run(Thread.java:619)
33 SEVERE Fri Oct 16 17:22:21 EAT 2009
* CLOSING THE CONNECTION POOL DataWarehouse
34 SEVERE Fri Oct 16 17:22:21 EAT 2009
* CLOSING THE CONNECTION POOL SEBL_80
35 SEVERE Fri Oct 16 17:22:21 EAT 2009
END OF ETL
-------------------------------------------- -
Error in DAC 7.9.4 while building the execution plan
I'm getting Java exception EXCEPTION CLASS::: java.lang.NullPointerException while building the execution plan. The parameters are properly generated.
Earlier we used to get the error - No physical database mapping for the logical source was found for :DBConnection_OLAP as used in QUERY_INDEX_CREATION(DBConnection_OLAP->DBConnection_OLAP)
EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
We resolved this issue by using the in built connection parameters i.e. DBConnection_OLAP. This connection parameter has to be used because the execution plan cannot be built without OLAP connection.
We are not using 7.9.4 OLAP data model since we have highly customized 7.8.3 OLAP model. We have imported 7.8.3 tables in DAC.
We have created all the tasks with the synchronization method, and created the task group and subject area. We are using the built-in DBConnection_OLAP and DBConnection_OLTP parameters and have pointed them at the relevant databases.
system set up -
OBI DAC server - windows server
Informatica server and repository server 7.1.4 - installed on the local machine with
the PATH variables provided.
Is this problem due to the different versions, i.e. we are using OBI DAC 7.9.4 and the underlying data model is 7.8.3?
Please help,
Thanks and regards,
Ashish
Hi,
Can anyone help me here, as I am stuck with the following issue?
I have created a command task in a workflow in Informatica that executes a Unix script to purge the cache on OBIEE. But I want that workflow added as a task in DAC to an already existing plan, and it should run last whenever the incremental load happens.
I created a task in DAC with the name of the workflow, WF_AUTO_PURGE, and added it as a following task in Execution mode. The problem is that I need to build the execution plan after adding the task to it. I am somehow stuck here; when I try to build it, I get the following error:
MESSAGE:::Error while loading pre post steps for Execution Plan. CompleteLoad_withDeleteNo physical database mapping for the logical source was found for :DBConnection_INFA as used in WF_AUTO_PURGE (DBConnection_INFA->DBConnection_INFA)
EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1317)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
::: CAUSE :::
MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_INFA as used in WF_AUTO_PURGE(DBConnection_INFA->DBConnection_INFA)
EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
com.siebel.analytics.etl.execution.ExecutionParameterHelper.substitute(ExecutionParameterHelper.java:208)
com.siebel.analytics.etl.execution.ExecutionParameterHelper.parameterizeTask(ExecutionParameterHelper.java:139)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.handlePrePostTasks(ExecutionPlanDesigner.java:949)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:790)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
Regards,
Arul
Edited by: 869389 on Jun 30, 2011 11:02 PM
Edited by: 869389 on Jul 1, 2011 2:00 AM