Execution plan with Concatenation
Hi All,
Could anyone help me find why CONCATENATION is being used by the optimizer, and how I can avoid it?
Oracle Version : 10.2.0.4
select *
from (select distinct EntityType, EntityID, DateModified, DateCreated, IsDeleted
      from ife.EntityIDs i
      join (select orgid
            from equifaxnormalize.org_relationships
            where orgid is not null
              and related_orgid is not null
              and ((Date_Modified >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss')
                    and Date_Modified < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss'))
                or (Date_Created >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss')
                    and Date_Created < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss')))
           ) r on (r.orgid = i.entityid)
      where EntityType = 1
        and ((DateModified >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss')
              and DateModified < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss'))
          or (DateCreated >= to_date('2011-06-12 14:00:00','yyyy-mm-dd hh24:mi:ss')
              and DateCreated < to_date('2011-06-13 14:00:00','yyyy-mm-dd hh24:mi:ss')))
        and IsDeleted = 0
        and IsDistributable = 1
        and EntityID >= 0
      order by EntityID
      --order by NLSSORT(EntityID,'NLS_SORT=BINARY')
     )
where rownum <= 10;
Execution Plan
Plan hash value: 227906424
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 10 | 570 | 39 (6)| 00:00:01 | | |
|* 1 | COUNT STOPKEY | | | | | | | |
| 2 | VIEW | | 56 | 3192 | 39 (6)| 00:00:01 | | |
|* 3 | SORT ORDER BY STOPKEY | | 56 | 3416 | 39 (6)| 00:00:01 | | |
| 4 | HASH UNIQUE | | 56 | 3416 | 38 (3)| 00:00:01 | | |
| 5 | CONCATENATION | | | | | | | |
|* 6 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 29 | 1 (0)| 00:00:01 | | |
| 7 | NESTED LOOPS | | 27 | 1647 | 17 (0)| 00:00:01 | | |
| 8 | TABLE ACCESS BY GLOBAL INDEX ROWID| ENTITYIDS | 27 | 864 | 4 (0)| 00:00:01 | ROWID | ROWID |
|* 9 | INDEX RANGE SCAN | UX_TYPE_MOD_DIST_DEL_ENTITYID | 27 | | 2 (0)| 00:00:01 | | |
|* 10 | INDEX RANGE SCAN | IX_EFX_ORGRELATION_ORGID | 1 | | 1 (0)| 00:00:01 | | |
|* 11 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 29 | 1 (0)| 00:00:01 | | |
| 12 | NESTED LOOPS | | 29 | 1769 | 20 (0)| 00:00:01 | | |
| 13 | PARTITION RANGE ALL | | 29 | 928 | 5 (0)| 00:00:01 | 1 | 3 |
|* 14 | TABLE ACCESS BY LOCAL INDEX ROWID| ENTITYIDS | 29 | 928 | 5 (0)| 00:00:01 | 1 | 3 |
|* 15 | INDEX RANGE SCAN | IDX_ENTITYIDS_ETYPE_DC | 29 | | 4 (0)| 00:00:01 | 1 | 3 |
|* 16 | INDEX RANGE SCAN | IX_EFX_ORGRELATION_ORGID | 1 | | 1 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=10)
3 - filter(ROWNUM<=10)
6 - filter(("DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13
14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')) AND "RELATED_ORGID" IS NOT NULL)
9 - access("I"."ENTITYTYPE"=1 AND "I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0 AND "I"."DATEMODIFIED"<=TO_DATE(' 2011-06-13 14:00:00',
'syyyy-mm-dd hh24:mi:ss'))
filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0)
10 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
11 - filter(("DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13
14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')) AND "RELATED_ORGID" IS NOT NULL)
14 - filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND (LNNVL("I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("I"."DATEMODIFIED"<=TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
"I"."ENTITYID">=0)
15 - access("I"."ENTITYTYPE"=1 AND "I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."DATECREATED"<=TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
16 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)

The ife.EntityIDs table has been range-partitioned on the data_provider column.
Is there a better way to rewrite this SQL, or any way to eliminate the concatenation?
Thanks
We can't use data_provider in the given query; we need to pull data irrespective of data_provider, based on ENTITYID.
Yes, the table has only three partitions.
I'm not sure the issue is due to the concatenation, but we are in the process of creating the desired indexes to help this SQL.
In development we created a multi-column index; below is the execution plan. In development it takes just 4-5 seconds to execute, but in production it takes more than 8-9 minutes.
Below is the execution plan from Dev which seems to perform fast:
Execution Plan
Plan hash value: 3121857971
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 57 | 353 (1)| 00:00:05 | | |
|* 1 | COUNT STOPKEY | | | | | | | |
| 2 | VIEW | | 1 | 57 | 353 (1)| 00:00:05 | | |
|* 3 | SORT ORDER BY STOPKEY | | 1 | 58 | 353 (1)| 00:00:05 | | |
| 4 | HASH UNIQUE | | 1 | 58 | 352 (1)| 00:00:05 | | |
| 5 | CONCATENATION | | | | | | | |
|* 6 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 26 | 3 (0)| 00:00:01 | | |
| 7 | NESTED LOOPS | | 1 | 58 | 170 (1)| 00:00:03 | | |
| 8 | PARTITION RANGE ALL | | 56 | 1792 | 16 (0)| 00:00:01 | 1 | 3 |
|* 9 | TABLE ACCESS BY LOCAL INDEX ROWID| ENTITYIDS | 56 | 1792 | 16 (0)| 00:00:01 | 1 | 3 |
|* 10 | INDEX RANGE SCAN | IDX_ENTITYIDS_ETYPE_DC | 56 | | 7 (0)| 00:00:01 | 1 | 3 |
|* 11 | INDEX RANGE SCAN | EFX_ORGID | 2 | | 2 (0)| 00:00:01 | | |
|* 12 | TABLE ACCESS BY INDEX ROWID | ORG_RELATIONSHIPS | 1 | 26 | 3 (0)| 00:00:01 | | |
| 13 | NESTED LOOPS | | 1 | 58 | 181 (0)| 00:00:03 | | |
| 14 | PARTITION RANGE ALL | | 57 | 1824 | 10 (0)| 00:00:01 | 1 | 3 |
|* 15 | INDEX RANGE SCAN | UX_TYPE_MOD_DIST_DEL_ENTITYID | 57 | 1824 | 10 (0)| 00:00:01 | 1 | 3 |
|* 16 | INDEX RANGE SCAN | EFX_ORGID | 2 | | 2 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=10)
3 - filter(ROWNUM<=10)
6 - filter("RELATED_ORGID" IS NOT NULL AND ("DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')))
9 - filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0)
10 - access("I"."ENTITYTYPE"=1 AND "I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."DATECREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))
11 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)
12 - filter("RELATED_ORGID" IS NOT NULL AND ("DATE_CREATED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"DATE_CREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "DATE_MODIFIED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "DATE_MODIFIED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss')))
15 - access("I"."ENTITYTYPE"=1 AND "I"."DATEMODIFIED">=TO_DATE(' 2011-06-12 14:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND "I"."ENTITYID">=0 AND "I"."DATEMODIFIED"<TO_DATE(' 2011-06-13 14:00:00',
'syyyy-mm-dd hh24:mi:ss'))
filter("I"."ISDISTRIBUTABLE"=1 AND "I"."ISDELETED"=0 AND (LNNVL("I"."DATECREATED">=TO_DATE(' 2011-06-12 14:00:00',
'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("I"."DATECREATED"<TO_DATE(' 2011-06-13 14:00:00', 'syyyy-mm-dd hh24:mi:ss'))) AND
"I"."ENTITYID">=0)
16 - access("ORGID"="I"."ENTITYID")
filter("ORGID" IS NOT NULL AND "ORGID">=0)

Thanks
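A note on the original question: the CONCATENATION step in both plans is the optimizer's OR-expansion of the OR'ed date-range predicates (the LNNVL filters visible in the predicate section are how it avoids returning the same row from both branches). If the goal is simply to suppress that transformation, Oracle's documented NO_EXPAND hint does exactly that; whether the resulting single-pass plan is actually faster has to be tested. A sketch against the query from this thread:

```sql
select *
from (select /*+ no_expand */ distinct
             EntityType, EntityID, DateModified, DateCreated, IsDeleted
      from   ife.EntityIDs i
      join   ...  -- remainder of the original query unchanged
      order by EntityID)
where rownum <= 10;
```

The opposite effect (forcing the expansion) is available via USE_CONCAT. Alternatively, the OR can be expanded by hand into a UNION ALL of a DateModified branch and a DateCreated branch, with an LNNVL-style exclusion predicate in the second branch to avoid duplicates.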
Similar Messages
-
Execution plan with bind variable
dear all
I join two tables and get an "index fast full scan" with a high cost when using a bind variable,
but when I remove the bind variable it executes with an "index" access and a lower cost.
What is the reason, and how do I know which execution plan is actually used in real life?
thanks
john

1) What is your Oracle version?
2) Post both the query and its explain plan here.
In fact, INDEX FAST FULL SCAN indicates a multiblock read of a composite index (chosen based on your query and predicates); in this case the CBO costs it much like a FULL TABLE SCAN (affected by db_file_multiblock_read_count, system statistics, etc.). And you use a bind variable: with binds the CBO derives selectivity from fixed arithmetic (e.g. 5% of the cardinality), whereas with concrete values it derives selectivity from statistics and histograms, and then costs the multiblock and single-block reads. You can see this with the 10053 event. Finally it chooses the lower-cost plan. -
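The 10053 trace suggested above is enabled per session; a sketch, assuming access to the server trace directory (user_dump_dest):

```sql
-- Dump the optimizer's costing decisions for subsequent hard parses
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- Hard-parse the statement once with binds and once with literals,
-- then compare the selectivity and cost sections of the two trace files

ALTER SESSION SET EVENTS '10053 trace name context off';
```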
Test Execution Planning with eManager?
I'd use MQC terminology to put up my question...
In MQC we have the Test Lab for test execution planning: we can create test sets and track their completion, assign the execution effort to a tester, etc. Do we have similar abilities in eManager as well? Does it allow us to define a test execution plan and then track it? How do we handle test execution planning in eManager?

Unlike MQC, where test planning and test execution are set up in two different steps, in e-Manager Enterprise during the test planning phase you import an e-tester script and run it manually as required on an agent box or the e-Manager Enterprise system. Or you can schedule it for automated execution using the Schedule toolbar item that looks like a clock.
-
How are execution plan created with tables of stale stats
Hello
I would like to ask the group
1. How does Oracle handle the execution plan for table joins where some tables have stale stats?
2. How does Oracle handle the execution plan where a table has a histogram but the stats are stale?
Database version 11.1.0.7.0
Thanks
Arun

ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
By running the above before executing the SQL, you can see what the CBO considers and how it arrives at the actual execution plan. -
Error while building execution plan
Hi, I'm trying to create an execution plan with container EBS 11.5.10 and subject area Project Analytics.
I get this error while building:
PA-EBS11510
MESSAGE:::group TASK_GROUP_Load_PositionHierarchy for SIL_PositionDimensionHierarchy_PostChangeTmp is not found!!!
EXCEPTION CLASS::: java.lang.NullPointerException
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:818)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
Sorry for my english, I'm french.
Thank you for helping me.

Hi,
Find all the subject areas in the execution plan that contain the task 'SIL_PositionDimensionHierarchy_PostChangeTmp', and add the 'TASK_GROUP_Load_PositionHierarchy' task group to those subject areas.
Reassemble your subject areas and build the execution plan again.
Thanks -
Oem explain plan produced doesn't correspond to explain plan with tkprof
Hello all,
I am running OEM on a 10.2.0.3 database and I have a very slow query with cognos 8 BI on my data warehouse.
I went to dbconsole through OEM, connected to the target database, looked at the query execution, and got an explain plan.
Then I produced a trace file and ran it through tkprof.
When I look at the two explain plans, the trees look the same except for the corresponding values: OEM says I am going through 18,000 rows, while the tkprof output says more like millions of records.
Has anybody had this kind of result?
Should I have more confidence in tkprof than in OEM?
It is very important to me since I am being challenged by an external DBA.

I would recommend you get Christian Antognini's book "Troubleshooting Oracle Performance" (http://www.antognini.ch/top/), which explains everything you need to know when analyzing Oracle SQL performance and explain plans.
If you properly trace your SQL statement, you will get "STAT" lines in the trace file. These STAT lines show the actual number of rows processed per row source operation. Explain plan by default only shows the "estimated" row counts for the row source operations, no matter whether you use "explain=" in the tkprof call or OEM to get the plan. Tkprof reads the STAT lines from the trace and outputs a section which is similar to an execution plan but contains the "real" number of rows.
However, if you want to troubleshoot Oracle performance, I would recommend running the statement with the hint /*+ GATHER_PLAN_STATISTICS */ or setting statistics_level=ALL in your session (not database-wide!).
If you do, you can get an excellent execution plan with DBMS_XPLAN.DISPLAY_CURSOR containing both estimated rows as well as actual rows combined with the "number of starts" for e.g. nested loop joins.
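The technique described above can be sketched as follows; the SELECT is a placeholder for the actual slow statement, and 'ALLSTATS LAST' is the DBMS_XPLAN format that shows estimated rows (E-Rows) next to actual rows (A-Rows) and Starts:

```sql
-- Execute the statement with rowsource statistics collected
SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   some_table            -- placeholder table name
WHERE  some_column = :b1;

-- Show the plan of the last cursor executed in this session,
-- with estimated vs. actual row counts per operation
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```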
Get the book, read it, and you will be able to discuss this issue with your external dba in a professional way. ;-)
Regards,
Martin
www.ora-solutions.net -
Bad execution plans when using parameters, subquery and partition by
We have an issue with the following...
We have this table:
CREATE TABLE dbo.Test (
    Dt date NOT NULL,
    Nm int NOT NULL,
    CONSTRAINT PK_Test PRIMARY KEY CLUSTERED (Dt)
);
This table can have data but it will not matter for this topic.
Consider this query (thanks Tom Phillips for simplifying my question):
declare @date as date = '2013-01-01'
select *
from (
select Dt,
row_number() over (partition by Dt order by Nm desc) a
from Test
where Dt = @date
) x
go
This query generates an execution plan with a clustered seek.
Our queries however needs the parameter outside of the sub-query:
declare @date as date = '2013-01-01'
select *
from (
select Dt,
row_number() over (partition by Dt order by Nm desc) a
from Test
) x
where Dt = @date
go
The query plan now does a scan followed by a filter on the date.
This is extremely inefficient.
This query plan is the same if you change the subquery into a CTE or a view (which is what we use).
We have this kind of setup all over the place and the only way to generate a good plan is if we use dynamic sql to select data from our views.
Is there any kind of solution for this?
We have tested this (with the same result) on SQL 2008R2, 2012 SP1, 2014 CU1.
Any help is appreciated.

Hi Tom,
Parameter sniffing is a different problem: a query plan is made based on a value that may be really unrepresentative of the data distribution. We do have that problem as well, e.g. when searching for today's data while the statistics think the table only has yesterday's data (a problem we hit a lot involves the lack of a sufficient number of buckets in statistics objects).
This problem is different.
- It doesn't matter what the parameter value is.
- You can reproduce this example with a fresh table. I.e. without statistics.
- No matter what the distribution of data (or even if the statistics are correct), there is absolutely no case possible (in my example) where doing a scan followed by a filter should have better results than doing a seek.
- You can't change the behavior with hints like "option(recompile)" or "optimize for".
This problem has to do with the specific combination of "partition by" and the filter being done outside of the immediate query. Try it out, play with the code, e.g. move the where clause inside the subquery. You'll see what I mean.
Thanks for responding.
Michel -
Same Query returning different result (Different execution plan)
Hi all,
To day i have discovered a strange thing: a query that return a different result when using a different execution plan.
The query :
SELECT *
FROM schema.table@database a
WHERE column1 IN ('3')
AND column2 = '101'
AND EXISTS
(SELECT null
FROM schema.table2 c
WHERE a.column3 = SUBSTR (c.column1, 2, 12));

where schema.table@database is a remote table.
When executed with the hint /*+ ordered use_nl(a c) */ this query returns no results, and its execution plan is:
Rows Row Source Operation
0 NESTED LOOPS (cr=31 r=0 w=0 time=4894659 us)
4323 SORT UNIQUE (cr=31 r=0 w=0 time=50835 us)
4336 TABLE ACCESS FULL TABLE2 (cr=31 r=0 w=0 time=7607 us)
0 REMOTE (cr=0 r=0 w=0 time=130536 us)

When I changed the execution plan with the hint /*+ use_hash(c a) */:
Rows Row Source Operation
3702 HASH JOIN SEMI (cr=35 r=0 w=0 time=497839 us)
22556 REMOTE (cr=0 r=0 w=0 time=401176 us)
4336 TABLE ACCESS FULL TABLE2 (cr=35 r=0 w=0 time=7709 us)

It seems that when the execution plan changed, the remote query returned no results.
Is it a bug, or have I missed something?
PS: The two tables are not subject to insert or update statements.
Oracle version : 9.2.0.2.0
System version : HP-UX v1
Thanks.

H.Mahmoud wrote:
Oracle version : 9.2.0.2.0
System version : HP-UX v1

Hard to say. You're using a very old and deprecated version of the database, one that was known to contain bugs.
9.2.0.7 was really the lowest version of 9i that was considered to be 'stable', but even so, it's old and lacking in many ways.
Consider upgrading to the latest database version at your earliest opportunity (or at least apply patches up to the latest 9i patch set before asking whether the behaviour is a bug in your old, buggy version).
DAC Execution plan build error - for multi-sources
We are implementing BI Apps 7.9.6.1 Financial & HR Analytics. We have multiple sources (PeopleSoft (Oracle) 8.9 & 9.0, DB2, flat file, ...) and built 4 containers, one for each source. We can build the execution plan successfully, but get the error below when trying to reset the sources. This message is very sporadic. As a workaround we run the build and reset the sources multiple times. The workaround seems to work when we have 3 containers in the execution plan, but it does not work for 4 containers. Has anybody come across this issue?
Thank you in advance for your help.
DAC ver 10.1.3.4.1 patch .20100105.0812 Build date: Jan 5 2010
ANOMALY INFO::: Failure
MESSAGE:::com.siebel.analytics.etl.execution.NoSuchDatabaseException: No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
com.siebel.analytics.etl.execution.ExecutionPlanDiscoverer.<init>(ExecutionPlanDiscoverer.java:62)
com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:189)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
::: CAUSE :::
MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
com.siebel.analytics.etl.execution.ExecutionParameterHelper.substituteNodeTables(ExecutionParameterHelper.java:176)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.retrieveExecutionPlanTasks(ExecutionPlanDesigner.java:420)
com.siebel.analytics.etl.execution.ExecutionPlanDiscoverer.<init>(ExecutionPlanDiscoverer.java:60)
com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:189)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)

Hi, in reference to this message:
MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_OLTP_ELM as used in TASK_GROUP_Extract_CodeDimension(null->null)
1. I notice that you are using the custom DAC logical name DBConnection_OLTP_ELM.
2. When you generate parameters before building the execution plan, can you verify which source system container uses the logical name DBConnection_OLTP_ELM as a source, and what value is assigned to it?
3. Are you building the execution plan with subject areas from 4 containers? Did you generate parameters before building the execution plan?
4. Also verify, at the DAC task level for the 4th container, the Primary Source value for all the tasks (TASK_GROUP_Extract_CodeDimension).
How to clear out failed Execution Plans?
Hi everyone,
This should be an easy thing but I can't seem to figure out how to do it.
I created a custom execution plan and added every Financial Subject Area into the plan. Then I generated the parameters, clicked build and then ran it. The run failed.
I noticed that there was a pre-configured execution plan "Financials_Oracle R12" and decided that what I really wanted to run was Financials_Oracle R12, not my custom plan. So I clicked build for the Financials_Oracle R12 execution plan and, once that finished, clicked "Run Now". Upon clicking Run Now it told me that my custom execution plan had finished with a failed status, and until it completed successfully no other plans could be run. I really don't want my custom plan to complete successfully; I want to run the other plan instead.
How do I delete/clear out my failed run data so I can start a new execution plan?
Thanks for the help!
-Joe

In DAC, under the EXECUTE window, do the following:
- Navigate to the 'Current Run' tab.
- Highlight the failed execution plan.
- Right-click and select 'Mark as Completed.'
- Enter the numbers/text in the box.
Then:
- In the top toolbar select Tools --> ETL Management --> Reset Data Sources
- Enter the numbers/text in the box.
Now you're ready to start the new execution plan.
Hope this helps.
- Austin -
What does cost mean in execution plan?
What does a cost increase actually mean compared to the decrease of data that has to be processed?
More details:
I have 3 tables:
- users
- transaction types
- user transactions
I'm joining user transactions with transaction types for a particular user and aggregating some result from the transaction type table.
Originally there was no index on the user transactions table containing both the user_id (on which is filtered) and the transaction id. This led to a TABLE ACCESS and joining a lot of data for nothing. I created an index containing both fields so that no TABLE ACCESS is needed. It indeed reduced the amount of data to be merged and aggregated considerably, but the COST of the query went up! This I don't understand (the query itself did not seem to take more or less time).
What does a cost increase actually mean compared to the decrease of data that has to be processed?
Below are the two execution plans:
Execution plan with low cost and table access.
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=51 Card=1 Bytes=33)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=51 Card=283759 Bytes=9364047)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'THIRD_PARTY_TRANSACT
IONS' (Cost=2 Card=448 Bytes=8960)
4 3 INDEX (RANGE SCAN) OF 'X_TP_TRANSACTIONS_USER' (NON-
UNIQUE) (Cost=1 Card=448)
5 2 INDEX (FAST FULL SCAN) OF 'X_ISP_TRANSACTIONS_TRID_FIN
AL' (NON-UNIQUE) (Cost=4 Card=63339 Bytes=823407)
Execution plan with only index join but high cost
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=272 Card=1 Bytes=28)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=272 Card=3287 Bytes=92036)
3 2 INDEX (FAST FULL SCAN) OF 'X_TP_TRANSACTIONS_TRID_USER
ID' (NON-UNIQUE) (Cost=21 Card=3287 Bytes=49305)
4 2 INDEX (FAST FULL SCAN) OF 'X_ISP_TRANSACTIONS_TRID_FINAL' (NON-UNIQUE) (Cost=4 Card=63339 Bytes=823407)

When Oracle parses and optimises a query, it creates several execution plans and assigns a cost to each plan.
It then uses this cost to compare these plans.
But you can't use this cost as an indication of how "heavy" a query is, nor can you compare the costs between different queries.
It is only a value used internally by oracle.
greetings
Freek D -
SQL query with Bind variable with slower execution plan
I have a 'normal' sql select-insert statement (not using bind variable) and it yields the following execution plan:-
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
(Cost=2 Card=135 Bytes=6480)
Statistics
0 recursive calls
18 db block gets
15558 consistent gets
47 physical reads
9896 redo size
423 bytes sent via SQL*Net to client
1095 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
I have the same query but run using a bind variable (I tested it with both Oracle Forms and SQL*Plus); it takes considerably longer with a different execution plan:
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
3 2 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
Statistics
0 recursive calls
12 db block gets
3003199 consistent gets
54 physical reads
9448 redo size
423 bytes sent via SQL*Net to client
1258 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with the bind variable? I have DBA access to the database.
Regards
Ivan

Many thanks for your reply.
I have already gathered statistics for both TABLEA and TABLEB, as well as for all the indexes associated with both tables (using dbms_stats; I am on a 9i DB), but not for the indexed columns.
for table I use:-
begin
dbms_stats.gather_table_stats(ownname=> 'IVAN', tabname=> 'TABLEA', partname=> NULL);
end;
for index I use:-
begin
dbms_stats.gather_index_stats(ownname=> 'IVAN', indname=> 'TABLEB_IDX_003', partname=> NULL);
end;
Could you show me a sample of how to collect statistics for indexed columns?
regards
Ivan -
Same sqlID with different execution plan and Elapsed Time (s), Executions time
Hello All,
The AWR reports for two days show the same SQL ID with a different execution plan and different elapsed time (s) and executions. Please help me find the reason for this change.
Please find the details below; on the 17th my processes are very slow compared to the 18th.
17th Oct / 18th Oct top SQL (Elapsed Time (s) | Executions | SQL Id):
221,808,602 | 21 | 2tc2d3u52rppt
213,170,100 | 72,495,618 | 9c8wqzz7kyf37
209,239,059 | 71,477,888 | 9c8wqzz7kyf37
139,331,777 | 1 | 7b0kzmf0pfpzn
144,813,295 | 1 | 0cqc3bxxd1yqy
102,045,818 | 1 | 8vp1ap3af0ma5
128,892,787 | 16,673,829 | 84cqfur5na6fg
89,485,065 | 1 | 5kk8nd3uzkw13
127,467,250 | 16,642,939 | 1uz87xssm312g
67,520,695 | 8,058,820 | a9n705a9gfb71
104,490,582 | 12,443,376 | a9n705a9gfb71
62,627,205 | 1 | ctwjy8cs6vng2
101,677,382 | 15,147,771 | 3p8q3q0scmr2k
57,965,892 | 268,353 | akp7vwtyfmuas
98,000,414 | 1 | 0ybdwg85v9v6m
57,519,802 | 53 | 1kn9bv63xvjtc
87,293,909 | 1 | 5kk8nd3uzkw13
52,690,398 | 0 | 9btkg0axsk114
77,786,274 | 74 | 1kn9bv63xvjtc
34,767,882 | 1,003 | bdgma0tn8ajz9
Not only are the queries different, but the number of blocks read by the top 10 queries is much higher on the 17th than on the 18th.
The other big difference is the average read time on the two days.
Tablespace IO Stats

17th Oct

| Tablespace | Reads | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms) |
| INDUS_TRN_DATA01 | 947,766 | 59 | 4.24 | 4.86 | 185,084 | 11 | 2,887 | 6.42 |
| UNDOTBS2 | 517,609 | 32 | 4.27 | 1.00 | 112,070 | 7 | 108 | 11.85 |
| INDUS_MST_DATA01 | 288,994 | 18 | 8.63 | 8.38 | 52,541 | 3 | 23,490 | 7.45 |
| INDUS_TRN_INDX01 | 223,581 | 14 | 11.50 | 2.03 | 59,882 | 4 | 533 | 4.26 |
| TEMP | 198,936 | 12 | 2.77 | 17.88 | 11,179 | 1 | 732 | 2.13 |
| INDUS_LOG_DATA01 | 45,838 | 3 | 4.81 | 14.36 | 348 | 0 | 1 | 0.00 |
| INDUS_TMP_DATA01 | 44,020 | 3 | 4.41 | 16.55 | 244 | 0 | 1,587 | 4.79 |
| SYSAUX | 19,373 | 1 | 19.81 | 1.05 | 14,489 | 1 | 0 | 0.00 |
| INDUS_LOG_INDX01 | 17,559 | 1 | 4.75 | 1.96 | 2,837 | 0 | 2 | 0.00 |
| SYSTEM | 7,881 | 0 | 12.15 | 1.04 | 1,361 | 0 | 109 | 7.71 |
| INDUS_TMP_INDX01 | 1,873 | 0 | 11.48 | 13.62 | 231 | 0 | 0 | 0.00 |
| INDUS_MST_INDX01 | 256 | 0 | 13.09 | 1.04 | 194 | 0 | 2 | 10.00 |
| UNDOTBS1 | 70 | 0 | 1.86 | 1.00 | 60 | 0 | 0 | 0.00 |
| STG_DATA01 | 63 | 0 | 1.27 | 1.00 | 60 | 0 | 0 | 0.00 |
| USERS | 63 | 0 | 0.32 | 1.00 | 60 | 0 | 0 | 0.00 |
| INDUS_LOB_DATA01 | 62 | 0 | 0.32 | 1.00 | 60 | 0 | 0 | 0.00 |
| TS_AUDIT | 62 | 0 | 0.48 | 1.00 | 60 | 0 | 0 | 0.00 |

18th Oct

| Tablespace | Reads | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms) |
| INDUS_TRN_DATA01 | 980,283 | 91 | 1.40 | 4.74 | (remaining columns truncated in the original post) |

"The AWR reports for two days with same sqlID with different execution plan and Elapsed Time (s), Executions time please help me to find out what is reason for this change. Please find the below detail 17th day my process are very slow as compare to 18th"
You wrote that the execution plan was different; I assume you compared the plans, though it is very difficult to retrieve the old plan.
I think the execution plan normally does not change between days unless you added an index or made a similar change.
What does the ADDM report say about this script?
As you know, it is normal to see different elapsed times for the same statement on different days; it depends on your database workload.
I think you should use the SQL Access Advisor and SQL Tuning Advisor for this script; they can suggest a solution for the slow-running problem.
Regards
Mahir M. Quluzade -
Oracle 10g Diff in execution plan query with binding var Vs without
We recently did a 10g upgrade. In 10g, the execution plan differs for a query with bind variables (through JDBC etc.) vs. without them, as shown below. For the query with bind variables it chooses a poor execution plan (no index is used, a full scan is done, etc.); everything worked fine in 9i. To rectify the problem we have to hint the query with the right index, join, etc., but I don't like this solution.
I would rather correct the database to choose the right execution path instead of fixing it at the query level, but I am not sure what causes the issue.
Has anybody come across this issue? If so, please share your experiences. Thanks for the help. Let me know if you need more info.
1. Query with bind variables:
select * from test where col1 = :1 and col2 = :2
2. Query with literal values:
select * from test where col1 = 'foo' and col2 = 'bar'

I am not an expert, but in my humble opinion it is the developer's responsibility to ensure the correct explain plan is used before deploying code to production; if the explain plan returned by the DB is bad, then the use of a hint is perfectly acceptable.
Check this out: http://lcgapp.cern.ch/project/CondDB/snapshot/performance.html
Excerpt:
Bind variable peeking. If an SQL query contains bind variables, the optimal execution plan may depend on the values of those variables. When the Oracle query optimizer chooses the execution plan for such a query, it may indeed look at the values of all bind variables involved: this is known as "bind variable peeking".
In summary, the execution plan used for a given SQL query cannot be predicted a priori because it depends on the presence and quality of statistics and on the values of bind variables used at the time the query was first executed (hard-parsed). As a consequence of this instability of execution plans, very different performances may be observed for the same SQL query. In COOL, this issue is addressed by adding Oracle hints to the queries, to make sure that the same (good) plan is used in all cases, even with unreliable statistics or unfavourable bind variables.
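As a sketch of the hint-based stabilization described above (the index name here is hypothetical; COOL's actual hints are its own):

```sql
-- Pin the access path so the plan no longer depends on which
-- bind values happened to be peeked at hard-parse time
SELECT /*+ INDEX(t test_col1_col2_idx) */ *
FROM   test t
WHERE  t.col1 = :1
AND    t.col2 = :2;
```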
Edited by: Rodolfo Ferrari on Jun 3, 2009 9:40 PM -
Same query with different execution plan
Hello All,
I wonder why SQL Server creates different execution plans for the two queries below?
Thanks.

You can look at the estimated query plan, either visually in SSMS or by running the query after the instruction SET SHOWPLAN_TEXT ON.
The Optimizer is the component of SQL Server that determines how the query is executed. It is cost based: it assesses different execution plans, estimates the cost of each, and then selects the cheapest. In this context, cheapest means the one with the shortest estimated runtime.
In your particular case, the estimate for the second query is that scanning just a small part of the nonclustered index and then looking up the table data of the qualifying rows is the cheapest approach, because the estimated number of qualifying rows is low.
In the first query, it estimated that looking up the many qualifying rows would be too expensive, and that it would be cheaper to simply scan the entire clustered index and filter out all unwanted rows. Note that the clustered index includes the actual table data.
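The SET SHOWPLAN_TEXT option mentioned above can be tried against the dbo.Test table from this thread; a sketch for SSMS:

```sql
-- Ask SQL Server to return the estimated plan text instead of executing
SET SHOWPLAN_TEXT ON;
GO
SELECT Dt, Nm FROM dbo.Test WHERE Dt = '2013-01-01';
GO
SET SHOWPLAN_TEXT OFF;
GO
```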
Gert-Jan