About Execution Plan in OBIEE
Friends,
As we know, when building the BMM layer we can, for example, create one logical table
that includes columns from two or more different physical tables, and OBIEE internally
generates something like a view.
But what about the execution plan of the query?
What happens if the size of a table changes and the old plan is no longer good?
Is it somehow possible to control the Execution Plan, maybe in BMM layer, in OBIEE?
Thanks in advance.
"The performance problem is not in the writing of the query but in its path (execution plan) - in the way the database will retrieve the data."
I wouldn't say that it's always about WHERE the database retrieves the data; sometimes it's about HOW as well.
Indeed, there are a lot of options available to speed up the performance of a query, but in my own years of
experience with Oracle I've come across quite a few scripts that didn't require creating any temporary tables,
materialized views, or anything similar. All I needed to do was rewrite the query, and that was it.
"....execute the best explain plan...." Personally, I have come across a lot of queries where, even with the freshest statistics on the tables, Oracle would choose a sub-optimal plan, and the one it chose didn't execute as fast as a plan I would have set up myself.
Yes, the internal algorithm may decide that its plan is the best, but as experience shows, that's not always true.
All I was asking is whether there is a way to control the plan in OBIEE or not.
Similar Messages
-
Too many nested loops in execution plan?
Hi,
I wonder why the execution plan does not indicate that access to some tables (for the join) is in parallel.
Please see this example:
------------------------ snip ------------------------------------
drop table test_a1;
drop table test_a2;
drop table test_b;
drop table test_c;
drop table test_d;
create table test_a1 (
x number,
y number,
z number);
create unique index testa1_pk on test_a1 (x);
create table test_a2 (
x number,
y number,
z number);
create unique index testa2_pk on test_a2 (x);
create table test_b (
x number,
y number,
z number);
create unique index testb_pk on test_b (y);
create table test_c (
x number,
y number,
z number);
create unique index testc_pk on test_b (z); -- note: as posted, this index is created on test_b, not test_c (probably a typo in the original post)
create table test_d (
x number,
y number,
z number);
create unique index testd_pk on test_d (y);
select
a1.x a1_x,
a1.y a1_y,
a1.z a1_z,
a2.x a2_x,
a2.y a2_y,
a2.z a2_z,
b.x b_x,
b.y b_y,
b.z b_z,
c.x c_x,
c.y c_y,
c.z c_z,
d.x d_x,
d.y d_y,
d.z d_z
from test_a1 a1, test_a2 a2, test_b b, test_c c, test_d d
where a1.x = 100
and a2.x = 200
and b.y = a1.y
and c.z = b.z
and d.y = a1.y;
------------------------ snap ------------------------------------
The execution plan goes like this:
Select Stmt
  nested loops
    nested loops
      nested loops
        nested loops
          table access
            index
              access predicate: a2.x = 200
          table access
            index
              access predicate: a1.x = 100
        table access
          index
            access predicate: d.y = a1.y
      table access
        index
          access predicate: b.y = a1.y
    table access
      index
        access predicate: c.z = b.z
Access to tables a1 and a2 is on the same level (in parallel, I guess).
However, why isn't access to tables d and b on the same level?
Both depend on a1, so there is no need to execute one after the other (no inter-dependency).
Maybe I just have the wrong expectations of the execution plan output(?)
- many thanks!
best regards,
Frank
Preservation of indentation and spacing is invaluable when it comes to reading an explain plan.
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 195 | 2 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 195 | 2 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 1 | 156 | 0 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 117 | 0 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 78 | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| TEST_A2 | 1 | 39 | 0 (0)| 00:00:01 |
|* 6 | INDEX UNIQUE SCAN | TESTA2_PK | 1 | | 0 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID| TEST_A1 | 1 | 39 | 0 (0)| 00:00:01 |
|* 8 | INDEX UNIQUE SCAN | TESTA1_PK | 1 | | 0 (0)| 00:00:01 |
| 9 | TABLE ACCESS BY INDEX ROWID | TEST_D | 82 | 3198 | 0 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | TESTD_PK | 1 | | 0 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | TEST_B | 82 | 3198 | 0 (0)| 00:00:01 |
|* 12 | INDEX UNIQUE SCAN | TESTB_PK | 1 | | 0 (0)| 00:00:01 |
|* 13 | TABLE ACCESS FULL | TEST_C | 1 | 39 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
6 - access("A2"."X"=200)
8 - access("A1"."X"=100)
10 - access("D"."Y"="A1"."Y")
12 - access("B"."Y"="A1"."Y")
13 - filter("C"."Z"="B"."Z")
Access to tables a1 and a2 is on the same level (in parallel - i guess).
Maybe i have just wrong expectation to the output of the execution plan(?!)
You guess wrong; there's nothing parallel going on here.
Execution plan is a tree of parent-child operations.
For example, the NESTED LOOPS at operation 4 has two children @ 5 and 7.
Both of these operations - 5 & 7 - have a single child operation.
The execution tree starts with operation 6, using the TESTA2_PK index to identify rows where A2.X=200.
From this list of rowids, we go to the table TEST_A2 - operation 5.
The rows from operation 5 feed into the NESTED LOOPS - operation 4.
For each of these rows, we go to TEST_A1 via the index TESTA1_PK for rows where A1.X=100.
This is really a cartesian join because there's no join condition between the two tables.
etc, etc, etc
Three things in particular to point out.
Firstly, nothing joins to A2. So there will be a cartesian product - i.e. every row in the result set between the joined tables A1, B, C and D will be multiplied by the number of rows returned by the A2 rowsource.
Secondly, when everything has got one or zero rows (or the optimizer thinks that it's one or zero rows), you can get very different plans from when there are known/thought to be more rows.
Both depend on a1. So no need to execute one after the other (no inter-dependency).
Thirdly, in terms of isolated join operations (ignoring A2 and C for the moment), A1 cannot join to B and D at the same time. You can either join A1 to B and then join the result of that to D, or join A1 to D and then to B, which is what you've got in your plan (well, actually we have A2 joined to A1, then the result of that joined to D, and then the result of that joined to B).
Edited by: Dom Brooks on Jul 6, 2011 4:07 PM
Corrected typo -
Force statement to use a given rule or execution plan
Hi!
We have a statement that in our production system takes 6-7 seconds to complete. The statement comes from our enterprise application's core code and we are not able to change the statement.
When using a RULE-hint (SELECT /*+RULE*/ 0 pay_rec...........) for this statement, the execution time is down to 500 milliseconds.
My question is: is there any way to pin an execution plan to a given statement? I have started reading about outlines, which seem promising. However, the statement does not use bind variables, and since this is core code in an enterprise application I cannot change that either. Is it possible to use outlines with such a statement?
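For reference, a minimal sketch of what a stored outline for a literal-text statement might look like (the outline name, category, and the statement in the ON clause are placeholders, not values from this system):

```sql
-- Hypothetical sketch: pinning a plan with a stored outline (9i).
-- An outline only matches when the submitted SQL text is identical,
-- so literals that change between executions will defeat it.
CREATE OR REPLACE OUTLINE pay_rec_outline
  FOR CATEGORY app_outlines
  ON SELECT /*+RULE*/ 0 pay_rec FROM dual;  -- replace with the exact statement text

-- Make the session (or system) use outlines from that category:
ALTER SESSION SET use_stored_outlines = app_outlines;
```

Because outlines match on exact SQL text, a statement whose literals vary per execution will not reuse the outline unless something like CURSOR_SHARING converts the literals to binds first.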
Additional information:
When I remove all statistics for the involved tables, the query completes in 500 ms.
The table tran_info_types has 61 rows and is a stable table with few updates.
The table ab_tran_info has 1,717,439 records and is 62 MB in size.
The table query_result has 777,015 records and is 216 MB in size. This table is constantly updated/inserted/deleted.
The query below returns 0 records, as there are no hits in the table query_result.
This is the statement:
SELECT /*+ALL_ROWS*/
0 pay_rec, abi.tran_num, abi.type_id, abi.VALUE
FROM ab_tran_info abi,
tran_info_types ti,
query_result qr1,
query_result qr2
WHERE abi.tran_num = qr1.query_result
AND abi.type_id = qr2.query_result
AND abi.type_id = ti.type_id
AND ti.ins_or_tran = 0
AND qr1.unique_id = 5334549
AND qr2.unique_id = 5334550
UNION ALL
SELECT 1 pay_rec, abi.tran_num, abi.type_id, abi.VALUE
FROM ab_tran_info abi,
tran_info_types ti,
query_result qr1,
query_result qr2
WHERE abi.tran_num = qr1.query_result
AND abi.type_id = qr2.query_result
AND abi.type_id = ti.type_id
AND ti.ins_or_tran = 0
AND qr1.unique_id = 5334551
AND qr2.unique_id = 5334552;
Here is the explain plan with statistics:
Plan
SELECT STATEMENT HINT: ALL_ROWSCost: 900 Bytes: 82 Cardinality: 2
15 UNION-ALL
7 NESTED LOOPS Cost: 450 Bytes: 41 Cardinality: 1
5 NESTED LOOPS Cost: 449 Bytes: 1,787,940 Cardinality: 59,598
3 NESTED LOOPS Cost: 448 Bytes: 19,514,824 Cardinality: 1,027,096
1 INDEX RANGE SCAN UNIQUE TRADEDB.TIT_DANIEL_2 Search Columns: 1 Cost: 1 Bytes: 155 Cardinality: 31
2 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_7 Search Columns: 1 Cost: 48 Bytes: 471,450 Cardinality: 33,675
4 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 11 Cardinality: 1
6 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 11 Cardinality: 1
14 NESTED LOOPS Cost: 450 Bytes: 41 Cardinality: 1
12 NESTED LOOPS Cost: 449 Bytes: 1,787,940 Cardinality: 59,598
10 NESTED LOOPS Cost: 448 Bytes: 19,514,824 Cardinality: 1,027,096
8 INDEX RANGE SCAN UNIQUE TRADEDB.TIT_DANIEL_2 Search Columns: 1 Cost: 1 Bytes: 155 Cardinality: 31
9 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_7 Search Columns: 1 Cost: 48 Bytes: 471,450 Cardinality: 33,675
11 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 11 Cardinality: 1
13 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 11 Cardinality: 1
Here is the execution plan when I have removed all statistics (exec DBMS_STATS.DELETE_TABLE_STATS(.........,..........); )
Plan
SELECT STATEMENT HINT: ALL_ROWSCost: 12 Bytes: 3,728 Cardinality: 16
15 UNION-ALL
7 NESTED LOOPS Cost: 6 Bytes: 1,864 Cardinality: 8
5 NESTED LOOPS Cost: 6 Bytes: 45,540 Cardinality: 220
3 NESTED LOOPS Cost: 6 Bytes: 1,145,187 Cardinality: 6,327
1 TABLE ACCESS FULL TRADEDB.TRAN_INFO_TYPES Cost: 2 Bytes: 104 Cardinality: 4
2 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_6 Search Columns: 1 Cost: 1 Bytes: 239,785 Cardinality: 1,547
4 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 26 Cardinality: 1
6 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 26 Cardinality: 1
14 NESTED LOOPS Cost: 6 Bytes: 1,864 Cardinality: 8
12 NESTED LOOPS Cost: 6 Bytes: 45,540 Cardinality: 220
10 NESTED LOOPS Cost: 6 Bytes: 1,145,187 Cardinality: 6,327
8 TABLE ACCESS FULL TRADEDB.TRAN_INFO_TYPES Cost: 2 Bytes: 104 Cardinality: 4
9 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_6 Search Columns: 1 Cost: 1 Bytes: 239,785 Cardinality: 1,547
11 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 26 Cardinality: 1
13 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2 Bytes: 26 Cardinality: 1
Our Oracle 9.2 database is set up with ALL_ROWS.
Outlines: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/outlines.htm#13091
Cursor sharing: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3696883368520
Hi!
We are on Oracle 9iR2, running on 64-bit Linux.
We are going to upgrade to Oracle 10gR2 in some months. Oracle 11g is not an option for us as our application is not certified by our vendor to run on that version.
However, our performance problems are urgent so we are looking for a solution before we upgrade as we are not able to upgrade before we have done extensive testing which takes 2-3 months.
We have more problem sql's than the one shown in this post. I am using the above SQL as a sample as I think we can solve many other slow running SQL's if we solve this one.
Is SQL Plan Management an option on Oracle 9i and/or Oracle 10g? -
Effect of RLS policy (VPD) on execution plan of a query
Hi
I have been working on tuning of few queries. A RLS policy is defined on most of the tables which appends an extra where condition (something like AREA_CODE=1). I am not able to understand the effect of this extra where clause on the execution plan of the query. In the execution plan there is no mention of the clause added by VPD. In 10046 trace it does show the policy function being executed but nothing after that.
Can someone shed some light on the issue that has VPD any effect on the execution plan of the query ? Also would it matter whether the column on which VPD is applied, was indexed or non-indexed ?
Regards,
Amardeep Sidhu
Amardeep Sidhu wrote:
I have been working on tuning of few queries. A RLS policy is defined on most of the tables which appends an extra where condition (something like AREA_CODE=1). I am not able to understand the effect of this extra where clause on the execution plan of the query. In the execution plan there is no mention of the clause added by VPD. In 10046 trace it does show the policy function being executed but nothing after that.
VPD is supposed to be invisible - which is why you get minimal information about security predicates in the standard trace file. However, if you reference a table with a security predicate in your query, the table is effectively replaced by an inline view of the form: "select * from original_table where {security_predicate}", and the result is then optimised. So the effect of the security predicate is just the same as if you had written the predicate into the query yourself.
Apart from using v$sql_plan to show the change in plan and the new predicates, you can see the effects of the predicates by setting event 10730 along with 10046. In current versions of Oracle this causes the substitute view to be printed in the trace file.
Bear in mind that security predicates can be very complex - including subqueries - so the effect isn't just that of including the selectivity of "another simple predicate".
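To illustrate the rewrite described above, a sketch (the table, columns, and predicate are hypothetical stand-ins, not objects from this thread):

```sql
-- What the application submits:
SELECT order_id, amount FROM orders WHERE order_date > SYSDATE - 7;

-- What the optimizer effectively sees once an RLS policy adds AREA_CODE = 1:
SELECT order_id, amount
FROM (SELECT * FROM orders WHERE area_code = 1) orders
WHERE order_date > SYSDATE - 7;

-- Tracing the substitute view text, per the advice above:
ALTER SESSION SET EVENTS '10730 trace name context forever';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
```

So for tuning purposes you can treat the policy predicate exactly as if it were written into the query by hand, including any indexing implications for the policy column.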
Can someone shed some light on the issue that has VPD any effect on the execution plan of the query ? Also would it matter whether the column on which VPD is applied, was indexed or non-indexed ?
Think of the effect of changing the SQL by hand - and how you would need to optimise the resultant query. Sometimes you do need to modify your indexing to help the security predicates, sometimes it won't make enough difference to matter.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format. -
Hello,
I am working on Oracle 11g R2 on AIX.
One query was performing well, at around 20 seconds, but suddenly it started taking more than 15 minutes.
We checked and found that the SQL execution plan changed: the order of operations is different, e.g. the order in which indexes are used.
The new plan is not good.
We want to force the SQL to use the old plan in future.
I read about SQL Plan Management; it describes a manual method to create a baseline and evolve all the plans. In one example we found that
the first execution plan was created without an index and then one with an index, so the second plan was better and was accepted.
But in our case we do not need to change anything; the query is performing badly, possibly because of the changed order of operations.
Another way is to use hints, but for that we would need to change the SQL, which is not possible in production now.
The issue is:
to do this we would need to run the SQL again, and Oracle may not create a plan like the old one, so we would not have the old good plan to accept.
Both execution plans are already in the cache.
I am looking for a way to set the plan hash value (of the good plan), or any other id of that SQL plan, to force only that plan to be used.
Any idea how to do it?
Stored Outlines are deprecated.
OP:
To fix a specific plan you have two choices:
1. SQL Plan Baselines - assuming the "good" plan is in AWR still then the steps are along the lines of load old plan from AWR into sql tuning set using DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY and DBMS_SQLTUNE.LOAD_SQLSET then load plans from sqlset into sql plan baseline using DBMS_SPM.LOAD_PLANS_FROM_SQLSET.
2. Using SQL Profiles to fix the outline hints - so similar to a stored outline but using the sql profile mechanism - using the coe_xfr_sql_profile.sql script, part of an approach in Oracle support doc id 215187.1
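A rough sketch of option 1, assuming the good plan is still in AWR (the snapshot ids, sql_id, and SQL tuning set name below are placeholders, not values from this thread):

```sql
-- Hypothetical sketch: recover the "good" plan from AWR into a SQL plan baseline.
DECLARE
  l_cursor  DBMS_SQLTUNE.SQLSET_CURSOR;
  l_loaded  PLS_INTEGER;
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS_GOOD_PLAN');

  -- Pull the statement from AWR snapshots 100-110 (placeholder snap ids and sql_id).
  OPEN l_cursor FOR
    SELECT VALUE(p)
    FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                 100, 110, 'sql_id = ''abcdefgh12345''')) p;

  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name     => 'STS_GOOD_PLAN',
                           populate_cursor => l_cursor);

  -- Load the captured plan(s) into a baseline as ACCEPTED plans.
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'STS_GOOD_PLAN');
END;
/
```

Once loaded, the optimizer will prefer the accepted baseline plan for that statement even though the bad plan is still in the cursor cache.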
But +1 for Nikolay's recommendation of understanding whether there is a root cause to this plan instability (plan instability being "normal", but flip-flopping between "good" and "bad" being a problem). Cardinality feedback is an obvious possible influence, different peeked binds another, stats changes, etc. -
Error in DAC 7.9.4 while building the execution plan
I'm getting Java exception EXCEPTION CLASS::: java.lang.NullPointerException while building the execution plan. The parameters are properly generated.
Earlier we used to get the error - No physical database mapping for the logical source was found for :DBConnection_OLAP as used in QUERY_INDEX_CREATION(DBConnection_OLAP->DBConnection_OLAP)
EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
We resolved this issue by using the built-in connection parameter, i.e. DBConnection_OLAP. This connection parameter has to be used because the execution plan cannot be built without an OLAP connection.
We are not using 7.9.4 OLAP data model since we have highly customized 7.8.3 OLAP model. We have imported 7.8.3 tables in DAC.
We have created all the tasks with the synchronization method, created the task group and subject area. We are using the built-in DBConnection_OLAP and DBConnection_OLTP parameters and pointed them to the relevant databases.
System set up:
OBI DAC server - Windows server
Informatica server and repository server 7.1.4 - installed on the local machine, with PATH variables provided.
Is this problem related to the different versions, i.e. we are using OBI DAC 7.9.4 and the underlying data model is 7.8.3?
Please help,
Thanks and regards,
Ashish
Hi,
Can anyone help me here, as I am stuck with the following issue?
I have created a command task in an Informatica workflow that executes a Unix script to purge the cache in OBIEE. But I want that workflow added as a task in DAC to the already existing execution plan, to be run last whenever the incremental load happens.
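For context, the purge script behind such a command task is often just a call to nqcmd with a cache-purge procedure; a sketch (the DSN, credentials, and paths below are assumptions, not values from this environment):

```shell
# Hypothetical purge script - adjust DSN, user, password and paths to your setup.
echo "Call SAPurgeAllCache();" > /tmp/purge_cache.sql
nqcmd -d AnalyticsWeb -u Administrator -p password -s /tmp/purge_cache.sql
```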
I created a task in DAC with the name of the workflow, WF_AUTO_PURGE, and added it as a following task in the execution plan. The problem is that when I try to build the plan after adding the task, it gives the following error:
MESSAGE:::Error while loading pre post steps for Execution Plan. CompleteLoad_withDelete
No physical database mapping for the logical source was found for :DBConnection_INFA as used in WF_AUTO_PURGE (DBConnection_INFA->DBConnection_INFA)
EXCEPTION CLASS::: com.siebel.analytics.etl.execution.ExecutionPlanInitializationException
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1317)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
::: CAUSE :::
MESSAGE:::No physical database mapping for the logical source was found for :DBConnection_INFA as used in WF_AUTO_PURGE(DBConnection_INFA->DBConnection_INFA)
EXCEPTION CLASS::: com.siebel.analytics.etl.execution.NoSuchDatabaseException
com.siebel.analytics.etl.execution.ExecutionParameterHelper.substitute(ExecutionParameterHelper.java:208)
com.siebel.analytics.etl.execution.ExecutionParameterHelper.parameterizeTask(ExecutionParameterHelper.java:139)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.handlePrePostTasks(ExecutionPlanDesigner.java:949)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.getExecutionPlanTasks(ExecutionPlanDesigner.java:790)
com.siebel.analytics.etl.execution.ExecutionPlanDesigner.design(ExecutionPlanDesigner.java:1267)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:169)
com.siebel.analytics.etl.client.util.tables.DefnBuildHelper.calculate(DefnBuildHelper.java:119)
com.siebel.analytics.etl.client.view.table.EtlDefnTable.doOperation(EtlDefnTable.java:169)
com.siebel.etl.gui.view.dialogs.WaitDialog.doOperation(WaitDialog.java:53)
com.siebel.etl.gui.view.dialogs.WaitDialog$WorkerThread.run(WaitDialog.java:85)
Regards,
Arul
Edited by: 869389 on Jun 30, 2011 11:02 PM
Edited by: 869389 on Jul 1, 2011 2:00 AM -
Unable to get the execution plan when using dbms_sqltune (11gR2)
Hi,
Database version: 11gR2
I have a user A that is granted privileges to execute dbms_sqltune.
I can create a task, execute it, and run the report.
But, when I run the report I get the following error:
SQL> show user
USER is "A"
SQL> set long 10000 longchunksize 10000 linesize 200 pagesize 000
SQL> select dbms_sqltune.report_tuning_task(task_name => 'MYTEST') from dual;
GENERAL INFORMATION SECTION
Tuning Task Name : MYTEST
Tuning Task Owner : A
Workload Type : Single SQL Statement
Scope : COMPREHENSIVE
Time Limit(seconds): 1800
Completion Status : COMPLETED
Started at : 05/15/2013 11:53:22
Completed at : 05/15/2013 11:53:23
Schema Name: SYSMAN
SQL ID : gjm43un5cy843
SQL Text : SELECT SUM(USED), SUM(TOTAL) FROM (SELECT /*+ ORDERED */
SUM(D.BYTES)/(1024*1024)-MAX(S.BYTES) USED,
SUM(D.BYTES)/(1024*1024) TOTAL FROM (SELECT TABLESPACE_NAME,
SUM(BYTES)/(1024*1024) BYTES FROM (SELECT /*+ ORDERED USE_NL(obj
tab) */ DISTINCT TS.NAME FROM SYS.OBJ$ OBJ, SYS.TAB$ TAB,
SYS.TS$ TS WHERE OBJ.OWNER# = USERENV('SCHEMAID') AND OBJ.OBJ# =
TAB.OBJ# AND TAB.TS# = TS.TS# AND BITAND(TAB.PROPERTY,1) = 0 AND
BITAND(TAB.PROPERTY,4194400) = 0) TN, DBA_FREE_SPACE SP WHERE
SP.TABLESPACE_NAME = TN.NAME GROUP BY SP.TABLESPACE_NAME) S,
DBA_DATA_FILES D WHERE D.TABLESPACE_NAME = S.TABLESPACE_NAME
GROUP BY D.TABLESPACE_NAME)
ERRORS SECTION
- ORA-00942: table or view does not exist
SQL>
It seems there is a missing privilege for displaying the execution plan.
As a workaround, this is solved by granting SELECT ANY DICTIONARY (which I don't want) to user A.
Does someone have an idea about which privilege is missing?
Kind Regards.
Hi,
The SELECT ANY DICTIONARY system privilege provides access to the SYS schema objects themselves => which is what you are using as a workaround.
SELECT_CATALOG_ROLE provides access to the SYS views only ==> the safer option.
SQL> grant SELECT ANY DICTIONARY to test;
Grant succeeded.
SQL> conn test/test
Connected.
SQL> select count(*) from sys.obj$;
COUNT(*)
13284
SQL> conn /as sysdba
Connected.
SQL> revoke SELECT ANY DICTIONARY from test;
Revoke succeeded.
SQL> grant SELECT_CATALOG_ROLE to test;
Grant succeeded.
SQL> conn test/test
Connected.
SQL> select count(*) from sys.obj$;
select count(*) from sys.obj$
ERROR at line 1:
ORA-00942: table or view does not exist
HTH
What to look for in Execution plans ?
Hi Pals,
Assuming the query execution is slow , I have collected the xml plan. What should I look for in the actual execution plan . What are the top 10 things I need to watch out for?
I know this is a broad and generic question but I am looking for anyone who is experienced in query tuning to answer this question.
Thanks in Advance.
Reading execution plans is a bit of an art. But a couple of things:
1) Download and install SQL Sentry Plan Explorer (www.sqlsentry.net). This is a free tool that in many cases gives a better experience for looking at execution plans, particularly big and ugly ones.
2) I usually look for the thickest arrows, as thickness indicates the number of rows being processed.
3) Look for deviations between estimates and actual values, as these deviations are often the source of bad performance. In Plan Explorer, you can quickly flip between the two. In SSMS you need to look at the popup for every operator. (But again, it is the operators with fat arrows that are of most interest - and those before them.)
4) The way to read the plan is that the left-most operator asks the operators it is connected to for data. The net effect is that data flows from right to left, and the right-hand side is often more interesting.
5) Don't pay much attention to the percentages for operator cost. These percentages are estimates only, not actual values. They are only reliable if the estimates are correct all the way through - and when you have a bad plan that is rarely the case.
This was the overall advice. Then there is more specific advice: are indexes being used when expected? Note that scans are not necessarily bad. Sometimes your problem is that you have a loop join + index seek, when you should have had two scans and a hash join.
Try to get a grip on how you would process the query if you had to do it manually. Does the plan match that idea?
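To compare estimates against actuals as in point 3, capture the actual plan for the statement; a sketch (the query shown is just a stand-in for the statement of interest):

```sql
-- Returns the plan XML with both estimated and actual row counts per operator.
SET STATISTICS XML ON;
SELECT TOP (10) name, object_id FROM sys.objects ORDER BY object_id;
SET STATISTICS XML OFF;
```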
Erland Sommarskog, SQL Server MVP, [email protected] -
Same sqlID with different execution plan and Elapsed Time (s), Executions time
Hello All,
The AWR reports for two days show the same SQL ID with a different execution plan and different elapsed time (s) and execution counts. Please help me find the reason for this change.
Please find the details below; on the 17th my processes were very slow compared to the 18th.
17th Oct                                     18th Oct
Value         Executions   SQL Id            Value         Executions   SQL Id
221,808,602   21           2tc2d3u52rppt     213,170,100   72,495,618   9c8wqzz7kyf37
209,239,059   71,477,888   9c8wqzz7kyf37     139,331,777   1            7b0kzmf0pfpzn
144,813,295   1            0cqc3bxxd1yqy     102,045,818   1            8vp1ap3af0ma5
128,892,787   16,673,829   84cqfur5na6fg     89,485,065    1            5kk8nd3uzkw13
127,467,250   16,642,939   1uz87xssm312g     67,520,695    8,058,820    a9n705a9gfb71
104,490,582   12,443,376   a9n705a9gfb71     62,627,205    1            ctwjy8cs6vng2
101,677,382   15,147,771   3p8q3q0scmr2k     57,965,892    268,353      akp7vwtyfmuas
98,000,414    1            0ybdwg85v9v6m     57,519,802    53           1kn9bv63xvjtc
87,293,909    1            5kk8nd3uzkw13     52,690,398    0            9btkg0axsk114
77,786,274    74           1kn9bv63xvjtc     34,767,882    1,003        bdgma0tn8ajz9
Not only are the queries different, but the number of blocks read by the top 10 queries is much higher on the 17th than on the 18th.
The other big difference is the average read time on the two days.
Tablespace IO Stats
17th Oct
Tablespace         Reads    Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes   Av Writes/s  Buffer Waits  Av Buf Wt(ms)
INDUS_TRN_DATA01   947,766  59          4.24       4.86        185,084  11           2,887         6.42
UNDOTBS2           517,609  32          4.27       1.00        112,070  7            108           11.85
INDUS_MST_DATA01   288,994  18          8.63       8.38        52,541   3            23,490        7.45
INDUS_TRN_INDX01   223,581  14          11.50      2.03        59,882   4            533           4.26
TEMP               198,936  12          2.77       17.88       11,179   1            732           2.13
INDUS_LOG_DATA01   45,838   3           4.81       14.36       348      0            1             0.00
INDUS_TMP_DATA01   44,020   3           4.41       16.55       244      0            1,587         4.79
SYSAUX             19,373   1           19.81      1.05        14,489   1            0             0.00
INDUS_LOG_INDX01   17,559   1           4.75       1.96        2,837    0            2             0.00
SYSTEM             7,881    0           12.15      1.04        1,361    0            109           7.71
INDUS_TMP_INDX01   1,873    0           11.48      13.62       231      0            0             0.00
INDUS_MST_INDX01   256      0           13.09      1.04        194      0            2             10.00
UNDOTBS1           70       0           1.86       1.00        60       0            0             0.00
STG_DATA01         63       0           1.27       1.00        60       0            0             0.00
USERS              63       0           0.32       1.00        60       0            0             0.00
INDUS_LOB_DATA01   62       0           0.32       1.00        60       0            0             0.00
TS_AUDIT           62       0           0.48       1.00        60       0            0             0.00
18th Oct
Tablespace         Reads    Av Reads/s  Av Rd(ms)  Av Blks/Rd
INDUS_TRN_DATA01   980,283  91          1.40       4.74
The AWR reports for two days with same sqlID with different execution plan and Elapsed Time (s), Executions time please help me to find out what is reason for this change.
Please find the below detail 17th day my process are very slow as compare to 18th
You wrote that the execution plan is different; I think you saw the plans. It is very difficult to get the old plan back.
I think execution plans do not change across days if you have not added an index or similar.
What does the ADDM report say about this script?
As you know, it is normal to see different elapsed times for the same statement on different days.
It depends on your database workload.
I think you should use the SQL Access and SQL Tuning Advisors for this script.
You may get a solution for the slow-running problem.
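To see exactly when and how the plan changed, you can pull every historical plan AWR has captured for one of the statements above (e.g. 9c8wqzz7kyf37, which appears on both days); something along these lines should work:

```sql
-- Lists every plan recorded in AWR for this sql_id, each with its
-- plan hash value, so the "good" and "bad" plans can be compared.
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('9c8wqzz7kyf37', NULL, NULL, 'TYPICAL'));
```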
Regards
Mahir M. Quluzade -
Out of the box execution plan for Payables EBS 11.5.10
Has anyone else experienced performance issues with the out of the box execution plan for the Payables subject area for Oracle EBS 11.5.10? Our incremental ETL for this particular subject area is taking 8+ hours. I understand that there are several factors involved with performance and that there are a lot of AP transactions, but this is ridiculous for a nightly incremental ETL job.
In particular, it is the SDE_ORA_APTransactionFact_Payment task that is taking forever. This query appears to have an extremely high cost (see explain plan below). Has anyone been successful in rewriting or changing this query?
SELECT STATEMENT ALL_ROWSCost: 586,953 Bytes: 16,550 Cardinality: 50
13 NESTED LOOPS OUTER Cost: 586,953 Bytes: 16,550 Cardinality: 50
10 NESTED LOOPS Cost: 586,952 Bytes: 15,800 Cardinality: 50
7 HASH JOIN Cost: 468,320 Bytes: 11,693,526 Cardinality: 59,358
5 HASH JOIN Cost: 429,964 Bytes: 9,200,490 Cardinality: 59,358
3 HASH JOIN Cost: 366,009 Bytes: 7,740,544 Cardinality: 60,473
1 TABLE ACCESS FULL TABLE AP.AP_AE_LINES_ALL Cost: 273,240 Bytes: 15,212,604 Cardinality: 230,494
2 TABLE ACCESS FULL TABLE AP.AP_INVOICE_PAYMENTS_ALL Cost: 45,211 Bytes: 715,512,860 Cardinality: 11,540,530
4 TABLE ACCESS FULL TABLE AP.AP_PAYMENT_SCHEDULES_ALL Cost: 39,003 Bytes: 309,648,420 Cardinality: 11,468,460
6 TABLE ACCESS FULL TABLE AP.AP_CHECKS_ALL Cost: 28,675 Bytes: 130,126,920 Cardinality: 3,098,260
9 TABLE ACCESS BY INDEX ROWID TABLE AP.AP_INVOICES_ALL Cost: 2 Bytes: 119 Cardinality: 1
8 INDEX UNIQUE SCAN INDEX (UNIQUE) AP.AP_INVOICES_U1 Cost: 1 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE PO.PO_HEADERS_ALL Cost: 1 Bytes: 15 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) PO.PO_HEADERS_U1 Cost: 1 Cardinality: 1
Hi Srini, All,
Thanks for the reply.
The payables documentation (i.e. User Guide) discusses about options that could be used in implementing EFT. However, if possible, we would like suggestions on what would be the better ways to implement EFT (US bank) using either XML or text formats. We would also prefer not using e-commerce gateway or EDI.
Thanks in advance.
MM -
9i 10g upgrade execution plan differences.
Hi all,
I am trying to find execution plan differences after upgrading a production system from 9i to 10gR2. I restored only the needed tablespaces from my production system (9i) to a new machine and then upgraded that Oracle server to 10gR2. On this new server I ran a script to get the new 10g execution plans. What surprises me is that the 10g plans are different: most of the new plans access tables via indexes instead of the full table scans in the original 9i plans. My guess is that the optimizer takes some values for its cost formula from other system tables that I do not have on the 10g server. I suspect I am missing something that is not documented in the upgrade book.
any idea?
Regards.
The 9i database is my production database and I regularly run cron jobs to gather missing or stale statistics on all tables, so there should be no problem with object-level statistics. I guess that I am missing something about system-level statistics, which are part of the cost-calculation formula (single-block read time, etc.). So I tried setting some statistics via the DBMS_STATS.SET_SYSTEM_STATS procedure, but it did not work.
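Following that line of thought, a sketch of gathering (or manually setting) system statistics on the 10g test server so its cost model reflects production I/O timings; the parameter values shown are placeholders, not recommendations:

```sql
-- Gather workload system statistics while a representative load runs:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
-- ... run a representative workload for a while ...
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

-- Or set individual values by hand (placeholder numbers, in milliseconds):
EXEC DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 5);
EXEC DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 10);
```

With no (or different) system statistics, the optimizer costs single-block and multiblock reads differently, which alone can flip plans between index access and full scans.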
Any idea?
Regards.
ALPER ÖNEY -
Worst execution plan ever?
World record estimation fail. We expect 1 row back; SQL Server expects over 23 trillion! The estimated memory is 111 petabytes (yes, I said peta).
We're using a pretty ugly view. Ugly because it has nested views, a correlated subquery and about 20 total joins. On the good side, the call is restricting the view with a single ID against the base table (this is for a single patient). The
rest of the view goes out and gets the patient's address, phone, contacts, status, insurance, diagnosis, etc. The db structure is fairly normalized so that does include about 20 tables.
This is a new application and as such there isn't much data yet. When we run the view on our server with thousands of patients, the results are returned quickly; query time is subsecond. The execution plan is ugly as expected (it's got hundreds of nodes), but the cost is pretty low and the performance is acceptable.
When we run the same view on a disconnected device running SQL Server LocalDB, it sometimes loses its mind. Note that the number of patients on the device is rarely over 100; it's a subset of the records on the server. That's when we get the
numbers I'm quoting above. Basically, the first join thinks there might be 12 records returned, then the next estimates 20 times that many, then 50 times that many, and that number just keeps multiplying until we get to trillions.
I have a screenshot in case anyone thinks I'm exaggerating those numbers. I also have the execution plan XML.
Bottom line is we're going to rewrite the query, but this now becomes an excuse to learn. Where is Grant Fritchey when you need him?In situations like this, there are a few typical situations:
The statistics are stale
You are using parameters and parameter sniffing goes awry
You've hit a bug or flaw in the optimizer
The query cannot be properly optimized
The first possible situation is the easiest to find and fix. Simply run UPDATE STATISTICS (preferably WITH FULLSCAN) on each table that is part of the query.
The second situation is also easily testable. For example, you can add OPTION (RECOMPILE) (or any other relevant compilation option) to defeat parameter sniffing.
Of course, you should always check whether you have proper indexes in place. Be aware that Foreign Key relations are not automatically indexed.
If you are out of luck, and it is not either of the first two, then you can dive deeper and find out what is going wrong. If you lack the time or knowledge to do that, you can break the query into several queries and use temporary tables for intermediate results.
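For the first two checks above, a minimal T-SQL sketch (the table and view names are hypothetical, for illustration only):

```sql
-- 1) Refresh statistics on each table behind the view.
UPDATE STATISTICS dbo.Patient WITH FULLSCAN;
UPDATE STATISTICS dbo.PatientInsurance WITH FULLSCAN;

-- 2) Defeat parameter sniffing by forcing a fresh plan on every execution.
SELECT *
FROM dbo.vPatientDetails      -- hypothetical view name
WHERE PatientID = @PatientID
OPTION (RECOMPILE);
```

If the estimates stay in the trillions after both, the optimizer bug/limitation scenarios become more likely.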
Gert-Jan -
Hi All,
Oracle v11.2.0.2
I have a SELECT query which executes in less than a second and selects few records.
Now, if I put this SELECT query in the IN clause of a DELETE command, it takes ages (even though the DELETE is done using the primary key).
See below query and execution plan.
Here is the SELECT query
SQL> SELECT ITEM_ID
2 FROM APP_OWNER.TABLE1
3 WHERE COLUMN1 = 'SomeValue1234'
4 OR (COLUMN1 LIKE 'SomeValue1234%'
5 AND REGEXP_LIKE (
6 COLUMN1,
7 '^SomeValue1234[A-Z]{3}[0-9]{5}$'
8 ));
ITEM_ID
74206192
1 row selected.
Elapsed: 00:00:40.87
Execution Plan
Plan hash value: 3153606419
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 38 | 7 (0)| 00:00:01 |
| 1 | CONCATENATION | | | | | |
|* 2 | INDEX RANGE SCAN | PK_TABLE1 | 1 | 19 | 4 (0)| 00:00:01 |
|* 3 | INDEX UNIQUE SCAN| PK_TABLE1 | 1 | 19 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("COLUMN1" LIKE 'SomeValue1234%')
filter("COLUMN1" LIKE 'SomeValue1234%' AND REGEXP_LIKE
("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$'))
3 - access("COLUMN1"='SomeValue1234')
filter(LNNVL("COLUMN1" LIKE 'SomeValue1234%') OR LNNVL(
REGEXP_LIKE ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$')))
Statistics
0 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
348 bytes sent via SQL*Net to client
360 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Now see the DELETE command. ITEM_ID is the primary key for TABLE2.
SQL> delete from TABLE2 where ITEM_ID in (
2 SELECT ITEM_ID
3 FROM APP_OWNER.TABLE1
4 WHERE COLUMN1 = 'SomeValue1234'
5 OR (COLUMN1 LIKE 'SomeValue1234%'
6 AND REGEXP_LIKE (
7 COLUMN1,
8 '^SomeValue1234[A-Z]{3}[0-9]{5}$'
9 ))
10 );
1 row deleted.
Elapsed: 00:02:12.98
Execution Plan
Plan hash value: 173781921
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | DELETE STATEMENT | | 4 | 228 | 63490 (2)| 00:12:42 |
| 1 | DELETE | TABLE2 | | | | |
| 2 | NESTED LOOPS | | 4 | 228 | 63490 (2)| 00:12:42 |
| 3 | SORT UNIQUE | | 1 | 19 | 63487 (2)| 00:12:42 |
|* 4 | INDEX FAST FULL SCAN| I_TABLE1_3 | 1 | 19 | 63487 (2)| 00:12:42 |
|* 5 | INDEX RANGE SCAN | PK_TABLE2 | 7 | 266 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter("COLUMN1"='SomeValue1234' OR "COLUMN1" LIKE 'SomeValue1234%' AND
REGEXP_LIKE ("COLUMN1",'^SomeValue1234[A-Z]{3}[0-9]{5}$'))
5 - access("ITEM_ID"="ITEM_ID")
Statistics
1 recursive calls
5 db block gets
227145 consistent gets
167023 physical reads
752 redo size
765 bytes sent via SQL*Net to client
1255 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
1 rows processed
What can be the issue here?
I tried the NO_UNNEST hint, which made a difference, but the DELETE still took around a minute (instead of 2 minutes), which is still way more than that sub-second response.
Thanks in advance.
rahulras wrote:
SQL> delete from TABLE2 where ITEM_ID in (
2 SELECT ITEM_ID
3 FROM APP_OWNER.TABLE1
4 WHERE COLUMN1 = 'SomeValue1234'
5 OR (COLUMN1 LIKE 'SomeValue1234%'
6 AND REGEXP_LIKE (
7 COLUMN1,
8 '^SomeValue1234[A-Z]{3}[0-9]{5}$'
9 ))
10 );
The optimizer will transform this delete statement into something like:
delete from table2 where rowid in (
select t2.rowid
from
table2 t2,
table1 t1
where
t1.itemid = t2.itemid
and (t1.column1 = etc.... )
)
With the standalone subquery against t1, the optimizer has been a little clever with the concatenation operation, but it looks as if something about this transformed join makes it impossible for the concatenation mechanism to be used. I'd also have to guess that something about the way the transformation happened has made Oracle "lose" the PK index. As I said in another thread a few minutes ago, I don't usually look at 10053 trace files to solve optimizer problems, but this is the second one today where I'd start looking at the trace if it were my problem.
You could try rewriting the query in this explicit join and select-rowid form; that way you could always force the optimizer into the right path through table1. It's probably also possible to hint the original to produce the expected path, but since the thing you hint and the thing Oracle optimises are so different, it might turn out to be a little difficult. I'd suggest raising an SR with Oracle.
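A possible sketch of that explicit join and rowid form, adapted from the transformed statement above (untested, so treat it as a starting point rather than a finished fix):

```sql
DELETE FROM table2
WHERE rowid IN (
  SELECT t2.rowid
  FROM   table2 t2,
         app_owner.table1 t1
  WHERE  t1.item_id = t2.item_id
  AND    (t1.column1 = 'SomeValue1234'
          OR (t1.column1 LIKE 'SomeValue1234%'
              AND REGEXP_LIKE(t1.column1,
                              '^SomeValue1234[A-Z]{3}[0-9]{5}$')))
);
```

Written this way, hints on the t1 access path (e.g. forcing PK_TABLE1) apply to the statement the optimizer actually sees.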
Regards
Jonathan Lewis -
Hi,
I have a question about reading query execution plans. I don't understand what is meant by "Hash Join" in a plan step. I understand "Nested Loops" and "Merge Join", but after looking at different documentation I am still not clear about "Hash Join". Please don't give links to the Oracle documentation, because I have seen a lot of it and it is still not clear. I also don't fully understand what is meant by a "Hash Table"; what are all these "hashes"???
Please explain in your own words, if possible, and with the help of some example.
Thanks in advance.
user566817 wrote:
Can anyone else answer my question? I would be highly thankful.
Hash joins are one of the three techniques Oracle uses to join two sets of data.
As a brief and very crude overview:
Generally, after applying the filter to one data source, its join key is hashed into an in-memory table (as described by the link sb provided). The second data set is filtered and its join key is also hashed, but this hash is used to find matches in the memory table created from the first data source. Assuming no hash chaining, found matches are joins; if hash chaining exists, the chain needs to be traversed to verify the joins.
This, of course, has a lot of holes which would be filled in by the documentation and by external links that you do not seem to want to use.
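To make the build/probe idea concrete, here is a simplified sketch in Python (illustrative only; this is not Oracle's actual implementation, and the table data is made up):

```python
# Simplified hash join: build an in-memory hash table on the smaller
# input's join key, then probe it with each row of the larger input.
def hash_join(build_rows, probe_rows, build_key, probe_key):
    # Build phase: hash the smaller data set into a memory table,
    # keeping a list per key to handle duplicate key values.
    table = {}
    for row in build_rows:
        table.setdefault(row[build_key], []).append(row)
    # Probe phase: hash each probe row's key and look it up;
    # every match found in the table is a joined row.
    result = []
    for row in probe_rows:
        for match in table.get(row[probe_key], []):
            result.append({**match, **row})
    return result

# Hypothetical data: departments (build side) and employees (probe side).
depts = [{"deptno": 10, "dname": "SALES"}, {"deptno": 20, "dname": "HR"}]
emps = [{"ename": "SMITH", "deptno": 20}, {"ename": "JONES", "deptno": 10},
        {"ename": "BROWN", "deptno": 10}]

rows = hash_join(depts, emps, "deptno", "deptno")
```

The dictionary here plays the role of the in-memory hash table; real databases also partition the build input and spill to disk when it does not fit in memory.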
(Remember: School papers need to cite references. Teachers these days DO check.) -
Problems with execution plans of OL queries in MGP
I'm facing some strange behavior of the OL MGP process. Its performance is really poor on one of our servers, and I ran Consperf only to find that the execution plans look really weird. It looks like OL doesn't use the available indexes at all, even though statistics are fine, and when I execute the same SQL manually the execution plan looks totally different: there are almost no TABLE ACCESS FULL operations. Is there any OL setup property that could cause this strange behavior?
Consperf explain plan output for one of the snapshots:
********** BASE - Publication item query ***********
SELECT d.VISITID, d.TASKID, d.QTY FROM HEINAPS.PA_TASKS d
WHERE d.VisitID IN (SELECT h.VisitID FROM HEINAPS.PA_VISITS_H_LIST h WHERE h.DSM = ?)
| Operation | Name | Rows | Bytes| Cost | Optimizer
| SELECT STATEMENT | | 1 | 24 | 0 |ALL_ROWS
| FILTER | | | | |
| HASH JOIN RIGHT SEMI | | 2M| 61M| 20743 |
| TABLE ACCESS FULL |PA_VISITS_H_LIST | 230K| 2M| 445 |ANALYZED
| TABLE ACCESS FULL |PA_TASKS | 11M| 134M| 6522 |ANALYZED
explain plan result of the same query executed in Pl/SQL Developer:
UPDATE STATEMENT, GOAL = ALL_ROWS Cost=3345 Cardinality=39599 Bytes=2969925
UPDATE Object owner=MOBILEADMIN Object name=CMP$JPHSK_PA_TASKS
HASH JOIN ANTI Cost=3345 Cardinality=39599 Bytes=2969925
TABLE ACCESS BY INDEX ROWID Object owner=MOBILEADMIN Object name=CMP$JPHSK_PA_TASKS Cost=1798 Cardinality=39599 Bytes=910777
INDEX RANGE SCAN Object owner=MOBILEADMIN Object name=CMP$1527381C Cost=239 Cardinality=49309
VIEW Object owner=SYS Object name=VW_SQ_1 Cost=1547 Cardinality=29101 Bytes=1513252
NESTED LOOPS Cost=1547 Cardinality=29101 Bytes=640222
INDEX RANGE SCAN Object owner=HEINAPS Object name=IDX_PAVISITSHL_DSM_VISITID Cost=39 Cardinality=1378 Bytes=16536
INDEX RANGE SCAN Object owner=HEINAPS Object name=PK_PA_TASKS Cost=2 Cardinality=21 Bytes=210
This query, and a few others, runs in MGP for a few minutes per user because of the poor execution plan. Is there any way to force OL to use the "standard" execution plans the DB produces, to get MGP back to usable performance?
The problem is that the MGP process does not run the publication item query as such. What it does is wrap it up inside insert and update statements and then execute them via Java, and this is what can cause problems.
Set the trace to all for MGPCOMPOSE on a user, wait for the MGP cycle, and you will find a series of trace files for that user. Look through these and you should find the actual wrapped-up query that is executed. It should also be in the Consperf file. Consperf should give a few different execution stats for the query (ins_1, ins_2); if these are better, set them in c$consperf. The automatic setting does not always choose the best one.
If all else fails, try expressing the query in other ways and test them in the MGP process. I have found that this kind of trial and error is the only approach.
A couple of notes about the query below:
1) Do you specifically need to restrict the columns from HEINAPS.PA_TASKS? If not, use SELECT * in the PI select statement, as it tends to bind better.
2) What is the data type of HEINAPS.PA_VISITS_H_LIST.DSM? If numeric, then do a TO_NUMBER() on the bind variable; implicit type casting is not very efficient.
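A sketch of point 2 applied to the publication item query from this thread (assuming DSM is in fact numeric; untested):

```sql
-- Explicit conversion of the bind variable avoids an implicit type cast
-- on the indexed column (table and column names taken from the thread).
SELECT d.*
FROM HEINAPS.PA_TASKS d
WHERE d.VisitID IN (SELECT h.VisitID
                    FROM HEINAPS.PA_VISITS_H_LIST h
                    WHERE h.DSM = TO_NUMBER(?));
```

With the conversion on the bind variable rather than the column, the index on DSM remains usable.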