SQL performance problem.. here is the explain plan.
This takes 12 seconds and returns about 100K rows.
I need to do pagination, and the start and end index should be able to be any range, like 200 to 300.
But when I add that, the query never comes back.
select rownum rn , X.*
from v_online_item X,
(SELECT c.category_seq AS seq_no, c.assoc_seq, c.assoc_table_name
FROM v_online_category_assoc c,
(SELECT assoc_seq
FROM v_online_category_assoc a
WHERE a.assoc_table_name = 'V_ONLINE_CATEGORY'
CONNECT BY category_seq = PRIOR assoc_seq
AND assoc_table_name = 'V_ONLINE_CATEGORY'
START WITH category_seq = 170101
AND assoc_table_name = 'V_ONLINE_CATEGORY'
UNION
SELECT to_number(170101)
FROM DUAL
) v
WHERE c.category_seq = v.assoc_seq
AND assoc_table_name = 'V_ONLINE_ITEM'
) Y
where X.item_seq = Y.assoc_seq
Plan
SELECT STATEMENT ALL_ROWSCost: 1,484,076 Bytes: 76,566,065,969 Cardinality: 4,868,447
17 FILTER
16 VIEW VZW_ADMIN. Cost: 1,484,076 Bytes: 76,566,065,969 Cardinality: 4,868,447
15 COUNT
14 HASH JOIN Cost: 1,484,076 Bytes: 5,588,977,156 Cardinality: 4,868,447
12 NESTED LOOPS Cost: 31,929 Bytes: 189,869,433 Cardinality: 4,868,447
10 VIEW VZW_ADMIN. Cost: 8 Bytes: 416 Cardinality: 32
9 SORT UNIQUE Cost: 8 Bytes: 806 Cardinality: 32
8 UNION-ALL
6 FILTER
5 CONNECT BY WITH FILTERING
1 INDEX RANGE SCAN INDEX (UNIQUE) VZW_ADMIN.UDX1_ONLINE_CATEGORY_ASSOC Cost: 4 Bytes: 735 Cardinality: 15
4 NESTED LOOPS
2 CONNECT BY PUMP
3 INDEX RANGE SCAN INDEX (UNIQUE) VZW_ADMIN.UDX1_ONLINE_CATEGORY_ASSOC Cost: 4 Bytes: 806 Cardinality: 31
7 FAST DUAL Cost: 2 Cardinality: 1
11 INDEX RANGE SCAN INDEX (UNIQUE) VZW_ADMIN.UDX1_ONLINE_CATEGORY_ASSOC Cost: 998 Bytes: 3,955,614 Cardinality: 152,139
13 TABLE ACCESS FULL TABLE VZW_ADMIN.V_ONLINE_ITEM Cost: 670,275 Bytes: 16,092,289,779 Cardinality: 14,510,631
select * from
(select rownum rn , X.*
from v_online_item X,
(SELECT c.category_seq AS seq_no, c.assoc_seq, c.assoc_table_name
FROM v_online_category_assoc c,
(SELECT assoc_seq
FROM v_online_category_assoc a
WHERE a.assoc_table_name = 'V_ONLINE_CATEGORY'
CONNECT BY category_seq = PRIOR assoc_seq
AND assoc_table_name = 'V_ONLINE_CATEGORY'
START WITH category_seq = 170101
AND assoc_table_name = 'V_ONLINE_CATEGORY'
UNION
SELECT to_number(170101)
FROM DUAL
) v
WHERE c.category_seq = v.assoc_seq
AND assoc_table_name = 'V_ONLINE_ITEM'
) Y
where X.item_seq = Y.assoc_seq
)
where rn between :s_index and :e_index ;
Plan
SELECT STATEMENT ALL_ROWSCost: 1,484,076 Bytes: 76,566,065,969 Cardinality: 4,868,447
17 FILTER
16 VIEW VZW_ADMIN. Cost: 1,484,076 Bytes: 76,566,065,969 Cardinality: 4,868,447
15 COUNT
14 HASH JOIN Cost: 1,484,076 Bytes: 5,588,977,156 Cardinality: 4,868,447
12 NESTED LOOPS Cost: 31,929 Bytes: 189,869,433 Cardinality: 4,868,447
10 VIEW VZW_ADMIN. Cost: 8 Bytes: 416 Cardinality: 32
9 SORT UNIQUE Cost: 8 Bytes: 806 Cardinality: 32
8 UNION-ALL
6 FILTER
5 CONNECT BY WITH FILTERING
1 INDEX RANGE SCAN INDEX (UNIQUE) VZW_ADMIN.UDX1_ONLINE_CATEGORY_ASSOC Cost: 4 Bytes: 735 Cardinality: 15
4 NESTED LOOPS
2 CONNECT BY PUMP
3 INDEX RANGE SCAN INDEX (UNIQUE) VZW_ADMIN.UDX1_ONLINE_CATEGORY_ASSOC Cost: 4 Bytes: 806 Cardinality: 31
7 FAST DUAL Cost: 2 Cardinality: 1
11 INDEX RANGE SCAN INDEX (UNIQUE) VZW_ADMIN.UDX1_ONLINE_CATEGORY_ASSOC Cost: 998 Bytes: 3,955,614 Cardinality: 152,139
13 TABLE ACCESS FULL TABLE VZW_ADMIN.V_ONLINE_ITEM Cost: 670,275 Bytes: 16,092,289,779 Cardinality: 14,510,631
user550024 wrote:
This takes 12 seconds... and give 100 K rows in return..
I need to do pagination and start and end index should be any rang like 200 TO 300
So when I get that it never comes back..
Refer to the Tom Kyte column mentioned, but just a few comments: your query seems to be missing an explicit ORDER BY, so it's unclear what order you actually have in mind when performing your pagination.
Second, using "rn between :s_index and :e_index" doesn't activate the FIRST_ROWS_n mode of the optimizer. You should either use an explicit FIRST_ROWS(n) hint or, even better, the more convoluted double-inline-view syntax shown by Tom (... where rownum <= :M ... where rnum >= :N).
If you don't have a supporting index for the ORDER BY requested, you should see at least a SORT ORDER BY STOPKEY operation. For a normal b*tree index to be usable here, at least one of its columns needs to be defined as NOT NULL. If you have character-based columns in the ORDER BY/index definition, you need to make sure that you're using NLS_SORT = binary on the clients involved; otherwise, again, a normal b*tree index can't be used to return the result set in the order requested.
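For reference, the double-inline-view pattern from the Tom Kyte column looks roughly like this — a sketch only; the table, column list, and ORDER BY here are placeholders you would replace with your own:

```sql
-- Pagination sketch: the inner ROWNUM <= :M predicate lets the optimizer
-- stop early (COUNT STOPKEY); the outer filter trims the leading rows.
-- A deterministic ORDER BY is required for stable pages.
SELECT *
  FROM (SELECT a.*, ROWNUM rnum
          FROM (SELECT x.*                 -- placeholder projection
                  FROM v_online_item x
                 ORDER BY x.item_seq) a    -- placeholder ordering column
         WHERE ROWNUM <= :e_index)         -- :M, the end of the page
 WHERE rnum >= :s_index;                   -- :N, the start of the page
```

Note that the `ROWNUM <= :e_index` predicate must sit in the middle query block; collapsing it into a single `BETWEEN` in one block defeats the stopkey optimization.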
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Similar Messages
-
Passing PL/SQL Procedure call to custom explain plan script
Hi,
Oracle have sent me a custom explain plan script and have asked me to pass my problematic code through it. They use the GET command in their script to retrieve the command, which is passed in as a parameter, e.g.:
@custom_explain_plan <my sql script>
The script has a section as follows:
get &&1 -- my script
0 explain plan set statement_id = 'APG' into plan_table for
This works for a standard SQL statement e.g. SELECT SYSDATE FROM DUAL but (and Oracle Support know this) my issue is triggered by a call to an e-Business Suite API that updates the database. This is contained within a PL/SQL block and COMMITs a few sample updates. I've not used explain plan before but I suspect it can only be used on SQL statements? I have no access to the DELETEs and INSERTs that the API triggers, so what am I supposed to do?
Am I missing something, or have Oracle Support sold me a pup?
I thought I'd post here first before going back to them...
Many thanks,
Bagpuss
Sybrand,
Many thanks for the swift confirmation. I didn't think I was going mad! Already gave them a standard trace so I guess it's up to them now - my experience with Oracle Support in the last few years has been very deflating - they evidently do not read/understand much of the information they are given in an SR. Added to that, we've had patches from them with basic syntax errors! I've consistently received much more informed responses from the various OTN forums, which are much appreciated.
I may construct the DELETE and INSERT statements dynamically myself using a typical employee record to at least give them something to ponder over...
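For what it's worth, one way to capture the SQL the API actually issues (rather than explaining it) is to trace the session and run the resulting trace file through tkprof — a sketch, assuming your session is allowed to set events:

```sql
-- Turn on extended SQL trace for the current session, invoke the API,
-- then turn it off; the trace file lands in user_dump_dest on the server.
ALTER SESSION SET tracefile_identifier = 'api_trace';  -- tags the file name
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... invoke the e-Business Suite API here ...
ALTER SESSION SET events '10046 trace name context off';
```

Then, on the server: `tkprof <tracefile> out.txt sys=no` — which of course still requires someone with server access to fetch the file.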
Thanks again,
Bagpuss. -
PL/SQL Performance problem
I am facing a performance problem with my current application (PL/SQL packaged procedure)
My application takes data from 4 temporary tables, does a lot of validation, and puts the data into permanent tables (updates if a row is present, else inserts).
One of the temporary tables is the parent; each parent row can have 0 or more related rows in the other tables.
I have analyzed all my tables and indexes and checked all my SQLs
They all seem to be using the indexes correctly.
There are 1.6 million records combined in all 4 tables.
I am using Oracle 8i.
How do I determine what is causing the problem and which part is taking time.
Please help.
The skeleton of the code which we have written looks like this
MAIN LOOP ( 255308 records)-- Parent temporary table
-----lots of validation-----
update permanent_table1
if sql%rowcount = 0 then
insert into permanent_table1
Loop2 (0-5 records)-- child temporary table1
-----lots of validation-----
update permanent_table2
if sql%rowcount = 0 then
insert into permanent_table2
end loop2
Loop3 (0-5 records)-- child temporary table2
-----lots of validation-----
update permanent_table3
if sql%rowcount = 0 then
insert into permanent_table3
end loop3
Loop4 (0-5 records)-- child temporary table3
-----lots of validation-----
update permanent_table4
if sql%rowcount = 0 then
insert into permanent_table4
end loop4
-- COMMIT after every 3000 records
END MAIN LOOP
Thanks
Ashwin N.
Do this instead of ditching the PL/SQL:
DECLARE
TYPE NumTab IS TABLE OF NUMBER(4) INDEX BY BINARY_INTEGER;
TYPE NameTab IS TABLE OF CHAR(15) INDEX BY BINARY_INTEGER;
pnums NumTab;
pnames NameTab;
t1 NUMBER(5);
t2 NUMBER(5);
t3 NUMBER(5);
BEGIN
FOR j IN 1..5000 LOOP -- load index-by tables
pnums(j) := j;
pnames(j) := 'Part No. ' || TO_CHAR(j);
END LOOP;
t1 := dbms_utility.get_time;
FOR i IN 1..5000 LOOP -- use FOR loop
INSERT INTO parts VALUES (pnums(i), pnames(i));
END LOOP;
t2 := dbms_utility.get_time;
FORALL i IN 1..5000 -- use FORALL statement
INSERT INTO parts VALUES (pnums(i), pnames(i));
t3 := dbms_utility.get_time;
dbms_output.put_line('Execution Time (secs)');
dbms_output.put_line('---------------------');
dbms_output.put_line('FOR loop: ' || TO_CHAR(t2 - t1));
dbms_output.put_line('FORALL: ' || TO_CHAR(t3 - t2));
END;
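As a side note, the update-then-insert-if-no-rows pattern in the original skeleton can usually be collapsed into a single MERGE per table — a sketch with hypothetical table and column names (note that MERGE arrived in 9i, so on the poster's 8i database the FORALL approach is the realistic option):

```sql
-- One set-based MERGE replaces the per-row UPDATE + conditional INSERT.
MERGE INTO permanent_table1 p
USING temp_table1 t
   ON (p.id = t.id)                 -- hypothetical join key
 WHEN MATCHED THEN
   UPDATE SET p.val = t.val
 WHEN NOT MATCHED THEN
   INSERT (id, val) VALUES (t.id, t.val);
```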
Try this link, http://download-west.oracle.com/docs/cd/B10501_01/appdev.920/a96624/05_colls.htm#23723 -
Problem creating an Explain Plan and using XML Indexes. Please follow the scenario below.
Hi,
Oracle Version - Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit
I have been able to reproduce the error as below:
Please run the following code in Schema1:
CREATE TABLE TNAME1
(
  DB_ID VARCHAR2 (10 BYTE),
  DATA_ID VARCHAR2 (10 BYTE),
  DATA_ID2 VARCHAR2 (10 BYTE),
  IDENTIFIER1 NUMBER (19) NOT NULL,
  ID1 NUMBER (10) NOT NULL,
  STATUS1 NUMBER (10) NOT NULL,
  TIME_STAMP NUMBER (19) NOT NULL,
  OBJECT_ID VARCHAR2 (40 BYTE) NOT NULL,
  OBJECT_NAME VARCHAR2 (80 BYTE) NOT NULL,
  UNIQUE_ID VARCHAR2 (255 BYTE),
  DATA_LIVE CHAR (1 BYTE) NOT NULL,
  XML_MESSAGE SYS.XMLTYPE,
  ID2 VARCHAR2 (255 BYTE) NOT NULL,
  FLAG1 CHAR (1 BYTE) NOT NULL,
  KEY1 VARCHAR2 (255 BYTE),
  HEADER1 VARCHAR2 (2000 BYTE) NOT NULL,
  VERSION2 VARCHAR2 (255 BYTE) NOT NULL,
  TYPE1 VARCHAR2 (15 BYTE),
  TIMESTAMP1 TIMESTAMP (6),
  SOURCE_NUMBER NUMBER
)
XMLTYPE XML_MESSAGE STORE AS BINARY XML
PARTITION BY RANGE (TIMESTAMP1)
(PARTITION MAX
 VALUES LESS THAN (MAXVALUE)
 NOCOMPRESS
 NOCACHE
)
ENABLE ROW MOVEMENT;
begin
app_utils.drop_parameter('TNAME1_PAR');
end;
BEGIN
  DBMS_XMLINDEX.REGISTERPARAMETER(
    'TNAME1_PAR',
    'PATH TABLE TNAME1_RP_PT
     PATHS (INCLUDE ( /abc:Msg/product/productType
                      /abc:Msg/Products/Owner )
            NAMESPACE MAPPING ( xmlns:abc="Abc:Set" ))');
END;
CREATE INDEX Indx_XPATH_TNAME1
ON "TNAME1" (XML_MESSAGE)
INDEXTYPE IS XDB.XMLINDEX PARAMETERS ( 'PARAM TNAME1_PAR' )
local;
Then in SCHEMA2, create:
create synonym TNAME1 FOR SCHEMA1.TNAME1
SCHEMA1:
GRANT ALL ON TNAME1 TO SCHEMA2;
Now in SCHEMA2, if we try:
Explain Plan for
SELECT xmltype.getclobval (XML_MESSAGE)
FROM TNAME1 t
WHERE XMLEXISTS (
'declare namespace abc="Abc:Set"; /abc:Msg/product/productType= ("1", "2") '
PASSING XML_MESSAGE);
we get -> ORA-00942: table or view does not exist
whereas this works:
Explain Plan for
SELECT xmltype.getclobval (XML_MESSAGE)
FROM TNAME1 t
Please tell me what the reason behind this is and how I can overcome it. It's causing all my views based on this condition to fail in the other schema, i.e. they are not picking up the XML indexes.
Also
SELECT * FROM DBA_XML_TAB_COLS WHERE TABLE_NAME LIKE 'TNAME1';
Output is like:
OWNER, || TABLE_NAME, || COLUMN_NAME, || XMLSCHEMA || SCHEMA_OWNER, || ELEMENT_NAME, || STORAGE_TYPE, || ANYSCHEMA, || NONSCHEMA
SCHEMA1 || TNAME1 || XML_MESSAGE || || || BINARY || NO || YES ||
SCHEMA1 || TNAME1 || SYS_NC00025$ || || || CLOB || ||
- Can I change ANYSCHEMA from NO to YES for COLUMN_NAME = XML_MESSAGE? Maybe that will solve my problem.
- SYS_NC00025$ is the XML index column. Why don't I get any values for ANYSCHEMA/NONSCHEMA on it? Is this what is causing the problem?
Kindly suggest.. Thanks..
The problem sounds familiar. Please create an SR on http://support.oracle.com for this one.
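One educated guess worth checking before (or alongside) the SR — this is an assumption, not a confirmed fix: a structured XMLIndex rewrites queries against its registered path table, so SCHEMA2 may need privileges on that object as well as on the base table:

```sql
-- Hypothetical: grant access on the XMLIndex path table registered above.
GRANT SELECT ON SCHEMA1.TNAME1_RP_PT TO SCHEMA2;
```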
-
Query Performance and reading an Explain Plan
Hi,
Below I have posted a query that is running slowly for me - upwards of 10 minutes which I would not expect. I have also supplied the explain plan. I'm fairly new to explain plans and not sure what the danger signs are that I should be looking out for.
I have added indexes to these tables, a lot of which are used in the JOIN and so I expected this to be quicker.
Any help or pointers in the right direction would be very much appreciated -
SELECT a.lot_id, a.route, a.route_rev
FROM wlos_owner.tbl_current_lot_status_dim a, wlos_owner.tbl_last_seq_num b, wlos_owner.tbl_hist_metrics_at_op_lkp c
WHERE a.fw_ver = '2'
AND a.route = b.route
AND a.route_rev = b.route_rev
AND a.fw_ver = b.fw_ver
AND a.route = c.route
AND a.route_rev = c.route_rev
AND a.fw_ver = c.fw_ver
AND a.prod = c.prod
AND a.lot_type = c.lot_type
AND c.step_seq_num >= a.step_seq_num
PLAN_TABLE_OUTPUT
Plan hash value: 2447083104
| Id | Operation | Name | Rows | Bytes | Cost
(%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | 333 | 33633 | 1347
(8)| 00:00:17 |
|* 1 | HASH JOIN | | 333 | 33633 | 1347
(8)| 00:00:17 |
|* 2 | HASH JOIN | | 561 | 46002 | 1333
(7)| 00:00:17 |
|* 3 | TABLE ACCESS FULL| TBL_CURRENT_LOT_STATUS_DIM | 11782 | 517K| 203
(5)| 00:00:03 |
PLAN_TABLE_OUTPUT
|* 4 | TABLE ACCESS FULL| TBL_HIST_METRICS_AT_OP_LKP | 178K| 6455K| 1120
(7)| 00:00:14 |
|* 5 | TABLE ACCESS FULL | TBL_LAST_SEQ_NUM | 8301 | 154K| 13
(16)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND
"A"."FW_VER"=TO_NUMBER("B"."FW_VER"))
2 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV" AND
"A"."FW_VER"="C"."FW_VER" AND "A"."PROD"="C"."PROD" AND "A"."LOT_T
YPE"="C"."LOT_TYPE")
filter("C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
3 - filter("A"."FW_VER"=2)
PLAN_TABLE_OUTPUT
4 - filter("C"."FW_VER"=2)
5 - filter(TO_NUMBER("B"."FW_VER")=2)
24 rows selected.
Guys, thank you for your help.
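(Worth spelling out for other readers: the `TO_NUMBER("B"."FW_VER")` predicates in the first plan are the telltale sign of an implicit datatype conversion — FW_VER was stored as VARCHAR2 but compared to a number, which wraps the column in TO_NUMBER() and disables a plain index on it. A minimal illustration against one of the tables above:)

```sql
-- fw_ver is VARCHAR2; comparing it to the number 2 silently becomes
-- TO_NUMBER(fw_ver) = 2, so a normal index on fw_ver cannot be used.
SELECT * FROM tbl_last_seq_num WHERE fw_ver = 2;    -- implicit conversion
SELECT * FROM tbl_last_seq_num WHERE fw_ver = '2';  -- index-friendly
```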
I changed the type of the offending column and the plan looks a lot better and results seem a lot quicker.
However I have added to my SELECT, quite substantially, and have a new explain plan.
There are two sections in particular that have a high cost, and I was wondering if you see anything inherently wrong, or can explain more fully what the PLAN_TABLE_OUTPUT descriptions are telling me - in particular
INDEX FULL SCAN
PLAN_TABLE_OUTPUT
Plan hash value: 3665357134
| Id | Operation | Name | Rows
| Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | |
4 | 316 | 52 (2)| 00:00:01 |
|* 1 | VIEW | |
4 | 316 | 52 (2)| 00:00:01 |
| 2 | WINDOW SORT | |
4 | 600 | 52 (2)| 00:00:01 |
|* 3 | TABLE ACCESS BY INDEX ROWID | TBL_HIST_METRICS_AT_OP_LKP |
1 | 71 | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
| 4 | NESTED LOOPS | |
4 | 600 | 51 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 7
5 | 5925 | 32 (0)| 00:00:01 |
|* 6 | INDEX FULL SCAN | UNIQUE_LAST_SEQ | 8
9 | 2492 | 10 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID| TBL_CURRENT_LOT_STATUS_DIM |
PLAN_TABLE_OUTPUT
1 | 51 | 1 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | TBL_CUR_LOT_STATUS_DIM_IDX1 |
1 | | 1 (0)| 00:00:01 |
|* 9 | INDEX RANGE SCAN | TBL_HIST_METRIC_AT_OP_LKP_IDX1 | 2
9 | | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
1 - filter("SEQ"=1)
3 - filter("C"."FW_VER"=2 AND "A"."PROD"="C"."PROD" AND "A"."LOT_TYPE"="C"."L
OT_TYPE" AND
"C"."STEP_SEQ_NUM">="A"."STEP_SEQ_NUM")
6 - access("B"."FW_VER"=2)
filter("B"."FW_VER"=2)
PLAN_TABLE_OUTPUT
8 - access("A"."ROUTE"="B"."ROUTE" AND "A"."ROUTE_REV"="B"."ROUTE_REV" AND "A
"."FW_VER"=2)
9 - access("A"."ROUTE"="C"."ROUTE" AND "A"."ROUTE_REV"="C"."ROUTE_REV") -
SQL Tuning - Not understanding the explain plan.
Hi All,
I am using 11g R2 and I have 2 questions about tuning a query.
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for 64-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
My parameters relevant to the optimizer are:
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
The query I would like to execute is pretty simple. It just returns some exceptions with a filter.
SELECT ERO.DVC_EVT_ID, E.DVC_EVT_DTTM
FROM D1_DVC_EVT E, D1_DVC_EVT_REL_OBJ ERO
WHERE ERO.MAINT_OBJ_CD = 'D1-DEVICE'
AND ERO.PK_VALUE1 = :H1
AND ERO.DVC_EVT_ID = E.DVC_EVT_ID
AND E.DVC_EVT_TYPE_CD IN ('END-GSMLOWLEVEL-EXCP-SEV-1', 'STR-GSMLOWLEVEL-EXCP-SEV-1')
ORDER BY E.DVC_EVT_DTTM DESC;
The execution plan is the following:
Plan hash value: 3627978539
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| Pstart| Pstop | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | | 7131 (100)| | | 1181 |00:00:17.17 | 8627 | 2978 | | | |
| 1 | SORT ORDER BY | | 1 | 3137 | 275K| 7131 (1)| | | 1181 |00:00:17.17 | 8627 | 2978 | 80896 | 80896 |71680 (0)|
| 2 | NESTED LOOPS | | 1 | | | | | | 1181 |00:00:17.16 | 8627 | 2978 | | | |
| 3 | NESTED LOOPS | | 1 | 3137 | 275K| 7130 (1)| | | 2058 |00:00:08.09 | 6709 | 1376 | | | |
| 4 | TABLE ACCESS BY INDEX ROWID | D1_DVC_EVT_REL_OBJ | 1 | 3137 | 125K| 845 (1)| | | 2058 |00:00:04.37 | 820 | 799 | | | |
|* 5 | INDEX RANGE SCAN | D1T404S0 | 1 | 3137 | | 42 (0)| | | 2058 |00:00:00.08 | 27 | 23 | | | |
| 6 | PARTITION RANGE ITERATOR | | 2058 | 1 | | 1 (0)| KEY | KEY | 2058 |00:00:03.69 | 5889 | 577 | | | |
|* 7 | INDEX UNIQUE SCAN | D1T400P0 | 2058 | 1 | | 1 (0)| KEY | KEY | 2058 |00:00:03.66 | 5889 | 577 | | | |
|* 8 | TABLE ACCESS BY GLOBAL INDEX ROWID| D1_DVC_EVT | 2058 | 1 | 49 | 2 (0)| ROWID | ROWID | 1181 |00:00:09.05 | 1918 | 1602 | | | |
Peeked Binds (identified by position):
1 - (VARCHAR2(30), CSID=178): '271792300706'
Predicate Information (identified by operation id):
5 - access("ERO"."PK_VALUE1"=:H1 AND "ERO"."MAINT_OBJ_CD"='D1-DEVICE')
filter("ERO"."MAINT_OBJ_CD"='D1-DEVICE')
7 - access("ERO"."DVC_EVT_ID"="E"."DVC_EVT_ID")
8 - filter(("E"."DVC_EVT_TYPE_CD"='END-GSMLOWLEVEL-EXCP-SEV-1' OR "E"."DVC_EVT_TYPE_CD"='STR-GSMLOWLEVEL-EXCP-SEV-1'))
So as you can see, at row 8 I have a TABLE ACCESS BY GLOBAL INDEX ROWID. What I am failing to see is how Oracle can show a TABLE ACCESS BY GLOBAL INDEX ROWID without using any index; I thought Oracle always obtained a ROWID from an index.
I also have an index on the column DVC_EVT_TYPE_CD in my table (row 8 of the predicate information section)
And finally what would be your suggestion to improve the runtime of this query...
Many thanks.
If I read the row source statistics and the plan correctly, the table access in step 4 on D1_DVC_EVT_REL_OBJ needs 4.37 sec and accesses 820 buffers. If you had an index on D1_DVC_EVT_REL_OBJ(MAINT_OBJ_CD, PK_VALUE1, DVC_EVT_ID), I assume this table access could be avoided. To avoid the table access on D1_DVC_EVT, an index on D1_DVC_EVT(DVC_EVT_ID, DVC_EVT_TYPE_CD, DVC_EVT_DTTM) should be sufficient.
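The suggested indexes would look something like this (index names are placeholders, and since the plan shows D1_DVC_EVT is partitioned you would still need to decide between LOCAL and GLOBAL):

```sql
-- Covering indexes so both row sources can be resolved from the index alone.
CREATE INDEX d1_dvc_evt_rel_obj_ix1
    ON d1_dvc_evt_rel_obj (maint_obj_cd, pk_value1, dvc_evt_id);

-- D1_DVC_EVT is range-partitioned; a GLOBAL index keyed on dvc_evt_id
-- avoids probing every partition, at the cost of partition-maintenance impact.
CREATE INDEX d1_dvc_evt_ix1
    ON d1_dvc_evt (dvc_evt_id, dvc_evt_type_cd, dvc_evt_dttm);
```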
I think these indexes would improve the performance of the given query but of course they would have a negative impact on DML performance on the table and they could influence other queries on the table. -
We are running Oracle 9i. We have a web-based Java thin client calling Oracle; some SQL statements ran for half an hour. But when we ran the same SQL from SQL*Plus, it finished in seconds. We know it is not a communication issue, because we saw the SQL running in Oracle. Does anyone have an idea what the problem is and how to fix it?
Thanks in Advance
John Wang
9i is a marketing label. What is the real, full version number of Oracle?
Look in v$sql_plan to see what plan the java submitted SQL used.
Compare that to the SQL you ran in SQLPlus.
How did you convert the java based SQL to a statement you ran in SQLPlus? If the java code used bind variables and the SQLPlus code used constants then to Oracle those are very different SQL statements and can result in a different plan. That difference would be of interest to you.
Are there histograms on the tables used in the query? Bind variable peeking is a potential issue, especially with a bad peek.
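A sketch of the v$sql_plan lookup Mark describes — in 9i the view is keyed by address and hash_value; the literal in the LIKE filter is a placeholder for your own statement text:

```sql
-- Step 1: find the cursor the Java client executed.
SELECT s.address, s.hash_value, s.child_number
  FROM v$sql s
 WHERE s.sql_text LIKE 'SELECT /* your statement */%';

-- Step 2: pull the runtime plan for that cursor.
SELECT p.id, p.parent_id, p.operation, p.options, p.object_name
  FROM v$sql_plan p
 WHERE p.address = :addr          -- values from step 1
   AND p.hash_value = :hash
 ORDER BY p.id;
```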
HTH -- Mark D Powell -- -
SQL Tuning Advisor Recommends New Explain Plan
Hi:
I have to believe this has been asked before but didn't see it in a forum search, so I'll ask here. I had SQL Tuning Advisor look at a query and it is recommending a new plan for a 50+% improvement (huzzah!). The trouble is, I don't want Oracle to re-write the plan to execute the query better; I want to know how I can re-write the query to generate that more optimal plan in the first place, because I have similar systems in the field that I would like to also be optimized. What are my options?
Thanks.
Sorry Gaff, I know what you are talking about but I don't have your answer. It may be a good start to go over the reference guide for these dictionary views -
SQL> select view_name from dba_views where view_name like 'DBA%ADVISOR%' ;
VIEW_NAME
DBA_ADVISOR_DEFINITIONS
DBA_ADVISOR_COMMANDS
DBA_ADVISOR_OBJECT_TYPES
DBA_ADVISOR_USAGE
DBA_ADVISOR_TASKS
DBA_ADVISOR_TEMPLATES
DBA_ADVISOR_LOG
...Best regards. -
Hi all,
I am facing a problem with the explain plan of the following query. Firstly, my database version is 9.2.0.5, EBS is 11.5.10 and the OS is HP-UX 11.11.
select
a.vendor_site_code,
'GAIN/LOSS' invoice_type_lookup_code,
api.invoice_num,
d. ACCOUNTING_DATE INVOICE_DATE ,
null description,
a.accts_pay_code_combination_id ccid,
c.currency_code invoice_currency_code,
c.currency_conversion_rate rate ,
DECODE(c.ae_line_type_code,'GAIN',accounted_cr/NVL(c.currency_conversion_rate,1),0) dr_val,
DECODE(c.ae_line_type_code,'LOSS',accounted_dr/NVL(c.currency_conversion_rate,1),0) cr_val,
null payment_num,
d.accounting_date pay_accounting_date,
null check_number,
b.segment1 ,
null ATTRIBUTE5,
b.vendor_name,
b.vendor_type_lookup_code,
apd.po_distribution_id po_distribution_id,
api.exchange_rate_type,
api.org_id,
api.batch_id,
sysdate exchange_date,
api.invoice_id,
d.accounting_date
from po_vendor_sites_all a,
po_vendors b,
ap_ae_lines_all c,
ap_ae_headers_all d,
ap_invoices_all api,
ap_invoice_distributions_all apd
where a.vendor_id = b.vendor_id AND
a.vendor_id = api.vendor_id AND
a.vendor_site_id = api.vendor_site_id AND
b.vendor_id = api.vendor_id AND
c.ae_header_id = d.ae_header_id AND
c.ae_line_type_code in ( 'GAIN','LOSS') AND
c.source_table = 'AP_INVOICE_DISTRIBUTIONS' AND
c.source_id = apd.invoice_distribution_id AND
D.ACCOUNTING_DATE < :p_to_date AND
b.vendor_id = NVL(:p_vendor_id,b.vendor_id) AND
(api.org_id = :p_org_id OR api.org_id IS NULL) AND
b.segment1 = NVL(:p_vendor_no, b.segment1) AND
api.vendor_id = NVL(:p_vendor_id, api.vendor_id)
This query was taking around 7-8 hrs to complete, so I ran Gather Schema Statistics, which solved my problem. But a few days back someone accidentally cancelled
Gather Schema Statistics, so I started facing the same problem.
This time, even after running stats, I am unable to solve the problem. The same query runs fine on the test server. Please view the outputs of both plans and the query; there seems to be a difference in table selection between the two plans. Can anyone please help me on this one? I am running out of ideas here.
the plans are uploaded to
http://rapidshare.com/files/226264001/9iprod_plan.zip.html
Sir,
The query is the same on both instances; I have double-checked it.
The problem on the production server started only after Gather Schema Statistics was cancelled by mistake on PROD; earlier it was working fine.
Another difference in the explain plans is that the TEST instance uses the ap_ae_lines_all table,
while PROD uses ap_ae_headers_all, which is slow.
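If the cancelled run left stale or partial statistics behind, re-gathering them on the affected schemas is the obvious first step. On EBS the supported route is the Gather Schema Statistics concurrent program (FND_STATS) rather than plain DBMS_STATS, but as a plain-database sketch with placeholder parameters it would look like:

```sql
-- Re-gather statistics for the AP schema; estimate_percent and degree
-- are placeholder choices - tune them to your system and window.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'AP',
    estimate_percent => 10,
    cascade          => TRUE,   -- include indexes
    degree           => 4);
END;
/
```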
Edited by: user7864753 on Apr 27, 2009 5:11 PM -
Dear All.
DBMS_XPLAN.DISPLAY_CURSOR(
sql_id IN VARCHAR2 DEFAULT NULL,
child_number IN NUMBER DEFAULT NULL,
format IN VARCHAR2 DEFAULT 'TYPICAL');
SQL> SELECT * FROM table (
2 DBMS_XPLAN.DISPLAY_CURSOR('b7jn4mf49n569'));
PLAN_TABLE_OUTPUT
SQL_ID b7jn4mf49n569, child number 0
select o.name, u.name from obj$ o, type$ t, user$ u where o.oid$ = t.tvoid and
u.user#=o.owner# and bitand(t.properties,8388608) = 8388608 and
(sysdate-o.ctime) > 0.0007
Plan hash value: 4266358741
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| T
| 0 | SELECT STATEMENT | | | | 94 (100)|
| 1 | NESTED LOOPS | | 1 | 72 | 94 (2)| 0
| 2 | NESTED LOOPS | | 1 | 56 | 93 (2)| 0
|* 3 | TABLE ACCESS FULL | OBJ$ | 71 | 2414 | 37 (3)| 0
|* 4 | TABLE ACCESS BY INDEX ROWID| TYPE$ | 1 | 22 | 1 (0)| 0
|* 5 | INDEX UNIQUE SCAN | I_TYPE2 | 1 | | 0 (0)|
| 6 | TABLE ACCESS CLUSTER | USER$ | 1 | 16 | 1 (0)| 0
|* 7 | INDEX UNIQUE SCAN | I_USER# | 1 | | 0 (0)|
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
3 - filter(("O"."OID$" IS NOT NULL AND SYSDATE@!-"O"."CTIME">.0007))
4 - filter(BITAND("T"."PROPERTIES",8388608)=8388608)
5 - access("O"."OID$"="T"."TVOID")
7 - access("U"."USER#"="O"."OWNER#")
29 rows selected
SQL>
As you can see, using DBMS_XPLAN.DISPLAY_CURSOR I can display the execution plan of any query in SQL*Plus.
But I want to write a PL/SQL function that generates an explain plan for any query and displays it in an HTML page.
How can I do the same thing in PL/SQL? I need your advice.
Thanks in advance!
Generate the plan like so:
begin
execute immediate 'explain plan for select * from dual';
end;
Then you can put the dbms_xplan.display bit into a ref cursor and pass that across to the front end.
Eg:
SQL> variable rc refcursor
SQL> begin
2 execute immediate 'explain plan for select * from dual';
3 open :rc for select * from table(dbms_xplan.display);
4* end;
SQL> /
PL/SQL procedure successfully completed.
SQL> print rc;
PLAN_TABLE_OUTPUT
Plan hash value: 3543395131
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 2 | 5 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 5 (0)| 00:00:01 |
8 rows selected. -
How does SQL Developer run EXPLAIN PLAN?
Hello,
I noticed that SQL Developer can do an EXPLAIN PLAN even though I don't have a PLAN_TABLE available for the user that I use to connect in SQL Developer.
I was always wondering whether there was a way to do an EXPLAIN PLAN without a PLAN_TABLE, apparently this seems to be possible.
Thomas
If I try an explain plan when I do not have a plan table available, I get the message "A PLAN_TABLE is required to access the explain plan, but it does not exist. Do you want to create a PLAN_TABLE?". Unfortunately, as that user does not have create table privs, if I say Yes I get the following error: "ORA-01031: insufficient privileges".
Given this, if you are doing an explain plan, you either have access to a plan table or you have create table privs and SQL Developer has created the plan table for you. -
Please go through the checklist/guidelines below to identify and resolve any performance issue quickly.
Checklist for Quick Performance Problem Resolution
· get trace, code and other information for given PE case
- Latest Code from Production env
- Trace (sql queries, statistics, row source operations with row count, explain plan, all wait events)
- Program parameters & their frequently used values
- Run Frequency of the program
- existing Run-time/response time in Production
- Business Purpose
· Identify most time consuming SQL taking more than 60 % of program time using Trace & Code analysis
· Check all mandatory parameters/bind variables are directly mapped to index columns of large transaction tables without any functions
· Identify most time consuming operation(s) using Row Source Operation section
· Study program parameter input directly mapped to SQL
· Identify all Input bind parameters being used to SQL
· Is SQL query returning large records for given inputs
· what are the large tables and their respective columns being used to mapped with input parameters
· which operation is scanning highest number of records in Row Source operation/Explain Plan
· Is Oracle Cost Based Optimizer using right Driving table for given SQL ?
· Check the time consuming index on large table and measure Index Selectivity
· Study Where clause for input parameters mapped to tables and their columns to find the correct/optimal usage of index
· Is correct index being used for all large tables?
· Is there any Full Table Scan on Large tables ?
· Is there any unwanted Table being used in SQL ?
· Evaluate Join condition on Large tables and their columns
· Is the FTS on a large table because of the use of non-indexed columns?
· Is there any implicit or explicit conversion causing an index not to be used?
· Are the statistics of all large tables up to date?
Quick Resolution tips
1) Use Bulk Processing feature BULK COLLECT with LIMIT and FOR ALL for DML instead of row by row processing
2) Use Data Caching Technique/Options to cache static data
3) Use Pipe Line Table Functions whenever possible
4) Use Global Temporary Table, Materialized view to process complex records
5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
6) Use an EXTERNAL TABLE to build the interface rather than creating a custom table and program to load and validate the data
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
8) Follow Oracle PL/SQL Best Practices
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
12) Review Join condition on existing query explain plan
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
14) Avoid applying SQL functions on index columns
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
Thanks
Praful
I understand you were trying to post something helpful to people, but sorry, this list is appalling.
1) Use Bulk Processing feature BULK COLLECT with LIMIT and FOR ALL for DML instead of row by row processing
No, use pure SQL.
2) Use Data Caching Technique/Options to cache static data
No, use pure SQL, and the database and operating system will handle caching.
3) Use Pipe Line Table Functions whenever possible
No, use pure SQL
4) Use Global Temporary Table, Materialized view to process complex records
No, use pure SQL
5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
No, use pure SQL
6) Use an EXTERNAL TABLE to build the interface rather than creating a custom table and program to load and validate the data
Makes no sense.
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
What about using the execution trace?
8) Follow Oracle PL/SQL Best Practices
Which are?
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
You mean design your database and queries properly? And table scanning is not always bad.
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
It depends if that is necessary or not.
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
No, consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan. There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of data, as well as the volumes of data being searched and the most common search requirements.
12) Review Join condition on existing query explain plan
Well, if you don't have your join conditions right then your query won't work, so that's obvious.
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
No. Oracle recommends you do not use hints for query optimization (it says so in the documentation). Only certain hints such as APPEND etc. which are more related to certain operations such as inserting data etc. are acceptable in general. Oracle recommends you use the query optimization tools to help optimize your queries rather than use hints.
14) Avoid applying SQL functions on index columns
Why? If there's a need for a function based index, then it should be used.
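For example (hypothetical table and column names): applying a function to an indexed column normally prevents a plain index from being used, but a function-based index on the expression makes the predicate indexable again, which is why a blanket "avoid functions on index columns" rule is too strong:

```sql
-- A plain index on ename cannot be used by this predicate:
--   WHERE UPPER(ename) = 'SMITH'
-- A function-based index on the expression can be:
CREATE INDEX emp_upper_name_idx ON emp (UPPER(ename));

SELECT * FROM emp WHERE UPPER(ename) = 'SMITH';
```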
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
See 13.
In short, there are no silver bullets for dealing with performance. Each situation is different and needs to be evaluated on its own merits.
[8i] Can someone help me on using explain plan, tkprof, etc.?
I am trying to follow the instructions at When your query takes too long ...
I am trying to figure out why a simple query takes so long.
The query is:
SELECT COUNT(*) AS tot_rows FROM my_table;
It takes a good 5 minutes or so to run (best case), and the result is around 22 million (total rows).
My generic username does not (evidently) allow access to PLAN_TABLE, so I had to log on as SYSTEM to run explain plan. In SQL*Plus, I typed in:
explain plan for (SELECT COUNT(*) AS tot_rows FROM my_table);
and the response was "Explained."
Isn't this supposed to give me some sort of output, or am I missing something?
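EXPLAIN PLAN only writes rows into PLAN_TABLE; you have to query it yourself to see any output. On 9i and later, DBMS_XPLAN.DISPLAY formats the most recent plan for you; on 8i (as in this thread's title) you can query PLAN_TABLE directly. A sketch of both:

```sql
-- 9i and later: let Oracle format the most recent plan in PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- 8i: query PLAN_TABLE yourself, indenting each step by its depth
SELECT LPAD(' ', 2 * level) || operation || ' ' ||
       options || ' ' || object_name AS plan_step
FROM   plan_table
START WITH id = 0
CONNECT BY PRIOR id = parent_id;
```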
Then, the next step in the post I linked is to use tkprof. I see that it says it will output a file to a path specified in a parameter. The only problem is, I don't have access to the db's server. I am working remotely, and do not have any way to remotely (or directly) access the db server. Is there any way to have the file output to my local machine, or am I just S.O.L.?
SomeoneElse used "create table as" (CTAS), which automatically gathers the stats. You can see the difference before and after stats clearly in this example.
This is the script:
drop table ttemp;
create table ttemp (object_id number not null, owner varchar2(30), object_name varchar2(200));
alter table ttemp add constraint ttemp_pk primary key (object_id);
insert into ttemp
select object_id, owner, object_name
from dba_objects
where object_id is not null;
set autotrace on
select count(*) from ttemp;
exec dbms_stats.gather_table_stats('PROD','TTEMP');
select count(*) from ttemp;
And the result:
Table dropped.
Table created.
Table altered.
46888 rows created.
COUNT(*)
46888
1 row selected.
Execution Plan
SELECT STATEMENT Optimizer Mode=CHOOSE
1 SORT AGGREGATE
2 1 TABLE ACCESS FULL PROD.TTEMP
Statistics
1 recursive calls
1 db block gets
252 consistent gets
0 physical reads
120 redo size
0 PX remote messages sent
0 PX remote messages recv'd
0 buffer is pinned count
0 workarea memory allocated
4 workarea executions - optimal
1 rows processed
PL/SQL procedure successfully completed.
COUNT(*)
46888
1 row selected.
Execution Plan
SELECT STATEMENT Optimizer Mode=CHOOSE (Cost=4 Card=1)
1 SORT AGGREGATE (Card=1)
2 1 INDEX FAST FULL SCAN PROD.TTEMP_PK (Cost=4 Card=46 K)
Statistics
1 recursive calls
2 db block gets
328 consistent gets
0 physical reads
8856 redo size
0 PX remote messages sent
0 PX remote messages recv'd
0 buffer is pinned count
0 workarea memory allocated
4 workarea executions - optimal
1 rows processed
Explain plan cannot get information
I tried to run EXPLAIN PLAN to tune my SQL, but the result didn't show any statistics.
How can I get these statistics? Do I need to install any packages or set any configuration?
"| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |"
"| 0 | SELECT STATEMENT | | | | | |"
"|* 1 | INDEX RANGE SCAN | CUSTOMER_PK | | | | |"
"| 2 | TABLE ACCESS BY INDEX ROWID| CUSTOMERINFO | | | | |"
"|* 3 | INDEX UNIQUE SCAN | CUSTOMERINFO_PK | | | | |"
"---------------------------------------------------------------------------------------------"
Hi..
The INVALID status could be the reason for your problem. You still haven't said what Oracle version you are using.
Run UTLRP -- This script recompiles all PL/SQL modules that may be in an INVALID state, including packages, procedures, types, and so on.
SQL> @?\rdbms\admin\utlrp
And then check back the status of the packages by using the same query. Then, run
exec dbms_stats.gather_table_stats(ownname => 'OWNER', tabname => 'CUSTOMERINFO', cascade => true, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, degree => n);
try to first gather the table's statistics, then go for the system stats.
HTH
Anand -
Different explain plan between 10.2.0.3 and 10.2.0.4
Had a problem with an explain plan changing after upgrade from 10.2.0.3 to 10.2.0.4. Managed to simplify as much as possible for now:
Query is :
SELECT * FROM m_promo_chk_str
WHERE (m_promo_chk_str.cust_cd) IN (
SELECT cust_cd
FROM s_usergrp_pda
GROUP BY cust_cd)
On 10.2.0.3 explain plan is:
| 0 | SELECT STATEMENT | | 1 | 1227 | 26 (16)| 00:00:01 |
|* 1 | HASH JOIN SEMI | | 1 | 1227 | 26 (16)| 00:00:01 |
| 2 | TABLE ACCESS FULL | M_PROMO_CHK_STR | 1 | 1185 | 14 (0)| 00:00:01 |
| 3 | VIEW | VW_NSO_1 | 137 | 5754 | 11 (28)| 00:00:01 |
| 4 | HASH GROUP BY | | 137 | 548 | 11 (28)| 00:00:01 |
| 5 | TABLE ACCESS FULL| S_USERGRP_PDA | 5219 | 20876 | 9 (12)| 00:00:01 |
On 10.2.0.4 with same data is:
| 0 | SELECT STATEMENT | | 1 | 1201 | 46 (5)| 00:00:01 |
| 1 | HASH GROUP BY | | 1 | 1201 | 46 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 1201 | 45 (3)| 00:00:01 |
| 3 | TABLE ACCESS FULL| M_PROMO_CHK_STR | 1 | 1197 | 29 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL| S_USERGRP_PDA | 5219 | 20876 | 15 (0)| 00:00:01 |
Explain plan is reasonable for when M_PROMO_CHK_STR is empty, however we have the case where stats are gathered when table is empty, but table is then populated and the query runs slowly. I understand that this is not a problem with the database exactly, but want to try to understand why the different behaviour.
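One way to avoid plans built on empty-table statistics is to re-gather the stats once the table has been populated, or to delete the misleading stats so the optimizer falls back to dynamic sampling. A sketch, assuming the table is owned by the current user:

```sql
-- Option 1: re-gather statistics after the table has been loaded
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'M_PROMO_CHK_STR');

-- Option 2: delete the stats gathered while the table was empty,
-- so the CBO uses dynamic sampling instead of card=1 estimates
EXEC DBMS_STATS.DELETE_TABLE_STATS(USER, 'M_PROMO_CHK_STR');
```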
Will look into the CBO trace tomorrow, but for now does anyone want to share any thoughts?
PatHK wrote:
Here is further simplification to reproduce the different behaviour - I think about as simple as I can get it!
SELECT * FROM dual WHERE (dummy) IN (SELECT dummy FROM dual GROUP BY dummy);
On 10.2.0.3
| 0 | SELECT STATEMENT | | 1 | 4 | 5 (20)| 00:00:01 |
| 1 | NESTED LOOPS SEMI | | 1 | 4 | 5 (20)| 00:00:01 |
| 2 | TABLE ACCESS FULL | DUAL | 1 | 2 | 2 (0)| 00:00:01 |
|* 3 | VIEW | VW_NSO_1 | 1 | 2 | 3 (34)| 00:00:01 |
| 4 | SORT GROUP BY | | 1 | 2 | 3 (34)| 00:00:01 |
| 5 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
On 10.2.0.4
| 0 | SELECT STATEMENT | | 1 | 4 | 4 (0)| 00:00:01 |
| 1 | SORT GROUP BY NOSORT| | 1 | 4 | 4 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 1 | 4 | 4 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | DUAL | 1 | 2 | 2 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL | DUAL | 1 | 2 | 2 (0)| 00:00:01 |
Timur's suggestion to look at a 10053 trace file is a good idea. It might be the case that someone disabled complex view merging in the 10.2.0.3 database instance. See the following:
_complex_view_merging
http://jonathanlewis.wordpress.com/2007/03/08/transformation-and-optimisation/
Here is a test you might try on both database versions:
ALTER SESSION SET "_COMPLEX_VIEW_MERGING"=TRUE;
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST1';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
SELECT * FROM DUAL WHERE (DUMMY) IN (SELECT DUMMY FROM DUAL GROUP BY DUMMY);
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
ALTER SESSION SET "_COMPLEX_VIEW_MERGING"=FALSE;
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'TEST2';
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
SELECT * FROM DUAL WHERE (DUMMY) IN (SELECT DUMMY FROM DUAL GROUP BY DUMMY);
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,NULL));
ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT OFF';
The first plan output:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 8 (100)| |
| 1 | SORT GROUP BY NOSORT| | 1 | 4 | 8 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 1 | 4 | 8 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | DUAL | 1 | 2 | 4 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL | DUAL | 1 | 2 | 4 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter("DUMMY"="DUMMY")
The second plan output:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 9 (100)| |
| 1 | NESTED LOOPS SEMI | | 1 | 4 | 9 (12)| 00:00:01 |
| 2 | TABLE ACCESS FULL | DUAL | 1 | 2 | 4 (0)| 00:00:01 |
|* 3 | VIEW | VW_NSO_1 | 1 | 2 | 5 (20)| 00:00:01 |
| 4 | SORT GROUP BY | | 1 | 2 | 5 (20)| 00:00:01 |
| 5 | TABLE ACCESS FULL| DUAL | 1 | 2 | 4 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("DUMMY"="$nso_col_1")
From the first 10053 trace file:
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
_pga_max_size = 368640 KB
_pga_max_size is the only parameter with a non-default value which could affect the optimizer.
From the second 10053 trace file:
PARAMETERS USED BY THE OPTIMIZER
PARAMETERS WITH ALTERED VALUES
_pga_max_size = 368640 KB
_complex_view_merging = false
*********************************
This section in the first 10053 trace seems to show the complex view merging:
SU: Considering interleaved complex view merging
SU: Transform an ANY subquery to semi-join or distinct.
CVM: Considering view merge (candidate phase) in query block SEL$5DA710D3 (#1)
CVM: Considering view merge (candidate phase) in query block SEL$683B0107 (#2)
CVM: CBQT Marking query block SEL$683B0107 (#2)as valid for CVM.
CVM: Merging complex view SEL$683B0107 (#2) into SEL$5DA710D3 (#1).
qbcp:******* UNPARSED QUERY IS *******
SELECT /*+ */ "DUAL"."DUMMY" "DUMMY" FROM (SELECT /*+ */ DISTINCT "DUAL"."DUMMY" "$nso_col_1" FROM "SYS"."DUAL" "DUAL" GROUP BY "DUAL"."DUMMY") "VW_NSO_2","SYS"."DUAL" "DUAL" WHERE "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
vqbcp:******* UNPARSED QUERY IS *******
SELECT /*+ */ DISTINCT "DUAL"."DUMMY" "$nso_col_1" FROM "SYS"."DUAL" "DUAL" GROUP BY "DUAL"."DUMMY"
CVM: result SEL$5DA710D3 (#1).
******* UNPARSED QUERY IS *******
SELECT /*+ */ "DUAL"."DUMMY" "DUMMY" FROM "SYS"."DUAL" "DUAL","SYS"."DUAL" "DUAL" WHERE "DUAL"."DUMMY"="DUAL"."DUMMY" GROUP BY "DUAL"."DUMMY","DUAL".ROWID,"DUAL"."DUMMY"
Registered qb: SEL$C9C6826C 0x155e2020 (VIEW MERGE SEL$5DA710D3; SEL$683B0107)
signature (): qb_name=SEL$C9C6826C nbfros=2 flg=0
fro(0): flg=0 objn=258 hint_alias="DUAL"@"SEL$1"
fro(1): flg=0 objn=258 hint_alias="DUAL"@"SEL$2"
FPD: Considering simple filter push in SEL$C9C6826C (#1)
FPD: Current where clause predicates in SEL$C9C6826C (#1) :
"DUAL"."DUMMY"="DUAL"."DUMMY"
kkogcp: try to generate transitive predicate from check constraints for SEL$C9C6826C (#1)
predicates with check contraints: "DUAL"."DUMMY"="DUAL"."DUMMY"
after transitive predicate generation: "DUAL"."DUMMY"="DUAL"."DUMMY"
finally: "DUAL"."DUMMY"="DUAL"."DUMMY"
CVM: Costing transformed query.
kkoqbc-start
: call(in-use=25864, alloc=65448), compile(in-use=115280, alloc=118736)
kkoqbc-subheap (create addr=000000001556CD70)
This is the same section from the second 10053 trace:
SU: Considering interleaved complex view merging
SU: Transform an ANY subquery to semi-join or distinct.
CVM: Considering view merge (candidate phase) in query block SEL$5DA710D3 (#1)
CVM: Considering view merge (candidate phase) in query block SEL$683B0107 (#2)
FPD: Considering simple filter push in SEL$5DA710D3 (#1)
FPD: Current where clause predicates in SEL$5DA710D3 (#1) :
"DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
kkogcp: try to generate transitive predicate from check constraints for SEL$5DA710D3 (#1)
predicates with check contraints: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
after transitive predicate generation: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
finally: "DUAL"."DUMMY"="VW_NSO_2"."$nso_col_1"
FPD: Considering simple filter push in SEL$683B0107 (#2)
FPD: Current where clause predicates in SEL$683B0107 (#2) :
CVM: Costing transformed query.
kkoqbc-start
: call(in-use=25656, alloc=65448), compile(in-use=113992, alloc=114592)
kkoqbc-subheap (create addr=00000000157E9078)
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.