Query runs long
Here is the scenario:
insert into xxxx
select *
from mview1 a, table1 b, table2 c, mview2 d
where a.source_id = b.source_id
  and a.code = b.code
  and a.number = b.number
  and c.source_id = a.source_id
  and a.id = c.id
  and a.source_id = d.source_id(+)
  and a.id = d.id(+)
  and a.it_id = d.it_id(+);
The query usually takes about 20 minutes to complete, but now it seems to run forever. Here is the explain plan:
PLAN_TABLE_OUTPUT
Plan hash value: 2900817873
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 174 | 1385 (1)| 00:00:17 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 174 | 1385 (1)| 00:00:17 | Q1,01 | P->S | QC (RAND) |
| 3 | NESTED LOOPS OUTER | | 1 | 174 | 1385 (1)| 00:00:17 | Q1,01 | PCWP | |
| 4 | NESTED LOOPS | | 1 | 140 | 1385 (2)| 00:00:17 | Q1,01 | PCWP | |
| 5 | MERGE JOIN CARTESIAN | | 1 | 42 | 141 (0)| 00:00:02 | Q1,01 | PCWP | |
| 6 | SORT JOIN | | | | | | Q1,01 | PCWP | |
| 7 | PX RECEIVE | | 1 | 33 | 138 (0)| 00:00:02 | Q1,01 | PCWP | |
| 8 | PX SEND BROADCAST | :TQ10000 | 1 | 33 | 138 (0)| 00:00:02 | Q1,00 | P->P | BROADCAST |
| 9 | PX BLOCK ITERATOR | | 1 | 33 | 138 (0)| 00:00:02 | Q1,00 | PCWC | |
| 10 | TABLE ACCESS FULL | TABLE1 | 1 | 33 | 138 (0)| 00:00:02 | Q1,00 | PCWP | |
| 11 | BUFFER SORT | | 364 | 3276 | 3 (0)| 00:00:01 | Q1,01 | PCWP | |
| 12 | PX BLOCK ITERATOR | | 364 | 3276 | 2 (0)| 00:00:01 | Q1,01 | PCWC | |
| 13 | TABLE ACCESS FULL | TABLE2 | 364 | 3276 | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
|* 14 | MAT_VIEW ACCESS BY INDEX ROWID | MVIEW1 | 1 | 98 | 1385 (2)| 00:00:17 | Q1,01 | PCWP | |
| 15 | BITMAP CONVERSION TO ROWIDS | | | | | | Q1,01 | PCWP | |
| 16 | BITMAP AND | | | | | | Q1,01 | PCWP | |
|* 17 | BITMAP INDEX SINGLE VALUE | BM2_MVIEW1 | | | | | Q1,01 | PCWP | |
| 18 | BITMAP CONVERSION FROM ROWIDS| | | | | | Q1,01 | PCWP | |
| 19 | SORT ORDER BY | | | | | | Q1,01 | PCWP | |
|* 20 | INDEX RANGE SCAN | N3_MVIEW1 | 24100 | | 214 (3)| 00:00:03 | Q1,01 | PCWP | |
| 21 | MAT_VIEW ACCESS BY INDEX ROWID | MVIEW2 | 1 | 34 | 3 (0)| 00:00:01 | Q1,01 | PCWP | |
|* 22 | INDEX RANGE SCAN | U1_MVIEW2 | 1 | | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
10 - SEL$1 / B@SEL$1
13 - SEL$1 / C@SEL$1
14 - SEL$1 / A@SEL$1
21 - SEL$1 / D@SEL$1
22 - SEL$1 / D@SEL$1
Predicate Information (identified by operation id):
14 - filter("A"."NUMBER"="B"."NUMBER")
17 - access("A"."CODE"="B"."CODE")
20 - access("A"."SOURCE_ID"="B"."SOURCE_ID" AND "A"."ID"="C"."ID")
filter("C"."SOURCE_ID"="A"."SOURCE_ID" AND "A"."ID"="C"."ID" AND "A"."SOURCE_ID"="B"."SOURCE_ID")
22 - access("A"."SOURCE_ID"="D"."SOURCE_ID"(+) AND "A"."ID"="D"."ID"(+) AND "A"."IT_ID"="D"."IT_ID"(+))
Please help: how can I get back to the original completion time?
Thanks.
Here is the original execution plan, from when the query completed in 20 minutes. The indexes have not been dropped.
PLAN_TABLE_OUTPUT
Plan hash value: 464730497
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 226 | 34578 | 100K (5)| 00:20:04 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 226 | 34578 | 100K (5)| 00:20:04 | Q1,03 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | 226 | 34578 | | | Q1,03 | PCWP | |
| 4 | NESTED LOOPS OUTER | | 226 | 34578 | 100K (5)| 00:20:04 | Q1,03 | PCWP | |
|* 5 | HASH JOIN | | 226 | 26894 | 100K (5)| 00:20:02 | Q1,03 | PCWP | |
| 6 | PX RECEIVE | | 3491K| 69M| 534 (6)| 00:00:07 | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10001 | 3491K| 69M| 534 (6)| 00:00:07 | Q1,01 | P->P | HASH |
| 8 | MERGE JOIN CARTESIAN | | 3491K| 69M| 534 (6)| 00:00:07 | Q1,01 | PCWP | |
| 9 | SORT JOIN | | | | | | Q1,01 | PCWP | |
| 10 | PX RECEIVE | | 364 | 3276 | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
| 11 | PX SEND BROADCAST | :TQ10000 | 364 | 3276 | 2 (0)| 00:00:01 | Q1,00 | P->P | BROADCAST |
| 12 | PX BLOCK ITERATOR | | 364 | 3276 | 2 (0)| 00:00:01 | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL | TABLE2 | 364 | 3276 | 2 (0)| 00:00:01 | Q1,00 | PCWP | |
| 14 | BUFFER SORT | | 9592 | 112K| 532 (6)| 00:00:07 | Q1,01 | PCWP | |
| 15 | PX BLOCK ITERATOR | | 9592 | 112K| 138 (0)| 00:00:02 | Q1,01 | PCWC | |
| 16 | TABLE ACCESS FULL | TABLE1 | 9592 | 112K| 138 (0)| 00:00:02 | Q1,01 | PCWP | |
| 17 | PX RECEIVE | | 13M| 1236M| 99423 (4)| 00:19:54 | Q1,03 | PCWP | |
| 18 | PX SEND HASH | :TQ10002 | 13M| 1236M| 99423 (4)| 00:19:54 | Q1,02 | P->P | HASH |
| 19 | PX BLOCK ITERATOR | | 13M| 1236M| 99423 (4)| 00:19:54 | Q1,02 | PCWC | |
| 20 | MAT_VIEW ACCESS FULL | MVIEW1 | 13M| 1236M| 99423 (4)| 00:19:54 | Q1,02 | PCWP | |
| 21 | MAT_VIEW ACCESS BY INDEX ROWID| MVIEW2 | 1 | 34 | 3 (0)| 00:00:01 | Q1,03 | PCWP | |
|* 22 | INDEX RANGE SCAN | U1_MVIEW2 | 1 | | 2 (0)| 00:00:01 | Q1,03 | PCWP | |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
13 - SEL$1 / C@SEL$1
16 - SEL$1 / B@SEL$1
20 - SEL$1 / A@SEL$1
21 - SEL$1 / D@SEL$1
22 - SEL$1 / D@SEL$1
Predicate Information (identified by operation id):
5 - access("A"."SOURCE_ID"="B"."SOURCE_ID" AND "A"."CODE"="B"."CODE" AND "A"."NUMBER"="B"."NUMBER" AND
"C"."SOURCE_ID"="A"."SOURCE_ID" AND "A"."ID"="C"."ID")
22 - access("A"."SOURCE_ID"="D"."SOURCE_ID"(+) AND "A"."ID"="D"."ID"(+) AND "A"."IT_ID"="D"."IT_ID"(+))
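Comparing the two plans, the bad plan's cardinality estimate for TABLE1 dropped from 9592 rows to 1, which turned the old hash join into a nested-loops-plus-Cartesian plan. A hedged first step, assuming stale or missing statistics are the cause (table names are taken from the plans; adjust the owner as needed):

```sql
-- Re-gather statistics so the optimizer sees current row counts
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TABLE1');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'MVIEW1');
END;
/

-- While investigating, the old hash-join plan can be pinned with hints:
INSERT INTO xxxx
SELECT /*+ USE_HASH(a b c) FULL(a) */ *
FROM   mview1 a, table1 b, table2 c, mview2 d
WHERE  a.source_id = b.source_id AND a.code = b.code AND a.number = b.number
AND    c.source_id = a.source_id AND a.id = c.id
AND    a.source_id = d.source_id(+) AND a.id = d.id(+)
AND    a.it_id = d.it_id(+);
```

If statistics were recently re-gathered with a bad sample, DBMS_STATS.RESTORE_TABLE_STATS (10g and later) can also roll them back to a date when the plan was still good.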
Similar Messages
-
Select query running long time
Hi,
DB version : 10g
platform : sunos
My SELECT query has been running for a long time (more than 20 hours) and is still running.
Is there any way to find the approximate completion time (time remaining) of a SQL query?
Also, is there any way to speed up a query that is already running, for example by adding hints?
Please help me with this.
Thanks
Hi Sathish, thanks for your reply.
I have already checked V$SESSION_LONGOPS, but it shows TIME_REMAINING = 0:
select TOTALWORK,SOFAR,START_TIME,TIME_REMAINING from V$SESSION_LONGOPS where SID='10'
TOTALWORK SOFAR START_TIME TIME_REMAINING
1099759 1099759 27-JAN-11 0
Any idea?
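V$SESSION_LONGOPS only tracks instrumented operations (full scans, sorts, hash joins), so a finished entry does not mean the statement is done. A sketch of an alternative check on the same session (SID 10 is assumed from the post; the v$session SQL_ID columns exist from 10g on):

```sql
-- Total elapsed time and work done by the statement the session is executing
SELECT s.sid, s.status, q.sql_id,
       q.elapsed_time / 1e6 AS elapsed_seconds,
       q.rows_processed, q.buffer_gets
FROM   v$session s
JOIN   v$sql q
  ON   q.sql_id = s.sql_id
 AND   q.child_number = s.sql_child_number
WHERE  s.sid = 10;
```

As for adding hints to a statement that is already running: that is not possible; hints only take effect when the statement is parsed.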
Thanks. -
Is index range scan the reason for query running long time
I would like to know whether an index range scan is the reason the query runs for a long time. Below is the explain plan. If so, how can it be optimised? Please help.
Operation Object COST CARDINALITY BYTES
SELECT STATEMENT () 413 1000 265000
COUNT (STOPKEY)
FILTER ()
TABLE ACCESS (BY INDEX ROWID) ORDERS 413 58720 15560800
INDEX (RANGE SCAN) IDX_SERV_PROV_ID 13 411709
TABLE ACCESS (BY INDEX ROWID) ADDRESSES 2 1 14
INDEX (UNIQUE SCAN) SYS_C004605 1 1
TABLE ACCESS (BY INDEX ROWID) ADDRESSES 2 1 14
INDEX (UNIQUE SCAN) SYS_C004605 1 1
TABLE ACCESS (BY INDEX ROWID) ADDRESSES 2 1 14
INDEX (UNIQUE SCAN) SYS_C004605 1 1
The index range scan means that the optimiser has determined that it is better to read the index than to perform a full table scan. So in answer to your question: quite possibly, but the alternative might take even longer!
The best thing to do is to review your query and check that you need every table included in it, and that you are accessing the tables via the best route. For example, accessing a table via a primary key index is better than using a non-unique index. But the best way to reduce the query's run time is to give it fewer tables (and indexes) to read.
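A quick way to test the point above, offered as a sketch (the ORDERS column name is guessed from the index name IDX_SERV_PROV_ID, so treat it as illustrative):

```sql
-- Let the optimiser choose (currently the index range scan)...
SELECT * FROM orders WHERE serv_prov_id = :id AND ROWNUM <= 1000;

-- ...then force a full table scan and compare elapsed times
SELECT /*+ FULL(o) */ *
FROM   orders o
WHERE  o.serv_prov_id = :id AND ROWNUM <= 1000;
```

If the hinted version is slower, the optimiser's choice of the range scan was right and the tuning effort belongs elsewhere (fewer tables, better access paths).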
John Seaman
http://www.asktheoracle.net -
Hi there,
I don’t know what the capabilities are of SQL logging, so I’m wondering if anyone can help.
We have 2 different offices, about 1.5 hours away from each other: Point A and Point B. People at Point B are complaining about lag when running queries against a database.
The exact same query that takes 40 minutes to run at Point A, takes over 60 minutes to run at Point B.
Is there a way to profile the SQL queries down to the level of exactly how long each section takes?
I’m thinking specifically:
How long does it take after you hit the f5 button to transfer the query to the server
How long does it take the server to actually process the query
How long does it take to transfer the results back to the client once the results are gathered
We suspect it is network lag, but we can't suggest solutions until we come up with metrics supporting the "your network is too slow" argument.
Thanks for your help in advance!
Hello,
SQL Profiler can trace the connection session after the client successfully connects (or logs in) to the SQL Server instance, but it cannot trace the time spent before login or after logout.
To trace the connection session for a specific user or application, select the Security Audit event class (which contains the Audit Login and Audit Logout events) and the Sessions event class.
For the query processing time, select the T-SQL SQL:BatchCompleted event in the trace. The Duration column equals (EndTime - StartTime); this is the query processing time in microseconds.
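A complementary, profiler-free check that can be run from both offices, shown here as a sketch: SET STATISTICS TIME reports the server-side CPU and elapsed time, so subtracting it from the wall-clock time the client observes gives a rough estimate of the network-plus-client overhead.

```sql
-- Run the same batch from Point A and Point B with timing enabled
SET STATISTICS TIME ON;

-- ... the 40/60-minute query goes here ...

SET STATISTICS TIME OFF;
-- The "SQL Server Execution Times: ... elapsed time" lines in the Messages
-- tab are server-side time; the remainder of the wall-clock time is
-- transfer and client processing.
```

If the server-side elapsed time is nearly identical from both offices, the extra 20 minutes at Point B is being spent on the wire or in the client.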
Regards,
Fanny Liu
TechNet Community Support -
hi
I have a query that has been running for a long time. I'm new to database administration; can anyone suggest methods to make it faster? It is running now and I have to make it finish sooner.
parallel servers = 4, and there are no inactive sessions.
Thanks in advance.
Make a habit of putting the database version in the post.
As I told you before, it depends on a lot of things, not only merge (Cartesian) joins:
1) The load the database is carrying. Was this query running fast before? If so, was the workload the same as it is today?
2) Have any changes been made to the database or the server recently?
3) Is only this query slow, or are all queries slow?
4) When was the database last restarted?
5) Are you using bind variables in the query?
6) Is your library cache properly sized? If the query does a lot of sorting, is your PGA properly sized?
7) Is the database buffer cache properly sized?
8) How much memory does your database have?
9) Does your SGA fit in physical memory, or is it getting swapped?
Etc., etc.
Check all these things
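For a statement that is already running, checking what it is currently waiting on usually narrows things down fastest; a sketch against the standard dynamic views (:sid is the session in question, and the v$session.sql_id column assumes 10g or later):

```sql
-- Current wait event for the session
SELECT sid, event, state, seconds_in_wait
FROM   v$session_wait
WHERE  sid = :sid;

-- How much work the statement has done so far
SELECT sql_id, elapsed_time / 1e6 AS elapsed_s, buffer_gets, disk_reads
FROM   v$sql
WHERE  sql_id = (SELECT sql_id FROM v$session WHERE sid = :sid);
```

If the wait event is I/O-related (db file scattered/sequential read), the checklist items about access paths matter most; if it is latch or buffer related, look at the sizing questions instead.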
Regards
Kaunain -
Hi ,
I am using APEX version 4.1.1.00.23. I am running an interactive report in APEX that takes about 15 to 20 seconds. If I take the query out of the report and run it in SQL Developer, it runs in 4 seconds. Why does it run so much slower in APEX? It is a basic interactive report with one query (below). Is there a way I can tune it through APEX and see why it is taking so much longer?
select c.rcn
,case when logical_level- (select logical_level from cd_customer where rcn = :P132_RCN) = 1 then '. '
when logical_level - (select logical_level from cd_customer where rcn = :P132_RCN) = 2 then '. . '
when logical_level - (select logical_level from cd_customer where rcn = :P132_RCN) = 3 then '. . . '
when logical_level - (select logical_level from cd_customer where rcn = :P132_RCN) = 4 then '. . . . '
when logical_level - (select logical_level from cd_customer where rcn = :P132_RCN) = 5 then '. . . . . '
end || (logical_level - (select logical_level from cd_customer where rcn = :P132_RCN)) ||
' ' || get_name(c.rcn,'D','1') DName
,PHYSICAL_LEVEL - (select physical_level from cd_customer where RCN = :P132_RCN) "LEVEL"
, nvl(sumpgpv(c.rcn, :P132_START_PERIOD, :P132_END_PERIOD,c.rank),0) PGPV
, countd(c.rcn,1, :P132_START_PERIOD, :P132_END_PERIOD) DistCnt
, countd(c.rcn,5, :P132_START_PERIOD, :P132_END_PERIOD) MACnt, logical_lbound,c.rank,
(select wr.abbreviation from wd_ranknames wr where wr.rank = c.rank and wr.status=c.status) "rnk_abbrv"
,&P132_START_PERIOD,&P132_END_PERIOD ,&P132_RCN
from cd_customer c
where :P132_END_PERIOD > (select commission_closed from cd_parameters) and logical_lbound > 0
and logical_lbound between (select logical_lbound from cd_customer where rcn = :P132_RCN)
and (select logical_rbound from cd_customer where rcn = :P132_RCN)
and (logical_level - (select logical_level from cd_customer where rcn = :P132_RCN)) <=:P132_LEVELS
union all
select c.rcn
,case when logical_level- (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 1 then '. '
when logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 2 then '. . '
when logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 3 then '. . . '
when logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 4 then '. . . . '
when logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN) = 5 then '. . . . . '
end || (logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN)) ||
' ' || get_name(c.rcn,'D','1') DName
,PHYSICAL_LEVEL - (select physical_level from wd_customer where pvperiod = :P132_END_PERIOD AND RCN = :P132_RCN) "LEVEL"
, sumpgpv(c.rcn, :P132_START_PERIOD, :P132_END_PERIOD,c.rank) PGPV
,countd(c.rcn,1, :P132_START_PERIOD, :P132_END_PERIOD) DistCnt
,countd(c.rcn,5, :P132_START_PERIOD, :P132_END_PERIOD) MACnt
,logical_lbound,c.rank,(select wr.abbreviation from
wd_ranknames wr where wr.rank = c.rank and wr.status=c.status) "rnk_abbrv"
,&P132_START_PERIOD,&P132_END_PERIOD ,&p132_RCN
from wd_customer c
where pvperiod = :P132_END_PERIOD and logical_lbound > 0
and logical_lbound between (select logical_lbound from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN)
and (select logical_rbound from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN)
and (logical_level - (select logical_level from wd_customer where pvperiod = :P132_END_PERIOD and rcn = :P132_RCN)) <=:P132_LEVELS
Sorry, my OCD must have kicked in. Try removing all those inline queries; although this is not an answer to why the report takes longer in APEX than elsewhere, it might help.
SELECT c.rcn ,
CASE
WHEN c.logical_level - cv.logical_level = 1 THEN '. '
WHEN c.logical_level - cv.logical_level = 2 THEN '. . '
WHEN c.logical_level - cv.logical_level = 3 THEN '. . . '
WHEN c.logical_level - cv.logical_level = 4 THEN '. . . . '
WHEN c.logical_level - cv.logical_level = 5 THEN '. . . . . '
END
|| (c.logical_level - cv.logical_level)
|| ' '
|| get_name(c.rcn,'D','1') DName ,
c.PHYSICAL_LEVEL - cv.physical_level "LEVEL" ,
NVL(sumpgpv(c.rcn, :P132_START_PERIOD, :P132_END_PERIOD,c.rank),0) PGPV ,
countd(c.rcn,1, :P132_START_PERIOD, :P132_END_PERIOD) DistCnt ,
countd(c.rcn,5, :P132_START_PERIOD, :P132_END_PERIOD) MACnt,
c.logical_lbound,
c.rank,
(SELECT wr.abbreviation
FROM wd_ranknames wr
WHERE wr.rank = c.rank
AND wr.status =c.status ) "rnk_abbrv" ,
&P132_START_PERIOD,
&P132_END_PERIOD ,
&P132_RCN
FROM cd_customer c,
cd_customer cv
WHERE cv.rcn = :P132_RCN
AND :P132_END_PERIOD > (SELECT commission_closed FROM cd_parameters )
AND c.logical_lbound > 0
AND c.logical_lbound BETWEEN cv.logical_lbound AND cv.logical_rbound
AND (c.logical_level - cv.logical_level) <=:P132_LEVELS
UNION ALL
SELECT c.rcn ,
CASE
WHEN c.logical_level - cv.logical_level = 1 THEN '. '
WHEN c.logical_level - cv.logical_level = 2 THEN '. . '
WHEN c.logical_level - cv.logical_level = 3 THEN '. . . '
WHEN c.logical_level - cv.logical_level = 4 THEN '. . . . '
WHEN c.logical_level - cv.logical_level = 5 THEN '. . . . . '
END
|| (c.logical_level - cv.logical_level )
|| ' '
|| get_name(c.rcn,'D','1') DName ,
PHYSICAL_LEVEL - cv.physical_level "LEVEL" ,
sumpgpv(c.rcn, :P132_START_PERIOD, :P132_END_PERIOD,c.rank) PGPV ,
countd(c.rcn,1, :P132_START_PERIOD, :P132_END_PERIOD) DistCnt ,
countd(c.rcn,5, :P132_START_PERIOD, :P132_END_PERIOD) MACnt ,
c.logical_lbound,
c.rank,
(SELECT wr.abbreviation
FROM wd_ranknames wr
WHERE wr.rank = c.rank
AND wr.status =c.status
) "rnk_abbrv" ,
&P132_START_PERIOD,
&P132_END_PERIOD ,
&p132_RCN
FROM wd_customer c,
wd_customer cv
WHERE cv.pvperiod = :P132_END_PERIOD
AND cv.rcn = :P132_RCN
AND c.pvperiod = :P132_END_PERIOD
AND c.logical_lbound > 0
AND c.logical_lbound BETWEEN cv.logical_lbound AND cv.logical_rbound
AND (c.logical_level - cv.logical_level) <= :P132_LEVELS -
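Beyond removing the scalar subqueries, one hedged way to see why the report is slower in APEX: APEX executes the query with bind variables and row-limiting pagination, so it can parse a different cursor (and get a different plan via bind peeking) than an ad-hoc SQL Developer run. Comparing the cursors side by side often shows the difference; a sketch, assuming access to the v$ views:

```sql
-- Find both cursors for the report query and compare plans and timings
SELECT sql_id, child_number, plan_hash_value, module,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 2) AS avg_elapsed_s
FROM   v$sql
WHERE  LOWER(sql_text) LIKE 'select c.rcn%';

-- Then display the plan for each sql_id found above
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));
```

If the APEX cursor (MODULE shows the APEX application) has a different PLAN_HASH_VALUE than the SQL Developer one, the 4-second plan simply is not the plan APEX is running.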
Hi All,
When I run the query in the Analyzer, it takes a long time. The query is built on a DSO.
Can anyone give me inputs on why the query takes so much time?
Thanks in advance,
Reddy
Hi,
Follow this thread to find out how to improve query performance on an ODS:
ODS Query Performance
Achieving BI Query Performance Building Business Intelligence
http://www.dmreview.com/issues/20051001/1038109-1.html
Hope this helps.
Thanks,
JituK -
No data query runs longer time
I have a table with 50 million records, partitioned based on date.
If I run the query select * from test where trade_date = '01-mar-2010', it brings back the records in less than a second; it works perfectly.
But if there is no data for a given date in the table, the query takes one to two minutes to complete.
Why does the query take that long to come back with NO DATA?
Comments are appreciated.
Note:
I use 11g.
Statistics are collected.
Hello,
The trade_date is range partitioned, and the table has data every day except weekends and holidays.
PARTITION BY RANGE (transaction_DT)
PARTITION P001 VALUES LESS THAN (TO_DATE(' 2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P002 VALUES LESS THAN (TO_DATE(' 2003-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P003 VALUES LESS THAN (TO_DATE(' 2004-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P004 VALUES LESS THAN (TO_DATE(' 2005-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P005 VALUES LESS THAN (TO_DATE(' 2006-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P006 VALUES LESS THAN (TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P007 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P008 VALUES LESS THAN (TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P009 VALUES LESS THAN (TO_DATE(' 2010-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P010 VALUES LESS THAN (TO_DATE(' 2011-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P011 VALUES LESS THAN (TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P012 VALUES LESS THAN (TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P013 VALUES LESS THAN (TO_DATE(' 2014-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P014 VALUES LESS THAN (TO_DATE(' 2015-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P015 VALUES LESS THAN (TO_DATE(' 2016-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P016 VALUES LESS THAN (TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P017 VALUES LESS THAN (TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P018 VALUES LESS THAN (TO_DATE(' 2019-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P019 VALUES LESS THAN (TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P020 VALUES LESS THAN (TO_DATE(' 2021-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P021 VALUES LESS THAN (TO_DATE(' 2022-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P022 VALUES LESS THAN (TO_DATE(' 2023-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P023 VALUES LESS THAN (TO_DATE(' 2024-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P024 VALUES LESS THAN (TO_DATE(' 2025-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
PARTITION P025 VALUES LESS THAN (TO_DATE(' 9999-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
Edited by: user520824 on Sep 1, 2010 12:12 PM -
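A likely explanation, offered as a sketch: the partitions are yearly, so a one-day predicate still prunes only to a whole year's partition, and when no rows match, Oracle has to scan that entire partition before it can return NO DATA (when rows do exist it can often stop far earlier). A local index on the date column makes the empty-date probe cheap; the object names below are taken from the post:

```sql
-- A local index lets the no-data case touch a handful of index blocks
-- instead of scanning a full yearly partition
CREATE INDEX test_transaction_dt_ix ON test (transaction_dt) LOCAL;

-- Alternatively (11g), daily interval partitioning prunes to one day:
-- CREATE TABLE test ( ... )
-- PARTITION BY RANGE (transaction_dt)
--   INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
--   (PARTITION p0 VALUES LESS THAN (DATE '2002-01-01'));
```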
Query takes longer to run with indexes.
Here is my situation. I had a query which I used to run in production (Oracle 9.2.0.5) and in the reporting database (9.2.0.3). Until two months ago, the time taken to run in both databases was almost the same: about 2 minutes. Now the query does not finish at all in production, whereas in reporting it still completes in about 2 minutes.
Some of the things I observed in production: 1) the optimizer_index_cost_adj parameter was changed from 100 to 20 about three months ago, to improve the performance of a paycalc program. Even with this parameter set to 20, the query used to run in 2 minutes until two months ago. 2) In the last two months the GL table grew from 25 million rows to 27 million rows. With optimizer_index_cost_adj = 20 and a 25-million-row GL table it runs fine, but with 27 million rows it does not finish at all. If I change optimizer_index_cost_adj back to 100, the query runs in 2 minutes with 27 million rows, and I found that it uses a full table scan. The reporting database always used a full table scan, as seen in the explain plan; the CBO determines which scan is best and uses that.
So my question: by setting optimizer_index_cost_adj = 20, does Oracle force an index scan when the table has 27 million rows? Isn't an index scan faster than a full table scan? In what situations is a full table scan faster than an index scan? If I drop all the indexes on the GL table, the query runs faster in production, as it uses a full table scan. What is the real benefit of changing optimizer_index_cost_adj? Any input is most welcome.
Isn't an index scan faster than a full table scan? In what situations is a full table scan faster than an index scan?
No. It is not about which one is "fastest", as that concept is flawed. How can an index be "faster" than a table, for example? Does it have better tires and a shinier paint job? ;-)
It is about the amount of I/O that the database needs to perform in order to use that object's contents for resolving/executing that applicable SQL statement.
If the CBO determines that it needs 100 widgets worth of I/O to scan the index, and then another 100 widgets of I/O to scan the table, it may decide not to use the index at all, as a full table scan will cost only 180 I/O widgets, 20 fewer than the combined scanning of index and table.
Also, a full scan can make use of multi-block reads - and this, on most storage/file systems, is faster than single block reads.
So no - a full table scan is NOT a Bad Thing (tm) and not an indicator of a problem. The thing that is of concern is the amount of I/O. The more I/O, the slower the operation. So obviously, we want to make sure that we design SQL that requires the minimal amount of I/O, design a database that support minimal I/O to find the required data (using clusters/partitions/IOTs/indexes/etc), and then check that the CBO also follows suit (which can be the complex bit).
But before questioning the CBO, first question your code and design - and whether or not they provide the optimal (smallest) I/O footprint for the job at hand. -
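The parameter's mechanics can be sketched briefly: optimizer_index_cost_adj scales the computed cost of index access paths, so a value of 20 makes every index path look five times cheaper than measured and biases the CBO toward index plans even after the table has grown past the point where they pay off. A session-level experiment (production defaults stay untouched; the query text is a placeholder):

```sql
-- Plan the CBO picks when index paths are artificially cheap
ALTER SESSION SET optimizer_index_cost_adj = 20;
EXPLAIN PLAN FOR SELECT /* the GL query */ ... ;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Plan at the default costing (the full scan of GL, in this case)
ALTER SESSION SET optimizer_index_cost_adj = 100;
EXPLAIN PLAN FOR SELECT /* the GL query */ ... ;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Seeing the plan flip at the boundary confirms the parameter, not the indexes, is what changed the behaviour; there is then no need to drop indexes.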
What are the ways to make Query run fast?
Hi Experts,
When a query runs slowly, we generally go for creating an aggregate. My doubt is: what other things can be done to make a query run faster before creating an aggregate? What is the rule of thumb for creating an aggregate?
Regards,
ShreeemHi Shreem,
If you keep the query simple and do not complicate it with runtime calculations, it will run smoothly. However, per business requirements we mostly have to go for them anyway.
Regarding aggregates:
Please do not use the standard proposal; it will give you hundreds of aggregates based on standard rules, which consume a lot of space and add to load times. If you have users already running the query and you are planning to tune it, go to the statistics tables:
1. RSDDSTAT_OLAP - find the query with long runtimes and get the STEPUID.
2. RSDDSTAT_DM.
3. RSDDSTATAGGRDEF - use the STEPUID above to see which aggregate is necessary for which cube.
Another way to check: find the highest-runtime users as in 1, find the last bookmarks those users opened via RSZWBOOKMARK for this query, check whether the times match, and create the aggregates as in 3 above.
You can also use transaction RSRT > Execute & Debug (display statistics) to create generic aggregates that support navigation for new queries, and refine them later as above.
Hope it helps.
Thanks
Ram -
SQL Query Executing longer time
Hi, the SQL query below takes a long time to execute. Please help improve its performance. It runs continuously for more than 24 hours and then fails with a rollback segment error, never producing the final output. Most of the tables have millions of records.
Select distinct
IBS.ADSL_ACCESS_INFO,
IBS.LIJ ,
regexp_substr(OBVS.REFERENTIE_A,'[[:digit:]]+') as O_NUMBER,
DBS.CKR_NUMMER_CONTRACTANT,
DBS.DNUMBER
FROM CD.IBS,
CD.OIBL,
CD.IH,
CD.ODL,
CD.OH,
CD.DBS,
CD.OBVS
Where IBS.END_DT = To_Date('31129999', 'ddmmyyyy')
AND OIBL.END_DT = to_date('31129999', 'ddmmyyyy')
AND DBS.END_DT = to_date('31129999', 'ddmmyyyy')
AND OBVS.END_DT = to_date('31129999', 'ddmmyyyy')
AND OBVS.REFERENTIE_A LIKE 'OFM%'
AND OIBL.INFRA_KEY = IH.INFRA_KEY
AND OIBL.ORDERS_KEY = OH.ORDERS_KEY
AND IBS.INFH_ID = IH.INFH_ID
AND ODL.ORDH_ID = OH.ORDH_ID
AND DBS.DEBH_ID = ODL.DEBH_ID
AND OBVS.ORDH_ID = ODL.ORDH_ID
Order By IBS.LIJ
All the columns present in the WHERE clause have either an index or a key (primary/unique), except the END_DT column.
Please advise.
Predicate pushing can help when it greatly restricts the number of rows. You must experiment; it might not work with all predicates pushed (as shown here):
select distinct
ibs.adsl_access_info,
ibs.lij,
obvs.o_number,
dbs.ckr_nummer_contractant,
dbs.dnumber
from (select infh_id,adsl_access_info,lij
from cd.ibs
where end_dt = to_date('31129999','ddmmyyyy')
) ibs,
(select infra_key,orders_key
from cd.oibl
where end_dt = to_date('31129999','ddmmyyyy')
) oibl,
(select ordh_id,regexp_substr(obvs.referentie_a,'[[:digit:]]+') as o_number
from cd.obvs
where end_dt = to_date('31129999','ddmmyyyy')
and referentie_a like 'OFM%'
) obvs,
(select debh_id,ckr_nummer_contractant,dnumber
from cd.dbs
where end_dt = to_date('31129999','ddmmyyyy')
) dbs,
cd.ih,
cd.odl,
cd.oh
where oibl.infra_key = ih.infra_key
and oibl.orders_key = oh.orders_key
and ibs.infh_id = ih.infh_id
and odl.ordh_id = oh.ordh_id
and dbs.debh_id = odl.debh_id
and obvs.ordh_id = odl.ordh_id
order by ibs.lij
Regards
Etbin -
The following query has been running for more than 4 hours. Could someone please suggest how to tune it?
SELECT fi_contract_id, a.cust_id, a.product_id, a.currency_cd,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 4), '4992', posted_tran_amt, 0),
2
) ftp_amt,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 4), '4992', posted_base_amt, 0),
2
) ftp_base_amt,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 4),
'4994', posted_tran_amt,
'4995', posted_tran_amt,
0),
2
) col_amt,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 4),
'4994', posted_base_amt,
'4995', posted_base_amt,
0),
2
) col_base_amt,
ROUND (DECODE (SUBSTR (a.ACCOUNT, 1, 3), '499', 0, posted_tran_amt),
2
) closing_bal,
a.ACCOUNT, a.deptid, a.business_unit,
CASE
WHEN a.ACCOUNT LIKE '499%'
THEN '990'
ELSE a.operating_unit
END operating_unit,
a.base_currency, NVL (TRIM (pf_system_code), a.SOURCE) pf_system_code,
b.setid, a.channel_id, scb_arm_code, scb_tp_product, scb_tranche_id,
CASE
WHEN pf_system_code = 'CLS'
THEN scb_bncpr_flg
ELSE NULL
END tranche_purpose,
CASE
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 1, 1)
ELSE NULL
END lc_ind,
CASE
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 2, 3)
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) NOT IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 1, 3)
ELSE NULL
END bill_branch_id,
CASE
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 5, 1)
WHEN pf_system_code = 'IMX'
AND SUBSTR (scb_bncpr_flg, 1, 1) NOT IN ('Y', 'N')
THEN SUBSTR (scb_bncpr_flg, 4, 1)
ELSE NULL
END section_id,
CASE
WHEN pf_system_code = 'IFS'
THEN SUBSTR (scb_bncpr_flg, 1, 1)
ELSE NULL
END recourse_ind,
CASE
WHEN pf_system_code = 'IFS'
THEN SUBSTR (scb_bncpr_flg, 2, 1)
ELSE NULL
END disclosure_ind,
TO_CHAR (LAST_DAY (upload_date), 'DDMMYYYY')
FROM ps_fi_ildgr_f00 a,
(SELECT c.business_unit, c.fi_instrument_id, c.scb_arm_code,
c.scb_tp_product, c.scb_tranche_id, c.scb_bncpr_flg
FROM ps_fi_iother_r00 c, ps_scb_bus_unit b1
WHERE c.business_unit = b1.business_unit
AND b1.setid = 'PKSTN'
AND c.asof_dt =
(SELECT MAX (c1.asof_dt)
FROM ps_fi_iother_r00 c1
WHERE c.business_unit = c1.business_unit
AND c1.fi_instrument_id = c.fi_instrument_id)) c,
ps_scb_bus_unit b,
(SELECT upload_date - 15 upload_date
FROM stg_ftp_trans_bal_tb
WHERE setid = 'PKSTN' AND ROWNUM < 2),
(SELECT i.business_unit, i.fi_instrument_id, i.pf_system_code,
i.fi_contract_id
FROM ps_fi_instr_f00 i, ps_scb_bus_unit b1
WHERE i.business_unit = b1.business_unit
AND b1.setid = 'PKSTN'
AND (i.asof_dt) =
(SELECT MAX (i1.asof_dt)
FROM ps_fi_instr_f00 i1
WHERE i.business_unit = i1.business_unit
AND i1.fi_instrument_id = i.fi_instrument_id)) d
WHERE a.business_unit = b.business_unit
AND a.business_unit = c.business_unit
AND a.business_unit = d.business_unit
AND a.fi_instrument_id = c.fi_instrument_id(+)
AND a.fi_instrument_id = d.fi_instrument_id(+)
AND fiscal_year = TO_CHAR (upload_date, 'YYYY')
AND a.ACCOUNT != '191801'
AND a.pf_scenario_id LIKE '%M_'
AND accounting_period = TO_CHAR (upload_date, 'MM')
AND b.setid = 'PKSTN'
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 225 | | 14059 (2)| | | | | |
|* 1 | FILTER | | | | | | | | | | |
| 2 | PX COORDINATOR | | | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10006 | 962 | 211K| | 13578 (2)| | | Q1,06 | P->S | QC (RAND) |
|* 4 | HASH JOIN | | 962 | 211K| | 13578 (2)| | | Q1,06 | PCWP | |
| 5 | PX RECEIVE | | 977 | 190K| | 4273 (2)| | | Q1,06 | PCWP | |
| 6 | PX SEND BROADCAST | :TQ10004 | 977 | 190K| | 4273 (2)| | | Q1,04 | P->P | BROADCAST |
|* 7 | HASH JOIN | | 977 | 190K| | 4273 (2)| | | Q1,04 | PCWP | |
| 8 | BUFFER SORT | | | | | | | | Q1,04 | PCWC | |
| 9 | PX RECEIVE | | 1 | 10 | | 2 (0)| | | Q1,04 | PCWP | |
| 10 | PX SEND BROADCAST | :TQ10000 | 1 | 10 | | 2 (0)| | | | S->P | BROADCAST |
|* 11 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
| 12 | TABLE ACCESS BY LOCAL INDEX ROWID| PS_FI_INSTR_F00 | 1 | 42 | | 1 (0)| | | Q1,04 | PCWC | |
| 13 | NESTED LOOPS | | 1954 | 362K| | 4271 (2)| | | Q1,04 | PCWP | |
|* 14 | HASH JOIN | | 1954 | 282K| | 3999 (2)| | | Q1,04 | PCWP | |
| 15 | BUFFER SORT | | | | | | | | Q1,04 | PCWC | |
| 16 | PX RECEIVE | | 1 | 10 | | 2 (0)| | | Q1,04 | PCWP | |
| 17 | PX SEND BROADCAST | :TQ10001 | 1 | 10 | | 2 (0)| | | | S->P | BROADCAST |
|* 18 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
|* 19 | HASH JOIN | | 3907 | 526K| | 3997 (2)| | | Q1,04 | PCWP | |
| 20 | PX RECEIVE | | 54702 | 4700K| | 616 (1)| | | Q1,04 | PCWP | |
| 21 | PX SEND HASH | :TQ10003 | 54702 | 4700K| | 616 (1)| | | Q1,03 | P->P | HASH |
| 22 | PX BLOCK ITERATOR | | 54702 | 4700K| | 616 (1)| 1 | 6119 | Q1,03 | PCWC | |
|* 23 | TABLE ACCESS FULL | PS_FI_ILDGR_F00 | 54702 | 4700K| | 616 (1)| 1 | 6119 | Q1,03 | PCWP | |
| 24 | BUFFER SORT | | | | | | | | Q1,04 | PCWC | |
| 25 | PX RECEIVE | | 221K| 10M| | 3380 (3)| | | Q1,04 | PCWP | |
| 26 | PX SEND HASH | :TQ10002 | 221K| 10M| | 3380 (3)| | | | S->P | HASH |
| 27 | NESTED LOOPS | | 221K| 10M| | 3380 (3)| | | | | |
| 28 | NESTED LOOPS | | 1 | 16 | | 2351 (2)| | | | | |
| 29 | VIEW | | 1 | 6 | | 2349 (2)| | | | | |
|* 30 | COUNT STOPKEY | | | | | | | | | | |
| 31 | PARTITION LIST SINGLE | | 661K| 7755K| | 2349 (2)| KEY | KEY | | | |
| 32 | TABLE ACCESS FULL | STG_FTP_TRANS_BAL_TB | 661K| 7755K| | 2349 (2)| 2 | 2 | | | |
|* 33 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
| 34 | PARTITION LIST ITERATOR | | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
|* 35 | TABLE ACCESS FULL | PS_FI_IOTHER_R00 | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
| 36 | PARTITION LIST ITERATOR | | 1 | | | 1 (0)| KEY | KEY | Q1,04 | PCWP | |
|* 37 | INDEX RANGE SCAN | PS_FI_INSTR_F00 | 1 | | | 1 (0)| KEY | KEY | Q1,04 | PCWP | |
| 38 | VIEW | VW_SQ_1 | 5220K| 124M| | 9296 (1)| | | Q1,06 | PCWP | |
| 39 | SORT GROUP BY | | 5220K| 169M| 479M| 9296 (1)| | | Q1,06 | PCWP | |
| 40 | PX RECEIVE | | 5220K| 169M| | 9220 (1)| | | Q1,06 | PCWP | |
| 41 | PX SEND HASH | :TQ10005 | 5220K| 169M| | 9220 (1)| | | Q1,05 | P->P | HASH |
| 42 | PX BLOCK ITERATOR | | 5220K| 169M| | 9220 (1)| 1 | 7 | Q1,05 | PCWC | |
| 43 | TABLE ACCESS FULL | PS_FI_INSTR_F00 | 5220K| 169M| | 9220 (1)| 1 | 7 | Q1,05 | PCWP | |
| 44 | SORT AGGREGATE | | 1 | 20 | | | | | | | |
| 45 | PARTITION LIST SINGLE | | 1 | 20 | | 1 (0)| KEY | KEY | | | |
|* 46 | INDEX RANGE SCAN | PS_FI_IOTHER_R00 | 1 | 20 | | 1 (0)| KEY | KEY | | | |
Predicate Information (identified by operation id):
1 - filter("C"."ASOF_DT"= (SELECT /*+ */ MAX("C1"."ASOF_DT") FROM "PS_FI_IOTHER_R00" "C1" WHERE "C1"."FI_INSTRUMENT_ID"=:B1 AND
"C1"."BUSINESS_UNIT"=:B2))
4 - access("I"."ASOF_DT"="VW_COL_1" AND "I"."BUSINESS_UNIT"="BUSINESS_UNIT" AND "FI_INSTRUMENT_ID"="I"."FI_INSTRUMENT_ID")
7 - access("I"."BUSINESS_UNIT"="B1"."BUSINESS_UNIT")
11 - filter("B1"."SETID"='PKSTN')
14 - access("A"."BUSINESS_UNIT"="B"."BUSINESS_UNIT")
18 - filter("B"."SETID"='PKSTN')
19 - access("A"."BUSINESS_UNIT"="C"."BUSINESS_UNIT" AND "A"."FI_INSTRUMENT_ID"="C"."FI_INSTRUMENT_ID" AND
"FISCAL_YEAR"=TO_NUMBER(TO_CHAR("UPLOAD_DATE",'YYYY')) AND "ACCOUNTING_PERIOD"=TO_NUMBER(TO_CHAR("UPLOAD_DATE",'MM')))
23 - filter("A"."PF_SCENARIO_ID" LIKE '%M_' AND "A"."ACCOUNT"<>'191801')
30 - filter(ROWNUM<2)
33 - filter("B1"."SETID"='PKSTN')
35 - filter("C"."BUSINESS_UNIT"="B1"."BUSINESS_UNIT")
37 - access("A"."BUSINESS_UNIT"="I"."BUSINESS_UNIT" AND "A"."FI_INSTRUMENT_ID"="I"."FI_INSTRUMENT_ID")
46 - access("C1"."BUSINESS_UNIT"=:B1 AND "C1"."FI_INSTRUMENT_ID"=:B2)
Note
- 'PLAN_TABLE' is old version
75 rows selected.

[email protected] wrote:
> Following query has been running for more than 4 hrs. Could someone please suggest how to tune this query?

1. You can try to avoid the self-joins or FILTER operations in the C and D inline views if you change the queries below to use analytic functions instead:
(SELECT c.business_unit, c.fi_instrument_id, c.scb_arm_code,
c.scb_tp_product, c.scb_tranche_id, c.scb_bncpr_flg
FROM ps_fi_iother_r00 c, ps_scb_bus_unit b1
WHERE c.business_unit = b1.business_unit
AND b1.setid = 'PKSTN'
AND c.asof_dt =
(SELECT MAX (c1.asof_dt)
FROM ps_fi_iother_r00 c1
WHERE c.business_unit = c1.business_unit
AND c1.fi_instrument_id = c.fi_instrument_id)) c,
(SELECT upload_date - 15 upload_date
FROM stg_ftp_trans_bal_tb
WHERE setid = 'PKSTN' AND ROWNUM < 2),
(SELECT i.business_unit, i.fi_instrument_id, i.pf_system_code,
i.fi_contract_id
FROM ps_fi_instr_f00 i, ps_scb_bus_unit b1
WHERE i.business_unit = b1.business_unit
AND b1.setid = 'PKSTN'
AND (i.asof_dt) =
(SELECT MAX (i1.asof_dt)
FROM ps_fi_instr_f00 i1
WHERE i.business_unit = i1.business_unit
AND i1.fi_instrument_id = i.fi_instrument_id)) d
...

Try to use something like this instead:
(select * from
(SELECT c.business_unit, c.fi_instrument_id, c.scb_arm_code,
c.scb_tp_product, c.scb_tranche_id, c.scb_bncpr_flg,
rank() over (partition by c.business_unit, c.fi_instrument_id order by c.asof_dt desc) rnk
FROM ps_fi_iother_r00 c, ps_scb_bus_unit b1
WHERE c.business_unit = b1.business_unit
AND b1.setid = 'PKSTN')
where rnk = 1) c,
...2. This piece seems to be questionable since it seems to pick the "UPLOAD_DATE" from an arbitrary row where SETID = 'PKSTN'. I assume that the UPLOAD_DATE is then the same for all these rows, otherwise this would potentially return a different UPLOAD_DATE for each execution of the query. Still it's a questionable approach and seems to be de-normalized data.
(SELECT upload_date - 15 upload_date
FROM stg_ftp_trans_bal_tb
WHERE setid = 'PKSTN' AND ROWNUM < 2),

3. Your execution plan contains some parts that are questionable and might lead to inappropriate work performed by the database if the optimizer's estimates are wrong:
a. Are you sure that the filter predicate "SETID"='PKSTN' on PS_SCB_BUS_UNIT returns only a single row? If not, the NESTED LOOPS operation below could scan the PS_FI_IOTHER_R00 table more than once, making this rather inefficient:
| 27 | NESTED LOOPS | | 221K| 10M| | 3380 (3)| | | | | |
| 28 | NESTED LOOPS | | 1 | 16 | | 2351 (2)| | | | | |
| 29 | VIEW | | 1 | 6 | | 2349 (2)| | | | | |
|* 30 | COUNT STOPKEY | | | | | | | | | | |
| 31 | PARTITION LIST SINGLE | | 661K| 7755K| | 2349 (2)| KEY | KEY | | | |
| 32 | TABLE ACCESS FULL | STG_FTP_TRANS_BAL_TB | 661K| 7755K| | 2349 (2)| 2 | 2 | | | |
|* 33 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
| 34 | PARTITION LIST ITERATOR | | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
|* 35 | TABLE ACCESS FULL | PS_FI_IOTHER_R00 | 442K| 14M| | 1029 (3)| KEY | KEY | | | |

b. The optimizer assumes that the join below returns only 3907 rows out of the 54K and 221K row source sets. This could be wrong, because the join expression contains multiple function calls and an implicit TO_NUMBER conversion you haven't mentioned in your SQL, which is bad practice in general:
19 - access("A"."BUSINESS_UNIT"="C"."BUSINESS_UNIT" AND "A"."FI_INSTRUMENT_ID"="C"."FI_INSTRUMENT_ID" AND
"FISCAL_YEAR"=TO_NUMBER(TO_CHAR("UPLOAD_DATE",'YYYY')) AND "ACCOUNTING_PERIOD"=TO_NUMBER(TO_CHAR("UPLOAD_DATE",'MM')))The conversion functions might hide from the optimizer that the join returns many more rows than estimated, because the optimizer uses default selectivities or guesses for function expressions. If you can't fix the data model to use appropriate join expressions you could try to create function based indexes on the expressions TO_NUMBER(TO_CHAR("UPLOAD_DATE",'YYYY')) and TO_NUMBER(TO_CHAR("UPLOAD_DATE",'MM')) and gather statistics on the corresponding hidden columns (method_opt parameter of DBMS_STATS.GATHER_TABLE_STATS call set to "FOR ALL HIDDEN COLUMNS"). If you're already on 11g you can achieve the same by using virtual columns.
|* 19 | HASH JOIN | | 3907 | 526K| | 3997 (2)| | | Q1,04 | PCWP | |
| 20 | PX RECEIVE | | 54702 | 4700K| | 616 (1)| | | Q1,04 | PCWP | |
| 21 | PX SEND HASH | :TQ10003 | 54702 | 4700K| | 616 (1)| | | Q1,03 | P->P | HASH |
| 22 | PX BLOCK ITERATOR | | 54702 | 4700K| | 616 (1)| 1 | 6119 | Q1,03 | PCWC | |
|* 23 | TABLE ACCESS FULL | PS_FI_ILDGR_F00 | 54702 | 4700K| | 616 (1)| 1 | 6119 | Q1,03 | PCWP | |
| 24 | BUFFER SORT | | | | | | | | Q1,04 | PCWC | |
| 25 | PX RECEIVE | | 221K| 10M| | 3380 (3)| | | Q1,04 | PCWP | |
| 26 | PX SEND HASH | :TQ10002 | 221K| 10M| | 3380 (3)| | | | S->P | HASH |
| 27 | NESTED LOOPS | | 221K| 10M| | 3380 (3)| | | | | |
| 28 | NESTED LOOPS | | 1 | 16 | | 2351 (2)| | | | | |
| 29 | VIEW | | 1 | 6 | | 2349 (2)| | | | | |
|* 30 | COUNT STOPKEY | | | | | | | | | | |
| 31 | PARTITION LIST SINGLE | | 661K| 7755K| | 2349 (2)| KEY | KEY | | | |
| 32 | TABLE ACCESS FULL | STG_FTP_TRANS_BAL_TB | 661K| 7755K| | 2349 (2)| 2 | 2 | | | |
|* 33 | TABLE ACCESS FULL | PS_SCB_BUS_UNIT | 1 | 10 | | 2 (0)| | | | | |
| 34 | PARTITION LIST ITERATOR | | 442K| 14M| | 1029 (3)| KEY | KEY | | | |
|* 35 | TABLE ACCESS FULL | PS_FI_IOTHER_R00 | 442K| 14M| | 1029 (3)| KEY | KEY | | | |

c. Due to the small number of rows estimated, mainly caused by b. above, the result of the joins is broadcast to all parallel slaves when performing the final join. This might be quite inefficient if the result is much larger than expected:
| 6 | PX SEND BROADCAST | :TQ10004 | 977 | 190K| | 4273 (2)| | | Q1,04 | P->P | BROADCAST |

Note that this join is no longer necessary / obsolete if you introduce the analytic functions suggested above.
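The function-based index approach from point 3b could be sketched like this. Note this is a hedged sketch, not a tested recommendation: the index names are made up, and I'm assuming UPLOAD_DATE lives on STG_FTP_TRANS_BAL_TB as the posted predicates suggest.

```
-- Hypothetical sketch: index the conversion expressions so the optimizer
-- can use real column statistics instead of default selectivity guesses.
CREATE INDEX stg_ftp_upload_yyyy_idx
  ON stg_ftp_trans_bal_tb (TO_NUMBER(TO_CHAR(upload_date, 'YYYY')));

CREATE INDEX stg_ftp_upload_mm_idx
  ON stg_ftp_trans_bal_tb (TO_NUMBER(TO_CHAR(upload_date, 'MM')));

-- Gather statistics on the hidden columns created by the expressions:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'STG_FTP_TRANS_BAL_TB',
    method_opt => 'FOR ALL HIDDEN COLUMNS SIZE AUTO');
END;
/
```

On 11g, a virtual column on the same expression achieves the equivalent without necessarily creating the index.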
4. Your PLAN_TABLE does not match your Oracle version. If you're already on 10g or later, simply drop all PLAN_TABLEs in non-SYS schemas since there is already one provided as part of the data dictionary. Otherwise re-create them using $ORACLE_HOME/rdbms/admin/utlxplan.sql
Note
- 'PLAN_TABLE' is old version

If you want to understand where the majority of the time is spent, you need to trace the execution. Note that your statement introduces increased complexity because it uses parallel execution, so you'll end up with multiple trace files per parallel slave and query coordinator process, which makes the analysis not that straightforward.
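A minimal sketch of enabling extended SQL trace for the session looks like this (event 10046 at level 8 includes wait events; with parallel execution each slave process writes its own trace file):

```
-- Make the trace files easier to find in the trace directory:
ALTER SESSION SET tracefile_identifier = 'tune_insert';

-- Enable extended SQL trace including wait events (level 8):
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run the INSERT ... SELECT here ...

-- Disable the trace again:
ALTER SESSION SET events '10046 trace name context off';
```

The raw trace files can then be processed with tkprof, keeping in mind that the slave trace files need to be analyzed in addition to the query coordinator's.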
Please read the "HOW TO: Post a SQL statement tuning request" template posting, which explains how you can enable the statement trace, what you should provide if you have a SQL statement tuning question, and how to format it here so that the posted information is readable by others.
The accompanying blog post shows step-by-step instructions on how to obtain that information.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Hi Gurus,
We are using Reports 10g on 10g Application Server and Solaris. We created a report on a table which has 10,000 rows. The report has 25 columns. When we run this query in Toad, it takes 12 sec to fetch all 10,000 rows.
But when we run the report with Destype = 'FILE' and Desformat = 'DELIMITEDDATA', it takes 5 to 8 minutes
to open in Excel (we concatenated mimetype=vnd-msexcel at the end of the URL when Destype=FILE). We removed the layout in the report, as it was taking 10 to 15 mins to run to Screen with Desformat=HTML/PDF (formatting pages takes more time). We are wondering why the DELIMITEDDATA format takes so long, as it only runs the query.
Does RWSERVLET take more time writing the data to the physical file in the cache dir? Our cache size is 1 GB. We have 2 report servers clustered. Tracing is off.
Please advise me if there are any report server settings to boost the performance.
Thanks a lot,
Ram.

Duplicate of "Strange problem... Query runs faster, but report runs slow..." in the Reports forum.
[Thread closed] -
Query running sometimes slow and sometimes fast on both prod and dev. Help
We are running a job that is behaving so inconsistently that I am ready to jump off the 19th floor. :-)
This query, which goes against one table, was coming to a halt in production. After 4 days of investigation we thought it was a resource problem on the production box. Then we had the same issue on another dev box. This box gets updated with production data every day. There is a 3rd box. The DBA ran update statistics on the 3rd box and the job was never slow there. When we updated the 2nd box (dev) with statistics from the 3rd box, the job also ran fine. So we thought we knew for sure that it was the statistics that we needed to update. Then, for business testing, the 2nd and 3rd boxes got updated with data and statistics from the production box (the troubled one). We thought surely we would see issues on the 2nd and 3rd boxes, but the job ran just fine on those boxes. As I said, the 2nd box gets updated with production data every day. After last night's refresh this job is running long on the 2nd box again. We are really puzzled. Has anyone experienced anything like this before?
thanks in advance.
Reaz.

We got our DBA to check the plan whenever we run the job.
The DBA is running a trace right now. Here is the result from the trace:
SELECT STATUS_FLAG, FSI_TID, FSI_REC_TID, AREA, VALUE_DATE, CANCEL_DATE,
TO_DATE(ENTRY_DATE_TIME), PRODUCT, CUST_TID
FROM
ORD_FX WHERE WSS_GDP_SITE = :B3 AND DEAL_NUMBER = :B2 AND TICKET_AREA = :B1
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 514 0.23 0.27 0 0 0 0
Fetch 514 253.40 247.44 0 16932188 0 514
total 1028 253.63 247.71 0 16932188 0 514
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: 26 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch: cache buffers chains 2 0.00 0.00
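Given that the behavior flips with each data/statistics refresh, one thing worth checking after every refresh is whether the optimizer statistics on the table changed. A hedged sketch against the data dictionary, assuming 10g or later (ORD_FX and the bind columns are taken from the trace above):

```
-- When were the table's statistics last gathered, and what row count
-- does the optimizer currently believe?
SELECT table_name, num_rows, last_analyzed
FROM   user_tab_statistics
WHERE  table_name = 'ORD_FX';

-- Column-level statistics on the predicate columns used in the query:
SELECT column_name, num_distinct, density, last_analyzed
FROM   user_tab_col_statistics
WHERE  table_name = 'ORD_FX'
AND    column_name IN ('WSS_GDP_SITE', 'DEAL_NUMBER', 'TICKET_AREA');
```

Comparing these values between a "fast" and a "slow" day would show whether the refreshed statistics are what flips the plan.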
There is no IO issue in any database at any time. We saw slightly high IO in production yesterday, so we thought we were looking at a resource issue, but today the system is fine and we still had no luck with the job. -
Hi, I have a query which is running long.
I have checked every tuning option. I have attached the explain plan for the same. It seems it is doing a Cartesian product.
SELECT analytic_source_cd,
SUM (CASE WHEN pricing_dt = '24jan2014' THEN cnt ELSE 0 END)
AS Prev_Count,
SUM (CASE WHEN pricing_dt = '27jan2014' THEN cnt ELSE 0 END)
AS Current_Count
FROM (SELECT af.analytic_source_cd,
af.pricing_dt,
COUNT (DISTINCT af.fi_instrument_id) cnt
FROM analytics_fact af,
fund f,
instrument_alternate_id iai,
(SELECT pricing_dt, vendor_instrument_id, index_cd
FROM fi_idx_benchmark_holdings
WHERE pricing_dt IN
('24jan2014', '27jan2014')
UNION
SELECT pricing_dt, vendor_instrument_id, index_cd
FROM fi_idx_forward_holdings
WHERE pricing_dt IN
('24jan2014', '27jan2014')) bh
WHERE
af.pricing_dt = bh.pricing_dt
AND f.official_index = bh.index_cd
AND af.fi_instrument_id = iai.fi_instrument_id
AND bh.vendor_instrument_id = iai.alternate_id
AND iai.alternate_id_type_code IN ('FMR_CUSIP', 'CUSIP')
and af.pricing_dt IN ('24jan2014', '27jan2014')
AND f.official_index IS NOT NULL
AND af.oad IS NOT NULL
GROUP BY af.analytic_source_cd, af.pricing_dt)
GROUP BY analytic_source_cd
ORDER BY 1;
Please check the plan below.
Plan
SELECT STATEMENT ALL_ROWSCost: 210,133 Bytes: 27 Cardinality: 1
27 SORT GROUP BY Cost: 210,133 Bytes: 27 Cardinality: 1
26 VIEW A519350. Cost: 210,133 Bytes: 27 Cardinality: 1
25 HASH GROUP BY Cost: 210,133 Bytes: 26 Cardinality: 1
24 VIEW VIEW SYS.VM_NWVW_1 Cost: 210,133 Bytes: 26 Cardinality: 1
23 HASH GROUP BY Cost: 210,133 Bytes: 87 Cardinality: 1
22 HASH JOIN Cost: 210,132 Bytes: 87 Cardinality: 1
10 MERGE JOIN CARTESIAN Cost: 130,054 Bytes: 63 Cardinality: 1
7 NESTED LOOPS Cost: 129,831 Bytes: 61 Cardinality: 1
4 INLIST ITERATOR
3 PARTITION RANGE ITERATOR Cost: 129,827 Bytes: 30 Cardinality: 1 Partition #: 10 Partitions accessed #KEY(INLIST)
2 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_PORTFOLIO_DM.ANALYTICS_FACT Cost: 129,827 Bytes: 30 Cardinality: 1 Partition #: 10 Partitions accessed #KEY(INLIST)
1 INDEX RANGE SCAN INDEX (UNIQUE) FI_PORTFOLIO_DM.ANALYTICS_FACT_PK Cost: 667 Cardinality: 206,474 Partition #: 10 Partitions accessed #KEY(INLIST)
6 PARTITION LIST INLIST Cost: 4 Bytes: 31 Cardinality: 1 Partition #: 13 Partitions accessed #KEY(INLIST)
5 INDEX RANGE SCAN INDEX (UNIQUE) FI_REFERENCE.INSTRUMENT_ALTERNATE_ID_PPK Cost: 4 Bytes: 31 Cardinality: 1 Partition #: 13 Partitions accessed #KEY(INLIST)
9 BUFFER SORT Cost: 130,050 Bytes: 1,642 Cardinality: 821
8 TABLE ACCESS FULL TABLE FI_REFERENCE.FUND Cost: 224 Bytes: 1,642 Cardinality: 821
21 VIEW A519350. Cost: 80,049 Bytes: 63,861,216 Cardinality: 2,660,884
20 SORT UNIQUE Cost: 80,049 Bytes: 66,522,100 Cardinality: 2,660,884
19 UNION-ALL
14 INLIST ITERATOR
13 PARTITION RANGE ITERATOR Cost: 24,599 Bytes: 25,284,850 Cardinality: 1,011,394 Partition #: 21 Partitions accessed #KEY(INLIST)
12 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_BENCHMARK.FI_IDX_BENCHMARK_HOLDINGS Cost: 24,599 Bytes: 25,284,850 Cardinality: 1,011,394 Partition #: 21 Partitions accessed #KEY(INLIST)
11 INDEX RANGE SCAN INDEX FI_BENCHMARK.FI_IDX_BENCHMARK_HOLDINGS_I2 Cost: 1,973 Cardinality: 1,011,394 Partition #: 21 Partitions accessed #KEY(INLIST)
18 INLIST ITERATOR
17 PARTITION RANGE ITERATOR Cost: 36,066 Bytes: 41,237,250 Cardinality: 1,649,490 Partition #: 25 Partitions accessed #KEY(INLIST)
16 TABLE ACCESS BY LOCAL INDEX ROWID TABLE FI_BENCHMARK.FI_IDX_FORWARD_HOLDINGS Cost: 36,066 Bytes: 41,237,250 Cardinality: 1,649,490 Partition #: 25 Partitions accessed #KEY(INLIST)
15 INDEX RANGE SCAN INDEX FI_BENCHMARK.FI_IDX_FORWARD_HOLDINGS_I2 Cost: 3,499 Cardinality: 1,649,490 Partition #: 25 Partitions accessed #KEY(INLIST)
Could you please help if I missed anything?

One nice best practice: do not hard-code date values as strings; use the TO_DATE function instead.
For the performance issue, check the join order of the tables. For example, you only need the af.pricing_dt IN ('24jan2014', '27jan2014') date range on the af table, but you join all matching columns with the bh table first and only then apply the date condition to the af table. Therefore, the number of intermediate rows processed will be higher.
Another nice best practice: you can use JOIN keywords when joining tables. Writing everything in the WHERE clause makes the code complicated. Simplicity is not easy, but it is impressive.
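As an illustration of the JOIN-keyword suggestion, the inner query could be sketched like this. This is a hedged, untested rewrite of the posted FROM/WHERE clauses with TO_DATE applied to the date literals; verify it returns the same rows before relying on it:

```
SELECT af.analytic_source_cd,
       af.pricing_dt,
       COUNT(DISTINCT af.fi_instrument_id) cnt
  FROM analytics_fact af
  JOIN instrument_alternate_id iai
    ON iai.fi_instrument_id = af.fi_instrument_id
  JOIN (SELECT pricing_dt, vendor_instrument_id, index_cd
          FROM fi_idx_benchmark_holdings
         WHERE pricing_dt IN (TO_DATE('24jan2014', 'DDMONYYYY'),
                              TO_DATE('27jan2014', 'DDMONYYYY'))
        UNION
        SELECT pricing_dt, vendor_instrument_id, index_cd
          FROM fi_idx_forward_holdings
         WHERE pricing_dt IN (TO_DATE('24jan2014', 'DDMONYYYY'),
                              TO_DATE('27jan2014', 'DDMONYYYY'))) bh
    ON  bh.pricing_dt = af.pricing_dt
    AND bh.vendor_instrument_id = iai.alternate_id
  JOIN fund f
    ON f.official_index = bh.index_cd
 WHERE af.pricing_dt IN (TO_DATE('24jan2014', 'DDMONYYYY'),
                         TO_DATE('27jan2014', 'DDMONYYYY'))
   AND iai.alternate_id_type_code IN ('FMR_CUSIP', 'CUSIP')
   AND af.oad IS NOT NULL
 GROUP BY af.analytic_source_cd, af.pricing_dt
```

Note that the f.official_index IS NOT NULL predicate becomes redundant here, since the inner join on f.official_index = bh.index_cd already excludes NULLs.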
Regards,
Dilek