Tune queries
Hi,
I have created more than 15 packages for a single application,
and I have used many procedures and functions on a number of tables.
After running the project, they take a very long time.
What is the best way to tune the PL/SQL queries?
Please explain how to tune the queries with an example.
I am very thankful in advance.
user1175507 wrote:
Hi
I have created more than 15 packages for a single application,
and I have used many procedures and functions on a number of tables.
After running the project, they take a very long time.
What is the best way to tune the PL/SQL queries?
Please explain how to tune the queries with an example.
I am very thankful in advance.
You mean tune the SQL queries and tune the PL/SQL code, NOT "PL/SQL queries" :)
OK, let's start from the beginning:
- Are you a DBA ?
- Which Oracle version are you using ?
Similar Messages
-
Hi,
I have the two queries below. The 2nd query is taking more time and gives a timeout error. Can anybody tell me how to fine-tune it? Thanks.
1st Query.
SELECT EKET~EBELN EKET~EBELP EKET~ETENR EKET~EINDT
EKET~MENGE EKET~WEMNG
INTO TABLE I_EKET
FROM EKET
WHERE EKET~MENGE <> EKET~WEMNG
AND
EKET~EINDT IN S_EINDT.
DESCRIBE TABLE I_EKET LINES V_ZLINES.
IF V_ZLINES > 0.
2nd Query.
SELECT EKKO~EBELN EKKO~AEDAT EKKO~LIFNR EKPO~EBELP EKPO~MATNR
EKPO~WERKS
EKPO~LOEKZ EKPO~ELIKZ EKPO~TXZ01 EKPO~NETPR LFA1~NAME1
INTO TABLE I_PODEL
FROM EKKO
INNER JOIN EKPO ON EKKO~EBELN = EKPO~EBELN
INNER JOIN LFA1 ON EKKO~LIFNR = LFA1~LIFNR
FOR ALL ENTRIES IN I_EKET
WHERE EKKO~EBELN = I_EKET-EBELN AND
EKPO~EBELP = I_EKET-EBELP AND
EKPO~MATNR IN S_MATNR AND
EKPO~WERKS IN S_WERKS AND
EKPO~WERKS NE 'W001' AND
EKKO~EKORG = P_EKORG AND
EKKO~LIFNR IN S_LIFNR AND
EKKO~LOEKZ NE 'X' AND
EKPO~LOEKZ NE 'S' AND
EKPO~ELIKZ NE 'X' AND
EKPO~LOEKZ NE 'L' AND
EKKO~AEDAT IN S_AEDAT.
ELSE.
WRITE 'No POs found for the selection criteria!'.
ENDIF.
Not the right forum to ask this question.
VJ -
Tune query on dual table.
Hello,
I am using ORACLE 11g and RHEL 5.
I have a performance tuning issue.
One of the queries returns its output in 0.016 seconds, i.e. the one below:
select to_number((sysdate- to_date('1970-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS')) * 86400 -
((substr(TZ_OFFSET(sessiontimezone),1,
instr(TZ_OFFSET(sessiontimezone),':')-1) * 3600)+((substr(TZ_OFFSET(SESSIONTIMEZONE),
(instr(TZ_OFFSET(sessiontimezone),':')+1), 2)*60)))) TIME_STAMP ,
to_number(to_char(systimestamp,'FF')) MICROSEC
from dual
I want to optimize this query to such an extent that it gives the output in less than 10 microseconds;
right now it returns the output in 0.016 seconds, which is way too slow for the need.
Alternatively, I can manage if the output time is constant across executions...
for ex:-
if the query returns the output in 15 microseconds when fired for the first time, then it should return the output in 15 microseconds for all subsequent executions. If the query is executed 10 times, I should get the output in 15 microseconds all 10 times.
Can anyone guide me on how to tune this query to get the output in less than 10 microseconds?
else
How can I execute this query so it returns the output in the same time at every execution?
Thanks in advance ...Hi,
VJ4 wrote:
can anyone guide me how to tune this query to get the output in less than 10 microsecond ???
I don't think it is possible using a (disk-based) database. Its purpose is not to measure time but to serve and manipulate relational data.
As far as I know, <10µs will be hard to achieve.
Maybe TimesTen IMDB (InMemoryDataBase) can do that...
but I wouldn't guarantee a constant execution time.
VJ4 wrote:
how can i execute this query to return the output in the same time at every execution ???
Well, I don't think you can. The Oracle database is not real-time. The execution time will depend highly on several factors, such as whether the parsed query is still in the shared pool, whether the database server is under heavy load, etc...
Moreover: assuming the query will always execute at exactly the same "speed" in order to measure how fast your processing unit is, is not a good idea, as it assumes your unit's processing speed is constant.
I think you should try to explain the big picture, and how you came to decide you need a query that return in less than 10µs.
Tell us why you cannot just execute it once in a while and do the usual math:
processing speed = ("units processed at T2" - "units processed at T1") / (T2 - T1)
-
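The suggested math can be sketched in Python. Note also that if all that is needed is an epoch timestamp, `time.time()` already returns epoch seconds (UTC) without a database round trip. The function names and sample numbers below are mine, not from the thread:

```python
import time

def epoch_seconds():
    # Equivalent of the query's (sysdate - '1970-01-01') * 86400 with the
    # timezone offset removed: time.time() is already UTC epoch seconds.
    return time.time()

def processing_speed(units_t1, units_t2, t1, t2):
    # The usual math suggested above:
    # speed = ("units processed at T2" - "units processed at T1") / (T2 - T1)
    return (units_t2 - units_t1) / (t2 - t1)

# Example with made-up numbers: 500 units processed over 10 seconds.
speed = processing_speed(1200, 1700, 100.0, 110.0)
print(speed)  # 50.0 units/second
```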
Hi Team,
could you please tell me how to tune the query below?
UPDATE OBSERVATION_T
SET OBS_NOOBSDOC =
      (SELECT COUNT(*)
         FROM OBSERVATION_DOC_T od
        WHERE od.SEQ_NO_OBS = :1
          AND od.OBS_SEVERITY NOT IN ('K','X')),
    UPD_DATE     = :2,
    VERSION_NO   = VERSION_NO + 1,
    OBS_SEVERITY = :3,
    OBS_CORRSTAT = :4,
    OBS_MESSAGE  = :5
WHERE SEQ_NO_OBS = :6
Please post the explain plan for your query.
-
How to tune query using dblink?
Hello,
I have a query running on a production DB (version 9i).
The query is fetching data over the dblink.
It's taking around 14 minutes, and the total number of rows fetched is around 2 million.
Is there any way I can tune the query? I have used the driving_site hint, but that didn't help.
Ario wrote:
Hello,
I have a query running on a production DB (version 9i).
The query is fetching data over the dblink.
It's taking around 14 minutes, and the total number of rows fetched is around 2 million.
Is there any way I can tune the query? I have used the driving_site hint, but that didn't help.
Do some math and tell us if there is a way you can tune this.
We have no idea what "2 million rows" means in terms of size. That could be 1 GB of data, or it could be 1 PB of data. Figure out how much data you're trying to move (size-wise), then figure out roughly how fast your network can pipe the data (again, we have no idea if you have a fiber line on an internal network between machines, or if you're transporting the data halfway around the globe over a dial-up connection).
When you have all that figured out, you'll know whether 14 minutes is or is not an acceptable amount of time.
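That back-of-the-envelope estimate can be sketched in Python. The row size and bandwidth below are hypothetical placeholders, not measurements from the poster's system:

```python
def transfer_seconds(row_count, avg_row_bytes, bandwidth_bytes_per_sec):
    # Total payload divided by effective network throughput gives a
    # lower bound on wall-clock time for moving data over the dblink.
    return (row_count * avg_row_bytes) / bandwidth_bytes_per_sec

# 2 million rows of ~500 bytes over a ~10 MB/s effective link:
secs = transfer_seconds(2_000_000, 500, 10_000_000)
print(secs / 60)  # ~1.7 minutes just for the network transfer
```

If the math says the network alone needs 10 of the 14 minutes, no amount of SQL tuning will help; the fix is to move less data or move the work to the remote site.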
Cheers, -
Hi,
How can I tune a complex query?
What are the steps to follow?
A very good one:
http://www.dba-oracle.com/art_sql_tune.htm
Not particularly. At best it's a rambling collection of general principles where you have to work out which bits are right, out of date, or wrong. (Or - like the bit about Oracle 7 - so far out of date as to be a waste of space).
How much are you going to trust an article about SQL tuning that manages to say the following:
<i>
For example, a simple query such as “What students received an A last semester?” can be written in three ways, as shown in below, each returning an identical result.
</i>
A nested query:
SELECT *
FROM STUDENT
WHERE
student_id =
(SELECT student_id
FROM REGISTRATION
WHERE
grade = 'A'
);
A correlated subquery:
SELECT *
FROM STUDENT
WHERE
0 <
(SELECT count(*)
FROM REGISTRATION
WHERE
grade = 'A'
AND
student_id = STUDENT.student_id
);
Far from returning identical results, the first one returns multiple copies of each student's information, one row for each 'A' they got (along with the registration columns); the second will fail with Oracle error "ORA-01427: single-row subquery returns more than one row" if there is more than one 'A' in the entire registration table; and the third query returns the required result.
Funnily enough, the most "intuitive" query - which is the one with an existence subquery - is not mentioned.
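The semantics Jonathan describes can be sketched in Python with toy data (the rows below are hypothetical, standing in for STUDENT and REGISTRATION):

```python
# Toy data standing in for STUDENT and REGISTRATION (hypothetical rows).
students = [{"student_id": 1, "name": "Ann"},
            {"student_id": 2, "name": "Bob"}]
registration = [{"student_id": 1, "grade": "A"},
                {"student_id": 1, "grade": "A"},   # Ann got two A's
                {"student_id": 2, "grade": "B"}]

# Join-style: one output row per matching registration row,
# so Ann appears twice -- the "multiple copies" problem.
join_rows = [s for s in students
             for r in registration
             if r["student_id"] == s["student_id"] and r["grade"] == "A"]

# "Single-row subquery" style: the subquery yields two ids here,
# so '=' against it would raise ORA-01427 in Oracle.
a_ids = [r["student_id"] for r in registration if r["grade"] == "A"]

# EXISTS-style: each student appears at most once -- the intended result.
exists_rows = [s for s in students
               if any(r["student_id"] == s["student_id"] and r["grade"] == "A"
                      for r in registration)]

print(len(join_rows), len(a_ids), len(exists_rows))  # 2 2 1
```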
The OP would be far better off starting with the Performance Tuning Guide from http://tahiti.oracle.com
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"The greatest enemy of knowledge is not ignorance,
it is the illusion of knowledge." Stephen Hawking. -
Hi,
I am using the Discoverer 4i Desktop edition. I want to tune the query/report by adding some hints to the query.
I am new to Discoverer. Can somebody guide me on how to do this? I am using the 4.1.48 Desktop version.
I want to know how to edit the query to introduce the hints.
Hi,
Using Discoverer Administrator you can enter hints in the optimizer hints property of the folders. These will then be added to the query when the folder is used in Discoverer Desktop.
Rod West -
Hi, I would like to ask the experts here: how could I fine-tune the query below? It currently returns data within 60 seconds; however, my client requires the data to return in under 5 seconds.
SELECT DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE) START_DATE,
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR) START_HOUR,
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER) IN_PARTNER,
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER) OUT_PARTNER,
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE) SUBSCRIBER_TYPE,
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE) TRAFFIC_TYPE,
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE) EVENT_TYPE,
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD) INTERVAL_PERIOD,
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD) THRESHOLD,
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP) CALLED_NO_GRP,
SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) EVENT_COUNT,
--SUM (DECODE (CURR.EVENT_DURATION, '', 0, CURR.EVENT_DURATION)) EVENT_DURATION,
--SUM (DECODE (CURR.DATA_VOLUME, '', 0, CURR.DATA_VOLUME)) DATA_VOLUME,
--AVG (DECODE (CURR.AVERAGE_DURATION, '', 0, CURR.AVERAGE_DURATION)) AVERAGE_DURATION,
SUM (DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV)) EVENT_COUNT_PREV,
--SUM ( DECODE (PREV.EVENT_DURATION_PREV, '', 0, PREV.EVENT_DURATION_PREV)) EVENT_DURATION_PREV,
--SUM (DECODE (PREV.DATA_VOLUME_PREV, '', 0, PREV.DATA_VOLUME_PREV)) DATA_VOLUME_PREV,
--AVG ( DECODE (PREV.AVERAGE_DURATION_PREV, '', 0, PREV.AVERAGE_DURATION_PREV)) AVERAGE_DURATION_PREV,
ABS ( SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) - SUM ( DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV))) EVENT_COUNT_DIFF
FROM ------------------------------- CURR
(SELECT START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
--rd_thr.param_value THRESHOLD,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--SUM (DATA_VOLUME) AS DATA_VOLUME,
--SUM (EVENT_DURATION) AS EVENT_DURATION,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION,
SUM (EVENT_COUNT) AS EVENT_COUNT
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC (SYSDATE - TO_DATE (START_DATE, 'YYYYMMDD')) <= rd_min.param_value
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TO_DATE (START_DATE, 'YYYYMMDD') < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
) CURR
FULL OUTER JOIN
---------------------------------- PREV --------------------------
(SELECT TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD') START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--rd_thr.param_value THRESHOLD,
SUM (EVENT_COUNT) AS EVENT_COUNT_PREV
--SUM (EVENT_DURATION) AS EVENT_DURATION_PREV,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION_PREV,
--SUM (DATA_VOLUME) AS DATA_VOLUME_PREV
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC ( SYSDATE - TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD'))) <= rd_min.param_value
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')) < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD'),
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
) PREV
-------------------------- join ------------------
ON ( CURR.START_DATE = PREV.START_DATE
AND CURR.START_HOUR = PREV.START_HOUR
AND CURR.IN_PARTNER = PREV.IN_PARTNER
AND CURR.OUT_PARTNER = PREV.OUT_PARTNER
AND CURR.SUBSCRIBER_TYPE = PREV.SUBSCRIBER_TYPE
AND CURR.TRAFFIC_TYPE = PREV.TRAFFIC_TYPE
AND CURR.INTERVAL_PERIOD = PREV.INTERVAL_PERIOD
--AND CURR.THRESHOLD = PREV.THRESHOLD
AND CURR.EVENT_TYPE = PREV.EVENT_TYPE
AND CURR.CALLED_NO_GRP = PREV.CALLED_NO_GRP)
GROUP BY DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE),
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR),
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER),
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER),
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE),
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE),
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD),
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD),
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE),
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP);
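The shape of this query - a full outer join of a "current" and a "previous" aggregate, with the DECODEs acting as NVL-style coalescing - can be sketched in Python with toy data (the keys and counts below are hypothetical):

```python
# "Current" and "previous" aggregates keyed by the GROUP BY columns
# (here just start_date and start_hour); values are event counts.
curr = {("20240101", "08"): 120, ("20240101", "09"): 95}
prev = {("20240101", "08"): 100, ("20240101", "10"): 80}

rows = []
for key in sorted(set(curr) | set(prev)):   # full outer join on the key
    c = curr.get(key, 0)                    # DECODE(curr.cnt, '', 0, curr.cnt)
    p = prev.get(key, 0)                    # DECODE(prev.cnt, '', 0, prev.cnt)
    rows.append((key, c, p, abs(c - p)))    # EVENT_COUNT_DIFF

print(rows)
```

Seeing the join this way makes the tuning question clearer: both sides scan the same MSC_OUT_AGG table with near-identical predicates, so most of the cost is in building the two aggregates, not in the join itself.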
--------------------------------------------------------------------------------
I changed the query as below; however, the performance is not much different compared to the original.
WITH CURR AS
(SELECT /*+ MATERIALIZE */ START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
--rd_thr.param_value THRESHOLD,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--SUM (DATA_VOLUME) AS DATA_VOLUME,
--SUM (EVENT_DURATION) AS EVENT_DURATION,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION,
SUM (EVENT_COUNT) AS EVENT_COUNT
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC (SYSDATE - TO_DATE (START_DATE, 'YYYYMMDD')) <= rd_min.param_value
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TO_DATE (START_DATE, 'YYYYMMDD') < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
), PREV AS
(SELECT /*+ MATERIALIZE */ TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD') START_DATE,
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value INTERVAL_PERIOD,
CALLED_NO_GRP,
--rd_thr.param_value THRESHOLD,
SUM (EVENT_COUNT) AS EVENT_COUNT_PREV
--SUM (EVENT_DURATION) AS EVENT_DURATION_PREV,
--DECODE ( SUM (NVL (EVENT_COUNT / 1000000, 0)), 0, 0, ROUND ( SUM (EVENT_DURATION / 1000000) / SUM (NVL (EVENT_COUNT / 1000000, 0)), 2)) AS AVERAGE_DURATION_PREV,
--SUM (DATA_VOLUME) AS DATA_VOLUME_PREV
FROM MSC_OUT_AGG,
raid_t_parameters rd,
raid_t_parameters rd_min,
raid_t_parameters rd_max,
raid_t_parameters rd_thr
WHERE TRUNC ( SYSDATE - TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD'))) <= rd_min.param_value
AND rd.param_id = 'histMD_IN_MSC'
AND rd.param_id2 = 'INTERVAL_PERIOD'
AND rd_min.param_id = 'histMD_IN_MSC'
AND rd_min.param_id2 = 'DASHBOARD_THRESHOLD_MIN'
AND rd_max.param_id = 'histMD_IN_MSC'
AND rd_max.param_id2 = 'DASHBOARD_THRESHOLD_MAX'
AND rd_thr.param_id = 'histMD_IN_MSC'
AND rd_thr.param_id2 = 'PER_THRESHOLD_W'
AND TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')) < SYSDATE - rd_max.param_value
AND SOURCE = 'MD_IN_MSC_HUA'
GROUP BY TO_CHAR ( TRUNC ( rd.param_value + TO_DATE (START_DATE, 'YYYYMMDD')), 'YYYYMMDD'),
START_HOUR,
IN_PARTNER,
OUT_PARTNER,
SUBSCRIBER_TYPE,
TRAFFIC_TYPE,
EVENT_TYPE,
rd.param_value,
CALLED_NO_GRP,
rd_thr.param_value
)
SELECT /*+ USE_HASH(CURR PREV) */ DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE) START_DATE,
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR) START_HOUR,
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER) IN_PARTNER,
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER) OUT_PARTNER,
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE) SUBSCRIBER_TYPE,
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE) TRAFFIC_TYPE,
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE) EVENT_TYPE,
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD) INTERVAL_PERIOD,
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD) THRESHOLD,
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP) CALLED_NO_GRP,
SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) EVENT_COUNT,
--SUM (DECODE (CURR.EVENT_DURATION, '', 0, CURR.EVENT_DURATION)) EVENT_DURATION,
--SUM (DECODE (CURR.DATA_VOLUME, '', 0, CURR.DATA_VOLUME)) DATA_VOLUME,
--AVG (DECODE (CURR.AVERAGE_DURATION, '', 0, CURR.AVERAGE_DURATION)) AVERAGE_DURATION,
SUM (DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV)) EVENT_COUNT_PREV,
--SUM ( DECODE (PREV.EVENT_DURATION_PREV, '', 0, PREV.EVENT_DURATION_PREV)) EVENT_DURATION_PREV,
--SUM (DECODE (PREV.DATA_VOLUME_PREV, '', 0, PREV.DATA_VOLUME_PREV)) DATA_VOLUME_PREV,
--AVG ( DECODE (PREV.AVERAGE_DURATION_PREV, '', 0, PREV.AVERAGE_DURATION_PREV)) AVERAGE_DURATION_PREV,
ABS ( SUM (DECODE (CURR.EVENT_COUNT, '', 0, CURR.EVENT_COUNT)) - SUM ( DECODE (PREV.EVENT_COUNT_PREV, '', 0, PREV.EVENT_COUNT_PREV))) EVENT_COUNT_DIFF
FROM CURR
FULL OUTER JOIN
PREV
ON ( CURR.START_DATE = PREV.START_DATE
AND CURR.START_HOUR = PREV.START_HOUR
AND CURR.IN_PARTNER = PREV.IN_PARTNER
AND CURR.OUT_PARTNER = PREV.OUT_PARTNER
AND CURR.SUBSCRIBER_TYPE = PREV.SUBSCRIBER_TYPE
AND CURR.TRAFFIC_TYPE = PREV.TRAFFIC_TYPE
AND CURR.INTERVAL_PERIOD = PREV.INTERVAL_PERIOD
--AND CURR.THRESHOLD = PREV.THRESHOLD
AND CURR.EVENT_TYPE = PREV.EVENT_TYPE
AND CURR.CALLED_NO_GRP = PREV.CALLED_NO_GRP)
GROUP BY DECODE (CURR.START_DATE, '', PREV.START_DATE, CURR.START_DATE),
DECODE (CURR.START_HOUR, '', PREV.START_HOUR, CURR.START_HOUR),
DECODE (CURR.IN_PARTNER, '', PREV.IN_PARTNER, CURR.IN_PARTNER),
DECODE (CURR.OUT_PARTNER, '', PREV.OUT_PARTNER, CURR.OUT_PARTNER),
DECODE (CURR.SUBSCRIBER_TYPE, '', PREV.SUBSCRIBER_TYPE, CURR.SUBSCRIBER_TYPE),
DECODE (CURR.TRAFFIC_TYPE, '', PREV.TRAFFIC_TYPE, CURR.TRAFFIC_TYPE),
DECODE (CURR.INTERVAL_PERIOD, '', PREV.INTERVAL_PERIOD, CURR.INTERVAL_PERIOD),
--DECODE (CURR.THRESHOLD, '', PREV.THRESHOLD, CURR.THRESHOLD),
DECODE (CURR.EVENT_TYPE, '', PREV.EVENT_TYPE, CURR.EVENT_TYPE),
DECODE (CURR.CALLED_NO_GRP, '', PREV.CALLED_NO_GRP, CURR.CALLED_NO_GRP);
-
Hi,
Friends, I need some help.
I have to tune some queries.
1. How to find the last STATISTICS generation date?
2. How generate statistics in database level?
3. What is the difference between
>alter table emp compute statistics;
and
>alter table emp estimate statistics;
Can anyone give me some good advice on tuning queries?
regards
Mathew
Hi,
Friends, I need some help.
I have to tune some queries.
1. How to find the last STATISTICS generation date?
Query the USER_TABLES/DBA_TABLES views, column LAST_ANALYZED.
2. How to generate statistics at the database level?
Use
ANALYZE TABLE <table_name> <statistic_collection_method>
Or
DBMS_STATS package
3. What is the difference between
alter table emp compute statistics;
and
alter table emp estimate statistics;
Should be ANALYZE TABLE emp compute statistics;
and ANALYZE TABLE emp estimate statistics;
COMPUTE STATISTICS samples all of the table's data;
ESTIMATE STATISTICS only samples a certain percentage of the table's data.
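The compute-versus-estimate trade-off can be sketched in Python (a toy 100,000-row "table"; Oracle's actual sampling is more sophisticated, and the function names are mine):

```python
import random

random.seed(0)
table = list(range(1, 100_001))          # stand-in for a table column

def compute_avg(rows):
    # "compute statistics": exact, but touches every row
    return sum(rows) / len(rows)

def estimate_avg(rows, percent):
    # "estimate statistics": approximate, touches only a sample
    sample = random.sample(rows, int(len(rows) * percent / 100))
    return sum(sample) / len(sample)

exact = compute_avg(table)        # 50000.5, always
approx = estimate_avg(table, 10)  # close to 50000.5, at a tenth of the work
print(exact, round(approx))
```

Note that ANALYZE is deprecated for gathering optimizer statistics on later Oracle releases; DBMS_STATS (e.g. GATHER_TABLE_STATS with an estimate_percent) is the supported route.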
Can anyone give some good advice to tune queries?
Oracle Performance Tuning Guide
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm -
Query taking a lot of time (Tuning Query... Help)
SELECT d.loc_channel, A.DEPOT_CODE, F.DEPOT_NAME,E.EMP_CODE,E.EMP_NAME, A.CUST_CODE, B.CUST_NAME, A.INSTRUMENT_NO,to_char(A.INSTRUMENT_DT,'dd/mm/yyyy')as INSTRUMENT_DT ,nvl(A.amount,0)as amount, G.REMARKS indorcvdfrombank,G.DOC_NO,to_char(G.DOC_DT,'dd/mm/yyyy')as doc_dt,g.doc_dt docdt,nvl(G.DOC_AMOUNT,0)as doc_amount,decode(nvl(c.BDSNo,'N') ,'N','No','Yes') PAYRCVD,c.BDSNO,to_char(c.BDSDT,'dd/mm/yyyy')as BDSDT,nvl(c.amount,0)as bdsamount FROM T_PAYRECEIPT A, M_CUSTOMER B,T_PAYRECEIPT C,M_LOCATION D, M_EMPLOYEE E, M_DEPOT F, T_CRD_DBT G WHERE A.CUST_CODE = B.CUST_CODE AND B.SE_LOCCODE = D.LOCATION_CODE AND D.EMP_CODE = E.EMP_CODE AND A.DEPOT_CODE = F.DEPOT_CODE AND A.REF_DOCNO = G.DOC_NO AND A.DEPOT_CODE = G.DEPOT_CODE and nvl(A.status,' ') = 'B' and a.voucher_no = c.ref_instrumentno(+) and trunc(a.voucher_dt) between to_date('19/04/2005','dd/mm/yyyy') and to_date('19/05/2005','dd/mm/yyyy') AND A.Cust_Code exists (select A.cust_code from M_Customer A) AND A.Depot_code exists (select depot_Code from M_DEPOT where DEPOT_CATEGORY in ('D','L') and status = 'A' ) order by a.cust_code,g.doc_no,docdt
While I agree with all of the other John's comments, as a first crack, I would re-write the statement this way:
SELECT d.loc_channel, a.depot_code, f.depot_name,e.emp_code,e.emp_name,
a.cust_code, b.cust_name, a.instrument_no,
TO_CHAR(a.instrument_dt,'dd/mm/yyyy') instrument_dt,
NVL(a.amount,0) amount, g.remarks indorcvdfrombank,
g.doc_no,TO_CHAR(g.doc_dt,'dd/mm/yyyy') doc_dt, g.doc_dt docdt,
NVL(g.doc_amount,0) doc_amount,
DECODE(NVL(c.bdsno,'N'), 'N', 'No', 'Yes') payrcvd,
c.bdsno,TO_CHAR(c.bdsdt,'dd/mm/yyyy') bdsdt,
NVL(c.amount,0) bdsamount
FROM t_payreceipt a, m_customer b,t_payreceipt c,m_location d,
m_employee e, m_depot f, t_crd_dbt g
WHERE a.cust_code = b.cust_code and
b.se_loccode = d.location_code and
d.emp_code = e.emp_code and
a.depot_code = f.depot_code and
a.ref_docno = g.doc_no and
a.depot_code = g.depot_code and
a.status = 'B' and
a.voucher_no = c.ref_instrumentno(+) and
TRUNC(a.voucher_dt) BETWEEN TO_DATE('19/04/2005','dd/mm/yyyy') AND
TO_DATE('19/05/2005','dd/mm/yyyy') and
f.depot_category IN ('D','L') and
f.status = 'A'
ORDER BY a.cust_code, g.doc_no, docdt
The predicate:
a.cust_code EXISTS (SELECT a.cust_code FROM m_customer a)
can be removed because the join condition:
a.cust_code = b.cust_code
does exactly the same thing.
The predicate:
a.depot_code EXISTS (SELECT depot_code
FROM m_depot
WHERE depot_category IN ('D','L') and
status = 'A')
can be removed because the join between:
a.depot_code = f.depot_code
will take care of it if you add the conditions into the main query like:
f.depot_category IN ('D','L') and
f.status = 'A'
The predicate:
NVL(a.status,' ') = 'B'
can be changed to a simple equality, since NULL is not equal to anything. Removing the function may allow the optimizer to choose a better access path.
I would try to change the predicate:
TRUNC(a.voucher_dt) BETWEEN TO_DATE('19/04/2005','dd/mm/yyyy') AND
TO_DATE('19/05/2005','dd/mm/yyyy')
to get rid of the TRUNC on voucher_dt. If there really are times in voucher_dt, I would probably use something like:
a.voucher_dt BETWEEN TO_DATE('19/04/2005','dd/mm/yyyy') AND
TO_DATE('19/05/2005 23:59:59','dd/mm/yyyy hh24:mi:ss')
If there are no times in voucher_dt, then just drop the TRUNC.
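The boundary logic of that rewrite can be sketched with Python datetimes (the helper names are mine, and Oracle's TRUNC/BETWEEN semantics are only being imitated):

```python
from datetime import datetime

def in_range_trunc(ts, d1, d2):
    # TRUNC(voucher_dt) BETWEEN d1 AND d2: strip the time, inclusive ends.
    # Wrapping the column in TRUNC is what defeats an index on voucher_dt.
    day = ts.replace(hour=0, minute=0, second=0, microsecond=0)
    return d1 <= day <= d2

def in_range_no_trunc(ts, d1, d2_end_of_day):
    # Rewritten predicate: the raw timestamp against an end-of-day upper
    # bound, leaving voucher_dt bare so an index range scan is possible.
    return d1 <= ts <= d2_end_of_day

d1 = datetime(2005, 4, 19)
d2 = datetime(2005, 5, 19)
d2_eod = datetime(2005, 5, 19, 23, 59, 59)

ts = datetime(2005, 5, 19, 14, 30, 0)   # afternoon on the last day
print(in_range_trunc(ts, d1, d2), in_range_no_trunc(ts, d1, d2_eod))  # True True
```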
TTFN
John -
How to improve query performance or tune a query from its explain plan
Hi
The following is the explain plan for my SQL query (the plan was generated by Toad v9.7). How do I fix the query?
SELECT STATEMENT ALL_ROWSCost: 4,160 Bytes: 25,296 Cardinality: 204
8 NESTED LOOPS Cost: 3 Bytes: 54 Cardinality: 1
5 NESTED LOOPS Cost: 2 Bytes: 23 Cardinality: 1
2 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 13 Cardinality: 1
1 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
4 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_CUST_ACCOUNTS Cost: 1 Bytes: 10 Cardinality: 1
3 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_CUST_ACCOUNTS_U1 Cost: 1 Cardinality: 1
7 TABLE ACCESS BY INDEX ROWID TABLE AR.HZ_PARTIES Cost: 1 Bytes: 31 Cardinality: 1
6 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.HZ_PARTIES_U1 Cost: 1 Cardinality: 1
10 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
9 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
15 NESTED LOOPS Cost: 2 Bytes: 29 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
11 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
14 TABLE ACCESS BY INDEX ROWID TABLE ONT.OE_ORDER_HEADERS_ALL Cost: 1 Bytes: 17 Cardinality: 1
13 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Cardinality: 1
21 FILTER
16 TABLE ACCESS FULL TABLE ONT.OE_TRANSACTION_TYPES_TL Cost: 2 Bytes: 1,127 Cardinality: 49
20 NESTED LOOPS Cost: 2 Bytes: 21 Cardinality: 1
18 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
17 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
19 INDEX RANGE SCAN INDEX (UNIQUE) ONT.OE_ORDER_HEADERS_U2 Cost: 1 Bytes: 9 Cardinality: 1
23 TABLE ACCESS BY INDEX ROWID TABLE AR.RA_CUSTOMER_TRX_ALL Cost: 1 Bytes: 12 Cardinality: 1
22 INDEX UNIQUE SCAN INDEX (UNIQUE) AR.RA_CUSTOMER_TRX_U1 Cost: 1 Cardinality: 1
45 NESTED LOOPS Cost: 4,160 Bytes: 25,296 Cardinality: 204
42 NESTED LOOPS Cost: 4,150 Bytes: 23,052 Cardinality: 204
38 NESTED LOOPS Cost: 4,140 Bytes: 19,992 Cardinality: 204
34 NESTED LOOPS Cost: 4,094 Bytes: 75,850 Cardinality: 925
30 NESTED LOOPS Cost: 3,909 Bytes: 210,843 Cardinality: 3,699
26 PARTITION LIST ALL Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18
25 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_HEADERS Cost: 2,436 Bytes: 338,491 Cardinality: 14,717 Partition #: 29 Partitions accessed #1 - #18
24 INDEX SKIP SCAN INDEX XLA.XLA_AE_HEADERS_N1 Cost: 264 Cardinality: 1,398,115 Partition #: 29 Partitions accessed #1 - #18
29 PARTITION LIST ITERATOR Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32
28 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_AE_LINES Cost: 1 Bytes: 34 Cardinality: 1 Partition #: 32
27 INDEX RANGE SCAN INDEX (UNIQUE) XLA.XLA_AE_LINES_U1 Cost: 1 Cardinality: 1 Partition #: 32
33 PARTITION LIST ITERATOR Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35
32 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_DISTRIBUTION_LINKS Cost: 1 Bytes: 25 Cardinality: 1 Partition #: 35
31 INDEX RANGE SCAN INDEX XLA.XLA_DISTRIBUTION_LINKS_N3 Cost: 1 Cardinality: 1 Partition #: 35
37 PARTITION LIST SINGLE Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 38
36 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_EVENTS Cost: 1 Bytes: 16 Cardinality: 1 Partition #: 39 Partitions accessed #2
35 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_EVENTS_U1 Cost: 1 Cardinality: 1 Partition #: 40 Partitions accessed #2
41 PARTITION LIST SINGLE Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 41
40 TABLE ACCESS BY LOCAL INDEX ROWID TABLE XLA.XLA_TRANSACTION_ENTITIES Cost: 1 Bytes: 15 Cardinality: 1 Partition #: 42 Partitions accessed #2
39 INDEX UNIQUE SCAN INDEX (UNIQUE) XLA.XLA_TRANSACTION_ENTITIES_U1 Cost: 1 Cardinality: 1 Partition #: 43 Partitions accessed #2
44 TABLE ACCESS BY INDEX ROWID TABLE GL.GL_CODE_COMBINATIONS Cost: 1 Bytes: 11 Cardinality: 1
43 INDEX UNIQUE SCAN INDEX (UNIQUE) GL.GL_CODE_COMBINATIONS_U1 Cost: 1 Cardinality: 1
damorgan wrote:
Tuning is NOT about reducing the cost of I/O.
I/O is only one of many contributors to cost, and only one of many contributors to waits.
Any time you would like to explore this further run this code:
SELECT 1 FROM dual
WHERE regexp_like(' ','^*[ ]*a');
but not on a production box, because you are going to experience an extreme tuning event with zero I/O.
And when I say "extreme" I mean "EXTREME!"
You've been warned.
I think you just need a faster server.
SQL> set autotrace traceonly statistics
SQL> set timing on
SQL> select 1 from dual
2 where
3 regexp_like (' ','^*[ ]*a');
no rows selected
Elapsed: 00:00:00.00
Statistics
1 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
243 bytes sent via SQL*Net to client
349 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
Repeated from an Oracle 10.2.0.x instance:
SQL> SELECT DISTINCT SID FROM V$MYSTAT;
SID
310
SQL> ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
Session altered.
SQL> select 1 from dual
2 where
3 regexp_like (' ','^*[ ]*a');
The session is hung. Wait a little while and connect to the database using a different session:
COLUMN STAT_NAME FORMAT A35 TRU
SET PAGESIZE 200
SELECT
STAT_NAME,
VALUE
FROM
V$SESS_TIME_MODEL
WHERE
SID=310;
STAT_NAME VALUE
DB time 9247
DB CPU 9247
background elapsed time 0
background cpu time 0
sequence load elapsed time 0
parse time elapsed 6374
hard parse elapsed time 5997
sql execute elapsed time 2939
connection management call elapsed 1660
failed parse elapsed time 0
failed parse (out of shared memory) 0
hard parse (sharing criteria) elaps 0
hard parse (bind mismatch) elapsed 0
PL/SQL execution elapsed time 95
inbound PL/SQL rpc elapsed time 0
PL/SQL compilation elapsed time 0
Java execution elapsed time 0
repeated bind elapsed time 48
RMAN cpu time (backup/restore) 0
Seems to be using a bit of time for the hard parse (hard parse elapsed time). Wait a little while, then re-execute the query:
STAT_NAME VALUE
DB time 9247
DB CPU 9247
background elapsed time 0
background cpu time 0
sequence load elapsed time 0
parse time elapsed 6374
hard parse elapsed time 5997
sql execute elapsed time 2939
connection management call elapsed 1660
failed parse elapsed time 0
failed parse (out of shared memory) 0
hard parse (sharing criteria) elaps 0
hard parse (bind mismatch) elapsed 0
PL/SQL execution elapsed time 95
inbound PL/SQL rpc elapsed time 0
PL/SQL compilation elapsed time 0
Java execution elapsed time 0
repeated bind elapsed time 48
RMAN cpu time (backup/restore) 0
The session is not reporting additional CPU usage or parse time.
Let's check one of the session's statistics:
SELECT
SS.VALUE
FROM
V$SESSTAT SS,
V$STATNAME SN
WHERE
SN.NAME='consistent gets'
AND SN.STATISTIC#=SS.STATISTIC#
AND SS.SID=310;
VALUE
163
Not many consistent gets after 20+ minutes.
Let's take a look at the plan:
SQL> SELECT SQL_ID,CHILD_NUMBER FROM V$SQL WHERE SQL_TEXT LIKE 'select 1 from dual%';
SQL_ID CHILD_NUMBER
04mpgrzhsv72w 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('04mpgrzhsv72w',0,'TYPICAL'))
select 1 from dual where regexp_like (' ','^*[ ]*a')
NOTE: cannot fetch plan for SQL_ID: 04mpgrzhsv72w, CHILD_NUMBER: 0
Please verify value of SQL_ID and CHILD_NUMBER;
It could also be that the plan is no longer in the cursor cache (check v$sql_plan)
No plan...
Let's take a look at the 10053 trace file:
Registered qb: SEL$1 0x19157f38 (PARSER)
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=258 hint_alias="DUAL"@"SEL$1"
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
CBQT: Validity checks failed for 7uqx4guu04x3g.
CVM: Considering view merge in query block SEL$1 (#0)
CBQT: Validity checks failed for 7uqx4guu04x3g.
Subquery Unnest
SU: Considering subquery unnesting in query block SEL$1 (#0)
Set-Join Conversion (SJC)
SJC: Considering set-join conversion in SEL$1 (#0).
Predicate Move-Around (PM)
PM: Considering predicate move-around in SEL$1 (#0).
PM: Checking validity of predicate move-around in SEL$1 (#0).
PM: PM bypassed: Outer query contains no views.
FPD: Considering simple filter push in SEL$1 (#0)
FPD: Current where clause predicates in SEL$1 (#0) :
REGEXP_LIKE (' ','^*[ ]*a')
kkogcp: try to generate transitive predicate from check constraints for SEL$1 (#0)
predicates with check contraints: REGEXP_LIKE (' ','^*[ ]*a')
after transitive predicate generation: REGEXP_LIKE (' ','^*[ ]*a')
finally: REGEXP_LIKE (' ','^*[ ]*a')
apadrv-start: call(in-use=592, alloc=16344), compile(in-use=37448, alloc=42256)
kkoqbc-start
: call(in-use=592, alloc=16344), compile(in-use=38336, alloc=42256)
kkoqbc-subheap (create addr=000000001915C238)
Looks like the query never had a chance to start executing - it is still parsing after 20 minutes.
I am not sure that this is a good example - the query either executes very fast or never gets a chance to start executing. But it might still make your point: physical I/O is not always the problem when performance problems are experienced.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Want help on Citical Tune Query
ADDM suggested
SQL statements consuming significant database time were found.
RECOMMENDATION 1: SQL Tuning, 17% benefit (698 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"2zcnyf9ypc2g7".
RELEVANT OBJECT: SQL statement with SQL_ID 2zcnyf9ypc2g7 and
PLAN_HASH 1170350465
SELECT user_id, relation,status, to_char(ctime, :"SYS_B_0") as
creation_at, to_char(utime, :"SYS_B_1") as last_modified_date
FROM page_user WHERE page_id = :"SYS_B_2" AND upper(relation) =
sys_context(:"SYS_B_3", :"SYS_B_4") AND upper(relation) =
sys_context(:"SYS_B_5", :"SYS_B_6") AND upper(status) =
sys_context(:"SYS_B_7", :"SYS_B_8") order by utime DESC
Recommendations for SQL ID:2zcnyf9ypc2g7
Only one recommendation should be implemented.
SQL Text
SELECT user_id, relation,status, to_char(ctime, :"SYS_B_0") as creation_at, to_char(utime, :"SYS_B_1") as last_modified_date FROM page_user WHERE page_id = :"SYS_B_2" AND upper(relation) = sys_contex...
Select Recommendation
Select Type Findings Recommendations Rationale Benefit (%) New Explain Plan Compare Explain Plans
Restructure SQL The predicate UPPER("PAGE_USER"."RELATION")=SYS_CONTEXT(:B1,:B2) used at line ID 3 of the execution plan contains an expression on indexed column "RELATION". This expression prevents the optimizer from efficiently using indices on table "PAGES"."PAGE_USER". Rewrite the predicate into an equivalent form to take advantage of indices. Alternatively, create a function-based index on the expression. The optimizer is unable to use an index if the predicate is an inequality condition or if there is an expression or an implicit data type conversion on the indexed column.
Text
SELECT user_id, relation,status, to_char(ctime, :"SYS_B_0") as creation_at, to_char(utime, :"SYS_B_1") as last_modified_date
FROM page_user WHERE page_id = :"SYS_B_2" AND upper(relation) = sys_context(:"SYS_B_3", :"SYS_B_4") AND upper(relation) = sys_context(:"SYS_B_5", :"SYS_B_6") AND upper(status) = sys_context(:"SYS_B_7", :"SYS_B_8") order by utime DESC
Details
Select the plan hash value to see the details below. Plan Hash Value
Statistics Activity
Plan
Tuning Information
Data Source Snapshot (4413)
Capture Time 12-Mar-2010 16:30:33
Parsing Schema PAGES
Optimizer Mode ALL_ROWS
Expand All | Collapse All
Operation Object Object Type Order Rows Size (KB) Cost Time (sec) CPU Cost I/O Cost
[Select to collapse] SELECT STATEMENT
5 0 0.000 5 0 0 0
[Select to collapse] FILTER
4 0 0.000 0 0 0 0
[Select to collapse] SORT ORDER BY
3 1 0.039 5 1 19,417,919 4
[Select to collapse] TABLE ACCESS BY INDEX ROWID
PAGE_USER TABLE 2 1 0.039 4 1 37,794 4
INDEX RANGE SCAN
USER_PAGE_UK1 INDEX (UNIQUE) 1 1 0.000 3 1 30,364 3
user8764012 wrote:
Sir,
Asking how to implement the advice provided by ADDM, or whether you have a better suggestion to decrease the cost of the query
SELECT user_id, relation,status, to_char(ctime, :"SYS_B_0") as
creation_at, to_char(utime, :"SYS_B_1") as last_modified_date
FROM page_user WHERE page_id = :"SYS_B_2" AND upper(relation) =
sys_context(:"SYS_B_3", :"SYS_B_4") AND upper(relation) =
sys_context(:"SYS_B_5", :"SYS_B_6") AND upper(status) =
sys_context(:"SYS_B_7", :"SYS_B_8") order by utime DESC
Restructure SQL
The predicate UPPER("PAGE_USER"."RELATION")=SYS_CONTEXT(:B1,:B2) used at line ID 3 of the execution plan contains an expression on
indexed column "RELATION". This expression prevents the optimizer from efficiently using indices on table "PAGES"."PAGE_USER". Rewrite the
predicate into an equivalent form to take advantage of indices. Alternatively, create a function-based index on the expression. The optimizer is unable to use an
index if the predicate is an inequality condition or if there is an expression or an implicit data type conversion on the indexed column.
The message is telling you that although you have an index on RELATION, the index can't be used because you are changing the value with a function. You can do a couple of things:
1. remove the UPPER() function from the RELATION predicates, finding another way to force the columns to match
2. use a function based index, based on upper(relation) something like (untested)
create index fb_index on page_user (upper(relation));
This may or may not help in the long run, as function-based indexes only work when certain initialization parameters are set (QUERY_REWRITE_ENABLED). You can read about function-based indexes in the documentation. -
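As a side note, the effect of an expression-based index can be sketched outside Oracle as well. The snippet below is a minimal illustration using Python's sqlite3 module (SQLite also supports indexes on expressions); the table, column, and index names are made up for the demo and are not from the thread's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE page_user (relation TEXT, user_id INTEGER)")
cur.executemany("INSERT INTO page_user VALUES (?, ?)",
                [("Owner", 1), ("member", 2), ("OWNER", 3)])

# A plain index on the raw column: a predicate on upper(relation)
# cannot use it, so the planner falls back to a full scan.
cur.execute("CREATE INDEX i_rel ON page_user (relation)")
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT user_id FROM page_user WHERE upper(relation) = 'OWNER'"
).fetchall()

# An index on the expression itself (the SQLite analogue of an Oracle
# function-based index): the same predicate now becomes an index search.
cur.execute("CREATE INDEX i_rel_upper ON page_user (upper(relation))")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT user_id FROM page_user WHERE upper(relation) = 'OWNER'"
).fetchall()

print(plan_before)  # plan contains a SCAN step
print(plan_after)   # plan references i_rel_upper
```

The key point carries over: the expression in the index must match the expression in the predicate exactly, which is why ADDM suggests an index on upper(relation) rather than on relation.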
Tuning a query with a LIKE clause
Hi, is there any chance to improve the execution plan for a SQL query that uses a LIKE clause?
Query:
SELECT * FROM [TABLE_NAME] WHERE ADDRESS LIKE :1 ESCAPE '\';
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 112K| 18M| 11361 (1)| 00:02:17 |
|* 1 | TABLE ACCESS FULL| [TABLE_NAME] | 112K| 18M| 11361 (1)| 00:02:17 |
Execution plan is far from ideal. Table has several millions of records.
This query is used by the application to search using patterns...
Any ideas?
Thx in advance!
This example isn't entirely realistic. Your table T has only 1 column, which is also indexed. Apparently, for small enough tables of one column it will search the entire index for the wildcard value. But if I add a second column, or base the single-column version on a larger table, the optimizer uses a full table scan:
SQL> drop table t;
Table dropped.
SQL> CREATE TABLE t AS SELECT DBMS_RANDOM.STRING('a',100) a
2 ,DBMS_RANDOM.STRING('a',100) b
3 FROM user_objects;
Table created.
SQL>
SQL> CREATE INDEX t_idx ON t (a) COMPUTE STATISTICS;
Index created.
SQL>
SQL> SET autotrace traceonly explain
SQL>
SQL> SELECT *
2 FROM t
3 WHERE a LIKE '%acb%';
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=100)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=100)
SQL> drop table t;
Table dropped.
SQL> CREATE TABLE t AS SELECT DBMS_RANDOM.STRING('a',100) a
2 -- ,DBMS_RANDOM.STRING('a',100) b
3 FROM all_objects;
Table created.
SQL>
SQL> CREATE INDEX t_idx ON t (a) COMPUTE STATISTICS;
Index created.
SQL>
SQL> SET autotrace traceonly explain
SQL>
SQL> SELECT *
2 FROM t
3 WHERE a LIKE '%acb%';
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=20 Card=399 Bytes=39900)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=20 Card=399 Bytes=39900) -
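The full-scan plans above illustrate a general point: a B-tree index can satisfy LIKE 'abc%' (a range scan on the prefix) but not LIKE '%abc%', where the leading wildcard defeats the index. Here is a small sketch of the same behaviour in SQLite via Python; the table and data are synthetic.

```python
import sqlite3
import random
import string

conn = sqlite3.connect(":memory:")
# Needed for SQLite's LIKE prefix optimisation on a BINARY-collated index.
conn.execute("PRAGMA case_sensitive_like = ON")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a TEXT)")
cur.executemany(
    "INSERT INTO t VALUES (?)",
    [("".join(random.choices(string.ascii_letters, k=100)),)
     for _ in range(1000)])
cur.execute("CREATE INDEX t_idx ON t (a)")

# Prefix pattern: the planner can turn this into a range SEARCH on t_idx.
prefix_plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a LIKE 'acb%'").fetchall()

# Leading wildcard: every row (or index entry) must be scanned.
middle_plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE a LIKE '%acb%'").fetchall()

print(prefix_plan)  # SEARCH using t_idx
print(middle_plan)  # SCAN: the leading % defeats the index
```

For genuine substring searching on large tables, the usual Oracle-side answer is a text index (e.g. Oracle Text) rather than a B-tree index.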
Hi,
I need to improve the performance of the query below; its cost is very high:
SELECT
date_
, daycount
, id_sub
, type
, status
, count(*) total
FROM
(
SELECT
trunc(a.date_) date_
, trunc(a.date_) - trunc(c.date_sub) daycount
, c.id id_sub
, CASE substr(a.service, -8)
WHEN '##rebill' THEN 2
ELSE 1
END type
, a.status
FROM temp_stg_mt_bos a
INNER JOIN dw_users b ON (b.msisdn = a.msisdn AND b.operator = a.operator)
INNER JOIN dw_subs c ON (c.id_user = b.id AND c.date_sub < a.date_)
INNER JOIN dw_crmcodes d ON
d.crmcode = c.crmcode AND
(a.service = 'mt_subs_cat' AND (a.date_ >= c.date_sub AND a.date_ < (c.date_sub + (interval '30' second))) OR
(a.service <> 'mt_subs_cat' AND d.listname = substr(a.service, 1, instr(a.service, '#') - 1)))
WHERE NOT
(lower(a.operator) = 'personal') AND (lower(trim(a.shortcode)) <> 'bos-billing')
)
GROUP BY date_, daycount, id_sub, type, status
The problem is the WHERE NOT condition,
could anyone give me any idea to improve the performance.
Thanks in advance :)
Ok,
the explain plan is
PLAN_TABLE_OUTPUT
Plan hash value: 2386324151
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 32 | 6944 | | 13247 (1)| 00:02:39 |
| 1 | SORT GROUP BY | | 32 | 6944 | | | |
| 2 | CONCATENATION | | | | | | |
|* 3 | HASH JOIN | | 1 | 217 | 6720K| 8455 (1)| 00:01:42 |
|* 4 | HASH JOIN | | 33400 | 6327K| | 3481 (2)| 00:00:42 |
| 5 | TABLE ACCESS FULL | DW_CRMCODES | 1727 | 50083 | | 11 (0)| 00:00:01 |
|* 6 | HASH JOIN | | 2437 | 392K| | 3469 (1)| 00:00:42 |
|* 7 | TABLE ACCESS FULL | TEMP_STG_MT_BOS | 2543 | 340K| | 2310 (1)| 00:00:28 |
| 8 | TABLE ACCESS FULL | DW_USERS | 917K| 24M| | 1153 (1)| 00:00:14 |
| 9 | TABLE ACCESS FULL | DW_SUBS | 1359K| 29M| | 2001 (1)| 00:00:25 |
| 10 | NESTED LOOPS | | 1 | 217 | | 4791 (1)| 00:00:58 |
| 11 | NESTED LOOPS | | 1 | 188 | | 4790 (1)| 00:00:58 |
| 12 | NESTED LOOPS | | 487 | 80355 | | 3328 (1)| 00:00:40 |
|* 13 | TABLE ACCESS FULL | TEMP_STG_MT_BOS | 509 | 69733 | | 2309 (1)| 00:00:28 |
| 14 | TABLE ACCESS BY INDEX ROWID| DW_USERS | 1 | 28 | | 2 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | DW_USERS_UK1 | 1 | | | 1 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID | DW_SUBS | 1 | 23 | | 3 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | DW_SUBS_UK1 | 1 | | | 2 (0)| 00:00:01 |
|* 18 | TABLE ACCESS BY INDEX ROWID | DW_CRMCODES | 1 | 29 | | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | DW_CRMCODES_PK | 1 | | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("D"."CRMCODE"="C"."CRMCODE" AND "C"."ID_USER"="B"."ID")
filter("C"."DATE_SUB"<"A"."DATE_")
4 - access("D"."LISTNAME"=SUBSTR("A"."SERVICE",1,INSTR("A"."SERVICE",'#')-1))
6 - access("B"."MSISDN"="A"."MSISDN" AND "B"."OPERATOR"="A"."OPERATOR")
7 - filter("A"."SERVICE"<>'mt_subs_cat' AND (LOWER("A"."OPERATOR")<>'personal' OR
LOWER(TRIM("A"."SHORTCODE"))='bos-billing'))
13 - filter("A"."SERVICE"='mt_subs_cat' AND (LOWER("A"."OPERATOR")<>'personal' OR
LOWER(TRIM("A"."SHORTCODE"))='bos-billing'))
15 - access("B"."OPERATOR"="A"."OPERATOR" AND "B"."MSISDN"="A"."MSISDN")
17 - access("C"."ID_USER"="B"."ID" AND "A"."DATE_">="C"."DATE_SUB")
filter("C"."DATE_SUB"<"A"."DATE_" AND "A"."DATE_">="C"."DATE_SUB" AND
"A"."DATE_"<"C"."DATE_SUB"+INTERVAL'+00 00:00:30.000000' DAY(2) TO SECOND(6))
18 - filter(LNNVL("D"."LISTNAME"=SUBSTR("A"."SERVICE",1,INSTR("A"."SERVICE",'#')-1)) OR
LNNVL("A"."SERVICE"<>'mt_subs_cat'))
19 - access("D"."CRMCODE"="C"."CRMCODE")
Note
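The CONCATENATION step in the plan above is Oracle's OR-expansion: the optimizer splits the OR'd join condition into two branches and uses LNNVL() predicates (operation 18) to keep the branches disjoint. The same rewrite can be done by hand with UNION ALL. The sketch below (Python + sqlite3, with a made-up table, since the thread's tables aren't available) only demonstrates that the two forms return identical rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE msg (service TEXT, status TEXT)")
cur.executemany("INSERT INTO msg VALUES (?, ?)", [
    ("mt_subs_cat", "ok"), ("foo#bar", "ok"),
    ("mt_subs_cat", "err"), ("baz#qux", "err"),
])

# Original form: two disjuncts OR'd together in one WHERE clause.
or_rows = cur.execute(
    "SELECT service, status FROM msg "
    "WHERE service = 'mt_subs_cat' OR service LIKE '%#%' "
    "ORDER BY service, status").fetchall()

# Manual OR-expansion: one branch per disjunct, with an explicit
# exclusion (the role LNNVL plays in Oracle) so no row appears twice.
union_rows = cur.execute(
    "SELECT service, status FROM msg "
    "WHERE service = 'mt_subs_cat' "
    "UNION ALL "
    "SELECT service, status FROM msg "
    "WHERE service LIKE '%#%' AND NOT service = 'mt_subs_cat' "
    "ORDER BY service, status").fetchall()

print(or_rows == union_rows)  # True
```

Rewriting the OR by hand like this can help when the optimizer's own OR-expansion picks a poor branch order, since each UNION ALL branch can then use its own index.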
Tuning a 464-line query.
Hello everybody, I need help.
I'm trying to use DBMS_SQLTUNE.CREATE_TUNING_TASK, but my query is 464 lines long and is difficult to work with.
Has anyone used DBMS_SQLTUNE.CREATE_TUNING_TASK with a query of 464 lines?
Thanks,
Daniel
I don't think the Advanced Queueing forum is the right place for this one - try SQL/PLSQL for future reference.
I presume you have read the doc?
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_sqltun.htm#CHEBEBJJ
Thanks
Paul