Query tune
Hi all,
Can I tune this query any further?
TKPROF: Release 10.2.0.1.0 - Production on Thu Jan 14 15:59:21 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: treasury_ora_15244.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
select a.grant_no,grantnamehindi,a.p_nplan,totalbudgetprovision,progressive,pro,mon from vbmst_pnplan a,vddoalt_pnplan b,
(select grant_no, p_nplan,nvl(sum(case when p_nplan='N' and v_date between '01-APR-09' and '14-JAN-10' then gross_amt end),0)pro,nvl(sum(case when p_nplan='N' and v_date between '01-JAN-10' and '14-JAN-10' then gross_amt end),0)mon
from bill_ent where p_nplan='N' group by grant_no,p_nplan
union
select grant_no, p_nplan,nvl(sum(case when p_nplan='P' and v_date between '01-APR-09' and '14-JAN-10' then gross_amt end),0),nvl(sum(case when p_nplan='P' and v_date between '01-JAN-10' and '14-JAN-10' then gross_amt end),0)pmon
from bill_ent where p_nplan='P' group by grant_no,p_nplan
)c,grants d
where a.fin_year='20092010' and a.fin_year=b.fin_year
and a.grant_no=b.grant_no and a.p_nplan=b.p_nplan and b.grant_no=c.grant_no and c.p_nplan=b.p_nplan and c.grant_no=d.grantcode
and d.grantcode not in('PAC','REC','PAY')
order by 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 5 7.20 7.03 0 613294 0 58
total 7 7.21 7.04 0 613294 0 58
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61 (TREASURY)
Rows Row Source Operation
58 SORT ORDER BY (cr=613294 pr=0 pw=0 time=7039475 us)
58 NESTED LOOPS (cr=613294 pr=0 pw=0 time=7039879 us)
58 HASH JOIN (cr=613234 pr=0 pw=0 time=7039048 us)
66 HASH JOIN (cr=10828 pr=0 pw=0 time=140022 us)
66 VIEW VBMST_PNPLAN (cr=3774 pr=0 pw=0 time=84542 us)
66 SORT UNIQUE (cr=3774 pr=0 pw=0 time=84541 us)
66 UNION-ALL (cr=3774 pr=0 pw=0 time=42598 us)
33 HASH GROUP BY (cr=1887 pr=0 pw=0 time=42461 us)
10522 TABLE ACCESS BY INDEX ROWID BUDGET_MAST (cr=1887 pr=0 pw=0 time=10543 us)
68266 INDEX FULL SCAN GRANT_BM (cr=204 pr=0 pw=0 time=14 us)(object id 78670)
33 HASH GROUP BY (cr=1887 pr=0 pw=0 time=41827 us)
10522 TABLE ACCESS BY INDEX ROWID BUDGET_MAST (cr=1887 pr=0 pw=0 time=10543 us)
68266 INDEX FULL SCAN GRANT_BM (cr=204 pr=0 pw=0 time=15 us)(object id 78670)
66 VIEW VDDOALT_PNPLAN (cr=7054 pr=0 pw=0 time=54893 us)
66 UNION-ALL (cr=7054 pr=0 pw=0 time=54824 us)
33 HASH GROUP BY (cr=3527 pr=0 pw=0 time=54654 us)
33100 TABLE ACCESS FULL DDO_ALT (cr=3527 pr=0 pw=0 time=91 us)
33 HASH GROUP BY (cr=3527 pr=0 pw=0 time=55675 us)
33100 TABLE ACCESS FULL DDO_ALT (cr=3527 pr=0 pw=0 time=93 us)
115 VIEW (cr=602406 pr=0 pw=0 time=6842054 us)
115 SORT UNIQUE (cr=602406 pr=0 pw=0 time=6842051 us)
115 UNION-ALL (cr=602406 pr=0 pw=0 time=3960875 us)
72 HASH GROUP BY (cr=302739 pr=0 pw=0 time=3960596 us)
1341509 TABLE ACCESS BY INDEX ROWID BILL_ENT (cr=302739 pr=0 pw=0 time=2683066 us)
1699329 INDEX FULL SCAN GRANTS_BE (cr=6288 pr=0 pw=0 time=13594657 us)(object id 117107)
43 HASH GROUP BY (cr=299667 pr=0 pw=0 time=2881209 us)
357820 TABLE ACCESS BY INDEX ROWID BILL_ENT (cr=299667 pr=0 pw=0 time=4300648 us)
1699329 INDEX FULL SCAN GRANTS_BE (cr=4708 pr=0 pw=0 time=13594646 us)(object id 117107)
58 TABLE ACCESS BY INDEX ROWID GRANTS (cr=60 pr=0 pw=0 time=430 us)
58 INDEX UNIQUE SCAN SYS_C005950 (cr=2 pr=0 pw=0 time=198 us)(object id 52607)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
58 SORT (ORDER BY)
58 NESTED LOOPS
58 HASH JOIN
66 HASH JOIN
66 VIEW OF 'VBMST_PNPLAN' (VIEW)
66 SORT (UNIQUE)
66 UNION-ALL
33 HASH (GROUP BY)
10522 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID)
OF 'BUDGET_MAST' (TABLE)
68266 INDEX MODE: ANALYZED (FULL SCAN) OF
'GRANT_BM' (INDEX)
33 HASH (GROUP BY)
10522 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID)
OF 'BUDGET_MAST' (TABLE)
68266 INDEX MODE: ANALYZED (FULL SCAN) OF
'GRANT_BM' (INDEX)
66 VIEW OF 'VDDOALT_PNPLAN' (VIEW)
66 UNION-ALL
33 HASH (GROUP BY)
33100 TABLE ACCESS MODE: ANALYZED (FULL) OF 'DDO_ALT'
(TABLE)
33 HASH (GROUP BY)
33100 TABLE ACCESS MODE: ANALYZED (FULL) OF 'DDO_ALT'
(TABLE)
115 VIEW
115 SORT (UNIQUE)
115 UNION-ALL
72 HASH (GROUP BY)
1341509 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'BILL_ENT' (TABLE)
1699329 INDEX MODE: ANALYZED (FULL SCAN) OF
'GRANTS_BE' (INDEX)
43 HASH (GROUP BY)
357820 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'BILL_ENT' (TABLE)
1699329 INDEX MODE: ANALYZED (FULL SCAN) OF
'GRANTS_BE' (INDEX)
58 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'GRANTS'
(TABLE)
58 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C005950'
(INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 5 0.00 0.00
SQL*Net message from client 5 3.30 8.51
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 5 7.20 7.03 0 613294 0 58
total 7 7.21 7.04 0 613294 0 58
Misses in library cache during parse: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 6 0.00 0.00
SQL*Net message from client 5 3.30 8.51
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 10 0.00 0.06 0 0 18 0
Execute 177 0.03 0.04 0 0 0 0
Fetch 187 0.00 0.10 25 550 0 1397
total 374 0.04 0.20 25 550 18 1397
Misses in library cache during parse: 9
Misses in library cache during execute: 9
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 25 0.01 0.09
1 user SQL statements in session.
177 internal SQL statements in session.
178 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: treasury_ora_15244.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
1 user SQL statements in trace file.
177 internal SQL statements in trace file.
178 SQL statements in trace file.
10 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
TREASURY.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
1633 lines in trace file.
14 elapsed seconds in trace file.
Thanks
Thanks for posting the tkprof file.
You can tune this query by rewriting the inline view c, because it accesses the bill_ent table twice where one pass will do just fine:
select a.grant_no
, grantnamehindi
, a.p_nplan
, totalbudgetprovision
, progressive
, pro
, mon
from vbmst_pnplan a
, vddoalt_pnplan b
, ( select grant_no
, p_nplan
, nvl(sum(case when p_nplan in ('N','P') and v_date between date '2009-04-01' and date '2010-01-14' then gross_amt end),0) pro
, nvl(sum(case when p_nplan in ('N','P') and v_date between date '2010-01-01' and date '2010-01-14' then gross_amt end),0) mon
from bill_ent
where p_nplan in ('N','P')
group by grant_no
, p_nplan
) c
, grants d
where a.fin_year='20092010'
and a.fin_year=b.fin_year
and a.grant_no=b.grant_no
and a.p_nplan=b.p_nplan
and b.grant_no=c.grant_no
and c.p_nplan=b.p_nplan
and c.grant_no=d.grantcode
and d.grantcode not in('PAC','REC','PAY')
order by 1

By this rewrite, your query will perform almost twice as fast. That's not good enough yet. The question is: how many rows are in your bill_ent table? Unless it is huge - say, more than 100M rows - you need this plan to stop using the GRANTS_BE index. To determine how to achieve this, can you post the output of these statements?
explain plan for <your query>;
select * from table(dbms_xplan.display);

Possible solutions might be to drop the index, to add a FULL hint to the query, or to add a histogram. But to determine that, we need to know the number of rows and the explain plan, including the predicate information section.
Regards,
Rob.
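As a sanity check on the rewrite suggested above, here is a small sketch in Python/sqlite3 (not Oracle; the table contents are invented) showing that the UNION of the two single-branch aggregations and the single conditional aggregation return the same rows:

```python
import sqlite3

# Sketch (sqlite3, not Oracle; data invented) comparing the original two-pass
# UNION against the suggested single-pass conditional aggregation.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE bill_ent (grant_no TEXT, p_nplan TEXT, v_date TEXT, gross_amt REAL);
INSERT INTO bill_ent VALUES
  ('G1','N','2009-05-10',100), ('G1','N','2010-01-05',40),
  ('G1','P','2009-06-01',200), ('G2','P','2010-01-10',70),
  ('G2','N','2008-01-01',999);  -- outside the reporting window
""")

prog = "v_date BETWEEN '2009-04-01' AND '2010-01-14'"  # progressive window
mon  = "v_date BETWEEN '2010-01-01' AND '2010-01-14'"  # current-month window

# Original shape: one aggregation pass per p_nplan value, glued with UNION.
two_pass = con.execute(f"""
    SELECT grant_no, p_nplan,
           COALESCE(SUM(CASE WHEN {prog} THEN gross_amt END), 0) pro,
           COALESCE(SUM(CASE WHEN {mon}  THEN gross_amt END), 0) mon
    FROM bill_ent WHERE p_nplan = 'N' GROUP BY grant_no, p_nplan
    UNION
    SELECT grant_no, p_nplan,
           COALESCE(SUM(CASE WHEN {prog} THEN gross_amt END), 0),
           COALESCE(SUM(CASE WHEN {mon}  THEN gross_amt END), 0)
    FROM bill_ent WHERE p_nplan = 'P' GROUP BY grant_no, p_nplan
    ORDER BY 1, 2""").fetchall()

# Rewrite: a single pass; grouping on p_nplan keeps the branches separate.
one_pass = con.execute(f"""
    SELECT grant_no, p_nplan,
           COALESCE(SUM(CASE WHEN {prog} THEN gross_amt END), 0) pro,
           COALESCE(SUM(CASE WHEN {mon}  THEN gross_amt END), 0) mon
    FROM bill_ent WHERE p_nplan IN ('N', 'P')
    GROUP BY grant_no, p_nplan ORDER BY 1, 2""").fetchall()

print(two_pass == one_pass)  # True: one pass over bill_ent suffices
```

The two result sets are identical because the GROUP BY key includes p_nplan, so the 'N' and 'P' groups never merge; the UNION was only stitching disjoint groups back together.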
Similar Messages
-
Hi Guru's
Can you please help me with query tuning?
Database Version : Oracle 11g - 11.2.0.3
select distinct corporation_name custer_name,
glog_util.remove_domain(SHIP_BUY.SERVPROV_GID ) SCAC,
glog_util.remove_domain(ship_buy.shipment_gid) buy_shipment_gid,
F_Get_SELL_ID_STRING(SHIP_BUY.SHIPMENT_GID) sell_shipment_gid,
ship_buy.domain_name,
F_GET_ORDER_RELEASE_GID('B',SHIP_BUY.SHIPMENT_GID,0) ORDER_RELEASE_GID,
f_get_refnum_string('SHIPMENT', ship_buy.shipment_gid, 'MBOL_NUMBER_CLEANSED')MBOL_NUMBER,
F_GET_POD_RECEIVED_DATE (ship_BUY.SHIPMENT_GID) POD_RECEIVED_DATE,
f_get_exp_accrue_amt(ship_buy.shipment_gid,'SHIPMENT') Total_accrual_amount
from shipment ship_buy,
invoice inv,
invoice_shipment si,
--voucher v,
corporation corp
where corp.domain_name=ship_buy.domain_name
and corp.is_domain_master='Y'
and 1=1
AND ship_buy.domain_name like 'UPS/CP/DFP/%'
and F_GET_POD_RECEIVED_DATE (ship_BUY.SHIPMENT_GID) <= to_char(to_date('31-JUL-2013', 'DD-MON-YYYY'), 'dd-mon-yyyy')
--and V.INVOICE_GID(+) = inv.invoice_gid
--and ship_buy.domain_name = 'UPS/CP/VZNB'
and si.shipment_gid(+) = SHIP_BUY.SHIPMENT_GID
AND SI.INVOICE_GID = INV.INVOICE_GID(+)
and SHIP_BUY.INSERT_DATE > '1-JAN-2007'
and SHIP_BUY.USER_DEFINED1_ICON_GID = 'ACCEPTED'
UNION
select distinct corporation_name custer_name,
glog_util.remove_domain(SHIP_BUY.SERVPROV_GID ) SCAC,
glog_util.remove_domain(ship_buy.shipment_gid) buy_shipment_gid,
F_GET_SELL_ID_STRING( SHIP_BUY.SHIPMENT_GID) sell_shipment_gid,
ship_buy.domain_name,
F_GET_ORDER_RELEASE_GID('B',SHIP_BUY.SHIPMENT_GID,0) ORDER_RELEASE_GID,
f_get_refnum_string('SHIPMENT', ship_buy.shipment_gid, 'MBOL_NUMBER_CLEANSED')MBOL_NUMBER,
F_GET_POD_RECEIVED_DATE (ship_BUY.SHIPMENT_GID) POD_RECEIVED_DATE,
f_get_exp_accrue_amt(inv.invoice_gid,'INVOICE') Total_accrual_amount
from shipment ship_buy,
invoice inv,
invoice_shipment si,
-- voucher v,
corporation corp
where corp.domain_name=ship_buy.domain_name
and corp.is_domain_master='Y'
and 1=1
AND ship_buy.domain_name like 'UPS/CP/DFP/%'
and F_GET_POD_RECEIVED_DATE (ship_BUY.SHIPMENT_GID) <= to_char(to_date('31-JUL-2013', 'DD-MON-YYYY'), 'dd-mon-yyyy')
--AND INV.DOMAIN_NAME = 'UPS/CP/VZNB'
--and V.INVOICE_GID(+) = inv.invoice_gid
and si.shipment_gid(+) = SHIP_BUY.SHIPMENT_GID
AND SI.INVOICE_GID = INV.INVOICE_GID(+)
and SHIP_BUY.INSERT_DATE > '1-JAN-2007'
and INV.USER_DEFINED1_ICON_GID = 'ACCEPTED'
GROUP BY corporation_name,SHIP_BUY.SHIPMENT_GID,SHIP_BUY.SERVPROV_GID,ship_buy.domain_name,inv.invoice_gid
ORDER BY CUSTER_NAME, BUY_SHIPMENT_GID;
And I generated the execution plan :
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 3 | 448 | 415 (2)| 00:00:05 |
| 1 | SORT UNIQUE | | 3 | 448 | 414 (87)| 00:00:05 |
| 2 | UNION-ALL | | | | | |
| 3 | NESTED LOOPS OUTER | | 3 | 384 | 57 (0)| 00:00:01 |
|* 4 | HASH JOIN | | 3 | 294 | 54 (0)| 00:00:01 |
|* 5 | TABLE ACCESS BY INDEX ROWID | SHIPMENT | 3 | 195 | 40 (0)| 00:00:01 |
|* 6 | INDEX SKIP SCAN | IND_SHIP_DOM_ICON | 54 | | 25 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | CORPORATION | 4 | 132 | 14 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | IND_INVOICESHIP_SHP_GID | 1 | 30 | 1 (0)| 00:00:01 |
| 9 | HASH GROUP BY | | 1 | 192 | 356 (1)| 00:00:05 |
|* 10 | HASH JOIN | | 1 | 192 | 354 (1)| 00:00:05 |
| 11 | NESTED LOOPS | | | | | |
| 12 | NESTED LOOPS | | 1 | 159 | 339 (0)| 00:00:05 |
| 13 | NESTED LOOPS | | 145 | 13920 | 194 (0)| 00:00:03 |
| 14 | TABLE ACCESS BY INDEX ROWID| INVOICE | 145 | 5220 | 49 (0)| 00:00:01 |
|* 15 | INDEX SKIP SCAN | IDX_INV_TYP_ICON_NAM | 145 | | 17 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | UK_INVOICE_SHIPMENT | 1 | 60 | 1 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | PK_SHIPMENT | 1 | | 1 (0)| 00:00:01 |
|* 18 | TABLE ACCESS BY INDEX ROWID | SHIPMENT | 1 | 63 | 1 (0)| 00:00:01 |
|* 19 | TABLE ACCESS FULL | CORPORATION | 4 | 132 | 14 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("CORP"."DOMAIN_NAME"="SHIP_BUY"."DOMAIN_NAME")
5 - filter("F_GET_POD_RECEIVED_DATE"("SHIP_BUY"."SHIPMENT_GID")<=TO_DATE(' 2013-07-31 00:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "SHIP_BUY"."INSERT_DATE">TO_DATE(' 2007-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
6 - access("SHIP_BUY"."USER_DEFINED1_ICON_GID"='ACCEPTED' AND "SHIP_BUY"."DOMAIN_NAME" LIKE
'UPS/CP/DFP/%')
filter("SHIP_BUY"."DOMAIN_NAME" LIKE 'UPS/CP/DFP/%' AND
"SHIP_BUY"."USER_DEFINED1_ICON_GID"='ACCEPTED')
7 - filter("CORP"."IS_DOMAIN_MASTER"='Y' AND "CORP"."DOMAIN_NAME" LIKE 'UPS/CP/DFP/%')
8 - access("SI"."SHIPMENT_GID"(+)="SHIP_BUY"."SHIPMENT_GID")
10 - access("CORP"."DOMAIN_NAME"="SHIP_BUY"."DOMAIN_NAME")
15 - access("INV"."USER_DEFINED1_ICON_GID"='ACCEPTED')
filter("INV"."USER_DEFINED1_ICON_GID"='ACCEPTED')
16 - access("SI"."INVOICE_GID"="INV"."INVOICE_GID")
17 - access("SI"."SHIPMENT_GID"="SHIP_BUY"."SHIPMENT_GID")
filter("F_GET_POD_RECEIVED_DATE"("SHIP_BUY"."SHIPMENT_GID")<=TO_DATE(' 2013-07-31 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))
18 - filter("SHIP_BUY"."DOMAIN_NAME" LIKE 'UPS/CP/DFP/%' AND "SHIP_BUY"."INSERT_DATE">TO_DATE('
2007-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
19 - filter("CORP"."IS_DOMAIN_MASTER"='Y' AND "CORP"."DOMAIN_NAME" LIKE 'UPS/CP/DFP/%')
Statistics
246247 recursive calls
2 db block gets
1660067 consistent gets
13839 physical reads
0 redo size
592054 bytes sent via SQL*Net to client
6024 bytes received via SQL*Net from client
502 SQL*Net roundtrips to/from client
15296 sorts (memory)
0 sorts (disk)
7513 rows processed

Hmmm...why does this look familiar?
F_GET_POD_RECEIVED_DATE (ship_BUY.SHIPMENT_GID) <= to_char(to_date('31-JUL-2013', 'DD-MON-YYYY'), 'dd-mon-yyyy')
SHIP_BUY.INSERT_DATE > '1-JAN-2007'
Like I said in your other thread about this, these two lines need to be fixed and your function needs to be fixed so the return statement doesn't do an implicit date conversion.
Can't you see what that first line is doing? You're taking a character string, turning it into a date, then back to a character string.
If nothing else, these lines should be...
F_GET_POD_RECEIVED_DATE (ship_BUY.SHIPMENT_GID) <= to_date('31-JUL-2013', 'DD-MON-YYYY')
SHIP_BUY.INSERT_DATE > to_date('01-JAN-2007','DD-MON-YYYY')
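To see why the implicit conversion matters, here is a tiny plain-Python illustration (not Oracle): dates compared as 'DD-MON-YYYY' strings do not sort chronologically, while real date values do.

```python
from datetime import date

# Compared as strings, '02-FEB-2008' sorts BEFORE '1-JAN-2007'
# because '0' < '1' -- chronologically wrong.
print('02-FEB-2008' > '1-JAN-2007')         # False (wrong answer)

# Compared as real dates, the ordering is correct.
print(date(2008, 2, 2) > date(2007, 1, 1))  # True
```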
(assuming insert_date is a proper DATE column, fingers crossed) -
Hi,
I have a RAC system with 2 nodes (Oracle 10.2.0.1.0, Linux RedHat 4).
I have a simple query that takes a long time, but on the same server with a single-instance database it's fast!!??
here is the query:
select count(*) as y0_, this_.NOTIFICATION_TYPE as y1_
from NOTIFICATION this_
where this_.OPCO_ID=1
and this_.SENTDATE is null
and this_.UPDATED_TIME is null
and this_.CREATED_TIME>=sysdate
and this_.CREATED_TIME<=sysdate
group by this_.NOTIFICATION_TYPE
order by this_.NOTIFICATION_TYPE asc
With AWR I can see that a lot of time is lost in gc_buffer_busy_wait!!
NB: I have a BLOB in my table.
How can I solve my problem?
Message was edited by:
vittel

You have hit one of the many features of RAC. On a single-instance database a piece of data is either in its SGA in local memory, or out on disk. On a multi-instance RAC database a piece of data can now be in the local SGA memory, in the remote SGA memory of the other node, or out on disk.
Given that Oracle assumes that it is quicker to go across the network interconnect between the 2 RAC nodes to retrieve the copy of the data from the other SGA than it is to go all the way out to disk to read that data page, Oracle RAC is biased to using copies of data blocks in the SGAs of the other RAC nodes.
You cannot stop this. Oracle RAC is doing as it is intended to. Either tune and improve the performance of the interconnect between your 2 nodes. Are you using 100 baseT or 1000 baseT or something even faster? It is the latency that is the issue, not the bandwidth.
Or stop using RAC, and go back to just one database instance.
John -
Hi,
I have a query which is used in one of our reports and accesses 11 tables via the Oracle Reports application. When users run this report it takes around 4 minutes to complete, and we need it to run around 70 times a day (70 reports in a day).
We provide the registration number; based on it, the report generates the details of the owner, address, registration details, permits etc.
We have more than 1 GB of SGA, of which the shared pool has nearly 750 MB.
Select
pt.ppr_pert_no PERT_NO,
gm.GOF_OFF_NM TRAN_A,
(ROW_FST_NAME || ' ' || ROW_MID_NAME || ' ' ||
ROW_LST_NAME) NAME,
ROW_FH_NAME G_NAME,
rad_addr_ln1 addr1,
rad_addr_ln2 addr2,
rad_addr_ln3 addr3,
RAD_CITY city,
tm.tsn_state_nm state,
nvl( fn_get_rte_nm(prt_off_cd, prt_route) , '-' )
ROUTE,
rt.RVD_REGN_NO re_no,
rt.RVD_MKR_NAME make,
rt.RVD_MKR_CLAS model,
rt.RVD_FUEL petrol,
rt.RVD_CHAS_NO chasis,
rt.RVD_ENG_NO engine,
pd.ppd_oper_dt dt,
pd.ppd_fr_mt_no FARE_MNO,
pd.ppd_fr_mt_mk FARE_MK,
( nvl(rt.RVD_STNG_CAP,0) || '+' ||
nvl(rt.RVD_DC_CAP, 0)) S_CAPACITY,
pt.ppr_exp_dt EXPIRY_DT,
pd.ppd_fare_rt RATE_FR,
pd.ppd_oper_flg
OPER_TYP,
pt.ppr_pert_typ PERT_TYP
FROM
gm_off_dtl gm,
pt_permits pt,
pt_per_dtls pd,
pt_per_route pr,
pm_route pm,
rt_breg rb,
rt_veh_dtl rt,
rt_owner po,
rt_address pa,
tm_state_nm tm
WHERE GOF_OFF_CD = (select gpr_value from gm_prm_chr
where gpr_type = 'G_OF_CODE' )
AND pt.ppr_off_cd = GOF_OFF_CD
and pt.ppr_pert_no = :per_no
and pt.ppr_pert_st in ('PDG', 'VAL')
and pt.ppr_pert_typ in
('CCP','DP','BSP','TPCC','TCP')
and pd.ppd_off_cd = pt.ppr_off_cd
and pd.ppd_pert_no = pt.ppr_pert_no
and pd.ppd_pd_st = 'N'
and rt.RVD_REGN_NO = pd.ppd_regn_no
and pr.pru_off_cd(+) = pd.ppd_off_cd
and pr.pru_pert_dtl(+) = pd.ppd_pert_dtl
and pm.prt_route (+) = pr.pru_route
and rb.rbr_regn_no = rt.rvd_regn_no
and rb.rbr_active = 'A'
and ROW_OFF_CD = pd.PPD_OFF_CD
AND po.ROW_OWNER_ID = pd.ppd_owner
and RAD_OFF_CD = ROW_OFF_CD
and pa.rad_addr_id = po.row_addr_id
and (tm.tsn_state_nm = pa.rad_state or
tm.tsn_state_cd = pa.RAD_STATE)
group by
pt.ppr_pert_no ,
gm.GOF_OFF_NM ,
(ROW_FST_NAME || ' ' || ROW_MID_NAME || ' ' ||
ROW_LST_NAME),
ROW_FH_NAME ,
RAD_ADDR_LN1 ,
RAD_ADDR_LN2 ,
RAD_ADDR_LN3 ,
RAD_CITY ,
tm.tsn_state_nm ,
nvl( fn_get_rte_nm(prt_off_cd, prt_route) , '-' ) ,
rt.RVD_REGN_NO ,
rt.RVD_MKR_NAME ,
rt.RVD_MKR_CLAS ,
rt.RVD_FUEL ,
rt.RVD_CHAS_NO ,
rt.RVD_ENG_NO ,
pd.ppd_fr_mt_no ,
pd.ppd_fr_mt_mk ,
( nvl(rt.RVD_STNG_CAP,0) || '+' ||
nvl(rt.RVD_DC_CAP, 0)) ,
pt.ppr_exp_dt ,
pd.ppd_fare_rt ,
pd.ppd_oper_dt,
pd.ppd_oper_flg ,
pt.ppr_pert_typ
How can this query be tuned, or is there any other way to do it? Here are the details:
rt_owner, rt_address and tm_state_nm have around 1,50,234 records.
pt_per_route, pt_per_dtls, pt_permits, rt_breg and pm_route have around 2 lakh records each.
The report is based on the registration number and displays all the details pertaining to that registration: the owner's name, father's name, address, vehicle type, class, manufacturer, engine no, chassis no, permit obtained date, permit expiry date, permit fare etc.; if a permit was obtained more than twice, those details are also shown.
Oracle version 8.1.7 / Windows 2000 Server.
The optimizer mode is rule-based. -
Query:
SELECT SUM(TF.DURATION) TIME_ON_DAYS,
TO_NUMBER(TO_CHAR(TD.calendar_date, 'YYYYMM')) MONTH_WID,
TF.THERAPY_AREA_WID,
TF.BUSINESS_CYCLE_WID,
TF.SR_POSITION_ROW_WID,
TF.SR_POSITION_DH_WID
FROM NNOLAP.WC_TIME_REPORT_F TF,
NNOLAP.W_DAY_D TD
WHERE TF.date_wid = TD.row_wid
AND TF.type = 'Time On Territory'
AND TD.day_name NOT IN ('Saturday', 'Sunday')
GROUP BY TO_NUMBER(TO_CHAR(TD.calendar_date, 'YYYYMM')),
TF.THERAPY_AREA_WID,
TF.BUSINESS_CYCLE_WID,
TF.USR_LOGIN,
TF.SR_POSITION_ROW_WID,
TF.SR_POSITION_DH_WID
Explain plan:
PLAN_TABLE_OUTPUT
Plan hash value: 2978986116
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 992K| 71M| | 16653 (1)| 00:03:54 | | | |
| 1 | PX COORDINATOR | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10005 | 992K| 71M| | 16653 (1)| 00:03:54 | Q1,05 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 992K| 71M| 91M| 16653 (1)| 00:03:54 | Q1,05 | PCWP | |
| 4 | PX RECEIVE | | 992K| 71M| | 5581 (2)| 00:01:19 | Q1,05 | PCWP | |
| 5 | PX SEND HASH | :TQ10004 | 992K| 71M| | 5581 (2)| 00:01:19 | Q1,04 | P->P | HASH |
|* 6 | HASH JOIN | | 992K| 71M| | 5581 (2)| 00:01:19 | Q1,04 | PCWP | |
| 7 | PX RECEIVE | | 11204 | 251K| | 114 (29)| 00:00:02 | Q1,04 | PCWP | |
| 8 | PX SEND HASH | :TQ10003 | 11204 | 251K| | 114 (29)| 00:00:02 | Q1,03 | P->P | HASH |
| 9 | VIEW | index$_join$_002 | 11204 | 251K| | 114 (29)| 00:00:02 | Q1,03 | PCWP | |
|* 10 | HASH JOIN BUFFERED | | | | | | | Q1,03 | PCWP | |
| 11 | BUFFER SORT | | | | | | | Q1,03 | PCWC | |
| 12 | PX RECEIVE | | | | | | | Q1,03 | PCWP | |
| 13 | PX SEND HASH | :TQ10000 | | | | | | | S->P | HASH |
|* 14 | HASH JOIN | | | | | | | | | |
| 15 | BITMAP CONVERSION TO ROWIDS| | 11204 | 251K| | 30 (0)| 00:00:01 | | | |
| 16 | BITMAP INDEX FULL SCAN | W_DAY_D_M30 | | | | | | | | |
| 17 | BITMAP CONVERSION TO ROWIDS| | 11204 | 251K| | 3 (0)| 00:00:01 | | | |
|* 18 | BITMAP INDEX FULL SCAN | W_DAY_D_M9 | | | | | | | | |
| 19 | PX RECEIVE | | 11204 | 251K| | 23 (0)| 00:00:01 | Q1,03 | PCWP | |
| 20 | PX SEND HASH | :TQ10002 | 11204 | 251K| | 23 (0)| 00:00:01 | Q1,02 | P->P | HASH |
| 21 | PX BLOCK ITERATOR | | 11204 | 251K| | 23 (0)| 00:00:01 | Q1,02 | PCWC | |
| 22 | INDEX FAST FULL SCAN | W_DAY_D_P1 | 11204 | 251K| | 23 (0)| 00:00:01 | Q1,02 | PCWP | |
| 23 | BUFFER SORT | | | | | | | Q1,04 | PCWC | |
| 24 | PX RECEIVE | | 993K| 50M| | 5460 (1)| 00:01:17 | Q1,04 | PCWP | |
| 25 | PX SEND HASH | :TQ10001 | 993K| 50M| | 5460 (1)| 00:01:17 | | S->P | HASH |
|* 26 | TABLE ACCESS FULL | WC_TIME_REPORT_F | 993K| 50M| | 5460 (1)| 00:01:17 | | | |
Predicate Information (identified by operation id):
6 - access("TF"."DATE_WID"="TD"."ROW_WID")
10 - access(ROWID=ROWID)
14 - access(ROWID=ROWID)
18 - filter("TD"."DAY_NAME"<>'Sunday' AND "TD"."DAY_NAME"<>'Saturday')
26 - filter("TF"."TYPE"='Time On Territory')
42 rows selected

Please give suggestions for tuning this query.
Thanks.

dba wrote:
Query:
SELECT SUM(TF.DURATION) TIME_ON_DAYS,
TO_NUMBER(TO_CHAR(TD.calendar_date, 'YYYYMM')) MONTH_WID,
TF.THERAPY_AREA_WID,
TF.BUSINESS_CYCLE_WID,
TF.SR_POSITION_ROW_WID,
TF.SR_POSITION_DH_WID
FROM NNOLAP.WC_TIME_REPORT_F TF,
NNOLAP.W_DAY_D TD
WHERE TF.date_wid = TD.row_wid
AND TF.type = 'Time On Territory'
AND TD.day_name NOT IN ('Saturday', 'Sunday')
GROUP BY TO_NUMBER(TO_CHAR(TD.calendar_date, 'YYYYMM')),
TF.THERAPY_AREA_WID,
TF.BUSINESS_CYCLE_WID,
TF.USR_LOGIN,
TF.SR_POSITION_ROW_WID,
TF.SR_POSITION_DH_WID
The query is returning a lot of data. PQO is probably going to help, but PQO does not always improve performance, so checking serial performance would be a good idea - have you done so?
The SQL is performing bitmap conversions in steps 16 to 18. Are you using bitmap indexes? Bitmap conversions can use a lot of resources. The internal view in step 9 is also a concern; internal views can also use a lot of system resources. The cost in this section is only 114 compared to 5460 in the other section, so this may not be significant.
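The dominant cost here is the 5460-cost full scan of WC_TIME_REPORT_F. In general, an index whose leading column is the filtered column lets the optimizer replace a full scan with an index search. A minimal sqlite3 sketch (not Oracle; the table and index names are invented stand-ins):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE time_report (type TEXT, date_wid INTEGER, duration REAL)")

query = "SELECT SUM(duration) FROM time_report WHERE type = 'Time On Territory'"

# Without an index on "type", sqlite can only scan the whole table.
before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# With an index led by the filter column, the planner searches it instead.
con.execute("CREATE INDEX idx_tr_type ON time_report (type, date_wid)")
after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(before)  # a SCAN of time_report
print(after)   # a SEARCH using idx_tr_type
```

Whether Oracle's optimizer actually picks such an index depends on statistics and selectivity, as noted above; this only illustrates the scan-vs-search switch.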
An index to avoid the full table scan on WC_TIME_REPORT_F might help if the optimizer uses it. Put the index on the columns in the WHERE clause, with the more restrictive columns leading (the most restrictive columns first usually works best with b-tree indexes). -
I am using Oracle DB 9.2.4.
The query below is taking a lot of time, i.e. around 5 minutes. Can anybody help me tune it?
SELECT LTRIM
(RTRIM ( DECODE (tit_name,
'MR.', 'CA.',
'MS.', 'CA.',
'MRS.', 'CA.'
|| ' '
|| mrh_first_name
|| ' '
|| mrh_middle_name
|| ' '
|| mrh_sur_name
|| ' '
|| DECODE (mrh_appr_uid,
NULL, NULL,
DECODE (mrh_mem_status,
2, NULL,
DECODE (mrh_fellow_status_yn,
'Y', 'FCA',
'ACA'
|| DECODE (mrh_resi_status,
'A', ' AIR - MAIL'
|| CHR (10)
|| DECODE (mrh_cop_status,
1, DECODE (mrh_cop_type,
13, 'CHARTERED ACCOUNTANT' || CHR (10),
NULL
NULL
|| LTRIM (RTRIM (mrh_prof_addr_line_1))
|| DECODE (mrh_prof_addr_line_1, NULL, NULL, CHR (10))
|| LTRIM (RTRIM (mrh_prof_addr_line_2))
|| DECODE (mrh_prof_addr_line_2, NULL, NULL, CHR (10))
|| LTRIM (RTRIM (mrh_prof_addr_line_3))
|| DECODE (mrh_prof_addr_line_3, NULL, NULL, CHR (10))
|| LTRIM (RTRIM (mrh_prof_addr_line_4))
|| DECODE (mrh_prof_addr_line_4, NULL, NULL, CHR (10))
|| LTRIM (RTRIM ( city_name
|| '-'
|| mrh_prof_zip_postal_code
|| DECODE (mrh_resi_status,
'A', CHR (10) || cou_name,
NULL
) l_common,
DECODE (mrh_appr_uid, NULL, 'T', 'P') p_t_flag
FROM cadata3.om_mem_reg_head, cadata3.om_city, cadata3.om_country, cadata3.om_title
WHERE cou_code = mrh_prof_cou_code
AND city_code = mrh_prof_city_code(+)
AND mrh_title = tit_code(+)
AND NVL (mrh_clo_status, 0) != 1
164245 rows selected.
Elapsed: 00:04:26.09
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 164K| 25M| | 2332 | | |
| 1 | HASH JOIN | | 164K| 25M| | 2332 | | |
| 2 | TABLE ACCESS FULL | OM_COUNTRY | 676 | 12168 | | 3 | | |
| 3 | HASH JOIN OUTER | | 164K| 22M| 23M| 2296 | | |
| 4 | FILTER | | | | | | | |
| 5 | HASH JOIN OUTER | | | | | | | |
| 6 | INDEX FAST FULL SCAN| IDM_OM_CITY_CITY_NAME | 24226 | 449K| | 11 | | |
| 7 | PARTITION LIST ALL | | | | | | 1 | 7 |
| 8 | TABLE ACCESS FULL | OM_MEM_REG_HEAD | 164K| 18M| | 1155 | 1 | 7 |
| 9 | TABLE ACCESS FULL | OM_TITLE | 8 | 88 | | 2 | | |
Note: cpu costing is off, 'PLAN_TABLE' is old version
17 rows selected.
SQL>
SQL> select index_name, COLUMN_NAMe,column_position from dba_ind_columns where
table_name='OM_MEM_REG_HEAD' and column_name like 'MRH_PROF%';
INDEX_NAME           COLUMN_NAME            COLUMN_POSITION
IDM_MRH_PROF_CITY2   MRH_PROF_CITY_CODE     1
ABC                  MRH_PROF_REGION_CODE   1
After using an index hint on the table OM_MEM_REG_HEAD, the timing of the query goes down to 3.5 minutes. Please suggest how to tune the query so that the timing goes down further. Please find the TKPROF report below:
TKPROF: Release 9.2.0.4.0 - Production on Mon Aug 18 11:53:08 2008
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: G:\oracle\admin\hotest\udump\hotest_ora_3216.trc
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT LTRIM
(RTRIM ( DECODE (tit_name,
'MR.', 'CA.',
'MS.', 'CA.',
'MRS.', 'CA.'
|| ' '
|| mrh_first_name
|| ' '
|| mrh_middle_name
|| ' '
|| mrh_sur_name
|| ' '
|| DECODE (mrh_appr_uid,
NULL, NULL,
DECODE (mrh_mem_status,
2, NULL,
DECODE (mrh_fellow_status_yn,
'Y', 'FCA',
'ACA'
|| DECODE (mrh_resi_status,
'A', ' AIR - MAIL'
|| CHR (10)
|| DECODE (mrh_cop_status,
1, DECODE (mrh_cop_type,
13, 'CHARTERED ACCOUNTANT' || CHR (10),
NULL
NULL
|| LTRIM (RTRIM (mrh_prof_addr_line_1))
|| DECODE (mrh_prof_addr_line_1, NULL, NULL, CHR (10))
|| LTRIM (RTRIM (mrh_prof_addr_line_2))
|| DECODE (mrh_prof_addr_line_2, NULL, NULL, CHR (10))
|| LTRIM (RTRIM (mrh_prof_addr_line_3))
|| DECODE (mrh_prof_addr_line_3, NULL, NULL, CHR (10))
|| LTRIM (RTRIM (mrh_prof_addr_line_4))
|| DECODE (mrh_prof_addr_line_4, NULL, NULL, CHR (10))
|| LTRIM (RTRIM ( city_name
|| '-'
|| mrh_prof_zip_postal_code
|| DECODE (mrh_resi_status,
'A', CHR (10) || cou_name,
NULL
) l_common,
DECODE (mrh_appr_uid, NULL, 'T', 'P') p_t_flag
FROM om_mem_reg_head, om_city, om_country, om_title
WHERE cou_code = mrh_prof_cou_code
AND city_code = mrh_prof_city_code(+)
AND mrh_title = tit_code(+)
AND NVL (mrh_clo_status, 0) != 1
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 4 0.01 0.05 2 16 0 0
Execute 5 0.00 0.03 0 0 0 0
Fetch 10258 4.78 23.62 12420 12157 0 153818
total 10267 4.79 23.72 12422 12173 0 153818
Misses in library cache during parse: 3
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.00 0.00 0 0 0 0
Execute 10 0.00 0.00 0 0 0 0
Fetch 10 0.00 0.06 5 21 0 8
total 29 0.00 0.06 5 21 0 8
Misses in library cache during parse: 2
5 user SQL statements in session.
9 internal SQL statements in session.
14 SQL statements in session.
Trace file: G:\oracle\admin\hotest\udump\hotest_ora_3216.trc
Trace file compatibility: 9.00.01
Sort options: prsela exeela fchela
1 session in tracefile.
5 user SQL statements in trace file.
9 internal SQL statements in trace file.
14 SQL statements in trace file.
6 unique SQL statements in trace file.
10522 lines in trace file.

Here we can easily see that rows/call is much greater than elapsed/call...
Please suggest... -
Using substr function loses index. Query tune help
Hi
When I use the substr function in the join condition, the index is no longer used and the query is slow. How can I fix this problem or tune this query? These are the relevant lines in the query:
and substr(a.invoice_num,1,9) = l.invoice_num
and substr(c.invoice_num,1,9) = mgr_apprv_lst.user_key
and substr(c.invoice_num,1,9) = pbl_apprv_lst.user_key
select
pap.full_name employe_name,
k.SEGMENT1 ||'.' ||k.segment2 Cost_Center,
a.invoice_num Invoice_Number,
b.item_description Line_Item,
b.amount Amount,
cc.trx_id Corporate_Card_Transaction_Id,
cc.transaction_date Date_Charge_Incurred,
cc.posted_date Date_Charge_Posted_To_USBank,
cc.creation_date Date_Posted_To_IExpense,
a.creation_date Expense_Report_Creation_Date,
l.report_submitted_date Expense_Report_Submitted_Date,
mgr_apprv_lst.activity_begin_date Managers_Approval_Begin_Date,
mgr_apprv_lst.activity_end_date Managers_Approval_End_Date,
pbl_apprv_lst.activity_begin_date AP_Approval_Begin_Date,
pbl_apprv_lst.activity_end_date AP_Approval_End_Date,
e.check_date Payment_Date_To_USBank,
e.check_number Payment_Number,
mgr_apprv_lst.activity_result_display_name Managers_Process_Result,
pbl_apprv_lst.activity_result_display_name AP_Process_Result
from
ap_checks_all e,
ap_invoice_payments_all d,
ap_invoices_all c,
ap_expense_report_headers_all a,
ap_credit_card_trxns_all cc,
per_all_people_f pap,
ap_expense_report_headers_all l,
ap_expense_report_lines_all b,
gl_code_combinations k,
(select ias1.user_key,
ias1.activity_result_display_name,
ias1.activity_begin_date,
ias1.activity_end_date
from wf_item_activity_statuses_v ias1,
(select c1.invoice_num
from ap_checks_all e1,
ap_invoice_payments_all d1,
ap_invoices_all c1
where trunc(e1.check_date) between nvl(:From_Date, trunc(e1.check_date))
and nvl(:To_Date, trunc(e1.check_date))
and e1.org_id = 141
and e1.void_date IS null
and d1.check_id = e1.check_id
and c1.invoice_id = d1.invoice_id) inv_lst1
where ias1.item_type = 'APEXP'
and ias1.user_key = inv_lst1.invoice_num
and ias1.activity_name = 'AP_MANAGER_APPROVAL_PROCESS') mgr_apprv_lst,
(select ias2.user_key,
ias2.activity_result_display_name,
ias2.activity_begin_date,
ias2.activity_end_date
from wf_item_activity_statuses_v ias2,
(select c2.invoice_num
from ap_checks_all e2,
ap_invoice_payments_all d2,
ap_invoices_all c2
where trunc(e2.check_date) between nvl(:From_Date, trunc(e2.check_date))
and nvl(:To_Date, trunc(e2.check_date))
and e2.org_id = 141
and e2.void_date IS null
and d2.check_id = e2.check_id
and c2.invoice_id = d2.invoice_id) inv_lst2
where ias2.item_type = 'APEXP'
and ias2.user_key = inv_lst2.invoice_num
and ias2.activity_name = 'AP_PAYABLES_APPROVAL_PROCESS') pbl_apprv_lst
where
trunc(e.check_date) between nvl(:From_Date, trunc(e.check_date))
and nvl(:To_Date, trunc(e.check_date))
and e.org_id = 141
and e.void_date IS null
and d.check_id = e.check_id
and c.invoice_id = d.invoice_id
and a.invoice_num = c.invoice_num
and a.source = 'CREDIT CARD'
and a.employee_id = nvl(:Emp_id,a.employee_id)
and a.report_header_id = b.report_header_id
and cc.trx_id = b.credit_card_trx_id
and pap.person_id = a.employee_id
and pap.effective_start_date <= trunc(sysdate)
and pap.effective_end_date >= trunc(sysdate)
and k.code_combination_id = b.code_combination_id
and k.segment2 between nvl(:From_Cost_Center,k.segment2)
and nvl(:To_Cost_Center,k.segment2)
and substr(a.invoice_num,1,9) = l.invoice_num
and substr(c.invoice_num,1,9) = mgr_apprv_lst.user_key
and substr(c.invoice_num,1,9) = pbl_apprv_lst.user_key
Hi,
If I understood correctly your logic, and if the columns involved are of type varchar2, you can use the like operator:
and a.invoice_num like l.invoice_num || '%'
and c.invoice_num like mgr_apprv_lst.user_key || '%'
and c.invoice_num like pbl_apprv_lst.user_key || '%'
In this case, Oracle will be able to use the indexes. If the columns are numeric, you need to do something like:
and a.invoice_num between l.invoice_num * 1000000
and l.invoice_num * 1000000 + 999999
I hope this makes sense...
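A minimal sketch of the full pattern (untested; assumes invoice_num is an indexed VARCHAR2). The same principle applies to the trunc(e.check_date) predicates in the original query: wrapping the column in TRUNC disables an index on check_date, whereas moving the arithmetic to the bind side keeps the predicate index-friendly. The sentinel dates below are arbitrary placeholders for "no bound supplied":

```sql
-- substr(col,1,9) = key  becomes a LIKE with a trailing wildcard,
-- leaving the indexed column bare on the left-hand side:
and a.invoice_num like l.invoice_num          || '%'
and c.invoice_num like mgr_apprv_lst.user_key || '%'
and c.invoice_num like pbl_apprv_lst.user_key || '%'

-- trunc(e.check_date) between ... becomes a half-open range on the
-- raw column, so an index on check_date stays usable:
and e.check_date >= nvl(:From_Date, date '1900-01-01')
and e.check_date <  nvl(:To_Date,   date '9999-12-01') + 1
```

Separately, the inv_lst1 and inv_lst2 inline views are identical; factoring them into a single WITH subquery would let the three-table check/payment/invoice join run once instead of twice.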
Luis -
Query Tune - Expensive Window sort.
I want some advise to tune this query.
My database version:
SQL> SELECT * FROM v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.5.0 - Productio
NLSRTL Version 10.2.0.5.0 - Production
SQL>
The query:
SQL> EXPLAIN PLAN FOR
2 WITH INN AS
3 (
4 SELECT /*+ PARALLEL(log,8) */
5 item_no,
6 item_type,
7 bu_code_send,
8 bu_type_send,
9 bu_code_rcv,
10 bu_type_rcv,
11 from_date_rcv,
12 to_date_rcv,
13 lt_val,
14 lt_val_uom_code,
15 upd_dtime,
16 delete_dtime,
17 ROW_NUMBER() OVER(PARTITION BY item_no, item_type, bu_code_send, bu_type_send,
18 bu_code_rcv, bu_type_rcv, from_date_rcv
19 ORDER BY upd_dtime DESC) AS nr
20 FROM log.CEM_TOTAL_LEADTIME_T_LOG log
21 WHERE UPD_DTIME < TO_DATE ('13-11-2012','DD-MM-YYYY')
22 )
23 SELECT
24 item_no,
25 item_type,
26 bu_code_send,
27 bu_type_send,
28 bu_code_rcv,
29 bu_type_rcv,
30 from_date_rcv,
31 to_date_rcv,
32 lt_val,
33 lt_val_uom_code,
34 SYSDATE
35 FROM inn
36 WHERE DELETE_DTIME IS NULL
37 AND NR=1
38 AND to_DATE ('13-11-2012','DD-MM-YYYY') BETWEEN from_date_rcv AND NVL(to_date_rcv, '31-DEC-9999')
39 ;
Explained.
The plan:
PLAN_TABLE_OUTPUT
Plan hash value: 3866412310
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 331M| 26G| | 374K (4)| 00:50:30 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | 331M| 26G| | 374K (4)| 00:50:30 | | | Q1,01 | P->S | QC (RAND) |
|* 3 | VIEW | | 331M| 26G| | 374K (4)| 00:50:30 | | | Q1,01 | PCWP | |
|* 4 | WINDOW SORT PUSHED RANK | | 331M| 20G| 29G| 374K (4)| 00:50:30 | | | Q1,01 | PCWP | |
| 5 | PX RECEIVE | | 331M| 20G| | 374K (4)| 00:50:30 | | | Q1,01 | PCWP | |
| 6 | PX SEND HASH | :TQ10000 | 331M| 20G| | 374K (4)| 00:50:30 | | | Q1,00 | P->P | HASH |
|* 7 | WINDOW CHILD PUSHED RANK| | 331M| 20G| | 374K (4)| 00:50:30 | | | Q1,00 | PCWP | |
| 8 | PX BLOCK ITERATOR | | 331M| 20G| | 7123 (44)| 00:00:58 | 1 | 167 | Q1,00 | PCWC | |
|* 9 | TABLE ACCESS FULL | CEM_TOTAL_LEADTIME_T_LOG | 331M| 20G| | 7123 (44)| 00:00:58 | 1 | 167 | Q1,00 | PCWP | |
Predicate Information (identified by operation id):
3 - filter("DELETE_DTIME" IS NULL AND "NR"=1 AND NVL("TO_DATE_RCV",TO_DATE(' 9999-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))>=TO_DATE(' 2012-11-13
PLAN_TABLE_OUTPUT
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
4 - filter(ROW_NUMBER() OVER ( PARTITION BY "ITEM_NO","ITEM_TYPE","BU_CODE_SEND","BU_TYPE_SEND","BU_CODE_RCV","BU_TYPE_RCV","FROM_DATE_RCV" ORDER BY
INTERNAL_FUNCTION("UPD_DTIME") DESC )<=1)
7 - filter(ROW_NUMBER() OVER ( PARTITION BY "ITEM_NO","ITEM_TYPE","BU_CODE_SEND","BU_TYPE_SEND","BU_CODE_RCV","BU_TYPE_RCV","FROM_DATE_RCV" ORDER BY
INTERNAL_FUNCTION("UPD_DTIME") DESC )<=1)
9 - filter("FROM_DATE_RCV"<=TO_DATE(' 2012-11-13 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "UPD_DTIME"<TO_DATE(' 2012-11-13 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
Some more thoughts/inputs:
1. The table CEM_TOTAL_LEADTIME_T_LOG has 333 million rows, It is a partition table but the range partition is on LOG_DATE column (not used in this query).
2. The window sort is taking huge temp space (wrong cardinality estimate ? How to correct ?)
3. It is actually taking around 56 minutes to complete.
Not tested --
just trying to eliminate the analytic function here (I cannot test this; I can only syntax-check it).
First check whether this is equivalent to your query and, if so, whether it improves the performance.
WITH t AS
(SELECT item_no,
item_type,
bu_code_send,
bu_type_send,
bu_code_rcv,
bu_type_rcv,
from_date_rcv,
to_date_rcv,
lt_val,
lt_val_uom_code,
upd_dtime,
delete_dtime
FROM LOG.CEM_TOTAL_LEADTIME_T_LOG LOG
WHERE UPD_DTIME < TO_DATE ('13-11-2012', 'DD-MM-YYYY')),
t1 AS
( SELECT item_no,
item_type,
bu_code_send,
bu_type_send,
bu_code_rcv,
bu_type_rcv,
from_date_rcv,
to_date_rcv,
MAX (upd_dtime) upd_dtime
FROM t
GROUP BY item_no,
item_type,
bu_code_send,
bu_type_send,
bu_code_rcv,
bu_type_rcv,
from_date_rcv,
to_date_rcv)
SELECT a.item_no,
a.item_type,
a.bu_code_send,
a.bu_type_send,
a.bu_code_rcv,
a.bu_type_rcv,
a.from_date_rcv,
a.to_date_rcv,
a.lt_val,
a.lt_val_uom_code,
SYSDATE,
a.upd_dtime
FROM t1 a, t b
WHERE a.item_no = b.item_no
AND a.item_type = b.item_type
AND a.bu_code_send = b.bu_code_send
AND a.bu_type_send = b.bu_type_send
AND a.bu_code_rcv = b.bu_code_rcv
AND a.bu_type_rcv = b.bu_type_rcv
AND a.from_date_rcv = b.from_date_rcv
AND a.to_date_rcv = b.to_date_rcv
AND a.upd_dtime = b.upd_dtime
AND DELETE_DTIME IS NULL
AND TO_DATE ('13-11-2012', 'DD-MM-YYYY') BETWEEN from_date_rcv
AND NVL (to_date_rcv,
'31-DEC-9999');
Cheers,
Manik
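Another option worth benchmarking (an untested sketch): Oracle's MAX(...) KEEP (DENSE_RANK LAST) aggregates return the non-key columns of the row with the highest upd_dtime in a single GROUP BY pass, which avoids both the ROW_NUMBER window sort and the self-join. One caveat: if two rows in a partition tie on upd_dtime, this picks the MAX of each column among the tied rows, whereas ROW_NUMBER() = 1 picks one arbitrary row.

```sql
SELECT item_no, item_type, bu_code_send, bu_type_send,
       bu_code_rcv, bu_type_rcv, from_date_rcv,
       to_date_rcv, lt_val, lt_val_uom_code, SYSDATE
FROM  (SELECT item_no, item_type, bu_code_send, bu_type_send,
              bu_code_rcv, bu_type_rcv, from_date_rcv,
              -- value of each column on the latest (max upd_dtime) row:
              MAX(to_date_rcv)     KEEP (DENSE_RANK LAST ORDER BY upd_dtime) AS to_date_rcv,
              MAX(lt_val)          KEEP (DENSE_RANK LAST ORDER BY upd_dtime) AS lt_val,
              MAX(lt_val_uom_code) KEEP (DENSE_RANK LAST ORDER BY upd_dtime) AS lt_val_uom_code,
              MAX(delete_dtime)    KEEP (DENSE_RANK LAST ORDER BY upd_dtime) AS delete_dtime
       FROM   log.cem_total_leadtime_t_log
       WHERE  upd_dtime < TO_DATE('13-11-2012', 'DD-MM-YYYY')
       GROUP BY item_no, item_type, bu_code_send, bu_type_send,
                bu_code_rcv, bu_type_rcv, from_date_rcv)
WHERE delete_dtime IS NULL
  AND TO_DATE('13-11-2012', 'DD-MM-YYYY')
      BETWEEN from_date_rcv AND NVL(to_date_rcv, TO_DATE('31-12-9999', 'DD-MM-YYYY'));
```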
Edited by: Manik on Nov 15, 2012 3:13 PM -
Query tunning in Oracle using Explain Plan
Adding to my question below: I have now modified the query, and the plan shown by EXPLAIN PLAN has fewer steps. The 'Time' column of PLAN_TABLE is also showing a much lower value. However, some people are suggesting I should instead measure the time the query takes to execute in Toad. Is that practical? Please help!!
Hi, I am using Oracle 11g. I need to optimize a SELECT query (minimize its execution time) and want to know how EXPLAIN PLAN can help. I know how to use the EXPLAIN PLAN command, and I query PLAN_TABLE to see the details of the plan.
Please guide me on which columns of PLAN_TABLE to consider while modifying the query. Some people say the 'Time' column, some say 'Bytes', etc. Some suggest minimizing full table scans, while others say I should minimize the total number of operations (fewer rows in PLAN_TABLE). An experienced friend of mine says full table scans should be reduced (e.g. if there are 5 full table scans in the plan, try to get below 5). However, for the full-scan operations in my plan the 'Time' column shows only 1, which is very low. Does this mean the full scans are actually very fast, and there is no need to work on them?
Some articles say the plan shown by EXPLAIN PLAN is not necessarily the one followed when the query executes. So what should I look at? How should I optimize the query, and how will I know when it is optimized? Please help!!
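One concrete way to see the plan that was actually executed, rather than the estimate EXPLAIN PLAN produces (a sketch; the table and bind names are placeholders):

```sql
-- Run the statement once with rowsource statistics enabled ...
SELECT /*+ gather_plan_statistics */ *
FROM   your_table            -- placeholder
WHERE  some_col = :b1;

-- ... then pull the real plan of the last executed cursor, showing
-- estimated (E-Rows) vs actual (A-Rows) rows for every step:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Steps where E-Rows and A-Rows diverge by orders of magnitude are usually where tuning effort pays off, which answers the PLAN_TABLE-columns question more directly than 'Time' or 'Bytes'.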
Edited by: 885901 on Sep 20, 2011 2:10 AM
885901 wrote: (question quoted in full above)
how fast is fast enough? -
Hi All,
Could you please help me tune the SQL query below?
The Temp table has 1 million records.
The Master table has 60 million records.
Query :
SELECT B.*, U.ID, SD, LE, LAE
FROM client.Temp B, client.Master U
WHERE U.policyno = B.policyno
AND B.UPFLAG = 0
1. Indexes are created on both email columns and Upflag for both tables.
2. Gathered DBMS Stats for MASTER Table
Data is loading at 100k rows/hour on production.
When your query takes too long ...
HOW TO: Post a SQL statement tuning request - template posting
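Beyond the template links, one hedged starting point (untested; assumes a large fraction of the 1M Temp rows have UPFLAG = 0): when a 1M-row driving set joins a 60M-row table, a single hash join with full scans is often cheaper than a million indexed nested-loop probes. The hints below exist only to compare against the current plan, not as a permanent fix:

```sql
SELECT /*+ USE_HASH(U) FULL(U) */
       B.*, U.ID, SD, LE, LAE
FROM   client.Temp B, client.Master U
WHERE  U.policyno = B.policyno
AND    B.UPFLAG   = 0;
```

If the hinted run is much faster, the real issue is likely stale statistics or an index nudging the optimizer into nested loops; gather stats on Temp as well as Master and re-check the unhinted plan.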
-
For one record it is taking 15 minutes. I am unable to find where the problem is, because we are joining only two tables and processing a few records is taking a lot of time.
Select colmn1, colmn2, ...
from
table1,
table2
Where table1.col_x = 'C'
and table1.col_y = table2.col_y
and table1.col_z IN ( select col_z from table1 where col_m is NOT NULL and
col_n NOT IN ( 'Y','N'))
and table1.col_z NOT IN ( select col_z from table1 where col_n='T' )
Thanks,
Sunil
999732 wrote:
The query you suggested uses TABLE1 instead of TABLE3
SELECT T1.*, T2.*
FROM TABLE1 T1, TABLE1 T2
WHERE T1.COL_X ='C'
AND T1.COL_Y = T2.COL_Y
AND EXISTS (SELECT 1 FROM TABLE1 WHERE T1.COL_Z = COL_Z AND COL_M IS NOT NULL AND (COL_N = 'Y' OR COL_N = 'N'))
AND NOT EXISTS (SELECT 1 FROM TABLE1 WHERE T1.COL_Z = COL_Z AND COL_N = 'T')
Actual Query:-
SELECT COLMN1, COLMN2,...
FROM
TABLE 1,
TABLE 2
WHERE TABLE1.COL_X ='C'
AND TABLE 1.COL_Y =TABLE2.COL_Y
AND TABLE1.COL_Z IN ( SELECT COL_Z FROM TABLE3 WHERE COL_M IS NOT NULL AND COL_N NOT IN ( 'Y','N'))
AND TABLE1.COL_Z NOT IN ( SELECT COL_Z FROM TABLE3 WHERE COL_N='T' AND COL_K='EMP_INFORMATION' AND COL_T='15000' )
My idea is that you can use EXISTS instead of IN.
Then we can write scipt as
SELECT COLMN1, COLMN2,...
FROM
TABLE1 T1,
TABLE2 T2
WHERE T1.COL_X ='C'
AND T1.COL_Y = T2.COL_Y
AND EXISTS (SELECT 1 FROM TABLE3 T3 WHERE T3.COL_Z = T1.COL_Z AND T3.COL_M IS NOT NULL AND T3.COL_N NOT IN ('Y','N'))
AND NOT EXISTS (SELECT 1 FROM TABLE3 T3
WHERE T3.COL_Z = T1.COL_Z
AND T3.COL_N='T'
AND T3.COL_K='EMP_INFORMATION'
AND T3.COL_T='15000')
Please try.
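One caution on this class of rewrite (general SQL semantics, not specific to these tables): NOT IN and NOT EXISTS are not interchangeable when the subquery can return NULL. `x NOT IN (...)` yields no rows at all if any subquery value is NULL, while NOT EXISTS with a correlated equality simply ignores the NULL rows. A small illustration:

```sql
-- Suppose TABLE3 contains COL_Z values (1, NULL).

-- Returns no rows: COL_Z NOT IN (1, NULL) evaluates to
-- UNKNOWN for every TABLE1 row because of the NULL.
SELECT * FROM TABLE1 WHERE COL_Z NOT IN (SELECT COL_Z FROM TABLE3);

-- Returns every TABLE1 row whose COL_Z <> 1:
-- the NULL in TABLE3 is simply never matched by the equality.
SELECT * FROM TABLE1 T1
WHERE NOT EXISTS (SELECT 1 FROM TABLE3 T3 WHERE T3.COL_Z = T1.COL_Z);
```

So the EXISTS version is only an equivalent (and safe) replacement if TABLE3.COL_Z is declared NOT NULL or the subquery filters NULLs out.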
Mahir -
this query is giving high cost pls help.
SELECT DISTINCT
SI.ORG_ID
,SI.ID SI_NO
,JO.ID JOB_NO
,JO.BOOKING_REF_NO
,ORGSERPO.DESCRIPTION ETA_POL
,TSCH.VESSEL_1||TSCH.VOYAGE_1 VESSEL_VOYAGE_1
,TSCH.ETD_1
,ORGSERPO1.DESCRIPTION POD
,TSCH.ETA_1
,TSCH.VESSEL_2||TSCH.VOYAGE_2 VESSEL_VOYAGE_2
,TSCH.ETD_2
,ORGSERPO2.DESCRIPTION POD2
,TSCH.ETA_2
,TSCH.VESSEL_3||TSCH.VOYAGE_3 VESSEL_VOYAGE_3
,TSCH.ETA_3
,TSCH.ETD_3
,ORGSERPO3.DESCRIPTION POD3
,ORGSERPO4.DESCRIPTION FINAL_DESTINATION
,ORGP.SHORT_NAME CARRIER_SHORT_NAME
,ORGP1.SHORT_NAME CONSIGNEE_SHORT_NAME
,ORGP2.SHORT_NAME FACTORY_SHORT_NAME
,JO.SHIPMENT_TYPE
,SUBD.DESCRIPTION PRODUCT_TYPE ,
Pshm05.Get_SI_Forwarder(jo.org_id,jo.id) Forwarder,
JO.POL_TERMS||JO.POD_TERMS,
decode(po.ext_sys_ref_no,po.trans_id_1,po.id,po.ext_sys_ref_no),
SI.ORGP_ID,
pshm07.Get_Cont_count(pal.org_id,pal.id,'20F') f20,
pshm07.Get_Cont_count(pal.org_id,pal.id,'40F') f40,
pshm07.Get_Cont_count(pal.org_id,pal.id,'40H') h40,
pshm07.Get_Cont_count(pal.org_id,pal.id,'45F') f45,
sum(palcsi.qty) QTY,
sum(palcsi.gross_weight) WEIGHT,
sum(palcsi.volume) M3,
(si.instruction_1||si.instruction_2||si.instruction_3||si.instruction_4||si.instruction_5
||si.instruction_6||si.instruction_7||si.instruction_8||si.instruction_9||si.instruction_10) INSTRUCTIONS ,
TSCH.ETA_POL,
PAL.ID,
PAL.EXT_SYS_REF_NO
FROM TRANSPORT_SCHEDULES TSCH,
TRANSPORT_ORDERS SI,
JOB_ORDERS JO,
PURCHASE_ORDERS PO,
PACKING_LISTS PAL,
PALCS_ITEMS PALCSI,
ORG_SERVICE_POINTS ORGSERPO,
ORG_SERVICE_POINTS ORGSERPO1,
ORG_SERVICE_POINTS ORGSERPO2,
ORG_SERVICE_POINTS ORGSERPO3,
ORG_SERVICE_POINTS ORGSERPO4,
ORG_PARTIES ORGP,
ORG_PARTIES ORGP1,
ORG_PARTIES ORGP2,
SUBJECT_DOMAINS SUBD
WHERE TSCH.ORG_ID = JO.ORG_ID
AND TSCH.ID = JO.TSCH_ID
AND JO.ORG_ID = PAL.EXT_SYS_REF_ORG_ID
AND JO.ID = PAL.TRANS_ID_1
AND PAL.ORG_ID = PALCSI.PALCS_PAL_ORG_ID
AND PAL.ID = PALCSI.PALCS_PAL_ID
AND PALCSI.PALCS_PAL_ORG_ID = PO.ORG_ID
AND PALCSI.TRANS_ID_1 = PO.ID
AND JO.ORG_ID = SI.ORG_ID
AND JO.ID = SI.TRANS_ID_1
AND TSCH.ORG_ID = ORGSERPO.ORG_ID
AND TSCH.ORGSERPO_ID = ORGSERPO.ID
AND TSCH.ORG_ID = ORGSERPO1.ORG_ID
AND TSCH.ORGSERPO_ID_1 = ORGSERPO1.ID
AND TSCH.ORG_ID = ORGSERPO2.ORG_ID (+)
AND TSCH.ORGSERPO_ID_2 = ORGSERPO2.ID (+)
AND TSCH.ORG_ID = ORGSERPO3.ORG_ID (+)
AND TSCH.ORGSERPO_ID_3 = ORGSERPO3.ID (+)
AND TSCH.ORG_ID = ORGSERPO4.ORG_ID
AND TSCH.ORGSERPO_ID_4 = ORGSERPO4.ID
AND TSCH.ORG_ID = ORGP.ORG_ID
AND TSCH.ORGP_ID_5 = ORGP.ID
AND PAL.ORG_ID = ORGP1.ORG_ID
AND PAL.ORGP_ID_2 = ORGP1.ID
AND SI.ORG_ID = ORGP2.ORG_ID
AND SI.ORGP_ID = ORGP2.ID
AND PO.TRANS_TYPE = SUBD.ID
AND SUBD.SUB_ID = 'PRODUCT_TYPE'
AND SI.TRANS_TYPE = 'JO_FACT'
GROUP BY
SI.ORG_ID
,SI.ID
,JO.ID
,JO.BOOKING_REF_NO
,ORGSERPO.DESCRIPTION
,TSCH.VESSEL_1
,TSCH.VOYAGE_1
,TSCH.ETD_1
,ORGSERPO1.DESCRIPTION
,TSCH.ETA_1
,TSCH.VESSEL_2
,TSCH.VOYAGE_2
,TSCH.ETD_2
,ORGSERPO2.DESCRIPTION
,TSCH.ETA_2
,TSCH.VESSEL_3
,TSCH.VOYAGE_3
,TSCH.ETA_3
,TSCH.ETD_3
,ORGSERPO3.DESCRIPTION
,ORGSERPO4.DESCRIPTION
,ORGP.SHORT_NAME
,ORGP1.SHORT_NAME
,ORGP2.SHORT_NAME
,JO.SHIPMENT_TYPE
,SUBD.DESCRIPTION,
Pshm05.Get_SI_Forwarder(jo.org_id,jo.id),
JO.POL_TERMS,
JO.POD_TERMS,
decode(po.ext_sys_ref_no,po.trans_id_1,po.id,po.ext_sys_ref_no),
SI.ORGP_ID,
pshm07.Get_Cont_count(pal.org_id,pal.id,'20F'),
pshm07.Get_Cont_count(pal.org_id,pal.id,'40F'),
pshm07.Get_Cont_count(pal.org_id,pal.id,'40H'),
pshm07.Get_Cont_count(pal.org_id,pal.id,'45F'),
(si.instruction_1||si.instruction_2||si.instruction_3||si.instruction_4||si.instruction_5
||si.instruction_6||si.instruction_7||si.instruction_8||si.instruction_9||si.instruction_10),
TSCH.ETA_POL,
PAL.ID,
PAL.EXT_SYS_REF_NO
I got <Pre>, thanks.
select *
from
v_si_schedule Where Rownum < 10
call count cpu elapsed disk query current rows
Parse 1 0.91 0.88 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 37.11 36.93 6202 7794 0 9
total 3 38.02 37.82 6202 7794 0 9
Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Rows Row Source Operation
9 COUNT STOPKEY (cr=1745575 pr=6243 pw=0 time=60273765 us)
9 VIEW (cr=1745575 pr=6243 pw=0 time=60273759 us)
9 SORT UNIQUE STOPKEY (cr=1745575 pr=6243 pw=0 time=60273731 us)
24125 SORT GROUP BY (cr=1745575 pr=6243 pw=0 time=60177004 us)
99836 HASH JOIN (cr=7794 pr=6202 pw=0 time=3451945 us)
3953 TABLE ACCESS FULL ORG_SERVICE_POINTS (cr=61 pr=59 pw=0 time=233 us)
99836 HASH JOIN RIGHT OUTER (cr=7733 pr=6143 pw=0 time=3147141 us)
3953 TABLE ACCESS FULL ORG_SERVICE_POINTS (cr=61 pr=0 pw=0 time=36 us)
99836 HASH JOIN RIGHT OUTER (cr=7672 pr=6143 pw=0 time=2942252 us)
3953 TABLE ACCESS FULL ORG_SERVICE_POINTS (cr=61 pr=0 pw=0 time=49 us)
99836 HASH JOIN (cr=7611 pr=6143 pw=0 time=2637117 us)
3953 TABLE ACCESS FULL ORG_SERVICE_POINTS (cr=61 pr=0 pw=0 time=50 us)
99836 HASH JOIN (cr=7550 pr=6143 pw=0 time=2331803 us)
3953 TABLE ACCESS FULL ORG_SERVICE_POINTS (cr=61 pr=0 pw=0 time=48 us)
99836 HASH JOIN (cr=7489 pr=6143 pw=0 time=2026580 us)
3413 TABLE ACCESS FULL ORG_PARTIES (cr=122 pr=93 pw=0 time=100 us)
99836 HASH JOIN (cr=7367 pr=6050 pw=0 time=1719269 us)
6112 TABLE ACCESS FULL TRANSPORT_SCHEDULES (cr=248 pr=245 pw=0 time=6301 us)
100271 HASH JOIN (cr=7119 pr=5805 pw=0 time=1300940 us)
3413 TABLE ACCESS FULL ORG_PARTIES (cr=122 pr=1 pw=0 time=99 us)
100684 HASH JOIN (cr=6997 pr=5804 pw=0 time=1098029 us)
5655 TABLE ACCESS FULL TRANSPORT_ORDERS (cr=1067 pr=13 pw=0 time=22723 us)
84236 HASH JOIN (cr=5930 pr=5791 pw=0 time=1089549 us)
6415 TABLE ACCESS FULL JOB_ORDERS (cr=310 pr=307 pw=0 time=6611 us)
84236 HASH JOIN (cr=5620 pr=5484 pw=0 time=816414 us)
3413 TABLE ACCESS FULL ORG_PARTIES (cr=122 pr=0 pw=0 time=59 us)
84236 HASH JOIN (cr=5498 pr=5484 pw=0 time=557678 us)
17884 TABLE ACCESS FULL PACKING_LISTS (cr=1005 pr=1001 pw=0 time=18094 us)
93201 HASH JOIN (cr=4493 pr=4483 pw=0 time=462268 us)
25081 HASH JOIN (cr=1326 pr=1322 pw=0 time=102379 us)
8 TABLE ACCESS BY INDEX ROWID SUBJECT_DOMAINS (cr=7 pr=7 pw=0 time=178 us)
8 INDEX RANGE SCAN SUBD_PK (cr=2 pr=2 pw=0 time=97 us)(object id 198829)
25081 TABLE ACCESS FULL PURCHASE_ORDERS (cr=1319 pr=1315 pw=0 time=50367 us)
93201 TABLE ACCESS FULL PALCS_ITEMS (cr=3167 pr=3161 pw=0 time=93374 us)
unable to set optimizer goal
ORA-01986: OPTIMIZER_GOAL is obsolete
parse error offset: 33
Rows Execution Plan
0 SELECT STATEMENT GOAL: ALL_ROWS
9 COUNT (STOPKEY)
9 VIEW OF 'V_SI_SCHEDULE' (VIEW)
9 SORT (UNIQUE STOPKEY)
24125 SORT (GROUP BY)
99836 HASH JOIN
3953 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'ORG_SERVICE_POINTS' (TABLE)
99836 HASH JOIN (RIGHT OUTER)
3953 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'ORG_SERVICE_POINTS' (TABLE)
99836 HASH JOIN (RIGHT OUTER)
3953 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'ORG_SERVICE_POINTS' (TABLE)
99836 HASH JOIN
3953 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'ORG_SERVICE_POINTS' (TABLE)
99836 HASH JOIN
3953 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'ORG_SERVICE_POINTS' (TABLE)
99836 HASH JOIN
3413 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'ORG_PARTIES' (TABLE)
99836 HASH JOIN
6112 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'TRANSPORT_SCHEDULES' (TABLE)
100271 HASH JOIN
3413 TABLE ACCESS GOAL: ANALYZED (FULL) OF
'ORG_PARTIES' (TABLE)
100684 HASH JOIN
5655 TABLE ACCESS GOAL: ANALYZED (FULL)
OF 'TRANSPORT_ORDERS' (TABLE)
84236 HASH JOIN
6415 TABLE ACCESS GOAL: ANALYZED (FULL)
OF 'JOB_ORDERS' (TABLE)
84236 HASH JOIN
3413 TABLE ACCESS GOAL: ANALYZED
(FULL) OF 'ORG_PARTIES' (TABLE)
84236 HASH JOIN
17884 TABLE ACCESS GOAL: ANALYZED
(FULL) OF 'PACKING_LISTS' (TABLE)
93201 HASH JOIN
25081 HASH JOIN
8 TABLE ACCESS GOAL:
ANALYZED (BY INDEX ROWID) OF
'SUBJECT_DOMAINS' (TABLE)
8 INDEX GOAL: ANALYZED
(RANGE SCAN) OF 'SUBD_PK' (INDEX (UNIQUE))
25081 TABLE ACCESS GOAL:
ANALYZED (FULL) OF 'PURCHASE_ORDERS' (TABLE)
93201 TABLE ACCESS GOAL: ANALYZED
(FULL) OF 'PALCS_ITEMS' (TABLE)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 14 1.84 1.81 3 21 0 0
Execute 16 0.00 0.00 0 0 0 6
Fetch 96 74.72 74.49 13598 15603 0 9113
total 126 76.56 76.30 13601 15624 0 9119
Misses in library cache during parse: 4
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 86 0.06 0.05 0 25 0 0
Execute 998985 25.06 23.86 38 617 0 0
Fetch 1000325 24.25 23.09 1782 3480045 0 980187
total 1999396 49.37 47.01 1820 3480687 0 980187
Misses in library cache during parse: 30
Misses in library cache during execute: 29
18 user SQL statements in session.
84 internal SQL statements in session.
102 SQL statements in session.
5 statements EXPLAINed in this session.
Sort options: default
1 session in tracefile.
18 user SQL statements in trace file.
84 internal SQL statements in trace file.
102 SQL statements in trace file.
36 unique SQL statements in trace file.
5 SQL statements EXPLAINed using schema:
schema.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
2000131 lines in trace file.
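Two things stand out in this trace, offered as hedged observations rather than a definitive fix. First, the view uses both DISTINCT and GROUP BY over the same column list; GROUP BY already de-duplicates, so the extra SORT UNIQUE STOPKEY is pure overhead. Second, the ~999k recursive executes very likely come from the Pshm05/Pshm07 PL/SQL functions being evaluated for each of the ~100k joined rows before aggregation. The usual remedy is to aggregate first and call the functions once per result row in an outer query. A sketch (untested; the column list is abbreviated and the join list must be completed from the original view):

```sql
SELECT v.*,
       -- functions now run once per aggregated row, not once per joined row
       Pshm05.Get_SI_Forwarder(v.org_id, v.job_no)          AS forwarder,
       pshm07.Get_Cont_count(v.pal_org_id, v.pal_id, '20F') AS f20
FROM  (SELECT si.org_id,
              jo.id           AS job_no,
              pal.org_id      AS pal_org_id,
              pal.id          AS pal_id,
              SUM(palcsi.qty) AS qty
       FROM   transport_orders si
       JOIN   job_orders jo      ON jo.org_id = si.org_id
                                AND jo.id     = si.trans_id_1
       JOIN   packing_lists pal  ON pal.ext_sys_ref_org_id = jo.org_id
                                AND pal.trans_id_1         = jo.id
       JOIN   palcs_items palcsi ON palcsi.palcs_pal_org_id = pal.org_id
                                AND palcsi.palcs_pal_id     = pal.id
       -- remaining joins and predicates from the original view go here
       GROUP BY si.org_id, jo.id, pal.org_id, pal.id) v;
```

Dropping the redundant DISTINCT alone removes one full sort of the 24k grouped rows; deferring the function calls attacks the 998,985 recursive executes shown in the recursive-statement totals.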
-
Hi All
Please can someone provide me the steps for query tuning?
Hi;
What is DB version?
Please review:
Steps in Tuning
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7381085186834
Also see:
http://www.orafaq.com/wiki/Oracle_database_Performance_Tuning_FAQ
Regards,
Helios -
Hi,
This is part of a procedure which is very slow. I need some suggestions to improve the performance of the query below. The explain plan would only be correct in the PRODUCTION DB, and from that system I cannot copy the plan here, so I need suggestions based on the query alone.
Here PB,PBD and PBD1 are very huge tables..
I am unable to paste the query. Please help me.
Thanks in advance..
My car is broken, but I cannot produce it for repairs.
Not sure of your expectations, but volunteers cannot offer any tuning advice when they have not been given even a tiny piece of information to work from.
Please read the linked thread HOW TO: Post a SQL statement tuning request - template posting and post with mentioned information. -
How can I optimize below query?
DELETE FROM #Data WHERE ID NOT IN (SELECT MIN(ID) FROM #Data GROUP BY SerialNumber, VendorName)
It takes like upto 2 minutes to execute.
Here is the table structure:
CREATE TABLE #Data
(
    ID INT IDENTITY (1, 1),
    ItemSupplierKey INT NOT NULL,
    SerialNumber VARCHAR(100) NOT NULL,
    VendorName VARCHAR(50) NOT NULL
);
It contains 257316 records.
Here are indexes on this temp table:
CREATE CLUSTERED INDEX PX_Data ON #Data (SerialNumber, VendorName);
CREATE INDEX IX_AliasData ON #AliasData (ID);
Thanks, GreenBinary
How much data does it DELETE? You can insert the result of the subquery into a temporary table and then DELETE based on the temporary table instead of the subquery.
Best Regards, Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence
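Another pattern worth benchmarking (a sketch, assuming ID is unique per row): delete through a CTE ranked by the dedup key, which scans #Data once and avoids the NOT IN against a grouped subquery entirely:

```sql
-- Keep the lowest ID per (SerialNumber, VendorName); delete the rest.
WITH d AS
(
    SELECT ROW_NUMBER() OVER
           (PARTITION BY SerialNumber, VendorName ORDER BY ID) AS rn
    FROM #Data
)
DELETE FROM d WHERE rn > 1;
```

Because the clustered index PX_Data is already on (SerialNumber, VendorName), the window function can consume the rows in index order without an extra sort.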