Response time issue querying a view
Hi All,
The following query takes 20-25 minutes to run and does not return any result. Returning 0 rows is in fact the correct output, but it should not take 20-25 minutes to do so.
SELECT * FROM WV_WMS_STOCK_MOVEMENT;  -- WV_WMS_STOCK_MOVEMENT is a view
Database: Oracle 8i
O/S: Solaris 8
Below is the explain plan for the above statement:
SQL> @?/rdbms/admin/utlxpls
| Operation | Name | Rows | Bytes| Cost | Pstart| Pstop |
| SELECT STATEMENT | | 364 | 774K|1042445 | | |
| VIEW |WV_WMS_ST | 364 | 774K|1042445 | | |
| SORT UNIQUE | | 364 | 30K|1042445 | | |
| UNION-ALL | | | | | | |
| FILTER | | | | | | |
| NESTED LOOPS | | 1 | 116 | 173829 | | |
| HASH JOIN | | 1 | 102 | 173828 | | |
| NESTED LOOPS | | 12K| 822K| 172929 | | |
| TABLE ACCESS FULL |MSK | 145K| 7M| 172929 | | |
| INDEX UNIQUE SCAN |TBL_IDX1 | 15 | 210 | | | |
| TABLE ACCESS FULL |OST | 85K| 2M| 445 | | |
| INDEX UNIQUE SCAN |PRO_IDX1 | 261K| 3M| 1 | | |
| TABLE ACCESS BY INDEX|TBL | 1 | 25 | 7 | | |
| INDEX RANGE SCAN |TBL_IDX1 | 1 | | 2 | | |
| FILTER | | | | | | |
| HASH JOIN | | 240 | 19K| 174015 | | |
| NESTED LOOPS | | 12K| 822K| 172929 | | |
| TABLE ACCESS FULL |MSK | 145K| 7M| 172929 | | |
| INDEX UNIQUE SCAN |TBL_IDX1 | 15 | 210 | | | |
| INDEX FAST FULL SCAN|PRO_IDX1 | 261K| 3M| 293 | | |
| TABLE ACCESS BY INDEX|TBL | 1 | 25 | 7 | | |
| INDEX RANGE SCAN |TBL_IDX1 | 1 | | 2 | | |
| FILTER | | | | | | |
| HASH JOIN | | 120 | 10K| 174017 | | |
| NESTED LOOPS | | 12K| 858K| 172929 | | |
| TABLE ACCESS FULL |MSK | 145K| 8M| 172929 | | |
| INDEX UNIQUE SCAN |TBL_IDX1 | 15 | 210 | | | |
| INDEX FAST FULL SCAN|PRO_IDX1 | 261K| 3M| 293 | | |
| TABLE ACCESS BY INDEX|TBL | 1 | 25 | 7 | | |
| INDEX RANGE SCAN |TBL_IDX1 | 1 | | 2 | | |
| FILTER | | | | | | |
| NESTED LOOPS | | 1 | 165 | 173485 | | |
| NESTED LOOPS | | 1 | 151 | 173484 | | |
| HASH JOIN | | 1 | 101 | 173478 | | |
| TABLE ACCESS FULL |MSK | 803 | 54K| 172929 | | |
| TABLE ACCESS FULL |OST | 85K| 2M| 445 | | |
| TABLE ACCESS BY IND|MSK | 32K| 1M| 6 | | |
| INDEX RANGE SCAN |MSK_IDX1 | 32K| | 5 | | |
| INDEX UNIQUE SCAN |PRO_IDX1 | 261K| 3M| 1 | | |
| TABLE ACCESS BY INDEX|TBL | 1 | 25 | 7 | | |
| INDEX RANGE SCAN |TBL_IDX1 | 1 | | 2 | | |
| FILTER | | | | | | |
| NESTED LOOPS | | 1 | 133 | 173560 | | |
| HASH JOIN | | 16 | 1K| 173464 | | |
| TABLE ACCESS FULL |MSK | 803 | 54K| 172929 | | |
| INDEX FAST FULL SCA|PRO_IDX1 | 261K| 3M| 293 | | |
| TABLE ACCESS BY INDE|MSK | 32K| 1M| 6 | | |
| INDEX RANGE SCAN |MSK_IDX1 | 32K| | 5 | | |
| TABLE ACCESS BY INDEX|TBL | 1 | 25 | 7 | | |
| INDEX RANGE SCAN |TBL_IDX1 | 1 | | 2 | | |
| FILTER | | | | | | |
| NESTED LOOPS | | 1 | 133 | 173512 | | |
| HASH JOIN | | 8 | 664 | 173464 | | |
| TABLE ACCESS FULL |MSK | 803 | 54K| 172929 | | |
| INDEX FAST FULL SCA|PRO_IDX1 | 261K| 3M| 293 | | |
| TABLE ACCESS BY INDE|MSK | 32K| 1M| 6 | | |
| INDEX RANGE SCAN |MSK_IDX1 | 32K| | 5 | | |
| TABLE ACCESS FULL |TBL | 1 | 25 | 49 | | |
--------------------------------------------------------------------------------
The code of the view is:
CREATE OR REPLACE VIEW "SOC1"."WV_WMS_STOCK_MOVEMENT" ("CODSOC",
"CODOSK","WAREHOUSE_SITE","SIGDEP_FROM","SIGDEP_TO","CODPRO",
"QTEOPE","DATMVT","HEUMVT","REFLOT","DATLC","LIBMSK","NUMMSK",
"INDTRT","NUMLOT","ID","PUMP") AS
SELECT
msk.codsoc, msk.codosk, DECODE(SUBSTR(msk.sigdep,1,3), 'GER', 'GAR', SUBSTR(msk.sigdep,1,3)),
msk.sigdep, msk.sigdep,
msk.codpro, msk.qteope, msk.datmvt, WF_WMS_HEURE(msk.heumvt), ost.reflot,
ost.datlc, msk.libmsk, msk.nummsk, msk.indtrt, msk.numlot,
RPAD(msk.sigdep,12,' ')||RPAD(msk.codpro,12,' ')||RPAD(msk.numlot,12,' ')||msk.nummsk, '1.23'
from tbl t954, pro, ost, msk
where
msk.codsoc = 0
and msk.indtrt = ' '
and ost.codsoc = msk.codsoc
and ost.codpro = msk.codpro
and ost.numlot = msk.numlot
and pro.codsoc = msk.codsoc
and pro.codpro = msk.codpro
and WF_WMS_SUISTK(msk.sigdep, pro.codpro) in ('L', 'X')
and msk.sigdep NOT IN (select a.lib1
from tbl a
where a.codsoc = msk.codsoc
and a.codtbl = '961'
and a.lir = 'DEP')
and t954.codsoc = msk.codsoc
and t954.codtbl = '954'
and t954.cletbl = msk.codosk
UNION
-- Single MVT
SELECT
msk.codsoc, msk.codosk, DECODE(SUBSTR(msk.sigdep,1,3), 'GER', 'GAR', SUBSTR(msk.sigdep,1,3)),
msk.sigdep, msk.sigdep,
msk.codpro, msk.qteope, msk.datmvt, WF_WMS_HEURE(msk.heumvt), ' ',
' ', msk.libmsk, msk.nummsk, msk.indtrt, msk.numlot,
RPAD(msk.sigdep,12,' ')||RPAD(msk.codpro,12,' ')||RPAD(msk.numlot,12,' ')||msk.nummsk , '1.23'
from tbl t954, pro, msk
where
msk.codsoc = 0
and msk.indtrt = ' '
and pro.codsoc = msk.codsoc
and pro.codpro = msk.codpro
and WF_WMS_SUISTK(msk.sigdep, pro.codpro) in ('S', 'E')
and msk.sigdep NOT IN (select a.lib1
from tbl a
where a.codsoc = msk.codsoc
and a.codtbl = '961'
and a.lir = 'DEP')
and t954.codsoc = msk.codsoc
and t954.codtbl = '954'
and t954.cletbl = msk.codosk
UNION
-- Single MVT
SELECT
msk.codsoc, msk.codosk, DECODE(SUBSTR(msk.sigdep,1,3), 'GER', 'GAR', SUBSTR(msk.sigdep,1,3)),
msk.sigdep, msk.sigdep,
msk.codpro, msk.qteope, msk.datmvt, WF_WMS_HEURE(msk.heumvt), msk.numdeb,
' ', msk.libmsk, msk.nummsk, msk.indtrt, msk.numlot,
RPAD(msk.sigdep,12,' ')||RPAD(msk.codpro,12,' ')||RPAD(msk.numlot,12,' ')||msk.nummsk, '1.23'
from tbl t954, pro, msk
where
msk.codsoc = 0
and msk.indtrt = ' '
and pro.codsoc = msk.codsoc
and pro.codpro = msk.codpro
and WF_WMS_SUISTK(msk.sigdep, pro.codpro) = 'U'
and msk.sigdep NOT IN (select a.lib1
from tbl a
where a.codsoc = msk.codsoc
and a.codtbl = '961'
and a.lir = 'DEP')
and t954.codsoc = msk.codsoc
and t954.codtbl = '954'
and t954.cletbl = msk.codosk
UNION
-- transfer
SELECT
a.codsoc, a.codosk, DECODE(SUBSTR(a.sigdep,1,3), 'GER', 'GAR', SUBSTR(a.sigdep,1,3)),
a.sigdep, b.sigdep,
a.codpro, a.qteope, a.datmvt, WF_WMS_HEURE(a.heumvt), ost.reflot,
ost.datlc,DECODE(a.typeve, 'RET', a.typeve||a.numeve, a.libmsk), a.nummsk, a.indtrt, a.numlot,
RPAD(a.sigdep,12,' ')||RPAD(a.codpro,12,' ')||RPAD(a.numlot,12,' ')||a.nummsk , '1.23'
from ost, pro, msk a, msk b
where
a.codsoc=0
and a.codosk = 'WHOR'
and a.indtrt = ' '
and b.codsoc = a.codsoc
and b.codpro = a.codpro
and b.numlot = a.numlot
and b.numdeb = a.numdeb
and b.datmvt = a.datmvt
and b.heumvt = a.heumvt
and b.qteope = a.qteope
and b.codosk = 'WHIR'
and ost.codsoc = a.codsoc
and ost.codpro = a.codpro
and ost.numlot = a.numlot
and pro.codsoc = a.codsoc
and pro.codpro = a.codpro
and WF_WMS_SUISTK(a.sigdep, pro.codpro) in ('L', 'X')
and a.sigdep NOT IN (select t.lib1
from tbl t
where t.codsoc = a.codsoc
and t.codtbl = '961'
and t.lir = 'DEP')
UNION
-- Transfer
SELECT
a.codsoc, a.codosk, DECODE(SUBSTR(a.sigdep,1,3), 'GER', 'GAR', SUBSTR(a.sigdep,1,3)),
a.sigdep, b.sigdep,
a.codpro, a.qteope, a.datmvt, WF_WMS_HEURE(a.heumvt), ' ',
' ', DECODE(a.typeve, 'RET', a.typeve||a.numeve, a.libmsk), a.nummsk, a.indtrt, a.numlot,
RPAD(a.sigdep,12,' ')||RPAD(a.codpro,12,' ')||RPAD(a.numlot,12,' ')||a.nummsk , '1.23'
from pro, msk a, msk b
where
a.codsoc=0
and a.indtrt = ' '
and a.codosk = 'WHOR'
and b.codsoc = a.codsoc
and b.codpro = a.codpro
and b.numlot = a.numlot
and b.numdeb = a.numdeb
and b.datmvt = a.datmvt
and b.heumvt = a.heumvt
and b.qteope = a.qteope
and b.codosk = 'WHIR'
and pro.codsoc = a.codsoc
and pro.codpro = a.codpro
and WF_WMS_SUISTK(a.sigdep, pro.codpro) in ('S', 'E')
and a.sigdep NOT IN (select t.lib1
from tbl t
where t.codsoc = a.codsoc
and t.codtbl = '961'
and t.lir = 'DEP')
UNION
-- TRFINI
SELECT
a.codsoc, a.codosk, DECODE(SUBSTR(a.sigdep,1,3), 'GER', 'GAR', SUBSTR(a.sigdep,1,3)),
a.sigdep, b.sigdep,
a.codpro, a.qteope, a.datmvt,WF_WMS_HEURE(a.heumvt), a.numdeb,
' ', DECODE(a.typeve, 'RET', a.typeve||a.numeve, a.libmsk), a.nummsk, a.indtrt, a.numlot,
RPAD(a.sigdep,12,' ')||RPAD(a.codpro,12,' ')||RPAD(a.numlot,12,' ')||a.nummsk, '1.23'
from pro, msk a, msk b
where
a.codsoc=0
and a.indtrt = ' '
and a.codosk = 'WHOR'
and b.codsoc = a.codsoc
and b.codpro = a.codpro
and b.numlot = a.numlot
and b.numdeb = a.numdeb
and b.datmvt = a.datmvt
and b.heumvt = a.heumvt
and b.qteope = a.qteope
and b.codosk = 'WHIR'
and pro.codsoc = a.codsoc
and pro.codpro = a.codpro
and WF_WMS_SUISTK(a.sigdep, pro.codpro) = 'U'
and a.sigdep NOT IN (select t.lib1
from tbl t
where t.codsoc = a.codsoc
and t.codtbl = '961'
and t.lir = 'DEP')
Please help...
Thanks
I see three problems, one of which was pointed out by Ignacio Ruiz.
#1 IN (SELECT ...) and NOT IN (SELECT ...) tend to be slow in Oracle 8i because the subquery may be processed once per candidate row. It is often more efficient to convert this syntax into an outer join and then test whether the join column IS NOT NULL or IS NULL, depending on whether you are replacing IN (SELECT ...) or NOT IN (SELECT ...) syntax.
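That rewrite pattern, reduced to a minimal sketch with hypothetical tables t1 and t2, looks like this:

```sql
-- Original form: in 8i the correlated subquery may run once per row
-- SELECT t1.* FROM t1
-- WHERE t1.col NOT IN (SELECT t2.col FROM t2 WHERE t2.grp = t1.grp);

-- Outer-join rewrite: join to the subquery's source table, then keep
-- only the rows that found no match (the joined column comes back NULL)
SELECT t1.*
FROM   t1, t2
WHERE  t2.grp (+) = t1.grp
AND    t2.col (+) = t1.col
AND    t2.col IS NULL;
```

Note that this is only a safe substitute when the subquery column cannot be NULL and duplicates in t2 cannot multiply the result rows, which is why the full example below joins to a SELECT DISTINCT inline view.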
#2 It appears that there are a couple of PL/SQL function calls, each of which causes a context switch between the SQL and PL/SQL engines and can hinder performance: WF_WMS_HEURE(msk.heumvt), WF_WMS_SUISTK(msk.sigdep, pro.codpro), WF_WMS_HEURE(a.heumvt), etc.
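One rough way to gauge how much those PL/SQL calls cost (a sketch only; it assumes you can run the view's tables interactively in SQL*Plus) is to time the same scan with and without the function call:

```sql
SET TIMING ON

-- Baseline: scan the driving table without any PL/SQL call
SELECT COUNT(heumvt) FROM msk WHERE codsoc = 0;

-- Same scan, but forcing one SQL-to-PL/SQL context switch per row
SELECT COUNT(WF_WMS_HEURE(heumvt)) FROM msk WHERE codsoc = 0;
```

If the second statement is dramatically slower, it may be worth expressing the function's logic inline with plain SQL (DECODE, SUBSTR, etc.) where possible.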
#3 UNION syntax is used rather than UNION ALL. UNION forces a SORT UNIQUE over the combined result to remove duplicates; if possible, use UNION ALL instead.
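The difference matters because UNION implies the SORT UNIQUE visible at the top of the plan above, while UNION ALL simply concatenates the branch results. A tiny illustration:

```sql
-- UNION: sorts the combined result and removes duplicates
SELECT 1 FROM dual UNION SELECT 1 FROM dual UNION SELECT 2 FROM dual;

-- UNION ALL: no sort, duplicates are kept
SELECT 1 FROM dual UNION ALL SELECT 1 FROM dual UNION ALL SELECT 2 FROM dual;
```

Since each branch of this view filters on a different WF_WMS_SUISTK result ('L'/'X' vs. 'S'/'E' vs. 'U'), the branches may already be disjoint, in which case UNION ALL would return the same rows without the sort. Verify that before switching.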
An example of fixing problem #1, adjusting just the portion of the query before the first "UNION":
CREATE OR REPLACE VIEW "SOC1"."WV_WMS_STOCK_MOVEMENT" ("CODSOC",
"CODOSK","WAREHOUSE_SITE","SIGDEP_FROM","SIGDEP_TO","CODPRO",
"QTEOPE","DATMVT","HEUMVT","REFLOT","DATLC","LIBMSK","NUMMSK",
"INDTRT","NUMLOT","ID","PUMP") AS
SELECT
msk.codsoc, msk.codosk, DECODE(SUBSTR(msk.sigdep,1,3), 'GER', 'GAR', SUBSTR(msk.sigdep,1,3)),
msk.sigdep, msk.sigdep,
msk.codpro, msk.qteope, msk.datmvt, WF_WMS_HEURE(msk.heumvt), ost.reflot,
ost.datlc, msk.libmsk, msk.nummsk, msk.indtrt, msk.numlot,
RPAD(msk.sigdep,12,' ')||RPAD(msk.codpro,12,' ')||RPAD(msk.numlot,12,' ')||msk.nummsk, '1.23'
from tbl t954, pro, ost, msk,
(select distinct
a.codsoc,
a.lib1
from
tbl a
where
a.codtbl = '961'
and a.lir = 'DEP') a
where
msk.codsoc = 0
and msk.indtrt = ' '
and ost.codsoc = msk.codsoc
and ost.codpro = msk.codpro
and ost.numlot = msk.numlot
and pro.codsoc = msk.codsoc
and pro.codpro = msk.codpro
and WF_WMS_SUISTK(msk.sigdep, pro.codpro) in ('L', 'X')
and msk.codsoc=a.codsoc(+)
and msk.sigdep=a.lib1(+)
and a.codsoc is null
and t954.codsoc = msk.codsoc
and t954.codtbl = '954'
and t954.cletbl = msk.codosk
UNION
-- Single MVT
SELECT
msk.codsoc, msk.codosk, DECODE(SUBSTR(msk.sigdep,1,3), 'GER', 'GAR', SUBSTR(msk.sigdep,1,3)),
msk.sigdep, msk.sigdep,
msk.codpro, msk.qteope, msk.datmvt, WF_WMS_HEURE(msk.heumvt), ' ',
' ', msk.libmsk, msk.nummsk, msk.indtrt, msk.numlot,
RPAD(msk.sigdep,12,' ')||RPAD(msk.codpro,12,' ')||RPAD(msk.numlot,12,' ')||msk.nummsk , '1.23'
from tbl t954, pro, msk
where
msk.codsoc = 0
and msk.indtrt = ' '
and pro.codsoc = msk.codsoc
and pro.codpro = msk.codpro
and WF_WMS_SUISTK(msk.sigdep, pro.codpro) in ('S', 'E')
and msk.sigdep NOT IN (select a.lib1
from tbl a
where a.codsoc = msk.codsoc
and a.codtbl = '961'
and a.lir = 'DEP')
and t954.codsoc = msk.codsoc
and t954.codtbl = '954'
and t954.cletbl = msk.codosk
UNION
-- Single MVT
SELECT
msk.codsoc, msk.codosk, DECODE(SUBSTR(msk.sigdep,1,3), 'GER', 'GAR', SUBSTR(msk.sigdep,1,3)),
msk.sigdep, msk.sigdep,
msk.codpro, msk.qteope, msk.datmvt, WF_WMS_HEURE(msk.heumvt), msk.numdeb,
' ', msk.libmsk, msk.nummsk, msk.indtrt, msk.numlot,
RPAD(msk.sigdep,12,' ')||RPAD(msk.codpro,12,' ')||RPAD(msk.numlot,12,' ')||msk.nummsk, '1.23'
from tbl t954, pro, msk
where
msk.codsoc = 0
and msk.indtrt = ' '
and pro.codsoc = msk.codsoc
and pro.codpro = msk.codpro
and WF_WMS_SUISTK(msk.sigdep, pro.codpro) = 'U'
and msk.sigdep NOT IN (select a.lib1
from tbl a
where a.codsoc = msk.codsoc
and a.codtbl = '961'
and a.lir = 'DEP')
and t954.codsoc = msk.codsoc
and t954.codtbl = '954'
and t954.cletbl = msk.codosk
UNION
-- transfer
SELECT
a.codsoc, a.codosk, DECODE(SUBSTR(a.sigdep,1,3), 'GER', 'GAR', SUBSTR(a.sigdep,1,3)),
a.sigdep, b.sigdep,
a.codpro, a.qteope, a.datmvt, WF_WMS_HEURE(a.heumvt), ost.reflot,
ost.datlc,DECODE(a.typeve, 'RET', a.typeve||a.numeve, a.libmsk), a.nummsk, a.indtrt, a.numlot,
RPAD(a.sigdep,12,' ')||RPAD(a.codpro,12,' ')||RPAD(a.numlot,12,' ')||a.nummsk , '1.23'
from ost, pro, msk a, msk b
where
a.codsoc=0
and a.codosk = 'WHOR'
and a.indtrt = ' '
and b.codsoc = a.codsoc
and b.codpro = a.codpro
and b.numlot = a.numlot
and b.numdeb = a.numdeb
and b.datmvt = a.datmvt
and b.heumvt = a.heumvt
and b.qteope = a.qteope
and b.codosk = 'WHIR'
and ost.codsoc = a.codsoc
and ost.codpro = a.codpro
and ost.numlot = a.numlot
and pro.codsoc = a.codsoc
and pro.codpro = a.codpro
and WF_WMS_SUISTK(a.sigdep, pro.codpro) in ('L', 'X')
and a.sigdep NOT IN (select t.lib1
from tbl t
where t.codsoc = a.codsoc
and t.codtbl = '961'
and t.lir = 'DEP')
UNION
-- Transfer
SELECT
a.codsoc, a.codosk, DECODE(SUBSTR(a.sigdep,1,3), 'GER', 'GAR', SUBSTR(a.sigdep,1,3)),
a.sigdep, b.sigdep,
a.codpro, a.qteope, a.datmvt, WF_WMS_HEURE(a.heumvt), ' ',
' ', DECODE(a.typeve, 'RET', a.typeve||a.numeve, a.libmsk), a.nummsk, a.indtrt, a.numlot,
RPAD(a.sigdep,12,' ')||RPAD(a.codpro,12,' ')||RPAD(a.numlot,12,' ')||a.nummsk , '1.23'
from pro, msk a, msk b
where
a.codsoc=0
and a.indtrt = ' '
and a.codosk = 'WHOR'
and b.codsoc = a.codsoc
and b.codpro = a.codpro
and b.numlot = a.numlot
and b.numdeb = a.numdeb
and b.datmvt = a.datmvt
and b.heumvt = a.heumvt
and b.qteope = a.qteope
and b.codosk = 'WHIR'
and pro.codsoc = a.codsoc
and pro.codpro = a.codpro
and WF_WMS_SUISTK(a.sigdep, pro.codpro) in ('S', 'E')
and a.sigdep NOT IN (select t.lib1
from tbl t
where t.codsoc = a.codsoc
and t.codtbl = '961'
and t.lir = 'DEP')
UNION
-- TRFINI
SELECT
a.codsoc, a.codosk, DECODE(SUBSTR(a.sigdep,1,3), 'GER', 'GAR', SUBSTR(a.sigdep,1,3)),
a.sigdep, b.sigdep,
a.codpro, a.qteope, a.datmvt,WF_WMS_HEURE(a.heumvt), a.numdeb,
' ', DECODE(a.typeve, 'RET', a.typeve||a.numeve, a.libmsk), a.nummsk, a.indtrt, a.numlot,
RPAD(a.sigdep,12,' ')||RPAD(a.codpro,12,' ')||RPAD(a.numlot,12,' ')||a.nummsk, '1.23'
from pro, msk a, msk b
where
a.codsoc=0
and a.indtrt = ' '
and a.codosk = 'WHOR'
and b.codsoc = a.codsoc
and b.codpro = a.codpro
and b.numlot = a.numlot
and b.numdeb = a.numdeb
and b.datmvt = a.datmvt
and b.heumvt = a.heumvt
and b.qteope = a.qteope
and b.codosk = 'WHIR'
and pro.codsoc = a.codsoc
and pro.codpro = a.codpro
and WF_WMS_SUISTK(a.sigdep, pro.codpro) = 'U'
and a.sigdep NOT IN (select t.lib1
from tbl t
where t.codsoc = a.codsoc
and t.codtbl = '961'
and t.lir = 'DEP')
Try changing the remaining NOT IN subqueries to outer joins, and compare the performance.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.
Similar Messages
-
Response time of query utterly upside down because of small where clause change
Hello,
I'm wondering why a small change to a where clause in a query has a dramatic impact on its response time.
Here is the query, with its plan and a few details:
select * from (
SELECT xyz_id, time_oper, ...
FROM (SELECT
d.xyz_id xyz_id,
TO_CHAR (di.time_operation, 'DD/MM/YYYY') time_oper,
di.time_operation time_operation,
UPPER (d.delivery_name || ' ' || d.delivery_firstname) custname,
d.ticket_language ticket_language, d.payed,
dsum.delivery_mode delivery_mode,
d.station_delivery station_delivery,
d.total_price total_price, d.crm_cust_id custid,
d.bene_cust_id person_id, d.xyz_num, dpe.ers_pnr ers_pnr,
d.delivery_name,
TO_CHAR (dsum.first_travel_date, 'DD/MM/YYYY') first_traveldate,
d.crm_company custtype, UPPER (d.client_name) partyname,
getremark(d.xyz_num) remark,
d.client_app, di.work_unit, di.account_unit,
di.distrib_code,
UPPER (d.crm_name || ' ' || d.crm_firstname) crm_custname,
getspecialproduct(di.xyz_id) specialproduct
FROM xyz d, xyz_info di, xyz_pnr_ers dpe, xyz_summary dsum
WHERE d.cancel_state = 'N'
-- AND d.payed = 'N'
AND dsum.delivery_mode NOT IN ('DD')
AND dsum.payment_method NOT IN ('AC', 'AG')
AND d.xyz_blocked IS NULL
AND di.xyz_id = d.xyz_id
AND di.operation = 'CREATE'
AND dpe.xyz_id(+) = d.xyz_id
AND EXISTS (SELECT 1
FROM xyz_ticket dt
WHERE dt.xyz_id = d.xyz_id)
AND dsum.xyz_id = di.xyz_id
ORDER BY di.time_operation DESC)
WHERE ROWNUM < 1002
) view
WHERE view.DISTRIB_CODE in ('NS') AND view.TIME_OPERATION > TO_DATE('20/5/2013', 'dd/MM/yyyy')
plan with "d.payed = 'N'" (no rows, *extremely* slow):
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1001 | 4166K| 39354 (1)| 00:02:59 |
|* 1 | VIEW | | 1001 | 4166K| 39354 (1)| 00:02:59 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1001 | 4166K| 39354 (1)| 00:02:59 |
| 4 | NESTED LOOPS OUTER | | 1001 | 130K| 39354 (1)| 00:02:59 |
| 5 | NESTED LOOPS SEMI | | 970 | 111K| 36747 (1)| 00:02:47 |
| 6 | NESTED LOOPS | | 970 | 104K| 34803 (1)| 00:02:39 |
| 7 | NESTED LOOPS | | 970 | 54320 | 32857 (1)| 00:02:30 |
|* 8 | TABLE ACCESS BY INDEX ROWID| XYZ_INFO | 19M| 704M| 28886 (1)| 00:02:12 |
| 9 | INDEX FULL SCAN DESCENDING| DNIN_IDX_NI5 | 36967 | | 296 (2)| 00:00:02 |
|* 10 | TABLE ACCESS BY INDEX ROWID| XYZ_SUMMARY | 1 | 19 | 2 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | SB11_DSMM_XYZ_UK | 1 | | 1 (0)| 00:00:01 |
|* 12 | TABLE ACCESS BY INDEX ROWID | XYZ | 1 | 54 | 2 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | XYZ_PK | 1 | | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | DNTI_NI1 | 32M| 249M| 2 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | XYZ_PNR_ERS | 1 | 15 | 4 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | DNPE_XYZ | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("DISTRIB_CODE"='NS' AND "TIME_OPERATION">TO_DATE(' 2013-05-20', 'syyyy-mm-dd'))
2 - filter(ROWNUM<1002)
8 - filter("DI"."OPERATION"='CREATE')
10 - filter("DSUM"."DELIVERY_MODE"<>'DD' AND "DSUM"."PAYMENT_METHOD"<>'AC' AND "DSUM"."PAYMENT_METHOD"<>'AG')
11 - access("DSUM"."XYZ_ID"="DI"."XYZ_ID")
12 - filter("D"."PAYED"='N' AND "D"."XYZ_BLOCKED" IS NULL AND "D"."CANCEL_STATE"='N')
^^^^^^^^^^^^^^
13 - access("DI"."XYZ_ID"="D"."XYZ_ID")
14 - access("DT"."XYZ_ID"="D"."XYZ_ID")
16 - access("DPE"."XYZ_ID"(+)="D"."XYZ_ID")
plan without "d.payed = 'N'" (+/- 450 rows, less than two minutes):
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1001 | 4166K| 58604 (1)| 00:04:27 |
|* 1 | VIEW | | 1001 | 4166K| 58604 (1)| 00:04:27 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1002 | 4170K| 58604 (1)| 00:04:27 |
| 4 | NESTED LOOPS OUTER | | 1002 | 130K| 58604 (1)| 00:04:27 |
| 5 | NESTED LOOPS SEMI | | 1002 | 115K| 55911 (1)| 00:04:14 |
| 6 | NESTED LOOPS | | 1476 | 158K| 52952 (1)| 00:04:01 |
| 7 | NESTED LOOPS | | 1476 | 82656 | 49992 (1)| 00:03:48 |
|* 8 | TABLE ACCESS BY INDEX ROWID| XYZ_INFO | 19M| 704M| 43948 (1)| 00:03:20 |
| 9 | INDEX FULL SCAN DESCENDING| DNIN_IDX_NI5 | 56244 | | 449 (1)| 00:00:03 |
|* 10 | TABLE ACCESS BY INDEX ROWID| XYZ_SUMMARY | 1 | 19 | 2 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | AAAA_DSMM_XYZ_UK | 1 | | 1 (0)| 00:00:01 |
|* 12 | TABLE ACCESS BY INDEX ROWID | XYZ | 1 | 54 | 2 (0)| 00:00:01 |
|* 13 | INDEX UNIQUE SCAN | XYZ_PK | 1 | | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | DNTI_NI1 | 22M| 168M| 2 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | XYZ_PNR_ERS | 1 | 15 | 4 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | DNPE_XYZ | 1 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("DISTRIB_CODE"='NS' AND "TIME_OPERATION">TO_DATE(' 2013-05-20', 'syyyy-mm-dd'))
2 - filter(ROWNUM<1002)
8 - filter("DI"."OPERATION"='CREATE')
10 - filter("DSUM"."DELIVERY_MODE"<>'DD' AND "DSUM"."PAYMENT_METHOD"<>'AC' AND "DSUM"."PAYMENT_METHOD"<>'AG')
11 - access("DSUM"."XYZ_ID"="DI"."XYZ_ID")
12 - filter("D"."XYZ_BLOCKED" IS NULL AND "D"."CANCEL_STATE"='N')
13 - access("DI"."XYZ_ID"="D"."XYZ_ID")
14 - access("DT"."XYZ_ID"="D"."XYZ_ID")
16 - access("DPE"."XYZ_ID"(+)="D"."XYZ_ID")
XYZ.PAYED values breakdown:
P COUNT(1)
Y 12202716
N 9430207
number of records per table:
TABLE_NAME NUM_ROWS
XYZ 21606776
XYZ_INFO 186301951
XYZ_PNR_ERS 9716471
XYZ_SUMMARY 21616607
Everything inside the "select * from (...) view" parentheses is defined in a view. We've noticed that the line "AND d.payed = 'N'" (commented above) is the guilty clause: the query takes one or two seconds to return between 400 and 500 rows if this line is removed; when it is included, the response time switches to *hours* (sic!) but the result set is then empty (no rows returned). The plan is exactly the same whether "d.payed = 'N'" is added or removed; the number of steps, access paths, join order etc. are identical, and only the Rows/Bytes/Cost column values change, as you can see.
We've found no other way of solving this perf issue but by taking out this "d.payed = 'N'" condition and setting it outside the view along with view.DISTRIB_CODE and view.TIME_OPERATION.
But we would like to understand why such a small change involving the XYZ.PAYED column degrades performance so dramatically, and we'd like to be able to tell the optimizer to apply the payed = 'N' check last, just as we did manually, through the use of a hint if possible...
Has anybody encountered such behaviour before? Do you have any advice on a hint that would achieve the same response time as moving the payed = 'N' condition outside the view definition?
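One approach that might approximate what you did manually (a sketch under assumptions: "my_view" is a hypothetical name for your view, and hint behaviour should be verified on your exact version) is to keep the filter outside the view and discourage the optimizer from merging it into the view text with NO_MERGE:

```sql
-- Keep payed = 'N' out of the view definition and ask the optimizer
-- not to merge the view, so the filter is applied to the view's
-- (small) result set instead of being pushed down into the join
SELECT /*+ NO_MERGE(v) */ *
FROM   my_view v
WHERE  v.distrib_code = 'NS'
AND    v.time_operation > TO_DATE('20/05/2013', 'dd/mm/yyyy')
AND    v.payed = 'N';
```

Whether this reproduces your fast manual rewrite depends on whether predicate pushing, rather than the plan shape itself, is what changes the work done; checking the Predicate Information section of both plans should confirm it.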
Thanks a lot in advance.
Regards,
Seb
I am really sorry I couldn't get back to this forum earlier...
Thanks to you all for your answers.
First I'd just like to correct a small mistake I made, when writing
"the query takes one or two seconds": I meant one or 2 *minutes*. Sorry.
> What table/columns are indexed by "DNTI_NI1"?
aaaa.dnti_ni1 is an index ON aaaa.xyz_ticket(xyz_id, ticket_status)
> And what are the indexes on xyz table?
Too many:
XYZ_ARCHIV_STATE_IND ARCHIVE_STATE
XYZ_BENE_CUST_ID_IND BENE_CUST_ID
XYZ_BENE_TTL_IND BENE_TTL
XYZ_CANCEL_STATE_IND CANCEL_STATE
XYZ_CLIENT_APP_NI CLIENT_APP
XYZ_CRM_CUST_ID_IND CRM_CUST_ID
XYZ_DELIVE_MODE_IND DELIVERY_MODE
XYZ_DELIV_BLOCK_IND DELIVERY_BLOCKED
XYZ_DELIV_STATE_IND DELIVERY_STATE
XYZ_XYZ_BLOCKED XYZ_BLOCKED
XYZ_FIRST_TRAVELDATE_IND FIRST_TRAVELDATE
XYZ_MASTER_XYZ_IND MASTER_XYZ_ID
XYZ_ORG_ID_NI ORG_ID
XYZ_PAYMT_STATE_IND PAYMENT_STATE
XYZ_PK XYZ_ID
XYZ_TO_PO_IDX TO_PO
XYZ_UK XYZ_NUM
For example, XYZ_CANCEL_STATE_IND on CANCEL_STATE seems superfluous to me, as the column can only contain Y or N (or be null)...
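Before dropping any of those indexes, a quick selectivity check (sketch; table and column names as listed above) shows whether an index can ever be useful:

```sql
-- A two-value column like CANCEL_STATE is rarely worth a b-tree index:
-- if one value covers most rows, a range scan on that index ends up
-- visiting a large fraction of the table anyway
SELECT cancel_state, COUNT(*)
FROM   xyz
GROUP  BY cancel_state;
```

The same check applied to ARCHIVE_STATE, DELIVERY_BLOCKED, and the other single-column flag indexes would indicate which of them are candidates for removal.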
> Have you traced both cases to compare statistics? What differences did it reveal?
Yes, but it only shows more of *everything* (more table blocks accessed, and the same for index blocks, for almost all objects involved) for the slower query!
Grepping for WAIT in the two trace files produced for each statement, and counting the accesses per object ID, shows that the quicker query requires far fewer I/Os; the slower one overall needs many more blocks to be read (except for indexes such as DNSG_NI1 or DNPE_XYZ). Below I replaced obj# with the table/index name; the first column shows how many times the object was accessed in the trace file (I Ctrl-C'ed my second execution, of course, so the figures should be much higher!):
[login.hostname] ? grep WAIT OM-quick.trc|...|sort|uniq -c
335 XYZ_SUMMARY
20816 AAAA_DSMM_XYZ_UK (index on xyz_summary.xyz_id)
192 XYZ
4804 XYZ_INFO
246 XYZ_SEGMENT
6 XYZ_REMARKS
63 XYZ_PNR_ERS
719 XYZ_PK (index on xyz.xyz_id)
2182 DNIN_IDX_NI5 (index on xyz.xyz_id)
877 DNSG_NI1 (index on xyz_segment.xyz_id, segment_status)
980 DNTI_NI1 (index on xyz_ticket.xyz_id, ticket_status)
850 DNPE_XYZ (index on xyz_pnr_ers.xyz_id)
[login.hostname] ? grep WAIT OM-slow.trc|...|sort|uniq -c
1733 XYZ_SUMMARY
38225 AAAA_DSMM_XYZ_UK (index on xyz_summary.xyz_id)
4359 XYZ
12536 XYZ_INFO
65 XYZ_SEGMENT
17 XYZ_REMARKS
20 XYZ_PNR_ERS
8598 XYZ_PK
7406 DNIN_IDX_NI5
29 DNSG_NI1
2475 DNTI_NI1
27 DNPE_XYZ
The overwhelmingly dominant wait event is by far 'db file sequential read':
[login.hostname] ? grep WAIT OM-*elect.txt|cut -d"'" -f2|sort |uniq -c
36 SQL*Net message from client
38 SQL*Net message to client
107647 db file sequential read
1 latch free
1 latch: object queue header operation
3 latch: session allocation
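The same picture can usually be confirmed from the data dictionary instead of grepping trace files (a sketch; it assumes SELECT privilege on the V$ views):

```sql
-- Top wait events instance-wide since startup; with 'db file
-- sequential read' dominating, single-block index/table reads
-- are where the time goes
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event NOT LIKE 'SQL*Net%'
ORDER  BY time_waited DESC;
```

Filtering v$session_event by the session running the slow query would give the per-statement view that matches the trace-file counts above.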
> It will be worth knowing the estimations...
It shows the same plan with a higher cost when PAYED = 'N' is added:
SQL> select * from sb11.dnr d
2* where d.dnr_blocked IS NULL and d.cancel_state = 'N'
SQL> /
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1002 | 166K| 40 (3)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| XYZ | 1002 | 166K| 40 (3)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | XYZ_CANCEL_STATE_IND | | | 8 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("D"."XYZ_BLOCKED" IS NULL)
2 - access("D"."CANCEL_STATE"='N')
SQL> select * from sb11.dnr d
2 where d.dnr_blocked IS NULL and d.cancel_state = 'N'
3* and d.payed = 'N'
SQL> /
Execution Plan
Plan hash value: 1292668880
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1001 | 166K| 89 (3)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID| XYZ | 1001 | 166K| 89 (3)| 00:00:01 |
| 2 | INDEX RANGE SCAN | XYZ_CANCEL_STATE_IND | | | 15 (0)| 00:00:01 |
-
Slow response time on query from one schema to another
I've got two separate databases (one test, the other production). The schemas are pretty much identical and contain the same data. This particular query is executed from .NET (a SQL text command, not a stored procedure) and takes about 3 seconds against my test db, but anywhere from 45 seconds to 5 minutes on the production system. I've executed the same statement using AquaData Studio and get the same response times. I didn't build the queries or the schemas, nor do I manage the database servers, but I am responsible for making this work now. These queries have worked in the past, but have recently been changed. I was able to speed this one up with some SQL changes, but I'm not sure about the others. I've checked, and the production db has indexes on the fields I'm querying while the test db does not; I would expect test to be slower, but it's not. Any ideas I could try? On Monday I'll ask our server admins to check the server end of it and see if they need to reboot or something. In the meantime, I'm going to rework the SQL.
Any ideas you can provide would be much appreciated.
Sincerely,
Zam
when your query takes long...
When your query takes too long ... -
Having response time issues using Studio to manage 3000+ forms
We are currently using Documaker Studio to create and maintain our forms, of which we have thousands. Once we create the form we export it to a very old version of Documerge where it is then used in our policy production.
The problem is that because we have so many forms/sections, every time we click on "SECTIONS" in Studio it takes a significant amount of time to load the screen that lists all of the sections. Many of these forms/sections are old and will never change, but we want to retain access to them in the future.
What is the best way to "backup" all these forms somewhere where they are still accessible? Ideally I think I would like to have one workspace (let's call it "PRODUCTION") that has all 3000+ forms and delete the older resources from our existing workspace (called "FORMS") so that just has the forms that we are currently working on. This way the response time in the "FORMS" workspace would be much better. Couple questions:
1. How would I copy my existing workspace "FORMS" (and all the resources in it) to a new workspace called "PRODUCTION"?
2. How would I delete from the "FORMS" workspace all of the older resources?
3. Once I am satisfied with a new form/section in my "FORMS" workspace how would I move it to "PRODUCTION"?
4. How could I move a form/section from "PRODUCTION" back into "FORMS" in order to make corrections, or use it as a base for a new form down the road?
5. Most importantly....Is there a better way to do this?
Again, we are only using this workspace for forms creation and not using it to generate output...we will be doing that in the future once we upgrade from the very old Documerge on the mainframe, to Documaker Studio.
Many thanks to any of you who can help me with this!
However, I am a little confused on the difference between extracting and promoting. Am I correct in assuming that I would go into my PROD workspace and EXTRACT the resources that I want to continue to work on? I would then go into my new, and empty, DEV workspace and IMPORT FILES (or IMPORT LIBRARY?) using the file(s) that I created with the EXTRACT? In effect, I would have two totally separate workspaces, one called DEV and one called PROD?
Extraction is writing a copy of a resource from the library out to disk. Promotion is copying a resource from one library to another, with the option of modifying the metadata values of the source and target resources. You would use extract in a case where you don't have access to both libraries to do a promote.
An example promotion scenario would go something like this. You have resources in the source (DEV) that you want to promote to the target (PROD). Items to be promoted are tagged with the MODE = "To Promote". When you perform the promotion, you can select the items that you want to promote with the filter MODE="To Promote". When you perform the promotion, you can also configure Studio to set the MODE of the resource(s) in the source to be MODE="To Delete", and set the MODE of the resource(s) in the target to be MODE="" (empty). Then you can go back and delete the resources from the source (DEV) where MODE=DELETE.
Once you have the libraries configured you could bypass the whole extract/import bit and just use promote. The source would be PROD, and the target would be DEV. During promotion, set the target MODE = "To Do", and source MODE = "In Development". In this fashion you will see which resources in PROD are currently being edited in DEV (because in PROD the MODE = "In Development"). When development is completed, change the MODE in DEV to "To Promote", then proceed with the promotion scenario described above.
I am a bit confused on the PROMOTE function and the libraries that have the _DEV _TEST _PROD suffixes. This looks like it duplicates the entire workspace to new libraries _PROD but it is all part of the same workspace, not two separate workspaces? Any clarification here would be helpful.
Those suffixes are just attached by default; these suffixes don't mean anything to Documaker. You could name your library PROD and use it for DEV. It might be confusing though ;-) The usual best practice is to name the library and subsequent tablespaces/schemas according to their use. It's possible to have multiple libraries within a single tablespace or schema (but not recommended to mix PROD and non-PROD libraries).
Getting there, I think!
-A -
Discoverer Performance/ Response Time
Hi everyone,
I have a few questions regarding the response time for discoverer.
I have Table A with 120 columns. I need to generate a report based on 12 columns from this table A.
The questions are whether the factors bellow contribute to the response time of discoverer.
1. The number of items included in the business area folder (i.e. whether to include 120 cols or just 12 cols)
2. The actual size of the physical table (120 cols) although I only selected 12 cols. If the actual size of the physical table is only 12 cols, would it improve the performance?
3. Will more parameters increase the processing time?
4. Does Joins increase the processing time?
5. Will using Custom Folder and writing an sql statement to select the 12 columns increase the performance?
Really appreciate anyone's help on this.
Cheers,
Angeline
Hi,
NP and Rod, thanks a lot for your replies!
Actually I was experiencing a different thing that contradicts your replies.
1. When I reduced the no of items included in my Biz Area from 120 to 12 the response time improve significantly from around 5 minutes to 2-3 minutes.
2. When I tried creating a dummy table with just the 12 cols needed for the report, I got a very fast response time, i.e. 1 second to generate the report. But of course the dummy table contains much less data (only around 500 K records). Btw, is Discoverer able to handle large databases? What is the largest number of records it can handle?
3. When I add more parameters it seems to add more processing time to the discoverer.
4. Thanks for the clarification on this one.
5. And the funny thing is, when I use custom folder to just select the 12 columns, the performance also significantly improves with estimated query time reduced from 2 minutes plus to just 1 mins 30 secs. But still the performance time is inconsistent. Sometimes it only takes around 1 mins 40 secs, but sometimes it can run up to 3 mintues for the same data.
Now I am creating my report using the custom folder cause it has the best response time so far for me. But based on your replies it's not really encouraged to use the custom folder?
I need to improve the response time for the Discoverer Viewer as the response time is very slow and users don't really like it.
Would appreciate anyone's help in solving this issue :) Thanks..
Cheers,
Angeline -
Slow response time for "Print Settings" dialog in CS6
steps performed to resolve - with no success:
disabled NET DRIVER HPZ12
updated CS6 with latest fixes
printer driver is up to date
usb driver up to date
swapped USB ports across three ports
ran font test with no errors
changed default Win 7 printer to Adobe PDF.
Windows 7 has latest fixes.
Configuration is:
Win 7 Home 64 bit
8 core AMD
16 GB memory
Photoshop CS6 64 bit
HP c410a printer - USB connected
32-bit Photoshop CS4 on the same platform does not have the response time issue.
From Adobe Support: 17 secs response time is a normal time interval for the "Print Settings" dialog.
-
Slow Response Time Joining Multiple Domain Indexes
Hi All,
I am working with a schema that has several full-text-searchable columns along with several other conventional columns. I noticed that when I join on 2 domain indexes, the response time for query completion nearly doubles. However, if I combine my 2 CLOB columns into 1 CLOB, the extra cost of finding the intersection of the 2 row sets can be saved.
In my query, I am taking 2 sets of random high probability words (the top 1000 sorted by token_count DESC).
NOTE: All of my query execution times are taken with words not previously used to avoid caching by the engine..
HERE IS THE SLOW VERSION OF THE QUERY WHICH REFERENCES THE BODY CLOB TWICE:
SELECT count(NSS_ID) FROM jk_test_2 WHERE
CONTAINS (body, '( STANDARDS and HELP ) ' ) > 0
AND
CONTAINS (body, '( WORKING and LIMITED ) ' ) > 0 ;
THE EXPLAIN PLAN shows the intersection being calculated:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 99 | 3836 (0)| 00:00:47 |
| 1 | SORT AGGREGATE | | 1 | 99 | | |
| 2 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 3 | BITMAP AND | | | | | |
| 4 | BITMAP CONVERSION FROM ROWIDS| | | | | |
| 5 | SORT ORDER BY | | | | | |
|* 6 | DOMAIN INDEX | JK_BODY_NDX | | | 1284 (0)| 00:00:16 |
| 7 | BITMAP CONVERSION FROM ROWIDS| | | | | |
| 8 | SORT ORDER BY | | | | | |
|* 9 | DOMAIN INDEX | JK_BODY_NDX | | | 2547 (0)| 00:00:31 |
Predicate Information (identified by operation id):
6 - access("CTXSYS"."CONTAINS"("BODY",'( PURCHASE and POSSIBLE)')>0 AND
"CTXSYS"."CONTAINS"("BODY",'(NATIONAL and OFFICIAL)')>0)
9 - access("CTXSYS"."CONTAINS"("BODY",'(NATIONAL and OFFICIAL)')>0)
I RAN 3 QUERIES and got these times:
Elapsed: 00:00:00.25
Elapsed: 00:00:00.21
Elapsed: 00:00:00.27
HERE IS THE QUERY RE-WRITTEN INTO A DIFFERENT FORMAT WHICH COMBINES THE 2 PARTS INTO 1:
SELECT count(NSS_ID) FROM jk_test_2 WHERE
CONTAINS (body, '( ( STANDARDS and HELP ) AND ( WORKING and LIMITED ) ) ' ) > 0;
The Plan is now:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 99 | 3207 (0)| 00:00:39 |
| 1 | SORT AGGREGATE | | 1 | 99 | | |
|* 2 | DOMAIN INDEX | JK_BODY_NDX | 5 | 495 | 3207 (0)| 00:00:39 |
Predicate Information (identified by operation id):
2 - access("CTXSYS"."CONTAINS"("BODY",'( ( FORM and GIVE ) AND (WEB and AGREED) ) ')>0)
I RAN 3 QUERIES using new words and got these times:
Elapsed: 00:00:00.12
Elapsed: 00:00:00.11
Elapsed: 00:00:00.13
Although the cost is only 15% lower, it executes twice as fast. Also, the same improvement is gained if ORs are used instead of ANDs:
With --> CONTAINS (BODY, '( ( FORM OR GIVE ) and ( WEB OR AGREED ) )') > 0
My 2 timings are .25 and .50, with the OR'ed clause above getting the better response time.
Based on this, on my project I am tempted to merge 2 totally separate CLOB columns into 1 to get the better response time. The 2 columns are 1) Body Text and 2) Codes. They logically do not belong with one another, and merging would be very awkward for the project, as it would cause a lot of fudging of the data in many places throughout the system. I have done this testing taking averages of 500 unique queries; my indexes are up to date, with full statistics computed on my tables and indexes. Response time is highly critical for this project.
Does anyone have advice that would let me obtain the good response time while avoiding the awkwardness of globbing all the data into one CLOB?
Joek
You might try adding sections and querying using WITHIN.
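Joek's suggestion can be sketched concretely. Assuming the Codes data lives in a separate CODES column alongside BODY, a MULTI_COLUMN_DATASTORE plus field sections lets one Text index (and one CONTAINS) cover both columns while keeping them logically separate. All preference, section-group, and index names below are made up for illustration:

```sql
BEGIN
  -- Feed both columns to a single Text index.
  ctx_ddl.create_preference('jk_mcds', 'MULTI_COLUMN_DATASTORE');
  ctx_ddl.set_attribute('jk_mcds', 'COLUMNS', 'body, codes');
  -- Field sections keep each column separately addressable via WITHIN.
  ctx_ddl.create_section_group('jk_secgrp', 'BASIC_SECTION_GROUP');
  ctx_ddl.add_field_section('jk_secgrp', 'body',  'body',  TRUE);
  ctx_ddl.add_field_section('jk_secgrp', 'codes', 'codes', TRUE);
END;
/

CREATE INDEX jk_combined_ndx ON jk_test_2 (body)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('datastore jk_mcds section group jk_secgrp');

-- One domain index scan, but each term still restricted to its column:
SELECT COUNT(nss_id)
FROM   jk_test_2
WHERE  CONTAINS(body,
         '((STANDARDS AND HELP) WITHIN body)
          AND ((WORKING AND LIMITED) WITHIN codes)') > 0;
```

This keeps the two columns stored separately in the schema while the index sees them as one document, so the single-CONTAINS response time should be attainable without physically merging the CLOBs.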
-
Query Tuning - Response time Statistics collection
Our application is load tested for a period of 1 hour at peak load.
During this specific period, thousands of queries get executed in the database.
What we need, for one particular query (say "select XYZ from ABC") within this 1-hour span, is statistics like:
Number of times Executed
Average Response time
Maximum response time
minimum response time
90th percentile response time (with the times sorted in ascending order, the value at the 90th percentile)
All these statistics are possible if I can get all the response times for that particular query over that 1-hour period.
I tried using SQL trace and TKPROF but was unable to get all these statistics.
The application uses connection pooling, so connections are taken as and when needed.
Any thoughts on this?
Appreciate your help.
I don't think v$sqlarea can help me out with the exact stats I need, but it certainly has a lot of other stats to take. By the way, there is no dictionary view called v$sqlstats.
There are other applications sharing the database where I am trying to capture stats for my application, so flushing a cache which currently has 30K rows is not a feasible solution.
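For what V$SQLAREA can give without per-execution capture, a sketch (this assumes 9i-style columns; ELAPSED_TIME is in microseconds and cumulative since the cursor was loaded, so the figure is an average over the cursor's lifetime, not just the test hour):

```sql
-- Execution count and average elapsed time per execution for one statement.
SELECT sql_text,
       executions,
       ROUND(elapsed_time / GREATEST(executions, 1) / 1000, 2) AS avg_elapsed_ms
FROM   v$sqlarea
WHERE  sql_text LIKE 'select XYZ from ABC%';
```

This yields only counts and averages; min/max and the 90th percentile genuinely need every individual response time, e.g. from a 10046 trace of each pooled session during the hour.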
Any more thoughts on this? -
SBWP transaction - viewing folders/sending documents Long response times
Hi all,
I have some complaints from a few users (about 3-4 out of ~3000 users in my ECC 6.0 system) within my company about their response times in the Business Workplace. In particular, they started complaining about the response time of calling transaction SBWP: for 1-2 of them it takes up to 4-5 minutes, while I and my other 2 colleagues get a response of 500 ms.
Then they wanted also to view some folders on the Workplace and they had also response times of minutes instead of seconds.
I checked that some of their shared folders, as well as the Trash Bin, had thousands of PDFs. I told them to delete the files and they deleted most of them. Still, when they want to open a folder it takes >2 minutes, while opening the same shared folder takes me 1-2 seconds.
I checked in ST03N (user profiles, single records) and 1 of them had long database calls and request time in the Analysis of ABAP/4 database requests (Single Statistical Records).
I am running out of ideas, as I cannot explain why only those 2-3 users have long response times.
Is it related to their folders in the Workplace? Where should I focus my investigation for the SBWP like transactions? Is it the case that some Oracle parameters might need to be checked?
I ran the automatic Oracle parameters check (O/S AIX 5.3, Oracle 10.2, ECC 6.0) and here are some recommendations:
fixcontrol (5705630) add with value "5705630:ON" use optimal OR concatenation; note 176754 NO 5705630:ON B 1
fixcontrol (5765456) add with value "5765456:3" no further information available NO 5765456:3 B 1
optimpeek_user_binds add with value "FALSE" avoid bind value peaking NO FALSE B 1
optimizerbetter_inlist_costing add with value "OFF" avoid preference of index supporting inlist NO OFF B 1
optimizermjc_enabled add with value "FALSE" avoid cartesean merge joins in general NO FALSE B 1
sortelimination_cost_ratio add with value "10" use non-order-by-sorted indexes (first_rows) NO 10 B 1
event (10027) add with value "10027 trace name context forever, level 1" avoid process state dump at deadlock NO 10027 trace name context forever, level 1 B 1
event (10028) add with value "10028 trace name context forever, level 1" do not wait while writing deadlock trace NO 10028 trace name context forever, level 1 B 1
event (10091) add with value "10091 trace name context forever, level 1" avoid CU Enqueue during parsing NO 10091 trace name context forever, level 1 B 1
event (10142) add with value "10142 trace name context forever, level 1" avoid Btree Bitmap Conversion plans NO 10142 trace name context forever, level 1 B 1
event (10183) add with value "10183 trace name context forever, level 1" avoid rounding during cost calculation NO 10183 trace name context forever, level 1 B 1
event (10191) add with value "10191 trace name context forever, level 1" avoid high CBO memory consumption NO 10191 trace name context forever, level 1 B 1
event (10411) add with value "10411 trace name context forever, level 1" fixes int-does-not-correspond-to-number bug NO 10411 trace name context forever, level 1 B 1
event (10629) add with value "10629 trace name context forever, level 32" influence rebuild online error handling NO 10629 trace name context forever, level 32 B 1
event (10753) add with value "10753 trace name context forever, level 2" avoid wrong values caused by prefetch; note 1351737 NO 10753 trace name context forever, level 2 B 1
event (10891) add with value "10891 trace name context forever, level 1" avoid high parsing times joining many tables NO 10891 trace name context forever, level 1 B 1
event (14532) add with value "14532 trace name context forever, level 1" avoid massive shared pool consumption NO 14532 trace name context forever, level 1 B 1
event (38068) add with value "38068 trace name context forever, level 100" long raw statistic; implement note 948197 NO 38068 trace name context forever, level 100 B 1
event (38085) add with value "38085 trace name context forever, level 1" consider cost adjust for index fast full scan NO 38085 trace name context forever, level 1 B 1
event (38087) add with value "38087 trace name context forever, level 1" avoid ora-600 at star transformation NO 38087 trace name context forever, level 1 B 1
event (44951) add with value "44951 trace name context forever, level 1024" avoid HW enqueues during LOB inserts NO
Hi Loukas,
Your message is not well formatted, so you are making it harder for people to read. However, your problem is that you have 3-4 users of SBWP with slow runtimes when accessing folders. Correct?
You mentioned that there is a large number of documents in the users' folders; these types of problems are usually caused by a large number of table joins on the SAPoffice tables specific to your users.
Firstly please refer to SAP Note 988057 Reorganization - Information.
To help with this issue you can use report RSBCS_REORG in SE38 to remove any deleted documents from the SAPoffice folders. This should speed up the access to your users documents in folders as it removes unnecessary documents from the SAPoffice tables.
If your users do not show a significant speed up of access to their SAPoffice folders please refer to SAP Note 904711 - SAPoffice: Where are documents physically stored and verify that your statistics and indexes mentioned in this note are up to date.
If neither of these helps with the issue, you can trace these users in ST12, find out which table is causing the longest runtime, and see if there is a solution to either reduce this table or improve the access method at the DB level.
Hope this helps
Michael -
How to build sql query for view object at run time
Hi,
I have a LOV on my form that is created from a view object.
View object is read-only and is created from a SQL query.
SQL query consists of few input parameters and table joins.
My scenario is such that if input parameters are passed, I have to join extra tables; otherwise, only one table is needed to fetch the results.
Can anyone please suggest how I can solve this? I want to build the query for the view object at run time based on the values passed as input parameters.
Thanks
Srikanth Addanki
As I understand, you want to change the query at run time.
If this is what you want, you can use setQuery Method then use executeQuery.
http://download.oracle.com/docs/cd/B14099_19/web.1012/b14022/oracle/jbo/server/ViewObjectImpl.html#setQuery_java_lang_String_ -
Response Time of a query in 2 different enviroment
Hi guys, Luca speaking; sorry for the badly written English.
The question is:
The same query runs on the same table (same definition, same number of rows, defined on the same kind of tablespace; the tables are analyzed).
*) I have a query in Benchmark with good results in execution time, the execution plan is really good
*) in Production the execution plan is not so good, the response time isn't comparable (hours vs seconds)
#### The Execution Plan are different ####
#### The stats are the same ####
This is table storico.FLUSSO_ASTCM_INC A, with these stats in Benchmark:
chk Owner Name Partition Subpartition Tablespace NumRows Blocks EmptyBlocks AvgSpace ChainCnt AvgRowLen AvgSpaceFLBlocks NumFLBlocks UserStats GlobalStats LastAnalyzed SampleSize Monitoring Status
True STORICO FLUSSO_ASTCM_INC TBS_DATA 2861719 32025 0 0 0 74 NO YES 10/01/2006 15.53.43 2861719 NO Normal, Successful Completion: 10/01/2006 16.26.05
In Production the stats are the same.
The other table is an external table.
The only difference I have noticed so far concerns the tablespace the table is defined on:
Production
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
Benchmark
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
I'm studying it at the moment.
What do I have to check to obtain the same execution plan (without changing the query)?
This is the query:
SELECT
'test query',
sysdate,
storico.tc_scarti_seq.NEXTVAL,
NULL, --ROW_ID
-- A.AZIONE,
'I',
A.CODE_PREF_TCN,
A.CODE_NUM_TCN,
'ADSL non presente su CRM' ,
-- a.AZIONE
'I'
|| ';' || a.CODE_PREF_TCN
|| ';' || a.CODE_NUM_TCN
|| ';' || a.DATA_ATVZ_CMM
|| ';' || a.CODE_PREF_DSR
|| ';' || a.CODE_NUM_TFN
|| ';' || a.DATA_CSSZ_CMM
|| ';' || a.TIPO_EVENTO
|| ';' || a.INVARIANTE_FONIA
|| ';' || a.CODE_TIPO_ADSL
|| ';' || a.TIPO_RICHIESTA_ATTIVAZIONE
|| ';' || a.TIPO_RICHIESTA_CESSAZIONE
|| ';' || a.ROW_ID_ATTIVAZIONE
|| ';' || a.ROW_ID_CESSAZIONE
FROM storico.FLUSSO_ASTCM_INC A
WHERE NOT EXISTS (SELECT 1 FROM storico.EXT_CRM_X_ADSL B
WHERE A.CODE_PREF_DSR = B.CODE_PREF_DSR
AND A.CODE_NUM_TFN = B.CODE_NUM_TFN
AND A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
AND B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI') )
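If Production will not switch to the anti-join on its own, one old-CBO workaround is to hint the anti-join directly inside the subquery, which is the shape the Benchmark plan already shows (HASH JOIN ANTI). This is only a sketch; whether UNNEST/HASH_AJ take effect for a NOT EXISTS on this release should be verified before relying on it:

```sql
-- Hinted variant of the same predicate; '...' stands for the full
-- select list of the original statement above.
SELECT ...
FROM   storico.FLUSSO_ASTCM_INC A
WHERE  NOT EXISTS (SELECT /*+ UNNEST HASH_AJ */ 1
                   FROM   storico.EXT_CRM_X_ADSL B
                   WHERE  A.CODE_PREF_DSR    = B.CODE_PREF_DSR
                   AND    A.CODE_NUM_TFN     = B.CODE_NUM_TFN
                   AND    A.INVARIANTE_FONIA = B.INVARIANTE_FONIA
                   AND    B.NOME_SERVIZIO NOT IN ('ADSL SMART AGGREGATORE','ADSL SMART TWIN','ALICE IMPRESA TWIN',
                                                  'SERVIZIO ADSL PER VIDEOLOTTERY','WI - FI'));
```

A hint pins the plan shape but does not explain the difference; init.ora parameters that feed costing (db_file_multiblock_read_count, hash_area_size, sort_area_size) are the usual suspects when stats are identical.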
Result of set autotrace traceonly explain in PRODUCTION (ESERCIZIO):
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=144985 Card=143086 B
1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=1899 C
4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q370300
4 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
Result of set autotrace traceonly explain in BENCHMARK:
Execution Plan
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3084 Card=2861719 By
tes=291895338)
1 0 SEQUENCE OF 'TC_SCARTI_SEQ'
2 1 HASH JOIN* (ANTI) (Cost=3084 Card=2861719 Bytes=29189533 :Q810002
8)
3 2 TABLE ACCESS* (FULL) OF 'FLUSSO_ASTCM_INC' (Cost=3082 :Q810000
Card=2861719 Bytes=183150016)
4 2 EXTERNAL TABLE ACCESS* (FULL) OF 'EXT_CRM_X_ADSL' (Cos :Q810001
t=2 Card=1 Bytes=38)
2 PARALLEL_TO_SERIAL SELECT /*+ ORDERED NO_EXPAND USE_HASH(A2) US
E_ANTI(A2) */ A1.C0,A1.C1,A1.C2,A1.C
3 PARALLEL_FROM_SERIAL
4 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND FULL(A1) */ A1."CODE_PR
EF_DSR" C0,A1."CODE_NUM_TFN" C1,A1."
The init.ora differences between the two systems are in the following parameters. Could they influence the optimizer enough that the execution plans are so different?
background_dump_dest
cpu_count
db_file_multiblock_read_count
db_files
db_32k_cache_size
dml_locks
enqueue_resources
event
fast_start_mttr_target
fast_start_parallel_rollback
hash_area_size
log_buffer
log_parallelism
max_rollback_segments
open_cursors
open_links
parallel_execution_message_size
parallel_max_servers
processes
query_rewrite_enabled
remote_login_passwordfile
session_cached_cursors
sessions
sga_max_size
shared_pool_reserved_size
sort_area_retained_size
sort_area_size
star_transformation_enabled
transactions
undo_retention
user_dump_dest
utl_file_dir
Please Help me
Thanks a lot, Luca
Hi Luca,
Are the test and production systems nearly identical (same OS, same HW platform, same software version, same release)?
You're using external tables. Are the speeds of those drives identical?
Have you analyzed the schema with the same statement? Could you send me the statement?
Do you have system statistics?
Have you tested the statement in an environment which is nearly like production (concurrent users etc.)?
Could you send me the top 5 wait events from the Statspack report?
Are the data in production and test identical? No data changed? No index dropped? No additional index? All tables and indexes analyzed?
Regards
Marc -
Help required in optimizing the query response time
Hi,
I am working on a application which uses a jdbc thin client. My requirement is to select all the table rows in one table and use the column values to select data in another table in another database.
The first table can have maximum of 6 million rows but the second table rows will be around 9000.
My first query returns within 30-40 milliseconds when the table has 200000 rows. But when I iterate the result set and query the second table, each query takes around 4 milliseconds.
The second query's selection criterion is to find the value within a range.
for example my_table ( varchar2 column1, varchar2 start_range, varchar2 end_range);
My first query returns a result which then will be used to select using the following query
select column1 from my_table where start_range < my_value and end_range> my_value;
I have created an index on start_range and end_range. This query is taking around 4 milliseconds, which I think is too much.
I am using a PreparedStatement for the second query loop.
Can someone suggest how I can improve the query response time?
Regards,
Shyam
Try the code below.
Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets back in Java. There are thousands of samples available on the net.
I have written sample DB code for the same interaction.
Procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
Good luck.
DROP TYPE idlist;
CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
CREATE OR REPLACE PACKAGE mypkg1
AS
PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
END mypkg1;
/
CREATE OR REPLACE PACKAGE BODY mypkg1
AS
PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
AS
ctr NUMBER;
BEGIN
DBMS_OUTPUT.put_line (myval_list.COUNT);
-- Join the passed-in collection to user_objects; COLUMN_VALUE is the
-- implicit column name when selecting from TABLE() of a scalar collection.
FOR x IN (SELECT a.object_name, a.object_id, b.myvalue
FROM user_objects a,
(SELECT COLUMN_VALUE myvalue
FROM TABLE (myval_list)) b
WHERE a.object_id < b.myvalue)
LOOP
DBMS_OUTPUT.put_line ( x.object_name
|| ' - '
|| x.object_id
|| ' - '
|| x.myvalue);
END LOOP;
END;
END mypkg1;
/
Testing the code above. Make sure DBMS_OUTPUT is ON.
DECLARE
a idlist;
refc sys_refcursor;
c number;
BEGIN
SELECT x.nu
BULK COLLECT INTO a
FROM (SELECT 5000 nu
FROM DUAL) x;
mypkg1.get_list (a, refc);
END;
/
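An alternative worth weighing against the array approach: if the two tables can see each other (e.g. over a database link), the per-row Java loop can collapse into one set-based statement. A rough sketch; first_table, key_col and my_value are assumed names standing in for the poster's first query, since the post only names my_table:

```sql
-- One round trip instead of one 4 ms query per driving row:
-- join the driving table to the range table directly.
SELECT t1.key_col, t2.column1
FROM   first_table t1,
       my_table    t2
WHERE  t2.start_range < t1.my_value
  AND  t2.end_range   > t1.my_value;
```

Even with 6 million driving rows, letting the database do the range join once is usually far cheaper than millions of network round trips.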
Vishal V. -
Strange response time for an RFC call viewed from STAD on R/3 4.7
Hello,
On our R/3 4.7 production system, we have a lot of external RFC calls to execute an abap module function. There are 70 000 of these calls per day.
The mean response time for this RFC call is 35 ms.
Some times a few of them (maybe 10 to 20 per day) are way much longer.
I am currently analysing with STAD one of these long calls which lasted 10 seconds !
Here is the info from STAD
Response time : 10 683 ms
Total time in workprocess : 10 683 ms
CPU time : 0 ms
RFC+CPIC time : 0 ms
Wait for work process 0 ms
Processing time 10.679 ms
Load time 1 ms
Generating time 0 ms
Roll (in) time 0 ms
Database request time 3 ms
Enqueue time 0 ms
Number Roll ins 0
Roll outs 0
Enqueues 0
Load time Program 1 ms
Screen 0 ms
CUA interf. 0 ms
Roll time Out 0 ms
In 0 ms
Wait 0 ms
Frontend No.roundtrips 0
GUI time 0 ms
Net time 0 ms
There is nearly no abap processing in the function module.
I really don't understand what this 10 679 ms processing time is, especially with 0 ms CPU time and 0 ms wait time.
A usual fast RFC call gives this data
23 ms response time
16 ms cpu time
14 ms processing time
1 ms load time
8 ms Database request time
Does anybody have an idea of what is the system doing during the 10 seconds processing time ?
Regards,
Olivier
Hi Graham,
Thank you for your input and thoughts.
I will have to investigate RZ23N and RZ21, because I'm not used to using them.
I usually investigate performance problems with ST03 and STAD.
My system is R/3 4.7 WAS 6.20. ABAP and BASIS 43
Kernel 6.40 patch level 109
We know these are old patch levels but we are not allowed to stop this system for upgrade "if it's not broken" as it is used 7/7 24/24.
I'm nearly sure that the problem is not an RFC issue, because I've found other slow dialog steps for web service calls and even for a SAPSYS technical dialog step of type <no buffer> (what is this?).
This SAPSYS dialog step has the following data :
User : SAPSYS
Task type : B
Program : <no buffer>
CPU time 0 ms
RFC+CPIC time 0 ms
Total time in workprocs 5.490 ms
Response time 5.490 ms
Wait for work process 0 ms
Processing time 5.489 ms
Load time 0 ms
Generating time 0 ms
Roll (in+wait) time 0 ms
Database request time 1 ms ( 3 Database requests)
Enqueue time 0 ms
All hundreds of other SAPSYS <no buffer> steps have a less than 5 ms response time.
It looks like the system was frozen for 5 seconds...
Here are some extracts from STAD of another case from last saturday.
11:00:03 bt1fsaplpr02_PLG RFC R 3 USER_LECKIT 13 13 0 0
11:00:03 bt1sqkvf_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:04 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 19 19 0 16
11:00:04 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 77 77 0 16
11:00:04 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:04 bt1sqkvf_PLG_18 RFC R 4 USER_LECDIS 14 14 0 16
11:00:05 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 12 12 0 16
11:00:05 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 53 53 0 0
11:00:06 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 76 76 0 0
11:00:06 bt1sqk2t_PLG_18 RFC R 0 USER_LECDIS 20 20 0 31
11:00:06 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 12 12 0 0
11:00:06 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 13 13 0 0
11:00:06 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 34 34 0 16
11:00:07 bt1sqkvh_PLG_18 RFC R 0 USER_LECDIS 15 15 0 0
11:00:07 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 13 13 0 16
11:00:07 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 19 19 0 0
11:00:07 bt1fsaplpr02_PLG RFC R 3 USER_LECKIT 23 13 10 0
11:00:07 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 38 38 0 0
11:00:08 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 20 20 0 16
11:00:09 bt1sqkvg_PLG_18 RFC R 0 USER_LECDIS 9 495 9 495 0 16
11:00:09 bt1sqk2t_PLG_18 RFC R 0 USER_LECDIS 9 404 9 404 0 0
11:00:09 bt1sqkvh_PLG_18 RFC R 1 USER_LECKIT 9 181 9 181 0 0
11:00:10 bt1fsaplpr02_PLG RFC R 3 USER_LECDIS 23 23 0 0
11:00:10 bt1sqkve_PLG_18 RFC R 4 USER_LECKIT 8 465 8 465 0 16
11:00:18 bt1sqkvh_PLG_18 RFC R 0 USER_LECKIT 18 18 0 16
11:00:18 bt1sqkvg_PLG_18 RFC R 0 USER_LECKIT 89 89 0 0
11:00:18 bt1sqk2t_PLG_18 RFC R 0 USER_LECKIT 75 75 0 0
11:00:18 bt1sqkvh_PLG_18 RFC R 1 USER_LECDIS 43 43 0 0
11:00:18 bt1sqk2t_PLG_18 RFC R 1 USER_LECDIS 32 32 0 16
11:00:18 bt1sqkvg_PLG_18 RFC R 1 USER_LECDIS 15 15 0 16
11:00:18 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 13 13 0 0
11:00:18 bt1sqkve_PLG_18 RFC R 4 USER_LECDIS 14 14 0 0
11:00:18 bt1sqkvf_PLG_18 RFC R 4 USER_LECKIT 69 69 0 16
11:00:18 bt1sqkvf_PLG_18 RFC R 5 USER_LECDIS 49 49 0 16
11:00:18 bt1sqkve_PLG_18 RFC R 5 USER_LECKIT 19 19 0 16
11:00:18 bt1sqkvf_PLG_18 RFC R 5 USER_LECDIS 15 15 0 16
The load at that time was very light with only a few jobs starting :
11:00:08 bt1fsaplpr02_PLG RSCONN01 B 31 USER_BATCH 39
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 31 USER_BATCH 34
11:00:08 bt1fsaplpr02_PLG /SDF/RSORAVSH B 33 USER_BATCH 64
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 33 USER_BATCH 43
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 34 USER_BATCH 34
11:00:08 bt1fsaplpr02_PLG RSBTCRTE B 35 USER_BATCH 37
11:00:09 bt1fsaplpr02_PLG RVV50R10C B 34 USER_BATCH 60
11:00:09 bt1fsaplpr02_PLG ZLM_HDS_IS_PURGE_RESERVATION B 35 USER_BATCH 206
I'm now also thinking about the message server, as there is load balancing for each RFC call?
Regards,
Olivier -
Online response time in discoverer viewer?
Dear sir,
Can anyone help me?
In Discoverer Viewer and Plus, how can I improve the response time for the viewer while a number of visitors are requesting the same report?
How do I analyse that? What settings are there?
Please help me, it's urgent for my project.
Regards
chandrakumar
In case this helps anyone else - this is the reply I got when I asked SAP this question:
The total user response time is response time + frontend network time.
GUI time is already included in the response time (it is the roll-in and wait time from the application server's point of view).
The 'FE (Frontend) net time' is the time consumed in the network for the first transfer of data from the frontend to the application server and the last transfer of data from the application server to the frontend (during one dialog step). -
O/S - Sun Solaris
ver - Oracle 8.1.7
I am trying to improve the response time of the following query. Both tables contain polygons.
select a.data_id, a.GEOLOC from information_data a, shape_data b where a.info_id = 2 and b.shape_id = 271 and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE'
The response time with info_id not indexed is 9 seconds. When I index info_id, I get the following error. Why is indexing info_id causing a spatial index error? Also, other than manipulating the tiling level, is there anything else that could improve the response time?
ERROR at line 1:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-13208: internal error while evaluating [window SRID does not match layer
SRID] operator
ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 84
ORA-06512: at line 1
Thanks,
Ravi.
Hello Ravi,
Both layers should have SDO_SRID values set in order for the index to work properly.
After you do that you might want to add an Oracle hint to the query:
select /*+ ordered */ a.data_id, a.GEOLOC
from shape_data b, information_data a
where a.info_id = 2 and b.shape_id = 271
and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE' ;
Hope this helps,
Dan
Also, if only one or very few rows have a.info_id=2 then the function sdo_geom.relate
might also work quickly.
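A minimal sketch of the SDO_SRID fix Dan describes; the SRID value 8307 and the index name are assumptions to adapt to your coordinate system:

```sql
-- Stamp both layers with the same SRID (8307 is only an example) ...
UPDATE information_data a SET a.geoloc.sdo_srid = 8307;
UPDATE shape_data b SET b.geoloc.sdo_srid = 8307;
-- ... keep the spatial metadata in step ...
UPDATE user_sdo_geom_metadata
   SET srid = 8307
 WHERE table_name IN ('INFORMATION_DATA', 'SHAPE_DATA');
COMMIT;
-- ... and rebuild the spatial index so window and layer SRIDs match.
ALTER INDEX information_data_sidx REBUILD;
```

Once the window geometry and the indexed layer carry matching SRIDs, the ORA-13208 "window SRID does not match layer SRID" error should disappear.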