Performance issue of the query
Hi All,
Please help me to avoid sequential access on the table VBAK, as I am using this query on VBAK, VBUK, VBPA and VBFA.
The query is as follows. How can I improve its performance? It is very urgent; please give me a sample query if possible.
SELECT a~vbeln
a~erdat
a~ernam
a~vkorg
a~vkbur
a~kunnr
b~kunnr AS kunnr_shp
b~adrnr
c~gbstk
c~fkstk
d~vbeln AS vbeln_i
INTO CORRESPONDING FIELDS OF TABLE t_z92sales
FROM vbak AS a INNER JOIN vbuk AS c ON a~vbeln = c~vbeln
INNER JOIN vbpa AS b ON a~vbeln = b~vbeln
AND b~posnr = '000000' AND b~parvw = 'WE'
LEFT OUTER JOIN vbfa AS d ON a~vbeln = d~vbelv AND d~vbtyp_n = 'M'
WHERE ( a~vbeln BETWEEN fvbeln AND tvbeln )
AND a~vbtyp = 'C'
AND a~vkorg IN vkorg_s
AND c~gbstk IN gbstk_s.
Hi Ramada,
Try using the following code.
RANGES: r_vbeln FOR vbka-vbeln.
DATA: w_index TYPE sy-tabix,
BEGIN OF t_vbak OCCURS 0,
vbeln TYPE vbak-vbeln,
erdat TYPE vbak-erdat,
ernam TYPE vbak-ernam,
vkorg TYPE vbak-vkorg,
vkbur TYPE vbak-vkbur,
kunnr TYPE vbak-kunnr,
del(1) TYPE c ,
END OF t_vbak,
BEGIN OF t_vbuk OCCURS 0,
vbeln TYPE vbuk-vbeln,
gbstk TYPE vbuk-gbstk,
fkstk TYPE vbuk-fkstk,
END OF t_vbuk,
BEGIN OF t_vbpa OCCURS 0,
vbeln TYPE vbpa-vbeln,
kunnr TYPE vbpa-kunnr,
adrnr TYPE vbpa-adrnr,
END OF t_vbpa,
BEGIN OF t_vbfa OCCURS 0,
vbelv TYPE vbfa-vbelv,
vbeln TYPE vbfa-vbeln,
END OF t_vbfa,
BEGIN OF t_z92sales OCCURS 0,
vbeln TYPE vbak-vbeln,
erdat TYPE vbak-erdat,
ernam TYPE vbak-ernam,
vkorg TYPE vbak-vkorg,
vkbur TYPE vbak-vkbur,
kunnr TYPE vbak-kunnr,
gbstk TYPE vbuk-gbstk,
fkstk TYPE vbuk-fkstk,
kunnr_shp TYPE vbpa-kunnr,
adrnr TYPE vbpa-adrnr,
vbeln_i TYPE vbfa-vbeln,
END OF t_z92sales.
IF NOT fvbeln IS INITIAL
OR NOT tvbeln IS INITIAL.
REFRESH r_vbeln.
r_vbeln-sign = 'I'.
IF fvbeln IS INITIAL.
r_vbeln-option = 'EQ'.
r_vbeln-low = tvbeln.
ELSEIF tvbeln IS INITIAL.
r_vbeln-option = 'EQ'.
r_vbeln-low = fvbeln.
ELSE.
r_vbeln-option = 'BT'.
r_vbeln-low = fvbeln.
r_vbeln-high = tvbeln.
ENDIF.
APPEND r_vbeln.
CLEAR r_vbeln.
SELECT vbeln
erdat
ernam
vkorg
vkbur
kunnr
FROM vbak
INTO TABLE t_vbak
WHERE vbeln IN r_vbeln
AND vbtyp EQ 'C'
AND vkorg IN vkorg_s.
IF sy-subrc EQ 0.
SORT t_vbak BY vbeln.
SELECT vbeln
gbstk
fkstk
FROM vbuk
INTO TABLE t_vbuk
FOR ALL ENTRIES IN t_vbak
WHERE vbeln EQ t_vbak-vbeln
AND gbstk IN gbstk_s.
IF sy-subrc EQ 0.
SORT t_vbuk BY vbeln.
LOOP AT t_vbak.
w_index = sy-tabix.
READ TABLE t_vbuk WITH KEY vbeln = t_vbak-vbeln
BINARY SEARCH
TRANSPORTING NO FIELDS.
IF sy-subrc NE 0.
t_vbak-del = 'X'.
MODIFY t_vbak INDEX w_index TRANSPORTING del.
ENDIF.
ENDLOOP.
DELETE t_vbak WHERE del EQ 'X'.
ENDIF.
ENDIF.
ELSE.
SELECT vbeln
gbstk
fkstk
FROM vbuk
INTO TABLE t_vbuk
WHERE vbtyp EQ 'C'
AND gbstk IN gbstk_s.
IF sy-subrc EQ 0.
SORT t_vbuk BY vbeln.
SELECT vbeln
erdat
ernam
vkorg
vkbur
kunnr
FROM vbak
INTO TABLE t_vbak
FOR ALL ENTRIES IN t_vbuk
WHERE vbeln EQ t_vbuk-vbeln
AND vkorg IN vkorg_s.
IF sy-subrc EQ 0.
SORT t_vbak BY vbeln.
ENDIF.
ENDIF.
ENDIF.
IF NOT t_vbak[] IS INITIAL.
SELECT vbeln
kunnr
adrnr
FROM vbpa
INTO TABLE t_vbpa
FOR ALL ENTRIES IN t_vbak
WHERE vbeln EQ t_vbak-vbeln
AND posnr EQ '000000'
AND parvw EQ 'WE'.
IF sy-subrc EQ 0.
SORT t_vbpa BY vbeln.
SELECT vbelv
vbeln
FROM vbfa
INTO TABLE t_vbfa
FOR ALL ENTRIES IN t_vbpa
WHERE vbelv EQ t_vbpa-vbeln
AND vbtyp_n EQ 'M'.
IF sy-subrc EQ 0.
SORT t_vbfa BY vbelv.
ENDIF.
ENDIF.
ENDIF.
REFRESH t_z92sales.
LOOP AT t_vbak.
READ TABLE t_vbuk WITH KEY vbeln = t_vbak-vbeln
BINARY SEARCH
TRANSPORTING
gbstk
fkstk.
IF sy-subrc EQ 0.
READ TABLE t_vbpa WITH KEY vbeln = t_vbak-vbeln
BINARY SEARCH
TRANSPORTING
kunnr
adrnr.
IF sy-subrc EQ 0.
t_z92sales-vbeln = t_vbak-vbeln.
t_z92sales-erdat = t_vbak-erdat.
t_z92sales-ernam = t_vbak-ernam.
t_z92sales-vkorg = t_vbak-vkorg.
t_z92sales-vkbur = t_vbak-vkbur.
t_z92sales-kunnr = t_vbak-kunnr.
t_z92sales-gbstk = t_vbuk-gbstk.
t_z92sales-fkstk = t_vbuk-fkstk.
t_z92sales-kunnr_shp = t_vbpa-kunnr.
t_z92sales-adrnr = t_vbpa-adrnr.
* VBFA was LEFT OUTER JOINed in the original query, so a row without a
* follow-on document must still be appended; vbeln_i simply stays initial.
READ TABLE t_vbfa WITH KEY vbelv = t_vbak-vbeln
BINARY SEARCH
TRANSPORTING vbeln.
IF sy-subrc EQ 0.
t_z92sales-vbeln_i = t_vbfa-vbeln.
ENDIF.
APPEND t_z92sales.
CLEAR t_z92sales.
ENDIF.
ENDIF.
ENDLOOP.
Similar Messages
-
Performance Issue with the query
Hi Experts,
While working on PeopleSoft today, I was stuck with a problem: one of the queries is performing really badly. Even with all statistics updated, the query does not perform well. On one of the tables, the query spends a lot of time doing IO (db file sequential read waits). If I delete the stats for that table and let dynamic sampling take place, the query works fine and there is no IO wait on the table.
Here is the query
SELECT A.BUSINESS_UNIT_PC, A.PROJECT_ID, E.DESCR, A.EMPLID, D.NAME, C.DESCR, A.TRC,
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( A.EST_GROSS),
DECODE( A.TRC, 'ROVA1', 0, 'ROVA2', 0, ( SUM( A.EST_GROSS)/100) * 9.75),
'2012-07-01',
'2012-07-31',
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'),
TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'),
TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY') || ' / ' || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
C.TRC, TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
E.BUSINESS_UNIT,
E.PROJECT_ID
FROM
PS_TL_PAYABLE_TIME A,
PS_TL_TRC_TBL C,
PS_PROJECT E,
PS_PERSONAL_DATA D,
PS_PERALL_SEC_QRY D1,
PS_JOB F,
PS_EMPLMT_SRCH_QRY F1
WHERE
D.EMPLID = D1.EMPLID
AND D1.OPRID = 'TMANI'
AND F.EMPLID = F1.EMPLID
AND F.EMPL_RCD = F1.EMPL_RCD
AND F1.OPRID = 'TMANI'
AND A.DUR BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD') AND TO_DATE('2012-07-31','YYYY-MM-DD')
AND C.TRC = A.TRC
AND C.EFFDT = (SELECT
MAX(C_ED.EFFDT)
FROM
PS_TL_TRC_TBL C_ED
WHERE
C.TRC = C_ED.TRC
AND C_ED.EFFDT <= SYSDATE
AND E.BUSINESS_UNIT = A.BUSINESS_UNIT_PC
AND E.PROJECT_ID = A.PROJECT_ID
AND A.EMPLID = D.EMPLID
AND A.EMPLID = F.EMPLID
AND A.EMPL_RCD = F.EMPL_RCD
AND F.EFFDT = (SELECT
MAX(F_ED.EFFDT)
FROM
PS_JOB F_ED
WHERE
F.EMPLID = F_ED.EMPLID
AND F.EMPL_RCD = F_ED.EMPL_RCD
AND F_ED.EFFDT <= SYSDATE
AND F.EFFSEQ = (SELECT
MAX(F_ES.EFFSEQ)
FROM
PS_JOB F_ES
WHERE
F.EMPLID = F_ES.EMPLID
AND F.EMPL_RCD = F_ES.EMPL_RCD
AND F.EFFDT = F_ES.EFFDT
AND F.GP_PAYGROUP = DECODE(' ', ' ', F.GP_PAYGROUP, ' ')
AND A.CURRENCY_CD = DECODE(' ', ' ', A.CURRENCY_CD, ' ')
AND DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')) = DECODE(' ', ' ', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')), 'L', 'L', 'E', 'E', 'A', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')))
AND ( A.EMPLID, A.EMPL_RCD) IN (SELECT
B.EMPLID,
B.EMPL_RCD
FROM
PS_TL_GROUP_DTL B
WHERE
B.TL_GROUP_ID = DECODE('ER012', ' ', B.TL_GROUP_ID, 'ER012')
AND E.PROJECT_USER1 = DECODE(' ', ' ', E.PROJECT_USER1, ' ')
AND A.PROJECT_ID = DECODE(' ', ' ', A.PROJECT_ID, ' ')
AND A.PAYABLE_STATUS <>
CASE
WHEN to_number(TO_CHAR(sysdate, 'DD')) < 15
THEN
CASE
WHEN A.DUR > last_day(add_months(sysdate, -2))
THEN ' '
ELSE 'NA'
END
ELSE
CASE
WHEN A.DUR > last_day(add_months(sysdate, -1))
THEN ' '
ELSE 'NA'
END
END
AND A.EMPLID = DECODE(' ', ' ', A.EMPLID, ' ')
GROUP BY A.BUSINESS_UNIT_PC,
A.PROJECT_ID,
E.DESCR,
A.EMPLID,
D.NAME,
C.DESCR,
A.TRC,
'2012-07-01',
'2012-07-31',
DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY') || ' / '
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ') || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
C.TRC, TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
E.BUSINESS_UNIT, E.PROJECT_ID
HAVING SUM( A.EST_GROSS) <> 0
ORDER BY 1,
2,
5,
6;
Here is the screenshot of the IO wait:
[https://lh4.googleusercontent.com/-6PFW2FSK3yE/UCrwUbZ0pvI/AAAAAAAAAPA/eHM48AOC0Uo]
Edited by: Oceaner on Aug 14, 2012 5:38 PM
Here is the execution plan with all the statistics present:
PLAN_TABLE_OUTPUT
Plan hash value: 1575300420
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 237 | 2703 (1)| 00:00:33 |
|* 1 | FILTER | | | | | |
| 2 | SORT GROUP BY | | 1 | 237 | | |
| 3 | CONCATENATION | | | | | |
|* 4 | FILTER | | | | | |
|* 5 | FILTER | | | | | |
|* 6 | HASH JOIN | | 1 | 237 | 1695 (1)| 00:00:21 |
|* 7 | HASH JOIN | | 1 | 204 | 1689 (1)| 00:00:21 |
|* 8 | HASH JOIN SEMI | | 1 | 193 | 1352 (1)| 00:00:17 |
| 9 | NESTED LOOPS | | | | | |
| 10 | NESTED LOOPS | | 1 | 175 | 1305 (1)| 00:00:16 |
|* 11 | HASH JOIN | | 1 | 148 | 1304 (1)| 00:00:16 |
| 12 | JOIN FILTER CREATE | :BF0000 | | | | |
| 13 | NESTED LOOPS | | | | | |
| 14 | NESTED LOOPS | | 1 | 134 | 967 (1)| 00:00:12 |
| 15 | NESTED LOOPS | | 1 | 103 | 964 (1)| 00:00:12 |
|* 16 | TABLE ACCESS FULL | PS_PROJECT | 197 | 9062 | 278 (1)| 00:00:04 |
|* 17 | TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME | 1 | 57 | 7 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | IDX$$_C44D0007 | 16 | | 3 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | IDX$$_3F450003 | 1 | | 2 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 1 | 31 | 3 (0)| 00:00:01 |
| 21 | VIEW | PS_EMPLMT_SRCH_QRY | 5428 | 75992 | 336 (2)| 00:00:05 |
PLAN_TABLE_OUTPUT
| 22 | SORT UNIQUE | | 5428 | 275K| 336 (2)| 00:00:05 |
|* 23 | FILTER | | | | | |
| 24 | JOIN FILTER USE | :BF0000 | 55671 | 2827K| 335 (1)| 00:00:05 |
| 25 | NESTED LOOPS | | 55671 | 2827K| 335 (1)| 00:00:05 |
| 26 | TABLE ACCESS BY INDEX ROWID| PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 27 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|* 28 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1739K| 333 (1)| 00:00:04 |
| 29 | CONCATENATION | | | | | |
| 30 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|* 31 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|* 32 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 33 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 34 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 35 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 36 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 37 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 38 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
|* 39 | INDEX UNIQUE SCAN | PS_PERSONAL_DATA | 1 | | 0 (0)| 00:00:01 |
| 40 | TABLE ACCESS BY INDEX ROWID | PS_PERSONAL_DATA | 1 | 27 | 1 (0)| 00:00:01 |
|* 41 | INDEX FAST FULL SCAN | PS_TL_GROUP_DTL | 323 | 5814 | 47 (3)| 00:00:01 |
| 42 | VIEW | PS_PERALL_SEC_QRY | 7940 | 87340 | 336 (2)| 00:00:05 |
| 43 | SORT UNIQUE | | 7940 | 379K| 336 (2)| 00:00:05 |
|* 44 | FILTER | | | | | |
|* 45 | FILTER | | | | | |
| 46 | NESTED LOOPS | | 55671 | 2663K| 335 (1)| 00:00:05 |
| 47 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 48 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|* 49 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1576K| 333 (1)| 00:00:04 |
| 50 | CONCATENATION | | | | | |
| 51 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|* 52 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|* 53 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 54 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 55 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 56 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 57 | CONCATENATION | | | | | |
| 58 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 59 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 60 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 61 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 62 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 63 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 64 | NESTED LOOPS | | 1 | 63 | 3 (0)| 00:00:01 |
| 65 | INLIST ITERATOR | | | | | |
|* 66 | INDEX RANGE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 2 (0)| 00:00:01 |
|* 67 | INDEX RANGE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
| 68 | TABLE ACCESS FULL | PS_TL_TRC_TBL | 922 | 30426 | 6 (0)| 00:00:01 |
| 69 | SORT AGGREGATE | | 1 | 20 | | |
|* 70 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 71 | SORT AGGREGATE | | 1 | 14 | | |
|* 72 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 73 | SORT AGGREGATE | | 1 | 23 | | |
|* 74 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
|* 75 | FILTER | | | | | |
PLAN_TABLE_OUTPUT
|* 76 | FILTER | | | | | |
|* 77 | HASH JOIN | | 1 | 237 | 974 (2)| 00:00:12 |
|* 78 | HASH JOIN | | 1 | 226 | 637 (1)| 00:00:08 |
|* 79 | HASH JOIN | | 1 | 193 | 631 (1)| 00:00:08 |
| 80 | NESTED LOOPS | | | | | |
| 81 | NESTED LOOPS | | 1 | 179 | 294 (1)| 00:00:04 |
| 82 | NESTED LOOPS | | 1 | 152 | 293 (1)| 00:00:04 |
| 83 | NESTED LOOPS | | 1 | 121 | 290 (1)| 00:00:04 |
|* 84 | HASH JOIN SEMI | | 1 | 75 | 289 (1)| 00:00:04 |
|* 85 | TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME | 1 | 57 | 242 (0)| 00:00:03 |
|* 86 | INDEX SKIP SCAN | IDX$$_C44D0007 | 587 | | 110 (0)| 00:00:02 |
|* 87 | INDEX FAST FULL SCAN | PS_TL_GROUP_DTL | 323 | 5814 | 47 (3)| 00:00:01 |
|* 88 | TABLE ACCESS BY INDEX ROWID | PS_PROJECT | 1 | 46 | 1 (0)| 00:00:01 |
|* 89 | INDEX UNIQUE SCAN | PS_PROJECT | 1 | | 0 (0)| 00:00:01 |
|* 90 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 1 | 31 | 3 (0)| 00:00:01 |
|* 91 | INDEX RANGE SCAN | IDX$$_3F450003 | 1 | | 2 (0)| 00:00:01 |
|* 92 | INDEX UNIQUE SCAN | PS_PERSONAL_DATA | 1 | | 0 (0)| 00:00:01 |
| 93 | TABLE ACCESS BY INDEX ROWID | PS_PERSONAL_DATA | 1 | 27 | 1 (0)| 00:00:01 |
| 94 | VIEW | PS_EMPLMT_SRCH_QRY | 5428 | 75992 | 336 (2)| 00:00:05 |
| 95 | SORT UNIQUE | | 5428 | 275K| 336 (2)| 00:00:05 |
|* 96 | FILTER | | | | | |
| 97 | NESTED LOOPS | | 55671 | 2827K| 335 (1)| 00:00:05 |
| 98 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 99 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|*100 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1739K| 333 (1)| 00:00:04 |
| 101 | CONCATENATION | | | | | |
| 102 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|*103 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|*104 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 105 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*106 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*107 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 108 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*109 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*110 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 111 | TABLE ACCESS FULL | PS_TL_TRC_TBL | 922 | 30426 | 6 (0)| 00:00:01 |
| 112 | VIEW | PS_PERALL_SEC_QRY | 7940 | 87340 | 336 (2)| 00:00:05 |
| 113 | SORT UNIQUE | | 7940 | 379K| 336 (2)| 00:00:05 |
|*114 | FILTER | | | | | |
|*115 | FILTER | | | | | |
| 116 | NESTED LOOPS | | 55671 | 2663K| 335 (1)| 00:00:05 |
| 117 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|*118 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|*119 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1576K| 333 (1)| 00:00:04 |
| 120 | CONCATENATION | | | | | |
| 121 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|*122 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|*123 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 124 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*125 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*126 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 127 | CONCATENATION | | | | | |
| 128 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*129 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|*130 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 131 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*132 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*133 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 134 | NESTED LOOPS | | 1 | 63 | 3 (0)| 00:00:01 |
| 135 | INLIST ITERATOR | | | | | |
|*136 | INDEX RANGE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 2 (0)| 00:00:01 |
|*137 | INDEX RANGE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
| 138 | SORT AGGREGATE | | 1 | 20 | | |
|*139 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 140 | SORT AGGREGATE | | 1 | 14 | | |
|*141 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 142 | SORT AGGREGATE | | 1 | 23 | | |
|*143 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
| 144 | SORT AGGREGATE | | 1 | 14 | | |
|*145 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 146 | SORT AGGREGATE | | 1 | 20 | | |
|*147 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 148 | SORT AGGREGATE | | 1 | 23 | | |
|*149 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------
Most of the IO wait occurs at step 17, though the explain plan alone does not show this. But when we run this query from the PeopleSoft application (online page), it is visible through Grid Control SQL Monitoring.
Second part of the Question continues.... -
Performance Issue with the Query (urgent)
Is there any way to get the data for material rejected with movement types 122 & 123 other than from the MSEG table? My report is very slow.
My query is as below:
SELECT SUM( a~dmbtr ) INTO value1
FROM mseg AS a INNER JOIN mkpf AS b
ON a~mblnr = b~mblnr
AND a~mjahr = b~mjahr
WHERE a~lifnr = p_lifnr
AND a~bwart IN ('122')
AND b~budat IN s_budat
GROUP BY lifnr.
ENDSELECT.
abhishek suppal
Hi Abhi,
Try like this:
SELECT SUM( a~dmbtr ) INTO value1
FROM mseg AS a INNER JOIN mkpf AS b
ON a~mblnr = b~mblnr
AND a~mjahr = b~mjahr
WHERE a~lifnr = p_lifnr
AND a~bwart IN ('122','123')
AND b~budat IN s_budat
GROUP BY lifnr.
ENDSELECT.
Or define ranges, like:
ranges: r_bwart for XXXX-bwart.
r_bwart-sign = 'I'.
r_bwart-option = 'EQ'.
r_bwart-low = 122.
append r_bwart.
r_bwart-low = 123.
append r_bwart.
Now, in the SELECT statement you just add:
AND a~bwart IN r_bwart
Thanks
Eswar -
Performance issue while generating a query
Hi BI Gurus,
I am facing a performance issue while running a query on 0IC_C03.
The query has a (from & to) variable for restricting the report to a particular time period.
If the (from & to) variable is filled, the query runs for a long time and then ends in a runtime error.
If the query is executed without the variable (which is optional), the data is extracted from the beginning up to the current date, and the same query takes less time to execute.
After that, the period has to be selected manually via the "keep filter value" option. Please suggest how I can solve this error.
Regards
Ritika
Hi Ritika,
Welcome to SDN.
You have to check the following runtime segments using transaction ST03N:
High Database Runtime
High OLAP Runtime
High Frontend Runtime
If it is high database runtime:
- Check the aggregates, or create aggregates on the cube; this will help.
If it is high OLAP runtime:
- Check the user exits, if any.
- Check whether hierarchies are used and fetched at a deep level.
If it is high frontend runtime:
- Check whether a very high number of cells and formattings is transferred to the frontend (use "All data" to get the value of "No. of Cells"), which causes high network and frontend (processing) runtime.
For the from and to date variables, create one more set, use it, and try again.
Regs,
VACHAN -
Performance Issues in Text query.
Hi,
We are doing a text mining on a column which has been indexed as a "MULTI_COLUMN_DATASTORE" on two CLOB fields.
While searching on this column, we face performance issues. The tables (both TABLE1 and TABLE2) contain more than 46 million records. The query takes more than 10 minutes to execute.
This is the query used:
SELECT TABLE1.COLUMN1
FROM TABLE1, TABLE2
WHERE CONTAINS
(TABLE2.CLOB_COLUMN,
'(((({TEMPERATURE} OR {TEMP} OR {TEMPS} OR {THERMAL}) OR ({ENGINE} OR {ENG} OR {ENGINES})) AND (({SENSOR} OR {PROBE} OR {SEN} OR {SENSORS} OR {SENSR} OR {SNSR} OR {TRANSDUCER}))) AND (({INSTALL} OR {INST} OR {INSTALLATION} OR {INSTALLED} OR {INSTD} OR {INSTL} OR {INSTLD} OR {INSTN}) OR ({REMOVED} OR {REMOVAL} OR {REMOVE} OR {REMVD} OR {RMV} OR {RMVD} OR {RMVL}) OR ({REPLACED} OR {R+R} OR {R/I} OR {R/R} OR {REPL} OR {REPLACE} OR {REPLCD} OR {REPLD} OR {RM/RP} OR {RPL} OR {RPLCD} OR {RPLCED} OR {RPLD} OR {RPLED}) OR ({INOPERATIVE} OR {INOP}) OR ({FAILURE} OR {FAIL} OR {FAILD} OR {FAILED} OR {FAILR}) OR ({CHANGE} OR {CHANGED} OR {CHG} OR {CHGD} OR {CHGE} OR {CHGED}))) AND (({PRESSURE} OR {PRES} OR {PRESR} OR {PRESS} OR {PRESSURES} OR {PRESSURIZATION} OR {PRESSURIZE} OR {PRESSURIZED} OR {PRESSURIZING} OR {PRSZ} OR {PRSZD} OR {PRSZG} OR {PX})) NOT ({CARRIED} OR ({COOLANT} OR {COLNT}) OR ({DISPLAYED} OR {DSPLYD}))'
) > 0
AND TABLE1.COLUMN1 = TABLE2.COLUMN1
AND TABLE1.COLUMN2 = TABLE2.COLUMN2.
We created the index using the following procedure:
begin
ctx_ddl.create_preference('my_alias', 'MULTI_COLUMN_DATASTORE');
ctx_ddl.set_attribute('my_alias', 'columns', 'column1, column2, column3');
end;
create index index_name
on table_name(column1)
indextype is ctxsys.context
parameters ('datastore ctxsys.my_alias');
Please let me know if there are any optimization techniques that can be used to tune this query.
Also, I would like to know if using "MULTI_COLUMN_DATASTORE" would cause performance issues. Will Query REWRITE improve the performance in this case?
Thanks in advance!
What happens if you remove the join and just run the query against TABLE2?
Try it without the join, and just fetch the first 100 records (or so) from the query to see how that performs.
Have you optimized the index? That is always worth doing after creating an index on a large table, especially as you've used the default index memory setting for the index.
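As a hedged sketch of the optimization step suggested above (using index_name, the placeholder index name from the earlier post), an Oracle Text index can be defragmented with CTX_DDL.OPTIMIZE_INDEX:

```sql
-- Defragment the $I token table; FULL mode also removes garbage rows
-- left behind by deletes and updates. Best run during a quiet window.
BEGIN
  ctx_ddl.optimize_index(
    idx_name => 'index_name',           -- placeholder name from the post
    optlevel => ctx_ddl.optlevel_full,
    maxtime  => 60                      -- give up after 60 minutes
  );
END;
/
```

FAST mode (ctx_ddl.optlevel_fast) only compacts fragmented rows and is cheaper; FULL is worth it after heavy update/delete activity on the base table.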
Performance Issue with sql query
Hi,
My db is 10.2.0.5 with RAC on ASM; Clusterware version 10.2.0.5.
With bsoa table as
SQL> desc bsoa;
Name Null? Type
ID NOT NULL NUMBER
LOGIN_TIME DATE
LOGOUT_TIME DATE
SUCCESSFUL_IND VARCHAR2(1)
WORK_STATION_NAME VARCHAR2(80)
OS_USER VARCHAR2(30)
USER_NAME NOT NULL VARCHAR2(30)
FORM_ID NUMBER
AUDIT_TRAIL_NO NUMBER
CREATED_BY VARCHAR2(30)
CREATION_DATE DATE
LAST_UPDATED_BY VARCHAR2(30)
LAST_UPDATE_DATE DATE
SITE_NO NUMBER
SESSION_ID NUMBER(8)
The query
UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID
is taking a long time to execute, and in AWR reports it is also at the top of:
1. SQL ordered by elapsed time
2. SQL ordered by reads
3. SQL ordered by gets
So I am trying to find a way to solve the performance issue, as the application is slow, especially during login and logout.
I understand that the function in the WHERE condition causes a full table scan, but I cannot think of what other parts to look at.
Also:
SQL> SELECT COUNT(1) FROM BSOA;
COUNT(1)
7800373
The explain plan for "UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID" is
{code}
PLAN_TABLE_OUTPUT
Plan hash value: 1184960901
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 1 | 26 | 18748 (3)| 00:03:45 |
| 1 | UPDATE | BSOA | | | | |
|* 2 | TABLE ACCESS FULL| BSOA | 1 | 26 | 18748 (3)| 00:03:45 |
Predicate Information (identified by operation id):
2 - filter("SESSION_ID"=TO_NUMBER(SYS_CONTEXT('USERENV','SESSIONID')))
{code}
Hi,
There are also triggers before update and AUDITS on this table.
CREATE OR REPLACE TRIGGER B2.TRIGGER1
BEFORE UPDATE
ON B2.BSOA REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
:NEW.LAST_UPDATED_BY := USER;
:NEW.LAST_UPDATE_DATE := SYSDATE;
END;
CREATE OR REPLACE TRIGGER B2.TRIGGER2
BEFORE INSERT
ON B2.BSOA REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
:NEW.CREATED_BY := USER;
:NEW.CREATION_DATE := SYSDATE;
:NEW.LAST_UPDATED_BY := USER;
:NEW.LAST_UPDATE_DATE := SYSDATE;
END;
And also there is an audit on this table
AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER SUCCESSFUL;
AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER NOT SUCCESSFUL;
And the SESSION_ID column in BSOA has a height-balanced histogram.
When I try to create an index I get the following error. As I am on 10g I can't use DDL_LOCK_TIMEOUT; I may have to wait for the next downtime.
SQL> CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS;
CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
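If a downtime cannot be scheduled, one option to try is an ONLINE index build. This is only a sketch (the index and tablespace names are taken from the thread): ONLINE builds require Enterprise Edition, and even then the build takes a brief lock at the start and end, so it may still have to wait for active transactions.

```sql
-- Hypothetical workaround: ONLINE allows DML against BSOA to continue
-- during the build, avoiding the blocking lock behind ORA-00054.
CREATE INDEX b2.bsoa_sessid_i
    ON b2.bsoa (session_id)
    TABLESPACE b2
    COMPUTE STATISTICS
    ONLINE;
```

With an index on SESSION_ID in place, the filter `SESSION_ID = TO_NUMBER(SYS_CONTEXT(...))` from the explain plan can use index access instead of the full table scan.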
Thanks -
Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks -
Performance issues with the Vouchers index build in SES
Hi All,
We are currently performing an upgrade from PS FSCM 9.1 to PS FSCM 9.2.
As part of the upgrade, the client wants Oracle SES deployed for several modules, including Purchasing and Payables (Vouchers).
We are facing severe performance issues with the Vouchers index build. (Volume of data = approx. 8.5 million rows of data)
The index creation process runs for over 5 days.
Can you please share any information or issues that you may have faced on your project and how they were addressed?
Check the following logs for errors:
1. The message log from the process scheduler
2. search_server1-diagnostic.log in /search_server1/logs directory
If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES. -
Performance issues with the Tuxedo MQ Adapter
We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms to execute, so there is a considerable amount of time lost in the MQ adapter that we cannot explain.
Also, we have seen a lot of rollback transactions on the MQ adapter; for example, we got 980 rollback transactions out of 15736 transactions sent, and only the MQ adapter is involved in the rollback. However, the operations are executed properly. The error we get is:
135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
I have been looking for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?
Hi Todd,
We have 6 MQI adapters reading from 5 different queues, but in this case we are writing in only one queue.
Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues, and when one adapter finds a message and tries to get it, another MQ adapter may get it first. In that case, the MQ adapter rolls back the transaction. Even though we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually does the rollback, so when the MQ adapter receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
However, I am more worried about the performance. We are putting a request message in an MQ queue and waiting for the reply. In some cases it takes 150 ms, and in other cases much longer (more than 400 ms); the average is 300 ms. The MQ adapter calls a service (txgralms0), which takes 110 ms on average.
This is our configuration:
"MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
SYSTEM_ACCESS=FASTPATH
MAXGEN=1 GRACE=86400 RESTART=N
MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
SICACHEENTRIESMAX="500"
/tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
*SERVER
MINMSGLEVEL=0
MAXMSGLEVEL=0
DEFMAXMSGLEN=4096
TPESVCFAILDATA=Y
*QUEUE_MANAGER
LQMID=QMTESX01
NAME=QMTESX01
*SERVICE
NAME=txgralms0
FORMAT=MQSTR
TRAN=N
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
Thanks in advance,
Marling -
Performance issue in the Incremental ETL
Hi Experts,
we have a performance issue: the task "Custom_SIL_OvertimeFact" is taking more than 55 minutes.
This task reads a .csv file coming from the business every 2 weeks, with a few rows of data updated and appended.
Every day this task runs and scans all 70,000 records, when in reality we have only 4-5 records updated/appended.
Is there a way to shrink the time taken by this task?
When the records in the .csv file change only once a month, why does the mapping have to scan this many records on the daily incremental load?
Please give me some suggestions.
best regards,
Kam
The reason you have to scan all the records is that you do not have an identifier to find what has been updated.
Because it is an insert/update job, it would be using a normal load.
Truncating the table and reloading it would help in terms of performance, since you would then be able to use a bulk load.
If there is any reason you can't do that, then look at the update strategy and make sure that all the fields used to determine whether a particular transaction is insert/update/existing have an index defined on them. Apart from this, try to minimize the number of indexes on this table, or drop them while the load is running and recreate them once it has finished.
Also check your commit point and see if you can bump it up.
Edited by: Sid_BI on Jan 28, 2010 4:03 PM -
Dear Experts
We are facing a performance issue in the CRM WebGUI. All the standard and custom screens are slow; they take a long time to open.
However, the ABAP side is OK; the response time there is less than one second.
Is there any specific operation I have to perform for CRM?
Regards
Haroon
Hello,
I guess you started the WebUI for the first time, right?
The first time it is very slow; SGEN does not really help here.
It should be much better when you call the same screens a second and third time.
See the link:
[CRM Webclient UI - Generation of views after system restart / upgrade|CRM Webclient UI - Generation of views after system restart / upgrade]
Kind regards
Manfred -
Exceptional aggregation on Non-cumulative KF - Aggregation issue in the Query
Hi Gurus,
Can anyone suggest a solution for the scenario below? I am using the BW 3.5 front end.
I have a non-cumulative KF coming from my Stock cube and a Pricing KF coming from my Pricing cube (both cubes are in a MultiProvider, and my query is on top of it).
I want to multiply both KFs to get the WSL Value CKF, but my query is not at the material level; it is at the plant level.
So it behaves like this, for example (remember, my Qty is a non-cumulative KF):
QTY PRC
P1 M1 10 50
P1 M2 0 25
P1 M3 5 20
My WSL value should be 600, but it is giving me 15 * 95, which is way too high.
I have tried storing QTY and PRC in two separate CKFs, setting the aggregation to "before aggregation", and then multiplying them, but it didn't work.
I also tried exception aggregation, but in the BW 3.5 front end we don't have the "Total" option that BI 7.0 has.
Any other ideas, guys? Any responses would be appreciated.
Thanks
Jay.
I don't think you can solve this issue at the query level.
This type of calculation should be done before aggregation, and that feature no longer exists in BI 7.0; no kind of exception aggregation will help here.
It should be done either through a virtual key figure (see below) or by using the stock snapshot approach.
The key figure QTY*PRC should be a virtual key figure. In that case you only need one cube (stock quantity) and pick up PRC at query run time.
-
Performance issue with the ABAP statements
Hello,
Can someone please help me with the statements below, where I am getting a performance problem?
SELECT * FROM /BIC/ASALHDR0100 into Table CHDATE.
SORT CHDATE by DOC_NUMBER.
SORT SOURCE_PACKAGE by DOC_NUMBER.
LOOP AT CHDATE INTO WA_CHDATE.
READ TABLE SOURCE_PACKAGE INTO WA_CIDATE WITH KEY DOC_NUMBER =
WA_CHDATE-DOC_NUMBER BINARY SEARCH.
MOVE WA_CHDATE-CREATEDON to WA_CIDATE-CREATEDON.
APPEND WA_CIDATE to CIDATE.
ENDLOOP.
I wrote the above code for the following requirement:
1. I have 2 tables from which I am getting the data.
2. Both tables have a CREATEDON date field, and both tables contain values for it.
3. While accessing the 2 tables and copying to the third table, I have to modify that field.
I am getting performance issues with the above statements.
Thanks
Edited by: Rob Burbank on Jul 29, 2010 10:06 AM
Hello,
try a SELECT with a join like the following instead of your code (the field list and table names are placeholders):
SELECT t1~field t2~field2 ...
INTO TABLE it_table
FROM table1 AS t1 INNER JOIN table2 AS t2
ON t1~doc_number = t2~doc_number. -
Performance issue with the Select query
Hi,
I have an issue with the performance of a SELECT query.
In table AFRU, AUFNR is not a key field.
So I selected the low and high values into S_RUECK and used it in the WHERE condition.
Still I have a performance issue.
SELECT SINGLE RUECK
RMZHL
IEDD
AUFNR
STOKZ
STZHL
FROM AFRU INTO table t_AFRU
FOR ALL ENTRIES IN T_ZSCPRT100
WHERE RUECK IN S_RUECK AND
AUFNR = T_ZSCPRT100-AUFNR AND
STOKZ = SPACE AND
STZHL = 0.
I also checked by creating an index on AUFNR in table AFRU; it does not help.
Is there any way we can declare a key field while declaring the internal table?
Any suggestions to fix the performance issue are appreciated!
Regards,
Kittu
Hi,
Thank you for your quick response!
Rui dantas, I am a little confused; this is my code below:
data : t_zscprt type standard table of ty_zscprt,
wa_zscprt type ty_zscprt.
types : BEGIN OF ty_zscprt100,
aufnr type zscprt100-aufnr,
posnr type zscprt100-posnr,
ezclose type zscprt100-ezclose,
serialnr type zscprt100-serialnr,
lgort type zscprt100-lgort,
END OF ty_zscprt100.
data : t_zscprt100 type standard table of ty_zscprt100,
wa_zscprt100 type ty_zscprt100.
Types: begin of ty_afru,
rueck type CO_RUECK,
rmzhl type CO_RMZHL,
iedd type RU_IEDD,
aufnr type AUFNR,
stokz type CO_STOKZ,
stzhl type CO_STZHL,
end of ty_afru.
data : t_afru type STANDARD TABLE OF ty_afru,
WA_AFRU TYPE TY_AFRU.
SELECT AUFNR
POSNR
EZCLOSE
SERIALNR
LGORT
FROM ZSCPRT100 INTO TABLE T_ZSCPRT100
FOR ALL ENTRIES IN T_ZSCPRT
WHERE AUFNR = T_ZSCPRT-PRTNUM
AND SERIALNR IN S_SERIAL
AND LGORT IN S_LGORT.
IF sy-subrc <> 0.
MESSAGE ID 'Z2' TYPE 'I' NUMBER '41'. "BDCG87
stop."BDCG87
ENDIF.
SELECT RUECK
RMZHL
IEDD
AUFNR
STOKZ
STZHL
FROM AFRU INTO TABLE T_AFRU
FOR ALL ENTRIES IN T_ZSCPRT100
WHERE RUECK IN S_RUECK AND
AUFNR = T_ZSCPRT100-AUFNR AND
STOKZ = SPACE AND
STZHL = 0.
Using AUFNR, get AUFPL from AFKO.
Using AUFPL, get RUECK from AFVC.
Using RUECK, read AFRU.
In other words, one SELECT joining AFKO <-> AFVC <-> AFRU should get what you want.
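A sketch of that AFKO <-> AFVC <-> AFRU join (the field names are taken from the standard tables and the thread, but should be verified in your system; AFVC is keyed at operation level via APLZL, so duplicate rows may need handling):

```abap
* Sketch only: join the order header (AFKO) to the operations (AFVC)
* and confirmations (AFRU), restricted to the orders in T_ZSCPRT100.
IF t_zscprt100[] IS NOT INITIAL.
  SELECT r~rueck r~rmzhl r~iedd k~aufnr r~stokz r~stzhl
    INTO TABLE t_afru
    FROM afko AS k
    INNER JOIN afvc AS v ON v~aufpl = k~aufpl
    INNER JOIN afru AS r ON r~rueck = v~rueck
    FOR ALL ENTRIES IN t_zscprt100
    WHERE k~aufnr = t_zscprt100-aufnr
      AND r~rueck IN s_rueck
      AND r~stokz = space
      AND r~stzhl = 0.
ENDIF.
```

This way the database reaches AFRU through its key field RUECK instead of scanning on the non-key AUFNR.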
This is my select query; do you want me to write another select query to meet this criteria?
From AUFNR I will get AUFPL from AFKO; based on AUFPL I will get RUECK; based on RUECK I need to read AFRU. But I need to select a few fields from AFRU based on AUFNR.
Any suggestions will be appreciated!
Regards
Kittu -
Performance issue on the sys.dba_audit_session
I have the following query, which is taking a long time and has a performance issue.
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS
failed_count
FROM sys.dba_audit_session
WHERE returncode != 0
AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')
call count cpu elapsed disk query current rows
Parse 1 0.01 0.04 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 68.42 216.08 3943789 3960058 0 1
total 4 68.43 216.13 3943789 3960058 0 1
The view dba_audit_session is a select from the view dba_audit_trail. If you
look at the definition of dba_audit_trail, it does a CAST on the ntimestamp#
column, thereby disabling index access because there is no function-based
index on ntimestamp#. I am not even sure a function-based index would
match what the view does.
cast ( /* TIMESTAMP */
(from_tz(ntimestamp#,'00:00') at local) as date),
To get index access, the metric query would have to avoid the view. I have changed the query like this:
SELECT /*+ INDEX(a I_AUD3) */ TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD
HH24:MI:SS TZD') AS curr_timestamp, COUNT(userid) AS failed_count
FROM sys.aud$ a
WHERE returncode != 0
and action# between 100 and 102
AND ntimestamp# >= systimestamp at time zone 'GMT' - 30/1440
Is this the correct way to do it? Could you comment on this?
The query is run by Grid Control (or DB Console) to count the metric related to audit sessions, which is ON by default in 11g. To decrease the impact of this query you should purge the AUD$ table regularly.
The best way is to use DBMS_AUDIT_MGMT to periodically purge the data older than "whatever date". If you don't need the audit info, you can simply truncate AUD$.
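A sketch of such a purge with DBMS_AUDIT_MGMT (the package ships with 11g; for 10.2.0.5, as in this thread, it is only available via a backport patch, so verify it exists before relying on it; the 30-day window is an example):

```sql
-- Purge standard audit trail (AUD$) rows older than 30 days.
BEGIN
  -- One-time setup: moves AUD$ to a dedicated tablespace if needed.
  DBMS_AUDIT_MGMT.init_cleanup(
    audit_trail_type         => DBMS_AUDIT_MGMT.audit_trail_aud_std,
    default_cleanup_interval => 24);

  -- Mark everything older than 30 days as eligible for purge.
  DBMS_AUDIT_MGMT.set_last_archive_timestamp(
    audit_trail_type  => DBMS_AUDIT_MGMT.audit_trail_aud_std,
    last_archive_time => SYSTIMESTAMP - INTERVAL '30' DAY);

  -- Delete the eligible rows.
  DBMS_AUDIT_MGMT.clean_audit_trail(
    audit_trail_type        => DBMS_AUDIT_MGMT.audit_trail_aud_std,
    use_last_arch_timestamp => TRUE);
END;
/
```

The same calls can be wrapped in a DBMS_SCHEDULER job so the purge runs automatically instead of letting AUD$ grow to millions of rows again.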