Performance issue with the ztable.
I have created a Z-table with 5 fields and generated the table maintenance. My client complains that performance is very slow while updating or adding new entries in the table; they are doing manual entry.
Is there any measure that could be taken to improve the performance ?
hi,
how many entries?
You can maintain the Z-table using SM30 with "restrict data range"
-> I think that will improve performance.
By the way @Shiva,
I believe only one person can maintain a table with SM30 at a time, not several persons simultaneously.
A.
Message was edited by:
Andreas Mann
Similar Messages
-
Is there a recommended limit on the number of custom sections and cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks -
Performance issues with the Vouchers index build in SES
Hi All,
We are currently performing an upgrade for: PS FSCM 9.1 to PS FSCM 9.2.
As a part of the upgrade, the client wants Oracle SES to be deployed for some modules, including Purchasing and Payables (Vouchers).
We are facing severe performance issues with the Vouchers index build. (Volume of data = approx. 8.5 million rows of data)
The index creation process runs for over 5 days.
Can you please share any information or issues that you may have faced on your project and how they were addressed?
Check the following logs for errors:
1. The message log from the process scheduler
2. search_server1-diagnostic.log in /search_server1/logs directory
If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES. -
Performance issues with the Tuxedo MQ Adapter
We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms to execute, so there is a considerable, unexplained amount of time spent in the MQ adapter.
Also, we have observed a lot of rollback transactions on the MQ adapter; for example, we got 980 rollback transactions for 15736 transactions sent, and only the MQ adapter is involved in the rollback. However, the operations are executed properly. The error we get is
135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
I have been looking for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?
Hi Todd,
We have 6 MQI adapters reading from 5 different queues, but in this case we are writing in only one queue.
Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues, and when one adapter finds a message and tries to get it, another MQ adapter may get it first. In that case, the MQ adapter rolls back the transaction. Even when we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually does the rollback, so when the MQ adapter receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
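A toy sketch of that explanation, assuming the race works as described (the queue, adapters, and message ids below are simulated, not MQ or Tuxedo APIs): several adapters browse the same messages, only one destructive get succeeds per message, and every losing attempt surfaces as a rollback, so rollbacks grow with the number of competing readers while no message is lost.

```python
# Simulated queue race (illustrative only; not MQ/Tuxedo code).
def run_adapters(messages, adapters):
    claimed = set()               # messages already destructively read
    delivered, rollbacks = [], 0
    for msg in messages:
        # every adapter sees msg in its browse cursor and races to get it
        for adapter in range(adapters):
            if msg in claimed:
                rollbacks += 1    # another adapter got there first -> rollback
            else:
                claimed.add(msg)
                delivered.append((adapter, msg))
    return delivered, rollbacks

delivered, rollbacks = run_adapters(range(10), adapters=6)
# 10 messages, 6 readers: each message is delivered exactly once,
# and the 5 losing reads per message surface as 50 rollbacks.
```

Under this model, reducing the number of adapters reading the same queue (or partitioning queues per adapter) reduces the rollback count without affecting delivery.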
However, I am more worried about the performance. We put a request message in an MQ queue and wait for the reply. In some cases it takes 150 ms, and in other cases much longer (more than 400 ms); the average is 300 ms. The MQ adapter calls a service (txgralms0) which lasts 110 ms on average.
This is our configuration:
"MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
SYSTEM_ACCESS=FASTPATH
MAXGEN=1 GRACE=86400 RESTART=N
MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
SICACHEENTRIESMAX="500"
/tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
*SERVER
MINMSGLEVEL=0
MAXMSGLEVEL=0
DEFMAXMSGLEN=4096
TPESVCFAILDATA=Y
*QUEUE_MANAGER
LQMID=QMTESX01
NAME=QMTESX01
*SERVICE
NAME=txgralms0
FORMAT=MQSTR
TRAN=N
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
Thanks in advance,
Marling -
Performance issue with the ABAP statements
Hello,
Can someone please help me with the statements below, where I am getting a performance problem.
SELECT * FROM /BIC/ASALHDR0100 INTO TABLE CHDATE.
SORT CHDATE BY DOC_NUMBER.
SORT SOURCE_PACKAGE BY DOC_NUMBER.
LOOP AT CHDATE INTO WA_CHDATE.
  READ TABLE SOURCE_PACKAGE INTO WA_CIDATE WITH KEY DOC_NUMBER =
       WA_CHDATE-DOC_NUMBER BINARY SEARCH.
  IF SY-SUBRC = 0.
    MOVE WA_CHDATE-CREATEDON TO WA_CIDATE-CREATEDON.
    APPEND WA_CIDATE TO CIDATE.
  ENDIF.
ENDLOOP.
I wrote the above code for the following requirement:
1. I have 2 tables from which I am getting the data.
2. I have a common field in both tables, the CREATEDON date; both tables have values for it.
3. While accessing the 2 tables and copying to a third table, I have to modify the field.
I am getting performance issues with the above statements.
Thanks
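The loop described above is effectively a keyed inner join done on the application server. Here is a Python sketch of the same computation (field names and sample rows are invented for illustration); pushing this work into a single database JOIN avoids transferring the whole /BIC/ASALHDR0100 table first:

```python
# Invented sample data mirroring CHDATE and SOURCE_PACKAGE.
chdate = [{"doc_number": 1, "createdon": "20100701"},
          {"doc_number": 2, "createdon": "20100702"}]
source_package = [{"doc_number": 2, "amount": 50},
                  {"doc_number": 3, "amount": 70}]

# A hash lookup plays the role of READ ... BINARY SEARCH.
by_doc = {row["doc_number"]: row for row in source_package}

cidate = []
for ch in chdate:
    pkg = by_doc.get(ch["doc_number"])
    if pkg is not None:                      # keep matches only (inner join)
        cidate.append({**pkg, "createdon": ch["createdon"]})

# cidate -> [{'doc_number': 2, 'amount': 50, 'createdon': '20100702'}]
```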
Edited by: Rob Burbank on Jul 29, 2010 10:06 AM
Hello,
try a SELECT like the following one instead of your code.
SELECT field1 field2 ...
INTO TABLE it_table
FROM table1 AS t1 INNER JOIN table2 AS t2
ON t1~doc_number = t2~doc_number. -
Performance issue with the Select query
Hi,
I have an issue with the performance of a SELECT query.
In table AFRU, AUFNR is not a key field.
So I selected the low and high values into S_RUECK and used it in the WHERE condition.
Still I have an issue with the performance.
SELECT RUECK
RMZHL
IEDD
AUFNR
STOKZ
STZHL
FROM AFRU INTO TABLE T_AFRU
FOR ALL ENTRIES IN T_ZSCPRT100
WHERE RUECK IN S_RUECK AND
AUFNR = T_ZSCPRT100-AUFNR AND
STOKZ = SPACE AND
STZHL = 0.
I also checked by creating an index for AUFNR in table AFRU... it does not help.
Is there any way we can declare a key field when declaring the internal table?
Any suggestions to fix the performance issue are appreciated!
Regards,
Kittu
Hi,
Thank you for your quick response!
Rui dantas, I am a little confused... this is my code below:
data : t_zscprt type standard table of ty_zscprt,
wa_zscprt type ty_zscprt.
types : BEGIN OF ty_zscprt100,
aufnr type zscprt100-aufnr,
posnr type zscprt100-posnr,
ezclose type zscprt100-ezclose,
serialnr type zscprt100-serialnr,
lgort type zscprt100-lgort,
END OF ty_zscprt100.
data : t_zscprt100 type standard table of ty_zscprt100,
wa_zscprt100 type ty_zscprt100.
Types: begin of ty_afru,
rueck type CO_RUECK,
rmzhl type CO_RMZHL,
iedd type RU_IEDD,
aufnr type AUFNR,
stokz type CO_STOKZ,
stzhl type CO_STZHL,
end of ty_afru.
data : t_afru type STANDARD TABLE OF ty_afru,
WA_AFRU TYPE TY_AFRU.
SELECT AUFNR
POSNR
EZCLOSE
SERIALNR
LGORT
FROM ZSCPRT100 INTO TABLE T_ZSCPRT100
FOR ALL ENTRIES IN T_ZSCPRT
WHERE AUFNR = T_ZSCPRT-PRTNUM
AND SERIALNR IN S_SERIAL
AND LGORT IN S_LGORT.
IF sy-subrc <> 0.
MESSAGE ID 'Z2' TYPE 'I' NUMBER '41'. "BDCG87
stop."BDCG87
ENDIF.
ENDIF.
SELECT RUECK
RMZHL
IEDD
AUFNR
STOKZ
STZHL
FROM AFRU INTO TABLE T_AFRU
FOR ALL ENTRIES IN T_ZSCPRT100
WHERE RUECK IN S_RUECK AND
AUFNR = T_ZSCPRT100-AUFNR AND
STOKZ = SPACE AND
STZHL = 0.
Using AUFNR, get AUFPL from AFKO
Using AUFPL, get RUECK from AFVC
Using RUECK, read AFRU
In other words, one select joining AFKO <-> AFVC <-> AFRU should get what you want.
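A sketch of that navigation in Python (table contents invented): AUFNR resolves to AUFPL via AFKO, AUFPL to RUECK via AFVC, and RUECK to the AFRU row. A single SELECT with two INNER JOINs lets the database perform all three lookups at once instead of three round trips.

```python
# Invented lookup tables standing in for AFKO, AFVC and AFRU.
afko = {"ORD1": "PLAN1"}                  # AUFNR -> AUFPL
afvc = {"PLAN1": "RCK1"}                  # AUFPL -> RUECK
afru = {"RCK1": {"iedd": "20100801"}}     # RUECK -> confirmation row

def confirmation_for_order(aufnr):
    aufpl = afko.get(aufnr)               # step 1: AFKO
    rueck = afvc.get(aufpl)               # step 2: AFVC
    return afru.get(rueck)                # step 3: AFRU

assert confirmation_for_order("ORD1") == {"iedd": "20100801"}
```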
This is my select query; do you want me to write another select query to meet these criteria?
From AUFNR I will get AUFPL from AFKO; based on AUFPL I will get RUECK; based on RUECK I need to read AFRU. But I need to select a few fields from AFRU based on AUFNR...
Any suggestions will be appreciated!
Regards
Kittu -
Performance issue with the use of table VRKPA
Hi.
Here are the selection criteria I am using. Table VRKPA is only used to map KNA1 to VBRK; VBRK and KNA1 do not have a direct primary key relationship.
Please check and let me know why VRKPA is taking so long and how I can improve the performance: from KNA1 I fetch data very easily, while fetching nothing from VRKPA, and I fetch FKDAT from VBRK.
The idea behind using these tables is, for one KUNNR (from KNA1), to get the relevant entries based on FKDAT (a selection screen input field). Please suggest.
SELECT kunnr
name1
land1
regio
ktokd
FROM kna1
INTO TABLE it_kna1
FOR ALL ENTRIES IN it_knb1
WHERE kunnr = it_knb1-kunnr
AND ktokd = '0003'.
IF sy-subrc = 0.
SORT it_kna1 BY kunnr.
DELETE ADJACENT DUPLICATES FROM it_kna1 COMPARING kunnr.
ENDIF.
ENDIF.
IF NOT it_kna1[] IS INITIAL.
SELECT kunnr
vbeln
FROM vrkpa
INTO TABLE it_vrkpa
FOR ALL ENTRIES IN it_kna1
WHERE kunnr = it_kna1-kunnr.
IF sy-subrc = 0.
SORT it_vrkpa BY kunnr vbeln.
ENDIF.
ENDIF.
IF NOT it_vrkpa[] IS INITIAL.
SELECT vbeln
kunrg
fkdat
kkber
bukrs
FROM vbrk
INTO TABLE it_vbrk
FOR ALL ENTRIES IN it_vrkpa
WHERE vbeln = it_vrkpa-vbeln.
IF sy-subrc = 0.
DELETE it_vbrk WHERE fkdat NOT IN s_indate.
DELETE it_vbrk WHERE fkdat NOT IN s_chdate.
DELETE it_vbrk WHERE bukrs NOT IN s_ccode.
SORT it_vbrk DESCENDING BY vbeln fkdat.
ENDIF.
ENDIF.
Hi,
Transaction SE11
Table VRKPA => Display (not Change)
Click on "Indexes"
Click on "Create" (if your system is Basis 7.00, then click on the "Create" drop-down icon and choose "Create extension index")
Choose a name (up to 3 characters, starting with Z)
Enter a description for the index
Enter the field names of the index
Choose "Save" (prompts for transport request)
Choose "Activate"
If after "Activate" the status shows "Index exists in database system <...>", then you have nothing more to do. If the table is very large, the activation will not create the index in the database and the status remains "Index does not exist". In that case:
- Transaction SE14
- Table VRKPA -> Edit
- Choose "Indexes" and select your new index
- Choose "Create database index"; mark the option "Background"
- Wait until the job is finished and check in SE11 that the index now exists in the DB
You don't have to do anything to your program because Oracle should choose the new index automatically. Run an SQL trace to make sure.
Rgds,
Mark -
Performance Issue with the query
Hi Experts,
While working on PeopleSoft, today I was stuck with a problem where one of the queries is performing really badly. Even with all statistics updated, the query does not perform well. On one of the tables, the query spends a lot of time doing IO (db file sequential read wait). But if I delete the stats for that table and let dynamic sampling take place, the query works fine and there is no IO wait on the table.
Here is the query
SELECT A.BUSINESS_UNIT_PC, A.PROJECT_ID, E.DESCR, A.EMPLID, D.NAME, C.DESCR, A.TRC,
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( A.EST_GROSS),
DECODE( A.TRC, 'ROVA1', 0, 'ROVA2', 0, ( SUM( A.EST_GROSS)/100) * 9.75),
'2012-07-01',
'2012-07-31',
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'),
TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'),
TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY') || ' / ' || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
C.TRC, TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
E.BUSINESS_UNIT,
E.PROJECT_ID
FROM
PS_TL_PAYABLE_TIME A,
PS_TL_TRC_TBL C,
PS_PROJECT E,
PS_PERSONAL_DATA D,
PS_PERALL_SEC_QRY D1,
PS_JOB F,
PS_EMPLMT_SRCH_QRY F1
WHERE
D.EMPLID = D1.EMPLID
AND D1.OPRID = 'TMANI'
AND F.EMPLID = F1.EMPLID
AND F.EMPL_RCD = F1.EMPL_RCD
AND F1.OPRID = 'TMANI'
AND A.DUR BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD') AND TO_DATE('2012-07-31','YYYY-MM-DD')
AND C.TRC = A.TRC
AND C.EFFDT = (SELECT
MAX(C_ED.EFFDT)
FROM
PS_TL_TRC_TBL C_ED
WHERE
C.TRC = C_ED.TRC
AND C_ED.EFFDT <= SYSDATE)
AND E.BUSINESS_UNIT = A.BUSINESS_UNIT_PC
AND E.PROJECT_ID = A.PROJECT_ID
AND A.EMPLID = D.EMPLID
AND A.EMPLID = F.EMPLID
AND A.EMPL_RCD = F.EMPL_RCD
AND F.EFFDT = (SELECT
MAX(F_ED.EFFDT)
FROM
PS_JOB F_ED
WHERE
F.EMPLID = F_ED.EMPLID
AND F.EMPL_RCD = F_ED.EMPL_RCD
AND F_ED.EFFDT <= SYSDATE)
AND F.EFFSEQ = (SELECT
MAX(F_ES.EFFSEQ)
FROM
PS_JOB F_ES
WHERE
F.EMPLID = F_ES.EMPLID
AND F.EMPL_RCD = F_ES.EMPL_RCD
AND F.EFFDT = F_ES.EFFDT)
AND F.GP_PAYGROUP = DECODE(' ', ' ', F.GP_PAYGROUP, ' ')
AND A.CURRENCY_CD = DECODE(' ', ' ', A.CURRENCY_CD, ' ')
AND DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')) = DECODE(' ', ' ', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')), 'L', 'L', 'E', 'E', 'A', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')))
AND ( A.EMPLID, A.EMPL_RCD) IN (SELECT
B.EMPLID,
B.EMPL_RCD
FROM
PS_TL_GROUP_DTL B
WHERE
B.TL_GROUP_ID = DECODE('ER012', ' ', B.TL_GROUP_ID, 'ER012'))
AND E.PROJECT_USER1 = DECODE(' ', ' ', E.PROJECT_USER1, ' ')
AND A.PROJECT_ID = DECODE(' ', ' ', A.PROJECT_ID, ' ')
AND A.PAYABLE_STATUS <>
CASE
WHEN to_number(TO_CHAR(sysdate, 'DD')) < 15
THEN
CASE
WHEN A.DUR > last_day(add_months(sysdate, -2))
THEN ' '
ELSE 'NA'
END
ELSE
CASE
WHEN A.DUR > last_day(add_months(sysdate, -1))
THEN ' '
ELSE 'NA'
END
END
AND A.EMPLID = DECODE(' ', ' ', A.EMPLID, ' ')
GROUP BY A.BUSINESS_UNIT_PC,
A.PROJECT_ID,
E.DESCR,
A.EMPLID,
D.NAME,
C.DESCR,
A.TRC,
'2012-07-01',
'2012-07-31',
DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY') || ' / '
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ') || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
C.TRC, TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
E.BUSINESS_UNIT, E.PROJECT_ID
HAVING SUM( A.EST_GROSS) <> 0
ORDER BY 1,
2,
5,
6;
Here is the screenshot for IO wait
[https://lh4.googleusercontent.com/-6PFW2FSK3yE/UCrwUbZ0pvI/AAAAAAAAAPA/eHM48AOC0Uo]
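For readers decoding the repeated SUM(DECODE(SUBSTR(TO_CHAR(A.DUR, ...), 9, 2), ...)) columns in the query above: each one pivots A.TL_QUANTITY into a bucket for one day of the reporting month. A Python sketch of the same bucketing (dates and quantities invented):

```python
from datetime import date

# Invented sample of (A.DUR, A.TL_QUANTITY) pairs for July 2012.
rows = [(date(2012, 7, 1), 8.0),
        (date(2012, 7, 1), 1.5),
        (date(2012, 7, 3), 4.0)]

days_in_month = 31
buckets = [0.0] * days_in_month
for dur, qty in rows:
    buckets[dur.day - 1] += qty   # day of month picks the output column

# buckets[0] == 9.5 (July 1st), buckets[2] == 4.0 (July 3rd)
```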
Edited by: Oceaner on Aug 14, 2012 5:38 PM
Here is the execution plan with all the statistics present
PLAN_TABLE_OUTPUT
Plan hash value: 1575300420
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 237 | 2703 (1)| 00:00:33 |
|* 1 | FILTER | | | | | |
| 2 | SORT GROUP BY | | 1 | 237 | | |
| 3 | CONCATENATION | | | | | |
|* 4 | FILTER | | | | | |
|* 5 | FILTER | | | | | |
|* 6 | HASH JOIN | | 1 | 237 | 1695 (1)| 00:00:21 |
|* 7 | HASH JOIN | | 1 | 204 | 1689 (1)| 00:00:21 |
|* 8 | HASH JOIN SEMI | | 1 | 193 | 1352 (1)| 00:00:17 |
| 9 | NESTED LOOPS | | | | | |
| 10 | NESTED LOOPS | | 1 | 175 | 1305 (1)| 00:00:16 |
|* 11 | HASH JOIN | | 1 | 148 | 1304 (1)| 00:00:16 |
| 12 | JOIN FILTER CREATE | :BF0000 | | | | |
| 13 | NESTED LOOPS | | | | | |
| 14 | NESTED LOOPS | | 1 | 134 | 967 (1)| 00:00:12 |
| 15 | NESTED LOOPS | | 1 | 103 | 964 (1)| 00:00:12 |
|* 16 | TABLE ACCESS FULL | PS_PROJECT | 197 | 9062 | 278 (1)| 00:00:04 |
|* 17 | TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME | 1 | 57 | 7 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | IDX$$_C44D0007 | 16 | | 3 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | IDX$$_3F450003 | 1 | | 2 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 1 | 31 | 3 (0)| 00:00:01 |
| 21 | VIEW | PS_EMPLMT_SRCH_QRY | 5428 | 75992 | 336 (2)| 00:00:05 |
PLAN_TABLE_OUTPUT
| 22 | SORT UNIQUE | | 5428 | 275K| 336 (2)| 00:00:05 |
|* 23 | FILTER | | | | | |
| 24 | JOIN FILTER USE | :BF0000 | 55671 | 2827K| 335 (1)| 00:00:05 |
| 25 | NESTED LOOPS | | 55671 | 2827K| 335 (1)| 00:00:05 |
| 26 | TABLE ACCESS BY INDEX ROWID| PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 27 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|* 28 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1739K| 333 (1)| 00:00:04 |
| 29 | CONCATENATION | | | | | |
| 30 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|* 31 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|* 32 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 33 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 34 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 35 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 36 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 37 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 38 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
|* 39 | INDEX UNIQUE SCAN | PS_PERSONAL_DATA | 1 | | 0 (0)| 00:00:01 |
| 40 | TABLE ACCESS BY INDEX ROWID | PS_PERSONAL_DATA | 1 | 27 | 1 (0)| 00:00:01 |
|* 41 | INDEX FAST FULL SCAN | PS_TL_GROUP_DTL | 323 | 5814 | 47 (3)| 00:00:01 |
| 42 | VIEW | PS_PERALL_SEC_QRY | 7940 | 87340 | 336 (2)| 00:00:05 |
| 43 | SORT UNIQUE | | 7940 | 379K| 336 (2)| 00:00:05 |
|* 44 | FILTER | | | | | |
|* 45 | FILTER | | | | | |
| 46 | NESTED LOOPS | | 55671 | 2663K| 335 (1)| 00:00:05 |
| 47 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 48 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|* 49 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1576K| 333 (1)| 00:00:04 |
| 50 | CONCATENATION | | | | | |
| 51 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|* 52 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|* 53 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 54 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 55 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 56 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 57 | CONCATENATION | | | | | |
| 58 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 59 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 60 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 61 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 62 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 63 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 64 | NESTED LOOPS | | 1 | 63 | 3 (0)| 00:00:01 |
| 65 | INLIST ITERATOR | | | | | |
|* 66 | INDEX RANGE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 2 (0)| 00:00:01 |
|* 67 | INDEX RANGE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
| 68 | TABLE ACCESS FULL | PS_TL_TRC_TBL | 922 | 30426 | 6 (0)| 00:00:01 |
| 69 | SORT AGGREGATE | | 1 | 20 | | |
|* 70 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 71 | SORT AGGREGATE | | 1 | 14 | | |
|* 72 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 73 | SORT AGGREGATE | | 1 | 23 | | |
|* 74 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
|* 75 | FILTER | | | | | |
PLAN_TABLE_OUTPUT
|* 76 | FILTER | | | | | |
|* 77 | HASH JOIN | | 1 | 237 | 974 (2)| 00:00:12 |
|* 78 | HASH JOIN | | 1 | 226 | 637 (1)| 00:00:08 |
|* 79 | HASH JOIN | | 1 | 193 | 631 (1)| 00:00:08 |
| 80 | NESTED LOOPS | | | | | |
| 81 | NESTED LOOPS | | 1 | 179 | 294 (1)| 00:00:04 |
| 82 | NESTED LOOPS | | 1 | 152 | 293 (1)| 00:00:04 |
| 83 | NESTED LOOPS | | 1 | 121 | 290 (1)| 00:00:04 |
|* 84 | HASH JOIN SEMI | | 1 | 75 | 289 (1)| 00:00:04 |
|* 85 | TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME | 1 | 57 | 242 (0)| 00:00:03 |
|* 86 | INDEX SKIP SCAN | IDX$$_C44D0007 | 587 | | 110 (0)| 00:00:02 |
|* 87 | INDEX FAST FULL SCAN | PS_TL_GROUP_DTL | 323 | 5814 | 47 (3)| 00:00:01 |
|* 88 | TABLE ACCESS BY INDEX ROWID | PS_PROJECT | 1 | 46 | 1 (0)| 00:00:01 |
|* 89 | INDEX UNIQUE SCAN | PS_PROJECT | 1 | | 0 (0)| 00:00:01 |
|* 90 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 1 | 31 | 3 (0)| 00:00:01 |
|* 91 | INDEX RANGE SCAN | IDX$$_3F450003 | 1 | | 2 (0)| 00:00:01 |
|* 92 | INDEX UNIQUE SCAN | PS_PERSONAL_DATA | 1 | | 0 (0)| 00:00:01 |
| 93 | TABLE ACCESS BY INDEX ROWID | PS_PERSONAL_DATA | 1 | 27 | 1 (0)| 00:00:01 |
| 94 | VIEW | PS_EMPLMT_SRCH_QRY | 5428 | 75992 | 336 (2)| 00:00:05 |
| 95 | SORT UNIQUE | | 5428 | 275K| 336 (2)| 00:00:05 |
|* 96 | FILTER | | | | | |
| 97 | NESTED LOOPS | | 55671 | 2827K| 335 (1)| 00:00:05 |
| 98 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 99 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|*100 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1739K| 333 (1)| 00:00:04 |
| 101 | CONCATENATION | | | | | |
| 102 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|*103 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|*104 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 105 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*106 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*107 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 108 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*109 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*110 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 111 | TABLE ACCESS FULL | PS_TL_TRC_TBL | 922 | 30426 | 6 (0)| 00:00:01 |
| 112 | VIEW | PS_PERALL_SEC_QRY | 7940 | 87340 | 336 (2)| 00:00:05 |
| 113 | SORT UNIQUE | | 7940 | 379K| 336 (2)| 00:00:05 |
|*114 | FILTER | | | | | |
|*115 | FILTER | | | | | |
| 116 | NESTED LOOPS | | 55671 | 2663K| 335 (1)| 00:00:05 |
| 117 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|*118 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|*119 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1576K| 333 (1)| 00:00:04 |
| 120 | CONCATENATION | | | | | |
| 121 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|*122 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|*123 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 124 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*125 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*126 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 127 | CONCATENATION | | | | | |
| 128 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*129 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|*130 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 131 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*132 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*133 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 134 | NESTED LOOPS | | 1 | 63 | 3 (0)| 00:00:01 |
| 135 | INLIST ITERATOR | | | | | |
|*136 | INDEX RANGE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 2 (0)| 00:00:01 |
|*137 | INDEX RANGE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
| 138 | SORT AGGREGATE | | 1 | 20 | | |
|*139 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 140 | SORT AGGREGATE | | 1 | 14 | | |
|*141 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 142 | SORT AGGREGATE | | 1 | 23 | | |
|*143 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
| 144 | SORT AGGREGATE | | 1 | 14 | | |
|*145 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 146 | SORT AGGREGATE | | 1 | 20 | | |
|*147 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 148 | SORT AGGREGATE | | 1 | 23 | | |
|*149 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------
Most of the IO wait occurs at step 17, though the explain plan alone does not show this. But when we run this query from the PeopleSoft application (online page), it is visible through Grid Control SQL Monitoring.
The second part of the question continues... -
Performance issue with the FM FKK_CLEARING_PROPOSAL_GEN_0110.
Hi Experts,
I am passing some 60000 transactions to this FM FKK_CLEARING_PROPOSAL_GEN_0110 and it takes a lot of time to process.
Please let me know if there is any way, or any SAP notes to implement, to reduce the time this FM takes to process the same number of transactions.
Regards,
Karthik

Hi,
To determine the root cause you can use an ST05 performance trace, which records database accesses, lock activity, and remote calls of reports and transactions in a trace file and displays the performance log as a list.
Once you are done with this, check whether any modifications are required for poorly performing code (such as a PERFORM commit routine ON COMMIT), and whether all internal tables whose data was updated are initialized again at the end of the commit routine, to prevent a duplicate update in the next call.
You can then adjust this module for event 110 (Automatic Clearing Proposal) using transaction FQEVENTS,
or at Financial Accounting -> Contract Accounts Receivable and Payable -> Program Enhancements -> Define Customer-Specific Function Modules.
You can also activate the following fields in table T_FKKCL for clearing amounts to be assigned
XAKTP 'X' (marked) Item will be used
XAKTS 'X' (marked) Specified cash discount will be used
AUGBW Assigned amount (gross)
ASKTW Assigned cash discount amount
SKTPA Accepted cash discount percentage (similar to SKTPZ)
Alternately check the clearing control rules at
Contract Accounts Receivable and Payable -> Basic Functions -> Open Item Management -> Clearing Control
By assigning a clearing variant to the clearing type of the clearing process, you can use clearing control to assign open items automatically through:
1. Selection of open items for clearing
2. Grouping of open items for joint clearing
3. Sorting of open items for the order of processing for individual item groups or items within the group
4. Split of payment amount according to different methods for partial clearing
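Step 4, the payment split, can be sketched in plain Python (hypothetical names; a simplified model of partial clearing, not SAP code): the payment is allocated across open items in the order the sort rules produced, and the last item reached may be cleared only partially.

```python
def split_payment(payment, open_items):
    """Allocate a payment across open items in processing order.

    open_items: list of (item_id, amount_due) tuples, already sorted
    by the clearing variant's sort rules.  Returns a list of
    (item_id, cleared_amount) assignments; the last item touched may
    be cleared only partially.
    """
    assignments = []
    remaining = payment
    for item_id, due in open_items:
        if remaining <= 0:
            break
        cleared = min(due, remaining)  # full clearing, or partial on the last item
        assignments.append((item_id, cleared))
        remaining -= cleared
    return assignments

# Example: a 150.00 payment against three open items
print(split_payment(150.0, [("A", 100.0), ("B", 80.0), ("C", 50.0)]))
# [('A', 100.0), ('B', 50.0)]
```

Real clearing control applies grouping rules and several split methods on top of this, but the amount allocation has this basic shape.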
Thanks,
Sagar -
Performance Issue with the Query urgent
Is there any way to get the data for materials rejected with movement types 122 and 123 other than from the MSEG table? My report is very slow...
My query is as below:
SELECT SUM( a~dmbtr )INTO value1
FROM mseg AS a INNER JOIN mkpf AS b
ON a~mblnr = b~mblnr
AND a~mjahr = b~mjahr
WHERE a~lifnr = p_lifnr
AND a~bwart IN ('122')
AND b~budat IN s_budat
GROUP BY lifnr.
ENDSELECT.
Abhishek Suppal

Hi Abhi,
Try like this ....
SELECT SUM( a~dmbtr )INTO value1
FROM mseg AS a INNER JOIN mkpf AS b
ON a~mblnr = b~mblnr
AND a~mjahr = b~mjahr
WHERE a~lifnr = p_lifnr
AND a~bwart IN ('122','123')
AND b~budat IN s_budat
GROUP BY lifnr.
ENDSELECT.
or ...
define ranges...like
ranges: r_bwart for XXXX-bwart.
r_bwart-sign = 'I'.
r_bwart-option = 'EQ'.
r_bwart-low = 122.
append r_bwart.
r_bwart-low = 123.
append r_bwart.
Now, in the select statement, just add
AND a~bwart IN r_bwart
Thanks
Eswar -
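For readers unfamiliar with ranges tables: `IN r_bwart` checks each row's value against the include/exclude lines of the range. A rough Python sketch of that semantics (illustrative only, not SAP code; only the EQ and BT options are modeled):

```python
def in_range_table(value, ranges):
    """Emulate ABAP's `IN r_tab` check for a ranges table.

    ranges: list of dicts with keys sign ('I'/'E'), option
    ('EQ' or 'BT'), low and high.  An empty table matches everything;
    otherwise a value matches if it hits some include line and no
    exclude line.
    """
    if not ranges:
        return True

    def hits(line):
        if line["option"] == "EQ":
            return value == line["low"]
        if line["option"] == "BT":
            return line["low"] <= value <= line["high"]
        raise ValueError("unsupported option")

    includes = [l for l in ranges if l["sign"] == "I"]
    excludes = [l for l in ranges if l["sign"] == "E"]
    included = any(hits(l) for l in includes) if includes else True
    return included and not any(hits(l) for l in excludes)

# The r_bwart built in the answer above, as data:
r_bwart = [
    {"sign": "I", "option": "EQ", "low": "122", "high": ""},
    {"sign": "I", "option": "EQ", "low": "123", "high": ""},
]
print(in_range_table("122", r_bwart))  # True
print(in_range_table("101", r_bwart))  # False
```

On the database side the range is translated into the WHERE clause, so one SELECT covers both movement types in a single pass.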
Performance issue with the code
Hi,
I have the code below. It takes a long time when printing.
How can I improve the performance of this piece of code, and what changes should I make to improve it?
Kindly help me.
form get_komgd.
tables: kotd994. "kondd
data : tfill_auswahl type i.
data: begin of auswahl occurs 10,
kappl like kotd994-kappl ,
kschl like kotd994-kschl ,
vkorg like kotd994-vkorg ,
vtweg like kotd994-vtweg ,
spart like kotd994-spart ,
kvgr1 like kotd994-kvgr1 ,
matwa like kotd994-matwa ,
datbi like kotd994-datbi ,
datab like kotd994-datab ,
knumh like kotd994-knumh ,
smatn like kondd-smatn,
meins like kondd-meins,
sugrd like kondd-sugrd,
end of auswahl.
tables: kondd.
select * from kondd
where smatn = ltap-matnr.
select * from kotd994
where kappl = 'V'
and kschl = vbak-kschl
and vkorg = vbak-vkorg
and vtweg = vbak-vtweg
and spart = vbak-spart
and kvgr1 = vbak-kvgr1
and matwa = vbak-matwa
and datbi >= sy-datum
and datab <= sy-datum
and knumh = kondd-knumh .
endselect.
if sy-subrc = 0 and kotd994-kvgr1(1) = 'Z'.
move-corresponding kotd994 to auswahl.
move-corresponding kondd to auswahl.
append auswahl .
endif. " sy-subrc = 0.
write: / auswahl.
endselect.
describe table auswahl lines tfill_auswahl.
if tfill_auswahl = 1.
komgd-matwa = auswahl-matwa.
else.
clear komgd-matwa.
endif. " tfill_auswahl = 1.
endform. " ZHX_GET_COSTUMER_NR

Hi,
Using two select statements will take more time.
Rather use this sample code:
select * from kondd into corresponding fields of table auswahl
where smatn = ltap-matnr.
select * from kotd994 into corresponding fields of auswahl
for all entries of auswahl
where kappl = 'V'
and vkorg = vbak-vkorg
and vtweg = vbak-vtweg
and kvgr1 = vbak-kvgr1
and datbi >= sy-datum
and datab <= sy-datum
and knumh = auswahl-knumh .
However, keep the field knumh in the structure of auswahl as well, so that the values can be passed to the second select directly.
This will improve the performance very well.
If possible, use the primary key combination in the WHERE clause when extracting the data.
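The gain from FOR ALL ENTRIES comes from collecting the driver keys once and issuing a single keyed fetch instead of a select inside a loop. A rough Python sketch of the pattern (hypothetical helper names), including the guard for an empty driver table, which in ABAP would otherwise select every row:

```python
def for_all_entries_lookup(driver_rows, key_field, fetch_by_keys):
    """Mimic FOR ALL ENTRIES: collect the distinct keys from the
    driver table once, then issue one keyed fetch instead of one
    SELECT per driver row.

    fetch_by_keys: callable taking a set of keys and returning rows;
    it stands in for the second SELECT against the database.
    """
    keys = {row[key_field] for row in driver_rows}  # duplicates removed
    if not keys:
        # FOR ALL ENTRIES on an empty table is a classic trap: in
        # ABAP it would select *all* rows, so guard it explicitly.
        return []
    return fetch_by_keys(keys)

# Toy "database" for the second table, keyed by knumh
kotd994 = {"K1": {"knumh": "K1", "matwa": "M1"},
           "K2": {"knumh": "K2", "matwa": "M2"}}

rows = for_all_entries_lookup(
    [{"knumh": "K1"}, {"knumh": "K1"}, {"knumh": "K2"}],
    "knumh",
    lambda ks: [kotd994[k] for k in sorted(ks)])
print(rows)
```

Note that the three driver rows produce only two keys, so the second fetch touches the database once with a deduplicated key set.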
Hi Gurus,
We have a select statement that uses the view pa_draft_inv_line_details_v. This view is causing a performance issue when the select statement executes. Can you please help me with this?
Thanks,
RS.

Hussien,
Sorry for Late Reply.
Here are the details:
Gather Schema Statistics
INDEX_NAME TABLE_NAME
AP_SUPPLIERS_N1 AP_SUPPLIERS
AP_SUPPLIERS_N2 AP_SUPPLIERS
AP_SUPPLIERS_N3 AP_SUPPLIERS
AP_SUPPLIERS_N4 AP_SUPPLIERS
AP_SUPPLIERS_N5 AP_SUPPLIERS
AP_SUPPLIERS_N6 AP_SUPPLIERS
AP_SUPPLIERS_N7 AP_SUPPLIERS
AP_SUPPLIERS_N8 AP_SUPPLIERS
AP_SUPPLIERS_U1 AP_SUPPLIERS
AP_SUPPLIERS_U2 AP_SUPPLIERS
FND_LOOKUP_VALUES_U1 FND_LOOKUP_VALUES
FND_LOOKUP_VALUES_U2 FND_LOOKUP_VALUES
GL_DAILY_CONVERSION_TYPES_U1 GL_DAILY_CONVERSION_TYPES
GL_DAILY_CONVERSION_TYPES_U2 GL_DAILY_CONVERSION_TYPES
HR_ORGANIZATION_UNITS_FK1 HR_ALL_ORGANIZATION_UNITS
HR_ORGANIZATION_UNITS_FK2 HR_ALL_ORGANIZATION_UNITS
HR_ORGANIZATION_UNITS_FK3 HR_ALL_ORGANIZATION_UNITS
HR_ORGANIZATION_UNITS_FK4 HR_ALL_ORGANIZATION_UNITS
HR_ORGANIZATION_UNITS_PK HR_ALL_ORGANIZATION_UNITS
HR_ORGANIZATION_UNITS_UK2 HR_ALL_ORGANIZATION_UNITS
HR_ALL_ORGANIZATION_UNTS_TL_N2 HR_ALL_ORGANIZATION_UNITS_TL
HR_ALL_ORGANIZATION_UNTS_TL_PK HR_ALL_ORGANIZATION_UNITS_TL
PA_COST_DISTRIBUTION_LINES_N10 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N12 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N13 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N14 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N15 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N16 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N17 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N19 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N2 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N20 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N3 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N4 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N5 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N6 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N7 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N8 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_N9 PA_COST_DISTRIBUTION_LINES_ALL
PA_COST_DISTRIBUTION_LINES_U1 PA_COST_DISTRIBUTION_LINES_ALL
PA_CUST_EVENT_REV_DIST_LINE_N1 PA_CUST_EVENT_RDL_ALL
PA_CUST_EVENT_REV_DIST_LINE_N2 PA_CUST_EVENT_RDL_ALL
PA_CUST_EVENT_REV_DIST_LINE_N3 PA_CUST_EVENT_RDL_ALL
PA_CUST_EVENT_REV_DIST_LINE_N4 PA_CUST_EVENT_RDL_ALL
PA_CUST_EVENT_REV_DIST_LINE_N5 PA_CUST_EVENT_RDL_ALL
PA_CUST_EVENT_REV_DIST_LINE_U1 PA_CUST_EVENT_RDL_ALL
PA_CUST_REV_DIST_LINES_N1 PA_CUST_REV_DIST_LINES_ALL
PA_CUST_REV_DIST_LINES_N2 PA_CUST_REV_DIST_LINES_ALL
PA_CUST_REV_DIST_LINES_N3 PA_CUST_REV_DIST_LINES_ALL
PA_CUST_REV_DIST_LINES_N4 PA_CUST_REV_DIST_LINES_ALL
PA_CUST_REV_DIST_LINES_N5 PA_CUST_REV_DIST_LINES_ALL
PA_CUST_REV_DIST_LINES_N6 PA_CUST_REV_DIST_LINES_ALL
PA_CUST_REV_DIST_LINES_N7 PA_CUST_REV_DIST_LINES_ALL
PA_CUST_REV_DIST_LINES_N9 PA_CUST_REV_DIST_LINES_ALL
PA_CUST_REV_DIST_LINES_U1 PA_CUST_REV_DIST_LINES_ALL
PA_DRAFT_INVOICE_ITEMS_N1 PA_DRAFT_INVOICE_ITEMS
PA_DRAFT_INVOICE_ITEMS_N2 PA_DRAFT_INVOICE_ITEMS
PA_DRAFT_INVOICE_ITEMS_N3 PA_DRAFT_INVOICE_ITEMS
PA_DRAFT_INVOICE_ITEMS_N4 PA_DRAFT_INVOICE_ITEMS
PA_DRAFT_INVOICE_ITEMS_N5 PA_DRAFT_INVOICE_ITEMS
PA_DRAFT_INVOICE_ITEMS_N6 PA_DRAFT_INVOICE_ITEMS
PA_DRAFT_INVOICE_ITEMS_U1 PA_DRAFT_INVOICE_ITEMS
PA_EVENTS_N1 PA_EVENTS
PA_EVENTS_N2 PA_EVENTS
PA_EVENTS_N3 PA_EVENTS
PA_EVENTS_N4 PA_EVENTS
PA_EVENTS_N5 PA_EVENTS
PA_EVENTS_N6 PA_EVENTS
PA_EVENTS_U1 PA_EVENTS
PA_EVENTS_U2 PA_EVENTS
PA_EVENTS_U3 PA_EVENTS
PA_EVENTS_U4 PA_EVENTS
PA_EXPENDITURES_ALL_N11 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N1 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N2 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N3 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N4 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N5 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N6 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N7 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N8 PA_EXPENDITURES_ALL
PA_EXPENDITURES_N9 PA_EXPENDITURES_ALL
PA_EXPENDITURES_U1 PA_EXPENDITURES_ALL
PA_EXPENDITURES_ITEMS_N18 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N1 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N10 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N11 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N12 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N13 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N14 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N15 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N16 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N17 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N18 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N19 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N2 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N20 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N21 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N22 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N23 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N24 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N25 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N26 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N27 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N28 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N29 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N3 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N30 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N31 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N32 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N33 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N35 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N4 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N5 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N6 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N7 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N8 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_N9 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_ITEMS_U1 PA_EXPENDITURE_ITEMS_ALL
PA_EXPENDITURE_TYPES_N1 PA_EXPENDITURE_TYPES
PA_EXPENDITURE_TYPES_N2 PA_EXPENDITURE_TYPES
PA_EXPENDITURE_TYPES_N3 PA_EXPENDITURE_TYPES
PA_EXPENDITURE_TYPES_U1 PA_EXPENDITURE_TYPES
PA_EXPENDITURE_TYPES_U2 PA_EXPENDITURE_TYPES
PA_PROJECT_STATUSES_U2 PA_PROJECT_STATUSES
PA_PROJECT_STATUSES_U3 PA_PROJECT_STATUSES
PA_PROJECT_TYPES_N1 PA_PROJECT_TYPES_ALL
PA_PROJECT_TYPES_U1 PA_PROJECT_TYPES_ALL
PA_PROJECT_TYPES_U2 PA_PROJECT_TYPES_ALL
PA_TASKS_N1 PA_TASKS
PA_TASKS_N10 PA_TASKS
PA_TASKS_N11 PA_TASKS
PA_TASKS_N12 PA_TASKS
PA_TASKS_N13 PA_TASKS
PA_TASKS_N14 PA_TASKS
PA_TASKS_N2 PA_TASKS
PA_TASKS_N3 PA_TASKS
PA_TASKS_N4 PA_TASKS
PA_TASKS_N5 PA_TASKS
PA_TASKS_N6 PA_TASKS
PA_TASKS_N7 PA_TASKS
PA_TASKS_N8 PA_TASKS
PA_TASKS_N9 PA_TASKS
PA_TASKS_U1 PA_TASKS
PA_TASKS_U2 PA_TASKS
PER_PEOPLE_F_FK1 PER_ALL_PEOPLE_F
PER_PEOPLE_F_FK2 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N1 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N2 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N50 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N51 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N52 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N53 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N54 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N55 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N56 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N57 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N58 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N59 PER_ALL_PEOPLE_F
PER_PEOPLE_F_N60 PER_ALL_PEOPLE_F
PER_PEOPLE_F_PK PER_ALL_PEOPLE_F
xxx_PERALL_EFFDATE PER_ALL_PEOPLE_F -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the <font face="courier">base_query</font> function. Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something? The <font face="courier">processor</font> function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions.
Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Edited by: Earthlink on Nov 14, 2010 9:47 AM
Edited by: Earthlink on Nov 14, 2010 11:31 AM
Edited by: Earthlink on Nov 14, 2010 11:32 AM
Edited by: Earthlink on Nov 20, 2010 12:04 PM
Edited by: Earthlink on Nov 20, 2010 12:54 PM

Earthlink wrote:
Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something?

Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
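Conceptually, the `processor` function's FETCH ... BULK COLLECT ... LIMIT / PIPE ROW loop above is a batch-and-stream pattern. A rough Python analogue using a generator (illustrative only, not the Oracle API) shows its shape: rows are pulled in fixed-size batches but emitted one at a time:

```python
def processor(source, limit=100):
    """Stream rows from `source` in batches of `limit`, yielding one
    row at a time -- the shape of FETCH ... BULK COLLECT ... LIMIT
    followed by PIPE ROW in the PL/SQL above."""
    batch = []
    for row in source:
        batch.append(row)
        if len(batch) == limit:
            for r in batch:   # "pipe" each row of the full batch
                yield r
            batch = []
    for r in batch:           # don't lose the final partial batch
        yield r

rows = list(processor(range(7), limit=3))
print(rows)  # [0, 1, 2, 3, 4, 5, 6]
```

The pattern adds a per-batch copy on top of the underlying scan, which is one reason a pipelined wrapper can lose to a plain query when the query itself is already fast, as the thread goes on to discuss.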
Performance issues with version-enabled partitioned tables?
Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the optimizer is choosing very expensive plans during merge operations.
Thanks in advance,
Vitor
Example:
Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=CHOOSE 1 249
UPDATE SIG.SIG_QUA_IMG_LT
NESTED LOOPS SEMI 1 266 249
PARTITION RANGE ALL 1 9
TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT 1 259 2 1 9
VIEW SYS.VW_NSO_1 1 7 247
NESTED LOOPS 1 739 247
NESTED LOOPS 1 677 247
NESTED LOOPS 1 412 246
NESTED LOOPS 1 114 244
INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK 1 62 2
INDEX RANGE SCAN SIG.QIM_PK 1 52 243
TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT 1 298 2 ROWID ROW L
INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ 1 1
INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX 1 265 1
INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK 1 62
/* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1
SET z1.nextver =
SYS.ltutil.subsversion
(z1.nextver,
SYS.ltutil.getcontainedverinrange (z1.nextver,
'SIG.SIG_QUA_IMG',
'NpCyPCX3dkOAHSuBMjGioQ==',
4574,
4575),
4574)
WHERE z1.ROWID IN (
(SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
t2.ROWID
FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j1,
sig.sig_qua_img_lt t1,
sig.sig_qua_img_lt t2,
wmsys.wm$nextver_table j2,
(SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j3
WHERE t1.VERSION = j1.VERSION
AND t1.ima_id = t2.ima_id
AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
AND t2.nextver != '-1'
AND t2.nextver = j2.next_vers
AND j2.VERSION = j3.VERSION))

Hello Vitor,
There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table was been recently analyzed so that the optimizer has the most current data about the table.
Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
Thank You,
Ben -
Performance issue with HRALXSYNC report..
HI,
I'm facing a performance issue with the HRALXSYNC report. As this is a standard report, can anybody suggest how to optimize it?
Thanks in advance.
Saleem Javed
Moderator message: Please Read before Posting in the Performance and Tuning Forum, also look for existing SAP notes and/or send a support message to SAP.
Edited by: Thomas Zloch on Aug 23, 2011 4:17 PM

Sreedhar,
Thanks for your quick response. Indexes were not created for the VBPA table. The Basis people tested by creating indexes and reported that the query takes more time with the indexes than with the regular query optimizer. This is happening in the function forward_ag_selection.
select vbeln lifnr from vbpa
appending corresponding fields of table lt_select
where vbeln in ct_vbeln
and posnr eq posnr_initial
and parvw eq 'SP'
and lifnr in it_spdnr.
I don't see any issue with this query. I will give more info later.