Performance tuning Issue
Hi folks,
I'm having a problem with performance tuning. Below is a sample query:
SELECT /*+ PARALLEL (K 4) */ DISTINCT ltrim(rtrim(ibc_item)), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE ltrim(rtrim(ibc_item)) NOT IN
(SELECT /*+ PARALLEL (II 4) */ DISTINCT ltrim(rtrim(THIRD_MAINKEY)) FROM BBB II
WHERE SECOND_MAINKEY = 3
UNION
SELECT /*+ PARALLEL (III 4) */ DISTINCT ltrim(rtrim(BLN_BUSINESS_LINE_NAME)) FROM CCC III
WHERE BLN_BUSINESS_LINE = 3)
The above query has a cost of 460 million. I tried creating an index, but Oracle is not using it because a full table scan looks better. (I too feel the full table scan is best, as 90% of the rows in the table are used.)
After using the parallel hint the cost drops to 100 million.
Is there any way to decrease the cost further?
Thanks in advance for your help!
Be aware too Nalla, that the PARALLEL hint will rule out the use of an index if Oracle adheres to it.
This is what I would try:
SELECT /*+ PARALLEL (K 4) */ DISTINCT TRIM(ibc_item), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE NOT EXISTS (
SELECT 1
FROM BBB II
WHERE SECOND_MAINKEY = 3
AND TRIM(THIRD_MAINKEY) = TRIM(K.ibc_item))
AND NOT EXISTS (
SELECT 1
FROM CCC III
WHERE BLN_BUSINESS_LINE = 3
AND TRIM(BLN_BUSINESS_LINE_NAME) = TRIM(K.ibc_item))

But I don't like this at all: TRIM(K.ibc_item). And you never need to use DISTINCT with NOT IN or NOT EXISTS.
Try this:
SELECT DISTINCT TRIM(ibc_item), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE NOT EXISTS (
SELECT 1
FROM BBB II
WHERE SECOND_MAINKEY = 3
AND TRIM(THIRD_MAINKEY) = K.ibc_item)
AND NOT EXISTS (
SELECT 1
FROM CCC III
WHERE BLN_BUSINESS_LINE = 3
AND TRIM(BLN_BUSINESS_LINE_NAME) = K.ibc_item)

This may not work though, since you may have whitespace in K.ibc_item.
Similar Messages
-
Performance tuning issues -- please help
Hi tuning gurus,
This query works fine for a small number of rows, e.g.:
where ROWNUM <= 10 )
where rnum >= 1;
but takes a lot of time as we increase the ROWNUM limit, e.g.:
where ROWNUM <= 10000 )
where rnum >= 9990;
Results are posted below.
Please advise.
Oracle version: Oracle Database 10g Enterprise Edition
Release 10.2.0.1.0 - Prod
OS version: Red Hat Enterprise Linux ES release 4
Also, statistics differ when we query the table directly
versus through its view.
Results for the view v$mail:
[select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from v$mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;]
result :
11 rows selected.
Elapsed: 00:00:03.84
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
294 recursive calls
0 db block gets
8715 consistent gets
8669 physical reads
0 redo size
7060 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from v$mail;
Elapsed: 00:00:00.17
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
8 recursive calls
0 db block gets
2171 consistent gets
2057 physical reads
260 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Results for the original table mail:
[select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;]
result :
11 rows selected.
Elapsed: 00:00:03.21
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
1 recursive calls
119544 db block gets
8686 consistent gets
8648 physical reads
0 redo size
13510 bytes sent via SQL*Net to client
4084 bytes received via SQL*Net from client
41 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from mail;
Elapsed: 00:00:00.34
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
1 recursive calls
0 db block gets
2183 consistent gets
2062 physical reads
72 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Thanks and regards.
PS: sorry, I could not preserve the formatting.
Message was edited by:
Cool_Jr.DBA

Just to answer the OP's fundamental question:
The query starts off quick (rows between 1 and 10)
but gets increasingly slower as the start of the
window increases (eg to row 1000, 10,000, etc).
The original (unsorted) query would get first rows
very quickly, but each time you move the window, it
has to fetch and discard an increasing number of rows
before it finds the first one you want. So the time
taken is proportional to the rownumber you have
reached.
With Charles's correction (which is unavoidable), the
entire query has to be retrieved and sorted
before the rows you want can be returned. That's
horribly inefficient. This technique works for small
sets (eg 10 - 1000 rows) but I can't tell you how
wrong it is to process data in this way especially if
you are expecting lacs (that's 100,000s isn't
it) of rows returned. You are pounding your database
simply to give you the option of being able to go
back as well as forwards in your query results. The
time taken is proportional to the total number of
rows (so the time to get to the end of the entire set
is proportional to the square of the total
number of rows).
If you really need to page back and forth
through large sets, consider one of the following
options:
1) saving the set (eg as a materialised view or in a
temp table - and include "row number" as an indexed
column)
2) retrieve ALL the rowids into an array/collection
in a single pass, then go get 10 rows by rowid for
each page
3) assuming you can sort by a unique identifier, use
that (instead of rownumber) to remember the first row
in each page; use a range scan on the index on that
UID to get back the rows you want quickly (doing this
with a non-unique sort key is quite a bit harder)
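For option 3, a keyset ("seek") sketch against the MAIL table from this thread might look like the following. The bind names are mine, and it assumes MAIL_ID is unique so that (MAIL_DATE, MAIL_ID) gives a deterministic order:

```sql
-- Keyset ("seek") pagination sketch: :last_mail_date / :last_mail_id hold
-- the sort key of the last row on the previous page. Each page is then one
-- short range scan instead of re-counting ROWNUM from the start.
SELECT *
FROM  (SELECT M.MAIL_ID, M.MAIL_DATE, M.SUBJECT
       FROM   MAIL M
       WHERE  M.USER_ID = 6
       AND    M.FOLDER_ID = 1
       AND  ( M.MAIL_DATE < :last_mail_date
              OR (M.MAIL_DATE = :last_mail_date AND M.MAIL_ID < :last_mail_id) )
       ORDER BY M.MAIL_DATE DESC, M.MAIL_ID DESC)
WHERE ROWNUM <= 10;
```

With an index on (USER_ID, FOLDER_ID, MAIL_DATE DESC, MAIL_ID) this stays fast no matter how deep the user pages, at the cost of only supporting "next page" style navigation directly.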
Remember also that if someone else inserts into your
table while you are paging around, some of these
methods can give confusing results - because every
time you start a new query, you get a new
read-consistent point.
Anyway, try to redesign so you don't need to page
through lacs of rows....
HTH
Regards, Nigel

You are correct regarding the OP's original SQL statement that:
"the entire query has to be retrieved and sorted before the rows you want can be returned"
However, that is not the case with the SQL statement that I posted. The problem with the SQL statement I posted is that Oracle insists on performing full tablescans on the table. The following is a full test run with 2,000,000 rows in a table, including an analysis of the problem, and a method of working around the problem:
CREATE TABLE T1 (
MAIL_ID NUMBER(10),
USER_ID NUMBER(10),
FOLDER_ID NUMBER(10),
MAIL_DATE DATE,
PRIMARY KEY(MAIL_ID));
CREATE INDEX T1_USER_FOLDER ON T1(USER_ID,FOLDER_ID);
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID);
INSERT INTO
T1
SELECT
ROWNUM MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
INSERT INTO
T1
SELECT
ROWNUM+1000000 MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
COMMIT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
|* 1 | HASH JOIN | | 1 | 8801 | 10 |00:00:15.62 | 13610 | 1010K| 1010K| 930K (0)|
|* 2 | VIEW | | 1 | 8801 | 10 |00:00:00.34 | 6805 | | | |
|* 3 | WINDOW SORT PUSHED RANK| | 1 | 8801 | 910 |00:00:00.34 | 6805 | 74752 | 74752 |65536 (0)|
|* 4 | TABLE ACCESS FULL | T1 | 1 | 8801 | 8630 |00:00:00.29 | 6805 | | | |
| 5 | TABLE ACCESS FULL | T1 | 1 | 2000K| 2000K|00:00:04.00 | 6805 | | | |
Predicate Information (identified by operation id):
1 - access("MAIL_ID"="M"."MAIL_ID")
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("MAIL_DATE") DESC )<=909)
4 - filter(("USER_ID"=6 AND "FOLDER_ID"=1))

The above performed two tablescans of the T1 table and required 15.6 seconds to complete, which was not the desired result. Now, to create an index that will be helpful for the query, and provide Oracle an additional hint:
(http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html "Pagination in Getting Rows N Through M" shows a similar approach)
DROP INDEX T1_USER_FOLDER_MAIL;
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID,MAIL_DATE DESC,MAIL_ID);
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.01 | 47 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.01 | 7 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 909 |00:00:00.01 | 7 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 910 |00:00:00.01 | 7 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
Predicate Information (identified by operation id):
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=909)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")

The above made use of both indexes and completed in 0.01 seconds.
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 8600 AND 8609) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.11 | 81 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.11 | 41 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 8609 |00:00:00.09 | 41 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 8610 |00:00:00.05 | 41 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
Predicate Information (identified by operation id):
2 - filter(("RN">=8600 AND "RN"<=8609))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=8609)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")

The above made use of both indexes and completed in 0.11 seconds.
As the above shows, it is possible to retrieve the desired records very rapidly without having to leave the cursor open.
If this SQL statement will be used in a web browser, it probably does not make sense to leave the cursor open. If the SQL statement will be used in an application that maintains state, and the user is expected to always page from the first row toward the last, then leaving the cursor open and reading rows as needed makes sense.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Performance tuning issue with 8.1.7 PL/SQL
My problem sample code is listed below. I know I could fix this easily in 9i, but, you know.
My procedure receives one parameter, data_segmentseqno, whose value may be 'segment1' or 'segment1,segment2,segment3'. In the first case the procedure works and finds what I need, but it fails in case 2.
After checking the session in DBA Studio, I found the statement is parsed as 'SELECT .. FROM .. WHERE E.SEGMENTSEQNO IN ( :1 )' -- the Oracle engine sees only one parameter, not three. So what should I do when I get a parameter containing multiple segments?
Can somebody help me, or is the only way to solve this in Oracle 8.1.7 to use a cursor instead of BULK COLLECT?
create or replace package body RoundRobin is
procedure dispatchRoundRobin(
data_segmentseqno in varchar2
) is
type Cust_type is table of varchar2(18);
Cust_data Cust_type;
begin
/********** HERE IS MY TROUBLE:
HOW SHOULD I HANDLE MULTIPLE SEGMENTSEQNO VALUES? **********/
SELECT rowid BULK COLLECT INTO Cust_data
FROM dispatchedrecord e
where e.segmentseqno in ( data_segmentseqno ) ;
exception
when others then
dbms_output.put_line('Error'||sqlerrm);
end dispatchRoundRobin;
end RoundRobin;

Hello
You are using a single bind variable to represent multiple values. In this case you are asking oracle to see if e.segmentseqno is equal to 'segment1,segment2,segment3', which it isn't. What you need to do is either use separate bind variables for each value you want to test i.e.
WHERE e.segmentseqno IN (data_segmentseqno, data_segmentseqno2, data_segmentseqno3)

This isn't going to be very useful unless you have a fixed number of values that are always used.
Another alternative would be to use dynamic SQL to form the where clause and put the values into the where clause directly
EXECUTE IMMEDIATE 'SELECT rowid FROM dispatchedrecord e
where e.segmentseqno in ('|| data_segmentseqno||')' BULK COLLECT INTO Cust_data;

But this isn't ideal either, as you really should use bind variables for these values rather than literals.
I'm not sure whether using a collection here for the list of segment values would help or not. I haven't used collections much in SQL statements, maybe someone else will have a better idea...
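For what it's worth, here is a rough sketch of the collection approach. The type, procedure and variable names are mine, and I haven't verified this on 8.1.7, so treat it as an outline rather than tested code:

```sql
-- Illustrative sketch: split the comma-separated parameter into a SQL-level
-- collection type, then bind the whole collection once via TABLE(CAST(...)).
CREATE OR REPLACE TYPE seg_tab AS TABLE OF VARCHAR2(18);
/
CREATE OR REPLACE PROCEDURE dispatch_demo(data_segmentseqno IN VARCHAR2) IS
  l_segs    seg_tab := seg_tab();
  l_rest    VARCHAR2(4000) := data_segmentseqno || ',';
  l_pos     PLS_INTEGER;
  TYPE cust_type IS TABLE OF VARCHAR2(18);
  cust_data cust_type;
BEGIN
  -- split 'segment1,segment2,segment3' into individual elements
  WHILE l_rest IS NOT NULL LOOP
    l_pos := INSTR(l_rest, ',');
    l_segs.EXTEND;
    l_segs(l_segs.COUNT) := SUBSTR(l_rest, 1, l_pos - 1);
    l_rest := SUBSTR(l_rest, l_pos + 1);
  END LOOP;
  -- one bound collection instead of a variable-length list of literals
  SELECT ROWIDTOCHAR(rowid) BULK COLLECT INTO cust_data
  FROM   dispatchedrecord e
  WHERE  e.segmentseqno IN
         (SELECT column_value FROM TABLE(CAST(l_segs AS seg_tab)));
END dispatch_demo;
/
```

The statement is parsed once regardless of how many segments arrive, which avoids both the single-bind problem and the literal-concatenation problem.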
HTH
David -
Performance Tuning Issues: UNION and Stored Outlines
Hi,
I have two questions,
Firstly I have read this:
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_1016.htm#i35699
What I understand from it is that using UNION ALL is better than UNION.
The ALL in UNION ALL is logically valid because of this exclusivity. It allows the plan to be carried out without an expensive sort to rule out duplicate rows for the two halves of the query.
Can someone explain these sentences to me?
Secondly, my Oracle Database 10g runs with FIRST_ROWS_1. How can stored outlines help in reducing I/O cost and response time in general? Please explain.
Thank you,
Adith

Union ALL and Union:
SQL> select 1, 2 from dual
union
select 1, 2 from dual;
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 6 (67)| 00:00:01 |
| 1 | SORT UNIQUE | | 2 | 6 (67)| 00:00:01 |
| 2 | UNION-ALL | | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
| 4 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
11 rows selected.
SQL>select 1, 2 from dual
union all
select 1, 2 from dual;
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 4 (50)| 00:00:01 |
| 1 | UNION-ALL | | | | |
| 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
10 rows selected.
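On the stored outlines part of the question: an outline does not reduce I/O by itself; it freezes a known-good execution plan so response time stays predictable even when statistics or parameters change. A minimal sketch, with illustrative outline, category and table names:

```sql
-- Capture a plan as a stored outline (requires the CREATE ANY OUTLINE
-- privilege; outline name, category and table are illustrative only).
CREATE OR REPLACE OUTLINE emp_by_dept
  FOR CATEGORY prod_outlines
  ON SELECT * FROM emp WHERE deptno = 10;

-- Tell the session to use outlines from that category:
ALTER SESSION SET USE_STORED_OUTLINES = prod_outlines;
```

So outlines are a plan-stability tool: they only "reduce I/O" in the sense that they stop the optimizer from switching to a worse plan later.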
Adith -
Performance Tuning Issues ( How to Optimize this Code)
_How to Optimize this Code_
FORM MATL_CODE_DESC.
SELECT * FROM VBAK WHERE VKORG EQ SAL_ORG AND
VBELN IN VBELN AND
VTWEG IN DIS_CHN AND
SPART IN DIVISION AND
VKBUR IN SAL_OFF AND
VBTYP EQ 'C' AND
KUNNR IN KUNNR AND
ERDAT BETWEEN DAT_FROM AND DAT_TO.
SELECT * FROM VBAP WHERE VBELN EQ VBAK-VBELN AND
MATNR IN MATNR.
SELECT SINGLE * FROM MAKT WHERE MATNR EQ VBAP-MATNR.
IF SY-SUBRC EQ 0.
IF ( VBAP-NETWR EQ 0 AND VBAP-UEPOS NE 0 ).
IF ( VBAP-UEPVW NE 'B' AND VBAP-UEPVW NE 'C' ).
MOVE VBAP-VBELN TO ITAB1-SAL_ORD_NUM.
MOVE VBAP-POSNR TO ITAB1-POSNR.
MOVE VBAP-MATNR TO ITAB1-FREE_MATL.
MOVE VBAP-KWMENG TO ITAB1-FREE_QTY.
MOVE VBAP-KLMENG TO ITAB1-KLMENG.
MOVE VBAP-VRKME TO ITAB1-FREE_UNIT.
MOVE VBAP-WAVWR TO ITAB1-FREE_VALUE.
MOVE VBAK-VTWEG TO ITAB1-VTWEG.
MOVE VBAP-UEPOS TO ITAB1-UEPOS.
ENDIF.
ELSE.
MOVE VBAK-VBELN TO ITAB1-SAL_ORD_NUM.
MOVE VBAK-VTWEG TO ITAB1-VTWEG.
MOVE VBAK-ERDAT TO ITAB1-SAL_ORD_DATE.
MOVE VBAK-KUNNR TO ITAB1-CUST_NUM.
MOVE VBAK-KNUMV TO ITAB1-KNUMV.
SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
KSTEU = 'C' AND
KHERK EQ 'A' AND
KMPRS = 'X'.
IF SY-SUBRC EQ 0.
ITAB1-REMARKS = 'Manual Price Change'.
ENDIF.
SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
KSTEU = 'C' AND
KHERK IN ('C','D') AND
KMPRS = 'X' AND
KRECH IN ('A','B').
IF SY-SUBRC EQ 0.
IF KONV-KRECH EQ 'A'.
MOVE : KONV-KSCHL TO G_KSCHL.
G_KBETR = ( KONV-KBETR / 10 ).
MOVE G_KBETR TO G_KBETR1.
CONCATENATE G_KSCHL G_KBETR1 '%'
INTO ITAB1-REMARKS SEPARATED BY SPACE.
ELSEIF KONV-KRECH EQ 'B'.
MOVE : KONV-KSCHL TO G_KSCHL.
G_KBETR = KONV-KBETR.
MOVE G_KBETR TO G_KBETR1.
CONCATENATE G_KSCHL G_KBETR1
INTO ITAB1-REMARKS SEPARATED BY SPACE.
ENDIF.
ELSE.
ITAB1-REMARKS = 'Manual Price Change'.
ENDIF.
CLEAR : G_KBETR, G_KSCHL,G_KBETR1.
MOVE VBAP-KWMENG TO ITAB1-QTY.
MOVE VBAP-VRKME TO ITAB1-QTY_UNIT.
IF VBAP-UMVKN NE 0.
ITAB1-KLMENG = ( VBAP-UMVKZ / VBAP-UMVKN ) * VBAP-KWMENG.
ENDIF.
IF ITAB1-KLMENG NE 0.
VBAP-NETWR = ( VBAP-NETWR / VBAP-KWMENG ).
MOVE VBAP-NETWR TO ITAB1-INV_PRICE.
ENDIF.
MOVE VBAP-POSNR TO ITAB1-POSNR.
MOVE VBAP-MATNR TO ITAB1-MATNR.
MOVE MAKT-MAKTX TO ITAB1-MAKTX.
ENDIF.
SELECT SINGLE * FROM VBKD WHERE VBELN EQ VBAK-VBELN AND
BSARK NE 'DFUE'.
IF SY-SUBRC EQ 0.
ITAB1-INV_PRICE = ITAB1-INV_PRICE * VBKD-KURSK.
APPEND ITAB1.
CLEAR ITAB1.
ELSE.
CLEAR ITAB1.
ENDIF.
ENDIF.
ENDSELECT.
ENDSELECT.
ENDFORM. " MATL_CODE_DESC

Hi Vijay,
You could start by using INNER JOINS:
SELECT ......
FROM ( VBAK
INNER JOIN VBAP
ON VBAP~VBELN = VBAK~VBELN
INNER JOIN MAKT
ON MAKT~MATNR = VBAP~MATNR AND
MAKT~SPRAS = SYST-LANGU )
INTO TABLE itab
WHERE VBAK~VBELN IN VBELN
AND VBAK~VTWEG IN DIS_CHN
AND VBAK~SPART IN DIVISION
AND VBAK~VKBUR IN SAL_OFF
AND VBAK~VBTYP EQ 'C'
AND VBAK~KUNNR IN KUNNR
AND VBAK~ERDAT BETWEEN DAT_FROM AND DAT_TO
AND VBAP~NETWR EQ 0
AND VBAP~UEPOS NE 0.
Regards,
John. -
Performance tuning issues........
Please guide me alternate option for below set of code:
LOOP AT ITAB1 WHERE DISC LT 0.
SELECT * FROM KONV WHERE KNUMV EQ ITAB1-KNUMV AND
KPOSN EQ ITAB1-POSNR AND
KSTEU EQ 'C'.
IF SY-SUBRC EQ 0.
ITAB1-FREE_INDI = 'Y'.
EXIT.
ENDIF.
ENDSELECT.
MODIFY ITAB1 TRANSPORTING FREE_INDI.
ENDLOOP.
How to merge into one loop:
LOOP AT ITAB1.
IF ITAB1-FREE_MATL NE ''.
ITAB1-FREE_INDI = 'Y'.
MODIFY ITAB1.
GTEST = ITAB1-POSNR - 10.
READ TABLE ITAB1 WITH KEY SAL_ORD_NUM = ITAB1-SAL_ORD_NUM
POSNR = GTEST.
ITAB1-FREE_MATL = 'X'.
MODIFY ITAB1 TRANSPORTING FREE_MATL WHERE
SAL_ORD_NUM = ITAB1-SAL_ORD_NUM AND POSNR EQ GTEST.
CLEAR GTEST.
ENDIF.
ENDLOOP.
LOOP AT ITAB1 WHERE FREE_INDI EQ 'Y'.
IF ITAB1-UEPOS EQ G_UEPOS.
CLEAR ITAB1-FREE_INDI.
MODIFY ITAB1.
ENDIF.
MOVE ITAB1-UEPOS TO G_UEPOS.
ENDLOOP.
Thanx & Regrds.
Vijay...
Edited by: Vijay kumar on Jan 8, 2008 6:50 PM -
App. Server performance Tuning issue
Hi.
Platform:
iAS : Oracle Application server Version 10.1.2.0.2
Installation Type : Forms and Reports Services
OS : Windows server 2003 SP2
DB server : Oracle 10g ver.10.2.0.4.0 win.server 2003
programs developed with developer suite 10g.
Problem:
When the user login screen pops up in the browser and the user types the user id and password and presses Enter, iAS takes between 12 and 15 seconds to connect to the DB. Once connected, the application works fine.
Is there any way to speed up the first-time connection?
I will appreciate whatever help.
Thanks in advance.

Check the order of connection mechanisms in your SQLNET.ORA file. If LDAP is first in the list, the login process will look for OID and, if it finds one running, will look for your user there. If there is no OID running, it may wait for a timeout before moving on to the next listed connection mechanism. In general, if you're not using OID to authenticate to your database, make sure it's not first in the list in SQLNET.ORA.
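For example, a sketch of the relevant sqlnet.ora entry (adjust the list to the mechanisms you actually use):

```
# sqlnet.ora: name-resolution order. With TNSNAMES listed before LDAP,
# the client resolves the service locally and never waits on an OID
# lookup (or its timeout) during login.
NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT, LDAP)
```

Dropping LDAP from the list entirely is also an option if no OID directory is in use.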
TGF -
Performance tuning in oracle 10g
Hi guys,
I hope all are well. Have a nice day today!
I have a performance tuning issue to discuss.
I recently joined a new project whose goal is to improve the efficiency of the application. The environment uses Oracle PL/SQL. What steps should I take to improve the application's efficiency, and how should I work through the improvement process?
Kindly help me.

Generate Statspack/AWR reports, and see:
HOW To Make TUNING request
https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003 -
Profitability analysis activated in POS Interface, performance/tuning issue?
We are about to go live with an SAP Retail system. All purchases made by customers in stores are sent into SAP via IDocs through the so-called POS (point of sale) interface.
A receipt received via an IDoc creates material documents, invoice documents, accounting documents, controlling documents, profit center documents and profitability analysis documents.
Each day we receive all receipts from each store, collected in one IDoc per store.
With profitability analysis deactivated, an average store's daily sales post in about 40 seconds. With profitability analysis activated, the average time per store is almost 75 seconds.
How can simple postings to profitability analysis increase the posting time by almost 50%? Is this a performance/tuning issue?
Best regards
Carl-Johan
Points will be assigned generously for info that leads to better performance!

Which CO document does the system create: a CCA document?
on which cost centre ? PCA document ?
What is the CE category of the CE used for posting the variance ? -
Oracle Memory Issue/ performance tuning
I have Oracle 9i running on a Windows 2003 server. 2 GB of memory is allocated to the Oracle DB (even though the server has 14 GB).
Recently, the Oracle process has been slow running queries.
I ran the window task manager. Here is the numbers that I see
Mem usage: 556660k
page Faults: 1075029451
VM size: 1174544 K
I am not sure how to analyze this data. Why is the page fault count so huge, and why is Mem usage half of VM size?
How can I do performance tuning on this box?

I'm having a similar issue with Oracle 10g R2 64-bit on Windows 2003 x64. Performance on complicated queries is abysmal because [I think] most of the SGA is sitting in a page file, even though there is plenty of physical RAM to be had. Performance on simple queries is probably bad also, but it's not really noticeable. Anyway, page faults skyrocket when I hit the "go" button on big queries. Our legacy system runs our test queries in about 5 minutes, but the new system takes at least 30 if not 60. The new system has 24 gigs of RAM, but at this point, I'm only allocating 1 gig to the SGA and 1/2 gig to the PGA. Windows reports oracle.exe has 418,000K in RAM and 1,282,000K in the page file (I rounded a bit). When I had the PGA set to 10 gigs, the page usage jumped to over 8 gigs.
I tried adding ORA_LPENABLE=1 to the registry, but this issue seems to be independent. Interestingly, the amount of RAM taken by oracle.exe goes down a bit (to around 150,000K) when I do this. I also added "everyone" to the security area "lock pages in memory", but again, this is probably unrelated.
I did an OS datafile copy and cloned the database to a 32-bit windows machine (I had to invalidate and recompile all objects to get this to work), and this 32-bit test machine now has the same problem.
Any ideas? -
Performance tuning related issues
hi experts
I am new to performance tuning. Can anyone share some material (BW 3.5 & BI 7) related to this area? Send any relevant docs to my id: [email protected] .
thanks in advance
regards
gavaskar
[email protected]

hi Gavaskar,
check these links; you can download a lot of performance materials:
Business Intelligence Performance Tuning [original link is broken]
and e-learning -> intermediate course and advance course
https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/fe5b0b5e-0501-0010-cd88-c871915ec3bf [original link is broken]
e.g
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/10b589ad-0701-0010-0299-e5c282b7aaad
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/8e6183ad-0701-0010-e083-9ab1c6afe6f2
performance tools in bw 3.5
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/07a4f070-0701-0010-3b91-a6bf7644c98f
(here also you can download the presentation by righ click the disk drive icon)
hope this helps. -
hi,
I have to do performance tuning for one program. It takes 67,000 secs in background execution and 1,000 secs for some variants. It is an ALV report.
Please suggest how I should proceed to change the code.

Performance tuning for Data Selection Statement
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm

Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
CATT - Computer Aided Testing Too
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
ECATT - Extended Computer Aided testing tool.
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
Just refer to these links...
performance
Performance
Performance Guide
performance issues...
Performance Tuning
Performance issues
performance tuning
performance tuning
You can use transaction SE30 to run a runtime analysis of your program. Also try transaction SCI, the SAP Code Inspector.
Performance tuning in XI, (SAP Note 857530 )
Could anyone please tell me where to find SAP Notes?
I am looking for "SAP Note 857530 "
Integration process performance(in sap XI).
or how can I view the performance of the integration process? How exactly is performance tuning done?
Please help.
Best regards,
verma.

Hi,
SAP Note:
Symptom
Performance bottlenecks when executing integration processes.
Other terms
ccBPM
BPE
Performance
Integration Processes
Solution
This note refers to all notes that are concerned with improving the performance of the ccBPM runtime.
This note will be continually updated as improvements are made.
Also read the document "Checklist: Making Correct Use of Integration Processes" in the SAP Library documentation, on SAP Service Marketplace, and in SDN; it contains information about performance issues to bear in mind when you model integration processes.
Refer to the appended notes and implement the attached code changes by using SNOTE, or by importing the relevant Support Packages. Note that some performance improvements cannot be implemented by using SNOTE and are only available in Support Packages.
Regards
vijaya -
Hello All,
We have created some reports using Interactive Reporting Studio. The volume of data in the Oracle database is huge, and some tables have over 3-4 crore (30-40 million) rows each. We created the .oce connection file using the 'Oracle Net' option; the Oracle client version is 10g. We originally created pivots, charts and reports in those .bqy files, but had to delete them wherever possible to reduce the processing time for generating the reports.
But even deleting those from the file and retaining just the Results section (the bare minimum part of the file) has not fully solved the performance issue. In some reports the system still gives the error message 'Out of Memory' during processing. The client PCs from which the reports are generated have 1-1.5 GB of memory. Some reports take 1-2 hours just to save the results after processing, and in some cases the PCs hang while processing. Yet when we extract the SQL from those reports and run it in TOAD/SQL*Plus, it takes far less time than in IR.
Could you please help us with this issue ASAP? Please share your views/tips/suggestions on performance tuning for IR. All replies would be highly appreciated.
Regards,
Raj
SQL*Plus and Toad are tools that send SQL and spool the results; IR is a tool that sends a request to the database to run SQL and then manipulates the results before the user is even told data has been received. You need to minimize the time IR spends manipulating results into objects the user isn't even asking for.
When a request is made to the database, Hyperion waits until all of the results have been received. Once ALL of the results have been received, IR makes multiple passes to apply any sorts, filters and computed items existing in the Results section. For some unknown reason, those three steps are performed less efficiently there than they would be in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted does IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate your DAS on your server - it can only transfer 2 GB before it dies and restarts, leaving your requested document hanging. You can replicate the DAS multiple times and should do so, to make sure there are enough resources available for concurrent users to make requests and have data delivered to them.
To tune your bqy documents...
1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items as possible to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line need to go on your new staging table.
2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on each table that forces 0 rows until the report is invoked. Hyperion paginates every report BEFORE it even tells the user it has results, so this prevents the user from waiting an hour while thousands of pages are paginated across multiple reports.
4) Halt any object rendering until requested. Same as above - create a mechanism programmatically for the user to tell the BQY what they want, so they are not waiting forever for a pivot and two reports to compile and paginate when all they want is a chart.
5) Save compressed documents.
6) Unless the document can be run as a job, NO results should be stored with it; but if you do save results with the document, store the calculations too, so you at least don't have to wait for them to be calculated again.
7) Remove all duplicate images and keep image file sizes small.
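As a minimal sketch of tips 1 and 3, reusing the table and column names from the original query in this thread (AAA, ibc_item, IBC_BUSINESS_CLASS; the alias item_key is made up for illustration): push the computed item into the request-line SQL instead of the Results section, and give a report's staging table a dummy filter that returns zero rows until the report is invoked.

```sql
-- Tip 1 (sketch): push computed items into the request line.
-- Instead of a Results-section computed item like TRIM(ibc_item),
-- ask the database to do the calculation:
SELECT TRIM(ibc_item)                   AS item_key,
       SUBSTR(ibc_business_class, 1, 1) AS class_code
FROM   aaa;

-- Tip 3 (sketch): a dummy filter that forces 0 rows, so a report
-- built on this table is not paginated until the filter is relaxed:
SELECT TRIM(ibc_item) AS item_key
FROM   aaa
WHERE  1 = 0;  -- swap in the user's real filter when the report is invoked
```

The point in both cases is the same: make the database, not the IR client, do the work, and make IR render nothing until the user actually asks for it.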
Hope this helps!
PS: I forgot to mention - aside from Results sections, in documents where the results are NOT saved, additional table sections add very, very little file size, and as long as there are no excessively large images the same is true for Reports, Pivots and Charts. Additionally, file size only matters when the user requests the document. It is never an issue while the user is processing the report, because the document has already been delivered to them and cached (in Workspace and in the web client).
Edited by: user10899957 on Feb 10, 2009 6:07 AM -
Performance tuning in progress
Due to performance issues caused by high traffic, if you see a blank page, just go to Create to start using kuler. Performance tuning is in progress.
Hello, I'm the lead developer on kuler. Over the weekend we added a new database server and made many performance enhancements to the site. The kuler site is a lot more stable now, and you should not experience the slowdowns previously encountered.
Thanks,
Tim Strickland