Select statement performance improvement.
Hi Gurus,
I am new to ABAP.
I have the below select statement:
SELECT mandt msgguid pid exetimest
  INTO TABLE lt_key
  UP TO lv_del_rows ROWS
  FROM (gv_master)
  WHERE
*   msgstate IN rt_msgstate
*   AND ( adapt_stat = cl_xms_persist=>co_stat_adap_processed
*     OR adapt_stat = cl_xms_persist=>co_stat_adap_undefined )
*   AND itfaction = ls_itfaction
*   AND msgtype = cl_xms_persist=>co_async
*   AND
    exetimest LE lv_timestamp
    AND exetimest GE last_ts
    AND reorg = cl_xms_persist=>co_reorg_ini
  ORDER BY mandt itfaction reorg exetimest.
Can anyone tell me how I can improve the performance of this statement?
Here is the sql trace for the statement:
SELECT
  /*+ FIRST_ROWS (100) */
  "MANDT" , "MSGGUID" , "PID" , "EXETIMEST"
FROM
  "SXMSPMAST"
WHERE
  "MANDT" = :A0 AND "EXETIMEST" <= :A1 AND "EXETIMEST" >= :A2 AND "REORG" = :A3
ORDER BY
  "MANDT" , "ITFACTION" , "REORG" , "EXETIMEST"
Execution Plan
SELECT STATEMENT ( Estimated Costs = 3 , Estimated #Rows = 544 )
4 SORT ORDER BY
( Estim. Costs = 2 , Estim. #Rows = 544 )
Estim. CPU-Costs = 15.671.852 Estim. IO-Costs = 1
3 FILTER
2 TABLE ACCESS BY INDEX ROWID SXMSPMAST
( Estim. Costs = 1 , Estim. #Rows = 544 )
Estim. CPU-Costs = 11.130 Estim. IO-Costs = 1
1 INDEX RANGE SCAN SXMSPMAST~TST
Search Columns: 2
Estim. CPU-Costs = 3.329 Estim. IO-Costs = 0
Do I need to create a new index? Do I need to remove the ORDER BY clause?
Thanks in advance.
Why is there an
UP TO lv_del_rows ROWS
together with an ORDER BY?
The database has to find all rows fulfilling the condition, sort them, and then return only the first lv_del_rows rows of the sorted result.
That is why it can take a while.
Regarding your index: always put the client field (mandt) in the first position.
Actually, I am not really convinced by your logic:
itfaction reorg exetimest.
itfaction is first in the sort order, so all records with the smallest itfaction will come first, but itfaction is not specified in the WHERE clause. Is this really what you want?
Change the index to mandt reorg exetimest,
change the ORDER BY to mandt reorg exetimest,
and then it will become fast.
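A sketch of the statement after that change (the reduced ORDER BY follows the suggestion above, and is only equivalent if the itfaction sort order really is not needed):

```abap
SELECT mandt msgguid pid exetimest
  INTO TABLE lt_key
  UP TO lv_del_rows ROWS
  FROM (gv_master)
  WHERE exetimest LE lv_timestamp
    AND exetimest GE last_ts
    AND reorg = cl_xms_persist=>co_reorg_ini
* With an index on mandt / reorg / exetimest, this ORDER BY matches the
* index order, so the database can stop after lv_del_rows rows instead
* of sorting the full result set first.
  ORDER BY mandt reorg exetimest.
```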
Similar Messages
-
Joining select statements - performance problem
There are two cases here. Is it possible to combine the two cases and make them one? I am using OR in the select statement, the performance is very bad, and I want to improve it. Is there any chance of avoiding that...
1.
select vbeln from vbfa into itab-vbeln where vbelv = final_data-xblnr1
or vbelv = final_data-xblnr2.
select single vbeln from vbrk into tabvbrk-vbeln
where vbeln = itab-vbeln and fkart = 'RTS'.
if sy-subrc eq 0.
move tabvbrk-vbeln to final_data-vbeln1.
modify final_data.
endif.
endselect.
2.
select vbeln from vbfa into itab-vbeln where vbelv = final_data-xblnr1
or vbelv = final_data-xblnr2.
select single vbeln from vbrk into tabvbrk-vbeln
where vbeln = itab-vbeln and fkart = 'ZTR'.
if sy-subrc eq 0.
move tabvbrk-vbeln to final_data-vbeln2.
modify final_data.
endif.
endselect.
Thanks,
fractal
The first main thing is: don't use SELECT ... ENDSELECT.
select vbeln from vbfa into itab-vbeln where vbelv = final_data-xblnr1
or vbelv = final_data-xblnr2.
You can use:
select vbeln from vbfa into
corresponding fields of table itab
for all entries in final_data
where vbelv = final_data-xblnr1
or vbelv = final_data-xblnr2.
Instead of this
select single vbeln from vbrk into tabvbrk-vbeln
where vbeln = itab-vbeln and fkart = 'RTS'.
if sy-subrc eq 0.
move tabvbrk-vbeln to final_data-vbeln1.
modify final_data.
endif.
endselect.
Use
select vbeln from vbrk
appending table final_data
where vbeln = itab-vbeln
and fkart = 'RTS'.
Try like this.
Hope this helps.
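As a further sketch (an assumption of mine, not from the replies above: it assumes itab has already been filled from VBFA and that the only difference between the two cases is the billing type), both cases can also be combined into a single database access, with the split done in memory:

```abap
DATA: BEGIN OF lt_vbrk OCCURS 0,
        vbeln LIKE vbrk-vbeln,
        fkart LIKE vbrk-fkart,
      END OF lt_vbrk.

IF itab[] IS NOT INITIAL.
* One database access covers both billing types; the OR on fkart is
* cheap here because vbeln (the primary key) is already restricted.
  SELECT vbeln fkart
    INTO TABLE lt_vbrk
    FROM vbrk
    FOR ALL ENTRIES IN itab
    WHERE vbeln = itab-vbeln
      AND ( fkart = 'RTS' OR fkart = 'ZTR' ).
ENDIF.
SORT lt_vbrk BY vbeln.
* Afterwards, loop over final_data once and fill vbeln1 from the 'RTS'
* rows and vbeln2 from the 'ZTR' rows via READ TABLE ... BINARY SEARCH,
* exactly as the two original cases did.
```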
Kindly reward points for the answer which helped you and solved the problem. -
How to find which select statement's performance is better
hi gurus
can anyone suggest:
if we have 2 select statements,
how do we find which select statement performs better?
thanks & regards
kals.
Hi, check this:
1. A select statement in which the primary or secondary index fields are used gives good performance.
2. A select statement with UP TO 1 ROWS is better than SELECT SINGLE.
Go to ST05 and check the performance.
regards,
venkat -
In how many ways we can filter this select statement to improve performance
Hi Experts,
This select statement is taking 2.5 hrs in production. Can we filter the where condition to improve the performance? Please suggest coding ASAP.
select * from dfkkop into table t_dfkkop
where vtref like 'EPC%' and
( ( augbd = '00000000' and
xragl = 'X' )
or
( augbd between w_clrfr and w_clrto ) ) and
augrd ne '03' and
zwage_type in s_wtype .
Regards,
Sam.
If it really takes 2.5 hours, try the following: run an SQL trace, and split the OR into two selects:
select *
into table t_dfkkop
from dfkkop
where vtref like 'EPC%'
and augbd = '00000000' and xragl = 'X'
and augrd ne '03'
and zwage_type in s_wtype .
select *
appending table t_dfkkop
from dfkkop
where vtref like 'EPC%'
and augbd between w_clrfr and w_clrto
and augrd ne '03'
and zwage_type in s_wtype .
Do a DESCRIBE TABLE after the first SELECT and after the second,
or run an SQL Trace.
What is time needed for both parts, how many records come back, which index is used.
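The DESCRIBE TABLE check could, as a minimal sketch, look like this (the variable name lv_lines is illustrative):

```abap
DATA lv_lines TYPE i.

* ...first SELECT fills t_dfkkop...
DESCRIBE TABLE t_dfkkop LINES lv_lines.
WRITE: / 'Rows after first SELECT: ', lv_lines.

* ...second SELECT appends to t_dfkkop...
DESCRIBE TABLE t_dfkkop LINES lv_lines.
WRITE: / 'Rows after both SELECTs: ', lv_lines.
```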
Siegfried -
Provide alternative select statements to improve performance
Hi Frnds
I want to improve the performance of my report. The statement below is taking more time. Please provide any suggestions for an alternative to the select statement below.
SELECT H~CLMNO H~PNGUID P~PVGUID V~PNGUID AS V_PNGUID
V~AKTIV V~KNUMV P~POSNR V~KATEG AS ALTNUM V~VERSN
APPENDING CORRESPONDING FIELDS OF TABLE EX_WTYKEY_T
FROM ( ( PNWTYH AS H
INNER JOIN PNWTYV AS V ON V~HEADER_GUID = H~PNGUID )
INNER JOIN PVWTY AS P ON P~VERSION_GUID = V~PNGUID )
FOR ALL ENTRIES IN lt_claim
WHERE H~PNGUID = lt_claim-pnguid
AND V~AKTIV IN rt_AKTIV
AND V~KATEG IN IM_ALTNUM_T.
Thanks
Amminesh.
Moderator message - Moved to the correct forum
Edited by: Rob Burbank on May 14, 2009 11:00 AM
Hi,
Copy the internal table lt_claim contents to another temp internal table.
lt_claim_temp[] = lt_claim[].
sort lt_claim_temp by pnguid.
delete adjacent duplicates from lt_claim_temp comparing pnguid.
SELECT H~CLMNO H~PNGUID P~PVGUID V~PNGUID AS V_PNGUID
V~AKTIV V~KNUMV P~POSNR V~KATEG AS ALTNUM V~VERSN
APPENDING CORRESPONDING FIELDS OF TABLE EX_WTYKEY_T
FROM ( ( PNWTYH AS H
INNER JOIN PNWTYV AS V ON V~HEADER_GUID = H~PNGUID )
INNER JOIN PVWTY AS P ON P~VERSION_GUID = V~PNGUID )
FOR ALL ENTRIES IN lt_claim_temp
WHERE H~PNGUID = lt_claim_temp-pnguid
AND V~AKTIV IN rt_AKTIV
AND V~KATEG IN IM_ALTNUM_T.
refresh lt_claim_temp. -
ABAP Select statement performance (with nested NOT IN selects)
Hi Folks,
I'm working on the ST module with the document flow table VBFA. The query takes a large amount of time and is timing out in production. I am hoping that someone will be able to give me a few tips to make this run faster. In our test environment, this query takes 12+ minutes to process.
SELECT vbfa~vbeln
vbfa~vbelv
Sub~vbelv
Material~matnr
Material~zzactshpdt
Material~werks
Customer~name1
Customer~sortl
FROM vbfa JOIN vbrk AS Parent ON ( Parent~vbeln = vbfa~vbeln )
JOIN vbfa AS Sub ON ( Sub~vbeln = vbfa~vbeln )
JOIN vbap AS Material ON ( Material~vbeln = Sub~vbelv )
JOIN vbak AS Header ON ( Header~vbeln = Sub~vbelv )
JOIN vbpa AS Partner ON ( Partner~vbeln = Sub~vbelv )
JOIN kna1 AS Customer ON ( Customer~kunnr = Partner~kunnr )
INTO (WA_Transfers-vbeln,
WA_Transfers-vbelv,
WA_Transfers-order,
WA_Transfers-MATNR,
WA_Transfers-sdate,
WA_Transfers-sfwerks,
WA_Transfers-name1,
WA_Transfers-stwerks)
WHERE vbfa~vbtyp_n = 'M' "Invoice
AND vbfa~fktyp = 'L' "Delivery Related Billing Doc
AND vbfa~vbtyp_v = 'J' "Delivery Doc
AND vbfa~vbelv IN S_VBELV
AND Sub~vbtyp_n = 'M' "Invoice Document Type
AND Sub~vbtyp_v = 'C' "Order Document Type
AND Partner~parvw = 'WE' "Ship To Party(actual desc. is SH)
AND Material~zzactshpdt IN S_SDATE
AND ( Parent~fkart = 'ZTRA' OR Parent~fkart = 'ZTER' )
AND vbfa~vbelv NOT IN
( SELECT subvbfa~vbelv
FROM vbfa AS subvbfa
WHERE subvbfa~vbelv = vbfa~vbelv
AND subvbfa~vbtyp_n = 'V' ) "Purchase Order
AND vbfa~vbelv NOT IN
( SELECT DelList~vbeln
FROM vbfa AS DelList
WHERE DelList~vbeln = vbfa~vbelv
AND DelList~vbtyp_v = 'C' "Order Document Type
AND DelList~vbelv IN "Delivery Doc
( SELECT OrderList~vbelv
FROM vbfa AS OrderList
WHERE OrderList~vbtyp_n = 'H' ) ). "Return Ord
APPEND WA_Transfers TO ITAB_Transfers.
ENDSELECT.
Cheers,
Chris
I am sending you some of the performance issues that are to be kept in mind while coding.
1. Do not use SELECT * ...; instead use SELECT <required field list> ...
2. Do not fetch data from CLUSTER tables.
3. Do not use nested SELECT statements. You have used a nested select, which reduces performance to a great extent.
Instead use views/joins.
Also keep in mind not to use a join condition on more than three tables unless otherwise required.
So split select statements into three or four and use SELECT ... FOR ALL ENTRIES ...
4. Extract the data from the database at once, consolidated upfront into a table,
i.e. use the INTO TABLE <itab> clause instead of using
SELECT ...
ENDSELECT.
5. Never use the ORDER BY clause in a SELECT statement; instead use SORT <itab>.
6. Whenever you need to calculate max, min, avg, sum or count, use AGGREGATE FUNCTIONS and the GROUP BY clause instead of calculating it yourself.
7. Do not use the same table once for validation and another time for data extraction; select the data only once.
8. When the intention is validation, use SELECT SINGLE ... / SELECT ... UP TO 1 ROWS ... statements.
9. If possible, always use array operations to update database tables.
10. The order of the fields in the WHERE clause of a select statement must be the same as the order in the index of the table.
11. Never release the object unless thoroughly checked by ST05/SE30/SLIN.
12. Avoid using identical select statements. -
Select query performance improvement - Index on EDIDC table
Hi Experts,
I have a scenario where in I have to select data from the table EDIDC. The select query being used is given below.
SELECT docnum
direct
mestyp
mescod
rcvprn
sndprn
upddat
updtim
INTO CORRESPONDING FIELDS OF TABLE t_edidc
FROM edidc
FOR ALL ENTRIES IN t_error_idoc
WHERE
upddat GE gv_date1 AND
upddat LE gv_date2 AND
updtim GE p_time AND
status EQ t_error_idoc-status.
As the volume of the data is very high, our client requested us to add an index, or use an existing one, to improve the performance of the data selection query.
Question:
1. How do we identify the index to be used?
2. On which fields should the indexing be done to improve the performance (if the available indexes don't cater to our case)?
3. What will be the impact on the table's performance if we create a new index?
Regards ,
Raghav
Question:
1. How do we identify the index to be used?
Generally the index is automatically selected by SAP's DB optimizer. (You can still mention the index name in your select query by changing the syntax.)
For your select query the second index will be chosen automatically by the optimizer (because the select query has 'upddat' and 'updtim' in the sequence before the 'status').
2. On which fields should the indexing be done to improve the performance (if the available indexes don't cater to our case)?
(Create a new index with MANDT and the 4 fields which are in the where clause, in that sequence.)
3. What will be the impact on the table performance if we create a new index?
(Since the index which will be newly created is only the 4th index for the table, there shouldn't be any side effects.)
After creation of the index, check the change in performance of the current program and also of other programs which have select queries on EDIDC (with various types of where clauses, preferably) to verify that the newly created index does not have a negative impact on performance. Additionally, if possible, check whether you can avoid INTO CORRESPONDING FIELDS.
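Avoiding INTO CORRESPONDING FIELDS could, as a sketch, look like this; the work-area structure simply mirrors the field list of the SELECT so the transfer becomes positional:

```abap
TYPES: BEGIN OF ty_edidc,
         docnum TYPE edidc-docnum,
         direct TYPE edidc-direct,
         mestyp TYPE edidc-mestyp,
         mescod TYPE edidc-mescod,
         rcvprn TYPE edidc-rcvprn,
         sndprn TYPE edidc-sndprn,
         upddat TYPE edidc-upddat,
         updtim TYPE edidc-updtim,
       END OF ty_edidc.
DATA t_edidc TYPE STANDARD TABLE OF ty_edidc.

* Same query as above; FOR ALL ENTRIES still needs a non-empty
* t_error_idoc driver table.
SELECT docnum direct mestyp mescod rcvprn sndprn upddat updtim
  INTO TABLE t_edidc          " positional transfer, no name matching
  FROM edidc
  FOR ALL ENTRIES IN t_error_idoc
  WHERE upddat GE gv_date1
    AND upddat LE gv_date2
    AND updtim GE p_time
    AND status EQ t_error_idoc-status.
```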
Regards ,
Seth -
Problem with Select Statements
Hi All,
I have a performance problem for my report because of the following statements.
How can I modify the select statements to improve the performance of the report?
DATA : shkzg1h LIKE bsad-shkzg,
shkzg1s LIKE bsad-shkzg,
shkzg2h LIKE bsad-shkzg,
shkzg2s LIKE bsad-shkzg,
shkzg1hu LIKE bsad-shkzg,
shkzg1su LIKE bsad-shkzg,
shkzg2hu LIKE bsad-shkzg,
shkzg2su LIKE bsad-shkzg,
kopbal1s LIKE bsad-dmbtr,
kopbal2s LIKE bsad-dmbtr,
kopbal1h LIKE bsad-dmbtr,
kopbal2h LIKE bsad-dmbtr,
kopbal1su LIKE bsad-dmbtr,
kopbal2su LIKE bsad-dmbtr,
kopbal1hu LIKE bsad-dmbtr,
kopbal2hu LIKE bsad-dmbtr.
*These statements are in LOOP.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1s , kopbal1s)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1su , kopbal1su)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1h , kopbal1h)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1hu , kopbal1hu)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2s , kopbal2s)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2su , kopbal2su)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2h , kopbal2h)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2hu , kopbal2hu)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
Siegfried Boes wrote:
> Please stop writing answers if you understand nothing about database SELECTs!
> All above recommendations are pure nonsense!
>
> As always with such questions, you must do an analysis before you ask! The coding itself is perfectly o.k.; a SELECT with an aggregate and a GROUP BY cannot be changed into a SELECT SINGLE or whatever.
>
> But your SELECTs must be supported by indexes!
>
> Please run SQL Trace, and tell us the results:
>
> I see 8 statements, what is the duration and the number of records coming back for each statement?
> Maybe only one statement is slow.
>
> See
> SQL trace:
> /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
>
>
> Siegfried
Nice point there, Siegfried. Instead of giving a constructive suggestion, people here gave the very bad suggestion of using SELECT SINGLE combined with SUM and GROUP BY.
I hope the person looked at your reply before trying SELECT SINGLE and wondering why he got an error.
Anyway, the most important thing is: how many loop passes are expected for those select statements?
If you have thousands of passes, you can expect poor performance.
So when you are doing an SQL trace, you should also look at how many times each select statement is called, and not only at the performance of each individual select statement.
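One way to reduce the number of calls per loop pass (a sketch of mine, not something proposed in the thread, and still an aggregate with GROUP BY as Siegfried requires) is one grouped SELECT per table instead of four, with the split by umskz done in memory afterwards:

```abap
DATA: BEGIN OF ls_sum,
        shkzg LIKE bsid-shkzg,
        umskz LIKE bsid-umskz,
        dmbtr LIKE bsid-dmbtr,
      END OF ls_sum,
      lt_sum LIKE STANDARD TABLE OF ls_sum.

* One database call per customer instead of four against BSID.
SELECT shkzg umskz SUM( dmbtr )
  INTO TABLE lt_sum
  FROM bsid
  WHERE bukrs = ibukrs
    AND kunnr = ktab-kunnr
    AND budat < idate-low
  GROUP BY shkzg umskz.

* In memory: rows with umskz = '' feed kopbal1s/kopbal1h (by shkzg),
* rows with umskz IN zspgl feed kopbal1su/kopbal1hu; other umskz
* values are simply ignored, matching the original WHERE clauses.
```

The same pattern applies to the four BSAD selects.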
Regards,
Abraham -
Help needed with a SELECT statement. How can I make it run faster?
Hi,
not sure if my brain is just too tired but I can't seem to crack this problem today.
Here is my scenario.
I have 2 tables
TABLE1 (searchId INTEGER, routeId INTEGER);
TABLE2 (routeId INTEGER, cityId INTEGER);
There are indexes on all 4 columns.
(routeId on TABLE1 is a primary key).
In the data I am using, a given search has more than 500 routes, each route has between 10 and 300 cities among more than 4000 possible different cities.
Now, what I want to create is the list of route couple, within a certain search, that do not have a single city in common.
That list should populate a table with the following structure
TABLE3 (searchId INTEGER, routeId1 INTEGER, routeId2 INTEGER)
Here is the fastest select statement I have found so far.
SELECT :searchId, t1.routeId, t2.routeId FROM table1 t1, table1 t2
WHERE t1.searchId=:searchId AND t2.searchId=:searchId
AND t1.routeId>t2.routeId
AND NOT EXISTS (
SELECT cityId FROM table2
WHERE routeId=t1.routeId
INTERSECT
SELECT cityId FROM table2
WHERE routeId=t2.routeId);
But it still seem really slow to me.
Any suggestion for an improved version is welcome.
Thanks,
Martin.
Title was edited by:
user453358
I originally posted this thread because I thought I was missing something "obvious" that would make my SELECT statement perform better.
So I did not want to go as deep as using TKPROF yet.
Here are the statistics I get on my statement.
1 SELECT t1.searchId,t1.routeId, t2.routeId id2
2 FROM table1 t1, table1 t2
3 WHERE t1.searchid=t2.searchid
4 AND t1.searchId=91
5 AND t1.routeId>t2.routeId
6 AND NOT EXISTS (
7 SELECT cityId FROM table2
8 WHERE routeId=t1.routeId
9 INTERSECT
10 SELECT cityId FROM table2
11* WHERE routeId=t2.routeId)
SQL> /
43302 rows.
Elapsed time: 00:01:55.02
Execution plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=9 Card=1 Bytes=14)
1 0 FILTER
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'table1' (TABLE) (Cost=1 Card=1 Bytes=7)
3 2 NESTED LOOPS (Cost=3 Card=1 Bytes=14)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'table1' (TABLE) (Cost=2 Card=1 Bytes=7)
5 4 INDEX (RANGE SCAN) OF 'table1_IDX1' (INDEX) (Cost=1 Card=1)
6 3 INDEX (RANGE SCAN) OF 'table1_IDX1' (INDEX) (Cost=0 Card=1)
7 1 INTERSECTION
8 7 SORT (UNIQUE) (Cost=3 Card=108 Bytes=864)
9 8 TABLE ACCESS (BY INDEX ROWID) OF 'table2' (TABLE) (Cost=2 Card=108 Bytes=864)
10 9 INDEX (RANGE SCAN) OF 'table2_IDX1' (INDEX) (Cost=1 Card=108)
11 7 SORT (UNIQUE) (Cost=3 Card=108 Bytes=864)
12 11 TABLE ACCESS (BY INDEX ROWID) OF 'table2' (TABLE) (Cost=2 Card=108 Bytes=864)
13 12 INDEX (RANGE SCAN) OF 'table2_IDX1' (INDEX) (Cost=1 Card=108)
Statistics
1 recursive calls
0 db block gets
2872765 consistent gets
0 physical reads
812 redo size
964172 bytes sent via SQL*Net to client
32245 bytes received via SQL*Net from client
2888 SQL*Net roundtrips to/from client
860256 sorts (memory)
0 sorts (disk)
43302 rows processed
Looks like a big number of consistent gets! Any idea how to improve on that?
Martin. -
Increase performance of the following SELECT statement.
Hi All,
I have the following select statement which I would want to fine tune.
CHECK NOT LT_MARC IS INITIAL.
SELECT RSNUM
RSPOS
RSART
MATNR
WERKS
BDTER
BDMNG FROM RESB
INTO TABLE GT_RESB
FOR ALL ENTRIES IN LT_MARC
WHERE XLOEK EQ ' '
AND MATNR EQ LT_MARC-MATNR
AND WERKS EQ P_WERKS
AND BDTER IN S_PERIOD.
The following query is being run for a period of 1 year, where the number of records returned will be approx 3 million. When the program is run in the background, the execution time is around 76 hours. When I run the same program dividing the selection period into smaller parts, I am able to execute the same in about an hour.
After a previous posting I had changed the select statement to
CHECK NOT LT_MARC IS INITIAL.
SELECT RSNUM
RSPOS
RSART
MATNR
WERKS
BDTER
BDMNG FROM RESB
APPENDING TABLE GT_RESB PACKAGE SIZE LV_SIZE
FOR ALL ENTRIES IN LT_MARC
WHERE XLOEK EQ ' '
AND MATNR EQ LT_MARC-MATNR
AND WERKS EQ P_WERKS
AND BDTER IN S_PERIOD.
ENDSELECT.
But the performance improvement is negligible.
Please suggest.
Regards,
Karthik
Hi Karthik,
Do not use the APPENDING statement.
Also, you said that if you reduce the period then you get the result quickly.
Why not try dividing your internal table LT_MARC into small internal tables having at most 1000 entries each?
You can read from index 1 - 1000 for the first table, use that in the select query, and append the results.
Then you can refresh that table, read table LT_MARC from 1001 - 2000 into the same table, and execute the same query again.
I know this sounds strange, but you can bargain for better performance by increasing the number of database hits in this case.
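A sketch of that packaging idea (the package size and the helper names lt_marc_pkg, lv_from, lv_to are illustrative):

```abap
CONSTANTS lc_pkg TYPE i VALUE 1000.
DATA: lt_marc_pkg LIKE lt_marc,
      lv_from     TYPE i VALUE 1,
      lv_to       TYPE i.

DO.
  lv_to = lv_from + lc_pkg - 1.
* Copy the next slice of LT_MARC; appends nothing once lv_from is
* past the end of the table.
  APPEND LINES OF lt_marc FROM lv_from TO lv_to TO lt_marc_pkg.
  IF lt_marc_pkg IS INITIAL.
    EXIT.                          " all packages processed
  ENDIF.
  SELECT rsnum rspos rsart matnr werks bdter bdmng
    FROM resb
    APPENDING TABLE gt_resb
    FOR ALL ENTRIES IN lt_marc_pkg
    WHERE xloek = ' '
      AND matnr = lt_marc_pkg-matnr
      AND werks = p_werks
      AND bdter IN s_period.
  REFRESH lt_marc_pkg.
  lv_from = lv_to + 1.
ENDDO.
```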
Try this and let me know.
Regards
Nishant
-
Performance Tuning -To find the execution time for Select Statement
Hi,
There is a program that takes 10 hrs to execute. I need to tune its performance. The program basically reads a few tables like KNA1, ANLA, ANLU, ADRC etc. and updates a custom table. I did my analysis and found a few performance techniques for ABAP coding.
Now my problem is, to get this object approved I need to submit the execution statistics to the client. I checked both ST05 and SE30. I heard of a t-code where we can execute a select statement and note its time, then modify it and measure its improved performance. Can anybody suggest this to me?
Thanks,
Rajani.
Hi,
This is documentation regarding performance analysis. Hope this will be useful.
It is a general practice to use SELECT * FROM <database table>. This statement populates all the fields of the structure from the database.
The effect is manifold:
It increases the time to retrieve data from the database.
There is a large amount of unused data in memory.
It increases the processing time in the work area or internal tables.
It is always a good practice to retrieve only the required fields. Always use the syntax SELECT f1 f2 ... fn FROM <database table>.
e.g. Do not use the following statement:-
Data: i_mara like mara occurs 0 with header line.
Data: i_marc like marc occurs 0 with header line.
Select * from mara
Into table i_mara
Where matnr in s_matnr.
Select * from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Instead use the following statement:-
Data: begin of i_mara occurs 0,
Matnr like mara-matnr,
End of i_mara.
Data: begin of i_marc occurs 0,
Matnr like marc-matnr,
Werks like marc-werks,
End of i_marc.
Select matnr from mara
Into table i_mara
Where matnr in s_matnr. -
Select query and Insert statement performance
Hi all,
Can anyone please guide us on the below problems I am facing?
1) One simple Insert statement runs very slowly. What might be the reason? It is a simple table without any LOBs, LONGs or so. Everything else in the DB works fine.
2) One SELECT statement runs very slowly. It selects all records (around 1000) from a table. How can I improve its performance?
3) Which columns in the master and its detail tables should be indexed to improve query performance on them?
Many Thanks
Regards
sandeep
To get an answer to your questions you have to post some information about your system:
1. operating system
2. RAM
3. oracle version
4. init.ora
Thomas -
Need to Improve pefromance for select statement using MSEG table
Hi all,
We are using a select statement on the MSEG table
which takes a very long time when the program is scheduled in the background.
Please see the history below:
1) Previously this program was using SELECT-ENDSELECT statement inside the loop i.e.
LOOP AT I_MCHB.
To get Material Doc. Details
SELECT MBLNR
MJAHR
ZEILE INTO (MSEG-MBLNR,MSEG-MJAHR,MSEG-ZEILE)
UP TO 1 ROWS
FROM MSEG
WHERE CHARG EQ I_MCHB-CHARG
AND MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
ENDSELECT.
Endloop.
The program was taking 1 hr for 20 k data
2) The above statement was replaced by FOR ALL ENTRIES to remove the SELECT-ENDSELECT from the loop.
***GET MATERIAL DOC NUMBER AND FINANCIAL YEAR DETAILS FROM MSEG TABLE
SELECT MBLNR
MJAHR
ZEILE
MATNR
CHARG
WERKS
LGORT
INTO TABLE I_MSEG
FROM MSEG
FOR ALL ENTRIES IN I_MCHB
WHERE CHARG EQ I_MCHB-CHARG
AND MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
3) After further technical analysis from the BASIS team, and with the suggestion to optimize the program by changing the access path to INDEX RANGE SCAN on
MSEG~M:
SELECT MBLNR
MJAHR
ZEILE
MATNR
CHARG
WERKS
LGORT
INTO TABLE I_MSEG
FROM MSEG
FOR ALL ENTRIES IN I_MCHB
WHERE MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
At present the program is taking 3 to 4 hrs in the background.
The statement does a complete table scan using index
MSEG~M.
Please suggest how to improve the performance of this.
many many thanks
deepak
The benchmark should be the join, and I can not see how any of your solutions can be faster than the join
SELECT .....
INTO TABLE ....
UP TO 1 ROWS
FROM mchb as a
INNER JOIN mseg as b
ON a~matnr EQ b~matnr
AND a~werks EQ b~werks
AND a~lgort EQ b~lgort
AND a~charg EQ b~charg
WHERE a~ ....
The WHERE condition must come from the select on MCHB, the field list from the total results
you want.
If you want to compare, must compare your solutions plus the select to fill I_MCHB.
Siegfried
Edited by: Siegfried Boes on Dec 20, 2007 2:28 PM -
How to improve select stmt performance without going for secondary index
Hi friends,
I have a select statement which does not contain the key fields (primary index) in the where condition, and I have to improve that select statement's performance without going for secondary indexes.
Can you please suggest an alternative way for this?
Thanks in advance,
Ramesh.
Hi,
If possible, create a secondary index of your own. But if you have a restriction on this, try to arrange the fields in the where clause in the same order as they appear in the table.
This will help the performance a bit.
Another option: if your table doesn't contain any critical data, or the data in it is not updated frequently, you may go for buffering. It is a good alternative to indexing, with the above limitations.
For details on buffering, check the link below and all its sublinks.
[concept of buffering|http://help.sap.com/saphelp_nw04/helpdata/en/cf/21f244446011d189700000e8322d00/content.htm]
Regards,
Anirban -
"Check Statistics" in the Performance tab. How to see SELECT statement?
Hi,
In a previous post on SDN, it was explained (see below) that the "Check Statistics" in the Performance tab, under Manage in the context of a cube, executes the SELECT statement below.
Would you happen to know how to see the SELECT statements that the "Check Statistics" command executes, as mentioned in the posting below?
Thanks
====================================
When you hit the Check Statistics tab, it isn't just the fact tables that are checked, but also all master data tables for all the InfoObjects (characteristics) that are in the cube's dimensions.
It checks the number of rows inserted, last analyzed dates, etc.
SELECT
T.TABLE_NAME, M.PARTITION_NAME, TO_CHAR (T.LAST_ANALYZED, 'YYYYMMDDHH24MISS'), T.NUM_ROWS,
M.INSERTS, M.UPDATES, M.DELETES, M.TRUNCATED
FROM
USER_TABLES T LEFT OUTER JOIN USER_TAB_MODIFICATIONS M ON T.TABLE_NAME = M.TABLE_NAME
WHERE
T.TABLE_NAME = '/BI0/PWBS_ELEMT' AND M.PARTITION_NAME IS NULL
When you refresh the stats, all the tables that need stats refreshed are refreshed again. Since InfoCube queries access the various master data tables in queries, it makes sense that SAP would check their status.
In looking at some of the results in 7.0, I'm not sure that the 30-day check is being done as it was in 3.5. This is one area SAP retooled quite a bit.
Yellow only indicates that there could be a problem. You could have stale DB stats on a table, but if they don't cause the DB optimizer to choose a poor execution plan, then they have no impact.
Good DB stats are vital to query performance, and old stats could be responsible for poor performance. I'm just saying that the statistics check's yellow light status is not a definitive indicator.
If your DBA has BRCONNECT running daily, you really should not have to worry about stats collection on the BW side except in cases immediately after large loads/deletes, when the nightly BRCONNECT hasn't run yet.
BRCONNECT should produce a log every time it runs showing you all the tables that it determined should have stats refreshed. That might be worth a review. It should be running daily. If it is not being run, then you need to look at running stats collection from the BW side, either in process chains or via InfoCube automatisms.
Best bet is to use ST04 to get explain plans of a poorly running InfoCube query; then it can be reviewed to see where the time is being spent and whether stats are a culprit.
Hi,
Thanks, this is what I came up with:
In ST05,
check SQL Trace, Activate Trace.
Now, in RSA1,
on cube Cube1:
Manage, Performance tab, Check Statistics.
Again, back to ST05:
Deactivate Trace,
then click on Display Trace.
Now, in the trace display, after scanning through the output,
how do I see the SELECT statements that the "Check Statistics" command executes?
I will appreciate your help.