Provide alternative select statements to improve performance
Hi Friends,
I want to improve the performance of my report. The statement below is taking too much time. Please suggest an alternative for the following SELECT statement.
SELECT H~CLMNO H~PNGUID P~PVGUID V~PNGUID AS V_PNGUID
V~AKTIV V~KNUMV P~POSNR V~KATEG AS ALTNUM V~VERSN
APPENDING CORRESPONDING FIELDS OF TABLE EX_WTYKEY_T
FROM ( ( PNWTYH AS H
INNER JOIN PNWTYV AS V ON V~HEADER_GUID = H~PNGUID )
INNER JOIN PVWTY AS P ON P~VERSION_GUID = V~PNGUID )
FOR ALL ENTRIES IN lt_claim
WHERE H~PNGUID = lt_claim-pnguid
AND V~AKTIV IN rt_AKTIV
AND V~KATEG IN IM_ALTNUM_T.
Thanks
Amminesh.
Moderator message - Moved to the correct forum
Edited by: Rob Burbank on May 14, 2009 11:00 AM
Hi,
Copy the internal table lt_claim contents to another temp internal table.
lt_claim_temp[] = lt_claim[].
sort lt_claim_temp by pnguid.
delete adjacent duplicates from lt_claim_temp comparing pnguid.
SELECT H~CLMNO H~PNGUID P~PVGUID V~PNGUID AS V_PNGUID
V~AKTIV V~KNUMV P~POSNR V~KATEG AS ALTNUM V~VERSN
APPENDING CORRESPONDING FIELDS OF TABLE EX_WTYKEY_T
FROM ( ( PNWTYH AS H
INNER JOIN PNWTYV AS V ON V~HEADER_GUID = H~PNGUID )
INNER JOIN PVWTY AS P ON P~VERSION_GUID = V~PNGUID )
FOR ALL ENTRIES IN lt_claim_temp
WHERE H~PNGUID = lt_claim_temp-pnguid
AND V~AKTIV IN rt_AKTIV
AND V~KATEG IN IM_ALTNUM_T.
refresh lt_claim_temp.
Similar Messages
-
In how many ways we can filter this select statement to improve performance
Hi Experts,
This select statement takes 2.5 hrs in production. Can we refine the WHERE condition to improve the performance? Please suggest, with coding if possible.
select * from dfkkop into table t_dfkkop
where vtref like 'EPC%' and
( ( augbd = '00000000' and
xragl = 'X' )
or
( augbd between w_clrfr and w_clrto ) ) and
augrd ne '03' and
zwage_type in s_wtype .
Regards,
Sam.
If it really takes 2.5 hours, try running an SQL trace, and try splitting the OR into two SELECTs:
select *
into table t_dfkkop
from dfkkop
where vtref like 'EPC%'
and augbd = '00000000' and xragl = 'X'
and augrd ne '03'
and zwage_type in s_wtype .
select *
appending table t_dfkkop
from dfkkop
where vtref like 'EPC%'
and augbd between w_clrfr and w_clrto
and augrd ne '03'
and zwage_type in s_wtype .
Do a DESCRIBE TABLE after the first SELECT and after the second,
or run an SQL Trace.
What is the time needed for each part, how many records come back, and which index is used?
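A minimal sketch of the DESCRIBE TABLE check suggested above (the counter variable is an assumption, not from the original program):

```abap
DATA lv_lines TYPE i.

" ... after the first SELECT:
DESCRIBE TABLE t_dfkkop LINES lv_lines.
WRITE: / 'Rows after first SELECT: ', lv_lines.

" ... after the second (APPENDING) SELECT:
DESCRIBE TABLE t_dfkkop LINES lv_lines.
WRITE: / 'Rows after both SELECTs:', lv_lines.
```

Comparing the two counts tells you which of the two conditions returns the bulk of the data.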
Siegfried -
Problem with Select Statements
Hi All,
I have a performance problem for my report because of the following statements.
How can i modify the select statements for improving the performance of the report.
DATA : shkzg1h LIKE bsad-shkzg,
shkzg1s LIKE bsad-shkzg,
shkzg2h LIKE bsad-shkzg,
shkzg2s LIKE bsad-shkzg,
shkzg1hu LIKE bsad-shkzg,
shkzg1su LIKE bsad-shkzg,
shkzg2hu LIKE bsad-shkzg,
shkzg2su LIKE bsad-shkzg,
kopbal1s LIKE bsad-dmbtr,
kopbal2s LIKE bsad-dmbtr,
kopbal1h LIKE bsad-dmbtr,
kopbal2h LIKE bsad-dmbtr,
kopbal1su LIKE bsad-dmbtr,
kopbal2su LIKE bsad-dmbtr,
kopbal1hu LIKE bsad-dmbtr,
kopbal2hu LIKE bsad-dmbtr.
*These statements are in LOOP.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1s , kopbal1s)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1su , kopbal1su)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1h , kopbal1h)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1hu , kopbal1hu)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2s , kopbal2s)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2su , kopbal2su)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2h , kopbal2h)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2hu , kopbal2hu)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
Siegfried Boes wrote:
> Please stop writing answers if you understand nothing about database SELECTs!
> All above recommendations are pure nonsense!
>
> As always with such questions, you must do an analysis before you ask! The coding itself is perfectly o.k., a SELECT with an aggregate and a GROUP BY cannot be changed into a SELECT SINGLE or whatever.
>
> But your SELECTs must be supported by indexes!
>
> Please run SQL Trace, and tell us the results:
>
> I see 8 statements, what is the duration and the number of records coming back for each statement?
> Maybe only one statement is slow.
>
> See
> SQL trace:
> /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
>
>
> Siegfried
Nice point there, Siegfried. Instead of giving constructive suggestions, people here gave a very bad suggestion: using SELECT SINGLE combined with SUM and GROUP BY.
I hope the poster sees your reply before trying SELECT SINGLE and wondering why he gets an error.
Anyway, the most important thing is: how many loop iterations are expected around those select statements?
If you have thousands of iterations, you can expect poor performance.
So when you do the SQL trace, you should also look at how many times each select statement is called, not only at the performance of each individual select.
Regards,
Abraham -
Autocommit and select statements
Hi,
Does auto-commit affect select statements (in the performance aspect) or does it apply only for insert/update/delete statements?
thanks,
Yael
SELECTs participate in DB transactionality just as much as INSERT/UPDATE/DELETE statements do. The only way I know to demonstrate this is using the SERIALIZABLE isolation level.
Using an Oracle database and 2 SQL*Plus windows (the translation into Java/JDBC is left as an exercise for the reader).
Setup:
SQL> create table foo (n number);
Table created.
SQL> insert into foo (n) values (3);
1 row created.
SQL> c/3/4/
1* insert into foo (n) values (4)
SQL> /
1 row created.
SQL> commit;
Now the demo...
Window A:
SQL> alter session set isolation_level = serializable;
Session altered.
SQL> select sum(n) from foo;
SUM(N)
7
Window B:
SQL> insert into foo (n) values (6);
1 row created.
SQL> commit;
And Window A again:
SQL> select sum(n) from foo;
SUM(N)
7
2 minutes or 2 years later, Window A must either produce the same answer or an error. Once a commit or rollback occurs in Window A, then and only then can the connection in Window A see the changed sum. Therefore, the DB has to be maintaining a transaction boundary for the connection in Window A, even though only SELECTs have occurred.
The dreaded Oracle "Snapshot too old" error would be a common error here, indicating that Oracle has reused the space that stored whatever was needed to reconstruct the old state of the table - this is a bad example in that I don't think the additional row added by B can cause that error; it could if it were a row deletion, though. As I understand it, Postgres can never raise an error because the old state was lost; instead, Postgres will refuse to reuse changed storage until it can guarantee that the changed storage will never be needed to reconstruct the old state. In other words, if we ran this on Postgres and waited long enough before committing or rolling back, the DB storage would fill up with every changed value going back to the moment of the first select in Window A.
But back to the main point; SELECTs can participate in DB transactionality. How a DB manages that is up to the implementation. That management has a small (teeny tiny) cost, at least some of the time, and COMMITs interact with that cost; just how is up to the implementation (and in a good implementation it probably also depends on the isolation level).
At the practical level, we can generally totally ignore the transactionality of SELECTs; most of us use READ COMMITTED instead of SERIALIZABLE, and we don't keep connections open for days or months, so it hardly matters in our day-to-day... -
Select statement performance improvement.
Hi Gurus,
I am new to ABAP.
I have the below select statement:
SELECT mandt msgguid pid exetimest
  INTO TABLE lt_key
  UP TO lv_del_rows ROWS
  FROM (gv_master)
  WHERE
*   msgstate IN rt_msgstate
*   AND ( adapt_stat = cl_xms_persist=>co_stat_adap_processed
*   OR adapt_stat = cl_xms_persist=>co_stat_adap_undefined )
*   AND itfaction = ls_itfaction
*   AND msgtype = cl_xms_persist=>co_async
*   AND
    exetimest LE lv_timestamp
    AND exetimest GE last_ts
    AND reorg = cl_xms_persist=>co_reorg_ini
  ORDER BY mandt itfaction reorg exetimest.
Can anyone help me how i can improve the performance of this statement?
Here is the sql trace for the statement:
SELECT
/*+
FIRST_ROWS (100)
*/
"MANDT" , "MSGGUID" , "PID" , "EXETIMEST"
FROM
"SXMSPMAST"
WHERE
"MANDT" = :A0 AND "EXETIMEST" <= :A1 AND "EXETIMEST" >= :A2 AND "REORG" = :A3
ORDER BY
"MANDT" , "ITFACTION" , "REORG" , "EXETIMEST"
Execution Plan
SELECT STATEMENT ( Estimated Costs = 3 , Estimated #Rows = 544 )
4 SORT ORDER BY
( Estim. Costs = 2 , Estim. #Rows = 544 )
Estim. CPU-Costs = 15.671.852 Estim. IO-Costs = 1
3 FILTER
2 TABLE ACCESS BY INDEX ROWID SXMSPMAST
( Estim. Costs = 1 , Estim. #Rows = 544 )
Estim. CPU-Costs = 11.130 Estim. IO-Costs = 1
1 INDEX RANGE SCAN SXMSPMAST~TST
Search Columns: 2
Estim. CPU-Costs = 3.329 Estim. IO-Costs = 0
Do I need to create any new index ? Do i need to remove the Order By clause?
Thanks in advance.
Why is there an
UP TO lv_del_rows ROWS
together with an ORDER BY?
The database will find all rows fulfilling the condition but returns only the top lv_del_rows in the sort order.
Therefore it can take a while.
Your index, always put the client field at first position.
actually I am not really convinced by your logic:
itfaction reorg exetimest
itfaction is first in the sort order, so all records with the smallest itfaction will come first, but itfaction is not specified in the WHERE clause. Is this really what you want?
Change the index to mandt reorg exetimest
and change the ORDER BY to mandt reorg exetimest
then it will become fast.
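Following that advice, the adjusted statement might look like this (only a sketch, reusing the names from the original post; the commented-out conditions are omitted):

```abap
SELECT mandt msgguid pid exetimest
  INTO TABLE lt_key
  UP TO lv_del_rows ROWS
  FROM (gv_master)
  WHERE exetimest LE lv_timestamp
    AND exetimest GE last_ts
    AND reorg     EQ cl_xms_persist=>co_reorg_ini
  ORDER BY mandt reorg exetimest.  " matches the proposed index mandt, reorg, exetimest
```

With the ORDER BY matching an index on mandt, reorg, exetimest, the database can stop after lv_del_rows rows instead of sorting the full result set.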
How to improve Performance of the Statements.
Hi,
I am using Oracle 10g. My problem is that executing and fetching records from the database takes too much time. I have created statistics also, but no use. Now what do I have to do to improve the performance of SELECT, INSERT, UPDATE, DELETE statements?
Does it make any difference that I am using Windows XP with 1 GB RAM on the server machine, and Windows XP with 512 MB RAM on the client machine?
Please give me advice to improve performance.
Thank you...!
What and where to change parameters and values?
Well, maybe my previous post was not clear enough, but if you want to keep your job, you shouldn't change anything else in the init parameters, and you shouldn't fall into Compulsive Tuning Disorder.
Anyone who advises you to change some parameters to some value without any more info shouldn't be listened to.
Nicolas. -
Increase performance of the following SELECT statement.
Hi All,
I have the following select statement which I would want to fine tune.
CHECK NOT LT_MARC IS INITIAL.
SELECT RSNUM
RSPOS
RSART
MATNR
WERKS
BDTER
BDMNG FROM RESB
INTO TABLE GT_RESB
FOR ALL ENTRIES IN LT_MARC
WHERE XLOEK EQ ' '
AND MATNR EQ LT_MARC-MATNR
AND WERKS EQ P_WERKS
AND BDTER IN S_PERIOD.
The following query is being run for a period of 1 year where the number of records returned will be approx 3 million. When the program is run in background the execution time is around 76 hours. When I run the same program dividing the selection period into smaller parts I am able to execute the same in about an hour.
After a previous posting I had changed the select statement to
CHECK NOT LT_MARC IS INITIAL.
SELECT RSNUM
RSPOS
RSART
MATNR
WERKS
BDTER
BDMNG FROM RESB
APPENDING TABLE GT_RESB PACKAGE SIZE LV_SIZE
FOR ALL ENTRIES IN LT_MARC
WHERE XLOEK EQ ' '
AND MATNR EQ LT_MARC-MATNR
AND WERKS EQ P_WERKS
AND BDTER IN S_PERIOD.
ENDSELECT.
But the performance improvement is very negligible.
Please suggest.
Regards,
Karthik
Hi Karthik,
Do not use the appending statement.
Also, you said that if you reduce the period, you get the result quickly.
Why not try dividing your internal table LT_MARC into small internal tables having at most 1000 entries each?
You can read index 1 - 1000 into the first table, use that in the select query, and append the results.
Then you can refresh that table, read LT_MARC from 1001 - 2000 into the same table, and execute the same query again.
I know this sounds strange, but you can bargain for better performance by increasing database hits in this case.
Try this and let me know.
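A rough sketch of this packaging idea, reusing the names from the original post (the package size of 1000 is just the suggested value; lt_marc_pkg and lv_from are helper names introduced for illustration):

```abap
DATA: lt_marc_pkg LIKE lt_marc,
      lv_from     TYPE i VALUE 1.

DO.
  CLEAR lt_marc_pkg[].
  " take the next (up to) 1000 entries of LT_MARC
  APPEND LINES OF lt_marc FROM lv_from TO lv_from + 999 TO lt_marc_pkg.
  IF lt_marc_pkg[] IS INITIAL.
    EXIT.                          " all packages processed
  ENDIF.
  SELECT rsnum rspos rsart matnr werks bdter bdmng
    FROM resb
    APPENDING TABLE gt_resb
    FOR ALL ENTRIES IN lt_marc_pkg
    WHERE xloek EQ ' '
      AND matnr EQ lt_marc_pkg-matnr
      AND werks EQ p_werks
      AND bdter IN s_period.
  lv_from = lv_from + 1000.
ENDDO.
```

Each SELECT then works on a small FOR ALL ENTRIES driver table, which keeps the generated database statements short.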
Regards
Nishant
-
Performance Tuning -To find the execution time for Select Statement
Hi,
There is a program that takes 10 hrs to execute. I need tune its performance. The program is basically reading few tables like KNA1,ANLA,ANLU,ADRC etc and updates to Custom table. I did my analysis and found few performance techniques for ABAP coding.
Now my problem is, to get this object approved I need to submit the execution statistics to client.I checked both ST05 and SE30. I heard of a Tcode where we can execute a select statement and note its time, then modify and find its improved Performance. Can anybody suggest me on this.
Thanks,
Rajani.
Hi,
This is documentation regarding performance analysis. Hope this will be useful
It is a general practice to use Select * from <database>. This statement populates all the fields of the structure from the database.
The effect is manifold:
It increases the time to retrieve data from the database
There is a large amount of unused data in memory
It increases the processing time in the work area or internal tables
It is always a good practice to retrieve only the required fields. Always use the syntax Select f1 f2 ... fn from <database>
e.g. Do not use the following statement:-
Data: i_mara like mara occurs 0 with header line.
Data: i_marc like marc occurs 0 with header line.
Select * from mara
Into table i_mara
Where matnr in s_matnr.
Select * from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Instead use the following statement:-
Data: begin of i_mara occurs 0,
Matnr like mara-matnr,
End of i_mara.
Data: begin of i_marc occurs 0,
Matnr like marc-matnr,
Werks like marc-werks,
End of i_marc.
Select matnr from mara
Into table i_mara
Where matnr in s_matnr. -
Performance problem(ANEA/ANEP table) in Select statement
Hi
I am using below select statement to fetch data.
Does the below WHERE clause have a performance issue?
Can you please suggest?
1) In the select on the ANEP table, I am not using all the key fields in the WHERE condition. Will it cause a performance problem?
2) Should the order of the WHERE conditions be the same as in the table? If any field's order is changed, will that also affect performance?
SELECT bukrs
anln1
anln2
afabe
gjahr
peraf
lnran
bzdat
bwasl
belnr
buzei
anbtr
lnsan
FROM anep
INTO TABLE o_anep
FOR ALL ENTRIES IN i_anla
WHERE bukrs = i_anla-bukrs
AND anln1 = i_anla-anln1
AND anln2 = i_anla-anln2
AND afabe IN s_afabe
AND bzdat LE p_date
AND bwasl IN s_bwasl.
SELECT bukrs
anln1
anln2
gjahr
lnran
afabe
aufwv
nafal
safal
aafal
erlbt
aufwl
nafav
aafav
invzv
invzl
FROM anea
INTO TABLE o_anea
FOR ALL ENTRIES IN o_anep
WHERE bukrs = o_anep-bukrs
AND anln1 = o_anep-anln1
AND anln2 = o_anep-anln2
AND gjahr = o_anep-gjahr
AND lnran = o_anep-lnran
AND afabe = o_anep-afabe.
Moderator message: Please Read before Posting in the Performance and Tuning Forum
Edited by: Thomas Zloch on Aug 9, 2011 9:37 AM
1. Yes. If you have only a few primary keys in your WHERE condition, that does affect the performance. But sometimes the requirement itself may be that way: we may not know all the primary key values to give them in the WHERE condition. If you know the values, then provide them without fail.
2. Yes. It's better to always follow the sequence in the WHERE condition and even in the fields being fetched.
One important point is, whenever you use FOR ALL ENTRIES IN, please make sure that the itab IS NOT INITIAL, i.e. the itab must have been filled. So, place the same condition before both the SELECT queries like:
IF i_anla[] IS NOT INITIAL.
SELECT bukrs
anln1
anln2
afabe
gjahr
peraf
lnran
bzdat
bwasl
belnr
buzei
anbtr
lnsan
FROM anep
INTO TABLE o_anep
FOR ALL ENTRIES IN i_anla
WHERE bukrs = i_anla-bukrs
AND anln1 = i_anla-anln1
AND anln2 = i_anla-anln2
AND afabe IN s_afabe
AND bzdat LE p_date
AND bwasl IN s_bwasl.
ENDIF.
IF o_anep[] IS NOT INITIAL.
SELECT bukrs
anln1
anln2
gjahr
lnran
afabe
aufwv
nafal
safal
aafal
erlbt
aufwl
nafav
aafav
invzv
invzl
FROM anea
INTO TABLE o_anea
FOR ALL ENTRIES IN o_anep
WHERE bukrs = o_anep-bukrs
AND anln1 = o_anep-anln1
AND anln2 = o_anep-anln2
AND gjahr = o_anep-gjahr
AND lnran = o_anep-lnran
AND afabe = o_anep-afabe.
ENDIF. -
Modify a SELECT Query on ISU DB tables to improve performance
Hi Experts,
I have a SELECT query in a Program which is hitting 6 DB tables by means of 5 inner joins.
The outcome is that the program takes an exceptionally long time to execute, the SELECT statement being the main time consumer.
Need your expertise on how to split the Query without affecting functionality -
The Query :
SELECT fkkvkp~gpart eabl~ablbelnr eabl~adat eabl~istablart
FROM eabl
INNER JOIN eablg ON eablg~ablbelnr = eabl~ablbelnr
INNER JOIN egerh ON egerh~equnr = eabl~equnr
INNER JOIN eastl ON eastl~logiknr = egerh~logiknr
INNER JOIN ever ON ever~anlage = eastl~anlage
INNER JOIN fkkvkp ON fkkvkp~vkont = ever~vkonto
INTO TABLE itab
WHERE eabl~adat GT [date which is (sy-datum - 3 years)]
Thanks in advance,
PD
Hi Prajakt
There are a couple of issues with the code provided by Aviansh:
1) Higher Memory consumption by extensive use of internal tables (possible shortdump TSV_NEW_PAGE_ALLOC_FAILED)
2) In many instances multiple SELECT ... FOR ALL ENTRIES... are not faster than a single JOIN statement
3) In the given code the timeslices tables are limited to records active of today, which is not the same as your select (taking into account that you select for the last three years you probably want historical meter/installation relationships as well*)
4) Use of sorted/hashed internal tables instead of standard ones could also improve the runtime (in case you stick to all the internal tables)
Did you create an index on EABL including columns MANDT, ADAT?
Did you check the execution plan of your original JOIN Select statement?
Yep
Jürgen
You should review your selection, because you probably want business partner that was linked to the meter reading at the time of ADAT, while your select doesn't take the specific Contract / Device Installation of the time of ADAT into account.
Example your meter reading is from 16.02.2010
Meter 00001 was in Installation 3000001 between 01.02.2010 and 23.08.2010
Meter 00002 was in Installation 3000001 between 24.08.2010 and 31.12.9999
Installation 3000001 was linked to Account 4000001 between 01.01.2010 and 23.01.2011
Installation 3000001 was linked to Account 4000002 between 24.01.2011 and 31.12.9999
This means your select returns four lines, and you probably want only one.
To achieve that you have to limit all timeslices to the date of EABL-ADAT (selects from EGERH, EASTL, EVER).
Update:
Coming back to point one and the memory consumption:
What are you planning to do with the output of the select statment?
Did you get a shortdump TSV_NEW_PAGE_ALLOC_FAILED with three years meter reading history?
Or did you never run on production like volumes yet?
Dependent on this you might want to redesign your program anyway.
Edited by: sattlerj on Jun 24, 2011 10:38 AM -
Need to improve Performance of select...endselect query
Hi experts,
I have a query in my program like below with inner join of 3 tables.
In my program select....endselect is used, and inside this select...endselect further select...endselect statements are used.
While executing in production it takes a lot of time to fetch records. Can anyone suggest how to improve the performance of the below query? It is urgent...
Greatly appreciated ur help...
SELECT MVKE~DWERK MVKE~MATNR MVKE~VKORG MVKE~VTWEG MARA~MATNR
MARA~MTART ZM012~MTART ZM012~ZLIND ZM012~ZPRICEREF
INTO (MVKE-DWERK , MVKE-MATNR , MVKE-VKORG , MVKE-VTWEG , MARA-MATNR
, MARA-MTART , ZM012-MTART , ZM012-ZLIND , ZM012-ZPRICEREF )
FROM ( MVKE
INNER JOIN MARA
ON MARA~MATNR = MVKE~MATNR
INNER JOIN ZM012
ON ZM012~MTART = MARA~MTART )
WHERE MVKE~DWERK IN SP$00004
AND MVKE~MATNR IN SP$00001
AND MVKE~VKORG IN SP$00002
AND MVKE~VTWEG IN SP$00003
AND MARA~MTART IN SP$00005
AND ZM012~ZLIND IN SP$00006
AND ZM012~ZPRICEREF IN SP$00007.
%DBACC = %DBACC - 1.
IF %DBACC = 0.
STOP.
ENDIF.
CHECK SP$00005.
CHECK SP$00004.
CHECK SP$00001.
CHECK SP$00002.
CHECK SP$00003.
CHECK SP$00006.
CHECK SP$00007.
clear Check_PR00.
select * from A004
where kappl = 'V'
and kschl = 'PR00'
and vkorg = mvke-vkorg
and vtweg = mvke-vtweg
and matnr = mvke-matnr
and DATAB le sy-datum
and DATBI ge sy-datum.
if sy-subrc = 0.
select * from konp
where knumh = a004-knumh.
if sy-subrc = 0.
Check_PR00 = konp-kbetr.
endif.
endselect.
endif.
endselect.
CHECK SP$00008.
clear Check_ZPR0.
select * from A004
where kappl = 'V'
and kschl = 'ZPR0'
and vkorg = mvke-vkorg
and vtweg = mvke-vtweg
and matnr = mvke-matnr
and DATAB le sy-datum
and DATBI ge sy-datum.
if sy-subrc = 0.
select * from konp
where knumh = a004-knumh.
if sy-subrc = 0.
Check_ZPR0 = konp-kbetr.
endif.
endselect.
endif.
endselect.
CHECK SP$00009.
clear ZFMP.
select * from A004
where kappl = 'V'
and kschl = 'ZFMP'
and vkorg = mvke-vkorg
and vtweg = mvke-vtweg
and matnr = mvke-matnr
and DATAB le sy-datum
and DATBI ge sy-datum.
if sy-subrc = 0.
select * from konp
where knumh = a004-knumh.
if sy-subrc = 0.
ZFMP = konp-kbetr.
endif.
endselect.
endif.
endselect.
CHECK SP$00010.
clear mastercost.
clear ZDCF.
select * from A004
where kappl = 'V'
and kschl = 'ZDCF'
and vkorg = mvke-vkorg
and vtweg = mvke-vtweg
and matnr = mvke-matnr
and DATAB le sy-datum
and DATBI ge sy-datum.
if sy-subrc = 0.
select * from konp
where knumh = a004-knumh.
if sy-subrc = 0.
ZDCF = konp-kbetr.
endif.
endselect.
endif.
endselect.
CHECK SP$00011.
clear masterprice.
clear Standardcost.
select * from mbew
where matnr = mvke-matnr
and bwkey = mvke-dwerk.
Standardcost = mbew-stprs.
mastercost = MBEW-BWPRH.
masterprice = mBEW-BWPH1.
endselect.
ADD 1 TO %COUNT-MVKE.
%LINR-MVKE = '01'.
EXTRACT %FG01.
%EXT-MVKE01 = 'X'.
EXTRACT %FGWRMVKE01.
ENDSELECT.
best rgds..
hari..Hi there.
Some advices:
- why go to MVKE first and MARA second? You will find n rows in MVKE for 1 matnr, and then go n times to the same record in MARA. Do the opposite, i.e., go first to MARA (1 time per matnr) and then to MVKE.
- avoid select *, you will save time.
- use trace or measure performance in tcodes ST05 and SE30.
- replace:
select * from konp
where knumh = a004-knumh.
if sy-subrc = 0.
Check_ZPR0 = konp-kbetr.
endif.
endselect.
by
select * from konp
where knumh = a004-knumh.
Check_ZPR0 = konp-kbetr.
exit.
endselect.
Here, if I understood correctly, you only need to assign the kbetr value to Check_ZPR0 if anything is selected (you don't need the IF, because if the SELECT loop is entered, sy-subrc is always 0, and you also don't need to do it several times for the same a004-knumh - hence the EXIT).
Hope this helps.
Regards.
Valter Oliveira.
Edited by: Valter Oliveira on Jun 5, 2008 3:16 PM -
How to improve performance of select query when primary key is not referred
Hi,
There is a select query where we are unable to reference the primary key of the tables:
Since the below code references the vgbel and vgpos fields instead of vbeln and posnr..... the performance is very slow.
select vbeln posnr into (wa-vbeln1, wa-posnr1)
from lips
where ( pstyv ne 'ZBAT'
and pstyv ne 'ZNLN' )
and vgbel = i_vbap-vbeln
and vgpos = i_vbap-posnr.
endselect.
Please le t me know if you have some tips..hi,
I hope you are using the select statement inside a loop ...endloop get that outside to improve the performance ..
if not i_vbap[] is initial.
select vbeln posnr into table it_lips
from lips
for all entries in i_vbap
where ( pstyv ne 'ZBAT'
and pstyv ne 'ZNLN' )
and vgbel = i_vbap-vbeln
and vgpos = i_vbap-posnr.
endif. -
How to improve performance of insert statement
Hi all,
How to improve performance of insert statement
I am inserting 1 lakh (100,000) records into a table and it takes around 20 min..
Please help.
Thanks in advance.
I tried:
SQL> create table test as select * from dba_objects;
Table created.
SQL> delete from test;
3635 rows deleted.
SQL> commit;
Commit complete.
SQL> select count(*) from dba_extents where segment_name='TEST';
COUNT(*)
4
SQL> insert /*+ APPEND */ into test select * from dba_objects;
3635 rows created.
SQL> commit;
Commit complete.
SQL> select count(*) from dba_extents where segment_name='TEST';
COUNT(*)
6
Cheers, Bhupinder -
Need to Improve pefromance for select statement using MSEG table
Hi all,
We are using a select statement using MSEG table
which takes a very long time to run the program which is scheduled in back ground.
Please see the history below.;
1) Previously this program was using SELECT-ENDSELECT statement inside the loop i.e.
LOOP AT I_MCHB.
To get Material Doc. Details
SELECT MBLNR
MJAHR
ZEILE INTO (MSEG-MBLNR,MSEG-MJAHR,MSEG-ZEILE)
UP TO 1 ROWS
FROM MSEG
WHERE CHARG EQ I_MCHB-CHARG
AND MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
ENDSELECT.
Endloop.
The program was taking 1 hr for 20 k data
2)The above statement was replaced by ALL ENTRIES to remove the SELECT-ENDSELECT from the loop.
***GET MATERIAL DOC NUMBER AND FINANCIAL YEAR DETAILS FROM MSEG TABLE
SELECT MBLNR
MJAHR
ZEILE
MATNR
CHARG
WERKS
LGORT
INTO TABLE I_MSEG
FROM MSEG
FOR ALL ENTRIES IN I_MCHB
WHERE CHARG EQ I_MCHB-CHARG
AND MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
3)After getting the further technical analysis from BASIS team , And with the suggestion to optimize the program by changing the INDEX RANGE SCAN to
MSEG~M.
SELECT MBLNR
MJAHR
ZEILE
MATNR
CHARG
WERKS
LGORT
INTO TABLE I_MSEG
FROM MSEG
FOR ALL ENTRIES IN I_MCHB
WHERE MATNR EQ I_MCHB-MATNR
AND WERKS EQ I_MCHB-WERKS
AND LGORT EQ I_MCHB-LGORT.
At present the program is taking 3 to 4 hrs in back ground .
The table is complete table scan using index
MSEG~M.
Please suggest to improve the performance of this
many many thanks
deepak
The benchmark should be the join, and I cannot see how any of your solutions can be faster than the join:
SELECT .....
INTO TABLE ....
UP TO 1 ROWS
FROM mchb as a
INNER JOIN mseg as b
ON a~matnr EQ b~matnr
AND a~werks EQ b~werks
AND a~lgort EQ b~lgort
AND a~charg EQ b~charg
WHERE a~ ....
The WHERE condition must come from the select on MCHB, the field list from the total results
you want.
If you want to compare, must compare your solutions plus the select to fill I_MCHB.
Siegfried
Edited by: Siegfried Boes on Dec 20, 2007 2:28 PM -
How can we improve performance while selection production orders from resb
Dear all,
there is a performance issue in a report which compares sales order and production order.
Below is the code; the time is spent while reading production order data from resb with the below select statement.
Can anybody tell me how we can improve the performance? Should we use an index, and if yes, how to use indexing?
*read sales order data
SELECT vbeln posnr arktx zz_cl zz_qty
INTO (itab-vbeln, itab-sposnr, itab-arktx, itab-zz_cl, itab-zz_qty)
FROM vbap
WHERE vbeln = p_vbeln
AND uepos = p_posnr.
itab-so_qty = itab-zz_cl * itab-zz_qty / 1000.
CONCATENATE itab-vbeln itab-sposnr
INTO itab-document SEPARATED BY '/'.
CLEAR total_pro.
**read production order data*
SELECT aufnr posnr roms1 roanz
INTO (itab-aufnr, itab-pposnr, itab-roms1, itab-roanz)
FROM resb
WHERE kdauf = p_vbeln
AND ablad = itab-sposnr+2.
Himanshu,
Put a break point before these two select statements and execute in production. This way you will come to know which select statement is taking more time to execute.
In both the select statements the where clause is not using the primary keys.
Coming to the point of selecting the data from vbap, do check SAP note no. 185530 and modify the select statement accordingly.
As far as the table RESB is concerned, here also the where clause doesn't have the primary keys. Do check SAP Note no. 187906.
I guess not using primary keys is marring the performance.
K.Kiran.