Performance of SELECT Statements
Hi All,
I have a small doubt.
Consider these three statements:
A.
SELECT * FROM table1 WHERE {conditions based on primary key}.
  EXIT.
ENDSELECT.
B.
SELECT SINGLE * from table1 where {conditions based on primary key}.
C.
SELECT * FROM table1 UP TO 1 ROWS WHERE {conditions based on primary key}.
ENDSELECT.
Considering that table1 has more than one key field, which statement gives the best performance when:
1. All the key fields are used in the WHERE condition.
2. Only a few key fields are used in the WHERE condition.
Thanks in advance.
Hari
Hi,
If you are interested in only one record, then:
1. All the keys are used in the WHERE condition.
Ans: B
2. Only a few keys are used in the WHERE condition.
Ans: C
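For illustration, the two recommended forms side by side. This is a rough sketch: `wa_table1`, `key1`, `key2` and the `lv_*` variables are placeholder names, not from the original post.

```abap
* Case 1: the full primary key is known - SELECT SINGLE does a direct read.
SELECT SINGLE * FROM table1
  INTO wa_table1
  WHERE key1 = lv_key1
    AND key2 = lv_key2.

* Case 2: only part of the key is known - UP TO 1 ROWS stops after the first hit.
SELECT * FROM table1
  UP TO 1 ROWS
  INTO wa_table1
  WHERE key1 = lv_key1.
ENDSELECT.
```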
santhosh
Similar Messages
-
Performance issue - Select statement
Hi, I have 10 lakh (1,000,000) records in the KONP table. If I try to see all the records in SE11, it gives the short dump TSV_TNEW_PAGE_ALLOC_FAILED. I know this is because of insufficient memory on the server. Is there any other way to get the data? How do I optimise the SELECT statement below when the table holds that much data?
i_condn_data has 8 lakh (800,000) records.
SELECT knumh kznep valtg valdt zterm
FROM konp
INTO TABLE i_condn_data_b
FOR ALL ENTRIES IN i_condn_data
WHERE knumh = i_condn_data-knumh
AND kschl = p_kschl.
Please suggest.
Hi,
try using "UP TO n ROWS" to control the quantity of data selected in each loop step.
Something like this:
SORT i_condn_data BY knumh.
flag = 'X'.
WHILE flag = 'X'.
  SELECT knumh kznep valtg valdt zterm
    FROM konp
    INTO TABLE i_condn_data_b
    UP TO one_million ROWS
    WHERE knumh > new_value_for_selection
      AND kschl = p_kschl
    ORDER BY knumh.      " required so the > condition pages through the table correctly
  DESCRIBE TABLE i_condn_data_b LINES i.
  IF i = 0.
    CLEAR flag.          " nothing left to read
  ELSE.
    READ TABLE i_condn_data_b INDEX i.
    new_value_for_selection = i_condn_data_b-knumh.
*   ... your logic for table i_condn_data_b
    IF one_million > i.
      CLEAR flag.        " last (partial) package processed
    ENDIF.
  ENDIF.
ENDWHILE.
Regards -
How to improve Performance for Select statement
Hi Friends,
Can you please help me in improving the performance of the following query:
SELECT SINGLE MAX( policyterm ) startterm INTO (lv_term, lv_cal_date) FROM zu1cd_policyterm WHERE gpart = gv_part GROUP BY startterm.
Thanks and Regards,
Johny
Long lists cannot be produced with a SELECT SINGLE, and there is also nothing to group.
But I guess the SINGLE is a bug
SELECT MAX( policyterm ) startterm
INTO (lv_term, lv_cal_date)
FROM zu1cd_policyterm
WHERE gpart = gv_part
GROUP BY startterm.
How many records are in zu1cd_policyterm ?
Is there an index starting with gpart?
If first answer is 'large' and second 'no' => slow
What is the meaning of gpart? How many different values can it assume?
If many different values then an index makes sense, if you are allowed to create
an index.
Otherwise you must be patient.
Siegfried -
Hi,
I have recently been asked to write a program that reports on payroll postings to FI. This involves creating a giant select statement on the ppoix table to gather all the postings. My select statement is as follows:
SELECT pernr "EE Number
seqno "Sequential number
actsign "Indicator: Status of record
runid "Number of posting run
postnum "Number
tslin "Line number of data transfer
lgart "Wage Type
betrg "Amount
waers "Currency
anzhl "Number
meins "Base unit of measure
spprc "Special processing of posting items
momag "Transfer to FI/CO:EE grouping for acct determi
komok "Transfer to FI/CO: Symbolic account
mcode "Matchcode search term
koart "Account assignment type
auart "Expenditure type
nofin "Indicator: Expenditure type is not funded
INTO CORRESPONDING FIELDS OF TABLE i_ppoix
FROM ppoix
FOR ALL ENTRIES IN run_doc_xref
WHERE runid = run_doc_xref-runid
AND tslin = run_doc_xref-linum
AND spprc <> 'A'
AND lgart IN s_lgart
AND pernr in s_pernr.
where s_pernr is a select-option that holds personnel numbers and s_lgart is a select-option that holds wage types. This statement works fine for a certain number of personnel numbers and wage types, but once you exceed a certain limit the database no longer allows a select statement this large. Is there a better way to perform such a large select (e.g. a function module or some other method I am not aware of)? This select statement comes from the standard SAP-delivered cost center admin report, and that report dumps as well when too much data is passed to it.
any ideas would be much appreciated.
thanks.
The problem here is with the select-options.
For a select statement, you cannot pass more than a certain amount of data.
Your select becomes expensive because of the FOR ALL ENTRIES, the huge s_pernr, and the 40 million records :(.
I am guessing that the s_lgart will be small.
How many entries do you have in internal table "run_doc_xref"?
If there are not that many, then I would suggest this:
TYPES:
BEGIN OF ty_temp_ppoix,
pernr TYPE ppoix-pernr,
lgart TYPE ppoix-lgart,
seqno TYPE ppoix-seqno,
actsign TYPE ppoix-actsign,
runid TYPE ppoix-runid,
postnum TYPE ppoix-postnum,
tslin TYPE ppoix-tslin,
betrg TYPE ppoix-betrg,
spprc TYPE ppoix-spprc,
END OF ty_temp_ppoix.
DATA:
i_temp_ppoix TYPE SORTED TABLE OF ty_temp_ppoix
WITH NON-UNIQUE KEY pernr lgart
INITIAL SIZE 0
WITH HEADER LINE.
DATA:
v_pernr_lines TYPE sy-tabix,
v_lgart_lines TYPE sy-tabix.
IF NOT run_doc_xref[] IS INITIAL.
DESCRIBE TABLE s_pernr LINES v_pernr_lines.
DESCRIBE TABLE s_lgart LINES v_lgart_lines.
IF v_pernr_lines GT 800 OR
v_lgart_lines GT 800.
* There is an index on runid and tslin. This should be ok
* ( still bad because of the huge table :( )
SELECT pernr lgart seqno actsign runid postnum tslin betrg spprc
* Selecting into sorted TEMP table here
INTO TABLE i_temp_ppoix
FROM ppoix
FOR ALL ENTRIES IN run_doc_xref
WHERE runid = run_doc_xref-runid
AND tslin = run_doc_xref-linum
AND spprc <> 'A'.
* The sorted table should make the delete faster
* (OR, not AND: a row is dropped if it fails either selection)
DELETE i_temp_ppoix WHERE NOT pernr IN s_pernr
OR NOT lgart IN s_lgart.
* Now populate the actual target
LOOP AT i_temp_ppoix.
MOVE: i_temp_ppoix-pernr TO i_ppoix-pernr.
* and the rest of the fields
APPEND i_ppoix.
DELETE i_temp_ppoix.
ENDLOOP.
ELSE.
SELECT pernr seqno actsign runid postnum tslin lgart betrg spprc
* Selecting into your ACTUAL target here
INTO TABLE i_ppoix
FROM ppoix
FOR ALL ENTRIES IN run_doc_xref
WHERE runid = run_doc_xref-runid
AND tslin = run_doc_xref-linum
AND spprc <> 'A'
AND pernr IN s_pernr
AND lgart IN s_lgart.
ENDIF.
ELSE.
* Error message because of no entries in run_doc_xref?
* Please answer this so a new solution can be implemented here
* if it is NOT an error
ENDIF.
Hope this helps.
Regards,
-Ramesh -
Hey everyone,
First of all, yes, I have been looking through the 8.5 database schema guide. As I have been reviewing the schema, I have been developing some ideas as to how to collect the desired data. However, if anyone has already developed or found the SQL statements (which I'm sure someone already has), it would help me by minimizing bugs in my data collection program.
All of these statistics need to be grouped by CSQ and selected for a certain time range (<start time> and <stop time>), i.e. 1 hour increments. I have no problem getting a list of results and then performing calculations to get the desired end result. Also, if I need to perform multiple select statements to essentially join two tables, please include both statements. Finally, I saw the RtCSQsSummary table, but I have to collect data for the past, not at that given time.
1. Total calls presented per CSQ
2. Total calls answered per CSQ
3. Total calls abandoned per CSQ
4. Percentage of calls abandoned per CSQ (if this is not stored in the database, I'm thinking: <calls abandoned>/<calls presented>)
5. Average abandon time in seconds (if this is not stored in the db, I'm thinking: sum(<abandon time>)/<calls abandoned>)
6. Service Level - % calls answered in 90 seconds by a skill-set (I saw metServiceLevel in table ContactQueueDetail; however, I would have to find how to configure this threshold for the application)
7. Average speed of answer per CSQ
8. Average call talk time per CSQ
9. Aggregate logged in time of CSQ resources/agents
10. Aggregate ready time of CSQ resources/agents
I realize that some of these should be easy to find (as I am still digging through the db schema guide), but I was reading how a new record is created for every call leg so I could easily see how I could get inaccurate information without properly developed select statements.
Any help will be greatly appreciated.
Brendan
Hi,
kindly use the below link
http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/crs/express_8_5/user/guide/uccx85dbschema.pdf
it is the data base schema for UCCX 8.5.
if you want to connect to the DB, go to page 123; it shows you how to connect. It is for UCCX 9, not 8.5, but it is worth a try:
http://www.cisco.com/en/US/docs/voice_ip_comm/cust_contact/contact_center/crs/express_9_02/programming/guide/UCCX_BK_CFD16E30_00_cisco-unified-contact-center-express.pdf
HTH
Anas
please rate all helpful posts -
Increase performance of the following SELECT statement.
Hi All,
I have the following select statement which I would want to fine tune.
CHECK NOT LT_MARC IS INITIAL.
SELECT RSNUM
RSPOS
RSART
MATNR
WERKS
BDTER
BDMNG FROM RESB
INTO TABLE GT_RESB
FOR ALL ENTRIES IN LT_MARC
WHERE XLOEK EQ ' '
AND MATNR EQ LT_MARC-MATNR
AND WERKS EQ P_WERKS
AND BDTER IN S_PERIOD.
The above query is run for a period of 1 year, where the number of records returned is approx 3 million. When the program is run in background, the execution time is around 76 hours. When I run the same program dividing the selection period into smaller parts, I am able to execute it in about an hour.
After a previous posting I had changed the select statement to
CHECK NOT LT_MARC IS INITIAL.
SELECT RSNUM
RSPOS
RSART
MATNR
WERKS
BDTER
BDMNG FROM RESB
APPENDING TABLE GT_RESB PACKAGE SIZE LV_SIZE
FOR ALL ENTRIES IN LT_MARC
WHERE XLOEK EQ ' '
AND MATNR EQ LT_MARC-MATNR
AND WERKS EQ P_WERKS
AND BDTER IN S_PERIOD.
ENDSELECT.
But the performance improvement is very negligible.
Please suggest.
Regards,
Karthik
Hi Karthik,
<b>Do not use the appending statement</b>
Also you said if you reduce period then you get it quickly.
Why not try dividing your internal table LT_MARC into small internal tables of at most 1000 entries each?
You can read entries 1-1000 into the first table, use that in the select query, and append the results.
Then refresh that table, read entries 1001-2000 of LT_MARC into the same table, and execute the same query again.
I know this sounds strange, but in this case you can trade a higher number of database hits for better overall performance.
Try this and let me know.
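A rough sketch of that idea, reusing the field list from the original select; `lt_marc_part`, `lv_from` and `lv_to` are names invented here for illustration:

```abap
DATA: lt_marc_part LIKE lt_marc OCCURS 0 WITH HEADER LINE,
      lv_from TYPE sy-tabix VALUE 1,
      lv_to   TYPE sy-tabix.

DO.
  lv_to = lv_from + 999.
  REFRESH lt_marc_part.
* Copy the next package of up to 1000 entries.
  LOOP AT lt_marc INTO lt_marc_part FROM lv_from TO lv_to.
    APPEND lt_marc_part.
  ENDLOOP.
  IF lt_marc_part[] IS INITIAL.
    EXIT.                        " all packages processed
  ENDIF.
  SELECT rsnum rspos rsart matnr werks bdter bdmng
    FROM resb
    APPENDING TABLE gt_resb
    FOR ALL ENTRIES IN lt_marc_part
    WHERE xloek EQ ' '
      AND matnr EQ lt_marc_part-matnr
      AND werks EQ p_werks
      AND bdter IN s_period.
  lv_from = lv_to + 1.
ENDDO.
```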
Regards
Nishant
-
Performance Tuning -To find the execution time for Select Statement
Hi,
There is a program that takes 10 hrs to execute. I need tune its performance. The program is basically reading few tables like KNA1,ANLA,ANLU,ADRC etc and updates to Custom table. I did my analysis and found few performance techniques for ABAP coding.
Now my problem is, to get this object approved I need to submit the execution statistics to client.I checked both ST05 and SE30. I heard of a Tcode where we can execute a select statement and note its time, then modify and find its improved Performance. Can anybody suggest me on this.
Thanks,
Rajani.
Hi,
This is documentation regarding performance analysis. Hope this will be useful
It is a general practice to use Select * from <database>. This statement populates all the fields of the structure from the database.
The effect is manifold:
It increases the time to retrieve data from the database
There is a large amount of unused data in memory
It increases the processing time for the work area or internal tables
It is always a good practice to retrieve only the required fields. Always use the syntax Select f1 f2 ... fn from <database>
e.g. Do not use the following statement:-
Data: i_mara like mara occurs 0 with header line.
Data: i_marc like marc occurs 0 with header line.
Select * from mara
Into table i_mara
Where matnr in s_matnr.
Select * from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr.
Instead use the following statement:-
Data: begin of i_mara occurs 0,
Matnr like mara-matnr,
End of i_mara.
Data: begin of i_marc occurs 0,
Matnr like marc-matnr,
Werks like marc-werks,
End of i_marc.
Select matnr from mara
Into table i_mara
Where matnr in s_matnr.
Select matnr werks from marc
Into table i_marc
For all entries in i_mara
Where matnr eq i_mara-matnr. -
"Check Statistics" in the Performance tab. How to see SELECT statement?
Hi,
In a previous mail on SDN, it was explained (see below) that the "Check Statistics" in the Performance tab, under Manage in the context of a cube, executes the SELECT statement below.
Would you happen to know how to see the SELECT statements that the "Check Statistics" command executes as mentioned in the posting below?
Thanks
====================================
When you hit the Check Statistics tab, it isn't just the fact tables that are checked, but also all master data tables for all the InfoObjects (characteristics) that are in the cubes dimensions.
Checking nbr of rows inserted, last analyzed dates, etc.
SELECT
T.TABLE_NAME, M.PARTITION_NAME, TO_CHAR (T.LAST_ANALYZED, 'YYYYMMDDHH24MISS'), T.NUM_ROWS,
M.INSERTS, M.UPDATES, M.DELETES, M.TRUNCATED
FROM
USER_TABLES T LEFT OUTER JOIN USER_TAB_MODIFICATIONS M ON T.TABLE_NAME = M.TABLE_NAME
WHERE
T.TABLE_NAME = '/BI0/PWBS_ELEMT' AND M.PARTITION_NAME IS NULL
When you refresh the stats, all the tables that need stats refreshed are refreshed again. Since InfoCube queries access the various master data tables, it makes sense that SAP would check their status.
In looking at some of the results in 7.0, I'm not sure that the 30 day check is being done as it was in 3.5. This is one area SAP retooled quite a bit.
Yellow only indicates that there could be a problem. You could have stale DB stats on a table, but if they don't cause the DB optimizer to choose a poor execution plan, then it has no impact.
Good DB stats are vital to query performance, and old stats could be responsible for poor performance. I'm just saying that the Statistics check yellow light status is not a definitive indicator.
If your DBA has BRCONNECT running daily, you really should not have to worry about stats collection on the BW side except in cases immediately after large loads/deletes, when the nightly BRCONNECT hasn't run yet.
BRCONNECT should produce a log every time it runs showing all the tables that it determined should have stats refreshed. That might be worth a review. It should be running daily. If it is not being run, then you need to look at running stats collection from the BW side, either in Process Chains or via InfoCube automatisms.
Best bet is to use ST04 to get Explain Plans of a poorly running InfoCube query; it can then be reviewed to see where the time is being spent and whether stats are a culprit.
Hi,
Thanks, this is what I came up with:
st05,
check SQL Trace, Activate Trace
Now, in Rsa1
on Cube, Cube1,
Manage, Performance tab, Check Statistics
Again, back to st05
Deactivate Trace
then click on Display Trace
Now, in the trace display, after scanning through the output,
how do I see the SELECT statements that the "Check Statistics" command executes
I will appreciate your help. -
Performance problem(ANEA/ANEP table) in Select statement
Hi
I am using the select statements below to fetch data.
Do the WHERE conditions below have performance issues?
Can you please suggest?
1) In the select on the ANEP table, I am not using all the key fields in the WHERE condition. Will that cause a performance problem?
2) Does the order of the WHERE conditions have to match the field order in the table? Will changing the order of any field affect performance?
SELECT bukrs
anln1
anln2
afabe
gjahr
peraf
lnran
bzdat
bwasl
belnr
buzei
anbtr
lnsan
FROM anep
INTO TABLE o_anep
FOR ALL ENTRIES IN i_anla
WHERE bukrs = i_anla-bukrs
AND anln1 = i_anla-anln1
AND anln2 = i_anla-anln2
AND afabe IN s_afabe
AND bzdat <= p_date
AND bwasl IN s_bwasl.
SELECT bukrs
anln1
anln2
gjahr
lnran
afabe
aufwv
nafal
safal
aafal
erlbt
aufwl
nafav
aafav
invzv
invzl
FROM anea
INTO TABLE o_anea
FOR ALL ENTRIES IN o_anep
WHERE bukrs = o_anep-bukrs
AND anln1 = o_anep-anln1
AND anln2 = o_anep-anln2
AND gjahr = o_anep-gjahr
AND lnran = o_anep-lnran
AND afabe = o_anep-afabe.
Moderator message: Please Read before Posting in the Performance and Tuning Forum
Edited by: Thomas Zloch on Aug 9, 2011 9:37 AM
1. Yes. If you have only a few of the primary key fields in your WHERE condition, that does affect performance. But sometimes the requirement itself is that way: we may not know all the primary key values to supply in the WHERE condition. If you do know the values, then provide them without fail.
2. Yes. It's better to always follow the table's field sequence in the WHERE condition, and even in the list of fields being fetched.
One important point: whenever you use FOR ALL ENTRIES IN, please make sure that the itab IS NOT INITIAL, i.e. that the itab has been filled. So place the same condition before both SELECT queries, like:
IF i_anla[] IS NOT INITIAL.
SELECT bukrs
anln1
anln2
afabe
gjahr
peraf
lnran
bzdat
bwasl
belnr
buzei
anbtr
lnsan
FROM anep
INTO TABLE o_anep
FOR ALL ENTRIES IN i_anla
WHERE bukrs = i_anla-bukrs
AND anln1 = i_anla-anln1
AND anln2 = i_anla-anln2
AND afabe IN s_afabe
AND bzdat <= p_date
AND bwasl IN s_bwasl.
ENDIF.
IF o_anep[] IS NOT INITIAL.
SELECT bukrs
anln1
anln2
gjahr
lnran
afabe
aufwv
nafal
safal
aafal
erlbt
aufwl
nafav
aafav
invzv
invzl
FROM anea
INTO TABLE o_anea
FOR ALL ENTRIES IN o_anep
WHERE bukrs = o_anep-bukrs
AND anln1 = o_anep-anln1
AND anln2 = o_anep-anln2
AND gjahr = o_anep-gjahr
AND lnran = o_anep-lnran
AND afabe = o_anep-afabe.
ENDIF. -
How to find which select statement performs better
hi gurus
can anyone suggest me
if we have 2 select statements,
how do we find which select statement performs better?
thanks &amp; regards
kals.
Hi, check this:
1. The select statement in which the primary and secondary keys are used gives good performance.
2. A select with UP TO 1 ROWS is better than a SELECT SINGLE.
Go to ST05 and check the performance.
regards,
venkat -
In how many ways can we filter this select statement to improve performance
Hi Experts,
This select statement takes 2.5 hrs in production. Can we adjust the where condition to improve the performance? Please suggest, with coding if possible.
select * from dfkkop into table t_dfkkop
where vtref like 'EPC%' and
( ( augbd = '00000000' and
xragl = 'X' )
or
( augbd between w_clrfr and w_clrto ) ) and
augrd ne '03' and
zwage_type in s_wtype .
Regards,
Sam.
If it really takes 2.5 hours, try the following: run an SQL trace, and split the OR into two selects:
select *
into table t_dfkkop
from dfkkop
where vtref like 'EPC%'
and augbd = '00000000' and xragl = 'X'
and augrd ne '03'
and zwage_type in s_wtype .
select *
appending table t_dfkkop
from dfkkop
where vtref like 'EPC%'
and augbd between w_clrfr and w_clrto
and augrd ne '03'
and zwage_type in s_wtype .
Do a DESCRIBE TABLE after the first SELECT and after the second,
or run an SQL Trace.
What is time needed for both parts, how many records come back, which index is used.
Siegfried -
Select statement performance improvement.
Hi Gurus,
I am new to ABAP.
I have the below select stement
SELECT mandt msgguid pid exetimest
INTO TABLE lt_key
UP TO lv_del_rows ROWS
FROM (gv_master)
WHERE
* msgstate IN rt_msgstate
* AND ( adapt_stat = cl_xms_persist=>co_stat_adap_processed
* OR adapt_stat = cl_xms_persist=>co_stat_adap_undefined )
* AND itfaction = ls_itfaction
* AND msgtype = cl_xms_persist=>co_async
* AND
exetimest LE lv_timestamp
AND exetimest GE last_ts
AND reorg = cl_xms_persist=>co_reorg_ini
ORDER BY mandt itfaction reorg exetimest.
Can anyone help me how i can improve the performance of this statement?
Here is the sql trace for the statement:
SELECT
/*+
FIRST_ROWS (100)
"MANDT" , "MSGGUID" , "PID" , "EXETIMEST"
FROM
"SXMSPMAST"
WHERE
"MANDT" = :A0 AND "EXETIMEST" <= :A1 AND "EXETIMEST" >= :A2 AND "REORG" = :A3
ORDER BY
"MANDT" , "ITFACTION" , "REORG" , "EXETIMEST"
Execution Plan
SELECT STATEMENT ( Estimated Costs = 3 , Estimated #Rows = 544 )
4 SORT ORDER BY
( Estim. Costs = 2 , Estim. #Rows = 544 )
Estim. CPU-Costs = 15.671.852 Estim. IO-Costs = 1
3 FILTER
2 TABLE ACCESS BY INDEX ROWID SXMSPMAST
( Estim. Costs = 1 , Estim. #Rows = 544 )
Estim. CPU-Costs = 11.130 Estim. IO-Costs = 1
1 INDEX RANGE SCAN SXMSPMAST~TST
Search Columns: 2
Estim. CPU-Costs = 3.329 Estim. IO-Costs = 0
Do I need to create any new index ? Do i need to remove the Order By clause?
Thanks in advance.
Why is there an
UP TO lv_del_rows ROWS
together with an ORDER BY?
The database has to find all rows fulfilling the condition, sort them, and then return only the first lv_del_rows rows.
Therefore it can take a while.
For your index, always put the client field in first position.
actually I am not really convinced by your logic:
itfaction reorg exetimest.
itfaction is the first in the sort order (after mandt), so all records with the smallest itfaction come first; but itfaction is not specified in the WHERE clause - is this really what you want?
Change the index to mandt reorg exetimest
and change the ORDER BY to mandt reorg exetimest
then it will become fast.
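Put together, a sketch of the reworked statement following the advice above; it assumes the secondary index on MANDT, REORG, EXETIMEST actually exists:

```abap
SELECT mandt msgguid pid exetimest
  INTO TABLE lt_key
  UP TO lv_del_rows ROWS
  FROM (gv_master)
  WHERE exetimest LE lv_timestamp
    AND exetimest GE last_ts
    AND reorg     EQ cl_xms_persist=>co_reorg_ini
  ORDER BY mandt reorg exetimest.  " matches the proposed index order
```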
-
Performance in the select statement
Hi All,
(select a.siteid siteid,a.bpaadd_0 bpaadd_0,a.bpanum_0 bpanum_0,
case when a.bpaaddlig_0 = '' then '-' else a.bpaaddlig_0 end
address1,
case when a.bpaaddlig_1 = '' then '-' else a.bpaaddlig_1 end
address2,
case when a.bpaaddlig_2 = '' then '-' else a.bpaaddlig_2 end
address3,
case when a.bpades_0 = '' then '-' else a.bpades_0 end place,
case when a.cty_0 = '' then '-' else a.cty_0 end city,
case when a.poscod_0 = '' then '-' else a.poscod_0 end
pincode,
case when b.cntnam_0 = '' then '-' else b.cntnam_0 end
contactname,
case when b.fax_0 = '' then '-' else b.fax_0 end fax,
case when b.MOBTEL_0 = '' then '-' else b.MOBTEL_0 end mobile,
case when b.TEL_0 = '' then '-' else b.TEL_0 end phone,
case when b.web_0 = '' then '-' else b.web_0 end website,
c.zinvcty_0 zcity,c.bpainv_0 bpainv_0,c.bpcnum_0 bpcnum_0
from lbcreport.bpaddress@info a,lbcreport.contact@info b
,lbcreport.bpcustomer@info c
where (a.bpanum_0=b.bpanum_0) and (a.cty_0 = c.zinvcty_0) and
(a.siteid = c.siteid))
In the above query, is there any performance degradation? Could I proceed with the same query, or is there another way to increase the speed of the query?
Also, are this many CASE expressions allowed in one select statement?
Please could anybody help me in this?
Thanks in advance
bye
Srikavi
Change your query as follows...
(select
a.siteid siteid,
a.bpaadd_0 bpaadd_0,
a.bpanum_0 bpanum_0,
nvl(a.bpaaddlig_0, '-') address1,
nvl(a.bpaaddlig_1,'-' ) address2,
nvl(a.bpaaddlig_2,'-' ) address3,
nvl(a.bpades_0,'-' ) place,
nvl(a.cty_0,'-' ) city,
nvl(a.poscod_0,'-' ) pincode,
nvl(b.cntnam_0,'-' ) contactname,
nvl(b.fax_0,'-' ) fax,
nvl(b.MOBTEL_0,'-' ) mobile,
nvl(b.TEL_0,'-' ) phone,
nvl(b.web_0,'-' ) website,
c.zinvcty_0 zcity,c.bpainv_0 bpainv_0,c.bpcnum_0 bpcnum_0
from
lbcreport.bpaddress@info a,
lbcreport.contact@info b,
lbcreport.bpcustomer@info c
where
(a.bpanum_0=b.bpanum_0) and
(a.cty_0 = c.zinvcty_0) and
(a.siteid = c.siteid))
/
For performance, check the execution plan of the query; also see BluShadow's post
Regards
Singh -
Provide alternative select statements to improve performance
Hi Frnds
I want to improve the performance of my report. The statement below is taking too long. Please suggest an alternative to the select statement below.
SELECT H~CLMNO H~PNGUID P~PVGUID V~PNGUID AS V_PNGUID
V~AKTIV V~KNUMV P~POSNR V~KATEG AS ALTNUM V~VERSN
APPENDING CORRESPONDING FIELDS OF TABLE EX_WTYKEY_T
FROM ( ( PNWTYH AS H
INNER JOIN PNWTYV AS V ON V~HEADER_GUID = H~PNGUID )
INNER JOIN PVWTY AS P ON P~VERSION_GUID = V~PNGUID )
FOR ALL ENTRIES IN lt_claim
WHERE H~PNGUID = lt_claim-pnguid
AND V~AKTIV IN rt_AKTIV
AND V~KATEG IN IM_ALTNUM_T.
Thanks
Amminesh.
Moderator message - Moved to the correct forum
Edited by: Rob Burbank on May 14, 2009 11:00 AM
Hi,
Copy the internal table lt_claim contents to another temp internal table.
lt_claim_temp[] = lt_claim[].
sort lt_claim_temp by pnguid.
delete adjacent duplicates from lt_claim_temp comparing pnguid.
SELECT H~CLMNO H~PNGUID P~PVGUID V~PNGUID AS V_PNGUID
V~AKTIV V~KNUMV P~POSNR V~KATEG AS ALTNUM V~VERSN
APPENDING CORRESPONDING FIELDS OF TABLE EX_WTYKEY_T
FROM ( ( PNWTYH AS H
INNER JOIN PNWTYV AS V ON V~HEADER_GUID = H~PNGUID )
INNER JOIN PVWTY AS P ON P~VERSION_GUID = V~PNGUID )
FOR ALL ENTRIES IN lt_claim_temp
WHERE H~PNGUID = lt_claim_temp-pnguid
AND V~AKTIV IN rt_AKTIV
AND V~KATEG IN IM_ALTNUM_T.
refresh lt_claim_temp. -
Performance Tuning 'Runtime Error' on Select statement
Hi Experts,
Good Day!
I would like to ask for some help regarding a custom program that hits a 'Runtime Error' in the code below: how can we do performance tuning, especially on number 1?
1.
SELECT A~VBELN A~ERDAT A~AUART A~VKORG A~VTWEG A~SPART A~VDATU
A~KUNNR B~POSNR B~MATNR B~ARKTX B~ABGRU B~KWMENG B~VRKME
B~WERKS B~VSTEL B~ROUTE
FROM VBAK AS A INNER JOIN VBAP AS B ON A~VBELN EQ B~VBELN
INNER JOIN VBEP AS C ON A~VBELN EQ C~VBELN
AND B~POSNR EQ C~POSNR
INTO CORRESPONDING FIELDS OF TABLE I_DATA_TAB
WHERE A~VBELN IN S_VBELN
AND A~VKORG IN S_VKORG
AND A~AUART IN S_AUART
AND A~VTWEG IN S_VTWEG
AND A~SPART IN S_SPART
AND A~VDATU IN S_VDATU
AND A~KUNNR IN S_KUNNRD
AND B~MATNR IN S_MATNR
AND B~KWMENG IN S_KWMENG
AND B~VRKME IN S_VRKME
AND B~WERKS IN S_WERKS
AND C~EDATU IN S_VDATU.
2.
SELECT VBELN FROM LIKP INTO LIKP-VBELN
WHERE LFDAT IN S_VDATU
AND VKORG IN S_VKORG
AND LFART EQ 'YSTD'
AND KUNNR IN S_KUNNRP
AND KUNAG IN S_KUNNRD.
SELECT VBELN POSNR LFIMG MATNR WERKS
FROM LIPS INTO (LIPS-VBELN, LIPS-POSNR, DISPLAY_TAB-DEL_QTY,
LIPS-MATNR, LIPS-WERKS)
WHERE VBELN EQ LIKP-VBELN
AND MATNR IN S_MATNR
AND VTWEG IN S_VTWEG
AND SPART IN S_SPART
AND WERKS IN S_WERKS.
ENDSELECT.
ENDSELECT.
4.
SELECT DELIVERY POSNR MATNR PODLFIMG FROM T9YPODI INTO
(T9YPODI-DELIVERY, T9YPODI-POSNR, T9YPODI-MATNR, T9YPODI-PODLFIMG)
WHERE MATNR IN S_MATNR
AND PODDATE IN S_VDATU.
Answer's will be a great help.
~Thank You,
Lourd
Edited by: Lourd06 on Oct 23, 2009 10:32 AM
Moderator message - Welcome to SCN.
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting. You're in the driver's seat here. It's up to you to do some analysis before expecting that people can halp you. - post locked
And please use code tags.
Edited by: Rob Burbank on Oct 23, 2009 9:13 AM
Hi All,
We've checked transaction ST22: it is a TIME_OUT. I really need your help on this; the program dumps in the number 1 SELECT statement above. Can you help me with performance tuning?
In transaction ST22
Runtime Errors TIME_OUT
Date and Time 21.10.2009 08:51:33
Short text
Time limit exceeded.
What happened?
The program "ZV0PSR10" has exceeded the maximum permitted runtime without
interruption and has therefore been terminated.
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact your SAP system
administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look
at and manage termination messages, and you can also
keep them for a long time.
Error analysis
After a specific time, the program is terminated to make the work area
available to other users who may be waiting.
This is to prevent a work area being blocked unnecessarily long by, for
example:
- Endless loops (DO, WHILE, ...),
- Database accesses with a large result set
- Database accesses without a suitable index (full table scan)
The maximum runtime of a program is limited by the system profile
parameter "rdisp/max_wprun_time". The current setting is 1200 seconds. If this
time limit is
exceeded, the system attempts to cancel any running SQL statement or
signals the ABAP processor to stop the running program. Then the system
waits another 60 seconds maximum. If the program is then still active,
the work process is restarted.
~Thank you
Lourd
Edited by: Lourd06 on Oct 23, 2009 11:22 AM
Edited by: Lourd06 on Oct 23, 2009 11:33 AM