Performance problem in SELECT statement
Hi all,
How can I improve the performance of the query below?
The query is:
data : begin of tzdate OCCURS 0,
zdate like sy-datum,
end of tzdate.
data: p_adrnr like lfa1-adrnr.
SELECT single adrnr into p_adrnr
FROM lfa1
WHERE lifnr = ilifnr-lifnr
AND land1 IN s_land1.
CONCATENATE '%' p_adrnr '%' INTO email_objectid.
SELECT udate INTO table tzdate
FROM cdhdr
WHERE objectclas = 'ADRESSE'
AND objectid LIKE email_objectid
AND tcode IN r_tcode.
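A possible direction (a sketch only, assuming the address number is the leading part of OBJECTID in the change documents; verify with an SQL trace): the leading '%' in the LIKE pattern prevents the database from using the CDHDR primary key (OBJECTCLAS, OBJECTID), while a right-truncated pattern keeps the index usable.

```abap
* Hedged sketch, not the original poster's code: drop the leading
* wildcard so the (OBJECTCLAS, OBJECTID) key of CDHDR can be used.
* Assumes ADRNR is the leading part of OBJECTID - verify with ST05.
CONCATENATE p_adrnr '%' INTO email_objectid.

SELECT udate INTO TABLE tzdate
  FROM cdhdr
  WHERE objectclas = 'ADRESSE'
    AND objectid   LIKE email_objectid   " right-truncated pattern
    AND tcode      IN r_tcode.
```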
Regards,
-D.
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting - post locked
Edited by: Rob Burbank on Sep 16, 2009 10:14 AM
Similar Messages
-
Problem with Select Statements
Hi All,
I have a performance problem in my report because of the following statements.
How can I modify the select statements to improve the performance of the report?
DATA : shkzg1h LIKE bsad-shkzg,
shkzg1s LIKE bsad-shkzg,
shkzg2h LIKE bsad-shkzg,
shkzg2s LIKE bsad-shkzg,
shkzg1hu LIKE bsad-shkzg,
shkzg1su LIKE bsad-shkzg,
shkzg2hu LIKE bsad-shkzg,
shkzg2su LIKE bsad-shkzg,
kopbal1s LIKE bsad-dmbtr,
kopbal2s LIKE bsad-dmbtr,
kopbal1h LIKE bsad-dmbtr,
kopbal2h LIKE bsad-dmbtr,
kopbal1su LIKE bsad-dmbtr,
kopbal2su LIKE bsad-dmbtr,
kopbal1hu LIKE bsad-dmbtr,
kopbal2hu LIKE bsad-dmbtr.
*These statements are in LOOP.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1s , kopbal1s)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1su , kopbal1su)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1h , kopbal1h)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg1hu , kopbal1hu)
FROM bsid
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2s , kopbal2s)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2su , kopbal2su)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'S'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2h , kopbal2h)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz EQ ''
GROUP BY shkzg.
ENDSELECT.
SELECT shkzg SUM( dmbtr )
INTO (shkzg2hu , kopbal2hu)
FROM bsad
WHERE bukrs = ibukrs
AND kunnr = ktab-kunnr
AND budat < idate-low
AND shkzg = 'H'
AND umskz IN zspgl
GROUP BY shkzg.
ENDSELECT.
Siegfried Boes wrote:
> Please stop writing answers if you understand nothing about database SELECTs!
> All above recommendations are pure nonsense!
>
> As always with such questions, you must do an analysis before you ask! The coding itself is perfectly o.k., a SELECT with an aggregate and a GROUP BY can not be changed into a SELECT SINGLE or whatever.
>
> But your SELECTs must be supported by indexes!
>
> Please run SQL Trace, and tell us the results:
>
> I see 8 statements, what is the duration and the number of records coming back for each statement?
> Maybe only one statement is slow.
>
> See
> SQL trace:
> /people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy
>
>
> Siegfried
Nice point there, Siegfried. Instead of giving constructive suggestions, people here gave the very bad advice of using SELECT SINGLE combined with SUM and GROUP BY.
I hope the poster reads your reply before trying SELECT SINGLE and wondering why he gets an error.
Anyway, the most important thing is how many loop iterations are expected around those select statements.
If you have thousands of iterations, you can expect poor performance.
So when you run the SQL trace, also look at how many times each select statement is called, not only at the duration of each individual execution.
Regards,
Abraham -
Performance problem with selecting records from BSEG and KONV
Hi,
I am having a performance problem while selecting records from the BSEG and KONV tables. As these two tables contain a large amount of data, the selects are taking a lot of time. Can anyone help me improve the performance? Thanks in advance.
Regards,
Prashant
Hi,
Some steps to improve performance:
1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
3. Design your Query to Use as much index fields as possible from left to right in your WHERE statement
4. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
5. Avoid using nested SELECT statement SELECT within LOOPs.
6. Avoid using INTO CORRESPONDING FIELDS OF TABLE. Instead use INTO TABLE.
7. Avoid using SELECT * and Select only the required fields from the table.
8. Avoid nested loops when working with large internal tables.
9. Use ASSIGNING (field symbols) instead of INTO in LOOPs for table types with large work areas.
10. When in doubt call transaction SE30 and use the examples and check your code
11. Whenever using READ TABLE, use the BINARY SEARCH addition to speed up the search. Be sure to sort the internal table before a binary search. This is a general rule of thumb, but if you are sure the internal table has fewer than about 200 entries, you need not SORT and use BINARY SEARCH, since the overhead outweighs the gain.
12. Use "CHECK" instead of IF/ENDIF whenever possible.
13. Use "CASE" instead of IF/ENDIF whenever possible.
14. Use "MOVE" with individual variable/field moves instead of "MOVE-CORRESPONDING"; this creates more coding but is more efficient. -
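A small sketch pulling tips 1, 2, 4 and 7 together; the tables (EKPO, LIPS), the field lists and the plant value are illustrative assumptions, not taken from the poster's report:

```abap
TYPES: BEGIN OF ty_po,
         ebeln TYPE ekpo-ebeln,
         ebelp TYPE ekpo-ebelp,
       END OF ty_po,
       BEGIN OF ty_del,
         vbeln TYPE lips-vbeln,
         posnr TYPE lips-posnr,
         vgbel TYPE lips-vgbel,
         vgpos TYPE lips-vgpos,
       END OF ty_del.
DATA: lt_po  TYPE STANDARD TABLE OF ty_po,
      lt_del TYPE STANDARD TABLE OF ty_del.

* Tips 1, 2, 7: one array fetch of only the needed fields,
* restricted by a WHERE clause (plant '1000' is a placeholder).
SELECT ebeln ebelp
  INTO TABLE lt_po
  FROM ekpo
  WHERE werks = '1000'.

* Tip 4: FOR ALL ENTRIES fetches the matching rows in one shot.
* The initial-check is essential: with an empty driver table the
* FOR ALL ENTRIES condition is dropped and ALL rows are selected.
IF lt_po IS NOT INITIAL.
  SELECT vbeln posnr vgbel vgpos
    INTO TABLE lt_del
    FROM lips
    FOR ALL ENTRIES IN lt_po
    WHERE vgbel = lt_po-ebeln
      AND vgpos = lt_po-ebelp.
ENDIF.
```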
Performance problem while selecting (extracting) the data
I have one intermediate table.
I am inserting rows that are derived from a select statement.
The select statement has a where clause which joins a view (created from 5 tables).
The problem is that the select statement fetching the data is taking too much time.
I have identified the problems as follows:
1) The view used in the select statement is not indexed: is an index necessary on a view?
2) The tables used to create the view are already properly indexed.
3) Extracting the data is what takes the most time.
The query below extracts the data and inserts it into the intermediate table:
SELECT 1414 report_time,
2 dt_q,
1 hirearchy_no_q,
p.unique_security_c,
p.source_code_c,
p.customer_specific_security_c user_security_c,
p.par_value par_value, exchange_code_c,
(CASE WHEN p.ASK_PRICE_L IS NOT NULL THEN 1
WHEN p.BID_PRICE_L IS NOT NULL THEN 1
WHEN p.STRIKE_PRICE_L IS NOT NULL THEN 1
WHEN p.VALUATION_PRICE_L IS NOT NULL THEN 1 ELSE 0 END) bill_status,
p.CLASS_C AS CLASS,
p.SUBCLASS_C AS SUBCLASS,
p.AGENT_ADDRESS_LINE1_T AS AGENTADDRESSLINE1,
p.AGENT_ADDRESS_LINE2_T AS AGENTADDRESSLINE2,
p.AGENT_CODE1_T AS AGENTCODE1,
p.AGENT_CODE2_T AS AGENTCODE2,
p.AGENT_NAME_LINE1_T AS AGENTNAMELINE1,
p.AGENT_NAME_LINE2_T AS AGENTNAMELINE2,
p.ASK_PRICE_L AS ASKPRICE,
p.ASK_PRICE_DATE_D AS ASKPRICEDATE,
p.ASSET_CLASS_T AS ASSETCLASS
FROM (SELECT
DISTINCT x.*,m.customer_specific_security_c,m.par_value
FROM
HOLDING_M m JOIN ED_DVTKQS_V x ON
m.unique_security_c = x.unique_security_c AND
m.customer_c = 'CONF100005' AND
m.portfolio_c = 24 AND
m.status_c = 1
WHERE exists
(SELECT 1 FROM ED_DVTKQS_V y
WHERE x.unique_security_c = y.unique_security_c
GROUP BY y.unique_security_c
HAVING MAX(y.trading_volume_l) = x.trading_volume_l)) p
Can anyone please give me valuable suggestions on the performance?
Thanks for the update.
In the select query we used some functions like MAX:
(SELECT 1 FROM ED_DVTKQS_V y
WHERE x.unique_security_c = y.unique_security_c
GROUP BY y.unique_security_c
HAVING MAX(y.trading_volume_l) = x.trading_volume_l)) p
Will these types of functions cause a performance problem? -
Problem with SELECT statement. What is wrong with it?
Why is this query....
<cfquery datasource="manna_premier" name="kit_report">
SELECT Orders.ID,
SaleDate,
Orders.UserID,
Distributor,
DealerID,
Variable,
TerritoryManager,
US_Dealers.ID,
DealerName,
DealerAddress,
DealerCity,
DealerState,
DealerZIPCode,
(SELECT SUM(Quantity)
FROM ProductOrders PO
WHERE PO.OrderID = Orders.ID) as totalProducts,
FROM Orders, US_Dealers
WHERE US_Dealers.ID = DealerID AND SaleDate BETWEEN #CreateODBCDate(FORM.Start)# AND #CreateODBCDate(FORM.End)# AND Variable = '#Variable#'
</cfquery>
giving me this error message...
Error Executing Database Query.
[Macromedia][SequeLink JDBC Driver][ODBC Socket][Microsoft][ODBC Microsoft Access Driver] The SELECT statement includes a reserved word or an argument name that is misspelled or missing, or the punctuation is incorrect.
The error occurred in D:\Inetpub\mannapremier\kit_report2.cfm: line 20
18 : WHERE PO.OrderID = Orders.ID) as totalProducts,
19 : FROM Orders, US_Dealers
20 : WHERE US_Dealers.ID = DealerID AND SaleDate BETWEEN #CreateODBCDate(FORM.Start)# AND #CreateODBCDate(FORM.End)# AND Variable = '#Variable#'
21 : </cfquery>
22 :
SQLSTATE
42000
SQL
SELECT Orders.ID, SaleDate, Orders.UserID, Distributor, DealerID, Variable, TerritoryManager, US_Dealers.ID, DealerName, DealerAddress, DealerCity, DealerState, DealerZIPCode, (SELECT SUM(Quantity) FROM ProductOrders PO WHERE PO.OrderID = Orders.ID) as totalProducts, FROM Orders, US_Dealers WHERE US_Dealers.ID = DealerID AND SaleDate BETWEEN {d '2009-10-01'} AND {d '2009-10-31'} AND Variable = 'Chick Days pre-book'
VENDORERRORCODE
-3504
DATASOURCE
manna_premier
Resources:
I copied it from a different template where it works without error...
<cfquery name="qZVPData" datasource="manna_premier">
SELECT UserID,
TMName,
UserZone,
(SELECT COUNT(*)
FROM Sales_Calls
WHERE Sales_Calls.UserID = u.UserID) as totalCalls,
(SELECT COUNT(*)
FROM Orders
WHERE Orders.UserID = u.UserID) as totalOrders,
(SELECT SUM(Quantity)
FROM ProductOrders PO
WHERE PO.UserID = u.UserID AND PO.NewExisting = 1) as newItems,
(SELECT SUM(NewExisting)
FROM ProductOrders PO_
WHERE PO_.UserID = u.UserID) as totalNew,
SUM(totalOrders)/(totalCalls) AS closePerc
FROM Users u
WHERE UserZone = 'Central'
GROUP BY UserZone, UserID, TMName
</cfquery>
What is the problem?
It's hard to say: what's your request timeout set to?
700-odd records is not much of a fetch for a decent DB, and I would not expect that to cause the problem. But then you're using Access, which doesn't fit the description of "decent DB" (or "fit for purpose" or "intended for purpose"), so I guess all bets are off on that one. If this query is slow when ONE request is asking for it, what is going to happen when it goes live and multiple requests are asking for it, along with all the other queries your site will want to run? Access is not designed for this. It will really struggle, and cause your site to run like a dog. One that died several weeks ago.
What else is on the template? I presume you're doing something with the query once you fetch it, so could it be that code that's running slowly? Have you taken any steps to isolate which part of the code is taking so long?
How does the query perform if you take the subquery out of the select line? Is there any other way of getting that data? That subquery will run once for every row of the result set... not very nice.
Adam -
Performance Issue in select statements
The following statements are taking too much time to execute. Is there a better way to write these select statements? int_report_data is my final internal table.
select f~splant f~vplant f~rplant f~l1_sto p~l1_delivery p~l1_gr p~l2_sto p~l2_delivery p~err_msg into (dochdr-swerks, dochdr-vwerks, dochdr-rwerks, dochdr-l1sto, docitem-l1xblnr, docitem-l1gr, docitem-l2sto, docitem-l2xblnr, docitem-err_msg) from zdochdr as f inner join zdocitem as p on f~l1_sto = p~l1_sto where f~splant in s_werks and
f~vplant in v_werks and f~rplant in r_werks and p~l1_delivery in l1_xblnr and p~l1_gr in l1_gr and p~l2_delivery in l2_xblnr.
move : dochdr-swerks to int_report_data-i_swerks,
dochdr-vwerks to int_report_data-i_vwerks,
dochdr-rwerks to int_report_data-i_rwerks,
dochdr-l1sto to int_report_data-i_l1sto,
docitem-l1xblnr to int_report_data-i_l1xblnr,
docitem-l1gr to int_report_data-i_l1gr,
docitem-l2sto to int_report_data-i_l2sto,
docitem-l2xblnr to int_report_data-i_l2xblnr,
docitem-err_msg to int_report_data-i_errmsg.
append int_report_data.
endselect.
* Goods receipt
loop at int_report_data.
select single ebeln from ekbe into l2gr where ebeln = int_report_data-i_l2sto and bwart = '101' and bewtp = 'E' and vgabe = '1'.
if sy-subrc eq 0.
move l2gr to int_report_data-i_l2gr.
modify int_report_data.
endif.
endloop.
* First billing document (I have to check fkart = 'ZRTY' for the second billing document; how can I write that statement?)
select vbeln from vbfa into (tabvbfa-vbeln) where vbelv = int_report_data-i_l2xblnr or vbelv = int_report_data-i_l1xblnr.
select single vbeln from vbrk into tabvbrk-vbeln where vbeln = tabvbfa-vbeln and fkart = 'IV'.
if sy-subrc eq 0.
move tabvbrk-vbeln to int_report_data-i_l2vbeln.
modify int_report_data.
endif.
endselect.
Thanks in advance,
Yad
Hi!
Which of your selects is slow? Make a SQL-trace, check which select(s) is(are) slow.
For EKBE and VBFA you are selecting first key field - in general that is fast. If your z-tables are the problem, maybe an index might help.
Instead of looping and making a lot of select singles, one select 'for all entries' can help, too.
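Applied to the EKBE lookup in the code above, that suggestion could look roughly like this (a sketch reusing the field names from the post; duplicate handling and the result field would need checking against the real data):

```abap
* One array fetch instead of a SELECT SINGLE per loop pass.
DATA: BEGIN OF lt_ekbe OCCURS 0,
        ebeln LIKE ekbe-ebeln,
      END OF lt_ekbe.

IF int_report_data[] IS NOT INITIAL.
  SELECT ebeln INTO TABLE lt_ekbe
    FROM ekbe
    FOR ALL ENTRIES IN int_report_data
    WHERE ebeln = int_report_data-i_l2sto
      AND bwart = '101'
      AND bewtp = 'E'
      AND vgabe = '1'.
ENDIF.

* Merge the results back in memory (sorted + binary search).
SORT lt_ekbe BY ebeln.
LOOP AT int_report_data.
  READ TABLE lt_ekbe WITH KEY ebeln = int_report_data-i_l2sto
                     BINARY SEARCH.
  IF sy-subrc = 0.
    int_report_data-i_l2gr = lt_ekbe-ebeln.
    MODIFY int_report_data.
  ENDIF.
ENDLOOP.
```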
Please analyze further and give feedback.
Regards,
Christian -
Performance problem in selecting data from the database
Hello all,
Could you please suggest which select statement is best for fetching data from the database if it contains more than 10 lakh (1,000,000) records?
I am using the SELECT ... PACKAGE SIZE n statement, but it's taking a lot of time.
With best regards,
Srinivas Rathod
Hi Srinivas,
If you have huge data volumes to select, you can reduce the time a little by using better techniques.
I do not think SELECT ... PACKAGE SIZE will give good performance.
see the below examples :
ABAP Code Samples for Simple Performance Tuning Techniques
1. Query including select and sorting functionality
Code A
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on
f~matnr = g~matnr where g~stlal = '01' order by f~ernam.
Code B
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01'.
sort itab_new by ernam.
Both the above codes perform essentially the same function, but the execution time for Code B is considerably less than that of Code A. Reason: the ORDER BY clause in a select statement increases its execution time, so it is usually cheaper to sort the internal table once after selecting the data.
2. Performance Improvement Due to Identical Statements Execution Plan
Consider the queries below and their relative efficiency in saving execution time.
Code C (Non-Identical Select Statements)
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
sort itab_new.
select f~matnr f~ernam
f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as
f inner join mast as g on f~matnr =
g~matnr where g~stlal
= '01' .
Code D (Identical Select Statements)
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
sort itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
Both the above codes perform essentially the same function, but the execution time for the second version (Code D) is considerably less than that of the first. Reason: during execution, each SQL statement is converted through a series of database operation phases. In the second phase (the prepare phase), an execution plan is determined for the SQL statement and stored; if an identical select statement appears anywhere in the program, the stored execution plan is reused to save time. So keep the text of a select statement identical when it is used more than once in a program.
3. Reducing Parse Time Using Aliasing
A statement that does not have a cached execution plan must be parsed before execution; this parsing phase is highly time- and resource-consuming, so any SQL query should include alias names, for the following reasons:
1. Providing an alias name enables the query engine to resolve which tables the specified fields belong to.
2. Providing a short alias name (a single-character alias) is more efficient than providing a long alias name.
Code E
select j~matnr j~ernam j~mtart j~matkl
g~werks g~aenam g~stlal into table itab_new from mara as
j inner join mast as g on j~matnr = g~matnr where
g~stlal = '01' .
In the above code the alias name used is j.
4. Performance Tuning Using Order by Clause
If a SQL query reads particular database records based on key values given in the select statement, the subsequent READ TABLE can be optimized by ordering the data in the same order as the keys used in the READ statement.
Code F
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
end of itab_new.
select MATNR ERNAM MTART MATKL from mara into table itab_new where
MTART = 'HAWA' ORDER BY MATNR ERNAM MTART MATKL.
read table itab_new with key MATNR = 'PAINT1' ERNAM = 'RAMANUM'
MTART = 'HAWA' MATKL = 'OFFICE'.
Code G
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
end of itab_new.
select MATNR ERNAM MTART MATKL from mara into table itab_new where
MTART = 'HAWA' ORDER BY ERNAM MATKL MATNR MTART.
read table itab_new with key MATNR = 'PAINT1' ERNAM = 'RAMANUM'
MTART = 'HAWA' MATKL = 'OFFICE'.
In the above code F, the read statement following the select statement is having the order of the keys as MATNR, ERNAM, MTART, MATKL. So it is less time intensive if the internal table is ordered in the same order as that of the keys in the read statement.
5. Performance Tuning Using Binary Search
A very simple but useful method of fine tuning performance of a read statement is using Binary search addition to it. If the internal table consists of more than 20 entries then the traditional linear search method proves to be more time intensive.
Code H
select * from mara into corresponding fields of table intab.
sort intab.
read table intab with key matnr = '11530' binary search.
Code I
select * from mara into corresponding fields of table intab.
sort intab.
read table intab with key matnr = '11530'.
Thanks
Seshu -
Performance Problem in Select query
Hi,
I have performance Problem in following Select Query :
SELECT VBELN POSNR LFIMG VRKME VGBEL VGPOS
FROM LIPS INTO CORRESPONDING FIELDS OF TABLE GT_LIPS
FOR ALL ENTRIES IN GT_EKPO1
WHERE VGBEL = GT_EKPO1-EBELN
AND VGPOS = GT_EKPO1-EBELP.
As per the trace, I have analysed that it does a complete table scan of the LIPS table, and the table contains almost 3 lakh (300,000) records.
Kindly Suggest what we can do to optimize this query.
Regards,
Harsh
types: begin of line,
vbeln type lips-vbeln,
posnr type lips-posnr,
lfimg type lips-lfimg,
vrkme type lips-vrkme,
vgbel type lips-vgbel,
vgpos type lips-vgpos,
end of line.
data: itab type standard table of line,
wa type line.
IF GT_EKPO1[] IS NOT INITIAL.
SELECT VBELN POSNR LFIMG VRKME VGBEL VGPOS
FROM LIPS INTO TABLE ITAB
FOR ALL ENTRIES IN GT_EKPO1
WHERE VGBEL = GT_EKPO1-EBELN
AND VGPOS = GT_EKPO1-EBELP.
ENDIF. -
Problem with Select statement.
DATA: wa_usr05 TYPE usr05.
The select statement always gives sy-subrc = 0
even if there is no entry with parid = 'ZRD'.
On successful it fills the structure wa_usr05 as
MANDT C 3 ACC
BNAME C 12 SCL
PARID C 20 X
PARVA C 18
but mandt is 310.
USR05 is a pool table and has mandt field.
SELECT SINGLE bname
parid
parva
FROM usr05
INTO wa_usr05
WHERE bname = sy-uname AND
parid = 'ZRD' AND
parva = 'x' OR parva = 'X'.
Let me know the reason and the solution to the problem.
SELECT SINGLE * FROM usr05
INTO wa_usr05
WHERE bname = sy-uname AND
parid = 'ZRD' AND
parva = 'X'.
Use SELECT SINGLE * since you have defined wa_usr05 with type usr05.
Otherwise:
DATA: i_usr05 TYPE STANDARD TABLE of usr05.
SELECT * FROM usr05
INTO TABLE i_usr05
WHERE bname = sy-uname AND
parid = 'ZRD' AND
parva = 'X'.
Then loop at i_usr05 and write the data.
Hope this solves your query.
Reward points if this helps.
Message was edited by:
Judith Jessie Selvi -
Performance Problem in SQL Statement
Hi,
I am using a SELECT statement where the execution time is taking longer time:
SELECT MSD,IMS,BOOK_IND FROM RW_TABLE,TABLE(GET_SUD)
WHERE SERVED_MSISDN=MSD
(MSD is a column of the object type returned by the table function TABLE(GET_SUD), and SERVED_MSISDN is a column of the table RW_TABLE.)
Any help will be needful for me
Thanks and Regards
Your query:
SELECT MSD,IMS,BOOK_IND
FROM RW_TABLE,TABLE(GET_SUD)
WHERE SERVED_MSISDN=MSD
You reference two tables - RW_TABLE and GET_SUD - yet you have no join between them. Hence Oracle is doing a CARTESIAN PRODUCT between the two data sets. As each table or data set gets bigger, so will the results, and so will the elapsed time.
You need to join the two tables together properly. Read a book on Relational Databases if you do not understand what a join is between tables, and why it is needed.
I would also guess that you lack an index on the SERVED_MSISDN column on its own, or as the leading column in an index. Instead Oracle is reading every entry in an index to find any matching rows (FAST FULL SCAN CTVA_ETL.TEMP_INDX). So again, as the tables get bigger so the FULL SCAN of the whole index will take longer too.
John
[Database Performance Blog|http://databaseperformance.blogspot.com] -
Hi Experts,
I am facing a problem with a select statement that gives the short dump
DBIF_RSQL_INVALID_RSQL CX_SY_OPEN_S.
I have searched many forums and found that the select-option s_matnr has a limitation of 2000 entries, but I pass the same s_matnr with more than 2000 entries to other select statements and they do not give a short dump.
I face the problem with only one select statement, which gives a short dump even when I pass s_matnr with more than 1500 entries.
my select statement is
SELECT * FROM bsim
INTO CORRESPONDING FIELDS OF TABLE g_t_bsim_lean
FOR ALL ENTRIES IN t_bwkey WHERE bwkey = t_bwkey-bwkey
AND matnr IN matnr
AND bwtar IN bwtar
AND budat >= datum-low.
in the internal table g_t_bsim_lean internal table contain all the fields of the table bsim with 2 fields from other table.
Please let me know whether i need to change the select statement or any other solution for this.
Regards,
Udupi
My select query is like this:
DATA: BEGIN OF t_bwkey OCCURS 0, "184465
bwkey LIKE bsim-bwkey, "184465
END OF t_bwkey. "184465
LOOP AT g_t_organ WHERE keytype = c_bwkey.
MOVE g_t_organ-bwkey TO t_bwkey-bwkey.
COLLECT t_bwkey. "184465
ENDLOOP. "184465
READ TABLE t_bwkey INDEX 1. "184465
CHECK sy-subrc = 0. "184465
SELECT * FROM bsim "n443935
INTO CORRESPONDING FIELDS OF TABLE g_t_bsim_lean "n443935
FOR ALL ENTRIES IN t_bwkey WHERE bwkey = t_bwkey-bwkey
AND matnr IN matnr
AND bwtar IN bwtar
AND budat >= datum-low. -
Performance in the select statement
Hi All,
(select a.siteid siteid,a.bpaadd_0 bpaadd_0,a.bpanum_0 bpanum_0,
case when a.bpaaddlig_0 = '' then '-' else a.bpaaddlig_0 end
address1,
case when a.bpaaddlig_1 = '' then '-' else a.bpaaddlig_1 end
address2,
case when a.bpaaddlig_2 = '' then '-' else a.bpaaddlig_2 end
address3,
case when a.bpades_0 = '' then '-' else a.bpades_0 end place,
case when a.cty_0 = '' then '-' else a.cty_0 end city,
case when a.poscod_0 = '' then '-' else a.poscod_0 end
pincode,
case when b.cntnam_0 = '' then '-' else b.cntnam_0 end
contactname,
case when b.fax_0 = '' then '-' else b.fax_0 end fax,
case when b.MOBTEL_0 = '' then '-' else b.MOBTEL_0 end mobile,
case when b.TEL_0 = '' then '-' else b.TEL_0 end phone,
case when b.web_0 = '' then '-' else b.web_0 end website,
c.zinvcty_0 zcity,c.bpainv_0 bpainv_0,c.bpcnum_0 bpcnum_0
from lbcreport.bpaddress@info a,lbcreport.contact@info b
,lbcreport.bpcustomer@info c
where (a.bpanum_0=b.bpanum_0) and (a.cty_0 = c.zinvcty_0) and
(a.siteid = c.siteid))
In the above query, is there any performance degradation, or can I proceed with the same query? Is there another solution to increase the speed of the query?
Are this many CASE expressions allowed in one select statement?
Please could anybody help me in this?
Thanks in advance
bye
Srikavi
Change your query as follows...
(select
a.siteid siteid,
a.bpaadd_0 bpaadd_0,
a.bpanum_0 bpanum_0,
nvl(a.bpaaddlig_0, '-') address1,
nvl(a.bpaaddlig_1,'-' ) address2,
nvl(a.bpaaddlig_2,'-' ) address3,
nvl(a.bpades_0,'-' ) place,
nvl(a.cty_0,'-' ) city,
nvl(a.poscod_0,'-' ) pincode,
nvl(b.cntnam_0,'-' ) contactname,
nvl(b.fax_0,'-' ) fax,
nvl(b.MOBTEL_0,'-' ) mobile,
nvl(b.TEL_0,'-' ) phone,
nvl(b.web_0,'-' ) website,
c.zinvcty_0 zcity,c.bpainv_0 bpainv_0,c.bpcnum_0 bpcnum_0
from
lbcreport.bpaddress@info a,
lbcreport.contact@info b,
lbcreport.bpcustomer@info c
where
(a.bpanum_0=b.bpanum_0) and
(a.cty_0 = c.zinvcty_0) and
(a.siteid = c.siteid))
/
For performance, check the execution plan of the query; see also BluShadow's post.
Regards
Singh -
Performance problem with MERGE statement
Version : 11.1.0.7.0
I have an insert statement like following which is taking less than 2 secs to complete and inserts around 4000 rows:
INSERT INTO sch.tab1
(c1,c2,c3)
SELECT c1,c2,c3
FROM sch1.tab1@dblink
WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink);
I wanted to change it to a MERGE statement just to avoid duplicate data. I changed it to the following:
MERGE INTO sch.tab1 t1
USING (SELECT c1,c2,c3
FROM sch1.tab1@dblink
WHERE c1 IN (SELECT c1 FROM sch1.tab2@dblink)) t2
ON (t1.c1 = t2.c1)
WHEN NOT MATCHED THEN
INSERT (t1.c1,t1.c2,t1.c3)
VALUES (t2.c1,t2.c2,t2.c3);
The MERGE statement is taking more than 2 mins (and I stopped the execution after that). I removed the WHERE clause subquery inside the subquery of the USING section and it executed in 1 sec.
If I execute the same select statement with the WHERE clause outside the MERGE statement, it takes just 1 sec to return the data.
Is there any known issue with the MERGE statement in the above scenario?
riedelme wrote:
Are your join columns indexed?
Yes, the join columns are indexed.
You are doing a remote query inside the merge; remote queries can slow things down. Do you have to select all the rows from the remote table? What if you copied them locally using a materialized view?
Yes, I agree that remote queries will slow things down. But the same is not happening with SELECT, INSERT and PL/SQL; it happens only when we use MERGE. I have to test what happens if we use a subquery referring to a local table or materialized view. Even if that works, I think there is still a problem with MERGE in the case of remote subqueries (at least until I test local queries). I wish someone could test similar scenarios so that we know whether it is a genuine problem or something specific on my side.
>
BTW, I haven't had great luck with MERGE either :(. Last time I tried to use it I found it faster to use a loop with insert/update logic.
Edited by: riedelme on Jul 28, 2009 12:12 PM
:) I used the same approach to overcome this situation. I think MERGE still needs functional improvement from the Oracle side. I personally feel it is one of the more robust features to grace SQL and PL/SQL. -
Performance required in select statement
Hi gurus,
My select statement below takes around 15 seconds to execute; I need your help in this regard.
select single * from tablename where no = itab-no.
Thanks,
vj
VIQMEL is not a table itself - it is a view of tables QMIH, QMEL and ILOA.
the field you are searching on (EQFNR) is a non-key field on table ILOA which is not an index field either.
This will be very slow depending on the size of the tables.
Is there any other data you can use to search by? perhaps it would be quicker to get extra key data for this view by reading some other tables first?
If not, you may need to add an index to ILOA for this field. I would check with SAP, as adding indexes to standard tables like this one, which already has several indexes, may have unforeseen impacts on standard processing. Generally it is all right, but an extra index for the system to maintain can sometimes cause problems.
Performance tuning for select statements using LIKP, LIPS and VBRP
Dear all,
I have a report where I first select from LIKP, then for all entries of LIKP I take data from LIPS, and then for all entries in LIPS I take data from VBRP by matching VGBEL and VGPOS. The problem is that fetching the data from VBRP takes a lot of time, around 13 minutes. How can I overcome this?
regards
Amit
Hi,
there is also no secondary index for preceding document in VBFA table.
You will also have to create it here.
Regards,
Przemysław
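One alternative worth tracing, sketched under assumptions (the join fields VGBEL/VGPOS come from the post; the selection range s_vbeln and the field list are hypothetical): replace the LIPS-then-VBRP FOR ALL ENTRIES pair with a single inner join, then compare both variants in ST05, since which one wins depends on data volumes and indexes.

```abap
DATA: BEGIN OF ls_doc,
        vbeln    TYPE lips-vbeln,   " delivery
        posnr    TYPE lips-posnr,   " delivery item
        fk_vbeln TYPE vbrp-vbeln,   " billing document
      END OF ls_doc,
      lt_doc LIKE STANDARD TABLE OF ls_doc.

* Sketch: deliveries and their billing items in one round trip,
* joined on preceding document / item (VGBEL / VGPOS).
SELECT l~vbeln l~posnr b~vbeln AS fk_vbeln
  INTO CORRESPONDING FIELDS OF TABLE lt_doc
  FROM lips AS l
  INNER JOIN vbrp AS b
    ON  b~vgbel = l~vbeln
    AND b~vgpos = l~posnr
  WHERE l~vbeln IN s_vbeln.          " hypothetical selection range
```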