Performance issue on a select statement
Hi all @ SAPforums and thanks for your attention,
the task is quite simple: given a Purchase Requisition number and item (BANFN, BNFPO), I have to check whether a contract with the same PR as source exists in the EKPO table.
In order to check for it, I simply wrote the following select:
SELECT SINGLE * FROM EKPO INTO wa_checkekpo
WHERE bstyp EQ 'K'
AND banfn EQ l_eban-banfn
AND bnfpo EQ l_eban-bnfpo.
This kind of query is quite expensive (more than three seconds in my process) because BANFN and BNFPO are not key fields of the table.
Any idea/workaround that can lead to better performance? Please note I'm not interested in retrieving the contract number (KONNR), it's sufficient to know it exists.
Hi,
> Do not use SELECT * if you just want to check the existence of a record.
so far so good.
> Use a single variable to store the count.
But why should we count the records if we just want to know whether a key exists in
the database or not? I would change the second half of the recommendation to:
Use a single field of the used index in the select list, and check (sy-subrc, or the result
in that field) whether the record exists or not.
Kind regards,
Hermann
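A minimal sketch of that recommendation applied to the original select (the variable name l_banfn is just a placeholder):

```abap
DATA l_banfn TYPE ekpo-banfn.

* Select one field of the used index only - sy-subrc tells us whether it exists.
SELECT SINGLE banfn FROM ekpo
  INTO l_banfn
  WHERE bstyp EQ 'K'
    AND banfn EQ l_eban-banfn
    AND bnfpo EQ l_eban-bnfpo.

IF sy-subrc = 0.
* A contract referencing this purchase requisition exists.
ENDIF.
```

Note that the select-list change alone does not alter the access path; if BANFN/BNFPO are not covered by any index on EKPO, a secondary index (to be agreed with your basis team) is what actually removes the three seconds.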
Similar Messages
-
Performance issue with the ABAP statements
Hello,
Please can some help me with the below statements where I am getting performance problem.
SELECT * FROM /BIC/ASALHDR0100 INTO TABLE CHDATE.
SORT CHDATE BY DOC_NUMBER.
SORT SOURCE_PACKAGE BY DOC_NUMBER.
LOOP AT CHDATE INTO WA_CHDATE.
  READ TABLE SOURCE_PACKAGE INTO WA_CIDATE
       WITH KEY DOC_NUMBER = WA_CHDATE-DOC_NUMBER BINARY SEARCH.
  MOVE WA_CHDATE-CREATEDON TO WA_CIDATE-CREATEDON.
  APPEND WA_CIDATE TO CIDATE.
ENDLOOP.
I wrote the above code for the following requirement:
1. I have 2 tables from which I am getting the data.
2. I have a common field in both tables named CREATEDON (a date); both tables hold values for it.
3. While accessing the 2 tables and copying to a third table, I have to modify that field.
I am getting performance issues with the above statements.
Thanks
Edited by: Rob Burbank on Jul 29, 2010 10:06 AM
Hello,
try a select like the following one instead of your code.
SELECT t1~field1 t2~field2 ...
INTO TABLE it_table
FROM table1 AS t1 INNER JOIN table2 AS t2
ON t1~doc_number = t2~doc_number. -
Performance issue with view selection after migration from oracle to MaxDb
Hello,
After the migration from oracle to MaxDb we have serious performance issues with a lot of our tableview selections.
Does anybody know about this problem and how to solve it ??
Best regards !!!
Gert-Jan
Hello Gert-Jan,
most probably you need additional indexes to get better performance.
Using the command monitor you can identify the long running SQL statements and check the optimizer access strategy. Then you can decide which indexes might help.
If this is about an SAP system, you can find additional information about performance analysis in SAP notes 725489 and 819641.
SAP Hosting provides the so-called service 'MaxDB Migration Support' to help you in such cases. The service description can be found here:
http://www.saphosting.de/mediacenter/pdfs/solutionbriefs/MaxDB_de.pdf
http://www.saphosting.com/mediacenter/pdfs/solutionbriefs/maxDB-migration-support_en.pdf.
Best regards,
Melanie Handreck -
Performance issue when using select count on large tables
Hello Experts,
I have a requirement where I need to get a count of data from a database table. Later on I need to display the count in ALV format.
As per my requirement, I have to use this SELECT COUNT inside nested loops.
Below is the count snippet:
LOOP AT systems ASSIGNING <fs_sc_systems>.
  LOOP AT date ASSIGNING <fs_sc_date>.
    SELECT COUNT( DISTINCT crmd_orderadm_i~header )
      FROM crmd_orderadm_i
      INNER JOIN bbp_pdigp
      ON crmd_orderadm_i~client EQ bbp_pdigp~client "MANDT is referred to as client
      AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
      INTO w_sc_count
      WHERE crmd_orderadm_i~created_at BETWEEN <fs_sc_date>-start_timestamp
                                       AND <fs_sc_date>-end_timestamp
      AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.
  ENDLOOP.
ENDLOOP.
In the above code snippet,
<fs_sc_systems>-sys_name is having the system name,
<fs_sc_date>-start_timestamp is having the start date of month
and <fs_sc_date>-end_timestamp is the end date of month.
Also the data in tables crmd_orderadm_i and bbp_pdigp is very large and it increases every day.
Now, the above select query is taking a lot of time to return the count, due to which I am facing performance issues.
Can anyone please help me optimize this code.
Thanks,
Suman
Hi Choudhary Suman,
Try this:
SELECT crmd_orderadm_i~header
INTO TABLE it_header " internal table
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
FOR ALL ENTRIES IN date
WHERE crmd_orderadm_i~created_at BETWEEN date-start_timestamp
AND date-end_timestamp
AND bbp_pdigp~zz_scsys EQ date-sys_name.
SORT it_header BY header.
DELETE ADJACENT DUPLICATES FROM it_header
COMPARING header.
DESCRIBE TABLE it_header LINES v_lines.
Hope this information helps you.
Regards,
José -
Increase performance of the following SELECT statement.
Hi All,
I have the following select statement which I would want to fine tune.
CHECK NOT LT_MARC IS INITIAL.
SELECT RSNUM
RSPOS
RSART
MATNR
WERKS
BDTER
BDMNG FROM RESB
INTO TABLE GT_RESB
FOR ALL ENTRIES IN LT_MARC
WHERE XLOEK EQ ' '
AND MATNR EQ LT_MARC-MATNR
AND WERKS EQ P_WERKS
AND BDTER IN S_PERIOD.
The above query is run for a period of 1 year, where the number of records returned is approximately 3 million. When the program runs in the background, the execution time is around 76 hours. When I run the same program dividing the selection period into smaller parts, I can execute it in about an hour.
After a previous posting I had changed the select statement to
CHECK NOT LT_MARC IS INITIAL.
SELECT RSNUM
RSPOS
RSART
MATNR
WERKS
BDTER
BDMNG FROM RESB
APPENDING TABLE GT_RESB PACKAGE SIZE LV_SIZE
FOR ALL ENTRIES IN LT_MARC
WHERE XLOEK EQ ' '
AND MATNR EQ LT_MARC-MATNR
AND WERKS EQ P_WERKS
AND BDTER IN S_PERIOD.
ENDSELECT.
But the performance improvement is very negligible.
Please suggest.
Regards,
Karthik
Hi Karthik,
Do not use the APPENDING statement.
Also, you said that if you reduce the period you get the result quickly.
Why not try dividing your internal table LT_MARC into small internal tables having at most 1000 entries each?
You can read from index 1 - 1000 for the first table, use that in the select query, and append the results.
Then you can refresh that table, read LT_MARC from 1001 - 2000 into the same table, and execute the same query again.
I know this sounds strange, but you can bargain for better performance by increasing database hits in this case.
Try this and let me know.
Regards
Nishant
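A sketch of that block-wise approach (C_BLOCK, LT_MARC_PART, and LV_FROM are placeholder names; 1000 is the block size suggested above):

```abap
CONSTANTS c_block TYPE i VALUE 1000.
DATA: lt_marc_part LIKE lt_marc,
      lv_lines     TYPE i,
      lv_from      TYPE i VALUE 1.

DESCRIBE TABLE lt_marc LINES lv_lines.
WHILE lv_from <= lv_lines.
* Copy the next block of at most 1000 entries.
  CLEAR lt_marc_part.
  APPEND LINES OF lt_marc FROM lv_from TO lv_from + c_block - 1
         TO lt_marc_part.

  SELECT rsnum rspos rsart matnr werks bdter bdmng
    FROM resb
    APPENDING TABLE gt_resb
    FOR ALL ENTRIES IN lt_marc_part
    WHERE xloek EQ ' '
      AND matnr EQ lt_marc_part-matnr
      AND werks EQ p_werks
      AND bdter IN s_period.

  lv_from = lv_from + c_block.
ENDWHILE.
```

The loop condition guarantees lt_marc_part is never empty, which matters because FOR ALL ENTRIES with an empty driver table would select every row of RESB.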
-
Performance issue with the Select query
Hi,
I have an issue with the performance of a select query.
In table AFRU, AUFNR is not a key field.
So I selected the low and high values into S_RUECK and used it in the WHERE condition.
Still I have an issue with the performance.
SELECT RUECK
RMZHL
IEDD
AUFNR
STOKZ
STZHL
FROM AFRU INTO TABLE T_AFRU
FOR ALL ENTRIES IN T_ZSCPRT100
WHERE RUECK IN S_RUECK AND
AUFNR = T_ZSCPRT100-AUFNR AND
STOKZ = SPACE AND
STZHL = 0.
I had also checked by creating an index for AUFNR in the table AFRU... it does not help.
Is there any way that we can declare a key field while declaring the internal table....?
Any suggestions to fix the performance issue are appreciated!
Regards,
Kittu
Hi,
Thank you for your quick response!
Rui dantas, I have a little confusion... this is my code below:
data : t_zscprt type standard table of ty_zscprt,
wa_zscprt type ty_zscprt.
types : BEGIN OF ty_zscprt100,
aufnr type zscprt100-aufnr,
posnr type zscprt100-posnr,
ezclose type zscprt100-ezclose,
serialnr type zscprt100-serialnr,
lgort type zscprt100-lgort,
END OF ty_zscprt100.
data : t_zscprt100 type standard table of ty_zscprt100,
wa_zscprt100 type ty_zscprt100.
Types: begin of ty_afru,
rueck type CO_RUECK,
rmzhl type CO_RMZHL,
iedd type RU_IEDD,
aufnr type AUFNR,
stokz type CO_STOKZ,
stzhl type CO_STZHL,
end of ty_afru.
data : t_afru type STANDARD TABLE OF ty_afru,
WA_AFRU TYPE TY_AFRU.
SELECT AUFNR
POSNR
EZCLOSE
SERIALNR
LGORT
FROM ZSCPRT100 INTO TABLE T_ZSCPRT100
FOR ALL ENTRIES IN T_ZSCPRT
WHERE AUFNR = T_ZSCPRT-PRTNUM
AND SERIALNR IN S_SERIAL
AND LGORT IN S_LGORT.
IF sy-subrc <> 0.
MESSAGE ID 'Z2' TYPE 'I' NUMBER '41'. "BDCG87
stop."BDCG87
ENDIF.
SELECT RUECK
RMZHL
IEDD
AUFNR
STOKZ
STZHL
FROM AFRU INTO TABLE T_AFRU
FOR ALL ENTRIES IN T_ZSCPRT100
WHERE RUECK IN S_RUECK AND
AUFNR = T_ZSCPRT100-AUFNR AND
STOKZ = SPACE AND
STZHL = 0.
Using AUFNR, get AUFPL from AFKO.
Using AUFPL, get RUECK from AFVC.
Using RUECK, read AFRU.
In other words, one select joining AFKO <-> AFVC <-> AFRU should get what you want.
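A sketch of that three-table join, following the chain described above (field names are taken from the standard AFKO/AFVC/AFRU tables; verify them in your system before use):

```abap
SELECT r~rueck r~rmzhl r~iedd k~aufnr r~stokz r~stzhl
  INTO TABLE t_afru
  FROM afko AS k
  INNER JOIN afvc AS v ON v~aufpl = k~aufpl
  INNER JOIN afru AS r ON r~rueck = v~rueck
  FOR ALL ENTRIES IN t_zscprt100
  WHERE k~aufnr = t_zscprt100-aufnr
    AND r~stokz = space
    AND r~stzhl = 0.
```

This way the database can start from AUFNR (a key field of AFKO) instead of scanning AFRU, which is exactly the point of the suggested access path.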
This is my select query; would you want me to write another select query to meet this criteria?
From AUFNR I will get AUFPL from AFKO; based on AUFPL I will get RUECK; based on RUECK I need to read AFRU... but I need to select a few fields from AFRU based on AUFNR....
Any suggestions will be appreciated!
Regards
Kittu -
TDE Issue with UPDATE/SELECT statement
We just implemented TDE on a table and now our import script is getting errors. The import script has not changed and has been running fine for over a year. The script failed right after applying TDE on the table.
Oracle 10g Release 2 on Solaris.
Here are the encrypted colums:
COLUMN_NAME ENCRYPTION_ALG SALT
PERSON_ID AES 192 bits key NO
PERSON_KEY AES 192 bits key NO
USERNAME AES 192 bits key NO
FIRST_NAME AES 192 bits key NO
MIDDLE_NAME AES 192 bits key NO
LAST_NAME AES 192 bits key NO
NICKNAME AES 192 bits key NO
EMAIL_ADDRESS AES 192 bits key NO
AKO_EMAIL AES 192 bits key NO
CREATION_DATE AES 192 bits key NO
Here is the UPDATE/SELECT statement that is failing:
UPDATE cslmo_framework.users a
SET ( person_id
    , username
    , first_name
    , middle_name
    , last_name
    , suffix
    , user_status_seq
    ) = (
      SELECT person_id
           , username
           , first_name
           , middle_name
           , last_name
           , suffix
           , user_status_seq
      FROM cslmo.vw_import_employee i
      WHERE i.person_key = a.person_key
    )
WHERE EXISTS (
      SELECT 1
      FROM cslmo.vw_import_employee i
      WHERE i.person_key = a.person_key
      AND ( NVL(a.person_id,0) <> NVL(i.person_id,0)
         OR NVL(a.username,' ') <> NVL(i.username,' ')
         OR NVL(a.first_name,' ') <> NVL(i.first_name,' ')
         OR NVL(a.middle_name,' ') <> NVL(i.middle_name,' ')
         OR NVL(a.last_name,' ') <> NVL(i.last_name,' ')
         OR NVL(a.suffix,' ') <> NVL(i.suffix,' ')
         OR NVL(a.user_status_seq,99) <> NVL(i.user_status_seq,99)
          )
      )
cslmo@awpswebj-dev> exec cslmo.pkg_acpers_import.p_users
Error importing USERS table. START p_users UPDATE
Error Message: ORA-01483: invalid length for DATE or NUMBER bind variable
I rewrote the procedure using BULK COLLECT and a FORALL statement, and that seems to work fine. Here is the new code:
declare
bulk_errors EXCEPTION ;
PRAGMA EXCEPTION_INIT(bulk_errors,-24381) ;
l_idx NUMBER ;
l_err_msg VARCHAR2(2000) ;
l_err_code NUMBER ;
l_update NUMBER := 0 ;
l_count NUMBER := 0 ;
TYPE person_key_tt
IS
TABLE OF cslmo_framework.users.person_key%TYPE
INDEX BY BINARY_INTEGER ;
arr_person_key person_key_tt ;
TYPE person_id_tt
IS
TABLE OF cslmo_framework.users.person_id%TYPE
INDEX BY BINARY_INTEGER ;
arr_person_id person_id_tt ;
TYPE username_tt
IS
TABLE OF cslmo_framework.users.username%TYPE
INDEX BY BINARY_INTEGER ;
arr_username username_tt ;
TYPE first_name_tt
IS
TABLE OF cslmo_framework.users.first_name%TYPE
INDEX BY BINARY_INTEGER ;
arr_first_name first_name_tt ;
TYPE middle_name_tt
IS
TABLE OF cslmo_framework.users.middle_name%TYPE
INDEX BY BINARY_INTEGER ;
arr_middle_name middle_name_tt ;
TYPE last_name_tt
IS
TABLE OF cslmo_framework.users.last_name%TYPE
INDEX BY BINARY_INTEGER ;
arr_last_name last_name_tt ;
TYPE suffix_tt
IS
TABLE OF cslmo_framework.users.suffix%TYPE
INDEX BY BINARY_INTEGER ;
arr_suffix suffix_tt ;
TYPE user_status_seq_tt
IS
TABLE OF cslmo_framework.users.user_status_seq%TYPE
INDEX BY BINARY_INTEGER ;
arr_user_status_seq user_status_seq_tt ;
CURSOR users_upd IS
SELECT i.person_key
,i.person_id
,i.username
,i.first_name
,i.middle_name
,i.last_name
,i.suffix
,i.user_status_seq
FROM cslmo.vw_import_employee i ,
cslmo_framework.users u
WHERE i.person_key = u.person_key ;
begin
OPEN users_upd ;
LOOP
FETCH users_upd
BULK
COLLECT
INTO arr_person_key
, arr_person_id
, arr_username
, arr_first_name
, arr_middle_name
, arr_last_name
, arr_suffix
, arr_user_status_seq
LIMIT 100 ;
FORALL idx IN 1 .. arr_person_key.COUNT
SAVE EXCEPTIONS
UPDATE cslmo_framework.users u
SET
person_id = arr_person_id(idx)
, username = arr_username(idx)
, first_name = arr_first_name(idx)
, middle_name = arr_middle_name(idx)
, last_name = arr_last_name(idx)
, suffix = arr_suffix(idx)
, user_status_seq = arr_user_status_seq(idx)
WHERE u.person_key = arr_person_key(idx)
AND
( NVL(u.person_id,0) != NVL(arr_person_id(idx),0)
OR
NVL(u.username,' ') != NVL(arr_username(idx),' ')
OR
NVL(u.first_name,' ') != NVL(arr_first_name(idx),' ')
OR
NVL(u.middle_name, ' ') != NVL(arr_middle_name(idx), ' ')
OR
NVL(u.last_name,' ') != NVL(arr_last_name(idx),' ')
OR
NVL(u.suffix,' ') != NVL(arr_suffix(idx),' ')
OR
NVL(u.user_status_seq,99) != NVL(arr_user_status_seq(idx),99)
) ;
l_count := arr_person_key.COUNT ;
l_update := l_update + l_count ;
EXIT WHEN users_upd%NOTFOUND ;
END LOOP ;
CLOSE users_upd ;
COMMIT ;
dbms_output.put_line('updated records: ' || l_update);
EXCEPTION
WHEN bulk_errors THEN
FOR i IN 1 .. sql%BULK_EXCEPTIONS.COUNT
LOOP
l_err_code := sql%BULK_EXCEPTIONS(i).error_code ;
l_err_msg := sqlerrm(-l_err_code) ;
l_idx := sql%BULK_EXCEPTIONS(i).error_index;
dbms_output.put_line('error code: ' || l_err_code);
dbms_output.put_line('error msg: ' || l_err_msg);
dbms_output.put_line('at index: ' || l_idx);
END LOOP ;
ROLLBACK;
RAISE;
end ;
cslmo@awpswebj-dev> @cslmo_users_update
updated records: 1274
There are about 20 or so other procedures in the import script. I don't want to rewrite them.
Does anyone know why the UPDATE/SELECT is failing? I checked Metalink and could not find anything about this problem.
This is now an Oracle bug, #9182070 on Metalink.
TDE (transparent data encryption) does not work when an update/select statement references a remote database. -
Performance problem of asset selection statement.
Hi guys quick question.
SELECT a~bukrs
a~anln1
b~anln2
a~anlkl
a~aktiv
a~txt50
a~lvtnr
b~afabe
b~afabg
b~ndjar
b~ndper
INTO TABLE it_anla
FROM anla AS a
INNER JOIN anlb AS b
ON a~bukrs EQ b~bukrs
AND a~anln1 EQ b~anln1
WHERE a~bukrs IN s_bukrs
AND a~anln1 IN s_anln1
AND a~anln2 EQ '0000'
AND a~anlkl IN s_anlkl
AND a~aktiv EQ '00000000'
AND a~lvtnr IN s_lvtnr
AND b~afabe EQ 01.
I have a select statement which is filtered by capitalization date, which is the field AKTIV. This select alone has 450 thousand hits. Having ANLN2 = '0000' and AKTIV = blank/no date means that the asset group is no longer active.
Now, for another scenario, I have to retrieve table ANLA again, excluding all the Main Asset Numbers (ANLN1) found in that table.
Is there a way to do the selection only once?
I tried to pass all the ANLN1 values to a range table, but the program dumps. I think the range table can't handle that many entries.
Retrieving all the entries from the DB and then processing them takes longer.
I tried to delete from the table using a loop, but it takes too long as it processes the table on every loop pass.
LOOP AT it_anla WHERE anln2 EQ '0000' AND aktiv EQ '0000000'.
DELETE it_anla WHERE bukrs EQ it_anla-bukrs AND anln1 EQ it_anla-anln1.
ENDLOOP.
Thanks.
Edited by: Thomas Zloch on Sep 21, 2010 5:39 PM - please use code tags
Moderator message - Welcome to SCN
If the range table for anln1 is large and contains distinct values, you can try using it in a FOR ALL ENTRIES construct rather than IN.
Rob -
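Alongside the FOR ALL ENTRIES idea, one way to avoid the slow nested DELETE loop is to collect the inactive keys once into a sorted table and filter the second result set in a single pass. A sketch under assumed names (IT_ANLA2 stands for the hypothetical result of the second selection, WA_ANLA for its work area):

```abap
TYPES: BEGIN OF ty_key,
         bukrs TYPE anla-bukrs,
         anln1 TYPE anla-anln1,
       END OF ty_key.
DATA: lt_inactive TYPE SORTED TABLE OF ty_key WITH UNIQUE KEY bukrs anln1,
      wa_key      TYPE ty_key.

* Collect the keys of inactive asset groups once.
LOOP AT it_anla INTO wa_anla WHERE anln2 EQ '0000' AND aktiv EQ '00000000'.
  wa_key-bukrs = wa_anla-bukrs.
  wa_key-anln1 = wa_anla-anln1.
  INSERT wa_key INTO TABLE lt_inactive.   "duplicates just set sy-subrc = 4
ENDLOOP.

* Filter the second selection in one pass using the fast keyed lookup.
LOOP AT it_anla2 INTO wa_anla.
  READ TABLE lt_inactive WITH TABLE KEY bukrs = wa_anla-bukrs
                                        anln1 = wa_anla-anln1
       TRANSPORTING NO FIELDS.
  IF sy-subrc = 0.
    DELETE it_anla2.                      "deletes the current loop line
  ENDIF.
ENDLOOP.
```

This replaces the quadratic LOOP + DELETE ... WHERE pattern with one sorted-table lookup per row.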
Query related to execution and performance of COUNT in a select statement
Hi,
What are the difference between
select count(1) from table
and
select count(*) from table
Thanks for the reply. I searched lots of threads on this and found that they are the same (execution-wise and performance-wise).
But in a few threads I found someone saying:
"At the time of execution, COUNT(1) will be converted internally to COUNT(*)" --- is that correct?
"Is there any special significance to using COUNT(42)?"
Tx,
Anit -
Performance issue with Printing AR Statements
We are using XML Publisher 5.2, and we use it to print our AR statements.
Printing our statements took nearly 6-8 hrs for the entire process.
Are there any performance tuning parameters that we should set in order to improve the performance?
Thanks,
- vasu -
That's way too long... it should have been 1/2 hr at most based on our tests. Please log a TAR and be sure to load the template and the XML data (yes, all of it :o) and we can take a look.
Regards, Tim -
Performance issue doing a prepare statement with LIKE
Hi
I'm doing a prepared statement like this:
select foo
from bar
where des like ?
and then do the prepare with the string 'foo%'.
If I do the prepare with this variable, the query is very slow. If I don't do the prepare, the query is very fast. Can anyone help me out on this? The DBAs will kill me if I don't use prepared statements, but users will kill me if the query is slow...
Thank you!
HappyGuy
As indicated in the last reply, this probably has to do with the use of string literals versus bound variables.
With a prepared statement the optimiser doesn't get the chance to use histograms against your data... so whilst it might be more efficient to drive off an index for some of the searches, it loses access to the data (histograms) that enables it to make this decision...
Hints are probably the way to go... if you're sure that driving off an index is always going to be the most efficient...
Using explain (inside JDeveloper9i, or by turning on the autotrace functionality inside sqlplus) will give you some idea of how Oracle would drive the query if you use a string literal.
Dom -
Performance Issue: Select From BSEG & BKPF
Hi experts,
Performance issue on the select statements; how can I improve the performance?
Select Company Code (BUKRS)
Accounting Document Number (BELNR)
Document Type (BLART)
Posting Date in the Document (BUDAT)
Document Status (BSTAT)
Reversal Document or Reversed Document Indicator (XREVERSAL)
From Accounting Document Header (BKPF)
Into I_BKPF
Where BKPF-BUKRS = I_VBAK-BUKRS_VF
BKPF-BLART = KI
BKPF-BUDAT = SY-DATUM 2 days
BKPF-BSTAT = Initial
BKPF-XREVERSAL <> 1 or 2
Select Company Code (BUKRS)
Accounting Document Number (BELNR)
Assignment Number (ZUONR)
Sales Document (VBEL2)
Sales Document Item (POSN2)
P & L Statement Account Type (GVTYP)
From Accounting Document Segment (BSEG)
Into I_BSEG
Where BSEG-BUKRS = I_VBAK-BUKRS
BSEG-VBELN = I_VBAK-VBEL2
BSEG-POSN2 = I_VBAP-POSNR
BSEG-BELNR = I_BKPF-BELNR
P & L Statement Account Type (GVTYP) = X
Hi,
to improve the performance, you can use the secondary index tables, viz. BSIK / BSAK (vendors), BSID / BSAD (customers), and BSIS / BSAS (G/L accounts), instead of reading BSEG directly.
Hope this helps.
Best Regards, Murugesh AS -
Multiple table select Performance Issue
Hi,
I would like to get an opinion on which of these two queries is faster and whether either has a performance issue:
SELECT EMP_ID, NAME, DEPT_NAME
FROM EMP, DEPT
WHERE EMP_ID = DEPT_ID;
or
SELECT EMP_ID, NAME, (SELECT DEPT_NAME FROM DEPT WHERE ID = P_ID)
FROM EMP
WHERE EMP_ID = P_ID;
Let's say that EMP_ID on the DEPT table is linked to the EMP_ID column on EMP.
Well... I don't get your design, but the two queries may return different results.
Comparing the performance doesn't make sense.
Nevertheless, the only way is to run them both and see which one is faster or see which one has the lowest IO.
There's no way we can tell you which is faster by just looking at the text of the queries.
Post some explain plans or traces. -
Secondary Index Select Statement Problem
Hi friends.
I have an issue with a select statement using a secondary index,
SELECT SINGLE * FROM VEKP WHERE VEGR4 EQ STAGE_DOCK
AND VEGR5 NE SPACE
AND WERKS EQ PLANT
%_HINTS ORACLE
'INDEX("&TABLE&" "VEKP~Z3" "VEKP^Z3" "VEKP_____Z3")'.
The above statement is taking a long time to process.
When I check for this secondary index on the VEKP table, I couldn't see any DB index named vekp~z3, vekp^z3, or vekp____z3.
And the sy-subrc value after the select statement is 4 (even though matching values are available in VEKP for the given where condition).
My question is: why is my select statement taking a long time, and why is sy-subrc 4?
What happens if a secondary index given in a select statement is not available in that DB table?
Hi,
> One more question: is it possible to give more than one index name in a select statement?
yes you can:
read the documentation:
http://download.oracle.com/docs/cd/A97630_01/server.920/a96533/hintsref.htm#5156
index_hint:
This hint can optionally specify one or more indexes:
- If this hint specifies a single available index, then the optimizer performs
a scan on this index. The optimizer does not consider a full table scan or
a scan on another index on the table.
- If this hint specifies a list of available indexes, then the optimizer
considers the cost of a scan on each index in the list and then performs
the index scan with the lowest cost. The optimizer can also choose to
scan multiple indexes from this list and merge the results, if such an
access path has the lowest cost. The optimizer does not consider a full
table scan or a scan on an index not listed in the hint.
- If this hint specifies no indexes, then the optimizer considers the
cost of a scan on each available index on the table and then performs
the index scan with the lowest cost. The optimizer can also choose to
scan multiple indexes and merge the results, if such an access path
has the lowest cost. The optimizer does not consider a full table scan.
Kind regards,
Hermann -
How to solve performance issue
Hi
How can I resolve the performance issue in the following select query?
SELECT *
INTO CORRESPONDING FIELDS OF TABLE it_final
FROM ce1zcsc
WHERE paledger EQ c_10 "Currency Type
AND vrgar IN s_vrgar "Record Type
AND versi EQ space "Plan Version
AND perio IN r_perio "Period
AND bukrs IN s_bukrs. "Company Code
TABLE CE1ZCSC has around 173 fields,but it_final has around 105 fields.
The indexes are created for the following fields:
paledger
vrgar
versi
perio
bukrs and
prctr.
I doubt whether we should look at the estimated CPU costs of the index range scan or of the table access by index rowid in the execution plan for the SQL statement.
If anybody can provide me with informative documents on performance issues, that would help.
Hi,
Don't use "*" or "CORRESPONDING FIELDS" in the select query; rather, declare all the fields you need and use the "INTO TABLE" clause.
Let me know if you still face the same problem.
-Naveen.
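A sketch of Naveen's suggestion. The select list below just repeats the WHERE-clause fields as a placeholder; in the real program it should name exactly the ~105 fields of it_final, in the order of its structure, so INTO TABLE can be used without CORRESPONDING FIELDS:

```abap
SELECT paledger vrgar versi perio bukrs
  FROM ce1zcsc
  INTO TABLE it_final
  WHERE paledger EQ c_10
    AND vrgar IN s_vrgar
    AND versi EQ space
    AND perio IN r_perio
    AND bukrs IN s_bukrs.
```

Transferring ~105 named fields instead of all 173 reduces both the data volume sent from the database and the per-row move costs of CORRESPONDING FIELDS.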