Select query performance improvement - Index on EDIDC table
Hi Experts,
I have a scenario wherein I have to select data from the table EDIDC. The select query being used is given below.
SELECT docnum
direct
mestyp
mescod
rcvprn
sndprn
upddat
updtim
INTO CORRESPONDING FIELDS OF TABLE t_edidc
FROM edidc
FOR ALL ENTRIES IN t_error_idoc
WHERE
upddat GE gv_date1 AND
upddat LE gv_date2 AND
updtim GE p_time AND
status EQ t_error_idoc-status.
As the volume of data is very high, our client requested that we create an index, or use an existing one, to improve the performance of the data selection query.
Question:
1. How do we identify the index to be used?
2. On which fields should the indexing be done to improve the performance (if available indexes don't cater to our case)?
3. What will be the impact on the table performance if we create a new index?
Regards ,
Raghav
Question:
1. How do we identify the index to be used.
Generally the index is selected automatically by SAP (the DB optimizer). (You can still specify the index name in your select query by changing the syntax.)
For your select query the second index will be chosen automatically by the optimizer, because the select query has 'upddat' and 'updtim' in the sequence before 'status'.
2. On which fields should the indexing be done to improve the performance (if available indexes don't cater to our case).
Create a new index with MANDT and the 4 fields which are in the WHERE clause, in sequence.
3. What will be the impact on the table performance if we create a new index.
Since the newly created index would be only the fourth index on the table, there shouldn't be any significant side effects.
After creating the index, check the change in performance of the current program, and also of other programs that have select queries on EDIDC (preferably with various types of WHERE clauses), to verify that the new index does not have a negative impact on performance. Additionally, if possible, check whether you can avoid INTO CORRESPONDING FIELDS.
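As a rough sketch of both suggestions (the target structure ty_edidc is hypothetical, and the index name EDIDC~2 in the %_HINTS clause is an assumption that depends on your system and database), the selection could avoid INTO CORRESPONDING FIELDS by matching the field list to the target structure:

```abap
TYPES: BEGIN OF ty_edidc,          " hypothetical target structure,
         docnum TYPE edidc-docnum, " fields in the same order as the
         direct TYPE edidc-direct, " field list of the SELECT
         mestyp TYPE edidc-mestyp,
         mescod TYPE edidc-mescod,
         rcvprn TYPE edidc-rcvprn,
         sndprn TYPE edidc-sndprn,
         upddat TYPE edidc-upddat,
         updtim TYPE edidc-updtim,
       END OF ty_edidc.

DATA t_edidc TYPE STANDARD TABLE OF ty_edidc.

* Guard required with FOR ALL ENTRIES: an empty driver table
* would otherwise select the whole table.
IF t_error_idoc IS NOT INITIAL.
  SELECT docnum direct mestyp mescod rcvprn sndprn upddat updtim
    INTO TABLE t_edidc            " no CORRESPONDING FIELDS needed
    FROM edidc
    FOR ALL ENTRIES IN t_error_idoc
    WHERE upddat GE gv_date1
      AND upddat LE gv_date2
      AND updtim GE p_time
      AND status EQ t_error_idoc-status
    %_HINTS ORACLE 'INDEX("EDIDC" "EDIDC~2")'. " Oracle-only; index name assumed
ENDIF.
```

The %_HINTS addition is database-specific and should normally be left to the optimizer; it is shown only because the answer mentions naming the index explicitly in the query.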
Regards ,
Seth
Similar Messages
-
SELECT query performance : One big table Vs many small tables
Hello,
We are using BDB 11g with SQLITE support. I have a query about 'select' query performance when we have one huge table vs. multiple small tables.
Basically, in our application we need to run the select query multiple times, and today we have one huge table. Do you think breaking it into
multiple small tables will help?
For test purposes we tried creating multiple tables, but the performance of the 'select' query was more or less the same. Would that be because all tables map to only one database in the backend with key/value pairs, so a lookup (select query) on a small table or a big table won't make a difference?
Thanks.
Hello,
There is some information on this topic in the FAQ at:
http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
If this does not address your question, please just let me know.
Thanks,
Sandra -
Index not getting used in the query(Query performance improvement)
Hi,
I am using oracle 10g version and have this query:
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from Action_type at,
action_history ah,
sensitivity_audit sa,
logical_entity le,
feed_static fs,
feed_book_status fbs,
feed_instance fi,
marsnode bk
where at.description = 'Regress Positions'
and fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and fi.most_recent = 'Y'
and bk.close_date is null
and ah.time_draft = 'after'
and sa.close_action_id is null
and le.close_action_id is null
and at.action_type_id = ah.action_type_id
and ah.action_id = sa.create_action_id
and le.logical_entity_id = sa.type_id
and sa.feed_id = fs.feed_id
and sa.book_id = bk.node_id
and sa.feed_instance_id = fi.feed_instance_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
union
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from feed_book_status fbs,
marsnode bk,
feed_instance fi,
feed_static fs,
feed_book_status_history fbsh,
Action_type at,
Action_history ah
where fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and ah.action_type_id = 103
and bk.close_date is null
and ah.time_draft = 'after'
-- and ah.action_id = fbs.action_id
and fbs.book_id = bk.node_id
and fbs.book_id = fbsh.book_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
and fbsh.create_action_id = ah.action_id
and at.action_type_id = ah.action_type_id
union
select distinct bk.name "Book Name",
fs.feed_description "Feed Name",
fbs.cob_date "Cob",
at.description "Data Type",
ah.user_name " User",
ah.comments "Comments",
ah.time_draft
from feed_book_status fbs,
marsnode bk,
feed_instance fi,
feed_static fs,
feed_book_status_history fbsh,
Action_type at,
Action_history ah
where fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011'
and ah.action_type_id = 101
and bk.close_date is null
and ah.time_draft = 'after'
and fbs.book_id = bk.node_id
and fbs.book_id = fbsh.book_id
and fbs.feed_instance_id = fi.feed_instance_id
and fi.feed_id = fs.feed_id
and fbsh.create_action_id = ah.action_id
and at.action_type_id = ah.action_type_id;
This is the execution plan:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 231 | 43267 | 104K (85)|
| 1 | SORT UNIQUE | | 231 | 43267 | 104K (85)|
| 2 | UNION-ALL | | | | |
| 3 | NESTED LOOPS | | 1 | 257 | 19540 (17)|
| 4 | NESTED LOOPS | | 1 | 230 | 19539 (17)|
| 5 | NESTED LOOPS | | 1 | 193 | 19537 (17)|
| 6 | NESTED LOOPS | | 1 | 152 | 19534 (17)|
|* 7 | HASH JOIN | | 213 | 26625 | 19530 (17)|
|* 8 | TABLE ACCESS FULL | LOGICAL_ENTITY | 12 | 264 | 2 (0)|
|* 9 | HASH JOIN | | 4267 | 429K| 19527 (17)|
|* 10 | HASH JOIN | | 3602 | 90050 | 1268 (28)|
|* 11 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 46826 | 22 (5)|
|* 12 | TABLE ACCESS FULL | FEED_INSTANCE | 335K| 3927K| 1217 (27)|
|* 13 | TABLE ACCESS FULL | SENSITIVITY_AUDIT | 263K| 19M| 18236 (17)|
| 14 | TABLE ACCESS BY INDEX ROWID | FEED_STATIC | 1 | 27 | 1 (0)|
|* 15 | INDEX UNIQUE SCAN | IDX_FEED_STATIC_FI | 1 | | 0 (0)|
|* 16 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
|* 17 | INDEX RANGE SCAN | PK_MARSNODE | 3 | | 2 (0)|
|* 18 | TABLE ACCESS BY INDEX ROWID | ACTION_HISTORY | 1 | 37 | 2 (0)|
|* 19 | INDEX UNIQUE SCAN | PK_ACTION_HISTORY | 1 | | 1 (0)|
|* 20 | TABLE ACCESS BY INDEX ROWID | ACTION_TYPE | 1 | 27 | 1 (0)|
|* 21 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
|* 22 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
| 23 | NESTED LOOPS | | 115 | 21505 | 42367 (28)|
|* 24 | HASH JOIN | | 114 | 16644 | 42023 (28)|
| 25 | NESTED LOOPS | | 114 | 13566 | 42007 (28)|
|* 26 | HASH JOIN | | 114 | 12426 | 41777 (28)|
|* 27 | HASH JOIN | | 957 | 83259 | 41754 (28)|
|* 28 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | 91760 | 30731 (28)|
| 29 | NESTED LOOPS | | 9570K| 456M| 10234 (21)|
| 30 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | 27 | 1 (0)|
|* 31 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
| 32 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| 209M| 10233 (21)|
|* 33 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 79244 | 22 (5)|
| 34 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | 10 | 2 (0)|
|* 35 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | 1 (0)|
| 36 | TABLE ACCESS FULL | FEED_STATIC | 2899 | 78273 | 16 (7)|
|* 37 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | 2 (0)|
|* 38 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | 41 | 3 (0)|
| 39 | NESTED LOOPS | | 115 | 21505 | 42367 (28)|
|* 40 | HASH JOIN | | 114 | 16644 | 42023 (28)|
| 41 | NESTED LOOPS | | 114 | 13566 | 42007 (28)|
|* 42 | HASH JOIN | | 114 | 12426 | 41777 (28)|
|* 43 | HASH JOIN | | 957 | 83259 | 41754 (28)|
|* 44 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | 91760 | 30731 (28)|
| 45 | NESTED LOOPS | | 9570K| 456M| 10234 (21)|
| 46 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | 27 | 1 (0)|
|* 47 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | 0 (0)|
| 48 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| 209M| 10233 (21)|
|* 49 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | 79244 | 22 (5)|
| 50 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | 10 | 2 (0)|
|* 51 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | 1 (0)|
| 52 | TABLE ACCESS FULL | FEED_STATIC | 2899 | 78273 | 16 (7)|
|* 53 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | 2 (0)|
------------------------------------------------------------------------------------------------------
and the predicate info:
Predicate Information (identified by operation id):
7 - access("LE"."LOGICAL_ENTITY_ID"="SA"."TYPE_ID")
8 - filter("LE"."CLOSE_ACTION_ID" IS NULL)
9 - access("SA"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
10 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
11 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
12 - filter("FI"."MOST_RECENT"='Y')
13 - filter("SA"."CLOSE_ACTION_ID" IS NULL)
15 - access("FI"."FEED_ID"="FS"."FEED_ID")
filter("SA"."FEED_ID"="FS"."FEED_ID")
16 - filter("BK"."CLOSE_DATE" IS NULL)
17 - access("SA"."BOOK_ID"="BK"."NODE_ID")
18 - filter("AH"."TIME_DRAFT"='after')
19 - access("AH"."ACTION_ID"="SA"."CREATE_ACTION_ID")
20 - filter("AT"."DESCRIPTION"='Regress Positions')
21 - access("AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
22 - filter("BK"."CLOSE_DATE" IS NULL)
24 - access("FI"."FEED_ID"="FS"."FEED_ID")
26 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
27 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
28 - filter("AH"."ACTION_TYPE_ID"=103 AND "AH"."TIME_DRAFT"='after')
31 - access("AT"."ACTION_TYPE_ID"=103)
33 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
35 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
37 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
38 - filter("BK"."CLOSE_DATE" IS NULL)
40 - access("FI"."FEED_ID"="FS"."FEED_ID")
42 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
43 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
44 - filter("AH"."ACTION_TYPE_ID"=101 AND "AH"."TIME_DRAFT"='after')
47 - access("AT"."ACTION_TYPE_ID"=101)
49 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
51 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
53 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
Note
- 'PLAN_TABLE' is old version
In this query, mainly the ACTION_HISTORY and FEED_BOOK_STATUS_HISTORY tables are getting accessed with full scans, even though there are indexes created on them like this:
ACTION_HISTORY
ACTION_ID column Unique index
FEED_BOOK_STATUS_HISTORY
(FEED_INSTANCE_ID, BOOK_ID, COB_DATE, VERSION) composite index
I tried all the best combinations; however, the indexes are not getting used anywhere.
Could you please suggest some way to make the query perform better?
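The full scans follow from the index definitions quoted above: the unique index on ACTION_ID and the (FEED_INSTANCE_ID, BOOK_ID, COB_DATE, VERSION) composite do not lead with the columns this query filters or joins on for those two tables. For illustration only (the index names are hypothetical, and the usual caveats about DML overhead and testing apply), indexes that would match the plan's predicates look something like:

```sql
-- Hypothetical: matches filter("AH"."TIME_DRAFT"='after' AND "AH"."ACTION_TYPE_ID"=103/101)
CREATE INDEX idx_ah_draft_type ON action_history (time_draft, action_type_id);

-- Hypothetical: supports the join fbsh.create_action_id = ah.action_id
CREATE INDEX idx_fbsh_create_action ON feed_book_status_history (create_action_id);
```

Whether the optimizer actually uses them then depends on selectivity: if a large fraction of rows satisfies time_draft = 'after', a full scan can still be the cheaper access path.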
Thanks,
Aashish
Hi Mohammed,
This is what I got after your method of execution plan
SQL_ID 4vmc8rzgaqgka, child number 0
select distinct bk.name "Book Name" , fs.feed_description "Feed Name" , fbs.cob_date
"Cob" , at.description "Data Type" , ah.user_name " User" , ah.comments "Comments"
, ah.time_draft from Action_type at, action_history ah, sensitivity_audit sa, logical_entity
le, feed_static fs, feed_book_status fbs, feed_instance fi, marsnode bk where at.description =
'Regress Positions' and fbs.cob_date BETWEEN '01 Feb 2011' AND '08 Feb 2011' and
fi.most_recent = 'Y' and bk.close_date is null and ah.time_draft='after' and
sa.close_action_id is null and le.close_action_id is null and at.action_type_id =
ah.action_type_id and ah.action_id=sa.create_action_id and le.logical_entity_id = sa.type_id
and sa.feed_id = fs.feed_id and sa.book_id = bk.node_id and sa.feed_instance_id =
fi.feed_instance_id and fbs.feed_instance_id = fi.feed_instance_id and fi.feed_id = fs.feed_id
union select distinct bk.name "Book Name" , fs.
Plan hash value: 1006571916
| Id | Operation | Name | E-Rows | OMem | 1Mem | Used-Mem |
| 1 | SORT UNIQUE | | 231 | 6144 | 6144 | 6144 (0)|
| 2 | UNION-ALL | | | | | |
| 3 | NESTED LOOPS | | 1 | | | |
| 4 | NESTED LOOPS | | 1 | | | |
| 5 | NESTED LOOPS | | 1 | | | |
| 6 | NESTED LOOPS | | 1 | | | |
|* 7 | HASH JOIN | | 213 | 1236K| 1236K| 1201K (0)|
|* 8 | TABLE ACCESS FULL | LOGICAL_ENTITY | 12 | | | |
|* 9 | HASH JOIN | | 4267 | 1023K| 1023K| 1274K (0)|
|* 10 | HASH JOIN | | 3602 | 1095K| 1095K| 1296K (0)|
|* 11 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
|* 12 | TABLE ACCESS FULL | FEED_INSTANCE | 335K| | | |
|* 13 | TABLE ACCESS FULL | SENSITIVITY_AUDIT | 263K| | | |
| 14 | TABLE ACCESS BY INDEX ROWID | FEED_STATIC | 1 | | | |
|* 15 | INDEX UNIQUE SCAN | IDX_FEED_STATIC_FI | 1 | | | |
|* 16 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
|* 17 | INDEX RANGE SCAN | PK_MARSNODE | 3 | | | |
|* 18 | TABLE ACCESS BY INDEX ROWID | ACTION_HISTORY | 1 | | | |
|* 19 | INDEX UNIQUE SCAN | PK_ACTION_HISTORY | 1 | | | |
|* 20 | TABLE ACCESS BY INDEX ROWID | ACTION_TYPE | 1 | | | |
|* 21 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
|* 22 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
| 23 | NESTED LOOPS | | 115 | | | |
|* 24 | HASH JOIN | | 114 | 809K| 809K| 817K (0)|
| 25 | NESTED LOOPS | | 114 | | | |
|* 26 | HASH JOIN | | 114 | 868K| 868K| 1234K (0)|
|* 27 | HASH JOIN | | 957 | 933K| 933K| 1232K (0)|
|* 28 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | | | |
| 29 | NESTED LOOPS | | 9570K| | | |
| 30 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | | | |
|* 31 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
| 32 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| | | |
|* 33 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
| 34 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | | | |
|* 35 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | | |
| 36 | TABLE ACCESS FULL | FEED_STATIC | 2899 | | | |
|* 37 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | | |
|* 38 | TABLE ACCESS BY INDEX ROWID | MARSNODE | 1 | | | |
| 39 | NESTED LOOPS | | 115 | | | |
|* 40 | HASH JOIN | | 114 | 743K| 743K| 149K (0)|
| 41 | NESTED LOOPS | | 114 | | | |
|* 42 | HASH JOIN | | 114 | 766K| 766K| 208K (0)|
|* 43 | HASH JOIN | | 957 | 842K| 842K| 204K (0)|
|* 44 | TABLE ACCESS FULL | ACTION_HISTORY | 2480 | | | |
| 45 | NESTED LOOPS | | 9570K| | | |
| 46 | TABLE ACCESS BY INDEX ROWID| ACTION_TYPE | 1 | | | |
|* 47 | INDEX UNIQUE SCAN | PK_ACTION_TYPE | 1 | | | |
| 48 | TABLE ACCESS FULL | FEED_BOOK_STATUS_HISTORY | 9570K| | | |
|* 49 | INDEX RANGE SCAN | IDX_FBS_CD_FII_BI | 3602 | | | |
| 50 | TABLE ACCESS BY INDEX ROWID | FEED_INSTANCE | 1 | | | |
|* 51 | INDEX UNIQUE SCAN | PK_FEED_INSTANCE | 1 | | | |
| 52 | TABLE ACCESS FULL | FEED_STATIC | 2899 | | | |
|* 53 | INDEX RANGE SCAN | PK_MARSNODE | 1 | | | |
Predicate Information (identified by operation id):
7 - access("LE"."LOGICAL_ENTITY_ID"="SA"."TYPE_ID")
8 - filter("LE"."CLOSE_ACTION_ID" IS NULL)
9 - access("SA"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
10 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
11 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
12 - filter("FI"."MOST_RECENT"='Y')
13 - filter("SA"."CLOSE_ACTION_ID" IS NULL)
15 - access("FI"."FEED_ID"="FS"."FEED_ID")
filter("SA"."FEED_ID"="FS"."FEED_ID")
16 - filter("BK"."CLOSE_DATE" IS NULL)
17 - access("SA"."BOOK_ID"="BK"."NODE_ID")
18 - filter("AH"."TIME_DRAFT"='after')
19 - access("AH"."ACTION_ID"="SA"."CREATE_ACTION_ID")
20 - filter("AT"."DESCRIPTION"='Regress Positions')
21 - access("AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
22 - filter("BK"."CLOSE_DATE" IS NULL)
24 - access("FI"."FEED_ID"="FS"."FEED_ID")
26 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
27 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
28 - filter(("AH"."ACTION_TYPE_ID"=103 AND "AH"."TIME_DRAFT"='after'))
31 - access("AT"."ACTION_TYPE_ID"=103)
33 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
35 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
37 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
38 - filter("BK"."CLOSE_DATE" IS NULL)
40 - access("FI"."FEED_ID"="FS"."FEED_ID")
42 - access("FBS"."BOOK_ID"="FBSH"."BOOK_ID")
43 - access("FBSH"."CREATE_ACTION_ID"="AH"."ACTION_ID" AND
"AT"."ACTION_TYPE_ID"="AH"."ACTION_TYPE_ID")
44 - filter(("AH"."ACTION_TYPE_ID"=101 AND "AH"."TIME_DRAFT"='after'))
47 - access("AT"."ACTION_TYPE_ID"=101)
49 - access("FBS"."COB_DATE">=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"FBS"."COB_DATE"<=TO_DATE(' 2011-02-08 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
51 - access("FBS"."FEED_INSTANCE_ID"="FI"."FEED_INSTANCE_ID")
53 - access("FBS"."BOOK_ID"="BK"."NODE_ID")
Note
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
122 rows selected.
Elapsed: 00:00:02.18
The action_type_id column is of NUMBER type. -
Query performance improvement using pipelined table function
Hi,
I have got two select queries; one is like...
select * from table
another is using a pipelined table function:
select *
from table(pipelined_function(cursor(select * from table)))
Which query will return the result set faster?
Please suggest methods for retrieving the dataset faster (using a pipelined table function) than a normal select query.
rgds
somy
Compare the performance between these solutions:
create table big as select * from all_objects;
First test the performance of a normal select statement:
begin
for r in (select * from big) loop
null;
end loop;
end;
/
Second, a pipelined function:
create type rc_vars as object
(OWNER VARCHAR2(30)
,OBJECT_NAME VARCHAR2(30));
create or replace type rc_vars_table as table of rc_vars ;
create or replace
function rc_get_vars
return rc_vars_table
pipelined
as
cursor c_aobj
is
select owner, object_name
from big;
l_aobj c_aobj%rowtype;
begin
for r_aobj in c_aobj loop
pipe row(rc_vars(r_aobj.owner,r_aobj.object_name));
end loop;
return;
end;
/
Test the performance of the pipelined function:
begin
for r in (select * from table(rc_get_vars)) loop
null;
end loop;
end;
/
On my system the simple select statement is 20 times faster.
Correction: it is 10 times faster, not 20.
Message was edited by:
wateenmooiedag -
Using DB Links - Improving SELECT query performance
Hi there,
I am using dblink in the following query:
I would like to improve the performance of the query by using hints, as described in the link: http://www.experts-exchange.com/Database/Oracle/9.x/Q_23640348.html. However, I am not sure how I can include this in my select query.
Details are:
Oracle - 9i Database Terminal Release .8
DB Link: TCPROD
Could someone please explain with an example how to use hints to get the query to select data on the remote database and then return the results to the target database?
Many Thanks.
SELECT ec.obid AS prObid,
ec.b2ProgramName AS program,
ec.projectName AS project,
ec.wbsID AS prNo,
ec.wbsName AS title,
ec.revision AS revision,
ec.superseded AS revisionSuperseded,
ec.lifeCycleState AS lifeCycleState,
ec.b2ChangeType AS type,
ec.b2Complexity AS subType,
ec.r1SsiCode AS ssi,
ec.b2disposition as disposition,
ec.wbsOriginator AS requestor,
ec.wbsAdministrator AS administrator,
ec.changepriority as priority,
ec.r1tsc as tsc,
ec.t1comments as tenixComments,
ec.b2securityclass as securityClassification,
ec.t1changesafety as safety,
ec.t1actionofficer as actionOfficer,
ec.t1changereason as changeReason,
ec.t1wbsextchangenumber as extChangeNo,
ec.creator as creator,
to_date(substr(ec.creationdate,
0,
instr(ec.creationdate, ':', 1, 3) - 1),
'YYYY/MM/DD-HH24:MI:SS') as creationdate,
to_date(ec.originatorassigndate, 'YYYY/MM/DD') as originatorassigndate,
zbd.description as description,
zbc.comments as comments
FROM (SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM awdbt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mart1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mpsdt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM nondt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnast1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnlht1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnolt1m4.cmPrRpIt@TCPROD
UNION
SELECT obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rzptt1m4.cmPrRpIt@TCPROD
It's the table name in the hint, not the column name.
something like
SELECT ec.obid AS prObid,
ec.b2ProgramName AS program,
ec.projectName AS project,
ec.wbsID AS prNo,
ec.wbsName AS title,
ec.revision AS revision,
ec.superseded AS revisionSuperseded,
ec.lifeCycleState AS lifeCycleState,
ec.b2ChangeType AS type,
ec.b2Complexity AS subType,
ec.r1SsiCode AS ssi,
ec.b2disposition as disposition,
ec.wbsOriginator AS requestor,
ec.wbsAdministrator AS administrator,
ec.changepriority as priority,
ec.r1tsc as tsc,
ec.t1comments as tenixComments,
ec.b2securityclass as securityClassification,
ec.t1changesafety as safety,
ec.t1actionofficer as actionOfficer,
ec.t1changereason as changeReason,
ec.t1wbsextchangenumber as extChangeNo,
ec.creator as creator,
to_date(substr(ec.creationdate,
0,
instr(ec.creationdate, ':', 1, 3) - 1),
'YYYY/MM/DD-HH24:MI:SS') as creationdate,
to_date(ec.originatorassigndate, 'YYYY/MM/DD') as originatorassigndate
FROM (SELECT /*+ DRIVING_SITE(awdbt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM awdbt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(mart1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mart1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(mpsdt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM mpsdt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(nondt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM nondt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(rnast1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnast1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(rnlht1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnlht1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(rnolt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rnolt1m4.cmPrRpIt@TCPROD
UNION
SELECT /*+ DRIVING_SITE(rzptt1m4.cmPrRpIt) */ obid,
b2ProgramName,
projectName,
wbsID,
wbsName,
revision,
superseded,
lifeCycleState,
b2ChangeType,
b2Complexity,
r1SsiCode,
b2disposition,
wbsOriginator,
wbsAdministrator,
changepriority,
r1tsc,
t1comments,
b2securityclass,
t1changesafety,
t1actionofficer,
t1changereason,
t1wbsextchangenumber,
creator,
creationdate,
originatorassigndate
FROM rzptt1m4.cmPrRpIt@TCPROD) ec
(not tested, of course) -
SELECT query performance issue
Hello experts!!!
I am facing a performance issue with the below SELECT query. It is taking a long time to execute.
Please suggest how I can improve the performance of this query.
SELECT MBLNR MATNR LIFNR MENGE WERKS BUKRS LGORT BWART INTO CORRESPONDING FIELDS OF TABLE IT_MSEG
FROM MSEG
WHERE MATNR IN S_MATNR
AND LIFNR IN S_LIFNR
AND WERKS IN S_WERKS
AND BUKRS IN S_BUKRS
AND XAUTO = ''
AND BWART IN ('541' , '542' , '543' , '544', '105' , '106').
Thanks in advance.
Regards
Ankur
Hi Ankur,
the MSEG index for material is
Index MSEG~M
MANDT
MATNR
WERKS
LGORT
BWART
SOBKZ
It could be used very efficiently if you supply values for MATNR, WERKS and LGORT.
There is no index on LIFNR. If you want the data for specific vendor(s), you should select from EKKO first; it has index EKKO~1:
MANDT
LIFNR
EKORG
EKGRP
BEDAT
You can JOIN EKKO and EKBE to get the BSEG key fields GJAHR BELNR BUZEI directly.
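A sketch of that route, using the selection ranges from the original query (S_LIFNR, S_BUKRS, S_MATNR, S_WERKS); the target structure ty_po and the exact field list are assumptions for illustration:

```abap
* Sketch: drive the selection through EKKO (index EKKO~1 starts with
* MANDT, LIFNR) and EKPO (index EKPO~1 starts with MANDT, MATNR, WERKS)
* instead of scanning MSEG on the non-indexed LIFNR.
TYPES: BEGIN OF ty_po,            " hypothetical target structure
         ebeln TYPE ekko-ebeln,
         lifnr TYPE ekko-lifnr,
         ebelp TYPE ekpo-ebelp,
         matnr TYPE ekpo-matnr,
         werks TYPE ekpo-werks,
         menge TYPE ekpo-menge,
       END OF ty_po.

DATA t_po TYPE STANDARD TABLE OF ty_po.

SELECT k~ebeln k~lifnr p~ebelp p~matnr p~werks p~menge
  INTO TABLE t_po
  FROM ekko AS k INNER JOIN ekpo AS p
    ON k~ebeln = p~ebeln
  WHERE k~lifnr IN s_lifnr
    AND k~bukrs IN s_bukrs
    AND p~matnr IN s_matnr
    AND p~werks IN s_werks.
```

From there, EKBE (purchase order history) can supply the material document keys if data from MSEG is still needed.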
I don't know your details, but I think you can get all you need from EKKO and EKBE. You may also consider EKPO, as it has a material index EKPO~1:
MANDT
MATNR
WERKS
BSTYP
LOEKZ
ELIKZ
MATKL
Do you really need the (much bigger) MSEG?
Regards,
Clemens -
Select query with secondary index
hi,
I have a report which is giving performance issues on a particular select query on the KONH table.
The select query doesn't use the primary key fields, and the table already has around 19 million entries. So a secondary index was created for the fields in the table.
Now, KONH is a client-specific table and hence has MANDT as its first field. When the table is not indexed it is sorted according to the order of its fields: first MANDT, then the primary key fields, and then the remaining fields (correct me if I am wrong).
But the secondary index that was created doesn't have MANDT in it (yes, a mistake!).
But instead of correcting the secondary index, I am told to change the select query.
So I used the CLIENT SPECIFIED syntax to sort out the issue, but I don't understand where I should put the "WHERE mandt EQ sy-mandt" clause.
Should I put it right after all my secondary index fields? And what happens to the order of fields which are not present in the secondary index?
Kindly help.
Thanks.
Hi Chinmay Kulkarni,
It's better if you can ask the concerned person to add the MANDT field to your index as well...
Indexes and MANDT
If a table begins with the mandt field, so should its indexes. If a table begins with mandt and an index doesn't, the optimizer might not use the index.
Remember, if you will, Open SQL's automatic client handling feature. When select * from ztxlfa1 where land1 = 'US' is executed, the actual SQL sent to the database is select * from ztxlfa1 where mandt = sy-mandt and land1 = 'US'. Sy-mandt contains the current logon client. When you select rows from a table using Open SQL, the system automatically adds sy-mandt to the where clause, which causes only those rows pertaining to the current logon client to be found.
When you create an index on a table containing mandt, therefore, you should also include mandt in the index. It should come first in the index, because it will always appear first in the generated SQL.
Index: Technical key of a database table.
Primary index: The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
Secondary index: Additional indexes could be created considering the most frequently accessed dimensions of the table.
Structure of an Index
An index can be used to speed up the selection of data records from a table.
An index can be considered to be a copy of a database table reduced to certain fields. The data is stored in sorted form in this copy. This sorting permits fast access to the records of the table (for example using a binary search). Not all of the fields of the table are contained in the index. The index also contains a pointer from the index entry to the corresponding table entry to permit all the field contents to be read.
When creating indexes, please note that:
An index can only be used up to the last specified field in the selection! Fields that are specified in the WHERE clause of a large number of selections should be in the first position.
Only those fields whose values significantly restrict the amount of data are meaningful in an index.
When you change a data record of a table, you must adjust the index sorting. Tables whose contents are frequently changed therefore should not have too many indexes.
Make sure that the indexes on a table are as disjunctive as possible.
(That is they should contain as few fields in common as possible. If two indexes on a table have a large number of common fields, this could make it more difficult for the optimizer to choose the most selective index.)
For Example...
SELECT KUNNR KUNN2 INTO TABLE T_CUST_TERR
FROM KNVP CLIENT SPECIFIED
WHERE MANDT = SY-MANDT " here MANDT shd be first
AND KUNN2 IN S_TERR
AND PARVW LIKE 'Z%'.
Accessing tables using Indexes
The database optimizer decides which index on the table should be used by the database to access data records.
You must distinguish between the primary index and secondary indexes of a table. The primary index contains the key fields of the table. The primary index is automatically created in the database when the table is activated. If a large table is frequently accessed such that it is not possible to apply primary index sorting, you should create secondary indexes for the table.
The indexes on a table have a three-character index ID. '0' is reserved for the primary index. Customers can create their own indexes on SAP tables; their IDs must begin with Y or Z.
If the index fields have key function, i.e. they already uniquely identify each record of the table, an index can be called a unique index. This ensures that there are no duplicate index fields in the database.
When you define a secondary index in the ABAP Dictionary, you can specify whether it should be created on the database when it is activated. Some indexes only result in a gain in performance for certain database systems. You can therefore specify a list of database systems when you define an index. The index is then only created on the specified database systems when activated.
Also pls have a look on below link
http://www.sapfans.com/sapfans/forum/devel/messages/30240.html
Hope it will solve your problem..
Reward points if useful...
Thanks & Regards
ilesh 24x7 -
the table i have
create table test of xmltype;
and the xml that i have loaded is
<root>
<company>
<department>
<id>10</id>
<name>Accounting</name>
</department>
<department>
<id>11</id>
<name>Billing</name>
</department>
</company>
</root>
select query using xmltable is
select id, name from test, xmltable('/root/company/department'
passing object_value
columns
id number path 'id',
name varchar2(20) path 'name');
the query is working fine but issue is performance
I have implemented indexes using the extract() and extractValue() functions, but as I have multiple occurrences of the data, those two are not working. I have a non-schema-based XMLType table.
I need help with creating an index on a multiple-occurrence element.
Any help is appreciated.
First of all, "XMLOptimizationCheck" AFAIK is not yet explained. I haven't checked support.oracle.com for a while though.
It is more or less an internally used check, but for the public it is a fast method to detect that Oracle's internal XQuery/XPath optimization rewrites towards SQL methods (shortcuts) are not working properly. SYS_XQEXVAL probably means something like XQuery Element XML Value/validation (??? towards an SQL value) and is not producing a simple construct with a predicate validation. The reasons section gives insight, just like a 10053 trace, into which attempts/rules were applied and failed or worked. I am guessing that the overall cost for the use of the normal PK index is so high because it cannot be properly matched and/or optimized against the global index structure supporting the partitions.
In all, a bit more info regarding the table/partition structure and its used index regime/structure would be helpful.
Besides that: THIS IS A BUG and should be reported (a request for help) via support.oracle.com.
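As an aside, for repeating elements a structured XMLIndex (rather than an extract()/extractValue() function-based index) is the usual approach. A hypothetical sketch only; the index name, path-table name, and column definitions below are assumptions based on the posted XML, not something tested against your system:

```sql
-- Hypothetical sketch: a structured XMLIndex over the repeating
-- /root/company/department elements of the posted table TEST.
-- Index and path-table names are assumed.
CREATE INDEX test_dept_ix ON test (OBJECT_VALUE)
  INDEXTYPE IS XDB.XMLINDEX
  PARAMETERS ('XMLTABLE test_dept_tab ''/root/company/department''
               COLUMNS id   NUMBER       PATH ''id'',
                       name VARCHAR2(20) PATH ''name''');
```

With such an index in place, the XMLTABLE query in the post can be rewritten by the optimizer against the path table instead of re-shredding every document.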
Edited by: Marco Gralike on Mar 23, 2011 9:35 PM -
Select statement performance improvement.
HI Guru's,
I am new to ABAP.
I have the below select stement
SELECT mandt msgguid pid exetimest
INTO TABLE lt_key
UP TO lv_del_rows ROWS
FROM (gv_master)
WHERE
* msgstate IN rt_msgstate
* AND ( adapt_stat = cl_xms_persist=>co_stat_adap_processed
* OR adapt_stat = cl_xms_persist=>co_stat_adap_undefined )
* AND itfaction = ls_itfaction
* AND msgtype = cl_xms_persist=>co_async
* AND
exetimest LE lv_timestamp
AND exetimest GE last_ts
AND reorg = cl_xms_persist=>co_reorg_ini
ORDER BY mandt itfaction reorg exetimest.
Can anyone help me how i can improve the performance of this statement?
Here is the sql trace for the statement:
SELECT
/*+
FIRST_ROWS (100)
*/
"MANDT" , "MSGGUID" , "PID" , "EXETIMEST"
FROM
"SXMSPMAST"
WHERE
"MANDT" = :A0 AND "EXETIMEST" <= :A1 AND "EXETIMEST" >= :A2 AND "REORG" = :A3
ORDER BY
"MANDT" , "ITFACTION" , "REORG" , "EXETIMEST"
Execution Plan
SELECT STATEMENT ( Estimated Costs = 3 , Estimated #Rows = 544 )
4 SORT ORDER BY
( Estim. Costs = 2 , Estim. #Rows = 544 )
Estim. CPU-Costs = 15.671.852 Estim. IO-Costs = 1
3 FILTER
2 TABLE ACCESS BY INDEX ROWID SXMSPMAST
( Estim. Costs = 1 , Estim. #Rows = 544 )
Estim. CPU-Costs = 11.130 Estim. IO-Costs = 1
1 INDEX RANGE SCAN SXMSPMAST~TST
Search Columns: 2
Estim. CPU-Costs = 3.329 Estim. IO-Costs = 0
Do I need to create any new index ? Do i need to remove the Order By clause?
Thanks in advance.
Why is there an
UP TO lv_del_rows ROWS
together with an ORDER BY?
The database must find all rows fulfilling the condition, sort them, and then return only the top lv_del_rows rows.
Therefore it can take a while.
For your index: always put the client field in first position.
Actually, I am not really convinced by your logic:
itfaction reorg exetimest.
itfaction is first in the sort order, so all records with the smallest itfaction will come first, but itfaction is not specified in the WHERE clause. Is this really what you want?
Change the index to mandt reorg exetimest
and change the ORDER BY to mandt reorg exetimest;
then it will become fast.
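Putting those two changes together, the statement would look roughly like this (a sketch reusing the variable names from the original post; it assumes the commented-out conditions stay removed and that a secondary index on MANDT, REORG, EXETIMEST exists):

```abap
" Sketch: WHERE and ORDER BY now both match an index on MANDT REORG EXETIMEST,
" so the database can read the rows in index order and stop after lv_del_rows.
SELECT mandt msgguid pid exetimest
  INTO TABLE lt_key
  UP TO lv_del_rows ROWS
  FROM (gv_master)
  WHERE exetimest LE lv_timestamp
    AND exetimest GE last_ts
    AND reorg     EQ cl_xms_persist=>co_reorg_ini
  ORDER BY mandt reorg exetimest.
```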
-
SELECT Query performance tunning
Hi All,
our objective is to read value from three DSO table, for that we have written three select query .
In this we have used three internal talbes.
We have written in END routine.
A model select statement for reading the Values in DSO and move statement i have given .
For 1,75,000 records the DTP takes about 8 hours to run.
Usually it should take just about 20 minutes.
Can anybody help with this, please?
SELECT logsys
doc_num
doc_item
comp_code
/bic/gpusiteid
/bic/gpumtgrid
/bic/gpuspntyp
/bic/gpuspndid
/bic/gpuprocmt
/bic/gpubufunc
co_area
order_quan
po_unit
entry_date
/bic/gpuitmddt
/bic/gpuovpoc
currency
/bic/gpudel_in
costcenter
/bic/gpuordnum
/bic/gpupostxt
FROM (c_poadm_det)
INTO TABLE t_podetails
FOR ALL ENTRIES IN result_package
WHERE logsys EQ result_package-logsys
AND doc_num EQ result_package-doc_num
AND doc_item EQ result_package-doc_item.
LOOP AT result_package
ASSIGNING <result_fields>.
UNASSIGN <fs_podetails>.
READ TABLE t_podetails
ASSIGNING <fs_podetails>
WITH KEY logsys = <result_fields>-logsys
doc_num = <result_fields>-doc_num
doc_item = <result_fields>-doc_item.
IF sy-subrc EQ 0.
MOVE <fs_podetails>-/bic/gpusiteid TO <result_fields>-/bic/gpusiteid.
MOVE <fs_podetails>-/bic/gpumtgrid TO <result_fields>-/bic/gpumtgrid.
MOVE <fs_podetails>-/bic/gpuspntyp TO <result_fields>-/bic/gpuspntyp.
IF <result_fields>-order_quan NE ' '.
MOVE c_true TO <result_fields>-/bic/gpucount.
ENDIF.
ENDIF.
Hi,
In the READ statement just use BINARY SEARCH; it will improve the performance. Before using BINARY SEARCH, the internal table must be sorted by the fields you use in the READ statement's WITH KEY condition.
sort t_podetails by logsys doc_num doc_item. "add this line
LOOP AT result_package
ASSIGNING <result_fields>.
* UNASSIGN <fs_podetails>. "no need to UNASSIGN here; the field symbol is only (re)assigned by the READ below
READ TABLE t_podetails
ASSIGNING <fs_podetails>
WITH KEY logsys = <result_fields>-logsys
doc_num = <result_fields>-doc_num
doc_item = <result_fields>-doc_item
BINARY SEARCH. "use BINARY SEARCH here
IF sy-subrc EQ 0.
MOVE <fs_podetails>-/bic/gpusiteid TO <result_fields>-/bic/gpusiteid.
MOVE <fs_podetails>-/bic/gpumtgrid TO <result_fields>-/bic/gpumtgrid.
MOVE <fs_podetails>-/bic/gpuspntyp TO <result_fields>-/bic/gpuspntyp.
IF <result_fields>-order_quan NE ' '.
MOVE c_true TO <result_fields>-/bic/gpucount.
ENDIF.
ENDIF.
Regards,
Dhina.. -
Issue with select query for secondary index
Hi all,
I have created a secondary index A on the MARA table with the fields MANDT and Packaging Material Type (VHART).
Now I am trying to write a report:
Tables : mara.
data : begin of itab occurs 0.
include structure mara.
data : end of itab.
select * from mara into table itab
CLIENT SPECIFIED where
MANDT = SY-MANDT and
VHART = 'WER'.
I'm getting an error
Unable to interpret "CLIENT". Possible causes of error: Incorrect spelling or comma error.
if i change to my select query to
select * from mara into table itab
where
MANDT = SY-MANDT and
VHART = 'WER'.
I'm getting an error
Without the addition "CLIENT SPECIFIED", you cannot specify the client field "MANDT" in the WHERE condition.
Let me know if I am wrong; we are on 4.6C.
Thanks.
Like I already said, even if you have added the mandt field to the secondary index, there is no need to use it in the select statement.
Let me elaborate on my reply before. If you have created a UNIQUE index, which I don't think you have, then you should include CLIENT in the index. A unique index for a client-dependent table must contain the client field.
Additional info:
The accessing speed does not depend on whether or not an index is defined as a unique index. A unique index is simply a means of defining that certain field combinations of data records in a table are unique.
Even if you have defined a secondary index, this does not automatically mean that this index is used. This also depends on the database optimizer. The optimizer will determine which index is best and use it. So before transporting this index, you should make sure that the index is used. To check this, have a look at the link:
[check if index is used|http://help.sap.com/saphelp_nw70/helpdata/EN/cf/21eb3a446011d189700000e8322d00/content.htm]
Edited by: Micky Oestreich on May 13, 2008 10:09 PM -
Query performance improvement techniques(urgent)
Hi experts,
I have a business requirement where I have to fill a mandatory variable with 59 days, reading from different regional cubes. These are all compressed cubes, with almost 45 million records per cube. When I enter the selection it picks one aggregate defined on the cube by default, but after running for some time it goes to a short dump. If report execution takes more than 10 minutes its performance is very bad, I know! Can you suggest how I can improve the performance of this report?
When I enter the same selections in the ListCube transaction it raises an LSLVCF36 error (message_type_text error).
Can you help me with this? I'll give full points.
Thanks and regards,
Veeru
Hi,
Try these
Go to transaction ST03 > switch to expert mode > from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and records transferred to the frontend vs. records selected from the DB.
2. Use the program SAP_INFOCUBE_DESIGNS to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
3. The +/- signs are the valuation of the aggregate; for example, -3 is the valuation of the aggregate design and usage. ++ means that its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the greater the number of minus signs, the worse the evaluation of the aggregate.
If it shows "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
Refer.
http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
Also your query performance can depend upon criteria and since you have given selection only on one infoprovider...just check if you are selecting huge amount of data in the report.
5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
/people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
Thanks,
JituK -
Single select query in different schemas for same table (Identical Structure)
Scenario :
Table XYZ is created in Schema A
After a year, the old data from the previous year is moved to a different schema. However, in the other schema the same table name is used.
For eg
Schema A contains table XYZ with data of 2012 yr
Schema B contains table XYZ with data of 2011 yr
Table XYZ in both the schemas have identical structure.
So can we fire a single select query to read the data from both the tables in an effective way?
Eg select * from XYZ where date range between 15-Oct-2011 to 15-Mar-2012.
However the data resides in 2 different schemas altogether.
Thanks for the reply.
Creating a view is an option.
But my problem is that there is an ORM layer (either Hibernate or Eclipse TopLink) between the application and the database.
So the queries are formed by the ORM layer and are not hand-written.
So I cannot use a view.
So is there any option that would allow me to use a single query on different schemas? -
Single select query in different schemas for same table (Identical Structure)
Scenario :
Table XYZ is created in Schema A
After a year, the old data from the previous year is moved to a different schema. However, in the other schema the same table name is used.
For eg
Schema A contains table XYZ with data of 2012 yr
Schema B contains table XYZ with data of 2011 yr
Table XYZ in both the schemas have identical structure.
So can we fire a single select query to read the data from both the tables in an effective way?
Eg select * from XYZ where date range between 15-Oct-2011 to 15-Mar-2012.
However the data resides in 2 different schemas altogether.
Creating a view is an option.
But my problem is that there is an ORM layer (either Hibernate or Eclipse TopLink) between the application and the database.
So the queries are formed by the ORM layer and are not hand-written.
So I cannot use a view.
So is there any option that would allow me to use a single query on different schemas?
Hi,
970773 wrote:
Scenario :
Table XYZ is created in Schema A
After an year, the old data from the previous year would be moved to different schema. However in the other schema the same table name would be used.
For eg
Schema A contains table XYZ with data of 2012 yr
Schema B contains table XYZ with data of 2011 yr
Table XYZ in both the schemas have identical structure.
So can we fire a single select query to read the data from both the tables in an effective way?
That depends on what you mean by "effective".
Eg select * from XYZ where date range between 15-Oct-2011 to 15-Mar-2012.
However the data resides in 2 different schemas altogether.
You can do a UNION, so the data from the two years appears together. The number of actual tables may make the query slower, but it won't change the results.
Given that you have 2 tables, the fact that they are in different schemas doesn't matter. Just make sure the user running the query has SELECT privileges on both of them.
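For example (a sketch only; the schema names and the date column doc_date are assumptions, since the actual column names were not posted):

```sql
-- Hypothetical sketch: one statement over both schemas' copies of XYZ.
-- Schema names and the date column (doc_date) are assumed.
SELECT * FROM schema_a.xyz
 WHERE doc_date BETWEEN DATE '2011-10-15' AND DATE '2012-03-15'
UNION ALL
SELECT * FROM schema_b.xyz
 WHERE doc_date BETWEEN DATE '2011-10-15' AND DATE '2012-03-15';
```

UNION ALL is used rather than UNION because the two tables hold disjoint years, so the duplicate-elimination sort of UNION would be wasted work.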
Creating a view is an option.
Is it? You seem to say it is not, below.
But my problem, there is ORM layer(either Hibernate or Eclipse Top Link) between the application and the database.
So the queries would be formed by the ORM layer and are not hand generated.
So I cannot use a view.
So creating a view is not an option. Or is it?
So is there any option that would allow me to use a single query on different schemas?
Anything that you can do with a view, you can do with sub-queries. A view is merely a convenience; it just saves a sub-query, so you don't have to re-code it every time you use it. Assuming you have privileges to query the base tables, you can always avoid using a view by repeating the query that defines the view in your own query. It will not be any slower. -
Dynamic Select query is failing with error "Invalid Table Name"
OPEN rc FOR 'SELECT count(*) from :s' USING tab_name;
fetch rc into rec_count;
CLOSE rc;
My requirement is to build a dynamic select query to retrieve the total count of rows in each table (the variable tab_name contains the table name).
But I am stuck on this error; not sure if there is any alternative!
ORA-00903: invalid table name
ORA-06512: at line 43
OPEN rc FOR 'SELECT count(*) from ' || tab_name;
fetch rc into rec_count;
CLOSE rc;
-- This will work
1. Create a sql statement.
2. Open ref cursor for that statement.
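A fuller sketch of that approach using EXECUTE IMMEDIATE; DBMS_ASSERT.SQL_OBJECT_NAME validates the identifier so the concatenation is not open to SQL injection (the table name DUAL below is just a placeholder):

```sql
-- Sketch: count the rows of a table whose name arrives in a variable.
-- Bind variables cannot stand in for object names, so the name must be
-- concatenated; DBMS_ASSERT validates it first.
DECLARE
  tab_name  VARCHAR2(30) := 'DUAL';  -- placeholder table name
  rec_count NUMBER;
BEGIN
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(tab_name)
    INTO rec_count;
  DBMS_OUTPUT.PUT_LINE(tab_name || ': ' || rec_count);
END;
/
```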