Performance issue on Select ...like
Hi all,
We are on Oracle 9.2.0.6. The following SQL statement has a very long runtime: we have an index on the fields involved, but when we use LIKE we get a full table scan. My question is about the second SQL statement: can we use BETWEEN (or the >= and <= operators) instead of LIKE? Please advise. I have already exported/imported the table and the statistics are OK, and I have set up histograms on the fields involved in the SELECT, but without success. The runtime for the SELECT using LIKE is about 30-40 seconds, which is not normal; the table contains more than 1 million rows.
1. Select Partner from tab1 where field1 like 'VAN%' and field2 like 'K%';
2. Select Partner from tab1 where field1 >= 'VAN' and field2 >='K';
Regards,
Aziz K.
Why not test yourself ?
SQL> create table p_tbl (partner varchar2(10), field1 varchar2(10), field2 varchar2(10));
Table created.
SQL>
SQL> begin
2 for i in 1..1000000 loop
3 insert into p_tbl values (trunc(i/100),dbms_random.string('U',10),dbms_random.string('U',10));
4 end loop;
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>
SQL> create index idx_p_tbl on p_tbl (field1,field2);
Index created.
SQL>
SQL> exec dbms_stats.gather_table_stats(user,'P_TBL', cascade=>true)
PL/SQL procedure successfully completed.
SQL>
SQL> set timi on
SQL> select partner, field1, field2
2 from p_tbl
3 where field1 like 'VAN%'
4 and field2 like 'K%';
PARTNER FIELD1 FIELD2
7453 VANMKBIEZC KYDQHLZQHM
1694 VANNBPJNQQ KTVUNNUGUR
3408 VANQNSYBQP KSSYLCQKZE
4324 VANSUKOECK KAJAYLSMLG
Elapsed: 00:00:00.04
SQL>
SQL> select partner, field1, field2
2 from p_tbl
3 where field1 between 'VAN' and 'VANZZZZZZZ'
4 and field2 between 'K' and 'KZZZZZZZZZ';
PARTNER FIELD1 FIELD2
7453 VANMKBIEZC KYDQHLZQHM
1694 VANNBPJNQQ KTVUNNUGUR
3408 VANQNSYBQP KSSYLCQKZE
4324 VANSUKOECK KAJAYLSMLG
Elapsed: 00:00:00.03
SQL> set timi off
SQL> explain plan for select partner, field1, field2
2 from p_tbl
3 where field1 like 'VAN%'
4 and field2 like 'K%';
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 500196574
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | 1 | 26 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| P_TBL | 1 | 26 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | IDX_P_TBL | 1 | | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
2 - access("FIELD1" LIKE 'VAN%' AND "FIELD2" LIKE 'K%')
filter("FIELD1" LIKE 'VAN%' AND "FIELD2" LIKE 'K%')
15 rows selected.
SQL> explain plan for select partner, field1, field2
2 from p_tbl
3 where field1 between 'VAN' and 'VANZZZZZZZ'
4 and field2 between 'K' and 'KZZZZZZZZZ';
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 500196574
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | 1 | 26 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| P_TBL | 1 | 26 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | IDX_P_TBL | 1 | | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
2 - access("FIELD1">='VAN' AND "FIELD2">='K' AND "FIELD1"<='VANZZZZZZZ' AND
"FIELD2"<='KZZZZZZZZZ')
filter("FIELD2"<='KZZZZZZZZZ' AND "FIELD2">='K')
16 rows selected.
Conclusion? LIKE and BETWEEN appear to behave the same in my case.
Nicolas.
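Outside of Oracle, the equivalence is easy to check: a LIKE with no leading wildcard is just a range predicate on the column's sort order. Here is a small sketch using Python's built-in sqlite3 (table and sample values are invented for illustration; a half-open range 'VAN' <= x < 'VAO' is used in place of the thread's BETWEEN ... 'VANZZZZZZZ' trick):

```python
import sqlite3

# Hypothetical data; table and column names mirror the thread but the rows are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE p_tbl (partner TEXT, field1 TEXT, field2 TEXT)")
rows = [("7453", "VANMKBIEZC", "KYDQHLZQHM"),
        ("1694", "VANNBPJNQQ", "KTVUNNUGUR"),
        ("9999", "WANABC",     "KXYZ"),      # field1 does not match VAN%
        ("1234", "VANAAA",     "LZZZ")]      # field2 does not match K%
conn.executemany("INSERT INTO p_tbl VALUES (?,?,?)", rows)

like_rows = conn.execute(
    "SELECT partner FROM p_tbl WHERE field1 LIKE 'VAN%' AND field2 LIKE 'K%' "
    "ORDER BY partner").fetchall()

# Same filter expressed as range predicates on the sort order.
range_rows = conn.execute(
    "SELECT partner FROM p_tbl WHERE field1 >= 'VAN' AND field1 < 'VAO' "
    "AND field2 >= 'K' AND field2 < 'L' ORDER BY partner").fetchall()

print(like_rows == range_rows)  # True: the two predicates select the same rows
```

Since the optimizer sees both forms as index range scans on (field1, field2), the identical plan hash value in the transcripts above is exactly what one would expect.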
Similar Messages
-
Performance Issue with Selection Screen Values
Hi,
I am facing a performance issue(seems like a performance issue ) in my project.
I have a query with some RKFs and sales area in filters (single value variable which is optional).
Query is by default restricted by current month.
The Cube on which the query operates has around 400,000 records for a month.
The Cube gets loaded every three hours
When I run the query with no filters I get the output within 10~15 secs.
The issue I am facing is that when I enter a sales area in my selection screen, the query gets stuck in the data selection step. In fact, we face the same problem if we use one or two other characteristics in our selection screen.
We have aggregates/indexes etc on our cube.
Has any one faced a similar situation?
Does any one have any comments on this ?
Your help will be appreciated. Thanks
Hi A R,
Go to RSRT, enter your query name, and choose Execute + Debug.
A popup will appear with many checkboxes; select the "Display Aggregates Found" option, then enter your selections in the variable screen. It will first list the names of the aggregates that already exist; continue, and after displaying all the aggregates it will show the list of objects involved, cube by cube. Copy these objects into a notepad. Now repeat with your drill-downs: you will again get the existing aggregates for each drill-down and the related list of objects; copy those to the notepad as well. Sort all the objects belonging to one cube and delete the duplicates. Then go to that InfoCube, open the context menu, choose Maintain Aggregates, and create an aggregate on the objects you copied into the notepad.
Now try to execute the report; it should run without delays for those selections.
I hope it helps you...
Regards,
Ramki. -
Performance Issue on Select Condition on KNA1 table
Hi,
I am facing a problem: selecting from table KNA1 for a given account group and attribute 9 takes a lot of time.
Please suggest a better SELECT query, or any other feasible solution to this problem.
select
kunnr
kotkd
from kna1
where kunnr eq parameter value and
kotkd eq 'ZPY' and katr9 = 'L' or 'T'.
At first I used IN on katr9, then I removed it due to a performance issue and used READ instead. Please suggest a further performance solution.
Hi,
The select should be like:
select
kunnr
kotkd
from kna1
where kunnr eq parameter value
and kotkd eq 'ZPY'
and katr9 in r_katr9. " 'L' or 'T'.
create a range for katr9 like r_katr9 with L or T.
Because while selecting the entries from KNA1, it will check for KATR9 = L and then KATR9 = T.
Hope the above statement is useful for you.
Regards,
Shiva. -
Performance issue while selecting material documents MKPF & MSEG
Hello,
I'm facing performance issues in production while selecting Material documents for Sales order and item based on the Sales order Stock.
Here is the query :
I first select data from the EBEW table (the sales order stock table), and then run this query:
IF ibew[] IS NOT INITIAL AND ignore_material_documents IS INITIAL.
* Select the Material documents created for the the sales orders.
SELECT mkpf~mblnr mkpf~budat
mseg~matnr mseg~mat_kdauf mseg~mat_kdpos mseg~shkzg
mseg~dmbtr mseg~menge
INTO CORRESPONDING FIELDS OF TABLE i_mseg
FROM mkpf INNER JOIN mseg
ON mkpf~mandt = mseg~mandt
AND mkpf~mblnr = mseg~mblnr
AND mkpf~mjahr = mseg~mjahr
FOR ALL entries IN ibew
WHERE mseg~matnr = ibew-matnr
AND mseg~werks = ibew-bwkey
AND mseg~mat_kdauf = ibew-vbeln
AND mseg~mat_kdpos = ibew-posnr.
SORT i_mseg BY mat_kdauf ASCENDING
mat_kdpos ASCENDING
budat DESCENDING.
ENDIF.
I need to select the material documents because the end users want to see the stock as of a certain date for the sales orders, and only the material document lines can give this information. Also, the EBEW table gives stock only for the current date.
For Example :
If the report is run on 5th Oct 2008 for a stock date of 30th Sept 2008, then I need to consider the goods movements after 30th Sept: add back stock that was issued, and subtract stock that was received.
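The roll-back calculation described here can be sketched as follows (a Python illustration with invented quantities; the SHKZG 'S'/'H' debit/credit convention is assumed from standard MM, everything else is hypothetical):

```python
from datetime import date

# Hypothetical movements: (posting date, debit/credit indicator, quantity).
# 'S' = receipt (stock increased), 'H' = issue (stock decreased).
movements = [
    (date(2008, 9, 25), "S", 100),   # receipt before the stock date - irrelevant
    (date(2008, 10, 2), "H", 30),    # issue after the stock date
    (date(2008, 10, 4), "S", 10),    # receipt after the stock date
]
stock_today = 80                     # current stock as read on 2008-10-05
stock_date = date(2008, 9, 30)       # the date the user wants stock for

# Roll the current stock back: undo every movement posted after the stock date.
stock_as_of = stock_today
for posting_date, shkzg, qty in movements:
    if posting_date > stock_date:
        stock_as_of += qty if shkzg == "H" else -qty

print(stock_as_of)  # 100: 80 today, plus the 30 issued, minus the 10 received
```

The point of the sketch is that only movements posted after the stock date need to be selected, which is a much smaller slice of MSEG than "all storage locations and all movement types".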
I know there is an index MSEG~M on MSEG in the database; however, I don't know which storage locations (LGORT) and movement types (BWART) should be considered, so I tried using all the storage locations and movement types available in the system, but this made the query run even slower than before.
I could create an index on the fields in the where clause, but that would be an overhead anyway.
Your help will be appreciated. Thanks in advance
regards,
Advait
Hi Thomas,
Thanks for your reply. The performance of the query improved significantly after switching the join order from MSEG JOIN MKPF.
Actually, I even tried without the join, looping with field symbols; this works slightly faster than the switched join.
Here are the results, tried with 371 records, as our sandbox unfortunately doesn't have many entries:
Results before switching the join: 146,036 microseconds
Results after switching the join: 38,029 microseconds
Results without the join: 28,068 microseconds for the selection and 5,725 microseconds for the looping
Thanks again.
regards,
Advait -
Performance issue with select query
Hi friends ,
This is my SELECT query, which is taking a long time to retrieve the records. Can anyone help me solve this performance issue?
*- Get the Goods receipts mainly selected per period (=> MKPF secondary
SELECT mseg~ebeln mseg~ebelp mseg~werks
ekko~bukrs ekko~lifnr ekko~zterm ekko~ekorg ekko~ekgrp
ekko~inco1 ekko~exnum
lfa1~name1 lfa1~land1 lfa1~ktokk lfa1~stceg
mkpf~mblnr mkpf~mjahr mseg~zeile mkpf~bldat mkpf~budat
mseg~bwart
*Start of changes for CIP 6203752 by PGOX02
mseg~smbln
*End of changes for CIP 6203752 by PGOX02
ekpo~matnr ekpo~txz01 ekpo~menge ekpo~meins
ekbe~menge ekbe~dmbtr ekbe~wrbtr ekbe~waers
ekpo~lgort ekpo~matkl ekpo~webaz ekpo~konnr ekpo~ktpnr
ekpo~plifz ekpo~bstae
INTO TABLE it_temp
FROM mkpf JOIN mseg ON mseg~mblnr EQ mkpf~mblnr
AND mseg~mjahr EQ mkpf~mjahr
JOIN ekbe ON ekbe~ebeln EQ mseg~ebeln
AND ekbe~ebelp EQ mseg~ebelp
AND ekbe~zekkn EQ '00'
AND ekbe~vgabe EQ '1'
AND ekbe~gjahr EQ mseg~mjahr
AND ekbe~belnr EQ mseg~mblnr
AND ekbe~buzei EQ mseg~zeile
JOIN ekpo ON ekpo~ebeln EQ ekbe~ebeln
AND ekpo~ebelp EQ ekbe~ebelp
JOIN ekko ON ekko~ebeln EQ ekpo~ebeln
JOIN lfa1 ON lfa1~lifnr EQ ekko~lifnr
WHERE mkpf~budat IN so_budat
AND mkpf~bldat IN so_bldat
AND mkpf~vgart EQ 'WE'
AND mseg~bwart IN so_bwart
AND mseg~matnr IN so_matnr
AND mseg~werks IN so_werks
AND mseg~lifnr IN so_lifnr
AND mseg~ebeln IN so_ebeln
AND ekko~ekgrp IN so_ekgrp
AND ekko~bukrs IN so_bukrs
AND ekpo~matkl IN so_matkl
AND ekko~bstyp IN so_bstyp
AND ekpo~loekz EQ space
AND ekpo~plifz IN so_plifz.
Thanks & Regards,
Manoj Kumar .Thatha
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting and please use code tags when posting code - post locked
Edited by: Rob Burbank on Feb 4, 2010 9:03 AM -
Performance issue when selection LIPS table into program.
Hi expert,
I have created a pending sales orders report, and I am facing a performance problem in the selection from the LIPS table.
I tried using the VLPMA table, but performance did not improve. Is there any need to create a secondary index, and
if so, which fields of the LIPS table should I include in the index?
Please reply.
Regards,
Jyotsna
>
UmaDave wrote:
> Hi ,
> 1.Please make use of PACKAGE in your select query , it will definetly improve the performance.
> 2.Please use the primary index by passing the fields in where clause in the order in which they appera in LIPS table.
> 3.You can also create a secondary index with the fields which you are using in where clause of select query and maintain the fields in the same sequence (where clause and secondary index)
> 4.If there is any inner joins (more than 3) then reduce them and have few mare select queries for them and make use of for all entries.
>
> This will definitely improve the performance to great extend.
>
> Hope this is helpful.
> Regards,
> Uma
Please do some more research before offering advice:
PACKAGE SIZE is for memory management, not performance.
Creating a secondary index is using a hammer to swat a fly and the order in the SELECT is not relevant.
FAE does not improve performance over a JOIN.
Rob -
Performance issue in select when subselect is used
We have a select statement that is using a where clause that has dates in. I've simplified the SQL below to demonstrate roughly what it does:
select * from user_activity
where starttime >= trunc(sysdate - 1) + 18/24
and starttime < trunc(sysdate) + 18/24
(this selects records from a table where the starttime field has values between 6 pm yesterday and 6 pm today).
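For readers outside SQL*Plus, the TRUNC arithmetic above can be mimicked in Python (an illustrative sketch; the function name and sample date are invented):

```python
from datetime import datetime, timedelta

def window(days_ago: int, now: datetime):
    """Mimics TRUNC(SYSDATE - days_ago) + 18/24 ..
    TRUNC(SYSDATE - days_ago + 1) + 18/24: a 24-hour window starting at 6 pm."""
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    start = midnight - timedelta(days=days_ago) + timedelta(hours=18)
    return start, start + timedelta(days=1)

# Example: "yesterday's" window as seen on the morning of 2024-05-10.
now = datetime(2024, 5, 10, 9, 0)
start, end = window(1, now)
print(start, end)  # 2024-05-09 18:00:00 2024-05-10 18:00:00
```

TRUNC strips the time-of-day, and adding 18/24 (eighteen twenty-fourths of a day) moves the boundary to 6 pm; the Python midnight-plus-18-hours computation does the same thing.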
We are using this statement to create a materialized view which we refresh daily. Occasionally we have the need to refresh the data for a historic day, which means that we need to go in and change the where lines, e.g. to get data from 3 days ago instead of yesterday, the where clause becomes:
select * from user_activity
where starttime >= trunc(sysdate - 3) + 18/24
and starttime < trunc(sysdate - 2) + 18/24
Having to recreate the views like this is a nuisance, so we decided to be 'smart' and create a table holding a setting that we could use to control the number of days the view looks back. So if our table is called days_ago, with a field called days (a single record, set to 1), we now have:
select * from user_activity
where starttime >= trunc(sysdate - (select days from days_ago)) + 18/24
and starttime < trunc(sysdate - (select days from days_ago) + 1) + 18/24
The original SQL takes a few seconds to run. However the 'improved' version takes 25 minutes.
The days table has only 1 record in it; selecting directly from it is instantaneous, as is running the subselect on its own.
Does anything jump out as being daft in this approach? We cannot explain why the performance has suddenly dropped off for such a simple change.
Hi,
Do you really need a view to do this?
Can't you define a bind variable and use it in your query:
VARIABLE days_ago NUMBER
EXEC :days_ago := 3;
SELECT ...
WHERE starttime >= TRUNC (SYSDATE - :days_ago) + (18 / 24)
AND starttime < TRUNC (SYSDATE - :days_ago) + (42 / 24) -- 42 = 24 + 18
If you really must have a view, then it might be faster if you got the parameter from a SYS_CONTEXT variable, rather than from a table.
Unfortunately, SYS_CONTEXT always returns a string, so you have to be careful, encoding the number as a string when you set the variable and decoding it when you use the variable:
WHERE starttime >= TRUNC (SYSDATE - TO_NUMBER ( SYS_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO'))) + (18 / 24)
AND starttime < TRUNC (SYSDATE - TO_NUMBER ( SYS_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO'))) + (42 / 24) -- 42 = 24 + 18
For more about SYS_CONTEXT, look it up in the SQL language manual and follow the links there:
http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions172.htm#sthref2268
If performance is important enough, consider storing the "fiscal day" as a separate (indexed) column. Starting in Oracle 11.1, you can use a virtual column for this. In earlier versions, you'd have to use a trigger. By "fiscal day", I mean:
TRUNC (starttime + (6/24))
If starttime is between 18:00:00 on December 28 and 17:59:59 on December 29, this will return 00:00:00 on December 29. You could use it in a query (or view) like this:
WHERE fiscal_day = TRUNC (SYSDATE) - 2 -
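The 6-hour shift behind the fiscal-day expression can be checked quickly in Python (an illustrative sketch; the function name is invented):

```python
from datetime import datetime, timedelta

def fiscal_day(starttime: datetime) -> datetime:
    # TRUNC(starttime + 6/24): shift forward by 6 hours, then truncate to
    # midnight, so the "day" runs from 18:00:00 to 17:59:59.
    shifted = starttime + timedelta(hours=6)
    return shifted.replace(hour=0, minute=0, second=0, microsecond=0)

print(fiscal_day(datetime(2023, 12, 28, 18, 0)))   # 2023-12-29 00:00:00
print(fiscal_day(datetime(2023, 12, 29, 17, 59)))  # 2023-12-29 00:00:00
print(fiscal_day(datetime(2023, 12, 29, 18, 0)))   # 2023-12-30 00:00:00
```

Both ends of the 18:00-to-17:59 window map to the same truncated date, which is what makes the indexed virtual-column approach work: the range query collapses to a simple equality.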
Performance Issue in Select Statement (For All Entries)
Hello,
I have a report where i have two select statement
First Select Statement:
Select A B C P Q R
from T1 into Table it_t1
where ....
Internal Table it_t1 is populated with 359801 entries through this select statement.
Second Select Statement:
Select A B C X Y Z
from T2 into Table it_t2 For All Entries in it_t1
where A eq it_t1-A
and B eq it_t1-B
and C eq it_t1-C
Now table T2 contains more than 10 lakh (1 million) records, and at the end of the select statement it_t2 is populated with 844,003 entries, but the second select statement takes a lot of time (15-20 min) to execute.
Can this code be optimized?
Also i have created respective indexes on table T1 and T2 for the fields in Where Condition.
Regards,
If you have completed all the steps mentioned by others in the above thread and are still facing issues, then:
Use a Select within Select.
Use a Select within Select.
First Select Statement:
Select A B C P Q R package size 5000
from T1 into Table it_t1
where ....
Second Select Statement:
Select A B C X Y Z
from T2 into Table it_t2 For All Entries in it_t1
where A eq it_t1-A
and B eq it_t1-B
and C eq it_t1-C
do processing........
endselect
This way, while using for all entries on T2, your it_t1, will have limited number of entries and thus the 2nd select will be faster.
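The PACKAGE SIZE idea - feeding FOR ALL ENTRIES limited batches of keys instead of one huge driver table - can be sketched language-neutrally in Python (the helper name and sizes are invented):

```python
def packages(keys, size):
    """Yield fixed-size slices of a key list, mimicking SELECT ... PACKAGE SIZE:
    each slice would drive one FOR ALL ENTRIES selection instead of one huge one."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

# Example: 12,000 driver keys processed in packages of 5,000.
key_list = list(range(12_000))
chunks = list(packages(key_list, 5000))
print([len(c) for c in chunks])  # [5000, 5000, 2000]
```

Every key is still covered exactly once; only the size of each generated IN-list (and therefore each database round trip) is bounded.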
Thanks,
Juwin -
Performance issue in select query
Moderator message: do not post the same question in two forums. Duplicate (together with answers) deleted.
SELECT a~grant_nbr
a~zzdonorfy
a~company_code
b~language
b~short_desc
INTO TABLE i_gmgr_text
FROM gmgr AS a
INNER JOIN gmgrtexts AS b ON a~grant_nbr = b~grant_nbr
WHERE a~grant_nbr IN s_grant
AND a~zzdonorfy IN s_dono
AND b~language EQ sy-langu
AND b~short_desc IN s_cont.
How to use for all all entries in the above inner join for better performance?
then....
IF sy-subrc EQ 0.
SORT i_gmgr_text BY grant_nbr.
ENDIF.
IF i_gmgr_text[] IS NOT INITIAL.
* Actual Line Item Table
SELECT rgrant_nbr
gl_sirid
rbukrs
rsponsored_class
refdocnr
refdocln
FROM gmia
INTO TABLE i_gmia
FOR ALL ENTRIES IN i_gmgr_text
WHERE rgrant_nbr = i_gmgr_text-grant_nbr
AND rbukrs = i_gmgr_text-company_code
AND rsponsored_class IN s_spon.
IF sy-subrc EQ 0.
SORT i_gmia BY refdocnr refdocln.
ENDIF.
Edited by: Matt on Dec 17, 2008 1:40 PM
> How to use for all entries in the above inner join for better performance?
My best Christmas recommendation for performance: simply ignore such recommendations.
And check the performance of your join!
Is the performance really low? If it is, then there is probably no index support. Without indexes, FOR ALL ENTRIES will be much slower.
Siegfried -
Performance issue with select query and for all entries.
hi,
I have a report to be performance tuned.
The database table has around 20 million entries and 25 fields.
The report fetches the distinct values of two fields using one select query.
This first select query fetches around 150 entries from the table for the 2 fields.
Then it applies some logic, eliminates some entries, and is left with around 80-90 entries.
It then applies a second select query on the same table, using FOR ALL ENTRIES on the internal table with those 80-90 entries.
In short, it accesses the same database table twice.
I tried to read the whole database table into an internal table, apply the logic there, and delete the unwanted entries, but that gave me a memory dump; it won't take that huge amount of data into ABAP memory.
Are 80-90 entries too many for FOR ALL ENTRIES?
The logic that eliminates entries from the internal table is too long to be converted into a WHERE clause, so it cannot be folded into a single select.
I really can't find a way out; please help.
chinmay kulkarni wrote:
Chinmay,
Even though you tried to ask the question with detailed explanation, unfortunately it is still not clear.
It is perfectly fine to access the same database twice. If that is working for you, I don't think there is any need to change the logic. As Rob mentioned, 80 or 8000 records is not a problem in "for all entries" clause.
>
> so, i tried to get the database table in internal table and apply the logic on internal table and delete the unwanted entries.. but it gave me memory dump, and it wont take that huge amount of data into abap memory...
>
It is not clear what you tried to do here. Did you try to bring all 20 million records into an internal table? That will certainly cause the program to short dump with memory shortage.
> the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
>
That is fine. Actually, it is better (performance wise) to do much of the work in ABAP than writing a complex WHERE clause that might bog down the database. -
Performance issue in selecting data from a view because of not in condition
Hi experts,
I have a requirement to select data in a view that is not available in a fact table, with certain join conditions; but the fact table contains 2 crore (20 million) rows, so this view is not working at all and runs for a long time. I am pasting the query here; please help me tune it. The whole query, except for the NOT IN at the end, executes in 15 minutes, but when I add the NOT IN condition it runs for many hours, as the second table has millions of records.
CREATE OR REPLACE FORCE VIEW EDWOWN.MEDW_V_GIEA_SERVICE_LEVEL11
SYS_ENT_ID,
SERVICE_LEVEL_NO,
CUSTOMER_NO,
BILL_TO_LOCATION,
PART_NO,
SRCE_SYS_ID,
BUS_AREA_ID,
CONTRACT,
WAREHOUSE,
ORDER_NO,
LINE_NO,
REL_NO,
REVISED_DUE_DATE,
REVISED_QTY_DUE,
QTY_RESERVED,
QTY_PICKED,
QTY_SHIPPED,
ABBREVIATION,
ACCT_WEEK,
ACCT_MONTH,
ACCT_YEAR,
UPDATED_FLAG,
CREATE_DATE,
RECORD_DATE,
BASE_WAREHOUSE,
EARLIEST_SHIP_DATE,
LATEST_SHIP_DATE,
SERVICE_DATE,
SHIP_PCT,
ALLOC_PCT,
WHSE_PCT,
ABC_CLASS,
LOCATION_ID,
RELEASE_COMP,
WAREHOUSE_DESC,
MAKE_TO_FLAG,
SOURCE_CREATE_DATE,
SOURCE_UPDATE_DATE,
SOURCE_CREATED_BY,
SOURCE_UPDATED_BY,
ENTITY_CODE,
RECORD_ID,
SRC_SYS_ENT_ID,
BSS_HIERARCHY_KEY,
SERVICE_LVL_FLAG
AS
SELECT SL.SYS_ENT_ID,
SL.ENTITY_CODE
|| '-'
|| SL.order_no
|| '-'
|| SL.LINE_NO
|| '-'
|| SL.REL_NO
SERVICE_LEVEL_NO,
SL.CUSTOMER_NO,
SL.BILL_TO_LOCATION,
SL.PART_NO,
SL.SRCE_SYS_ID,
SL.BUS_AREA_ID,
SL.CONTRACT,
SL.WAREHOUSE,
SL.ORDER_NO,
SL.LINE_NO,
SL.REL_NO,
SL.REVISED_DUE_DATE,
SL.REVISED_QTY_DUE,
NULL QTY_RESERVED,
NULL QTY_PICKED,
SL.QTY_SHIPPED,
SL.ABBREVIATION,
NULL ACCT_WEEK,
NULL ACCT_MONTH,
NULL ACCT_YEAR,
NULL UPDATED_FLAG,
SL.CREATE_DATE,
SL.RECORD_DATE,
SL.BASE_WAREHOUSE,
SL.EARLIEST_SHIP_DATE,
SL.LATEST_SHIP_DATE,
SL.SERVICE_DATE,
SL.SHIP_PCT,
0 ALLOC_PCT,
0 WHSE_PCT,
SL.ABC_CLASS,
SL.LOCATION_ID,
NULL RELEASE_COMP,
SL.WAREHOUSE_DESC,
SL.MAKE_TO_FLAG,
SL.source_create_date,
SL.source_update_date,
SL.source_created_by,
SL.source_updated_by,
SL.ENTITY_CODE,
SL.RECORD_ID,
SL.SRC_SYS_ENT_ID,
SL.BSS_HIERARCHY_KEY,
'Y' SERVICE_LVL_FLAG
FROM ( SELECT SL_INT.SYS_ENT_ID,
SL_INT.CUSTOMER_NO,
SL_INT.BILL_TO_LOCATION,
SL_INT.PART_NO,
SL_INT.SRCE_SYS_ID,
SL_INT.BUS_AREA_ID,
SL_INT.CONTRACT,
SL_INT.WAREHOUSE,
SL_INT.ORDER_NO,
SL_INT.LINE_NO,
MAX (SL_INT.REL_NO) REL_NO,
SL_INT.REVISED_DUE_DATE,
SUM (SL_INT.REVISED_QTY_DUE) REVISED_QTY_DUE,
SUM (SL_INT.QTY_SHIPPED) QTY_SHIPPED,
SL_INT.ABBREVIATION,
MAX (SL_INT.CREATE_DATE) CREATE_DATE,
MAX (SL_INT.RECORD_DATE) RECORD_DATE,
SL_INT.BASE_WAREHOUSE,
MAX (SL_INT.LAST_SHIPMENT_DATE) LAST_SHIPMENT_DATE,
MAX (SL_INT.EARLIEST_SHIP_DATE) EARLIEST_SHIP_DATE,
MAX (SL_INT.LATEST_SHIP_DATE) LATEST_SHIP_DATE,
MAX (
CASE
WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
TRUNC (SL_INT.LATEST_SHIP_DATE)
THEN
TRUNC (SL_INT.LAST_SHIPMENT_DATE)
ELSE
TRUNC (SL_INT.LATEST_SHIP_DATE)
END)
SERVICE_DATE,
MIN (
CASE
WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) >=
TRUNC (SL_INT.EARLIEST_SHIP_DATE)
AND TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
TRUNC (SL_INT.LATEST_SHIP_DATE)
AND SL_INT.QTY_SHIPPED = SL_INT.REVISED_QTY_DUE
THEN
100
ELSE
0
END)
SHIP_PCT,
SL_INT.ABC_CLASS,
SL_INT.LOCATION_ID,
SL_INT.WAREHOUSE_DESC,
SL_INT.MAKE_TO_FLAG,
MAX (SL_INT.source_create_date) source_create_date,
MAX (SL_INT.source_update_date) source_update_date,
SL_INT.source_created_by,
SL_INT.source_updated_by,
SL_INT.ENTITY_CODE,
SL_INT.RECORD_ID,
SL_INT.SRC_SYS_ENT_ID,
SL_INT.BSS_HIERARCHY_KEY
FROM (SELECT SL_UNADJ.*,
DECODE (
TRIM (TIMA.DAY_DESC),
'saturday', SL_UNADJ.REVISED_DUE_DATE
- 1
- early_ship_days,
'sunday', SL_UNADJ.REVISED_DUE_DATE
- 2
- early_ship_days,
SL_UNADJ.REVISED_DUE_DATE - early_ship_days)
EARLIEST_SHIP_DATE,
DECODE (
TRIM (TIMB.DAY_DESC),
'saturday', SL_UNADJ.REVISED_DUE_DATE
+ 2
+ LATE_SHIP_DAYS,
'sunday', SL_UNADJ.REVISED_DUE_DATE
+ 1
+ LATE_SHIP_DAYS,
SL_UNADJ.REVISED_DUE_DATE + LATE_SHIP_DAYS)
LATEST_SHIP_DATE
FROM (SELECT NVL (s2.sys_ent_id, '00') SYS_ENT_ID,
cust.customer_no CUSTOMER_NO,
cust.bill_to_loc BILL_TO_LOCATION,
cust.early_ship_days,
CUST.LATE_SHIP_DAYS,
ord.PART_NO,
ord.SRCE_SYS_ID,
ord.BUS_AREA_ID,
ord.BUS_AREA_ID CONTRACT,
NVL (WAREHOUSE, ord.entity_code) WAREHOUSE,
ORDER_NO,
ORDER_LINE_NO LINE_NO,
ORDER_REL_NO REL_NO,
TRUNC (REVISED_DUE_DATE) REVISED_DUE_DATE,
REVISED_ORDER_QTY REVISED_QTY_DUE,
-- NULL QTY_RESERVED,
-- NULL QTY_PICKED,
SHIPPED_QTY QTY_SHIPPED,
sold_to_abbreviation ABBREVIATION,
-- NULL ACCT_WEEK,
-- NULL ACCT_MONTH,
-- NULL ACCT_YEAR,
-- NULL UPDATED_FLAG,
ord.CREATE_DATE CREATE_DATE,
ord.CREATE_DATE RECORD_DATE,
NVL (WAREHOUSE, ord.entity_code)
BASE_WAREHOUSE,
LAST_SHIPMENT_DATE,
TRUNC (REVISED_DUE_DATE)
- cust.early_ship_days
EARLIEST_SHIP_DATE_UnAdj,
TRUNC (REVISED_DUE_DATE)
+ CUST.LATE_SHIP_DAYS
LATEST_SHIP_DATE_UnAdj,
--0 ALLOC_PCT,
--0 WHSE_PCT,
ABC_CLASS,
NVL (LOCATION_ID, '000') LOCATION_ID,
--NULL RELEASE_COMP,
WAREHOUSE_DESC,
NVL (
DECODE (MAKE_TO_FLAG,
'S', 0,
'O', 1,
'', -1),
-1)
MAKE_TO_FLAG,
ord.CREATE_DATE source_create_date,
ord.UPDATE_DATE source_update_date,
ord.CREATED_BY source_created_by,
ord.UPDATED_BY source_updated_by,
ord.ENTITY_CODE,
ord.RECORD_ID,
src.SYS_ENT_ID SRC_SYS_ENT_ID,
ord.BSS_HIERARCHY_KEY
FROM EDW_DTL_ORDER_FACT ord,
edw_v_maxv_cust_dim cust,
edw_v_maxv_part_dim part,
EDW_WAREHOUSE_LKP war,
EDW_SOURCE_LKP src,
MEDW_PLANT_LKP s2,
edw_v_incr_refresh_ctl incr
WHERE ord.BSS_HIERARCHY_KEY =
cust.BSS_HIERARCHY_KEY(+)
AND ord.record_id = part.record_id(+)
AND ord.part_no = part.part_no(+)
AND NVL (ord.WAREHOUSE, ord.entity_code) =
war.WAREHOUSE_code(+)
AND ord.entity_code = war.entity_code(+)
AND ord.record_id = src.record_id
AND src.calculate_back_order_flag = 'Y'
AND NVL (cancel_order_flag, 'N') != 'Y'
AND UPPER (part.source_plant) =
UPPER (s2.location_code1(+))
AND mapping_name = 'MEDW_MAP_GIEA_MTOS_STG'
-- AND NVL (ord.UPDATE_DATE, SYSDATE) >=
-- MAX_SOURCE_UPDATE_DATE
AND UPPER (
NVL (ord.order_status, 'BOOKED')) NOT IN
('ENTERED', 'CANCELLED')
AND TRUNC (REVISED_DUE_DATE) <= SYSDATE) SL_UNADJ,
EDW_TIME_DIM TIMA,
EDW_TIME_DIM TIMB
WHERE TRUNC (SL_UNADJ.EARLIEST_SHIP_DATE_UnAdj) =
TIMA.ACCOUNT_DATE
AND TRUNC (SL_UNADJ.LATEST_SHIP_DATE_Unadj) =
TIMB.ACCOUNT_DATE) SL_INT
WHERE TRUNC (LATEST_SHIP_DATE) <= TRUNC (SYSDATE)
GROUP BY SL_INT.SYS_ENT_ID,
SL_INT.CUSTOMER_NO,
SL_INT.BILL_TO_LOCATION,
SL_INT.PART_NO,
SL_INT.SRCE_SYS_ID,
SL_INT.BUS_AREA_ID,
SL_INT.CONTRACT,
SL_INT.WAREHOUSE,
SL_INT.ORDER_NO,
SL_INT.LINE_NO,
SL_INT.REVISED_DUE_DATE,
SL_INT.ABBREVIATION,
SL_INT.BASE_WAREHOUSE,
SL_INT.ABC_CLASS,
SL_INT.LOCATION_ID,
SL_INT.WAREHOUSE_DESC,
SL_INT.MAKE_TO_FLAG,
SL_INT.source_created_by,
SL_INT.source_updated_by,
SL_INT.ENTITY_CODE,
SL_INT.RECORD_ID,
SL_INT.SRC_SYS_ENT_ID,
SL_INT.BSS_HIERARCHY_KEY) SL
WHERE (SL.BSS_HIERARCHY_KEY,
SL.ORDER_NO,
Sl.line_no,
sl.Revised_due_date,
SL.PART_NO,
sl.sys_ent_id) NOT IN
(SELECT BSS_HIERARCHY_KEY,
ORDER_NO,
line_no,
revised_due_date,
part_no,
src_sys_ent_id
FROM MEDW_MTOS_DTL_FACT
WHERE service_lvl_flag = 'Y');
thanks
asn
Also 'NOT IN' + nullable columns can be an expensive combination - and may not give the expected results. For example, compare these:
with test1 as ( select 1 as key1 from dual )
, test2 as ( select null as key2 from dual )
select * from test1
where key1 not in
( select key2 from test2 );
no rows selected
with test1 as ( select 1 as key1 from dual )
, test2 as ( select null as key2 from dual )
select * from test1
where key1 not in
( select key2 from test2
where key2 is not null );
KEY1
1
1 row selected.
Even if the columns do contain values, if they are nullable Oracle has to perform a resource-intensive filter operation in case they are null. An EXISTS construction is not concerned with null values and can therefore use a more efficient execution plan, leading people to think it is inherently faster in general. -
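The two queries above can be reproduced with Python's built-in sqlite3, which follows the same SQL NULL semantics (a minimal sketch, not Oracle itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test1 (key1 INTEGER)")
conn.execute("CREATE TABLE test2 (key2 INTEGER)")
conn.execute("INSERT INTO test1 VALUES (1)")
conn.execute("INSERT INTO test2 VALUES (NULL)")

# NOT IN over a list containing NULL: the comparison is never TRUE, so no rows.
not_in = conn.execute(
    "SELECT key1 FROM test1 WHERE key1 NOT IN (SELECT key2 FROM test2)").fetchall()

# NOT EXISTS never matches the NULL row (NULL = 1 is not TRUE), so it behaves as expected.
not_exists = conn.execute(
    "SELECT key1 FROM test1 WHERE NOT EXISTS "
    "(SELECT 1 FROM test2 WHERE test2.key2 = test1.key1)").fetchall()

print(not_in, not_exists)  # [] [(1,)]
```

This is why rewriting the big NOT IN subquery as NOT EXISTS (or adding IS NOT NULL predicates to the subquery's columns) is usually the first thing to try here.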
Performance issue in Select Query on ERCH
Dear Friends,
I have to execute a query on ERCH in the production system, which has around 20 lakh (2 million) rows of data and keeps growing: "Select BELNR VERTRAG ADATSOLL from ERCH where BCREASON = '03'". The expected volume of data the query will return is approximately 1,400 records.
Since the where clause includes a field which is not a key field, please advise on the performance of the query so that it doesn't time out.
Regards,
Amit Srivastava
Hello Amit Srivastava,
You can add the Contract account number(VKONT) and Business Partner number(GPARTNER) in your query and can create a custom Index for the contract account number (We have this index created in our system).
This will improve the performance effectively.
Hope this answers your question.
Thanks,
Greetson -
Performance issue with NOT LIKE 'VA%' in Sql Query
I'm fetching vendor details from po_vendor_sites_all; in the where clause I use vendor_site_code NOT LIKE 'VA%'. Database: 10g.
NOT LIKE is hurting performance; is there any other option to improve it?
Any suggestions?
Thanks
dow005
Assuming a fairly even distribution of vendor_site_codes, and assuming a LIKE 'VA%'
would pick up 1% of the rows and assuming an index on vendor_site_codes
and a reasonable clustering we might expect that query to use an index.
Your query is NOT LIKE 'VA%' which implies picking up 99% of the rows = Full table scan.
The only option I can think of is to use parallelism in your query (if the system has the
power to handle it) or perhaps use more where clause restrictions on indexed column(s)
to reduce the number of rows picked up.
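A toy illustration of the selectivity argument (hypothetical codes, constructed so roughly 1% start with 'VA'): an index range scan can jump straight to the small LIKE 'VA%' slice, while NOT LIKE 'VA%' must visit everything else.

```python
# Hypothetical vendor_site_code values: every 100th code starts with 'VA'.
codes = ["VA" + str(i) if i % 100 == 0 else "X" + str(i) for i in range(10_000)]

like_va = [c for c in codes if c.startswith("VA")]        # what LIKE 'VA%' keeps
not_like_va = [c for c in codes if not c.startswith("VA")]  # what NOT LIKE keeps

print(len(like_va), len(not_like_va))  # 100 9900
```

Reading 100 rows via an index is cheap; reading 9,900 of 10,000 rows through an index would cost more than a full scan, which is why the optimizer chooses the scan for the NOT LIKE form.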
I've had to assume a lot of things as you don't give any info about the table and data.
If you could provide more information we might be able to help more.
See SQL and PL/SQL FAQ
Edited by: Paul Horth on 02-May-2012 07:12 -
Performance issue with selection of line items.
Hi All.
I am facing a serious TIME_OUT error in my program. I am developing an RFC and have to send data to a non-SAP system as it is in the SAP tables. Now I have to send the new BSIS entries for a day. So I first search BELNR by CPUDT in BKPF and then use FOR ALL ENTRIES on BSIS. But my problem is that for a single day I get 1,679 documents from BKPF, and when I use FOR ALL ENTRIES on BSIS, it gives a time-out error.
my code is like
SELECT BUKRS BELNR GJAHR
FROM BKPF INTO CORRESPONDING FIELDS OF TABLE I_BKPF
WHERE CPUDT IN S_CPUDT.
if i_bkpf[] is not initial.
SELECT * FROM BSIS INTO TABLE I_BSIS
FOR ALL ENTRIES IN I_BKPF
WHERE BUKRS = I_BKPF-BUKRS
AND BELNR = I_BKPF-BELNR
AND GJAHR = I_BKPF-GJAHR.
endif.
So please, gurus, help me - it's urgent.
Instead of writing SELECT *, select only the field names you need; that might solve your problem.
Reward points if helpful.
Performance Issue in select statements
The following statements are taking too much time to execute. Is there a better way to write these selection statements? int_report_data is my final internal table.
select f~splant f~vplant f~rplant f~l1_sto p~l1_delivery p~l1_gr p~l2_sto p~l2_delivery p~err_msg into (dochdr-swerks, dochdr-vwerks, dochdr-rwerks, dochdr-l1sto, docitem-l1xblnr, docitem-l1gr, docitem-l2sto, docitem-l2xblnr, docitem-err_msg) from zdochdr as f inner join zdocitem as p on f~l1_sto = p~l1_sto where f~splant in s_werks and
f~vplant in v_werks and f~rplant in r_werks and p~l1_delivery in l1_xblnr and p~l1_gr in l1_gr and p~l2_delivery in l2_xblnr.
move : dochdr-swerks to int_report_data-i_swerks,
dochdr-vwerks to int_report_data-i_vwerks,
dochdr-rwerks to int_report_data-i_rwerks,
dochdr-l1sto to int_report_data-i_l1sto,
docitem-l1xblnr to int_report_data-i_l1xblnr,
docitem-l1gr to int_report_data-i_l1gr,
docitem-l2sto to int_report_data-i_l2sto,
docitem-l2xblnr to int_report_data-i_l2xblnr,
docitem-err_msg to int_report_data-i_errmsg.
append int_report_data.
endselect.
Goods receipt
loop at int_report_data.
select single ebeln from ekbe into l2gr where ebeln = int_report_data-i_l2sto and bwart = '101' and bewtp = 'E' and vgabe = '1'.
if sy-subrc eq 0.
move l2gr to int_report_data-i_l2gr.
modify int_report_data.
endif.
endloop.
First billing document (I have to check fkart = 'ZRTY' for the second billing document; how can I write that statement?)
select vbeln from vbfa into (tabvbfa-vbeln) where vbelv = int_report_data-i_l2xblnr or vbelv = int_report_data-i_l1xblnr.
select single vbeln from vbrk into tabvbrk-vbeln where vbeln = tabvbfa-vbeln and fkart = 'IV'.
if sy-subrc eq 0.
move tabvbrk-vbeln to int_report_data-i_l2vbeln.
modify int_report_data.
endif.
endselect.
Thanks in advance,
Yad
Hi!
Which of your selects is slow? Make a SQL-trace, check which select(s) is(are) slow.
For EKBE and VBFA you are selecting first key field - in general that is fast. If your z-tables are the problem, maybe an index might help.
Instead of looping and making a lot of select singles, one select 'for all entries' can help, too.
Please analyze further and give feedback.
Regards,
Christian