Performance issue in select when subselect is used
We have a select statement that is using a where clause that has dates in. I've simplified the SQL below to demonstrate roughly what it does:
select * from user_activity
where starttime >= trunc(sysdate - 1) + 18/24
and starttime < trunc(sysdate) + 18/24
(this selects records from a table where a starttime field has values between 6pm yesterday and 6pm today).
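As an illustrative sketch (not part of the original post), the window the TRUNC arithmetic produces can be reproduced in Python; the function name and parameters here are made up for demonstration:

```python
from datetime import datetime, timedelta

def window(days_ago=1, cutoff_hour=18, now=None):
    """Return the [start, end) window equivalent to
    TRUNC(SYSDATE - days_ago) + 18/24 .. TRUNC(SYSDATE - days_ago + 1) + 18/24."""
    now = now or datetime.now()
    # TRUNC(SYSDATE) is midnight today
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    start = midnight - timedelta(days=days_ago) + timedelta(hours=cutoff_hour)
    end = start + timedelta(days=1)
    return start, end

# Running the report at 09:00 on 2024-03-10 with days_ago=1 gives
# the 18:00 2024-03-09 .. 18:00 2024-03-10 window.
start, end = window(1, now=datetime(2024, 3, 10, 9, 0))
```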
We are using this statement to create a materialized view which we refresh daily. Occasionally we have the need to refresh the data for a historic day, which means that we need to go in and change the where lines, e.g. to get data from 3 days ago instead of yesterday, the where clause becomes:
select * from user_activity
where starttime >= trunc(sysdate - 3) + 18/24
and starttime < trunc(sysdate - 2) + 18/24
Having to recreate the views like this is a nuisance, so we decided to be 'smart' and create a table with a setting in it that we could use to control the number of days the view looks back. So if our table is called days_ago with a field called days (with a single record, set to 1), we now have:
select * from user_activity
where starttime >= trunc(sysdate - (select days from days_ago)) + 18/24
and starttime < trunc(sysdate - (select days from days_ago) + 1) + 18/24
The original SQL takes a few seconds to run. However the 'improved' version takes 25 minutes.
The days_ago table only has 1 record in it, and running the subselect on its own, directly against the table, is instantaneous.
Does anything jump out as being daft in this approach? We cannot explain why the performance has suddenly dropped off for such a simple change.
Hi,
Do you really need a view to do this?
Can't you define a bind variable, and use it in your query:
VARIABLE days_ago NUMBER
EXEC :days_ago := 3;
SELECT ...
WHERE starttime >= TRUNC (SYSDATE - :days_ago) + (18 / 24)
AND starttime < TRUNC (SYSDATE - :days_ago) + (42 / 24) -- 42 = 24 + 18
If you really must have a view, then it might be faster if you got the parameter from a SYS_CONTEXT variable, rather than from a table.
Unfortunately, SYS_CONTEXT always returns a string, so you have to be careful to encode the number as a string when you set the variable, and decode it when you use the variable:
WHERE starttime >= TRUNC (SYSDATE - TO_NUMBER ( SYS_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO'))) + (18 / 24)
AND starttime < TRUNC (SYSDATE - TO_NUMBER ( SYS_CONTEXT ('MY_VIEW_NAMESPACE', 'DAYS_AGO'))) + (42 / 24) -- 42 = 24 + 18
For more about SYS_CONTEXT, look it up in the SQL language manual, and follow the links there:
http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions172.htm#sthref2268
If performance is important enough, consider storing the "fiscal day" as a separate (indexed) column. Starting in Oracle 11.1, you can use a virtual column for this. In earlier versions, you'd have to use a trigger. By "fiscal day", I mean:
TRUNC (starttime + (6/24))
If starttime is between 18:00:00 on December 28 and 17:59:59 on December 29, this will return 00:00:00 on December 29. You could use it in a query (or view) like this:
WHERE fiscal_day = TRUNC (SYSDATE) - 2
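A quick way to sanity-check the fiscal-day arithmetic outside the database is an illustrative Python sketch of TRUNC (starttime + 6/24) (the function name is made up for demonstration):

```python
from datetime import datetime, timedelta

def fiscal_day(starttime):
    """Equivalent of TRUNC(starttime + 6/24): shift the timestamp by
    6 hours, then truncate to midnight, so anything from 18:00 onwards
    rolls into the next calendar day."""
    shifted = starttime + timedelta(hours=6)
    return shifted.replace(hour=0, minute=0, second=0, microsecond=0)

# 18:00 Dec 28 and 17:59 Dec 29 both land on the Dec 29 fiscal day;
# 18:00 Dec 29 starts the Dec 30 fiscal day.
fiscal_day(datetime(2023, 12, 28, 18, 0))
fiscal_day(datetime(2023, 12, 29, 17, 59))
fiscal_day(datetime(2023, 12, 29, 18, 0))
```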
Similar Messages
-
Performance Issue with Selection Screen Values
Hi,
I am facing what seems to be a performance issue in my project.
I have a query with some RKFs and sales area in filters (single value variable which is optional).
Query is by default restricted by current month.
The Cube on which the query operates has around 400,000 records for a month.
The Cube gets loaded every three hours
When I run the query with no filters I get the output within 10~15 secs.
The issue I am facing is that when I enter a sales area in my selection screen, the query gets stuck in the data selection step. In fact, we face the same problem if we use one or two other characteristics in our selection screen.
We have aggregates/indexes etc on our cube.
Has any one faced a similar situation?
Does any one have any comments on this ?
Your help will be appreciated. Thanks
Hi A R,
Go to RSRT, enter your query name, and choose Execute + Debug.
A popup with many checkboxes will appear; select the "Display Aggregates Found" option, then enter your selections in the variable screen. It will first show the names of the aggregates that already exist; continue, and after displaying all the aggregates it will list the objects involved, cube by cube. Copy these objects into Notepad. Now repeat with your drill-downs: you will again get the existing aggregates for each drill-down and their object lists; copy them into Notepad too. Sort all the objects belonging to one cube and delete the duplicates in Notepad, then go to that InfoCube > Context > Maintain Aggregates and create an aggregate on the objects you copied into Notepad.
Now try to execute the report; it should run without delays for those selections.
I hope it helps you...
Regards,
Ramki. -
Performance Issue on Select Condition on KNA1 table
Hi,
I am facing a problem when selecting from table KNA1 for a given account group and attribute 9: it is taking a lot of time.
Please suggest a better select query, or any other feasible solution to this problem.
select
kunnr
kotkd
from kna1
where kunnr eq parameter value and
kotkd eq 'ZPY' and katr9 = 'L' or 'T'.
At first I was using IN on katr9, then I removed it due to a performance issue and used READ instead. Please suggest further performance improvements.
Hi,
The select should be like:
select
kunnr
kotkd
from kna1
where kunnr eq parameter value
and kotkd eq 'ZPY'
and katr9 in r_katr9. " 'L' or 'T'.
create a range for katr9 like r_katr9 with L or T.
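For readers unfamiliar with ABAP ranges: a range table is a list of (sign, option, low, high) rows. A simplified, illustrative Python sketch of how r_katr9 would match (only the common 'I'/'EQ' and 'I'/'BT' cases are handled; real ranges support more options):

```python
def in_range(value, range_tab):
    """Mimic an ABAP ranges table match. Each row is (sign, option, low, high):
    sign 'I' includes matching values, 'E' excludes them."""
    hit = False
    for sign, option, low, high in range_tab:
        if option == 'EQ':
            match = (value == low)
        elif option == 'BT':
            match = (low <= value <= high)
        else:
            raise ValueError("option not sketched: " + option)
        if sign == 'E' and match:
            return False  # exclusions win
        if sign == 'I' and match:
            hit = True
    return hit

# r_katr9 with L or T, as suggested in the answer above
r_katr9 = [('I', 'EQ', 'L', None), ('I', 'EQ', 'T', None)]
in_range('L', r_katr9)
in_range('X', r_katr9)
```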
Because while selecting the entries from KNA1, it will check for KATR9 = 'L' and then KATR9 = 'T'.
Hope the above statement is useful for you.
Regards,
Shiva. -
Performance issue with select query
Hi friends ,
This is my select query, which is taking a long time to retrieve the records. Can anyone help me solve this performance issue?
*- Get the Goods receipts mainly selected per period (=> MKPF secondary
SELECT mseg~ebeln mseg~ebelp mseg~werks
ekko~bukrs ekko~lifnr ekko~zterm ekko~ekorg ekko~ekgrp
ekko~inco1 ekko~exnum
lfa1~name1 lfa1~land1 lfa1~ktokk lfa1~stceg
mkpf~mblnr mkpf~mjahr mseg~zeile mkpf~bldat mkpf~budat
mseg~bwart
*Start of changes for CIP 6203752 by PGOX02
mseg~smbln
*End of changes for CIP 6203752 by PGOX02
ekpo~matnr ekpo~txz01 ekpo~menge ekpo~meins
ekbe~menge ekbe~dmbtr ekbe~wrbtr ekbe~waers
ekpo~lgort ekpo~matkl ekpo~webaz ekpo~konnr ekpo~ktpnr
ekpo~plifz ekpo~bstae
INTO TABLE it_temp
FROM mkpf JOIN mseg ON mseg~mblnr EQ mkpf~mblnr
AND mseg~mjahr EQ mkpf~mjahr
JOIN ekbe ON ekbe~ebeln EQ mseg~ebeln
AND ekbe~ebelp EQ mseg~ebelp
AND ekbe~zekkn EQ '00'
AND ekbe~vgabe EQ '1'
AND ekbe~gjahr EQ mseg~mjahr
AND ekbe~belnr EQ mseg~mblnr
AND ekbe~buzei EQ mseg~zeile
JOIN ekpo ON ekpo~ebeln EQ ekbe~ebeln
AND ekpo~ebelp EQ ekbe~ebelp
JOIN ekko ON ekko~ebeln EQ ekpo~ebeln
JOIN lfa1 ON lfa1~lifnr EQ ekko~lifnr
WHERE mkpf~budat IN so_budat
AND mkpf~bldat IN so_bldat
AND mkpf~vgart EQ 'WE'
AND mseg~bwart IN so_bwart
AND mseg~matnr IN so_matnr
AND mseg~werks IN so_werks
AND mseg~lifnr IN so_lifnr
AND mseg~ebeln IN so_ebeln
AND ekko~ekgrp IN so_ekgrp
AND ekko~bukrs IN so_bukrs
AND ekpo~matkl IN so_matkl
AND ekko~bstyp IN so_bstyp
AND ekpo~loekz EQ space
AND ekpo~plifz IN so_plifz.
Thanks & Regards,
Manoj Kumar .Thatha
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting and please use code tags when posting code - post locked
Edited by: Rob Burbank on Feb 4, 2010 9:03 AM -
Performance issue with FDM when importing data
In the FDM Web console, a performance issue has been detected when importing data (.txt)
In less than 10 seconds the ".txt" and the ".log" files are created: the ".txt" file in the INBOX folder and the ".log" file in OUTBOX\Logs.
At that moment, the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
It seems a performance issue when system tries to show the imported data in the web page.
It has also been noted that when a user tries to import a txt file directly by clicking on the tab "Select File From Inbox", the user has to wait another 10 minutes before the information is displayed on the web page.
Thx in advance!
Cheers
Matteo
Hi Matteo,
How much data is being imported / displayed when users are interacting with the system?
There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
The copying of files is the first part of the import process before FDM then starts the import so that will be quick. The processing is then the time taken to import the records, process the mapping and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing so it will take just as long as it would for you to import it, they are not just asking to retrieve previously imported data.
Hope this helps
Stuart -
Performance issue while selecting material documents MKPF & MSEG
Hello,
I'm facing performance issues in production while selecting Material documents for Sales order and item based on the Sales order Stock.
Here is the query :
I'm first selecting data from ebew table which is the Sales order Stock table then this query.
IF ibew[] IS NOT INITIAL AND ignore_material_documents IS INITIAL.
* Select the Material documents created for the sales orders.
SELECT mkpf~mblnr mkpf~budat
mseg~matnr mseg~mat_kdauf mseg~mat_kdpos mseg~shkzg
mseg~dmbtr mseg~menge
INTO CORRESPONDING FIELDS OF TABLE i_mseg
FROM mkpf INNER JOIN mseg
ON mkpf~mandt = mseg~mandt
AND mkpf~mblnr = mseg~mblnr
AND mkpf~mjahr = mseg~mjahr
FOR ALL entries IN ibew
WHERE mseg~matnr = ibew-matnr
AND mseg~werks = ibew-bwkey
AND mseg~mat_kdauf = ibew-vbeln
AND mseg~mat_kdpos = ibew-posnr.
SORT i_mseg BY mat_kdauf ASCENDING
mat_kdpos ASCENDING
budat DESCENDING.
ENDIF.
I need to select the material documents because the end users want to see the stock as on certain date for the sales orders and only material document lines can give this information. Also EBEW table gives Stock only for current date.
For Example :
If the report is run on 5th Oct 2008 for a stock date of 30th Sept 2008, then I need to consider the goods movements after 30th Sept: add back the quantity if stock was issued, or subtract it if stock was received.
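The roll-back logic described here is simple to state; a hedged Python sketch (the 'S'/'H' debit/credit convention mirrors SAP's shkzg field, but the function and data shapes are purely illustrative, not the real MSEG columns):

```python
from datetime import date

def stock_as_of(current_stock, movements, as_of):
    """Roll current stock back to an earlier date by undoing every
    movement posted after the as-of date: subtract receipts (debit 'S',
    which had increased stock), add back issues (credit 'H')."""
    qty = current_stock
    for posting_date, debit_credit, quantity in movements:
        if posting_date > as_of:
            qty += quantity if debit_credit == 'H' else -quantity
    return qty

movements = [(date(2008, 10, 2), 'S', 50),   # receipt after the cutoff
             (date(2008, 10, 3), 'H', 20)]   # issue after the cutoff
# Current stock 100 on 5th Oct -> undo the later receipt and issue
# to get the stock as it stood on 30th Sept.
stock_as_of(100, movements, date(2008, 9, 30))
```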
I know there is an Index MSEG~M in database system on mseg, however I don't know the Storage location LGORT and Movement types BWART that should be considered, so I tried to use all the Storage locations and Movement types available in the system, but this caused the query to run even slower than before.
I could create an index for the fields mentioned in where clause , but it would be an overhead anyways.
Your help will be appreciated. Thanks in advance
regards,
Advait
Hi Thomas,
Thanks for your reply. The performance of the query has improved significantly after switching the join order of mseg and mkpf.
Actually, I even tried without join and looped using field symbols ,this is working slightly faster than the switched join.
Here are the result , tried with 371 records as our sandbox doesn't have too many entries unfortunately ,
Results before switching the join 146036 microseconds
Results after switching the join 38029 microseconds
Results w/o join 28068 microseconds for selection and 5725 microseconds for looping
Thanks again.
regards,
Advait -
Low performance issue with EPM Add-in SP16 using Office Excel 2010
We are having low performance issues using the EPM Excel Add-in SP16 with Excel 2010 and Windows 7
The same reports using Excel 2007 (Windows 7 or XP) run OK
Anyone with the same issue?
Thanks
Solved. It's a known issue. The solution, as provided by SAP Support, is:
Dear Customer,
This problem may relate to a known issue, which happens in the EPM Add-in with Excel 2010 when the zoom is NOT 100%.
To verify this, could you please try setting the zoom to 100% for the EPM report in question, and observe whether refresh performance recovers?
Quite annoying, isn't it? -
Performance issue with Crystal when upgrading Oracle to 11g
Dear,
I am facing performance issue in crystal report and oracle 11g as below:
In the report server, I have created an ODBC connection to another Oracle 11g server. Also in the report server I have created and published a folder containing all of my Crystal reports. These reports connect to the Oracle 11g server via ODBC.
I also have a Tomcat server to run my application; in the application I refer to the report folder on the report server.
This setup worked with SQL Server and Oracle 9i/10g, but we are facing a performance issue with Oracle 11g.
Please let me know the root cause.
Notes: the report server and Tomcat server are 32-bit Windows, but Oracle is on 64-bit Windows. I have upgraded DataDirect Connect ODBC to version 6.1, but the issue is not resolved.
Please help me to solve it.
Thanks so much,
Anh
Hi Anh,
Use a third party ODBC test tool now. SQL Plus will be using the Native Oracle client so you can't compare performance.
Download our old tool called SQLCON: https://smpdl.sap-ag.de/~sapidp/012002523100006252882008E/sqlcon32.zip
Connect and then click on the SQL tab and paste in the SQL from the report and time that test.
I believe the issue is because the Oracle client is 64 bit, you should install the 32 bit Oracle Client. If using the 64 bit client then the client must thunk ( convert 64 bit data to 32 bit data format ) which is going to take more time.
If you can use OLE DB or using the Oracle Server driver ( native driver ) should be faster. ODBC puts another layer on top of the Oracle client so it too takes time to communicate between the layers.
Thank you
Don -
Performance issue in selecting data from a view because of not in condition
Hi experts,
I have a requirement to select, in a view, data which is not available in a fact table, with certain join conditions. But the fact table contains 2 crore (20 million) rows of data, so this view is not working at all; it runs for a long time. I'm pasting the query here; please help me to tune it. The whole query prior to the NOT IN executes in 15 minutes, but when I add the NOT IN condition it runs for many hours, as the second table has millions of records.
CREATE OR REPLACE FORCE VIEW EDWOWN.MEDW_V_GIEA_SERVICE_LEVEL11
(
SYS_ENT_ID,
SERVICE_LEVEL_NO,
CUSTOMER_NO,
BILL_TO_LOCATION,
PART_NO,
SRCE_SYS_ID,
BUS_AREA_ID,
CONTRACT,
WAREHOUSE,
ORDER_NO,
LINE_NO,
REL_NO,
REVISED_DUE_DATE,
REVISED_QTY_DUE,
QTY_RESERVED,
QTY_PICKED,
QTY_SHIPPED,
ABBREVIATION,
ACCT_WEEK,
ACCT_MONTH,
ACCT_YEAR,
UPDATED_FLAG,
CREATE_DATE,
RECORD_DATE,
BASE_WAREHOUSE,
EARLIEST_SHIP_DATE,
LATEST_SHIP_DATE,
SERVICE_DATE,
SHIP_PCT,
ALLOC_PCT,
WHSE_PCT,
ABC_CLASS,
LOCATION_ID,
RELEASE_COMP,
WAREHOUSE_DESC,
MAKE_TO_FLAG,
SOURCE_CREATE_DATE,
SOURCE_UPDATE_DATE,
SOURCE_CREATED_BY,
SOURCE_UPDATED_BY,
ENTITY_CODE,
RECORD_ID,
SRC_SYS_ENT_ID,
BSS_HIERARCHY_KEY,
SERVICE_LVL_FLAG
)
AS
SELECT SL.SYS_ENT_ID,
SL.ENTITY_CODE
|| '-'
|| SL.order_no
|| '-'
|| SL.LINE_NO
|| '-'
|| SL.REL_NO
SERVICE_LEVEL_NO,
SL.CUSTOMER_NO,
SL.BILL_TO_LOCATION,
SL.PART_NO,
SL.SRCE_SYS_ID,
SL.BUS_AREA_ID,
SL.CONTRACT,
SL.WAREHOUSE,
SL.ORDER_NO,
SL.LINE_NO,
SL.REL_NO,
SL.REVISED_DUE_DATE,
SL.REVISED_QTY_DUE,
NULL QTY_RESERVED,
NULL QTY_PICKED,
SL.QTY_SHIPPED,
SL.ABBREVIATION,
NULL ACCT_WEEK,
NULL ACCT_MONTH,
NULL ACCT_YEAR,
NULL UPDATED_FLAG,
SL.CREATE_DATE,
SL.RECORD_DATE,
SL.BASE_WAREHOUSE,
SL.EARLIEST_SHIP_DATE,
SL.LATEST_SHIP_DATE,
SL.SERVICE_DATE,
SL.SHIP_PCT,
0 ALLOC_PCT,
0 WHSE_PCT,
SL.ABC_CLASS,
SL.LOCATION_ID,
NULL RELEASE_COMP,
SL.WAREHOUSE_DESC,
SL.MAKE_TO_FLAG,
SL.source_create_date,
SL.source_update_date,
SL.source_created_by,
SL.source_updated_by,
SL.ENTITY_CODE,
SL.RECORD_ID,
SL.SRC_SYS_ENT_ID,
SL.BSS_HIERARCHY_KEY,
'Y' SERVICE_LVL_FLAG
FROM ( SELECT SL_INT.SYS_ENT_ID,
SL_INT.CUSTOMER_NO,
SL_INT.BILL_TO_LOCATION,
SL_INT.PART_NO,
SL_INT.SRCE_SYS_ID,
SL_INT.BUS_AREA_ID,
SL_INT.CONTRACT,
SL_INT.WAREHOUSE,
SL_INT.ORDER_NO,
SL_INT.LINE_NO,
MAX (SL_INT.REL_NO) REL_NO,
SL_INT.REVISED_DUE_DATE,
SUM (SL_INT.REVISED_QTY_DUE) REVISED_QTY_DUE,
SUM (SL_INT.QTY_SHIPPED) QTY_SHIPPED,
SL_INT.ABBREVIATION,
MAX (SL_INT.CREATE_DATE) CREATE_DATE,
MAX (SL_INT.RECORD_DATE) RECORD_DATE,
SL_INT.BASE_WAREHOUSE,
MAX (SL_INT.LAST_SHIPMENT_DATE) LAST_SHIPMENT_DATE,
MAX (SL_INT.EARLIEST_SHIP_DATE) EARLIEST_SHIP_DATE,
MAX (SL_INT.LATEST_SHIP_DATE) LATEST_SHIP_DATE,
MAX (
CASE
WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
TRUNC (SL_INT.LATEST_SHIP_DATE)
THEN
TRUNC (SL_INT.LAST_SHIPMENT_DATE)
ELSE
TRUNC (SL_INT.LATEST_SHIP_DATE)
END)
SERVICE_DATE,
MIN (
CASE
WHEN TRUNC (SL_INT.LAST_SHIPMENT_DATE) >=
TRUNC (SL_INT.EARLIEST_SHIP_DATE)
AND TRUNC (SL_INT.LAST_SHIPMENT_DATE) <=
TRUNC (SL_INT.LATEST_SHIP_DATE)
AND SL_INT.QTY_SHIPPED = SL_INT.REVISED_QTY_DUE
THEN
100
ELSE
0
END)
SHIP_PCT,
SL_INT.ABC_CLASS,
SL_INT.LOCATION_ID,
SL_INT.WAREHOUSE_DESC,
SL_INT.MAKE_TO_FLAG,
MAX (SL_INT.source_create_date) source_create_date,
MAX (SL_INT.source_update_date) source_update_date,
SL_INT.source_created_by,
SL_INT.source_updated_by,
SL_INT.ENTITY_CODE,
SL_INT.RECORD_ID,
SL_INT.SRC_SYS_ENT_ID,
SL_INT.BSS_HIERARCHY_KEY
FROM (SELECT SL_UNADJ.*,
DECODE (
TRIM (TIMA.DAY_DESC),
'saturday', SL_UNADJ.REVISED_DUE_DATE
- 1
- early_ship_days,
'sunday', SL_UNADJ.REVISED_DUE_DATE
- 2
- early_ship_days,
SL_UNADJ.REVISED_DUE_DATE - early_ship_days)
EARLIEST_SHIP_DATE,
DECODE (
TRIM (TIMB.DAY_DESC),
'saturday', SL_UNADJ.REVISED_DUE_DATE
+ 2
+ LATE_SHIP_DAYS,
'sunday', SL_UNADJ.REVISED_DUE_DATE
+ 1
+ LATE_SHIP_DAYS,
SL_UNADJ.REVISED_DUE_DATE + LATE_SHIP_DAYS)
LATEST_SHIP_DATE
FROM (SELECT NVL (s2.sys_ent_id, '00') SYS_ENT_ID,
cust.customer_no CUSTOMER_NO,
cust.bill_to_loc BILL_TO_LOCATION,
cust.early_ship_days,
CUST.LATE_SHIP_DAYS,
ord.PART_NO,
ord.SRCE_SYS_ID,
ord.BUS_AREA_ID,
ord.BUS_AREA_ID CONTRACT,
NVL (WAREHOUSE, ord.entity_code) WAREHOUSE,
ORDER_NO,
ORDER_LINE_NO LINE_NO,
ORDER_REL_NO REL_NO,
TRUNC (REVISED_DUE_DATE) REVISED_DUE_DATE,
REVISED_ORDER_QTY REVISED_QTY_DUE,
-- NULL QTY_RESERVED,
-- NULL QTY_PICKED,
SHIPPED_QTY QTY_SHIPPED,
sold_to_abbreviation ABBREVIATION,
-- NULL ACCT_WEEK,
-- NULL ACCT_MONTH,
-- NULL ACCT_YEAR,
-- NULL UPDATED_FLAG,
ord.CREATE_DATE CREATE_DATE,
ord.CREATE_DATE RECORD_DATE,
NVL (WAREHOUSE, ord.entity_code)
BASE_WAREHOUSE,
LAST_SHIPMENT_DATE,
TRUNC (REVISED_DUE_DATE)
- cust.early_ship_days
EARLIEST_SHIP_DATE_UnAdj,
TRUNC (REVISED_DUE_DATE)
+ CUST.LATE_SHIP_DAYS
LATEST_SHIP_DATE_UnAdj,
--0 ALLOC_PCT,
--0 WHSE_PCT,
ABC_CLASS,
NVL (LOCATION_ID, '000') LOCATION_ID,
--NULL RELEASE_COMP,
WAREHOUSE_DESC,
NVL (
DECODE (MAKE_TO_FLAG,
'S', 0,
'O', 1,
'', -1),
-1)
MAKE_TO_FLAG,
ord.CREATE_DATE source_create_date,
ord.UPDATE_DATE source_update_date,
ord.CREATED_BY source_created_by,
ord.UPDATED_BY source_updated_by,
ord.ENTITY_CODE,
ord.RECORD_ID,
src.SYS_ENT_ID SRC_SYS_ENT_ID,
ord.BSS_HIERARCHY_KEY
FROM EDW_DTL_ORDER_FACT ord,
edw_v_maxv_cust_dim cust,
edw_v_maxv_part_dim part,
EDW_WAREHOUSE_LKP war,
EDW_SOURCE_LKP src,
MEDW_PLANT_LKP s2,
edw_v_incr_refresh_ctl incr
WHERE ord.BSS_HIERARCHY_KEY =
cust.BSS_HIERARCHY_KEY(+)
AND ord.record_id = part.record_id(+)
AND ord.part_no = part.part_no(+)
AND NVL (ord.WAREHOUSE, ord.entity_code) =
war.WAREHOUSE_code(+)
AND ord.entity_code = war.entity_code(+)
AND ord.record_id = src.record_id
AND src.calculate_back_order_flag = 'Y'
AND NVL (cancel_order_flag, 'N') != 'Y'
AND UPPER (part.source_plant) =
UPPER (s2.location_code1(+))
AND mapping_name = 'MEDW_MAP_GIEA_MTOS_STG'
-- AND NVL (ord.UPDATE_DATE, SYSDATE) >=
-- MAX_SOURCE_UPDATE_DATE
AND UPPER (
NVL (ord.order_status, 'BOOKED')) NOT IN
('ENTERED', 'CANCELLED')
AND TRUNC (REVISED_DUE_DATE) <= SYSDATE) SL_UNADJ,
EDW_TIME_DIM TIMA,
EDW_TIME_DIM TIMB
WHERE TRUNC (SL_UNADJ.EARLIEST_SHIP_DATE_UnAdj) =
TIMA.ACCOUNT_DATE
AND TRUNC (SL_UNADJ.LATEST_SHIP_DATE_Unadj) =
TIMB.ACCOUNT_DATE) SL_INT
WHERE TRUNC (LATEST_SHIP_DATE) <= TRUNC (SYSDATE)
GROUP BY SL_INT.SYS_ENT_ID,
SL_INT.CUSTOMER_NO,
SL_INT.BILL_TO_LOCATION,
SL_INT.PART_NO,
SL_INT.SRCE_SYS_ID,
SL_INT.BUS_AREA_ID,
SL_INT.CONTRACT,
SL_INT.WAREHOUSE,
SL_INT.ORDER_NO,
SL_INT.LINE_NO,
SL_INT.REVISED_DUE_DATE,
SL_INT.ABBREVIATION,
SL_INT.BASE_WAREHOUSE,
SL_INT.ABC_CLASS,
SL_INT.LOCATION_ID,
SL_INT.WAREHOUSE_DESC,
SL_INT.MAKE_TO_FLAG,
SL_INT.source_created_by,
SL_INT.source_updated_by,
SL_INT.ENTITY_CODE,
SL_INT.RECORD_ID,
SL_INT.SRC_SYS_ENT_ID,
SL_INT.BSS_HIERARCHY_KEY) SL
WHERE (SL.BSS_HIERARCHY_KEY,
SL.ORDER_NO,
Sl.line_no,
sl.Revised_due_date,
SL.PART_NO,
sl.sys_ent_id) NOT IN
(SELECT BSS_HIERARCHY_KEY,
ORDER_NO,
line_no,
revised_due_date,
part_no,
src_sys_ent_id
FROM MEDW_MTOS_DTL_FACT
WHERE service_lvl_flag = 'Y');
thanks
asn
Also, 'NOT IN' + nullable columns can be an expensive combination, and may not give the expected results. For example, compare these:
with test1 as ( select 1 as key1 from dual )
, test2 as ( select null as key2 from dual )
select * from test1
where key1 not in
( select key2 from test2 );
no rows selected
with test1 as ( select 1 as key1 from dual )
, test2 as ( select null as key2 from dual )
select * from test1
where key1 not in
( select key2 from test2
where key2 is not null );
KEY1
1
1 row selected.
Even if the columns do contain values, if they are nullable Oracle has to perform a resource-intensive filter operation in case they are null. An EXISTS construction is not concerned with null values and can therefore use a more efficient execution plan, leading people to think it is inherently faster in general. -
Performance Issue in Select Statement (For All Entries)
Hello,
I have a report where i have two select statement
First Select Statement:
Select A B C P Q R
from T1 into Table it_t1
where ....
Internal Table it_t1 is populated with 359801 entries through this select statement.
Second Select Statement:
Select A B C X Y Z
from T2 into Table it_t2 For All Entries in it_t1
where A eq it_t1-A
and B eq it_t1-B
and C eq it_t1-C
Now table T2 contains more than 10 lac (1 million) records, and at the end of the select statement it_t2 is populated with 844003 entries, but it takes a lot of time (15-20 min) to execute the second select statement.
Can this code be optimized?
Also i have created respective indexes on table T1 and T2 for the fields in Where Condition.
Regards,
If you have completed all the steps mentioned by others in the above thread and you are still facing issues, then:
Use a Select within Select.
First Select Statement:
SELECT A B C P Q R
FROM T1 INTO TABLE it_t1 PACKAGE SIZE 5000
WHERE ....
Second Select Statement (runs once per package, inside the loop):
SELECT A B C X Y Z
FROM T2 INTO TABLE it_t2 FOR ALL ENTRIES IN it_t1
WHERE A EQ it_t1-A
AND B EQ it_t1-B
AND C EQ it_t1-C.
" do processing ...
ENDSELECT.
This way, while using FOR ALL ENTRIES on T2, your it_t1 will have a limited number of entries, and thus the 2nd select will be faster.
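The same package idea, sketched outside ABAP for illustration (the lookup callback stands in for the FOR ALL ENTRIES select; all names here are made up):

```python
def process_in_packages(driver_rows, lookup, package_size=5000):
    """Split the driver table into fixed-size packages and run the
    secondary selection once per package, mirroring
    SELECT ... PACKAGE SIZE n ... ENDSELECT."""
    results = []
    for i in range(0, len(driver_rows), package_size):
        package = driver_rows[i:i + package_size]
        # stands in for: SELECT ... FOR ALL ENTRIES IN package ...
        results.extend(lookup(package))
    return results

# 12 driver rows with package size 5 -> three lookups of 5, 5 and 2 rows
rows = list(range(12))
process_in_packages(rows, lambda pkg: pkg, package_size=5)
```

Each lookup then carries at most `package_size` driver entries, which keeps the generated IN-lists small.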
Thanks,
Juwin -
Performance issue in select query
Moderator message: do not post the same question in two forums. Duplicate (together with answers) deleted.
SELECT a~grant_nbr
a~zzdonorfy
a~company_code
b~language
b~short_desc
INTO TABLE i_gmgr_text
FROM gmgr AS a
INNER JOIN gmgrtexts AS b ON a~grant_nbr = b~grant_nbr
WHERE a~grant_nbr IN s_grant
AND a~zzdonorfy IN s_dono
AND b~language EQ sy-langu
AND b~short_desc IN s_cont.
How to use for all entries in the above inner join for better performance?
then....
IF sy-subrc EQ 0.
SORT i_gmgr_text BY grant_nbr.
ENDIF.
IF i_gmgr_text[] IS NOT INITIAL.
* Actual Line Item Table
SELECT rgrant_nbr
gl_sirid
rbukrs
rsponsored_class
refdocnr
refdocln
FROM gmia
INTO TABLE i_gmia
FOR ALL ENTRIES IN i_gmgr_text
WHERE rgrant_nbr = i_gmgr_text-grant_nbr
AND rbukrs = i_gmgr_text-company_code
AND rsponsored_class IN s_spon.
IF sy-subrc EQ 0.
SORT i_gmia BY refdocnr refdocln.
ENDIF.
Edited by: Matt on Dec 17, 2008 1:40 PM
> How to use for all entries in the above inner join for better performance?
My best Christmas recommendation for performance: simply ignore such recommendations.
And check the performance of your join!
Is the performance really low? If it is, then there is probably no index support. Without indexes, FOR ALL ENTRIES will be much slower.
Siegfried -
Performance issue with select query and for all entries.
hi,
i have a report to be performance tuned.
the database table has around 20 million entries and 25 fields.
so, the report fetches the distinct values of two fields using one select query.
so, the first select query fetches around 150 entries from the table for 2 fields.
then it applies some logic and eliminates some entries and makes entries around 80-90...
and then it again applies the select query on the same table using for all entries applied on the internal table with 80-90 entries...
in short,
it accesses the same database table twice.
so, I tried to get the database table into an internal table, apply the logic on the internal table, and delete the unwanted entries, but it gave me a memory dump; it won't take that huge amount of data into ABAP memory...
is around 80-90 entries too much for using "for all entries"?
the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
i really cant find the way out...
please help.
chinmay kulkarni wrote:
Chinmay,
Even though you tried to ask the question with detailed explanation, unfortunately it is still not clear.
It is perfectly fine to access the same database twice. If that is working for you, I don't think there is any need to change the logic. As Rob mentioned, 80 or 8000 records is not a problem in "for all entries" clause.
>
> so, i tried to get the database table in internal table and apply the logic on internal table and delete the unwanted entries.. but it gave me memory dump, and it wont take that huge amount of data into abap memory...
>
It is not clear what you tried to do here. Did you try to bring all 20 million records into an internal table? That will certainly cause the program to short dump with memory shortage.
> the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
>
That is fine. Actually, it is better (performance wise) to do much of the work in ABAP than writing a complex WHERE clause that might bog down the database. -
Performance issues - which oracle xml technology to use?
Our company has spent some time researching different Oracle XML technologies, to obtain the fastest performance while keeping the flexibility of changing XML schemas across revs.
Not registering schemas gives the flexibility of quickly changing schemas between revs and a simpler table structure, but hurts performance quite a bit compared to registering schemas.
Flat non-XML tables seem the fastest, but seeing that everything is going XML, this doesn't seem like a choice.
Anyhow, let me know any input/experience anyone can offer.
Here's what we have tested, all with simple 10000-record tests, each insert of the form:
insert into po_tab values (1,
xmltype('<PurchaseOrder xmlns="http://www.oracle.com/PO.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.oracle.com/PO.xsd
http://www.oracle.com/PO.xsd">
<PONum>akkk</PONum>
<Company>Oracle Corp</Company>
<Item>
<Part>9i Doc Set</Part>
<Price>2550</Price>
</Item>
<Item>
<Part>8i Doc Set</Part>
<Price>350</Price>
</Item>
</PurchaseOrder>'));
we have tried three scenarios
-flat tables, non xml db
-xml db with registering schemas
-xml db but not registering schemas, just using XmlType
and adding oracle text xml indexes with paths to speed up
queries
now for the results
- flat tables, non xml db (we were thinking of using it like this to ease fetching of data for UI views in ad hoc situations, then have code to export the data into XML; is there any Oracle tool that will let me export the data into XML automatically via a schema?)
create table po_tabSimple(
id int constraint id_pk PRIMARY KEY,
part varchar2(100),
price number
);
insert into po_tabSimple values (i, 'test', 1.000);
select part from po_tabSimple;
2 seconds (Quickest)
-xml db with registering schemas
declare
doc varchar2(1000) := '<schema
targetNamespace="http://www.oracle.com/PO.xsd"
xmlns:po="http://www.oracle.com/PO.xsd"
xmlns="http://www.w3.org/2001/XMLSchema">
<complexType name="PurchaseOrderType">
<sequence>
<element name="PONum" type="decimal"/>
<element name="Company">
<simpleType>
<restriction base="string">
<maxLength value="100"/>
</restriction>
</simpleType>
</element>
<element name="Item" maxOccurs="1000">
<complexType>
<sequence>
<element name="Part">
<simpleType>
<restriction base="string">
<maxLength value="1000"/>
</restriction>
</simpleType>
</element>
<element name="Price" type="float"/>
</sequence>
</complexType>
</element>
</sequence>
</complexType>
<element name="PurchaseOrder" type="po:PurchaseOrderType"/>
</schema>';
begin
dbms_xmlschema.registerSchema('http://www.oracle.com/PO.xsd', doc);
end;
create table po_tab(
id number,
po sys.XMLType
)
xmltype column po
XMLSCHEMA "http://www.oracle.com/PO.xsd"
element "PurchaseOrder";
select EXTRACT(po_tab.po, '/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal() from po_tab;
4 sec
-xml db but not registering schemas, just using XmlType
and adding oracle text xml indexes with paths to speed up
queries
create table po_tabOld(
id number,
po sys.XMLType
);
select EXTRACT(po_tabOld.po, '/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal() from po_tabOld;
41 seconds without indexes
41 seconds with indexes
here are the indexes used
CREATE INDEX po_tabColOld_idx ON po_tabOld(po) indextype is ctxsys.ctxxpath;
CREATE INDEX po_tabOld_idx ON po_tabOld X (X.po.extract('/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal());Our company has spent some time researching different oracle xml technologies to obtain the fastest performence but also have the flexibily of a changing xml schemas across revs.
Not registering schemas gives the flexibility of quickly changing schemas between revs and a simpler table structure, but hurts performance quite a bit compared to registering schemas.
Flat non-XML tables seem the fastest, but seeing that everything is going XML, this doesn't seem like a choice.
Anyhow, let me know any input/experience anyone can offer.
here's what we have tested, all with simple 10,000-record tests, each of the form:
insert into po_tab values (1,
xmltype('<PurchaseOrder xmlns="http://www.oracle.com/PO.xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.oracle.com/PO.xsd
http://www.oracle.com/PO.xsd">
<PONum>akkk</PONum>
<Company>Oracle Corp</Company>
<Item>
<Part>9i Doc Set</Part>
<Price>2550</Price>
</Item>
<Item>
<Part>8i Doc Set</Part>
<Price>350</Price>
</Item>
</PurchaseOrder>'));
we have tried three scenarios:
-flat tables, non xml db
-xml db with registering schemas
-xml db but not registering schemas, just using XmlType
and adding oracle text xml indexes with paths to speed up
queries
now for the results
- flat tables, non-XML DB (we were thinking of using it like this to ease fetching of data for UI views in ad hoc situations, then have code to export the data into XML; is there any Oracle tool that will export the data into XML automatically via a schema?)
create table po_tabSimple(
id int constraint id_pk PRIMARY KEY,
part varchar2(100),
price number
);
-- executed inside a loop over i = 1..10000
insert into po_tabSimple values (i, 'test', 1.000);
select part from po_tabSimple;
2 seconds (Quickest)
-xml db with registering schemas
declare
doc varchar2(1000) := '<schema
targetNamespace="http://www.oracle.com/PO.xsd"
xmlns:po="http://www.oracle.com/PO.xsd"
xmlns="http://www.w3.org/2001/XMLSchema">
<complexType name="PurchaseOrderType">
<sequence>
<element name="PONum" type="decimal"/>
<element name="Company">
<simpleType>
<restriction base="string">
<maxLength value="100"/>
</restriction>
</simpleType>
</element>
<element name="Item" maxOccurs="1000">
<complexType>
<sequence>
<element name="Part">
<simpleType>
<restriction base="string">
<maxLength value="1000"/>
</restriction>
</simpleType>
</element>
<element name="Price" type="float"/>
</sequence>
</complexType>
</element>
</sequence>
</complexType>
<element name="PurchaseOrder" type="po:PurchaseOrderType"/>
</schema>';
begin
dbms_xmlschema.registerSchema('http://www.oracle.com/PO.xsd', doc);
end;
create table po_tab(
id number,
po sys.XMLType
)
xmltype column po
XMLSCHEMA "http://www.oracle.com/PO.xsd"
element "PurchaseOrder";
select EXTRACT(po_tab.po, '/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal() from po_tab;
4 sec
-xml db but not registering schemas, just using XmlType
and adding oracle text xml indexes with paths to speed up
queries
create table po_tabOld(
id number,
po sys.XMLType
);
select EXTRACT(po_tabOld.po, '/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal() from po_tabOld;
41 seconds without indexes
41 seconds with indexes
here are the indexes used
CREATE INDEX po_tabColOld_idx ON po_tabOld(po) indextype is ctxsys.ctxxpath;
CREATE INDEX po_tabOld_idx ON po_tabOld X (X.po.extract('/PurchaseOrder/Item/Part/text()','xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"').getStringVal());
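The gap between the flat table (2 seconds) and the XMLType variants (4 to 41 seconds) comes down to per-row decoding: a flat column is read directly, while an unregistered XMLType column must be parsed on every row before the XPath can be evaluated. A hypothetical micro-benchmark outside Oracle (Python with sqlite3 and ElementTree; table names only mirror the post, and the namespace is dropped for brevity) shows the shape of the difference:

```python
# Illustrative only, not Oracle: contrast reading a plain column with
# parsing an XML document per row, which is roughly why the flat
# po_tabSimple table outperforms per-row EXTRACT() on XMLType.
import sqlite3
import xml.etree.ElementTree as ET

PO_XML = """<PurchaseOrder>
  <PONum>akkk</PONum>
  <Company>Oracle Corp</Company>
  <Item><Part>9i Doc Set</Part><Price>2550</Price></Item>
  <Item><Part>8i Doc Set</Part><Price>350</Price></Item>
</PurchaseOrder>"""

conn = sqlite3.connect(":memory:")
conn.execute("create table po_flat (id integer, part text)")
conn.execute("create table po_xml (id integer, po text)")
for i in range(1000):
    conn.execute("insert into po_flat values (?, ?)", (i, "9i Doc Set"))
    conn.execute("insert into po_xml values (?, ?)", (i, PO_XML))

# Flat: the value is already a column, no per-row decoding needed.
flat_parts = [r[0] for r in conn.execute("select part from po_flat")]

# XML: every row must be parsed before the Part text can be reached,
# analogous to EXTRACT('/PurchaseOrder/Item/Part/text()') on each row.
xml_parts = []
for (doc,) in conn.execute("select po from po_xml"):
    root = ET.fromstring(doc)
    xml_parts.extend(e.text for e in root.iter("Part"))
```

Registering the schema lets Oracle shred the document into an object-relational layout at insert time, which is why the registered-schema case sits much closer to the flat table than the unregistered one.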
Performance issue with selection of line items.
Hi All.
I am facing a serious TIME_OUT error problem in my program. I am developing an RFC and have to send data from SAP tables to a non-SAP system as-is. I have to send the new BSIS entries for a day, so I first select the BELNR values for the CPUDT from BKPF and then use FOR ALL ENTRIES on BSIS. My problem: for a single day I get 1679 documents from BKPF, and when I use FOR ALL ENTRIES on BSIS, it gives a time-out error.
my code is like
SELECT BUKRS BELNR GJAHR
FROM BKPF INTO CORRESPONDING FIELDS OF TABLE I_BKPF
WHERE CPUDT IN S_CPUDT.
if i_bkpf[] is not initial.
SELECT * FROM BSIS INTO TABLE I_BSIS
FOR ALL ENTRIES IN I_BKPF
WHERE BUKRS = I_BKPF-BUKRS
AND BELNR = I_BKPF-BELNR
AND GJAHR = I_BKPF-GJAHR.
endif.
So please help me, gurus; it's urgent.
Instead of writing SELECT *, write SELECT with only the field names you need; that might solve your problem.
Reward points if helpful.
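FOR ALL ENTRIES behaves roughly like a series of chunked IN-list queries against the database, so the two levers are keeping each batch small and transferring only the columns you need, as the reply suggests. A sketch of that pattern in Python/sqlite3 (not ABAP; table and column names mirror the post, and the chunk size of 500 is an arbitrary illustration):

```python
# Illustrative sketch: fetch rows for a large driver list of document
# numbers in batches, selecting only the needed columns rather than *.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table bsis (bukrs text, belnr text, gjahr text, wrbtr real)")
belnrs = [f"{n:010d}" for n in range(1679)]  # 1679 documents, as in the post
conn.executemany("insert into bsis values ('1000', ?, '2024', 0.0)",
                 [(b,) for b in belnrs])

def select_for_all_entries(conn, belnrs, chunk=500):
    """Fetch only the needed BSIS columns for each document number, in batches."""
    rows = []
    for start in range(0, len(belnrs), chunk):
        batch = belnrs[start:start + chunk]
        marks = ",".join("?" * len(batch))
        rows.extend(conn.execute(
            f"select bukrs, belnr, gjahr from bsis where belnr in ({marks})",
            batch))
    return rows

rows = select_for_all_entries(conn, belnrs)
```

In ABAP itself the analogous knobs are the field list of the SELECT and the `rsdb/max_blocking_factor` profile parameter, which controls how many driver rows go into each generated IN list.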
Performance issue on Select ...like
Hi all,
we have Oracle 9.2.0.6, and when running the following SQL statement we have a significant runtime. We have an index on those fields, but when we use LIKE we get a full table scan. My question is about the second SQL statement with BETWEEN (or the >= and <= operators): can we use it instead of LIKE? Please advise. I've already exported/imported the table and the statistics are OK, and I've set up histograms on the fields involved in the select, but without success. The runtime for the SELECT using LIKE is about 30-40 seconds, which is not normal, and the table contains more than 1 million entries.
1. Select Partner from tab1 where field1 like 'VAN%' and field2 like 'K%';
2. Select Partner from tab1 where field1 >= 'VAN' and field2 >='K';
Regards,
Aziz K.
Why not test it yourself?
SQL> create table p_tbl (partner varchar2(10), field1 varchar2(10), field2 varchar2(10));
Table created.
SQL>
SQL> begin
2 for i in 1..1000000 loop
3 insert into p_tbl values (trunc(i/100),dbms_random.string('U',10),dbms_random.string('U',10));
4 end loop;
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>
SQL> create index idx_p_tbl on p_tbl (field1,field2);
Index created.
SQL>
SQL> exec dbms_stats.gather_table_stats(user,'P_TBL', cascade=>true)
PL/SQL procedure successfully completed.
SQL>
SQL> set timi on
SQL> select partner, field1, field2
2 from p_tbl
3 where field1 like 'VAN%'
4 and field2 like 'K%';
PARTNER FIELD1 FIELD2
7453 VANMKBIEZC KYDQHLZQHM
1694 VANNBPJNQQ KTVUNNUGUR
3408 VANQNSYBQP KSSYLCQKZE
4324 VANSUKOECK KAJAYLSMLG
Elapsed: 00:00:00.04
SQL>
SQL> select partner, field1, field2
2 from p_tbl
3 where field1 between 'VAN' and 'VANZZZZZZZ'
4 and field2 between 'K' and 'KZZZZZZZZZ';
PARTNER FIELD1 FIELD2
7453 VANMKBIEZC KYDQHLZQHM
1694 VANNBPJNQQ KTVUNNUGUR
3408 VANQNSYBQP KSSYLCQKZE
4324 VANSUKOECK KAJAYLSMLG
Elapsed: 00:00:00.03
SQL> set timi off
SQL> explain plan for select partner, field1, field2
2 from p_tbl
3 where field1 like 'VAN%'
4 and field2 like 'K%';
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 500196574
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | 1 | 26 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| P_TBL | 1 | 26 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | IDX_P_TBL | 1 | | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
2 - access("FIELD1" LIKE 'VAN%' AND "FIELD2" LIKE 'K%')
filter("FIELD1" LIKE 'VAN%' AND "FIELD2" LIKE 'K%')
15 rows selected.
SQL> explain plan for select partner, field1, field2
2 from p_tbl
3 where field1 between 'VAN' and 'VANZZZZZZZ'
4 and field2 between 'K' and 'KZZZZZZZZZ';
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 500196574
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
PLAN_TABLE_OUTPUT
| 0 | SELECT STATEMENT | | 1 | 26 | 4 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| P_TBL | 1 | 26 | 4 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | IDX_P_TBL | 1 | | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
2 - access("FIELD1">='VAN' AND "FIELD2">='K' AND "FIELD1"<='VANZZZZZZZ' AND
"FIELD2"<='KZZZZZZZZZ')
filter("FIELD2"<='KZZZZZZZZZ' AND "FIELD2">='K')
16 rows selected.
Conclusion? LIKE and BETWEEN seem to be the same in my case.
Nicolas.
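One caveat on the rewrite: the two forms above are not strictly interchangeable. `BETWEEN 'VAN' AND 'VANZZZZZZZ'` stops at a specific string, while `LIKE 'VAN%'` matches any continuation of the prefix, including characters that sort after 'Z' (lowercase letters, for instance). The tight range equivalent of a prefix match is the half-open interval `>= 'VAN' AND < 'VAO'` (the prefix with its last character incremented), which is what the optimizer derives internally from a leading-wildcard-free LIKE. A small plain-Python check of the equivalence (binary collation assumed; in the test above the data was all-uppercase fixed-length strings, which is why BETWEEN happened to agree):

```python
# Illustrative check (plain Python, not Oracle): field LIKE 'VAN%' is
# equivalent to the half-open range 'VAN' <= field < 'VAO'.
# BETWEEN 'VAN' AND 'VANZZZZZZZ' is narrower: it misses values such as
# 'VANzzz' or 'VANZZZZZZZa' that sort after 'VANZZZZZZZ'.
def range_bounds(prefix):
    """Half-open bounds equivalent to LIKE '<prefix>%' under binary collation."""
    hi = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, hi

values = ["VANMKBIEZC", "VANZZZZZZZa", "VANzzz", "VAO", "VAMX", "KVAN"]
lo, hi = range_bounds("VAN")

like_hits = {v for v in values if v.startswith("VAN")}       # LIKE 'VAN%'
range_hits = {v for v in values if lo <= v < hi}             # >= 'VAN' and < 'VAO'
between_hits = {v for v in values if "VAN" <= v <= "VANZZZZZZZ"}
```

So if the column can contain lowercase or longer suffixes, prefer the `< 'VAO'`-style upper bound over padding with 'Z's.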