Query takes time
Hi All
One of my queries is taking too much time, and I would really appreciate it if you could guide me on how to make it faster. I'm using Oracle 11g Enterprise Edition Release 11.2.0.2.0, 64-bit. My query is below; it extracts around 1 million records.
Query
Select distinct group_number grp_nbr,
       ltrim(group_number, '0') trimed_grp_nbr,
       section_code sct,
       enrolment_groups.group_id,
       enrolment_groups.segment_code,
       market_sgmt_attr_ascns.role_type_code,
       market_sgmt_attr_ascns.part_attr_type_code,
       nvl((select 'Y' from info
             where ltrim(info.grp_nbr, '0') = group_number
               and ltrim(rtrim(info.sct)) = section_code
               and subscriber_id_type = 'S'), 'N') Mandatory_flag,
       min(to_date(coverage_period.group_eff_date))  -- "coverage_period.enr." in the original post appears garbled
           over (partition by enrolment_groups.group_id) Min_Effective_date,
       nvl((select 'Y' from info
             where ltrim(info.grp_nbr, '0') = group_number
               and ltrim(rtrim(info.sct)) = section_code
               and subscriber_id_type = 'S'
               and id_code = '1'), 'N') identification_flag
from coverage_period,
     enrolment_groups,
     market_sgmt_attr_ascns
where ltrim(coverage_period.grp_nbr, '0') = enrolment_groups.group_number
  and ltrim(rtrim(coverage_period.sct)) = enrolment_groups.section_code
  and enrolment_groups.segment_code = market_sgmt_attr_ascns.segment_code
  and market_sgmt_attr_ascns.role_type_code = 'GRP'  -- the column name was missing in the original post
  and market_sgmt_attr_ascns.part_attr_type_code = 'ABR-PART'
  and group_number || '/' || section_code in
      (select ltrim(info.grp_nbr, '0') || '/' || ltrim(rtrim(info.sct)) g_s
         from info
        where info.subscriber_id_type = 'S')
I have to get millions of records and this query takes too much time. I would really appreciate it if you could help me out. Thanks
Regards
Shumail
I'm working as an ETL developer. I created a package in SQL Developer; my select query returned around 1 million rows, which were then inserted into one of my tables using an INSERT INTO statement.
I don't understand what you mean by this statement:
it must do a SORT to get the distinct rows.
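For context on the quoted remark: DISTINCT forces Oracle to sort or hash the full result set before rows can be returned, and this shows up as a SORT UNIQUE or HASH UNIQUE step in the execution plan. A minimal way to see it (the table name here is just a placeholder taken from the query above):

```sql
EXPLAIN PLAN FOR
  SELECT DISTINCT group_number FROM enrolment_groups;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The plan output will contain a HASH UNIQUE (or SORT UNIQUE) operation above the table access; that step is the sort the reply refers to.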
I made one mistake in my description: my whole package inserts around 1 million records, but this specific query returns only 88 rows and takes about 5 minutes. I use the same select logic (the IN clause and the other clauses) throughout my package, combined with UNION, and the package as a whole takes too much time. My concern is whether there is any way to change this query logic to get the result faster.
For example
I use an IN clause to compare group_number and section_code, as well as other constructs such as the scalar subqueries (select 'Y' ...) in several places. Please advise whether there is another way to write these statements to get the result faster. Thanks
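One approach worth testing (a sketch only, not run against this schema): replace the two correlated scalar subqueries and the IN clause with a single join to info, so info is visited once instead of up to twice per result row. This assumes the grp_nbr/sct combination is unique in info for subscriber_id_type = 'S'; if it is not, the row counts will differ. Note also that role_type_code = 'GRP' is a guess at the column missing from the posted predicate market_sgmt_attr_ascns='GRP'.

```sql
SELECT DISTINCT
       eg.group_number                                 grp_nbr,
       LTRIM(eg.group_number, '0')                     trimed_grp_nbr,
       eg.section_code                                 sct,
       eg.group_id,
       eg.segment_code,
       msa.role_type_code,
       msa.part_attr_type_code,
       NVL2(i.grp_nbr, 'Y', 'N')                       mandatory_flag,
       MIN(cp.group_eff_date)
           OVER (PARTITION BY eg.group_id)             min_effective_date,
       CASE WHEN i.id_code = '1' THEN 'Y' ELSE 'N' END identification_flag
  FROM coverage_period cp
  JOIN enrolment_groups eg
    ON LTRIM(cp.grp_nbr, '0') = eg.group_number
   AND LTRIM(RTRIM(cp.sct))   = eg.section_code
  JOIN market_sgmt_attr_ascns msa
    ON eg.segment_code        = msa.segment_code
   AND msa.role_type_code     = 'GRP'      -- assumed column; missing in the posted query
   AND msa.part_attr_type_code = 'ABR-PART'
  JOIN info i                              -- replaces both scalar subqueries and the IN clause
    ON LTRIM(i.grp_nbr, '0')  = eg.group_number
   AND LTRIM(RTRIM(i.sct))    = eg.section_code
   AND i.subscriber_id_type   = 'S';
```

One side observation: because the original IN clause already restricts the result to group/section combinations present in info with subscriber_id_type = 'S', the original Mandatory_flag can only ever be 'Y'; worth double-checking whether that flag is needed at all.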
Edited by: 976593 on Apr 10, 2013 7:17 PM
Edited by: 976593 on Apr 10, 2013 7:22 PM
Similar Messages
-
I am creating a JSP application and retrieving some values from my Oracle 10g database. My query takes time to execute even in iSQL*Plus. I am using Tomcat 5 and Oracle 10gR2. Can anyone please tell me why this happens? The query fetches data from four tables, and I don't know exactly why it takes so much time, say 50 seconds. When I run the application on my local host it retrieves quickly, but on the server it creates a problem.
Actually it works fine most of the time, but when deployed on my client's server it takes time, sometimes up to a minute.
1. Please tell me how I can test my query's performance.
2. Is there any command in Oracle to test it?
3. How many bytes does it take?
Look at this thread...
When your query takes too long ... -
Query takes time to execute. Please review this query and suggest a better one:
SELECT DISTINCT A.REFERENCE_NUMBER
FROM A,B,C
WHERE DF_N_GET_CONTRACT_STATUS (A.REFERENCE_NUMBER, TRUNC (SYSDATE)) = 1
AND ( B.CHQ_RTN_INDICATOR IS NULL AND B.CHALLAN_NUMBER IS NULL
AND C.INSTRUMENT_TYPE IN (1, 2) AND C.MODULE_CODE = 5 )
AND ( A.HEADER_KEY = B.HEADER_KEY AND C.HEADER_KEY = B.HEADER_KEY);
This query takes 3 minutes to execute. I want to improve the performance so that it executes within seconds.
Table A has 10 lakh (1,000,000) records.
Table B has 3.7 lakh (370,000) records.
Table C has 5.3 lakh (530,000) records. Consider DF_N_GET_CONTRACT_STATUS to be a function. REFERENCE_NUMBER is of type VARCHAR2(20).
Please give me a logically correct and faster query for the same.
Thanks in advance.
I would agree with the post from Santosh above that you should review the guidelines for posting a tuning request. This is VERY important. It is impossible to come up with any useful suggestions without the proper information.
In the case of your query, I would probably focus my attention on the function that you have defined. What happens if the condition using the DF_N_GET_CONTRACT_STATUS function is removed? Does it still take 3 minutes?
SELECT reference_number
from (
SELECT DISTINCT A.REFERENCE_NUMBER
FROM A, B, C
WHERE ( B.CHQ_RTN_INDICATOR IS NULL AND B.CHALLAN_NUMBER IS NULL
AND C.INSTRUMENT_TYPE IN (1, 2) AND C.MODULE_CODE = 5 )
AND ( A.HEADER_KEY = B.HEADER_KEY AND C.HEADER_KEY = B.HEADER_KEY )
)
where DF_N_GET_CONTRACT_STATUS(reference_number, trunc(sysdate)) = 1;
Of course, the query above really depends on the cardinality of the returned data, which is impossible to know without following the posting guidelines. This query could return anywhere between 0 rows and 2x10^17 rows, and in the latter case evaluating the function at the end wouldn't make any difference.
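If the function cannot be removed, one option to test (a sketch only, not run against this schema) is scalar subquery caching: wrapping the function call in a scalar subquery lets Oracle cache the result per distinct input value instead of calling the function once per candidate row:

```sql
SELECT DISTINCT a.reference_number
  FROM a, b, c
 WHERE b.chq_rtn_indicator IS NULL
   AND b.challan_number IS NULL
   AND c.instrument_type IN (1, 2)
   AND c.module_code = 5
   AND a.header_key = b.header_key
   AND c.header_key = b.header_key
   -- scalar subquery caching: the function result is cached per distinct reference_number
   AND (SELECT df_n_get_contract_status(a.reference_number, TRUNC(SYSDATE))
          FROM dual) = 1;
```

How much this helps depends on how many distinct reference_number values reach the predicate.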
Also, do you really need the distinct? Does it make any difference? Most of the time that I see distinct, to me it either means the query is wrong or the data model is wrong. -
Simple query takes time to run
Hi,
I have a simple query which takes about 20 minutes to run. Here is the TKPROF output for it:
SELECT
SY2.QBAC0,
sum(decode(SALES_ORDER.SDCRCD,'USD', SALES_ORDER.SDAEXP,'CAD', SALES_ORDER.SDAEXP /1.0452))
FROM
JDE.F5542SY2 SY2,
JDE.F42119 SALES_ORDER,
JDE.F0116 SHIP_TO,
JDE.F5542SY1 SY1,
JDE.F4101 PRODUCT_INFO
WHERE
( SHIP_TO.ALAN8=SALES_ORDER.SDSHAN )
AND ( SY1.QANRAC=SY2.QBNRAC and SY1.QAOTCD=SY2.QBOTCD )
AND ( PRODUCT_INFO.IMITM=SALES_ORDER.SDITM )
AND ( SY2.QBSHAN=SALES_ORDER.SDSHAN )
AND ( SALES_ORDER.SDLNTY NOT IN ('H ','HC','I ') )
AND ( PRODUCT_INFO.IMSRP1 Not In (' ','000','689') )
AND ( SALES_ORDER.SDDCTO IN ('CO','CR','SA','SF','SG','SP','SM','SO','SL','SR') )
AND ( SY1.QACTR=SHIP_TO.ALCTR )
AND ( PRODUCT_INFO.IMSRP1=SY1.QASRP1 )
GROUP BY
SY2.QBAC0
call count cpu elapsed disk query current rows
Parse 1 0.07 0.07 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 10 92.40 929.16 798689 838484 0 131
total 12 92.48 929.24 798689 838484 0 131
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 62
Rows Row Source Operation
131 SORT GROUP BY
3535506 HASH JOIN
4026100 HASH JOIN
922 TABLE ACCESS FULL OBJ#(187309)
3454198 HASH JOIN
80065 INDEX FAST FULL SCAN OBJ#(30492) (object id 30492)
3489670 HASH JOIN
65192 INDEX FAST FULL SCAN OBJ#(30457) (object id 30457)
3489936 PARTITION RANGE ALL PARTITION: 1 9
3489936 TABLE ACCESS FULL OBJ#(30530) PARTITION: 1 9
97152 TABLE ACCESS FULL OBJ#(187308)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.07 0.07 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 10 92.40 929.16 798689 838484 0 131
total 13 92.48 929.24 798689 838484 0 131
Misses in library cache during parse: 1
Kindly suggest how to resolve this...
OS is windows and its 9i DB...
Thanks
> ... you want to get rid of the IN statements.
> They prevent Oracle from using the index.
SQL> create table mytable (id,num,description)
2 as
3 select level
4 , case level
5 when 0 then 0
6 when 1 then 1
7 else 2
8 end
9 , 'description ' || to_char(level)
10 from dual
11 connect by level <= 10000
12 /
Table created.
SQL> create index i1 on mytable(num)
2 /
Index created.
SQL> exec dbms_stats.gather_table_stats(user,'mytable')
PL/SQL procedure successfully completed.
SQL> set autotrace on explain
SQL> select id
2 , num
3 , description
4 from mytable
5 where num in (0,1)
6 /
ID NUM DESCRIPTION
1 1 description 1
1 row selected.
Execution Plan
Plan hash value: 2172953059
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5001 | 112K| 2 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| MYTABLE | 5001 | 112K| 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | I1 | 5001 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - access("NUM"=0 OR "NUM"=1)
Regards,
Rob. -
Saving a query takes long time
Hi
I have developed a query on the AR cube. This query has a 36-month structure and about 5 key figures. Since the key figures are interdependent, I am using cell formulas in the structure.
Until Friday I had no problem saving the query after changes to the cell formulas, but today I am unable to save it in a reasonable time. It takes more than 20 minutes and still does not save. What could be causing this in the system? Could this be due to a temp space problem? Any suggestions?
Ram
Hi,
I think, Siggi is right.
In such situations I usually do the following:
- Hide all windows (Show desktop icon)
- CTRL+ALT+DEL - Task Manager
- Close Task Manager
The popup window from BEx should be on the screen.
Best regards,
Eugene -
Why update query takes long time ?
Hello everyone;
My update query takes a long time. My test table emp has just 2 records, yet when I issue an update, it takes a long time:
SQL> select * from emp;
EID ENAME EQUAL ESALARY ECITY EPERK ECONTACT_NO
2 rose mca 22000 calacutta 9999999999
1 sona msc 17280 pune 9999999999
Elapsed: 00:00:00.05
SQL> update emp set esalary=12000 where eid='1';
update emp set esalary=12000 where eid='1'
* ERROR at line 1:
ORA-01013: user requested cancel of current operation
Elapsed: 00:01:11.72
SQL> update emp set esalary=15000;
update emp set esalary=15000
* ERROR at line 1:
ORA-01013: user requested cancel of current operation
Elapsed: 00:02:22.27
Hi BCV;
Thanks for your reply, but it doesn't give a clear answer; please see this:
SQL> update emp set esalary=15000;
........... Lock already occurred.
>> trying to trace >>
SQL> select HOLDING_SESSION from dba_blockers;
HOLDING_SESSION
144
SQL> select sid , username, event from v$session where username='HR';
SID USERNAME EVENT
144 HR SQL*Net message from client
151 HR enq: TX - row lock contention
159 HR SQL*Net message from client
>> It doesn't provide clear output about the transaction lock >>
SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
2 BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
3 request
4 FROM v$lock, v$session
5 WHERE v$lock.TYPE = 'TX'
6 AND v$lock.SID = v$session.SID
7 AND v$session.username = USER;
no rows selected
SQL> select MACHINE from v$session where sid = :sid;
SP2-0552: Bind variable "SID" not declared. -
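The SP2-0552 error above occurs because the bind variable was never declared. In SQL*Plus a bind variable has to be created with the VARIABLE command first; a minimal sketch (the SID value 151 is taken from the earlier v$session output):

```sql
VARIABLE sid NUMBER
EXEC :sid := 151
SELECT machine FROM v$session WHERE sid = :sid;
```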
Hi All,
I have cloned transaction KSB1 to a custom one, as required by the business.
The query below takes more time than expected.
Here V_DB_TABLE = COVP.
The values in the WHERE clause are as follows:
OBJNR in ( KSBB010000001224 BT KSBB012157221571 )
GJAHR is blank
VERSN is '000'
WRTTP is '04' and '11'
all others are blank
VT_VAR_COND = ( CPUDT BETWEEN '20091201' AND '20091208' )
SELECT (VT_FIELDS) INTO CORRESPONDING FIELDS OF GS_COVP_EXT
FROM (V_DB_TABLE)
WHERE LEDNR = '00'
AND OBJNR IN LR_OBJNR
AND GJAHR IN GR_GJAHR
AND VERSN IN GR_VERSN
AND WRTTP IN GR_WRTTP
AND KSTAR IN LR_KSTAR
AND PERIO IN GR_PERIO
AND BUDAT IN GR_BUDAT
AND PAROB IN GR_PAROB
AND (VT_VAR_COND).
I checked the table for this condition; it has only 92 entries.
But when I execute the program, it takes as long as 3 hours.
Could anyone help me with this?
> 1. Don't use SELECT/ENDSELECT; instead use the INTO TABLE addition.
> 2. Avoid using the CORRESPONDING addition; create a type and reference it.
> 3. If the select dumps because of storage limitations, then use cursors.
You got three large NOs... all three recommendations are wrong!
The SE16 test is going in the right direction, but what was filled? Nobody knows!
Select options:
Did you ever try to trace SE16? The generic statement has an IN-condition for every field!
Without the information about what was actually filled, nobody can say anything; there are at least 2**n possible combinations!
Use ST05 for SE16 and check actual statement plus explain! -
Query takes long time to return results.
I am on Oracle database 10g Enterprise Edition Release 10.2.0.4.0 – 64 bit
This query takes about 58 seconds to return 180 rows...
SELECT order_num,
order_date,
company_num,
customer_num,
address_type,
create_date as address_create_date,
contact_name,
first_name,
middle_init,
last_name,
company_name,
street_address_1,
customer_class,
city,
state,
zip_code,
country_code,
MAX(decode(media_type,
'PHH',
phone_area_code || '''' || phone_number,
NULL)) home_phone,
MAX(decode(media_type,
'PHW',
phone_area_code || '''' || phone_number,
NULL)) work_phone,
address_seq_num,
street_address_2
FROM (SELECT oh.order_num order_num,
oh.order_datetime order_date,
oh.company_num company_num,
oh.customer_num customer_num,
ad.address_type address_type,
c.create_date create_date,
con.first_name || '''' || con.last_name contact_name,
con.first_name first_name,
con.middle_init middle_init,
con.last_name last_name,
ad.company_name company_name,
ad.street_address_1 street_address_1,
c.customer_class customer_class,
ad.city city,
ad.state state,
ad.zip_code zip_code,
ad.country_code,
cph.media_type media_type,
cph.phone_area_code phone_area_code,
cph.phone_number phone_number,
ad.address_seq_num address_seq_num,
ad.street_address_2 street_address_2
FROM reporting_base.gt_gaft_orders gt,
doms.us_ordhdr oh,
doms.us_address ad,
doms.us_customer c,
doms.us_contact con,
doms.us_contph cph
WHERE oh.order_num = gt.order_num
AND oh.business_unit_id = gt.business_unit_id
AND oh.customer_num = c.customer_num(+)
AND oh.customer_num = ad.customer_num(+)
AND ad.customer_num = c.customer_num
AND (
ad.address_type = 'B'
OR (
ad.address_type = 'S'
AND
ad.address_seq_num = oh.ship_to_seq_num
)
)
AND ad.customer_num = con.customer_num(+)
AND ad.address_type = con.address_type(+)
AND ad.address_seq_num = con.address_seq_num(+)
AND con.customer_num = cph.customer_num(+)
AND con.contact_id = cph.contact_id(+)
GROUP BY order_num,
order_date,
company_num,
customer_num,
address_type,
create_date,
contact_name,
first_name,
middle_init,
last_name,
company_name,
street_address_1,
customer_class,
city,
state,
zip_code,
country_code,
address_seq_num,
street_address_2;
This is the explain plan for the query:
Plan
SELECT STATEMENT FIRST_ROWS Cost: 21 Bytes: 207 Cardinality: 1
18 HASH GROUP BY Cost: 21 Bytes: 207 Cardinality: 1
17 NESTED LOOPS OUTER Cost: 20 Bytes: 207 Cardinality: 1
14 NESTED LOOPS OUTER Cost: 16 Bytes: 183 Cardinality: 1
11 FILTER
10 NESTED LOOPS OUTER Cost: 12 Bytes: 152 Cardinality: 1
7 NESTED LOOPS OUTER Cost: 8 Bytes: 74 Cardinality: 1
4 NESTED LOOPS OUTER Cost: 5 Bytes: 56 Cardinality: 1
1 TABLE ACCESS FULL TABLE (TEMP) REPORTING_BASE.GT_GAFT_ORDERS Cost: 2 Bytes: 26 Cardinality: 1
3 TABLE ACCESS BY INDEX ROWID TABLE DOMS.US_ORDHDR Cost: 3 Bytes: 30 Cardinality: 1
2 INDEX UNIQUE SCAN INDEX (UNIQUE) DOMS.USORDHDR_IXUPK_ORDNUMBUID Cost: 2 Cardinality: 1
6 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CUSTOMER Cost: 3 Bytes: 18 Cardinality: 1 Partition #: 11
5 INDEX UNIQUE SCAN INDEX (UNIQUE) DOMS.USCUSTOMER_IXUPK_CUSTNUM Cost: 2 Cardinality: 1
9 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_ADDRESS Cost: 4 Bytes: 156 Cardinality: 2 Partition #: 13
8 INDEX RANGE SCAN INDEX (UNIQUE) DOMS.USADDR_IXUPK_CUSTATYPASEQ Cost: 3 Cardinality: 2
13 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CONTACT Cost: 4 Bytes: 31 Cardinality: 1 Partition #: 15
12 INDEX RANGE SCAN INDEX DOMS.USCONT_IX_CNATAS Cost: 3 Cardinality: 1
16 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CONTPH Cost: 4 Bytes: 24 Cardinality: 1 Partition #: 17
15 INDEX RANGE SCAN INDEX (UNIQUE) DOMS.USCONTPH_IXUPK_CUSTCONTMEDSEQ Cost: 3 Cardinality: 1
Cost is good. All indexes are used. However, the time to return the data is very high.
Any ideas to make the query faster?.
Thanks
Hi, here is the tkprof output as requested by Rob:
TKPROF: Release 10.2.0.4.0 - Production on Mon Jul 13 09:07:09 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Trace file: axispr1_ora_15293.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT ORDER_NUM, ORDER_DATE, COMPANY_NUM, CUSTOMER_NUM, ADDRESS_TYPE,
CREATE_DATE AS ADDRESS_CREATE_DATE, CONTACT_NAME, FIRST_NAME, MIDDLE_INIT,
LAST_NAME, COMPANY_NAME, STREET_ADDRESS_1, CUSTOMER_CLASS, CITY, STATE,
ZIP_CODE, COUNTRY_CODE, MAX(DECODE(MEDIA_TYPE, 'PHH', PHONE_AREA_CODE ||
'''' || PHONE_NUMBER, NULL)) HOME_PHONE, MAX(DECODE(MEDIA_TYPE, 'PHW',
PHONE_AREA_CODE || '''' || PHONE_NUMBER, NULL)) WORK_PHONE, ADDRESS_SEQ_NUM,
STREET_ADDRESS_2
FROM
(SELECT OH.ORDER_NUM ORDER_NUM, OH.ORDER_DATETIME ORDER_DATE, OH.COMPANY_NUM
COMPANY_NUM, OH.CUSTOMER_NUM CUSTOMER_NUM, AD.ADDRESS_TYPE ADDRESS_TYPE,
C.CREATE_DATE CREATE_DATE, CON.FIRST_NAME || '''' || CON.LAST_NAME
CONTACT_NAME, CON.FIRST_NAME FIRST_NAME, CON.MIDDLE_INIT MIDDLE_INIT,
CON.LAST_NAME LAST_NAME, AD.COMPANY_NAME COMPANY_NAME, AD.STREET_ADDRESS_1
STREET_ADDRESS_1, C.CUSTOMER_CLASS CUSTOMER_CLASS, AD.CITY CITY, AD.STATE
STATE, AD.ZIP_CODE ZIP_CODE, AD.COUNTRY_CODE, CPH.MEDIA_TYPE MEDIA_TYPE,
CPH.PHONE_AREA_CODE PHONE_AREA_CODE, CPH.PHONE_NUMBER PHONE_NUMBER,
AD.ADDRESS_SEQ_NUM ADDRESS_SEQ_NUM, AD.STREET_ADDRESS_2 STREET_ADDRESS_2
FROM REPORTING_BASE.GT_GAFT_ORDERS GT, DOMS.US_ORDHDR OH, DOMS.US_ADDRESS
AD, DOMS.US_CUSTOMER C, DOMS.US_CONTACT CON, DOMS.US_CONTPH CPH WHERE
OH.ORDER_NUM = GT.ORDER_NUM AND OH.BUSINESS_UNIT_ID = GT.BUSINESS_UNIT_ID
AND OH.CUSTOMER_NUM = C.CUSTOMER_NUM(+) AND OH.CUSTOMER_NUM =
AD.CUSTOMER_NUM(+) AND AD.CUSTOMER_NUM = C.CUSTOMER_NUM AND (
AD.ADDRESS_TYPE = 'B' OR ( AD.ADDRESS_TYPE = 'S' AND AD.ADDRESS_SEQ_NUM =
OH.SHIP_TO_SEQ_NUM ) ) AND AD.CUSTOMER_NUM = CON.CUSTOMER_NUM(+) AND
AD.ADDRESS_TYPE = CON.ADDRESS_TYPE(+) AND AD.ADDRESS_SEQ_NUM =
CON.ADDRESS_SEQ_NUM(+) AND CON.CUSTOMER_NUM = CPH.CUSTOMER_NUM(+) AND
CON.CONTACT_ID = CPH.CONTACT_ID(+) ) GROUP BY ORDER_NUM, ORDER_DATE,
COMPANY_NUM, CUSTOMER_NUM, ADDRESS_TYPE, CREATE_DATE, CONTACT_NAME,
FIRST_NAME, MIDDLE_INIT, LAST_NAME, COMPANY_NAME, STREET_ADDRESS_1,
CUSTOMER_CLASS, CITY, STATE, ZIP_CODE, COUNTRY_CODE, ADDRESS_SEQ_NUM,
STREET_ADDRESS_2
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 257 0.04 0.05 45 0 0 6421
total 257 0.04 0.05 45 0 0 6421
Misses in library cache during parse: 0
Parsing user id: 126
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 257 0.04 0.05 45 0 0 6421
total 257 0.04 0.05 45 0 0 6421
Misses in library cache during parse: 0
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
1 user SQL statements in session.
0 internal SQL statements in session.
1 SQL statements in session.
Trace file: axispr1_ora_15293.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
1 user SQL statements in trace file.
0 internal SQL statements in trace file.
1 SQL statements in trace file.
1 unique SQL statements in trace file.
289 lines in trace file.
83 elapsed seconds in trace file.
Thanks in advance! -
SELECT CAL_EMPCALENDAR.START_DATE as main,
bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' ||
CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' ||
CAL_EMPCALENDAR.EMPLOYEE_ID as name,
CAL_EMPCALENDAR.START_DATE as sdate,
CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
CAL_EMPCALENDAR.LATE_IN,
CAL_EMPCALENDAR.EARLY_OUT,
CAL_EMPCALENDAR.UNDER_TIME,
CAL_EMPCALENDAR.OVERTIME,
TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
(SELECT shift_id
FROM CAL_GRPWORKDAY
WHERE CAL_GRPWORKDAY.calgrp_id =
(SELECT calgrp_id
FROM CAL_CALASSIGNMENT
WHERE employee_id = CAL_EMPCALENDAR.employee_id
AND CAL_CALASSIGNMENT.START_DATE <=
CAL_EMPCALENDAR.START_DATE
AND (CAL_CALASSIGNMENT.END_DATE is null or
CAL_CALASSIGNMENT.END_DATE >=
CAL_EMPCALENDAR.START_DATE))
AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
(SELECT max(entry_dt)
FROM LV_APPSTATUSHIST, LV_TXN txn, CAL_EMPDAILYEVENT cale
WHERE status = 'Approved'
AND LV_APPSTATUSHIST.application_id = txn.application_id
AND cale.reference_id = txn.txn_id
AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id
) AS entry_dt,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 1
and BIZUNIT_ID like 'SG')) F1,
--TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 2
and bizunit_id like 'SG')) F2,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 3
and bizunit_id like 'SG')) F3,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 4
and bizunit_id like 'SG')) F4,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 5
and bizunit_id like 'SG')) F5
From CAL_EMPCALENDAR, HRM_CURR_CAREER_V, CAL_SHIFT, HRM_EMPLOYEE
Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
AND (CAL_EMPCALENDAR.WF_STATUS = 'Approved' Or
CAL_EMPCALENDAR.WF_STATUS = 'No Action')
AND CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
--and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
AND CAL_EMPCALENDAR.START_DATE BETWEEN
GREATEST(HRM_EMPLOYEE.COMMENCE_DATE,
TO_DATE('1-4-2006', 'DD-MM-YYYY')) AND
LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'),
NVL(HRM_EMPLOYEE.CESSATION_DATE,
TO_DATE('30-4-2006', 'DD-MM-YYYY')))
And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
-- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
--AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
--$P!{ExceptionSQL}
--$P!{iHRFilterClause}
--order by $P!{OrderBy}
order by main
Hi all, this query takes a very long time to run.
In the explain plan, the table in bold uses a full table scan; all the rest use index scans.
The table has indexes on the referenced columns.
Oracle version 9.2.0.6
Message was edited by:
Maran.E
Message was edited by:
Maran.E
Maran,
With code tags and indentation it should be easier to analyze, at least for you:
SELECT CAL_EMPCALENDAR.START_DATE as main,
bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' || CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' || CAL_EMPCALENDAR.EMPLOYEE_ID as name,
CAL_EMPCALENDAR.START_DATE as sdate,
CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
CAL_EMPCALENDAR.LATE_IN,
CAL_EMPCALENDAR.EARLY_OUT,
CAL_EMPCALENDAR.UNDER_TIME,
CAL_EMPCALENDAR.OVERTIME,
TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
(SELECT shift_id
FROM CAL_GRPWORKDAY
WHERE CAL_GRPWORKDAY.calgrp_id = (SELECT calgrp_id
FROM CAL_CALASSIGNMENT
WHERE employee_id = CAL_EMPCALENDAR.employee_id
AND CAL_CALASSIGNMENT.START_DATE <= CAL_EMPCALENDAR.START_DATE
AND ( CAL_CALASSIGNMENT.END_DATE is null
or CAL_CALASSIGNMENT.END_DATE >= CAL_EMPCALENDAR.START_DATE))
AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
(SELECT max(entry_dt)
FROM LV_APPSTATUSHIST, LV_TXN txn, CAL_EMPDAILYEVENT cale
WHERE status = 'Approved'
AND LV_APPSTATUSHIST.application_id = txn.application_id
AND cale.reference_id = txn.txn_id
AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id) AS entry_dt,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 1
and BIZUNIT_ID like 'SG')) F1,
--TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 2
and bizunit_id like 'SG')) F2,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 3
and bizunit_id like 'SG')) F3,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 4
and bizunit_id like 'SG')) F4,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 5
and bizunit_id like 'SG')) F5
From CAL_EMPCALENDAR,
HRM_CURR_CAREER_V,
CAL_SHIFT,
HRM_EMPLOYEE
Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
AND ( CAL_EMPCALENDAR.WF_STATUS = 'Approved'
Or CAL_EMPCALENDAR.WF_STATUS = 'No Action')
AND CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
--and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
AND CAL_EMPCALENDAR.START_DATE BETWEEN GREATEST(HRM_EMPLOYEE.COMMENCE_DATE, TO_DATE('1-4-2006', 'DD-MM-YYYY'))
AND LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'), NVL(HRM_EMPLOYEE.CESSATION_DATE, TO_DATE('30-4-2006', 'DD-MM-YYYY')))
And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
-- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
--AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
--$P!{ExceptionSQL}
--$P!{iHRFilterClause}
--order by $P!{OrderBy}
order by main
Nicolas. -
SELECT query takes too much time! Y?
Plz find my SELECT query below:
select w~mandt
w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
w~kwmeng w~vrkme w~matwa w~charg w~pstyv
w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
w~bedae w~cuobj w~mtvfp
x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
x~edatu
x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
x~ezeit
into table t_vbap
from vbap as w
inner join vbep as x on x~vbeln = w~vbeln and
x~posnr = w~posnr and
x~mandt = w~mandt
where
( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
( ( ( erdat > pre_dat and erdat < p_syndt ) or
( erdat = p_syndt and erzet <= p_syntm ) ) ) and
w~matnr in s_matnr and
w~pstyv in s_itmcat and
w~lfrel in s_lfrel and
w~abgru = ' ' and
w~kwmeng > 0 and
w~mtvfp in w_mtvfp and
x~ettyp in w_ettyp and
x~bdart in s_req_tp and
x~plart in s_pln_tp and
x~etart in s_etart and
x~abart in s_abart and
( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
The problem: it takes too much time to execute this statement.
Could anybody change this statement and help me reduce the DB access time?
Thx
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one internal table
Selection Criteria
1. Restrict the data to the selection criteria itself, rather than filtering it out in the ABAP code using a CHECK statement.
2. Select with selection list.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized much further by the code written below, which avoids CHECK and selects with a selection list:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
Select Statements Select Queries
1. Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
2. Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be optimized much further by the code written below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the WHERE clause and then uses an index whose fields appear in the same order. In certain scenarios it is advisable to check whether a new index can speed up the performance of a program. This comes in handy in programs that access data from the finance tables.
4. For testing existence, use a SELECT ... UP TO 1 ROWS statement instead of a SELECT-ENDSELECT loop with an EXIT.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above is more efficient than the following code for testing the existence of a record:
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
5. Use SELECT SINGLE if all primary key fields are supplied in the WHERE condition.
SELECT SINGLE requires only one communication with the database system, whereas SELECT-ENDSELECT needs two.
Select Statements SQL Interface
1. Use column updates instead of single-row updates to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above code can be optimized by using a single set-based update:
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
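The row-by-row versus set-based difference can be sketched in Python with sqlite3. The SFLIGHT stand-in table and its columns here are assumptions for illustration only, not the real SAP table:

```python
import sqlite3

# Illustrative stand-in for SFLIGHT.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sflight (carrid TEXT, connid TEXT, seatsocc INTEGER)")
con.executemany("INSERT INTO sflight VALUES (?, ?, ?)",
                [("LH", "0400", 120), ("LH", "0402", 95), ("AA", "0017", 200)])

# Row-by-row (SELECT ... UPDATE ... ENDSELECT analogue): one UPDATE per row.
rows = con.execute("SELECT rowid, seatsocc FROM sflight").fetchall()
for rowid, occ in rows:
    con.execute("UPDATE sflight SET seatsocc = ? WHERE rowid = ?",
                (occ - 1, rowid))

# Set-based: one statement updates every row in a single database operation.
con.execute("UPDATE sflight SET seatsocc = seatsocc - 1")

# Each row has now been decremented twice (once by each approach).
print(con.execute("SELECT seatsocc FROM sflight ORDER BY seatsocc").fetchall())
# -> [(93,), (118,), (198,)]
```

The set-based statement touches every row in one round trip, which is the point of the tip above.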
2. For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above code can be optimized by using the following code:
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
3. Using buffered tables improves performance considerably.
Bypassing the buffer increases network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above code can be optimized by letting the table buffer be used:
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above code can be optimized by using an aggregate function:
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
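The same point holds in any SQL database: let the engine compute the aggregate instead of shipping every row to the client. A small Python/sqlite3 sketch (the zflight stand-in table and its contents are made up for illustration):

```python
import sqlite3

# Hypothetical zflight stand-in; column names follow the snippet above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE zflight (airln TEXT, cntry TEXT, fligh INTEGER)")
con.executemany("INSERT INTO zflight VALUES (?, ?, ?)",
                [("LF", "IN", 101), ("LF", "IN", 305), ("LF", "US", 999)])

# Manual aggregation: every matching row crosses the database interface.
maxno = 0
for (fligh,) in con.execute(
        "SELECT fligh FROM zflight WHERE airln = 'LF' AND cntry = 'IN'"):
    if fligh > maxno:
        maxno = fligh

# Aggregate pushed to the database: only one value is transferred.
(maxno_db,) = con.execute(
    "SELECT MAX(fligh) FROM zflight WHERE airln = 'LF' AND cntry = 'IN'"
).fetchone()

assert maxno == maxno_db == 305
```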
Select Statements For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Points that must be considered when using FOR ALL ENTRIES
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above can be optimized by using the following code:
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
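What the database interface does with FOR ALL ENTRIES, together with the three guards above (non-empty driver table, sort, de-duplicate), can be sketched in Python. The blocking factor value and the generated SQL text here are illustrative assumptions, not the actual kernel behaviour; rsdb/max_blocking_factor is profile-dependent.

```python
def for_all_entries(driver_keys, blocking_factor=5):
    """Sketch of FOR ALL ENTRIES statement generation."""
    # Guard 1: an empty driver table would select EVERYTHING - refuse it.
    if not driver_keys:
        return []
    # Guards 2 and 3: sort and remove duplicates.
    keys = sorted(set(driver_keys))
    # Split into chunks so each generated WHERE clause stays short;
    # the runtime ORs the values together per chunk.
    statements = []
    for i in range(0, len(keys), blocking_factor):
        chunk = keys[i:i + blocking_factor]
        where = " OR ".join(f"cntry = '{k}'" for k in chunk)
        statements.append(f"SELECT * FROM zfligh WHERE {where}")
    return statements

stmts = for_all_entries(["IN", "US", "IN", "DE", "FR", "JP", "SG", "UK"],
                        blocking_factor=5)
print(len(stmts))  # 7 distinct keys split into chunks of 5 + 2 -> 2 statements
```

Because the results of the chunked statements are concatenated, duplicates in the driver table would otherwise translate directly into duplicate work, which is why de-duplicating first matters.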
Select Statements Select Over more than one Internal table
1. It is better to use a view instead of nested SELECT statements.
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be optimized by extracting all the data from the view DD01V:
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
2. To read data from several logically connected tables, use a join instead of nested SELECT statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables that are joined; if the primary keys are not provided, the join itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be optimized considerably by the join below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
3. Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above code can be optimized further by using a subquery instead of FOR ALL ENTRIES:
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search, but do not forget to sort your internal table first.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
is much faster than
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, a linear search runs in O( n ) time, whereas a binary search takes only O( log2( n ) ).
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
is faster than
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
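The linear-versus-binary trade-off can be demonstrated in Python with the standard bisect module; the table contents below are arbitrary illustration data:

```python
from bisect import bisect_left

# A sorted "internal table" of (key, value) rows.
itab = sorted((k, k * 10) for k in range(1, 100001))
keys = [row[0] for row in itab]

def read_linear(key):
    # READ TABLE ... WITH KEY analogue: O(n) scan from the top.
    for row in itab:
        if row[0] == key:
            return row
    return None

def read_binary(key):
    # READ TABLE ... BINARY SEARCH analogue: O(log2 n) probes,
    # valid only because itab is sorted by the key.
    i = bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return itab[i]
    return None

assert read_linear(99999) == read_binary(99999) == (99999, 999990)
assert read_binary(100002) is None  # miss detected in ~17 probes, not 100000
```

For a key near the end of the table the linear read touches nearly all 100,000 rows, while the binary read needs about 17 comparisons.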
4. A binary search using secondary index takes considerably less time.
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably.
Modifying selected components only makes the program faster as compared to modifying all lines completely.
e.g.:
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
8. If collect semantics is required, it is better to use COLLECT rather than READ ... BINARY SEARCH followed by an ADD.
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses READ ... BINARY SEARCH for collect semantics, which runs in O( log2( n ) ) time per row. The above piece of code can be optimized by:
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent
of the number of entries (i.e. O(1)) .
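The hash-based idea behind COLLECT can be sketched in Python, where a dict plays the role of the hash administration. The field names mirror VAL1/VAL2 from the snippet above but are otherwise illustrative:

```python
# COLLECT adds the numeric fields of rows that share a key. A dict gives
# the same O(1) per-row behaviour: one hash lookup, no search at all.
itab1 = [("A", 1, 10), ("B", 2, 20), ("A", 3, 30), ("B", 4, 40)]

collected = {}
for k, val1, val2 in itab1:
    old1, old2 = collected.get(k, (0, 0))   # hash probe, size-independent
    collected[k] = (old1 + val1, old2 + val2)

# The final SORT from the ABAP snippet, to get key order.
itab2 = sorted((k, v1, v2) for k, (v1, v2) in collected.items())
print(itab2)  # [('A', 4, 40), ('B', 6, 60)]
```

Each input row costs one constant-time dict operation regardless of how many keys have accumulated, which is exactly why COLLECT beats the READ/ADD/MODIFY pattern as the table grows.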
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copy internal tables by using ITAB2[] = ITAB1[] instead of LOOP-APPEND-ENDLOOP.
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
SORT ITAB BY K. makes the program run faster than SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access, sorted tables are more optimized as compared to hashed tables.
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
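Both access patterns can be sketched in Python: a dict stands in for a hashed table (constant-time single read) and a sorted list with bisect for a sorted table (efficient partial sequential access). The keys and sizes below are arbitrary illustration values:

```python
from bisect import bisect_left, bisect_right

# Hashed table analogue: dict - O(1) single-key read, like the HTAB loop.
htab = {4 * i: f"row{i}" for i in range(1, 251)}
assert htab[40] == "row10"            # one hash probe, size-independent

# Sorted table analogue: sorted list - like LOOP AT STAB ... WHERE K = 3.
stab = sorted((k % 7, k) for k in range(1, 251))   # (subkey, payload)
lo = bisect_left(stab, (3,))                 # first row with subkey 3
hi = bisect_right(stab, (3, float("inf")))   # one past the last such row
block = stab[lo:hi]                          # contiguous slice of matches
assert all(sub == 3 for sub, _ in block)
```

The dict answers a single-key read in constant time but cannot deliver a key range; the sorted list finds the start of the range in O(log n) and then reads the matching rows sequentially, which mirrors why STAB wins for partial sequential access.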
Parsing the query takes too much time.
Hello.
I am hitting a bug in Oracle XE (parsing some queries takes too much time).
A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
Please raise a bug for Oracle XE.
Steps to reproduce the issue:
1. Extract files from testcase_dump.zip and testcase_sql.zip
2. Under username SYSTEM execute script schema.sql
3. Import data from file TESTCASE14.DMP
4. Under username SYSTEM execute script testcase14.sql
SQL text can be downloaded from http://files.mail.ru/DJTTE3
Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
Regards,
Viacheslav.
Bug number? Version the fix applies to?
Relevant note that describes the problem and points out bug/patch availability?
With a little luck some PSEs might be "backported", since 11g XE is not the base release, e.g. 11.2.0.1. -
Batch Query takes too much time
Hi All,
A query as simple as
select * from ibt1 where batchnum = 'AAA'
takes around 180 seconds to give results. If I replace the batchnum condition with itemcode or anything other than batchnum, the same query returns results in some 5 seconds. The IBT1 table has nearly 300,000 (3 lakh) rows. Consequently, a slightly more complex query involving batchnum conditions gives a 'query execution time out' error.
what could be the reason? any resolution?
Thanks,
Binita
Hello Binita,
You need some database tuning...
The IBT1 table has a composite index on ItemCode, BatchNum, LineNum, WhsCode, and has no index on batchnum alone. But it has statistics, and statistics are useful for running queries (see [ms technet here|http://technet.microsoft.com/hu-hu/library/cc966419(en-us).aspx]). There is also note 783183 about performance tuning databases.
There are two ways to "speed up" your query: indexes and statistics.
Statistics
See the statistics of IBT1 table:
exec SP_HELPSTATS 'IBT1','ALL'
In the result set you see the statistics names: [IBT1_PRIMARY] and some statistics created by the system with names like _WA_Sys_XXXX. For BatchNum, you can execute the following statement:
DBCC SHOW_STATISTICS ('IBT1',_WA_Sys_00000002_4EE969DE)
--where _WA_Sys_00000002_4EE969DE is the name of the statistics of batchnum
Check the result set, in particular the "Updated", "Rows" and "Rows Sampled" columns. If necessary, you can update the statistics by:
-- update statistics on whole database
EXEC SP_UPDATESTATS
-- update all statistics of IBT1 table
UPDATE STATISTICS IBT1 WITH FULLSCAN
-- update a specific statistics of IBT1 table
UPDATE STATISTICS IBT1(_WA_Sys_00000002_4EE969DE) WITH FULLSCAN
Index defragmentation/reindex
If the index is not contiguous, seeking across several fragmented extents takes more time.
Execute your query in the Management Studio query analyzer and turn on execution plans (Ctrl+L or Ctrl+M). This shows the computed cost of the query.
select * from ibt1 where batchnum = 'AAA'
result will be [IBT1_PRIMARY] index scan. You can check the fragmentation of the primary index of IBT1 table:
DBCC SHOWCONTIG ('IBT1') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS
In the result set the ScanDensity column shows the fragmentation percentage. This value is 100 if everything is contiguous; if it is less than 100, some fragmentation exists.
Reindex can be done by the following statement:
DBCC DBREINDEX ('IBT1','IBT1_PRIMARY',100) WITH NO_INFOMSGS
Take care: none of these statements should be executed during business hours.
This may help you.
Regards,
J. -
Simple Select query with 'where', 'and', 'between' clauses takes time
Hi,
I have a select query as below
SELECT *
  FROM (SELECT a.*, ROWNUM currentStartRecord
          FROM (SELECT ai_inbound.ai_inb_seq tableseq,
                       'AI_INBOUND' tablename,
                       'INBOUND' direction,
                       ai_inbound.appl,
                       ai_inbound.ai_date datetime,
                       ai_inbound.ic_receiver_id pg_id,
                       ai_inbound.ic_sender_id tp_id,
                       ai_inbound.session_no,
                       ai_inbound.ic_ctl_no,
                       ai_inbound.msg_set_id msg_type,
                       ai_inbound.appl_msg_ctl_no reference_no,
                       ai_inbound.fg_version version,
                       ai_inbound.msg_status status,
                       ai_inbound.input_file_name,
                       ai_inbound.output_file_name,
                       ai_inbound.ack_file_name
                  FROM ai_inbound
                 WHERE ai_inbound.appl = ?
                   AND ai_inbound.ai_date BETWEEN ? AND ?) a
         WHERE ROWNUM <= 49)
 WHERE currentStartRecord >= 0
The above query takes longer than expected through the application when the date fields are passed, whereas it works fine when no date fields are passed. We are using the Oracle 9.2 version of the database. All the indexed columns and partitioned indexed columns have been rebuilt.
Kindly let me know how I can tune the query to improve its performance.
Thanks -
Query Take more time on timesten
Hi
One query takes a lot of time on TimesTen, while the same query takes less time on Oracle.
Query :-
select *
from (SELECT RELD_EM_ENTITY_ID,
RELD_EXM_EXCH_ID,
RELD_SEGMENT_TYPE,
NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
ROUND(NVL(sum(RELD_RTO_EXP), -1), 2) RTO_EXP,
ROUND(NVL(sum(RELD_NE_EXP), -1), 2) NET_EXP,
ROUND(NVL(sum(RELD_MAR_UTILIZATION), -1),
2) MAR_EXP,
ROUND(decode(sign(sum(C.reld_m2m_exp)),
-1,
abs(sum(C.reld_m2m_exp)),
0),
2) M2M_EXP,
NVL(OLM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
EM_RP_PROFILE_ID
FROM ENTITY_MASTER A,
ORDER_LMT_MASTER B,
RMS_ENTITY_LIMIT_DTLS C
WHERE A.EM_CONTROLLER_ID = 100100010000
AND A.EM_ENTITY_ID = C.RELD_EM_ENTITY_ID
AND C.RELD_EM_ENTITY_ID = B.OLM_EPM_EM_ENTITY_ID(+)
AND C.RELD_SEGMENT_TYPE = 'E'
AND C.RELD_EXM_EXCH_ID = B.OLM_EXCH_ID(+)
AND C.RELD_EXM_EXCH_ID <> 'ALL'
AND B.OLM_SEM_SMST_SECURITY_ID(+) =
'ALL'
AND ((A.EM_ENTITY_TYPE IN ('CL') AND
A.EM_CLIENT_TYPE <> 'CC') OR
A.EM_ENTITY_TYPE <> 'CL')
AND B.OLM_PRODUCT_ID(+) = 'M' --Added by Harshit Shah on 4th June 09
GROUP BY RELD_EM_ENTITY_ID,
RELD_EXM_EXCH_ID,
RELD_SEGMENT_TYPE,
RELD_ACC_TYPE,
OLM_SUSPNSN_FLG,
EM_RP_PROFILE_ID,
OLM_PRODUCT_ID
UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
SELECT RELD_EM_ENTITY_ID,
RELD_EXM_EXCH_ID,
RELD_SEGMENT_TYPE SEGMENTID,
NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
ROUND(NVL(SUM(RELD_RTO_EXP), -1), 2) RTO_EXP,
ROUND(NVL(SUM(RELD_NE_EXP), -1), 2) NET_EXP,
ROUND(NVL(SUM(RELD_MAR_UTILIZATION), -1),
2) MAR_EXP,
ROUND(decode(sign(SUM(C.reld_m2m_exp)),
-1,
abs(SUM(C.reld_m2m_exp)),
0),
2) M2M_EXP,
NVL(OLM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
EM_RP_PROFILE_ID
FROM ENTITY_MASTER A,
ORDER_LMT_MASTER B,
RMS_ENTITY_LIMIT_DTLS C
WHERE A.EM_CONTROLLER_ID = 100100010000
AND A.EM_ENTITY_ID = B.OLM_EPM_EM_ENTITY_ID
AND B.OLM_EPM_EM_ENTITY_ID = C.RELD_EM_ENTITY_ID(+)
AND C.RELD_SEGMENT_TYPE = 'E'
AND B.OLM_EXCH_ID = 'ALL'
AND B.OLM_SEM_SMST_SECURITY_ID(+) =
'ALL'
AND ((A.EM_ENTITY_TYPE IN ('CL') AND
A.EM_CLIENT_TYPE <> 'CC') OR
A.EM_ENTITY_TYPE <> 'CL')
AND B.OLM_PRODUCT_ID(+) = 'M' --Added by Harshit Shah on 4th June 09
GROUP BY RELD_EM_ENTITY_ID,
RELD_EXM_EXCH_ID,
RELD_SEGMENT_TYPE,
RELD_ACC_TYPE,
OLM_SUSPNSN_FLG,
EM_RP_PROFILE_ID,
OLM_PRODUCT_ID
UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
SELECT RELD_EM_ENTITY_ID,
RELD_EXM_EXCH_ID,
RELD_SEGMENT_TYPE,
NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
ROUND(NVL(sum(RELD_RTO_EXP), -1), 2) RTO_EXP,
ROUND(NVL(sum(RELD_NE_EXP), -1), 2) NET_EXP,
ROUND(NVL(sum(RELD_MAR_UTILIZATION), -1),
2) MAR_EXP,
ROUND(decode(sign(sum(C.reld_m2m_exp)),
-1,
abs(sum(C.reld_m2m_exp)),
0),
2) M2M_EXP,
NVL(OLIM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
EM_RP_PROFILE_ID
FROM ENTITY_MASTER A,
DRV_ORDER_INST_LMT_MASTER B,
RMS_ENTITY_LIMIT_DTLS C
WHERE A.EM_CONTROLLER_ID = 100100010000
AND A.EM_ENTITY_ID = C.RELD_EM_ENTITY_ID
AND C.RELD_EM_ENTITY_ID =
B.OLIM_EPM_EM_ENTITY_ID(+)
AND C.RELD_SEGMENT_TYPE = 'D'
AND C.RELD_EXM_EXCH_ID = B.OLIM_EXCH_ID(+)
AND C.RELD_EXM_EXCH_ID <> 'ALL'
AND B.OLIM_INSTRUMENT_ID(+) = 'ALL'
AND ((A.EM_ENTITY_TYPE IN ('CL') AND
A.EM_CLIENT_TYPE <> 'CC') OR
A.EM_ENTITY_TYPE <> 'CL')
GROUP BY RELD_EM_ENTITY_ID,
RELD_EXM_EXCH_ID,
RELD_SEGMENT_TYPE,
RELD_ACC_TYPE,
OLIM_SUSPNSN_FLG,
EM_RP_PROFILE_ID
UNION --union all removed by pramod on 08-jan-2012 as it was giving multiple rows
SELECT RELD_EM_ENTITY_ID,
RELD_EXM_EXCH_ID,
RELD_SEGMENT_TYPE SEGMENTID,
NVL(RELD_ACC_TYPE, 0) ACC_TYPE, --END ASHA
ROUND(NVL(SUM(RELD_RTO_EXP), -1), 2) RTO_EXP,
ROUND(NVL(SUM(RELD_NE_EXP), -1), 2) NET_EXP,
ROUND(NVL(SUM(RELD_MAR_UTILIZATION), -1),
2) MAR_EXP,
ROUND(decode(sign(SUM(C.reld_m2m_exp)),
-1,
abs(SUM(C.reld_m2m_exp)),
0),
2) M2M_EXP,
NVL(OLIM_SUSPNSN_FLG, 'A') SUSPNSN_FLG,
EM_RP_PROFILE_ID
FROM ENTITY_MASTER A,
DRV_ORDER_INST_LMT_MASTER B,
RMS_ENTITY_LIMIT_DTLS C
WHERE A.EM_CONTROLLER_ID = 100100010000
AND A.EM_ENTITY_ID = B.OLIM_EPM_EM_ENTITY_ID
AND B.OLIM_EPM_EM_ENTITY_ID =
C.RELD_EM_ENTITY_ID(+)
AND C.RELD_SEGMENT_TYPE = 'D'
AND B.OLIM_EXCH_ID = 'ALL'
AND B.OLIM_INSTRUMENT_ID(+) = 'ALL'
AND ((A.EM_ENTITY_TYPE IN ('CL') AND
A.EM_CLIENT_TYPE <> 'CC') OR
A.EM_ENTITY_TYPE <> 'CL')
GROUP BY RELD_EM_ENTITY_ID,
RELD_EXM_EXCH_ID,
RELD_SEGMENT_TYPE,
RELD_ACC_TYPE,
OLIM_SUSPNSN_FLG,
EM_RP_PROFILE_ID)
ORDER BY RELD_EM_ENTITY_ID,
RELD_SEGMENT_TYPE,
RELD_EXM_EXCH_ID;
Please suggest what I should check for this.
As always when examining SQL performance, start by checking the query execution plan. If you use ttIsql you can just prepend EXPLAIN to the query and the plan will be displayed, e.g.
EXPLAIN select ...........;
Check that the plan is optimal and all necessary indexes are in place. You may need to add indexes depending on what the plan shows.
Please note that Oracle database can, and usually does, execute many types of query in parallel using multiple CPU cores. TimesTen does not currently support parallelisation of individual queries. Hence in some cases Oracle database may indeed be faster than TimesTen due to the parallel execution that occurs in Oracle.
Chris -
Hi,
My emp table contains one milion of records.
delete from emp;
commit;
select count(*)
from emp;
I executed the above three statements. After the commit, retrieving the number of records in that table takes some more time before it displays the result zero.
I was asked this question recently in an interview: why does it take more time?
Can anyone help me?
Did you read the link provided?
A high water mark is the set of blocks that have at one point contained data. You
might have 1000 blocks allocated to a table but only 500 are under the HWM.
The blocks under the HWM are the blocks that will be read when the table is fully scanned. In your sample code, when you insert 1 million records, the data will reside in some number of blocks, say 1000. Now the HWM is 1000. When you delete the data, the HWM will not get reset.
And after the delete, when you count, a FULL TABLE SCAN happens, which reads all the blocks under the HWM. So here 1000 blocks get read, even if there is no data. That is why your query takes time.
What's the solution? Do a TRUNCATE or a SHRINK.
NOW - Tell us - What is your concern?