Performance Issue in a Query
Hi,
I have written a query,
SELECT VBAK~VKORG VBAK~VBTYP VBAK~AUART VBAK~VTWEG VBAK~SPART
VBAK~KUNNR VBAK~KVGR1 VBAK~KVGR3 VBAP~WERKS VBAP~MATNR
VBAP~MVGR1 VBAP~MVGR2 VBAP~MVGR3 VBAP~MVGR4
VBAP~MVGR5 VBEP~VBELN VBEP~POSNR VBEP~ETENR VBEP~MBDAT
VBEP~WADAT
VBAP~KDMAT VBAP~ERDAT VBAP~ERZET VBAP~KWMENG
INTO CORRESPONDING FIELDS OF
TABLE INT_OPEN_SCH_LINES
FROM VBAK
INNER JOIN VBAP ON VBAK~VBELN = VBAP~VBELN
INNER JOIN VBEP ON VBEP~POSNR = VBAP~POSNR AND
VBEP~VBELN = VBAK~VBELN AND
VBEP~VBELN = VBAP~VBELN
FOR ALL ENTRIES IN T_TVEP
WHERE VBAP~WERKS IN S_WERKS AND
VBAP~MATNR IN S_MATNR AND
VBAP~MATKL IN S_MATKL AND
VBAP~ABGRU IN S_ABGRU AND
VBAK~KUNNR IN S_KUNNR AND
VBEP~ETENR = C_0001 AND
VBEP~WADAT = D0 AND
VBAK~VBTYP IN S_VBTYP AND
VBAK~AUART IN S_AUART AND
VBEP~ETTYP = T_TVEP-ETTYP AND
VBAK~KVGR3 IN S_KVGR3.
This is working fine in Development and Production systems and takes just 2 sec to execute. But the code takes more than 5 min to execute in Quality system. What could be the problem? The query cannot be modified, as this is the requirement.
Please help me out.
Thanks,
DhanaLakshmi M S
Hi,
As Christine Evans mentioned, this is most likely the issue. Check whether the statistics in the Quality system are up to date. If the statistics are not up to date, the optimizer will choose the wrong index; if there are no statistics at all, you will most likely get full table scans on the tables in the WHERE clause, which takes much longer. Which database are you using? If you are on Oracle 10g, compare the following database parameters between Production and QA and make the necessary changes if needed.
Refer SAP Note 830576.
OPTIMIZER_INDEX_CACHING
OPTIMIZER_INDEX_COST_ADJ
INDEXJOIN_ENABLED
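If the statistics turn out to be stale, they can be checked and refreshed from SQL*Plus. This is only a sketch: the schema name SAPSR3 is an assumption (SAP schemas are commonly SAPR3 or SAPSR3), and on SAP systems statistics are normally maintained via BRCONNECT/DB13 rather than by hand.

```sql
-- Check when the tables in the query were last analyzed
-- (the owner SAPSR3 is an assumption; adjust to your SAP schema).
SELECT table_name, last_analyzed, num_rows
FROM dba_tables
WHERE owner = 'SAPSR3'
  AND table_name IN ('VBAK', 'VBAP', 'VBEP');

-- Refresh statistics for one table, including its indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPSR3',
                                tabname => 'VBAK',
                                cascade => TRUE);
END;
/
```

If LAST_ANALYZED differs noticeably between Quality and Production, that alone can explain the different execution plans.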
Hope this helps,
Regards
Dileep
Similar Messages
-
Performance issue while generating Query
Hi BI Gurus.
I am facing a performance issue while running a query on 0IC_C03.
It has a (from & to) variable for generating the report for a particular time duration.
If the (from & to) variable is filled, the query runs for a long time and then ends with a runtime error.
If the query is executed without the variable (which is optional), the data is extracted from the beginning up to the current date; this takes less time to execute.
After that, the period has to be selected manually via the option "keep filter value". Please suggest how I can solve this error.
Regards
Ritika
Hi Ritika,
Welcome to SDN.
You have to check the following runtime segments using transaction ST03N:
High Database Runtime
High OLAP Runtime
High Frontend Runtime
If it is high database runtime:
- check the aggregates, or create aggregates on the cube; this will help you.
If it is high OLAP runtime:
- check the user exits, if any.
- check whether hierarchies are used and fetched at a deep level.
If it is high frontend runtime:
- check whether a very high number of cells and formattings are transferred to the frontend (use "All data" to get the value for "No. of Cells"), which causes high network and frontend (processing) runtime.
For the from and to date variables, create one more set and use it and try.
Regs,
VACHAN -
Performance Issues in Text query.
Hi,
We are doing text mining on a column that has been indexed as a MULTI_COLUMN_DATASTORE over two CLOB fields.
While searching on this column, we face performance issues. Both TABLE1 and TABLE2 contain more than 46 million records, and the query takes more than 10 minutes to execute.
This is the query used:
SELECT TABLE1.COLUMN1
FROM TABLE1, TABLE2
WHERE CONTAINS
(TABLE2.CLOB_COLUMN,
'(((({TEMPERATURE} OR {TEMP} OR {TEMPS} OR {THERMAL}) OR ({ENGINE} OR {ENG} OR {ENGINES})) AND (({SENSOR} OR {PROBE} OR {SEN} OR {SENSORS} OR {SENSR} OR {SNSR} OR {TRANSDUCER}))) AND (({INSTALL} OR {INST} OR {INSTALLATION} OR {INSTALLED} OR {INSTD} OR {INSTL} OR {INSTLD} OR {INSTN}) OR ({REMOVED} OR {REMOVAL} OR {REMOVE} OR {REMVD} OR {RMV} OR {RMVD} OR {RMVL}) OR ({REPLACED} OR {R+R} OR {R/I} OR {R/R} OR {REPL} OR {REPLACE} OR {REPLCD} OR {REPLD} OR {RM/RP} OR {RPL} OR {RPLCD} OR {RPLCED} OR {RPLD} OR {RPLED}) OR ({INOPERATIVE} OR {INOP}) OR ({FAILURE} OR {FAIL} OR {FAILD} OR {FAILED} OR {FAILR}) OR ({CHANGE} OR {CHANGED} OR {CHG} OR {CHGD} OR {CHGE} OR {CHGED}))) AND (({PRESSURE} OR {PRES} OR {PRESR} OR {PRESS} OR {PRESSURES} OR {PRESSURIZATION} OR {PRESSURIZE} OR {PRESSURIZED} OR {PRESSURIZING} OR {PRSZ} OR {PRSZD} OR {PRSZG} OR {PX})) NOT ({CARRIED} OR ({COOLANT} OR {COLNT}) OR ({DISPLAYED} OR {DSPLYD}))'
) > 0
AND TABLE1.COLUMN1 = TABLE2.COLUMN1
AND TABLE1.COLUMN2 = TABLE2.COLUMN2
We created the index using the following procedure:
begin
ctx_ddl.create_preference('my_alias', 'MULTI_COLUMN_DATASTORE');
ctx_ddl.set_attribute('my_alias', 'columns', 'column1, column2, column3');
end;
create index index_name
on table_name(column1)
indextype is ctxsys.context
parameters ('datastore ctxsys.my_alias');
Please let me know if there are any optimization techniques that can be used to tune this query.
Also, I would like to know if using "MULTI_COLUMN_DATASTORE" would cause performance issues. Will Query REWRITE improve the performance in this case?
Thanks in Advance!
What happens if you remove the join and just run the query against TABLE2?
Try it without the join, and just fetch the first 100 records (or something) from the query and see how that performs.
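That test could look like the following sketch; the FIRST_ROWS hint, the shortened CONTAINS expression, and the ROWNUM cap are illustrative assumptions, not part of the original query:

```sql
-- Run the CONTAINS predicate alone, without the join to TABLE1,
-- and stop after the first 100 hits to gauge raw text-index speed.
SELECT /*+ FIRST_ROWS(100) */ COLUMN1
FROM TABLE2
WHERE CONTAINS(CLOB_COLUMN,
               '({TEMPERATURE} OR {TEMP} OR {THERMAL}) AND ({SENSOR} OR {PROBE})') > 0
  AND ROWNUM <= 100;
```

If this runs quickly, the cost is in the join or in fetching the full result set; if it is still slow, the text index itself needs attention.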
Have you optimized the index? Always worth doing after an index creation on a large table, especially as you've used the default index memory setting for the index. -
Performance issue with insert query !
Hi ,
I am using dbxml-2.4.16; my node-storage container is loaded with a large document (a 54 MB XML).
My document basically contains around 65k records in the same table (65k child nodes for one parent node). I need to insert more records into my DB, but my insert XQuery is consuming a lot of time: ~23 sec to insert one entry through the command line and around 50 sec through code.
My container is indexed with "node-attribute-equality-string". The insert query I used:
insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable
If I modify my query with
into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
instead of
into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable
the time taken is reduced by only 8 seconds.
I have also tried to use insert "after", "before", "as first", "as last", but there is no difference in performance.
Is anything wrong with my query? What should be the expected time to insert one record into a DB of 65k records?
Has anybody got any idea regarding this performance issue?
Kindly help me out.
Thanks,
Kapil.
Hi George,
Thanks for your reply.
Here is the info you requested,
dbxml> listIndexes
Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
Index: node-attribute-equality-string for node {}:mySSIAddress
2 indexes found.
dbxml> info
Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
Berkeley DB 4.6.21: (September 27, 2007)
Default container name: n_b_i_f_c_a_z.dbxml
Type of default container: NodeContainer
Index Nodes: on
Shell and XmlManager state:
Not transactional
Verbose: on
Query context state: LiveValues,Eager
The insert query with the update takes ~32 sec (shown below):
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
Time in seconds for command 'query': 32.5002
and the query without the update part takes ~14 sec (shown below):
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
Time in seconds for command 'query': 13.7289
The query :
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
Time in seconds for command 'query': 0.005375
is very fast.
The update of the document seems to consume most of the time.
Regards,
Kapil. -
Performance issue with select query
Hi friends ,
This is my select query, which is taking a very long time to retrieve the records. Can anyone help me solve this performance issue?
*- Get the Goods receipts mainly selected per period (=> MKPF secondary
SELECT mseg~ebeln mseg~ebelp mseg~werks
ekko~bukrs ekko~lifnr ekko~zterm ekko~ekorg ekko~ekgrp
ekko~inco1 ekko~exnum
lfa1~name1 lfa1~land1 lfa1~ktokk lfa1~stceg
mkpf~mblnr mkpf~mjahr mseg~zeile mkpf~bldat mkpf~budat
mseg~bwart
*Start of changes for CIP 6203752 by PGOX02
mseg~smbln
*End of changes for CIP 6203752 by PGOX02
ekpo~matnr ekpo~txz01 ekpo~menge ekpo~meins
ekbe~menge ekbe~dmbtr ekbe~wrbtr ekbe~waers
ekpo~lgort ekpo~matkl ekpo~webaz ekpo~konnr ekpo~ktpnr
ekpo~plifz ekpo~bstae
INTO TABLE it_temp
FROM mkpf JOIN mseg ON mseg~mblnr EQ mkpf~mblnr
AND mseg~mjahr EQ mkpf~mjahr
JOIN ekbe ON ekbe~ebeln EQ mseg~ebeln
AND ekbe~ebelp EQ mseg~ebelp
AND ekbe~zekkn EQ '00'
AND ekbe~vgabe EQ '1'
AND ekbe~gjahr EQ mseg~mjahr
AND ekbe~belnr EQ mseg~mblnr
AND ekbe~buzei EQ mseg~zeile
JOIN ekpo ON ekpo~ebeln EQ ekbe~ebeln
AND ekpo~ebelp EQ ekbe~ebelp
JOIN ekko ON ekko~ebeln EQ ekpo~ebeln
JOIN lfa1 ON lfa1~lifnr EQ ekko~lifnr
WHERE mkpf~budat IN so_budat
AND mkpf~bldat IN so_bldat
AND mkpf~vgart EQ 'WE'
AND mseg~bwart IN so_bwart
AND mseg~matnr IN so_matnr
AND mseg~werks IN so_werks
AND mseg~lifnr IN so_lifnr
AND mseg~ebeln IN so_ebeln
AND ekko~ekgrp IN so_ekgrp
AND ekko~bukrs IN so_bukrs
AND ekpo~matkl IN so_matkl
AND ekko~bstyp IN so_bstyp
AND ekpo~loekz EQ space
AND ekpo~plifz IN so_plifz.
Thanks & Regards,
Manoj Kumar .Thatha
Moderator message - Please see Please Read before Posting in the Performance and Tuning Forum before posting and please use code tags when posting code - post locked
Edited by: Rob Burbank on Feb 4, 2010 9:03 AM -
Performance Issue with sql query
Hi,
My db is 10.2.0.5 with RAC on ASM, Cluster ware version 10.2.0.5.
With bsoa table as
SQL> desc bsoa;
Name Null? Type
ID NOT NULL NUMBER
LOGIN_TIME DATE
LOGOUT_TIME DATE
SUCCESSFUL_IND VARCHAR2(1)
WORK_STATION_NAME VARCHAR2(80)
OS_USER VARCHAR2(30)
USER_NAME NOT NULL VARCHAR2(30)
FORM_ID NUMBER
AUDIT_TRAIL_NO NUMBER
CREATED_BY VARCHAR2(30)
CREATION_DATE DATE
LAST_UPDATED_BY VARCHAR2(30)
LAST_UPDATE_DATE DATE
SITE_NO NUMBER
SESSION_ID NUMBER(8)
The query
UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID
Is taking a lot of time to execute and in AWR reports also it is on top in
1. SQL Order by elapsed time
2. SQL order by reads
3. SQL order by gets
So I am trying to solve the performance issue, as the application is slow, especially during login and logout.
I understand that the function in the WHERE condition causes a full table scan, but I cannot think of what other parts to look at.
Also:
SQL> SELECT COUNT(1) FROM BSOA;
COUNT(1)
7800373
The explain plan for "UPDATE BSOA SET LOGOUT_TIME =SYSDATE WHERE SYS_CONTEXT('USERENV', 'SESSIONID') = SESSION_ID" is
{code}
PLAN_TABLE_OUTPUT
Plan hash value: 1184960901
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 1 | 26 | 18748 (3)| 00:03:45 |
| 1 | UPDATE | BSOA | | | | |
|* 2 | TABLE ACCESS FULL| BSOA | 1 | 26 | 18748 (3)| 00:03:45 |
Predicate Information (identified by operation id):
2 - filter("SESSION_ID"=TO_NUMBER(SYS_CONTEXT('USERENV','SESSIONID')))
{code}
Hi,
There are also triggers before update and AUDITS on this table.
CREATE OR REPLACE TRIGGER B2.TRIGGER1
BEFORE UPDATE
ON B2.BSOA REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
:NEW.LAST_UPDATED_BY := USER ;
:NEW.LAST_UPDATE_DATE := SYSDATE ;
END;
CREATE OR REPLACE TRIGGER B2.TRIGGER2
BEFORE INSERT
ON B2.BSOA REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
:NEW.CREATED_BY := USER ;
:NEW.CREATION_DATE := SYSDATE ;
:NEW.LAST_UPDATED_BY := USER ;
:NEW.LAST_UPDATE_DATE := SYSDATE ;
END;
And also there is an audit on this table
AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER SUCCESSFUL;
AUDIT UPDATE ON B2.BSOA BY ACCESS WHENEVER NOT SUCCESSFUL;
And the sessionid column in BSOA has height balanced histogram.
When I create an index I get the following error. As I am on 10g I can't use DDL_LOCK_TIMEOUT, so I may have to wait for the next downtime.
SQL> CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS;
CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID) TABLESPACE B2 COMPUTE STATISTICS
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
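One possible workaround (a sketch, not a guaranteed fix): building the index ONLINE lets DML continue during most of the build, so the ORA-00054 window shrinks to the short locks taken at the start and end. A quiet period is still the safest option on 10g.

```sql
-- ONLINE reduces (but does not eliminate) the locking needed to build
-- the index, so concurrent updates on BSOA are mostly unaffected.
CREATE INDEX B2.BSOA_SESSID_I ON B2.BSOA(SESSION_ID)
  TABLESPACE B2 ONLINE COMPUTE STATISTICS;
```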
Thanks -
Performance issue with select query and for all entries.
Hi,
I have a report to be performance tuned.
The database table has around 20 million entries and 25 fields.
The report fetches the distinct values of two fields using one select query.
This first select query fetches around 150 entries from the table for the 2 fields.
It then applies some logic, eliminates some entries, and ends up with around 80-90 entries.
Then it applies a second select query on the same table, using FOR ALL ENTRIES on the internal table with those 80-90 entries.
In short, it accesses the same database table twice.
I tried to read the database table into an internal table, apply the logic there, and delete the unwanted entries, but it gave me a memory dump; ABAP memory won't take that huge amount of data.
Are 80-90 entries too many for using FOR ALL ENTRIES?
The logic that eliminates the entries from the internal table is too long, and hence cannot be converted into a WHERE clause to turn this into a single select.
I really can't find a way out.
Please help.
chinmay kulkarni wrote:
Chinmay,
Even though you tried to ask the question with detailed explanation, unfortunately it is still not clear.
It is perfectly fine to access the same database table twice. If that is working for you, I don't think there is any need to change the logic. As Rob mentioned, 80 or 8000 records is not a problem in a "for all entries" clause.
>
> so, i tried to get the database table in internal table and apply the logic on internal table and delete the unwanted entries.. but it gave me memory dump, and it wont take that huge amount of data into abap memory...
>
It is not clear what you tried to do here. Did you try to bring all 20 million records into an internal table? That will certainly cause the program to short dump with memory shortage.
> the logic that is applied to eliminate the entries from internal table is too long, and hence cannot be converted into where clause to convert it into single select..
>
That is fine. Actually, it is better (performance wise) to do much of the work in ABAP than writing a complex WHERE clause that might bog down the database. -
Performance issue in Select Query on ERCH
Dear Friends,
I have to execute a query on ERCH in the production system, which has around 20 lakh (2 million) records and keeps growing: "SELECT BELNR VERTRAG ADATSOLL FROM ERCH WHERE BCREASON = '03'". The expected volume of data that the query will return is approx 1400 records.
Since the WHERE clause includes a field that is not a key field, please advise on the performance of the query so that it doesn't time out.
Regards,
Amit Srivastava
Hello Amit Srivastava,
You can add the contract account number (VKONT) and business partner number (GPARTNER) to your query and create a custom index on the contract account number (we have this index created in our system).
This will improve the performance effectively.
Hope this answers your question.
Thanks,
Greetson -
Performance issue with infoset query
Dear Experts,
when I try to run a query based on an InfoSet, the execution time of the query is very high. When I open the Query Designer I get the following warning:
Warning Message :
Diagnosis
InfoProvider XXXX does not contain characteristic 0CALYEAR. The exception aggregation for key figure XXXX can therefore not be applied.
System Response
The key figure will therefore be aggregated using all characteristics of XXXXX with the generic aggregation SUM.
could anyone can guide me how to check at the infoset level to resolve this issue....
Thanks,
Mannu!
Hi Mannu,
Go to change mode in your InfoSet and identify the characteristic 0CALYEAR. Check whether it is selected (click the box beside 0CALYEAR).
This needs to be done because the exception aggregation can occur only for the fields available in the BEx query!
Since it is not available, the system is trying to aggregate based on all the available fields selected in the InfoSet. This may be the reason for your performance issue.
Make this setting and I hope your issue will get resolved. Do post the outcome.
Cheers,
VA
Edited by: Vishwa Anand on Sep 6, 2010 5:15 PM -
Performance issue in this query:
Hi,
Here is the query:
SELECT COLLECTIONKEY, FLOWSYSKEY, LIFECYCLENUMBER, PUBLICVERSION, PUBLICOPERATION,
STATUSCODE, VALUEDATE, REASONCODE FROM AA out WHERE COLLECTIONKEY
LIKE '1437023L%' AND COLLECTIONVERSION = ( SELECT MAX(COLLECTIONVERSION) FROM cfdg_owner.AA inn WHERE inn.FLOWSYSKEY=out.FLOWSYSKEY)
Now this table AA has non clustered index on CollectionKey and CollectionVersion.
The above query will return only 4 records, but the optimizer is still using the index.
FYI, AA is a huge table with records in the millions.
SELECT STATEMENT 8.0 8 58591 1 108 8 ALL_ROWS
FILTER 1
TABLE ACCESS (BY INDEX ROWID) 5.0 5 36827 2 216 1 CFDG_OWNER TB_PUBLIC_CF BY INDEX ROWID TABLE ANALYZED 1
INDEX (RANGE SCAN) 3.0 3 21764 2 1 CFDG_OWNER IDX2_TB_PUBLIC_CF RANGE SCAN INDEX ANALYZED 1
SORT (AGGREGATE) 1 41 2 AGGREGATE
INDEX (RANGE SCAN) 3.0 3 21764 1 41 1 CFDG_OWNER IDX1_TB_PUBLIC_CF RANGE SCAN INDEX ANALYZED 1
Plus, the response time of this query is very fast. When I force the optimizer not to use the index, it takes more time.
How come the index range scan performs better in this case than a full scan?
Version is:
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Any more information required, please let me know.
Thanks.
It only returns 4 records, but how many does it have to scan through and eliminate to satisfy the MAX condition? i.e. how many versions are there per key?
It's quite possible this would be a more efficient query for you (it will obviate the secondary index range scan the current query is utilizing).
SELECT
COLLECTIONKEY,
FLOWSYSKEY,
LIFECYCLENUMBER,
PUBLICVERSION,
PUBLICOPERATION,
STATUSCODE,
VALUEDATE,
REASONCODE
FROM
(
SELECT
COLLECTIONKEY,
FLOWSYSKEY,
LIFECYCLENUMBER,
PUBLICVERSION,
PUBLICOPERATION,
STATUSCODE,
VALUEDATE,
REASONCODE,
COLLECTIONVERSION,
MAX(COLLECTIONVERSION) OVER (PARTITION BY FLOWSYSKEY) AS MAX_COLLECTIONVERSION
FROM AA
WHERE COLLECTIONKEY LIKE '1437023L%'
)
WHERE COLLECTIONVERSION = MAX_COLLECTIONVERSION;
But without knowing a lot more about your data distributions, etc... it's just a guess.
Performance issue with this query.
Hi Experts,
This query is fetching 500 records.
SELECT
RECIPIENT_ID ,FAX_STATUS
FROM
FAX_STAGE WHERE LOWER(FAX_STATUS) like 'moved to%'
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 159K| 10M| 2170 (1)|
| 1 | TABLE ACCESS BY INDEX ROWID| FAX_STAGE | 159K| 10M| 2170 (1)|
| 2 | INDEX RANGE SCAN | INDX_FAX_STATUS_RAM | 28786 | | 123 (0)|
Note
- 'PLAN_TABLE' is old version
Statistics
1 recursive calls
0 db block gets
21 consistent gets
0 physical reads
0 redo size
937 bytes sent via SQL*Net to client
375 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
19 rows processed
Total number of records in the table.
SELECT COUNT(*) FROM FAX_STAGE--3679418
The distinct records are few for this column.
SELECT DISTINCT FAX_STATUS FROM FAX_STAGE;
Completed
BROKEN
Broken - New
moved to - America
MOVED to - Australia
Moved to Canada and australia
There is a function-based index on FAX_STAGE(LOWER(FAX_STATUS)).
The stats are up to date.
Still, the cost is high.
How can I improve the performance of this query?
Please help me.
Thanks in advance.
With no heavy activity on your FAX_STAGE table a bitmap index might do better - see CREATE INDEX.
I would try an FTS (full table scan) first, as 6 distinct values vs. 3679418 rows is low cardinality for sure, so using an index is not very helpful in this case (maybe too Exadata-oriented).
There are a lot of web pages where you can read that full table scans are not always evil and indexes are not always good, or vice versa - see Ask Tom, "How to avoid the full table scan".
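The FTS suggestion can be tried directly with a hint before changing any indexes; the alias f below is introduced just for illustration and is not from the original query:

```sql
-- Force a full table scan and compare elapsed time and consistent gets
-- against the current index range scan plan.
SELECT /*+ FULL(f) */ RECIPIENT_ID, FAX_STATUS
FROM FAX_STAGE f
WHERE LOWER(FAX_STATUS) LIKE 'moved to%';
```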
Regards
Etbin -
Performance issue of the query
Hi All,
Please help me to avoid sequential access on the table VBAK,
as I am using this query on VBAK, VBUK, VBPA and VBFA.
The query is as follows.
How can I improve the performance of this query? It is very urgent; please give me a sample query if possible.
SELECT a~vbeln
a~erdat
a~ernam
a~vkorg
a~vkbur
a~kunnr
b~kunnr AS kunnr_shp
b~adrnr
c~gbstk
c~fkstk
d~vbeln AS vbeln_i
INTO CORRESPONDING FIELDS OF TABLE t_z92sales
FROM vbak AS a INNER JOIN vbuk AS c ON a~vbeln = c~vbeln
INNER JOIN vbpa AS b ON a~vbeln = b~vbeln
AND b~posnr = 0 AND b~parvw = 'WE'
LEFT OUTER JOIN vbfa AS d ON a~vbeln = d~vbelv AND d~vbtyp_n = 'M'
WHERE ( a~vbeln BETWEEN fvbeln AND tvbeln )
AND a~vbtyp = 'C'
AND a~vkorg IN vkorg_s
AND c~gbstk IN gbstk_s.
Hi Ramada,
Try using the following code.
RANGES: r_vbeln FOR vbka-vbeln.
DATA: w_index TYPE sy-tabix,
BEGIN OF t_vbak OCCURS 0,
vbeln TYPE vbak-vbeln,
erdat TYPE vbak-erdat,
ernam TYPE vbak-ernam,
vkorg TYPE vbak-vkorg,
vkbur TYPE vbak-vkbur,
kunnr TYPE vbak-kunnr,
del(1) TYPE c ,
END OF t_vbak,
BEGIN OF t_vbuk OCCURS 0,
vbeln TYPE vbuk-vbeln,
gbstk TYPE vbuk-gbstk,
fkstk TYPE vbuk-fkstk,
END OF t_vbuk,
BEGIN OF t_vbpa OCCURS 0,
vbeln TYPE vbpa-vbeln,
kunnr TYPE vbpa-kunnr,
adrnr TYPE vbpa-adrnr,
END OF t_vbpa,
BEGIN OF t_vbfa OCCURS 0,
vbelv TYPE vbfa-vbelv,
vbeln TYPE vbfa-vbeln,
END OF t_vbfa,
BEGIN OF t_z92sales OCCURS 0,
vbeln TYPE vbak-vbeln,
erdat TYPE vbak-erdat,
ernam TYPE vbak-ernam,
vkorg TYPE vbak-vkorg,
vkbur TYPE vbak-vkbur,
kunnr TYPE vbak-kunnr,
gbstk TYPE vbuk-gbstk,
fkstk TYPE vbuk-fkstk,
kunnr_shp TYPE vbpa-kunnr,
adrnr TYPE vbpa-adrnr,
vbeln_i TYPE vbfa-vbeln,
END OF t_z92sales.
IF NOT fvbeln IS INITIAL
OR NOT tvbeln IS INITIAL.
REFRESH r_vbeln.
r_vbeln-sign = 'I'.
IF fvbeln IS INITIAL.
r_vbeln-option = 'EQ'.
r_vbeln-low = tvbeln.
ELSEIF tvbeln IS INITIAL.
r_vbeln-option = 'EQ'.
r_vbeln-low = fvbeln.
ELSE.
r_vbeln-option = 'BT'.
r_vbeln-low = fvbeln.
r_vbeln-high = tvbeln.
ENDIF.
APPEND r_vbeln.
CLEAR r_vbeln.
SELECT vbeln
erdat
ernam
vkorg
vkbur
kunnr
FROM vbak
INTO TABLE t_vbak
WHERE vbeln IN r_vbeln
AND vbtyp EQ 'C'
AND vkorg IN vkorg_s.
IF sy-subrc EQ 0.
SORT t_vbak BY vbeln.
SELECT vbeln
gbstk
fkstk
FROM vbuk
INTO TABLE t_vbuk
FOR ALL ENTRIES IN t_vbak
WHERE vbeln EQ t_vbak-vbeln
AND gbstk IN gbstk_s.
IF sy-subrc EQ 0.
SORT t_vbuk BY vbeln.
LOOP AT t_vbak.
w_index = sy-tabix.
READ TABLE t_vbuk WITH KEY vbeln = t_vbak-vbeln
BINARY SEARCH
TRANSPORTING NO FIELDS.
IF sy-subrc NE 0.
t_vbak-del = 'X'.
MODIFY t_vbak INDEX w_index TRANSPORTING del.
ENDIF.
ENDLOOP.
DELETE t_vbak WHERE del EQ 'X'.
ENDIF.
ENDIF.
ELSE.
SELECT vbeln
gbstk
fkstk
FROM vbuk
INTO TABLE t_vbuk
WHERE vbtyp EQ 'C'
AND gbstk IN gbstk_s.
IF sy-subrc EQ 0.
SORT t_vbuk BY vbeln.
SELECT vbeln
erdat
ernam
vkorg
vkbur
kunnr
FROM vbak
INTO TABLE t_vbak
FOR ALL ENTRIES IN t_vbuk
WHERE vbeln EQ t_vbuk-vbeln
AND vkorg IN vkorg_s.
IF sy-subrc EQ 0.
SORT t_vbak BY vbeln.
ENDIF.
ENDIF.
ENDIF.
IF NOT t_vbak[] IS INITIAL.
SELECT vbeln
kunnr
adrnr
FROM vbpa
INTO TABLE t_vbpa
FOR ALL ENTRIES IN t_vbak
WHERE vbeln EQ t_vbak-vbeln
AND posnr EQ '000000'
AND parvw EQ 'WE'.
IF sy-subrc EQ 0.
SORT t_vbpa BY vbeln.
SELECT vbelv
vbeln
FROM vbfa
INTO TABLE t_vbfa
FOR ALL ENTRIES IN t_vbpa
WHERE vbelv EQ t_vbpa-vbeln
AND vbtyp_n EQ 'M'.
IF sy-subrc EQ 0.
SORT t_vbfa BY vbelv.
ENDIF.
ENDIF.
ENDIF.
REFRESH t_z92sales.
LOOP AT t_vbak.
READ TABLE t_vbuk WITH KEY vbeln = t_vbak-vbeln
BINARY SEARCH
TRANSPORTING
gbstk
fkstk.
IF sy-subrc EQ 0.
READ TABLE t_vbpa WITH KEY vbeln = t_vbak-vbeln
BINARY SEARCH
TRANSPORTING
kunnr
adrnr.
IF sy-subrc EQ 0.
READ TABLE t_vbfa WITH KEY vbelv = t_vbak-vbeln
BINARY SEARCH
TRANSPORTING
vbeln.
IF sy-subrc EQ 0.
t_z92sales-vbeln = t_vbak-vbeln.
t_z92sales-erdat = t_vbak-erdat.
t_z92sales-ernam = t_vbak-ernam.
t_z92sales-vkorg = t_vbak-vkorg.
t_z92sales-vkbur = t_vbak-vkbur.
t_z92sales-kunnr = t_vbak-kunnr.
t_z92sales-gbstk = t_vbuk-gbstk.
t_z92sales-fkstk = t_vbuk-fkstk.
t_z92sales-kunnr_shp = t_vbpa-kunnr.
t_z92sales-adrnr = t_vbpa-adrnr.
t_z92sales-vbeln_i = t_vbfa-vbeln.
APPEND t_z92sales.
CLEAR t_z92sales.
ENDIF.
ENDIF.
ENDIF.
ENDLOOP. -
Performance Issue with the query
Hi Experts,
While working on PeopleSoft today, I was stuck with a problem: one of the queries is performing really badly. Even with the statistics updated, the query does not perform well. On one of the tables, the query spends a lot of time doing IO (db file sequential read waits). If I delete the stats for that table and let dynamic sampling take place, the query works well and there is no IO wait on the table.
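If deleting the statistics reliably produces the good plan, one hedged way to make that state stick is to delete and then lock the table's statistics, so a scheduled stats job cannot re-gather them and the optimizer keeps falling back to dynamic sampling. The owner and table names below are placeholders, not from the thread:

```sql
-- Placeholder names: replace YOUR_SCHEMA / YOUR_TABLE with the
-- actual owner and table identified in the AWR report.
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'YOUR_SCHEMA', tabname => 'YOUR_TABLE');
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'YOUR_SCHEMA', tabname => 'YOUR_TABLE');
END;
/
```

Locking stats is deliberate and reversible (DBMS_STATS.UNLOCK_TABLE_STATS), which makes it safer than silently letting stats go stale.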
Here is the query
SELECT A.BUSINESS_UNIT_PC, A.PROJECT_ID, E.DESCR, A.EMPLID, D.NAME, C.DESCR, A.TRC,
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
SUM( A.EST_GROSS),
DECODE( A.TRC, 'ROVA1', 0, 'ROVA2', 0, ( SUM( A.EST_GROSS)/100) * 9.75),
'2012-07-01',
'2012-07-31',
SUM( DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 1, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 2, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 3, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 4, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 5, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 6, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 7, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 8, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 9, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 10, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 11, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 12, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 13, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 14, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 15, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 16, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 17, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 18, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 19, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 20, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 21, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 22, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 23, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 24, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 25, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 9, 2), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 26, 'DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 27, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 28, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 29, 'MM-DD'), A.TL_QUANTITY, 0) +
DECODE(SUBSTR( TO_CHAR(A.DUR,'YYYY-MM-DD'), 6, 5), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD') + 30, 'MM-DD'), A.TL_QUANTITY, 0)),
DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'),
TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'),
TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY') || ' / ' || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
C.TRC, TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
E.BUSINESS_UNIT,
E.PROJECT_ID
FROM
PS_TL_PAYABLE_TIME A,
PS_TL_TRC_TBL C,
PS_PROJECT E,
PS_PERSONAL_DATA D,
PS_PERALL_SEC_QRY D1,
PS_JOB F,
PS_EMPLMT_SRCH_QRY F1
WHERE
D.EMPLID = D1.EMPLID
AND D1.OPRID = 'TMANI'
AND F.EMPLID = F1.EMPLID
AND F.EMPL_RCD = F1.EMPL_RCD
AND F1.OPRID = 'TMANI'
AND A.DUR BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD') AND TO_DATE('2012-07-31','YYYY-MM-DD')
AND C.TRC = A.TRC
AND C.EFFDT = (SELECT
MAX(C_ED.EFFDT)
FROM
PS_TL_TRC_TBL C_ED
WHERE
C.TRC = C_ED.TRC
AND C_ED.EFFDT <= SYSDATE)
AND E.BUSINESS_UNIT = A.BUSINESS_UNIT_PC
AND E.PROJECT_ID = A.PROJECT_ID
AND A.EMPLID = D.EMPLID
AND A.EMPLID = F.EMPLID
AND A.EMPL_RCD = F.EMPL_RCD
AND F.EFFDT = (SELECT
MAX(F_ED.EFFDT)
FROM
PS_JOB F_ED
WHERE
F.EMPLID = F_ED.EMPLID
AND F.EMPL_RCD = F_ED.EMPL_RCD
AND F_ED.EFFDT <= SYSDATE)
AND F.EFFSEQ = (SELECT
MAX(F_ES.EFFSEQ)
FROM
PS_JOB F_ES
WHERE
F.EMPLID = F_ES.EMPLID
AND F.EMPL_RCD = F_ES.EMPL_RCD
AND F.EFFDT = F_ES.EFFDT)
AND F.GP_PAYGROUP = DECODE(' ', ' ', F.GP_PAYGROUP, ' ')
AND A.CURRENCY_CD = DECODE(' ', ' ', A.CURRENCY_CD, ' ')
AND DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')) = DECODE(' ', ' ', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')), 'L', 'L', 'E', 'E', 'A', DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'L', DECODE( F.PER_ORG, 'CWR', 'L', 'E')))
AND ( A.EMPLID, A.EMPL_RCD) IN (SELECT
B.EMPLID,
B.EMPL_RCD
FROM
PS_TL_GROUP_DTL B
WHERE
B.TL_GROUP_ID = DECODE('ER012', ' ', B.TL_GROUP_ID, 'ER012'))
AND E.PROJECT_USER1 = DECODE(' ', ' ', E.PROJECT_USER1, ' ')
AND A.PROJECT_ID = DECODE(' ', ' ', A.PROJECT_ID, ' ')
AND A.PAYABLE_STATUS <>
CASE
WHEN to_number(TO_CHAR(sysdate, 'DD')) < 15
THEN
CASE
WHEN A.DUR > last_day(add_months(sysdate, -2))
THEN ' '
ELSE 'NA'
END
ELSE
CASE
WHEN A.DUR > last_day(add_months(sysdate, -1))
THEN ' '
ELSE 'NA'
END
END
AND A.EMPLID = DECODE(' ', ' ', A.EMPLID, ' ')
GROUP BY A.BUSINESS_UNIT_PC,
A.PROJECT_ID,
E.DESCR,
A.EMPLID,
D.NAME,
C.DESCR,
A.TRC,
'2012-07-01',
'2012-07-31',
DECODE( A.CURRENCY_CD, 'USD', '$', 'GBP', '£', 'EUR', '€', 'AED', 'D', 'NGN', 'N', ' '),
DECODE(SUBSTR( F.GP_PAYGROUP, 1, 2), 'NG', 'NG', F.PER_ORG),
DECODE(TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'MM'), TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY'), TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'Mon ')
|| TO_CHAR(to_date('2012-07-01', 'YYYY-MM-DD'), 'YYYY') || ' / '
|| TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'Mon ') || TO_CHAR(to_date('2012-07-31', 'YYYY-MM-DD'), 'YYYY')),
C.TRC, TO_CHAR(C.EFFDT,'YYYY-MM-DD'),
E.BUSINESS_UNIT, E.PROJECT_ID
HAVING SUM( A.EST_GROSS) <> 0
ORDER BY 1,
2,
5,
6 ;
Here is the screenshot for the I/O wait:
[https://lh4.googleusercontent.com/-6PFW2FSK3yE/UCrwUbZ0pvI/AAAAAAAAAPA/eHM48AOC0Uo]
Edited by: Oceaner on Aug 14, 2012 5:38 PM
Here is the execution plan with all the statistics present:
PLAN_TABLE_OUTPUT
Plan hash value: 1575300420
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 237 | 2703 (1)| 00:00:33 |
|* 1 | FILTER | | | | | |
| 2 | SORT GROUP BY | | 1 | 237 | | |
| 3 | CONCATENATION | | | | | |
|* 4 | FILTER | | | | | |
|* 5 | FILTER | | | | | |
|* 6 | HASH JOIN | | 1 | 237 | 1695 (1)| 00:00:21 |
|* 7 | HASH JOIN | | 1 | 204 | 1689 (1)| 00:00:21 |
|* 8 | HASH JOIN SEMI | | 1 | 193 | 1352 (1)| 00:00:17 |
| 9 | NESTED LOOPS | | | | | |
| 10 | NESTED LOOPS | | 1 | 175 | 1305 (1)| 00:00:16 |
|* 11 | HASH JOIN | | 1 | 148 | 1304 (1)| 00:00:16 |
| 12 | JOIN FILTER CREATE | :BF0000 | | | | |
| 13 | NESTED LOOPS | | | | | |
| 14 | NESTED LOOPS | | 1 | 134 | 967 (1)| 00:00:12 |
| 15 | NESTED LOOPS | | 1 | 103 | 964 (1)| 00:00:12 |
|* 16 | TABLE ACCESS FULL | PS_PROJECT | 197 | 9062 | 278 (1)| 00:00:04 |
|* 17 | TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME | 1 | 57 | 7 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | IDX$$_C44D0007 | 16 | | 3 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | IDX$$_3F450003 | 1 | | 2 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 1 | 31 | 3 (0)| 00:00:01 |
| 21 | VIEW | PS_EMPLMT_SRCH_QRY | 5428 | 75992 | 336 (2)| 00:00:05 |
PLAN_TABLE_OUTPUT
| 22 | SORT UNIQUE | | 5428 | 275K| 336 (2)| 00:00:05 |
|* 23 | FILTER | | | | | |
| 24 | JOIN FILTER USE | :BF0000 | 55671 | 2827K| 335 (1)| 00:00:05 |
| 25 | NESTED LOOPS | | 55671 | 2827K| 335 (1)| 00:00:05 |
| 26 | TABLE ACCESS BY INDEX ROWID| PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 27 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|* 28 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1739K| 333 (1)| 00:00:04 |
| 29 | CONCATENATION | | | | | |
| 30 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|* 31 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|* 32 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 33 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 34 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 35 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 36 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 37 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 38 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
|* 39 | INDEX UNIQUE SCAN | PS_PERSONAL_DATA | 1 | | 0 (0)| 00:00:01 |
| 40 | TABLE ACCESS BY INDEX ROWID | PS_PERSONAL_DATA | 1 | 27 | 1 (0)| 00:00:01 |
|* 41 | INDEX FAST FULL SCAN | PS_TL_GROUP_DTL | 323 | 5814 | 47 (3)| 00:00:01 |
| 42 | VIEW | PS_PERALL_SEC_QRY | 7940 | 87340 | 336 (2)| 00:00:05 |
| 43 | SORT UNIQUE | | 7940 | 379K| 336 (2)| 00:00:05 |
|* 44 | FILTER | | | | | |
|* 45 | FILTER | | | | | |
| 46 | NESTED LOOPS | | 55671 | 2663K| 335 (1)| 00:00:05 |
| 47 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 48 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|* 49 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1576K| 333 (1)| 00:00:04 |
| 50 | CONCATENATION | | | | | |
| 51 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|* 52 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|* 53 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 54 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 55 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 56 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 57 | CONCATENATION | | | | | |
| 58 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 59 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 60 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 61 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|* 62 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|* 63 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 64 | NESTED LOOPS | | 1 | 63 | 3 (0)| 00:00:01 |
| 65 | INLIST ITERATOR | | | | | |
|* 66 | INDEX RANGE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 2 (0)| 00:00:01 |
|* 67 | INDEX RANGE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
| 68 | TABLE ACCESS FULL | PS_TL_TRC_TBL | 922 | 30426 | 6 (0)| 00:00:01 |
| 69 | SORT AGGREGATE | | 1 | 20 | | |
|* 70 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 71 | SORT AGGREGATE | | 1 | 14 | | |
|* 72 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 73 | SORT AGGREGATE | | 1 | 23 | | |
|* 74 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
|* 75 | FILTER | | | | | |
PLAN_TABLE_OUTPUT
|* 76 | FILTER | | | | | |
|* 77 | HASH JOIN | | 1 | 237 | 974 (2)| 00:00:12 |
|* 78 | HASH JOIN | | 1 | 226 | 637 (1)| 00:00:08 |
|* 79 | HASH JOIN | | 1 | 193 | 631 (1)| 00:00:08 |
| 80 | NESTED LOOPS | | | | | |
| 81 | NESTED LOOPS | | 1 | 179 | 294 (1)| 00:00:04 |
| 82 | NESTED LOOPS | | 1 | 152 | 293 (1)| 00:00:04 |
| 83 | NESTED LOOPS | | 1 | 121 | 290 (1)| 00:00:04 |
|* 84 | HASH JOIN SEMI | | 1 | 75 | 289 (1)| 00:00:04 |
|* 85 | TABLE ACCESS BY INDEX ROWID | PS_TL_PAYABLE_TIME | 1 | 57 | 242 (0)| 00:00:03 |
|* 86 | INDEX SKIP SCAN | IDX$$_C44D0007 | 587 | | 110 (0)| 00:00:02 |
|* 87 | INDEX FAST FULL SCAN | PS_TL_GROUP_DTL | 323 | 5814 | 47 (3)| 00:00:01 |
|* 88 | TABLE ACCESS BY INDEX ROWID | PS_PROJECT | 1 | 46 | 1 (0)| 00:00:01 |
|* 89 | INDEX UNIQUE SCAN | PS_PROJECT | 1 | | 0 (0)| 00:00:01 |
|* 90 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 1 | 31 | 3 (0)| 00:00:01 |
|* 91 | INDEX RANGE SCAN | IDX$$_3F450003 | 1 | | 2 (0)| 00:00:01 |
|* 92 | INDEX UNIQUE SCAN | PS_PERSONAL_DATA | 1 | | 0 (0)| 00:00:01 |
| 93 | TABLE ACCESS BY INDEX ROWID | PS_PERSONAL_DATA | 1 | 27 | 1 (0)| 00:00:01 |
| 94 | VIEW | PS_EMPLMT_SRCH_QRY | 5428 | 75992 | 336 (2)| 00:00:05 |
| 95 | SORT UNIQUE | | 5428 | 275K| 336 (2)| 00:00:05 |
|* 96 | FILTER | | | | | |
| 97 | NESTED LOOPS | | 55671 | 2827K| 335 (1)| 00:00:05 |
| 98 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|* 99 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|*100 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1739K| 333 (1)| 00:00:04 |
| 101 | CONCATENATION | | | | | |
| 102 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|*103 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|*104 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 105 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*106 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*107 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 108 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*109 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*110 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 111 | TABLE ACCESS FULL | PS_TL_TRC_TBL | 922 | 30426 | 6 (0)| 00:00:01 |
| 112 | VIEW | PS_PERALL_SEC_QRY | 7940 | 87340 | 336 (2)| 00:00:05 |
| 113 | SORT UNIQUE | | 7940 | 379K| 336 (2)| 00:00:05 |
|*114 | FILTER | | | | | |
|*115 | FILTER | | | | | |
| 116 | NESTED LOOPS | | 55671 | 2663K| 335 (1)| 00:00:05 |
| 117 | TABLE ACCESS BY INDEX ROWID | PSOPRDEFN | 1 | 20 | 2 (0)| 00:00:01 |
|*118 | INDEX UNIQUE SCAN | PS_PSOPRDEFN | 1 | | 1 (0)| 00:00:01 |
|*119 | TABLE ACCESS FULL | PS_SJT_PERSON | 55671 | 1576K| 333 (1)| 00:00:04 |
| 120 | CONCATENATION | | | | | |
| 121 | NESTED LOOPS | | 1 | 63 | 2 (0)| 00:00:01 |
|*122 | INDEX FAST FULL SCAN | PSASJT_OPR_CLS | 1 | 24 | 2 (0)| 00:00:01 |
|*123 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 124 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*125 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*126 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 127 | CONCATENATION | | | | | |
| 128 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*129 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|*130 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 131 | NESTED LOOPS | | 1 | 63 | 1 (0)| 00:00:01 |
|*132 | INDEX UNIQUE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
|*133 | INDEX UNIQUE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 0 (0)| 00:00:01 |
| 134 | NESTED LOOPS | | 1 | 63 | 3 (0)| 00:00:01 |
| 135 | INLIST ITERATOR | | | | | |
|*136 | INDEX RANGE SCAN | PSASJT_CLASS_ALL | 1 | 39 | 2 (0)| 00:00:01 |
|*137 | INDEX RANGE SCAN | PSASJT_OPR_CLS | 1 | 24 | 1 (0)| 00:00:01 |
| 138 | SORT AGGREGATE | | 1 | 20 | | |
|*139 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 140 | SORT AGGREGATE | | 1 | 14 | | |
|*141 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 142 | SORT AGGREGATE | | 1 | 23 | | |
|*143 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
| 144 | SORT AGGREGATE | | 1 | 14 | | |
|*145 | INDEX RANGE SCAN | PS_TL_TRC_TBL | 2 | 28 | 2 (0)| 00:00:01 |
| 146 | SORT AGGREGATE | | 1 | 20 | | |
|*147 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
| 148 | SORT AGGREGATE | | 1 | 23 | | |
|*149 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------
Most of the I/O wait occurs at step 17, although the explain plan alone does not show this. But when we run this query from the PeopleSoft application (online page), it is visible through Grid Control SQL Monitoring.
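A side note on the SELECT list itself: the long run of `SUM(DECODE(...))` columns near the top of the query is a manual pivot, bucketing `TL_QUANTITY` into one column per day of the reporting period. A minimal sketch of the same idea, using made-up table and column names and SQLite's `CASE WHEN` in place of Oracle's `DECODE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payable_time (dur TEXT, tl_quantity REAL);
INSERT INTO payable_time VALUES
  ('2012-07-01', 8), ('2012-07-01', 2),
  ('2012-07-02', 7), ('2012-07-03', 6);
""")

# One SUM(CASE ...) per day plays the role of each
# SUM(DECODE(SUBSTR(TO_CHAR(A.DUR, ...), 9, 2), ..., A.TL_QUANTITY, 0)) column.
row = conn.execute("""
SELECT
  SUM(CASE WHEN strftime('%d', dur) = '01' THEN tl_quantity ELSE 0 END) AS day01,
  SUM(CASE WHEN strftime('%d', dur) = '02' THEN tl_quantity ELSE 0 END) AS day02,
  SUM(CASE WHEN strftime('%d', dur) = '03' THEN tl_quantity ELSE 0 END) AS day03,
  SUM(tl_quantity)                                                      AS total
FROM payable_time
WHERE dur BETWEEN '2012-07-01' AND '2012-07-31'
""").fetchone()
print(row)  # (10.0, 7.0, 6.0, 23.0)
```

The pivot itself is cheap; it is the row access underneath (step 17 here) that dominates the cost.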
Second part of the question continues...
-
Performance issue with sql query. Please explain
I have a sql query
A.column1 and B.column1 are indexed.
Query1: select A.column1,A.column3, B.column2 from tableA , tableB where A.column1=B.column1;
query2: select A.column1,A.column3,B.column2,B.column4 from tableA , tableB where A.column1=B.column1;
1. Do both queries take the same time? If not, why?
They both access the same rows, just with a different number of columns. And since the complete data block is loaded into the database buffer cache, there should be no extra time taken up to that point.
Please tell me if I am wrong.
For me, apart from the block reads, the extra bytes sent via SQL*Net to the client (as well as the bytes received via SQL*Net from the client) make for a chatty network, which will degrade performance.
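The SQL*Plus autotrace session that follows shows this directly via the "bytes sent via SQL*Net to client" statistic. The same effect can be sketched outside Oracle by measuring the encoded size of the fetched rows (SQLite via Python; the `emp` data is made up, and this measures value sizes rather than real SQL*Net traffic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (empno INTEGER, ename TEXT, job TEXT, sal REAL, comm REAL);
INSERT INTO emp VALUES
  (7369, 'SMITH', 'CLERK',    800, NULL),
  (7499, 'ALLEN', 'SALESMAN', 1600, 300),
  (7521, 'WARD',  'SALESMAN', 1250, 500);
""")

def payload_bytes(sql):
    # Crude stand-in for "bytes sent via SQL*Net to client":
    # size of the textual encoding of every fetched value.
    rows = conn.execute(sql).fetchall()
    return sum(len(str(v)) for row in rows for v in row)

all_cols = payload_bytes("SELECT * FROM emp")
one_col  = payload_bytes("SELECT ename FROM emp")
print(all_cols, one_col)  # the one-column payload is much smaller
assert one_col < all_cols
```

Both statements read the same blocks (the same 8 consistent gets in the traces below); only the payload returned to the client differs.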
SQL> COLUMN plan_plus_exp FORMAT A100
SQL> SET LINESIZE 1000
SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT *
2 FROM emp
3 /
14 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=616)
1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=616)
Statistics
1 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
1631 bytes sent via SQL*Net to client
423 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
SQL> SELECT ename
2 FROM emp
3 /
14 rows selected.
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=154)
1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes=154)
Statistics
1 recursive calls
0 db block gets
8 consistent gets
0 physical reads
0 redo size
456 bytes sent via SQL*Net to client
423 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
Khurram
-
Performance issue in select query
Moderator message: do not post the same question in two forums. Duplicate (together with answers) deleted.
SELECT a~grant_nbr
a~zzdonorfy
a~company_code
b~language
b~short_desc
INTO TABLE i_gmgr_text
FROM gmgr AS a
INNER JOIN gmgrtexts AS b ON a~grant_nbr = b~grant_nbr
WHERE a~grant_nbr IN s_grant
AND a~zzdonorfy IN s_dono
AND b~language EQ sy-langu
AND b~short_desc IN s_cont.
How can I use FOR ALL ENTRIES instead of the above inner join for better performance?
then....
IF sy-subrc EQ 0.
SORT i_gmgr_text BY grant_nbr.
ENDIF.
IF i_gmgr_text[] IS NOT INITIAL.
* Actual Line Item Table
SELECT rgrant_nbr
gl_sirid
rbukrs
rsponsored_class
refdocnr
refdocln
FROM gmia
INTO TABLE i_gmia
FOR ALL ENTRIES IN i_gmgr_text
WHERE rgrant_nbr = i_gmgr_text-grant_nbr
AND rbukrs = i_gmgr_text-company_code
AND rsponsored_class IN s_spon.
IF sy-subrc EQ 0.
SORT i_gmia BY refdocnr refdocln.
ENDIF.
Edited by: Matt on Dec 17, 2008 1:40 PM
> How can I use FOR ALL ENTRIES instead of the above inner join for better performance?
My best Christmas recommendation for performance: simply ignore such recommendations.
And check the performance of your join!
Is the performance really low? If it is, there is probably no index support. Without indexes, FOR ALL ENTRIES will be much slower.
Siegfried
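Siegfried's point can be made concrete: FOR ALL ENTRIES is essentially the join done from the application side, typically issued as chunked IN-lists, so it returns the same rows as the server-side join but adds extra round trips and still needs an index on the probed table. A sketch of the two strategies (SQLite via Python; table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grants (grant_nbr TEXT PRIMARY KEY, donor TEXT);
CREATE TABLE items  (grant_nbr TEXT, amount REAL);
INSERT INTO grants VALUES ('G1','D1'), ('G2','D2'), ('G3','D1');
INSERT INTO items  VALUES ('G1',10), ('G1',5), ('G3',7), ('G9',99);
""")

# 1) Server-side join: one statement, one pass.
joined = conn.execute("""
SELECT g.grant_nbr, i.amount
FROM grants g JOIN items i ON i.grant_nbr = g.grant_nbr
ORDER BY g.grant_nbr, i.amount
""").fetchall()

# 2) FOR-ALL-ENTRIES style: fetch the driver keys first, then
#    query the item table with an IN-list built from them.
keys = [r[0] for r in conn.execute("SELECT grant_nbr FROM grants")]
placeholders = ",".join("?" for _ in keys)
fae = conn.execute(
    f"SELECT grant_nbr, amount FROM items WHERE grant_nbr IN ({placeholders}) "
    "ORDER BY grant_nbr, amount", keys).fetchall()

print(joined == fae)  # True: same rows, different execution strategy
```

If the database-side join already performs well, as asked above, splitting it into FOR ALL ENTRIES buys nothing.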