Determine why query takes 30 seconds or longer
Hi. I've got an Oracle 9i procedure that takes about 30 seconds to run. It's for a web app, so many users won't wait that long. How do I determine where the bottlenecks are in the code? Thanks.
Hi
Run it and watch in OEM (Oracle Enterprise Manager) to see which SQL statement's execution takes the longest.
When you have found the longest one, show it to us in a reply.
Ott Karesz
http://www.trendo-kft.hu
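For Oracle 9i specifically, a common way to find where the time goes is extended SQL trace plus tkprof. A minimal sketch (the procedure name below is hypothetical):

```sql
-- Enable extended SQL trace (event 10046; level 8 includes wait events)
-- in the session that runs the slow procedure:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

EXEC my_slow_procedure;  -- hypothetical name; run the real procedure here

ALTER SESSION SET EVENTS '10046 trace name context off';

-- Then, on the database server, format the trace file found under
-- user_dump_dest, sorted by elapsed fetch/execute time:
--   tkprof <tracefile>.trc report.txt sort=fchela,exeela
```

The statements at the top of the tkprof report are the bottlenecks; if the time turns out not to be in SQL, DBMS_PROFILER can narrow it down to the PL/SQL line level.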
Similar Messages
-
Query Takes 43 seconds to retrieve 650 records
Hi,
We have a query that takes 43 seconds to retrieve 650 records. We are on version 10.2.0.4. Kindly suggest any changes that are required.
SELECT InstrumentID, MEGroupID, MessageSequence FROM TIBEX_msgseqbyinstrumentbymeid WHERE MEGroupID = 'ME1';
PLAN_TABLE_OUTPUT
Plan hash value: 1364023912
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 25 | 1550 | 56585 (2)| 00:11:20 |
| 1 | HASH GROUP BY | | 25 | 1550 | 56585 (2)| 00:11:20 |
|* 2 | HASH JOIN | | 3272 | 198K| 56584 (2)| 00:11:20 |
|* 3 | TABLE ACCESS FULL | TIBEX_INSTRUMENT | 677 | 14894 | 18 (0)| 00:00:01 |
| 4 | VIEW | | 5689 | 222K| 56565 (2)| 00:11:19 |
| 5 | UNION-ALL | | | | | |
| 6 | HASH GROUP BY | | 614 | 11052 | 4587 (2)| 00:00:56 |
| 7 | TABLE ACCESS FULL | TIBEX_QUOTE | 455K| 8008K| 4564 (1)| 00:00:55 |
| 8 | HASH GROUP BY | | 108 | 1944 | 50283 (2)| 00:10:04 |
| 9 | TABLE ACCESS FULL | TIBEX_ORDER | 4926K| 84M| 50001 (1)| 00:10:01 |
| 10 | HASH GROUP BY | | 52 | 936 | 8 (13)| 00:00:01 |
|* 11 | TABLE ACCESS FULL | TIBEX_EXECUTION | 307 | 5526 | 7 (0)| 00:00:01 |
| 12 | HASH GROUP BY | | 1 | 40 | 3 (34)| 00:00:01 |
|* 13 | TABLE ACCESS FULL | TIBEX_TSTRADE | 1 | 40 | 2 (0)| 00:00:01 |
| 14 | HASH GROUP BY | | 396 | 7128 | 13 (8)| 00:00:01 |
| 15 | INDEX FAST FULL SCAN| IX_BESTEXREL | 3310 | 59580 | 12 (0)| 00:00:01 |
| 16 | HASH GROUP BY | | 1125 | 20250 | 12 (9)| 00:00:01 |
|* 17 | TABLE ACCESS FULL | TIBEX_MERESUMEPRDTRANSITION | 1981 | 35658 | 11 (0)| 00:00:01 |
| 18 | HASH GROUP BY | | 1 | 17 | 4 (25)| 00:00:01 |
| 19 | TABLE ACCESS FULL | TIBEX_EDPUPDATEREJECT | 10 | 170 | 3 (0)| 00:00:01 |
| 20 | HASH GROUP BY | | 1126 | 32654 | 822 (1)| 00:00:10 |
| 21 | NESTED LOOPS | | 8640 | 244K| 821 (1)| 00:00:10 |
| 22 | TABLE ACCESS FULL | TIBEX_INSTRUMENTADMIN | 17280 | 421K| 820 (1)| 00:00:10 |
|* 23 | INDEX UNIQUE SCAN | XPKTIBEX_CONFIGMEGROUP | 1 | 4 | 0 (0)| 00:00:01 |
| 24 | HASH GROUP BY | | 17 | 306 | 70 (3)| 00:00:01 |
| 25 | TABLE ACCESS FULL | TIBEX_BESTEXECPRICELOG | 12671 | 222K| 68 (0)| 00:00:01 |
| 26 | HASH GROUP BY | | 1 | 40 | 3 (34)| 00:00:01 |
|* 27 | TABLE ACCESS FULL | TIBEX_AUCTIONPRICE | 1 | 40 | 2 (0)| 00:00:01 |
| 28 | HASH GROUP BY | | 1126 | 19142 | 618 (1)| 00:00:08 |
|* 29 | TABLE ACCESS FULL | TIBEX_ADMINACK | 18121 | 300K| 616 (1)| 00:00:08 |
| 30 | HASH GROUP BY | | 1122 | 20196 | 142 (2)| 00:00:02 |
| 31 | INDEX FAST FULL SCAN| INSTRUMENTSTATEMSGSEQ | 23588 | 414K| 140 (0)| 00:00:02 |
Predicate Information (identified by operation id):
2 - access("INSTRUMENTID"="B"."INSTRUMENTID")
3 - filter("B"."MEGROUPID"='ME1')
11 - filter("INSTRUMENTID" IS NOT NULL)
13 - filter("INSTRUMENTID" IS NOT NULL)
17 - filter("INSTRUMENTID" IS NOT NULL)
23 - access("ADMINUSER"="MEGROUPID")
27 - filter("INSTRUMENTID" IS NOT NULL)
29 - filter("INSTRUMENTID" IS NOT NULL)
50 rows selected.
654 rows selected.
Elapsed: 00:00:43.67
CREATE OR REPLACE VIEW TIBEX_MSGSEQBYINSTRUMENTBYMEID
(INSTRUMENTID, MESSAGESEQUENCE, MEGROUPID)
AS
SELECT a.*, b.megroupid
FROM TIBEX_MSGSEQBYINSTRUMENT a
JOIN tibex_instrument b
ON a.instrumentid = b.instrumentid
/
CREATE OR REPLACE VIEW TIBEX_MSGSEQBYINSTRUMENT
(INSTRUMENTID, MESSAGESEQUENCE)
AS
SELECT instrumentID, NVL(max(MessageSequence),0) as MessageSequence
FROM (SELECT instrumentID, max(MessageSequence) as MessageSequence
FROM tibex_quote
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_order
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_execution
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_TsTrade
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_BestExRel
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_MeResumePrdTransition
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_EDPUpdateReject
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_INSTRUMENTADMIN
WHERE instrumentID IS NOT NULL
AND adminuser IN (
SELECT megroupID
FROM tibex_configMeGroup)
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_BestExecPriceLog
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_auctionPrice
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(AckMessageSequence)
FROM tibex_adminAck
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID
UNION ALL
SELECT instrumentID, max(MessageSequence)
FROM tibex_InstrumentState
WHERE instrumentID IS NOT NULL
GROUP BY instrumentID)
GROUP BY instrumentID
/
Regards
Narasimha
Hi,
I dropped and re-gathered the stats without any other modification (e.g. no new indexes). The query now hits the indexes and comes back in 00:00:16.86. But on the production box the same query is still doing full table scans.
The only difference between production and the test environment is that I collected fresh stats in test but not in prod. Kindly read below and give me a suggestion.
The Process Happens
At the beginning of the day the tables below contain around 100 records, and as the day progresses they grow to 1, 2, 3, 4 million records by EOD. During the EOD run we generate stats and then delete those records, so the tables are back to 100 or 200 records but the stats still describe 4 million rows. Kindly suggest the best option.
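One common way to handle tables whose row counts swing from a few hundred to millions during the day is to gather statistics once at a representative (high) volume and then lock them, so the EOD delete does not leave misleading numbers behind. A sketch, with assumed owner/table names:

```sql
-- Gather stats while the table holds a representative data volume,
-- then lock them so later gathers (or the EOD cycle) cannot overwrite:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'TST_PRE_EOD',   -- assumed owner
    tabname => 'TIBEX_ORDER',   -- one of the volatile tables
    cascade => TRUE);
  DBMS_STATS.LOCK_TABLE_STATS('TST_PRE_EOD', 'TIBEX_ORDER');
END;
/
```

The opposite approach is to delete the stats and lock them absent, letting the 10.2 optimizer fall back to dynamic sampling at parse time; which of the two behaves better depends on what intraday plans should look like.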
tst_pre_eod@MIFEX3> set timing on
tst_pre_eod@MIFEX3> show parameter user_dump_dest
NAME TYPE VALUE
user_dump_dest string /u01/app/oracle/admin/MIFEX3/udump
tst_pre_eod@MIFEX3>
tst_pre_eod@MIFEX3> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.4
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
tst_pre_eod@MIFEX3>
tst_pre_eod@MIFEX3> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 128
tst_pre_eod@MIFEX3>
tst_pre_eod@MIFEX3> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
tst_pre_eod@MIFEX3>
tst_pre_eod@MIFEX3> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
tst_pre_eod@MIFEX3>
tst_pre_eod@MIFEX3> column sname format a20
tst_pre_eod@MIFEX3> column pname format a20
tst_pre_eod@MIFEX3> column pval2 format a20
tst_pre_eod@MIFEX3>
tst_pre_eod@MIFEX3> select
2 sname
3 , pname
4 , pval1
5 , pval2
6 from
7 sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 01-11-2010 17:16
SYSSTATS_INFO DSTOP 01-11-2010 17:16
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 1489.10722
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM .71
SYSSTATS_MAIN MREADTIM 15.027
SYSSTATS_MAIN CPUSPEED 2141
SYSSTATS_MAIN MBRC 29
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
Elapsed: 00:00:00.07
tst_pre_eod@MIFEX3> set timing on
tst_pre_eod@MIFEX3> explain plan for
2
tst_pre_eod@MIFEX3> SELECT InstrumentID, MEGroupID, MessageSequence FROM
2 TIBEX_msgseqbyinstrumentbymeid WHERE MEGroupID = 'ME1';
GLJd ME1 2.9983E+18
TALKl ME1 2.9983E+18
ENGl ME1 2.9983E+18
AGRl ME1 2.9983E+18
HHFAd ME1 2.9983E+18
GWI1d ME1 2.9983E+18
BIO3d ME1 2.9983E+18
603 rows selected.
Elapsed: 00:00:16.72
tst_pre_eod@MIFEX3> SELECT InstrumentID, MEGroupID, MessageSequence FROM
2 TIBEX_msgseqbyinstrumentbymeid WHERE MEGroupID = 'ME1';
603 rows selected.
Elapsed: 00:00:16.86
Execution Plan
Plan hash value: 2206731661
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 13 | 806 | 111K (5)| 00:01:20 |
| 1 | HASH GROUP BY | | 13 | 806 | 111K (5)| 00:01:20 |
|* 2 | HASH JOIN | | 3072 | 186K| 111K (5)| 00:01:20 |
|* 3 | TABLE ACCESS FULL | TIBEX_INSTRUMENT | 626 | 13772 | 28 (0)| 00:00:01 |
| 4 | VIEW | | 5776 | 225K| 111K (5)| 00:01:20 |
| 5 | UNION-ALL | | | | | |
| 6 | HASH GROUP BY | | 782 | 14076 | 10056 (5)| 00:00:08 |
| 7 | TABLE ACCESS FULL | TIBEX_QUOTE | 356K| 6260K| 9860 (3)| 00:00:08 |
| 8 | HASH GROUP BY | | 128 | 2304 | 101K (5)| 00:01:12 |
| 9 | VIEW | index$_join$_007 | 3719K| 63M| 98846 (3)| 00:01:11 |
|* 10 | HASH JOIN | | | | | |
| 11 | INDEX FAST FULL SCAN| IX_ORDERBOOK | 3719K| 63M| 32019 (3)| 00:00:23 |
| 12 | INDEX FAST FULL SCAN| TIBEX_ORDER_ID_ORD_INS | 3719K| 63M| 24837 (3)| 00:00:18 |
| 13 | HASH GROUP BY | | 23 | 414 | 4 (25)| 00:00:01 |
| 14 | VIEW | index$_join$_008 | 108 | 1944 | 3 (0)| 00:00:01 |
|* 15 | HASH JOIN | | | | | |
| 16 | INDEX FAST FULL SCAN| TIBEX_EXECUTION_IDX1 | 108 | 1944 | 1 (0)| 00:00:01 |
|* 17 | INDEX FAST FULL SCAN| TIBEX_EXECUTION_IDX4 | 108 | 1944 | 1 (0)| 00:00:01 |
| 18 | HASH GROUP BY | | 1 | 40 | 4 (25)| 00:00:01 |
|* 19 | TABLE ACCESS FULL | TIBEX_TSTRADE | 1 | 40 | 3 (0)| 00:00:01 |
| 20 | HASH GROUP BY | | 394 | 7092 | 30 (10)| 00:00:01 |
| 21 | INDEX FAST FULL SCAN | IX_BESTEXREL | 4869 | 87642 | 28 (4)| 00:00:01 |
| 22 | HASH GROUP BY | | 1126 | 20268 | 19 (11)| 00:00:01 |
|* 23 | TABLE ACCESS FULL | TIBEX_MERESUMEPRDTRANSITION | 1947 | 35046 | 17 (0)| 00:00:01 |
| 24 | HASH GROUP BY | | 1 | 17 | 7 (15)| 00:00:01 |
| 25 | TABLE ACCESS FULL | TIBEX_EDPUPDATEREJECT | 8 | 136 | 6 (0)| 00:00:01 |
| 26 | HASH GROUP BY | | 1099 | 31871 | 192 (6)| 00:00:01 |
|* 27 | HASH JOIN | | 6553 | 185K| 188 (4)| 00:00:01 |
| 28 | INDEX FULL SCAN | XPKTIBEX_CONFIGMEGROUP | 4 | 16 | 1 (0)| 00:00:01 |
| 29 | TABLE ACCESS FULL | TIBEX_INSTRUMENTADMIN | 14744 | 359K| 186 (4)| 00:00:01 |
| 30 | HASH GROUP BY | | 11 | 198 | 77 (7)| 00:00:01 |
| 31 | TABLE ACCESS FULL | TIBEX_BESTEXECPRICELOG | 5534 | 99612 | 74 (3)| 00:00:01 |
| 32 | HASH GROUP BY | | 1 | 40 | 4 (25)| 00:00:01 |
|* 33 | TABLE ACCESS FULL | TIBEX_AUCTIONPRICE | 1 | 40 | 3 (0)| 00:00:01 |
| 34 | HASH GROUP BY | | 1098 | 18666 | 193 (7)| 00:00:01 |
|* 35 | TABLE ACCESS FULL | TIBEX_ADMINACK | 15836 | 262K| 185 (3)| 00:00:01 |
| 36 | HASH GROUP BY | | 1112 | 20016 | 76 (16)| 00:00:01 |
| 37 | INDEX FAST FULL SCAN | INSTRUMENTSTATEMSGSEQ | 20948 | 368K| 66 (4)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("INSTRUMENTID"="B"."INSTRUMENTID")
3 - filter("B"."MEGROUPID"='ME1')
10 - access(ROWID=ROWID)
15 - access(ROWID=ROWID)
17 - filter("INSTRUMENTID" IS NOT NULL)
19 - filter("INSTRUMENTID" IS NOT NULL)
23 - filter("INSTRUMENTID" IS NOT NULL)
27 - access("ADMINUSER"="MEGROUPID")
33 - filter("INSTRUMENTID" IS NOT NULL)
35 - filter("INSTRUMENTID" IS NOT NULL)
Statistics
175 recursive calls
0 db block gets
57737 consistent gets
18915 physical reads
0 redo size
14908 bytes sent via SQL*Net to client
558 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
603 rows processed
SELECT InstrumentID, MEGroupID, MessageSequence FROM
TIBEX_msgseqbyinstrumentbymeid WHERE MEGroupID = 'ME1'
call count cpu elapsed disk query current rows
Parse 1 0.01 0.07 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 8 10.46 16.28 18915 57733 0 603
total 10 10.47 16.35 18915 57733 0 603
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 303
Rows Row Source Operation
603 HASH GROUP BY (cr=57733 pr=18915 pw=18900 time=16283336 us)
2853 HASH JOIN (cr=57733 pr=18915 pw=18900 time=281784 us)
626 TABLE ACCESS FULL TIBEX_INSTRUMENT (cr=38 pr=0 pw=0 time=120 us)
5594 VIEW (cr=57695 pr=18915 pw=18900 time=278405 us)
5594 UNION-ALL (cr=57695 pr=18915 pw=18900 time=278400 us)
823 HASH GROUP BY (cr=12938 pr=0 pw=0 time=272798 us)
356197 TABLE ACCESS FULL TIBEX_QUOTE (cr=12938 pr=0 pw=0 time=41 us)
136 HASH GROUP BY (cr=43989 pr=18915 pw=18900 time=15962878 us)
3718076 VIEW index$_join$_007 (cr=43989 pr=18915 pw=18900 time=13123768 us)
3718076 HASH JOIN (cr=43989 pr=18915 pw=18900 time=9405689 us)
3718076 INDEX FAST FULL SCAN IX_ORDERBOOK (cr=24586 pr=0 pw=0 time=65 us)(object id 387849)
3718076 INDEX FAST FULL SCAN TIBEX_ORDER_ID_ORD_INS (cr=19403 pr=0 pw=0 time=64 us)(object id 387867)
23 HASH GROUP BY (cr=6 pr=0 pw=0 time=1265 us)
108 VIEW index$_join$_008 (cr=6 pr=0 pw=0 time=1024 us)
108 HASH JOIN (cr=6 pr=0 pw=0 time=914 us)
108 INDEX FAST FULL SCAN TIBEX_EXECUTION_IDX1 (cr=3 pr=0 pw=0 time=155 us)(object id 386846)
108 INDEX FAST FULL SCAN TIBEX_EXECUTION_IDX4 (cr=3 pr=0 pw=0 time=129 us)(object id 386845)
0 HASH GROUP BY (cr=3 pr=0 pw=0 time=84 us)
0 TABLE ACCESS FULL TIBEX_TSTRADE (cr=3 pr=0 pw=0 time=46 us)
394 HASH GROUP BY (cr=39 pr=0 pw=0 time=2662 us)
4869 INDEX FAST FULL SCAN IX_BESTEXREL (cr=39 pr=0 pw=0 time=22 us)(object id 386757)
1126 HASH GROUP BY (cr=23 pr=0 pw=0 time=2338 us)
1947 TABLE ACCESS FULL TIBEX_MERESUMEPRDTRANSITION (cr=23 pr=0 pw=0 time=29 us)
1 HASH GROUP BY (cr=7 pr=0 pw=0 time=110 us)
8 TABLE ACCESS FULL TIBEX_EDPUPDATEREJECT (cr=7 pr=0 pw=0 time=43 us)
828 HASH GROUP BY (cr=249 pr=0 pw=0 time=6145 us)
828 HASH JOIN (cr=249 pr=0 pw=0 time=1008 us)
4 INDEX FULL SCAN XPKTIBEX_CONFIGMEGROUP (cr=1 pr=0 pw=0 time=21 us)(object id 386786)
14905 TABLE ACCESS FULL TIBEX_INSTRUMENTADMIN (cr=248 pr=0 pw=0 time=23 us)
11 HASH GROUP BY (cr=99 pr=0 pw=0 time=3728 us)
5556 TABLE ACCESS FULL TIBEX_BESTEXECPRICELOG (cr=99 pr=0 pw=0 time=32 us)
0 HASH GROUP BY (cr=3 pr=0 pw=0 time=72 us)
0 TABLE ACCESS FULL TIBEX_AUCTIONPRICE (cr=3 pr=0 pw=0 time=30 us)
1126 HASH GROUP BY (cr=248 pr=0 pw=0 time=11102 us)
16069 TABLE ACCESS FULL TIBEX_ADMINACK (cr=248 pr=0 pw=0 time=18 us)
1126 HASH GROUP BY (cr=91 pr=0 pw=0 time=11947 us)
21235 INDEX FAST FULL SCAN INSTRUMENTSTATEMSGSEQ (cr=91 pr=0 pw=0 time=38 us)(object id 386904)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 8 0.00 0.00
direct path write temp 1260 0.52 5.39
direct path read temp 1261 0.04 2.95
SQL*Net message from client 8 0.00 0.00
SQL*Net more data to client 6 0.00 0.00
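The direct path write temp / read temp waits above indicate the hash joins and group-bys are spilling to the TEMP tablespace. One way to confirm this (a sketch; the `&sql_id` substitution is an assumption) is to look at the cached work areas for the statement:

```sql
-- Per-workarea memory statistics for the statement: "optimal" executions
-- fit entirely in PGA; one-pass/multipass executions spilled to TEMP.
SELECT operation_type, policy,
       estimated_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
FROM   v$sql_workarea
WHERE  sql_id = '&sql_id';
```

If one-pass or multipass executions show up, raising pga_aggregate_target (or reducing the data volume the view has to aggregate) would cut the 16 seconds further.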
PARSE #8:c=15000,e=83259,p=0,cr=4,cu=0,mis=1,r=0,dep=0,og=1,tim=532014955506
EXEC #8:c=1000,e=170,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=532014955744
WAIT #8: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=572 tim=532014955794
WAIT #8: nam='direct path write temp' ela= 4090 file number=201 first dba=84873 block cnt=15 obj#=572 tim=532015639268
WAIT #8: nam='direct path write temp' ela= 2677 file number=201 first dba=84888 block cnt=15 obj#=572 tim=532015642558
WAIT #8: nam='direct path write temp' ela= 20 file number=201 first dba=84903 block cnt=15 obj#=572 tim=532015652372
WAIT #8: nam='direct path write temp' ela= 2190 file number=201 first dba=84918 block cnt=15 obj#=572 tim=532015656105
WAIT #8: nam='direct path write temp' ela= 2247 file number=201 first dba=84933 block cnt=15 obj#=572 tim=532015659146
WAIT #8: nam='direct path write temp' ela= 3386 file number=201 first dba=84948 block cnt=15 obj#=572 tim=532015662832
WAIT #8: nam='direct path write temp' ela= 3375 file number=201 first dba=84963 block cnt=15 obj#=572 tim=532015666444
WAIT #8: nam='direct path write temp' ela= 2796 file number=201 first dba=84978 block cnt=15 obj#=572 tim=532015670097
WAIT #8: nam='direct path write temp' ela= 2901 file number=201 first dba=53129 block cnt=15 obj#=572 tim=532015673308
WAIT #8: nam='direct path write temp' ela= 2933 file number=201 first dba=53144 block cnt=15 obj#=572 tim=532015676474
WAIT #8: nam='direct path write temp' ela= 15 file number=201 first dba=53159 block cnt=15 obj#=572 tim=532015686479
WAIT #8: nam='direct path write temp' ela= 2561 file number=201 first dba=53174 block cnt=15 obj#=572 tim=532015690084
WAIT #8: nam='direct path write temp' ela= 2297 file number=201 first dba=53189 block cnt=15 obj#=572 tim=532015693299
WAIT #8: nam='direct path write temp' ela= 3448 file number=201 first dba=53204 block cnt=15 obj#=572 tim=532015697026
WAIT #8: nam='direct path write temp' ela= 2633 file number=201 first dba=53219 block cnt=15 obj#=572 tim=532015700114
WAIT #8: nam='direct path write temp' ela= 2902 file number=201 first dba=53234 block cnt=15 obj#=572 tim=532015703743
WAIT #8: nam='direct path write temp' ela= 3219 file number=201 first dba=53001 block cnt=15 obj#=572 tim=532015707190
WAIT #8: nam='direct path write temp' ela= 2809 file number=201 first dba=53016 block cnt=15 obj#=572 tim=532015710215
-
Zimbra login is very slow - SQL query takes 35+ seconds
Hi,
my Zimbra login process remains very slow, 35-40 seconds, with only a single user using it. I have Beehive set up as directory-synchronized, with about 6500 users in it. However, I and a couple of colleagues are the only ones making any use of it for testing. With just one person logging in, the following SQL query takes about 35 seconds to execute:
SELECT /*+ LEADING(rf rf_pp) USE_NL(rf_pp) INDEX_ASC(@rf_connect_by rf@rf_connect_by (ws_real_folders.parent_eid ws_real_folders.eid)) */ RF.ENTERPRISE_ID AS
ENTERPRISE_ID, RF.SITE_ID AS SITE_ID, RF.ENTITY_TYPE AS ENTITY_TYPE, RF.EID AS EID, RF.LOCK_ID AS LOCK_ID, RF.CACHE_ID AS CACHE_ID, RF.CACHE_TS AS
CACHE_TS, RF.CACHE_SQ AS CACHE_SQ, FLOOR(RF.SECURE_CHECK/10) AS ACCESS_TYPES, RF.PARENT_ENTITY_TYPE AS PARENT_ENTITY_TYPE, RF.PARENT_EID AS
PARENT_EID, RF.NAME AS NAME, RF.OWNER_ENTITY_TYPE AS OWNER_ENTITY_TYPE, RF.OWNER_EID AS OWNER_EID, RF.CREATED_ON AS CREATED_ON,
RF.CREATOR_ENTITY_TYPE AS CREATOR_ENTITY_TYPE, RF.CREATOR_EID AS CREATOR_EID, RF.MODIFIEDON AS MODIFIED_ON, RF.MODIFIED_BY_ENTITY_TYPE AS
MODIFIED_BY_ENTITY_TYPE, RF.MODIFIED_BY_EID AS MODIFIED_BY_EID, RF.VISIBILITY AS VISIBILITY, CASE WHEN (BITAND(:B13 , :B12 ) = :B12 ) THEN CAST(MULTISET(
SELECT METADATA_CEN
FROM OCS_ENTITY_METADATA_CENS_2_V META
WHERE META.ENTITY_EID = RF.EID ) AS OCS_COLLAB_ID_TBL_T) ELSE CAST(NULL AS OCS_COLLAB_ID_TBL_T) END AS METADATA_CENS, CASE WHEN
RF_PP.LAST_ACCESSED IS NULL THEN 'N' WHEN RF.MODIFIEDON > RF_PP.LAST_ACCESSED THEN 'U' ELSE NVL(RF_PP.RELATIVE_STATUS, 'N') END AS CHANGE_STATUS,
RF.PROPERTIES AS PROPERTIES_CLOB, RF_PP.PROPERTIES AS VIEWERPROPERTIES_CLOB, RF.DESCRIPTION AS DESCRIPTION
FROM (
SELECT /*+ QB_NAME(rf_connect_by) no_connect_by_cost_based */ RF.ENTERPRISE_ID ENTERPRISE_ID, :B4 SITE_ID, :B3 ENTITY_TYPE, RF.EID EID, RF.LOCK_ID LOCK_ID,
RF.ORA_ROWSCN CACHE_ID, RF.CACHE_TS CACHE_TS, RF.CACHE_SQ CACHE_SQ, RF.PARENT_TYPE PARENT_ENTITY_TYPE, RF.PARENT_EID PARENT_EID, RF.NAME
NAME, RF.OWNER_TYPE OWNER_ENTITY_TYPE, RF.OWNER_EID OWNER_EID, RF.CREATED_ON CREATED_ON, RF.CREATOR_TYPE CREATOR_ENTITY_TYPE,
RF.CREATOR_EID CREATOR_EID, RF.MODIFIED_ON MODIFIEDON, RF.MODIFIED_BY_TYPE MODIFIED_BY_ENTITY_TYPE, RF.MODIFIED_BY_EID MODIFIED_BY_EID,
RF.VISIBILITY VISIBILITY, RF.PROPERTIES PROPERTIES, RF.DESCRIPTION DESCRIPTION, RF.IS_HIDDEN IS_HIDDEN, LEVEL LEVEL_NUM, COALESCE (
(SELECT :B10 * 10 + 1
FROM AC_ENTITIES AEI
WHERE RF.EID = AEI.EID AND ( 1 = DECODE(AEI.SENSITIVITY_EID, :B9 , 1, 0) AND 1 = DECODE(AEI.OWNER_EID, :B8 , 1, 0) AND 1 = DECODE(AEI.AT_READ, :B7 , 1, 0) AND 1 =
DECODE(AEI.AT_DISCOVER, :B6 , 1, 0) AND 1 = DECODE(AEI.LOCAL_ACL_ID, :B5 , 1, 0) ) ) ,
(SELECT ACV.ACCESS_TYPES * 10 + ACV.IS_ALLOWED
FROM AC_CHECK_ONE_OF_V ACV
WHERE ACV.EID = RF.EID ) ) SECURE_CHECK
FROM WS_REAL_FOLDERS RF
WHERE RF.IS_HIDDEN = :B2 START WITH RF.PARENT_EID = :B1 CONNECT BY PRIOR RF.EID = RF.PARENT_EID ) RF, WS_RF_PRVT_PROPERTIES RF_PP
WHERE RF.EID = RF_PP.EID (+) AND :B11 = RF_PP.VIEWER_EID (+) AND 1 = BITAND(RF.SECURE_CHECK, 1) ORDER BY RF.LEVEL_NUM
It has ID atrvjdrmz2v6d in Enterprise Manager, and I've tried tuning it with the SQL Tuning Advisor in EM. I did the statistics gathering mentioned in another thread yesterday, to see if that helped - it doesn't seem like it did. I'm running Database 11.1.0.6 and Beehive 1.5.1 in the build from the day it was released.
Any hints? I'll post this to MetaLink as well, unless someone has some immediate idea what's wrong :-)
No speedup; the 35-40 seconds is for normal logins - and it's very consistent, at least if the database is otherwise idle. Immediately logging out and back in gives me the same wait, and the same query shows up in the Enterprise Manager interface.
It seems it might be related to Workspaces - from EM:
"PL/SQL Source (Line Number): BEE_CODE.WS_REAL_FOLDER_PKG (2998)"
-
Insert query takes too much time
I have two select clauses as follows:
"select * from employee" This returns me 6000 rows.
& I have next clause as
"select * from employee where userid in(1,2,3,....,3000)"
This returns me 3000 rows.
Now I have to insert the results of the above queries into the same extended ListView in Visual Basic. But the insert for the first query takes 11 seconds while the second takes 34 seconds. We have verified that this time is spent in the insert, not the select.
I want to know why, even though the first query returns 6000 rows, it takes less time than the second, which inserts only 3000 rows.
We are using Oracle 8.1.7
Thanks in advance
The first query can do a straight dump of the table. The second select has to compare every userid to a hard-coded list of 3000 numbers, which takes quite a bit longer. Try rewriting it to
select * from employee where userid between 1 and 3000
It will run much faster than the other query.
-
Why NL instead of HJ(query takes long)
Hi there!
There is a DB (10.2.0.5 RAC) on SLES 10, and two schemas. In the first schema the query runs very fast:
SQL> explain plan for
select count(distinct c.unit)
from quantity_comp_v qc, comps c, noticequant nq, notices_table n, noticediss nd
where
c.id = qc.comp
and nq.quant = qc.quantity
and n.id = nq.note
and n.type_id = 2
and nq.link = 2
and nd.note = n.id
and n.trust >=0
and nd.dis in (select dr.dismbr from disrelflat dr where dr.disgrp = 1080801245);
Explained.
Elapsed: 00:00:00.27
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2567928671
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 91 | | 2099 (4)| 00:00:07 |
| 1 | SORT GROUP BY | | 1 | 91 | | | |
|* 2 | HASH JOIN | | 55203 | 4905K| | 2099 (4)| 00:00:07 |
|* 3 | INDEX RANGE SCAN | DISRELFLAT_UI_DISGRP_DISMBR | 10618 | 155K| | 54 (2)| 00:00:01 |
|* 4 | HASH JOIN | | 42398 | 3146K| 2984K| 2043 (4)| 00:00:06 |
|* 5 | HASH JOIN | | 42398 | 2484K| | 1262 (4)| 00:00:04 |
|* 6 | HASH JOIN | | 24173 | 1109K| | 1022 (4)| 00:00:03 |
|* 7 | HASH JOIN | | 21471 | 650K| | 951 (3)| 00:00:03 |
|* 8 | VIEW | index$_join$_004 | 21471 | 314K| | 783 (3)| 00:00:03 |
|* 9 | HASH JOIN | | | | | | |
|* 10 | INDEX RANGE SCAN | NOTICES_TABLE_I_TYPE_ID_TRUST_ | 21471 | 314K| | 141 (5)| 00:00:01 |
| 11 | INDEX FAST FULL SCAN | NOTICES_TABLE_PK | 21471 | 314K| | 832 (2)| 00:00:03 |
| 12 | INDEX FAST FULL SCAN | NOTICEDISS_UI_NOTE_DIS | 169K| 2647K| | 162 (4)| 00:00:01 |
|* 13 | TABLE ACCESS FULL | NOTICEQUANT | 38141 | 595K| | 69 (5)| 00:00:01 |
| 14 | VIEW | | 88786 | 1127K| | 236 (3)| 00:00:01 |
| 15 | SORT UNIQUE | | | | | | |
| 16 | UNION-ALL | | | | | | |
|* 17 | HASH JOIN | | 23140 | 1355K| | 130 (8)| 00:00:01 |
| 18 | INDEX FAST FULL SCAN | QUANTITIES_I_ID_OP | 50822 | 397K| | 51 (4)| 00:00:01 |
|* 19 | HASH JOIN | | 23057 | 1170K| | 76 (7)| 00:00:01 |
| 20 | INDEX FAST FULL SCAN | QOPNC_I_ALL | 37964 | 481K| | 55 (2)| 00:00:01 |
| 21 | VIEW | | 23057 | 878K| | 18 (6)| 00:00:01 |
|* 22 | CONNECT BY WITHOUT FILTERING| | | | | | |
| 23 | TABLE ACCESS FULL | QOPNQ | 23057 | 225K| | 18 (6)| 00:00:01 |
|* 24 | HASH JOIN | | 38100 | 781K| | 109 (6)| 00:00:01 |
| 25 | INDEX FAST FULL SCAN | QOPNC_I_ALL | 37964 | 481K| | 55 (2)| 00:00:01 |
| 26 | INDEX FAST FULL SCAN | QUANTITIES_I_ID_OP | 50822 | 397K| | 51 (4)| 00:00:01 |
| 27 | TABLE ACCESS FULL | COMPS | 218K| 3406K| | 282 (4)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("ND"."DIS"="DR"."DISMBR")
3 - access("DR"."DISGRP"=1080801245)
4 - access("C"."ID"="COMP")
5 - access("NQ"."QUANT"="ID")
6 - access("NQ"."NOTE"="N"."ID")
7 - access("ND"."NOTE"="N"."ID")
8 - filter("N"."TYPE_ID"=2 AND "N"."TRUST">=0)
9 - access(ROWID=ROWID)
10 - access("N"."TYPE_ID"=2 AND "N"."TRUST">=0)
13 - filter("NQ"."LINK"=2)
17 - access("ID"="RT")
19 - access("QOP"="QUANTITY")
22 - access("QUANTITY"=PRIOR "QOP")
24 - access("ID"="QUANTITY")
52 rows selected.
Elapsed: 00:00:00.06
SQL>
In the second schema the query takes very long:
PLAN_TABLE_OUTPUT
Plan hash value: 543063124
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 92 | 333 (2)| 00:00:01 |
| 1 | SORT GROUP BY | | 1 | 92 | | |
| 2 | NESTED LOOPS | | 91 | 8372 | 333 (2)| 00:00:01 |
| 3 | NESTED LOOPS | | 91 | 6916 | 241 (2)| 00:00:01 |
| 4 | NESTED LOOPS | | 52 | 3276 | 189 (2)| 00:00:01 |
| 5 | NESTED LOOPS | | 47 | 2209 | 123 (3)| 00:00:01 |
|* 6 | HASH JOIN | | 81 | 2592 | 42 (8)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | DISRELFLAT_UI_DISGRP_DISMBR | 18 | 288 | 2 (0)| 00:00:01 |
| 8 | INDEX FAST FULL SCAN | NOTICEDISS_UI_NOTE_DIS | 38127 | 595K| 38 (3)| 00:00:01 |
|* 9 | TABLE ACCESS BY INDEX ROWID | NOTICES_TABLE | 1 | 15 | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | NOTICES_TABLE_PK | 1 | | 0 (0)| 00:00:01 |
|* 11 | TABLE ACCESS BY INDEX ROWID | NOTICEQUANT | 1 | 16 | 3 (0)| 00:00:01 |
|* 12 | INDEX RANGE SCAN | NOTICEQUANT_UI_NOTE_QUANT | 1 | | 1 (0)| 00:00:01 |
| 13 | VIEW | | 2 | 26 | 1 (0)| 00:00:01 |
| 14 | SORT UNIQUE | | | | | |
| 15 | UNION-ALL PARTITION | | | | | |
| 16 | NESTED LOOPS | | 1 | 60 | 19 (6)| 00:00:01 |
| 17 | NESTED LOOPS | | 1 | 47 | 18 (6)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | QUANTITIES_UI_ID_OP | 1 | 8 | 2 (0)| 00:00:01 |
|* 19 | VIEW | | 1 | 39 | 16 (7)| 00:00:01 |
|* 20 | CONNECT BY WITHOUT FILTERING| | | | | |
| 21 | TABLE ACCESS FULL | QOPNQ | 19811 | 193K| 16 (7)| 00:00:01 |
|* 22 | INDEX RANGE SCAN | QOPNC_UI_QUANTITY_NUM_COMP | 1 | 13 | 1 (0)| 00:00:01 |
| 23 | NESTED LOOPS | | 1 | 21 | 3 (0)| 00:00:01 |
|* 24 | INDEX RANGE SCAN | QUANTITIES_UI_ID_OP | 1 | 8 | 2 (0)| 00:00:01 |
|* 25 | INDEX RANGE SCAN | QOPNC_UI_QUANTITY_NUM_COMP | 1 | 13 | 1 (0)| 00:00:01 |
| 26 | TABLE ACCESS BY INDEX ROWID | COMPS | 1 | 16 | 1 (0)| 00:00:01 |
|* 27 | INDEX UNIQUE SCAN | COMPS_UI_ID | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
6 - access("ND"."DIS"="DR"."DISMBR")
7 - access("DR"."DISGRP"=1080801245)
9 - filter("N"."TYPE_ID"=2 AND "N"."TRUST">=0)
10 - access("ND"."NOTE"="N"."ID")
11 - filter("NQ"."LINK"=2)
12 - access("NQ"."NOTE"="N"."ID")
18 - access("ID"="NQ"."QUANT")
19 - filter("ID"="RT" AND "RT"="NQ"."QUANT")
20 - access("QUANTITY"=PRIOR "QOP")
22 - access("QOP"="QUANTITY")
24 - access("ID"="NQ"."QUANT")
25 - access("QUANTITY"="NQ"."QUANT")
filter("ID"="QUANTITY")
27 - access("C"."ID"="COMP")
As you can see, the plans are different. Why? Statistics are up to date in both schemas.
In the tkprof trace for the second schema I see a strange thing:
select count(distinct c.unit)
from quantity_comp_v qc, comps c, noticequant nq, notices_table n,
noticediss nd
where
c.id = qc.comp
and nq.quant = qc.quantity
and n.id = nq.note
and n.type_id = 2
and nq.link = 2
and nd.note = n.id
and n.trust >=0
and nd.dis in (select dr.dismbr from disrelflat dr where dr.disgrp = 1080801245)
call count cpu elapsed disk query current rows
Parse 1 0.02 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 558.36 545.39 1 845921 0 1
total 4 558.38 545.42 1 845921 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 156
Rows Row Source Operation
1 SORT GROUP BY (cr=845921 pr=1 pw=0 time=545397464 us)
13865 NESTED LOOPS (cr=845921 pr=1 pw=0 time=546471010 us)
13865 NESTED LOOPS (cr=818189 pr=1 pw=0 time=546290764 us)
13308 HASH JOIN (cr=31053 pr=1 pw=0 time=172231 us)
13308 NESTED LOOPS (cr=30863 pr=0 pw=0 time=126100 us)
15326 HASH JOIN (cr=209 pr=0 pw=0 time=21640 us)
11522 INDEX RANGE SCAN DISRELFLAT_UI_DISGRP_DISMBR (cr=68 pr=0 pw=0 time=40 us)(object id 634347)
37432 INDEX FAST FULL SCAN NOTICEDISS_UI_NOTE_DIS (cr=141 pr=0 pw=0 time=50 us)(object id 638914)
13308 TABLE ACCESS BY INDEX ROWID NOTICES_TABLE (cr=30654 pr=0 pw=0 time=95620 us)
15326 INDEX UNIQUE SCAN NOTICES_TABLE_PK (cr=15328 pr=0 pw=0 time=41398 us)(object id 638937)
34391 TABLE ACCESS FULL NOTICEQUANT (cr=190 pr=1 pw=0 time=53 us)
13865 VIEW (cr=787136 pr=0 pw=0 time=544908246 us)
13865 SORT UNIQUE (cr=787136 pr=0 pw=0 time=544893509 us)
13950 UNION-ALL PARTITION (cr=787136 pr=0 pw=0 time=539509573 us)
1223 NESTED LOOPS (cr=733813 pr=0 pw=0 time=539271493 us)
1233 NESTED LOOPS (cr=731971 pr=0 pw=0 time=539245389 us)
13308 INDEX RANGE SCAN QUANTITIES_UI_ID_OP (cr=26647 pr=0 pw=0 time=120240 us)(object id 634738)
1233 VIEW (cr=705324 pr=0 pw=0 time=539109785 us)
275435676 CONNECT BY WITHOUT FILTERING (cr=705324 pr=0 pw=0 time=493869861 us)
263644788 TABLE ACCESS FULL QOPNQ (cr=705324 pr=0 pw=0 time=163378 us)
1223 INDEX RANGE SCAN QOPNC_UI_QUANTITY_NUM_COMP (cr=1842 pr=0 pw=0 time=10296 us)(object id 634729)
12727 NESTED LOOPS (cr=53323 pr=0 pw=0 time=216730 us)
13308 INDEX RANGE SCAN QUANTITIES_UI_ID_OP (cr=26647 pr=0 pw=0 time=111770 us)(object id 634738)
12727 INDEX RANGE SCAN QOPNC_UI_QUANTITY_NUM_COMP (cr=26676 pr=0 pw=0 time=84638 us)(object id 634729)
13865 TABLE ACCESS BY INDEX ROWID COMPS (cr=27732 pr=0 pw=0 time=235066 us)
13865 INDEX UNIQUE SCAN COMPS_UI_ID (cr=13867 pr=0 pw=0 time=128663 us)(object id 634315)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
gc current block 3-way 208 0.00 0.08
gc current block 2-way 93 0.00 0.02
gc cr multi block request 150 0.00 0.01
db file sequential read 1 0.01 0.01
SQL*Net message from client 2 0.00 0.00
********************************************************************************
The cardinality of QOPNQ is not real (over 263 million rows instead of 19 thousand). If I set optimizer_index_cost_adj=400, the plan uses a HJ and the cardinality in the tkprof trace is right. Thanks in advance.
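The optimizer_index_cost_adj experiment is safest tried per session rather than instance-wide; a minimal sketch:

```sql
-- Try the adjusted index-access costing for this session only:
ALTER SESSION SET optimizer_index_cost_adj = 400;
-- ... re-run the query / explain plan / tkprof here ...
-- Restore the default afterwards:
ALTER SESSION SET optimizer_index_cost_adj = 100;
```

Note it changes index-access costing for every statement in the session, so other queries' plans can shift too.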
Best regards, Pavel.
Hi there.
What I found:
In the schema where the optimizer uses NL:
SQL> select TABLE_NAME,COLUMN_NAME,NUM_BUCKETS,LAST_ANALYZED from user_tab_col_statistics where table_name in ('DISRELFLAT','NOTICES_TABLE','QOPNQ','COMPS');
TABLE_NAME COLUMN_NAME NUM_BUCKETS LAST_ANALYZED
COMPS ID 254 31-JUL-10
COMPS UNIT 254 31-JUL-10
COMPS LOC 34 31-JUL-10
COMPS TIS 246 31-JUL-10
COMPS RTYP 2 31-JUL-10
DISRELFLAT DISGRP 254 31-JUL-10
DISRELFLAT DISMBR 254 31-JUL-10
DISRELFLAT CNT 174 31-JUL-10
DISRELFLAT SPL 10 31-JUL-10
DISRELFLAT DIST 16 31-JUL-10
DISRELFLAT PART 2 31-JUL-10
DISRELFLAT ISA 2 31-JUL-10
DISRELFLAT MIX 2 31-JUL-10
DISRELFLAT CLONAL 1 31-JUL-10
NOTICES_TABLE ID 1 08-SEP-10
NOTICES_TABLE TEXT 1 08-SEP-10
NOTICES_TABLE DESCRIPTION 1 08-SEP-10
NOTICES_TABLE TYPE_ID 4 08-SEP-10
NOTICES_TABLE CUSER_ID 1 08-SEP-10
NOTICES_TABLE CDATE 1 08-SEP-10
NOTICES_TABLE TRUST 10 08-SEP-10
NOTICES_TABLE RTYP 2 08-SEP-10
NOTICES_TABLE CHECKEDBY 1 08-SEP-10
NOTICES_TABLE FEATURE_ID 13 08-SEP-10
QOPNQ QUANTITY 1 31-JUL-10
QOPNQ NUM 12 31-JUL-10
QOPNQ QOP 1 31-JUL-10
27 rows selected.
In the schema where the optimizer uses HJ:
TABLE_NAME COLUMN_NAME NUM_BUCKETS LAST_ANALYZED
COMPS ID 254 28-JUL-10
COMPS UNIT 254 28-JUL-10
COMPS LOC 29 28-JUL-10
COMPS TIS 223 28-JUL-10
COMPS RTYP 2 28-JUL-10
DISRELFLAT DISGRP 254 21-AUG-10
DISRELFLAT DISMBR 1 21-AUG-10
DISRELFLAT CNT 103 21-AUG-10
DISRELFLAT SPL 10 21-AUG-10
DISRELFLAT DIST 11 21-AUG-10
DISRELFLAT PART 2 21-AUG-10
DISRELFLAT ISA 2 21-AUG-10
DISRELFLAT MIX 2 21-AUG-10
DISRELFLAT CLONAL 2 21-AUG-10
NOTICES_TABLE ID 1 09-AUG-10
NOTICES_TABLE TEXT 254 09-AUG-10
NOTICES_TABLE DESCRIPTION 254 09-AUG-10
NOTICES_TABLE TYPE_ID 4 09-AUG-10
NOTICES_TABLE CUSER_ID 1 09-AUG-10
NOTICES_TABLE CDATE 254 09-AUG-10
NOTICES_TABLE TRUST 9 09-AUG-10
NOTICES_TABLE RTYP 2 09-AUG-10
NOTICES_TABLE CHECKEDBY 1 09-AUG-10
NOTICES_TABLE FEATURE_ID 21 09-AUG-10
QOPNQ QUANTITY 1 28-JUL-10
QOPNQ NUM 10 28-JUL-10
QOPNQ QOP 1 28-JUL-10
27 rows selected.
Then I gathered statistics with estimate_percent=>null, method_opt=>'for all columns size 1' (without histograms).
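The gather call described above would look roughly like this sketch; the owner name is a placeholder, and cascade=>true (also gathering index stats) is an assumption, not something stated in the post:

```sql
-- Regather statistics without histograms, as described in the post.
-- 'MYSCHEMA' is a placeholder owner; cascade=>true is an assumption.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MYSCHEMA',
    tabname          => 'QOPNQ',
    estimate_percent => NULL,                      -- compute, don't sample
    method_opt       => 'FOR ALL COLUMNS SIZE 1',  -- SIZE 1 = no histograms
    cascade          => TRUE);
END;
/
```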
Plan became:
PLAN_TABLE_OUTPUT
Plan hash value: 2184582790
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 91 | 969 (4)| 00:00:03 |
| 1 | SORT GROUP BY | | 1 | 91 | | |
|* 2 | HASH JOIN | | 689 | 62699 | 969 (4)| 00:00:03 |
|* 3 | HASH JOIN | | 689 | 51675 | 719 (3)| 00:00:03 |
|* 4 | HASH JOIN | | 392 | 24304 | 549 (2)| 00:00:02 |
| 5 | NESTED LOOPS | | 377 | 17342 | 479 (1)| 00:00:02 |
|* 6 | HASH JOIN | | 435 | 13485 | 43 (7)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | DISRELFLAT_UI_DISGRP_DISMBR | 12 | 180 | 3 (0)| 00:00:01 |
| 8 | INDEX FAST FULL SCAN | NOTICEDISS_UI_NOTE_DIS | 38127 | 595K| 38 (3)| 00:00:01 |
|* 9 | TABLE ACCESS BY INDEX ROWID | NOTICES_TABLE | 1 | 15 | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | NOTICES_TABLE_PK | 1 | | 0 (0)| 00:00:01 |
|* 11 | TABLE ACCESS FULL | NOTICEQUANT | 34306 | 536K| 69 (5)| 00:00:01 |
| 12 | VIEW | | 79375 | 1007K| 167 (3)| 00:00:01 |
| 13 | SORT UNIQUE | | | | | |
| 14 | UNION-ALL | | | | | |
|* 15 | HASH JOIN | | 19811 | 1160K| 86 (11)| 00:00:01 |
| 16 | INDEX FAST FULL SCAN | QUANTITIES_UI_ID_OP | 45161 | 352K| 33 (7)| 00:00:01 |
|* 17 | HASH JOIN | | 19811 | 1006K| 51 (10)| 00:00:01 |
| 18 | TABLE ACCESS FULL | QOPNC | 34214 | 434K| 33 (7)| 00:00:01 |
| 19 | VIEW | | 19811 | 754K| 16 (7)| 00:00:01 |
|* 20 | CONNECT BY WITHOUT FILTERING| | | | | |
| 21 | TABLE ACCESS FULL | QOPNQ | 19811 | 193K| 16 (7)| 00:00:01 |
|* 22 | HASH JOIN | | 34214 | 701K| 68 (9)| 00:00:01 |
| 23 | TABLE ACCESS FULL | QOPNC | 34214 | 434K| 33 (7)| 00:00:01 |
| 24 | INDEX FAST FULL SCAN | QUANTITIES_UI_ID_OP | 45161 | 352K| 33 (7)| 00:00:01 |
| 25 | TABLE ACCESS FULL | COMPS | 212K| 3320K| 244 (5)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("C"."ID"="COMP")
3 - access("NQ"."QUANT"="ID")
4 - access("NQ"."NOTE"="N"."ID")
6 - access("ND"."DIS"="DR"."DISMBR")
7 - access("DR"."DISGRP"=1080801245)
9 - filter("N"."TYPE_ID"=2 AND "N"."TRUST">=0)
10 - access("ND"."NOTE"="N"."ID")
11 - filter("NQ"."LINK"=2)
15 - access("ID"="RT")
17 - access("QOP"="QUANTITY")
20 - access("QUANTITY"=PRIOR "QOP")
22 - access("ID"="QUANTITY")
Now the query executes as fast as in the first schema.
Best regards, Pavel.
Edited by: Pavel E. -
Why update query takes long time ?
Hello everyone;
My update query takes a long time. In emp (self testing) there are just 2 records.
When I issue an update query, it takes a long time;
SQL> select * from emp;
EID ENAME EQUAL ESALARY ECITY EPERK ECONTACT_NO
2 rose mca 22000 calacutta 9999999999
1 sona msc 17280 pune 9999999999
Elapsed: 00:00:00.05
SQL> update emp set esalary=12000 where eid='1';
update emp set esalary=12000 where eid='1'
* ERROR at line 1:
ORA-01013: user requested cancel of current operation
Elapsed: 00:01:11.72
SQL> update emp set esalary=15000;
update emp set esalary=15000
* ERROR at line 1:
ORA-01013: user requested cancel of current operation
Elapsed: 00:02:22.27
Hi BCV;
Thanks for your reply, but it doesn't provide the output; please see this.
SQL> update emp set esalary=15000;
........... Lock already occurred.
>> trying to trace >>
SQL> select HOLDING_SESSION from dba_blockers;
HOLDING_SESSION
144
SQL> select sid , username, event from v$session where username='HR';
SID USERNAME EVENT
144 HR SQL*Net message from client
151 HR enq: TX - row lock contention
159 HR SQL*Net message from client
>> It doesn't provide clear output about the transaction lock >>
SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
2 BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
3 request
4 FROM v$lock, v$session
5 WHERE v$lock.TYPE = 'TX'
6 AND v$lock.SID = v$session.SID
7 AND v$session.username = USER;
no rows selected
SQL> select MACHINE from v$session where sid = :sid;
SP2-0552: Bind variable "SID" not declared. -
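SP2-0552 just means the bind variable was never declared in this SQL*Plus session; the declare-assign-use pattern looks like this (144 is the blocking SID from the dba_blockers output above):

```sql
-- Declare the bind variable in SQL*Plus before referencing it.
VARIABLE sid NUMBER

-- Assign the blocking session id seen in dba_blockers above.
EXEC :sid := 144

-- Now the original query works.
SELECT machine FROM v$session WHERE sid = :sid;
```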
Query take long time in fetching when used within a procedure
The Database is : Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
Query just takes a second from toad, but when used inside a procedure as a cursor it takes 3 to 5 minutes.
Following is the Tkprof information when running from procedure.
SELECT CHCLP.CLM_PRVDR_TYPE_LKPCD, CHCLP.PRVDR_LCTN_IID, TO_CHAR
(CHCLP.MODIFIED_DATE, 'MM-dd-yyyy hh24:mi:ss') MODIFIED_DATE,
CHCLP.PRVDR_LCTN_IDENTIFIER, CHCLP.CLM_HDR_CLM_LN_X_PVDR_LCTN_SID
FROM
CLM_HDR_CLM_LN_X_PRVDR_LCTN CHCLP WHERE CHCLP.CLAIM_HEADER_SID = :B1 AND
CHCLP.CLAIM_LINE_SID IS NULL AND CHCLP.IDNTFR_TYPE_CID = 7
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 110.79 247.79 568931 576111 0 3
total 2 110.79 247.79 568931 576111 0 3
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 93 (CMSAPP) (recursive depth: 1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
0 PARTITION RANGE (SINGLE) PARTITION:KEYKEY
0 TABLE ACCESS MODE: ANALYZED (BY LOCAL INDEX ROWID) OF
'CLM_HDR_CLM_LN_X_PRVDR_LCTN' (TABLE) PARTITION:KEYKEY
0 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XAK1CLM_HDR_CLM_LN_X_PRVDR_LCT' (INDEX (UNIQUE))
PARTITION:KEYKEY
Execution plan when running just the query from TOAD is: (it comes out in a second)
Plan
SELECT STATEMENT ALL_ROWSCost: 6 Bytes: 100 Cardinality: 2
3 PARTITION RANGE SINGLE Cost: 6 Bytes: 100 Cardinality: 2 Partition #: 1 Partitions accessed #13
2 TABLE ACCESS BY LOCAL INDEX ROWID TABLE CMSAPP.CLM_HDR_CLM_LN_X_PRVDR_LCTN Cost: 6 Bytes: 100 Cardinality: 2 Partition #: 2 Partitions accessed #13
Why would fetching take such a long time? Please let me know if you need any other information.
Thank You.
Edited by: spur230 on Apr 1, 2009 10:23 AM
Edited by: spur230 on Apr 1, 2009 10:26 AM
Edited by: spur230 on Apr 1, 2009 10:28 AM
Edited by: spur230 on Apr 1, 2009 10:30 AM
Query just takes a second from toad
It's possible that the query starts returning rows in a second, but that's not the time required for the entire query.
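One way to time the whole execution instead of just the first fetched screen is to force the client to consume every row, e.g. by wrapping the statement in a COUNT(*). This is a sketch using a simplified version of the query from the post (the bind predicate is omitted):

```sql
SET TIMING ON

-- COUNT(*) over the original query makes Oracle produce the entire
-- result set, so the elapsed time covers the full execution rather
-- than the first screen of rows a GUI tool fetches.
SELECT COUNT(*)
FROM  (SELECT chclp.clm_prvdr_type_lkpcd,
              chclp.prvdr_lctn_iid
       FROM   clm_hdr_clm_ln_x_prvdr_lctn chclp
       WHERE  chclp.claim_line_sid IS NULL
       AND    chclp.idntfr_type_cid = 7);
```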
-
*Tech explanation why BACKUPS take long, how to make it very short, etc*
I noticed my backups to increase quite a bit over the last few days. I also noticed so many posts on why are backups taking so long, what can I do (only solution given is to click x or modify so that backups are not done, etc). Well to answer all those that want the "real" answer, I decided to do a few minutes of research and share what I've learned. I have seen nothing at all on google or anywhere else about the recommendations I'm suggesting, but from my experience I think it will answer 99% of all the backup threads on this site.
All your backups are being stored in:
XP: C:\Documents and Settings\(Your Name)\Application Data\Apple Computer\MobileSync\Backup\
Vista: C:\Users\*Your User*\Appdata\Local\Apple Computer\
Mac: ~/Library/Application Support/MobileSync/Backup
The main directories are GUIDs that are based on the version of the iphone, so you may see more than one folder. Make sure to view the date last modified of the folder to make sure you go into the latest one. Order the files inside this directory by date modified. Here you can see how many files are being updated ON EVERY BACKUP. The older ones are not being backed up anymore because you probably uninstalled that app on the iphone (apple doesn't delete old unnecessary backup files that won't ever be used - bad programming #1). All these files, and the data inside them, are encoded in base 64, so you can decode them to view the contents, but you don't really need to, and I don't have to show how for this solution. Open a file in wordpad. Within the first few lines you'll get an idea of what the backup is.
So here is the solution....sort these files by date modified. My biggest single file was 29,582KB!! looked inside and it's the WeDict app. There are other files from WeDict, but obviously for me if I want fast backups I need to get rid of this app. A few other top files I found are awesome apps on the iphone that have awesome trailers, intro movies for games, etc. Well guess what, they backup every single game intro, trailer, etc. So for instance, the game tap tap tap has a few m4a songs....well every one of those m4a songs are encrypted and backed up EVERY TIME (apple backups up files that don't need to be backed up which would probably reduce backup times by at least 98% - for instance, intro movies don't need to be backed up, nor app songs, nor game instructions, etc because that information should be on itunes backed up separately when you install an app, not when you do backups and it should be smart enough to update that folder when a new version is out....and only do backups on files that can change like high scores in apps, notes, etc - bad programming #2).
So anyway, you can see which apps are taking the most amount of time. Obviously if you remove every app on your iphone, it will backup fast, but some of these apps are shockingly huge. Problem here is that you can have 1 file or 100 files associated with 1 app. You could have only 2 apps installed on your iphone and your backups could be slower than a person having 30 apps just because some apps take a ridiculous amount of more backup data and time and what makes this even worse is every part of the data is encrypted, which I will talk about below (which doesn't need to be). Only time that file is not going to get updated is if it's deleted from your iphone.
You don't HAVE to use base 64 encryption. Come on now, especially on apps? They could lighten the encryption and it would be much faster backups because it has to decode and encode every file now. You are overencrypting these files so huge programs take forever backups. (Bad apple programming #3 don't over encrypt when speed is a necessity on something that doesn't need extreme high security).
As a developer with a computer engineering degree, I can see how to also make it even faster than what users were experiencing with backup times prior to 2.0. They are obviously not tagging each portion that needs to be backed up individually. Lots of software companies do this to speed stuff up. For instance if you have Kaspersky antivirus with the defaults, the first scan will always take long, but every scan afterwards will be fast because instead of rescanning that file for a virus, it just checks to see if the checksum has changed and if so rescans that file, so most of the time the scan will happen fast unless there has been a period of time that you haven't done one that updates the checksums. They could do something very similar with this. So basically even if you changed a few things on your phone, the backup should be only a few seconds because only those few changes would be signaled to be rebacked up. (bad apple programming for backups #4). This process can make backups be from 98% faster to 99.98% faster....meaning having a backup only take 4 seconds even with 100 apps installed. Actually coding this one thing would make it extremely fast, but would take the most programming time. You can even make an algorithm where for the whole backup process it would just have to read one file that it would check checksums and then tell it to modify 1 or 2 other files and that's it as opposed to backing up (for me almost 5,000 files) some being several MB in size.
So all the above should give apple suggestions on how the can speedup backups by 99.98% faster than how they are doing it now and then anwers all those questions why it takes longer, what needs to change, how can I shorten my backups (this way you can find out what apps are taking the most time), etc.
The problem here is apps will continue to grow, become larger, more trailers, movies, songs embedded inside apps, etc. This problem is only going to grow. The additional problem is that they can say they have tweaked the backup programming to make it faster, just like they did in 2.0.1, but backups seem slower because there are newer better more awesome apps out!! Well that's why it doesn't look like it's faster but it's slower! They will keep doing this as opposed to solving the root of the problem by recoding the whole backup foundation as I'm suggesting from above. Backups will forever take longer and longer and longer and longer. That is what you have to look forward to. My guess is that they are not going to fix the foundation of how backups are done anytime soon having a good idea how they are doing this one. They would have to do a massive overhaul of the whole backup code, which basically they would have to admit that all the time coding this way was a waste and they see no profit from it and we deal with it just by pressing x sometimes. We can complain all we want, but for awhile if you don't want long backups, this is the real solution. What's funny is writing this email took much more time than figuring out what apple is doing, but I think this will help many apple iphone users. Let me know what you guys think.
Supposedly apple made backups slightly faster on 2.0.1, but like I said in my original posting above...it will get worse and worse because of the apps.
Just to verify and you could do this yourself. Right before you plug in to sync and it auto starts the backup process do this...go to the folder where all the files are as mentioned above, sort it by date modified. Add a refresh button on the toolbar. Then start the sync/backup, keep refreshing and you see the files getting updated. There are even temporary files apple's backup process creates that you don't see that keeps adding to a file, so even if you see one file updated, you might see it updated a few times until the whole backup for that file completes.
Here are my stats:
For me this time it took 27 minutes (earlier it took 2 minutes, but like I said, with updates to apps, more apps, depending on types of apps, your computer speed, etc. it will increase).
For me: In 27 minutes, it updated 1707 files with a total size of 82.8MB. There are a ton of computers that are much faster than mine that I'm sure would cut the minutes down. I have 6 pages of apps, but like I mention above, that doesn't matter because you could have one app that has 400 files with a large size. I would be interested in others posting their results, maybe in this format:
27min 1707files 82.8MB Pentium4HT Antivirus and other large programs running in background.
If you are interested you can even download program that will decrypt these files so you can view them in more detail...for mac, for instance, there is something like this: http://mac.softpedia.com/get/iPhone-Applications/Tools-Utilities/iPhone-Backup-D ecoder.shtml It allows you to backup and modify. It may help you decide what is taking your backups so long and you can decide if it's worth having that app installed versus not having a good backup. You can also pick out you sms backup file, contacts backup file, etc for those really interested in having backups of those specific files.
Hope this helps clear up things for everyone. -
Query takes long when using UNION
Hi ,
I have a query as follows:
SELECT
'9999' site_id,
m.ghi_prov_num provnum,
SUBSTR (l.seq_num, 1, 3)provloc,
t.dea_number dea,
t.license_number statelicensenumber ,
n.npi_num npi,
m.prefix prefixname,
m.lastname lastname,
m.firstname firstname,
t.middle_name middleinitial,
m.suffix suffixname,
null clinicname,
l.street1 addressline1,
l.street2 addressline2,
l.city city,
l.state state,
l.zip5 zip,
l.phone phoneprimary,
null ext,
null fax,
null email,
null alt_phone,
null alt_phone_ext
FROM provider m, LOCATION l, npi n,TEMP_VITAL_CACTUS t,test_provider_pin pin
WHERE m.ghi_prov_num=l.ghi_prov_num
and m.ghi_prov_num=n.ghi_prov_num(+)
and m.ghi_prov_num=t.ghi_prov_num(+)
and m.tax_id=pin.tax_id
UNION
SELECT
'9999', m.ghi_prov_num ,
m.location provloc,
null ,
null ,
n.npi_num ,
null ,
m.lastname ,
m.firstname ,
null,
null ,
null ,
m.street1 ,
m.street2 ,
m.city ,
m.state ,
m.zip5 ,
m.phone ,
null ,
null ,
null ,
null ,
null
FROM dental_provider m, npi n,test_provider_pin pin
WHERE m.ghi_prov_num=n.tax_id(+)
and m.location=n.location(+)
and pin.tax_id=m.ghi_prov_num;
The query takes forever;
But each individual query takes less than a sec to execute. Is there any way I can rewrite the query?
Please help
Hena.
user11253970 wrote:
But Individual query takes less than a sec to execute.Is there any way can i rewrite the query?
I have a feeling you are using Toad/SQL Navigator or a similar tool which returns data one screen at a time. If so, then it does not take less than a sec to execute, but rather less than a sec to fetch the first screen of rows. When you use UNION, Oracle has to return distinct rows from both queries, so it must fetch not just the first screen but all rows. To verify, issue the first query and in your GUI tool click on "get last screen"; then you'll know how long the whole select takes. Do the same for the second query. Also, do you need distinct rows or all rows? If all rows, change UNION to UNION ALL.
SY. -
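The UNION vs UNION ALL difference SY describes can be sketched like this (emp_a and emp_b are hypothetical tables, not from the original query):

```sql
-- UNION must eliminate duplicates across both branches, so Oracle has
-- to materialize and sort/hash ALL rows before returning anything.
SELECT ename FROM emp_a
UNION
SELECT ename FROM emp_b;

-- UNION ALL simply concatenates the two row sources: no duplicate
-- elimination, and rows can be streamed to the client as they come.
SELECT ename FROM emp_a
UNION ALL
SELECT ename FROM emp_b;
```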
My Query takes too long ...
Hi ,
Env: DB 10G, O/S: Linux Redhat. My DB size is about 80G.
My query takes too long, about 5 days to get results. Can you please help to rewrite this query in a better way?
declare
x number;
y date;
START_DATE DATE;
MDN VARCHAR2(12);
TOPUP VARCHAR2(50);
begin
for first_bundle in
select min(date_time_of_event) date_time_of_event ,account_identifier ,top_up_profile_name
from bundlepur
where account_profile='Basic'
AND account_identifier='665004664'
and in_service_result_indicator=0
and network_cause_result_indicator=0
and DATE_TIME_OF_EVENT >= to_date('16/07/2013','dd/mm/yyyy')
group by account_identifier,top_up_profile_name
order by date_time_of_event
loop
select sum(units_per_tariff_rum2) ,max(date_time_of_event)
into x,y
from OLD_LTE_CDR
where account_identifier=(select first_bundle.account_identifier from dual)
and date_time_of_event >= (select first_bundle.date_time_of_event from dual)
and -- no more than a month
date_time_of_event < ( select add_months(first_bundle.date_time_of_event,1) from dual)
and -- finished his bundle then buy a new one
date_time_of_event < ( SELECT MIN(DATE_TIME_OF_EVENT)
FROM OLD_LTE_CDR
WHERE DATE_TIME_OF_EVENT > (select (first_bundle.date_time_of_event)+1/24 from dual)
AND IN_SERVICE_RESULT_INDICATOR=26);
select first_bundle.account_identifier ,first_bundle.top_up_profile_name
,FIRST_BUNDLE.date_time_of_event
INTO MDN,TOPUP,START_DATE
from dual;
insert into consumed1 VALUES(X,topup,MDN,START_DATE,Y);
end loop;
COMMIT;
end;
> where account_identifier=(select first_bundle.account_identifier from dual)
Why are you doing this? It's a completely unnecessary subquery.
Just do this:
where account_identifier = first_bundle.account_identifier
Same for all your other FROM DUAL subqueries. Get rid of them.
More importantly, don't use a cursor for loop. Just write one big INSERT statement that does what you want. -
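The set-based rewrite suggested above might look roughly like the sketch below; it is simplified (several of the original predicates are omitted) and untested, just to show the shape of one INSERT replacing the loop:

```sql
-- One set-based INSERT replacing the cursor loop: outer-query columns
-- are referenced directly, with no SELECT ... FROM dual wrappers.
-- Sketch only -- simplified from the original post and untested.
INSERT INTO consumed1
SELECT SUM(o.units_per_tariff_rum2),      -- was X
       fb.top_up_profile_name,            -- was TOPUP
       fb.account_identifier,             -- was MDN
       fb.date_time_of_event,             -- was START_DATE
       MAX(o.date_time_of_event)          -- was Y
FROM   old_lte_cdr o
JOIN  (SELECT MIN(date_time_of_event) date_time_of_event,
              account_identifier,
              top_up_profile_name
       FROM   bundlepur
       WHERE  account_profile = 'Basic'
       AND    in_service_result_indicator = 0
       AND    network_cause_result_indicator = 0
       GROUP  BY account_identifier, top_up_profile_name) fb
  ON   o.account_identifier = fb.account_identifier
WHERE  o.date_time_of_event >= fb.date_time_of_event
AND    o.date_time_of_event <  ADD_MONTHS(fb.date_time_of_event, 1)
GROUP  BY fb.account_identifier, fb.top_up_profile_name, fb.date_time_of_event;
```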
Ldap search query takes more than 10 seconds
LDAP query takes more than 10 seconds to execute.
For validating the configured policy, the Access Manager (Sun Java System Access Manager) contacts the LDAP server (Sun Java System Directory Server 6.2) to get the users in a dynamic group. The timeout value configured in Access Manager for LDAP searches is 10 seconds.
Issue: the LDAP query sometimes takes more than 10 seconds to execute.
The query executes in less than 10 seconds in most cases, but takes more than 10 seconds in some. The total number of users in the LDAP server is less than 1500.
7 etime =1
6 etime =1
102 etime=4
51 etime=5
26 etime=6
5 etime=7
4 etime=8
From the LDAP access logs we can see the following entry; sometimes the query takes more than 10 seconds:
[28/May/2012:14:21:26 +0200] conn=281 op=41433 msgId=853995 - SRCH base="dc=****,dc=****,dc=com" scope=2 filter="(&(&(***=true)(**=true))(objectClass=vfperson))" attrs=ALL
[28/May/2012:14:21:36 +0200] conn=281 op=41434 msgId=854001 - ABANDON targetop=41433 msgid=853995 nentries=884 etime=10
The query was aborted by the Access Manager after 10 seconds.
Please post your suggestions to resolve this issue .
1. How can we find out why the query is taking more than 10 seconds?
2. What are the next steps to resolve this issue?
Hi Marco,
Thanks for your suggestions.
Sorry for replying late. I was out of the office for a few weeks.
1) Have you already tuned the caches? (entry cache, db cache, filesystem cache?)
We are using the db cache and we have not done any tuning for the cache. The application was working fine and there were not many changes in the number of users.
2) Unfortunately we don't have direct access to the environment and we have contacted the responsible team to verify the server health during the issue .
Regarding the IO operations, we can see that the load balancer is pinging the LDAP server every 15 seconds to check the status of the LDAP servers, which yields a new connection on every hit (on average 8 connections per minute).
3) We use cn=dsameuser to bind to the directory server. Other configuration details for LDAP:
LDAP Connection Pool Minimum Size: 1
LDAP Connection Pool Maximum Size:10
Maximum Results Returned from Search: 1700
Search Timeout: 10
Is the configured Search Timeout value proper? (We have less than 1500 users in the LDAP server.)
Also, is there any impact if the value of Maximum Results Returned from Search is set to 1700? (The Sun document for AM says that the ideal value for this is 1000, and if it's higher than this it will impact performance.)
The application was running without timeout issues for the last 2 years and there was not much increase in the number of users in the system (at most 200 users added in the last 2 years).
Thanks,
Jay -
Query takes long time from one machine but 1 sec from another machine
I got an update query, which is like an application patch, that takes 1 sec from one machine. I need to apply it on the other machine where the application is installed.
Both applications are the same and connect to the same DB server. The query run from the second machine takes a very long time,
but I can update other things from the second machine.
Is there anything to do with page size or line size?
Urgent, please.
Hi,
Everything is the same except that it is from a different machine.
Could it be a client version issue, because the script is so wide, like 240 chars?
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | UPDATE STATEMENT | | | | |
| 1 | UPDATE | IDI_INTERFACE_MST | | | |
| 2 | INDEX UNIQUE SCAN | PK_IDI_INTMST | | | |
Note: rule based optimization, PLAN_TABLE' is old version
10 rows selected.
Message was edited by:
Maran.E -
My query takes a long time..
The output of tkprof of my trace file is :
SELECT ENEXT.NUM_PRSN_EMPLY ,ENEXT.COD_BUSUN ,ENEXT.DAT_CALDE ,ENEXT.COD_SHFT
FROM
AAC_EMPLOYEE_ENTRY_EXITS5_VIW ENEXT ,PDS.PDS_EMPLOYEES EMPL ,
PDS.PDS_EMPLOYMENT_TYPES EMPTYP ,PDS.PDS_PAY_CONDITIONS PAYCON WHERE
ENEXT.DAT_CALDE BETWEEN :B6 AND :B5 AND ENEXT.NUM_PRSN_EMPLY IN (SELECT
ATT21 FROM APPS.GLOBAL_TEMPS WHERE ATT1 = 'PRSN') AND ENEXT.NUM_PRSN_EMPLY =
EMPL.NUM_PRSN_EMPLY AND EMPL.EMTYP_COD_EMTYP = EMPTYP.COD_EMTYP AND
EMPTYP.LKP_COD_STA_PAY_EMTYP <> 3 AND
NVL(EMPL.LKP_MNTLY_WITHOUT_ENEXT_EMPLY,2) <> 1 AND EMPL.PCOND_COD_STA_PCOND
= PAYCON.COD_STA_PCOND AND NVL(EMPL.LKP_MNTLY_WITHOUT_ENEXT_EMPLY,2) <> 1
AND PAYCON.LKP_FLG_STA_PAY_PCOND = 1 AND ENEXT.DAT_CALDE >=
EMPL.DAT_EMPLT_EMPLY AND ENEXT.DAT_CALDE <= NVL(EMPL.DAT_DSMSL_EMPLY,
TO_DATE('15001229','YYYYMMDD')) AND 1 = (CASE WHEN
ENEXT.LKP_STA_HOLIDAY_CALNR = 2 AND ENEXT.LKP_CAT_SHFT_SHTAB = 1 AND
ENEXT.TYP_DAY BETWEEN 4 AND 6 THEN 0 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 2
AND ENEXT.LKP_CAT_SHFT_SHTAB = 1 AND ENEXT.TYP_DAY NOT BETWEEN 4 AND 6 THEN
1 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 2 AND ENEXT.LKP_CAT_SHFT_SHTAB = 2
THEN 0 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 1 AND ENEXT.LKP_CAT_SHFT_SHTAB =
1 THEN 1 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 1 AND ENEXT.LKP_CAT_SHFT_SHTAB =
2 THEN 0 END) AND ENEXT.LKP_COD_DPUT_BUSUN = NVL(:B4 ,
ENEXT.LKP_COD_DPUT_BUSUN) AND ENEXT.LKP_COD_MANAG_BUSUN = NVL(:B3 ,
ENEXT.LKP_COD_MANAG_BUSUN) AND ENEXT.COD_BUSUN = NVL(:B2 , ENEXT.COD_BUSUN)
AND ENEXT.COD_CAL = NVL(COD_CAL, ENEXT.COD_CAL) AND ENEXT.NUM_PRSN_EMPLY =
NVL(:B1 , ENEXT.NUM_PRSN_EMPLY) AND ENEXT.COD_SHFT IN (SELECT
SHFTBL.COD_SHTAB FROM AAC_SHIFT_TABLES SHFTBL WHERE
SHFTBL.LKP_CAT_SHFT_SHTAB = 1) AND ENEXT.DAT_CALDE NOT IN (SELECT ABN.DAT
FROM APPS.AAC_EMPL_EN_EX_ABNORMAL_VIW ABN WHERE ABN.PRSN =
ENEXT.NUM_PRSN_EMPLY AND ABN.DAT BETWEEN :B6 AND :B5 ) AND ENEXT.DAT_CALDE
IN (SELECT EMPENEXT.DAT_STR_SHFT_ENEXT FROM AAC.AAC_EMPLOYEE_ENTRY_EXITS
EMPENEXT WHERE EMPENEXT.EMPLY_NUM_PRSN_EMPLY = EMPL.NUM_PRSN_EMPLY AND
EMPENEXT.DAT_STR_SHFT_ENEXT BETWEEN :B6 AND :B5 AND
EMPENEXT.LKP_FLG_STA_ENEXT <> 3) ORDER BY ENEXT.NUM_PRSN_EMPLY,
ENEXT.DAT_CALDE
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 40.45 40.30 306 17107740 0 24
total 6 40.45 40.30 306 17107740 0 24
what is wrong in my query?
why it take long time?
user13344656 wrote:
what is wrong in my query?
why it take long time?
See PL/SQL forum FAQ
https://forums.oracle.com/forums/ann.jspa?annID=1535
*3. How to improve the performance of my query? / My query is running slow.*
SQL and PL/SQL FAQ
For instructions on what information to post and how to format it.
Query takes long time to return results.
I am on Oracle database 10g Enterprise Edition Release 10.2.0.4.0 – 64 bit
This query takes about 58 seconds to return 180 rows...
SELECT order_num,
order_date,
company_num,
customer_num,
address_type,
create_date as address_create_date,
contact_name,
first_name,
middle_init,
last_name,
company_name,
street_address_1,
customer_class,
city,
state,
zip_code,
country_code,
MAX(decode(media_type,
'PHH',
phone_area_code || '''' || phone_number,
NULL)) home_phone,
MAX(decode(media_type,
'PHW',
phone_area_code || '''' || phone_number,
NULL)) work_phone,
address_seq_num,
street_address_2
FROM (SELECT oh.order_num order_num,
oh.order_datetime order_date,
oh.company_num company_num,
oh.customer_num customer_num,
ad.address_type address_type,
c.create_date create_date,
con.first_name || '''' || con.last_name contact_name,
con.first_name first_name,
con.middle_init middle_init,
con.last_name last_name,
ad.company_name company_name,
ad.street_address_1 street_address_1,
c.customer_class customer_class,
ad.city city,
ad.state state,
ad.zip_code zip_code,
ad.country_code,
cph.media_type media_type,
cph.phone_area_code phone_area_code,
cph.phone_number phone_number,
ad.address_seq_num address_seq_num,
ad.street_address_2 street_address_2
FROM reporting_base.gt_gaft_orders gt,
doms.us_ordhdr oh,
doms.us_address ad,
doms.us_customer c,
doms.us_contact con,
doms.us_contph cph
WHERE oh.customer_num = c.customer_num(+)
AND oh.customer_num = ad.customer_num(+)
AND (
ad.customer_num = c.customer_num
AND
ad.address_type = 'B'
OR (
ad.customer_num = c.customer_num
AND
ad.address_type = 'S'
AND
ad.address_seq_num = oh.ship_to_seq_num
AND ad.customer_num = con.customer_num(+)
AND ad.address_type = con.address_type(+)
AND ad.address_seq_num = con.address_seq_num(+)
AND con.customer_num = cph.customer_num(+)
AND con.contact_id = cph.contact_id(+)
AND oh.order_num = gt.order_num
AND oh.business_unit_id = gt.business_unit_id)
GROUP BY order_num,
order_date,
company_num,
customer_num,
address_type,
create_date,
contact_name,
first_name,
middle_init,
last_name,
company_name,
street_address_1,
customer_class,
city,
state,
zip_code,
country_code,
address_seq_num,
street_address_2;
This is the explain plan for the query:
Plan
SELECT STATEMENT FIRST_ROWS Cost: 21 Bytes: 207 Cardinality: 1
18 HASH GROUP BY Cost: 21 Bytes: 207 Cardinality: 1
17 NESTED LOOPS OUTER Cost: 20 Bytes: 207 Cardinality: 1
14 NESTED LOOPS OUTER Cost: 16 Bytes: 183 Cardinality: 1
11 FILTER
10 NESTED LOOPS OUTER Cost: 12 Bytes: 152 Cardinality: 1
7 NESTED LOOPS OUTER Cost: 8 Bytes: 74 Cardinality: 1
4 NESTED LOOPS OUTER Cost: 5 Bytes: 56 Cardinality: 1
1 TABLE ACCESS FULL TABLE (TEMP) REPORTING_BASE.GT_GAFT_ORDERS Cost: 2 Bytes: 26 Cardinality: 1
3 TABLE ACCESS BY INDEX ROWID TABLE DOMS.US_ORDHDR Cost: 3 Bytes: 30 Cardinality: 1
2 INDEX UNIQUE SCAN INDEX (UNIQUE) DOMS.USORDHDR_IXUPK_ORDNUMBUID Cost: 2 Cardinality: 1
6 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CUSTOMER Cost: 3 Bytes: 18 Cardinality: 1 Partition #: 11
5 INDEX UNIQUE SCAN INDEX (UNIQUE) DOMS.USCUSTOMER_IXUPK_CUSTNUM Cost: 2 Cardinality: 1
9 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_ADDRESS Cost: 4 Bytes: 156 Cardinality: 2 Partition #: 13
8 INDEX RANGE SCAN INDEX (UNIQUE) DOMS.USADDR_IXUPK_CUSTATYPASEQ Cost: 3 Cardinality: 2
13 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CONTACT Cost: 4 Bytes: 31 Cardinality: 1 Partition #: 15
12 INDEX RANGE SCAN INDEX DOMS.USCONT_IX_CNATAS Cost: 3 Cardinality: 1
16 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CONTPH Cost: 4 Bytes: 24 Cardinality: 1 Partition #: 17
15 INDEX RANGE SCAN INDEX (UNIQUE) DOMS.USCONTPH_IXUPK_CUSTCONTMEDSEQ Cost: 3 Cardinality: 1
Cost is good. All indexes are used. However the time to return the data is very high.
Any ideas to make the query faster?
Thanks
Hi, here is the tkprof output as requested by Rob..
TKPROF: Release 10.2.0.4.0 - Production on Mon Jul 13 09:07:09 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Trace file: axispr1_ora_15293.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT ORDER_NUM, ORDER_DATE, COMPANY_NUM, CUSTOMER_NUM, ADDRESS_TYPE,
CREATE_DATE AS ADDRESS_CREATE_DATE, CONTACT_NAME, FIRST_NAME, MIDDLE_INIT,
LAST_NAME, COMPANY_NAME, STREET_ADDRESS_1, CUSTOMER_CLASS, CITY, STATE,
ZIP_CODE, COUNTRY_CODE, MAX(DECODE(MEDIA_TYPE, 'PHH', PHONE_AREA_CODE ||
'''' || PHONE_NUMBER, NULL)) HOME_PHONE, MAX(DECODE(MEDIA_TYPE, 'PHW',
PHONE_AREA_CODE || '''' || PHONE_NUMBER, NULL)) WORK_PHONE, ADDRESS_SEQ_NUM,
STREET_ADDRESS_2
FROM
(SELECT OH.ORDER_NUM ORDER_NUM, OH.ORDER_DATETIME ORDER_DATE, OH.COMPANY_NUM
COMPANY_NUM, OH.CUSTOMER_NUM CUSTOMER_NUM, AD.ADDRESS_TYPE ADDRESS_TYPE,
C.CREATE_DATE CREATE_DATE, CON.FIRST_NAME || '''' || CON.LAST_NAME
CONTACT_NAME, CON.FIRST_NAME FIRST_NAME, CON.MIDDLE_INIT MIDDLE_INIT,
CON.LAST_NAME LAST_NAME, AD.COMPANY_NAME COMPANY_NAME, AD.STREET_ADDRESS_1
STREET_ADDRESS_1, C.CUSTOMER_CLASS CUSTOMER_CLASS, AD.CITY CITY, AD.STATE
STATE, AD.ZIP_CODE ZIP_CODE, AD.COUNTRY_CODE, CPH.MEDIA_TYPE MEDIA_TYPE,
CPH.PHONE_AREA_CODE PHONE_AREA_CODE, CPH.PHONE_NUMBER PHONE_NUMBER,
AD.ADDRESS_SEQ_NUM ADDRESS_SEQ_NUM, AD.STREET_ADDRESS_2 STREET_ADDRESS_2
FROM REPORTING_BASE.GT_GAFT_ORDERS GT, DOMS.US_ORDHDR OH, DOMS.US_ADDRESS
AD, DOMS.US_CUSTOMER C, DOMS.US_CONTACT CON, DOMS.US_CONTPH CPH WHERE
OH.ORDER_NUM = GT.ORDER_NUM AND OH.BUSINESS_UNIT_ID = GT.BUSINESS_UNIT_ID
AND OH.CUSTOMER_NUM = C.CUSTOMER_NUM(+) AND OH.CUSTOMER_NUM =
AD.CUSTOMER_NUM(+) AND AD.CUSTOMER_NUM = C.CUSTOMER_NUM AND (
AD.ADDRESS_TYPE = 'B' OR ( AD.ADDRESS_TYPE = 'S' AND AD.ADDRESS_SEQ_NUM =
OH.SHIP_TO_SEQ_NUM ) ) AND AD.CUSTOMER_NUM = CON.CUSTOMER_NUM(+) AND
AD.ADDRESS_TYPE = CON.ADDRESS_TYPE(+) AND AD.ADDRESS_SEQ_NUM =
CON.ADDRESS_SEQ_NUM(+) AND CON.CUSTOMER_NUM = CPH.CUSTOMER_NUM(+) AND
CON.CONTACT_ID = CPH.CONTACT_ID(+) ) GROUP BY ORDER_NUM, ORDER_DATE,
COMPANY_NUM, CUSTOMER_NUM, ADDRESS_TYPE, CREATE_DATE, CONTACT_NAME,
FIRST_NAME, MIDDLE_INIT, LAST_NAME, COMPANY_NAME, STREET_ADDRESS_1,
CUSTOMER_CLASS, CITY, STATE, ZIP_CODE, COUNTRY_CODE, ADDRESS_SEQ_NUM,
STREET_ADDRESS_2
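The MAX(DECODE(...)) expressions above are the classic row-to-column pivot: each phone row satisfies exactly one DECODE branch, and MAX collapses the group to a single row per address. A stripped-down sketch of the same pattern, using a hypothetical CONTACT_PHONE table with one row per MEDIA_TYPE ('PHH' home, 'PHW' work):

```
SELECT contact_id,
       MAX(DECODE(media_type, 'PHH', phone_number, NULL)) AS home_phone,
       MAX(DECODE(media_type, 'PHW', phone_number, NULL)) AS work_phone
FROM   contact_phone
GROUP  BY contact_id;
```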
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 257 0.04 0.05 45 0 0 6421
total 257 0.04 0.05 45 0 0 6421
Misses in library cache during parse: 0
Parsing user id: 126
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 257 0.04 0.05 45 0 0 6421
total 257 0.04 0.05 45 0 0 6421
Misses in library cache during parse: 0
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
1 user SQL statements in session.
0 internal SQL statements in session.
1 SQL statements in session.
Trace file: axispr1_ora_15293.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
1 user SQL statements in trace file.
0 internal SQL statements in trace file.
1 SQL statements in trace file.
1 unique SQL statements in trace file.
289 lines in trace file.
83 elapsed seconds in trace file.
Thanks in advance!
-
SELECT CAL_EMPCALENDAR.START_DATE as main,
bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' ||
CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' ||
CAL_EMPCALENDAR.EMPLOYEE_ID as name,
CAL_EMPCALENDAR.START_DATE as sdate,
CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
CAL_EMPCALENDAR.LATE_IN,
CAL_EMPCALENDAR.EARLY_OUT,
CAL_EMPCALENDAR.UNDER_TIME,
CAL_EMPCALENDAR.OVERTIME,
TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
(SELECT shift_id
FROM CAL_GRPWORKDAY
WHERE CAL_GRPWORKDAY.calgrp_id =
(SELECT calgrp_id
FROM CAL_CALASSIGNMENT
WHERE employee_id = CAL_EMPCALENDAR.employee_id
AND CAL_CALASSIGNMENT.START_DATE <=
CAL_EMPCALENDAR.START_DATE
AND (CAL_CALASSIGNMENT.END_DATE is null or
CAL_CALASSIGNMENT.END_DATE >=
CAL_EMPCALENDAR.START_DATE))
AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
(SELECT max(entry_dt)
FROM LV_APPSTATUSHIST, LV_TXN txn, CAL_EMPDAILYEVENT cale
WHERE status = 'Approved'
AND LV_APPSTATUSHIST.application_id = txn.application_id
AND cale.reference_id = txn.txn_id
AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id
) AS entry_dt,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 1
and BIZUNIT_ID like 'SG')) F1,
--TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 2
and bizunit_id like 'SG')) F2,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 3
and bizunit_id like 'SG')) F3,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 4
and bizunit_id like 'SG')) F4,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
WF_STATUS = 'Verified' OR WF_STATUS is Null OR
WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 5
and bizunit_id like 'SG')) F5
From CAL_EMPCALENDAR, HRM_CURR_CAREER_V, CAL_SHIFT, HRM_EMPLOYEE
Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
AND (CAL_EMPCALENDAR.WF_STATUS = 'Approved' Or
CAL_EMPCALENDAR.WF_STATUS = 'No Action')
AND CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
--and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
AND CAL_EMPCALENDAR.START_DATE BETWEEN
GREATEST(HRM_EMPLOYEE.COMMENCE_DATE,
TO_DATE('1-4-2006', 'DD-MM-YYYY')) AND
LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'),
NVL(HRM_EMPLOYEE.CESSATION_DATE,
TO_DATE('30-4-2006', 'DD-MM-YYYY')))
And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
-- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
--AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
--$P!{ExceptionSQL}
--$P!{iHRFilterClause}
--order by $P!{OrderBy}
order by main
Hi all, this query takes a very long time to run.
In the explain plan, the table shown in bold uses a full table scan; all the others use index scans.
The table has indexes on the referenced columns.
Oracle version 9.2.0.6
Message was edited by:
Maran.E
Maran,
With tags and indentation it should be easiest to analyze at least for you :
SELECT CAL_EMPCALENDAR.START_DATE as main,
bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' || CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' || CAL_EMPCALENDAR.EMPLOYEE_ID as name,
CAL_EMPCALENDAR.START_DATE as sdate,
CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
CAL_EMPCALENDAR.LATE_IN,
CAL_EMPCALENDAR.EARLY_OUT,
CAL_EMPCALENDAR.UNDER_TIME,
CAL_EMPCALENDAR.OVERTIME,
TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
(SELECT shift_id
FROM CAL_GRPWORKDAY
WHERE CAL_GRPWORKDAY.calgrp_id = (SELECT calgrp_id
FROM CAL_CALASSIGNMENT
WHERE employee_id = CAL_EMPCALENDAR.employee_id
AND CAL_CALASSIGNMENT.START_DATE <= CAL_EMPCALENDAR.START_DATE
AND ( CAL_CALASSIGNMENT.END_DATE is null
or CAL_CALASSIGNMENT.END_DATE >= CAL_EMPCALENDAR.START_DATE))
AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
(SELECT max(entry_dt)
FROM LV_APPSTATUSHIST, LV_TXN txn, CAL_EMPDAILYEVENT cale
WHERE status = 'Approved'
AND LV_APPSTATUSHIST.application_id = txn.application_id
AND cale.reference_id = txn.txn_id
AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id) AS entry_dt,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 1
and BIZUNIT_ID like 'SG')) F1,
--TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 2
and bizunit_id like 'SG')) F2,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 3
and bizunit_id like 'SG')) F3,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 4
and bizunit_id like 'SG')) F4,
(SELECT ENTITLEMENT + ADJUST
FROM TAM_ALLOWANCE
WHERE ( WF_STATUS = 'Pending'
OR WF_STATUS = 'Approved'
OR WF_STATUS = 'Verified'
OR WF_STATUS is Null
OR WF_STATUS = 'No Action')
and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
AND ITEM_ID = (SELECT ITEM_ID
FROM TAM_CLAIM_FORMAT
WHERE SEQUENCE = 5
and bizunit_id like 'SG')) F5
From CAL_EMPCALENDAR,
HRM_CURR_CAREER_V,
CAL_SHIFT,
HRM_EMPLOYEE
Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
AND ( CAL_EMPCALENDAR.WF_STATUS = 'Approved'
Or CAL_EMPCALENDAR.WF_STATUS = 'No Action')
AND CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
--and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
AND CAL_EMPCALENDAR.START_DATE BETWEEN GREATEST(HRM_EMPLOYEE.COMMENCE_DATE, TO_DATE('1-4-2006', 'DD-MM-YYYY'))
AND LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'), NVL(HRM_EMPLOYEE.CESSATION_DATE, TO_DATE('30-4-2006', 'DD-MM-YYYY')))
And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
-- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
--AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
--$P!{ExceptionSQL}
--$P!{iHRFilterClause}
--order by $P!{OrderBy}
order by main
Nicolas.
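One thing worth noting about this query: the five F1..F5 scalar subqueries differ only in the TAM_CLAIM_FORMAT.SEQUENCE value, so TAM_ALLOWANCE and TAM_CLAIM_FORMAT can be probed up to ten times per result row. A single-pass alternative using a join plus conditional aggregation could look like the sketch below; this is only an untested outline against the posted table and column names:

```
SELECT ta.empcalendar_id,
       MAX(DECODE(cf.sequence, 1, ta.entitlement + ta.adjust)) AS f1,
       MAX(DECODE(cf.sequence, 2, ta.entitlement + ta.adjust)) AS f2,
       MAX(DECODE(cf.sequence, 3, ta.entitlement + ta.adjust)) AS f3,
       MAX(DECODE(cf.sequence, 4, ta.entitlement + ta.adjust)) AS f4,
       MAX(DECODE(cf.sequence, 5, ta.entitlement + ta.adjust)) AS f5
FROM   tam_allowance   ta,
       tam_claim_format cf
WHERE  ta.item_id    = cf.item_id
AND    cf.bizunit_id = 'SG'
AND    (ta.wf_status IN ('Pending', 'Approved', 'Verified', 'No Action')
        OR ta.wf_status IS NULL)
GROUP  BY ta.empcalendar_id;
-- Outer-join this as an inline view to CAL_EMPCALENDAR on EMPCALENDAR_ID
-- in place of the five scalar subqueries.
```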