A SQL statement suddenly takes more time than before

On September 28 the business side found that a SQL statement's runtime had suddenly grown: it used to take about 1 second, now it takes about 10 seconds.
Database: Oracle 10.2.0.4 RAC, 2 nodes
Server: AIX 5.3
The statement:
update J_ORGANIZATION c
set c.MODIFY_TIME = sysdate, c.MODIFY_EMPL_ID = 1111
where 1 = 1
and c.ORG_ID ='BJ0000270551'
and c.DEAL_STATUS = '1' ;
Checking the execution plan showed nothing that obviously hurts performance:
Rows Execution Plan
0 UPDATE STATEMENT MODE: ALL_ROWS
1 UPDATE OF 'J_ORGANIZATION'
1 TABLE ACCESS MODE: ANALYZED (BY GLOBAL INDEX ROWID) OF
'J_ORGANIZATION' (TABLE) PARTITION:ROW LOCATION
1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'UNI_ORG_ID' (INDEX
(UNIQUE))
A 10046 trace revealed an extra SQL statement in the trace file:
UPDATE BDP_ZQINFO.J_OUT_ORG_FOR_TRS B SET OPER_TYPE = 0, CREATE_TIME = SYSDATE
WHERE
EXISTS (SELECT 1 FROM BDP_ZQINFO.J_OUT_ORG_FOR_TRS A WHERE A.ORG_SERIAL_ID =
B.ORG_SERIAL_ID AND B.ORG_SERIAL_ID = :B1 );
It is generated by a trigger; the trigger source is as follows:
CREATE OR REPLACE TRIGGER T_UPD_J_ORGANIZATION
  before update of modify_time -- synchro_status
  on J_ORGANIZATION
  for each row
declare
  org_serialid number := :old.org_serial_id;
  org_prov varchar2(8) := :old.prov_region_code;
begin
  -- dbms_output.put_line('bbbbbbbbbbbbbbbbbbb');
  /* IF
     :old.synchro_status=1 and :new.synchro_status=0 OR
     :old.synchro_status is null and :new.synchro_status=0
  THEN */
  sp_j_out_org_for_trs(org_serialid, org_prov);
EXCEPTION
  WHEN OTHERS THEN
    -- Consider logging the error and then re-raise
    RAISE;
  -- END IF;
end T_UPD_J_ORGANIZATION;
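Note that the original IF on synchro_status is commented out, so sp_j_out_org_for_trs now fires on every update of modify_time. If that filter is still intended, it could be restored cheaply as a trigger WHEN clause (a sketch only, assuming the commented condition still reflects the requirement; OLD/NEW in a WHEN clause take no colons):

```sql
CREATE OR REPLACE TRIGGER T_UPD_J_ORGANIZATION
  BEFORE UPDATE OF modify_time ON J_ORGANIZATION
  FOR EACH ROW
  -- WHEN is evaluated before the row-trigger body runs, so the
  -- procedure call is skipped entirely for non-qualifying rows
  WHEN (NEW.synchro_status = 0
        AND (OLD.synchro_status = 1 OR OLD.synchro_status IS NULL))
BEGIN
  sp_j_out_org_for_trs(:old.org_serial_id, :old.prov_region_code);
END T_UPD_J_ORGANIZATION;
```

This does not by itself fix the slow UPDATE inside the procedure, but it cuts how often that statement runs.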
This trigger-generated UPDATE should be what runs first, and it is where the time goes. J_OUT_ORG_FOR_TRS is list-partitioned (on PROV_REGION_CODE) with roughly 4 million rows;
the ORG_SERIAL_ID column has a global normal (non-partitioned) index, J_OUT_ORG_FOR_TRS_ID. Index details:
OWNER     INDEX_NAME     INDEX_TYPE     TABLE_OWNER     TABLE_NAME     BLEVEL     LEAF_BLOCKS     DISTINCT_KEYS     AVG_LEAF_BLOCKS_PER_KEY     AVG_DATA_BLOCKS_PER_KEY     CLUSTERING_FACTOR     STATUS     NUM_ROWS     SAMPLE_SIZE     LAST_ANALYZED
BDP_ZQINFO     IDX_ORGANIZATION_1     NORMAL     BDP_ZQINFO     J_ORGANIZATION     3     154,241     18,181,606     1     1     17,794,548     VALID     18,307,431     131,394     10-05-2012 23:34:12
BDP_ZQINFO     J_OUT_ORG_FOR_TRS_ID     NORMAL     BDP_ZQINFO     J_OUT_ORG_FOR_TRS     2     15,188     4,429,794     1     1     3,576,204     VALID     4,491,419     330,908     10-01-2012 22:24:58
The execution plan of this statement is as follows:
WORKLOAD REPOSITORY SQL Report
Snapshot Period Summary
DB Name DB Id Instance Inst num Release RAC Host
BSTTEST 1834441837 bsttest1 1 10.2.0.4.0 YES olap1
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 35139 08-10月-12 13:00:32 231 1.7
End Snap: 35140 08-10月-12 14:00:49 249 1.7
Elapsed: 60.28 (mins)
DB Time: 212.30 (mins)
SQL Summary
SQL Id Elapsed Time (ms) Module Action SQL Text
av6s7vnuqkhh6 100,066 UPDATE J_OUT_ORG_FOR_TRS B SET OPER_TYPE = 0, CREATE_TIME = SYSDATE W...
Back to Top
SQL ID: av6s7vnuqkhh6
1st Capture and Last Capture Snap IDs refer to Snapshot IDs within the snapshot range
UPDATE J_OUT_ORG_FOR_TRS B SET OPER_TYPE = 0, CREATE_TIME = SYSDATE WH...
# Plan Hash Value Total Elapsed Time(ms) Executions 1st Capture Snap ID Last Capture Snap ID
1 1602621420 100,066 14 35140 35140
Back to Top
Plan 1(PHV: 1602621420)
Plan Statistics
Execution Plan
Back to Top
Plan Statistics
% Total DB Time is the Elapsed Time of the SQL statement divided into the Total Database Time multiplied by 100
Stat Name Statement Total Per Execution % Snap Total
Elapsed Time (ms) 100,066 7,147.57 0.79
CPU Time (ms) 97,695 6,978.23 1.18
Executions 14
Buffer Gets 335,669 23,976.36 0.09
Disk Reads 39 2.79 0.00
Parse Calls 5 0.36 0.00
Rows 40 2.86
User I/O Wait Time (ms) 173
Cluster Wait Time (ms) 2,632
Application Wait Time (ms) 0
Concurrency Wait Time (ms) 22
Invalidations 0
Version Count 5
Sharable Mem(KB) 27
Back to Plan 1(PHV: 1602621420)
Back to Top
Execution Plan
Id Operation Name Rows Bytes Cost (%CPU) Time Pstart Pstop
0 UPDATE STATEMENT 11M(100)
1 UPDATE J_OUT_ORG_FOR_TRS
2 FILTER
3 PARTITION LIST ALL 3843K 65M 4486 (3) 00:00:54 1 32
4 TABLE ACCESS FULL J_OUT_ORG_FOR_TRS 3843K 65M 4486 (3) 00:00:54 1 32
5 FILTER
6 INDEX RANGE SCAN J_OUT_ORG_FOR_TRS_ID 1 7 3 (0) 00:00:01
Back to Plan 1(PHV: 1602621420)
Back to Top
The plan does a full scan of J_OUT_ORG_FOR_TRS (PARTITION LIST ALL), which is where the bulk of the time goes. How can this be optimized?
The full trace file follows:
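One candidate rewrite (a sketch; it assumes the statement lives in sp_j_out_org_for_trs and can be edited there): the EXISTS subquery correlates J_OUT_ORG_FOR_TRS to itself on ORG_SERIAL_ID while also pinning B.ORG_SERIAL_ID = :B1, so every row matching the direct predicate automatically satisfies the EXISTS. Dropping it lets the optimizer drive from the J_OUT_ORG_FOR_TRS_ID index range scan instead of the PARTITION LIST ALL full scan:

```sql
-- Logically equivalent to the trigger-generated UPDATE: a row of B with
-- ORG_SERIAL_ID = :B1 always matches itself in the EXISTS self-join,
-- so the subquery filters nothing and can be removed.
UPDATE BDP_ZQINFO.J_OUT_ORG_FOR_TRS B
   SET B.OPER_TYPE   = 0,
       B.CREATE_TIME = SYSDATE
 WHERE B.ORG_SERIAL_ID = :B1;
```

With this form the ~23,000 buffer gets per execution should drop to a handful (index BLEVEL is 2, plus a few table blocks).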
/oracle/orabase/admin/bsttest/udump/bsttest1_ora_1421390.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
ORACLE_HOME = /oracle/orabase/product/10g/olap
System name:     AIX
Node name:     olap1
Release:     3
Version:     5
Machine:     00C6A8824C00
Instance name: bsttest1
Redo thread mounted by this instance: 1
Oracle process number: 156
Unix process pid: 1421390, image: oracle@olap1 (TNS V1-V3)
*** ACTION NAME:() 2012-10-08 13:25:14.174
*** MODULE NAME:(SQL*Plus) 2012-10-08 13:25:14.174
*** SERVICE NAME:(SYS$USERS) 2012-10-08 13:25:14.174
*** SESSION ID:(819.44232) 2012-10-08 13:25:14.174
WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=50576422823402
*** 2012-10-08 13:25:24.664
WAIT #2: nam='SQL*Net message from client' ela= 10235104 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=50576433067587
WAIT #1: nam='library cache lock' ela= 277 handle address=504403169219121680 lock address=504403168553346632 100*mode+namespace=301 obj#=-1 tim=50576433068658
=====================
PARSING IN CURSOR #2 len=227 dep=1 uid=0 oct=3 lid=0 tim=50576433069481 hv=2190775527 ad='8ad0f328'
select u.name,o.name, t.update$, t.insert$, t.delete$, t.enabled  from obj$ o,user$ u,trigger$ t  where t.baseobject=:1 and t.obj#=o.obj# and o.owner#=u.user#  and bitand(property,16)=0 and bitand(property,8)=0  order by o.obj#
END OF STMT
PARSE #2:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=50576433069477
BINDS #2:
kkscoacd
Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=0001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=1104edb68  bln=22  avl=04  flg=05
  value=182495
EXEC #2:c=0,e=146,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=50576433069691
FETCH #2:c=0,e=328,p=0,cr=15,cu=0,mis=0,r=1,dep=1,og=4,tim=50576433070035
FETCH #2:c=0,e=4,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=50576433070074
FETCH #2:c=0,e=12,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=50576433070107
STAT #2 id=1 cnt=2 pid=0 pos=1 obj=0 op='SORT ORDER BY (cr=15 pr=0 pw=0 time=352 us)'
STAT #2 id=2 cnt=2 pid=1 pos=1 obj=0 op='NESTED LOOPS  (cr=15 pr=0 pw=0 time=294 us)'
STAT #2 id=3 cnt=2 pid=2 pos=1 obj=0 op='NESTED LOOPS  (cr=11 pr=0 pw=0 time=248 us)'
STAT #2 id=4 cnt=2 pid=3 pos=1 obj=81 op='TABLE ACCESS BY INDEX ROWID TRIGGER$ (cr=3 pr=0 pw=0 time=169 us)'
STAT #2 id=5 cnt=2 pid=4 pos=1 obj=125 op='INDEX RANGE SCAN I_TRIGGER1 (cr=1 pr=0 pw=0 time=116 us)'
STAT #2 id=6 cnt=2 pid=3 pos=2 obj=18 op='TABLE ACCESS BY INDEX ROWID OBJ$ (cr=8 pr=0 pw=0 time=79 us)'
STAT #2 id=7 cnt=2 pid=6 pos=1 obj=36 op='INDEX UNIQUE SCAN I_OBJ1 (cr=6 pr=0 pw=0 time=29 us)'
STAT #2 id=8 cnt=2 pid=2 pos=2 obj=22 op='TABLE ACCESS CLUSTER USER$ (cr=4 pr=0 pw=0 time=39 us)'
STAT #2 id=9 cnt=2 pid=8 pos=1 obj=11 op='INDEX UNIQUE SCAN I_USER# (cr=2 pr=0 pw=0 time=13 us)'
=====================
PARSING IN CURSOR #1 len=155 dep=0 uid=5 oct=6 lid=5 tim=50576433073227 hv=500046959 ad='8de25730'
update BDP_ZQINFO.J_ORGANIZATION c set c.MODIFY_TIME = sysdate, c.MODIFY_EMPL_ID = 1111 where 1 = 1  and c.ORG_ID ='BJ0000270551'  and c.DEAL_STATUS = '1'
END OF STMT
PARSE #1:c=0,e=5500,p=0,cr=15,cu=0,mis=1,r=0,dep=0,og=1,tim=50576433073224
BINDS #1:
WAIT #1: nam='db file sequential read' ela= 189 file#=179 block#=277463 blocks=1 obj#=183635 tim=50576433073777
WAIT #2: nam='library cache lock' ela= 301 handle address=504403169246502016 lock address=504403168550871736 100*mode+namespace=301 obj#=183635 tim=50576433074697
=====================
PARSING IN CURSOR #2 len=183 dep=1 uid=138 oct=6 lid=138 tim=50576433074784 hv=896090630 ad='1e6739b8'
UPDATE J_OUT_ORG_FOR_TRS B SET OPER_TYPE = 0, CREATE_TIME = SYSDATE WHERE EXISTS (SELECT 1 FROM J_OUT_ORG_FOR_TRS A WHERE A.ORG_SERIAL_ID = B.ORG_SERIAL_ID AND B.ORG_SERIAL_ID = :B1 )
END OF STMT
PARSE #2:c=0,e=511,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,tim=50576433074781
BINDS #2:
kkscoacd
Bind#0
  oacdty=02 mxl=22(21) mxlc=00 mal=00 scl=00 pre=00
  oacflg=13 fl2=206001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=110562a00  bln=22  avl=06  flg=09
  value=-23145297
WAIT #2: nam='gc current grant busy' ela= 559 p1=442 p2=296026 p3=33619969 obj#=184185 tim=50576433356596
WAIT #2: nam='gc current block 2-way' ela= 321 p1=30 p2=90709 p3=16777217 obj#=184219 tim=50576433357396
WAIT #2: nam='gc current block 2-way' ela= 514 p1=122 p2=50469 p3=33554433 obj#=184219 tim=50576433358061
WAIT #2: nam='gc current block 2-way' ela= 521 p1=122 p2=50461 p3=33554433 obj#=184219 tim=50576433358753
WAIT #2: nam='gc cr block 2-way' ela= 466 p1=189 p2=73366 p3=1 obj#=184187 tim=50576434468561
WAIT #2: nam='gc current block 2-way' ela= 613 p1=317 p2=239612 p3=1 obj#=184190 tim=50576437055206
WAIT #2: nam='gc cr block 2-way' ela= 654 p1=319 p2=228768 p3=1 obj#=184190 tim=50576437392221
WAIT #2: nam='gc cr block 2-way' ela= 333 p1=319 p2=228994 p3=1 obj#=184190 tim=50576437697191
WAIT #2: nam='gc current block 2-way' ela= 361 p1=361 p2=270819 p3=1 obj#=184190 tim=50576437997483
WAIT #2: nam='gc cr block busy' ela= 912 p1=363 p2=459877 p3=1 obj#=184191 tim=50576438627331
WAIT #2: nam='gc cr block 2-way' ela= 358 p1=87 p2=230733 p3=1 obj#=184192 tim=50576439432074
WAIT #2: nam='gc cr block 2-way' ela= 385 p1=366 p2=121340 p3=1 obj#=184199 tim=50576441543979
WAIT #2: nam='gc cr block 2-way' ela= 428 p1=366 p2=121393 p3=1 obj#=184199 tim=50576441854612
WAIT #2: nam='gc cr block 2-way' ela= 377 p1=366 p2=121412 p3=1 obj#=184199 tim=50576441864739
WAIT #2: nam='gc current block 2-way' ela= 408 p1=366 p2=121509 p3=1 obj#=184199 tim=50576442172533
WAIT #2: nam='gc cr block 2-way' ela= 392 p1=366 p2=121653 p3=1 obj#=184199 tim=50576442467808
*** 2012-10-08 13:25:37.010
EXEC #2:c=11740000,e=12049484,p=0,cr=23912,cu=7,mis=0,r=1,dep=1,og=1,tim=50576445124325
EXEC #1:c=11750000,e=12051323,p=1,cr=23916,cu=10,mis=0,r=1,dep=0,og=1,tim=50576445124609
WAIT #1: nam='SQL*Net message to client' ela= 3 driver id=1650815232 #bytes=1 p3=0 obj#=184199 tim=50576445124710
*** 2012-10-08 13:26:08.750
WAIT #1: nam='SQL*Net message from client' ela= 30995944 driver id=1650815232 #bytes=1 p3=0 obj#=184199 tim=50576476120696
STAT #1 id=1 cnt=1 pid=0 pos=1 obj=0 op='UPDATE  J_ORGANIZATION (cr=23916 pr=1 pw=0 time=12051201 us)'
STAT #1 id=2 cnt=1 pid=1 pos=1 obj=182495 op='TABLE ACCESS BY GLOBAL INDEX ROWID J_ORGANIZATION PARTITION: ROW LOCATION ROW LOCATION (cr=4 pr=1 pw=0 time=428 us)'
STAT #1 id=3 cnt=1 pid=2 pos=1 obj=183635 op='INDEX UNIQUE SCAN UNI_ORG_ID (cr=3 pr=1 pw=0 time=409 us)'
=====================
PARSING IN CURSOR #1 len=56 dep=0 uid=5 oct=42 lid=5 tim=50576476121198 hv=1729844458 ad='0'
alter session set events '10046 trace name context off'
END OF STMT
PARSE #1:c=0,e=173,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=50576476121194
EXEC #1:c=0,e=120,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=50576476121370

The following is the execution history of the trigger-generated statement:
UPDATE BDP_ZQINFO.J_OUT_ORG_FOR_TRS B SET OPER_TYPE = 0, CREATE_TIME = SYSDATE
WHERE
EXISTS (SELECT 1 FROM BDP_ZQINFO.J_OUT_ORG_FOR_TRS A WHERE A.ORG_SERIAL_ID =
B.ORG_SERIAL_ID AND B.ORG_SERIAL_ID = :B1 );
Enter value for sqlid: av6s7vnuqkhh6
old  21: ('&SQLID') order by s.snap_id
new  21: ('av6s7vnuqkhh6') order by s.snap_id
    SnapId PLAN_HASH_VALUE Date time                      No. of exec        LIO/exec CPUTIM/exec  ETIME/exec    PIO/exec   ROWs/exec
     34967      1602621420 10/01/12_0800_0900                      26        23159.35        5.66        5.83       12.27      260.88
     34968      1602621420 10/01/12_0900_1000                      70        23162.23        5.63        5.70        1.20       97.33
     34969      1602621420 10/01/12_1000_1100                      58        23162.43        5.72        5.84         .91      117.95
     34970      1602621420 10/01/12_1100_1200                      62        23161.10        5.84        5.97         .92      110.71
     34971      1602621420 10/01/12_1200_1300                       5        23164.40        5.52        5.59        2.40         .60
     34974      1602621420 10/01/12_1500_1600                       6        23157.17        5.60        5.60         .00         .00
     34991      1602621420 10/02/12_0800_0900                      11        23166.36        5.50        5.54        1.00      625.00
     34992      1602621420 10/02/12_0900_1000                      19        23158.05        5.39        5.39         .47      362.00
     34994      1602621420 10/02/12_1100_1200                       1        23161.00        5.77        5.77         .00         .00
     34998      1602621420 10/02/12_1500_1600                       7        23253.00        5.70        5.70        1.57        7.71
     34999      1602621420 10/02/12_1600_1700                       9        23159.22        5.61        5.71         .33        6.22
     35002      1602621420 10/02/12_1900_2000                       1        23169.00        5.93        6.16        1.00       56.00
     35016      1602621420 10/03/12_0900_1000                       6        23188.83        5.79        5.96        2.17        3.50
     35017      1602621420 10/03/12_1000_1100                      17        23175.59        5.58        5.58        1.41        3.06
     35018      1602621420 10/03/12_1100_1200                       9        23165.89        5.60        5.60         .89        6.56
     35019      1602621420 10/03/12_1200_1300                      17        23170.24        5.66        5.67        1.59      413.29
     35020      1602621420 10/03/12_1300_1400                     116        23160.33        5.61        5.61         .47       60.44
     35021      1602621420 10/03/12_1400_1500                     107        23161.92        5.61        5.61         .83       65.96
     35022      1602621420 10/03/12_1500_1600                      51        23163.78        5.65        5.65        1.69      138.96
     35023      1602621420 10/03/12_1600_1700                      59        23161.88        5.62        5.62        1.12      120.61
     35024      1602621420 10/03/12_1700_1800                      45        23165.49        5.63        5.63        1.80      158.91
     35025      1602621420 10/03/12_1800_1900                      21        23161.24        5.60        5.60        1.00      340.95
     35040      1602621420 10/04/12_0900_1000                     165        23168.90        5.67        5.68         .79         .96
     35041      1602621420 10/04/12_1000_1100                     387        23160.60        5.62        5.63         .59         .73
     35042      1602621420 10/04/12_1100_1200                      79        23166.33        5.72        5.74        1.32        4.43
     35045      1602621420 10/04/12_1400_1500                      25        23170.24        5.63        5.63        1.48        1.24
     35046      1602621420 10/04/12_1500_1600                      79        23164.59        5.69        5.70         .70        1.20
     35047      1602621420 10/04/12_1600_1700                     125        23162.08        5.74        5.74         .42        1.31
     35048      1602621420 10/04/12_1700_1800                      77        23166.55        5.62        5.62         .62        3.10
     35050      1602621420 10/04/12_1900_2000                       1        23467.00        5.65        8.39     1213.00     7405.00
     35054      1602621420 10/04/12_2300_0000                     366        23166.97        5.50        5.51       11.54       20.58
     35055      1602621420 10/05/12_0000_0100                     634        23198.98        5.53        5.53         .31       12.88
     35056      1602621420 10/05/12_0100_0200                     619        23133.01        5.62        5.65         .26       14.19
     35057      1602621420 10/05/12_0200_0300                     616        23166.56        5.57        5.58         .42       15.28
     35058      1602621420 10/05/12_0300_0400                     634        23166.36        5.54        5.55         .38       15.85
     35059      1602621420 10/05/12_0400_0500                     629        23167.02        5.49        5.49         .62       17.04
     35060      1602621420 10/05/12_0500_0600                     637        23166.48        5.48        5.48         .52       17.82
     35061      1602621420 10/05/12_0600_0700                     626        23166.50        5.52        5.52         .52       19.14
     35062      1602621420 10/05/12_0700_0800                     643        23166.47        5.44        5.45         .47       19.63
     35063      1602621420 10/05/12_0800_0900                     721        23103.33        5.42        5.42         .64       18.40
     35064      1602621420 10/05/12_0900_1000                    1101        23145.79        5.54        5.59         .96       12.91
     35065      1602621420 10/05/12_1000_1100                     396        23163.76        5.55        5.59        1.00       36.46
     35066      1602621420 10/05/12_1100_1200                     153        23255.38        5.55        5.59        1.51      104.88
     35067      1602621420 10/05/12_1200_1300                     417        23160.47        5.43        5.45         .64         .31
     35068      1602621420 10/05/12_1300_1400                     395        23162.84        5.44        5.45         .63         .88
     35069      1602621420 10/05/12_1400_1500                       2        23161.50        5.42        5.64         .50         .50
     35070      1602621420 10/05/12_1500_1600                      10        23160.30        5.93        6.15         .60         .50
     35071      1602621420 10/05/12_1600_1700                      18        23181.94        5.72        5.74        2.11      892.39
     35073      1602621420 10/05/12_1800_1900                       3        23312.00        5.76        5.98        1.00     5370.33
     35091      1602621420 10/06/12_1200_1300                      68        23165.46        5.49        5.51        1.01         .82
     35092      1602621420 10/06/12_1300_1400                     164        23161.13        5.49        5.49         .43         .77
     35093      1602621420 10/06/12_1400_1500                     225        23159.11        5.52        5.54         .32         .82
     35094      1602621420 10/06/12_1500_1600                     109        23162.06        5.52        5.56         .93        2.23
     35095      1602621420 10/06/12_1600_1700                      24        23160.13        5.50        5.52         .58       10.46
     35096      1602621420 10/06/12_1700_1800                      56        23167.84        5.49        5.50         .89        5.68
     35097      1602621420 10/06/12_1800_1900                      71        23165.11        5.49        5.50         .83        5.15
     35098      1602621420 10/06/12_1900_2000                     437        23166.72        5.55        5.56        1.04        1.83
     35099      1602621420 10/06/12_2000_2100                     624        23167.51        5.52        5.52         .89        2.36
     35100      1602621420 10/06/12_2100_2200                     642        23191.14        5.44        5.44         .84        5.75
     35101      1602621420 10/06/12_2200_2300                     640        23167.84        5.45        5.45         .58        6.89
     35102      1602621420 10/06/12_2300_0000                     637        23166.58        5.43        5.43         .48        7.92
     35103      1602621420 10/07/12_0000_0100                     643        23169.93        5.43        5.44         .42        9.17
     35104      1602621420 10/07/12_0100_0200                     627        23169.19        5.49        5.51         .53       10.66
     35105      1602621420 10/07/12_0200_0300                     640        23166.57        5.44        5.44         .32       11.44
     35106      1602621420 10/07/12_0300_0400                     641        23166.40        5.45        5.46         .26       12.42
     35107      1602621420 10/07/12_0400_0500                     582        23181.24        5.43        5.43         .34       16.28
     35113      1602621420 10/07/12_1000_1100                      14        23235.79        5.79        5.86        1.21     1157.21
     35117      1602621420 10/07/12_1400_1500                      56        23158.70        5.41        5.43         .30      288.66
     35118      1602621420 10/07/12_1500_1600                      55        23161.78        5.43        5.45         .87      294.40
     35119      1602621420 10/07/12_1600_1700                      55        23157.67        5.45        5.47         .33      294.53
     35120      1602621420 10/07/12_1700_1800                      51        23178.39        5.40        5.40        1.00      319.88
     35121      1602621420 10/07/12_1800_1900                       7        23161.57        5.40        5.43        1.71     2331.00
     35122      1602621420 10/07/12_1900_2000                      12        23303.58        5.38        5.38        1.58     1375.08
     35136      1602621420 10/08/12_0900_1000                     197        23371.71        5.64        5.89         .79        2.98
     35137      1602621420 10/08/12_1000_1100                      96        23930.24        6.01        6.14         .93        6.52
     35138      1602621420 10/08/12_1100_1200                      81        23923.36        5.90        5.93         .94        8.17
     35139      1602621420 10/08/12_1200_1300                      41        23929.37        5.84        5.85        1.07        1.15
     35140      1602621420 10/08/12_1300_1400                      14        23976.36        6.98        7.15        2.79     1182.71
     35141      1602621420 10/08/12_1400_1500                     109        38385.74        6.24        6.34         .67         .56
79 rows selected.
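For reference, the history above can be reproduced with a query like the following against the AWR views (a sketch; the original script's exact columns are not shown). Note that LIO/exec sits around 23,000 in every snapshot, i.e. the full-scan plan has been in place all along:

```sql
SELECT sn.snap_id,
       st.plan_hash_value,
       st.executions_delta AS execs,
       ROUND(st.buffer_gets_delta  / NULLIF(st.executions_delta, 0), 2) AS lio_per_exec,
       ROUND(st.elapsed_time_delta / NULLIF(st.executions_delta, 0) / 1e6, 2) AS etime_sec_per_exec
  FROM dba_hist_sqlstat st
  JOIN dba_hist_snapshot sn
    ON sn.snap_id = st.snap_id
   AND sn.dbid = st.dbid
   AND sn.instance_number = st.instance_number
 WHERE st.sql_id = 'av6s7vnuqkhh6'
 ORDER BY sn.snap_id;
```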

Similar Messages

  • Count(*) for a select stmt takes more time than executing that sql stmt

    HI
    count(*) for a select statement takes more time than executing that statement itself.
    Executing that particular select statement takes 2.47 minutes, and it uses a /*+ parallel */ hint for faster execution.
    But when I try to find the total number of rows in that query, it takes much longer: the count(col) has been running for almost 2.5 hours.
    Please help me get the row count faster.
    Thanks in advance...

    797525 wrote:
    HI
    count(*) for a select statement takes more time than executing that statement itself.
    Executing that particular select statement takes 2.47 minutes, and it uses a /*+ parallel */ hint for faster execution.
    But when I try to find the total number of rows in that query, it takes much longer: the count(col) has been running for almost 2.5 hours.
    Please help me get the row count faster.
    Thanks in advance...
    That may be because your client displays only the first few records when you run the "SELECT *", whereas "COUNT(*)" has to count every row.
    As already mentioned, please read the FAQ to learn how to post tuning questions.
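If the count really is needed, the same parallel hint used for the SELECT can be applied to it (a sketch with a hypothetical table name and an arbitrary degree of parallelism):

```sql
-- t is a hypothetical alias; 8 is an arbitrary degree of parallelism
SELECT /*+ PARALLEL(t, 8) */ COUNT(*) FROM your_big_table t;
```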

  • Error in sql query as "loop has run more times than expected (Loop Counter went negative)"

    Hello,
    When I run the query as below
    DECLARE @LoopCount int
    SET @LoopCount = (SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL)
    WHILE (
        SELECT Count(*)
        FROM KC_PaymentTransactionIDConversion with (nolock)
        Where KC_Transaction_ID is NULL
        and TransactionYear is NOT NULL
    ) > 0
    BEGIN
        IF @LoopCount < 0
            RAISERROR ('Issue with data in KC_PaymentTransactionIDConversion, loop has run more times than expected (Loop Counter went negative).', -- Message text.
                   16, -- Severity.
                   1); -- State.
    SET @LoopCount = @LoopCount - 1
    end
    I am getting error as "loop has run more times than expected (Loop Counter went negative)"
    Could any one help on this issue ASAP.
    Thanks ,
    Vinay

    Hi Vinay,
    According to your code above, the error message makes sense: as long as the value returned by "SELECT COUNT(*) FROM KC_PaymentTransactionIDConversion with (nolock) WHERE KC_Transaction_ID IS NULL AND TransactionYear IS NOT NULL" is bigger than 0,
    @LoopCount is decreased. Without changing the table data, that value is always bigger than 0, so @LoopCount keeps decreasing until it goes negative and the error is raised.
    To fix this issue with the current information, we should make the following modification:
    Change the code
    WHILE (
    SELECT Count(*)
    FROM KC_PaymentTransactionIDConversion with (nolock)
    Where KC_Transaction_ID is NULL
    and TransactionYear is NOT NULL
    ) > 0
    To
    WHILE @LoopCount > 0
    Besides, since the current loop does no real work, please modify the query based on your requirement.
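Putting the suggested change together, the corrected loop skeleton would look like this (the per-iteration work is omitted in the original, so only a placeholder comment appears here):

```sql
DECLARE @LoopCount int;
SET @LoopCount = (SELECT COUNT(*)
                    FROM KC_PaymentTransactionIDConversion WITH (NOLOCK)
                   WHERE KC_Transaction_ID IS NULL
                     AND TransactionYear IS NOT NULL);
-- Drive the loop off the counter, not off a COUNT(*) that never changes
WHILE @LoopCount > 0
BEGIN
    -- ... process one row/batch here ...
    SET @LoopCount = @LoopCount - 1;
END
```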
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time
    than the same query in the Oracle database?
    Here, in detail, are my settings and what I did, step by step.........
    1.This is the table I created in Oracle datababase
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2.THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLWING WAY..
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the query in the Oracle database?
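One thing worth checking (an assumption, not confirmed in the thread): student has a primary key on id only, so both the TimesTen and Oracle queries must scan all 2.6 million rows to filter on first_name, and TimesTen 7's full scan of a large table can easily be slower than Oracle's buffered scan. An index on the filter column (index name hypothetical) should make the lookup fast in either database:

```sql
-- Hypothetical index: turns the first_name lookup into a range scan
-- instead of a 2.6M-row full table scan in both engines.
CREATE INDEX student_first_name_ix ON student (first_name);
```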

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
Hardware: SunOS 5.10; 24 x 1.8 GHz CPUs (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken Mondrian) I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
where I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No. of Columns     Oracle     TimesTen
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

  • Level1 backup is taking more time than Level0

The level 1 backup is taking more time than the level 0, and I am really frustrated; how could that happen? The database is 6.5 GB. The level 0 took 8 hours, but the level 1 is taking more than 8 hours. Please help me in this regard.

    Ogan Ozdogan wrote:
    Charles,
Enabling block change tracking will indeed make it faster than what he has now. But I think this does not address the question of the OP, unless you are saying that an incremental backup without block change tracking is slower than a level 0 (full) backup?
Thank you in anticipation.
Ogan

Ogan,
I can't explain why a 6.5 GB level 0 RMAN backup would require 8 hours to complete (maybe a very slow destination device connected by 10 Mb/s Ethernet) - I would expect it to complete in a couple of minutes.
An incremental level 1 backup without a block change tracking file could take longer than a level 0 backup. I encountered a well-written description of why that can happen, but I can't seem to locate the source at the moment. The longer run time might be related to the additional code paths required to constantly compare the SCN of each block, and to the variable write rate, which may affect some devices, such as a tape device.
    A paraphrase from the book "Oracle Database 10g RMAN Backup & Recovery"
    "Incremental backups must check the header of each block to discover if it has changed since the last incremental backup - that means an incremental backup may not complete much faster than a full backup."
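If the slow level 1 backup does come from scanning every block header, enabling a block change tracking file is the usual remedy; a minimal sketch (the file path is an example, not taken from the thread):

```sql
-- Enable block change tracking so a level 1 backup reads only changed
-- blocks instead of checking the header of every block in the database.
-- The path below is illustrative; choose one appropriate for your system.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f';

-- Confirm it is enabled:
SELECT status, filename FROM v$block_change_tracking;
```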
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

I use iTunes on a Dell XPS502 with W7/64. In some cases I have problems importing CDs: the sound is very distorted and the import needs much more time than normal. Is there a problem between iTunes and W7/64, or a known hardware issue?

I use iTunes on a Dell XPS502 with W7/64. In some cases I have problems importing CDs: the sound is very distorted and the import needs much more time than normal. Is there a problem between iTunes and W7/64, or a known hardware issue?
Example CD: "Tracy Chapman, Telling Stories" cannot be imported. I have more such negative cases, but in other cases it works fine and the sound is great.
The firmware on the built-in CD/DVD drive DS-6E2SH is the latest version.
What can I do?

hi b noir,
I don't know about virtual drives like the ones you mentioned. In the meantime I have rebooted the XPS and run the iTunes diagnostics again; I think the registry change was not yet in effect before. Now there are different results. They are the same whether the CD plays or not, and the difference overall is as before: it takes more time for iTunes to read the (bad) CD, and in the end there is no good sound. In both cases (working or non-working CD), iTunes diagnostics gives this result:
(the copy from iTunes shows the result for the non-working CD from Tracy Chapman)
    Microsoft Windows 7 x64 Ultimate Edition Service Pack 1 (Build 7601)
    Dell Inc. Dell System XPS L502X
    iTunes 10.3.1.55
    QuickTime 7.6.9
    FairPlay 1.11.17
    Apple Application Support 1.5.2
iPod Updater Library 10.0d2
    CD Driver 2.2.0.1
    CD Driver DLL 2.1.1.1
    Apple Mobile Device 3.4.0.25
Apple Mobile Device Driver 1.55.0.0
    Bonjour 2.0.5.0 (214.3)
    Gracenote SDK 1.8.2.457
    Gracenote MusicID 1.8.2.89
    Gracenote Submit 1.8.2.123
    Gracenote DSP 1.8.2.34
iTunes serial number 00D7B2B00CD25750
The current user is not an administrator.
Current date and time: 2011-06-11 19:33:22.
iTunes is not running in safe mode.
WebKit Accelerated Compositing is enabled.
HDCP is supported.
Core Media is supported.
Video display info
NVIDIA, NVIDIA GeForce GT 540M
Intel Corporation, Intel(R) HD Graphics Family
**** External plug-in info ****
No external plug-ins installed.
iPodService 10.3.1.55 (x64) is currently running.
iTunesHelper 10.3.1.55 is currently running.
Apple Mobile Device service 3.3.0.0 is currently running.
**** CD/DVD drive tests ****
LowerFilters: PxHlpa64 (2.0.0.0),
UpperFilters: GEARAspiWDM (2.2.0.1),
D: PLDS DVDRWBD DS-6E2SH, Rev CD11
Audio CD in drive
Found 11 tracks on the CD, playing time: 42:07 on audio CD
Track 1, start time: 00:02:00
Track 2, start time: 03:59:47
Track 3, start time: 07:19:27
Track 4, start time: 11:31:30
Track 5, start time: 15:31:50
Track 6, start time: 20:07:50
Track 7, start time: 24:27:15
Track 8, start time: 27:49:10
Track 9, start time: 32:41:25
Track 10, start time: 35:29:65
Track 11, start time: 38:38:00
Audio CD read successfully (checked for old firmware).
Drive speed detected successfully.
The drive's CDR speeds are: 4 10 16 24
The drive's CDRW speeds are: 4
The drive's DVDR speeds are: 4
The drive's DVDRW speeds are: 4
After starting the import, it goes slower and slower. If it is helpful, I can send you a sound file with these distortions.
best regards
tcgerd

Delete DML statement takes more time than Update or Insert.

I want to know whether a DELETE statement takes more time than an UPDATE or INSERT DML command. Please help in resolving this doubt.
    Regards.

I agree: the amount of ROLLBACK (called UNDO) and ROLLFORWARD (called REDO) information written by the various statements has a crucial impact on speed.
    I did some simple benchmarks for INSERT, UPDATE and DELETE using a 1 million row simple table. As an alternative to the long UPDATEs and DELETEs, I tested also the usual workarounds (which have only partial applicability).
    Here are the conclusions (quite important in my opinion, but not to be taken as universal truth):
    1. Duration of DML statements for 1 million rows operations (with the size of redo generated):
    --- INSERT: 3.5 sec (redo: 3.8 MB)
    --- UPDATE: 24.8 sec (redo: 240 MB)
    --- DELETE: 26.1 sec (redo: 228 MB)
    2. Replacement of DELETE with TRUNCATE
    --- DELETE: 26.1 sec (rollback: 228 MB)
    --- TRUNCATE: 0.1 sec (rollback: 0.1 MB)
3. Replacement of UPDATE with CREATE new TABLE AS SELECT (followed by DROP old and RENAME new AS old)
    --- UPDATE: 24.8 sec (redo_size: 240 MB)
    --- replacement: 3.5 sec (rollback: 0.3 MB)
    -- * Preparation *
    CREATE TABLE ao AS
        SELECT rownum AS id,
              'N' || rownum AS name
         FROM all_objects, all_objects
        WHERE rownum <= 1000000;
    CREATE OR REPLACE PROCEDURE print_my_stat(p_name IN v$statname.NAME%TYPE) IS
        v_value v$mystat.VALUE%TYPE;
    BEGIN
        SELECT b.VALUE
          INTO v_value
          FROM v$statname a,
               v$mystat   b
         WHERE a.statistic# = b.statistic# AND lower(a.NAME) LIKE lower(p_name);
        dbms_output.put_line('*' || p_name || ': ' || v_value);
    END print_my_stat;
    -- * Test 1: Comparison of INSERT, UPDATE and DELETE *
    CREATE TABLE ao1 AS
        SELECT * FROM ao WHERE 1 = 2;
    exec print_my_stat('redo_size')
    *redo_size= 277,220,544
    INSERT INTO ao1 SELECT * FROM ao;
    1000000 rows inserted
    executed in 3.465 seconds
    exec print_my_stat('redo_size')
    *redo_size= 301,058,852
    commit;
    UPDATE ao1 SET name = 'M' || SUBSTR(name, 2);
    1000000 rows updated
    executed in 24.786 seconds
    exec print_my_stat('redo_size')
    *redo_size= 545,996,280
    commit;
    DELETE FROM ao1;
    1000000 rows deleted
    executed in 26.128 seconds
    exec print_my_stat('redo_size')
    *redo_size= 783,655,196
    commit;
    -- * Test 2:  Replace DELETE with TRUNCATE *
    DROP TABLE ao1;
    CREATE TABLE ao1 AS
        SELECT * FROM ao;
    exec print_my_stat('redo_size')
    *redo_size= 807,554,512
    TRUNCATE TABLE ao1;
    executed in 0.08 seconds
    exec print_my_stat('redo_size')
    *redo_size= 807,616,528
    -- * Test 3:  Replace UPDATE with CREATE TABLE AS SELECT *
    INSERT INTO ao1 SELECT * FROM ao;
    commit;
    exec print_my_stat('redo_size')
    *redo_size= 831,525,556
    CREATE TABLE ao2 AS
        SELECT id, 'M' || SUBSTR(name, 2) name FROM ao1;
    executed in 3.125 seconds
    DROP TABLE ao1;
    executed in 0.32 seconds
    RENAME ao2 TO ao1;
    executed in 0.01 seconds
    exec print_my_stat('redo_size')
    *redo_size= 831,797,608

Delete DML statement takes more time than Update or Insert.

I want to know whether a DELETE statement takes more time than an UPDATE or INSERT DML command. Please help in resolving this doubt.
    Regards.

"i do not get good answers sometimes, so, i ask again." I think Alex's answer to your post was quite complete. If you missed some information, continue in the same post instead of opening a new thread with the same subject and content.
You should be satisfied with the answers you get. I also answered your question about global indexes, and I do think my answer was very complete. You may ask more if you want, but please stop multiposting. It is quite annoying.
    Ok, have a nice day

  • Why import of change request in production takes more time than quality?

    Hello All,
Why does the import of a change request into production take more time than the import into quality?

    Hi jahangeer,
I believe it takes the same time to import a request into both quality and production, as they will be in sync.
Even then, if it takes more time in production, that may depend on the change request.
    Thanks
    Pavan

When I put my Mac to sleep it takes more time than normal (>20 secs). Sometimes, coming back from sleep, the system is not responding (freeze).

When I put my Mac to sleep it takes more time than normal (>20 secs). Sometimes, coming back from sleep, the system is not responding (freeze).

    Perform SMC and NVRAM resets:
    http://support.apple.com/en-us/HT201295
    http://support.apple.com/en-us/HT204063
Then try a safe boot:
    http://support.apple.com/en-us/HT201262
    Any change?
    Ciao.

HT4863 I am getting this message on my iPad more times than not. iOS message. I have 7.5 GB remaining on my iCloud and don't do mass emailing, just replies to emails sent to me. I'm confused by such limits.

I'm getting "exceeding my limit" messages more times than not on my iPad. I feel I'm not sending anything extraordinary: no mass sending, only replies to emails sent to me. I have 7.5 GB remaining on my iCloud account. Is it because I have a lot of email files? Please help, because I'm getting frustrated with this.

    Hello, PriestessJeanann. 
    Thank you for visiting Apple Support Communities.
    Here is an article I would recommend going through when experiencing issues with mail.  The usual fix would be to delete the email account in question and add this account via the preset AOL option.  I would also recommend checking with your email provider for security procedures such as two-step verification as this could cause this issue.
    iOS: Troubleshooting Mail
    http://support.apple.com/kb/ts3899
    Cheers,
    Jason H.

  • Suddenly ODI scheduled executions taking more time than usual.

Hi,
I have ODI packages scheduled for execution.
For some days now they have been taking more time to execute.
Before, they used to take approx. 1 hr 30 mins.
Now they are taking approx. 3 hrs - 3 hrs 15 mins.
And there is no major change in the quantity of data.
My ODI version is
Standalone Edition Version 11.1.1
Build ODI_11.1.1.3.0_GENERIC_100623.1635
The ODI packages mainly use Oracle as SOURCE and TARGET DB.
What should I check to find the reasons for the sudden increase in execution time?
Any pointers regarding this would be appreciated.
Thanks,
Mahesh

    Mahesh,
Use some repository queries to retrieve the session task timings and compare your slow execution to a previous acceptable execution, then look for the biggest changes. This will highlight where you are slowing down; then it's off to tune the item accordingly.
See here for some example reports. You might need to tweak them for your current repository version, but I don't think the table structures have changed that much:
    http://rnm1978.wordpress.com/2010/11/03/analysing-odi-batch-performance/
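As a starting point, a repository query along these lines compares per-task durations between a slow session and a previously acceptable one. The SNP_SESS_TASK_LOG table and column names are assumptions based on the ODI 11g work repository layout; verify them against your repository version:

```sql
-- Per-task durations for two sessions, slowest tasks first.
-- Bind :slow_session_no and :fast_session_no to the two session
-- numbers shown in ODI Operator.
SELECT sess_no,
       scen_task_no,
       task_beg,
       task_end,
       task_dur        -- duration in seconds
FROM   snp_sess_task_log
WHERE  sess_no IN (:slow_session_no, :fast_session_no)
ORDER  BY task_dur DESC;
```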

  • RMAN backup taking more time than usual suddenly

    Hi All,
We are using an 11.1.0.7 database. We regularly take a full level 0 incremental backup, which generally takes about 4:30 hours to complete, but for the last 2-3 days it has been taking 6 hours or more. We did not make any parameter or script changes in the database.
    Below are the details of rman :
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name OLAP are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'f:\backup
    CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
    CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 2;
    CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 4 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 5 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 6 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 7 DEVICE TYPE DISK FORMAT 'e:\backup\OLAP\OLAP_full_%u
    CONFIGURE CHANNEL 8 DEVICE TYPE DISK FORMAT 'f:\backup\OLAP\OLAP_full_%u
    CONFIGURE MAXSETSIZE TO UNLIMITED;
    CONFIGURE ENCRYPTION FOR DATABASE OFF;
    CONFIGURE ENCRYPTION ALGORITHM 'AES128';
    CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'f:\backup\OLAP\SNCFOLAP.ORA';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'F:\BACKUP\OLAP\SNCFOLAP.ORA';
    =====================================================
Please help me, as the extra time makes my scheduled task overrun.
    Thanks
    Sam

sam wrote:
Hi All,
We are using an 11.1.0.7 database. We regularly take a full level 0 incremental backup, which generally takes about 4:30 hours to complete, but for the last 2-3 days it has been taking 6 hours or more. We did not make any parameter or script changes in the database.
This could be due to a change in server load;
please compare the server load (CPU/memory) between the two periods.
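To quantify where the extra time went, one quick first step (assuming access to the catalog views available in 11g) is to compare recent jobs in V$RMAN_BACKUP_JOB_DETAILS:

```sql
-- Duration and data volume of recent RMAN jobs. A jump in input_gb with
-- steady throughput points to more data being read; steady volumes with
-- a longer elapsed time point to the server or the destination devices.
SELECT start_time,
       end_time,
       ROUND(elapsed_seconds / 3600, 2)            AS hours,
       input_type,
       status,
       ROUND(input_bytes  / 1024 / 1024 / 1024, 1) AS input_gb,
       ROUND(output_bytes / 1024 / 1024 / 1024, 1) AS output_gb
FROM   v$rman_backup_job_details
ORDER  BY start_time DESC;
```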

  • Why SQL2 took much more time than SQL1?

    I run these 2 SQLs sequencely.
    --- SQL1: It took 245 seconds.
    create table PORTAL_DAYLOG_100118_bak
    as
    select * from PORTAL_DAYLOG_100118;
    --- SQL2: It took 3105 seconds.
    create table PORTAL_DAYLOG_100121_bak
    as
    select * from PORTAL_DAYLOG_100121;
It is really strange that SQL2 took almost 13 times as long as SQL1, with nearly the same data volume and the same data structure in the same tablespace.
Could anyone tell me the reason? Or how could I find out why?
    Here is more detail info. for my case,
    --- Server:
    [@wapbi.no.sohu.com ~]$ uname -a
    Linux test 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
    --- DB
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    --- Tablespace:
    CREATE TABLESPACE PORTAL DATAFILE
      '/data/oradata/wapbi/portal01.dbf' SIZE 19456M AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED,
      '/data/oradata/wapbi/portal02.dbf' SIZE 17408M AUTOEXTEND ON NEXT 1024M MAXSIZE UNLIMITED
    LOGGING
    ONLINE
    PERMANENT
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    BLOCKSIZE 8K
    SEGMENT SPACE MANAGEMENT AUTO
    FLASHBACK ON;
    --- Tables:
    SQL> select table_name,num_rows,blocks,avg_row_len from dba_tables
      2  where table_name in ('PORTAL_DAYLOG_100118','PORTAL_DAYLOG_100121');
    TABLE_NAME                       NUM_ROWS     BLOCKS AVG_ROW_LEN
    PORTAL_DAYLOG_100118             20808536     269760          85
    PORTAL_DAYLOG_100121             33747911     440512          86
CREATE TABLE PORTAL_DAYLOG_100118
(
  IP           VARCHAR2(20 BYTE),
  NODEPATH     VARCHAR2(50 BYTE),
  PG           VARCHAR2(20 BYTE),
  PAGETYPE     INTEGER,
  CLK          VARCHAR2(20 BYTE),
  FR           VARCHAR2(20 BYTE),
  PHID         INTEGER,
  ANONYMOUSID  VARCHAR2(50 BYTE),
  USID         VARCHAR2(50 BYTE),
  PASSPORT     VARCHAR2(200 BYTE),
  M_TIME       CHAR(4 BYTE)                     NOT NULL,
  M_DATE       CHAR(6 BYTE)                     NOT NULL,
  LOGDATE      DATE
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE TABLE PORTAL_DAYLOG_100121
(
  IP           VARCHAR2(20 BYTE),
  NODEPATH     VARCHAR2(50 BYTE),
  PG           VARCHAR2(20 BYTE),
  PAGETYPE     INTEGER,
  CLK          VARCHAR2(20 BYTE),
  FR           VARCHAR2(20 BYTE),
  PHID         INTEGER,
  ANONYMOUSID  VARCHAR2(50 BYTE),
  USID         VARCHAR2(50 BYTE),
  PASSPORT     VARCHAR2(200 BYTE),
  M_TIME       CHAR(4 BYTE)                     NOT NULL,
  M_DATE       CHAR(6 BYTE)                     NOT NULL,
  LOGDATE      DATE
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
Any comment will be really appreciated!!!
    Satine

    Hey Anurag,
    Thank you for your help!
    Here it is.
    SQL1:
    create table portal.PORTAL_DAYLOG_100118_TEST
    as
    select * from portal.PORTAL_DAYLOG_100118
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1    374.69     519.05     264982     265815     274858    20808536
    Fetch        0      0.00       0.00          0          0          0           0
    total        2    374.69     519.05     264982     265815     274858    20808536
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS
    Rows     Row Source Operation
          0  LOAD AS SELECT  (cr=268138 pr=264982 pw=264413 time=0 us)
20808536   TABLE ACCESS FULL PORTAL_DAYLOG_100118 (cr=265175 pr=264981 pw=0 time=45792172 us cost=73478 size=1768725560 card=20808536)
SQL2:
    create table portal.PORTAL_DAYLOG_100121_TEST
    as
    select * from portal.PORTAL_DAYLOG_100121
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1   1465.72    1753.35     290959     291904     300738    22753695
    Fetch        0      0.00       0.00          0          0          0           0
    total        2   1465.72    1753.35     290959     291904     300738    22753695
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS
    Rows     Row Source Operation
          0  LOAD AS SELECT  (cr=295377 pr=290960 pw=289966 time=0 us)
22753695   TABLE ACCESS FULL PORTAL_DAYLOG_100121 (cr=291255 pr=290958 pw=0 time=56167952 us cost=80752 size=1956817770 card=22753695)
Best wishes,
    Satine
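The trace above shows SQL2 doing only modestly more physical I/O than SQL1 but roughly four times the CPU, so it may be worth ruling out differences in how the two source tables are physically stored, e.g. by comparing their allocated segments. A sketch, assuming the tables live in the PORTAL schema:

```sql
-- Compare the actual allocated space and extent counts of the two
-- source tables; a segment much larger than NUM_ROWS would suggest
-- (e.g. from row migration or past deletes) means more blocks to scan.
SELECT segment_name,
       blocks,
       extents,
       ROUND(bytes / 1024 / 1024) AS mb
FROM   dba_segments
WHERE  owner = 'PORTAL'
AND    segment_name IN ('PORTAL_DAYLOG_100118', 'PORTAL_DAYLOG_100121');
```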

  • OR is taking much more time than UNION

hi gems..
I have written a query using the UNION clause and it took 12 seconds to return results.
Then I wrote the same query using the OR operator, and it took 78 seconds to return the result set.
The tables referred to by this query have no indexes.
The cost plan for the query with OR is also much lower than that with UNION.
Please suggest why OR is taking more time.
thanks in advance

    Here's a ridiculously simple example.  (these tables don't even have any rows in them)
    If you had separate indexes on col1 and col2, the optimizer might use indexes in the union but not in the or statement:
    Which is faster will depend on the usual list of things.
    Of course, the union also requires a sort operation.
    SQL> create table table1
      2  (col1 number, col2 number, col3 number, col4 number);
    Table created.
    SQL> create index t1_idx1 on table1(col1);
    Index created.
    SQL> create index t1_idx2 on table1(col2);
    Index created.
    SQL> explain plan for
      2  select col1, col2, col3, col4
      3  from table1
  4  where col1 >= 123
      5  or col2 <= 456;
    Explained.
    SQL> @xp
    | Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |        |     1 |    52 |     2   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| TABLE1 |     1 |    52 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1">=123 OR "COL2"<=456)
    SQL> explain plan for
      2  select col1, col2, col3, col4
      3  from table1
      4  where col1 >= 123
      5  union
      6  select col1, col2, col3, col4
      7  from table1
      8  where col2 <= 456;
    Explained.
    SQL> @xp
    | Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |         |     2 |   104 |     4  (75)| 00:00:01 |
    |   1 |  SORT UNIQUE                  |         |     2 |   104 |     4  (75)| 00:00:01 |
    |   2 |   UNION-ALL                   |         |       |       |            |          |
    |   3 |    TABLE ACCESS BY INDEX ROWID| TABLE1  |     1 |    52 |     1   (0)| 00:00:01 |
    |*  4 |     INDEX RANGE SCAN          | T1_IDX1 |     1 |       |     1   (0)| 00:00:01 |
    |   5 |    TABLE ACCESS BY INDEX ROWID| TABLE1  |     1 |    52 |     1   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN          | T1_IDX2 |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("COL1">=123)
       6 - access("COL2"<=456)
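A follow-up worth noting: the SORT UNIQUE step in the UNION plan exists only to eliminate duplicates across the two branches. If the branches can be made disjoint, UNION ALL keeps the per-branch index access paths while skipping the sort. A hedged sketch against the same demo TABLE1 (assuming col1 is NOT NULL and there are no duplicate rows within a branch, so the result matches the UNION):

```sql
-- Sketch only: reuses TABLE1, t1_idx1 and t1_idx2 from the demo above.
-- UNION ALL avoids the SORT UNIQUE step; each branch can still range-scan
-- its own index.
select col1, col2, col3, col4
  from table1
 where col1 >= 123
union all
select col1, col2, col3, col4
  from table1
 where col2 <= 456
   and col1 < 123;   -- extra predicate keeps the branches disjoint, so no
                     -- cross-branch duplicates need to be removed
```

The same disjointness trick is what the optimizer itself applies when it chooses a CONCATENATION (OR-expansion) plan for an OR predicate, adding LNNVL filters to the later branches.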
