Long-running update statement needs a performance improvement
Hi,
I have the following update statement which runs for over 3 hours and updates 215 million rows. Is there any way I can rewrite it so it performs better?
UPDATE TABLE1 v
SET closs = (SELECT MIN(slloss)
FROM TABLE2 l
WHERE polnum = slpoln
AND polren = slrenn
AND polseq = slseqn
AND vehnum = slvehn
AND linecvg = sllcvg);
Here is the execution plan:
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | UPDATE STATEMENT | | 214M| 4291M| 2344K (2)|
| 1 | UPDATE | TABLE1 | | | |
| 2 | TABLE ACCESS FULL | TABLE1 | 214M| 4291M| 2344K (2)|
| 3 | SORT AGGREGATE | | 1 | 21 | |
| 4 | TABLE ACCESS BY INDEX ROWID| TABLE2 | 1 | 21 | 5 (0)|
| 5 | INDEX RANGE SCAN | TABLE2_N2 | 2 | | 3 (0)|
----------------------------------------------------------------------------------
Here are the create table statements for TABLE1 (215 million rows) and TABLE2 (1 million rows):
CREATE TABLE TABLE2 (SLCLMN VARCHAR2(11 byte),
SLFEAT NUMBER(2), SLPOLN NUMBER(9), SLRENN NUMBER(2),
SLSEQN NUMBER(2), SLVEHN NUMBER(2), SLDRVN NUMBER(2),
SLCVCD VARCHAR2(6 byte), SLLCVG NUMBER(4), SLSABB
VARCHAR2(2 byte), SLPRCD VARCHAR2(3 byte), SLRRDT
NUMBER(8), SLAYCD NUMBER(7), SLCITY VARCHAR2(28 byte),
SLZIP5 NUMBER(5), SLCEDING VARCHAR2(1 byte), SLCEDELOSS
VARCHAR2(1 byte), SLRISKTYPE VARCHAR2(1 byte), SLVEHDESIG
VARCHAR2(1 byte))
TABLESPACE S_DATA PCTFREE 10 PCTUSED 0 INITRANS 1
MAXTRANS 255
STORAGE ( INITIAL 106496K NEXT 0K MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0)
NOLOGGING
MONITORING;
CREATE TABLE TABLE1 (POLNUM NUMBER(9) NOT NULL,
POLREN NUMBER(2) NOT NULL, POLSEQ NUMBER(2) NOT NULL,
VEHNUM NUMBER(2) NOT NULL, CVGCODE VARCHAR2(8 byte) NOT
NULL, LINECVG NUMBER(4), MAINVEH CHAR(1 byte), MAINCVG
CHAR(1 byte), CVGLIMIT VARCHAR2(13 byte), CVGDED
VARCHAR2(10 byte), FULLCVG CHAR(1 byte), CVGGRP CHAR(4
byte), CYCVG CHAR(1 byte), POLTYPE CHAR(1 byte),
CHANNEL CHAR(2 byte), UWTIER VARCHAR2(6 byte), SUBTIER
VARCHAR2(6 byte), THITIER VARCHAR2(3 byte), COMPGRP
VARCHAR2(8 byte), PRODGRP VARCHAR2(6 byte), UWSYS
VARCHAR2(6 byte), BRAND VARCHAR2(8 byte), COMP NUMBER(2),
STATE CHAR(2 byte), PROD CHAR(3 byte), RRDATE DATE,
STATENUM NUMBER(2), EFT_BP CHAR(1 byte), AGYCODE
NUMBER(7), AGYSUB CHAR(3 byte), AGYCLASS CHAR(1 byte),
CLMAGYCODE NUMBER(7), AGYALTCODE VARCHAR2(25 byte),
AGYRELATION VARCHAR2(10 byte), RATECITY VARCHAR2(28 byte),
RATEZIP NUMBER(5), RATETERR NUMBER, CURTERR NUMBER,
CURRRPROD CHAR(6 byte), CURRRDATE DATE, RATESYMB NUMBER,
SYMBTYPE CHAR(1 byte), CVGTERR NUMBER(3), CVGSYMB
NUMBER(3), VEHTERR NUMBER, VEHYEAR NUMBER, VEHMAKE
VARCHAR2(6 byte), VEHMODEL VARCHAR2(10 byte), VEHSUBMOD
VARCHAR2(10 byte), VEHBODY VARCHAR2(6 byte), VEHVIN
VARCHAR2(10 byte), VEHAGE NUMBER(3), VEHSYMB NUMBER,
DRVNUM NUMBER, DUMMYDRV CHAR(1 byte), DRVAGE NUMBER(3),
DRVSEX VARCHAR2(1 byte), DRVMS VARCHAR2(1 byte), DRVPTS
NUMBER(3), DRVPTSDD NUMBER(3), DRVGRP CHAR(7 byte),
DRVSR22 VARCHAR2(1 byte), DRVVTIER CHAR(2 byte),
BUSUSESUR CHAR(1 byte), EXCLDRVSUR CHAR(1 byte),
CSCODED NUMBER(5), CSACTUAL NUMBER(5), CSOVERRD
NUMBER(5), ANNMILES NUMBER(6), DLORIGDATE DATE,
DLLASTDATE DATE, DLMONTHS NUMBER(6), MATUREDSC CHAR(1
byte), PERSISTDSC CHAR(1 byte), ANNUALMILES_RANGE
VARCHAR2(25 byte), CEDEDLOSS VARCHAR2(1 byte), CEDEDPOL
VARCHAR2(1 byte), CEDEDCVG VARCHAR2(1 byte),
CONSTRAINT TABLE1_PK PRIMARY KEY(POLNUM, POLREN,
POLSEQ, VEHNUM, CVGCODE)
USING INDEX
TABLESPACE V_INDEX
STORAGE ( INITIAL 3874816K NEXT 0K MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0) PCTFREE 10 INITRANS 2 MAXTRANS 255)
TABLESPACE U_DATA PCTFREE 10 PCTUSED 0 INITRANS 1
MAXTRANS 255
STORAGE ( INITIAL 4194304K NEXT 0K MINEXTENTS 1 MAXEXTENTS
2147483645 PCTINCREASE 0)
NOLOGGING
MONITORING;
Thank you very much!
user6053424 wrote:
Hi,
I have the following update statement which runs for over 3 hours and updates 215 million rows, is there anyway I can rewrite it so it performs better?
UPDATE TABLE1 v
SET closs = (SELECT MIN(slloss)
FROM TABLE2 l
WHERE polnum = slpoln
AND polren = slrenn
AND polseq = slseqn
AND vehnum = slvehn
AND linecvg = sllcvg);
Are you trying to perform a correlated update? If so, you can perform something similar to;
Sample data;
create table t1 as (
select 1 id, 10 val from dual union all
select 1 id, 10 val from dual union all
select 2 id, 10 val from dual union all
select 2 id, 10 val from dual union all
select 2 id, 10 val from dual);
Table created
create table t2 as (
select 1 id, 100 val from dual union all
select 1 id, 200 val from dual union all
select 2 id, 500 val from dual union all
select 2 id, 600 val from dual);
Table created
The MERGE will update each row based on the maximum for each ID;
merge into t1
using (select id, max(val) max_val
from t2
group by id) subq
on (t1.id = subq.id)
when matched then update
set t1.val = subq.max_val;
Done
select * from t1;
ID VAL
1 200
1 200
2 600
2 600
2 600
If you want all rows updated to the same value then remove the ID grouping from the subquery and from the ON clause.
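Applied to the statement in the original post, the same pattern would look roughly like this (a sketch only; it assumes every TABLE1 row has a matching TABLE2 group — a plain correlated UPDATE would set closs to NULL where no match exists, which MERGE ... WHEN MATCHED will not do, so verify that behaviour is acceptable):

```sql
-- Sketch: pre-aggregate TABLE2 once, then join, instead of running
-- the correlated MIN() subquery once per TABLE1 row.
MERGE INTO TABLE1 v
USING (SELECT slpoln, slrenn, slseqn, slvehn, sllcvg,
              MIN(slloss) AS min_loss
         FROM TABLE2
        GROUP BY slpoln, slrenn, slseqn, slvehn, sllcvg) l
ON (    v.polnum  = l.slpoln
    AND v.polren  = l.slrenn
    AND v.polseq  = l.slseqn
    AND v.vehnum  = l.slvehn
    AND v.linecvg = l.sllcvg)
WHEN MATCHED THEN UPDATE
  SET v.closs = l.min_loss;
```

Because the source is aggregated on exactly the join columns, each TABLE1 row matches at most one source row, which MERGE requires.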
Similar Messages
-
I have an update statement that runs for more than 30 minutes in Version 1.1, but only 30 seconds in Version 1.0 and SQL*Plus.
SQL-Developer is running on WinXP
Oracle is Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
and this was the statement (which updates 2830 rows):
update hist_person p1
set letzte_vermittlung = (select letzte_vermittlung from hist_person p2
where p2.datum = add_months(date '2006-01-01',-1)
and p2.p_nr = p1.p_nr)
where p1.datum = date '2006-01-01'
and p1.p_nr in (select aktmon.p_nr
from hist_person vormon, hist_person aktmon
where aktmon.datum = date '2006-01-01'
and vormon.datum = add_months(date '2006-01-01',-1)
and vormon.p_nr = aktmon.p_nr
and vormon.letzte_vermittlung > nvl(aktmon.letzte_vermittlung, date '1900-01-01'));
Mmm, read up on the "NLS_COMP set to ANSI" threads.
Try setting NLS_COMP to binary for the session, then run your update and see if that fixes the issue. If so, the forthcoming patch will probably fix the issue, else get back here...
K. -
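For reference, the session-level setting suggested above can be applied like this (a sketch; run it in the same SQL Developer session before re-running the UPDATE):

```sql
-- Check the current comparison semantics first
SELECT parameter, value
  FROM nls_session_parameters
 WHERE parameter IN ('NLS_COMP', 'NLS_SORT');

-- Force binary comparison for this session, then re-run the update
ALTER SESSION SET NLS_COMP = BINARY;
```

With NLS_COMP set to ANSI, string comparisons can become linguistic and stop using regular b-tree indexes, which is consistent with the slowdown described here.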
Hello,
Can anyone explain why this simple update statement against a single partition in a large table (~300,000 rows, ~1GB in size for the single partition) is taking a very long time? The most unusual thing I see in the stats is the HUGE number of buffer gets.
Table def is below, and there are 25 local b-tree indexes on this table as well (too many to paste here), each on a single column and residing in a separate tablespace from the table.
I don't have a trace and will not be able to get one. Any theories as to the high buffer gets? A simple table scan (which occurs many times in our batch) against a single partition usually takes between 30-60 seconds. Sometimes the table scan goes haywire and I see these huge buffer gets, somewhat higher disk reads, and a much longer execution time. There are fewer than 3 million rows in the partition being acted on, and I am only updating a couple of columns; I simply cannot understand why Oracle would be getting a block (whether it was in cache already or not) over 1 BILLION times to perform this update.
This is Oracle 11g 11.1.0.7 on RHL 5.3, a 2-node RAC, but all processing is on instance 1 and instance 2 is shut down at this point to avoid any possibility of cache fusion issues.
Elapsed
SQL Id Time (ms)
0np3ccxhf9jmc 1.79E+07
UPDATE ESRODS.EXTR_CMG_TRANSACTION_HISTORY SET RULE_ID_2 = '9285', REPORT_CODE =
'MMKT' WHERE EDIT_CUSIP_NUM = '19766G868' AND PROCESS_DATE BETWEEN '01-JAN-201
0' AND '31-JAN-2010' AND RULE_ID_2 IS NULL
Plan Statistics
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 1.79E+07 17,915,656.1 2.3
CPU Time (ms) 1.18E+07 11,837,756.4 2.5
Executions 1 N/A N/A
Buffer Gets 1.09E+09 1.089168E+09 3.3
Disk Reads 246,267 246,267.0 0.0
Parse Calls 1 1.0 0.0
Rows 326,843 326,843.0 N/A
User I/O Wait Time (ms) 172,891 N/A N/A
Cluster Wait Time (ms) 0 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 504,047 N/A N/A
Invalidations 0 N/A N/A
Version Count 21 N/A N/A
Sharable Mem(KB) 745 N/A N/A
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | UPDATE STATEMENT | | | | 36029 (100)| | | |
| 1 | UPDATE | EXTR_CMG_TRANSACTION_HISTORY | | | | | | |
| 2 | PARTITION RANGE SINGLE| | 305K| 21M| 36029 (1)| 00:05:16 | 62 | 62 |
| 3 | TABLE ACCESS FULL | EXTR_CMG_TRANSACTION_HISTORY | 305K| 21M| 36029 (1)| 00:05:16 | 62 | 62 |
Full SQL Text
SQL ID SQL Text
0np3ccxhf9jm UPDATE ESRODS.EXTR_CMG_TRANSACTION_HISTORY SET RULE_ID_2 = '9285'
', REPORT_CODE = 'MMKT' WHERE EDIT_CUSIP_NUM = '19766G868' AND PR
OCESS_DATE BETWEEN '01-JAN-2010' AND '31-JAN-2010' AND RULE_ID_2
IS NULL
Table def:
CREATE TABLE EXTR_CMG_TRANSACTION_HISTORY
TRANSACTION_ID NUMBER(15) NOT NULL,
CREATE_DATE DATE,
CREATE_USER VARCHAR2(80 BYTE),
MODIFY_DATE DATE,
MODIFY_USER VARCHAR2(80 BYTE),
EXCEPTION_FLG CHAR(1 BYTE),
SOURCE_SYSTEM VARCHAR2(20 BYTE),
SOURCE_TYPE VARCHAR2(32 BYTE),
TRANSACTION_STATUS VARCHAR2(8 BYTE),
FUND_ID NUMBER(15),
FUND_UNIT_ID NUMBER(15),
FROM_FUND_ID NUMBER(15),
FROM_FUND_UNIT_ID NUMBER(15),
EXECUTING_DEALER_ID NUMBER(15),
EXECUTING_BRANCH_ID NUMBER(15),
CLEARING_DEALER_ID NUMBER(15),
CLEARING_BRANCH_ID NUMBER(15),
BRANCH_PERSON_MAP_ID NUMBER(15),
BP_REP_MAP_ID NUMBER(15),
REP_ID NUMBER(15),
PERSON_ID NUMBER(15),
TPA_DEALER_ID NUMBER(15),
TRUST_DEALER_ID NUMBER(15),
TRANS_CODE_ID NUMBER(15),
EDIT_DEALER_NUM VARCHAR2(30 BYTE),
EDIT_BRANCH_NUM VARCHAR2(50 BYTE),
EDIT_REP_NUM VARCHAR2(100 BYTE),
EDIT_CUSIP_NUM VARCHAR2(9 BYTE),
TRANS_TYPE VARCHAR2(80 BYTE),
TRANSACTION_CD VARCHAR2(8 BYTE),
TRANSACTION_SUFFIX VARCHAR2(8 BYTE),
SHARE_BALANCE_IND VARCHAR2(2 BYTE),
PROCESS_DATE DATE,
BATCH_DATE DATE,
SUPER_SHEET_DATE DATE,
CONFIRM_DATE DATE,
TRADE_DATE DATE,
SETTLE_DATE DATE,
PAYMENT_DATE DATE,
AM_PM_CD VARCHAR2(2 BYTE),
TRUST_DEALER_NUM VARCHAR2(7 BYTE),
TPA_DEALER_NUM VARCHAR2(7 BYTE),
TRUST_COMPANY_NUM VARCHAR2(10 BYTE),
DEALER_NUM VARCHAR2(25 BYTE),
BRANCH_NUM VARCHAR2(50 BYTE),
REP_NUM VARCHAR2(100 BYTE),
DEALER_NAME VARCHAR2(80 BYTE),
REP_NAME VARCHAR2(80 BYTE),
SOCIAL_SECURITY_NUMBER VARCHAR2(9 BYTE),
ACCT_NUMBER_CD VARCHAR2(6 BYTE),
ACCT_NUMBER VARCHAR2(20 BYTE),
ACCT_SHORT_NAME VARCHAR2(80 BYTE),
FROM_TO_ACCT_NUM VARCHAR2(20 BYTE),
EXTERNAL_ACCT_NUM VARCHAR2(14 BYTE),
NAV_ACCT VARCHAR2(1 BYTE),
MANAGEMENT_CD VARCHAR2(16 BYTE),
PRODUCT VARCHAR2(80 BYTE),
SUBSET_PRODUCT VARCHAR2(3 BYTE),
FUND_NAME VARCHAR2(80 BYTE),
FUND_NUM VARCHAR2(7 BYTE),
FUND_CUSIP_NUM VARCHAR2(9 BYTE),
TICKER_SYMBOL VARCHAR2(10 BYTE),
APL_FUND_TYPE VARCHAR2(10 BYTE),
LOAD_INDICATOR VARCHAR2(50 BYTE),
FROM_TO_FUND_NUM VARCHAR2(7 BYTE),
FROM_TO_FUND_CUSIP_NUM VARCHAR2(9 BYTE),
CUM_DISCNT_NUM VARCHAR2(9 BYTE),
NSCC_CONTROL_CD VARCHAR2(15 BYTE),
NSCC_NAV_REASON_CD VARCHAR2(1 BYTE),
BATCH_NUMBER VARCHAR2(20 BYTE),
ORDER_NUMBER VARCHAR2(16 BYTE),
CONFIRM_NUMBER VARCHAR2(9 BYTE),
AS_OF_REASON_CODE VARCHAR2(3 BYTE),
SOCIAL_CODE VARCHAR2(3 BYTE),
NETWORK_MATRIX_LEVEL VARCHAR2(1 BYTE),
SHARE_PRICE NUMBER(15,4),
GROSS_AMOUNT NUMBER(15,2),
GROSS_SHARES NUMBER(15,4),
NET_AMOUNT NUMBER(15,2),
NET_SHARES NUMBER(15,4),
DEALER_COMMISSION_CODE CHAR(1 BYTE),
DEALER_COMMISSION_AMOUNT NUMBER(15,2),
UNDRWRT_COMMISSION_CODE CHAR(1 BYTE),
UNDRWRT_COMMISSION_AMOUNT NUMBER(15,2),
DISCOUNT_CATEGORY VARCHAR2(2 BYTE),
LOI_NUMBER VARCHAR2(9 BYTE),
RULE_ID_1 NUMBER(15),
RULE_ID_2 NUMBER(15),
OMNIBUS_FLG CHAR(1 BYTE),
MFA_FLG CHAR(1 BYTE),
REPORT_CODE VARCHAR2(80 BYTE),
TERRITORY_ADDR_CODE VARCHAR2(3 BYTE),
ADDRESS_ID NUMBER(15),
POSTAL_CODE_ID NUMBER(15),
CITY VARCHAR2(50 BYTE),
STATE_PROVINCE_CODE VARCHAR2(5 BYTE),
POSTAL_CODE VARCHAR2(12 BYTE),
COUNTRY_CODE VARCHAR2(5 BYTE),
LOB_ID NUMBER(15),
CHANNEL_ID NUMBER(15),
REGION_ID NUMBER(15),
TERRITORY_ID NUMBER(15),
EXCEPTION_NOTES VARCHAR2(4000 BYTE),
SOURCE_RECORD_ID NUMBER(15),
LOAD_ID NUMBER(15),
BIN VARCHAR2(20),
SHARE_CLASS VARCHAR2(50),
ACCT_PROD_ID NUMBER,
ORIGINAL_FUND_NUM VARCHAR2(7),
ORIGINAL_FROM_TO_FUND_NUM VARCHAR2(7),
ACCT_PROD_REGISTRATION_ID NUMBER,
REGISTRATION_LINE_1 VARCHAR2(60),
REGISTRATION_LINE_2 VARCHAR2(60),
REGISTRATION_LINE_3 VARCHAR2(60),
REGISTRATION_LINE_4 VARCHAR2(60),
REGISTRATION_LINE_5 VARCHAR2(60),
REGISTRATION_LINE_6 VARCHAR2(35),
REGISTRATION_LINE_7 VARCHAR2(35),
SECONDARY_LOB_ID NUMBER(15,0),
SECONDARY_CHANNEL_ID NUMBER(15,0),
SECONDARY_REGION_ID NUMBER(15,0),
SECONDARY_TERRITORY_ID NUMBER(15,0),
ACCOUNT_OVERRIDE_PRIORITY_CODE NUMBER(3,0)
TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_DAT
PCTUSED 0
PCTFREE 25
INITRANS 1
MAXTRANS 255
NOLOGGING
PARTITION BY RANGE (PROCESS_DATE)
PARTITION P_ESRODS_EXTR_CMG_TRAN_PRE2005 VALUES LESS THAN (TO_DATE(' 2005-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
COMPRESS
TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_DAT
PCTFREE 25
INITRANS 100
MAXTRANS 255
STORAGE (
INITIAL 5M
NEXT 5M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
PARTITION P_ESRODS_EXTR_CMG_TRAN_201105 VALUES LESS THAN (TO_DATE(' 2011-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
COMPRESS
TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_DAT
PCTFREE 25
INITRANS 100
MAXTRANS 255
STORAGE (
INITIAL 5M
NEXT 5M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
PARTITION P_ESRODS_EXTR_CMG_TRAN_201106 VALUES LESS THAN (TO_DATE(' 2011-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
COMPRESS
TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_DAT
PCTFREE 25
INITRANS 100
MAXTRANS 255
STORAGE (
INITIAL 5M
NEXT 5M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
NOCACHE
NOPARALLEL;
ALTER TABLE EXTR_CMG_TRANSACTION_HISTORY ADD (
CONSTRAINT PK_EXTR_CMG_TRANSACTION_HIST PRIMARY KEY (TRANSACTION_ID)
USING INDEX
TABLESPACE P_ESRODS_EXTR_TRANS_LARGE_IDX
PCTFREE 25
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 5M
NEXT 5M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
Edited by: 855802 on May 1, 2011 6:46 AM
855802 wrote:
You cannot bypass redo logging with an UPDATE statement; there are only a handful of operations for which redo logging can be skipped. The table is created NOLOGGING, but still, an update of this many rows should not be affected that badly by the redo writes. I agree that skipping them would be a way to speed it up.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/clauses005.htm#i999782
NOLOGGING is supported in only a subset of the locations that support LOGGING. Only the following operations support the NOLOGGING mode:
DML:
Direct-path INSERT (serial or parallel) resulting either from an INSERT or a MERGE statement. NOLOGGING is not applicable to any UPDATE operations resulting from the MERGE statement.
Direct Loader (SQL*Loader)
DDL:
CREATE TABLE ... AS SELECT
CREATE TABLE ... LOB_storage_clause ... LOB_parameters ... NOCACHE | CACHE READS
ALTER TABLE ... LOB_storage_clause ... LOB_parameters ... NOCACHE | CACHE READS (to specify logging of newly created LOB columns)
ALTER TABLE ... modify_LOB_storage_clause ... modify_LOB_parameters ... NOCACHE | CACHE READS (to change logging of existing LOB columns)
ALTER TABLE ... MOVE
ALTER TABLE ... (all partition operations that involve data movement)
ALTER TABLE ... ADD PARTITION (hash partition only)
ALTER TABLE ... MERGE PARTITIONS
ALTER TABLE ... SPLIT PARTITION
ALTER TABLE ... MOVE PARTITION
ALTER TABLE ... MODIFY PARTITION ... ADD SUBPARTITION
ALTER TABLE ... MODIFY PARTITION ... COALESCE SUBPARTITION
CREATE INDEX
ALTER INDEX ... REBUILD
ALTER INDEX ... REBUILD [SUB]PARTITION
ALTER INDEX ... SPLIT PARTITION
Yes, I was thinking along the lines of my previous post, using CREATE TABLE AS SELECT... But if it's on the application side, then you need to think about another solution... -
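The CREATE TABLE AS SELECT approach mentioned above could be sketched like this (an illustration only; the column list is abbreviated, the names would need to match the real schema, and the swap requires an outage window plus a backup afterwards, since NOLOGGING data is unrecoverable from the redo stream):

```sql
-- Build a replacement table with direct-path, minimal-redo inserts,
-- computing the aggregate once with an outer join instead of a
-- correlated subquery per row.
CREATE TABLE table1_new NOLOGGING PARALLEL AS
SELECT v.polnum, v.polren, v.polseq, v.vehnum, v.cvgcode,
       v.linecvg,
       l.min_loss AS closs
       -- ... remaining TABLE1 columns ...
  FROM table1 v
  LEFT JOIN (SELECT slpoln, slrenn, slseqn, slvehn, sllcvg,
                    MIN(slloss) AS min_loss
               FROM table2
              GROUP BY slpoln, slrenn, slseqn, slvehn, sllcvg) l
    ON  v.polnum  = l.slpoln AND v.polren  = l.slrenn
    AND v.polseq  = l.slseqn AND v.vehnum  = l.slvehn
    AND v.linecvg = l.sllcvg;
-- Then: drop/rename the tables, rebuild indexes and constraints,
-- and gather statistics on the new table.
```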
Long-running update command - request for guidance!
Guys,
I'm stuck with the update query below, which has been running for hours.
UPDATE ft
SET fact_1 = ( SELECT --+ USE_HASH(fb ft1)
fb.fact_2
FROM ODS_WO_WMIN04_FB fb, ft1
WHERE ft1.geo = fb.geo
AND ft1.prod = fb.prod
AND ft1.cust = fb.cust
AND ft1.geo = ft.geo
AND ft1.prod = ft.prod
AND ft1.cust = ft.cust
AND ft1.file = 123
AND ft.file_ = 123)
The data in the underlying tables runs only to a few thousand rows, so there is no reason why it should have been running for 3 hours.
When I check whether there is a lock on the tables, there is:
SQL:cdwcp1 -> select
2 object_name,
3 object_type,
4 session_id,
5 type, -- Type or system/user lock
6 lmode, -- lock mode in which session holds lock
7 request,
8 block,
9 ctime -- Time since current mode was granted
10 from
11 v$locked_object, all_objects, v$lock
12 where
13 v$locked_object.object_id = all_objects.object_id AND
14 v$lock.id1 = all_objects.object_id AND
15 v$lock.sid = v$locked_object.session_id
16 order by
17 session_id, ctime desc, object_name
18 /
OBJECT_NAME OBJECT_TYPE SESSION_ID TY LMODE
REQUEST BLOCK CTIME
FT TABLE 144 TM 3
0 0 31263
Could someone let me know what in the update could have caused a lock that is held for so long?
Thank you all!!!!
Bhagat
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost | Pstart| Pstop |
| 0 | UPDATE STATEMENT | | 866K| 75M| 206 | | |
| 1 | UPDATE | FT | | | | | |
| 2 | TABLE ACCESS FULL | FT | 866K| 75M| 206 | | |
| 3 | FILTER | | | | | | |
| 4 | HASH JOIN | | 1 | 147 | 24 | | |
| 5 | TABLE ACCESS BY LOCAL INDEX ROWID| FB | 1 | 69 | 1 | 117 | 117 |
| 6 | BITMAP CONVERSION TO ROWIDS | | | | | | |
| 7 | BITMAP INDEX SINGLE VALUE | FB_IDX1 | | | | 117 | 117 |
| 8 | TABLE ACCESS BY INDEX ROWID | FT | 1 | 78 | 21 | | |
| 9 | BITMAP CONVERSION TO ROWIDS | | | | | | |
| 10 | BITMAP INDEX SINGLE VALUE | FT_IDX1 | | | | | |
The above is the explain plan. Please assist. -
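One possible direction, assuming the intent is simply to copy fact_2 from the matching FB row into FT (a sketch only — the role of ft1 in the original statement is unclear, so this version joins FT directly to FB and must be checked against the real requirement; it also assumes (geo, prod, cust) is unique in FB, otherwise aggregate first):

```sql
-- Sketch: replace the per-row correlated subquery with a single join.
MERGE INTO ft
USING (SELECT geo, prod, cust, fact_2
         FROM ODS_WO_WMIN04_FB) src
ON (    ft.geo   = src.geo
    AND ft.prod  = src.prod
    AND ft.cust  = src.cust
    AND ft.file_ = 123)
WHEN MATCHED THEN UPDATE
  SET ft.fact_1 = src.fact_2;
```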
Long running select statement and v$session_longops
Oracle Version: 10.2.0.4
I've a long running sql query that takes the estimated 6 minutes to complete and return the result.
While it's running I'd like to observe it into the view v$session_longops.
I altered the session that runs the query with
ALTER SESSION SET timed_statistics=TRUE;
The tables it queries have gathered statistics on them.
However I don't see any rows in the view v$session_longops for the respective SID and serial#. Why is that? What am I missing?
Thank you!
Hi,
Now I understand what you all meant by "loops" here... Yes, the query does nested loops, as one can see from the execution plan. So that could be the reason.
SELECT STATEMENT, GOAL = ALL_ROWS
SORT GROUP BY
CONCATENATION
TABLE ACCESS BY LOCAL INDEX ROWID TABLE_1
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY GLOBAL INDEX ROWID TABLE_2
INDEX RANGE SCAN IPK_t2_CDATE
TABLE ACCESS BY INDEX ROWID TABLE_3
INDEX RANGE SCAN IPK_T3
PARTITION RANGE ALL
INDEX RANGE SCAN IRGP_REGCODE
TABLE ACCESS BY LOCAL INDEX ROWID TABLE_1
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY GLOBAL INDEX ROWID TABLE_2
INDEX RANGE SCAN IPK_t2_STATUS
TABLE ACCESS BY INDEX ROWID TABLE_3
INDEX RANGE SCAN IPK_T3
PARTITION RANGE SINGLE
INDEX RANGE SCAN IRGP_REGCODE -
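As background to the original question: v$session_longops only records operations whose individual steps run longer than roughly six seconds (full scans, hash joins, sorts and the like), so a query doing many short nested-loop iterations may never register there at all. A typical monitoring query looks something like this (a sketch; filter on your own SID and serial#):

```sql
SELECT sid, serial#, opname, target,
       sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining
  FROM v$session_longops
 WHERE totalwork > 0
   AND sofar < totalwork      -- still-running operations only
 ORDER BY time_remaining;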
I completed updating my iPad 2 to iOS 6. I have 11 apps which need to be updated, and these are flagged. But the updating procedure keeps looping and I was not able to update any of the 11 apps. Why does the update procedure keep looping, and how do I get out of it?
Okay - let's cut to the chase...
If the Menu Bar (File, Edit... Help) shown in my screenshot below is not visible, use CTRL+B to display it. Then click on Help/Check for Updates. If one is available, you will be able to select it at this time.
Note the item at the bottom of the list; About iTunes. Selecting that will show you which version of iTunes you are using.
An alternative method to display the Menu Bar is to click on the box in the top left of your iTunes window...
Long running updates and lock escalation
I have a question about locking when updating tables from SAP on a MSSQL2005 database.
Once a month we have a report that updates a large number of rows (1 million) on a special ledger table.
This report performs the update as one transaction, to keep the possibility of rolling back. It is ledger data, so we want to keep the update safe.
While this report is running, online users have access to the same table. The users can INSERT into the same table without any problems.
But after 15-20 minutes of runtime the whole table is locked by a table lock instead of row locks. The users therefore end up in a "hanging" situation when trying to INSERT into the table. The locks are held until our program has finished.
Is there any way to raise the row lock escalation threshold on a MSSQL 2005 database?
I have found trace flags 1211 and 1224, but is it safe to set these on a production database?
Hi,
I would suggest you run the report when the users are not there, as it is related to a ledger table;
just schedule it at low-peak times.
Also, the trace flags that you have mentioned can be harmful,
so I would recommend not using them.
Please refer to the link
http://blogs.msdn.com/sqlserverstorageengine/archive/2006/05/17/Lock-escalation.aspx
Also, if you want to activate these traces, I would recommend talking to SAP before doing so.
Rohit -
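For completeness, the two trace flags discussed here are set like this on SQL Server (a sketch only; flag 1211 disables lock escalation entirely, flag 1224 disables escalation based on lock count but still allows it under memory pressure, and as noted above both carry real risk on a production system):

```sql
-- Enable globally (the -1 argument) until the next instance restart.
DBCC TRACEON (1224, -1);

-- Verify which trace flags are currently active.
DBCC TRACESTATUS (-1);

-- Turn it back off.
DBCC TRACEOFF (1224, -1);
```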
Can a long running batch job causing deadlock bring server performance down
Hi
I have a customer with a long-running batch job (approx. 6 hrs); recently we experienced a performance issue where the job now takes >12 hrs. The database server is crawling. Looking at the alert.log shows some deadlocks.
The batch job is in fact many parallel child batch jobs running at the same time, which would explain the deadlocks.
Thus, I am just wondering: is it possible that, due to deadlocks, the whole server could be crawling, so that even connecting to the database using Toad, or doing an ls -lrt, is slow?
Thanks
Rgds
UngKok Aik wrote:
According to the documentation, a complex deadlock can make the job appear to hang & affect throughput, but it doesn't mention how it would make the whole server slow down. My initial thought would be that the rolling back and reconstruction of CR copies would have used up the CPU.
I think your ideas on rolling back, CR construction etc. are good guesses. If you have deadlocks, then you have multiple processes working in the same place in the database at the same time, so there may be other "near-deadlocks" that cause all sorts of interference problems.
Obviously you could have processes queueing for the same resource for some time without getting into a deadlock.
You can have a long-running update hit a row which was changed by another user after the update started - which would cause the long-running update to roll back and start again (Tom Kyte refers to this as 'write consistency' if you want to search his website for a discussion on the topic).
Once concurrent processes start sliding out of their correct sequences because of a few delays, it's possible for reports that used to run when nothing else was going on suddenly finding themselves running while updates are going on - and doing lots more reads (physical I/O) of the undo tablespace to take blocks a long way back into the past.
And so on...
Anyway, according to the customer, the problem seems to be related to lgpr_size, as the problem disappeared after they reverted it to its original default value, 0. I couldn't figure out what lgpr_size is - can you explain?
Thanks
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking" Carl Sagan -
Long-running transactions and the performance penalty
If I change the orch or scope Transaction Type to "Long Running" and do not create any other transaction scopes inside, I'm getting this warning:
warning X4018: Performance Warning: marking service '***' as a longrunning transaction is not necessary and incurs the performance penalty of an extra commit
I didn't find any description of such penalties.
So my questions to gurus:
Does it create some additional persistence point(s) / commit(s) in LR orchestration/scope?
Where are these persistence points happen, especially in LR orchestration?
Leonid Ganeline [BizTalk MVP] BizTalk Development ArchitectureThe wording may make it sound so but IMHO, if during the build of an orchestration we get carried away with scope shapes we end up with more persistence points which do affect the performance so one additional should not make soo much of a difference. It
may have been put because of end-user feed back where people may have opted for long running transactions without realizing about performance overheads and in subsequent performance optimization sessions with Microsoft put it on the product enhancement list
as "provide us with an indication if we're to incurr performance penalties". A lot of people design orchestration like they write code (not saying that is a bad thing) where they use the scope shape along the lines of a try catch block and what with
Microsoft marketing Long Running Transactions/Compensation blocks as USP's for BizTalk, people did get carried away into using them without understanding the implications.
Not saying that there is no additional persistence points added but just wondering if adding one is sufficient to warrant the warning. But if I nest enough scope shapes and mark them all as long-running, they may add up.
So when I looked at things other than persistence points, I tried to think on how one might implement the long running transaction (nested, incorporating atomic, etc), would you be able to leverage the .Net transaction object (something the pipeline
use and execute under) or would that model not handle the complexities of the Long Running Transaction which by very definiton span across days/months and keeping .Net Transaction objects active or serialization/de-serialization into operating context will
cause more issues.
Regards. -
Long running table partitioning job
Dear HANA gurus,
I've just finished table partitioning jobs for CDPOS (change document item) with 4 partitions by hash on 3 columns.
Total data volume is around 340GB and the table size was 32GB!!!!!
(The migration job was done without disabling CD, so I am currently deleting data from the table with RSCDOK99.)
Before partitioning, the data volume of the table was around 32GB.
After partitioning, the size has changed to 25GB.
It took around one and a half hours with an exclusive lock, as mentioned in the HANA administration guide.
(It is the QA DB, so fewer complaints.)
I thought that I might not be able to do this in the production DB.
Does anyone have any idea for accelerating this task?? (This is the fastest DBMS, HANA!!!!)
Or do you have any plan for online table partitioning functionality?? (To the HANA development team)
Any comments would be appreciated.
Cheers,
- Jason
Jason,
looks like we're cross talking here...
What was your rationale to partition the table in the first place?
=> To reduce the deletion time for CDPOS (as I mentioned, it was almost 10% of the whole data volume, so I wanted to save deletion time on the table through the benefits of table partitioning, like partition pruning)
Ok, I see where you're coming from, but did you ever try out if your idea would actually work?
As deletion of data is heavily dependent on locating the records to be deleted, creating an index would probably have been the better choice.
Thinking about it... you want to get rid of 10% of your data, and in order to speed the overall process up, you decide to move 100% of the data into sets of 25% of the data - each equally holding its 25% share of the 10% of records to be deleted.
The deletion then should run along these 4 sets of 25% of data.
It's surely me, but where is the speedup potential here?
How many unloads happened during the re-partitioning?
=> It was fully loaded into memory before I partitioned the table myself (from HANA Studio).
I was actually asking about unloads _during_ the re-partitioning process. Check M_CS_UNLOADS for the time frame in question.
How do the now longer running SQL statements look like?
=> As i mentioned selecting/deleting increased almost twice.
That's not what I asked.
Post the SQL statement text that was taking longer.
What are the three columns you picked for partitioning?
=> mandant, objectclas, tabname(QA has 2 clients and each of them have nearly same rows of the table)
Why those? Because these are the primary key?
I wouldn't be surprised if the SQL statements only refer to e.g. MANDT and TABNAME in the WHERE clause.
In that case the partition pruning cannot work and all partitions have to be searched.
How did you come up with 4 partitions? Why not 13, 72 or 213?
=> I thought each partition's size would be 8GB (32GB/4) if they were divided into equal sizes (just a simple thought), and 8GB is almost the same size as the other largest top-20 tables in the HANA DB.
Alright, so basically that was arbitrary.
For the last comment of your reply: most people would partition their existing large tables to get the benefits of partitioning (just like me). I think your comment applies to newly inserted data.
Well, not sure what "most people" would do.
HASH partitioning a large existing table certainly is not an activity that is just triggered off in a production system. Adding partitions to a range-partitioned table, however, happens all the time.
- Lars -
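For reference, the kind of statement discussed in this thread looks roughly like this in HANA SQL (a sketch; the table and column names are taken from the post, and as noted the operation holds an exclusive lock on the table for its whole duration):

```sql
-- Hash-partition the change-document item table into 4 parts.
-- Data is redistributed while the table is exclusively locked.
ALTER TABLE CDPOS
  PARTITION BY HASH (MANDANT, OBJECTCLAS, TABNAME) PARTITIONS 4;
```

Note that partition pruning only helps if the WHERE clauses supply all hash columns; as Lars points out, queries filtering on only MANDT and TABNAME would still search every partition.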
Update statement takes too long to run
Hello,
I am running this simple update statement, but it takes too long. It ran for 16 hours and then I cancelled it; it was not even finished. The destination table that I am updating has 2.6 million records, but I am only updating 206K records. If I add ROWNUM < 20, the update statement works just fine and updates the right column with the right information. Do you have any idea what could be wrong in my update statement? I am also using a DB link, since the CAP.ESS_LOOKUP table resides in a different db from the destination table. We are running Oracle 11g.
UPDATE DEV_OCS.DOCMETA IPM
SET IPM.XIPM_APP_2_17 = (SELECT DISTINCT LKP.DOC_STATUS
FROM [email protected] LKP
WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND
IPM.XIPMSYS_APP_ID = 2)
WHERE
IPM.XIPMSYS_APP_ID = 2;
Thanks,
Ilya
matthew_morris wrote:
In the first SQL, the SELECT against the remote table was a correlated subquery. The 'WHERE LKP.DOC_NUM = IPM.XIPM_APP_2_1 AND IPM.XIPMSYS_APP_ID = 2' means that the subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated. This might have meant thousands of iterations, meaning a great deal of network traffic (not to mention each performing a DISTINCT operation). Queries where the data is split between two or more databases are much more expensive than queries using only tables in a single database.
Sorry to disappoint you again, but a WITH clause by itself doesn't prevent the "subquery had to run once for each row of DEV_OCS.DOCMETA being evaluated" behaviour. For example:
{code}
SQL> set linesize 132
SQL> explain plan for
2 update emp e
3 set deptno = (select t.deptno from dept@sol10 t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "T" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3247731149
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 17 (83)| 00:00:01 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL| EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | REMOTE | DEPT | 1 | 13 | 0 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
Remote SQL Information (identified by operation id):
3 - SELECT "DEPTNO" FROM "DEPT" "DEPT" WHERE "DEPTNO"=:1 (accessing 'SOL10' )
16 rows selected.
SQL>
{code}
As you can see, the WITH clause by itself guarantees nothing. We must force the optimizer to materialize it:
{code}
SQL> explain plan for
2 update emp e
3 set deptno = (with t as (select /*+ materialize */ * from dept@sol10) select t.deptno from t where e.deptno = t.deptno)
4 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3568118945
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
| 0 | UPDATE STATEMENT | | 14 | 42 | 87 (17)| 00:00:02 | | |
| 1 | UPDATE | EMP | | | | | | |
| 2 | TABLE ACCESS FULL | EMP | 14 | 42 | 3 (0)| 00:00:01 | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
| 4 | LOAD AS SELECT | SYS_TEMP_0FD9D6603_1CEEEBC | | | | | | |
| 5 | REMOTE | DEPT | 4 | 80 | 3 (0)| 00:00:01 | SOL10 | R->S |
PLAN_TABLE_OUTPUT
|* 6 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 | | |
| 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6603_1CEEEBC | 4 | 80 | 2 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
6 - filter("T"."DEPTNO"=:B1)
Remote SQL Information (identified by operation id):
PLAN_TABLE_OUTPUT
5 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'SOL10' )
25 rows selected.
SQL>
{code}
I do know the MATERIALIZE hint is not documented, but I don't know any other way, besides splitting the statement in two, to materialize it.
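For reference, the "split the statement in two" alternative might look like the following sketch (the object names here are made up for illustration, following the EMP/DEPT example above):
{code}
-- Step 1: pull the remote data across the database link once,
-- into a session-private global temporary table
CREATE GLOBAL TEMPORARY TABLE dept_local
ON COMMIT PRESERVE ROWS
AS SELECT * FROM dept@sol10;

-- Step 2: run the update against the local copy, so the remote
-- table is no longer probed once per row of EMP
UPDATE emp e
SET deptno = (SELECT t.deptno FROM dept_local t WHERE e.deptno = t.deptno);
{code}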
SY. -
I need to uninstall the Captivate 7.0.1 update because my quizzes will no longer run in any browser on any platform.
The reason they do not run is because the LMS API will not load the quiz, due to a JavaScript error (see image below).
The new update 7.0.1 has drastically changed the following files:
imsmanifest.xml
index_SCORM.html
SCORM_utilities.js
Utilities.js
The new script just will not run on Firefox 26, Internet Explorer 10 and 11, Chrome 31.0.1650.63m, or Safari 5.1.7.
How do I uninstall this update and get back to the earlier version or is there something that will fix this?
Claire
You would uninstall your current version and then reinstall the previous version, but there may be a better way to handle that error.
Post on the Captivate forum to check: http://forums.adobe.com/community/adobe_captivate -
SQL Update statement taking too long..
Hi All,
I have a simple update statement that goes through a table of 95,000 rows, and it is taking too long to update; here are the details:
Oracle Version: 11.2.0.1 64bit
OS: Windows 2008 64bit
desc temp_person;
Name Null? Type
PERSON_ID NOT NULL NUMBER(10)
DISTRICT_ID NOT NULL NUMBER(10)
FIRST_NAME VARCHAR2(60)
MIDDLE_NAME VARCHAR2(60)
LAST_NAME VARCHAR2(60)
BIRTH_DATE DATE
SIN VARCHAR2(11)
PARTY_ID NUMBER(10)
ACTIVE_STATUS NOT NULL VARCHAR2(1)
TAXABLE_FLAG VARCHAR2(1)
CPP_EXEMPT VARCHAR2(1)
EVENT_ID NOT NULL NUMBER(10)
USER_INFO_ID NUMBER(10)
TIMESTAMP NOT NULL DATE
CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
Index created.
ANALYZE INDEX tmp_rs_PERSON_ED COMPUTE STATISTICS;
Index analyzed.
explain plan for update temp_person
2 set first_name = (select trim(f_name)
3 from ext_names_csv
4 where temp_person.PERSON_ID=ext_names_csv.p_id
5 and temp_person.DISTRICT_ID=ext_names_csv.ed_id);
Explained.
@?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 3786226716
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 82095 | 4649K| 2052K (4)| 06:50:31 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 82095 | 4649K| 191 (1)| 00:00:03 |
|* 3 | EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV | 1 | 178 | 24 (0)| 00:00:01 |
Predicate Information (identified by operation id):
3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
19 rows selected.
By the looks of it, the update is going to take 6 hrs!!!
ext_names_csv is an external table that has the same number of rows as the PERSON table.
ROHO@rohof> desc ext_names_csv
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
F_NAME VARCHAR2(300)
L_NAME VARCHAR2(300)
Can anyone help diagnose this, please?
Thanks
Edited by: rsar001 on Feb 11, 2011 9:10 PM
Thank you all for the great ideas, you have been extremely helpful. Here is what we did and how we were able to resolve the query.
We started with Etbin's idea to create a table from the external table, so that we can index and reference it more easily than an external table, so we did the following:
SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
Table created.
SQL> desc ext_person
Name Null? Type
P_ID NUMBER
ED_ID NUMBER
FST_NAME VARCHAR2(300)
LST_NAME VARCHAR2(300)
SQL> select count(*) from ext_person;
COUNT(*)
93383
SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
Index created.
SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
PL/SQL procedure successfully completed.
We had a look at the plan with the original SQL query that we had:
SQL> explain plan for update temp_person
2 set first_name = (select fst_name
3 from ext_person
4 where temp_person.PERSON_ID=ext_person.p_id
5 and temp_person.DISTRICT_ID=ext_person.ed_id);
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 1236196514
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | UPDATE STATEMENT | | 93383 | 1550K| 186K (50)| 00:37:24 |
| 1 | UPDATE | TEMP_PERSON | | | | |
| 2 | TABLE ACCESS FULL | TEMP_PERSON | 93383 | 1550K| 191 (1)| 00:00:03 |
| 3 | TABLE ACCESS BY INDEX ROWID| EXT_PERSON | 9 | 1602 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | EXT_PERSON_ED | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("EXT_PERSON"."P_ID"=:B1 AND "EXT_PERSON"."ED_ID"=:B2)
Note
- dynamic sampling used for this statement (level=2)
20 rows selected.
As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for the new query and here are the results:
SQL> explain plan for MERGE INTO temp_person t
2 USING (SELECT fst_name ,p_id,ed_id
3 FROM ext_person) ext
4 ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
5 WHEN MATCHED THEN
6 UPDATE set t.first_name=ext.fst_name;
Explained.
SQL> @?/rdbms/admin/utlxpls.sql
PLAN_TABLE_OUTPUT
Plan hash value: 2192307910
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | MERGE STATEMENT | | 92307 | 14M| | 1417 (1)| 00:00:17 |
| 1 | MERGE | TEMP_PERSON | | | | | |
| 2 | VIEW | | | | | | |
|* 3 | HASH JOIN | | 92307 | 20M| 6384K| 1417 (1)| 00:00:17 |
| 4 | TABLE ACCESS FULL| TEMP_PERSON | 93383 | 5289K| | 192 (2)| 00:00:03 |
| 5 | TABLE ACCESS FULL| EXT_PERSON | 92307 | 15M| | 85 (2)| 00:00:02 |
Predicate Information (identified by operation id):
3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
Note
- dynamic sampling used for this statement (level=2)
21 rows selected.
As you can see, the update now takes 00:00:17 to run (need I say more?) :)
Thank you all for your ideas that helped us get to the solution.
Much appreciated.
Thanks -
Need SSIS event handler for long running event
Hi,
I have a long running table load task that I would like to monitor using an event handler. I have tried the progress and information events but neither generates a message during the actual table load. Is there a way to invoke an
event during the SSIS data flow task when it is 1%, 2% done?
thanks
oldmandba
Do you know how many rows the source table has? You can run a SELECT statement on the destination to find out how much data has been inserted.
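For example, a quick progress check from the destination side might look like this (the table name is hypothetical; NOLOCK avoids blocking the running load, at the cost of a dirty read):
{code}
-- run periodically while the SSIS package is executing
SELECT COUNT(*) AS rows_loaded
FROM dbo.DestinationTable WITH (NOLOCK);
{code}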
Best Regards,
Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/
-
Update statement taking too long to execute
Hi All,
I'm trying to run this update statement, but it's taking too long to execute.
UPDATE ops_forecast_extract b SET position_id = (SELECT a.row_id
FROM s_postn a
WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME)))
WHERE position_level = 7
AND b.am_id IS NULL;
SELECT COUNT(*) FROM S_POSTN;
214665
SELECT COUNT(*) FROM ops_forecast_extract;
49366
SELECT count(*)
FROM s_postn a, ops_forecast_extract b
WHERE UPPER(a.desc_text) = UPPER(TRIM(B.POSITION_NAME));
575
What could be the reason for the update statement to take so long to execute?
Thanks
polasa wrote:
Hi All,
I'm trying to run this update statement, but it's taking too long to execute.
What could be the reason for the update statement to take so long to execute?
You haven't said what "too long" means, but a simple reason could be that the scalar subquery on "s_postn" is using a full table scan for each execution. Potentially this subquery gets executed for each row of the "ops_forecast_extract" table that satisfies your filter predicates. "Potentially" because of the cunning "filter/subquery optimization" of the Oracle runtime engine, which attempts to cache the results of already executed instances of the subquery. Since the in-memory hash table that holds these cached results is of limited size, and the optimization algorithm depends on the sort order of the data and could suffer from hash collisions, it's unpredictable how well this optimization works in your particular case.
You might want to check the execution plan, it should tell you at least how Oracle is going to execute the scalar subquery (it doesn't tell you anything about this "filter/subquery optimization" feature).
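One option worth testing (not a guaranteed fix, and the index name below is made up) is a function-based index that matches the UPPER(desc_text) predicate, so each execution of the scalar subquery can use an index range scan instead of a full table scan of "s_postn":
{code}
CREATE INDEX s_postn_desc_upper ON s_postn (UPPER(desc_text));

-- gather statistics so the optimizer can cost the new index
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'S_POSTN', cascade => TRUE);
{code}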
Generic instructions how to generate a useful explain plan output and how to post it here follow:
Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please put a [code] tag before and a [/code] tag after the output (or a {code} tag before and after) to enhance readability of the output provided:
In SQL*Plus:
SET LINESIZE 130
EXPLAIN PLAN FOR <your statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Note that the package DBMS_XPLAN.DISPLAY is only available from 9i on.
In 9i and above, if the "Predicate Information" section is missing from the DBMS_XPLAN.DISPLAY output but you get instead the message "Plan table is old version" then you need to re-create your plan table using the server side script "$ORACLE_HOME/rdbms/admin/utlxplan.sql".
In previous versions you could run the following in SQL*Plus (on the server) instead:
@?/rdbms/admin/utlxpls
A different approach in SQL*Plus:
SET AUTOTRACE ON EXPLAIN
<run your statement>;
will also show the execution plan.
In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
When your query takes too long ...
and post the "tkprof" output here, too.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/