Standby stops applying archivelogs
Hello,
Yesterday, in order to purge flashback logs (they occupied a lot of FRA space) on the standby, I dropped one restore point in the standby database and ran 'rm -r' on one flashback log (because I was not very sure, I only removed one flashback log).
This is what I did (copied from the note I followed):
To remove unnecessary flashback logs, you need to do the following:
1) Drop any guaranteed restore points that you don't need (e.g. drop restore point) ===> done in standby
2) Set the init parameter DB_FLASHBACK_RETENTION_TARGET adequately ===> done in standby
3) Log in as root and run the following statements:
ls -lrt …/flashback | tail -100 =====> only 'rm' one flashback log (in ASM)
Prepare a delete statement for each file you want to remove (e.g. rm -f)
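For what it's worth, flashback logs in ASM/FRA are managed by the database and should not be removed with OS-level rm. A safer sequence is sketched below; GRP1 and the 1440-minute target are placeholder values, not names from this thread:

```sql
-- On the standby: drop the guaranteed restore point that is pinning the logs
DROP RESTORE POINT GRP1;

-- Lower the retention target (in minutes) so older flashback logs become reusable
ALTER SYSTEM SET db_flashback_retention_target = 1440;

-- If the logs must go immediately, disabling flashback deletes them
-- (only do this if you no longer need Flashback Database on this standby)
ALTER DATABASE FLASHBACK OFF;

-- Check what remains
SELECT flashback_size, estimated_flashback_size
FROM   v$flashback_database_log;
```

With this approach the database drops its own flashback log files, so ASM metadata stays consistent.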
Then this morning I found that the last applied log was stuck at yesterday afternoon; the standby database won't apply archivelogs.
SYS> alter database recover managed standby database using current logfile disconnect;
alter database recover managed standby database using current logfile disconnect
ERROR at line 1:
ORA-01153: an incompatible media recovery is active
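ORA-01153 generally means another (incompatible) recovery session is already active, often a managed recovery process that is still running or wedged. A sketch of the usual check-and-restart sequence, assuming nothing else is broken:

```sql
-- See whether an MRP process is already running
SELECT process, status, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';

-- Stop the existing recovery session first...
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- ...then restart real-time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT;
```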
Any idea how to fix this problem?
Thank you.
Hello,
Thank you very much, all.
When I try to:
alter database recover managed standby database cancel;
(...stuck here, no feedback; the process has been running for 10 minutes without any feedback.)
thank you
Alert log excerpt:
NOTE: ASMB process exiting due to lack of ASM file activity
Thu Oct 25 12:19:22 2012
Stopping background process RBAL
Starting background process ASMB
ASMB started with pid=44, OS id=53739542
Starting background process RBAL
RBAL started with pid=50, OS id=15597628
Thu Oct 25 12:25:38 2012
SUCCESS: diskgroup RECOVER was mounted
SUCCESS: diskgroup RECOVER was dismounted
Thu Oct 25 12:25:39 2012
NOTE: ASMB process exiting due to lack of ASM file activity
Thu Oct 25 12:25:39 2012
Stopping background process RBAL
Thu Oct 25 12:32:49 2012
ORA-1013 signalled during: alter database recover managed standby database cancel...
Thu Oct 25 12:33:15 2012
alter database recover managed standby database cancel
(..hanging...)
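When the CANCEL itself hangs (the ORA-1013 in the alert log above), the usual last resort is to identify the stuck MRP0 background process and bounce the standby cleanly. This is a sketch, not a guaranteed fix, and the kill step is an OS-level action:

```sql
-- v$managed_standby.PID is the OS process id on Unix platforms
SELECT process, pid, status FROM v$managed_standby;

-- From the OS, as the oracle owner:  kill -9 <pid of MRP0>

-- If the instance is still wedged, restart the standby:
SHUTDOWN ABORT
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT;
```

Given that a flashback log was deleted with rm, verify afterwards that recovery actually progresses; if it stops again, the damaged file may need attention (e.g. disabling flashback or rebuilding the standby).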
Similar Messages
-
Recover standby database: archivelog apply is slow
1. recover managed standby database disconnect from session;
2. View the alert file; applying the last block of an archivelog takes about 10 minutes.
3. Review the MetaLink note "MAA - Data Guard Redo Apply and Media Recovery Best Practices 10gR1" and modify database parameters; apply is still slow.
4. View v$managed_standby:
PROCESS SEQUENCE# THREAD# BLOCK# BLOCKS TIME
MRP0 133538 1 581533 581534 15-APR-2011 10:49:09
PROCESS SEQUENCE# THREAD# BLOCK# BLOCKS TIME
MRP0 133538 1 581533 581534 15-APR-2011 10:51:56
SQL> /
PROCESS SEQUENCE# THREAD# BLOCK# BLOCKS TIME
MRP0 133538 1 581533 581534 15-APR-2011 10:51:57
SQL> /
PROCESS SEQUENCE# THREAD# BLOCK# BLOCKS TIME
MRP0 133538 1 581533 581534 15-APR-2011 10:52:01
(the query was re-run repeatedly between 10:51:57 and 10:52:01; every execution showed BLOCK# unchanged at 581533)
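Besides polling v$managed_standby, 10g exposes media-recovery throughput in v$recovery_progress, which makes a stalled apply easier to spot than eyeballing BLOCK#. A quick sketch:

```sql
-- Media-recovery throughput for the running MRP session
SELECT item, units, sofar, total
FROM   v$recovery_progress;

-- Current wait events of the recovery slaves; the 'PX Deq: Par Recov Reply'
-- waits seen in the process dump come from parallel media recovery
SELECT event, COUNT(*)
FROM   v$session_wait
GROUP  BY event
ORDER  BY COUNT(*) DESC;
```

If the slaves sit in parallel-recovery waits, it may be worth testing a different degree with the PARALLEL clause of RECOVER MANAGED STANDBY DATABASE.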
5. Dump the MRP process:
SO: 0xc103c9b58, type: 4, owner: 0xc172b7820, flag: INIT/-/-/0x00
(session) sid: 1087 trans: (nil), creator: 0xc172b7820, flag: (51) USR/- BSY/-/-/-/-/-
DID: 0001-0013-00000002, short-term DID: 0000-0000-00000000
txn branch: (nil)
oct: 0, prv: 0, sql: (nil), psql: (nil), user: 0/SYS
service name: SYS$BACKGROUND
waiting for 'PX Deq: Par Recov Reply' blocking sess=0x(nil) seq=6397 wait_time=0 seconds since wait started=3
sleeptime/senderid=10010000, passes=19f, =0
Dumping Session Wait History
for 'PX Deq: Par Recov Reply' count=1 wait_time=1953964
sleeptime/senderid=10010000, passes=19e, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1954140
sleeptime/senderid=10010000, passes=19d, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1954066
sleeptime/senderid=10010000, passes=19c, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1954065
sleeptime/senderid=10010000, passes=19b, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1954129
sleeptime/senderid=10010000, passes=19a, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1954061
sleeptime/senderid=10010000, passes=199, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1953991
sleeptime/senderid=10010000, passes=198, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1954123
sleeptime/senderid=10010000, passes=197, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1954120
sleeptime/senderid=10010000, passes=196, =0
for 'PX Deq: Par Recov Reply' count=1 wait_time=1954073
sleeptime/senderid=10010000, passes=195, =0
6. How do I read the process dump file?
SO: 0xbf34d65b0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f4990d0, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-0000001A-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17426408, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17426418
SO: 0xbf34d6570, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f499020, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000019-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17426380, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17426390
SO: 0xbf34d6530, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498f88, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000018-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc174262f8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17426308
SO: 0xbf34d64f0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498ef0, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000017-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17426270, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17426280
SO: 0xbf34d64b0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498e58, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000016-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc174261e8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc174261f8
SO: 0xbf34d6470, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498dc0, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000015-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17426160, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17426170
SO: 0xbf34d6430, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498d28, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000014-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc174260d8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc174260e8
SO: 0xbf34d63f0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498c90, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000013-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17426050, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17426060
SO: 0xbf34d63b0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498bf8, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000012-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425fc8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425fd8
SO: 0xbf34d6370, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498b60, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000011-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425f40, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425f50
SO: 0xbf34d6330, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498ac8, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000010-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425eb8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425ec8
SO: 0xbf34d62f0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498a30, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-0000000F-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425e30, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425e40
SO: 0xbf34d62b0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498998, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-0000000E-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425da8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425db8
SO: 0xbf34d6270, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498900, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-0000000D-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425d20, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425d30
SO: 0xbf34d6230, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498868, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-0000000C-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425c80, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425c90
SO: 0xbf34d61f0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f4987d0, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-0000000B-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425bf8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425c08
SO: 0xbf34d61b0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498738, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-0000000A-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425b70, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425b80
SO: 0xbf34d6170, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498688, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000009-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425ae8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425af8
SO: 0xbf34d6130, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f4985f0, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000008-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425a60, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425a70
SO: 0xbf34d60f0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498558, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000007-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc174259d8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc174259e8
SO: 0xbf34d60b0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f4984c0, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000006-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425950, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425960
SO: 0xbf34d6070, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498428, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000005-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc174258c8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc174258d8
SO: 0xbf34d6030, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498390, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000004-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425840, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425850
SO: 0xbf34d5ff0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f4982f8, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000003-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc174257b8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc174257c8
SO: 0xbf34d5fb0, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f498260, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000002-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425730, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425740
SO: 0xbf34d5f70, type: 27, owner: 0xbfce71e80, flag: INIT/-/-/0x00
SO: 0xc0f4981c8, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000001-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc174256a8, mode: SSX, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc174256b8
SO: 0xc0f498130, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) FS-00000000-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425510, mode: S, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425520
SO: 0xc0f498000, type: 5, owner: 0xbfce71e80, flag: INIT/-/-/0x00
(enqueue) MR-00000000-00000000 DID: 0001-0016-00000002
lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 res_flag: 0x2
res: 0xc17425488, mode: S, lock_flag: 0x0
own: 0xc0e403c70, sess: 0xc0e403c70, proc: 0xc0c28ae80, prv: 0xc17425498
SO: 0xc13fe20e8, type: 61, owner: 0xc0e403c70, flag: INIT/-/-/0x00
Process Queue--kxfpq: 0x0xc13fe20e8, serial: 513, # of qrefs: 16, inc: 0
client 2, detached proc: 0x(nil), QC qref 0x(nil), flags: FEML
Queue Descriptor--kxfpqd: 0x0xc13fe2140, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fdd200, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fdd200, ser: 513, seq: 1123, error: 0
opp qref: 0x0xc13fdbb50, process: 0x0xc0e2c33e8, bufs: {0x(nil), 0x0xb3fe124f8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fdd310, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fdd248, remote queue addr: 0x0xc13fdfb40
instance id: 1, server id: 15, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fe124f8, 0x0xb3fe224f8}, state: 10011
Message Buffer--kxfpmh: 0x0xb3fe124f8, type: DTA, bufnum: 1
ser: 513, seq: 1117, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdbb50, from qref: 0x0xc13fdd200, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fe12550, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fdd410, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fdd410, ser: 513, seq: 936, error: 0
opp qref: 0x0xc13fdbd60, process: 0x0xc172b8fd8, bufs: {0x(nil), 0x0xb3fe024f8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fdd520, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fdd458, remote queue addr: 0x0xc13fdfd98
instance id: 1, server id: 14, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fe024f8, 0x0xb3fdf24f8}, state: 10011
Message Buffer--kxfpmh: 0x0xb3fe024f8, type: DTA, bufnum: 1
ser: 513, seq: 926, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdbd60, from qref: 0x0xc13fdd410, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fe02550, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fdd620, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fdd620, ser: 513, seq: 1076, error: 0
opp qref: 0x0xc13fdc5a0, process: 0x0xc0f28aa28, bufs: {0x0xb3fdd24f8, 0x0xb3fdc24f8}
state: 10010, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fdd730, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fdd668, remote queue addr: 0x0xc13fdfff0
instance id: 1, server id: 13, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fdd24f8, 0x0xb3fdc24f8}, state: 10011
Message Buffer--kxfpmh: 0x0xb3fdd24f8, type: DTA, bufnum: 0
ser: 513, seq: 1069, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdc5a0, from qref: 0x0xc13fdd620, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fdd2550, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
Message Buffer--kxfpmh: 0x0xb3fdc24f8, type: DTA, bufnum: 1
ser: 513, seq: 1076, flags: DIAL, status: RCV, err: 0
to qref: 0x0xc13fdd620, from qref: 0x0xc13fdc5a0, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fdc2550, remote queue addr: 0x0xc13fdfff0
instance id: 1, server id: 13, flags: INIT
can't dump contents, client unknown
SO: 0xc13fdda40, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fdda40, ser: 513, seq: 1209, error: 0
opp qref: 0x0xc13fdc7b0, process: 0x0xc102860a8, bufs: {0x(nil), 0x0xb3fe324f8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fddb50, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fdda88, remote queue addr: 0x0xc13fe0248
instance id: 1, server id: 12, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fe324f8, 0x0xb3fda24f8}, state: 10011
Message Buffer--kxfpmh: 0x0xb3fe324f8, type: DTA, bufnum: 1
ser: 513, seq: 1203, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdc7b0, from qref: 0x0xc13fdda40, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fe32550, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fddc50, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fddc50, ser: 513, seq: 822, error: 0
opp qref: 0x0xc13fdc180, process: 0x0xc0c28be50, bufs: {0x(nil), 0x0xb3fd924f8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fddd60, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fddc98, remote queue addr: 0x0xc13fe04a0
instance id: 1, server id: 11, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fd924f8, 0x0xb3f069230}, state: 10011
Message Buffer--kxfpmh: 0x0xb3fd924f8, type: DTA, bufnum: 1
ser: 513, seq: 816, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdc180, from qref: 0x0xc13fddc50, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fd92550, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fdde60, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fdde60, ser: 513, seq: 829, error: 0
opp qref: 0x0xc13fdbf70, process: 0x0xc0d2a44f8, bufs: {0x(nil), 0x0xb3f456dc8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fddf70, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fddea8, remote queue addr: 0x0xc13fe06f8
instance id: 1, server id: 10, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3f456dc8, 0x0xb3fd724f8}, state: 10011
Message Buffer--kxfpmh: 0x0xb3f456dc8, type: DTA, bufnum: 1
ser: 513, seq: 823, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdbf70, from qref: 0x0xc13fdde60, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3f456e20, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fde070, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fde070, ser: 513, seq: 1321, error: 0
opp qref: 0x0xc13fdb940, process: 0x0xc0e2c2c00, bufs: {0x(nil), 0x0xb3fd424f8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fde180, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fde0b8, remote queue addr: 0x0xc13fe0950
instance id: 1, server id: 9, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fd424f8, 0x0xb3f854960}, state: 10011
Message Buffer--kxfpmh: 0x0xb3fd424f8, type: DTA, bufnum: 1
ser: 513, seq: 1315, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdb940, from qref: 0x0xc13fde070, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fd42550, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fde490, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fde490, ser: 513, seq: 1002, error: 0
opp qref: 0x0xc13fdd830, process: 0x0xc172b87f0, bufs: {0x(nil), 0x0xb3fde24f8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fde5a0, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fde4d8, remote queue addr: 0x0xc13fe0ba8
instance id: 1, server id: 8, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fde24f8, 0x0xb3fd224f8}, state: 10011
Message Buffer--kxfpmh: 0x0xb3fde24f8, type: DTA, bufnum: 1
ser: 513, seq: 995, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdd830, from qref: 0x0xc13fde490, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fde2550, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fde6a0, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fde6a0, ser: 513, seq: 1198, error: 0
opp qref: 0x0xc13fdcbd0, process: 0x0xc0f28a240, bufs: {0x(nil), 0x0xb3fd024f8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fde7b0, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fde6e8, remote queue addr: 0x0xc13fe0e00
instance id: 1, server id: 7, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fd024f8, 0x0xb3f059230}, state: 10011
Message Buffer--kxfpmh: 0x0xb3fd024f8, type: DTA, bufnum: 1
ser: 513, seq: 1191, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdcbd0, from qref: 0x0xc13fde6a0, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3fd02550, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fdeac0, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fdeac0, ser: 513, seq: 1080, error: 0
opp qref: 0x0xc13fdcde0, process: 0x0xc102858c0, bufs: {0x(nil), 0x0xb3f436dc8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fdebd0, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fdeb08, remote queue addr: 0x0xc13fe1058
instance id: 1, server id: 6, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3f436dc8, 0x0xb3fce24f8}, state: 10011
Message Buffer--kxfpmh: 0x0xb3f436dc8, type: DTA, bufnum: 1
ser: 513, seq: 1074, flags: STRE, status: FRE, err: 0
to qref: 0x0xc13fdcde0, from qref: 0x0xc13fdeac0, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3f436e20, remote queue addr: 0x0xc13fe20e8
instance id: 1, server id: 65535, flags: ISQC
SO: 0xc13fdeee0, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fdeee0, ser: 513, seq: 1418, error: 0
opp qref: 0x0xc13fde280, process: 0x0xc0c28b668, bufs: {0x(nil), 0x0xb3f824960}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fdeff0, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fdef28, remote queue addr: 0x0xc13fe12b0
instance id: 1, server id: 5, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3f824960, 0x0xb3fcc24f8}, state: 10011
Message Buffer--kxfpmh: 0x0xb3f824960, type: NUL, bufnum: 1
ser: 513, seq: 1407, flags: DIAL, status: FRE, err: 0
to qref: 0x0xc13fdeee0, from qref: 0x0xc13fde280, inc: 0, sender:
Queue Descriptor--kxfpqd: 0x0xb3f8249b8, remote queue addr: 0x0xc13fe12b0
instance id: 1, server id: 5, flags: INIT
SO: 0xc13fdf0f0, type: 62, owner: 0xc13fe20e8, flag: -/-/-/0x00
Queue Reference--kxfpqr: 0x0xc13fdf0f0, ser: 513, seq: 949, error: 0
opp qref: 0x0xc13fdc390, process: 0x0xc0d2a3d10, bufs: {0x(nil), 0x0xb3fc924f8}
state: 00000, flags: SMEM, nulls 0, hint 0x1
latch 0x0xc13fdf200, remote descriptor:
Queue Descriptor--kxfpqd: 0x0xc13fdf138, remote queue addr: 0x0xc13fe1508
instance id: 1, server id: 4, flags: INIT
recovery info--opr: 0, bufs: {0x0xb3fc924f8, 0x0xb3fca24f8}, state: 10011
--------------------------------------- -
Logical Standby Stops Applying Up to a Point 10GR2
Hi, I'm running a standby on 10.2.0.2
There are no sequence gaps. I registered all the datafiles so it sees the ones before and after, yet in OEM it shows this:
Log#   Status                          ResetLogs ID  First Change# (SCN)  Last Change# (SCN)  Size (KB)
35334  Committed Transactions Applied  688864038     4819403033           4819404135          10782
35335  Partially Applied               688864038     4819404135           4819404151          92
35336  Not Applied                     688864038     4819404151           4819404179          87
Alert log:
ORA-01281: SCN range specified is invalid
I have tried doing a recover until SCN # to no avail.
Hello;
I believe I would review and follow this Oracle document:
(UNREGISTER logfile On Logical Standby (Doc ID 1416433.1))
Best Regards
mseberg -
Hi,
I have a problem with real-time apply on my logical standby. If I do an "alter system switch logfile" on the primary db, I then see the data from the primary db on the standby. However, if I DON'T do the log switch, I don't see the data even though the transaction committed on the primary site.
Somehow the transactions in the primary db redo logs are not transmitted to the standby when the transactions commit. There is no error in the alert log on either the primary or the standby.
Here are my steps on the logical standby.
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
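For context, real-time apply on a logical standby only works when redo arrives in standby redo logs via the LGWR transport; if the primary ships with ARCH, data becomes visible only after a log switch, which matches the symptom described above. A sketch of the pieces involved (group number, path, size, and service name are placeholders, not values from this thread):

```sql
-- On the standby: add standby redo logs sized like the online logs
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/oradata/prod/HARDWSTD/srl04.log') SIZE 50M;

-- On the primary: ship redo with LGWR instead of ARCH
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=hardwstd LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';

-- On the standby: restart SQL Apply in real-time mode
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```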
My database version is 10.2.0.3 on Solaris 10.
Hi there,
Here's the alert log from the logical standby. Note: I did an insert on the primary first, then went to the standby and ran "stop apply" and "start apply".
Completed: ALTER DATABASE stop LOGICAL STANDBY APPLY
Thu Oct 25 14:03:32 2007
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
Thu Oct 25 14:03:32 2007
ALTER DATABASE START LOGICAL STANDBY APPLY (HARDWSTD)
with optional part
IMMEDIATE
Attempt to start background Logical Standby process
LSP0 started with pid=21, OS id=29458
LOGSTDBY status: ORA-16111: log mining and apply setting up
Thu Oct 25 14:03:32 2007
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
LOGMINER: Memory Size = 100M, Checkpoint interval = 500M
LOGMINER: session# = 1, builder process P003 started with pid=36 OS id=8859
LOGMINER: session# = 1, preparer process P004 started with pid=37 OS id=8861
LOGMINER: session# = 1, reader process P002 started with pid=35 OS id=8857
Thu Oct 25 14:03:32 2007
LOGSTDBY Parameter: DISABLE_APPLY_DELAY =
LOGSTDBY Parameter: LOG_AUTO_DELETE = FALSE
LOGSTDBY Parameter: MAX_SGA = 100
LOGSTDBY Parameter: REAL_TIME =
Completed: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
LOGSTDBY Analyzer process P005 started with pid=39 OS id=8927
LOGSTDBY Apply process P009 started with pid=43 OS id=8935
LOGSTDBY Apply process P007 started with pid=41 OS id=8931
Thu Oct 25 14:03:33 2007
LOGMINER: Begin mining logfile:/oradata/prod/HARDWSTD/stdby/1_363_600866453.dbf
LOGSTDBY Apply process P010 started with pid=44 OS id=8937
LOGSTDBY Apply process P008 started with pid=42 OS id=8933
LOGSTDBY Apply process P006 started with pid=40 OS id=8929
Thu Oct 25 14:03:34 2007
LOGMINER: End mining logfile:/oradata/prod/HARDWSTD/stdby/1_363_600866453.dbf
The standby parameter:
SQL> show parameter log_archive_dest_1
NAME TYPE VALUE
log_archive_dest_1 string LOCATION=/orachive/prod/HARDDWSTD/stdby,
valid_for=(ALL_ROLES,ONLINE_LOGFILE ) -
Sql Apply on standby stops on error
hi all,
I have run into a problem I can't figure out.
There are test and production databases, and each has a logical standby database.
For example, when a materialized view is created on the primary database, I see the error "ORA-16226: DDL skipped due to lack of support", which is OK.
When I try to create an index on the test database, I get the message "ORA-16227: DDL skipped due to missing object" - so far OK.
However, when I try to create an index on this materialized view on the production database, SQL Apply stops at this point, because the materialized view does not exist on the standby.
Any clues why SQL Apply doesn't stop on one standby but stops on the other standby database after the same statement?
Are there any parameters or similar for this?
Thanks
Yes, they are both the same version and the compatible parameter is the same - 11.1.0; they should have been created the same way on both.
The only difference: prod is on Solaris, test is on Linux.
The problem in this case is not with mviews but the fact that, if we have some table which is skipped and an index or other object is added on it on the primary, SQL Apply stops on the standby.
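If the goal is to keep SQL Apply from stopping on DDL against objects that are deliberately absent on the standby, a skip rule can be registered. A sketch; SCOTT and MY_MVIEW are placeholder names:

```sql
-- Stop apply before changing skip rules
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Skip DDL for the schema/object that doesn't exist on the standby
BEGIN
  DBMS_LOGSTDBY.SKIP(stmt        => 'SCHEMA_DDL',
                     schema_name => 'SCOTT',
                     object_name => 'MY_MVIEW');
END;
/

ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

-- If apply has already stopped on a statement, it can be skipped one time:
-- ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE SKIP FAILED TRANSACTION;
```

DBA_LOGSTDBY_EVENTS records which statement stopped apply, which helps decide whether a skip rule is safe.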
Anyway, what should the default action be if SQL Apply encounters an error - is it to stop applying? -
Standby Applied Archivelog Automatic Deletion
Dear OTN Community,
My Oracle version is 10gR2 and OS is HP-UX v11.31
My question: I have configured the archivelog deletion policy from NONE to APPLIED ON STANDBY on the standby database, since we take backups on the primary database. I left the archivelog deletion policy as NONE on the primary database.
I have also changed the retention policy from REDUNDANCY 1 to a RECOVERY WINDOW OF 2 DAYS on the primary database, so archivelogs, backups, etc. older than 2 days should be marked as obsolete.
Now, Oracle should delete the obsolete and applied archivelogs of the primary database from the standby archive destination of the standby database - am I right?
The other question: how can we delete applied archivelogs from the standby archive destination on the standby Unix box automatically? Let's say I do not want to delete them with a Unix deletion script or by taking a backup of the archivelogs on the primary database. Is it possible in real time with an Oracle parameter or not?
Thank you in anticipation,
Ogan
Hi Ogan,
It's better to set the APPLIED ON STANDBY policy on the standby AND on the primary side; one of the sides will not do anything, but after a switchover everything is already set.
Then yes, Oracle will delete obsolete backups and archivelogs on the primary destination and delete the applied logs on the standby destination.
For your other question, set APPLIED ON STANDBY on the primary side as well.
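For reference, the deletion policy described above is set through RMAN on each side; with a flash recovery area, applied archivelogs are then aged out automatically under space pressure. A sketch:

```sql
-- In RMAN, on the standby (and, per the advice above, also on the primary):
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

-- Manual cleanup still honours the policy:
DELETE ARCHIVELOG ALL;
```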
Loïc -
Logical standby stopped lastnight
Subject: Logical standby stopped lastnight
Author: raghavendra rao yella, United States
Date: Nov 14, 2007
Os info: solaris 5.9
Oracle info: 10.2.0.3
Error info: ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6f182
Message: Our logical standby stopped last night. We tried to stop and restart the standby, but it didn't help.
Below are some of the queries to get the status:
APPLIED_SCN LATEST_SCN MINING_SCN RESTART_SCN
11962328446 11981014649 11961580453 11961536228
APPLIED_TIME LATEST_TIME MINING_TIME RESTART_TIME
07-11-13 09:09:41 07-11-14 10:26:26 07-11-13 08:57:53 07-11-13 08:56:36
sys@RP06>SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
TYPE        HIGH_SCN   STATUS
COORDINATOR 1.1962E+10 ORA-16116: no work available
READER      1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
BUILDER     1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
PREPARER    1.1962E+10 ORA-16127: stalled waiting for additional transactions to be applied
ANALYZER    1.1962E+10 ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6c002
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
APPLIER                ORA-16116: no work available
10 rows selected.
SELECT PID, TYPE, STATUS FROM V$LOGSTDBY ORDER BY HIGH_SCN;
PID   TYPE        STATUS
17896 ANALYZER    ORA-16120: dependencies being computed for transaction at SCN 0x0002.c8f6f182
17892 PREPARER    ORA-16127: stalled waiting for additional transactions to be applied
17890 BUILDER     ORA-16243: paging out 8144 bytes of memory to disk
17888 READER      ORA-16127: stalled waiting for additional transactions to be applied
28523 COORDINATOR ORA-16116: no work available
17904 APPLIER     ORA-16116: no work available
17906 APPLIER     ORA-16116: no work available
17898 APPLIER     ORA-16116: no work available
17900 APPLIER     ORA-16116: no work available
17902 APPLIER     ORA-16116: no work available
10 rows selected.
How can I find out which transaction LogMiner is computing dependencies for?
Let me know if you have any questions.
Thanks in advance.
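One way to get at the question above (a hedged sketch; the column list is assumed from the 10gR2 data dictionary) is to look at the SQL Apply events view, where the XID columns identify the transaction a status message refers to:

```sql
-- Recent SQL Apply events, newest last; XIDUSN/XIDSLT/XIDSQN identify
-- the transaction associated with each event or error.
SELECT event_time, xidusn, xidslt, xidsqn, status
FROM   dba_logstdby_events
ORDER  BY event_time;
```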
Message was edited by: raghu559
Hi reega,
Thanks for your reply; our logical standby has '+RT06_DATA/RT06' and the primary has '+OT06_DATA/OT06TSG001', so we are using the db_file_name_convert init parameter, but it doesn't work.
Are there any particular steps needed for this parameter to work? When I tried it for RMAN cloning it didn't work either; as a workaround I used the RMAN SET NEWNAME command for cloning.
Let me know if you have any questions.
Thanks in advance. -
Logical standby stopped when trying to create partitions on primary (Urgent)
RDBMS Version: 10.2.0.3
Operating System and Version: Solaris 5.9
Error Number (if applicable): ORA-1119
Product (i.e. SQL*Loader, Import, etc.): Data Guard on RAC
Product Version: 10.2.0.3
Logical standby stopped when trying to create partitions on primary (Urgent)
The primary is a 2-node RAC on ASM; we implemented partitions on the primary.
The logical standby stopped applying logs.
Below is the alert.log for logical stdby:
Current log# 4 seq# 860 mem# 0: +RT06_DATA/rt06/onlinelog/group_4.477.635601281
Current log# 4 seq# 860 mem# 1: +RECO/rt06/onlinelog/group_4.280.635601287
Fri Oct 19 10:41:34 2007
create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL
Fri Oct 19 10:41:34 2007
ORA-1119 signalled during: create tablespace INVACC200740 logging datafile '+OT06_DATA' size 10M AUTOEXTEND ON NEXT 5M MAXSIZE 1000M EXTENT MANAGEMENT LOCAL...
LOGSTDBY status: ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
LOGSTDBY Apply process P004 pid=49 OS id=16403 stopped
Fri Oct 19 10:41:34 2007
Errors in file /u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc:
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or
Here is the trace file info:
/u01/app/oracle/admin/RT06/bdump/rt06_lsp0_16387.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0
System name: SunOS
Node name: iscsv341.newbreed.com
Release: 5.9
Version: Generic_118558-28
Machine: sun4u
Instance name: RT06
Redo thread mounted by this instance: 1
Oracle process number: 16
Unix process pid: 16387, image: [email protected] (LSP0)
*** 2007-10-19 10:41:34.804
*** SERVICE NAME:(SYS$BACKGROUND) 2007-10-19 10:41:34.802
*** SESSION ID:(1614.205) 2007-10-19 10:41:34.802
knahcapplymain: encountered error=12801
*** 2007-10-19 10:41:34.804
ksedmp: internal or fatal error
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '+OT06_DATA'
ORA-17502: ksfdcre:4 Failed to create file +OT06_DATA
ORA-15001: diskgroup "OT06_DATA" does not exist or is not mounted
ORA-15001: diskgroup "OT06_DATA" does not exist or
KNACDMP: *******************************************************
KNACDMP: Dumping apply coordinator's context at 7fffd9e8
KNACDMP: Apply Engine # 0
KNACDMP: Apply Engine name
KNACDMP: Coordinator's Watermarks ------------------------------
KNACDMP: Apply High Watermark = 0x0000.0132b0bc
Sorry, our primary database file structure is different from the standby's. We used db_file_name_convert in the init.ora; it looks like this:
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+OT06_DATA/OT06TSG001/','+RT06_DATA/RT06/','+RECO/OT06TSG001','+RECO/RT06'
*.db_files=2000
*.db_name='OT06'
*.db_recovery_file_dest='+RECO'
Is there anything wrong with this parameter?
I tried this parameter before, for cloning from an RMAN backup, and it didn't work then either.
What exactly must be done for db_file_name_convert to work?
Even in this case I think this is the problem: it is not converting the location, and the logical standby halts.
Please help me out.
Let me know if you have any questions.
Thanks and regards,
Raghavendra rao Yella. -
Logical standby stopped applying logs after installing STATSPACK
Please help - urgent.
Oracle 10.2.0.1.0
After installing STATSPACK, @spcreate completed successfully
and @spauto created the job successfully,
but the logical standby stopped applying the logs with the following error message:
alert.log
================
LOGSTDBY stmt: grant execute on dbms_shared_pool to execute_catalog_role
LOGSTDBY status: ORA-04042: procedure, function, package, or package body does not exist
LOGSTDBY id: XID 0x000d.01b.00009d72, hSCN 0x0000.20c7cf14, lSCN 0x0000.20c7cf14, Thread 1, RBA
0x6642.0000433b.138, txnCscn 0x0000.20c7cf17, PID 5544, ORACLE.EXE
(P006)
LOGSTDBY Apply process P006 pid=45 OS id=5544 stopped
Wed Dec 19 16:13:10 2007
================
Please help if anyone on this forum has any idea how to get the logs applying to the logical standby again.
Thanks
Hi,
Will the below steps clear the above error?
SQL> alter database stop logical standby apply;
SQL> execute dbms_logstdby.purge_session;
SQL> alter database guard all;
SQL> execute dbms_logstdby.apply_set('MAX_SGA', 3000);
SQL> execute dbms_logstdby.apply_set('MAX_SERVERS', 25);
SQL> execute dbms_logstdby.apply_set('APPLY_SERVERS', 12);
SQL> alter database start logical standby apply;
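If apply still stops on the same transaction after a restart, another hedged option is to skip just that transaction using the XID printed in the alert log (0x000d.01b.00009d72 above; the decimal triple below is my hex conversion - verify it before running this). Note that skipping the transaction means the grant never runs on the standby, so you may need to execute the same statement there manually.

```sql
-- assumption: XID 0x000d.01b.00009d72 converts to (13, 27, 40306) in decimal
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
EXECUTE DBMS_LOGSTDBY.SKIP_TRANSACTION(13, 27, 40306);
ALTER DATABASE START LOGICAL STANDBY APPLY;
```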
Please advise. -
Applying archivelogs to test disaster recovery database
Database: 10.2.0.2
OS: RHEL
Goal: (1) To test the ability to rebuild a database using backupset and archivelogs.
(2) To roll forward the test DR database by applying archivelogs from the production database.
I am using rman but not a catalog.
I restored and recovered the production database to a new, separate server, making a second database with the same DBID. I felt proud of myself for a moment.
The production database now has archivelogs past the time of the backup that was restored and recovered to the test DR database. In a simulated production database failure, I want to apply those archivelogs to the DR database in order to roll forward the DR database to the point of failure in the production database. Everyone except me seems to know how to do this. I feel a lot less proud now.
Yes I have read the rman manuals, all of them, and several times - yes, I have read through the forums and read asktom and metalink. I must be unintentionally overlooking some crucial info and concepts.
My production database has generated archivelogs past the last sequence known to my DR database. I don't know how to tell the DR database to recover these new archivelogs.
Another post on this forum directs one to use the command "recover database until cancel"; I get a syntax error. So I tried recover until time, which runs but does not apply the new archivelogs. Must I update the DR database controlfile with a post-last-backup copy from the production database in order to apply these archivelogs? Must a catalog be used?
Thank you for any assistance.
What seems to work is:
1) to restore the database using the controlfile from the backup;
2) to issue the command 'alter database recover database until cancel using backup controlfile' from SQL*Plus;
3) to respond to each ORA-00279 with 'alter database recover continue default', also from SQL*Plus, until all the archivelogs have been applied, including archivelogs with sequences subsequent to the backup, then issue 'alter database recover cancel';
4) to issue the command 'alter database open resetlogs'.
The above steps allow the available archivelogs to roll the database forward beyond the point in time recorded at the time of the hot backup.
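In SQL*Plus the sequence above might look like this (a sketch only; the exact prompts and archivelog paths depend on your environment):

```sql
STARTUP MOUNT
-- step 2: start cancel-based recovery with the backup controlfile
ALTER DATABASE RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
-- step 3: after each ORA-00279 prompt, apply the suggested log
ALTER DATABASE RECOVER CONTINUE DEFAULT;
-- ...repeat until no further archivelogs are available, then:
ALTER DATABASE RECOVER CANCEL;
-- step 4: open the database with resetlogs
ALTER DATABASE OPEN RESETLOGS;
```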
Metalink Note:161742.1 was helpful determining this information, yet it seems to conflict with Tom Kyte's statement that one should avoid using the backup controlfile whenever possible: http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:894628342039#29824708782039
Perhaps the conflict is due to the difference between a restore/recovery and a restore/recovery to a new server?
Do the above steps seem to be the best practice for restoring/recovering a database to a new server?
Thank you. -
All ZCM Policies Have Stopped Applying
Hey Guys,
We're having some serious problems over here.
Issue:
All ZCM policies have stopped applying.
This includes group policy settings, ZENworks Explorer Configuration Policies, and iPrint Printer Policies.
We've got a single primary server, 1300+ devices, and are currently using the built-in sybase database.
The first thing on our plate is to address the database. We're migrating to Oracle very soon, since we have out-grown the 1000-device limit of the built-in database, however do you feel that the limitations of the built-in database could be causing the policy issues? This all started perhaps a few weeks ago, and before that point everything was just fine.
Server OS:
SLES 11, 64-bit
ZCM Version 10.3.0.0
Clients are Windows 7 32-bit and 64-bit
pitcherj
Shaun,
I've not changed the way they're associated.
The iPrint policies are (and have been) associated to folders full of workstations.
The group policy setting(s) are (and have been) associated at the root of "Workstations" since it's a campus-wide policy.
Prior to our email discussion, I did create a test policy which was pointed at a different iPrint server with no success.
I also cycled enable/disable enforcement of the policy, incremented the version several times, performed a zac cc and a zac ref on the client, as well as attempted to associate the policy to a user which I logged into the workstation with no success. -
Stop Apply ORA-26672 and ORA-01013
Oracle 10.2
I have implemented a DML Handler linked to the "Heart Beat" table.
When the procedure gets an LCR which has the source time > of midnight, it executes the stop of capture and apply process.
source_time := lcr.GET_SOURCE_TIME();
IF source_time >= midnight THEN
DBMS_CAPTURE_ADM.STOP_CAPTURE (capture_name => 'CAPTURE');
DBMS_APPLY_ADM.STOP_APPLY (apply_name => 'APPLY');
-- DBMS_APPLY_ADM.STOP_APPLY (apply_name => 'APPLY', force => true );
END IF;
I don't know why the apply process goes into ABORTED status with these errors:
ORA-26672: timeout occurred while stopping STREAMS process APPLY
ORA-01013: user requested cancel of current operation (if I set the "force" parameter to true)
Can anyone help me?
Thx.
Apply handlers are executed by an apply server, which is part of the apply process:
APPLY PROCESS:
{queue} <==> Apply READER
                |
                |--> Apply COORDINATOR
                |
                |--> Apply SERVER 1
                |--> Apply SERVER 2 <handler> ----> | begin
                                                    |   stop capture
                                                    |   stop apply    <-- you are doing this
                                                    |   do something  <-- never reached
                                                    | end;
You stop the apply process from within the apply server, and since it is PL/SQL code, the apply server waits gently until the line is executed. But that line triggers the apply process to terminate the apply server, so the gentle end of the apply server is suddenly not gentle at all.
I am surprised you don't have a library cache lock and some more dump trace files in bdump for the apply process and in udump for the apply server session.
Anyway, your snake is biting its own tail: bon appétit!
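One way out of that loop (a sketch, not a tested solution: the job name and the midnight check are carried over from the question as assumptions) is to have the handler submit a one-off scheduler job, so the stop runs outside the apply server:

```sql
-- Inside the DML handler, instead of calling STOP_APPLY directly:
IF source_time >= midnight THEN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'STOP_STREAMS_JOB',     -- hypothetical name
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN
                     DBMS_CAPTURE_ADM.STOP_CAPTURE(''CAPTURE'');
                     DBMS_APPLY_ADM.STOP_APPLY(''APPLY'');
                   END;',
    enabled    => TRUE);                  -- runs once, then drops itself
END IF;
```

The handler then returns immediately and the stop is issued from a scheduler session, rather than from the apply server that would otherwise be stopping itself.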
Edited by: bpolarsk on Jul 17, 2009 3:20 AM added small schema -
Logical standby server stopped applying changes
Hi
I set up a logical standby database with database guard and it worked fine for some time. But recently I had to use it again and discovered that applying changes from primary database to secondary database just stopped working. I see in V$ARCHIVED_LOG one entry per day. If I restart the logical standby then the changes from primary server are also applied. But if I just make a change on primary server and even call 'alter system switch logfile' then I see an entry in V$ARCHIVED_LOG on primary server but not on standby server (BTW in general there are much more entries in this view on the primary server). I checked pairs of log files indicated by the parameter *.log_file_name_convert in standby server's spfile: their last changed date is always the same.
I will paste spfile of my standby server (dh5). Primary server name is dh2.
dh2.__db_cache_size=79691776
dh5.__db_cache_size=96468992
dh2.__java_pool_size=4194304
dh5.__java_pool_size=4194304
dh2.__large_pool_size=4194304
dh5.__large_pool_size=4194304
dh2.__shared_pool_size=71303168
dh5.__shared_pool_size=54525952
dh2.__streams_pool_size=0
dh5.__streams_pool_size=0
*.audit_file_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/admin/dh5/adump'
*.background_dump_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/admin/dh5/bdump'
*.compatible='10.2.0.1.0'
*.control_files='/var/lib/oracle/oracle/product/10.2.0/db_1/oradata/dh5/control01.ctl'
*.core_dump_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/admin/dh5/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='dh2','dh5'
*.db_name='dh7'
*.db_recovery_file_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='dh5'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=dh2XDB)'
*.fal_client='dh5'
*.fal_server='dh2'
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(dh2,dh5)'
*.log_archive_dest_1='LOCATION=/var/lib/oracle/oracle/product/10.2.0/db_1/oradata/dh5_local
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=dh5'
*.log_archive_dest_2='SERVICE=dh2 LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=dh2'
*.log_archive_dest_3='LOCATION=/var/lib/oracle/oracle/product/10.2.0/db_1/oradata/dh5
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLES)
DB_UNIQUE_NAME=dh5'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_dest_state_3='ENABLE'
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=30
*.log_file_name_convert='oradata/dh2/redo01.log','flash_recovery_area/DH5/onlinelog/o1_mf_4_5x0o5grc_.log','oradata/dh2/redo02.log','flash_recovery_area/DH5/onlinelog/o1_mf_5_5x0o61mw_.log','oradata/dh2/redo03.log','flash_recovery_area/DH5/onlinelog/o1_mf_6_5x0o63gj_.log'
*.nls_language='AMERICAN'
*.open_cursors=300
*.pga_aggregate_target=311427072
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=167772160
*.undo_management='AUTO'
*.undo_retention=3600
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/var/lib/oracle/oracle/product/10.2.0/db_1/admin/dh5/udump'
Thanks in advance for any help.
JM
Hi,
Nice to hear your issue got resolved.
It is good practice to keep monitoring the progress of SQL apply on the logical standby on a regular basis.
You can mark my response as helpful if it has helped you.
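A minimal way to do that monitoring (a sketch; run on the logical standby):

```sql
-- APPLIED_SCN close to LATEST_SCN means SQL apply is keeping up
SELECT applied_scn, latest_scn, mining_scn, restart_scn
FROM   dba_logstdby_progress;
```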
Regards
Anudeep -
Standby database not applying archivelogs, manually apply after registering
Hi
I have a small problem with physical standby db.
The standby db was created and was working fine, applying logs with no problem. One day I had to switch it to read-only mode, and it stayed in this mode for quite a while. Then there was a problem with space for archive logs. I fixed this, copied the missing logs, registered them, and they were applied.
And then the database stopped on the sequence which was automatically registered by the primary database.
v$managed_standby:
PROCESS STATUS CLIENT_P SEQUENCE# DELAY_MINS BLOCK#
ARCH CLOSING ARCH 19667 0 126977
ARCH CLOSING ARCH 19668 0 133121
MRP0 WAIT_FOR_LOG N/A 19600 0 0
As you can see, there is a WAIT_FOR_LOG on sequence 19600, which should be applied, and which is in the directory to which the oracle user has read rights, etc.
The only way to force the database to apply this log is to register it manually, but I have to add "or replace" because the file is already registered.
/path/dbsid1_19600_668777138.log 2 19600 YES NO NO A 28-OCT-09 28-OCT-09
alter database register or replace physical logfile '/path/dbsid1_19600_668777138.log';
After this I have:
/path/dbsid_19600_668777138.log 2 19600 YES NO NO A 28-OCT-09 28-OCT-09
/path/dbsid_19600_668777138.log 0 19600 YES YES NO A 29-OCT-09 28-OCT-09
Registering this file causes applying at once.
In pfile on primary:
log_archive_dest_2 string SERVICE=DRSTDB2 ARCH DELAY=2880
Pay attention - destination 2.
Question is obvious - why logs are not applied automatically?
Why logs ARE applying manually but the DEST_ID is set to 0?
There was no major structure change on the primary; besides, the logs are applied after all. Bouncing the database doesn't do any good; neither does switching to read-only and back to recovery mode.
Please can you help? I can build this standby again but this is not a solution.
Any additional info on request.
Regards
Jarek Jozwik
Edited by: user11281267 on 30-Oct-2009 06:15
You haven't given any useful information on your problem.
show parameter log_archive_dest
show parameter fal
show parameter dg
What errors are in your alert logs ?
What command are you using to recover ?
Check the contents of v$archived_log
# run on primary to detect failures :-
select destination, status, fail_date, valid_now
from v$archive_dest
where status != 'VALID' or VALID_NOW != 'YES';
# run on standby to get exact position of rollforward :-
select thread#, to_char(snapshot_time,'dd-mon-yyyy:hh24:mi'),
to_char(applied_time,'dd-mon-yyyy:hh24:mi'),
to_char(newest_time,'dd-mon-yyyy:hh24:mi') from V$STANDBY_APPLY_SNAPSHOT;
Are you using dataguard broker ? -
How to apply archivelog with gap on standby database
Hi All,
Oracle Database version : 9.2.0.6
Following is my sequence of commands on standby database.
SQL>alter database mount standby database;
SQL> RECOVER AUTOMATIC STANDBY DATABASE UNTIL CHANGE n;
ORA-00279: change 809120216 generated at 07/24/2006 09:55:03 needed for thread
1
ORA-00289: suggestion : D:\ORACLE\ADMIN\TEST\ARCH\TEST001S19921.ARC
ORA-00280: change 809120216 for thread 1 is in sequence #19921
ORA-00278: log file 'D:\ORACLE\ADMIN\TEST\ARCH\TEST001S19921.ARC' no longer
needed for this recovery
ORA-00308: cannot open archived log
'D:\ORACLE\ADMIN\TEST\ARCH\TEST001S19921.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
I have checked the last sequence# on the standby database, which is 19921. And I have archivelogs starting from sequence# 20672 onwards. When I try to apply archivelogs starting from sequence# 20672, it searches for 'D:\ORACLE\ADMIN\TEST\ARCH\TEST001S19921.ARC' and cancels the recovery. Please note that I don't have those missing archives on the primary server either. So how can I apply the remaining archivelogs, which I do have from 20672 onwards?
I hope I am not creating any confusion.
Thx in advance.
Hi Aijaz,
Thanks for your answer, but my scenario is a bit complex. I have checked my standby database status: it is not running in recovery mode. I tried to find the archive gap, which shows as 0 on the standby server. I copy all archived logs from primary to standby via a script every 2 hours and apply them on the standby. After applying, the script removes all applied log files from the primary as well as the standby. So it is something like this: I have archivelogs 1,2,3,7,8,9,10, and archivelogs 4, 5 and 6 are missing, which are required when I try to recover the standby database. Also note that I want to apply 7,8,9,10. I will lose some data from those missing archives, but I have a cold backup anyway. I don't have those missing archivelog files (4, 5 and 6) anywhere at all. So how can I recover the standby database? I am using the standby just for backup purposes.
I hope my question is clear now.
Thx in advance
- Mehul