Check actual apply log time on standby database after synchronization
Hi All,
I want to check the date and time stamp of applied archived logs on the standby database. How should I check that?
My Data Guard link was broken for some time, and meanwhile a lot of transactions happened on the primary database. When the link came back up, synchronisation completed within a few hours and ultimately the transport and apply lag became 0. But now I want to check the actual time taken to transport the logs and apply them on the standby database. Is there an easy way to do that?
Thanks
This script written by Yousef Rifai I found here http://www.dba-village.com/village/dvp_forum.OpenThread?ThreadIdA=34772&DestinationA=RSS might be just what you need (run on standby database):
set ver off
alter session set nls_date_format='dd-mon-yy hh24:mi:ss';
select app_thread, seq_app, tm_applied,
nvl(seq_rcvd,seq_app) seq_rcvd, nvl(tm_rcvd,tm_applied) tm_rcvd
from
(select sequence# seq_app, FIRST_TIME tm_applied, thread# app_thread
from v$archived_log where applied = 'YES'
and (first_time, thread#) in (
select max(FIRST_TIME), thread#
from v$archived_log where applied = 'YES'
group by thread# )),
(select sequence# seq_rcvd, FIRST_TIME tm_rcvd, thread# rcvd_thread
from v$archived_log where applied = 'NO'
and (first_time, thread#) in (
select max(FIRST_TIME), thread#
from v$archived_log where applied = 'NO'
group by thread# ))
where rcvd_thread(+) = app_thread;
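If you also want a rough per-log measure of how long transport took, a simple sketch (run on the standby; the one-day window is an arbitrary assumption, adjust it as needed) is to compare when each log was generated on the primary with when its transfer to the standby completed:

```sql
-- NEXT_TIME is roughly when the log was generated on the primary;
-- COMPLETION_TIME is when it finished arriving/registering on the standby.
ALTER SESSION SET nls_date_format = 'dd-mon-yy hh24:mi:ss';

SELECT thread#, sequence#,
       next_time       AS generated,
       completion_time AS received,
       ROUND((completion_time - next_time) * 24 * 60) AS delay_minutes
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 1   -- last 24 hours only
ORDER  BY thread#, sequence#;
```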
Best regards,
Robert
http://robertvsoracle.blogspot.com
Similar Messages
-
Apply missing log on physical standby database
How to apply the missing log on physical standby database?
I already registered the missing log and started the recovery process.
Still, the log on the standby has status "not applied".
SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# APP
16018 YES
16019 YES
16020 NO ---------------------> Not applied.
16021 YES
Thanks
Not much experience doing this, but according to the 9i doc (http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96653/log_apply.htm#1017352), all you need are properly configured FAL_CLIENT and FAL_SERVER initialization parameters, and things should take care of themselves automatically. Let us know if that doesn't work for you; we might be able to think of something else.
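As a sketch of what that configuration looks like (the TNS aliases prim_srv and stby_srv, and the archive path, are hypothetical names, not from the thread):

```sql
-- On the standby: FAL_SERVER tells it where to fetch missing logs from,
-- FAL_CLIENT tells the primary how to address this standby.
ALTER SYSTEM SET fal_server = 'prim_srv';
ALTER SYSTEM SET fal_client = 'stby_srv';

-- If a hand-copied log is still unknown to the standby controlfile,
-- it can be registered manually before restarting recovery:
-- ALTER DATABASE REGISTER PHYSICAL LOGFILE '/arch/arch_16020.arc';
```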
Daniel -
What are the steps applying incremental backups to standby database 11g
Hi All,
I have built an 11g non-ASM standby database from an ASM RAC database. Now I want to apply an incremental backup from the primary to the standby database, but I'm not sure how to do it. I tried the following and got the error "ORA-01103: database name 'ins-prim' in control file is not 'ins-sec'":
1- I configured the standby database with an RMAN backup.
2- After finishing the installation, I took an incremental backup from the primary server (ins-prim) and moved the incremental backup and control file to the standby (ins-sec) database.
3- I started the standby database in nomount mode.
4- restore controlfile from "incremental backup location in standby database"
5- alter database mount; and got this error:
"ORA-01103: database name 'ins-prim' in control file is not 'ins-sec'"
What are the steps applying incremental backups to standby database with 11g?
Thank you
I built the database from backup, changed from ASM to non-ASM, and changed the location of the data files and log files. I think these changes make the standby database a logical one.
You can have a physical standby with different locations for everything (redo/controlfiles/datafiles), ASM and no ASM, etc. I have such a configuration in production (10gR2).
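For reference, the usual way to roll a physical standby forward with an incremental backup can be sketched like this (a hedged outline, assuming a genuine physical standby and a /backup staging directory, both of which are assumptions; note that a physical standby keeps the primary's db_name, with only db_unique_name differing, so the ORA-01103 above suggests ins-sec was created as an independent database):

```sql
-- On the standby (SQL*Plus): find the SCN to roll forward from
SELECT current_scn FROM v$database;

-- On the primary (RMAN):
--   BACKUP INCREMENTAL FROM SCN <scn_from_standby> DATABASE FORMAT '/backup/fwd_%U';

-- Copy the pieces to the standby host, then on the standby
-- (RMAN connected as target to the standby instance):
--   CATALOG START WITH '/backup/';
--   RECOVER DATABASE NOREDO;
```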
I build the database from backup
Are you sure you have a standby? Does ins-sec receive the archivelog files from the primary? How did you proceed to build this database? I suspect you don't have a standby at all! If you have duplicated the database, ins-sec and ins-prim are independent databases and you won't be able to apply an incremental backup (your script was not correct, but that is another story) -
BRARCHIVE backup for high volume offline redo log files on Standby Database
Hi All,
We are through with all of the standby database activity and have also started applying the offline redo log files on the standby site.
The throughput is not utilizing the actual available bandwidth.
So we are not able to copy the offline redo files on time, and the offline redo files are piling up on the production side.
My query is how we can copy the offline redo log files to the DR site in parallel (i.e. 4-5 redo files at a time).
Kindly guide for the same.
Regards,
Shaibaz
Hi,
I have one doubt.
On the other server (r3qas) the umask settings are as follows:
User UMASK value
<sid>adm 077
ora<SID> 077
root 077
Running SAP System : SAP R3 4.6C
Running DBMS : Oracle 9.0
Operating System : HP-UX
On this system the new offline redo log files are created with 600 permissions. There is no problem here while taking the backup. I checked the last "r3qas-archive" backups and did not find a single error related to permissions or anything else (something like: cannot open /oracle/RQ1/../.........dbf).
If everything is working fine with this umask setting on this server, then what's going wrong with the BW Quality server, which has the same umask settings (and others) for all the concerned users, as mentioned above?
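The arithmetic behind the 600 permissions is just the umask: a new file gets mode 666 minus the umask bits, so with umask 077 every freshly created archive file comes out as 600 (owner read/write only). A throwaway demonstration using a scratch file rather than any SAP path (`stat -c` is the GNU/Linux form; on HP-UX use `ls -l` instead):

```shell
# With umask 077, a newly created file gets mode 666 & ~077 = 600.
umask 077
touch /tmp/umask_probe.dbf
stat -c '%a %n' /tmp/umask_probe.dbf   # prints: 600 /tmp/umask_probe.dbf
rm /tmp/umask_probe.dbf
```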
Regards,
Bhavik Shroff -
Recovery process applies old archivelogs on standby database
Right now my standby database is in sync with my primary database and is waiting for the archived log sequence# 8378 to arrive.
But when I stop the recovery process (alter database recover managed standby database cancel;) and restart it (alter database recover managed standby database disconnect), it starts all over again and begins applying archive logs from sequence# 5739 (it looks like it's scanning through the logs). Catching up with the primary takes 2+ hours, as it needs to skim through all the logs from 5739 to 8377.
Please let me know if you need any further information to fix this.
Thank you
Sunny boy
Details:
Database version: 11.2.0.3
OS : RHEL 5
On Standby Database
SQL> SELECT THREAD#, MAX(SEQUENCE#) AS "LAST_APPLIED_LOG"
FROM V$LOG_HISTORY
GROUP BY THREAD#;
THREAD# LAST_APPLIED_LOG
1 8377
Alert log
alter database recover managed standby database disconnect
Attempt to start background Managed Standby Recovery process (MNODWDR)
Tue May 08 16:13:09 2012
MRP0 started with pid=28, OS id=26150
MRP0: Background Managed Standby Recovery process started (MNODWDR)
started logmerger process
Tue May 08 16:13:15 2012
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 8 slaves
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Completed: alter database recover managed standby database disconnect
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/arch/mnodw_1_5739_765032423.arc
Tue May 08 16:13:48 2012
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5740.1466.781015749
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5741.1468.781017203
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5742.1474.781017203
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5743.1473.781017203
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5744.1477.781017203
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5745.1478.781017203
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5746.1472.781017203
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5747.1475.781017203
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5748.1469.781017203
Media Recovery Log +MNODW_FRA_GRP/mnodwdr/archivelog/2012_04_19/thread_1_seq_5749.1470.781017203
Tue May 08 16:13:57 2012
Edited by: Sunny boy on May 8, 2012 5:29 PM
Hello;
V$LOG_HISTORY is the information from the control file. I would use a different query to check :
From the Primary :
SET PAGESIZE 140
COL DB_NAME FORMAT A10
COL HOSTNAME FORMAT A14
COL LOG_ARCHIVED FORMAT 999999
COL LOG_APPLIED FORMAT 999999
COL LOG_GAP FORMAT 9999
COL APPLIED_TIME FORMAT A14
SELECT
DB_NAME, HOSTNAME, LOG_ARCHIVED, LOG_APPLIED, APPLIED_TIME, LOG_ARCHIVED-LOG_APPLIED LOG_GAP
FROM
( SELECT NAME DB_NAME FROM V$DATABASE ),
( SELECT UPPER(SUBSTR(HOST_NAME,1,(DECODE(INSTR(HOST_NAME,'.'),0,LENGTH(HOST_NAME),(INSTR(HOST_NAME,'.')-1))))) HOSTNAME
  FROM V$INSTANCE ),
( SELECT MAX(SEQUENCE#) LOG_ARCHIVED
  FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES' ),
( SELECT MAX(SEQUENCE#) LOG_APPLIED
  FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' ),
( SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME
  FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' );
Change DEST_ID as needed for your system. I would also bump the LOG_ARCHIVE_MAX_PROCESSES parameter, assuming it's set to the default, to a higher value (up to 30).
Maybe instead of stopping the recovery process you should DEFER on the Primary
alter system set log_archive_dest_state_2=defer;
Change the _n from 2 to what your system requires. I use this and have watched DG catch up 200 archives in about 15 minutes.
You have standby redo logs set up, and they are the same size as your redo logs, right?
I have never seen the standby try to apply twice.
ORA-600 [3020] "Stuck Recovery" [ID 30866.1] ( But I do not see your issue )
Metalink Note 241438.1 Script to Collect Data Guard Physical Standby Diagnostic Information
Metalink Note 241374.1 Script to Collect Data Guard Primary Site Diagnostic Information
Best Regards
mseberg
Edited by: mseberg on May 8, 2012 5:16 PM -
Problem for applying logs automatically to Standby
DBAz,
In my Data Guard configuration, the primary database is archiving to the standby location.
But when we verify afterwards using the following queries, it shows
that the recently archived log is not applied.
Also, from the primary it looks like the logs were shipped twice.
I am using Maximum Performance protection mode . Is it necessary to run the
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
command each and every time to
synchronize the standby with primary ?
Will it be automatically updated whenever the SCN changes?
More specifically, how could I automatically update a standby database based
on the primary?
I think the redo logs are not being applied properly to the standby.
I am sure that I have configured it as per the documentation.
Can anybody suggest a solution?
I can show you the status of the logs too -
Primary =>
SQL> select SEQUENCE# ,FIRST_TIME , NEXT_TIME ,ARCHIVED,APPLIED,CREATOR
2 from v$archived_log;
SEQUENCE# FIRST_TIM NEXT_TIME ARC APP CREATOR
139 26-FEB-07 26-FEB-07 YES NO ARCH
140 26-FEB-07 26-FEB-07 YES NO ARCH
141 26-FEB-07 26-FEB-07 YES NO ARCH
142 26-FEB-07 27-FEB-07 YES NO ARCH
143 27-FEB-07 07-APR-07 YES NO ARCH
144 07-APR-07 16-MAR-07 YES NO ARCH
145 16-MAR-07 20-MAR-07 YES NO ARCH
146 20-MAR-07 20-MAR-07 YES NO ARCH
147 20-MAR-07 21-MAR-07 YES NO FGRD
148 21-MAR-07 21-MAR-07 YES NO ARCH
149 21-MAR-07 21-MAR-07 YES NO ARCH
SEQUENCE# FIRST_TIM NEXT_TIME ARC APP CREATOR
150 21-MAR-07 21-MAR-07 YES NO FGRD
151 21-MAR-07 22-MAR-07 YES NO ARCH
152 22-MAR-07 22-MAR-07 YES NO ARCH
152 22-MAR-07 22-MAR-07 YES NO ARCH
153 22-MAR-07 22-MAR-07 YES NO FGRD
153 22-MAR-07 22-MAR-07 YES YES FGRD
154 22-MAR-07 22-MAR-07 YES NO ARCH
154 22-MAR-07 22-MAR-07 YES YES ARCH
155 22-MAR-07 24-MAR-07 YES NO FGRD
155 22-MAR-07 24-MAR-07 YES NO FGRD
156 24-MAR-07 24-MAR-07 YES NO ARCH
SEQUENCE# FIRST_TIM NEXT_TIME ARC APP CREATOR
156 24-MAR-07 24-MAR-07 YES YES ARCH
157 24-MAR-07 26-MAR-07 YES NO ARCH
157 24-MAR-07 26-MAR-07 YES NO ARCH
158 26-MAR-07 26-MAR-07 YES NO FGRD
158 26-MAR-07 26-MAR-07 YES NO FGRD
27 rows selected.
Standby =>
SQL> select SEQUENCE# ,FIRST_TIME , NEXT_TIME ,ARCHIVED,APPLIED,CREATOR
2 from v$archived_log;
SEQUENCE# FIRST_TIM NEXT_TIME ARC APP CREATOR
152 22-MAR-07 22-MAR-07 YES YES ARCH
153 22-MAR-07 22-MAR-07 YES YES FGRD
154 22-MAR-07 22-MAR-07 YES YES ARCH
155 22-MAR-07 24-MAR-07 YES YES FGRD
156 24-MAR-07 24-MAR-07 YES YES ARCH
157 24-MAR-07 26-MAR-07 YES NO ARCH
158 26-MAR-07 26-MAR-07 YES NO FGRD
7 rows selected.
SQL> select sequence#, archived, applied
2 from v$archived_log order by sequence#;
SEQUENCE# ARC APP
152 YES YES
153 YES YES
154 YES YES
155 YES YES
156 YES YES
157 YES NO
158 YES NO
7 rows selected.
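A quick cross-check for a genuine gap (run on the standby) is v$archive_gap, which lists any missing sequence range per thread:

```sql
-- An empty result means no gap; otherwise the range of missing logs is shown.
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;
```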
Regards,
Raj
I am facing this problem right after the Data Guard configuration, because when I created some objects on the primary and checked for them on the standby, they were missing. So I ran all the queries above.
Here are the last few lines from the alert log, but I think they're normal.
Primary =>
Sat Mar 24 15:03:36 2007
ARCH: Beginning to archive log 1 thread 1 sequence 155
Creating archive destination LOG_ARCHIVE_DEST_2: 'stby4'
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/admin/mudra/arch/arch_155.arc'
ARCH: Completed archiving log 1 thread 1 sequence 155
Sat Mar 24 15:21:31 2007
Thread 1 advanced to log sequence 157
Current log# 3 seq# 157 mem# 0: /oracle/oradata/mudra/redo03.log
Sat Mar 24 15:21:31 2007
ARC0: Evaluating archive log 2 thread 1 sequence 156
ARC0: Beginning to archive log 2 thread 1 sequence 156
Creating archive destination LOG_ARCHIVE_DEST_2: 'stby4'
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/admin/mudra/arch/arch_156.arc'
ARC0: Completed archiving log 2 thread 1 sequence 156
Mon Mar 26 14:49:06 2007
Thread 1 advanced to log sequence 158
Current log# 1 seq# 158 mem# 0: /oracle/oradata/mudra/redo01.log
Mon Mar 26 14:49:06 2007
ARC1: Evaluating archive log 3 thread 1 sequence 157
ARC1: Beginning to archive log 3 thread 1 sequence 157
Creating archive destination LOG_ARCHIVE_DEST_2: 'stby4'
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/admin/mudra/arch/arch_157.arc'
ARC1: Completed archiving log 3 thread 1 sequence 157
Mon Mar 26 17:16:15 2007
Thread 1 advanced to log sequence 159
Current log# 2 seq# 159 mem# 0: /oracle/oradata/mudra/redo02.log
Mon Mar 26 17:16:15 2007
ARCH: Evaluating archive log 1 thread 1 sequence 158
Mon Mar 26 17:16:15 2007
ARC0: Evaluating archive log 1 thread 1 sequence 158
ARC0: Unable to archive log 1 thread 1 sequence 158
Log actively being archived by another process
Mon Mar 26 17:16:15 2007
ARCH: Beginning to archive log 1 thread 1 sequence 158
Creating archive destination LOG_ARCHIVE_DEST_2: 'stby4'
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/admin/mudra/arch/arch_158.arc'
ARCH: Completed archiving log 1 thread 1 sequence 158
Standby =>
Also, when I run the query -
select SEQUENCE# from v$log_history;
Primary got SEQUENCE# => 1...158
Standby got SEQUENCE# => 1...156 -
Hi Friends,
I am getting the following exception in the logical standby database at the time of SQL Apply.
After I run the command alter database start logical standby apply, SQL Apply services start, but after a few seconds they stop automatically and I get the following exception:
alter database start logical standby apply
Tue May 17 06:42:00 2011
No optional part
Attempt to start background Logical Standby process
LOGSTDBY Parameter: MAX_SERVERS = 20
LOGSTDBY Parameter: MAX_SGA = 100
LOGSTDBY Parameter: APPLY_SERVERS = 10
LSP0 started with pid=30, OS id=4988
Tue May 17 06:42:00 2011
Completed: alter database start logical standby apply
Tue May 17 06:42:00 2011
LOGSTDBY status: ORA-16111: log mining and apply setting up
Tue May 17 06:42:00 2011
LOGMINER: Parameters summary for session# = 1
LOGMINER: Number of processes = 4, Transaction Chunk Size = 201
LOGMINER: Memory Size = 100M, Checkpoint interval = 500M
Tue May 17 06:42:00 2011
LOGMINER: krvxpsr summary for session# = 1
LOGMINER: StartScn: 0 (0x0000.00000000)
LOGMINER: EndScn: 0 (0x0000.00000000)
LOGMINER: HighConsumedScn: 2660033 (0x0000.002896c1)
LOGMINER: session_flag 0x1
LOGMINER: session# = 1, preparer process P002 started with pid=35 OS id=4244
LOGSTDBY Apply process P014 started with pid=47 OS id=5456
LOGSTDBY Apply process P010 started with pid=43 OS id=6484
LOGMINER: session# = 1, reader process P000 started with pid=33 OS id=4732
Tue May 17 06:42:01 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1417, X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
Tue May 17 06:42:01 2011
LOGMINER: Turning ON Log Auto Delete
Tue May 17 06:42:01 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01417_0748170313.001
Tue May 17 06:42:01 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1418, X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
LOGSTDBY Apply process P008 started with pid=41 OS id=4740
LOGSTDBY Apply process P013 started with pid=46 OS id=7864
LOGSTDBY Apply process P006 started with pid=39 OS id=5500
LOGMINER: session# = 1, builder process P001 started with pid=34 OS id=4796
Tue May 17 06:42:02 2011
LOGMINER: skipped redo. Thread 1, RBA 0x00058a.00000950.0010, nCV 6
LOGMINER: op 4.1 (Control File)
Tue May 17 06:42:02 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01418_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1419, X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01419_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1420, X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01420_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1421, X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
LOGSTDBY Analyzer process P004 started with pid=37 OS id=5096
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01421_0748170313.001
LOGSTDBY Apply process P007 started with pid=40 OS id=2760
Tue May 17 06:42:03 2011
Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
LOGSTDBY Apply process P012 started with pid=45 OS id=7152
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1422, X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01422_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1423, X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01423_0748170313.001
Tue May 17 06:42:03 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1424, X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
LOGMINER: session# = 1, preparer process P003 started with pid=36 OS id=5468
Tue May 17 06:42:03 2011
LOGMINER: End mining logfile: X:\TANVI\ARCHIVE2\ARC01424_0748170313.001
Tue May 17 06:42:04 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 1425, X:\TANVI\ARCHIVE2\ARC01425_0748170313.001
LOGSTDBY Apply process P011 started with pid=44 OS id=6816
LOGSTDBY Apply process P005 started with pid=38 OS id=5792
LOGSTDBY Apply process P009 started with pid=42 OS id=752
Tue May 17 06:42:05 2011
krvxerpt: Errors detected in process 34, role builder.
Tue May 17 06:42:05 2011
krvxmrs: Leaving by exception: 600
Tue May 17 06:42:05 2011
Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
LOGSTDBY status: ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
Tue May 17 06:42:06 2011
Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_lsp0_4988.trc:
ORA-12801: error signaled in parallel query server P001
ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
Tue May 17 06:42:06 2011
LogMiner process death detected
Tue May 17 06:42:06 2011
logminer process death detected, exiting logical standby
LOGSTDBY Analyzer process P004 pid=37 OS id=5096 stopped
LOGSTDBY Apply process P010 pid=43 OS id=6484 stopped
LOGSTDBY Apply process P008 pid=41 OS id=4740 stopped
LOGSTDBY Apply process P012 pid=45 OS id=7152 stopped
LOGSTDBY Apply process P014 pid=47 OS id=5456 stopped
LOGSTDBY Apply process P005 pid=38 OS id=5792 stopped
LOGSTDBY Apply process P006 pid=39 OS id=5500 stopped
LOGSTDBY Apply process P007 pid=40 OS id=2760 stopped
LOGSTDBY Apply process P011 pid=44 OS id=6816 stopped
Tue May 17 06:42:10 2011
Errors in file x:\oracle\product\10.2.0\admin\tanvi\bdump\tanvi_p001_4796.trc:
ORA-00600: internal error code, arguments: [krvxbpx20], [1], [1418], [2380], [16], [], [], []
Submit an SR to Oracle Support.
refer these too
*ORA-600/ORA-7445 Error Look-up Tool [ID 153788.1]*
*Bug 6022014: ORA-600 [KRVXBPX20] ON LOGICAL STANDBY* -
Archived log missed in standby database
Hi,
OS; Windows 2003 server
Oracle: 10.2.0.4
Data Guard: Max Performance
Data Guard missed some of the archivelog files, but the latest log files are applying. The standby database is not in sync with the primary.
SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL WHERE LOCAL.SEQUENCE# NOT IN (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND THREAD# = LOCAL.THREAD#);
I ran the query above and found that some files are missing on the standby.
select status, type, database_mode, recovery_mode,protection_mode, srl, synchronization_status,synchronized from V$ARCHIVE_DEST_STATUS where dest_id=2;
STATUS TYPE DATABASE_MODE RECOVERY_MODE PROTECTION_MODE SRL SYNCHRONIZATION_STATUS SYN
VALID PHYSICAL MOUNTED-STANDBY MANAGED MAXIMUM PERFORMANCE NO CHECK CONFIGURATION NO
Can anyone tell me how to apply those missed archive log files?
Thanks in advance
Deccan Charger wrote:
I got below error.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
ERROR at line 1:
ORA-01153: an incompatible media recovery is active
You essentially need to do the following.
1) Stop managed recovery on the standby.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
2) Resolve the archive log gap - if you have configured FAL_SERVER and FAL_CLIENT, Oracle should do this when you follow step 3 below; as you've manually copied the missed logs, you should be OK.
3) restart managed recovery using the command shown above.
You can monitor archive log catchup using the alert.log or your original query.
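Putting those steps together as a sketch (the archive file path is hypothetical; if FAL is configured, the REGISTER step is usually unnecessary):

```sql
-- 1) stop managed recovery
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- 2) make any hand-copied logs known to the standby controlfile
ALTER DATABASE REGISTER PHYSICAL LOGFILE '/arch/arch_1_1234.arc';

-- 3) restart managed recovery
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```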
Niall Litchfield
http://www.orawin.info/
Edited by: Niall Litchfield on May 4, 2010 2:29 PM
missed tag -
Increase current redo log size in standby database in mount stage
We have an Oracle 10g standby database. The standby database always runs in mount stage; we apply logs manually, and Data Guard is not used.
We have increased the size of the online redo logs on the primary. Now we want to increase it on the standby database also.
How do we increase the size of the current online redo log on the standby database while it is in mount stage?
In mount stage we can't run alter system switch logfile.
user11965804 wrote:
We have an Oracle 10g standby database. The standby database always runs in mount stage; we apply logs manually, and Data Guard is not used.
We have increased the size of the online redo logs on the primary. Now we want to increase it on the standby database also.
How do we increase the size of the current online redo log on the standby database while it is in mount stage?
In mount stage we can't run alter system switch logfile.
In 10g the standby will always be in mount status when MRP is running.
When you increase the size of the online redo log files on the primary, you should increase it on the standby also.
The standby redo log file size should be equal to or larger than the primary's. You do not need to switch log files on the standby.
You will have only standby redo log files on the standby, not ORLs (online redo log files).
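Adding them can be sketched as follows (the group numbers and the 100M size are assumptions; match the size to the primary's online redo logs):

```sql
-- Stop apply, add the groups, then restart apply.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 100M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 100M;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```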
You can use the script below to add standby redo log files.
http://www.pythian.com/news/581/oracle-standby-redo-logs/ -
Dropping log file in standby database
Please,
I need help with the following issue:
I'm writing technical documentation on various events that occur in a Data Guard configuration. I just dropped a redo log group on the primary database, and when I try to drop the equivalent log group on the standby database I get the following error:
SQL> alter database drop logfile group 3;
alter database drop logfile group 3
ERROR at line 1:
ORA-01156: recovery in progress may need access to files
This is the current state of the redo log files on the standby database.
SQL> select group#,members,status from v$log;
GROUP# MEMBERS STATUS
1 3 CLEARING_CURRENT
3 3 CLEARING
2 3 CLEARING
Even when I run the following command on the standby, I also get an error.
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
ALTER DATABASE CLEAR LOGFILE GROUP 3
ERROR at line 1:
ORA-01156: recovery in progress may need access to files
Can someone tell me how, in a Data Guard configuration, to drop the redo log file on the primary database and its counterpart on the standby database?
I'm working on 10g Release 2, on Windows.
Thank you
Oracle Data Guard Concepts and Administration release 2 (ref b14239) is my source, but it doesn't work when trying to drop a standby group or logfile member.
For example, if the primary database has 10 online redo log files and the standby
database has 2, and then you switch over to the standby database so that it functions
as the new primary database, the new primary database is forced to archive more
frequently than the original primary database.
Consequently, when you add or drop an online redo log file at the primary site, it is
important that you synchronize the changes in the standby database by following
these steps:
1. If Redo Apply is running, you must cancel Redo Apply before you can change the
log files.
2. If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO,
change the value to MANUAL.
3. Add or drop an online redo log file:
■ To add an online redo log file, use a SQL statement such as this:
SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log'
SIZE 100M;
■ To drop an online redo log file, use a SQL statement such as this:
SQL> ALTER DATABASE DROP LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log';
4. Repeat the statement you used in Step 3 on each standby database.
5. Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states.
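For the standby side of the question (group 3, as in the ORA-01156 example above), the numbered steps can be sketched as follows; this is a hedged outline, and whether the CLEAR is needed depends on the group's status:

```sql
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;  -- step 1: stop Redo Apply
ALTER SYSTEM SET standby_file_management = MANUAL;       -- step 2
ALTER DATABASE CLEAR LOGFILE GROUP 3;                    -- only while status is CLEARING
ALTER DATABASE DROP LOGFILE GROUP 3;                     -- steps 3/4
ALTER SYSTEM SET standby_file_management = AUTO;         -- step 5
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```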
Thanks
Relocating datafiles on standby database after mount point on standby is full
Hi,
We have a physical standby database.
The location of the datafiles on the primary database is /oracle/oradata/ and the location of the datafiles on the standby database is /oracle/oradata/.
Now we are facing a situation where the mount point is getting full on the standby database, so we need to move some tablespaces to another location on the standby.
Say old location is /oracle/oradata/ and new location is /oradata_new/ and the tablespaces to be relocated are say tab1 and tab2.
Can anybody tell me whether the following steps are correct?
1. Stop managed recovery on standby database
alter database recover managed standby database cancel;
2. Shutdown standby database
shutdown immediate;
3. Open standby database in mount stage
startup mount;
4. Copy the datafiles to the new location, say /oradata_new/, using an OS-level command.
5. Rename the datafiles:
alter database rename file
'/oracle/oradata/tab1.123451.dbf', '/oracle/oradata/tab1.123452.dbf', '/oracle/oradata/tab2.123451.dbf', '/oracle/oradata/tab2.123452.dbf'
to '/oradata_new/tab1.123451.dbf', '/oradata_new/tab1.123452.dbf', '/oradata_new/tab2.123451.dbf', '/oradata_new/tab2.123452.dbf';
6. Edit the parameter db_file_name_convert:
alter system set db_file_name_convert='/oracle/oradata/tab1','/oradata_new/tab1','/oracle/oradata/tab2','/oradata_new/tab2' scope=spfile;
7. Start managed recovery on the standby database:
alter database recover managed standby database disconnect from session;
I am a little bit confused about step 6, as we want to relocate only two tablespaces, not all tablespaces.
Can we use db_file_name_convert like this, i.e. does it work for only the two tablespaces tab1 and tab2?
Thanks & Regards
GirishA
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1010428
8.3.4 Renaming a Datafile in the Primary Database
When you rename one or more datafiles in the primary database, the change is not propagated to the standby database. Therefore, if you want to rename the same datafiles on the standby database, you must manually make the equivalent modifications on the standby database because the modifications are not performed automatically, even if the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO.
The following steps describe how to rename a datafile in the primary database and manually propagate the changes to the standby database.
To rename the datafile in the primary database, take the tablespace offline:
SQL> ALTER TABLESPACE tbs_4 OFFLINE;
Exit from the SQL prompt and issue an operating system command, such as the following UNIX mv command, to rename the datafile on the primary system:
% mv /disk1/oracle/oradata/payroll/tbs_4.dbf
/disk1/oracle/oradata/payroll/tbs_x.dbf
Rename the datafile in the primary database and bring the tablespace back online:
SQL> ALTER TABLESPACE tbs_4 RENAME DATAFILE 2> '/disk1/oracle/oradata/payroll/tbs_4.dbf'
3> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
SQL> ALTER TABLESPACE tbs_4 ONLINE;
Connect to the standby database, query the V$ARCHIVED_LOG view to verify all of the archived redo log files are applied, and then stop Redo Apply:
SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# APP
8 YES
9 YES
10 YES
11 YES
4 rows selected.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Shut down the standby database:
SQL> SHUTDOWN;
Rename the datafile at the standby site using an operating system command, such as the UNIX mv command:
% mv /disk1/oracle/oradata/payroll/tbs_4.dbf /disk1/oracle/oradata/payroll/tbs_x.dbf
Start and mount the standby database:
SQL> STARTUP MOUNT;
Rename the datafile in the standby control file. Note that the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL.
SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/tbs_4.dbf'
2> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
On the standby database, restart Redo Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
2> DISCONNECT FROM SESSION;
If you do not rename the corresponding datafile at the standby system, and then try to refresh the standby database control file, the standby database will attempt to use the renamed datafile, but it will not find it. Consequently, you will see error messages similar to the following in the alert log:
ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock datafile 4 - see DBWR trace file
ORA-01110: datafile 4: '/Disk1/oracle/oradata/payroll/tbs_x.dbf' -
Use standby-database as standby-database after creating primary database
First I will tell what we have and what I did.
We have a standby database on server B that was not started. We did a cold copy to server A and opened the database on server A as primary database. Now we wonder if we can use the standby database on server B to apply archives without creating a new copy to server B. The standby database on server B is waiting for sequence 2900, while the database on server A started with a new sequence.
Is it possible somehow to start the recovery again on the standby databsae on server B without creating a new copy?
Forgot to mention: the DB version is 9.2.0.7.0, the OS is SunOS 5.10.
Writing separate code is not needed; Oracle will copy the files if log_archive_dest is set to the right value.
However, it is solved!
What I did:
- create standby controlfile on the primary
- copy standby controlfile to standby database
- on standby: startup nomount / alter database mount standby database
- on primary: set log_archive_dest_2 to trigger the copying of archivelogs to the standby
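Those steps, sketched as commands (the controlfile path /tmp/stby.ctl and the service name stby are assumptions):

```sql
-- On the (new) primary:
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stby.ctl';
-- copy /tmp/stby.ctl over all controlfile locations on the standby, then:

-- On the standby:
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- On the primary, ship the archives to the standby service:
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby';
```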
Right now the standby database is recovering. -
Recover standby database after primary failed
Hi,
I have an 11g setup with 2 standby databases. My scenario is: I'm doing a failover to one standby (the new primary) and converting the old primary into a standby. The question is: what is the status of the other standby? Do I have to create a new standby, or can I recover it using the flashback option?
regards,
jp
'11g' is not a version, it is a marketing label. You need to post your version in the format <x>.<x>.<x>.<x>
Yes, I know this is asked much.
Also, your question sadly lacks details, as in this version of Oracle you can cascade standby databases (standby 1 can cascade to standby 2)
This begs the simple question:
Did you try?
If so, what happened?
If you didn't try, why didn't you try? What can happen?
Sybrand Bakker
Senior Oracle DBA -
Start recovery process on standby database after Windows reboot
Hi all,
I have a data guard configuration on Oracle 10.2.0.4 for Windows.
What I need is to mount and start recovery process automatically after Windows Server restart.
How can I do that ?
Thanks in advance.
Here's what I found:
REM Wait 60 seconds for services to come up
REM (timeout is built into Windows; sleep is not a standard command)
timeout /t 60
REM Start the database
%ORACLE_HOME%\bin\sqlplus -s "/ as sysdba" @startupmount.sql
exit

startupmount.sql -- Copy this under the ORACLE_HOME/bin directory
-- mount the standby and start managed recovery
startup mount
alter database recover managed standby database disconnect from session;
exit
Sorry, no way to test on my end.
The other thought is to hack oradim
But I'm thinking this might only do the startup mount.
Best Regards
mseberg
Edited by: mseberg on Feb 15, 2012 1:10 PM
This would need some work but it may help
@echo off
REM Check whether the listener service is already running
sc query "OracleOraDb11g_home1TNSListener" | findstr /i running
IF "%ERRORLEVEL%"=="0" (GOTO :RUNNING) ELSE (GOTO :STOPPED)
:STOPPED
ECHO NOT RUNNING
net start "OracleOraDb11g_home1TNSListener"
sc query "OracleServiceSCT" | findstr /i running
IF NOT "%ERRORLEVEL%"=="0" net start OracleServiceSCT
GOTO :END
:RUNNING
ECHO RUNNING
net stop "OracleOraDb11g_home1TNSListener"
sc query "OracleServiceSCT" | findstr /i running
IF "%ERRORLEVEL%"=="0" net stop OracleServiceSCT
GOTO :END
:END
Edited by: mseberg on Feb 15, 2012 1:17 PM -
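One way to run such a script automatically at boot is to register it with Task Scheduler. The task name and script path below are assumptions for illustration:

```
REM Register the batch file to run at system start under the SYSTEM account
schtasks /create /tn "StartStandbyRecovery" /tr "C:\scripts\start_standby.bat" /sc onstart /ru SYSTEM
```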
How to delete archive logs on the standby database....in 9i
Hello,
We are planning to setup a data guard (Maximum performance configuration ) between two Oracle 9i databases on two different servers.
The archive logs on the primary server are deleted via an RMAN job based on a policy; I'm just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are, say, 2 days old the proper approach, or is there a built-in Data Guard option that would somehow allow archive logs that are no longer needed, or are two days old, to be deleted automatically?
thanks,
C.We are planning to setup a data guard (Maximum performance configuration ) between two Oracle 9i databases on two different servers.
The archive logs on the primary servers are deleted via a RMAN job bases on a policy , just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are say 2 days old the proper approach or is there a built in data guard option that would some how allow archive logs that are no longer needed or are two days old deleted automatically.From 10g there is option to purge on deletion policy when archives were applied. Check this note.
*Configure RMAN to purge archivelogs after applied on standby [ID 728053.1]*
Still, you are on 9i, so you need to schedule an RMAN job or shell script to delete the archives.
Before deleting archives
1) you need to check whether all the archives have been applied or not
2) then you can remove all the archives completed before 'sysdate-2':
RMAN> delete archivelog all completed before 'sysdate-2';
As per your requirement.
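A sketch of those two steps, using SQL*Plus on the standby to verify applied status before pruning with RMAN (the 2-day window follows the thread; adjust to your retention needs):

```sql
-- 1) On the standby: list archives older than 2 days that are NOT yet applied.
--    If this returns rows, they are not safe to delete.
SELECT thread#, sequence#, first_time
  FROM v$archived_log
 WHERE applied = 'NO'
   AND completion_time < SYSDATE - 2;

-- 2) If step 1 returns no rows, prune from RMAN:
-- RMAN> delete archivelog all completed before 'sysdate-2';
```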