Dataguard logs
Hi,
I have some confusion about redo log creation for the standby database.
I would like to know whether we should copy the online redo log files along with the datafiles to the standby database location, or whether we should create standby redo log files and place those in the standby location instead.
If we have both the copied online redo logs and newly created standby redo logs in the same standby location, will it cause any problem?
Regards
orauser123
You will not be able to copy the online redo log files if you create the standby DB from hot (online) Oracle backups.
Standby redo logs are a different concept from online redo logs: they are created to support the Maximum Protection/Maximum Availability modes and real-time apply. It is good to have them on both the primary and standby DBs in case you want to configure switchover; for a failover-only configuration, standby redo logs need to be created only on the standby DB server.
Also, if you create the standby DB without the online redo logs being copied over, then try bouncing the standby after creating it: you will find that Oracle creates the online redo logs in the location specified in the standby DB's control file. Try checking this scenario at your end as well.
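For reference, a minimal sketch of the statements involved (the group numbers, file paths, and size below are hypothetical; the usual guideline is to create one more standby redo log group than you have online redo log groups, sized the same as the online logs):

```sql
-- On the standby (and optionally the primary, if you plan switchovers),
-- add standby redo log groups sized like the online redo logs:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/u02/oradata/stby/stby_redo_g4.dbf') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  ('/u02/oradata/stby/stby_redo_g5.dbf') SIZE 50M;

-- Verify: V$LOGFILE distinguishes ONLINE from STANDBY members
SELECT group#, type, member FROM v$logfile ORDER BY group#;
```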
Amar
Similar Messages
-
Hi,
We have Data Guard on Oracle 10g. The logs are shipped from primary to the physical standby, but when I look at "Last Applied Log" under EM on the primary DB, I see 0. The logs are shipping but not being applied to the physical standby.
I restarted the standby to the mount stage.
I can tnsping both databases from other servers.
***** on primary *****
SELECT name, value FROM gv$parameter WHERE name = 'log_archive_dest_state_1';
NAME
VALUE
log_archive_dest_state_1
enable
SELECT name, value FROM gv$parameter WHERE name = 'log_archive_dest_state_2';
NAME
VALUE
log_archive_dest_state_2
ENABLE
SELECT name, value FROM gv$parameter WHERE name = 'standby_archive_dest';
NAME
VALUE
standby_archive_dest
select status, error from v$archive_dest where dest_id=2;
STATUS ERROR
VALID
select switchover_status from v$database;
SWITCHOVER_STATUS
SESSIONS ACTIVE
select protection_mode, protection_level, database_role from v$database;
PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE
MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PRIMARY
SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
MAX(SEQUENCE#) THREAD#
4483 1
SELECT dest_id, valid_type, valid_role, valid_now FROM gv$archive_dest;
DEST_ID VALID_TYPE VALID_ROLE VALID_NOW
1 ONLINE_LOGFILE ALL_ROLES YES
2 ONLINE_LOGFILE PRIMARY_ROLE YES
***** on physical stand by *****
select protection_mode, protection_level, database_role from v$database;
PROTECTION_MODE PROTECTION_LEVEL DATABASE_ROLE
MAXIMUM PERFORMANCE MAXIMUM PERFORMANCE PHYSICAL STANDBY
SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
MAX(SEQUENCE#) THREAD#
4483 1
Can anyone advise what I can check and how I can get the logs to apply on the standby? The standby is in mount stage. Thanks
The alert log for the primary does not show any errors:
LNS: Standby redo logfile selected for thread 1 sequence 4520 for destination LOG_ARCHIVE_DEST_2
This is in the alert log for standby
Primary database is in MAXIMUM PERFORMANCE mode
RFS[11]: Successfully opened standby log 5: '/u02/oradata/prd1/stby_redo_g5_m1.dbf'
On standby I have the MRP process running:
$ ps -ef|grep mrp
oracle 10957 1 0 15:26:48 ? 0:01 ora_mrp0_prd1
SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID=2 ;
RECOVERY_MODE
MANAGED REAL TIME APPLY
This is from v$dataguard_status
Remote File Server Warning 0 167 0 NO 20-DEC-07
RFS[11]: Successfully opened standby log 5: '/u02/oradata/prd1/stby_redo_g5_m1.dbf'
Should I try this, as first suggested?
alter database recover managed standby database parallel 2 using current logfile disconnect; -
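Once managed recovery is running with real-time apply, progress can be checked with queries along these lines (a generic sketch, not output from this system):

```sql
-- MRP0 should show APPLYING_LOG once real-time apply is active
SELECT process, status, thread#, sequence#, block#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';

-- Compare the highest sequence received vs the highest applied
SELECT thread#, MAX(sequence#) AS max_received
FROM   v$archived_log GROUP BY thread#;

SELECT thread#, MAX(sequence#) AS max_applied
FROM   v$archived_log WHERE applied = 'YES' GROUP BY thread#;
```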
Oracle Dataguard - Logs apply issue
Hi,
Due to some issue our archive logs are not being applied from production to standby. We have our standby in London. I don't see any error messages and am not sure what is happening; I have a case open with Oracle Support.
We needed to shut down all of our production and DR last Sunday. Once it was back up, it stopped sending logs. Our production server is a 3-node RAC.
I just need to know how to apply archive logs from production to the standby in London.
Appreciate your quick response.
Thanks
Hello,
Please post the result:
SELECT MAX(SEQUENCE#), MAX(COMPLETION_TIME), APPLIED FROM V$ARCHIVED_LOG GROUP BY APPLIED;
and
SELECT * FROM V$ARCHIVE_GAP;
Also, you can copy them from the primary to the standby and register them one by one:
ALTER DATABASE REGISTER LOGFILE 'LOG_FILE_NAME';
If the gap sequence is very large, say 20 or 30 days, I would recreate the standby by cloning it with RMAN.
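Spelled out, the manual gap-resolution loop looks roughly like this (the archive log path and file name below are hypothetical):

```sql
-- 1. On the standby: find the missing sequence range
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- 2. Copy the missing archive logs from the primary at the OS level
--    (e.g. scp), then register each one on the standby so MRP sees it:
ALTER DATABASE REGISTER LOGFILE '/u02/arch/1_4485_123456789.arc';

-- 3. Restart managed recovery
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```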
Kind regards
Mohamed -
Wonder if anyone can help me with a question?
I am new to Data Guard and only recently set up my first implementation of a primary and standby Oracle 11g database.
It's all set up correctly, i.e. no gap sequences showing, no errors in the alert logs, and I have successfully tested a switchover and switch back.
I wanted to re-test that the archive logs were going across to the standby database OK; unfortunately I performed an ALTER SYSTEM SWITCH LOGFILE on the standby database instead of the primary.
No errors are reported anywhere, no archive log sequence gaps or errors in the alert logs, but I am wondering if this will cause a problem the next time I have to fail over to the standby database?
Apologies for my lack of knowledge; I am new to Data Guard and have only been a DBA for a couple of years, and have not had time to read the 500-page Data Guard book yet.
Thanks in Advance
First, you have to know what happens when a log switch occurs, whether manual or internal.
All data and changes are in the online redo log files; once a log switch occurs, automatically or forced, that information is dumped from the online redo log files into the archives.
Now, where is the online redo? There is no concept of online redo data on a standby: with real-time apply you have only standby redo log files, and you cannot even switch standby redo log files.
So, first of all, that command won't work on a standby; it is applicable only to online redo log files. Online redo exists and is active only on the primary.
So there is nothing to worry about. Just make sure the environments are in sync before performing a switchover.
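A quick sync check before attempting a switchover could look like this (a generic sketch):

```sql
-- On the primary: last sequence generated per thread
SELECT thread#, MAX(sequence#) FROM v$archived_log GROUP BY thread#;

-- On the standby: last sequence applied per thread
SELECT thread#, MAX(sequence#)
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP  BY thread#;

-- On the primary: switchover readiness
SELECT switchover_status FROM v$database;
```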
Hope this helps.
Why are all your questions left unanswered? Please close them and keep the forum clean. -
Hello, I have just set up a physical standby for the first time on 11gR2 and completed it successfully, but the logs are not being shipped to the standby. tnsnames.ora and listener.ora have all the required entries and are working properly, and no errors are seen in the alert logs of either the primary or the standby. I would appreciate any clue to help debug the issue.
Thanks,
Pankaj
Post results from this ( on Primary ):
show parameter LOG_ARCHIVE_DEST_STATE_2
Best Regards
mseberg -
Dataguard Log Transport Services Warning
Please explain to me: what does this Warning on the primary server mean?
SELECT a.facility, a.severity, a.ERROR_CODE, a.MESSAGE
FROM v_$dataguard_status a
where a.facility = 'Log Transport Services'
and a.severity = 'Warning';
result
FACILITY SEVERITY ERROR_CODE
MESSAGE
Log Transport Services Warning 0
LNS: Standby redo logfile selected for thread 1 sequence 136 for destination LOG_ARCHIVE_DEST_2
Log Transport Services Warning 0
ARC0: Standby redo logfile selected for thread 1 sequence 135 for destination LOG_ARCHIVE_DEST_2
In the alert log on the primary there are only messages like:
Fri Jun 20 16:08:11 2008
LNS: Standby redo logfile selected for thread 1 sequence 146 for destination LOG_ARCHIVE_DEST_2
Fri Jun 20 16:08:12 2008
Thread 1 advanced to log sequence 147 (LGWR switch)
Current log# 3 seq# 147 mem# 0: /ora/oradata/testmain/redo31.log
Current log# 3 seq# 147 mem# 1: /ora/oradata/testmain/redo32.log
Fri Jun 20 16:08:12 2008
LNS: Standby redo logfile selected for thread 1 sequence 147 for destination LOG_ARCHIVE_DEST_2
Fri Jun 20 16:08:13 2008
Thread 1 advanced to log sequence 148 (LGWR switch)
Current log# 1 seq# 148 mem# 0: /ora/oradata/testmain/redo11.log
Current log# 1 seq# 148 mem# 1: /ora/oradata/testmain/redo12.log
and the alert log on the standby looks like:
Fri Jun 20 16:06:29 2008
Recovery of Online Redo Log: Thread 1 Group 5 Seq 145 Reading mem 0
Mem# 0: /ora/oradata/maintest/redo51std.log
Mem# 1: /ora/oradata/maintest/redo52std.log
Fri Jun 20 16:08:11 2008
Primary database is in MAXIMUM PERFORMANCE mode
RFS[1]: Successfully opened standby log 4: '/ora/oradata/maintest/redo41std.log'
Fri Jun 20 16:08:11 2008
Media Recovery Waiting for thread 1 sequence 146 (in transit)
Fri Jun 20 16:08:11 2008
Recovery of Online Redo Log: Thread 1 Group 4 Seq 146 Reading mem 0
Mem# 0: /ora/oradata/maintest/redo41std.log
Mem# 1: /ora/oradata/maintest/redo42std.log
I think everything is working properly, but I do not understand why the message severity is Warning? -
We have a primary site on HP-UX PA-RISC and the standby site is on Itanium. Can I create a physical or logical standby? Is this supported?
Check this link:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/standby.htm#i72053
Amardeep Sidhu
http://www.amardeepsidhu.com -
Standby MRP0 process- Wait for Log
Hi ,
I have a standby Oracle 11.2.0.3 DB on an AIX server.
After configuring Data Guard, the log apply service fails on the standby DB. Following are the results from my standby and primary DBs.
On Primary
select process, status, sequence#, block# from v$managed_standby;
PROCESS STATUS SEQUENCE# BLOCK#
ARCH CLOSING 52 1
ARCH CLOSING 51 1
ARCH WRITING 2 38913
ARCH CLOSING 52 1
LNS WRITING 54 1003
On Standby
select process, status, sequence#, block# from v$managed_standby;
PROCESS STATUS SEQUENCE# BLOCK#
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
ARCH CONNECTED 0 0
RFS RECEIVING 2 6145
RFS IDLE 54 1025
RFS IDLE 0 0
MRP0 WAIT_FOR_LOG 2 0
On the primary, Data Guard status shows the output below:
select message from V$DATAGUARD_STATUS order by TIMESTAMP;
ARCH: Completed archiving thread 1 sequence 53 (1093671-1094088)
ARCH: Beginning to archive thread 1 sequence 53 (1093671-1094088)
LNS: Beginning to archive log 3 thread 1 sequence 54
MESSAGE
LNS: Completed archiving log 2 thread 1 sequence 53
On Standby DB
select message from V$DATAGUARD_STATUS order by TIMESTAMP;
MESSAGE
RFS[2]: Assigned to RFS process 659510
RFS[2]: No standby redo logfiles available for thread 1
RFS[3]: Assigned to RFS process 1110268
RFS[3]: No standby redo logfiles available for thread 1
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery starting Real Time Apply
Media Recovery Waiting for thread 1 sequence 2 (in transit)
Please let me know what needs to change to start log apply on the physical standby.
Thanks.
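The repeated "RFS: No standby redo logfiles available for thread 1" messages above usually indicate that standby redo logs are missing (or too few/too small) on the standby. A hedged sketch of adding them (the group number, file name, and size are hypothetical and should match the online redo log size):

```sql
-- On the standby, with managed recovery stopped:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
  ('/u01/oradata/stby/stby_redo_g10.dbf') SIZE 50M;

-- Then restart real-time apply:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```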
Vaishali.
Hi Shivananda,
Please find below output ..
I can tnsping both databases, and when I try to connect from SQL*Plus to the DB it is successful from both sides...
SQL> select severity,error_code,message from v$dataguard_status where dest_id=2;
SEVERITY ERROR_CODE
MESSAGE
Error 1034
PING[ARC2]: Heartbeat failed to connect to standby 'IHISDR'. Error is 1034.
Error 1034
FAL[server, ARC2]: Error 1034 creating remote archivelog file 'IHISDR'
Error 1089
FAL[server, ARC2]: FAL archival, error 1089 closing archivelog file 'IHISDR'
SEVERITY ERROR_CODE
MESSAGE
Warning 1089
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (1089)
Warning 1089
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Error 1089
Error 1089 for archive log file 1 to 'IHISDR'
SEVERITY ERROR_CODE
MESSAGE
Error 1089
FAL[server, ARC0]: FAL archival, error 1089 closing archivelog file 'IHISDR'
7 rows selected.
Thanks . -
Standby can't receive logs from primary after upgrade from 10.2 to 11.2
Hi
I have encountered a problem on the standby after upgrading from 10.2.0.1 to 11.2.0.2. The standby cannot receive logs from the primary; when I look at the Data Guard log it shows the following. It seems the log transport service times out before completing the send of logs to the standby server.
D MON_PROPERTY operation completed
2012-03-20 15:09:14.522 DMON: Timeout posting message to database 2 after having waited for 0 seconds, killing NSV1 (PID=4672)
2012-03-20 15:09:15.524 02001000 622551633 DMON: failed to forward op CTL_GET_STATUS to site standby with error ORA-16662
2012-03-20 15:09:15.524 02001000 622551633 DMON: Database PRIMARY returned ORA-16662
2012-03-20 15:09:15.524 02001000 622551633 for opcode = CTL_GET_STATUS, phase = NULL, req_id = 1.1.622551633
2012-03-20 15:09:15.525 02001000 622551633 DMON: CTL_GET_STATUS operation completed
2012-03-20 15:09:48.676 RSM detected log transport problem: log transport for database 'standby' has the following error.
2012-03-20 15:09:48.676 ORA-16198: Timeout incurred on internal channel during remote archival
2012-03-20 15:09:48.677 RSM0: HEALTH CHECK ERROR: ORA-16737: the redo transport service for standby database "standby" has an error
2012-03-20 15:09:48.789 00000000 622551634 Operation HEALTH_CHECK canceled during phase 2, error = ORA-16778
2012-03-20 15:09:48.790 00000000 622551634 Operation HEALTH_CHECK canceled during phase 2, error = ORA-16778
Does anyone have any ideas?
thanks
kind regard
Salutary(bob)
Edited by: 826034 on Mar 20, 2012 5:31 AM
826034 wrote:
SQL> select severity, error_code,message,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') from v$dataguard_status where dest_id=2;
Error 16198
WARN: ARC0: Terminating pid 31868 hung on an I/O operation
+20-MAR-2012 13:29:32+
SEVERITY ERROR_CODE
MESSAGE
TO_CHAR(TIMESTAMP,'D
Error 16198
WARN: ARC0: Terminating pid 32182 hung on an I/O operation
+20-MAR-2012 13:34:37+
Error 16198
WARN: ARC0: Terminating pid 32393 hung on an I/O operation
SEVERITY ERROR_CODE
MESSAGE
TO_CHAR(TIMESTAMP,'D
+20-MAR-2012 13:39:42+
Error 16198
WARN: ARC0: Terminating pid 32620 hung on an I/O operation
+20-MAR-2012 13:44:47+
Error 16198
SEVERITY ERROR_CODE
MESSAGE
TO_CHAR(TIMESTAMP,'D
WARN: ARC0: Terminating pid 474 hung on an I/O operation
+20-MAR-2012 13:49:52+
Error 16198
WARN: ARC0: Terminating pid 696 hung on an I/O operation
+20-MAR-2012 13:54:57+
thanks
regard
Salutary
Check this thread, it may be helpful; it looks like a similar issue: log shipping problem after upgrade to 11.2.0.2 -
Post switch over in oracle dataguard 11g
Dear Guru,
Switch over has been completed successfully from primary database to standby database.
The new primary database is open and accessible, but it shows its status in v$database as below.
database_role = primary
switchover_status = not allowed
db_unique_name = dg1_stdby
The old primary database, which is now the standby, shows its status in v$database as below.
database_role = physical standby
switchover_status = session active
db_unique_name = dg1_primy
When I check the status in Data Guard broker, it shows the error ORA-16810: multiple errors or warnings detected for the database, for both databases - dg1_primy and dg1_stdby.
When I checked the Data Guard log file on the new primary server it shows:
ORA-16816: incorrect database role
ORA-16700: the standby database has diverged from the primary database
Please guide me on how to resolve this issue.
Thanks & Regards,
Nikunj Thaker
Hi Nikunj,
You can find the scenario in "Problem: Data Guard Broker Switchover fails With ORA-16665 using Active Data Guard" on metalink.oracle.com.
First of all, manually complete the switchover, i.e. restart the databases in their new roles. Note that the final role change has not been recognized by the broker, so you have to rebuild the Data Guard broker configuration once the databases have been restarted:
DGMGRL> remove configuration;
DGMGRL> create configuration ...
DGMGRL> add database ...
DGMGRL> enable configuration;
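Fleshed out, using this post's db_unique_names as connect identifiers (these identifiers are hypothetical - adjust to your actual TNS aliases; the syntax follows standard DGMGRL usage, not the poster's exact commands):

```sql
DGMGRL> remove configuration;
DGMGRL> create configuration 'dg' as primary database is 'dg1_stdby' connect identifier is dg1_stdby;
DGMGRL> add database 'dg1_primy' as connect identifier is dg1_primy maintained as physical;
DGMGRL> enable configuration;
DGMGRL> show configuration;
```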
Best regards,
Orkun Gedik -
Setting up the standby side after a crash (Data Guard)
Hi,
I hope this is the right area to publish my problem...
I can't find something like code tags for the system output, so I'm sorry for the bad formatting.
I have a problem. I use Oracle 11.2.0.3.0 in a Data Guard environment. My primary database crashed and I activated the standby to be the new primary.
After the old primary was repaired I wanted to define it as the new standby. This didn't work because we had disabled flashback logging.
We created the new standby:
rman target sys/password@prim auxiliary sys/password@stby
duplicate target database for standby from active database;
After this we do this on the new standby:
alter database recover managed standby database disconnect from session;
It looks like there is now a working physical standby.
Now I look at the Data Guard broker on the primary:
DGMGRL> show database verbose stby;
Database - stby
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: (unknown)
Real Time Query: OFF
Instance(s):
dbuc4
Properties:
DGConnectIdentifier = 'stby'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '900'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName = 'dbuc4'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=stby)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=stby_DGMGRL.blubb.de)(INSTANCE_NAME=stby)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.arc'
TopWaitEvents = '(monitor)'
Database Status:
ORA-16795: the standby database needs to be re-created
DGMGRL> show database verbose prim;
Database - prim
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
dbuc4
dbuc4stby
Properties:
DGConnectIdentifier = 'prim'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value
Database Status:
SUCCESS
DGMGRL> show database verbose dbuc4stby;
Database - dbuc4stby
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
dbuc4
dbuc4stby
Properties:
DGConnectIdentifier = 'dbuc4stby'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value
Database Status:
SUCCESS
DGMGRL> show configuration
Configuration - dg
Protection Mode: MaxPerformance
Databases:
prim - Primary database
stby - Physical standby database (disabled)
ORA-16795: the standby database needs to be re-created
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
On the stby side it looks like:
DGMGRL> show configuration
ORA-16795: Die Standby-Datenbank muss neu erstellt werden (The standby database needs to be recreated)
Do I have to create a new Data Guard configuration?
I don't know how to get this to work.
Thx fuechsin
Hi,
first of all: big thanks for your answer!
I think this was a bad idea, too. But there was not enough space, so we decided to turn it off without thinking of the consequences. :/
When the new hardware arrives I will enable flashback and never turn it off.
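For when that time comes, a hedged sketch of enabling Flashback Database (the path and size below are hypothetical; a fast recovery area is a prerequisite):

```sql
-- Flashback Database requires a configured fast recovery area
ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
ALTER SYSTEM SET db_recovery_file_dest = '/Daten/fra';

-- On 11.2 this can be done while the database is open
ALTER DATABASE FLASHBACK ON;

-- Verify
SELECT flashback_on FROM v$database;
```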
The dataguard-log on the stby says:
01/23/2014 11:56:13
>> Starting Data Guard Broker bootstrap <<
Broker Configuration File Locations:
dg_broker_config_file1 = "/Daten/stby/stby_dgbroker1.dat"
dg_broker_config_file2 = "/Daten/stby/stby_dgbroker2.dat"
01/23/2014 11:56:17
Database needs to be reinstated or re-created, Data Guard broker ready
I want to try to delete the configuration on the prim and stby sides and reconfigure it. But I don't know whether there are side effects on the working prim side - it is a production system.
Best regards,
fuechsin -
Error: ORA-16751: resource guard encountered errors in switchover to primary
Hello ,
I have just completed Data Guard configuration for one standby server. However, when I try to switch over, I get the following errors in the Data Guard log file.
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK ERROR: ORA-16816: incorrect database role
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK ERROR: ORA-16783: instance p11 not open for read and write access
DG 2011-07-09-20:08:58 0 2 0 RSM Warning: cannot find the destinationsetting in v$archive_dest for database 'p11_disaster'.
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16728: consistency check for property LogXptMode found ORA-16777 error
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16777: unable to find the destination entry of a standby database in V$ARCHIVE_DEST
DG 2011-07-09-20:08:58 0 2 0 RSM Warning: cannot find the destinationsetting in v$archive_dest for database 'p11_disaster'.
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16728: consistency check for property DelayMins found ORA-16777 error
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16777: unable to find the destination entry of a standby database in V$ARCHIVE_DEST
DG 2011-07-09-20:08:58 0 2 0 RSM Warning: cannot find the destinationsetting in v$archive_dest for database 'p11_disaster'.
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16728: consistency check for property Binding found ORA-16777 error
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16777: unable to find the destination entry of a standby database in V$ARCHIVE_DEST
DG 2011-07-09-20:08:58 0 2 0 RSM Warning: cannot find the destinationsetting in v$archive_dest for database 'p11_disaster'.
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16728: consistency check for property MaxFailure found ORA-16777 error
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16777: unable to find the destination entry of a standby database in V$ARCHIVE_DEST
DG 2011-07-09-20:08:58 0 2 0 RSM Warning: cannot find the destinationsetting in v$archive_dest for database 'p11_disaster'.
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16728: consistency check for property MaxConnections found ORA-16777 error
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16777: unable to find the destination entry of a standby database in V$ARCHIVE_DEST
DG 2011-07-09-20:08:58 0 2 0 RSM Warning: cannot find the destinationsetting in v$archive_dest for database 'p11_disaster'.
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16728: consistency check for property ReopenSecs found ORA-16777 error
DG 2011-07-09-20:08:58 0 2 0 RSM0: HEALTH CHECK WARNING: ORA-16777: unable to find the destination entry of a standby database in V$ARCHIVE_DEST
DG 2011-07-09-20:08:58 0 2 756068556 Operation CTL_GET_STATUS cancelled during phase 2, error = ORA-16810
DG 2011-07-09-20:08:58 0 2 756068556 Operation CTL_GET_STATUS cancelled during phase 2, error = ORA-16810
I have tried several times; however, it changes the role of the primary DB to physical standby, which I then change back to primary with "ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;".
Here is the DGMGRL command prompt output.
C:\Documents and Settings\p11adm>dgmgrl
DGMGRL for 64-bit Windows: Version 10.2.0.2.0 - 64bit Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/windows123@to_primary
Connected.
DGMGRL> show database verbose p11_primary;
Database
Name: p11_primary
Role: PRIMARY
Enabled: YES
Intended State: ONLINE
Instance(s):
p11
Properties:
InitialConnectIdentifier = 'to_primary'
LogXptMode = 'ASYNC'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '10'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'PROD104'
SidName = 'p11'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.104)(
PORT=1527))'
StandbyArchiveLocation = 'C:\oracle\P11\oraarch\P11arch'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'ARC%S_%R.%T'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'
Current status for "p11_primary":
SUCCESS
DGMGRL> show database verbose p11_disaster;
Database
Name: p11_disaster
Role: PHYSICAL STANDBY
Enabled: YES
Intended State: ONLINE
Instance(s):
p11
Properties:
InitialConnectIdentifier = 'to_standby'
LogXptMode = 'ARCH'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '10'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'PROD102'
SidName = 'p11'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.102)(
PORT=1527))'
StandbyArchiveLocation = 'C:\oracle\P11\oraarch\P11arch'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'ARC%S_%R.%T'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'
Current status for "p11_disaster":
SUCCESS
DGMGRL> switchover to p11_disaster;
Performing switchover NOW, please wait...
Error: ORA-16751: resource guard encountered errors in switchover to primary database
Failed.
Unable to switchover, primary database is still "p11_primary"
Here are the changed pfiles of both DBs.
Pfile changes on the primary:
*.db_name='p11'
*.db_unique_name='p11_primary'
*.dg_broker_start=true
*.dml_locks=4000
*.event='10191 trace name context forever, level 1'
*.FILESYSTEMIO_OPTIONS='setall'
*.job_queue_processes=1
*.local_listener='(ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.104)(PORT = 1527))'
*.log_archive_config='dg_config=(p11_disaster,p11_primary)'
*.log_archive_dest_1='LOCATION=C:\oracle\P11\oraarch\P11arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_max_processes=10
*.log_archive_min_succeed_dest=1
*.log_buffer=1048576
*.log_checkpoint_interval=0
*.log_checkpoints_to_alert=true
*.open_cursors=800
*.parallel_execution_message_size=16384
*.pga_aggregate_target=347917516
*.processes=80
*.query_rewrite_enabled='false'
*.recyclebin='off'
*.remote_login_passwordfile='exclusive'
*.remote_os_authent=true
Pfile of the standby server (a few parameters were created by DGMGRL itself):
*.db_name='P11'
*.db_unique_name='P11_disaster'
*.dg_broker_start=TRUE
*.dml_locks=4000
*.event='10191 trace name context forever, level 1'
*.fal_client='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.102)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=p11_disaster_XPT)(INSTANCE_NAME=p11)(SERVER=dedicated)))'
*.fal_server='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.104)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=p11_primary_XPT)(SERVER=dedicated)))'
*.FILESYSTEMIO_OPTIONS='setall'
*.job_queue_processes=1
*.local_listener='(ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.102)(PORT = 1527))'
*.log_archive_config='DG_CONFIG= (P11_primary, P11_disaster)'
*.log_archive_dest_1='LOCATION="C:\oracle\P11\oraarch\P11arch", VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
p11.log_archive_dest_1='location="C:\oracle\P11\oraarch\P11arch"','valid_for=(ALL_LOGFILES,ALL_ROLES)'
*.log_archive_dest_state_1='enable'
p11.log_archive_dest_state_1='ENABLE'
p11.log_archive_format='ARC%S_%R.%T'
*.log_archive_max_processes=10
*.log_archive_min_succeed_dest=1
p11.log_archive_trace=0
*.log_buffer=1048576
*.log_checkpoint_interval=0
*.log_checkpoints_to_alert=true
*.open_cursors=800
*.parallel_execution_message_size=16384
*.pga_aggregate_target=207827763
*.processes=80
*.query_rewrite_enabled='false'
*.recyclebin='off'
*.remote_login_passwordfile='exclusive'
*.remote_os_authent=true
*.replication_dependency_tracking=false
*.sessions=96
*.sga_max_size=311741644
*.shared_pool_reserved_size=15587082
*.shared_pool_size=300870822
*.sort_area_retained_size=0
*.sort_area_size=2097152
p11.standby_archive_dest='C:\oracle\P11\oraarch\P11arch'
*.standby_file_management='AUTO'
Please advise.
Thanks & Regards
Edited by: dharm.singh on Jul 9, 2011 7:48 AM
I have added the standby archive log destination as per your suggestion: p11.standby_archive_dest='C:\oracle\P11\oraarch\P11arch'
However, I am now getting these errors in the dgmgrl log file while doing the switchover.
DG 2011-07-11-14:39:24 0 2 756225487 DMON: CTL_ENABLE of p11
DG 2011-07-11-14:39:24 0 2 756225487 requires reset of LOG XPT Engine
DG 2011-07-11-14:39:24 0 2 756225487 on Site p11_primary
DG 2011-07-11-14:39:24 0 2 0 Reset Log Transport Resource: SetState ONLINE, phase BUILD-UP, External Cond ENABLE
DG 2011-07-11-14:39:24 0 2 0 Set log transport destination: SetState ONLINE, phase BUILD-UP, External Cond ENABLE
DG 2011-07-11-14:39:24 0 2 0 Executing SQL [alter system set log_archive_dest_2 = 'service="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.102)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=p11_disaster_XPT)(INSTANCE_NAME=p11)(SERVER=dedicated)))"', ' LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=0 max_connections=1 reopen=300 db_unique_name="p11_disaster" register net_timeout=180 valid_for=(online_logfile,primary_role)']
DG 2011-07-11-14:39:24 0 2 0 SQL [alter system set log_archive_dest_2 = 'service="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.102)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=p11_disaster_XPT)(INSTANCE_NAME=p11)(SERVER=dedicated)))"', ' LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=0 max_connections=1 reopen=300 db_unique_name="p11_disaster" register net_timeout=180 valid_for=(online_logfile,primary_role)'] Executed successfully
DG 2011-07-11-14:39:24 0 2 0 Executing SQL [alter system set log_archive_dest_state_2 = 'ENABLE']
DG 2011-07-11-14:39:24 0 2 0 SQL [alter system set log_archive_dest_state_2 = 'ENABLE'] Executed successfully
DG 2011-07-11-14:39:24 0 2 0 Executing SQL [ALTER SYSTEM SWITCH ALL LOGFILE]
DG 2011-07-11-14:39:26 0 2 0 SQL [ALTER SYSTEM SWITCH ALL LOGFILE] Executed successfully
DG 2011-07-11-14:39:26 0 2 0 DMON: site 01001000, instance 00000001 queuing healthcheck lock request
DG 2011-07-11-14:39:26 0 2 0 DMON: Releasing healthcheck master lock
DG 2011-07-11-14:39:26 0 2 0 DMON: Health check master lock conversion successful
DG 2011-07-11-14:39:26 0 2 0 DMON: a process acquired the healthcheck master lock
DG 2011-07-11-14:39:26 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:39:26 0 2 756225487 DMON: status from rfi_post_instances() for ENABLE = ORA-00000
DG 2011-07-11-14:39:26 0 2 0 req_id 1.1.756225487, opcode CTL_ENABLE, phase END, flags 5
DG 2011-07-11-14:39:26 0 2 756225487 DMON: ENABLE Complete, Object p11
DG 2011-07-11-14:39:26 0 2 756225487 enabled in State ONLINE
DG 2011-07-11-14:39:26 0 2 756225487 rfm_inst_phase_dispatch 16 END phase processing
DG 2011-07-11-14:39:26 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:39:26 0 2 0 req_id 1.1.756225487, opcode CTL_ENABLE, phase END
DG 2011-07-11-14:39:26 0 2 756225487 DMON: CTL_ENABLE operation completed
DG 2011-07-11-14:39:26 0 2 756225487 DMON: Entered rfm_release_chief_lock for CTL_ENABLE
DG 2011-07-11-14:39:29 0 2 756225495 DMON: ENUM_DRC: success. (len = 652)
DG 2011-07-11-14:39:29 0 2 756225495 DMON: ENUM_DRC operation completed
DG 2011-07-11-14:39:29 1000 2 756225496 DMON: MON_PROPERTY operation completed
DG 2011-07-11-14:39:29 1000 2 756225497 DMON: Entered rfm_get_chief_lock() for MON_VERIFY, reason 0
DG 2011-07-11-14:39:29 1000 2 756225497 DMON: chief lock convert for client healthcheck
DG 2011-07-11-14:39:29 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:39:29 0 2 0 req_id 1.1.756225497, opcode MON_VERIFY, phase BEGIN, flags 5
DG 2011-07-11-14:39:30 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:39:30 0 2 0 req_id 1.1.756225497, opcode MON_VERIFY, phase BEGIN
DG 2011-07-11-14:39:30 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:39:30 0 2 0 req_id 1.1.756225497, opcode MON_VERIFY, phase RESYNCH, flags 10005
DG 2011-07-11-14:39:30 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:39:30 0 2 0 req_id 1.1.756225497, opcode MON_VERIFY, phase RESYNCH
DG 2011-07-11-14:39:30 1000 2 756225497 DMON: MON_VERIFY operation completed
DG 2011-07-11-14:39:30 1000 2 756225497 DMON: Entered rfm_release_chief_lock for MON_VERIFY
DG 2011-07-11-14:39:30 1000 2 756225500 DMON: CTL_GET_STATUS operation completed
DG 2011-07-11-14:39:44 1000 2 756225501 DMON: Entered rfm_get_chief_lock() for MON_VERIFY, reason 0
DG 2011-07-11-14:39:44 1000 2 756225501 DMON: chief lock convert for client healthcheck
DG 2011-07-11-14:39:44 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:39:44 0 2 0 req_id 1.1.756225501, opcode MON_VERIFY, phase BEGIN, flags 5
DG 2011-07-11-14:39:45 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:39:45 0 2 0 req_id 1.1.756225501, opcode MON_VERIFY, phase BEGIN
DG 2011-07-11-14:39:45 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:39:45 0 2 0 req_id 1.1.756225501, opcode MON_VERIFY, phase RESYNCH, flags 10005
DG 2011-07-11-14:39:45 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:39:45 0 2 0 req_id 1.1.756225501, opcode MON_VERIFY, phase RESYNCH
DG 2011-07-11-14:39:45 1000 2 756225501 DMON: MON_VERIFY operation completed
DG 2011-07-11-14:39:45 1000 2 756225501 DMON: Entered rfm_release_chief_lock for MON_VERIFY
DG 2011-07-11-14:39:45 0 2 756225504 DMON: ENUM_DRC: success. (len = 652)
DG 2011-07-11-14:39:45 0 2 756225504 DMON: ENUM_DRC operation completed
DG 2011-07-11-14:39:45 1000 2 756225505 DMON: MON_PROPERTY operation completed
DG 2011-07-11-14:39:45 2000000 3 756225506 DMON: Entered rfm_get_chief_lock() for CTL_SWITCH, reason 0
DG 2011-07-11-14:39:45 2000000 3 756225506 DMON: chief lock convert for switchover
DG 2011-07-11-14:39:45 0 2 0 Executing SQL [ALTER SYSTEM ARCHIVE LOG CURRENT]
DG 2011-07-11-14:39:53 0 2 0 SQL [ALTER SYSTEM ARCHIVE LOG CURRENT] Executed successfully
DG 2011-07-11-14:39:53 2000000 3 756225506 DMON: posting primary instances for SWITCHOVER phase 1
DG 2011-07-11-14:39:53 2000000 3 756225506 DMON: dispersing message to standbys for SWITCHOVER phase 1
DG 2011-07-11-14:39:53 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:39:53 0 2 0 req_id 1.1.756225506, opcode CTL_SWITCH, phase BEGIN, flags 5
DG 2011-07-11-14:39:53 2000000 3 756225506 DMON: Entered rfmsoexinst for phase 1
DG 2011-07-11-14:39:53 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:39:53 0 2 0 req_id 1.1.756225506, opcode CTL_SWITCH, phase BEGIN
DG 2011-07-11-14:39:53 2000000 3 756225506 DMON: CLSR being notified to disable services and shutdown instances as appropriate for SWITCHOVER.
DG 2011-07-11-14:39:53 2000000 3 756225506 DMON: posting primary instances for SWITCHOVER phase 2
DG 2011-07-11-14:39:53 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:39:53 2000000 3 756225506 DMON: dispersing message to standbys for SWITCHOVER phase 2
DG 2011-07-11-14:39:53 0 2 0 req_id 1.1.756225506, opcode CTL_SWITCH, phase TEARDOWN, flags 5
DG 2011-07-11-14:39:53 2000000 3 756225506 DMON: Entered rfmsoexinst for phase 2
DG 2011-07-11-14:39:53 0 2 0 RSM 0 received SETSTATE request: rid=0x01041000, sid=0, phid=1, econd=2, sitehndl=0x02001000
DG 2011-07-11-14:39:53 0 2 0 Log Transport Resource: SetState OFFLINE, phase TEAR-DOWN, External Cond SWITCH-OVER-PHYS_STBY
DG 2011-07-11-14:39:53 0 2 0 RSM 0 received SETSTATE request: rid=0x01012000, sid=4, phid=1, econd=2, sitehndl=0x02001000
DG 2011-07-11-14:39:53 0 2 0 Database Resource[IAM=PRIMARY]: SetState PHYSICAL-APPLY-ON, phase TEAR-DOWN, External Cond SWITCH-OVER-PHYS_STBY, Target Site Handle 0x02001000
DG 2011-07-11-14:39:53 0 2 0 Executing SQL [ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN]
DG 2011-07-11-14:41:32 0 2 0 SQL [ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN] Executed successfully
DG 2011-07-11-14:41:32 0 2 0 Executing SQL [alter system set log_archive_dest_2 = '']
DG 2011-07-11-14:41:32 0 2 0 SQL [alter system set log_archive_dest_2 = ''] Executed successfully
DG 2011-07-11-14:41:32 0 2 0 Executing SQL [alter system set log_archive_dest_state_2 = 'ENABLE']
DG 2011-07-11-14:41:32 0 2 0 SQL [alter system set log_archive_dest_state_2 = 'ENABLE'] Executed successfully
DG 2011-07-11-14:41:32 0 2 0 Database Resource SetState succeeded
DG 2011-07-11-14:41:32 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:41:32 0 2 0 req_id 1.1.756225506, opcode CTL_SWITCH, phase TEARDOWN
DG 2011-07-11-14:41:32 2000000 3 756225506 DMON: posting primary instances for SWITCHOVER phase 2
DG 2011-07-11-14:41:32 2000000 3 756225506 DMON: dispersing message to standbys for SWITCHOVER phase 2
DG 2011-07-11-14:41:32 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:41:32 0 2 0 req_id 1.1.756225506, opcode CTL_SWITCH, phase TEARDOWN, flags 5
DG 2011-07-11-14:41:32 2000000 3 756225506 DMON: Entered rfmsoexinst for phase 2
DG 2011-07-11-14:41:32 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:41:32 0 2 0 req_id 1.1.756225506, opcode CTL_SWITCH, phase TEARDOWN
DG 2011-07-11-14:41:47 2000000 3 756225506 Operation CTL_SWITCH cancelled during phase 2, error = ORA-16751
DG 2011-07-11-14:41:47 2000000 3 756225506 DMON: CLSR being notified to enable services and startup primary instances as appropriate for SWITCHOVER.
DG 2011-07-11-14:41:47 0 2 0 DMON: site 01001000, instance 00000001 queuing healthcheck lock request
DG 2011-07-11-14:41:47 0 2 0 DMON: Releasing healthcheck master lock
DG 2011-07-11-14:41:47 0 2 0 DMON: Health check master lock conversion successful
DG 2011-07-11-14:41:47 0 2 0 DMON: a process acquired the healthcheck master lock
DG 2011-07-11-14:41:47 2000000 3 756225506 DMON: posting primary instances for SWITCHOVER phase 5
DG 2011-07-11-14:41:47 2000000 3 756225506 DMON: SWITCHOVER Aborted due to errors
DG 2011-07-11-14:41:47 0 2 0 INSV: Received message for inter-instance publication
DG 2011-07-11-14:41:47 2000000 3 756225506 Site named: p11_primary is still primary
DG 2011-07-11-14:41:48 0 2 0 req_id 1.1.756225506, opcode CTL_SWITCH, phase END, flags 5
DG 2011-07-11-14:41:48 2000000 3 756225506 error = ORA-16751
DG 2011-07-11-14:41:48 2000000 3 756225506 DMON: dispersing message to standbys for SWITCHOVER phase 5
DG 2011-07-11-14:41:48 0 2 0 INSV: All instances have replied for message
DG 2011-07-11-14:41:48 0 2 0 DMON: site 01001000, instance 00000001 queuing healthcheck lock request
DG 2011-07-11-14:41:48 0 2 0 req_id 1.1.756225506, opcode CTL_SWITCH, phase END
DG 2011-07-11-14:41:48 0 2 0 DMON: Releasing healthcheck master lock
DG 2011-07-11-14:41:48 0 2 0 DMON: Health check master lock conversion successful
DG 2011-07-11-14:41:48 0 2 0 DMON: a process acquired the healthcheck master lock
DG 2011-07-11-14:41:48 2000000 3 756225506 Operation CTL_SWITCH cancelled during phase 5, error = ORA-16751
DG 2011-07-11-14:41:48 2000000 3 756225506 DMON: CTL_SWITCH operation completed
DG 2011-07-11-14:41:48 2000000 3 756225506 DMON: Entered rfm_release_chief_lock for CTL_SWITCH
Please advise. -
Standby log files in Oracle Dataguard
Hi,
What is the difference between standby log files and online redo log files in a Dataguard environment?
What is the use of standby log files?
Thanks,
Charith.
You're probably familiar with the Online Redo Logs (ORLs). Transaction changes are written from the Log Buffer to the ORLs by the LGWR process.
If you are setting up a physical standby, then you will want to create Standby Redo Logs (SRLs) in the standby database. When SRLs are in place, a process called LNS transports redo from the Log Buffer of the primary to the RFS process on the standby, which writes the redo to the SRLs. If the SRLs do not exist, RFS can't do this. The biggest benefit of using SRLs is that you will experience much less data loss, even in MAX PERFORMANCE mode: redo is shipped continuously, so you don't have to wait for ARCH to transport a full archived redo log.
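As a sketch (group numbers, paths, and size are illustrative; the SRL size must match the ORL size, and the usual advice is one more SRL group per thread than you have ORL groups), SRLs are added on the standby with:
SQL> alter database add standby logfile group 4 ('/u01/oradata/stby/srl04.log') size 50m;
SQL> alter database add standby logfile group 5 ('/u01/oradata/stby/srl05.log') size 50m;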
Cheers,
Brian -
Resizing online and standby redo log in dataguard setup.
In a 10gR2 Data Guard setup I would like to increase the redo log size from 50M to 100M.
On the primary:
1. set standby_file_management=manual
2. added new online redo log groups sized 100M
3. switched logs
4. dropped the old groups and re-added them at 100M
5. dropped the groups added in step 2
6. did the same for the standby redo logs.
On the standby:
I was able to resize the standby redo logs, but I cannot resize the online redo logs; their status is CLEARING or CLEARING_CURRENT.
please comment. thanks.
I assume you just had to wait until the Primary switched out of that online log so it became inactive at the standby as well? We track where the Primary is by marking the online redo log files at the standby as CLEARING_CURRENT, so you can tell where the Primary was at any given moment.
Make sure you create new standby redo log files at the Primary and Standby to match the new online redo log file size.
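As an illustration of that pattern (group numbers and paths are made up), resizing an online redo log group on the primary looks like:
SQL> alter database add logfile group 4 ('/u01/oradata/prim/redo04.log') size 100m;
SQL> alter system switch logfile;
SQL> alter database drop logfile group 1;
SQL> alter database add logfile group 1 ('/u01/oradata/prim/redo01.log') size 100m;
A group can only be dropped once its status in v$log is INACTIVE, so extra log switches (and a checkpoint) may be needed before the drop succeeds.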
Larry -
DataGuard FAL Client not re-transfering log
Hey, in my Data Guard config the standby system detects a log gap.
The log on the primary site was not marked for the standby location - that's a known bug. OK.
But anyway, when the standby system detects a log gap, why is it not requesting the missing logfile from the primary site? It's still available on the primary site in the FRA.
#Standby site - detecting the log gap
*** 2011-05-30 09:22:57.602 64207 kcrr.c
Media Recovery Waiting for thread 2 sequence 164952
*** 2011-05-30 09:22:57.602 64207 kcrr.c
Fetching gap sequence in thread 2, gap sequence 164952-164952
Redo shipping client performing standby login
#Primary site - check for the missing logfile
SQL> SELECT name FROM v$archived_log where sequence# like'164952';
NAME
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=stdynode3-vip.local)(PORT
=1521)))(CONNECT_DATA=(SERVICE_NAME=TEST_XPT.local)(INSTANCE_NAME=TEST3)(SERVER
=dedicated)))
+FLASH/testp/1_164952_692118044.dbf
+FLASH/testp/2_164952_692118044.dbf
SQL>
Christian, is the primary a two-node RAC?
Post the output of the query below from the standby:
select name, applied from v$archived_log where sequence#=164952 and thread#=2;
and from the primary:
select name from v$archived_log where sequence#=164952 and thread#=2;
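If the gap still is not fetched automatically, two general checks (standard Data Guard practice, not specific to this thread): confirm FAL_SERVER and FAL_CLIENT are set correctly on the standby, and as a last resort copy the archived log across and register it manually so media recovery can pick it up (the path is illustrative):
SQL> show parameter fal
SQL> alter database register physical logfile '/arch/2_164952_692118044.arc';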