Applying logs in a semi-standby instance

Oracle Version: 10.2.0.3 Standard Edition
OS: Windows 2003 Server
I set up an Oracle standby (following the steps in Metalink doc 432514.1).
I am running 10.2.0.3 Standard Edition, so the standby has to be maintained manually (which is fine).
I shut down production last night with SHUTDOWN IMMEDIATE, copied all the logs, datafiles, pfile, etc. to the standby server, and recreated the controlfile on the standby (NORESETLOGS and NOARCHIVELOG). The database is mounted and looks OK:
SQL> select status from v$instance;
STATUS
MOUNTED
SQL>
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 6764 52428800 2 NO INACTIVE 2437870303 07-MAY-09
3 1 6765 52428800 2 NO INACTIVE 2437871111 07-MAY-09
2 1 6766 52428800 2 NO CURRENT 2437886254 07-MAY-09
SQL>
Then I restarted production. After a while I did a couple of log switches and generated 2 archive logs, which I transferred to the standby to apply. At this stage the prod redo logs are:
SQL> select * from v$log
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 6767 52428800 2 YES INACTIVE 2438366118 08-MAY-09
2 1 6766 52428800 2 YES INACTIVE 2437886254 07-MAY-09
3 1 6768 52428800 2 NO CURRENT 2438369450 08-MAY-09
SQL>
On the standby, I logged in as sysdba and tried to just apply the logs as you do normally:
C:\scheduled_scripts>sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.3.0 - Production on Fri May 8 08:58:03 2009
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Release 10.2.0.3.0 - Production
SQL> select status from v$instance;
STATUS
MOUNTED
SQL>
SQL>
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 2438369450 generated at 05/08/2009 07:45:07 needed for thread
1
ORA-00289: suggestion :
D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6768_%U_.ARC
ORA-00280: change 2438369450 for thread 1 is in sequence #6768
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6767_508KCMM0_.ARC
ORA-00310: archived log contains sequence 6767; sequence 6768 required
ORA-00334: archived log:
'D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6767_508KCMM0_.ARC'
SQL>
So my question is: why isn't it applying the 6767 changes? (Keep in mind that at this stage production has not yet generated archive log 6768.) Shouldn't it be applying 6767? Or is it because that sequence is INACTIVE and doesn't need to be applied?
Moreover, I did the following:
I did a log switch on production to create the 6768 archive log:
SQL> select * from v$log
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 6767 52428800 2 YES INACTIVE 2438366118 08-MAY-09
2 1 6766 52428800 2 YES INACTIVE 2437886254 07-MAY-09
3 1 6768 52428800 2 NO CURRENT 2438369450 08-MAY-09
SQL>
SQL>
SQL> alter system switch logfile;
System altered.
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 6767 52428800 2 YES INACTIVE 2438366118 08-MAY-09
2 1 6769 52428800 2 NO CURRENT 2438387289 08-MAY-09
3 1 6768 52428800 2 YES ACTIVE 2438369450 08-MAY-09
This generated the archive log O1_MF_1_6768_508VGSS5_.ARC.
I copied the archive log to the standby server and checked the current sequence (database only mounted on the standby):
SQL> select * from v$log
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 6764 52428800 2 NO INACTIVE 2437870303 07-MAY-09
3 1 6765 52428800 2 NO INACTIVE 2437871111 07-MAY-09
2 1 6766 52428800 2 NO CURRENT 2437886254 07-MAY-09
C:\scheduled_scripts>sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.3.0 - Production on Fri May 8 10:46:38 2009
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Release 10.2.0.3.0 - Production
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 2438369450 generated at 05/08/2009 07:45:07 needed for thread1
ORA-00289: suggestion :
D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6768_%U_.ARC
ORA-00280: change 2438369450 for thread 1 is in sequence #6768
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 2438387289 generated at 05/08/2009 10:37:29 needed for thread1
ORA-00289: suggestion :
D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6769_%U_.ARC
ORA-00280: change 2438387289 for thread 1 is in sequence #6769
ORA-00278: log file
'D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6768_508VGSS5_.ARC' no longer needed for this recovery
ORA-00308: cannot open archived log
'D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6769_%U_.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
As you can see, now that it has 6768, it does the same thing: it no longer wants 6768 and instead asks for the next one in the sequence, 6769 (which has not been generated on production yet). I'm just not making any sense out of this!
Now, after a few changes, the production sequence is:
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 6773 52428800 2 YES ACTIVE *2438396976* 08-MAY-09
2 1 6772 52428800 2 YES INACTIVE 2438394852 08-MAY-09
3 1 6774 52428800 2 NO CURRENT 2438398862 08-MAY-09
standby sequence is:
SQL> SELECT * FROM V$LOG;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 6764 52428800 2 NO INACTIVE 2437870303 07-MAY-09
3 1 6765 52428800 2 NO INACTIVE 2437871111 07-MAY-09
2 1 6766 52428800 2 NO CURRENT 2437886254 07-MAY-09
If I try the following on the standby (the archive log has been shipped to the standby at this stage) and use the AUTO option, it doesn't like the file:
SQL> RECOVER DATABASE using backup controlfile UNTIL CHANGE 2438396976;
ORA-00279: change 2438396976 generated at 05/08/2009 11:45:02 needed for thread 1
ORA-00289: suggestion : D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6773_%U_.ARC
ORA-00280: change 2438396976 for thread 1 is in sequence #6773
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00308: cannot open archived log 'D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6773_%U_.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-00308: cannot open archived log 'D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6773_%U_.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
But when I try the same thing and actually give it the proper file name (no _%U), it's OK and does the recovery. Is there a way of telling Oracle to grab the actual file name, O1_MF_1_6773_50909L7L_.ARC, instead of O1_MF_1_6773_%U_.ARC?
SQL> RECOVER DATABASE using backup controlfile UNTIL CHANGE *2438396976*;
ORA-00279: change 2438396976 generated at 05/08/2009 11:45:02 needed for thread 1
ORA-00289: suggestion : D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6773_%U_.ARC
ORA-00280: change 2438396976 for thread 1 is in sequence #6773
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
D:\ORACLE\PRODUCT\FLASH_RECOVERY_AREA\EBCP01\ARCHIVELOG\2009_05_08\O1_MF_1_6773_50909L7L_.ARC
Log applied.
Media recovery complete.
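Side note: one workaround I have seen suggested (not tested here) is to take OMF out of the picture on the standby: copy the shipped logs into a plain directory, rename them to a predictable pattern, and point the standby's LOG_ARCHIVE_DEST_1/LOG_ARCHIVE_FORMAT at that directory and pattern, so that AUTO can build the names itself. A rough sketch, where the directory, the format and the resetlogs id 657000000 are examples only (the real id is visible in V$ARCHIVED_LOG on the primary):
-- standby pfile/spfile (example values):
--   log_archive_dest_1='LOCATION=D:\standby_arch'
--   log_archive_format='ARC_%t_%s_%r.arc'
-- rename each copied log to match, e.g. D:\standby_arch\ARC_1_6773_657000000.arc, then:
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
AUTO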
Hmm. Now, when would it actually update the sequence numbers on the logs? I am asking because the sequence on the standby still reads:
SQL> SELECT * FROM V$LOG;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 6764 52428800 2 NO INACTIVE 2437870303 07-MAY-09
3 1 6765 52428800 2 NO INACTIVE 2437871111 07-MAY-09
2 1 6766 52428800 2 NO CURRENT 2437886254 07-MAY-09
I am lost here and not sure what to do about this! Your help is really appreciated.
Thanks

The "readme" for Patch 9369783 (which covers the AprilCPU for our 11.1.0.7 HPUX-IA64 environment) includes this short reference to DataGuard:
If you are using a Data Guard Physical Standby database, you must first install this patch on the primary database before installing the patch on the physical standby database. It is not supported to install this patch on the physical standby database before installing the patch on the primary database. For more information, see My Oracle Support Note 278641.1.
When checking note 278641.1, we see that it also appears to cover only 10.2. Although this note has more detail, it is clearly the same procedure as discussed in 813445.1. Therefore, my conclusion is: OPatch works exactly the same with Data Guard in 11g as it did in 10g.
We will be upgrading our Data Guard environment to 11g in one month. At this point, I fully expect our OPatch procedures to remain unchanged from what we have done for years with 9i and 10g. I would also note that the upgrade procedures we have tested (involving DG from 10.2.0.4 to 11.1.0.7) are nearly identical to the above-mentioned support notes.
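For what it's worth, the order of operations we follow around an OPatch run is roughly the following. The SQL is generic and only a sketch; adjust it if you manage apply through the broker:
-- on the standby, stop redo apply before patching:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
-- shut down, run opatch apply against the primary home first (per the readme),
-- then against the standby home, restart the instances, and resume apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;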
Hope that helps,
Stan

Similar Messages

  • How to create online log in standby instance?

    The Oracle documentation says:
    Step 3 Create an Online Redo Log on the Standby Database
    Although this step is optional, Oracle recommends that an online redo log be created
    when a standby database is created. By following this best practice, a standby database
    will be ready to quickly transition to the primary database role.
    The size and number of redo log groups in the online redo log of a standby database
    should be chosen so that the standby database performs well if it transitions to the
    primary role.
    But when I execute "ALTER DATABASE ADD LOGFILE ('/opt/oracle/oradata/orcl/redo01.log') SIZE 50M;" on the standby instance, I get the following message:
    ORA-01275: Operation ADD LOGFILE is not allowed if standby file management is automatic.
    What should I do?

    Hello xyz_hh;
    Your post is just a little vague, so I'm guessing you are referring to "3.2.6 Start the Physical Standby Database" in "Data Guard Concepts and Administration".
    If this is correct, this is part of "Step-by-Step Instructions for Creating a Physical Standby Database".
    Assuming I'm right so far, you are trying to "Create an Online Redo Log on the Standby Database".
    Can you confirm, along with your OS and Oracle version?
    If you are trying to create redo on the standby after the fact, you will need to make this change before you proceed:
    STANDBY_FILE_MANAGEMENT=MANUAL
    If this is your issue, check this link out:
    http://kb.dbatoolz.com/tp/2692.avoid_ora-19527_set_dummy_log_file_name_convert.html
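    For example, something along these lines usually works (untested sketch; the path is the one from your post, and you may need to stop managed recovery first):
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
    SQL> ALTER DATABASE ADD LOGFILE ('/opt/oracle/oradata/orcl/redo01.log') SIZE 50M;
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;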
    Best Regards
    mseberg

  • Dataguard - Primary not applying logs to Standby

    Having an issue applying logs to the standby; seemingly it's not set up correctly. I am sure I'm missing something simple here, but would love any input or help. Thanks in advance.
    Background:
    Primary: CDPMTSB (Single Stand alone)
    Standby: CDPMT (RAC)
    Error Message on Primary (Alert Log):
    Errors in file /data/oracle/app/oracle/diag/rdbms/cdpmtsb/CDPMTSB/trace/CDPMTSB_arc6_9571.trc:
    ORA-16014: log 3 sequence# 4071 not archived, no available destinations
    ORA-00312: online log 3 thread 2: '+FRA_DG_01/cdpmtsb/onlinelog/group_3.291.799379949'
    ARCH: Archival error occurred on a closed thread. Archiver continuing
    ORACLE Instance CDPMTSB - Archival Error. Archiver continuing.
    Mon Nov 19 19:54:24 2012
    Changing destination 4 from remote to local during archival of log#: 3 sequence#: 4071 thread#: 2
    Changing destination 4 from remote to local during archival of log#: 3 sequence#: 4071 thread#: 2
    ARC6: LGWR is actively archiving destination LOG_ARCHIVE_DEST_2
    ARC6: Archive log rejected (thread 2 sequence 4071) at host 'CDPMT'
    Errors in file /data/oracle/app/oracle/diag/rdbms/cdpmtsb/CDPMTSB/trace/CDPMTSB_arc6_9571.trc:
    ORA-16401: archivelog rejected by RFS
    Errors in file /data/oracle/app/oracle/diag/rdbms/cdpmtsb/CDPMTSB/trace/CDPMTSB_arc6_9571.trc:
    ORA-16014: log 3 sequence# 4071 not archived, no available destinations
    ORA-00312: online log 3 thread 2: '+FRA_DG_01/cdpmtsb/onlinelog/group_3.291.799379949'
    ARCH: Archival error occurred on a closed thread. Archiver continuing
    ORACLE Instance CDPMTSB - Archival Error. Archiver continuing.
    Mon Nov 19 19:59:24 2012
    Changing destination 4 from remote to local during archival of log#: 3 sequence#: 4071 thread#: 2
    Changing destination 4 from remote to local during archival of log#: 3 sequence#: 4071 thread#: 2
    ARC6: LGWR is actively archiving destination LOG_ARCHIVE_DEST_2
    ARC6: Archive log rejected (thread 2 sequence 4071) at host 'CDPMT'
    Errors in file /data/oracle/app/oracle/diag/rdbms/cdpmtsb/CDPMTSB/trace/CDPMTSB_arc6_9571.trc:
    ORA-16401: archivelog rejected by RFS
    Errors in file /data/oracle/app/oracle/diag/rdbms/cdpmtsb/CDPMTSB/trace/CDPMTSB_arc6_9571.trc:
    ORA-16014: log 3 sequence# 4071 not archived, no available destinations
    ORA-00312: online log 3 thread 2: '+FRA_DG_01/cdpmtsb/onlinelog/group_3.291.799379949'
    ARCH: Archival error occurred on a closed thread. Archiver continuing
    ORACLE Instance CDPMTSB - Archival Error. Archiver continuing.
    Mon Nov 19 20:00:00 2012
    Errors in file /data/oracle/app/oracle/diag/rdbms/cdpmtsb/CDPMTSB/trace/CDPMTSB_j001_17473.trc:
    ORA-12012: error on auto execute of job 72620
    ORA-06502: PL/SQL: numeric or value error: character to number conversion error
    ORA-06512: at "CD_ADMIN.UTDCD_SURVEY_PKG", line 4926
    Standby Alert Log:
    ORA-16401: archivelog rejected by RFS
    Mon Nov 19 19:32:15 2012
    RFS[6]: Assigned to RFS process 4248
    RFS[6]: Identified database type as 'physical standby': Client is ARCH pid 9561
    Mon Nov 19 19:32:22 2012
    RFS[1]: Selected log 6 for thread 1 sequence 4073 dbid 1629723947 branch 769881773
    Mon Nov 19 19:32:22 2012
    Archived Log entry 1097 added for thread 1 sequence 4072 ID 0x62e7f5cf dest 1:
    Archived Log entry 1098 added for thread 1 sequence 4072 ID 0x62e7f5cf dest 3:
    Mon Nov 19 19:34:23 2012
    Errors in file /opt/app/oracle/diag/rdbms/cdpmt/CDPMT1/trace/CDPMT1_rfs_24994.trc:
    ORA-16401: archivelog rejected by RFS
    Mon Nov 19 19:38:12 2012
    RFS[1]: Selected log 5 for thread 1 sequence 4074 dbid 1629723947 branch 769881773
    Mon Nov 19 19:38:12 2012
    Archived Log entry 1099 added for thread 1 sequence 4073 ID 0x62e7f5cf dest 1:
    Archived Log entry 1100 added for thread 1 sequence 4073 ID 0x62e7f5cf dest 3:
    Mon Nov 19 19:39:23 2012
    Errors in file /opt/app/oracle/diag/rdbms/cdpmt/CDPMT1/trace/CDPMT1_rfs_24994.trc:
    ORA-16401: archivelog rejected by RFS
    Mon Nov 19 19:44:24 2012
    Errors in file /opt/app/oracle/diag/rdbms/cdpmt/CDPMT1/trace/CDPMT1_rfs_24994.trc:
    ORA-16401: archivelog rejected by RFS
    Mon Nov 19 19:49:24 2012
    Errors in file /opt/app/oracle/diag/rdbms/cdpmt/CDPMT1/trace/CDPMT1_rfs_24994.trc:
    ORA-16401: archivelog rejected by RFS
    Mon Nov 19 19:54:24 2012
    Errors in file /opt/app/oracle/diag/rdbms/cdpmt/CDPMT1/trace/CDPMT1_rfs_24994.trc:
    ORA-16401: archivelog rejected by RFS
    Mon Nov 19 19:59:24 2012
    Errors in file /opt/app/oracle/diag/rdbms/cdpmt/CDPMT1/trace/CDPMT1_rfs_24994.trc:
    ORA-16401: archivelog rejected by RFS
    Primary Parameters:
    NAME TYPE VALUE
    log_archive_config string DG_CONFIG=(CDPMT,CDPMTSB)
    log_archive_dest string
    log_archive_dest_1 string LOCATION=USE_DB_RECOVERY_FILE_
    DEST VALID_FOR=(ONLINE_LOGFIL
    ES,ALL_ROLES) DB_UNIQUE_NAME=C
    DPMTSB
    log_archive_dest_10 string
    log_archive_dest_2 string SERVICE=CDPMT VALID_FOR=(ONLIN
    E_LOGFILES,PRIMARY_ROLE) DB_UN
    IQUE_NAME=CDPMT
    log_archive_dest_3 string location="+FRA_DG_01/cdpmtsb/s
    tandbylog", valid_for=(STANDB
    Y_LOGFILE,STANDBY_ROLE)
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    log_archive_dest_state_1 string enable
    log_archive_dest_state_10 string enable
    log_archive_dest_state_2 string ENABLE
    log_archive_dest_state_3 string ENABLE
    log_archive_dest_state_4 string defer
    log_archive_dest_state_5 string enable
    log_archive_dest_state_6 string enable
    log_archive_dest_state_7 string enable
    log_archive_dest_state_8 string enable
    log_archive_dest_state_9 string enable
    log_archive_duplex_dest string
    log_archive_format string %t_%s_%r.dbf
    log_archive_local_first boolean TRUE
    log_archive_max_processes integer 7
    log_archive_min_succeed_dest integer 2
    log_archive_start boolean FALSE
    log_archive_trace integer 0
    Standby Parameters:
    NAME TYPE VALUE
    log_archive_config string dg_config=(CDPMT,CD PMTSB)
    log_archive_dest string
    log_archive_dest_1 string location="USE_DB_RE COVERY_FILE
    _DEST", valid_for= (ALL_LOGFIL
    ES,ALL_ROLES)
    log_archive_dest_10 string
    log_archive_dest_2 string SERVICE=cdpmtsb LGW R ASYNC VAL
    ID_FOR=(ONLINE_LOGF ILES,PRIMAR
    Y_ROLE) DB_UNIQUE_N AME=cdpmtsb
    log_archive_dest_3 string LOCATION=+FRA_DG_01 /CDPMT/STAN
    DBYLOG VALID_FOR=( STANDBY_LOG
    NAME TYPE VALUE
    FILES,STANDBY_ROLE) DB_UNIQUE_
    NAME=CDPMT
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    log_archive_dest_state_1 string ENABLE
    log_archive_dest_state_10 string enable
    log_archive_dest_state_2 string ENABLE
    NAME TYPE VALUE
    log_archive_dest_state_3 string enable
    log_archive_dest_state_4 string enable
    log_archive_dest_state_5 string enable
    log_archive_dest_state_6 string enable
    log_archive_dest_state_7 string enable
    log_archive_dest_state_8 string enable
    log_archive_dest_state_9 string enable
    log_archive_duplex_dest string
    log_archive_format string %t_%s_%r.dbf
    log_archive_local_first boolean TRUE
    log_archive_max_processes integer 30
    NAME TYPE VALUE
    log_archive_min_succeed_dest integer 1
    log_archive_start boolean FALSE
    log_archive_trace integer 0
    SQL> show parameter log_ar
    NAME TYPE VALUE
    log_archive_config string dg_config=(CDPMT,CDPMTSB)
    log_archive_dest string
    log_archive_dest_1 string location="USE_DB_RECOVERY_FILE
    _DEST", valid_for=(ALL_LOGFIL
    ES,ALL_ROLES)
    log_archive_dest_10 string
    log_archive_dest_2 string SERVICE=cdpmtsb LGWR ASYNC VAL
    ID_FOR=(ONLINE_LOGFILES,PRIMAR
    Y_ROLE) DB_UNIQUE_NAME=cdpmtsb
    log_archive_dest_3 string LOCATION=+FRA_DG_01/CDPMT/STAN
    DBYLOG VALID_FOR=(STANDBY_LOG
    NAME TYPE VALUE
    FILES,STANDBY_ROLE) DB_UNIQUE_
    NAME=CDPMT
    log_archive_dest_4 string
    log_archive_dest_5 string
    log_archive_dest_6 string
    log_archive_dest_7 string
    log_archive_dest_8 string
    log_archive_dest_9 string
    log_archive_dest_state_1 string ENABLE
    log_archive_dest_state_10 string enable
    log_archive_dest_state_2 string ENABLE
    NAME TYPE VALUE
    log_archive_dest_state_3 string enable
    log_archive_dest_state_4 string enable
    log_archive_dest_state_5 string enable
    log_archive_dest_state_6 string enable
    log_archive_dest_state_7 string enable
    log_archive_dest_state_8 string enable
    log_archive_dest_state_9 string enable
    log_archive_duplex_dest string
    log_archive_format string %t_%s_%r.dbf
    log_archive_local_first boolean TRUE
    log_archive_max_processes integer 30
    NAME TYPE VALUE
    log_archive_min_succeed_dest integer 1
    log_archive_start boolean FALSE
    log_archive_trace integer 0
    SQL>
    DGMGRL> show configuration verbose;
    Configuration
    Name: cdpmtqa
    Enabled: YES
    Protection Mode: MaxPerformance
    Databases:
    cdpmtsb - Primary database
    cdpmt - Physical standby database
    Fast-Start Failover: DISABLED
    Current status for "cdpmtqa":
    Warning: ORA-16608: one or more databases have warnings
    DGMGRL> show database verbose CDPMT
    Database
    Name: cdpmt
    Role: PHYSICAL STANDBY
    Enabled: YES
    Intended State: APPLY-ON
    Instance(s):
    CDPMT1
    CDPMT2 (apply instance)
    Properties:
    DGConnectIdentifier = 'cdpmt'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '1'
    ReopenSecs = '300'
    NetTimeout = '30'
    RedoCompression = 'DISABLE'
    LogShipping = 'ON'
    PreferredApplyInstance = ''
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '4'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = ''
    LogFileNameConvert = ''
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName(*)
    SidName(*)
    StaticConnectIdentifier(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value
    Current status for "cdpmt":
    Warning: ORA-16809: multiple warnings detected for the database
    Any help would be really appreciated. Thanks!

    Thanks MSEBERG,
    Here's what I found. The FRA seems to have enough space on ASM and there are other logs there; I'm not sure what the issue is:
    14:31:58 SYS: CDPMTSB> show parameter db_recovery
    NAME TYPE VALUE
    db_recovery_file_dest string +FRA_DG_01
    db_recovery_file_dest_size big integer 60G
    DGMGRL> show database CDPMTSB logxptstatus;
    LOG TRANSPORT STATUS
    PRIMARY_INSTANCE_NAME STANDBY_DATABASE_NAME STATUS
    CDPMTSB cdpmt
    DGMGRL> SHOW DATABASE CDPMTSB InconsistentProperties;
    INCONSISTENT PROPERTIES
    INSTANCE_NAME PROPERTY_NAME MEMORY_VALUE SPFILE_VALUE BROKER_VALUE
    DGMGRL> show database CDPMTSB InconsistentLogXptProps;
    INCONSISTENT LOG TRANSPORT PROPERTIES
    INSTANCE_NAME STANDBY_NAME PROPERTY_NAME MEMORY_VALUE BROKER_VALUE
    DGMGRL> show database CDPMT logxptstatus;
    Error: ORA-16757: unable to get this property's value
    DGMGRL> SHOW DATABASE CDPMT InconsistentProperties;
    INCONSISTENT PROPERTIES
    INSTANCE_NAME PROPERTY_NAME MEMORY_VALUE SPFILE_VALUE BROKER_VALUE
    CDPMT2 DbFileNameConvert DG_01/cdpmtsb, DG_01/cdpmt
    CDPMT2 LogFileNameConvert FRA_DG_01/cdpmtsb, FRA_DG_01/cdpmt, DG_01/cdpmtsb, DG_01/cdpmt
    CDPMT1 LogArchiveMaxProcesses 4 30 4
    CDPMT1 DbFileNameConvert DG_01/cdpmtsb, DG_01/cdpmt DG_01/cdpmtsb,DG_01/cdpmt
    CDPMT1 LogFileNameConvert FRA_DG_01/cdpmtsb, FRA_DG_01/cdpmt, DG_01/cdpmtsb, DG_01/cdpmt FRA_DG_01/cdpmtsb,FRA_DG_01/cdpmt,+DG_01/cdpmtsb,+DG_01/cdpmt
    DGMGRL> show database CDPMT InconsistentLogXptProps;
    Error: ORA-16757: unable to get this property's value
    Errors in the Alert (from Primary):
    ARCH: Archival error occurred on a closed thread. Archiver continuing
    ORACLE Instance CDPMTSB - Archival Error. Archiver continuing.
    Tue Nov 20 14:34:43 2012
    Changing destination 4 from remote to local during archival of log#: 3 sequence#: 4071 thread#: 2
    Changing destination 4 from remote to local during archival of log#: 3 sequence#: 4071 thread#: 2
    ARC6: LGWR is actively archiving destination LOG_ARCHIVE_DEST_2
    ARC6: Archive log rejected (thread 2 sequence 4071) at host 'cdpmt'
    Errors in file /data/oracle/app/oracle/diag/rdbms/cdpmtsb/CDPMTSB/trace/CDPMTSB_arc6_9571.trc:
    ORA-16401: archivelog rejected by RFS
    Errors in file /data/oracle/app/oracle/diag/rdbms/cdpmtsb/CDPMTSB/trace/CDPMTSB_arc6_9571.trc:
    ORA-16014: log 3 sequence# 4071 not archived, no available destinations
    ORA-00312: online log 3 thread 2: '+FRA_DG_01/cdpmtsb/onlinelog/group_3.291.799379949'
    ARCH: Archival error occurred on a closed thread. Archiver continuing
    ORACLE Instance CDPMTSB - Archival Error. Archiver continuing.
    DGMGRL> DGMGRL> show database verbose CDPMT
    Database
    Name: cdpmt
    Role: PHYSICAL STANDBY
    Enabled: YES
    Intended State: APPLY-ON
    Instance(s):
    CDPMT1
    CDPMT2 (apply instance)
    Properties:
    DGConnectIdentifier = 'cdpmt'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '1'
    ReopenSecs = '300'
    NetTimeout = '30'
    RedoCompression = 'DISABLE'
    LogShipping = 'ON'
    PreferredApplyInstance = ''
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '4'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = ''
    LogFileNameConvert = ''
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName(*)
    SidName(*)
    StaticConnectIdentifier(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value
    Current status for "cdpmt":
    Warning: ORA-16809: multiple warnings detected for the database
    DGMGRL> show database verbose CDPMTSB
    Database
    Name: cdpmtsb
    OEM Name: CDPMTSB_devdb40.utd.com
    Role: PRIMARY
    Enabled: YES
    Intended State: TRANSPORT-ON
    Instance(s):
    CDPMTSB
    Properties:
    DGConnectIdentifier = 'cdpmtsb'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '1'
    ReopenSecs = '300'
    NetTimeout = '30'
    RedoCompression = 'DISABLE'
    LogShipping = 'ON'
    PreferredApplyInstance = ''
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '7'
    LogArchiveMinSucceedDest = '2'
    DbFileNameConvert = '+DG_01/cdpmt, +DG_01/cdpmtsb'
    LogFileNameConvert = '+FRA_DG_01/cdpmt, FRA_DG_01/cdpmtsb, DG_01/cdpmt, +DG_01/cdpmtsb'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName = 'devdb40.utd.com'
    SidName = 'CDPMTSB'
    StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=devdb40.utd.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=CDPMTSB_DGMGRL)(INSTANCE_NAME=CDPMTSB)(SERVER=DEDICATED)))'
    StandbyArchiveLocation = '+FRA_DG_01/cdpmtsb/standbylog'
    AlternateLocation = ''
    LogArchiveTrace = '0'
    LogArchiveFormat = '%t_%s_%r.dbf'
    LatestLog = '(monitor)'
    TopWaitEvents = '(monitor)'
    Current status for "cdpmtsb":
    SUCCESS
    Thanks for your help btw, I'm really at a loss here as to what is going on with this.

  • In standby db, can't find sequence number of Last Applied Log.

    Hello,
    The standby database is behind the primary database by over 200 hours. To repair this, we are using an incremental backup from the primary database and a restored control file.
    After starting up the standby database, Grid Control (OEM) can't find the "last applied log" sequence number.
    On the standby:
    standby> select max(sequence#) from v$archived_log where applied='YES';
    MAX(SEQUENCE#)
    On the primary:
    SQL> select max(sequence#) from v$archived_log where applied='YES';
    MAX(SEQUENCE#)
    83833
    Then, using OEM Grid Control, I ran Verify, which checks various standby database settings.
    Initializing
    Connected to instance standby_server:standby
    Starting alert log monitor...
    Updating Data Guard link on database homepage...
    Skipping verification of fast-start failover static services check.
    Data Protection Settings:
    Protection mode : Maximum Performance
    Redo Transport Mode settings:
    primary.com: ASYNC
    standby.com: ASYNC
    Checking standby redo log files.....OK
    Checking Data Guard status
    primary.com : Normal
    standby.com : Normal
    Checking inconsistent properties
    Checking agent status
    Checking applied log on standby........WARNING:
    Timed out after 60 seconds waiting for log to be applied.
    Processing completed.
    So how do I fix this?
    Thank you very much.

    Hello;
    Probably nothing to fix. This is a common warning message.
    It even occurs in this example :
    http://www.databasejournal.com/features/oracle/article.php/10893_3826706_2/Oracle-11g-Data-Guard-Grid-Control-Management.htm
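    If you want to check apply progress directly on the standby rather than through OEM, here are a couple of rough checks (standard views in 10g/11g; only a sketch):
    SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';
    SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS WHERE NAME IN ('apply lag', 'transport lag');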
    Best Regards
    mseberg

  • Logical standby apply won't apply logs

    RDBMS Version: Oracle 10.2.0.2
    Operating System and Version: Red Hat Enterprise Linux ES release 4
    Error Number (if applicable):
    Product (i.e. SQL*Loader, Import, etc.): Oracle Dataguard (Logical Standby)
    Product Version:
    Hi!
    I have a problem: logical standby apply won't apply the logs.
    SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;
    TYPE HIGH_SCN
    STATUS
    COORDINATOR 288810
    ORA-16116: no work available
    READER 288810
    ORA-16240: Waiting for logfile (thread# 1, sequence# 68)
    BUILDER 288805
    ORA-16116: no work available
    TYPE HIGH_SCN
    STATUS
    PREPARER 288804
    ORA-16116: no work available
    ANALYZER 288805
    ORA-16116: no work available
    APPLIER 288805
    ORA-16116: no work available
    TYPE HIGH_SCN
    STATUS
    APPLIER
    ORA-16116: no work available
    APPLIER
    ORA-16116: no work available
    APPLIER
    ORA-16116: no work available
    TYPE HIGH_SCN
    STATUS
    APPLIER
    ORA-16116: no work available
    10 rows selected.
    SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, DICT_BEGIN, DICT_END FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;
    SEQUENCE# FIRST_TIM NEXT_TIME DIC DIC
    66 11-JAN-07 11-JAN-07 YES YES
    67 11-JAN-07 11-JAN-07 NO NO
    SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';
    NAME
    VALUE
    coordinator state
    IDLE
    SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;
    APPLIED_SCN NEWEST_SCN
    288803 288809
    INITPRIMARY.ORA
    DB_NAME=primary
    DB_UNIQUE_NAME=primary
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    service_names=primary
    instance_name=primary
    UNDO_RETENTION=3600
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(primary,standy)'
    LOG_ARCHIVE_DEST_1=
    'LOCATION=/home/oracle/primary/arch1/
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=primary'
    LOG_ARCHIVE_DEST_2=
    'SERVICE=standy LGWR ASYNC
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    DB_UNIQUE_NAME=standy'
    LOG_ARCHIVE_DEST_3=
    'LOCATION=/home/oracle/primary/arch2/
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
    DB_UNIQUE_NAME=primary'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    LOG_ARCHIVE_DEST_STATE_3=ENABLE
    LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
    LOG_ARCHIVE_MAX_PROCESSES=30
    FAL_SERVER=standy
    FAL_CLIENT=primary
    DB_FILE_NAME_CONVERT='standy','primary'
    LOG_FILE_NAME_CONVERT=
    '/home/oracle/standy/oradata','home/oracle/primary/oradata'
    STANDBY_FILE_MANAGEMENT=AUTO
    INITSTANDY.ORA
    db_name='standy'
    DB_UNIQUE_NAME='standy'
    REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
    SERVICE_NAMES='standy'
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(primary,standy)'
    DB_FILE_NAME_CONVERT='/home/oracle/primary/oradata','/home/oracle/standy/oradata'
    LOG_FILE_NAME_CONVERT=
    '/home/oracle/primary/oradata','/home/oracle/standy/oradata'
    LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
    LOG_ARCHIVE_DEST_1=
    'LOCATION=/home/oracle/standy/arc/
    VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=standy'
    LOG_ARCHIVE_DEST_2=
    'SERVICE=primary LGWR ASYNC
    VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
    DB_UNIQUE_NAME=primary'
    LOG_ARCHIVE_DEST_3=
    'LOCATION=/home/oracle/standy/arch2/
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
    DB_UNIQUE_NAME=standy'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    LOG_ARCHIVE_DEST_STATE_3=ENABLE
    STANDBY_FILE_MANAGEMENT=AUTO
    FAL_SERVER=primary
    FAL_CLIENT=standy
    Alert log of the "standy" database since SQL Apply was started:
    Thu Jan 11 15:00:54 2007
    ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
    Thu Jan 11 15:01:00 2007
    alter database add supplemental log data (primary key, unique index) columns
    Thu Jan 11 15:01:00 2007
    SUPLOG: Updated supplemental logging attributes at scn = 289537
    SUPLOG: minimal = ON, primary key = ON
    SUPLOG: unique = ON, foreign key = OFF, all column = OFF
    Completed: alter database add supplemental log data (primary key, unique index) columns
    LOGSTDBY: Unable to register recovery logfiles, will resend
    Thu Jan 11 15:01:04 2007
    LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
    Thu Jan 11 15:01:04 2007
    LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
    Thu Jan 11 15:01:04 2007
    ALTER DATABASE START LOGICAL STANDBY APPLY (standy)
    with optional part
    IMMEDIATE
    Attempt to start background Logical Standby process
    LSP0 started with pid=21, OS id=12165
    Thu Jan 11 15:01:05 2007
    LOGSTDBY Parameter: DISABLE_APPLY_DELAY =
    LOGSTDBY Parameter: REAL_TIME =
    Completed: ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE
    Thu Jan 11 15:01:07 2007
    LOGSTDBY status: ORA-16111: log mining and apply setting up
    Thu Jan 11 15:01:07 2007
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 201
    LOGMINER: Memory Size = 30M, Checkpoint interval = 150M
    LOGMINER: session# = 1, reader process P000 started with pid=22 OS id=12167
    LOGMINER: session# = 1, builder process P001 started with pid=23 OS id=12169
    LOGMINER: session# = 1, preparer process P002 started with pid=24 OS id=12171
    Thu Jan 11 15:01:17 2007
    LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
    Thu Jan 11 15:01:17 2007
    LOGMINER: Turning ON Log Auto Delete
    Thu Jan 11 15:01:26 2007
    LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
    Thu Jan 11 15:01:26 2007
    LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
    Thu Jan 11 15:01:26 2007
    LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_ATTRCOL$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_CCOL$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_CDEF$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_COL$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_COLTYPE$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_ICOL$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_IND$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDCOMPART$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDPART$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_INDSUBPART$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_LOB$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_LOBFRAG$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_OBJ$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TAB$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABCOMPART$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABPART$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TABSUBPART$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TS$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_TYPE$ have been marked unusable
    Thu Jan 11 15:01:33 2007
    Some indexes or index [sub]partitions of table SYSTEM.LOGMNR_USER$ have been marked unusable
    Thu Jan 11 15:02:05 2007
    Indexes of table SYSTEM.LOGMNR_ATTRCOL$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_ATTRIBUTE$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_CCOL$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_CDEF$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_COL$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_COLTYPE$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_DICTIONARY$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_ICOL$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_IND$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_INDCOMPART$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_INDPART$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_INDSUBPART$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_LOB$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_LOBFRAG$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_OBJ$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_TAB$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_TABCOMPART$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_TABPART$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_TABSUBPART$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_TS$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_TYPE$ have been rebuilt and are now usable
    Indexes of table SYSTEM.LOGMNR_USER$ have been rebuilt and are now usable
    LSP2 started with pid=25, OS id=12180
    LOGSTDBY Analyzer process P003 started with pid=26 OS id=12182
    LOGSTDBY Apply process P008 started with pid=20 OS id=12192
    LOGSTDBY Apply process P007 started with pid=30 OS id=12190
    LOGSTDBY Apply process P005 started with pid=28 OS id=12186
    LOGSTDBY Apply process P006 started with pid=29 OS id=12188
    LOGSTDBY Apply process P004 started with pid=27 OS id=12184
    Thu Jan 11 15:02:48 2007
    Redo Shipping Client Connected as PUBLIC
    -- Connected User is Valid
    RFS[1]: Assigned to RFS process 12194
    RFS[1]: Identified database type as 'logical standby'
    Thu Jan 11 15:02:48 2007
    RFS LogMiner: Client enabled and ready for notification
    Thu Jan 11 15:02:49 2007
    RFS LogMiner: RFS id [12194] assigned as thread [1] PING handler
    Thu Jan 11 15:02:49 2007
    LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
    Thu Jan 11 15:02:49 2007
    LOGMINER: Turning ON Log Auto Delete
    Thu Jan 11 15:02:51 2007
    LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_66_608031954.arc
    Thu Jan 11 15:02:51 2007
    LOGMINER: Begin mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
    Thu Jan 11 15:02:51 2007
    LOGMINER: End mining logfile: /home/oracle/standy/arch2/1_67_608031954.arc
    Please help me once more!
    Thanks.

    Hello!
    Thank you for the reply.
    The archive 1_68_608031954.arc, for which the read error occurred, did not exist at the time of the error; see below:
    $ ls -lh /home/oracle/standy/arch2/
    total 108M
    -rw-r----- 1 oracle oinstall 278K Jan 11 15:00 1_59_608031954.arc
    -rw-r----- 1 oracle oinstall 76K Jan 11 15:00 1_60_608031954.arc
    -rw-r----- 1 oracle oinstall 110K Jan 11 15:00 1_61_608031954.arc
    -rw-r----- 1 oracle oinstall 1.0K Jan 11 15:00 1_62_608031954.arc
    -rw-r----- 1 oracle oinstall 2.0K Jan 11 15:00 1_63_608031954.arc
    -rw-r----- 1 oracle oinstall 96K Jan 11 15:00 1_64_608031954.arc
    -rw-r----- 1 oracle oinstall 42K Jan 11 15:00 1_65_608031954.arc
    -rw-r----- 1 oracle oinstall 96M Jan 13 06:10 1_68_608031954.arc
    -rw-r----- 1 oracle oinstall 12M Jan 13 13:29 1_69_608031954.arc
    $ ls -lh /home/oracle/primary/arch1/
    total 112M
    -rw-r----- 1 oracle oinstall 278K Jan 11 14:21 1_59_608031954.arc
    -rw-r----- 1 oracle oinstall 76K Jan 11 14:33 1_60_608031954.arc
    -rw-r----- 1 oracle oinstall 110K Jan 11 14:46 1_61_608031954.arc
    -rw-r----- 1 oracle oinstall 1.0K Jan 11 14:46 1_62_608031954.arc
    -rw-r----- 1 oracle oinstall 2.0K Jan 11 14:46 1_63_608031954.arc
    -rw-r----- 1 oracle oinstall 96K Jan 11 14:55 1_64_608031954.arc
    -rw-r----- 1 oracle oinstall 42K Jan 11 14:55 1_65_608031954.arc
    -rw-r----- 1 oracle oinstall 4.2M Jan 11 14:56 1_66_608031954.arc
    -rw-r----- 1 oracle oinstall 5.5K Jan 11 14:56 1_67_608031954.arc
    -rw-r----- 1 oracle oinstall 96M Jan 13 06:09 1_68_608031954.arc
    -rw-r----- 1 oracle oinstall 12M Jan 13 13:28 1_69_608031954.arc
    Alert log
    hu Jan 11 15:01:00 2007
    SUPLOG: Updated supplemental logging attributes at scn = 289537
    SUPLOG: minimal = ON, primary key = ON
    SUPLOG: unique = ON, foreign key = OFF, all column = OFF
    Completed: alter database add supplemental log data (primary key, unique index) columns
    LOGSTDBY: Unable to register recovery logfiles, will resend
    Thu Jan 11 15:01:04 2007
    LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
    Thu Jan 11 15:01:04 2007
    LOGMINER: Error 308 encountered, failed to read missing logfile /home/oracle/standy/arch2/1_68_608031954.arc
    Would you know how to help me?
    Could this be a bug in Oracle 10g?
    Thanks.
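    For what it is worth, once 1_68_608031954.arc is actually present on the standby (as in the listing above), the usual suggestion is to register it manually and restart SQL Apply; a sketch only, using the path from the listing:
    SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/home/oracle/standy/arch2/1_68_608031954.arc';
    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;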

  • 7.5 Standby apply log issue - Cold backup of standby

    We are in the process of migrating from 7.5 to 7.8 and have a 7.5 standby. We needed to test the time it takes to back up from the standby once we go live on the 7.8 64-bit server. Logs had been applied to the 7.5 standby just fine for more than a month prior to doing a "cold" backup test on the standby. Now the logs will not apply to the standby.
    ERR 20039 Log 
    Logrecovery is not allowed, because state of log volume is 'HistoryLost' (log must be cleared)
    2014-05-04 10:02:16 
    0xF28 WRN 20026 Admin
    Initialization of log for 'restore log' failed with 'LogAndDataIncompatible'
    First - backing up the standby in ADMIN mode / cold backup on the standby database, right?  The source and target (standby) have different server names.
    Is it possible to clear the log area or some other process to fix the issue?

    Hi Mike,
    You may need to restore the database with re-initialization and clear the logs.
    Refer to the copy steps described in the thread "Error while recover logs".
    Hope this helps.
    Regards,
    Deepak Kori

  • Standby db doesn't apply logs

    hi all,
    my problem is that no archive logs are actually being applied to the standby database:
    Background Managed Standby Recovery process started
    Starting datafile 1 recovery in thread 1 sequence 15536
    Datafile 1: '/oracle/ean1/database/data/EAN1_SYSTEM01.dbf'
    Starting datafile 2 recovery in thread 1 sequence 15536
    Datafile 2: '/oracle/ean1/database/data/EAN1_UNDO01.dbf'
    Starting datafile 3 recovery in thread 1 sequence 15536
    Datafile 3: '/oracle/ean1/database/data/EAN1_D01_DATA01.dbf'
    Starting datafile 4 recovery in thread 1 sequence 15536
    Datafile 4: '/oracle/ean1/database/data/EAN1_I01_DATA01.dbf'
    Thu Sep 7 15:36:12 2006
    Completed: alter database recover managed standby database di
    Thu Sep 7 15:36:12 2006
    Media Recovery Waiting for thread 1 seq# 15536
    The current log sequence on the standby is 15535 and on the production db it is 15549.
    can someone help me?

    Due to some error, an archive gap has occurred in your setup. V$ARCHIVE_GAP can help you. Copy the missing archive logs to the standby database and register them as follows.
    SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_10.arc';
    After you register these logs on the logical standby database, you can restart log apply services.
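    To find the missing range first, and since the alert log above shows managed (physical) standby recovery, the physical-standby form of the commands would look roughly like this (sketch only; the file name is the example from above):
    SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
    SQL> ALTER DATABASE REGISTER LOGFILE '/disk1/oracle/dbs/log-1292880008_10.arc';
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;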
    -aijaz

  • Online redo logs on a physical standby?

    A question on REDO logs on physical standby databases. (10.2.0.4 db on Windows 32bit)
    My PRIMARY has 3 ONLINE REDO groups, 2 members each, in ..ORADATA\LOCP10G
    My PHYSICAL STANDBY has 4 STANDBY REDO groups, 2 members each, in ..ORADATA\SBY10G
    I have shipping occurring from the primary in LGWR, ASYNC mode - max availability
    However I notice the STANDBY also has ONLINE REDO logs, same as the PRIMARY, in the ..ORADATA\SBY10G folder
    According to the 10g Dataguard docs, section 2.5.1:
    "Physical standby databases do not use an online redo log, because physical standby databases are not opened for read/write I/O."
    I have tried to drop these on the STANDBY when not in apply mode, but I get the following:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    Database altered.
    SQL> ALTER DATABASE DROP LOGFILE GROUP 3;
    ALTER DATABASE DROP LOGFILE GROUP 3
    ERROR at line 1:
    ORA-01275: Operation DROP LOGFILE is not allowed if standby file management is
    automatic.
    I also deleted them while the STANDBY instance was idle, but it recreated them when moved to MOUNT mode.
    So my question is: why is my PHYSICAL standby recreating and using these, if the docs say it shouldn't?
    I saw the same error mentioned here: prob. with DataGuard
    Is this a case of the STANDBY needing at least a notion of where the REDO logs would need to be should a failover occur, so that if the files are already there, the standby database CONTROLFILE holds onto them, since they are not doing any harm anyway?
    Or is this a product of having STANDBY_FILE_MANAGEMENT=AUTO, i.e. the database creates these 'automatically'?
    Ta
    bt

    According to the 10g Dataguard docs, section 2.5.1:
    "Physical standby databases do not use an online redo log, because physical standby databases are not opened for read/write I/O."
    Yes - those are only used when the database is open.
    You should not perform any changes on the standby. Even though those online redo log files exist, what difficulty have you actually seen?
    They will be used whenever you perform a switchover/failover, so there is nothing to worry about here.
    "Is this a case of the STANDBY needing at least a notion of where the REDO logs will need to be should a failover occur, and if the files are already there, the standby database CONTROLFILE will hold onto them, as they are not doing any harm anyway?"
    If you think of it that way, you would be calling Oracle's own functionality harmful. Since they are not used while the standby is not open, what harm do they do?
    STANDBY_FILE_MANAGEMENT --> for example, if you add a datafile on the primary, that information comes across in the archived/redo logs; once they are applied on the standby, the datafile is added automatically when the parameter is set to AUTO. If it is MANUAL, an unnamed file is created in the $ORACLE_HOME/dbs location, and you later have to rename that file and perform recovery.
    check this http://docs.oracle.com/cd/B14117_01/server.101/b10755/initparams206.htm
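    As a concrete illustration of the MANUAL case (all file names below are made up): if recovery stops on an UNNAMED file, you rename it and resume apply roughly like this:
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
    SQL> ALTER DATABASE CREATE DATAFILE 'D:\oracle\product\10.2.0\db_1\database\UNNAMED00012' AS 'D:\ORADATA\SBY10G\USERS02.DBF';
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;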
    HTH.

  • Standby instance throwing error

    When trying to start up the standby instance, I am getting this error:
    SQL> startup pfile='/home/oracle/dummy/admin/pfile/initdummy.ora' nomount;
    ORA-16025: parameter LOG_ARCHIVE_DEST_2 contains repeated or conflicting attributes
    SQL>
    db_name=org
    compatible=10.2.0
    sga_target=1000m
    control_files='/home/oracle/dummy/oradata/control1.ctl'
    background_dump_dest='/home/oracle/dummy/admin/bdump/'
    core_dump_dest='/home/oracle/dummy/admin/cdump/'
    user_dump_dest='/home/oracle/dummy/admin/udump/'
    fal_server=org
    fal_client=dummy
    db_unique_name=dummy
    standby_file_management=auto
    remote_login_passwordfile='exclusive'
    db_file_name_convert='/home/oracle/org/oradata/','/home/oracle/dummy/oradata/'
    log_file_name_convert='/home/oracle/org/oradata/','/home/oracle/dummy/oradata/'
    log_archive_dest_1='location=/home/oracle/dummy/oradata/arch/'
    log_archive_dest_2='service=org'
    ~
    ~
    ~

    Now I can start up my standby instance, but I am still getting some errors.
    SQL> startup pfile='/home/oracle/dummy/admin/pfile/initdummy.ora' nomount;
    ORACLE instance started.
    Total System Global Area 1048576000 bytes
    Fixed Size                  1223344 bytes
    Variable Size             255853904 bytes
    Database Buffers          784334848 bytes
    Redo Buffers                7163904 bytes
    SQL> alter database mount standby database;
    Database altered.
    SQL> alter database  recover managed standby database disconnect from session;
    Database altered.
    SQL> select sequence#,applied from v$archived_log order by 1;
    SEQUENCE# APP
             9 NO
            10 NO
            11 NO
            12 NO
            13 NO
            14 NO
            15 NO
            16 YES
            17 YES
    9 rows selected.
    But the standby alert log is showing the following error:
    bash-3.2$ tail -f alert_dummy.log
    ORA-00312: online log 2 thread 1: '/home/oracle/dummy/oradata/log2.log'
    ORA-27037: unable to obtain file status
    Linux Error: 2: No such file or directory
    Additional information: 3
    Mon Jan  6 16:21:52 2014
    Completed: alter database  recover managed standby database disconnect from session
    Mon Jan  6 16:21:52 2014
    Clearing online redo logfile 2 complete
    Media Recovery Log /home/oracle/dummy/oradata/arch/1_17_836141614.dbf
    Media Recovery Waiting for thread 1 sequence 18
    Mon Jan  6 16:22:28 2014
    Errors in file /home/oracle/dummy/admin/bdump/dummy_arc1_8621.trc:
    ORA-16009: remote archive log destination must be a STANDBY database
    Mon Jan  6 16:22:28 2014
    PING[ARC1]: Heartbeat failed to connect to standby 'org'. Error is 16009.
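    One commonly suggested cause of that ORA-16009: log_archive_dest_2 on the standby points at the primary ('org') with no VALID_FOR clause, so the standby's archiver keeps trying to ship redo to a database that is not a standby. A hedged sketch of the usual fix (the attribute values are assumptions for this setup):
    log_archive_dest_2='service=org valid_for=(online_logfiles,primary_role) db_unique_name=org'
    log_archive_config='dg_config=(org,dummy)'
    With VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) the destination stays dormant while this database runs in the standby role and only becomes active after a switchover.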

  • Oracle ORA_16000 when trying to add standby instance to existing rac node

    I attempted to use dbca to add a new standby instance to an existing cluster. The cluster is 4 nodes, Linux RHEL 5.3 Oracle 11.1.0.7. Also using ASM, asmlib, ocfs2 and shared block devices.
    ASM instances are up and functional on all nodes. current config appears to be running normally and correctly.
    I have a 4 instance database running on the cluster. I also have 3 physical standby active data guard instances running on 3 of the nodes. I wanted to add a new ADG instance to the 4th node.
    While running dbca I received ORA-00604 and ORA-16000.
    The active data guard database was open (read only) and redo apply was on. I am using data guard broker as well, but not grid control.
    Does anyone have a procedure for adding an instance in this environment? Do I need to have the standby in mount state? If dbca won't work does anyone have a manual procedure for adding a new instance?
    Thanks

    zulo
    Let's say you are adding node nusclust160## to your existing cluster and dbca is a pain to use.
    Extend clusterware to the nusclust160## server.
    re: Page 64 of Oracle® Clusterware Administration and Deployment Guide 11g Release 1 (11.1)
    1a.
    Add undo tablespace to support additional node.
    Re-check space for DATA1 on nusclust16007 and /dbdata/ORADB on sun16109.
    As of Thursday, May 21, 2009 the DATA1 asm group has 53,584M free.
    As of Thursday, May 21, 2009 the /dbdata/ORADB has 77G free.
    In a separate terminal window on nusclust16007 run the following in sqlplus
    CREATE UNDO TABLESPACE UNDOTBS4 datafile '+DATA1' SIZE 13300M AUTOEXTEND ON ;
    This will take a long time to create this tablespace. Please minimize the window after submitting the ddl and move on to the next step.
    1b.
    Ensure .bash_profile on nusclust160## looks like this:
    vi .bash_profile
    export ORACLE_HOSTNAME=nusclust160##
    export ORACLE_SID=ORADB4
    export ORA_CRS_BASE=/apps/ocr/oracle
    export ORACLE_BASE=/apps/dbs/oracle
    export PATH=/usr/ccs/bin:/usr/X/bin:/usr/bin:/usr/sfw/bin:/usr/sbin:/usr/local/bin
    export server=`uname -n`
    export PS1="$ORACLE_SID@$HOSTNAME >"
    alias cls='clear'
    alias More='more'
    alias ll='ls -lt | more'
    Gather IP addresses for fourth node from /etc/hosts:
    222.65.125.### nusclust160##
    222.65.125.### nusclust160##-vip
    10.333.248.### nusclust160##-priv
    2. Start Oracle Universal Installer:
    Go to CRS_home/oui/bin and run the addNode.sh script on one of the existing
    nodes. Oracle Universal Installer runs in add node mode.
    The Oracle inventory on nusclust16007, nusclust16008, and nusclust16036 are found under:
    /home/oracle/oraInventory
    Use an X Windows-enabled session (the OUI takes 33 minutes)
    cd /apps/ocr/oracle/product/11.1.0/crs/oui/bin
    ./addNode.sh
    a. In the first screen specify a new node as :
    Public Node Name:          nusclust160##
    Private Node Name:     nusclust160##-priv
    Virtual Host Name:     nusclust160##-vip
    If you receive the error:
    " tar. ./bin/racgvip.orig: Permission denied"
    Do the following:
    cd /apps/ocr/oracle/product/11.1.0/crs/bin
    ls -al racgvip.orig
    paste here:
    chown root:oinstall racgvip.orig
    chmod 771 racgvip.orig
    should now show:
    -rwxrwx--x 1 root oinstall 19213 Feb 11 08:36 racgvip.orig
    As root:
    a.
    On nusclust160##:
    cd /home/oracle/oraInventory
    ./orainstRoot.sh
    b.
    On nusclust16007:
    cd /apps/ocr/oracle/product/11.1.0/crs/install
    ./rootaddnode.sh
    clscfg: EXISTING configuration version 4 detected.
    clscfg: version 4 is 11 Release 1.
    Attempting to add 1 new nodes to the configuration
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 4: nusclust160## nusclust160##-priv nusclust160##
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    /apps/ocr/oracle/product/11.1.0/crs/bin/srvctl add nodeapps -n nusclust160## -A nusclust160##-vip/255.255.255.224/bge0
    c.
    On nusclust160##:
    cd /apps/ocr/oracle/product/11.1.0/crs/
    ./root.sh
    WARNING: directory '/apps/ocr/oracle/product/11.1.0' is not owned by root
    WARNING: directory '/apps/ocr/oracle/product' is not owned by root
    WARNING: directory '/apps/ocr/oracle' is not owned by root
    Checking to see if Oracle CRS stack is already configured
    OCR LOCATIONS = /raw/ocr/ocrconf1,/raw/ocr/ocrconf2
    OCR backup directory '/apps/ocr/oracle/product/11.1.0/crs/cdata/rac_cluster' does not exist. Creating now
    Setting the permissions on OCR backup directory
    Setting up Network socket directories
    Oracle Cluster Registry configuration upgraded successfully
    The directory '/apps/ocr/oracle/product/11.1.0' is not owned by root. Changing owner to root
    The directory '/apps/ocr/oracle/product' is not owned by root. Changing owner to root
    The directory '/apps/ocr/oracle' is not owned by root. Changing owner to root
    clscfg: EXISTING configuration version 4 detected.
    clscfg: version 4 is 11 Release 1.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: nusclust16007 nusclust16007-priv nusclust16007
    node 2: nusclust16008 nusclust16008-priv nusclust16008
    node 3: nusclust16036 nusclust16036-priv nusclust16036
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 30 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Cluster Synchronization Services is active on these nodes.
    nusclust16007
    nusclust16008
    nusclust16036
    nusclust160##
    Cluster Synchronization Services is active on all the nodes.
    Waiting for the Oracle CRSD and EVMD to start
    Oracle CRS stack installed and running under init(1M)
4. After this is done, crs_stat -t will show nusclust160## in the CRS, i.e.
    I see:
    Name Type Target State Host
    ora....160##.gsd application ONLINE ONLINE sun...160##
    ora....160##.ons application ONLINE OFFLINE
    ora....160##.vip application ONLINE ONLINE sun...160##
Do not be concerned about ora.nusclust160##.ons being OFFLINE; it will be fixed in a later step.
    5. As oracle :
    On nusclust16007:
    cd /apps/ocr/oracle/product/11.1.0/crs/bin
    ./racgons add_config nusclust160##:6251
    This should take about one second to run.
    If it says that it has already been added to the OCR you are fine.
    If it hangs, you may need to reboot all servers to clear this issue.
6. Ensure the new node is properly added to the OCR by running
    On nusclust16007:
    ocrdump
    Check for the entries that show:
    [DATABASE.ONS_HOSTS.nusclust160##.PORT]
    ORATEXT : 6251
    7. Check that your cluster is integrated and that the cluster is not divided into
    partitions by completing the following operations:
    On nusclust16007:
    cd /apps/ocr/oracle/product/11.1.0/crs/bin
    ./cluvfy comp clumgr -n all -verbose
You should see: Verification of cluster manager integrity was successful.
    8.
    Use the following command to perform an integrated validation of the Oracle
    Clusterware setup on all of the configured nodes, both the preexisting nodes
    and the nodes that you have added:
    AS oracle on nusclust16007:
    cluvfy stage -post crsinst -n all -verbose
You should see: Post-check for cluster services setup was successful.
    9.
    On nusclust160## as oracle run the following:
    cd /apps/ocr/oracle/product/11.1.0/crs/bin
    ./crs_stat -t | grep OFFLINE
    If you see this:
    ora.nusclust160##.ons application ONLINE OFFLINE
    then run this:
    ./crs_start -all
    After:
    ./crs_stat -t
    ora.nusclust160##.ons application ONLINE ONLINE nusclust160##
If you see the above, then you can move on to the next step.
    Adding database binaries to the nusclust160## server and setting up the listener.
    1.
    From nusclust16007:
    Open an X window (The OUI part takes 13 minutes)
    cd /apps/dbs/oracle/product/11.1.0/db_1/oui/bin
    ./runInstaller -addNode ORACLE_HOME=/apps/dbs/oracle/product/11.1.0/db_1 $*
You should be prompted to specify a new node; in this case you should see nusclust160##, which you will need to check.
    2.
From nusclust160##:
Eventually you will be prompted to run the following as root on the new node.
On nusclust160##:
    cd /apps/dbs/oracle/product/11.1.0/db_1
    ./root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /apps/dbs/oracle/product/11.1.0/db_1
    Enter the full pathname of the local bin directory: [usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /var/opt/oracle/oratab file...
    Entries will be added to the /var/opt/oracle/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    Finished product-specific root actions.
3. Verification
Now set up the .bash_profile and .asm profiles on nusclust160## to support the new ORADB4 and +ASM4 instances for the oracle userid.
    On nusclust160##:
    cp .bash_profile .bash_profile.bak
    On nusclust16007:
    sftp nusclust160##
    put .bash_profile
    On nusclust160##:
    vi .bash_profile
change ORACLE_SID to ORADB4
    cp .bash_profile .asm
    vi .asm
change ORACLE_SID to +ASM4 in the .asm file
    which sqlplus
It should show the path below if the $PATH environment variable is set correctly.
    /apps/dbs/oracle/product/11.1.0/db_1/bin/sqlplus
    On nusclust160##:
    oifcfg getif
    This should show:
    ce4 10.333.248.192 global cluster_interconnect
    ce5 222.65.125.128 global public
    4.
    Run Netbackup Oracle Agent link script.
As oracle, make sure ORACLE_HOME is defined.
    env | grep ORACLE_HOME
    then
    cd /usr/openv/netbackup/bin/
    ./oracle_link
    ls -al $ORACLE_HOME/lib/libobk.so
    should show:
    /apps/dbs/oracle/product/11.1.0/db_1/lib/libobk.so -> /usr/openv/netbackup/bin/libobk.so64.1
    5.
    On the target node, run the Net Configuration Assistant (NETCA) to add a
    listener. Add a listener to the target node by running NETCA from the target node and
    selecting only the target node on the Node Selection page.
    I shall do the following on nusclust160## using X Windows
    Now before I do this I see:
    crs_stat -t
    ora.nusclust160##.gsd application ONLINE ONLINE nusclust160##
    ora.nusclust160##.ons application ONLINE ONLINE nusclust160##
    ora.nusclust160##.vip application ONLINE ONLINE nusclust160##
    Connect to nusclust160## and open up X windows session.
    netca
    Choose Cluster configuration.
Select nusclust160## as the node to configure.
Choose Listener configuration, then Add.
When prompted for a listener name, choose LISTENER; NETCA will append _NUSCLUST160## (the server name) to form the complete listener name.
At this point you will have a listener supporting the new node in the CRS.
    now
    crs_stat -t
    will show:
    ora....0#.lsnr application ONLINE ONLINE nusclust160##
    ora.nusclust160##.gsd application ONLINE ONLINE nusclust160##
    ora.nusclust160##.ons application ONLINE ONLINE nusclust160##
    ora.nusclust160##.vip application ONLINE ONLINE nusclust160##
At this point the necessary CRS entries for gsd, ons, vip, and the listener exist on nusclust160##; all we need now is to add the ORADB4 and +ASM4 instances.
    III. 7/11/2009 7:40 AM Sat [120 min] NTTA DBA
Use the non-DBCA method to create the additional instances on the nusclust160## server. This will involve a complete shutdown of all RAC instances.
    1.
Undo tablespace creation was taken care of in Step I.1. Check on the progress of the UNDOTBS4 tablespace creation in the minimized window. You should see the tablespace on both the primary and the physical standby databases.
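A quick way to confirm the tablespace (a minimal sketch; run it on the primary, and on the standby only once it is open read-only):
select tablespace_name, status, contents from dba_tablespaces where tablespace_name = 'UNDOTBS4';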
    2. First we shall set up the +ASM4 instance on nusclust160## and add it to the cluster.
    On nusclust160##
    cd $ORACLE_HOME/dbs
    vi init+ASM4.ora
    # Copyright (c) 1991, 2001, 2002 by Oracle Corporation
    # Cluster Database
    cluster_database=true
    cluster_database_instances=6
    # Miscellaneous
    diagnostic_dest=/apps/dbs/oracle
    instance_type=asm
    # Pools
    large_pool_size=12M
    asm_diskgroups='DATA1','ARCH','REDO1','REDO2'
    asm_diskstring='/raw/asm'
    +ASM1.instance_number=1
    +ASM2.instance_number=2
    +ASM3.instance_number=3
    +ASM4.instance_number=4
    3.
    On nusclust16007
    cd $ORACLE_HOME/dbs
    sftp nusclust160##
    put orapw+ASM1 /apps/dbs/oracle/product/11.1.0/db_1/dbs
    put orapwORADB1 /apps/dbs/oracle/product/11.1.0/db_1/dbs
    4.
    On nusclust160##
    cd $ORACLE_HOME/dbs
    cp orapw+ASM1 orapw+ASM4
    cp orapwORADB1 orapwORADB4
    5.
    On nusclust160##
    cd $HOME
    . ./.asm
    sqlplus '/ as sysasm'
    startup
    create spfile from pfile='/apps/dbs/oracle/product/11.1.0/db_1/dbs/init+ASM4.ora' ;
    shutdown immediate ;
    startup
    show parameters spfile
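As a sanity check that +ASM4 is up and has mounted the expected diskgroups (a sketch; the diskgroup names are the ones listed in asm_diskgroups above), run in the same SQL*Plus session:
select instance_name, status from v$instance;
select name, state, total_mb, free_mb from v$asm_diskgroup;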
6. Now that we have a running ASM instance, add it to the cluster.
    On nusclust160##
    srvctl add asm -n nusclust160## -i +ASM4 -o /apps/dbs/oracle/product/11.1.0/db_1
    srvctl enable asm -n nusclust160## -i +ASM4
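To confirm the registration, something like the following can be run on nusclust160## (a sketch using the same 11.1-style srvctl node options used elsewhere in this note):
srvctl status asm -n nusclust160##
srvctl config asm -n nusclust160##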
7. Now that we have an ASM instance, let's set up the database instance.
    On nusclust16007/ORADB1 :
    alter system set cluster_database_instances=6 scope=spfile ;
    alter system set instance_name=ORADB4 scope=spfile sid='ORADB4' ;
    alter system set instance_number=4 scope=spfile sid='ORADB4' ;
alter system set local_listener=LISTENER_NUSCLUST160## scope=both sid='ORADB4' ;
    alter system set thread=4 scope=both sid='ORADB4' ;
    alter system set undo_tablespace=UNDOTBS4 scope=both sid='ORADB4' ;
    alter database add logfile thread 4 group 28 ('+REDO1', '+REDO2' ) size 100M ;
    alter database add logfile thread 4 group 29 ('+REDO1', '+REDO2' ) size 100M ;
    alter database add logfile thread 4 group 30 ('+REDO1', '+REDO2' ) size 100M ;
    alter database add logfile thread 4 group 31 ('+REDO1', '+REDO2' ) size 100M ;
    alter database enable public thread 4;
Five standby log groups also need to be added to support the standby (see step 11c; the new online groups can be confirmed with the query below).
So at the end of the day 900M will be added to REDO1 (29,577M free) and 900M will be added to REDO2 (29,577M free).
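The new online groups can then be confirmed with (a sketch):
select group#, thread#, bytes/1024/1024 as mb, members, status from v$log where thread# = 4 order by group#;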
    8. Set up init.ora, listener.ora, and tnsnames.ora for ORADB4 on nusclust160##.
    a. init.ora set up
    cd $ORACLE_HOME/dbs
    vi initORADB4.ora
    SPFILE='+DATA1/ORADB/spfileORADB.ora'
b. Add entries to tnsnames.ora:
ORADB4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORADB)
      (INSTANCE_NAME = ORADB4)
    )
  )
ORADB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16007-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16008-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16036-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORADB)
    )
  )
LISTENERS_ORADB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16007-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16008-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16036-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
  )
LISTENER_NUSCLUST160## =
  (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
ORADB_PRIM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16007-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16008-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust16036-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORADB)
    )
  )
c. Add entries to listener.ora. Most of this file should be set already; just ensure the modifications that need to be made are made.
SID_LIST_LISTENER_NUSCLUST160## =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /apps/dbs/oracle/product/11.1.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = ORADB)
      (ORACLE_HOME = /apps/dbs/oracle/product/11.1.0/db_1)
      (SID_NAME = ORADB4)
    )
  )
LISTENER_NUSCLUST160## =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = NUSCLUST160##-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 222.65.125.###)(PORT = 1521)(IP = FIRST))
    )
  )
    9. Reload the listener.
lsnrctl
    set current_listener LISTENER_NUSCLUST160##
    reload
    exit
    10. Check audit trail, add instance to cluster, and start db instance.
    a.
Check for the audit directory before starting the instance:
/apps/dbs/oracle/product/11.1.0/db_1/rdbms/audit
If this audit trail directory does not exist, create it.
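If it needs to be created, a minimal sketch as the oracle user (path taken from above; the permissions shown are illustrative):
mkdir -p /apps/dbs/oracle/product/11.1.0/db_1/rdbms/audit
chmod 750 /apps/dbs/oracle/product/11.1.0/db_1/rdbms/audit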
    b.
    srvctl add instance -d ORADB -i ORADB4 -n nusclust160##
    srvctl modify instance -d ORADB -i ORADB4 -s +ASM4
    srvctl enable instance -d ORADB -i ORADB4
    Will probably show: PRKP-1017 : Instance ORADB4 already enabled.
    c.
    sqlplus '/ as sysdba'
    startup
**Because the cluster_database_instances parameter change requires a complete shutdown of all instances in the cluster, you might have an issue when it attempts to start the instance. If you receive an error, then run the commands below (a status check follows them):
    srvctl stop database -d oradb
    sqlplus '/ as sysdba'
    startup
    shutdown
    srvctl start database -d oradb
    shutdown
    srvctl start instance -d ORADB -i ORADB4 -o open
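Once it is up, a quick check that all four instances are registered and open (a sketch):
srvctl status database -d ORADB
Then from SQL*Plus on any open instance:
select inst_id, instance_name, status from gv$instance order by inst_id;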
    11.
a. Modify the spfiles of +ASM1, +ASM2, and +ASM3.
    On nusclust16007
    . ./.asm
    sqlplus '/ as sysasm'
    alter system set instance_number=4 scope=spfile sid='+ASM4' ;
    On nusclust16008
    . ./.asm
    sqlplus '/ as sysasm'
    alter system set instance_number=4 scope=spfile sid='+ASM4' ;
    On nusclust16036
    . ./.asm
    sqlplus '/ as sysasm'
    alter system set instance_number=4 scope=spfile sid='+ASM4' ;
b. Modify tnsnames.ora on nusclust16007, nusclust16008, and nusclust16036.
    On nusclust16007
ORADB4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORADB)
      (INSTANCE_NAME = ORADB4)
    )
  )
    Add the following line to the ORADB alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    Add the following line to the LISTENERS_ORADB alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    Add the following line to the ORADB_PRIM alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    On nusclust16008
ORADB4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORADB)
      (INSTANCE_NAME = ORADB4)
    )
  )
    Add the following line to the ORADB alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    Add the following line to the LISTENERS_ORADB alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    Add the following line to the ORADB_PRIM alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    On nusclust16036
ORADB4 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORADB)
      (INSTANCE_NAME = ORADB4)
    )
  )
    Add the following line to the ORADB alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    Add the following line to the LISTENERS_ORADB alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
    Add the following line to the ORADB_PRIM alias:
    (ADDRESS = (PROTOCOL = TCP)(HOST = nusclust160##-vip)(PORT = 1521))
c. Add standby logs on the primary to support the 4th node (a verification query follows the statements below).
    alter database add standby logfile thread 4 group 32 ('+REDO1', '+REDO2' ) size 100M ;
    alter database add standby logfile thread 4 group 33 ('+REDO1', '+REDO2' ) size 100M ;
    alter database add standby logfile thread 4 group 34 ('+REDO1', '+REDO2' ) size 100M ;
    alter database add standby logfile thread 4 group 35 ('+REDO1', '+REDO2' ) size 100M ;
    alter database add standby logfile thread 4 group 36 ('+REDO1', '+REDO2' ) size 100M ;
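The thread 4 standby log groups can be confirmed with (a sketch):
select group#, thread#, bytes/1024/1024 as mb, status from v$standby_log where thread# = 4 order by group#;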
    12.
    Test the cluster to make sure everything is set up correctly.
    a. Shutdown resources.
    On nusclust16007:
    emctl stop dbconsole
    ps -ef | grep perl
    ps -ef | grep agent
    ps -ef | grep java
    On nusclust16008:
    emctl stop dbconsole
    On nusclust16036:
    emctl stop dbconsole
    On nusclust16008:
    cd $HOME
    . ./.rman
    cd scripts
    ./go
    shutdown immediate
    cd $HOME
    . ./.bash_profile
    srvctl stop database -d oradb
    crs_stop -all
    crs_stat -t
    b. Startup resources
    On nusclust16007:
    cd $HOME
    . ./.bash_profile
    crs_start -all
    crs_stat -t
    The command above should show everything up and running.
    ocrcheck
    On nusclust16008:
    cd $HOME
    . ./.rman
    cd scripts
    ./go
    startup
    On nusclust16007:
    emctl start dbconsole
    On nusclust16008:
    emctl start dbconsole
    On nusclust16036:
    emctl start dbconsole
    How does that work for you?
    -JR jr

  • Will RMAN delete archive log files on a Standby server?

    Environment:
    Oracle 11.2.0.3 EE on Solaris 10.5
    I am currently NOT using an RMAN repository (coming soon).
    I have a Primary database sending log files to a Standby.
    My Retention Policy is set to 'RECOVERY WINDOW OF 8 DAYS'.
    Question: Will RMAN delete the archive log files on the Standby server after they become obsolete based on the Retention Policy or do I need to remove them manually via O/S command?
    Does the fact that I'm NOT using an RMAN Repository at the moment make a difference?
    Couldn't find the answer in the docs.
    Thanks very much!!
    -gary

    Hello again Gary;
    Sorry for the delay.
    Why is what you suggested better?
No, it's not better, but I prefer to manage the archives myself. This method works, period.
    Does that fact (running a backup every 4 hours) make my archivelog deletion policy irrelevant?
    No. The policy is important.
Having the Primary set to:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
but the Standby set to NONE means the worst thing that can happen is RMAN will bark when you try to delete something (this is a good thing).
    How do I prevent the archive backup process from backing up an archive log file before it gets shipped to the standby?
Should be a non-issue; the archive itself does not move, the redo is transported and applied. There is SQL to monitor both (transport and apply), for example:
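A minimal sketch of that kind of monitoring SQL (standard views; this assumes the standby is log_archive_dest_2 on the primary):
On the primary, to see what has shipped and what the standby reports as applied:
select dest_id, sequence#, archived, applied from v$archived_log where dest_id = 2 order by sequence#;
On the standby, transport and apply lag can be read from:
select name, value from v$dataguard_stats where name in ('transport lag','apply lag');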
    For Data Guard I would consider getting a copy of
    "Oracle Data Guard 11g Handbook" - Larry Carpenter (AKA Dr. Paranoid ) ISBN 978-0-07-162111-2
    Best Oracle book I've read in 10 years. Covers a ton of ground clearly.
    Also Data Guard forum here :
    Data Guard
    Best Regards
    mseberg
    Edited by: mseberg on Apr 10, 2012 4:39 PM

  • How to apply archivelog with gap on standby database

    Hi All,
    Oracle Database version : 9.2.0.6
    Following is my sequence of commands on standby database.
    SQL>alter database mount standby database;
    SQL> RECOVER AUTOMATIC STANDBY DATABASE UNTIL CHANGE n;
    ORA-00279: change 809120216 generated at 07/24/2006 09:55:03 needed for thread
    1
    ORA-00289: suggestion : D:\ORACLE\ADMIN\TEST\ARCH\TEST001S19921.ARC
    ORA-00280: change 809120216 for thread 1 is in sequence #19921
    ORA-00278: log file 'D:\ORACLE\ADMIN\TEST\ARCH\TEST001S19921.ARC' no longer
    needed for this recovery
    ORA-00308: cannot open archived log
    'D:\ORACLE\ADMIN\TEST\ARCH\TEST001S19921.ARC'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
I have checked the last sequence# on the standby database, which is 19921, and I have archivelogs from sequence# 20672 onwards. When I try to apply the archive logs starting from sequence# 20672, recovery searches for the 'D:\ORACLE\ADMIN\TEST\ARCH\TEST001S19921.ARC' file and then cancels. Please note that I don't have the missing archives on the Primary server either. So how can I apply the remaining archive logs that I do have, from 20672 onwards?
    I hope I am not creating any confusion.
    Thx in advance.

    Hi Aijaz,
Thx for your answer, but my scenario is a bit more complex. I have checked my standby database status; it is not running in recovery mode. I have tried to find the archive gap, which is 0 on the standby server. I copy all archived logs from the primary to the standby through a script every 2 hours and apply them on the standby. After applying, the script removes all applied log files from the primary as well as the standby. So it is something like: I have archivelogs 1,2,3,7,8,9,10, so archivelogs 4, 5 and 6 are missing but required when I try to recover the standby database. Also note that I want to apply 7,8,9,10; I will lose some data from the missing archives, but I have a cold backup anyway. I don't have the missing archivelog files (4, 5 and 6) anywhere at all. So how can I recover the standby database? I am using the standby just for backup purposes.
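For reference, a sketch of confirming what the standby last applied versus any registered gap (standard views; the sequence numbers are the ones from this thread):
select max(sequence#) as last_applied from v$log_history;
select * from v$archive_gap;
In general media recovery cannot skip missing sequences, so a standby in this state normally has to be rebuilt from a fresh backup of the primary.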
    I hope my question is clear now.
    Thx in advance
    - Mehul

Standby instance: ORA-01033 when changing password

    Hello
I built a standby instance which I would like to add in Grid Control. As you know, the standby instance is mounted, but not opened.
    This state leads to an error when trying to change dbsnmp's password:
    oracle@sksta90271 [IARTSBY]:~> sqlplus sys as sysdba
    SQL*Plus: Release 10.2.0.3.0 - Production on Mon May 26 09:13:36 2008
    Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
    Enter password:
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL> alter user dbsnmp identified by test;
    alter user dbsnmp identified by test
    ERROR at line 1:
    ORA-01109: database not open
    SQL>
I need to set dbsnmp's password to add the instance in Grid Control. How can I do this?
    Thanks Casi

The procedure above (setting the password for dbsnmp, switching logfiles and having them applied to the standby) was not successful.
Can you be more specific about the point at which it failed?
    On Primary Database:
    ====================
    SQL> alter user scott identified by scott123;
    User altered.
    SQL> conn scott/scott123
    Connected.
    SQL> conn /as sysdba
    Connected.
    SQL> alter system switch logfile;
    System altered.
    On Physical Standby Database:
    =============================
    SQL> conn scott/scott123
    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress
    Warning: You are no longer connected to ORACLE.
    SQL> conn /as sysdba
    Connected.
    SQL>
    SQL> alter database recover managed standby database cancel;
    Database altered.
    SQL> alter database open ;
    Database altered.
    SQL> conn scott/scott123
    Connected.
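A quick way to confirm the standby's state before and after opening it (a sketch, run as SYSDBA on the standby):
select open_mode, database_role from v$database;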

  • Error 1017 received logging on to the standby

    Hi anyone,
    Please help me...
I have a database running on Oracle 10gR1 EE (10.1.0.4.0) on Red Hat Linux, and this server is replicated to a DR server for backup. We changed the 'SYS' password about a week ago and nothing happened until today: the database is not connecting to the DR server. Here is the error obtained from the alert log file. Is there anything I've missed when changing the SYS password?
Somebody please help me... I'm in a panic now.
    thanks in advance.
    -Nonie

    forgot to paste the error.
    Mon Jan 21 14:54:20 2008
    Error 1017 received logging on to the standby
    Check that the primary and standby are using a password file
    and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
    and that the SYS password is same in the password files.
    returning error ORA-16191
    It may be necessary to define the DB_ALLOWED_LOGON_VERSION
    initialization parameter to the value "10". Check the
    manual for information on this initialization parameter.
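The usual fix here is to make the standby's password file match the primary's new SYS password, for example (a sketch; the host name, standby SID and password are placeholders, and the file follows the orapw<SID> convention under $ORACLE_HOME/dbs):
On the primary:
scp $ORACLE_HOME/dbs/orapw$ORACLE_SID standby_host:$ORACLE_HOME/dbs/orapw<STANDBY_SID>
(or recreate the standby's file with: orapwd file=$ORACLE_HOME/dbs/orapw<STANDBY_SID> password=<new_sys_password>)
Then on both databases check:
show parameter remote_login_passwordfile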
    -Nonie

  • Transaction log shipping restore with standby failed: log file corrupted

The transaction log restore failed with the error below for this database only; the other 4 log-shipped databases on the same SQL Server are working fine.
Date: 9/10/2014 6:09:27 AM
Log: Job History (LSRestore_DATA_TPSSYS)
Step ID: 1
Server: DATADR
Job Name: LSRestore_DATA_TPSSYS
Step Name: Log shipping restore log job step.
Duration: 00:00:03
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0
    Message
    2014-09-10 06:09:30.37  *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred while processing the log for database 'TPSSYS'. 
    If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
    Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Deleting old log backup files. Primary Database: 'TPSSYS'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.38  ----- END OF TRANSACTION LOG RESTORE    
    Exit Status: 1 (Error)

I have restored the database to a new server and checked with new log shipping, but it gives the same error again. If it were a network issue, I believe the issue would need to occur on every database on that server with a log shipping configuration.
    error :
    Message
    2014-09-12 10:50:03.18    *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-12 10:50:03.18    *** Error: An error occurred while processing the log for database 'EAPDAT'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
Can this happen due to database or log file corruption? If so, how can I check to verify the issue?
It's not necessarily true that a network issue would happen every day; IMO it basically happens when the load on the network is high and you transfer a log file which is big in size.
As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log because it did not find the log to be consistent. From this it looks like log corruption.
Is it the same log file you restored? If that is the case, since the log file was corrupt it would of course give an error on whatever server you restore it to.
Can you try creating log shipping on a new server by taking a fresh full and log backup and see if you get the issue there as well? I would also suggest you raise a case with Microsoft and let them tell you the root cause of this problem.
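One way to check whether the backup file itself is readable before suspecting the network (a sketch; the path is the one from the error above):
RESTORE VERIFYONLY FROM DISK = N'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn';
RESTORE HEADERONLY FROM DISK = N'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn';
If VERIFYONLY itself reports the file as damaged, the copy (or the original backup) is corrupt and log shipping has to be re-initialized from a fresh full backup.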
