Setup of standby with log shipping

Hello list,
I want to set up a standby database on a different server, based on regular complete/incremental/log backups.
In the past we ran this scenario (via bash scripts):
- complete backup (once a day)
- incremental backups (every 30 minutes)
- cold incremental backup in the case of a normal switch from the live to the standby server
This scenario worked fine for the 2-3 situations where we had to switch between the servers.
Now I want to change this to the following combination (mix of bash and perl scripts):
- complete backup (once a week/month)
- incremental backup (once a day)
- log backups (every <n> minutes)
following the MaxDB documentation, and because we have configured AUTOLOG ON and this way generate about one log backup every hour.
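As an illustration only, such a schedule could be driven from cron. The script names below are hypothetical placeholders; with AUTOLOG ON the database writes the log backups itself, so the last entry only has to ship the new files to the standby host:

# m  h dom mon dow  command (all script names are hypothetical)
0  3  *   *   0     /usr/local/bin/complete_backup.sh     # complete backup, weekly
30 2  *   *   1-6   /usr/local/bin/incremental_backup.sh  # incremental backup, daily
*/10 * *  *   *     /usr/local/bin/ship_log_backups.sh    # ship new log backups every 10 minutes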
In various tests I found a solution where I use a complete backup for initialization and then regularly restore the log backups from the original DB to the standby DB.
Here is an example:
a) db_admin
   db_connect
   db_activate RECOVER <complete media> DATA
b) db_admin
   db_connect
   recover_start <log media> LOG 001
c) db_admin
   db_connect
   recover_start <log media> LOG 001
   recover_replace <log media> "<location of log media>" 002
d) db_admin
   db_connect
   recover_start <log media> LOG 002
   recover_replace <log media> "<location of log media>" 003
and so on ...
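For regular execution, this sequence can be wrapped in a small bash driver. The following is only a minimal sketch under assumptions of mine (not from the original setup): dbmcli is on the PATH, the shipped log backups arrive as numbered files log_backup.<nnn> in $LOG_DIR, the medium is called LogMedium, and the number of the last applied log is kept in a state file.

#!/bin/bash
# Sketch: re-apply the last restored log with recover_start (the 'overlap'),
# then feed each newer log to the same dbmcli session with recover_replace.
DB=MYDB                          # placeholder database name
DBMUSER="DBM,SECRET"             # placeholder DBM operator credentials
LOG_DIR=/backup/logs             # where the shipped log backups arrive
STATE=/var/lib/standby/last_log  # number of the last successfully applied log

last=$(cat "$STATE")
# db_admin brings the standby back to ADMIN mode after the previous
# run left it OFFLINE at the end of the log restore.
cmds="db_admin
db_connect
recover_start LogMedium LOG $last"

next=$(printf '%03d' $((10#$last + 1)))
while [ -f "$LOG_DIR/log_backup.$next" ]; do
  cmds="$cmds
recover_replace LogMedium \"$LOG_DIR/log_backup.$next\" $next"
  last=$next
  next=$(printf '%03d' $((10#$last + 1)))
done

# Note: checking dbmcli's exit code is a simplification; a real script
# should also parse the command output for recovery errors.
printf '%s\n' "$cmds" | dbmcli -d "$DB" -u "$DBMUSER" && echo "$last" > "$STATE"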
Here are my questions:
1. Is it correct that I have to restore the last successfully restored log (if not the first) from the previous session with "recover_start" before I can restore the next log with "recover_replace" in a new session?
2. Can't I mix the restoring of incremental and log backups in this way: log001, incremental data, log002, ...? In my tests I was able to restore the incremental data directly after the complete data, but not between the log backups.
3. Can I avoid the state change to OFFLINE after a log restore?
I have read various documentation on this subject, but didn't find a detailed description of the concrete interactions between these commands when they are used in shell or perl scripts that are executed regularly.
Maybe somebody has developed something similar to this. I've seen some approaches from others, but none of them seems to cover the scenario above completely.
Regards,
Thomas

Thomas Schulz wrote:
> Here are my questions:
>
> 1. Is it correct that I have to restore the last successfully restored log (if not the first) from the previous session with "recover_start" before I can restore the next log with "recover_replace" in a new session?
Yes, that's correct. As soon as you leave the restore session, you have to provide an 'overlap' of log information so that the recovery can continue.
> 2. Can't I mix the restoring of incremental and log backups in this way: log001, incremental data, log002, ...? In my tests I was able to restore the incremental data directly after the complete data, but not between the log backups.
No, that's not possible. After you've recovered some log information, the incremental backup cannot be applied as a "delta" to the data area anymore, as the data area has already changed.
> 3. Can I avoid the state change to OFFLINE after a log restore?
Of course - don't use recover_cancel
As soon as you stop the recovery, the database is stopped - no way around this.
There are some 3rd party tools available for this, like LIBELLE.
KR Lars

Similar Messages

  • Oracle8i Data Guard with log shipping

    Is it true that in Oracle8i, with Data Guard, there will be zero data loss if the online redo logs have been mirrored to the DR site, and that in the event of a disaster the last unfinished redo log can be used to recover the database?
    What product is used to apply the redo logs?
    I know Oracle9i claims this is possible, but when will Oracle9i be available for the Sun platform?


  • Transaction log shipping restore with standby failed: log file corrupted

    Restoring the transaction log failed and I get the error below; it affects only 4 databases on the same SQL Server instance, while the remaining ones are working fine.
    Date: 9/10/2014 6:09:27 AM
    Log: Job History (LSRestore_DATA_TPSSYS)
    Step ID: 1
    Server: DATADR
    Job Name: LSRestore_DATA_TPSSYS
    Step Name: Log shipping restore log job step.
    Duration: 00:00:03
    Sql Severity: 0
    Sql Message ID: 0
    Operator Emailed:
    Operator Net sent:
    Operator Paged:
    Retries Attempted: 0
    Message:
    2014-09-10 06:09:30.37  *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred while processing the log for database 'TPSSYS'. 
    If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
    Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Deleting old log backup files. Primary Database: 'TPSSYS'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.38  ----- END OF TRANSACTION LOG RESTORE    
    Exit Status: 1 (Error)

    I have restored the database to a new server and checked with new log shipping, but it gives the same error again. If it were a network issue, I believe the issue would have to occur on every database on that server with a log shipping configuration.
    Error:
    Message
    2014-09-12 10:50:03.18    *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-12 10:50:03.18    *** Error: An error occurred while processing the log for database 'EAPDAT'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Can this happen due to database or log file corruption? If so, how can I check to verify the issue?
    It is not necessarily a network issue; if it were, it would happen every day. IMO it basically happens when the load on the network is high and you transfer a log file which is big in size.
    As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log because it did not find the log to be consistent. From here it seems like log corruption.
    Is it the same log file you restored? If that is the case, since the log file was corrupt, it would of course give an error on whatever server you restore it to.
    Can you try creating log shipping on a new server by taking a fresh full and log backup and see if you get the issue there as well? I would also suggest you raise a case with Microsoft and let them tell you the root cause of this problem.

  • MAXDB Log Shipping

    Hi folks,
    we had working log shipping scripts which automatically copied log backups to the standby server. We had to interrupt the log backups (overwrite mode was set during an EHP installation), and now, after a full backup and restore, the log restores do not work; please see the log below (the last two lines).
    The log file is number 42105:
    FirstLogPage=5233911
    LastLogPage=6087243
    Database UsedLogPage=5752904
    I would expect the restore to work, since the database's used log page is within the range of the first and last log pages of the log backup file.
    Is this not the case? How should it work when re-establishing the log shipping?
    Mark.
    Find first log file to apply
    FindLowestLogNo()
    File: log_backup.42105
       File extension is numeric '42105'
       LogNo=42105
    FindLowestLogNo()
    LogMin=42105 LogMax=42105
    Execute Command: C:\sapdb\programs\pgm\dbmcli.exe -n localhost -d ERP -u SUPERDBA,Guer0l1t0 db_admin & C:\sapdb\programs\pgm\dbmcli.exe -n localhost -d ERP -u SUPERDBA,Guer0l1t0 medium_label LOG1 42105
    [START]
    OK
    OK
    Returncode               0
    Date                     20110308
    Time                     00111416
    Server                   saperpdb.rh.renold.com
    Database                 ERP
    Kernel Version           Kernel    7.7.06   Build 010-123-204-327
    Pages Transferred        0
    Pages Left               0
    Volumes                  
    Medianame                
    Location                 F:\log_shipping\log_backup.42105
    Errortext                
    Label                    LOG_000042105
    Is Consistent            
    First LOG Page           5233911
    Last LOG Page            6087243
    DB Stamp 1 Date          20110209
    DB Stamp 1 Time          00190733
    DB Stamp 2 Date          20110308
    DB Stamp 2 Time          00111415
    Page Count               853333
    Devices Used             1
    Database ID              saperpdb.rh.renold.com:ERP_20110209_210432
    Max Used Data Page       
    Converter Page Count     
    [END]
    LogNo=42105 FirstLogPage=5233911 LastLogPage=6087243 (UsedLogPage=5752904)
    WARNING: Log file not yet applied but NOT the first log file. Either sequence error or first log file is missing/yet to arrive

    Hello Birch Mark,
    the recovery with initialization is the correct step to recreate the shadow database.
    What has to be done before:
    Source database:
    SA) Activate database logging.
    SB) Create the complete data backup.
    SC) Set AUTOLOG ON or create the log backups; the first log backup is created after the complete data backup => check the backup history of the source database.
    Shadow database:
    ShA) Run the recovery with initialization or use the db_activate dbm command; see more details in the MaxDB library,
    I gave you references in my reply before.
    ShB) After the restore of the complete data backup created in step SB), don't restart the database into ONLINE mode.
    Keep the shadow database in ADMIN or OFFLINE <when you execute recover_cancel>.
    Please post the output of the db_restartinfo command.
    ShC) You can then restart the recovery of the log backups created in SC); check the backup history of the source database to see which log backup is the first to recover. (A sketch of these steps follows below.)
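    A hedged sketch of steps ShA)-ShC) as dbmcli calls from bash, mirroring the command sequence from the original question; the database name, credentials, medium names and the first log number are placeholders, not taken from this thread:

    # ShA) initialize the shadow database from the complete data backup
    printf 'db_admin\ndb_connect\ndb_activate RECOVER CompleteMedium DATA\n' | dbmcli -d MYDB -u SUPERDBA,<password>
    # ShB) database stays in ADMIN mode; check where log recovery must start
    dbmcli -d MYDB -u SUPERDBA,<password> db_restartinfo
    # ShC) recover the first log backup reported by the backup history
    printf 'db_admin\ndb_connect\nrecover_start LogMedium LOG <first_log_no>\n' | dbmcli -d MYDB -u SUPERDBA,<password>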
    Did you follow those steps?
    There is helpful documentation/hints at Wiki:
    http://wiki.sdn.sap.com/wiki/display/MaxDB/SAPMaxDBHowTo
    ->"HowTo - Standby DB log shipping"
    Regards, Natalia Khlopina

  • Log Shipping Restore Job working but DB not restoring

    Hi All 
    I had a problem with log shipping.
    The first time I set up log shipping, the restore job worked fine.
    But after I edited the destination folder in the "Copy File" tab
    (before, the destination folder was \\HQAPPDB\Data_backup; then I changed it to \\HQAPPDB\C$\Data_backup),
    the restore job stopped restoring the secondary database, but the copy job still copies all transaction files to \\HQAPPDB\C$\Data_backup.
    I think I cannot simply change it like that.
    I changed the destination folder back to \\HQAPPDB\Data_backup, synced the primary and secondary DB, then enabled the backup, copy and restore jobs,
    but the secondary DB is still not restoring.
    Is there any solution so I can make the restore job work like the first time I set it up?
    I really appreciate your help.
    Thank you so much. 
    Oryza Safutra

    Hi Oryza Safutra,
    According to your description, you changed the destination folder, enabled the backup, copy and restore jobs again, and these jobs all run successfully; meanwhile, all transaction files have been moved to the new destination folder. However, if you make some modifications
    in your primary database, they are not synced to the secondary database, right?
    I did a test and it ran well. In theory, if you only change the location of the folder, it will not affect the log shipping process. We need to verify that you granted read and write permission on the new folder to the proxy account for the copy
    job. Or the frequency may be too long and the restore job has not yet occurred. You can configure an alert for the restore job for when it fails. I recommend you
    check the error logs in the SQL Server Agent; if there are error messages there, you can post them for analysis.
    Regards,
    Sofiya Li
    TechNet Community Support

  • The log shipping restore job restores a corrupted transaction log backup to a secondary database

    Dear Sir,
    I have primary sql instances in cluster node and it is configured with log shipping for DR system.
    The instance fails over before the log shipping backup job finishes; therefore, a corrupted transaction log backup is generated. How do I handle the log shipping without a break, and how do I know whether this transaction log backup is damaged?
    Cheers,

    Well, when a failover happens, SQL Server is stopped and restarted on the other node. So if SQL Server is stopped while it is doing a log backup, the backup operation stops and there are no .trn files. The backup operation won't complete, hence no backup
    information is stored in the SQL Server msdb and no .trn file is generated.
    You can run RESTORE VERIFYONLY on a .trn file to see whether it is damaged or not. Log shipping is quite flexible: even if the previous log backup did not complete, the next one won't be affected, because SQL Server has no information about the incomplete backup.
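    For reference, a hedged sketch of that verification step run from a command prompt on the secondary; the server name and backup path are hypothetical placeholders:

    sqlcmd -S SECONDARY -E -Q "RESTORE VERIFYONLY FROM DISK = N'\\primary\logship\mydb_20140910.trn'"

    If the file is intact, sqlcmd should report the backup set as valid; otherwise it fails with an error and the file should be re-copied or a fresh log backup taken.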
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • SQL Log shipping ISSUE

    Hi All,
    I have a maintenance plan set to run a FULL backup for the databases every night at 12:00 AM and differential backups hourly. I also configured log shipping for a few databases. My question is: should I remove the log shipping databases from the maintenance plan, both
    full and differential?
    My concern is that log shipping will take backups and the maintenance plan will also back up the DBs. Will the two interfere with each other? The log shipping takes its backups during the same time period (because I kept the default settings during LS setup).
    Should I exclude the log shipping DBs from the maintenance plan?
    Your response is important to me.
    Thanks,
    DBA

    Should I exclude the log shipping DBs from the maintenance plan?
    Hi,
    No, you should not as such, but since your database is configured for log shipping, a differential backup might not be required; the full backup, however, you must keep taking.
    Neither diff nor full backups will affect log shipping.

  • ORA-16191: Primary log shipping client not logged on standby.

    Hi,
    Please help me with the following scenario. I have two nodes, ASM1 and ASM2, with RHEL4 U5 as the OS. On node ASM1 there is a database ORCL using the ASM diskgroups DATA and RECOVER, and the archive location is '+RECOVER/orcl/'. On node ASM2, I have to configure the STDBYORCL (standby) database using ASM. I have taken a copy of database ORCL via RMAN, as per the maximum availability architecture.
    Then I ftp'd everything to ASM2 and put it on the file system /u01/oradata. I made all necessary changes in the primary and standby database pfiles and then performed the duplicate database for standby using RMAN in order to put the db files in the desired diskgroups. I have mounted the standby database but, unfortunately, the log transport service is not working and archives are not getting shipped to the standby host.
    Here are all configuration details.
    Primary database ORCL pfile:
    [oracle@asm dbs]$ more initorcl.ora
    stdbyorcl.__db_cache_size=251658240
    orcl.__db_cache_size=226492416
    stdbyorcl.__java_pool_size=4194304
    orcl.__java_pool_size=4194304
    stdbyorcl.__large_pool_size=4194304
    orcl.__large_pool_size=4194304
    stdbyorcl.__shared_pool_size=100663296
    orcl.__shared_pool_size=125829120
    stdbyorcl.__streams_pool_size=0
    orcl.__streams_pool_size=0
    *.audit_file_dest='/opt/oracle/admin/orcl/adump'
    *.background_dump_dest='/opt/oracle/admin/orcl/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='+DATA/orcl/controlfile/current.270.665007729','+RECOVER/orcl/controlfile/current.262.665007731'
    *.core_dump_dest='/opt/oracle/admin/orcl/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='+DATA'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='orcl'
    *.db_recovery_file_dest='+RECOVER'
    *.db_recovery_file_dest_size=3163553792
    *.db_unique_name=orcl
    *.fal_client=orcl
    *.fal_server=stdbyorcl
    *.instance_name='orcl'
    *.job_queue_processes=10
    *.log_archive_config='dg_config=(orcl,stdbyorcl)'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.log_archive_dest_2='SERVICE=stdbyorcl'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='%t_%s_%r.dbf'
    *.open_cursors=300
    *.pga_aggregate_target=121634816
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=364904448
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS'
    *.user_dump_dest='/opt/oracle/admin/orcl/udump'
    Standby database STDBYORCL pfile:
    [oracle@asm2 dbs]$ more initstdbyorcl.ora
    stdbyorcl.__db_cache_size=251658240
    stdbyorcl.__java_pool_size=4194304
    stdbyorcl.__large_pool_size=4194304
    stdbyorcl.__shared_pool_size=100663296
    stdbyorcl.__streams_pool_size=0
    *.audit_file_dest='/opt/oracle/admin/stdbyorcl/adump'
    *.background_dump_dest='/opt/oracle/admin/stdbyorcl/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='u01/oradata/stdbyorcl_control01.ctl'#Restore Controlfile
    *.core_dump_dest='/opt/oracle/admin/stdbyorcl/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/oradata'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='orcl'
    *.db_recovery_file_dest='+RECOVER'
    *.db_recovery_file_dest_size=3163553792
    *.db_unique_name=stdbyorcl
    *.fal_client=stdbyorcl
    *.fal_server=orcl
    *.instance_name='stdbyorcl'
    *.job_queue_processes=10
    *.log_archive_config='dg_config=(orcl,stdbyorcl)'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.log_archive_dest_2='SERVICE=orcl'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='%t_%s_%r.dbf'
    *.log_archive_start=TRUE
    *.open_cursors=300
    *.pga_aggregate_target=121634816
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=364904448
    *.standby_archive_dest='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS'
    *.user_dump_dest='/opt/oracle/admin/stdbyorcl/udump'
    db_file_name_convert=('+DATA/ORCL/DATAFILE','/u01/oradata','+RECOVER/ORCL/DATAFILE','/u01/oradata')
    log_file_name_convert=('+DATA/ORCL/ONLINELOG','/u01/oradata','+RECOVER/ORCL/ONLINELOG','/u01/oradata')
    I have configured the TNS service on both hosts and it is working absolutely fine.
    ASM1
    =====
    [oracle@asm dbs]$ tnsping stdbyorcl
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:49:00
    Copyright (c) 1997, 2005, Oracle. All rights reserved.
    Used parameter files:
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.20)(PORT = 1521))) (CONNECT_DATA = (SID = stdbyorcl) (SERVER = DEDICATED)))
    OK (30 msec)
    ASM2
    =====
    [oracle@asm2 archive]$ tnsping orcl
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:48:39
    Copyright (c) 1997, 2005, Oracle. All rights reserved.
    Used parameter files:
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.10)(PORT = 1521))) (CONNECT_DATA = (SID = orcl) (SERVER = DEDICATED)))
    OK (30 msec)
    Please guide me on what I am missing. Thanking you in anticipation.
    Regards,
    Ravish Garg

    Following are the errors I am receiving as per alert log.
    ORCL alert log:
    Thu Sep 25 17:49:14 2008
    ARCH: Possible network disconnect with primary database
    Thu Sep 25 17:49:14 2008
    Error 1031 received logging on to the standby
    Thu Sep 25 17:49:14 2008
    Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
    ORA-01031: insufficient privileges
    FAL[server, ARC1]: Error 1031 creating remote archivelog file 'STDBYORCL'
    FAL[server, ARC1]: FAL archive failed, see trace file.
    Thu Sep 25 17:49:14 2008
    Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Thu Sep 25 17:49:14 2008
    ORACLE Instance orcl - Archival Error. Archiver continuing.
    Thu Sep 25 17:49:44 2008
    FAL[server]: Fail to queue the whole FAL gap
    GAP - thread 1 sequence 40-40
    DBID 1192788465 branch 665007733
    Thu Sep 25 17:49:46 2008
    Thread 1 advanced to log sequence 48
    Current log# 2 seq# 48 mem# 0: +DATA/orcl/onlinelog/group_2.272.665007735
    Current log# 2 seq# 48 mem# 1: +RECOVER/orcl/onlinelog/group_2.264.665007737
    Thu Sep 25 17:55:43 2008
    Shutting down archive processes
    Thu Sep 25 17:55:48 2008
    ARCH shutting down
    ARC2: Archival stopped
    STDBYORCL alert log:
    ==============
    Thu Sep 25 17:49:27 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-01017: invalid username/password; logon denied
    Thu Sep 25 17:49:27 2008
    Error 1017 received logging on to the standby
    Check that the primary and standby are using a password file
    and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
    and that the SYS password is same in the password files.
    returning error ORA-16191
    It may be necessary to define the DB_ALLOWED_LOGON_VERSION
    initialization parameter to the value "10". Check the
    manual for information on this initialization parameter.
    Thu Sep 25 17:49:27 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-16191: Primary log shipping client not logged on standby
    PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
    Thu Sep 25 17:51:38 2008
    FAL[client]: Failed to request gap sequence
    GAP - thread 1 sequence 40-40
    DBID 1192788465 branch 665007733
    FAL[client]: All defined FAL servers have been attempted.
    Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
    parameter is defined to a value that is sufficiently large
    enough to maintain adequate log switch information to resolve
    archivelog gaps.
    Thu Sep 25 17:55:16 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-01017: invalid username/password; logon denied
    Thu Sep 25 17:55:16 2008
    Error 1017 received logging on to the standby
    Check that the primary and standby are using a password file
    and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
    and that the SYS password is same in the password files.
    returning error ORA-16191
    It may be necessary to define the DB_ALLOWED_LOGON_VERSION
    initialization parameter to the value "10". Check the
    manual for information on this initialization parameter.
    Thu Sep 25 17:55:16 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-16191: Primary log shipping client not logged on standby
    PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
    Please suggest what I am missing.
    Regards,
    Ravish Garg
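    As the standby alert log itself hints, ORA-16191 together with ORA-01017 usually means the password files on the two hosts do not match. A hedged sketch of the usual remedy, run as the oracle OS user; the SYS password placeholder must be replaced, and the file names follow the orapw<SID> convention:

    # On the primary host ASM1:
    orapwd file=$ORACLE_HOME/dbs/orapworcl password=<sys_password> entries=5
    # On the standby host ASM2:
    orapwd file=$ORACLE_HOME/dbs/orapwstdbyorcl password=<sys_password> entries=5
    # remote_login_passwordfile='EXCLUSIVE' is already set in both pfiles;
    # restart the instances (or re-enable log_archive_dest_2) and retry shipping.

    The key point is that the SYS password must be identical in both password files.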

  • Log shipping: RESTORE LOG WITH CONTINUE_AFTER_ERROR?

    Hi experts,
    I found that the restore log on the standby node failed with the error messages below. What happened? How do I prevent this error? How do I repair it (rebuild a new log shipping DB)?
    ---  messages ---
    Msg 4360, Level 16, State 1, Line 1
    RESTORE LOG WITH CONTINUE_AFTER_ERROR was unsuccessful. Execution of the RESTORE command was aborted.
    Msg 3013, Level 16, State 1, Line 1
    RESTORE LOG is terminating abnormally.
    --- errorlog ---
    2014-07-16 07:00:00.42 Logon       Error: 18456, Severity: 14, State: 38.
    2014-07-16 07:00:00.42 Logon       Login failed for user 'QCISAP\---'. Reason: Failed to open the explicitly specified database '---'. [CLIENT: <local machine>]
    2014-07-16 07:01:49.52 spid60      Error: 4360, Severity: 16, State: 1.
    2014-07-16 07:01:49.52 spid60      RESTORE LOG WITH CONTINUE_AFTER_ERROR was unsuccessful. Execution of the RESTORE command was aborted.
    2014-07-16 07:19:56.38 spid59      FlushCache: cleaned up 247385 bufs with 162904 writes in 278549 ms (avoided 0 new dirty bufs) for db 7:0
    2014-07-16 07:19:56.38 spid59                  average throughput:   6.94 MB/sec, I/O saturation: 161105, context switches 305229
    2014-07-16 07:19:56.38 spid59                  last target outstanding: 34, avgWriteLatency 35

    One question: have you backed up the log file with CONTINUE_AFTER_ERROR?
    Have you seen any info in the ERRORLOG regarding a backup log which was forced by CONTINUE_AFTER_ERROR?
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Resizing online and standby redo log in dataguard setup.

    In 10gR2 Data Guard I would like to increase the redo log size from 50M to 100M.
    On the primary:
    standby_file_management=manual
    added online redo groups with 100M
    log switched
    dropped the old ones and re-added them with 100M
    deleted the logs added in step 2
    same for the standby redo logs.
    On the standby:
    I was able to resize the standby redo logs,
    but cannot resize the online redo logs; their status is CLEARING or CLEARING_CURRENT.
    Please comment. Thanks.

    I assume you just had to wait until the Primary switched out of that online log so it became inactive at the standby as well? We track where the Primary is by marking the online redo log files at the standby as clearing_current so you can tell where the primary was at any given moment.
    Make sure you create new standby redo log files at the Primary and Standby to match the new online redo log file size.
    Larry
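    As a hedged, syntax-only addition (group numbers, paths and sizes below are placeholders): once a group at the standby is no longer CLEARING_CURRENT, the online log groups there can usually be recreated one at a time with standby_file_management set to MANUAL, and matching standby redo logs added afterwards:

    {
      echo "ALTER DATABASE CLEAR LOGFILE GROUP 1;"
      echo "ALTER DATABASE DROP LOGFILE GROUP 1;"
      echo "ALTER DATABASE ADD LOGFILE GROUP 1 ('/u01/oradata/redo01.log') SIZE 100M;"
      echo "ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/standby_redo04.log') SIZE 100M;"
    } | sqlplus -s "/ as sysdba"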

  • Shrink Log file in log shipping and change the database state from Standby to No recovery mode

    Hello all,
    I have configured sql server 2008 R2 log shipping for some databases and I have two issues:
    1. Can I shrink the log file for these databases? If I change the primary database from full to simple recovery, shrink the log file, and then change it back to full recovery mode, the log shipping will fail. I've seen some answers that talked about using the "No
    Truncate" option, but as far as I know this option will not affect the log file; it shrinks the data file only.
    I also can't create a maintenance job to reconfigure the log shipping every time I want to shrink the log file, because the database size is huge and it would take time to restore to the DR site, so reconfiguration
    is not an option :(
    2. How can I change the secondary database state from Standby to No recovery mode? I tried to change it from the wizard and waited until the next restore of the transaction log backup, but the job failed and the error was: "the step failed". I need
    to do this to change the mdf and ldf file locations for the secondary databases.
    Can anyone help?
    Thanks in advance,
    Faris ALMasri
    Database Administrator

    1. If you change the recovery model of a database in log shipping to simple and back to full, log shipping will break and logs won't be restored on the secondary server, as the log chain will be broken. You can shrink the log file of the primary database, but why would you need to,
    and what is the schedule of your log backups? Frequent log backups already take care of the log file, so why shrink it and create performance load on the system when the log file will ultimately grow again? And since instant file initialization does not apply to log files, growing takes time
    and thus slows performance.
    You said you want to shrink because the database size is huge: is it actually huge, or does it have lots of free space? Don't worry about data file free space; it will eventually be utilized by SQL Server when more data comes.
    2. You are following the wrong method: changing the state to no recovery would not even allow you to run the select queries which you can run in Standby mode. Please refer to the link below to move the secondary data and log files:
    http://www.mssqltips.com/sqlservertip/2836/steps-to-move-sql-server-log-shipping-secondary-database-files/

  • DBCC SHRINKFILE with NOTRUNCATE has any performance impact in log shipping?

    Hi All,
    To procure space I was advised to use the command below on the primary database in log shipping. I just want
    to clarify whether it has any performance impact on the primary database in log shipping, and also whether it is a recommended practice to use this command
    at regular intervals
    in case the log is using much of the drive's space. Please advise. Thank you.
    "DBCC SHRINKFILE ('CommonDB_LoadTest_log', 2048, NOTRUNCATE)"
    Regards,
    Kalyan
    ----Learners Curiosity Never Ends----

    Hi Kalyan / Shanky,
    I was not clear in the linked conversation, so let me add something.
    As per http://msdn.microsoft.com/en-us//library/ms189493.aspx :
    -----> TRUNCATEONLY is applicable only to data files.
    BUT
    as per http://technet.microsoft.com/en-us/library/ms190488.aspx :
    TRUNCATEONLY affects the log file.
    And I also tried it; it does work.
    Now, TRUNCATEONLY releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent. target_percent is ignored if specified
    with TRUNCATEONLY.
    So:
    1. If I am removing unused space, it will not affect log shipping and no log chain will be broken.
    2. If you clear unused space, it will not touch existing data; there is no performance issue.
    3. If you clear space and then, due to other operations, the log file has to auto-grow, it will put unnecessary pressure on the database to allocate disk every time. So once you find the max growth of the log file, let it be, as it will anyhow grow to the same size again.
    4. Shrinking the log file is not recommended if it keeps reaching the same size again, unless you have a space crunch.
    Thanks, Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/

  • Grid checks for standby log shipping

    hi,
    are there out-of-the-box metrics to verify how far behind the standby is? How about GoldenGate?
    Grid v11.1.
    thanks

    "Data Guard - Log Shipping"
    may be of interest to you.
    Regards
    Girish Sharma
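    As a supplementary, hedged note: on the standby itself (10g and later), the apply and transport lag can be read out of the box; the query below is a sketch and assumes you can connect as SYSDBA on the standby host:

    echo "SELECT name, value FROM v\$dataguard_stats WHERE name IN ('apply lag', 'transport lag');" | sqlplus -s "/ as sysdba"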

  • Physical standby with rman: location of redo logs

    guys,
    I use simple RMAN commands to create a physical standby database (I've attached the commands below). The problem is that on the physical standby, the location of the redo logs and standby redo logs differs from the primary.
    (We use redo logs and standby redo logs as we are running in max. availability mode and want to be prepared for switchover/failover.)
    How do I backup/restore the database in a way that the redo logs end up in the same place as on the primary database?
    thanks for your help,
    heri
    backup on primary:
    backup incremental level = 0 format '/tmp/transfer/td_%s_%p.bck' database;
    restore on standby:
    duplicate target database for standby NOFILENAMECHECK;

    Ogan,
    thanks a lot for your reply. I was not using the log_file_name_convert parameter, as I am not aware of how to use it in this scenario. The log files on the standby are generated with random names.
    on the primary the log files are as follows:
    /home/oracle/app/oracle/oradata/td/redo01.log
    /home/oracle/app/oracle/oradata/td/redo02.log
    /home/oracle/app/oracle/oradata/td/redo03.log
    /home/oracle/app/oracle/oradata/td/standby_redo01.log
    /home/oracle/app/oracle/oradata/td/standby_redo02.log
    /home/oracle/app/oracle/oradata/td/standby_redo03.log
    on the standby the logs look like:
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_1_6b1f9mvc_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_2_6b1f9p36_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_3_6b1f9rdj_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_4_6b1f9v8r_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_5_6b1f9xms_.log
    /home/oracle/app/oracle/product/11.2.0/dbhome/dbs/TDSTBY/onlinelog/o1_mf_6_6b1f9zxv_.log
    the filenames on the standby are not predictable for me in any way, so how would I use log_file_name_convert?
    thanks a lot for your help!
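    For what it's worth, a hedged sketch: log_file_name_convert does not need to predict the OMF names on the standby; it maps the primary's paths (which RMAN reads from the source control file) to the desired standby paths, so the random OMF names are never generated. With identical directories on both sides, the standby pfile could contain the lines below, set before running the duplicate (assuming no db_create_online_log_dest_n parameter is set on the standby that could force OMF names anyway):

    *.db_file_name_convert=('/home/oracle/app/oracle/oradata/td','/home/oracle/app/oracle/oradata/td')
    *.log_file_name_convert=('/home/oracle/app/oracle/oradata/td','/home/oracle/app/oracle/oradata/td')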

  • The file structure online redo log, archived redo log and standby redo log

    I have read some Oracle documentation for file structure and settings in Data Guard environment. But I still have some doubts. What is the best file structure or settings in Oracle 10.2.0.4 on UNIX for a data guard environment with 4 primary databases and 4 physical standby databases. Based on Oracle documents, there are 3 redo logs. They are: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- This redo log must be on the primary database and the logical standby database, but it is not strictly necessary on the physical standby, because the physical standby is not open and doesn't generate redo. However, if online redo logs are not set up on the physical standby, how can the standby perform after a failover, when it switches from standby to primary, without online redo logs? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- It is obvious that the primary database and the logical and physical standby databases all need this log file set up. The primary uses it to archive log files and ship them to the standby; the standby uses it to receive data from the archived logs and apply it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that a standby redo log is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, and cascaded destinations. So it seems that the standby redo log should only be set up on the standby database, not on the primary. Is my understanding correct? Reviewing the current redo log settings in my environment, I have found that the standby redo log directory and files have been set up on both the primary and standby databases. I would like to get more information and education from the experts: what is the best setting or structure on the primary and standby database?

    FZheng:
    Thanks for your input. It is clear that we need all 3 types of redo logs on both databases; you answered my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It says: at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment setting is: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M. On the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK? Or should I change the standby redo log on the standby DB to 512M to exactly match the redo log size on the primary?
    Edited by: 853153 on Jun 22, 2011 9:42 AM
