MAXDB Log Shipping
Hi folks,
We had working log shipping scripts which automatically copy log backups to the standby server. We had to interrupt the log backups (overwrite mode was set during an EHP installation), and now, after a full backup and restore, the log restores do not work; please see the log below (the last two lines).
The log file = 42105
FirstLogPage=5233911
LastLogPage=6087243
Database UsedLogPage=5752904
I would expect the restore to work, since the database's used log page falls within the range of the first and last log pages of the log backup file.
Is this not the case? How should this work when re-establishing log shipping?
Mark.
Find first log file to apply
FindLowestLogNo()
File: log_backup.42105
File extension is numeric '42105'
LogNo=42105
FindLowestLogNo()
LogMin=42105 LogMax=42105
Execute Command: C:\sapdb\programs\pgm\dbmcli.exe -n localhost -d ERP -u SUPERDBA,Guer0l1t0 db_admin & C:\sapdb\programs\pgm\dbmcli.exe -n localhost -d ERP -u SUPERDBA,Guer0l1t0 medium_label LOG1 42105
[START]
OK
OK
Returncode 0
Date 20110308
Time 00111416
Server saperpdb.rh.renold.com
Database ERP
Kernel Version Kernel 7.7.06 Build 010-123-204-327
Pages Transferred 0
Pages Left 0
Volumes
Medianame
Location F:\log_shipping\log_backup.42105
Errortext
Label LOG_000042105
Is Consistent
First LOG Page 5233911
Last LOG Page 6087243
DB Stamp 1 Date 20110209
DB Stamp 1 Time 00190733
DB Stamp 2 Date 20110308
DB Stamp 2 Time 00111415
Page Count 853333
Devices Used 1
Database ID saperpdb.rh.renold.com:ERP_20110209_210432
Max Used Data Page
Converter Page Count
[END]
LogNo=42105 FirstLogPage=5233911 LastLogPage=6087243 (UsedLogPage=5752904)
WARNING: Log file not yet applied but NOT the first log file. Either sequence error or first log file is missing/yet to arrive
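As an aside, the FindLowestLogNo step visible in the log above can be sketched in shell roughly as follows (a minimal sketch based only on the log output; the real script may well differ):

```shell
# Hypothetical sketch of FindLowestLogNo: among files named
# log_backup.<number>, pick the lowest numeric extension -- that is
# the next log backup to apply on the standby.
find_lowest_log_no() {
  dir="$1"
  min=""
  for f in "$dir"/log_backup.*; do
    [ -e "$f" ] || continue
    ext="${f##*.}"
    case "$ext" in
      ''|*[!0-9]*) continue ;;   # skip non-numeric extensions
    esac
    if [ -z "$min" ] || [ "$ext" -lt "$min" ]; then
      min="$ext"
    fi
  done
  printf '%s\n' "$min"
}
```

With a single file log_backup.42105 in the folder, this returns 42105, matching the LogMin=42105 LogMax=42105 line in the log.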
Hello Birch Mark,
Recovery with initialization is the correct step to recreate the shadow database.
What has to be done first:
Source database:
SA) Activate database logging.
SB) Create the complete data backup.
SC) Switch autolog on or create the log backup; the first log backup after the complete data backup is created => check the backup history of the source database.
Shadow database:
ShA) Run the recovery with initialization or use the db_activate DBM command; see more details in the MaxDB library (I gave you references in my earlier reply).
ShB) After the restore of the complete data backup created in step SB), don't restart the database into ONLINE. Keep the shadow database in ADMIN or OFFLINE < when you execute recover_cancel >. Please post the output of the db_restartinfo command.
ShC) You can then restart recovery of the log backups created in SC); check the backup history of the source database to see which log backup is the first to recover.
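Steps ShA)-ShC) could look roughly like the following dbmcli session (illustrative only; the medium names DATA1/LOG1 and the log number 042105 are assumptions based on this thread, so check your own backup media and backup history first):

```shell
# Illustrative dbmcli sketch of the shadow-side recovery; <password> is a placeholder.
dbmcli -d ERP -u SUPERDBA,<password> <<'EOF'
db_admin
recover_start DATA1 DATA           # ShA/ShB: restore the complete data backup
recover_cancel                     # keep the shadow database out of ONLINE
db_restartinfo                     # check which log page the database needs next
recover_start LOG1 LOG 042105      # ShC: first log backup after the data backup
EOF
```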
Did you follow those steps?
There are helpful documents/hints in the Wiki:
http://wiki.sdn.sap.com/wiki/display/MaxDB/SAPMaxDBHowTo
->"HowTo - Standby DB log shipping"
Regards, Natalia Khlopina
Similar Messages
-
Steps to empty SAPDB (MaxDB) log file
Hello All,
I am on Red Hat Linux with NW 7.1 CE and SAP DB (MaxDB) as the back end. I am trying to log in, but my log file is full. I want to empty the log file, but I haven't done any data backup yet. Can anybody guide me on how to proceed with this problem?
I do have some idea of what to do, like the steps below:
1. Take a data backup (but I want to skip this step if possible), since this is a QA system and we are not a production company.
2. Take a log backup using the same method as the data backup but with the log type (am I right, or is there something else?).
3. It will automatically overwrite the log after log backups.
Or should I use this as an alternative? I found this in SAP Note 869267 - FAQ: SAP MaxDB LOG area:
Can the log area be overwritten cyclically without having to make a log backup?
Yes, the log area can be automatically overwritten without log backups. Use the DBM command
util_execute SET LOG AUTO OVERWRITE ON
to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
Log backups are not possible after switching on automatic overwrite. Backup history is broken down and flagged by the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
util_execute SET LOG AUTO OVERWRITE OFF
and by creating a complete data backup in the ADMIN or ONLINE status.
Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
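The two commands quoted from the note fit together roughly like this (a sketch; the database name and the backup medium DATA1 are placeholders, and per the note this overwrite mode is not for production use):

```shell
# Switch cyclic overwrite on (breaks the backup history -- HISTLOST):
dbmcli -d <SID> -u SUPERDBA,<password> "util_execute SET LOG AUTO OVERWRITE ON"
# ...later, to return to a recoverable setup:
dbmcli -d <SID> -u SUPERDBA,<password> "util_execute SET LOG AUTO OVERWRITE OFF"
dbmcli -d <SID> -u SUPERDBA,<password> "backup_start DATA1 DATA"   # complete data backup restarts the history
```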
Any reply will be highly appreciated.
Thanks
Mani
Hello Mani,
1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
"To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
Is the database server behind a firewall? If yes, then the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
Is "nq2host" the name of the database server? Could you ping to the server "nq2host" from your machine?
2. And if the database server and your PC are in the same local area network, you can start the x_server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
See the document "Network Communication" at
http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
Thank you and best regards, Natalia Khlopina -
SAP NetWeaver 7.01 Java installation on MSCS and Log shipping for DR
Hi Experts,
Can anyone please explain if the below is possible:
- We have installed a NetWeaver 7.01 Java application server on Microsoft clustering, which is working perfectly OK.
- Now we are looking at building a DR solution for the above installation using SQL Server 2008 log shipping. The DR site has the same MSCS configuration as the primary site.
- If we log-ship the production database and create a secondary database in "standby" or "norecovery" mode, how do we install a passive SAP central instance in case we want to fail over from the primary to the secondary?
- We know that we can have the primary database log-shipped to the secondary site and, in the event of failure, we can switch the roles, make the secondary primary, and then perform the system copy steps on the MSCS. The drawback here is that it will take a long time to bring the system online if we perform the system copy. Our RTO is one hour only, so this is not acceptable.
Question: Is there any way we can install the Central Services and application servers beforehand on the secondary site, on the secondary log-shipped database, so that in the event of failure all we do is make the secondary database primary and then start SAP on the secondary site?
If yes, can anyone point me to any SAP note or documentation that details how to do this?
I have already looked at Notes 1101017 and 965908, but it's not clear how we can perform the installation on the Java stack only.
We want to implement something similar to what is shown in the diagram below:
http://help.sap.com/saphelp_apo/helpdata/en/fc/33c028d58511d386ee00a0c930df15/content.htm
Appreciate if someone can assist in resolving the above query.
Thank you.
RA
Thanks, Markus, for your reply.
I did look at the HA web page from SAP and configured MSCS for HA, but there's no information regarding DR setup. All it says is to set up log shipping, which we already know, but there's no procedure to set up SAP on the DR site (passive SAP central instance).
My original question still stands:
- Is a system copy the only option that we need to perform at the DR site in the event of failure? (The assumption is that we make the secondary database primary and also sync the file system for Java using SAN replication technologies.)
- We have to achieve an RTO of 1 hour, which is not possible in this case, as performing a system copy using the MSCS HA option will take a few hours to set up and test.
- The link "http://help.sap.com/saphelp_apo/helpdata/en/fc/33c028d58511d386ee00a0c930df15/content.htm" shows that we can have an SAP passive central instance. What's the procedure for installing this passive instance on the secondary site, so that in the event of failure all we do is make the secondary database primary and bring the SAP system online (installed already, no system copy performed), with the file system already in sync?
Thank you.
RA -
ORA-16191: Primary log shipping client not logged on standby.
Hi,
Please help me with the following scenario. I have two nodes, ASM1 & ASM2, with RHEL4 U5 as the OS. On node ASM1 there is a database ORCL using ASM diskgroups DATA & RECOVER, and the archive location is on '+RECOVER/orcl/'. On node ASM2, I have to configure the STDBYORCL (standby) database using ASM. I have taken a copy of database ORCL via RMAN, as per the maximum availability architecture.
Then I ftp'd everything to ASM2 and put it on the file system /u01/oradata. I have made all the necessary changes in the primary and standby database pfiles and then performed the duplicate database for standby using RMAN, in order to put the db files in the desired diskgroups. I have mounted the standby database but, unfortunately, the log transport service is not working and archives are not getting shipped to the standby host.
Here are all configuration details.
Primary database ORCL pfile:
[oracle@asm dbs]$ more initorcl.ora
stdbyorcl.__db_cache_size=251658240
orcl.__db_cache_size=226492416
stdbyorcl.__java_pool_size=4194304
orcl.__java_pool_size=4194304
stdbyorcl.__large_pool_size=4194304
orcl.__large_pool_size=4194304
stdbyorcl.__shared_pool_size=100663296
orcl.__shared_pool_size=125829120
stdbyorcl.__streams_pool_size=0
orcl.__streams_pool_size=0
*.audit_file_dest='/opt/oracle/admin/orcl/adump'
*.background_dump_dest='/opt/oracle/admin/orcl/bdump'
*.compatible='10.2.0.1.0'
*.control_files='+DATA/orcl/controlfile/current.270.665007729','+RECOVER/orcl/controlfile/current.262.665007731'
*.core_dump_dest='/opt/oracle/admin/orcl/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='+RECOVER'
*.db_recovery_file_dest_size=3163553792
*.db_unique_name=orcl
*.fal_client=orcl
*.fal_server=stdbyorcl
*.instance_name='orcl'
*.job_queue_processes=10
*.log_archive_config='dg_config=(orcl,stdbyorcl)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='SERVICE=stdbyorcl'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=121634816
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=364904448
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/opt/oracle/admin/orcl/udump'
Standby database STDBYORCL pfile:
[oracle@asm2 dbs]$ more initstdbyorcl.ora
stdbyorcl.__db_cache_size=251658240
stdbyorcl.__java_pool_size=4194304
stdbyorcl.__large_pool_size=4194304
stdbyorcl.__shared_pool_size=100663296
stdbyorcl.__streams_pool_size=0
*.audit_file_dest='/opt/oracle/admin/stdbyorcl/adump'
*.background_dump_dest='/opt/oracle/admin/stdbyorcl/bdump'
*.compatible='10.2.0.1.0'
*.control_files='u01/oradata/stdbyorcl_control01.ctl'#Restore Controlfile
*.core_dump_dest='/opt/oracle/admin/stdbyorcl/cdump'
*.db_block_size=8192
*.db_create_file_dest='/u01/oradata'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='+RECOVER'
*.db_recovery_file_dest_size=3163553792
*.db_unique_name=stdbyorcl
*.fal_client=stdbyorcl
*.fal_server=orcl
*.instance_name='stdbyorcl'
*.job_queue_processes=10
*.log_archive_config='dg_config=(orcl,stdbyorcl)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='SERVICE=orcl'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_start=TRUE
*.open_cursors=300
*.pga_aggregate_target=121634816
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=364904448
*.standby_archive_dest='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/opt/oracle/admin/stdbyorcl/udump'
db_file_name_convert=('+DATA/ORCL/DATAFILE','/u01/oradata','+RECOVER/ORCL/DATAFILE','/u01/oradata')
log_file_name_convert=('+DATA/ORCL/ONLINELOG','/u01/oradata','+RECOVER/ORCL/ONLINELOG','/u01/oradata')
Have configured the TNS service on both hosts and it's working absolutely fine.
ASM1
=====
[oracle@asm dbs]$ tnsping stdbyorcl
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:49:00
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.20)(PORT = 1521))) (CONNECT_DATA = (SID = stdbyorcl) (SERVER = DEDICATED)))
OK (30 msec)
ASM2
=====
[oracle@asm2 archive]$ tnsping orcl
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:48:39
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.10)(PORT = 1521))) (CONNECT_DATA = (SID = orcl) (SERVER = DEDICATED)))
OK (30 msec)
Please guide me on where I am going wrong. Thanking you in anticipation.
Regards,
Ravish Garg
Following are the errors I am receiving, as per the alert logs.
ORCL alert log:
Thu Sep 25 17:49:14 2008
ARCH: Possible network disconnect with primary database
Thu Sep 25 17:49:14 2008
Error 1031 received logging on to the standby
Thu Sep 25 17:49:14 2008
Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
ORA-01031: insufficient privileges
FAL[server, ARC1]: Error 1031 creating remote archivelog file 'STDBYORCL'
FAL[server, ARC1]: FAL archive failed, see trace file.
Thu Sep 25 17:49:14 2008
Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Thu Sep 25 17:49:14 2008
ORACLE Instance orcl - Archival Error. Archiver continuing.
Thu Sep 25 17:49:44 2008
FAL[server]: Fail to queue the whole FAL gap
GAP - thread 1 sequence 40-40
DBID 1192788465 branch 665007733
Thu Sep 25 17:49:46 2008
Thread 1 advanced to log sequence 48
Current log# 2 seq# 48 mem# 0: +DATA/orcl/onlinelog/group_2.272.665007735
Current log# 2 seq# 48 mem# 1: +RECOVER/orcl/onlinelog/group_2.264.665007737
Thu Sep 25 17:55:43 2008
Shutting down archive processes
Thu Sep 25 17:55:48 2008
ARCH shutting down
ARC2: Archival stopped
STDBYORCL alert log:
==============
Thu Sep 25 17:49:27 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-01017: invalid username/password; logon denied
Thu Sep 25 17:49:27 2008
Error 1017 received logging on to the standby
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
returning error ORA-16191
It may be necessary to define the DB_ALLOWED_LOGON_VERSION
initialization parameter to the value "10". Check the
manual for information on this initialization parameter.
Thu Sep 25 17:49:27 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-16191: Primary log shipping client not logged on standby
PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
Thu Sep 25 17:51:38 2008
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 40-40
DBID 1192788465 branch 665007733
FAL[client]: All defined FAL servers have been attempted.
Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
parameter is defined to a value that is sufficiently large
enough to maintain adequate log switch information to resolve
archivelog gaps.
Thu Sep 25 17:55:16 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-01017: invalid username/password; logon denied
Thu Sep 25 17:55:16 2008
Error 1017 received logging on to the standby
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
returning error ORA-16191
It may be necessary to define the DB_ALLOWED_LOGON_VERSION
initialization parameter to the value "10". Check the
manual for information on this initialization parameter.
Thu Sep 25 17:55:16 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-16191: Primary log shipping client not logged on standby
PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
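The ORA-01017/ORA-16191 pair above typically means the SYS password files on primary and standby do not match. A common remedy (sketched below; paths assume the default $ORACLE_HOME/dbs layout and the SIDs from this thread, and <sys_password> is a placeholder) is to recreate the password file on the primary and copy it to the standby:

```shell
# On the primary host (asm1): recreate the password file with the SYS password.
rm -f $ORACLE_HOME/dbs/orapworcl
orapwd file=$ORACLE_HOME/dbs/orapworcl password=<sys_password>
# Copy it to the standby host, renamed for the standby SID:
scp $ORACLE_HOME/dbs/orapworcl asm2:$ORACLE_HOME/dbs/orapwstdbyorcl
# remote_login_passwordfile='EXCLUSIVE' is already set on both sides (see the
# pfiles above), so after bouncing the instances, log transport should be able
# to log on to the standby.
```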
Please suggest what I am missing.
Regards,
Ravish Garg -
Transaction log shipping restore with standby failed: log file corrupted
The transaction log restore failed and I get the error below. Of the four log-shipped databases on the same SQL Server, the remaining ones are working fine.
Date: 9/10/2014 6:09:27 AM
Log: Job History (LSRestore_DATA_TPSSYS)
Step ID: 1
Server: DATADR
Job Name: LSRestore_DATA_TPSSYS
Step Name: Log shipping restore log job step
Duration: 00:00:03
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0
Message
2014-09-10 06:09:30.37 *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: An error occurred while processing the log for database 'TPSSYS'.
If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
RESTORE LOG is terminating abnormally.
Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 Deleting old log backup files. Primary Database: 'TPSSYS'
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.38 ----- END OF TRANSACTION LOG RESTORE
Exit Status: 1 (Error)
I have restored the database to a new server and checked with new log shipping, but it gives this same error again. If it were a network issue, I believe the issue would occur on every log-shipped database on that server.
error :
Message
2014-09-12 10:50:03.18 *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-12 10:50:03.18 *** Error: An error occurred while processing the log for database 'EAPDAT'. If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
RESTORE LOG is terminating abnormally.
Can this have happened due to database or log file corruption? If so, how can I check to verify the issue?
It's not necessarily a network issue; if it were, it would happen every day. IMO it basically happens when the load on the network is high and you transfer a log file which is big in size.
As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log, because it did not find the log to be consistent. From here it looks like log corruption.
Is it the same log file you restored? If that is the case, since the log file was corrupt, it would of course give an error on whatever server you restore it to.
Can you try creating log shipping on a new server by taking a fresh full and log backup and see if you get the issue there as well? I would also suggest you raise a case with Microsoft and let them tell you the root cause of this problem.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Articles -
Hello experts,
I need to change the log mode, which is currently in "overwrite mode", and increase the LOG_IO_QUEUE size.
How can I do it?
Our MaxDB version is 7.6 and the OS is Linux 2.6.16.
Please suggest steps.
Thanks and regards
Dear Kavitha,
-> Please review the SAP Notes:
869267 FAQ: MaxDB LOG area
< "35. Can the log area be overwritten cyclically without having to make a log backup?"
"52. How large should the LOG_IO_QUEUE be when configured?" >
719652 Setting initial parameters for liveCache 7.5 or 7.6
819641 FAQ: MaxDB Performance
If you are an SAP customer, you are able to read the SAP Notes.
-> The MaxDB documentation also gives you answers to the reported questions:
http://maxdb.sap.com/documentation/ -> Open the SAP MaxDB 7.6 Library
-> Glossary
"Changing the Values of Database Parameters" at
http://maxdb.sap.com/doc/7_6/9b/e6dc41765b6024e10000000a1550b0/content.htm
"Log Queue"
http://maxdb.sap.com/doc/7_6/23/c806c81e20f946a59a421e01c42c3b/content.htm
< The new value of the LOG_IO_QUEUE parameter will be activated only after the database is restarted from offline mode. >
"Displaying and Changing Database Parameters" using DBMGUI tool at
http://maxdb.sap.com/doc/7_6/84/d8d198570411d4aa82006094b92fad/content.htm
-> Please pay attention:
"Automatic overwrite of the log area without log backups is NOT
recommended for production operation. Since no backup history exists
for the following changes in the database, you cannot track transactions
in the case of recovery. "
Run the DBM command 'param_getexplain LOG_IO_QUEUE'.
Maybe the log devspaces can be moved to a faster disk, to accelerate the physical log I/O.
-> What is the version of the database? < Please also give the patch and build number >
Why do you need to change the log mode, which is in "overwrite mode", and increase the value of the database parameter LOG_IO_QUEUE?
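For reference, both changes could be made via dbmcli roughly as follows (a hedged sketch; the queue value 200 is only an example, and the exact parameter-session syntax can vary by patch level, hence the param_getexplain suggestion above):

```shell
# Switch auto overwrite off (the "overwrite mode" in question):
dbmcli -d <SID> -u SUPERDBA,<password> "util_execute SET LOG AUTO OVERWRITE OFF"
# Change LOG_IO_QUEUE inside a parameter session, then restart the database
# from offline so the new value takes effect:
dbmcli -d <SID> -u SUPERDBA,<password> <<'EOF'
param_startsession
param_put LOG_IO_QUEUE 200
param_checkall
param_commitsession
db_offline
db_online
EOF
```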
Thank you and best regards, Natalia Khlopina -
How to use the mirrored and log shipped secondary database for update or insert operations
Hi,
I am doing a DR test where I need to test the mirrored and log-shipped secondary databases, but without stopping the mirroring or log shipping procedures. Is there a way to get the data out of a mirrored or log-shipped database into another database for update or insert operations?
A database snapshot can be used, but only for the mirrored database, and updates cannot be done. Also, the secondary database of log shipping cannot be used for a database snapshot. Any ideas on how this can be implemented?
Thanks,
Preetha
Hmm, in this case I think you need merge replication; otherwise it defeats the purpose of DR... again, in that case...
Best Regards, Uri Dimant, SQL Server MVP -
Oracle 9i and Oracle 10g transaction log shipping
Hi,
We have Oracle 9i and we use the transaction log shipping mechanism to transport our transaction log files to our DR site. We then apply these to the database at the remote site.
Our telecoms infrastructure is pretty limited, so this is a manual process.
We are looking at upgrading to 10g. I believe that with 10g you have to use Data Guard; or is there a way to mimic the behavior of 9i that would allow us to transport and apply the transaction logs manually?
Thanks
Andrew
You can try setting the SGA to a low value and bringing up both databases. I don't think it should be too slow, provided you are not running other Windows programs.
If you are really interested in trying out new products, you can also explore the option of installing VMware, creating virtual machines, installing Linux, and then playing with the different Oracle products. Doing this will at least keep your main Windows operating system clean.
You may want to check out my blog post on building your own Oracle test lab.
Cheers !!!
Ashish Agarwal
http://www.asagarwal.com -
Sql server log shipping space consumption
I have implemented SQL Server log shipping from the HQ to the DR server.
The secondary databases are in standby mode.
The issue is that after configuring it, my DR server is running out of space very rapidly.
I have checked the log shipping folders where the .trn files reside and they are of a very decent size, and the retention is configured for twenty-four hours.
I checked the secondary databases and their size is exactly the same as that of the corresponding primary databases.
Could you guys please help me identify the reason behind this odd space increase?
I would be grateful if you could point me to some online resources that explain this matter in depth.
The retention is happening. I have checked the folders; they do not have records older than 24 hours.
I don't know, maybe it's because on the secondary (DR) server there is no full backup job running; is that why the .ldf file is getting bigger and bigger? But then again, as far as my understanding goes, we cannot take a full database backup in standby mode.
The TLog files of log-shipped DBs on the secondary will be the same size as those on the primary. The only way to shrink the TLog files on the secondary (I am not advising you to do this) is to shrink them on the primary, force-start the TLog backup job, then the copy job, then the restore job on the secondary, which will sync the size of the TLog file on the secondary.
If you have allocated the same sized disk on both primary and secondary for TLog files, then check whether Volume Shadow Copy is consuming the space on the secondary.
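The shrink-and-resync sequence described above might be driven roughly like this (a sketch only; server, database, and job names are placeholders following the default LSBackup/LSCopy/LSRestore naming, and shrinking log files is generally discouraged):

```shell
# On the primary: shrink the log file (placeholder names throughout):
sqlcmd -S <primary> -E -Q "USE <db>; DBCC SHRINKFILE (N'<db>_log', 512);"
# Then force the log shipping jobs, in order:
sqlcmd -S <primary>   -E -Q "EXEC msdb.dbo.sp_start_job @job_name = N'LSBackup_<db>';"
sqlcmd -S <secondary> -E -Q "EXEC msdb.dbo.sp_start_job @job_name = N'LSCopy_<primary>_<db>';"
sqlcmd -S <secondary> -E -Q "EXEC msdb.dbo.sp_start_job @job_name = N'LSRestore_<primary>_<db>';"
```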
Satish Kartan www.sqlfood.com -
Hi guys,
We are using SQL Server 2005.
I have an LSAlert_Serv job, and this job runs the system stored procedure sys.sp_check_log_shipping_monitor_alert.
When this job is run, I get the following error message:
The log shipping primary database SHARP has backup threshold of 60 minutes and has not performed
a backup log operation for 7368 minutes. Check agent log and logshipping monitor information. [SQLSTATE 42000] (Error 14420). The step failed.
The database named SHARP that is mentioned in the above error message is now moved to another
server.
When I looked into the stored procedure and ran the query below from it:
select primary_server
,primary_database
,isnull(threshold_alert, 14420)
,backup_threshold
,cast(0 as int)
from msdb.dbo.log_shipping_monitor_primary
where threshold_alert_enabled = 1
I can still see the database SHARP in the table msdb.dbo.log_shipping_monitor_primary. So,
is that the reason for the failure? If so, what should I do to update the table msdb.dbo.log_shipping_monitor_primary and fix the issue?
Thanks
The database named SHARP that is mentioned in the above error message has now been moved to another server.
Since you said you moved the database to a different server, can you please check that the SQL Server service account (on the new server where you moved the database) has full permissions on the folder where the log backup job is configured to back up the transaction logs.
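If the old server still holds a stale monitor record for SHARP, one way to clear it is sketched below. The preferred route is the documented cleanup procedure; deleting the monitor row directly is a last resort and is not an officially supported operation, so treat it as an assumption to verify against your version's documentation:

```sql
-- On the server that still holds the stale configuration:
-- preferred: remove the whole log shipping primary configuration for SHARP
EXEC sp_delete_log_shipping_primary_database @database = N'SHARP';

-- If the configuration is already gone and only the monitor row remains,
-- deleting it directly is a last resort (not officially supported):
DELETE FROM msdb.dbo.log_shipping_monitor_primary
WHERE primary_database = N'SHARP';
```

After the stale row is gone, sys.sp_check_log_shipping_monitor_alert should stop raising error 14420 for the moved database.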
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
My TechNet Wiki Articles -
Hi,
1. Can anybody please explain what compressed log shipping is in Oracle?
2. What are its advantages over the existing feature?
3. Does it require a license to use it? If yes, how is it calculated?
Thanks in advance
It is used in Data Guard environments.
It requires the Advanced Compression option.
See http://download.oracle.com/docs/cd/E11882_01/server.112/e17022/log_arch_dest_param.htm#sthref994
You can get pricing for Oracle Enterprise Edition options at http://www.oracle.com/corporate/pricing
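For reference, redo transport compression is enabled per destination with the COMPRESSION attribute of LOG_ARCHIVE_DEST_n. A minimal sketch, assuming a standby with the hypothetical DB_UNIQUE_NAME standby_db (and, as noted above, an Advanced Compression license):

```sql
-- On the primary: compress redo sent to the standby destination
-- (standby_db is a placeholder service / unique name)
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_db ASYNC COMPRESSION=ENABLE
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=standby_db' SCOPE=BOTH;
```

The benefit is reduced network bandwidth for redo transport, at the cost of some CPU on both ends; whether that trade-off pays off depends on your link speed.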
Hemant K Chitale
-
Log shipping is not restoring log files at a particular time
Hi,
I have configured log shipping and it restores all the log files up to a particular point, after which it throws an error and the database is not in a consistent state. I have tried deleting and reconfiguring log shipping a couple of times, but with no success. Can anyone tell me how to prevent this? I have already configured log shipping from another server to the same destination server and it has been working fine for more than a year. Only the new configuration is throwing errors in the restore job.
Thanks,
Preetha
Message
2014-07-21 14:00:21.62 *** Error: The log backup file 'E:\Program Files\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\Qcforecasting_log\Qcforecasting_20140721034526.trn' was verified but could not be applied to secondary database 'Qcforecasting'.(Microsoft.SqlServer.Management.LogShipping)
2014-07-21 14:00:21.62 Deleting old log backup files. Primary Database: 'Qcforecasting'
2014-07-21 14:00:21.62 The restore operation completed with errors. Secondary ID: '46b20de0-0ccf-4411-b810-2bd82200ead8'
2014-07-21 14:00:21.63 ----- END OF TRANSACTION LOG RESTORE -----
The same file was tried three times and it threw an error all three times.
But when I manually restored the Qcforecasting_20140721034526 transaction log it worked. Not sure why this is happening. After the manual restoration it worked fine for one run; now I am waiting for the next one to complete.
This seems strange to me: the error indicates that the backup was consistent but could not be applied, perhaps because the restore process found that the log backup was not in the correct sequence, or because another log backup that was accidentally taken has to be applied first.
But then you said you can apply it manually, so this may be related to permissions, because if it can be restored manually, the job should be able to do it too, unless the agent account has a permission issue.
Of course, more logs would help.
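For comparison with what the restore job does, a manual restore that keeps the secondary in standby mode looks roughly like this. The undo file path is a placeholder (use the one your log shipping configuration already writes), and the .trn path is taken from the error log above:

```sql
-- Manually apply the failing log backup on the secondary,
-- keeping the database readable in standby mode
-- (the undo file path below is a placeholder)
RESTORE LOG [Qcforecasting]
FROM DISK = N'E:\Program Files\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\Qcforecasting_log\Qcforecasting_20140721034526.trn'
WITH STANDBY = N'E:\LSUndo\Qcforecasting_undo.dat';

-- Check which log backups have already been applied:
SELECT TOP (10) destination_database_name, restore_date
FROM msdb.dbo.restorehistory
WHERE destination_database_name = N'Qcforecasting'
ORDER BY restore_date DESC;
```

Comparing msdb.dbo.restorehistory on the secondary with the backup history on the primary can show whether a log backup was applied out of sequence.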
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
My TechNet Wiki Articles -
I see the most recent file is called a "SQL Server log shipping work file".
What is a "SQL Server log shipping work file"?
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/62a6556e-6655-4d19-9112-1788cf7bbcfc/wrk-file-in-logshipping-2005
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/ -
Hi,
We have configured log shipping in our production setup. Now we are planning to change the service account under which SQL Server is running from Local System to a domain account.
Please let me know whether changing the service account would have any impact on log shipping.
Regards,
Varun
Hi n.varun,
According to your description, regarding the startup account for the SQL Server and SQL Server Agent services on log shipping servers: if you have placed the SQL Servers in a domain, Microsoft recommends that you use a domain account to start the SQL Server services. As noted in other posts, the domain account should have full control permissions on the shared location and be in the sysadmin role in SQL Server security. If so, even if you change the account from Local System to the domain account in SQL Server Configuration Manager (SSCM), it will have no impact on log shipping.
In addition, if you configure SQL Server log shipping in a different domain or workgroup, you need to verify that the SQL Server Agent service account running on the secondary server has read access to the folder that the log backups are located in, and has permissions on the local folder that it will copy the logs to; then on the secondary server you can change the SQL Server Agent account from Local System to the domain account you are configuring. For more information, see:
http://www.mssqltips.com/sqlservertip/2562/sql-server-log-shipping-to-a-different-domain-or-workgroup/
There is a detailed article about how to configure security for SQL Server log shipping; you can review the following link:
http://support.microsoft.com/kb/321247
Regards,
Sofiya Li
TechNet Community Support