Compressed log shipping
Hi,
1. Can anybody please explain what is Compressed log shipping in oracle?
2. Advantages of using it over existing feature?
3. Does it require license to use it? If yes how it would be calculated?
Thanks in advance
Used in DataGuard environments.
Requires the Advanced Compression option.
See http://download.oracle.com/docs/cd/E11882_01/server.112/e17022/log_arch_dest_param.htm#sthref994
You can get pricing for Oracle Enterprise Edition options at http://www.oracle.com/corporate/pricing
Hemant K Chitale
Edited by: Hemant K Chitale on Mar 18, 2011 3:58 PM
Similar Messages
-
SAP NetWeaver 7.01 Java installation on MSCS and Log shipping for DR
Hi Experts,
Can anyone please explain if below is possible :
- We have installed Netweaver 7.01 Java application server on Microsoft clustering which is working perfectly ok.
-Now we are looking at building a DR solution for the above installation using SQL Server 2008 Log-shipping. The DR site has the same MSCS configuration as the primary site.
-If we log-ship the production database and create a secondary database in "standby" or "norecovery" mode, how do we install a Passive SAP Central instance in case we want to failover from the primary to secondar ?
- We know that we have the primary database log-shipped to the secondary site and in the event of failure we can switch the roles and make secondary as primary and then perform the system copy steps on the MSCS. the drawback here is that it will take a long time to bring the system online if we perform the system copy. Our RTO is one hour only so this is not acceptable.
Question : Is there anyway we can install the Central Services and Applicaiton servers beforehand on the secondary site on the secondary log-shipped database so that in the event of failure all we do is bring the secondary database primary and then start the SAP on the secondary site ?
If yes then can anyone point me the any SAP note or documentation that has the details of how to do this.
I have already looked at 1101017, 965908 but its not clear how we can perform the installation on the JAVA stack only.
We want to implement similar to what is shows in the below diagram :
http://help.sap.com/saphelp_apo/helpdata/en/fc/33c028d58511d386ee00a0c930df15/content.htm
Appreciate if someone can assist in resolving the above query.
Thank you.
RAThanks Markus for your reply.
I did looked at the HA web page from SAP and configured MSCS for HA but there's no information regarding DR setup. All it says is setup a log shiping which we already know but there's no procedure to setup SAP on the DR site(Passive SAP central instance)
My original question still stands :
- Is system copy the only option that we need to perform at the DR site in the event of failure ( Assumption is that we ieve make the secondary database as primary and also sync file system for JAVA using SAN replication technologies)
- We have to achieve an RTO of 1 hour which is not possible in this case as performing system copy using MSCS HA option will take few hours to setup and test.
-The link "http://help.sap.com/saphelp_apo/helpdata/en/fc/33c028d58511d386ee00a0c930df15/content.htm" shows that we can have SAP PAssive central instance. Whats the procedure of installing this Passive instance on the secondary site so in the event of failure all we do is make the secondary database primary and bring the SAP system online(installed already - no system copy performed) and also file system is already is sync.
Thank you.
RA -
SAP on MSCS and log shipping for DR
Hi Experts,
Can anyone please explain if below is possible :
- We have installed Netweaver 7.01 Java application server on Microsoft clustering which is working perfectly ok.
-Now we are looking at building a DR solution for the above installation using SQL Server 2008 Log-shipping. The DR site has the same MSCS configuration as the primary site.
-If we log-ship the production database and create a secondary database in "standby" or "norecovery" mode, how do we install a Passive SAP Central instance in case we want to failover from the primary to secondary?
- We know that we have the primary database log-shipped to the secondary site and in the event of failure we can switch the roles and make secondary as primary and then perform the system copy steps on the MSCS. the drawback here is that it will take a long time to bring the system online if we perform the system copy. Our RTO is one hour only so this is not acceptable.
Question : Is there anyway we can install the Central Services and Applicaiton servers beforehand on the secondary site on the secondary log-shipped database so that in the event of failure all we do is bring the secondary database primary and then start the SAP on the secondary site ?
If yes then can anyone point me the any SAP note or documentation that has the details of how to do this.
I have already looked at 1101017, 965908 but its not clear how we can perform the installation on the JAVA stack only.
We want to implement similar to what is shows in the below diagram :
http://help.sap.com/saphelp_apo/helpdata/en/fc/33c028d58511d386ee00a0c930df15/content.htm
Appreciate if someone can assist in resolving the above query.
Thank you.
RAThanks Marcus and John for your reply.
I did looked at the HA web page from SAP and configured MSCS for HA but there's no information regarding DR setup. All it says is setup a log shiping which we already know but there's no procedure to setup SAP on the DR site(Passive SAP central instance)
My original question still stands :
- Is system copy the only option that we need to perform at the DR site in the event of failure ( Assumption is that we ieve make the secondary database as primary and also sync file system for JAVA using SAN replication technologies)
- We have to achieve an RTO of 1 hour which is not possible in this case as performing system copy using MSCS HA option will take few hours to setup and test.
-The link "http://help.sap.com/saphelp_apo/helpdata/en/fc/33c028d58511d386ee00a0c930df15/content.htm" shows that we can have SAP PAssive central instance. Whats the procedure of installing this Passive instance on the secondary site so in the event of failure all we do is make the secondary database primary and bring the SAP system online(installed already - no system copy performed) and also file system is already is sync.
Thank you.
RA -
ORA-16191: Primary log shipping client not logged on standby.
Hi,
Please help me in the following scenario. I have two nodes ASM1 & ASM2 with RHEL4 U5 OS. On node ASM1 there is database ORCL using ASM diskgroups DATA & RECOVER and archive location is on '+RECOVER/orcl/'. On ASM2 node, I have to configure STDBYORCL (standby) database using ASM. I have taken the copy of database ORCL via RMAN, as per maximum availability architecture.
Then I have ftp'd all to ASM2 and put them on FS /u01/oradata. Have made all necessary changes in primary and standby database pfile and then perform the duplicate database for standby using RMAN in order to put the db files in desired diskgroups. I have mounted the standby database but unfortunately, log transport service is not working and archives are not getting shipped to standby host.
Here are all configuration details.
Primary database ORCL pfile:
[oracle@asm dbs]$ more initorcl.ora
stdbyorcl.__db_cache_size=251658240
orcl.__db_cache_size=226492416
stdbyorcl.__java_pool_size=4194304
orcl.__java_pool_size=4194304
stdbyorcl.__large_pool_size=4194304
orcl.__large_pool_size=4194304
stdbyorcl.__shared_pool_size=100663296
orcl.__shared_pool_size=125829120
stdbyorcl.__streams_pool_size=0
orcl.__streams_pool_size=0
*.audit_file_dest='/opt/oracle/admin/orcl/adump'
*.background_dump_dest='/opt/oracle/admin/orcl/bdump'
*.compatible='10.2.0.1.0'
*.control_files='+DATA/orcl/controlfile/current.270.665007729','+RECOVER/orcl/controlfile/current.262.665007731'
*.core_dump_dest='/opt/oracle/admin/orcl/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='+RECOVER'
*.db_recovery_file_dest_size=3163553792
*.db_unique_name=orcl
*.fal_client=orcl
*.fal_server=stdbyorcl
*.instance_name='orcl'
*.job_queue_processes=10
*.log_archive_config='dg_config=(orcl,stdbyorcl)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='SERVICE=stdbyorcl'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=121634816
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=364904448
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/opt/oracle/admin/orcl/udump'
Standby database STDBYORCL pfile:
[oracle@asm2 dbs]$ more initstdbyorcl.ora
stdbyorcl.__db_cache_size=251658240
stdbyorcl.__java_pool_size=4194304
stdbyorcl.__large_pool_size=4194304
stdbyorcl.__shared_pool_size=100663296
stdbyorcl.__streams_pool_size=0
*.audit_file_dest='/opt/oracle/admin/stdbyorcl/adump'
*.background_dump_dest='/opt/oracle/admin/stdbyorcl/bdump'
*.compatible='10.2.0.1.0'
*.control_files='u01/oradata/stdbyorcl_control01.ctl'#Restore Controlfile
*.core_dump_dest='/opt/oracle/admin/stdbyorcl/cdump'
*.db_block_size=8192
*.db_create_file_dest='/u01/oradata'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.db_recovery_file_dest='+RECOVER'
*.db_recovery_file_dest_size=3163553792
*.db_unique_name=stdbyorcl
*.fal_client=stdbyorcl
*.fal_server=orcl
*.instance_name='stdbyorcl'
*.job_queue_processes=10
*.log_archive_config='dg_config=(orcl,stdbyorcl)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_2='SERVICE=orcl'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_start=TRUE
*.open_cursors=300
*.pga_aggregate_target=121634816
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=364904448
*.standby_archive_dest='LOCATION=USE_DB_RECOVERY_FILE_DEST'
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/opt/oracle/admin/stdbyorcl/udump'
db_file_name_convert=('+DATA/ORCL/DATAFILE','/u01/oradata','+RECOVER/ORCL/DATAFILE','/u01/oradata')
log_file_name_convert=('+DATA/ORCL/ONLINELOG','/u01/oradata','+RECOVER/ORCL/ONLINELOG','/u01/oradata')
Have configured the tns service on both the hosts and its working absolutely fine.
<p>
ASM1
=====
[oracle@asm dbs]$ tnsping stdbyorcl
</p>
<p>
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:49:00
</p>
<p>
Copyright (c) 1997, 2005, Oracle. All rights reserved.
</p>
<p>
Used parameter files:
</p>
<p>
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.20)(PORT = 1521))) (CONNECT_DATA = (SID = stdbyorcl) (SERVER = DEDICATED)))
OK (30 msec)
ASM2
=====
</p>
<p>
[oracle@asm2 archive]$ tnsping orcl
</p>
<p>
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:48:39
</p>
<p>
Copyright (c) 1997, 2005, Oracle. All rights reserved.
</p>
<p>
Used parameter files:
</p>
<p>
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.10)(PORT = 1521))) (CONNECT_DATA = (SID = orcl) (SERVER = DEDICATED)))
OK (30 msec)
Please guide where I am missing. Thanking you in anticipation.
Regards,
Ravish GargFollowing are the errors I am receiving as per alert log.
ORCL alert log:
Thu Sep 25 17:49:14 2008
ARCH: Possible network disconnect with primary database
Thu Sep 25 17:49:14 2008
Error 1031 received logging on to the standby
Thu Sep 25 17:49:14 2008
Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
ORA-01031: insufficient privileges
FAL[server, ARC1]: Error 1031 creating remote archivelog file 'STDBYORCL'
FAL[server, ARC1]: FAL archive failed, see trace file.
Thu Sep 25 17:49:14 2008
Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
Thu Sep 25 17:49:14 2008
ORACLE Instance orcl - Archival Error. Archiver continuing.
Thu Sep 25 17:49:44 2008
FAL[server]: Fail to queue the whole FAL gap
GAP - thread 1 sequence 40-40
DBID 1192788465 branch 665007733
Thu Sep 25 17:49:46 2008
Thread 1 advanced to log sequence 48
Current log# 2 seq# 48 mem# 0: +DATA/orcl/onlinelog/group_2.272.665007735
Current log# 2 seq# 48 mem# 1: +RECOVER/orcl/onlinelog/group_2.264.665007737
Thu Sep 25 17:55:43 2008
Shutting down archive processes
Thu Sep 25 17:55:48 2008
ARCH shutting down
ARC2: Archival stopped
STDBYORCL alert log:
==============
Thu Sep 25 17:49:27 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-01017: invalid username/password; logon denied
Thu Sep 25 17:49:27 2008
Error 1017 received logging on to the standby
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
returning error ORA-16191
It may be necessary to define the DB_ALLOWED_LOGON_VERSION
initialization parameter to the value "10". Check the
manual for information on this initialization parameter.
Thu Sep 25 17:49:27 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-16191: Primary log shipping client not logged on standby
PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
Thu Sep 25 17:51:38 2008
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 40-40
DBID 1192788465 branch 665007733
FAL[client]: All defined FAL servers have been attempted.
Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
parameter is defined to a value that is sufficiently large
enough to maintain adequate log switch information to resolve
archivelog gaps.
Thu Sep 25 17:55:16 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-01017: invalid username/password; logon denied
Thu Sep 25 17:55:16 2008
Error 1017 received logging on to the standby
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
returning error ORA-16191
It may be necessary to define the DB_ALLOWED_LOGON_VERSION
initialization parameter to the value "10". Check the
manual for information on this initialization parameter.
Thu Sep 25 17:55:16 2008
Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
ORA-16191: Primary log shipping client not logged on standby
PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
Please suggest where I am missing.
Regards,
Ravish Garg -
Transaction log shipping restore with standby failed: log file corrupted
Restore transaction log failed and I get this error: for only 04 no of database in same SQL server, renaming are working fine.
Date
9/10/2014 6:09:27 AM
Log
Job History (LSRestore_DATA_TPSSYS)
Step ID
1
Server
DATADR
Job Name
LSRestore_DATA_TPSSYS
Step Name
Log shipping restore log job step.
Duration
00:00:03
Sql Severity 0
Sql Message ID 0
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted
0
Message
2014-09-10 06:09:30.37 *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: An error occurred while processing the log for database 'TPSSYS'.
If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
RESTORE LOG is terminating abnormally.
Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 Deleting old log backup files. Primary Database: 'TPSSYS'
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.38 ----- END OF TRANSACTION LOG RESTORE
Exit Status: 1 (Error)I Have restore the database to new server and check with new log shipping but its give this same error again, If it is network issue i believe issue need to occur on every database in that server with log shipping configuration
error :
Message
2014-09-12 10:50:03.18 *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-12 10:50:03.18 *** Error: An error occurred while processing the log for database 'EAPDAT'. If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
RESTORE LOG is terminating abnormally.
can this happened due to data base or log file corruption, if so how can i check on that to verify the issue
Its not necessary if the issue is with network it would happen every day IMO it basically happens when load on network is high and you transfer log file which is big in size.
As per message database engine was not able to restore log backup and said that you must rebuild log because it did not find out log to be consistent. From here it seems log corruption.
Is it the same log file you restored ? if that is the case since log file was corrupt it would ofcourse give error on wehatever server you restore.
Can you try creating logshipping on new server by taking fresh full and log backup and see if you get issue there as well. I would also say you to raise case with Microsoft and let them tell what is root cause to this problem
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Articles -
Hi folks,
we had working log shipping scripts which automatically copy log backups to the standby server. We had to interrupt the log backups (overwrite mode set during EHP installation) and now after a full backup and restore the log restores do not work, please see the log below (the last two lines).
The log file =42105
FirstLogPage=5233911
LastLogPage=6087243
Database UsedLogPage=5752904
I would expect the restore to work since Database log page is within the range of the first and last log pages of the log backup file.
Is this not the case? How should it work when re-establishing the log shipping?
Mark.
Find first log file to apply
FindLowestLogNo()
File: log_backup.42105
File extension is numeric '42105'
LogNo=42105
FindLowestLogNo()
LogMin=42105 LogMax=42105
Execute Command: C:\sapdb\programs\pgm\dbmcli.exe -n localhost -d ERP -u SUPERDBA,Guer0l1t0 db_admin & C:\sapdb\programs\pgm\dbmcli.exe -n localhost -d ERP -u SUPERDBA,Guer0l1t0 medium_label LOG1 42105
[START]
OK
OK
Returncode 0
Date 20110308
Time 00111416
Server saperpdb.rh.renold.com
Database ERP
Kernel Version Kernel 7.7.06 Build 010-123-204-327
Pages Transferred 0
Pages Left 0
Volumes
Medianame
Location F:\log_shipping\log_backup.42105
Errortext
Label LOG_000042105
Is Consistent
First LOG Page 5233911
Last LOG Page 6087243
DB Stamp 1 Date 20110209
DB Stamp 1 Time 00190733
DB Stamp 2 Date 20110308
DB Stamp 2 Time 00111415
Page Count 853333
Devices Used 1
Database ID saperpdb.rh.renold.com:ERP_20110209_210432
Max Used Data Page
Converter Page Count
[END]
LogNo=42105 FirstLogPage=5233911 LastLogPage=6087243 (UsedLogPage=5752904)
WARNING: Log file not yet applied but NOT the first log file. Either sequence error or first log file is missing/yet to arriveHello Birch Mark,
the recovery with intialization is the correct step to recreate the shadow database.
What has to be done before:
Source database:
SA)Activate the database logging.
SB) Create the complete databackup
SC) Set the aoutolog on or create the logbackup,
the first logbackup after completedatabackup created => check the backup history of the source database.
Shadow database:
ShA) Run the recovery with initialization or use db_activate dbm command, see more details in the MAXDB library,
I gave you references in the my reply before.
ShB) After the restore of the complete databackup created in step SB) don't restart the database in the online.
Keep the shadow database in admin or offline < when you run execute recover_cancel >.
Please post output of the db_restartinfo command.
ShC) You could restart the logbackups recovery created in SC), check the backup history of the source database which logbackup will be first to recover.
Did you follow those steps?
There is helpful documentation/hints at Wiki:
http://wiki.sdn.sap.com/wiki/display/MaxDB/SAPMaxDBHowTo
->"HowTo - Standby DB log shipping"
Regards, Natalia Khlopina -
How to use the mirrored and log shipped secondary database for update or insert operations
Hi,
I am doing a DR Test where I need to test the mirrored and log shipped secondary database but without stopping the mirroring or log shipping procedures. Is there a way to get the data out of mirrored and log shipped database to another database for update
or insert operations?
Database snapshot can be used only for mirrored database but updates cannot be done. Also the secondary database of log shipping cannot used for database snapshot. Any ideas of how this can be implemented?
Thanks,
PreethaHmm in this case I think you need Merge Replication otherwise it breaks down the purpose of DR...again in that case..
Best Regards,Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
MS SQL optimization: MS SQL Development and Optimization
MS SQL Consulting:
Large scale of database and data cleansing
Remote DBA Services:
Improves MS SQL Database Performance
SQL Server Integration Services:
Business Intelligence -
Oracle 9i and Oracle 10g transaction log shipping
Hi,
We have Oracle 9i and we use the transaction log shipping mechanism to transport our transaction log files to our DR site. We then import these against the database at the remote site.
Our telecomms infrastructure is pretty limited so this is a manual process.
We are looking at upgrading to 10G. I believe that with 10g you have to use dataguard, or is there a way to mimic the behavior of 9i that would allow us to transport and apply the transaction logs manually?
Thanks
AndrewYou can try setting the SGA to a low value and bring up both the databases. I don't think it should be too slow provided you are not running other windows programs.
If you are really interested in trying out new products you can also explore the option of installing VMware, creating virtual machines & installing Linux, and then playing with the different Oracle products. Doing this will at least keep your main windows operating system clean.
You may want to check out my blog post on Build your own oracle test lab
Cheers !!!
Ashish Agarwal
http://www.asagarwal.com -
Sql server log shipping space consumption
I have implemented sql server logs shipping from hq to dr server
The secondary databases are in standby mode .
The issue is that after configuring it , my dr server is running out of space very rapidly
I have checked the log shipping folders where the trn files resides and they are of very decent size , and the retention is configured for twenty four hours
I checked the secondary databases and their size is exactly same as that of the corresponding primary databases
Could you guys please help me out in identifying the reason behind this odd space increase
I would be grateful if you could point me to some online resources that explains this matter with depthThe retention is happening . I have checked the folders they do not have records older then 24 hours .
I dont know may be its because in the secondary server (Dr) there is no full backup job working , is it because of this the ldf file is getting bigger and bigger but again I far as my understanding goes we cannot take a database full backup in the stand
by mode .
The TLog files of log shipped DBs on the secondary will be same size as that of primary. The only way to shrink the TLog files on secondary (I am not advising you to do this) is to shrink them on the primary, force-start the TLog backup job, then the copy
job, then the restore job on the secondary, which will sync the size of the TLog file on the secondary.
If you have allocated the same sized disk on both primary and secondary for TLog files, then check if the Volume Shadow Copy is consuming the space on the secondary
Satish Kartan www.sqlfood.com -
HI guys,
We are using SQL SERVER 2005.
I am having a LSAlert_Serv job and this job runs the system stored procedure sys.sp_check_log_shipping_monitor_alert.
when this job is run I am getting the following error message:
Here is the error message I am getting :
The log shipping primary database SHARP has backup threshold of 60 minutes and has not performed
a backup log operation for 7368 minutes. Check agent log and logshipping monitor information. [SQLSTATE 42000] (Error 14420). The step failed.
The database named SHARP that is mentioned in the above error message is now moved to another
server.
When I looked into the stored procedure and when i ran the below query from the Stored procedure
select primary_server
,primary_database
,isnull(threshold_alert, 14420)
,backup_threshold
,cast(0 as int)
from msdb.dbo.log_shipping_monitor_primary
where threshold_alert_enabled = 1
I can still see the database SHARP in the table msdb.dbo.log_shipping_monitor_primary. So,
is it the reason for failure? If so, what to do to update the table msdb.dbo.log_shipping_monitor_primary and to fix the issue.
ThanksThe database named SHARP that is mentioned in the above error message is now moved to another server.
When I looked into the stored procedure and when i ran the below query from the Stored procedure :
Since you said you moved database to different server can you please check that SQL Server service account( in new server where you moved database) has full permissions on folder where you have configured log backup job to back the transaction logs.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
My TechNet Wiki Articles -
Log shipping is not restoring log files ata particular time
Hi,
I have configured log shipping and it restores all the log files upto a particular time after which it throws error and not in consistent sate. I tried deleting and again configuring log shipping couple of times but no success. Can any one tell me
how to prevent it as I have already configured log shipping from another server to same destination server and it is working fine for more than 1 year? The new configuration is only throwing errors in restoration job.
Thanks,
PreethaMessage
2014-07-21 14:00:21.62 *** Error: The log backup file 'E:\Program Files\MSSQL10_50.MSSQLSERVER\MSSQL\Backup\Qcforecasting_log\Qcforecasting_20140721034526.trn' was verified but could not be applied to secondary database 'Qcforecasting'.(Microsoft.SqlServer.Management.LogShipping)
2014-07-21 14:00:21.62 Deleting old log backup files. Primary Database: 'Qcforecasting'
2014-07-21 14:00:21.62 The restore operation completed with errors. Secondary ID: '46b20de0-0ccf-4411-b810-2bd82200ead8'
2014-07-21 14:00:21.63 ----- END OF TRANSACTION LOG RESTORE -----
The same file was tried thrice and it threw error all the 3 times.
But when I manually restored the Qcforecasting_20140721034526 transaction log it worked. Not sure why this is happening. After the manual restoration it worked fine for one run.ow waiting for the other to complete.
This seems strange to me error points to fact that backup was consistent but could not be applied because may be restore process found out that log backup was not it correct sequence or another log backup which was accidentally taken can be applied.
But then you said you can apply this manually then this must be related to permission. Because if it can restore manually it can do it with job unless agent account has some permission issue.
Ofcourse more logs would help
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
My TechNet Wiki Articles -
I see most recent file is called sql server log shipping work file
What is "SQL server log shipping work file"?
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/62a6556e-6655-4d19-9112-1788cf7bbcfc/wrk-file-in-logshipping-2005
Best Regards,Uri Dimant SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/ -
Hi,
We have configured log shipping in our production setup. Now we are planning to change the service account under which SQL Server runs from Local System to a domain account.
Please let me know whether changing the service account would have any impact on log shipping.
Regards,
Varun
Hi n.varun,
According to your description, regarding the startup account for the SQL Server and SQL Server Agent services on log shipping servers: if your SQL Servers are in a domain, Microsoft recommends using a domain account to start the SQL Server services. As noted in other posts, the domain account needs full control permissions on the shared backup location and the sysadmin role in SQL Server. If you change the account from Local System to a domain account in SQL Server Configuration Manager (SSCM), it will have no impact on log shipping.
In addition, if you configure SQL Server log shipping across different domains or workgroups, verify that the SQL Server Agent service account on the secondary server has read access to the folder the log backups are located in and write permissions to the local folder it will copy the logs to. Then, on the secondary server, you can change the SQL Server Agent service from Local System to the domain account you are configuring. For more information, see:
http://www.mssqltips.com/sqlservertip/2562/sql-server-log-shipping-to-a-different-domain-or-workgroup/
There is a detailed article about how to configure security for SQL Server log shipping; you can review the following link.
http://support.microsoft.com/kb/321247
Regards,
Sofiya Li
TechNet Community Support -
During log shipping, the job on the secondary server ran successfully BUT gave this information:
"Skipping log backup file since load delay period has not expired ...."
What is this "load delay period"? Can we configure it somehow, somewhere?
NOTE: On the "Restore Transaction Log" tab, "Delay restoring backups at least" = default (zero minutes)
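For reference, that delay corresponds to the restore_delay setting of the secondary database, which can be inspected and changed in T-SQL; a sketch, with a hypothetical database name:

```sql
-- Run on the secondary server. Shows the current restore delay (minutes).
SELECT secondary_database, restore_delay
FROM msdb.dbo.log_shipping_secondary_databases;

-- Set the delay to zero so copied backups are restored immediately.
EXEC msdb.dbo.sp_change_log_shipping_secondary_database
    @secondary_database = N'MyDB',  -- hypothetical database name
    @restore_delay = 0;             -- minutes to wait before restoring
```

If the value is already zero, check that the clocks on the primary and secondary servers agree, since the delay is evaluated against the backup file's timestamp.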
Thanks
Think BIG but Positive, may be GLOBAL better UNIVERSAL.

How to get the LSBackup, LSCopy, and LSRestore jobs back in sync...
Last I posted the issue was that my trn backups were not being copied from Primary to Secondary.
I found upon further inspection of the LS related tables in MSDB the following likely candidates for adjustment:
1) dbo.log_shipping_monitor_secondary, column last_copied_file
Change the last_copied_file column to something older than the file that is stuck. For example, the value in the table was
E:\SQLLogShip\myDB_20140527150001.trn
I changed last_copied_file to E:\SQLLogShip\myDB_20140525235000.trn. Note that this is just a made-up file name that is a few minutes before the actual file that I would like to restore (myDB_2014525235428.trn): 4 minutes and 28 seconds before, to be exact.
LSCopy runs and voila! Now the file is copied from primary to secondary. That appears to be the only change needed to get the copy going again.
2) For LSRestore, see the same MSDB table dbo.log_shipping_monitor_secondary and change the
last_restored_file column;
again I used the made-up file E:\SQLLogShip\myDB_20140525235000.trn.
LSRestore runs and the just-copied myDB_2014525235428.trn is restored.
** note that
dbo.log_shipping_secondary_databases also has a last_restored_file column - this did not seem to have any effect, though I see that it updates after completing the above and LSRestore has run successfully, so now it is correct as well
3) The LSBackup job is still not running; it still has a next run date in the future. I could just leave it and eventually it would come right, but I made a fairly significant time change, plus it's all an experiment... back to MSDB.
Look at dbo.sysjobs and get the job_id of your LSBackup job.
Edit dbo.sysjobschedules: change next_run_date and next_run_time as needed to a date and time before the current time, or to when you would like the job to start running.
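The three adjustments above can be sketched in T-SQL. The file names are the examples from this post and the job name is hypothetical; edit msdb by hand only on a test system, as described here:

```sql
USE msdb;

-- 1) Rewind the copy job's high-water mark so the stuck file is copied again.
UPDATE dbo.log_shipping_monitor_secondary
SET last_copied_file = N'E:\SQLLogShip\myDB_20140525235000.trn'
WHERE secondary_database = N'myDB';

-- 2) Rewind the restore high-water mark the same way.
UPDATE dbo.log_shipping_monitor_secondary
SET last_restored_file = N'E:\SQLLogShip\myDB_20140525235000.trn'
WHERE secondary_database = N'myDB';

-- 3) Pull the backup job's next run back before the current time.
UPDATE s
SET s.next_run_date = 20140526,   -- yyyymmdd, before "now"
    s.next_run_time = 0           -- hhmmss
FROM dbo.sysjobschedules AS s
JOIN dbo.sysjobs AS j ON j.job_id = s.job_id
WHERE j.name = N'LSBackup_myDB';  -- hypothetical job name
```

The string comparison against the file name is exactly the "rudimentary" check mentioned below, which is why a made-up but correctly formatted older file name works.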
I wouldn't be so cavalier with data that was important, but that's the benefit of being in Test, and it appears that these time comparisons are very rudimentary: a value in the relevant log shipping table versus the name of the trn file. That said, if you were facing a problem of this nature due to lost or corrupted trn files, or some similar scenario, this wouldn't fix your problem, though it _might_ allow you to continue. But in my case I know I have all the trn files; it's just the time that changed, in this case on my Primary server, and thus the names of the trn logs got out of sync. -
Shrink Log file in log shipping and change the database state from Standby to No recovery mode
Hello all,
I have configured sql server 2008 R2 log shipping for some databases and I have two issues:
1. Can I shrink the log file for these databases? If I change the primary database from full to simple recovery, shrink the log file, and then change it back to full, log shipping will fail. I've seen some answers that talked about using the "No Truncate" option, but as far as I know that option does not affect the log file; it shrinks the data file only.
I also can't build a maintenance routine that reconfigures log shipping every time I want to shrink the log file, because the database is huge and it takes a long time to restore at the DR site, so reconfiguration is not an option :(
2. How can I change the secondary database state from Standby to No Recovery mode? I tried to change it from the wizard and waited until the next restore of the transaction log backup, but the job failed and the error was: "the step failed". I need to do this to change the mdf and ldf file locations for the secondary databases.
can any one help?
Thanks in advance,
Faris ALMasri
1. If you change the recovery model of a log shipped database to simple and back to full, log shipping will break and logs won't be restored on the secondary server, because the log chain is broken. You can shrink the log file of the primary database, but why would you need to? What is the schedule of your log backups? Frequent log backups already keep the log file in check; why shrink it and create extra load on the system when the log file will ultimately grow again? And since instant file initialization does not apply to log files, growth takes time and slows performance.
You said you want to shrink because the database size is huge: is it actually huge, or does it have lots of free space? Don't worry about free space in the data file; it will eventually be used by SQL Server as more data comes in.
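For completeness, if a one-off shrink of the primary's log file is still wanted, it can be done without touching the recovery model, so the log chain and log shipping stay intact. The database and logical file names below are hypothetical:

```sql
-- Run on the primary, after the scheduled log shipping backup job has
-- run (do NOT take an ad-hoc log backup here, or the secondary will
-- miss part of the chain). Find the logical log file name first:
SELECT name, type_desc FROM MyDB.sys.database_files;

-- Then shrink the log file itself; target size is in MB.
USE MyDB;
DBCC SHRINKFILE (N'MyDB_log', 1024);  -- hypothetical logical name
```

DBCC SHRINKFILE does not break the log backup chain; it only releases unused space at the end of the file, so the next shipped log backup restores normally.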
2. You are following the wrong method: changing the state to No Recovery would not even allow you to run SELECT queries, which you can run in Standby mode. Please refer to the link below for how to move the secondary data and log files:
http://www.mssqltips.com/sqlservertip/2836/steps-to-move-sql-server-log-shipping-secondary-database-files/