Restore ASE - Dump of open transaction-log required?
Hi experts,
I am still doing some restore tests.
What about the following case.
The last transaction log dump was taken at 1 o'clock and the next is scheduled for 4 o'clock.
At 3 o'clock we detect that we have to restore to 2 o'clock.
So for this restore I need transaction log records that haven't been dumped yet.
My question is: do I also have to dump the current transaction log to a file for the restore procedure?
Or is there another way to include the current log in the restore?
In other words, when is the log file first touched?
Only after the "online database" command?
If so, I could also run the restore using the original log file, right?
Kind regards
Christian,
You are right.
Let me tell you the practice I recommend:
1. Take a full backup daily during your off-business hours, provided you have the infrastructure (tape, disk, or SAN) and the data is critical, whether production or development.
2. During business hours, take a transaction log backup every hour or every half hour, say between 9 and 6 in your time zone :)
3. This mostly helps you minimise transaction log loss.
4. As you have the weekend reorg and update statistics jobs running, I prefer to take a full backup just before the start of production hours on Monday and keep it safe, so that the data is clean and secure.
If there is any confusion, let me know and I will explain it more clearly in simpler words.
PS: One full backup per day is fine if you can preserve and retain it for 7-10 days and delete it later once you no longer need it, assuming you don't have infrastructure or disk cost problems :P ;)
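To answer the original question directly: yes, the still-open log has to be dumped to a file before it can be loaded. A minimal sketch of the point-in-time sequence, with hypothetical database names, file paths, and timestamp format (check your ASE version's `until_time` syntax):

```sql
-- 1. At 3 o'clock, dump the not-yet-dumped (tail) log to a file first
dump transaction mydb to "/backups/mydb_tail.trn"
go

-- 2. Load the last full dump, then the existing log dumps in order
load database mydb from "/backups/mydb_full.dmp"
go
load transaction mydb from "/backups/mydb_0100.trn"
go

-- 3. Load the tail dump only up to 2 o'clock, then bring the db online
load transaction mydb from "/backups/mydb_tail.trn"
with until_time = "Jan 25, 2014 2:00:00:000AM"
go
online database mydb
go
```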
Cheers
Kiran K Adharapuram
Similar Messages
-
Database restore from backup: missing some transaction log backup files. How to restore?
One database with full recovery model.
It runs a one-time full backup at 12:00 AM on 1/1.
Then transaction log backups run every 3 hours: 3:00 AM, 6:00 AM, 9:00 AM, 12:00 PM, 3:00 PM, 6:00 PM, 9:00 PM, 12:00 AM, ...
If we can't find the 3:00 AM, 6:00 AM, and 9:00 AM transaction log backup files, could we still restore to 6:00 AM? We still have all transaction log backup files from 12:00 PM onwards.
Thanks
No. Log backups are incremental and form a chain. If any log backup is missing, you cannot restore to any point in time after the backup that was lost or damaged. For example, if you are missing the 3 AM log backup, then even if you have the 6 AM, 9 AM, and 12 PM log backups, they are of no use.
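One way to check whether the chain is intact before attempting a restore (database name is a placeholder) is to list the LSN ranges of the available backups in msdb; each log backup's first_lsn must match the previous log backup's last_lsn:

```sql
SELECT backup_start_date,
       type,        -- D = full backup, L = log backup
       first_lsn,
       last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'MyDatabase'
ORDER BY backup_start_date;
```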
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
Ce7305-5.3.3: number of transaction log archives
Could anybody tell me how many transaction log archives are kept on a CE7305 running 5.3.3, and how can we control that number?
Thanks in advance.
A.T Doan -
WebDAV Query generates a high number of transaction log files
Hi all,
I have a program that issues WebDAV queries to search for contacts on an Exchange 2007 server. The number of contacts returned per user mailbox is quite high (about 4,500).
I've noticed that each time the query runs, about 15 transaction log files are generated on the Exchange server (1 MB each). If I request only 2 properties on the contacts, the number drops to about 8.
This is a problem since our program is supposed to run often (about every 3-5 minutes), as it synchronizes Exchange mailboxes with a SQL Server database. The result is that the logs grow very quickly on the server side, even when there are few updates.
Any idea why so many transaction logs are generated by a WebDAV search that returns many items? I would understand logs being created when an update is made on the server, but here it is only a search, with many contact items returned.
Is there perhaps a setting on the Exchange server to control which kinds of logs are generated?
Thank for your help,
Alexandre
Hi Alex,
Actually, circular logging/backup was not offered as a solution; I was just explaining that such an option exists on the server, but it is not recommended and hence not useful in our case :)
- I am not a developer, but AFAIK a WebDAV search query shouldn't generate transaction logs, because it just searches the mailboxes and returns the result over HTTP without producing any Exchange transaction.
- I wouldn't open transaction logs while they are in use by Exchange; that may generate errors and can sometimes even corrupt the Exchange database. In any case, as you observed, they are not readable by anything other than the Exchange Information Store service (store.exe).
- You could post this query in the development forum to get a better idea, in case another programmer has observed similar symptoms with a WebDAV contact search query in Exchange 2007 or can validate your query.
Microsoft TechNet > Forums Home > Exchange Server > Development
Well, I just saw that you are using Exchange 2007. In that case, why not use Exchange Web Services, which is a better, improved method to access and query mailboxes? WebDAV is de-emphasized in Exchange 2007 and might disappear in the next version of Exchange. Check out the article below for further detail.
Development: Overview
http://technet.microsoft.com/en-us/library/aa997614.aspx
Amit Tank | MVP - Exchange | MCITP:EMA MCSA:M | http://ExchangeShare.WordPress.com -
Managing the transaction log correctly for a massive import to avoid storage saturation
Hi,
I'm working on a virtualized production SQL Server environment that hosts 10-15 databases. The data storage has a capacity of about 400-500 GB. All databases use the simple recovery model. A stored procedure in one of my databases (simple recovery model) reads 4-5 million rows of data from 4-5 Oracle tables having 20-30 columns. The SQL Server stored procedure is called by an SSIS 2012 task. During the procedure's execution, the transaction log of my database grows beyond 120-130 GB. This growth has caused a storage saturation that was not pleasant for my customer.
So my customer has asked me to investigate the proper management of transaction log growth. In particular, is it possible to avoid writing to the transaction log for a certain procedure or T-SQL operation? My customer has experience with the Oracle DBMS and says that Oracle manages storage space better, including for BLOB fields.
Even though it is possible to shrink the transaction log afterwards, what is really needed is to prevent the runaway growth of the transaction log before the storage saturates.
Any help, please? Many thanks
Hi,
Please monitor hung transactions, which cause high transaction log utilization.
You also need to look at this from a capacity planning perspective: identify the transaction with the highest transaction log requirement and size the transaction log accordingly. You can also set the autogrowth option for the transaction log based on best practices.
Check when the automatic checkpoint takes place, i.e. the default recovery interval in your settings, because in the simple recovery model automatic checkpoints occur based on the recovery interval and the percentage of transaction log used.
If the log has swollen, then as weekly maintenance you can shrink the transaction log back to a size based on your transaction requirements.
Overall, better monitoring of your running transactions, identifying any hung transactions and finding their root cause, and proper log autogrowth settings will overcome most of your problem. -
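As a rough sketch of the monitoring and sizing advice above (table and column names are hypothetical), you can watch log usage and, for the big import itself, commit in batches so that each checkpoint in the simple recovery model can reuse log space instead of one huge open transaction growing the log:

```sql
-- Current log size and percentage used for every database
DBCC SQLPERF(LOGSPACE);

-- Hypothetical chunked import: one committed transaction per batch
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (id, payload)
    SELECT TOP (50000) s.id, s.payload
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.id = s.id);
    SET @rows = @@ROWCOUNT;
END;
```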
ASE - Dump Database (Transaction Log deleted?)
Hi experts,
I have a question.
Will the transaction log be deleted/truncated at the time a database dump finishes?
Hopefully it is only truncated by dumping the transaction log itself, right?
Kind regards
Hello Christian,
Don't worry, we have set the right options on the databases that get installed through the Business Suite.
However, I will provide you the detailed DB details and their respective options, which you can also cross-verify yourself from isql using:
sp_helpdb
go
And you can enable or disable an option as well using:
sp_dboption "<SID>","<option_name>",true/false
go
Also, if you need the detailed per-database default options that should be set, let me know and I will attach the details as well ;)
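For the original question about whether `dump database` truncates the log: in ASE the log is truncated by `dump transaction`, not by `dump database`. A minimal sketch (database name is a placeholder):

```sql
-- Back up the log and truncate it (the normal case)
dump transaction mydb to "/backups/mydb.trn"
go

-- Truncate without keeping a copy; this breaks the log dump chain,
-- so use it only when up-to-the-minute recovery is not required
dump transaction mydb with truncate_only
go
```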
Regards
Kiran K Adharapuram -
Transaction log shipping restore with standby failed: log file corrupted
The transaction log restore failed and I get the error below. It happens for only 4 of the databases on the same SQL Server; the remaining ones are working fine.
Date: 9/10/2014 6:09:27 AM
Log: Job History (LSRestore_DATA_TPSSYS)
Step ID: 1
Server: DATADR
Job Name: LSRestore_DATA_TPSSYS
Step Name: Log shipping restore log job step.
Duration: 00:00:03
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0
Message:
2014-09-10 06:09:30.37 *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: An error occurred while processing the log for database 'TPSSYS'.
If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
RESTORE LOG is terminating abnormally.
Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 Deleting old log backup files. Primary Database: 'TPSSYS'
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
2014-09-10 06:09:30.37 *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.37 *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-10 06:09:30.37 *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
2014-09-10 06:09:30.38 ----- END OF TRANSACTION LOG RESTORE
Exit Status: 1 (Error)
I have restored the database to a new server and set up new log shipping, but it gives the same error again. If it were a network issue, I believe the issue would have to occur on every database on that server with a log shipping configuration.
error :
Message
2014-09-12 10:50:03.18 *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-09-12 10:50:03.18 *** Error: An error occurred while processing the log for database 'EAPDAT'. If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
RESTORE LOG is terminating abnormally.
Can this happen due to database or log file corruption? If so, how can I check to verify the issue?
It's not necessarily a network issue; if it were, it would happen every day. IMO it basically happens when the load on the network is high and you transfer a log file which is big in size.
As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log, because it did not find the log to be consistent. From here it looks like log corruption.
Is it the same log file you restored? If that is the case, since the log file was corrupt, it would of course give an error on whatever server you restore it to.
Can you try creating log shipping on a new server by taking a fresh full and log backup and see if you get the issue there as well? I would also suggest you raise a case with Microsoft and let them tell you the root cause of this problem.
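A sketch of the reinitialization suggested above (share paths and the undo file name are placeholders); the secondary is restored WITH STANDBY so subsequent log restores can continue:

```sql
-- On the primary
BACKUP DATABASE TPSSYS TO DISK = N'\\share\TPSSYS_full.bak' WITH INIT;

-- On the secondary, before re-enabling the LSRestore job
RESTORE DATABASE TPSSYS
FROM DISK = N'\\share\TPSSYS_full.bak'
WITH STANDBY = N'D:\TPSSYS_undo.dat', REPLACE;
```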
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Articles -
Sql 2008 Issue restoring transaction logs....
** Update: I performed the same steps in the corresponding Dev environment and things worked as expected. Only our prod environment uses SnapManager for SQL (NetApp), and I'm beginning to suspect that may be behind this issue.
I restored a full backup of the prod MyDB from 1/23/2014 in non-operational mode (so trans logs can be applied). I planned to apply trans log dumps from 1/24/2014, 7 AM (our first of the day) to noon. But applying the 7 AM trans dump gave this error:
>>>>>
Restore Failed for this Server... the Log in this backup set begins at....which is too recent to apply to the database. An earlier log backup that includes LSN....can be restored.
>>>>>
That message is clear, but I don't understand it in this case, as the full DB dump was taken Thursday night and the tran logs I am trying to restore are all from Friday.
TIA,
edm2
** Update 2 **
I kept checking and now definitely think that the NetApp SnapManager for SQL product (a storage-based, not SQL-based, approach to DR) is the culprit. My view of the world was that a full SQL database backup is performed at 7 PM and the SQL translogs are dumped every hour beginning at 7:15 AM the next day. This extract from the SnapManager log tells quite a different story: it takes a full database backup at 11 PM (!) that night, followed by a translog backup.
No wonder restoring things using SQL utilities doesn't work. BTW: I have no idea where SnapManager's dumps are stored.
>>>>>>>>>>>>>>>>>>>>>>>>
[23:00:32.295] *** SnapManager for SQL Server Report
[23:00:32.296] Backup Time Stamp: 01-24-2014_23.00.32
[23:00:32.298] Getting SQL Server Database Information, please wait...
[23:00:32.299] Getting virtual disks information...
[23:00:37.692] Querying SQL Server instances installed...
[23:01:01.420] Full database backup
[..e
[23:01:01.422] Run transaction log backup after full database backup: Yes
[23:01:01.423] Transaction logs will be truncated after backup: Yes
[23:02:39.088] Database [MyDatabase] recovery model is Full.
[23:02:39.088] Transaction log backup for database [MyDatabase] will truncate logs...
[23:02:39.089] Starting to backup transaction log for database [MyDatabase]...
[23:02:39.192] Transaction log backup of database [MyDatabase] completed.
>>>>>>>>>>>>>>>>>>>>>>>>
Unless anyone has further thoughts I think I will close this case and take it up with NetApp.
edm2
Sorry I wasn't clearer. The full database backup was taken on 1/23/2014 at 7 PM. The trans logs I was trying to restore were from the next day (starting 1/24/2014 at 7:15 AM, 8:15 AM, etc.). I could not find any SQL translog dumps taken after the full backup (at 7 PM) until the next morning's trans dumps (which start at 7:15 AM). Here is what I did:
RESTORE DATABASE [MyDatabase] FROM DISK =
N'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak' WITH FILE = 1,
MOVE N'MyDatabase_data' TO N'C:\MSSQL\Data\MyDatabase.mdf',
MOVE N'MyDatabase_log' TO N'C:\MSSQL\Data\MyDatabase_1.LDF',
NORECOVERY, NOUNLOAD, STATS = 10
GO
RESTORE LOG [MyDatabase] FROM DISK =
N'D:\MyDatabase\MyDatabase_backup_2014_01_24_071501_9715589.trn'
WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10
GO
Msg 4305, Level 16, State 1, Line 1
The log in this backup set begins at LSN 250149000000101500001, which is too recent to apply to the database. An earlier log backup that includes LSN 249926000000024700001 can be restored.
Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.
From Sql Error Log:
2014-01-25 00:00:15.40 spid13s This instance of SQL Server has been using a process ID of 1428 since 1/23/2014 9:31:01 PM (local) 1/24/2014 5:31:01 AM (UTC). This is an informational message only; no user action is required.
2014-01-25 07:31:08.79 spid55 Starting up database 'MyDatabase'.
2014-01-25 07:31:08.81 spid55 The database 'MyDatabase' is marked RESTORING and is in a state that does not allow recovery to be run.
2014-01-25 07:31:14.11 Backup Database was restored: Database: MyDatabase, creation date(time): 2014/01/15(16:41:13), first LSN: 249926:231:37, last LSN: 249926:247:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK:
{'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak'}). Informational message. No user action required.
Regarding my update note, the SnapManager for SQL product (which I was told simply uses VSS) runs every hour throughout the night. That's why I'm wondering if it could be interfering with the transaction log sequence. -
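One way to confirm the suspicion is to query msdb directly: backups taken by VSS-based tools such as SnapManager are normally recorded there too, so this query (using the database name from the post) lists every backup in the chain and the device it went to:

```sql
SELECT bs.backup_start_date,
       bs.type,                  -- D = full, L = log
       bs.first_lsn, bs.last_lsn,
       bs.is_snapshot,           -- 1 for VSS snapshot backups
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'MyDatabase'
ORDER BY bs.backup_start_date;
```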
I forgot my iPad lock code, and when I tried to restore it from iTunes it kept requiring me to switch off Find My iPad. I need to open it, whatever the way.
You can turn Find My iPad off by going to http://icloud.com, logging into your account, selecting the iPad from the device list in the 'Find My iPhone' section on that site, and choosing 'remove from account'.
-
Backup and restore full and transaction log in nonrecovery mode failed due to LSN
In SQL 2012 SP1 Enterprise, when taking a full backup followed immediately by a transaction log backup, the transaction log backup starts with an earlier LSN than the ending LSN of the full backup. As a result, I cannot restore the transaction log backup after the full backup, both with NORECOVERY, on another machine. I was trying to bring the two machines in sync for mirroring purposes. An example follows.
full backup: first 1121000022679500037, last 1121000022681200001
transaction log: first 1121000022679000001, last 1121000022682000001
--- SQL Scripts used
BACKUP DATABASE xxx TO DISK = xxx WITH FORMAT
go
backup log xxx to disk = xxx
--- When restoring, I tried the following:
restore log BarraOneArchive from disk=xxx WITH STOPATMARK = 'lsn:1121000022682000001', NORECOVERY
Also tried STOPBEFOREMARK; that did not work either. It complained that the LSN was too early to apply to the database.
I think what I am saying is correct. I said that in sync mirroring (I was not talking about a witness), if the network goes down for a few minutes or some longer time, maybe 20 minutes (more than that is a rare scenario; the IS team has a backup plan for that), logs on the principal will continue to grow, as transactions won't be able to commit because the connection with the mirror is gone and the commit acknowledgement from the mirror is not coming. After the network comes back online, the mirror will replay all the logs and soon catch up with the principal.
Books Online says this: "This is achieved by waiting to commit a transaction on the principal database until the principal server receives a message from the mirror server stating that it has hardened the transaction's log to disk." That is, if the remote server went away in a way the primary did not notice, transactions would not commit and the primary would also be stalled.
In practice it does not work that way. When a timeout expires, the principal considers the mirror to be gone, and Books Online says about this case: "If the mirror server instance goes down, the principal server instance is unaffected and runs exposed (that is, without mirroring the data)." In this section BOL does not discuss transaction logs, but it appears reasonable that the log records are retained so that the mirror can resync once it is back.
In async mirroring, the transaction log is sent to the mirror, but the principal does not wait for an acknowledgement from the mirror before committing the transaction.
But I would expect that the principal still gets an acknowledgement that the log records have been consumed, or else your mirroring could start failing if you back up the log too frequently. That is, I would not expect any major difference between sync and async mirroring in this regard. (Where it matters is on failover: with async mirroring, you are prepared to accept some data loss in case of a failover.)
These are theories that could be fairly easily tested if you have a mirroring environment set up in a lab, but I don't.
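For what it's worth, the LSN overlap reported in the question is normal: a log backup's first_lsn may precede the full backup's last_lsn, and RESTORE applies only the records the database actually needs. A minimal sketch of initializing a mirror without STOPATMARK (names and paths are placeholders):

```sql
-- On the intended mirror server
RESTORE DATABASE BarraOneArchive
FROM DISK = N'\\share\full.bak' WITH NORECOVERY, REPLACE;

RESTORE LOG BarraOneArchive
FROM DISK = N'\\share\log.trn' WITH NORECOVERY;

-- Then establish the partnership on each side, e.g.:
-- ALTER DATABASE BarraOneArchive SET PARTNER = N'TCP://principal.example.com:5022';
```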
Erland Sommarskog, SQL Server MVP, [email protected] -
Oracle dump transaction log like Sybase
Hello
In Sybase we use the dump transaction command on the databases that need it; it dumps whatever is in the transaction log of that db to the file system. The dump file is then copied over to another database server and loaded into the other database, which works like replication.
Does Oracle have something similar to this?
Many thanks
The closest you'll get to what you are looking for is a Data Guard setup. Data Guard will manage the shipment of the logs on its own and keep things in sync.
There is another option, Oracle Advanced Replication, but you will need to review whether it really meets your needs. The following is simply an overview of it:
http://www.orafaq.com/wiki/Advanced_Replication_FAQ
Or GoldenGate:
http://www.oracle.com/technetwork/middleware/goldengate/overview/index.html
Edited by: rui_catcuddler on Oct 20, 2010 7:54 AM -
Restore transaction log for log shipping failed
The transaction log restore failed and I get this error:
*** Error: An error occurred while processing the log for database 'XXX'. The log block version is higher than this server allows.
Can anyone help me? I don't have any idea how to solve it. Thank you
I'm using SQL 2008 R2; currently the LSRestore job fails on the standby server.
Date 4/23/2014 6:51:00 PM
Log Job History (LSRestore_)
Step ID 1
Server
Job Name LSRestore_
Step Name Log shipping restore log job step.
Duration 00:01:37
Sql Severity 0
Sql Message ID 0
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0
Message
2014-04-23 18:52:37.60 *** Error: Could not apply log backup file 'T:\Tlog\20140417084500.trn' to secondary database 'DB'.(Microsoft.SqlServer.Management.LogShipping) ***
2014-04-23 18:52:37.60 *** Error: An error occurred while processing the log for database 'DB'. The log block version is higher than this server allows.
RESTORE LOG is terminating abnormally.
Processed 0 pages for database 'DB', file 'DB_Data' on file 1.
Processed 17181 pages for database 'DB', file 'DB_Log' on file 1.(.Net SqlClient Data Provider) *** -
Maxdb restore - transaction log backup
Hi,
Is it possible to restore the database backup without the transaction log backup? I know this is kind of a lame question, but I'm just wondering whether it is possible and how it can be done.
Database is MaxDB and OS is Linux.
Thanks in advance!
Hi,
The restore does not depend on the database state in which the data backup was made.
Instead, you are able to recover any complete data backup without a log recovery. You can do this by using the dbmcli command DB_ACTIVATE RECOVER <medium_name> or the corresponding DBMGUI actions.
After the recovery you simply need to restart the database.
Kind regards, Martin -
Restoring Mailbox data through Transaction logs
Hello all,
We're running Exhange server 2007 on a dedicated VM.
We're currently running Tri-Daily VM backups and Daily backups via tape in 1 set, and 4 weekly sets of tapes.
Our issue is that the tape drive has now broken, and while we try to source a replacement or a repair, we need to work out how we are going to restore lost data without having to purchase more hardware for daily VM backups.
We know that transaction logs can be used to roll forward from the point in time of the last VM backup to the time the server went down, to recover as many lost emails as possible.
Our SLA to our users is currently 2-48 hours of lost data on the system, but we'd like to limit this as much as possible; as backups are tri-daily, we run the risk of overrunning that SLA.
So the main question is: how would we roll our database forward using the transaction logs?
Great link from Ed. Regarding rolling logs, there are a few things to keep in mind.
1. In order to use ESEutil to roll logs you MUST have a database in a dirty/inconsistent state, i.e. the system stopped abruptly (was not shut down gracefully), or an Exchange-aware backup was made and you are able to restore the DB to disk without rolling the logs forward or mounting the DB.
2. You can only apply a set of logs to a dirty DB one time. Say you have a still-dirty backup of the DB from 1/1/2014 and you have logs up through today, 3/11/2014. You could make a copy of the logs through, say, 1/31/2014 and use eseutil to roll them forward, bringing the DB to a consistent/clean state.
3. However, if you then want to roll forward data through, say, 2/15/2014, you cannot do so with the DB that was rolled up to 1/31/2014 in step 2, because that DB is now in a clean/consistent state and will not accept additional log replay. To solve this you would need to do the following:
A: get a copy of the 1/1/2014 EDB that is dirty/inconsistent
B: copy the logs from 1/14 to 2/15 into a separate directory
C: roll the logs into the DB using eseutil.
Search, Recover, & Extract Mailboxes, Folders, & Email Items from Offline Exchange Mailbox and Public Folder EDB's and Live Exchange Servers or Import/Migrate direct from Offline EDB to Any Production Exchange Server, even cross version i.e. 2003 -->
2007 --> 2010 --> 2013 with Lucid8's
DigiScope -
The log shipping restore job restores a corrupted transaction log backup to a secondary database
Dear Sir,
I have primary sql instances in cluster node and it is configured with log shipping for DR system.
The instance fails over before the log shipping backup job finishes; therefore a corrupted transaction log backup is generated. How can I keep log shipping running without a break, and how can I tell that a transaction log backup is damaged?
Cheers,
Well, when a failover happens, SQL Server is stopped and restarted on the other node. So if SQL Server is stopped while it is taking a log backup, the backup operation stops and no .trn file is produced. The backup operation won't complete, hence no backup information is stored in SQL Server's msdb and no .trn file is generated.
You can run RESTORE VERIFYONLY on a .trn file to see whether it is damaged or not. Log shipping is quite flexible: even if the previous log backup did not complete, the next one won't be affected, because SQL Server has no record of an incomplete backup.
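The verification step mentioned above can be sketched as follows (the file path is a placeholder):

```sql
-- Quick integrity check of a log backup file before the restore job uses it
RESTORE VERIFYONLY FROM DISK = N'X:\logship\MyDb_20140910.trn';

-- Inspect the header as well: LSN range, backup type, finish time
RESTORE HEADERONLY FROM DISK = N'X:\logship\MyDb_20140910.trn';
```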
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP