Understanding the transaction log
Hi all,
Is there a way I can understand the transaction log?
I have tried using "dbcc traceon(3604); dbcc log", but that doesn't help much.
After some searching I came to know about SQL Anywhere, but I couldn't find any link to download it.
Is there any other way to view the log?
Any reference on this will also be helpful.
Thanks in advance.
Unless the database is configured for sql statement replication, the transaction log does not contain any query text that would be useful to your investigation. The log contains records showing the changes made to the binary data stored on each page as the result of data modifications (DML) only.
You might consider setting up auditing of cmdtext and waiting for the issue to happen again. The audit tables would then contain the queries run.
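A sketch of what that cmdtext auditing setup might look like in ASE (the login name is an assumption, and the sybsecurity auditing database must already be installed; check sp_audit's documentation for your version):

```sql
-- Enable auditing at the server level (requires sybsecurity to be installed)
sp_configure "auditing", 1
go
-- Capture the text of commands sent by a specific login (hypothetical login name)
sp_audit "cmdtext", "app_login", "all", "on"
go
-- Captured statements land in the sybsecurity..sysaudits_0x tables
```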
You might also consider posting a detailed description of the issue, we might be able to recognize it or give further suggestions on how to reproduce it.
-bret
Similar Messages
-
Why is the transaction log file not truncated though its simple recovery model?
My database is in the simple recovery model, and when I view the free space in the log file it shows 99%. Why doesn't my log file truncate the committed data automatically to free space in the ldf file? When I shrink it, it does shrink. Please advise.
mayooran99
If log records were never deleted (truncated) from the transaction log, it wouldn't show as 99% free in the simple recovery model.
Log truncation automatically frees space in the logical log for reuse by the transaction log, and that's what you are seeing. Truncation won't change the file size; it is more like log clearing, marking parts of the log free for reuse.
Since you said "When I shrink it does shrink", I don't see any issue here. Log truncation and shrinking the file are two different things.
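To see the difference yourself, a quick sketch (the database and logical log file names here are assumptions):

```sql
-- How full is each database's log? (percentage used, not file size)
DBCC SQLPERF(LOGSPACE);

-- Truncation only marks space reusable; SHRINKFILE actually returns space to the OS
USE YourDb;
DBCC SHRINKFILE (N'YourDb_log', 128);  -- hypothetical logical name, target 128 MB
```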
Please read the link below for an explanation of "Transaction log Truncate vs Shrink":
http://blog.sqlxdetails.com/transaction-log-truncate-why-it-didnt-shrink-my-log/ -
Log Reader Agent: transaction log file scan and failure to construct a replicated command
I encountered the following error message related to Log Reader job generated as part of transactional replication setup on publisher. As a result of this error, none of the transactions propagated from publisher to any of its subscribers.
Error Message
2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
Replication agent job kept trying after specified intervals and kept failing with that message.
Investigation
I could clearly see there were transactions waiting to be delivered to subscribers from the following:
SELECT * FROM dbo.MSrepl_transactions -- 1162
SELECT * FROM dbo.MSrepl_commands -- 821922
The following steps were taken to investigate further. They confirmed that transactions were queued, waiting to be delivered to the distribution database:
-- Returns the commands for transactions marked for replication
EXEC sp_replcmds
-- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
EXEC sp_repltrans
-- Returns the commands for transactions marked for replication in readable format
EXEC sp_replshowcmds
Resolution
Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved via sp_browsereplcmds for the LSN mentioned in the message had any syntactic problems either.
exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I thought that with no subscriptions for the publications, the log reader agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader using sp_addlogreader_agent after deleting the old one did not help, and restarting the server couldn't do much good either.
EXEC sp_addlogreader_agent
@job_login = 'LoginName',
@job_password = 'Password',
@publisher_security_mode = 1;
When nothing else worked for me, I decided to give the following procedures, reserved for troubleshooting replication, a try:
--Updates the record that identifies the last distributed transaction of the server
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1
-- Flushes the article cache
EXEC sp_replflush
Bingo !
The log reader agent managed to start successfully this time. I wish I had used both commands before deciding to drop the subscriptions; it would have saved me considerable effort and time spent re-doing them.
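In hindsight, a safer sequence might have been the sketch below. Note that sp_repldone with @reset = 1 marks everything in the log as distributed, so any replicated transactions not yet delivered to the distributor are skipped:

```sql
-- In the publication database: see what is still pending before resetting;
-- these transactions will NOT be delivered to subscribers after the reset
EXEC sp_repltrans;

-- Mark all transactions as distributed, then flush the article cache
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1;
EXEC sp_replflush;
```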
Question
Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate feedback and your approach to resolving the problem.
Hi Hilary,
Will the command below truncate the log file marked for replication? Is there any data loss when we execute it? Can you please help me understand the internal working of this command?
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1 -
Audit Vault 12.1.1 error creating audit trail with TRANSACTION LOG
Hi,
I installed AV 12.1.1; the target DB uses Data Guard.
When I run the oracle_user_setup script in REDO_COLL mode, the final message says it was successful, but when I go to the AV console and try to create an audit trail with TRANSACTION LOG, the console shows an error and the log shows this:
[2013-10-16T03:37:18.593-05:00] [collfwk] [ERROR] [] [] [tid: 10] [ecid: 192.168.56.8:78800:1381912639433:0,0] RedoCollector : runSourceScript : Error while running script on source for REDO collector.
[2013-10-16T03:37:19.528-05:00] [collfwk] [ERROR] [] [] [tid: 10] [ecid: 192.168.56.8:78800:1381912639433:0,0] OAV-8004: Failed to start collector {0}:{1}CollectionFactory : createCollection : Exception while creating collection. [[
Failed to start collector {0}:{1}
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.runSourceScript(RedoCollector.java:816)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.sourceSetup(RedoCollector.java:579)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.setup(RedoCollector.java:454)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:216)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
at java.lang.Thread.run(Thread.java:679)
Nested Exception:
java.sql.SQLSyntaxErrorException: ORA-01031: insufficient privileges
ORA-06512: at line 1
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:445)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:879)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:450)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:192)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:531)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:207)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1044)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1329)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3584)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3685)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1376)
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at oracle.ucp.jdbc.proxy.StatementProxyFactory.invoke(StatementProxyFactory.java:230)
at oracle.ucp.jdbc.proxy.PreparedStatementProxyFactory.invoke(PreparedStatementProxyFactory.java:124)
at $Proxy2.execute(Unknown Source)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.runSourceScript(RedoCollector.java:747)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.sourceSetup(RedoCollector.java:579)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.setup(RedoCollector.java:454)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollector.startCollector(RedoCollector.java:216)
at oracle.av.platform.agent.collfwk.impl.redo.RedoCollectorManager.startTrail(RedoCollectorManager.java:199)
at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:504)
at oracle.av.platform.agent.collfwk.impl.factory.CollectionFactory.createCollection(CollectionFactory.java:354)
at oracle.av.platform.agent.StartTrailCommandHandler.processMessage(StartTrailCommandHandler.java:63)
at oracle.av.platform.agent.AgentController.processMessage(AgentController.java:325)
at oracle.av.platform.agent.AgentController$MessageListenerThread.run(AgentController.java:1859)
at java.lang.Thread.run(Thread.java:679)
I don't understand why this happens, because the user has the privileges given by the script, and I tried granting them as SYSDBA but without any result.
I don't understand what privileges the collector needs.
Any idea?
Thanks for any help.
Hi,
Just run the script $AV_AGENT/av/plugins/com.oracle.av.plugin.oracle/config/oracle_user_setup.sql USER_NAME REDO_COLL
This will grant the user some privileges and roles, such as DBA and CREATE DATABASE LINK.
I hope this answers your question.
Thanks
Ahmed Moustafa -
What is stored in a transaction log file?
What does the transaction log file store? Is it the blocks of transactions to be executed, a snapshot of records taken before a transaction begins execution, or just the statements found in a transaction block? Please advise.
mayooran99
Yes, it stores the before and after values of the data that was modified. You first have to understand the need for the transaction log; then what is stored in it starts to become apparent.
Before a transaction can be committed, SQL Server makes sure that all the information is hardened in the transaction log, so that if a crash happens it can still recover/restore the data.
When you update some data, the page is fetched into memory and updated there, and the transaction log makes a note of it (before and after values, etc.). At this point the changes are done, but they are not yet physically present in the data page on disk; they exist only in memory.
So if a crash happens before a checkpoint or the lazy writer has flushed those pages, you would lose that data. This is where the transaction log comes in handy, because all this information is stored in the physical log file. When your server comes back up, if the transaction was committed, recovery rolls this information forward from the log.
When a checkpoint or lazy writer runs, under the simple recovery model the log records for that transaction are cleared out, provided there are no older active transactions. Under the full recovery model you take log backups to clear those records from the transaction log.
Writing to the transaction log is generally fast because it is written sequentially; it tracks the page number, LSN, and other details of what was modified. Similar to the data cache, there is also a transaction log cache that speeds this up; before a transaction commits, it waits until everything related to it has been written to the transaction log on disk.
I advise you to pick up Kalen Delaney's SQL Server Internals book and read the logging and recovery chapter for a deeper and better understanding.
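If you want to see these log records for yourself, the undocumented and unsupported fn_dblog function exposes them (a sketch; use it on a test system only):

```sql
-- Peek at the active log of the current database (undocumented; test systems only)
SELECT TOP (10)
       [Current LSN],
       Operation,          -- e.g. LOP_INSERT_ROWS, LOP_MODIFY_ROW
       Context,
       [Transaction ID]
FROM sys.fn_dblog(NULL, NULL)   -- NULL, NULL = the entire available log
ORDER BY [Current LSN];
```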
Hope it Helps!! -
Knowledge on Transaction log ?
Hi All,
I have couple of questions?
Question-1:
I need to know: will running the import/export wizard increase T-log growth? Or will running a simple SELECT statement increase the T-log?
To my limited knowledge, only data modification (INSERT, UPDATE, or DELETE) and data definition language (DDL) statements increase the T-log. How about the import/export wizard or a simple SELECT statement?
Question-2:
Also, what happens in the simple recovery model compared to the full recovery model?
I assume the data is first written to the T-log and, once committed, moves to the mdf. In this scenario what happens under simple and full recovery, and how do they differ from each other? Please help me understand the internal architecture/inner workings of the recovery models.
Best Regards,
Moug
Hi,
Q1) No, SELECT statements don't get logged. Import/export writes to the database, hence the t-log will be used. Any statement other than pure data retrieval will be either fully or minimally logged.
Q2) In the simple recovery model, log records stay in the transaction log until the transaction commits and a checkpoint occurs; after that the space is cleared, which means it can be reused for other transactions. In the full recovery model, the space can only be reused once a log backup is taken.
Check this link about transaction log which should clear all your doubts.
http://msdn.microsoft.com/en-gb/library/ms190925.aspx
You can check the log_reuse_wait_desc column in sys.databases to know why transaction log is not reused.
http://msdn.microsoft.com/en-gb/library/ms178534.aspx
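For example (the database name is hypothetical):

```sql
-- Why can't the log be truncated right now?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDb';   -- e.g. LOG_BACKUP, ACTIVE_TRANSACTION, NOTHING
```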
Listen to this video to know about the internals in deep for transaction log -
http://technet.microsoft.com/en-US/sqlserver/gg313762.aspx
Regards, Ashwin Menon My Blog - http://sqllearnings.com -
Very high transaction log file growth
Hello
Running Exchange 2010 SP2 in a two-node DAG configuration. Recently I have noticed very high transaction log file growth for one database. The transaction logs are growing so quickly that I have had to turn on circular logging to prevent the log LUN from filling up and causing the database to dismount. I have tried several things to find out what is causing this issue. At first I thought it could be a virus, an ActiveSync user, a user's Outlook client, or our Salesforce integration; however, when I used ExMon I could not see any unusually high user activity. When I looked at the item count for all mailboxes in the particular database experiencing the growth, I could not see any mailboxes with an unusually high item count (the command I ran to determine this is below; I ran it several times). I also looked at the message tracking log files and again could see no indication of a message loop or unusually high message traffic for any particular day. I also followed the guide linked below, hoping it would let me see inside the transaction log files, but it didn't produce anything that helped me understand the cause; when I ran the tool against the transaction log files, I just saw long runs of repeated characters such as "DDDD...", "OOOO...", or "HHHH...".
I am starting to run out of ideas on how to figure out what is causing the log file build-up. Any help is greatly appreciated.
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Get-Mailbox -database databasethatkeepsgrowing | Get-MailboxStatistics | Sort-Object ItemCount -descending |Select-Object DisplayName,ItemCount,@{name="MailboxSize";exp={$_.totalitemsize}} -first 10 | Convertto-Html | out-File c:\temp\report.htm
Bulls on Parade
If you have users with iPhones or smartphones using ActiveSync, then one of the quickest ways to see if this is the issue is to have users shut those phones off and see if the problem is resolved. If it is one or more iPhones, then perhaps look at what iOS they are on and get them to update to the latest version, or adjust the ActiveSync connection timeout. NOTE: There was an issue where iPhones caused runaway transaction logs, and I believe it was resolved with iOS 4.0.1.
There was also a problem with the MS CRM client a while back, so if you are using that, check out this link.
http://social.microsoft.com/Forums/en/crm/thread/6fba6c7f-c514-4e4e-8a2d-7e754b647014
I would also deploy some tracking methods to see if you can home in on the culprits. For example, if you want to see whether the problem is coming from an internal device/machine, you can use one of the following:
MS USER MONITOR:
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en and here is a link on how to use it
http://www.msexchange.org/tutorials/Microsoft-Exchange-Server-User-Monitor.html
And this is a great article as well
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Also check out ExMon since you can use it to confirm which mailbox is unusually active , and then take the appropriate action.
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en
Troy Werelius
www.Lucid8.com
Search, Recover, & Extract Mailboxes, Folders, & Email Items from Offline EDB's and Live Exchange Servers with Lucid8's DigiScope -
Database Transaction log suspected pages
We migrated our production databases to a new SQL cluster, and when I ran a query to find suspect-page entries in the msdb database, I found 5 entries in the msdb.dbo.suspect_pages table. These entries are for a production database transaction log file (file_id = 2), page_ids 1, 2, 3, 6, 7, and the event_type was updated to 4 for all pages after I did a DB restore; error_count is 1 for each page_id.
As I understand it, before I did the DB restore there were corrupted transaction log pages, but the restore repaired them, so there is no need for concern now. I have a database consistency check job scheduled to check for corruption on the report server each night; I restore the database on the report server using a copy of the production database backup. Can someone please help me understand what caused the log file pages to get corrupted? Are page_ids 1, 2, 3, 6, 7 so-called boot pages for the log file? What should I do if I find suspect pages for the log file again?
Thanks for your help in advance.
Daizy
Hi Andreas, thanks for your reply.
FYI - you have event_type 1 and 3 for your database, but on my system the event_type was updated to 4 after I did the restore, and the date/time shows exactly when the event_type was updated.
Please help me understand: isn't it usually the database data file that is organized in pages, not the log file?
Thanks
Daizy
Hello Daizy
Yes, the event types 1-3 were the error state before the "repair".
After I did a full backup + restore, I now have type 4 just as you do.
Yes, the log file is organized in so-called "Virtual Log Files" (VLFs), which have nothing in common with the 8-KB data pages of the data files. Therefore a page_id does not make sense there.
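To keep an eye on this in future, you can query the table directly (a sketch; in this table event_type 4 means the page was restored):

```sql
-- Suspect-page history kept by SQL Server (the table is capped at 1000 rows)
SELECT database_id, file_id, page_id,
       event_type,        -- 1-3 = errors, 4 = restored, 5 = repaired, 7 = deallocated
       error_count, last_update_date
FROM msdb.dbo.suspect_pages;
```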
You can read more on the architecture of the Transaction Log here:
SQL Server Transaction Log Architecture and Management
This article by Paul Randal might also be of interest to you for:
Transaction log corruption and backups
Hope that helps.
Andreas Wolter (Blog |
Twitter)
MCSM: Microsoft Certified Solutions Master Data Platform, MCM, MVP
www.SarpedonQualityLab.com |
www.SQL-Server-Master-Class.com -
Transaction logging on waas 4.1.1c no_peer
started doing transaction logging
Where do I find the decode? I don't understand no_peer when both devices are accessible.
no_peer indicates that TCP auto-discovery did not detect another WAE at the other end during the SYN/SYN-ACK/ACK handshake.
Look for asymmetric routes that are not being intercepted, or other back-door routes the traffic might be taking. The same WAE needs to see both sides of the TCP connection. It could be something as simple as a typo in a WCCP redirect-list ACL or a missing WCCP redirect on an interface.
Hope that helps,
Dan -
Hi All,
I am getting the below error while doing bulk update testing. I have been testing continuously for the last 3 days; is this the reason for the issue? What is the solution, and how can we prevent these kinds of issues in future?
Msg 9002, Level 17, State 2, Line 11
The transaction log for database 'DB21_InforEAM' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
Msg 9001, Level 21, State 5, Line 11
The log for database 'DB21_InforEAM' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
Msg 3314, Level 21, State 4, Line 11
During undoing of a logged operation in database 'DB21_InforEAM', an error occurred at log record ID (1168426:253789:252). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a
backup, or repair the database.
Msg 9001, Level 21, State 5, Line 11
The log for database 'DB21_InforEAM' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
Msg 3314, Level 21, State 4, Line 11
During undoing of a logged operation in database 'DB21_InforEAM', an error occurred at log record ID (1168426:253789:252). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a
backup, or repair the database.
Msg 9001, Level 21, State 1, Line 11
The log for database 'DB21_InforEAM' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
Msg 3314, Level 21, State 5, Line 11
During undoing of a logged operation in database 'DB21_InforEAM', an error occurred at log record ID (1168426:252711:1). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup,
or repair the database.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
> Is there any other way to shrink the log?
There are many factors here; however, for simplicity of understanding:
In the FULL recovery model: log space is marked reusable (truncated) by a T-log backup.
In the BULK_LOGGED recovery model: likewise, log space is marked reusable by a T-log backup.
In the SIMPLE recovery model: log space is marked reusable at a checkpoint.
Truncation does not change the file size; to actually shrink the file, you can issue DBCC SHRINKFILE explicitly.
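A sketch of that sequence for this database (the logical log file name and backup path are assumptions):

```sql
-- FULL recovery model: back up the log first so the space becomes reusable...
BACKUP LOG [DB21_InforEAM] TO DISK = N'D:\Backup\DB21_InforEAM_log.trn';

-- ...then shrink the physical file only if you really need the disk space back
USE [DB21_InforEAM];
DBCC SHRINKFILE (N'DB21_InforEAM_log', 1024);  -- hypothetical logical name, target MB
```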
> Is there a way to do a bulk update without the log being captured?
Yes, follow what dave_gone wrote
Good Luck! Please Mark This As Answer if it solved your issue. Please Vote This As Helpful if it helps to solve your issue -
System Crash after transactional log filled filesystem
Dear gurus,
We have an issue in our PRD system on the FlexFrame platform: SAP NW 7.4 (SP03) with ASE 15.7.0.042 (SuSE SLES 11 SP1) running as a BW system.
While uploading data from the ERP system, the transaction log filled up. We can see in <SID>.log:
Can't allocate space for object 'syslogs' in database '<SID>' because 'logsegment' segment is full/has no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE to increase the size of the segment.
After this, we increased the transaction log (disk resize) and then executed ALTER DATABASE <SID> log on <LOGDEVICE> = '<size>'.
While the ALTER was running, the log filesystem filled up (100%); after this, <SID>.log began to grow tremendously.
We stopped Sybase, and now when we try to start it, the whole FF node goes down. The filesystem has free space (around 10 GB).
Could you help us?
Additional note: we think a possible solution could be to delete the transaction log, since we understand the failure is related to this log (maybe it is corrupted?).
Regards
====================
00:0008:00000:00009:2014/06/26 15:49:37.09 server Checkpoint process detected hardware error writing logical page '2854988', device 5, virtual page 6586976 for dbid 4, cache 'log cache'. It will sleep until write completes successfully.
00:0010:00000:00000:2014/06/26 15:49:37.10 kernel sddone: write error on virtual disk 5 block 6586976:
00:0010:00000:00000:2014/06/26 15:49:37.10 kernel sddone: No space left on device
00:0008:00000:00009:2014/06/26 15:49:37.10 server bufwritedes: write error detected - spid=9, ppage=2854988, bvirtpg=(device 5, page 6586976), db id=4
=======================
1 - Check that the filesystem that device #5 (vdevno=5) sits on is not full; make sure the filesystem is large enough to hold the entire defined size of device #5; make sure no other processes are writing to that filesystem.
2 - Have your OS/disk admins make sure the disk fragment(s) underlying device #5's filesystem aren't referenced by other filesystems and/or raw device definitions. -
Sql 2008 Issue restoring transaction logs....
** Update: I performed the same steps on the corresponding dev environment and things worked as expected. Only our prod environment uses SnapManager for SQL (NetApp), and I'm beginning to suspect it may be behind this issue.
I restored a full backup of the prod MyDB from 1/23/2014 in non-operational mode (so trans logs can be applied). I planned to apply trans log dumps from 1/24/2014, 7am (our first of the day) to noon, but applying the 7am trans dump gave this error:
>>>>>
Restore Failed for this Server... the Log in this backup set begins at....which is too recent to apply to the database. An earlier log backup that includes LSN....can be restored.
>>>>>
That message is clear, but I don't understand it in this case, as the full DB dump was taken Thursday night and the tran logs I am trying to restore are all from Friday.
TIA,
edm2
** Update 2 **
I kept checking and now definitely think that the NetApp SnapManager for SQL product (a storage-based, not SQL-based, approach to DR) is the culprit. My view of the world was that a full SQL database backup is performed at 7pm and the SQL translogs are dumped every hour beginning at 7:15am the next day. The extract from the SnapManager log below tells quite a different story: it takes a full database backup at 11pm (!) that night, followed by a translog backup.
No wonder restoring things using SQL utilities doesn't work. BTW, I have no idea where SnapManager's dumps are stored.
>>>>>>>>>>>>>>>>>>>>>>>>
[23:00:32.295] *** SnapManager for SQL Server Report
[23:00:32.296] Backup Time Stamp: 01-24-2014_23.00.32
[23:00:32.298] Getting SQL Server Database Information, please wait...
[23:00:32.299] Getting virtual disks information...
[23:00:37.692] Querying SQL Server instances installed...
[23:01:01.420] Full database backup
[23:01:01.422] Run transaction log backup after full database backup: Yes
[23:01:01.423] Transaction logs will be truncated after backup: Yes
[23:02:39.088] Database [MyDatabase] recovery model is Full.
[23:02:39.088] Transaction log backup for database [MyDatabase] will truncate logs...
[23:02:39.089] Starting to backup transaction log for database [MyDatabase]...
[23:02:39.192] Transaction log backup of database [MyDatabase] completed.
>>>>>>>>>>>>>>>>>>>>>>>>
Unless anyone has further thoughts I think I will close this case and take it up with NetApp.
edm2
Sorry I wasn't clearer. The full database backup was taken on 1/23/2014 at 7pm. The trans logs I was trying to restore were from the next day (starting 1/24/2014 at 7:15am, 8:15am, etc.). I could not find any SQL translog dumps taken after the full backup (at 7pm) until the next morning's trans dumps (which start at 7:15am). Here is what I did:
RESTORE DATABASE [MyDatabase] FROM DISK =
N'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak' WITH FILE = 1,
MOVE N'MyDatabase_data' TO N'C:\MSSQL\Data\MyDatabase.mdf',
MOVE N'MyDatabase_log' TO N'C:\MSSQL\Data\MyDatabase_1.LDF',
NORECOVERY, NOUNLOAD, STATS = 10
GO
RESTORE LOG [MyDatabase] FROM DISK =
N'D:\MyDatabase\MyDatabase_backup_2014_01_24_071501_9715589.trn'
WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10
GO
Msg 4305, Level 16, State 1, Line 1
The log in this backup set begins at LSN 250149000000101500001, which is too recent to apply to the database. An earlier log backup that includes LSN 249926000000024700001 can be restored.
Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.
From Sql Error Log:
2014-01-25 00:00:15.40 spid13s This instance of SQL Server has been using a process ID of 1428 since 1/23/2014 9:31:01 PM (local) 1/24/2014 5:31:01 AM (UTC). This is an informational message only; no user action is required.
2014-01-25 07:31:08.79 spid55 Starting up database 'MyDatabase'.
2014-01-25 07:31:08.81 spid55 The database 'MyDatabase' is marked RESTORING and is in a state that does not allow recovery to be run.
2014-01-25 07:31:14.11 Backup Database was restored: Database: MyDatabase, creation date(time): 2014/01/15(16:41:13), first LSN: 249926:231:37, last LSN: 249926:247:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK:
{'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak'}). Informational message. No user action required.
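When the log chain looks broken like this, it can help to ask msdb which backups it actually knows about (a sketch; backups taken purely at the storage/VSS layer may or may not be recorded here):

```sql
-- Backup history recorded by SQL Server for this database
SELECT TOP (20) database_name,
       type,               -- D = full, I = differential, L = log
       backup_start_date, first_lsn, last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'MyDatabase'
ORDER BY backup_start_date DESC;
```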
Regarding my update note, the SnapManager for SQL product (which I was told simply uses VSS) runs every hour throughout the night. That's why I'm wondering if it could be interfering with the transaction log sequence. -
WebDAV Query generates a high number of transaction log files
Hi all,
I have a program that launch WebDAV queries to search for contacts on an Exchange 2007 server. The number of contacts returned for each user's mailbox is quite high (about 4500).
I've noticed that each time the query is launched, about 15 transaction log files are generated on the Exchange server (each of them 1Mb). If I ask only for 2 properties on the contacts, this number is reduced to about 8.
This is a problem since our program is supposed to run often (about every 3 to 5 minutes), as it will synchronize Exchange mailboxes with a SQL Server DB. The result is that the logs increase very quickly on the server side, even if there are not many updates.
Any idea why so many transaction logs are generated when doing a WebDAV search returning many items? I would understand that logs are created when an update is done on the server, but here it's only a search with many contacts items returned.
Is there maybe a setting on the Exchange server to control what kind of logs to generate?
Thanks for your help,
Alexandre
Hi Alex,
Actually, circular logging/backup was not a solution; I was just explaining that there is an option like that on the server, but it is not recommended, hence not useful in our case :)
- I am not a developer, but AFAIK a WebDAV search query shouldn't generate transaction logs, because it just searches the mailboxes and returns the result in HTTP format without producing any Exchange transaction.
- I wouldn't open the transaction logs, since they are in use by Exchange; doing so may generate errors and may even corrupt the Exchange database. In any case they are not readable, as you observed, by anything other than the Exchange Information Store service (store.exe).
- You can post this query in the development forum to get a better idea, in case another programmer has observed similar symptoms while using a WebDAV contact search query in Exchange 2007, or can validate your query.
Microsoft TechNet > Forums Home > Exchange Server > Development
Well, I just saw that you are using Exchange 2007; in that case, why don't you use Exchange Web Services, which is a better and improved method to access/query mailboxes? WebDAV is de-emphasized in Exchange 2007 and might disappear in the next version of Exchange. Check out the article below for further detail.
Development: Overview
http://technet.microsoft.com/en-us/library/aa997614.aspx
Amit Tank | MVP - Exchange | MCITP:EMA MCSA:M | http://ExchangeShare.WordPress.com -
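The numbers in this thread make the disk pressure easy to quantify. Taking the poster's figures at face value (15 log files of 1 MB per query) and assuming the aggressive end of the stated 3-to-5-minute interval, a quick back-of-the-envelope sketch:

```python
# Rough estimate of daily transaction-log volume generated by the
# polling sync described above. The figures come from the post; the
# 3-minute interval is an assumption at the aggressive end of the
# stated 3-5 minute range.
files_per_query = 15      # log files generated per WebDAV search
file_size_mb = 1          # Exchange transaction log files are 1 MB each
interval_minutes = 3      # assumed sync frequency

queries_per_day = 24 * 60 // interval_minutes
daily_log_mb = queries_per_day * files_per_query * file_size_mb
print(f"{queries_per_day} queries/day -> ~{daily_log_mb} MB of logs/day")
```

Roughly 7 GB of logs per day from searches alone, which is why nightly backups (which truncate the logs) or circular logging become necessary if the polling design stays.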
Rerun failed queries from transaction log backup files
I recently ran into an issue where live production data was failing to insert into my database. When my Windows service inserted data, the query failed and returned an error message indicating that the transaction didn't succeed.
After restarting SQL the insert statements started working again as they had before. However, because of this I missed about 24 hrs worth of important production data. My server is currently configured to make database backups every 24 hours and transaction log backups every hour.
Is it possible to reapply the SQL queries that failed from the transaction log backups? I understand the transaction log backups contain information about what in the database changed, but does it contain the statements that attempted to run but failed?
thanks for any help!
Thanks for all the replies. Here's the error message I'm getting:
An error occurred in the Microsoft .NET Framework while trying to load assembly id 1. The server may be running out of resources, or the assembly may not be trusted with PERMISSION_SET = EXTERNAL_ACCESS or UNSAFE. Run the query again, or check documentation to see how to solve the assembly trust issues. For more information about this error: System.IO.FileNotFoundException: Could not load file or assembly 'microsoft.sqlserver.types, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.
System.IO.FileNotFoundException:
at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, RuntimeAssembly reqAssembly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean forIntrospection)
at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
at System.Reflection.Assembly.Load(String assemblyString)
Basically SQL was having difficulty loading the microsoft.sqlserver.types.dll assembly. This assembly is used for interacting with the Geography data type. I was getting this error message directly from within SSMS, not just in my .NET app. Also, there are no triggers on this table or any custom SQL besides my insert statement.
The consensus I'm getting is that this isn't really possible to do using transaction logs without extreme difficulty. I installed ApexSQL before posting on this forum but I wasn't able to find my insert statements. -
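The consensus above is worth restating: the transaction log records page-level changes from committed work, not the text of statements, so a statement that *failed* left nothing to replay. What the hourly log backups do give you is a roll-forward chain for point-in-time restore (the path SQL Server follows with RESTORE LOG ... WITH STOPAT). A small illustrative sketch, with entirely made-up LSN values, of how that chain is assembled:

```python
# Illustrative only: how a restore chain is selected from a full
# backup plus hourly log backups. All LSN values here are invented.
full_backup_last_lsn = (100, 0, 1)

# (first_lsn, last_lsn) for each hourly log backup, in the order taken.
log_backups = [
    ((100, 0, 1), (120, 5, 2)),
    ((120, 5, 2), (140, 9, 7)),
    ((140, 9, 7), (160, 2, 3)),
]

def restore_chain(full_last: tuple, logs: list, stop_lsn: tuple) -> list:
    """Return the log backups needed to roll forward to stop_lsn."""
    chain, current = [], full_last
    for first, last in logs:
        if first > current:
            # A log backup is missing: the chain is broken and the
            # restore cannot continue past this point.
            raise ValueError("gap in the log chain")
        chain.append((first, last))
        current = last
        if last >= stop_lsn:
            break
    return chain

needed = restore_chain(full_backup_last_lsn, log_backups, (130, 0, 0))
print(len(needed))  # prints 2: the first two log backups reach the target
```

So the recoverable option here is restoring a copy of the database to just before the failure window, not re-running the lost statements.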
Transaction Logs Troubleshooting
Hi,
I have recently deployed a brand new Exchange 2013 scenario for a customer. The Exchange system is working well, but I have found the transaction logs are growing at an extraordinary rate and I'm struggling to troubleshoot the issue.
There are two Exchange servers in this deployment with a WAN connection between the two locations. I have also setup a DAG with a copy of the database at both locations.
The Mail flow is good users are using the system.
This issue only applies to one of the Exchange servers, where the transaction logs are being created at about 1 every 5 seconds. There are only about 12 mailboxes in each location, so I see no reason for so many transactions taking place.
I have tried to locate what's causing the issue and have tried to use ExMon, but this doesn't appear to work with Exchange 2013. The only bit of useful information I have found was to use this:
http://blogs.technet.com/b/exchange/archive/2012/01/31/a-script-to-troubleshoot-issues-with-exchange-activesync.aspx
where it did show there was one particular Android device that had a very high hit count. I have asked the user to disable the device's email for the time being while I investigate further, but the transaction logs are still growing.
From the same results I also see the HealthMailboxes have high hit counts.
The script also has a line with some 55,000 hits, but its user is blank. What is this likely to be?
Is there a tool I can use to help me identify what's causing these logs to grow so fast?
I have installed backup software that runs every night, which helps reduce the logs, and I have also enabled circular logging, which I understand will help prevent running out of space on the drive.
Would really appreciate some guidance.
Thanks
Bill
Do you have any third-party AV software installed on the Exchange mailbox servers? If so, can you disable it for a while or set exclusions for Exchange files?
Check the queue and see if there are any emails stuck in it.
Just run the command below to check whether any large emails with attachments are stuck in any of the users' mailboxes, which might be causing the issue:
get-mailbox -ResultSize Unlimited| Get-MailboxFolderStatistics -folderscope Outbox | Sort-Object Foldersize -Descending | select-object identity,name,foldertype,itemsinfolder,@{Name="FolderSize MB";expression={$_.folderSize.toMB()}} | export-csv OutboxItems.csv
In your case, a third-party AV is the most likely cause. Just disable any third-party AV and check.
Remember to mark as helpful if you find my contribution useful, or as an answer if it does answer your question. That will encourage me, and others, to take time out to help you.
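The Get-MailboxFolderStatistics pipeline above exports an OutboxItems.csv sorted by folder size. A quick sketch of triaging that export, using invented sample rows whose column names match the select-object in the command (the 50 MB threshold is an arbitrary choice for illustration):

```python
import csv
import io

# Hypothetical sample of the OutboxItems.csv produced by the
# Get-MailboxFolderStatistics pipeline above. Column names match the
# select-object in that command; the rows themselves are invented.
sample = """Identity,Name,FolderType,ItemsInFolder,FolderSize MB
user1@contoso.com,Outbox,Outbox,42,310
user2@contoso.com,Outbox,Outbox,1,0
user3@contoso.com,Outbox,Outbox,7,95
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Sort descending by folder size, mirroring the Sort-Object step,
# so mailboxes with large stuck items surface first.
rows.sort(key=lambda r: int(r["FolderSize MB"]), reverse=True)

# Flag mailboxes holding more than 50 MB in the Outbox.
suspects = [r["Identity"] for r in rows if int(r["FolderSize MB"]) > 50]
print(suspects)  # prints ['user1@contoso.com', 'user3@contoso.com']
```

Mailboxes that surface here are the ones worth checking for a stuck message that keeps regenerating transaction log traffic.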