Transaction Logs Troubleshooting

Hi,
I have recently deployed a brand new Exchange 2013 environment for a customer. The Exchange system is working well, but I have found that the transaction logs are growing at an extraordinary rate and I'm struggling to troubleshoot the issue.
There are two Exchange servers in this deployment with a WAN connection between the two locations. I have also set up a DAG with a copy of the database at both locations.
Mail flow is good and users are using the system.
This issue only applies to one of the Exchange servers, where transaction logs are being created at a rate of about one every 5 seconds. There are only about 12 mailboxes in each location, so I see no reason for so many transactions to be taking place.
I have tried to locate what's causing the issue and attempted to use ExMon, but this doesn't appear to work with Exchange 2013. The only bit of useful information I have found was to use this script:
http://blogs.technet.com/b/exchange/archive/2012/01/31/a-script-to-troubleshoot-issues-with-exchange-activesync.aspx
It did show that one particular Android device had a very high hit count. I have asked the user to disable the device's email for the time being while I investigate further, but the transaction logs are still growing.
From the same results I also see that the HealthMailboxes have high hit counts.
The script output also has a line with some 55,000 hits but a blank user. What is this likely to be?
Is there a tool I can use to help me identify what's causing these logs to grow so fast?
I have installed backup software that runs every night, which helps truncate the logs, and I have also enabled circular logging, which I understand will help prevent the log drive from running out of space.
Would really appreciate some guidance.
Thanks
Bill
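As a rough starting point (not an authoritative answer), the generation rate per database can be gauged by counting log files per hour straight from the log folder. The path and the E00 log prefix below are assumptions; check them against Get-MailboxDatabase | Format-List Name, LogFolderPath first:
# Assumed path and log prefix -- confirm with: Get-MailboxDatabase | Format-List Name, LogFolderPath
$logFolder = "D:\ExchangeDatabases\DB01\Logs"
Get-ChildItem -Path $logFolder -Filter "E00*.log" |
    Group-Object { $_.LastWriteTime.ToString("yyyy-MM-dd HH:00") } |
    Sort-Object Name |
    Select-Object Name, Count
A sudden spike in one hourly bucket narrows the window in which a particular client, agent, or backup is hammering the database.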

Do you have any third-party AV software installed on the Exchange mailbox servers? If so, can you disable it for a while or set exclusions for the Exchange files and directories?
Check the queues and see if there are any emails stuck in them.
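For example, a quick look at the transport queues with the standard cmdlet (nothing environment-specific assumed):
# Largest queues first; anything sitting in Retry with a high MessageCount is suspect
Get-Queue | Sort-Object MessageCount -Descending |
    Select-Object Identity, DeliveryType, Status, MessageCount, NextHopDomain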
Run the command below to check whether any large emails with attachments are stuck in any user's Outbox, which might be causing the issue:
Get-Mailbox -ResultSize Unlimited |
    Get-MailboxFolderStatistics -FolderScope Outbox |
    Sort-Object FolderSize -Descending |
    Select-Object Identity, Name, FolderType, ItemsInFolder, @{Name="FolderSize MB"; Expression={$_.FolderSize.ToMB()}} |
    Export-Csv OutboxItems.csv
In your case a third-party AV is the most likely cause. Disable any third-party AV temporarily and check again.

Similar Messages

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

    I encountered the following error message from the Log Reader Agent job created as part of a transactional replication setup on the publisher. As a result of this error, none of the transactions were propagated from the publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    Replication agent job kept trying after specified intervals and kept failing with that message.
    Investigation
    I could clearly see there were transactions waiting to be delivered to subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
    The following steps were taken to investigate the problem further. They confirmed that transactions were queued, waiting to be delivered to the distribution database:
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
    Taking a backup as suggested in the message wouldn't resolve the issue. The commands retrieved from sp_browsereplcmds for the LSN mentioned in the message had no syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
    In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I thought that with no subscriptions for the publications, the Log Reader Agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader using sp_addlogreader_agent after deleting the old one didn't help, and a restart of the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
    When nothing else worked for me, I decided to try the following procedures reserved for troubleshooting replication:
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
    Bingo !
    The Log Reader Agent managed to start successfully this time. I wish I had used both commands before deciding to drop the subscriptions; it would have saved me considerable effort and the time spent re-creating them.
    Question
    Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate your feedback and how you would have approached the problem.

    Hi Hilary,
    Will the command below allow the transaction log records marked for replication to be truncated? Is there any data loss when we execute this command? Can you please help me understand the internal workings of this command?
    EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1
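    Not an authoritative answer, but as I understand it sp_repldone only updates the record of the last distributed transaction (as the comment above it says); it does not delete data from the database itself, though anything it marks as distributed will never be picked up by the log reader again. Before running it, it can help to confirm that replication really is what is holding the log. A minimal sketch, assuming the SqlServer PowerShell module for Invoke-Sqlcmd and with placeholder server/database names:
    Import-Module SqlServer                     # provides Invoke-Sqlcmd
    # Placeholders -- point these at the publisher database
    $inst = "PUBLISHER01"; $db = "MyPublishedDb"
    # REPLICATION here means the log reader is what is preventing log truncation
    Invoke-Sqlcmd -ServerInstance $inst -Database $db `
        -Query "SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = DB_NAME();"
    # DBCC OPENTRAN reports the oldest non-distributed LSN as messages, hence -Verbose
    Invoke-Sqlcmd -ServerInstance $inst -Database $db -Query "DBCC OPENTRAN;" -Verbose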

  • The transaction log for database 'Test_db' is full due to 'LOG_BACKUP'

    My dear All,
    Came up with another issue:
    The app team is pushing data from 'test_1db' on the Prod1 server to 'User_db' on the Prod2 server through a job. While pushing the data, after some time the job fails with the following error:
    'Error: 9002, Severity: 17, State: 2. The transaction log for database 'User_db' is full due to 'LOG_BACKUP'.'
    On the Prod2 server, the drive holding the 'User_db' log has about 400 GB free and the log file growth is set to 250 MB. I am really confused about why the job is failing when there is so much space available. Kindly guide me in troubleshooting this issue, as it has been occurring for more than a week. Please refer to the attached screenshot.
    Environment: SQL Server 2012 with SP1, Enterprise Edition. Log backups run every 15 minutes and there is no high availability between the servers.
    Note: Changing to the simple recovery model might resolve this, but the app team requires the full recovery model because they need log backups.
    Thanks in advance,
    Nagesh
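    Not a definitive diagnosis, but when 9002/LOG_BACKUP shows up it is worth capturing what the engine says is holding the log and how full the log file itself is (free space on the drive does not matter if the file is not being truncated or cannot grow fast enough). A small sketch, with the instance name as a placeholder and Invoke-Sqlcmd from the SqlServer module assumed:
    Invoke-Sqlcmd -ServerInstance "PROD2" -Database "User_db" -Query "
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = DB_NAME();
    SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
           used_log_space_in_bytes  / 1048576 AS used_mb,
           used_log_space_in_percent
    FROM sys.dm_db_log_space_usage;"
    If log_reuse_wait_desc stays at LOG_BACKUP even though the 15-minute log backup job runs, check whether that job is actually succeeding against User_db; if it shows ACTIVE_TRANSACTION instead, the push job itself is holding the log open.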

    Dear V,
    Thanks for the suggestions.
    I have followed some steps to resolve the issue, and as of now my jobs are working without issue.
    Steps:
    Generating a log backup every 5 minutes.
    Increased the file growth from 500 MB to unrestricted.
    Once the whole job has completed, we shrink the log file.
    Nagesh

  • Exchange 2010 SP3, RU5 - Massive Transaction Log File Generation

    Hey All,
    I am trying to figure out why one of our databases is generating 30K log files a day! The other one is generating 20K log files a day. The database does not grow in size as the log files are generated; the problem is purely log file generation.
    I've tried running through some of the various solutions out there and reviewed message tracking logs, RPC Client Access logs, and IIS logs, all of which show important info, but none of which actually provide the answers.
    I stopped the following services to see if that would affect the log file generation in any way, and it has not:
    MS Exchange Transport
    Mail Submission
    IIS (Site Stopped in IIS)
    Mailbox Assistants
    Content Indexing Service
    With the above services stopped, I still see dozens (or more) of log files generated in under 10 minutes. I also checked the mailbox size reports (top 10) and found only modest changes: an item count increase of about 300 for one user and a size increase of about 150 MB for one user (over the whole day).
    I am not sure what else to check here? Any ideas?
    Thanks,
    Robert

    Hmm - this sounds like a device is chewing up the logs.
    If you use Log Parser Studio, are there any stand-out devices in terms of the number of hits?
    And for ExMon, was that logged over a period of time? The default 60-second window normally misses a lot of stuff. Just curious!
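    For what it's worth, per-device hit counts can also be pulled straight from the CAS IIS logs with a quick sketch like this (the log path is an assumption; Log Parser Studio's built-in ActiveSync reports do essentially the same thing with nicer output):
    # Assumed IIS log location -- adjust the W3SVC site ID and path for your CAS servers
    $iisLogs = "C:\inetpub\logs\LogFiles\W3SVC1\*.log"
    Select-String -Path $iisLogs -Pattern "Microsoft-Server-ActiveSync" |
        ForEach-Object { if ($_.Line -match "DeviceId=([^&\s]+)") { $Matches[1] } } |
        Group-Object | Sort-Object Count -Descending |
        Select-Object -First 10 Count, Name        # top 10 DeviceIds by hit count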
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
    Blog:
    http://blogs.technet.com/rmilne 
    Rhoderick,
    Thanks for the response. When checking the logs, the highest number of hits came from the (source) load balancers' port 25 VIP. The problems I was experiencing were the following:
    1) I kept expecting the log file generation to drop to an acceptable rate of 10-20 MB per minute (max). We have a large environment and use the Exchange servers as the mail relays for the (much-hated) Nagios monitoring environment.
    2) We didn't have our enterprise monitoring system watching SMTP traffic; this is being resolved.
    3) I needed to look more closely at the SMTP transport database counters, logs, and log files and focus less on the database log generation; I did some of that, but not enough.
    4) My troubleshooting kept getting thrown off because the monitoring notifications seemed to be sent out in batches (or something similar); stopping the transport service for 10-15 minutes several times seemed to finally stop the transaction logs from growing at a psychotic rate.
    5) I am re-running my data captures now that I have told the Nagios team to quit killing the Exchange servers with their notifications - sometimes 100+ copies of the same notification for the same servers and issues. So far, at a quick glance, the log file generation seems to have dropped by about 30%.
    Question: what would be the best counters to review in order to put it all together? Also note that our server roles are split: MBX and CAS/HT.
    Robert
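    As a hedged starting point for the counter question above: the exact counter names differ between Exchange versions, so it is safer to discover what the mailbox server exposes and then sample the log-related ones. The specific counter path in the last command is an assumption; use whatever the listing returns on your build:
    # Discover the ESE / transport counter sets and their log-related counters
    Get-Counter -ListSet "*MSExchange Database*", "*MSExchangeTransport*" |
        Select-Object -ExpandProperty Counter |
        Where-Object { $_ -like "*Log*" }
    # Then sample the interesting ones, e.g. log bytes written per database instance
    Get-Counter -Counter "\MSExchange Database ==> Instances(*)\Log Bytes Write/sec" `
        -SampleInterval 5 -MaxSamples 12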

  • Transaction log usage grows due to replication even if I don't use replication at all

    Hi
    The transaction log usage has been growing a lot on my user database for the past few days. The database is in the full recovery model and I take transaction log backups every 10 minutes. The DB was part of Database Mirroring, but I removed it. The usage was kept under control for many years by the backups, but something has happened that is messing up the transaction log.
    this is DBCC OPENTRAN
    Transaction information for database 'MyDB'.
    Replicated Transaction Information:
            Oldest distributed LSN     : (0:0:0)
            Oldest non-distributed LSN : (1450911:6823:1)
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    log_reuse_wait_desc reports REPLICATION
    the funny thing is that I am not using replication at all. I am using CDC.
    To reduce the transaction log usage I have run the statement below every day since the problem started:
    EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, 
        @numtrans = 0, @time = 0, @reset = 1
    Any idea what I should do to solve this problem and get back to a normal situation?
    BTW, The server is SQL 2012 (11.0.2383)
    Thanks
    Javier Villegas |
    @javier_vill | http://sql-javier-villegas.blogspot.com/

    CDC uses the replication log reader agent, and if you manually ran sp_repldone like that, you lost information in your CDC capture. If the capture job can't keep up with the workload, or is not running for CDC, you would have the exact problems you describe.
    If you execute sp_repldone like that, you might as well disable CDC.
    http://technet.microsoft.com/en-us/library/dd266396(v=sql.100).aspx
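    If CDC is staying enabled, a quick health check of the capture side might look like the sketch below (instance and database names are placeholders; Invoke-Sqlcmd from the SqlServer module assumed):
    # Is the CDC capture job defined/scheduled, and is the log scan actually making progress?
    Invoke-Sqlcmd -ServerInstance "SQL01" -Database "MyDB" -Query "
    EXEC sys.sp_cdc_help_jobs;
    SELECT TOP (5) * FROM sys.dm_cdc_log_scan_sessions ORDER BY start_time DESC;"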
    Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008

  • Content Engine transaction logs -- monitoring and analysis

    At our remote sites there's a local Cisco CE511 to ease our WAN bandwidth. I have been tasked with finding a method to gather CE usage for trending and troubleshooting.
    From my search on the internet I decided to go with the Webalizer application. I set up the CEs to export their transaction logs every hour to my FTP server. After a test of Webalizer on a log file, it produced a nice HTML report for that hour.
    I would like to discuss taking this to the next level with anyone who is interested. I would like Webalizer to run as a cron job, but the log file names change every hour, so that's a hurdle I need to figure out. Keeping track of user web hits is also important; I would like to make sure my reports are accurate in reporting which IP address is the top talker.
    I hope this will start a productive exchange of ideas. Thanks.
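    On the changing file names: if the FTP drop folder happens to live on a Windows host, a scheduled task along these lines could always feed Webalizer whichever hourly log arrived last. The folder, the webalizer.exe location, and the output directory are all assumptions here:
    # Assumed paths -- adjust the log drop folder, webalizer.exe location, and report folder
    $latest = Get-ChildItem "D:\CE-Logs\*.log" | Sort-Object LastWriteTime | Select-Object -Last 1
    if ($latest) {
        & "C:\Tools\webalizer\webalizer.exe" -o "D:\CE-Reports" $latest.FullName
    }
    On a Unix host the same idea works as a cron job that globs for the newest file before invoking webalizer.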

    Simple Network Management Protocol (SNMP) is an interoperable standards-based protocol that allows for external monitoring of the Content Engine through an SNMP agent.
    An SNMP-managed network consists of three primary components: managed devices, agents, and management systems. A managed device is a network node that contains an SNMP agent and resides on a managed network. Managed devices collect and store management information and use SNMP to make this information available to management systems that use SNMP. Managed devices include routers, access servers, switches, bridges, hubs, computer hosts, and printers.
    An SNMP agent is a software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP. The SNMP agent gathers data from the Management Information Base (MIB), which is the repository for information about device parameters and network data. The agent can also send traps, or notification of certain events, to the manager.
    http://www.cisco.com/en/US/products/sw/conntsw/ps491/products_configuration_guide_chapter09186a0080236630.html#wp1101506

  • Exchange 2007 transaction log issue

    Hi All,
    One of my clients is having an issue with their Exchange 2007 server, which runs multiple Exchange databases for different sections of their business.
    One of the four databases (the smallest one) is having dropout issues: the Exchange server has dismounted the database each night this week when the Backup Exec 2010 job running against it fails with code E000032D, which according to Symantec literature means a bad transaction log.
    Upon looking at the files in the log folder, I noticed a number of logs that have been duplicated and have a .delete file type.
    Can anyone give me any info on how to fix this?
    I have tried running an NTBackup in the hope that a simpler backup program might be more forgiving of the damaged logs.
    When this didn't work (it failed to back up or stay mounted last night), I enabled circular logging to purge the logs today, followed by another NTBackup of the Exchange store in question.
    The logs folder is still showing the .delete versions of the logs, and I'm guessing we are not going to get a successful backup tonight.
    The issue arose at the end of last week when a disk failed due to overheating when the server rooms AC went out for the weekend.
    Thanks in advance for your help,
    Joseph Atie

    Hi Joseph,
    From your description, I recommend you follow the steps below for troubleshooting (a quick sanity check of the database and logs follows after the steps):
    1. First, move the transaction log files to a safe directory in a different location.
    2. Mount the store and start the backup.
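    Before moving anything, it may also be worth checking what Exchange itself thinks of the database and its log stream; a minimal sketch, where the paths and the E00 log prefix are assumptions for this store:
    # Database header: look for "State: Clean Shutdown" vs "State: Dirty Shutdown"
    eseutil /mh "D:\ExchangeData\SalesDB\SalesDB.edb"
    # Verify the log file sequence itself (E00 is the assumed log prefix for this store)
    eseutil /ml "D:\ExchangeData\SalesDB\Logs\E00"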
    What's more, circular logging is not recommended. Here is a blog for your reference.
    Exchange Circular Logging and VSS Backups
    http://blogs.technet.com/b/exchange/archive/2010/08/18/3410672.aspx
    Hope it helps.
    If there are any problems, please feel free to let me know.
    Best regards,
    Amy
    Amy Wang
    TechNet Community Support

  • SAP xMII Log/ SQL transaction Log

    We have an application where we are inserting data from a PLC to the SQL Server using the xMII.
    The SQL Server frequently gives a 'transaction log full' error.
    This causes errors for any data insertion or deletion in SQL.
    In turn, xMII logs errors, and the log file for a single day (Lighthammer\Logs\cms.log) has been seen as large as 15 GB.
    We are not able to open such a huge file.
    Moreover, we are unable to tell whether xMII has logged any errors other than the SQL-related ones.
    How can we see the log created by xMII? This will help us in troubleshooting.

    Hi All,
    Thanks for the help.
    Ryan,
    I have resolved the issue of transaction log by archiving the Log at increased frequency.
    Jeremy,
    We are required to have the PLC data in SQL as the Customer wants to have that and they do not have any other database, SCADA  etc.
    The SQL Server logs (consolidates) not only the PLC data but data from some other devices as well.
    Moreover, this data is then used for confirming production to SAP and maintaining the status of the confirmation. This also involves queuing the data in SQL if it is not confirmed to SAP for any reason.
    We are not logging the data every minute; the data is logged on events such as batch completion, shift completion, etc.
    Hope this clarifies; let us know your comments.

  • DPM doesn't clear transaction logs

    We use DPM 2012 to back up Exchange 2013. It works, as shown in the screenshot.
    However, the Exchange Admin Center shows no full backup.
    Also, we have a lot of old transaction logs. How do we get DPM to clear the transaction logs?
    Bob Lin, MCSE & CNE. Networking, Internet, Routing, VPN troubleshooting on http://www.ChicagoTech.net. How to Install and Configure Windows, VMware, Virtualization and Cisco on http://www.HowToNetworking.com

    Hi,
    Check the application event log for events from Exchange after a DPM synchronization is complete. Make sure DPM is configured to perform FULL backups for one copy of the DBs in the DAG and not just copy-only backups.
    DPM is not responsible for truncating Exchange logs. The Exchange Writer tells the Information Store that the backup has completed; the Information Store then uses its own logic to decide which logs can be truncated. Basically, the IS retrieves from the passive copies information about the oldest log not yet replayed to the database and looks at the Checkpoint at Log Generation value in that log's header. It will allow logs older than the Checkpoint at Log Generation to be truncated. Approximately 200 logs should remain.
     See Tim’s excellent blog post on this subject:
    http://blogs.technet.com/b/timmcmic/archive/2012/03/12/exchange-2010-log-truncation-and-checkpoint-at-log-creation-in-a-database-availability-group.aspx
    http://blogs.technet.com/b/timmcmic/archive/2011/09/26/exchange-server-2010-and-system-center-data-protection-manager-2010.aspx#3455825
    From: http://technet.microsoft.com/en-us/library/dd876874.aspx (Exchange 2013)
    http://technet.microsoft.com/en-us/library/dd876874(v=exchg.141).aspx (Exchange 2010)
    Specifically, the Microsoft Exchange Replication Service manages CRCL so that log continuity is maintained and logs are not deleted if they are still needed for replication. The Microsoft Exchange
    Replication Service and the Microsoft Exchange Information Store service communicate by using remote procedure calls (RPCs) regarding which log files can be deleted.
    For truncation to occur on highly available (non-lagged) mailbox database copies, the answer must be "Yes" to the following questions:
    * Has the log file been backed up, or is CRCL enabled?
    * Is the log file below the checkpoint?
    * Do the other non-lagged copies of the database agree with deletion?
    * Has the log file been inspected by all lagged copies of the database?
    For truncation to occur on lagged database copies, the answer must be "Yes" to the following questions:
    * Is the log file below the checkpoint?
    * Is the log file older than ReplayLagTime + TruncationLagTime?
    * Is the log file deleted on the active copy of the database?
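    One quick way to confirm whether the Information Store has registered any of the DPM backups at all is the standard status check below; nothing beyond the Exchange Management Shell is assumed:
    Get-MailboxDatabase -Status |
        Format-List Name, LastFullBackup, LastIncrementalBackup, LastCopyBackup, SnapshotLastFullBackup
    If LastFullBackup stays empty after a DPM recovery point completes, the job is most likely running copy-only (or against a copy the store will not truncate from), which would match the empty full-backup field in the EAC.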
    Regards, Mike J. [MSFT]

  • The system failed to flush data to the transaction log. Corruption may occur.

    We have a Windows Server 2008 R2 virtual machine and we are getting the following warning event.
    Event 51 Volmgr
    The system failed to flush data to the transaction log.  Corruption may occur.
    Any idea what is wrong with this server? Why is this event occurring?

    Hi Jitender KT,
    Before going further, would you please let me know the complete error message that you see (a screenshot would help if you can provide one)? Please also check in Event Viewer whether there are other related events, such as Event 57. Meanwhile, can you remember what operations were performed before the warning occurred?
    Based on the current information you provided, please run the chkdsk command to check whether it finds errors. The issue seems to be related to the storage device. Please refer to the following similar question:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/044b10af-c253-46de-b40d-ce9d128b83d7/event-id-57-source-volmgr?forum=winservergen
    In addition, please also refer to the following link. It should be helpful.
    http://www.eventid.net/display-eventid-57-source-volmgr-eventno-8865-phase-1.htm
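    A small sketch for pulling the related volume-manager events and running the disk check; the volume letter is an assumption, and on the system drive chkdsk /f will ask to run at the next reboot:
    # Recent volmgr warnings/errors (Event 51, 57, etc.) from the System log
    Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'volmgr' } -MaxEvents 50 |
        Select-Object TimeCreated, Id, LevelDisplayName, Message
    # Check the suspect volume (assumed C: here)
    chkdsk C: /f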
    Hope this helps.
    Best regards,
    Justin Gu

  • Cannot write to transaction log "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.log

    Hi friends,
    My Server Intelligence Agent (SIA) cannot start because the database service "SQLAnywhereForBI" also fails to start. I got the following error:
    "I. 08/09 20:35:06. A read failed with error code: (1392), The file or directory is corrupted and unreadable.
    E. 08/09 20:35:06. Fatal error:  cannot write to transaction log "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.log"
    E. 08/09 20:35:06. unable to start database "C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_CMS.db"
    E. 08/09 20:35:06. Error writing to transaction log file
    I. 08/09 20:35:06. Database server shutdown due to startup error "
    inside the database log file.
    Please, can you help me

    I found the solution by following the advice given on the following forum:
    http://evtechnologies.com/transaction-logs-on-sybase-sql-anywhere-and-sap-businessobjects-bi-4-1
    In fact, I overwrote the BI4_Audit.db and BI4_Audit.log files, replacing them with copies taken from another machine where I had installed BO again and where the files are not corrupted. Then I connected to the CMS database by executing this command at the command line:
    dbisql -c "UID=DBA;PWD=mypassword;Server=BI4;DBF=C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_CMS.db"
    Once connected, I ran the command:
    alter database 'C:\Program Files (x86)\SAP BusinessObjects\sqlanywhere\database\BI4_Audit.db' alter log off;
    The query ran successfully.
    And now I can connect to BO smoothly again.
    Thank you again Eric

  • Error in db6conv failed due to transaction log full

    Hi,
    I have a huge problem with my production system.
    I was executing db6conv v4.08 to convert a table to a new tablespace and it stopped due to a full transaction log.
    Now I have this situation:
    table soffcont1
    db6conv: status: preliminary
    I checked the job db6conv_job_soffcont1 and its status is 'scheduled'.
    The problem is that when I try to execute this job it gives me an error:
    Definition of job db6conv_job_soffcont1 is incomplete. Operation is not possible.
    regards,
    filipe vasconcelos

    Hi Filipe,
    I will follow up on this problem in your OSS message.
    Regards, Frank

  • How does one read the Unity transaction logs on Exchange?

    I'm trying to find out who placed a call to whom on a particular day and time. A subscriber periodically receives a blank email with no "from" statement, just a timestamp. They don't have unified messaging. I want to see if the transaction logs (located on Exchange) contain that information, but they are all cryptic symbols. Is there an application or utility to read them? These are the files located in the \Exchsrvr\MDBDATA folder. This is Unity 5.x and Exchange 2003. Thanks.

    Hi,
    I think you may be wanting to look at Message Tracking?
    http://www.msexchange.org/tutorials/Exchange-2003-Message-Tracking-Logging.html
    But here's also an explanation of the transaction logging and how to read them from the same site:
    http://www.msexchange.org/articles/Transaction-Logs-Lifeblood-Exchange.html
    Hope that helps,
    Brad

  • Transaction log and access log

    The transaction log (TransactionLogFilePrefix) and the access log are stored relative to the directory where the server is started rather than relative to the directory where the server resides, as the rest of the log files are. Why is this?
    Eg.
    I start the server with a batch file contained in
    projects\bat
    My server is in
    projects\server\config\myDomain
    When I start the server the access and transaction logs end up in
    projects\bat
    while all the rest of the log files (such as the domain and server log) end
    up in
    projects\server
    My batch file that starts the server looks like this
    "%JAVA_HOME%\bin\java" -hotspot -ms64m -mx64m -classpath %CLASSPATH%
    "-Dbea.home=e:\bea"
    "-Djava.security.policy==i:\projects\server\config\myDomain\weblogic.policy"
    "-Dweblogic.Domain=myDomain" "-Dweblogic.Name=adminServer"
    "-Dweblogic.RootDirectory=i:/projects/server"
    "-Dweblogic.management.password=weblogic" weblogic.Server
    Thanks for help on this,
    Myles

    The same is happening to me. I sent an email to Apple support but got no reply.
    The Apple status page indicates that everything is fine now, what a joke.
    Many devs are in this situation too; I guess we can do nothing but wait for their system to come back up.

  • Transaction log shipping restore with standby failed: log file corrupted

    The transaction log restore failed and I get the error below. This happens for only one of the four log-shipped databases on this SQL Server; the remaining databases are working fine.
    Date: 9/10/2014 6:09:27 AM
    Log: Job History (LSRestore_DATA_TPSSYS)
    Step ID: 1
    Server: DATADR
    Job Name: LSRestore_DATA_TPSSYS
    Step Name: Log shipping restore log job step.
    Duration: 00:00:03
    Sql Severity: 0
    Sql Message ID: 0
    Operator Emailed:
    Operator Net sent:
    Operator Paged:
    Retries Attempted: 0
    Message
    2014-09-10 06:09:30.37  *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred while processing the log for database 'TPSSYS'. 
    If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
    Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Deleting old log backup files. Primary Database: 'TPSSYS'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.38  ----- END OF TRANSACTION LOG RESTORE    
    Exit Status: 1 (Error)

    I have restored the database to a new server and set up new log shipping, but it gives the same error again. If it were a network issue, I believe it would occur on every log-shipped database on that server.
    error :
    Message
    2014-09-12 10:50:03.18    *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-12 10:50:03.18    *** Error: An error occurred while processing the log for database 'EAPDAT'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Can this happen due to database or log file corruption? If so, how can I check to verify the issue?
    It is not necessarily the case that a network issue would happen every day; IMO it typically happens when the load on the network is high and the log file being transferred is large.
    As per the message, the database engine was not able to restore the log backup and said that you must rebuild the log because it did not find the log to be consistent. From here it looks like log corruption.
    Is it the same log file you restored? If so, since the log file was corrupt, it would of course give an error on whatever server you restore it to.
    Can you try creating log shipping on a new server by taking a fresh full backup and log backup and see if you get the issue there as well? I would also suggest raising a case with Microsoft and letting them determine the root cause of this problem.
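    To separate a damaged file from a network problem, the copied .trn can be checked directly on the secondary; a sketch using the path from the error text above, with Invoke-Sqlcmd from the SqlServer module assumed (-Verbose so VERIFYONLY's message output is shown):
    $trn = "\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn"
    Invoke-Sqlcmd -ServerInstance "DATADR" -Query "
    RESTORE HEADERONLY FROM DISK = N'$trn';
    RESTORE VERIFYONLY FROM DISK = N'$trn';" -Verbose
    If VERIFYONLY fails against this copy but succeeds against the same backup on the primary's share, the file was damaged in transit; if it fails in both places, the log backup itself is bad and the chain needs to be reinitialized from a fresh full backup, as suggested above.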
