DPM doesn't clear transaction logs

We use DPM 2012 to back up Exchange 2013. It works, as shown in the screenshot.
However, the Exchange Admin Center shows no full backup.
Also, we have a lot of old transaction logs. How do we get DPM to clear the transaction logs?

Hi,
Check the application event log for events from Exchange after a DPM synchronization completes. Make sure DPM is configured to perform FULL backups for one copy of the databases in the DAG, not just copy-only backups.
DPM is not responsible for truncating Exchange logs. The Exchange Writer tells the Information Store that the backup has completed; the Information Store then uses its own logic to decide which logs can be truncated. Essentially, the Information Store retrieves from the passive copies the oldest log not yet replayed to the database and reads the Checkpoint at Log Generation value from that log's header. It then allows logs older than that Checkpoint at Log Generation to be truncated. Approximately 200 logs should remain.
See Tim's excellent blog posts on this subject:
http://blogs.technet.com/b/timmcmic/archive/2012/03/12/exchange-2010-log-truncation-and-checkpoint-at-log-creation-in-a-database-availability-group.aspx
http://blogs.technet.com/b/timmcmic/archive/2011/09/26/exchange-server-2010-and-system-center-data-protection-manager-2010.aspx#3455825
From http://technet.microsoft.com/en-us/library/dd876874.aspx (Exchange 2013) and http://technet.microsoft.com/en-us/library/dd876874(v=exchg.141).aspx (Exchange 2010):
Specifically, the Microsoft Exchange Replication Service manages continuous replication circular logging (CRCL) so that log continuity is maintained and logs are not deleted if they are still needed for replication. The Microsoft Exchange Replication Service and the Microsoft Exchange Information Store service communicate by using remote procedure calls (RPCs) regarding which log files can be deleted.
For truncation to occur on highly available (non-lagged) mailbox database copies, the answer must be "Yes" to the following questions:
* Has the log file been backed up, or is CRCL enabled?
* Is the log file below the checkpoint?
* Do the other non-lagged copies of the database agree with deletion?
* Has the log file been inspected by all lagged copies of the database?
For truncation to occur on lagged database copies, the answer must be "Yes" to the following questions:
* Is the log file below the checkpoint?
* Is the log file older than ReplayLagTime + TruncationLagTime?
* Is the log file deleted on the active copy of the database?
Regards, Mike J. [MSFT]

Similar Messages

  • Clearing transaction log after ehpi

    Hello everybody,
    I want to clear the transaction log on my MSSQL Server 2005 database, because after the EHP4 upgrade it reached nearly 96 GB.
    I tried this:
    a. Detach the database
    b. Rename the log file
    c. Attach the database without the log file
    d. Delete the log file
    But the database would not attach; it said the default user was not able to log in, although before this I could log in easily with the same user.
    Now, after a restart, it still fails to connect to the database with the same user error, and because of this I am not able to start the SAP system either.
    Help needed.

    First, you need to change the default database of your login; most likely the database you detached was the default for your login.
    Please follow this article for instructions: http://support.microsoft.com/kb/307864
    SAP Note: [4064|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=806925&nlang=E&smpsrv=https://websmp204.sap-ag.de]
    You don't delete the transaction log file if you want to clear the transaction log. Instead, use this SQL command in Management Studio:
    DBCC SHRINKFILE('logfilename', 1024), where 1024 is the desired size in MB to shrink the file to.
    If you have already deleted the transaction log file from the OS, you will probably have to restore the database.
    Thanks
    Mush
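    If the goal is to reclaim the space rather than delete files, the safer sequence is to find the log file's logical name, back up the log, and then shrink it. A minimal sketch, assuming a hypothetical database named SID whose log file has the logical name SIDLOG1 and a hypothetical backup path:
    USE SID;
    GO
    -- Find the logical name and current size of each database file.
    SELECT name, type_desc, size * 8 / 1024 AS size_mb
    FROM sys.database_files;
    GO
    -- Under the FULL recovery model, back up the log first so the
    -- inactive portion becomes reusable before shrinking.
    BACKUP LOG SID TO DISK = N'D:\Backup\SID_log.trn';
    GO
    -- Shrink the physical log file down to roughly 1 GB.
    DBCC SHRINKFILE('SIDLOG1', 1024);
    GO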

  • Why is the transaction log file not truncated though its simple recovery model?

    My database uses the simple recovery model, and when I view the free space in the log file it shows 99%. Why doesn't my log file automatically truncate the committed data to free space in the ldf file? When I shrink it, it does shrink. Please advise.
    mayooran99

    If log records were never deleted (truncated) from the transaction log, it wouldn't show as 99% free. In the simple recovery model, log truncation automatically frees space in the logical log for reuse by the transaction log, and that is what you are seeing. Truncation won't change the file size; it is more like log clearing, marking parts of the log free for reuse.
    As you said, "When I shrink it does shrink," so I don't see any issue here. Log truncation and shrinking the file are two different things.
    Please read the link below for an explanation of "Transaction log Truncate vs Shrink":
    http://blog.sqlxdetails.com/transaction-log-truncate-why-it-didnt-shrink-my-log/
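    To see the difference yourself, compare the physical file size with how much of it is in use; a small sketch, where MyDb is a placeholder database name:
    -- Physical size of the log file: truncation does not change this.
    SELECT name, size * 8 / 1024 AS physical_size_mb
    FROM MyDb.sys.database_files
    WHERE type_desc = 'LOG';
    -- Percentage of the log in use: this drops when the log is
    -- truncated, e.g. after a checkpoint under simple recovery.
    DBCC SQLPERF(LOGSPACE);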

  • Sql 2008 Issue restoring transaction logs....

    ** Update: I performed the same steps on the corresponding dev environment and things worked as expected. Only our prod environment uses SnapManager for SQL (NetApp), and I'm beginning to suspect that may be behind this issue.
    Restored a full backup of the prod MyDB from 1/23/2014 in non-operational mode (so trans logs can be added). Planned to apply trans log dumps from 1/24/2014, 7am (our first of the day) to noon. But applying the 7am trans dump gave this error:
    >>>>>
    Restore Failed for this Server... the Log in this backup set begins at....which is too recent to apply to the database. An earlier log backup that includes LSN....can be restored.
    >>>>>
    That message is clear but I don't understand it in this case as the full DB dump was taken Thursday night and the tran logs I am trying to restore are all from Friday.
    TIA,
    edm2

    ** Update 2 **
    I kept checking and now definitely think that the NetApp SnapManager for SQL product (which is a storage-based, not SQL-based, approach to DR) is the culprit. My view of the world was that a full SQL database backup is performed at 7pm and the SQL translogs are dumped every hour beginning at 7:15am the next day. This extract from the SnapManager log tells quite a different story: it takes a full database backup at 11pm (!) that night, followed by a translog backup.
    No wonder restoring things using SQL utilities doesn't work. BTW: I have no idea where SnapManager's dumps are stored.
    >>>>>>>>>>>>>>>>>>>>>>>>
    [23:00:32.295]  *** SnapManager for SQL Server Report
    [23:00:32.296]  Backup Time Stamp: 01-24-2014_23.00.32
    [23:00:32.298]  Getting SQL Server Database Information, please wait...
    [23:00:32.299]  Getting virtual disks information...
    [23:00:37.692]  Querying SQL Server instances installed...
    [23:01:01.420]  Full database backup
    [23:01:01.422]  Run transaction log backup after full database backup: Yes
    [23:01:01.423]  Transaction logs will be truncated after backup: Yes
    [23:02:39.088]  Database [MyDatabase] recovery model is Full.
    [23:02:39.088]  Transaction log backup for database [MyDatabase] will truncate logs...
    [23:02:39.089]  Starting to backup transaction log for database [MyDatabase]...
    [23:02:39.192]  Transaction log backup of database [MyDatabase] completed.
    >>>>>>>>>>>>>>>>>>>>>>>>
    Unless anyone has further thoughts I think I will close this case and take it up with NetApp.
    edm2
    Sorry I wasn't clearer. The full database backup was taken on 1/23/2014 at 7pm. The trans logs I was trying to restore were from the next day (starting 1/24/2014 at 7:15am, 8:15am, etc.). I could not find any SQL translog dumps taken after the full backup (at 7pm) until the next morning's trans dumps (which start at 7:15am). Here is what I did:
    RESTORE DATABASE [MyDatabase] FROM  DISK =
     N'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak' WITH  FILE = 1,
     MOVE N'MyDatabase_data' TO N'C:\MSSQL\Data\MyDatabase.mdf', 
     MOVE N'MyDatabase_log' TO N'C:\MSSQL\Data\MyDatabase_1.LDF', 
     NORECOVERY,  NOUNLOAD,  STATS = 10
    GO
    RESTORE LOG [MyDatabase] FROM  DISK =
    N'D:\MyDatabase\MyDatabase_backup_2014_01_24_071501_9715589.trn'
    WITH  FILE = 1,  NORECOVERY,  NOUNLOAD,  STATS = 10
    GO
    Msg 4305, Level 16, State 1, Line 1
    The log in this backup set begins at LSN 250149000000101500001, which is too recent to apply to the database. An earlier log backup that includes LSN 249926000000024700001 can be restored.
    Msg 3013, Level 16, State 1, Line 1
    RESTORE LOG is terminating abnormally.
    From Sql Error Log:
    2014-01-25 00:00:15.40 spid13s     This instance of SQL Server has been using a process ID of 1428 since 1/23/2014 9:31:01 PM (local) 1/24/2014 5:31:01 AM (UTC). This is an informational message only; no user action is required.
    2014-01-25 07:31:08.79 spid55      Starting up database 'MyDatabase'.
    2014-01-25 07:31:08.81 spid55      The database 'MyDatabase' is marked RESTORING and is in a state that does not allow recovery to be run.
    2014-01-25 07:31:14.11 Backup      Database was restored: Database: MyDatabase, creation date(time): 2014/01/15(16:41:13), first LSN: 249926:231:37, last LSN: 249926:247:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK:
    {'D:\MyDatabase\FULL_(local)_MyDatabase_20140123_190400.bak'}). Informational message. No user action required.
    Regarding my update note, the SnapManager for SQL product (which I was told simply uses VSS) runs every hour throughout the night. That's why I am wondering if it could be interfering with the transaction log sequence.
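    When a third-party VSS tool is taking backups you don't know about, the server's own backup history usually reveals them and explains the broken LSN chain. A diagnostic sketch, not from the original thread (database name as used in the post):
    -- Every backup SQL Server has recorded for this database,
    -- including ones taken by VSS/third-party tools, with the
    -- LSN range each covers. Gaps or surprises here explain error 4305.
    SELECT bs.backup_start_date,
           bs.type,        -- D = full, I = differential, L = log
           bs.first_lsn,
           bs.last_lsn,
           bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bs.media_set_id = bmf.media_set_id
    WHERE bs.database_name = 'MyDatabase'
    ORDER BY bs.backup_start_date;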

  • The transaction log for database 'BizTalkMsgBoxDb' is full.

    Hi All,
    We are getting the following error continuously in the event viewer of our UAT servers. I checked the jobs, and all the backup jobs were failing on the step that backs up the transaction log file, giving the same error. Our DBAs cleaned the message box manually and backed up the DB, but after some time the jobs start failing again and this error is logged in the event viewer:
    "The transaction log for database 'BizTalkMsgBoxDb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases."
    Thanks,
    Abdul Rafay
    http://abdulrafaysbiztalk.wordpress.com/

    Putting the database into simple recovery mode and shrinking the log file isn't going to help: it'll just grow again, it will probably fragment across the disk thereby impacting performance and, eventually, it will fill up again for the same reason
    as before.  Plus you put yourself in a very vulnerable position for disaster recovery if you change the recovery mode of the database: and that's before we've addressed the distributed transaction aspect of the BizTalkDatabases.
    First, make sure you're backing up the log file using the BizTalk job Backup BizTalk Server (BizTalkMgmtDb). It might be that the log hasn't been backed up and is full of transactions; eventually, it will run out of space. Configuration instructions at this link:
    http://msdn.microsoft.com/en-us/library/aa546765(v=bts.70).aspx  Your DBA needs to get the backup job running properly rather than panicking!
    If this is running properly, and backing up (which was the case for me) and the log file is still full, run the following query:
    SELECT Name, log_reuse_wait_desc
    FROM sys.databases
    This will tell you why the log file isn't properly clearing down and why it cannot use the space inside.  When I had this issue, it was due to an active transaction.
    I checked for open transactions on the server using this query:
    SELECT
        s_tst.[session_id],
        s_es.[login_name] AS [Login Name],
        DB_NAME(s_tdt.database_id) AS [Database],
        s_tdt.[database_transaction_begin_time] AS [Begin Time],
        s_tdt.[database_transaction_log_record_count] AS [Log Records],
        s_tdt.[database_transaction_log_bytes_used] AS [Log Bytes],
        s_tdt.[database_transaction_log_bytes_reserved] AS [Log Rsvd],
        s_est.[text] AS [Last T-SQL Text],
        s_eqp.[query_plan] AS [Last Plan]
    FROM sys.dm_tran_database_transactions s_tdt
    JOIN sys.dm_tran_session_transactions s_tst
        ON s_tst.[transaction_id] = s_tdt.[transaction_id]
    JOIN sys.[dm_exec_sessions] s_es
        ON s_es.[session_id] = s_tst.[session_id]
    JOIN sys.dm_exec_connections s_ec
        ON s_ec.[session_id] = s_tst.[session_id]
    LEFT OUTER JOIN sys.dm_exec_requests s_er
        ON s_er.[session_id] = s_tst.[session_id]
    CROSS APPLY sys.dm_exec_sql_text(s_ec.[most_recent_sql_handle]) AS s_est
    OUTER APPLY sys.dm_exec_query_plan(s_er.[plan_handle]) AS s_eqp
    ORDER BY [Begin Time] ASC;
    GO
    And this told me the SPID of the process with an open transaction on BizTalkMsgBoxDb (in my case, one that had been open for several days). I killed the transaction using KILL spid, where spid is an integer. Then I ran the BizTalk Database Backup job again, and the log file backed up and cleared properly.
    Incidentally, just putting the database into simple recovery mode would have emptied the log file, giving it lots of space to fill up again; but it doesn't deal with the root cause of why the backups were failing in the first place.
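    As an aside (not from the original reply), a quicker first check for the same symptom is DBCC OPENTRAN, which reports the oldest active transaction in a database; that is often enough to find the offending SPID before running the full DMV query above.
    -- Report the oldest active transaction in the BizTalk message box
    -- database: its SPID and start time. An entry that is days old is
    -- the usual reason the log cannot be cleared.
    DBCC OPENTRAN('BizTalkMsgBoxDb');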

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

    I encountered the following error message related to the Log Reader job generated as part of a transactional replication setup on the publisher. As a result of this error, none of the transactions were propagated from the publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    Replication agent job kept trying after specified intervals and kept failing with that message.
    Investigation
    I could clearly see there were transactions waiting to be delivered to the subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
    The following steps were taken to investigate the problem further. They confirmed that transactions were queued up, waiting to be delivered to the distribution database:
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
    Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved from sp_browsereplcmds with the LSN mentioned in the message had any syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
    In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I thought that with no subscriptions to the publications, the Log Reader Agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader with sp_addlogreader_agent after deleting the old one did not help, and restarting the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
    When nothing else worked for me, I decided to try the following procedures, which are reserved for troubleshooting replication:
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
    Bingo!
    The Log Reader Agent managed to start successfully this time. I wish I had used both commands before deciding to drop the subscriptions; it would have saved me the considerable effort and time spent re-creating them.
    Question
    Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution. I would appreciate your feedback and your approach to resolving this problem.

    Hi Hilary,
    Will the command below truncate the log records marked for replication? Is there any data loss when executing it? Can you please help me understand the internal workings of this command?
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

  • How do I view the transaction log in SQL Server 2008?

    Hello,
    I want to know how to view all the transactions taken during a particular period of time. I know there is a log file, ending with .ldf, created for each database. But how do I view this file?
    Is there any tool in the SQL Server studio that can enable me to view the transactions for a given time period?
    The reason for me wanting to view the log file is that, last week during a power outage, certain amount of data was not written. And one my friend had also messed up some of the data (unfortunately, she doesn't remember what she did).
    Thanks in advance.

    Hi,
    fn_dblog enables you to read from your transaction log, which contains very valuable information about what is happening in your database:
    SELECT * FROM fn_dblog(NULL, NULL)
    EXAMPLE:
    SELECT *
    FROM fn_dblog(NULL, NULL)
    WHERE [Operation] = 'LOP_DELETE_SPLIT'
    Thanks,
    Leks
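    Since the question was about a particular period of time, here is a sketch of how one might list when each transaction began and under which login, and then eyeball the window of interest. Column names are as exposed by fn_dblog; the begin time is only carried on LOP_BEGIN_XACT rows:
    -- One row per transaction start, with its begin time and login.
    SELECT [Current LSN], [Transaction ID], [Transaction Name],
           [Begin Time], SUSER_SNAME([Transaction SID]) AS login_name
    FROM fn_dblog(NULL, NULL)
    WHERE [Operation] = 'LOP_BEGIN_XACT'
    ORDER BY [Begin Time];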

  • Performance problem with transaction log

    We are having a performance problem in an SAP BW 3.5 system running on MS SQL Server 2000. The database is sized at 63,574 MB. The transaction log fills up after loading data into a transactional cube or after doing a selective deletion. The size of the transaction log is currently 7,587 MB.
    The Basis team believes that when performing either a load or a selective deletion, SQL Server views it as a single transaction and doesn't commit until every record is written; as a result, the transaction log fills up, ultimately bringing the system down.
    The system log shows a DBIF error during the transaction log fill up as follows:
    Database error 9002 at COM
    > [9002] the log file for database 'BWP' is full. Back up the
    > Transaction log for the database to free up some log space.
    Function COMMIT on connection R/3 failed
    Perform rollback
    Can we change the database so that commits happen more frequently? Are there any parameters we could change to reduce the packet size? Is there some setting to be changed in SQL Server?
    Any Help will be appreciated.

    If you have disk space available, you can allocate more space to the transaction log.
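    A sketch of what that looks like in T-SQL, with hypothetical names (BWP matches the database in the error message; BWPLOG1 stands in for the log file's logical name):
    -- Grow the transaction log so a large load or selective deletion
    -- fits before the log fills (SQL Server 2000 era syntax).
    ALTER DATABASE BWP
    MODIFY FILE (NAME = BWPLOG1, SIZE = 20480MB);
    More frequent transaction log backups during the load window would also keep the log from filling, without changing the application's commit behavior.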

  • What is stored in a transaction log file?

    What does the transaction log file store? Is it the blocks of transactions to be executed, is it a snapshot of the records before the execution of a transaction begins, or is it just the statements found in a transaction block? Please advise.
    mayooran99

    Yes, it stores all the before and after values of what was modified. You first have to understand the need for the transaction log; then it starts to become apparent what is stored in it.
    Before a transaction can be committed, SQL Server makes sure that all the information is hardened in the transaction log, so that if a crash happens, it can still recover or restore the data.
    When you update some data, the data is fetched into memory and updated there, and the transaction log makes a note of it (before and after values, etc.). At that point the changes are not yet physically present in the data page on disk; they exist only in memory. So if a crash happens before a checkpoint or the lazy writer flushes the page, you would lose that data. This is where the transaction log comes in handy, because all this information is stored in the physical transaction log file: when your server comes back up, the transaction log rolls this information forward for committed transactions.
    When a checkpoint or lazy writer runs, in simple recovery the transaction log for that transaction is cleared out, provided there are no older active transactions. In full recovery, you take log backups to clear those transactions from the transaction log.
    Writing to the transaction log is generally fast because it is written sequentially; it tracks the page numbers, LSNs, and other details of what was modified. Similar to the data cache, there is also a transaction log cache that makes this process faster. Before being committed, every transaction waits until everything related to it has been written to the transaction log on disk.
    I advise you to pick up Kalen Delaney's SQL Server internals book and read the logging and recovery chapter for a deeper understanding.
    Hope it helps!

  • Knowledge on Transaction log ?

    Hi All,
    I have couple of questions?
    Question-1:
    I need to know whether running the import/export wizard increases T-log growth, or whether running a simple SELECT statement increases the T-log.
    To my knowledge, only data modification (insert, update, or delete) and data definition language (DDL) statements increase the T-log; what about the import/export wizard or a simple SELECT statement?
    Question-2:
    Also, what happens inside the simple recovery model compared to the full recovery model?
    I assume the data is first written to the T-log and, once committed, moves to the mdf. In this scenario, what happens in simple and full recovery, and how do they differ from each other? Please help me understand the internal architecture and inner operations of the recovery models.
    Best Regards,
    Moug
    Best Regards Moug

    Hi,
    Q1) No, SELECT statements don't get logged. Import/export writes to the database, hence the t-log will be used. Any statement other than DRL (Data Retrieval Language) will be either fully or minimally logged.
    Q2) In the simple recovery model, the data is in the transaction log until it commits. Once it is committed, it is written to the mdf and then the log space is cleared, which means it can be reused by other transactions. In the full recovery model, the space can only be reused once a log backup is taken.
    Check this link about the transaction log; it should clear up your doubts:
    http://msdn.microsoft.com/en-gb/library/ms190925.aspx
    You can check the log_reuse_wait_desc column in sys.databases to know why transaction log is not reused.
    http://msdn.microsoft.com/en-gb/library/ms178534.aspx
    Listen to this video to know about the internals in deep for transaction log -
    http://technet.microsoft.com/en-US/sqlserver/gg313762.aspx
    Regards, Ashwin Menon My Blog - http:\\sqllearnings.com

  • SQL Server Database - Transaction logs growing largely with Simple Recovery model

    Hello,
    There is a SQL Server database on the client side, in a production environment, with huge transaction logs.
    Requirement:
    1. Take database backups.
    2. Transaction log backups are not required, so it is set to the simple recovery model.
    I am aware that the simple recovery model grows the transaction log just as the full recovery model does, as explained at the link below:
    http://realsqlguy.com/origins-no-simple-mode-doesnt-disable-the-transaction-log/
    Last week, this transaction log reached 1 TB in size and blocked everything on the database server.
    How can we overcome this situation?
    PS: There are huge bulk uploads to the database tables.
    Current Configuration :
    1. Simple Recovery model
    2. Target Recovery time : 3 Sec
    3. Recovery interval : 0
    4. No SQL Agent job schedule to shrink database.
    5. No other checkpoints created except automatic ones.
    Can anyone please guide me toward the correct configuration of SQL Server for the client's production environment?
    Please let me know if any other details required from server.
    Thank you,
    Mittal.

    @dave_gona,
    Thank you for your response.
    Can you please explain this to me in more detail?
    What do you mean by one batch?
    1. The number of rows to be inserted at a time?
    2. Or does the size of the data in one cell matter here?
    In my case, I am clubbing together all the data in one XML (on the C# side) and inserting it as one record. The data is large in size, but only one record is inserted.
    Is it a good idea to shrink the transaction log periodically, as it does not happen by itself in the simple recovery model?
    Hi Mittal,
    Shrinking is a bad activity; you should not shrink log files regularly. In rare cases where you need to recover space, you may do it.
    Issue manual checkpoints in the bulk insert operation.
    I cannot tell you upfront what the batch size should be, but you can start with 1/4 of what you are currently inserting.
    Most important, what does the query below return for the database?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    The value it returns is what is stopping the log from being cleared and reused.
    Which version and edition of SQL Server are we talking about? What is the output of
    select @@version
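    A sketch of the batching idea under the simple recovery model, using hypothetical staging and target table names; each batch commits on its own, and the checkpoint lets the log space be reused before the next batch:
    -- Move rows in batches of 50,000 so each batch's log records
    -- can be truncated at the following checkpoint.
    DECLARE @batch INT = 50000;
    WHILE 1 = 1
    BEGIN
        DELETE TOP (@batch) FROM dbo.StagingTable
        OUTPUT deleted.col1, deleted.col2
        INTO dbo.TargetTable (col1, col2);
        IF @@ROWCOUNT = 0 BREAK;  -- staging table drained
        CHECKPOINT;               -- simple recovery reuses the log here
    END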

  • Exchange 2010 DAG backup & Transaction logs

    Hi, 
    What is Microsoft recommended best practise for Exchange DAG group backup in an environment where there are Active & multiple (2-3) passive copies of the databases?
    Is it a good practice to back up transaction logs as frequently as possible, in addition to a daily full backup? I believe this would allow restoring the DB to the latest possible state using the last good full backup and the transaction logs (like restoring SQL databases).
    Thanks

    Hi,
    Windows Server Backup can't back up passive copies. If you want to back up both active and passive copies, you need to use DPM or another third-party product.
    Here is a similar thread for your reference.
    Exchange 2010 DAG Backup Best Practices
    http://social.technet.microsoft.com/Forums/exchange/en-US/269c195f-f7d7-488c-bb2e-98b98c7e8325/exchange-2010-dag-backup-best-practices
    Besides, here is a related blog below which may help you.
    Backup issues and limitations with Exchange 2010 and DAG
    http://blogs.technet.com/b/ehlro/archive/2010/02/13/backup-issues-and-limitations-with-exchange-2010-and-dag.aspx
    Hope this helps.
    Best regards,
    Belinda
    Belinda Ma
    TechNet Community Support

  • Big transaction log file

    Hi,
    I found a SQL Server database with a 65 GB transaction log file.
    The database is configured with the recovery model option = FULL.
    Also, I noticed that since the database was created, only database backups have been taken;
    no transaction log backups were ever executed.
    Now, the 65 GB transaction log file uses more than 70% of the disk space.
    Which scenario do you recommend?
    1- Back up the database, back up the transaction log to a new disk, shrink the transaction log file, and schedule a transaction log backup each hour.
    2- Back up the database, set the recovery model option = simple, shrink the transaction log file, and back up the database.
    Would the 65 GB file shrink operation have an impact on my database users?
    The sql server version is 2008 sp2 (10.0.4000)
    regards
    D

    I've read the other posts and I'm at the position of: it really doesn't matter.
    You've never needed point-in-time restore ability up to this date and time since inception. Since a full database backup contains all of the log needed to bring the database into a consistent state, doing a full backup and then a log backup is redundant and just takes up space.
    For the fastest option I would personally do the following:
    1. Take a full database backup
    2. Set the database recovery model to Simple
    3. Manually issue two checkpoints for good measure or check to make sure the current VLF(active) is near the beginning of the log file
    4. Shrink the log using the truncate option to lop off the end of the log
    5. Manually re-size the log based on usage needed
    6. Set the recovery model to full
    7. Take a differential database backup to bridge the log gap
    The total time this takes is really just the full database backup and the expansion of the log file. The shrink should be close to instantaneous, since you're just truncating the end, and the differential backup should be fairly quick as well. If you don't need the full recovery model, leave the database in simple, reset the log size (through multiple grows if needed), and take a new full backup for safekeeping.
    Sean Gallardy | Blog |
    Twitter
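    A sketch of those seven steps in T-SQL, with hypothetical names and sizes (MyDb, a log file with logical name MyDb_log, and an 8 GB target); adjust paths and the target size to actual usage:
    -- 1. Full database backup.
    BACKUP DATABASE MyDb TO DISK = N'E:\Backup\MyDb_full.bak';
    -- 2. Simple recovery, so the log clears at checkpoints.
    ALTER DATABASE MyDb SET RECOVERY SIMPLE;
    -- 3. Checkpoints to move the active VLF toward the file's start.
    USE MyDb;
    CHECKPOINT;
    CHECKPOINT;
    -- 4. Shrink by truncating the unused end of the log...
    DBCC SHRINKFILE('MyDb_log', TRUNCATEONLY);
    -- 5. ...then re-size it to what the workload actually needs.
    ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 8192MB);
    -- 6. Back to full recovery.
    ALTER DATABASE MyDb SET RECOVERY FULL;
    -- 7. Differential backup to bridge the log gap.
    BACKUP DATABASE MyDb TO DISK = N'E:\Backup\MyDb_diff.bak' WITH DIFFERENTIAL;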

  • Log Reader Agent is not able to read Transaction Log of Publisher database.

    Hi,
    No restore, recovery model change, or detach-attach action has been performed on my production database, but I am still seeing the error messages below from the Log Reader Agent:
    Error messages:
    The process could not execute 'sp_repldone/sp_replcounters' on 'ProdInstance'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011)
    Get help:
    An error occurred while processing the log for database 'MyDatabase'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log. (Source: MSSQLServer, Error number: 9004)
    The process could not set the last distributed transaction. (Source: MSSQL_REPL, Error number: MSSQL_REPL22017)
    Get help: The process could not execute 'sp_repldone/sp_replcounters' on 'ProdInstance'. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037)
    Note: CheckDB on the production and distribution databases executed successfully. Also, I need the subscriber to be a true copy of the publisher, so I think sp_replrestart is not an option for me.
    My question is how to resolve this issue. I am thinking that reinitialization should resolve the problem, but what if it does not? Do I need to reconfigure transactional replication? Please suggest.

    Hi,
    Please check out this link on how to resolve “The process could not execute 'sp_repldone/sp_replcounters'” error.
    http://blogs.msdn.com/b/repltalk/archive/2010/02/19/the-process-could-not-execute-sp-repldone-sp-replcounters.aspx
    The possible cause could be:
    1. The last LSN in the transaction log is less than the LSN the Log Reader is trying to find. An old backup may have been restored on top of the published database; after the restore, the new transaction log doesn't contain the data that the distributor and subscriber(s) now have.
    2. Database corruption.
    Since you have not restored the published database, I suggest you run DBCC CHECKDB to confirm the consistency of the database. Refer to the "How to fix" section in the link above.
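    For reference, a minimal form of that consistency check on the two databases involved (the published database name follows the post; 'distribution' assumes the default distribution database name):
    DBCC CHECKDB('MyDatabase') WITH NO_INFOMSGS;   -- published database
    DBCC CHECKDB('distribution') WITH NO_INFOMSGS; -- distribution database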
    Thanks.
    Tracy Cai
    TechNet Community Support

  • JTA Transaction log circular collision

    Greetings:
              Just thought I'd share some knowledge concerning a recent JTA-related
              issue within WebLogic Server 6.1.2.0:
              On our Production cluster, we recently ran into the following critical
              level problem:
              <Jan 10, 2003 6:00:14 PM EST> <Critical> <JTA> <Transaction log
              circular collision, file number 176>
              After numerous discussions with BEA Support, it appears to be a (rare)
              race condition within the tlog file. It was also noted by BEA during
              their testing of WebLogic 7.0.
              Some additional research lead to an MBean attribute under *WebLogic
              Server 7.0* entitled, "CheckpointIntervalSeconds". The documentation
              states:
              ~~~~
              Interval at which the transaction manager creates a new transaction
              log file and checks all old transaction log files to see if they are
              ready to be deleted. Default is 300 seconds (5 minutes); minimum is 10
              seconds; maximum is 1800 seconds (30 minutes).
              Default value = 300
              Minimum = 10
              Maximum = 1800
              Configurable = Yes
              Dynamic = Yes
              MBean class = weblogic.management.configuration.JTAMBean
              MBean attribute = CheckpointIntervalSeconds
              ~~~~
              After searching for a equivalent setting under WebLogic Server
              6.1.2.0, nothing was found and a custom (unsupported) patch was
              created to change this hardcoded setting under 6.1:
              from
              ... CHECKPOINT_THRESHOLD_MILLIS = 5 * 60 * 1000;
              to
              ... CHECKPOINT_THRESHOLD_MILLIS = 10 * 60 * 1000;
              within com.bea.weblogic.transaction.internal.ServerTransactionManagerImpl.
              If you'd like additional details, feel free to contact me via e-mail
              <[email protected]> or by phone +1.404.327.7238. Hope this
              helps!
              Brian J. Mitchell
              BEA Systems Administrator
              TRX
              6 West Druid Hills Drive
              Atlanta, GA 30329 USA
              

    Hi 783703,
    As Sridhar suggested, to fix your problem you have to set the transaction timeout in j2ee/home/config/transaction-manager.xml.
    If you set Idempotent to false for your partner links, BPEL PM will store the status up to that invoke (proof that the invoke was executed). So it is better to increase the timeout rather than change idempotency, as that has side effects.
    And coming to dehydration: ideally, performance is better when a process has few dehydration points, but for some scenarios it is better to have dehydration (e.g., so we can know the status of the process). The dehydration store does not get cleared after completion of the process; dehydration means these details are stored in tables (like cube_instance, cube_scope, etc.).
    Regards,
    PavanKumar.M
