Very high transaction log file growth

Hello
Running Exchange 2010 SP2 in a two-node DAG configuration. Just recently I have noticed very high transaction log file growth for one database. The transaction logs are growing so quickly that I have had to turn on circular logging in order to prevent the log LUN from filling up and causing the database to dismount. I have tried several things to find out what is causing this issue. At first I thought it could be a virus, an ActiveSync user, a user's Outlook client, or our Salesforce integration; however, when I used ExMon I could not see any unusually high user activity. When I looked at the item count for all mailboxes in the database that is experiencing the high transaction log growth, I could not see any mailboxes with an unusually high item count either; the command I ran to determine this is below, and I ran it several times. I also looked at the message tracking log files and again could see no indication of a message loop or unusually high message traffic for any particular day. I also followed the guide linked below, hoping it would allow me to see inside the transaction log files, but it didn't produce anything that helped me understand the cause of this issue. When I ran the tool against the transaction log files, I just saw long runs of repeated characters such as DDDDDDDD..., OOOOOOOO..., or HHHHHHHH....
I am starting to run out of ideas on how to figure out what is causing the log file build-up. Any help is greatly appreciated.
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Get-Mailbox -Database databasethatkeepsgrowing | Get-MailboxStatistics | Sort-Object ItemCount -Descending | Select-Object DisplayName, ItemCount, @{Name="MailboxSize"; Expression={$_.TotalItemSize}} -First 10 | ConvertTo-Html | Out-File C:\temp\report.htm
Bulls on Parade

If you have users with iPhones or other smartphones using ActiveSync, then one of the quickest ways to see if this is the issue is to have those users shut their phones off and see if the problem is resolved.  If it is one or more iPhones, then look at what iOS version they are on and get them to update to the latest version, or adjust the ActiveSync connection timeout.  NOTE: There was an issue where iPhones caused runaway transaction logs, and I believe it was resolved in iOS 4.0.1.
There was also a problem with the MS CRM client a while back, so if you are using that, check out this link:
http://social.microsoft.com/Forums/en/crm/thread/6fba6c7f-c514-4e4e-8a2d-7e754b647014
I would also deploy some tracking methods to see if you can home in on the culprit. For example, if you want to see whether the problem is coming from an internal device/machine, you can use one of the following:
MS USER MONITOR:
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en and here is a link on how to use it
http://www.msexchange.org/tutorials/Microsoft-Exchange-Server-User-Monitor.html
And this is a great article as well
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Also check out ExMon, since you can use it to confirm which mailbox is unusually active and then take the appropriate action.
 http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en
Troy Werelius
www.Lucid8.com
Search, Recover, & Extract Mailboxes, Folders, & Email Items from Offline EDB's and Live Exchange Servers with Lucid8's DigiScope

Similar Messages

  • WebDAV Query generates a high number of transaction log files

    Hi all,
    I have a program that launch WebDAV queries to search for contacts on an Exchange 2007 server. The number of contacts returned for each user's mailbox is quite high (about 4500).
    I've noticed that each time the query is launched, about 15 transaction log files are generated on the Exchange server (each of them 1 MB). If I ask for only 2 properties on the contacts, this number is reduced to about 8.
    This is a problem since our program is supposed to run often (about every 3-5 minutes), as it will synchronize Exchange mailboxes with a SQL Server DB. The result is that the logs grow very quickly on the server side, even if there are not many updates.
    Any idea why so many transaction logs are generated when doing a WebDAV search returning many items? I would understand that logs are created when an update is done on the server, but here it's only a search with many contacts items returned.
    Is there maybe a setting on the Exchange server to control what kind of logs to generate?
    Thanks for your help,
    Alexandre

    Hi Alex,
    Actually circular logging/backup was not a solution; I was just explaining that there is an option like that on the server, but it is not recommended and hence not useful in our case :)
    - I am not a developer, but AFAIK a WebDAV search query shouldn't generate transaction logs, because it just searches the mailboxes and returns the result over HTTP; it doesn't produce any Exchange transaction.
    - I wouldn't open the transaction logs while they are in use by Exchange, as that may generate errors and can even corrupt the Exchange database. In any case, as you observed, they are not readable by anything other than the Exchange Information Store service (store.exe).
    - You could post this query in the development forum to get a better idea, in case another programmer has observed a similar symptom while using a WebDAV contact search query against Exchange 2007, or can validate your query.
    Microsoft TechNet > Forums Home > Exchange Server > Development
    Well, I just saw that you are using Exchange 2007; in that case, why don't you use Exchange Web Services, which is the better and improved method to access/query mailboxes? WebDAV is de-emphasized in Exchange 2007 and may disappear in the next version of Exchange. Check out the article below for further detail.
    Development: Overview
    http://technet.microsoft.com/en-us/library/aa997614.aspx
    Amit Tank | MVP - Exchange | MCITP:EMA MCSA:M | http://ExchangeShare.WordPress.com

  • Transactional log file issue

    Dear All,
    There have been issues in the past where the transaction log file grew so big that it filled the drive to its limit. I would like to know the answers to the following, please:
    1. To resolve the space issue, is the correct way to first take a backup of the transactional log then shrink the transactional log file?
    2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
    3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
    4. How often should the update stats job run, please?
    Thank you in advance!

    Hi,
    My answers might be very similar to what others have already said, but I hope this adds something more.
    1. To resolve the space issue, is the correct way to first take a backup of the transaction log and then shrink the transaction log file?
    --> If the database recovery model is Full or Bulk-logged, then a transaction log backup is what allows the log to be truncated. If one backup doesn't help, increase the frequency of log backups, and refer to:
    Factors That Can Delay Log Truncation
    2. What would be the recommended auto-growth size, for example if I have a DB which is 1060 GB?
    --> Auto-growth for a very large database is crucial: if it is too large, each growth creates very large VLFs; if it is too small, it causes log fragmentation (too many VLFs). In your case your priority is to control space utilization, so I suggest you keep the auto-growth modest and specify it as a fixed size, not a percentage.
    /******* Number of VLFs created per log auto-growth **********/
    Growth of less than 64 MB          = 4 VLFs
    Growth of 64 MB and less than 1 GB = 8 VLFs
    Growth of 1 GB and larger          = 16 VLFs
    3. At the moment, the transaction log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
    ---> If the query below returns LOG_BACKUP for the respective database, then yes, you can increase the log backup frequency. If it returns some other factor, please check the link mentioned above.
    "select name as [database], log_reuse_wait, log_reuse_wait_desc from sys.databases"
    4. How often should the update stats job run?
    ---> This totally depends on the amount of DML activity you are performing. You can leave auto-update statistics enabled and run a weekly update statistics job with FULLSCAN.
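    To illustrate point 1, here is a minimal T-SQL sketch; the database name, logical log file name, backup path, and target size below are placeholders you would replace with your own values:
    -- Back up the transaction log so the inactive portion can be truncated
    BACKUP LOG [MyDatabase]
        TO DISK = N'X:\Backups\MyDatabase_log.trn';
    -- Then shrink the physical log file back to a sensible size (target size in MB)
    DBCC SHRINKFILE (N'MyDatabase_log', 8192);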
    Thanks Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem

  • Cancel the query which uses full transaction log file

    Hi,
    We have a reindexing job that runs every Sunday. During the last run, the transaction log filled up and subsequent transactions to the database errored out with 'Transaction log is full'. I want to restrict the utilization of the log file, that is, when the reindexing job pushes log file utilization past a certain threshold, the job should automatically be cancelled. Is there any way to do this?

    Hello,
    Instead of putting a limit on the transaction log, it would be better to find out what is causing the high utilization. Even if you find that your log is growing because of some transaction, it would be a blunder to roll it back automatically; that is relatively easy to do for an index rebuild, but if you cancel some delete operation you could end up in a mess. Please don't create a program to delete or kill a running operation.
    You can create a custom job/alert for transaction log file growth instead; that would be the better approach.
    From SQL Server 2008 onwards an index rebuild is fully logged, so it sometimes causes transaction log issues. To work around this, run the index rebuild only for specific or selective tables.
    The other widely accepted option is Ola Hallengren's index maintenance script. I suggest you try this:
    http://ola.hallengren.com/
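    As a rough illustration of the "custom job/alert" idea, here is a minimal T-SQL sketch that checks how full a database's log is and raises an error that a SQL Agent alert or job step could react to; the database name and the 80% threshold are assumptions to adapt:
    -- Capture log space usage for all databases (DBCC SQLPERF(LOGSPACE) works on older versions too)
    CREATE TABLE #logspace (
        DatabaseName    sysname,
        LogSizeMB       float,
        LogSpaceUsedPct float,
        Status          int
    );
    INSERT INTO #logspace
    EXEC ('DBCC SQLPERF(LOGSPACE)');
    -- Raise an error (which an alert can respond to) if the log is over 80% full
    IF EXISTS (SELECT 1 FROM #logspace
               WHERE DatabaseName = N'MyDatabase' AND LogSpaceUsedPct > 80.0)
        RAISERROR (N'Transaction log for MyDatabase is more than 80 percent full.', 16, 1) WITH LOG;
    DROP TABLE #logspace;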
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • How to design SQL server data file and log file growth

    How to design SQL DB data file and log file growth - SQL Server 2012.
    If my data file is 10 GB in size and my log file is 5 GB,
    what should the autogrowth size be in MB (not in %)? Based on what should we determine the ideal file auto-growth size?

    It's very difficult to give a definitive answer on this. Best practice is to size your database correctly in advance so that you never have to autogrow, though of course in reality that isn't always practical.
    The setting you use is really dictated by the expected growth in your files. Given that the size is relatively small, why not set it to 1 GB on the data file(s) and 512 MB on the log file? The important thing is to monitor it on an ongoing basis to see if that's the appropriate amount.
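    If it helps, here is a minimal T-SQL sketch of setting those fixed growth increments; the database and logical file names are placeholders you would replace with your own:
    -- Use fixed-size auto-growth increments rather than percentages
    ALTER DATABASE [MyDB]
        MODIFY FILE (NAME = N'MyDB_data', FILEGROWTH = 1GB);
    ALTER DATABASE [MyDB]
        MODIFY FILE (NAME = N'MyDB_log', FILEGROWTH = 512MB);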
    One thing you should do is enable instant file initialization by granting the service account the Perform Volume Maintenance Tasks right in group policy. This will allow the data files to grow quickly when required; details here:
    https://technet.microsoft.com/en-us/library/ms175935%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
    Also, it is possible to query the default trace to find autogrowth events; if you wanted, you could write an alert/SQL job based on this:
    SELECT
        [DatabaseName],
        [FileName],
        [SPID],
        [Duration],
        [StartTime],
        [EndTime],
        CASE [EventClass]
            WHEN 92 THEN 'Data'
            WHEN 93 THEN 'Log'
        END AS [FileType]
    FROM sys.fn_trace_gettable('c:\path\to\trace.trc', DEFAULT)
    WHERE [EventClass] IN (92, 93)
    hope that helps

  • What is stored in a transaction log file?

    What does the transaction log file store? Is it the blocks of transactions to be executed, is it a snapshot of the records taken before the transaction begins executing, or is it just the statements found in the transaction block? Please advise.
    mayooran99

    Yes, it will store the before and after values of everything that was modified. You first have to understand the need for the transaction log; then it starts to become apparent what is stored in it.
    Before a transaction can be committed, SQL Server makes sure that all of its information is hardened in the transaction log, so if a crash happens it can still recover/restore the data.
    When you update some data, the data is fetched into memory and updated there, and the transaction log makes a note of it (before and after values, etc.). At that point the changes are not yet physically present in the data page on disk; they exist only in memory. If a crash happened before a checkpoint or the lazy writer flushed those pages, you would otherwise lose that data; this is where the transaction log comes in handy, because all of this information is stored in the physical log file. When your server comes back up, if the transaction was committed, recovery rolls this information forward.
    When a checkpoint or lazy writer runs, in simple recovery the log records for that transaction are cleared out, provided there are no other, older active transactions.
    In full recovery you take log backups to clear those transactions from the transaction log.
    Writing to the transaction log is generally fast because it is written sequentially; it records the data page number, LSN, and other details of what was modified. Similar to the data cache, there is also a log cache that makes this process faster. Before a transaction is committed, SQL Server waits until everything related to it has been written to the transaction log on disk.
    I advise you to pick up Kalen Delaney's SQL Server Internals book and read the logging and recovery chapter for a better understanding.
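    If you want to see the kind of records the log actually holds, here is a small sketch using fn_dblog (undocumented, so use it only on a test system); the table, column, and key value are just example names:
    -- Make a change inside a transaction and look at the resulting log records
    BEGIN TRANSACTION;
    UPDATE dbo.SomeTable
    SET    SomeColumn = SomeColumn + 1
    WHERE  Id = 1;
    SELECT TOP (20)
           [Current LSN], Operation, Context, AllocUnitName, [Transaction ID]
    FROM   sys.fn_dblog(NULL, NULL)
    ORDER BY [Current LSN] DESC;   -- LOP_MODIFY_ROW rows show the logged change
    ROLLBACK TRANSACTION;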
    Hope it Helps!!

  • Big transaction log file

    Hi,
    I found a sql server database with a transaction log file of 65 GB.
    The database is configured with the recovery model option = full.
    Also, I noticed that since the database was created, they have only taken full database backups.
    No transaction log backups were ever executed.
    Now, the 65 GB transaction log file uses more than 70% of the disk space.
    Which scenario do you recommend?
    1- Backup the database, backup the transaction log to a new disk, shrink the transaction log file, schedule transaction log backup each hour.
    2- Backup the database, put the recovery model option= simple, shrink the transaction log file, Backup the database.
    Does the " 65 GB file shrink" operation would have impact on my database users ?
    The sql server version is 2008 sp2 (10.0.4000)
    regards
    D

    I've read the other posts and I'm at the position of: It really doesn't matter.
    You've not needed point in time restore abilities inclusive of this date and time since inception. Since a full database backup contains all of the log needed to bring the database into a consistent state, doing a full backup and then log backup is redundant
    and just taking up space.
    For the fastest option I would personally do the following:
    1. Take a full database backup
    2. Set the database recovery model to Simple
    3. Manually issue two checkpoints for good measure or check to make sure the current VLF(active) is near the beginning of the log file
    4. Shrink the log using the truncate option to lop off the end of the log
    5. Manually re-size the log based on usage needed
    6. Set the recovery model to full
    7. Take a differential database backup to bridge the log gap
    The total time that will take is really just the full database backup and the expanding of the log file. The shrink should be close to instantaneous since you're just truncating the end and the differential backup should be fairly quick as well. If you don't
    need the full recovery model, leave it in simple and reset the log size (through multiple grows if needed) and take a new full backup for safe keeping.
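    A rough T-SQL sketch of those steps, with the database name, file name, backup paths, and target size as placeholders, might look like this:
    -- 1. Full database backup
    BACKUP DATABASE [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_full.bak';
    -- 2. Switch to SIMPLE so the log no longer needs to be preserved
    ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;
    -- 3. Checkpoints to move the active portion toward the start of the log
    CHECKPOINT;
    CHECKPOINT;
    -- 4./5. Truncate the end of the log, then resize it to what the workload needs
    DBCC SHRINKFILE (N'MyDatabase_log', TRUNCATEONLY);
    ALTER DATABASE [MyDatabase]
        MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8GB);
    -- 6. Back to FULL recovery
    ALTER DATABASE [MyDatabase] SET RECOVERY FULL;
    -- 7. Differential backup to re-establish the backup chain
    BACKUP DATABASE [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_diff.bak' WITH DIFFERENTIAL;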
    Sean Gallardy | Blog |
    Twitter

  • Shrink Transaction log file - - - SAP BPC NW

    HI friends,
    We want to shrink the transaction log files in SAP BPC NW 7.0. How can we achieve this?
    Please can you throw some light on this.
    Why did we think of shrinking the file?
    We are getting an "out of memory" error whenever we do any activity, so we thought of shrinking the file (this is not a production server - FYI).
    An example of an activity where the out of memory issue appears:
    SAP BPC Excel >>> eTools >>> Client Options >>> Refresh Dimension Members >>> this leads to a pop-up screen stating "out of memory".
    So we thought of shrinking the file.
    Any suggestions, please.
    Thank you and kindest regards,
    Srikaanth

    HI Poonam,
    It is not only Excel that is throwing this kind of message (out of memory) - the SAP note would be helpful if we had the error in Excel alone, but we are facing this error everywhere.
    We have also found that our hard disk has run out of space; it now has only a few megabytes free.
    We want to empty the log files to make some space, and clear out all of our test data, log files, and other files.
    Please can you recommend a way to do this.
    Thank you and kindest regards,
    Srikaanth

  • Delete transaction log file

    Hi,
    I have three T-log files in my database. Now I want to delete two of the transaction log files.
    Can I do the following:
    1. dbcc shrinkfile(log1, truncateonly)
    2. dbcc shrinkfile(log2, truncateonly)
    3. Then remove the files using a command or SSMS.
    Regards

    Hi Satheesh,
    What about this:
    Can I use the below procedure:
    dbcc shrinkfile(LOG2,emptyfile)
    dbcc shrinkfile (LOG3,emptyfile)
    alter database PRT remove file LOG2
    alter database PRT remove file LOG3
    Note: LOG1 is my primary log file and will remain; I want to remove only the secondary log files.
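    For reference, a quick way to confirm which log files remain after the removal (a minimal sketch, run in the PRT database):
    -- List all files for the current database; only LOG1 should remain among the log files
    SELECT name, type_desc, size
    FROM sys.database_files;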
    Regards

  • SHADOW_IMPORT_UPG1 is very very slow, no log files are created

    Hi all
    We are now doing our production upgrade. During the SHADOW_IMPORT_UPG1 phase the system is very slow, and
    no log files are created in the /usr/sap/put/log directory.
    Only three files are growing in the /usr/sap/tmp directory:
    orar3p> ls -lrt
    total 219176
    -rw-rw-rw-   1 r3padm     sapsys        2693 Aug 15 18:42 UCMIG_DE.ECO
    -rw-rw-rw-   1 r3padm     sapsys        2374 Aug 15 18:42 R3trans.out
    -rw-rw-rw-   1 r3padm     sapsys        2685 Aug 15 18:46 ADDON_TR.ECO
    -rw-rw-rw-   1 r3padm     sapsys         726 Aug 15 20:04 crshdusr.log
    -rw-rw-rw-   1 r3padm     sapsys        3915 Aug 15 21:53 EU_IMTSK.ECO
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLFRN18.R3P
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLPTN18.R3P
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLESN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     36433272 Aug 15 23:44 SAPKLESN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     36807577 Aug 15 23:44 SAPKLFRN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     35372350 Aug 15 23:44 SAPKLPTN18.R3P
    orar3p> date
    Fri Aug 15 23:44:54 PDT 2008
    Can anyone help with what to do?
    Thanks
    Senthil

    Hello,
    Did you discover what the cause was for this phase running so slowly? And how long did it take to complete in the end?
    We are currently running an upgrade of our Development system and have struck the same issue.
    I killed the upgrade after the phase had been running for 4 hours and restarted it, but it looks like it is still going to run for a long time.
    Regards....John

  • Transaction Log File Drive is missing from SAN

    HI all, we had some SAN issues and we don't have transaction log files for some databases.
    This is a SQL Server 2008 R2 cluster. The drive which was holding the T-log files is missing. Please let me know how to bring back the databases. Awaiting an early reply.

    As others have said, the SAN folks need to get their act together and bring back the disk with the log files.
    If the log files are truly lost, you should restore a clean backup. If you don't have a clean backup, well, there are some people in your company who are likely to ask you some questions about what is going on in the data centre.
    It was suggested that you should detach the data file and reattach it to have a new log file created. I strongly recommend against this. You will get a database that is likely to have corruption and inconsistency at both the SQL Server level and the application level, due to transactions that were in flight when the log files were lost.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Log file Growth query

    Hi All,
    Is there any query to set the log file growth to unlimited in SQL 2000? I am unable to do that through the GUI and I am getting an error. Please suggest a query.
    Thanks & Regards,
    Venkat.

                     
    "As I said, you cannot set log files to "unrestricted".  You must set them to a number.  "
    Not true. From
    http://msdn.microsoft.com/en-us/library/bb522469.aspx:
    MAXSIZE { max_size| UNLIMITED }
    Specifies the maximum file size to which the file can grow.
    max_size                              
    Is the maximum file size. The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes, or terabytes. The default is MB. Specify a whole number and do not include a decimal. If
    max_size is not specified, the file size will increase until the disk is full.
    UNLIMITED                            
    Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a maximum size of 2 TB, and a data file has a maximum size of 16 TB. There is no maximum size when this option is specified for a FILESTREAM
    container. It continues to grow until the disk is full.
    In other words, you _can_ set a log file's MAXSIZE to UNLIMITED and you do _not_ have to specify a number, but SQL Server will _not_ grow a log file beyond 2TB (even when you try to allow it)
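    For example, a minimal T-SQL sketch of setting a log file's MAXSIZE to UNLIMITED (the database and logical file names are placeholders):
    -- Allow the log file to grow until the disk is full (subject to the 2 TB cap noted above)
    ALTER DATABASE [MyDB]
        MODIFY FILE (NAME = N'MyDB_log', MAXSIZE = UNLIMITED);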

  • Unable to delete records as the transaction log file is full

    My disk is running out of space, and as a result I decided to free some space by deleting old data. I tried to delete 100,000 rows at a time, as there are 240 million records to be deleted, but I am unable to delete them all at once, and shrinking the database doesn't free much space. This is the error I am getting at times:
    The transaction log for database 'TEST_ARCHIVE' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
    How can I overcome this situation and delete all the old records? Please advise.
    mayooran99

    In order to delete rows, SQL Server needs to write the information to the log file, and you do not have room for those rows in the log file. You might succeed by deleting fewer rows each time, backing up the log file after each batch, and then shrinking the log file, but this is not the way I would choose.
    The best option is probably to add another disk (a simple disk does not cost a lot) and move the log file there permanently. That will also improve how the database works (it is highly recommended not to put the log file on the same disk as the data file in most cases).
    If you can't add a new disk permanently, then add one temporarily. Add a file to the database on this disk -> create a new table on this disk -> move all the data that you do not want to delete into the new table -> truncate the current table -> bring the data back from the new table -> drop the new table and the new file to release the temporary disk.
    Are you using the full or simple recovery model?
    * In full mode you have to back up the log file if you want to shrink it.
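    As a minimal sketch of deleting in batches while keeping the log under control (table name, filter, batch size, and backup path are placeholders; the log backup only applies in the full recovery model):
    DECLARE @rows int;
    SET @rows = 1;
    WHILE @rows > 0
    BEGIN
        -- Delete one batch of old rows
        DELETE TOP (100000)
        FROM dbo.ArchiveTable
        WHERE CreatedDate < '2010-01-01';
        SET @rows = @@ROWCOUNT;
        -- Back up the log between batches so its space can be reused
        BACKUP LOG [TEST_ARCHIVE] TO DISK = N'X:\Backups\TEST_ARCHIVE_log.trn';
    END;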
      Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]

  • Log File Growth after Database ReIndexing

    Hi,
    After doing BizTalk MsgBox and DTA DB re-indexing by executing the bts_RebuildIndexes and dtasp_RebuildIndexes stored procedures respectively, it has been observed that the transaction log size for both DBs grew very large and the BizTalk jobs (like DTA Purge) were not completing.
    I am using BTS2006 and SQL2005.
    Because of the growth, we had to introduce extra storage. But it seems that the size is under control now.
    Kindly help me to understand what went wrong or why it happened?
    Thanks,
    Sugata

    Ideally, while running the stored procedures for rebuilding indexes, there shouldn't be any other processing happening, so it is suggested to stop all the host instances, the SQL Agent jobs, and the IIS app pool if you have any SOAP/WCF receive locations.
    You can run an MBV report from the link below and check whether it reports any issues.
    Message Box Viewer - http://blogs.technet.com/b/jpierauc/archive/2007/12/18/msgboxviewer.aspx
    Later, use the Terminator tool to address the concerns it reports. You may have to repair references.
    http://www.microsoft.com/en-in/download/details.aspx?id=2846
    Also, run the query below against each of the databases and check that the output doesn't contain any errors (red outcomes):
    USE <DatabaseName>;
    GO
    DBCC CHECKDB;
    Let us know if you are still facing any issue.
    Thanks,
    Prashant
    Please mark this post accordingly if it answers your query or is helpful.

  • Log Reader Agent: transaction log file scan and failure to construct a replicated command

    I encountered the following error message related to Log Reader job generated as part of transactional replication setup on publisher. As a result of this error, none of the transactions propagated from publisher to any of its subscribers.
    Error Message
    2008-02-12 13:06:57.765 Status: 4, code: 22043, text: 'The Log Reader Agent is scanning the transaction log for commands to be replicated. Approximately 24500000 log records have been scanned in pass # 1, 68847 of which were marked for replication, elapsed time 66018 (ms).'.
    2008-02-12 13:06:57.843 Status: 0, code: 20011, text: 'The process could not execute 'sp_replcmds' on ServerName.'.
    2008-02-12 13:06:57.843 Status: 0, code: 18805, text: 'The Log Reader Agent failed to construct a replicated command from log sequence number (LSN) {00065e22:0002e3d0:0006}. Back up the publication database and contact Customer Support Services.'.
    2008-02-12 13:06:57.843 Status: 0, code: 22037, text: 'The process could not execute 'sp_replcmds' on 'ServerName'.'.
    Replication agent job kept trying after specified intervals and kept failing with that message.
    Investigation
    I could clearly see there were transactions waiting to be delivered to subscribers from the following:
    SELECT * FROM dbo.MSrepl_transactions -- 1162
    SELECT * FROM dbo.MSrepl_commands -- 821922
    The following steps were taken to further investigate the problem. They further confirmed how transactions were in queue waiting to be delivered to distribution database
    -- Returns the commands for transactions marked for replication
    EXEC sp_replcmds
    -- Returns a result set of all the transactions in the publication database transaction log that are marked for replication but have not been marked as distributed.
    EXEC sp_repltrans
    -- Returns the commands for transactions marked for replication in readable format
    EXEC sp_replshowcmds
    Resolution
    Taking a backup as suggested in the message wouldn't resolve the issue. None of the commands retrieved from sp_browsereplcmds for the LSN mentioned in the message had any syntactic problems either.
    exec sp_browsereplcmds @xact_seqno_start = '0x00065e220002e3d00006'
    In a desperate attempt to resolve the problem, I decided to drop all subscriptions. To my surprise, the Log Reader kept failing with the same error. I had thought that with no subscriptions for the publications, the Log Reader Agent would have no reason to scan the publisher's transaction log, but obviously I was wrong. Even adding a new log reader using sp_addlogreader_agent after deleting the old one did not help, and a restart of the server couldn't do much good either.
    EXEC sp_addlogreader_agent
    @job_login = 'LoginName',
    @job_password = 'Password',
    @publisher_security_mode = 1;
    When nothing else worked for me, I decided to give it a try to the following procedures reserved for troubleshooting replication
    --Updates the record that identifies the last distributed transaction of the server
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
    -- Flushes the article cache
    EXEC sp_replflush
    Bingo !
    The Log Reader Agent managed to start successfully this time. I wish I had used both commands before I decided to drop the subscriptions; it would have saved me the considerable effort and time spent re-doing the subscriptions.
    Question
    Even though I managed to resolve the error and have replication functioning again, I think there might have been a better solution, and I would appreciate it if you could provide some feedback and propose your approach to resolving the problem.

    Hi Hilary,
    Will the command below truncate the log records marked for replication? Is there any data loss when we execute this command? Can you please help me understand the internal workings of this command?
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
