Impact on DB to shrink a 130GB log file?

Here is my setup:
OS:  Windows Server 2008 R2 Enterprise
6 x Cores
8GB RAM
SQL:  SQL Server 2008 set up in a 2-node cluster
I noticed that I have been having to add more disk space to one of the DBs running on this cluster.  After looking into this further, I noticed that there is a 130GB log file for this database.  I have never attempted to shrink a log file
of this size and do not know what kind of impact it may have on the running databases.  Can anyone comment on what I might expect?  Do I need to do this during a maintenance window?
Thank you
Rick

Rick,
this is a very comprehensive subject and you should brace yourself for a lot of complex information that is about to be posted here, but I'm going to try to put this as simply as possible.
Since you are asking if you can shrink your log file, I'm going to assume you do not have a backup policy that includes log backups, but you probably have your database recovery model set to full. The transaction log is needed to ensure transactional consistency
and may be backed up, under the full/bulk-logged database recovery models, enabling you to restore log backups together with full/differential backups.
Yes, you can shrink your log files, and for this to be effective I recommend you set your database to the simple recovery model. There are a few main things you should be aware of:
1) The database engine will allocate log file space as needed to run transactions. If it runs out of the pre-allocated space, it will autogrow by a previously specified amount if this option is enabled, but if it is disabled or there is no more free
disk space available, transactions will fail. It is recommended that you set the autogrow increment to reasonable values so your log file doesn't get too fragmented (new VLFs are created every time the file grows). There is also a performance impact, generally
overrated by some analysts in my opinion but very real, of growing your file too often.
2) Under simple recovery model, parts of the log known as VLFs that have already been filled and aren't supporting active transactions anymore are reused so your log files will generally remain very small. In full recovery model, these parts are not reused
until you take a log backup, so the log file will keep growing (this is probably your case). You will need to switch your database to simple recovery model or take a log backup in order to successfully shrink the log file.
3) However, if your database is already on simple recovery model and the log grew to 130 GB, it was because the transactions really needed it to work, so I would recommend against shrinking your log file unless you know the transactions were part of an unusual
application process or appeared after an index maintenance operation.
4) Your backup/restore operations will run faster with a smaller log file.
5) You'll need less disk storage to restore your databases on secondary environments if you have smaller log files.
6) If the storage used for the log files are shared across multiple databases, instances, or other files, keeping your log files small ensures they have more free space to grow under abnormal circumstances without manual intervention, preventing service
disruption.
Just because there are clouds in the sky doesn't mean it isn't blue. But someone will come along and argue that clouds, birds, airplanes, pollution, sunsets, colour blindness and nuclear bombs all add different colours to the sky, and that this
is undocumented behavior that should not be relied upon.
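To make the options above concrete, here is a minimal T-SQL sketch of both routes. YourDb, YourDb_log, the backup path and the 8192 MB target are placeholder values (nothing from Rick's setup), so adjust them before running anything:
-- See what is currently preventing log truncation
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDb';
-- Route A: stay in full recovery, start taking log backups, then shrink
BACKUP LOG YourDb TO DISK = N'X:\Backups\YourDb_log.trn';
USE YourDb;
DBCC SHRINKFILE (YourDb_log, 8192);   -- target size is in MB (8 GB here); size it for your workload
-- Route B: if point-in-time recovery is not needed, switch to simple and shrink
ALTER DATABASE YourDb SET RECOVERY SIMPLE;
DBCC SHRINKFILE (YourDb_log, 8192);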

Similar Messages

  • Shrink Transaction log file - - - SAP BPC NW

    HI friends,
    We want to shrink the transaction log files in SAP BPC NW 7.0. How can we achieve this?
    Please can you throw some light on this.
    Why did we think of shrinking the file?
    We are getting an "out of memory" error whenever we do any activity, so we thought of shrinking the file (this is not a production server - FYI).
    An example of an activity where the out of memory issue appears:
    SAP BPC excel >>> etools >>> client options >>> refresh dimension members >>> this leads to a pop-up screen stating that "out of memory"
    so we thought of shrinking the file.
    Please any suggestions
    Thank you and Kindest Regards
    Srikaanth

    HI Poonam,
    It is not only Excel that is throwing this kind of message (out of memory) - the SAP note is helpful if we have the error in Excel alone,
    but we are facing this error everywhere.
    We have also found out that our hard disk has run out of space.
    We want to empty the log files and make some space for ourselves.
    Our hard disk now has only a few megabytes free.
    We want to clear all of our test data, log files, and other files.
    Please can you recommend us some way.
    Thank you and Kindest regards
    Srikaanth

  • Big transaction log file

    Hi,
    I found a sql server database with a transaction log file of 65 GB.
    The database is configured with the recovery model option = full.
    Also, I noticed that since the database was created, only full database backups have been taken.
    No transaction log backups were executed.
    Now, the "65 GB transaction log file" use more than 70% of the disk space.
    Which scenario do you recommend?
    1- Backup the database, backup the transaction log to a new disk, shrink the transaction log file, schedule transaction log backup each hour.
    2- Backup the database, put the recovery model option= simple, shrink the transaction log file, Backup the database.
    Would the "65 GB transaction log file" shrink operation have an impact on my database users?
    The sql server version is 2008 sp2 (10.0.4000)
    regards
    D

    I've read the other posts and my position is: it really doesn't matter.
    You've not needed point-in-time restore ability, up to and including this date and time, since inception. Since a full database backup contains all of the log needed to bring the database into a consistent state, doing a full backup and then a log backup is redundant
    and just taking up space.
    For the fastest option I would personally do the following:
    1. Take a full database backup
    2. Set the database recovery model to Simple
    3. Manually issue two checkpoints for good measure, or check to make sure the current (active) VLF is near the beginning of the log file
    4. Shrink the log using the truncate option to lop off the end of the log
    5. Manually re-size the log based on usage needed
    6. Set the recovery model to full
    7. Take a differential database backup to bridge the log gap
    The total time that will take is really just the full database backup and the expanding of the log file. The shrink should be close to instantaneous since you're just truncating the end and the differential backup should be fairly quick as well. If you don't
    need the full recovery model, leave it in simple and reset the log size (through multiple grows if needed) and take a new full backup for safe keeping.
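    Roughly, those steps translate into the following T-SQL. This is only a sketch: SalesDb, SalesDb_log, the backup paths and the 8192 MB target are made-up example values.
    USE master;
    -- 1. Full database backup
    BACKUP DATABASE SalesDb TO DISK = N'X:\Backups\SalesDb_full.bak';
    -- 2. Switch to simple so the log clears itself
    ALTER DATABASE SalesDb SET RECOVERY SIMPLE;
    -- 3. Checkpoints to help move the active VLF toward the start of the log
    USE SalesDb;
    CHECKPOINT;
    CHECKPOINT;
    -- 4. Release the free space at the end of the log file
    DBCC SHRINKFILE (SalesDb_log, TRUNCATEONLY);
    -- 5. Re-size the log to what the workload actually needs (8 GB as an example)
    ALTER DATABASE SalesDb MODIFY FILE (NAME = SalesDb_log, SIZE = 8192MB);
    -- 6. Back to full recovery
    ALTER DATABASE SalesDb SET RECOVERY FULL;
    -- 7. Differential backup to bridge the log gap
    BACKUP DATABASE SalesDb TO DISK = N'X:\Backups\SalesDb_diff.bak' WITH DIFFERENTIAL;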
    Sean Gallardy | Blog |
    Twitter

  • Crystal Report Server Database Log File Growth Out Of Control?

    We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise.  Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
    I have been reviewing the Application Logs, and this log file size is auto-increasing about three times a week.
    We backup the database each night, and run maintenance routines to Check Database Integrity, re-organize index, rebuild index, update statistics, and backup the database.
    Is it "Normal" to have such a large LOG file compared to the DATABASE file?
    Can you tell me if there is a recommended way to SHRINK the log file?
    Some technical documents suggest first truncating the log, and then using the DBCC SHRINKFILE command:
    USE CRS
    GO
    --Truncate the log by changing the database recovery model to SIMPLE
    ALTER DATABASE CRS
    SET RECOVERY SIMPLE;
    --Shrink the truncated log file to 1 gigabyte
    DBCC SHRINKFILE (CRS_log, 1000);
    GO
    --Reset the database recovery model.
    ALTER DATABASE CRS
    SET RECOVERY FULL;
    GO
    Do you think this approach would help?
    Do you think this approach would cause any problems?

    My bad, you didn't put the K on the 2nd number.
    Looking at my SQL Server, that's crazy big; my logs are in the KBs, like 4-8.
    I think someone enabled some type of debugging on your SQL Server. It's more of a Microsoft issue, as our product doesn't require it, judging from my SQL DBs.
    Regards,
    Tim

  • MessageBox log file size

    Hi, 
    In our prod environment, the MessageBox data file is within the recommended limits - 2GB, but the log file is 32GB. Is this a reason to worry, or is it normal?  I couldn't find any recommendations on this. 
    Thank you very much!

    This is not normal.
    IMO your BizTalk database jobs are not running. Make sure your BizTalk SQL Server jobs have been enabled and SQL Server Agent is running. 
    Please have a look at the
    How to Configure the Backup BizTalk Server Job article to enable the jobs. 
    The BizTalk backup job is responsible for keeping the log file size within the limit. 
    You can try shrinking the log file using the following SQL commands:
    USE BiztalkMsgBoxDb;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE BiztalkMsgBoxDb
    SET RECOVERY SIMPLE;
    GO
    -- Shrink the truncated log file to 2 MB.
    DBCC SHRINKFILE (BiztalkMsgBoxDb_Log, 2);
    GO
    I would recommend you have a read of the following articles:
    BizTalk Environment Maintenance from a DBA perspective 
    BizTalk Databases: Survival Guide
    Hope this helps. 
    Greetings, HTH
    Naushad Alam
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or
    Mark As Answer
    alamnaushad.wordpress.com

  • Transactional log file issue

    Dear All,
    There have been issues in the past where the transaction log file has grown so big that it hit the size limit of the drive. I would like to know the answers to the following
    please:
    1. To resolve the space issue, is the correct way to first take a backup of the transaction log and then shrink the transaction log file?
    2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
    3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
    4. How often should the update stat job should run please?
    Thank you in advance!

    Hi,
    My answers might be very similar to those already given, but I hope this will add something more.
    1. To resolve the space issue, is the correct way to first take a backup of the transaction log and then shrink the transaction log file?
     --> If the database recovery model is full or bulk-logged then a t-log backup helps; if it doesn't help, try to increase the frequency of log backups. You can refer to:
    Factors That Can Delay Log Truncation
    2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
    Autogrowth for a very large DB is crucial: if the increment is too high it can create very large active VLFs, and if it is too small it can cause fragmentation. In your case your priority is to control space utilization.
    I suggest you keep a small autogrowth increment, and it must be set as a size, not a percentage. (To count the VLFs your log already has, see the sketch at the end of this reply.)
    /*******Autogrowth formula for the log file**********/
    Autogrow of less than 64MB = 4 VLFs 
    Autogrow of 64MB and less than 1GB = 8 VLFs 
    Autogrow of 1GB and larger = 16 VLFs
    3. At the moment, the transaction log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
    ---> If the query below returns log_backup for the respective database then yes, you can increase the log backup frequency. But if it returns some other factor, please check the above-mentioned
    link.
    "select name as [database], log_reuse_wait, log_reuse_wait_desc from sys.databases"
    4. How often should the update stats job run please?
    This totally depends on the amount of DML operations you are performing. You can enable auto update stats, and weekly you can do an update stats with full scan.
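    As mentioned in answer 2, if you want to count the VLFs the log already has before tuning autogrowth, a quick check (run in the context of the database in question) is:
    -- One row is returned per VLF; Status = 2 means that VLF is still in use
    DBCC LOGINFO;
    -- On SQL Server 2016 SP2 and later the same information is also available through a DMV
    SELECT COUNT(*) AS vlf_count FROM sys.dm_db_log_info(DB_ID());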
    Thanks Saurabh Sinha
    http://saurabhsinhainblogs.blogspot.in/
    Please click the Mark as answer button and vote as helpful
    if this reply solves your problem

  • CPO SQL Log Files

    Hey guys
    We have started to see an exponential growth in our SQL log files that is in direct correlation with running more processes more consistently.
    Has anyone had any issues with running a log-truncating script against both the TEO_processlog and TEO_reportinglog files?
    something along the lines of 
    ALTER DATABASE ExampleDB SET RECOVERY SIMPLE
    DBCC SHRINKFILE('ExampleDB_log', 0, TRUNCATEONLY)
    Thanks 
    Matt

    Matt,
     Yes, many people (if they do not need the t-logs) will reduce them. In 3.0 (during install) you can actually set it to the SIMPLE recovery model instead of the full recovery model. I do that on almost all of my boxes. Prior to CPO 3.0 you would have to do it like you show above.
    On most of my pre 3.0 boxes I would do something like
    USE <TEOProcess_DB_Name>;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE <TEOProcess_DB_Name> SET RECOVERY SIMPLE;
    GO
    -- Shrink the truncated log file to 1000MB.
    DBCC SHRINKFILE ("<TEOProcess_log_file_name>", 1000);
    GO
    Of course this is on SQL only. You can find more information on DBCC Shrinkfile at Microsoft's help site.
    If you need to reset the database to full mode it's:
    -- Reset the database recovery model.
    ALTER DATABASE <TEOProcess_DB_Name> SET RECOVERY FULL;
    GO
    --Shaun

  • Database Log file Shrink information

    Hello Team,
    Database log file shrink information: due to a space problem.
    One of my databases is 600 GB, and within it the log file is 260 GB. For this database we take a full backup daily and no log backups. If we shrink the log file, will there be any impact?
    What happens internally when we shrink the log file?
    Another database is 600 GB with a 260 GB log file; for this database we take a full backup daily and a log backup every 15 minutes.
    In this scenario, what will happen if we shrink the log file?

    Hello,
    You should not try to shrink the log file regularly, because it is resource intensive, it takes a lot of time and it creates fragmentation on the disk storage.
    If you don't back up the log files of the databases (option 1) you don't have the ability to restore to a point in time between full and differential backups. If
    you don't want to back up the log files then it makes sense to change the recovery model to simple.
    Backing up the logs regularly minimizes the risk of the log getting full and extending its size.
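    Before deciding whether the 260 GB log is worth shrinking at all, it also helps to check how much of it is actually in use; for example:
    -- One row per database, with Log Size (MB) and Log Space Used (%)
    DBCC SQLPERF (LOGSPACE);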
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • How to know the history of shrinking log files in mssql

    hello,
    In my SAP system someone shrank the log file from 100 GB to 5 GB. How can we check when this
    was shrunk recently?
    Regards,
    ARNS.

    Hi,
    Did you check the log file in the SAP directory? There will be an entry for who changed the size and the time.
    Also:
    Go to the screen where we usually change the log file size. In that particular field press F1 and go to the technical settings screen. Get the program name, table name and field name.
    Now, using SE11, try to open the table and check whether the "changed by" value is there for that table.
    Also open the program and debug the change-log-file process block; you can see in which table it updates the changes.
    One note of caution in this case:
    The size of the application server's System Log is determined by the
    following SAP profile parameters. Once the current System Log reaches
    the maximum file size, it gets moved to the old_file and a new
    System Log file gets created. The number of past days messages in the
    System Log depends on the amount/activity of System Log messages and the
    max file size. Once messages get rolled off the current and old files,
    they are no longer retrievable.
    rslg/local/file /usr/sap/<SID>/D*/log/SLOG<SYSNO>
    rslg/local/old_file /usr/sap/<SID>/D*/log/SLOGO<SYSNO>
    rslg/max_diskspace/local 1000000
    rslg/central/file /usr/sap/<SID>/SYS/global/SLOGJ
    rslg/central/old_file /usr/sap/<SID>/SYS/global/SLOGJO
    rslg/max_diskspace/central 4000000

  • Shrink Log File on High Availability

    Dear support,
    Good day to you.
    I am using SQL Server 2012 with AlwaysOn High Availability (Server_SQL1 = primary & Server_SQL2 = secondary). When I try to shrink the log files it tells me I must alter the database to the simple recovery model first, but I can't because it is in an AlwaysOn availability group!
    That would mean:
    remove the DBs from AlwaysOn (Server_SQL1)
    shrink the files
    remove the DBs from Server_SQL2
    add the DBs to AlwaysOn again
    Is there any other solution for shrinking the logs without adding/removing the DBs from AlwaysOn?
    Regards,

    The link that Uri has is correct, but let me expand on it for anyone else that runs across this issue:
    You don't actually need to be in the simple recovery model to shrink a file or the log. The reason why some people make the switch is because changing to the simple recovery model lets the database automatically clear the logs. This *generally* puts the
    VLF in use at the very beginning of the log. Since shrinking a log file works differently from data files (the log only shrinks from the end of the file back to the first in-use VLF), it allows for a fast shrink-and-grow operation to fix up the log.
    In the full recovery model it is still possible, the difference being that you'll need to check to see which VLF the database is currently using and you may have to manually cause the log to circle around (by log backups, etc) to get a good shrink so
    that you can grow at a proper size.
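    As a rough sketch of that approach for a database that must stay in the availability group (AgSalesDb, the path and the 4096 MB target are placeholder values):
    -- Take a log backup so inactive VLFs can be reused and the active VLF can wrap back to the start
    BACKUP LOG AgSalesDb TO DISK = N'X:\Backups\AgSalesDb_log.trn';
    USE AgSalesDb;
    -- Optional: see where the active VLF currently sits (rows with Status = 2)
    DBCC LOGINFO;
    -- Shrink while staying in the full recovery model; repeat the backup + shrink if the active VLF is still near the end
    DBCC SHRINKFILE (AgSalesDb_log, 4096);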
    Sean Gallardy | Blog |
    Twitter

  • Shrink Log File on MS sql 2005

    Hi all,
    My DB has a huge log file, more than 100 GB.
    The idea is to shrink it, but the right way.
    I was trying this:
    use P01
    go
    backup log P01 TO DISK = 'D:\P01LOG1\P01LOG1.bak'
    go
    dbcc shrinkfile (P01LOG1,250) with no_infomsgs
    go
    The problem is that the backup file gets bigger and bigger with each backup.
    So, my question is: how do I shrink the log file correctly, with a backup, but so that the backup file does not keep growing and instead stays the same size, overwriting previous backups?
    I have a full daily backup with HP Data Protector, but it doesn't clear the log, and it isn't possible to shrink it.

    What you want to do with the log backups depends on how you are going to recover the database in case of system/database loss, and on your backup schedule.
    1. If you are not going to do point-in-time recovery then there is no point in taking a tran log backup to a backup file. You can change the recovery model of the database to "simple". If your recovery model is "simple" you don't have to take transaction log backups at all. The inactive transactions are flushed from the log automatically. You should still be taking full and differential backups so that you can at least recover your database to the last full backup and apply the latest differential backup.
    2. If this is a production system then you should definitely be on "full" recovery mode and should be taking regular transaction log backups and storing them in a safe place so that you can use them to recover your system to any point in time. Storing the transaction log backups on the same server kind of defeats the purpose, because if you lose the server and disks you will not have the backups either.
    3. If you are in full recovery mode and let's assume that you run your transaction log backups every 30 mins, then you need your log file to be of a size that can handle the transactions that happen in any given 30 to 60 mins.
    There shouldn't be a need to constantly shrink log files if you configure things right.
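    If option 1 fits (no point-in-time recovery needed), the whole thing reduces to something like the following, reusing the names from the original post (P01 and P01LOG1) and its 250 MB target:
    ALTER DATABASE P01 SET RECOVERY SIMPLE;
    GO
    USE P01;
    GO
    -- Under the simple recovery model the inactive log is cleared automatically, so the shrink can succeed
    DBCC SHRINKFILE (P01LOG1, 250) WITH NO_INFOMSGS;
    GO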
    Edited by: Neeraj Nagpal on Aug 20, 2010 2:48 AM

  • Cannot shrink log file 2 because the logical log file located at the end of the file is in use ?

    HI,
    I am getting this error frequently. Any recommendations:
    Executed as user: DB0\sqlservices. Processing database: dbin [SQLSTATE 01000] (Message 0) 
    Cannot shrink log file 2 (DB_log) because the logical log file located at the end of the file is in use. [SQLSTATE 01000] (Message 9008) 
    Processing database: DB_ [SQLSTATE 01000] (Message 0)  DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000] (Message 2528) 
    Cannot shrink log file 2 (DB_log) because the logical log file located at the end of the file is in use. [SQLSTATE 01000] (Message 9008) 
    Processing database: DB [SQLSTATE 01000] (Message 0) 
    DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000] (Message 2528) 
    Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed. [SQLSTATE 42000] (Error 3023) 
    Processing database: DB_AC [SQLSTATE 01000] (Error 0)  
    [SQLSTATE 01000] (Error 0)  DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000] (Error 2528). 
    The step failed.
    Please give any recommendations to avoid this error in the future:
    Yangamuni Prasad M

     
    Hi Yangamuni,
    Is there any progress?
    Please have a look on the below threads with the similar issues as yours:
    http://www.sqlservercentral.com/Forums/Topic652579-146-1.aspx
    http://social.msdn.microsoft.com/forums/en-US/sqldatabaseengine/thread/ae4db890-c15e-44de-a2af-e85c04260331
    The solution is to change the recovery model to SIMPLE, shrink the log files, and then change back to the FULL recovery model.
    Thanks,
    Weilin Qiao
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. This can be beneficial to other community members reading the thread.

  • MS_SQL Shrinking Log Files.

    Hi Experts,
    We have checked the documentation received from SAP (SBO Customer Portal), based on the EarlyWatch Alert.
    As per the SAP request, we have minimized the size of the 'Test Database Log' file through MS SQL Management Studio (restricted file growth 10 percent, size 10 MB).
    Initially it was 50 percent, 1000 MB.
    My doubt is:
    Will any problem occur in the future because of this change to the 'LOG FILES'?
    Kindly help me.
    Based on your reply ...
    I will update the live production database....
    By
    kart

    The risk of shrinking the log file is fairly small.  Current hardware and software have much better reliability than before.  When you shrink your log file, you just lose some history that nobody even knows has any value.
    On the contrary, if you keep a very large log file, it may cause more trouble than it does good.
    Thanks,
    Gordon

  • SQL Server 2012 DB log file doesn't shrink (simple recovery model)

    I've found several similar questions in this forum, but none of the answers have resolved my problem: I have a SQL Server 2012 DB using simple recovery model.  The MDF file is 12 GB and the LDF file is 10 GB.  I'm trying to shrink the size of the
    LDF file.  I've read that for simple recovery model DBs there are reasons for delaying log file shrinking, but I still can't find a solution based on these reasons.
    When I try to shrink it using this command:
    DBCC SHRINKFILE(MyDB_log, 1000000)
    I get these results, and no change of file size:
    DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
    8     2       1241328      128          1241328    128
    The same results running this:
    DBCC SHRINKFILE(MyDB_log, 1000000, TRUNCATEONLY)
    There doesn't appear to be any open transactions:
    DBCC OPENTRAN()
    No active open transactions.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    And this returns NOTHING:
    SELECT name, database_id, log_reuse_wait_desc FROM sys.databases WHERE database_id = DB_ID()
    name  database_id  log_reuse_wait_desc
    MyDB  8            NOTHING
    I've also tried running the following, but nothing useful is returned:
    SELECT * FROM sys.dm_tran_database_transactions
    SELECT * FROM sys.dm_exec_requests WHERE database_id = DB_ID()
    SELECT * FROM sys.dm_tran_locks WHERE resource_database_id = DB_ID()
    Any other suggestions of what I can do to shrink this log file?  Or perhaps someone can justify its enormous size?
    David Collacott

    The answer is pretty simple.
    The following code is the problem:
    DBCC SHRINKFILE(MyDB_log, 1000000)
    You are telling SQL Server that you want to "shrink" the MyDB_log file to a target size of 1 TB.  Well, according to you the MyDB_log file is well below the 1 TB size you are targeting; in fact it's only 10 GB, so SQL Server is doing precisely what you
    are telling it to do.
    See, according to the SQL Server documentation
    here, target size "Is the size for the file in megabytes, expressed as an integer."
    Now if you'd like to actually shrink the log file down to, oh say 1 GB, then you should try the following command:
    DBCC SHRINKFILE(MyDB_log, 1000)
    Since the target size is expressed in megabytes, 1000 MB is roughly 1 GB.

  • Log file shrinking in SQL server

    Hi,
    I have a log file with an initial size of 80 GB on the C drive.
    Now we are having a space issue on the C drive, so I tried to shrink the log file, but it is not reducing below the initial size.
    Is this the default behavior, or did I miss something while shrinking?
    I used the DBCC SHRINKFILE option to shrink it.
    How can I change the initial size of the log file?
    If it has been set to 80 GB, is that why I am not able to free space on the C drive?
    Thanks,
    Vinodh Selvaraj

    Hello,
    Please first check the log reuse wait state of the databases; you may have to run an additional log backup before you can shrink the log file:
    select name, log_reuse_wait_desc
    from sys.databases
    order by name
    Olaf Helper
    [ Blog] [ Xing] [ MVP]
