Log File Growth after Database ReIndexing

Hi,
After doing BizTalk MsgBox and DTADb re-indexing by executing the bts_RebuildIndexes and dtasp_RebuildIndexes stored procedures respectively, we observed that the transaction log size for both DBs grew sharply and that BizTalk jobs (like DTAPurge) were not completing.
I am using BTS 2006 and SQL 2005.
Because of the growth we had to add extra storage, but the size seems to be under control now.
Could you help me understand what went wrong and why it happened?
Thanks,
Sugata

Ideally, no message processing should be happening while the index-rebuild stored procedures run.
It's therefore suggested to stop all host instances, the SQL Server Agent, and the IIS application pool if you have any SOAP/WCF receive locations.
You can run a Message Box Viewer (MBV) report from the link below and check whether it reports any issues.
Message Box Viewer - http://blogs.technet.com/b/jpierauc/archive/2007/12/18/msgboxviewer.aspx
Then use the Terminator tool to address the concerns it reports; you may have to repair references.
http://www.microsoft.com/en-in/download/details.aspx?id=2846
Also, run the query below against each of the databases and check that the output does not report any errors (a red outcome).
USE <DatabaseName>;
GO
DBCC CHECKDB;
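If you want to run the check against every database in one pass, here is a minimal sketch using sp_MSforeachdb; note this procedure is undocumented (though widely used), so treat it as a convenience, not a supported API:

EXEC sp_MSforeachdb 'DBCC CHECKDB ([?]) WITH NO_INFOMSGS;';

The ? placeholder is replaced with each database name in turn, and NO_INFOMSGS limits the output to actual errors.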
Let us know if you are still facing any issue.
Thanks,
Prashant
Please mark this post accordingly if it answers your query or is helpful.

Similar Messages

  • How to design SQL server data file and log file growth

How to design SQL DB data file and log file growth - SQL Server 2012.
If my data file has a size of 10 GB and the log file has a size of 5 GB, what should the autogrowth size be in MB (not in %)? Based on what do we determine the ideal file autogrowth size?

It's very difficult to give a definitive answer on this. The best principle is to size your database correctly in advance so that you never have to autogrow; of course, in reality that isn't always practical.
The setting you use is really dictated by the expected growth in your files. Given that the size is relatively small, why not set it to 1 GB on the data file(s) and 512 MB on the log file? The important thing is to monitor it on an ongoing basis to see if that's the appropriate amount.
One thing you should do is enable instant file initialization by granting the service account the Perform Volume Maintenance Tasks right in group policy. This allows the data files to grow quickly when required; details here:
https://technet.microsoft.com/en-us/library/ms175935%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
Also, it is possible to query the default trace to find autogrowth events; if you wanted, you could write an alert/SQL job based on this:
SELECT
    [DatabaseName],
    [FileName],
    [SPID],
    [Duration],
    [StartTime],
    [EndTime],
    CASE [EventClass]
        WHEN 92 THEN 'Data'
        WHEN 93 THEN 'Log'
    END AS [FileType]
FROM sys.fn_trace_gettable('c:\path\to\trace.trc', DEFAULT)
WHERE EventClass IN (92, 93);
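If you don't know where the default trace file lives, you can look it up first and pass that path to fn_trace_gettable (sys.traces is a standard catalog view in SQL Server 2005 and later):

-- Returns the path of the currently active default trace file
SELECT [path] FROM sys.traces WHERE is_default = 1;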
    hope that helps

  • Very high transaction log file growth

    Hello
Running Exchange 2010 SP2 in a two-node DAG configuration. Just recently I noticed very high transaction log growth for one database. The transaction logs are growing so quickly that I have had to turn on circular logging to prevent the log LUN from filling up and causing the database to dismount. I have tried several things to find out what is causing this issue. At first I thought it could be a virus, an ActiveSync user, a user's Outlook client, or our Salesforce integration; however, when I used ExMon I could not see any unusually high user activity. When I looked at the item count for all mailboxes in the database experiencing the high log growth, I could not see any mailboxes with an unusually high item count (the command I ran to determine this is below; I ran it several times). I also looked at the message tracking log files and again saw no indication of a message loop or unusually high message traffic for a particular day. I also followed the guide below, hoping it would allow me to see inside the transaction log files, but it didn't produce anything that helped me understand the cause of this issue. When I ran the tool against the transaction log files, I saw runs like DDDDDDDDDDDDDDDD, OOOOOOOOOOOOOOOO, or HHHHHHHHHHHHHHHH.
I am starting to run out of ideas on how to figure out what is causing the log file build-up. Any help is greatly appreciated.
    http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
    Get-Mailbox -database databasethatkeepsgrowing | Get-MailboxStatistics | Sort-Object ItemCount -descending |Select-Object DisplayName,ItemCount,@{name="MailboxSize";exp={$_.totalitemsize}} -first 10 | Convertto-Html | out-File c:\temp\report.htm
    Bulls on Parade

If you have users with iPhones or smartphones using ActiveSync, then one of the quickest ways to see if this is the issue is to have users shut those phones off and see if the problem is resolved. If it is one or more iPhones, then look at what iOS they are on and get them to update to the latest version, or adjust the ActiveSync connection timeout. NOTE: There was an issue where iPhones caused runaway transaction logs, and I believe it was resolved in iOS 4.0.1.
There was also a problem with the MS CRM client a while back, so if you are using that, check out this link:
http://social.microsoft.com/Forums/en/crm/thread/6fba6c7f-c514-4e4e-8a2d-7e754b647014
I would also deploy some tracking methods to see if you can home in on the culprits. For example, if you want to see whether the problem is coming from an internal device/machine, you can use one of the following.
MS USER MONITOR:
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en and here is a link on how to use it:
http://www.msexchange.org/tutorials/Microsoft-Exchange-Server-User-Monitor.html
And this is a great article as well:
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Also check out ExMon, since you can use it to confirm which mailbox is unusually active, and then take the appropriate action.
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en
    Troy Werelius
    www.Lucid8.com
    Search, Recover, & Extract Mailboxes, Folders, & Email Items from Offline EDB's and Live Exchange Servers with Lucid8's DigiScope

  • Cannot remove 2nd log file on AlwaysOn database

    Hi all,
I have a database that is a member of an availability group. This database has two log files, and I want to remove the unused secondary log file. I ran this command to empty the second log file:
    USE [TEST-AG]
    GO
    DBCC SHRINKFILE (N'TEST-AG_log_2' , EMPTYFILE)
    GO
The command completes successfully; then I run the command to remove the file:
    USE [TEST-AG]
    GO
    ALTER DATABASE [TEST-AG]  REMOVE FILE [TEST-AG_log_2]
    GO
But this command fails with the following message:
Error 5042: The file 'TEST-AG_log_2' cannot be removed because it is not empty.
If I remove the database from the availability group, the command to remove the 2nd file works. So is it impossible to remove a secondary log file from a database that is a member of an AlwaysOn availability group?

Remove the database from the availability group, then remove the 2nd file. Once that has succeeded, add the database back to the availability group and then re-create your regular backup jobs for the database.
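A minimal sketch of that sequence, assuming a hypothetical availability group name [AG1] (adjust all names to your environment, and note that secondary replicas may need to be re-seeded after the database is re-added):

-- On the primary replica: take the database out of the AG
ALTER AVAILABILITY GROUP [AG1] REMOVE DATABASE [TEST-AG];
-- Now the second log file can be emptied and removed
USE [TEST-AG];
DBCC SHRINKFILE (N'TEST-AG_log_2', EMPTYFILE);
ALTER DATABASE [TEST-AG] REMOVE FILE [TEST-AG_log_2];
-- Re-join the database to the AG
ALTER AVAILABILITY GROUP [AG1] ADD DATABASE [TEST-AG];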

  • Database Log File getting full by Reindex Job

    Hey guys
I have an issue with one of my databases during a reindex job. Most of the time the log file is 99% free, but during the reindex job the log file fills up and runs out of space, so the reindex job fails and I also get errors from the DB due to log file space. Any suggestions?

Please note that changing to BULK_LOGGED recovery will make you lose point-in-time recovery: the ALTER INDEX REBUILD would be minimally logged, and for the time period this job is running you lose point-in-time recovery, so take steps accordingly. You also need to take a log backup after changing back to FULL recovery.
I guess Ola's script would suffice; if not, you would have to increase space on the drive where the log file resides. An index rebuild is fully logged under the FULL recovery model.
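If you do decide to switch recovery models around the rebuild, a minimal sketch (the database, table, and backup path below are placeholders):

ALTER DATABASE [YourDb] SET RECOVERY BULK_LOGGED;
ALTER INDEX ALL ON dbo.YourBigTable REBUILD;  -- minimally logged under BULK_LOGGED
ALTER DATABASE [YourDb] SET RECOVERY FULL;
BACKUP LOG [YourDb] TO DISK = N'D:\Backups\YourDb_log.trn';  -- restarts the point-in-time chain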
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP

  • Crystal Report Server Database Log File Growth Out Of Control?

We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise. Our Crystal Report Server SQL 2005 database file size is 6,272 KB, and the log file that goes with the database has a size of 23,839,552.
I have been reviewing the application logs, and this log file is auto-growing about three times a week.
We back up the database each night and run maintenance routines to check database integrity, reorganize indexes, rebuild indexes, update statistics, and back up the database.
Is it "normal" to have such a large LOG file compared to the DATABASE file?
Can you tell me if there is a recommended way to SHRINK the log file?
Some technical documents suggest first truncating the log and then using the DBCC SHRINKFILE command:
    USE CRS
    GO
    --Truncate the log by changing the database recovery model to SIMPLE
    ALTER DATABASE CRS
    SET RECOVERY SIMPLE;
    --Shrink the truncated log file to 1 gigabyte
    DBCC SHRINKFILE (CRS_log, 1000);
    GO
    --Reset the database recovery model.
    ALTER DATABASE CRS
    SET RECOVERY FULL;
    GO
    Do you think this approach would help?
    Do you think this approach would cause any problems?

My bad, you didn't put the K on the 2nd number.
Looking at my SQL Server, that's crazy big; my logs are in the KBs, like 4-8.
I think someone enabled some type of debugging on your SQL Server; it's more of a Microsoft issue, as our product doesn't require it, judging from my SQL DBs.
    Regards,
    Tim
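One more note: if the CRS database stays in the FULL recovery model, the log will keep growing unless log backups run regularly, so shrinking alone won't prevent a recurrence. A minimal sketch (the backup path is a placeholder):

BACKUP LOG CRS TO DISK = N'D:\Backups\CRS_log.trn';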

  • BRARCHIVE backup for high volume offline redo log files on Standby Database

    Hi All,
We are through with all of the standby database activity and have also started applying the offline redo log files on the standby site.
The throughput is not utilizing the actually available bandwidth, so we are not able to copy the offline redo files in time, and they are piling up on the production side.
My query is how we can copy the offline redo log files to the DR site in parallel (i.e., 4-5 redo files at a time).
Kindly guide us on the same.
    Regards,
    Shaibaz

hi,
I have one doubt.
On the other server (r3qas) the umask settings are as follows:

User        UMASK value
<sid>adm    077
ora<SID>    077
root        077

Running SAP System:  SAP R/3 4.6C
Running DBMS:        Oracle 9.0
Operating System:    HP-UX

On this system the new offline redo log files are created with 600 permissions. There is no problem here while taking the backup: I checked the last "r3qas-archive" backups and have not found a single error related to permissions or anything else (something like "Cannot open /oracle/RQ1/../.........dbf").
If everything is working fine with this umask setting on this server, then what's going wrong with the BW Quality server, which has the same umask settings (among others) for all the concerned users, as mentioned above?
    Regards,
    Bhavik Shroff

  • Use of standby redo log files in primary database

    Hi All,
    What is the exact use of setting up standby redo log files in the primary database on a data guard setup?
Any good documents?

    A standby redo log is required for the maximum protection and maximum availability modes and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone.
    You should plan the standby redo log configuration and create all required log groups and group members when you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.
Refer to the link below and perform the steps there to configure the standby redo log:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i1225703
    If the real-time apply feature is enabled, log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. This results in faster switchover and failover times because the standby redo log files have been applied already to the standby database by the time the failover or switchover begins.
Also refer to this link:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_apply.htm#i1023371
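For reference, adding a standby redo log group uses syntax like the following (a sketch only; the group number, path, and size are placeholders, and the size should match your online redo log files):

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/disk1/oracle/oradata/payroll/srl04.log') SIZE 100M;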

  • Alert log file of the database is too big

Dear Experts,
Let me update you that we are in the process of doing an R12.1 upgrade. Our current instance is running on 11.5.10 with 9.2.0.6.
We have the below challenge before going for the database upgrade:
We have observed that the customer database alert_SID.log (9.2.0.6) has grown to 2.5 GB. How do we purge this file? Please advise.
Please also note that our instance is running on Oracle Enterprise Linux 4 update 8.
Regards,
Mohammed.

Rename the alert log file. Once you rename it, a new alert log file will be created and populated over time, and later you can delete the old alert log file. It does not harm your database.
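If you need to locate the alert log before renaming it, the background dump destination parameter points at its directory (a standard query on 9i):

SQL> SELECT value FROM v$parameter WHERE name = 'background_dump_dest';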
--neeraj

  • Log file Growth query

    Hi All,
Is there any query to set the log file growth to unlimited in SQL 2000? I am unable to do that through the GUI and am getting a transaction error. Please suggest a query.
    Thanks & Regards,
    Venkat.

                     
    "As I said, you cannot set log files to "unrestricted".  You must set them to a number.  "
    Not true. From
    http://msdn.microsoft.com/en-us/library/bb522469.aspx:
MAXSIZE { max_size | UNLIMITED }
Specifies the maximum file size to which the file can grow.
max_size
    Is the maximum file size. The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes, or terabytes. The default is MB. Specify a whole number and do not include a decimal. If max_size is not specified, the file size will increase until the disk is full.
UNLIMITED
    Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a maximum size of 2 TB, and a data file has a maximum size of 16 TB. There is no maximum size when this option is specified for a FILESTREAM container. It continues to grow until the disk is full.
    In other words, you _can_ set a log file's MAXSIZE to UNLIMITED and you do _not_ have to specify a number, but SQL Server will _not_ grow a log file beyond 2TB (even when you try to allow it)
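For completeness, a minimal sketch of setting this via T-SQL rather than the GUI (the database and logical file names are placeholders; check the logical name with sp_helpfile first):

ALTER DATABASE YourDb
MODIFY FILE (NAME = YourDb_log, MAXSIZE = UNLIMITED);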

  • Dropping log file in standby database

    Please,
    I need a help for the following issue:
I'm preparing technical documentation on various events that occur in a Data Guard configuration. I just dropped a redo log group on the primary database, and when I try to drop the equivalent log group on the standby database I get the following error:
    SQL> alter database drop logfile group 3;
    alter database drop logfile group 3
    ERROR at line 1:
    ORA-01156: recovery in progress may need access to files
    this is the current state of the redolog file on standby database.
SQL> select group#,members,status from v$log;

GROUP#    MEMBERS    STATUS
1         3          CLEARING_CURRENT
3         3          CLEARING
2         3          CLEARING
Even when I issue the following command on the standby, I also get an error.
    SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
    ALTER DATABASE CLEAR LOGFILE GROUP 3
    ERROR at line 1:
    ORA-01156: recovery in progress may need access to files
Can someone tell me how, in a Data Guard configuration, to drop a redo log file on the primary database and its corresponding file on the standby database?
I'm working on 10g Release 2, on Windows.
Thank you

Oracle Data Guard Concepts and Administration Release 2 (B14239) is my source, but it doesn't work when trying to drop a standby group or logfile member.
For example, if the primary database has 10 online redo log files and the standby database has 2, and then you switch over to the standby database so that it functions as the new primary database, the new primary database is forced to archive more frequently than the original primary database.
Consequently, when you add or drop an online redo log file at the primary site, it is important that you synchronize the changes in the standby database by following these steps:
1. If Redo Apply is running, you must cancel Redo Apply before you can change the log files.
2. If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO, change the value to MANUAL.
3. Add or drop an online redo log file:
   - To add an online redo log file, use a SQL statement such as this:
     SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log' SIZE 100M;
   - To drop an online redo log file, use a SQL statement such as this:
     SQL> ALTER DATABASE DROP LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log';
4. Repeat the statement you used in Step 3 on each standby database.
5. Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states.
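For steps 1 and 2, the corresponding commands on the standby would be along these lines (standard Data Guard syntax; the ORA-01156 above is exactly what you get when Redo Apply is still running):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';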
Thanks

  • Auditing going to log file rather than database

    Post Author: guytang
    CA Forum: Administration
I have set up auditing within Business Objects XI R2 with Auditor. Some of the elements being audited are going to the database and other elements are going to log files. How do I change this to make sure that all the audited elements go to the database?
    Thank you for your assistance,
    Guy


  • LDB files or log files in ORACLE database ?

    Hi Experts,
Like the LDB files in Microsoft Access, is there any sort of file in Oracle which records the transactions and updates done in a particular database?
    Pls help me if you have any idea on this.
    Thanks in advance.

If you are comparing Access with Oracle, you are in for many shocks and surprises. In Oracle we have a database containing logical structures called tablespaces, each made up of data files (physical files at the operating system level) that store the data. Transactions are protected by the redo logs, and the old image of data is kept in undo tablespaces, and so on.
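To see the redo log files that play this transaction-recording role, a quick query against the standard dictionary views:

SQL> SELECT group#, member FROM v$logfile ORDER BY group#;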
    regards

  • Not able to add new log file to the 11g database.

Hi DBAs,
I am not able to add a log file; I am getting the following error while altering the database.
    SQL> alter database add logfile group 3 ('/oracle/DEV/db/apps_st/data/log03a.dbf','/oracle/DEV/db/apps_st/data/log03a.dbf') size 50m reuse;
    alter database add logfile group 3 ('/oracle/DEV/db/apps_st/data/log03a.dbf','/oracle/DEV/db/apps_st/data/log03a.dbf') size 50m reuse
    ERROR at line 1:
    ORA-01505: error in adding log files
    ORA-01577: cannot add log file '/oracle/DEV/db/apps_st/data/log03a.dbf' - file
    already part of database
    SQL> select a.group#, member, a.status from v$log a, v$logfile b where a.group# = b.group# order by 1;
GROUP#    MEMBER                                     STATUS
1         /oracle/DEV/db/apps_st/data/log01a.dbf     ACTIVE
1         /oracle/DEV/db/apps_st/data/log01b.dbf     ACTIVE
2         /oracle/DEV/db/apps_st/data/log02a.dbf     CURRENT
2         /oracle/DEV/db/apps_st/data/log02b.dbf     CURRENT
    Kindly help me to add the new log file to my database.
    Thanks,
    SG

Hi Sawwan,
V$LOGMEMBER was mentioned in the document;
I queried the log members as below:
    1)select a.group#, member, a.status from v$log a, v$logfile b where a.group# = b.group# order by 1;
GROUP#    MEMBER                                     STATUS
1         /oracle/DEV/db/apps_st/data/log01a.dbf     INACTIVE
1         /oracle/DEV/db/apps_st/data/log01b.dbf     INACTIVE
2         /oracle/DEV/db/apps_st/data/log02a.dbf     CURRENT
2         /oracle/DEV/db/apps_st/data/log02b.dbf     CURRENT
    2)SQL> select group#,member,status from v$logfile;
GROUP#    MEMBER                                     STATUS
2         /oracle/DEV/db/apps_st/data/log02a.dbf
2         /oracle/DEV/db/apps_st/data/log02b.dbf
1         /oracle/DEV/db/apps_st/data/log01a.dbf
1         /oracle/DEV/db/apps_st/data/log01b.dbf
But I am a little bit confused: per the above queries there is no group called "group 3" and no log file called "log03a.dbf", so how can I drop that group and file? I also cross-verified in the data top whether the files exist, and they do not, but I still get the same error saying that the file I want to create already exists.
Can I issue the below query to drop the group, which I don't think exists?
SQL> alter database drop logfile group 3;
    Thanks in advance.
    Regards,
    SG
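One thing worth pointing out: the failing ALTER DATABASE statement above lists the same file, log03a.dbf, for both members of the group, which by itself can raise ORA-01577. A corrected sketch giving each member a distinct file name (log03b.dbf is a placeholder for the second member):

SQL> alter database add logfile group 3
     ('/oracle/DEV/db/apps_st/data/log03a.dbf',
      '/oracle/DEV/db/apps_st/data/log03b.dbf') size 50m reuse;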

  • Help! SQL server database log file increasing enormously

I have 5 SSIS jobs running in the SQL Server job agent, and some of them pull transactional data into our database every 4 hours. The problem is that the log file of our database is growing rapidly; it eats up 160 GB of disk space in a day. Since our requirements don't need point-in-time recovery, I set the recovery model to SIMPLE, but even then the log consumes more than 160 GB in a day. Because the disk fills up, the scheduled jobs often fail. Temporarily I am using the DETACH approach to clean up the log.
FYI: all the SSIS packages in the jobs use transactions on some tasks, e.g. a Sequence Container.
I want a permanent solution to keep the log file within a particular size limit, and as I said earlier, I don't want the log data for future point-in-time recovery, so there is no need to take log backups at all.
And one more problem: in our database the transactional table has 10 million records and some master tables have over 1,000 records, but our mdf file size is now about 50 GB. I don't believe these 10 million records should amount to 50 GB of space. What's the problem here?
Help me with these issues. Thanks in advance.

For the SSIS part of the question it would be better to ask in the SSIS forum, although nothing is going to change about the logging behavior. You can add some space to the log file, and you should also batch your transactions, as already suggested.
Regarding the memory question about SQL Server: once it acquires memory, it does not release it unless the Windows OS comes under memory pressure and SQLOS asks SQL Server to trim its memory consumption. So if you have set max server memory to somewhere near 50, SQL Server will eventually use that much memory. What you are seeing is totally normal: it is costly for SQL Server to release and re-acquire memory, so it avoids that by caching as much as possible, which also avoids costly physical reads.
When the log file is getting full, what does the query below return?
select log_reuse_wait_desc from sys.databases where name='db_name'
Can you manually introduce a checkpoint in the ETL query? Try this; it might help you.
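A minimal sketch of such a manual checkpoint between ETL batches (the database name is a placeholder; under SIMPLE recovery a checkpoint lets the inactive portion of the log be reused):

USE YourDb;
CHECKPOINT;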
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
    My Technet Wiki Article
    MVP
