Log file Growth query
Hi All,
Is there a query to set the log file growth to unlimited in SQL 2000? I am unable to do that through the GUI and am getting a transaction error. Please suggest a query.
Thanks & Regards,
Venkat.
"As I said, you cannot set log files to "unrestricted". You must set them to a number. "
Not true. From
http://msdn.microsoft.com/en-us/library/bb522469.aspx:
MAXSIZE { max_size| UNLIMITED }
Specifies the maximum file size to which the file can grow.
max_size
Is the maximum file size. The KB, MB, GB, and TB suffixes can be used to specify kilobytes, megabytes, gigabytes, or terabytes. The default is MB. Specify a whole number and do not include a decimal. If
max_size is not specified, the file size will increase until the disk is full.
UNLIMITED
Specifies that the file grows until the disk is full. In SQL Server, a log file specified with unlimited growth has a maximum size of 2 TB, and a data file has a maximum size of 16 TB. There is no maximum size when this option is specified for a FILESTREAM
container. It continues to grow until the disk is full.
In other words, you _can_ set a log file's MAXSIZE to UNLIMITED and you do _not_ have to specify a number, but SQL Server will _not_ grow a log file beyond 2 TB (even when you try to allow it).
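To apply this from a query window (which is what the original SQL 2000 question asks for), the statement would look like the following sketch; the database and logical file names here are placeholders for illustration:

```sql
-- Remove the growth cap on the log file (effective 2 TB limit for logs).
-- 'MyDatabase' and 'MyDatabase_Log' are placeholder names; find the real
-- logical file name with: EXEC sp_helpfile (run in the target database).
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, MAXSIZE = UNLIMITED);
```

This syntax is accepted from SQL Server 2000 onward, so it should work where the GUI fails.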
Similar Messages
-
How to design SQL server data file and log file growth
How to design SQL DB data file and log file growth - SQL Server 2012.
If my data file is 10 GB and my log file is 5 GB,
what should the autogrowth size be in MB (not in %)? Based on what do we determine the ideal size of file autogrowth? It's very difficult to give a definitive answer on this. The best principle is to size your database correctly in advance so that you never have to autogrow; of course, in reality that isn't always practical.
The setting you use is really dictated by the expected growth in your files. Given that the size is relatively small, why not set it to 1 GB on the data file(s) and 512 MB on the log file? The important thing is to monitor it on an on-going basis to see if that's
the appropriate amount.
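Those fixed-size growth settings could be applied with something like this (the database and logical file names are placeholders; check yours with sp_helpfile):

```sql
-- Switch from percentage-based to fixed-size autogrowth.
-- 'MyDatabase', 'MyDatabase_Data' and 'MyDatabase_Log' are illustrative names.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 1GB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 512MB);
```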
One thing you should do is enable instant file initialization by granting the service account the Perform Volume Maintenance Tasks right in Group Policy. This will allow the data files to grow quickly when required; details here:
https://technet.microsoft.com/en-us/library/ms175935%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396
Also, it is possible to query the default trace to find autogrowth events; if you wanted, you could write an alert/SQL job based on this:
SELECT
[DatabaseName],
[FileName],
[SPID],
[Duration],
[StartTime],
[EndTime],
CASE [EventClass]
WHEN 92 THEN 'Data'
WHEN 93 THEN 'Log'
END AS [FileType]
FROM sys.fn_trace_gettable('c:\path\to\trace.trc', DEFAULT)
WHERE
EventClass IN (92,93)
hope that helps -
Very high transaction log file growth
Hello
Running Exchange 2010 SP2 in a two-node DAG configuration. Just recently I have noticed very high transaction log file growth for one database. The transaction logs are growing so quickly that I have had to turn on circular logging in order to prevent the log LUN from filling up and causing the database to dismount. I have tried several things to find out what is causing this issue. At first I thought this could be happening because of a virus, an ActiveSync user, a user's Outlook client, or our Salesforce integration; however, when I used ExMon, I could not see any unusually high user activity. Also, when I looked at the item count for all mailboxes in the particular database that is experiencing the high transaction log file growth, I could not see any mailboxes with an unusually high item count; below is the command I ran to determine this (I ran it several times). I also looked at the message tracking log files, and again could see no indication of a message loop or unusually high message traffic for any particular day. I also followed the guide below hoping that it would allow me to see inside the transaction log files, but it didn't produce anything that would help me understand the cause of this issue. When I ran the tool against the transaction log files, I saw DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD, or OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO, or HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH.
I am starting to run out of ideas on how to figure out what is causing the log file build up. Any help is greatly appreciated.
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Get-Mailbox -database databasethatkeepsgrowing | Get-MailboxStatistics | Sort-Object ItemCount -descending |Select-Object DisplayName,ItemCount,@{name="MailboxSize";exp={$_.totalitemsize}} -first 10 | Convertto-Html | out-File c:\temp\report.htm
Bulls on Parade
If you have users with iPhones or smartphones using ActiveSync, then one of the quickest ways to see if this is the issue is to have those users shut their phones off and see if the problem is resolved. If it is one or more iPhones, then perhaps look at what iOS they are on and get them to update to the latest version, or adjust the ActiveSync connection timeout. NOTE: There was an issue where iPhones caused runaway transaction logs, and I believe it was resolved with iOS 4.0.1.
There was also a problem with the MS CRM client a while back, so if you are using that, check out this link.
http://social.microsoft.com/Forums/en/crm/thread/6fba6c7f-c514-4e4e-8a2d-7e754b647014
I would also deploy some tracking methods to see if you can home in on the culprits; i.e., if you want to see whether the problem is coming from an internal device/machine, you can use one of the following:
MS USER MONITOR:
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en and here is a link on how to use it
http://www.msexchange.org/tutorials/Microsoft-Exchange-Server-User-Monitor.html
And this is a great article as well
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Also check out ExMon, since you can use it to confirm which mailbox is unusually active, and then take the appropriate action.
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en
Troy Werelius
www.Lucid8.com
Search, Recover, & Extract Mailboxes, Folders, & Email Items from Offline EDB's and Live Exchange Servers with Lucid8's DigiScope -
Crystal Report Server Database Log File Growth Out Of Control?
We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise. Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
I have been reviewing the application logs, and this log file is auto-growing about three times a week.
We backup the database each night, and run maintenance routines to Check Database Integrity, re-organize index, rebuild index, update statistics, and backup the database.
Is it "Normal" to have such a large LOG file compared to the DATABASE file?
Can you tell me if there is a recommended way to SHRINK the log file?
Some technical documents suggest first truncating the log, and then using the DBCC SHRINKFILE command:
USE CRS
GO
--Truncate the log by changing the database recovery model to SIMPLE
ALTER DATABASE CRS
SET RECOVERY SIMPLE;
--Shrink the truncated log file to 1 gigabyte
DBCC SHRINKFILE (CRS_log, 1000);
GO
--Reset the database recovery model.
ALTER DATABASE CRS
SET RECOVERY FULL;
GO
Do you think this approach would help?
Do you think this approach would cause any problems?
My bad, you didn't put the K on the 2nd number.
Looking at my SQL Server, that's crazy big; my logs are in the KBs, like 4-8.
I think someone enabled some type of debugging on your SQL Server. It's more of a Microsoft issue, as our product doesn't require it, from looking at my SQL DBs.
Regards,
Tim -
Log File Growth after Database ReIndexing
Hi,
After doing BizTalk MsgBox and DTA DB re-indexing by executing the bts_RebuildIndexes and dtasp_RebuildIndexes stored procedures respectively, it has been observed that the transaction log size for both DBs went high, and also that BizTalk jobs (like DTA Purge) were
not completing.
I am using BTS2006 and SQL2005.
Because of the growth, we had to introduce extra storage. But it seems that the size is under control now.
Kindly help me to understand what went wrong or why it happened?
Thanks,
Sugata
Ideally, while running the stored proc for rebuilding indexes, there shouldn't be any processing happening.
So it's suggested to stop all the host instances, SQL Agent, and the IIS app pool if you have any SOAP/WCF receive locations.
You can run MBV report from below and check if it reports any issues.
Message Box Viewer - http://blogs.technet.com/b/jpierauc/archive/2007/12/18/msgboxviewer.aspx
Later use Terminator tool to address the concerns it reports. You may have to repair references.
http://www.microsoft.com/en-in/download/details.aspx?id=2846
Also, run the below command for each database and check that the output doesn't report any errors (red outcome):
USE <DatabaseName>
GO
DBCC CHECKDB
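To run the check across every database in one pass, a sketch using the undocumented but widely used sp_MSforeachdb procedure:

```sql
-- Runs DBCC CHECKDB against each database on the instance.
-- sp_MSforeachdb is undocumented; the '?' token is replaced with each DB name.
EXEC sp_MSforeachdb 'DBCC CHECKDB ([?]) WITH NO_INFOMSGS';
```

With NO_INFOMSGS, only errors are reported, which makes it easier to spot problems.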
Let us know if you are still facing any issue.
Thanks,
Prashant
Please mark this post accordingly if it answers your query or is helpful. -
CREATE DATABASE with data file and log file in query pane
Hi everyone,
After I ran the below code I got the following error message. Can someone help me fix this?
Thanks
CREATE DATABASE project
ON
(Name= 'project_dat',
FILENAME ='C:\project.mdf',
SIZE = 10,
MAXSIZE = 100,
FILEGROWTH = 5)
LOG ON
(NAME = project_log,
FILENAME = 'C:\project.ldf',
SIZE =40,
MAXSIZE = 100,
FILEGROWTH = 10);
Msg 5123, Level 16, State 1, Line 1
CREATE FILE encountered operating system error 5(Access is denied.) while attempting to open or create the physical file 'C:\project.mdf'.
Msg 1802, Level 16, State 4, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
skilo
Hello,
Please go through the support site:
Use SQL Server Enterprise Manager
Note The instance of SQL Server Enterprise Manager that is included with SQL Server 7.0 does not support setting the default data directory and the default log directory. However, you can register your instance of SQL Server 7.0 in the instance
of SQL Server Enterprise Manager that is included with SQL Server 2000, and you can then follow these steps to set the default data directory and the default log directory for your instance of SQL Server 7.0.
Click Start, point to Programs, point to
Microsoft SQL Server, and then click Enterprise Manager.
In SQL Server Enterprise Manager, right-click your instance of SQL Server, and then click
Properties.
In the SQL Server Properties (Configure) - <Instance Name> dialog box, click the
Database Settings tab.
In the New database default location section, type a valid folder path in the
Default data directory box and in the Default log directory box.
Click OK.
Stop your instance of SQL Server, and then restart your instance of SQL Server.
Ahsan Kabir Please remember to click Mark as Answer and Vote as Helpful on posts that help you. This can be beneficial to other community members reading the thread. http://www.aktechforum.blogspot.com/ -
Cancel the query which uses full transaction log file
Hi,
We have a reindexing job that runs every Sunday. During the last run, the transaction log got full and the subsequent transactions against the database errored out stating 'Transaction log is full'. I want to restrict the utilization of the log file; that is, when the reindexing job brings log file utilization to a certain threshold, the job should automatically be cancelled. Is there any way to do this?
Hello,
Instead of putting a limit on the transaction log, it would be better to find out the cause of the high utilization. Even if you find that your log is growing because of some transaction, it would be a blunder to roll it back; it's a little easier to do that for an index rebuild, but if you cancel some delete operation you would end up in a mess. Please don't create a program to delete or kill a running operation.
You can create a custom job to alert on transaction log file growth. That would be good.
From 2008 onwards, index rebuild is fully logged, so sometimes it causes transaction log issues. To solve this, run index rebuilds only for specific or selective tables.
The other option is the widely accepted Ola Hallengren script for index rebuilds. I suggest you try this:
http://ola.hallengren.com/
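The custom alert job mentioned above could start from a log-space check such as this sketch (the 80% threshold is an example value, not a recommendation):

```sql
-- Capture DBCC SQLPERF(LOGSPACE) output and flag databases whose log
-- is more than 80% full; the table shape matches the command's result set.
CREATE TABLE #LogSpace (
    DatabaseName    sysname,
    LogSizeMB       float,
    LogSpaceUsedPct float,
    Status          int
);
INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE)');

SELECT DatabaseName, LogSizeMB, LogSpaceUsedPct
FROM #LogSpace
WHERE LogSpaceUsedPct > 80.0;  -- illustrative threshold

DROP TABLE #LogSpace;
```

A SQL Agent job could run this on a schedule and raise an alert (RAISERROR or Database Mail) whenever rows come back.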
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
Log Files out of Control - How to manage size?
This is a three-part question:
1) Our Apache2 error log has grown to 41 GB!!! How can we clear it?
2) Is there a way to limit log file growth?
3) Is there an application to manage log files on a server?
We are running Leopard Server 10.5.x.
Thanks!
1) How do we set up Apache to rotate logs? I was checking Server Admin -> Web service for configuration options, but didn't see any (we did the advanced server configuration).
It's automatic, and AFAIK enabled by default within Mac OS X Server. If you're piling up stuff in your logs, then your server is either very busy, or there are issues or problems being reported in the logs.
2) Where in server admin?
Server Admin > select server > Web > Sites > Logging
Or as an alternative approach toward learning more about Mac OS X Server and its technologies, download the PDF of the relevant [Apple manual|http://www.apple.com/server/macosx/resources/documentation.html]. Here, you can brute-force search the manual in the Preview tool. Depending on how you best learn, you can read through the various manuals for details on how to configure and operate and troubleshoot the various components, and (for more detail than is available in the Mac OS X Server manuals) for pointers to the component-specific web sites and documents, too. -
Location of query log files in OBIEE 11g (version 11.1.1.5)
Hi,
I wish to know the location of the query log files in OBIEE 11g (version 11.1.1.5).
Hi,
Log Files in OBIEE 11g
Login to the URL http://server.domain:7001/em and navigate to:
Farm_bifoundation_domain-> Business Intelligence-> coreapplications-> Diagnostics-> Log Messages
You will find the available files:
Presentation Services Log
Server Log
Scheduler Log
JavaHost Log
Cluster Controller Log
Action Services Log
Security Services Log
Administrator Services Log
However, you can also review them directly on the hard disk.
The log files for OBIEE components are under <OBIEE_HOME>/instances/instance1/diagnostics/logs.
Specific log files and their location is defined in the following table:
Log Location
Installation log <OBIEE_HOME>/logs
nqquery log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
nqserver log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
servername_NQSAdminTool log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
servername_NQSUDMLExec log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
servername_obieerpdmigrateutil log (Migration log) <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1
sawlog0 log (presentation) <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1
jh log (Java Host) <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIJavaHostComponent/coreapplication_obijh
webcatupgrade log (Web Catalog Upgrade) <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1
nqscheduler log (Agents) <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBISchedulerComponent/coreapplication_obisch1
nqcluster log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIClusterControllerComponent/coreapplication_obiccs1
ODBC log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OracleBIODBCComponent/coreapplication_obips1
opmn log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
debug log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
logquery log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
service log <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
opmn out <OBIEE_HOME>/instances/instance1/diagnostics/logs/OPMN/opmn
Upgrade Assistant log <OBIEE_HOME>/Oracle_BI1/upgrade/logs
Regards
MuRam -
Webanalysis : Query Log File
Hi,
Please suggest where I can find the path for a report's query log file; i.e., I run a WebAnalysis report and now I want to see the query that it fires to the source.
I want to see the detail-level log.
Is there any other setting that I need to make, like an entry in a property file or something?
Please suggest the way.
Thanks
Create two env variables:
ADM_TRACE_LEVEL=0
REDIRECTOR_TRACE_LEVEL=0
Also you can add this line to WA.prop file
LogQueries=true
Hope it helps
Regards
CK -
Query log file location?
Is a log file created when a query has completed execution? If yes, please tell me its location.
Put SET TIMING ON before executing the query in SQL*Plus, like:
SQL> set timing on
SQL> select 1 from dual;
1
1
Elapsed: 00:00:00.00
To turn tracing on, just do the following:
SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT 1 FROM DUAL;
Elapsed: 00:00:00.01
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1)
1 0 FAST DUAL (Cost=2 Card=1)
Statistics
1 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
419 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
Regards
Edited by: Virendra.k.Yadav on Aug 20, 2010 2:37 AM -
I am trying to create a query log file for an ASO cube. I have created the database.cfg file in the database directory and restarted the application. Then I run a few queries and stop the application. The query log file (database.qlg) is not created. My database.cfg file looks like this:
QUERYLOG [LOB]
QUERYLOG LOGFILESIZE 2
QUERYLOG TOTALLOGFILESIZE 1024
QUERYLOG ON
Thanks
-
Query runs every minute from B1 client - read B1 log file
Hi all,
We found that this query is being run every minute from the B1 client. It slows down the system, as we have over 100,000 records in table OCLG (activities). How do we stop this query from being run from the B1 client? Also, does anyone know what the Duration number in the B1 log file means? Does it mean CPU time? We got Duration=3581 for this query, but it takes about 10-15 seconds to run the same query from SQL 2005 Management Studio. Any idea?
Thanks,
David
============================================================================================
16/02/2010 12:10:42:295830 SQLMessage Note ExecDirectInt C:\Program Files\SAP\SAP Business One\SAP Business One.exe PID=1972 TID=1920 Duration=3581 Fetched=0
Query
SELECT T0.[ClgCode], T0.[Action], T0.[Details], T1.[Name], T0.[Recontact], T0.[BeginTime], T0.[AttendUser] FROM [dbo].[OCLG] T0 LEFT OUTER JOIN [dbo].[OCLO] T1 ON T1.[Code] = T0.[Location] WHERE T0.[Reminder] = (N'Y' ) AND T0.[RemSented] = (N'N' ) AND (T0.[RemDate] < (CONVERT(DATETIME, '20100216', 112) ) OR (T0.[RemDate] = (CONVERT(DATETIME, '20100216', 112) ) AND T0.[RemTime] <= (1210 ) ))
Thanks Paulo,
I found another two queries that also run every minute on each B1 workstation. What is the measurement for the duration number in the log file, like Duration=1391?
David
============================================================================================
17/02/2010 11:32:46:620527 SQLMessage Note ExecDirectInt C:\Program Files\SAP\SAP Business One\SAP Business One.exe PID=1868 TID=3236 Duration=1391 Fetched=21
Query SELECT T0.[ClgCode], T0.[AttendUser], T0.[Closed], T0.[Recontact], T0.[endDate], T0.[Action], T0.[BeginTime], T0.[ENDTime], T0.[Duration], T0.[DurType], T0.[Details], T0.[Notes], T0.[personal] FROM [dbo].[OCLG] T0 WHERE (T0.[Recontact] >= CONVERT(DATETIME, '20100215', 112) AND T0.[Recontact] <= CONVERT(DATETIME, '20100221', 112) ) AND T0.[endDate] = T0.[Recontact] AND T0.[inactive] = (N'N' ) AND T0.[BeginTime] IS NOT NULL AND T0.[ENDTime] IS NOT NULL AND (T0.[Action] = (N'C' ) OR T0.[Action] = (N'M' ) OR T0.[Action] = (N'N' ) ) AND (T0.[AttendUser] = (612 ) ) ORDER BY T0.[Recontact],T0.[BeginTime]
17/02/2010 11:32:54:917455 SQLMessage Note ExecDirectInt C:\Program Files\SAP\SAP Business One\SAP Business One.exe PID=1868 TID=3236 Duration=487 Fetched=0
Query SELECT T0.[ClgCode], T0.[AttendUser], T0.[Closed], T0.[Recontact], T0.[endDate], T0.[Action], T0.[BeginTime], T0.[ENDTime], T0.[Duration], T0.[DurType], T0.[Details], T0.[Notes], T0.[personal] FROM [dbo].[OCLG] T0 WHERE (((T0.[Recontact] >= CONVERT(DATETIME, '20100215', 112) AND T0.[Recontact] <= CONVERT(DATETIME, '20100221', 112) ) OR (T0.[endDate] >= CONVERT(DATETIME, '20100215', 112) AND T0.[endDate] <= CONVERT(DATETIME, '20100221', 112) ) ) OR (T0.[Recontact] < (CONVERT(DATETIME, '20100215', 112) ) AND T0.[endDate] > (CONVERT(DATETIME, '20100221', 112) ) )) AND T0.[endDate] <> T0.[Recontact] AND T0.[inactive] = (N'N' ) AND T0.[BeginTime] IS NOT NULL AND T0.[ENDTime] IS NOT NULL AND (T0.[Action] = (N'C' ) OR T0.[Action] = (N'M' ) OR T0.[Action] = (N'N' ) ) AND (T0.[AttendUser] = (612 ) ) ORDER BY T0.[Recontact],T0.[BeginTime] -
WebDAV Query generates a high number of transaction log files
Hi all,
I have a program that launch WebDAV queries to search for contacts on an Exchange 2007 server. The number of contacts returned for each user's mailbox is quite high (about 4500).
I've noticed that each time the query is launched, about 15 transaction log files are generated on the Exchange server (each of them 1Mb). If I ask only for 2 properties on the contacts, this number is reduced to about 8.
This is a problem since our program is supposed to launch often (about every 3/5min) as It will synchronize Exchange mailboxes with a SQL Server DB. The result is that the logs increase very quickly on the server side, even if there are not so many updates.
Any idea why so many transaction logs are generated when doing a WebDAV search returning many items? I would understand that logs are created when an update is done on the server, but here it's only a search with many contacts items returned.
Is there maybe a setting on the Exchange server to control what kind of logs to generate?
Thanks for your help,
Alexandre
Hi Alex,
Actually, circular logging/backup was not a solution; I was just explaining that there is an option like that on the server, but it is not recommended and hence not useful in our case :)
- I am not a developer, but AFAIK a WebDAV search query shouldn't generate transaction logs, because it just searches the mailboxes, returns the result in HTTP format, and doesn't produce any Exchange transaction.
- I wouldn't open the transaction logs, since they are in use by Exchange; doing so may generate errors and may even corrupt the Exchange database. In any case they are not readable, as you observed, by anything other than the Exchange Information Store service (store.exe).
- You can post this query in the development forum to get a better idea on this, in case any other programmer has observed similar symptoms while using a WebDAV contact search query in Exchange 2007, or can validate your query.
Microsoft TechNet > Forums Home > Exchange Server > Development
Well, I just saw that you are using Exchange 2007; in that case, why don't you use Exchange Web Services, which is a better and improved method to access/query mailboxes? WebDAV is de-emphasized in Exchange 2007 and might disappear in the next version of Exchange. Check out the below article for further detail.
Development: Overview
http://technet.microsoft.com/en-us/library/aa997614.aspx
Amit Tank | MVP - Exchange | MCITP:EMA MCSA:M | http://ExchangeShare.WordPress.com -
Log file becomes ridiculously massive
The Windows system is Windows 7 32 bits
SQL server is Microsoft SQL 2008 R2 express
The database data file is around 120MB and the log file is 400GB
The recovery model is simple
Can anyone tell me what possible reasons could have caused the log file to become so huge? Thank you in advance.
There are various reasons for abnormal growth of your log file.
Try to figure out whether there is any open transaction in your database using DBCC OPENTRAN.
Also run this query:
SELECT log_reuse_wait_desc FROM sys.databases WHERE name = 'DATABASE_NAME'
and analyze the result; it will tell you the likely reason for the log growth.
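Once the wait reason has been addressed (e.g. the open transaction has committed), the now-inactive log can be shrunk back down. A sketch, where 'MyDB' and 'MyDB_log' are placeholder names; get the real logical file name from sys.database_files:

```sql
-- In SIMPLE recovery a CHECKPOINT truncates the inactive part of the log,
-- after which the physical file itself can be shrunk.
USE MyDB;
CHECKPOINT;
DBCC SHRINKFILE (MyDB_log, 1024);  -- target size in MB
```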
Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
Praveen Dsa | MCITP - Database Administrator 2008 |
My Blog | My Page