Why is the transaction log file not truncated even though it uses the simple recovery model?
My database uses the simple recovery model, and when I view the free space in the log file it shows 99%. Why doesn't my log file automatically truncate the committed
data to free space in the ldf file? When I shrink it, it does shrink. Please advise.
mayooran99
If log records were never deleted (truncated) from the transaction log, it wouldn't show as 99% free. Under the simple recovery model,
log truncation automatically frees space in the logical log for reuse, and that is what you are seeing. Truncation won't change the file size. It is more like
log clearing, marking
parts of the log free for reuse.
As you said, "When I shrink it does shrink", so I don't see any issue here. Log truncation and shrinking the file are two different things.
Please read the link below to understand "Transaction log Truncate vs Shrink":
http://blog.sqlxdetails.com/transaction-log-truncate-why-it-didnt-shrink-my-log/
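To see the distinction for yourself, here is a minimal sketch (the database and log file logical names YourDb / YourDb_log are hypothetical placeholders):

```sql
-- In simple recovery, truncation happens at checkpoint: the "Log Space Used (%)"
-- figure drops but the physical file size stays the same.
DBCC SQLPERF(LOGSPACE);
CHECKPOINT;
DBCC SQLPERF(LOGSPACE);   -- used % lower; file size unchanged

-- Shrinking is the separate operation that actually changes the file size:
USE YourDb;
DBCC SHRINKFILE (YourDb_log, 1024);   -- target size in MB
```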
Similar Messages
-
How to find out whether the transaction logs are being truncated or not
Hi,
We are using Veritas Backup tool for Backups and restores on our MS SQL 2000.
One of our Veritas techs has disabled truncation of the transaction log in the backup job setup in Veritas. We want to confirm whether truncation is happening or not; I don't see any difference in the transaction log file size.
How to find out whether truncation is active or not?
Thanks
Vijay
Hello Vijay,
On MSSQL, truncating the transaction log does not shrink the size of the transaction log file. It simply removes committed content from the log (writing it to the backup first),
meaning the free percentage within the transaction log will increase.
If you want to resize the transaction log file, you need to do something else.
The shrinking procedure is given here:
http://support.microsoft.com/kb/907511
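On SQL Server 2005 and later you can ask the engine directly whether (and why) log truncation is being held up; on a SQL 2000 box like the one above, comparing DBCC SQLPERF(LOGSPACE) before and after a log backup is the closest equivalent. A sketch:

```sql
-- Free percentage rising after a log backup means truncation is active.
DBCC SQLPERF(LOGSPACE);
-- SQL Server 2005+: shows what, if anything, is preventing log reuse.
SELECT name, log_reuse_wait_desc FROM sys.databases;
```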
Regards,
Siddhesh -
Unable to delete records as the transaction log file is full
My disk is running out of space, so I decided to free some space by deleting old data. I tried to delete in chunks of 100,000 rows, as there are 240 million records to be deleted, but I am unable to delete them all at once, and shrinking the database doesn't
free much space. This is the error I'm getting at times:
The transaction log for database 'TEST_ARCHIVE' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
How can I overcome this situation and delete all the old records? Please advise.
mayooran99
In order to delete rows, SQL Server needs to write the information to the log file, and you do not have room for those rows in the log file. You might succeed by deleting fewer rows at a time -> backing up the log after each batch -> then shrinking the
log file... but this is not the way that I would choose.
Best option is probably to add another disk (a simple disk does not cost a lot) and move the log file there permanently. It will help the database's performance as well (in most cases it is highly recommended not to put the log file on the same disk as the data file).
If you can't add a new disk permanently, then add one temporarily. Then add a file to the database on this disk -> create a new table on this disk -> move all the data that you do
not want to delete to the new table -> truncate the current table -> bring the data back from the new table -> drop the new table and the new file to release the temporary disk.
Are you using the full or simple recovery model?
* In full mode you have to back up the log file if you want to shrink it.
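Deleting in smaller batches with a log backup between batches (the full-recovery variant of the approach described above) might be sketched like this; the table, filter column, and backup path are hypothetical:

```sql
-- Repeatedly delete a bounded batch, backing up the log so its space can be reused.
WHILE 1 = 1
BEGIN
    DELETE TOP (100000) FROM dbo.TEST_ARCHIVE_TABLE   -- hypothetical table
    WHERE ArchiveDate < '2010-01-01';                 -- hypothetical filter

    IF @@ROWCOUNT = 0 BREAK;                          -- nothing left to delete

    -- Frees log space for reuse before the next batch (full recovery model):
    BACKUP LOG TEST_ARCHIVE TO DISK = N'E:\backup\TEST_ARCHIVE.trn';
END
```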
Ronen Ariely
[Personal Site] [Blog] [Facebook] -
How to send output from SQL script to the specified log file (not *.sql)
## 1 -I write sql command into sql file
echo "SELECT * FROM DBA_USERS;">results.sql
echo "quit;">>results.sql
##2- RUN sqlplus, run sql file and get output/results into jo.log file
%ORACLE_HOME/bin/sqlplus / as sysdba<results.sql>>jo.log
It doesn't work, please advise.
$ echo "set pages 9999" >results.sql ### this is only to make the output more readable
$ echo "SELECT * FROM DBA_USERS;" >>results.sql
$ echo "quit" >>results.sql
$ cat results.sql
set pages 9999
SELECT * FROM DBA_USERS;
quit
$ sqlplus -s "/ as sysdba" @results >jo.log
$ cat jo.log
USERNAME USER_ID PASSWORD
ACCOUNT_STATUS LOCK_DATE EXPIRY_DAT
DEFAULT_TABLESPACE TEMPORARY_TABLESPACE CREATED
PROFILE INITIAL_RSRC_CONSUMER_GROUP
EXTERNAL_NAME
SYS 0 D4C5016086B2DC6A
OPEN
SYSTEM TEMP 06/12/2003
DEFAULT SYS_GROUP
SYSTEM 5 D4DF7931AB130E37
OPEN
SYSTEM TEMP 06/12/2003
DEFAULT SYS_GROUP
DBSNMP 19 E066D214D5421CCC
OPEN
SYSTEM TEMP 06/12/2003
DEFAULT DEFAULT_CONSUMER_GROUP
SCOTT 60 F894844C34402B67
OPEN
USERS TEMP 06/12/2003
DEFAULT DEFAULT_CONSUMER_GROUP
HR 47 4C6D73C3E8B0F0DA
OPEN
EXAMPLE TEMP 06/12/2003
DEFAULT DEFAULT_CONSUMER_GROUP
That's only a part of the file, it's too long :-) -
Hi
I have a large database and I need to perform batch deleting without affecting the transaction log. If I set the recovery model to simple before deleting, will the transaction log not grow?
Thanks.
You CANNOT delete records in SQL Server without the operation being logged in the transaction log. Note that everything in SQL Server is logged, and how much is logged depends on the recovery model used. Under simple recovery, logging is
almost the same as under full; the difference is that after a checkpoint the log is truncated, and a checkpoint also fires when the log file reaches 70% of its size. Truncation can only be prevented IF some ongoing transaction is holding the VLF (virtual
log file) or still requires it.
So you made a good choice to delete in batches. Also have a look at lock escalation.
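A batched delete under the simple recovery model might be sketched like this (table name and filter are hypothetical); keeping each batch under roughly 5,000 rows also avoids lock escalation to a table lock:

```sql
-- CHECKPOINT between batches lets simple recovery reuse the log space,
-- so the log file does not need to keep growing.
WHILE 1 = 1
BEGIN
    DELETE TOP (4000) FROM dbo.BigTable   -- hypothetical table
    WHERE IsOld = 1;                      -- hypothetical filter

    IF @@ROWCOUNT = 0 BREAK;
    CHECKPOINT;   -- triggers log truncation in simple recovery
END
```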
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
The transaction log for database 'ECC' is full + ECC6.0 Installation Failur
Guyz,
my ECC6 installation failed after an 8-hour run with the following error log snippet...
exec sp_bindefault 'numc3_default','SOMG.MSGNO'
DbSlExecute: rc = 99
(SQL error 9002)
error message returned by DbSl:
The transaction log for database 'ECC' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
(DB) ERROR: DDL statement failed
(ALTER TABLE [SOMG] ADD CONSTRAINT [SOMG~0] PRIMARY KEY CLUSTERED ( [MANDT], [OBJTP], [OBJYR], [OBJNO] ) )
DbSlExecute: rc = 99
(SQL error 4902)
error message returned by DbSl:
Cannot find the object "SOMG" because it does not exist or you do not have permissions.
The ECCLOG1 log file has a 25GB initial size, and growth was restricted to 10% (proposed by SAPinst)...
I'm assuming this error was due to lack of growth space for the ECCLOG1 file... am I right? If so, how much space should I allocate for this log? Or is there any workaround?
Thanks in advance.
Kasu,
If SQL is complaining that the log file is full then the phase of the install that creates the SQL data/log files has already occurred (happens early in the install) and the install is importing programs, config and data into the db.
Look at the windows application event log for "Transaction log full" events to confirm.
To continue, in SQL Query analyzer try:
"Backup log [dbname] with truncate_only"
This will remove only inactive parts of the log and is safe when you don't require point-in-time recovery (which you don't during an install).
Then, go to the SQL Enterprise manager, choose the db in question and choose the shrink database function, choose to shrink only the transaction log file and the space made empty by the truncate will be removed from the file.
Change the recovery mode in SQL Server to "simple" so that the log file does not grow for the remainder of the install.
Make sure you change the recovery mode back to "full" after the install is complete.
Your transaction log appears to have filled the disk partition you have assigned to it.
25GB is huge for a transaction log, and you would normally not see them grow this large if you are doing regular scheduled tlog backups (say every 30-60 minutes), because the log truncates every time; but it's not unusual to see one get big during an install, upgrade or when applying hotpacks.
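The steps Tim describes might look like this on the SQL 2000/2005-era instances involved here (note that WITH TRUNCATE_ONLY was removed in SQL Server 2008, where switching to simple recovery is the replacement; the shrink target size is an example):

```sql
BACKUP LOG ECC WITH TRUNCATE_ONLY;        -- discard inactive log (SQL 2000/2005 only)
DBCC SHRINKFILE (ECCLOG1, 1024);          -- release the emptied space (target MB)
ALTER DATABASE ECC SET RECOVERY SIMPLE;   -- keep the log small for the rest of the install
-- after the install completes:
ALTER DATABASE ECC SET RECOVERY FULL;
```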
Tim -
Hi,
I found a sql server database with a transaction log file of 65 GB.
The database is configured with the recovery model option = full.
Also, I noticed that since the database was created, they have only taken database backups.
No transaction log backups were ever executed.
Now, the "65 GB transaction log file" use more than 70% of the disk space.
Which scenario do you recommend?
1- Backup the database, backup the transaction log to a new disk, shrink the transaction log file, schedule transaction log backup each hour.
2- Backup the database, put the recovery model option= simple, shrink the transaction log file, Backup the database.
Would the "65 GB file shrink" operation have an impact on my database users?
The sql server version is 2008 sp2 (10.0.4000)
regards
D
I've read the other posts and I'm at the position of: it really doesn't matter.
You've not needed point in time restore abilities inclusive of this date and time since inception. Since a full database backup contains all of the log needed to bring the database into a consistent state, doing a full backup and then log backup is redundant
and just taking up space.
For the fastest option I would personally do the following:
1. Take a full database backup
2. Set the database recovery model to Simple
3. Manually issue two checkpoints for good measure or check to make sure the current VLF(active) is near the beginning of the log file
4. Shrink the log using the truncate option to lop off the end of the log
5. Manually re-size the log based on usage needed
6. Set the recovery model to full
7. Take a differential database backup to bridge the log gap
The total time that will take is really just the full database backup and the expanding of the log file. The shrink should be close to instantaneous since you're just truncating the end and the differential backup should be fairly quick as well. If you don't
need the full recovery model, leave it in simple and reset the log size (through multiple grows if needed) and take a new full backup for safe keeping.
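The seven steps above might be sketched as follows; the database name, log logical name, target size, and backup paths are hypothetical:

```sql
BACKUP DATABASE YourDb TO DISK = N'F:\bak\YourDb_full.bak';       -- 1. full backup
ALTER DATABASE YourDb SET RECOVERY SIMPLE;                        -- 2. simple recovery
CHECKPOINT;                                                       -- 3. two checkpoints to push the
CHECKPOINT;                                                       --    active VLF toward the front
DBCC SHRINKFILE (YourDb_log, TRUNCATEONLY);                       -- 4. lop off the end of the log
ALTER DATABASE YourDb
    MODIFY FILE (NAME = YourDb_log, SIZE = 8192MB);               -- 5. re-size based on usage
ALTER DATABASE YourDb SET RECOVERY FULL;                          -- 6. back to full
BACKUP DATABASE YourDb TO DISK = N'F:\bak\YourDb_diff.bak'
    WITH DIFFERENTIAL;                                            -- 7. bridge the log gap
```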
Sean Gallardy | Blog |
Twitter -
Shrink Transaction log file - - - SAP BPC NW
HI friends,
We want to shrink the transaction log files in SAP BPC NW 7.0. How can we achieve this?
Please can you throw some light on this.
Why did we think of shrinking the file?
We are getting an "out of memory" error whenever we do any activity, so we thought of shrinking the file (this is not a production server - FYI).
Example of an activity where the out of memory issue appears:
SAP BPC Excel >>> eTools >>> Client Options >>> Refresh Dimension Members >>> this leads to a pop-up screen stating "out of memory".
So we thought of shrinking the file.
Please any suggestions
Thank you and Kindest Regards
Srikaanth
Hi Poonam,
Not only Excel is throwing this kind of message (out of memory) - the SAP note is helpful only if we have the error in Excel alone.
But we are facing this error everywhere.
We have also found out that our hard disk has run out of space.
We want to empty the log files and make some space for us.
Our hard disk now has only a few megabytes free.
We want to clear all our test data, log files, and other stuff.
Please can you recommend us some way?
Thank you and Kindest regards
Srikaanth -
Dear All,
There have been issues in the past where the transactional log file has grown too big that it made the drive to limit its size. I would like to know the answers to the following
please:
1. To resolve the space issue, is the correct way to first take a backup of the transactional log then shrink the transactional log file?
2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
4. How often should the update stat job should run please?
Thank you in advance!
Hi
My answers might be very similar to the answers already given, but I hope this adds something more.
1. To resolve the space issue, is the correct way to first take a backup of the transactional log then shrink the transactional log file?
--> If the database recovery model is full or bulk-logged, then a tlog backup is the correct first step; if that doesn't help, try to increase the frequency of log backups. You can also refer to:
Factors That Can Delay Log Truncation
2. What would be the recommended auto growth size, for example if I have a DB which is 1060 GB?
Autogrowth for a very large DB is crucial: if the increment is too high it can create very large active VLFs, and if it is too small it can cause log fragmentation (too many VLFs). In your case your priority is to control space utilization,
so I suggest you keep a small autogrowth increment, and it must be specified as a size, not a percentage.
/******* Autogrow-to-VLF formula for the log file **********/
Autogrow of less than 64MB = 4 VLFs
Autogrow of 64MB up to 1GB = 8 VLFs
Autogrow of 1GB and larger = 16 VLFs
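To see how many VLFs a log currently has (a very large count indicates the log grew in many small increments), a quick check - sys.dm_db_log_info requires SQL Server 2016 SP2 or later; on older versions DBCC LOGINFO returns one row per VLF:

```sql
-- One row per virtual log file in the database's log.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('db_name'));   -- SQL Server 2016 SP2+
-- Older versions: DBCC LOGINFO;  (row count = VLF count)
```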
3. At the moment, the transactional log backup is done every 1 hour, but I'm not sure if it should be taken more regularly?
---> If the query below returns log_backup for the respective database, then yes, you should increase the log backup frequency. But if it returns some other factor, please check the above-mentioned
link.
"select name as [database] ,log_reuse_wait , log_reuse_wait_desc from sys.databases"
4. How often should the update stat job should run please?
This totally depends on the amount of DML operations you are performing. You can enable auto update stats, and weekly you can do an update stats with full scan.
Thanks Saurabh Sinha
http://saurabhsinhainblogs.blogspot.in/
Please click the Mark as answer button and vote as helpful
if this reply solves your problem -
What is stored in a transaction log file?
What does the transaction log file store? Is it the blocks of transactions to be executed, is it a snapshot of the records before a transaction begins
execution, or is it just the statements found in the transaction block? Please advise.
mayooran99
Yes, it will store all the values, before and after modification. You first have to understand the need for the transaction log; then it will start to become apparent what is stored in it.
Before a transaction can be committed, SQL Server will make sure that all its information is hardened in the transaction log, so if a crash happens, it can still recover/restore the data.
When you update some data, the data is fetched into memory and updated there, and the transaction log makes a note of it (before and after values, etc.). At this point the changes are not yet physically present in the data page on disk; they are present only in memory.
So if a crash happens (before a checkpoint or the lazy writer could flush the page), you would lose that data... this is where the transaction log comes in handy, because all this information is stored in the physical transaction log file. So when your server comes back up, if the transaction
was committed, recovery will roll this information forward.
When a checkpoint/lazy writer happens, in simple recovery, the log for that transaction is cleared out, if there are no other, older active transactions.
In full recovery you take log backups to clear those transactions from the log.
Writing to the transaction log is generally fast because it is written sequentially; it tracks the data page number, LSN, and other details of what was modified and makes a note of it.
Similar to the data cache, there is also a log cache that makes this process faster. Every transaction, before being committed, waits until everything related to it has been written to the transaction log on disk.
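If you are curious, you can peek at the log records themselves with the undocumented (unsupported, inspect-only) fn_dblog function:

```sql
-- Each row is one log record: its LSN, the operation (e.g. LOP_MODIFY_ROW),
-- the transaction it belongs to, and the allocation unit it touched.
SELECT TOP (20) [Current LSN], Operation, [Transaction ID], AllocUnitName
FROM sys.fn_dblog(NULL, NULL);   -- NULL, NULL = the whole active log
```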
I advise you to pick up Kalen Delaney's SQL Server Internals book and read the logging and recovery chapter, for more and better understanding.
Hope it Helps!! -
Bottleneck when switching the redo log files.
Hello All,
I am using Oracle 11.2.0.3.
The application team reported that they are facing slowness at certain times.
I monitored the database and found that at some redo log file switches (not always) there is slowness at the application level.
I have 2 threads since my database is RAC; each thread has 3 redo log groups multiplexed to the FRA, 300 MB each.
Is there any way to optimize the switching of redo log files, given that my database is running in ARCHIVELOG mode?
Regards,
Hello Nikolay,
Thanks for your input. I am sharing the information below; I have 2 instances, so I will provide the info from each instance.
Instance 1:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 4.9 0.0 0.00 0.00
DB CPU(s): 1.1 0.0 0.00 0.00
Redo size: 3,014,876.2 3,660.4
Logical reads: 32,619.3 39.6
Block changes: 7,969.0 9.7
Physical reads: 0.2 0.0
Physical writes: 164.0 0.2
User calls: 7,955.4 9.7
Parses: 288.9 0.4
Hard parses: 96.0 0.1
W/A MB processed: 0.2 0.0
Logons: 0.9 0.0
Executes: 2,909.4 3.5
Rollbacks: 0.0 0.0
Instance 2:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 5.5 0.0 0.00 0.00
DB CPU(s): 1.4 0.0 0.00 0.00
Redo size: 3,527,737.9 3,705.7
Logical reads: 29,916.5 31.4
Block changes: 8,893.7 9.3
Physical reads: 0.2 0.0
Physical writes: 194.0 0.2
User calls: 7,742.8 8.1
Parses: 262.7 0.3
Hard parses: 99.5 0.1
W/A MB processed: 0.4 0.0
Logons: 1.0 0.0
Executes: 2,822.5 3.0
Rollbacks: 0.0 0.0
Transactions: 952.0
Instance 1:
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
DB CPU 1,043 21.5
log file sync 815,334 915 1 18.9 Commit
gc buffer busy acquire 323,759 600 2 12.4 Cluster
gc current block busy 215,132 585 3 12.1 Cluster
enq: TX - row lock contention 23,284 264 11 5.5 Application
Instance 2:
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
DB CPU 1,340 24.9
log file sync 942,962 1,125 1 20.9 Commit
gc buffer busy acquire 377,812 594 2 11.0 Cluster
gc current block busy 211,270 488 2 9.1 Cluster
enq: TX - row lock contention 30,094 299 10 5.5 Application
Instance 1:
Operating System Statistics Snaps: 1016-1017
-> *TIME statistic values are diffed.
All others display actual values. End Value is displayed if different
-> ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name
Statistic Value End Value
AVG_BUSY_TIME 17,451
AVG_IDLE_TIME 81,268
AVG_IOWAIT_TIME 1
AVG_SYS_TIME 6,854
AVG_USER_TIME 10,548
BUSY_TIME 420,031
IDLE_TIME 1,951,741
IOWAIT_TIME 288
SYS_TIME 165,709
USER_TIME 254,322
LOAD 3 6
OS_CPU_WAIT_TIME 523,000
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 311,280
VM_OUT_BYTES 75,862,008
PHYSICAL_MEMORY_BYTES 62,813,896,704
NUM_CPUS 24
NUM_CPU_CORES 6
NUM_LCPUS 24
NUM_VCPUS 6
GLOBAL_RECEIVE_SIZE_MAX 4,194,304
GLOBAL_SEND_SIZE_MAX 4,194,304
TCP_RECEIVE_SIZE_DEFAULT 16,384
TCP_RECEIVE_SIZE_MAX 9.2233720368547758E+18
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 9.2233720368547758E+18
TCP_SEND_SIZE_MIN 4,096
Operating System Statistics - Detail Snaps: 1016-1017
Snap Time Load %busy %user %sys %idle %iowait
22-Aug 11:33:55 2.7 N/A N/A N/A N/A N/A
22-Aug 11:50:23 6.2 17.7 10.7 7.0 82.3 0.0
Instance 2:
Operating System Statistics Snaps: 1016-1017
-> *TIME statistic values are diffed.
All others display actual values. End Value is displayed if different
-> ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name
Statistic Value End Value
AVG_BUSY_TIME 11,823
AVG_IDLE_TIME 86,923
AVG_IOWAIT_TIME 0
AVG_SYS_TIME 4,791
AVG_USER_TIME 6,991
BUSY_TIME 475,210
IDLE_TIME 3,479,382
IOWAIT_TIME 410
SYS_TIME 193,602
USER_TIME 281,608
LOAD 3 6
OS_CPU_WAIT_TIME 615,400
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 16,360
VM_OUT_BYTES 72,699,920
PHYSICAL_MEMORY_BYTES 62,813,896,704
NUM_CPUS 40
NUM_CPU_CORES 10
NUM_LCPUS 40
NUM_VCPUS 10
GLOBAL_RECEIVE_SIZE_MAX 4,194,304
GLOBAL_SEND_SIZE_MAX 4,194,304
TCP_RECEIVE_SIZE_DEFAULT 16,384
TCP_RECEIVE_SIZE_MAX 9.2233720368547758E+18
TCP_RECEIVE_SIZE_MIN 4,096
TCP_SEND_SIZE_DEFAULT 16,384
TCP_SEND_SIZE_MAX 9.2233720368547758E+18
TCP_SEND_SIZE_MIN 4,096
Operating System Statistics - Detail Snaps: 1016-1017
Snap Time Load %busy %user %sys %idle %iowait
22-Aug 11:33:55 2.6 N/A N/A N/A N/A N/A
22-Aug 11:50:23 5.6 12.0 7.1 4.9 88.0 0.0
------------------------------------------------------------- -
Very high transaction log file growth
Hello
Running Exchange 2010 SP2 in a two-node DAG configuration. Just recently I have noticed very high transaction log file growth for one database. The transaction logs are growing so quickly that I have had to turn on circular logging in order to prevent
the log LUN from filling up and causing the database to dismount. I have tried several things to find out what is causing this issue. At first I thought this could be happening because of a virus, an ActiveSync user, a user's Outlook client, or our Salesforce
integration; however, when I used ExMon I could not see any unusually high user activity. Also, when I looked at the item count for all mailboxes in the particular database that is experiencing the high transaction log file growth, I could not see any mailboxes
with an unusually high item count; below is the command I ran to determine this (I ran it several times). I also looked at the message tracking log files, and again could see no indication of a message loop or unusually high message traffic for any
particular day. I also followed the guide below, hoping that it would allow me to see inside the transaction log files, but it didn't produce anything that would help me understand the cause of this issue. When I ran the tool against the transaction log files,
I saw DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD, or OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO, or HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH.
I am starting to run out of ideas on how to figure out what is causing the log file build up. Any help is greatly appreciated.
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Get-Mailbox -database databasethatkeepsgrowing | Get-MailboxStatistics | Sort-Object ItemCount -descending |Select-Object DisplayName,ItemCount,@{name="MailboxSize";exp={$_.totalitemsize}} -first 10 | Convertto-Html | out-File c:\temp\report.htm
Bulls on Parade
If you have users with iPhones or smart phones using ActiveSync, then one of the quickest ways to see if this is the issue is to have users shut those phones off and see if the problem is resolved. If it is one or more iPhones, then perhaps look at
what iOS they are on and get them to update to the latest version, or adjust the ActiveSync connection timeout. NOTE: There was an issue where iPhones caused runaway transaction logs, and I believe it was resolved in iOS 4.0.1.
There was also a problem with the MS CRM client awhile back so if you are using that check out this link.
http://social.microsoft.com/Forums/en/crm/thread/6fba6c7f-c514-4e4e-8a2d-7e754b647014
I would also deploy some tracking methods to see if you can hone in on the culprits, i.e. If you want to see if the problem is coming from an internal Device/Machine you can use one of the following
MS USER MONITOR:
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en and here is a link on how to use it
http://www.msexchange.org/tutorials/Microsoft-Exchange-Server-User-Monitor.html
And this is a great article as well
http://blogs.msdn.com/b/scottos/archive/2007/07/12/rough-and-tough-guide-to-identifying-patterns-in-ese-transaction-log-files.aspx
Also check out ExMon since you can use it to confirm which mailbox is unusually active , and then take the appropriate action.
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=9A49C22E-E0C7-4B7C-ACEF-729D48AF7BC9&displaylang=en
Troy Werelius
www.Lucid8.com
Search, Recover, & Extract Mailboxes, Folders, & Email Items from Offline EDB's and Live Exchange Servers with Lucid8's DigiScope -
The transaction log for database 'BizTalkMsgBoxDb' is full.
Hi All,
We are getting the following error continously in the event viewer of our UAT servers. I checked the jobs and all the backup jobs were failing on the step to backup the transaction log file and were giving the same error. Our DBA's cleaned the message box manually and backed up the DB but still after some time the jobs starts failing and this error is logged in the event viewer.
The transaction log for database 'BizTalkMsgBoxDb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases".
Thanks,
Abdul Rafay
http://abdulrafaysbiztalk.wordpress.com/
Please mark this answer if it helps
Putting the database into simple recovery mode and shrinking the log file isn't going to help: it'll just grow again, it will probably fragment across the disk (thereby impacting performance) and, eventually, it will fill up again for the same reason
as before. Plus, you put yourself in a very vulnerable position for disaster recovery if you change the recovery model of the database - and that's before we've addressed the distributed-transaction aspect of the BizTalk databases.
First, make sure you're backing up the log file using the BizTalk job Backup BizTalk Server (BizTalkMgmtDb). It might be that the log hasn't been backed up and is full of transactions: and, eventually, it will run out of space. Configuration
instructions at this link:
http://msdn.microsoft.com/en-us/library/aa546765(v=bts.70).aspx Your DBA needs to get the backup job running properly rather than panicking!
If this is running properly, and backing up (which was the case for me) and the log file is still full, run the following query:
SELECT Name, log_reuse_wait_desc
FROM sys.databases
This will tell you why the log file isn't properly clearing down and why it cannot use the space inside. When I had this issue, it was due to an active transaction.
I checked for open transactions on the server using this query:
SELECT
    s_tst.[session_id],
    s_es.[login_name] AS [Login Name],
    DB_NAME(s_tdt.database_id) AS [Database],
    s_tdt.[database_transaction_begin_time] AS [Begin Time],
    s_tdt.[database_transaction_log_record_count] AS [Log Records],
    s_tdt.[database_transaction_log_bytes_used] AS [Log Bytes],
    s_tdt.[database_transaction_log_bytes_reserved] AS [Log Rsvd],
    s_est.[text] AS [Last T-SQL Text],
    s_eqp.[query_plan] AS [Last Plan]
FROM sys.dm_tran_database_transactions s_tdt
JOIN sys.dm_tran_session_transactions s_tst
    ON s_tst.[transaction_id] = s_tdt.[transaction_id]
JOIN sys.[dm_exec_sessions] s_es
    ON s_es.[session_id] = s_tst.[session_id]
JOIN sys.dm_exec_connections s_ec
    ON s_ec.[session_id] = s_tst.[session_id]
LEFT OUTER JOIN sys.dm_exec_requests s_er
    ON s_er.[session_id] = s_tst.[session_id]
CROSS APPLY sys.dm_exec_sql_text(s_ec.[most_recent_sql_handle]) AS s_est
OUTER APPLY sys.dm_exec_query_plan(s_er.[plan_handle]) AS s_eqp
ORDER BY [Begin Time] ASC;
GO
And this told me the SPID of the process with an open transaction on BizTalkMsgBoxDb (in my case, something that had been open for several days). I killed the transaction using KILL spid, where spid is an integer. Then I ran the BizTalk
Database Backup job again, and the log file backed up and cleared properly.
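A quicker, if less detailed, way to surface the oldest active transaction holding up log truncation is DBCC OPENTRAN, which reports the SPID and start time directly; a sketch:

```sql
-- Oldest active transaction in the database, with its SPID and start time.
DBCC OPENTRAN ('BizTalkMsgBoxDb');
-- If appropriate, KILL <spid> can then be used to end that session.
```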
Incidentally, just putting the database into simple recovery mode would have emptied the log file, giving it lots of space to fill up again. But it doesn't deal with the root cause: why the backups were failing in the first place. -
SQL Server Database - Transaction logs growing largely with Simple Recovery model
Hello,
There is a SQL Server database on the client side, in a production environment, with huge transaction logs.
Requirement :
1. Take database backup
2. Transaction log backup is not required. - so it is set to Simple recovery model.
I am aware that the simple recovery model also writes to the transaction log just as the full recovery model does, as explained at the link below:
http://realsqlguy.com/origins-no-simple-mode-doesnt-disable-the-transaction-log/
Last week, this transaction log grew to 1TB in size and blocked everything on the database server.
How to over come with this situation?
PS : There are huge bulk uploads to the database tables.
Current Configuration :
1. Simple Recovery model
2. Target Recovery time : 3 Sec
3. Recovery interval : 0
4. No SQL Agent job schedule to shrink database.
5. No other checkpoints created except automatic ones.
Can anyone please guide me to have correct configuration on SQL server for client's production environment?
Please let me know if any other details required from server.
Thank you,
Mittal.
@dave_gona,
Thank you for your response.
Can you please explain this to me in more detail --
What do you mean by one batch?
1. The number of rows to be inserted at a time?
2. Or does the size of the data in one cell matter here?
In my case, I am clubbing together all the data in one XML (on the C# side) and inserting it as one record. The data is large in size, but only 1 record is inserted.
Is it a good idea to shrink the transaction log periodically, as it does not happen by itself in the simple recovery model?
Hi Mittal,
Shrinking is a bad activity; you should not shrink log files regularly. In rare cases, if you need to recover space, you may do it.
Issue manual checkpoints during the bulk insert operation.
I cannot tell upfront what the batch size should be, but you can start with 1/4 of what you are currently inserting.
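Batching the insert with manual checkpoints, as suggested above, might be sketched like this (the table names, Id column, and batch size are hypothetical):

```sql
-- Copy rows in bounded batches; under simple recovery the CHECKPOINT
-- lets the log space used by the previous batch be reused.
DECLARE @batch int = 50000;   -- start around 1/4 of the current batch size
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.TargetTable (Id, Payload)
    SELECT TOP (@batch) s.Id, s.Payload
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id);

    IF @@ROWCOUNT = 0 BREAK;   -- everything copied
    CHECKPOINT;                -- trigger log truncation between batches
END
```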
Most important: what does the query below return for the database?
select log_reuse_wait_desc from sys.databases where name='db_name'
The value it returns is what is stopping the log from getting cleared and reused.
Also, what version and edition of SQL Server are we talking about? What is the output of:
select @@version
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
1) OS version:
OS Name : Windows Server 2008 R2
2) BO version:
BusinessObjects XI 3.1 SP05.
3) My question:
We have the "dbbackup.exe" utility in SQL Anywhere in BI 4.1 for running the transaction log (CMS and Audit) truncation/backup, but the same utility is not present in BOXI 3.1 SP05.
Is there an equivalent/alternative utility in BOXI 3.1 SP05 for the same purpose? We use the command below for BI 4.1 Transaction Log truncation/backup:
E:\Program Files\SAP BusinessObjects\sqlanywhere\BIN64>dbbackup.exe -c "dsn=<System DSN>;uid=< SQL_AW_DBA_UID>;pwd=< SQL_AW_DBA_PASSWD>;host=localhost:2638" -t -x -n "E:
\Transaction_log_backup\CMS"
Any help or clarification on this issue would be greatly appreciated.
Thanks in advance.
Conor.
Hi Conor,
BOXI 3.1 SP05 does not include the dbbackup utility. Instead, you issue SQL statements to create the backup. We published a paper on the subject:
http://scn.sap.com/docs/DOC-48608
The paper uses a maintenance plan to schedule regular backups, but you don't need to do that if you want to simply create a backup when required. To do that (along with transaction log truncation), you run the SQL statement:
BACKUP DATABASE DIRECTORY 'backup-dir'
TRANSACTION LOG TRUNCATE;
For complete details about the BACKUP statement, have a look here:
http://dcx.sap.com/index.html#1201/en/dbreference/backup-statement.html
You'll need to execute the statement inside a SQL console - the paper above describes how to get that.
I hope this helps!
José Ramos
Product Manager
SAP Canada