ShrinkFile
Good afternoon,
I used the following commands to check and shrink my database's log file, but the files remain the same size. What can I do to resolve this?
DBCC SQLPERF (LOGSPACE)
DATABASE NAME - LOG SIZE (MB) - LOG SPACE USED (%) - STATUS
DB_TESTE - 25034.68 - 86.71359 - 0
SELECT * FROM DB_TESTE.DBO.SYSFILES
DB_TESTE_Log
DBCC SHRINKFILE ( DB_TESTE_Log, 200,
TRUNCATEONLY )
Hello,
Based on your description, you want to shrink the log file of the database to 200 MB. However, TRUNCATEONLY is applicable only to data files, and target_size is ignored if specified with TRUNCATEONLY in DBCC SHRINKFILE.
Fanny, that is actually a documentation bug, which I proved in the link below.
http://social.technet.microsoft.com/wiki/contents/articles/22206.when-using-dbcc-shrinkfile-for-log-files-can-we-use-truncateonly-option.aspx
Here is the Connect item raised for the same issue.
It does affect log files: it shrinks them, cutting out the free space at the end and returning it to the OS.
Rafael,
What does the command below return?
select log_reuse_wait_desc from sys.databases where name='db_nmae'
If it returns LOG_BACKUP, you need to take a log backup (maybe twice), which will truncate the log, and then you can shrink it using DBCC SHRINKFILE.
Run DBCC LOGINFO(DB_NAME): if the last value in the Status column is 2, an active transaction is still holding the VLF, so you need to take a log backup. If the value is 0, you can shrink.
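A minimal sketch of that sequence, assuming the database and log file names from the question (the backup path is a placeholder):

```sql
-- Check what is preventing log reuse
SELECT log_reuse_wait_desc FROM sys.databases WHERE name = 'DB_TESTE';

-- If it reports LOG_BACKUP, back up the log (possibly twice), then shrink.
-- 'D:\Backup\DB_TESTE_log.trn' is a placeholder path.
BACKUP LOG DB_TESTE TO DISK = 'D:\Backup\DB_TESTE_log.trn';
GO
DBCC SHRINKFILE (DB_TESTE_Log, 200);  -- target size in MB
GO
```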
>>and I don't have space to realize a bkp currently.
You can change the recovery model of the database to SIMPLE and then back to FULL, and then it will allow you to shrink, but you will lose point-in-time recovery. Please take a full backup after this activity.
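A sketch of that recovery-model flip, using the database name from the question (this breaks the log backup chain, hence the full backup at the end; the backup path is a placeholder):

```sql
ALTER DATABASE DB_TESTE SET RECOVERY SIMPLE;
GO
DBCC SHRINKFILE (DB_TESTE_Log, 200);  -- target size in MB
GO
ALTER DATABASE DB_TESTE SET RECOVERY FULL;
GO
-- Restart the backup chain immediately
BACKUP DATABASE DB_TESTE TO DISK = 'D:\Backup\DB_TESTE_full.bak';
GO
```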
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
My TechNet Wiki Articles
Similar Messages
-
DBCC SHRINKFILE: log file could not be found.
After a huge yearly data load and verified, I thought I'd shrink the log files.
This is what I did:
USE XYZArchive
GO
sp_helpfile
DBCC SHRINKFILE ('XYZArchive_Log')
GO
I'm getting the following error message:
Could not locate file 'XYZArchive_Log' for database 'XYZArchive' in sys.database_files. The file either does not exist, or was dropped.
I checked and the file is in the correct location. So, I issued
SELECT *
FROM sys.master_files
WHERE
database_id = DB_ID(N'XYZArchive')
The physical_name is the same as filename (in sp_helpfile); however, the name is different from the name (in sp_helpfile)
SP_HELPFILE 's name is XYZArchive_Log
SYS.MASTER_FILES ' name is XYZ_Log
I know I can shrink file using the SYS.MASTER_FILES' name.
Question 1: How can these be different?
Question 2: How can I fix that?
Hello,
Please try the following:
USE XYZArchive
GO
DBCC SHRINKFILE(2, 100);
If the above does not work, try to shrink the data file first, and then shrink the log file.
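If the logical name in sys.master_files really is out of sync with what you expect, one possible fix, sketched here with the names from the question (verify them first), is to rename the logical file name:

```sql
-- Rename the logical log file so the name matches everywhere
ALTER DATABASE XYZArchive
MODIFY FILE (NAME = 'XYZ_Log', NEWNAME = 'XYZArchive_Log');
```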
Hope this helps.
Regards,
Alberto Morillo
SQLCoffee.com -
Hi,
I issued this command on tempdb but it does not shrink the file.
dbcc shrinkfile (tempdev_3,1)
go
Messages:
DBCC SHRINKFILE: Page 4:11283400 could not be moved because it is a work table page.
I have checked that there are no tables associated with any user in tempdb. Any help is appreciated.
Regards,
Razi
This basically is a rewrite of an existing stored procedure that shows temp tables in tempdb and their relative size. I also have a query to show the size of tempdb in MB that I used with this one; it is also a modification of an existing system stored proc that shows allocated and free space in MB.
--10 june 2007 slane
--shows temp tables in tempdb
use tempdb
go
set nocount on
declare @id int
declare @dt smalldatetime
declare @pages int

create table #spt_space_all
(
    id int,
    name varchar(500),
    rows varchar(200) null,
    reserved varchar(200) null,
    data varchar(200) null,
    index_size varchar(200) null,
    unused varchar(200) null,
    create_date smalldatetime null
)

declare TMP_ITEMS cursor local fast_forward for
    select id from sysobjects where xtype = 'U'
open TMP_ITEMS
fetch next from TMP_ITEMS into @id

while @@FETCH_STATUS = 0
begin
    create table #spt_space
    (
        id int,
        rows int null,
        reserved dec(15) null,
        data dec(15) null,
        indexp dec(15) null,
        unused dec(15) null,
        create_date smalldatetime null
    )

    if @id is not null
    begin
        set @dt = (select crdate from sysobjects where id = @id)

        -- reserved: sum(reserved) where indid in (0, 1, 255)
        insert into #spt_space (reserved)
        select sum(reserved)
        from sysindexes
        where indid in (0, 1, 255)
          and id = @id

        -- data: sum(dpages) where indid < 2
        --     + sum(used) where indid = 255 (text)
        select @pages = sum(dpages)
        from sysindexes
        where indid < 2
          and id = @id

        select @pages = @pages + isnull(sum(used), 0)
        from sysindexes
        where indid = 255
          and id = @id

        update #spt_space set data = @pages

        -- index: sum(used) where indid in (0, 1, 255) - data
        update #spt_space
        set indexp = (select sum(used)
                      from sysindexes
                      where indid in (0, 1, 255)
                        and id = @id) - data

        -- unused: sum(reserved) - sum(used) where indid in (0, 1, 255)
        update #spt_space
        set unused = reserved - (select sum(used)
                                 from sysindexes
                                 where indid in (0, 1, 255)
                                   and id = @id)

        update #spt_space
        set rows = i.rows
        from sysindexes i
        where i.indid < 2
          and i.id = @id

        update #spt_space set create_date = @dt
    end

    insert into #spt_space_all
    select id = @id,
           name = object_name(@id),
           rows = convert(char(11), rows),
           reserved = ltrim(str(reserved * d.low / 1024., 15, 0) + ' KB'),
           data = ltrim(str(data * d.low / 1024., 15, 0) + ' KB'),
           index_size = ltrim(str(indexp * d.low / 1024., 15, 0) + ' KB'),
           unused = ltrim(str(unused * d.low / 1024., 15, 0) + ' KB'),
           create_date
    from #spt_space, master.dbo.spt_values d
    where d.number = 1
      and d.type = 'E'

    drop table #spt_space

    fetch next from TMP_ITEMS into @id
end

close TMP_ITEMS
deallocate TMP_ITEMS

select * from #spt_space_all where [name] not like '%#spt_space_all%'
drop table #spt_space_all
GO -
DBCC SHRINKFILE with NOTRUNCATE has any performance impact in log shipping?
Hi All,
To procure space I'm suggested to use below command on primary database in log shipping and I just want
to clarify whether it has any performance impact on primary database in log shipping and also is it a recommended practice to use the below command
in regular intervals in
case the log is using much space of the drive. Please suggest on this. Thank You.
"DBCC
SHRINKFILE ('CommonDB_LoadTest_log', 2048, NOTRUNCATE)"
Regards,
Kalyan
----Learners Curiosity Never Ends----
Hi Kalyan \ Shanky,
I was not clear in the linked conversation, so I am adding something:
As per http://msdn.microsoft.com/en-us//library/ms189493.aspx
----->TRUNCATEONLY is applicable only to data files.
BUT
As per : http://technet.microsoft.com/en-us/library/ms190488.aspx
TRUNCATEONLY affects the log file.
And I also tried it; it does work.
Now Truncateonly : Releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent. target_percent is ignored if specified
with TRUNCATEONLY.
So:
1. If I am removing unused space, it will not affect log shipping, and no log chain will break.
2. If you clear unused space, it will not touch existing data; there is no performance issue.
3. If you clear space and then, due to other operations, the log file auto-grows, it will put unnecessary pressure on the database to allocate disk every time. So once you find the max growth of the log file, let it be, as it will grow back to the same size anyhow.
4. Shrinking the log file is not recommended if it keeps reaching the same size again and again, unless you have a space crunch.
Thanks Saurabh Sinha
http://saurabhsinhainblogs.blogspot.in/
Please click the Mark as answer button and vote as helpful
if this reply solves your problem -
Hi,
We had purged a lot of unwanted data from one of our replicated databases; the data was purged from the non-replicated articles.
We planned to shrink the data file using DBCC SHRINKFILE to claim the space back. The DBCC was running for more than 20 hrs.
We saw this morning that replication was lagging.
Looking at the replication monitor, the Publisher To Distributor history says that the Log Reader Agent is scanning the transaction log for commands to be replicated; approximately 339,000,000 log records have been scanned.
Checked the log file size of the DB; it is about 400 GB.
Suspecting that the log growth might be because of the DBCC command, we killed the DBCC and tried to shrink the log file, which came down to 200 GB, but replication is still not back to speed.
Can someone please explain the background process that happened in the above process which caused the above affect and any ideas on how to get the replication back to normal.
Thanks,
Jack
It's because the log file is huge in your published database, and that is what is causing the issue. The fact that you purged data from non-replicated tables is not known to the log reader agent, as it still has to scan through the whole log file to pick up the records which are to be replicated. So it will take some time before it skims through the log file.
Moreover, unless replication goes through the complete log file and marks each VLF as replicated, it is not going to allow you to completely shrink the file.
Best option for you is to allow replication to catchup.
read the below
http://blogs.msdn.com/b/repltalk/archive/2011/03/30/impact-on-log-reader-agent-after-reindex-operations.aspx
Possible Cause:
large number of non-replication transactions: section of the below blog
http://blogs.msdn.com/b/chrissk/archive/2009/05/25/transactional-replication-conversations.aspx
Regards, Ashwin Menon My Blog - http:\\sqllearnings.com -
SQL 2008 shrinkfile decompresses ?
Hello, I have compressed a table with PAGE compression, and when trying to reclaim the space by running a SHRINKFILE it seems the size is climbing back up. I noticed a few sites mentioning this was a bug that would be addressed; is this still a bug? It is SQL 2008 SP3. Thanks in advance.
Thanks for your reply too, Tom.
Running that returns: Microsoft SQL Server 2008 (SP3) - 10.0.5500.0 (X64) -
Crystal Report Server Database Log File Growth Out Of Control?
We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise. Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
I have been reviewing the Application Logs and this log file size is auto-increasing about 3-times a week.
We backup the database each night, and run maintenance routines to Check Database Integrity, re-organize index, rebuild index, update statistics, and backup the database.
Is it "Normal" to have such a large LOG file compared to the DATABASE file?
Can you tell me if there is a recommended way to SHRINK the log file?
Some technical documents suggest first truncating the log, and then using the DBCC SHRINKFILE command:
USE CRS
GO
--Truncate the log by changing the database recovery model to SIMPLE
ALTER DATABASE CRS
SET RECOVERY SIMPLE;
--Shrink the truncated log file to 1 gigabyte
DBCC SHRINKFILE (CRS_log, 1000);
GO
--Reset the database recovery model.
ALTER DATABASE CRS
SET RECOVERY FULL;
GO
Do you think this approach would help?
Do you think this approach would cause any problems?
My bad, you didn't put the K on the 2nd number.
Looking at my SQL Server, that's crazy big; my logs are in the KBs, like 4-8.
I think someone enabled some type of debugging on your SQL Server. It's more of a Microsoft issue, as our product doesn't require it, judging from my SQL DBs.
Regards,
Tim -
Database Log File becomes very big, What's the best practice to handle it?
The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice to handle this issue?
Should I Shrink the Database?
I know a hard disk increase is needed for the long term.
Thanks in advance.
Hi Finke,
Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log gets cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
Follow these steps to get transactional file back in normal shape:
1.) Take a transaction log backup.
2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
The above command will shrink the file to 10 GB (a recommended size for high-transaction systems).
>
Finke Xie wrote:
> Should I Shrink the Database? .
"NEVER SHRINK DATA FILES", shrink only log file
3.) Schedule log backups every 15 minutes.
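A sketch of the log backup step such a 15-minute job could run (the database name and path are placeholders; the timestamped file name keeps each backup restorable):

```sql
DECLARE @file nvarchar(260) =
    N'D:\Backup\MyDB_log_'
    + CONVERT(nvarchar(8), GETDATE(), 112)                    -- yyyymmdd
    + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '')  -- hhmmss
    + N'.trn';
BACKUP LOG MyDB TO DISK = @file;
```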
Thanks
Mush -
Splitting data equaly from one .mdf to several .ndf files
Hi all,
Situation:
We have a DB with 1 TB of data in one .mdf file (SQL 2012).
For better performance we want to split this file up into 4 equally big data files on different drives.
1st guess:
Well, I thought that's pretty simple: I just add 4 data files (.ndf) on different drives, empty my existing .mdf and finally drop it.
Well, the problem with this is that you cannot delete your .mdf file, because it holds metadata (catalog) for your DB.
Solution:
- I added 3 more data files (.ndf) of 250 GB each, autogrow disabled, on 3 additional drives (no new filegroup)
- DBCC SHRINKFILE (your_logical_mdf_file, EMPTYFILE)
(this gives you an error because there is not enough disk space to fit the 1 TB .mdf file's data into 3 x 250 GB .ndf files, but it distributes your 1 TB equally across all 4 DB files)
- Shrank my .mdf file and adjusted file size and autogrow settings for all 4 DB files
Remark:
Just keep in mind that moving 1 TB of data around takes a while and you better do this in a not so busy time...
HTH
acki4711
Neha,
I don't see any IO advantage with filegroups.
We don't want to deal with what object should be in what filegroup (most of the time we maintain 3rd-party software/DBs); we just want to get better performance by splitting data into more than one DB file.
acki4711
You could benefit performance-wise by splitting data into up to 4 or 8 files, with each file on a different drive.
How are the underlying disks configured, do you know? If all of these volumes are carved out of a single LUN, then there won't be the performance gain you are looking for. Also, please enable TF 1117 for uniform DB file growth once you size the files to be equal.
Note: You may want to test the EMPTYFILE option on your .mdf file, as it may not be straightforward.
HTH
Hello AlliedDBA,
There is no benefit in splitting database files across different disk drives; if your underlying hardware is RAID 10 or RAID 5 you already achieve the performance benefit there. Data is written to the data files in round-robin fashion, and you don't have any control over it.
Please don't enable any trace flag. I am not sure what the solution could be, because IMO you anyhow require space.
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
Hello Shanky..
Thanks for your feedback, but I still stand by my suggestion. I see you are opining here, so please post some facts to back your claims: 1. No advantage to having multiple files. 2. Not sure why you are scared of the term "trace flag", but your statement "IMO anyhow require space" lacks clarity. What is the relationship between space and trace flags?
How to reduce the database size based on MS SQL/Oracle?
Hi,
I have deleted several clients via SCC5, but the database size keeps getting bigger and bigger. Do you know how to reduce the database size?
my platform is: NT, MS SQL 2005
NT, Oracle 10.2
Best Regards,
Simon Shen
Hi Simon,
In the case of SQL, you need to check how much free space you have in your data files, and based on that you need to shrink them using the DBCC SHRINKFILE command.
Find a couple of reference docs here,
http://www.intermedia.net/support/kb/default.asp?id=996
http://doc.ddart.net/mssql/sql70/dbcc_21.htm
Also, I'm pretty sure that if you check the properties of the data files in MSSQL Enterprise Manager you get a "Shrink" option.
Regards
Juan
Please reward with points if helpful -
Problem when creating configuration Scenario
hi all
When I create a configuration scenario using SAP XI, it warns "Error initializing key service (COULD_NOT_GET_KEYSERVICE)" and "The log file for database 'N4S' is full. Back up the transaction log for the database to free up some log space."
Does it mean too many people are using the system?
And how can I clear this problem?
thanks!
marx
Hi
Use the query below to back up and shrink the transaction logs.
db_name in your case is "N4S".
db_name_log is the transaction log file name; check this, in your case it must be something like N4S_log.
backup log db_name with truncate_only
go
dbcc shrinkfile (db_name_log,0)
go
run this query and restart the database
PS: If helpful please reward points
Cannot remove 2nd log file on AlwaysOn database
Hi all,
I have a database that is a member of an availability group. This database has 2 log files. I want to remove the unused secondary log file, so I run this command to empty the second log file:
USE [TEST-AG]
GO
DBCC SHRINKFILE (N'TEST-AG_log_2' , EMPTYFILE)
GO
The command completes successfully; then I run the command to remove the file:
USE [TEST-AG]
GO
ALTER DATABASE [TEST-AG] REMOVE FILE [TEST-AG_log_2]
GO
But this command fails with the following message:
Error 5042: The file 'TEST-AG_log_2' cannot be removed because it is not empty.
If I remove the database from the availability group, the command to remove the 2nd file works. So can't I remove a secondary log file on a database that is a member of an AlwaysOn availability group?
Remove the database from the availability group, then remove the 2nd file. Once that succeeds, add the database back to the availability group, then recreate the regular backup jobs for the database. -
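A sketch of that sequence in T-SQL, run on the primary replica (the availability group name [MyAG] is a placeholder; the database and file names are from the question):

```sql
-- Take the database out of the availability group
ALTER AVAILABILITY GROUP [MyAG] REMOVE DATABASE [TEST-AG];
GO
-- Now the secondary log file can be emptied and dropped
USE [TEST-AG];
DBCC SHRINKFILE (N'TEST-AG_log_2', EMPTYFILE);
ALTER DATABASE [TEST-AG] REMOVE FILE [TEST-AG_log_2];
GO
-- Put the database back into the availability group
ALTER AVAILABILITY GROUP [MyAG] ADD DATABASE [TEST-AG];
GO
```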
Shrink Log File on MS sql 2005
Hi all,
My DB has a huge log file, more than 100 GB.
The idea is to shrink it, but in the right way.
I was trying this way:
use P01
go
backup log P01 TO DISK = 'D:\P01LOG1\P01LOG1.bak'
go
dbcc shrinkfile (P01LOG1,250) with no_infomsgs
go
The problem is that the backup file is getting bigger and bigger with each backup.
So my question is: how do I shrink the log file correctly, with a backup, but where the backup file does not keep growing and stays at the same level, overwriting previous backups?
I have a full daily backup with Data Protector from HP, but it doesn't clean the log, and it isn't possible to shrink it.
What you want to do with the log backups depends on how you are going to recover the database in case of system/database loss, and on your backup schedule.
1. If you are not going to do point-in-time recovery, then there is no point in taking a tran log backup to a backup file. You can change the recovery model of the database to SIMPLE. If your recovery model is SIMPLE, you don't have to take transaction log backups at all; the inactive transactions are flushed from the log automatically. You should still be taking full and differential backups so that you can at least recover your database to the last full backup and apply the latest differential backup.
2. If this is a production system, then you should definitely be in FULL recovery mode and should be taking regular transaction log backups and storing them in a safe place, so that you can use them to recover your system to any point in time. Storing the transaction log backups on the same server kind of defeats the purpose, because if you lose the server and disks you will not have the backups either.
3. If you are in FULL recovery mode, and let's assume that you run your transaction log backups every 30 mins, then you need your log file to be of a size that can handle the transactions that happen in any given 30 to 60 mins.
There shouldn't be a need to constantly shrink log files if you configure things right.
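If the goal is simply to keep a single, overwritten log backup file rather than an ever-growing one, here is a sketch using the names from the question (note that overwriting log backups sacrifices point-in-time recovery beyond the last backup kept):

```sql
-- WITH INIT overwrites the existing backup sets in the file instead of appending
BACKUP LOG P01 TO DISK = 'D:\P01LOG1\P01LOG1.bak' WITH INIT;
GO
DBCC SHRINKFILE (P01LOG1, 250) WITH NO_INFOMSGS;
GO
```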
Edited by: Neeraj Nagpal on Aug 20, 2010 2:48 AM -
Splitting large sql database files
Hi
I have large SAP SQL Server data files, 40 GB in 3 data files. Is there any way to split those data files into a smaller size?
Thanks and Regards
Jijeesh
Hi Jijesh,
There is a way of splitting files without hampering performance; I have done this on a production system without loss of performance.
All data files in MSSQL belong to a file group. Suppose there are 4 data files in your database, each of 4 GB, and assume all files belong to the same filegroup:
file1.mdf
file2.mdf
file3.mdf
file4.mdf
and you want to split each into 2 GB datafiles here is what you do.
(Before starting, take a complete offline backup of your database by stopping SAP and performing a full offline backup of all data and log files.
This operation technically could be performed with SAP up and running; however, it is safer to keep the database idle when doing a reorg, so stop the SAP instance and do the following.)
1. Select the file file1.mdf.
2. Add 2 new files to the same filegroup as file1.mdf and name them file1a.mdf and file1b.mdf, of size 2 GB each.
3. RESTRICT growth on all the other files (file2.mdf, file3.mdf and file4.mdf) by unchecking the Auto grow option.
4. Open the SQL Query Analyzer and give the commands (note: DBCC SHRINKFILE expects the logical file name, so use the logical name of file1.mdf here):
USE [<SAP SID>];
GO
DBCC SHRINKFILE ('file1.mdf', EMPTYFILE);
GO
The above commands will empty the contents of 'file1.mdf' and redistribute the data to the other files in its filegroup.
Since we have restricted growth by turning autogrow off on file2.mdf, file3.mdf and file4.mdf, the command will distribute the data to the new files we created, file1a.mdf and file1b.mdf.
When the command has completed, you can safely remove file1.mdf.
5. Perform steps 1-4 for all the remaining files: file2.mdf, file3.mdf and file4.mdf.
After doing the above operation, run a check using DBCC CHECKDB; this will ensure that your database integrity is verified and everything is okay.
Now run update statistics to ensure that your performance will not be hampered.
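A sketch of that verification step (the database name is a placeholder):

```sql
-- Verify database integrity after the file reorganization
DBCC CHECKDB ('MySAPDB') WITH NO_INFOMSGS;
GO
-- Refresh optimizer statistics so performance is not hampered
USE MySAPDB;
GO
EXEC sp_updatestats;
GO
```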
Regards,
Siddhesh -
Hi All,
This is to bring to your notice that when I try to see free space using shrink database, it shows 180 GB free space. But when I try to shrink individual data files, they only show free space in MBs. Further analysis suggests that there might be space used by internal objects in tempdb. How can I reclaim that without restarting SQL Server services?
Regards
Rahul
You can shrink tempdb in multiple ways, like:
DBCC SHRINKFILE
DBCC SHRINKDATABASE
ALTER DATABASE
But you may run into consistency errors by doing this. That is why the safest way to release tempdb space is to restart the instance's SQL Server service.
Please visit my Blog for some easy and often used t-sql scripts
My BizCard
Atif, can you show any example where, by running SHRINKFILE or an ALTER DATABASE command, the database goes into an inconsistent state? I agree that sometimes a restart is the last option, but I don't agree with your consistency statement.
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers