Unable to set max log file size to unlimited

Hi all,
Hoping someone can give me an explanation of an oddity I've hit. I have a series of fairly large databases. I wanted to make the database log files 8GB in size, with 8GB growth increments and an unlimited maximum file size, so I wrote the script below. It seems to have worked, but the log file max size doesn't show as unlimited: it shows as 2,097,152MB, and it cannot be set to unlimited by script or in SSMS by clicking the unlimited radio button.
2TB is effectively unlimited anyway, but why show that rather than actually setting it to unlimited?
USE [master]
GO

-- Note: this only works for the SIMPLE recovery model. For FULL / BULK_LOGGED recovery
-- models you need to back up the transaction log instead of issuing a CHECKPOINT.

DECLARE @debug       varchar(1)
SET @debug = 'Y'
DECLARE @database    varchar(255)
DECLARE @logicalname varchar(255)
DECLARE @command     varchar(8000)

DECLARE database_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DB_NAME(database_id) AS DatabaseName,
           name                 AS LogicalName
    FROM master.sys.master_files
    WHERE file_id = 2
      AND type_desc = 'LOG'
      AND physical_name LIKE '%_log.ldf'
      AND DB_NAME(database_id) NOT IN ('master', 'model', 'msdb', 'tempdb')

OPEN database_cursor
FETCH NEXT FROM database_cursor INTO @database, @logicalname

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @command = '
        USE [' + @database + ']
        CHECKPOINT
        DBCC SHRINKFILE (''' + @logicalname + ''', TRUNCATEONLY)'

    IF (@debug = 'Y')
    BEGIN
        PRINT @command
    END
    EXEC (@command)

    SET @command = '
        USE master
        ALTER DATABASE [' + @database + ']
            MODIFY FILE (NAME = ''' + @logicalname + ''', SIZE = 8000MB)
        ALTER DATABASE [' + @database + ']
            MODIFY FILE (NAME = ''' + @logicalname + ''', MAXSIZE = UNLIMITED, FILEGROWTH = 8000MB)'

    IF (@debug = 'Y')
    BEGIN
        PRINT @command
    END
    EXEC (@command)

    FETCH NEXT FROM database_cursor INTO @database, @logicalname
END

CLOSE database_cursor
DEALLOCATE database_cursor
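Incidentally, a cursor isn't strictly required for the second half of the job. A hedged, untested sketch of a set-based variant that builds all the ALTER DATABASE statements into one batch (same filters as above) could look like:

```sql
-- Build one ALTER DATABASE batch for every qualifying log file (sketch; review before running)
DECLARE @sql nvarchar(max) = N''

SELECT @sql = @sql + N'
ALTER DATABASE ' + QUOTENAME(DB_NAME(database_id)) +
    N' MODIFY FILE (NAME = N''' + name + N''', MAXSIZE = UNLIMITED, FILEGROWTH = 8000MB)'
FROM master.sys.master_files
WHERE file_id = 2
  AND type_desc = 'LOG'
  AND DB_NAME(database_id) NOT IN ('master', 'model', 'msdb', 'tempdb')

PRINT @sql      -- review the generated statements first
-- EXEC (@sql)  -- then execute
```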

Hi,
That 2,097,152MB (2TB) figure is expected: SQL Server caps log files at 2TB, so a log file set to UNLIMITED is displayed with that cap. The maximum capacity specifications are documented here:
http://technet.microsoft.com/en-us/library/ms143432.aspx
File size (data): 16 terabytes
File size (log): 2 terabytes
Thanks, Andrew
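To see what actually got stored, you can query sys.master_files directly. Note max_size is in 8KB pages, and per the catalog view documentation a log file set to "unlimited" shows the 2TB cap as 268,435,456 pages (this is a quick verification sketch, not part of the original script):

```sql
-- max_size is stored in 8KB pages; 268435456 pages * 8KB = 2TB
SELECT DB_NAME(database_id) AS DatabaseName,
       name                 AS LogicalName,
       max_size             AS MaxSizePages,
       max_size / 128       AS MaxSizeMB    -- 128 pages per MB
FROM sys.master_files
WHERE type_desc = 'LOG'
```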

Similar Messages

  • How to set up PopProxy* log file size ?

    Dear All,
Does anybody know how to set up the MMP PopProxy* log file size and rollover time?
    ./imsimta version
    Sun Java(tm) System Messaging Server 7.0-3.01 64bit (built Dec 9 2008)
    libimta.so 7.0-3.01 64bit (built 09:24:13, Dec 9 2008)
    Steve

SteveHibox wrote:
Does anybody know how to set up the MMP PopProxy* log file size and rollover time?
Details on these settings are available here:
http://wikis.sun.com/display/CommSuite6U1/Communications+Suite+6+Update+1+What%27s+New#CommunicationsSuite6Update1What%27sNew-MMPLogging
    Regards,
    Shane.

  • What is the max mp4 file size?

    Does anyone know what the max mp4 file size that can be imported into iTunes for sync with ATV is?

RichSolNuv wrote:
I got HandBrake to work. What settings do you recommend for ATV? When I used the Apple TV setting it worked pretty quickly and I could view the file in iTunes, but it would not import into ATV; I got a message saying it was not playable on ATV. The "Normal" setting (h.264) seems to work well. I did a few minutes of a movie and it looks good and works with ATV, but it seems to take a lot longer (hours).
With HandBrake 0.9.1, always select "Apple TV" under the "Presets" column on the right side of the window. When you say it is "not playable on ATV", do you mean by streaming, by syncing, or both?

  • SQL LOG FILE SIZE INCREASING

Hi DBAs,
The SQL log file occupies a lot of disk space on the server; the overall database size is 8GB.
How do I decrease the SQL LDF file size on the server? Please explain the suitable steps to perform.
Thanks
DBA

use master
go
dump transaction <YourDBName> with no_log
go
use <YourDBName>
go
-- 100 is the size you may want to shrink it to in MB; change it to your needs
DBCC SHRINKFILE (<YourDBNameLogFileName>, 100)
go
-- then you can run this to check that all went fine
dbcc checkdb (<YourDBName>)
Andy,
What is the point in asking the user to use NO_LOG when you did not even mention what that evil command will do? It is seriously not required here, given that the initial size of the log file is set to 8GB.
And what is the point in running CHECKDB?
I don't agree with any part you pointed out.
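Worth adding: DUMP TRANSACTION ... WITH NO_LOG is deprecated and was removed in SQL Server 2008, so on a current version the safer equivalent is to back up the log and then shrink. A hedged sketch (database name and backup path are placeholders):

```sql
USE master
GO
-- Back up the log so inactive records can be reused (FULL recovery model)
BACKUP LOG YourDBName TO DISK = 'D:\Backups\YourDBName_log.trn'
GO
USE YourDBName
GO
-- Shrink the log file to roughly 100 MB
DBCC SHRINKFILE (YourDBName_log, 100)
GO
```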

  • Log file size

We have a DNS server running on Solaris 9. It's generating huge logs, so the /var/adm/messages file has grown very large. Is there any way to create a separate log file for each day, or can I restrict the log file size for a single file?
Thank you

Hmmm,
What type of environment is this DNS server used for? How many domains/delegated domains are configured on the host?
I think by default BIND allows 1000 recursive lookup connections. (That is already plenty, and if you have that amount of legitimate traffic you will have to add more DNS servers and configure the nodes accordingly.)
Is the server listed as a Name Server for your domain and used externally for name resolution for your domain host entries, maybe the SOA?
nslookup (enter)
set type=ns (enter)
your_domain_name (i.e. your_domain.com) (enter)
Or
dig -q NS your_domain.com
If the affected server comes back in the list, it is NEVER EVER a good idea to allow recursive lookups.
My guess is that you are subject to a denial of service, unless you host a fairly large environment with 1000s of hosts.
Change the recursive-clients setting back (your system cannot handle 5000 recursive lookups, and your system utilization shows this).
Then configure
"category queries { your_query_file; };" in your named.conf
Restart BIND.
Use "rndc" to change the trace level to 1.
Let it run for 2-5 min and then stop BIND entirely.
Then run something like:
"cat your_query_file | cut -d'/' -f2 | sort | uniq -c | more" (depends on the log file format; better yet use nawk)
Take a quick look to see if there is one IP that is hammering your system.
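On the original question of capping /var/adm/messages growth: BIND can log queries to its own rotated file instead of syslog. A hedged named.conf sketch (the path, size limit, and version count are placeholders to adjust):

```
logging {
    channel query_log {
        // keep 5 rotated files of at most 10 MB each
        file "/var/named/query.log" versions 5 size 10m;
        severity info;
        print-time yes;
    };
    category queries { query_log; };
};
```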

  • Max binary file size in r3

We have certain reports that dump large files (over 1 GB). In R2, we set the max binary file size (CMC > Servers > WebI Report Server) to handle this. Where is this setting in R3 (we're actually at 3.1)?
Thanks in advance....

    Hi Shawn
    The exact location is:
Properties of WebIntelligenceProcessingServer > in the Web Intelligence Processing Service section >
Binary Stream Maximum Size (MB).
    By default it is 50MB in BO XIR3.1
    Regards,
    Hrishikesh

  • Online redo log file size reduction

    Dear Experts,
    I have recently done an HADR set-up. My DR server is at a remote location, and my network line is not very fast. My question is: can we reduce the log file size, which is currently 63.9921874995806 MB (the default)? If we reduce the size, it may help ship the logs faster.
    Kindly suggest the best approach.
    Thanks
    Sadiq

    Hello,
    if you are referring to the built-in DB2 HADR functionality, reducing the size of log files will not help.
    HADR does not transfer complete log files, but constantly replicates the logging information of each single transaction to the standby site.
    Your network has to have enough bandwidth to support the average log generation rate. This is not related to the size of individual log files, but to how much logging information is generated per amount of time.
    Kindly check the corresponding DB2 online documentation for HADR performance aspects
    [High availability disaster recovery (HADR) performance|http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.admin.ha.doc%2Fdoc%2Fc0021056.html]
    But, to answer your initial question: the size of log files can be changed by modifying the LOGFILSIZ database configuration parameter. It probably will not help in your case, though.

  • Log file size on ACS 5.3

    Hi,
    How do I set a limit on the log file size in ACS 5.3? I had the same issue with Nexus 1000v, but there is a command there that lets you set the log file name and size. It is getting bulky. Any advice?
    thanks
    Kerim

    Hi,
    Here is the explanation of this function:
    http://www.cisco.com/en/US/docs/net_mgmt/cisco_secure_access_control_system/5.3/user/guide/admin_config.html#wpxref88023
    Here is some information about the file system that this is deleted from:
    http://www.cisco.com/en/US/docs/net_mgmt/cisco_secure_access_control_system/5.3/user/guide/logging.html#wp1052656
    So if the file that you are looking at is active (it probably is, if you see logs from yesterday and today), it will not be deleted.
    Hope this helps,
    Tarik Admani

  • MessageBox log file size

    Hi, 
    In our prod environment, the MessageBox data file is within the recommended limit of 2GB, but the log file is 32GB. Is this a reason to worry, or is it normal? I couldn't find any recommendations on this.
    Thank you very much!

    This is not normal.
    IMO your BizTalk database jobs are not running. Make sure your BizTalk SQL Server jobs are enabled and the SQL Server Agent is running.
    Please have a look at the
    How to Configure the Backup BizTalk Server Job article to enable the jobs.
    The BizTalk backup job is responsible for keeping the log file size within the limit.
    you can try shrinking the log file using following SQL command
    USE BiztalkMsgBoxDb;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE BiztalkMsgBoxDb
    SET RECOVERY SIMPLE;
    GO
    -- Shrink the truncated log file to 2 MB.
    DBCC SHRINKFILE (BiztalkMsgBoxDb_Log, 2);
    GO
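One caveat worth adding (my assumption, not stated in the reply above): the Backup BizTalk Server job relies on log marks, so after an emergency shrink like this you would normally switch the database back to FULL recovery and take a full backup so the log chain restarts. A sketch (the backup path is a placeholder):

```sql
-- Restore the recovery model expected by the BizTalk backup job
ALTER DATABASE BizTalkMsgBoxDb
SET RECOVERY FULL;
GO
-- Take a full backup to restart the log backup chain
BACKUP DATABASE BizTalkMsgBoxDb TO DISK = 'D:\Backups\BizTalkMsgBoxDb.bak';
GO
```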
    I would recommend you to have a read of following articles
    BizTalk Environment Maintenance from a DBA perspective 
    BizTalk Databases: Survival Guide
    hope this helps. 
    Greetings, HTH
    Naushad Alam

  • Max Character File Size Limit exceeded. The document is too large to process

    Hi
    I have set a section on products in a report.
    The report contains around 50 products. When I open the report in draft mode, the following error is displayed:
    "Max Character File Size Limit exceeded. The document is too large to be processed by the server. Contact your Business Objects Administrator."
    Can somebody help me out with how to increase my report character file size?

    Hi,
    If you are using Business Objects XIR2, there is a performance parameter on the Web Intelligence Report Server where you can increase the size of that file. The parameter is Maximum Character File Size. Go to CMC > Server > server.Web_IntelligenceReportServer; you will see it in the Properties tab.
    Cheers,
    Luigi

  • Optimal online redo log file size

    Hello to all,
    I have just now installed 11gR2 patchset 2.
    I found:
    "The Redo Logfile Size Advisor can be used to determine the optimal online redo log file size based on the current FAST_START_MTTR_TARGET setting and MTTR statistics."
    Which means that the Redo Logfile Size Advisor is enabled only if FAST_START_MTTR_TARGET is set.
    For now this installation is not a production instance (only for my testing); I am still on 10g.
    My question is:
    Is it important to set FAST_START_MTTR_TARGET in 11g, or is that all old info? Because from my query I see OPTIMAL_LOGFILE_SIZE isn't set.
    SELECT TARGET_MTTR, ESTIMATED_MTTR, WRITES_MTTR, WRITES_LOGFILE_SIZE, OPTIMAL_LOGFILE_SIZE
    FROM V$INSTANCE_RECOVERY;
    TARGET_MTTR  ESTIMATED_MTTR  WRITES_MTTR  WRITES_LOGFILE_SIZE  OPTIMAL_LOGFILE_SIZE
    0            20              0            0
    and
    SELECT a.group#, b.member, a.status, a.bytes
    FROM v$log a, v$logfile b
    WHERE a.group# = b.group#;
    GROUP#  MEMBER                        STATUS    BYTES
    6       /u03/oradata/TEST/redo06.log  CURRENT   1048576000
    5       /u03/oradata/TEST/redo05.log  INACTIVE  1048576000
    4       /u03/oradata/TEST/redo04.log  INACTIVE  1048576000
    3       /u03/oradata/TEST/redo03.log  INACTIVE  1048576000
    2       /u03/oradata/TEST/redo02.log  INACTIVE  1048576000
    1       /u03/oradata/TEST/redo01.log  INACTIVE  1048576000
    Thanks for any doc or info you can point me to.

    > For now this installation is not a production instance (only for my testing); I am still on 10g.
    > My question is: is it important to set FAST_START_MTTR_TARGET in 11g, or is that all old info?
    There is no difference for FAST_START_MTTR_TARGET between 10g and 11g.
    For 10g:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams068.htm#i1127412
    For 11g:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams079.htm
    Both documents have the exact same text, which means it works exactly the same in 11g as in 10g.
    But don't rely on this one (because it is not an official documentation link, and it differs from the official doc's text):
    http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10755/initparams069.htm
    Regards
    Girish Sharma
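As a sketch of the mechanism being discussed (the 300-second target is purely illustrative): once FAST_START_MTTR_TARGET is set to a non-zero value, the advisor starts populating OPTIMAL_LOGFILE_SIZE:

```sql
-- Enable MTTR-based checkpointing; 300 seconds is an illustrative target
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300;

-- After some workload, the advisor column should be populated (size in MB)
SELECT OPTIMAL_LOGFILE_SIZE
FROM   V$INSTANCE_RECOVERY;
```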

  • Reduce the Production Log file size(.LDF)

    Hi Everybody,
    We are using R/3 ECC 6.0 with a SQL 2005 database. For the past two days our production server performance has been very slow, and the production log file (.LDF) has crossed 17 GB. I want to reduce this log file size, but I don't know how. Please can someone help me do this? Otherwise this will become a serious issue.
    Points will be rewarded
    Thanks
    Siva

    How did you trace the slowness back to the log file? A 17 GB log file is on the small side for a production system. I don't think a hotfix is going to fix your log growth.
    Is the log on the same physical disk as your data files? Is it on a very slow hard drive, or is the drive having an I/O problem? That is the only way it would impact performance to a noticeable degree. A large or small log file will have no real effect on performance, since it is just appended to and not read during writes, and in most production environments it is on a separate disk or part of a SAN.
    You can decrease its growth by increasing your log backup frequency. Do you back it up now? You can probably set your backup software to shrink the file when it finishes backing up. You should consult your DBA team and ask for their advice; they can quickly point you in the right direction.
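To check whether the log really is the problem, a quick hedged first step (standard T-SQL, nothing SAP-specific) is to look at how full the log is and whether the log drive shows I/O stalls:

```sql
-- Report log size and percent-used for every database on the instance
DBCC SQLPERF (LOGSPACE);

-- Check cumulative I/O stalls per file to see if the log drive is a bottleneck
SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;
```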

  • Project Bloating and Repair in Premiere Pro CS6; Max Project File Size? Adobe Coders Welcome...

    Greetings.
    After a few browsing sessions, I haven't found direction for a specific problem within a project bloat. First and foremost; we're always backed up. I have a restoration copy of the project that I will reference herein. Check that off the list.
    Numerous people have posted here and around Creative Cow with problems regarding an expanding file size using CS6, watching files go from 300mb to 2.8gb. The unfortunate problem is when the project crosses a magical line that results in an inability to open, import, export, or access sequences contained therein. Has anyone addressed a solution to repairing Premiere CS6 save files? Reducing file bloat? I haven't seen anything (recall the FCP7 repair days... solutions existed because problems were frequent; thankfully not so bad with Adobe).
    Does anyone on the code side know the maximum project file size that Premiere can read? I created a monster... over 200 sequences utilizing 4.5TB of footage, stretching from Red codecs to EXCam to MPEG4 and H264. And for some time, the project size was manageable. Then it inflated. And one day, it randomly stopped opening. None of the auto-saves open.
    Just curious. Files aren't the problem, but the 2.8GB save file is. Machine specs // etc. are openly available if required; no bad ram, no problems with all projects (just this one beast).
    Cheers,
    Jon Michael Ryan

    Really? Mac has a history of being more stable than PC with Premiere...

  • Installing IDES 4.7 in VMware: why "unable to set time for file..."?

    system     Windows2003
    database   Oracle 9
    disk space : C(50G)D(80G)E(40G)
    "Copying file C:/DOCUME1/ADMINI1/LOCALS~1/Temp/SAPinst/bootstrap_keydb.1.xml to: C:/SAPinst ORACLE SAPINST.
    INFO 2014-01-26 16:22:47
    Copying file C:/DOCUME1/ADMINI1/LOCALS~1/Temp/SAPinst/bootstrap_keydb.xml to: C:/SAPinst ORACLE SAPINST.
    INFO 2014-01-26 16:22:47
    Copying file C:/DOCUME1/ADMINI1/LOCALS~1/Temp/SAPinst/CONTROL.DTD to: C:/SAPinst ORACLE SAPINST.
    ERROR 2014-01-26 16:22:47
    FSL-02010  Unable to set time for file C:/SAPinst ORACLE SAPINST/CONTROL.DTD.
    ERROR 2014-01-26 16:22:47
    FJS-00012  Error when executing script."
    who can help me ..please.....

    Hello Matthew,
    You should also change your temp directory to something with no spaces, something like C:\temp.
    SAPinst sometimes has problems with spaces in the temp path, and the Universal Installer nearly always
    has a problem with this.
    Regards,
    David

  • Get Total DB size, Total DB free space, Total Data & Log File sizes and Total Data & Log File free sizes from a list of servers

    How do I get SQL Server total DB size, total DB free space, total data & log file sizes, and total data & log file free sizes from a list of servers?

    Hi Shivanq,
    To get a list of databases, their sizes, and the space available in each on the local SQL instance:
    dir SQLSERVER:\SQL\localhost\default\databases | Select Name, Size, SpaceAvailable | ft -auto
    This article is also helpful for you to get DB and Log File size information:
    Checking Database Space With PowerShell
    I hope this helps.
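Sticking to T-SQL (the PowerShell one-liner above covers the local instance), a hedged sketch that reports size and free space per file for the current database; you would run it against each server in your list:

```sql
-- Size and free space per data/log file in the current database
-- FILEPROPERTY(..., 'SpaceUsed') returns used pages; sizes are in 8KB pages
SELECT name                                           AS LogicalName,
       type_desc,
       size / 128                                     AS SizeMB,
       FILEPROPERTY(name, 'SpaceUsed') / 128          AS UsedMB,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS FreeMB
FROM sys.database_files;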
