Content Database Growing Too Large

We seem to be experiencing some slowness on our SharePoint farm and noticed that one of our databases (we have two) is now at 170 GB. Best practice seems to be to keep a database from going over 100 GB.
We have hundreds of sites within one database and need to split these up to reduce the size of our databases.
So I would like to create some new databases and move some of the sites from the old database over to the new databases.
Can anyone tell me if I am on the right track here and if so how to safely move these sites to another Content Database?
dfrancis

I would not recommend using RBS. Microsoft's RBS is really just meant to let you exceed the 4GB/10GB MDF file size limit in SQL Express. RBS space counts against database size, and backup/restore becomes a more complex task.
Trevor Seward
Follow or contact me at...
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
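
For the original question about splitting the sites into new content databases, you are on the right track. Here is a minimal sketch of the usual approach in the SharePoint Management Shell; the database names and site URL below are hypothetical, so substitute your own, and take a full farm backup before moving anything.
# Create an additional content database in the same web application
New-SPContentDatabase -Name "WSS_Content_New" -WebApplication "http://sharepoint"
# See which site collections currently live in the large database
Get-SPSite -ContentDatabase "WSS_Content_Old" -Limit All | Select-Object Url
# Move one site collection into the new database (repeat for each site you want to move)
Move-SPSite "http://sharepoint/sites/teamsite" -DestinationDatabase "WSS_Content_New"
# Move-SPSite requires an IIS reset before the moved site is served correctly
iisreset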

Similar Messages

  • SharePoint 2010 Content database growing too fast

    Hi,
    I have a SharePoint 2010 content database, wss_content_DB, which is 197 GB, but when I calculate my site collections I can see they have used only about 70 GB. I know there is an AuditData table which is about 27 GB. A couple of days ago my content DB was about 105 GB; the audit data log retention was set to unlimited, and once I noticed that, I set it to 90 days. Since then it has jumped to 197 GB.
     I have no idea what is going on.
     Now I need to know:
     - Why it is showing 197 GB in SQL Server
     - How I can trim the audit log
     - Why it suddenly increased from 105 GB to 197 GB
    Any help will be greatly appreciated.
     Abdul

    There is a way to deal with this table: an stsadm command (stsadm -o trimauditlog), shipped as part of the Infrastructure Update, that will allow you to manage this data.
    # PowerShell script to trim old AuditData table records
    $currentDate = Get-Date
    # Keep the last 90 days of audit data (negative so AddDays moves back in time)
    $NumberOfDays = -90
    $DateToDelete = $currentDate.AddDays($NumberOfDays)
    $DateString = '{0:yyyyMMdd}' -f $DateToDelete
    $STSADMCMD = "stsadm -o trimauditlog -date $DateString -databasename WSS_ContentDB"
    Invoke-Expression $STSADMCMD
    WHY IT SUDDENLY INCREASED:
    Run this query on your SQL instance: DBCC SQLPERF(logspace)
    This will show you the size of the log files (LDF) and how much of each is unused. If you have a lot of unused space, it usually indicates you do not have recent transaction log backups; backing up the log is what allows that space to be reused.
    Normally you should not shrink databases; do it only if you have deleted a lot of data and are not going to need the space again, so only shrink if you really need to.
    http://www.microsoft.com/en-us/download/details.aspx?id=24282
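    If you want to confirm where the space is actually going before trimming, a small PowerShell sketch along these lines can help. The instance name below is hypothetical, and it assumes the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell tools is available.
    # Show log (LDF) size and percentage used for every database on the instance
    Invoke-Sqlcmd -ServerInstance "SQLSRV01" -Query "DBCC SQLPERF(LOGSPACE)" | Format-Table 'Database Name', 'Log Size (MB)', 'Log Space Used (%)'
    # Show how much of the content database the AuditData table occupies
    Invoke-Sqlcmd -ServerInstance "SQLSRV01" -Database "WSS_ContentDB" -Query "EXEC sp_spaceused 'dbo.AuditData'"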

  • My audit database getting too large

    Post Author: amr_foci
    CA Forum: Administration
    My audit database is getting too large; how do I reset it?

    Post Author: jsanzone
    CA Forum: Administration
    Amr,
    The best that I can determine, there is no official documentation from BusinessObjects regarding a method to "trim" the Auditor database. Based on previous discussions, I seem to remember that you are on XI R2, but if I'm wrong, then these notes will not apply to you. Here is the scoop:
    There are six tables used by Auditor:
    1) APPLICATION_TYPE (initialized with 13 rows, does not "grow")
    2) AUDIT_DETAIL (tracks activity at a granular level, grows)
    3) AUDIT_EVENT (tracks activity at a granular level, grows)
    4) DETAIL_TYPE (initialized with 28 rows, does not "grow")
    5) EVENT_TYPE (initialized with 41 rows, does not "grow")
    6) SERVER_PROCESS (initialized with 11 rows, does not "grow")
    If you simply want to remove all audit data and start over, then truncate AUDIT_EVENT and AUDIT_DETAIL.
    If you want to only remove rows based on a period, then consider that the two tables, AUDIT_DETAIL and AUDIT_EVENT, are transactional in nature; however, AUDIT_DETAIL is a child of the parent table AUDIT_EVENT, so you will want to remove rows from AUDIT_DETAIL (based on its link to AUDIT_EVENT) before removing rows from AUDIT_EVENT. Otherwise, rows in AUDIT_DETAIL will get "orphaned" and never be of any use to you, and worse, you will not readily know how to delete them later.
    Here are the SQL statements:
    delete from AUDIT_DETAIL
    where event_id in (select Event_ID from AUDIT_EVENT
                       where Start_Timestamp between '1/1/2006' and '12/31/2006')
    go
    delete from AUDIT_EVENT
    where Start_Timestamp between '1/1/2006' and '12/31/2006'
    go
    One word of caution: shut down your BOE application before doing this maintenance work, otherwise there is a possibility that Auditor will be busy writing new rows to your database while you are busy deleting rows, and you might encounter an unwanted table lock, either on the work you are doing or the work that BOE is trying to perform.
    Good luck!

  • Time Machine backup grows too large during backup process

    I have been using Time Machine without a problem for several months, backing up my iMac - a 500 GB drive with 350 GB used. Recently TM failed because the backups had finally filled the external drive - a 500 GB USB drive. Since I did not need the older backups, I reformatted the external drive to start from scratch. Now TM tries to do an initial full backup, but the size keeps growing as it is backing up, eventually becoming too large for the external drive, and TM fails. It will report, say, 200 GB to back up, then it reaches that point and the "Backing up XXXGB of XXXGB" just keeps getting larger. I have tried excluding more than 100 GB of files to get the backup set very small, but it still grows during the backup process. I have deleted plist and cache files as some discussions have suggested, but the same issue occurs each time. What is going on?

    Michael Birtel wrote:
    Here is the log for the last failure. As you see, it indicates there is enough room (345 GB needed, 464 GB available), but then it fails. I can watch the backup progress; it reaches 345 GB and then keeps growing till it gives an out-of-disk-space error. I don't know what "Event store UUIDs don't match for volume: Macintosh HD" implies; maybe this is a clue?
    No. It's sort of a warning, indicating that TM isn't sure what's changed on your internal HD since the previous backup, usually as a result of an abnormal shutdown. But since you just erased your TM disk, it's perfectly normal.
    Starting standard backup
    Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
    Ownership is disabled on the backup destination volume. Enabling.
    2009-07-08 19:37:53.659 FindSystemFiles[254:713] Querying receipt database for system packages
    2009-07-08 19:37:55.582 FindSystemFiles[254:713] Using system path cache.
    Event store UUIDs don't match for volume: Macintosh HD
    Backup content size: 309.5 GB excluded items size: 22.3 GB for volume Macintosh HD
    No pre-backup thinning needed: 345.01 GB requested (including padding), 464.53 GB available
    This is a completely normal start to a backup. Just after that last message is when the actual copying begins. Apparently whatever's happening, no messages are being sent to the log, so this may not be an easy one to figure out.
    First, let's use Disk Utility to confirm that the disk really is set up properly.
    First, select the second line for your internal HD (usually named "Macintosh HD"). Towards the bottom, the Format should be Mac OS Extended (Journaled), although it might be Mac OS Extended (Case-sensitive, Journaled).
    Next, select the line for your TM partition (indented, with the name). Towards the bottom, the Format must be the same as your internal HD (above). If it isn't, you must erase the partition (not necessarily the whole drive) and reformat it with Disk Utility.
    Sometimes when TM formats a drive for you automatically, it sets it to Mac OS Extended (Case-sensitive, Journaled). Do not use this unless your internal HD is also case-sensitive. All drives being backed up, and your TM volume, should be the same. TM may do backups this way, but you could be in for major problems trying to restore to a mis-matched drive.
    Last, select the top line of the TM drive (with the make and size). Towards the bottom, the Partition Map Scheme should be GUID (preferred) or Apple Partition Map for an Intel Mac. It must be Apple Partition Map for a PPC Mac.
    If any of this is incorrect, that's likely the source of the problem. See item #5 of the Frequently Asked Questions post at the top of this forum for instructions, then try again.
    If it's all correct, perhaps there's something else in your logs.
    Use the Console app (in your Applications/Utilities folder).
    When it starts, click Show Log List in the toolbar, then navigate in the sidebar that opens up to your system.log and select it. Navigate to the "Starting standard backup" message that you noted above, then see what follows that might indicate some sort of error, failure, termination, exit, etc. (many of the messages there are info for developers, etc.). If in doubt, post (a reasonable amount of) the log here.

  • SharePoint TempDB.mdf growing too large? I have to restart SQL Server all the time. Please help

    Hi there,
    On our DEV SharePoint farm > SQL server
    The tempdb.mdf file grows too quickly and too much. I am tired of increasing the space and cannot do that anymore.
    Every time, I have to reboot the SQL server to get tempdb back to a normal size.
    The Live farm is okay (with similar data) so it must be something wrong with our
    DEV environment.
    Any idea how to fix this please?
    Thanks so much.

    How do you get the tempdb to 'normal size'? How large is large, and how small is normal?
    Have you put the databases in simple recovery mode? It's normal for dev environments to not have the required transaction log backups to keep the ldf files in check. That won't affect the tempdb but if you've got bigger issues then that might be a symptom.
    Have you turned off autogrowth for the temp DB?
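    As a quick sketch, both of those can be checked from PowerShell; the instance name is hypothetical and the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell tools is assumed to be available.
    # Recovery model per database -- dev content databases are usually best in SIMPLE
    Invoke-Sqlcmd -ServerInstance "SQLSRV01" -Query "SELECT name, recovery_model_desc FROM sys.databases"
    # tempdb file sizes and autogrowth settings (size is in 8 KB pages, hence the /128 for MB)
    Invoke-Sqlcmd -ServerInstance "SQLSRV01" -Database "tempdb" -Query "SELECT name, size/128 AS size_mb, growth, is_percent_growth FROM sys.database_files"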

  • EM Application Log and Web Access Log growing too large on Redwood Server

    Hi,
    We have a storage space issue on our Redwood SAP CPS Oracle servers and have found that the two log files above are the main culprits. These files are continually updated, and I need to know what they are and whether they can be purged or reduced in size.
    They have existed since the system was installed, and I have tried to access them, but they are too large. I have also tried taking the cluster group offline to see if the files stop being updated, but they continue to be updated.
    Please could anyone shed any light on this and what can be done to resolve it?
    Thanks in advance for any help.
    Jason

    Hi David,
    The file names are:
    em-application.log and web access.log
    The File path is:
    D:\oracle\product\10.2.0\db_1\oc4j\j2ee\OC4J_DBConsole_brsapprdbmp01.britvic.BSDDRINKS.NET_SAPCPSPR\log
    Redwood/CPS version is 6.0.2.7
    Thanks for your help.
    Kind Regards,
    Jason

  • Music library growing too large...

    I've been using Quod Libet as my music player for a while now, and it is pretty much exactly what I want in a music player.  However, as my music collection grows, it has been slowing down lately.  I have over 8000 songs now, around 40 gigs, and Quod Libet will slow down, peg cpu usage, and crash quite often now.  What other options do I have?  I know Amarok can use a real database backend that should scale way beyond what I currently have, but prefer GTK apps and the Quod Libet interface.  Can MPD handle a library this large?  Any MPD clients that are Quod Libet like?  Anyway to make Quod Libet scale better?
    Thanks

    luciferin wrote:
    dmz wrote:http://www.last.fm/user/betbot
    It takes a true audiophile to require The Spice Girls in lossless quality
    Here's me: http://www.last.fm/user/Arch
    That's right, I nabbed the nick Arch way back in 2004 on Audioscrobbler and Neowin.net   Arch Linux and I were meant to be together.
    And to derail this thread a little bit: does anybody know of a linux music player that doesn't use a database?  Just adds files from your directories ala Foobar?
    The Spice Girls are very underrated. And Mel C is a hell of a girl. So beautiful.. I wish.. oh well. Maybe you want to take a look at mocp or cmus if you don't want to use mpd.

  • Tablespace growing too large

    Good morning gurus,
    Sorry if I sound novice at some point .
    I have this tablespace, vending, of size 188,598.6 MB. It keeps on growing. I have to give it extra space every week, and all of it is consumed. It is a permanent tablespace with extent management local and segment space management auto. This tablespace is the backbone of the database, which is 250 GB. We are currently running Oracle 10.2.0.4 on Windows.
    Please help
    Regards
    Deepika

    Hi..
    Please do mention the database version and the OS.
    You need to know what objects and object types are in such a big tablespace. Which schemas use it, and what do they do? Do they do any kind of direct-path loading into the database? Are all the tables and the indexes on the same tablespace? My feeling is that you have all the tables and indexes in the same tablespace. I would recommend two things:
    1. Purge data. Talk to the application team concerned, or whoever the responsible person is, decide on a data retention period for the database, and move the rest of the data to some other database as history.
    2. Keep different tablespaces for the tables and indexes.
    HTH
    Anand

  • Automatic Deployment Rule for SCEP Definitions growing too large.

    The deployment package for SCEP definitions is now 256 MB and growing. How can we make sure it stays small? The ADR creating the package is leaving 26 definitions in there right now.

    The method that Kevin suggests above is what is implemented as part of a default deployment template included with SP1. This limits the number of definitions in the update group to the latest eight (I think).
    As a supplemental note here, whenever an ADR runs and is configured to use an existing update group, it first wipes that update group.
    Jason | http://blog.configmgrftw.com

  • RTP jitter buffer growing too large?

    Hi all I am experiencing a rather annoying problem when receiving RTP audio data and rendering it: It takes some time for the player to get created and realized, in the mean time RTP packets continued to arrive, causing them to be buffered. It appeared that the buffer grew until data is drained from it (by the player), so the longer it took the player to get created and realized the larger the buffer became, causing a massive delay which is annoying when a conversation is being carried out. I did set the buffer length (via the RTPManager's BufferControl) to 200ms but this does not seem to make any difference. I don't have direct proof that this is what actually happened under the hood but all evidence seemed to point to this unchecked growth of the jitter buffer. The faster the computer, the faster the player get realized and the smaller the delay.
    Does anyone else experience this phenomenon? Is there a fix?

    I don't know if your diagnosis is correct; for sure, I have a lot of jitter between two PCs using the same Java app and playing an RTP broadcast audio stream.
    But I could not relate it to the speed of the computer; sometimes A plays before B, sometimes after. Probably it is the time to create the objects that varies.
    Still looking for a solution....

  • RSZWOBJ table growing too large

    Hello Experts:
    RSZWOBJ is the largest table at my client.  Does anyone have experience with archiving the RSZWOBJ table or handling its data growth?
    Thanks,
    Jane

    Hi,
    Can you carry out, say, a bookmark purge for content that is older than 6 months or so?
    Can you check whether that history can be deleted from the system, and consequently from the table?
    How to delete user-defined bookmarks?
    How to find the InfoProvider and query name with the help of the WAD technical name:
    Thanks and regards
    Kiran

  • CSCtz15346 - /mnt/pss directory growing too large and having no free space

          hello,
    Nexus 3K switch is not allowing me to save the configuration, showing the below message.   
              switch %$ VDC-1 %$ %SYSMGR-2-NON_VOLATILE_DB_FULL: System non-volatile storage usage is unexpectedly high at 99%
    switch# switch# copy  r s
    [########################################] 100%
    Configuration update aborted: request was aborted
    switch#
    how to clean up /mnt/pss ?
    Thanks

    Hi Naga,
    From the CLI, issue "show system internal flash" to see what directory is taking up the space.   Unfortunately, if it is /mnt/pss, then you really need to engage TAC to get on the switch and enable the internal access to the file system so it can be cleared up.
    Sincerely,
    David.

  • Methods for managing large content databases (SharePoint 2013)

    I have a SharePoint 2013 web application with a content database of over 800 GB. It's becoming difficult to manage backups (backup time takes forever). It was also very difficult to migrate from 2010 to 2013. I'm getting a warning from SharePoint indicating that the content database is very large.
    What are methods (SQL or SharePoint) for managing this? I was told I could split the content database into smaller DBs....

    RBS isn't a fix for database sizing (RBS content must still be accounted for when sizing a database). The latter half of your statement is absolutely correct. Microsoft supports you based on those requirements, appropriate disk performance, HA, DR, and so on (because restoring a 4TB content database would take quite some time).
    But keep in mind what supported means here -- if you opened a PSS case with Microsoft, even if you did not meet these requirements, they would 'support' you up until they found that the issue may be stemming from your lack of having these things in place.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    So RBS doesn't shrink DB size; what's the purpose of it? I don't have RBS implemented, but I have an 800GB DB. I have a disaster recovery plan, mirrored SQL, and the disk utilization is within range.
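    To plan a split into smaller content databases, a sketch like this lists each site collection's storage footprint so you can decide what to move; the database name is hypothetical and the SharePoint Management Shell is assumed.
    # List site collections in the large database with their size in GB
    Get-SPSite -ContentDatabase "WSS_Content_Large" -Limit All | Select-Object Url, @{Name='SizeGB'; Expression={ [math]::Round($_.Usage.Storage / 1GB, 2) }}
    # New-SPContentDatabase and Move-SPSite (see the first thread above) can then relocate the largest sites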

  • Var/adm/utmpx: value too large for defined datatype

    Hi,
    On a Solaris 10 machine I cannot use the last command to view login history, etc. It says something like "/var/adm/utmpx: value too large for defined datatype".
    The size of /var/adm/utmpx is about 2GB.
    I tried renaming the file to utmpx.0 and creating a new file using head utmpx.0 > utmpx, but after that the last command does not show any output. The new utmpx file seems to be updating with new info, though, as seen from the file's last-modified time.
    Is there a standard procedure to recreate a new utmpx file once it grows too large? I couldn't find much in the man pages.
    Thanks in advance for any help

    The easiest way is to cat /dev/null to utmpx - this will clear out the file to 0 bytes but leave it intact.
    from the /var/adm/ directory:
    cat /dev/null > /var/adm/utmpx
    Some docs suggest going to single user mode to do this, or stopping the utmp service daemon first, but I'm not positive this is necessary. Perhaps someone has input on that aspect. I've always just sent /dev/null to utmpx and wtmpx without a problem.
    BTW - I believe "last" works with wtmpx, and "who" works with utmpx.

  • iPhoto Package Contents: What are all these folders for? Is it too large to back up in the cloud?

    Dear experts, I am quite a newbie when it comes to understanding the Mac filing system, as I originally came from the PC world. There are lots of things I don't understand in the iMac file viewer, and how to organise and back up my photos is an important issue for me.
    I know that if I want to look at the original photo files on my iMac, I can right-click on users/myname/pictures and select "Show Package Contents".
    Question 1 - What are all these folders for?
    Please can someone explain what is the difference between all the folders I see?  Some of them seem to be exact duplicates of the others e.g. Masters, Modified and Originals all seem to have the same content.  So here is a list of folders that I see.  What is in them?, or what is their purpose?
    Data
    Data.noindex
    Modified
    Originals
    Apple TV Photo Cache
    Attachments
    Auto Import
    Backup
    Caches
    Contents
    Database
    iLifeShared
    iPod Photo Cache
    Masters
    Previews
    ProjectCache
    Thumbnails
    Question 2 - Which photo folder should I back-up?
    If I want to keep a physical backup of my photos, which of the above folders should I copy to an external hard drive?  (I use Get Backup to automatically  copy all important new or changed files to an external drive)
    Question 3 - Using the cloud: What is the best way to backup my large photo library in the cloud safely? 
    I would like to have some kind of safe backup in the cloud for my photos.  However the size of the iphoto library is huge at 165GB.  Even the Masters folder is huge.  It is 130GB.  Is it possible to back up files of this size in the cloud?  I have a couple of services called photo streaming and Dropbox, but they don't seem to be able to handle this kind of size.  Photo streaming only works with 1000 photos (as far as I can tell), and my Dropbox probably has a limit too.  I guess it's about 5GB.  I am already using about 3GB of my Dropbox space for other files.  I would consider both paid and free solutions.
    Many thanks to all the experts for your help!

    know that if I want to look at the original photo files on my iMac, I can right-click on users/myname/pictures and select "Show Package Contents".
    Don't do that. That's like opening the hood of your car and trying to figure out what all the different bits and pieces are and which you can yank out and dispose of. Simply, there are no user-serviceable parts in here.
    So, your Question 2:
    You back up the iPhoto Library as a single unit.
    Most Simple Back Up:
    Drag the iPhoto Library from your Pictures Folder to another Disk. This will make a copy on that disk.
    Slightly more complex: Use an app that will do incremental back ups. This is a very good way to work. The first time you run the back up the app will make a complete copy of the Library. Thereafter it will update the back up with the changes you have made. That makes subsequent back ups much faster. Many of these apps also have scheduling capabilities: So set it up and it will do the back up automatically.
    Example of such apps: Chronosync - but there are many others. Search on MacUpdate or the App Store
    Your question 3:
    There is no good back up to the Cloud. There are a couple of reasons for this. One is that the datasets are so large and the cloud services shape their download speeds. This means restoring can take days to complete. Then, and this is the big problem, the iPhoto Library needs to sit on a disk formatted Mac OS Extended (Journaled). Bluntly, no servers online are formatted appropriately, and if the Library is written to - by an incremental back up, for instance - there is a very high likelihood that the library will be corrupted.
    Your Question 1:
    The Library you're describing there sounds like one that has been updated a few times. Not everything you list there is a folder. Some will be aliases.
    The Data folders hold thumbnails.
    The Masters and Originals folders hold the files as imported from the camera
    The Previews hold the versions of the edited photos that are accessed via the Sharing mechanism.
    I think if you look closely you'll notice that one of the Data folders and one of either the Masters or the Originals folders is actually an alias.
    Everything else is a database or cache file of some form. All are required for iPhoto to work.
    As an FYI:
    For help accessing your photos in iPhoto see this user tip:
    https://discussions.apple.com/docs/DOC-4491
