Disk size based on content database size

Hi,
Assuming that by the formula
Database size = ((D × V) × S) + (10 KB × (L + (V × D)))
my DB size works out to 105 GB, what hard disk size should we recommend for creating this database?
Thank you,
Kotamarthi
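
For illustration only (the actual D, V, S, and L inputs were not posted), here is one set of values under which the formula yields roughly 105 GB: with D = 200,000 documents, V = 2 versions per document, S = 250 KB average document size, and L = 600,000 list items,
Database size = ((200,000 × 2) × 250 KB) + (10 KB × (600,000 + (2 × 200,000))) = 100,000,000 KB + 10,000,000 KB = 110,000,000 KB ≈ 105 GB.
The drive layouts discussed below then add headroom (logs, backups, search) well beyond the raw database figure.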

Updated, but a few questions.
WFE & App servers: System files: 80 GB; ULS & IIS logs etc.: 80 GB; Backup: 160 GB (can vary)
FAST Search: System files: 80 GB; Search: 300 GB; ULS logs: 80 GB; Backup: 80 GB
DB server (current content size is 106 GB): System files: 80 GB; MDF: 300 GB; LDF: 100 GB; Backups: 300 GB (can vary)
Updated the FAST Search entry with ULS = 80 GB and added an 80 GB backup drive.
A few questions:
1. For the WFE and App servers I have only two drives, one for system and one for logs. Are those sufficient? Where would the SharePoint files go, or do we need to add any other drives?
2. For FAST Search the search drive is 300 GB (this drive would be for indexing/querying etc.). Is that fine?
   a. On the search server I added 80 GB for backup. Can you let me know what purpose we can use this for?
3. For the DB server, do you mean that since we have a drive for the LDF logs we don't need a drive for the ULS logs?
Thank you,
Vinay

Similar Messages

  • Mismatch between the content database size and the total storage used by its site collections.

    Hi All,
    Environment:  SharePoint 2010 with SP2.
    Issue: One of the content databases in our farm shows 200 GB as used. There are 25 site collections in the DB, and the total storage
    used by all the site collections in that content DB is not more than 40 GB (we used the "enumsites" command and added up each site collection's storage used).
    What actions/troubleshooting were done?
    Ran a script that finds the actual size of each site collection and how much disk space it uses, but didn't find a major difference in the report.
    Checked "Deleted from end user Recycle Bin" in all the site collections; no major storage was noticed there either.
    Planning to detach and re-attach the problematic content DB to check whether that has any effect.
    Why does the content DB show 200 GB as used when the total storage used by all its site collections is just 40 GB?
    Suggestions from anyone are appreciated.
    Best Regards,
    Pavan Kumar Sapara.
    s p kumar

    Hi,
    Thanks for your reply.
    As there is only 20 MB of unallocated space in the DB mentioned above, the SQL DB team informed us that they cannot shrink it at the moment.
    So we are thinking of offloading all the site collections to a new DB (as sketched below) and then dropping the problematic database. In this way we can
    work around the issue.
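    A minimal sketch of that offload, assuming SharePoint 2010 and hypothetical database and web-application names; run it from the SharePoint 2010 Management Shell and test outside production first:

        # Create the new content database and move every site collection into it.
        New-SPContentDatabase -Name "WSS_Content_New" -WebApplication "http://webapp"
        Get-SPSite -ContentDatabase "WSS_Content_Old" -Limit All |
            Move-SPSite -DestinationDatabase "WSS_Content_New" -Confirm:$false
        # Move-SPSite requires an IISRESET afterwards for the moves to take effect.

    Once the old database hosts no site collections, it can be removed with Dismount-SPContentDatabase and dropped on the SQL side.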
    Answers to your queries:
    Are the mismatched sizes causing an issue? Are you short on disk space for DB storage or SQL backups?
    No, there is no issue caused by the mismatched sizes, and we are not short on disk space. We are just worried about why the DB occupies that much space (200 GB) when
    the total storage used by all site collections in that DB is 40 GB.
    Best Regards,
    Pavan Kumar Sapara.
    s p kumar

  • SharePoint content database size limit

    Hi All,
    I am just verifying whether my method is the right way to do this.
    I created a site collection and I need to clone its sites. I tried Export/Import, but the permissions are not copied because I use a third-party tool for column-level permissions.
    The workaround I found is to restore the site into a new content database (all permissions are copied). I need to create 100 of that same exact site collection (for different users). I am planning on creating a new content database within the same web
    application and restoring the site collection into the new content DB.
    Microsoft supports content DBs of up to 4 TB.
    But my question is: if I create multiple content DBs (each not exceeding 200 MB), will there be any performance issue? Is this the right way to do it?
    100 × 200 MB = close to 20 GB.
    I will have 5 web applications, each with 100 site collections and a 200 MB content DB per site collection.
    Is there anything wrong with doing it this way?

    Hi Maddy.r,
    The article states a limit of 500 content databases per farm:
    Limit: Number of content databases
    Maximum value: 500 per farm
    Limit type: Supported
    Notes: The maximum number of content databases per farm is 500. With 500 content databases per web application, end-user operations such as opening a site or site collection are not affected, but administrative operations such as creating a new site collection
    will see decreased performance. We recommend that you use Windows PowerShell to manage the web application when a large number of content databases are present, because the management interface might become slow and difficult to navigate.
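    As a hedged illustration of that PowerShell advice (all names and URLs below are hypothetical), enumerating the databases and creating one small content database per cloned site collection might look like this:

        # List every content database in the web application with its size.
        Get-SPContentDatabase -WebApplication "http://webapp" |
            Select-Object Name, CurrentSiteCount, @{n = 'SizeGB'; e = { $_.DiskSizeRequired / 1GB }}

        # One dedicated content database per cloned site collection.
        $db = New-SPContentDatabase -Name "WSS_Content_User001" -WebApplication "http://webapp"
        New-SPSite -Url "http://webapp/sites/user001" -OwnerAlias "DOMAIN\admin" -ContentDatabase $db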
    SharePoint 2010 and 2013 differ, and the capacity of SharePoint 2013 is better than that of 2010.
    In addition,
    I heard there is a solution using multiple farms instead: since the limitation is per farm, if you are able to create multiple farms, it may raise the bar of your environment's limitation.
    http://www.informit.com/articles/article.aspx?p=2131832&seqNum=2
    Regards,
    Aries
    Microsoft Online Community Support

  • Reduce the size of, or delete unwanted entries from, the AllDocStreams, AllDocVersions, EventCache, and EventLog tables of a SharePoint 2010 content database

    We use PowerShell scripts to migrate data between two SharePoint 2010 sites.
    During migration we delete all document libraries and lists from the destination site and then run PowerShell to migrate data from source to destination. We follow this process
    twice a week.
    But in doing so we found that the tables mentioned above (AllDocStreams, AllDocVersions, EventCache, EventLog) of the destination SharePoint content database are growing at an alarming rate.
    We wish to know how we could get rid of the unwanted data stored in these tables.

    Hi,
    This is an old thread, but here is an answer for some of the tables you mentioned:
    http://blogs.msdn.com/b/sowmyancs/archive/2012/06/29/alldocversions-amp-alldocstreams-table-size-after-upgrading-to-sharepoint-2010.aspx
    Cheers

  • SharePoint 2010 content database size issues: AllDocStreams

    Hi All,
    We are planning to migrate from SharePoint 2010 to SharePoint 2013. Our database team identified that the AllDocStreams and audit log tables were using almost 300 GB.
    We dropped the audit log table today (almost 100 GB),
    but the AllDocStreams table is still almost 147 GB.
    How do we handle this situation before the migration?
    Do I need to migrate as-is, or do I need to work on the AllDocStreams table?
    How do we handle the database size?
    Which way do you recommend? What are the best practices?
    Can anyone please send me a step-by-step implementation?
    Thanks,
    kumar
    kkm

    First off, touching SharePoint database tables is completely unsupported.
    http://support.microsoft.com/kb/841057
    You shouldn't be making changes within the database at all and you're putting yourself out of support by doing so.
    AllDocStreams is your data. You need to have users clean up data, or move Site Collections to new Content Databases if you feel you need to reduce the size of the table itself.
    Trevor Seward

  • Tempdb: disk size requirement for the tempdb database?

    Working on a new project that has a fresh Windows and SQL Server install on a physical machine.
    I'd like to know what disk size to allocate for tempdb initially: 10 GB or 20 GB? It will be on a separate drive.
    I'd also like to know the ideal drive sizes for the data and log drives; they will each be on their own separate drive as well.
    Any other configuration best practices?
    Thank you
    A.Shah

    Hi,
    The size of tempdb depends on various factors:
    bulk load operations,
    general query expressions,
    DBCC checks,
    indexes,
    LOB variables and parameters, and many more.
    As others suggested, monitoring is the best way. Try the following in a test environment:
    1. Set autogrow on for tempdb.
    2. Execute individual queries or workload trace files and monitor tempdb space use (see the sketch below).
    3. Execute index maintenance operations, such as rebuilding indexes, and monitor tempdb space.
    4. Use the space-use values from the previous steps to predict your total workload usage; adjust this value for projected concurrent activity, and then set the size of tempdb accordingly.
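    A minimal monitoring sketch, assuming the SqlServer PowerShell module and a placeholder instance name; it reads the sys.dm_db_file_space_usage DMV (counts are 8 KB pages, converted to MB):

        Import-Module SqlServer
        # Snapshot tempdb space use while the workload is running.
        $query = "SELECT SUM(user_object_reserved_page_count) * 8 / 1024 AS user_objects_mb, " +
                 "SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb, " +
                 "SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb, " +
                 "SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb " +
                 "FROM sys.dm_db_file_space_usage;"
        Invoke-Sqlcmd -ServerInstance "SQL01" -Database "tempdb" -Query $query

    Run it repeatedly during each test and keep the peak values; those peaks are what you size the tempdb files for.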
    Refer to the link below for optimization guidance:
    http://technet.microsoft.com/en-us/library/ms175527(v=sql.105).aspx
    Best Regards, Arun http://whynotsql.blogspot.com/

  • Where is the BLOB content saved for BLOBs smaller than the default 60 KB threshold in an RBS-enabled content database?

    Hi,
    In my SharePoint farm I enabled RBS for a content database.
    1) I uploaded 4 PDF files to a document library, of which 2 files are < 60 KB.
    As per the results of this query, those 2 files did not go to the RBS file system folder (e.g. C:/MywebApp/BLOBFolder):
    select * from mssqlrbs_resources.rbs_internal_blob_stores
    When I click the extended configuration column, the XML has the value below:
    <config_item_list>
      <config_item key="max_size_inline_blob" value="61140" />
    </config_item_list>
    But in which table is the content itself saved for these 2 documents?
    When I query the document rows, the Content column shows NULL and there is an RbsId value, yet these 2 files do not exist in the RBS folder either.
    So my question is: where is the content of these 2 files saved?
    adil

    Did you check the AllDocStreams table?
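    A minimal read-only sketch of that check, assuming the SqlServer PowerShell module and placeholder instance/database names. Querying a content database directly is unsupported, so treat this as inspection of a test copy only; the Content and RbsId columns are the ones your own query already surfaced:

        Import-Module SqlServer
        # Distinguish rows stored inline (Content populated) from rows externalized to RBS (RbsId populated).
        $query = "SELECT TOP 20 Id, " +
                 "CASE WHEN Content IS NULL THEN 'external (RBS)' ELSE 'inline in DB' END AS Storage, " +
                 "RbsId FROM AllDocStreams;"
        Invoke-Sqlcmd -ServerInstance "SQL01" -Database "WSS_Content" -Query $query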
    Thanks - WS MCITP (SharePoint 2010, 2013) Blog: http://wscheema.com/blog

  • Disk Size Increasing very Fast

    I am facing a very critical issue: the disk where Exchange 2013 is installed is losing nearly 1 GB of free space daily, while on the other hand the database file is not taking up much space on the disk. Please suggest a good option to sort this out.
    BRAT

    Hi,
    Based on my knowledge, circular logging is not recommended in a normal Exchange production environment, and enabling it is not a long-term option.
    I recommend you keep it disabled and run a full backup, which truncates the accumulated transaction logs and should free the space.
    For more information, here is a thread for your reference:
    enable circular logging (Note: though it concerns Exchange 2010, I think it also applies to Exchange 2013 for this issue)
    http://social.technet.microsoft.com/Forums/en-US/a01579af-8cdc-40d3-aef4-b5f569833553/enable-circular-logging?forum=exchange2010
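    A minimal read-only sketch for confirming the diagnosis, assuming the Exchange Management Shell (database names vary per environment): it shows whether circular logging is on, when the last full backup ran, and where the log folder lives.

        # Log folders grow until a full backup truncates them (or circular logging is enabled).
        Get-MailboxDatabase -Status |
            Format-Table Name, CircularLoggingEnabled, LastFullBackup, LogFolderPath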
    Hope it helps.
    Best regards,
    Amy
    Amy Wang
    TechNet Community Support

  • Disk size and backups

    Hello
    I wonder what disk size is required on HANA when migrating from, e.g., Oracle to HANA.
    E.g. under Oracle I have a 1 TB database (and somewhat more if I also back up at the OS level).
    What will I have under HANA? How much will there be to back up?
    Thank you a lot
    Jan

    Dear Jan,
    The first thing we need to perform before migrating from AnyDB to HANA is memory sizing. SAP provides specific notes for memory sizing of different SAP applications, as mentioned below:
    1793345 - Sizing for SAP Suite on HANA
    1637145 - SAP BW on HANA: Sizing SAP In-Memory Database
    1872170 - Suite on HANA memory sizing
    1736976 - Sizing Report for BW on HANA  
    The objective of the above is to get the footprint of the uncompressed data in AnyDB.
    That forms one of the parameters for sizing the HANA DB.
    Based on this sizing information one would choose a suitable T-shirt-sized HANA appliance from the hardware vendor. We then apply the
    traditional rule of thumb: RAM should be at least 2 × (uncompressed DB size / compression factor), and disk space should be at least 3-4 × RAM for the persistence layer plus 1 × RAM for logs.
    The compression factor might be around 4-7×, depending on the type of SAP application.
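    For illustration only (assumed numbers): with Jan's 1 TB Oracle database and an assumed compression factor of 5, RAM ≈ 2 × (1024 GB / 5) ≈ 410 GB; the persistence layer would then need roughly 3-4 × RAM ≈ 1.2-1.6 TB of disk, plus about 1 × RAM ≈ 410 GB for logs. A full data backup would be on the order of the compressed data footprint (here roughly 200 GB), not the original 1 TB.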
    Regards
    Arshad

  • Hyper-V checkpoint disk size growth out of control

    Hi,
    I have Hyper-V with an Exchange server installed on a VM. I used checkpoints on this production VM, but I've got trouble with my disk space: the VM suddenly went paused-critical because it ran out of the disk space consumed by the checkpoints.
    Based on Microsoft's recommendation it is not advisable to use checkpoints in a production environment; if you do, you should keep the VHD and checkpoint paths on separate disks.
    Now I am planning to delete all checkpoints, but the free space on my local disk is currently very low.
    My VM is now 300 GB and my free space is only 50 GB. Please tell me how much free space is needed for the merge while deleting the checkpoints. Is it possible to delete the checkpoints in my current condition?
    Please give me advice; I'm in a very horrible situation right now.
    Thanks.

    Hi,
    Using checkpoints can lead to unwanted behaviours like the one you are encountering.
    AFAIK, the merge process needs a variable amount of free space depending on the AVHD sizes and contents.
    But in your case the VM is a production VM and the process may misbehave,
    so I highly recommend you export your VM to another location before deleting the checkpoints:
    1. Connect a disk to your Hyper-V server.
    2. Shut down your VM.
    3. Right-click the VM, choose Export, and browse to the place where you want to export it.
    4. The exported VM will include both your VHD and the AVHDs (checkpoints).
    Now try deleting the checkpoints one by one: delete the first one, wait for the merge, delete the second, wait for the merge, and so on. Do not forget: you are using Windows Server 2012, so keep the VM stopped for the merge process to run (see the sketch below).
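    The same steps with the Hyper-V PowerShell module, as a minimal sketch (the VM name and export path are placeholders):

        Stop-VM -Name "EXCH01"
        Export-VM -Name "EXCH01" -Path "F:\VM-Export"   # safety copy of the VHD plus all AVHDs
        # Delete the checkpoints; with the VM off, each delete triggers an offline merge.
        Get-VMSnapshot -VMName "EXCH01" | Remove-VMSnapshot
        # Keep the VM powered off until the .avhd files have merged away, then start it.

    Note that the export needs enough room on the target disk for the full VHD + AVHD set (here ~300 GB), which is why a separate disk is connected first.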
    Regards, Samir Farhat, Infrastructure and Virtualization Consultant || Virtualization, Cloud, Azure. Follow and ask here: https://buildwindows.wordpress.com

  • Empty content database filled up SQL hard disk

    Has anyone had this situation?
    SharePoint 2013 farm.
    Content databases migrated from an SP 2010 farm.
    A new content database was added, with no site collections in it yet. But something ran, put 100 GB in the AllDocStreams table, and filled my hard disk.

    There have been similar issues in the past with 2007 and 2010:
    http://blogs.msdn.com/b/sowmyancs/archive/2012/06/29/alldocversions-amp-alldocstreams-table-size-after-upgrading-to-sharepoint-2010.aspx
    But it doesn't make much sense that a brand-new content database has grown so much.

  • How to determine physical disk size on solaris

    I would like to know whether there is a simple method for determining physical hard disk sizes on Sun SPARC machines. On HP-based machines it is simple:
    1. Run "ioscan -fnC disk" to find all disk devices and their raw device target addresses, e.g. /dev/rdsk/c0t2d2.
    2. Run "diskinfo /dev/rdsk/c0t2d2" to display the attributes of the physical disk, including its size in Kbytes.
    This simple process allows me to create simple scripts that I can use to automate the collation of audit data for a large number of HP machines.
    On Sun-based machines I've looked at the prtvtoc, format, and devinfo commands and have had no joy. Methods and suggestions will be well appreciated.

    OK,
    format should say something like the following when you type format:
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
    If this is not a Sun disk and you do not get the info, select the required disk, then select partition and then print. This will display what you need.
    Hope this helps.

  • Disk size is drastically reducing after the Dense restructure

    After clearing empty blocks, the number of blocks comes down by 10%, showing that empty blocks existed in the database. At this point the total size of the .PAG file remains constant (before and after clearing empty blocks).
    We then did a restructure of the database: the number of blocks remained constant, but the .PAG file shrank drastically, to less than 50% of its size.
    We are trying to understand why a 10% block reduction causes a 50% disk size reduction.
    Thanks for your inputs!

    The .pag file contains free space, not just that left behind by CLEARBLOCK EMPTY but also where blocks have been modified and no longer fit back into their original slot (so Essbase has to expand the .pag file to fit them back somewhere else). For example, if a block is taking up 2K on disk but you then load additional data, the block's footprint on disk may increase to 3K. Essbase can't fit it back into its original 2K location, so it leaves that 2K chunk of the .pag file empty (at least for the time being) and finds 3K elsewhere.
    Only restructuring the database physically releases this space.

  • R11 disk size problem

    Hi;
    I have an 11.5.10.2 instance on Red Hat. The problem is that my disk usage has reached 94%. Is there any file in the EBS system that could be holding so much space? Can anyone advise which files I can delete from disk (log files etc.)?
    Or, to put the question another way:
    to open up more space on my system, which files can be deleted?
    Thanks a lot

    Hi Hussein;
    Sorry for the late update.
    "If you have some other mount point(s) which can be utilized, then consider moving the datafiles (some or all) and/or the archivelog files to this disk." Sorry if I say something wrong, Hussein. Can you guide me through my doubts about the steps below?
    For instance, my dbf file path is:
    /u01/VIS/database/visdata. Say I create a new path, /u02/VIS/database/visdata, and move some dbf files from /u01 to /u02 manually with the mv command. Would this cause a problem? Since I moved them with the mv command, when I run something, will it try to look for them in their old path?
    "For the other application/database files, please refer to the following documents/links.
    Note: 274666.1 - Cleaning An 11i Apps Instance Of Redundant Files" Thanks for that nice note.
    "Delete Events Logs
    Re: Delete Events Logs"
    Thanks for those nice posts and answers too :)
    By the way, thank God we have you here; Oracle is so lucky to have a person like you in this forum, who loves to give answers and always gives correct notes...
    Thanks again

  • Disk size in Solaris 10

    I have some confusion about the disk subsystem in Solaris that I am trying to clarify in this forum.
    I recently installed Solaris 10 on a SPARC box. After the install, format gives the output below:
    0 root wm 19491 - 29648 4.88GB (10158/0/0) 10239264
    1 swap wu 0 - 4062 1.95GB (4063/0/0) 4095504
    2 backup wm 0 - 29648 14.25GB (29649/0/0) 29886192
    From the above output, is the size of my disk 14 GB, or is it 14 + 2 + 5 = 21 GB?
    I am trying to learn ZFS, so I want another partition on this disk on which to create a ZFS pool.
    I booted to single-user mode from CD. From the "format" output above, I assumed I had 21 GB of disk and 14 GB of free space, so I created another partition of 14 GB. Now format gives the output below:
    0 root wm 19491 - 29648 4.88GB (10158/0/0) 10239264
    1 swap wu 0 - 4062 1.95GB (4063/0/0) 4095504
    2 backup wm 0 - 29648 14.25GB (29649/0/0) 29886192
    3 reserved wm 0 - 29127 14.00GB (29128/0/0) 29361024
    When creating the ZFS pool, it gave me a warning that the partition I specified spans into the root partition (slice 0), and it said to use the "-f" option.
    With "-f", it was created successfully.
    If I now assume that the size of my disk is only 14 GB, then:
    (1) How come two partitions point to the same area of the disk?
    (2) How come two different filesystems point to the same area?
    Please, anyone, clarify my doubts. Thank you.

    Assuming a standard labeled disk, it is standard practice for section/slice 2 to be the 'whole disk', for purposes of 'backup'. That would indicate you have a 14 GB disk. A prtvtoc /dev/dsk/c?t?d?s2 (change the ?s to the right values) will give a little more on the disk geometry.
    In the format display, column 4 is the start cylinder of the partition and column 5 is the end cylinder. From the first set of output it looks like cylinders 4063 to 19490 are not allocated.
    In the second set you have created a new slice (section 3) that overlaps both sections 0 and 1, which is generally considered to be bad!
