What is the best practice on mailbox database size in Exchange 2013

Hi, 
Does anybody have any links to good sites that give some pros/cons when it comes to mailbox database sizes in Exchange 2013? I've tried to Google it, but haven't found any good answers. I would like to know whether I really need more than 5 mailbox databases in my Exchange environment.

Hi
   As far as I know, 2 TB is the recommended maximum database size for Exchange 2013 databases.
Terence Yu
TechNet Community Support

Similar Messages

  • Best practice on mailbox database size & how many servers we need to deploy Exchange Server 2013

    Dear all,
    We have an environment that runs Microsoft Exchange Server 2007 with the following specification:
    4 servers: Hub&CAS1, Hub&CAS2, Mailbox1, Mailbox2
    Operating System : Microsoft Windows Server 2003 R2 Enterprise x64
    6 mailbox databases
    1500 Mailboxes
    We need to upgrade from Exchange Server 2007 to Exchange Server 2013 and implement the following requirements:
    1500 mailboxes
    10GB or 15GB mailbox quota for each user
    How many servers and databases are required for this migration?
    Number of servers:
    Number of databases:
    Size of each database:
    Many thanks.

    You will also need to check the server role requirements in Exchange 2013. Please go through this link to calculate the server role requirements: http://blogs.technet.com/b/exchange/archive/2013/05/14/released-exchange-2013-server-role-requirements-calculator.aspx
    2 TB is the recommended maximum database size for Exchange 2013 databases.
    Here is the complete checklist to upgrade from Exchange 2007 to 2013: http://technet.microsoft.com/en-us/library/ff805032%28v=exchg.150%29.aspx
    Meanwhile, to reduce the risk and time consumed during the migration process, you can have a look at this third-party application (http://www.exchangemigrationtool.com/), which may also be a good approach for 1500 users. It can help ensure data security during the migration between Exchange 2007 and 2013.

  • What is the best practice to get a database connection?

    What are the best practices to follow for database connections?

    The driver can be loaded explicitly with the Class.forName method. For example, the following statement loads Sun's JDBC-ODBC bridge driver:
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    Then use the getConnection method of the DriverManager class to establish a connection to the data source:
    Connection con = DriverManager.getConnection(url);
    This statement establishes a connection to the data source specified by url. If the connection succeeds, it returns an object con of the Connection class; all later operations on this data source are based on con.
    Executing query statements: this post covers queries based on a Statement object. Running an SQL query first requires creating a Statement object. The following statement creates a Statement object named guo:
    Statement guo = con.createStatement();
    On the Statement object, the executeQuery method runs a query. Its parameter is a String object containing an SQL SELECT statement, and its return value is an object of the ResultSet class:
    ResultSet result = guo.executeQuery("SELECT * FROM A");
    This statement returns all rows of table A in result.
    Only after the ResultSet object has been processed can the query results be displayed to the user. The ResultSet wraps the table returned by the query, which contains all the query results. A ResultSet must be processed row by row, but within each row the columns can be read in any order. The getXXX methods of ResultSet convert the SQL data types in the result set to Java data types.

  • Database Log File becomes very big, What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice to handle this issue?
    Should I shrink the database?
    I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log gets cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command shrinks the file to 10 GB (a recommended size for highly transactional systems).
    Finke Xie wrote:
    > Should I Shrink the Database?
    NEVER SHRINK DATA FILES; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
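    Put together, steps 1-3 as plain T-SQL, a minimal sketch only (the database name YourDb, the logical log file name YourDb_log, and the backup path are placeholders):
          -- 1) Back up the transaction log so the inactive portion can be reused.
          BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_log.trn';

          -- 2) Shrink only the LOG file back to 10 GB (10240 MB); never shrink data files.
          USE YourDb;
          DBCC SHRINKFILE (N'YourDb_log', 10240);

          -- 3) Schedule the BACKUP LOG step as a SQL Server Agent job every 15 minutes.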
    Thanks
    Mush

  • What is the best methodology to handle database schema changes after an application has been deployed?

    Hi,
    VS2013, SQL Server 2012 Express LocalDB, EF 6.0, VB, desktop application with an end user database
    What is a reliable method to follow when there is a schema change for an end user database used by a deployed application?  In other words, each end user has their own private data, but the database needs to be expanded for additional features, etc. 
    I list here the steps it seems I must consider.  If I've missed any, please also inform:
    (1) From the first time the application is installed, it should have already moved all downloaded database files to a separate known location, most likely some sub-folder in <user>\App Data.
    (2) When there's a schema change, the new database file(s) must also be moved into the location in item (1) above.
    (3) The application must check to see if the new database file(s) have been loaded, and if not, transfer the data from the old database file(s) to the new database file(s).
    (4) Then the application can operate using the new schema.
    This may seem basic, but for those of us who haven't done it, it seems pretty complicated.  Item (3) seems to be the operative issue for database schema changes.  Existing user data needs to be preserved, but using the new schema.  I'd like
    to understand the various ways it can be done, if there are specific tools created to handle this process, and which method is considered best practice.
    (1) Should we handle the transfer with a 'one-time use' application method, i.e., do it in application code?
    (2) Should we handle the transfer using some type of 'one-time use' SQL query? If this is the best way, can you provide some guidance on the different alternatives for how to perform this in SQL, and where to learn/see examples?
    (3) Some other method?
    Thanks.
    Best Regards,
    Alan

    Hi Uri,
    Thank you kindly for your response.  Also thanks to Kalman Toth for pointing out the right forum for such questions.
    To clarify the scenario, I did not mean to imply the end user 'owns' the schema.  I was trying to communicate that in my scenario, an end user will have loaded their own private data into the database file originally delivered with the application. 
    If the schema needs to be updated for new application features, the end user's data will of course need to be preserved during the application upgrade if that upgrade includes a database schema change.
    Although I listed step 3 as transferring the data, I should have made clearer that I was trying to express my limited understanding of how this process "might work", since at present I am not an expert in this. I suspected my thinking was limited and someone would correct me.
    This is basically the reason for my post; I am hoping an expert can point me to what I need to learn about to handle database schema changes when application upgrades are deployed.  For example, if an SQL script needs to be created and deployed
    then I need to learn how to do that.  What's the best practice, or most reliable/efficient way to make sure the end user's database is changed to the new schema after the upgraded application is deployed?  Correct me if I'm wrong on this,
    but updating the end user database will have to be handled totally within the deployment tool or the upgraded application when it first starts up.
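    For a concrete picture, this is roughly the sort of 'one-time use' SQL script I am imagining (the table and column names here are invented purely for illustration); because every change is guarded by an existence check, the upgraded application could safely run it at every startup:
          -- Add a column introduced by the new application version.
          IF NOT EXISTS (SELECT 1 FROM sys.columns
                         WHERE object_id = OBJECT_ID(N'dbo.Customer')
                           AND name = N'Email')
              ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL;

          -- Add a table introduced by the new version; existing user data is
          -- preserved because the old objects are altered in place, not replaced.
          IF OBJECT_ID(N'dbo.CustomerNote', N'U') IS NULL
              CREATE TABLE dbo.CustomerNote (
                  NoteId     INT IDENTITY PRIMARY KEY,
                  CustomerId INT NOT NULL,
                  NoteText   NVARCHAR(MAX) NULL
              );
    Is that the right general idea, or is a dedicated tool preferable?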
    If it makes a difference, I'll be deploying application upgrades initially using ClickOnce from Visual Studio, and eventually I may also use Windows Installer or WiX.
    Again, thanks for your help.
    Best Regards,
    Alan

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two key columns.  I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained
    (not unique) in the daily capture table).  I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns).  Currently, what I do is SELECT * INTO a #temp table from the join
    of the daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just insert directly from the daily capture table?  How would that look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
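    For illustration only, since no DDL was posted, here is a MERGE sketch with invented table and column names that inserts only the rows whose two-column key is not already present in the final table:
          -- Assumes FinalData(key_col1, key_col2, payload) has a unique
          -- constraint on (key_col1, key_col2); DailyCapture has the same
          -- columns but no constraint. If DailyCapture can hold the same key
          -- with different payloads, reduce it to one row per key first
          -- (e.g. with ROW_NUMBER), or MERGE will raise an error.
          MERGE INTO FinalData AS tgt
          USING (SELECT DISTINCT key_col1, key_col2, payload
                 FROM DailyCapture) AS src
            ON  tgt.key_col1 = src.key_col1
            AND tgt.key_col2 = src.key_col2
          WHEN NOT MATCHED BY TARGET THEN
              INSERT (key_col1, key_col2, payload)
              VALUES (src.key_col1, src.key_col2, src.payload);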
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on a 2-node Sun Cluster where the Oracle database is running.
    When I'm performing a DB backup, my DB backup job should not fail if node1 fails. What is the best practice to achieve this?

    Hi,
    Each host that participates in an OSB administrative domain must also have some pre-configured way to resolve a host name to an IP address. Use DNS, NIS, etc. to do this.
    Specify the cluster IP in OSB, so that OSB always looks for the cluster IP instead of the physical IPs of each node.
    Explanation:
    Whether it is a 2-node or 4-node cluster, when the cluster software is installed on the nodes we have to configure a cluster IP, so that when one node fails the cluster IP automatically moves to another node.
    This cluster IP is what we have to specify, whether it is an RMAN backup or an application JDBC connection. Failing over to the other node is the job of the cluster IP. So wherever we have a cluster configuration, we have to specify the CLUSTER IP in every place that must survive failover.
    Hope it helps..
    Thanks
    LaserSoft

  • What is the best practice in order to create flow in a single maintenance plan?

    Hi All,
    What is the best practice in order to create flow in a single maintenance plan?
    1st Check Database Integrity (Check DB)
    2nd Rebuild Index 
    or 
    1st Rebuild Index
    2nd Check Database Integrity (Check DB)
    Grateful to your time and support. Regards, Shiva

    Use the Maintenance Plan Wizard to create a maintenance plan:
    "This topic describes how to create a single server or multiserver maintenance plan using the Maintenance Plan Wizard in SQL Server 2012. The Maintenance Plan Wizard creates a maintenance plan that Microsoft SQL Server Agent can run on a regular
    basis. This allows you to perform various database administration tasks, including backups, database integrity checks, or database statistics updates, at specified intervals."
    LINK: http://technet.microsoft.com/en-us/library/ms191002.aspx
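    If it helps to see the two tasks as plain T-SQL rather than wizard steps, here is a minimal sketch (YourDb and dbo.YourTable are placeholders) using one common ordering, integrity check first so corruption is caught before time is spent on rebuilds:
          USE YourDb;

          -- 1st: Check Database Integrity (Check DB)
          DBCC CHECKDB (N'YourDb') WITH NO_INFOMSGS;

          -- 2nd: Rebuild Index
          ALTER INDEX ALL ON dbo.YourTable REBUILD;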
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • What are the best practices for the RCU's schemas

    Hi,
    I was wondering if there are any best practices for the RCU schemas created with BIEE.
    I already have Discoverer (and Application Server), so I have a metadata repository for the Application Server. I will upgrade Discoverer 10g to 11g, so I will create a new schema with RCU in the metadata repository (MR) of the Application Server. I'm wondering if I can put the BIEE RCU schemas in the same database.
    Basically:
    1. Is there a standard for the PREFIX?
    2. If I have multiple Fusion components in the same database, will I have multiple PREFIX_MDS schemas? Can they have the same PREFIX, or do they all need a different prefix?
    For example: DISCO_MDS and BIEE_MDS, or can I have DEV_MDS with this schema valid for both Discoverer and BIEE?
    Thank you !

    What are the best practices for exception handling in n-tier applications?
    The application is a fat client based on the MVVM pattern with the .NET Framework.
    That would be to catch all exceptions at a single point in the n-tier solution, log them, and create user-friendly messages displayed to the user. 

  • What are the best practices to connect 30-40 iPads to Wi-Fi in a single room?

    What are the best practices to connect 30-40 iPads to Wi-Fi in a single room?

    I don't use it but it does say this in the help section...

  • What are the best practices to migrate VPN users for inter-forest migration?

    What are the best practices to migrate VPN users for inter-forest migration?

    It depends on various factors. There is no "generic" solution or best practice recommendation. Which migration tool are you planning to use?
    Quest (QMM) has a VPN migration solution/tool.
    With ADMT, you can develop your own service-based solution if required. I believe it was mentioned in my blog post.
    Santhosh Sivarajan | Houston, TX | www.sivarajan.com
    ITIL,MCITP,MCTS,MCSE (W2K3/W2K/NT4),MCSA(W2K3/W2K/MSG),Network+,CCNA
    Windows Server 2012 Book - Migrating from 2008 to Windows Server 2012
    This posting is provided AS IS with no warranties, and confers no rights.

  • What are the best practices to replace a disk in 6140 ?

    What are the best practices to replace a disk in 6140?
    Regards

    The best way is to follow the CAM Service Advisor instructions.

  • What is the best practice for changing view states?

    I have a component with two Pie Charts that display percentages at two specific dates (think start and end values). But I have three views: Start Value only, End Value only, or show Both. I am using a ToggleButtonBar to control the display. What is the best practice for changing this kind of view state? Right now (since this code was inherited), the view states are changed in an ActionScript function which sets the visible and includeInLayout properties on each Pie Chart based on the selectedIndex of the ToggleButtonBar, but this just doesn't seem like the best way to do this, not very dynamic. I'd like to be able to change the state based on the name of the selectedItem, in case the order of the ToggleButtons changes, and since I am storing the name of the selectedItem for future reference.
    Would using States be better? If so, what would be the best way to implement this?
    Thanks.

    I would stick with non-states, as I have always heard that states are more for smaller components that need to change under certain conditions, like a login screen that changes if the user needs to register.
    That said, if the UI of what you are dealing with is not overly complex, and if it will not become overly complex, maybe states is the way to go.
    Looking at your code, I don't think you'll save much in terms of lines of code.

  • What is the best practice in securing deployed source files

    hi guys,
    Just yesterday, I developed a simple image cropper using AJAX and Flash. After compiling the package, I noticed the package/installer delivers the exact same source files as developed to the installed folder.
    This didn't concern me much at first, but come to think of it, this question keeps popping into my head:
    "What is the best practice in securing deployed source files?"
    How do we secure an application's installed source files from being tampered with, especially after installation? E.g., modifying spraydata.js files can be done easily with an editor.

    Hi,
    You could compute a SHA or MD5 hash of your source files on first run and save these hashes to the EncryptedLocalStore. On startup, recompute and verify. (This, of course, fails to address the case where the main app's swf / swc / html itself is decompiled.)

  • What is the best practice to display info of completed task in process flow

    Hi all,
    I'm starting to study BPM modeling with CE 7.1 EHP1. Thanks to the tutorials and examples on the SDN site, I can easily build my own process in NWDS, deploy it to the server, start it, and finish it.
    I like the new runtime, which can show a BPMN diagram to the processors. However, I can't find a way to let the follow-up processor review the task result completed in the previous step. I'm more familiar with Guided Procedures, and know there is a "Display Callable Object" which can be used to show some info of a completed task when the processor/owner/admin/overseer clicks on it. Where is this feature in BPM? What is the best practice for showing such task information in the BPM environment?
    For example, in a multi-level approval process the higher-level approver needs to know the comment written by the previous approver. Can he read this information from the process flow?
    I think this is a very important feature for a BPM platform. In Guided Procedures, such a requirement can be met with Display Callable Object + View Permission, and you just need some coding for the UI. If BPM is superior to GP, I think there must be a way to achieve this; I just do not know how.
    Can anyone shed some light on it?

    Oliver,
    Thanks for your quick reply.
    Yes, Notes and Attachments CAN BE USED for this purpose, but I'm still looking for a more elegant solution.
    With the Notes/Attachments solution, the processor needs to give input in two places, the task UI and the Note/Attachment, with similar or the same data. It is really annoying.
    Are there any real-world SAP BPM deployments? Has no customer had this requirement?

Maybe you are looking for

  • A1630n will not boot up

    My a1630n suddenly will not boot up. It was working fine other than the display was fading, ( some portions of the display were so light they could barely be seen). It worked fine one day and the next it won't. The fan comes on, but the processing li

  • Error message when compiling invalid packages and procedures

    Hi. I have a routine for copying certain data from a production database to a test database. To do this I disable constraints and triggers, truncate tables, copy tables and enable triggers and constraints again. Now several of my functions, procedure

  • Key Commands corrupt using Configurator in Photoshop CS5

    For me, when the Configurator Panel is open common keyboard commands for tools such as the brush or patch tool (any of them) no longer work.  There is a brief pause, it switches to the tool you want, and then it reverts back to the tool you were usin

  • Confirmation Error in SRM 7.1 Standalone

    Hi Experts, We are implementing SRM 7 Ehp1 in our landscape. All basic configurations are completed and we are now testing the scenarios. We have an issue in standalone scenario, where the SC creator is unable to post a confirmation. Both SC and PO a

  • Migration Thunderbird (PC) to Mac Mail (OS X) : merged folders

    Hi there, I'm trying to migrate my emails from Thunderbird (PC) to Mac Mail. I have multiple accounts in Thunderbird in separate forlders. I copied the thunderbird profile folder from my old PC  and imported the emails to Mac Mail, unfortunately the