Best practice for default location of mailbox database(s) / logs

Hello,
I don't recall seeing any option during the Exchange 2013 install to specify an alternate location for either the mailbox database or the log files. I've reviewed the commands for moving mailbox databases, but before looking into the options for setting a
location for the log files, I thought it best to ask whether it's still advised to keep the logs and mailbox databases separate from the OS, given that our Exchange server is a virtualised instance.
Also, I'm assuming that Exchange still requires the usual backup of transaction logs for them to be cleared?
Many thanks.

Hi JH,
Here is a link to the storage options and requirements for the Mailbox server role:
http://technet.microsoft.com/en-us/library/ee832792(v=exchg.150).aspx
By default, installing a new Exchange server with the Mailbox role creates a default database and log path.
The recommendation is to have the database and log files on separate disks. You will have to attach those disks first; then you can create a new database using the ECP.
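If you need to relocate an existing database and its logs rather than create a new one, here is a minimal Exchange Management Shell sketch (the database name and drive letters below are examples, not your values):

```powershell
# Check the current database and log locations
Get-MailboxDatabase | Format-List Name, EdbFilePath, LogFolderPath

# Move both to dedicated disks; the database is dismounted while the move runs
Move-DatabasePath -Identity "DB01" `
    -EdbFilePath "D:\ExchangeDB\DB01\DB01.edb" `
    -LogFolderPath "L:\ExchangeLogs\DB01"
```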
Please have a look at my gallery entry with a full guide on how to set up a new Exchange server:
http://gallery.technet.microsoft.com/Install-Exchange-server-b5cce9e4
Also, for future use, you might need to clean up log files to free up space on your Exchange server:
http://gallery.technet.microsoft.com/Task-Scheduler-to-cleanup-25047622
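On your transaction log question: yes, the database transaction logs are still truncated by a successful full (or incremental) backup, or by enabling circular logging; you should not delete them by hand. What scripts like the one above clean up are the diagnostic and IIS logs. A minimal PowerShell sketch of that kind of cleanup (paths and retention period are examples):

```powershell
# Purge diagnostic/IIS logs older than 14 days -- NOT database transaction logs
$paths = 'C:\inetpub\logs\LogFiles',
         'C:\Program Files\Microsoft\Exchange Server\V15\Logging'
Get-ChildItem $paths -Recurse -Include *.log, *.etl |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-14) } |
    Remove-Item -WhatIf   # remove -WhatIf once the file list looks right
```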
Hope this helps
Off2work

Similar Messages

  • Best practice for default values in EO

    I have an entity called AUTH_USER (a user table); within it are 2 TIMESTAMP WITH TIME ZONE columns like this:
    EFF_DATE TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT current_timestamp,
    TERM_DATE TIMESTAMP WITH TIME ZONE
    Notice EFF_DATE has a default constraint and is not nullable.
    In the EO, EFF_DATE is represented as a TIMESTAMPTZ and is checked as MANDATORY in its attribute properties. I cannot commit a NEW RECORD based on VO derived from this EO because of the MANDATORY constraint that is set in the EFF_DATE attribute's properties unless I enter a value. My original strategy was to have the field populated by a DEFAULT DATE if the user should attempt to leave this field null.
    This is my dilemma.
    1. I could have the database populate the value based on the default constraint in the table definition. Since EFF_DATE and TERM_DATE resemble the Effective Date (Start, End) properties that the framework already provides then I could set both fields as Effective Date (Start, End) and then check Refresh After Insert. But this still won't work unless I deselect the mandatory property on EFF_DATE.
    2. The previous solution would work. However, I'm not sure that it is part of a "Best Practices" solution. In my mind if a database column is mandatory in the database then it should be mandatory in the Model as well.
    3. If the first option is a poor choice, then what I need to do is to leave the attribute defined and mandatory and have a DEFAULT VALUE set in the RowImpl create method.
    4. Finally, I could just force the user to enter a value. That would seem to be the common sense thing to do. I mean that's what calendar widgets and AJAX enabled JSF are for!
    Regardless of what the correct answer is, I'd like to see some sample code for how the date can be populated inside the RowImpl create method and passed to setEffDate(TimestampTZ dt). Keep in mind, though, that in this instance I need the time zone of the database server side and not the client side. I would also ask for advice on doing this with Groovy scripting or expressions.
    And finally, what is the best practice in this situation?
    Thanks in advance.

    How about setting the default value property of the attribute in the EO to adf.currentDate?
    (assuming you are using 11g).
    This way a default date is set when the record is created, and the user can change it if they want to.
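    If you prefer to do it in code (option 3 in the post), here is a minimal sketch of overriding create() in the entity's generated EntityImpl subclass. The class and attribute names are assumed from the post, and this uses the client JVM's clock; for the database server's time zone you would instead fetch SYSTIMESTAMP through the transaction's JDBC connection.

    ```java
    import java.sql.Timestamp;
    import oracle.jbo.AttributeList;
    import oracle.jbo.domain.TimestampTZ;
    import oracle.jbo.server.EntityImpl;

    // Sketch: default EFF_DATE when a new AUTH_USER row is created,
    // so the attribute can stay mandatory in the Model.
    public class AuthUserImpl extends EntityImpl {
        @Override
        protected void create(AttributeList attributeList) {
            super.create(attributeList);
            // Client-side time; for the DB server's zone, run
            // SELECT SYSTIMESTAMP FROM DUAL over the transaction instead.
            setAttribute("EffDate",
                new TimestampTZ(new Timestamp(System.currentTimeMillis())));
        }
    }
    ```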

  • Best Practices for deployment of Oracle 10g database.

    Hello ,
    Is anyone aware of a whitepaper/document that talks about best practices for deploying a database on Oracle 10g and configuring the database to utilize all the features available in 10g (e.g. ADDM, reports setup, etc.)?
    Thanking you in Advance.
    Cheers..rCube

    Appreciate the input Jaffer. Thanks.
    However, I was referring to a Best Practices whitepaper like the one for Data Guard & MAA available at the following URL: http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm
    Is there something available along the same lines ?
    Cheers..rCube

  • Is there a best practice for multi location server setups including mac mail server?

    Afternoon all,
    Last year I set up a client with Snow Leopard Server, including hosting his mail on the server with Mac Mail Server and calendaring.  He now has plans to open other sites with the same setup; how can this be done using Mac Server?  The implementation of a new server on the new site would be straightforward; my concerns/questions are:
    1. How would the two servers communicate?
        a.) Do they need to communicate?
    2. How will mail across the two sites work?
        a.) How will DNS work for email internally?
        b.) How will DNS work for email externally?
    3. How will calendaring work across the two sites?
    Is Mac Server the best platform for moving ahead with this type of business need?
    Any help or direction would be greatly appreciated.
    Anthony

    Camelot,
    many thanks for the speedy reply.  Your comments are very helpful; if I may, I will give you some further information and ask some other questions.
    The offices will be from 5 miles to 25 miles apart, and the new office and the ones that follow will be considered branches of the main office. For example, the company name is Sunflower and it serves area 1; the new office will serve area 2, and so on.  So in theory I could leave the main server domain and mail MX record as sunflower and then add further servers as 2.sunflower.com, 3.sunflower.com as domains and MX records? This would then provide unique mail records and users within the organisation, such as [email protected] and [email protected], which doesn't look too good. I would prefer all users to be name@sunflower; how can this be achieved?
    With regard to user activity, on the whole users will be fixed, but there will be floaters such as managers and the owners who may at times float from one office to the other and would benefit from logging into any machine and getting their profile.
    I have thought about VPNs, as I have achieved this type of setup with Microsoft server products, though I have found speed issues in some cases. How can this be achieved using OS X, and are there any how-to docs around?  In the Microsoft setup I achieved this using Netgear VPN firewalls, which of course adds an overhead; can this be achieved using OS X's proprietary VPN software?
    So ultimately I would prefer to have the one domain without subdomains, "sunflower.com", and users to be able to log in to their profiles from either location. The main location will remain as head office and the primary location, and the new ones will be satellites.
    I hope that covers all of your other questions; again, many thanks for your comments and time.
    Kind Regards
    Anthony

  • Best Practice for Developer update access to database in Production

    I am curious to find out what other organizations are doing about developer access to SYSADM in production. For example, a database account created like SYSADM that can be checked out for use and locked when not in use? Or something else?

    Developers can be provided with read-only access to the SYSADM schema.
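    A minimal Oracle sketch of that approach (the account name and password are placeholders; note that PeopleSoft's application-level security does not apply to direct SQL access, so grant with care):

    ```sql
    -- Create a read-only account for developers
    CREATE USER dev_read IDENTIFIED BY "ChangeMe#1";
    GRANT CREATE SESSION TO dev_read;

    -- Generate one GRANT per SYSADM table, then run the generated statements
    SELECT 'GRANT SELECT ON SYSADM.' || table_name || ' TO dev_read;'
    FROM   all_tables
    WHERE  owner = 'SYSADM';
    ```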
    Thanks
    Soundappan

  • Best Practices for data storage in portal database

    Hi,
    I need to store some structured data which is related to the portal only. This data may grow day by day and could reach a huge amount at some point in time.
    I am wondering which is the best way to handle it from a maintenance point of view:
    I think I can store it in R/3 as a custom table, which is easy to maintain and to read/write using RFC.
    The other option is to store it in the portal database (dictionary), which may not be as easy to handle.
    Suggestions, please?
    Thanks.

    The best way is to maintain the (growing) data on R/3 and use JCA to write a simple portal application (JSPDynPage or WD) to get the data back to the portal.
    Using JCA won't noticeably affect portal performance.
    It's not advisable to store large amounts of data in the dictionary.
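    The reply suggests the portal's JCA connector framework; purely as an illustration of the same round trip, here is a minimal standalone RFC call with SAP JCo 3 (the destination and function module names are placeholders, and the portal JCA API differs in the details):

    ```java
    import com.sap.conn.jco.JCoDestination;
    import com.sap.conn.jco.JCoDestinationManager;
    import com.sap.conn.jco.JCoException;
    import com.sap.conn.jco.JCoFunction;

    // Sketch: call a custom RFC in R/3 and read one exporting parameter.
    public class PortalDataClient {
        public static void main(String[] args) throws JCoException {
            JCoDestination dest = JCoDestinationManager.getDestination("R3_DEST");
            JCoFunction fn = dest.getRepository().getFunction("Z_GET_PORTAL_DATA");
            if (fn == null) {
                throw new IllegalStateException("RFC not found in repository");
            }
            fn.getImportParameterList().setValue("IV_KEY", "SOME_KEY");
            fn.execute(dest);
            System.out.println(fn.getExportParameterList().getString("EV_VALUE"));
        }
    }
    ```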
    Regards,
    N.

  • What are the best practices for Database management and performance tuning?

    Hello,
    I want to ensure that I am using the best practices for managing and maintaining our database.
    Is there any documentation out there that outlines how to maintain and ensure top performance out of our database?
    Thank you!
    John Sefton

    I appreciate the responses; however, this is not the information I am looking for.
    I am specifically looking for best practices involving management and performance tuning.
    Example: are there tools that I can install that will monitor the size and response time of the database and alert me if there is degradation in performance?
    Are there specific periodic activities I should be doing to guarantee that my database will continue to function the way it is supposed to?
    Or is this a fire-and-forget solution that does not need this attention?

  • In the Beginning it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)
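    On automating the modeling itself, here is a minimal sketch that reads table and column metadata through JDBC, as a starting point for generating simple classes (the connection URL and credentials are placeholders for your third-party driver):

    ```java
    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Sketch: dump every table's columns and types from driver metadata.
    public class TableModeler {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:odbc:erp", "user", "secret")) {
                DatabaseMetaData md = con.getMetaData();
                try (ResultSet cols = md.getColumns(null, null, "%", "%")) {
                    while (cols.next()) {
                        System.out.printf("%s.%s : %s%n",
                            cols.getString("TABLE_NAME"),
                            cols.getString("COLUMN_NAME"),
                            cols.getString("TYPE_NAME"));
                    }
                }
            }
        }
    }
    ```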

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok (assuming of course that the uninstaller is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I log in as portal and created a private connection. From Oracle Portal I then create a portlet and add a Discoverer worksheet using the private connection that I created as the portal user. This works fine: users access the portal and can see the worksheet. When they click the analyze link, however, they are prompted to enter a password for the private connection, and the following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer they enter their SSO information, and then any user with an SSO account can see the public connection... very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the Worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    Overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want the ability to display the analyze link; when an SSO user clicks it they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data: there is no need to restrict access based on SSO username, and one database user will be set up in either the public or private connection.

    You can make it happen by creating a private connection for the 40 users with a CAPI script, and when creating the portlet select the 2nd option in the 'Users Logged In' section. That way the portlet uses their own private connection every time a user logs in,
    so it won't ask for a password.
    Another thing: there is an option in ASC, in the Discoverer section, to require a password or not, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran

  • Best Practice for Distributing Databases to Customers

    I did a little searching and was surprised not to find a best-practice document on how to distribute Microsoft SQL databases. With other database formats, it's common to distribute them as scripts. That feature seems rather limited with the built-in tools Microsoft provides; there appear to be limits to the length of the script. We're looking to distribute a database several GBs in size. We could detach the database or provide a backup, but that has its own disadvantages, limiting which versions of SQL Server will accept the database.
    What do you recommend and can you point me to some documentation that handles this practice?
    Thank you.

    It's much easier to distribute schema/data from an older version to a newer one than the other way around. Nearly all SQL Server deployment features support database version upgrade, and these include the "Copy Database" wizard, BACKUP/RESTORE, detach/attach, script generation, the Microsoft Sync Framework, and a few others.
    EVEN if you just want to distribute schemas, you may want to distribute the entire database, and then truncate the tables to purge data.
    Backing up and restoring your database is by far the most RELIABLE method of distributing it, but it may not be practical in some cases because you'll need to generate a new backup every time a schema change occurs; that is less of a burden if you already have an automated backup/maintenance routine in your environment.
    As an alternative, you can use the Copy Database functionality in SSMS, although it can be unstable in some situations, especially if you are distributing across multiple subnets and/or domains. It will also require you to purge data if/when applicable.
    Another option is to detach your database, copy its files, and then attach them in both the source and destination instances. It will generate downtime for your detached databases, so there are better methods for distribution available.
    And then there is the previously mentioned method of generating scripts for the schema and then using INSERT statements or the import data wizard available in SSMS (which is very practical and internally implements an SSIS package that can be saved for repeated executions). It works fine; it is not as practical as the other options, but it is the best way to distribute a database when its version is being downgraded.
    With all this said, there is no single "best practice" for this. There are multiple features, each offering their own advantages and drawbacks, which allows them to align to different business requirements.
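    For the backup/restore route, a minimal T-SQL sketch (the database name, paths, and logical file names are hypothetical; check the logical names with RESTORE FILELISTONLY first):

    ```sql
    -- On the source instance
    BACKUP DATABASE CustomerDb
    TO DISK = N'C:\Dist\CustomerDb.bak'
    WITH INIT, COMPRESSION;

    -- On the customer's instance (same or newer SQL Server version)
    RESTORE DATABASE CustomerDb
    FROM DISK = N'C:\Dist\CustomerDb.bak'
    WITH MOVE N'CustomerDb'     TO N'D:\Data\CustomerDb.mdf',
         MOVE N'CustomerDb_log' TO N'E:\Logs\CustomerDb_log.ldf';
    ```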

  • Best Practice for Removing Zeroes from Database

    Does anyone have some clever bits of code or best practices for evaluating a database and instances of zeroes? I'm working on cleaning up our rules file and am thinking the best way to start would be to write some code to look for zeroes and write them to a log file. This would at least indicate if there was even a problem with zeroes (which there may or may not be).
    Any suggestions out there / utilities / code samples?
    Thanks.

    We accomplished this using data extracts from a subset of scenarios/years/entities/accounts, to ensure that all of our potential rules could be checked for whether they were writing zeros. This worked pretty well for our purposes. A text editor called EmEditor supports VB macros fairly easily, and we could write a quick macro to check for strings ending in "; 0." You may also want to review the "calculated" check box in your extract and see whether the zeros are a result of calculations. A rule output could work pretty well too, although it would take some defining, as you would have to write it out in a sub and make sure you capture the data of all subroutines if your zeros are rule-driven rather than actual inputs. You may also want to check for very small, insignificant values being written; I have seen items with a value 13 places to the right of the decimal that were not really significant.
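    A quick alternative to the EmEditor macro, sketched in PowerShell (the extract path and the exact zero pattern are assumptions; adjust the regex to your extract format):

    ```powershell
    # Flag extract lines that end in '; 0' or '; 0.' -- i.e. rules writing zeros
    Get-ChildItem 'C:\Extracts\*.txt' |
        Select-String -Pattern ';\s*0\.?\s*$' |
        ForEach-Object { "$($_.Filename):$($_.LineNumber): $($_.Line)" } |
        Set-Content 'C:\Extracts\zero_hits.log'
    ```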
    JTF

  • Best Practice for the database owner of an SAP database.

    We recently had a user account removed from our SAP system when this person left the agency.  The account was associated with the SAP database (he created the database a couple of years ago). 
    I'd like to change the owner of the database to <domain>\<sid>adm  (ex: XYZ\dv1adm)  as this is the system admin account used on the host server and is a login for the sql server.  I don't want to associate the database with another admin user as that will change over time.
    What is the best practice for the database owner of an SAP database?
    Thanks
    Laurie McGinley

    Hi Laurie
    I'm not sure if this is best practice or not, but I've always had the SA user as the owner of the database. It just makes restores to other systems etc. easier.
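    If you do change the owner, it is a one-liner in T-SQL (the database name is a placeholder):

    ```sql
    -- Make sa (or your <domain>\<sid>adm login) the database owner
    ALTER AUTHORIZATION ON DATABASE::DV1 TO sa;
    ```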
    Ken

  • Noticing a lot of database index fragmentation yet no Health Analyzer alerts...? Best practice for database maintenance in 2013?

    Could someone point me to a document on best practices for database maintenance with SharePoint 2013? I have read the 2010 document, but I'm hoping there is an updated one that I'm just missing.
    My problem is that our DBA recently noticed that many of our SharePoint databases have high index fragmentation.  I have the Health Analyzer rules enabled for index fragmentation and they run daily, but I've never received an alert despite the majority
    of our databases having greater than 40% fragmentation and some are even above 95%.  
    Obviously it has our attention now and we want to get this addressed.  My understanding (which I now fear is at best incomplete, more likely just plain wrong) was that a maintenance plan wasn't needed for index fragmentation in 2010/2013 like it was
    in 2007. 
    Thanks,
    Troy

    It depends. Here are the rules for that job (Sampled mode):
    1. Page count > 24 and avg fragmentation in percent > 5, or
    2. Page count > 8 and avg page space used in percent < fill_factor * 0.9 (the fill factor in SharePoint 2013 varies from 80 to 100 depending on the index; it is important not to adjust index fill factors).
    I have seen cases where the indexes are not automatically managed by the rule and require a manual defragmentation with a Full Scan, instead of Sampled. Once the Full Scan defrag completed, the timer job started handling the index fragmentation automatically.
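    To see what the rule sees, here is a hedged T-SQL sketch to run against a content database (the index and table names in the rebuild are illustrative; leave SharePoint's fill factors alone):

    ```sql
    -- Fragmentation with a full scan instead of the default sampled mode
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.page_count > 24
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- One-off manual rebuild of a badly fragmented index
    ALTER INDEX IX_Example ON dbo.AllDocs REBUILD;
    ```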
    Trevor Seward

  • Best Practices for Using Service Controller for Entity Framework Database

    I'm running into an issue the first time I create a Web Service with a .NET backend on Azure. I designed a database in Entity Framework and had it create the models, but I couldn't create a controller for the table unless I made the model inherit from EntityData. Here's the catch: the database model has an int Id, but EntityData has a string Id, so, of course, I'm getting errors. What is best practice for what I'm trying to do?
    Michael DiLeo

    Hi Michael,
    Thanks for your posting!
    Sorry, I don't totally understand your issue. Two points need your confirmation:
    1. I am confused by "Service controller". Do you mean an MVC controller, or the ServiceController class (http://www.codeproject.com/Articles/31688/Using-the-ServiceController-in-C-to-stop-and-start)?
    2. Does the type of the Id in the model match the database? In other words, is the type of the Id in the .edmx matched to the database?
    By the way, it seems this issue is more related to EF. You could post it in the EF discussion forum for better support.
    Thanks & Regards,
    Will
