Exchange logs DAG best practice

In real-world scenarios, do many of you enable circular logging? I have never had to replay log files and usually enable circular logging.
Also, in a DAG, do the transaction logs get created on each member if circular logging is disabled?
Finally, what is best practice for backing up databases in a DAG when the active copies are split across servers? For example: databases 1, 2, and 3 are active on server 1 (which holds passive copies of 4 and 5); databases 4 and 5 are active on server 2 (which holds passive copies of 1, 2, and 3); and server 3 holds passive copies of all five databases. Where do I back up from?

I wouldn't enable circular logging; I'd let my daily backup take care of log truncation. You can enable circular logging, though. It all depends on how you want to design your infrastructure and your backup and recovery strategy.
Yes, in a DAG the transaction logs are copied over to every other server that holds a copy of the database, where they are inspected and replayed. With circular logging enabled, a log file is truncated only after it has been confirmed that no other database copy still needs it.
I would take the backup from server 3, where all of the database copies are passive.
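If you do decide to enable it, circular logging is toggled per database. A minimal Exchange Management Shell sketch (the database name is hypothetical; depending on version and copy configuration, the change may need a dismount/remount to take effect):

# Enable circular logging on one database
Set-MailboxDatabase -Identity "DB1" -CircularLoggingEnabled $true

# Verify the setting across all databases
Get-MailboxDatabase | Format-Table Name, CircularLoggingEnabled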
Some references:
Circular Logging and Mailbox Database Copies - http://blogs.technet.com/b/scottschnoll/archive/2011/06/27/circular-logging-and-mailbox-database-copies.aspx
Managing mailbox database copies - http://technet.microsoft.com/en-us/library/dd335158(v=exchg.150).aspx
Amit Tank | Exchange - MVP | Blog:
exchangeshare.wordpress.com 

Similar Messages

  • Exchange 2013 Journaling Best practices

    I am using native Exchange 2013 journaling to journal all external messages for all users. The journal mailbox has become very large. I am looking for best practices for journaling:
    What should I plan for in terms of journal mailbox size and journal database size? Do I need to create multiple journal mailboxes? Should I enable archiving for the journal mailbox? If archiving is enabled, what is the maximum size for the archive mailbox? I do not want to export mail to PSTs.
    I would appreciate help from others.

    Hi,
    Do you mean the size limit for the .pst file?
    By default, in Outlook 2010 and Outlook 2013, the overall size of a .pst file has a preconfigured limit of 50 GB. In Outlook 2003 and Outlook 2007, the default .pst file size limit is 20 GB. If the file size exceeds the limit, you will be unable to open the .pst file. However, we can modify this value by making changes to the Windows registry.
    Here is a related article for your reference.
    The file size limits of .pst and .ost files are larger in Outlook 2010 and Outlook 2013
    http://support.microsoft.com/kb/982577
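    For reference, that limit is controlled by registry values under the Outlook policy key. A minimal sketch, assuming Outlook 2010 (the 14.0 key; use 15.0 for Outlook 2013) and values in MB:

    # Raise the Outlook 2010 .pst/.ost size cap via the policy key (values in MB)
    $key = "HKCU:\Software\Policies\Microsoft\Office\14.0\Outlook\PST"
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name MaxLargeFileSize  -Value 61440 -Type DWord   # hard limit: 60 GB
    Set-ItemProperty -Path $key -Name WarnLargeFileSize -Value 58368 -Type DWord   # warn at ~57 GB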
    As for the user mailbox size limit, there is no inherent limit; you can set it to unlimited if you want. But you need to take the factors mentioned above into consideration.
    Best regards,
    Belinda
    Belinda Ma
    TechNet Community Support

  • Request info on Archive log mode Best Practices

    Hi,
    Could anyone share, from personal experience, best practices for maintaining archiving on any version of Oracle? Please tell me:
    1) Whether to place archive logs and redo log files on the same disks?
    2) How many LGWR processes to use?
    3) Checkpoint frequency?
    4) How to maintain the speed of a server running in archivelog mode?
    5) Errors to look for?
    Thanks,

    1. Use a separate mount point for archive logs, like /archv.
    2. Start with 1 and check the performance.
    3. This depends on the redo log file size. Size your redo log files so that at most 5-8 log switches happen per hour; try to keep it under 5 log switches per hour.
    4. Check the redo log file size.
    5. Watch the space allocation on the archive log mount point. Take backups of the archive logs with RMAN and delete the backed-up archive logs from the archive destination.
    Regards
    Asif Kabir

  • Managing alert log files: best practices?

    DB Version : 10.2.0.4
    Several of our DBs' alert logs have become quite large. I know that Oracle will create a brand-new alert log file if I delete the existing one.
    I just want to know how you all manage your alert log files. Do you archive (move to a different directory) and recreate a brand-new alert log? I'd like to know if there are any best practices I could follow.

    At the end of every day (or any other regular interval), archive the alert.log by moving it to another directory and then remove the original; the database instance will automatically create a new alert.log the next time it writes to it.
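    For what it's worth, a minimal rotation sketch, assuming a Windows host and hypothetical paths/SID:

    # Rotate the Oracle alert log; the instance recreates alert_<SID>.log on its next write
    $bdump   = "D:\oracle\admin\ORCL\bdump"
    $archive = Join-Path $bdump "archive"
    New-Item -ItemType Directory -Path $archive -Force | Out-Null
    $stamp = Get-Date -Format "yyyyMMdd"
    Move-Item -Path (Join-Path $bdump "alert_ORCL.log") `
              -Destination (Join-Path $archive "alert_ORCL.$stamp.log")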

  • Exchange 2010 - What is best practice for protection against corruption replication?

    My Exchange 2010 SP3 environment includes a DAG with an offsite passive copy. The DB is backed up nightly with TSM TDP. My predecessor also installed DoubleTake software to protect the DB against replication of malware or corruption to the passive MB server. DoubleTake updates the offsite DB replica every 4 hours. Understanding that this is ultimately a decision based on my company's risk tolerance: what is the probability of malware or corruption propagating through replication?
    What is industry best practice? Do most companies keep a third, lagged copy of the DB in the DAG, or are third-party solutions such as DoubleTake commonly employed? Are there other, better (and less expensive) options?

    Correct. If an 8-day lagged copy is maintained, then 8 days' worth of transaction log files are preserved before being replayed into the lagged database. This ensures point-in-time recovery, because you can select which log files to replay into the database.
    Logs get truncated once they have been successfully replayed into the database and have passed their lag timestamp.
    Each database copy has a checkpoint file (.chk), which keeps track of transaction log file status.
    Command to check the transaction log replay status (the .chk file is stored with the transaction log files):
    eseutil /mk <path-of-the-chk-file>
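    To configure such a lagged copy, the replay lag is set per database copy. A minimal sketch, assuming hypothetical database and server names (the format is dd.hh:mm:ss):

    # Configure an 8-day replay lag on the copy hosted by the lag server
    Set-MailboxDatabaseCopy -Identity "DB1\LAGSERVER" -ReplayLagTime 8.00:00:00

    # Check copy health and queue lengths
    Get-MailboxDatabaseCopyStatus -Identity "DB1\LAGSERVER" |
        Format-List Status, CopyQueueLength, ReplayQueueLength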
    - Sarvesh Goel - Enterprise Messaging Administrator

  • Exchange Best Practices Analyzer and Event 10009 - DCOM

    We have two Exchange 2010 SP3 RU7 servers on Windows 2008 R2
    In general, they seem to function correctly.
    ExBPA (Best Practices Analyzer) results are fine. Just some entries about drivers being more than two years old (vendor has not supplied newer drivers so we use what we have). Anything else has been verified to be something that can "safely be ignored".
    Test-ServiceHealth, Test-ReplicationHealth and other tests indicate no problems.
    However, when I run the ExBPA, it seems like the server on which I run ExBPA attempts to contact the other using DCOM and this fails.
    Some notes:
    1. Windows Firewall is disabled on both.
    2. Pings in both directions are successful.
    3. DTCPing would not even run so I was not able to test with this.
    4. Connectivity works perfectly otherwise. I can see/manage either server from the other using the EMC or EMS. DAG works fine as far as I can see.
    What's the error message?
    Event 10009, DistributedCOM
    "DCOM was unable to communiate with the computer --- opposite Exchange server of the pair of Exchange servers---  using any of the configured protocols."
    This is in the System Log.
    This happens on both servers and only when I run the ExBPA.
    I understand that ExBPA uses DCOM but cannot see what would be blocking communications.
    I can access the opposite server in MS Management Consoles (MMC).
    Note: the error is NOT in the ExBPA results - but rather in the Event Viewer System Log.
    Yes, it is consistent. Have noticed it for some time now.
    Does anyone have any idea what could be causing this? Since normal Exchange operations are not affected, I'm tempted to ignore it, but I have to do my "due diligence" and inquire. 

    Hi David,
    I recommend you refer the following article to troubleshoot this event:
    How to troubleshoot DCOM 10009 error logged in system event
    Why this happens:
    Generally speaking, DCOM 10009 is logged when the local RPCSS service can't reach the RPCSS service on the remote target server. There are many possibilities which can cause this issue.
    Scenario 1:
    The remote target server happens to be offline for a short time, for example, just for maintenance.
    Scenario 2:
    Both servers are online, but an RPC communication issue exists between them; for example: server name resolution failure, exhaustion of the port resources used for RPC communication, or firewall configuration.
    Scenario 3:
    Even if the TCP connection to the remote server has no problems, a failure during RPC authentication can produce a status code such as 0x80070721, which means "A security package specific error occurred"; in that case DCOM 10009 will also be logged on the client side.
    Scenario 4:
    The target DCOM/COM+ service failed to be activated due to a permission issue. In this situation, DCOM 10027 will be logged on the server side at the same time.
    Event ID 10009 — COM Remote Service Availability
    Resolve
    Ensure that the remote computer is available
    There is a problem accessing the COM Service on a remote computer. To resolve this problem:
    Ensure that the remote computer is online.
    This problem may be the result of a firewall blocking the connection. For security, COM+ network access is not enabled by default. Check the system to determine whether the firewall is blocking the remote connection.
    Other reasons for the problem might be found in the Extended Remote Procedure Call (RPC) Error information that is available in Event Viewer.
    To perform these procedures, you must have membership in Administrators, or you must have been delegated the appropriate authority.
    Ensure that the remote computer is online
    To verify that the remote computer is online and the computers are communicating over the network:
    Open an elevated Command Prompt window: click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    At the command prompt, type ping, followed by a space and the remote computer name, and then press ENTER. For example, to check that your server can communicate over the network with a computer named ContosoWS2008, type ping ContosoWS2008, and then press ENTER.
    A successful connection results in a set of replies from the other computer and a set of ping statistics.
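    Since DCOM rides on RPC, it can also help to confirm that the RPC endpoint mapper port is reachable. A minimal sketch, assuming a machine where Test-NetConnection is available (Windows 8/Server 2012 or later; the computer name is hypothetical):

    # Basic reachability, then an RPC endpoint mapper (TCP 135) check
    Test-Connection -ComputerName ContosoWS2008 -Count 2
    Test-NetConnection -ComputerName ContosoWS2008 -Port 135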
    Check the firewall settings and enable the firewall exception rule
    To check the firewall settings and enable the firewall exception rule:
    Click Start, and then click Run. Type wf.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    In the console tree, click Inbound Rules.
    In the list of firewall exception rules, look for COM+ Network Access (DCOM-In).
    If the firewall exception rule is not enabled, in the details pane click Enable Rule, and then scroll horizontally to confirm that the protocol is TCP and the LocalPort is 135. Close Windows Firewall with Advanced Security.
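    The same check can be scripted with netsh, which is available on Windows Server 2008 R2 (the exact rule name may differ by OS version and locale):

    # Show the rule's current state, then enable it if needed
    netsh advfirewall firewall show rule name="COM+ Network Access (DCOM-In)"
    netsh advfirewall firewall set rule name="COM+ Network Access (DCOM-In)" new enable=yes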
    Review available Extended RPC Error information for this event in Event Viewer
    To review available Extended RPC Error information for this event in Event Viewer:
    Click Start, and then click Run. Type comexp.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    Under Console Root, expand Event Viewer (Local).
    In the details pane, look for your event in the Summary of Administrative Events, and then double-click the event to open it.
    The Extended RPC Error information that is available for this event is located on the Details tab. Expand the available items on the Details tab to review all available information.
    For more information about Extended RPC Error information and how to interpret it, see Obtaining Extended RPC Error Information (http://go.microsoft.com/fwlink/?LinkId=105593).
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Exchange 2010 DAG backup & Transaction logs

    Hi, 
    What is the Microsoft-recommended best practice for backing up an Exchange DAG in an environment where there is an active copy and multiple (2-3) passive copies of each database?
    Is it good practice to back up transaction logs as frequently as possible, in addition to the daily full backup? I believe this would allow restoring the DB to the latest possible state using the last good full backup plus transaction logs (like restoring SQL databases).
    Thanks

    Hi,
    Windows Server Backup can't back up a passive copy. If you want to back up both active and passive copies, you need to use DPM or another third-party product.
    Here is a similar thread for your reference.
    Exchange 2010 DAG Backup Best Practices
    http://social.technet.microsoft.com/Forums/exchange/en-US/269c195f-f7d7-488c-bb2e-98b98c7e8325/exchange-2010-dag-backup-best-practices
    Besides, here is a related blog below which may help you.
    Backup issues and limitations with Exchange 2010 and DAG
    http://blogs.technet.com/b/ehlro/archive/2010/02/13/backup-issues-and-limitations-with-exchange-2010-and-dag.aspx
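    Whichever backup product you use, you can verify from the Exchange side that backups are completing (and therefore that logs are being truncated). A minimal sketch:

    # Show the last recorded full/incremental backup per database;
    # -Status is required to populate the backup properties
    Get-MailboxDatabase -Status |
        Format-Table Name, LastFullBackup, LastIncrementalBackup -AutoSize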
    Hope this helps.
    Best regards,
    Belinda
    Belinda Ma
    TechNet Community Support

  • Warnings found after running the Best Practices Analyzer tool in Exchange 2010

    Hello,
    When I ran the Best Practices Analyzer tool I found some warnings:
    1. DNS 'Host' record appears to be missing.
    2. Active Server Pages is not installed.
    3. Application log size.
    4. Self-signed certificate found: it is strongly recommended that you install an authority-signed or trusted certificate. The SSL certificate for 'https://exchange.mydomain.com/Autodiscover/Autodiscover.xml' is self-signed and does not provide any of the security guarantees provided by authority-signed or trusted certificates. (I have an SSL certificate from GeoTrust.) All users can access mail from OWA and can connect to their mailboxes using Outlook Anywhere, but with an SSL warning.
    5. Single global catalog in topology: there is only one global catalog server in the Directory Service Access (DSAccess) topology on server CADEXCHANGE. This configuration should be avoided for fault-tolerance reasons.
    I have already checked the links below, but I did not understand them well:
    http://technet.microsoft.com/en-us/library/6ec1c7f7-f878-43ae-bc52-6fea410742ae.aspx
    http://technet.microsoft.com/en-us/library/4fa708a1-a118-4953-8956-3c50399499f8.aspx
    http://technet.microsoft.com/en-us/library/8867bba7-7f81-42f9-96b6-2feb7e0cea4e.aspx
    Please advise me on how to resolve these issues.
    Thanks

    I have two servers, and both are global catalogs.
    My question is why the warning says there is only one global catalog. Please explain this.
    When I test Autodiscover, the test succeeds, but when I expand the results I find some errors:
    Attempting to test potential Autodiscover URL https://Mydomain.com:443/Autodiscover/Autodiscover.xml
    Testing the SSL certificate to make sure it's valid.
    Validating the certificate name.
    Certificate name validation failed

  • Best Practices for patching Exchange 2010 servers.

    Hi Team,
    Looking for best practices on patching Exchange Server 2010:
    precautions, steps, and pre- and post-patching checks.
    Thanks. 

    Are you referring to Exchange updates? If so:
    http://technet.microsoft.com/en-us/library/ff637981.aspx
    Install the Latest Update Rollup for Exchange 2010
    http://technet.microsoft.com/en-us/library/ee861125.aspx
    Installing Update Rollups on Database Availability Group Members
    Key points:
    Apply in role order
    CAS, HUB, UM, MBX
    If you have CAS roles in an array or load-balanced, they should all be at the same SP/RU level, so coordinate the Exchange updates and add/remove nodes as needed so you do not run for an extended time with different Exchange levels in the same array.
    All the DAG nodes should be at the same rollup/SP level as well. See the above link on how to accomplish that.
    If you are referring to Windows Updates, then I typically follow the same install pattern:
    CAS, HUB, UM, MBX
    With Windows updates, however, I tend not to worry about suspending activation on the DAG members; rather, I simply move the active mailbox copies, apply the update, and reboot if necessary.
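    For the DAG part, a minimal sketch of draining a node before patching (server names are hypothetical; Exchange 2010 SP1 and later also ship StartDagServerMaintenance.ps1/StopDagServerMaintenance.ps1 in the Scripts folder for the full routine):

    # Move all active databases off the node being patched
    Move-ActiveMailboxDatabase -Server MBX1 -Confirm:$false

    # Verify nothing is still mounted on MBX1 before patching/rebooting
    Get-MailboxDatabaseCopyStatus -Server MBX1 |
        Where-Object { $_.Status -eq "Mounted" }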

  • Best practice for OSB logging: report handler or Java class via publish

    Hi all,
    I want to do common error handling in OSB. I have tried two implementations, described below, and just want to know which one is the best practice.
    1. Using a custom report handler: whenever we want to log, we use the OSB report action, which calls a custom Java class that logs the data to the DB.
    2. Using a plain Java class: create a Java class and publish to the proxy, which calls this Java class and does the logging.
    Which is the best practice, and what are the pros and cons of each?
    Thanks
    Phani

    Hi Anuj,
    Thanks for the links, they have been helpful.
    I understand now that OSR is meant to contain only proxy services. The sync facility between OSR and OSB exists so that, when you are not using OER, you can publish proxy services to OSR from OSB. What I didn't understand was why there was an option to publish a proxy service back to OSB and why it ended up as a business service. From the link you provided, it mentioned that this case is for multi-domain OSBs, where one OSB wants to use another OSB's service. It is clear now.
    Some more questions:
    1) In the design-time, in OER no Endpoints are generated for Proxy services. Then how do we publish our design-time services to OSR for testing purposes? What is the correct way of doing this?
    Thanks,
    Umar

  • Best practice for CM log and Out

    Hi,
    I have the following architecture:
    DB SERVER = DBSERVER1
    APPS SERVERS = APP1 AND APP2
    LB = NetScaler
    PCP configured.
    What is the best practice for locating the CM log and out files? Do I need to keep these files on the DB server and mount them to APP1 and APP2?
    Please advise.
    Thanks

    Hi,
    If you want to preserve the log files of the other CM node when a crash happens, why share APPLCSF at all? If the node hosting the shared APPLCSF crashes, then ALL the log files (of both nodes) are gone. Instead, keep the same APPLCSF directories and paths on both nodes, so that CM node A has its log files locally in its own directories and CM node B has its log files locally in its own directories.
    That said, the above is just my thinking, so follow it or not as you wish. Always follow what Oracle says; the poster should also check with Oracle.
    Regards
    Edited by: hungry_dba on Jan 21, 2010 1:20 PM

  • Best practice for Error logging and alert emails

    I have SQL Server 2012 SSIS. I have Excel files that are imported with an Excel Source and an OLE DB Destination. Scheduled jobs run the SSIS packages every night.
    I'm looking for advice on best practice for a production environment. The requirements are as follows:
    1) If an error occurs in a task, an email is sent to the admin.
    2) If an error occurs in a task, it is logged to a flat file or DB.
    Kenny_I

    Are you asking about the difference between using standard logging and event handlers? I prefer the latter, as standard logging will not always capture data in the way we want. So we've developed a framework that adds the necessary functionality inside event handlers and logs the required data, in the required format, to a set of tables that we maintain internally.
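    For the email requirement, one common approach is to alert from the scheduler or an OnError event handler rather than from inside the data flow. A minimal sketch of such a notification step (server and addresses are hypothetical placeholders):

    # Send an alert mail when the nightly package fails
    Send-MailMessage -SmtpServer "smtp.example.com" `
                     -From "ssis@example.com" -To "admin@example.com" `
                     -Subject "SSIS package failed" `
                     -Body "Nightly import failed; see the SSIS log table or flat file for details."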
    Visakh
    http://visakhm.blogspot.com/

  • Database Log File becomes very big, What's the best practice to handle it?

    The log file of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server. Can anybody give me advice on the best practice for handling this issue?
    Should I shrink the database?
    I know a bigger hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
        The above command shrinks the file to 10 GB, i.e. 10240 MB (a recommended size for high-transaction systems).
    Finke Xie wrote:
    > Should I Shrink the Database?
    NEVER SHRINK DATA FILES; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
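    A minimal sketch of steps 1 and 2 from PowerShell, assuming the SQL Server module is installed (instance, database, and file names are hypothetical; the plain T-SQL equivalents work just as well):

    Import-Module SqlServer   # or the older SQLPS module

    # 1) Take a transaction log backup so the log space can be reused
    Backup-SqlDatabase -ServerInstance "." -Database "MyDB" `
        -BackupAction Log -BackupFile "D:\backup\MyDB_log.trn"

    # 2) Shrink ONLY the log file back to 10 GB (10240 MB)
    Invoke-Sqlcmd -ServerInstance "." -Database "MyDB" `
        -Query "DBCC SHRINKFILE('MyDB_log', 10240);"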
    Thanks
    Mush

  • Best practice for creating a new email address on Exchange Server 2010 for a SharePoint library

    Hi,
    Please advise whether there is a best practice for the above.
    Thanks 
    srabon

    Hi Srabon,
    Hope these are what you want.
    Use a cmdlet to Create a User account and Mailbox in Exchange 2010
    http://technet.microsoft.com/en-us/magazine/ff381465.aspx
    Create a Mailbox for an Existing User
    http://technet.microsoft.com/en-us/library/aa998319(v=exchg.141).aspx
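    In short, it comes down to one of these two cmdlets. A minimal sketch with hypothetical names:

    # Mail-enable an existing AD account (e.g. the SharePoint service account)
    Enable-Mailbox -Identity "contoso\splibrary" -Database "DB1"

    # Or create a brand-new user and mailbox in one step
    New-Mailbox -Name "SP Library" -UserPrincipalName "splibrary@contoso.com" `
        -Password (Read-Host -AsSecureString "Password")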
    Thanks
    Mavis
    Mavis Huang
    TechNet Community Support

  • Best Practice - Hardware requirements for exchange test environment

    Hi Experts,
    I'm new to Exchange and I want a test environment for learning, testing, patches, and updates.
    In our environment we have Exchange 2010 and 2013 in coexistence, and I need my test environment to mirror that scenario closely.
    I was thinking of implementing the environment on an isolated (not domain-joined) high-end workstation laptop (quad-core i7, 32 GB RAM, 1 TB SSD), but management refused and replied "do it on one of the free servers within the live production environment at the Data Center"!
    I'm afraid that doing so could corrupt the production environment through a mistake in my configuration; I'm not an Exchange expert who could revert things if something went wrong.
    Is there a documented Microsoft recommendation on how and where to do this that I could send to them?
    Or could someone share the best practice on where to host my test environment and how to set it up?
    Many Thanks
    Mohamed Ibrahim

    I think this may be useful:
    It's their official test lab setup guide.
    http://social.technet.microsoft.com/wiki/contents/articles/15392.test-lab-guide-install-exchange-server-2013.aspx
    Also, your spec should be fine as long as you run the VMs within their means.
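    If you do end up building it on a Hyper-V host, a minimal sketch for an isolated lab VM (names, sizes, and paths are hypothetical; requires the Hyper-V PowerShell module):

    # A private switch keeps lab traffic off the production network
    New-VMSwitch -Name "LabIsolated" -SwitchType Private

    New-VM -Name "EX2013-LAB" -Generation 1 `
           -MemoryStartupBytes 12GB `
           -NewVHDPath "D:\Lab\EX2013-LAB.vhdx" -NewVHDSizeBytes 120GB `
           -SwitchName "LabIsolated"
    Set-VMProcessor -VMName "EX2013-LAB" -Count 4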
