Exchange Log Shipping Replay queue length monitor

Hi Guys,
Can anyone tell me what kind of monitor the Log Shipping Replay queue length monitor is?
Is it an average threshold monitor or a consecutive samples over threshold monitor?
Thanks

Hi,
This monitor is optimized for the CCR scenario and raises an alert if the number of transaction logs waiting to be committed is greater than 15 and they have been waiting for more than 5 minutes. Therefore, it is a consecutive samples over threshold monitor.
You can also find the answer in the Microsoft Exchange Server 2007 Management Pack Guide (page 72):
http://download.microsoft.com/download/1/E/D/1ED18BCA-B96D-4184-89DB-EDD9A77E5040/OM2007_MP_EX2007_SP1.doc
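For readers unsure of the distinction: an average threshold monitor alerts when the mean of recent samples crosses the threshold, while a consecutive samples over threshold monitor alerts only after every one of N successive samples is over it. A minimal Python sketch of the latter (illustrative only, not the management pack's actual implementation; the sample cadence and function name are my assumptions):

```python
def consecutive_over_threshold(samples, threshold=15, required=5):
    """Alert only when `required` consecutive samples exceed `threshold`.

    With 1-minute samples, threshold=15 logs and required=5 samples
    approximates "more than 15 logs waiting for more than 5 minutes".
    """
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= required:
            return True
    return False

# A brief spike resets the streak and does not alert; a sustained backlog does.
```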
Niki Han
TechNet Community Support

Similar Messages

  • DB, Replay Queue length is growing

    Exchange 2013; this started just after migrating from 2010 to 2013.
    The Replay Queue length on certain passive database copies, which are otherwise healthy, has been growing
    rapidly during business hours, although the Copy Queue length is fine.
    It does not decrease at all during business hours. I'm searching for the cause:
    mailbox server performance, disk I/O, or network... need help.
    Even at night, the affected databases on one server show a long replay queue of logs.

    Hi tanale,
    It seems the log files are being copied to the passive copies of the mailbox databases, but they are not being replayed into the passive database.
    Please verify whether the "Don't mount this database at startup" check box is selected on the database. If it is, please
    clear it.
    Regards
    Chinthaka Shameera | MCITP: EA | MCSE: M |
    http://howtoexchange.wordpress.com/

  • HubTransport Unhealthy - Total.Shadow.Queue.Length.Above.Threshold.Monitor - What to check?

    Hi To all,
    looking at Exchange 2013 Server Health, I have the "HubTransport" component in an unhealthy state related to this
    item:
    Total.Shadow.Queue.Length.Above.Threshold.Monitor
    I cannot find any more information about this issue...
    Many thanks for help! :)
    r.

    Hi,
    I found nothing in the public resources either.
    Is there any error/warning/information in the Event Viewer?
    Please also check the detailed error message in the monitor, if possible.
    Did this cause any other issues, such as mail flow problems?
    If everything is working well, I suggest disabling the alert.
    Thanks
    Mavis
    Mavis Huang
    TechNet Community Support

  • Monitoring queue lengths

    Does anyone have any advice/scripts for monitoring queue lengths?
    I'd like to be able to monitor the lengths of the queues within my system, ideally
    such that once queueing occurs an alert/message of sorts can be raised.
    So far I have no continuously active monitoring of queue lengths, but am relying
    on the average queue length data provided by the pq command to identify whether queuing
    is occurring.
    I don't think relying on the average queue length reported by pq is the best
    route to take. Sometimes it provides data that cannot be correct; I get the
    impression that unless it has a reasonably constant flow of requests, it isn't
    very accurate.
    I'm assuming what is actually required is some kind of MIB interrogation program,
    is there anyone that uses something like this to monitor queues?
    I've discovered that the average queue length info provided by pq needs a little
    data manipulation to be meaningful; for everyone's benefit, here's what needs to be
    done:
    The average queue length is the average number of messages in the queue (including
    those being processed) minus one. I don't know the reason for the minus one,
    but it is something to be aware of (particularly for MSSQ sets).
    I subtract the number of servers serving the queue from the average queue length,
    then add the one back on. This gives the average number of requests in the queue
    that are actually waiting to be processed.
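    Jody's correction can be written down in one line; a small Python sketch of the arithmetic described above (the function name is mine, not part of pq):

```python
def avg_waiting(pq_avg_len, num_servers):
    """Average number of requests actually waiting to be processed.

    pq reports the average number of messages in the queue (including
    those being processed) minus one, so subtract the number of servers
    serving the queue and add the one back on, as described above.
    """
    return pq_avg_len - num_servers + 1

# e.g. pq reports 4 with 2 servers serving the queue -> 3 actually waiting
```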
    thanks
    Jody

    Just found it. Coherence->Cache->DistributedCacheForMessages->Attributes->Size

  • Monitor BizTalk Host Queue length and suspended msgs w/SCOM

    First, I hope the BizTalk forum is the right place to ask this. Maybe I should try the SCOM forum as well.
    I'm trying to create two monitors (not rules, as we want the alert to return to healthy automatically when under the threshold again, and we want to see the health state as well) in SCOM based on performance counters for BizTalk MsgBox Host Queue Length and suspended
    messages. My question is what I should use as the target (class) in SCOM. And can I use "All instances" of the counter, or must I create a monitor for each instance (which is a lot of work and not very dynamic)? We want to monitor all the instances/hosts with
    different thresholds, so the first thing I did was target the "BizTalk Host" class, so I can apply overrides to different hosts.
    The problem with this is that it will generate an alert for all hosts if one instance is over the threshold. I also tried targeting the "Run-time role", and this actually works better, but it's not perfect, as I cannot set a threshold for just one instance/host,
    and it will close the alert if any other instance is under the threshold.
    Anyone have experiences with SCOM and monitoring Hosts queues and/or suspended msgs as monitors? 
    thank you in advance for all suggestions!

    I would suggest looking into the spool table and its size. As a recommendation, the count should not be greater than 3000 per server.
    The easy way is to monitor the performance counter "Message Box:General Counters\Spool Size"; alternatively, you can execute one of the following SQL queries in the BizTalk MessageBox database.
    You can watch the counter for the spool table size, or manually use the queries below to find the count:
    SELECT COUNT(*) FROM Spool WITH (NOLOCK)
    SELECT TOP 1 rows FROM sys.partitions WHERE object_id = OBJECT_ID('Spool')
    Note: The NOLOCK hint is important in the first query; you don't want to take any locks on the spool table while measuring the row count. The second query is the one used by the "Spool Size" performance counter via the stored procedure
    "MsgBoxPerfCounters_GetSpoolSize".
    Reference: http://msdn.microsoft.com/en-us/library/aa561922.aspx
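    If you want to wire the 3000-row recommendation into a scheduled check, the threshold logic itself is trivial; a hedged Python sketch (the limit comes from the reply above; fetching the count would use the first SQL query via your preferred SQL Server driver, which is outside this sketch):

```python
SPOOL_ROW_LIMIT = 3000  # recommended ceiling per server, per the reply above

def spool_over_limit(spool_count, limit=SPOOL_ROW_LIMIT):
    """Return True when the MessageBox spool row count exceeds the limit."""
    return spool_count > limit

# The count itself would come from running, against the MessageBox database:
#   SELECT COUNT(*) FROM Spool WITH (NOLOCK)
```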
    Thanks
    Abhishek

  • Disk Queue Length ?

    Our organization is having some slowness problems, particularly when most users are logging on and off, so
    mornings and around 3:30. I've been through everything, bandwidth etc.; we have 10G switches. But I've come across what I believe is the problem, on the server to which we redirect everyone's desktop and profile. On that drive, Resource Monitor has a section
    for Disk Queue Length, which I've read should be 0-2. Ours averages 5-10 and spikes to 50 during these slow periods. All our servers are VMware, on a SAN with SSD drives, so what can I do to resolve this? It's just on the drive that data is on, so we've
    been considering creating another drive and splitting up the user profile folders, or do we need a separate server? How can I fix this problem? Is there a limit to the number of users that can be set up to access one server? Do I need to break that up
    across several servers?
    Jason

    Hi Jason0923,
    A long disk queue can have many causes, such as a high workload combined with a SAN I/O bottleneck. Generally, first confirm whether your SAN's write cache is enabled.
    Another clue: you can refer to the following articles to determine whether there is an I/O bottleneck on your SAN.
    Monitoring Queue Length
    https://technet.microsoft.com/en-us/library/cc938625.aspx?f=255&MSPPError=-2147217396
    Windows Performance Monitor Disk Counters Explained
    http://blogs.technet.com/b/askcore/archive/2012/03/16/windows-performance-monitor-disk-counters-explained.aspx
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Copy Queue Length - All of a sudden one server having communication issues

    We have 4 servers in a DAG (3 at site A and 1 at site B).
    Of the three servers at site A, two of them always show 0 copy queue length.  Recently one of the servers started to show a backlog, and we are seeing the following in the event viewer.  We see this error when the problem server connects to either
    of the other two in the same physical site.
    The log copier was unable to communicate with server 'ABC1'. The copy of database 'DB2\ABC1' is in a disconnected state. The communication error was: An error occurred while communicating with server
    'ABC1'. Error: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. The copier will automatically retry after a short delay.
    At night the queue goes back to 0 and we start over again.  Currently the problem server only has passive copies; we moved off the active copies just in case.
    I have tried using the MAPI network to replicate (different physical NICs and switches); that was just worse. I also tried deactivating the primary NIC in the team and using the secondary, which is connected to a different core switch.
    Any ideas? 

    Hi,
    Based on your post, I understand that one DAG member has started showing a copy queue backlog with the error “Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. The copier will automatically retry after a
    short delay”.
    If I misunderstand your concern, please do not hesitate to let me know.
    Please run the commands below to double-check the connectivity between the servers:
    1. netsh int tcp show global
    2. netsh int tcp set global autotuninglevel=disabled
    3. netsh int tcp set global chimney=disabled
    4. netsh int tcp set global rss=disabled
    Meanwhile, follow the steps below:
    1. Please use the Get-DatabaseAvailabilityGroupNetwork cmdlet to check if DAG network is ok.
    2. Run the Update-MailboxDatabaseCopy -Identity xx cmdlet to seed a copy of a database.
    3. Restart the Microsoft Exchange Replication service.
    4. Please ensure that port 64327 is open.
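    For step 4, a quick scripted reachability check of the replication port from another DAG member might look like this Python sketch (port 64327 comes from the steps above; the host name is a placeholder for your DAG member):

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace 'ABC1' with the DAG member's name):
# print(port_reachable("ABC1", 64327))
```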
    Thanks
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
    Allen Wang
    TechNet Community Support

  • ORA-16191: Primary log shipping client not logged on standby.

    Hi,
    Please help me with the following scenario. I have two nodes, ASM1 and ASM2, with RHEL4 U5 OS. On node ASM1 there is a database ORCL using ASM diskgroups DATA and RECOVER, and the archive location is '+RECOVER/orcl/'. On node ASM2, I have to configure the STDBYORCL (standby) database using ASM. I have taken a copy of database ORCL via RMAN, as per the maximum availability architecture.
    Then I ftp'd everything to ASM2 and put it on the filesystem /u01/oradata. I made all the necessary changes in the primary and standby database pfiles and then performed the RMAN duplicate database for standby in order to put the db files in the desired diskgroups. I have mounted the standby database, but unfortunately the log transport service is not working and archives are not being shipped to the standby host.
    Here are all configuration details.
    Primary database ORCL pfile:
    [oracle@asm dbs]$ more initorcl.ora
    stdbyorcl.__db_cache_size=251658240
    orcl.__db_cache_size=226492416
    stdbyorcl.__java_pool_size=4194304
    orcl.__java_pool_size=4194304
    stdbyorcl.__large_pool_size=4194304
    orcl.__large_pool_size=4194304
    stdbyorcl.__shared_pool_size=100663296
    orcl.__shared_pool_size=125829120
    stdbyorcl.__streams_pool_size=0
    orcl.__streams_pool_size=0
    *.audit_file_dest='/opt/oracle/admin/orcl/adump'
    *.background_dump_dest='/opt/oracle/admin/orcl/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='+DATA/orcl/controlfile/current.270.665007729','+RECOVER/orcl/controlfile/current.262.665007731'
    *.core_dump_dest='/opt/oracle/admin/orcl/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='+DATA'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='orcl'
    *.db_recovery_file_dest='+RECOVER'
    *.db_recovery_file_dest_size=3163553792
    *.db_unique_name=orcl
    *.fal_client=orcl
    *.fal_server=stdbyorcl
    *.instance_name='orcl'
    *.job_queue_processes=10
    *.log_archive_config='dg_config=(orcl,stdbyorcl)'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.log_archive_dest_2='SERVICE=stdbyorcl'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='%t_%s_%r.dbf'
    *.open_cursors=300
    *.pga_aggregate_target=121634816
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=364904448
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS'
    *.user_dump_dest='/opt/oracle/admin/orcl/udump'
    Standby database STDBYORCL pfile:
    [oracle@asm2 dbs]$ more initstdbyorcl.ora
    stdbyorcl.__db_cache_size=251658240
    stdbyorcl.__java_pool_size=4194304
    stdbyorcl.__large_pool_size=4194304
    stdbyorcl.__shared_pool_size=100663296
    stdbyorcl.__streams_pool_size=0
    *.audit_file_dest='/opt/oracle/admin/stdbyorcl/adump'
    *.background_dump_dest='/opt/oracle/admin/stdbyorcl/bdump'
    *.compatible='10.2.0.1.0'
    *.control_files='u01/oradata/stdbyorcl_control01.ctl'#Restore Controlfile
    *.core_dump_dest='/opt/oracle/admin/stdbyorcl/cdump'
    *.db_block_size=8192
    *.db_create_file_dest='/u01/oradata'
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='orcl'
    *.db_recovery_file_dest='+RECOVER'
    *.db_recovery_file_dest_size=3163553792
    *.db_unique_name=stdbyorcl
    *.fal_client=stdbyorcl
    *.fal_server=orcl
    *.instance_name='stdbyorcl'
    *.job_queue_processes=10
    *.log_archive_config='dg_config=(orcl,stdbyorcl)'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.log_archive_dest_2='SERVICE=orcl'
    *.log_archive_dest_state_1='ENABLE'
    *.log_archive_dest_state_2='ENABLE'
    *.log_archive_format='%t_%s_%r.dbf'
    *.log_archive_start=TRUE
    *.open_cursors=300
    *.pga_aggregate_target=121634816
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=364904448
    *.standby_archive_dest='LOCATION=USE_DB_RECOVERY_FILE_DEST'
    *.standby_file_management='AUTO'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS'
    *.user_dump_dest='/opt/oracle/admin/stdbyorcl/udump'
    db_file_name_convert=('+DATA/ORCL/DATAFILE','/u01/oradata','+RECOVER/ORCL/DATAFILE','/u01/oradata')
    log_file_name_convert=('+DATA/ORCL/ONLINELOG','/u01/oradata','+RECOVER/ORCL/ONLINELOG','/u01/oradata')
    Have configured the TNS service on both hosts and it's working absolutely fine.
    ASM1
    =====
    [oracle@asm dbs]$ tnsping stdbyorcl
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:49:00
    Copyright (c) 1997, 2005, Oracle. All rights reserved.
    Used parameter files:
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.20)(PORT = 1521))) (CONNECT_DATA = (SID = stdbyorcl) (SERVER = DEDICATED)))
    OK (30 msec)
    ASM2
    =====
    [oracle@asm2 archive]$ tnsping orcl
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2008 18:48:39
    Copyright (c) 1997, 2005, Oracle. All rights reserved.
    Used parameter files:
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.20.10)(PORT = 1521))) (CONNECT_DATA = (SID = orcl) (SERVER = DEDICATED)))
    OK (30 msec)
    Please guide me on what I am missing. Thank you in anticipation.
    Regards,
    Ravish Garg

    Following are the errors I am receiving as per alert log.
    ORCL alert log:
    Thu Sep 25 17:49:14 2008
    ARCH: Possible network disconnect with primary database
    Thu Sep 25 17:49:14 2008
    Error 1031 received logging on to the standby
    Thu Sep 25 17:49:14 2008
    Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
    ORA-01031: insufficient privileges
    FAL[server, ARC1]: Error 1031 creating remote archivelog file 'STDBYORCL'
    FAL[server, ARC1]: FAL archive failed, see trace file.
    Thu Sep 25 17:49:14 2008
    Errors in file /opt/oracle/admin/orcl/bdump/orcl_arc1_4825.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Thu Sep 25 17:49:14 2008
    ORACLE Instance orcl - Archival Error. Archiver continuing.
    Thu Sep 25 17:49:44 2008
    FAL[server]: Fail to queue the whole FAL gap
    GAP - thread 1 sequence 40-40
    DBID 1192788465 branch 665007733
    Thu Sep 25 17:49:46 2008
    Thread 1 advanced to log sequence 48
    Current log# 2 seq# 48 mem# 0: +DATA/orcl/onlinelog/group_2.272.665007735
    Current log# 2 seq# 48 mem# 1: +RECOVER/orcl/onlinelog/group_2.264.665007737
    Thu Sep 25 17:55:43 2008
    Shutting down archive processes
    Thu Sep 25 17:55:48 2008
    ARCH shutting down
    ARC2: Archival stopped
    STDBYORCL alert log:
    ==============
    Thu Sep 25 17:49:27 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-01017: invalid username/password; logon denied
    Thu Sep 25 17:49:27 2008
    Error 1017 received logging on to the standby
    Check that the primary and standby are using a password file
    and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
    and that the SYS password is same in the password files.
    returning error ORA-16191
    It may be necessary to define the DB_ALLOWED_LOGON_VERSION
    initialization parameter to the value "10". Check the
    manual for information on this initialization parameter.
    Thu Sep 25 17:49:27 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-16191: Primary log shipping client not logged on standby
    PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
    Thu Sep 25 17:51:38 2008
    FAL[client]: Failed to request gap sequence
    GAP - thread 1 sequence 40-40
    DBID 1192788465 branch 665007733
    FAL[client]: All defined FAL servers have been attempted.
    Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
    parameter is defined to a value that is sufficiently large
    enough to maintain adequate log switch information to resolve
    archivelog gaps.
    Thu Sep 25 17:55:16 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-01017: invalid username/password; logon denied
    Thu Sep 25 17:55:16 2008
    Error 1017 received logging on to the standby
    Check that the primary and standby are using a password file
    and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
    and that the SYS password is same in the password files.
    returning error ORA-16191
    It may be necessary to define the DB_ALLOWED_LOGON_VERSION
    initialization parameter to the value "10". Check the
    manual for information on this initialization parameter.
    Thu Sep 25 17:55:16 2008
    Errors in file /opt/oracle/admin/stdbyorcl/bdump/stdbyorcl_arc0_4813.trc:
    ORA-16191: Primary log shipping client not logged on standby
    PING[ARC0]: Heartbeat failed to connect to standby 'orcl'. Error is 16191.
    Please suggest what I am missing.
    Regards,
    Ravish Garg

  • URGENT: SBS 2011 Exchange log files filling up drive in minutes!

    I need some help with ideas as to why Exchange is generating hundreds of log files every minute.
    The server had 0MB free on the C: drive, and come to find out, there were over 119,000 log files in the Exchange server folder (dated within the last 7 days).  These files are named like E00001D046C.log.  Oddly, the Exchange database store
    is not growing in size as you'd expect.  Frantically searching for a way to free up space, I turned on circular logging and remounted the store (after freeing up enough space for it to mount!).  Almost instantly, the 119,000+ log files disappeared,
    but now there are about 40 or so that are constantly being created/written/deleted, over and over and over.
    This is a small 5 person office with a 4GB database store.  The 119,000 log files were taking up over 121GB.  It's nice to have that space back, but something is in a loop, constantly creating log files as fast as the system can write them.
    I checked the queues...nothing.  Where else can I look to see what might be causing this?
    Thanks for the help.
    ps.  Windows server backup failed about the time this problem started, stating the backup drive is out of space.  It's a 2TB drive, backing up 120GB of data.  Isn't it supposed to delete old backups to make room for new?

    Hi,
    Regarding the current issue, please refer to the following article to see if it could help.
    Exchange log disk is full, Prevention and Remedies
    http://www.msexchange.org/articles-tutorials/exchange-server-2003/planning-architecture/Exchange-log-disk-full.html
    If you want to disable Exchange ActiveSync feature, please refer to the following article.
    Disable Exchange ActiveSync
    http://technet.microsoft.com/en-us/library/bb124502(v=exchg.141).aspx
    Best Regards,
    Andy Qi
    TechNet Subscriber Support

  • Log shipping job failing

    Hi guys,
    We are using SQL SERVER 2005.
    I have an LSAlert_Serv job, and this job runs the system stored procedure sys.sp_check_log_shipping_monitor_alert.
    When this job runs, I get the following error message:
    The log shipping primary database SHARP has backup threshold of 60 minutes and has not performed
    a backup log operation for 7368 minutes. Check agent log and logshipping monitor information. [SQLSTATE 42000] (Error 14420). The step failed.
    The database named SHARP that is mentioned in the above error message is now moved to another
    server. 
    When I looked into the stored procedure and ran the query below from it:
    select primary_server
    ,primary_database
    ,isnull(threshold_alert, 14420)
    ,backup_threshold
    ,cast(0 as int)
    from msdb.dbo.log_shipping_monitor_primary
    where threshold_alert_enabled = 1
    I can still see the database SHARP in the table msdb.dbo.log_shipping_monitor_primary. So,
    is that the reason for the failure? If so, what should I do to update the table msdb.dbo.log_shipping_monitor_primary and fix the issue?
    Thanks

    The database named SHARP that is mentioned in the above error message is now moved to another server. 
    When I looked into the stored procedure and when I ran the below query from the stored procedure:
    Since you said you moved the database to a different server, can you please check that the SQL Server service account (on the new server where you moved the database) has full permissions on the folder where the log backup job is configured to back up the transaction logs.
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles

  • Need help on explanation of Avg. Disk Queue Length

    Based on perfmon, my Avg. Disk Queue Length on the physical disk hit 100%.
    What does that mean? I really need an explanation.

    I'm a bit confused by your statement. I'm not sure where the 100% is coming from.
    Avg. Disk Queue Length is the average number of both read and write requests that were queued for the selected disk during the sample interval.
    Current Disk Queue Length is the number of requests outstanding on the disk at the time the performance data is collected. It also includes requests in service at the time of the
    collection. This is an instantaneous snapshot, not an average over the time interval. Multi-spindle disk devices can have multiple requests active at one time, while other concurrent requests are awaiting service. This counter might reflect a transitory
    high or low queue length, but if there is a sustained load on the disk drive, it is likely to be consistently high. Requests experience delays proportional to the length of this queue minus the number of spindles on the disks. For good performance,
    this difference should average less than two.
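    The rule of thumb above (queue length minus spindles should average under two) is easy to codify; a small Python sketch (illustrative only; for virtualized/SAN volumes like the poster's, the physical spindle count is often unknown, so treat this as a rough heuristic):

```python
def excess_queue(avg_disk_queue_len, spindles):
    """Queued requests beyond what the spindles can service concurrently."""
    return avg_disk_queue_len - spindles

def disk_looks_saturated(avg_disk_queue_len, spindles, limit=2):
    """Per the rule of thumb above, sustained excess >= 2 suggests a bottleneck."""
    return excess_queue(avg_disk_queue_len, spindles) >= limit
```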
    This whole topic can get very confusing.
    Think of Current Disk Queue Length as in-flight operations.
    These are disk reads or writes that have passed through the performance filter driver and are on their way to the physical disk and back. While in flight, a disk operation must pass through (assuming a SAN) your class drivers, multipath drivers, HBA card,
    the network fabric, switches, and into the SAN, any of which could introduce a bottleneck.
    Then the acknowledgment of completion must return.
    Think of Avg. Disk Queue Length as disk operations waiting to jump onto the flight.
    So if you have an Avg. Disk Queue Length building up, think of it as cars backing up on the on-ramp to get onto the highway.
    Typically I start disk analysis by looking at:
    Logical Disk\Avg. Disk sec/Read
    Logical Disk\Avg. Disk sec/Write
    The Queue Length counters are secondary and only used if the latency counters are out of spec.
    Here are some good blogs and tools to use to follow up:
    Taking Your Server's Pulse
    http://technet.microsoft.com/en-us/magazine/2008.08.pulse.aspx?pr=blog
    Performance Analysis of Logs (PAL) Tool
    http://pal.codeplex.com/
    The Case of the Mysterious Black Box
    http://blogs.technet.com/b/clinth/archive/2009/11/18/the-case-of-the-mysterious-black-box-san-analysis-for-beginners.aspx
    Bruce Adamczak

  • Best way of handle Log Shipping in Physical Movement of SQL Server

    Hi All
    We are physically moving a SQL Server from one rack to another in the data centre: just power off, move the server, link up the network, power on, and bring it back to the same state as before the power off.
    One SQL Server 2005 instance has log shipping active. What is the best way to maintain log shipping during this physical move? I do not want to remove log shipping and reconfigure it from scratch.
    I need a clean and safe method to carry out this activity.
    Thanks in Advance

    Thanks for the reply...
    No, I am not asking about rack migration, just the SQL Server. Here are the steps I am planning to take...
    1. Stop application(s) that connect to the databases on this Server
    Correct
    2. Note the account under which SQL Server is running, to check that account's permissions on the folder(s) used for log shipping
    OK
    3. Stop the jobs via a script that disables and enables them
    Here I have a doubt about disabling and enabling the log shipping jobs.
    You can use a job (T-SQL), but why not use the GUI? Log into SQL Server, expand SQL Server Agent, right-click the job and disable it.
    a. I should stop the LS jobs manually as you recommended, but not while they are running, am I right?
    Yes, see the Job Activity Monitor on both the primary and secondary servers. It will show the status "executing" for running jobs; look out for the LS jobs.
    b. Shall I disable the jobs first on the secondary or the primary server?
    First disable on the primary and then on the secondary.
    I faced an issue where shutting down the SQL agent on the secondary caused the secondary database to go into suspect mode. So make sure no job is running while you shut down the agent. If the restore log job is running, let it complete and then disable the job.
    c. On which server should I enable the LS jobs first?
    Primary.
    4. Stop the SQL Server services.
    Run sp_who2 or select * from sys.sysprocesses to see any active transactions, then proceed accordingly.
    And take the same steps in reverse after powering on following the physical move of the box.
    Pls advise
    Hope this helps
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Log shipping Could not retrieve backup settings for primary ID

    hello,
    I implemented log shipping on our server. The implementation went fine, but when I viewed the job history I found the following messages:
    2014-06-11 12:00:01.53    *** Error: Could not retrieve backup settings for primary ID '99817903-626e-4380-bcf1-c09ca6f48b6d'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-06-11 12:00:01.53    *** Error: The specified agent_id 99817903-626E-4380-BCF1-C09CA6F48B6D or agent_type 0 do not form a valid pair for log shipping monitoring processing.(.Net SqlClient Data Provider) ***
    2014-06-11 12:00:01.53    *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-06-11 12:00:01.53    *** Error: The specified agent_id 99817903-626E-4380-BCF1-C09CA6F48B6D or agent_type 0 do not form a valid pair for log shipping monitoring processing.(.Net SqlClient Data Provider) ***
    2014-06-11 12:00:01.53    *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-06-11 12:00:01.53    *** Error: The specified agent_id 99817903-626E-4380-BCF1-C09CA6F48B6D or agent_type 0 do not form a valid pair for log shipping monitoring processing.(.Net SqlClient Data Provider) ***
    2014-06-11 12:00:01.53    ----- END OF TRANSACTION LOG BACKUP   -----
    Exit Status: 1 (Error)
    I also checked the database ID using select * from msdb..log_shipping_primary_databases.
    your help is appreciated.
    Please Mark it as Answered if it answered your question
    OR mark it as Helpful if it help you to solve your problem
    Elmozamil Elamir Hamid
    MCSE Data Platform
    MCITP: SQL Server 2008 Administration/Development
    MCSA SQL Server 2012
    MCTS: SQL Server Administration/Development
    MyBlog

    Thank you all for your contribution.
    After testing and debugging, I found that when they moved the database from one server (server2) to another (server1), the database server was still using the old name (server2), which you can find using this script:
    SELECT @@SERVERNAME
    So log shipping can't retrieve the information. Unfortunately for me, it is difficult to rename the server because it is a development server, so they need to analyse the side effects before we rename it.
    Elmozamil Elamir Hamid

  • VM exhibiting 100% disk busy time, large disk queue lengths

    Hi everyone,
    We have a .VHD workload residing on a logical 2 x 136GB RAID1 mirrored pair of disks.
    The .VHD file is 130GB (with 70GB of free space).
    The virtual machine is running Windows 2008 R2 SP1 with 4 cores and 8GB of RAM, and is exhibiting 100% disk busy time and disk queue lengths of anywhere between 14 and 44.
    I'm assuming this is because there is virtually no free disk space on the logical drive. Ops Mgr 2012 R2 reports high memory pages/sec.
    So we backed up the .VHD workload, broke the RAID1 mirror, inserted 2 x 300GB disks as a RAID1 mirror, and restored the .VHD/VM.
    The logical disk now has 50% free space; however, the VM is still exhibiting 100% disk busy time and the above disk queue lengths.
    It is running on a Windows Server 2008 R2 SP1 HP ProLiant server running the Hyper-V role under Server Core.
    Any ideas most appreciated.

    Hi,
    A mirror array doesn't improve disk performance; it only provides disk redundancy. In my experience, applications that frequently operate on large numbers of small files
    can consume a lot of disk resources. If you can't tell whether the high disk I/O is caused by the guest VM or the host computer, you can use Resource Monitor first to identify which process is generating the high disk load, then do further troubleshooting.
    A third-party tutorial on using Resource Monitor:
    How to use the Resource Monitor in Windows 7 & Windows 8
    http://www.7tutorials.com/how-use-resource-monitor-windows-7
    Hope this helps.

  • How to suppress Log Shipping alert for a specific database?

    I want to disable log shipping for a database temporarily. I have disabled the backup job, copy job, and restore job created automatically when log shipping was configured for the database. However, I cannot disable the Log Shipping alert
    job, since other databases are configured for log shipping. How can I suppress the Log Shipping alert for a specific database? I don't want to remove log shipping for the primary database, since that would delete all jobs and history
    related to the log shipping configuration at the primary, secondary, and monitor server instances.

    Too late, but this is possible. We just need to set threshold_alert_enabled to 0 in the system table msdb.dbo.log_shipping_monitor_primary on the primary server and in msdb.dbo.log_shipping_monitor_secondary on the secondary server.
    I just tested it and it works.
    To test this, perform the steps below:
    1. Run exec master.sys.sp_check_log_shipping_monitor_alert; it will show the error message that thresholds have been crossed. This is the same script used in the LSAlert job.
    2. Run the following to change the values:
    update msdb.dbo.log_shipping_monitor_primary
    set threshold_alert_enabled = 0
    where primary_database = 'XYZ'
    Run this for all databases that need to be excluded from monitoring.
    3. Run the script in step 1 again; it should now complete without raising the alert.
    Please mark the answer as helpful if I have answered your query. Thanks and regards, Kartar Rana
