Failover Clustering: Failed in NetShareGetInfo

Hi to all,
Today I experienced a failover cluster failure. After generating a cluster log, I found a warning followed by an error:
WARN: Failed in NetShareGetInfo(...), status 1726. Tolerating...
ERR: Not a single share among 1 configured shares is online.
After 500 ms, the cluster resource came back online.
Can someone explain status 1726?
Thanks in advance.

Hi Yuri Corio,
Could you tell us which server edition you are using in your cluster? Did you install all recommended updates and hotfixes, and does your cluster pass cluster
validation?
Please install the recommended hotfixes and updates for Windows Server failover clusters, then run cluster validation and post the warning and error sections. Status
1726 (RPC_S_CALL_FAILED, "The remote procedure call failed") is usually caused by a failure of the file share resource's IsAlive check; please update your storage driver or firmware and then monitor this issue again.
More information:
Working with File Shares in Windows Server 2008 (R2) Failover Clusters
http://blogs.technet.com/b/askcore/archive/2010/08/19/working-with-file-shares-in-windows-server-2008-r2-failover-clusters.aspx
NetShareGetInfo function
http://msdn.microsoft.com/en-us/library/windows/desktop/bb525388(v=vs.85).aspx
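For what it's worth, the log message shows what the health check is doing: the File Server resource periodically queries each configured share with NetShareGetInfo, and 1726 is simply the status that call returned. Purely as an illustration, here is a minimal Python/ctypes sketch of that same API call; the cluster name and share name below are placeholders, not values from your environment.
# Sketch only: query a share the way the IsAlive probe does, via NetShareGetInfo.
# Windows only; "MYCLUSTERNAME" and "MyShare" are placeholders.
import ctypes
from ctypes import wintypes

netapi32 = ctypes.WinDLL("Netapi32.dll")

def probe_share(server, share, level=1):
    """Return the NET_API_STATUS from NetShareGetInfo (0 = NERR_Success)."""
    buf = ctypes.c_void_p()
    status = netapi32.NetShareGetInfo(
        ctypes.c_wchar_p(server),   # server or clustered network name
        ctypes.c_wchar_p(share),    # share name
        wintypes.DWORD(level),      # information level (1 = SHARE_INFO_1)
        ctypes.byref(buf),
    )
    if buf:
        netapi32.NetApiBufferFree(buf)  # free the buffer the API allocated
    return status

# A non-zero result such as 1726 or 2114 is the "status" you see in the cluster log.
print(probe_share("MYCLUSTERNAME", "MyShare"))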
I’m glad to be of help to you!

Similar Messages

  • Failover interface failed

    Hi,
    I have two ASA 5520 firewalls configured with HA (failover), but sometimes when my primary firewall goes down, the standby firewall doesn't become active. I found the log below on the primary firewall. What is the reason, and what is the meaning of reason code 4?
    Nov 30 2012 14:07:47: %ASA-1-105002: (ASA) Enabling failover.
    Nov 30 2012 14:08:43: %ASA-1-105043: (Primary) Failover interface failed
    Nov 30 2012 14:08:56: %ASA-1-103001: (Primary) No response from other firewall (reason code = 4).
    After I hard-rebooted my standby firewall, the log below was generated:
    Nov 30 2012 15:51:57: %ASA-1-105042: (Primary) Failover interface OK
    Nov 30 2012 15:52:02: %ASA-1-709003: (Primary) Beginning configuration replication: Send to mate.
    Nov 30 2012 15:52:15: %ASA-1-709004: (Primary) End Configuration Replication (ACT)
    Please assist....
    Regards
    Suhas

    Hi,
    The explanation for that can be found in the ASA syslog messages document.
    Here it is:
    103001
    Error Message: %ASA-1-103001: (Primary) No response from other firewall (reason code = code).
    Explanation: This is a failover message, which is displayed if the primary unit is unable to communicate with the secondary unit over the failover cable. (Primary) can also be listed as (Secondary) for the secondary unit. Table 1-2 lists the reason codes and the descriptions to determine why the failover occurred.
    Table 1-2 Reason Codes
    1: The local unit is not receiving the hello packet on the failover LAN interface when LAN failover occurs, or on the serial failover cable when serial failover occurs, and declares that the peer is down.
    2: An interface did not pass one of the four failover tests, which are: 1) Link Up, 2) Monitor for Network Traffic, 3) ARP, and 4) Broadcast Ping.
    3: No proper ACK for 15+ seconds after a command was sent on the serial cable.
    4: The local unit is not receiving the hello packet on the failover LAN and other data interfaces, and it is declaring that the peer is down.
    5: The failover LAN interface is down, and other data interfaces are not responding to additional interface testing. In addition, the local unit is declaring that the peer is down.
    Recommended Action    Verify that the failover cable is connected correctly and both units have the  same hardware, software, and configuration. If the problem persists, contact the Cisco TAC.
    Are you saying that the primary ASA loses all connectivity to the secondary ASA (looking at the log messages)? Judging by the above Cisco description, it would mean the primary ASA isn't getting failover hellos through any of the monitored interfaces, which again would make it seem like the secondary firewall is experiencing some problems.
    - Jouni

  • Failover cluster failed due to mysterious IP conflict ?

    I'm having a mysterious problem with my Failover cluster,
    Cluster name: PrintCluster01.domain.com
    Members: PrintServer01.domain.com and PrintServer02.domain.com
    In the Failover Cluster Management – Cluster Events view I received the critical error messages 1135 and 1177:
    Log Name: System
    Source: Microsoft-Windows-FailoverClustering
    Date: 15/06/2011 9:07:49 PM
    Event ID: 1177
    Task Category: None
    Level: Critical
    Keywords:
    User: SYSTEM
    Computer: PrintServer01.domain.com
    Description:
    The Cluster service is shutting down because quorum was lost. This could be due to the loss of network connectivity between some or all nodes in the cluster, or a failover of the witness disk.
    Run the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is
    connected such as hubs, switches, or bridges.
    Log Name: System
    Source: Microsoft-Windows-FailoverClustering
    Date: 15/06/2011 9:07:28 PM
    Event ID: 1135
    Task Category: None
    Level: Critical
    Keywords:
    User: SYSTEM
    Computer: PrintServer01.domain.com
    Description:
    Cluster node 'PrintServer02' was removed from the active failover cluster membership. The Cluster service on this node may have stopped. This could also be due to the node having lost communication with other active nodes in the failover cluster. Run
    the Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapters on this node. Also check for failures in any other network components to which the node
    is connected such as hubs, switches, or bridges.
    After further investigation, I found an interesting error: the very first critical error message logged in the Event Viewer on PrintServer02:
    Log Name: System
    Source: Tcpip
    Date: 15/06/2011 9:07:29 PM
    Event ID: 4199
    Task Category: None
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: PrintServer02-VM.domain.com
    Description:
    The system detected an address conflict for IP address 192.168.127.142 with the system having network hardware address 00-50-56-AE-29-23. Network operations on this system may be disrupted as a result.
    192.168.127.142 --> secondary IP of PrintServer01
    How could it possibly conflict with one of PrintServer01's own addresses? The details are below:
    **From PrintServer01**
    Ethernet adapter Local Area Connection* 8:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Microsoft Failover Cluster Virtual Adapter
    Physical Address. . . . . . . . . : 02-50-56-AE-29-23
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv4 Address. . . . . . . . . . . : 169.254.1.183(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.0.0
    Default Gateway . . . . . . . . . :
    NetBIOS over Tcpip. . . . . . . . : Enabled
    I have double-checked on all of the cluster members that all IP addresses are unique.
    I'm also sure that the IPs are static, not assigned by DHCP, as shown in the IPCONFIG results below:
    From **PrintServer01** (the Active Node)
    Windows IP Configuration
    Host Name . . . . . . . . . . . . : PrintServer01
    Primary Dns Suffix . . . . . . . : domain.com
    Node Type . . . . . . . . . . . . : Hybrid
    IP Routing Enabled. . . . . . . . : No
    WINS Proxy Enabled. . . . . . . . : No
    DNS Suffix Search List. . . . . . : domain.com
    domain.com.au
    Ethernet adapter Local Area Connection* 8:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Microsoft Failover Cluster Virtual Adapter
    Physical Address. . . . . . . . . : 02-50-56-AE-29-23
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv4 Address. . . . . . . . . . . : 169.254.1.183(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.0.0
    Default Gateway . . . . . . . . . :
    NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter Cluster Public Network:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Intel® PRO/1000 MT Network Connection
    Physical Address. . . . . . . . . : 00-50-56-AE-29-23
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv4 Address. . . . . . . . . . . : 192.168.127.155(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    IPv4 Address. . . . . . . . . . . : 192.168.127.88(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    IPv4 Address. . . . . . . . . . . : 192.168.127.142(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    IPv4 Address. . . . . . . . . . . : 192.168.127.143(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    IPv4 Address. . . . . . . . . . . : 192.168.127.144(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . : 192.168.127.254
    DNS Servers . . . . . . . . . . . : 192.168.127.10
    192.168.127.11
    Primary WINS Server . . . . . . . : 192.168.127.10
    Secondary WINS Server . . . . . . : 192.168.127.11
    NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter Cluster Private Network:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Intel® PRO/1000 MT Network Connection #2
    Physical Address. . . . . . . . . : 00-50-56-AE-43-EC
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv4 Address. . . . . . . . . . . : 10.184.2.2(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . :
    NetBIOS over Tcpip. . . . . . . . : Disabled
    From **PrintServer02**
    Windows IP Configuration
    Host Name . . . . . . . . . . . . : PrintServer02
    Primary Dns Suffix . . . . . . . : domain.com
    Node Type . . . . . . . . . . . . : Hybrid
    IP Routing Enabled. . . . . . . . : No
    WINS Proxy Enabled. . . . . . . . : No
    DNS Suffix Search List. . . . . . : domain.com
    domain.com.au
    Ethernet adapter Local Area Connection* 8:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Microsoft Failover Cluster Virtual Adapter
    Physical Address. . . . . . . . . : 02-50-56-AE-5F-E5
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv4 Address. . . . . . . . . . . : 169.254.2.86(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.0.0
    Default Gateway . . . . . . . . . :
    NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter Cluster Public Network:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Intel® PRO/1000 MT Network Connection
    Physical Address. . . . . . . . . : 00-50-56-AE-79-FA
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv4 Address. . . . . . . . . . . : 192.168.127.172(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    IPv4 Address. . . . . . . . . . . : 192.168.127.119(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . : 192.168.127.254
    DNS Servers . . . . . . . . . . . : 192.168.127.10
    192.168.127.11
    Primary WINS Server . . . . . . . : 192.168.127.11
    Secondary WINS Server . . . . . . : 192.168.127.10
    NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter Cluster Private Network:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Intel® PRO/1000 MT Network Connection #2
    Physical Address. . . . . . . . . : 00-50-56-AE-77-8D
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv4 Address. . . . . . . . . . . : 10.184.2.3(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . :
    NetBIOS over Tcpip. . . . . . . . : Disabled
    Any help would be greatly appreciated.
    Thanks,
    AWT
    /* Server Support Specialist */

    I am facing the same scenario as the original poster. This is on Server 2008 R2 SP1.
    Windows event log entries follow the same pattern. The MAC address listed in connection with the duplicate IP belonged to the passive node.
    Interestingly, the Cluster.log begins to explode with activity a few milliseconds before the first Windows event is logged.
    2012/07/11-15:20:59.517 INFO  [CHANNEL fe80::8145:f2b9:898e:784e%37:~3343~] graceful close, status (of previous failure, may not indicate problem) ERROR_IO_PENDING(997)
    2012/07/11-15:20:59.517 WARN  [PULLER SQLTESTSQLB] ReadObject failed with GracefulClose(1226)' because of 'channel to remote endpoint fe80::8145:f2b9:898e:784e%37:~3343~
    is closed'
    2012/07/11-15:20:59.517 ERR   [NODE] Node 1: Connection to Node 2 is broken. Reason GracefulClose(1226)' because of 'channel to remote endpoint fe80::8145:f2b9:898e:784e%37:~3343~
    is closed'
    2012/07/11-15:20:59.517 WARN  [RGP] Node 1: only local suspects are missing (2). moving to the next stage (shortcut compensation time 05.000)
    2012/07/11-15:20:59.548 WARN  [NETFTAPI] Failed to query parameters for fe80::5efe:169.254.1.79 (status 80070490)
    2012/07/11-15:20:59.548 WARN  [NETFTAPI] Failed to query parameters for fe80::5efe:169.254.1.79 (status 80070490)
    2012/07/11-15:20:59.579 INFO  [CHANNEL 192.168.3.22:~3343~] graceful close, status (of previous failure, may not indicate problem) ERROR_SUCCESS(0)
    2012/07/11-15:20:59.579 WARN  cxl::ConnectWorker::operator (): GracefulClose(1226)' because of 'channel to remote endpoint 192.168.3.22:~3343~ is closed'
    2012/07/11-15:20:59.829 INFO  [GEM] Node 1: EnterRepairStage1: Gem agent for node 1
    2012/07/11-15:21:00.141 INFO  [GEM] Node 1: EnterRepairStage2: Gem agent for node 1
    2012/07/11-15:21:00.499 WARN  [RCM] Moving orphaned group Available Storage from downed node SQLTESTSQLB to node SQLTESTSQLA.
    2012/07/11-15:21:00.499 WARN  [RES] IP Address <Cluster IP Address>: WorkerThread: NetInterface ef150d1a-f4a1-4f4f-a5c7-6e7cb2bfacab changed to state 3.
    2012/07/11-15:21:00.499 WARN  [RCM] Moving orphaned group MSSTEST from downed node SQLTESTSQLB to node SQLTESTSQLA.
    2012/07/11-15:21:00.546 WARN  [RES] IP Address <SQL IP Address 1 (DEVSQL)>: Failed to delete IP interface 2003B882, status 87.
    2012/07/11-15:21:00.562 WARN  [RES] Physical Disk <Cluster Disk 2>: PR reserve failed, status 170
    2012/07/11-15:21:00.577 WARN  [RES] Physical Disk <Cluster Disk 1>: PR reserve failed, status 170
    2012/07/11-15:21:00.593 WARN  [RES] Physical Disk <Cluster Disk 3>: PR reserve failed, status 170
    2012/07/11-15:21:02.215 WARN  [NETFTAPI] Failed to query parameters for 192.168.3.32 (status 80070490)
    2012/07/11-15:21:02.215 WARN  [NETFTAPI] Failed to query parameters for 192.168.3.32 (status 80070490)
    2012/07/11-15:21:05.864 DBG   [NETFTAPI] received NsiParameterNotification  for fe80::5cd:8cc2:186:f5cb (IpDadStatePreferred )
    2012/07/11-15:21:06.565 ERR   [RES] Physical Disk <Cluster Disk 2>: Failed to preempt reservation, status 170
    2012/07/11-15:21:06.581 ERR   [RES] Physical Disk <Cluster Disk 2>: OnlineThread: Unable to arbitrate for the disk. Error: 170.
    2012/07/11-15:21:06.581 ERR   [RES] Physical Disk <Cluster Disk 2>: OnlineThread: Error 170 bringing resource online.
    2012/07/11-15:21:06.581 ERR   [RHS] Online for resource Cluster Disk 2 failed.
    2012/07/11-15:21:06.581 WARN  [RCM] HandleMonitorReply: ONLINERESOURCE for 'Cluster Disk 2', gen(0) result 5018.
    2012/07/11-15:21:06.581 ERR   [RCM] rcm::RcmResource::HandleFailure: (Cluster Disk 2)
    2012/07/11-15:21:06.581 WARN  [RES] Physical Disk <Cluster Disk 2>: Terminate: Failed to open device \Device\Harddisk5\Partition1, Error 2
    2012/07/11-15:21:06.581 ERR   [RES] Physical Disk <Cluster Disk 1>: Failed to preempt reservation, status 170
    2012/07/11-15:21:06.581 ERR   [RES] Physical Disk <Cluster Disk 1>: OnlineThread: Unable to arbitrate for the disk. Error: 170.
    2012/07/11-15:21:06.581 ERR   [RES] Physical Disk <Cluster Disk 1>: OnlineThread: Error 170 bringing resource online.
    Full cluster log here:
    https://skydrive.live.com/redir?resid=A694FDEBF02727CD!133&authkey=!ADQMxHShdeDvXVc

  • Failover cluster fails validation after a single node restart

    I had a lab environment set up that worked great, passed validation, and could do live migrations without issue, but as soon as I restarted one of the nodes, the still-live node became the only node able to access the storage backend. What's weird is that the restarted
    node can still access the CSV storage and run VMs off of it, but the validation report is unable to list the actual disks.
    My Cluster consists of 2 nodes. I have an iSCSI backed shared storage server and I can see that both of my nodes
    are connected to the iSCSI targets successfully, but the node I first restarted no longer lists any disks/volumes in disk management and the once available MPIO menus are disabled in the iSCSI control panel. I also tried to restart the second node after the
    first node came back but although the first node was up and running and had VMs on it, restarting the second node brought the entire cluster down. I see event IDs 1177, 1573, and 1069 appear in the Cluster Events log. When the second node came back up, the
    cluster came back with it, but not the storage. Both nodes seem to display similar behavior in that they cannot access the storage backend. Now the storage is inaccessible by both nodes. I was able to get both nodes connected to the storage backend by
    going to the iscsicpl and disconnecting all current connections to the iSCSI backend and adding them back. Doing the test again after bringing the storage back up resulted in the same behavior and this time redoing the iSCSI connections is not helping.
    I think the issue here is that the first node I restarted is unable to see any disks/volumes from the storage backend only after joining the cluster and doing a restart. Before joining the cluster I did reboots on both nodes and both were able to connect to
    the iSCSI backend without issue. It wasn't until after joining the cluster that node 1 became unable to access the storage backend after reboots. The validation report fails with "No disks were found on which to perform cluster validation tests. To correct
    this, review the following possible causes: ..." although none of the suggestions seem applicable and the validation report was successful right before the restart of the node.
    Does anyone have suggestions on how to further troubleshoot or resolve this issue?
    I am using Hyper-V Server 2012 R2 on both nodes and they are joined to the same domain.

    Hi,
    I couldn't find a similar issue. Please check that your storage is compatible with Server 2012 R2, update the network card drivers and firmware on both nodes, temporarily disable your AV
    software and firewall, and install the Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters update.
    The Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters
    http://support.microsoft.com/kb/2920151/en-us
    Hope this helps.

  • Windows 2008 failover clustering fails with Event ID 1205 1069 1558

    I have a two-node Windows 2008 and SQL 2008 cluster running in active/passive mode. I was restarting
    my nodes after applying Windows updates when I received the error messages below in Failover Cluster Manager. I get errors with Event IDs 1205, 1069, and 1558, and this happens every month. Could someone help me find the root cause of
    this issue and how to check whether it is a problem with the network or the quorum drive?

    First, take a look at the 1069 event. This should point out which resource is failing. 
    Then, open a command prompt and run the command: cluster log /g
    This will generate a cluster.log file that you can use to further troubleshoot this issue. The log file will be located in the C:\Windows\Cluster\Reports folder.
    The cluster.log will be based on GMT time, so you may need to translate this to the local time zone in order to find the events you are looking for. Look for ERR or WARN messages and these should hopefully give you more insight as to why the resource is
    failing.
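    Not part of the original answer, but if it helps with the GMT offset: below is a small Python sketch that pulls only the ERR and WARN lines out of cluster.log and shifts their timestamps to local time. The path is the default Reports folder mentioned above, and the timestamp pattern is an assumption taken from the cluster.log excerpts quoted elsewhere on this page; adjust both if yours differ.
    # Sketch: filter cluster.log to ERR/WARN entries and convert GMT timestamps to local time.
    import re
    from datetime import datetime, timezone
    from pathlib import Path

    LOG = Path(r"C:\Windows\Cluster\Reports\cluster.log")
    # Matches lines like: 000009ec.0000174c::2010/02/17-19:30:34.055 ERR  [RCM] ...
    ENTRY = re.compile(r"(\d{4}/\d{2}/\d{2}-\d{2}:\d{2}:\d{2}\.\d{3})\s+(ERR|WARN)\s+(.*)")

    for line in LOG.read_text(errors="replace").splitlines():
        m = ENTRY.search(line)
        if not m:
            continue
        gmt = datetime.strptime(m.group(1), "%Y/%m/%d-%H:%M:%S.%f").replace(tzinfo=timezone.utc)
        local = gmt.astimezone()  # machine's local time zone
        print(local.strftime("%Y-%m-%d %H:%M:%S"), m.group(2), m.group(3))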
    Hope this helps. 
    Visit my blog about multi-site clustering

  • SQL 2012 installation for Failover Cluster failed

    While installing SQL 2012 on a failover cluster (FOC), validation fails on the "Database Engine Configuration" page with the following error:
    The volume that contains SQL Server data directory g:\MSSQL11.MSSQLSERVER\MSSQL\DATA does not belong to the cluster group.
    I want to know how the SQL installation wizard queries the volumes configured with the failover cluster. Does it:
    - Enumerate "Physical Disk" resources in the FOC
    - Enumerate all storage-class resources in the FOC to get the volume list
    - Or depend on WMI (Win32_Volume) to get the volumes?
    The wizard correctly discovers volume g:\ in its FOC group on the "Cluster Resource Group" and "Cluster Disk Selection" pages, but gives the error on the Database Engine Configuration page.
    Any help in this would be appreciated.
    Thanks in advance
    Rakesh
    Rakesh Agrawal

    Can you please check whether there is any disk in the cluster that is not in an online state? Please run the script below, following these steps:
    1. Save the script as "Disk.vbs" and use CSCRIPT to run it.
    2. Syntax: CSCRIPT Disk.vbs <Windows Cluster Name>
    <Script>
    Option Explicit
    Public objArgs, objCluster

    Public Function Connect()
        ' Opens a global cluster object. Using Windows Script Host syntax,
        ' the cluster name or "" must be passed as the first argument.
        Set objArgs = WScript.Arguments
        If objArgs.Count = 0 Then
            WScript.Echo "Usage: CSCRIPT <script file name> <Windows Cluster Name>"
            WScript.Quit
        End If
        Set objCluster = CreateObject("MSCluster.Cluster")
        objCluster.Open objArgs(0)
    End Function

    Public Function Disconnect()
        ' Dereferences global objects. Used with Connect.
        Set objCluster = Nothing
        Set objArgs = Nothing
    End Function

    Connect

    ' Walk every resource; ClassInfo = 1 identifies storage-class (Physical Disk) resources.
    Dim objEnum
    For Each objEnum In objCluster.Resources
        If objEnum.ClassInfo = 1 Then
            WScript.Echo objEnum.Name
            Dim objDisk
            Dim objPartition
            On Error Resume Next
            Set objDisk = objEnum.Disk
            If Err.Number <> 0 Then
                WScript.Echo "Unable to retrieve the disk: " & Err
            Else
                For Each objPartition In objDisk.Partitions
                    WScript.Echo objPartition.DeviceName
                Next
            End If
        End If
    Next

    Disconnect
    </Script>

  • Solaris cluster 3.2 with zfs failover filesystem failed. How can I recover?

    Hi all,
    I have just installed and configured Solaris Cluster 3.2U3, using ZFS for both the root filesystem and the shared-storage filesystem.
    The cluster had been operating cleanly. Today, I can no longer see the zpool for the shared storage, although I can still see the storage volume in the output of the format command.
    As a result, all my resources have changed to offline status and my application has failed.
    How can I recover this cluster?
    Can anybody help me? :(

    Have you used a SUNW.HAStoragePlus (HASP) resource to control your zpool? If not, the zpool probably needs importing; that is what the HASP resource would do for you. You would also need a dependency from your application resource on the HASP resource to ensure that your application does not try to start up before the storage is available.
    Regards,
    Tim
    ---

  • ICloud 30+ hour outage- try failover not FAIL

    Apple support just told me they hope "by the WEEKEND" to get iCloud mail back up and running? I think they better call SunGuard and restore from back up! In the amount of time This outage has gone on Tulane payroll was already restored from Hurricane Katrina!
    Dear Apple forget your new iPhone spend the development money on back up/restore and support! Oh and word of advice don't back up to "the cloud" environment because that is a FAIL for your company!

        That's not the type of impression we want to make with our 4G LTE service dianefrommd! Where are you located? Are you experiencing any difficulties with other 4G devices in that area? Are there any others in the area with the same equipment experiencing similar difficulties? I'm pleased to hear that a resolution ticket was opened. What were the results? What was the indication given as to why the ticket was closed? We appreciate you reaching out to us via a different support platform. Please be aware that we have the same options for troubleshooting here as we do with our other support channels.
    Jonathan_VZW
    Please follow us on twitter @VZWSupport

  • MDS 9509 Failover Testing failed

    We tried to do a failover to test the HA environment for the upcoming upgrade to 2.0(1b), but with no success. We're currently running 1.2(1b) and want to upgrade non-disruptively to the new firmware.
    Can anybody tell me what went wrong? Thanks in advance.
    Switch CLI Output:
    ==================================
    SSW-97-001# show module
    Mod Ports Module-Type Model Status
    1 16 1/2 Gbps FC Module DS-X9016 ok
    2 16 1/2 Gbps FC Module DS-X9016 ok
    3 16 1/2 Gbps FC Module DS-X9016 ok
    4 16 1/2 Gbps FC Module DS-X9016 ok
    5 0 Supervisor/Fabric-1 DS-X9530-SF1-K9 active *
    6 0 Supervisor/Fabric-1 DS-X9530-SF1-K9 ha-standby
    Mod Sw Hw World-Wide-Name(s) (WWN)
    1 1.2(1b) 3.0 20:01:00:0d:ec:02:34:80 to 20:10:00:0d:ec:02:34:80
    2 1.2(1b) 5.0 20:41:00:0d:ec:02:34:80 to 20:50:00:0d:ec:02:34:80
    3 1.2(1b) 5.0 20:81:00:0d:ec:02:34:80 to 20:90:00:0d:ec:02:34:80
    4 1.2(1b) 5.0 20:c1:00:0d:ec:02:34:80 to 20:d0:00:0d:ec:02:34:80
    5 1.2(1b) 4.0 --
    6 1.2(1b) 4.0 --
    Mod MAC-Address(es) Serial-Num
    1 00-0c-30-da-09-6c to 00-0c-30-da-09-70 JAB0749066Y
    2 00-0c-30-da-2c-88 to 00-0c-30-da-2c-8c JAB075107B2
    3 00-0c-30-da-2c-60 to 00-0c-30-da-2c-64 JAB075107FW
    4 00-0c-30-da-78-38 to 00-0c-30-da-78-3c JAB08040ACU
    5 00-0c-30-d9-f5-60 to 00-0c-30-d9-f5-64 JAB0747055Q
    6 00-0c-30-d9-f4-d0 to 00-0c-30-d9-f4-d4 JAB0747059P
    * this terminal session
    SSW-97-001# conf t
    Enter configuration commands, one per line. End with CNTL/Z.
    SSW-97-001(config)# system switchover warm
    SSW-97-001(config)# exit
    SSW-97-001# show module
    Mod Ports Module-Type Model Status
    1 16 1/2 Gbps FC Module DS-X9016 ok
    2 16 1/2 Gbps FC Module DS-X9016 ok
    3 16 1/2 Gbps FC Module DS-X9016 ok
    4 16 1/2 Gbps FC Module DS-X9016 ok
    5 0 Supervisor/Fabric-1 DS-X9530-SF1-K9 active *
    6 0 Supervisor/Fabric-1 DS-X9530-SF1-K9 standby
    Mod Sw Hw World-Wide-Name(s) (WWN)
    1 1.2(1b) 3.0 20:01:00:0d:ec:02:34:80 to 20:10:00:0d:ec:02:34:80
    2 1.2(1b) 5.0 20:41:00:0d:ec:02:34:80 to 20:50:00:0d:ec:02:34:80
    3 1.2(1b) 5.0 20:81:00:0d:ec:02:34:80 to 20:90:00:0d:ec:02:34:80
    4 1.2(1b) 5.0 20:c1:00:0d:ec:02:34:80 to 20:d0:00:0d:ec:02:34:80
    5 1.2(1b) 4.0 --
    6 1.2(1b) 4.0 --
    Mod MAC-Address(es) Serial-Num
    1 00-0c-30-da-09-6c to 00-0c-30-da-09-70 JAB0749066Y
    2 00-0c-30-da-2c-88 to 00-0c-30-da-2c-8c JAB075107B2
    3 00-0c-30-da-2c-60 to 00-0c-30-da-2c-64 JAB075107FW
    4 00-0c-30-da-78-38 to 00-0c-30-da-78-3c JAB08040ACU
    5 00-0c-30-d9-f5-60 to 00-0c-30-d9-f5-64 JAB0747055Q
    6 00-0c-30-d9-f4-d0 to 00-0c-30-d9-f4-d4 JAB0747059P
    * this terminal session

    Some log messages:
    Feb 11 01:17:39 switch1.int.bon.nl : Feb 11 02:17:38 cet: %KERN-2-SYSTEM_MSG: mts: HA communication with standby timedout
    Feb 11 01:17:39 switch1.int.bon.nl : Feb 11 02:17:39 cet: %PLATFORM-5-MOD_REMOVE: Module 6 removed
    Feb 11 01:17:41 switch1.int.bon.nl : Feb 11 02:17:41 cet: %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configuring console from pts/0 (192.168.44.1)
    Feb 11 01:17:49 switch1.int.bon.nl : Feb 11 02:17:49 cet: %SYSMGR-3-OPERATIONAL_MODE_WARM: Operational redundancy mode set to warm (error-id 0x401E0017).
    Feb 11 01:17:53 switch1.int.bon.nl : Feb 11 02:17:53 cet: %MODULE-5-STANDBY_SUP_OK: Supervisor 6 is standby

  • Transparent Application Failover (TAF)  FAILED!

    Dear all,
    I have installed RAC 10gR2 on 64-bit Oracle Enterprise Linux, with iSCSI for the shared disks and ASM as the storage option. I followed the document hunter_rac10gr2_iscsi.
    Everything is OK; both database instances are up and running.
    The value from show parameter service is:
    bss.beaconhouse.edu.pk, orcl_taf, ora_devp
    The TNSNAMES.ORA file on a Windows-based client machine contains the following entry:
    ora_devp, ora_devp.world =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.63)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.64)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = bss.beaconhouse.edu.pk)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 180)
            (DELAY = 5)
          )
        )
      )
    But still, when I connect to the database from the client machine and, after verifying which instance I am connected to, stop the services of that instance on the RAC server, the client does not automatically shift to the second instance but gives an error instead.
    My Transparent Application Failover is not configured properly. Though, as I told you, both instances are up and running, and I can stop and start either one.
    Kindly help me to implement this very basic feature of RAC.
    Thanks, Imran

    It seems that Hunter's scripts configure only the "orcl_taf" service for TAF.
    Therefore, to test TAF, your client should be attempting to connect to the TAF
    service "orcl_taf", not "bss.beaconhouse.edu.pk".
    See under Step 24
    Database Services : For this test configuration, click Add, and enter orcl_taf as the "Service Name." Leave both instances set to Preferred and for the "TAF Policy" select "Basic".
    and
    "Create the orcl_taf Service
    During the creation of the Oracle clustered database, you added a service named orcl_taf that will be used to connect to the database with TAF enabled. During several of my installs, the service was added to the tnsnames.ora, but was never updated as a service for each Oracle instance.
    Use the following to verify the orcl_taf service was successfully added:
    SQL> show parameter service
    NAME TYPE VALUE
    service_names string orcl.idevelopment.info, orcl_taf
    If the only service defined was for orcl.idevelopment.info, then you will need to manually add the service to both instances:
    SQL> show parameter service
    NAME TYPE VALUE
    service_names string orcl.idevelopment.info
    SQL> alter system set service_names =
    2 'orcl.idevelopment.info, orcl_taf.idevelopment.info' scope=both; "
    and step 30
    TAF Demo
    From a Windows machine (or other non-RAC client machine), login to the clustered database using the orcl_taf service as the SYSTEM user:
    C:\> sqlplus system/manager@orcl_taf
    Message was edited by:
    Hemant K Chitale

  • Data Guard Broker: errors ORA-16816 and ORA-16817 with Fast Start Failover

    Hi,
    my environment is:
    OS: Windows XP Professional Edition SP2
    DB: Oracle EE 10.2.0.3
    Primary db: orcl
    Standby db: stby
    both databases are running on the same server.
    I have configured Data Guard as described in the DG Administration Guide.
    In the Data Guard Broker I switched to 'stby' successfully, so 'stby' was the primary db and 'orcl' the standby db.
    Then I switched back to 'orcl' as the primary db, and now I get some errors:
    DGMGRL> show database 'orcl' statusreport;
    STATUS REPORT
    INSTANCE_NAME SEVERITY ERROR_TEXT
    * WARNING ORA-16817: configuration for Fast Start of Failover is not synchronized.
    DGMGRL> show database 'stby' statusreport;
    STATUS REPORT
    INSTANCE_NAME SEVERITY ERROR_TEXT
    * ERROR ORA-16816: wrong databaserole
    * WARNING ORA-16817: configuration for Fast Start of Failover is not synchronized.
    DGMGRL> show configuration;
    Configuration
    Name: DG1
    Enabled: YES
    Protection Mode: MaxAvailability
    Fast-Start Failover: ENABLED
    Databases:
    orcl - Physical standby database
    - Fast-Start Failover target
    stby - Primary database
    Current status for DG1:
    Warning: ORA-16607: one or more databases failed.
    I have searched for solutions on Metalink and google, but with no success.
    Has anyone got this kind of problem?
    Any suggestions on how to resolve it?
    Thanks

    Hi DigerDBA
    I followed your advice and the error disappeared; thanks for your advice. But do I need to keep standby_file_management='AUTO' or 'MANUAL' in the primary and standby init files?
    I am asking this because when I use the observer, the failover failed and I got the following error:
    SQL Execution error=604, sql=[ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WAIT WITH SESSION SHUTDOWN]. See error stack below.
      ORA-00604: error occurred at recursive SQL level 1
      ORA-01275: Operation ADD LOGFILE is not allowed if standby file management is automatic.
    Complete Failover operation failed in the step when attempting to convert the database to be the new primary.
    Database Resource SetState Error (16771)
    01/07/2014 09:14:43
    Command FAILOVER TO epprod2 completed with error ORA-16771
    If possible, can you advise me, please?

  • Failover not working correctly on "redundancy-phy" (box to box style)

    Hi,
    I've got 2 CSS 11506 boxes configured using box to box failover.   Failing the master CSS box itself (powering down) causes the backup CSS to become master and all is well.
    However, when the switch that the CSS is connected to failed, the CSS didn't fail over, so I added redundancy-phy to both of the interfaces connected to the switch and failed the switch again. At this point a "show redundancy" shows the master becoming backup, but between 3 and 5 seconds later it re-assumes master status, and it keeps flipping every 60-90 seconds.
    I also tried a service with a type of redundancy-up and saw the same symptoms: it fails over but re-assumes master again within 3-5 seconds.
    Any help gratefully received!
    Cheers

    box-to-box is the least interesting redundancy mechanism.
    I definitely prefer vip/interface redundancy.
    More complex to configure but better control.
    Regarding your problem, is the switch connected to both CSS boxes? Do you have a direct link between the CSS boxes for the redundancy protocol? What version are you running?
    Gilles

  • Multi-context active-active etherchannel failover

    Hi All,
    Is there a way to monitor individual interfaces on a box doing multicontext etherchannel failover?
    I understand that on an individual box you can add monitor-interface for a physical interface, but in multiple-context mode there is only one interface (the logical etherchannel subinterface) pushed through from the system context to each of the other contexts. I've been looking around and can't work out how to get a context to fail over if only one link of the etherchannel fails.
    If the other box has more active etherchannel links, then that's the one I want active, but I can't see how to do it at the moment.
    Possibly missed something somewhere. Any ideas?
    Thanks,
    Gaz

    monitor-interface will only work on "named" interfaces.  So, what you are looking to do is not possible.
    The member interfaces on a port-channel will not have "nameif" associated with them.
    -Kureli

  • SC 3.0 file system failover for Oracle 8i/9i

    I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i two-node OPS Oracle databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
    Our SA team is now wanting to change this to a file system failover configuration instead. And I do not find any information from Oracle about it.
    The SA request states:
    "The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
    My question is, does anyone have experience with this kind of configuration with 8iOPS or 9iRAC? Are there any issues with the auto-moving of the archivelog space from the failed node over to the remaining node, in particular when the failure occurs during a transaction?
    Thanks for your help ...
    -j

    The problem with your setup of NFS cross mounting a filesystem (which could have been a recommended solution in SC 2.x for instance versus in SC 3.x where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
    Before this goes up in flames, let me speak from real world experience.
    Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space, or HA archive log space. If you use NFS to cross mount it (either hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or if the machine goes down unexpectedly due to a panic, etc). At that point, we had only two options : bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case any attempt at failover will fail because you're trying to mount an actual physical filesystem on a stale NFS mount on the surviving node.
    We tried to work this out using many different NFS options, we tried to use automount, we tried to use local_mountpoints then automount to the correct home (e.g. /filesystem_local would be the phys, /filesystem would be the NFS mount where the activity occurred) and anytime the node hosting the NFS share went down unexpectedly, you'd have a temporary hang due to the conditions listed above.
    If you're implementing SC 3.x, use hasp and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for archive logs or is there a sequence numbering issue if you run private archive logs on both sides - or is sequencing just an issue with redo logs? In either case, if you're using rman, you'd have to back up the redologs and archive log files on both nodes, if memory serves me correctly...

  • Server 2008 Cluster Random failover occuring on Fileserver Resource

    We have a two-node active/passive 2008 SQL cluster that also has a file share on it that randomly fails over. We get the following events:
    Events from Cluster Admin
    Event ID 1230
    cluster resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)' (resource type '', DLL 'clusres.dll') either crashed or deadlocked. The Resource Hosting Subsystem (RHS) process will now attempt to terminate, and the resource will be marked to run in a separate monitor.
    Event2
    EventID 1146
    the cluster resource host subsystem (RHS) stopped unexpectedly. An attempt will be made to restart it. This is usually due to a problem in a resource DLL. Please determine which resource DLL is causing the issue and report the problem to the resource vendor.
    Event 3
    EventID 1069
    Cluster resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)' in clustered service or application 'SQL Server (SQLPRODA)' failed.
    Event 4
    Event ID 1205
    The Cluster service failed to bring clustered service or application 'SQL Server (SQLPRODA)' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.
    We have updated the NIC drivers on each node, and the drivers and BIOS have been updated on the HBAs. We have updated the srv.sys and srv2.sys files, thinking it might be an SMB issue. TCP offloading is disabled on the NICs. We are running SP2 on both nodes and all Windows updates are current. In the cluster logs we are seeing what is listed below.
    HYSQL02
    ========
    00000cc8.00001364::2010/02/17-18:23:32.352 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:24:32.353 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:25:32.356 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:26:32.414 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_ImportFiles), status 2114. Tolerating...
    00000cc8.00001364::2010/02/17-18:29:32.369 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:32:32.431 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    00000cc8.00001364::2010/02/17-18:35:32.387 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:37:32.392 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:42:32.408 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:43:32.410 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:44:32.425 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:48:32.798 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:51:32.949 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:54:33.045 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-18:58:33.158 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    00000cc8.00001364::2010/02/17-19:01:33.192 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    00000cc8.00001364::2010/02/17-19:05:33.166 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:10:33.182 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:11:33.184 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:13:33.190 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:22:33.218 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:26:33.229 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:27:33.232 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:28:33.236 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:29:33.238 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_ImportFiles), status 64. Tolerating...
    00000cc8.00001364::2010/02/17-19:30:33.241 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_ReportImages), status 64. Tolerating...
    00000cc8.00000cd4::2010/02/17-19:30:34.000 ERR [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out for resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)'.
    00000cc8.00000cd4::2010/02/17-19:30:34.000 ERR [RHS] Resource FileServer-(MSCS3)(Cluster Disk 4- Database) handling deadlock. Cleaning current operation and terminaiting RHS process.
    000009ec.0000174c::2010/02/17-19:30:34.000 INFO [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'FileServer-(MSCS3)(Cluster Disk 4- Database)', gen(0) result 4.
    000009ec.0000174c::2010/02/17-19:30:34.000 INFO [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)' consecutive failure count 1.
    000009ec.0000174c::2010/02/17-19:30:34.002 ERR [RCM] rcm::RcmMonitor::RecoverProcess: Recovering monitor process 3272 / 0xcc8
    000009ec.0000174c::2010/02/17-19:30:34.004 INFO [RCM] Created monitor process 2248 / 0x8c8
    000008c8.000010c8::2010/02/17-19:30:34.019 INFO [RHS] Initializing.
    000009ec.0000174c::2010/02/17-19:30:34.030 INFO [RCM] rcm::RcmResource::ReattachToMonitorProcess: (FileServer-(MSCS3)(Cluster Disk 4- Database), Online)
    000009ec.0000174c::2010/02/17-19:30:34.030 INFO [RCM] TransitionToState(FileServer-(MSCS3)(Cluster Disk 4- Database)) Initializing-->OpenCallIssued.
    000009ec.0000174c::2010/02/17-19:30:34.030 INFO [RCM] rcm::RcmGroup::ProcessStateChange: (SQL Server (SQLPRODA), Online --> PartialOnline)
    000009ec.0000174c::2010/02/17-19:30:34.055 INFO [RCM] TransitionToState(FileServer-(MSCS3)(Cluster Disk 4- Database)) Online-->ProcessingFailure.
    000009ec.0000174c::2010/02/17-19:30:34.055 INFO [RCM] rcm::RcmGroup::ProcessStateChange: (SQL Server (SQLPRODA), PartialOnline --> Failed)
    000009ec.0000174c::2010/02/17-19:30:34.055 ERR [RCM] rcm::RcmResource::HandleFailure: (FileServer-(MSCS3)(Cluster Disk 4- Database))
    000009ec.0000174c::2010/02/17-19:30:34.055 INFO [RCM] resource FileServer-(MSCS3)(Cluster Disk 4- Database): failure count: 1, restartAction: 2.
    000009ec.0000174c::2010/02/17-19:30:34.055 INFO [RCM] Will restart resource in 500 milliseconds.
    000009ec.0000174c::2010/02/17-19:30:34.055 INFO [RCM] TransitionToState(FileServer-(MSCS3)(Cluster Disk 4- Database)) ProcessingFailure-->[Terminating to DelayRestartingResource].
    000009ec.0000174c::2010/02/17-19:30:34.055 INFO [RCM] rcm::RcmGroup::ProcessStateChange: (SQL Server (SQLPRODA), Failed --> Pending)
    000008c8.00001784::2010/02/17-19:30:34.112 INFO [RES] File Server : FileServerDoTerminate: Terminate called... !!!
    000009ec.0000126c::2010/02/17-19:30:34.119 INFO [RCM] TransitionToState(FileServer-(MSCS3)(Cluster Disk 4- Database)) [Terminating to DelayRestartingResource]-->DelayRestartingResource.
    000009ec.0000174c::2010/02/17-19:30:34.619 INFO [RCM] Delay-restarting FileServer-(MSCS3)(Cluster Disk 4- Database) and any waiting dependents.
    000009ec.0000174c::2010/02/17-19:30:34.619 INFO [RCM] TransitionToState(FileServer-(MSCS3)(Cluster Disk 4- Database)) DelayRestartingResource-->OnlineCallIssued.
    000009ec.0000126c::2010/02/17-19:30:34.620 INFO [RCM] HandleMonitorReply: ONLINERESOURCE for 'FileServer-(MSCS3)(Cluster Disk 4- Database)', gen(1) result 997.
    000009ec.0000126c::2010/02/17-19:30:34.620 INFO [RCM] TransitionToState(FileServer-(MSCS3)(Cluster Disk 4- Database)) OnlineCallIssued-->OnlinePending.
    000008c8.000016cc::2010/02/17-19:30:34.657 INFO [RES] File Server : Shares 'are being scoped to virtual name MSCS3
    HYSQL01
    =========
    000015ac.00001200::2010/02/17-21:42:54.976 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-21:47:51.082 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-21:51:51.094 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-21:56:51.056 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000015ac.00001200::2010/02/17-22:06:51.139 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-22:09:51.148 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000009e0.00001b08::2010/02/17-22:17:51.431 INFO [NM] Received request from client address 10.1.0.220.
    000015ac.00001200::2010/02/17-22:21:51.184 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-22:25:31.804 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-22:30:34.959 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000015ac.00001200::2010/02/17-22:31:36.518 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-22:34:41.036 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-22:39:48.514 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000015ac.00001200::2010/02/17-22:42:51.247 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000009e0.0000132c::2010/02/17-22:44:16.801 INFO [NM] Received request from client address 10.1.0.220.
    000015ac.00001200::2010/02/17-22:47:51.209 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000015ac.00001200::2010/02/17-22:49:51.215 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    000009e0.000015f4::2010/02/17-22:51:27.511 INFO [NM] Received request from client address 10.1.0.220.
    000015ac.00001200::2010/02/17-22:52:51.277 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-22:55:51.286 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-23:06:51.319 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-23:12:51.284 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000015ac.00001200::2010/02/17-23:13:51.340 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 2114. Tolerating...
    000015ac.00001200::2010/02/17-23:16:51.349 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    2nd Issues
    000018f0.0000137c::2010/02/16-18:03:23.988 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 2114. Tolerating...
    000018f0.0000137c::2010/02/16-18:07:23.947 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:11:23.959 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:13:23.965 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:14:24.021 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 2114. Tolerating...
    000018f0.0000137c::2010/02/16-18:20:23.986 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:23:23.996 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:26:24.005 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:27:24.007 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:28:24.063 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 2114. Tolerating...
    000018f0.0000137c::2010/02/16-18:37:24.038 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:38:24.094 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 2114. Tolerating...
    000018f0.0000137c::2010/02/16-18:41:24.102 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000018f0.0000137c::2010/02/16-18:44:24.059 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:50:24.129 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000018f0.0000137c::2010/02/16-18:54:24.089 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:55:24.091 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:56:24.095 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-18:57:24.151 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_ImportFiles), status 2114. Tolerating...
    000009e0.00000d2c::2010/02/16-19:13:04.903 INFO [NM] Received request from client address 10.1.0.220.
    000018f0.0000137c::2010/02/16-19:18:24.213 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    000018f0.0000137c::2010/02/16-19:22:24.172 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    000018f0.0000137c::2010/02/16-19:24:24.178 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    000018f0.000012dc::2010/02/16-19:25:25.000 ERR [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out for resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)'.
    000018f0.000012dc::2010/02/16-19:25:25.000 ERR [RHS] Resource FileServer-(MSCS3)(Cluster Disk 4- Database) handling deadlock. Cleaning current operation and terminaiting RHS process.
    000009e0.00000f48::2010/02/16-19:25:25.000 INFO [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'FileServer-(MSCS3)(Cluster Disk 4- Database)', gen(1) result 4.
    000009e0.00000f48::2010/02/16-19:25:25.000 INFO [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)' consecutive failure count 1.
    000009e0.00000f48::2010/02/16-19:25:25.002 ERR [RCM] rcm::RcmMonitor::RecoverProcess: Recovering monitor process 6384 / 0x18f0
    000009e0.00000f48::2010/02/16-19:25:25.003 INFO [RCM] Created monitor process 6020 / 0x1784
    00001784.00001b1c::2010/02/16-19:25:25.012 INFO [RHS] Initializing.
    000009e0.00000f48::2010/02/16-19:25:25.023 INFO [RCM] rcm::RcmResource::ReattachToMonitorProcess: (FileServer-(MSCS3)(Cluster Disk 4- Database), Online)
    000009e0.00000f48::2010/02/16-19:25:25.023 INFO [RCM] TransitionToState(FileServer-(MSCS3)(Cluster Disk 4- Database)) Initializing-->OpenCallIssued.
    000009e0.00000f48::2010/02/16-19:25:25.023 INFO [RCM] rcm::RcmGroup::ProcessStateChange: (SQL Server (SQLPRODA), Online --> PartialOnline)
    3)
    00000d80.00000388::2010/02/16-12:15:13.281 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    00000d80.00000388::2010/02/16-12:19:19.253 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-12:24:22.132 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-12:25:22.187 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 2114. Tolerating...
    00000d80.00000388::2010/02/16-12:29:22.146 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-12:42:22.185 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-12:50:22.209 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-12:51:22.212 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    00000d80.00000388::2010/02/16-12:53:22.218 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-12:54:22.274 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_ImportFiles), status 2114. Tolerating...
    00000d80.00000388::2010/02/16-13:01:31.308 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    00000d80.00000388::2010/02/16-13:10:22.322 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 2114. Tolerating...
    00000d80.00000388::2010/02/16-13:13:22.279 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-13:17:22.291 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-13:20:22.300 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00000d80.00000388::2010/02/16-13:22:22.305 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    00000d80.00000388::2010/02/16-13:24:22.311 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 64. Tolerating...
    00000d80.00000d8c::2010/02/16-13:24:23.000 ERR [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out for resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)'.
    00000d80.00000d8c::2010/02/16-13:24:23.000 ERR [RHS] Resource FileServer-(MSCS3)(Cluster Disk 4- Database) handling deadlock. Cleaning current operation and terminaiting RHS process.
    000009e0.000015dc::2010/02/16-13:24:23.000 INFO [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'FileServer-(MSCS3)(Cluster Disk 4- Database)', gen(0) result 4.
    000009e0.000015dc::2010/02/16-13:24:23.000 INFO [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)' consecutive failure count 1.
    000009e0.000015dc::2010/02/16-13:24:23.002 ERR [RCM] rcm::RcmMonitor::RecoverProcess: Recovering monitor process 3456 / 0xd80
    4)
    00001770.00001594::2010/02/09-16:01:06.362 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_ReportImages), status 2114. Tolerating...
    00000aa4.0000183c::2010/02/09-16:01:15.630 INFO [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x3ba33338
    00000aa4.0000183c::2010/02/09-16:01:19.036 INFO [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x3ba3333f
    00000aa4.0000183c::2010/02/09-16:01:19.040 INFO [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x3ba3333a
    00000aa4.0000183c::2010/02/09-16:01:19.044 INFO [RES] Physical Disk: HardDiskpGetDiskInfo: Disk is of type MBR, signature 0x3ba33339
    00001770.00001910::2010/02/09-16:05:06.311 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, HelpSystem), status 64. Tolerating...
    00001770.00001910::2010/02/09-16:06:06.314 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, ImportFiles), status 64. Tolerating...
    00001770.00001910::2010/02/09-16:07:06.317 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_HelpSystem), status 64. Tolerating...
    00001770.00001910::2010/02/09-16:08:06.320 WARN [RES] File Server : Failed in NetShareGetInfo(MSCS3, PreProd_ImportFiles), status 64. Tolerating...
    00001770.00000d14::2010/02/09-16:08:07.000 ERR [RHS] RhsCall::DeadlockMonitor: Call ISALIVE timed out for resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)'.
    00001770.00000d14::2010/02/09-16:08:07.000 ERR [RHS] Resource FileServer-(MSCS3)(Cluster Disk 4- Database) handling deadlock. Cleaning current operation and terminaiting RHS process.
    000009f0.00001324::2010/02/09-16:08:07.000 INFO [RCM] HandleMonitorReply: FAILURENOTIFICATION for 'FileServer-(MSCS3)(Cluster Disk 4- Database)', gen(4) result 4.
    000009f0.00001324::2010/02/09-16:08:07.000 INFO [RCM] rcm::RcmResource::HandleMonitorReply: Resource 'FileServer-(MSCS3)(Cluster Disk 4- Database)' consecutive failure count 1.
    000009f0.00001324::2010/02/09-16:08:07.002 ERR [RCM] rcm::RcmMonitor::RecoverProcess: Recovering monitor process 6000 / 0x1770
    000009f0.00001324::2010/02/09-16:08:07.003 INFO [RCM] Created monitor process 4748 / 0x128c
    Analysis
    We are getting errors 64 and 2114, and the file share is failing with a deadlock error.
    Status 64 = The specified network name is no longer available.
    Status 2114 = The Server service is not started.
    We set up Netmon and ran traces yesterday when the issue happened, and they did not show anything. The Server service does not seem to log any errors.
    We have also engaged EMC on the issue and MS has escalated the case, but we wanted to see if anyone else has experienced this issue or found any resolution. We have run out of options.
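    As an aside (not from the original post): the numeric statuses in these logs (64 and 2114 here, and the 1726 in the first post on this page) are ordinary Win32 error codes, so they can be translated programmatically. A minimal Python sketch, assuming it is run on one of the Windows nodes:
    # Sketch: translate Win32 status codes seen in the cluster log into message text.
    import ctypes

    for status in (64, 1726):
        print(status, "-", ctypes.FormatError(status))
    # 64   -> "The specified network name is no longer available."  (ERROR_NETNAME_DELETED)
    # 1726 -> "The remote procedure call failed."                   (RPC_S_CALL_FAILED)
    # 2114 (NERR_ServerNotStarted, "The Server service is not started.") is a LAN Manager
    # code whose text lives in netmsg.dll, so FormatError alone will not resolve it.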

    Here you go!
    KB950811
    Why not this one:
    http://support.microsoft.com/kb/2231728
    /* Server Support Specialist */
