Hyper-V cluster iSCSI not communicating on one host - Catch 22

I'm in a catch-22. I have a 3-host Hyper-V 2012 cluster connected to a Dell Compellent SAN through Force10 switches for iSCSI. Host3 simply stopped communicating via iSCSI: I can't ping Host1 or Host2 from Host3, and vice versa. I could take Host3 down for maintenance and troubleshoot any hardware issues; however, Host3 is stuck as the CSV owner. If I go into Failover Cluster Manager under Storage - Disks, Host3 is the owner node for all my CSVs, and if I shut the server down all my VMs will stop. So I'm stuck: I can't shut the problematic server down to troubleshoot, and I can't change the CSV owner to another host because iSCSI is not communicating. Any thoughts, ideas or troubleshooting steps are GREATLY appreciated.

Why would the VMs stop if you shut down Host 3? Ownership of a CSV has no bearing on whether or not other systems have access to the CSV - that's the whole point of CSV.  Each host has its own separate communication path to the CSV and to the files
(virtual hard disks) that it needs.
If Host 3 is still shown as part of the cluster, there must be some communication going on.  Otherwise the other two nodes would have ejected it on their own.
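If the other nodes can still see the storage, CSV ownership can also be moved from a healthy node rather than from Host3 itself; a minimal PowerShell sketch, assuming the FailoverClusters module and that Host1 is a healthy node (the names are placeholders):

Import-Module FailoverClusters
# List the CSVs and their current owner nodes
Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State
# Move every CSV to Host1 before taking Host3 down (run from any node with cluster access)
Get-ClusterSharedVolume | Move-ClusterSharedVolume -Node Host1

If the move fails because Host1 has lost its own path to the disks, that points back at the switches or SAN rather than at Host3.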
. : | : . : | : . tim

Similar Messages

  • Hyper-v cluster with core switch downtime... what to do?

    Is there a way to essentially "pause" the hyper-v cluster and keep things running but do NOT attempt to failover anything for any reason?
    We have one ProCurve 5412zl core switch with two c7000 enclosures. In each c7000 enclosure there are two switches that connect all the blade servers within the enclosure. Those two switches are interconnected internally, so they can communicate within the enclosure.
    So if the core switch goes down, the Hyper-V servers in the same c7000 enclosure can still communicate, but they will be separated from the ones in the other enclosure.
    We have 4 Hyper-V servers in one enclosure and 3 in another. I'm wondering what will happen if I disconnect the core switch (I need to reboot it).
    How can I avoid having to shut down everything for this and just tell the Hyper-V cluster not to do anything when the network is lost?

    Hi Quadrantids,
    " to essentially "pause" the hyper-v cluster and keep things running but
    do NOT attempt to failover anything for any reason"
    Based on my understanding, you need to keep the cluster running within the same c7000 enclosure. In other words, before you cut the connection between the c7000 enclosures, you can migrate the VMs into the same enclosure so they keep running (I assume the storage will not be affected by the restart).
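    That migration, plus relaxing the cluster's heartbeat tolerance for the reboot window, can be scripted; a minimal sketch, assuming the FailoverClusters module and a target node name of HV01 (both are placeholders), and that your cluster version allows raising the thresholds this far:
    Import-Module FailoverClusters
    # Live migrate every clustered VM onto nodes in one enclosure before the switch reboot
    Get-ClusterGroup | Where-Object { $_.GroupType -eq 'VirtualMachine' } |
        Move-ClusterVirtualMachineRole -Node HV01 -MigrationType Live
    # Temporarily tolerate longer heartbeat loss while the core switch restarts
    (Get-Cluster).SameSubnetDelay = 2000     # milliseconds between heartbeats
    (Get-Cluster).SameSubnetThreshold = 20   # missed heartbeats before a node is declared down
    Remember to restore the defaults afterwards, since high thresholds slow down detection of real failures.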
    Best Regards
    Elton Ji

  • My MacBook is not communicating with my desktop to enable printing from my MacBook. I need to find out how to make that happen. I have a static IP address on my desktop but don't know what to do on the MacBook.

    My MacBook is not connecting to the host computer for the printer. The host computer has a static IP address which is different from the IP address that the laptop is looking for. How do I change the laptop so that it looks for the host computer's correct IP? The laptop is connected to the router. I know this because I can get on the internet with the laptop.

    The host computer is Dell and the operating system is Windows 7.  The printer is an Epson, and it is wired to the host computer with a USB cable. It does have host printing turned on.
    The three have been working harmoniously for a long time.  The router had to be reset; and after that, the MacBook documents would no longer print.
    We created a new static IP address on the Dell, but the MacBook is not recognizing it. We have researched the internet trying to figure out how to make the MacBook locate and recognize the correct IP address from the Dell so they can communicate with each other. The laptop is not communicating with the host computer because it is looking for the wrong IP. The host IP is 192.168.1.245 and the laptop is looking for 192.168.15.237.
    I hope everything is clear.  The person who originally set it up is no longer available to help me.  Thank you for any help you may give me.

  • Cluster VMs not accessible from other node hyper-v 2012 r2

    I am implementing a 3-node cluster in a Windows Server 2012 R2 Hyper-V environment. The scenario is as below:
    3 HP servers, each with 4 NICs. I made a team using the 4 NICs, and the Cisco switch ports are configured as EtherChannel trunk ports.
    The cluster is up and running across all nodes, but suddenly I found that I can't access a VM when it is not on the same node from which I am running Cluster Manager. I can access that VM only if I log on, via Cluster Manager, to the node that owns it.
    Please help.
    Thanks
    Shipon 
     

    Hi Shipon,
    Your network configuration does not meet the cluster network requirements:
    Network adapters and cable (for network communication): The network hardware, like other components in the failover cluster solution, must be marked as "Certified for Windows
    Server 2008 R2." If you use iSCSI, your network adapters should be dedicated to either network communication or iSCSI, not both.
     In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this. You
    can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single
    points of failure.
    The related KB:
    Network Recommendations for a Hyper-V Cluster in Windows Server 2012
    http://technet.microsoft.com/en-us/library/dn550728.aspx
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
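    If the guidance above means breaking the single 4-NIC team into dedicated networks, the built-in LBFO cmdlets can do it; a minimal sketch (the team and NIC names are placeholders, and removing a team is disruptive, so do it from the console):
    # Remove the existing all-in-one team
    Remove-NetLbfoTeam -Name "Team1" -Confirm:$false
    # Team two NICs for management/VM traffic; leave the other two un-teamed for iSCSI with MPIO
    New-NetLbfoTeam -Name "Mgmt" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic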
    I’m glad to be of help to you!

  • Unplanned failover in a Hyper-V cluster vs unplanned failover in an ordinary (not Hyper-V) cluster

    Hello!
    Please excuse me if you think my question is silly, but before deploying something  in a production environment I'd like to dot the i's  and cross the t's.
    1) Suppose there's a two node cluster with a Hyper-V role that hosts a number of highly available VM.
    If both cluster nodes are up and running, an administrator can initiate a planned failover (live migration) which will transfer all VMs, including their system state, to the other node without downtime.
    In case a cluster node goes down unexpectedly, an unplanned failover fires that transfers all VMs to the other node WITHOUT their system state. As far as I understand, this can lead to some data loss.
    http://itknowledgeexchange.techtarget.com/itanswers/how-does-live-migration-differ-from-vm-failover/
    If, for example, I have an Exchange VM and it is transferred to the second node during an unplanned failover in the Hyper-V cluster, I will lose some data by design.
    2) Suppose there's a two node cluster with the Exchange clustered installation: in case one node crashes the other takes over without any data loss.
    Conclusion: it's more disaster-resilient to implement a server role in an "ordinary" cluster than to deploy it inside a VM in a Hyper-V cluster.
    Is that correct?
    Thank you in advance,
    Michael

    "And if this "anything in memory and any active threads" is so large that can take up to 13m15s to transfer during Live Migration it will be lost."
    First, that 13m15s required to live migrate all your VMs is not the time it takes to move individual VMs. By default, Hyper-V is set to move a maximum of 2 VMs at a time. You can change that, but it would be foolish to increase that value if all you have is a single 1GbE network. The other VMs will be queued.
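    For reference, that per-host concurrency cap is a host setting; a minimal sketch of inspecting and changing it with the Hyper-V module (the value 4 is only an example):
    # Show the current live migration concurrency limit on this host
    Get-VMHost | Select-Object Name, MaximumVirtualMachineMigrations
    # Raise it only if the migration network can carry parallel moves
    Set-VMHost -MaximumVirtualMachineMigrations 4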
    Secondly, you are getting that amount of time confused with what is actually happening.  Think of a single VM.  Let's even assume that it has so much memory and is so active that it takes 13 minutes to live migrate.  (Highly unlikely, even
    on a 1 GE NIC).  During that 13 minutes the VM takes to live migrate, the VM continues to perform normally.  In fact, if something happens in the middle of the LM, causing the LM to fail, absolutely nothing is lost because the VM is still operating
    on the original host.
    Now let's look at what happens when the host on which that VM is running fails, causing a restart of the VM on another node of the cluster. The VM is doing its work, reading and writing to its data files. At the instant the host fails, the VM may have some unwritten data buffers in memory. Since the host fails, the VM crashes, losing whatever it had in memory at that instant. It is not going to lose 13 minutes of data. In fact, if you have an application processing data at this volume, you most likely have something like SQL running. When the VM goes down, the cluster will automatically restart the VM on another node, and SQL will automatically replay transaction logs to recover to the best of its ability.
    Is there a possibility of data loss?  Yes, a very tiny possibility for a very small amount.  Is there a possibility of data corruption?  Yes, a very, very tiny possibility, just like with a physical machine.
    The amount of time required for a live migration is meaningless when it comes to potential data loss of that VM.  The potential data loss is pretty much the same as if you had it running on a physical machine, the machine crashed, and then restarted.
    "clustered applicationsDO NOT STOP working during unplanned failover (so there is no recovery time), "
    Not exactly true. Let's use SQL as an example again. When SQL is installed in a cluster, you install at minimum one instance, but you can have multiple instances. When the node on which the active instance is running fails, there is a brief pause in service while the instance starts on the other node. Depending on outstanding transactions, last write, etc., it will take a little bit of time for the SQL instance to be ready to start handling requests on the new node.
    Yes, there is a definite difference between restarting the entire VM (just the VM is clustered) and clustering the application.  Recovery time is about the biggest issue.  As you have noted, restarting a VM, i.e. rebooting it, takes time. 
    And because it takes a longer period of time, there is a good chance that clients will be unable to access the resource for a period of time, maybe as much as 1-5 minutes depending upon a lot of different factors, whereas with a clustered application, the
    clients may be unable to access for up to a minute or so.
    However, the amount of data potentially lost is quite dependent upon the application.  SQL is designed to recover nicely in either environment, and it is likely not to lose any data.  Sequential writing applications will be dependent upon things
    like disk cache held in memory - large caches means higher probability of losing data.  No disk cache means there is not likely to be any loss of data.
    .:|:.:|:. tim

  • Guest VM failover cluster on Hyper-V 2012 Cluster does not work across hosts

    Hi all,
    We are evaluating Hyper-V on Windows Server 2012, and I have bumped in to this problem:
    I have an Exchange 2010 SP2 DAG installed on 2 VMs in our Hyper-V cluster (a DAG forms a failover cluster, but does not use any shared storage). As long as my VMs are on the same host, all is good. However, if I live migrate or shutdown-->move-->start one of the guest nodes on another physical host, it loses connectivity with the cluster. The "regular" network is fine across hosts, and I can ping/browse one guest node from the other. I have tried looking for guidance for Exchange on Hyper-V clusters but have not been able to find anything.
    According to the Exchange documentation this configuration is supported, so I guess I'm asking for any tips and pointers on where to troubleshoot this.
    regards,
    Trond

    Hi All,
    so some updates...
    We have a ticket logged with Microsoft, more of a check box exercise to reassure the business we're doing the needful.  Anyway, they had us....
    Apply hotfix http://support.microsoft.com/kb/2789968?wa=wsignin1.0  to both guest DAG nodes, which seems pretty random, but they wanted to update the TCP/IP stack...
    There was no change in the error: move a guest to another Hyper-V node and the failover cluster, well, fails, with the following event IDs on the node that fails...
    1564 -File share witness resource 'xxxx)' failed to arbitrate for the file share 'xxx'. Please ensure that file share '\xxx' exists and is accessible by the cluster..
    1069 - Cluster resource 'File Share Witness (xxxxx)' in clustered service or application 'Cluster Group' failed
    1573 - Node xxxx  failed to form a cluster. This was because the witness was not accessible. Please ensure that the witness resource is online and available
    The other node stays up, and the Exchange DBs mounted on that node stay up; the ones mounted on the node that fails fail over to the remaining node...
    So we then:
    Removed 3 of the 4 NICs in one of the NIC teams, leaving a single NIC in the team (no change)
    Removed one NIC from the LACP group on each Hyper-V host
    Created a new virtual switch using this simple trunk-port NIC on each Hyper-V host (see the sketch below)
    Moved the DAG nodes to this vSwitch
    Failover cluster works as expected, with guest VMs running on separate Hyper-V hosts, when on this vSwitch with a single NIC
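    For the record, the workaround switch can be built like this; a minimal sketch, assuming the spare trunk-port NIC is called "Ethernet 2", with invented names for the switch and guest:
    # External vSwitch on the single NIC; keep the management OS off it
    New-VMSwitch -Name "DAG-vSwitch" -NetAdapterName "Ethernet 2" -AllowManagementOS $false
    # Re-attach each DAG guest's NIC to the new switch
    Get-VMNetworkAdapter -VMName "DAG-Node1" | Connect-VMNetworkAdapter -SwitchName "DAG-vSwitch"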
    So Microsoft were keen to close the call, as their scope was, I kid you not, to "consider this issue resolved once we are able to find the cause of the above mentioned issue", which we have now done, as in, teaming is the cause... argh.
    But after talking, they are now escalating internally.
    The other thing we are doing is building Server 2010 guests and installing Exchange 2010 SP3, to get an Exchange 2010 DAG running on Server 2010, to see if that combination has the same issue, as people indicate it perhaps does not.
    Cheers
    Ben
    Name                   : Virtual Machine Network 1
    Members                : {Ethernet, Ethernet 9, Ethernet 7, Ethernet 12}
    TeamNics               : Virtual Machine Network 1
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    Name                   : Parent Partition
    Members                : {Ethernet 8, Ethernet 6}
    TeamNics               : Parent Partition
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Heartbeat
    Members                : {Ethernet 3, Ethernet 11}
    TeamNics               : Heartbeat
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Virtual Machine Network 2
    Members                : {Ethernet 5, Ethernet 10, Ethernet 4}
    TeamNics               : Virtual Machine Network 2
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    A Cloud Mechanic.

  • Hyper-V - 'Failed to start the virtual machine 'test' because one of the Hyper-V components is not running.'

    Hi all,
    Having a bit of a nightmare with Hyper-V on my Windows 8 Pro laptop - Whenever I create a new VM and try to start it I receive the following error;
    'Failed to start the virtual machine 'test' because one of the Hyper-V components is not running.'
    I sometimes also receive an error saying something along the lines of 'Unable to change virtual machine state'
    I have done a lot of searching and seen two common answers. The first is to try removing the Hyper-V role and re-adding it; I have tried this several times to no avail (Intel VT and all virtualisation capabilities are enabled in the BIOS). The second fix some people mention relates to editing the VMX configuration file and adding the line hypervisor.cpuid.v0 = "FALSE" - but I thought VMX files were only present in VMware virtual machines...
    Any help would be greatly appreciated.
    Many thanks in advance.
    James

    The hypervisor event log message is generated only at boot - so this would be expected.  Also vmbus should not be running on the host (I was looking at a VM yesterday)...  This error message is generated only when the vid is determined not to be
    running or cannot communicate with the hypervisor.
    I have seen this caused by anti-virus software in the past, or by driver verifier being configured. Since it looks like you checked driver verifier already, do you have anti-virus software installed? If so, have you followed the best practices and exempted the Hyper-V services?
    http://technet.microsoft.com/en-us/library/dd283088(WS.10).aspx
    To resolve this problem, configure the real-time scanning component within your antivirus software to exclude the following directories and files:
    Default virtual machine configuration directory (C:\ProgramData\Microsoft\Windows\Hyper-V)
    Custom virtual machine configuration directories
    Default virtual hard disk drive directory (C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks)
    Custom virtual hard disk drive directories
    Custom replication data directories, if you are using Hyper-V Replica
    Snapshot directories
    Vmms.exe (Note: This file may have to be configured as a process exclusion within the antivirus software.)
    Vmwp.exe (Note: This file may have to be configured as a process exclusion within the antivirus software.)
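    If the AV product is Windows Defender, those exclusions can be scripted; a minimal sketch, assuming the Defender cmdlets are available (they ship with Windows 8.1 / Server 2012 R2 and later; third-party products have their own exclusion settings):
    # Exclude the default Hyper-V configuration and VHD directories from real-time scanning
    Add-MpPreference -ExclusionPath "C:\ProgramData\Microsoft\Windows\Hyper-V"
    Add-MpPreference -ExclusionPath "C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks"
    # Exclude the Hyper-V management and worker processes
    Add-MpPreference -ExclusionProcess "Vmms.exe","Vmwp.exe"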
    -Taylor Brown - Program Manager, Hyper-V - http://blogs.msdn.com/taylorb

  • VM Machines not communicating with each other on Hyper-V 2012

    In Hyper-V 2012 on Server 2012 I have created two VMs: Server 2008 64-bit and Server 2012 64-bit.
    The problem is that the two VMs are not communicating with each other.
    Regards
    Ganesh Parte

    Hello,
    This seems to belong to the Hyper-V networking configuration settings. Please ask this in http://social.technet.microsoft.com/Forums/en-US/home?forum=winserverhyperv
    and also describe how you have the network settings configured in the Hyper-V MMC.
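    Before reposting, it is worth confirming that both VMs sit on the same virtual switch and report addresses in the same subnet; a minimal check, with the VM names as placeholders:
    # Which vSwitch is each VM connected to, and what IPs do the guests report?
    Get-VMNetworkAdapter -VMName "SRV2008","SRV2012" |
        Select-Object VMName, SwitchName, MacAddress, IPAddresses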
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Add iSCSI LUN to Multiple Hyper-V Cluster Hosts?

    Is there a way to connect multiple Hyper-V hosts to a CSV LUN without manually logging into each and opening the iSCSI Initiator GUI?

    Is there a way to connect multiple Hyper-V hosts to a CSV LUN without manually logging into each and opening the iSCSI Initiator GUI?
    Here's a good step-by-step guide on how to do everything you want using just PowerShell. Please see:
    Configuring iSCSI storage for a Hyper-V Cluster
    http://www.hypervrockstar.com/qs-buildingahypervcluster_part3/
    This part should be of particular interest to you. See:
    Connect Nodes to iSCSI Target
    Once the target is created and configured, we need to attach the iSCSI initiator in each node to the storage. We will use MPIO to ensure the best performance and availability of storage. When we enable the MS DSM to claim all iSCSI LUNs we must reboot the node for the setting to take effect. MPIO is utilized by creating a persistent connection to the target for each data NIC on the target server and from all iSCSI initiator NICs on our Hyper-V server. Because our Hyper-V servers are using converged networking, we only have 1 iSCSI NIC. In our example resiliency is provided by the LBFO team we created in the last video.
    PowerShell Commands:
    Set-Service -Name msiscsi -StartupType Automatic
    Start-Service msiscsi
    # reboot required after claim
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
    New-IscsiTargetPortal -TargetPortalAddress 192.168.1.107
    $target = Get-IscsiTarget -NodeAddress *HyperVCluster*
    $target | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.1.21 -TargetPortalAddress 10.0.1.10
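    To answer the original question (no GUI, no logging into each host), the same commands can be pushed to every node over PowerShell remoting; a minimal sketch, with the host names as placeholders:
    $nodes = "HV01","HV02","HV03"
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        Set-Service -Name msiscsi -StartupType Automatic
        Start-Service msiscsi
        New-IscsiTargetPortal -TargetPortalAddress 192.168.1.107
        Get-IscsiTarget -NodeAddress *HyperVCluster* | Connect-IscsiTarget -IsPersistent $true
    }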
    You'll find a reference to "Connect-IscsiTarget" PowerShell cmdlet here:
    Connect-IscsiTarget
    https://technet.microsoft.com/en-us/library/hh826098.aspx
    A set of samples on how to control the MSFT iSCSI initiator with PowerShell can be found here:
    Managing iSCSI Initiator with PowerShell
    http://blogs.msdn.com/b/san/archive/2012/07/31/managing-iscsi-initiator-connections-with-windows-powershell-on-windows-server-2012.aspx
    Good luck and happy clustering :)

  • Hyper-V cluster Backup causes virtual machine reboots for common Cluster Shared Volumes members.

    I am having a problem where my VMs reboot while other VMs that share the same CSV are being backed up. I have provided all the information that I have gathered to this point below. If I have missed anything, please let me know.
    My HyperV Cluster configuration:
    5 Node Cluster running 2008R2 Core DataCenter w/SP1. All updates as released by WSUS that will install on a Core installation
    Each Node has 8 NICs configured as follows:
     NIC1 - Management/Campus access (26.x VLAN)
     NIC2 - iSCSI dedicated (22.x VLAN)
     NIC3 - Live Migration (28.x VLAN)
     NIC4 - Heartbeat (20.x VLAN)
     NIC5 - VSwitch (26.x VLAN)
     NIC6 - VSwitch (18.x VLAN)
     NIC7 - VSwitch (27.x VLAN)
     NIC8 - VSwitch (22.x VLAN)
    The following additional hotfixes were installed per MS guidance (either during the build or when troubleshooting a stability issue in Jan 2013):
     KB2531907 - Was installed during original building of cluster
     KB2705759 - Installed during troubleshooting in early Jan2013
     KB2684681 - Installed during troubleshooting in early Jan2013
     KB2685891 - Installed during troubleshooting in early Jan2013
     KB2639032 - Installed during troubleshooting in early Jan2013
    Original cluster build was two hosts with quorum drive. Initial two hosts were HST1 and HST5
    Next host added was HST3, then HST6 and finally HST2.
    NOTE: HST4 hardware was used in different project and HST6 will eventually become HST4
    Validation of cluster comes with warning for following things:
     Updates inconsistent across hosts
      I have tried to manually install "missing" updates and they were not applicable
      Most likely cause is different build times for each machine in cluster
       HST1 and HST5 are both the same level because they were built at same time
        HST3 was not rebuilt from scratch due to time constraints; it actually goes back to pre-SP1 and has a larger list of updates that the others are lacking, hence the inconsistency
       HST6 was built from scratch but has more updates missing than 1 or 5 (10 missing instead of 7)
       HST2 was most recently built and it has the most missing updates (15)
     Storage - List Potential Cluster Disks
      It says there are Persistent Reservations on all 14 of my CSV volumes and thinks they are from another cluster.
      They are removed from the validation set for this reason. These iSCSI volumes/disks were all created new for
      this cluster and have never been a part of any other cluster.
     When I run the Cluster Validation wizard, I get a slew of Event ID 5120 from FailoverClustering. Wording of error:
      Cluster Shared Volume 'Volume12' ('Cluster Disk 13') is no longer available on this node because of
      'STATUS_MEDIA_WRITE_PROTECTED(c00000a2)'. All I/O will temporarily be queued until a path to the
      volume is reestablished.
     Under Storage and Cluster Shared Volumes in Failover Cluster Manager, all disks show online and there is no negative effect from the errors.
    Cluster Shared Volumes
     We have 14 CSVs that are all iSCSI attached to all 5 hosts. They are housed on an HP P4500G2 (LeftHand) SAN.
     I have limited the number of VMs to no more than 7 per CSV as per best practices documentation from HP/Lefthand
     VMs in each CSV are spread out among all 5 hosts (as you would expect)
    Backup software we use is BackupChain from BackupChain.com.
    Problem we are having:
     When a backup kicks off for a VM, all VMs on the same CSV reboot without warning. This normally happens within seconds of the backup starting.
    What we have done to troubleshoot this:
     We have tried rebalancing our backups
      Originally, I had backup jobs scheduled to kick off on Friday or Saturday evening after 9pm
      2 or 3 hosts would be backing up VMs (Serially; one VM per host at a time) each night.
      I changed my backup schedule so that of my 90 VMs, only one per CSV is backing up at the same time
       I mapped out my Hosts and CSVs and scheduled my backups to run on week nights where each night, there
       is only one VM backed up per CSV. All VMs can be backed up over 5 nights (there are some VMs that don't
       get backed up). I also staggered the start times for each Host so that only one Host would be starting
       in the same timeframe. There was some overlap for Hosts that had backups that ran longer than 1 hour.
      Testing this new schedule did not fix my problem; it only made it clearer. As each backup timeframe
      started, whichever CSV the first VM to start was on would have all of its VMs reboot and come back up.
     I then thought maybe I was still overloading the network, so I decided to disable all of the scheduled
     backups and run them manually. Kicking off a backup of a single VM will, in most cases, cause the reboot
     of common CSV members.
     Ok, maybe there is something wrong with my backup software.
      Downloaded a Demo of Veeam and installed it onto my cluster.
      Did a test backup of one VM and I had no problems.
      Did a test backup of a second VM and I had the same problem. All VMs on same CSV rebooted
     OK, it is not my backup software. Apparently it is VSS. I have looked through various websites. The best
     troubleshooting site I have found for VSS in one place is on BackupChain.com (http://backupchain.com/hyper-v-backup/Troubleshooting.html).
     I have tested almost every process on their list and I will lay out the results below:
      1. I have rebooted HST6 and problems still persist
      2. When I run VSSADMIN delete shadows /all, I have no shadows to delete on any of my 5 nodes
       When I run VSSADMIN list writers, I have no error messages on any writers on any node...
      3. When I check the listed registry key, I only have the build in MS VSS writer listed (I am using software VSS)
      4. When I run VSSADMIN Resize ShadowStorge command, there is no shadow storage on any node
      5. I have completed the registration and service cycling on HST6 as laid out there, and most of the steps
       "error"; only a few of the DLLs actually register.
      6. HyperV Integration Services were reconciled when I worked with MS in early January and I have no indication of
       further issue here.
      7. I did not complete the step to delete the Subscriptions because, again, I have no error messages when I list writers
      8. I removed the Veeam software that I had installed to test (it hadn't added any VSS Writer anyway though)
      9. I can't realistically uninstall my HyperV and test VSS
      10. Already have latest SPs and Updates
      11. This is part of step 5 so I already did this. This seems to be a rehash of various other strategies.
     I have used the VSS Troubleshooter that is part of BackupChain (Ctrl-T) and I get the following error:
      ERROR: Selected writer 'Microsoft Hyper-V VSS Writer' is in failed state!
      - Status: 8 (VSS_WS_FAILED_AT_PREPARE_SNAPSHOT)
      - Writer Failure code: 0x800423f0 (<Unknown error code>)
      - Writer ID: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      - Instance ID: {d55b6934-1c8d-46ab-a43f-4f997f18dc71}
      VSS snapshot creation failed with result: 8000FFFF
    VSS errors in event viewer. Below are representative errors I have received from various Nodes of my cluster:
    I have various of the below spread out over all hosts except for HST6
    Source: VolSnap, Event ID 10, The shadow copy of volume took too long to install
    Source: VolSnap, Event ID 16, The shadow copies of volume x were aborted because volume y, which contains shadow copy storage for this shadow copy, was force dismounted.
    Source: VolSnap, Event ID 27, The shadow copies of volume x were aborted during detection because a critical control file could not be opened.
    I only have one instance of each of these and both of the below are from HST3
    Source: VSS, Event ID 12293, Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details RevertToSnapshot [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error].
    Source: VSS, Event ID 8193, Volume Shadow Copy Service error: Unexpected error calling routine GetOverlappedResult.  hr = 0x80070057, The parameter is incorrect.
    So, basically, everything I have tried has resulted in no success towards solving this problem.
    I would appreciate any assistance that can be provided.
    Thanks,
    Charles J. Palmer
    Wright Flood

    Tim,
    Thanks for the reply. I ran the first two commands and got this:
    Name                               Role  Metric
    Cluster Network 1                     3   10000
    Cluster Network 2 - HeartBeat         1    1300
    Cluster Network 3 - iSCSI             0   10100
    Cluster Network 4 - LiveMigration     1    1200
    When you look at the properties of each network, this is how I have it configured:
    Cluster Network 1 - Allow cluster network communications on this network and Allow clients to connect through this network (26.x subnet)
    Cluster Network 2 - Allow cluster network communications on this network. New network added while working with Microsoft support last month. (28.x subnet)
    Cluster Network 3 - Do not allow cluster network communications on this network. (22.x subnet)
    Cluster Network 4 - Allow cluster network communications on this network. Existing but not configured to be used by VMs for Live Migration until MS corrected. (20.x subnet)
    Should I modify my metrics further, or are the current values sufficient?
    I worked with an MS support rep because my cluster (once I added the 5th host) stopped being able to live migrate VMs, and I had VMs host-jumping on startup. It was a mess for a couple of days. They had me add the Heartbeat network as part of the solution to my problem. There doesn't seem to be anywhere to configure a network specifically for CSV, so I would assume (based on my metrics above) it would use Cluster Network 4 and then Cluster Network 2 for CSV communications, and would fall back to Cluster Network 1 if both 2 and 4 were down/inaccessible.
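    For what it's worth, on 2008 R2 CSV traffic does prefer the cluster-enabled network with the lowest metric, and the metric can be pinned by hand; a minimal sketch (the value 900 is just an example below your current minimum):
    Import-Module FailoverClusters
    # Show current metrics; CSV uses the lowest-metric network that allows cluster communication
    Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric
    # Pin the CSV network explicitly (this disables AutoMetric for that network)
    (Get-ClusterNetwork "Cluster Network 4 - LiveMigration").Metric = 900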
    As to the iSCSI getting a second NIC, I would love to, but management wants separation of our VMs by subnet and role, hence the 4 vSwitch NICs. I would have to look at adding an additional quad-port NIC to my servers, and I would have to use half-height cards for 2 of my 5 servers for that to work.
    But, on that note, it doesn't appear to actually be a bandwidth issue. I can run a backup for a single VM and see nothing on the network card (it apparently causes the reboots before any real data has even started to pass), and still the problem occurs.
    As to BackupChain, I have been working with the vendor and they are telling me the issue is with VSS. They also say they support CSVs; if you go to this page (http://backupchain.com/Hyper-V-Backup-Software.html) they say they support CSVs. Their tech support has been very helpful but unfortunately, nothing has fixed the problem.
    What is annoying is that not every backup causes a problem. I have a daily backup of one of our machines that runs fine without initiating any additional reboots, but almost every other backup job will trigger the VMs on the common CSV to reboot.
    I understood about the updates, but I had to "prove" it to the MS tech I was on the phone with, hence I brought it up. I understand on the storage as well. Why give a warning for something that is working, though? I think it is just a poor indicator when the report doesn't explain that.
    At a loss for what else I can do,
    Charles J. Palmer

  • Hyper-V Cluster Name offline

    We have a 2012 Hyper-V cluster that isn't online and we can't migrate VMs to the other Hyper-V host.  We see event errors in the Failover Cluster Manager:
    The description for Event ID 1069 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Cluster Name
    Cluster Group
    Network Name
    The description for Event ID 1254 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Cluster Group
    The description for Event ID 1155 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    ACMAIL
    3604536
    Any help or info is appreciated.
    Thank you!

    Here is the network validation.  Any thoughts?
    Failover Cluster Validation Report
          Node: ACHV01.AshtaChemicals.local - Validated
          Node: ACHV02.AshtaChemicals.local - Validated
          Started: 8/6/2014 5:04:47 PM
          Completed: 8/6/2014 5:05:22 PM
    The Validate a Configuration Wizard must be run after any change is made to the configuration of the cluster or hardware.
    Results by Category
          Network: Warning
    Network
          List Network Binding Order: Success
          Validate Cluster Network Configuration: Success
          Validate IP Configuration: Warning
          Validate Multiple Subnet Properties: Success
          Validate Network Communication: Success
          Validate Windows Firewall Configuration: Success
    Overall Result
      Testing has completed for the tests you selected. You should review the warnings in the Report. A cluster solution is supported by Microsoft only if it passes all cluster validation tests.
    List Network Binding Order
      Description: List the order in which networks are bound to the adapters on
      each node.
      ACHV01.AshtaChemicals.local (binding order: adapter - description - speed)
            iSCSI3 - Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3 - 1000 Mbit/s
            Ethernet 3 - Intel(R) PRO/1000 PT Quad Port LP Server Adapter - Unavailable
            Mgt - Heartbeat - Microsoft Network Adapter Multiplexor Driver #4 - 2000 Mbit/s
            Mgt - LiveMigration - Microsoft Network Adapter Multiplexor Driver #3 - 2000 Mbit/s
            Mgt - Microsoft Network Adapter Multiplexor Driver - 2000 Mbit/s
            iSCSI2 - Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37 - 1000 Mbit/s
            3 - Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) - Unavailable
      ACHV02.AshtaChemicals.local (binding order: adapter - description - speed)
            Mgt - Heartbeat - Microsoft Network Adapter Multiplexor Driver #4 - 2000 Mbit/s
            Mgt - LiveMigration - Microsoft Network Adapter Multiplexor Driver #3 - 2000 Mbit/s
            Mgt - Microsoft Network Adapter Multiplexor Driver #2 - 2000 Mbit/s
            iSCSI1 - Broadcom NetXtreme Gigabit Ethernet #7 - 1000 Mbit/s
            NIC2 - Broadcom NetXtreme Gigabit Ethernet - Unavailable
            SLOT 5 2 - Broadcom NetXtreme Gigabit Ethernet - Unavailable
            iSCSI2 - Broadcom NetXtreme Gigabit Ethernet - 1000 Mbit/s
    Validate Cluster Network Configuration
      Description: Validate the cluster networks that would be created for these
      servers.
      Network: Cluster Network 1
      DHCP Enabled: False
      Network Role: Disabled
      One or more interfaces on this network are connected to an iSCSI Target. This network will not be used for cluster communication.
      Prefix: 192.168.131.0/24
            ACHV01.AshtaChemicals.local - iSCSI3: DHCP disabled, connected to iSCSI target, IP 192.168.131.113/24
            ACHV02.AshtaChemicals.local - iSCSI2: DHCP disabled, connected to iSCSI target, IP 192.168.131.121/24
      Network: Cluster Network 2
      DHCP Enabled: False
      Network Role: Internal
      Prefix: 192.168.141.0/24
            ACHV01.AshtaChemicals.local - Mgt - Heartbeat: DHCP disabled, not connected to iSCSI target, IP 192.168.141.10/24
            ACHV02.AshtaChemicals.local - Mgt - Heartbeat: DHCP disabled, not connected to iSCSI target, IP 192.168.141.12/24
      Network: Cluster Network 3
      DHCP Enabled: False
      Network Role: Internal
      Prefix: 192.168.140.0/24
            ACHV01.AshtaChemicals.local - Mgt - LiveMigration: DHCP disabled, not connected to iSCSI target, IP 192.168.140.10/24
            ACHV02.AshtaChemicals.local - Mgt - LiveMigration: DHCP disabled, not connected to iSCSI target, IP 192.168.140.12/24
      Network: Cluster Network 4
      DHCP Enabled: False
      Network Role: Enabled
      Prefix: 10.1.1.0/24
            ACHV01.AshtaChemicals.local - Mgt: DHCP disabled, not connected to iSCSI target, IP 10.1.1.4/24
            ACHV02.AshtaChemicals.local - Mgt: DHCP disabled, not connected to iSCSI target, IP 10.1.1.5/24
      Network: Cluster Network 5
      DHCP Enabled: False
      Network Role: Disabled
      One or more interfaces on this network are connected to an iSCSI Target. This network will not be used for cluster communication.
      Prefix: 192.168.130.0/24
            ACHV01.AshtaChemicals.local - iSCSI2: DHCP disabled, connected to iSCSI target, IP 192.168.130.112/24
            ACHV02.AshtaChemicals.local - iSCSI1: DHCP disabled, connected to iSCSI target, IP 192.168.130.121/24
      Verifying that each cluster network interface within a cluster network is
      configured with the same IP subnets.
      Examining network Cluster Network 1.
      Network interface ACHV01.AshtaChemicals.local - iSCSI3 has addresses on all
      the subnet prefixes of network Cluster Network 1.
      Network interface ACHV02.AshtaChemicals.local - iSCSI2 has addresses on all
      the subnet prefixes of network Cluster Network 1.
      Examining network Cluster Network 2.
      Network interface ACHV01.AshtaChemicals.local - Mgt - Heartbeat has addresses
      on all the subnet prefixes of network Cluster Network 2.
      Network interface ACHV02.AshtaChemicals.local - Mgt - Heartbeat has addresses
      on all the subnet prefixes of network Cluster Network 2.
      Examining network Cluster Network 3.
      Network interface ACHV01.AshtaChemicals.local - Mgt - LiveMigration has
      addresses on all the subnet prefixes of network Cluster Network 3.
      Network interface ACHV02.AshtaChemicals.local - Mgt - LiveMigration has
      addresses on all the subnet prefixes of network Cluster Network 3.
      Examining network Cluster Network 4.
      Network interface ACHV01.AshtaChemicals.local - Mgt has addresses on all the
      subnet prefixes of network Cluster Network 4.
      Network interface ACHV02.AshtaChemicals.local - Mgt has addresses on all the
      subnet prefixes of network Cluster Network 4.
      Examining network Cluster Network 5.
      Network interface ACHV01.AshtaChemicals.local - iSCSI2 has addresses on all
      the subnet prefixes of network Cluster Network 5.
      Network interface ACHV02.AshtaChemicals.local - iSCSI1 has addresses on all
      the subnet prefixes of network Cluster Network 5.
      Verifying that, for each cluster network, all adapters are consistently
      configured with either DHCP or static IP addresses.
      Checking DHCP consistency for network: Cluster Network 1. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
      iSCSI3 matches network Cluster Network 1.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
      iSCSI2 matches network Cluster Network 1.
      Checking DHCP consistency for network: Cluster Network 2. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      - Heartbeat matches network Cluster Network 2.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      - Heartbeat matches network Cluster Network 2.
      Checking DHCP consistency for network: Cluster Network 3. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      - LiveMigration matches network Cluster Network 3.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      - LiveMigration matches network Cluster Network 3.
      Checking DHCP consistency for network: Cluster Network 4. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      matches network Cluster Network 4.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      matches network Cluster Network 4.
      Checking DHCP consistency for network: Cluster Network 5. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
      iSCSI2 matches network Cluster Network 5.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
      iSCSI1 matches network Cluster Network 5.
    Validate IP Configuration
      Description: Validate that IP addresses are unique and subnets configured
      correctly.
      ACHV01.AshtaChemicals.local
            iSCSI3 (Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3), MAC 00-26-55-DB-CF-73, Operational, no DNS servers, IP 192.168.131.113/24
            Mgt - Heartbeat (Microsoft Network Adapter Multiplexor Driver #4), MAC 78-2B-CB-3C-DC-F5, Operational, DNS 10.1.1.2 and 10.1.1.8, IP 192.168.141.10/24
            Mgt - LiveMigration (Microsoft Network Adapter Multiplexor Driver #3), MAC 78-2B-CB-3C-DC-F5, Operational, DNS 10.1.1.2 and 10.1.1.8, IP 192.168.140.10/24
            Mgt (Microsoft Network Adapter Multiplexor Driver), MAC 78-2B-CB-3C-DC-F5, Operational, DNS 10.1.1.2 and 10.1.1.8, IP 10.1.1.4/24
            iSCSI2 (Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37), MAC 78-2B-CB-3C-DC-F7, Operational, no DNS servers, IP 192.168.130.112/24
            Local Area Connection* 12 (Microsoft Failover Cluster Virtual Adapter), MAC 02-61-1E-49-32-8F, Operational, IPs fe80::cc2f:d769:fe24:3d04%23/64 and 169.254.2.195/16
            Loopback Pseudo-Interface 1 (Software Loopback Interface 1), Operational, IPs ::1/128 and 127.0.0.1/8
            isatap.{96B6424D-DB32-480F-8B46-056A11A0A6A8} (Microsoft ISATAP Adapter), Not Operational, IP fe80::5efe:192.168.131.113%16/128
            isatap.{A0353AF4-CE7F-4811-B4FC-35273C2F2C6E} (Microsoft ISATAP Adapter #3), Not Operational, IP fe80::5efe:192.168.130.112%18/128
            isatap.{FAAF4D6A-5A41-4725-9E83-689D8E6682EE} (Microsoft ISATAP Adapter #4), Not Operational, IP fe80::5efe:192.168.141.10%22/128
            isatap.{C66443C2-DC5F-4C2A-A674-2191F76E33E1} (Microsoft ISATAP Adapter #5), Not Operational, IP fe80::5efe:10.1.1.4%27/128
            isatap.{B3A95E1D-CB95-4111-89E5-276497D7EF42} (Microsoft ISATAP Adapter #6), Not Operational, IP fe80::5efe:192.168.140.10%29/128
            isatap.{7705D42A-1988-463E-9DA3-98D8BD74337E} (Microsoft ISATAP Adapter #7), Not Operational, IP fe80::5efe:169.254.2.195%30/128
      ACHV02.AshtaChemicals.local
            Mgt - Heartbeat (Microsoft Network Adapter Multiplexor Driver #4), MAC 74-86-7A-D4-C9-8B, Operational, DNS 10.1.1.8 and 10.1.1.2, IP 192.168.141.12/24
            Mgt - LiveMigration (Microsoft Network Adapter Multiplexor Driver #3), MAC 74-86-7A-D4-C9-8B, Operational, DNS 10.1.1.8 and 10.1.1.2, IP 192.168.140.12/24
            Mgt (Microsoft Network Adapter Multiplexor Driver #2), MAC 74-86-7A-D4-C9-8B, Operational, DNS 10.1.1.8 and 10.1.1.2, IPs 10.1.1.5/24 and 10.1.1.248/24
            iSCSI1 (Broadcom NetXtreme Gigabit Ethernet #7), MAC 74-86-7A-D4-C9-8A, Operational, no DNS servers, IP 192.168.130.121/24
            iSCSI2 (Broadcom NetXtreme Gigabit Ethernet), MAC 00-10-18-F5-08-9C, Operational, no DNS servers, IP 192.168.131.121/24
            Local Area Connection* 11 (Microsoft Failover Cluster Virtual Adapter), MAC 02-8F-46-67-27-51, Operational, IPs fe80::3471:c9bf:29ad:99db%25/64 and 169.254.1.193/16
            Loopback Pseudo-Interface 1 (Software Loopback Interface 1), Operational, IPs ::1/128 and 127.0.0.1/8
            isatap.{8D7DF16A-1D5F-43D9-B2D6-81143A7225D2} (Microsoft ISATAP Adapter #2), Not Operational, IP fe80::5efe:192.168.131.121%21/128
            isatap.{82E35DBD-52BE-4BCF-BC74-E97BB10BF4B0} (Microsoft ISATAP Adapter #3), Not Operational, IP fe80::5efe:192.168.130.121%22/128
            isatap.{5A315B7D-D94E-492B-8065-D760234BA42E} (Microsoft ISATAP Adapter #4), Not Operational, IP fe80::5efe:192.168.141.12%23/128
            isatap.{2182B37C-B674-4E65-9F78-19D93E78FECB} (Microsoft ISATAP Adapter #5), Not Operational, IP fe80::5efe:192.168.140.12%24/128
            isatap.{104DC629-D13A-4A36-8845-0726AC9AE25E} (Microsoft ISATAP Adapter #6), Not Operational, IP fe80::5efe:10.1.1.5%33/128
            isatap.{483266DF-7620-4427-BE5D-3585C8D92A12} (Microsoft ISATAP Adapter #7), Not Operational, IP fe80::5efe:169.254.1.193%34/128
      (All ISATAP adapters report MAC 00-00-00-00-00-00-00-E0 and no DNS servers.)
      Verifying that a node does not have multiple adapters connected to the same
      subnet.
      Verifying that each node has at least one adapter with a defined default
      gateway.
      Verifying that there are no node adapters with the same MAC physical address.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV01.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration and node
      ACHV01.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV02.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration and node
      ACHV02.AshtaChemicals.local adapter Mgt.
      Verifying that there are no duplicate IP addresses between any pair of nodes.
      Checking that nodes are consistently configured with IPv4 and/or IPv6
      addresses.
      Verifying that all nodes IPv4 networks are not configured using Automatic
      Private IP Addresses (APIPA).
    Validate Multiple Subnet Properties
      Description: For clusters using multiple subnets, validate the network
      properties.
      Testing that the HostRecordTTL property for network name Cluster1 is set to the optimal value for the current cluster configuration.
      The HostRecordTTL property for network name Cluster1 has a value of 1200.
      Testing that the RegisterAllProvidersIP property for network name Cluster1 is set to the optimal value for the current cluster configuration.
      The RegisterAllProvidersIP property for network name Cluster1 has a value of 0.
      Testing that the PublishPTRRecords property for network name Cluster1 is set to the optimal value for the current cluster configuration.
      The PublishPTRRecords property forces the network name to register a PTR record in DNS reverse lookup, mapping IP address to name.
    Validate Network Communication
      Description: Validate that servers can communicate, with acceptable latency,
      on all networks.
      Analyzing connectivity results ...
      Multiple communication paths were detected between each pair of nodes.
    Back to Summary
    Back to Top
    Validate Windows Firewall Configuration
      Description: Validate that the Windows Firewall is properly configured to
      allow failover cluster network communication.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt - LiveMigration'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - iSCSI3'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt - Heartbeat'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - iSCSI2'.
      The Windows Firewall on node ACHV01.AshtaChemicals.local is configured to
      allow network communication between cluster nodes.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - iSCSI2'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt - LiveMigration'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt - Heartbeat'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - iSCSI1'.
      The Windows Firewall on node ACHV02.AshtaChemicals.local is configured to
      allow network communication between cluster nodes.
    Back to Summary
    Back to Top
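    All of these checks passed, but if a node ever fails this test, the relevant rules live in the built-in "Failover Clusters" firewall group and can be inspected and re-enabled from PowerShell. A minimal sketch:
      # Show the failover clustering firewall rules and their state.
      Get-NetFirewallRule -DisplayGroup "Failover Clusters" |
          Select-Object DisplayName, Enabled, Direction, Profile
      # Re-enable the whole group if anything was switched off.
      Enable-NetFirewallRule -DisplayGroup "Failover Clusters"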

  • How to use a Fibre Channel disk array for a Hyper-V Cluster

    Hi
    I created a Hyper-V cluster (2012 R2) and have a Fibre Channel disk array (4 TB). Is it better to create one big LUN for Hyper-V storage, or two smaller LUNs (2 x 2 TB)? Which will give better I/O? All disks used in the array are the same.
    Thank you for your help
    Kind Regards Tomasz

    Hi Tomasz,
    I agree with Tim; the best approach is to ask the hardware vendor about the disk construction of the FC storage.
    Based on my understanding, if the underlying disks are the same and are controlled by the same controller, dividing the space will not create any extra I/O capacity; the total amount of I/O available is equal either way.
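    Whichever layout you choose, each LUN will show up as a cluster disk that you then add as a CSV. A minimal PowerShell sketch of that step ("Cluster Disk 2" is a placeholder name):
      Import-Module FailoverClusters
      # List cluster disks still sitting in Available Storage.
      Get-ClusterResource |
          Where-Object { $_.ResourceType -like "Physical Disk" -and
                         $_.OwnerGroup -like "Available Storage" }
      # Promote one of them to a Cluster Shared Volume.
      Add-ClusterSharedVolume -Name "Cluster Disk 2"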
    Best Regards
    Elton Ji

  • Can I change which nic is used for a cluster network when more than one nic on the node is on same subnet?

    This cluster has been up and working for maybe a year and a half the way it is.  There are two nodes, running Server 2012.  In addition to a couple network interfaces devoted to VM traffic each node has:
    Management Interface: 192.168.1.0/24
    iSCSI Interface: 192.168.1.0/24
    Internal Cluster Interface: 192.168.99.0/24
    The iSCSI interfaces have to be on the same subnet as the management interfaces due to limitations in the shared storage.  Basically, if I segregated it I wouldn't be able to access the shared storage itself for any kind of management or maintenance tasks. 
    I have restricted the iSCSI traffic to only use the one interface on each cluster node but I noticed that one of the cluster networks is connecting the management interface on one cluster node member with the iSCSI interface on the other cluster node member. 
    I would like for the cluster network to be using the management interface on both cluster node members so as not to interfere with iSCSI traffic.  Can I change this?
    Binding order of interfaces is the same on both boxes but maybe I did that after I created the cluster, not sure. 

    Hi MnM Show,
    Tim is correct: if you are using iSCSI storage and reaching it over the network, it is recommended that the iSCSI storage fabric have a dedicated and isolated network. That
    network should be disabled for cluster communications so that it is dedicated to storage-related traffic only.
    This prevents intra-cluster communication as well as CSV traffic from flowing over the same network. During creation of the cluster, iSCSI traffic will be detected and the network
    will be disabled from cluster use. This network should be set lowest in the binding order.
    The related article:
    Configuring Windows Failover Cluster Networks
    http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
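    The same setting is exposed in PowerShell as the cluster network Role property. A minimal sketch ("iSCSI Network" is a placeholder for whatever your cluster shows under Networks):
      Import-Module FailoverClusters
      # Role: 0 = not used by the cluster, 1 = cluster traffic only,
      # 3 = cluster and client traffic.
      Get-ClusterNetwork | Format-Table Name, Role, Address
      # Exclude the iSCSI network from cluster use.
      (Get-ClusterNetwork -Name "iSCSI Network").Role = 0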
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Windows Server 2012 - Hyper-V - Cluster Shared Storage - VHDX unexpectedly gets copied to System Volume Information by "System", Virtual Machines stop responding

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2-node cluster connected to an iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM. This is the primary host and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM. This is the secondary host, intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous version of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 Intel SSDSA2CW160G3 160 GB SSDs in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We have applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation:
    Normally this setup works just fine, and we see no real difference in startup, file-copy or processing speed in LoB applications compared to a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally
    we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem:
    Our problem is that for some reason a random VHDX gets copied by "System" to the System Volume Information folder of the Cluster Shared Volume (i.e. C:\ClusterStorage\Volume1\System Volume Information).
    All VMs stop responding, or respond very slowly, during this copy process; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    This happens at random, not every day, and different VHDX files from different VMs get copied each time. Sometimes it happens during the daytime, which causes a lot of problems, especially when a 200 GB file gets copied (which takes a long time).
    What it is not:
    We thought that this was connected to the backup, but the backup had finished 3 hours before the last time this happened, and the backup never uses any of the files in System Volume Information, so it is not the backup.
    An observation:
    When this happened today, I switched on ShadowCopy (previous versions of files) and set it to use only 320 MB of storage; the copy process then stopped and the virtual machines started responding again. This could be unrelated, since there is no way to see
    how much of the VHDX is left to be copied, so it might have finished at the same time as I enabled ShadowCopy (previous versions of files).
    Our question:
    Why is a VHDX copied to System Volume Information when scheduled ShadowCopy (previous versions of files) is switched off? As far as I know, nothing should be copied to this folder when this function is switched off.
    List of VSS Writers:
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Writer name: 'Task Scheduler Writer'
       Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124}
       Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b}
       State: [1] Stable
       Last error: No error
    Writer name: 'VSS Metadata Store Writer'
       Writer Id: {75dfb225-e2e4-4d39-9ac9-ffaff65ddf06}
       Writer Instance Id: {088e7a7d-09a8-4cc6-a609-ad90e75ddc93}
       State: [1] Stable
       Last error: No error
    Writer name: 'Performance Counters Writer'
       Writer Id: {0bada1de-01a9-4625-8278-69e735f39dd2}
       Writer Instance Id: {f0086dda-9efc-47c5-8eb6-a944c3d09381}
       State: [1] Stable
       Last error: No error
    Writer name: 'System Writer'
       Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
       Writer Instance Id: {7848396d-00b1-47cd-8ba9-769b7ce402d2}
       State: [1] Stable
       Last error: No error
    Writer name: 'Microsoft Hyper-V VSS Writer'
       Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Instance Id: {8b6c534a-18dd-4fff-b14e-1d4aebd1db74}
       State: [5] Waiting for completion
       Last error: No error
    Writer name: 'Cluster Shared Volume VSS Writer'
       Writer Id: {1072ae1c-e5a7-4ea1-9e4a-6f7964656570}
       Writer Instance Id: {d46c6a69-8b4a-4307-afcf-ca3611c7f680}
       State: [1] Stable
       Last error: No error
    Writer name: 'ASR Writer'
       Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
       Writer Instance Id: {fc530484-71db-48c3-af5f-ef398070373e}
       State: [1] Stable
       Last error: No error
    Writer name: 'WMI Writer'
       Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0}
       Writer Instance Id: {3792e26e-c0d0-4901-b799-2e8d9ffe2085}
       State: [1] Stable
       Last error: No error
    Writer name: 'Registry Writer'
       Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
       Writer Instance Id: {6ea65f92-e3fd-4a23-9e5f-b23de43bc756}
       State: [1] Stable
       Last error: No error
    Writer name: 'BITS Writer'
       Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
       Writer Instance Id: {71dc7876-2089-472c-8fed-4b8862037528}
       State: [1] Stable
       Last error: No error
    Writer name: 'Shadow Copy Optimization Writer'
       Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
       Writer Instance Id: {cb0c7fd8-1f5c-41bb-b2cc-82fabbdc466e}
       State: [1] Stable
       Last error: No error
    Writer name: 'Cluster Database'
       Writer Id: {41e12264-35d8-479b-8e5c-9b23d1dad37e}
       Writer Instance Id: {23320f7e-f165-409d-8456-5d7d8fbaefed}
       State: [1] Stable
       Last error: No error
    Writer name: 'COM+ REGDB Writer'
       Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
       Writer Instance Id: {f23d0208-e569-48b0-ad30-1addb1a044af}
       State: [1] Stable
       Last error: No error
    Please note:
    Please only answer our question and do not offer general optimization tips that do not directly address the issue! We want the problem to go away, not to finish a bit faster!
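    One quick check the next time the copy is in progress: enumerate the shadow copies VSS is holding on the host that owns the CSV. A minimal sketch, run from an elevated prompt:
      # List all shadow copies known to VSS on this host.
      vssadmin list shadows
      # The same data via WMI, with creation times.
      Get-WmiObject -Class Win32_ShadowCopy |
          Select-Object InstallDate, VolumeName, ID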

    Hello Lawrence!
    Thank you for your reply; some comments to help you and others who read this thread:
    First of all, we use Windows Server 2012 and the VHDX format, as I wrote in the headline and in the text of my post. We have not had this problem in similar setups with Windows Server 2008 R2, so the problem seems to have been introduced in Windows Server 2012.
    These posts that you refer to seem to be outdated and/or do not apply to our configuration:
    The post about dynamic disks:
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx is only a recommendation for Windows Server 2008 R2 and the VHD format. Dynamic VHDX is indeed recommended by Microsoft when using Windows Server 2012 (please look in the optimization guide
    for Windows Server 2012).
    In fact, if we used fixed VHDX we would have a bigger problem, since fixed VHDX files are generally larger than dynamic disks, i.e. more data would be copied, which would take more time = the VMs would be unresponsive for a longer time.
    The post "What's the deal with the System Volume Information folder"
    http://blogs.msdn.com/b/oldnewthing/archive/2003/11/20/55764.aspx is for Windows XP / Windows Server 2003, and some things have changed since then. For instance, in Windows Server 2012 Shadow Copies cannot be controlled by going to Control Panel -> System.
    Instead, you right-click on a drive (i.e. a volume, for instance the C: drive/volume) in Computer and then click "Configure Shadow Copies".
    Windows Server 2008 R2 Backup problem
    http://social.technet.microsoft.com/Forums/en/windowsbackup/thread/0fc53adb-477d-425b-8c99-ad006e132336 - This post is about antivirus software trying to scan files used during backup that exist in the System Volume Information folder, and we do not
    have any antivirus software installed on our hosts, as I stated in my post.
    Comment that might help us:
    So, according to the definition of “System Volume Information”, the operation you mentioned is a Volume Shadow Copy. Check Event Viewer for Volume Shadow Copy related event logs and post them.
    Why?
    Further investigation suggests that a volume shadow copy is somehow created even though the schedule for Shadow Copies is turned off for all drives. This happens at random, and we have not found any pattern. Yesterday this operation took almost all available
    disk space (over 200 GB), but all the disk space was released when I turned on scheduled Shadow Copies for the CSV.
    I therefore draw these conclusions:
    The CSV volume has about 600 GB of disk space, and since the Volume Shadow Copy used 200 GB, or about 33% of the disk space, while the default limit is 10%, I conclude that for some reason the unscheduled volume shadow copy had no limit (or ignored
    the limit).
    When I turned on the schedule I also changed the limit to the minimum amount, which is 320 MB, and this is probably what released the disk space. That is, the unscheduled volume shadow copy operation was aborted; it adhered to the new limit and deleted the
    volume shadow copy it had taken.
    I have also set the limit for Volume Shadow Copies for all other volumes to 320 MB by using the "Configure Shadow Copies" Window that you open by right clicking on a drive (volume) in Computer and then selecting "Configure Shadow Copies...".
    It is important to note that setting a limit for shadow copy storage and disabling the schedule are two different things! It is possible to have unlimited storage for shadow copies while the schedule is disabled; however, I do not know if this was the case
    before I enabled Shadow Copies on the CSV, since I did not look for this.
    I have now set a 320 MB limit for shadow copy storage on all drives, so no VHDX should be copied to System Volume Information, since they are all larger than 320 MB.
    Does this sound about right or am I drawing the wrong conclusions?
    Limits for Shadow Copies:
    Below we list the limits for our two hosts:
    "Primary Host":
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (91%)
    Shadow Copy Storage association
       For volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Shadow Copy Storage volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    Shadow Copy Storage association
       For volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Shadow Copy Storage volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (3%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    C:\>cd \ClusterStorage\Volume1
    Secondary host:
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 35,0 MB (10%)
    Shadow Copy Storage association
       For volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Shadow Copy Storage volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 27,3 GB (10%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 6,80 GB (10%)
    C:\>
    There is something strange about the limits on the secondary host!
    I have not in any way changed the settings on the secondary host, and as you can see, it has a maximum limit of only 35 MB of shadow storage on the CSV, yet it also shows that this is 10% of the volume. That is clearly not the case, since 10% of 600
    GB = 60 GB!
    The question is: why does it by default set a limit that is too small (i.e. < 320 MB) on the CSV, and is this the cause of the problem? I.e., is the limit ignored because it is smaller than the smallest amount you can set using the GUI?
    Is the default 35 MB maximum shadow copy limit a bug, or is there any logical reason for setting a limit that, according to the GUI, is too small?
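    For what it's worth, the same 320 MB cap can be applied from the command line instead of the GUI, using the volume GUID paths that "vssadmin list shadowstorage" prints above. A minimal sketch (the --% token tells PowerShell to pass the rest of the line through literally; substitute your own volume GUID):
      # Cap shadow copy storage for the CSV volume at 320 MB.
      vssadmin --% resize shadowstorage /For=\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\ /On=\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\ /MaxSize=320MB
      # Watch the System log for unscheduled snapshot activity
      # (volsnap is the shadow copy volume driver).
      Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'volsnap' } -MaxEvents 20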

  • Cluster Quorum Disk failing inside Guest cluster VMs in Hyper-V Cluster using Virtual Disk Sharing Windows Server 2012 R2

    Hi, I'm having a problem in a VM guest cluster using Windows Server 2012 R2 with virtual disk sharing enabled.
    It's a SQL 2012 cluster, which has around 10 VHDX disks shared this way. All the VHDX files are inside LUNs on a SAN. These LUNs are presented to all clustered members of the Windows Server 2012 R2 Hyper-V cluster via Cluster Shared Volumes.
    Yesterday a very strange problem occurred: both the quorum disk and the DTC disk had their contents completely erased. The VHDX files themselves were there, but the data inside was gone.
    The SQL admin had to recreate both disks, but now we don't know whether this issue was related to the virtualization platform or to another event inside the cluster itself.
    Right now I'm seeing these errors on one of the VM guests:
     Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1069
    Task Category: Resource Control Manager
    Level:         Error
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    Cluster resource 'Quorum-HDD' of type 'Physical Disk' in clustered role 'Cluster Group' failed.
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1069</EventID>
        <Version>1</Version>
        <Level>2</Level>
        <Task>3</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14140</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="ResourceName">Quorum-HDD</Data>
        <Data Name="ResourceGroup">Cluster Group</Data>
        <Data Name="ResTypeDll">Physical Disk</Data>
      </EventData>
    </Event>
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1558
    Task Category: Quorum Manager
    Level:         Warning
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    The cluster service detected a problem with the witness resource. The witness resource will be failed over to another node within the cluster in an attempt to reestablish access to cluster configuration data.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1558</EventID>
        <Version>0</Version>
        <Level>3</Level>
        <Task>42</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14139</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="NodeName">ServerDB02</Data>
      </EventData>
    </Event>
    We don't know if this can happen again, and what if it happens on a disk with data?! We don't know whether this is related to the virtual disk sharing technology or anything else in the virtualization stack, but I'm asking here to find out if that is a possibility.
    Any ideas are appreciated.
    Thanks.
    Eduardo Rojas
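    One way to narrow down which layer touched those disks is to generate the cluster debug logs, from both the guest cluster and the Hyper-V host cluster, covering the time of the event. A minimal sketch (time span and destination folder are placeholders; the folder must already exist):
      Import-Module FailoverClusters
      # Write cluster.log for every node, covering the last 48 hours
      # (TimeSpan is in minutes), into C:\Temp.
      Get-ClusterLog -TimeSpan 2880 -Destination C:\Temp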

    Hi,
    Please refer to the following link:
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx#.Ux172HnxtNA
    Best Regards,
    Vincent Wu
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.
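    Not an answer to what erased the data, but it is worth confirming on each Hyper-V host that every shared disk really was attached with sharing enabled. A minimal sketch (the VM name is taken from the event logs above and may differ in your setup):
      # On a Hyper-V host: list the guest's disks and whether each was
      # attached as a shared VHDX (persistent reservations enabled).
      Get-VMHardDiskDrive -VMName "ServerDB02" |
          Select-Object Path, ControllerType, SupportPersistentReservations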
