Add iSCSI LUN to Multiple Hyper-V Cluster Hosts?

Is there a way to connect multiple Hyper-V hosts to a CSV LUN without manually logging into each and opening the iSCSI Initiator GUI?

Here's a good step-by-step guide on how to do everything you want using just PowerShell. Please see:
Configuring iSCSI storage for a Hyper-V Cluster
http://www.hypervrockstar.com/qs-buildingahypervcluster_part3/
This part should be of particular interest to you. See:
Connect Nodes to iSCSI Target
Once the target is created and configured, we need to attach the iSCSI initiator on each node to the storage. We will use MPIO to
ensure the best performance and availability of storage.  When we enable the MS
DSM to claim all iSCSI LUNs, we must reboot the node for the setting to take effect. MPIO is utilized by creating a persistent connection to the target for each data NIC on the target server and from all iSCSI initiator NICs on our Hyper-V
server.  Because our Hyper-V servers are using converged networking, we only have one iSCSI NIC.  In our example, resiliency is provided by the LBFO team we created in the last video.
PowerShell Commands
    Set-Service -Name msiscsi -StartupType Automatic
    Start-Service msiscsi
    # reboot required after claim
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
    New-IscsiTargetPortal -TargetPortalAddress 192.168.1.107
    $target = Get-IscsiTarget -NodeAddress *HyperVCluster*
    $target | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.1.21 -TargetPortalAddress 10.0.1.10
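To answer the original question about not logging into each host: the same commands can be pushed to all cluster nodes at once over PowerShell remoting. A minimal sketch, assuming remoting is enabled; the node names below are placeholders, and the per-node initiator address would need to be parameterized:
    $nodes = 'HV01','HV02','HV03'   # hypothetical node names
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        Set-Service -Name msiscsi -StartupType Automatic
        Start-Service msiscsi
        Enable-MSDSMAutomaticClaim -BusType iSCSI          # reboot still required after the claim
        Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
        New-IscsiTargetPortal -TargetPortalAddress 192.168.1.107
        Get-IscsiTarget -NodeAddress *HyperVCluster* |
            Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -TargetPortalAddress 10.0.1.10
            # add -InitiatorPortalAddress here if you need to pin a specific iSCSI NIC per node
    }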
You'll find a reference for the "Connect-IscsiTarget" PowerShell cmdlet here:
Connect-IscsiTarget
https://technet.microsoft.com/en-us/library/hh826098.aspx
A set of samples on how to control the MSFT iSCSI initiator with PowerShell can be found here:
Managing iSCSI Initiator with PowerShell
http://blogs.msdn.com/b/san/archive/2012/07/31/managing-iscsi-initiator-connections-with-windows-powershell-on-windows-server-2012.aspx
Good luck and happy clustering :)
StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

Similar Messages

  • How to monitor the performance of VMs & Hyper-V cluster host nodes running on an SCVMM cluster

    hello...,
    How can we monitor the performance of VMs and Hyper-V cluster host nodes managed by SCVMM from SCOM, so that we can:
    Identify the highest-utilized (CPU and memory) VM on each cluster Hyper-V host.
    Identify the lowest-utilized (CPU and memory) Hyper-V host in the cluster.
    After identifying the VMs and Hyper-V cluster hosts in SCVMM, we can then migrate the highest-utilized VM to the lowest-utilized
    Hyper-V cluster host.
    To identify and implement the above, what do I need to configure in SCOM, and which management packs (MPs) do I need to install?
    Thanks
    RICHA KM

  • Hyper-V Cluster Hosts BMR Backups take over 12 hours

    Hello, 
    We are using 2012 R2 DC to create a Hyper-V  2 node cluster.  The backup is DPM 2012 R2 running on 2012 R2 DC.  The DPM system was upgraded to 2012 R2 from DPM 2010 about a year ago and has worked for the most part.
    Two problems we are having.  I am using the VSS writer from the SAN manufacturer.  On our old cluster I used the Microsoft VSS writer and had no problems unless I loaded the C: up with too many files on the cluster node.  I saw another post
    that says to disable the HW VSS writer but that was unsuccessful the last time I tried it.  I am going to try it again though.
    1.  I currently get a random VM that hangs during the backup.  The VM still runs but I cannot migrate it and backups fail from that point on.  In Hyper-V backup the Status field shows "Backing up...".  I am waiting for this
    to happen on a non-critical VM but alas that is not happening. <sigh>  To correct the problem I have to pull power from the cluster node the system was on, and then about 90% of the time I have to replace two different .sys files in c:\windows\system32\drivers
    and then run the startup repair utility from the Windows install disk.
    2.  I did not notice this until this week but my BMR backups of the Hyper-V hosts are taking over 12 hours to complete.  The BMR on a different stand alone server is taking less than 10 min.  What is causing this and if these are taking this
    long is it interfering with the backup of the VMs on the cluster and causing the problem in paragraph 1 above?  I have tried to stagger my backups so they don't interfere with each other but if this one is taking 12-14 hours that will never happen.

    Hi,
    Are you on DPM 2012 R2 UR3?    If so, there is no need for a VSS hardware provider; you can safely uninstall it and DPM backups will continue to work fine.
    Problem-1)  I currently get a random VM that hangs during the backup.  The VM still runs but I cannot migrate it and backups fail from that point on.  In Hyper-V backup the Status field shows "Backing up...". 
    Response-1) Is there still an active backup job for that VM on the DPM server - and does it show that it's still transferring changes ?   If not, then wait for other VM backups to complete on that host, then manually stop the DPMRA service - that
    will reset the "backup in progress" flag and allow the VM to be managed.  We have an open bug that we're working on to fix that issue. 
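    A minimal PowerShell sketch of that manual reset, run on the affected Hyper-V host (restarting the agent afterwards is an assumption, not part of the guidance above):
                  Stop-Service -Name DPMRA -Force    # clears the "backup in progress" state for the stuck VM
                  Start-Service -Name DPMRA          # bring the DPM agent back so later jobs can run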
    Problem-2) My BMR backups of the Hyper-V hosts are taking over 12 hours to complete. 
    Response-2)  DPM is not responsible for the amount of time a BMR backup takes to complete.  To troubleshoot this outside of DPM perform the following.
    First make sure no active BMR job is running by looking at running jobs on the DPM server.
    Second - using Task Manager, make sure there are no WBENGINE.EXE processes running on the DC you want to test - if so, kill them with Task Manager.
     To test BMR backup outside of DPM, try this command:
    1) Set up a network share on a remote machine
    \\server\bmrshare
    2) From an administrative command prompt on the PS, type:
                  wbadmin.exe start backup -allcritical -backuptarget:\\server\bmrshare
    This should show you the list of volumes included in the BMR backup and ask "Do you want to start the backup operation?" - Type Y to continue.
    See how long it takes to complete or see if it hangs - kill it if you need to after X hours - but look on the share and see how much it copied before killing it.
    You can try disabling chimney on both DPM and DC to see if that helps.
        c:\>Netsh int tcp set global chimney=disabled
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Add Node to Hyper-V Cluster running Server 2012 R2

    Hi All,
    I am in the process of upgrading our Hyper-V Cluster to Server 2012 R2 but I am not sure about the required validation test.
    The situation at the moment: a 1-node cluster running Server 2012 R2 with 2 CSVs and a quorum. An additional server is prepared to add to the cluster. One CSV is empty and could be used for the validation test. On the other CSV, 10 VMs are running in production.
    So when I start the Validation wizard I can select specific CSVs to test, which makes sense ;-) But the warning message is not clear to me: "TO AVOID ROLE FAILURES, IT IS RECOMMENDED THAT ALL ROLES USING CLUSTER SHARED VOLUMES BE STOPPED BEFORE THE STORAGE
    IS VALIDATED". Does it mean that ALL CSVs will be tested and switched offline during the test, or just the CSV that I have selected in the options? I definitely have to avoid the CSV where all the VMs are running being switched offline, and also
    that the configuration gets corrupted after losing the CSV where the VMs are running.
    Can someone confirm that ONLY the selected CSV will be used for the Validation test ???
    Many thanks
    Markus

    Hi,
    The validation will only test the CSV storage you select; if you have guest VMs running on that CSV, they must be shut down or saved before you validate it.
    Several tests will actually trigger failovers and move the disks and groups to different cluster nodes which will cause downtime, and these include Validating Disk Arbitration,
    Disk Failover, Multiple Arbitration, SCSI-3 Persistent Reservation, and Simultaneous Failover. 
    So if you want to test a majority of the functionality of your cluster without impacting availability, exclude these tests.
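    If you prefer PowerShell, a minimal sketch of running validation while skipping the disruptive storage tests (node and disk names are placeholders, not from this thread):
        Import-Module FailoverClusters
        # Validate everything except the storage tests, so no CSV is taken offline
        Test-Cluster -Node "Node1","Node2" -Ignore "Storage"
        # Or run the storage tests against only the empty cluster disk
        Test-Cluster -Node "Node1","Node2" -Disk "Cluster Disk 2" -Include "Storage"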
    The related information:
    Validating a Cluster with Zero Downtime
    http://blogs.msdn.com/b/clustering/archive/2011/06/28/10180803.aspx
    Hope this helps.

  • ISCSI LUNs presented as online on new server for new cluster.

    We are building out a new 2012 R2 Hyper-V cluster. The old environment is a mix of Cluster Shared Volume drives and LUNs presented just for a VM itself.
    I had planned on presenting everything in the old environment to the new environment and then just using the cluster migration wizard to move VMs over a few at a time.
    I ran into a problem when I connected my first host to our SAN today.  It is in a group that has over 70 LUNs presented to it.  Once I connect to the target the host is just crippled.  I am not having memory or CPU issues but disk I/O issues. 
    I noticed that the host now sees all 70-plus LUNs and has tried to bring those disks online as well.
    I don't want them online right now.  I just need the quorum drive and a few of the newly created LUNs online to finish creating our cluster so we can start migrations.
    Why is the host trying to bring these drives online?  As soon as I click on Devices in the iSCSI Initiator the program locks up and doesn't respond. 
    Is there a way to set up the target but force the OS not to bring any of those disks online?  I removed the favorite target, and there are no items listed in the Volume List under Volumes and Devices.  However, if you go to Disk Management it shows all
    70-plus disks and most of them now show online, except for my newly created LUNs which are not initialized yet.
    Kristopher Turner | Not the brightest bulb but by far not the dimmest bulb.

    Hi KristopherJTurner,
    You can try removing the affected host from the iSCSI target's access list on the SAN first. The iSCSI initiator logs on to a target that has granted it access, and from then on the server
    can start reading and writing to all LUNs that are assigned to that target.
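    As a side note, if you just need to keep the already-presented disks offline while you finish building the cluster, a minimal PowerShell sketch (not from the linked article; the disk number is a placeholder) would be:
        # List the iSCSI disks the host can see and whether they are online
        Get-Disk | Where-Object BusType -eq 'iSCSI' | Select-Object Number, FriendlyName, OperationalStatus, IsOffline
        # Take a specific disk offline so the host stops touching it
        Set-Disk -Number 12 -IsOffline $true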
    More information:
    Manage iSCSI Targets
    http://technet.microsoft.com/en-us/library/cc726015.aspx
    I’m glad to be of help to you!

  • Hyper-V Cluster Name offline

    We have a 2012 Hyper-V cluster that isn't online and we can't migrate VMs to the other Hyper-V host.  We see event errors in the Failover Cluster Manager:
    The description for Event ID 1069 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Cluster Name
    Cluster Group
    Network Name
    The description for Event ID 1254 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Cluster Group
    The description for Event ID 1155 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    ACMAIL
    3604536
    Any help or info is appreciated.
    Thank you!

    Here is the network validation.  Any thoughts?
    Failover Cluster Validation Report
          Node: ACHV01.AshtaChemicals.local - Validated
          Node: ACHV02.AshtaChemicals.local - Validated
          Started: 8/6/2014 5:04:47 PM
          Completed: 8/6/2014 5:05:22 PM
    The Validate a Configuration Wizard must be run after any change is made to the
    configuration of the cluster or hardware. For more information, see
    Results by Category
          Name | Result Summary | Description
          Network | Warning
    Network
          Name | Result | Description
          List Network Binding Order | Success
          Validate Cluster Network Configuration | Success
          Validate IP Configuration | Warning
          Validate Multiple Subnet Properties | Success
          Validate Network Communication | Success
          Validate Windows Firewall Configuration | Success
    Overall Result
      Testing has completed for the tests you selected. You should review the
      warnings in the Report. A cluster solution is supported by Microsoft only if
      it passes all cluster validation tests.
    List Network Binding Order
      Description: List the order in which networks are bound to the adapters on
      each node.
      ACHV01.AshtaChemicals.local
            Binding Order | Adapter | Speed
            iSCSI3 | Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3 | 1000 Mbit/s
            Ethernet 3 | Intel(R) PRO/1000 PT Quad Port LP Server Adapter | Unavailable
            Mgt - Heartbeat | Microsoft Network Adapter Multiplexor Driver #4 | 2000 Mbit/s
            Mgt - LiveMigration | Microsoft Network Adapter Multiplexor Driver #3 | 2000 Mbit/s
            Mgt | Microsoft Network Adapter Multiplexor Driver | 2000 Mbit/s
            iSCSI2 | Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37 | 1000 Mbit/s
            3 | Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) | Unavailable
      ACHV02.AshtaChemicals.local
            Binding Order | Adapter | Speed
            Mgt - Heartbeat | Microsoft Network Adapter Multiplexor Driver #4 | 2000 Mbit/s
            Mgt - LiveMigration | Microsoft Network Adapter Multiplexor Driver #3 | 2000 Mbit/s
            Mgt | Microsoft Network Adapter Multiplexor Driver #2 | 2000 Mbit/s
            iSCSI1 | Broadcom NetXtreme Gigabit Ethernet #7 | 1000 Mbit/s
            NIC2 | Broadcom NetXtreme Gigabit Ethernet | Unavailable
            SLOT 5 2 | Broadcom NetXtreme Gigabit Ethernet | Unavailable
            iSCSI2 | Broadcom NetXtreme Gigabit Ethernet | 1000 Mbit/s
    Back to Summary
    Back to Top
    Validate Cluster Network Configuration
      Description: Validate the cluster networks that would be created for these
      servers.
      Network: Cluster Network 1
      DHCP Enabled: False
      Network Role: Disabled
      One or more interfaces on this network are connected to an iSCSI Target. This
      network will not be used for cluster communication.
            Prefix | Prefix Length
            192.168.131.0 | 24
            ItemValue
            Network InterfaceACHV01.AshtaChemicals.local - iSCSI3
            DHCP EnabledFalse
            Connected to iSCSI targetTrue
            IP Address192.168.131.113
            Prefix Length24
            ItemValue
            Network InterfaceACHV02.AshtaChemicals.local - iSCSI2
            DHCP EnabledFalse
            Connected to iSCSI targetTrue
            IP Address192.168.131.121
            Prefix Length24
      Network: Cluster Network 2
      DHCP Enabled: False
      Network Role: Internal
            Prefix | Prefix Length
            192.168.141.0 | 24
            ItemValue
            Network InterfaceACHV01.AshtaChemicals.local - Mgt - Heartbeat
            DHCP EnabledFalse
            Connected to iSCSI targetFalse
            IP Address192.168.141.10
            Prefix Length24
            ItemValue
            Network InterfaceACHV02.AshtaChemicals.local - Mgt - Heartbeat
            DHCP EnabledFalse
            Connected to iSCSI targetFalse
            IP Address192.168.141.12
            Prefix Length24
      Network: Cluster Network 3
      DHCP Enabled: False
      Network Role: Internal
            Prefix | Prefix Length
            192.168.140.0 | 24
            ItemValue
            Network InterfaceACHV01.AshtaChemicals.local - Mgt - LiveMigration
            DHCP EnabledFalse
            Connected to iSCSI targetFalse
            IP Address192.168.140.10
            Prefix Length24
            ItemValue
            Network InterfaceACHV02.AshtaChemicals.local - Mgt - LiveMigration
            DHCP EnabledFalse
            Connected to iSCSI targetFalse
            IP Address192.168.140.12
            Prefix Length24
      Network: Cluster Network 4
      DHCP Enabled: False
      Network Role: Enabled
            Prefix | Prefix Length
            10.1.1.0 | 24
            ItemValue
            Network InterfaceACHV01.AshtaChemicals.local - Mgt
            DHCP EnabledFalse
            Connected to iSCSI targetFalse
            IP Address10.1.1.4
            Prefix Length24
            ItemValue
            Network InterfaceACHV02.AshtaChemicals.local - Mgt
            DHCP EnabledFalse
            Connected to iSCSI targetFalse
            IP Address10.1.1.5
            Prefix Length24
      Network: Cluster Network 5
      DHCP Enabled: False
      Network Role: Disabled
      One or more interfaces on this network are connected to an iSCSI Target. This
      network will not be used for cluster communication.
            Prefix | Prefix Length
            192.168.130.0 | 24
            ItemValue
            Network InterfaceACHV01.AshtaChemicals.local - iSCSI2
            DHCP EnabledFalse
            Connected to iSCSI targetTrue
            IP Address192.168.130.112
            Prefix Length24
            ItemValue
            Network InterfaceACHV02.AshtaChemicals.local - iSCSI1
            DHCP EnabledFalse
            Connected to iSCSI targetTrue
            IP Address192.168.130.121
            Prefix Length24
      Verifying that each cluster network interface within a cluster network is
      configured with the same IP subnets.
      Examining network Cluster Network 1.
      Network interface ACHV01.AshtaChemicals.local - iSCSI3 has addresses on all
      the subnet prefixes of network Cluster Network 1.
      Network interface ACHV02.AshtaChemicals.local - iSCSI2 has addresses on all
      the subnet prefixes of network Cluster Network 1.
      Examining network Cluster Network 2.
      Network interface ACHV01.AshtaChemicals.local - Mgt - Heartbeat has addresses
      on all the subnet prefixes of network Cluster Network 2.
      Network interface ACHV02.AshtaChemicals.local - Mgt - Heartbeat has addresses
      on all the subnet prefixes of network Cluster Network 2.
      Examining network Cluster Network 3.
      Network interface ACHV01.AshtaChemicals.local - Mgt - LiveMigration has
      addresses on all the subnet prefixes of network Cluster Network 3.
      Network interface ACHV02.AshtaChemicals.local - Mgt - LiveMigration has
      addresses on all the subnet prefixes of network Cluster Network 3.
      Examining network Cluster Network 4.
      Network interface ACHV01.AshtaChemicals.local - Mgt has addresses on all the
      subnet prefixes of network Cluster Network 4.
      Network interface ACHV02.AshtaChemicals.local - Mgt has addresses on all the
      subnet prefixes of network Cluster Network 4.
      Examining network Cluster Network 5.
      Network interface ACHV01.AshtaChemicals.local - iSCSI2 has addresses on all
      the subnet prefixes of network Cluster Network 5.
      Network interface ACHV02.AshtaChemicals.local - iSCSI1 has addresses on all
      the subnet prefixes of network Cluster Network 5.
      Verifying that, for each cluster network, all adapters are consistently
      configured with either DHCP or static IP addresses.
      Checking DHCP consistency for network: Cluster Network 1. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
      iSCSI3 matches network Cluster Network 1.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
      iSCSI2 matches network Cluster Network 1.
      Checking DHCP consistency for network: Cluster Network 2. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      - Heartbeat matches network Cluster Network 2.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      - Heartbeat matches network Cluster Network 2.
      Checking DHCP consistency for network: Cluster Network 3. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      - LiveMigration matches network Cluster Network 3.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      - LiveMigration matches network Cluster Network 3.
      Checking DHCP consistency for network: Cluster Network 4. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      matches network Cluster Network 4.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      matches network Cluster Network 4.
      Checking DHCP consistency for network: Cluster Network 5. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
      iSCSI2 matches network Cluster Network 5.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
      iSCSI1 matches network Cluster Network 5.
    Back to Summary
    Back to Top
    Validate IP Configuration
      Description: Validate that IP addresses are unique and subnets configured
      correctly.
      ACHV01.AshtaChemicals.local
            ItemName
            Adapter NameiSCSI3
            Adapter DescriptionIntel(R) PRO/1000 PT Quad Port LP Server Adapter #3
            Physical Address00-26-55-DB-CF-73
            StatusOperational
            DNS Servers
            IP Address192.168.131.113
            Prefix Length24
            ItemName
            Adapter NameMgt - Heartbeat
            Adapter DescriptionMicrosoft Network Adapter Multiplexor Driver #4
            Physical Address78-2B-CB-3C-DC-F5
            StatusOperational
            DNS Servers10.1.1.2, 10.1.1.8
            IP Address192.168.141.10
            Prefix Length24
            ItemName
            Adapter NameMgt - LiveMigration
            Adapter DescriptionMicrosoft Network Adapter Multiplexor Driver #3
            Physical Address78-2B-CB-3C-DC-F5
            StatusOperational
            DNS Servers10.1.1.2, 10.1.1.8
            IP Address192.168.140.10
            Prefix Length24
            ItemName
            Adapter NameMgt
            Adapter DescriptionMicrosoft Network Adapter Multiplexor Driver
            Physical Address78-2B-CB-3C-DC-F5
            StatusOperational
            DNS Servers10.1.1.2, 10.1.1.8
            IP Address10.1.1.4
            Prefix Length24
            ItemName
            Adapter NameiSCSI2
            Adapter DescriptionBroadcom BCM5709C NetXtreme II GigE (NDIS VBD Client)
            #37
            Physical Address78-2B-CB-3C-DC-F7
            StatusOperational
            DNS Servers
            IP Address192.168.130.112
            Prefix Length24
            ItemName
            Adapter NameLocal Area Connection* 12
            Adapter DescriptionMicrosoft Failover Cluster Virtual Adapter
            Physical Address02-61-1E-49-32-8F
            StatusOperational
            DNS Servers
            IP Addressfe80::cc2f:d769:fe24:3d04%23
            Prefix Length64
            IP Address169.254.2.195
            Prefix Length16
            ItemName
            Adapter NameLoopback Pseudo-Interface 1
            Adapter DescriptionSoftware Loopback Interface 1
            Physical Address
            StatusOperational
            DNS Servers
            IP Address::1
            Prefix Length128
            IP Address127.0.0.1
            Prefix Length8
            ItemName
            Adapter Nameisatap.{96B6424D-DB32-480F-8B46-056A11A0A6A8}
            Adapter DescriptionMicrosoft ISATAP Adapter
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:192.168.131.113%16
            Prefix Length128
            ItemName
            Adapter Nameisatap.{A0353AF4-CE7F-4811-B4FC-35273C2F2C6E}
            Adapter DescriptionMicrosoft ISATAP Adapter #3
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:192.168.130.112%18
            Prefix Length128
            ItemName
            Adapter Nameisatap.{FAAF4D6A-5A41-4725-9E83-689D8E6682EE}
            Adapter DescriptionMicrosoft ISATAP Adapter #4
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:192.168.141.10%22
            Prefix Length128
            ItemName
            Adapter Nameisatap.{C66443C2-DC5F-4C2A-A674-2191F76E33E1}
            Adapter DescriptionMicrosoft ISATAP Adapter #5
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:10.1.1.4%27
            Prefix Length128
            ItemName
            Adapter Nameisatap.{B3A95E1D-CB95-4111-89E5-276497D7EF42}
            Adapter DescriptionMicrosoft ISATAP Adapter #6
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:192.168.140.10%29
            Prefix Length128
            ItemName
            Adapter Nameisatap.{7705D42A-1988-463E-9DA3-98D8BD74337E}
            Adapter DescriptionMicrosoft ISATAP Adapter #7
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:169.254.2.195%30
            Prefix Length128
      ACHV02.AshtaChemicals.local
            ItemName
            Adapter NameMgt - Heartbeat
            Adapter DescriptionMicrosoft Network Adapter Multiplexor Driver #4
            Physical Address74-86-7A-D4-C9-8B
            StatusOperational
            DNS Servers10.1.1.8, 10.1.1.2
            IP Address192.168.141.12
            Prefix Length24
            ItemName
            Adapter NameMgt - LiveMigration
            Adapter DescriptionMicrosoft Network Adapter Multiplexor Driver #3
            Physical Address74-86-7A-D4-C9-8B
            StatusOperational
            DNS Servers10.1.1.8, 10.1.1.2
            IP Address192.168.140.12
            Prefix Length24
            ItemName
            Adapter NameMgt
            Adapter DescriptionMicrosoft Network Adapter Multiplexor Driver #2
            Physical Address74-86-7A-D4-C9-8B
            StatusOperational
            DNS Servers10.1.1.8, 10.1.1.2
            IP Address10.1.1.5
            Prefix Length24
            IP Address10.1.1.248
            Prefix Length24
            ItemName
            Adapter NameiSCSI1
            Adapter DescriptionBroadcom NetXtreme Gigabit Ethernet #7
            Physical Address74-86-7A-D4-C9-8A
            StatusOperational
            DNS Servers
            IP Address192.168.130.121
            Prefix Length24
            ItemName
            Adapter NameiSCSI2
            Adapter DescriptionBroadcom NetXtreme Gigabit Ethernet
            Physical Address00-10-18-F5-08-9C
            StatusOperational
            DNS Servers
            IP Address192.168.131.121
            Prefix Length24
            ItemName
            Adapter NameLocal Area Connection* 11
            Adapter DescriptionMicrosoft Failover Cluster Virtual Adapter
            Physical Address02-8F-46-67-27-51
            StatusOperational
            DNS Servers
            IP Addressfe80::3471:c9bf:29ad:99db%25
            Prefix Length64
            IP Address169.254.1.193
            Prefix Length16
            ItemName
            Adapter NameLoopback Pseudo-Interface 1
            Adapter DescriptionSoftware Loopback Interface 1
            Physical Address
            StatusOperational
            DNS Servers
            IP Address::1
            Prefix Length128
            IP Address127.0.0.1
            Prefix Length8
            ItemName
            Adapter Nameisatap.{8D7DF16A-1D5F-43D9-B2D6-81143A7225D2}
            Adapter DescriptionMicrosoft ISATAP Adapter #2
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:192.168.131.121%21
            Prefix Length128
            ItemName
            Adapter Nameisatap.{82E35DBD-52BE-4BCF-BC74-E97BB10BF4B0}
            Adapter DescriptionMicrosoft ISATAP Adapter #3
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:192.168.130.121%22
            Prefix Length128
            ItemName
            Adapter Nameisatap.{5A315B7D-D94E-492B-8065-D760234BA42E}
            Adapter DescriptionMicrosoft ISATAP Adapter #4
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:192.168.141.12%23
            Prefix Length128
            ItemName
            Adapter Nameisatap.{2182B37C-B674-4E65-9F78-19D93E78FECB}
            Adapter DescriptionMicrosoft ISATAP Adapter #5
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:192.168.140.12%24
            Prefix Length128
            ItemName
            Adapter Nameisatap.{104DC629-D13A-4A36-8845-0726AC9AE25E}
            Adapter DescriptionMicrosoft ISATAP Adapter #6
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:10.1.1.5%33
            Prefix Length128
            ItemName
            Adapter Nameisatap.{483266DF-7620-4427-BE5D-3585C8D92A12}
            Adapter DescriptionMicrosoft ISATAP Adapter #7
            Physical Address00-00-00-00-00-00-00-E0
            StatusNot Operational
            DNS Servers
            IP Addressfe80::5efe:169.254.1.193%34
            Prefix Length128
      Verifying that a node does not have multiple adapters connected to the same
      subnet.
      Verifying that each node has at least one adapter with a defined default
      gateway.
      Verifying that there are no node adapters with the same MAC physical address.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV01.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration and node
      ACHV01.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV02.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration and node
      ACHV02.AshtaChemicals.local adapter Mgt.
      Verifying that there are no duplicate IP addresses between any pair of nodes.
      Checking that nodes are consistently configured with IPv4 and/or IPv6
      addresses.
      Verifying that all nodes IPv4 networks are not configured using Automatic
      Private IP Addresses (APIPA).
    Back to Summary
    Back to Top
    Validate Multiple Subnet Properties
      Description: For clusters using multiple subnets, validate the network
      properties.
      Testing that the HostRecordTTL property for network name Name: Cluster1 is set
      to the optimal value for the current cluster configuration.
      HostRecordTTL property for network name Name: Cluster1 has a value of 1200.
      Testing that the RegisterAllProvidersIP property for network name Name:
      Cluster1 is set to the optimal value for the current cluster configuration.
      RegisterAllProvidersIP property for network name Name: Cluster1 has a value of
      0.
      Testing that the PublishPTRRecords property for network name Name: Cluster1 is
      set to the optimal value for the current cluster configuration.
      The PublishPTRRecords property forces the network name to register a PTR in
      DNS reverse lookup record IP address to name mapping.
    Back to Summary
    Back to Top
    Validate Network Communication
      Description: Validate that servers can communicate, with acceptable latency,
      on all networks.
      Analyzing connectivity results ...
      Multiple communication paths were detected between each pair of nodes.
    Back to Summary
    Back to Top
    Validate Windows Firewall Configuration
      Description: Validate that the Windows Firewall is properly configured to
      allow failover cluster network communication.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt - LiveMigration'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - iSCSI3'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt - Heartbeat'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - iSCSI2'.
      The Windows Firewall on node ACHV01.AshtaChemicals.local is configured to
      allow network communication between cluster nodes.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - iSCSI2'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt - LiveMigration'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt - Heartbeat'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - iSCSI1'.
      The Windows Firewall on node ACHV02.AshtaChemicals.local is configured to
      allow network communication between cluster nodes.
    Back to Summary
    Back to Top

  • Best design for HA Fileshare on existing Hyper-V Cluster?

    Have a three node 2012 R2 Hyper-V Cluster. The storage is an HP MSA 2000 G3 SAS block storage with CSVs. 
    We have a fileserver for all users running as VM on the cluster. Fileserver availability is important and it's difficult to take this fileserver down for the monthly patching. So we want to make these file services HA. Nearly all clients are Windows 8.1,
    so SMB 3 can be used. 
    What is the best way to make these file services HA?
    1. The easiest way would probably be to migrate these fileserver resources to a dedicated LUN on the MSA 2000, and to add a "general fileserver role" to the existing Hyper-V cluster. But is it supported and a good solution to provide Hyper-V VMs
    and HA file services on the same cluster (even when the performance requirements for file services are not high)? Or does this configuration affect the Hyper-V VM performance too much?
    2. Is it better to create a two node guest cluster with "Shared VHDX" for the file services? I'm not sure if this would even work. Because we had "Persistent Reservation" warnings when creating the Hyper-V cluster with the MSA 2000. According "http://blogs.msdn.com/b/clustering/archive/2013/05/24/10421247.aspx",
    these warnings are normal with block storage and can be ignored when we never want to create Windows storage pools or storage spaces. But the Hyper-V MMC shows that "shared VHDX" work with "persistent reservations". 
    3. Are there other possibilities to provide HA file services with this configuration without buying new HW? (Remark: DFSR with two independent fileservers is probably not a good solution; we have a lot of data that changes frequently.)
    Thank you in advance for any advice and recommendations!
    Franz

    Hi Franz,
    If you are not going to be using Storage Spaces in the cluster, this is a warning that you can safely ignore. 
    It passes the normal SCSI-3 Persistent Reservation tests, so you are good with those. Additionally, once the cluster is in place you can install Cluster-Aware Updating (CAU), which will automatically install the cluster updates.
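    For reference, a minimal sketch of starting an updating run with the CAU PowerShell cmdlets (the cluster name is a placeholder; see the linked FAQ for the self-updating configuration):
        Import-Module ClusterAwareUpdating
        # One-off updating run: each node is drained, patched and resumed in turn
        Invoke-CauRun -ClusterName "HVCluster" -Force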
    The related KB:
    Requirements and Best Practices for Cluster-Aware Updating
    https://technet.microsoft.com/en-us/library/jj134234.aspx
    Cluster-Aware Updating: Frequently Asked Questions
    https://technet.microsoft.com/en-us/library/hh831367.aspx
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • SMB3 for Hyper-V Cluster

    I'm contemplating using the much hyped SMB3 backed Hyper-V cluster. I just have a few questions.
    1. Is there a way for the SMB3 share to be HA?
    2. Is it easily scalable / can I add storage live without downtime?
    3. Is there any performance or reliability advantage over iSCSI attached storage?
    This is assuming the data within the VHDs is OS and general company data, not large CAD or multimedia data.
    I can probably google most of this knowledge but I'm looking for the confirmation from someone who has done / is doing it. Whitepapers can be not so helpful sometimes and sales guys usually have to refer me to their sales engineer. Thanks in advance.
    This topic first appeared in the Spiceworks Community

    Are there any ways around this limitation without having to install 3rd party software?
    I'm surprised I wasn't able to find much about this on any of my searches.
    Run your workload inside a virtual machine. Configure a guest VM cluster between a pair of VMs running Windows Server 2012 R2 and let the built-in MSFT iSCSI target do the failover. See for reference:
    Configure MSFT iSCSI Target for HA
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    (Yes, it would add virtualization overhead since all I/O would be routed over VMBus, and you'll still be active-passive since the MSFT target cannot do active-active, but if you don't want to use third-party software and don't care much about performance, that's a
    viable way to go.)
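    As a minimal sketch of the first step, the built-in iSCSI Target Server feature can be installed on each guest cluster VM with PowerShell (clustering the target afterwards follows the linked guide):
        Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools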
    Good luck!
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Hyper-V Cluster in VMM

    I am trying to build a Hyper-V cluster with VMM 2012 R2 but require some advice as it is not working how I want it to.
    I have 2 Hyper-V servers, both with their own local storage and 1 iSCSI disk shared between them. I am trying to cluster the servers so that the shared iSCSI disk becomes a shared volume while maintaining the ability to use the local storage as well - some
    VMs will run from local storage while others will run from the CSV.
    The issue I'm having is that when I cluster the 2 servers the iSCSI disk does not show up in VMM as a shared volume. In Windows Explorer the disk has the cluster icon but in VMM there is nothing. In the cluster properties I can add a shared volume... but
    it asks for a logical node which I cannot create because I have no storage pools (server manager says no groups of disks are available to pool).
    I also noticed when I clustered the servers my 2 file shares to their local storage disappeared from VMM which isn't what I want.
    Can someone please advise, or link to, a way to achieve my desired configuration?
    Cheers,
    MrGoodBytes
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

    Hi MrGoodBytes,
    Unfortunately, the available information is not enough to get a clear view of the behavior that occurred. Could you provide more information about your environment? For example,
    the server version involved, the system log entries recorded when the problem occurs, and screenshots would be the most helpful information.
    Before you create the cluster we strongly recommend that you run cluster validation. If you suspect the cluster may have an issue, rerun the validation and then
    post the warning and error sections of the validation report; this report will quickly locate potential cluster issues.
    A disk witness is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. A failover cluster has a disk witness only if this
    is specified as part of the quorum configuration.
    Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/zh-cn/library/jj612870.aspx
    I am not familiar with SCVMM, so please refer to the following related KBs to confirm that your steps for adding shared storage are correct.
    How to Configure Storage on a Hyper-V Host Cluster in VMM
    http://technet.microsoft.com/en-us/library/gg610692.aspx
    Configuring Storage in VMM
    http://technet.microsoft.com/en-us/library/gg610600.aspx
    More information:
    How to add storage to Clustered Shared Volumes in Windows Server 2012
    http://blogs.msdn.com/b/clustering/archive/2012/04/06/10291490.aspx
    Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/zh-cn/library/jj612870.aspx
    Event Logs
    http://technet.microsoft.com/en-us/library/cc722404.aspx
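    As a minimal illustration of the "add storage to Cluster Shared Volumes" step linked above (the disk name is a placeholder), something along these lines should surface the clustered iSCSI disk as a CSV so VMM can pick it up:
        # Find the clustered disk, then add it to Cluster Shared Volumes
        Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'
        Add-ClusterSharedVolume -Name "Cluster Disk 1"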
    I’m glad to be of help to you!

  • How to use a Fibre Channel matrix for a Hyper-V Cluster

    Hi
    I created a Hyper-V cluster (2012 R2) and have a Fibre Channel matrix (4 TB). Is it better to create one big LUN for Hyper-V storage or two smaller LUNs (2 x 2 TB)? Which will give better I/O? All disks used in the matrix are the same.
    Thank you for help
    Kind Regards Tomasz

    Hi Yukio,
    I agree with Tim; the best way is to contact the hardware vendor about the disk construction of the FC storage.
    Based on my understanding, if these "basic disks" are the same and controlled by the same controller, I think dividing the space will not change the I/O; the total amount of I/O is equal.
    Best Regards
    Elton Ji

  • Using single SMB share with multiple Hyper-V clusters

    Hello,
    I'm trying to find out if I can use a single SMB share with multiple Hyper-V Clusters. Looking at:
    How to Assign SMB 3.0 File Shares to Hyper-V Hosts and Clusters in VMM
    I think it's possible. Since the File Server is going to handle the file locking it shouldn't be a problem.
    Has anyone tried that?
    Thank you in advance!

    Hello,
    I'm not sure that's possible; I get this from the statement: "Assign the share—Assign
    the share to a virtual machine host or cluster."
    Even if it worked I wouldn't do that. Why don't you just create multiple shares?

  • Hyper V Cluster Migration - Options

    Hello,
    I have a Server 2012 4-node cluster that I need to migrate to new hardware along with all the VMs. The new hardware will be a 2-node Server 2012 R2 cluster. Can I please have some advice on my migration options? I am using iSCSI SAN storage but have limited
    capacity available to present to the new cluster, as it is all being used by the current one.
    I was hoping I could add the new hosts to the existing cluster but that does not appear to be the case. So I think my only options now would be to either use the Copy Cluster Roles wizard or the import/export process, found here: http://technet.microsoft.com/en-us/library/dn486792.aspx
    If I use the Copy Cluster Roles wizard, should I set up the storage on the new hosts, as in make it available via the iSCSI initiator, before I create the cluster? I just have concerns about giving access to the same CSV storage that is currently live on the current
    cluster. I am using SCVMM 2012 for management. Many thanks. Carl.

    Hi Alexey, thanks for your reply and sorry for my delay. I ended up setting up the new cluster, giving it some storage and live migrating the VMs - this was taking a long time, so basically I completed what you explained above - I used the Copy Cluster Roles
    wizard, turned off the VMs on the old cluster, brought the CSV online on the new cluster, then turned the machines on. It was actually pretty much issue free, which was a nice surprise. Yes, there was also a chunk of downtime but luckily I managed to get approval.
    Thanks, Carl.

  • Unplanned failover in a Hyper-V cluster vs unplanned failover in an ordinary (not Hyper-V) cluster

    Hello!
    Please excuse me if you think my question is silly, but before deploying something  in a production environment I'd like to dot the i's  and cross the t's.
    1) Suppose there's a two node cluster with the Hyper-V role that hosts a number of highly available VMs.
    If both cluster nodes are up and running, an administrator can initiate a planned failover which will transfer all VMs,
    including their system state, to another node without downtime.
    In case any cluster node goes down unexpectedly, an unplanned failover fires that will transfer all VMs to another node
    WITHOUT their system state. As far as I understand this can lead to some data loss.
    http://itknowledgeexchange.techtarget.com/itanswers/how-does-live-migration-differ-from-vm-failover/
    If, for example, I have an Exchange VM and it is transferred to the second node during an unplanned failover in the Hyper-V cluster, I will lose some data by design.
    2) Suppose there's a two node cluster with a clustered Exchange installation: in case one node crashes, the other takes over without any data loss.
    Conclusion: it's more disaster resilient to implement a server role in an "ordinary" cluster than to deploy it inside a VM in the Hyper-V cluster.
    Is it correct?
    Thank you in advance,
    Michael

    "And if this "anything in memory and any active threads" is so large that can take up to 13m15s to transfer during Live Migration it will be lost."
    First, that 13m15s required to live migrate all your VMs is not the time it takes to move individual VMs.  By default, Hyper-V is set to move a maximum of 2 VMs at a time.  You can change that, but it would be foolish to increase that value if
    all you have is a single 1GE network.  The other VMs will be queued.
    Secondly, you are getting that amount of time confused with what is actually happening.  Think of a single VM.  Let's even assume that it has so much memory and is so active that it takes 13 minutes to live migrate.  (Highly unlikely, even
    on a 1 GE NIC).  During that 13 minutes the VM takes to live migrate, the VM continues to perform normally.  In fact, if something happens in the middle of the LM, causing the LM to fail, absolutely nothing is lost because the VM is still operating
    on the original host.
    Now let's look at what happens when the host on which that VM is running fails, causing a restart of the VM on another node of the cluster.  The VM is doing its work reading and writing to its data files.  At that instance in time when the host
    fails, the VM may have some unwritten data buffers in memory.  Since the host fails, the VM crashes, losing whatever it had in memory at the instant in time.  It is not going to lose any 13 minutes of data.  In fact, if you have an application
    that is processing data at this volume, you most likely have something like SQL running.  When the VM goes down, the cluster will automatically restart the VM on another node of the cluster.  SQL will automatically replay transaction logs to recover
    to the best of its ability.
    Is there a possibility of data loss?  Yes, a very tiny possibility for a very small amount.  Is there a possibility of data corruption?  Yes, a very, very tiny possibility, just like with a physical machine.
    The amount of time required for a live migration is meaningless when it comes to potential data loss of that VM.  The potential data loss is pretty much the same as if you had it running on a physical machine, the machine crashed, and then restarted.
    "clustered applicationsDO NOT STOP working during unplanned failover (so there is no recovery time), "
    Not exactly true.  Let's use SQL as an example again.  When SQL is instsalled in a cluster, you install at a minimum one instance, but you can have multiple instances.  When the node on which the active instance is running fails, there is
    a brief pause in service while the instance starts on the other node.  Depending on transactions outstanding, last write, etc., it will take a little bit of time for the SQL instance to be ready to start handling requests on the new node.
    Yes, there is a definite difference between restarting the entire VM (just the VM is clustered) and clustering the application.  Recovery time is about the biggest issue.  As you have noted, restarting a VM, i.e. rebooting it, takes time. 
    And because it takes a longer period of time, there is a good chance that clients will be unable to access the resource for a period of time, maybe as much as 1-5 minutes depending upon a lot of different factors, whereas with a clustered application, the
    clients may be unable to access for up to a minute or so.
    However, the amount of data potentially lost is quite dependent upon the application.  SQL is designed to recover nicely in either environment, and it is likely not to lose any data.  Sequential writing applications will be dependent upon things
    like disk cache held in memory - large caches means higher probability of losing data.  No disk cache means there is not likely to be any loss of data.
    .:|:.:|:. tim

  • Cluster Quorum Disk failing inside Guest cluster VMs in Hyper-V Cluster using Virtual Disk Sharing Windows Server 2012 R2

    Hi, I'm having a problem in a VM Guest cluster using Windows Server 2012 R2 and virtual disk sharing enabled. 
    It's a SQL 2012 cluster, which has around 10 VHDX disks shared this way. All the VHDX files are inside LUNs on a SAN. These LUNs are presented to all clustered members of the Windows Server 2012 R2 Hyper-V cluster, via Cluster Shared Volumes.
    Yesterday a very strange problem happened: both the Quorum disk and the DTC disk had their information completely erased. The VHDX disks themselves were there, but the data inside was gone.
    The SQL admin had to recreate both disks, but now we don't know if this issue was related to the virtualization platform or another event inside the cluster itself.
    Right now I'm seeing these errors on one of the guest VMs:
     Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1069
    Task Category: Resource Control Manager
    Level:         Error
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    Cluster resource 'Quorum-HDD' of type 'Physical Disk' in clustered role 'Cluster Group' failed.
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1069</EventID>
        <Version>1</Version>
        <Level>2</Level>
        <Task>3</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14140</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="ResourceName">Quorum-HDD</Data>
        <Data Name="ResourceGroup">Cluster Group</Data>
        <Data Name="ResTypeDll">Physical Disk</Data>
      </EventData>
    </Event>
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1558
    Task Category: Quorum Manager
    Level:         Warning
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    The cluster service detected a problem with the witness resource. The witness resource will be failed over to another node within the cluster in an attempt to reestablish access to cluster configuration data.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1558</EventID>
        <Version>0</Version>
        <Level>3</Level>
        <Task>42</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14139</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="NodeName">ServerDB02</Data>
      </EventData>
    </Event>
    We don't know if this can happen again, and what if this happens on a disk with data?  We don't know if this is related to the virtual disk sharing technology or anything related to virtualization, but I'm asking here to find out if it is a possibility.
    Any ideas are appreciated.
    Thanks.
    Eduardo Rojas

    Hi,
    Please refer to the following link:
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx#.Ux172HnxtNA
    Best Regards,
    Vincent Wu
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • Server 2012 R2 Hyper-V Cluster, VM blue screens after migration between nodes.

    I currently have a two node Server 2012 R2 Hyper-V Cluster (fully patched) with a Windows Server 2012 R2 iSCSI target.
    The VMs run fine all day long, but when I try to do a live/quick migration the VM blue screens after about 20 minutes. The blue screen reports a "CRITICAL_STRUCTURE_CORRUPTION".
    I'm beginning to think it might be down to the CPUs, as one system has an E5-2640 v2 and the other one has an E5-2670 v3. Should I be able to migrate between these two systems with these types of CPU?
    Tim

    Sorry Tim, is that all 50 blue screen if live migrated?
    Are they all on the latest Integration Services? Does a cluster validation complete successfully? Are the hosts patched to the same level?
    The fact that if you power them off and then migrate them they boot fine does point to a processor incompatibility, where the memory BIN file is not accepted on the new host.
    A bit of a long shot, but the only other thing I can think of off the top of my head, if the compatibility option is checked, is to check the location of the BIN file when the VM is running to make sure it's in the same place as the VHD\VHDX in the
    CSV storage where the VM is located, and not somewhere on the local host like C:\ProgramData\..., which would stop it being migrated to the new host when the VM is live migrated.
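    If mismatched CPU generations turn out to be the cause, a minimal sketch of enabling processor compatibility mode from PowerShell (the VM name is a placeholder; the VM must be powered off to change the setting):
        Stop-VM -Name "VM01"
        # Enable "Migrate to a physical computer with a different processor version"
        Set-VMProcessor -VMName "VM01" -CompatibilityForMigrationEnabled $true
        Start-VM -Name "VM01"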
    Kind Regards
    Michael Coutanche
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
