Hyper-V Cluster Name offline

We have a Windows Server 2012 Hyper-V cluster whose Cluster Name resource is offline, and we can't migrate VMs to the other Hyper-V host. We see these event errors in Failover Cluster Manager:
The description for Event ID 1069 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Cluster Name
Cluster Group
Network Name

The description for Event ID 1254 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Cluster Group

The description for Event ID 1155 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
ACMAIL
3604536
Any help or info is appreciated.
Thank you!
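
For reference, the state of the core cluster resources can be checked from PowerShell on either node. This is a minimal diagnostic sketch, assuming the default resource and group names ("Cluster Name", "Cluster Group") shown in the events above:

# Run in an elevated PowerShell session on either node
Import-Module FailoverClusters

# Overall node and group state
Get-ClusterNode
Get-ClusterGroup

# Drill into the core resources; "Cluster Name" and "Cluster IP Address" are the usual suspects
Get-ClusterResource | Where-Object { $_.OwnerGroup -eq "Cluster Group" } |
    Format-Table Name, State, OwnerNode, ResourceType

# Try to bring the core group back online, then grab the cluster log to see why it fails
Start-ClusterGroup -Name "Cluster Group"
Get-ClusterLog -TimeSpan 15 -Destination C:\Temp   # writes Cluster.log for each node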

Here is the network validation.  Any thoughts?
Failover Cluster Validation Report
      Node: ACHV01.AshtaChemicals.local (Validated)
      Node: ACHV02.AshtaChemicals.local (Validated)
      Started: 8/6/2014 5:04:47 PM
      Completed: 8/6/2014 5:05:22 PM
The Validate a Configuration Wizard must be run after any change is made to the configuration of the cluster or hardware.
Results by Category
      Name | Result Summary
      Network | Warning
Network
      Name | Result
      List Network Binding Order | Success
      Validate Cluster Network Configuration | Success
      Validate IP Configuration | Warning
      Validate Multiple Subnet Properties | Success
      Validate Network Communication | Success
      Validate Windows Firewall Configuration | Success
Overall Result
  Testing has completed for the tests you selected. You should review the
  warnings in the Report. A cluster solution is supported by Microsoft only if
  it passes all cluster validation tests.
List Network Binding Order
  Description: List the order in which networks are bound to the adapters on
  each node.
  ACHV01.AshtaChemicals.local
        Binding Order | Adapter | Speed
        iSCSI3 | Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3 | 1000 Mbit/s
        Ethernet 3 | Intel(R) PRO/1000 PT Quad Port LP Server Adapter | Unavailable
        Mgt - Heartbeat | Microsoft Network Adapter Multiplexor Driver #4 | 2000 Mbit/s
        Mgt - LiveMigration | Microsoft Network Adapter Multiplexor Driver #3 | 2000 Mbit/s
        Mgt | Microsoft Network Adapter Multiplexor Driver | 2000 Mbit/s
        iSCSI2 | Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37 | 1000 Mbit/s
        3 | Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) | Unavailable
  ACHV02.AshtaChemicals.local
        Binding Order | Adapter | Speed
        Mgt - Heartbeat | Microsoft Network Adapter Multiplexor Driver #4 | 2000 Mbit/s
        Mgt - LiveMigration | Microsoft Network Adapter Multiplexor Driver #3 | 2000 Mbit/s
        Mgt | Microsoft Network Adapter Multiplexor Driver #2 | 2000 Mbit/s
        iSCSI1 | Broadcom NetXtreme Gigabit Ethernet #7 | 1000 Mbit/s
        NIC2 | Broadcom NetXtreme Gigabit Ethernet | Unavailable
        SLOT 5 2 | Broadcom NetXtreme Gigabit Ethernet | Unavailable
        iSCSI2 | Broadcom NetXtreme Gigabit Ethernet | 1000 Mbit/s
Validate Cluster Network Configuration
  Description: Validate the cluster networks that would be created for these
  servers.
  Network: Cluster Network 1
  DHCP Enabled: False
  Network Role: Disabled
  One or more interfaces on this network are connected to an iSCSI Target. This
  network will not be used for cluster communication.
        Prefix: 192.168.131.0/24

        Network Interface: ACHV01.AshtaChemicals.local - iSCSI3
        DHCP Enabled: False
        Connected to iSCSI target: True
        IP Address: 192.168.131.113 (prefix length 24)

        Network Interface: ACHV02.AshtaChemicals.local - iSCSI2
        DHCP Enabled: False
        Connected to iSCSI target: True
        IP Address: 192.168.131.121 (prefix length 24)
  Network: Cluster Network 2
  DHCP Enabled: False
  Network Role: Internal
        Prefix: 192.168.141.0/24

        Network Interface: ACHV01.AshtaChemicals.local - Mgt - Heartbeat
        DHCP Enabled: False
        Connected to iSCSI target: False
        IP Address: 192.168.141.10 (prefix length 24)

        Network Interface: ACHV02.AshtaChemicals.local - Mgt - Heartbeat
        DHCP Enabled: False
        Connected to iSCSI target: False
        IP Address: 192.168.141.12 (prefix length 24)
  Network: Cluster Network 3
  DHCP Enabled: False
  Network Role: Internal
        Prefix: 192.168.140.0/24

        Network Interface: ACHV01.AshtaChemicals.local - Mgt - LiveMigration
        DHCP Enabled: False
        Connected to iSCSI target: False
        IP Address: 192.168.140.10 (prefix length 24)

        Network Interface: ACHV02.AshtaChemicals.local - Mgt - LiveMigration
        DHCP Enabled: False
        Connected to iSCSI target: False
        IP Address: 192.168.140.12 (prefix length 24)
  Network: Cluster Network 4
  DHCP Enabled: False
  Network Role: Enabled
        Prefix: 10.1.1.0/24

        Network Interface: ACHV01.AshtaChemicals.local - Mgt
        DHCP Enabled: False
        Connected to iSCSI target: False
        IP Address: 10.1.1.4 (prefix length 24)

        Network Interface: ACHV02.AshtaChemicals.local - Mgt
        DHCP Enabled: False
        Connected to iSCSI target: False
        IP Address: 10.1.1.5 (prefix length 24)
  Network: Cluster Network 5
  DHCP Enabled: False
  Network Role: Disabled
  One or more interfaces on this network are connected to an iSCSI Target. This
  network will not be used for cluster communication.
        Prefix: 192.168.130.0/24

        Network Interface: ACHV01.AshtaChemicals.local - iSCSI2
        DHCP Enabled: False
        Connected to iSCSI target: True
        IP Address: 192.168.130.112 (prefix length 24)

        Network Interface: ACHV02.AshtaChemicals.local - iSCSI1
        DHCP Enabled: False
        Connected to iSCSI target: True
        IP Address: 192.168.130.121 (prefix length 24)
  Verifying that each cluster network interface within a cluster network is
  configured with the same IP subnets.
  Examining network Cluster Network 1.
  Network interface ACHV01.AshtaChemicals.local - iSCSI3 has addresses on all
  the subnet prefixes of network Cluster Network 1.
  Network interface ACHV02.AshtaChemicals.local - iSCSI2 has addresses on all
  the subnet prefixes of network Cluster Network 1.
  Examining network Cluster Network 2.
  Network interface ACHV01.AshtaChemicals.local - Mgt - Heartbeat has addresses
  on all the subnet prefixes of network Cluster Network 2.
  Network interface ACHV02.AshtaChemicals.local - Mgt - Heartbeat has addresses
  on all the subnet prefixes of network Cluster Network 2.
  Examining network Cluster Network 3.
  Network interface ACHV01.AshtaChemicals.local - Mgt - LiveMigration has
  addresses on all the subnet prefixes of network Cluster Network 3.
  Network interface ACHV02.AshtaChemicals.local - Mgt - LiveMigration has
  addresses on all the subnet prefixes of network Cluster Network 3.
  Examining network Cluster Network 4.
  Network interface ACHV01.AshtaChemicals.local - Mgt has addresses on all the
  subnet prefixes of network Cluster Network 4.
  Network interface ACHV02.AshtaChemicals.local - Mgt has addresses on all the
  subnet prefixes of network Cluster Network 4.
  Examining network Cluster Network 5.
  Network interface ACHV01.AshtaChemicals.local - iSCSI2 has addresses on all
  the subnet prefixes of network Cluster Network 5.
  Network interface ACHV02.AshtaChemicals.local - iSCSI1 has addresses on all
  the subnet prefixes of network Cluster Network 5.
  Verifying that, for each cluster network, all adapters are consistently
  configured with either DHCP or static IP addresses.
  Checking DHCP consistency for network: Cluster Network 1. Network DHCP status
  is disabled.
  DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
  iSCSI3 matches network Cluster Network 1.
  DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
  iSCSI2 matches network Cluster Network 1.
  Checking DHCP consistency for network: Cluster Network 2. Network DHCP status
  is disabled.
  DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
  - Heartbeat matches network Cluster Network 2.
  DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
  - Heartbeat matches network Cluster Network 2.
  Checking DHCP consistency for network: Cluster Network 3. Network DHCP status
  is disabled.
  DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
  - LiveMigration matches network Cluster Network 3.
  DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
  - LiveMigration matches network Cluster Network 3.
  Checking DHCP consistency for network: Cluster Network 4. Network DHCP status
  is disabled.
  DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
  matches network Cluster Network 4.
  DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
  matches network Cluster Network 4.
  Checking DHCP consistency for network: Cluster Network 5. Network DHCP status
  is disabled.
  DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
  iSCSI2 matches network Cluster Network 5.
  DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
  iSCSI1 matches network Cluster Network 5.
Validate IP Configuration
  Description: Validate that IP addresses are unique and subnets configured
  correctly.
  ACHV01.AshtaChemicals.local
        Adapter Name: iSCSI3
        Adapter Description: Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3
        Physical Address: 00-26-55-DB-CF-73
        Status: Operational
        DNS Servers:
        IP Address: 192.168.131.113 (prefix length 24)

        Adapter Name: Mgt - Heartbeat
        Adapter Description: Microsoft Network Adapter Multiplexor Driver #4
        Physical Address: 78-2B-CB-3C-DC-F5
        Status: Operational
        DNS Servers: 10.1.1.2, 10.1.1.8
        IP Address: 192.168.141.10 (prefix length 24)

        Adapter Name: Mgt - LiveMigration
        Adapter Description: Microsoft Network Adapter Multiplexor Driver #3
        Physical Address: 78-2B-CB-3C-DC-F5
        Status: Operational
        DNS Servers: 10.1.1.2, 10.1.1.8
        IP Address: 192.168.140.10 (prefix length 24)

        Adapter Name: Mgt
        Adapter Description: Microsoft Network Adapter Multiplexor Driver
        Physical Address: 78-2B-CB-3C-DC-F5
        Status: Operational
        DNS Servers: 10.1.1.2, 10.1.1.8
        IP Address: 10.1.1.4 (prefix length 24)

        Adapter Name: iSCSI2
        Adapter Description: Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37
        Physical Address: 78-2B-CB-3C-DC-F7
        Status: Operational
        DNS Servers:
        IP Address: 192.168.130.112 (prefix length 24)

        Adapter Name: Local Area Connection* 12
        Adapter Description: Microsoft Failover Cluster Virtual Adapter
        Physical Address: 02-61-1E-49-32-8F
        Status: Operational
        DNS Servers:
        IP Address: fe80::cc2f:d769:fe24:3d04%23 (prefix length 64)
        IP Address: 169.254.2.195 (prefix length 16)

        Adapter Name: Loopback Pseudo-Interface 1
        Adapter Description: Software Loopback Interface 1
        Physical Address:
        Status: Operational
        DNS Servers:
        IP Address: ::1 (prefix length 128)
        IP Address: 127.0.0.1 (prefix length 8)

        Adapter Name: isatap.{96B6424D-DB32-480F-8B46-056A11A0A6A8}
        Adapter Description: Microsoft ISATAP Adapter
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:192.168.131.113%16 (prefix length 128)

        Adapter Name: isatap.{A0353AF4-CE7F-4811-B4FC-35273C2F2C6E}
        Adapter Description: Microsoft ISATAP Adapter #3
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:192.168.130.112%18 (prefix length 128)

        Adapter Name: isatap.{FAAF4D6A-5A41-4725-9E83-689D8E6682EE}
        Adapter Description: Microsoft ISATAP Adapter #4
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:192.168.141.10%22 (prefix length 128)

        Adapter Name: isatap.{C66443C2-DC5F-4C2A-A674-2191F76E33E1}
        Adapter Description: Microsoft ISATAP Adapter #5
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:10.1.1.4%27 (prefix length 128)

        Adapter Name: isatap.{B3A95E1D-CB95-4111-89E5-276497D7EF42}
        Adapter Description: Microsoft ISATAP Adapter #6
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:192.168.140.10%29 (prefix length 128)

        Adapter Name: isatap.{7705D42A-1988-463E-9DA3-98D8BD74337E}
        Adapter Description: Microsoft ISATAP Adapter #7
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:169.254.2.195%30 (prefix length 128)
  ACHV02.AshtaChemicals.local
        Adapter Name: Mgt - Heartbeat
        Adapter Description: Microsoft Network Adapter Multiplexor Driver #4
        Physical Address: 74-86-7A-D4-C9-8B
        Status: Operational
        DNS Servers: 10.1.1.8, 10.1.1.2
        IP Address: 192.168.141.12 (prefix length 24)

        Adapter Name: Mgt - LiveMigration
        Adapter Description: Microsoft Network Adapter Multiplexor Driver #3
        Physical Address: 74-86-7A-D4-C9-8B
        Status: Operational
        DNS Servers: 10.1.1.8, 10.1.1.2
        IP Address: 192.168.140.12 (prefix length 24)

        Adapter Name: Mgt
        Adapter Description: Microsoft Network Adapter Multiplexor Driver #2
        Physical Address: 74-86-7A-D4-C9-8B
        Status: Operational
        DNS Servers: 10.1.1.8, 10.1.1.2
        IP Address: 10.1.1.5 (prefix length 24)
        IP Address: 10.1.1.248 (prefix length 24)

        Adapter Name: iSCSI1
        Adapter Description: Broadcom NetXtreme Gigabit Ethernet #7
        Physical Address: 74-86-7A-D4-C9-8A
        Status: Operational
        DNS Servers:
        IP Address: 192.168.130.121 (prefix length 24)

        Adapter Name: iSCSI2
        Adapter Description: Broadcom NetXtreme Gigabit Ethernet
        Physical Address: 00-10-18-F5-08-9C
        Status: Operational
        DNS Servers:
        IP Address: 192.168.131.121 (prefix length 24)

        Adapter Name: Local Area Connection* 11
        Adapter Description: Microsoft Failover Cluster Virtual Adapter
        Physical Address: 02-8F-46-67-27-51
        Status: Operational
        DNS Servers:
        IP Address: fe80::3471:c9bf:29ad:99db%25 (prefix length 64)
        IP Address: 169.254.1.193 (prefix length 16)

        Adapter Name: Loopback Pseudo-Interface 1
        Adapter Description: Software Loopback Interface 1
        Physical Address:
        Status: Operational
        DNS Servers:
        IP Address: ::1 (prefix length 128)
        IP Address: 127.0.0.1 (prefix length 8)

        Adapter Name: isatap.{8D7DF16A-1D5F-43D9-B2D6-81143A7225D2}
        Adapter Description: Microsoft ISATAP Adapter #2
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:192.168.131.121%21 (prefix length 128)

        Adapter Name: isatap.{82E35DBD-52BE-4BCF-BC74-E97BB10BF4B0}
        Adapter Description: Microsoft ISATAP Adapter #3
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:192.168.130.121%22 (prefix length 128)

        Adapter Name: isatap.{5A315B7D-D94E-492B-8065-D760234BA42E}
        Adapter Description: Microsoft ISATAP Adapter #4
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:192.168.141.12%23 (prefix length 128)

        Adapter Name: isatap.{2182B37C-B674-4E65-9F78-19D93E78FECB}
        Adapter Description: Microsoft ISATAP Adapter #5
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:192.168.140.12%24 (prefix length 128)

        Adapter Name: isatap.{104DC629-D13A-4A36-8845-0726AC9AE25E}
        Adapter Description: Microsoft ISATAP Adapter #6
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:10.1.1.5%33 (prefix length 128)

        Adapter Name: isatap.{483266DF-7620-4427-BE5D-3585C8D92A12}
        Adapter Description: Microsoft ISATAP Adapter #7
        Physical Address: 00-00-00-00-00-00-00-E0
        Status: Not Operational
        DNS Servers:
        IP Address: fe80::5efe:169.254.1.193%34 (prefix length 128)
  Verifying that a node does not have multiple adapters connected to the same
  subnet.
  Verifying that each node has at least one adapter with a defined default
  gateway.
  Verifying that there are no node adapters with the same MAC physical address.
  Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
  ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
  ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration.
  Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
  ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
  ACHV01.AshtaChemicals.local adapter Mgt.
  Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
  ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration and node
  ACHV01.AshtaChemicals.local adapter Mgt.
  Found duplicate physical address 74-86-7A-D4-C9-8B on node
  ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
  ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration.
  Found duplicate physical address 74-86-7A-D4-C9-8B on node
  ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
  ACHV02.AshtaChemicals.local adapter Mgt.
  Found duplicate physical address 74-86-7A-D4-C9-8B on node
  ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration and node
  ACHV02.AshtaChemicals.local adapter Mgt.
  Verifying that there are no duplicate IP addresses between any pair of nodes.
  Checking that nodes are consistently configured with IPv4 and/or IPv6
  addresses.
  Verifying that all nodes IPv4 networks are not configured using Automatic
  Private IP Addresses (APIPA).
Validate Multiple Subnet Properties
  Description: For clusters using multiple subnets, validate the network
  properties.
  Testing that the HostRecordTTL property for network name Name: Cluster1 is set
  to the optimal value for the current cluster configuration.
  HostRecordTTL property for network name Name: Cluster1 has a value of 1200.
  Testing that the RegisterAllProvidersIP property for network name Name:
  Cluster1 is set to the optimal value for the current cluster configuration.
  RegisterAllProvidersIP property for network name Name: Cluster1 has a value of
  0.
  Testing that the PublishPTRRecords property for network name Name: Cluster1 is
  set to the optimal value for the current cluster configuration.
  The PublishPTRRecords property forces the network name to register a PTR record in DNS reverse lookup, mapping the IP address back to the name.
Validate Network Communication
  Description: Validate that servers can communicate, with acceptable latency,
  on all networks.
  Analyzing connectivity results ...
  Multiple communication paths were detected between each pair of nodes.
Validate Windows Firewall Configuration
  Description: Validate that the Windows Firewall is properly configured to
  allow failover cluster network communication.
  The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV01.AshtaChemicals.local - Mgt - LiveMigration'.
  The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV01.AshtaChemicals.local - iSCSI3'.
  The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV01.AshtaChemicals.local - Mgt - Heartbeat'.
  The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV01.AshtaChemicals.local - Mgt'.
  The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV01.AshtaChemicals.local - iSCSI2'.
  The Windows Firewall on node ACHV01.AshtaChemicals.local is configured to
  allow network communication between cluster nodes.
  The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV02.AshtaChemicals.local - Mgt'.
  The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV02.AshtaChemicals.local - iSCSI2'.
  The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV02.AshtaChemicals.local - Mgt - LiveMigration'.
  The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV02.AshtaChemicals.local - Mgt - Heartbeat'.
  The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
  allow network communication between cluster nodes over adapter
  'ACHV02.AshtaChemicals.local - iSCSI1'.
  The Windows Firewall on node ACHV02.AshtaChemicals.local is configured to
  allow network communication between cluster nodes.

Similar Messages

  • 2012R2 hyper-v failover cluster Cluster name object has no DNS record created

    I’m trying to set up a 2-node WS2012 R2 cluster using WS2008 R2 AD (with DNS) but ran into an issue with DNS entry creation in AD. I also tried WS2012 AD, with the same problem.
    The individual node DNS entries were created automatically upon joining the domain, but I can’t get AD to create the DNS entry for my cluster name object automatically. AD has the cluster name computer object created, but there is no DNS record for the cluster name.
    Got the following event ID 1196 error with the info below.
    Cluster network name resource 'Cluster Name' failed registration of one or more associated DNS name(s) for the following reason:
    DNS server failure.
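
    Once the underlying DNS or permissions problem is fixed, the DNS registration can be retried from PowerShell instead of waiting for the resource's next registration cycle. A hedged sketch, assuming the default "Cluster Name" resource and that the 2012-era FailoverClusters module (which includes Update-ClusterNetworkNameResource) is in use:

    # Ask the network name resource to re-register its DNS records
    Get-ClusterResource -Name "Cluster Name" | Update-ClusterNetworkNameResource

    # Or force it by cycling the resource, then confirm the A record exists
    Stop-ClusterResource -Name "Cluster Name"
    Start-ClusterResource -Name "Cluster Name"
    Resolve-DnsName YourClusterName   # substitute the cluster's actual DNS name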

    Hi hjma29,
    How about your issue now? I just want to confirm the current situations.
    Please feel free to let us know if you need further assistance.
    Regards.

  • Cluster network name resource 'Cluster Name' failed registration of one or more associated DNS name(s) for the following reason: The handle is invalid.

    I'm stuck here trying to figure this error out.  
    2003 domain, 2012 hyper v core 3 nodes.  (I have two of these hyper V groups, hvclust2012 is the problem group, hvclust2008 is okay)
    In Failover Cluster Manager I see these errors, "Cluster network name resource 'Cluster Name' failed registration of one or more associated DNS name(s) for the following reason:  The handle is invalid."
    I restarted the host node that was listed in having the error then another node starts showing the errors.
    I tried to follow this site:  http://blog.subvertallmedia.com/2012/12/06/repairing-a-failover-cluster-in-windows-server-2012-live-migration-fails-dns-cluster-name-errors/
    Then this error shows up when doing the repair:  there was an error repairing the active directory object for 'Cluster Name'
    I looked at our domain controller and noticed I don't have access to local users and groups.  I can access our other hvclust2008 (both clusters are same version 2012).
    <image here>
    I came upon this thread:  http://social.technet.microsoft.com/Forums/en-US/85fc2ad5-b0c0-41f0-900e-df1db8625445/windows-2012-cluster-resource-name-fails-dns-registration-evt-1196?forum=winserverClustering
    Now, I'm stuck on adding a managed service account (MSA). I'm not sure if I'm way off track trying to fix this. Any advice? Thanks in advance!
    <image here>

    Thanks Elton,
    I restarted 3 hosts after applying the hotfix.  Then I did the steps below and got stuck on step 5.  That is when I get the error (image above).  There
    was an error repairing the active directory object for 'Cluster Name'.  For more data, see 'Information Details'.
    To reset the password on the affected name resource, perform the following steps:
    1. From Failover Cluster Manager, locate the name resource.
    2. Right-click on the resource, and click Properties.
    3. On the Policies tab, select "If resource fails, do not restart", and then click OK.
    4. Right-click on the resource, click More Actions, and then click Simulate Failure.
    5. When the name resource shows "Failed", right-click on the resource, click More Actions, and then click Repair.
    6. After the name resource is online, right-click on the resource, and then click Properties.
    7. On the Policies tab, select "If resource fails, attempt restart on current node", and then click OK.
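
    A rough PowerShell equivalent of the policy changes in steps 3 and 7 is sketched below; it is an assumption-laden illustration, not from the original thread. The Simulate Failure and Repair actions (steps 4-6) still have to be done in Failover Cluster Manager, and it assumes the failing resource really is named "Cluster Name" and that RestartAction values 0 and 2 map to those two policy options:

    $res = Get-ClusterResource -Name "Cluster Name"
    $res.RestartAction = 0   # roughly "If resource fails, do not restart"
    # ...perform Simulate Failure and Repair in Failover Cluster Manager...
    $res.RestartAction = 2   # roughly "If resource fails, attempt restart on current node"
    Start-ClusterResource -Name "Cluster Name"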
    Thanks

  • Cluster Quorum Disk failing inside Guest cluster VMs in Hyper-V Cluster using Virtual Disk Sharing Windows Server 2012 R2

    Hi, I'm having a problem in a VM Guest cluster using Windows Server 2012 R2 and virtual disk sharing enabled. 
    It's a SQL 2012 cluster, which has around 10 VHDX disks shared this way. All the VHDX files are inside LUNs on a SAN. These LUNs are presented to all clustered members of the Windows Server 2012 R2 Hyper-V cluster via Cluster Shared Volumes.
    Yesterday a very strange problem happened: both the Quorum disk and the DTC disk had their contents completely erased. The VHDX files themselves were there, but the data inside was gone.
    The SQL admin had to recreate both disks, but now we don't know whether this issue was related to the virtualization platform or to another event inside the cluster itself.
    Right now I'm seeing these errors on one of the VM guests:
     Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1069
    Task Category: Resource Control Manager
    Level:         Error
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    Cluster resource 'Quorum-HDD' of type 'Physical Disk' in clustered role 'Cluster Group' failed.
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1069</EventID>
        <Version>1</Version>
        <Level>2</Level>
        <Task>3</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14140</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="ResourceName">Quorum-HDD</Data>
        <Data Name="ResourceGroup">Cluster Group</Data>
        <Data Name="ResTypeDll">Physical Disk</Data>
      </EventData>
    </Event>
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1558
    Task Category: Quorum Manager
    Level:         Warning
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    The cluster service detected a problem with the witness resource. The witness resource will be failed over to another node within the cluster in an attempt to reestablish access to cluster configuration data.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1558</EventID>
        <Version>0</Version>
        <Level>3</Level>
        <Task>42</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14139</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="NodeName">ServerDB02</Data>
      </EventData>
    </Event>
    We don't know if this can happen again, what if this happens on disk with data?! We don't know if this is related to the virtual disk sharing technology or anything related to virtualization, but I'm asking here to find out if it is a possibility.
    Any ideas are appreciated.
    Thanks.
    Eduardo Rojas

    Hi,
    Please refer to the following link:
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx#.Ux172HnxtNA
    Best Regards,
    Vincent Wu

  • Add Node to Hyper-V Cluster running Server 2012 R2

    Hi All,
    I am in the process to upgrade our Hyper-V Cluster to Server 2012 R2 but I am not sure about the required Validation test.
    The situation at the moment: a 1-node cluster running Server 2012 R2 with 2 CSVs and a quorum disk, plus an additional server prepared to add to the cluster. One CSV is empty and could be used for the validation test. On the other CSV, 10 VMs are running in production.
    So when I start the Validation wizard I can select specific CSVs to test, which makes sense ;-) But the warning message is not clear to me: "TO AVOID ROLE FAILURES, IT IS RECOMMENDED THAT ALL ROLES USING CLUSTER SHARED VOLUMES BE STOPPED BEFORE THE STORAGE IS VALIDATED". Does it mean that ALL CSVs will be tested and switched offline during the test, or just the CSV I have selected in the options? I definitely have to avoid the CSV where all the VMs are running being switched offline, and also the configuration being corrupted after losing the CSV the VMs are running on.
    Can someone confirm that ONLY the selected CSV will be used for the validation test???
    Many thanks
    Markus

    Hi,
    The validation will test the selected CSV storage; if you have guest VMs running on that CSV, they must be shut down or saved before you validate it.
    Several tests will actually trigger failovers and move the disks and groups to different cluster nodes, which will cause downtime. These include Validate Disk Arbitration, Disk Failover, Multiple Arbitration, SCSI-3 Persistent Reservation, and Simultaneous Failover.
    So if you want to test a majority of the functionality of your cluster without impacting availability, exclude these tests (see the Test-Cluster sketch below).
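
    As a rough illustration (not from the original reply), the same selection can be made when running validation from PowerShell; node and disk names below are placeholders:

    # Validate everything except the storage tests
    Test-Cluster -Node Node1, Node2 -Ignore Storage

    # Or run the storage tests only against a specific, empty disk resource
    $disk = Get-ClusterResource -Name "Cluster Disk 2"   # hypothetical disk backing the empty CSV
    Test-Cluster -Node Node1, Node2 -Disk $disk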
    The related information:
    Validating a Cluster with Zero Downtime
    http://blogs.msdn.com/b/clustering/archive/2011/06/28/10180803.aspx
    Hope this helps.

  • Adding nodes to Windows Server 2008 R2 Hyper-V Cluster..

    Currently we have a 3 node Windows Server 2008 R2 Hyper-V Cluster in production. There are about 3 terrabytes worth of VMs running across these nodes.
    It is over-committed, so i've setup two new nodes to add to the cluster.
    I've done this before in a SQL cluster but never a Hyper-V cluster.
    If I don't run validation when adding the nodes, will there be downtime?
    The quorum is setup for disk majority, everything is identical on all nodes that needs to be. Shared storage is recognized and ready on the new nodes. I've gone through every checklist that Microsoft has. I'm just curious if the virtual machines will go
    offline on the current nodes when i add the two new nodes.
    Everything is identical down to the wsus updates installed. From networking to storage everything is perfect.
    I don't want to run validation as I know that'll take everything offline.

    Hi,
    It is recommended to run a validation test; you can select a custom set of tests and skip storage.
    Adding the new node to the existing cluster will not bring down the existing VMs. A sketch is below.
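
    A hedged sketch of that flow (cluster and node names are placeholders, not from the original reply):

    # Run only the non-disruptive tests before adding the nodes
    Test-Cluster -Node Node1, Node2, Node3, NewNode1, NewNode2 -Ignore Storage

    # Add the new nodes without touching existing cluster storage
    Add-ClusterNode -Cluster HVCluster01 -Name NewNode1 -NoStorage
    Add-ClusterNode -Cluster HVCluster01 -Name NewNode2 -NoStorage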
    Lai (My blog:- http://www.ms4u.info)

  • Microsoft Virtual Machine Converter VMware To Hyper-V Cluster

    I'm not sure if this should technically be in the clustering section but I have just moved from SCVMM 2012 SP1 to 2012 R2 and I kinda miss the built-in converter tool. What I used to do when converting VMware to Hyper-V was uninstall VMware Tools and then
    I would do a physical to virtual conversion on the VMware virtual machine and SCVMM would handle copying the virtual machine while it was online and register it in our Hyper-V cluster. Now, the only thing I could come up with is the Microsoft Virtual Machine
    Converter but it seems rather limited and doesn't seem to have an option to import to a cluster. So is the only option to convert it over to Hyper-V as if it were a local machine and then run another export/import process to get it in the cluster? I tried
    to point it to a CSV and while it copied the disk over, it registered the virtual machine in ProgramData (the default location). This obviously causes issues when trying to make the VM highly available. Any one have a suggested process of the best way to go
    about this? Thank you in advance for your time!

    So no matter what option I choose, V2V always shuts down the source VM during the conversion process. On the other hand, if I use the old method of uninstalling VMware tools manually and then doing a P2V instead, the source VM has to stay online for
    that type of process. Then I just have to migrate it to become highly available and that seems to accomplish what I want. The only annoying part is that I couldn't run it on my Win 8.1 Pro workstation as it requires the BITS feature to be installed
    and that only appears to be available on server editions (correct me if there is a way to get it on 8.1). I guess the documentation (I think) says that it should run on server editions only but V2V runs fine from 8.1 since it doesn't need BITS since it's
    an offline process.

  • Set cluster name and IP address ONLINE

    Hi
    I had to shut down all cluster nodes. After turned on all cluster nodes I saw in Management cluster console:
    Basic cluster resources:
    Name: Cluster1  State: Offline
    IP Address: xx.xx.xx.xx  State: Offline
    How do I set these cluster resources (cluster name, IP address) online using PowerShell?
    Kind Regards Tomasz

    Hi Yukio Seki,
    If you want to know how to bring cluster resource online please refer the following KB:
    Microsoft.FailoverClusters.PowerShell
    https://technet.microsoft.com/en-us/library/ee461009.aspx
    Failover Clusters Cmdlets in Windows PowerShell
    https://technet.microsoft.com/en-us/library/hh847239.aspx
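
    A minimal sketch of the cmdlets those pages cover, assuming the default core group and resource names:

    Import-Module FailoverClusters

    # Bring the whole core group (name + IP address) online in one step
    Start-ClusterGroup -Name "Cluster Group"

    # Or start the individual resources
    Start-ClusterResource -Name "Cluster IP Address"
    Start-ClusterResource -Name "Cluster Name"

    # Confirm the state afterwards
    Get-ClusterGroup -Name "Cluster Group" | Get-ClusterResource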
    I’m glad to be of help to you!

  • Move 2012 R2 Hyper-V Cluster to New Subnet

    I have a two-node Hyper-V cluster running on Windows Server 2012 R2. I am moving it to a remote hosting facility. So it is going to be going to a new subnet with different IP addresses than it has now. What do I need to do to make this a smooth transition?
    Is there an article about it? I haven't found it yet.
    Thanks,
    Rob

    It's a simple process.
    The IPs that you have to change are:
    - the management IPs of your nodes (2 IPs)
    - the private IPs of your nodes (LM, CSV)
    - the cluster IP address
    The cluster communicates using DNS names, so when you change the IP addresses the DNS records will be updated and the cluster will resume communication.
    What I suggest is:
    Change the nodes' private IP addresses (CSV, LM). Verify that the nodes communicate using these new IP addresses and that the cluster has updated its configuration (you should see the new IP addresses in the cluster console, under Networks).
    Change the cluster IP address; verify that DNS has been updated and that the cluster Name and IP are online (cluster console). A PowerShell sketch of this step is below.
    Change the first node's IP and verify that its DNS record has been updated.
    Change the second node's IP and verify that its DNS record has been updated.
    Now the configuration should work normally.
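
    For the cluster IP step, something along these lines should work from PowerShell. This is a sketch only; the resource name, network name and new address are placeholders for your environment:

    $ip = Get-ClusterResource -Name "Cluster IP Address"
    $ip | Set-ClusterParameter -Multiple @{
        "Network"    = "Cluster Network 1"      # cluster network at the new site
        "Address"    = "172.16.10.50"           # new cluster IP
        "SubnetMask" = "255.255.255.0"
    }
    # Cycle the network name so it re-registers in DNS with the new address
    Stop-ClusterResource -Name "Cluster Name"
    Start-ClusterResource -Name "Cluster Name"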
    Regards, Samir Farhat Infrastructure and Virtualization Consultant || Virtualization, Cloud, Azure ? Follow and Ask here https://buildwindows.wordpress.com

  • Add iSCSI LUN to Multiple Hyper-V Cluster Hosts?

    Is there a way to connect multiple Hyper-V hosts to a CSV LUN without manually logging into each and opening the iSCSI Initiator GUI?

    Is there a way to connect multiple Hyper-V hosts to a CSV LUN without manually logging into each and opening the iSCSI Initiator GUI?
    Here's a good step-by-step guide on how to do everything you want using just PowerShell. Please see:
    Configuring iSCSI storage for a Hyper-V Cluster
    http://www.hypervrockstar.com/qs-buildingahypervcluster_part3/
    This part should be of particular interest to you. See:
    Connect Nodes to iSCSI Target
    Once the target is created and configured, we need to attach the iSCSI initiator on each node to the storage. We will use MPIO to ensure the best performance and availability of storage. When we enable the MS DSM to claim all iSCSI LUNs, we must reboot the node for the setting to take effect. MPIO is utilized by creating a persistent connection to the target for each data NIC on the target server and from all iSCSI initiator NICs on our Hyper-V server. Because our Hyper-V servers are using converged networking, we only have 1 iSCSI NIC. In our example, resiliency is provided by the LBFO team we created in the last video.
    PowerShell Commands
    Set-Service -Name msiscsi -StartupType Automatic
    Start-Service msiscsi
    # a reboot is required after the claim
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
    New-IscsiTargetPortal -TargetPortalAddress 192.168.1.107
    $target = Get-IscsiTarget -NodeAddress "*HyperVCluster*"
    $target | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 192.168.1.21 -TargetPortalAddress 10.0.1.10
    You'll find a reference to "Connect-IscsiTarget" PowerShell cmdlet here:
    Connect-IscsiTarget
    https://technet.microsoft.com/en-us/library/hh826098.aspx
    Set of samples on how to control MSFT iSCSI initiator with PowerShell could be found here:
    Managing iSCSI Initiator with PowerShell
    http://blogs.msdn.com/b/san/archive/2012/07/31/managing-iscsi-initiator-connections-with-windows-powershell-on-windows-server-2012.aspx
    Good luck and happy clustering :)
    StarWind Virtual SAN clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • How to use a Fibre Channel matrix for a Hyper-V Cluster

    Hi
    I created a Hyper-V cluster (2012 R2) and have a Fibre Channel matrix (4 TB). Is it better to create one big LUN for Hyper-V storage or two smaller LUNs (2 x 2 TB)? Which will give better I/O? All disks used in the matrix are the same.
    Thank you for help
    Kind Regards Tomasz

    Hi Yukio,
    I agree with Tim; the best way is to contact the hardware vendor about the disk layout of the FC storage.
    Based on my understanding, if these "basic disks" are the same and controlled by the same controller, dividing the storage will not create any additional I/O capacity; the total amount of I/O stays the same.
    Best Regards
    Elton Ji

  • Hyper-V cluster: Unable to fail VM over to secondary host

    I am working on a Server 2012 Hyper-V Cluster. I am unable to fail my VMs from one node to the other using either LIVE or Quick migration.
    A force shutdown of VMHost01 will force a migration to VMHost02. And once we are on VMHost02 we can migrate back to VMHost01, but once that is done we can't move the VMs back to VMHost02 without a force shutdown.
    The following error pops up:
    Event ID: 21502 The Virtual Machine Management Service failed to establish a connection for a Virtual machine migration with host.... The connection attempt failed because the connected party did not properly respond after a period of time, or the established
    connection failed because connected host has failed to respond (0X8007274C)
    Here's what I noticed:
    VMMS.exe is running on VMHost02 however it is not listening on Port 6600. Confirmed this after a reboot by running netstat -a. We have tried setting this service to a delayed start.
    I have checked Firewall rules and Anti-Virus exclusions, and they are correct. I have not run the cluster validation test yet, because I'll need to schedule a period of downtime to do so.
    We can start/stop the VMMS.exe service just fine and without errors, but I am puzzled as to why it will not listen on Port 6600 anywhere. Anyone have any suggestions on how to troubleshoot this particular issue? 
    Thanks,
    Tho H. Le

    Just ran into the same issue in a 16-node cluster being managed by VMM. When trying to live migrate VMs using the VMM console the migration would fail with the following: Error 10698. Failover Cluster manager would report the following error code: Error
    (0x8007274C).
    + Validated Live Migration and Cluster networks. Everything checked out.
    + Looking in Hyper-V manager and migrations are enabled and correct networks displayed.
    + Found this particular Blog that mentions that the Virtual Machine Management service is not listening to port 6600
    http://blogs.technet.com/b/roplatforms/archive/2012/10/16/shared-nothing-migration-fails-0x8007274c.aspx
    Ran the following from an elevated command line:
    netstat -ano | findstr 6600
    Node 2 did not return anything.
    Node 1 returned the correct output:
    TCP    10.xxx.251.xxx:6600    0.0.0.0:0    LISTENING    4540
    TCP    10.xxx.252.xxx:6600    0.0.0.0:0    LISTENING    4560
    Set Hyper-V Virtual Machine Service to delayed start.
    Restarted the service; no change.
    Checked the Event Logs for Hyper-V VMMS and noted the following events - VMMS Listener started
    for Live Migration networks, and then shortly after listener stopped.
    Removed the system from the cluster and restarted - No change
    Checked this host by running gpedit.msc - could not open console: Permission Error
    Tried to run a GPO refresh (gpupdate /force), but error returned that LocalGPO could not apply registry settings. Group Policy
    processing would not continue until this was resolved.
    Checked the local group policy folder on node 2 and it was corrupt:
    C:\Windows\System32\GroupPolicy\Machine\reg.pol showed 0K for the size.
    Copied local policy folders from Node 1 to 2, and then was able to refresh the GPOs.
    Restarting the VMMS service did not change the status of the ports.
    Restarted Server, added Live Migration networks back into Hyper-V manager and now netstat output reports that VMMS service
    is listening on 6600.
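
    Not part of the original reply, but the same port check can be run remotely against every node with PowerShell (node names are placeholders):

    Invoke-Command -ComputerName Node1, Node2 -ScriptBlock {
        # Is the VMMS live-migration listener up on this node?
        Get-NetTCPConnection -LocalPort 6600 -State Listen -ErrorAction SilentlyContinue |
            Select-Object LocalAddress, LocalPort, OwningProcess
    }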

  • Unplanned failover in a Hyper-V cluster vs unplanned failover in an ordinary (not Hyper-V) cluster

    Hello!
    Please excuse me if you think my question is silly, but before deploying something  in a production environment I'd like to dot the i's  and cross the t's.
    1) Suppose there's a two node cluster with a Hyper-V role that hosts a number of highly available VM.
    If both cluster nodes are up and running, an administrator can initiate a planned failover which will transfer all VMs, including their system state, to another node without downtime.
    In case any cluster node goes down unexpectedly the unplanned failover fires up that will transfer all VMs to another node
    WITHOUT their system state. As far as I understand this can lead to some data loss.
    http://itknowledgeexchange.techtarget.com/itanswers/how-does-live-migration-differ-from-vm-failover/
    If, for example, I have an Exchange vm and it would be transfered to the second node during unplanned failover in the Hyper-V cluster I will lose some data by design.
    2) Suppose there's a two node cluster with the Exchange clustered installation: in case one node crashes the other takes over without any data loss.
    Conclusion: it is more disaster-resilient to implement a server role in an "ordinary" cluster than to deploy it inside a VM in a Hyper-V cluster.
    Is it correct?
    Thank you in advance,
    Michael

    "And if this "anything in memory and any active threads" is so large that can take up to 13m15s to transfer during Live Migration it will be lost."
    First, that 13m15s required to live migrate all your VMs is not the time it takes to move individual VMs.  By default, Hyper-V is set to move a maximum of 2 VMs at a time.  You can change that, but it would be foolish to increase that value if
    all you have is a single 1GE network.  The other VMs will be queued.
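
    As an aside (not from the original reply), that per-host limit can be checked or changed; a small sketch:

    # Current simultaneous live / storage migration limits on this host
    Get-VMHost | Select-Object MaximumVirtualMachineMigrations, MaximumStorageMigrations

    # Raise the limit only if the migration network can actually carry the extra traffic
    Set-VMHost -MaximumVirtualMachineMigrations 4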
    Secondly, you are getting that amount of time confused with what is actually happening.  Think of a single VM.  Let's even assume that it has so much memory and is so active that it takes 13 minutes to live migrate.  (Highly unlikely, even
    on a 1 GE NIC).  During that 13 minutes the VM takes to live migrate, the VM continues to perform normally.  In fact, if something happens in the middle of the LM, causing the LM to fail, absolutely nothing is lost because the VM is still operating
    on the original host.
    Now let's look at what happens when the host on which that VM is running fails, causing a restart of the VM on another node of the cluster.  The VM is doing its work reading and writing to its data files.  At that instance in time when the host
    fails, the VM may have some unwritten data buffers in memory.  Since the host fails, the VM crashes, losing whatever it had in memory at the instant in time.  It is not going to lose any 13 minutes of data.  In fact, if you have an application
    that is processing data at this volume, you most likely have something like SQL running.  When the VM goes down, the cluster will automatically restart the VM on another node of the cluster.  SQL will automatically replay transaction logs to recover
    to the best of its ability.
    Is there a possibility of data loss?  Yes, a very tiny possibility for a very small amount.  Is there a possibility of data corruption?  Yes, a very, very tiny possibility, just like with a physical machine.
    The amount of time required for a live migration is meaningless when it comes to potential data loss of that VM.  The potential data loss is pretty much the same as if you had it running on a physical machine, the machine crashed, and then restarted.
    "clustered applications DO NOT STOP working during unplanned failover (so there is no recovery time)"
    Not exactly true.  Let's use SQL as an example again.  When SQL is installed in a cluster, you install at a minimum one instance, but you can have multiple instances.  When the node on which the active instance is running fails, there is a brief pause in service while the instance starts on the other node.  Depending on transactions outstanding, last write, etc., it will take a little bit of time for the SQL instance to be ready to start handling requests on the new node.
    Yes, there is a definite difference between restarting the entire VM (just the VM is clustered) and clustering the application.  Recovery time is about the biggest issue.  As you have noted, restarting a VM, i.e. rebooting it, takes time. 
    And because it takes a longer period of time, there is a good chance that clients will be unable to access the resource for a period of time, maybe as much as 1-5 minutes depending upon a lot of different factors, whereas with a clustered application, the
    clients may be unable to access for up to a minute or so.
    However, the amount of data potentially lost is quite dependent upon the application.  SQL is designed to recover nicely in either environment, and it is likely not to lose any data.  Sequential writing applications will be dependent upon things
    like disk cache held in memory - large caches means higher probability of losing data.  No disk cache means there is not likely to be any loss of data.
    .:|:.:|:. tim

  • How to determine the Cluster name ?

    Grid version : 11.2.0.3 on Solaris 10
    When we start installing Grid Infrastructure, we specify a Cluster Name.
    Question1.
    How can I determine the cluster name of a 11.2 RAC Cluster ? We maintain a DB inventory. For each cluster , we want to specify the Cluster name.Hence we require the cluster name .
    Question2.
    The cluster name is of no functional importance. Right ? I don't remember using Cluster Name in any commands (srvctl, crsctl,...)

    To determine the cluster name:
    [oracle@iron1 ~]$ olsnodes -c
    ironcluster
    [oracle@iron1 ~]$
    I wouldn't say it has much technical significance, though it is used as a default in a few other names.
    John Watson
    Oracle Certified Master DBA
    http://skillbuilders.com

  • 2012 R2 Hyper-V Cluster two node design with abundance 1Gbps NICs and FC storage

    Hello All,
    First post so please be gentle!
    We are currently in the process of building and testing the upgrade to a two-node 2012 R2 Hyper-V cluster.
    Two hosts built with Datacenter 2012 R2 will host approx. 30 VMs.
    Shared storage will be a fault-tolerant FC connection.
    10 (yes, 10!) 1 Gbps NICs are available, Intel i350.
    Trying to decide between teaming interfaces using native LBFO with the 2008 "style" of un-converged networking, or teaming up most interfaces and using QoS. I can find numerous examples using 10 Gbps NICs and converged networking, but 10 Gbps networking isn't an option right now.
    Recommendations appreciated.
    thanks

    Hi Sir,
    >>trying to decide on teaming interfaces using native LBFO and the 2008 'style' of using un-converged networking, or teaming up most interfaces and using QoS. 
    The following link detailed the teaming configuration and it's applicable scenario (server 2012):
    http://www.aidanfinn.com/?p=14039
    Also please refer to the document for 2012R2 LBFO :
    http://www.microsoft.com/en-us/download/details.aspx?id=40319
    In Server 2012 R2 there is a new setting, "Dynamic", for the load balancing mode, and Dynamic is the recommended choice.
    If you can accept 1 Gbps maximum bandwidth for each VM, I would suggest the LBFO mode: Switch Independent / Dynamic / None (all adapters active). A sketch is below.
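
    A hedged sketch of that configuration (team, adapter and switch names are placeholders):

    # Switch-independent team with Dynamic load balancing, all members active
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Bind the Hyper-V virtual switch to the team and reserve bandwidth by weight
    New-VMSwitch -Name "vSwitch-Converged" -NetAdapterName "ConvergedTeam" `
        -AllowManagementOS $true -MinimumBandwidthMode Weight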
    Best Regards,
    Elton Ji
