Hyper-V 2012 R2 Cluster Creation Fails

I am trying to create a 2-node Hyper-V 2012 R2 cluster. Cluster validation passes with no errors or warnings, but the cluster creation fails.
The error is similar to the one described here.
In that case he solved it by joining the nodes to a Windows 2012 domain; we don't have that option in our environment.
The System log shows Event 7024 ("The cluster service terminated... The cluster join operation failed") and Event 7031 ("The Cluster Service terminated unexpectedly").
Anyone have an idea?
Todd

Hi Todd,
For troubleshooting, please try creating a new OU, moving the cluster nodes' computer accounts into that OU, and blocking GPO inheritance on it, then restart the nodes.
After this, log on to the cluster nodes with a domain admin account and try to build the cluster again.
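If you retry from PowerShell, here is a minimal sketch (cluster name, node names, and IP are placeholders, not taken from the thread); -NoStorage rules the disks out, and Get-ClusterLog gathers detail on the 7024/7031 events:
Test-Cluster -Node "Node1","Node2"
New-Cluster -Name "HVCluster" -Node "Node1","Node2" -StaticAddress "192.168.1.50" -NoStorage
# If a previous attempt failed half-way, reset cluster state on each node first:
# Clear-ClusterNode -Force
Get-ClusterLog -Destination "C:\Temp"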
Best Regards,
Elton Ji
Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

Similar Messages

  • Symantec Antivirus Best Practice for Hyper-v 2012 R2 Cluster

    Hi Team ,
    I am working with a 5-node Hyper-V 2012 R2 cluster. All nodes are working fine.
    Now I am planning to install Symantec Antivirus on each node. Please let me know if there is a best-practice guide for installing Symantec Antivirus on Hyper-V 2012 R2 cluster nodes.
    I am using the full version of Windows Server 2012 R2 with Hyper-V.
    Thanks
    Ravi

    I would also strongly consider running no antivirus at all, but if you do, here are the recommended exclusions and possible issues with antivirus to look out for.
    Look for the Hyper-V section:
    http://social.technet.microsoft.com/wiki/contents/articles/953.microsoft-anti-virus-exclusion-list.aspx
    Big things to stay away from at the VM level too:
    1. Do not have a set time when all VMs kick off a scheduled full disk scan. This can create an I/O storm on your hosts and saturate the CPUs as well. Look for products or settings that allow randomization of full disk scans across the VMs, or skip full disk scans and keep only the real-time scanners active on the VMs for incoming writes.
    2. Watch for antivirus products that update all the VMs at the same time. Again, sometimes you can randomize or exclude a scheduled full disk scan, but sometimes an automated update that kicks off at, say, 12am then automatically kicks off a mini scan. This can also create disk I/O and CPU storms.
    The problem that still exists with many antivirus products today is that they try to scan as fast as they can and then get out of the way. That works fine for endpoint desktops or laptops, but when you have 50 or more VMs on a host all ramping up trying to finish as quickly as they can, it can cause real issues.
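    As an illustration only (Symantec is configured through its own console, and Add-MpPreference is the Windows Defender cmdlet), path exclusions of the kind that list recommends look like the following; the paths assume Hyper-V defaults, so adjust for your CSV layout:
    Add-MpPreference -ExclusionPath "C:\ProgramData\Microsoft\Windows\Hyper-V", "C:\ClusterStorage"
    Add-MpPreference -ExclusionExtension "vhd", "vhdx", "avhd", "avhdx", "vsv", "bin"
    Add-MpPreference -ExclusionProcess "vmms.exe", "vmwp.exe"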
    Rob McShinsky (www.VirtuallyAware.com)
    VirtuallyAware - Experiences in a Virtual World (Microsoft MVP - Virtual Machine)

  • DPM 2012 R2 protecting Hyper-V 2012 R2 Cluster VMs, some appear offline in DPM

    Hello,
    We have 2 separate Hyper-V 2012 R2 clusters using iSCSI SAN CSVs as storage. For these I have set up 2 PSGs for VM backups using checkpoints (as new in DPM 2012 R2 / Hyper-V 2012 R2).
    90% of the VM backups work in online mode, but some show as Offline when running the DPM guide to add a VM to a PSG. The guest OS versions differ between WS2003 R2 SP2 / WS2008 SP2 / WS2008 R2 SP1 / WS 2012.
    Windows Server Backup is installed on all VMs and vssadmin list writers shows no errors. How can I investigate and solve this issue?
    Thx /Tony

    Hi Elton,
    Thanks for your answer; my comments are indented below each item.
    The Backup (Volume Snapshot) Integration Service is disabled or not installed.
    Installed and enabled on all those VMs
    A virtual machine has one or more dynamic disks.
    I know this is a documented cause, however all our other VMs, with both dynamic and fixed disks, do work, and among the failing ones we have both dynamic and fixed disks as well.
    A virtual machine has one or more volumes that are based on non-NTFS file systems.
    All VMs use NTFS.
    In a cluster configuration, the virtual machine Cluster Resource Group is offline.
    The Cluster Resource Group is online
    A virtual machine is not in a running state.
    All are running
    A Shadow Storage assignment of a volume inside the virtual machine is explicitly set to a different volume other than itself.
    Interesting, how do I check this in the VM itself?
    Thx /Tony
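    For that last item, the shadow storage assignments inside the guest can be listed with vssadmin from an elevated prompt; a minimal sketch (volume letters and size are placeholders):
    vssadmin list shadowstorage
    # If C:'s shadow copy storage sits on another volume, re-point it at itself:
    # vssadmin delete shadowstorage /for=C: /on=D:
    # vssadmin add shadowstorage /for=C: /on=C: /maxsize=10%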

  • Cluster Creation Failed with Ambari AppManager

    Hi,
    I am getting this error when creating a cluster:
    (cluster create --name hdp --distro HDP-1.3.2 --appManager Ambari --networkName Hadoop_NW)
    serengeti>appmanager list
      NAME     DESCRIPTION                  TYPE     URL
      Default  Default application manager  Default
      ambari   AmbariServer                 Ambari   http://10.6.55.239:8080
    ==========================
    It seems that the agent on the host is not able to connect to the server, but the problem is that the Ambari Server is not located at localhost:8080. How can I change it to the Ambari Server's address?
    Running setup agent script...
    ==========================
    {'exitstatus': 1, 'log': "Host registration aborted. Ambari Agent host cannot reach Ambari Server 'localhost:8080'. Please check the network connectivity between the Ambari Agent host and the Ambari Server"}
    Connection to node1.hadooptest.com closed.
    SSH command execution finished
    host=node1.hadooptest.com, exitcode=1
    ERROR: Bootstrap of host node1.hadooptest.com fails because previous action finished with non-zero exit code (1)
    ERROR MESSAGE: tcgetattr: Invalid argument
    Connection to node1.hadooptest.com closed.
    STDOUT: {'exitstatus': 1, 'log': "Host registration aborted. Ambari Agent host cannot reach Ambari Server 'localhost:8080'. Please check the network connectivity between the Ambari Agent host and the Ambari Server"}
    Connection to node1.hadooptest.com closed.
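    For what it's worth, the Ambari agent reads the server address from its own config file rather than from the bootstrap; a hedged sketch of the usual fix (file path per Ambari's standard layout, server IP taken from the appmanager list above):
    # /etc/ambari-agent/conf/ambari-agent.ini
    [server]
    hostname=10.6.55.239
    # then restart the agent: ambari-agent restart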

    Hi Qing,
    Thanks for the solution, it solved the earlier problem, but now I've got a new one. According to the error, "Failed to start ping port listener of: [Errno 98] Address already in use". This is the only address on the LAN... what could be causing this issue?
    Mohsin
    The failed nodes: 1
    [NAME] hdp2-worker-0
    [STATUS] VM Ready
    [Error Message] ==========================
    Copying common functions script...
    ==========================
    scp /usr/lib/python2.6/site-packages/common_functions
    host=node5.hadooptest.com, exitcode=0
    ==========================
    Copying OS type check script...
    ==========================
    scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
    host=node5.hadooptest.com, exitcode=0
    ==========================
    Running OS type check...
    ==========================
    Cluster primary/cluster OS type is redhat6 and local/current OS type is redhat6
    Connection to node5.hadooptest.com closed.
    SSH command execution finished
    host=node5.hadooptest.com, exitcode=0
    ==========================
    Checking 'sudo' package on remote host...
    ==========================
    sudo-1.8.6p3-12.el6.x86_64
    Connection to node5.hadooptest.com closed.
    SSH command execution finished
    host=node5.hadooptest.com, exitcode=0
    ==========================
    Copying repo file to 'tmp' folder...
    ==========================
    scp /etc/yum.repos.d/ambari.repo
    host=node5.hadooptest.com, exitcode=0
    ==========================
    Moving file to repo dir...
    ==========================
    Connection to node5.hadooptest.com closed.
    SSH command execution finished
    host=node5.hadooptest.com, exitcode=0
    ==========================
    Copying setup script file...
    ==========================
    scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
    host=node5.hadooptest.com, exitcode=0
    ==========================
    Running setup agent script...
    ==========================
    Restarting ambari-agent
    Verifying Python version compatibility...
    Using python  /usr/bin/python2.6
    Found ambari-agent PID: 1682
    Stopping ambari-agent
    Removing PID file at /var/run/ambari-agent/ambari-agent.pid
    ambari-agent successfully stopped
    Verifying Python version compatibility...
    Using python  /usr/bin/python2.6
    Checking for previously running Ambari Agent...
    Starting ambari-agent
    Verifying ambari-agent process status...
    ERROR: ambari-agent start failed
    Agent out at: /var/log/ambari-agent/ambari-agent.out
    Agent log at: /var/log/ambari-agent/ambari-agent.log
    ('INFO 2015-04-01 06:19:59,137 HostCheckReportFileHandler.py:109 - Creating host check file at /var/lib/ambari-agent/data/hostcheck.result
    INFO 2015-04-01 06:19:59,205 Controller.py:211 - No commands sent from the Server.
    INFO 2015-04-01 06:20:09,207 Heartbeat.py:76 - Sending heartbeat with response id: 1 and timestamp: 1427869209207. Command(s) in progress: False. Components mapped: False
    INFO 2015-04-01 06:20:09,251 Controller.py:211 - No commands sent from the Server.
    INFO 2015-04-01 06:20:19,252 Heartbeat.py:76 - Sending heartbeat with response id: 2 and timestamp: 1427869219252. Command(s) in progress: False. Components mapped: False
    INFO 2015-04-01 06:20:19,296 Controller.py:211 - No commands sent from the Server.
    INFO 2015-04-01 06:20:29,296 Heartbeat.py:76 - Sending heartbeat with response id: 3 and timestamp: 1427869229296. Command(s) in progress: False. Components mapped: False
    INFO 2015-04-01 06:20:29,340 Controller.py:211 - No commands sent from the Server.
    INFO 2015-04-01 06:20:39,340 Heartbeat.py:76 - Sending heartbeat with response id: 4 and timestamp: 1427869239340. Command(s) in progress: False. Components mapped: False
    INFO 2015-04-01 06:20:39,384 Controller.py:211 - No commands sent from the Server.
    INFO 2015-04-01 06:20:49,384 Heartbeat.py:76 - Sending heartbeat with response id: 5 and timestamp: 1427869249384. Command(s) in progress: False. Components mapped: False
    INFO 2015-04-01 06:20:49,428 Controller.py:211 - No commands sent from the Server.
    INFO 2015-04-01 06:20:59,429 Heartbeat.py:76 - Sending heartbeat with response id: 6 and timestamp: 1427869259429. Command(s) in progress: False. Components mapped: False
    INFO 2015-04-01 06:21:05,061 main.py:83 - loglevel=logging.INFO
    INFO 2015-04-01 06:21:10,870 main.py:83 - loglevel=logging.INFO
    INFO 2015-04-01 06:21:10,871 DataCleaner.py:36 - Data cleanup thread started
    INFO 2015-04-01 06:21:10,875 DataCleaner.py:71 - Data cleanup started
    INFO 2015-04-01 06:21:10,876 DataCleaner.py:73 - Data cleanup finished
    ERROR 2015-04-01 06:21:10,877 PingPortListener.py:44 - Failed to start ping port listener of:[Errno 98] Address already in use
    INFO 2015-04-01 06:21:10,877 PingPortListener.py:52 - Ping port listener killed
    ', None)
    Connection to node5.hadooptest.com closed.
    SSH command execution finished
    host=node5.hadooptest.com, exitcode=255
    ERROR: Bootstrap of host node5.hadooptest.com fails because previous action finished with non-zero exit code (255)
    ERROR MESSAGE: tcgetattr: Invalid argument
    Connection to node5.hadooptest.com closed.
    STDOUT: (identical to the agent restart log quoted above, ending with the same PingPortListener [Errno 98] error)
    cluster hdp2 resume failed: Task execution failed: An exception happens when App_Manager (Ambari) creates the cluster: (hdp2). Creation fails..
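    On the [Errno 98] front, something is already bound to the agent's ping port (8670 by default, set by ping_port in ambari-agent.ini, if I recall correctly); a quick hedged check on the failing node:
    netstat -tlnp | grep 8670
    ps aux | grep ambari-agent   # a stale agent process may still be holding the port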

  • Hyper-V 2012 R2 Cluster Host Continuously Restarting

    Hi,
    I have a new 3-host Hyper-V 2012 Datacenter Edition cluster with shared SAN storage. I hard-restarted one host because a guest VM would not shut down cleanly; unfortunately, after the hard restart the Windows OS goes to "Getting Ready", updating, then stops and restarts the cluster services automatically and restarts the system, without ever reaching the Windows desktop. It keeps doing the same thing continually. My server is an HP BL680 G7...

    Hi Joy Devis,
    In addition, if the node cannot be fixed (because of some corruption), you may consider evicting the node and adding it back after rebuilding it.
    For details please refer to the article regarding "Add a Server to a Failover Cluster":
    http://technet.microsoft.com/en-us/library/cc730998.aspx
    Hope this helps
    Best Regards
    Elton Ji

  • Failover Cluster creation fails 2012R2

    Create Cluster
    Cluster: FailoverCluster
    Node: WS2012R2-2.yottabyte.inc
    Node: WS2012R2-1.yottabyte.inc
    IP Address: DHCP address on 192.168.136.0/24
    Started 3/31/2015 11:48:52 PM
    Completed 3/31/2015 11:52:13 PM
    Beginning to configure the cluster FailoverCluster. 
    Initializing Cluster FailoverCluster. 
    Validating cluster state on node WS2012R2-2.yottabyte.inc. 
    Find a suitable domain controller for node WS2012R2-2.yottabyte.inc. 
    Searching the domain for computer object 'FailoverCluster'. 
    Bind to domain controller \\WS2012R2-1.yottabyte.inc. 
    Check whether the computer object FailoverCluster for node WS2012R2-2.yottabyte.inc exists in the domain. Domain controller \\WS2012R2-1.yottabyte.inc. 
    Computer object for node WS2012R2-2.yottabyte.inc does not exist in the domain. 
    Creating a new computer account (object) for 'FailoverCluster' in the domain. 
    Check whether the computer object WS2012R2-2 for node WS2012R2-2.yottabyte.inc exists in the domain. Domain controller \\WS2012R2-1.yottabyte.inc. 
    Creating computer object in organizational unit CN=Computers,DC=yottabyte,DC=inc where node WS2012R2-2.yottabyte.inc exists. 
    Create computer object FailoverCluster on domain controller \\WS2012R2-1.yottabyte.inc in organizational unit CN=Computers,DC=yottabyte,DC=inc. 
    Check whether the computer object FailoverCluster for node WS2012R2-2.yottabyte.inc exists in the domain. Domain controller \\WS2012R2-1.yottabyte.inc. 
    Configuring computer object 'FailoverCluster in organizational unit CN=Computers,DC=yottabyte,DC=inc' as cluster name object. 
    Get GUID of computer object with FQDN: CN=FAILOVERCLUSTER,CN=Computers,DC=yottabyte,DC=inc 
    Validating installation of the Network FT Driver on node WS2012R2-2.yottabyte.inc. 
    Validating installation of the Cluster Disk Driver on node WS2012R2-2.yottabyte.inc. 
    Configuring Cluster Service on node WS2012R2-2.yottabyte.inc. 
    Validating installation of the Network FT Driver on node WS2012R2-1.yottabyte.inc. 
    Validating installation of the Cluster Disk Driver on node WS2012R2-1.yottabyte.inc. 
    Configuring Cluster Service on node WS2012R2-1.yottabyte.inc. 
    Waiting for notification that Cluster service on node WS2012R2-2.yottabyte.inc has started. 
    Forming cluster 'FailoverCluster'. 
    Unable to successfully cleanup. 
    An error occurred while creating the cluster and the nodes will be cleaned up. Please wait... 
    An error occurred while creating the cluster and the nodes will be cleaned up. Please wait... 
    There was an error cleaning up the cluster nodes. Use Clear-ClusterNode to manually clean up the nodes. 
    There was an error cleaning up the cluster nodes. Use Clear-ClusterNode to manually clean up the nodes. 
    An error occurred while creating the cluster.
    An error occurred creating cluster 'FailoverCluster'.
    This operation returned because the timeout period expired
    To troubleshoot cluster creation problems, run the Validate a Configuration wizard on the servers you want to cluster.
    I am getting this error while creating the cluster. Any solution, please?
    Thank You

    Did you follow the instruction "To troubleshoot cluster creation problems, run the Validate a Configuration wizard on the servers you want to cluster"?  Any warnings/errors?
    . : | : . : | : . tim
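    Since the wizard's own cleanup also failed, a hedged PowerShell sketch for resetting both nodes before retrying (the static address is a placeholder; the log above shows DHCP):
    Clear-ClusterNode -Name "WS2012R2-1" -Force
    Clear-ClusterNode -Name "WS2012R2-2" -Force
    Test-Cluster -Node "WS2012R2-1.yottabyte.inc","WS2012R2-2.yottabyte.inc"
    New-Cluster -Name "FailoverCluster" -Node "WS2012R2-1","WS2012R2-2" -StaticAddress "192.168.136.50"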

  • Hyper-V 2012 R2 Cluster - Drain Roles / Fail Roles Back

    Hi all,
    In the past, when I needed to apply Windows updates to my 3 Hyper-V cluster nodes, I would make a note of which VMs were running on each node, live migrate them to one of the other cluster nodes, pause the node I needed to work on, and carry out the updates; once the updates were installed I'd simply resume the node and live migrate the VMs back to their original node.
    Having recently upgraded my nodes to Windows 2012 R2, I decided to use the new functionality in Failover Cluster Manager where you can pause & drain a node of its roles, perform the updates/maintenance, and then resume & fail roles back to the node. Unfortunately this didn't go as smoothly as I'd hoped; for some reason the drain/fail back seems to be cumulative rather than a one-off job per node. Hard to explain; hopefully the following will be clear enough if the formatting survives:
    1. Beginning State:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    2. Drain Hyper1:
    Hyper1     Hyper2     Hyper3
               VM04       VM01
               VM05       VM02
               VM06       VM03
                          VM07
                          VM08
                          VM09
    3. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    4. Drain Hyper2:
    Hyper1     Hyper2     Hyper3
    VM01                  VM04
    VM02                  VM05
    VM03                  VM06
                          VM07
                          VM08
                          VM09
    5. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
               VM01       VM07
               VM02       VM08
               VM03       VM09
               VM04
               VM05
               VM06
    6. Manually Live Migrate VMs back to correct location:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    7. Drain Hyper3:
    Hyper1     Hyper2     Hyper3
    VM01       VM04
    VM02       VM05
    VM03       VM06
               VM07
               VM08
               VM09
    8. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
                          VM01
                          VM02
                          VM03
                          VM04
                          VM05
                          VM06
                          VM07
                          VM08
                          VM09
    9. Manually Live Migrate VMs back to correct location:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    Step 8 was a rather hairy moment, although I was pleased to see my cluster hardware capacity planning rubber-stamped; good to know that if I were ever to lose 2 out of 3 nodes, everything would keep ticking over!
    So, I'm back to the old ways of doing things for now, has anyone else experienced this strange behaviour?
    Thanks in advance,
    Ben
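    For reference, the GUI's Pause/Drain and Resume/Fail Roles Back map to these cmdlets; a minimal sketch (node name is a placeholder):
    Suspend-ClusterNode -Name "Hyper1" -Drain
    # ...patch and reboot the node...
    Resume-ClusterNode -Name "Hyper1" -Failback Immediate
    # Or resume without failing anything back and live migrate manually:
    # Resume-ClusterNode -Name "Hyper1" -Failback NoFailback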

    Hi,
    Just wanted to confirm the current situation.
    Please feel free to let us know if you need further assistance.
    Regards.

  • VMM 2012 R2 Template creation fails with sysprep error via the GUI, but works in PowerShell?

    I'm in the process of trying to convert an existing Gen 1 VM (Windows Server 2012 R2) to a VM template using VMM 2012 R2 with Rollup 1. The creation keeps failing with the following error:
    "Error (2901): The operation did not complete successfully because of a parameter or call sequence that is not valid. Recommended Action: Ensure that the parameters are valid, and then try the operation again."
    I've checked the local administrator account for the machine being converted to a template, and it is blank as expected. In an attempt to troubleshoot, I exported the template creation script via the VMM GUI and executed the commands in order, which appears to have resolved the problem; well, the template finishes creation, though I have yet to deploy a machine from it.
    # Create VM Template Wizard Script
    # Script generated on Thursday, April 3, 2014 12:52:15 PM by Virtual Machine Manager
    # For additional help on cmdlet usage, type get-help <cmdlet name>
    $VM = Get-SCVirtualMachine -VMMServer localhost -Name "MyTemplate_Template" -ID "98299447-83e1-4d98-a558-a96ebafcf9b5" | where {$_.VMHost.Name -eq "myhost.mydomain.com"}
    $LibraryServer = Get-SCLibraryServer -VMMServer localhost | where {$_.Name -eq "my.library.com"}
    $GuestOSProfile = Get-SCGuestOSProfile -VMMServer localhost | where {$_.Name -eq "Windows Server 2012 R2"}
    $OperatingSystem = Get-SCOperatingSystem -VMMServer localhost -ID "50b66974-c64a-4a06-b05a-7e6610c579a2" | where {$_.Name -eq "Windows Server 2012 R2 Standard"}
    $template = New-SCVMTemplate -Name "My Template" -RunAsynchronously -VM $VM -LibraryServer $LibraryServer -SharePath "\\mylibrary.mydomain.com\SCVMM Library\Templates\mytemplatedestination" -GuestOSProfile $GuestOSProfile -JobGroup d2f2f539-85da-4091-ab08-abe739fc4761 -ComputerName "*" -TimeZone 85 -FullName "Administrator" -OrganizationName "My Organisation" -Workgroup "WORKGROUP" -AnswerFile $null -OperatingSystem $OperatingSystem
    Now, the only fields that weren't populated when I ran through the GUI were FullName and Organization; before I changed the script, those two fields were just set to "". Is that what caused the issue? Are they both required fields?

    You're not the only one who can reproduce this. Did you ever find out what it was? I saw your post, tried the same workaround, and it also worked for me via PowerShell.

  • Packets sent out the wrong Interface on Hyper-V 2012 Failover Cluster

    Here is some background information:
    2 Dell PowerEdge servers running Windows Server 2012 w/ Hyper-V in a Failover Cluster environment.  Each has:
    1 NIC for Live Migration 192.168.80.x/24 (connected to a private switch)
    1 NIC for Cluster Communication 192.168.90.x/24 (connected to a private switch)
    1 NIC for iscsi 192.168.100.x/24 (connected to a private switch)
    1 NIC for host management with a routable public IP (*connected to corp network) w/ gateway on this interface
    1 NIC for Virtual Machine traffic (*connected to corp network)
    All NICs are up; we can ping the IPs between servers on the private networks and on the public-facing networks. All functions of Hyper-V are working, the failover cluster reports all interfaces are up, and we receive no errors. Live migration works fine. In the live migration settings I have restricted it to the 2 private NICs (live migration and cluster communication).
    My problem is that our networking/security group sees, on occasion (about every 10 minutes, with a few other packets thrown in at different times), SYN packets destined for the 192.168.80.3 interface going out of the public interface and being dropped at our border router. These should be heading out of the 192.168.80.x or 192.168.90.x interfaces without ever hitting our corporate network. Anyone have an idea why this might be happening? Traffic is on TCP 445.
    Appreciate the help.
    Nate

    Hi,
    Please check the live migration and cluster communication network settings in the cluster:
    In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
    If the console tree is collapsed, expand the tree under the cluster that you want to configure.
    Expand Networks.
    Right-click the network that you want to modify settings for, and then click Properties.
    There are two options:
    Allow cluster network communication on this network (with the sub-option Allow clients to connect through this network)
    Do not allow cluster network communication on this network
    If the network is used only for cluster node communication, clear the "Allow clients to connect through this network" option.
    Check that and give us feedback for further troubleshooting, for more information please refer to following MS articles:
    Modify Network Settings for a Failover Cluster
    http://technet.microsoft.com/en-us/library/cc725775.aspx
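    The same setting can be inspected and changed from PowerShell; a minimal sketch (the network name is a placeholder):
    Get-ClusterNetwork | Format-Table Name, Role, Address
    # Role 0 = not used by the cluster, 1 = cluster only, 3 = cluster and client
    (Get-ClusterNetwork "Cluster Communication").Role = 1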
    Lawrence
    TechNet Community Support

  • Win Server 2012 Failover Cluster - Error: Failed to bring cluster disk online

    Hi Technet
    I'm currently running 2 Windows Server 2012 VMs and would like to test failover clustering for one of our FTP servers.
    I've added an additional partition on both servers, formatted and online.
    One of the drives comes online, but I cannot bring the 2nd disk online from the cluster manager.
    Error: failed to bring resource online - clustered storage is not connected to the node
    Assistance would be greatly appreciated
    Thank you
    Jabu

    Hi jsibeko,
    Since VMware offers the MSCS shared-storage solution themselves, I suggest you first ask VMware whether that shared storage is supported with your vSphere edition. I found some VMware KB articles about VMware shared storage for MSCS; maybe you can get more tips there.
    The VMware KB:
    Microsoft Clustering on VMware vSphere: Guidelines for supported configurations (1037959)
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959
    Microsoft Cluster Service (MSCS) support on ESXi/ESX (1004617)
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004617
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Can you use NIC Teaming for Replica Traffic in a Hyper-V 2012 R2 Cluster

    We are in the process of setting up a two-node 2012 R2 Hyper-V cluster and will be using the Replica feature to make copies of some of the hosted VMs to an off-site, standalone Hyper-V server.
    We have planned to use two physical NICs in an LBFO team on the cluster nodes for the Replica traffic, but wanted to confirm that this is supported before we continue.
    Cheers for now
    Russell

    Sam,
    Thanks for the prompt response; presumably the same is true of the other types of cluster traffic (Live Migration, Management, etc.)?
    Cheers for now
    Russell
    Yep.
    In our practice we actually use converged networking, which basically NIC-teams all physical NICs into one pipe (switch independent / dynamic / active-active), on top of which we provision vNICs for the parent partition (host OS) as well as for guest VMs.
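    A minimal sketch of that converged setup (adapter, team, and vNIC names are placeholders):
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Replica" -SwitchName "ConvergedSwitch"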
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable) _________________________________________________________________________________
    Powershell: Learn it before it's an emergency http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx

  • Re-installing Hyper-V 2012 R2 cluster node

    We have four HP BL460 Gen8 servers acting as part of a Hyper-V cluster, running Windows Server 2012 R2 Datacenter.
    Storage is provided by a two-node 3PAR StoreServ 7400.
    All network and FC connections are managed by HP Virtual Connect.
    One of the four nodes crashed during an HP SPP upgrade, which resulted in a non-booting OS.
    I managed to get the OS alive by running multiple check disks and by manually restoring registry hives from backup via the Windows 7 installation media's recovery console.
    After the recovery there were still some issues with the filesystem: corrupted, orphaned, and missing files here and there.
    Now I want to re-install the OS from scratch to make sure everything will work correctly and to avoid future errors.
    What I need to know is: is it best practice to re-install the OS with a new computer name, or should I drop the current OS to a workgroup, re-install it, and join the AD domain with the same computer name? I've already evicted the node from the Hyper-V cluster, but the server is still running as a member server in AD.
    Any other things I should take into consideration before doing the re-installation?
    Thanks in advance!

    I agree that after a major problem it is much safer to rebuild the system. It sounds like you have the node running, so I would evict it from the cluster and then remove it from the domain. Rebuild it, and you can use the same name, because those two actions will clean up its 'footprints'.
    If the machine were not running, you would still evict the node from the cluster, but you would need to go into Active Directory to delete the computer account. Then rebuild.
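    A hedged sketch of those steps in PowerShell (node and domain names are placeholders):
    Remove-ClusterNode -Name "HVNode4" -Force   # evict (already done in this case)
    Remove-Computer -UnjoinDomainCredential (Get-Credential) -Restart   # run on the node itself
    # After the rebuild:
    # Add-Computer -DomainName "corp.example.com" -Restart
    # Add-ClusterNode -Name "HVNode4"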
    . : | : . : | : . tim

  • What is the Proper way to protect 2012 R2 DC on Hyper-V 2012 R2?

    Hello,
    I have one physical DC running Server 2008 R2, whose system state I am protecting with DPM 2012 R2. I also have one virtual DC running Server 2012 R2 on a Hyper-V 2012 R2 cluster. I am currently protecting the system state of my Server 2012 DC VM in one protection group and protecting the Hyper-V VM itself with DPM 2012 R2. After reading about Server 2012 VM-GenerationID and how it prevents USN rollbacks on virtual DC restoration, I am thinking that I should only need to protect my virtual DC by protecting the VM itself.
    Question: do I still need to protect the system state of a Server 2012 R2 Hyper-V virtual DC, or should we now just protect the VM itself (no system state)?
    Thanks,
    Rob

    In every environment there should be a physical DC, and it should be backed up with BMR so that Active Directory is backed up correctly.
    For every virtual DC it is enough to back up the VM.
    If all your DCs are virtual, please back up one of them with BMR.
    Seidl Michael | http://www.techguy.at |
    twitter.com/techguyat | facebook.com/techguyat

  • DPM 2012 R2 Backup job FAILED for some Hyper-v VMs and Some Hyper-v VMs are not appearing in the DPM

    DPM 2012 R2 backup jobs FAIL for some Hyper-V VMs:
    DPM encountered a retryable VSS error. (ID 30112 Details: VssError: The writer experienced a transient error. If the backup process is retried, the error may not reoccur. (0x800423F3))
    All the VSS writers are in a stable state.
    Also, some Hyper-V VMs are not appearing in the DPM 2012 R2 console when I try to create the protection group; please note that they are not part of a cluster.
    The host is 2012 R2 and the VMs are also 2012 R2.

    Hi,
    What update rollup are you running on the DPM 2012 R2 server? DPM 2012 R2 UR5 introduced a new refresh feature that will re-enumerate data sources on an individual protected server.
    Check for VSS errors inside the guests that are having problems being backed up.
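    Inside a failing guest, a quick hedged check for writers stuck in a bad state:
    vssadmin list writers
    # Writers showing 'State: [8] Failed' or a non-zero 'Last error' usually need their
    # owning service restarted (or a guest reboot) before the backup is retried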
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread. Regards, Mike J. [MSFT]
    This posting is provided "AS IS" with no warranties, and confers no rights.

  • Guest VM failover cluster on Hyper-V 2012 Cluster does not work across hosts

    Hi all,
    We are evaluating Hyper-V on Windows Server 2012, and I have bumped into this problem:
    I have an Exchange 2010 SP2 DAG installed on 2 VMs in our Hyper-V cluster (a DAG forms a failover cluster but does not use any shared storage). As long as my VMs are on the same host, all is good. However, if I live migrate, or shut down, move and start one of the guest nodes on another physical host, it loses connectivity with the cluster. The "regular" network is fine across hosts, and I can ping/browse one guest node from the other. I have tried looking for guidance for Exchange on Hyper-V clusters but have not been able to find anything.
    According to the Exchange documentation this configuration is supported, so I guess I'm asking for any tips and pointers on where to troubleshoot this.
    regards,
    Trond

    Hi All,
    so some updates...
    We have a ticket logged with Microsoft, more of a check-box exercise to reassure the business we're doing the needful. Anyway, they had us:
    Apply hotfix http://support.microsoft.com/kb/2789968?wa=wsignin1.0 to both guest DAG nodes, which seems pretty random, but they wanted to update the TCP/IP stack...
    There was no change: move the guest to another Hyper-V node and the failover cluster, well, fails, with the following event IDs on the node that fails...
    1564 -File share witness resource 'xxxx)' failed to arbitrate for the file share 'xxx'. Please ensure that file share '\xxx' exists and is accessible by the cluster..
    1069 - Cluster resource 'File Share Witness (xxxxx)' in clustered service or application 'Cluster Group' failed
    1573 - Node xxxx  failed to form a cluster. This was because the witness was not accessible. Please ensure that the witness resource is online and available
    The other node stays up, and the Exchange DBs mounted on that node stay up; the ones mounted on the node that fails fail over to the remaining node...
    So we then:
    Removed 3 NICs from one of the 4-NIC teams, leaving a single NIC in the team (no change)
    Removed one NIC from the LACP group on each Hyper-V host
    Created a new virtual switch using this simple trunk-port NIC on each Hyper-V host
    Moved the DAG nodes to this vSwitch
    Failover clustering works as expected, with guest VMs running on separate Hyper-V hosts, when on this vSwitch with a single NIC
    So Microsoft were keen to close the call, as their scope was, I kid you not, to "consider this issue resolved once we are able to find the cause of the above mentioned issue", which we have now done; as in, teaming is the cause... argh.
    But after talking, they are now escalating internally.
    The other thing we are doing is building Server 2010 guests and installing Exchange 2010 SP3, to get an Exchange 2010 DAG running on Server 2010 and see if it has the same issue, as people indicate that this setup perhaps does not have the same problem.
    Cheers
    Ben
    Name                   : Virtual Machine Network 1
    Members                : {Ethernet, Ethernet 9, Ethernet 7, Ethernet 12}
    TeamNics               : Virtual Machine Network 1
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    Name                   : Parent Partition
    Members                : {Ethernet 8, Ethernet 6}
    TeamNics               : Parent Partition
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Heartbeat
    Members                : {Ethernet 3, Ethernet 11}
    TeamNics               : Heartbeat
    TeamingMode            : SwitchIndependent
    LoadBalancingAlgorithm : TransportPorts
    Status                 : Up
    Name                   : Virtual Machine Network 2
    Members                : {Ethernet 5, Ethernet 10, Ethernet 4}
    TeamNics               : Virtual Machine Network 2
    TeamingMode            : Lacp
    LoadBalancingAlgorithm : HyperVPort
    Status                 : Up
    A Cloud Mechanic.
