Hyper-V cluster: Unable to fail VM over to secondary host

I am working on a Server 2012 Hyper-V cluster. I am unable to fail my VMs over from one node to the other using either Live or Quick Migration.
A forced shutdown of VMHost01 will force a failover to VMHost02, and once we are on VMHost02 we can migrate back to VMHost01; but once that is done, we can't move the VMs back to VMHost02 without another forced shutdown.
The following error pops up:
Event ID 21502: The Virtual Machine Management Service failed to establish a connection for a virtual machine migration with host.... The connection attempt failed because the connected party did not properly respond after a period of time, or the established connection failed because the connected host has failed to respond (0x8007274C).
Here's what I noticed:
VMMS.exe is running on VMHost02, but it is not listening on port 6600. I confirmed this after a reboot by running netstat -a. We have tried setting the service to delayed start.
I have checked the firewall rules and anti-virus exclusions, and they are correct. I have not run the cluster validation test yet, because I'll need to schedule a period of downtime to do so.
We can start and stop the VMMS service just fine and without errors, but I am puzzled as to why it will not listen on port 6600 anywhere. Does anyone have suggestions on how to troubleshoot this?
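For reference, this is the quick check we have been running from an elevated PowerShell prompt (a minimal sketch; Get-NetTCPConnection ships with Server 2012, and vmms is the service name for Hyper-V Virtual Machine Management):
    # Is anything listening on the live migration port?
    Get-NetTCPConnection -LocalPort 6600 -State Listen -ErrorAction SilentlyContinue
    # Bounce the management service and check again
    Restart-Service vmms
    Get-NetTCPConnection -LocalPort 6600 -State Listen -ErrorAction SilentlyContinue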
Thanks,
Tho H. Le

Just ran into the same issue in a 16-node cluster managed by VMM. When trying to live migrate VMs using the VMM console, the migration would fail with Error 10698. Failover Cluster Manager would report error code 0x8007274C.
+ Validated the Live Migration and cluster networks. Everything checked out.
+ Looked in Hyper-V Manager; migrations are enabled and the correct networks are displayed.
+ Found this blog post, which mentions that the Virtual Machine Management Service is not listening on port 6600:
http://blogs.technet.com/b/roplatforms/archive/2012/10/16/shared-nothing-migration-fails-0x8007274c.aspx
Ran the following from an elevated command prompt:
netstat -ano | findstr 6600
Node 2 did not return anything; Node 1 returned the expected output:
TCP    10.xxx.251.xxx:6600    0.0.0.0:0    LISTENING    4540
TCP    10.xxx.252.xxx:6600    0.0.0.0:0    LISTENING    4560
Set the Hyper-V Virtual Machine Management service to delayed start.
Restarted the service; no change.
Checked the event logs for Hyper-V VMMS and noted the following: the VMMS listener started for the Live Migration networks, then stopped shortly afterwards.
Removed the system from the cluster and restarted; no change.
Checked this host by running gpedit.msc; could not open the console (permission error).
Tried to run a GPO refresh (gpupdate /force), but it returned an error that the local GPO could not apply registry settings, and that Group Policy processing would not continue until this was resolved.
Checked the local Group Policy folder on node 2 and it was corrupt: C:\Windows\System32\GroupPolicy\Machine\Registry.pol showed 0 KB for its size.
Copied the local policy folders from node 1 to node 2, and was then able to refresh the GPOs.
Restarting the VMMS service did not change the status of the ports.
Restarted the server, added the Live Migration networks back in Hyper-V Manager, and now netstat reports that the VMMS service is listening on 6600.
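For anyone who wants to script the same check and fix, a rough sketch of the sequence from an elevated PowerShell prompt on node 2 (the \\Node1 UNC path is illustrative; adjust names to your environment):
    # Pull the known-good local Group Policy store from node 1
    robocopy \\Node1\C$\Windows\System32\GroupPolicy C:\Windows\System32\GroupPolicy /MIR
    # Refresh policy, restart VMMS, then confirm the listener is back
    gpupdate /force
    Restart-Service vmms
    netstat -ano | findstr 6600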

Similar Messages

  • Hyper-V 2012 R2 Cluster - Drain Roles / Fail Roles Back

    Hi all,
    In the past when I've needed to apply windows updates to my 3 Hyper-V cluster nodes I used to make a note of which VM's were running on each node, then I'd live migrate them to one of the other cluster nodes before pausing the node I need to work on and
    carry out the updates, once I finished installing the updates I'd then simply resume the node and live migrate the VM's back to their original node.
    Having recently upgraded my nodes to Windows 2012 R2 I decided to use the new functionality in Failover Cluster Manager where you can pause & drain a node of its roles, perform the updates/maintenance, and then resume & fail roles back to the node,
    unfortunately this didn't go as smoothly as I'd hoped, for some reason it seems like the drain/fail back decided to be cumulative rather than one off jobs per-node ... hard to explain, hopefully the following will be clear enough if the formatting survives:
    1. Beginning State:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    2. Drain Hyper1:
    Hyper1     Hyper2     Hyper3
                    VM04       VM01
                    VM05       VM02
                    VM06       VM03
                                   VM07
                                   VM08
                                   VM09
    3. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    4. Drain Hyper2:
    Hyper1     Hyper2     Hyper3
    VM01                       VM04
    VM02                       VM05
    VM03                       VM06
                                   VM07
                                   VM08
                                   VM09
    5. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
                    VM01       VM07
                    VM02       VM08
                    VM03       VM09
                    VM04  
                    VM05
                    VM06
    6. Manually Live Migrate VM's back to correct location:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    7. Drain Hyper3:
    Hyper1     Hyper2     Hyper3
    VM01        VM04
    VM02        VM05
    VM03        VM06
                    VM07
                    VM08
                    VM09
    8. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
                                   VM01
                                   VM02
                                   VM03
                                   VM04
                                   VM05
                                   VM06
                                   VM07
                                   VM08
                                   VM09
    9. Manually Live Migrate VM's back to correct location:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
Step 8 was a rather hairy moment, although I was pleased to see my cluster hardware capacity planning rubber-stamped; good to know that if I were ever to lose 2 out of 3 nodes, everything would keep ticking over!
So I'm back to the old way of doing things for now. Has anyone else experienced this strange behaviour?
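In case anyone wants to reproduce this from PowerShell rather than the Failover Cluster Manager GUI, these are the cmdlet equivalents I believe the console is driving (a sketch; node names as per my example above):
    # Pause the node and drain its roles to the other nodes
    Suspend-ClusterNode -Name Hyper1 -Drain
    # ...apply updates, reboot...
    # Resume the node and immediately fail the drained roles back
    Resume-ClusterNode -Name Hyper1 -Failback Immediate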
    Thanks in advance,
    Ben

    Hi,
Just want to confirm the current situation.
    Please feel free to let us know if you need further assistance.
    Regards.

  • Cluster Quorum Disk failing inside Guest cluster VMs in Hyper-V Cluster using Virtual Disk Sharing Windows Server 2012 R2

Hi, I'm having a problem in a VM guest cluster using Windows Server 2012 R2 with virtual disk sharing enabled.
It's a SQL 2012 cluster, which has around 10 VHDX disks shared this way. All the VHDX files sit inside LUNs on a SAN. These LUNs are presented to all clustered members of the Windows Server 2012 R2 Hyper-V cluster via Cluster Shared Volumes.
Yesterday a very strange problem occurred: both the quorum disk and the DTC disk had their contents completely erased. The VHDX files themselves were there, but the information inside was gone.
The SQL admin had to recreate both disks, but now we don't know whether this issue was related to the virtualization platform or to another event inside the cluster itself.
Right now I'm seeing these errors on one of the VM guests:
     Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1069
    Task Category: Resource Control Manager
    Level:         Error
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    Cluster resource 'Quorum-HDD' of type 'Physical Disk' in clustered role 'Cluster Group' failed.
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1069</EventID>
        <Version>1</Version>
        <Level>2</Level>
        <Task>3</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14140</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="ResourceName">Quorum-HDD</Data>
        <Data Name="ResourceGroup">Cluster Group</Data>
        <Data Name="ResTypeDll">Physical Disk</Data>
      </EventData>
    </Event>
    Log Name:      System
    Source:        Microsoft-Windows-FailoverClustering
    Date:          3/4/2014 11:54:55 AM
    Event ID:      1558
    Task Category: Quorum Manager
    Level:         Warning
    Keywords:      
    User:          SYSTEM
    Computer:      ServerDB02.domain.com
    Description:
    The cluster service detected a problem with the witness resource. The witness resource will be failed over to another node within the cluster in an attempt to reestablish access to cluster configuration data.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-FailoverClustering" Guid="{BAF908EA-3421-4CA9-9B84-6689B8C6F85F}" />
        <EventID>1558</EventID>
        <Version>0</Version>
        <Level>3</Level>
        <Task>42</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000000</Keywords>
        <TimeCreated SystemTime="2014-03-04T17:54:55.498842300Z" />
        <EventRecordID>14139</EventRecordID>
        <Correlation />
        <Execution ProcessID="1684" ThreadID="2180" />
        <Channel>System</Channel>
        <Computer>ServerDB02.domain.com</Computer>
        <Security UserID="S-1-5-18" />
      </System>
      <EventData>
        <Data Name="NodeName">ServerDB02</Data>
      </EventData>
    </Event>
We don't know if this can happen again; what if it happens on a disk with data? We don't know whether this is related to the virtual disk sharing technology or anything else related to virtualization, but I'm asking here to find out if it is a possibility.
    Any ideas are appreciated.
    Thanks.
    Eduardo Rojas

    Hi,
    Please refer to the following link:
    http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx#.Ux172HnxtNA
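In addition, you can check the state of the failed resource and collect the cluster log from PowerShell on any node (a sketch; "Quorum-HDD" is the resource name from your post, and C:\Temp is just an example destination):
    Import-Module FailoverClusters
    # Current state and owner of the quorum disk resource
    Get-ClusterResource -Name "Quorum-HDD" | Format-List Name, State, OwnerNode, ResourceType
    # Generate cluster.log files from each node covering the incident window
    Get-ClusterLog -Destination C:\Temp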
    Best Regards,
    Vincent Wu

  • Windows server 2012 Datacenter Hyper-V Cluster -- Failed to validate Operating System Installation Option?

Hi, I have a 4-node Windows Server 2012 Hyper-V cluster. When I try to run a cluster validation report, everything else is fine, but it fails at the "Validate Operating System Installation Option" step. I did some research but couldn't find a solution.
Does anyone know how to pass this test? Thanks.
    Here's the error I get when run the test:
    An error occurred while executing the test.
    The operation has failed. An error occurred while getting the operating system installation option for node "server1"

Hi JasonLiu2002,
Please post the original error information; what you have posted is too general to determine where the issue may be. Please also provide more information about your server configuration. You can refer to the following article to prepare your cluster environment first:
Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form)
http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
When preparing a new cluster on Server 2012, please also install the recommended hotfixes and updates for Windows Server 2012-based failover clusters:
http://support.microsoft.com/kb/2784261
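After the hotfixes are installed, you can re-run validation from PowerShell as well (a sketch; the node names are placeholders for your four Hyper-V hosts):
    Import-Module FailoverClusters
    # Re-run the full validation suite against all nodes
    Test-Cluster -Node server1, server2, server3, server4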

  • Proper steps to fail over to another host in a cluster

    Hello,
Pardon my ignorance. What are the proper steps to force a failover to the standby host in a two-node cluster?
My secondary host is currently the active host for the cluster name. I would like to force it to fail over to the primary, which is acting as standby. Thank you in advance.

    Hi MS_Moron,
You can refer to the following article to gracefully move the cluster resources to another node.
    Test the Failover of a Clustered Service or Application
    http://technet.microsoft.com/en-us/library/cc754577.aspx
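If you prefer PowerShell, a sketch of the equivalent move (the "Cluster Group" group holds the cluster name resource; replace PrimaryNode with your node's name):
    Import-Module FailoverClusters
    # See which node currently owns the cluster core resources
    Get-ClusterGroup -Name "Cluster Group"
    # Gracefully move them to the primary node
    Move-ClusterGroup -Name "Cluster Group" -Node PrimaryNode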

  • TES v6.1 want to install a Win Tidal Agent on a Cluster Resource with Fail Over... help

I am working with my Windows admins and they want to know how to install the Windows Tidal agent on a cluster resource with failover. We are currently running Tidal Master (UNIX) v6.1.0.483.
    Thanks,
    Rich

Please refer to the Agent Installation and Configuration Guide PDF from Cisco. The steps to configure the agents in a cluster are explained in the section "Configuring the Agents for a Cluster".

  • Unplanned failover in a Hyper-V cluster vs unplanned failover in an ordinary (not Hyper-V) cluster

    Hello!
Please excuse me if you think my question is silly, but before deploying something in a production environment I'd like to dot the i's and cross the t's.
1) Suppose there's a two-node cluster with the Hyper-V role that hosts a number of highly available VMs.
If both cluster nodes are up and running, an administrator can initiate a planned failover, which will transfer all VMs, including their system state, to the other node without downtime.
If a cluster node goes down unexpectedly, an unplanned failover fires that transfers all VMs to the other node WITHOUT their system state. As far as I understand, this can lead to some data loss.
http://itknowledgeexchange.techtarget.com/itanswers/how-does-live-migration-differ-from-vm-failover/
If, for example, I have an Exchange VM and it is transferred to the second node during an unplanned failover in the Hyper-V cluster, I will lose some data by design.
2) Suppose there's a two-node cluster with a clustered Exchange installation: if one node crashes, the other takes over without any data loss.
Conclusion: it is more disaster-resilient to implement a server role in an "ordinary" cluster than to deploy it inside a VM in a Hyper-V cluster.
Is that correct?
    Thank you in advance,
    Michael

    "And if this "anything in memory and any active threads" is so large that can take up to 13m15s to transfer during Live Migration it will be lost."
    First, that 13m15s required to live migrate all your VMs is not the time it takes to move individual VMs.  By default, Hyper-V is set to move a maximum of 2 VMs at a time.  You can change that, but it would be foolish to increase that value if
    all you have is a single 1GE network.  The other VMs will be queued.
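For reference, that maximum is a per-host Hyper-V setting; a quick sketch of how to view and change it:
    # View and change the number of simultaneous live migrations a host will accept
    (Get-VMHost).MaximumVirtualMachineMigrations
    Set-VMHost -MaximumVirtualMachineMigrations 2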
    Secondly, you are getting that amount of time confused with what is actually happening.  Think of a single VM.  Let's even assume that it has so much memory and is so active that it takes 13 minutes to live migrate.  (Highly unlikely, even
    on a 1 GE NIC).  During that 13 minutes the VM takes to live migrate, the VM continues to perform normally.  In fact, if something happens in the middle of the LM, causing the LM to fail, absolutely nothing is lost because the VM is still operating
    on the original host.
Now let's look at what happens when the host on which that VM is running fails, causing a restart of the VM on another node of the cluster. The VM is doing its work reading and writing to its data files. At the instant the host fails, the VM may have some unwritten data buffers in memory. Since the host fails, the VM crashes, losing whatever it had in memory at that instant. It is not going to lose any 13 minutes of data. In fact, if you have an application that is processing data at this volume, you most likely have something like SQL running. When the VM goes down, the cluster will automatically restart the VM on another node of the cluster. SQL will automatically replay transaction logs to recover to the best of its ability.
    Is there a possibility of data loss?  Yes, a very tiny possibility for a very small amount.  Is there a possibility of data corruption?  Yes, a very, very tiny possibility, just like with a physical machine.
    The amount of time required for a live migration is meaningless when it comes to potential data loss of that VM.  The potential data loss is pretty much the same as if you had it running on a physical machine, the machine crashed, and then restarted.
    "clustered applicationsDO NOT STOP working during unplanned failover (so there is no recovery time), "
    Not exactly true.  Let's use SQL as an example again.  When SQL is instsalled in a cluster, you install at a minimum one instance, but you can have multiple instances.  When the node on which the active instance is running fails, there is
    a brief pause in service while the instance starts on the other node.  Depending on transactions outstanding, last write, etc., it will take a little bit of time for the SQL instance to be ready to start handling requests on the new node.
    Yes, there is a definite difference between restarting the entire VM (just the VM is clustered) and clustering the application.  Recovery time is about the biggest issue.  As you have noted, restarting a VM, i.e. rebooting it, takes time. 
    And because it takes a longer period of time, there is a good chance that clients will be unable to access the resource for a period of time, maybe as much as 1-5 minutes depending upon a lot of different factors, whereas with a clustered application, the
    clients may be unable to access for up to a minute or so.
    However, the amount of data potentially lost is quite dependent upon the application.  SQL is designed to recover nicely in either environment, and it is likely not to lose any data.  Sequential writing applications will be dependent upon things
    like disk cache held in memory - large caches means higher probability of losing data.  No disk cache means there is not likely to be any loss of data.
    .:|:.:|:. tim

  • Hyper-V Cluster - VM Heartbeat Monitoring

    Hi, 
    Could someone help me understand this. 
I have a lot of nodes in a cluster. With regard to how failover works, I simply want VMs and CSVs to be moved/restarted on other nodes in the event of a total node failure.
I do not want the cluster to ever, ever touch a VM if it detects that the VM has issues with its guest operating system. I want this to be totally contained.
On the policies tab for a virtual machine role, I see the default is "If resource fails, attempt to restart on current node", then if this fails, "fail over all resources in this role". Is this the setting that governs failover between nodes?
If so, do I simply have to disable "Enable heartbeat monitoring for the virtual machine" on the settings tab to get the behavior I describe above?
If I'm correct in these assumptions, I'd kill for an idea of how I could script this change for lots of VMs. Can this setting be manipulated via PowerShell?
Many thanks,

Hi Hob_Gadling,
If we want a clustered role to fail back to a specific node, we need to configure the preferred owner node in Failover Cluster Manager.
When the option "Enable heartbeat monitoring for the virtual machine" is selected, heartbeats are sent from the operating system running in the virtual machine to the operating system running the Hyper-V role. If the heartbeats stop, indicating that the virtual machine has become unresponsive, the cluster is notified and can attempt to restart the clustered virtual machine or fail it over.
    More information:
    Failover behavior on clusters of three or more nodes
    https://support.microsoft.com/kb/299631?wa=wsignin1.0
    Virtual Machine <Resource Name> Properties: Settings Tab
    http://technet.microsoft.com/en-us/library/dd834728.aspx
    What does "Enable heartbeat monitoring for the virtual machine" do ?
    http://blogs.msdn.com/b/robertvi/archive/2011/01/11/what-does-quot-enable-heartbeat-monitoring-for-the-virtual-machine-quot-do.aspx
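To script the change across many VMs, something along these lines should work (a sketch only; verify the private property name, CheckHeartbeat here, with Get-ClusterParameter on your build before changing anything in bulk):
    Import-Module FailoverClusters
    # Inspect the parameters of one VM resource first
    Get-ClusterResource "Virtual Machine VM01" | Get-ClusterParameter
    # Disable guest heartbeat monitoring for every VM resource in the cluster
    Get-ClusterResource | Where-Object { $_.ResourceType.Name -eq "Virtual Machine" } |
        Set-ClusterParameter -Name CheckHeartbeat -Value 0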

  • Event Logs of VMs Migration in Hyper-V Cluster

    Hello All,
We're running a failover cluster of Windows Server 2012 R2 Hyper-V hosts. If any host goes down unexpectedly (due to power loss, bugcheck, hardware failure, or whatever), then the VMs on that host, of course, get moved (either quick or live) to some other host within the cluster.
I want the logs/events of this VM movement. I want to know which VMs were residing on that host at the time of failure. Of course, we can't get this info from the cluster events in Failover Cluster Manager, and I am unable to find it anywhere else. I have searched in Event Viewer --> Administrative Roles --> Hyper-V, and I have searched a lot in SCVMM, but with no success. We're using SCVMM 2012 R2 with UR5.
Please help me find the exact location of these logs/events. I would also like to know whether a VM was quick migrated or live migrated, and to which host it went.
I'd be highly grateful.
Thanks in anticipation.
Regards,
Hasan

    You have posted this same question in two different forums.  The answer on where to look is posted in the other forum. 
    https://social.technet.microsoft.com/Forums/en-US/7f0da2a8-debc-4dd8-9214-72ed46e3c76b/event-logs-of-vms-migration-in-failover-cluster-of-hyperv-hosts?forum=winserverhyperv
    In the future, forum etiquette requests that you do not cross-post.
    Again, when a host fails, there is no migration, quick or live, to another node of the cluster.  There is a restart.  When a host fails, the VMs on that host also fail.  The cluster detects the failure and the resources (VMs) that had
    been running on the failed node are restarted on another node.  You will see different events entered into the event log for a resource start than for a quick/live migration.
    The easiest way to see this is to do it.  Open up the event viewer on a host to which you plan to migrate a VM.  Perform a quick/live migration.  Refresh the event viewer and note the events that were logged.
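If you would rather pull those entries with PowerShell than scroll the GUI, a sketch (the VMMS Admin channel below is the standard Hyper-V log; adjust the keyword filter as needed):
    # Recent VMMS entries mentioning migration on the destination host
    Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 200 |
        Where-Object { $_.Message -match "migrat" } |
        Format-Table TimeCreated, Id, Message -AutoSize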
    . : | : . : | : . tim

  • Windows Server 2012 - Hyper-V - Cluster Sharded Storage - VHDX unexpectedly gets copied to System Volume Information by "System", Virtual Machines stops respondig

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
HP ProLiant G7, 24 GB RAM. This is the primary host, and normally all VMs run on it.
HP ProLiant G5, 20 GB RAM. This is the secondary host, intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous version of files) is switched off.
    iSCSI SAN:
QNAP NAS TS-869 Pro, 8 x INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
We applied the most recent updates to the hosts, VMs, and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation:
Normally this setup works just fine, and we see no real difference in startup, file copy, and LoB application processing speed between this setup and a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during a file repair.
    Our Problem:
Our problem is that, for some reason, a random VHDX gets copied by "System" to the System Volume Information folder of the clustered shared storage (i.e., C:\ClusterStorage\Volume1\System Volume Information).
All VMs stop responding, or respond very slowly, during this copy process; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
This happens at random, not every day, and different VHDX files from different VMs get copied each time. Sometimes it happens during the daytime, which causes a lot of problems, especially when a 200 GB file gets copied (which takes a long time).
    What it is not:
We thought this was connected to the backup, but the backup had finished 3 hours before the last occurrence, and the backup never uses any of the files in System Volume Information, so it is not the backup.
    An observation:
When this happened today, I switched on ShadowCopy (previous versions of files) and set it to use only 320 MB of storage, and the copy process stopped and the virtual machines started responding again. This could be unrelated, since there is no way to see how much of the VHDX was left to copy, so it might have finished at the same moment I enabled ShadowCopy (previous versions of files).
    Our question:
Why is a VHDX copied to System Volume Information when scheduled ShadowCopy (previous versions of files) is switched off? As far as I know, nothing should be copied to this folder when this function is switched off.
    List of VSS Writers:
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Writer name: 'Task Scheduler Writer'
       Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124}
       Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b}
       State: [1] Stable
       Last error: No error
    Writer name: 'VSS Metadata Store Writer'
       Writer Id: {75dfb225-e2e4-4d39-9ac9-ffaff65ddf06}
       Writer Instance Id: {088e7a7d-09a8-4cc6-a609-ad90e75ddc93}
       State: [1] Stable
       Last error: No error
    Writer name: 'Performance Counters Writer'
       Writer Id: {0bada1de-01a9-4625-8278-69e735f39dd2}
       Writer Instance Id: {f0086dda-9efc-47c5-8eb6-a944c3d09381}
       State: [1] Stable
       Last error: No error
    Writer name: 'System Writer'
       Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
       Writer Instance Id: {7848396d-00b1-47cd-8ba9-769b7ce402d2}
       State: [1] Stable
       Last error: No error
    Writer name: 'Microsoft Hyper-V VSS Writer'
       Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Instance Id: {8b6c534a-18dd-4fff-b14e-1d4aebd1db74}
       State: [5] Waiting for completion
       Last error: No error
    Writer name: 'Cluster Shared Volume VSS Writer'
       Writer Id: {1072ae1c-e5a7-4ea1-9e4a-6f7964656570}
       Writer Instance Id: {d46c6a69-8b4a-4307-afcf-ca3611c7f680}
       State: [1] Stable
       Last error: No error
    Writer name: 'ASR Writer'
       Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
       Writer Instance Id: {fc530484-71db-48c3-af5f-ef398070373e}
       State: [1] Stable
       Last error: No error
    Writer name: 'WMI Writer'
       Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0}
       Writer Instance Id: {3792e26e-c0d0-4901-b799-2e8d9ffe2085}
       State: [1] Stable
       Last error: No error
    Writer name: 'Registry Writer'
       Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
       Writer Instance Id: {6ea65f92-e3fd-4a23-9e5f-b23de43bc756}
       State: [1] Stable
       Last error: No error
    Writer name: 'BITS Writer'
       Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
       Writer Instance Id: {71dc7876-2089-472c-8fed-4b8862037528}
       State: [1] Stable
       Last error: No error
    Writer name: 'Shadow Copy Optimization Writer'
       Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
       Writer Instance Id: {cb0c7fd8-1f5c-41bb-b2cc-82fabbdc466e}
       State: [1] Stable
       Last error: No error
    Writer name: 'Cluster Database'
       Writer Id: {41e12264-35d8-479b-8e5c-9b23d1dad37e}
       Writer Instance Id: {23320f7e-f165-409d-8456-5d7d8fbaefed}
       State: [1] Stable
       Last error: No error
    Writer name: 'COM+ REGDB Writer'
       Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
       Writer Instance Id: {f23d0208-e569-48b0-ad30-1addb1a044af}
       State: [1] Stable
       Last error: No error
    Please note:
Please only answer our question and do not offer general optimization tips that do not directly address the issue! We want the problem to go away, not to finish a bit faster!

Hello Lawrence!
Thank you for your reply; some comments to help you and others who read this thread:
First of all, we use Windows Server 2012 and the VHDX format, as I wrote in the headline and in the text of my post. We have not had this problem in similar setups with Windows Server 2008 R2, so the problem seems to have been introduced in Windows Server 2012.
The posts you refer to seem to be outdated and/or do not apply to our configuration:
The post about dynamic disks,
http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx, is only a recommendation for Windows Server 2008 R2 and the VHD format. Dynamic VHDX is indeed recommended by Microsoft when using Windows Server 2012 (please see the optimization guide for Windows Server 2012).
In fact, if we used fixed VHDX we would have a bigger problem, since fixed VHDX files are generally larger than dynamic disks, i.e. more data would be copied, which would take longer, and the VMs would be unresponsive for a longer time.
    The post "What's the deal with the System Volume Information folder"
    http://blogs.msdn.com/b/oldnewthing/archive/2003/11/20/55764.aspx is for Windows XP / Windows Server 2003 and some things has changed since then. for instance In Windows Server 2012, Shadow Copies cannot be controlled by going to Control panel -> System.
    Instead you right-click on a Drive (i.e. a Volume, for instance the C drive/Volume) in Computer and then click "Configure Shadow Copies".
The Windows Server 2008 R2 backup thread,
http://social.technet.microsoft.com/Forums/en/windowsbackup/thread/0fc53adb-477d-425b-8c99-ad006e132336, is about antivirus software trying to scan files in the System Volume Information folder that are used during backup, and we do not have any antivirus software installed on our hosts, as I stated in my post.
    Comment that might help us:
So according to the "System Volume Information" definition, the operation you mentioned is a Volume Shadow Copy. Check Event Viewer for Volume Shadow Copy related event logs and post them.
    Why?
Further investigation suggests that a volume shadow copy is somehow created even though the schedule for Shadow Copies is turned off for all drives. This happens at random, and we have not found any pattern. Yesterday this operation took almost all available disk space (over 200 GB), but all the disk space was released when I turned on scheduled Shadow Copies for the CSV.
I therefore draw these conclusions:
The CSV volume has about 600 GB of disk space, and since the Volume Shadow Copy used 200 GB, or about 33% of the disk space, while the default limit is 10%, I conclude that for some reason the unscheduled Volume Shadow Copy did not have any limit (or ignored the limit).
When I turned on the schedule I also changed the limit to the minimum amount, which is 320 MB, and this is probably what released the disk space. That is, the unscheduled Volume Shadow Copy operation was aborted, and it adhered to the limit and deleted the Volume Shadow Copy it had taken.
I have also set the limit for Volume Shadow Copies for all other volumes to 320 MB by using the "Configure Shadow Copies" window that you open by right-clicking a drive (volume) in Computer and then selecting "Configure Shadow Copies...".
It is important to note that setting a limit for Shadow Copy storage and disabling the schedule are two different things! It is possible to have unlimited storage for Shadow Copies while the schedule is disabled; however, I do not know if this was the case before I enabled Shadow Copies on the CSV, since I did not look for this.
I have now defined a 320 MB limit for Shadow Copy storage on all drives, so no VHDX should be copied to System Volume Information, since they are all larger than 320 MB.
Does this sound about right, or am I drawing the wrong conclusions?
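For completeness, the same limits can be set from an elevated prompt instead of the "Configure Shadow Copies" dialog (a sketch; drive letters are from our setup):
    # Show the current shadow storage associations
    vssadmin list shadowstorage
    # Cap shadow copy storage for the C: volume at 320 MB
    vssadmin resize shadowstorage /For=C: /On=C: /MaxSize=320MB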
    Limits for Shadow Copies:
    Below we list the limits for our two hosts:
    "Primary Host":
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (91%)
    Shadow Copy Storage association
       For volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Shadow Copy Storage volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    Shadow Copy Storage association
       For volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Shadow Copy Storage volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (3%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    C:\>cd \ClusterStorage\Volume1
    Secondary host:
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 35,0 MB (10%)
    Shadow Copy Storage association
       For volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Shadow Copy Storage volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 27,3 GB (10%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 6,80 GB (10%)
    C:\>
There is something strange about the limits on the secondary host!
I have not changed the settings on the secondary host in any way, and as you can see, the secondary host has a maximum limit of only 35 MB of shadow storage on the CSV, yet it also shows that this is 10% of the volume. That is clearly not the case, since 10% of 600 GB = 60 GB!
The question is: why does it by default set a limit that is too small (i.e., less than 320 MB) on the CSV, and is this the cause of the problem? That is, is the limit ignored because it is smaller than the smallest amount you can set using the GUI?
Is the default 35 MB maximum Shadow Copy limit a bug, or is there any logical reason for setting a limit that, according to the GUI, is too small?

  • Hyper-V Cluster Name offline

We have a 2012 Hyper-V cluster whose cluster name isn't online, and we can't migrate VMs to the other Hyper-V host. We see these event errors in Failover Cluster Manager:
    The description for Event ID 1069 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Cluster Name
    Cluster Group
    Network Name
    The description for Event ID 1254 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    Cluster Group
    The description for Event ID 1155 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component
    on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    ACMAIL
    3604536
    Any help or info is appreciated.
    Thank you!

    Here is the network validation.  Any thoughts?
Failover Cluster Validation Report
      Node: ACHV01.AshtaChemicals.local - Validated
      Node: ACHV02.AshtaChemicals.local - Validated
      Started: 8/6/2014 5:04:47 PM
      Completed: 8/6/2014 5:05:22 PM
The Validate a Configuration Wizard must be run after any change is made to the
configuration of the cluster or hardware.
Results by Category
      Network: Warning
Network
      List Network Binding Order: Success
      Validate Cluster Network Configuration: Success
      Validate IP Configuration: Warning
      Validate Multiple Subnet Properties: Success
      Validate Network Communication: Success
      Validate Windows Firewall Configuration: Success
Overall Result
  Testing has completed for the tests you selected. You should review the
  warnings in the report. A cluster solution is supported by Microsoft only if
  it passes all cluster validation tests.
List Network Binding Order
  Description: List the order in which networks are bound to the adapters on
  each node.
  ACHV01.AshtaChemicals.local (binding order: adapter name, description, speed):
        iSCSI3, Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3, 1000 Mbit/s
        Ethernet 3, Intel(R) PRO/1000 PT Quad Port LP Server Adapter, Unavailable
        Mgt - Heartbeat, Microsoft Network Adapter Multiplexor Driver #4, 2000 Mbit/s
        Mgt - LiveMigration, Microsoft Network Adapter Multiplexor Driver #3, 2000 Mbit/s
        Mgt, Microsoft Network Adapter Multiplexor Driver, 2000 Mbit/s
        iSCSI2, Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37, 1000 Mbit/s
        3, Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client), Unavailable
  ACHV02.AshtaChemicals.local (binding order: adapter name, description, speed):
        Mgt - Heartbeat, Microsoft Network Adapter Multiplexor Driver #4, 2000 Mbit/s
        Mgt - LiveMigration, Microsoft Network Adapter Multiplexor Driver #3, 2000 Mbit/s
        Mgt, Microsoft Network Adapter Multiplexor Driver #2, 2000 Mbit/s
        iSCSI1, Broadcom NetXtreme Gigabit Ethernet #7, 1000 Mbit/s
        NIC2, Broadcom NetXtreme Gigabit Ethernet, Unavailable
        SLOT 5 2, Broadcom NetXtreme Gigabit Ethernet, Unavailable
        iSCSI2, Broadcom NetXtreme Gigabit Ethernet, 1000 Mbit/s
    Validate Cluster Network Configuration
      Description: Validate the cluster networks that would be created for these
      servers.
  Network: Cluster Network 1
  DHCP Enabled: False
  Network Role: Disabled
  One or more interfaces on this network are connected to an iSCSI Target. This
  network will not be used for cluster communication.
        Prefix: 192.168.131.0, Prefix Length: 24
        Network Interface: ACHV01.AshtaChemicals.local - iSCSI3
              DHCP Enabled: False
              Connected to iSCSI target: True
              IP Address: 192.168.131.113, Prefix Length: 24
        Network Interface: ACHV02.AshtaChemicals.local - iSCSI2
              DHCP Enabled: False
              Connected to iSCSI target: True
              IP Address: 192.168.131.121, Prefix Length: 24
  Network: Cluster Network 2
  DHCP Enabled: False
  Network Role: Internal
        Prefix: 192.168.141.0, Prefix Length: 24
        Network Interface: ACHV01.AshtaChemicals.local - Mgt - Heartbeat
              DHCP Enabled: False
              Connected to iSCSI target: False
              IP Address: 192.168.141.10, Prefix Length: 24
        Network Interface: ACHV02.AshtaChemicals.local - Mgt - Heartbeat
              DHCP Enabled: False
              Connected to iSCSI target: False
              IP Address: 192.168.141.12, Prefix Length: 24
  Network: Cluster Network 3
  DHCP Enabled: False
  Network Role: Internal
        Prefix: 192.168.140.0, Prefix Length: 24
        Network Interface: ACHV01.AshtaChemicals.local - Mgt - LiveMigration
              DHCP Enabled: False
              Connected to iSCSI target: False
              IP Address: 192.168.140.10, Prefix Length: 24
        Network Interface: ACHV02.AshtaChemicals.local - Mgt - LiveMigration
              DHCP Enabled: False
              Connected to iSCSI target: False
              IP Address: 192.168.140.12, Prefix Length: 24
  Network: Cluster Network 4
  DHCP Enabled: False
  Network Role: Enabled
        Prefix: 10.1.1.0, Prefix Length: 24
        Network Interface: ACHV01.AshtaChemicals.local - Mgt
              DHCP Enabled: False
              Connected to iSCSI target: False
              IP Address: 10.1.1.4, Prefix Length: 24
        Network Interface: ACHV02.AshtaChemicals.local - Mgt
              DHCP Enabled: False
              Connected to iSCSI target: False
              IP Address: 10.1.1.5, Prefix Length: 24
  Network: Cluster Network 5
  DHCP Enabled: False
  Network Role: Disabled
  One or more interfaces on this network are connected to an iSCSI Target. This
  network will not be used for cluster communication.
        Prefix: 192.168.130.0, Prefix Length: 24
        Network Interface: ACHV01.AshtaChemicals.local - iSCSI2
              DHCP Enabled: False
              Connected to iSCSI target: True
              IP Address: 192.168.130.112, Prefix Length: 24
        Network Interface: ACHV02.AshtaChemicals.local - iSCSI1
              DHCP Enabled: False
              Connected to iSCSI target: True
              IP Address: 192.168.130.121, Prefix Length: 24
      Verifying that each cluster network interface within a cluster network is
      configured with the same IP subnets.
      Examining network Cluster Network 1.
      Network interface ACHV01.AshtaChemicals.local - iSCSI3 has addresses on all
      the subnet prefixes of network Cluster Network 1.
      Network interface ACHV02.AshtaChemicals.local - iSCSI2 has addresses on all
      the subnet prefixes of network Cluster Network 1.
      Examining network Cluster Network 2.
      Network interface ACHV01.AshtaChemicals.local - Mgt - Heartbeat has addresses
      on all the subnet prefixes of network Cluster Network 2.
      Network interface ACHV02.AshtaChemicals.local - Mgt - Heartbeat has addresses
      on all the subnet prefixes of network Cluster Network 2.
      Examining network Cluster Network 3.
      Network interface ACHV01.AshtaChemicals.local - Mgt - LiveMigration has
      addresses on all the subnet prefixes of network Cluster Network 3.
      Network interface ACHV02.AshtaChemicals.local - Mgt - LiveMigration has
      addresses on all the subnet prefixes of network Cluster Network 3.
      Examining network Cluster Network 4.
      Network interface ACHV01.AshtaChemicals.local - Mgt has addresses on all the
      subnet prefixes of network Cluster Network 4.
      Network interface ACHV02.AshtaChemicals.local - Mgt has addresses on all the
      subnet prefixes of network Cluster Network 4.
      Examining network Cluster Network 5.
      Network interface ACHV01.AshtaChemicals.local - iSCSI2 has addresses on all
      the subnet prefixes of network Cluster Network 5.
      Network interface ACHV02.AshtaChemicals.local - iSCSI1 has addresses on all
      the subnet prefixes of network Cluster Network 5.
      Verifying that, for each cluster network, all adapters are consistently
      configured with either DHCP or static IP addresses.
      Checking DHCP consistency for network: Cluster Network 1. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
      iSCSI3 matches network Cluster Network 1.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
      iSCSI2 matches network Cluster Network 1.
      Checking DHCP consistency for network: Cluster Network 2. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      - Heartbeat matches network Cluster Network 2.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      - Heartbeat matches network Cluster Network 2.
      Checking DHCP consistency for network: Cluster Network 3. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      - LiveMigration matches network Cluster Network 3.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      - LiveMigration matches network Cluster Network 3.
      Checking DHCP consistency for network: Cluster Network 4. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local - Mgt
      matches network Cluster Network 4.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local - Mgt
      matches network Cluster Network 4.
      Checking DHCP consistency for network: Cluster Network 5. Network DHCP status
      is disabled.
      DHCP status (disabled) for network interface ACHV01.AshtaChemicals.local -
      iSCSI2 matches network Cluster Network 5.
      DHCP status (disabled) for network interface ACHV02.AshtaChemicals.local -
      iSCSI1 matches network Cluster Network 5.
    Validate IP Configuration
      Description: Validate that IP addresses are unique and subnets configured
      correctly.
  ACHV01.AshtaChemicals.local
        Adapter Name: iSCSI3
              Adapter Description: Intel(R) PRO/1000 PT Quad Port LP Server Adapter #3
              Physical Address: 00-26-55-DB-CF-73
              Status: Operational
              DNS Servers: (none)
              IP Address: 192.168.131.113, Prefix Length: 24
        Adapter Name: Mgt - Heartbeat
              Adapter Description: Microsoft Network Adapter Multiplexor Driver #4
              Physical Address: 78-2B-CB-3C-DC-F5
              Status: Operational
              DNS Servers: 10.1.1.2, 10.1.1.8
              IP Address: 192.168.141.10, Prefix Length: 24
        Adapter Name: Mgt - LiveMigration
              Adapter Description: Microsoft Network Adapter Multiplexor Driver #3
              Physical Address: 78-2B-CB-3C-DC-F5
              Status: Operational
              DNS Servers: 10.1.1.2, 10.1.1.8
              IP Address: 192.168.140.10, Prefix Length: 24
        Adapter Name: Mgt
              Adapter Description: Microsoft Network Adapter Multiplexor Driver
              Physical Address: 78-2B-CB-3C-DC-F5
              Status: Operational
              DNS Servers: 10.1.1.2, 10.1.1.8
              IP Address: 10.1.1.4, Prefix Length: 24
        Adapter Name: iSCSI2
              Adapter Description: Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) #37
              Physical Address: 78-2B-CB-3C-DC-F7
              Status: Operational
              DNS Servers: (none)
              IP Address: 192.168.130.112, Prefix Length: 24
        Adapter Name: Local Area Connection* 12
              Adapter Description: Microsoft Failover Cluster Virtual Adapter
              Physical Address: 02-61-1E-49-32-8F
              Status: Operational
              DNS Servers: (none)
              IP Address: fe80::cc2f:d769:fe24:3d04%23, Prefix Length: 64
              IP Address: 169.254.2.195, Prefix Length: 16
        Adapter Name: Loopback Pseudo-Interface 1
              Adapter Description: Software Loopback Interface 1
              Status: Operational
              IP Address: ::1, Prefix Length: 128
              IP Address: 127.0.0.1, Prefix Length: 8
        ISATAP adapters (all Not Operational, Physical Address 00-00-00-00-00-00-00-E0):
              isatap.{96B6424D-DB32-480F-8B46-056A11A0A6A8} (Microsoft ISATAP Adapter): fe80::5efe:192.168.131.113%16, Prefix Length 128
              isatap.{A0353AF4-CE7F-4811-B4FC-35273C2F2C6E} (Microsoft ISATAP Adapter #3): fe80::5efe:192.168.130.112%18, Prefix Length 128
              isatap.{FAAF4D6A-5A41-4725-9E83-689D8E6682EE} (Microsoft ISATAP Adapter #4): fe80::5efe:192.168.141.10%22, Prefix Length 128
              isatap.{C66443C2-DC5F-4C2A-A674-2191F76E33E1} (Microsoft ISATAP Adapter #5): fe80::5efe:10.1.1.4%27, Prefix Length 128
              isatap.{B3A95E1D-CB95-4111-89E5-276497D7EF42} (Microsoft ISATAP Adapter #6): fe80::5efe:192.168.140.10%29, Prefix Length 128
              isatap.{7705D42A-1988-463E-9DA3-98D8BD74337E} (Microsoft ISATAP Adapter #7): fe80::5efe:169.254.2.195%30, Prefix Length 128
  ACHV02.AshtaChemicals.local
        Adapter Name: Mgt - Heartbeat
              Adapter Description: Microsoft Network Adapter Multiplexor Driver #4
              Physical Address: 74-86-7A-D4-C9-8B
              Status: Operational
              DNS Servers: 10.1.1.8, 10.1.1.2
              IP Address: 192.168.141.12, Prefix Length: 24
        Adapter Name: Mgt - LiveMigration
              Adapter Description: Microsoft Network Adapter Multiplexor Driver #3
              Physical Address: 74-86-7A-D4-C9-8B
              Status: Operational
              DNS Servers: 10.1.1.8, 10.1.1.2
              IP Address: 192.168.140.12, Prefix Length: 24
        Adapter Name: Mgt
              Adapter Description: Microsoft Network Adapter Multiplexor Driver #2
              Physical Address: 74-86-7A-D4-C9-8B
              Status: Operational
              DNS Servers: 10.1.1.8, 10.1.1.2
              IP Address: 10.1.1.5, Prefix Length: 24
              IP Address: 10.1.1.248, Prefix Length: 24
        Adapter Name: iSCSI1
              Adapter Description: Broadcom NetXtreme Gigabit Ethernet #7
              Physical Address: 74-86-7A-D4-C9-8A
              Status: Operational
              DNS Servers: (none)
              IP Address: 192.168.130.121, Prefix Length: 24
        Adapter Name: iSCSI2
              Adapter Description: Broadcom NetXtreme Gigabit Ethernet
              Physical Address: 00-10-18-F5-08-9C
              Status: Operational
              DNS Servers: (none)
              IP Address: 192.168.131.121, Prefix Length: 24
        Adapter Name: Local Area Connection* 11
              Adapter Description: Microsoft Failover Cluster Virtual Adapter
              Physical Address: 02-8F-46-67-27-51
              Status: Operational
              DNS Servers: (none)
              IP Address: fe80::3471:c9bf:29ad:99db%25, Prefix Length: 64
              IP Address: 169.254.1.193, Prefix Length: 16
        Adapter Name: Loopback Pseudo-Interface 1
              Adapter Description: Software Loopback Interface 1
              Status: Operational
              IP Address: ::1, Prefix Length: 128
              IP Address: 127.0.0.1, Prefix Length: 8
        ISATAP adapters (all Not Operational, Physical Address 00-00-00-00-00-00-00-E0):
              isatap.{8D7DF16A-1D5F-43D9-B2D6-81143A7225D2} (Microsoft ISATAP Adapter #2): fe80::5efe:192.168.131.121%21, Prefix Length 128
              isatap.{82E35DBD-52BE-4BCF-BC74-E97BB10BF4B0} (Microsoft ISATAP Adapter #3): fe80::5efe:192.168.130.121%22, Prefix Length 128
              isatap.{5A315B7D-D94E-492B-8065-D760234BA42E} (Microsoft ISATAP Adapter #4): fe80::5efe:192.168.141.12%23, Prefix Length 128
              isatap.{2182B37C-B674-4E65-9F78-19D93E78FECB} (Microsoft ISATAP Adapter #5): fe80::5efe:192.168.140.12%24, Prefix Length 128
              isatap.{104DC629-D13A-4A36-8845-0726AC9AE25E} (Microsoft ISATAP Adapter #6): fe80::5efe:10.1.1.5%33, Prefix Length 128
              isatap.{483266DF-7620-4427-BE5D-3585C8D92A12} (Microsoft ISATAP Adapter #7): fe80::5efe:169.254.1.193%34, Prefix Length 128
      Verifying that a node does not have multiple adapters connected to the same
      subnet.
      Verifying that each node has at least one adapter with a defined default
      gateway.
      Verifying that there are no node adapters with the same MAC physical address.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV01.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 78-2B-CB-3C-DC-F5 on node
      ACHV01.AshtaChemicals.local adapter Mgt - LiveMigration and node
      ACHV01.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - Heartbeat and node
      ACHV02.AshtaChemicals.local adapter Mgt.
      Found duplicate physical address 74-86-7A-D4-C9-8B on node
      ACHV02.AshtaChemicals.local adapter Mgt - LiveMigration and node
      ACHV02.AshtaChemicals.local adapter Mgt.
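      Note: the duplicate MACs flagged above are expected when "Mgt", "Mgt -
      Heartbeat" and "Mgt - LiveMigration" are team interfaces on the Microsoft
      Network Adapter Multiplexor Driver, because every team interface (tNIC)
      of an LBFO team shares the team's MAC address. A quick sketch to confirm
      this on each node, assuming the in-box NetLbfo/NetAdapter modules on
      Server 2012:

        # List LBFO teams, their team interfaces (tNICs) and member NICs
        Get-NetLbfoTeam | Format-Table Name, TeamNics, Members

        # tNICs share the team MAC, which is what validation reports as duplicates
        Get-NetAdapter | Sort-Object MacAddress |
            Format-Table Name, InterfaceDescription, MacAddress, Status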
      Verifying that there are no duplicate IP addresses between any pair of nodes.
      Checking that nodes are consistently configured with IPv4 and/or IPv6
      addresses.
      Verifying that all nodes IPv4 networks are not configured using Automatic
      Private IP Addresses (APIPA).
    Validate Multiple Subnet Properties
      Description: For clusters using multiple subnets, validate the network
      properties.
      Testing that the HostRecordTTL property for network name Name: Cluster1 is set
      to the optimal value for the current cluster configuration.
      HostRecordTTL property for network name Name: Cluster1 has a value of 1200.
      Testing that the RegisterAllProvidersIP property for network name Name:
      Cluster1 is set to the optimal value for the current cluster configuration.
      RegisterAllProvidersIP property for network name Name: Cluster1 has a value of
      0.
      Testing that the PublishPTRRecords property for network name Name: Cluster1 is
      set to the optimal value for the current cluster configuration.
      The PublishPTRRecords property forces the network name to register a PTR
      record in the DNS reverse lookup zone (an IP-address-to-name mapping).
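      Note: these name-resource properties can be read and tuned from
      PowerShell instead of re-running validation. A minimal sketch, assuming
      the network name resource carries the default name "Cluster Name" (the
      resource name on this cluster may differ):

        # Inspect the multi-subnet DNS behavior of the cluster name resource
        Get-ClusterResource "Cluster Name" |
            Get-ClusterParameter HostRecordTTL, RegisterAllProvidersIP, PublishPTRRecords

        # Example tweak: shorten the DNS TTL (in seconds) for faster client failover
        Get-ClusterResource "Cluster Name" | Set-ClusterParameter HostRecordTTL 300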
    Validate Network Communication
      Description: Validate that servers can communicate, with acceptable latency,
      on all networks.
      Analyzing connectivity results ...
      Multiple communication paths were detected between each pair of nodes.
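      Note: the FailoverClusters module exposes the networks behind those
      paths; a one-line sketch to review them:

        # Role 3 = cluster and client, 1 = cluster only, 0 = excluded
        Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask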
    Validate Windows Firewall Configuration
      Description: Validate that the Windows Firewall is properly configured to
      allow failover cluster network communication.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt - LiveMigration'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - iSCSI3'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt - Heartbeat'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - Mgt'.
      The Windows Firewall on node 'ACHV01.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV01.AshtaChemicals.local - iSCSI2'.
      The Windows Firewall on node ACHV01.AshtaChemicals.local is configured to
      allow network communication between cluster nodes.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - iSCSI2'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt - LiveMigration'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - Mgt - Heartbeat'.
      The Windows Firewall on node 'ACHV02.AshtaChemicals.local' is configured to
      allow network communication between cluster nodes over adapter
      'ACHV02.AshtaChemicals.local - iSCSI1'.
      The Windows Firewall on node ACHV02.AshtaChemicals.local is configured to
      allow network communication between cluster nodes.
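      Note: these checks correspond to the in-box "Failover Clusters" firewall
      rule group, which can be audited per node without re-running the wizard
      (a sketch, assuming the NetSecurity module on Server 2012):

        # Verify the failover-clustering firewall rules are enabled on this node
        Get-NetFirewallRule -DisplayGroup "Failover Clusters" |
            Format-Table DisplayName, Enabled, Direction, Profile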

  • Adding nodes to Windows Server 2008 R2 Hyper-V Cluster..

    Currently we have a 3-node Windows Server 2008 R2 Hyper-V cluster in production, with about 3 terabytes of VMs running across the nodes.
    It is over-committed, so I've set up two new nodes to add to the cluster.
    I've done this before in a SQL cluster, but never a Hyper-V cluster.
    If I don't run validation when adding the nodes, will there be downtime?
    The quorum is set up for disk majority, and everything that needs to match is identical on all nodes. Shared storage is recognized and ready on the new nodes. I've gone through every checklist Microsoft has. I'm just curious whether the virtual machines
    on the current nodes will go offline when I add the two new ones.
    Everything is identical down to the WSUS updates installed; from networking to storage, everything matches.
    I don't want to run validation, as I understand it will take everything offline.

    Hi,
    It is recommended to run a validation test; you can select a custom test and skip the storage tests.
    Adding the new nodes to the existing cluster will not bring down the existing VMs.
    Lai (My blog:- http://www.ms4u.info)
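    For reference, the same approach from PowerShell: run a targeted validation that skips the storage tests (so the shared disks are never arbitrated away from the running nodes), then join the new nodes. A sketch only; the cluster and node names below are placeholders:

    # Validate the non-disruptive categories against the expanded node set
    Test-Cluster -Node Node1, Node2, Node3, Node4, Node5 `
        -Include "Inventory", "Network", "System Configuration"

    # Join the new nodes; VMs on the existing nodes keep running
    Add-ClusterNode -Cluster HVCluster -Name Node4
    Add-ClusterNode -Cluster HVCluster -Name Node5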

  • Microsoft Virtual Machine Converter VMware To Hyper-V Cluster

    I'm not sure if this should technically be in the clustering section, but I have just moved from SCVMM 2012 SP1 to 2012 R2 and I miss the built-in converter tool. What I used to do when converting VMware to Hyper-V was uninstall VMware Tools and then
    run a physical-to-virtual conversion against the VMware virtual machine; SCVMM would handle copying the machine while it was online and register it in our Hyper-V cluster. Now the only thing I could come up with is the Microsoft Virtual Machine
    Converter, but it seems rather limited and doesn't appear to have an option to import to a cluster. So is the only option to convert it to Hyper-V as if it were a local machine, and then run another export/import process to get it into the cluster? I tried
    pointing it at a CSV, and while it copied the disk over, it registered the virtual machine in ProgramData (the default location). That obviously causes issues when trying to make the VM highly available. Does anyone have a suggested process for the best way to go
    about this? Thank you in advance for your time!

    So no matter what option I choose, V2V always shuts down the source VM during the conversion. On the other hand, if I use the old method of uninstalling VMware Tools manually and then doing a P2V instead, the source VM stays online for
    that type of conversion; then I just have to migrate it to become highly available, and that accomplishes what I want. The only annoying part is that I couldn't run it on my Windows 8.1 Pro workstation, because P2V requires the BITS feature to be installed,
    and that only appears to be available on server editions (correct me if there is a way to get it on 8.1). The documentation says the tool should run on server editions only, but V2V runs fine from 8.1, since it is
    an offline process and doesn't need BITS.
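    One way to skip the export/import round trip is to convert only the disk with the MVMC PowerShell module, put both the disk and the VM configuration on a CSV, and register the VM with the cluster afterwards. A rough sketch, assuming MVMC 3.0 in its default install path; every path and name below is a placeholder:

    # Load the MVMC cmdlets (default install location)
    Import-Module "C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1"

    # Convert the VMware disk to a dynamic VHDX directly onto a CSV
    ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath "\\vmwarehost\share\VM01.vmdk" `
        -DestinationLiteralPath "C:\ClusterStorage\Volume1\VM01" `
        -VhdType DynamicHardDisk -VhdFormat Vhdx

    # Create the VM with its configuration on the CSV, not in ProgramData
    New-VM -Name VM01 -Path "C:\ClusterStorage\Volume1\VM01" `
        -VHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" -MemoryStartupBytes 4GB

    # Register it with the cluster so it becomes highly available
    Add-ClusterVirtualMachineRole -VMName VM01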

  • SMB3 for Hyper-V Cluster

    I'm contemplating using the much-hyped SMB3-backed Hyper-V cluster. I just have a few questions.
    1. Is there a way for the SMB3 share to be HA?
    2. Is it easily scalable? Can I add storage live, without downtime?
    3. Is there any performance or reliability advantage over iSCSI-attached storage?
    This assumes the data within the VHDs is OS and general company data, not large CAD or multimedia data.
    I can probably google most of this, but I'm looking for confirmation from someone who has done it or is doing it. Whitepapers can be unhelpful, and sales guys usually refer me to their sales engineer. Thanks in advance.

    Are there any ways around this limitation without having to install third-party software?
    I'm surprised I wasn't able to find much about this in any of my searches.
    Run your workload inside virtual machines: configure a guest VM cluster between a pair of VMs running Windows Server 2012 R2 and have the built-in Microsoft iSCSI Target handle the failover. See for reference:
    Configure MSFT iSCSI Target for HA
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    (Yes, this adds virtualization overhead, as all I/O is routed over VMBus, and you are still active-passive, since the Microsoft target cannot do active-active; but if you don't want third-party software and don't care much about performance, it's a
    viable way to go.)
    Good luck!
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
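    On questions 1 and 2 specifically: the in-box route is a Scale-Out File Server role with a continuously available SMB share on a CSV, and capacity grows later by adding disks/CSVs without downtime. A minimal sketch (role, share, path and account names are placeholders):

    # On the file server cluster, create the Scale-Out File Server role
    Add-ClusterScaleOutFileServerRole -Name SOFS01

    # Carve a share on a CSV and grant the Hyper-V hosts' computer accounts access
    New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\VMStore
    New-SmbShare -Name VMStore -Path C:\ClusterStorage\Volume1\VMStore `
        -FullAccess 'DOMAIN\HV01$', 'DOMAIN\HV02$' -ContinuouslyAvailable $true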

  • SUN Cluster.PMF.pmfd Failed to stay up

    Dear All,
    Please help: I am facing a problem and am unable to start the Sun Cluster concurrent manager resource group. It shows status "starting" but never comes up. Please find the log below:
    Oct 16 14:06:24 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 887656 daemon.notice] Process: tag="prdclone-rg,PRODE-cmg-res,0.svc", cmd="/bin/sh -c /opt/SUNWscebs/cmg/bin/start_cmg -R 'PRODE-cmg-res' -G 'prdclone-rg' -C '/bkpclone/acvetprdcm/inst/apps/PRODE_iat-dc-prdclone' -U 'acvetprdcm' -P 'apps' -V '12.0' -S 'PRODE' -O '/bkpclone/acvetprdcm/apps/tech_st/10.1.2' -L '77' ", Failed to stay up.
    Oct 16 14:06:24 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 534408 daemon.notice] "prdclone-rg,PRODE-cmg-res,0.svc" restarting too often ... sleeping 8 seconds.
    Oct 16 14:06:32 iat-dc-ebpdb02 SC[SUNWscebs.cmg.start]:prdclone-rg:PRODE-cmg-res: [ID 567783 daemon.error] startebs - ld.so.1: sh: fatal: /usr/lib/secure/libschost.so.1: open failed: No such file or directory
    Oct 16 14:06:32 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 887656 daemon.notice] Process: tag="prdclone-rg,PRODE-cmg-res,0.svc", cmd="/bin/sh -c /opt/SUNWscebs/cmg/bin/start_cmg -R 'PRODE-cmg-res' -G 'prdclone-rg' -C '/bkpclone/acvetprdcm/inst/apps/PRODE_iat-dc-prdclone' -U 'acvetprdcm' -P 'apps' -V '12.0' -S 'PRODE' -O '/bkpclone/acvetprdcm/apps/tech_st/10.1.2' -L '77' ", Failed to stay up.
    Oct 16 14:06:32 iat-dc-ebpdb02 Cluster.PMF.pmfd: [ID 534408 daemon.notice] "prdclone-rg,PRODE-cmg-res,0.svc" restarting too often ... sleeping 16 seconds.
    Oct 16 14:06:48 iat-dc-ebpdb02 SC[SUNWscebs.cmg.start]:prdclone-rg:PRODE-cmg-res: [ID 567783 daemon.error] startebs - ld.so.1: sh: fatal: /usr/lib/secure/libschost.so.1: open failed: No such file or directory
    Kindly help us resolve the issue.
    Regards,

    Thanks; I am still unable to resolve the issue.
    Please see my setup below:
    Database tier:
    Sun Cluster 3.2u3
    Oracle EBS 12.1.3
    Two-node Sun Cluster: active node a1 and passive node b1.
    Application tier:
    App01
    I want to move the concurrent manager from app01 to the database tier; what follows is my action plan.
    Step 1: cloned the application (app01) to the DB tier on the primary host, with only batch processing enabled and everything else disabled, using the same virtual host that is already defined for the database resource group LH (vhost).
    The problem: when I start the CM, it starts but immediately stops. When I cloned again using the physical host,
    the CM started and works fine, but I need someone to tell me how I can start it manually and move the CM resource into Sun Cluster.
    Question: can I use the same logical host for the application, or do I need to use the physical name of the primary node during the cloning process (as noted, we use the same logical host for the DB tier), or do I need to add a new virtual host for the CM?
    Thanks
    Regards,
