Additional DAG IP not shown in failover cluster manager

Hi,
I am facing a strange issue in our environment running Exchange 2010 SP3 RU2 on Windows 2008 R2 SP1.
We have a DAG spread across 2 different AD sites, and we recently added an additional IP address to the DAG from the secondary site's IP subnet.
Both IP addresses show up under the DAG properties in the EMC and in the EMS, but only the old IP address is visible in Failover Cluster Manager.
I believe both IP addresses should be visible in Failover Cluster Manager (one IP online and the second IP offline).
No events are being generated that could give me a clue about where the issue lies. Please suggest if someone has faced a similar issue.
Thanks
Supreet Singh

Hi,
Based on my knowledge, this issue occurs because you haven't yet added a server in the DR site to the existing DAG.
The additional IP address should appear in Failover Cluster Manager after you add a Mailbox server in the DR site to your existing DAG.
You can look at the "Database availability group lifecycle" section in the following article.
http://technet.microsoft.com/en-GB/library/dd979799(v=exchg.141).aspx
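For reference, a minimal sketch of the same change from the Exchange Management Shell, using placeholder names (DAG1, MBX-DR1) and placeholder addresses; as described above, the second IP Address resource should only appear in the cluster once a member from that subnet exists:
# Assign one DAG IP per MAPI subnet (placeholder values)
Set-DatabaseAvailabilityGroup -Identity DAG1 -DatabaseAvailabilityGroupIpAddresses 10.0.0.10,192.168.0.10
# Add the DR-site Mailbox server to the DAG
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-DR1
# Verify on any DAG member that both IP Address resources now exist
Import-Module FailoverClusters
Get-ClusterResource | Where-Object {$_.ResourceType -like "IP Address"}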
Best regards,
Belinda Ma
TechNet Community Support

Similar Messages

  • Disks are not shown in Failover Cluster Manager

    Hi All,
    As per the validation report, we have successfully configured a cluster between two nodes.
    In Failover Cluster Manager, the Storage view shows all the available clustered disks, but nothing is shown when I click on the active node.
    I glanced at a few earlier posts, and all I understood is that Windows Server 2008 R2 has a bug and MS is working on it.
    Do we have any fix for this issue? Please suggest.
    Grateful to your time and support.
    Regards,
    Kalyan

    Got the solution. In fact it's not a solution, just an awareness of how disks show up in Failover Cluster Manager:
    1) All the available disks are shown under "Storage" in Failover Cluster Manager.
    2) During installation, SQL Server asks you to add the available disks, so add them accordingly (e.g., E: drive for data files and L: drive for log files).
    3) Once the installation is done, you see those disks in the MSSQL (SQL Server) group.
    That means when you click on the active node, it should show you the MSSQL (SQL Server) group, in which you can find the network name, network IP, the E: and L: drives, and the SQL Server and Agent services.
    Note: MSDTC and the quorum disk are also clustered disks; they can reside on any of the nodes (preferably the active node) but automatically fail over to another node in case of failure. (A PowerShell view of this is sketched below.)
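    For reference, a quick way to see which disks landed in which group is the FailoverClusters PowerShell module; a minimal sketch, assuming the default SQL role name (adjust to your instance):
    Import-Module FailoverClusters
    # List every clustered disk together with the group (role) that owns it
    Get-ClusterResource | Where-Object {$_.ResourceType -like "Physical Disk"} |
        Format-Table Name, OwnerGroup, State
    # Or inspect a single group, e.g. the SQL Server role (example name)
    Get-ClusterGroup "SQL Server (MSSQLSERVER)" | Get-ClusterResource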
    Regards,
    Kalyan

  • SCVMM created VMs not displayed in Failover Cluster Manager

    I have a 2012 Hyper-V failover cluster setup and recently added SCVMM 2012 SP1 to the mix so I could perform some P2V migrations and familiarize myself with its many other capabilities. I noticed that if I create a VM inside SCVMM, it doesn't show up in the FCM UI with the other VMs I created from FCM. VMs that you create in FCM do get picked up by SCVMM, however. Is this by design?
    Thanks,
    Greg

    For my issue above, this was because I hadn't noticed, and thus hadn't ticked, the box in the Live Migrate wizard that says "Make this VM highly available".
    I moved the VM out of the cluster, manually deleted the failed "SCVMM <VMName> Resource", then moved the VM back onto the cluster again, this time ticking the box to make the VM highly available. All looked fine in Failover Cluster Manager.
    I do rather wonder why SCVMM's designers think I might want to migrate a VM onto a Hyper-V cluster and NOT want it to be highly available...? Likewise, to be able to move the VM back out again to a standalone host once it's correctly in the cluster, you have to untick the "Make this VM highly available" box. Surely this should just be done automatically in the background?
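    If anyone hits the same thing, a hedged PowerShell alternative to re-ticking the wizard box is to cluster the VM directly; a sketch with a placeholder VM name:
    Import-Module FailoverClusters
    # Make an existing, unclustered VM highly available; this creates the
    # "Virtual Machine" and "Virtual Machine Configuration" cluster resources
    Add-ClusterVirtualMachineRole -VMName "MyVM"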

  • How to recover from accidental VM role removal in Hyper-V Failover Cluster Manager?

    Hello
    My VMs are still available in Hyper-V Manager, but they are no longer available in Cluster Manager. I am curious if there's a way to move them back into Failover Cluster Manager while retaining their GUIDs?
    I know I can manually create a new VM and add the existing VHD, but that process creates a new GUID, causing the VM to treat everything I wanted to keep as old.
    I have tried to follow
    http://blogs.technet.com/b/heyscriptingguy/archive/2013/10/14/recovering-virtual-machines-in-hyper-v-server-2012-r2-part-1.aspx but I am curious if there's a faster way to do this without becoming a scripting genius overnight.
    Thanks,
    Prince

    Thanks for the reply but that does not help.
    I am not looking to move my VMs to a remote Host but rather looking to repopulate the fail-over cluster manager with the existing VMs in Hyper-V Manager. In a sense, I am looking to reuse the existing GUID without creating new ones.
    Again, my VMs are alive in Hyper-V Manager but are currently not available in Failover Cluster Manager.
    Any scripting assistance will be greatly appreciated.
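    For the scripting part, a minimal sketch of the kind of loop that usually does this, assuming the VMs still run locally in Hyper-V Manager and the cluster itself is healthy; Add-ClusterVirtualMachineRole re-registers the existing VM rather than recreating it, so the GUID is preserved:
    Import-Module FailoverClusters
    # Re-add every local VM that is not yet clustered as a cluster role;
    # only cluster resources are created, the VM and its GUID are untouched
    Get-VM | Where-Object { -not $_.IsClustered } | ForEach-Object {
        Add-ClusterVirtualMachineRole -VMName $_.Name
    }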
    Thanks,
    Prince

  • Cluster Network 2 is missing from Failover cluster Manager

    I have a two-node Windows 2008 R2 SP1 MS SQL cluster. There was an issue with one node's NIC card, which was replaced, and since then "Cluster Network 2" is not visible in Failover Cluster Manager; the Heartbeat and Public IP NICs of both nodes are appearing in the "Cluster Network 1" container. Even though the public and heartbeat networks are on different subnets, the cluster still detects them as one subnet. I need to know how to bring back "Cluster Network 2".

    After replacing the NIC, the cluster worked fine on the other node for a couple of days, and then the cluster service started terminating on both nodes.
    When I checked the cluster logs, I found the error below:
    "ERR   [CORE] Node 2: exception caught AlreadyExists(183)' because of 'already exists'(AODBP2DB1 - Local Area Connection)"
    Hence, to correct this problem, I renamed both nodes' NICs as Public and Private; the cluster started, and I tested the failover as well. But since then I am only able to see "Cluster Network 1", and all the NICs are listed under this container.
    However, the cluster validation report shows both Cluster Network 1 and Cluster Network 2 entries.
    Pasted below is the network section from the cluster validation report:
    Network: Cluster Network 1
    DHCP Enabled: False
    Network Role: Enabled
    Prefix: 10.100.18.0 / Prefix Length: 25
    Network Interface: Node1 - Public (DHCP Enabled: False, IP Address: 10.100.18.11, Prefix Length: 25)
    Network Interface: Node2 - Public (DHCP Enabled: False, IP Address: 10.100.18.16, Prefix Length: 25)
    Network: Cluster Network 2
    DHCP Enabled: False
    Network Role: Internal
    Prefix: 10.101.130.0 / Prefix Length: 25
    Network Interface: Node1 - Private (DHCP Enabled: False, IP Address: 10.101.130.11, Prefix Length: 25)
    Network Interface: Node2 - Private1 (DHCP Enabled: False, IP Address: 10.101.130.13, Prefix Length: 25)
    Verifying that each cluster network interface within a cluster network is configured with the same IP subnets.
    Examining network Cluster Network 1.
    Network interface Node1- Public has addresses on all the subnet prefixes of network Cluster Network 1.
    Network interface Node2- Public has addresses on all the subnet prefixes of network Cluster Network 1.
    Examining network Cluster Network 2.
    Network interface Node1- Private has addresses on all the subnet prefixes of network Cluster Network 2.
    Network interface Node2- Private1 has addresses on all the subnet prefixes of network Cluster Network 2.
    Verifying that, for each cluster network, all adapters are consistently configured with either DHCP or static IP addresses.
    Checking DHCP consistency for network: Cluster Network 1. Network DHCP status is disabled.
    DHCP status (disabled) for network interface Node1- Public matches network Cluster Network 1.
    DHCP status (disabled) for network interface Node2- Public matches network Cluster Network 1.
    Checking DHCP consistency for network: Cluster Network 2. Network DHCP status is disabled.
    DHCP status (disabled) for network interface Node1- Private matches network Cluster Network 2.
    DHCP status (disabled) for network interface Node2- Private1 matches network Cluster Network 2.
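    For anyone checking the same state from PowerShell, the cluster module exposes the live view that should match the validation report; a minimal sketch:
    Import-Module FailoverClusters
    # What the running cluster currently believes the networks are
    Get-ClusterNetwork | Format-Table Name, Address, AddressMask, Role, State
    # Which NIC on each node is mapped to which cluster network
    Get-ClusterNetworkInterface | Format-Table Node, Name, Network, Address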

  • Network DR test causes Exchange DAG network to fail (Failover Cluster Manager reports comms errors)

    We have a DAG configured between 2 mailbox servers, one in each of our main data centres. Our comms team recently performed a DR test between our 2 data centres, switching from the main production link to the backup link. During this outage the Failover Cluster Manager reported errors, with each mailbox server reporting the other as uncontactable. The events that were logged include the following:
    Isatap interface isatap.{02ADE20A-D5D4-437F-AD00-E6601F7E7A9D} is no longer active. (EventID 4201)
    Cluster node 'MAILBOX_SERVER' was removed from the active failover cluster membership. The Cluster service on this node may have stopped. This could also be due to the node having lost communication with other active nodes in the failover cluster. Run the
    Validate a Configuration wizard to check your network configuration. If the condition persists, check for hardware or software errors related to the network adapters on this node. Also check for failures in any other network components to which the node is
    connected such as hubs, switches, or bridges. (EventID 1135)
    File share witness resource 'File Share Witness (\\WITNESS_SERVER\SHARE_NAME)' failed to arbitrate for the file share '\\WITNESS_SERVER\SHARE_NAME'. Please ensure that file share '\\WITNESS_SERVER\SHARE_NAME' exists and is accessible by the cluster. (EventID
    1564)
    Cluster resource 'File Share Witness (\\\WITNESS_SERVER\SHARE_NAME)' in clustered service or application 'Cluster Group' failed. (EventID 1069)
    The Cluster service is shutting down because quorum was lost. This could be due to the loss of network connectivity between some or all nodes in the cluster, or a failover of the witness disk. Run the Validate a Configuration wizard to check your network
    configuration. If the condition persists, check for hardware or software errors related to the network adapter. Also check for failures in any other network components to which the node is connected such as hubs, switches, or bridges. (EventID 1177)
    The Cluster Service service terminated with service-specific error A quorum of cluster nodes was not present to form a cluster. (EventID 7024)
    The Microsoft Exchange Information Store service terminated unexpectedly.  It has done this 1 time(s).  The following corrective action will be taken in 5000 milliseconds: Restart the service. (EventID 7031)
    Looking at the Cluster Events in the Failover Cluster Manager snap-in, I see a heap of Event ID 47 (cannot activate the DAG databases as the server is not up according to Windows Failover Cluster Service) and:
    Node status could not be recorded. This could prevent some network failure logic from functioning correctly. NodeStatus:IsHealthy=True,HasADAccess=True,ClusterErrorOverrideFalse,LastUpdate=5/2/2011 8:25:42 AMUTC Failure:An Active Manager operation failed.
    Error: An error occurred while attempting a cluster operation. Error: Cluster API '"ClusterRegSetValue() failed with 0x6be. Error: The remote procedure call failed"' failed.. (EventID 184)
    Forcefully dismounting all the locally mounted databases on server 'BACKUP_MAILBOX_SERVER'. (EventID 307)
    Our comms team doesn't believe it is a comms issue, as they did not log any network communication errors between the servers in the two sites (using ICMP). So if it is not a comms issue, how can I configure the Failover Cluster Manager to be resilient to this type of network failover event?
    Thanks
    Dan

    Isn't it also true that in a stretched DAG with an even number of nodes, the PAM needs to be in the same site as the active DAG node? If the connection between the nodes goes down, and the PAM is in the "passive" site, the primary node will dismount the databases, since it can't check with the PAM to make sure it's safe for it to be up.
    In an even-node stretched DAG, the PAM moves to the DR/passive site every time a failover occurs, but doesn't automatically switch back when you reactivate the primary node.
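    For reference, a hedged way to check where the PAM currently lives (DAG1 is a placeholder name):
    # The PAM is the DAG member that owns the cluster core resources
    Get-DatabaseAvailabilityGroup DAG1 -Status | Format-List Name, PrimaryActiveManager
    # The equivalent view from the cluster side: the owner of the default Cluster Group
    Get-ClusterGroup "Cluster Group" | Format-Table Name, OwnerNode, State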

  • VM will not boot after moving using Failover Cluster Manager - "a disk read error occurred......"

    My current Configuration:
    3-node cluster, using cluster shared storage and about 22 VMs. The host servers are running 2012 Datacenter while all guests are running 2012 Standard. The SAN is EqualLogic and we are using HIT Kit 4.5.
    I have a CSV that is running out of space, so I created another CSV to give some of the VMs a new home. I tested this by creating a test VM, and moved it successfully 3 times. I then moved an actual LIVE VM and, while it seemed to move OK, it will now not start. The message is "a disk read error occurred Press ctrl+alt+del to restart". I moved the test VM again and it failed as well.
    I have read several things about this, but nothing seems to relate to my specific issue. I have verified that VSS is working and free of errors. From the Settings menu for the VM, if I select "Inspect" on the drive, the properties all look fine. It is a VHDX, and both the current file size and maximum disk size seem correct.
    The VMs were moved using the "Move - Virtual Machine Storage" option within Failover Cluster Manager.
    Suggestions?
    Thanks.

    Let's see if I can answer all of those; I appreciate the brainstorming. This really needs to work, correctly.
    1. The storage is moving.
    2. VMs and SAN are on the same device.
    3. No, my Cluster Shared Volume (CSV) is out of room (more on that later).
    4. No, I actually have 2 SANs grouped together. However, I'm moving the VMs from one CSV to another CSV on the same SAN. The EqualLogic PS6110 is the one I am trying to move VMs around on; the other SAN, an EqualLogic PS6010, is not involved in any way except for being in the same SAN group.
    5. No error during the move; it took about 5-10 minutes with no error messages. Note, I did a test and it worked GREAT 3 times. Now both a live VM and the test VM are doing the same thing.
    6. No, the machine is not too large. The test machine was a 50 GB drive, just 2012 Standard installed with updates. The live VM was a 75 GB VM that was my Trend Micro server, our anti-virus host.
    7. Expand the existing CSV? Yes, I should be able to, but there is an issue there. The volume was expanded correctly; EqualLogic sees the added space, Failover Cluster Manager sees the added space, but Disk Management only sort of does. When looking at Disk Management, there are 2 areas that tell you a little bit about the drive, the top part and the bottom part. The top part only shows 500 GB, the original size, while the bottom part says it is 1 TB in size. I called Dell's technical support, and after they looked at it I was told by the technician that they had seen this a couple of times and the only way to fix it was to move all the VMs to another CSV and delete the troubled CSV. I thought about adding more space to the troubled CSV, but it's on a production server with about 12 VMs running on it and I did not want to take a chance. The Trend VM was running on CSV-1 and working fine.
    I must admit that the test VM was on CSV-2. I moved the test VM from CSV-2 to CSV-3 back and forth several times with no errors. The Trend server was on CSV-1 and was moved to CSV-3, where it failed. Again, I then moved the test VM from CSV-2 to CSV-3 and it failed the same way. I could not test the test VM on CSV-1 due to CSV-1 not having enough space.
    8. I did disable the network from the VM to see if that mattered; it did not.
    9. I have not yet had a chance to connect the VHDX to a new VM, but I will do that in about an hour, hopefully. Once I am able to test that suggestion I will post the results as well (see the sketch below).
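    In the meantime, a hedged way to sanity-check the moved VHDX itself before attaching it to a new VM (the path is a placeholder):
    # Inspect the metadata Hyper-V records for the moved disk
    Get-VHD -Path "C:\ClusterStorage\Volume3\TrendVM\disk0.vhdx"
    # Mount it read-only on a host; if the disk and its volume appear and are
    # readable, the file system inside the VHDX survived the move
    Mount-VHD -Path "C:\ClusterStorage\Volume3\TrendVM\disk0.vhdx" -ReadOnly -Passthru | Get-Disk
    Dismount-VHD -Path "C:\ClusterStorage\Volume3\TrendVM\disk0.vhdx"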
    Again, thanks for all the suggestions and comments, as I'd rather have lots to look at and try. I hope I answered them well enough.
    Kenny

  • 2008R2 to 2012R2 = Failover Cluster Manager not found, grayed-out feature

    Hello everyone,
    I just upgraded from our Hyper-V host running Windows Server 2008R2 to 2012R2,
    I didn't get any errors during the upgrade, and everything looked fine except that Failover Cluster Manager no longer exists. I went to add it from Server Manager and found that the feature already exists but is grayed out; I cannot remove it and add it again!
    How do I get Failover Cluster Manager working again?
    Thanks
    Misbah

    OK, the fix seems simple:
    I needed to go into Server Manager, select "Remove Roles and Features", and then re-add the feature.
    Going through "Add Roles and Features" will not let you remove any feature; it is just for adding features.
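    On 2012 R2 the same remove/re-add can also be scripted; a minimal sketch using the feature names reported by Get-WindowsFeature:
    # Check the install state of the clustering features
    Get-WindowsFeature *Clustering*
    # Remove and re-add the management tools (Failover Cluster Manager
    # snap-in and the cluster PowerShell module live under RSAT-Clustering)
    Uninstall-WindowsFeature RSAT-Clustering
    Install-WindowsFeature RSAT-Clustering -IncludeAllSubFeature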

  • Hyper-V internal (1 of 2, NOT Cluster) network unavailable in Failover Cluster Manager 2008R2

    Hi all,
    I had a very strange situation in my 2-node Hyper-V cluster:
    I have one network for heartbeat only (10.0.0.0/24) and a second for Hyper-V internal networking for the virtual machines (marked "Do not allow cluster network communication" in its properties).
    The machines were working properly, and so were migrations.
    One day, my second node, HYPERV2, was marked red in the Failover Cluster Manager MMC. I discovered that the Hyper-V LAN was unavailable on this second node. BUT everything was working properly: the HYPERV2 node was on the internet, communicated with the AD domain, and could even run any virtual machine...
    Several times I checked the configuration, and also checked the TMG configuration. I was wondering if it could be a wrong setting on a network access rule. I tried to restart this host; no result, the network was still unavailable.
    After about an hour I found the resolution:
    On my second Hyper-V node, disable/enable the Local Area Connection network adapter connected to the Hyper-V LAN in the Network Connections control panel!
    Hope this will help somebody ;)
    Marian, just trying to help you

    Resolution:
    On the affected Hyper-V node, disable/enable the Local Area Connection network adapter connected to the Hyper-V LAN in the Network Connections control panel.
    I guess something got flushed in the network configuration, and/or it was some combination with the network adapter driver.
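    Since 2008 R2 lacks the Disable-NetAdapter/Enable-NetAdapter cmdlets of 2012+, the same bounce can be scripted through WMI; a sketch with a placeholder connection name:
    # Find the adapter by its Network Connections name and bounce it
    $nic = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID='Local Area Connection'"
    $nic.Disable()
    Start-Sleep -Seconds 5
    $nic.Enable()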
    Marian, just trying to help you

  • Hyper-v Failover Cluster management via powershell

    Hi
    We are looking at having a management server act as a proxy for managing a couple of Hyper-V clusters using CSV. We plan to do management using PowerShell commands.
    We create a session to one of the hosts in the cluster and execute commands using Invoke-Command. The cluster verbs seem to fail with the following warning:
    WARNING: If you are running Windows PowerShell remotely, note that some failover clustering cmdlets do not
    work remotely. When possible, run the cmdlet locally and specify a remote computer as the target. To run the
     cmdlet remotely, try using the Credential Security Service Provider (CredSSP). All additional errors or
    warnings from this cmdlet might be caused by running it remotely.
    What is the recommended way to set this up for using FailoverCluster? We want to have a single management server that acts as a proxy for all servers, clustered or not.
    Also, is there a document that describes the various operations done via Failover Cluster Manager and the corresponding PowerShell commands (or sets of commands)?
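    Per the warning text, the usual setup is CredSSP so credentials survive the second hop; a minimal sketch with placeholder host names:
    # On the management server (client side of the delegation)
    Enable-WSManCredSSP -Role Client -DelegateComputer "hv-node1.contoso.local"
    # On each cluster node (server side)
    Enable-WSManCredSSP -Role Server
    # Then run the cluster cmdlets in a CredSSP-authenticated session
    Invoke-Command -ComputerName hv-node1.contoso.local -Authentication Credssp `
        -Credential (Get-Credential) -ScriptBlock { Get-ClusterGroup }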
    Thanks
    /Jd

    Regarding the Stop action from Failover Cluster Manager, Eric, I understand your point. But when I do shutdown from Failover Cluster Manager, the VM shuts down as expected even when the setting is set to Save.
    I was very specifically talking about the Stop-ClusterGroup cmdlet, not any command issued in Failover Cluster Manager. But, well, yeah, if you tell a VM to shut down, it shuts down. I don't know why you'd expect anything different to happen. If you're looking
    for the equivalent to Stop-ClusterGroup inside Failover Cluster Manager, it's not called "Shut Down". You can use "Stop Role" on the "More Actions" menu for the VM. You can also find the configuration object (usually named in the format of "Virtual Machine
    Configuration XXX") and take it offline.
    I tested a number of times after your first post, and Stop-ClusterGroup does what the Cluster-Controlled Action is set to every single time for me.
    I could only make educated guesses at the underlying mechanics of FCM and PowerShell's cluster cmdlets, but the stand-out difference is that FCM has no method to operate in a double-hop situation at all, while PowerShell does. You only encounter these difficulties
    with PowerShell in that second hop. The question you're asking: "it would be great to know how Failover Cluster Manager works without this setup ?" is an apples-to-oranges comparison.
    This particular sentence of yours sort of changes the overall parameter of your question:
    "... so our automation works..."
    I was under the impression you were setting up this double-hop because you wanted admins to manually execute PowerShell cmdlets against your cluster from a single controlled location.
    If automation is your goal, do it right from the cluster. I obviously don't know your entire wishlist and it's none of my business, but this double-hop situation may not be ideal.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • "New Role" listed in failover cluster manager

    This has appeared in my failover cluster manager & I'm not certain what it is. I'm not experiencing any issues at the moment. Can anyone shed some light on this for me?

    Hi Carl Marshall,
    Additionally, if you confirm that no other resources depend on it and it is unused, you can delete it safely.
    More information:
    Explanation of Dependencies in Microsoft Cluster Server and Windows Server Failover Clustering
    http://support.microsoft.com/kb/171791
    Add a Resource to a Clustered Service or Application
    https://technet.microsoft.com/en-us/library/cc754633.aspx
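    The same dependency check can be done from PowerShell before deleting anything; a sketch with placeholder names:
    Import-Module FailoverClusters
    # See which resources the mystery role actually contains
    Get-ClusterGroup "New Role" | Get-ClusterResource
    # Review each resource's dependency expression to confirm nothing references it
    Get-ClusterResource | Get-ClusterResourceDependency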

  • Cannot migrate VM in VMM but can in Failover Cluster Manager network adapters network optimization warning

    I have a 4 node Server 2012 R2 Hyper-V Cluster and manage it with VMM 2012 R2.  I just upgraded the cluster from 2012 RTM to 2012 R2 last week which meant pulling 2 nodes out of the existing cluster, creating the new R2 cluster, running the copy
    cluster roles wizard since the VHDs are stored on CSVs, and then added the other 2 nodes after installing R2 on them, back into the cluster.  After upgrading the cluster I am unable to migrate some VMs from one node to another.  When trying to do
    a live migration, I get the following notifications under the Rating Explanation tab:
    Warning: There currently are not network adapters with network optimization available on host Node7. 
    Error: Configuration issues related to the virtual machine VM1 prevent deployment and must be resolved before deployment can continue. 
    I get this error for 3 out of the 4 nodes in the cluster.  I do not get this error for Node10 and I can live migrate to that node in VMM.  It has a green check for Network optimization.  The others do not.  These errors only affect
    VMM. In the Failover Cluster Manager, I can live migrate any VM to any node in the cluster without any issues.  In the old 2012 RTM cluster I used to get the warning but I could still migrate the VMs anywhere I wanted to.  I've checked the network
    adapter settings in VMM on VM1, and they are the same as on VM2, which can migrate to any host in VMM. I then checked the network adapter settings of the VMs from the Failover Cluster Manager, and VM1 under Hardware Acceleration has "Enable virtual machine queue" and "Enable IPsec task offloading" checked. I unchecked those 2 boxes, refreshed the VMs, refreshed the cluster, rebooted the VM, and refreshed again, but I still could not live migrate VM1. Why is this an issue now on the new cluster when it wasn't before? How do I resolve the issue? VMM is useless if I can't migrate all my VMs with it.

    I checked the settings on the physical nics on each node and here is what I found:
    Node7: Virtual machine queue is not listed (cannot live migrate problem VMs to this node in VMM)
    Node8: Virtual machine queue is not listed (cannot live migrate problem VMs to this node in VMM)
    Node9: Virtual machine queue is listed and enabled (cannot live migrate problem VMs to this node in VMM)
    Node10: Virtual machine queue is listed and enabled (live migration works on all VMs in VMM)
    From Hyper-V or the Failover Cluster Manager I can see, in the network adapter settings of the VMs under Hardware Acceleration, that these two settings are checked: "Enable virtual machine queue" and "Enable IPsec task offloading". I unchecked those 2 boxes, refreshed the VMs, refreshed the cluster, rebooted the VM, and refreshed again, but I still cannot live migrate the problem VMs.
    It seems to me that if I could adjust those VM settings from VMM, it might fix the problem. Why isn't that an option in VMM?
    Do I have to rebuild the VMM server with a new DB and then, before adding the Hyper-V cluster, uncheck those two settings on the VMs from Hyper-V Manager? That would be a lot of unnecessary work, but I don't know what else to do at this point.
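    If it helps anyone, both checkboxes can be flipped per VM from PowerShell instead of rebuilding VMM; a hedged sketch using the node and VM names from this post:
    # Compare VMQ capability on the physical NICs of each node
    Get-NetAdapterVmq -CimSession Node7, Node8, Node9, Node10
    # Disable VMQ and IPsec task offload on the VM's network adapter,
    # then refresh the VM in VMM so it re-reads the configuration
    Set-VMNetworkAdapter -VMName VM1 -VmqWeight 0 -IPsecOffloadMaximumSecurityAssociation 0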

  • 2008 R2 SP1 failover cluster, after connecting to a VM from the Failover Cluster Manager console I get only the upper quarter of the console screen

    Hello :)
    I have to ask because this issue has been bothering me for a long time :(
    From time to time, when I connect to a virtual machine from the Failover Cluster Manager console, I get a Virtual Machine Connection screen (console) which is reduced to only the upper-left quarter of the full screen. There is no scroll bar... If I live migrate the VM to another node (or restart the VM on the same node) and reconnect, the console screen is displayed OK (i.e., the whole console screen is visible).
    It happens regardless of the OS version installed in the guest VM (2003 R2, 2008 R2, 2012, 2012 R2).
    I've checked inside the VM (I can see that integration services are installed and running), but when I click on the console window with the mouse I get the message:
    "Virtual Machine Connection
    Mouse not captured in Remote Desktop session
    The mouse is available in a Remote Desktop session when
    integration services are installed in the guest operating
    system...."
    From the SCVMM console I can see that the IC version on the guest VM is 6.1.7601.17514.
    I would like to know what is going on, and whether there is any way I can detect this situation or, even better, prevent it?
    Thank you for any idea.
    Best regards
    Nenad

    Hi Nenad,
    You mentioned you have tried 2003 R2, 2008 R2, 2012, and 2012 R2 guest VMs, but none of them work properly. On 2008 R2 Hyper-V, Windows Server 2003 R2 is not supported without Service Pack 2; in your case you must update to Windows Server 2003 R2 with Service Pack 2.
    For a Server 2012 guest VM on 2008 R2, you must install the following hotfix:
    You cannot run a Windows 8-based or Windows Server 2012-based virtual machine in Windows Server 2008 R2
    https://support2.microsoft.com/kb/2744129?wa=wsignin1.0
    Server 2012 is the last version of Windows that is supported as a guest operating system on 2008 R2 Hyper-V; therefore a 2012 R2 guest VM is not supported.
    The following PowerShell command displays the version of integration services installed on all VMs on the Hyper-V host; please compare the problematic VMs' IC version with the functioning VMs:
    PS C:\Users\administrator> get-vm | ft name, IntegrationServicesVersion
    Name        IntegrationServicesVersion
    ----        --------------------------
    TestVM2012  6.2.9200.16433
    SQL01       6.2.9200.16433
    SQL02       6.2.9200.16433
    SCVMM01     6.2.9200.16384
    SCVMM02     6.2.9200.16384

  • Failover Cluster Manager 2012 Showing Wrong Disk Resource - Fix by Powershell

    On Server 2012 Failover Cluster Manager, we have one Hyper-V virtual machine that is showing the wrong storage resource.  That is, it is showing a CSV that is in no way associated with the VM.  The VM has only one .vhd, which exists on Volume 16. 
    The snapshot file location and smart paging file are also on Volume 16.  This much is confirmed by using the Failover Cluster Manager to look at the VM settings.  If you start into the "Move Virtual Machine Storage" dialog, you can see
    the .vhd, snapshots, second level paging, and current configuration all exist on Volume 16.  Sounds good.
    However, if you look at the resources tab for the virtual machine, Volume 16 is not listed under storage.  Instead, it says Volume 17, which is a disk associated with a different virtual machine.  That virtual machine also (correctly) shows Volume
    17 as a resource.
    So, if everything is on Volume 16, why does the Failover Cluster Manager show Volume 17, and not 16, as the Storage Resource?  Perhaps this was caused by an earlier move with the wrong tool (Hyper-V manager), but I don't remember doing this.
    In Server 2003, there was a "refresh virtual machine configuration" option to fix this, but it doesn't appear in Failover Cluster Manager in Server 2012.
    Instead, the only way I've found to fix the problem is in PowerShell.
      Update-ClusterVirtualMachineConfiguration "put configuration name here in quotes"
    You would think that this would be an important enough operation to include GUI support for it, possibly in the "More Actions" right-click action on the configuration file.
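    If the configuration resource name isn't handy, it can be discovered and refreshed in one pass; a minimal sketch:
    Import-Module FailoverClusters
    # Refresh every "Virtual Machine Configuration" resource in the cluster
    Get-ClusterResource |
        Where-Object {$_.ResourceType -like "Virtual Machine Configuration"} |
        ForEach-Object { Update-ClusterVirtualMachineConfiguration -Name $_.Name }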

    Hi,
    Thanks for sharing your experience!
    You experience and solution can help other community members facing similar problems.
    Please copy your post and create a new reply, then we can mark the new reply as answer.
    Thanks for your contribution to Windows Server Forum!
    Have a nice day!
    Lawrence
    TechNet Community Support

  • Monitor Replication Health in Failover Cluster Manager?

    Have two Hyper-V 2012 R2 Clusters.
    One cluster contains all production VMs. The second cluster is in another location and is intended only for HA purposes. All production VMs from the first cluster should be replicated to the other cluster.
    What is the best way to monitor the replication health of all VMs? In Failover Cluster Manager, it's not possible to add a "Replication Health" column to the Roles window, and it's not practical to click on every VM to view its replication health. In Hyper-V Manager, it's easy to add "Replication Health" to the virtual machine window, but clicking on every host to verify the replication health is not very practical either.
    Thank you in advance for any hint (I know that we could use SCOM or something similar, but such complex tools are out of scope).
    Franz

    Hi FranzSchenk
    I think the only real option open to you is the use of PowerShell in your situation. Have a look at this link and see if this is the sort of thing that will help you.
    http://www.serverwatch.com/server-tutorials/checking-hyper-v-replication-health-using-powershell-cmdlets.html
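    For a cluster-wide view without SCOM, something like this sketch (the cluster name is a placeholder) collects replication health for every replicated VM on every node:
    # Query each cluster node and gather replication health into one table
    Get-ClusterNode -Cluster "ProdCluster" | ForEach-Object {
        Get-VMReplication -ComputerName $_.Name
    } | Format-Table VMName, State, Health, PrimaryServer, ReplicaServer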
    Kind Regards
    Michael Coutanche
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
