2012 R2 Cluster and Live Migration

6-node cluster with Server 2012 R2; all VMs are Server 2012 R2
4 Fiber SANs
Moving and live migration worked fine in Failover Cluster Manager,
but this time we were trying to do it with SCVMM 2012 R2 and just move one VM (Gen 2).
Of course it failed at 99%:
Error (12711)
VMM cannot complete the WMI operation on the server (whatever) because of an error: [MSCluster_Resource.Name="SCVMM VMHost"] The cluster resource could not be found.
The cluster resource could not be found (0x138F)
Recommended Action
Resolve the issue and then try the operation again.
How do I fix this? The VM is still running. The two VHDX files it was moving are smaller than the originals, but it changed the configuration file to point to the new ones, which are bad.
It says I can Repair it (Redo or Undo)... of course neither of those options works.
Wait for the object to be updated automatically by the next periodic Live migrate storage of virtual machine vmhost from whatever to whatever job.
ID: 1708
The cluster has no errors, the SANs have no errors, the CSVs have no errors. The machine running SCVMM is a VM running on the cluster.

How did you create this VM? If it was created outside of VMM, I recommend doing a manual refresh of the VM first to ensure that VMM can read its attributes, then retry the operation.
By the way, are the VMs using differencing disks? Are there any checkpoints associated with them?
Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
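A minimal sketch of that refresh (and, if the VM is stuck in a failed state, a repair) from the VMM PowerShell console; the VM name is a placeholder:

# Refresh VMM's view of the VM so its attributes are re-read from the host
$vm = Get-SCVirtualMachine -Name "MyGen2VM"
Read-SCVirtualMachine -VM $vm

# If the failed job left the VM in a failed state, dismissing that state
# (instead of Redo/Undo) sometimes clears it so the VM can be refreshed again
Repair-SCVirtualMachine -VM $vm -Dismiss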

Similar Messages

  • When setting up converged network in VMM cluster and live migration virtual nics not working

    Hello Everyone,
    I am having issues setting up a converged network in VMM. I have been working with MS engineers to no avail. I am very surprised at the expertise of the MS engineers: they had no idea what a converged network even was. I had way more
    experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone including our consultants says my setup is correct. 
    What I want to do:
    I have servers with 5 NICs and want to use 3 of the NICs for a team, and then configure cluster, live migration and host management as virtual network adapters. I have created all my logical networks and a port profile with the uplink defined as the team and
    the networks selected. I created a logical switch and associated the port profile. When I deploy the logical switch and create the virtual network adapters, the logical switch works for VMs and my management NIC works as well. The problem is that the cluster and live
    migration virtual NICs do not work. The correct VLANs get pulled in for the corresponding networks, and if I run Get-VMNetworkAdapterVlan it shows cluster and live migration in VLANs 14 and 15, which is correct. However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM. I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard
    switch.
    I am really losing faith in VMM fast. 
    Hosts are 2012 R2 and VMM is 2012 R2 all fresh builds with latest drivers
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure you are teaming the correct physical adapters, where the VLANs are trunked through the connected ports?
    Note: if you create the teaming configuration outside of VMM and then import the hosts into VMM, VMM will not recognize the configuration.
    The details should all be in that whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
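    For reference, a minimal sketch of the host-side configuration the original poster got working directly in PowerShell (adapter names, switch name and the VLAN IDs 14/15 are examples; in VMM itself this should be modelled with the logical switch and uplink port profile rather than created by hand):

    # Team three of the physical NICs (names are placeholders)
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # vSwitch on top of the team, weight-based QoS, no default host vNIC
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Host vNICs for management, cluster and live migration
    Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

    # Tag the cluster and live migration vNICs with their VLANs
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster"       -Access -VlanId 14
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 15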

  • Hyper V Lab and Live Migration

    Hi Guys,
    I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
    The problem I have is that my shared storage is a bit of a cheat, as I have one disk assigned in each host and each host has StarWind Virtual SAN installed. Host A has an iSCSI connection to host B's storage and vice versa.
    The issue this causes is that when the hosts shut down (because this is a lab, it's only on when required), the cluster is in a mess when it starts up, e.g. VMs missing etc. I can recover from it but it takes time. I tinkered with the HA settings and the VM settings
    so they restarted/didn't restart etc., but with no success.
    My question is: can I use something like SMB3 shared storage on one of the hosts to perform live migrations, but without a full-on cluster? I know I can do Shared Nothing Live Migrations, but this takes time.
    Any ideas on a better solution (rather than actually buying proper shared storage ;-) )? Or, if shared storage is the only option to do this cleanly, what would people recommend, bearing in mind I have SSDs in the Hyper-V hosts?
    Hope all that makes sense

    Hi Sir,
    >>I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
    As you mentioned, you have 2 Hyper-V hosts and use StarWind to provide the iSCSI target (this is the same as my first lab environment); I then realized that I needed one or more additional hosts to simulate a more production-like scenario.
    But if you have more physical computers, you may try other projects.
    Also please refer to this thread :
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/e9f81a9e-0d50-4bca-a24d-608a4cce20e8/2012-r2-hyperv-cluster-smb-30-sofs-share-permissions-issues?forum=winserverhyperv
    Best Regards
    Elton Ji
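    On the SMB3 question above: live migration between two standalone 2012 R2 hosts with the VM files on an SMB 3.0 share is supported, roughly along these lines (all names are placeholders, and if you pick Kerberos authentication, constrained delegation for the two host computer accounts has to be configured in AD separately):

    # On the host that holds the fast disks: share a folder for VM storage and
    # grant the Hyper-V hosts' computer accounts full control
    New-SmbShare -Name "VMStore" -Path "D:\VMStore" `
        -FullAccess "LAB\HostA$","LAB\HostB$","LAB\Domain Admins"

    # On both hosts: enable live migration
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

    # Move a running VM between the standalone hosts; its files stay on the share
    Move-VM -Name "LabVM01" -DestinationHost "HostB"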

  • Oracle VM (XEN) and Live Migration throws PCI error

    First off, anyone using a HS21 XM blade with Oracle VM? Anyone attempted a live-migration, does it work?
    When attempting a live migration on a HS21 XM blade we get the following errors and the host hangs:
    102 I Blade_02 08/10/08, 08:46:38 (CADCOVMA02) Blade reboot
    103 E Blade_02 08/10/08, 08:46:26 (CADCOVMA02) Software critical interrupt.
    104 E Blade_02 08/10/08, 08:46:21 (CADCOVMA02) SMI Hdlr: 00150700 PERR: Slave signaled parity error B/D/F/Slot=07000300 Vend=8086 Dev=350C Status=8000
    105 E Blade_02 08/10/08, 08:46:19 (CADCOVMA02) SMI Hdlr: 00150900 SERR/PERR Detected on PCI bus
    106 E Blade_02 08/10/08, 08:46:19 (CADCOVMA02) SMI Hdlr: 00150700 PERR: Slave signaled parity error B/D/F/Slot=00020000 Vend=8086 Dev=25F7 Statu
    107 E Blade_02 08/10/08, 08:46:18 (CADCOVMA02) PCI PERR: parity error.
    108 E Blade_02 08/10/08, 08:46:17 (CADCOVMA02) SMI Hdlr: 00150900 SERR/PERR Detected on PCI bus
    109 E Blade_02 08/10/08, 08:46:17 (CADCOVMA02) SMI Hdlr: 00150400 SERR: Device Signaled SERR B/D/F/Slot=07000300 Vend=8086 Dev=350C Status=4110
    110 E Blade_02 08/10/08, 08:46:16 (CADCOVMA02) SMI Hdlr: 00150400 SERR: Device Signaled SERR B/D/F/Slot=00020000 Vend=8086 Dev=25F7 Status=4110
    111 E Blade_02 08/10/08, 08:46:14 (CADCOVMA02) PCI system error.
    Any ideas? I've called IBM support but their options are to reseat the hardware or replace it. This happens on more than one blade, so we're assuming it has something to do with Oracle VM. Thanks!

    Hi Eduardo,
    How are things going ?
    Best Regards
    Elton Ji

  • Best Configuration for T4-4 and live migration

    I have several T4-4 servers with Solaris 11.1
    What kind of settings do I need for production environments with OVM live migration?
    Should I use OVM Manager for rugged environments?
    Should I set up two I/O domains on each server?
    Considering that I do live migration, for the guest LDOM system disk should I configure (publish) one big LUN to all servers, or just a separate LUN for each LDOM guest?

    For the first question: it's best to have separate LUNs or other backend disks for each guest - otherwise how would you plan to share them across different guests and systems? Separate LUNs are also good for performance, by encouraging parallel I/O. The choice of using an additional I/O domain is up to you, but it's very typical for production users because of the resiliency it adds.
    For advice on live migration, see https://blogs.oracle.com/jsavit/entry/best_practices_live_migration
    Other blog posts after that discuss availability and performance.
    Hope that helps, Jeff

  • RDS 2012 re-connection after live migration.

    Is there a way to speed up the reconnection after a live migration?
    So if I am in a VM that live migrates, it feels like it hangs for about 10 seconds, then reconnects and is fine. While this is OK, it's not ideal. Is there a way to improve this?

    Actually, 10 seconds sounds like a very long time to me. In my experience using Shared Nothing Live Migration I've seen the switch being almost instantaneous, with a continual ping possibly dropping one or two packets, and certainly quick enough that it's
    unlikely any users would notice the change. So in terms of whether it can be improved, I'd say yes.
    As you can see from the technical overview here
    http://technet.microsoft.com/en-us/library/hh831435.aspx, the final step is for a signal to be sent to the switch informing it of the new MAC address of the server's new destination, so I wonder if the slow switchover might be connected to that, or perhaps
    some other network issue.
    Is the network connection poor between the servers which might cause a delay during the final sync of changes between the server copies? Are you moving between subnets?
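    One way to put a number on that pause is to run a timestamped ping loop against the VM from another machine while it migrates; a small sketch (the VM name is a placeholder):

    # Log a timestamp whenever a ping to the VM is dropped during the migration
    while ($true) {
        if (-not (Test-Connection -ComputerName "rdsvm01" -Count 1 -Quiet)) {
            "{0:HH:mm:ss.fff} dropped" -f (Get-Date)
        }
        Start-Sleep -Milliseconds 250
    }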

  • LDOMs, Solaris zones and Live Migration

    Hi all,
    If you are planning to use Solaris zones inside an LDOM, using an external zpool as the Solaris zone disk, wouldn't this break one of the requirements for being able to do a live migration? If so, do you have any ideas on how to use Solaris zones inside an LDOM and at the same time be able to do a live migration, or is it impossible? I know this may sound like a bad idea, but I would very much like to know if it is doable.

    Thanks,
    By external pool I am thinking of the way you probably are doing it: separate LUNs mirrored in a zpool for the zones, coming from two separate I/O/service domains. So even if this zpool exists inside the LDOM as zone storage, this will not prevent LM? That's good news. The requirement "no zpool if Live Migration" must then only be valid for the LDOM storage itself and not for storage attached to the running LDOM. I am also worried about a possible performance penalty from introducing an extra layer of virtualisation. Have you done any tests regarding this?

  • Hyper-V host crash and live migration - what happens?

    Hello there,
    it would be great if you could help me understand how live migration works on a sudden host crash of a cluster node.
    I know that the cluster functionality mounts the VHD file on another host and runs the VM.
    But what happens to the VM itself? Does it crash too and is simply restarted on another host?
    To my knowledge the RAM of the VM is held in the RAM of the host, so I would assume that it is all lost on a host crash?
    Thanks for any help understanding this,
    Best regards, Marcus

    In a sudden crash, there is no time for the state of any running VMs to move to another system.  Therefore, the VM is restarted on another node of the cluster.  Think of the host crashing as being the same as pulling the power cable on the VM.
    . : | : . : | : . tim
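    Since the VM is simply restarted (crash-consistent) on another node, about the only related knob is the restart priority of the clustered VM role, which controls the order in which VMs come back after a node failure; a small sketch, assuming the FailoverClusters module and a placeholder VM name:

    # 3000 = High, 2000 = Medium (default), 1000 = Low, 0 = no automatic start
    (Get-ClusterGroup -Name "SQLVM01").Priority = 3000
    Get-ClusterGroup | Select-Object Name, OwnerNode, State, Priority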

  • Host server live migration causing Guest Cluster node goes down

    Hi 
    I have a two-node Hyper-V host cluster. I'm using a converged network for host management, live migration and the cluster network, and separate NICs for iSCSI multipathing. When I live migrate a guest node from one host to another, within the guest cluster that node
    goes down. I have increased the cluster threshold and cluster delay values. Guest nodes are connecting to the iSCSI network directly from the iSCSI initiator on Server 2012.
    The converged networks for management, cluster and live migration are built on top of a NIC team in switch-independent mode with Hyper-V Port load balancing.
    I have VMQ enabled on the converged fabric and jumbo frames enabled on iSCSI.
    Can anyone guess why live migration would cause a failure on the guest node?
    thanks
    mumtaz 

    Repost here: http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/threads
    in the Hyper-V forum.  You'll get a lot more help there.
    This forum is for Virtual Server 2005.
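    For completeness, the "cluster threshold and delay" values mentioned above map to these guest cluster heartbeat properties; a sketch run inside the guest cluster (the values are examples, not recommendations), which can keep a live-migration pause of the host from evicting the guest node:

    # Relax the guest cluster heartbeat so a short host-side pause is tolerated
    (Get-Cluster).SameSubnetDelay      = 2000   # ms between heartbeats
    (Get-Cluster).SameSubnetThreshold  = 20     # missed heartbeats before removal
    (Get-Cluster).CrossSubnetDelay     = 4000
    (Get-Cluster).CrossSubnetThreshold = 20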

  • Live migration Vnic on hosts randomly losing connectivity HELP

    Hello Everyone,
    I am building out a new 2012 R2 cluster using VMM with a converged network configuration. I have 5 physical NICs and am teaming 3 of them using dynamic load balancing. I have configured 3 virtual network adapters in the host, for management,
    cluster and live migration. The live migration NIC loses connectivity randomly and fails migrations 50% of the time.
    Hardware is IBM blades (HS22) with Broadcom NetXtreme II NICs. I have updated firmware and drivers to the latest versions. I found a forum post with something that looks very similar, but that was back in November so I'm guessing there is a fix:
    http://www.hyper-v.nu/archives/mvaneijk/2013/11/vnics-and-vms-loose-connectivity-at-random-on-windows-server-2012-r2/
    Really need help with this.
    Thanks

    Hi,
    Can your cluster pass the cluster validation test? Please install the recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters first,
    then monitor again.
    More information:
    Configuring Windows Failover Cluster Networks
    http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
    Hope this helps.
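    The symptom in the linked hyper-v.nu article is commonly associated with VMQ on certain Broadcom NICs on 2012 R2. As a test (and only as a test, since VMQ is normally worth keeping), checking and temporarily disabling VMQ on the team members can confirm whether that is the cause; the adapter names below are placeholders:

    # Show which physical adapters have VMQ enabled
    Get-NetAdapterVmq

    # Temporarily disable VMQ on the teamed NICs to see if the drops stop
    Disable-NetAdapterVmq -Name "NIC1","NIC2","NIC3"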

  • Is it possible to Migrate Live VMs from a Windows 2012 Hyper-V Cluster to a different 2012 R2 Cluster?

    At the moment I'm in a bit of a dilemma, because I know that Windows 2012 supports "Shared Nothing Live Migration", but recently I got to know that this feature is available for stand-alone Hyper-V servers. My setup is that I have
    3 servers running a Windows Server 2012 Hyper-V failover cluster, and I need to migrate everything to a new cluster running on Windows Server 2012 R2; of course some VMs can't be turned off during the migration, so I need to do this live during production hours.
    Another note is that both clusters will be running on different LUNs, since each cluster has its own CSVs. I need to know if it is possible to migrate such VMs/roles live without any downtime of the VMs/roles.

    Yes, but with one exception - live migration works only on "compatible CPUs". There actually is a setting under the CPU for forcing compatibility
    mode. If that is not set, and you move, for example, from AMD to Intel, then this is not possible with live migration, cluster or not.
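    A rough sketch of the cross-cluster move for a single VM, assuming both clusters are in the same domain, the hosts have live migration enabled, and the VM, host and path names are placeholders:

    # On the old 2012 cluster: take the VM role out of the cluster so the VM
    # becomes a standalone VM on its current host (this does not delete the VM)
    Remove-ClusterGroup -Name "VM01" -RemoveResources -Force

    # Shared-nothing live migration of the running VM to a node of the new cluster
    Move-VM -Name "VM01" -DestinationHost "NewClusterNode1" `
        -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume1\VM01"

    # On the new 2012 R2 cluster: make the VM highly available again
    Add-ClusterVirtualMachineRole -VMName "VM01"

    # Note: the processor compatibility setting mentioned above can only be
    # changed while the VM is off (Set-VMProcessor -CompatibilityForMigrationEnabled)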

  • Live Migration failed using virtual HBA's and Guest Clustering

    Hi,
    We have a Guest Cluster Configuration on top of an Hyper-V Cluster. We are using Windows 2012 and Fiber Channel shared storage.
    The problem is regarding Live Migration. Some times when we move a virtual machine from node A to node B everything goes well but when we try to move back to node A Live Migration fails. What we can see is that when we move the VM from node A to B and Live
    Migration completes successfully the virtual ports remain active on node A, so when we try to move back from B to A Live Migration fails because the virtual ports are already there.
    This doesn't happen every time.
    We have checked the zoning between Host Cluster Hyper-V and the SAN, the mapping between physical HBA's and the vSAN's on the Hyper-V and everything is ok.
    Our doubt is, what is the best practice for zoning the vHBA on the VM's and our Fabric? We setup our zoning using an alias for the vHBA 1 and the two WWN (A and B) on the same object and an alias for the vHBA 2 and the correspondent WWN (A and B). Is it
    better to create an alias for vHBA 1 -> A (with WWN A) and other alias for vHBA 1 -> B (with WWN B)? 
    The guest cluster VM's have 98GB of RAM each. Could it be a time out issue when Live Migration happen's and the virtual ports remain active on the source node? When everything goes well, the VM moves from node A with vHBA WWN A to node B and stays there
    with vHBA WWN B. On the source node the virtual ports should be removed automatically when the Live Migration completes. And that is the issue... sometimes the virtual ports (WWN A) stay active on the source node and when we try to move back the VM Live Migration
    fails.
    I hope You may understand the issue.
    Regards,
    Carlos Monteiro.

    Hi ,
    Hope the following link may help.
    To support live migration of virtual machines across Hyper-V hosts while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B
    WWN addresses during a live migration. This ensures that all LUNs are available on the destination host before the migration and that no downtime occurs during the migration.
    Hyper-V Virtual Fibre Channel Overview
    http://technet.microsoft.com/en-us/library/hh831413.aspx
    More information:
    Hyper-V Virtual Fibre Channel Troubleshooting Guide
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    Hyper-V Virtual Fibre Channel Design Guide
    http://blogs.technet.com/b/privatecloud/archive/2013/07/23/hyper-v-virtual-fibre-channel-design-guide.aspx
    Hyper-V virtual SAN
    http://salworx.blogspot.co.uk/
    Thanks.
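    The practical consequence of the Set A / Set B behaviour described above is that the fabric zoning has to contain both port WWNs of every virtual HBA, otherwise a migration in one direction can work while the return trip fails. A quick way to list both sets per VM (the VM name is a placeholder):

    # Show both WWN sets of each virtual Fibre Channel adapter so both can be zoned
    Get-VMFibreChannelHba -VMName "GuestClusterNode1" |
        Select-Object VMName, SanName, WorldWidePortNameSetA, WorldWidePortNameSetB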

  • Server 2012 r2 live migration fails with hardware error

    Hello all, we just upgraded one of our Hyper-V hosts from Server 2012 to Server 2012 R2; previously we had live replication set up between it and another box on the network, which was also running Server 2012. After installing Server 2012 R2, when a live migration
    is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both Dell; currently we have a PowerEdge R910 running Server 2012 and a PowerEdge R900 running Server 2012 R2. The option under Processor for "migrate to a physical computer using a different processor" is already checked,
    and this same VM was successfully being live replicated before the upgrade to Server 2012 R2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full reinstall and wiped out the installation of Server 2012 before installing Server 2012 R2; this was not an in-place
    upgrade installation.

    The only cause I've seen so far is virtual switches being named differently. I do remember that one of our VMs didn't move, but we simply bypassed this problem using a one-time backup (VeeamZIP, more specifically).
    If it's a one-time operation you can use the same procedure for the VMs in question -> back them up and restore them on the new server.
    Kind regards, Leonardo.
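    Two quick checks on both hosts before retrying the move, in line with the suggestions above (the VM name is a placeholder):

    # Is processor compatibility mode actually enabled on this VM?
    Get-VMProcessor -VMName "VM01" | Select-Object VMName, CompatibilityForMigrationEnabled

    # Do the virtual switch names match between the source and destination hosts?
    Get-VMSwitch | Select-Object Name, SwitchType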

  • Hyper-V 2012 R2 Cluster - Drain Roles / Fail Roles Back

    Hi all,
    In the past, when I've needed to apply Windows updates to my 3 Hyper-V cluster nodes, I used to make a note of which VMs were running on each node, then live migrate them to one of the other cluster nodes before pausing the node I needed to work on and
    carrying out the updates; once I'd finished installing the updates I'd simply resume the node and live migrate the VMs back to their original node.
    Having recently upgraded my nodes to Windows 2012 R2, I decided to use the new functionality in Failover Cluster Manager where you can pause & drain a node of its roles, perform the updates/maintenance, and then resume & fail the roles back to the node.
    Unfortunately this didn't go as smoothly as I'd hoped; for some reason the drain/fail back seems to have been cumulative rather than a one-off job per node... hard to explain, but hopefully the following will be clear enough if the formatting survives:
    1. Beginning State:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    2. Drain Hyper1:
    Hyper1     Hyper2     Hyper3
                    VM04       VM01
                    VM05       VM02
                    VM06       VM03
                                   VM07
                                   VM08
                                   VM09
    3. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    4. Drain Hyper2:
    Hyper1     Hyper2     Hyper3
    VM01                       VM04
    VM02                       VM05
    VM03                       VM06
                                   VM07
                                   VM08
                                   VM09
    5. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
                    VM01       VM07
                    VM02       VM08
                    VM03       VM09
                    VM04  
                    VM05
                    VM06
    6. Manually Live Migrate VM's back to correct location:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    7. Drain Hyper3:
    Hyper1     Hyper2     Hyper3
    VM01        VM04
    VM02        VM05
    VM03        VM06
                    VM07
                    VM08
                    VM09
    8. Fail Roles Back:
    Hyper1     Hyper2     Hyper3
                                   VM01
                                   VM02
                                   VM03
                                   VM04
                                   VM05
                                   VM06
                                   VM07
                                   VM08
                                   VM09
    9. Manually Live Migrate VM's back to correct location:
    Hyper1     Hyper2     Hyper3
    VM01        VM04       VM07
    VM02        VM05       VM08
    VM03        VM06       VM09
    Step 8 was a rather hairy moment, although I was pleased to see my cluster hardware capacity planning rubber-stamped; good to know that if I were ever to lose 2 out of 3 nodes everything would keep ticking over!
    So, I'm back to the old way of doing things for now; has anyone else experienced this strange behaviour?
    Thanks in advance,
    Ben

    Hi,
    Just want to confirm the current situation.
    Please feel free to let us know if you need further assistance.
    Regards.
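    For what it's worth, the same drain and failback can be driven from PowerShell, which makes it easier to watch exactly which groups the failback step actually returns (the node name is a placeholder):

    # Pause the node and drain its roles to the other nodes
    Suspend-ClusterNode -Name "Hyper1" -Drain

    # Confirm the node is empty before patching/rebooting it
    Get-ClusterGroup | Where-Object { $_.OwnerNode.Name -eq "Hyper1" }

    # Resume the node and immediately fail back the roles it previously owned
    Resume-ClusterNode -Name "Hyper1" -Failback Immediate

    # Check where everything ended up
    Get-ClusterGroup | Select-Object Name, OwnerNode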

  • Can you use NIC Teaming for Replica Traffic in a Hyper-V 2012 R2 Cluster

    We are in the process of setting up a two-node 2012 R2 Hyper-V cluster and will be using the Replica feature to make copies of some of the hosted VMs to an off-site, standalone Hyper-V server.
    We have planned to use two physical NICs in an LBFO team on the cluster nodes for the Replica traffic, but wanted to confirm that this is supported before we continue.
    Cheers for now
    Russell

    Sam,
    Thanks for the prompt response, presumably the same is true of the other types of cluster traffic (Live Migration, Management, etc.)
    Cheers for now
    Russell
    Yep.
    In our practice we actually use converged networking, which basically NIC-teams all physical NICs into one pipe (switch independent/dynamic/active-active), on top of which we provision vNICs for the parent partition (host OS), as well as guest VMs. 
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable) _________________________________________________________________________________
    Powershell: Learn it before it's an emergency http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx
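    Assuming a converged team and virtual switch along the lines Sam describes already exist ("TeamSwitch" below is a placeholder), a dedicated host vNIC for Replica traffic with a QoS weight looks roughly like this:

    # Host vNIC reserved for Hyper-V Replica traffic, with a relative bandwidth weight
    Add-VMNetworkAdapter -ManagementOS -Name "Replica" -SwitchName "TeamSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Replica" -MinimumBandwidthWeight 20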
