vHBA Guest VM Cannot Live Migrate

Hi All,
I'm using Windows Server 2012 Datacenter and have configured the Hyper-V server role with failover clustering (a 4-node cluster).
Each physical server has two QLogic HBA adapters installed.
We have applied the following configuration on the Hyper-V servers and the HP SAN storage:
Enabled the Virtual SAN Manager.
Created a virtual Fibre Channel SAN on each Hyper-V host. Each host is configured with two adapters (Fibre Channel SAN 1 / Fibre Channel SAN 2).
On the HP storage fabric, enabled N_Port ID Virtualization (NPIV) at the switch port level.
Created a new VM and assigned it two vHBA adapters. Each adapter has two sets of WWN addresses.
Each adapter's Set A WWN address was discovered by the SAN switch, and we configured the zone. Set B was not discovered, but we manually created the zones for Set B on both adapters.
Presented the storage to the VM and installed MPIO. The SAN disk is visible in the VM's Disk Management.
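For reference, this is roughly the PowerShell equivalent of the vSAN/vHBA setup above; a sketch only, assuming the Hyper-V module, with placeholder WWNN/WWPN values for the physical HBA ports:
# On each host: create the two virtual SANs, backed by the physical QLogic HBA ports.
New-VMSan -Name "Fibre Channel SAN 1" -WorldWideNodeName C003FF0000FFFF00 -WorldWidePortName C003FF5778E50001
New-VMSan -Name "Fibre Channel SAN 2" -WorldWideNodeName C003FF0000FFFF00 -WorldWidePortName C003FF5778E50002
# On the VM: add one vHBA per virtual SAN. Hyper-V generates the Set A/Set B WWNs automatically.
Add-VMFibreChannelHba -VMName TEST-Server -SanName "Fibre Channel SAN 1"
Add-VMFibreChannelHba -VMName TEST-Server -SanName "Fibre Channel SAN 2"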
When live migrating the VM to a different host, I get the error below:
Live migration of 'Virtual Machine TEST-Server' failed.
Virtual machine migration operation for 'TEST-Server' failed at migration destination 'HV02'. (Virtual machine ID 378AE3E3-5A4F-4AE7-92F8-D47855C2D1C5)
The virtual machine 'TEST-Server' is not compatible with physical computer 'HV02'. (Virtual machine ID 378AE3E3-5A4F-4AE7-92F8-D47855C2D1C5)
Could not find a virtual fibre channel resource pool with ID 'New Fibre Channel SAN1'
Could not find a virtual fibre channel resource pool with ID 'New Fibre Channel SAN2'
Event ID : 21502
I need some expert support on this issue. I'm trying to configure a guest cluster with vHBAs. Quick Migration works fine, but Live Migration keeps failing. I want to resolve the live migration issue.
Aucsna

Hi Aucsna,
"it's issue with the Virtual SAN Switch label name."
Has the problem been solved?
Best Regards
Elton Ji

Similar Messages

  • Live migration with zones

    Hi all,
    I have been reading Oracle's "SPARC Private Cloud" whitepapers covering LDOMs. One thing really pops out from the text which confuses me:
    from Page 8 and 10:
    "VMs may also be securely live migrated or automatically started or restarted across any servers in their respective pools. *Zones are cold migrated*"
    "Secure live migration—Move domains off of servers that are undergoing planned maintenance. *Zones are cold-migrated*."
    Does this really mean that if I have zones inside an LDOM guest, I can live migrate the LDOM guests but not the zones? Hence the zones will go down if I do this? If so, what's the reason behind this? It's hard to grasp that the OS itself can be live migrated, but not the zones inside it that use the same kernel, binaries, etc. from it.
    Links:
    https://blogs.oracle.com/infrared/entry/building_private_iaas_with_sparc
    http://www.oracle.com/us/groups/public/@otn/documents/webcontent/1659149.pdf
    - Jukka

    Lumi, I'm pretty sure they are comparing LDOMs with zones on a standalone system (i.e. no LDOMs).
    When you migrate a domain, everything the guest kernel is doing should emerge as it was before.
    Migration might take a bit longer than for the GZ alone, since you're using more virtual memory.
    To move an NGZ between standalone GZs, you would indeed have to halt, detach, attach, and boot it.
    But please don't take my word for it... feel free to try both methods for yourself. =-)
    The only limitation for zones in LDOMs that I'm aware of: You cannot currently set elastic power policy.
    Other than that, I don't see why you couldn't keep zones running inside your guest as it moves around.
    Hope that helps... -cheers, CSB

  • Unable to Live Migrate to Some Hosts

    We have been having issues where we cannot Live Migrate to some of our hosts. I have verified the VMM agent is the same version and the host OSes are the same. I've compared all settings in VMM and Hyper-V between hosts that are Live Migration candidates and
    hosts that are not. The rating message for those that we are unable to Live Migrate to is the following:
     Unable to migrate or clone the virtual machine "" because the version of virtualization software on the host does not match the version of virtual machine's virtualization software on source (6.3.9600.17334). To be migrated or cloned,
    the virtual machine must be stopped and should not contain any saved state. 
    Any help resolving this is greatly appreciated!

    Hi,
    I also have a few questions. You say that you are a total noob, therefore you should check the total-noob things :)
    What version of SCVMM are you using?
    What about your infrastructure? Hyper-V clusters? And how many clusters?
    Are you trying to migrate between Hyper-V clusters, or between cluster nodes?
    Are the VMs made highly available on your Hyper-V hosts?
    Does the target host have access to the shared volume?
    Did you check the processor version?
    Check the patch level of the hosts and the agents.
    Are source and destination host on the same OS version and patch level? Are they both in the same cluster?
    Without knowing more about your infrastructure I could ask a lot more questions. It would be cool if you could give us some more information about your setup, or let us know your solution if you are able to fix the problem.
    regards,
    Thomas
    Thomas Hanrath [MCT | Regional Lead Germany]

  • Live Migration failed using virtual HBAs and Guest Clustering

    Hi,
    We have a guest cluster configuration on top of a Hyper-V cluster. We are using Windows 2012 and Fibre Channel shared storage.
    The problem concerns Live Migration. Sometimes when we move a virtual machine from node A to node B everything goes well, but when we try to move back to node A, Live Migration fails. What we can see is that when we move the VM from node A to B and Live
    Migration completes successfully, the virtual ports remain active on node A, so when we try to move back from B to A, Live Migration fails because the virtual ports are already there.
    This doesn't happen every time.
    We have checked the zoning between the Hyper-V host cluster and the SAN, and the mapping between the physical HBAs and the vSANs on the Hyper-V hosts, and everything is OK.
    Our doubt is: what is the best practice for zoning the vHBAs on the VMs and our fabric? We set up our zoning using one alias for vHBA 1 with both of its WWNs (A and B) in the same object, and one alias for vHBA 2 with its corresponding WWNs (A and B). Is it
    better to create one alias for vHBA 1 -> A (with WWN A) and another alias for vHBA 1 -> B (with WWN B)?
    The guest cluster VMs have 98 GB of RAM each. Could it be a timeout issue when Live Migration happens and the virtual ports remain active on the source node? When everything goes well, the VM moves from node A with vHBA WWN A to node B and stays there
    with vHBA WWN B. On the source node the virtual ports should be removed automatically when the Live Migration completes. And that is the issue: sometimes the virtual ports (WWN A) stay active on the source node, and when we try to move the VM back, Live Migration
    fails.
    I hope this makes the issue clear.
    Regards,
    Carlos Monteiro.

    Hi ,
    Hope the following links may help.
    To support live migration of virtual machines across Hyper-V hosts while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B
    WWN addresses during a live migration. This ensures that all LUNs are available on the destination host before the migration and that no downtime occurs during the migration.
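    To see both WWN sets that your zoning needs to cover, you can list them per vHBA. A minimal sketch with the Hyper-V PowerShell module (the VM name is a placeholder):
    # Both Set A and Set B WWPNs must be zoned/masked, because Hyper-V
    # alternates between the two sets on every live migration.
    Get-VMFibreChannelHba -VMName TEST-VM |
        Select-Object SanName, WorldWidePortNameSetA, WorldWidePortNameSetB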
    Hyper-V Virtual Fibre Channel Overview
    http://technet.microsoft.com/en-us/library/hh831413.aspx
    More information:
    Hyper-V Virtual Fibre Channel Troubleshooting Guide
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    Hyper-V Virtual Fibre Channel Design Guide
    http://blogs.technet.com/b/privatecloud/archive/2013/07/23/hyper-v-virtual-fibre-channel-design-guide.aspx
    Hyper-V virtual SAN
    http://salworx.blogspot.co.uk/
    Thanks.

  • Hyper-V guest SQL 2012 cluster live migration failure

    I have two IBM HX5 nodes connected to an IBM DS5300. A Hyper-V 2012 cluster was built on the blades. Six virtual machines were created in the HV cluster, connected to the DS5300 via HV virtual SAN. These VMs form a guest SQL cluster. The databases' files are placed on
    DS5300 storage and are available through the VMs' Fibre Channel adapters. The IBM MPIO module is installed on all hosts and VMs.
    The SQL Server instances work without problems. But when I try to live migrate a SQL VM to another HV node, the SQL instance fails. In the SQL error log I see:
    2013-06-19 10:39:44.07 spid1s      Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.07 spid1s      SQLServerLogMgr::LogWriter: Operating system error 170(The requested resource is in use.) encountered.
    2013-06-19 10:39:44.07 spid1s      Write error during log flush.
    2013-06-19 10:39:44.07 spid55      Error: 9001, Severity: 21, State: 4.
    2013-06-19 10:39:44.07 spid55      The log for database 'Admin' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
    2013-06-19 10:39:44.07 spid55      Database Admin was shutdown due to error 9001 in routine 'XdesRMFull::CommitInternal'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
    2013-06-19 10:39:44.31 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.31 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.31 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.31 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.32 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.32 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.32 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.32 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.33 spid36s     Starting up database 'Admin'.
    2013-06-19 10:39:44.58 spid36s     349 transactions rolled forward in database 'Admin' (6:0). This is an informational message only. No user action is required.
    2013-06-19 10:39:44.58 spid36s     SQLServerLogMgr::FixupLogTail (failure): alignBuf 0x000000001A75D000, writeSize 0x400, filePos 0x156adc00
    2013-06-19 10:39:44.58 spid36s     blankSize 0x3c0000, blkOffset 0x1056e, fileSeqNo 1313, totBytesWritten 0x0
    2013-06-19 10:39:44.58 spid36s     fcb status 0x42, handle 0x0000000000000BC0, size 262144 pages
    2013-06-19 10:39:44.58 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.58 spid36s     SQLServerLogMgr::FixupLogTail: Operating system error 170(The requested resource is in use.) encountered.
    2013-06-19 10:39:44.58 spid36s     Error: 5159, Severity: 24, State: 13.
    2013-06-19 10:39:44.58 spid36s     Operating system error 170(The requested resource is in use.) on file "v:\MSSQL\log\Admin\Log.ldf" during FixupLogTail.
    2013-06-19 10:39:44.58 spid36s     Error: 3414, Severity: 21, State: 1.
    2013-06-19 10:39:44.58 spid36s     An error occurred during recovery, preventing the database 'Admin' (6:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected,
    contact Technical Support.
    In the Windows system log I see a lot of warnings like this:
    - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    - <System>
      <Provider
    Name="Microsoft-Windows-Ntfs" Guid="{3FF37A1C-A68D-4D6E-8C9B-F79E8B16C482}" />
      <EventID>140</EventID>
      <Version>0</Version>
      <Level>3</Level>
      <Task>0</Task>
      <Opcode>0</Opcode>
      <Keywords>0x8000000000000008</Keywords>
      <TimeCreated
    SystemTime="2013-06-19T06:39:44.314400200Z" />
      <EventRecordID>25239</EventRecordID>
      <Correlation
    />
      <Execution
    ProcessID="4620" ThreadID="4284" />
      <Channel>System</Channel>
      <Computer>sql-node-5.local.net</Computer>
      <Security
    UserID="S-1-5-21-796845957-515967899-725345543-17066" />
      </System>
    - <EventData>
      <Data Name="VolumeId">\\?\Volume{752f0849-6201-48e9-8821-7db897a10305}</Data>
      <Data Name="DeviceName">\Device\HarddiskVolume70</Data>
      <Data Name="Error">0x80000011</Data>
      </EventData>
     </Event>
    The system failed to flush data to the transaction log. Corruption may occur in VolumeId: \\?\Volume{752f0849-6201-48e9-8821-7db897a10305}, DeviceName: \Device\HarddiskVolume70.
    ({Device Busy}
    The device is currently busy.)
    There aren't any errors or warnings on the HV hosts.

    Hello,
    I am trying to involve someone more familiar with this topic to take a further look at this issue. Some delay might be expected while the job is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Fanny Liu
    TechNet Community Support

  • Host server live migration causes Guest Cluster node to go down

    Hi 
    I have a two-node Hyper-V host cluster. I'm using a converged network for host management, live migration, and the cluster network, and separate NICs for iSCSI multipathing. When I live migrate a guest node from one host to another, that node
    goes down within the guest cluster. I have increased the cluster threshold and cluster delay values. The guest nodes connect to the iSCSI network directly via the iSCSI initiator on Server 2012.
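    (For reference, the heartbeat settings mentioned above map to these cluster properties; a sketch assuming the FailoverClusters module, with an example value only:)
    Import-Module FailoverClusters
    # View the current heartbeat tuning on the cluster in question.
    Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold
    # Example: tolerate more missed heartbeats before a node is declared down.
    (Get-Cluster).SameSubnetThreshold = 10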
    The converged networks for management, cluster, and live migration are built on top of a NIC team in switch-independent mode with Hyper-V Port load balancing.
    I have VMQ enabled on the converged fabric and jumbo frames enabled on iSCSI.
    Can anyone guess why live migration would cause a failure on the guest node?
    thanks
    mumtaz 

    Repost here: http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/threads
    in the Hyper-V forum.  You'll get a lot more help there.
    This forum is for Virtual Server 2005.

  • Live Migration fails while Quick Migration is OK... virtual machine with synthetic FC HBA!

    When I migrate a virtual machine with a synthetic FC HBA in a Windows Server 2012 R2 cluster, it fails,
    but when I do it as a quick migration, it succeeds!
    The error event is here:
    Live migration of 'Virtual Machine PTSCSQL01' failed.
    Virtual machine migration operation for 'PTSCSQL01' failed at migration destination 'PTCLS0106'. (Virtual machine ID B8FBDE64-FF97-4E9B-BC40-6DCFA09B31BE)
    'PTSCSQL01' Synthetic FibreChannel Port: Failed to finish reserving resources with Error 'Unspecified error' (0x80004005). (Virtual machine ID B8FBDE64-FF97-4E9B-BC40-6DCFA09B31BE)
    'PTSCSQL01' Synthetic FibreChannel Port: Failed to finish reserving resources with Error 'Unspecified error' (0x80004005). (Virtual machine ID B8FBDE64-FF97-4E9B-BC40-6DCFA09B31BE)
    My virtual machine's synthetic FC HBA settings are shown here.
    Be a Microsoft pioneer; enjoy the user experience.

    Yes, definitely check your zoning/masking.  Remember that with vHBA you have twice as many WWPNs to account for.  Performing a live migration makes use of both pairs during the transfer from one host to the other - one set is active on the machine
    currently running and the second set is used to ensure connectivity on the destination.  So if you are using Address Set A on Host1, Host2 will try to set up the fibre channel connection using Address Set B.  If you do a quick migration, you would
    continue to use the same Address Set on the second host.  That's why you most likely need to check your zoning/masking for the alternate set.
    . : | : . : | : . tim

  • Cannot perform a live migration while replication is enabled

    I have two Server 2012 machines that I am running Hyper-V on (just testing for now). I can right-click a virtual machine and move it (live migration) to the other server with no issues. I can also enable replication on the VM, and that
    appears to work properly. However, if I have replication enabled on a VM, I am unable to perform a live migration. I receive an error that says the VM already exists on the destination server (because of the replication).
    Is there any way to perform a live migration while replication is enabled?

    I can't think of a reason why you would want to anywhere else but a test environment. Replication's intended use is DR across a WAN between two physical locations (there are other uses, but this is primarily why it was created). If Office1 burns down, you can
    boot your replica at Office2. Live Migration is for local HA. If Server1 has a hardware failure, live migrate to Server2.
    If everyone could simply Live Migrate over their WAN link between offices, Replication would be redundant. But getting a fast enough WAN link for this is extremely expensive, so Microsoft created Replication for high latency, low bandwidth WAN connections.
    TL;DR, Replication and Live Migration are mutually exclusive in almost all environments. It's not even that they *can't* work together, it's just that there's no point in making them work together because they're for different use cases.

  • Failed live migration from HyperV2012 to Hyperv2012R2 cluster

    I'm at a loss on something and I was wondering if you could point me in the right direction. I tried to migrate a server from one of my 2012 hosts to a brand new, verified Hyper-V 2012 R2 cluster. I had already successfully migrated 5 others
    from there, but this one SQL 2012 guest gave me a problem.
    When I tried to live migrate it, it failed. It said the hardware on the destination computer was not compatible. I did some research, and one blog said that sometimes you migrate to different hardware and it fails, for example because of the processors. The guy
    on the blog said to check the box under Processor\Compatibility to put the virtual processor in compatibility mode. I did that and tried again, but it failed again. The blog said something about resource groups. The VM was "locked" at that
    point.
    I got frustrated, so I just turned off the guest, copied the disks, and rebuilt it on Aag with the same memory and processor settings. I thought I was fine, but I looked at it the next day, and in Failover Cluster Manager it shows the machine is "off".
    BUT the VM is actually running, because the server is working. It's a database server: the sites using it are up, and I can RDC to it. So I'm afraid to touch it.
    And these are the errors from the event log:
    'Virtual Machine Configuration DEVDB1' failed to register the virtual machine with the virtual machine management service.
    The Virtual Machine Management Service failed to register the configuration for the virtual machine '62AAD2B5-8E03-4B59-84E0-D52CBF36934B' at 'C:\ClusterStorage\Volume2\Devdb1\DEVDB1': The
    system cannot find the file specified. (0x80070002). If the virtual machine is managed by a failover cluster, ensure that the file is located at a path that is accessible to other nodes of the cluster.
    Cluster resource 'Virtual Machine Configuration DEVDB1' of type 'Virtual Machine Configuration' in clustered role 'DEVDB1' failed. The error code was '0x2' ('The system cannot find the file
    specified.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart
    it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    I think I moved it while it was in a funky state. I also think I was supposed to "export" the VM first. I guess I'm a newbie, but I'm not sure what to do to fix it. Any advice is greatly appreciated.

    I think I found at least part of the problem, but I'm not sure. My new VM's configuration directory and files are named this:
    D79490DB-48F9-40A4-9540-53F2532D3F7F
    D79490DB-48F9-40A4-9540-53F2532D3F7F.xml
    not 62AAD2B5-8E03-4B59-84E0-D52CBF36934B.
    Still not sure what to do about that, though.
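    One way to compare the GUID the cluster resource references with what Hyper-V has on disk; a sketch assuming the FailoverClusters and Hyper-V modules, with the resource name taken from the errors above:
    # What the cluster's VM Configuration resource points at:
    Get-ClusterResource "Virtual Machine Configuration DEVDB1" | Get-ClusterParameter
    # What Hyper-V itself knows about the VM (VMId should match the .xml file name):
    Get-VM DEVDB1 | Select-Object Name, VMId, Path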

  • VM windows 2008 Cluster on Hyperv 2012 Server FC Disk error after Live Migration of the active VM Cluster Node

    We deployed a 2-node Windows 2008 R2 SP1 failover cluster on a Hyper-V 2012 Server cluster running on IBM HS23 blades.
    The disk subsystem is an IBM Storwize V7000. The MPIO driver is installed, plus IBM DDSM.
    The LUNs presented to the VM are connected with virtual FC adapters, and everything works fine in the cluster until we start a live migration of the VM which holds the cluster disks online.
    After the migration completes successfully, the MPIO of the migrated VM goes crazy with a lot of errors (source: mpio, event ID 16) and warnings (source: mpio, event ID 17) in the system event log. After that, the disks become unavailable.
    Consequently everything hangs until we power off the migrated VM, so the services in the cluster switch to the second node.
    I tried to set the registry key HKLM\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue to 190, as I found in various articles, but nothing seems to change...
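    (For reference, the equivalent PowerShell for the registry change above, run inside the VM; a sketch only:)
    # Raise the disk I/O timeout to 190 seconds (TimeOutValue is in seconds, DWORD).
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' -Name TimeOutValue -Value 190 -Type DWord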
    Any idea?
    vannig

    Hello,
    I've just been through the IBM interoperability matrix and came across this statement:
    "Hyper-V on x64-based systems is supported with the following guest OSes: Windows 2003 32-bit, Windows 2003 64-bit, Windows 2008 32-bit, Windows 2008 64-bit.
    Clustering of guest OSes is not supported at this time. When using Emulex HBAs with Hyper-V, please select the settings mentioned in the Host Attachment section of the SVC Info Center."
    http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
    Thanks

  • Live Migration : virtual Fibre Channel vSAN

    I can do live migration from one node to another with no errors. The problem/question I have is: is live migration really live migration?
    When I do a live migration from the cluster or SCVMM, it saves and starts the virtual machine, which for me is not live migration.
    I have described it in more detail here: http://social.technet.microsoft.com/Forums/en-US/a52ac102-4ea3-491c-a8c5-4cf4dd14768d/synthetic-fibre-channel-hba-live-migration-savestopstart?forum=winserverhyperv
    BlatniS

    Virtual Fibre Channel made sense in pre-R2 times, when there was no shared VHDX and you had to somehow provide fault-tolerant shared storage to a guest VM cluster (spawning an iSCSI target on top of FC was slow and ugly). Now there's no point in putting one into
    production, so if you have issues, just use shared VHDX. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Shared VHDX
    http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
    Shared VHDX is much more flexible and has better performance. 
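    A minimal sketch of wiring up a shared VHDX for a two-node guest cluster (the VM names and the CSV path are placeholders; assumes the Windows Server 2012 R2 Hyper-V module):
    # The same VHDX on a CSV is attached to both guest nodes with
    # persistent reservations enabled, which is what makes it "shared".
    $disk = 'C:\ClusterStorage\Volume1\GuestCluster\data.vhdx'
    Add-VMHardDiskDrive -VMName GuestNode1 -Path $disk -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName GuestNode2 -Path $disk -SupportPersistentReservations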
    Good luck!

  • Live Migrating Virtual Machines with Shared VHDx

    I am facing problems when live migrating a virtual machine that is using a shared VHDX. The virtual machine gets migrated, that is, the configuration gets migrated, but the virtual machine fails to start up, and when started manually, it fails too.
    What is the method to live migrate virtual machines that are using a shared VHDX? Thanks in advance.

    Another couple of gotchas:
    You cannot do host-level backups of the guest cluster.  This is the same as it always was.  You will have to install backup agents in the guest cluster nodes and back them up as if they were physical machines.
    You cannot perform a hot-resize of the shared VHDX.  But you can hot-add more shared VHDX files to the clustered VMs.
    You cannot Storage Live Migrate the shared VHDX file.  You can move the other VM files and perform normal Live Migration.
    As long as you have your shared VHDX on an SMB3 share, you can also have the nodes of the guest cluster on different Hyper-V hosts.

  • Server 2012 r2 live migration fails with hardware error

    Hello all, we just upgraded one of our Hyper-V hosts from Server 2012 to Server 2012 R2; previously we had replication set up between it and another box on the network, which was also running Server 2012. After installing Server 2012 R2, when a live migration
    is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both Dell; currently we have a PowerEdge R910 running Server 2012 and a PowerEdge R900 running Server 2012 R2. The option under Processor to "migrate to a physical computer using a different processor" is already checked,
    and this same VM was successfully being replicated before the upgrade to Server 2012 R2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full reinstall, wiping out the installation of Server 2012 and installing Server 2012 R2; this was not an in-place
    upgrade.
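    (For what it's worth, the processor compatibility checkbox mentioned above can also be set from PowerShell while the VM is off; a sketch, with a placeholder VM name:)
    # Enable processor compatibility mode so the VM can move between
    # hosts with different CPU generations (the VM must be powered off).
    Set-VMProcessor -VMName SQLVM01 -CompatibilityForMigrationEnabled $true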

    The only cause I've seen so far is virtual switches being named differently. I do remember that one of our VMs didn't move, but we simply bypassed this problem using a one-time backup (VeeamZIP, more specifically).
    If it's a one-time operation, you can use the same procedure for the VMs in question: back them up and restore them at the new server.
    Kind regards, Leonardo.

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote
    DR site. Both sites are connected/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in Failover Cluster Manager on both clusters and started replicating some VMs using Hyper-V Replica.
    Now, every time I attempt to use SCVMM to do a live migration of a VM that is protected using Hyper-V Replica to another host within the same cluster, the Migrate VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine 'virtual machine name' which requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."
    When I ignore the error and do the live migration anyway, it completes successfully with the info above. There doesn't seem to be any impact on the VM or its replication.
    When a host shuts down or is put into maintenance, the VM migrates successfully, again with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error appeared because I had attempted to manually configure the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager).
    However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. The error does not seem to have any impact on the high availability of
    my VMs or on their replication: live migrations still occur successfully, and replication seems to carry on without any issues.
    However, it has me concerned that a live migration may one day break replication of my VMs between both clusters.
    I have searched and searched, and I cannot find any mention, in official or unofficial Microsoft channels, of the compatibility of these two features.
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
    Please confirm: are Hyper-V Live Migration and Hyper-V Replica compatible with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner would be highly appreciated.
    D

    This can be considered a minor GUI bug.
    Let me explain: Live Migration of VMs using Hyper-V Replica is supported on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because we have the Hyper-V Replica Broker role (in a cluster) that is able to detect, receive, and keep track of the VMs and the synchronizations. The replication configuration follows the VMs themselves.
    If you try to live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will (as you can see) give you an error, though it should rather be an informative message instead.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: I just spoke to one of the PMs of HRM, and they can confirm that live migration is supported and should work in this context.
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Error 10698 Virtual machine could not be live migrated to virtual machine host

    Hi all,
    I am running a failover cluster of:
    Hosts:
    2 x WS2008 R2 Datacenter
    managed by VMM:
    VMM 2008 R2
    Virtual machine:
    1 x Windows 2003 64-bit guest virtual machine
    I have attempted a live migration through VMM 2008 R2, and I'm presented with the following error:
    Error (10698)
    Virtual machine XXXXX could not be live migrated to virtual machine host xxx-Host01 using this cluster configuration.
     (Unspecified error (0x80004005))
    What I found when running cluster validation:
    1 out of the 2 hosts has an RPC error related to network configuration:
    An error occurred while executing the test.
    Failed to connect to the service manager on 'xxx-Host02'.
    The RPC server is unavailable
    However, there are no errors or events on Host02 that show any problems at all.
    In fact, the validation report goes on to show the rest of the configuration information of both cluster hosts as OK.
    See below:
    List BIOS Information
    List BIOS information from each node.
    xxx-Host01
    Gathering BIOS Information for xxx-Host01
    Item  Value 
    Name  Phoenix ROM BIOS PLUS Version 1.10 1.1.6 
    Manufacturer  Dell Inc. 
    SMBios Present  True 
    SMBios Version  1.1.6 
    SMBios Major Version  2 
    SMBios Minor Version  5 
    Current Language  en|US|iso8859-1 
    Release Date  3/23/2008 9:00:00 AM 
    Primary BIOS  True 
    xxx-Host02
    Gathering BIOS Information for xxx-Host02
    Item  Value 
    Name  Phoenix ROM BIOS PLUS Version 1.10 1.1.6 
    Manufacturer  Dell Inc. 
    SMBios Present  True 
    SMBios Version  1.1.6 
    SMBios Major Version  2 
    SMBios Minor Version  5 
    Current Language  en|US|iso8859-1 
    Release Date  3/23/2008 9:00:00 AM 
    Primary BIOS  True 
    Back to Summary
    Back to Top
    List Cluster Core Groups
    List information about the available storage group and the core group in the cluster.
    Summary 
    Cluster Name: xxx-Cluster01 
    Total Groups: 2 
    Group  Status  Type 
    Cluster Group  Online  Core Cluster 
    Available Storage  Offline  Available Storage 
     Cluster Group
    Description:
    Status: Online
    Current Owner: xxx-Host01
    Preferred Owners: None
    Failback Policy: No failback policy defined.
    Resource  Type  Status  Possible Owners 
    Cluster Disk 1  Physical Disk  Online  All Nodes 
    IP Address: 10.10.0.60  IP Address  Online  All Nodes 
    Name: xxx-Cluster01  Network Name  Online  All Nodes 
     Available Storage
    Description:
    Status: Offline
    Current Owner: Per-Host02
    Preferred Owners: None
    Failback Policy: No failback policy defined.
     Cluster Shared Volumes
    Resource  Type  Status  Possible Owners 
    Data  Cluster Shared Volume  Online  All Nodes 
    Snapshots  Cluster Shared Volume  Online  All Nodes 
    System  Cluster Shared Volume  Online  All Nodes 
    Back to Summary
    Back to Top
    List Cluster Network Information
    List cluster-specific network settings that are stored in the cluster configuration.
    Network: Cluster Network 1 
    DHCP Enabled: False 
    Network Role: Internal and client use 
    Metric: 10000 
    Prefix  Prefix Length 
    10.10.0.0  20 
    Network: Cluster Network 2 
    DHCP Enabled: False 
    Network Role: Internal use 
    Metric: 1000 
    Prefix  Prefix Length 
    10.13.0.0  24 
    Subnet Delay  
    CrossSubnetDelay  1000 
    CrossSubnetThreshold  5 
    SameSubnetDelay  1000 
    SameSubnetThreshold  5 
    Validating that Network Load Balancing is not configured on node xxx-Host01.
    Validating that Network Load Balancing is not configured on node xxx-Host02.
    An error occurred while executing the test.
    Failed to connect to the service manager on 'xxx-Host02'.
    The RPC server is unavailable
    Back to Summary
    Back to Top
    If it were an RPC connection issue, then I shouldn't be able to MSTSC or browse shares on Host02. Well, I can access them, which makes the report above a bit misleading.
    I have also checked the RPC service, and it is started.
    If there is anyone that can shed some light or advise me on any other options for troubleshooting this, that would be greatly appreciated.
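    Two quick checks that might narrow this down; a sketch assuming the FailoverClusters PowerShell module on WS2008 R2:
    Import-Module FailoverClusters
    # Re-run just the network validation tests against both nodes.
    Test-Cluster -Node xxx-Host01, xxx-Host02 -Include "Network"
    # Confirm the RPC service is running on the suspect node.
    Get-Service -ComputerName xxx-Host02 -Name RpcSs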
    Kind regards,
    Chucky

