Unable to live migrate VM (error 21502)

Hi,
I have a four-node Hyper-V cluster built on Windows Server 2012. I have found an issue where one virtual machine is unable to live migrate to another cluster node, failing with the following error:
Live migration of 'Virtual Machine VM' failed.
Virtual machine migration operation for 'VM' failed at migration destination 'HYPERV2'. (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
'VM' Microsoft Emulated IDE Controller (Instance ID 83F8638B-8DCA-4152-9EDA-2CA8B33039B4): Failed to restore with Error 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
'VM': Failed to open attachment 'C:\ClusterStorage\Volume1\VM\VM.vhdx'. Error: 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
'VM': Failed to open attachment 'C:\ClusterStorage\Volume1\VM\VM.vhdx'. Error: 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
It is possible to migrate the VM in the Stopped state, but then the VM cannot start on the new host, failing with the following error:
'Virtual Machine VM' failed to start.
'VM' failed to start. (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
'VM' Microsoft Emulated IDE Controller (Instance ID 83F8638B-8DCA-4152-9EDA-2CA8B33039B4): Failed to Power on with Error 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
'VM': Failed to open attachment 'C:\ClusterStorage\Volume1\VM\VM.vhdx'. Error: 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
'VM': Failed to open attachment 'C:\ClusterStorage\Volume1\VM\VM.vhdx'. Error: 'The process cannot access the file because another process has locked a portion of the file.' (0x80070021). (Virtual machine ID EB7708F3-6D0B-4F7E-9EC9-EA7EE718A134)
Live storage migration works fine, and when I migrate the VM back to the original node it starts correctly.
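Error 0x80070021 means another process still holds a byte-range lock on the VHDX. A hedged first check is whether any other VM on any cluster node still references the same disk; the sketch below assumes the Hyper-V PowerShell module on each node, and the node names other than HYPERV2 are placeholders:
# Sweep every cluster node for a VM that references the same VHDX
Invoke-Command -ComputerName HYPERV1, HYPERV2, HYPERV3, HYPERV4 {
    $vhd = 'C:\ClusterStorage\Volume1\VM\VM.vhdx'
    Get-VM | Get-VMHardDiskDrive |
        Where-Object { $_.Path -eq $vhd } |
        Select-Object VMName, Path
} | Select-Object PSComputerName, VMName, Path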
Thanks for any response.

Hi, Daniel,
Sometimes live migrations fail because the virtual switches on the hosts are named differently, so the first thing to do is to make sure that the vSwitches on both hosts have the same name; a quick comparison is sketched below.
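A minimal sketch, assuming the Hyper-V module on both hosts (host names are placeholders):
# List the virtual switches on both hosts side by side;
# the Name values must match exactly for live migration to work.
Invoke-Command -ComputerName HYPERV1, HYPERV2 { Get-VMSwitch } |
    Select-Object PSComputerName, Name, SwitchType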
Also, you can try taking the cluster role offline and performing the repair procedure, which appears to fix the mysterious issue causing live migrations of VMs to fail (open Failover Cluster Manager -> select the cluster name -> Take Offline -> More Actions -> Repair).
Otherwise, if you're short on time and need to migrate the VM as soon as possible, you can perform a one-time backup/restore operation using one of the free backup utilities available on the market (VeeamZIP or similar). In many ways such a tool acts as a zip utility for VMs. It helped us a lot when migration failed for whatever reason and we didn't have enough time to find the root cause.
Kind regards, Leonardo.

Similar Messages

  • Unable to Live Migrate to Some Hosts

    We have been having issues where we cannot Live Migrate to some of our hosts. I have verified that the VMM agent is the same version and the host OS is the same. I have compared all settings in VMM and Hyper-V between hosts that are Live Migration candidates and
    hosts that are not. The rating message for the hosts we are unable to Live Migrate to is the following:
     Unable to migrate or clone the virtual machine "" because the version of virtualization software on the host does not match the version of virtual machine's virtualization software on source (6.3.9600.17334). To be migrated or cloned,
    the virtual machine must be stopped and should not contain any saved state. 
    Any help resolving this is greatly appreciated!

    Hi,
    I also have a few questions. You say that you are a total noob, so you should check the total-noob things :)
    What version of SCVMM are you using?
    What about your infrastructure? A Hyper-V cluster? And how many clusters?
    Are you trying to migrate between Hyper-V clusters, or between cluster nodes?
    Are the VMs made highly available on your Hyper-V hosts?
    Does the target host have access to the shared volume?
    Did you check the processor version?
    Check the patch level of the hosts and the agents; a quick version comparison is sketched after this list.
    Are the source and destination hosts on the same OS version and patch level? Are they both in the same cluster?
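    Since the rating message cites a build number (6.3.9600.17334), one minimal sketch is to compare the Hyper-V management service build on both hosts (host names are placeholders):
    # Compare the vmms.exe build on the source and destination hosts
    Invoke-Command -ComputerName Host1, Host2 {
        (Get-Item "$env:windir\System32\vmms.exe").VersionInfo.ProductVersion
    }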
    Without knowing more about your infrastructure I could ask a lot more questions. It would be great if you could give us some more information about your setup, or let us know your solution if you manage to fix the problem.
    regards,
    Thomas
    Thomas Hanrath [MCT | Regional Lead Germany]

  • How to Fix: Error (10698) The virtual machine () could not be live migrated to the virtual machine host () using this cluster configuration.

    I am unable to live migrate via SCVMM 2012 R2 to one host in our 5-node cluster. The job fails with the errors below.
    Error (10698)
    The virtual machine () could not be live migrated to the virtual machine host () using this cluster configuration.
    Recommended Action
    Check the cluster configuration and then try the operation again.
    Information (11037)
    There currently are no network adapters with network optimization available on host.
    The host properties indicate that network optimization is available, as indicated in the screenshot (not reproduced here).
    Any guidance on things to check is appreciated.
    Thanks,
    Glenn
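    In VMM, "network optimization" generally corresponds to VMQ, so a hedged check is what the destination host's NICs actually expose (run on that host):
    # Show which physical adapters offer VMQ and whether it is enabled
    Get-NetAdapterVmq | Select-Object Name, Enabled, NumberOfReceiveQueues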

    Here is a snippet of the cluster log from the node that owned the VM when the migration failed:
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RHS] Resource Virtual Machine Configuration VMNameHere called SetResourceLockedMode. LockedModeEnabled0, LockedModeReason0.
    00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO  [RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine Configuration VMNameHere', gen(0) result 0/0.
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RHS] Resource Virtual Machine VMNameHere called SetResourceLockedMode. LockedModeEnabled0, LockedModeReason0.
    00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO  [RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine VMNameHere', gen(0) result 0/0.
    00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO  [RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine VMNameHere', gen(0) result 0/0.
    00000b6c.000020ec::2014/02/03-13:16:07.495 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RES] Virtual Machine Configuration <Virtual Machine Configuration VMNameHere>: Current state 'MigrationSrcWaitForOffline', event 'MigrationSrcCompleted', result 0x8007274d
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RES] Virtual Machine Configuration <Virtual Machine Configuration VMNameHere>: State change 'MigrationSrcWaitForOffline' -> 'Online'
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RES] Virtual Machine <Virtual Machine VMNameHere>: Current state 'MigrationSrcOfflinePending', event 'MigrationSrcCompleted', result 0x8007274d
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RES] Virtual Machine <Virtual Machine VMNameHere>: State change 'MigrationSrcOfflinePending' -> 'Online'
    00000e50.00002080::2014/02/03-13:16:07.510 ERR   [RES] Virtual Machine <Virtual Machine VMNameHere>: Live migration of 'Virtual Machine VMNameHere' failed.
    Virtual machine migration operation for 'VMNameHere' failed at migration source 'SourceHostNameHere'. (Virtual machine ID 6901D5F8-B759-4557-8A28-E36173A14443)
    The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host 'DestinationHostNameHere': No connection could be made because the tar
    00000e50.00002080::2014/02/03-13:16:07.510 ERR   [RHS] Resource Virtual Machine VMNameHere has cancelled offline with error code 10061.
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] HandleMonitorReply: OFFLINERESOURCE for 'Virtual Machine VMNameHere', gen(0) result 0/10061.
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] Res Virtual Machine VMNameHere: OfflinePending -> Online( StateUnknown )
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] TransitionToState(Virtual Machine VMNameHere) OfflinePending-->Online.
    00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] rcm::QueuedMovesHolder::VetoOffline: (VMNameHere with flags 0)
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] rcm::QueuedMovesHolder::RemoveGroup: (VMNameHere) GroupBeingMoved: false AllowMoveCancel: true NotifyMoveFailure: true
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] VMNameHere: Removed Flags 4 from StatusInformation. New StatusInformation 0
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] rcm::RcmGroup::CancelClusterGroupOperation: (VMNameHere)
    00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.000021a8::2014/02/03-13:16:07.510 INFO  [GUM] Node 3: executing request locally, gumId:3951, my action: /dm/update, # of updates: 1
    00000b6c.000021a8::2014/02/03-13:16:07.510 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.000022a0::2014/02/03-13:16:07.510 INFO  [RCM] moved 0 tasks from staging set to task set.  TaskSetSize=0
    00000b6c.000022a0::2014/02/03-13:16:07.510 INFO  [RCM] rcm::RcmPriorityManager::StartGroups: [RCM] done, executed 0 tasks
    00000b6c.00000dd8::2014/02/03-13:16:07.510 INFO  [RCM] ignored non-local state Online for group VMNameHere
    00000b6c.000021a8::2014/02/03-13:16:07.526 INFO  [GUM] Node 3: executing request locally, gumId:3952, my action: /dm/update, # of updates: 1
    00000b6c.000021a8::2014/02/03-13:16:07.526 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.000018e4::2014/02/03-13:16:07.526 INFO  [RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine VMNameHere', gen(0) result 0/0.
    No entry is made in the cluster log of the destination node.
    To me this means the nodes cannot talk to each other, but I don't know why.
    They are on the same domain. Their server names resolve properly and they can ping each other both by name and IP.
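    Since error 10061 means the connection was actively refused, one hedged check is whether the destination's live migration listener is reachable at all; Hyper-V listens on TCP 6600 by default, and Test-NetConnection ships with Server 2012 R2:
    # Run from the source node; a failure here would match error 10061
    Test-NetConnection -ComputerName 'DestinationHostNameHere' -Port 6600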

  • VMM Protection Error: unable to perform the live migration due to a cloud protection issue for a G2 system. Error (23833)

    Error (23833)
    The selected cloud does not support protection, or the recovery point objective requested for the virtual machine xxxx (30 seconds) is not within the range that the cloud supports.
    Recommended Action
    Select a different cloud, enable protection for the cloud, or change the requested recovery point objective for the virtual machine to a value greater than or equal to NO_PARAM seconds, and then try the operation again.

    Microsoft has not provided me with much support on this issue, but I was able to resolve this myself. What you need to do is remove the DR Protection attribute from your VM using the command below.
    Source:
    # Stop protecting a virtual machine
    $vm = Get-SCVirtualMachine -Name "SQLVM1"
    Set-SCVirtualMachine -VM $vm -ClearDRProtection
    It is important to note that this command does nothing unless the Azure Site Recovery provider is installed on your VMM server and your VMM server is registered in ASR. Once I had run this command against my virtual machines, I was able to uninstall the
    provider and unregister my server from ASR. Now my VMs are free to live migrate again.

  • Live Migration fails with error "Synthetic FibreChannel Port: Failed to finish reserving resources" on a VM using Windows Server 2012 R2 Hyper-V

    Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster that uses Fibre Channel adapters with a virtual SAN configured on the Hyper-V hosts.
    I have read several articles about this issue, such as these:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    But I haven't been able to fix my issue.
    The virtual SAN is configured on every Hyper-V host node in the cluster, and every VM has two Fibre Channel adapters configured.
    All the World Wide Names are configured on both the FC switch and the FC SAN.
    All the drivers for the FC Adapter in the Hyper-V Hosts have been updated to their latest versions.
    The strange thing is that the issue does not affect all of the VMs: some VMs with FC adapters live migrate just fine, while others get this error.
    Quick migration works without problems.
    We even tried removing and recreating the FC adapters on a VM with problems (we had to configure the switch and SAN with the new WWNs and so on), but we ended up with the same problem.
    At first we thought it was related to the hosts, but since some VMs do live migrate fine with FC adapters, we tried migrating those on every host and everything worked well.
    My guess is that it has to be something related to the VMs themselves, but I haven't been able to figure out what it is.
    Any ideas on how to solve this are deeply appreciated.
    Thank you!
    Eduardo Rojas

    Hi Eduardo,
    How are things going?
    Best Regards
    Elton Ji

  • Error 10698 Virtual machine could not be live migrated to virtual machine host

    Hi all,
    I am running a failover cluster of:
    Host:
    2 x WS2008 R2 Data Centre
    managed by VMM:
    VMM 2008 R2
    Virtual Host:
    1x Windows 2003 64-bit guest virtual machine
    I have attempted a live migration through VMM 2008 R2 and I am presented with the following error:
    Error (10698)
    Virtual machine XXXXX could not be live migrated to virtual machine host xxx-Host01 using this cluster configuration.
     (Unspecified error (0x80004005))
    What I found when running the cluster validation:
    One of the two hosts has an RPC error related to network configuration:
    An error occurred while executing the test.
    Failed to connect to the service manager on 'xxx-Host02'.
    The RPC server is unavailable
    However, there are no errors or events on Host02 showing any problems at all.
    In fact, the validation report goes on to show the rest of the configuration information of both cluster hosts as OK.
    See below:
    List BIOS Information
    List BIOS information from each node.
    xxx-Host01
    Gathering BIOS Information for xxx-Host01
    Item  Value 
    Name  Phoenix ROM BIOS PLUS Version 1.10 1.1.6 
    Manufacturer  Dell Inc. 
    SMBios Present  True 
    SMBios Version  1.1.6 
    SMBios Major Version  2 
    SMBios Minor Version  5 
    Current Language  en|US|iso8859-1 
    Release Date  3/23/2008 9:00:00 AM 
    Primary BIOS  True 
    xxx-Host02
    Gathering BIOS Information for xxx-Host02
    Item  Value 
    Name  Phoenix ROM BIOS PLUS Version 1.10 1.1.6 
    Manufacturer  Dell Inc. 
    SMBios Present  True 
    SMBios Version  1.1.6 
    SMBios Major Version  2 
    SMBios Minor Version  5 
    Current Language  en|US|iso8859-1 
    Release Date  3/23/2008 9:00:00 AM 
    Primary BIOS  True 
    Back to Summary
    Back to Top
    List Cluster Core Groups
    List information about the available storage group and the core group in the cluster.
    Summary 
    Cluster Name: xxx-Cluster01 
    Total Groups: 2 
    Group  Status  Type 
    Cluster Group  Online  Core Cluster 
    Available Storage  Offline  Available Storage 
     Cluster Group
    Description:
    Status: Online
    Current Owner: xxx-Host01
    Preferred Owners: None
    Failback Policy: No failback policy defined.
    Resource  Type  Status  Possible Owners 
    Cluster Disk 1  Physical Disk  Online  All Nodes 
    IP Address: 10.10.0.60  IP Address  Online  All Nodes 
    Name: xxx-Cluster01  Network Name  Online  All Nodes 
     Available Storage
    Description:
    Status: Offline
    Current Owner: Per-Host02
    Preferred Owners: None
    Failback Policy: No failback policy defined.
     Cluster Shared Volumes
    Resource  Type  Status  Possible Owners 
    Data  Cluster Shared Volume  Online  All Nodes 
    Snapshots  Cluster Shared Volume  Online  All Nodes 
    System  Cluster Shared Volume  Online  All Nodes 
    Back to Summary
    Back to Top
    List Cluster Network Information
    List cluster-specific network settings that are stored in the cluster configuration.
    Network: Cluster Network 1 
    DHCP Enabled: False 
    Network Role: Internal and client use 
    Metric: 10000 
    Prefix  Prefix Length 
    10.10.0.0  20 
    Network: Cluster Network 2 
    DHCP Enabled: False 
    Network Role: Internal use 
    Metric: 1000 
    Prefix  Prefix Length 
    10.13.0.0  24 
    Subnet Delay  
    CrossSubnetDelay  1000 
    CrossSubnetThreshold  5 
    SameSubnetDelay  1000 
    SameSubnetThreshold  5 
    Validating that Network Load Balancing is not configured on node xxx-Host01.
    Validating that Network Load Balancing is not configured on node xxx-Host02.
    An error occurred while executing the test.
    Failed to connect to the service manager on 'xxx-Host02'.
    The RPC server is unavailable
    Back to Summary
    Back to Top
    If it were an RPC connection issue, I should not be able to RDP or browse shares to Host02, yet I can access both, which makes the report above a bit misleading.
    I have also checked the RPC service, and it is started.
    If anyone can shed some light or advise me on any other options for troubleshooting this, it would be greatly appreciated.
    Kind regards,
    Chucky


  • VM windows 2008 Cluster on Hyperv 2012 Server FC Disk error after Live Migration of the active VM Cluster Node

    Deployed a two-node Windows 2008 R2 SP1 failover cluster on a Hyper-V 2012 server cluster running on IBM HS23 blades.
    The disk subsystem is IBM Storwize V7000; the MPIO driver is installed plus IBM DDSM.
    The LUNs presented to the VMs are connected with virtual FC adapters, and everything works fine in the cluster until we start a live migration of the VM that holds the cluster disks online.
    The migration completes successfully, but at that moment MPIO in the migrated VM goes crazy, with a lot of errors (source: mpio, event ID 16) and warnings (source: mpio, event ID 17) in the system event log. After that the disks become unavailable.
    Consequently everything hangs until we power off the migrated VM, at which point the clustered services fail over to the second node.
    I tried setting the registry value HKLM\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue to 190, as suggested in various articles, but nothing seems to change (a sketch of this change follows the post).
    Any idea?
    vannig
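    For reference, the disk time-out value mentioned above lives under HKLM\SYSTEM\CurrentControlSet\Services\Disk; a minimal sketch for setting and verifying it inside the guest (value in seconds; a reboot of the guest is usually recommended afterwards):
    # Set the guest's disk I/O timeout to 190 seconds and read it back
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' `
        -Name TimeOutValue -Value 190 -Type DWord
    Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' |
        Select-Object TimeOutValue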

    Hello,
    I've just been through the IBM interoperability matrix and came across this statement:
    Hyper-V on x64 based systems is supported with the following guest OS: Windows 2003 32bit, Windows 2003 64bit, Windows 2008 32bit, Windows 2008 64bit.
    Clustering of guest OS is not supported at this time. When using Emulex HBAs with Hyper-V please select the settings mentioned in the Host Attachment section of SVC Info Center
    http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
    Thanks

  • Oracle VM (XEN) and Live Migration throws PCI error

    First off, is anyone using an HS21 XM blade with Oracle VM? Has anyone attempted a live migration, and does it work?
    When attempting a live migration on an HS21 XM blade we get the following errors and the host hangs:
    102 I Blade_02 08/10/08, 08:46:38 (CADCOVMA02) Blade reboot
    103 E Blade_02 08/10/08, 08:46:26 (CADCOVMA02) Software critical interrupt.
    104 E Blade_02 08/10/08, 08:46:21 (CADCOVMA02) SMI Hdlr: 00150700 PERR: Slave signaled parity error B/D/F/Slot=07000300 Vend=8086 Dev=3 0C Status=8000
    105 E Blade_02 08/10/08, 08:46:19 (CADCOVMA02) SMI Hdlr: 00150900 SERR/PERR Detected on PCI bus
    106 E Blade_02 08/10/08, 08:46:19 (CADCOVMA02) SMI Hdlr: 00150700 PERR: Slave signaled parity error B/D/F/Slot=00020000 Vend=8086 Dev=25F7 Statu
    107 E Blade_02 08/10/08, 08:46:18 (CADCOVMA02) PCI PERR: parity error.
    108 E Blade_02 08/10/08, 08:46:17 (CADCOVMA02) SMI Hdlr: 00150900 SERR/PERR Detected on PCI bus
    109 E Blade_02 08/10/08, 08:46:17 (CADCOVMA02) SMI Hdlr: 00150400 SERR: Device Signaled SERR B/D/F/Slot=07000300 Vend=8086 Dev=350C Sta us=4110
    110 E Blade_02 08/10/08, 08:46:16 (CADCOVMA02) SMI Hdlr: 00150400 SERR: Device Signaled SERR B/D/F/Slot=00020000 Vend=8086 Dev=25F7 Status=4110
    111 E Blade_02 08/10/08, 08:46:14 (CADCOVMA02) PCI system error.
    Any ideas? I've called IBM support, but their only options are to reseat the hardware or replace it. This happens on more than one blade, so we're assuming it has something to do with Oracle VM. Thanks!


  • Server 2012 R2 live migration fails with hardware error

    Hello all, we just upgraded one of our Hyper-V hosts from Server 2012 to Server 2012 R2; previously we had replication set up between it and another box on the network, which was also running Server 2012. After installing Server 2012 R2, when a live migration
    is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both Dell; currently we have a PowerEdge R910 running Server 2012 and a PowerEdge R900 running Server 2012 R2. The processor option "migrate to a physical computer using a different processor" is already checked
    (a way to confirm this from PowerShell is sketched below), and this same VM was successfully being replicated before the upgrade to Server 2012 R2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full reinstall, wiping out the Server 2012 installation and installing Server 2012 R2; it was not an in-place
    upgrade.
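    A minimal sketch to confirm the processor-compatibility flag on the VM (the VM name is a placeholder):
    # True means "migrate to a physical computer with a different
    # processor version" is enabled for this VM
    Get-VMProcessor -VMName 'MyVM' | Select-Object CompatibilityForMigrationEnabled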

    The only cause I've seen so far is virtual switches being named differently. I do remember that one of our VMs didn't move, but we simply bypassed the problem using a one-time backup (VeeamZIP, more specifically).
    If it's a one-time operation, you can use the same procedure for the VMs in question: back them up and restore them on the new server.
    Kind regards, Leonardo.

  • V-HBA Guest VM cannot Live Migration

    Hi All,
    I'm using the Windows Server 2012 Datacenter OS, configured with the Hyper-V server role and failover clustering (a 4-node cluster).
    Each physical server has two QLogic HBA adapters installed.
    We have configured the following on the Hyper-V servers and the HP SAN storage:
    Enabled the Virtual SAN Manager.
    Created a virtual Fibre Channel switch on each Hyper-V host; each host is configured with two adapters (Fibre Channel SAN 1 / Fibre Channel SAN 2).
    Enabled NPIV (N_Port ID Virtualization) at the switch port level on the HP storage.
    Created a new VM and assigned it two new vHBA adapters; each adapter has two sets of WWNs.
    Each adapter's Set A WWN address was discovered by the SAN switch and the zone was configured; however, Set B was not discovered, so we manually created the zones for Set B on both adapters.
    Presented the storage to the VM and installed MPIO; the SAN disk is visible in the VM's disk management.
    When live migrating the VM to a different host, I get the error below.
    Live migration of 'Virtual Machine TEST-Server' failed.
    Virtual machine migration operation for 'TEST-Server' failed at migration destination 'HV02'. (Virtual machine ID 378AE3E3-5A4F-4AE7-92F8-D47855C2D1C5)
    The virtual machine 'TEST-Server' is not compatible with physical computer 'HV02'. (Virtual machine ID 378AE3E3-5A4F-4AE7-92F8-D47855C2D1C5)
    Could not find a virtual fibre channel resource pool with ID 'New Fibre Channel SAN1'
    Could not find a virtual fibre channel resource pool with ID 'New Fibre Channel SAN2'
    Event ID: 21502
    I need some support from an expert on this issue. I'm trying to configure a guest cluster with HBAs; quick migration works fine, but live migration keeps failing, and I want to resolve it.
    Aucsna

    Hi Aucsna,
    "it's issue with the Virtual SAN Switch label name."
    Has the problem been solved? A quick way to compare the virtual SAN names across the nodes is sketched below.
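    The errors "Could not find a virtual fibre channel resource pool with ID ..." point the same way: the virtual SAN names must match on every node. A hedged sketch, with host names as placeholders:
    # Compare virtual SAN names across all cluster nodes
    Invoke-Command -ComputerName HV01, HV02, HV03, HV04 { Get-VMSan } |
        Select-Object PSComputerName, Name
    # Check which virtual SAN each vHBA of the VM expects
    Get-VMFibreChannelHba -VMName 'TEST-Server' | Select-Object SanName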
    Best Regards
    Elton Ji

  • OVM 3.1.1 - Live migration not completed

    Hi,
    I'm facing an interesting case with VM live migration.
    If I issue a migration from the manager, the VM is effectively moved to the new server, but the job stays in an "in progress" state (0% completed) and the OVM servers stay locked until I abort the job.
    Once the job is aborted, everything is back to normal and the VM is running on the targeted server.
    Any idea what's wrong?
    Thanks for the help.
    Below is the log of the job:
    Job Construction Phase
    begin()
    Appended operation 'Bridge Configure Operation' to object '0004fb000020000051ceed7ebd6f2ad9 (network.BondPort (1) in oracle55)'.
    Appended operation 'Bridge Configure Operation' to object '0004fb000020000004cb599206575194 (network.EthernetPort (3) in oracle55)'.
    Appended operation 'Virtual Machine Migrate' to object '0004fb0000060000a4a1035c270b5f7b (RH63_PVM_XDC_Node2)'.
    commit()
    Completed Step: COMMIT
    Objects and Operations
    Object (IN_USE): [Server] 36:34:31:30:31:36:43:5a:33:32:32:32:4b:42:4b:35 (oracle55)
    Object (IN_USE): [EthernetPort] 0004fb000020000004cb599206575194 (network.EthernetPort (3) in oracle55)
    Operation: Bridge Configure Operation
    Object (IN_USE): [Server] 36:34:31:30:31:36:43:5a:33:32:32:32:4b:42:4b:33 (oracle54)
    Object (IN_USE): [VirtualMachine] 0004fb0000060000a4a1035c270b5f7b (RH63_PVM_XDC_Node2)
    Operation: Virtual Machine Migrate
    Object (IN_USE): [BondPort] 0004fb000020000051ceed7ebd6f2ad9 (network.BondPort (1) in oracle55)
    Operation: Bridge Configure Operation
    Job Running Phase at 14:02 on Thu, Jan 10, 2013
    Job Participants: [36:34:31:30:31:36:43:5a:33:32:32:32:4b:42:4b:33 (oracle54)]
    Actioner
    Starting operation 'Bridge Configure Operation' on object '0004fb000020000004cb599206575194 (network.EthernetPort (3) in oracle55)'
    Bridge [0004fb001054934] already exists (and should exist) on interface [eth2] on server [oracle55]; skipping bridge creation
    Completed operation 'Bridge Configure Operation' completed with direction ==> DONE
    Starting operation 'Virtual Machine Migrate' on object '0004fb0000060000a4a1035c270b5f7b (RH63_PVM_XDC_Node2)'
    Completed operation 'Virtual Machine Migrate' completed with direction ==> LATER
    Starting operation 'Bridge Configure Operation' on object '0004fb000020000051ceed7ebd6f2ad9 (network.BondPort (1) in oracle55)'
    Bridge [15.136.24.0] already exists (and should exist) on interface [bond0] on server [oracle55]; skipping bridge creation
    Completed operation 'Bridge Configure Operation' completed with direction ==> DONE
    Starting operation 'Virtual Machine Migrate' on object '0004fb0000060000a4a1035c270b5f7b

    Some other log info from the ovs-agent.log file:
    [2013-01-10 17:29:02 7647] DEBUG (notification:291) Connected to manager.
    [2013-01-10 17:29:17 7655] ERROR (notification:64) Unable to send notification: (2, 'No such file or directory')
    [2013-01-10 17:29:18 7647] ERROR (notification:333) Error in NotificationServer process: 'Invalid URL Request (receive) http://15.136.28.56:7001/ovm/core/OVMManagerCoreServlet'
    Traceback (most recent call last):
    File "/usr/lib64/python2.4/site-packages/agent/notification.py", line 308, in serve_forever
    foundry = cm.getFoundryContext()
    File "/usr/lib/python2.4/site-packages/com/oracle/ovm/mgr/api/manager/OvmManager.py", line 38, in getFoundryContext
    self.foundry = self.getModelManager().getFoundryContext()
    File "/usr/lib/python2.4/site-packages/com/oracle/ovm/mgr/api/manager/OvmManager.py", line 31, in getModelManager
    if self.modelMgr == None:
    File "/usr/lib/python2.4/site-packages/com/oracle/ovm/mgr/api/manager/ModelManager.py", line 364, in __cmp__
    return self.compareTo(obj)
    File "/usr/lib/python2.4/site-packages/com/oracle/ovm/mgr/api/manager/ModelManager.py", line 250, in compareTo
    return self.exchange.invokeMethodByName(self.identifier,"compareTo","java.lang.Object",args,5,False)
    File "/usr/lib/python2.4/site-packages/com/oracle/odof/OdofExchange.py", line 68, in invokeMethodByName
    return self._send_(InvokeMethodByNameCommand(identifier, method, params, args, access))
    File "/usr/lib/python2.4/site-packages/com/oracle/odof/OdofExchange.py", line 164, in send
    return self._sendGivenConnection_(connection, command, timeout)
    File "/usr/lib/python2.4/site-packages/com/oracle/odof/OdofExchange.py", line 170, in sendGivenConnection
    result = connection.receive(command, timeout)
    File "/usr/lib/python2.4/site-packages/com/oracle/odof/io/ServletConnection.py", line 88, in receive
    raise OdofException("Invalid URL Request (receive) %s" % self.url, sys.exc_info()[1])
    OdofException: 'Invalid URL Request (receive) http://15.136.28.56:7001/ovm/core/OVMManagerCoreServlet'
    [2013-01-10 17:29:38 7655] ERROR (notification:64) Unable to send notification: (2, 'No such file or directory')
    [2013-01-10 17:29:54 7647] DEBUG (notification:289) Trying to connect to manager.
    [2013-01-10 17:29:58 7655] ERROR (notification:64) Unable to send notification: (2, 'No such file or directory')
    [2013-01-10 17:30:19 7655] ERROR (notification:64) Unable to send notification: (2, 'No such file or directory')

  • Cannot perform a live migration while replication is enabled

    I have two Server 2012 machines that I am running Hyper-V on (just testing for now). I can right-click a virtual machine and move it (live migration) to the other server with no issues. I can also enable replication on the VM, and that
    appears to work properly. However, if I have replication enabled on a VM, I am unable to perform a live migration; I receive an error saying the VM already exists on the destination server (because of the replica).
    Is there any way to perform a live migration while replication is enabled?

    I can't think of a reason why you would want to anywhere other than a test environment. Replication's intended use is DR across a WAN between two physical locations (there are other uses, but this is primarily why it was created). If Office1 burns down, you can
    boot your replica at Office2. Live Migration is for local HA: if Server1 has a hardware failure, you Live Migrate to Server2.
    If everyone could simply Live Migrate over their WAN link between offices, Replication would be redundant. But a WAN link fast enough for that is extremely expensive, so Microsoft created Replication for high-latency, low-bandwidth WAN connections.
    TL;DR: Replication and Live Migration are mutually exclusive in almost all environments. It's not even that they *can't* work together; there's just no point in making them work together, because they address different use cases.

  • Hyper-V live migration failed

    There is a Hyper-V cluster with two nodes, using Windows Server 2012 R2 as the operating system.
    Trying to live migrate a test VM from node 1 to node 2 fails with error 21502:
    Live migration of 'Virtual Machine test' failed.
    'Virtual Machine test' failed to fixup network settings. Verify VM settings and update them as necessary.
    The VM has a network adapter connected to a virtual switch whose connection type is Private network.
    If I set the network adapter to "Not connected" in the VM's settings, the migration succeeds.
    All VMs that are not connected to any private network (virtual switches with the Private connection type) can be live migrated without any issues.
    Is there any official reference on Hyper-V live migration of VMs that use the "private network" connection type?

    I can Live Migrate virtual machines with adapters on private switches without error. Aside from the switch having the wrong name, the only way I can get it to fail is to give the switch on one host a different QoS minimum-bandwidth mode than the other and
    enable QoS on the virtual adapter; even then I get a different message than you do. I only get yours with differently named switches.
    There is a PowerShell cmdlet available to see why a guest won't run on another host, and the report it produces can also be used to drive the Live Migration itself; a sketch of its usage follows.
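    The cmdlet meant here is presumably Compare-VM from the Hyper-V module; a minimal sketch, with VM and host names as placeholders:
    # Ask the destination host why the VM cannot be received
    $report = Compare-VM -Name 'test' -DestinationHost 'node2'
    $report.Incompatibilities | Select-Object MessageId, Message
    # Once the blocking items are resolved or removed, the same report
    # can drive the actual move
    Move-VM -CompatibilityReport $report
    # Also worth checking: switch names and QoS minimum-bandwidth mode
    # must match on both hosts
    Invoke-Command -ComputerName node1, node2 { Get-VMSwitch } |
        Select-Object PSComputerName, Name, BandwidthReservationMode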
    But there is no way to truly Live Migrate three virtual machines in perfect lockstep. Even if you figure out whatever is preventing you from migrating these machines, there will still be periods during Live Migration when they can't communicate across that
    private network. You also can't guarantee that all these guests will always be running on the same host without preventing Live Migration in the first place. This is why there really isn't anyone doing what you're trying to do; I suggest you consider another
    isolation solution, such as VLANs.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • Virtual Machines With Large RAM Fails Live Migration

    Hi everyone....
    I have a 2-node Hyper-V cluster managed by SCVMM 2012 R2. I am currently unable to migrate a VM that is using 48 GB of RAM. Each node has 256 GB of RAM and runs Windows Server 2012 R2.
    When the VM is running on node 1, there is 154 GB in use and 102 GB available. When I try to migrate this VM to node 2 (which has 5.6 GB in use and 250 GB available), I get this error message in VMM:
    Error (10698)
    The virtual machine (abc-defghi-vm) could not be live migrated to the virtual machine host (xyz-wnbc-nd03) using this cluster configuration.
    Recommended Action
    Check the cluster configuration and then try the operation again.
    (In case you were wondering, I ran the cluster validation and it passed without a problem.)
    The Failover Cluster event log shows two key entries:
    First:
    Cluster resource 'SCVMM xyz-wnbc-vm' in clustered role 'SCVMM xyz-wnbc-vm Resources' has transitioned from state OfflineCallIssued to state OfflinePending. 
    Exactly 60 seconds later, this message appears:
    Cluster resource 'SCVMM abc-defghi-vm in clustered role 'SCVMM abc-defghi-vm Resources' rejected a move request to node 'xyz-wnbc-nd03'. The error code was '0x340031'.  Cluster resource 'SCVMM abc-defghi-vm' may be busy or in a state where it
    cannot be moved.  The cluster service may automatically retry the move.
    Nothing found after Googling "0x340031".  Does anyone know what error that is?
    Other notes:
    If the virtual machine is shut down, I can migrate it.
    If I lower the VM's RAM settings and start it up again, I can do a live migration.
    All other VMs can live migrate; the largest RAM size among them is 16 GB.
    Any suggestions?

    Hi Sir,
    Could you please check whether the NUMA settings on every node are the same, using perfmon.exe with the "Hyper-V VM Vid Numa Node" counter, and whether the virtual machine in question has NUMA enabled as well?
    Could you please also post the NUMA settings of that VM for us? A sketch for pulling them is below.
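    A hedged sketch for inspecting the host NUMA topology and the VM's NUMA limits (run on each node; the VM name is a placeholder):
    # Host NUMA topology as Hyper-V sees it
    Get-VMHostNumaNode | Select-Object NodeId, MemoryTotal, MemoryAvailable
    # The VM's per-NUMA-node processor limits
    Get-VMProcessor -VMName 'abc-defghi-vm' |
        Select-Object MaximumCountPerNumaNode, MaximumCountPerNumaSocket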
    Best Regards,
    Elton Ji

  • Server 2012 cluster - virtual machine live migration does not work

    Hi,
    We have a Hyper-V cluster with two nodes running Windows Server 2012. All the configurations are identical.
    When I try a live migration from one node to the other I get an error message saying:
    Live migration of 'Virtual Machine XXXXXX' failed.
    I get no other error messages, not even in Event Viewer. The same happens with all of our virtual machines.
    A normal quick migration works just fine for all of the virtual machines, so network configuration should not be an issue.
    The error message above does not provide much information.

    Hi,
    Please check whether your configuration meets the live migration requirements (answers inline after '->'):
    Two (or more) servers running Hyper-V that:
    Support hardware virtualization. -> Yes, they support virtualization.
    Use processors from the same manufacturer (for example, all AMD or all Intel). -> Both servers are identical, brand-new Fujitsu-Siemens RX300S7 machines with the same processor (Xeon E5-2620).
    Belong to either the same Active Directory domain, or to domains that trust each other. -> Both nodes are in the same domain.
    Virtual machines must be configured to use virtual hard disks or virtual Fibre Channel disks (no physical disks). -> All of the virtual machines have virtual hard disks.
    Use of a private network is recommended for live migration network traffic. -> Have tried this, but it does not help.
    Requirements for live migration in a cluster:
    Windows Failover Clustering is enabled and configured. -> Yes.
    Cluster Shared Volume (CSV) storage in the cluster is enabled. -> Yes.
    Requirements for live migration using shared storage:
    All files that comprise a virtual machine (for example, virtual hard disks, snapshots, and configuration) are stored on an SMB share. -> They are all on the same CSV.
    Permissions on the SMB share have been configured to grant access to the computer accounts of all servers running Hyper-V.
    Requirements for live migration with no shared infrastructure:
    No extra requirements exist.
    Also, please refer to these articles to check whether you have finished all the preparation work for live migration; a PowerShell check of the host-side migration settings is sketched after the links:
    Virtual Machine Live Migration Overview
    http://technet.microsoft.com/en-us/library/hh831435.aspx
    Hyper-V: Using Live Migration with Cluster Shared Volumes in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/dd446679(v=WS.10).aspx
    Configure and Use Live Migration on Non-clustered Virtual Machines
    http://technet.microsoft.com/en-us/library/jj134199.aspx
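    As a supplement, a minimal host-side check with the Windows Server 2012 Hyper-V cmdlets (run on both nodes):
    # Confirm live migration is enabled and how it is configured
    Get-VMHost | Select-Object VirtualMachineMigrationEnabled,
        VirtualMachineMigrationAuthenticationType, MaximumVirtualMachineMigrations
    # List the networks approved for live migration traffic
    Get-VMMigrationNetwork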
    Hope this helps!
    TechNet Subscriber Support
    If you are
    TechNet Subscription user and have any feedback on our support quality, please send your feedback
    here.
    Lawrence
    TechNet Community Support
    I have also read all of the TechNet articles but can't find anything that helps.
