Problems live migrating Windows DomU

Hi,
I have problems when I live migrate a Windows DomU.
I receive the following output:
[root@wihsovs1 ~]# xm migrate -l 15 wihsovs2
Error: Timeout waiting for domain 15 to suspend
I'm able to start the DomU on both servers (xm create). I'm also able to live migrate Linux DomUs without a problem; only the Windows ones don't work.
Is somebody else facing the same problem?
Reg,
Rene

hi Rene,
You're not able to perform live migration of a Windows guest, whereas you are able to migrate Linux guests. If that is your question, then check the following:
1) If you have installed PV drivers on the Windows guest, live migration will not happen.
2) Try to migrate a Windows guest which doesn't have PV drivers installed; that migration will happen.
Hope this resolves your problem.

Similar Messages

  • Live Migration fails with error "Synthetic FibreChannel Port: Failed to finish reserving resources" on a VM using Windows Server 2012 R2 Hyper-V

    Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster using Fibre Channel adapters, with Virtual SAN configured on the Hyper-V hosts.
    I have read several articles about this issue, such as these:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    But haven't been able to fix my issue.
    The Virtual SAN is configured on every Hyper-V host node in the cluster, and every VM has 2 Fibre Channel adapters configured.
    All the World Wide Names are configured both on the FC Switch as well as the FC SAN.
    All the drivers for the FC Adapter in the Hyper-V Hosts have been updated to their latest versions.
    The strange thing is that the issue is not affecting all of the VMs, some of the VMs with FC adapters configured are live migrating just fine, others are getting this error.
    Quick migration works without problems.
    We even tried removing and creating new FC adapters on a VM with problems; we had to configure the switch and SAN with the new WWN names and all, but ended up having the same problem.
    At first we thought it was related to the hosts, but since some VMs do live migrate fine with FC adapters, we tried migrating them on every host and everything worked well.
    My guess is that it has to be something related to the VMs themselves, but I haven't been able to figure out what it is.
    Any ideas on how to solve this is deeply appreciated.
    Thank you!
    Eduardo Rojas

    Hi Eduardo,
    How are things going ?
    Best Regards
    Elton Ji

  • Windows 2008 VM cluster on Hyper-V 2012 Server: FC disk error after live migration of the active VM cluster node

    Deployed a 2-node Windows 2008 R2 SP1 failover cluster on a Hyper-V 2012 Server cluster running on IBM HS23 blades.
    The disk subsystem is IBM Storwize V7000, with the MPIO driver installed plus IBM DDSM.
    The LUNs presented to the VMs are connected with virtual FC adapters, and everything works fine in the cluster until we start a live migration of the VM which holds the cluster disks online.
    After the migration completes successfully, the MPIO of the migrated VM goes crazy, with a lot of errors (source: mpio, EventID 16) and warnings (source: mpio, EventID 17) in the system event log. After that the disks become unavailable.
    Consequently everything hangs until we power off the migrated VM, so the cluster services switch to the second node.
    I tried setting the registry key HKLM\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue to 190 as I found in various articles, but nothing seems to change...
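    For reference, here is the same change as a small PowerShell sketch (assuming, as the articles do, that it is applied inside the guest; the value is in seconds and the guest needs a reboot to pick it up):

        # Sketch: raise the guest's disk I/O timeout to 190 seconds.
        # Run inside the VM as administrator; reboot the guest afterwards.
        Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Disk' `
            -Name 'TimeOutValue' -Value 190 -Type DWord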
    Any idea?
    vannig

    Hello,
    I've just been through the IBM interoperability matrix and came across this statement:
    Hyper-V on x64 based systems is supported with the following guest OS: Windows 2003 32bit, Windows 2003 64bit, Windows 2008 32bit, Windows 2008 64bit.
    Clustering of guest OS is not supported at this time. When using Emulex HBAs with Hyper-V please select the settings mentioned in the Host Attachment section of SVC Info Center
    http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
    Thanks

  • How to migrate Windows Server 2003 to a live 2008 server

    How can I migrate Windows Server 2003 to a live 2008 server without a shutdown? Could anyone explain?
    Thanks & Regards, Amol Dhaygude

    I don't think you can do it without a shutdown. This also depends on the types of services, roles, etc. hosted on the server.
    Refer this : http://technet.microsoft.com/en-us/library/cc755199(v=ws.10).aspx
    Arnav Sharma | http://arnavsharma.net/

  • Hyper-V guest SQL 2012 cluster live migration failure

    I have two IBM HX5 nodes connected to an IBM DS5300. A Hyper-V 2012 cluster was built on the blades, and six virtual machines were created in the HV cluster, connected to the DS5300 via HV Virtual SAN. These VMs form a guest SQL cluster. Database files are placed on
    DS5300 storage and are available through VM Fibre Channel adapters. The IBM MPIO module is installed on all hosts and VMs.
    SQL Server instances work without problems. But when I try to live migrate a SQL VM to another HV node, the SQL instance fails. In the SQL error log I see:
    2013-06-19 10:39:44.07 spid1s      Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.07 spid1s      SQLServerLogMgr::LogWriter: Operating system error 170(The requested resource is in use.) encountered.
    2013-06-19 10:39:44.07 spid1s      Write error during log flush.
    2013-06-19 10:39:44.07 spid55      Error: 9001, Severity: 21, State: 4.
    2013-06-19 10:39:44.07 spid55      The log for database 'Admin' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
    2013-06-19 10:39:44.07 spid55      Database Admin was shutdown due to error 9001 in routine 'XdesRMFull::CommitInternal'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
    2013-06-19 10:39:44.31 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.31 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.31 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.31 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.32 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.32 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.32 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.32 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.33 spid36s     Starting up database 'Admin'.
    2013-06-19 10:39:44.58 spid36s     349 transactions rolled forward in database 'Admin' (6:0). This is an informational message only. No user action is required.
    2013-06-19 10:39:44.58 spid36s     SQLServerLogMgr::FixupLogTail (failure): alignBuf 0x000000001A75D000, writeSize 0x400, filePos 0x156adc00
    2013-06-19 10:39:44.58 spid36s     blankSize 0x3c0000, blkOffset 0x1056e, fileSeqNo 1313, totBytesWritten 0x0
    2013-06-19 10:39:44.58 spid36s     fcb status 0x42, handle 0x0000000000000BC0, size 262144 pages
    2013-06-19 10:39:44.58 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.58 spid36s     SQLServerLogMgr::FixupLogTail: Operating system error 170(The requested resource is in use.) encountered.
    2013-06-19 10:39:44.58 spid36s     Error: 5159, Severity: 24, State: 13.
    2013-06-19 10:39:44.58 spid36s     Operating system error 170(The requested resource is in use.) on file "v:\MSSQL\log\Admin\Log.ldf" during FixupLogTail.
    2013-06-19 10:39:44.58 spid36s     Error: 3414, Severity: 21, State: 1.
    2013-06-19 10:39:44.58 spid36s     An error occurred during recovery, preventing the database 'Admin' (6:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected,
    contact Technical Support.
    In the Windows system log I see a lot of warnings like this:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Microsoft-Windows-Ntfs" Guid="{3FF37A1C-A68D-4D6E-8C9B-F79E8B16C482}" />
        <EventID>140</EventID>
        <Version>0</Version>
        <Level>3</Level>
        <Task>0</Task>
        <Opcode>0</Opcode>
        <Keywords>0x8000000000000008</Keywords>
        <TimeCreated SystemTime="2013-06-19T06:39:44.314400200Z" />
        <EventRecordID>25239</EventRecordID>
        <Correlation />
        <Execution ProcessID="4620" ThreadID="4284" />
        <Channel>System</Channel>
        <Computer>sql-node-5.local.net</Computer>
        <Security UserID="S-1-5-21-796845957-515967899-725345543-17066" />
      </System>
      <EventData>
        <Data Name="VolumeId">\\?\Volume{752f0849-6201-48e9-8821-7db897a10305}</Data>
        <Data Name="DeviceName">\Device\HarddiskVolume70</Data>
        <Data Name="Error">0x80000011</Data>
      </EventData>
    </Event>
    The system failed to flush data to the transaction log. Corruption may occur in VolumeId: \\?\Volume{752f0849-6201-48e9-8821-7db897a10305}, DeviceName: \Device\HarddiskVolume70.
    ({Device Busy}
    The device is currently busy.)
    There aren't any error or warning in HV hosts.

    Hello,
    I am trying to involve someone more familiar with this topic for a further look at this issue. Some delay might be expected while the request is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Fanny Liu

  • Hyper-v Live Migration not completing when using VM with large RAM

    hi,
    I have a two-node Server 2012 R2 Hyper-V cluster which uses a 100GB CSV and 128GB RAM across 2 physical CPUs (approx 7.1GB used when the VM is not booted), and 1 VM running Windows 7 which has 64GB RAM assigned. The VHD size is around 21GB and the BIN file
    is 64GB (by the way, do we have to have that, or can we get rid of the BIN file?).
    NUMA is enabled on both servers. When I attempt to live migrate I get event 1155 in the cluster events; the LM starts and gets to 60-something percent but then fails. The event details are "The pending move for the role 'New Virtual Machine' did not complete."
    However, when I lower the amount of RAM assigned to the VM to around 56GB (56 + 7 = 63GB) the LM works, and any amount of RAM below this allows LM to succeed. But it seems that if the total RAM used on the physical server (including that used for the
    VMs) is 64GB or above, the LM fails... a coincidence, since the server has 64GB per CPU?
    Why would this be?
    many thanks
    Steve

    Hi,
    I turned NUMA spanning off on both servers in the cluster. I assigned 62GB, 64GB and 88GB, and each time the VM started up with no problems. With 62GB the LM completed, but I can't get LM to complete with 64GB+.
    My server is an HP DL380 G8 with the latest BIOS (I just updated it today as it was a couple of months behind). I can't see any settings in the BIOS relating to NUMA, so I'm guessing it is enabled and can't be changed.
    If I run the cmdlet as admin I get ProcessorsAvailability : {0, 0, 0, 0...}; if I run it as a standard user I just get ProcessorsAvailability.
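    For reference, a sketch of those checks with the Hyper-V PowerShell module (the spanning change only takes effect after the Virtual Machine Management service restarts):

        # Sketch: inspect the host NUMA topology, then disable NUMA spanning.
        Get-VMHostNumaNode                      # per-node processor/memory availability
        Set-VMHost -NumaSpanningEnabled $false  # same as unticking NUMA spanning in the GUI
        Restart-Service vmms                    # Hyper-V Virtual Machine Management service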
    My memory and CPU config are as follows; hyper-threading is enabled for the CPU, but I don't think that would make a difference?
    DIMM inventory (processor, slot, status, in use, part number, type, size, speed, voltage, ranks, technology):
    Processor 1, slots 1, 4, 9, 12: Good, In Use, Yes, 713756-081, DIMM DDR3, 16384 MB, 1600 MHz, 1.35 V, 2, Synchronous
    Processor 2, slots 1, 4, 9, 12: Good, In Use, Yes, 713756-081, DIMM DDR3, 16384 MB, 1600 MHz, 1.35 V, 2, Synchronous
    Processor 1: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz; status OK; 2400 MHz; 12/12 cores, 24 threads; 64-bit capable; L1 cache 384 KB, L2 cache 3072 KB, L3 cache 30720 KB
    Processor 2: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz; status OK; 2400 MHz; 12/12 cores, 24 threads; 64-bit capable; L1 cache 384 KB, L2 cache 3072 KB, L3 cache 30720 KB
    thanks
    Steve

  • Failed live migration from Hyper-V 2012 to a Hyper-V 2012 R2 cluster

    I'm at a loss on something and I was wondering if you could point me in the right direction. I tried to migrate a server from one of my 2012 hosts to a brand new, verified Hyper-V 2012 R2 cluster. I had already successfully migrated 5 others
    from there, but this one SQL 2012 guest gave me a problem.
    When I tried to live migrate it, it failed; it said the hardware on the destination computer was not compatible. I did some research, and one blog said that sometimes you migrate to different hardware and it fails, like processors. The guy
    on the blog said to check the box under Processor\Compatibility to put the virtual processor in compatibility mode. I did that and tried again, but it failed again. The blog also said something about resource groups. The VM was "locked" at that
    point.
    I got frustrated, so I just turned off the guest, copied the disks, and rebuilt it on Aag with the same memory and processor settings. I thought I was fine, but I looked at it the next day, and in Failover Cluster Manager it shows the machine is "off".
    BUT the VM is actually running, because the server is working. It's a database server: the sites using it are up, and I can RDC to it. So I'm afraid to touch it.
    And these are the errors from the event log
    'Virtual Machine Configuration DEVDB1' failed to register the virtual machine with the virtual machine management service.
    The Virtual Machine Management Service failed to register the configuration for the virtual machine '62AAD2B5-8E03-4B59-84E0-D52CBF36934B' at 'C:\ClusterStorage\Volume2\Devdb1\DEVDB1': The
    system cannot find the file specified. (0x80070002). If the virtual machine is managed by a failover cluster, ensure that the file is located at a path that is accessible to other nodes of the cluster.
    Cluster resource 'Virtual Machine Configuration DEVDB1' of type 'Virtual Machine Configuration' in clustered role 'DEVDB1' failed. The error code was '0x2' ('The system cannot find the file
    specified.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart
    it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    I think I moved it while it was in a funky state. I also think I was supposed to "export" the VM first. I guess I'm a newbie, but I'm not sure what to do to fix it. Any advice is greatly appreciated.

    I think I found at least part of the problem, but I'm not sure. My new VM's configuration files and directory are named:
    D79490DB-48F9-40A4-9540-53F2532D3F7F
    D79490DB-48F9-40A4-9540-53F2532D3F7F.xml
    Not 62AAD2B5-8E03-4B59-84E0-D52CBF36934B
    Still not sure what to do about that though.
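    One way to confirm the mismatch, as a sketch (the VM and resource names are taken from the errors above):

        # Sketch: compare the VM's actual ID with what the cluster resource points at.
        Get-VM -Name DEVDB1 | Select-Object Name, VMId
        Get-ClusterResource "Virtual Machine Configuration DEVDB1" | Get-ClusterParameter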

  • Failover Cluster 2008 R2 - VM loses connectivity after live migration

    Hello,
    I have a failover cluster with 3 server nodes running. I have 2 VMs running on one of the hosts without problems, but when I do a live migration of a VM to another host, the VM loses network connectivity. For example, if I leave a ping running, the ping command
    gets 2 responses, then loses 3 packets, then gets 1 response again, then loses 4 packets again, and so on... If I live migrate the VM back to the original host, everything is OK again.
    The behavior is the same for both VMs, but I did a test with a new VM, and with that new VM everything works fine; I can live migrate it to every host.
    Any advice?
    Cristian L Ruiz

    Hi Cristian Ruiz,
    What are your current host NIC settings? From your description it seems you are using an incorrect network NIC design. If you are using iSCSI storage, it needs a dedicated network in the cluster.
    If your NIC teaming is configured as switch independent + dynamic, please try disabling VMQ in the VM settings to narrow down the issue.
    More information:
    VMQ Deep Dive, 1 of 3
    http://blogs.technet.com/b/networking/archive/2013/09/10/vmq-deep-dive-1-of-3.aspx
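    As a sketch of that suggestion (these cmdlets exist on Server 2012 and later; on 2008 R2 the equivalent toggle lives in the NIC driver's advanced properties, and the adapter/VM names here are placeholders):

        # Sketch: disable VMQ per physical NIC or per VM network adapter.
        Get-NetAdapterVmq                                 # current VMQ state per NIC
        Disable-NetAdapterVmq -Name "LAN-Team-NIC1"       # placeholder NIC name
        Set-VMNetworkAdapter -VMName "MyVM" -VmqWeight 0  # weight 0 turns VMQ off for this vNIC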
    Hi!
    Thank you for your reply!
    Yes, we are using iSCSI storage, but it has its own NICs (2 independent NICs just to connect the server with the storage), and they are configured so that those NICs are not used for cluster communication. The team configuration is just for LAN connectivity.
    The NIC teaming is configured using the BACS4 software from a Dell server, in Smart Load Balancing and Failover mode (as you can see here:
    http://www.micronova.com.ar/cap01.jpg). The link you passed is for Windows Server 2012 and we are running Windows Server 2008 R2, but as you can see in the following capture the NICs have that feature disabled
    ( http://www.micronova.com.ar/cap02.jpg ).
    One test that I'm thinking of doing is to remove the teaming configuration and test with just one independent NIC for the LAN connection. But I don't know if you would suggest another option.
    Thanks in advance.
    Cristian L Ruiz
    Sorry, another option I'm considering is to update the driver version. But the server is in production and I need to schedule a downtime window to test that.
    Cristian L Ruiz

  • vHBA guest VM cannot live migrate

    Hi All,
    I'm using Windows Server 2012 Datacenter, configured with the Hyper-V server role and failover clustering (a 4-node cluster).
    Each physical server has two QLogic HBA adapters installed.
    We have the following configuration on the Hyper-V servers and the HP SAN storage:
    Enabled the Virtual SAN Manager.
    Created a virtual Fibre Channel switch on each Hyper-V host. Each host has two adapters configured (Fibre Channel SAN 1 / Fibre Channel SAN 2).
    Using the HP storage with N_Port ID Virtualization (NPIV) enabled at the switch port level.
    Created a new VM and assigned two new vHBA adapters, each adapter having two sets of WWNs.
    Each adapter's Set A WWN address was discovered by the SAN switch and the zone was configured. However, Set B was not discovered, so we manually created the zone for Set B for both adapters.
    Presented the storage to the VM and installed MPIO. The SAN disk is visible in the VM's disk management.
    When live migrating the VM to a different host I get the error below.
    Live migration of 'Virtual Machine TEST-Server' failed.
    Virtual machine migration operation for 'TEST-Server' failed at migration destination 'HV02'. (Virtual machine ID 378AE3E3-5A4F-4AE7-92F8-D47855C2D1C5)
    The virtual machine 'TEST-Server' is not compatible with physical computer 'HV02'. (Virtual machine ID 378AE3E3-5A4F-4AE7-92F8-D47855C2D1C5)
    Could not find a virtual fibre channel resource pool with ID 'New Fibre Channel SAN1'
    Could not find a virtual fibre channel resource pool with ID 'New Fibre Channel SAN2'
    Event ID : 21502
    I need some expert support on this issue. I'm trying to configure a guest cluster with vHBAs; quick migration works fine, but live migration keeps failing, and I want to resolve that.
    Aucsna

    Hi Aucsna,
    "it's issue with the Virtual SAN Switch label name."
    Has the problem been solved ?
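    For anyone hitting the same error: the resource pool IDs in the message ('New Fibre Channel SAN1' / 'New Fibre Channel SAN2') are the virtual SAN names, and live migration needs identically named virtual SANs on every node. A minimal check, as a sketch (HV01 is a placeholder; HV02 appears in the error):

        # Sketch: list the virtual SAN names on each node; they must match exactly.
        Invoke-Command -ComputerName HV01, HV02 -ScriptBlock {
            Get-VMSan | Select-Object Name
        }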
    Best Regards
    Elton Ji

  • Live migration failed using virtual HBAs and guest clustering

    Hi,
    We have a guest cluster configuration on top of a Hyper-V cluster. We are using Windows 2012 and Fibre Channel shared storage.
    The problem is regarding live migration. Sometimes when we move a virtual machine from node A to node B everything goes well, but when we try to move it back to node A, live migration fails. What we can see is that when we move the VM from node A to B and live
    migration completes successfully, the virtual ports remain active on node A, so when we try to move back from B to A, live migration fails because the virtual ports are already there.
    This doesn't happen every time.
    We have checked the zoning between Host Cluster Hyper-V and the SAN, the mapping between physical HBA's and the vSAN's on the Hyper-V and everything is ok.
    Our doubt is: what is the best practice for zoning the vHBAs on the VMs and our fabric? We set up our zoning using one alias for vHBA 1 with both WWNs (A and B) in the same object, and one alias for vHBA 2 with the corresponding WWNs (A and B). Is it
    better to create one alias for vHBA 1 -> A (with WWN A) and another alias for vHBA 1 -> B (with WWN B)?
    The guest cluster VMs have 98GB of RAM each. Could it be a timeout issue when live migration happens and the virtual ports remain active on the source node? When everything goes well, the VM moves from node A with vHBA WWN A to node B and stays there
    with vHBA WWN B. On the source node the virtual ports should be removed automatically when the live migration completes. And that is the issue... sometimes the virtual ports (WWN A) stay active on the source node, and when we try to move the VM back, live migration
    fails.
    I hope you can understand the issue.
    Regards,
    Carlos Monteiro.

    Hi,
    Hope the following link may help.
    To support live migration of virtual machines across Hyper-V hosts while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B
    WWN addresses during a live migration. This ensures that all LUNs are available on the destination host before the migration and that no downtime occurs during the migration.
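    As a sketch, both WWN sets for a VM's virtual HBAs can be listed so that Set A and Set B are both zoned on the fabric (the VM name is a placeholder):

        # Sketch: show the Set A / Set B WWPNs of each virtual Fibre Channel adapter.
        Get-VMFibreChannelHba -VMName "TEST-Server" |
            Select-Object SanName, WorldWidePortNameSetA, WorldWidePortNameSetB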
    Hyper-V Virtual Fibre Channel Overview
    http://technet.microsoft.com/en-us/library/hh831413.aspx
    More information:
    Hyper-V Virtual Fibre Channel Troubleshooting Guide
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    Hyper-V Virtual Fibre Channel Design Guide
    http://blogs.technet.com/b/privatecloud/archive/2013/07/23/hyper-v-virtual-fibre-channel-design-guide.aspx
    Hyper-V virtual SAN
    http://salworx.blogspot.co.uk/
    Thanks.

  • Live Migration : virtual Fibre Channel vSAN

    I can do live migration from one node to another, with no errors. The problem/question that I have is: is live migration really live migration?
    When I do a live migration from the cluster or from SCVMM, it saves and starts the virtual machine, which for me is not live migration.
    I have described it in more detail here: http://social.technet.microsoft.com/Forums/en-US/a52ac102-4ea3-491c-a8c5-4cf4dd14768d/synthetic-fibre-channel-hba-live-migration-savestopstart?forum=winserverhyperv
    BlatniS

    Virtual Fibre Channel made sense in pre-R2 times, when there was no shared VHDX and you had to somehow provide fault-tolerant shared storage to a guest VM cluster (spawning an iSCSI target on top of FC was slow and ugly). Now there's no point in putting one into
    production, so if you have issues just use shared VHDX. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Shared VHDX
    http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
    Shared VHDX is much more flexible and has better performance. 
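    As a minimal sketch of that approach (VM names and the path are placeholders; the VHDX must sit on a CSV or an SMB 3.0 share, and the hosts must run 2012 R2):

        # Sketch: attach one VHDX to both guest-cluster nodes with sharing enabled.
        Add-VMHardDiskDrive -VMName "NODE1" -Path "C:\ClusterStorage\Volume1\shared.vhdx" -SupportPersistentReservations
        Add-VMHardDiskDrive -VMName "NODE2" -Path "C:\ClusterStorage\Volume1\shared.vhdx" -SupportPersistentReservations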
    Good luck!

  • Shared nothing live migration over SMB. Poor performance

    Hi,
    I'm experiencing really poor performance when migrating VMs between newly installed Server 2012 R2 Hyper-V hosts.
    Hardware:
    Dell M620 blades
    256GB RAM
    2*8C Intel E5-2680 CPUs
    Samsung 840 Pro 512GB SSD running in RAID 1
    6* Intel X520 10G NICs connected to Force10 MXL enclosure switches
    The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9.
    The OS installation is pretty clean: Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6.
    I have removed the NIC teams and vmSwitch/vNICs to simplify the problem-solving process. Now there is one NIC configured with one IP. RSS is enabled, no VMQ.
    The graphs are from 4 tests.
    Test 1 and 2 are nttcp-tests to establish that the network is working as expected.
    Test 3 is a shared nothing live migration of a live VM over SMB
    Test 4 is a storage migration of the same VM when shut down. The VMs is transferred using BITS over http.
    It's obvious that the network and NICs can push a lot of data: test 2 had a throughput of 1130MB/s (9Gb/s) using 4 threads, and the disks can handle a lot more than 74MB/s, as proven by test 4.
    While the graphs don't show the CPU load, I have verified that no CPU core was even close to running at 100% during tests 3 and 4.
    Any ideas?
    Test                          | Config                                                  | Vmswitch | RSS | VMQ | Live Migration Config                | Throughput (MB/s)
    NTtcp                         | NTttcp.exe -r -m 1,*,172.17.20.45 -t 30                 | No       | Yes | No  | N/A                                  | 500
    NTtcp                         | NTttcp.exe -r -m 4,*,172.17.20.45 -t 30                 | No       | Yes | No  | N/A                                  | 1130
    Shared nothing live migration | Online VM, 8GB disk, 2GB RAM, migrated from host 1 -> 2 | No       | Yes | No  | Kerberos, use SMB, any available net | 74
    Storage migration             | Offline VM, 8GB disk, migrated from host 1 -> 2         | No       | Yes | No  | Unencrypted BITS transfer            | 350

    Hi Per Kjellkvist,
    Please try changing the "advanced features" settings of "live migrations" in Hyper-V settings: select "Compression" in the "performance options" area.
    Then repeat tests 3 and 4.
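    The same setting is available from PowerShell, as a sketch (run on both hosts):

        # Sketch: switch the live migration performance option to Compression.
        Set-VMHost -VirtualMachineMigrationPerformanceOption Compression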
    Best Regards
    Elton Ji

  • Virtual machines with large RAM fail live migration

    Hi everyone....
    I have a 2-node Hyper-V cluster managed by SCVMM 2012 R2. I am currently unable to migrate a VM that is using 48 GB of RAM. Each node has 256 GB of RAM and runs Windows Server 2012 R2.
    When the VM is running on node 1, there is 154 GB in use and 102 GB available. When I try to migrate this VM to node 2 (which has 5.6 GB in use and 250 GB available), I get this error message in VMM:
    Error (10698)
    The virtual machine (abc-defghi-vm) could not be live migrated to the virtual machine host (xyz-wnbc-nd03) using this cluster configuration.
    Recommended Action
    Check the cluster configuration and then try the operation again.
    (In case you were wondering, I ran the cluster validation and it passed without a problem.)
    The Failover Cluster event log shows two key entries:
    First:
    Cluster resource 'SCVMM xyz-wnbc-vm' in clustered role 'SCVMM xyz-wnbc-vm Resources' has transitioned from state OfflineCallIssued to state OfflinePending. 
    Exactly 60 seconds later, this message takes place:
    Cluster resource 'SCVMM abc-defghi-vm in clustered role 'SCVMM abc-defghi-vm Resources' rejected a move request to node 'xyz-wnbc-nd03'. The error code was '0x340031'.  Cluster resource 'SCVMM abc-defghi-vm' may be busy or in a state where it
    cannot be moved.  The cluster service may automatically retry the move.
    Nothing found after Googling "0x340031".  Does anyone know what error that is?
    Other notes:
    If the Virtual machine is shut down I can migrate it.
    If I lower the VM RAM settings and start it up again I can do a Live Migration.
    All other VMs can do the Live Migration; largest RAM size is 16GB.
    Any suggestions?

    Hi Sir,
    Could you please check whether the NUMA settings for every node are the same, using perfmon.exe and the counter set "Hyper-V VM Vid Numa Node":
    >> and the virtual machine in question has it enabled as well.
    Could you please also post the NUMA settings of that VM for us?
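    That counter set can also be dumped without the perfmon GUI, as a sketch (the set name is taken from the reply above):

        # Sketch: enumerate the NUMA-related Hyper-V counters on a node.
        Get-Counter -ListSet "Hyper-V VM Vid Numa Node" | Select-Object -ExpandProperty Counter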
    Best Regards,
    Elton Ji

  • Live Migration with Different CPU versions on the hosts, win 2012R2 Datacenter

    Hello
    This question has been asked in different forums, but when I read the threads I feel that I get mixed answers.
    Most answers date from 2012 (Win 2008 R2); I don't know if they are still correct for Win 2012 R2.
    So now I ask the question myself and hope to get a clear answer :)
    We are in the process of installing a new Hyper-V cluster using Win srv 2012 R2 Datacenter as OS.
    I'm planning to re-use some of the "old" servers from our current Hyper-V 2008 R2 cluster, removing them from the cluster and doing a clean installation of 2012 R2 Datacenter.
    But I will need to buy two new servers to manage this (with a newer version of CPU, same brand (AMD)).
    Old server: AMD Opteron(tm) Processor 6172 (12 Cores)
    New server:
    AMD Opteron™ 6344 (12-core)
    Now my question:
    Will live migration work between these servers in my new cluster without any special settings in Hyper-V or in the VM, or what do I need to do to get this to work?
    /Anders

    Hi,
    It is important that all the hardware supporting Windows Server 2012 failover clusters be certified to work with Windows Server 2012.
    In a cluster where all the nodes are exactly the same, migration is fairly straightforward: there are no concerns about differences in hardware, and especially no concerns about different capabilities of the CPUs. Because your old and new hosts use different AMD Opteron generations, you will likely need to enable Processor Compatibility Mode on the VMs for live migration to work between them.
    More information:
    When to Use Processor Compatibility Mode to Migrate Virtual Machines
    http://technet.microsoft.com/en-us/magazine/gg299590.aspx
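    The per-VM toggle, as a sketch (the VM must be shut down when the setting is changed; the VM name is a placeholder):

        # Sketch: enable processor compatibility mode so the VM can live migrate
        # between different CPU generations of the same vendor.
        Set-VMProcessor -VMName "MyVM" -CompatibilityForMigrationEnabled $true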
    Hope this helps.

  • Server 2012 r2 live migration fails with hardware error

    Hello all, we just upgraded one of our Hyper-V hosts from Server 2012 to Server 2012 R2; previously we had live replication set up between it and another box on the network which was also running Server 2012. After installing Server 2012 R2, when a live migration
    is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both Dell; currently we have a PowerEdge R910 running Server 2012 and a PowerEdge R900 running Server 2012 R2. The section under Processor for "migrate to a physical computer using a different processor" is already checked,
    and this same VM was successfully being live replicated before the upgrade to Server 2012 R2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full re-install, wiping out the installation of Server 2012 and installing Server 2012 R2; this was not an in-place
    upgrade.

    The only cause I've seen so far is virtual switches being named differently. I do remember that one of our VMs didn't move, but we simply bypassed this problem using a one-time backup (VeeamZIP, more specifically).
    If it's a one-time operation you can use the same procedure for the VMs in question -> back them up and restore them at the new server.
    Kind regards, Leonardo.
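    A quick way to compare the virtual switch names on the two hosts, as a sketch (host names are placeholders):

        # Sketch: virtual switch names must match on source and destination hosts.
        Invoke-Command -ComputerName R910-HOST, R900-HOST -ScriptBlock {
            Get-VMSwitch | Select-Object Name, SwitchType
        }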
