Win 2012 shared-nothing live migration slow performance

Hello,
I have two standalone Hyper-V servers: A) 2012 and B) 2012 R2.
I tried a live migration on non-shared storage between them. The functionality is fine, but performance is poor: the copy takes a long time and tops out at about 200 Mbps.
I am not able to find the bottleneck. Network and disk performance both seem good. It is a 1 Gbps network, and when I tried a simple copy/paste of the VHD file from A to B over the CIFS protocol, the speed was close to the full 1 Gbps - nice and fast.
Is this by design? I am not able to reach full network performance with shared-nothing live migration.
Thank you for any reply.
Sincerely,
Peter Weiss

Hi,
I haven't found a similar issue reported for Hyper-V. Do both of your hosts have chipsets from the same family? Could you try switching between the three Live Migration performance options and then monitor again (see the sketch after the links below)? Alternatively, there may be a disk or file-system performance issue; please try updating your RAID card firmware to the latest version.
More information:
Shared Nothing Live Migration: Goodbye Shared Storage?
http://blogs.technet.com/b/canitpro/archive/2013/01/03/shared-nothing-live-migration-goodbye-shared-storage.aspx
How to: Copy very large files across a slow or unreliable network
http://blogs.msdn.com/b/granth/archive/2010/05/10/how-to-copy-very-large-files-across-a-slow-or-unreliable-network.aspx
The similar thread:
How to determine when to use the xcopy with the /j parameter?
http://social.technet.microsoft.com/Forums/en-US/5ebfc25a-41c8-4d82-a2a6-d0f15d298e90/how-to-determine-when-to-use-the-xcopy-with-the-j-parameter?forum=winserverfiles
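For reference, on the 2012 R2 host the performance option can be inspected and switched from PowerShell; a minimal sketch (TCPIP, Compression, and SMB are the three built-in option values):

    # Show the current live migration settings on this host
    Get-VMHost | Format-List VirtualMachineMigration*

    # Try each of the three performance options in turn, re-testing after each
    Set-VMHost -VirtualMachineMigrationPerformanceOption TCPIP
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

Note that the performance option was introduced in 2012 R2, so the 2012 (non-R2) host does not expose it.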
Hope this helps

Similar Messages

  • Shared nothing live migration over SMB. Poor performance

    Hi,
    I'm experiencing really poor performance when migrating VMs between newly installed Windows Server 2012 R2 Hyper-V hosts.
    Hardware:
    Dell M620 blades
    256 GB RAM
    2x 8-core Intel E5-2680 CPUs
    Samsung 840 Pro 512 GB SSDs running in RAID 1
    6x Intel X520 10 GbE NICs connected to Force10 MXL enclosure switches
    The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9.
    The OS installation is pretty clean: Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6.
    I have removed the NIC teams and vmSwitch/vNICs to simplify troubleshooting. There is now one NIC configured with one IP. RSS is enabled, no VMQ.
    The graphs are from 4 tests.
    Tests 1 and 2 are NTttcp tests to establish that the network is working as expected.
    Test 3 is a shared-nothing live migration of a running VM over SMB.
    Test 4 is a storage migration of the same VM while shut down; the VM is transferred using BITS over HTTP.
    It's obvious that the network and NICs can push a lot of data: test 2 had a throughput of 1130 MB/s (~9 Gb/s) using 4 threads. The disks can handle far more than 74 MB/s, as proven by test 4.
    While the graph above doesn't show the CPU load, I have verified that no CPU core came close to 100% during tests 3 and 4.
    Any ideas?
    Test                          | Config                                           | Vmswitch | RSS | VMQ | Live Migration Config                | Throughput (MB/s)
    NTttcp                        | NTttcp.exe -r -m 1,*,172.17.20.45 -t 30          | No       | Yes | No  | N/A                                  | 500
    NTttcp                        | NTttcp.exe -r -m 4,*,172.17.20.45 -t 30          | No       | Yes | No  | N/A                                  | 1130
    Shared nothing live migration | Online VM, 8 GB disk, 2 GB RAM, host 1 -> host 2 | No       | Yes | No  | Kerberos, Use SMB, any available net | 74
    Storage migration             | Offline VM, 8 GB disk, host 1 -> host 2          | No       | Yes | No  | Unencrypted BITS transfer            | 350
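
    As a quick sanity check, the RSS/VMQ state claimed in the table can be confirmed on each host; a minimal sketch (adapter names will differ per system):

        # RSS should be enabled and VMQ disabled on the NIC carrying the migration
        Get-NetAdapterRss | Format-Table Name, Enabled
        Get-NetAdapterVmq | Format-Table Name, Enabled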

    Hi Per Kjellkvist,
    Please try changing the "advanced features" settings of "Live Migrations" in Hyper-V Settings and select "Compression" in the "Performance options" area.
    Then re-run tests 3 and 4.
    Best Regards
    Elton Ji

  • Slow migration rates for shared-nothing live migration over teaming NICs

    I'm trying to increase the migration/data transfer rates for shared-nothing live migrations (i.e., especially the storage migration part of the live migration) between two Hyper-V hosts. Both of these hosts have a dedicated teaming interface (switch-independent, dynamic) with two 1 Gbit/s NICs which is used only for management and transfers. Both NICs on both hosts have RSS enabled (and configured), the teaming interface also shows RSS enabled, and so does the corresponding output from Get-SmbMultichannelConnection.
    I'm currently unable to see data transfers of the physical volume of more than around 600-700 Mbit/s, even though the team can saturate both interfaces with data rates close to the 2 Gbit/s boundary when transferring simple files over SMB. The storage migration does seem to use multichannel SMB, as I can see several connections all transferring data on the remote end.
    As I'm not seeing any form of resource saturation (neither the NIC/team is full, nor is a CPU, nor is the storage adapter on either end), I'm slightly stumped that live migration seems to have a built-in limit of around 700 Mbit/s, even over a (pretty much) dedicated interface which can handle more traffic when transferring simple files. Is this a known limitation with regard to teaming and shared-nothing live migrations?
    Thanks for any insights and for any hints where to look further!

    Compression is not configured on the live migrations (but rather it's set to SMB), but as far as I understand, for the storage migration part of the shared-nothing live migration this is not relevant anyway.
    Yes, all NICs and drivers are at their latest version, and RSS is configured (as also stated by the corresponding output from Get-SmbMultichannelConnection, which recognizes RSS on both ends of the connection), and for all NICs bound to the team, Jumbo Frames
    (9k) have been enabled and the team is also identified with 9k support (as shown by Get-NetIPInterface).
    As the interface is dedicated to migrations and management only (i.e., the corresponding Team is not bound to a Hyper-V Switch, but rather is just a "normal" Team with IP configuration), Hyper-V port does not make a difference here, as there are
    no VMs to bind to interfaces on the outbound NIC but just traffic from the Hyper-V base system.
    Finally, there are no bandwidth weights and/or QoS rules for the migration traffic bound to the corresponding interface(s).
    As I'm able to transfer close to 2 Gbit/s of SMB traffic over the interface (using just a plain file copy), I'm wondering why the SMB(?) transfer of the disk volume during shared-nothing live migration is seemingly limited to somewhere around 700 Mbit/s on the team. Looking at the TCP connections on the remote host, it does seem to use multichannel SMB to copy the disk, but I might be mistaken on that.
    Are there any further hints or is there any further information I might offer to diagnose this? I'm currently pretty much stumped on where to go on looking.
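
    Two checks that might narrow this down; a minimal sketch, assuming the SMB Bandwidth Limit feature is installed for the second cmdlet (it is unavailable otherwise):

        # Confirm multichannel is in use and RSS is recognized on both ends
        Get-SmbMultichannelConnection

        # Rule out a configured SMB bandwidth cap on live migration traffic
        Get-SmbBandwidthLimit -Category LiveMigration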

  • Hyper-V replica vs Shared Nothing Live Migration

      Shared-nothing live migration lets you transport your VM over the WAN without shutting it down (how long it takes on an I/O-intensive VM is another story).
      Hyper-V Replica does not allow you to perform the DR switch without a shutdown of the VM at the primary site!
      Why can't it take the VM to the DR site live?
      And if we use shared-nothing live migration across the WAN instead, we don't benefit from the data Hyper-V Replica has already copied, and it also breaks everything Hyper-V Replica has set up.
      The point is: how do we take the VM to DR in a running state? What is the best way to do that?
    Shahid Roofi

    Hi Shahid,
    Hyper-V Replica is designed as a DR technology, not as a technique to move VMs. It assumes that, should you require it, the source VM would probably be offline, and that you would therefore be powering up the passive copy from a previous point in time, as it is not a true synchronous replica. It does give you the added benefit of a planned failover which, as you say, powers off the VM first, runs a final sync, then powers the new VM up. Obviously you can't have the duplicate copy of this VM running all the time at the remote site; otherwise you would have a split-brain situation for network traffic.
    Like live migration, shared-nothing live migration is a technology aimed at moving a VM, but as you know it can do so without shared storage, requiring only a network connection. When initiated, it moves the whole VM: it copies the virtual drive and memory, sends machine writes to both copies, and cuts over to the new VM only once they match. With regard to speed, I assume you have SNLM set up to compress data before sending it across the wire?
    If you want a true live migration between remote sites, one way would be to have a SAN array synchronously replicating data between both sites and then stretch the Hyper-V cluster across them. Obviously this is a very expensive solution, but perhaps the perfect scenario.
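    If downtime at failover is the main concern, a planned failover with Hyper-V Replica keeps it down to the final sync. A minimal sketch of the cmdlet sequence, assuming replication is already configured for a VM named VM01 (the name is a placeholder):

        # On the primary host: stop the VM and prepare the planned failover
        Stop-VM -Name VM01
        Start-VMFailover -VMName VM01 -Prepare

        # On the replica host: fail over, commit, and start the VM
        Start-VMFailover -VMName VM01
        Complete-VMFailover -VMName VM01
        Start-VM -Name VM01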
    Kind Regards
    Michael Coutanche
    Blog:   
    Twitter:   LinkedIn:
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

  • Shared nothing live migration wrong network

    Hey,
    I have a problem when doing shared nothing live migration between clusters using SCVMM 2012 R2.
    The transfer itself goes OK, but it chooses to use my team of 2x 1 Gb NICs (routable) instead of the team with 2x 10 Gb (non-routable, but open between all hosts).
    I have set VMMigrationSubnet to the correct network, checked the live migration settings, and verified that port 6600 is listening on the correct IP.
    But still it chooses the 1 Gb network.
    Anyone got any idea what I can do next?
    //Johan Runesson

    Do you have only your live migration network defined as such on the clusters or do you have both defined?  What are the IP networks on the live migration networks on each cluster?
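    Separately, it may be worth checking which networks the Hyper-V hosts themselves will accept for incoming migrations; a hedged sketch (the subnets are placeholders):

        # List the networks this host currently accepts live migrations on
        Get-VMMigrationNetwork

        # Prefer the 10 GbE subnet and drop the routable 1 GbE one
        Add-VMMigrationNetwork 10.0.10.0/24
        Remove-VMMigrationNetwork 192.168.1.0/24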
    .:|:.:|:. tim

  • Hyper-V 2012 R2 VMQ live migrate (shared nothing) Blue Screen

    Hello,
    I have a Windows Server 2012 R2 Hyper-V server, fully patched, freshly installed. There are two Intel network cards, and a NIC team (Windows NIC teaming) is configured from them. There are also some virtual NICs with assigned VLAN IDs.
    If I enable VMQ on these NICs and do a shared-nothing live migration of a VM, the host gets a BSOD. What can be wrong?
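    A common first step while the NIC driver is being ruled out is to disable VMQ on the physical team members and retry the migration; a minimal sketch (adapter names are placeholders):

        # Check which adapters currently have VMQ enabled
        Get-NetAdapterVmq | Format-Table Name, Enabled

        # Disable VMQ on the team members, then retry the shared-nothing migration
        Disable-NetAdapterVmq -Name "NIC1", "NIC2"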

    Hi,
    I would like to check if you need further assistance.
    Thanks.

  • Live Migration Slow - Maxes Out

    Hi everyone...
    I have a private 10 Gb network that all of my Hyper-V hosts are connected to. I have told VMM to use this network for live migrations.
    Using Performance Monitor, I can see that the migration is using this private network. Unfortunately, it will only transfer at 100 MB per second.
    There must be a parameter somewhere to say "use all the bandwidth you need"...
    It is worth mentioning that I also back up the virtual machines over this same network to our backup server, and that transfer rate approaches the full 10 Gb per second.
    Any suggestions?
    Thanks!

    Is this network dedicated for Live Migration?
    Is this a converged setup where you have a physical team and a VM switch created upon that team?
    If that is the case, then VMQ is enabled and RSS is disabled, which doesn't let you leverage all the bandwidth for live migration.
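    To see where the ceiling sits, the NIC counters can be sampled while a migration runs; a minimal sketch:

        # Sample total NIC throughput once per second during the live migration
        Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' -SampleInterval 1 -MaxSamples 30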
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Service Manager 2012 SP1 consoles hanging or slow performance issue in Virtual Environment

    Hi,
    We are facing an SCSM SP1 console performance issue: the console uses a lot of CPU and is painfully slow.
    For information, our SCSM runs in a virtual environment on Hyper-V.
    When running the console over an RDP session to a Hyper-V virtual machine, we have to be careful not to maximize the console so that it remains fast. If we maximize it on the VM, the console becomes so slow as to be unusable.
    Can someone share his experience, please?
    Regards, Syed Fahad Ali

    Hi Syed,
    This is a bug, and hopefully the Microsoft team will solve it soon. You can vote for this bug here:
    https://connect.microsoft.com/WindowsServer/feedback/details/810667/scsm-console-consumes-a-lot-of-cpu-when-opened-maximized-on-work-item-view-like-all-incidents
    Mohamed Fawzi | http://fawzi.wordpress.com

  • RDS 2012 re-connection after live migration.

    Is there a way to speed up the reconnection after a live migration?
    If I am working in a VM that live migrates, it feels like it hangs for about 10 seconds, then reconnects and is fine. While this is OK, it's not ideal. Is there a way to improve this?

    Actually 10 seconds sounds like a very long time to me. In my experience using Shared Nothing Live Migration I've seen the switch being almost instantaneous, with a continual ping possibly dropping one or two packets, and certainly quick enough that it's
    unlikely any users would notice the change. So in terms of whether it can be improved I'd say yes.
    As you can see from the technical overview here
    http://technet.microsoft.com/en-us/library/hh831435.aspx the final step is for a signal to be sent to the switch informing it of the new MAC address of the server's new destination, so I wonder if the slow switchover might be connected to that, or perhaps
    some other network issue.
    Is the network connection poor between the servers which might cause a delay during the final sync of changes between the server copies? Are you moving between subnets?
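
    To quantify the blackout, a timestamped ping loop against the VM during the migration gives a rough measure; a minimal sketch (the VM name is a placeholder):

        # Log a timestamp for every dropped ping while the VM migrates
        while ($true) {
            if (-not (Test-Connection -ComputerName VM01 -Count 1 -Quiet)) {
                "{0:HH:mm:ss.fff} dropped" -f (Get-Date)
            }
            Start-Sleep -Milliseconds 200
        }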

  • Hyper-V Lab and Live Migration

    Hi Guys,
    I have two Hyper-V hosts that I am setting up in a lab environment. Initially, I successfully set up a 2-node cluster with CSVs, which allowed me to do live migrations.
    The problem is that my shared storage is a bit of a cheat: I have one disk assigned in each host, and each host runs StarWind Virtual SAN. Host A has an iSCSI connection to host B's storage and vice versa.
    The issue this causes is that when the hosts shut down (because this is a lab, it's only on when required), the cluster is in a mess when it starts up, e.g. VMs missing, etc. I can recover from it, but it takes time. I tinkered with the HA settings and the VM settings so they restarted / didn't restart, etc., but with no success.
    My question is: can I use something like SMB3 shared storage on one of the hosts to perform live migrations, but without a full-on cluster? I know I can do shared-nothing live migrations, but they take time.
    Any ideas on a better solution (other than actually buying proper shared storage ;-) )? Or, if shared storage is the only option to do this cleanly, what would people recommend, bearing in mind I have SSDs in the Hyper-V hosts?
    Hope all that makes sense
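
    On the SMB3 idea: since Server 2012, Hyper-V can run VMs from an SMB 3.0 share without a cluster, and the two hosts can then live migrate VMs whose files stay on that share. A minimal sketch of publishing such a share, assuming a domain environment (share name, path, and account names are placeholders; the NTFS ACL must grant the same access, and constrained delegation is needed if you manage the hosts remotely):

        # On the host that holds the disks: share a folder for VM storage,
        # granting full access to both Hyper-V hosts' computer accounts
        New-SmbShare -Name VMS -Path D:\VMS `
            -FullAccess 'DOMAIN\HostA$', 'DOMAIN\HostB$', 'DOMAIN\Domain Admins'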

    Hi Sir,
    >> I have two Hyper-V hosts that I am setting up in a lab environment. Initially, I successfully set up a 2-node cluster with CSVs, which allowed me to do live migrations.
    As you mentioned, you have two Hyper-V hosts and use StarWind to provide the iSCSI target (this is the same as my first lab environment); I then realized that I needed one or more additional hosts to simulate a more production-like scenario.
    If you have more physical computers, you may also try other designs.
    Also please refer to this thread :
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/e9f81a9e-0d50-4bca-a24d-608a4cce20e8/2012-r2-hyperv-cluster-smb-30-sofs-share-permissions-issues?forum=winserverhyperv
    Best Regards
    Elton Ji

  • Live Migration Failure

    I am attempting to migrate virtual machines from one site to another but they are failing. I have sufficient bandwidth but am concerned about the latency. Can anyone recommend the maximum latency between the two sites?

    Hi Sir,
    I assume this is a shared-nothing live migration.
    I would suggest you perform a live migration inside each site to check whether the configuration is OK.
    If live migration runs successfully within the local site, you may need to check the link layer between the two sites.
    During the live migration you can also use netmon.exe to analyse the traffic for useful information.
    In addition, please check the event logs of the Hyper-V hosts for any clue (see the sketch below).
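    A quick way to pull recent errors from the Hyper-V management service log on either host; a minimal sketch:

        # Show recent errors from the Hyper-V Virtual Machine Management Service log
        Get-WinEvent -LogName Microsoft-Windows-Hyper-V-VMMS-Admin -MaxEvents 50 |
            Where-Object LevelDisplayName -eq 'Error' |
            Format-Table TimeCreated, Id, Message -Wrap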
    Best Regards,
    Elton Ji 

  • Live migration support in Ops Center?

    Does Enterprise Manager Ops Center 11gR1 support the new live migration feature?

    It does not natively provision the necessary bits for Live Migration (i.e. OVM Server 2.1), but if you install OVM 2.1 on your OVM Servers, you can perform the live migrations through OC (assuming your firmware etc. are all at Live Migration minimums too).
    The syntax for the migrations didn't change, so "under the covers" a live migration is performed if possible.

  • Hyper-V 2012 R2 slow throughput on network / live migrations

    Hi, maybe someone can point me in the right direction. I have 10 servers, 5 Dell R210s and 5 Dell R320s, which I have converted to standalone Hyper-V 2012 servers, so there is no clustering on any of them at the moment.
    Each server is configured with two 1 Gb NICs teamed via a virtual switch. When I copy files between server 1 and server 2, for example, I see 100 MB/s throughput, but if I copy a file to server 3 at the same time, the copy load splits the 100 MB/s between the two copy processes. I was under the impression that if I copied two files to two totally different servers, the load would be split across the two NICs, effectively giving me 2 Gb/s throughput, but this does not seem to be the case. I have played around with TCP/IP large send offloads and jumbo packets, and disabled VMQ on the cards (they are Broadcoms :-( ), but none of these settings really seem to make a difference.
    The other issue is that if I live migrate a 12 GB VM with only 2 GB of RAM, effectively just an OS, it takes between 15 and 20 minutes to migrate. I have played around with the advanced settings (SMB, compression, TCP/IP) with no real game changers, BUT if I shut down the VM and migrate it, it takes just under three and a half minutes to move across.
    I am really stumped here. I am busy in a test phase of Hyper-V but can't find any definitive documents relating to this.
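    One thing worth confirming first is the team configuration, since with a switch-independent team a single file copy (one SMB session) typically rides one physical NIC; a minimal sketch:

        # Inspect teaming mode and load-balancing algorithm
        Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm
        Get-NetLbfoTeamMember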

    Hi Mark,
    The servers (Hyper-V 2012 R2) are all configured with SCVMM 2012 R2: they all have teamed 1 Gb pNICs in a virtual switch, with vNICs for the VM cloud, live migration, etc. The physical network is two Netgear GS724T switches, which are interlinked; each server's first NIC is plugged into switch 1 and the second NIC into switch 2 (see image below). The team is set to switch-independent, Hyper-V port load balancing.
    The R320 servers are running RAID 5 SAS drives; the R210s have 1 TB drives mirrored. The servers all use DAS storage; we have not looked at iSCSI yet, and a SAN is out of the question at the moment.
    I am currently testing between 2x R320s and 2x R210s. I am not copying data to the VMs yet; I am testing the transfer between the actual hosts by copying a 4 GB file manually. After testing the live migrations, I decided to first check the transfer rates between the servers, and I have been playing around with the offload settings and RSS. What I don't understand is that yesterday the copy between the servers ran at up to 228 MB/s (i.e., using both NICs), a few hours later it was only copying at 50-60 MB/s, and it is now back at 113 MB/s, seemingly using only one NIC.
    I was under the impression that a copy between two servers could use the 2 Gb bandwidth, but after reading many posts it seems only one NIC is used per copy, so how did the copies get up to 2 Gb yesterday? Then again, copying files to two other servers should use one NIC per copy, effectively giving 2 Gb/s in total, but this is again not being seen.
    Regards, Keith

  • Live migration trigger on resource crunch in Hyper-V 2012 or R2

    VMware DRS can trigger migration of a virtual machine from one datastore to another, or from one host to another. Do we have the same kind of mechanism in Hyper-V 2012?
    I have five Hyper-V 2012 hosts in a single cluster. I want any host that faces a memory resource crunch to migrate its virtual machines to another host.
    Thanks, Ravinder
    Ravi

    SCVMM has a feature called Dynamic Optimization.
    Dynamic Optimization can be configured on a host group, to migrate virtual machines within host clusters with a specified frequency and aggressiveness. Aggressiveness determines the amount of load imbalance that is required to initiate a migration during
    Dynamic Optimization. By default, virtual machines are migrated every 10 minutes with medium aggressiveness. When configuring frequency and aggressiveness for Dynamic Optimization, an administrator should factor in the resource cost of additional migrations
    against the advantages of balancing load among hosts in a host cluster. By default, a host group inherits Dynamic Optimization settings from its parent host group.
    Dynamic Optimization can be set up for clusters with two or more nodes. If a host group contains stand-alone hosts or host clusters that do not support live migration, Dynamic Optimization is not performed on those hosts. Any hosts that are in maintenance
    mode also are excluded from Dynamic Optimization. In addition, VMM only migrates highly available virtual machines that use shared storage. If a host cluster contains virtual machines that are not highly available, those virtual machines are not migrated during
    Dynamic Optimization.
    http://technet.microsoft.com/en-us/library/gg675109.aspx
    Cheers !

  • SCVMM live migrations very slow

    I'm running Hyper-V with SCVMM 2012 in my test environment: two hosts connected to an MD1200 and set up in a cluster. Everything works perfectly, except that live migrations take a very long time. By this I mean that when I start the Migration Wizard it takes up to 5 minutes to rate the hosts, and when that finally completes and I start the migration, it can take up to another 5 minutes to complete the move.
    In my production clusters it doesn't work this way: there, the Migration Wizard takes seconds to rate the hosts and usually less than a minute to move the VM.
    The only major difference between my test environment and my production environment is that test uses direct SAS connections to the SAN, where in prod everything runs through switches. In prod one cluster is all 1 Gb and one cluster is 10 Gb, and both are about the same...

    Hey guys,
    I have searched online and in MS forums, and cannot resolve my issue.
    I have checked through similar articles, without any resolution.
    Migrating Mailbox From Exchange 2007 to Exchange 2010: Very Slow
    Exchange 2013 Mailbox Migration Rate/Speed?
    Public Folders after migration to Exchange 2013 are extremely slow
    Exchange 2010 mailbox move to Exchange 2013
    Single Mailbox Very Slow after 2003 to 2010 Migration
    I have also come across the most common solution which is written here
    http://support.microsoft.com/kb/2807668, however this is not my problem.
    My problem is that mailbox migrations are SLOW; I'm talking about less than 15 MB per minute. I do NOT have any ContentIndex errors on any of my databases; they are all healthy and stay healthy throughout the migrations. Both Exchange 2010 and 2013 are DAGs.
    When I 'view details' from the ECP migrations page, I see things like 'Stalled duration 00:53:21'; however, as I said, there are no errors, and nothing is mentioned in the user report.
    Get-ClientAccessServer | Test-MRSHealth checks out fine, no problems.
    I have disabled offloading on all of the NICs.
    Is there anything else I can do or monitor to find out why it's so slow?
    Could it be because of the DAG, even though there are no errors and the state is healthy? Would it be an idea to take down the DAG during migration and add it back after everything is done?
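    One more place to look is the move request statistics, which record stall reasons even when no error is raised; a minimal sketch (the mailbox identity is a placeholder):

        # Overview of all moves, including stall detail in StatusDetail
        Get-MoveRequest | Get-MoveRequestStatistics |
            Format-Table DisplayName, StatusDetail, PercentComplete, BytesTransferred

        # Full report for one mailbox, including per-stage timings and stalls
        (Get-MoveRequestStatistics -Identity "user@contoso.com" -IncludeReport).Report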
    Andrew Huddleston | Hillsong Church | Sydney
