Live Migration Slow - Maxes Out at 100MB/s

Hi everyone...
I have a private 10Gb network that all of my Hyper-V hosts are connected to. I have told VMM to use this network for live migrations.
Using Performance Monitor, I can see that the migration is using this private network. Unfortunately, it will only transfer at 100MB per second.
There must be a parameter somewhere to say "Use all the bandwidth you need...."
It is worth mentioning that I also back up the virtual machines over this same network to our backup server, and there the transfer rate goes up to 10Gb per second.
Any suggestions?
Thanks!

Is this network dedicated for Live Migration?
Is this a converged setup where you have a physical team and a VM switch created on top of that team?
If that is the case, then VMQ is enabled and RSS is disabled on those NICs, which won't let you leverage the full bandwidth for live migration.
-kn
Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
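
If you want to check that from PowerShell, the RSS/VMQ state of the physical NICs and the host's live migration settings are both queryable. A minimal sketch, assuming Server 2012 R2 with the in-box NetAdapter and Hyper-V modules (the performance-option parameter only exists on 2012 R2):

    # RSS and VMQ state per physical NIC - in a converged setup the vSwitch-bound
    # NICs typically show VMQ enabled and RSS disabled, as described above
    Get-NetAdapterRss -Name * | Format-Table Name, Enabled
    Get-NetAdapterVmq -Name * | Format-Table Name, Enabled

    # Networks Hyper-V is allowed to use for live migration
    Get-VMMigrationNetwork

    # Example only: prefer SMB for migration transfers (can use SMB Multichannel)
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB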

Similar Messages

  • Win 2012 shared-nothing live migration slow performance

    Hello,
    I have two standalone Hyper-V servers: A) 2012 and B) 2012 R2.
    I tried a live migration with non-shared storage between them. The functionality is fine, but performance is low: the copy takes a long time, maxing out at 200 Mbps.
    I am not able to find the bottleneck. Network and disk performance seem to be good. It is a 1Gbps network, and when I tried a simple copy/paste of the VHD file from A to B over the CIFS protocol, the speed was almost the full 1Gbps - nice and fast.
    Is this by design? I am not able to reach full network performance with shared-nothing live migration.
    Thank you for reply
    sincerely
    Peter Weiss

    Hi,
    I haven't found a similar issue with Hyper-V. Do both of your hosts have chipsets in the same family? Could you try switching between the three Live Migration performance options and then monitor again (see the sketch below)? It also seems there may be a disk or file system performance issue, so please try updating your RAID card firmware to the latest version.
    More information:
    Shared Nothing Live Migration: Goodbye Shared Storage?
    http://blogs.technet.com/b/canitpro/archive/2013/01/03/shared-nothing-live-migration-goodbye-shared-storage.aspx
    How to: Copy very large files across a slow or unreliable network
    http://blogs.msdn.com/b/granth/archive/2010/05/10/how-to-copy-very-large-files-across-a-slow-or-unreliable-network.aspx
    The similar thread:
    How to determine when to use the xcopy with the /j parameter?
    http://social.technet.microsoft.com/Forums/en-US/5ebfc25a-41c8-4d82-a2a6-d0f15d298e90/how-to-determine-when-to-use-the-xcopy-with-the-j-parameter?forum=winserverfiles
    Hope this helps
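
    For reference, the three performance options mentioned above can be switched from PowerShell between test runs. A minimal sketch, assuming the in-box Hyper-V module on the 2012 R2 host (the parameter does not exist on the plain 2012 host):

        # Show the current live migration configuration on this host
        Get-VMHost | Format-List VirtualMachineMigrationEnabled, VirtualMachineMigrationPerformanceOption

        # Cycle through TCPIP, Compression and SMB, re-testing after each change
        Set-VMHost -VirtualMachineMigrationPerformanceOption Compression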

  • SCVMM live migrations very slow

    I'm running Hyper-V with SCVMM 2012 in my test environment: 2 hosts connected to an MD1200 and set up in a cluster. Everything works perfectly, except that live migrations take a very long time. By this I mean that when I start the Migration Wizard it takes up to 5 minutes to rate the hosts, and when that finally completes and I start the migration, it can take up to another 5 minutes to complete the move.
    In my production clusters this doesn't work this way; there the Migration Wizard takes seconds to rate the hosts and usually less than a minute to move the VM.
    The only major difference between my test environment and my production environment is that test is using direct SAS connections to the SAN, where in prod everything runs through switches. In prod one cluster is all 1gig and one cluster is 10gig and both are about the same...

    Hey guys,
    I have searched online and in MS forums, and cannot resolve my issue.
    I have checked through similar articles, without any resolution.
    Migrating Mailbox From Exchange 2007 to Exchange 2010: Very Slow
    Exchange 2013 Mailbox Migration Rate/Speed?
    Public Folders after migration to Exchange 2013 are extremely slow
    Exchange 2010 mailbox move to Exchange 2013
    Single Mailbox Very Slow after 2003 to 2010 Migration
    I have also come across the most common solution, which is written here: http://support.microsoft.com/kb/2807668 - however, this is not my problem.
    My problem is that mailbox migrations are SLOW. I'm talking about < 15MB per minute. I do NOT have any ContentIndex errors on any of my databases; they are all healthy and stay healthy throughout the migrations. Both the Exchange 2010 and 2013 environments are DAGs.
    When I 'view details' from the ECP migrations page, I see things like 'Stalled duration 00:53:21', however, like I said, there are no errors, and nothing is mentioned in the user report.
    Get-ClientAccessServer | Test-MRSHealth checks out fine, no problems.
    I have disabled offloading on all of the NICs.
    Is there anything else I can do or monitor to find out why it's so slow?
    Could it be because of the DAG, even though there are no errors and the state is healthy? Would it be an idea to take down the DAG during the migration and re-add it after everything is done?
    Andrew Huddleston | Hillsong Church | Sydney
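
    One way to dig further is to pull the per-move statistics, which expose progress, throughput and stall information beyond what the ECP page shows. A minimal sketch, run from the Exchange Management Shell (the mailbox address is a placeholder, and property availability varies slightly between 2010 and 2013):

        # Progress and transferred bytes for all active moves
        Get-MoveRequest | Get-MoveRequestStatistics |
            Format-Table DisplayName, Status, PercentComplete, BytesTransferred

        # Full report for one stalled move, including what it is waiting on
        Get-MoveRequestStatistics -Identity "user@contoso.com" -IncludeReport | Format-List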

  • Hyper-V 2012 R2 slow throughput on network / live migrations

    Hi, maybe someone can point me in the right direction. I have 10 servers, 5 Dell R210s and 5 Dell R320s, and I have basically converted them to standalone Hyper-V 2012 servers, so there is no clustering on any of them at the moment.
    Each server is configured with 2 1Gb NICs teamed under a virtual switch. Now, when I copy files between server 1 and 2, for example, I see 100MB/s throughput, but if I copy a file to server 3 at the same time, the file copy load splits the 100MB/s throughput between the 2 copy processes. I was under the impression that if I copied 2 files to 2 totally different servers, the load would basically be split across the 2 NICs, effectively giving me 2Gb/s throughput, but this does not seem to be the case. I have played around with TCP/IP large send offloads, jumbo packets, and disabled VMQ on the cards (they are Broadcoms :-( ), but none of these settings really seem to make a difference.
    The other issue: if I live migrate a 12GB VM running only 2GB RAM, effectively just an OS, it takes between 15 and 20 minutes to migrate. I have played around with the advanced settings (SMB, compression, TCP/IP), no real game changers. BUT if I shut down the VM and migrate it, it takes just under 3 and a half minutes to move across.
    I am really stumped here. I am busy with a test phase of Hyper-V but can't find any definitive documents relating to this.

    Hi Mark,
    The servers (Hyper-V 2012 R2) are all basically configured with SCVMM 2012 R2, where they all have teamed 1Gb pNICs under a virtual switch, with vNICs for the VM cloud, live migration etc. The physical network is 2 Netgear GS724T switches which are interlinked; each server's 1st NIC is plugged into switch 1 and the second NIC is plugged into switch 2. The team is set to switch-independent, Hyper-V Port load balancing.
    The R320 servers are running RAID 5 SAS drives; the R210s have 1TB drives mirrored. The servers are all using DAS storage; we have not moved to looking at iSCSI, and a SAN is out of the question at the moment.
    I am currently testing between 2x R320s and 2x R210s. I am not copying data to the VMs yet; I am basically testing the transfer between the actual hosts by copying a 4GB file manually. After testing the live migrations I decided to first test the transfer rates between the servers, and I have been playing around with the offload settings and RSS. What I don't understand is that yesterday the copy between the servers was running at up to 228MB/s (i.e. using both NICs), then a few hours later it was only copying at 50-60MB/s, but it's now back at 113MB/s, seemingly only using one NIC.
    I was under the impression that if you copy a file between 2 servers the NICs could use the 2Gb bandwidth, but after reading many posts they say only one NIC, so how did the copies get up to 2Gb yesterday? Then again, if you copy files between 3 servers, each copy should use one NIC, basically giving you 2Gb/s, but this is again not being seen.
    Regards Keith
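
    What Keith describes is consistent with how the team hashes traffic: with switch-independent teaming and Hyper-V Port (or address-hash) load balancing, a single TCP flow is pinned to one team member, so one file copy tops out at roughly 1Gb. A minimal sketch for inspecting and changing the algorithm, assuming the in-box NetLbfo module on 2012 R2 ("Team1" is a placeholder name):

        # Inspect the team's mode and load-balancing algorithm
        Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm

        # Dynamic (new in 2012 R2) rebalances outbound flows across team members
        Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic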

  • HT202807 I have a MB Air 128gb and a Time Cap. 802.11AC. My HD is maxed out because of 30GB of photo storage after migration to 10.10.3 Photos. Please recommend an external drive best for connecting to TC and moving the iPhoto Lib off my HD

    I have a 13" mid-2011 MB Air 128GB and a Time Capsule 802.11ac. My HD is maxed out because of 30GB of photo storage after the migration to 10.10.3 Photos. I have reduced apps and files as much as possible, so the only thing left is to move the photos. Please recommend an external drive compatible for connecting to my TC 802.11ac and moving the iPhoto Library off my HD, which will free approximately 15GB of space. I don't plan on using it for any other purpose, so a small capacity drive will suffice. I know the "optimization" is supposed to adjust for this condition, but I don't trust that assertion. I only have 2GB of free space remaining.

    I don't plan on using it for any other purpose so a small capacity drive will suffice.
    If you really only want such a small capacity drive, what about just buying a 32GB or 64GB USB memory stick?
    You should still have a backup on the internal hard disk of the AC TC.
    Or just buy a WD Passport. Don't waste your money buying 500GB, as it is only a bit more to buy 1TB, and a bit more again for 2TB. If you do buy a USB hard disk for future use, buy the 2.5" type, and buy 1TB, or if the budget stretches, 2TB.

  • Maxed out Mac Pro (nearly new) DEAD SLOW - why???

    I have always been a PC guy (animation and film production is what I do -- www.speedbumpstudios.com) but I decided to bite the bullet and go Mac. I maxed out my Mac Pro -- if it's available, I got it. It's a $15,000 machine. Great. So I've had it for a few months and the thing is so dead slow I am finding it almost unusable. My three-year-old PC laptop kicks its butt. Here are the symptoms.
    Opening Photoshop takes anywhere from 1 to 3 minutes. Once it is open, if I try to open a file, it takes another couple of minutes for the Finder to pop up so I can search for it.
    iTunes regularly hangs and/or takes several minutes to load. My iPod, which is also new, sometimes cannot even connect to it (error).
    HP scanning software takes minutes to open. Once the scan is ready to be saved, it takes minutes for the Finder to even offer the option to name the new file.
    Almost all the software on this thing starts up painfully slowly.
    IMPORTANT: I have noticed that the machine runs almost all right immediately after a restart. The longer it runs, the worse these symptoms get.
    ALSO IMPORTANT: I had been running Boot Camp on the machine, allowing me to boot into Windows XP. I recently bought Parallels and ran that. While this Mac never ran great, the problems seemed to get markedly worse at that point. I have since, out of desperation, removed Parallels completely, but the problems persist.
    I am, of course, sick about how much I am paying for this machine that is supposed to be the best available and is about as useful as a boat anchor. WHAT SHOULD I DO!?

    George, I feel your pain. I too have a maxed out Mac Pro;
    Mac Pro - 8 Core (Dual Quad-Core 3GHz)
    16GB RAM (Crucial Memory)
    4x 750GB Drives
    ATI 1900x Vid Card
    2x 30" Cinema Displays
    etc etc etc... I even have a 2-port eSATA cable installed (OWC) with a 1.5TB external drive (Iomega, and fast as the dickens!)
    My machine has been in the Apple repair facilities THREE times so far (I have been working with an executive relations person for 3 months now) and we are about to swap it out for a new machine, because the term "ghosts in the machine" applies to many Mac Pros. In fact, the last time Apple fixed this machine was yesterday, replacing a Bluetooth module, and as soon as I booted the machine it started having kernel panics (for the first time). We thought it was the RAM riser cards being seated wrong... after making sure everything was seated properly, and doing a complete OS reinstall, it is still kernel panicking every hour (if I let the machine sit quietly)... so there is a ghost in this machine that does not want to go away.
    Getting to your situation though: MY machine has ALWAYS been much slower than the dual-G5 I replaced last April. You are not alone here. This is an 8-core machine; it ran Tiger slow and is now running Leopard only about 20% faster, which is still slower than the G5. Makes me want to cry sometimes because I paid, what, $1300 for the processor upgrade alone?
    Something is wrong with your machine. I suppose you have already tried to reinstall, reformat your drives, correct permissions, etc and still found your machine wanting in the ways of speed, right? What you need to do is to contact Apple Executive Relations and tell them what is going on. Once they see that you ordered such an expensive machine (no one buys extra RAM from Apple so they will sympathize right there!) they WILL contact you and work with you until your problem is solved. The only reason they have not replaced MY machine yet (they will usually do so after the 2nd or 3rd repair attempt) is because in my case I cannot afford the downtime. Now, however, I think we have to because no one can figure out why this machine is having panics after sitting still for an hour.
    Anyway, be assured this is NOT representative of Apple or the Mac Pro speed. Mac Pro's are plenty fast, and unbelievably fast with newer, Intel-based software (CS3), and even though Leopard has many many bugs it is faster than Tiger. My partner has the same machine as I do and it is so fast I think it does most of his work before he even arrives in the morning.
    Work with Apple, let them fix your machine or replace it. They will.

  • After upgrading to 10.5.4, CPU keeps maxing out and system runs slower

    Hi there,
    I upgraded to 10.5.4 last week, and quite frequently my CPU starts revving up and stays at a high level until I put my iMac to sleep. When I did my weekly reboot last Friday it seemed not to have the problem for a while, but then it came back again pretty quickly. By checking Activity Monitor I found out that my resources are maxed out all the time; I've got about 80% CPU attached to Safari right now. Applications seem to load slower, and everything runs a bit slower when it's like this.
    I found under System Preferences > Energy Saver that I had my Processor Performance set to Highest, so I switched that to Automatic, but I'm still having this problem.
    Can anyone help me with this?
    Travis

    Thank you, Pondini, that's all I was really asking about in the 3 (4) previous messages. I wanted to know if I could download the combo updater and install it over top of the 10.5.4 version I downloaded through Software Update. Thanks for the confirmation. I did download it this morning after getting your post and have installed it.
    Initially, after installation and reboot of the 10.5.4 combo update, my CPU maxed out again for a little while. I checked Activity Monitor and it said a program called "Pattern ?" was hogging the CPU; the ? refers to a second word that I didn't write down at the time - I think it started with the letter M.
    I had my iMac turned off for a couple of hours after that. When I turned it back on, I was ripping 4 CDs into iTunes, so it sounded like the CPU was running high again through that, but I think that's typical for when I'm ripping a CD. Anyhow, right now the CPU is running normally, about 20% of total cycles used, and that "Pattern ?" program has disappeared from the list. So maybe that was something the 10.5.4 combo updater was using to fix a problem, and it might be back to normal again.
    So, so far so good, thanks to the people here who have helped. I will post again if the problem persists.
    Also, Pondini, thanks for reminding me to check posts in the forum for information. I almost always do that initially, before I post my question. I did that this time, but I had no idea at the time that downloading the combo updater would be the solution, as I was scanning the forum for people who had spiking CPUs, not any other type of problem. I suppose in hindsight, after finding that the combo updater might be a possible solution to my problem, I should have ended this thread and sought new information, posing a new question/post if necessary. But like you say, I would have found other posts about installing the combo updater.
    Travis

  • Slow migration rates for shared-nothing live migration over teaming NICs

    I'm trying to increase the migration/data transfer rates for shared-nothing live migrations (i.e., especially the storage migration part of the live migration) between two Hyper-V hosts. Both of these hosts have a dedicated teaming interface (switch-independent, dynamic) with two 1GBit/s NICs, which is used only for management and transfers. Both of the NICs on both hosts have RSS enabled (and configured), and the teaming interface also shows RSS enabled, as does the corresponding output from Get-SmbMultichannelConnection.
    I'm currently unable to see data transfers of the physical volume at more than around 600-700 MBit/s, even though the team is able to saturate both interfaces, with data rates going close to the 2GBit/s boundary, when transferring simple files over SMB. The storage migration does seem to use multichannel SMB, as I am able to see several connections all transferring data on the remote end.
    As I'm not seeing any form of resource saturation (neither the NIC/team is full, nor any CPU, nor the storage adapter on either end), I'm slightly stumped that live migration seems to have a built-in limit of around 700 MBit/s, even over a (pretty much) dedicated interface which can handle more traffic when transferring simple files. Is this a known limitation wrt. teaming and shared-nothing live migrations?
    Thanks for any insights and for any hints where to look further!

    Compression is not configured for the live migrations (rather, the option is set to SMB), but as far as I understand, this is not relevant for the storage migration part of the shared-nothing live migration anyway.
    Yes, all NICs and drivers are at their latest version, RSS is configured (as also stated by the corresponding output from Get-SmbMultichannelConnection, which recognizes RSS on both ends of the connection), and for all NICs bound to the team, Jumbo Frames (9k) have been enabled; the team is also identified with 9k support (as shown by Get-NetIPInterface).
    As the interface is dedicated to migrations and management only (i.e., the corresponding team is not bound to a Hyper-V switch, but rather is just a "normal" team with an IP configuration), Hyper-V Port load balancing does not make a difference here, as there are no VMs to bind to interfaces on the outbound NIC, just traffic from the Hyper-V base system.
    Finally, there are no bandwidth weights and/or QoS rules bound to the corresponding interface(s) for the migration traffic.
    As I'm able to transfer close to 2GBit/s of SMB traffic over the interface (using just a plain file copy), I'm wondering why the SMB(?) transfer of the disk volume during shared-nothing live migration is seemingly limited to somewhere around 700 MBit/s on the team. Looking at the TCP connections on the remote host, it does seem to use multichannel SMB to copy the disk, but I might be mistaken on that.
    Are there any further hints, or is there any further information I might offer to diagnose this? I'm currently pretty much stumped on where to look next.
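
    Two things worth ruling out from PowerShell: whether multichannel really spans both team members during the migration, and whether an SMB bandwidth limit is silently in place. A hedged diagnostic sketch for 2012 R2 (Get-SmbBandwidthLimit additionally requires the "SMB Bandwidth Limit" feature to be installed):

        # Run on the source host while a migration is in flight:
        # one row per active SMB channel, with capability flags
        Get-SmbMultichannelConnection |
            Format-Table ServerName, ClientIpAddress, ServerIpAddress, ClientRSSCapable

        # Any configured SMB bandwidth limits (categories include LiveMigration)
        Get-SmbBandwidthLimit -ErrorAction SilentlyContinue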

  • iTunes download on NAS WD MyBook Live - super slow!

    I have a WD MyBook Live with the latest firmware - 2TB. Ever since I moved my iTunes media folder to the NAS, performance has been great, except my download speeds are crazy slow! I have a 450Mbps 5GHz Wi-Fi connection. When transferring to and from my MacBook Pro and the NAS I get speeds in the region of 30MB/s. Yet when I am downloading with iTunes I get 1MB/s!
    If I download straight to my HDD (tested by making another iTunes folder) I get super fast speeds that pretty much max out my 120Mbps Virgin Media connection. So what gives? I tried both making an alias and also a symbolic link pointing downloads to a different location, but iTunes does not recognise this.
    Any suggestions?

    I would really appreciate some input on this issue please!

  • Server 2012 R2 live migration fails with hardware error

    Hello all, we just upgraded one of our Hyper-V hosts from Server 2012 to Server 2012 R2; previously we had live replication set up between it and another box on the network which was also running Server 2012. After installing Server 2012 R2, when a live migration is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both Dell; currently we have a PowerEdge R910 running Server 2012 and a PowerEdge R900 running Server 2012 R2. The option under Processor, "Migrate to a physical computer with a different processor version", is already checked, and this same VM was successfully being replicated before the upgrade to Server 2012 R2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full reinstall, wiping out the installation of Server 2012 and installing Server 2012 R2; this was not an in-place upgrade.

    The only cause I’ve seen so far is virtual switches being named differently. I do remember that one of our VMs didn’t move, but we simply bypassed this problem using a one-time backup (VeeamZIP, more specifically).
    If it’s a one-time operation you can use the same procedure for the VMs in question -> back them up and restore them on the new server.
    Kind regards, Leonardo.
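
    Two quick PowerShell checks cover the usual suspects behind this error message: processor compatibility mode on the VM, and virtual switch names that differ between the hosts, as Leonardo mentions. A minimal sketch, assuming the in-box Hyper-V module ("MyVM" is a placeholder):

        # The VM must be shut down for this; masks CPU features down to a common set
        Set-VMProcessor -VMName "MyVM" -CompatibilityForMigrationEnabled $true

        # Run on both hosts; the virtual switch names must match exactly
        Get-VMSwitch | Format-Table Name, SwitchType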

  • Live Migration Fails with error Synthetic FiberChannel Port: Failed to finish reserving resources on a VM using Windows Server 2012 R2 Hyper-V

    Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster using Fibre Channel adapters with Virtual SAN configured on the Hyper-V hosts.
    I have read several articles about this issue, like these:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    But I haven't been able to fix my issue.
    The Virtual SAN is configured on every Hyper-V host node in the cluster, and every VM has 2 Fibre Channel adapters configured.
    All the World Wide Names are configured both on the FC switch and on the FC SAN.
    All the drivers for the FC adapters in the Hyper-V hosts have been updated to their latest versions.
    The strange thing is that the issue is not affecting all of the VMs: some of the VMs with FC adapters configured are live migrating just fine, while others are getting this error.
    Quick migration works without problems.
    We even tried removing and creating new FC adapters on a VM with problems - we had to configure the switch and SAN with the new WWNs and all - but ended up having the same problem.
    At first we thought it was related to the hosts, but since some VMs do live migrate fine with FC adapters, we tried migrating them on every host, and everything worked well.
    My guess is that it has to be something related to the VMs themselves, but I haven't been able to figure out what it is.
    Any ideas on how to solve this are deeply appreciated.
    Thank you!
    Eduardo Rojas
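
    One thing worth comparing between a VM that migrates fine and one that fails is the virtual FC adapter's two WWN address sets: live migration alternates between set A and set B, so both must be zoned and masked on the fabric. A hedged sketch, assuming the in-box Hyper-V module ("GoodVM" and "BadVM" are placeholders):

        # Both address sets (A and B) must appear in the FC switch zoning and SAN masking
        Get-VMFibreChannelHba -VMName "GoodVM", "BadVM" |
            Format-Table VMName, SanName, WorldWidePortNameSetA, WorldWidePortNameSetB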

    Hi Eduardo,
    How are things going?
    Best Regards
    Elton Ji

  • Error 10698 Virtual machine could not be live migrated to virtual machine host

    Hi all,
    I am running a failover cluster of:
    Host:
    2 x WS2008 R2 Data Centre
    managed by VMM:
    VMM 2008 R2
    Virtual Host:
    1x windows 2003 64bit guest host/virtual machine
    I have attempted a live migration through VMM 2008 R2 and I'm presented with the following error:
    Error (10698)
    Virtual machine XXXXX could not be live migrated to virtual machine host xxx-Host01 using this cluster configuration.
     (Unspecified error (0x80004005))
    What I have found when running the cluster validation:
    1 out of the 2 hosts has an RPC error related to network configuration:
    An error occurred while executing the test.
    Failed to connect to the service manager on 'xxx-Host02'.
    The RPC server is unavailable
    However, there are no errors or events on Host02 showing any problems at all.
    In fact, the validation report goes on to show the rest of the configuration information of both cluster hosts as OK.
    See below:
    List BIOS Information
    List BIOS information from each node.
    xxx-Host01
    Gathering BIOS Information for xxx-Host01
    Item  Value 
    Name  Phoenix ROM BIOS PLUS Version 1.10 1.1.6 
    Manufacturer  Dell Inc. 
    SMBios Present  True 
    SMBios Version  1.1.6 
    SMBios Major Version  2 
    SMBios Minor Version  5 
    Current Language  en|US|iso8859-1 
    Release Date  3/23/2008 9:00:00 AM 
    Primary BIOS  True 
    xxx-Host02
    Gathering BIOS Information for xxx-Host02
    Item  Value 
    Name  Phoenix ROM BIOS PLUS Version 1.10 1.1.6 
    Manufacturer  Dell Inc. 
    SMBios Present  True 
    SMBios Version  1.1.6 
    SMBios Major Version  2 
    SMBios Minor Version  5 
    Current Language  en|US|iso8859-1 
    Release Date  3/23/2008 9:00:00 AM 
    Primary BIOS  True 
    Back to Summary
    Back to Top
    List Cluster Core Groups
    List information about the available storage group and the core group in the cluster.
    Summary 
    Cluster Name: xxx-Cluster01 
    Total Groups: 2 
    Group  Status  Type 
    Cluster Group  Online  Core Cluster 
    Available Storage  Offline  Available Storage 
     Cluster Group
    Description:
    Status: Online
    Current Owner: xxx-Host01
    Preferred Owners: None
    Failback Policy: No failback policy defined.
    Resource  Type  Status  Possible Owners 
    Cluster Disk 1  Physical Disk  Online  All Nodes 
    IP Address: 10.10.0.60  IP Address  Online  All Nodes 
    Name: xxx-Cluster01  Network Name  Online  All Nodes 
     Available Storage
    Description:
    Status: Offline
    Current Owner: Per-Host02
    Preferred Owners: None
    Failback Policy: No failback policy defined.
     Cluster Shared Volumes
    Resource  Type  Status  Possible Owners 
    Data  Cluster Shared Volume  Online  All Nodes 
    Snapshots  Cluster Shared Volume  Online  All Nodes 
    System  Cluster Shared Volume  Online  All Nodes 
    Back to Summary
    Back to Top
    List Cluster Network Information
    List cluster-specific network settings that are stored in the cluster configuration.
    Network: Cluster Network 1 
    DHCP Enabled: False 
    Network Role: Internal and client use 
    Metric: 10000 
    Prefix  Prefix Length 
    10.10.0.0  20 
    Network: Cluster Network 2 
    DHCP Enabled: False 
    Network Role: Internal use 
    Metric: 1000 
    Prefix  Prefix Length 
    10.13.0.0  24 
    Subnet Delay  
    CrossSubnetDelay  1000 
    CrossSubnetThreshold  5 
    SameSubnetDelay  1000 
    SameSubnetThreshold  5 
    Validating that Network Load Balancing is not configured on node xxx-Host01.
    Validating that Network Load Balancing is not configured on node xxx-Host02.
    An error occurred while executing the test.
    Failed to connect to the service manager on 'xxx-Host02'.
    The RPC server is unavailable
    Back to Summary
    Back to Top
    If it were an RPC connection issue, then I shouldn't be able to RDP (mstsc) to Host02 or browse its shares. Well, I can access them, which makes the report above a bit misleading.
    I have also checked the RPC service, and it is started.
    If there is anyone that can shed some light or advise me of any other option for troubleshooting this, that would be greatly appreciated.
    Kind regards,
    Chucky
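
    A couple of remote checks that exercise the same path the validation test uses (the Service Control Manager over RPC) may help narrow this down. A minimal sketch using PowerShell 2.0-era cmdlets, run from Host01 against the node the report complains about:

        # File shares working while these calls fail points at the "Remote Service
        # Management" firewall rule group on Host02 rather than at the RPC service itself
        Get-Service -ComputerName xxx-Host02 -Name RpcSs, RemoteRegistry | Format-Table Name, Status
        Get-WmiObject -ComputerName xxx-Host02 -Class Win32_OperatingSystem | Select-Object CSName, Caption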


  • Live Migration and private network

    Is it a best practice to set up a private network between the nodes in a pool (reserving a few network cards and switch ports for it), in order to have a dedicated network for the traffic generated e.g. by live migration and/or the OCFS2 heartbeat? I was wondering why such a setup is generally recommended in other virtualization solutions, but apparently it's not considered strictly necessary in OVM... Why? Are there any docs regarding this? I couldn't find any.
    Thanks!

    Hi Roynor,
    regarding the physical separation between management+hypervisor and the guest VMs, it's now implemented and working...
    My next doubt on the list of doubts :-) at this point is:
    I could easily set up ONE MORE dedicated bond, create a bridge with a private IP on it on each server (e.g. 10.xxx.xxx.xxx), and then create a private VLAN completely isolated from the rest of the world.
    I'd be putting the physical switch ports where the private bonds/bridges belong on the same VLAN ID.
    But:
    - How can I be sure that this network WILL actually be used by the relevant traffic? If I'm not wrong, when you set up e.g. a physical RAC cluster, at a certain point you are prompted to choose which network to use for the heartbeat (and it will be marked as PRIVATE) and which network will be used by client traffic (PUBLIC). In Oracle VM such a setting does not exist... neither during installation, nor in VM Manager - nowhere.
    - Apart from security, I suspect that problems could arise during heavy VM migrations, because if the network gets saturated, there is a chance that the OCFS2 heartbeat would somehow be "lost", therefore messing up HA etc. This is at least the reason why a private network is highly recommended in a RAC setup.
    - I finally found the doc you mention from IBM (thanks for pointing it out!), but my opinion is that THEIR INTENTION was to separate the traffic the same way I'd like to, yet there is simply NO PROOF that such a setup would work... They do not mention where you can specify which traffic you want on which network...
    This is a very important point... I'm wondering why there is this lack of information.
    Thanks for your feedback, btw

  • Hyper-V 2012 R2 live migration issue at 2003 domain functional level

    Hi Team,
    I recently built a 2012 R2 Hyper-V cluster with three nodes. Everything was working fine without any issue, and the cluster is also working fine. Later I came across one issue when I tried to live migrate a virtual machine from one host to another: it failed every time, while quick migration works. I went through a few articles and found it is a known issue with Hyper-V 2012 R2 where the domain functional level is set to 2003. A hotfix has been provided, but no solution:
    http://support.microsoft.com/kb/2838043
    Please let me know if anyone has faced a similar issue and was able to resolve it with any hotfix. My hosts are updated.
    Thanks
    Ravindra
    Ravi

    Hi Ravi1987,
    KB2838043 applies to Server 2012 nodes. Could you offer us the related cluster error event ID? You can also refer to the following article to check whether your cluster network binding order is correct:
    Configuring Windows Failover Cluster Networks
    http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
    You can try installing the recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters first, then monitor this issue again.
    The KB download:
    Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters
    http://support.microsoft.com/kb/2920151
    More information:
    Windows Server 2008 R2 Live Migration – “The devil may be in the networking details.”
    http://blogs.technet.com/b/askcore/archive/2009/12/10/windows-server-2008-r2-live-migration-the-devil-may-be-in-the-networking-details.aspx
    I’m glad to be of help to you!
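
    For the binding-order check mentioned above, the cluster's own view of its networks (role and metric, which determine what each network is used for) can be listed from PowerShell. A minimal sketch, assuming the in-box FailoverClusters module on a cluster node:

        # Lower metric = preferred for cluster traffic; Role 1 = cluster only,
        # Role 3 = cluster and client
        Import-Module FailoverClusters
        Get-ClusterNetwork | Sort-Object Metric | Format-Table Name, Role, Metric, AutoMetric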

  • IP Conflict when doing a Hyper-V Live Migration

    Hello,
    I am using Windows Server 2012 R2 Update 1 on a Fujitsu Cluster in a Box.
    I have the following issue:
    When I do a manual live migration of a VM, the VM's NIC gets an IP conflict with itself and stops responding.
    When I do an ipconfig inside that VM, I see an APIPA address assigned (because of that conflict). If I then disable and re-enable the NIC inside the guest, the IP works correctly again with the static address I assigned inside the guest.
    When I do a quick migration, there is no IP conflict issue for that VM.
    Also, when a live migration is initiated due to a reboot of the owning cluster node, there is no issue with the IP address of the VM either.
    It only happens when doing a manual live migration. Maybe the Cisco switch the cluster is attached to is too slow to register the quick MAC/IP change?
    Is there anything I can do to fix the issue? Ideas?

    Answering my own question after doing some research:
    The issue is related to the networking equipment in my case.
    Setting ArpRetryCount to 0 in the registry under Tcpip\Parameters fixed the issue in the VM.
    Interestingly enough, I found the solution in the VMware KB :-)
    See: VMware KB 1028373
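
    For reference, a hedged sketch of the registry change described above, run inside the guest from an elevated PowerShell prompt (the VM needs a reboot for it to take effect):

        # ArpRetryCount = 0 disables the gratuitous-ARP self-check that raises the
        # false conflict after the VM's MAC moves to the new host
        Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' `
            -Name ArpRetryCount -Value 0 -Type DWord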
