Live Migration between AD Forests

Hello Everyone.
I would like to live migrate a VM from HostA in AD Forest1 to HostB in AD Forest2. There is a two-way trust established between the two AD forests. Does anyone know if this is possible? I have not been successful.

OK! Managed to get this working!! I changed a million things, so I'm not sure what actually resolved it.
Created secondary DNS zones for the other forest on the DCs on each side of the trust.
Opened up the firewall rules quite relaxed, so each Hyper-V server can talk to all the DCs in the other forest.
On the source/destination Hyper-V boxes, added the other domain's administrator + computer object to the local Administrators group.
Added DNS search suffixes for the other forest.
A hosts file entry was needed as well to resolve destHyperVhost.remoteforest.fqdn for some reason (the NetBIOS name would resolve).
Had to tweak name suffix routing on the trusts. This is specific to our environment (we couldn't authenticate properly without it).
Ticked "The other domain supports Kerberos AES Encryption" in the trust settings.
Configured Kerberos delegation on each Hyper-V box: "Trust this computer for delegation (Kerberos only)".
Logged in to both Hyper-V boxes as sourceforest\domainadmin and double-checked I could query WMI (e.g. Get-WmiObject -Namespace "root\virtualization\v2" -Class Msvm_StorageAllocationSettingData -ComputerName theotherhost).
Confirmed all SPNs etc. were configured properly (they were, no changes needed).
Then it works fine. In this case, it was a standalone Windows Server 2012 host migrating to a clustered Windows Server 2012 R2 host (but not to a CSV).
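
For reference, the delegation step (the "Trust this computer for delegation (Kerberos only)" setting above) can also be scripted. A rough sketch with the AD PowerShell module; untested here, and the host and forest names are placeholders for your own environment:

    # Sketch: add Kerberos-only constrained delegation entries on the source
    # host's computer account, pointing at the destination host (placeholder names)
    Import-Module ActiveDirectory
    $dest = "destHyperVhost.remoteforest.fqdn"
    Get-ADComputer "SourceHyperVHost" | Set-ADObject -Add @{
        "msDS-AllowedToDelegateTo" = @(
            "Microsoft Virtual System Migration Service/$dest",
            "cifs/$dest"
        )
    }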

Similar Messages

  • Live Migration between two WS2012 R2 Clusters with SCVMM 2012 R2 creates multiple objects on Cluster

    Hi,
    I'm seeing an issue when migrating VMs between two 2012 R2 Hyper-V clusters, using VMM 2012 R2, that have their storage provided by a 4-node Scale-Out File Server cluster that the two clusters share.
    A migration between the two clusters is successful and the VM is operational, but I'm left with two roles added to the cluster the VM has moved to, instead of the expected one.
    For example: say I have a VM "abw-app-fl-01" that was created on cluster A with SCVMM, resulting in a role named "SCVMM abw-app-fl-01 Resources".
    I then do a live migration to cluster B, which has access to the same storage, and I end up with two new roles instead of one:
    "SCVMM abw-app-fl-01 Resources" and "abw-app-fl-01"
    The "SCVMM abw-app-fl-01 Resources" role is left in an unknown state and "abw-app-fl-01" is operational.
    I can safely delete "SCVMM abw-app-fl-01 Resources" and everything still works, but it looks like something is failing during the process.
    Has anyone else seen this?
    I'll probably have one of my guys open a support ticket in the new year, but I was wondering if anyone else is seeing this.
    Kind regards,
    Jas :)

    In my case the VMs were created in VMM in my one and only Hyper-V cluster (which was created and is managed by VMM).
    All highly available VMs have an FCM role named "SCVMM vmname", where vmname is the name of the VM in VMM. On top of that, a lot
    of VMs, but not all, have a second role named vmname. Lots of names in that sentence.
    All VMs that have duplicates are using the role named vmname.
    I thought it had to do with whether a VM had been migrated, so I took one that had never been migrated and migrated it. It did not get a duplicate.
    Is there any progress on this?
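
    For anyone else hitting this, a quick way to spot the leftover duplicate roles is to list the cluster groups on the destination cluster. A sketch (run on a cluster node; the name filter is an assumption based on the SCVMM naming above):

        # Sketch: list cluster roles so orphaned "SCVMM ..." entries stand out
        Import-Module FailoverClusters
        Get-ClusterGroup | Where-Object { $_.Name -like "SCVMM *" } |
            Select-Object Name, State, OwnerNode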

  • Hyper-V replica vs Shared Nothing Live Migration

      Shared Nothing Live Migration lets you move your VM over the WAN without shutting it down (how much time it takes on an I/O-intensive VM is another story).
      Hyper-V Replica does not let you perform the DR switch without a shutdown of the primary-site VM!
      Why can't it take the VM to DR live?
      If we instead use Shared Nothing Live Migration across the WAN, we don't reuse the data that Hyper-V Replica has already copied, and it also breaks everything Hyper-V Replica has set up.
      The point is: how do we take the VM to DR in a running state? What is the best way to do that?
    Shahid Roofi

    Hi Shahid,
    Hyper-V Replica is designed as a DR technology, not as a technique to move VMs. It assumes that, should you require it, the source VM would probably be offline, and therefore you would be powering up the passive copy from a previous point in time,
    as it is not a true synchronous replica copy. It does give you the added benefit of being able to run a planned failover which, as you say, powers off the VM first, runs a final sync, then powers the new VM up. Obviously you can't have the duplicate copy of this VM
    running all the time at the remote site, otherwise you would have a split-brain situation for network traffic.
    Like live migration, shared nothing live migration is a technology aimed at moving a VM, but as you know it offers the ability to do this without shared storage and only requires a network connection. When initiated, it moves the whole
    VM: it copies the virtual drive and memory, sends machine writes to both copies, and then only to the new VM once they match. With regards to the speed, I assume you have SNLM set up to compress data before sending it across the wire?
    If you want a true live migration between remote sites, one way would be to have a SAN array synchronously replicating data between both sites, and then stretch the Hyper-V cluster across both sites. Obviously this is a very expensive solution, but perhaps
    the perfect scenario.
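    If it helps, the planned failover sequence can also be scripted along these lines (a rough sketch only; the VM and host names are placeholders):

        # Sketch of a planned failover: stop the primary VM, run the final sync,
        # then bring the replica online (placeholder VM/host names)
        Stop-VM -Name "AppVM01" -ComputerName "PrimaryHost"
        Start-VMFailover -VMName "AppVM01" -ComputerName "PrimaryHost" -Prepare
        Start-VMFailover -VMName "AppVM01" -ComputerName "ReplicaHost"
        Start-VM -Name "AppVM01" -ComputerName "ReplicaHost"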
    Kind Regards
    Michael Coutanche
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

  • Shared nothing live migration wrong network

    Hey,
    I have a problem when doing shared nothing live migration between clusters using SCVMM 2012 R2.
    The transfer goes OK, but it chooses to use my team of 2 x 1Gb NICs (routable) instead of the team with 2 x 10Gb (non-routable, but open between all hosts).
    I have set vmmigrationsubnet to the correct net, checked the live migration settings, and verified that port 6600 is listening on the correct IP.
    But it still chooses to use the 1Gb net.
    Anyone got any idea what I can do next?
    //Johan Runesson

    Do you have only your live migration network defined as such on the clusters, or do you have both defined? What are the IP networks of the live migration networks on each cluster?
    .:|:.:|:. tim
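
    One thing worth trying is to explicitly restrict which networks the hosts will accept for migration, rather than relying on priorities. A sketch (run on each host; the subnet is a placeholder for your 10Gb net, and note that VMM settings may override host settings):

        # Sketch: only allow live migration over the 10Gb subnet
        Set-VMHost -UseAnyNetworkForMigration $false
        Get-VMMigrationNetwork | ForEach-Object { Remove-VMMigrationNetwork -Subnet $_.Subnet }
        Add-VMMigrationNetwork -Subnet "10.10.10.0/24" -Priority 1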

  • Slow migration rates for shared-nothing live migration over teaming NICs

    I'm trying to increase the migration/data transfer rates for shared-nothing live migrations (i.e., especially the storage migration part of the live migration) between two Hyper-V hosts. Both of these hosts have a dedicated teaming interface (switch-independent,
    dynamic) with two 1Gbit/s NICs, which is used only for management and transfers. Both NICs on both hosts have RSS enabled (and configured), and the teaming interface also shows RSS enabled, as does the corresponding output from Get-SmbMultichannelConnection.
    I'm currently unable to see data transfers of the physical volume of more than around 600-700 Mbit/s, even though the team is able to saturate both interfaces, with data rates coming close to the 2Gbit/s boundary, when transferring simple files over SMB. The
    storage migration does seem to use SMB Multichannel, as I am able to see several connections all transferring data on the remote end.
    As I'm not seeing any form of resource saturation (the NIC/team is not full, nor is a CPU, nor is the storage adapter on either end), I'm slightly stumped that live migration seems to have a built-in limit of around 700 Mbit/s, even over a (pretty much) dedicated
    interface which can handle more traffic when transferring simple files. Is this a known limitation with regard to teaming and shared-nothing live migrations?
    Thanks for any insights and for any hints on where to look further!

    Compression is not configured for the live migrations (the option is set to SMB instead), but as far as I understand, this is not relevant for the storage migration part of the shared-nothing live migration anyway.
    Yes, all NICs and drivers are at their latest version, and RSS is configured (as also stated by the corresponding output from Get-SmbMultichannelConnection, which recognizes RSS on both ends of the connection). For all NICs bound to the team, jumbo frames
    (9k) have been enabled, and the team is also identified with 9k support (as shown by Get-NetIPInterface).
    As the interface is dedicated to migrations and management only (i.e., the corresponding team is not bound to a Hyper-V switch, but rather is just a "normal" team with an IP configuration), Hyper-V Port load balancing does not make a difference here, as there are
    no VMs to bind to interfaces on the outbound NIC, just traffic from the Hyper-V base system.
    Finally, there are no bandwidth weights and/or QoS rules bound to the corresponding interface(s) for the migration traffic.
    As I'm able to transfer close to 2Gbit/s of SMB traffic over the interface (using just a plain file copy), I'm wondering why the SMB(?) transfer of the disk volume during shared-nothing live migration is seemingly limited to somewhere around 700 Mbit/s on the
    team; looking at the TCP connections on the remote host, it does seem to use SMB Multichannel to copy the disk, but I might be mistaken on that.
    Are there any further hints, or is there any further information I might offer to diagnose this? I'm currently pretty much stumped on where to go on looking.
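
    In case it helps anyone diagnosing the same thing, the migration transport and the multichannel/RSS state can be checked on both ends with something like this (a sketch; output property names may vary slightly by OS version):

        # Sketch: confirm the migration transport and SMB Multichannel/RSS state
        Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption
        Get-SmbMultichannelConnection | Format-Table -AutoSize
        Get-NetAdapterRss | Format-Table Name, Enabled, MaxProcessors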

  • AD Migration from one domain to another between different forests

    Dear Team,
    We have a domain named "test.gov.in". Now we want to migrate all the users, computers, groups, GPOs, etc. into our new domain "abc.net". The operating system of the source DC and the destination DC is the same (Windows 2003 32-bit).
    Please provide me the steps to migrate from one domain to another between different forests.
    Thanks
    Anurag

    Would agree with Christoffer and migrate using ADFS, but before you can do this you will need to set up a trust between the two domains. Once this has been accomplished, you can run ADMT.
    http://technet.microsoft.com/en-us/library/cc740018(v=WS.10).aspx
    ADMT is a free download from Microsoft
    http://www.microsoft.com/en-us/download/details.aspx?id=8377
    ADMT Guide
    http://www.microsoft.com/en-us/download/details.aspx?id=19188
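    If you prefer to script the trust, something along these lines can create a two-way forest trust from the source side (a sketch using the .NET directory classes; "abc.net" is taken from your post, and you need admin rights in both forests):

        # Sketch: create a two-way forest trust from PowerShell (run in the source forest)
        $ctx = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext("Forest", "abc.net")
        $target = [System.DirectoryServices.ActiveDirectory.Forest]::GetForest($ctx)
        $source = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
        $source.CreateTrustRelationship($target, "Bidirectional")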
    Paul Bergson
    MVP - Directory Services
    MCITP: Enterprise Administrator
    MCTS, MCT, MCSE, MCSA, Security, BS CSci
    2012, 2008, Vista, 2003, 2000 (Early Achiever), NT4
    Twitter @pbbergs http://blogs.dirteam.com/blogs/paulbergson
    Please no e-mails, any questions should be posted in the NewsGroup.
    This posting is provided AS IS with no warranties, and confers no rights.
    I think you mean ADMT and not ADFS :)
    Enfo Zipper
    Christoffer Andersson – Principal Advisor
    http://blogs.chrisse.se - Directory Services Blog

  • Live Migration with Different CPU versions on the hosts, win 2012R2 Datacenter

    Hello
    This question has been asked in different forums, but when I read the threads I feel I get mixed answers.
    Most answers also date from 2012 (Win 2008 R2), and I don't know if they are still correct for Win 2012 R2.
    So now I'm asking the question myself and hoping to get a clear answer :)
    We are in the process of installing a new Hyper-V cluster using Win Srv 2012 R2 Datacenter as the OS.
    I'm planning to re-use some of the "old" servers from our current Hyper-V 2008 R2 cluster, removing them from that cluster and doing a clean installation of 2012 R2 Datacenter.
    But I will need to buy two new servers to manage this (with a newer version of CPU, same brand (AMD)).
    Old server: AMD Opteron 6172 (12 cores)
    New server: AMD Opteron 6344 (12 cores)
    Now my question:
    Will Live Migration work between these servers in my new cluster without me doing any special settings in Hyper-V or in the VMs, or what do I need to do to get this to work?
    /Anders

    Hi,
    It is important that all the hardware supporting Windows Server 2012 failover clusters be certified to work with Windows Server 2012.
    In a cluster where all the nodes are exactly the same, migration is fairly straightforward: there are no concerns about differences in hardware, and
    especially no concerns about different capabilities of the CPUs. Where CPU generations differ, as in your case, processor compatibility mode comes into play.
    More information:
    When to Use Processor Compatibility Mode to Migrate Virtual Machines
    http://technet.microsoft.com/en-us/magazine/gg299590.aspx
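    Since your old and new Opteron models differ, you will most likely need processor compatibility mode on the VMs. A sketch of enabling it (the VM must be powered off first; the VM name is a placeholder):

        # Sketch: enable processor compatibility mode so the VM can move between
        # hosts with different CPU generations from the same vendor
        Stop-VM -Name "TestVM"
        Set-VMProcessor -VMName "TestVM" -CompatibilityForMigrationEnabled $true
        Start-VM -Name "TestVM"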
    Hope this helps.

  • Server 2012 r2 live migration fails with hardware error

    Hello all, we just upgraded one of our Hyper-V hosts from Server 2012 to Server 2012 R2; previously we had live replication set up between it and another box on the network, which was also running Server 2012. After installing Server 2012 R2, when a live migration
    is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both Dell; currently we have a PowerEdge R910 running Server 2012 and a PowerEdge R900 running Server 2012 R2. The processor option "Migrate to a physical computer with a different processor version" is already checked,
    and this same VM was successfully being replicated live before the upgrade to Server 2012 R2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full reinstall, wiping the Server 2012 installation and installing Server 2012 R2; it was not an in-place
    upgrade.

    The only cause I’ve seen so far is virtual switches being named differently. I do remember that one of our VMs didn’t move, but we simply bypassed this problem using a one-time backup (VeeamZIP, more specifically).
    If it’s a one-time operation you can use the same procedure for the VMs in question -> back them up and restore them on the new server.
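    A quick way to compare the switch configuration on both hosts (a sketch; the host names are placeholders):

        # Sketch: virtual switch names must match on source and destination
        Get-VMSwitch -ComputerName "r910-host", "r900-host" |
            Select-Object ComputerName, Name, SwitchType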
    Kind regards, Leonardo.

  • Hyper-v 2012 r2 slow throughputs on network / live migrations

    Hi, maybe someone can point me in the right direction. I have 10 servers, 5 Dell R210s and 5 Dell R320s, which I have basically converted to standalone Hyper-V 2012 servers, so there is no clustering on any of them at the moment.
    Each server is configured with 2 x 1Gb NICs teamed via a virtual switch. Now, when I copy files between server 1 and server 2, for example, I see 100 MB/s throughput, but if I copy a file to server 3 at the same time, the copy load splits the 100 MB/s between
    the two copy processes. I was under the impression that if I copied 2 files to 2 totally different servers, the load would be split across the 2 NICs, effectively giving me 2Gb/s throughput, but this does not seem to be the case. I have played around with TCP/IP
    large send offload, jumbo packets, and disabling VMQ on the cards (they are Broadcoms) :-( but none of these settings really seem to make a difference.
    The other issue: if I live migrate a 12GB VM running only 2GB of RAM (effectively just an OS), it takes between 15 and 20 minutes to migrate. I have played around with the advanced settings (SMB, compression, TCP/IP), with no real game changers, BUT if I shut
    down the VM and migrate it, it takes just under 3.5 minutes to move across.
    I am really stumped here. I am busy in a test phase of Hyper-V but can't find any definitive documents relating to this stuff.

    Hi Mark,
    The servers (Hyper-V 2012 R2) are all basically configured with SCVMM 2012 R2: they all have teamed 1Gb pNICs in a virtual switch, with vNICs for the VM cloud, live migration, etc. The physical network is 2 Netgear GS724T switches which
    are interlinked; each server's first NIC is plugged into switch 1 and its second NIC into switch 2 (see image below). The team is set to switch-independent with Hyper-V Port load balancing.
    The R320 servers are running RAID 5 SAS drives; the R210s have 1TB drives, mirrored. The servers all use DAS storage; we have not looked at iSCSI, and a SAN is out of the question at the moment.
    I am currently testing between 2 x R320s and 2 x R210s. I am not copying data to the VMs yet; after testing the live migrations I decided to first test the transfer rates between the actual hosts by copying a 4GB file manually. I have been
    playing around with the offload settings and RSS. What I don't understand is that yesterday the copy between the servers was running at up to 228 MB/s (i.e. using both NICs), then a few hours later it was only copying at 50-60 MB/s, and it's now back at 113 MB/s, seemingly using only one NIC.
    I was under the impression that if you copy a file between 2 servers, the NICs could use the 2Gb bandwidth, but after reading many posts they say only one NIC. So how did the copies get up to 2Gb yesterday? Then again, if you copy files between 3 servers,
    each copy should use one NIC, basically giving you 2Gb/s, but this is again not being seen.
    Regards Keith
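
    One more thing worth checking is the team's load-balancing algorithm: with Address Hash, a single TCP stream is pinned to one team member, while the Dynamic mode in 2012 R2 can spread flows more evenly. A sketch (the team name is a placeholder):

        # Sketch: inspect and change the LBFO team's load-balancing mode
        Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm
        Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic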

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery
    Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote
    DR site. Both sites are/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in Failover Cluster Manager on both clusters and started replicating some VMs using Hyper-V Replica.
    Now, every time I attempt to use SCVMM to do a live migration of a VM that is protected using Hyper-V Replica to another host within the same cluster, the Migrate VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine <virtual machine name> which requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."
    When I ignore the error and do the live migration anyway, it completes successfully, with the info above. There doesn't seem to be any impact on the VM or its replication.
    When a host shuts down or is put into maintenance, the VM migrates successfully, again with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error appeared because I had attempted to manually configure the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager).
    However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. The error does not seem to have any impact on the high availability of my VM or on its replication: live migrations still occur successfully, and replication seems to carry on without any issues.
    However, it now has me concerned that a live migration may one day break replication of my VMs between both clusters.
    I have searched and searched, and I cannot find any mention in official or unofficial Microsoft channels of the compatibility of these two features.
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
    Please confirm: are Hyper-V Live Migration and Hyper-V Replica compatible with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner would be highly appreciated.
    D

    This can be considered a minor GUI bug.
    Let me explain: Live Migration and Hyper-V Replica are supported together on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because the Hyper-V Replica Broker role (in a cluster) is able to detect, receive and keep track of the VMs and their synchronization; the replication configuration follows the VMs themselves.
    If you try to live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will (as you can see) give you an error, though it should rather be an informative message instead.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: I just spoke to one of the PMs of HRM and they confirm that live migration is supported - and should work in this context.
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
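    In the meantime, replication health can be checked after each migration to be safe. A sketch (the VM name is a placeholder):

        # Sketch: verify replication state and health after a live migration
        Get-VMReplication -VMName "ProtectedVM" |
            Select-Object VMName, State, Health, PrimaryServer, ReplicaServer
        Measure-VMReplication -VMName "ProtectedVM"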
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Server 2012 R2 Hyper-V Cluster, VM blue screens after migration between nodes.

    I currently have a two-node Server 2012 R2 Hyper-V cluster (fully patched) with a Windows Server 2012 R2 iSCSI target.
    The VMs run fine all day long, but when I try to do a live/quick migration, the VM blue screens after about 20 minutes. The blue screen reports a "CRITICAL_STRUCTURE_CORRUPTION".
    I'm beginning to think it might be down to the CPUs, as one system has an E5-2640 v2 and the other an E5-2670 v3. Should I be able to migrate between two systems with these types of CPU?
    Tim

    Sorry Tim, is that all 50 blue screening if live migrated?
    Are they all on the latest Integration Services? Does a cluster validation complete successfully? Are the hosts patched to the same level?
    The fact that if you power them off and migrate them they boot fine does point to a processor incompatibility, with the memory BIN file not being accepted on the new host.
    A bit of a long shot, but the only other thing I can think of off the top of my head, if the compatibility option is checked, is to check the location of the BIN file while the VM is running, to make sure it's in the same place as the VHD/VHDX in the
    CSV storage where the VM is located, and not somewhere on the local host like C:\ProgramData\..., which would stop it being migrated to the new host when the VM is live migrated.
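    Checking where those files live while the VM is running is quick (a sketch; the VM name is a placeholder):

        # Sketch: confirm the VM's configuration/BIN paths sit on the CSV,
        # not on a local path like C:\ProgramData
        Get-VM -Name "ProblemVM" |
            Select-Object Name, Path, ConfigurationLocation, SmartPagingFilePath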
    Kind Regards
    Michael Coutanche
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

  • Live migration with zones

    Hi all,
    I have been reading the "SPARC Private Cloud" whitepapers on LDoms from Oracle. One thing really pops out from the text and confuses me.
    From pages 8 and 10:
    "VMs may also be securely live migrated or automatically started or restarted across any servers in their respective pools. *Zones are cold migrated*"
    "Secure live migration—Move domains off of servers that are undergoing planned maintenance. *Zones are cold-migrated*."
    Does this really mean that if I have zones inside an LDom guest, I can live migrate the LDom guest but not the zones, so the zones will go down if I do this? If so, what's the reason behind this? It's hard to grasp that the OS itself can be live migrated, but not the zones inside it that are using the same kernel, binaries, etc. from it...
    Links:
    https://blogs.oracle.com/infrared/entry/building_private_iaas_with_sparc
    http://www.oracle.com/us/groups/public/@otn/documents/webcontent/1659149.pdf
    - Jukka

    Lumi, I'm pretty sure they are comparing LDoms with zones on a standalone system (i.e., no LDoms).
    When you migrate a domain, everything the guest kernel is doing should emerge as it was before.
    The migration might take a bit longer than for the GZ alone, since you're using more virtual memory.
    To move an NGZ between standalone GZs, you would indeed have to halt, detach, attach, and boot it.
    But please don't take my word for it... feel free to try both methods for yourself. =-)
    The only limitation for zones in LDoms that I'm aware of: you cannot currently set the elastic power policy.
    Other than that, I don't see why you couldn't keep zones running inside your guest as it moves around.
    Hope that helps... -cheers, CSB

  • Live Migration failed using virtual HBA's and Guest Clustering

    Hi,
    We have a guest cluster configuration on top of a Hyper-V cluster. We are using Windows 2012 and Fibre Channel shared storage.
    The problem is with Live Migration. Sometimes when we move a virtual machine from node A to node B everything goes well, but when we try to move it back to node A, Live Migration fails. What we can see is that when we move the VM from node A to B and Live
    Migration completes successfully, the virtual ports remain active on node A, so when we try to move back from B to A, Live Migration fails because the virtual ports are already there.
    This doesn't happen every time.
    We have checked the zoning between the Hyper-V host cluster and the SAN, and the mapping between the physical HBAs and the vSANs on the Hyper-V side, and everything is OK.
    Our doubt is: what is the best practice for zoning the vHBAs on the VMs and our fabric? We set up our zoning using an alias for vHBA 1 with both of its WWNs (A and B) in the same object, and an alias for vHBA 2 with the corresponding WWNs (A and B). Is it
    better to create an alias for vHBA 1 -> A (with WWN A) and another alias for vHBA 1 -> B (with WWN B)?
    The guest cluster VMs have 98GB of RAM each. Could it be a timeout issue when Live Migration happens and the virtual ports remain active on the source node? When everything goes well, the VM moves from node A with vHBA WWN A to node B and stays there
    with vHBA WWN B. On the source node, the virtual ports should be removed automatically when the Live Migration completes. And that is the issue... sometimes the virtual ports (WWN A) stay active on the source node, and when we try to move the VM back, Live Migration
    fails.
    I hope you can understand the issue.
    Regards,
    Carlos Monteiro.

    Hi,
    Hope the following links may help.
    To support live migration of virtual machines across Hyper-V hosts while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B
    WWN addresses during a live migration. This ensures that all LUNs are available on the destination host before the migration and that no downtime occurs during the migration.
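    You can list both WWN sets for each virtual HBA from the host, which helps when verifying that the zoning covers both. A sketch (the VM name is a placeholder):

        # Sketch: show the Set A / Set B WWPNs that Hyper-V alternates between
        Get-VMFibreChannelHba -VMName "GuestNode1" |
            Select-Object VMName, SanName, WorldWidePortNameSetA, WorldWidePortNameSetB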
    Hyper-V Virtual Fibre Channel Overview
    http://technet.microsoft.com/en-us/library/hh831413.aspx
    More information:
    Hyper-V Virtual Fibre Channel Troubleshooting Guide
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    Hyper-V Virtual Fibre Channel Design Guide
    http://blogs.technet.com/b/privatecloud/archive/2013/07/23/hyper-v-virtual-fibre-channel-design-guide.aspx
    Hyper-V virtual SAN
    http://salworx.blogspot.co.uk/
    Thanks.

  • Live Migration : virtual Fibre Channel vSAN

    I can do a live migration from one node to another with no errors. The problem/question I have is: is the live migration really a live migration?
    When I do a live migration from the cluster or SCVMM, it saves and starts the virtual machine, which for me is not a live migration.
    I have describe in more details : http://social.technet.microsoft.com/Forums/en-US/a52ac102-4ea3-491c-a8c5-4cf4dd14768d/synthetic-fibre-channel-hba-live-migration-savestopstart?forum=winserverhyperv
    BlatniS

    Virtual Fibre Channel made sense in pre-R2 times, when there was no shared VHDX and you had to somehow provide fault-tolerant shared storage to a guest VM cluster (spawning an iSCSI target on top of FC was slow and ugly). Now there's no point in putting one into
    production, so if you have issues just use a shared VHDX. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Shared VHDX
    http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
    Shared VHDX is much more flexible and has better performance. 
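    Setting one up is straightforward (a sketch; the path, size and VM names are placeholders, and the VHDX should sit on a CSV or SOFS share):

        # Sketch: create a shared VHDX and attach it to both guest-cluster nodes
        New-VHD -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -Fixed -SizeBytes 50GB
        Add-VMHardDiskDrive -VMName "Node1" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations
        Add-VMHardDiskDrive -VMName "Node2" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations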
    Good luck!
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Losing pings on live migration

    I currently have a 2-node OVM 3.1.1 cluster, fully patched, and have noticed that I lose network connectivity while performing a live migration. I have separate networks for VMs and for live migration. I am new to OVM, so I do not know if this is typical or not. It appears that every time I migrate a VM, the client loses connectivity to the network for anywhere from 5 to 20 seconds.

    Are you talking about lost pings to the guest VM that is being live migrated? If yes, I'd suppose that to be normal behaviour, since there has to be some interruption once the memory contents are finally synchronized between the source and the target VM server.
    Depending on how much RAM is used by the running VM and the speed of your network, this "outage" might vary in length, but there will always be some time span during which pings to the VM being live migrated get lost.
