Live migration trigger on resource crunch in Hyper-V 2012 or R2
Like VMware DRS, which can trigger migration of a virtual machine from one datastore to another or from one host to another: do we have the same kind of mechanism in Hyper-V 2012?
I have five Hyper-V 2012 hosts in a single cluster. I want that, if any host faces a memory resource crunch, VMs are migrated to another host.
Thanks, Ravinder
Ravi
SCVMM has a feature called Dynamic Optimization.
Dynamic Optimization can be configured on a host group to migrate virtual machines within host clusters with a specified frequency and aggressiveness. Aggressiveness determines the amount of load imbalance that is required to initiate a migration during Dynamic Optimization. By default, virtual machines are migrated every 10 minutes with medium aggressiveness. When configuring frequency and aggressiveness for Dynamic Optimization, an administrator should weigh the resource cost of additional migrations against the advantages of balancing load among hosts in a host cluster. By default, a host group inherits Dynamic Optimization settings from its parent host group.
Dynamic Optimization can be set up for clusters with two or more nodes. If a host group contains stand-alone hosts or host clusters that do not support live migration, Dynamic Optimization is not performed on those hosts. Any hosts that are in maintenance mode are also excluded from Dynamic Optimization. In addition, VMM only migrates highly available virtual machines that use shared storage. If a host cluster contains virtual machines that are not highly available, those virtual machines are not migrated during Dynamic Optimization.
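If you prefer to script this, the VMM PowerShell module exposes the same knobs. The following is only a sketch: the cmdlet names come from the VMM module, but the cluster name is made up and parameter details vary by VMM version, so verify with Get-Help first.

```powershell
# Sketch: inspect and trigger Dynamic Optimization from the
# VMM PowerShell module (names below are illustrative)
Import-Module virtualmachinemanager

# Review the Dynamic Optimization settings in effect
Get-SCDynamicOptimizationConfiguration | Format-List

# Run an on-demand optimization pass against one cluster
Get-SCVMHostCluster -Name "HVCluster1" | Start-SCDynamicOptimization
```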
http://technet.microsoft.com/en-us/library/gg675109.aspx
Cheers!
Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.
InsideVirtualization.com
Similar Messages
-
Live Migration between two WS2012 R2 Clusters with SCVMM 2012 R2 creates multiple objects on Cluster
Hi,
I'm seeing an issue when migrating VMs between two 2012 R2 Hyper-V clusters, using VMM 2012 R2, that have their storage provided by a 4-node Scale-Out File Server cluster that the two clusters share.
A migration between the two clusters is successful and the VM is operational, but I'm left with two roles added to the cluster the VM has moved to, instead of the expected one.
For example: Say I have a VM that was created on cluster A with SCVMM, resulting in a name of : "SCVMM Test-01 Resources"
I then do a live migration to cluster B, which has access to the same storage, and I end up with two new roles instead of one.
"SCVMM abw-app-fl-01 Resources" and "abw-app-fl-01"
The "SCVMM abw-app-fl-01 Resources" is left in an unknown state and "abw-app-fl-01" is operational.
I can safely delete "SCVMM abw-app-fl-01 Resources" and everything still works but it looks like something is failing during the process.
Has anyone else seen this?
I'll probably have one of my guys open a support ticket in the new year but was wondering if anyone else is seeing this.
Kind regards,
Jas :)
In my case the VMs were created in VMM in my one and only Hyper-V cluster (which was created and is managed by VMM).
All highly available VMs have an FCM role named "SCVMM vmname", where vmname is the name of the VM in VMM. On top of that a lot of VMs, but not all, have a second role named vmname. Lots of "name" in that sentence.
All VMs that have duplicates are using the role named vmname.
I thought it had to do with whether a VM had been migrated, so I took one that had never been migrated and migrated it. It did not get a duplicate.
Is there any progress on this? -
Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster using Fibre Channel adapters with Virtual SAN configured on the Hyper-V hosts.
I have read several articles about this issue, such as these:
https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv
http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
But haven't been able to fix my issue.
The Virtual SAN is configured on every Hyper-V host node in the cluster, and every VM has 2 Fibre Channel adapters configured.
All the World Wide Names are configured both on the FC Switch as well as the FC SAN.
All the drivers for the FC Adapter in the Hyper-V Hosts have been updated to their latest versions.
The strange thing is that the issue does not affect all of the VMs: some VMs with FC adapters configured live migrate just fine, while others get this error.
Quick migration works without problems.
We even tried removing the FC adapters and creating new ones on a VM with problems (we had to configure the switch and SAN with the new WWNs and all), but ended up with the same problem.
At first we thought it was related to the hosts, but since some VMs with FC adapters do live migrate, we tried migrating those on every host and everything worked well.
My guess is that it is something related to the VMs themselves, but I haven't been able to figure out what it is.
Any ideas on how to solve this is deeply appreciated.
Thank you!
Eduardo Rojas
Hi Eduardo,
How are things going ?
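One thing worth double-checking with virtual Fibre Channel: each virtual FC adapter carries two WWPN address sets (Set A and Set B), and live migration alternates between them, so both sets must be zoned on the switch and registered on the SAN. A hedged sketch to enumerate them from a host (cmdlet from the Hyper-V module; property names as I recall them, so verify in your environment):

```powershell
# List both WWPN sets for every virtual FC adapter; if only the
# currently active set is zoned, live migration can fail while
# quick migration still works
Get-VM | Get-VMFibreChannelHba |
    Select-Object VMName, SanName,
        WorldWidePortNameSetA, WorldWidePortNameSetB
```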
Best Regards
Elton Ji
We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time.
Thanks for helping make community forums a great place. -
Hyper-V guest SQL 2012 cluster live migration failure
I have two IBM HX5 nodes connected to an IBM DS5300. A Hyper-V 2012 cluster was built on the blades, and in the HV cluster six virtual machines were created, connected to the DS5300 via HV Virtual SAN. These VMs form a guest SQL cluster. Database files are placed on DS5300 storage and are available through VM Fibre Channel adapters. The IBM MPIO module is installed on all hosts and VMs.
SQL Server instances work without problems. But when I try to live migrate a SQL VM to another HV node, the SQL instance fails. In the SQL error log I see:
2013-06-19 10:39:44.07 spid1s Error: 17053, Severity: 16, State: 1.
2013-06-19 10:39:44.07 spid1s SQLServerLogMgr::LogWriter: Operating system error 170(The requested resource is in use.) encountered.
2013-06-19 10:39:44.07 spid1s Write error during log flush.
2013-06-19 10:39:44.07 spid55 Error: 9001, Severity: 21, State: 4.
2013-06-19 10:39:44.07 spid55 The log for database 'Admin' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
2013-06-19 10:39:44.07 spid55 Database Admin was shutdown due to error 9001 in routine 'XdesRMFull::CommitInternal'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
2013-06-19 10:39:44.31 spid36s Error: 17053, Severity: 16, State: 1.
2013-06-19 10:39:44.31 spid36s fcb::close-flush: Operating system error (null) encountered.
2013-06-19 10:39:44.31 spid36s Error: 17053, Severity: 16, State: 1.
2013-06-19 10:39:44.31 spid36s fcb::close-flush: Operating system error (null) encountered.
2013-06-19 10:39:44.32 spid36s Error: 17053, Severity: 16, State: 1.
2013-06-19 10:39:44.32 spid36s fcb::close-flush: Operating system error (null) encountered.
2013-06-19 10:39:44.32 spid36s Error: 17053, Severity: 16, State: 1.
2013-06-19 10:39:44.32 spid36s fcb::close-flush: Operating system error (null) encountered.
2013-06-19 10:39:44.33 spid36s Starting up database 'Admin'.
2013-06-19 10:39:44.58 spid36s 349 transactions rolled forward in database 'Admin' (6:0). This is an informational message only. No user action is required.
2013-06-19 10:39:44.58 spid36s SQLServerLogMgr::FixupLogTail (failure): alignBuf 0x000000001A75D000, writeSize 0x400, filePos 0x156adc00
2013-06-19 10:39:44.58 spid36s blankSize 0x3c0000, blkOffset 0x1056e, fileSeqNo 1313, totBytesWritten 0x0
2013-06-19 10:39:44.58 spid36s fcb status 0x42, handle 0x0000000000000BC0, size 262144 pages
2013-06-19 10:39:44.58 spid36s Error: 17053, Severity: 16, State: 1.
2013-06-19 10:39:44.58 spid36s SQLServerLogMgr::FixupLogTail: Operating system error 170(The requested resource is in use.) encountered.
2013-06-19 10:39:44.58 spid36s Error: 5159, Severity: 24, State: 13.
2013-06-19 10:39:44.58 spid36s Operating system error 170(The requested resource is in use.) on file "v:\MSSQL\log\Admin\Log.ldf" during FixupLogTail.
2013-06-19 10:39:44.58 spid36s Error: 3414, Severity: 21, State: 1.
2013-06-19 10:39:44.58 spid36s An error occurred during recovery, preventing the database 'Admin' (6:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected,
contact Technical Support.
In windows system log I see a lot of warnings like this:
- <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System>
<Provider
Name="Microsoft-Windows-Ntfs" Guid="{3FF37A1C-A68D-4D6E-8C9B-F79E8B16C482}" />
<EventID>140</EventID>
<Version>0</Version>
<Level>3</Level>
<Task>0</Task>
<Opcode>0</Opcode>
<Keywords>0x8000000000000008</Keywords>
<TimeCreated
SystemTime="2013-06-19T06:39:44.314400200Z" />
<EventRecordID>25239</EventRecordID>
<Correlation
/>
<Execution
ProcessID="4620" ThreadID="4284" />
<Channel>System</Channel>
<Computer>sql-node-5.local.net</Computer>
<Security
UserID="S-1-5-21-796845957-515967899-725345543-17066" />
</System>
- <EventData>
<Data Name="VolumeId">\\?\Volume{752f0849-6201-48e9-8821-7db897a10305}</Data>
<Data Name="DeviceName">\Device\HarddiskVolume70</Data>
<Data Name="Error">0x80000011</Data>
</EventData>
</Event>
The system failed to flush data to the transaction log. Corruption may occur in VolumeId: \\?\Volume{752f0849-6201-48e9-8821-7db897a10305}, DeviceName: \Device\HarddiskVolume70.
({Device Busy}
The device is currently busy.)
There aren't any errors or warnings on the HV hosts.
Hello,
I am trying to involve someone more familiar with this topic for a further look at this issue. Some delay might be expected while the case is transferred. Your patience is greatly appreciated.
Thank you for your understanding and support.
Regards,
Fanny Liu
If you have any feedback on our support, please click here.
Fanny Liu
TechNet Community Support -
Hyper-V 2012 R2 - live migration dynamic memory issue
Hi..
We use a two-node Hyper-V cluster.
On each node run 5 VMs that share the 64GB of system memory.
The VMs are configured to use dynamic memory. The sum of the startup (minimum) memory of the 10 VMs is currently 50GB.
So now our problem:
If we try to drain roles on node A for maintenance purposes, two VMs stay on node A because each of them currently uses 25GB of memory, so node B does not have enough free memory to host them.
That sounds logical
But as all VMs are configured to use dynamic memory, shouldn't they shrink their memory to make enough space for all VMs? If we shut down all VMs, we can move them to one node and everything works fine.
Is it by design that live migration does not trigger the VMs to shrink their memory?
Best regards, Alexander Zirbes
Hi Alexander,
Dynamic memory means that memory can increase or decrease based on the demand of the VM, not based on the load of the Hyper-V host. If the guest OS of the VM doesn't need that much memory, the memory will be released. I don't think live migration plays any role in shrinking the allocated memory of a VM.
Since you are using a two-node cluster, you should only load 50% of the resources on each host so that, in case one node fails, the other node can host all the VMs.
This calculation becomes more efficient with more nodes. I have a six-node cluster and load each host up to 80%, so that in case of a single node failure the VMs can be split across the other five nodes without issues.
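The headroom rule above is just arithmetic: a cluster of N nodes that must survive one node failure can safely load each host to about (N-1)/N of its capacity. A quick sketch:

```powershell
# Safe per-host load for an N-node cluster that must absorb
# the VMs of one failed node
foreach ($nodes in 2, 6) {
    $safePct = [math]::Floor(($nodes - 1) / $nodes * 100)
    "$nodes nodes: load each host to at most $safePct%"
}
# 2 nodes -> 50%, 6 nodes -> 83%
```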
Cheers,
Shaba
-
Hyper-v 2012 r2 slow throughputs on network / live migrations
Hi, maybe someone can point me in the right direction. I have 10 servers, 5 Dell R210s and 5 Dell R320s, which I have basically converted to standalone Hyper-V 2012 servers, so there is no clustering on any of them at the moment.
Each server is configured with 2 1Gb NICs teamed via a virtual switch. When I copy files between server 1 and server 2, for example, I see 100 MB/s throughput, but if I copy a file to server 3 at the same time, the copies split the 100 MB/s between the 2 processes. I was under the impression that if I copied 2 files to 2 totally different servers, the load would be split across the 2 NICs, effectively giving me 2 Gb/s of throughput, but this does not seem to be the case. I have played around with TCP large send offload and jumbo packets, and disabled VMQ on the cards (they are Broadcoms :-( ), but none of these settings really seem to make a difference.
The other issue: if I live migrate a 12GB VM running with only 2GB of RAM, effectively just an OS, it takes between 15 and 20 minutes to migrate. I have played around with the advanced settings (SMB, compression, TCP/IP), with no real game changers. But if I shut down the VM and migrate it, it takes just under 3 and a half minutes to move across.
I am really stumped here. I am busy with a test phase of Hyper-V but can't find any definitive documents relating to this stuff.
Hi Mark,
The servers (Hyper-V 2012 R2) are all basically configured with SCVMM 2012 R2: they all have teamed 1Gb pNICs in a virtual switch, with vNICs for the VM cloud, live migration etc. The physical network is 2 Netgear GS724T switches which are interlinked; each server's first NIC is plugged into switch 1 and the second NIC into switch 2. The team is set to switch-independent Hyper-V port load balancing.
The R320 servers are running RAID 5 SAS drives; the R210s have 1TB drives mirrored. All the servers use DAS storage; we have not looked at iSCSI yet, and a SAN is out of the question at the moment.
I am currently testing between 2x R320s and 2x R210s. I am not copying data to the VMs yet; I am testing the transfer between the actual hosts by copying a 4GB file manually. After testing the live migrations I decided to first check the transfer rates between the servers, and I have been playing around with the offload settings and RSS. What I don't understand is that yesterday the copy between the servers ran at up to 228 MB/s, i.e. using both NICs, then a few hours later it was only copying at 50-60 MB/s, and it is now back at 113 MB/s, seemingly using only one NIC.
I was under the impression that a file copy between 2 servers could use the full 2 Gb of team bandwidth, but after reading many posts they say only one NIC is used. So how did the copies get up to 228 MB/s yesterday? Then again, if you copy files to 2 different servers, each copy should use one NIC, basically giving you 2 Gb/s in total, but this is again not being seen.
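This pattern matches the team's load-balancing mode: with a switch-independent team in Hyper-V port (or address-hash) mode, any single TCP stream, such as one file copy, is pinned to one physical NIC, so roughly 113 MB/s is the expected ceiling per copy. A hedged sketch for checking and changing the mode (the team name "Team1" is an assumption):

```powershell
# Show the teaming mode and load-balancing algorithm
Get-NetLbfoTeam |
    Select-Object Name, TeamingMode, LoadBalancingAlgorithm

# On 2012 R2, the Dynamic algorithm can spread outbound flows
# across team members
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic
```

Even with Dynamic, a single flow is not guaranteed to exceed one member's line rate; SMB Multichannel over separate NICs is the usual way to aggregate a single file copy.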
Regards Keith -
Hyper-V replica vs Shared Nothing Live Migration
Shared Nothing Live Migration lets you transport your VM over the WAN without shutting it down (how much time it takes on an I/O-intensive VM is another story).
Hyper-V Replica does not let you perform the DR switch without a shutdown operation on the primary-site VM!
why can't it take the VM live to the DR ?
That's because if we use Shared Nothing across the WAN, we don't use the data that Hyper-V Replica has already replicated, and it also breaks everything Hyper-V Replica does.
Point is: how do we take the VM to DR in a running state? What is the best way to do that?
Shahid Roofi
Hi Shahid,
Hyper-V Replica is designed as a DR technology, not as a technique to move VMs. It assumes that, should you require it, the source VM would probably be offline, and therefore you would be powering up the passive copy from a previous point in time, as it is not a true synchronous replica. It does give you the added benefit of being able to run a planned failover which, as you say, powers off the VM first, runs a final sync, then powers the new VM up. Obviously you can't have the duplicate copy of this VM running all the time at the remote site, otherwise you would have a split-brain situation for network traffic.
Like live migration, shared nothing live migration is a technology aimed at moving a VM, but as you know it does this without shared storage and only requires a network connection. When initiated it moves the whole VM: it copies the virtual drive and memory, sends machine writes to both copies, and cuts over to the new VM once they match. With regards to the speed, I assume you have SNLM set up to compress data before sending it across the wire?
If you want a true live migration between remote sites, one way would be to have a SAN array between both sites synchronously replicating data, then stretch the Hyper-V cluster across both sites. Obviously this is a very expensive solution but perhaps
the perfect scenario.
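For reference, the compression option mentioned above is a per-host setting, and a shared nothing move is a single Move-VM call. A hedged sketch (host and path names are made up):

```powershell
# Compress live-migration traffic before it crosses the WAN
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Shared nothing live migration: moves storage and running state
Move-VM -Name "VM01" -DestinationHost "dr-host-01" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"
```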
Kind Regards
Michael Coutanche
Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose. -
Hyper-V Lab and Live Migration
Hi Guys,
I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations.
The problem I have is that my shared storage is a bit of a cheat: I have one disk assigned in each host, and each host has StarWind Virtual SAN installed. Host A has an iSCSI connection to host B's storage and vice versa.
The issue this causes is that when the hosts shut down (because this is a lab, it's only on when required), the cluster is in a mess when it starts up, e.g. VMs missing etc. I can recover from it but it takes time. I tinkered with the HA settings and the VM settings so they restarted / didn't restart etc., but with no success.
My question is: can I use something like SMB3 shared storage on one of the hosts to perform live migrations, but without a full-blown cluster? I know I can do Shared Nothing Live Migrations but this takes time.
Any ideas on a better solution (rather than actually buying proper shared storage ;-) )? Or, if shared storage is the only option to do this cleanly, what would people recommend, bearing in mind I have SSDs in the Hyper-V hosts?
Hope all that makes sense.
Hi Sir,
>>I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations.
As you mentioned, you have 2 Hyper-V hosts and use StarWind to provide an iSCSI target (this is the same as my first lab environment); then I realized that I needed one or more additional hosts to simulate a more production-like scenario.
But if you have more physical computers, you may try other projects.
Also please refer to this thread :
https://social.technet.microsoft.com/Forums/windowsserver/en-US/e9f81a9e-0d50-4bca-a24d-608a4cce20e8/2012-r2-hyperv-cluster-smb-30-sofs-share-permissions-issues?forum=winserverhyperv
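On the original question: two non-clustered Hyper-V hosts can live migrate a VM whose files sit on an SMB 3.0 share without moving any storage, provided migration is enabled on both hosts and, for remotely initiated moves, Kerberos constrained delegation is configured in AD. A hedged sketch (VM and host names are assumptions):

```powershell
# On both hosts: enable live migration and choose Kerberos auth
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Move only the running state; the VHDX stays on the SMB share
Move-VM -Name "LabVM" -DestinationHost "HostB"
```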
Best Regards
Elton Ji
-
Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager
Hi,
Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery Manager?
I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote DR site. Both sites are/will be connected to each other through dark fibre.
I manually configured Hyper-V Replica in Failover Cluster Manager on both clusters and started replicating some VMs using Hyper-V Replica.
Now every time I attempt to use SCVMM to do a live migration of a VM that is protected using Hyper-V Replica to another host within the same cluster, the Migrate VM Wizard gives me the following "Rating Explanation" error:
"The virtual machine <virtual machine name>, which requires Hyper-V Recovery Manager protection, is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."
When I ignore the error and do the live migration anyway, it completes successfully despite the warning above, and there doesn't seem to be any impact on the VM or its replication.
When a host shuts down or is put into maintenance, the VM migrates successfully, again with no noticeable impact on users or replication.
When I stop replication of the VM, the error goes away.
Initially, I thought this error appeared because I had attempted to manually configure the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager). However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. The error does not seem to have any impact on the high availability of my VM or on its replication; live migrations still occur successfully and replication seems to carry on without any issues.
However, it now has me concerned that a live migration may one day occur and break replication of my VMs between both clusters.
I have searched, and searched and searched, and I cannot find any mention in official or un-official Microsoft channels, on the compatibility of these two features.
I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
Please confirm: are Hyper-V Live Migration and Hyper-V Replica compatible with each other?
If they are, any link to further documentation on configuring these services so that they work in a fully supported manner will be highly appreciated.
D
This can be considered a minor GUI bug.
Let me explain. Live Migration and Hyper-V Replica are supported on both Windows Server 2012 and 2012 R2 Hyper-V.
This is because we have the Hyper-V Replica Broker role (in a cluster) that is able to detect, receive and keep track of the VMs and the synchronizations. The configuration related to VMs enabled for replication follows the VMs themselves.
If you try to live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will (as you can see) give you an error, though it should rather be an informative message instead.
Intelligent placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM best possible can run, and that is why we are seeing this message here.
I have personally reported this as a bug. I will check on this one and get back to this thread.
Update: I just spoke to one of the PMs of HRM, and they confirmed that live migration is supported and should work in this context.
Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
-kn
Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com ) -
Hyper-V live migration not completing when using a VM with large RAM
hi,
I have a two-node Server 2012 R2 Hyper-V cluster which uses a 100GB CSV and 128GB RAM across 2 physical CPUs (approx. 7.1GB used when the VM is not booted), and 1 VM running Windows 7 which has 64GB RAM assigned. The VHD size is around 21GB and the BIN file is 64GB (by the way, do we have to have that? Can we get rid of the BIN file?).
NUMA is enabled on both servers. When I attempt to live migrate I get event 1155 in the cluster events; the LM starts and gets into 60-something % but then fails. The event details are "The pending move for the role 'New Virtual Machine' did not complete."
However, when I lower the amount of RAM assigned to the VM to around 56GB (56+7 = 63GB) the LM works, and any amount of RAM below this allows LM to succeed. But it seems that if the total used RAM on the physical server (including that used for the VMs) is 64GB or above, the LM fails... a coincidence, since the server has 64GB per CPU?
why would this be?
many thanks
Steve
Hi,
I turned NUMA spanning off on both servers in the cluster. I assigned 62GB, 64GB and 88GB, and each time the VM started up with no problems. With 62GB the LM completed, but I can't get LM to complete with 64GB+.
My server is an HP DL380 G8 with the latest BIOS (I just updated it today as it was a couple of months behind). I can't see any settings in the BIOS relating to NUMA, so I'm guessing it is enabled and can't be changed.
If I run the cmdlet as admin I get ProcessorsAvailability : {0, 0, 0, 0...}; if I run it as a standard user I get only ProcessorsAvailability.
My memory and CPU config are as follows. Hyper-threading is enabled for the CPU, but I don't think that would make a difference?
Processor 1 1 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor 1 4 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor 1 9 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor 1 12 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor 2 1 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor 2 4 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor 2 9 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor 2 12 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
Processor Name
Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
Processor Status
OK
Processor Speed
2400 MHz
Execution Technology
12/12 cores; 24 threads
Memory Technology
64-bit Capable
Internal L1 cache
384 KB
Internal L2 cache
3072 KB
Internal L3 cache
30720 KB
Processor 2
Processor Name
Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
Processor Status
OK
Processor Speed
2400 MHz
Execution Technology
12/12 cores; 24 threads
Memory Technology
64-bit Capable
Internal L1 cache
384 KB
Internal L2 cache
3072 KB
Internal L3 cache
30720 KB
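To see how Hyper-V itself views the NUMA topology, and whether spanning is allowed, something like the following can be run on each node (a sketch; cmdlets from the Hyper-V module):

```powershell
# Per-NUMA-node memory and processors as Hyper-V sees them; a VM
# assigned more memory than one node holds must span nodes
Get-VMHostNumaNode

# Whether this host lets VMs span NUMA nodes at all
Get-VMHost | Select-Object Name, NumaSpanningEnabled
```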
thanks
Steve -
Hyper-V 2012 R2 live migration issue at 2003 domain functional level
Hi team,
I recently built a 2012 R2 Hyper-V cluster with three nodes. Everything was working fine without any issue, and the cluster was also working fine. Later I came across one issue when I tried to live migrate a virtual machine from one host to another: it failed every time, while quick migration works. I went through a few articles and found it is a known issue with Hyper-V 2012 R2 where the domain functional level is set to 2003. Although a hotfix has been provided, there is no solution:
http://support.microsoft.com/kb/2838043
Please let me know if anyone has faced a similar issue and was able to resolve it with any hotfix. My hosts are updated.
Thanks
Ravindra
Ravi
Hi Ravi1987,
KB2838043 applies to Server 2012 nodes. Could you give us the related cluster error event ID? You can also refer to the following article to check whether your cluster network binding order is correct:
Configuring Windows Failover Cluster Networks
http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
You can try installing the recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters first, then monitor the issue again.
The KB download:
Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters
http://support.microsoft.com/kb/2920151
More information:
Windows Server 2008 R2 Live Migration – “The devil may be in the networking details.”
http://blogs.technet.com/b/askcore/archive/2009/12/10/windows-server-2008-r2-live-migration-the-devil-may-be-in-the-networking-details.aspx
I’m glad to be of help to you! -
Hyper-V Failover Cluster Live migration over Virtual Adapter
Hello,
Currently we have a test environment of 3 Hyper-V hosts which each have 2 NIC teams: a LAN team and a Live/Mgmt team.
In Hyper-V we created a virtual switch (which is the Live/Mgmt NIC team).
We want to separate Mgmt and Live with VLANs. To do this we created 2 virtual adapters, Mgmt and Live, assigned IP addresses, and 2 VLANs (Mgmt 10, Live 20).
Now here is our problem: in Failover Cluster Manager you cannot select the virtual adapter (Live), only the virtual switch both are on, meaning live migration simply uses the vSwitch instead of the virtual adapter.
Either it's not possible to separate live migration with a VLAN this way, or maybe there are PowerShell commands to bind live migration to a virtual adapter?
Greetings, Selmer
It can be done in PowerShell, but it's not intuitive.
In Failover Cluster Manager, right-click Networks and open Live Migration Settings. Checked networks are allowed to carry live migration traffic; networks higher in the list are preferred.
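A hedged PowerShell equivalent of that dialog: live-migration networks are controlled by an exclusion list on the "Virtual Machine" cluster resource type, so you exclude every cluster network except the one you want. The cluster network name "Live" below is an assumption:

```powershell
# Allow live migration only on the cluster network named "Live"
$exclude = Get-ClusterNetwork |
    Where-Object { $_.Name -ne "Live" } |
    ForEach-Object { $_.Id }

Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks `
        -Value ($exclude -join ";")
```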
Eric Siron
Altaro Hyper-V Blog
I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts. -
There is a Hyper-V cluster with 2 nodes; Windows Server 2012 R2 is used as the operating system.
Trying to live migrate a test VM from node 1 to node 2, I get error 21502:
Live migration of 'Virtual Machine test' failed.
'Virtual Machine test' failed to fixup network settings. Verify VM settings and update them as necessary.
VM has Network Adapter connected to Virtual switch. This vSwitch has Private network as connection type.
If I set virtual switch property to "Not connected" in Network Adapter settings of VM I get successful migration.
All VM's that are not connected to any private networks (virtual switches with private network connection type) can be live migrated without any issues.
Is there any official reference related to Hyper-V live migration of VMs that have the "private network" connection type?
I can live migrate virtual machines with adapters on private switches without error. Aside from having the wrong name, the only way I can get it to fail is if I make the switch on one host use a different QoS minimum mode than the other and enable QoS on the virtual adapter. Even then I get a different message than what you're getting; I only get that one with differently named switches.
There is a PowerShell cmdlet available to see why a guest won't run on another host, and it can also be used to get the VM to live migrate.
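The cmdlet in question is presumably Compare-VM; a hedged sketch of how it is typically used (VM and host names are assumptions):

```powershell
# Produce a compatibility report explaining why the VM cannot
# run on the destination host
$report = Compare-VM -Name "Virtual Machine test" `
    -DestinationHost "node2"
$report.Incompatibilities | Format-Table MessageId, Message

# Once the incompatibilities are resolved, the same report can be
# handed back to Move-VM to perform the migration:
# Move-VM -CompatibilityReport $report
```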
But there is no way to truly Live Migrate three virtual machines in perfect lockstep. Even if you figure out whatever is preventing you from migrating these machines, there will still be periods during Live Migration where they can't communicate across that
private network. You also can't guarantee that all these guests will always be running on the same host without preventing Live Migration in the first place. This is why there really isn't anyone doing what you're trying to do. I suggest you consider another
isolation solution, like VLANs.
Eric Siron Altaro Hyper-V Blog
I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
"Every relationship you have is in worse shape than you think." -
Hyper-V 2012 R2 VMQ live migrate (shared nothing) Blue Screen
Hello,
I have a Windows Server 2012 R2 Hyper-V server, fully patched, new install. There are two Intel network cards, and a NIC team is configured from them (Windows NIC teaming). There are also some "virtual" NICs with assigned VLAN IDs.
If I enable VMQ on these NICs and do a shared nothing live migration of a VM, the host gets a BSOD. What can be wrong?
Hi,
I would like to check if you need further assistance.
Thanks.
-
IP Conflict when doing a Hyper-V Live Migration
Hello,
I am using Windows Server 2012 R2 Update 1 on a Fujitsu Cluster in a Box.
I have the following issue:
When I do a manual live migration of a VM, the VM's NIC gets an IP conflict with itself and stops responding.
When I run ipconfig inside that VM, I see an APIPA address assigned (because of the conflict). If I then disable and re-enable the NIC inside the guest, the IP works correctly again with the static address I assigned inside the guest.
When I do a quick migration, there is no IP conflict for that VM.
Also, when live migration is initiated due to a reboot of the owning cluster node, there is no issue with the VM's IP address.
It happens only when doing a manual live migration. Maybe the Cisco switch the cluster is attached to is too slow to register the quick MAC/IP change?
Is there anything I can do to fix the issue? Ideas?
Answering my own question after doing some research:
The issue is related to the networking equipment in my case.
Setting ArpRetryCount to 0 in the registry under Tcpip\Parameters fixed the issue in the VM.
Interestingly enough, I found the solution in the VMware KB :-)
See: VMware KB 1028373
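For reference, the registry change described above can be scripted inside the guest (reboot afterwards); the path and value come straight from the fix mentioned:

```powershell
# Stop TCP/IP from treating its own gratuitous ARP after the
# migration as an address conflict
Set-ItemProperty `
    -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" `
    -Name ArpRetryCount -Value 0 -Type DWord
```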