PowerShell to Live Migrate

I have a scenario here.  I have 10 clusters (2 nodes per cluster).
Every month I would like to live migrate a single, specific, non-mission-critical VM on each cluster, just to verify that Live Migration still works on that cluster.  I can also execute the PowerShell script remotely.  If the
VM is residing on Host1, the script should live migrate it to Host2.  If the VM is residing on Host2, it should migrate it to Host1.
The purpose of the script is to live migrate only that single VM instead of the entire cluster.  It is executed manually, purely to confirm every month that Live Migration works.  I know that I can do it from the GUI, but as more clusters come in,
I won't be able to keep doing it manually.  I would prefer a PowerShell script that does it in a single execution.
It could probably read from a CSV file, e.g. with the columns
LOCATION,CLUSTER_NODE,HOST1,HOST2,VM_App1
After the PowerShell script is executed, it should generate a report of successful and failed migrations.
Thank you.

Here is some background on Live Migration.  Please read the complete article.
http://blogs.technet.com/b/keithmayer/archive/2012/10/04/31days_2d00_winserv_2d00_livemigration.aspx
¯\_(ツ)_/¯
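
For what it's worth, here is a minimal sketch of the kind of script described above, using the FailoverClusters module from a management machine. It assumes CLUSTER_NODE holds the cluster name, HOST1/HOST2 the two node names, and VM_App1 the clustered VM role name (and that the role name matches the VM name); the file names are placeholders:

# Read the CSV, migrate each listed VM to whichever of its two hosts it is not on,
# and write a success/failure report.
$results = foreach ($row in Import-Csv -Path .\LiveMigrationTest.csv) {
    try {
        $group  = Get-ClusterGroup -Cluster $row.CLUSTER_NODE -Name $row.VM_App1 -ErrorAction Stop
        # Pick whichever of the two hosts the VM is NOT currently on
        $target = if ($group.OwnerNode.Name -eq $row.HOST1) { $row.HOST2 } else { $row.HOST1 }
        Move-ClusterVirtualMachineRole -Cluster $row.CLUSTER_NODE -Name $row.VM_App1 -Node $target -MigrationType Live -ErrorAction Stop | Out-Null
        [pscustomobject]@{ Cluster = $row.CLUSTER_NODE; VM = $row.VM_App1; MovedTo = $target; Result = 'Success'; Error = '' }
    }
    catch {
        [pscustomobject]@{ Cluster = $row.CLUSTER_NODE; VM = $row.VM_App1; MovedTo = ''; Result = 'Failed'; Error = $_.Exception.Message }
    }
}
$results | Export-Csv -Path .\LiveMigrationReport.csv -NoTypeInformation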

Similar Messages

  • When setting up converged network in VMM cluster and live migration virtual nics not working

    Hello Everyone,
    I am having issues setting up a converged network in VMM.  I have been working with MS engineers to no avail.  I am very surprised at the expertise of the MS engineers.  They had no idea what a converged network even was.  I had way more
    experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone including our consultants says my setup is correct. 
    What I want to do:
    I have servers with 5 NICs and want to use 3 of them for a team, then configure cluster, live migration and host management as virtual network adapters.  I have created all my logical networks and a port profile with the uplink defined as the team and the
    networks selected.  I created the logical switch and associated the port profile.  When I deploy the logical switch and create the virtual network adapters, the logical switch works for VMs and my management NIC works as well.  The problem is that the cluster and live
    migration virtual NICs do not work.  The correct VLANs get pulled in for the corresponding networks, and if I run Get-VMNetworkAdapterVlan it shows cluster and live migration in VLANs 14 and 15, which is correct.  However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM.  I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard
    switch.
    I am really losing faith in VMM fast. 
    Hosts are 2012 R2 and VMM is 2012 R2 all fresh builds with latest drivers
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure you are teaming the correct physical adapters, where the VLANs are trunked through the connected ports?
    Note: if you create the teaming configuration outside of VMM and then import the hosts into VMM, VMM will not recognize the configuration. 
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
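    For reference, the host-side converged setup described in the post (done directly with the Hyper-V and NetLbfo cmdlets rather than through VMM) typically looks something like this; adapter, team and switch names and the IP address are placeholders, and the VLAN IDs 14/15 are taken from the post:
    # Team three of the physical NICs
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # Converged switch with weight-based QoS and no default management vNIC
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false
    # Host vNICs for management, cluster and live migration
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 14
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 15
    # Then assign static IPs to the "vEthernet (...)" host adapters, e.g.:
    New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress 10.0.14.11 -PrefixLength 24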

  • Failed live migration from HyperV2012 to Hyperv2012R2 cluster

    I’m at a loss on something and I was wondering if you could point me in the right direction.  I tried to migrate a server from one of my 2012 hosts to a brand new, verified Hyper-V 2012 R2 cluster.  I had already successfully migrated 5 others
    from there, but this one SQL 2012 guest gave me a problem.
    When I tried to live migrate it, it failed.  It said the hardware on the destination computer was not compatible.  I did some research and one blog said that sometimes you migrate to different hardware and it fails, like processors.  The guy
    on the blog said to check the box under processor\compatibility to put the virtual processor in compatibility mode.  I did that and tried again but it failed again.  This blog said something about resource groups.  The virt was “locked” at that
    point.
    I got frustrated, so I just turned off the guest, copied the disks and rebuilt it on Aag with the same memory and processor settings.  I thought I was fine, but I looked at it the next day and in Failover Cluster Manager it shows the machine is “off”. 
    BUT the VM is actually running, because the server is working.  It’s a database server and the sites using it are up, and I can RDC to it.  So I’m afraid to touch it.
    And these are the errors from the event log
    'Virtual Machine Configuration DEVDB1' failed to register the virtual machine with the virtual machine management service.
    The Virtual Machine Management Service failed to register the configuration for the virtual machine '62AAD2B5-8E03-4B59-84E0-D52CBF36934B' at 'C:\ClusterStorage\Volume2\Devdb1\DEVDB1': The
    system cannot find the file specified. (0x80070002). If the virtual machine is managed by a failover cluster, ensure that the file is located at a path that is accessible to other nodes of the cluster.
    Cluster resource 'Virtual Machine Configuration DEVDB1' of type 'Virtual Machine Configuration' in clustered role 'DEVDB1' failed. The error code was '0x2' ('The system cannot find the file
    specified.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart
    it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    I think I moved it while it was in a funky state.  I also think I was supposed to “export” the vm first.  I guess I’m a newbie but I’m not sure what to do to fix it.  Any advice is greatly appreciated.

    I think I found at least part of the problem, but I'm not sure.  My new VM's configuration files and directory are called this:
    D79490DB-48F9-40A4-9540-53F2532D3F7F
    D79490DB-48F9-40A4-9540-53F2532D3F7F.xml
    Not 62AAD2B5-8E03-4B59-84E0-D52CBF36934B
    Still not sure what to do about that though.

  • Hyper-V Failover Cluster Live migration over Virtual Adapter

    Hello,
    Currently we have a test environment with 3 Hyper-V hosts which have 2 NIC teams: a LAN team and a Live/Mngt team.
    In Hyper-V we created a virtual switch (which is on the Live/Mngt NIC team).
    We want to separate Mngt and Live with VLANs. To do this we created 2 virtual adapters, Mngt and Live, assigned IP addresses and 2 VLANs (Mngt 10, Live 20).
    Now here is our problem: in Failover Cluster Manager you cannot select the virtual adapter (Live), only the virtual switch which both are on, meaning live migration simply uses the vSwitch instead of the virtual adapter.
    Either it's not possible to separate live migration with a VLAN this way, or maybe there are PowerShell commands to add live migration to a virtual adapter?
    Greetings Selmer

    It can be done in PowerShell but it's not intuitive.
    In Failover Cluster Manager, right-click Networks and open Live Migration Settings.
    Checked networks are allowed to carry live migration traffic; networks higher in the list are preferred.
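    The rough PowerShell equivalent (a sketch, not a tested recipe) is to set the MigrationExcludeNetworks private property on the "Virtual Machine" cluster resource type so that only your live migration network is left; the cluster network name "Live" below is a placeholder:
    # Exclude every cluster network except the one named "Live" from live migration traffic
    $exclude = (Get-ClusterNetwork | Where-Object { $_.Name -ne "Live" }).Id -join ";"
    Get-ClusterResourceType -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude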
    Eric Siron
    Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.

  • Live Migration to Best possible node

    Hi,
    I have a 20 node cluster with virtual machine role.
    I would like to know the equivalent PowerShell command for migrating VMs to the best possible node.
    When I right-click a VM in Failover Cluster Manager, I see the option for live migration to the best possible node. I would like to achieve the same thing through PowerShell.
    Thanks in advance.
    Thanks, Krishna

    Well, you're asking the cluster to make its best determination on where the VMs should go, so I don't really know that I can second-guess its behavior.
    You could ensure that all highly available VMs are moved off a specific node by using
    Suspend-ClusterNode:
    Suspend-ClusterNode -Name "node1" -Drain
    Then when you want to put the roles back, use
    Resume-ClusterNode:
    Resume-ClusterNode -Name "node1" -Failback Immediate
    You can enter multiple node names at once, if you want.
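    For a single VM, the nearest PowerShell equivalent I know of is Move-ClusterVirtualMachineRole; as far as I can tell, when -Node is omitted the cluster chooses the destination itself, which is essentially what "Live migration to best possible node" does (VM name is a placeholder):
    # Let the cluster pick the destination node for this VM role
    Move-ClusterVirtualMachineRole -Name "VM1" -MigrationType Live
    # Add -Node "node2" if you want to control the destination explicitly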
    But if stress testing a network device is your aim, I would look at actual test tools, like
    IOMeter.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • Hyper-V live migration failed

    There is Hyper-V cluster with 2 nodes. Windows Server 2012 R2 is used as operating system.
    Trying to live migrate test VM from node 1 to node 2 and get error 21502:
    Live migration of 'Virtual Machine test' failed.
    'Virtual Machine test' failed to fixup network settings. Verify VM settings and update them as necessary.
    VM has Network Adapter connected to Virtual switch. This vSwitch has Private network as connection type.
    If I set virtual switch property to "Not connected" in Network Adapter settings of VM I get successful migration.
    All VM's that are not connected to any private networks (virtual switches with private network connection type) can be live migrated without any issues.
    Is there any official reference related to Hyper-V live migration of VM's that have "private network" connection type?

    I can Live Migrate virtual machines with adapters on private switches without error. Aside from the switch having a different name on each host, the only way I can get it to fail is if I make the switch on one host use a different QoS minimum mode than the other and
    enable QoS on the virtual adapter. Even then I get a different message than what you're getting. I only get that one with differently named switches.
    There is a PowerShell cmdlet available to see why a guest won't run on another host, and there is also a way to use it to get the VM to Live Migrate.
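    Presumably that cmdlet is Compare-VM; a minimal sketch of checking a guest against the other node (VM and host names are placeholders):
    # Generate a compatibility report and list anything that would block the migration
    $report = Compare-VM -Name "test" -DestinationHost "node2"
    $report.Incompatibilities | Format-Table MessageId, Message -AutoSize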
    But there is no way to truly Live Migrate three virtual machines in perfect lockstep. Even if you figure out whatever is preventing you from migrating these machines, there will still be periods during Live Migration where they can't communicate across that
    private network. You also can't guarantee that all these guests will always be running on the same host without preventing Live Migration in the first place. This is why there really isn't anyone doing what you're trying to do. I suggest you consider another
    isolation solution, like VLANs.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • Live Migration with Different CPU versions on the hosts, win 2012R2 Datacenter

    Hello
    This question has been asked in different forums, but when I read the threads I feel that I get mixed answers.
    And most answers date from 2012 (Win 2008 R2); I don't know if they are still correct on Win 2012 R2.
    So now I ask the question myself and hope to get a clear answer :)
    We are in the process of installing a new Hyper-V cluster using Win srv 2012 R2 Datacenter as OS.
    I'm planning to re-use some of the "old" servers from our current Hyper-V 2008 R2 cluster, removing them from the cluster and doing a clean installation of 2012 R2 Datacenter.
    But I will need to buy two new servers to manage this (with a newer version of CPU, same brand (AMD)).
    Old server: AMD Opteron(tm) Processor 6172 (12 Cores)
    New server:
    AMD Opteron™ 6344 (12-core)
    Now my question:
    Will Live Migration work between these servers in my new cluster without me doing any special settings in hyper-v or in the VM or what do I need to do to get this to work?
    /Anders

    Hi,
    It is important that all the hardware supporting Windows Server 2012 Failover Clusters be certified to work with Windows Server 2012. 
    In a cluster where all the nodes of the cluster are exactly the same, hardware migration is fairly straightforward. There are no concerns about differences in hardware, and
    especially no concerns about different capabilities of the CPUs.
    More information:
    When to Use Processor Compatibility Mode to Migrate Virtual Machines
    http://technet.microsoft.com/en-us/magazine/gg299590.aspx
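    Since the old and new hosts use different AMD CPU generations, the usual step is to enable processor compatibility mode on each VM before live migrating between them (the VM must be powered off to change this); a quick sketch with a placeholder VM name:
    # Check and enable processor compatibility mode
    Get-VMProcessor -VMName "VM1" | Select-Object VMName, CompatibilityForMigrationEnabled
    Set-VMProcessor -VMName "VM1" -CompatibilityForMigrationEnabled $true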
    Hope this helps.

  • Server 2012 r2 live migration fails with hardware error

    Hello all, we just upgraded one of our hyper v hosts from server 2012 to server 2012 r2; previously we had live replication setup between it and another box on the network which was also running server 2012. After installing server 2012 r2 when a live migration
    is attempted we get the message:
    "The virtual machine cannot be moved to the destination computer. The hardware on the destination computer is not compatible with the hardware requirements of this virtual machine. Virtual machine migration failed at migration source."
    The servers in question are both dell, currently we have a poweredge r910 running server 2012 and a poweredge r900 running server 2012 r2. The section under processor for "migrate to a physical computer using a different processor" is already checked
    and this same vm was successfully being live replicated before the upgrade to server 2012 r2. What would have changed around hardware requirements?
    We are migrating from Server 2012 on the PowerEdge R910 to Server 2012 R2 on the PowerEdge R900. Also, when I say this was an upgrade: we did a full re-install, wiped out the installation of Server 2012 and installed Server 2012 R2; this was not an upgrade
    installation.

    The only cause I’ve seen so far is virtual switches being named differently. I do remember that one of our VMs didn’t move, but we simply bypassed this problem using a one-time backup (VeeamZIP, more specifically).
    If it’s a one-time operation you can use the same procedure for the VMs in question -> back them up and restore them on the new server.
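    A quick way to rule the switch-name cause in or out is to compare the virtual switch names on both hosts; a sketch with placeholder host names:
    # Differently named virtual switches are a common cause of failed live migrations
    Compare-Object (Get-VMSwitch -ComputerName "host2012").Name (Get-VMSwitch -ComputerName "host2012r2").Name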
    Kind regards, Leonardo.

  • Hyper-v 2012 r2 slow throughputs on network / live migrations

    Hi, maybe someone can point me in the right direction. I have 10 servers, 5 Dell R210s and 5 Dell R320s; I have basically converted these servers to standalone Hyper-V 2012 servers, so there is no clustering on any of them at the moment.
    Each server is configured with 2 x 1 Gb NICs teamed via a virtual switch. Now when I copy files between server 1 and 2, for example, I see 100 MB/s throughput, but if I copy a file to server 3 at the same time, the file copy load splits the 100 MB/s throughput between
    the 2 copy processes. I was under the impression that if I copied 2 files to 2 totally different servers, the load would basically be split across the 2 NICs, effectively giving me 2 Gb/s throughput, but this does not seem to be the case. I have played around with TCP/IP
    large send offloads, jumbo packets, and disabled VMQ on the cards (they are Broadcoms :-( ), but it doesn't really seem to make a difference with any of these settings.
    The other issue is that if I live migrate a 12 GB VM running only 2 GB of RAM, effectively just an OS, it takes between 15 and 20 minutes to migrate. I have played around with the advanced settings (SMB, compression, TCP/IP), no real game changers, BUT if I shut
    down the VM and migrate it, it takes just under 3 and a half minutes to move across.
    I am really stumped here. I am busy in a test phase of Hyper-V but can't find any definitive documents relating to this stuff.

    Hi Mark,
    The servers (Hyper-V 2012 R2) are all basically configured with SCVMM 2012 R2, where they all have teamed 1 Gb pNICs going into a virtual switch, and then there are vNICs for the VM cloud, live migration, etc.  The physical network is 2 Netgear GS724T switches which
    are interlinked, and each server's 1st NIC is plugged into switch 1 and the second NIC is plugged into switch 2.  The team is set to Switch Independent with Hyper-V Port load balancing. 
    The R320 servers are running RAID 5 SAS drives, the R210s have 1 TB drives mirrored.  The servers are all using DAS storage; we have not moved to looking at iSCSI, and a SAN is out of the question at the moment.
    I am currently testing between 2 x R320s and 2 x R210s. I am not copying data to the VMs yet; I am basically testing the transfer between the actual hosts at the moment by copying a 4 GB file manually. After testing the live migrations I decided to test
    the transfer rates between the servers first. I have been playing around with the offload settings and RSS. What I don't understand is that yesterday the copy between the servers was running at up to 228 MB/s (i.e. using both NICs) when copying
    the file between the servers, and then a few hours later it was only copying at 50-60 MB/s, but it's now back at 113 MB/s, seemingly only using one NIC.
    I was under the impression that if you copy a file between 2 servers the NICs could use the 2 Gb of bandwidth, but after reading many posts they say only one NIC, so how did the copies get up to 2 Gb/s yesterday? Then again, if you copy files between 3 servers, then
    each copy would use one NIC, basically giving you 2 Gb/s, but this is again not being seen.
    Regards Keith
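    On the live migration speed specifically, it may also be worth checking which migration performance option and how many simultaneous migrations the hosts are configured for; a sketch using the 2012 R2 Hyper-V cmdlets:
    # Show the current live migration settings on this host
    Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption, MaximumVirtualMachineMigrations
    # Try compression (or SMB if you have the bandwidth for it) instead of plain TCP/IP
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression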

  • Server 2012 cluster - virtual machine live migration does not work

    Hi,
    We have a hyper-v cluster with two nodes running Windows Server 2012. All the configurations are identical.
    When I try to make a Live migration from one node to the other I get an error message saying:
    Live migration of 'Virtual Machine XXXXXX' failed.
    I get no other error messages, not even in Event Viewer. The same thing happens with all of our virtual machines.
    A normal Quick migration works just fine for all of the virtual machines, so network configuration should not be an issue.
    The above error message does not provide much information.

    Hi,
    Please check whether your configuration meets the live migration requirements:
    Two (or more) servers running Hyper-V that:
    Support hardware virtualization.
    Yes they support virtualization. 
    Are using processors from the same manufacturer (for example, all AMD or all Intel).
    Both Servers are identical and brand new Fujitsu-Siemens RX300S7 with the same kind of processor (Xeon E5-2620).
    Belong to either the same Active Directory domain, or to domains that trust each other.
    Both nodes are in the same domain.
    Virtual machines must be configured to use virtual hard disks or virtual Fibre Channel disks (no physical disks).
    All of the virtual machines have virtual hard disks.
    Use of a private network is recommended for live migration network traffic.
    Have tried this, but does not help.
    Requirements for live migration in a cluster:
    Windows Failover Clustering is enabled and configured.
    Yes
    Cluster Shared Volume (CSV) storage in the cluster is enabled.
    Yes
    Requirements for live migration using shared storage:
    All files that comprise a virtual machine (for example, virtual hard disks, snapshots, and configuration) are stored on an SMB share. They are all on the same CSV
    Permissions on the SMB share have been configured to grant access to the computer accounts of all servers running Hyper-V.
    Requirements for live migration with no shared infrastructure:
    No extra requirements exist.
    Also please refer to this article to check whether you have finished all the preparation work for live migration:
    Virtual Machine Live Migration Overview
    http://technet.microsoft.com/en-us/library/hh831435.aspx
    Hyper-V: Using Live Migration with Cluster Shared Volumes in Windows Server 2008 R2
    http://technet.microsoft.com/en-us/library/dd446679(v=WS.10).aspx
    Configure and Use Live Migration on Non-clustered Virtual Machines
    http://technet.microsoft.com/en-us/library/jj134199.aspx
    Hope this helps!
    TechNet Subscriber Support
    Lawrence
    TechNet Community Support
    I have also read all of the technet articles but can't find anything that could help.
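    One way to get more detail than the generic failure message is to read the Hyper-V operational logs on both nodes right after a failed attempt; a sketch, assuming the default log names:
    # Most recent entries from the VMMS and High-Availability admin logs
    Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 20
    Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-High-Availability-Admin" -MaxEvents 20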

  • Live Migration Fails with error Synthetic FiberChannel Port: Failed to finish reserving resources on an VM using Windows Server 2012 R2 Hyper-V

    Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster using Fiber Channel adapters with Virtual SAN configured on the hyper-v hosts.
    I have read several articles about this issue, like these:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    But haven't been able to fix my issue.
    The Virtual SAN is configured on every hyper-v host node in the cluster. And every VM has 2 fiber channel adapters configured.
    All the World Wide Names are configured both on the FC Switch as well as the FC SAN.
    All the drivers for the FC Adapter in the Hyper-V Hosts have been updated to their latest versions.
    The strange thing is that the issue is not affecting all of the VMs, some of the VMs with FC adapters configured are live migrating just fine, others are getting this error.
    Quick migration works without problems.
    We even tried removing and creating new FC Adapters on a VM with problems, we had to configure the switch and SAN with the new WWN names and all, but ended up having the same problem.
    At first we thought it was related to the hosts, but since some VMs with FC adapters do live migrate, we tried migrating them on every host and everything worked well.
    My guess is that it has to be something related to the VMs themselves, but I haven't been able to figure out what it is.
    Any ideas on how to solve this is deeply appreciated.
    Thank you!
    Eduardo Rojas
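    One thing that might help narrow it down is comparing the virtual SAN and virtual Fibre Channel configuration of a VM that migrates fine with one that fails; a sketch with placeholder names:
    # Virtual SANs defined on a host, and the vFC adapters of two VMs for comparison
    Get-VMSan -ComputerName "host1"
    Get-VMFibreChannelHba -VMName "WorkingVM" | Format-List *
    Get-VMFibreChannelHba -VMName "ProblemVM" | Format-List *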

    Hi Eduardo,
    How are things going ?
    Best Regards
    Elton Ji

  • Hyper-V replica vs Shared Nothing Live Migration

      Shared Nothing Live Migration allows you to transport your VM over the WAN without shutting it down (how much time it takes on an I/O-intensive VM is another story).
      Hyper-V Replica does not allow you to perform the DR switch without a shutdown of the primary-site VM!
      Why can't it take the VM to the DR site live?
      And if we use Shared Nothing Live Migration across the WAN instead, we don't make use of the data that Hyper-V Replica has already copied, and it also breaks everything Hyper-V Replica does.
      The point is: how do we take the VM to DR in a running state, and what is the best way to do that?
    Shahid Roofi

    Hi Shahid,
    Hyper-V Replica is designed as a DR technology, not as a technique to move VMs. It assumes that, should you require it, the source VM would probably be offline and therefore you would be powering up the passive copy from a previous point in time,
    as it's not a true synchronous replica copy. It does give you the added benefit of being able to run a planned failover which, as you say, powers off the VM first, runs a final sync, then powers the new VM up. Obviously you can't have the duplicate copy of this VM
    running all the time at the remote site, otherwise you would have a split-brain situation for network traffic.
    Like live migration, shared nothing live migration is a technology aimed at moving a VM, but as you know it offers the ability to do this without having shared storage and only requires a network connection. When initiated it moves the whole
    VM: it copies the virtual drive and memory, sends machine writes to both, then only to the new VM once they both match. With regards to the speed, I assume you have SNLM set up to compress data before sending it across the wire?
    If you want a true live migration between remote sites, one way would be to have a SAN array between both sites synchronously replicating data, then stretch the Hyper-V cluster across both sites. Obviously this is a very expensive solution, but perhaps
    the perfect scenario.
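    For reference, the shared nothing live migration and the compression setting mentioned above look roughly like this in PowerShell (names and paths are placeholders):
    # Compress the migration traffic before it goes across the wire
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
    # Move a running VM and its storage to the DR host over the network
    Move-VM -Name "VM1" -DestinationHost "DRHost1" -IncludeStorage -DestinationStoragePath "D:\VMs\VM1"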
    Kind Regards
    Michael Coutanche
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

  • Hyper-V guest SQL 2012 cluster live migration failure

    I have two IBM HX5 nodes connected to an IBM DS5300. A Hyper-V 2012 cluster was built on the blades. In the Hyper-V cluster I created six virtual machines, connected to the DS5300 via Hyper-V Virtual SAN. These VMs form a guest SQL cluster. The database files are placed on
    DS5300 storage and accessed through the VMs' Fibre Channel adapters. The IBM MPIO module is installed on all hosts and VMs.
    The SQL Server instances work without problems. But when I try to live migrate a SQL VM to another Hyper-V node, the SQL instance fails. In the SQL error log I see:
    2013-06-19 10:39:44.07 spid1s      Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.07 spid1s      SQLServerLogMgr::LogWriter: Operating system error 170(The requested resource is in use.) encountered.
    2013-06-19 10:39:44.07 spid1s      Write error during log flush.
    2013-06-19 10:39:44.07 spid55      Error: 9001, Severity: 21, State: 4.
    2013-06-19 10:39:44.07 spid55      The log for database 'Admin' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
    2013-06-19 10:39:44.07 spid55      Database Admin was shutdown due to error 9001 in routine 'XdesRMFull::CommitInternal'. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.
    2013-06-19 10:39:44.31 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.31 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.31 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.31 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.32 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.32 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.32 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.32 spid36s     fcb::close-flush: Operating system error (null) encountered.
    2013-06-19 10:39:44.33 spid36s     Starting up database 'Admin'.
    2013-06-19 10:39:44.58 spid36s     349 transactions rolled forward in database 'Admin' (6:0). This is an informational message only. No user action is required.
    2013-06-19 10:39:44.58 spid36s     SQLServerLogMgr::FixupLogTail (failure): alignBuf 0x000000001A75D000, writeSize 0x400, filePos 0x156adc00
    2013-06-19 10:39:44.58 spid36s     blankSize 0x3c0000, blkOffset 0x1056e, fileSeqNo 1313, totBytesWritten 0x0
    2013-06-19 10:39:44.58 spid36s     fcb status 0x42, handle 0x0000000000000BC0, size 262144 pages
    2013-06-19 10:39:44.58 spid36s     Error: 17053, Severity: 16, State: 1.
    2013-06-19 10:39:44.58 spid36s     SQLServerLogMgr::FixupLogTail: Operating system error 170(The requested resource is in use.) encountered.
    2013-06-19 10:39:44.58 spid36s     Error: 5159, Severity: 24, State: 13.
    2013-06-19 10:39:44.58 spid36s     Operating system error 170(The requested resource is in use.) on file "v:\MSSQL\log\Admin\Log.ldf" during FixupLogTail.
    2013-06-19 10:39:44.58 spid36s     Error: 3414, Severity: 21, State: 1.
    2013-06-19 10:39:44.58 spid36s     An error occurred during recovery, preventing the database 'Admin' (6:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected,
    contact Technical Support.
    In windows system log I see a lot of warnings like this:
    - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      - <System>
          <Provider Name="Microsoft-Windows-Ntfs" Guid="{3FF37A1C-A68D-4D6E-8C9B-F79E8B16C482}" />
          <EventID>140</EventID>
          <Version>0</Version>
          <Level>3</Level>
          <Task>0</Task>
          <Opcode>0</Opcode>
          <Keywords>0x8000000000000008</Keywords>
          <TimeCreated SystemTime="2013-06-19T06:39:44.314400200Z" />
          <EventRecordID>25239</EventRecordID>
          <Correlation />
          <Execution ProcessID="4620" ThreadID="4284" />
          <Channel>System</Channel>
          <Computer>sql-node-5.local.net</Computer>
          <Security UserID="S-1-5-21-796845957-515967899-725345543-17066" />
        </System>
      - <EventData>
          <Data Name="VolumeId">\\?\Volume{752f0849-6201-48e9-8821-7db897a10305}</Data>
          <Data Name="DeviceName">\Device\HarddiskVolume70</Data>
          <Data Name="Error">0x80000011</Data>
        </EventData>
      </Event>
    The system failed to flush data to the transaction log. Corruption may occur in VolumeId: \\?\Volume{752f0849-6201-48e9-8821-7db897a10305}, DeviceName: \Device\HarddiskVolume70.
    ({Device Busy}
    The device is currently busy.)
    There aren't any errors or warnings on the Hyper-V hosts.

    Hello,
    I am trying to involve someone more familiar with this topic for a further look at this issue. There might be some delay while the job is transferred. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Fanny Liu
    TechNet Community Support

  • Hyper V Lab and Live Migration

    Hi Guys,
    I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
    The problem I have is that my shared storage is a bit of a cheat, as I have one disk assigned in each host and each host has StarWind Virtual SAN installed. HostA has an iSCSI connection to HostB's storage and vice versa.
    The issue this causes is that when the hosts shut down (because this is a lab, it's only on when required), the cluster is in a mess when it starts up, e.g. VMs missing, etc. I can recover from it but it takes time. I tinkered with the HA settings and the VM settings
    so they restarted/didn't restart, etc., but with no success.
    My question is: can I use something like SMB3 shared storage on one of the hosts to perform live migrations, but without a full-on cluster? I know I can do Shared Nothing Live Migrations but this takes time.
    Any ideas on a better solution (rather than actually buying proper shared storage ;-) )? Or if shared storage is the only option to do this cleanly, what would people recommend, bearing in mind I have SSDs in the Hyper-V hosts?
    Hope all that makes sense

    Hi Sir,
    >>I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations. 
    As you mentioned, you have 2 Hyper-V hosts and use StarWind to provide the iSCSI target (this is the same as my first lab environment); I then realized that I needed one or more additional hosts to simulate a more production-like scenario.
    But if you have more physical computers, you may try other projects.
    Also please refer to this thread:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/e9f81a9e-0d50-4bca-a24d-608a4cce20e8/2012-r2-hyperv-cluster-smb-30-sofs-share-permissions-issues?forum=winserverhyperv
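    For what it's worth, live migration between standalone hosts does work when the VM files sit on an SMB 3.0 share that both hosts can reach and migration is enabled on the hosts; a sketch (share path, domain, host and VM names are placeholders, and Kerberos constrained delegation needs to be configured in AD if you manage the hosts remotely):
    # On the machine holding the storage: share the VM folder to both host computer accounts
    New-SmbShare -Name "VMStore" -Path "D:\VMStore" -FullAccess "LAB\HostA$","LAB\HostB$","LAB\Administrator"
    # On each Hyper-V host: allow live migrations
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
    # Move only the running state; the VHDX stays on the SMB share
    Move-VM -Name "VM1" -DestinationHost "HostB"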
    Best Regards
    Elton Ji

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery
    Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote
    DR site. Both sites are connected/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in the Fail Over Cluster Manager on both clusters and started replicating some VMs using Hyper-V
    Replica.
    Now every time I attempt to use SCVMM to do a Live Migration of a VM that is protected using Hyper-V Replica to
    another host within the same cluster,
    the Migration VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine virtual machine name which
    requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine.
    When I ignore the error and do the Live Migration anyway, the Live migration completes successfully with the info above. There doesn't seem to be any impact on the VM or it's replication.
    When a Host Shuts-down or is put into maintenance, the VM Migrates successfully, again, with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error was because I attempted to manually configure
    the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager).
    However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. This error does not seem to have any impact on the high-availability of
    my VM or on Replication of this VM. Live migrations still occur successfully and replication seems to carry on without any issues.
    However, it now has me concerned that a Live Migration may one day occur and break replication of my VMs between both clusters.
    I have searched, and searched and searched, and I cannot find any mention in official or un-official Microsoft channels, on the compatibility of these two features. 
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
    Please confirm to me: Are Hyper-V Live Migration and Hyper-V Replica compatible
    with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner will be highly appreciated.
    D

    This can be considered a minor GUI bug. 
    Let me explain. Live Migration and Hyper-V Replica are supported on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because we have the Hyper-V Replica Broker role (in a cluster) that is able to detect, receive and keep track of the VMs and the synchronizations. The configuration related to VMs enabled for replication follows the VMs themselves. 
    If you try to live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM will (as you can see) give you an
    error, though it should rather be an informative message instead.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: just spoke to one of the PMs of HRM and they can confirm that live migration is supported - and should work in this context.
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
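    If you want to reassure yourself after a live migration, you can also check the replication state and health of the VM from PowerShell; a short sketch with a placeholder VM name:
    # Replication configuration/state and replication health statistics
    Get-VMReplication -VMName "VM1"
    Measure-VMReplication -VMName "VM1"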
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
