Live Migration Bandwidth using Teaming in LACP

Hi there,
Just a quick question: how can I get Live Migration to use the full bandwidth of my LACP team of 2 x 1 Gb NICs? Every time I live migrate 2 VMs at the same time, the maximum speed I get is only about 1 Gbps.
Regards
David
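For anyone wanting to check the same things on their own hosts, a minimal PowerShell sketch (assuming the built-in Windows Server 2012/2012 R2 LBFO teaming and Hyper-V modules; the team and member names "LM", "LM 1" and "LM 2" are taken from the adapter output below):

# How is the team built? TeamingMode should be Lacp; the load-balancing
# algorithm decides how individual flows are spread across the members.
Get-NetLbfoTeam -Name "LM" | Format-List Name, TeamingMode, LoadBalancingAlgorithm
Get-NetLbfoTeamMember -Team "LM"

# How many simultaneous live migrations does the host allow, and over which
# networks? Two 1 Gb flows can still hash onto the same team member.
Get-VMHost | Format-List MaximumVirtualMachineMigrations
Get-VMMigrationNetwork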

Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Flow Control
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *FlowControl
RegistryValue        : {3}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Interrupt Moderation
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *InterruptModeration
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : IPv4 Checksum Offload
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *IPChecksumOffloadIPv4
RegistryValue        : {3}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : IPsec Offload
DisplayValue         : Disabled
DefaultDisplayValue  : Disabled
ValidDisplayValues   : {Disabled, Auth Header Enabled, ESP Enabled, Auth Header & ESP Enabled}
RegistryKeyword      : *IPsecOffloadV2
RegistryValue        : {0}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Jumbo Packet
DisplayValue         : 9014 Bytes
DefaultDisplayValue  : Disabled
ValidDisplayValues   : {Disabled, 4088 Bytes, 9014 Bytes}
RegistryKeyword      : *JumboPacket
RegistryValue        : {9014}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Large Send Offload V2 (IPv4)
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *LsoV2IPv4
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Large Send Offload V2 (IPv6)
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *LsoV2IPv6
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Maximum number of RSS Processors
DisplayValue         : 8
DefaultDisplayValue  : 8
ValidDisplayValues   : {1, 2, 4, 8}
RegistryKeyword      : *MaxRssProcessors
RegistryValue        : {8}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Preferred NUMA node
DisplayValue         : System Default
DefaultDisplayValue  : System Default
ValidDisplayValues   : {System Default, Node 0, Node 1, Node 2...}
RegistryKeyword      : *NumaNodeId
RegistryValue        : {65535}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Maximum Number of RSS Queues
DisplayValue         : 2 Queues
DefaultDisplayValue  : 2 Queues
ValidDisplayValues   : {1 Queue, 2 Queues, 4 Queues, 8 Queues}
RegistryKeyword      : *NumRssQueues
RegistryValue        : {2}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Packet Priority & VLAN
DisplayValue         : Packet Priority & VLAN Enabled
DefaultDisplayValue  : Packet Priority & VLAN Enabled
ValidDisplayValues   : {Packet Priority & VLAN Disabled, Packet Priority Enabled, VLAN Enabled, Packet Priority & VLAN Enabled}
RegistryKeyword      : *PriorityVLANTag
RegistryValue        : {3}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Receive Buffers
DisplayValue         : 256
DefaultDisplayValue  : 256
ValidDisplayValues   : 
RegistryKeyword      : *ReceiveBuffers
RegistryValue        : {256}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Receive Side Scaling
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *RSS
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : RSS Base Processor Number
DisplayValue         : 0
DefaultDisplayValue  : 0
ValidDisplayValues   : 
RegistryKeyword      : *RssBaseProcNumber
RegistryValue        : {0}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Maximum RSS Processor Number
DisplayValue         : 63
DefaultDisplayValue  : 63
ValidDisplayValues   : 
RegistryKeyword      : *RssMaxProcNumber
RegistryValue        : {63}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : RSS load balancing profile
DisplayValue         : ClosestProcessor
DefaultDisplayValue  : ClosestProcessor
ValidDisplayValues   : {ClosestProcessor, ClosestProcessorStatic, NUMAScaling, NUMAScalingStatic...}
RegistryKeyword      : *RSSProfile
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Speed & Duplex
DisplayValue         : Auto Negotiation
DefaultDisplayValue  : Auto Negotiation
ValidDisplayValues   : {Auto Negotiation, 10 Mbps Half Duplex, 10 Mbps Full Duplex, 100 Mbps Half Duplex...}
RegistryKeyword      : *SpeedDuplex
RegistryValue        : {0}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : SR-IOV
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *SRIOV
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : TCP Checksum Offload (IPv4)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *TCPChecksumOffloadIPv4
RegistryValue        : {3}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : TCP Checksum Offload (IPv6)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *TCPChecksumOffloadIPv6
RegistryValue        : {3}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Transmit Buffers
DisplayValue         : 512
DefaultDisplayValue  : 512
ValidDisplayValues   : 
RegistryKeyword      : *TransmitBuffers
RegistryValue        : {512}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : UDP Checksum Offload (IPv4)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *UDPChecksumOffloadIPv4
RegistryValue        : {3}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : UDP Checksum Offload (IPv6)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *UDPChecksumOffloadIPv6
RegistryValue        : {3}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Virtual Machine Queues
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *VMQ
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Wake on Magic Packet
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *WakeOnMagicPacket
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Wake on Pattern Match
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *WakeOnPattern
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Enable PME
DisplayValue         : Disabled
DefaultDisplayValue  : Disabled
ValidDisplayValues   : {Enabled, Disabled}
RegistryKeyword      : EnablePME
RegistryValue        : {0}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Interrupt Moderation Rate
DisplayValue         : Adaptive
DefaultDisplayValue  : Adaptive
ValidDisplayValues   : {Adaptive, Extreme, High, Medium...}
RegistryKeyword      : ITR
RegistryValue        : {65535}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Log Link State Event
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Enabled, Disabled}
RegistryKeyword      : LogLinkStateEvent
RegistryValue        : {51}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Gigabit Master Slave Mode
DisplayValue         : Auto Detect
DefaultDisplayValue  : Auto Detect
ValidDisplayValues   : {Auto Detect, Force Master Mode, Force Slave Mode}
RegistryKeyword      : MasterSlave
RegistryValue        : {0}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Locally Administered Address
DisplayValue         : 
DefaultDisplayValue  : 
ValidDisplayValues   : 
RegistryKeyword      : NetworkAddress
RegistryValue        : 
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Reduce Speed On Power Down
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Enabled, Disabled}
RegistryKeyword      : ReduceSpeedOnPowerDown
RegistryValue        : {1}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Wait for Link
DisplayValue         : Auto Detect
DefaultDisplayValue  : Auto Detect
ValidDisplayValues   : {Off, On, Auto Detect}
RegistryKeyword      : WaitAutoNegComplete
RegistryValue        : {2}
Name                 : LM 2
InterfaceDescription : Intel(R) 82576 Gigabit Dual Port Network Connection #4
DisplayName          : Wake on Link Settings
DisplayValue         : Disabled
DefaultDisplayValue  : Disabled
ValidDisplayValues   : {Disabled, Forced}
RegistryKeyword      : WakeOnLink
RegistryValue        : {0}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Header Data Split
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *HeaderDataSplit
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : IPv4 Checksum Offload
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *IPChecksumOffloadIPv4
RegistryValue        : {3}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : IPsec Offload
DisplayValue         : Auth Header & ESP Enabled
DefaultDisplayValue  : Auth Header & ESP Enabled
ValidDisplayValues   : {Disabled, Auth Header Enabled, ESP Enabled, Auth Header & ESP Enabled}
RegistryKeyword      : *IPsecOffloadV2
RegistryValue        : {3}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Large Send Offload Version 2 (IPv4)
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *LsoV2IPv4
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Large Send Offload Version 2 (IPv6)
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *LsoV2IPv6
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Recv Segment Coalescing (IPv4)
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *RscIPv4
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Recv Segment Coalescing (IPv6)
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *RscIPv6
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Receive Side Scaling
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *RSS
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : TCP Checksum Offload (IPv4)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *TCPChecksumOffloadIPv4
RegistryValue        : {3}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : TCP Checksum Offload (IPv6)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *TCPChecksumOffloadIPv6
RegistryValue        : {3}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : UDP Checksum Offload (IPv4)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *UDPChecksumOffloadIPv4
RegistryValue        : {3}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : UDP Checksum Offload (IPv6)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *UDPChecksumOffloadIPv6
RegistryValue        : {3}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Virtual Machine Queues
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *VMQ
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Virtual Machine Queues - Shared Memory
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *VMQLookaheadSplit
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : Virtual Machine Queues - VLAN Id Filtering
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *VMQVlanFiltering
RegistryValue        : {1}
Name                 : LM
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver #3
DisplayName          : MAC Address
DisplayValue         : 
DefaultDisplayValue  : 
ValidDisplayValues   : 
RegistryKeyword      : NetworkAddress
RegistryValue        : 
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : 802.3az EEE
DisplayValue         : Disable
DefaultDisplayValue  : Disable
ValidDisplayValues   : {Disable, Enable}
RegistryKeyword      : *EEE
RegistryValue        : {0}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Flow Control
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *FlowControl
RegistryValue        : {3}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Interrupt Moderation
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *InterruptModeration
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Jumbo Mtu
DisplayValue         : 9000
DefaultDisplayValue  : 1500
ValidDisplayValues   : 
RegistryKeyword      : *JumboPacket
RegistryValue        : {9000}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Large Send Offload V2 (IPv4)
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *LsoV2IPv4
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Large Send Offload V2 (IPv6)
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *LsoV2IPv6
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Maximum Number of RSS Queues
DisplayValue         : RSS 2 Queues
DefaultDisplayValue  : RSS 1 Queue
ValidDisplayValues   : {RSS 1 Queue, RSS 2 Queues, RSS 4 Queues}
RegistryKeyword      : *NumRssQueues
RegistryValue        : {2}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : ARP Offload
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *PMARPOffload
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : NS Offload
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *PMNSOffload
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Priority & VLAN
DisplayValue         : Priority & VLAN Enabled
DefaultDisplayValue  : Priority & VLAN Enabled
ValidDisplayValues   : {Priority & VLAN Disabled, Priority Enabled, VLAN Enabled, Priority & VLAN Enabled}
RegistryKeyword      : *PriorityVLANTag
RegistryValue        : {3}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Receive Buffers
DisplayValue         : Default
DefaultDisplayValue  : Default
ValidDisplayValues   : {Default, Minimum, Maximum}
RegistryKeyword      : *ReceiveBuffers
RegistryValue        : {200}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Receive Side Scaling
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *RSS
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Speed & Duplex
DisplayValue         : Auto Negotiation
DefaultDisplayValue  : Auto Negotiation
ValidDisplayValues   : {Auto Negotiation, 10 Mbps Half Duplex, 10 Mbps Full Duplex, 100 Mbps Half Duplex...}
RegistryKeyword      : *SpeedDuplex
RegistryValue        : {0}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : TCP/UDP Checksum Offload (IPv4)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *TCPUDPChecksumOffloadIPv4
RegistryValue        : {3}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : TCP/UDP Checksum Offload (IPv6)
DisplayValue         : Rx & Tx Enabled
DefaultDisplayValue  : Rx & Tx Enabled
ValidDisplayValues   : {Disabled, Tx Enabled, Rx Enabled, Rx & Tx Enabled}
RegistryKeyword      : *TCPUDPChecksumOffloadIPv6
RegistryValue        : {3}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Transmit Buffers
DisplayValue         : 500
DefaultDisplayValue  : 200
ValidDisplayValues   : 
RegistryKeyword      : *TransmitBuffers
RegistryValue        : {500}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Virtual Machine Queues
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *VMQ
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : VMQ VLAN Filtering
DisplayValue         : Disable
DefaultDisplayValue  : Disable
ValidDisplayValues   : {Disable, Enable}
RegistryKeyword      : *VMQVlanFiltering
RegistryValue        : {0}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Wake on Magic Packet
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *WakeOnMagicPacket
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Wake on Pattern Match
DisplayValue         : Enabled
DefaultDisplayValue  : Enabled
ValidDisplayValues   : {Disabled, Enabled}
RegistryKeyword      : *WakeOnPattern
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : EEE Control Policies
DisplayValue         : Optimal Power and Performance
DefaultDisplayValue  : Optimal Power and Performance
ValidDisplayValues   : {Maximum Power Saving, Optimal Power and Performance, Maximum Performance}
RegistryKeyword      : EeeCtrlMode
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Network Address
DisplayValue         : 
DefaultDisplayValue  : 
ValidDisplayValues   : 
RegistryKeyword      : NetworkAddress
RegistryValue        : 
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : VLAN ID
DisplayValue         : 0
DefaultDisplayValue  : 0
ValidDisplayValues   : 
RegistryKeyword      : VlanID
RegistryValue        : {0}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : Ethernet@WireSpeed
DisplayValue         : Enable
DefaultDisplayValue  : Enable
ValidDisplayValues   : {Disable, Enable}
RegistryKeyword      : WireSpeed
RegistryValue        : {1}
Name                 : LM 1
InterfaceDescription : Broadcom NetXtreme Gigabit Ethernet #4
DisplayName          : WOL Speed
DisplayValue         : Lowest Speed Advertised
DefaultDisplayValue  : Lowest Speed Advertised
ValidDisplayValues   : {Auto, 10 Mb, 100 Mb, Lowest Speed Advertised}
RegistryKeyword      : WolSpeed
RegistryValue        : {256}
David

Similar Messages

  • Live Migration only using one live migration adapter

    Hi,
    I have a 2-node cluster spanning two buildings, 4 switches (2 in each building), and 2 live migration NICs in each server. I have NIC1 going through switch 1 and NIC2 going through switch 2, same in the other building.
    Everything talks fine to each other. I decided to team the live migration networks so I could get more transferring at once (2 Gbps instead of 1). This seemed like a good idea at the time, but when I ran a test scenario of failing one of the switches,
    the live migration was unable to complete (despite the switches being stacked together; maybe I have a config issue, I don't know).
    So I destroyed the team and thought I'd have two separate live migration channels instead, and I selected both of them as live migration networks in Failover Cluster Manager. When I try to live migrate a bunch of VMs it only uses the one adapter, no matter
    how many VMs are queued up, and it will deal with 2 at a time (as per the Hyper-V settings).
    The question is: why is it not using my other live migration channel even though that adapter is configured correctly? I have even moved it up in the list and then it will use it, so I know it works, but it won't use both together...?
    thanks
    Steve

    Live Migration uses the first network interface in the order you configured it. As far as I know the second interface is only used when the first interface becomes unavailable, but don't pin me on this. Now back to your NIC teaming, because there
    is one thing you should know:
    When you configure a NIC team without a switch-assisted (LACP) configuration, your team will have a 2x1Gb outbound and 1x1Gb inbound connection. This is probably why you only get 1x1Gb throughput.
    When you configure a NIC team with a switch-assisted (LACP) configuration, your team will have a 2x1Gb outbound and 2x1Gb inbound connection. Such a NIC teaming configuration is in fact a port-channel / EtherChannel like you would configure between switches.
    This is an optimal configuration.
    I hope this information makes more sense to you. If you have any questions let me know.
    Boudewijn Plomp, BPMi Infrastructure & Security
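    For reference, a minimal PowerShell sketch of the switch-assisted team described above, using the built-in Windows Server 2012/2012 R2 LBFO teaming and the member NIC names from the original post. Keep in mind that even with LACP a single TCP stream is still hashed onto one 1 Gb member, so only multiple concurrent migrations (or SMB multichannel) can push past 1 Gbps.

    # Create an LACP (switch-assisted) team; the two switch ports must be
    # configured as an LACP port-channel for the team to come up.
    New-NetLbfoTeam -Name "LM" -TeamMembers "LM 1","LM 2" `
        -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
    # "Dynamic" exists on 2012 R2 only; on 2012 use TransportPorts instead.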

  • Live Migration failed using virtual HBA's and Guest Clustering

    Hi,
    We have a Guest Cluster Configuration on top of an Hyper-V Cluster. We are using Windows 2012 and Fiber Channel shared storage.
    The problem is regarding Live Migration. Some times when we move a virtual machine from node A to node B everything goes well but when we try to move back to node A Live Migration fails. What we can see is that when we move the VM from node A to B and Live
    Migration completes successfully the virtual ports remain active on node A, so when we try to move back from B to A Live Migration fails because the virtual ports are already there.
    This doesn't happen every time.
    We have checked the zoning between Host Cluster Hyper-V and the SAN, the mapping between physical HBA's and the vSAN's on the Hyper-V and everything is ok.
    Our doubt is, what is the best practice for zoning the vHBA on the VM's and our Fabric? We setup our zoning using an alias for the vHBA 1 and the two WWN (A and B) on the same object and an alias for the vHBA 2 and the correspondent WWN (A and B). Is it
    better to create an alias for vHBA 1 -> A (with WWN A) and other alias for vHBA 1 -> B (with WWN B)? 
    The guest cluster VM's have 98GB of RAM each. Could it be a time out issue when Live Migration happen's and the virtual ports remain active on the source node? When everything goes well, the VM moves from node A with vHBA WWN A to node B and stays there
    with vHBA WWN B. On the source node the virtual ports should be removed automatically when the Live Migration completes. And that is the issue... sometimes the virtual ports (WWN A) stay active on the source node and when we try to move back the VM Live Migration
    fails.
    I hope You may understand the issue.
    Regards,
    Carlos Monteiro.

    Hi,
    Hope the following links may help.
    To support live migration of virtual machines across Hyper-V hosts while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B
    WWN addresses during a live migration. This ensures that all LUNs are available on the destination host before the migration and that no downtime occurs during the migration.
    Hyper-V Virtual Fibre Channel Overview
    http://technet.microsoft.com/en-us/library/hh831413.aspx
    More information:
    Hyper-V Virtual Fibre Channel Troubleshooting Guide
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    Hyper-V Virtual Fibre Channel Design Guide
    http://blogs.technet.com/b/privatecloud/archive/2013/07/23/hyper-v-virtual-fibre-channel-design-guide.aspx
    Hyper-V virtual SAN
    http://salworx.blogspot.co.uk/
    Thanks.

  • Live Migration Slow - Maxes Out

    Hi everyone...
    I have a private 10 Gb network that all of my Hyper-V hosts are connected to.  I have told VMM to use this network for live migrations.
    Using Performance Monitor, I see that the migration is using this private network.  Unfortunately, it will only transfer at 100 MB per second.
    There must be a parameter somewhere to say "Use all the bandwidth you need...."
    It is worth mentioning that I also back up the virtual machines over this same network to our backup server, and that transfer runs at up to 10 Gb per second.
    Any suggestions?
    Thanks!

    Is this network dedicated to Live Migration?
    Is this a converged setup where you have a physical team and a VM switch created on top of that team?
    If that is true, then VMQ is enabled and RSS is disabled, which doesn't let you leverage all the bandwidth for live migration.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
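    As a quick check of the VMQ/RSS point above, a small PowerShell sketch (Windows Server 2012 R2 NetAdapter cmdlets, run on the host):

    # On a physical NIC bound to a vSwitch, VMQ takes over and RSS is not used,
    # so host traffic such as live migration is serviced by a single queue/CPU.
    Get-NetAdapterRss | Format-Table Name, Enabled
    Get-NetAdapterVmq | Format-Table Name, Enabled
    # A live migration vNIC on the converged switch therefore tends to top out
    # far below 10 Gbps; a dedicated RSS-capable NIC (no vSwitch) avoids this.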

  • Hyper-V Failover Cluster Live migration over Virtual Adapter

    Hello,
    Currently we have a test environment with 3 Hyper-V hosts which have 2 NIC teams: a LAN team and a Live/Mngt team.
    In Hyper-V we created a virtual switch (which sits on the Live/Mngt NIC team).
    We want to separate Mngt and Live with VLANs. To do this we created 2 virtual adapters, Mngt and Live, and assigned IP addresses and 2 VLANs (Mngt 10, Live 20).
    Now here is our problem: in Failover Cluster Manager you cannot select the virtual adapter (Live), only the virtual switch which both are on, meaning live migration simply uses the vSwitch instead of the virtual adapter.
    Either it's not possible to separate live migration with a VLAN this way, or maybe there are PowerShell commands to add live migration to a virtual adapter?
    Greetings Selmer

    It can be done in PowerShell but it's not intuitive.
    In Failover Cluster Manager, right-click Networks and open Live Migration Settings.
    Checked networks are allowed to carry traffic. Networks higher in the list are preferred.
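    A sketch of the PowerShell route mentioned above (the preference is exposed as parameters on the cluster's "Virtual Machine" resource type; the network name "Live" is a placeholder for your live migration cluster network):

    # List cluster networks with their IDs and roles.
    Get-ClusterNetwork | Format-Table Name, Id, Role
    # Exclude every cluster network except the one live migration should use
    # (the parameter takes a semicolon-separated list of network IDs).
    $exclude = (Get-ClusterNetwork | Where-Object Name -ne "Live").Id -join ";"
    Get-ClusterResourceType -Name "Virtual Machine" |
        Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude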
    Eric Siron
    Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.

  • Live Migration Failed while Quick Migration is Ok...Virtual machine with synthetic FC HBA !

    When I live migrate a virtual machine with a synthetic FC HBA in a Windows Server 2012 R2 cluster, it fails,
    but when I do a quick migration instead, it succeeds.
    The error events:
    Live migration of 'Virtual Machine PTSCSQL01' failed.
    Virtual machine migration operation for 'PTSCSQL01' failed at migration destination 'PTCLS0106'. (Virtual machine ID B8FBDE64-FF97-4E9B-BC40-6DCFA09B31BE)
    'PTSCSQL01' Synthetic FibreChannel Port: Failed to finish reserving resources with Error 'Unspecified error' (0x80004005). (Virtual machine ID B8FBDE64-FF97-4E9B-BC40-6DCFA09B31BE)
    'PTSCSQL01' Synthetic FibreChannel Port: Failed to finish reserving resources with Error 'Unspecified error' (0x80004005). (Virtual machine ID B8FBDE64-FF97-4E9B-BC40-6DCFA09B31BE)
    My virtual machine's synthetic FC HBA settings are shown here.
    Be a pioneer with Microsoft and enjoy the user experience

    Yes, definitely check your zoning/masking.  Remember that with vHBA you have twice as many WWPNs to account for.  Performing a live migration makes use of both pairs during the transfer from one host to the other - one set is active on the machine
    currently running and the second set is used to ensure connectivity on the destination.  So if you are using Address Set A on Host1, Host2 will try to set up the fibre channel connection using Address Set B.  If you do a quick migration, you would
    continue to use the same Address Set on the second host.  That's why you most likely need to check your zoning/masking for the alternate set.
    . : | : . : | : . tim
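    To see which WWPN sets a VM's virtual HBAs carry (and therefore which addresses have to be zoned and masked on the fabric), a small PowerShell sketch using the Hyper-V module on the host and the VM name from the error above:

    # Each virtual FC adapter has two WWPN sets (A and B); live migration
    # alternates between them, so both sets must be zoned to the storage.
    Get-VMFibreChannelHba -VMName "PTSCSQL01" |
        Format-List SanName, WorldWidePortNameSetA, WorldWidePortNameSetB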

  • Slow migration rates for shared-nothing live migration over teaming NICs

    I'm trying to increase the migration/data transfer rates for shared-nothing live migrations (i.e., especially the storage migration part of the live migration) between two Hyper-V hosts. Both of these hosts have a dedicated teaming interface (switch-independent,
    dynamic) with two 1 GBit/s NICs which is used only for management and transfers. Both of the NICs on both hosts have RSS enabled (and configured), and the teaming interface also shows RSS enabled, as does the corresponding output from Get-SmbMultichannelConnection.
    I'm currently unable to see data transfers of the physical volume of more than around 600-700 MBit/s, even though the team is able to saturate both interfaces with data rates going close to the 2GBit/s boundary when transferring simple files over SMB. The
    storage migration seems to use multichannel SMB, as I am able to see several connections all transferring data on the remote end.
    As I'm not seeing any form of resource saturation (neither the NIC/team is full, nor is a CPU, nor is the storage adapter on either end), I'm slightly stumped that live migration seems to have a built-in limit to 700 MBit/s, even over a (pretty much) dedicated
    interface which can handle more traffic when transferring simple files. Is this a known limitation wrt. teaming and shared-nothing live migrations?
    Thanks for any insights and for any hints where to look further!

    Compression is not configured on the live migrations (but rather it's set to SMB), but as far as I understand, for the storage migration part of the shared-nothing live migration this is not relevant anyway.
    Yes, all NICs and drivers are at their latest version, and RSS is configured (as also stated by the corresponding output from Get-SmbMultichannelConnection, which recognizes RSS on both ends of the connection), and for all NICs bound to the team, Jumbo Frames
    (9k) have been enabled and the team is also identified with 9k support (as shown by Get-NetIPInterface).
    As the interface is dedicated to migrations and management only (i.e., the corresponding Team is not bound to a Hyper-V Switch, but rather is just a "normal" Team with IP configuration), Hyper-V port does not make a difference here, as there are
    no VMs to bind to interfaces on the outbound NIC but just traffic from the Hyper-V base system.
    Finally, there are no bandwidth weights and/or QoS rules for the migration traffic bound to the corresponding interface(s).
    As I'm able to transfer close to 2GBit/s SMB traffic over the interface (using just a plain file copy), I'm wondering why the SMB(?) transfer of the disk volume during shared-nothing live migration is seemingly limited to somewhere around 700 MBit/s on the
    team; looking at the TCP-connections on the remote host, it does seem to use multichannel SMB to copy the disk, but I might be mistaken on that.
    Are there any further hints or is there any further information I might offer to diagnose this? I'm currently pretty much stumped on where to go on looking.
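    A few PowerShell checks that might help narrow this down (a sketch for Windows Server 2012 R2; the SMB bandwidth-limit cmdlets are only present once the "SMB Bandwidth Limit" (FS-SMBBW) feature is installed):

    # Confirm SMB multichannel is in play and that the connections report
    # RSS capability on both ends.
    Get-SmbMultichannelConnection | Format-List
    # Look for a bandwidth cap on live migration or default (storage) traffic.
    Get-SmbBandwidthLimit
    # A cap could be removed with, for example:
    # Remove-SmbBandwidthLimit -Category LiveMigration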

  • Live Migration Fails with error Synthetic FiberChannel Port: Failed to finish reserving resources on an VM using Windows Server 2012 R2 Hyper-V

    Hi, I'm currently experiencing a problem with some VMs in a Hyper-V 2012 R2 failover cluster using Fiber Channel adapters with Virtual SAN configured on the hyper-v hosts.
    I have read several articles about this issue, like these:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/baca348d-fb57-4d8f-978b-f1e7282f89a1/synthetic-fibrechannel-port-failed-to-start-reserving-resources-with-error-insufficient-system?forum=winserverhyperv
    http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
    But haven't been able to fix my issue.
    The Virtual SAN is configured on every hyper-v host node in the cluster. And every VM has 2 fiber channel adapters configured.
    All the World Wide Names are configured both on the FC Switch as well as the FC SAN.
    All the drivers for the FC Adapter in the Hyper-V Hosts have been updated to their latest versions.
    The strange thing is that the issue is not affecting all of the VMs, some of the VMs with FC adapters configured are live migrating just fine, others are getting this error.
    Quick migration works without problems.
    We even tried removing and creating new FC Adapters on a VM with problems, we had to configure the switch and SAN with the new WWN names and all, but ended up having the same problem.
    At first we thought it was related to the hosts, but since some VMs do live migrate fine with FC adapters we tried migrating them on every host, and everything worked well.
    My guess is that it has to be something related to the VMs itself but I haven't been able to figure out what is it.
    Any ideas on how to solve this is deeply appreciated.
    Thank you!
    Eduardo Rojas

    Hi Eduardo,
    How are things going ?
    Best Regards
    Elton Ji

  • Hyper-v Live Migration not completing when using VM with large RAM

    Hi,
    I have a two-node Server 2012 R2 Hyper-V cluster which uses a 100GB CSV and has 128GB RAM across 2 physical CPUs (approx. 7.1GB used when the VM is not booted), and 1 VM running Windows 7 which has 64GB RAM assigned; the VHD size is around 21GB and the BIN file
    is 64GB (by the way, do we have to have that, or can we get rid of the BIN file?).
    NUMA is enabled on both servers. When I attempt to live migrate I get event 1155 in the cluster events; the LM starts and gets into 60-something percent but then fails. The event details are "The pending move for the role 'New Virtual Machine' did not complete."
    However, when I lower the amount of RAM assigned to the VM to around 56GB (56+7 = 63GB) the LM seems to work; any amount of RAM below this allows LM to succeed. But it seems if the total used RAM on the physical server (including that used for the
    VMs) is 64GB or above, the LM fails... coincidence, since the server has 64GB per CPU?
    Why would this be?
    many thanks
    Steve

    Hi,
    I turned NUMA spanning off on both servers in the cluster. I assigned 62GB, 64GB and 88GB, and each time the VM started up with no problems. With 62GB LM completed, but I can't get LM to complete with 64GB+.
    My server is an HP DL380 G8 with the latest BIOS (I just updated it today as it was a couple of months behind); I can't see any settings in the BIOS relating to NUMA, so I'm guessing it is enabled and can't be changed.
    If I run the cmdlet as admin I get ProcessorsAvailability : {0, 0, 0, 0...}
    If I run it as a standard user I just get ProcessorsAvailability.
    My memory and CPU config are as follows; hyper-threading is enabled for the CPU but I don't
    think that would make a difference?
    Processor 1 1 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 4 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 9 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1 12 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 1 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 4 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 9 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 2 12 Good, In Use Yes 713756-081 DIMM DDR3 16384 MB 1600 MHz 1.35 V 2 Synchronous
    Processor 1
    Processor Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
    Processor Status: OK
    Processor Speed: 2400 MHz
    Execution Technology: 12/12 cores; 24 threads
    Memory Technology: 64-bit Capable
    Internal L1 cache: 384 KB
    Internal L2 cache: 3072 KB
    Internal L3 cache: 30720 KB
    Processor 2
    Processor Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
    Processor Status: OK
    Processor Speed: 2400 MHz
    Execution Technology: 12/12 cores; 24 threads
    Memory Technology: 64-bit Capable
    Internal L1 cache: 384 KB
    Internal L2 cache: 3072 KB
    Internal L3 cache: 30720 KB
    thanks
    Steve
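    For reference, a small PowerShell sketch of the NUMA-related checks being discussed (Hyper-V module on the host; Get-VMHostNumaNode is the cmdlet whose ProcessorsAvailability output is quoted above):

    # Show what each NUMA node can offer; a 64GB VM cannot fit into a single
    # 64GB node once the host reserves its own memory, so it needs spanning.
    Get-VMHostNumaNode | Format-List
    # Check (or re-enable) NUMA spanning on the host; the Virtual Machine
    # Management Service must be restarted for a change to take effect.
    Get-VMHost | Format-List NumaSpanningEnabled
    Set-VMHost -NumaSpanningEnabled $true
    Restart-Service vmms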

  • How to Fix: Error (10698) The virtual machine () could not be live migrated to the virtual machine host () using this cluster configuration.

    I am unable to live migrate via SCVMM 2012 R2 to one Host in our 5 node cluster.  The job fails with the errors below.
    Error (10698)
    The virtual machine () could not be live migrated to the virtual machine host () using this cluster configuration.
    Recommended Action
    Check the cluster configuration and then try the operation again.
    Information (11037)
    There currently are no network adapters with network optimization available on host.
    The host properties indicate network optimization is available as indicated in the screen shot below.
    Any guidance on things to check is appreciated.
    Thanks,
    Glenn

    Here is a snippet of the cluster log when from the current VM owner node of the failed migration:
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RHS] Resource Virtual Machine Configuration VMNameHere called SetResourceLockedMode. LockedModeEnabled0, LockedModeReason0.
    00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO  [RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine Configuration VMNameHere', gen(0) result 0/0.
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RHS] Resource Virtual Machine VMNameHere called SetResourceLockedMode. LockedModeEnabled0, LockedModeReason0.
    00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO  [RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine VMNameHere', gen(0) result 0/0.
    00000b6c.00001a9c::2014/02/03-13:16:07.495 INFO  [RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine VMNameHere', gen(0) result 0/0.
    00000b6c.000020ec::2014/02/03-13:16:07.495 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RES] Virtual Machine Configuration <Virtual Machine Configuration VMNameHere>: Current state 'MigrationSrcWaitForOffline', event 'MigrationSrcCompleted', result 0x8007274d
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RES] Virtual Machine Configuration <Virtual Machine Configuration VMNameHere>: State change 'MigrationSrcWaitForOffline' -> 'Online'
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RES] Virtual Machine <Virtual Machine VMNameHere>: Current state 'MigrationSrcOfflinePending', event 'MigrationSrcCompleted', result 0x8007274d
    00000e50.000025c0::2014/02/03-13:16:07.495 INFO  [RES] Virtual Machine <Virtual Machine VMNameHere>: State change 'MigrationSrcOfflinePending' -> 'Online'
    00000e50.00002080::2014/02/03-13:16:07.510 ERR   [RES] Virtual Machine <Virtual Machine VMNameHere>: Live migration of 'Virtual Machine VMNameHere' failed.
    Virtual machine migration operation for 'VMNameHere' failed at migration source 'SourceHostNameHere'. (Virtual machine ID 6901D5F8-B759-4557-8A28-E36173A14443)
    The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host 'DestinationHostNameHere': No connection could be made because the tar
    00000e50.00002080::2014/02/03-13:16:07.510 ERR   [RHS] Resource Virtual Machine VMNameHere has cancelled offline with error code 10061.
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] HandleMonitorReply: OFFLINERESOURCE for 'Virtual Machine VMNameHere', gen(0) result 0/10061.
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] Res Virtual Machine VMNameHere: OfflinePending -> Online( StateUnknown )
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] TransitionToState(Virtual Machine VMNameHere) OfflinePending-->Online.
    00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] rcm::QueuedMovesHolder::VetoOffline: (VMNameHere with flags 0)
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] rcm::QueuedMovesHolder::RemoveGroup: (VMNameHere) GroupBeingMoved: false AllowMoveCancel: true NotifyMoveFailure: true
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] VMNameHere: Removed Flags 4 from StatusInformation. New StatusInformation 0
    00000b6c.000020ec::2014/02/03-13:16:07.510 INFO  [RCM] rcm::RcmGroup::CancelClusterGroupOperation: (VMNameHere)
    00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.000021a8::2014/02/03-13:16:07.510 INFO  [GUM] Node 3: executing request locally, gumId:3951, my action: /dm/update, # of updates: 1
    00000b6c.000021a8::2014/02/03-13:16:07.510 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.00001a9c::2014/02/03-13:16:07.510 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.000022a0::2014/02/03-13:16:07.510 INFO  [RCM] moved 0 tasks from staging set to task set.  TaskSetSize=0
    00000b6c.000022a0::2014/02/03-13:16:07.510 INFO  [RCM] rcm::RcmPriorityManager::StartGroups: [RCM] done, executed 0 tasks
    00000b6c.00000dd8::2014/02/03-13:16:07.510 INFO  [RCM] ignored non-local state Online for group VMNameHere
    00000b6c.000021a8::2014/02/03-13:16:07.526 INFO  [GUM] Node 3: executing request locally, gumId:3952, my action: /dm/update, # of updates: 1
    00000b6c.000021a8::2014/02/03-13:16:07.526 INFO  [GEM] Node 3: Sending 1 messages as a batched GEM message
    00000b6c.000018e4::2014/02/03-13:16:07.526 INFO  [RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine VMNameHere', gen(0) result 0/0.
    No entry is made on the cluster log of the destination node. 
    To me this means the nodes cannot talk to each other, but I don’t know why.  
    They are on the same domain. Their server names resolve properly and they can ping each other both by name and IP.
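    Error 10061 in the log above is "connection refused", so a quick connectivity test of the live migration listener is a reasonable next step. A sketch, run from the source node (6600 is the default Hyper-V VM migration port; the host name is the placeholder from the log):

    # Is the destination's migration listener reachable on the default port?
    Test-NetConnection -ComputerName "DestinationHostNameHere" -Port 6600
    # Is the Virtual Machine Management Service running on the destination?
    Get-Service -ComputerName "DestinationHostNameHere" -Name vmms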

  • Hyper-v 2012 r2 slow throughputs on network / live migrations

    Hi, maybe someone can point me in the right direction. I have 10 servers (5 Dell R210s and 5 Dell R320s) that I have basically converted to standalone Hyper-V 2012 servers, so there is no clustering on any of them at the moment.
    Each server is configured with 2 x 1Gb NICs teamed via a virtual switch. When I copy files between server 1 and 2, for example, I see 100 MB/s throughput, but if I copy a file to server 3 at the same time the file copy load splits the 100 MB/s throughput between
    the 2 copy processes. I was under the impression that if I copied 2 files to 2 totally different servers the load would basically be split across the 2 NICs, effectively giving me 2Gb/s throughput, but this does not seem to be the case. I have played around with TCP/IP
    large send offloads, jumbo packets, and disabling VMQ on the cards (they are Broadcoms :-( ), but it doesn't really seem to make a difference with any of these settings.
    The other issue is that if I live migrate a 12GB VM running only 2GB RAM (effectively just an OS) it takes between 15 and 20 minutes to migrate. I have played around with the advanced settings (SMB, compression, TCP/IP), no real game changers, BUT if I shut
    down the VM and migrate it, it takes just under 3 and a half minutes to move across.
    I am really stumped here. I am busy in a test phase of Hyper-V but can't find any definitive documents relating to this stuff.

    Hi Mark,
    The servers (Hyper-V 2012 R2) are all basically configured with SCVMM 2012 R2, where they all have teamed 1Gb pNICs in a virtual switch, with vNICs for the VM cloud, live migration etc.  The physical network is 2 Netgear GS724T switches which
    are interlinked; each server's 1st NIC is plugged into switch 1 and the second NIC is plugged into switch 2 (see below image).  The team is set to switch independent with Hyper-V Port load balancing.
    The R320 servers are running RAID 5 SAS drives, the R210s have 1TB drives mirrored.  The servers are all using DAS storage; we have not moved to looking at iSCSI, and a SAN is out of the question at the moment.
    I am currently testing between 2x R320s and 2x R210s. I am not copying data to the VMs yet; I am basically testing the transfer between the actual hosts at the moment by copying a 4GB file manually. After testing the live migrations I decided to test
    the transfer rates between the servers first. I have been playing around with the offload settings and RSS. What I don't understand is that yesterday the copy between the servers was running at up to 228 MB/s (i.e. using both NICs), and then a few hours later it was only copying
    at 50-60 MB/s, but it's now back at 113 MB/s, seemingly only using one NIC.
    I was under the impression that if you copy a file between 2 servers the NICs could use the 2Gb bandwidth, but after reading many posts they say only one NIC, so how did the copies get up to 2Gb yesterday? Then again, if you copy files between 3 servers, then
    each copy would use one NIC, basically giving you 2Gb/s, but this is again not being seen.
    Regards Keith
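    On the live migration side specifically, the transport and concurrency settings mentioned above can be inspected and changed per host. A sketch, assuming Windows Server 2012 R2 defaults:

    # Show the current migration settings on each host.
    Get-VMHost | Format-List VirtualMachineMigrationEnabled,
        VirtualMachineMigrationPerformanceOption, MaximumVirtualMachineMigrations
    # Compression is usually the faster choice on plain 1 Gb links; SMB mainly
    # pays off with multichannel/RDMA-capable dedicated migration NICs.
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression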

  • When setting up converged network in VMM cluster and live migration virtual nics not working

    Hello Everyone,
    I am having issues setting up a converged network in VMM.  I have been working with MS engineers to no avail.  I am very surprised with the expertise of the MS engineers; they had no idea what a converged network even was.  I had way more
    experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone including our consultants says my setup is correct.
    What I want to do:
    I have servers with 5 NICs and want to use 3 of the NICs for a team and then configure cluster, live migration and host management as virtual network adapters.  I have created all my logical networks and a port profile with the uplink defined as the team and
    the networks selected.  I created a logical switch and associated the port profile.  When I deploy the logical switch and create the virtual network adapters, the logical switch works for VMs and my management NIC works as well.  The problem is that the cluster and live
    migration virtual NICs do not work.  The correct VLANs get pulled in for the corresponding networks, and if I run Get-VMNetworkAdapterVlan it shows cluster and live migration in VLANs 14 and 15, which is correct.  However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM.  I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard
    switch.
    I am really losing faith in VMM fast. 
    Hosts are 2012 R2 and VMM is 2012 R2 all fresh builds with latest drivers
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure you are teaming the correct physical adapters, where the VLANs are trunked through the connected ports?
    Note: if you create the teaming configuration outside of VMM and then import the hosts into VMM, VMM will not recognize the configuration.
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
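    For comparison, the host-side PowerShell equivalent of that converged setup (outside VMM) looks roughly like the sketch below. The VLAN IDs 14 and 15 come from the post above; the team, switch and member names are placeholders. And note the warning above: a configuration created outside VMM is not recognized when the host is imported.

    # Team three physical NICs, put a vSwitch on top, and add host vNICs.
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false
    foreach ($name in "Management","Cluster","LiveMigration") {
        Add-VMNetworkAdapter -ManagementOS -Name $name -SwitchName "ConvergedSwitch"
    }
    # Tag the cluster and live migration vNICs with their VLANs.
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 14
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 15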

  • Failover Cluster 2008 R2 - VM lose connectivity after live migration

    Hello,
    I have a failover cluster with 3 server nodes running. I have 2 VMs running on one of the hosts without problems, but when I do a live migration of a VM to another host the VM loses network connectivity. For example, if I leave a ping running, the ping command
    gets 2 responses, then 3 packets lost, then 1 response again, then 4 packets lost again, and so on... If I live migrate the VM back to the original host, everything goes OK again.
    The behavior is the same for both VMs, but I did a test with a new VM and with that new VM everything works fine; I can live migrate it to every host.
    Any advice?
    Cristian L Ruiz

    Hi Cristian Ruiz,
    What are your current host NIC settings? From your description it seems you are using an incorrect network NIC design. If you are using iSCSI storage, it needs to use a dedicated
    network in the cluster.
    If your NIC teaming is configured as switch independent + dynamic, please try disabling VMQ in the VM settings to narrow down the issue.
    More information:
    VMQ Deep Dive, 1 of 3
    http://blogs.technet.com/b/networking/archive/2013/09/10/vmq-deep-dive-1-of-3.aspx
    I’m glad to be of help to you!
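    On Windows Server 2012 or later, the VMQ state could be checked and toggled from PowerShell as in the sketch below (on 2008 R2 with BACS-teamed Broadcom NICs the equivalent is the adapter's Advanced properties dialog; the adapter names are placeholders):

    # Which adapters have VMQ enabled?
    Get-NetAdapterVmq | Format-Table Name, Enabled
    # Temporarily disable VMQ on the LAN team members to test.
    Disable-NetAdapterVmq -Name "LAN-NIC1","LAN-NIC2"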
    Hi!
    thank you for your reply!
    Yes, we are using iSCSI storage, but it has its own NICs (2 independent NICs just to connect the server to the storage) and the cluster is configured not to use those NICs for cluster communication. The team configuration is just for LAN connectivity.
    The NIC teaming is configured using the BACS4 software from a Dell server, in Smart Load Balancing and Failover mode (as you can see here
    http://www.micronova.com.ar/cap01.jpg). The link you passed is for Windows Server 2012 and we are running Windows Server 2008 R2, BUT as you can see in the following capture the NICs have that feature disabled
    ( http://www.micronova.com.ar/cap02.jpg ).
    One test I'm thinking of doing is removing the teaming configuration and testing with just one independent NIC for the LAN connection. But I don't know whether you would suggest another option.
    Thanks in advance.
    Cristian L Ruiz
    Sorry, another option I'm also considering is updating the driver version. But the server is in production and I need to schedule a downtime window to test that.
    Cristian L Ruiz

  • Host server live migration causing Guest Cluster node goes down

    Hi 
    I have a two-node Hyper-V host cluster. I'm using a converged network for host management, live migration and the cluster network, and separate NICs for iSCSI multi-pathing. When I live migrate a guest node from one host to another, within the guest cluster that node
    goes down.  I have increased the cluster threshold and cluster delay values.  The guest nodes connect to the iSCSI network directly from the iSCSI initiator on Server 2012.
    The converged networks for management, cluster and live migration are built on top of a NIC team in switch-independent mode with Hyper-V Port load balancing.
    I have VMQ enabled on the converged fabric and jumbo frames enabled on iSCSI.
    Can anyone guess why live migration would cause a failure on the guest node?
    thanks
    mumtaz 
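    The heartbeat tolerances mentioned above are presumably the guest cluster's SameSubnet settings; for reference, a sketch of how they are usually inspected and raised (run inside the guest cluster with the Failover Clustering module):

    # Defaults are 1000 ms delay and a threshold of 5 missed heartbeats;
    # raising the threshold gives the guest node more slack while its VM is
    # briefly paused during the final stage of a live migration.
    Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold
    (Get-Cluster).SameSubnetThreshold = 20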

    Repost here: http://social.technet.microsoft.com/Forums/en-US/winserverhyperv/threads
    in the Hyper-V forum.  You'll get a lot more help there.
    This forum is for Virtual Server 2005.

  • Live migration Vnic on hosts randomly losing connectivity HELP

    Hello Everyone,
    I am building out a new 2012 R2 cluster using VMM with a converged network configuration.  I have 5 physical NICs and am teaming 3 of them using dynamic load balancing.  I have configured 3 virtual network adapters on the host, for management,
    cluster and live migration.  The live migration NIC loses connectivity randomly and fails migrations 50% of the time.
    Hardware is IBM blades (HS22) with Broadcom NetXtreme II NICs.  I have updated firmware and drivers to the latest versions.  I found a forum post describing something that looks very similar, but that was back in November so I'm guessing there is a fix.
    http://www.hyper-v.nu/archives/mvaneijk/2013/11/vnics-and-vms-loose-connectivity-at-random-on-windows-server-2012-r2/
    Really need help with this.
    Thanks

    Hi,
    Can your cluster pass the cluster validation test? Please install the recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters first,
    then monitor again.
    More information:
    Configuring Windows Failover Cluster Networks
    http://blogs.technet.com/b/askcore/archive/2014/02/20/configuring-windows-failover-cluster-networks.aspx
    Hope this helps.
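    A sketch of the validation step suggested above (Failover Clustering PowerShell module; node names are placeholders, and the cmdlet returns the path to the .mht report it produces):

    # Run an inventory- and network-focused validation pass against both nodes.
    Test-Cluster -Node "HVNode1","HVNode2" -Include "Inventory","Network"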
