Hyper-V LBFO teaming: VMQ/RSS settings

Hi,
We are running a six host Hyper-V Cluster on 2012 R2 Core machines. Since the environment was set up we have been receiving the following error:
Source: Hyper-V-VmSwitch
Event ID: 106
Description: Available processor sets of the underlying physical NICs belonging to the LBFO team on switch [GUID] (Friendly Name: VMNet Logical Switch) are not configured correctly. Reason: The processor sets overlap when LBFO is configured with sum-queue mode.
The error pertains to the LBFO team for our VM networks, which was created automatically by SCVMM after joining the two NICs into a logical switch. After reading the MS documentation on Server 2012 teaming (http://www.microsoft.com/en-us/download/confirmation.aspx?id=30160), I came to the following conclusion:
The team is SwitchIndependent, Active-Active, with HyperVPort as the load balancing algorithm. Our hosts have 32 logical processors, therefore I ran the following commands:
Set-NetAdapterRss -name VMNet01 -BaseProcessorNumber 2 -MaxProcessorNumber 16 (leaving out CPU cores 0 and 1 for system processing, as per the recommendations)
Set-NetAdapterRss -name VMNet02 -BaseProcessorNumber 17 -MaxProcessorNumber 31
(I did not run any commands on the Logical Switch itself)
This would give each NIC 15 logical processors to use for VMQ. After that configuration was made and I rebooted the server, I receive a different error upon boot:
Source: Hyper-V-VmSwitch
Event ID: 15
Description: Failed to restore configuration for port Properties (Friendly Name: ) on switch [GUID] (Friendly Name: ), status = Object Name not found.
Here [GUID] is the GUID of the Hyper-V switch, and the friendly name of that switch is blank (instead of saying VMNet Logical Switch as before). I have tried removing the switch and adding it again (destroying and recreating the team), but the issue persists.
Any ideas on how to fix this? Might this just be a "cosmetic flaw", or is it something I should get in order before putting this host back into our production environment?
This is what I get when I run Get-NetAdapterRss on the NICs and the switch:
Name  : VMnet Logical Switch
InterfaceDescription: Microsoft Network Adapter Multiplexor Driver
Enabled: True
NumberOfReceiveQueues: 0
Profile: NUMAStatic
BaseProcessor: [Group:Number]  : 0:0
MaxProcessor: [Group:Number]  : 0:30
MaxProcessors  : 4
RssProcessorArray: [Group:Number/NUMA Distance] : 0:0/0   0:2/0   0:4/0   0:6/0   0:8/0   0:10/0  0:12/0  0:14/0
                                                  0:16/0  0:18/0  0:20/0  0:22/0  0:24/0  0:26/0  0:28/0  0:30/0
IndirectionTable: [Group:Number] :

Name : VMNet01
InterfaceDescription: HP FlexFabric 10Gb 2-port 534FLB Adapter  #106
Enabled: True
NumberOfReceiveQueues: 8
Profile :
BaseProcessor: [Group:Number] : :2
MaxProcessor: [Group:Number]: :16
MaxProcessors  : 16
RssProcessorArray: [Group:Number/NUMA Distance] :
IndirectionTable: [Group:Number]:
Name  : VMNet02
InterfaceDescription: HP FlexFabric 10Gb 2-port 534FLB Adapter  #107
Enabled : True
NumberOfReceiveQueues : 8
Profile:
BaseProcessor: [Group:Number] : :17
MaxProcessor: [Group:Number] : :31
MaxProcessors : 16
RssProcessorArray: [Group:Number/NUMA Distance] :
IndirectionTable: [Group:Number] :

Caption                                 : MSFT_NetAdapterVmqSettingData 'HP
                                          FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #106'
Description                             : HP FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #106
ElementName                             : HP FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #106
InstanceID                              : {DDF16911-57A4-4F36-B919-F28237002030
InterfaceDescription                    : HP FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #106
Name                                    : VMNet01
Source                                  : 2
SystemName                              : asrhv06.*.local
AnyVlanSupported                        :
BaseProcessorGroup                      : 0
BaseProcessorNumber                     : 2
DynamicProcessorAffinityChangeSupported :
Enabled                                 : True
InterruptVectorCoalescingSupported      :
LookaheadSplitSupported                 : False
MaxLookaheadSplitSize                   : 0
MaxProcessorNumber                      : 16
MaxProcessors                           : 16
MinLookaheadSplitSize                   : 0
NumaNode                                :
NumberOfReceiveQueues                   : 15
NumMacAddressesPerPort                  : 0
NumVlansPerPort                         : 0
TotalNumberOfMacAddresses               : 15
VlanFilteringSupported                  : True
PSComputerName                          :
ifAlias                                 : VMNet01
InterfaceAlias                          : VMNet01
ifDesc                                  : HP FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #106
Caption                                 : MSFT_NetAdapterVmqSettingData 'HP
                                          FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #107'
Description                             : HP FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #107
ElementName                             : HP FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #107
InstanceID                              : {94BFD5DC-6D42-4B4B-82EA-0D724C3A36E1
InterfaceDescription                    : HP FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #107
Name                                    : VMNet02
Source                                  : 2
SystemName                              : asrhv06.*.local
AnyVlanSupported                        :
BaseProcessorGroup                      : 0
BaseProcessorNumber                     : 17
DynamicProcessorAffinityChangeSupported :
Enabled                                 : True
InterruptVectorCoalescingSupported      :
LookaheadSplitSupported                 : False
MaxLookaheadSplitSize                   : 0
MaxProcessorNumber                      : 31
MaxProcessors                           : 16
MinLookaheadSplitSize                   : 0
NumaNode                                :
NumberOfReceiveQueues                   : 15
NumMacAddressesPerPort                  : 0
NumVlansPerPort                         : 0
TotalNumberOfMacAddresses               : 15
VlanFilteringSupported                  : True
PSComputerName                          :
ifAlias                                 : VMNet02
InterfaceAlias                          : VMNet02
ifDesc                                  : HP FlexFabric 10Gb 2-port 534FLB
                                          Adapter  #107
Caption                                 : MSFT_NetAdapterVmqSettingData
                                          'Microsoft Network Adapter
                                          Multiplexor Driver'
Description                             : Microsoft Network Adapter
                                          Multiplexor Driver
ElementName                             : Microsoft Network Adapter
                                          Multiplexor Driver
InstanceID                              : {28148EF9-CACD-4DC8-ABAC-AA9150361DED
InterfaceDescription                    : Microsoft Network Adapter
                                          Multiplexor Driver
Name                                    :  VMnet Logical Switch
Source                                  : 2
SystemName                              : asrhv06.*.local
AnyVlanSupported                        :
BaseProcessorGroup                      : 0
BaseProcessorNumber                     : 0
DynamicProcessorAffinityChangeSupported :
Enabled                                 : True
InterruptVectorCoalescingSupported      :
LookaheadSplitSupported                 : False
MaxLookaheadSplitSize                   : 0
MaxProcessorNumber                      :
MaxProcessors                           :
MinLookaheadSplitSize                   : 0
NumaNode                                :
NumberOfReceiveQueues                   : 30
NumMacAddressesPerPort                  : 0
NumVlansPerPort                         : 0
TotalNumberOfMacAddresses               : 30
VlanFilteringSupported                  : True
PSComputerName                          :
ifAlias                                 : VMnet Logical Switch
InterfaceAlias                          :  VMnet Logical Switch
ifDesc                                  : Microsoft Network Adapter
                                          Multiplexor Driver

Similar Messages

  • LBFO team and VMQ configuration

    Hi gang,
    So I am running Server 2012 R2 in a two-node cluster with some teamed network cards (2 dual-port 10 gig cards).
    The teaming is switch independent and dynamic.
    In the event viewer I am seeing the following error:
    Event ID: 106
    Available processor sets of the underlying physical NICs belonging to the LBFO team NIC /DEVICE/{2C85A178-B9EA-436B-8E53-FAE34A578E95} (Friendly Name: Microsoft Network Adapter Multiplexor
    Driver) on switch 9740F036-858E-49C8-8A29-DAE5BDA94871 (Friendly Name: VLANs) are not configured correctly. Reason: The processor sets overlap when LBFO is configured with sum-queue mode.
    So I read a bunch on the VMQ stuff and just want to make sure this is the proper configuration:
    Set-netadaptervmq –name “NIC A” –baseprocessornumber 2 –maxprocessors 8
    Set-netadaptervmq –name “NIC B” –baseprocessornumber 10 –maxprocessors 8
    Set-netadaptervmq –name “NIC C” –baseprocessornumber 18 –maxprocessors 8
    Set-netadaptervmq –name “NIC D” –baseprocessornumber 24 –maxprocessors 6
    Hyper-threading is enabled:
    numberofcores    numberoflogicalprocessors
    8                16
    8                16
    The last line of the vmq configuration is my greatest confusion.
    Thanks.

    Hi Alexey,
    So something like this should work:
    Set-netadaptervmq -name "NIC A" -baseprocessornumber 4 -maxprocessors 4     # LPs 4,6,8,10
    Set-netadaptervmq -name "NIC B" -baseprocessornumber 12 -maxprocessors 4    # LPs 12,14,16,18
    Set-netadaptervmq -name "NIC C" -baseprocessornumber 20 -maxprocessors 3    # LPs 20,22,24
    Set-netadaptervmq -name "NIC D" -baseprocessornumber 26 -maxprocessors 3    # LPs 26,28,30
    Correct?
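
    A quick way to double-check the core/LP layout before picking the base processor numbers (a sketch; with hyper-threading on, VMQ only uses the even-numbered logical processors):
    # Cores vs. logical processors per socket (LPs = 2 x cores when hyper-threading is on)
    Get-CimInstance Win32_Processor | Select-Object Name, NumberOfCores, NumberOfLogicalProcessors
    # Confirm what each team member ended up with after Set-NetAdapterVmq
    Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors, NumberOfReceiveQueues -AutoSize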

  • Event 106 - VMSwitch Error: Available processor sets of the underlying physical NICs belonging to the LBFO team...

    I am using Hyper-V 2012 R2 and the NIC teaming mode is "Switch Independent (Dynamic)". I am getting the following error for NIC teaming on both Hyper-V hosts.
    Available processor sets of the underlying physical NICs belonging to the LBFO team NIC /DEVICE/{DBC60E38-B514-4752-941F-09E0418E6D87} (Friendly Name: Microsoft Network Adapter Multiplexor Driver) on switch DE3CB5B5-A67F-449F-83E4-A302671D9715 (Friendly
    Name: VSwitch) are not configured correctly. Reason: The processor sets overlap when LBFO is configured with sum-queue mode.
    Each Hyper-V host has 2 NICs at 10 Gb (Broadcom), teamed, and the host specification is 16 cores, 2 sockets, 32 logical processors.
    Can anyone help with why I am getting this error, and what is the solution?
    Thanks...

    Hi Asfand,
    As you mentioned, every Hyper-V host has 2 NICs and the team mode is sum-queue.
    You can try the following PowerShell commands (if you want to allocate 14 and 16 LPs to the two NICs and LPs 0,1 for the default queue):
    Set-NetAdapterVMQ -Name NIC1 -BaseProcessorgroup 0 –BaseProcessorNumber 2 -Enabled $True -MaxProcessorNumber 15 –MaxProcessors 14
    Set-NetAdapterVMQ -Name NIC2 -BaseProcessorgroup 0 –BaseProcessorNumber 16 -Enabled $True -MaxProcessorNumber 31 –MaxProcessors 16
    Please refer to the similar thread :
    http://social.technet.microsoft.com/Forums/en-US/1b320975-eb20-41a4-a06a-98bbff5a85db/hypervvmswitch-event-106?forum=winserverhyperv
    Best Regards
    Elton Ji

  • Hyper-V NIC Team Load Balancing Algorithm: TransportPorts vs HyperVPorts

    Hi, 
    I'm going to need to configure a NIC team for the LAN traffic for a Hyper-V 2012 R2 environment. What is the recommended load balancing algorithm? 
    Some background:
    - The NIC team will deal with LAN traffic (NOT iSCSI storage traffic)
    - I'll set up a converged network. So there'll be a virtual switch on top of this team, which will have vNICs configured for each of cluster, live migration and management
    - I'll implement QOS at the virtual switch level (using option -DefaultFlowMinimumBandwidthWeight) and at the vNIC level (using option -MinimumBandwidthWeight)
    - The CSV is set up on an Equallogics cluster. I know that this team is for the LAN so it has nothing to do with the SAN, but this reference will become clear in the next paragraph. 
    Here's where it gets a little confusing. I've checked some of the EqualLogic documentation to ensure this environment complies with their requirements as far as storage networking is concerned. However, as part of their presentation, the Dell publication
    TR1098-4 recommends creating the LAN NIC team with the TransportPorts load balancing algorithm. However, in some of the Microsoft resources (i.e. http://technet.microsoft.com/en-us/library/dn550728.aspx), the recommended load balancing algorithm is HyperVPorts.
    Just to add to the confusion, in this Microsoft TechEd presentation, http://www.youtube.com/watch?v=ed7HThAvp7o, the recommendation (at around minute 8:06) is to use dynamic ports algorithm mode. So obviously there are many ways to do this, but which one is
    correct? I spoke with Equallogics support and the rep said that their documentation recommends TransportPorts LB algorithm because that's what they've tested and works. I'm wondering what the response from a Hyper-V expert would be to this question. Anyway,
    any input on this last point would be appreciated.

    Gleb,
    >>See Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management  for more
    info
    Thanks for this reference. It seems that I have an older version of this document where there's absolutely
    no mention of the dynamic LBA. Hence my confusion when in the Microsoft TechEd presentation the
    recommendation was to use Dynamic. I almost implemented this environment with switch dependent and Address Hash Distribution because, based on the older version of the document, this combination offered: 
    a) Native teaming for maximum performance and switch diversity is not required; or
    b) Teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver
    The new version of the document recommends Dynamic over the other two LBA. The analogy that the document
    makes of TCP flows with human speech was really helpful for me to understand what this algorithm is doing. For those who will never read the document, I'm referring to this: 
    "The outbound loads in this mode are dynamically balanced based on the concept of
    flowlets.  Just as human speech has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have naturally
    occurring breaks.  The portion of a TCP flow between two such breaks is referred to as a flowlet.  When the dynamic mode algorithm detects that a flowlet boundary has been encountered, i.e., a break of sufficient length has occurred in the TCP flow,
    the algorithm will opportunistically rebalance the flow to another team member if appropriate.  The algorithm may also periodically rebalance flows that do not contain any flowlets if circumstances require it.  As a result the affinity
    between TCP flow and team member can change at any time as the dynamic balancing algorithm works to balance the workload of the team members. "
    Anyway, this post made my week. You sir are deserving of a beer!
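
    For what it's worth, a minimal sketch of the converged setup described in this thread, using the Dynamic algorithm and weight-based QoS (team name, switch name, vNIC names and weight values are placeholders, not from the thread):
    # Switch-independent team using the Dynamic load balancing algorithm
    New-NetLbfoTeam -Name "LANTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # Virtual switch on top of the team, using weight-based minimum bandwidth
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "LANTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false
    Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 50
    # vNICs for management, cluster and live migration, each with its own weight
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30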

  • Poor Performance over Converged Networks using LBFO Team

    Hi,
    I have built a Hyper-V 2012 R2 cluster with converged networking and correct logical networks.
    However, I notice that I am not able to utilise more than approximately 1 Gbps over the converged network for live migration, cluster, management etc., even though there are a total of 8 x 1 Gb physical adapters available (switch independent, dynamic load balancing).
    E.g.:
    Hyper-V host1 and Hyper-V host2 have 4 x 1 Gb pNICs connected to switch1 and 4 x 1 Gb pNICs connected to switch2. Switch1 and switch2 have a port channel configured between them.
    I can see that if I carry out a file copy between Hyper-V host1 and Hyper-V host2 (both configured exactly the same), the Management, Live Migration and Cluster vNICs on the host use approx 300 Mbps each during the copy but never get anywhere near
    their full utilisation (here I would expect to see approximately the total available bandwidth of the team, which is 8 Gbps, as there is no other traffic using these networks).
    Is it because the host vNICs cannot use vRSS (VMQ)? Although I struggle to see this as the issue because I don't see any CPU cores maxing out during the copy! Or maybe someone could recommend anything else to check?
    Microsoft Partner

    Without doing a case study on your environment, it would be pretty much impossible to give you a truly correct answer on what you should do.
    In the generic sense, using at least partial convergence is usually in your best interests. If you have a completely unconverged environment, then it is absolutely guaranteed that any given adapter will only have a single gigabit pipe to work with no matter
    what. Aidan's example of dual concurrent Live Migrations is good, but it's only one of a nearly infinite set of possibilities. As another example, let's say that you start a hypervisor-level over-the-network backup and at the same time you copy an ISO file
    from a remote file server into the management operating system. In an unconverged network, they'll both be fighting for attention from one adapter. If you have 8 total adapters, you can almost guarantee that at least one of the other 7 is sitting idle -- and
    there's nothing you can do about it.
    If you're converged (dependent upon setup), then you have a pretty good chance that those activities will be separated across physical adapters. Extrapolate those examples to the management OS and a stack of virtual machines, all cumulatively involved in
    dozens or hundreds of individual transmissions, and you see pretty quickly that convergence gives you the greatest capability for utilization of available resources. It's like when the world went from single- to dual-core computers. A dual-core system isn't
    twice as fast as its single-core counterpart. Its operating system's thread scheduler just has two places to run threads instead of one. But, the rule of one thread per core still applies. The name of the game is concurrency, not aggregation.
    Again, that's in the generic sense. When you get into trying to tap the power of SMB Multichannel, that changes the equation. That's where the case study picks up.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • Hyper-V Nic Teaming (reserve a nic for host OS)

    Whilst setting up NIC teaming on my host (Server 2012 R2) the OS recommends leaving one NIC for host management (access). Is this best practice? It seems like a waste of a NIC, as the host would hardly ever be accessed after initial setup.
    I have 4 nics in total. What is the best practice in this situation?

    It depends on whether this is a single, standalone host or you are building a cluster; either way you need some networks on your Hyper-V host, and at least one connection for the host to do management.
    So in the case of a single node with local disks, you would create a team with the 4 NICs and create a Hyper-V switch with the option checked for creating the management OS adapter (a so-called vNIC on that vSwitch), and then configure that vNIC with the needed IP settings etc.
    If you plan a cluster and also iSCSI/SMB for storage access, take a look here:
    http://www.thomasmaurer.ch/2012/07/windows-server-2012-hyper-v-converged-fabric/
    There you will find a few possible ways of teaming, the switch settings, and all the steps needed for a fully converged setup via PowerShell.
    If you share more information on your setup we can give more details on that.
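
    A minimal sketch of that single-node variant (team name, switch name, adapter names and IP values are placeholders):
    # Team all four NICs (switch independent, dynamic load balancing)
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # External vSwitch on the team, with a management OS vNIC created automatically
    New-VMSwitch -Name "VSwitch" -NetAdapterName "HostTeam" -AllowManagementOS $true
    # Give the management vNIC its IP configuration
    New-NetIPAddress -InterfaceAlias "vEthernet (VSwitch)" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
    Set-DnsClientServerAddress -InterfaceAlias "vEthernet (VSwitch)" -ServerAddresses 192.168.1.2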

  • Hyper-V 2012 R2 VMQ live migrate (shared nothing) Blue Screen

    Hello,
    I have a Windows Server 2012 R2 Hyper-V server, fully patched, new install. There are two Intel network cards and a NIC team configured from them (Windows NIC teaming). There are also some "virtual" NICs with assigned VLAN IDs.
    If I enable VMQ on these NICs and do a shared nothing live migration of a VM, the host gets a BSOD. What can be wrong?

    Hi,
    I would like to check if you need further assistance.
    Thanks.
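
    One common isolation step for VMQ-related crashes and connectivity problems (a sketch; the adapter names are placeholders) is to temporarily disable VMQ on the team members and the team interface, retest the migration, then re-enable it:
    # Disable VMQ on the physical team members and the team multiplex adapter as a test
    Disable-NetAdapterVmq -Name "Intel-NIC1","Intel-NIC2","HostTeam"
    # ...retest the shared nothing live migration, then re-enable
    Enable-NetAdapterVmq -Name "Intel-NIC1","Intel-NIC2","HostTeam"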

  • Windows Server 2012 R2 - Hyper-V NIC Teaming Issue

    Hi All,
    I have a Windows Server 2012 R2 cluster with the Hyper-V role installed, and I have an issue with one of my Windows 2012 R2 Hyper-V hosts.
    A virtual machine network adapter shows status connected but it stops transmitting data, so the VM using that NIC cannot connect to the external network.
    The virtual machine network adapter is using a teamed NIC, with this configuration:
    Teaming Mode : Switch Independent
    Load Balance Algorithm : Hyper-V Port
    NIC Adapter : Broadcom 5720 Quad Port 1Gbps
    I am already using the latest NIC driver from Broadcom.
    I found a little trick for this issue: disabling one of the teamed NICs, but it will happen again.
    Does anyone have the same issue, and is there any workaround?
    Please Advise
    Thanks,

    Hi epenx,
    Thanks for the information .
    Best Regards,
    Elton Ji

  • Hyper-V, NIC Teaming and 2 hosts getting in the way of each other

    Hey TechNet,
    After my initial build of 2 Hyper-V Core servers, which took me a bit of time without a domain, I started building 2 more for another site. After the initial two, setting up the new ones went very fast until I ran into a very funny issue. And I am willing
    to bet it is just my luck, but I am wondering if anyone else out there ended up with it.
    So, I build these 2 new servers, create a NIC team on each host, add the management OS adapter, give it an IP and I can ping the world. So I went back to my station and tried to start working on these hosts, but I kept getting disconnected, especially from one
    of them. Reinstalled it and remade the NIC teaming config, just in case. Same issue.
    So I started pinging both of the servers and I noticed that when one was pinging, the other one tended to not answer ping anymore, and vice versa. Testing the firewall and the switch, and even trying to put the 2 machines on different switches, did
    not help. So I thought, what the heck, let's just remove all the network config from both machines, reboot, and redo the network config. Since then, no issue.
    I only forgot to do one thing before removing the network configuration: to check whether the MAC addresses on the Management OS adapters were the same. Even if it is a small chance, it can still happen (1 in 256^4 I'd say).
    So to get to my question: am I that unlucky, or might it have been something else?
    Enjoy your weekends

    I raised this bug long ago (one year ago in fact) and it still happens today.
    If you create a virtual switch, then add a management vNIC to it - there are times when you will get two hosts with the same MAC on the vNIC that was added for management.
    I have seen this in my lab (and I can reproduce it at will).
    Modify the entire Hyper-V MAC address pool.  Or else you will have the same issue with VMs.  This is the only workaround.
    But yes, it is a very confusing issue.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
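
    A sketch of what checking and adjusting that looks like (the pool range below is only an example value):
    # Compare the management vNIC MAC addresses across the hosts
    Get-VMNetworkAdapter -ManagementOS | Select-Object Name, MacAddress
    # Inspect this host's dynamic MAC address pool
    Get-VMHost | Select-Object MacAddressMinimum, MacAddressMaximum
    # Give each host a distinct pool so vNICs and VMs cannot collide
    Set-VMHost -MacAddressMinimum 00155D200100 -MacAddressMaximum 00155D2001FF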

  • Hyper-v NIC teaming, external network

    http://blogs.technet.com/b/keithmayer/archive/2012/11/26/configuring-hyper-v-virtual-networking-in-w...
    Hello y'all, wanted to get some advice. I am running Server 2012 R2 on the host OS with 4 NICs. I team two of the NICs and the team comes up active.
    1. I connect the team to a virtual switch in Hyper-V.
    2. I uncheck "Allow management operating system to share this network adapter".
    3. I connect a VM to this vSwitch.
    I feel like I am missing something. Problem: can't get any external network on the VM. Please advise. Thank you!
    This topic first appeared in the Spiceworks Community
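
    The same steps in PowerShell, as a sketch (the names are placeholders), plus a quick check that the switch is really external and bound to the team:
    # Team two of the four NICs
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # External vSwitch on the team, not shared with the management OS
    New-VMSwitch -Name "External" -NetAdapterName "VMTeam" -AllowManagementOS $false
    # Connect a VM, then verify the switch binding and any VLAN setting on the vNIC
    Connect-VMNetworkAdapter -VMName "TestVM" -SwitchName "External"
    Get-VMSwitch -Name "External" | Format-List Name, SwitchType, NetAdapterInterfaceDescription
    Get-VMNetworkAdapterVlan -VMName "TestVM"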


  • Settings of VMQ?

    We have around 100 VMs. I want to ask about an error I got in our environment recently. I have searched everywhere and am a bit confused about VMQ, as I got this error on our Hyper-V cluster hosts.
     “Available processor sets of the underlying physical NICs belonging to the LBFO team NIC /DEVICE/{D2345946-C500-492B-AA6F-1DC2FF258E94} (Friendly Name: Microsoft Network Adapter Multiplexor
    Driver) on switch 8E2B961A-4004-496D-B83B-6EFDFF639B9A (Friendly Name: ConvergedSwitch) are not configured correctly. Reason: The processor sets overlap when LBFO is configured with sum-queue mode.”
    We have 4 nodes each with :
    CPU: 2 x 8 cores = 16 cores and 32 logical processors. Hyper-threading enabled.
    Network: 2 x 10 Gbps
    OS: Windows Server 2012 R2
    One network Team :
                    Teaming Mode: Switch Independent.
                    Load Balancing Mode: Dynamic.
                    Standby adapter: None (All adapters active)
    One virtual switch and two virtual NICs, with "allow management OS access to the switch" set to true.
    What would be the recommended settings for VMQ or dVMQ? And should I leave VMQ enabled on the virtual machines?

    Hi,
    " set-netadaptervmq NIC1 -baseprocessornumber 2 -maxprocessors 8
    set-netadaptervmq NIC2 - baseprocessornumber 18 -maxprocessors 8
    VMQ, as I understand it, uses only physical cores.
    It will use these cores if hyperthreading is enabled:
    2 4 6 8 10 12 14 16    18 20 22 24 26 28 30 32? "
    You need to take CPU 0 and 1 into account.
    I think it needs to be configured as:
    set-netadaptervmq NIC1 -baseprocessornumber 2 -maxprocessors 8
    set-netadaptervmq NIC2 -baseprocessornumber 18 -maxprocessors 7
    The result is:
    NIC1: 2 4 6 8 10 12 14 16
    NIC2: 18 20 22 24 26 28 30
    Best Regards
    Elton Ji
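
    On the "should I leave VMQ enabled on the virtual machines" part, a sketch of how to check and adjust it per vNIC (VMQ stays available to a vNIC unless its weight is 0; the VM name is a placeholder):
    # VmqWeight > 0 means the vNIC is eligible for a VMQ queue
    Get-VMNetworkAdapter -VMName * | Sort-Object VMName | Format-Table VMName, Name, VmqWeight -AutoSize
    # Disable VMQ for a single VM's vNIC if needed, or restore the default weight
    Set-VMNetworkAdapter -VMName "SomeVM" -VmqWeight 0
    Set-VMNetworkAdapter -VMName "SomeVM" -VmqWeight 100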

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Center 2012 R2. However, since putting it into production, we are now seeing problems with Virtual Machine Queues. These manifest themselves as
    either very high latency inside virtual machines (we’re talking 200 – 400 mSec round trip times), packet loss or complete connectivity loss for VMs. Not all VMs are affected however the problem does manifest itself on all hosts. I am aware of these issues
    having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012 and a decision was taken to bring it up to speed with R2. This was due to a number
    of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V Cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
    management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining
    4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.
    The initial driver for upgrading was that we were once again seeing issues with VMQ in the old design with the converged fabric design. The two vNics in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
    to the same designated NICs in each of the VM hosts).
    In this setup, a similar issue was being experienced to our present issue. Once again, the Converged Fabric vNICs in the Host OS would on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with
    a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration and would then affect both vNICS connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs
    which in turn would trigger all sorts of horrid goings on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
    utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
    However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
    interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
    In each host, the above 4 VM Switch NICs are formed into a Switch independent, Dynamic team (Sum of Queues mode), each physical NIC has
    RSS disabled and VMQ enabled, and the Team Multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap
    and as the host processors have Hyper Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                     InterfaceDescription                Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
    VM_SWITCH_ETH01          Intel(R) Gigabit 4P I350-t A...#8   True     0:10              1              7
    VM_SWITCH_ETH03          Intel(R) Gigabit 4P I350-t A...#7   True     0:14              1              7
    VM_SWITCH_ETH02          Intel(R) Gigabit 4P I350-t Ada...   True     0:12              1              7
    VM_SWITCH_ETH04          Intel(R) Gigabit 4P I350-t A...#2   True     0:16              1              7
    Production VM Switch     Microsoft Network Adapter Mult...   True     0:0                              28
    Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
    The loss of connectivity / high latency (200 – 400 mSec as before) only seems to arise when a VM is moved via Live Migration from host to host. If I set up a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings
    at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a
    queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process however, longer than I have seen in
    the past (usually only a ping or two are lost, however we are now seeing 5 or more before VM network connectivity is restored on the destination host, this being enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
    VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
    It goes without saying that we really don’t want to disable VMQ, however given the nature of our clients business, we really cannot afford for these issues to crop up. If I can’t find a resolution here, I will be left with no choice as ironically, we see
    less issues with VMQ disabled compared to it being enabled.
    I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
    Many thanks
    Matt
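
    For anyone following along, a sketch of how the queue allocation can be watched around a Live Migration (run on the destination host):
    # Per-adapter VMQ processor layout on this host
    Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors, NumberOfReceiveQueues -AutoSize
    # Which queues exist right now, on which processor, and for which VM
    Get-NetAdapterVmqQueue | Format-Table Name, QueueID, MacAddress, VlanID, Processor, VmFriendlyName -AutoSize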

    Hi Gleb
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear as though the two GUIDS in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
    Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
    as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    VM_SWITCH_ETH01      10             1           VMQ
    VM_SWITCH_ETH02      12             1           VMQ
    VM_SWITCH_ETH03      14             1           VMQ
    VM_SWITCH_ETH04      16             1           VMQ
    SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
    Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    SMB_FABRIC_ETH01     18             2           RSS
    SMB_FABRIC_ETH02     18             2           RSS
    ISCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our ISCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per
    the VM Switch team above. No VMQ on these NICS, however RSS is enabled as follows:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    ISCSI_SAN_ETH01      2              2           RSS
    ISCSI_SAN_ETH02      6              2           RSS
    ISCSI_SAN_ETH03      2              2           RSS
    ISCSI_SAN_ETH04      6              2           RSS
    Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
    cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
    are configured for VMQ, thus RSS has been disabled:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    MAN_SWITCH_ETH01     22             1           VMQ
    MAN_SWITCH_ETH02     22             1           VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where possible we have tried to balance NICs over all the available cores.
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt
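
    For reference, the per-NIC layout in the tables above corresponds to commands along these lines (a sketch using the adapter names listed; the values are just those from the tables, not a recommendation):
    # VM switch team members: VMQ on, one core each (RSS is disabled separately with Disable-NetAdapterRss)
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH03" -BaseProcessorNumber 14 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH04" -BaseProcessorNumber 16 -MaxProcessors 1
    # SMB / Live Migration NICs: RSS only
    Set-NetAdapterRss -Name "SMB_FABRIC_ETH01" -BaseProcessorNumber 18 -MaxProcessors 2
    Set-NetAdapterRss -Name "SMB_FABRIC_ETH02" -BaseProcessorNumber 18 -MaxProcessors 2
    # iSCSI MPIO NICs: RSS only
    Set-NetAdapterRss -Name "ISCSI_SAN_ETH01" -BaseProcessorNumber 2 -MaxProcessors 2
    Set-NetAdapterRss -Name "ISCSI_SAN_ETH02" -BaseProcessorNumber 6 -MaxProcessors 2
    Set-NetAdapterRss -Name "ISCSI_SAN_ETH03" -BaseProcessorNumber 2 -MaxProcessors 2
    Set-NetAdapterRss -Name "ISCSI_SAN_ETH04" -BaseProcessorNumber 6 -MaxProcessors 2
    # Management team members: VMQ on, one core each
    Set-NetAdapterVmq -Name "MAN_SWITCH_ETH01" -BaseProcessorNumber 22 -MaxProcessors 1
    Set-NetAdapterVmq -Name "MAN_SWITCH_ETH02" -BaseProcessorNumber 22 -MaxProcessors 1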

  • Can you use NIC Teaming for Replica Traffic in a Hyper-V 2012 R2 Cluster

    We are in the process of setting up a two-node 2012 R2 Hyper-V cluster and will be using the Replica feature to make copies of some of the hosted VMs to an off-site, standalone Hyper-V server.
    We have planned to use two physical NICs in an LBFO team on the cluster nodes for the Replica traffic, but wanted to confirm that this is supported before we continue.
    Cheers for now
    Russell

    Sam,
    Thanks for the prompt response, presumably the same is true of the other types of cluster traffic (Live Migration, Management, etc.)
    Cheers for now
    Russell
    Yep.
    In our practice we actually use converged networking, which basically NIC-teams all physical NICs into one pipe (switch independent/dynamic/active-active), on top of which we provision vNICs for the parent partition (host OS), as well as guest VMs. 
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable) _________________________________________________________________________________
    Powershell: Learn it before it's an emergency http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx
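
    As a sketch of that converged approach with a dedicated parent-partition vNIC for the Replica traffic (switch name, vNIC name, VLAN and weight are placeholders):
    # Dedicated host vNIC for Hyper-V Replica traffic on the converged switch
    Add-VMNetworkAdapter -ManagementOS -Name "Replica" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Replica" -Access -VlanId 40
    Set-VMNetworkAdapter -ManagementOS -Name "Replica" -MinimumBandwidthWeight 10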

  • Which is better Microsoft 2012 R2 teaming or HP teaming for Hyper-V ?

    hi,
    I plan to use network teaming on our Hyper-V 2012 R2 host.
    I want to know which is recommended: HP teaming or Microsoft 2012 R2 teaming.

    I would use Microsoft Teaming (LBFO Teaming) for the following reasons:
    Dynamic Load Balancing (New in R2 and just generally awesome in terms of perf)
    Compatibility with different NIC versions
    VMM 2012 SP1 and 2012 R2 compatibility in creating teams automatically.
    HP recommends it. (I did a quick search and could not find the source I've used in the past, so take this one with a grain of salt.)
    Better support.
    I have four customers where I support a Hyper-V Private cloud and every single one of them uses Microsoft Teaming.
    You also may want to read through this white paper to get a better idea of all the options you have and when you would not want to use MS Teaming:
    http://www.microsoft.com/en-us/download/details.aspx?id=40319
