Dynamic (Switch Independent) NIC Teaming Mode Problem

Hi,
We are using the Switch Independent / Dynamic distribution NIC teaming configuration on a Hyper-V 2012 R2 cluster. In the graphs below, between 19:48 and 19:58, you can clearly see how the Dynamic NIC teaming mode badly affected the web servers' response times.
We have encountered this problem on different cluster nodes that use different NIC chipsets (Intel or Broadcom) and are configured with Dynamic NIC teaming for the VM network connection.
If we change the teaming mode to Hyper-V Port or remove NIC teaming for the VM network, response times return to normal, as you can see from the graph.
Any ideas?
Regards

Hi,
Under heavy inbound and outbound network load, if you are using "Switch Dependent"
mode, you must configure your switch with the matching teaming mode.
There are two scenarios:
"Switch Dependent" mode with LACP, if your switch supports IEEE 802.1ax (LACP).
"Switch Dependent" mode with Static teaming.
Example of a Cisco® switch configuration when using "Switch Dependent" mode with "Static" teaming:
CiscoSwitch(config)# int port-channel1
CiscoSwitch(config-if)# description NIC team for Windows Server 2012
CiscoSwitch(config-if)# int gi0/23
CiscoSwitch(config-if)# channel-group 1 mode on
CiscoSwitch(config-if)# int gi0/24
CiscoSwitch(config-if)# channel-group 1 mode on
CiscoSwitch(config)# port-channel load-balance src-dst-ip
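On the Windows Server side, the matching team for a static port-channel like the one above could be created roughly as follows (a minimal sketch; the team and adapter names are assumptions):
# Hypothetical adapter names; use Get-NetAdapter to find yours
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Static -LoadBalancingAlgorithm Dynamic
# If the switch ports are instead configured for LACP (802.1ax), use -TeamingMode Lacp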
Hope this helps.

Similar Messages

  • Windows Server 2012/2012R2 NIC Teaming Mode

    Hi,
    Question 1:
    In Windows Server 2012 the following teaming mode was recommended for Hyper-V NIC teams:
    Teaming mode: Switch Independent
    Load balancing mode: Hyper-V Port
    All Adapters Active
    In a session at TechEd 2014 it was stated that Dynamic is the new recommendation for Windows Server 2012 R2. However, a Microsoft PFE stated a few weeks ago that he would still recommend Hyper-V Port for Windows Server 2012 R2. What are your opinions on this?
    Question 2:
    We have a Hyper-V Failover Cluster which isn't migrated to 2012 R2 yet; it's running 2012. In this cluster we use Switch Independent/Hyper-V Port for the team. We also use converged networking, having 2 physical adapters bound to the NIC team, as well as
    3 virtual adapters in the management OS for management, CSV and Live Migration. Recently one of the team NICs failed, and this incident also caused the cluster membership on the affected node to go offline even though the other team NIC was still
    connected. Is this expected behaviour? Would the behaviour be different if 2012 R2 with Dynamic mode were being used?

    Hello,
    As for question number 1:
    For Hyper-V workload it's recommended to use Dynamic with
    Switch Independent mode. Why?
    This configuration will distribute the load based on the TCP Ports address hash as modified by the Dynamic load balancing algorithm. The Dynamic load balancing algorithm will redistribute flows to optimize team member bandwidth utilization so individual
    flow transmissions may move from one active team member to another.  The algorithm takes into account the small possibility that redistributing traffic could cause out-of-order delivery of packets so it takes steps to minimize that possibility.
    The receive side, however, will look identical to Hyper-V Port distribution.  Each Hyper-V switch port’s traffic, whether bound for a virtual NIC in a VM (vmNIC) or a virtual NIC in the host (vNIC), will see all its inbound traffic arriving on a single
    NIC.
    This mode is best used for teaming in both native and Hyper-V environments except when:
    1) Teaming is being performed in a VM,
    2) Switch dependent teaming (e.g., LACP) is required by policy, or
    3) Operation of a two-member Active/Standby team is required by policy. 
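    For reference, creating such a Switch Independent/Dynamic team from PowerShell might look like this (a minimal sketch; the team and adapter names are assumptions):
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # Verify the distribution mode afterwards
    Get-NetLbfoTeam -Name "VMTeam" | Format-List TeamingMode, LoadBalancingAlgorithm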
    As for question number 2:
    The Switch Independent/Hyper-V Port will send packets using all active team members distributing the load based on the Hyper-V switch port number.  Each Hyper-V port will be bandwidth limited to not more than one team member’s bandwidth because the port
    is affinitized to exactly one team member at any point in time. 
    In all cases where this configuration was recommended back in Windows Server 2012 the new configuration in 2012 R2, Switch Independent/Dynamic, will provide better performance.
    For a clustered Hyper-V deployment on Windows Server 2012, Microsoft recommended Switch Independent/Hyper-V Port, as you mentioned, together with Hyper-V QoS applied to the virtual switch (configure minimum bandwidth in weight mode instead of bits per second, and enable and configure QoS for all virtual network adapters).
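    A minimal sketch of that weight-based QoS configuration (the switch name, vNIC names and weight values below are assumptions, not taken from your environment):
    # Virtual switch in weight mode on top of the team
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "VMTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false
    # Host vNICs for the converged networks
    Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "CSV"           -SwitchName "ConvergedSwitch"
    # Minimum bandwidth expressed as weights instead of bits per second
    Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "CSV"           -MinimumBandwidthWeight 20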
    Did you apply QoS on the converged vSwitch after you created the team? Also note that nodes are considered down if they do not respond to 5 heartbeats. Switch Independent/Hyper-V Port does not cause the cluster to go down if one NIC fails; the issue is somewhere else and not in the teaming mode that you chose.
    Hope this helps.
    Regards,
    Charbel Nemnom
    MCSA, MCSE, MCS, MCITP
    Blog: www.charbelnemnom.com
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if
    a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • NIC Teaming on new 2012R2 install IntelX540-T2 Netgear XS712T not functioning properly

    I have a new 2012R2 server with an Intel LOM X540-T2 Dual port, 10-gig copper NIC. I'm trying to team both ports to a Netgear 10-gig copper switch (XS712T, CAT6A cables) and not having much luck. Latest firmware/drivers all around. I reset to factory defaults
    on the Netgear switch.
    From a console, I go to server manager, NIC Teaming, New Team. 
    When I try Team Mode: Switch Independent / Load balancing mode: Dynamic. Both ports become Active. I understand that now the adapters listed in networking connections are to be left alone, and that configuration should take place on the Microsoft
    Network Adapter Multiplexor Driver. The 'Team' adapter is supposed to DHCP, but never does, and ends up with a private 169.254 address. If I manually configure ip4 on the 'Team' adapter, I still have no communication to the network.  Both ports are able
    to DHCP when un-teamed. Something interesting is that port 1 only shows received packets, and none sent. At the same time, port 2 shows a much smaller number of sent packets, but zero received. Odd.
    When I try Team mode: LACP / Load balancing mode: Dynamic. I add two of the Netgear ports to a LAG and enable LACP. One of the ports in Windows will show 'Active' (and has DHCP'd with normal send/receive), and the other port shows 'Faulted LACP Negotiation'
    with very few sent packets and zero received.
    Any ideas, or places to look next?
    Edit: It looks like I can get physical port 1 to join the LACP group, but physical port 2 will not join, even after swapping cables and rebooting the Netgear switch. But port 1 still goes 'active' regardless of the switchport that it's plugged into.

    Hi,
    I agree with SpackTime, please confirm his reply.
    Thanks.

  • SR-IOV Uplink Port with NIC Teaming

    Hello,
    I'm trying to setup my uplink port profile and logical switch with NIC Teaming and SR-IOV support. In Hyper-V this was easy, just had to create the NIC Team (which I configured as Dynamic & LACP) then check the box on the virtual switch.
    In VMM, however, it does not seem possible to enable NIC teams with SR-IOV:
    Can anyone advise? I'm not using any virtual ports. I just want all my VMs to connect to the physical switch through the LACP NIC team, something which I thought would be simple.
    I have a plan B - don't use Microsoft's NIC teaming and instead use the Intel technology to present all the adapters as one to the host. I'd rather not do this.
    Thanks
    MrGoodBytes

    Hi Sir,
    "SR-IOV does have certain limitations. If you configure port access control lists (ACLs), extensions or policies in the virtual switch, SR-IOV is disabled because its traffic totally bypasses the switch.
    You can’t team two SR-IOV network cards in the host. You can, however, take two physical SR-IOV NICs in the host, create separate virtual switches and team two virtual network cards within a VM. "
    There really is a limitation when using NIC teaming with SR-IOV:
    http://technet.microsoft.com/en-us/magazine/dn235778.aspx
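    Building on that limitation, the supported pattern (separate SR-IOV switches plus teaming inside the guest) might look roughly like this (a sketch; switch, adapter and VM names are assumptions):
    # One SR-IOV capable virtual switch per physical NIC
    New-VMSwitch -Name "SRIOV-Switch1" -NetAdapterName "SLOT 4 Port 1" -EnableIov $true
    New-VMSwitch -Name "SRIOV-Switch2" -NetAdapterName "SLOT 4 Port 2" -EnableIov $true
    # Give the VM one vNIC per switch and allow teaming inside the guest
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "SRIOV-Switch1" -Name "SRIOV-1"
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "SRIOV-Switch2" -Name "SRIOV-2"
    Set-VMNetworkAdapter -VMName "VM01" -IovWeight 50 -AllowTeaming On
    # Then, inside the guest OS, team the two vNICs
    New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts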
    Best Regards,
    Elton Ji 
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • Switch-independent load-balancing NIC teaming on server-side and MAC/ARP flapping on L2/L3 switches

    Since actively deploying Windows Server 2012, our server support team has begun to use a new feature - switch-independent load-balancing NIC teaming. At first look it seems great - no additional network configuration is required and load balancing is performed by the server itself, which sends frames out of different NICs (say two, for simplicity) in round-robin or by some hash algorithm, but with the same MAC address. Theoretical bandwidth now grows to 2 Gbps (if we have two 1G NICs per server) compared to a failover NIC teaming configuration, where one of the two adapters is always down.
    But how does this affect (if it does) the switching and routing performance of network equipment? From the point of view of an L2 switch, it has to rewrite its CAM table each time the server sends a frame from a different NIC. Isn't that an expensive operation? Won't it affect switching in a bad way? We see in our logs that the same server makes switches change their MAC-to-port associations several times per second.
    And how does this affect routing, if the switch the server is connected to is an L3 switch and performs routing for the subnet the server is connected to? Will CEF operate well if the ARP entry changes several times per second?
    Thank you.

    Since nobody answered here, we created a service request and got the following answer (in short):
    L2 MAC flapping between ports is very bad and you must avoid such configurations as much as possible. There is one possible variant that can be considered in your situation - use a port-channel (either L2 or L3); in this configuration the port-channel is treated as a single port and there won't be any flapping.
    Conversation example is here: https://ramazancan.wordpress.com/tag/best-practice/
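    If you do go the port-channel route, the Windows team has to be switched to a matching switch-dependent mode as well, for example (a sketch; the team name is an assumption):
    Set-NetLbfoTeam -Name "Team1" -TeamingMode Lacp
    # or, for a static (channel-group mode on) port-channel:
    # Set-NetLbfoTeam -Name "Team1" -TeamingMode Static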

  • NIC teaming and Hyper-V switch recommendations in a cluster

    HI,
    We’ve recently purchased four HP Gen 8 servers with a total of ten NICS to be used in a Hyper-V 2012 R2 Cluster
    These will be connecting to ISCSI storage so I’ll use two of the NICs for the ISCSI storage connection.
    I’m then deciding between to options.
    1. Create one NIC team, one Extensible switch and create VNics for Management, Live Migration and CSV\Cluster - QOS to manage all this traffic. Then connect my VMs to the same switch.
    2. Create two NIC teams, four adapters in each.  Use one team just for Management, Live Migration and CSV\Cluster VNics - QOS to manage all this traffic. 
    Then the other team will be dedicated just for my VMs.
    Is there any benefit to isolating the VMs on their own switch?
    Would having two teams allow more flexibility with the teaming 
    configurations I could use, such as using Switch Independent\Hyper-V Port mode for the VM team? (I do need to read up on the teaming modes a little more)
    Thanks,

    I’m not teaming the ISCSI adapters.  These would be configured with MPIO. 
    What I want to know,
    Create one NIC team, one Extensible switch and create VNics for Management, Live Migration and CSV\Cluster - QOS to manage all this traffic. Then connect
    my VMs to the same switch.
    http://blogs.technet.com/b/cedward/archive/2014/02/22/hyper-v-2012-r2-network-architectures-series-part-3-of-7-converged-networks-managed-by-scvmm-and-powershell.aspx
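    For reference, option 1 (a single converged team and switch) might look roughly like this (a sketch; names, VLAN IDs and weights are assumptions):
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30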
    What are the disadvantages to having this configuration? 
    Should RSS be disabled on the NICs in this configuration with DVMQ left enabled? 
    After reading through this post, I think I’ll need to do this. 
    However, I’d like to understand this a little more.
    I have the option of adding an additional two 10GB NICS. 
    This would mean I could create another team and Hyper-V switch on top and then dedicate this to my VMs leaving the other team for CSV\Management and Live Migration.
     How does this option affect the use of RSS and DVMQ?

  • Nic teaming - what is dynamic load balancing

    When set up nic teaming in Windows  2012 I have the option of selecting "Address Hash", "Hyper-V Port", or "Dynamic" for the load balancing mode. The technet documentation explains "Address Hash" and "Hyper-V
    Port" but there is nothing about "Dynamic". Is there anywhere I can find a description of what the "Dynamic" option provides?

    Microsoft's official recommendation is to use Dynamic load balancing in most configurations.
    Section 3.3 of
    the NIC Teaming Deployment Guide explains what Dynamic is.  Section 3.4 suggests when to use Dynamic load balancing, and when to use other modes.
    I suggest reading the Guide from start to finish.  I learn new things every time I look at it.

  • Correct binding order in a Cluster with logical switches, NIC teams, and vNICs on the host.

    I have seen many recommendations to set the network binding order on you Hyper-V hosts to something similar to:
    Management NIC
    Cluster NICs
    iSCSI NICS
    However, all of  these recommendations are for scenarios where the NICs are all physical NICs in the host.
    Using Server 2012 R2, I am building converged networks with logical switches, NIC Teams, and vNICs on the host.  So when I go set the network binding order, I now have all these components to deal with as well.  For example, on a 4 adapter blade,
    I might typically have the following items in the binding order drop-down.
    4 - physical NICs (2- teamed for the 1 virtual switch, the other 2 used for iSCSI)
    1 - Team interface (Datacenter_Switch)
    5 - vNICs (Management, Cluster, LiveMigration, iSCSI-1, iSCSI-2)
    So, should you only worry about the order of the vNICs (placed at the top) and let the other components just fall to the bottom of the list? This seems likely to me, since the binding order applies to service access to the resources, and the other components are not being directly accessed by network services.
    Or should the order start with the physical resources needed to access the vNICs, followed by any intermediate resources (switches or team interfaces), then the vNICs themselves, to ensure that the resources are available to the subcomponents accessing them?
    Any help would be appreciated.
    Thanks.
    -Tim Reid

    If by 'network binding order' you mean the order set in the Advanced Settings of the Network Connections of the Control Panel, then the most important one is to make sure the domain network is at the top of the list.  Whichever network is at the top
    of the list is used first for auth functions.  So auth functions perform best when the proper network is placed first in the binding order.  After that, I don't know that it makes much difference at all.  (If it does, I'm sure my statement will
    start a lively discussion. <grin>)
    . : | : . : | : . tim

  • Using NIC Teaming and a virtual switch for Windows Server 2012 host networking and Hyper-V.

    Using NIC Teaming and a virtual switch for Windows Server 2012 host networking!
    http://www.youtube.com/watch?v=8mOuoIWzmdE
    Hi thanks for reading. Now I may well have my terminology incorrect here so I will try to explain  as best I can and apologies from the start.
    It’s a bit of both Hyper-v and Server 2012R2. 
    I am setting up a lab with Server 2012 R2. I have several physical network cards that I have teamed into a team called "HostSwitchTeam"; from that team I have made several virtual network adapters, as in the examples below.
    New-VMSwitch "MgmtSwitch" -MinimumBandwidthMode Weight -NetAdapterName "HostSwitchTeam" -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "Vswitch" -SwitchName "MgmtSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "MgmtSwitch"
    When I install Hyper-V and it comes to adding a virtual switch during installation, it only shows the individual physical network cards and the HostSwitchTeam for selection. Once installed, it shows the Microsoft Network Adapter Multiplexor Driver as the only option.
    Is this correct, or how does one use the vSwitch made above and incorporate it into Hyper-V so that a weight can be put against it?
    I'm still trying to get my head around vSwitches, VMNetworkAdapters etc., so I'm somewhat confused as to the way forward at this time and may have missed the plot altogether!
    Any help would be much appreciated.
    Paul
    Paul Edwards

    Hi P.J.E,
    >>I have teams, so I am a bit confused as to the adapter bindings and whether the teams need to be added or just the vEthernet NICs.
    Nic 1,2: HostVMSwitchTeam
    Nic 3,4,5: HostMgmtSwitchTeam
    >>The adapter Binding settings are:
    HostMgmtSwitchTeam
    V-Curric
    Nic 3
    Nic 4
    Nic 5
    V-Livemigration
    HostVMSwitch
    Nic 1
    Nic 2
    V-iSCSI
    V-HeartBeat
    Based on my understanding of the description, "HostMgmtSwitchTeam" and "HostVMSwitch" are teamed NICs.
    You can think of them as two physical NICs (do not use NICs 1,2,3,4,5 any more; there are just the two NICs "HostMgmtSwitchTeam" and "HostVMSwitch").
    V-Curric, V-Livemigration, V-iSCSI and V-HeartBeat are just vNICs of the host (you can change their names and then check whether the virtual switch name changes).
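    To put a weight against those host vNICs (the original question), something like the following should work, since "MgmtSwitch" was created with -MinimumBandwidthMode Weight (the weight values here are assumptions):
    Set-VMNetworkAdapter -ManagementOS -Name "Vswitch" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 20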
    Best Regards
    Elton Ji

  • Problem with network after deleting NIC teaming.

    We have an HP ProLiant DL360p Gen8 server with Windows Server 2012. A couple of months ago I created a NIC team (using 2 network interfaces; the other 2 are disabled and not connected). The NLB (Network Load Balancing) feature was also installed but not configured (I think this is important). IIS and MS SQL 2012 Express were installed too, and nothing else.
    Now I need to delete the NIC team and use the network interfaces separately (with different IPs, but on the same 192.168.1.0 network). When I delete the team and configure IPv4 with a static IP (we don't have DHCP), the network does not work, because there is no default gateway in the IPv4 properties. That is the problem and I don't know how to fix it. When I recreate the NIC team, everything is OK. I checked the registry and the gateway is present under the interfaces (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Adapter GUID>).
    I unchecked NLB in the network adapter's settings.
    I ran: netsh interface ip reset
    I checked Route Print - the 0.0.0.0 route to 192.168.1.1 is present in a single copy.
    I reinstalled the network adapter drivers - that fixed the problem until the next restart; after the restart the problem came back. :)
    I don't know what to do next. I cannot reinstall the OS. Could you please help with this? And sorry for my English.
    Best regards,
    Alex.

    Hi,
    First, please check that the protocol bindings on the adapter are correct.
    If they are normal and you still cannot access the outside network as you mentioned above, please open Device Manager -->
    View --> Show hidden devices --> then remove all the devices under Network adapters.
    (I would recommend noting the driver file paths in the properties of the physical NIC in Device Manager --> Driver tab
    --> Driver Details, and deleting those files after removing the NICs in Device Manager.)
    Then restart your computer, reinstall your NIC driver and retry.
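    Once the adapter is back with a clean driver, the static address and default gateway can also be reapplied from PowerShell instead of the GUI (a sketch; the interface alias and host address are assumptions, the gateway is the 192.168.1.1 from your route print):
    # Remove any stale address on the interface, then set IP, mask and default gateway in one step
    Remove-NetIPAddress -InterfaceAlias "Ethernet" -Confirm:$false
    New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1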
    Best Regards
    Elton Ji
    Well, I finally fixed the problem. :) I deleted all the network adapters in Device Manager along with their driver files. Then I restarted the server and Windows Server installed the Microsoft driver. After that everything worked! I tried to install the HP driver and the problem came back, so I can conclude that the problem is in the manufacturer's driver. Thanks to all and good luck.

  • Nic teaming and hyper-v switches

    I come from the ESX world but I am slowly falling in love with the simplicity of Hyper-V. I have a stack of Dell C2100's I have been experimenting with. Each has two 1 Gb connections teamed to a Cisco switch. When testing bandwidth with a file copy I get around 240 MBps; however, if I add a Hyper-V switch I max out at 90 Mbps, worse than no teaming at all (112 Mbps).
    The team uses the integrated Broadcom NICs with LACP, and I can confirm I get full bandwidth between two 2012 R2 machines until adding a Hyper-V switch. Removing the switch lets me transfer at full bandwidth, but then I can't use Hyper-V guests.
    My goal will eventually be to add dual-port 10 Gb cards to 5 of the C2100's and run them in a cluster to host all my VMs in HA. I don't want to waste my money on the switch and NICs until I can get what I have working correctly.
    HDD speed is also not the issue, as each has 12 3 TB WD RE4 drives with 2 Intel 250 GB SSDs as cache. They easily hold 3000 MBps sustained.

    http://itproctology.blogspot.com/2008/05/hyper-v-tcpoffloading-poor-network.html
    http://itproctology.blogspot.com/2011/03/tcp-checksum-offload-is-not-equal-to.html
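    In line with those two posts, checking and (temporarily, for testing) disabling the offloads on the physical team members might look like this (a sketch; the adapter name pattern is an assumption):
    # Review current offload settings on the team members
    Get-NetAdapterChecksumOffload -Name "NIC*"
    Get-NetAdapterLso -Name "NIC*"
    # Temporarily disable them to test whether throughput through the vSwitch recovers
    Disable-NetAdapterChecksumOffload -Name "NIC*" -TcpIPv4
    Disable-NetAdapterLso -Name "NIC*" -IPv4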
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server R2 / System Centre 2012 R2 however since putting it into production, we are now seeing problems with Virtual Machine Queues. These manifest themselves as
    either very high latency inside virtual machines (we’re talking 200 – 400 mSec round trip times), packet loss or complete connectivity loss for VMs. Not all VMs are affected however the problem does manifest itself on all hosts. I am aware of these issues
    having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012 and a decision was taken to bring it up to speed with R2. This was due to a number
    of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V Cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
    management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining
    4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.
    The initial driver for upgrading was that we were once again seeing issues with VMQ in the old design with the converged fabric design. The two vNics in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
    to the same designated NICs in each of the VM hosts).
    In this setup, a similar issue was being experienced to our present issue. Once again, the Converged Fabric vNICs in the Host OS would on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with
    a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration and would then affect both vNICS connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs
    which in turn would trigger all sorts of horrid goings on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
    utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
    However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
    interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
    In each host, the above 4 VM Switch NICs are formed into a Switch independent, Dynamic team (Sum of Queues mode), each physical NIC has
    RSS disabled and VMQ enabled and the Team Multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap
    and as the host processors have Hyper Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                   InterfaceDescription                Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
    VM_SWITCH_ETH01        Intel(R) Gigabit 4P I350-t A...#8   True     0:10              1              7
    VM_SWITCH_ETH03        Intel(R) Gigabit 4P I350-t A...#7   True     0:14              1              7
    VM_SWITCH_ETH02        Intel(R) Gigabit 4P I350-t Ada...   True     0:12              1              7
    VM_SWITCH_ETH04        Intel(R) Gigabit 4P I350-t A...#2   True     0:16              1              7
    Production VM Switch   Microsoft Network Adapter Mult...   True     0:0                              28
    Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
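    For context, the per-NIC assignment described above would typically have been applied with something like this (a sketch restating the configuration already in place, not new settings):
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH03" -BaseProcessorNumber 14 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH04" -BaseProcessorNumber 16 -MaxProcessors 1
    Disable-NetAdapterRss -Name "VM_SWITCH_ETH0*"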
    The loss of connectivity / high latency (200 – 400 mSec as before) only seems to arise when a VM is moved via Live Migration from host to host. If I setup a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings
    at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a
    queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process however, longer than I have seen in
    the past (usually only a ping or two are lost, however we are now seeing 5 or more before VM network connectivity is restored on the destination host, this being enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
    VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
    It goes without saying that we really don’t want to disable VMQ, however given the nature of our clients business, we really cannot afford for these issues to crop up. If I can’t find a resolution here, I will be left with no choice as ironically, we see
    less issues with VMQ disabled compared to it being enabled.
    I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
    Many thanks
    Matt

    Hi Gleb
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear as though the two GUIDS in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
    Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
    as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    VM_SWITCH_ETH01      10            1          VMQ
    VM_SWITCH_ETH02      12            1          VMQ
    VM_SWITCH_ETH03      14            1          VMQ
    VM_SWITCH_ETH04      16            1          VMQ
    SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
    Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    SMB_FABRIC_ETH01     18            2          RSS
    SMB_FABRIC_ETH02     18            2          RSS
    ISCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our ISCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per
    the VM Switch team above. No VMQ on these NICS, however RSS is enabled as follows:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    ISCSI_SAN_ETH01      2             2          RSS
    ISCSI_SAN_ETH02      6             2          RSS
    ISCSI_SAN_ETH03      2             2          RSS
    ISCSI_SAN_ETH04      6             2          RSS
    Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
    cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
    are configured for VMQ, thus RSS has been disabled:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    MAN_SWITCH_ETH01     22            1          VMQ
    MAN_SWITCH_ETH02     22            1          VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where possible we have tried to balance NICs over all available cores where practical.
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt

  • Create NIC team using all interfaces

    I'm trying to put together a function that will take all the physical NICs on a server and team them together. I'm not even sure where I found the example, but here is the code I have below. This seems like it should be much simpler and that I'm missing
    something.
    # Create a network team using switch independent teaming and Hyper-V port mode
    Function TeamSetup ($teamName) {
        # Collect the names of all physical adapters on the server
        $adapters = Get-NetAdapter -Physical
        $nicList = @()
        foreach ($nic in $adapters) {
            $nicList += $nic.Name
        }
        New-NetLbfoTeam -Name $teamName -TeamMembers $nicList -TeamNicName $teamName -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
    }

    This actually seems to be some sort of problem with running these commands as part of a script. If I enter my original code directly into PowerShell line-by-line, it works as expected. As does the same command using the * wildcard instead of the $nicList
    variable. No matter how I change the code in the script, it fails with this error:
    New-NetLbfoTeam : There are no teamable NetAdapters on the system matching TeamMembers parameter
    At C:\Script.ps1:190 char:2
    + New-NetLbfoTeam $teamName –TeamMembers ($nicList) –TeamNicName $teamName -T ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidArgument: (MSFT_NetLbfoTeam:root/StandardCimv2/MSFT_NetLbfoTeam) [New-NetLbfoTeam
    ], CimException
    + FullyQualifiedErrorId : MiClientApiError_InvalidParameter,New-NetLbfoTeam

  • Server 2012 R2 NIC Teams

    Hey guys, I was hoping someone could help me understand this. I kind of get how this teaming stuff works, but I want to make sure I'm not doing anything wrong or unsupported.
    For the past few days I've been trying to wrap my head around this stuff, and this is what I have so far:
    R720 Dell Server --> 4 Teamed NICs --> Multiplexor --> vSwitch --> VMs
    So the vSwitch is connected to the multiplexor and the VMs are connected to that switch. I figured I'd try to see if I noticed any performance differences. As a matter of fact, it got worse. Sometimes the VM on the server stays connected, other times it
    gets disconnected and won't reconnect.
    When I do a LAN speed test to another server I get about 9 Mbps, versus other servers that are connected straight from a NIC to a switch and from that switch to a server, which get 800 Mbps+.
    I did switch independent with Dynamic load balancing. I did some reading on all of the settings, but I really don't understand why it's so slow.
    Also, if I were to create vNICs based off that switch for, let's say, management, cluster, live migration etc.:
    the vNICs are based off that vSwitch and you can do QoS for those NICs, but I'm confused as to how those NICs are specifically used.
    If none of that made sense, I'll be around looking for more info and I can elaborate if need be.

    Hi,
    In switch independent mode, outbound traffic bandwidth can be increased. If you want to improve both inbound and outbound bandwidth, you could configure a switch dependent mode, and it is recommended to choose Hyper-V switch port as the traffic distribution algorithm.
    For LACP (dynamic) teaming, most switches require manual administration to enable LACP on the port.
    Lastly, you mentioned that the VM loses its connection intermittently. I wonder whether the VM loses the connection to your virtual switch or to the other servers.
    A good blog for you:
    NIC teaming on Virtual Machines
    http://blog.marcosnogueira.org/nic-teaming-on-virtual-machines/
    Please Note: Since the web site is not hosted by Microsoft, the link may change without notice. Microsoft does not guarantee the accuracy of this information.
    Hope this helps.

  • Server 2012 R2 Crashes with NIC Team

    Server 2012 R2 Core configured for Hyper-V. Using 2-port 10Gbe Brocades, we want to use NIC teaming for guest traffic. Create the team... seems fine. Create the virtual switch in Hyper-V, and assign it to the NIC team... seems fine. Create
    a VM, assign the network card to the Virtual switch... still doing okay. Power on the VM... POOF! The host BSOD's. If I remove the switch from the VM, I can run the VM from the console, install the OS, etc... but as soon as I reassign the virtual
    NIC to the switch, POOF! Bye-bye again. Any ideas here?
    Thank you in advance!
    EDIT: A little more info... Two 2-port Brocades and two Nexus 5k's. Running one port on NIC1 to one 5k, and one port on NIC2 to the other 5k. NIC team is using Switch Independent Mode, Address Hash load balancing, and all adapters active.

    Hi,
    Have you updated the NIC driver to the latest version?
    If the issue persists after updating the driver, we can use WinDbg to analyze a crash dump.
    If the NIC driver causes the BSOD, please consult the NIC manufacturer about this issue.
    For detailed information about how to analyze a crash dump, please refer to the link below,
    http://blogs.technet.com/b/juanand/archive/2011/03/20/analyzing-a-crash-dump-aka-bsod.aspx
    Best Regards.
    Steven Lee
    TechNet Community Support
