Windows 7/8.0/8.1 NIC teaming issue

Hello,
I'm having an issue with Teaming network adapters in all recent Windows client OSs.
I'm using Intel Pro dual-port or Broadcom NetXtreme II Gigabit adapters with the appropriate drivers/applications from the vendors.
I am able to set up teaming and fail-over works flawlessly, but the connection will not use the entire advertised bandwidth of 2Gbps. Basically it will use either one port or the other.
I'm doing the testing with the iperf tool and am communicating with a unix based server.
I have the following setup:
Dell R210 II server with 2 Broadcom NetXtreme II adapters and a dual-port Intel Pro adapter - CentOS 6.5 installed, bonding configured and working while communicating with other Unix-based systems.
Zyxel GS2200-48 switch - Link Aggregation configured and working
Dell R210 II with Windows 8.1, with Broadcom NetXtreme II cards or Intel Pro dual-port cards.
For the Windows machine I have also tried Windows 7 and Windows 8, as well as non-server hardware, with identical results.
So... why am I not getting > 1 Gbps throughput on the created team? Load balancing is activated, the team adapter reports a 2 Gbps connection type, and the same setup between 2 Unix machines works flawlessly.
Am I to understand that Link Aggregation (802.3ad) under a Microsoft OS does not support load balancing if the connection is only towards one IP?
To make it clear: I need a client version of Windows to communicate with a Unix-based OS at higher than 1 Gbps (as close to 2 Gbps as possible), without the use of 10 Gbps network adapters.
Thanks in advance,
Endre

As v-yamliu has mentioned, NIC teaming through the operating system is
only available in Windows Server 2012 and Windows Server 2012 R2. For Windows Client or for previous versions of Windows Server you will need to create the team via the network driver. For Broadcom this is accomplished
using the Broadcom Advanced Server Program (BASP) as documented here and
for Intel via Advanced Network Services as documented here.
If you have configured the team via the drivers, you may need to ensure the driver is properly installed and updated. You may also want to ensure that the adapters are configured for aggregation (802.3ad/802.1AX/LACP) rather than fault tolerance or load
balancing, and that the teaming configuration on the switch matches and is compatible with the server configuration. Also ensure that all of the links are connecting at full duplex, as this is a requirement.
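One additional note on the testing side (my assumption, not vendor guidance): 802.3ad/LACP hashes traffic per flow, so a single-stream iperf test between one client and one server stays on one team member and tops out at roughly 1 Gbps even though the team reports a 2 Gbps link. Running several parallel streams at least gives the hash something to distribute; a minimal sketch from PowerShell, with placeholder server address and stream count:

.\iperf.exe -c 192.168.10.20 -P 4 -t 60    # 4 parallel TCP streams for 60 seconds

Even with multiple streams, some hash modes key only on IP addresses, in which case all flows between the same two hosts can still land on a single member.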
Brandon
Windows Outreach Team- IT Pro
The Springboard Series on TechNet

Similar Messages

  • Windows Server 2012 R2 - Hyper-V NIC Teaming Issue

    Hi All,
    I have a cluster of Windows Server 2012 R2 with the Hyper-V role installed. I have an issue with one of my Windows 2012 R2 Hyper-V hosts.
    The virtual machine network adapter shows status connected but it stops transmitting data, so the VM using that NIC cannot connect to the external network.
    The virtual machine network adapter using Teamed NIC, with this configuration:
    Teaming Mode : Switch Independent
    Load Balance Algorithm : Hyper-V Port
    NIC Adapter : Broadcom 5720 Quad Port 1Gbps
    I am already using the latest NIC driver from Broadcom.
    I found a little trick for this issue by disabling one of the teamed NICs, but it happens again.
    Has anyone seen the same issue, and is there any workaround for it?
    Please advise.
    Thanks,

    Hi epenx,
    Thanks for the information .
    Best Regards,
    Elton Ji

  • Windows 2008 R2 iSCSI Boot LUN - NIC driver issue

    Hello,
    I've gotten the opportunity to create a win2k8r2 golden image that will be deployed to IBM3650 M3's over
    iSCSI boot LUNs. I have a need to update the on-board Broadcom 1Gb NICs from the drivers that come with the installation media. I am able to load the drivers that I need without any issue, the system survives multiple re-boots and displays the expected driver
    version via device manager. After I prepare the image I run sysprep, sysprep runs smoothly and the system shuts down as expected. At this point I unmap the existing LUN and clone the sysprep'ed image out.
    This is where I run into trouble: if I update to the latest Broadcom driver the OS fails to boot. The host
    logs into the LUN successfully and I get the Windows splash screen; after a bit the screen goes black and I get no response from Windows at all. With this behavior I would generally expect some issue with my unattend file, but if I don't upgrade
    the driver there are no problems: Windows boots normally and the post-install process runs successfully. After some digging I saw that I might be running into KB974072. Just as the article suggested, I inserted the necessary drivers into the installation
    media. Unfortunately this produced the same result.
    I've got PersistAllDeviceInstalls set to "true" in the generalize pass of the unattend so sysprep
    should not be making any changes to the drivers. If anyone has any ideas on what else, if anything, I can put in the unattend to get sysprep to leave the drivers alone or any thoughts on this issue at all I would greatly appreciate it!
    Regards,
    Toll_Hou5e 

    Hi,
    First, try to set the registry key before capturing the image: under
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\Sysprep\Settings\sppnp, set
    PersistAllDeviceInstalls to 1.
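    A minimal sketch of setting that value from an elevated PowerShell prompt (the key path is taken from the instruction above; verify it against your build before capturing the image):
    $key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\Sysprep\Settings\sppnp'
    New-Item -Path $key -Force | Out-Null                                              # create the key if it does not exist
    Set-ItemProperty -Path $key -Name 'PersistAllDeviceInstalls' -Value 1 -Type DWord  # DWORD value 1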
    If that doesn't work, we can try another way: inject the driver package during installation.
    Managing and Deploying Driver Packages
    http://technet.microsoft.com/en-us/library/dd348456(v=ws.10).aspx
    Hope this helps.

  • Windows Server 2012/2012R2 NIC Teaming Mode

    Hi,
    Question 1:
    In Windows Server 2012 the following teaming mode was recommended for Hyper-V NIC teams:
    Teaming mode: Switch Independent
    Load balancing mode: Hyper-V Port
    All Adapters Active
    In a session at TechEd 2014 it was stated that Dynamic is the new recommendation for Windows Server 2012 R2. However, a Microsoft PFE stated a few weeks ago that he would still recommend Hyper-V Port for Windows Server 2012 R2. What are your opinions on
    this?
    Question 2:
    We have a Hyper-V Failover Cluster which isn't migrated to 2012 R2 yet; it's running 2012. In this cluster we use Switch Independent/Hyper-V Port for the team. We also use converged networking, having 2 physical adapters bound to the NIC team, as well as
    3 virtual adapters in the management OS for management, CSV and Live Migration. Recently one of the team NICs failed, and this incident also caused the cluster membership on the affected node to go offline even though the other team NIC was
    connected. Is this expected behaviour? Would the behaviour be different if 2012 R2 with Dynamic mode were being used?

    Hello,
    As for question number 1:
    For Hyper-V workloads it's recommended to use Dynamic with Switch Independent mode. Why?
    This configuration will distribute the load based on the TCP Ports address hash as modified by the Dynamic load balancing algorithm. The Dynamic load balancing algorithm will redistribute flows to optimize team member bandwidth utilization so individual
    flow transmissions may move from one active team member to another.  The algorithm takes into account the small possibility that redistributing traffic could cause out-of-order delivery of packets so it takes steps to minimize that possibility.
    The receive side, however, will look identical to Hyper-V Port distribution.  Each Hyper-V switch port’s traffic, whether bound for a virtual NIC in a VM (vmNIC) or a virtual NIC in the host (vNIC), will see all its inbound traffic arriving on a single
    NIC.
    This mode is best used for teaming in both native and Hyper-V environments except when:
    1) Teaming is being performed in a VM,
    2) Switch dependent teaming (e.g., LACP) is required by policy, or
    3) Operation of a two-member Active/Standby team is required by policy. 
    As for question number 2:
    The Switch Independent/Hyper-V Port will send packets using all active team members distributing the load based on the Hyper-V switch port number.  Each Hyper-V port will be bandwidth limited to not more than one team member’s bandwidth because the port
    is affinitized to exactly one team member at any point in time. 
    In all cases where this configuration was recommended back in Windows Server 2012 the new configuration in 2012 R2, Switch Independent/Dynamic, will provide better performance.
    For a clustered Hyper-V deployment in Windows Server 2012, Microsoft recommends Switch Independent/Hyper-V Port, as you mentioned, and configuring Hyper-V QoS that applies to the virtual switch (configure minimum bandwidth in
    weight mode instead of in bits per second, and enable and configure QoS for all virtual network adapters).
    Did you apply QoS on the converged vSwitch after you created the team? Note that nodes are considered down if they do not respond to 5 heartbeats. Switch Independent/Hyper-V Port does not cause the cluster to go down if one NIC fails; the issue is somewhere else and not in the teaming mode
    that you chose.
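    If it helps, here is a minimal sketch of the Windows Server 2012 R2 configuration described above (the team, switch and vNIC names are placeholders, not taken from your environment):
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "VMTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10    # weight-based QoS per vNIC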
    Hope this helps.
    Regards,
    Charbel Nemnom
    MCSA, MCSE, MCS, MCITP
    Blog: www.charbelnemnom.com
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if
    a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.

  • NIC teaming creates packet loss (Windows 2008 R2)?

    I'm experiencing some packet loss to all of our VMs that we didn't have before we made some changes to our Hyper-V implementation (Windows 2008 R2). Most of the VMs also run 2008 R2 - with 3 that run Server 2003.
    The host server is a Dell R610 with three 4 port NICS - two Intel quad port gigabit and a quad port Broadcom. 
    We use the individual ports of the Broadcom for host management and live migration - no problems here. We use the Intel cards for both iSCSI and VM networks. Calling the two Intel cards “A” and “B”, and the ports P1-4, we've used AP1, AP2, BP1, BP2 (ports
    1 & 2 of both Intel NICs) for iSCSI connections, and we've created a NIC Team with AP3, AP4, BP3, and BP4 (ports 3 and 4 of both Intel NICs). The team type is "Virtual Machine Load Balancing". We then created a Hyper-V switch based on this team
    for use with all of the VMs created on the host. (as a side note: prior to implementing the NIC team, we just had 4 Hyper-V switches, one associated with each of these 4 ports.)
    The 4 ports of the NIC team are connected to two different Cisco SG200 switches - AP3 and BP3 are connected to switch1, and AP4 and BP4 are connected to switch2 (in an attempt to maximize redundancy). The two Cisco SG200s are simply connected to the rest
    of our network - each to a different switch within the subnet. There is minimal configuration done to the SG200s (for example NO link aggregation); spanning tree is enabled, however.
    My question is: can the network cables be connected to different switches (as they currently are) and if so is there some configuration piece (either on the switch or within Windows) that I'm missing? 
    What are the options here if this configuration is incorrect? The packet loss is in the range of 0.1%, but we've had odd spikes where a VM was essentially unavailable for a brief period (a few minutes) then returned to "normal" (0.1% loss). 
    Pinging a device (like the SG200 itself) or another physical server (for example our domain controller or the hyper-v host itself) results in essentially 0 loss; maybe one or two packets during the course of a 12 hour ping (this was the “normal” ping
    response to VMs before we created the NIC team, so I’m quite sure this has something to do with it).
    Thanks in advance!

    I believe when utilizing Virtual Machine Load Balancing the ports must be connected to the same switch, stack, or chassis, as the ARP entry for the MAC could move. I believe, although I could be wrong, that the outages you see occur when the machine "moves"
    between ports and the ARP entry is updated between the two switches. 
    I believe you are looking for switch fault tolerance teaming, which will allow for the failure of an adapter, cabling, or switch and achieve your goal of maximum redundancy. This is achieved via spanning tree on the switches, which you indicated
    is already configured.
     

  • Using NIC Teaming and a virtual switch for Windows Server 2012 host networking and Hyper-V.

    Using NIC Teaming and a virtual switch for Windows Server 2012 host networking!
    http://www.youtube.com/watch?v=8mOuoIWzmdE
    Hi thanks for reading. Now I may well have my terminology incorrect here so I will try to explain  as best I can and apologies from the start.
    It’s a bit of both Hyper-v and Server 2012R2. 
    I am setting up a lab with Server 2012 R2. I have several physical network cards that I have teamed, called “HostSwitchTeam”; from that team I have made several virtual network adapters, such as the
    examples below.
    New-VMSwitch "MgmtSwitch" -MinimumBandwidthMode Weight -NetAdapterName "HostSwitchTeam" -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "Vswitch" -SwitchName "MgmtSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "MgmtSwitch"
    When I install Hyper-V and it comes to adding a virtual switch during installation, it only shows the individual physical network cards and the
    HostSwitchTeam for selection. When installed, it shows the Microsoft Network Adapter Multiplexor Driver as the only option.
    Is this correct, or how does one use the vSwitch made above and incorporate it into Hyper-V so a weight can be put against it?
    Still trying to get my head around vSwitches, VMNetworkAdapters etc., so I'm somewhat confused as to the way forward at this time and may have missed the plot altogether!
    Any help would be much appreciated.
    Paul
    Paul Edwards

    Hi P.J.E,
    >>I have teams so a bit confused as to the adapter bindings and if the teams need to be added or just the vEthernet NICs?
    Nic 1,2   -> HostVMSwitchTeam
    Nic 3,4,5 -> HostMgmtSwitchTeam
    >>The adapter Binding settings are:
    HostMgmtSwitchTeam
    V-Curric
    Nic 3
    Nic 4
    Nic 5
    V-Livemigration
    HostVMSwitch
    Nic 1
    Nic 2
    V-iSCSI
    V-HeartBeat
    Based on my understanding of the description, "HostMgmtSwitchTeam" and "HostVMSwitch" are teamed NICs.
    You can think of them as two physical NICs (do not use Nic 1,2,3,4,5 any more; there are just the two NICs "HostMgmtSwitchTeam" and "HostVMSwitch").
    V-Curric, V-Livemigration, V-iSCSI and V-HeartBeat are just vNICs of the host (you can change their names and then check whether the virtual switch name changes).
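    As a small sketch of where the weight ends up being applied (the vNIC names follow your earlier example; the weight values are placeholders):
    Get-VMNetworkAdapter -ManagementOS                                              # lists Vswitch, Cluster, ...
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 20   # per-vNIC weight
    Set-VMSwitch -Name "MgmtSwitch" -DefaultFlowMinimumBandwidthWeight 10           # default for flows without an explicit weight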
    Best Regards
    Elton Ji

  • Windows Server 2012 R2 NIC Teaming and DHCP Issue

    Came across a weird issue today during a server deployment. I was doing a physical server deployment and got Windows installed and was getting ready to connect it to our network. Before connecting the Ethernet cables to the network adapters, I created a
    NIC Team using Windows Server 2012 R2 built-in software with a static IP address (we'll say it's 192.168.1.56). Once I plugged in the Ethernet cables, I got network access but was unable to join our domain. At this time, I deleted the NIC team and the two network
    adapters got their own IP addresses issued from DHCP (192.168.1.57 and 192.168.1.58) and at this point I was able to join our domain. I recreated the NIC team and set a new static IP (192.168.1.57) and everything was working great as intended.
    My issue is when I went into DHCP I noticed a random entry that was using the IP address I used for the first NIC teaming attempt (192.168.1.56), before I joined it to the domain. I call this a random entry because it is using the last 8 characters of the
    MAC address as the hostname instead of the server's hostname.
    It seems when I deleted the first NIC team I created (192.168.1.56), a random MAC address Server 2012 R2 generated for the team has remained embedded in the system. The IP address is still pingable even though an ipconfig /all shows the current NIC team
    with the IP 192.168.1.57. There is no IP address of 192.168.1.56 configured on the current server and I have static IPs set yet it is still pingable and registering with DHCP.
    I know this is slightly confusing but I am hoping someone else has encountered this issue and may be able to tell me how to fix this. Simply deleting the DHCP entry does not do the trick, it comes back.

    Hi,
    Please confirm you have chosen the right NIC team type. If you've previously configured NIC teaming, you're aware that NIC teams usually require the assistance of network-side
    protocols. Prior to Windows Server 2012, using a NIC team on a server also meant enabling protocols like EtherChannel or LACP (also known as 802.1AX or 802.3ad) on the network ports.
    More information:
    NIC teaming configure in Server 2012
    http://technet.microsoft.com/en-us/magazine/jj149029.aspx
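    As a hedged suggestion for narrowing this down (not from the linked article): compare the MAC address of the current team interface against the MAC recorded in the stale DHCP lease, and confirm the team type in place:
    Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm
    Get-NetAdapter | Format-Table Name, MacAddress, Status    # compare against the MAC in the stale DHCP record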
    Hope this helps.

  • NIC teaming and direct access in windows 2012 server core

    Hello All,
    I have installed Windows 2012 R2 Server Core and I want to implement DirectAccess with NIC teaming enabled.
    Has anyone tried this kind of setup? Were they successful? Moreover, can we configure DirectAccess when we have NIC teaming configured?
    -Ashish

    Hi there - NIC teaming in both Core and GUI installs is a standard feature and there is no reason (and I have used it successfully) why you cannot do so. As always, make sure you look at TCP offload, as in the UAG/TMG days, to ensure best performance, and also at the network
    card binding order.
    The link for details is here -
    http://technet.microsoft.com/en-us/library/hh831648.aspx
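    On a Core install the team is created and inspected entirely from PowerShell; a minimal sketch with placeholder adapter names:
    New-NetLbfoTeam -Name "DATeam" -TeamMembers "Ethernet","Ethernet 2"
    Get-NetAdapterChecksumOffload -Name "Ethernet*"    # review the TCP offload settings mentioned above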
    Kr
    John Davies

  • Nic Teaming in guest OS

    We have 4 ESX servers, each containing 4 gigabit NICs for production traffic. These are teamed to form a 4 Gb pipe to the Cisco switch. Our guest VMs are Windows 2008 R2, and my issue is that I need to find out if it's possible to team NICs within the Windows 2008 VMs.
    We have physical Windows 2008 servers with the same network config, as in 4 physical NICs teamed to form a 4 Gb pipe to the Cisco switch.
    If I copy a 5 GB file from one physical server to another physical server it takes about 70 seconds (which is great). If I copy the same file from one of the physical servers to one of my VM servers it takes over 3 minutes (not good).
    If I copy the same file between 2 servers on the same ESX host it also takes over 3 minutes (not good).
    If my theory is sound then I think the bottleneck is the fact that my VM servers only have a 1 Gb NIC, so ideally I'd like to be able to team a pair of NICs in a VM and redo the test.
    Any information on how to team nics within guest vm's would be appreciated.
    thanks

    Depending on the switch load balancing policy you can force each virtual NIC to use a separate physical NIC. If you have a piece of software which implements the LB in the guest (as the VMware drivers don't implement this) you'll be able to achieve Transmit Load Balancing (TLB). Failover is implemented at the vSwitch layer.
    IMHO this benefit is a theoretical one, as other guests are also using these physical NICs, load-distributed as well.
    For real load balancing you should have physical switches which support one of the LB protocols available.
    AWo

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Center 2012 R2; however, since putting it into production we are now seeing problems with Virtual Machine Queues. These manifest themselves as
    either very high latency inside virtual machines (we’re talking 200 – 400 msec round trip times), packet loss or complete connectivity loss for VMs. Not all VMs are affected, however the problem does manifest itself on all hosts. I am aware of these issues
    having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012 and a decision was taken to bring it up to speed to R2. This was due to a number
    of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
    management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining
    4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.
    The initial driver for upgrading was that we were once again seeing issues with VMQ in the old design with the converged fabric design. The two vNics in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
    to the same designated NICs in each of the VM hosts).
    In this setup, a similar issue was being experienced to our present issue. Once again, the Converged Fabric vNICs in the Host OS would on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with
    a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration and would then affect both vNICS connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs
    which in turn would trigger all sorts of horrid goings on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
    utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
    However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
    interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
    In each host, the above 4 VM Switch NICs are formed into a Switch independent, Dynamic team (Sum of Queues mode), each physical NIC has
    RSS disabled and VMQ enabled, and the Team Multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap
    and as the host processors have Hyper Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                    InterfaceDescription                 Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
    VM_SWITCH_ETH01         Intel(R) Gigabit 4P I350-t A...#8    True     0:10              1              7
    VM_SWITCH_ETH03         Intel(R) Gigabit 4P I350-t A...#7    True     0:14              1              7
    VM_SWITCH_ETH02         Intel(R) Gigabit 4P I350-t Ada...    True     0:12              1              7
    VM_SWITCH_ETH04         Intel(R) Gigabit 4P I350-t A...#2    True     0:16              1              7
    Production VM Switch    Microsoft Network Adapter Mult...    True     0:0                              28
    Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
    The loss of connectivity / high latency (200 – 400 msec as before) only seems to arise when a VM is moved via Live Migration from host to host. If I set up a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings
    at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a
    queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process, however, longer than I have seen in
    the past (usually only a ping or two are lost, however we are now seeing 5 or more before VM network connectivity is restored on the destination host, this being enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
    VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
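    For reference, a hedged way to watch how queues are allocated on the destination host during a migration (output columns may vary slightly by driver):
    Get-NetAdapterVmqQueue | Format-Table Name, QueueID, MacAddress, VmFriendlyName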
    It goes without saying that we really don’t want to disable VMQ; however, given the nature of our client's business, we really cannot afford for these issues to crop up. If I can’t find a resolution here, I will be left with no choice as, ironically, we see
    fewer issues with VMQ disabled compared to it being enabled.
    I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
    Many thanks
    Matt

    Hi Gleb
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear as though the two GUIDS in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
    Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
    as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
    Interface Name        BaseVMQProc    MaxProcs    VMQ / RSS
    VM_SWITCH_ETH01       10             1           VMQ
    VM_SWITCH_ETH02       12             1           VMQ
    VM_SWITCH_ETH03       14             1           VMQ
    VM_SWITCH_ETH04       16             1           VMQ
    SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
    Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
    Interface Name        BaseVMQProc    MaxProcs    VMQ / RSS
    SMB_FABRIC_ETH01      18             2           RSS
    SMB_FABRIC_ETH02      18             2           RSS
    ISCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our ISCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per
    the VM Switch team above. No VMQ on these NICS, however RSS is enabled as follows:
    Interface Name        BaseVMQProc    MaxProcs    VMQ / RSS
    ISCSI_SAN_ETH01       2              2           RSS
    ISCSI_SAN_ETH02       6              2           RSS
    ISCSI_SAN_ETH03       2              2           RSS
    ISCSI_SAN_ETH04       6              2           RSS
    Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
    cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
    are configured for VMQ, thus RSS has been disabled:
    Interface Name        BaseVMQProc    MaxProcs    VMQ / RSS
    MAN_SWITCH_ETH01      22             1           VMQ
    MAN_SWITCH_ETH02      22             1           VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where possible we have tried to balance NICs over all available cores where practical.
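    For completeness, this is roughly how the per-NIC assignment above is applied and verified (a sketch only; the interface names and core numbers mirror the tables above):
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
    Get-NetAdapterVmq | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors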
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt

  • NIC teaming - Server 2008 R2 DC combined with other Software

    Hello!
    I've been searching all morning for an answer of what we have in mind to do at work....
    We've got a server installed with Windows Server 2008 R2 and have 4 NICs on it. We want to make it a DC (with DNS, DHCP and print services) and also want to install our Backup Solution (from Veeam) for our VMs. This server will be the only physical Microsoft
    server next to our 3 ESX servers at the end.
    I read here (http://markparris.co.uk/2010/02/09/top-tipactive-directory-domain-controllers-and-teamed-network-cards/) that a DC with NIC teaming only uses the FO (fail-over) feature of the teaming. Since the backup
    solution is also on this server, it would be great to use the LB (load-balancing) feature as well. My question is: when I activate NIC teaming and install the DC roles, do the roles just use the FO feature and neglect the LB feature, or do they enable/disable those modes/features
    of NIC teaming? Because it would be nice if the backup solution could use LB for bigger bandwidth for backups and restores, and I wouldn't really care about FO for the DC role.
    cheers
    Ivo

    Hi,
    I think the issue is related to the third-party NIC teaming solution. You can refer to the third-party manufacturer.
    I should also remind you of something else: a DC with multiple NICs can cause many problems, so I would recommend you run a dedicated
    Hyper-V server and promote a DC on one of its virtual machines.
    Hope this helps.

  • Nic Team network speed

    Hello!
    There're two physical servers (Hyper-V is not installed) with two nic teams, each consisting of two 1Gb nics:
    To test these teams I tried to copy two files from server1 to server2:
    1) I started copying the first file and ~20 sec later started copying the second file to the same SSD (from server1 to server2)
    2) I copied ~simultaneously two different files to the two different SSDs (from server1 to server2)
    As shown in picture 1, when I added the second copy the first one stopped completely, although this SSD can tolerate transfer rates up to 350-380 MBps.
    Both pictures show that the total file transfer speed was less than that of a single team member (1Gbps):
    0+112MBps < 1Gbps
    57.1 MBps + 56.5MBps < 1Gbps
    According to http://technet.microsoft.com/en-us/library/hh831648.aspx
    NIC Teaming, also known as load balancing and failover (LBFO), allows multiple network adapters on a computer to be placed into a team for the following purposes:
    Bandwidth aggregation
    Traffic failover to prevent connectivity loss in the event of a network component failure
    Test1 and Test2 show no bandwidth aggregation... Are my tests wrong?
    Thank you in advance,
    Michael

    P.S. In a production network it means users would read data from servers using the total amount of a team's bandwidth but write data using the bandwidth of a single team member - that's not what I would ever like to have in my network.
    And once again: http://technet.microsoft.com/en-us/library/hh831648.aspx
    Traffic distribution algorithms
    NIC Teaming in Windows Server 2012 supports the following traffic distribution methods:
    Hashing. This algorithm creates a hash based on components of the packet, and then it assigns packets that have that hash value to one of the available network adapters. This keeps all packets from the same TCP stream on the
    same network adapter. Hashing alone usually creates balance across the available network adapters. Some NIC Teaming solutions that are available on the market monitor the distribution of the traffic and reassign specific hash values to different
    network adapters in an attempt to better balance the traffic. The dynamic redistribution is known as smart load balancing or adaptive load balancing.
    The components that can be used as inputs to the hashing function include:
    Source and destination MAC addresses
    Source and destination IP addresses, with or without considering the MAC addresses (2-tuple hash)
    Source and destination TCP ports, usually used along with the IP addresses (4-tuple hash)
    I don't see in this explanation any reason for not creating balance when the sources are different but the destination is the same...
    Regards,
    Michael
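    One thing worth checking (a suggestion on my part, not part of the original thread) is which load balancing algorithm the teams are actually using, since an address-hash mode keyed only on IP addresses will pin all traffic between the same two hosts to a single team member no matter how many copies run in parallel; note also that with switch-independent teaming, inbound traffic to the team generally arrives on a single member:
    Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm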

  • Hyper-V NIC Team Load Balancing Algorithm: TransportPorts vs Hyper-VPorts

    Hi, 
    I'm going to need to configure a NIC team for the LAN traffic for a Hyper-V 2012 R2 environment. What is the recommended load balancing algorithm? 
    Some background:
    - The NIC team will deal with LAN traffic (NOT iSCSI storage traffic)
    - I'll set up a converged network. So there'll be a virtual switch on top of this team, which will have vNICs configured for each cluster, live migration and management
    - I'll implement QOS at the virtual switch level (using option -DefaultFlowMinimumBandwidthWeight) and at the vNIC level (using option -MinimumBandwidthWeight)
    - The CSV is set up on an Equallogics cluster. I know that this team is for the LAN so it has nothing to do with the SAN, but this reference will become clear in the next paragraph. 
    Here's where it gets a little confusing. I've checked some of the Equallogics documentation to ensure this environment complies with their requirements as far as storage networking is concerned. However, as part of their presentation the Dell publication
    TR1098-4, recommends creating the LAN NIC team with the TransportPorts Load Balancing Algorithm. However, in some of the Microsoft resources (i.e. http://technet.microsoft.com/en-us/library/dn550728.aspx), the recommended load balancing algorithm is HyperVPorts.
    Just to add to the confusion, in this Microsoft TechEd presentation, http://www.youtube.com/watch?v=ed7HThAvp7o, the recommendation (at around minute 8:06) is to use dynamic ports algorithm mode. So obviously there are many ways to do this, but which one is
    correct? I spoke with Equallogics support and the rep said that their documentation recommends TransportPorts LB algorithm because that's what they've tested and works. I'm wondering what the response from a Hyper-V expert would be to this question. Anyway,
    any input on this last point would be appreciated.

    Gleb,
    >>See Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management  for more
    info
    Thanks for this reference. It seems that I have an older version of this document where there's absolutely
    no mention of the dynamic LBA. Hence my confusion when in the Microsoft TechEd presentation the
    recommendation was to use Dynamic. I almost implemented this environment with switch dependent and Address Hash Distribution because, based on the older version of the document, this combination offered: 
    a) Native teaming for maximum performance and switch diversity is not required; or
    b) Teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver
    The new version of the document recommends Dynamic over the other two LBA. The analogy that the document
    makes of TCP flows with human speech was really helpful for me to understand what this algorithm is doing. For those who will never read the document, I'm referring to this: 
    "The outbound loads in this mode are dynamically balanced based on the concept of
    flowlets.  Just as human speech has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have naturally
    occurring breaks.  The portion of a TCP flow between two such breaks is referred to as a flowlet.  When the dynamic mode algorithm detects that a flowlet boundary has been encountered, i.e., a break of sufficient length has occurred in the TCP flow,
    the algorithm will opportunistically rebalance the flow to another team member if appropriate.  The algorithm may also periodically rebalance flows that do not contain any flowlets if circumstances require it.  As a result the affinity
    between TCP flow and team member can change at any time as the dynamic balancing algorithm works to balance the workload of the team members. "
    Anyway, this post made my week. You sir are deserving of a beer!

  • Switch-independent load-balancing NIC teaming on server-side and MAC/ARP flapping on L2/L3 switches

    Since active deployment of Windows Server 2012, our server support team began to utilize a new feature - switch-independent load-balancing NIC teaming. At first look it seems great - no additional network configuration is required, and load balancing is performed by the server itself by sending frames round-robin or via some hash algorithm out of different NICs (say two, for simplicity) but with the same MAC address. Theoretical bandwidth has now grown to 2 Gbps (if we have two 1G NICs per server) versus a failover NIC teaming configuration, where one of the two adapters is always down.
    But how does this affect (if it does) the switching and routing performance of network equipment? From the point of view of an L2 switch, it has to rewrite its CAM table each time a server sends a frame from a different NIC. Isn't that an expensive operation? Won't it affect switching in a bad way? We see in our logs that the same server makes switches change MAC-to-port associations several times per second.
    Well, and how does it affect routing, if the switch to which the server is connected is an L3 switch and performs routing for the subnet the server is connected to? Will CEF operate well if an ARP entry changes several times per second?
    Thank you.

    Since nobody answered here, we created a service request and got the following answer (in short):
    L2 MAC flapping between ports is very bad and you must avoid such configurations as much as possible. There is one possible variant that can be considered in your situation - use a port-channel (either L2 or L3); in this configuration the port-channel will be treated as a single port and there won't be flapping.
    Conversation example is here: https://ramazancan.wordpress.com/tag/best-practice/

  • Nic teaming - what is dynamic load balancing

    When setting up NIC teaming in Windows 2012 I have the option of selecting "Address Hash", "Hyper-V Port", or "Dynamic" for the load balancing mode. The TechNet documentation explains "Address Hash" and "Hyper-V
    Port" but there is nothing about "Dynamic". Is there anywhere I can find a description of what the "Dynamic" option provides?

    Microsoft's official recommendation is to use Dynamic load balancing in most configurations.
    Section 3.3 of
    the NIC Teaming Deployment Guide explains what Dynamic is.  Section 3.4 suggests when to use Dynamic load balancing, and when to use other modes.
    I suggest reading the Guide from start to finish.  I learn new things every time I look at it.
