Nic Team network speed

Hello!
There are two physical servers (Hyper-V is not installed) with two NIC teams, each team consisting of two 1 Gb NICs.
To test these teams I tried to copy two files from server1 to server2:
1) I started copying the first file and ~20 sec later started copying the second file to the same SSD (from server1 to server2)
2) I copied ~simultaneously two different files to the two different SSDs (from server1 to server2)
As shown in picture 1, when I started the second copy the first one stopped completely, although this SSD can sustain transfer rates of up to 350-380 MBps.
Both pictures show that the total file transfer speed was less than that of a single team member (1Gbps):
0 + 112 MBps < 1 Gbps (112 MBps is roughly 0.9 Gbps)
57.1 MBps + 56.5 MBps < 1 Gbps (about 114 MBps, again roughly 0.9 Gbps)
According to http://technet.microsoft.com/en-us/library/hh831648.aspx
NIC Teaming, also known as load balancing and failover (LBFO), allows multiple network adapters on a computer to be placed into a team for the following purposes:
Bandwidth aggregation
Traffic failover to prevent connectivity loss in the event of a network component failure
Test 1 and Test 2 show no bandwidth aggregation... Are my tests wrong?
Thank you in advance,
Michael

P.S. In a production network this means users would read data from servers using the total bandwidth of a team, but write data using only the bandwidth of a single team member - that's not something I would ever want in my network.
And once again: http://technet.microsoft.com/en-us/library/hh831648.aspx
Traffic distribution algorithms
NIC Teaming in Windows Server 2012 supports the following traffic distribution methods:
Hashing. This algorithm creates a hash based on components of the packet, and then it assigns packets that have that hash value to one of the available network adapters. This keeps all packets from the same TCP stream on the same network adapter. Hashing alone usually creates balance across the available network adapters. Some NIC Teaming solutions that are available on the market monitor the distribution of the traffic and reassign specific hash values to different network adapters in an attempt to better balance the traffic. The dynamic redistribution is known as smart load balancing or adaptive load balancing.
The components that can be used as inputs to the hashing function include:
Source and destination MAC addresses
Source and destination IP addresses, with or without considering the MAC addresses (2-tuple hash)
Source and destination TCP ports, usually used along with the IP addresses (4-tuple hash)
I don't see anything in this explanation that would prevent balancing when the sources are different but the destination is the same...
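For reference, the algorithm a team uses can be checked and changed from PowerShell; a minimal sketch, assuming a team named "Team1" (the real team names on these servers may differ):
# Show the current teaming mode and load balancing algorithm
Get-NetLbfoTeam -Name "Team1" | Format-List Name, TeamingMode, LoadBalancingAlgorithm
# Hash on source/destination TCP ports as well (4-tuple), so two parallel
# copies to the same destination can be placed on different team members
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm TransportPorts
Even then, a single file copy is one TCP stream, so it stays on one member and tops out at roughly 1 Gbps.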
Regards,
Michael

Similar Messages

  • Using NIC Teaming and a virtual switch for Windows Server 2012 host networking and Hyper-V.

    Using NIC Teaming and a virtual switch for Windows Server 2012 host networking!
    http://www.youtube.com/watch?v=8mOuoIWzmdE
    Hi, thanks for reading. I may well have my terminology incorrect here, so I will try to explain as best I can - apologies from the start.
    It's a bit of both Hyper-V and Server 2012 R2.
    I am setting up a lab with Server 2012 R2. I have several physical network cards that I have teamed, called "HostSwitchTeam", and from that team I have made several virtual network adapters, such as the examples below.
    New-VMSwitch "MgmtSwitch" -MinimumBandwidthMode weight -NetAdaptername "HostSwitchTeam" -AllowManagement $false
    Add-VMNetworkAdapter -ManagementOS -Name "Vswitch" -SwitchName "MgmtSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "MgmtSwitch"
    When I install Hyper-V and it comes to adding a virtual switch during installation, it only shows the individual physical network cards and the HostSwitchTeam for selection. When installed, it shows the Microsoft Network Adapter Multiplexor Driver as the only option.
    Is this correct, or how does one use the vSwitch made above and incorporate it into Hyper-V so a weight can be put against it?
    Still trying to get my head around vSwitches, VMNetworkAdapters etc., so I'm somewhat confused as to the way forward at this time and may have missed the plot altogether!
    Any help would be much appreciated.
    Paul
    Paul Edwards

    Hi P.J.E,
    >>I have teams so a bit confused as to the adapter bindings and if the teams need to be added or just the vEthernet Nics?.
    Nic 1,2 
    HostVMSwitchTeam
    Nic 3,4,5
             HostMgmtSwitchTeam
    >>The adapter Binding settings are:
    HostMgmtSwitchTeam
    V-Curric
    Nic 3
    Nic 4
    Nic 5
    V-Livemigration
    HostVMSwitch
    Nic 1
    Nic 2
    V-iSCSI
    V-HeartBeat
    Based on my understanding of the description, "HostMgmtSwitchTeam" and "HostVMSwitch" are teamed NICs.
    You can think of them as two physical NICs (do not use NIC 1, 2, 3, 4, 5 any more; there are just the two NICs "HostMgmtSwitchTeam" and "HostVMSwitch").
    V-Curric, V-Livemigration, V-iSCSI and V-HeartBeat are just vNICs of the host (you can change their names and then check whether the virtual switch name changes).
    Best Regards
    Elton Ji

  • Problem with network after deleting NIC teaming.

    We have an HP ProLiant DL360p Gen8 server with Windows Server 2012. A couple of months ago I created a NIC team (using 2 network interfaces; the other 2 are disabled and not connected). The NLB (Network Load Balancing) feature was also installed but not configured (I think this is important). IIS and MS SQL 2012 Express were installed too, and nothing else.
    Now I need to delete the NIC team and use the network interfaces separately (with different IPs but on the same 192.168.1.0 network). When I delete the team and configure IPv4 with a static IP (we don't have DHCP), the network does not work, because no default gateway is kept in the IPv4 properties. This is the problem and I don't know how to fix it. When I recreate the NIC team, everything is OK. I checked the registry and the gateway is present under Interfaces (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Adapter GUID>).
    I unchecked NLB in the network adapter's settings.
    I ran:
    netsh interface ip reset
    I checked route print - the 0.0.0.0 route to 192.168.1.1 is present in a single copy.
    I reinstalled the network adapter drivers - that fixed the problem until the next restart; after the restart the problem came back :)
    I don't know what to do next... I cannot reinstall the OS. Could you please help with this? And sorry for my English.
    Best regards,
    Alex.
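    A minimal sketch of reassigning the static address and gateway from PowerShell once the team is deleted (the interface alias "Ethernet", the host address 192.168.1.10 and the 192.168.1.1 gateway are placeholders based on the description above):
    # clear any leftover address / default route on the interface first
    Remove-NetIPAddress -InterfaceAlias "Ethernet" -Confirm:$false -ErrorAction SilentlyContinue
    Remove-NetRoute -InterfaceAlias "Ethernet" -DestinationPrefix 0.0.0.0/0 -Confirm:$false -ErrorAction SilentlyContinue
    # reassign the static IP and the default gateway
    New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1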

    Hi,
    After this, please check that the protocol bindings are correct.
    If they are normal and you still cannot reach outside as you mentioned above, please open Device Manager --> View --> Show hidden devices --> then remove all the devices under Network adapters.
    (I would recommend noting the driver files' path first, in the properties of the physical NIC in Device Manager --> Driver tab --> Driver Details, and deleting those files after removing the NIC in Device Manager.)
    Then restart your computer, install your NIC driver and retry.
    Best Regards
    Elton Ji
    Well, I finally fixed the problem. :) I deleted all the network adapters in Device Manager together with the driver files. Then I restarted the server and Windows Server installed the Microsoft driver. After that everything worked! I tried to install the HP driver again and the problem came back. I can conclude that the problem is in the manufacturer's driver. Thanks for everything and good luck.

  • Network adapters (NIC) doesn't show up in NIC Teaming manager

    We have been running Windows Server 2012 R2 for a couple of months now with no major problems, but recently had to swap out one of the NICs. Unfortunately the interfaces were not removed from the team before the NIC was swapped out, and that appears to have broken something in the Windows NIC Teaming manager - NICs don't appear there anymore, not even the NICs that were not touched.
    The network interfaces appear to be connected and there are no issues with drivers (screen shot from Device Manager), but they don't show up in NIC Teaming. Any idea how to reset the NIC Teaming manager and/or resolve this issue without reinstalling Windows?
    Thanks!

    Thanks for leading me in the right direction!
    Looking around, I found that one of the causes of the "invalid class" error is a corrupted WMI repository. Using this script provided by MS to rebuild/re-register WMI (a different case/problem, but WMI is WMI...) resolved the issue with NICs not showing up:
    @echo off
    sc config winmgmt start= disabled
    net stop winmgmt /y
    %systemdrive%
    cd %windir%\system32\wbem
    for /f %%s in ('dir /b *.dll') do regsvr32 /s %%s
    wmiprvse /regserver
    winmgmt /regserver
    sc config winmgmt start= Auto
    net start winmgmt
    for /f %%s in ('dir /s /b *.mof *.mfl') do mofcomp %%s
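    After the rebuild and a reboot, the teaming state can be sanity-checked from PowerShell (a quick sketch; team names will differ per server):
    # the team(s) and their members should be listed again once WMI is healthy
    Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Status
    Get-NetLbfoTeamMember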

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Centre 2012 R2; however, since putting it into production we are now seeing problems with Virtual Machine Queues. These manifest themselves as either very high latency inside virtual machines (we're talking 200-400 ms round-trip times), packet loss, or complete connectivity loss for VMs. Not all VMs are affected, however the problem does manifest itself on all hosts. I am aware of these issues having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012, and a decision was taken to bring it up to R2. This was due to a number of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
    management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining
    4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.
    The initial driver for upgrading was that we were once again seeing issues with VMQ in the old design with the converged fabric design. The two vNics in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
    to the same designated NICs in each of the VM hosts).
    In this setup, a similar issue was being experienced to our present issue. Once again, the Converged Fabric vNICs in the Host OS would on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with
    a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration and would then affect both vNICS connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs
    which in turn would trigger all sorts of horrid goings on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
    utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
    However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
    interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
    In each host, the above 4 VM Switch NICs are formed into a Switch Independent, Dynamic team (Sum of Queues mode). Each physical NIC has RSS disabled and VMQ enabled, and the Team Multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap, and as the host processors have Hyper-Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                    InterfaceDescription                Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
    VM_SWITCH_ETH01         Intel(R) Gigabit 4P I350-t A...#8   True     0:10              1              7
    VM_SWITCH_ETH03         Intel(R) Gigabit 4P I350-t A...#7   True     0:14              1              7
    VM_SWITCH_ETH02         Intel(R) Gigabit 4P I350-t Ada...   True     0:12              1              7
    VM_SWITCH_ETH04         Intel(R) Gigabit 4P I350-t A...#2   True     0:16              1              7
    Production VM Switch    Microsoft Network Adapter Mult...   True     0:0                              28
    Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
    The loss of connectivity / high latency (200-400 ms as before) only seems to arise when a VM is moved via Live Migration from host to host. If I set up a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process, however, longer than I have seen in the past (usually only a ping or two are lost, however we are now seeing 5 or more before VM network connectivity is restored on the destination host, this being enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
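    For reference, the VMQ on/off test above is just toggling the feature on the team members and on the multiplex adapter; roughly (adapter names as listed earlier, a sketch only, re-enable afterwards):
    Disable-NetAdapterVmq -Name "VM_SWITCH_ETH01","VM_SWITCH_ETH02","VM_SWITCH_ETH03","VM_SWITCH_ETH04"
    Disable-NetAdapterVmq -Name "Production VM Switch"
    # and to put it back:
    Enable-NetAdapterVmq -Name "VM_SWITCH_ETH0*"
    Enable-NetAdapterVmq -Name "Production VM Switch"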
    VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
    It goes without saying that we really don't want to disable VMQ; however, given the nature of our client's business, we really cannot afford for these issues to crop up. If I can't find a resolution here, I will be left with no choice as, ironically, we see fewer issues with VMQ disabled than with it enabled.
    I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
    Many thanks
    Matt

    Hi Gleb
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear as though the two GUIDS in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
    Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
    as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    VM_SWITCH_ETH01      10             1           VMQ
    VM_SWITCH_ETH02      12             1           VMQ
    VM_SWITCH_ETH03      14             1           VMQ
    VM_SWITCH_ETH04      16             1           VMQ
    SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
    Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    SMB_FABRIC_ETH01     18             2           RSS
    SMB_FABRIC_ETH02     18             2           RSS
    ISCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our ISCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per
    the VM Switch team above. No VMQ on these NICS, however RSS is enabled as follows:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    ISCSI_SAN_ETH01      2              2           RSS
    ISCSI_SAN_ETH02      6              2           RSS
    ISCSI_SAN_ETH03      2              2           RSS
    ISCSI_SAN_ETH04      6              2           RSS
    Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
    cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
    are configured for VMQ, thus RSS has been disabled:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    MAN_SWITCH_ETH01     22             1           VMQ
    MAN_SWITCH_ETH02     22             1           VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where practical we have tried to balance the NICs across all available cores.
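    For completeness, the per-NIC VMQ values in the tables above correspond to Set-NetAdapterVmq settings; a minimal sketch for the VM Switch team members using those values (whether it is set via PowerShell or the adapter properties, the effect is the same):
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH03" -BaseProcessorNumber 14 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH04" -BaseProcessorNumber 16 -MaxProcessors 1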
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt

  • PS Script to Automate NIC Teaming and Configure Static IP Address based off an Existing Physical NIC

    # Retrieve IP Address and Default Gateway from static IP Assigned NIC and assign to variables.
    $wmi = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
    Where-Object { $_.IPAddress -match '192\.' }
    $IPAddress = $wmi.IpAddress[0]
    $DefaultGateway = $wmi.DefaultIPGateway[0]
    # Create Lbfo TEAM1, by binding “Ethernet” and “Ethernet 2” NICs.
    New-NetLbfoTeam -Name TEAM1 -TeamMembers "Ethernet","Ethernet 2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts -Confirm:$false
    # 20 second pause to allow TEAM1 to form and come online.
    Start-Sleep -s 20
    # Configure static IP Address, Subnet, Default Gateway, DNS Server IPs to newly formed TEAM1 interface.
    New-NetIPAddress -InterfaceAlias "TEAM1" -IPAddress $IPAddress -PrefixLength 24 -DefaultGateway $DefaultGateway
    Set-DnsClientServerAddress -InterfaceAlias "TEAM1" -ServerAddresses xx.xx.xx.xx, xx.xx.xx.xx
    Howdy All!
    I was recently presented with the challenge of automating the creation and configuration of a NIC Team on Server 2012 and Server 2012 R2.
    Condition:
    New Team will use static IP Address of an existing NIC (one of two physical NICs to be used in the Team).  Each server has more than one NIC.
    Our environment is pretty static, in the sense that all our servers use the same subnet mask and DNS server IP Addresses, so I really only had
    to worry about the Static IP Address and the Default Gateway.
    1. Retrieve NIC IP Address and Default Gateway:
    I needed a way to query only the NIC with the correct IP Address settings and create the required variables based on that query. For that, I leveraged WMI. For example purposes, let's say the servers in your environment start with 192. and you know the source physical NIC with the desired network configuration follows this scheme. This will retrieve the network configuration information only for the NIC that has an IP Address starting with "192." Feel free to replace 192 with whatever octet you use. You can expand the criteria by filling out additional octets... example:
    Where-Object { $_.IPAddress -match '192\.168\.' }   # this would search for NICs with IP Addresses 192.168.xx.xx
    $wmi = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
        Where-Object { $_.IPAddress -match '192\.' }
    $IPAddress = $wmi.IpAddress[0]
    $DefaultGateway = $wmi.DefaultIPGateway[0]
    2. Create Lbfo TEAM1
    This is a straightforward command based on New-NetLbfoTeam. I used "-Confirm:$false" to suppress prompts.
    Our NICs are named "Ethernet" and "Ethernet 2" by default, so I was able to keep -TeamMembers as a static entry.
    I also added a Start-Sleep command to give the new team time to build and come online before moving on to the network configuration.
    New-NetLbfoTeam -Name TEAM1 -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
    # 20 second pause to allow TEAM1 to form and come online.
    Start-Sleep -s 20
    3. Configure network settings for interface "TEAM1".
    Now it's time to pipe the previous physical NICs configurations to the newly built team.  Here is where I will leverage
    the variables I created earlier.
    There are two separate commands used to fully configure network settings,
    New-NetIPAddress : Here is where you assign the IP Address, Subnet Mask, and Default Gateway.
    Set-DnsClientServerAddress: Here is where you assign any DNS Servers.  In my case, I have 2, just replace x's with your
    desired DNS IP Addresses.
    New-NetIPAddress -InterfaceAlias "TEAM1" -IPAddress $IPAddress -PrefixLength 24 -DefaultGateway $DefaultGateway
    Set-DnsClientServerAddress -InterfaceAlias "TEAM1" -ServerAddresses xx.xx.xx.xx, xx.xx.xx.xx
    Hope this helps and cheers!

    I've done this before, and because of that I've run into something you may find valuable.
    Namely, a few challenges:
    There are "n" number of adapters in the server.
    Adapters with multiple ports should be labeled in order.
    MS only supports making an LBFO team out of "like speed" adapters.
    To solve these challenges I standardized the name based on link speed for each adapter before creating the team. Pretty simple really! First I created two variables to store the 10G and 1G adapters. I went ahead and told it to skip any "Hyper-V" ports for obvious reasons, and sorted by MAC address, as servers tend to put all their onboard NICs in sequentially by MAC:
    $All10GAdapters = (Get-NetAdapter | where {$_.LinkSpeed -eq "10 Gbps" -and $_.InterfaceDescription -notmatch 'Hyper-V*'} | Sort-Object MacAddress)
    $All1GAdapters  = (Get-NetAdapter | where {$_.LinkSpeed -eq "1 Gbps" -and $_.InterfaceDescription -notmatch 'Hyper-V*'} | Sort-Object MacAddress)
    Sweet ... now that I have my adapters I can rename them into something standardized:
    $i = 0
    $All10GAdapters | ForEach-Object {
        Rename-NetAdapter -Name $_.Name -NewName "Ethernet_10g_$i"
        $i++
    }
    $i = 0
    $All1GAdapters | ForEach-Object {
        Rename-NetAdapter -Name $_.Name -NewName "Ethernet_1g_$i"
        $i++
    }
    Once that's done, I can return to your team command but use a wildcard, since I know the standardized names!
    New-NetLbfoTeam -Name TEAM1G -TeamMembers Ethernet_1g_* -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
    New-NetLbfoTeam -Name TEAM10G -TeamMembers Ethernet_10g_* -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
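    Once the wildcard teams are up, membership is easy to double-check before layering a vSwitch or IP config on top (a quick sketch):
    Get-NetLbfoTeam -Name TEAM1G, TEAM10G | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Status
    Get-NetLbfoTeamMember -Team TEAM1G
    Get-NetLbfoTeamMember -Team TEAM10G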

  • Nic Teaming still only 1gbps?

    Hello,
    I have a Dell R610 with 4 Broadcom NICs that I am trying to team together so I can transfer data on my local network faster than ~112 MBps. The problem is, when I go through Server 2012 R2 and create the NIC team (I've tried LACP, static, etc.) I am still only getting ~112 MBps.
    I am transferring to another Server 2012 R2 server, with its LACP/LAG set up the same way.
    I am using a Cisco SG200-18 smart switch, which is set up for LACP/LAG.
    I was reading another post about Server 2012 R2 only using one NIC for a TCP/IP data transfer, but I'm not sure how to enable it to use more than that so I can achieve faster than 1 Gbps transfer speeds.
    Thanks

    There is no way to get more than 1 Gbps out of a 1Gbps NIC.  That is all that it can do.  When you start a copy process, a TCP session is opened on a single NIC and data moves across that link at a maximum of 1 Gbps.  Now, if you have multiple
    files, you can start multiple jobs.  Each separate job would create a new TCP session that can each move a maximum of 1 Gbps.
    So, a 4-NIC team could theoretically have four streams (four copy jobs) going at the same time to achieve maximum throughput.  You would have to create each copy stream to copy specific files so that you don't copy the same file multiple times.
    You still won't get 4 Gbps throughput, but you will get better than trying to accomplish the copy with a single process.
    Technically you can have more than four streams going on simultaneously, but the maximum you are going to get through on any one NIC is still going to be 1 Gbps.
    . : | : . : | : . tim
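    One way to put that into practice is to start each copy as its own background job so the copies run in parallel; a rough sketch (paths and the destination share are placeholders):
    $files = "D:\iso\file1.iso", "D:\iso\file2.iso", "D:\iso\file3.iso"
    $jobs = foreach ($f in $files) {
        # each job runs its own copy; whether the streams land on different team
        # members depends on the team's load balancing algorithm
        Start-Job -ScriptBlock { param($src) Copy-Item -Path $src -Destination "\\server2\share" } -ArgumentList $f
    }
    $jobs | Wait-Job | Receive-Job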

  • Server 2012 R2 Crashes with NIC Team

    Server 2012 R2 Core configured for Hyper-V. Using 2-port 10Gbe Brocades, we want to use NIC teaming for guest traffic. Create the team... seems fine. Create the virtual switch in Hyper-V, and assign it to the NIC team... seems fine. Create
    a VM, assign the network card to the Virtual switch... still doing okay. Power on the VM... POOF! The host BSOD's. If I remove the switch from the VM, I can run the VM from the console, install the OS, etc... but as soon as I reassign the virtual
    NIC to the switch, POOF! Bye-bye again. Any ideas here?
    Thank you in advance!
    EDIT: A little more info... Two 2-port Brocades and two Nexus 5k's. Running one port on NIC1 to one 5k, and one port on NIC2 to the other 5k. NIC team is using Switch Independent Mode, Address Hash load balancing, and all adapters active.

    Hi,
    Have you updated the NIC driver to the latest version?
    If the issue persists after updating the driver, we can use WinDbg to analyze a crash dump.
    If the NIC driver causes the BSOD, please consult the NIC manufacturer about this issue.
    For detailed information about how to analyze a crash dump, please refer to the link below:
    http://blogs.technet.com/b/juanand/archive/2011/03/20/analyzing-a-crash-dump-aka-bsod.aspx
    Best Regards.
    Steven Lee
    TechNet Community Support

  • Windows 7/8.0/8.1 NIC teaming issue

    Hello,
    I'm having an issue with Teaming network adapters in all recent Windows client OSs.
    I'm using Intel Pro Dual Port or Broadcom NetExtreme II GigaBit adapters with the appropriate drivers/applications from the vendors.
    I am able to set up teaming and fail-over works flawlessly, but the connection will not use the entire advertised bandwidth of 2Gbps. Basically it will use either one port or the other.
    I'm doing the testing with the iperf tool and am communicating with a unix based server.
    I have the following setup:
    Dell R210 II server with 2 Broadcom NetExtreme II adapters and a dual-port Intel Pro adapter - CentOS 6.5 installed, bonding configured and working while communicating with other Unix-based systems.
    Zyxel GS2200-48 switch - link aggregation configured and working.
    Dell R210 II with Windows 8.1, with Broadcom NetExtreme II cards or Intel Pro dual-port cards.
    For the Windows machine I have also tried Windows 7 and Windows 8, as well as non-server hardware, with identical results.
    So... why am I not getting > 1 Gbps throughput on the created team? Load balancing is activated, the team adapter says the connection type is 2 Gbps, and the same setup with 2 Unix machines works flawlessly.
    Am I to understand that Link Aggregation (802.3ad) under a Microsoft OS does not support load balancing if the connection is only towards one IP?
    To make it clear, I need a client version of Windows to communicate with a Unix-based OS at more than 1 Gbps of bandwidth (as close to 2 Gbps as possible), without the use of 10 Gbps network adapters.
    Thanks in advance,
    Endre

    As v-yamliu has mentioned, NIC teaming through the operating system is
    only available in Windows Server 2012 and Windows Server 2012 R2. For Windows Client or for previous versions of Windows Server you will need to create the team via the network driver. For Broadcom this is accomplished
    using the Broadcom Advanced Server Program (BASP) as documented here and
    for Intel via Advanced Network Services as documented here.
    If you have configured the team via the drivers, you may need to ensure the driver is properly installed and updated. You may also want to ensure that the adapters are configured for aggregation (802.3ad/802.1ax/LACP), rather than fault tolerance or load
    balancing and that the teaming configuration on the switch matches and is compatible with the server configuration. Also ensure that all of the links are connecting at full duplex as this is a requirement.
    Brandon
    Windows Outreach Team- IT Pro
    The Springboard Series on TechNet
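    On Windows 8 and 8.1 the inbox NetAdapter module can at least confirm what each port negotiated before and after building the vendor team; a quick sketch (this only lists speed and duplex per adapter, not the team itself):
    Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed, FullDuplex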

  • Nic Teaming in guest OS

    We have 4 ESX servers, each containing 4 Gigabit NICs for production traffic. These are teamed to form a 4 Gb pipe to the Cisco switch. Our guest VMs are Windows 2008 R2 and my issue is that I need to find out if it's possible to team NICs within the Windows 2008 VMs.
    We have physical Windows 2008 servers with the same network config, i.e. 4 physical NICs teamed to form a 4 Gb pipe to the Cisco switch.
    If I copy a 5 GB file from one physical server to another physical server it takes about 70 seconds (which is great). If I copy the same file from one of the physical servers to one of my VM servers it takes over 3 minutes (not good).
    If I copy the same file between 2 servers on the same ESX host it also takes over 3 minutes (not good).
    If my theory is sound then I think the bottleneck is the fact that my VM servers only have a 1 Gb NIC, so ideally I'd like to be able to team a pair of NICs in a VM and redo the test.
    Any information on how to team NICs within guest VMs would be appreciated.
    thanks

    Depending on the switch load balancing policy you can force each virtual NIC to use a separate physical NIC. If you have a piece of software which implements the LB in the guest (as the VMware drivers don't implement this), you'll be able to achieve Transmit Load Balancing (TLB). Failover is implemented at the vSwitch layer.
    IMHO this benefit is a theoretical one, as other guests are also using these physical NICs in a load-distributed fashion.
    For real load balancing you should have physical switches which support one of the available LB protocols.
    AWo

  • [SOLVED] Network speed very slow, no apparent reason

    Hello, recently I switched from Windows 7 RC1 to Arch on my home machine (I have used Gentoo, Ubuntu, and Fedora 10 on the same machine in the past), and for some reason network speed is very slow.
    I have Verizon's 20mbit/5mbit package, and I have always gotten that speed.
    Using speedtest.net and 100mb.test from Cachefly on multiple computers, I have come to the conclusion that it is in fact my Arch install that is causing the problem:
    - All other machines on my network are getting 20/5 (both wired and wireless)
    - I ran a speedtest from 2 other machines using the ethernet cable that this PC is on.  Again, full 20/5.
    - scp transfer of 100mb.test from this PC -> other Arch box: ~2.8MB/s.
    - scp transfer of 100mb.test from other Arch box -> this PC: ~2.8MB/s.
    - scp transfer of 100mb.test from this PC -> UK VPS (100mbit line): ~539KB/s
    - scp transfer of 100mb.test from UK VPS -> this PC: ~76KB/s
    - scp transfer of 100mb.test from this PC -> Chicago Server (dual gbit lines): ~563KB/s
    - scp transfer of 100mb.test from Chicago Server -> this PC: ~91KB/s
    speedtest.net result:
    Upload speed seems to be unaffected.
    I have tried disabling TCP window scaling, and appending my hostname to /etc/hosts.
    What is weird though is that the other Arch box has an identical network config; I don't see any reason why it shouldn't work.
    My NIC is an integrated Realtek something, I can get the exact model if needed.
    Last edited by whipsch (2009-06-05 18:29:21)

    Hi, whipsch
    Can you try ethtool to see if your Ethernet card is actually negotiating and using a 100 Mb full-duplex link on your LAN? If not, you can try to force the link parameters with ethtool. Also, maybe the driver of your NIC has options related to link negotiation.
    Hope this helps,
    JF

  • Set network speed & mode in Solaris 10

    Hi all,
    Does anyone know how to change the network speed and duplex mode in Solaris 10 SPARC?
    I tried with ndd -set; is this valid for Solaris 10?
    How can I check the speed and mode?
    Regards,
    Hakim

    It depends on what type of interface card you have, but basically you can use the ndd -set and ndd -get commands on most of the NICs in a Sun box:
    Network Interface Cards documentation:
    http://docs.sun.com/app/docs/prod/net.inter.crds#hic
    I have bge cards, so, I run this command to see what parameters I can alter:
    ndd -get /dev/bge0 \?
    ? (read only)
    autoneg_cap (read only)
    pause_cap (read only)
    asym_pause_cap (read only)
    1000fdx_cap (read only)
    1000hdx_cap (read only)
    100T4_cap (read only)
    100fdx_cap (read only)
    100hdx_cap (read only)
    10fdx_cap (read only)
    10hdx_cap (read only)
    adv_autoneg_cap (read and write)
    adv_pause_cap (read and write)
    adv_asym_pause_cap (read and write)
    adv_1000fdx_cap (read and write)
    adv_1000hdx_cap (read and write)
    adv_100T4_cap (read only)
    adv_100fdx_cap (read and write)
    adv_100hdx_cap (read and write)
    adv_10fdx_cap (read and write)
    adv_10hdx_cap (read and write)
    lp_autoneg_cap (read only)
    lp_pause_cap (read only)
    lp_asym_pause_cap (read only)
    lp_1000fdx_cap (read only)
    lp_1000hdx_cap (read only)
    lp_100T4_cap (read only)
    lp_100fdx_cap (read only)
    lp_100hdx_cap (read only)
    lp_10fdx_cap (read only)
    lp_10hdx_cap (read only)
    link_status (read only)
    link_speed (read only)
    link_duplex (read only)
    link_autoneg (read only)
    link_rx_pause (read only)
    link_tx_pause (read only)
    loop_mode (read only)
    msi_cnt (read and write)
    drain_max (read and write)
    speed:
    ndd -get /dev/bge0 link_speed (Note that this is read-only attribute)
    100
    duplex:
    ndd -get /dev/bge0 100fdx_cap (read/write attribute)
    1
    etc...
    To set the duplex during boot, I do this:
    cat /etc/init.d/NICtune
    #!/sbin/sh
    # This script tunes the NIC to 100 Full Duplex
    case "$1" in
    start)
            ndd -set /dev/bge0 adv_1000fdx_cap 0
            ndd -set /dev/bge0 adv_1000hdx_cap 0
            ndd -set /dev/bge0 adv_100fdx_cap 1
            ndd -set /dev/bge0 adv_100hdx_cap 0
            ndd -set /dev/bge0 adv_10fdx_cap 0
            ndd -set /dev/bge0 adv_10hdx_cap 0
            ndd -set /dev/bge0 adv_autoneg_cap 0
            ;;
    stop)
            # nothing to do on stop
            ;;
    *)
            echo "Usage: $0 { start | stop }"
            exit 1
            ;;
    esac
    exit 0
    ln -s /etc/init.d/NICtune /etc/rc2.d/S99NICTune
    HTH
    John

  • Standard local network speed test?

    I recently setup up a dual-band (dedicated n, dedicated g) wireless network, and I think I am having some issues with throughput (qualitative observation). So I started trying to find a good metric for performance of the wireless systems. Note, I am talking here about the local wireless (Airport devices between home computers), not the connected internet speed (cable, DSL, fiber).
    I see a lot of people copying files and timing the transfer. Is this the best there is? From what I've read, ping isn't such a good metric for network throughput. You can watch the kb/s transfer report on copy, but that isn't exactly stable or indicative of general performance. Any other tools, programs, methods, etc.? Or should I just be timing file transfers?
    Thanks!

    Yes, I know how to handle and suppress errors in LabVIEW. I was mainly looking for a good example of a network speed test.
    The test we have now was written by one of our software guys; it uses two computers in a client-server arrangement and stops and displays an error message when the client computer loses connection to the server. Regardless of the reasoning I have given, the programmer refuses to change it because he wrote the test for himself and that's how he wants it.
    I want a program that can use one computer with two network cards.
    It will send a data stream out one NIC, through the UUT, and back in the other NIC, where it will be checked for errors and dropped packets, and then sent back through the UUT.
    A continuous throughput measurement will also be logged, along with errors, dropped packets, and retries.
    I have never used any of the LabVIEW networking/TCP VIs and was hoping someone had done this before, because I do not have time to start from scratch.

  • NIC teaming in OVM

    Hi folks,
    Does Oracle VM support NIC teaming (bonding more than one NIC into one logical NIC)? Does it require any driver or patch?
    I have a Sun x4710 server which has 2 NIC cards.
    Moreover, if I have VMs (guests) with IPs on different VLANs (e.g. 10.22.70.x and 202.49.214.x) residing on the same VM server, what would be the best practice?
    Hopefully I have made the points clear.
    Thanks in advance...

    user10310678 wrote:
    >>Does Oracle VM support NIC Teaming (Bonding of more than 1 NICs in to 1 Logical NIC)? Does it require any driver or patch for the same?
    Yes, it does support bonding and no, it doesn't require any additional drivers or patches. Bonding is built into the kernel.
    >>Moreover If I ve VMs (guests) with IPs of different VLANs (Eg. 10.22.70.x and 202.49.214.x) residing on same VM server what would be the best practice?
    I usually create a bridge per VLAN. That way, I can create a virtual interface to a guest that is already on a particular VLAN and the guest doesn't have to worry about VLANs. Also, it means you can control VLAN assignments outside the guest OS. See this wiki page for more info:
    http://wiki.oracle.com/page/Oracle+VM+Server+Configuration-bondedand+trunked+network+interfaces

  • Hyper-V, NIC Teaming and 2 hosts getting in the way of each other

    Hey TechNet,
    After my initial build of 2 Hyper-V Core servers, which took me a bit of time without a domain, I started building 2 more for another site. After the initial two, setting up the new ones went very fast until I ran into a very funny issue. And I am willing to bet it is just my luck, but I am wondering if anyone else out there has ended up with it.
    So, I built these 2 new servers, created a NIC team on each host, added the management OS adapter, gave it an IP, and I could ping the world. So I went back to my station and tried to start working on these hosts, but I kept getting disconnected, especially from one of them. I reinstalled it and remade the NIC teaming config, just in case. Same issue.
    So I started pinging both of the servers and I noticed that when one was answering pings, the other one tended to stop answering, and vice versa. Testing the firewall and the switch, and even trying to put the 2 machines on different switches, did not help. So I thought, what the heck, let's just remove all the network config from both machines, reboot, and redo the network config. Since then, no issue.
    I only forgot to do one thing before removing the network configuration: I forgot to check whether the MAC addresses on the management OS adapters were the same. Even if it is a small chance, it can still happen (1 in 256^4 I'd say).
    So to get to my question: am I that unlucky or might it have been something else?
    Enjoy your weekends

    I raised this bug long ago (one year ago in fact) and it still happens today.
    If you create a virtual switch, then add a management vNIC to it - there are times when you will get two hosts with the same MAC on the vNIC that was added for management.
    I have seen this in my lab (and I can reproduce it at will).
    Modify the entire Hyper-V MAC address pool.  Or else you will have the same issue with VMs.  This is the only workaround.
    But yes, it is a very confusing issue.
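    The pool Brian refers to can be inspected and changed per host from PowerShell; a minimal sketch (the ranges below are placeholders - give each host its own non-overlapping range):
    Get-VMHost | Format-List MacAddressMinimum, MacAddressMaximum
    Set-VMHost -MacAddressMinimum "00155D010000" -MacAddressMaximum "00155D0100FF"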
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
