Configuring NIC teaming

Hello, everyone. I'm hoping this thread is in the right place.
I've been doing some research trying to understand logical switches/port profiles/etc. in VMM and have been having a hard time. Most of the articles I've found either don't go into enough detail or seem to lack proper examples. My goal is to enable NIC teaming
on my cluster hosts.
Currently, each cluster node has 1 standard switch per physical NIC. One of these NICs is trunked, and the others are not. Everything is working fine, but I'm looking to improve the infrastructure behind these hosts.
I evicted one node from the cluster to experiment with. I enabled LACP on the switch side (Cisco) and enabled NIC teaming on the server (2012 R2). The server is online and functioning, but this is where my knowledge ends. I can't create a logical switch
and add it to this host as the job fails stating that the switch can't be added since the host is already teamed. I'm a little confused about the proper process of getting a logical switch created and added to my host. Do I need to remove LACP and disable
NIC teaming on the host and then re-enable it? Am I going down the wrong path by using LACP? 
Any tips and advice would be greatly appreciated. I'd also be happy to provide any additional details I may have left out.

We use LACP teaming for four NICs: two teams, one for the production vSwitch and one for management.
We create the management team on the Hyper-V host first and add the host into VMM, then push out a team FROM VMM for the switch. The trick is to create an uplink port profile (load-balancing algorithm Hyper-V Port, teaming mode LACP) and bind this port profile to your logical network(s), then create your logical switch, select uplink mode Team, and add in your uplink port profile.
Once you have done this, you can right-click the host (in VMs and Services), open Properties and navigate to Virtual Switches. Add a new virtual switch (New Logical Switch) and you will then be able to add multiple adapters to the switch.
Hit Apply and it *should* team for you.
If you need further clarification I can send screen prints and exact steps on Tuesday when I'm back in the office.
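For reference, the port-profile part of this can also be scripted with the VMM PowerShell module; a rough sketch, where the logical network definition, profile, set and switch names are placeholders and the logical switch itself is still created with uplink mode Team (check the parameters against your VMM version):
# Uplink port profile: LACP teaming, Hyper-V Port load balancing.
$netDef = Get-SCLogicalNetworkDefinition -Name "Datacenter_0"
$uplink = New-SCNativeUplinkPortProfile -Name "Prod-Uplink-LACP" `
    -LogicalNetworkDefinition $netDef `
    -LBFOTeamMode "Lacp" -LBFOLoadBalancingAlgorithm "HyperVPort" `
    -EnableNetworkVirtualization $false
# Attach the uplink port profile to an existing logical switch (uplink mode Team).
$ls = Get-SCLogicalSwitch -Name "Prod-LogicalSwitch"
New-SCUplinkPortProfileSet -Name "Prod-Uplink-Set" -LogicalSwitch $ls -NativeUplinkPortProfile $uplink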

Similar Messages

  • Windows Server 2012 R2 NIC Teaming and DHCP Issue

    Came across a weird issue today during a server deployment. I was doing a physical server deployment and got Windows installed and was getting ready to connect it to our network. Before connecting the Ethernet cables to the network adapters, I created a
    NIC Team using Windows Server 2012 R2 built-in software with a static IP address (we'll say it's 192.168.1.56). Once I plugged in the Ethernet cables, I got network access but was unable to join our domain. At this time, I deleted the NIC team and the two network
    adapters got their own IP addresses issued from DHCP (192.168.1.57 and 192.168.1.58) and at this point I was able to join our domain. I recreated the NIC team and set a new static IP (192.168.1.57) and everything was working great as intended.
    My issue is that when I went into DHCP I noticed a random entry using the IP address from the first NIC teaming attempt (192.168.1.56), before I joined the server to the domain. I call this a random entry because it is using the last 8 characters of the
    MAC address as the hostname instead of the server's hostname.
    It seems when I deleted the first NIC team I created (192.168.1.56), a random MAC address Server 2012 R2 generated for the team has remained embedded in the system. The IP address is still pingable even though an ipconfig /all shows the current NIC team
    with the IP 192.168.1.57. There is no IP address of 192.168.1.56 configured on the current server and I have static IPs set yet it is still pingable and registering with DHCP.
    I know this is slightly confusing but I am hoping someone else has encountered this issue and may be able to tell me how to fix this. Simply deleting the DHCP entry does not do the trick, it comes back.

    Hi,
    Please confirm you have chosen the right NIC team type. If you've previously configured NIC teaming, you're aware that NIC teams usually require the assistance of network-side
    protocols. Prior to Windows Server 2012, using a NIC team on a server also meant enabling protocols like EtherChannel or LACP (also known as 802.1ax or 802.3ad) on the network ports.
    More information:
    NIC teaming configuration in Server 2012
    http://technet.microsoft.com/en-us/magazine/jj149029.aspx
    Hope this helps.
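    To track down the stale record described above, it can also help to inspect the DHCP lease itself. A minimal sketch using the DhcpServer module (the server name "dhcp01" is a placeholder; the scope and address are the example values from the post):
    # Find the lease that keeps re-registering.
    Get-DhcpServerv4Lease -ComputerName "dhcp01" -IPAddress 192.168.1.56 |
        Format-List IPAddress, ClientId, HostName, AddressState
    # Remove it once you have confirmed the MAC it reports no longer exists on the server.
    Remove-DhcpServerv4Lease -ComputerName "dhcp01" -ScopeId 192.168.1.0 -IPAddress 192.168.1.56
    If the entry keeps coming back, the ClientId shown above usually points at whichever leftover interface is still registering it.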

  • After NIC teaming, DHCP can't bind the NIC and DHCP stops working

    Hello, we have a Dell server with Windows 2008 R2.
    I used the Windows network bridge first, and in that situation I could see DHCP bind to the "Network Bridge".
    Now I need to use LACP to team the NICs, so I downloaded the
    Broadcom Advanced Control Suite to team the 3 NICs (BCM5709C).
    I followed this video and the teaming works, but now in DHCP I can't see the virtual adapter:
    https://www.youtube.com/watch?v=x2nq-qEwAzg
    The DHCP service logged a 1041 error: The DHCP service is not servicing any DHCPv4 clients because none of the active network interfaces have statically configured IPv4 addresses, or there are no active interfaces.
    Same as here:
    http://www.experts-exchange.com/Networking/Network_Management/Network_Operations/Q_28159948.html
    Anyone had this issue before and know a solution?
    Thanks
    Jason

    Hi,
    I am investigating the issue. I did find a post here
    https://communities.intel.com/message/61068 that indicates after configuring NIC teaming you might need to wait a while for the team to initialize before configuring a static IP address. Otherwise, the static IP address does not work correctly.
    -Greg
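    In addition to waiting for the team to initialize, event 1041 usually clears once the team adapter has a statically configured IPv4 address and the DHCP Server service is rebound. On 2008 R2 the NetTCPIP cmdlets are not available, so a rough sketch from an elevated PowerShell prompt using netsh (the adapter name "Team 1" and all addresses are placeholders):
    # Assign a static IPv4 address to the Broadcom team adapter.
    netsh interface ipv4 set address name="Team 1" static 192.168.1.10 255.255.255.0 192.168.1.1
    # Restart the DHCP Server service and confirm it now binds to the team interface.
    Restart-Service DHCPServer
    netsh dhcp server show bindings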

  • Is enabling NIC teaming service affecting?

    Hi,
    I want to enable NIC teaming on a CUCM 7.1 with the "set network failover enable" command.  Is this a service affecting command (e.g. does it require a server reboot or cause phones to unregister)?
    Thanks!
    Mike

    Hi Mike,
    Configuring NIC teaming via the CLI does not require a server
    restart and will not cause the phones to re-register.
    Cheers!
    Rob

  • PS Script to Automate NIC Teaming and Configure Static IP Address based off an Existing Physical NIC

    # Retrieve the IP address and default gateway from the statically assigned NIC and store them in variables.
    $wmi = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
        Where-Object { $_.IPAddress -match '192\.' }
    $IPAddress = $wmi.IpAddress[0]
    $DefaultGateway = $wmi.DefaultIPGateway[0]
    # Create LBFO team TEAM1 by binding the "Ethernet" and "Ethernet 2" NICs.
    New-NetLbfoTeam -Name TEAM1 -TeamMembers "Ethernet","Ethernet 2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts -Confirm:$false
    # 20 second pause to allow TEAM1 to form and come online.
    Start-Sleep -s 20
    # Configure static IP address, prefix length, default gateway and DNS server IPs on the newly formed TEAM1 interface.
    New-NetIPAddress -InterfaceAlias "TEAM1" -IPAddress $IPAddress -PrefixLength 24 -DefaultGateway $DefaultGateway
    Set-DnsClientServerAddress -InterfaceAlias "TEAM1" -ServerAddresses xx.xx.xx.xx, xx.xx.xx.xx
    Howdy All!
    I was recently presented with the challenge of automating the creation and configuration of a NIC Team on Server 2012 and Server 2012 R2.
    Condition:
    New Team will use static IP Address of an existing NIC (one of two physical NICs to be used in the Team).  Each server has more than one NIC.
    Our environment is pretty static, in the sense that all our servers use the same subnet mask and DNS server IP Addresses, so I really only had
    to worry about the Static IP Address and the Default Gateway.
    1. Retrieve NIC IP Address and Default Gateway:
    I needed a way to query only the NIC with the correct IP address settings and create the required variables based on that query. For that, I
    leveraged WMI. For example purposes, let's say the servers in your environment start with 192. and you know the source physical NIC with the desired network configuration follows this scheme. This will retrieve the network configuration only
    for the NIC whose IP address starts with "192." Feel free to replace 192 with whatever octet you use. You can expand the criteria by filling out additional octets, for example:
    Where-Object { $_.IPAddress -match '192\.168\.' }   # matches NICs with IP addresses 192.168.xx.xx
    $wmi = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
        Where-Object { $_.IPAddress -match '192\.' }
    $IPAddress = $wmi.IpAddress[0]
    $DefaultGateway = $wmi.DefaultIPGateway[0]
    2. Create LBFO team TEAM1
    This is a straightforward command based on New-NetLbfoTeam. I used "-Confirm:$false" to suppress prompts.
    Our NICs are named "Ethernet" and "Ethernet 2" by default, so I was able to keep -TeamMembers as a static entry.
    I also added a Start-Sleep command to give the new team time to build and come online before moving on to network configuration.
    New-NetLbfoTeam -Name TEAM1 -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
    # 20 second pause to allow TEAM1 to form and come online.
    Start-Sleep -s 20
    3. Configure network settings for interface "TEAM1".
    Now it's time to apply the previous physical NIC's configuration to the newly built team. Here is where I leverage
    the variables I created earlier.
    There are two separate commands used to fully configure the network settings:
    New-NetIPAddress: assigns the IP address, subnet mask (prefix length) and default gateway.
    Set-DnsClientServerAddress: assigns the DNS servers. In my case I have two; just replace the x's with your
    desired DNS IP addresses.
    New-NetIPAddress -InterfaceAlias "TEAM1" -IPAddress $IPAddress -PrefixLength 24 -DefaultGateway $DefaultGateway
    Set-DnsClientServerAddress -InterfaceAlias "TEAM1" -ServerAddresses xx.xx.xx.xx, xx.xx.xx.xx
    Hope this helps and cheers!
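    One optional tweak, assuming the same TEAM1 name as above: instead of a fixed 20-second Start-Sleep, the script can poll Get-NetLbfoTeam until the team actually reports Up (the timeout below is an arbitrary example):
    # Wait until the team reports Up, or give up after 60 seconds.
    $deadline = (Get-Date).AddSeconds(60)
    while ((Get-NetLbfoTeam -Name TEAM1).Status -ne 'Up') {
        if ((Get-Date) -gt $deadline) { throw "TEAM1 did not come online in time." }
        Start-Sleep -Seconds 2
    }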

    I've done this before, and because of that I've run into something you may find valuable.
    Namely two challenges:
    There are "n" number of adapters in the server.
    Adapters with multiple ports should be labeled in order.
    MS only supports making an LBFO team out of "like speed" adapters.
    To solve both of these challenges I standardized the name based on link speed for each adapter before creating the team. Pretty simple really! First I created two variables to store the 10G and 1G adapters. I went ahead and told it to skip
    any "Hyper-V" ports for obvious reasons, and sorted by MAC address, as servers tend to enumerate their onboard NICs sequentially by MAC:
    $All10GAdapters = (Get-NetAdapter | Where-Object { $_.LinkSpeed -eq "10 Gbps" -and $_.InterfaceDescription -notmatch 'Hyper-V*' } | Sort-Object MacAddress)
    $All1GAdapters  = (Get-NetAdapter | Where-Object { $_.LinkSpeed -eq "1 Gbps" -and $_.InterfaceDescription -notmatch 'Hyper-V*' } | Sort-Object MacAddress)
    Sweet ... now that I have my adapters I can rename them into something standardized:
    $i = 0
    $All10GAdapters | ForEach-Object {
        Rename-NetAdapter -Name $_.Name -NewName "Ethernet_10g_$i"
        $i++
    }
    $i = 0
    $All1GAdapters | ForEach-Object {
        Rename-NetAdapter -Name $_.Name -NewName "Ethernet_1g_$i"
        $i++
    }
    Once that's done, I can return to your team command but use a wildcard, since I know the standardized names:
    New-NetLbfoTeam -Name TEAM1G -TeamMembers Ethernet_1g_* -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
    New-NetLbfoTeam -Name TEAM10G -TeamMembers Ethernet_10g_* -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

  • Server 2012 R2 Crashes with NIC Team

    Server 2012 R2 Core configured for Hyper-V. Using 2-port 10GbE Brocades, we want to use NIC teaming for guest traffic. Create the team... seems fine. Create the virtual switch in Hyper-V and assign it to the NIC team... seems fine. Create
    a VM, assign the network card to the virtual switch... still doing okay. Power on the VM... POOF! The host BSODs. If I remove the switch from the VM, I can run the VM from the console, install the OS, etc... but as soon as I reassign the virtual
    NIC to the switch, POOF! Bye-bye again. Any ideas here?
    Thank you in advance!
    EDIT: A little more info... Two 2-port Brocades and two Nexus 5k's. Running one port on NIC1 to one 5k, and one port on NIC2 to the other 5k. NIC team is using Switch Independent Mode, Address Hash load balancing, and all adapters active.

    Hi,
    Have you updated the NIC driver to the latest version?
    If the issue persists after updating the driver, we can use WinDbg to analyze a crash dump.
    If the NIC driver causes the BSOD, please consult the NIC manufacturer about this issue.
    For detailed information about how to analyze a crash dump, please refer to the link below,
    http://blogs.technet.com/b/juanand/archive/2011/03/20/analyzing-a-crash-dump-aka-bsod.aspx
    Best Regards.
    Steven Lee
    TechNet Community Support

  • Windows 7/8.0/8.1 NIC teaming issue

    Hello,
    I'm having an issue with Teaming network adapters in all recent Windows client OSs.
    I'm using Intel Pro Dual Port or Broadcom NetExtreme II GigaBit adapters with the appropriate drivers/applications from the vendors.
    I am able to set up teaming and fail-over works flawlessly, but the connection will not use the entire advertised bandwidth of 2Gbps. Basically it will use either one port or the other.
    I'm doing the testing with the iperf tool and am communicating with a unix based server.
    I have the following setup:
    Dell R210 II server with 2 Broadcom NetExtreme II adapters and a dual-port Intel Pro adapter - CentOS 6.5 installed, bonding configured and working while communicating with other unix based systems.
    Zyxel GS2200-48 switch - Link Aggregation configured and working.
    Dell R210 II with Windows 8.1 with Broadcom NetExtreme II cards or Intel Pro dual-port cards.
    For the Windows machine I have also tried Windows 7 and Windows 8, as well as non-server hardware, with identical results.
    So... why am I not getting > 1 Gbps throughput on the created team? Load balancing is activated, the team adapter says the connection speed is 2 Gbps, and the same setup with two unix machines works flawlessly.
    Am I to understand that Link Aggregation (802.3ad) under a Microsoft OS does not support load balancing if the connection is only towards one IP?
    To make it clear, I need a client version of Windows to communicate with a unix based OS at higher than 1 Gbps bandwidth (as close to 2 Gbps as possible), without the use of 10 Gbps network adapters.
    Thanks in advance,
    Endre

    As v-yamliu has mentioned, NIC teaming through the operating system is
    only available in Windows Server 2012 and Windows Server 2012 R2. For Windows Client or for previous versions of Windows Server you will need to create the team via the network driver. For Broadcom this is accomplished
    using the Broadcom Advanced Server Program (BASP) as documented here and
    for Intel via Advanced Network Services as documented here.
    If you have configured the team via the drivers, you may need to ensure the driver is properly installed and updated. You may also want to ensure that the adapters are configured for aggregation (802.3ad/802.1ax/LACP), rather than fault tolerance or load
    balancing and that the teaming configuration on the switch matches and is compatible with the server configuration. Also ensure that all of the links are connecting at full duplex as this is a requirement.
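    As an aside, on Windows 8/8.1, where the in-box NetAdapter module is available, one quick way to confirm each member's negotiated speed and duplex is the sketch below (on Windows 7 you would rely on the vendor utility instead):
    # Check negotiated link speed and duplex per adapter.
    Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed, FullDuplex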
    Brandon
    Windows Outreach Team- IT Pro
    The Springboard Series on TechNet

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Center 2012 R2; however, since putting it into production we are now seeing problems with Virtual Machine Queues. These manifest themselves as
    either very high latency inside virtual machines (we're talking 200 - 400 ms round trip times), packet loss, or complete connectivity loss for VMs. Not all VMs are affected, however the problem does manifest itself on all hosts. I am aware of these issues
    having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012 and a decision was taken to bring this up to speed to R2. This was due to a number
    of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V Cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
    management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining
    4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.
    The initial driver for upgrading was that we were once again seeing issues with VMQ in the old design with the converged fabric design. The two vNics in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
    to the same designated NICs in each of the VM hosts).
    In this setup, a similar issue was being experienced to our present issue. Once again, the Converged Fabric vNICs in the Host OS would on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with
    a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration and would then affect both vNICS connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs
    which in turn would trigger all sorts of horrid goings on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
    utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
    However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
    interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
    In each host, the above 4 VM Switch NICs are formed into a Switch Independent, Dynamic team (Sum of Queues mode). Each physical NIC has
    RSS disabled and VMQ enabled, and the team multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap,
    and as the host processors have Hyper-Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                   InterfaceDescription                Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
    VM_SWITCH_ETH01        Intel(R) Gigabit 4P I350-t A...#8   True     0:10              1              7
    VM_SWITCH_ETH03        Intel(R) Gigabit 4P I350-t A...#7   True     0:14              1              7
    VM_SWITCH_ETH02        Intel(R) Gigabit 4P I350-t Ada...   True     0:12              1              7
    VM_SWITCH_ETH04        Intel(R) Gigabit 4P I350-t A...#2   True     0:16              1              7
    Production VM Switch   Microsoft Network Adapter Mult...   True     0:0                              28
    Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
    The loss of connectivity / high latency (200 – 400 mSec as before) only seems to arise when a VM is moved via Live Migration from host to host. If I setup a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings
    at the point where the remaining memory pages / CPU state are transferred, followed by an dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a
    queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process however, longer than I have seen in
    the past (usually only a ping or two are lost, however we are now seeing 5 or more before VM Nework connectivity is restored on the destination host, this being enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
    VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
    It goes without saying that we really don’t want to disable VMQ, however given the nature of our clients business, we really cannot afford for these issues to crop up. If I can’t find a resolution here, I will be left with no choice as ironically, we see
    less issues with VMQ disabled compared to it being enabled.
    I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
    Many thanks
    Matt

    Hi Gleb
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear as though the two GUIDS in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
    Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
    as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    VM_SWITCH_ETH01      10             1           VMQ
    VM_SWITCH_ETH02      12             1           VMQ
    VM_SWITCH_ETH03      14             1           VMQ
    VM_SWITCH_ETH04      16             1           VMQ
    SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
    Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    SMB_FABRIC_ETH01     18             2           RSS
    SMB_FABRIC_ETH02     18             2           RSS
    ISCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our ISCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per
    the VM Switch team above. No VMQ on these NICS, however RSS is enabled as follows:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    ISCSI_SAN_ETH01      2              2           RSS
    ISCSI_SAN_ETH02      6              2           RSS
    ISCSI_SAN_ETH03      2              2           RSS
    ISCSI_SAN_ETH04      6              2           RSS
    Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
    cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
    are configured for VMQ, thus RSS has been disabled:
    Interface Name       BaseVMQProc    MaxProcs    VMQ / RSS
    MAN_SWITCH_ETH01     22             1           VMQ
    MAN_SWITCH_ETH02     22             1           VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where possible we have tried to balance NICs over all available cores where practical.
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt
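    For reference, per-NIC assignments like the ones in the tables above are usually applied with the in-box NetAdapter cmdlets; a minimal sketch using one of the adapter names and core numbers quoted in this thread (adjust to your own layout):
    # Pin VM_SWITCH_ETH01 to base processor 10 with a single core and keep RSS off.
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -Enabled $true -BaseProcessorNumber 10 -MaxProcessors 1
    Disable-NetAdapterRss -Name "VM_SWITCH_ETH01"
    # Verify queue allocation, e.g. after a live migration lands on this host.
    Get-NetAdapterVmq -Name "VM_SWITCH_*"
    Get-NetAdapterVmqQueue | Sort-Object Name | Format-Table Name, QueueID, MacAddress, VlanID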

  • NIC teaming in OVM

    Hi folks,
    Does Oracle VM support NIC teaming (bonding of more than 1 NIC into 1 logical NIC)? Does it require any driver or patch for this?
    I have a Sun server x4710 which has 2 NIC cards.
    Moreover, if I have VMs (guests) with IPs on different VLANs (e.g. 10.22.70.x and 202.49.214.x) residing on the same VM server, what would be the best practice?
    Hopefully I have made the points clear.
    Thanks in advance...

    user10310678 wrote:
    Does Oracle VM support NIC teaming (bonding of more than 1 NIC into 1 logical NIC)? Does it require any driver or patch for this?
    Yes, it does support bonding and no, it doesn't require any additional drivers or patches. Bonding is built into the kernel.
    Moreover, if I have VMs (guests) with IPs on different VLANs (e.g. 10.22.70.x and 202.49.214.x) residing on the same VM server, what would be the best practice?
    I usually create a bridge per VLAN. That way, I can create a virtual interface to a guest that is already on a particular VLAN and the guest doesn't have to worry about VLANs. Also, it means you can control VLAN assignments outside the guest OS. See this wiki page for more info:
    http://wiki.oracle.com/page/Oracle+VM+Server+Configuration-bondedand+trunked+network+interfaces

  • Hyper-V, NIC Teaming and 2 hosts getting in the way of each other

    Hey TechNet,
    After my initial build of 2 Hyper-V Core servers, which took me a bit of time without a domain, I started building 2 more for another site. After the initial two, setting up the new ones went very fast until I ran into a very funny issue. And I am willing
    to bet it is just my luck, but I am wondering if anyone else out there has ended up with it.
    So, I built these 2 new servers, created NIC teaming on each host, added the management OS adapter, gave it an IP and I could ping the world. So I went back to my station and tried to start working on these hosts, but I kept getting disconnected, especially from one
    of them. Reinstalled it and remade the NIC teaming config, just in case. Same issue.
    So I started pinging both of the servers and I noticed that when one was responding to ping, the other one tended not to answer ping anymore, and vice versa. Testing the firewall and the switch, and even trying to put the 2 machines on different switches, did
    not help. So I thought, what the heck, let's just remove all the network config from both machines, reboot, and redo the network config. Since then, no issue.
    I only forgot to do one thing before removing the network configuration: I forgot to check whether the MAC addresses on the management OS adapters were the same. Even if it is a small chance, it can still happen (1 in 256^4 I'd say).
    So to get to my question, am I that unlucky or might it have been something else ?
    Enjoy your weekends

    I raised this bug long ago (one year ago in fact) and it still happens today.
    If you create a virtual switch, then add a management vNIC to it - there are times when you will get two hosts with the same MAC on the vNIC that was added for management.
    I have seen this in my lab (and I can reproduce it at will).
    Modify the entire Hyper-V MAC address pool.  Or else you will have the same issue with VMs.  This is the only workaround.
    But yes, it is a very confusing issue.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
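    For what it's worth, adjusting the pool that the reply above refers to can be done per host with Set-VMHost; a minimal sketch with an arbitrary example range (pick non-overlapping ranges for each host):
    # Give each host its own, non-overlapping dynamic MAC range.
    Set-VMHost -ComputerName "HV01" -MacAddressMinimum "00155D010000" -MacAddressMaximum "00155D0100FF"
    # Confirm the change.
    Get-VMHost -ComputerName "HV01" | Format-List MacAddressMinimum, MacAddressMaximum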

  • ESXi 4.1 NIC Teaming's Load-Balancing Algorithm,Nexus 7000 and UCS

    Hi, Cisco Gurus:
    Please help me in answering the following questions (UCSM 1.4(xx), 2 UCS 6140XP, 2 Nexus 7000, M81KR in B200-M2, No Nexus 1000V, using VMware Distributed Switch:
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned?
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct?
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES?
    I would really appreciate if someone can help me clear these lingering doubts of mine.
    God Bless.
    SiM

    Sim,
    Here are my thoughts without a 1000v in place,
    Q1. For me to configure vPC on a pair of Nexus 7000, do I have to connect Ethernet Uplink from each Cisco Fabric Interconnect to the 2 Nexus 7000 in a bow-tie fashion? If I connect, say 2 10G ports from Fabric Interconnect 1 to 1 Nexus 7000 and similar connection from FInterconnect 2 to the other Nexus 7000, in this case can I still configure vPC or is it a validated design? If it is, what is the pro and con versus having 2 connections from each FInterconnect to 2 separate Nexus 7000?   //Yes, for vPC to UCS the best practice is to bowtie uplink to (2) 7K or 5Ks.
    Q2. If vPC is to be configured in Nexus 7000, is it COMPULSORY to configure Port Channel for the 2 Fabric Interconnects using UCSM? I believe it is not. But what is the pro and con of HAVING NO Port Channel within UCS versus HAVING Port Channel when vPC is concerned? //The port channel will be configured on both the UCSM and the 7K. The pros of a port channel are both bandwidth and redundancy. vPC would be preferred.
    Q3. if vPC is to be configured in Nexus 7000, I understand there is a limitation on confining to ONLY 1 vSphere NIC Teaming's Load-Balancing Algorithm i.e. Route Based on IP Hash. Is it correct? //Without the 1000v, I always tend to leave the dvSwitch load-balancing behavior at the default of "Route based on originating virtual port ID".
    Again, what is the pro and con here with regard to application behaviours when Layer 2 or 3 is concerned? Or what is the BEST PRACTICES? //UCS can perform L2, but northbound should be performing L3.
    Cheers,
    David Jarzynka

  • Load Balancing and NIC Teaming

    Hi! I have been looking through lots of links and none of them can fully answer my queries.
    I am to do a write-up on load balancing and NIC teaming. Does anyone know the commonly used load balancing and NIC teaming methods, when to use each method, and the advantages, disadvantages and configuration of each
    method?
    Sorry, it's a lot of questions, but I have to do a detailed write-up!
    Many thanks in advance :D

    Hi
    NIC teaming - on a single server you will have multiple NICs. You can team the NICs so that they act together to provide better bandwidth and high availability.
    Example: NIC 1 is 1 Gb and NIC 2 is 1 Gb, so in a team they can act as a single 2 Gb NIC. If one fails, the speed is reduced but you still have HA.
    Load balancing - two or more servers hosting the same content.
    Example: Microsoft.com can be hosted on two or even more servers, and a load balancer is used to split the load across the servers based on the current load and traffic.
    No disadvantages.
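    To illustrate the teaming half on Windows Server 2012 or later, a minimal sketch with the in-box LBFO cmdlets (team and NIC names are placeholders):
    # Team two 1 Gb NICs into one logical adapter; either member can fail without losing connectivity.
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # Inspect team and member status.
    Get-NetLbfoTeam -Name "Team1"
    Get-NetLbfoTeamMember -Team "Team1"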

  • Relationship between coherence and NIC teaming

    Hi,
    We are using Tangosol Coherence for clustering purposes in our product, webMethods Integration Server.
    When our server starts up it tries to join the cluster.
    Our scenario is this :-
    We have 2 servers running on 2 separate boxes A&B.
    They are on same network segment.
    Multicast test is working properly .
    The issue is that only one of the nodes (the one started first) is becoming part of the cluster and the other one remains disabled.
    We found out that NIC teaming was disabled on the boxes.
    When we enabled NIC teaming with smart load balancing, both nodes were able to join the cluster.
    My specific question is,
    Is there any relationship between Tangosol coherence and NIC teaming? If yes, what's the relationship.
    Regards,
    Ritwik Bhattacharyya

    I did some tinkering a while back trying to get 4Gb/s bonded etherchannels going on linux boxes but I had issues with out of order and missing packets:
    4Gb/s bonded ethernet test results - finally...
    But to answer your question there is no reason that you would need NIC teaming on in order to make Coherence work. It sounds like something is not configured correctly with your NIC or switch. Maybe try connecting the machines with a crossover cable instead of a switch just to eliminate the switch as a possible problem. It sounds like maybe you're just using the wrong ethernet port on a server or something.
    -Andrew

  • Are these viable designs for NIC teaming on UCS C-Series?

    Is this a viable design on ESXi 5.1 on UCS C240 with 2 Quad port nic adapters?
    Option A) VMware NIC Teaming with load balancing of vmnic interfaces in an Active/Active configuration through alternate and redundant hardware paths to the network.
    Option B) VMware NIC Teaming with load balancing of vmnic interfaces in an Active/Standby configuration through alternate and redundant hardware paths to the network.
    Option A:
    Option B:
    Thanks.

    No.  It really comes down to what Active/Active means and the type of upstream switches. For ESXi NIC teaming, Active/Active load balancing provides the opportunity to have all network links be active for different guest devices. Teaming can be configured in a few different methods. The default is by virtual port ID, where each guest machine gets assigned to an active port and also a backup port. Traffic for that guest would only be sent on one link at a time.
    For example, let's assume 2 Ethernet links and 4 guests on the ESX host. Link 1 to Switch 1 would be active for Guests 1 and 2, and Link 2 to Switch 2 would be backup for Guests 1 and 2. However, Link 2 to Switch 2 would be active for Guests 3 and 4, and Link 1 to Switch 1 would be backup for Guests 3 and 4.
    The following provides details on the configuration of NIC teaming with VMWare:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
    There are also possibilities of configuring LACP in some situations, but there are special hardware considerations on the switch side as well as the host side.
    Also keep in mind that the vSwitch does not indiscriminately forward broadcast/multicast/unknown unicast out all ports.  It has a strict set of rules that prevents it from looping.  It is not a traditional L2 forwarder so loops are not a consideration in an active/active environment. 
    This document further explains VMWare Virtual Networking Concepts.
    http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf
    Steve McQuerry
    UCS - Technical Marketing
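    Assuming a standard vSwitch and VMware PowerCLI, both options can be expressed roughly like this (host, vSwitch and vmnic names are placeholders):
    # Get the host, vSwitch and physical uplinks.
    $vmhost = Get-VMHost -Name "esxi01.lab.local"
    $vsw    = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"
    $nic0   = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
    $nic1   = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
    # Option A: both uplinks active, default "route based on originating virtual port ID".
    Get-NicTeamingPolicy -VirtualSwitch $vsw | Set-NicTeamingPolicy -MakeNicActive $nic0, $nic1 -LoadBalancingPolicy LoadBalanceSrcId
    # Option B: explicit active/standby failover order.
    Get-NicTeamingPolicy -VirtualSwitch $vsw | Set-NicTeamingPolicy -MakeNicActive $nic0 -MakeNicStandby $nic1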

  • Windows Server 2012/2012R2 NIC Teaming Mode

    Hi,
    Question 1:
    In Windows Server 2012 the following teaming mode was recommended for Hyper-V NIC teams:
    Teaming mode: Switch Independent
    Load balancing mode: Hyper-V Port
    All Adapters Active
    In a session at TechEd 2014 it was stated that Dynamic is the new recommendation for Windows Server 2012 R2. However, a Microsoft PFE stated a few weeks ago that he would still recommend Hyper-V Port for Windows Server 2012 R2. What is your opinions around
    this?
    Question 2:
    We have a Hyper-V Failover Cluster which isn't migrated to 2012 R2 yet; it's running 2012. In this cluster we use Switch Independent/Hyper-V Port for the team. We also use converged networking, having 2 physical adapters bound to the NIC team, as well as
    3 virtual adapters in the management OS for management, CSV and Live Migration. Recently one of the team NICs failed, and this incident also caused the cluster membership on the affected node to go offline even though the other team NIC was
    connected. Is this expected behaviour? Would the behaviour be different if 2012 R2 with Dynamic mode was being used?

    Hello,
    As for question number 1:
    For Hyper-V workloads it's recommended to use Dynamic with
    Switch Independent mode. Why?
    This configuration will distribute the load based on the TCP Ports address hash as modified by the Dynamic load balancing algorithm. The Dynamic load balancing algorithm will redistribute flows to optimize team member bandwidth utilization so individual
    flow transmissions may move from one active team member to another.  The algorithm takes into account the small possibility that redistributing traffic could cause out-of-order delivery of packets so it takes steps to minimize that possibility.
    The receive side, however, will look identical to Hyper-V Port distribution.  Each Hyper-V switch port’s traffic, whether bound for a virtual NIC in a VM (vmNIC) or a virtual NIC in the host (vNIC), will see all its inbound traffic arriving on a single
    NIC.
    This mode is best used for teaming in both native and Hyper-V environments except when:
    1) Teaming is being performed in a VM,
    2) Switch dependent teaming (e.g., LACP) is required by policy, or
    3) Operation of a two-member Active/Standby team is required by policy. 
    As for question number 2:
    The Switch Independent/Hyper-V Port will send packets using all active team members distributing the load based on the Hyper-V switch port number.  Each Hyper-V port will be bandwidth limited to not more than one team member’s bandwidth because the port
    is affinitized to exactly one team member at any point in time. 
    In all cases where this configuration was recommended back in Windows Server 2012 the new configuration in 2012 R2, Switch Independent/Dynamic, will provide better performance.
    For a clustered Hyper-V deployment on Windows Server 2012, Microsoft recommends using Switch Independent/Hyper-V Port as you mentioned, and configuring
    Hyper-V QoS on the virtual switch (configure minimum bandwidth in
    weight mode instead of in bits per second, and enable and configure QoS
    for all virtual network adapters).
    Did you apply QoS on the converged vSwitch after you
    created the team? Note that nodes are considered down if they do not respond to 5 heartbeats. Switch Independent/Hyper-V Port does not by itself cause the cluster membership to go down when one NIC fails, so the issue is likely somewhere else and not in the teaming mode
    that you chose.
    Hope this helps.
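    For reference, a rough sketch of the weight-mode QoS configuration referred to above (switch, team and vNIC names plus the weights are placeholder examples):
    # Create the team-backed switch with weight-based minimum bandwidth.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ProdTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false
    # Host vNICs for management, CSV and Live Migration, each with a minimum bandwidth weight.
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
    Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 30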
    Regards,
    Charbel Nemnom
    MCSA, MCSE, MCS, MCITP
    Blog: www.charbelnemnom.com
