NIC Teaming with CSS

Hi Gilles,
Is NIC teaming supported with CSS? How can I load balance two servers, with each server having two NICs and using NIC teaming?
Thanks
Sushil

As far as I know there is no LAG (link aggregation) support on the CSS. I think the best solution is to place a switch between the servers and the CSS.

Similar Messages

  • HP Servers NIC Teaming with Cisco Nexus 2000/5000

    I have a number of HP servers that will be connected to Cisco Nexus 2000/5000 switches.
    In the HP servers, there are multiple options for NIC teaming. I would like to connect each port of a NIC card to two different Nexus 2000 fabric extenders uplinked to the Nexus 5000 switches. The Nexus 5000 switches will be configured as a vPC pair for clustering.
    I wanted to know which would be the best NIC teaming option from the following HP server NIC teaming options:
    Automatic
    802.3ad Dynamic with Fault Tolerance
    Switch-assisted load balancing with Fault Tolerance (SLB)
    Transmit load balancing with Fault Tolerance (TLB)
    Transmit Load Balancing with Fault tolerance and preference order
    Network Fault Tolerance Only (NFT)
    Network Fault Tolerance with Preference Order

    Nexus switches only support LACP (802.3ad) or ON mode.  So, to match your server configuration with your switch, the first option is the best one to use.  I think SLB is a Microsoft proprietary protocol.
    HTH
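    For comparison only: if these were Windows Server 2012+ hosts using the built-in LBFO teaming rather than the HP utility, an LACP team matching the switch-side port channel would look roughly like this (team and adapter names are assumptions, not from the original post):
    # Hedged sketch - LACP (802.3ad) team; requires a matching LACP port channel on the Nexus side
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts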

  • HP NIC Team with MDT

    I am trying to deploy servers with MDT, and as a requirement I need to team the NICs. We use HP servers, and they provide a utility,
    CQNICCMD, to get that done.
    I have referred to the link below for the command/switches.
    http://h20564.www2.hp.com/hpsc/doc/public/display?docId=c04024934
    I am not sure where in the task sequence I should create the NIC team, because it keeps failing wherever I put it. Any documentation on how to get this done would be a great help.

    I would place the command during the State Restore phase, somewhere among the custom steps, assuming that the teaming can be done in the regular OS.
    The question is: What are the failures?
    Please note that if you are configuring the only NICs on the machine *AND* running the scripts from *over* the network, your script could suddenly find itself running without access to its own source. I would devise a method to copy the scripts/install
    program/drivers locally, kick off the script locally, and then wait for the network to be available.
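    A rough sketch of that "copy locally, then run" approach (the paths, config file and CQNICCMD switches below are assumptions - check the HP document linked above for the real syntax):
    # Copy the teaming utility and its config from the deployment share to a local folder (paths assumed)
    $src  = "\\DEPLOYSRV\NicTeam"
    $dest = "C:\Temp\NicTeam"
    New-Item -ItemType Directory -Path $dest -Force | Out-Null
    Copy-Item -Path "$src\*" -Destination $dest -Recurse -Force
    # Run the HP utility from the local copy so the script survives the NIC reconfiguration
    # (the /s switch and team.xml file are placeholders for whatever the HP doc specifies)
    Start-Process -FilePath "$dest\cqniccmd.exe" -ArgumentList "/s $dest\team.xml" -Wait
    # Wait for the network to come back before letting the task sequence continue
    do { Start-Sleep -Seconds 10 } until (Test-Connection -ComputerName "DEPLOYSRV" -Count 1 -Quiet)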
    Keith Garner - Principal Consultant [owner] -
    http://DeploymentLive.com

  • SR-IOV Uplink Port with NIC Teaming

    Hello,
    I'm trying to set up my uplink port profile and logical switch with NIC teaming and SR-IOV support. In Hyper-V this was easy: I just had to create the NIC team (which I configured as Dynamic & LACP) and then check the box on the virtual switch.
    In VMM, it does not seem to allow enabling NIC teams with SR-IOV.
    Can anyone advise? I'm not using any virtual ports. I just want all my VMs to connect to the physical switch through the LACP NIC team, something I thought would be simple.
    I have a plan B - don't use Microsoft's NIC teaming and instead use the Intel technology to present all the adapters as one to the host. I'd rather not do this.
    Thanks
    MrGoodBytes

    Hi Sir,
    "SR-IOV does have certain limitations. If you configure port access control lists (ACLs), extensions or policies in the virtual switch, SR-IOV is disabled because its traffic totally bypasses the switch.
    You can’t team two SR-IOV network cards in the host. You can, however, take two physical SR-IOV NICs in the host, create separate virtual switches and team two virtual network cards within a VM. "
    There really is a limitation when using NIC teaming:
    http://technet.microsoft.com/en-us/magazine/dn235778.aspx
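    A minimal sketch of the workaround described in that quote - separate SR-IOV switches on the host and teaming inside the guest (switch, adapter and VM names are assumptions):
    # One SR-IOV capable virtual switch per physical NIC (no host-side team)
    New-VMSwitch -Name "SRIOV-Switch-1" -NetAdapterName "NIC1" -EnableIov $true -AllowManagementOS $false
    New-VMSwitch -Name "SRIOV-Switch-2" -NetAdapterName "NIC2" -EnableIov $true -AllowManagementOS $false
    # Give the VM one vNIC on each switch, allow guest teaming and enable SR-IOV on both
    Add-VMNetworkAdapter -VMName "MyVM" -Name "SRIOV-1" -SwitchName "SRIOV-Switch-1"
    Add-VMNetworkAdapter -VMName "MyVM" -Name "SRIOV-2" -SwitchName "SRIOV-Switch-2"
    Set-VMNetworkAdapter -VMName "MyVM" -AllowTeaming On -IovWeight 100
    # The two adapters are then teamed inside the guest OS (e.g. with New-NetLbfoTeam in a Windows guest)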
    Best Regards,
    Elton Ji 

  • Relationship between coherence and NIC teaming

    Hi,
    We are using Tangosol coherence for clustering purpose in our product Webmethods Integration server.
    When our server starts up it tries to join the cluster.
    Our scenario is this :-
    We have 2 servers running on 2 separate boxes A&B.
    They are on same network segment.
    Multicast test is working properly .
    The issue is that only one of the nodes (the one started first) becomes part of the cluster and the other one remains disabled.
    We found out that the NIC teaming was disabled in the boxes.
    When we enabled NIC teaming with smart load balancing, both nodes were able to join the cluster.
    My specific question is,
    Is there any relationship between Tangosol Coherence and NIC teaming? If yes, what is the relationship?
    Regards,
    Ritwik Bhattacharyya

    I did some tinkering a while back trying to get 4Gb/s bonded etherchannels going on linux boxes but I had issues with out of order and missing packets:
    4Gb/s bonded ethernet test results - finally...
    But to answer your question, there is no reason that you would need NIC teaming enabled in order to make Coherence work. It sounds like something is not configured correctly with your NIC or switch. Maybe try connecting the machines with a crossover cable instead of a switch, just to eliminate the switch as a possible problem. It sounds like maybe you're just using the wrong Ethernet port on a server or something.
    -Andrew

  • Are these viable designs for NIC teaming on UCS C-Series?

    Is this a viable design on ESXi 5.1 on UCS C240 with 2 Quad port nic adapters?
    Option A) VMware NIC Teaming with load balancing of vmnic interfaces in an Active/Active configuration through alternate and redundant hardware paths to the network.
    Option B) VMware NIC Teaming with load balancing of vmnic interfaces in an Active/Standby configuration through alternate and redundant hardware paths to the network.
    Option A:
    Option B:
    Thanks.

    No.  It really comes down to what Active/Active means and the type of upstream switches.  For ESXi NIC teaming, Active/Active load balancing provides the opportunity to have all network links active for different guest devices.  Teaming can be configured in a few different methods.  The default is by virtual port ID, where each guest machine gets assigned to an active port and also a backup port.  Traffic for that guest would only be sent on one link at a time.
    For example, let's assume 2 Ethernet links and 4 guests on the ESX host.  Link 1 to Switch 1 would be active for Guests 1 and 2, and Link 2 to Switch 2 would be backup for Guests 1 and 2.  However, Link 2 to Switch 2 would be active for Guests 3 and 4, and Link 1 to Switch 1 would be backup for Guests 3 and 4.
    The following provides details on the configuration of NIC teaming with VMWare:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
    There are also possibilities of configuring LACP in some situations, but there are special hardware considerations on the switch side as well as the host side.
    Also keep in mind that the vSwitch does not indiscriminately forward broadcast/multicast/unknown unicast out all ports.  It has a strict set of rules that prevents it from looping.  It is not a traditional L2 forwarder so loops are not a consideration in an active/active environment. 
    This document further explains VMWare Virtual Networking Concepts.
    http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf
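    For reference, a rough PowerCLI sketch of the two options (not UCS-specific; the host, vSwitch and vmnic names are assumptions, and the parameter names are from memory, so verify against the PowerCLI help):
    $vmnics = Get-VMHostNetworkAdapter -VMHost "esxi01" -Physical -Name "vmnic0","vmnic1"
    # Option A: Active/Active with the default "route based on originating virtual port ID" policy
    Get-VirtualSwitch -VMHost "esxi01" -Name "vSwitch0" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId -MakeNicActive $vmnics
    # Option B: Active/Standby on the same vSwitch
    Get-VirtualSwitch -VMHost "esxi01" -Name "vSwitch0" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive $vmnics[0] -MakeNicStandby $vmnics[1]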
    Steve McQuerry
    UCS - Technical Marketing

  • Hyper-V NIC Team Load Balancing Algorithm: TransportPorts vs HyperVPorts

    Hi, 
    I'm going to need to configure a NIC team for the LAN traffic for a Hyper-V 2012 R2 environment. What is the recommended load balancing algorithm? 
    Some background:
    - The NIC team will deal with LAN traffic (NOT iSCSI storage traffic)
    - I'll set up a converged network. So there'll be a virtual switch on top of this team, which will have vNICs configured for cluster, live migration and management traffic
    - I'll implement QoS at the virtual switch level (using the -DefaultFlowMinimumBandwidthWeight option) and at the vNIC level (using the -MinimumBandwidthWeight option)
    - The CSV is set up on an Equallogics cluster. I know that this team is for the LAN so it has nothing to do with the SAN, but this reference will become clear in the next paragraph. 
    Here's where it gets a little confusing. I've checked some of the Equallogics documentation to ensure this environment complies with their requirements as far as storage networking is concerned. However, as part of their presentation, the Dell publication
    TR1098-4 recommends creating the LAN NIC team with the TransportPorts load balancing algorithm. However, in some of the Microsoft resources (i.e. http://technet.microsoft.com/en-us/library/dn550728.aspx), the recommended load balancing algorithm is HyperVPorts.
    Just to add to the confusion, in this Microsoft TechEd presentation, http://www.youtube.com/watch?v=ed7HThAvp7o, the recommendation (at around minute 8:06) is to use the Dynamic algorithm mode. So obviously there are many ways to do this, but which one is
    correct? I spoke with Equallogics support and the rep said that their documentation recommends the TransportPorts LB algorithm because that's what they've tested and know works. I'm wondering what the response from a Hyper-V expert would be to this question. Anyway,
    any input on this last point would be appreciated.

    Gleb,
    >>See Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management for more info
    Thanks for this reference. It seems that I have an older version of this document where there's absolutely
    no mention of the dynamic LBA. Hence my confusion when in the Microsoft TechEd presentation the
    recommendation was to use Dynamic. I almost implemented this environment with switch dependent and Address Hash Distribution because, based on the older version of the document, this combination offered: 
    a) Native teaming for maximum performance and switch diversity is not required; or
    b) Teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver
    The new version of the document recommends Dynamic over the other two LBA. The analogy that the document
    makes of TCP flows with human speech was really helpful for me to understand what this algorithm is doing. For those who will never read the document, I'm referring to this: 
    "The outbound loads in this mode are dynamically balanced based on the concept of
    flowlets.  Just as human speech has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have naturally
    occurring breaks.  The portion of a TCP flow between two such breaks is referred to as a flowlet.  When the dynamic mode algorithm detects that a flowlet boundary has been encountered, i.e., a break of sufficient length has occurred in the TCP flow,
    the algorithm will opportunistically rebalance the flow to another team member if appropriate.  The algorithm may also periodically rebalance flows that do not contain any flowlets if circumstances require it.  As a result the affinity
    between TCP flow and team member can change at any time as the dynamic balancing algorithm works to balance the workload of the team members. "
    Anyway, this post made my week. You sir are deserving of a beer!
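    A minimal sketch of the converged setup described above, using the Dynamic algorithm the newer guide recommends (the adapter names and weight values are assumptions, not a recommendation):
    # Switch-independent team with the Dynamic load balancing algorithm
    New-NetLbfoTeam -Name "LANTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # Converged virtual switch on top of the team, with weight-based QoS
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "LANTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false
    Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 40
    # Host vNICs for management, cluster and live migration traffic, each with its own weight
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40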

  • NIC teaming creates packet loss (Windows 2008 R2)?

    I'm experiencing some packet loss to all of our VMs that we didn't have before we made some changes to our Hyper-V implementation (Windows 2008 R2). Most of the VMs also run 2008 R2 - with 3 that run Server 2003.
    The host server is a Dell R610 with three 4-port NICs - two Intel quad port gigabit and a quad port Broadcom.
    We use the individual ports of the Broadcom for host management and live migration - no problems here. We use the Intel cards for both iSCSI and VM networks. Calling the two Intel cards “A” and “B”, and the ports P1-4, we've used AP1, AP2, BP1, BP2 (ports
    1 & 2 of both Intel NICs) for iSCSI connections, and we've created a NIC Team with AP3, AP4, BP3, and BP4 (ports 3 and 4 of both Intel NICs). The team type is "Virtual Machine Load Balancing". We then created a Hyper-V switch based on this team
    for use with all of the VMs created on the host. (as a side note: prior to implementing the NIC team, we just had 4 Hyper-V switches, one associated with each of these 4 ports.)
    The 4 ports of the NIC team are connected to two different Cisco SG200 switches - AP3 and BP3 are connected to switch1, and AP4 and BP4 are connected to switch2 (in an attempt to maximize redundancy). The two Cisco SG200s are simply connected to the rest
    of our network - each to a different switch within the subnet. There is minimal configuration done to the SG200s (for example NO
    link aggregation); spanning tree is enabled, however.
    My question is: can the network cables be connected to different switches (as they currently are) and if so is there some configuration piece (either on the switch or within Windows) that I'm missing? 
    What are the options here if this configuration is incorrect? The packet loss is in the range of 0.1%, but we've had odd spikes where a VM was essentially unavailable for a brief period (a few minutes) then returned to "normal" (0.1% loss). 
    Pinging a device (like the SG200 itself) or another physical server (for example our domain controller or the hyper-v host itself) results in essentially 0 loss; maybe one or two packets during the course of a 12 hour ping (this was the “normal” ping
    response to VMs before we created the NIC team, so I’m quite sure this has something to do with it).
    Thanks in advance!

    I believe that when utilizing Virtual Machine Load Balancing the ports must be connected to the same switch, stack, or chassis, because the ARP entry for the MAC could move.  I believe, although I could be wrong, that the outages you see occur when the machine "moves"
    between ports and the ARP is updated between the two switches. 
    I believe you are looking for switch fault tolerance teaming, which will allow for the failure of an adapter, cabling, or switch and will achieve your goal of maximum redundancy.  This is achieved via spanning tree on the switches, which you indicated
    is already configured.
     

  • Windows Server 2012 R2 NIC Teaming and DHCP Issue

    Came across a weird issue today during a server deployment. I was doing a physical server deployment and got Windows installed and was getting ready to connect it to our network. Before connecting the Ethernet cables to the network adapters, I created a
    NIC Team using Windows Server 2012 R2 built-in software with a static IP address (we'll say its 192.168.1.56). Once I plugged in the Ethernet cables, I got network access but was unable to join our domain. At this time, I deleted the NIC team and the two network
    adapters got their own IP addresses issued from DHCP (192.168.1.57 and 192.168.1.58) and at this point I was able to join our domain. I recreated the NIC team and set a new static IP (192.168.1.57) and everything was working great as intended.
    My issue is when I went into DHCP I noticed a random entry that was using the IP address I used for the first NIC teaming attempt (192.168.1.56), before I joined it to the domain. I call this a random entry because it is using the last 8 characters of the
    MAC address as the hostname instead of the server's hostname.
    It seems when I deleted the first NIC team I created (192.168.1.56), a random MAC address Server 2012 R2 generated for the team has remained embedded in the system. The IP address is still pingable even though an ipconfig /all shows the current NIC team
    with the IP 192.168.1.57. There is no IP address of 192.168.1.56 configured on the current server and I have static IPs set yet it is still pingable and registering with DHCP.
    I know this is slightly confusing but I am hoping someone else has encountered this issue and may be able to tell me how to fix this. Simply deleting the DHCP entry does not do the trick, it comes back.

    Hi,
    Please confirm that you have chosen the right NIC team type. If you’ve previously configured NIC teaming, you’re aware that NIC teams usually require the assistance of network-side
    protocols. Prior to Windows Server 2012, using a NIC team on a server also meant enabling protocols like EtherChannel or LACP (also known as 802.1ax or 802.3ad) on the network ports.
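    As a quick check (a sketch assuming the built-in LBFO teaming and a team named "Team1"), compare the team interface's mode and MAC address against the stale DHCP entry:
    # Teaming mode and load balancing algorithm of the existing team
    Get-NetLbfoTeam -Name "Team1" | Format-List Name, TeamingMode, LoadBalancingAlgorithm
    # MAC address of the team interface - compare it with the orphaned DHCP lease
    Get-NetAdapter -Name "Team1" | Format-List Name, MacAddress, Status
    # If the stale lease keeps re-registering, also check DNS registration on the team interface
    Get-DnsClient -InterfaceAlias "Team1" | Format-List RegisterThisConnectionsAddress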
    More information:
    NIC teaming configure in Server 2012
    http://technet.microsoft.com/en-us/magazine/jj149029.aspx
    Hope this helps.

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Centre 2012 R2, however since putting it into production, we are now seeing problems with Virtual Machine Queues. These manifest themselves as
    either very high latency inside virtual machines (we’re talking 200 – 400 mSec round trip times), packet loss or complete connectivity loss for VMs. Not all VMs are affected however the problem does manifest itself on all hosts. I am aware of these issues
    having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (Equallogic Storage, PowerConnect Switching and PE R720 VM Hosts). This environment was based on Server 2012 and a decision was taken to bring this up to speed to R2. This was due to a number
    of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V Cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
    management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining
    4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.
    The initial driver for upgrading was that we were once again seeing issues with VMQ in the old design with the converged fabric design. The two vNics in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
    to the same designated NICs in each of the VM hosts).
    In this setup, a similar issue was being experienced to our present issue. Once again, the Converged Fabric vNICs in the Host OS would on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with
    a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration and would then affect both vNICS connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs
    which in turn would trigger all sorts of horrid goings on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
    utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
    However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
    interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
    In each host, the above 4 VM Switch NICs are formed into a Switch independent, Dynamic team (Sum of Queues mode), each physical NIC has
    RSS disabled and VMQ enabled and the Team Multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap
    and as the host processors have Hyper Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                           InterfaceDescription                Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
    VM_SWITCH_ETH01                Intel(R) Gigabit 4P I350-t A...#8   True     0:10              1              7
    VM_SWITCH_ETH03                Intel(R) Gigabit 4P I350-t A...#7   True     0:14              1              7
    VM_SWITCH_ETH02                Intel(R) Gigabit 4P I350-t Ada...   True     0:12              1              7
    VM_SWITCH_ETH04                Intel(R) Gigabit 4P I350-t A...#2   True     0:16              1              7
    Production VM Switch           Microsoft Network Adapter Mult...   True     0:0                              28
    Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
    The loss of connectivity / high latency (200 – 400 mSec as before) only seems to arise when a VM is moved via Live Migration from host to host. If I setup a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings
    at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a
    queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process however, longer than I have seen in
    the past (usually only a ping or two are lost, however we are now seeing 5 or more before VM network connectivity is restored on the destination host, this being enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
    VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
    It goes without saying that we really don’t want to disable VMQ, however given the nature of our clients business, we really cannot afford for these issues to crop up. If I can’t find a resolution here, I will be left with no choice as ironically, we see
    less issues with VMQ disabled compared to it being enabled.
    I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
    Many thanks
    Matt

    Hi Gleb
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear as though the two GUIDs in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
    Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
    as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    VM_SWITCH_ETH01      10            1          VMQ
    VM_SWITCH_ETH02      12            1          VMQ
    VM_SWITCH_ETH03      14            1          VMQ
    VM_SWITCH_ETH04      16            1          VMQ
    SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
    Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    SMB_FABRIC_ETH01     18            2          RSS
    SMB_FABRIC_ETH02     18            2          RSS
    ISCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our ISCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per
    the VM Switch team above. No VMQ on these NICS, however RSS is enabled as follows:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    ISCSI_SAN_ETH01      2             2          RSS
    ISCSI_SAN_ETH02      6             2          RSS
    ISCSI_SAN_ETH03      2             2          RSS
    ISCSI_SAN_ETH04      6             2          RSS
    Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
    cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
    are configured for VMQ, thus RSS has been disabled:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    MAN_SWITCH_ETH01     22            1          VMQ
    MAN_SWITCH_ETH02     22            1          VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where practical we have tried to balance the NICs over all available cores.
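    For anyone trying to reproduce this layout, here is a rough sketch of the cmdlets that produce the processor assignments in the tables above (the adapter names and values simply mirror those tables and are not a recommendation):
    # VM Switch team members: VMQ on, one core each, non-overlapping (Sum of Queues), RSS off
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH03" -BaseProcessorNumber 14 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH04" -BaseProcessorNumber 16 -MaxProcessors 1
    Disable-NetAdapterRss -Name "VM_SWITCH_ETH01","VM_SWITCH_ETH02","VM_SWITCH_ETH03","VM_SWITCH_ETH04"
    # SMB / Live Migration NICs: RSS only, VMQ off
    Set-NetAdapterRss -Name "SMB_FABRIC_ETH01" -BaseProcessorNumber 18 -MaxProcessors 2
    Set-NetAdapterRss -Name "SMB_FABRIC_ETH02" -BaseProcessorNumber 18 -MaxProcessors 2
    Disable-NetAdapterVmq -Name "SMB_FABRIC_ETH01","SMB_FABRIC_ETH02"
    # Verify what queues actually get allocated to the VM NICs
    Get-NetAdapterVmq | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors
    Get-NetAdapterVmqQueue | Format-Table Name, QueueID, MacAddress, VlanID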
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt

  • VMM 2012 SP1: unable to create a logical switch on a team with different NICs (bug?)

    The Hyper-V host is an HP ProLiant G7 with 4 onboard NICs and an additional (different) 4-port adapter, all 1 Gbit. Server 2012 with latest updates.
    VMM 2012 is SP1 with latest updates, running on Server 2012 in a VM.
    When I create a logical switch with a teamed uplink port profile in VMM, the job builds the team but fails to create the vswitch when ports from both the onboard NICs and the adapter are used.
    I am able to create a vswitch manually on the team if I do it directly on the host.
    If the team is created on either purely onboard NICs or purely adapter NICs, the logical switch creation succeeds.
    In short: I am able to create a vswitch based on a Server 2012 team with different NICs, but not a logical switch. Why?

    I have tried assigning both NICs in one step, and one by one. When I create the logical switch team with only one NIC it succeeds, but then fails when I add the second (unless it is similar to the first).
    Error in VMM is
    Error (2912)
    An internal error has occurred trying to contact the dkfuzhv01.mia.local server: : .
    WinRM: URL: [http://dkfuzhv01.mia.local:5985], Verb: [GET], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/scvmm/ErrorInfo?ID=1010]
    Unknown error (0x80041001)
    Recommended Action
    Check that WS-Management service is installed and running on server dkfuzhv01.mia.local. For more information use the command "winrm helpmsg hresult". If dkfuzhv01.mia.local is a host/library/update server or a PXE server role then ensure that
    VMM agent is installed and running.
    If I add the second non-similar team member from the host itself, I get this error from the NIC teaming wizard:
    Validation failed and changes to the system are rolled back
    It is not a question of losing connection (been there); the management interface is on a different team. And it also fails when the second team member is added directly on the host...
    So far I will stick to the conclusion that my problem is with the logical switch and different NICs :)
    Oh, by the way, the NICs are HP NC382i (Broadcom) and NC365T (Intel).

  • NIC teaming - Server 2008 R2 DC combined with other Software

    Hello!
    I've been searching all morning for an answer of what we have in mind to do at work....
    We've got a server installed with Windows Server 2008 R2 and have 4 NICs on it. We want to make it a DC (with DNS, DHCP and print services) and also want to install our Backup Solution (from Veeam) for our VMs. This server will be the only physical Microsoft
    server next to our 3 ESX servers at the end.
    I read here (http://markparris.co.uk/2010/02/09/top-tipactive-directory-domain-controllers-and-teamed-network-cards/) that there is a statement that a DC with NIC teaming only uses the FO (fail-over) feature of the teaming. Since the backup
    solution is also on this server, it would be great to use the LB (load-balancing) feature as well. My question is: when I activate NIC teaming and install the DC roles, do the roles just use the FO feature and neglect the LB feature, or do they enable/disable those modes/features
    of NIC teaming? Because it would be nice if the backup solution could use LB for bigger bandwidth for backups and restores, and I wouldn't really care about FO for the DC role.
    cheers
    Ivo

    Hi,
    I think the issue is related to the third-party NIC teaming solution. You can refer to the third-party manufacturer.
    Here I should remind you of something else: a DC with multiple NICs will cause many problems. So I would recommend you run a dedicated
    Hyper-V server and promote a DC on one of the virtual machines.
    Hope this helps.

  • Correct binding order in a Cluster with logical switches, NIC teams, and vNICs on the host.

    I have seen many recommendations to set the network binding order on you Hyper-V hosts to something similar to:
    Management NIC
    Cluster NICs
    iSCSI NICS
    However, all of  these recommendations are for scenarios where the NICs are all physical NICs in the host.
    Using Server 2012 R2, I am building converged networks with logical switches, NIC Teams, and vNICs on the host.  So when I go set the network binding order, I now have all these components to deal with as well.  For example, on a 4 adapter blade,
    I might typically have the following items in the binding order drop-down.
    4 - physical NICs (2- teamed for the 1 virtual switch, the other 2 used for iSCSI)
    1 - Team interface (Datacenter_Switch)
    5 - vNICs (Management, Cluster, LiveMigration, iSCSI-1, iSCSI-2)
    So, should you only worry about the order of the vNICs (placed at the top) and let the other components just fall to the bottom of the list?  This seems likely to me, since the binding order applies to service access to the resources, and the other
    components are not being directly accessed by network services.
    Or, should the order start out with the physical resources needed to access the vNICs, followed by any intermediate resources (switches or team interfaces), then the vNICs themselves, to ensure that the resources are available to the subcomponents accessing
    them?
    Any help would be appreciated.
    Thanks.
    -Tim Reid

    If by 'network binding order' you mean the order set in the Advanced Settings of the Network Connections of the Control Panel, then the most important one is to make sure the domain network is at the top of the list.  Whichever network is at the top
    of the list is used first for auth functions.  So auth functions perform best when the proper network is placed first in the binding order.  After that, I don't know that it makes much difference at all.  (If it does, I'm sure my statement will
    start a lively discussion. <grin>)
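    (If you want to inspect the current binding order outside the GUI, here is a rough sketch - it just reads the Tcpip Linkage\Bind registry value, whose entries are listed in binding order; the registry path assumes a default install:)
    $bind = (Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Linkage').Bind
    foreach ($entry in $bind) {
        $guid = ($entry -split '\\')[-1]                                # e.g. {E2B3...}
        $nic  = Get-NetAdapter | Where-Object InterfaceGuid -eq $guid   # may be empty for hidden devices
        "{0}  {1}" -f $guid, $nic.Name
    }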
    . : | : . : | : . tim

  • IBM MCS Server Running Unity with NIC Teaming

    All,
    has anyone ever run NIC teaming on an IBM MCS server with Unity before? At question is the fact that many servers create a virtual MAC address that is different from either of the actual MACs when you team NICs. If this is the case on the IBM servers then we may need to request an updated license. The servers are branded as Cisco MCS-7835-I1
    units.
    Thanks in advance. All replies rated!

    Hi
    According to this doc: http://www.cisco.com/en/US/prod/collateral/voicesw/ps6790/ps5748/ps378/product_solution_overview0900aecd80091615.html
    The 7825-I4 is an IBM x3250-M2. You can use that model number to search the IBM drivers page for the things you need:
    http://www-933.ibm.com/support/fixcentral/systemx/selectFixes
    I think the NIC teaming comes with the Network Drivers.
    Regards
    Aaron
    Please rate helpful posts...

  • WAAS issue with NIC teaming

    Hi,
    Can someone tell me what effect NIC teaming has on WAAS?
    I was preparing a demo for one of my clients, but before this they were using another popular WAN optimization product. According to my client, they had a big network issue with NIC teaming, i.e. one IP and 2 MAC addresses in a server load-sharing
    scenario.
    Before the demo I just wanted to confirm that WAAS has nothing to do with the above-mentioned problem... can anybody confirm?

    Hmm... Still not quite sure what you're saying. It sounds like by coincidence the other accelerator box got the same IP as one of their servers. They should have been able to fix that quite simply, not to mention test for it by doing the equivalent of "show int" on the box before plugging it in.
    Or possibly they had some type of proxy arp configured wrong?
    Either way, we're running WAAS boxes on our networks that have teaming of NICs via HP's teaming utility using "Transmit load balancing" on the servers. The WAAS boxes are configured for ether-channeling their NICs and using WCCP for traffic redirection.
    Actually, if you deploy them using WCCP rather than inline, there definitely shouldn't be a MAC conflict since the WAAS boxes will be on a different subnet at that point. No chance of a problem like that with Layer 3 separation...
