Hyper-V NIC teaming on HP ProLiant BL685c G7

We are in the process of creating a Hyper-V 2012 R2 cluster, with 2 servers initially.
HP ProLiant BL685c G7 (2 servers)
Below are the NIC details, and we have IBM SAN storage.
Can someone provide some details, or links to details, on how to create a NIC team?
Can I team all 4 NICs and, from there, create vNICs: 1 for management, 1 for live migration, 1 for cluster and 1 for VMs?
Is my assumption correct?

Sam,
Following the guidance you have given, I have just a few more queries from my configuration. Below is the script I have run to complete the task; the questions are marked inline.
New-NetLbfoTeam -Name TEAMNIC -TeamMembers NIC1,NIC2,NIC3,NIC4 -LoadBalancingAlgorithm HyperVPort -TeamingMode SwitchIndependent
New-VMSwitch -Name VSWITCH -NetAdapterName TEAMNIC -AllowManagementOS $False -MinimumBandwidthMode Weight
Set-VMSwitch VSWITCH -DefaultFlowMinimumBandwidthWeight 20
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VSWITCH"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "VSWITCH"
Add-VMNetworkAdapter -ManagementOS -Name "VirtualMachine" -SwitchName "VSWITCH"
Add-VMNetworkAdapter -ManagementOS -Name "CLuster" -SwitchName "VSWITCH"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 251
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "VirtualMachine" -Access -VlanId 61
Question 1: I am unable to add or set multiple VLANs on one vNIC. Please let me know how (see the sketch after the script).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 62
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CLuster" -Access -VlanId ???
         Question 2: Do I need to assign a VLAN and an IP address for the Cluster and LiveMigration vNICs?
         Question 3: For example, if I want to give the Cluster vNIC an IP like a traditional SQL heartbeat NIC (an IP without a gateway), do I need to allow that VLAN on the switch trunk? As of now, only VLANs 251, 61 and 62 are allowed.
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "VirtualMachine" -MinimumBandwidthWeight 50
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "CLuster" -MinimumBandwidthWeight 30

Similar Messages

  • Hyper-V, NIC Teaming and 2 hosts getting in the way of each other

    Hey TechNet,
    After my initial build of 2 Hyper-V Core servers, which took me a bit of time without a domain, I started building 2 more for another site. After the initial two, setting up the new ones went very fast, until I ran into a very funny issue. And I am willing to bet it is just my luck, but I am wondering if anyone else out there has ended up with it.
    So, I built these 2 new servers, created a NIC team on each host, added the management OS adapter, gave it an IP, and I could ping the world. So I went back to my station and tried to start working on these hosts, but I kept getting disconnected, especially from one of them. I reinstalled it and remade the NIC teaming config, just in case. Same issue.
    So I started pinging both of the servers, and I noticed that when one was answering pings, the other one tended to stop answering, and vice versa. Testing the firewall and the switch, and even putting the 2 machines on different switches, did not help. So I thought, what the heck, let's just remove all the network config from both machines, reboot, and redo the network config. Since then, no issue.
    I only forgot to do one thing before removing the network configuration: to check whether the MAC addresses on the Management OS adapters were the same. Even if it is a small chance, it can still happen (1 in 256^4, I'd say).
    So to get to my question: am I that unlucky, or might it have been something else?
    Enjoy your weekends

    I raised this bug long ago (one year ago, in fact) and it still happens today.
    If you create a virtual switch and then add a management vNIC to it, there are times when you will get two hosts with the same MAC on the vNIC that was added for management.
    I have seen this in my lab (and I can reproduce it at will).
    Modify the entire Hyper-V MAC address pool, or else you will have the same issue with VMs. This is the only workaround.
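    Concretely, the per-host MAC pool can be inspected and moved with Set-VMHost; a minimal sketch (the range values are placeholders, so pick non-overlapping ranges per host):
    Get-VMHost | Select-Object MacAddressMinimum, MacAddressMaximum
    Set-VMHost -MacAddressMinimum "00155D020100" -MacAddressMaximum "00155D0201FF"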
    But yes, it is a very confusing issue.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.

  • Hyper-V NIC Team Load Balancing Algorithm: TransportPorts vs. HyperVPorts

    Hi, 
    I'm going to need to configure a NIC team for the LAN traffic for a Hyper-V 2012 R2 environment. What is the recommended load balancing algorithm? 
    Some background:
    - The NIC team will deal with LAN traffic (NOT iSCSI storage traffic)
    - I'll set up a converged network. So there'll be a virtual switch on top of this team, which will have vNICs configured for cluster, live migration and management
    - I'll implement QoS at the virtual switch level (using the -DefaultFlowMinimumBandwidthWeight option) and at the vNIC level (using the -MinimumBandwidthWeight option)
    - The CSV is set up on an EqualLogic cluster. I know that this team is for the LAN, so it has nothing to do with the SAN, but this reference will become clear in the next paragraph.
    Here's where it gets a little confusing. I've checked some of the EqualLogic documentation to ensure this environment complies with their requirements as far as storage networking is concerned. However, as part of their presentation, the Dell publication TR1098-4 recommends creating the LAN NIC team with the TransportPorts load balancing algorithm, whereas some of the Microsoft resources (i.e. http://technet.microsoft.com/en-us/library/dn550728.aspx) recommend the HyperVPorts algorithm. Just to add to the confusion, in this Microsoft TechEd presentation, http://www.youtube.com/watch?v=ed7HThAvp7o, the recommendation (at around minute 8:06) is to use the Dynamic algorithm. So obviously there are many ways to do this, but which one is correct? I spoke with EqualLogic support, and the rep said that their documentation recommends the TransportPorts LB algorithm because that's what they've tested and know works. I'm wondering what the response from a Hyper-V expert would be to this question. Anyway, any input on this last point would be appreciated.

    Gleb,
    >> See Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management for more info
    Thanks for this reference. It seems that I had an older version of this document, in which there is absolutely no mention of the Dynamic LBA; hence my confusion when the Microsoft TechEd presentation recommended Dynamic. I almost implemented this environment with switch dependent and Address Hash distribution because, based on the older version of the document, this combination offered:
    a) Native teaming for maximum performance when switch diversity is not required; or
    b) Teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver.
    The new version of the document recommends Dynamic over the other two LBAs. The analogy that the document draws between TCP flows and human speech was really helpful for me to understand what this algorithm is doing. For those who will never read the document, I'm referring to this:
    "The outbound loads in this mode are dynamically balanced based on the concept of flowlets. Just as human speech has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have naturally occurring breaks. The portion of a TCP flow between two such breaks is referred to as a flowlet. When the dynamic mode algorithm detects that a flowlet boundary has been encountered, i.e., a break of sufficient length has occurred in the TCP flow, the algorithm will opportunistically rebalance the flow to another team member if appropriate. The algorithm may also periodically rebalance flows that do not contain any flowlets if circumstances require it. As a result, the affinity between TCP flow and team member can change at any time as the dynamic balancing algorithm works to balance the workload of the team members."
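    For an existing team, switching the algorithm over is a one-liner; a sketch (the team name here is illustrative):
    Set-NetLbfoTeam -Name TEAM1 -LoadBalancingAlgorithm Dynamic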
    Anyway, this post made my week. You sir are deserving of a beer!

  • Can you use NIC Teaming for Replica Traffic in a Hyper-V 2012 R2 Cluster

    We are in the process of setting up a two-node 2012 R2 Hyper-V cluster and will be using the Replica feature to make copies of some of the hosted VMs to an off-site, standalone Hyper-V server.
    We have planned to use two physical NICs in an LBFO team on the cluster nodes for the Replica traffic, but wanted to confirm that this is supported before we continue.
    Cheers for now
    Russell

    Sam,
    Thanks for the prompt response. Presumably the same is true of the other types of cluster traffic (Live Migration, Management, etc.)?
    Cheers for now
    Russell
    Yep.
    In our practice we actually use converged networking, which basically NIC-teams all physical NICs into one pipe (switch independent / Dynamic / all adapters active), on top of which we provision vNICs for the parent partition (host OS) as well as guest VMs; a sketch follows.
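    A minimal sketch of that converged pattern (adapter, team and switch names are illustrative):
    New-NetLbfoTeam -Name ConvergedTeam -TeamMembers NIC1,NIC2,NIC3,NIC4 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name ConvergedSwitch -NetAdapterName ConvergedTeam -AllowManagementOS $false -MinimumBandwidthMode Weight
    Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName ConvergedSwitch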
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA
    http://superwidgets.wordpress.com

  • Hyper-V cluster validation report "Found duplicate physical address" on NIC team interfaces

    I recently built a Windows 2012 Hyper-V cluster with 5 nodes. The validation report shows a "duplicate physical address" error (error text pasted below).
    The hardware: HP BladeSystem; the servers are BL460c blades in a c7000 enclosure, connected to HP Virtual Connect switches.
    Each server has 2 physical NICs, teamed in Windows. In the NIC Teaming console, I created the following team interfaces and assigned each a VLAN ID:
    "Team1" (the default team)
    "Team1 - VLAN 204 - Management"
    "Team1 - VLAN 212 - 2012HB"
    "Team1 - VLAN 211 - Exchange DAG Replication"
    I also created 2 Hyper-V virtual switches, neither of which is shared with the management OS. They are assigned to "Team1" and "Team1 - VLAN 211 - Exchange DAG Replication" respectively.
    Therefore, in Network Connections, I see the 2 physical Ethernet NICs and 4 "virtual" NICs. Only 2 of them have IP addresses assigned: Management and HB. These are the two that the validation wizard complains about.
    The MAC address is not configurable in the NIC Teaming console, so I don't see a way to resolve this error, except to use separate physical NICs. I don't want to do that because a) I would lose the benefits of the bandwidth aggregation that Virtual Connect provides, and b) when creating an interface on a team in Windows, it looks like it ALWAYS gets the same MAC address, so this should be a supported configuration.
    Everything works just fine, and there are no other errors or IP conflicts or anything else. But I really want to fix it, because I don't know what unknown problems this may be causing.
    From the Cluster Validation report:
    Found duplicate physical address 10-60-4B-A9-4A-30 on node Cluster201.OurDomain.local adapter Team1 - VLAN 212 - 2012HB and node Cluster201.OurDomain.local adapter Team1 - VLAN 204 - Management.
    Found duplicate physical address F0-92-1C-13-3C-2C on node Cluster202.OurDomain.local adapter Team1 - VLAN 212 - 2012HB and node Cluster202.OurDomain.local adapter Team1 - VLAN 204 - Management.
    Found duplicate physical address 68-B5-99-C1-7E-9C on node Cluster210.OurDomain.local adapter Team1 - VLAN 212 - 2012HB and node Cluster210.OurDomain.local adapter Team1 - VLAN 204 - Management.
    Found duplicate physical address 3C-4A-92-DE-1E-74 on node Cluster211.OurDomain.local adapter VC-Team - VLAN 212 - 2012HB and node Cluster211.OurDomain.local adapter VC-Team - VLAN 204 - Management.
    Found duplicate physical address 68-B5-99-C0-3D-50 on node Cluster212.OurDomain.local adapter Team1 - VLAN 212 - 2012HB and node Cluster212.OurDomain.local adapter Team1 - VLAN 204 - Management.
    Thanks!
    Dan

    Hi Dan,
    "It turns out that both hosts had the same default MAC address ranges for their virtual switches. Since the host vNICs were attached to the virtual switch on each host they received the first couple of MAC addresses from the switches.
    For details please refer to following link:
    http://www.jefflafr.com/blog/4/19/2013/conflicting-mac-addresses-when-building-a-hyper-v-cluster-with-converged-networking
    Hope this helps
    Best Regards
    Elton Ji

  • Using NIC Teaming and a virtual switch for Windows Server 2012 host networking and Hyper-V.

    Using NIC Teaming and a virtual switch for Windows Server 2012 host networking:
    http://www.youtube.com/watch?v=8mOuoIWzmdE
    Hi, thanks for reading. Now, I may well have my terminology incorrect here, so I will try to explain as best I can; apologies from the start. It's a bit of both Hyper-V and Server 2012 R2.
    I am setting up a lab with Server 2012 R2. I have several physical network cards that I have teamed, called "HostSwitchTeam", and from those I have made several virtual network adapters, such as the examples below.
    New-VMSwitch "MgmtSwitch" -MinimumBandwidthMode weight -NetAdaptername "HostSwitchTeam" -AllowManagement $false
    Add-VMNetworkAdapter -ManagementOS -Name "Vswitch" -SwitchName "MgmtSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "MgmtSwitch"
    When I install Hyper-V and it comes to adding a virtual switch during installation, it only shows the individual physical network cards and the HostSwitchTeam for selection. Once installed, it shows the Microsoft Network Multiplexor Driver as the only option.
    Is this correct, or how does one use the vSwitch made above and incorporate it into Hyper-V so that a weight can be put against it?
    Still trying to get my head around vSwitches, VMNetworkAdapters etc., so I'm somewhat confused as to the way forward at this time; I may have missed the plot altogether!
    Any help would be much appreciated.
    Paul
    Paul Edwards

    Hi P.J.E,
    >> I have teams, so a bit confused as to the adapter bindings and whether the teams need to be added or just the vEthernet NICs?
    NIC 1,2 -> HostVMSwitchTeam
    NIC 3,4,5 -> HostMgmtSwitchTeam
    >> The adapter binding settings are:
    HostMgmtSwitchTeam: V-Curric, NIC 3, NIC 4, NIC 5, V-Livemigration
    HostVMSwitch: NIC 1, NIC 2, V-iSCSI, V-HeartBeat
    Based on my understanding of the description, "HostMgmtSwitchTeam" and "HostVMSwitch" are teamed NICs. You can think of them as two physical NICs (do not use NIC 1,2,3,4,5 any more; there are just the two NICs "HostMgmtSwitchTeam" and "HostVMSwitch"). V-Curric, V-Livemigration, V-iSCSI and V-HeartBeat are just vNICs of the host (you can change their names and then check whether the virtual switch name changes).
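    On the weight question from the original post: once a vNIC hangs off a vSwitch created with -MinimumBandwidthMode Weight, a relative weight can be set per vNIC; a sketch (the vNIC name follows the examples above):
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 30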
    Best Regards
    Elton Ji

  • ProLiant DL380 Gen9 - NIC Teaming

    Hi community,
    Since one week ago we have had new DL380 Gen9 servers. Now we want to install them as Windows 2008 R2 servers, but we cannot create a NIC team.
    If I install the HP NCU (Network Configuration Utility), cp023339.exe version 10.90, I get:
    "The software will not be installed on this system because the required hardware is not present in the system or the software/firmware doesn't apply to the system"
    I updated the driver / firmware of the NIC and searched for any newer HP Network Configuration Tools; nothing.
    NIC: HP Ethernet 1Gb 4-port 331i Adapter
    Can someone help?
    Many thanks
    /Hugo

    Hi:
    You may also want to post your question on the HP Business Support Forum -- DL Servers section.
    http://h30499.www3.hp.com/t5/ProLiant-Servers-ML-DL-SL/bd-p/itrc-264#.VHM-jHktC9I

  • Hyper-V NIC Teaming (reserving a NIC for the host OS)

    While setting up NIC teaming on my host (Server 2012 R2), the OS recommends leaving one NIC for host management (access). Is this best practice? It seems like a waste of a NIC, as the host would hardly ever be accessed after the initial setup.
    I have 4 NICs in total. What is the best practice in this situation?

    Depending on whether it is a single, standalone host or you are building a cluster, you need certain networks on your Hyper-V host; at the very least, one connection for the host to do management.
    So, in the case of a single node with local disks, you would create a team with the 4 NICs and create a Hyper-V switch with the option checked for creating the management OS adapter, which is a so-called vNIC on that vSwitch, and then configure that vNIC with the needed IP settings etc., as sketched below.
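    A minimal sketch of that single-node layout (adapter, team and switch names are illustrative):
    New-NetLbfoTeam -Name HostTeam -TeamMembers NIC1,NIC2,NIC3,NIC4 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name HostSwitch -NetAdapterName HostTeam -AllowManagementOS $true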
    If you plan a cluster, and also iSCSI/SMB for storage access, take a look here:
    http://www.thomasmaurer.ch/2012/07/windows-server-2012-hyper-v-converged-fabric/
    There you will find a few possible ways of teaming, the switch settings, and all the steps needed for a fully converged setup via PowerShell.
    If you share more information on your setup, we can give more details.

  • Hyper-V vSwitch on a NIC-Team (Server 2012 R2) has limited connectivity

    Hi,
    I have two servers, both running Server 2012 R2. One of these servers is serving as a DC and is working fine. The other is supposed to be a Hyper-V host, but I cannot get networking working on this system. Both systems have the Hyper-V role installed. Both have 2 NICs configured as a team. The team is configured as switch independent, Hyper-V Port, none (all adapters active). A Hyper-V switch using this team as an external network is configured and is shared with the host operating system. This should work fine, and it does on one of the servers. On the other, I get a warning sign stating limited connectivity: the server seems to send packets but receives none. Deleting the switch and simply using the team for network access works fine, so it has to be related to the switch. But I do not know what's wrong with it. Any ideas?
    Regards,
    Oliver

    Hi ogerlach_isw,
    Generally, please don't install other roles on a Hyper-V server. If your DC is using NIC teaming, please break the teaming and then monitor this issue again.
    More information:
    Everything you want to know about Network Teaming in Windows Server 2012
    http://blogs.msdn.com/b/virtual_pc_guy/archive/2012/12/07/everything-you-want-to-know-about-network-teaming-in-windows-server-2012.aspx
    I'm glad to be of help to you!

  • NIC Teaming for a Hyper-V server

    I have installed Windows Server 2012 R2 on a server with two network adapters, and I have given static IP addresses to both NICs: LAN1 192.168.0.100 and LAN2 192.168.0.101. After I enabled NIC teaming, the server added one more adapter, called "Network Adapter Multiplexor", and the above-mentioned IP addresses stopped responding to ping or any other requests. I then gave the Multiplexor the IP address 192.168.0.102, and it started working.
    So my question: do I need to give IP addresses to LAN1 and LAN2, or can I just create the team and give an IP address to the Multiplexor?
    Also, if I install the Hyper-V role on it, will that give me failover for this machine?
    Akshay Pate

    Hello Akshay,
    In brief, after creating the teaming adapter (Multiplexor), you'll use its address for all future networking purposes; the member NICs no longer need their own IP addresses.
    Regarding the lack of ping, I had the same "issue", and it seems to be blocked by the Microsoft code itself; I still couldn't find how to allow it.
    When digging into W2k12 (& R2) NIC teaming, these two pages were very explanatory and useful:
    Geek of All Trades: The Availability Answer (by Greg Shields)
    Windows Server 2012 Hyper-V 3.0 network virtualization (if you need more technical detail)
    Hope it helps!

  • NIC teaming and Hyper-V switch recommendations in a cluster

    HI,
    We've recently purchased four HP Gen8 servers, with a total of ten NICs each, to be used in a Hyper-V 2012 R2 cluster.
    These will be connecting to iSCSI storage, so I'll use two of the NICs for the iSCSI storage connection.
    I'm then deciding between two options:
    1. Create one NIC team and one extensible switch, and create vNICs for Management, Live Migration and CSV\Cluster, with QoS to manage all this traffic. Then connect my VMs to the same switch.
    2. Create two NIC teams, with four adapters in each. Use one team just for the Management, Live Migration and CSV\Cluster vNICs, with QoS to manage all this traffic. The other team will then be dedicated just to my VMs.
    Is there any benefit to isolating the VMs on their own switch?
    Would having two teams allow more flexibility with the teaming configurations I could use, such as using Switch Independent\Hyper-V Port mode for the VM team? (I do need to read up on the teaming modes a little more.)
    Thanks,

    I'm not teaming the iSCSI adapters; those will be configured with MPIO.
    What I want to know concerns option 1: create one NIC team and one extensible switch, create vNICs for Management, Live Migration and CSV\Cluster, with QoS to manage all this traffic, and then connect my VMs to the same switch.
    http://blogs.technet.com/b/cedward/archive/2014/02/22/hyper-v-2012-r2-network-architectures-series-part-3-of-7-converged-networks-managed-by-scvmm-and-powershell.aspx
    What are the disadvantages of this configuration?
    Should RSS be disabled on the NICs in this configuration, with DVMQ left enabled? After reading through this post, I think I'll need to do this; however, I'd like to understand it a little more.
    I also have the option of adding an additional two 10GbE NICs. This would mean I could create another team and Hyper-V switch on top of them, and then dedicate this to my VMs, leaving the other team for CSV\Management and Live Migration. How does this option affect the use of RSS and DVMQ?
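    For reference, host-side RSS and VMQ are toggled per physical NIC; a sketch (the adapter name is illustrative; the usual pattern is VMQ on pNICs bound to a VM switch and RSS on pNICs the host uses directly):
    Disable-NetAdapterRss -Name "TeamNIC1"
    Enable-NetAdapterVmq -Name "TeamNIC1"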

  • Hyper-V 2012 R2 NIC team breakdown after Broadcom driver upgrade

    Hi Team,
    I am using a blade server with two Broadcom BCM57800 NetXtreme 10G ports. Both network cards are configured in one team, switch independent with Hyper-V Port, using Windows native teaming.
    I upgraded the firmware and driver of the network card to the latest level, and after that the Hyper-V team stopped responding.
    I am using a single virtual switch with one vNIC so Hyper-V can talk to the management LAN on VLAN X.
    As a workaround, I broke the team and configured one NIC for the management IP and the other for VDI traffic (virtual switch).
    Please let me know if anyone has faced a similar issue and was able to resolve it.
    Thanks
    Ravi

    Hi Ravi,
    I would like to check whether the issue has been resolved.
    Best Regards
    Elton Ji

  • Server 2012 R2 Crashes with NIC Team

    Server 2012 R2 Core configured for Hyper-V. Using 2-port 10GbE Brocades, we want to use NIC teaming for guest traffic. Create the team... seems fine. Create the virtual switch in Hyper-V and assign it to the NIC team... seems fine. Create a VM and assign its network card to the virtual switch... still doing okay. Power on the VM... POOF! The host BSODs. If I remove the switch from the VM, I can run the VM from the console, install the OS, etc., but as soon as I reassign the virtual NIC to the switch, POOF! Bye-bye again. Any ideas here?
    Thank you in advance!
    EDIT: A little more info... Two 2-port Brocades and two Nexus 5Ks. Running one port of NIC1 to one 5K, and one port of NIC2 to the other 5K. The NIC team is using Switch Independent mode, Address Hash load balancing, and all adapters active.

    Hi,
    Have you updated the NIC driver to the latest version?
    If the issue persists after updating the driver, we can use WinDbg to analyze a crash dump.
    If the NIC driver causes the BSOD, please consult the NIC manufacturer about this issue.
    For detailed information about how to analyze a crash dump, please refer to the link below:
    http://blogs.technet.com/b/juanand/archive/2011/03/20/analyzing-a-crash-dump-aka-bsod.aspx
    Best Regards.
    Steven Lee
    TechNet Community Support

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Center 2012 R2. However, since putting it into production, we are now seeing problems with Virtual Machine Queues. These manifest themselves as either very high latency inside virtual machines (we're talking 200 - 400 ms round trip times), packet loss, or complete connectivity loss for VMs. Not all VMs are affected, but the problem does manifest itself on all hosts. I am aware of these issues having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012, and a decision was taken to bring it up to speed with R2. This was due to a number of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: two NICs assigned to a host management team, whilst the other two NICs on the same adapter formed a Live Migration / cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the management OS. Latest drivers and firmware installed. The converged fabric team here was configured as LACP / Address Hash (Min Queues mode), with each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM switch uplink and 4 for iSCSI MPIO, all with the latest drivers and firmware. The VM switch team spans both physical NICs to provide some NIC-level fault tolerance, whilst the remaining 4 NICs for iSCSI MPIO are likewise balanced across the two cards for the same reason.
    The initial driver for upgrading was that we were once again seeing VMQ issues in the old converged fabric design. The two vNICs in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible to the same designated NICs in each of the VM hosts).
    In that setup, an issue similar to our present one was being experienced. Once again, the converged fabric vNICs in the host OS would, on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration, and would then affect the connectivity of both vNICs. This would cause packet loss / connectivity loss for both the Live Migration and cluster heartbeat vNICs, which in turn would trigger all sorts of horrid goings-on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously, disabling VMQ is something that we really don't want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the quad port Broadcom 5720 daughter cards in the hosts with new Intel i350 QP daughter cards to keep the NICs identical across the board. The cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration, utilising the same two NICs as in the old design, in two isolated untagged port VLANs. This part of the redesign is now working very well (Live Migrations now complete much faster, I hasten to add!).
    However, the same VMQ issues that we witnessed previously have now arisen on the production VM switch, which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC directly below it. The remaining 2 ports from each card function as iSCSI MPIO interfaces to the SAN. We did this to try to achieve NIC-level fault tolerance. The latest firmware and drivers have been installed for all hardware (including the NICs), fresh from the latest Dell Server Updates DVD (v14.10).
    In each host, the above 4 VM switch NICs are formed into a switch independent, Dynamic team (Sum of Queues mode). Each physical NIC has RSS disabled and VMQ enabled, and the team multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap, and as the host processors have Hyper-Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                    InterfaceDescription                 Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
    VM_SWITCH_ETH01         Intel(R) Gigabit 4P I350-t A...#8    True     0:10              1              7
    VM_SWITCH_ETH03         Intel(R) Gigabit 4P I350-t A...#7    True     0:14              1              7
    VM_SWITCH_ETH02         Intel(R) Gigabit 4P I350-t Ada...    True     0:12              1              7
    VM_SWITCH_ETH04         Intel(R) Gigabit 4P I350-t A...#2    True     0:16              1              7
    Production VM Switch    Microsoft Network Adapter Mult...    True     0:0                              28
    Load is hardly an issue on these NICs, and a single core seemed to suffice in the old design, so this was carried forward into the new one.
    The loss of connectivity / high latency (200 - 400 ms as before) only seems to arise when a VM is moved via Live Migration from host to host. If I set up a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a queue. I can then move the VM back and forth between hosts, and the problem may or may not occur again; it is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process, however, longer than I have seen in the past (usually only a ping or two are lost, but we are now seeing 5 or more before VM network connectivity is restored on the destination host, which is enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and the VM switch team multiplex adapter on one of the hosts as a test, things behave as expected: a migration completes within the time of a standard TCP timeout.
    VMQ looks to be working: if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that queues are being allocated to VM NICs accordingly. I can also see VM NICs appearing in Hyper-V Manager with "VMQ Active".
    It goes without saying that we really don't want to disable VMQ; however, given the nature of our client's business, we really cannot afford for these issues to crop up. If I can't find a resolution here, I will be left with no choice, as ironically we see fewer issues with VMQ disabled than with it enabled.
    I hope this is enough information to go on; if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again, and everything appears to have been configured correctly, but I am struggling with this one.
    Many thanks
    Matt

    Hi Gleb,
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear that the two GUIDs in the second and third events correlate with two of the NICs in the VM switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (which I can't upload), the NICs and teams are configured as follows:
    Production VM Switch Team (x4 interfaces): Intel i350 quad port NICs. As above, the team itself is balanced across physical cards (two ports from each card). An external SCVMM logical switch is uplinked to this team, which serves as the main VM switch for all production virtual machines. Team mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team, as well as on the multiplex adapter itself. The VMQ configuration is as follows:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    VM_SWITCH_ETH01      10            1          VMQ
    VM_SWITCH_ETH02      12            1          VMQ
    VM_SWITCH_ETH03      14            1          VMQ
    VM_SWITCH_ETH04      16            1          VMQ
    SMB Fabric (x2 interfaces): Intel i350 quad port on-board daughter card. As above, these two NICs are in separate, VLAN-isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV redirect / cluster heartbeat data. These NICs are not teamed, and VMQ is disabled on both. Here is the RSS configuration we have implemented for these interfaces:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    SMB_FABRIC_ETH01     18            2          RSS
    SMB_FABRIC_ETH02     18            2          RSS
    iSCSI SAN (x4 interfaces): Intel i350 quad port NICs. Once again, no teaming is required here, as these serve as our iSCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards, as per the VM switch team above. No VMQ on these NICs; RSS is enabled as follows:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    ISCSI_SAN_ETH01      2             2          RSS
    ISCSI_SAN_ETH02      6             2          RSS
    ISCSI_SAN_ETH03      2             2          RSS
    ISCSI_SAN_ETH04      6             2          RSS
    Management Team (x2 interfaces): the second two interfaces of the Intel i350 quad port on-board daughter card. This serves as the management uplink to the host. As there are some management workloads hosted in this cluster, a VM switch is connected to this team, hence a vNIC is exposed to the host OS in order to manage the parent partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM switch connected to this team, the NICs are configured for VMQ, and thus RSS has been disabled:
    Interface Name       BaseVMQProc   MaxProcs   VMQ / RSS
    MAN_SWITCH_ETH01     22            1          VMQ
    MAN_SWITCH_ETH02     22            1          VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where practical we have tried to balance NICs over all available cores.
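    For reference, per-NIC assignments like those in the tables above are applied with commands of this shape (a sketch reusing values from the tables):
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
    Set-NetAdapterRss -Name "SMB_FABRIC_ETH01" -BaseProcessorNumber 18 -MaxProcessors 2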
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt

  • PS Script to Automate NIC Teaming and Configure Static IP Address based off an Existing Physical NIC

    # Retrieve IP address and default gateway from the statically assigned NIC and store them in variables.
    $wmi = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
        Where-Object { $_.IPAddress -match '192\.' }
    $IPAddress = $wmi.IpAddress[0]
    $DefaultGateway = $wmi.DefaultIPGateway[0]
    # Create LBFO TEAM1 by binding the "Ethernet" and "Ethernet 2" NICs.
    New-NetLbfoTeam -Name TEAM1 -TeamMembers "Ethernet","Ethernet 2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts -Confirm:$false
    # 20-second pause to allow TEAM1 to form and come online.
    Start-Sleep -s 20
    # Configure static IP address, subnet, default gateway and DNS server IPs on the newly formed TEAM1 interface.
    New-NetIPAddress -InterfaceAlias "TEAM1" -IPAddress $IPAddress -PrefixLength 24 -DefaultGateway $DefaultGateway
    Set-DnsClientServerAddress -InterfaceAlias "TEAM1" -ServerAddresses xx.xx.xx.xx, xx.xx.xx.xx
    Howdy All!
    I was recently presented with the challenge of automating the creation and configuration of a NIC team on Server 2012 and Server 2012 R2.
    Condition: the new team will use the static IP address of an existing NIC (one of the two physical NICs to be used in the team). Each server has more than one NIC. Our environment is pretty static, in the sense that all our servers use the same subnet mask and DNS server IP addresses, so I really only had to worry about the static IP address and the default gateway.
    1. Retrieve the NIC IP address and default gateway.
    I needed a way to query only the NIC with the correct IP address settings, and to create the required variables based on that query. For that, I leveraged WMI. For example purposes, let's say the servers in your environment start with 192. and you know the source physical NIC with the desired network configuration follows this scheme. This will retrieve the network configuration information only for the NIC whose IP address starts with "192."; feel free to replace 192 with whatever octet you use. You can narrow the criteria by filling out additional octets, for example:
    Where-Object { $_.IPAddress -match '192\.168\.' }
    This would match only NICs with IP addresses of the form 192.168.xx.xx.
    $wmi = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = True" |
        Where-Object { $_.IPAddress -match '192\.' }
    $IPAddress = $wmi.IpAddress[0]
    $DefaultGateway = $wmi.DefaultIPGateway[0]
    2. Create LBFO TEAM1.
    This is a straightforward command based on New-NetLbfoTeam. I used "-Confirm:$false" to suppress prompts. Our NICs are named "Ethernet" and "Ethernet 2" by default, so I was able to keep -TeamMembers as a static entry. I also added a Start-Sleep command to give the new team time to build and come online before moving on to the network configuration.
    New-NetLbfoTeam -Name TEAM1 -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
    # 20-second pause to allow TEAM1 to form and come online.
    Start-Sleep -s 20
    3. Configure the network settings for interface "TEAM1".
    Now it's time to pipe the previous physical NIC's configuration to the newly built team. Here is where I leverage the variables I created earlier. Two separate commands are used to fully configure the network settings:
    New-NetIPAddress: here is where you assign the IP address, subnet mask and default gateway.
    Set-DnsClientServerAddress: here is where you assign any DNS servers. In my case, I have 2; just replace the x's with your desired DNS IP addresses.
    New-NetIPAddress -InterfaceAlias "TEAM1" -IPAddress $IPAddress -PrefixLength 24 -DefaultGateway $DefaultGateway
    Set-DnsClientServerAddress -InterfaceAlias "TEAM1" -ServerAddresses xx.xx.xx.xx, xx.xx.xx.xx
    Hope this helps and cheers!

    I've done this before, and because of that I've run into something you may find valuable.
    Namely, two challenges:
    1. There are "n" adapters in the server, and adapters with multiple ports should be labeled in order.
    2. MS only supports making an LBFO team out of "like speed" adapters.
    To solve both of these challenges, I standardized the adapter names based on link speed before creating the team. Pretty simple really! First I created two variables to store the 10G and 1G adapters. I told it to skip any Hyper-V ports for obvious reasons, and sorted by MAC address, as servers tend to enumerate their onboard NICs sequentially by MAC:
    $All10GAdapters = (Get-NetAdapter | Where-Object { $_.LinkSpeed -eq "10 Gbps" -and $_.InterfaceDescription -notmatch 'Hyper-V' } | Sort-Object MacAddress)
    $All1GAdapters = (Get-NetAdapter | Where-Object { $_.LinkSpeed -eq "1 Gbps" -and $_.InterfaceDescription -notmatch 'Hyper-V' } | Sort-Object MacAddress)
    Sweet... now that I have my adapters, I can rename them into something standardized:
    $i = 0
    $All10GAdapters | ForEach-Object {
        Rename-NetAdapter -Name $_.Name -NewName "Ethernet_10g_$i"
        $i++
    }
    $i = 0
    $All1GAdapters | ForEach-Object {
        Rename-NetAdapter -Name $_.Name -NewName "Ethernet_1g_$i"
        $i++
    }
    Once that's done, I can return to your team command but use a wildcard, since I know the standardized names:
    New-NetLbfoTeam -Name TEAM1G -TeamMembers Ethernet_1g_* -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
    New-NetLbfoTeam -Name TEAM10G -TeamMembers Ethernet_10g_* -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
