VMQ issues with NIC Teaming

Hi All
Apologies if this is a long one but I thought the more information I can provide the better.
We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Center 2012 R2, however since putting it into production we are now seeing problems with Virtual Machine Queues (VMQ). These manifest as either very high latency inside virtual machines (we're talking 200 – 400 ms round-trip times), packet loss, or complete connectivity loss for VMs. Not all VMs are affected, however the problem does manifest itself on all hosts. I am aware of these issues having cropped up in the past with Broadcom NICs.
I'll give you a little bit of background into the problem...
First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PowerEdge R720 VM hosts). This environment was originally based on Server 2012 and a decision was taken to bring it up to R2. This was due to a number of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V Cluster.
Prior to the redesign, each VM host had 12 NICs installed:
Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC-level fault tolerance, whilst the remaining 4 NICs for iSCSI MPIO are also balanced across the two cards for the same reason.
The initial driver for upgrading was that we were once again seeing issues with VMQ in the old design with the converged fabric design. The two vNics in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
to the same designated NICs in each of the VM hosts).
In this setup, a similar issue was being experienced to our present one. Once again, the Converged Fabric vNICs in the host OS would, on occasion, either lose connectivity or exhibit very high round-trip times and packet loss. This seemed to correlate with a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration, and would then affect both vNICs' connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs, which in turn would trigger all sorts of horrid goings-on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
The Production VM Switch is configured as follows:
Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
In each host, the above 4 VM Switch NICs are formed into a Switch Independent, Dynamic team (Sum of Queues mode). Each physical NIC has RSS disabled and VMQ enabled, and the team multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap, and as the host processors have Hyper-Threading enabled, only physical cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
Name                    InterfaceDescription                Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
VM_SWITCH_ETH01         Intel(R) Gigabit 4P I350-t A...#8   True     0:10              1              7
VM_SWITCH_ETH03         Intel(R) Gigabit 4P I350-t A...#7   True     0:14              1              7
VM_SWITCH_ETH02         Intel(R) Gigabit 4P I350-t Ada...   True     0:12              1              7
VM_SWITCH_ETH04         Intel(R) Gigabit 4P I350-t A...#2   True     0:16              1              7
Production VM Switch    Microsoft Network Adapter Mult...   True     0:0                              28
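For anyone wanting to reproduce this, the per-NIC assignment above can be applied with commands along these lines (a sketch based on the values in the table; interface names and processor numbers are from our environment, so adjust to suit):

```powershell
# Pin each VM Switch team member to a single, non-overlapping core
# (Sum of Queues mode, Switch Independent team). With Hyper-Threading
# enabled, even-numbered logical processors correspond to physical cores.
Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
Set-NetAdapterVmq -Name "VM_SWITCH_ETH03" -BaseProcessorNumber 14 -MaxProcessors 1
Set-NetAdapterVmq -Name "VM_SWITCH_ETH04" -BaseProcessorNumber 16 -MaxProcessors 1

# RSS is disabled on the same adapters, as described above
Disable-NetAdapterRss -Name "VM_SWITCH_ETH0*"
```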
Load is hardly an issue on these NICs and a single core seemed to suffice in the old design, so this was carried forward into the new one.
The loss of connectivity / high latency (200 – 400 ms, as before) only seems to arise when a VM is moved via Live Migration from host to host. If I set up a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a queue. I can then move the VM back and forth between hosts and the problem may or may not occur again; it is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process, however, longer than I have seen in the past (usually only a ping or two are lost, however we are now seeing 5 or more before VM network connectivity is restored on the destination host, which is enough to cause a disruption to the workload).
If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
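For reference, those checks were along these lines (a sketch; property names follow the in-box cmdlets):

```powershell
# Show which VM NICs have queues allocated on each team member
Get-NetAdapterVmqQueue |
    Format-Table Name, QueueID, MacAddress, VlanID, VmFriendlyName

# Confirm VMQ is enabled and check the processor assignments
Get-NetAdapterVmq |
    Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors
```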
It goes without saying that we really don’t want to disable VMQ, however given the nature of our client's business, we really cannot afford for these issues to crop up. If I can’t find a resolution here, I will be left with no choice as, ironically, we see fewer issues with VMQ disabled than with it enabled.
I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
Many thanks
Matt

Hi Gleb
I can't seem to attach any images / links until my account has been verified.
There are a couple of entries in the ndisplatform/Operational log.
Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
And
Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
And
Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
It would appear as though the two GUIDs in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
How can I find out which tNIC correlates with "0xffffe00129b79010"?
Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
Interface Name        BaseVMQProc   MaxProcs   VMQ / RSS
VM_SWITCH_ETH01       10            1          VMQ
VM_SWITCH_ETH02       12            1          VMQ
VM_SWITCH_ETH03       14            1          VMQ
VM_SWITCH_ETH04       16            1          VMQ
SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
Interface Name        BaseVMQProc   MaxProcs   VMQ / RSS
SMB_FABRIC_ETH01      18            2          RSS
SMB_FABRIC_ETH02      18            2          RSS
iSCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our iSCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per the VM Switch team above. No VMQ on these NICs, however RSS is enabled as follows:
Interface Name        BaseVMQProc   MaxProcs   VMQ / RSS
ISCSI_SAN_ETH01       2             2          RSS
ISCSI_SAN_ETH02       6             2          RSS
ISCSI_SAN_ETH03       2             2          RSS
ISCSI_SAN_ETH04       6             2          RSS
Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
are configured for VMQ, thus RSS has been disabled:
Interface Name        BaseVMQProc   MaxProcs   VMQ / RSS
MAN_SWITCH_ETH01      22            1          VMQ
MAN_SWITCH_ETH02      22            1          VMQ
We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where possible we have tried to balance NICs over all available cores.
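For completeness, the RSS assignments above were applied with commands along these lines (a sketch using the values from the tables; names and processor numbers are specific to our hosts):

```powershell
# RSS on the SMB Multichannel and iSCSI NICs (VMQ disabled on these)
Set-NetAdapterRss -Name "SMB_FABRIC_ETH01" -BaseProcessorNumber 18 -MaxProcessors 2
Set-NetAdapterRss -Name "SMB_FABRIC_ETH02" -BaseProcessorNumber 18 -MaxProcessors 2
Set-NetAdapterRss -Name "ISCSI_SAN_ETH01"  -BaseProcessorNumber 2  -MaxProcessors 2
Set-NetAdapterRss -Name "ISCSI_SAN_ETH02"  -BaseProcessorNumber 6  -MaxProcessors 2
Set-NetAdapterRss -Name "ISCSI_SAN_ETH03"  -BaseProcessorNumber 2  -MaxProcessors 2
Set-NetAdapterRss -Name "ISCSI_SAN_ETH04"  -BaseProcessorNumber 6  -MaxProcessors 2

# VMQ is not wanted on these interfaces
Disable-NetAdapterVmq -Name "SMB_FABRIC_ETH0*", "ISCSI_SAN_ETH0*"
```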
Hope this helps.
Any more info required, please ask.
Kind Regards
Matt

Similar Messages

  • WAAS issue with NIC teaming

    Hi,
    Can someone tell me what has the NIC teaming effect on WAAS.
    I was preparing a demo for one of my client but before this they were using another popular WAAS product. According to my client, they had a big network issue with NIC teaming, i.e one ip and 2 mac addresses with server loadsharing.
    scenario.
    Before the demo i just wanted to confirm that WAAS has nothing to do with the above mentioned problem...can anybody confirm...

Hmm... Still not quite sure what you're saying. It sounds like, by coincidence, the other accelerator box got the same IP as one of their servers. They should have been able to fix that quite simply, not to mention test for it by doing the equivalent of "show int" on the box before plugging it in.
    Or possibly they had some type of proxy arp configured wrong?
    Either way we're running WAAS boxes on our networks that have teaming of NIC's via HP's teaming utility using "Transmit load balancing" on the servers. The WAAS boxes are configured for ether-channeling their NIC's and using WCCP for traffic redirection.
    Actually, if you did them using WCCP rather than inline, there definitely shouldn't be a MAC conflict since the WAAS boxes will be on a different subnet at that point. No chance of a problem like that with Layer-3 separation...

  • SR-IOV Uplink Port with NIC Teaming

    Hello,
    I'm trying to setup my uplink port profile and logical switch with NIC Teaming and SR-IOV support. In Hyper-V this was easy, just had to create the NIC Team (which I configured as Dynamic & LACP) then check the box on the virtual switch.
In VMM, it does not seem to want to enable NIC Teams with SR-IOV:
    Can anyone advise? I'm not using any virtual ports. I just want all my VMs to connect to the physical switch though the LACP NIC Team, something which I thought would be simple.
I have a plan B - don't use Microsoft's NIC Teaming and instead use the Intel technology to present all the adapters as one to the host. I'd rather not do this.
    Thanks
    MrGoodBytes

    Hi Sir,
    "SR-IOV does have certain limitations. If you configure port access control lists (ACLs), extensions or policies in the virtual switch, SR-IOV is disabled because its traffic totally bypasses the switch.
    You can’t team two SR-IOV network cards in the host. You can, however, take two physical SR-IOV NICs in the host, create separate virtual switches and team two virtual network cards within a VM. "
    There is really a limitation when using NIC teaming :
    http://technet.microsoft.com/en-us/magazine/dn235778.aspx
    Best Regards,
    Elton Ji 
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • We have detected an issue with your team

    When I try to Manage My Team I get the following error message:
    We have detected an issue with your team Axis Technology, LLC. Please contact customer support to resolve this issue.
    I tried contacting Support via chat and I got no response

Yes, you are correct, support cannot be contacted with this link. I have sent a private message, please respond to it.

  • Server 2012 R2 Crashes with NIC Team

    Server 2012 R2 Core configured for Hyper-V. Using 2-port 10Gbe Brocades, we want to use NIC teaming for guest traffic. Create the team... seems fine. Create the virtual switch in Hyper-V, and assign it to the NIC team... seems fine. Create
    a VM, assign the network card to the Virtual switch... still doing okay. Power on the VM... POOF! The host BSOD's. If I remove the switch from the VM, I can run the VM from the console, install the OS, etc... but as soon as I reassign the virtual
    NIC to the switch, POOF! Bye-bye again. Any ideas here?
    Thank you in advance!
    EDIT: A little more info... Two 2-port Brocades and two Nexus 5k's. Running one port on NIC1 to one 5k, and one port on NIC2 to the other 5k. NIC team is using Switch Independent Mode, Address Hash load balancing, and all adapters active.

    Hi,
    Have you updated the NIC driver to latest?
    If issue persists after updating the driver, we can use WinDbg to analyze a crash dump.
    If the NIC driver cause the BSOD, please consult the NIC manufacture about this issue.
    For detailed information about how to analyze a crash dump, please refer to the link below,
    http://blogs.technet.com/b/juanand/archive/2011/03/20/analyzing-a-crash-dump-aka-bsod.aspx
    Best Regards.
    Steven Lee
    TechNet Community Support

  • IBM MCS Server RUnning Unity with NIC Teaming

    All,
has anyone ever run NIC teaming on an IBM MCS Server with Unity before? At question is the fact that many servers create a virtual MAC address that is different from either of the actual MACs when you team NICs. If this is the case on the IBM servers then we may need to request an updated license. The servers are branded as Cisco MCS-7835-I1
    units.
    Thanks in advance. All replies rated!

    Hi
    According to this doc: http://www.cisco.com/en/US/prod/collateral/voicesw/ps6790/ps5748/ps378/product_solution_overview0900aecd80091615.html
The 7825-I4 is an IBM x3250-M2. You can use that model number to search the IBM drivers page for the things you need:
    http://www-933.ibm.com/support/fixcentral/systemx/selectFixes
    I think the NIC teaming comes with the Network Drivers.
    Regards
    Aaron
    Please rate helpful posts...

  • Issues with applying Team Template to a Project

    Hi All, thanks in advance for your help,
    When I attach a Team Template to the Project, though no error is showing (except a confirmation page stating that a workflow process is running), i am not able to see it in the Project home notifications nor in the Scheduled People under Resources.
    Appreciate any help to resolve this
    Regards,
    MM

    Have you run the PA: Apply Team Template workflow process? This will trigger the application and notification, not actually performing the step you noted below.

  • NIC teaming - Server 2008 R2 DC combined with other Software

    Hello!
    I've been searching all morning for an answer of what we have in mind to do at work....
    We've got a server installed with Windows Server 2008 R2 and have 4 NICs on it. We want to make it a DC (with DNS, DHCP and print services) and also want to install our Backup Solution (from Veeam) for our VMs. This server will be the only physical Microsoft
    server next to our 3 ESX servers at the end.
    I read here (http://markparris.co.uk/2010/02/09/top-tipactive-directory-domain-controllers-and-teamed-network-cards/) that there is a statement that a DC with NIC teaming is only using the FO (Fail-Over) feature of the teaming. Since there is also the backup
solution on this server, it would be great also to use the LB (Load-Balancing) feature. My question is: when I activate NIC teaming and install the DC roles, do the roles just use the FO feature and neglect the LB feature, or do they enable/disable those modes/features of NIC teaming? It would be nice if the backup solution could use LB for bigger bandwidth for backups and restores, and I wouldn't really care about FO for the DC role.
    cheers
    Ivo

    Hi,
I think the issue is related to the third party NIC teaming solution. You can refer to the third party manufacturer.
Here I should remind you of something else: a DC with multiple NICs will cause many problems. So I would recommend you run a dedicated
Hyper-V server and promote a DC on one of the virtual machines.
    Hope this helps.

  • Network adapters (NIC) doesn't show up in NIC Teaming manager

We have been running Windows Server 2012 R2 for a couple of months now with no major problems, but recently had to swap out one of the NICs. Unfortunately the interfaces were not removed from the team before the NIC was swapped out, and that appears to have broken something in the Windows NIC Teaming manager - NICs don't appear there anymore, not even the NICs that were not touched.
Network interfaces appear to be connected and there are no issues with drivers (screen shot from device manager), but they don't show up in NIC Teaming. Any idea how to reset the NIC Teaming manager and/or resolve this issue without reinstalling Windows?
    Thanks!

    Thanks for leading me the right direction!
Looking around I found that one of the causes of the "invalid class" error is a corrupted WMI repository. Using this script provided by MS to rebuild/re-register WMI (different case/problem, but WMI is WMI...) resolved the issue with NICs not showing up:
    @echo off
    sc config winmgmt start= disabled
    net stop winmgmt /y
    %systemdrive%
    cd %windir%\system32\wbem
    for /f %%s in ('dir /b *.dll') do regsvr32 /s %%s
    wmiprvse /regserver
    winmgmt /regserver
    sc config winmgmt start= Auto
    net start winmgmt
    for /f %%s in ('dir /s /b *.mof *.mfl') do mofcomp %%s

2012 R2 NIC Teaming and networking

    I have a 4 port NIC, all connected to the same network using DHCP, when I team two cards, they no longer have an IP address. Is that by design?
    Where can I find more information about the virtual networking how to?
    TIA

     
    Yes it is by design.
If individual NICs are configured with IP settings and you then create a NIC team, the individual NICs will lose their IP settings.
You will no longer be able to configure IP settings on the individual NICs that are part of a NIC team; instead you will need to assign the IP address to the NIC team interface.
    To assign ip address to the NIC teams, go to control Panel\ Network and Internet\ Network Connections
    To assign VLANs and manage/update Team interfaces: In the Server Manager select the local server and then in the local server properties section click on Nic Teaming "Enabled" link.
    This will open the NIC teaming window and here you can manage the NIC teams.
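The same can also be done from PowerShell (team name, member names and the address here are examples, not from the poster's setup):

```powershell
# Create the team from two members; any IP settings on the members are discarded
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 1", "Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Assign the address to the team interface instead of the members
New-NetIPAddress -InterfaceAlias "Team1" -IPAddress 192.168.1.10 -PrefixLength 24
```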
    For more information on NIC teams please refer to:
    http://blogs.technet.com/b/keithmayer/archive/2012/11/20/vlan-tricks-with-nic-teaming-in-windows-server-2012.aspx
    Kind Regards Tim (Canberra)

  • NIC teaming and direct access in windows 2012 server core

    Hello All,
    I have installed windows 2012 r2 server core and i want to implement direct access with nic teaming enabled.
    Has anyone tried this kind of setup? Were they successful in it? Moreover can we configure Direct access when we have NIC teaming configured?
    -Ashish

    Hi There - NIC teaming in both core and gui is a standard feature and there is no reason (and I have used it successfully) why you cannot do so. As always make sure you look at TCP Offload as per UAG / TMG Days to ensure best performance and also Network
    Card Binding Order.
    The link for details is here -
    http://technet.microsoft.com/en-us/library/hh831648.aspx
    Kr
    John Davies

  • Windows 7/8.0/8.1 NIC teaming issue

    Hello,
    I'm having an issue with Teaming network adapters in all recent Windows client OSs.
    I'm using Intel Pro Dual Port or Broadcom NetExtreme II GigaBit adapters with the appropriate drivers/applications from the vendors.
    I am able to set up teaming and fail-over works flawlessly, but the connection will not use the entire advertised bandwidth of 2Gbps. Basically it will use either one port or the other.
    I'm doing the testing with the iperf tool and am communicating with a unix based server.
    I have the following setup:
Dell R210 II server with 2 Broadcom NetExtreme II adapters and a DualPort Intel Pro adapter - CentOS 6.5 installed, bonding configured and working while communicating with other unix based systems.
    Zyxel GS2200-48 switch - Link Aggregation configured and working
    Dell R210 II with Windows 8.1 with Broadcom NetExtreme II cards or Intel Pro dualport cards.
    For the Windows machine I have also tried Windows 7 and Windows 8, also non server type hardware with identical results.
so.. why am I not getting > 1 Gbps throughput on the created team? Load balancing is activated and the team adapter says the connection type is 2 Gbps, yet the same setup with 2 unix machines works flawlessly.
    Am I to understand that Link Aggregation (802.3ad) under Microsoft OS does not support load balancing if connection is only towards one IP?
    To make it clear, I need client version of Windows OS to communicate unix based OS over a higher then 1Gbps bandwidth (as close to 2 Gbps as possible). Without the use of 10 Gbps network adapters.
    Thanks in advance,
    Endre

    As v-yamliu has mentioned, NIC teaming through the operating system is
    only available in Windows Server 2012 and Windows Server 2012 R2. For Windows Client or for previous versions of Windows Server you will need to create the team via the network driver. For Broadcom this is accomplished
    using the Broadcom Advanced Server Program (BASP) as documented here and
    for Intel via Advanced Network Services as documented here.
    If you have configured the team via the drivers, you may need to ensure the driver is properly installed and updated. You may also want to ensure that the adapters are configured for aggregation (802.3ad/802.1ax/LACP), rather than fault tolerance or load
    balancing and that the teaming configuration on the switch matches and is compatible with the server configuration. Also ensure that all of the links are connecting at full duplex as this is a requirement.
    Brandon
    Windows Outreach Team- IT Pro
    The Springboard Series on TechNet

  • Windows Server 2012 R2 - Hyper-V NIC Teaming Issue

    Hi All,
I have a Windows Server 2012 R2 cluster with the Hyper-V role installed. I have an issue with one of my Windows 2012 R2 Hyper-V hosts.
The virtual machine network adapter shows status connected but it stops transmitting data, so the VM using that NIC cannot connect to the external network.
    The virtual machine network adapter using Teamed NIC, with this configuration:
    Teaming Mode : Switch Independent
    Load Balance Algorithm : Hyper-V Port
    NIC Adapter : Broadcom 5720 Quad Port 1Gbps
I am already using the latest NIC driver from Broadcom.
I found a little workaround for this issue by disabling one of the teamed NICs, but it will happen again.
    Anyone have the same issue with me, and any workaround for this issue?
    Please Advise
    Thanks,

    Hi epenx,
    Thanks for the information .
    Best Regards,
    Elton Ji
    We
    are trying to better understand customer views on social support experience, so your participation in this
    interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Windows Server 2012 R2 NIC Teaming and DHCP Issue

    Came across a weird issue today during a server deployment. I was doing a physical server deployment and got Windows installed and was getting ready to connect it to our network. Before connecting the Ethernet cables to the network adapters, I created a
    NIC Team using Windows Server 2012 R2 built-in software with a static IP address (we'll say its 192.168.1.56). Once I plugged in the Ethernet cables, I got network access but was unable to join our domain. At this time, I deleted the NIC team and the two network
    adapters got their own IP addresses issued from DHCP (192.168.1.57 and 192.168.1.58) and at this point I was able to join our domain. I recreated the NIC team and set a new static IP (192.168.1.57) and everything was working great as intended.
My issue is when I went into DHCP I noticed a random entry that was using the IP address I used for the first NIC teaming attempt (192.168.1.56), before I joined it to the domain. I call this a random entry because it is using the last 8 characters of the MAC address as the hostname instead of the server's hostname.
    It seems when I deleted the first NIC team I created (192.168.1.56), a random MAC address Server 2012 R2 generated for the team has remained embedded in the system. The IP address is still pingable even though an ipconfig /all shows the current NIC team
    with the IP 192.168.1.57. There is no IP address of 192.168.1.56 configured on the current server and I have static IPs set yet it is still pingable and registering with DHCP.
    I know this is slightly confusing but I am hoping someone else has encountered this issue and may be able to tell me how to fix this. Simply deleting the DHCP entry does not do the trick, it comes back.

    Hi,
Please confirm you have chosen the right NIC team type. If you’ve previously configured NIC teaming, you’re aware NIC teams usually require the assistance of network-side
protocols. Prior to Windows 2012, using a NIC team on a server also meant enabling protocols like EtherChannel or LACP (also known as 802.1ax or 802.3ad) on the network ports.
    More information:
    NIC teaming configure in Server 2012
    http://technet.microsoft.com/en-us/magazine/jj149029.aspx
    Hope this helps.

  • Connectivity issues when using nic teaming

    Hello!
    There're three host computers in my test lab - Host1, Host2 and Host3.
Host3 has three physical nics, two of them are used for Hyper-V - there's a virtual machine named FW with nic1 = 10.10.0.101 (connected to the production network) and nic2 = 20.1.1.1 - this nic2 is the default gateway for the test network 20.1.1.0/24 (consisting of several virtual machines hosted on Host1/Host2/Host3).
Host1 has three physical nics, two of them are members of the team (VM-TEAM):
    Virtual machine DC (hosted on Host1) uses the VM-TEAM as its network adapter with the ip = 20.1.1.2
    Vm FW and vm DC are connected to the same 5-port 1Gb switch:
    FW is connected by the single utp5 cable, DC - by the two cables.
The problem: when copying any data from DC to FW (or vice versa), the copy process stops after several seconds, then after ~10-60 seconds proceeds, then stops again, etc...
    This problem has never arisen when I was using a SINGLE nic for the Hyper-V, NOT teamed adapters.
    Are there any settings/known issues for the nic teaming that could lead to such problem?
    Thank you in advance,
    Michael

    Hi Michael,
    >>Is it possible to use two or more nics inside a VM or there can be some problems with multihomed VMs?
    Yes , VM can use multi NICs as a physical host .
    I also compared the original screenshot , I found something different (the origianl one doesn't have D-Link/8139 NIC . The second one doesn't show the teamed NIC ) .
    Best Regards,
    Elton JI
