Teaming of pre-configured NICs

In my environment, in some scenarios one NIC has a hardcoded IP address and the other NIC is left at defaults, i.e. it is connected but has no IP configured and there is no DHCP server around. When I team these NICs I end up having to manually redo the IP configuration on the team interface,
which is very annoying. Is there a way to force the Teaming wizard to pick up the configuration from the already configured NIC? My team settings are: Switch Independent, Address Hash, with the non-configured adapter selected as Standby.
yaro

Hi Sir,
Sorry for the inconvenience.
Teaming doesn't have an option to apply a member's IP configuration to the team interface.
Best Regards,
Elton Ji
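
As a workaround, the static IPv4 settings can be captured from the configured member before the team is created and re-applied to the team interface afterwards. A minimal PowerShell sketch, assuming hypothetical adapter names "NIC1" (configured, single static IPv4 address and a default gateway) and "NIC2" (unconfigured standby) and a team named "Team1"; "Address Hash" in the wizard corresponds to the TransportPorts algorithm here:

# Capture the member's current IPv4 address, gateway and DNS servers before teaming wipes them
$ip  = Get-NetIPAddress -InterfaceAlias "NIC1" -AddressFamily IPv4
$gw  = (Get-NetRoute -InterfaceAlias "NIC1" -DestinationPrefix "0.0.0.0/0").NextHop
$dns = (Get-DnsClientServerAddress -InterfaceAlias "NIC1" -AddressFamily IPv4).ServerAddresses

# Create the team (Switch Independent / Address Hash), same as the wizard would
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts -Confirm:$false

# Re-apply the captured configuration to the team interface (its alias matches the team name)
New-NetIPAddress -InterfaceAlias "Team1" -IPAddress $ip.IPAddress -PrefixLength $ip.PrefixLength -DefaultGateway $gw
Set-DnsClientServerAddress -InterfaceAlias "Team1" -ServerAddresses $dns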

Similar Messages

  • Configuring NIC teaming

    Hello, everyone. I'm hoping this thread is in the right place.
    I've been doing some research trying to understand logical switches/port profiles/etc. in VMM and have been having a hard time. Most of the articles I've found either don't go into enough detail or seem to lack proper examples. My goal is to enable NIC teaming
    on my cluster hosts.
    Currently, each cluster node has 1 standard switch per physical NIC. One of these NICs is trunked, and the others are not. Everything is working fine, but I'm looking to improve the infrastructure behind these hosts.
    I evicted one node from the cluster to experiment with. I enabled LACP on the switch side (Cisco) and enabled NIC teaming on the server (2012 R2). The server is online and functioning, but this is where my knowledge ends. I can't create a logical switch
    and add it to this host as the job fails stating that the switch can't be added since the host is already teamed. I'm a little confused about the proper process of getting a logical switch created and added to my host. Do I need to remove LACP and disable
    NIC teaming on the host and then re-enable it? Am I going down the wrong path by using LACP? 
    Any tips and advice would be greatly appreciated. I'd also be happy to provide any additional details I may have left out.

    We use LACP teaming for four NICs in two teams: one for the production vSwitch and one for management.
    We create the management team on the Hyper-V host first and add the host into VMM, then push out the team for the switch FROM VMM. The trick is to create an uplink port profile (using the Hyper-V Port load-balancing algorithm and the LACP teaming mode) and bind this port profile to your logical network(s).
    Then create your logical switch, select uplink mode Team, and add in your uplink port profile.
    Once you have done this you can right-click the host (in VMs and Services), open Properties, and navigate to Virtual Switches. Add a new virtual switch (New Logical Switch) and you will then be able to add multiple adapters to the switch.
    Hit apply and it *should* team for you.
    If you need further clarification I can send screen prints and exact steps on Tuesday when I'm back in the office.
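
    A minimal VMM PowerShell sketch of the port profile step described above (the profile and logical network names are placeholders; cmdlets are from the 2012 R2 VMM module):

    # Create a native uplink port profile: LACP teaming mode, Hyper-V Port load balancing
    $ln  = Get-SCLogicalNetwork -Name "Production"
    $lnd = Get-SCLogicalNetworkDefinition -LogicalNetwork $ln
    New-SCNativeUplinkPortProfile -Name "UplinkPP-LACP" -LogicalNetworkDefinition $lnd `
        -LBFOTeamMode "Lacp" -LBFOLoadBalancingAlgorithm "HyperVPort" -EnableNetworkVirtualization $false
    # Add this uplink port profile to the logical switch (uplink mode Team), then add the
    # physical adapters to the switch in the host's Virtual Switches properties.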

  • LBFO team and VMQ configuration

    Hi gang,
    So I am running Server 2012 R2 in a two-node cluster with some teamed network cards (2 dual-port 10 GbE cards).
    The teaming is switch independent and dynamic.
    In the event viewer I am seeing the following error:
    Event ID: 106
    Available processor sets of the underlying physical NICs belonging to the LBFO team NIC /DEVICE/{2C85A178-B9EA-436B-8E53-FAE34A578E95} (Friendly Name: Microsoft Network Adapter Multiplexor
    Driver) on switch 9740F036-858E-49C8-8A29-DAE5BDA94871 (Friendly Name: VLANs) are not configured correctly. Reason: The processor sets overlap when LBFO is configured with sum-queue mode.
    So I read a bunch on the VMQ stuff and just want to make sure this is the proper configuration:
    Set-NetAdapterVmq -Name "NIC A" -BaseProcessorNumber 2 -MaxProcessors 8
    Set-NetAdapterVmq -Name "NIC B" -BaseProcessorNumber 10 -MaxProcessors 8
    Set-NetAdapterVmq -Name "NIC C" -BaseProcessorNumber 18 -MaxProcessors 8
    Set-NetAdapterVmq -Name "NIC D" -BaseProcessorNumber 24 -MaxProcessors 6
    Hyper-Threading is enabled:
    NumberOfCores   NumberOfLogicalProcessors
    8               16
    8               16
    The last line of the VMQ configuration is what confuses me the most.
    Thanks.

    Hi Alexey,
    So something like this should work:
    Set-NetAdapterVmq -Name "NIC A" -BaseProcessorNumber 4 -MaxProcessors 4    # cores 4,6,8,10
    Set-NetAdapterVmq -Name "NIC B" -BaseProcessorNumber 12 -MaxProcessors 4   # cores 12,14,16,18
    Set-NetAdapterVmq -Name "NIC C" -BaseProcessorNumber 20 -MaxProcessors 3   # cores 20,22,24
    Set-NetAdapterVmq -Name "NIC D" -BaseProcessorNumber 26 -MaxProcessors 3   # cores 26,28,30
    Correct?
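
    To verify the result, the assignments can be read back and checked for overlap (a sketch using the adapter names above; in sum-queues mode the processor ranges must not overlap):

    # List the VMQ processor assignment per team member
    Get-NetAdapterVmq -Name "NIC A","NIC B","NIC C","NIC D" |
        Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues -AutoSize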

  • Responsible team in srm configuration

    In best practice, who does the configuration of the RFC connections and the definition and assignment of logical systems? Is it the Basis team or the functional team?

    Hi Jackie,
    This is done by the Basis team, obviously with direction from the Functional team. The Functional team needs to tell the Basis team which logical systems to create, which RFC connections to create, etc.
    Regards,
    Nikhil

  • Minor_init failed for module... & unable to configure nics

    Hello,
    following the instructions of: "Sun Two-Node Cluster How to Guide"
    I successfully installed "Sun Cluster 3.1U4" on my Sun Ultra 5 workstations (on the two nodes and on the administrative console), which are all running Solaris 10 1/06.
    During the installation process I choose the option "configure later" as suggested by the guide.
    Before configuring the nodes and proceeding with the steps, I wanted to check the correct installation of the NICs. I have three NICs plugged into each of the two nodes and two in the administrative console. One per computer is integrated; the others are three PCI 10/100 Ark NICs and two Intel NICs.
    The fact is that the output of dladm show-link shows just hme0 on all the computers; none of the other NICs seems to be recognized.
    In the output of prtconf I see:
    pci instance 1  ethernet (driver not attached)  ethernet (driver not attached)
    whereas with prtconf -D I see:
    pci instance 1  ethernet  ethernet (driver simba)
    (this is the output from one of my nodes, which has two Ark NICs).
    I tried with devfsadm -v (as I read in a post) and the output is
    minor_init failed for module /usr/lib/devfsadm/linkmod/SUNW_scmd_link.so
    this string also appears at every boot (also when I boot -r)
    I found this last message treated as a problem in the "Sun Java Enterprise administration guide".
    The situation described there is like mine, but of course I am not running Solaris on an x86 system, and I also do not see "NOTICE: NO PCI PROP":
    x86 machines running Solaris 10 fail to come up in cluster mode due to changes made for the Solaris boot architecture project. The following error messages are displayed when the machine boots up:
    NOTICE: Can't open /etc/cluster/nodeid
    NOTICE: BOOTING IN NON CLUSTER MODE
    NOTICE: NO PCI PROP
    NOTICE: NO PCI PROP
    Configuring devices.
    Hostname: pvyom1
    devfsadm: minor_init failed for module /usr/lib/devfsadm/linkmod/SUNW_scmd_link.so
    Loading smf(5) service descriptions: 24/24
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Not booting as part of a cluster
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Note: path_to_inst might not be updated. Please 'boot -r' as needed to update.
    Solution Perform these steps:
    1. Add etc/cluster/nodeid to /boot/solaris/filelist.ramdisk.
    2. Enter these commands:
    # bootadm update-archive
    # reboot -- -r
    I have not performed the steps indicated as the "solution" for this.
    I am waiting because maybe the scinstall configuration will fix it, won't it?
    I just want to finish my two-node cluster configuration happily and begin to work with it.
    Thanks in advance,
    Francesco

    The primary problem you seem to have is that your network adapters are not recognized, so this is not a cluster problem. You talk about "three PCI 10/100 Ark and two Intel NICs". The question is whether those cards have a corresponding Solaris driver. Since you indicate that prtconf reports no driver attached, you may be out of luck with the drivers shipped with the Solaris 10 1/06 you use. Maybe the vendor of those cards offers drivers that need to be installed?
    Otherwise I guess you are out of luck using them unless you can find a usable kernel driver.
    For the message
    minor_init failed for module /usr/lib/devfsadm/linkmod/SUNW_scmd_link.so
    that you see, this is harmless and not your problem; you can ignore it.
    This error happens when the scmd_link devfsadm link module is loaded before clustering has started. It is a harmless situation.
    Greets
    Thorsten

  • Team Explorer iView Configuration

    I am trying to configure the Team Explorer.  I downloaded and installed the team explorer from OSS note 836380 (version 60.1), and downloaded the documentation from OSS note 828767 and configured accordingly.  My question is this: it appears that the only supported NAVID entry(view v_twpc_nav) is key ORGEH (and eval path ORGEH); regardless of what is entered into the NavGroup or Navid iView properties.  Is there any way to have multiple entries in v_twpc_nav and v_twpc_v for the Team Explorer?  OSS note 828767 appears to indicate that only one NAVID (and therefore, one view) is supported.

    Hi Mark,
    At one point when I configured the NAVIDs, I set up three of them:
    ORGEH with the view O_PROF (org. units)
    Z_1 with the view o_SALL (all employees for the selected org. unit)
    Z_2 with the view O_SDIR (directly reporting employees for the selected org. unit)
    It worked fine (this was in our sandbox). Now I am trying to do the same in our dev environment, and it just brings back empty tables with no employee or org. unit info.
    Thanks

  • How to configure NIC to accept all Multicast(ALLMULTI) on Solaris 10?

    I haven't had any luck finding the correct syntax or appropriate steps to configure a network interface on Solaris 10 SPARC server to accept all multicast traffic.
    I've tried it on ifconfig with ALLMULTI or IFF_ALLMULTI with no luck.
    Background:
    I need to be able to determine multicast groups available on a specific network interface without having to do a join on all possible multicast addresses.
    Thanks

    No, Siebel is supported on the server if you run Sun Solaris 10.
    On the client you can run Siebel in HI mode only with IE.
    Siebel Tools (The client you use to do the changes in the Siebel SRF is only supported on
    Windows XP SP2 and Vista)
    If you run Siebel in SI mode you can do this on a lot more platforms, but not Sun Solaris.
    Once again, please have a look at:
    Siebel 8.0.x System Requirements and Supported Platforms & Miscellaneous Documentation
    http://download.oracle.com/docs/cd/E11886_01/V8/CORE/core_8_0.html
    Axel

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Centre 2012 R2; however, since putting it into production we are now seeing problems with Virtual Machine Queues. These manifest themselves as
    either very high latency inside virtual machines (we're talking 200-400 ms round-trip times), packet loss, or complete connectivity loss for VMs. Not all VMs are affected, but the problem does manifest itself on all hosts. I am aware of these issues
    having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was originally based on Server 2012 and a decision was taken to bring it up to R2, due to a number
    of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
    management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining
    4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.
    The initial driver for upgrading was that we were once again seeing issues with VMQ in the old converged fabric design. The two vNICs in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
    to the same designated NICs in each of the VM hosts).
    In this setup, a similar issue was being experienced to our present issue. Once again, the Converged Fabric vNICs in the Host OS would on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with
    a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration and would then affect both vNICS connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs
    which in turn would trigger all sorts of horrid goings on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
    utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
    However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
    interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
    In each host, the above 4 VM Switch NICs are formed into a Switch Independent, Dynamic team (Sum of Queues mode). Each physical NIC has
    RSS disabled and VMQ enabled, and the team multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap,
    and as the host processors have Hyper-Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                   InterfaceDescription                Enabled   BaseVmqProcessor   MaxProcessors   NumberOfReceiveQueues
    VM_SWITCH_ETH01        Intel(R) Gigabit 4P I350-t A...#8   True      0:10               1               7
    VM_SWITCH_ETH03        Intel(R) Gigabit 4P I350-t A...#7   True      0:14               1               7
    VM_SWITCH_ETH02        Intel(R) Gigabit 4P I350-t Ada...   True      0:12               1               7
    VM_SWITCH_ETH04        Intel(R) Gigabit 4P I350-t A...#2   True      0:16               1               7
    Production VM Switch   Microsoft Network Adapter Mult...   True      0:0                                28
    Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
    The loss of connectivity / high latency (200-400 ms as before) only seems to arise when a VM is moved via Live Migration from host to host. If I set up a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings
    at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a
    queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process, however, longer than I have seen in
    the past (usually only a ping or two are lost, but we are now seeing 5 or more before VM network connectivity is restored on the destination host, which is enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
    VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
    It goes without saying that we really don't want to disable VMQ; however, given the nature of our client's business, we really cannot afford for these issues to crop up. If I can't find a resolution here, I will be left with no choice, as ironically we see
    fewer issues with VMQ disabled than with it enabled.
    I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
    Many thanks
    Matt
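
    For reference, a hedged sketch of the VMQ on/off test described above, using the adapter names from this post and the standard NetAdapter cmdlets:

    # Temporarily disable VMQ on the team members and on the team multiplex adapter
    Disable-NetAdapterVmq -Name "VM_SWITCH_ETH01","VM_SWITCH_ETH02","VM_SWITCH_ETH03","VM_SWITCH_ETH04"
    Disable-NetAdapterVmq -Name "Production VM Switch"

    # ...repeat the Live Migration test, then re-enable VMQ...
    Enable-NetAdapterVmq -Name "VM_SWITCH_ETH01","VM_SWITCH_ETH02","VM_SWITCH_ETH03","VM_SWITCH_ETH04","Production VM Switch"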

    Hi Gleb
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear as though the two GUIDS in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
    Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
    as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
    Interface Name        BaseVMQProc   MaxProcs   VMQ / RSS
    VM_SWITCH_ETH01       10            1          VMQ
    VM_SWITCH_ETH02       12            1          VMQ
    VM_SWITCH_ETH03       14            1          VMQ
    VM_SWITCH_ETH04       16            1          VMQ
    SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
    Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
    Interface Name        BaseVMQProc   MaxProcs   VMQ / RSS
    SMB_FABRIC_ETH01      18            2          RSS
    SMB_FABRIC_ETH02      18            2          RSS
    ISCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our ISCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per
    the VM Switch team above. No VMQ on these NICS, however RSS is enabled as follows:
    Interface Name        BaseVMQProc   MaxProcs   VMQ / RSS
    ISCSI_SAN_ETH01       2             2          RSS
    ISCSI_SAN_ETH02       6             2          RSS
    ISCSI_SAN_ETH03       2             2          RSS
    ISCSI_SAN_ETH04       6             2          RSS
    Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
    cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
    are configured for VMQ, thus RSS has been disabled:
    Interface Name        BaseVMQProc   MaxProcs   VMQ / RSS
    MAN_SWITCH_ETH01      22            1          VMQ
    MAN_SWITCH_ETH02      22            1          VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so we have tried to balance the NICs over all available cores where practical.
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt
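
    For completeness, the per-adapter assignments in the tables above map directly onto the standard NetAdapter cmdlets; a sketch (interface names and values taken from the tables):

    # VM Switch team members: VMQ, one core each, non-overlapping (sum-of-queues team)
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH03" -BaseProcessorNumber 14 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH04" -BaseProcessorNumber 16 -MaxProcessors 1
    # SMB fabric NICs: RSS only
    Set-NetAdapterRss -Name "SMB_FABRIC_ETH01" -BaseProcessorNumber 18 -MaxProcessors 2
    Set-NetAdapterRss -Name "SMB_FABRIC_ETH02" -BaseProcessorNumber 18 -MaxProcessors 2
    # iSCSI MPIO NICs: RSS only
    Set-NetAdapterRss -Name "ISCSI_SAN_ETH01" -BaseProcessorNumber 2 -MaxProcessors 2
    Set-NetAdapterRss -Name "ISCSI_SAN_ETH02" -BaseProcessorNumber 6 -MaxProcessors 2
    Set-NetAdapterRss -Name "ISCSI_SAN_ETH03" -BaseProcessorNumber 2 -MaxProcessors 2
    Set-NetAdapterRss -Name "ISCSI_SAN_ETH04" -BaseProcessorNumber 6 -MaxProcessors 2
    # Management team members: VMQ
    Set-NetAdapterVmq -Name "MAN_SWITCH_ETH01" -BaseProcessorNumber 22 -MaxProcessors 1
    Set-NetAdapterVmq -Name "MAN_SWITCH_ETH02" -BaseProcessorNumber 22 -MaxProcessors 1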

  • Are these viable designs for NIC teaming on UCS C-Series?

    Is this a viable design on ESXi 5.1 on UCS C240 with 2 Quad port nic adapters?
    Option A) VMware NIC teaming with load balancing of vmnic interfaces in an Active/Active configuration through alternate and redundant hardware paths to the network.
    Option B) VMware NIC teaming with load balancing of vmnic interfaces in an Active/Standby configuration through alternate and redundant hardware paths to the network.
    Option A:
    Option B:
    Thanks.

    No. It really comes down to what Active/Active means and the type of upstream switches. For ESXi NIC teaming, Active/Active load balancing provides the opportunity to have all network links be active for different guest devices. Teaming can be configured in a few different methods. The default is by virtual port ID, where each guest machine gets assigned to an active port and also a backup port. Traffic for that guest would only be sent on one link at a time.
    For example, let's assume 2 Ethernet links and 4 guests on the ESX host. Link 1 to Switch 1 would be active for Guests 1 and 2, and Link 2 to Switch 2 would be backup for Guests 1 and 2. However, Link 2 to Switch 2 would be active for Guests 3 and 4, and Link 1 to Switch 1 would be backup for Guests 3 and 4.
    The following provides details on the configuration of NIC teaming with VMWare:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
    There are also possibilities of configuring LACP in some situations, but there are special hardware considerations on the switch side as well as the host side.
    Also keep in mind that the vSwitch does not indiscriminately forward broadcast/multicast/unknown unicast out all ports.  It has a strict set of rules that prevents it from looping.  It is not a traditional L2 forwarder so loops are not a consideration in an active/active environment. 
    This document further explains VMWare Virtual Networking Concepts.
    http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf
    Steve McQuerry
    UCS - Technical Marketing
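
    A hedged PowerCLI sketch of the two options for a standard vSwitch (host, switch and vmnic names are placeholders; the default virtual port ID policy corresponds to LoadBalanceSrcId):

    # Option A: Active/Active - both uplinks active, balanced by originating virtual port ID
    Get-VMHost "esxi01.example.local" | Get-VirtualSwitch -Name "vSwitch0" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceSrcId -MakeNicActive vmnic0,vmnic1

    # Option B: Active/Standby - one uplink active, the other held in standby
    Get-VMHost "esxi01.example.local" | Get-VirtualSwitch -Name "vSwitch0" | Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1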

  • SR-IOV Uplink Port with NIC Teaming

    Hello,
    I'm trying to set up my uplink port profile and logical switch with NIC teaming and SR-IOV support. In Hyper-V this was easy: just create the NIC team (which I configured as Dynamic & LACP), then check the box on the virtual switch.
    In VMM it does not seem to want to enable NIC teams with SR-IOV:
    Can anyone advise? I'm not using any virtual ports. I just want all my VMs to connect to the physical switch through the LACP NIC team, something which I thought would be simple.
    I have a plan B: don't use Microsoft's NIC teaming and instead use the Intel technology to present all the adapters as one to the host. I'd rather not do this.
    Thanks
    MrGoodBytes

    Hi Sir,
    "SR-IOV does have certain limitations. If you configure port access control lists (ACLs), extensions or policies in the virtual switch, SR-IOV is disabled because its traffic totally bypasses the switch.
    You can’t team two SR-IOV network cards in the host. You can, however, take two physical SR-IOV NICs in the host, create separate virtual switches and team two virtual network cards within a VM. "
    There really is a limitation when using NIC teaming:
    http://technet.microsoft.com/en-us/magazine/dn235778.aspx
    Best Regards,
    Elton Ji 
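
    To illustrate the supported pattern from the quote above (no host-side team; SR-IOV switches are created per physical NIC and the teaming is done inside the guest), a hedged Hyper-V PowerShell sketch with placeholder adapter, switch and VM names:

    # One SR-IOV-capable external switch per physical SR-IOV NIC
    New-VMSwitch -Name "SRIOV-Switch1" -NetAdapterName "SLOT 4 Port 1" -EnableIov $true -AllowManagementOS $false
    New-VMSwitch -Name "SRIOV-Switch2" -NetAdapterName "SLOT 4 Port 2" -EnableIov $true -AllowManagementOS $false

    # Give the VM one vNIC on each switch, enable SR-IOV and guest teaming on both,
    # then team the two adapters inside the guest OS
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "SRIOV-Switch1" -Name "SRIOV-1"
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "SRIOV-Switch2" -Name "SRIOV-2"
    Set-VMNetworkAdapter -VMName "VM01" -Name "SRIOV-1" -IovWeight 100 -AllowTeaming On
    Set-VMNetworkAdapter -VMName "VM01" -Name "SRIOV-2" -IovWeight 100 -AllowTeaming On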

  • Slow migration rates for shared-nothing live migration over teaming NICs

    I'm trying to increase the migration/data transfer rates for shared-nothing live migrations (i.e., especially the storage migration part of the live migration) between two Hyper-V hosts. Both of these hosts have a dedicated teaming interface (switch-independent,
    dynamic) with two 1 GBit/s NICs which is used only for management and transfers. Both of the NICs on both hosts have RSS enabled (and configured), and the teaming interface also shows RSS enabled, as does the corresponding output from Get-SmbMultichannelConnection.
    I'm currently unable to see data transfers of the physical volume of more than around 600-700 MBit/s, even though the team is able to saturate both interfaces with data rates going close to the 2 GBit/s boundary when transferring simple files over SMB. The
    storage migration seems to use multichannel SMB, as I am able to see several connections all transferring data on the remote end.
    As I'm not seeing any form of resource saturation (neither the NIC/team is full, nor is a CPU, nor is the storage adapter on either end), I'm slightly stumped that live migration seems to have a built-in limit of around 700 MBit/s, even over a (pretty much) dedicated
    interface which can handle more traffic when transferring simple files. Is this a known limitation with respect to teaming and shared-nothing live migrations?
    Thanks for any insights and for any hints where to look further!

    Compression is not configured on the live migrations (but rather it's set to SMB), but as far as I understand, for the storage migration part of the shared-nothing live migration this is not relevant anyway.
    Yes, all NICs and drivers are at their latest version, and RSS is configured (as also stated by the corresponding output from Get-SmbMultichannelConnection, which recognizes RSS on both ends of the connection), and for all NICs bound to the team, Jumbo Frames
    (9k) have been enabled and the team is also identified with 9k support (as shown by Get-NetIPInterface).
    As the interface is dedicated to migrations and management only (i.e., the corresponding Team is not bound to a Hyper-V Switch, but rather is just a "normal" Team with IP configuration), Hyper-V port does not make a difference here, as there are
    no VMs to bind to interfaces on the outbound NIC but just traffic from the Hyper-V base system.
    Finally, there are no bandwidth weights and/or QoS rules for the migration traffic bound to the corresponding interface(s).
    As I'm able to transfer close to 2GBit/s SMB traffic over the interface (using just a plain file copy), I'm wondering why the SMB(?) transfer of the disk volume during shared-nothing live migration is seemingly limited to somewhere around 700 MBit/s on the
    team; looking at the TCP-connections on the remote host, it does seem to use multichannel SMB to copy the disk, but I might be mistaken on that.
    Are there any further hints or is there any further information I might offer to diagnose this? I'm currently pretty much stumped on where to go on looking.
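
    A few hedged checks that correspond to the settings discussed above (standard Hyper-V and SMB cmdlets; run on the source host during a transfer):

    # Confirm live migration is set to use SMB as the performance option
    Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

    # Verify SMB Multichannel is active and RSS-capable on both ends
    Get-SmbMultichannelConnection

    # Check whether an SMB bandwidth limit applies (requires the SMB Bandwidth Limit feature)
    Get-SmbBandwidthLimit -Category LiveMigration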

  • Windows 2012 Teaming configuration

    Dear All,
    I want to configure teaming in Windows 2012. I have 4 LAN cards and I am confused between switch dependent and switch independent.
    Which option gives me the best performance? This server is a failover cluster node.
    Will teaming create any problem with the cluster?
    Please help
    Sunil
    SUNIL PATEL SYSTEM ADMINISTRATOR

    Hi,
    It depends on your configuration and requirements.
    In switch independent mode, the network adapters can be connected to different switches to provide alternate routes through the network. This configuration doesn't require the switch to participate in the teaming; since the switch is independent of the team, it does not
    know which adapters are part of the NIC team. This mode is again classified into two sub-modes:
    Active / Active Mode
    Active / Passive Mode
    In switch dependent mode, the switch is required to participate in the teaming, and all the NICs must be connected to the same physical switch. This can be configured in two modes:
    Generic / Static Mode
    Dynamic Teaming
    Network card teaming is supported in Windows Server 2012 with a cluster. However, when iSCSI is used with dedicated NICs, using any teaming solution is not recommended and MPIO/DSM should be used instead. But when iSCSI is used with shared NICs, those
    shared NICs can be teamed and will be supported as long as the Windows Server 2012 NIC Teaming solution is being used.
    Is NIC Teaming in Windows Server 2012 supported for iSCSI, or not supported for iSCSI?
    Best regards,
    Susie
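
    As an illustration of the two modes discussed above, a hedged sketch with placeholder team and NIC names (New-NetLbfoTeam is the built-in Server 2012 teaming cmdlet; Address Hash corresponds to TransportPorts):

    # Switch independent mode - no switch configuration required
    New-NetLbfoTeam -Name "Team-SI" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

    # Switch dependent, dynamic (LACP) - requires a matching port channel / LAG on the physical switch
    New-NetLbfoTeam -Name "Team-LACP" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts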

  • Hyper-V Nic Teaming (reserve a nic for host OS)

    Whilst setting up NIC teaming on my host (Server 2012 R2), the OS recommends leaving one NIC for host management (access). Is this best practice? It seems like a waste of a NIC, as the host would hardly ever be accessed after initial setup.
    I have 4 NICs in total. What is the best practice in this situation?

    Depending on whether it is a single, standalone host or you are building a cluster, you need several networks on your Hyper-V host,
    at least one connection for the host to do management.
    So in the case of a single node with local disks, you would create a team with the 4 NICs and create a Hyper-V switch with the option checked to create the management OS adapter, which is a so-called vNIC on that vSwitch, and then configure that vNIC with the needed
    IP settings etc. (see the sketch below).
    If you plan a cluster and also iSCSI/SMB for storage access, take a look here:
    http://www.thomasmaurer.ch/2012/07/windows-server-2012-hyper-v-converged-fabric/
    You will find a few possible ways of teaming and the switch settings, and also all the steps needed for doing a fully converged setup via PowerShell.
    If you share more information on your setup, we can give more details on that.
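
    A minimal sketch of the single-node converged setup described above (team, switch, vNIC names and addresses are placeholders):

    # Team all four NICs, build a Hyper-V switch on top, and expose a management vNIC to the host OS
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "HostTeam" -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Management"

    # Configure the management vNIC (its alias is "vEthernet (Management)") with the needed IP settings
    New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
    Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses 192.168.1.2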

  • Windows Server 2012 R2 NIC Teaming and DHCP Issue

    Came across a weird issue today during a server deployment. I was doing a physical server deployment and got Windows installed and was getting ready to connect it to our network. Before connecting the Ethernet cables to the network adapters, I created a
    NIC Team using Windows Server 2012 R2 built-in software with a static IP address (we'll say its 192.168.1.56). Once I plugged in the Ethernet cables, I got network access but was unable to join our domain. At this time, I deleted the NIC team and the two network
    adapters got their own IP addresses issued from DHCP (192.168.1.57 and 192.168.1.58) and at this point I was able to join our domain. I recreated the NIC team and set a new static IP (192.168.1.57) and everything was working great as intended.
    My issue is that when I went into DHCP, I noticed a random entry that was using the IP address I used for the first NIC teaming attempt (192.168.1.56), before I joined it to the domain. I call this a random entry because it is using the last 8 characters of the
    MAC address as the hostname instead of the server's hostname.
    It seems that when I deleted the first NIC team I created (192.168.1.56), a random MAC address that Server 2012 R2 generated for the team remained embedded in the system. The IP address is still pingable even though ipconfig /all shows the current NIC team
    with the IP 192.168.1.57. There is no IP address of 192.168.1.56 configured on the current server and I have static IPs set, yet it is still pingable and registering with DHCP.
    I know this is slightly confusing, but I am hoping someone else has encountered this issue and may be able to tell me how to fix it. Simply deleting the DHCP entry does not do the trick; it comes back.
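
    To track down where the stale record comes from, a hedged sketch using the DhcpServer module (run on the DHCP server; addresses follow the example above):

    # Find the lease for the stale address and note its ClientId (MAC) and HostName
    Get-DhcpServerv4Lease -IPAddress 192.168.1.56

    # Compare against the MAC addresses currently present on the server
    Get-NetAdapter | Format-Table Name, MacAddress, Status -AutoSize

    # Remove the stale lease once the owning interface has been identified and cleaned up
    Remove-DhcpServerv4Lease -IPAddress 192.168.1.56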

    Hi,
    Please confirm that you have chosen the right NIC team type. If you've previously configured NIC teaming, you're aware that NIC teams usually require the assistance of network-side
    protocols. Prior to Windows 2012, using a NIC team on a server also meant enabling protocols like EtherChannel or LACP (also known as 802.1ax or 802.3ad) on the network ports.
    More information:
    NIC teaming configure in Server 2012
    http://technet.microsoft.com/en-us/magazine/jj149029.aspx
    Hope this helps.

  • Intel Server NIC I350 LACP IEEE802.3ad teaming issue

    Hello Community
    I face an issue which i cannot resolve.
    I have:
    Intel Server System R1208GL4DS with a built-in I350 4-port network adapter
    OS: Windows Server 2008 R2
    NIC drivers version 18.4 (PROSet with ANS)
    The data center provides an IEEE 802.3ad dynamic aggregation teaming connection; it uses 2 ports on my server (0 and 3).
    The DC uses Cisco Nexus switches.
    Spanning Tree Protocol is on and cannot be switched off by the DC.
    Problem:
    One of the adapters suddenly goes into standby state and does not pass traffic.
    As a result, all connectivity to the server and to the services I use is stuck at that moment.
    The only way to resolve it is to restart the server or restart the whole team by changing the team properties.
    NIC properties:
    flow control off
    offloads off
    RSS off
    Team:
    I have tried changing everything, playing with every property on the NIC or the team. No luck.
    Some information from DC support of the swith config:
    # sh interface po1113 switchport
    Name: port-channel1113
      Switchport: Enabled
      Switchport Monitor: Not enabled
      Operational Mode: trunk
      Access Mode VLAN: 1 (default)
      Trunking Native Mode VLAN: 1 (default)
      Trunking VLANs Allowed: 300,390,398-399
      Voice VLAN: none
      Extended Trust State : not trusted [COS = 0]
      Administrative private-vlan primary host-association: none
      Administrative private-vlan secondary host-association: none
      Administrative private-vlan primary mapping: none
      Administrative private-vlan secondary mapping: none
      Administrative private-vlan trunk native VLAN: 1
      Administrative private-vlan trunk encapsulation: dot1q
      Administrative private-vlan trunk normal VLANs: none
      Administrative private-vlan trunk private VLANs: none
      Operational private-vlan: none
      Unknown unicast blocked: disabled
      Unknown multicast blocked: disabled
    Please advise as I'm almost stuck.
    Thank you.

    The problem may be on the Cisco side. Cisco is very clever and could assess the network traffic as a problem and close the port. When the OS is running, NIC teaming works fine, but when you boot up the server the BIOS is not running the NIC team, and at that moment
    a problem can occur on the Cisco side.
    If you use Cisco, I recommend configuring NIC teaming in LACP mode and configuring your two ports on the Cisco switch for LACP; that is the better way.
    Regards,
    thennet
