NIC Teaming for Hyper-V Server

I have installed Windows Server 2012 R2 on a server with two network adapters and assigned static IP addresses to both NICs:
LAN1: 192.168.0.100 and LAN2: 192.168.0.101. After I enabled NIC Teaming, the server added one more adapter called
"Network Adapter Multiplexor", and after that the addresses above stopped responding to ping or any other requests. I then assigned
192.168.0.102 to the Multiplexor adapter and it started working.
So my question is: do I need to give IP addresses to LAN1 and LAN2, or can I just create the team and assign an IP address to the Multiplexor adapter?
Also, if I install Hyper-V on it, will that give me failover for this machine?
Akshay Pate

Hello Akshay,
In brief, after creating the teaming adapter (the Multiplexor), you use its address for all further networking; the member NICs don't need their own addresses.
Regarding the lack of ping, I had the same "issue", and it seems to be blocked by Microsoft's own code; I still haven't found how to allow it.
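For what it's worth, here is roughly how the same setup looks in PowerShell; a minimal sketch only, using the NIC names and address from your post (the firewall rule is an assumption on my part, in case the missing pings are just the default inbound ICMP block):

# Create the team; the member NICs give up their own IP bindings.
New-NetLbfoTeam -Name "Team1" -TeamMembers "LAN1","LAN2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Assign the address to the team (Multiplexor) interface, not to the members.
New-NetIPAddress -InterfaceAlias "Team1" -IPAddress 192.168.0.102 -PrefixLength 24

# Inbound ICMPv4 echo is not allowed by default; this rule lets the team address answer pings.
New-NetFirewallRule -Name "Allow-ICMPv4-In" -DisplayName "Allow ICMPv4 Echo" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow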
When digging into Windows Server 2012 (and R2) NIC Teaming, these two pages were very informative and useful:
Geek of All Trades: The Availability Answer (by Greg Shields)
Windows server 2012 Hyper-V 3.0 network virtualization (if you need more technical detail)
Hope it helps!

Similar Messages

  • Akamai Download Error for Hyper-V Server 2012 R2

    I am trying to download the Eval for Hyper-V Server 2012 R2.
    I keep getting the same error message:
    Unable to save File
    Please try again to save to a different location.
    I have tried this on multiple computers and browsers, all with the same error.
    What am I doing wrong?

    Hi leomoed,
    Yes, it is always running in the background.
    Use the Download Manager for efficient installations, time-saving features, and automatic restarting if the download process is interrupted.
    http://msdn.microsoft.com/en-us/subscriptions/bb153537.aspx
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • Can you use NIC Teaming for Replica Traffic in a Hyper-V 2012 R2 Cluster

    We are in the process of setting up a two-node 2012 R2 Hyper-V cluster and will be using the Replica feature to make copies of some of the hosted VMs to an off-site, standalone Hyper-V server.
    We plan to use two physical NICs in an LBFO team on the cluster nodes for the Replica traffic, but wanted to confirm that this is supported before we continue.
    Cheers for now
    Russell

    Sam,
    Thanks for the prompt response; presumably the same is true of the other types of cluster traffic (Live Migration, Management, etc.)?
    Cheers for now
    Russell
    Yep.
    In our practice we actually use converged networking: we NIC-team all physical NICs into one pipe (switch independent / dynamic / active-active), on top of which we provision vNICs for the parent partition (host OS) as well as for guest VMs; see the sketch after this post.
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable) _________________________________________________________________________________
    Powershell: Learn it before it's an emergency http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx
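    A minimal PowerShell sketch of that converged layout, assuming hypothetical NIC and switch names; weights, VLANs and member counts would need to be adapted:
    # Team all physical NICs into one pipe (switch independent / dynamic / active-active).
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    # One extensible switch on top of the team, with weight-based QoS.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight
    # vNICs for the parent partition; guest VMs connect to the same switch.
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40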

  • Which is better Microsoft 2012 R2 teaming or HP teaming for Hyper-V ?

    Hi,
    I plan to use network teaming on our Hyper-V 2012 R2 host.
    I want to know which is recommended: HP teaming or Microsoft 2012 R2 teaming.

    I would use Microsoft Teaming (LBFO teaming; a quick creation sketch follows at the end of this post) for the following reasons:
    Dynamic Load Balancing (new in R2 and just generally awesome in terms of perf).
    Compatibility with different NIC versions
    VMM 2012 SP1 and 2012 R2 compatibility in creating teams automatically.
    HP recommends it. (I did a quick search and could not find the source I've used in the past, so take this one with a grain of salt.)
    Better support.
    I have four customers where I support a Hyper-V Private cloud and every single one of them uses Microsoft Teaming.
    You also may want to read through this white paper to get a better idea of all the options you have and when you would not want to use MS Teaming:
    http://www.microsoft.com/en-us/download/details.aspx?id=40319
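    For completeness, the sketch mentioned above: creating a native LBFO team with the R2 Dynamic algorithm is a one-liner (member names are placeholders), with no vendor teaming software involved:
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic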

  • NIC teaming and Hyper-V switch recommendations in a cluster

    HI,
    We’ve recently purchased four HP Gen8 servers with a total of ten NICs to be used in a Hyper-V 2012 R2 cluster.
    These will be connecting to iSCSI storage, so I’ll use two of the NICs for the iSCSI storage connection.
    I’m then deciding between two options.
    1. Create one NIC team and one extensible switch, and create vNICs for Management, Live Migration and CSV/Cluster, with QoS to manage all this traffic. Then connect my VMs to the same switch.
    2. Create two NIC teams, four adapters in each. Use one team just for the Management, Live Migration and CSV/Cluster vNICs, with QoS to manage all this traffic.
    The other team will be dedicated just to my VMs.
    Is there any benefit to isolating the VMs on their own switch?
    Would having two teams allow more flexibility with the teaming 
    configurations I could use, such as using Switch Independent\Hyper-V Port mode for the VM team? (I do need to read up on the teaming modes a little more)
    Thanks,

    I’m not teaming the iSCSI adapters. These will be configured with MPIO.
    What I want to know is about the first option: create one NIC team and one extensible switch, create vNICs for Management, Live Migration and CSV/Cluster with QoS to manage all this traffic, then connect my VMs to the same switch.
    http://blogs.technet.com/b/cedward/archive/2014/02/22/hyper-v-2012-r2-network-architectures-series-part-3-of-7-converged-networks-managed-by-scvmm-and-powershell.aspx
    What are the disadvantages to having this configuration? 
    Should RSS be disabled on the NICs in this configuration with DVMQ left enabled? 
    After reading through this post, I think I’ll need to do this. 
    However, I’d like to understand this a little more.
    I have the option of adding an additional two 10GbE NICs.
    This would mean I could create another team and Hyper-V switch on top and then dedicate this to my VMs leaving the other team for CSV\Management and Live Migration.
     How does this option affect the use of RSS and DVMQ?
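    Not an authoritative answer, but for checking and adjusting RSS vs. VMQ on team members, something along these lines (adapter names are placeholders) is a reasonable starting point:
    # Inspect the current RSS / VMQ state on the team members.
    Get-NetAdapterRss -Name "NIC1","NIC2" | Format-Table Name, Enabled
    Get-NetAdapterVmq -Name "NIC1","NIC2" | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors
    # On NICs bound to a Hyper-V switch, VMQ (DVMQ) is what matters; RSS is
    # typically disabled there and left enabled on NICs not under a vSwitch.
    Disable-NetAdapterRss -Name "NIC1"
    Enable-NetAdapterVmq -Name "NIC1"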

  • NIC teaming shows no adapters (server 2012 R2)

    I think you are almost to the point of nuke and pave to fix the issue. (well not totally yet). 
    What I would do is go into the device manager and remove all physical network adapters. Reboot and then let windows redetect and add them. This should fix the network communication issue. 
    With that done, you need to answer what Rod-IT asked: how are you teaming these NICs? Are you using the computer manufacturer's program, the network adapter manufacturer's program, or something else? Once you get beyond that, what type of teaming are you trying to set up: failover (hot spare), LACP, or switch-assisted load balancing? Some load-balancing settings need you to change the switch settings too.
    Also what brand of NICs are these? 

    I have a Server 2012 R2 system, and NIC Teaming shows no adapters; Windows Network and Sharing Center shows none as well.
    I do not want to have to reinstall :( (I've tried rebuilding the WMI repository and it doesn't work.)
    This topic first appeared in the Spiceworks Community

  • Nic teaming and hyper-v switches

    I come from the ESX world but I am slowly falling in love with the simplicity of Hyper-V. I have a stack of Dell C2100s I have been experimenting with. Each has two 1 Gb connections teamed to a Cisco switch. When testing bandwidth with a file copy I
    get around 240 MB/s. However, if I add a Hyper-V switch I max out at 90 MB/s, worse than no teaming at all (112 MB/s).
    The team uses the integrated Broadcom NICs with LACP, and I can confirm I get full bandwidth between two 2012 R2 machines until adding a Hyper-V switch. Removing the switch lets me transfer at full bandwidth, but then I can't use Hyper-V guests.
    My goal will eventually be to add dual-port 10 Gb cards to five of the C2100s and run them in a cluster to host all my VMs in HA. I don't want to waste my money on the switch and NICs until I can get what I have working correctly.
    HDD speed is also not the issue, as each has 12 3 TB WD RE4 drives with 2 Intel 250 GB SSDs as cache; they easily hold 3000 MB/s sustained.

    These two posts on TCP offload problems hurting Hyper-V network performance may be relevant (a quick sketch for checking those offload settings follows below):
    http://itproctology.blogspot.com/2008/05/hyper-v-tcpoffloading-poor-network.html
    http://itproctology.blogspot.com/2011/03/tcp-checksum-offload-is-not-equal-to.html
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Attempting change is of your own free will.
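    If offloads are the culprit, checking and toggling them on the teamed NICs is quick; a hedged sketch (adapter names are placeholders, and re-enable anything that doesn't help):
    # Show the offload-related advanced properties on the teamed NICs.
    Get-NetAdapterAdvancedProperty -Name "NIC1","NIC2" | Where-Object DisplayName -Match "Offload" | Format-Table Name, DisplayName, DisplayValue
    # Temporarily disable large-send and checksum offload to test throughput.
    Disable-NetAdapterLso -Name "NIC1","NIC2"
    Disable-NetAdapterChecksumOffload -Name "NIC1","NIC2"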

  • Name and hash codes for Hyper-V Server 2012R2 RTM

    Hi,
    Can someone please confirm that the download from http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx is the final RTM version?
    File name: "9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO"
    SHA1: "99829e03eb090251612673bf57c4d064049d067a"
    MD5: "9c9e0d82cb6301a4b88fd2f4c35caf80"
    I see different hashes, and filenames, for the ISO downloaded from MSDN.
    Thank you in advance,
    Simone

    Yep. I got the same.
    File: 9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO
    CRC-32: b3ef194e
    MD4: 22a4f13b6f6e86f35381aa60b2911ace
    MD5: 9c9e0d82cb6301a4b88fd2f4c35caf80
    SHA-1: 99829e03eb090251612673bf57c4d064049d067a
    It is usually redundant to include more than one state-of-the-art hash function, but why not...
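    To compare against these values locally, Get-FileHash (PowerShell 4.0, so present on 2012 R2 / Windows 8.1) will compute them; the path below is just an example:
    $iso = "C:\ISO\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO"
    Get-FileHash -Path $iso -Algorithm SHA1   # expect 99829E03EB090251612673BF57C4D064049D067A
    Get-FileHash -Path $iso -Algorithm MD5    # expect 9C9E0D82CB6301A4B88FD2F4C35CAF80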

  • Hyper-V Nic Teaming (reserve a nic for host OS)

    Whilst setting up NIC teaming on my host (Server 2012 R2), the OS recommends leaving one NIC for host management (access). Is this best practice? It seems like a waste of a NIC, as the host would hardly ever be accessed after the initial setup.
    I have 4 NICs in total. What is the best practice in this situation?

    Depending on whether it is a single standalone host or part of a cluster, you need several networks on your Hyper-V host, but
    at least one connection for the host to do management.
    So, in the case of a single node with local disks, you would create a team with the 4 NICs and create a Hyper-V switch with the option checked for creating the management OS adapter, which is a so-called vNIC on that vSwitch, and configure that vNIC with the needed
    IP settings etc. (a sketch follows below).
    If you plan a cluster and also iSCSI/SMB for storage access, take a look here:
    http://www.thomasmaurer.ch/2012/07/windows-server-2012-hyper-v-converged-fabric/
    There you will find a few possible ways of teaming, the switch settings, and all the steps needed for a fully converged setup via PowerShell.
    If you share more information on your setup, we can give more details.
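    A minimal sketch of that single-node layout (team and switch names are examples, as is the address):
    # Team all four NICs; -AllowManagementOS creates the management OS vNIC.
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "HostSwitch" -NetAdapterName "HostTeam" -AllowManagementOS $true
    # Then give the vNIC ("vEthernet (HostSwitch)") the host's IP settings (example address).
    New-NetIPAddress -InterfaceAlias "vEthernet (HostSwitch)" -IPAddress 192.168.0.10 -PrefixLength 24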

  • Server 2012 R2 Crashes with NIC Team

    Server 2012 R2 Core configured for Hyper-V. Using 2-port 10GbE Brocades, we want to use NIC teaming for guest traffic. Create the team... seems fine. Create the virtual switch in Hyper-V, and assign it to the NIC team... seems fine. Create
    a VM, assign the network card to the Virtual switch... still doing okay. Power on the VM... POOF! The host BSOD's. If I remove the switch from the VM, I can run the VM from the console, install the OS, etc... but as soon as I reassign the virtual
    NIC to the switch, POOF! Bye-bye again. Any ideas here?
    Thank you in advance!
    EDIT: A little more info... Two 2-port Brocades and two Nexus 5k's. Running one port on NIC1 to one 5k, and one port on NIC2 to the other 5k. NIC team is using Switch Independent Mode, Address Hash load balancing, and all adapters active.

    Hi,
    Have you updated the NIC driver to the latest version? (A quick way to list the installed driver versions is sketched at the end of this thread.)
    If the issue persists after updating the driver, we can use WinDbg to analyze a crash dump.
    If the NIC driver causes the BSOD, please consult the NIC manufacturer about this issue.
    For detailed information about how to analyze a crash dump, please refer to the link below:
    http://blogs.technet.com/b/juanand/archive/2011/03/20/analyzing-a-crash-dump-aka-bsod.aspx
    Best Regards.
    Steven Lee
    TechNet Community Support
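    The sketch mentioned above: listing the installed driver versions for the physical NICs is a quick first check (non-authoritative, just a convenience):
    # Driver version and date for all physical NICs, Brocades included.
    Get-NetAdapter -Physical | Format-List Name, InterfaceDescription, Driver*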

  • Hyper-V NIC Team Load Balancing Algorithm: TransportPorts vs HyperVPort

    Hi, 
    I'm going to need to configure a NIC team for the LAN traffic for a Hyper-V 2012 R2 environment. What is the recommended load balancing algorithm? 
    Some background:
    - The NIC team will deal with LAN traffic (NOT iSCSI storage traffic)
    - I'll set up a converged network. So there'll be a virtual switch on top of this team, with vNICs configured for cluster, live migration and management traffic
    - I'll implement QOS at the virtual switch level (using option -DefaultFlowMinimumBandwidthWeight) and at the vNIC level (using option -MinimumBandwidthWeight)
    - The CSV is set up on an Equallogics cluster. I know that this team is for the LAN so it has nothing to do with the SAN, but this reference will become clear in the next paragraph. 
    Here's where it gets a little confusing. I've checked some of the Equallogics documentation to ensure this environment complies with their requirements as far as storage networking is concerned. However, as part of their presentation the Dell publication
    TR1098-4, recommends creating the LAN NIC team with the TransportPorts load balancing algorithm. However, in some of the Microsoft resources (i.e. http://technet.microsoft.com/en-us/library/dn550728.aspx), the recommended load balancing algorithm is HyperVPort.
    Just to add to the confusion, in this Microsoft TechEd presentation, http://www.youtube.com/watch?v=ed7HThAvp7o, the recommendation (at around minute 8:06) is to use dynamic ports algorithm mode. So obviously there are many ways to do this, but which one is
    correct? I spoke with Equallogics support and the rep said that their documentation recommends TransportPorts LB algorithm because that's what they've tested and works. I'm wondering what the response from a Hyper-V expert would be to this question. Anyway,
    any input on this last point would be appreciated.

    Gleb,
    >> See Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management for more info
    Thanks for this reference. It seems that I have an older version of this document where there's absolutely
    no mention of the dynamic LBA. Hence my confusion when in the Microsoft TechEd presentation the
    recommendation was to use Dynamic. I almost implemented this environment with switch dependent and Address Hash Distribution because, based on the older version of the document, this combination offered: 
    a) Native teaming for maximum performance and switch diversity is not required; or
    b) Teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver
    The new version of the document recommends Dynamic over the other two LBA. The analogy that the document
    makes of TCP flows with human speech was really helpful for me to understand what this algorithm is doing. For those who will never read the document, I'm referring to this: 
    "The outbound loads in this mode are dynamically balanced based on the concept of
    flowlets.  Just as human speech has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have naturally
    occurring breaks.  The portion of a TCP flow between two such breaks is referred to as a flowlet.  When the dynamic mode algorithm detects that a flowlet boundary has been encountered, i.e., a break of sufficient length has occurred in the TCP flow,
    the algorithm will opportunistically rebalance the flow to another team member if apropriate.  The algorithm may also periodically rebalance flows that do not contain any flowlets if circumstances require it.    As a result the affinity
    between TCP flow and team member can change at any time as the dynamic balancing algorithm works to balance the workload of the team members. "
    Anyway, this post made my week. You sir are deserving of a beer!
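    For anyone landing on this thread later: the algorithm on an existing team can be changed without recreating it; a one-line sketch, assuming a team named "LANTeam":
    # Switch an existing team to the R2 Dynamic load-balancing algorithm.
    Set-NetLbfoTeam -Name "LANTeam" -LoadBalancingAlgorithm Dynamic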

  • VMQ issues with NIC Teaming

    Hi All
    Apologies if this is a long one but I thought the more information I can provide the better.
    We have recently designed and built a new Hyper-V environment for a client, utilising Windows Server 2012 R2 / System Center 2012 R2. However, since putting it into production, we are now seeing problems with Virtual Machine Queues. These manifest themselves as
    either very high latency inside virtual machines (we’re talking 200 – 400 mSec round trip times), packet loss or complete connectivity loss for VMs. Not all VMs are affected however the problem does manifest itself on all hosts. I am aware of these issues
    having cropped up in the past with Broadcom NICs.
    I'll give you a little bit of background into the problem...
    First, the environment is based entirely on Dell hardware (EqualLogic storage, PowerConnect switching and PE R720 VM hosts). This environment was based on Server 2012, and a decision was taken to bring this up to speed to R2. This was due to a number
    of quite compelling reasons, mainly surrounding reliability. The core virtualisation infrastructure consists of four VM hosts in a Hyper-V Cluster.
    Prior to the redesign, each VM host had 12 NICs installed:
    Quad port on-board Broadcom 5720 daughter card: Two NICs assigned to a host management team whilst the other two NICs in the same adapter formed a Live Migration / Cluster heartbeat team, to which a VM switch was connected with two vNICs exposed to the
    management OS. Latest drivers and firmware installed. The Converged Fabric team here was configured in LACP Address Hash (Min Queues mode), each NIC having the same two processor cores assigned. The management team is identically configured.
    Two additional Intel i350 quad port NICs: 4 NICs teamed for the production VM Switch uplink and 4 for iSCSI MPIO. Latest drivers and firmware. The VM Switch team spans both physical NICs to provide some level of NIC level fault tolerance, whilst the remaining
    4 NICs for ISCSI MPIO are also balanced across the two NICs for the same reasons.
    The initial driver for upgrading was that we were once again seeing issues with VMQ in the old design with the converged fabric design. The two vNics in the management OS for each of these networks were tagged to specific VLANs (that were obviously accessible
    to the same designated NICs in each of the VM hosts).
    In this setup, a similar issue was being experienced to our present issue. Once again, the Converged Fabric vNICs in the Host OS would on occasion, either lose connectivity or exhibit very high round trip times and packet loss. This seemed to correlate with
    a significant increase in bandwidth through the converged fabric, such as when initiating a Live Migration and would then affect both vNICS connectivity. This would cause packet loss / connectivity loss for both the Live Migration and Cluster Heartbeat vNICs
    which in turn would trigger all sorts of horrid goings on in the cluster. If we disabled VMQ on the physical adapters and the team multiplex adapter, the problem went away. Obviously disabling VMQ is something that we really don’t want to resort to.
    So…. The decision to refresh the environment with 2012 R2 across the board (which was also driven by other factors and not just this issue alone) was accelerated.
    In the new environment, we replaced the Quad Port Broadcom 5720 Daughter Cards in the hosts with new Intel i350 QP Daughter cards to keep the NICs identical across the board. The Cluster heartbeat / Live Migration networks now use an SMB Multichannel configuration,
    utilising the same two NICs as in the old design in two isolated untagged port VLANs. This part of the re-design is now working very well (Live Migrations now complete much faster I hasten to add!!)
    However…. The same VMQ issues that we witnessed previously have now arisen on the production VM Switch which is used to uplink the virtual machines on each host to the outside world.
    The Production VM Switch is configured as follows:
    Same configuration as the original infrastructure: 4 Intel 1GbE i350 NICs, two of which are in one physical quad port NIC, whilst the other two are in an identical NIC, directly below it. The remaining 2 ports from each card function as iSCSI MPIO
    interfaces to the SAN. We did this to try and achieve NIC level fault tolerance. The latest Firmware and Drivers have been installed for all hardware (including the NICs) fresh from the latest Dell Server Updates DVD (V14.10).
    In each host, the above 4 VM Switch NICs are formed into a Switch independent, Dynamic team (Sum of Queues mode), each physical NIC has
    RSS disabled and VMQ enabled, and the Team Multiplex adapter also has RSS disabled and VMQ enabled. Secondly, each NIC is configured to use a single processor core for VMQ. As this is a Sum of Queues team, cores do not overlap
    and as the host processors have Hyper Threading enabled, only cores (not logical execution units) are assigned to RSS or VMQ. The configuration of the VM Switch NICs looks as follows when running Get-NetAdapterVMQ on the hosts:
    Name                   InterfaceDescription                Enabled  BaseVmqProcessor  MaxProcessors  NumberOfReceiveQueues
    VM_SWITCH_ETH01        Intel(R) Gigabit 4P I350-t A...#8   True     0:10              1              7
    VM_SWITCH_ETH03        Intel(R) Gigabit 4P I350-t A...#7   True     0:14              1              7
    VM_SWITCH_ETH02        Intel(R) Gigabit 4P I350-t Ada...   True     0:12              1              7
    VM_SWITCH_ETH04        Intel(R) Gigabit 4P I350-t A...#2   True     0:16              1              7
    Production VM Switch   Microsoft Network Adapter Mult...   True     0:0                              28
    Load is hardly an issue on these NICs and a single core seems to have sufficed in the old design, so this was carried forward into the new.
    The loss of connectivity / high latency (200 – 400 mSec as before) only seems to arise when a VM is moved via Live Migration from host to host. If I setup a constant ping to a test candidate VM and move it to another host, I get about 5 dropped pings
    at the point where the remaining memory pages / CPU state are transferred, followed by a dramatic increase in latency once the VM is up and running on the destination host. It seems as though the destination host is struggling to allocate the VM NIC to a
    queue. I can then move the VM back and forth between hosts and the problem may or may not occur again. It is very intermittent. There is always a lengthy pause in VM network connectivity during the live migration process however, longer than I have seen in
    the past (usually only a ping or two are lost; however, we are now seeing 5 or more before VM network connectivity is restored on the destination host, this being enough to cause a disruption to the workload).
    If we disable VMQ entirely on the VM NICs and VM Switch Team Multiplex adapter on one of the hosts as a test, things behave as expected. A migration completes within the time of a standard TCP timeout.
    VMQ looks to be working, as if I run Get-NetAdapterVMQQueue on one of the hosts, I can see that Queues are being allocated to VM NICs accordingly. I can also see that VM NICs are appearing in Hyper-V manager with “VMQ Active”.
    It goes without saying that we really don’t want to disable VMQ, however given the nature of our clients business, we really cannot afford for these issues to crop up. If I can’t find a resolution here, I will be left with no choice as ironically, we see
    less issues with VMQ disabled compared to it being enabled.
    I hope this is enough information to go on and if you need any more, please do let me know. Any help here would be most appreciated.
    I have gone over the configuration again and again and everything appears to have been configured correctly, however I am struggling with this one.
    Many thanks
    Matt

    Hi Gleb
    I can't seem to attach any images / links until my account has been verified.
    There are a couple of entries in the ndisplatform/Operational log.
    Event ID 7- Querying for OID 4194369794 on TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} failed. OidBuffer is  failed.  Status = -1073676266
    And
    Event ID 6 - Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{A5FDE445-483E-45BB-A3F9-D46DDB0D1749} failed.  Status = -1073741670
    And
    Forwarding of OID 66083 from TeamNic {C67CA7BE-0B53-4C93-86C4-1716808B2C96} due to Member NDISIMPLATFORM\Parameters\Adapters\{207AA8D0-77B3-4129-9301-08D7DBF8540E} failed.  Status = -1073741670
    It would appear as though the two GUIDS in the second and third events correlate with two of the NICs in the VM Switch team (the affected team).
    Under MSLBFO Provider/Operational, there are also quite a few of the following errors:
    Event ID 8 - Failing NBL send on TeamNic 0xffffe00129b79010
    How can I find out which tNIC correlates with "0xffffe00129b79010"?
    Without the use of the nice little table that I put together (that I can't upload), the NICs and Teams are configured as follows:
    Production VM Switch Team (x4 Interfaces) - Intel i350 Quad Port NICs. As above, the team itself is balanced across physical cards (two ports from each card). External SCVMM Logical Switch is uplinked to this team. Serves
    as the main VM Switch for all Production Virtual machines. Team Mode is Switch Independent / Dynamic (Sum of Queues). RSS is disabled on all of the physical NICs in this team as well as the Multiplex adapter itself. VMQ configuration is as follows:
    Interface Name        BaseVMQProc    MaxProcs    VMQ / RSS
    VM_SWITCH_ETH01            10             1          VMQ
    VM_SWITCH_ETH02            12             1          VMQ
    VM_SWITCH_ETH03            14             1          VMQ
    VM_SWITCH_ETH04            16             1          VMQ
    SMB Fabric (x2 Interfaces) - Intel i350 Quad Port on-board daughter card. As above, these two NICs are in separate, VLAN isolated subnets that provide SMB Multichannel transport for Live Migration traffic and CSV Redirect / Cluster
    Heartbeat data. These NICs are not teamed. VMQ is disabled on both of these NICs. Here is the RSS configuration for these interfaces that we have implemented:
    Interface Name        BaseVMQProc    MaxProcs    VMQ / RSS
    SMB_FABRIC_ETH01           18             2          RSS
    SMB_FABRIC_ETH02           18             2          RSS
    ISCSI SAN (x4 Interfaces) - Intel i350 Quad Port NICs. Once again, no teaming is required here as these serve as our ISCSI SAN interfaces (MPIO enabled) to the hosts. These four interfaces are balanced across two physical cards as per
    the VM Switch team above. No VMQ on these NICs; however, RSS is enabled as follows:
    Interface Name        BaseVMQProc    MaxProcs    VMQ / RSS
    ISCSI_SAN_ETH01             2             2          RSS
    ISCSI_SAN_ETH02             6             2          RSS
    ISCSI_SAN_ETH03             2             2          RSS
    ISCSI_SAN_ETH04             6             2          RSS
    Management Team (x2 Interfaces) - The second two interfaces of the Intel i350 Quad Port on-board daughter card. Serves as the Management uplink to the host. As there are some management workloads hosted in this
    cluster, a VM Switch is connected to this team, hence a vNIC is exposed to the Host OS in order to manage the Parent Partition. Teaming mode is Switch Independent / Address Hash (Min Queues). As there is a VM Switch connected to this team, the NICs
    are configured for VMQ, thus RSS has been disabled:
    Interface Name        BaseVMQProc    MaxProcs    VMQ / RSS
    MAN_SWITCH_ETH01           22             1          VMQ
    MAN_SWITCH_ETH02           22             1          VMQ
    We are limited as to the number of physical cores that we can allocate to VMQ and RSS, so where possible we have tried to balance NICs over all available cores where practical (the corresponding cmdlets are sketched after this post).
    Hope this helps.
    Any more info required, please ask.
    Kind Regards
    Matt
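    For reference, the layout described above maps onto the VMQ/RSS cmdlets roughly as follows; a sketch only, using the interface names and processor numbers quoted in the post:
    # VM Switch team members: VMQ on, RSS off, one core each (Sum of Queues).
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH01" -BaseProcessorNumber 10 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH02" -BaseProcessorNumber 12 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH03" -BaseProcessorNumber 14 -MaxProcessors 1
    Set-NetAdapterVmq -Name "VM_SWITCH_ETH04" -BaseProcessorNumber 16 -MaxProcessors 1
    Disable-NetAdapterRss -Name "VM_SWITCH_ETH01","VM_SWITCH_ETH02","VM_SWITCH_ETH03","VM_SWITCH_ETH04"
    # SMB fabric NICs: RSS instead of VMQ, two cores starting at processor 18.
    Set-NetAdapterRss -Name "SMB_FABRIC_ETH01","SMB_FABRIC_ETH02" -BaseProcessorNumber 18 -MaxProcessors 2
    Disable-NetAdapterVmq -Name "SMB_FABRIC_ETH01","SMB_FABRIC_ETH02"
    # Check which VM NICs land on which queue after a Live Migration.
    Get-NetAdapterVmqQueue -Name "VM_SWITCH_ETH*"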

  • Dynamic(Switch Independent) NIC Teaming Mode Problem

    Hi,
    We are using the Switch Independent / Dynamic distribution NIC Teaming configuration on a Hyper-V 2012 R2 cluster. In the following graphs, between 19:48 and 19:58, you can see clearly how the dynamic NIC teaming mode badly affected the web servers' response times.
    We have encountered this problem on the different cluster nodes that use different NIC chipsets Intel or Broadcom and configured with dynamic NIC teaming for VM Network connection.
    If we change the teaming mode as Hyper-V Port or remove NIC teaming for VM network as you can see from the graphic, response times are back to normal.
    Any ideas?
    Regards

    Hi,
    Under heavy inbound and outbound network load, if you are using “Switch Dependent”
    mode, you must configure your switch with the matching teaming mode.
    There are two scenarios:
    “Switch Dependent” with LACP, if your switch supports 802.1ax (LACP).
    “Switch Dependent” mode with Static teaming.
    Example of a Cisco® switch configuration when using “Switch Dependent” mode with “Static” teaming:
    CiscoSwitch(config)# int port-channel1
    CiscoSwitch(config-if)# description NIC team for Windows Server 2012
    CiscoSwitch(config-if)# int gi0/23
    CiscoSwitch(config-if)# channel-group 1 mode on
    CiscoSwitch(config-if)# int gi0/24
    CiscoSwitch(config-if)# channel-group 1 mode on
    CiscoSwitch(config)# port-channel load-balance src-dst-ip
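    The matching team on the Windows side would then be created as switch dependent; a small sketch (team and NIC names are placeholders):
    # Static (switch dependent) team to pair with the port-channel above;
    # use -TeamingMode Lacp instead if the switch side runs 802.1ax / LACP.
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Static -LoadBalancingAlgorithm Dynamic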
    Hope this helps.
    We are trying to better understand customer views on the social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Load Balancing and NIC Teaming

    Hi! I have been looking through lots of links and none of them can fully answer my queries.
    I am to do a write-up on load balancing and NIC teaming. Does anyone know the commonly used load balancing and NIC teaming methods, when to use each method, the advantages and disadvantages of each, and the configuration for each method?
    Sorry, it's a lot of questions, but I have to do a detailed write-up!
    Many thanks in advance :D

    Hi,
    NIC teaming: on a single server you will have multiple NICs. You can team the NICs so that they act together to provide better bandwidth and high availability.
    Example: NIC 1 at 1 Gb and NIC 2 at 1 Gb in a team can act as a single 2 Gb NIC; if one fails, the speed is reduced but you still have HA.
    Load balancing: two servers hosting the same content.
    Example: Microsoft.com can be hosted on two or even more servers, and a load balancer splits the load across the servers based on current load and traffic.
    No disadvantages.

  • Error upgrading Hyper-V Server 2008 R2 to 2012: 0x80070490

    Hi everyone!
    As it is release day for Hyper-V Server 2012, I decided to go ahead and upgrade from Hyper-V Server 2008 R2.  However, I keep getting this error: 
    "Windows Setup cannot find a location to store temporary installation files.  To install Windows, make sure that a partition on your boot disk has at least 1198 megabytes (MB) of free space.
    Error code 0x80070490"
    I have over 20 gigs free on the boot volume, and tons more on my VHD volume, so I don't know why it's doing this.  I have shut down all VMs, there are no snapshots, nothing cluttering up my C: drive, and yes I've rebooted it several times.  I have
    also tried copying the installation DVD to a local volume and installing that way, but I keep getting the same error.  I know the quick solution would be to just format, reinstall clean, and pray that my VMs import into 2012 (which are already safely
    on another volume), but there has to be a reason this is happening.
    Any ideas?  :)  Thanks!

    Hello Vincent,
    I get the same error trying to upgrade my 2008R2 server to 2012. Here's my diskpart output for the partition (it's a mirrored partition):
    DISKPART> detail volume
      Disk ###  Status         Size     Free     Dyn  Gpt
      Disk 2    Online          298 GB  1024 KB   *
      Disk 1    Online          298 GB  1024 KB   *
    Read-only              : No
    Hidden                 : No
    No Default Drive Letter: No
    Shadow Copy            : No
    Offline                : No
    BitLocker Encrypted    : No
    Installable            : Yes
    Volume Capacity        :  297 GB
    Volume Free Space      :  268 GB
    Any suggestions?
