Maximising throughput of a team in a converged networking scenario using the Hyper-V virtual switch

Hi,
Recently I have been looking into the converged network scenario. I have read quite a few resources about it, but I am struggling to make sense of how to maximise the bandwidth I have available.
I understand that a single TCP stream can only go down one physical NIC, so even if I had a team of 6 x 1 Gbps NICs the team would total 6 Gbps, but a single transfer would never go faster than 1 Gbps. I'm pretty sure that's right; it's how I understood it anyway.
To make a converged network, I need to team the adapters together, create a virtual switch on the team, and then create virtual NICs off that virtual switch. Each vNIC can be assigned a different purpose,
such as iSCSI, data sync and backup traffic. In the diagram below I have shown this configuration; each green pNIC is only 1 Gbps, and on my blue vNICs I use the minimum bandwidth weight configuration.
In my example below I have one iSCSI vNIC. If a physical NIC failed, I understand that to mean the bandwidth for iSCSI won't reduce, because a single stream can only go down one
NIC anyway; the team will simply reduce to 5 Gbps. No vNIC will go faster than 1 Gbps.
If this is correct, how would I increase bandwidth to the disk system using iSCSI? Is it simply a case of creating an additional vNIC for iSCSI traffic on the same team
and configuring MPIO so the traffic eventually ends up going through different pNICs? In my mind, MPIO can't round-robin iSCSI data across more physical NICs by itself, because iSCSI and MPIO only know about the vNICs; I assume it is the team that ultimately handles the
placement of each stream onto the pNICs.
Do I simply do the same for data sync and backup traffic?
I'm not too concerned about the VMNET team; I am mostly focused on getting the best out of the core cluster resources I have. I haven't shown the physical switches to the right of the diagram,
but these are already configured to accept the VLAN traffic on the ports, and the same is configured for the other half of the solution.
I'm just a little confused over how I could potentially achieve the 6 Gbps I have at my disposal. In this configuration and testing so far, it seems quite difficult
to exceed about 1.5-2 Gbps combined across all the vNICs (in this particular test server I am limited to one physical disk, so it's hard to gauge the performance; the disk will take roughly 200 MB/s, about 2 Gbps, maximum). That's with iSCSI (file copying) and data
sync going on between two servers at the same time.
Many thanks for your help
Steve
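For reference, a minimal PowerShell sketch of the kind of setup described above, including the "second iSCSI vNIC plus MPIO" idea from the question. All adapter, team, switch and vNIC names and the weight values are placeholders, and whether the two MPIO sessions actually land on different pNICs still depends on how the team places each TCP flow.
# Team the six 1 Gbps adapters (names are placeholders)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4","NIC5","NIC6" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Virtual switch on top of the team, using minimum-bandwidth weights
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false
# Host vNICs for the different traffic classes, including a second iSCSI vNIC for MPIO
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-1" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI-2" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "DataSync" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Backup" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-1" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI-2" -MinimumBandwidthWeight 30
# MPIO (requires the Multipath-IO feature) claiming iSCSI devices, with round-robin across the sessions
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
Each iSCSI vNIC would then carry its own session to the target, and MPIO round-robins I/O across those sessions; the team decides which pNIC each session's TCP flow actually uses.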


Similar Messages

  • Routing of Network Traffic Between VLANs on a Hyper-V Virtual Switch

    I am trying to discover how network traffic generated by reads and writes to RDVH User Profile Disks is routed through my network. I have a pool of Hyper-V
    desktop VMs in their own VLAN (vlan1) with their own NIC bound to a Hyper-V virtual switch. On the same server I have another management NIC for the OS on a different VLAN (vlan2), and finally on another server I have a virtual machine which hosts the User
    Profile Disks. The VM that hosts the User Profile Disks is on the same VLAN as the management NIC for the OS (vlan2).
    When tracing the flow of network traffic to and from the User Profile Disk VM, it all comes through the vlan2 NIC on the server where the virtual
    desktop VMs reside, and nothing comes through the vlan1 NIC on this server. I would have expected the traffic to the virtual desktop VMs to come in through the desktop VMs' VLAN NIC (vlan1).
    This leads me to two possibilities as to how the desktop VMs on vlan1 get their data to and from the User Profile Disk VM on vlan2 without routing:
    1. The desktop VMs' Hyper-V virtual switch automatically routes the User Profile Disk traffic from vlan1 to vlan2 internally using a virtual switch learning algorithm.
    2. Hyper-V itself handles all reads and writes to the User Profile Disks, and since that uses the management NIC for the OS it is already on vlan2, so the network traffic never leaves vlan2.
    Any comments on the reason for traffic taking the path it does (as outlined above), as opposed to being layer-3 routed from vlan1 to vlan2?

    Thanks for your reply Brian. I think your last paragraph above is what I have set up:
    "If you simply forward one VLAN to one physical NIC, the VMs on the corresponding external virtual switch simply end up on that VLAN without Hyper-V doing anything at all - but this dedicates one physical NIC per VLAN."
    The virtual machines' NIC that the vSwitch is patched to and the NIC for the OS are on different VLANs (both NICs are plugged into untagged ports on my switch).
    The vNICs on the VMs are not tagged to a VLAN (the VLAN ID 'Enable virtual LAN identification' box is unticked).
    My vSwitch is set up as connected to 'External Network' and isn't shared with the management network.
    What I am trying to get at is: how would network traffic on the VLAN my VMs are on get to the VLAN that the NIC for the OS is on without going through the router (even though a routable path is available)?
    Is it possible the 'learning algorithm' referenced in the TechNet article below is involved here (sorry, I can't post links)?
    "For the virtual machine to communicate with the management operating system, there are two options. One option is to route the network packet through the physical network adapter and out to the physical network, which then returns the packet back to
    the server running Hyper-V using the second physical network adapter. Another option is to route the network packet through the virtual network, which is more efficient. The option selected is determined by the virtual network. The virtual network includes
    a learning algorithm, which determines the most efficient port to direct traffic to and will send the network packet to that port. Until that determination is made by the virtual network, network packets are sent out to all virtual ports."
    Thanks,
    Andrew

  • Assign management ip address with SCVMM 2012 R2 for hyper-v converged network?

    Hi,
    I am setting up a converged network for our Hyper-V clusters using vNICs for the different network traffic, including management, live migration, cluster/CSV, Hyper-V etc.
    The problem is, how do I assign the Hyper-V hosts a management IP address? They need a network connection on the management network for SCVMM to manage them in the first place. How do I take the existing management IP address that is directly assigned to the
    host and transfer it to the new vNIC so SCVMM can still manage it? It is something of a chicken-and-egg situation. I thought about assigning a temporary IP address to the host initially, but am worried that assigning the address will cause problems, as
    the host would then have two default gateways configured. How have others managed this scenario?
    Thanks
    Microsoft Partner
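    For what it's worth, one approach outside VMM is to let the virtual switch creation keep host connectivity while the management vNIC is created. A rough sketch, assuming a team named "MgmtTeam" and an illustrative VLAN ID of 10 (both placeholders):
    # Creating the switch with -AllowManagementOS $true creates a host vNIC named after the switch;
    # in many cases the existing IP configuration is carried over to it, but verify and re-apply the
    # static address afterwards if it is not.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "MgmtTeam" -MinimumBandwidthMode Weight -AllowManagementOS $true
    Rename-VMNetworkAdapter -ManagementOS -Name "ConvergedSwitch" -NewName "Management"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
    The host then stays reachable on its management address while the remaining vNICs are added, which sidesteps the temporary-address and double-default-gateway concern.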

    Rule of thumb: Use one connected network for your Fabric networks (read the whitepaper), and use VLAN based networks for your tenant VMs when you want to associate VM Networks with each VLAN.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )
    We don't have tenants as such due to this being an environment on a private company LAN for sole use of the company virtual machines.
    What I have so far:
    I created "one connected network" for Hyper-V-Virtual-Machine traffic.
    Unchecked "Allow new VM networks created on this logical switch to use network virtualization"
    Checked "Create a VM network with the same name to allow vms to access this logical network directly"
    This logical network has one site called UK.
    Within this site I have defined all of the different VLANS for this site.
    Created IP pools for each VLAN subnet range.
    I hope I understand this correctly. Started reading the whitepaper from cover to cover now.
    Microsoft Partner

  • Using second wireless network for client Hyper-V VMs

    Hello all,
    I have a question concerning client Hyper-V on Win8.1 Pro Preview and wirelessly connecting only the VMs. I'm a student at a university and the university wired Ethernet network only allows 2 devices to be registered per student. So for me, those 2 would
    be the host OS and my Xbox, which leaves no room for any of my VMs. I need a way to connect my VMs to the wireless network (which has no such limitations) mostly for programming assignments using Linux.
    I'm thinking of buying a wireless NIC for my desktop and using that as a second external switch that the VMs will use for connectivity. The host will use the first external switch that currently exists. However, I'm not sure how I can keep the host on the
    wired LAN while also installing a wireless NIC only for the VMs. Do I install the NIC, make it an external switch without host access, then join the network? Or do I join the network first then set it up as a virtual switch? Can client versions of Windows
    handle multiple simultaneous NICs?
    Also, would the host even need an external switch in this case? I have one since I set up the VMs at home where all machines can use the wired NIC.

    Ahh, that's right. Check out this post; it should explain the problem and offer a workable solution.
    Hi,
    I remember some articles mentioned that Hyper-V does not allow you to bind a wireless network adapter to an external virtual switch.
    The virtual switch in Hyper-V is a "layer-2 switch", which means that it switches (i.e. determines the route a certain Ethernet packet takes) using the MAC addresses that uniquely identify each (physical and virtual) network adapter. The MAC addresses
    of the source and destination machines are sent in each Ethernet packet, and a layer-2 switch uses these to determine where it should send the incoming packet. An external virtual switch is connected to the external world through the physical NIC. Ethernet packets
    from a VM destined for a machine in the external world are sent out through this physical NIC. This means that the physical NIC must be able to carry the traffic from all the VMs connected to this virtual switch, which implies that the packets flowing through
    the physical NIC will contain multiple MAC addresses (one for each VM's virtual NIC). This is supported on wired physical NICs (by putting the NIC in promiscuous mode), but not on wireless NICs, since the wireless channel established by the WiFi NIC
    and its access point only allows Ethernet packets with the WiFi NIC's MAC address and nothing else. In other words, Hyper-V couldn't use WiFi NICs for an external switch if it continued to use the current virtual switch architecture.
    To work around this limitation, you can use the Windows network bridging feature. Create an Internal virtual network and name it "External"; the system will create a virtual network adapter for it. Then create a network bridge between your WiFi NIC and that virtual "External" network adapter.
    Assign the "External" network to your VMs so that they have an internet connection.
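    The switch-creation half of that workaround can be scripted; a small sketch, assuming the switch is called "External" as in the description above (the bridge between the WiFi adapter and the new "vEthernet (External)" adapter is then typically created from the Network Connections control panel, since there is no dedicated built-in cmdlet for it):
    # Internal switch that the VMs and the network bridge will share
    New-VMSwitch -Name "External" -SwitchType Internal
    # Attach a VM's network adapter to it ("LinuxVM" is a placeholder VM name)
    Connect-VMNetworkAdapter -VMName "LinuxVM" -SwitchName "External"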
    For more information please refer to following MS articles:
    Bringing Hyper-V to “Windows 8”
    http://blogs.msdn.com/b/b8/archive/2011/09/07/bringing-hyper-v-to-windows-8.aspx
    Hyper-V: How to Run Hyper-V on a Laptop
    http://social.technet.microsoft.com/wiki/contents/articles/185.hyper-v-how-to-run-hyper-v-on-a-laptop-en-us.aspx
    Configuring Virtual Networks
    http://technet.microsoft.com/en-us/library/cc816585(v=WS.10).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support
    source: 
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/d9fb7866-0fbc-4c06-b8ea-df3c35c75c74/windows-8-hyperv-bridged-wifi-issues-when-creating-virtual-machines
    Remember to select 'Mark as Answer' for any reply that provided a solution

  • Hyper-V Network Virtual switches

    Hi,
    Good Day!
    We have some doubts on Hyper-V Network Virtual Switches:
    There are two physical NICs on a Hyper-V host. One is connected to the public network and the other connects to the internal VM network. We create two virtual switches, "CustomerNetwork" and "PublicNetwork". "CustomerNetwork" uses 192.168.x.x
    series IP addresses and "PublicNetwork" has a public IP address.
    We have also set up a software router VM with two virtual NICs. One vNIC connects to "CustomerNetwork" and the other connects to "PublicNetwork". We assign an IP address to the vNIC connected to "CustomerNetwork"; the address
    assigned is 192.168.2.99.
    The confusion is that we don't know which IP address we should assign to the vNIC of the router VM that is connected to "PublicNetwork". We have only one public IP address. Should we assign the public IP address to the vNIC of the router VM, or to the "PublicNetwork"
    virtual switch, or should we remove the public IP address from the virtual switch?
    Thank you for your time for clarifying our doubts! As always, appreciate your help!
    Thank You,
    Aurther L

    On your router VM, the public IP should be assigned to the vNIC connected to the public network. Be aware that each physical NIC can only be bound to one external virtual switch. Alternatively, create two vNICs from the private network, assign the public IP to one and a local IP to the other,
    assuming you have an internet connection on the local network.
    Hope this helps
    Thanks
    mumtaz

  • Poor Performance over Converged Networks using LBFO Team

    Hi,
    I have built a Hyper-V 2012 R2 cluster with converged networking and correct logical networks.
    However, I notice that I am not able to utilise more than approximately 1 Gb over the converged network for live migration, cluster, management etc., even though they have a total of 8 x 1 Gb physical adapters available (switch independent, dynamic load balancing).
    e.g.
    Hyper-V host1 and Hyper-V host2 each have 4 x 1 Gb pNICs connected to switch1 and 4 x 1 Gb pNICs connected to switch2. Switch1 and switch2 have a port channel configured between them.
    I can see that if I carry out a file copy between Hyper-V host1 and Hyper-V host2 (both configured exactly the same), the Management, Live-Migration and Cluster vNICs on the host each use approximately 300 Mbps during the copy but never get anywhere near
    their full utilisation (here I would be expecting to see approximately the total available team bandwidth, which is 8 Gbps, as there is no other traffic using these networks).
    Is it because the host vNICs cannot use vRSS (VMQ)? Although I struggle to see this as the issue, because I don't see any CPU cores maxing out during the copy. Or maybe someone could recommend anything else to check?
    Microsoft Partner
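    Before digging deeper, it may be worth confirming what the team and the host vNICs are actually configured with; a quick sketch (the team name is a placeholder):
    # Teaming mode and load-balancing algorithm in use
    Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm
    # State of the individual team members
    Get-NetLbfoTeamMember -Team "ConvergedTeam"
    # Host vNICs, their switch and their minimum-bandwidth weight
    Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName, @{n='MinWeight';e={$_.BandwidthSetting.MinimumBandwidthWeight}}
    Note that with a switch-independent/dynamic team, a single TCP stream (such as one file copy per vNIC) is still expected to stay close to one team member's line rate, which is consistent with the explanation in the reply below.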

    Without doing a case study on your environment, it would be pretty much impossible to give you a truly correct answer on what you should do.
    In the generic sense, using at least partial convergence is usually in your best interests. If you have a completely unconverged environment, then it is absolutely guaranteed that any given adapter will only have a single gigabit pipe to work with no matter
    what. Aidan's example of dual concurrent Live Migrations is good, but it's only one of a nearly infinite set of possibilities. As another example, let's say that you start a hypervisor-level over-the-network backup and at the same time you copy an ISO file
    from a remote file server into the management operating system. In an unconverged network, they'll both be fighting for attention from one adapter. If you have 8 total adapters, you can almost guarantee that at least one of the other 7 is sitting idle -- and
    there's nothing you can do about it.
    If you're converged (dependent upon setup), then you have a pretty good chance that those activities will be separated across physical adapters. Extrapolate those examples to the management OS and a stack of virtual machines, all cumulatively involved in
    dozens or hundreds of individual transmissions, and you see pretty quickly that convergence gives you the greatest capability for utilization of available resources. It's like when the world went from single- to dual-core computers. A dual-core system isn't
    twice as fast as its single-core counterpart. Its operating system's thread scheduler just has two places to run threads instead of one. But, the rule of one thread per core still applies. The name of the game is concurrency, not aggregation.
    Again, that's in the generic sense. When you get into trying to tap the power of SMB Multichannel, that changes the equation. That's where the case study picks up.
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."

  • When setting up converged network in VMM cluster and live migration virtual nics not working

    Hello Everyone,
    I am having issues setting up a converged network in VMM. I have been working with MS engineers to no avail; I am very surprised at their level of expertise, as they had no idea what a converged network even was. I had far more
    experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone, including our consultants, says my setup is correct.
    What I want to do:
    I have servers with 5 NICs and want to use 3 of the NICs for a team, and then configure cluster, live migration and host management as virtual network adapters. I have created all my logical networks and a port profile with the uplink defined as a team and the
    networks selected, then created a logical switch and associated the port profile. When I deploy the logical switch and create the virtual network adapters, the logical switch works for VMs and my management NIC works as well. The problem is that the cluster and live
    migration virtual NICs do not work. The correct VLANs get pulled in for the corresponding networks, and if I run Get-VMNetworkAdapterVlan it shows cluster and live migration in VLANs 14 and 15, which is correct. However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM. I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard
    switch.
    I am really losing faith in VMM fast.
    Hosts are 2012 R2 and VMM is 2012 R2, all fresh builds with the latest drivers.
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure your are teaming the correct physical adapters where the VLANs are trunked through the connected ports?
    Note: if you create the teaming configuration outside of VMM and then import the hosts into VMM, VMM will not recognize the configuration.
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Failed communication over converged network

    Hi
    We are trying to create converged networks and it seems to have failed to work completely. On the host server, 4 x NICs are configured in 2 teams (2 NICs each); we then created virtual switches as follows:
    New-VMSwitch "MgmtSwitch-1" -MinimumBandwidthMode Weight -NetAdapterName "Mgmttm-1" -AllowManagementOS $false
    The Cisco switch ports are configured in trunk mode with VLANs 120 (for management, cluster communication and client connectivity) and 125 (for live migration and cluster communication).
    Later we created vNICs as follows:
    Add-VMNetworkAdapter -ManagementOS -Name "Mgmt-Clust" -SwitchName "MgmtSwitch-1"
    Add-VMNetworkAdapter -ManagementOS -Name "Live-Migration" -SwitchName "MgmtSwitch-1"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Mgmt-Clust" -Access -VlanId 120
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Live-Migration" -Access -VlanId 125
    But once we set the VLAN, the vNICs fail to communicate with the external network (without a VLAN ID we are able to set an IP for VLAN 120 and the NIC works fine).
    We would appreciate your help in correcting this issue.
    LMS
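    When a host vNIC stops passing traffic as soon as a VLAN ID is applied, it is worth confirming what Hyper-V actually thinks is configured before looking at the physical side; a quick sketch of the checks:
    # VLAN mode and ID recorded for each host vNIC
    Get-VMNetworkAdapterVlan -ManagementOS
    # The host vNICs themselves and the switch they connect to
    Get-VMNetworkAdapter -ManagementOS | Format-Table Name, SwitchName, Status
    If those look right, the usual remaining suspect is the trunk configuration on the Cisco ports for both team members: VLANs 120 and 125 must be allowed on the trunk, and a common cause of exactly this symptom is that the VLAN is carried untagged (as the native VLAN), in which case it only works while the vNIC is left untagged.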

    Hi LMS,
    In addition, please refer to the following link regarding multiple default gateways:
    http://support.microsoft.com/kb/159168
    Best Regards
    Elton Ji

  • Using NIC Teaming and a virtual switch for Windows Server 2012 host networking and Hyper-V.

    Using NIC Teaming and a virtual switch for Windows Server 2012 host networking!
    http://www.youtube.com/watch?v=8mOuoIWzmdE
    Hi, thanks for reading. I may well have my terminology incorrect here, so I will try to explain as best I can; apologies from the start.
    It's a bit of both Hyper-V and Server 2012 R2.
    I am setting up a lab with Server 2012 R2. I have several physical network cards that I have teamed, called "HostSwitchTeam", and from that team I have made several virtual network adapters, such as in the
    examples below.
    New-VMSwitch "MgmtSwitch" -MinimumBandwidthMode weight -NetAdaptername "HostSwitchTeam" -AllowManagement $false
    Add-VMNetworkAdapter -ManagementOS -Name "Vswitch" -SwitchName "MgmtSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "MgmtSwitch"
    When I install Hyper-V and it comes to adding a virtual switch during installation, it only shows the individual physical network cards and the
    HostSwitchTeam for selection. When installed, it shows the Microsoft Network Adapter Multiplexor Driver as the only option.
    Is this correct, or how does one use the vSwitch made above and incorporate it into Hyper-V so that a weight can be put against it?
    I am still trying to get my head around vSwitches, VMNetworkAdapters etc., so I am somewhat confused as to the way forward at this time and may have missed the plot altogether!
    Any help would be much appreciated.
    Paul
    Paul Edwards
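    In case it helps, the weights are not chosen in the Hyper-V wizard at all; they are applied to the switch's default flow and to each host vNIC after creation. A small sketch using the names from the commands above (the weight values are only illustrative):
    Set-VMSwitch "MgmtSwitch" -DefaultFlowMinimumBandwidthWeight 50
    Set-VMNetworkAdapter -ManagementOS -Name "Vswitch" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 20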

    Hi P.J.E,
    >>I have teams so a bit confused as to the adapter bindings and if the teams need to be added or just the vEthernet Nics?.
    Nic 1, 2 -> HostVMSwitchTeam
    Nic 3, 4, 5 -> HostMgmtSwitchTeam
    >>The adapter Binding settings are:
    HostMgmtSwitchTeam
    V-Curric
    Nic 3
    Nic 4
    Nic 5
    V-Livemigration
    HostVMSwitch
    Nic 1
    Nic 2
    V-iSCSI
    V-HeartBeat
    Based on my understanding of the description, "HostMgmtSwitchTeam" and "HostVMSwitch" are teamed NICs.
    You can think of them as two physical NICs (do not use NICs 1, 2, 3, 4 and 5 any more; there are just the two team interfaces, "HostMgmtSwitchTeam" and "HostVMSwitch").
    V-Curric, V-Livemigration, V-iSCSI and V-HeartBeat are just host vNICs (you can change their names and then check whether the virtual switch name changes).
    Best Regards
    Elton Ji

  • Converged Network 2012 R2, host, VMs inaccessible

    I'm just building a new server (an HP 360 G7 with the latest firmware and drivers I can find) with 2012 R2.
    I'm having strange problems with the network: it works, then it stops without me doing anything. I created a team (connected to an Extreme Networks switch that has these four ports in trunk mode; all 4 VLANs are available on these ports), a virtual switch and
    virtual networks using these commands:
    New-NetLbfoTeam -Name ConvergedNetTeam -TeamMembers pNIC-300,pNIC-301,pNIC-400,pNIC-401,pNIC-901 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false
    Set-NetLbfoTeamMember -Name pNIC-901 -Team ConvergedNetTeam -AdministrativeMode Standby
    New-VMSwitch "ConvergedNetSwitch" -NetAdapterName "ConvergedNetTeam" -AllowManagementOS 0 -MinimumBandwidthMode Weight
    Set-VMSwitch "ConvergedNetSwitch" -DefaultFlowMinimumBandwidthWeight 50
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedNetSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Add-VMNetworkAdapter -ManagementOS -Name "LAN" -SwitchName "ConvergedNetSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "LAN" -MinimumBandwidthWeight 10
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedNetSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
    Add-VMNetworkAdapter -ManagementOS -Name "Live Migration" -SwitchName "ConvergedNetSwitch"
    Set-VMNetworkAdapter -ManagementOS -Name "Live Migration" -MinimumBandwidthWeight 40
    Start-Sleep -s 15
    New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.10.0.67 -PrefixLength 24 -DefaultGateway 10.10.0.1
    Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses "10.10.0.47,10.10.0.53,192.168.2.51"
    Set-NetIPInterface -InterfaceAlias "vEthernet (LAN)" -dhcp Disabled -verbose
    New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress 192.168.4.7 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias "vEthernet (Live Migration)" -IPAddress 192.168.5.7 -PrefixLength 24
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 150
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LAN" -Access -VlanId 100
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 400
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Live Migration" -Access -VlanId 5
    The LAN virtual NIC was created since one of the VMs needs to be in VLAN 100. Are the steps above correct? Please assist me.

    Hi CypherMike,
    "I'm having strange problems with the network, it works, then it stops withouth me doing anything"
    "works"and "stop ", could you please post more details about them ?
    "I'm just building a new server (HP 360 G7 with the latest firmware, drivers I can find) with 2012 R2."
    Do you mean the cluster is based on VM level in HOST ?
    If it is the case , my suggestion is just create a internal switch for "cluster" and "live migration" .
     Hope this helps
    Best Regards
    Elton Ji

  • Hyper-V NIC Team Load Balancing Algorithm: TransportPorts vs HyperVPorts

    Hi, 
    I'm going to need to configure a NIC team for the LAN traffic for a Hyper-V 2012 R2 environment. What is the recommended load balancing algorithm? 
    Some background:
    - The NIC team will deal with LAN traffic (NOT iSCSI storage traffic)
    - I'll set up a converged network. So there'll be a virtual switch on top of this team, which will have vNICs configured for each cluster, live migration and management
    - I'll implement QOS at the virtual switch level (using option -DefaultFlowMinimumBandwidthWeight) and at the vNIC level (using option -MinimumBandwidthWeight)
    - The CSV is set up on an Equallogics cluster. I know that this team is for the LAN so it has nothing to do with the SAN, but this reference will become clear in the next paragraph. 
    Here's where it gets a little confusing. I've checked some of the EqualLogic documentation to ensure this environment complies with their requirements as far as storage networking is concerned. However, the Dell publication
    TR1098-4 recommends creating the LAN NIC team with the TransportPorts load balancing algorithm, whereas in some of the Microsoft resources (e.g. http://technet.microsoft.com/en-us/library/dn550728.aspx) the recommended load balancing algorithm is HyperVPorts.
    Just to add to the confusion, in this Microsoft TechEd presentation, http://www.youtube.com/watch?v=ed7HThAvp7o, the recommendation (at around minute 8:06) is to use the Dynamic algorithm mode. So obviously there are many ways to do this, but which one is
    correct? I spoke with EqualLogic support and the rep said that their documentation recommends the TransportPorts LB algorithm because that's what they've tested and know works. I'm wondering what the response from a Hyper-V expert would be to this question. Anyway,
    any input on this last point would be appreciated.

    Gleb,
    >>See Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management for more info
    Thanks for this reference. It seems that I have an older version of this document where there's absolutely no mention of the Dynamic LBA, hence my confusion when the recommendation in the Microsoft TechEd presentation was to use Dynamic. I almost implemented this environment with switch-dependent teaming and Address Hash distribution because, based on the older version of the document, this combination offered:
    a) Native teaming for maximum performance and switch diversity is not required; or
    b) Teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver
    The new version of the document recommends Dynamic over the other two LBAs. The analogy that the document
    makes between TCP flows and human speech was really helpful for me to understand what this algorithm is doing. For those who will never read the document, I'm referring to this:
    "The outbound loads in this mode are dynamically balanced based on the concept of
    flowlets. Just as human speech has natural breaks at the ends of words and sentences, TCP flows (TCP communication streams) also have naturally
    occurring breaks. The portion of a TCP flow between two such breaks is referred to as a flowlet. When the dynamic mode algorithm detects that a flowlet boundary has been encountered, i.e., a break of sufficient length has occurred in the TCP flow,
    the algorithm will opportunistically rebalance the flow to another team member if appropriate. The algorithm may also periodically rebalance flows that do not contain any flowlets if circumstances require it. As a result, the affinity
    between TCP flow and team member can change at any time as the dynamic balancing algorithm works to balance the workload of the team members."
    Anyway, this post made my week. You sir are deserving of a beer!
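    If an existing team needs to be moved to the Dynamic algorithm discussed above, it can be changed in place; a small sketch (the team name is a placeholder):
    Set-NetLbfoTeam -Name "LANTeam" -LoadBalancingAlgorithm Dynamic
    # Confirm the change took effect
    Get-NetLbfoTeam -Name "LANTeam" | Format-List Name, TeamingMode, LoadBalancingAlgorithm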

  • Can you use NIC Teaming for Replica Traffic in a Hyper-V 2012 R2 Cluster

    We are in the process of setting up a two-node 2012 R2 Hyper-V cluster and will be using the Replica feature to make copies of some of the hosted VMs to an off-site, standalone Hyper-V server.
    We have planned to use two physical NICs in an LBFO team on the cluster nodes for the Replica traffic, but wanted to confirm that this is supported before we continue.
    Cheers for now
    Russell

    Sam,
    Thanks for the prompt response, presumably the same is true of the other types of cluster traffic (Live Migration, Management, etc.)
    Cheers for now
    Russell
    Yep.
    In our practice we actually use converged networking, which basically NIC-teams all physical NICs into one pipe (switch independent/dynamic/active-active), on top of which we provision vNICs for the parent partition (host OS), as well as guest VMs. 
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA http://superwidgets.wordpress.com (Please take a moment to Vote as Helpful and/or Mark as Answer, where applicable) _________________________________________________________________________________
    Powershell: Learn it before it's an emergency http://technet.microsoft.com/en-us/scriptcenter/powershell.aspx http://technet.microsoft.com/en-us/scriptcenter/dd793612.aspx

  • How Can we force the Hyper-V Replica to use a dedicated network between two hyper-v hosts?

    Dear Team,
    How can we force Hyper-V Replica to use a dedicated network between two hyper-v hosts?
    For live migration, we can choose the desired network in Hyper-V Manager (Use these IP addresses for Live Migration).
    I have two 10G adapters teamed together; the virtual switch is created on top as a converged network with several vNICs (ManagementOS, LiveMigration, Backup, etc...).
    Each network is set with a specific VLAN to isolate the traffic.
    The two Hyper-V hosts are on the same LAN and domain.
    Thank you.
    Regards,

    I have accomplished this by using a DNS pointer specifying an IP address on that dedicated network. That will force traffic down that network.
    John Johnston
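    As a concrete (hypothetical) illustration of that "DNS pointer" idea: on each host, make the replica partner's name resolve to its address on the dedicated replica network, for example with a hosts-file entry (the name and IP below are placeholders; a dedicated DNS record pointing at the replica-network address achieves the same thing):
    Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.168.50.12  hv-replica-partner.contoso.local"
    Because Hyper-V Replica connects to the partner by name, resolving that name to the dedicated network's IP steers the replication traffic onto that network.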

  • Poor network when create virtual switch in hyper-v 2012

    Hello,
    I installed Windows Server 2012 Standard on a Dell R620 and added the Hyper-V role.
    When I create a virtual switch, the network speed on the host (the server itself) is very poor (copying a file from a file server runs at about 300 Kb/s).
    When I remove the virtual switch, the network speed is better (copying a file from the file server runs at 10-20 Mbps).
    I spoke with the Dell support centre and, after a check-up, no problem was found.
    I tried all the changes with the offload/checksum settings etc., but nothing helped.
    Please could you comment on this issue?
    Thanks
    Dotan

    Need some additional information
    Which Team type are you utilizing? 
    Which Raid Controller are you using on the hosts?
    I would suggest you try this as well; while it's specific to 10 Gb CNAs, it can help with 1 Gb in some cases: SMB FILE COPY PERFORMANCE
    Also, utilizing the low-latency BIOS settings can often offer additional NIC performance:
    Dell 12th Gen Low Latency BIOS Settings
    -Sean

  • WIN2008R2: No external network access from Hyper-V guest using Virtual Machine Bus - Legacy ok

    Windows Server 2008 R2 Enterprise x64 Hyper-V host
    HP DL370 G6, HP NC375i integrated Quad Port Multifunction Gigabit Server Adapter
    Static IP (.11), internet connection via a Cisco switch and PIX firewall
    External virtual network connected to port 1, allowing management OS to share the network adapter
    Windows Server 2008 R2 Enterprise x64 guest
    Static IP (.21) on the same subnet, same subnet mask and default gateway (.1) as host
    * with Virtual Machine Bus network adapter:
     - host can ping guest (.21), switch (.5), and has internet access
     - guest can ping host (.11), but cannot ping switch (.5) and has no internet access. 
     - network map shows the guest and host connected via a hub (Microsoft virtual switch), connected to a gateway, then a red X between gateway and internet
    * with Legacy network adapter:
     - host can ping guest (.21), switch (.5), and has internet access
     - guest can ping host (.11), switch (.5) and has internet access. 
     - network map shows the guest and host connected via a hub (Microsoft virtual switch), connected to a gateway, and no red X between gateway and internet
    I installed Hyper-V before adding the HP network drivers (there's a known problem if you install Hyper-V after adding the network drivers), so that's not it.
    This happens both with straight network adapters, and also when two are configured as a network team - no difference.
    I don't want to use the Legacy network adapter as the performance is terrible, but right now I have no choice as otherwise I can't get network or internet access from the guest. 
    Any ideas?

    Hi,
    Please refer to the following post to see whether you can resolve the issue.
    Network Adapter (not Legacy) does not work on Virtual Machine after installation through ISO
    http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2virtualization/thread/b1e9d24c-e298-472e-ad72-90cf079f6fbd
    By the way, did you only encounter this issue with one VM or with all VMs? Please do the same test on VMs with other versions of Windows, such as Windows Server 2008 or Windows Server 2003.
    Best Regards,
    Vincent Hu
