Microsoft NLB on Nexus 5596T

  Hi guys,
We recently installed two Nexus 5596Ts in a cluster at a customer site. They are currently migrating their servers to a VMware solution and have asked whether the Nexus supports Microsoft NLB in multicast mode.
I reached out to Cisco TAC on this, but I haven't gotten any confirmation of the commands that are required. Can you advise what commands are required to allow servers to see the NLB server? So far the customer has been able to migrate and the solution is working, but is it recommended to set static mappings on the Nexus for the NLB server?
Thanks much.

Hi,
Take a look at this document; I applied this configuration once and it worked fine.
There are 3 modes for Microsoft Network Load Balancing (NLB):
1. Unicast
2. Multicast
3. IGMP multicast (check the IGMP checkbox in the GUI while in multicast mode)
In general, every mode uses a different sending and receiving MAC address while keeping the unicast virtual IP address (VIP) constant across all 3 modes. This makes switches flood traffic at layer 2, since the switch either never sees the destination MAC address come in on any of its ports (and hence can't learn it) or floods the multicast MAC address. Either multicast mode, IGMP or normal multicast, also requires static ARP entries on the gateway router, since Cisco routers will not learn an ARP reply that ties a multicast MAC address to a unicast IP address.
MAC addresses in the 3 modes break down into the following components:
The first byte of the MAC address indicates the type of NLB configuration: 01=IGMP, 02=unicast, 03=multicast (note: bit 2 marks the locally administered multicast space)
The second byte (bf) is the same for unicast and multicast mode (but not IGMP multicast mode, which uses the standard 01-00-5e MAC prefix)
The last two (IGMP multicast mode) or four (unicast or multicast mode) bytes encode the virtual IP address, i.e. c0=192, a8=168, 04=4, 0a=10. Thus the IP 192.168.4.10 has a multicast MAC address of 03-bf-c0-a8-04-0a, while its IGMP multicast MAC address would be 01-00-5e-7f-04-0a
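The VIP-to-MAC mapping described above can be sketched in a few lines of Python (a hypothetical helper for illustration, not part of any Cisco or Microsoft tooling):

```python
def nlb_macs(vip):
    """Derive the NLB virtual MAC addresses for a cluster VIP."""
    octets = [int(o) for o in vip.split(".")]
    # Multicast mode: 03-bf followed by all four octets of the VIP.
    multicast = "03bf.{:02x}{:02x}.{:02x}{:02x}".format(*octets)
    # IGMP multicast mode: IANA prefix 01-00-5e-7f plus the last two octets.
    igmp = "0100.5e7f.{:02x}{:02x}".format(octets[2], octets[3])
    return multicast, igmp

print(nlb_macs("192.168.4.10"))  # ('03bf.c0a8.040a', '0100.5e7f.040a')
```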
Summary of configuration per NLB mode (switch and router sides):
Unicast:
  Switch: mac address-table static 02bf.xxxx.xxxx vlan <y> interface <port>
  Router: not required (unicast MAC address with a unicast IP address)
Multicast:
  Switch: mac address-table static 03bf.xxxx.xxxx vlan <y> interface <port>
          (N7K, NX-OS 5.2(1): mac address-table multicast 03bf.xxxx.xxxx vlan <y> interface <port>)
  Router: arp <vip> 03bf.xxxx.xxxx arpa
IGMP multicast:
  Switch: mac address-table static 0100.5e7f.xxxx vlan <y> interface <port>
  Router: arp <vip> 0100.5e7f.xxxx arpa
Basically, you have to add the static MAC entry on all ports where the MAC address of your Microsoft cluster was learned (including the vPC peer-link, if one exists).
On the router (which could be the N5K, if it is doing the L3 boundary) you have to add the ARP entry in the VLAN interface configuration mode.
The cluster then comes up instantly.
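As a concrete worked example for multicast mode with the VIP 192.168.4.10 used above (the VLAN and interface numbers here are placeholders; adjust for your environment):

```
! Switch side: pin the NLB multicast MAC to every port where it was learned,
! including the vPC peer-link
mac address-table static 03bf.c0a8.040a vlan 100 interface Ethernet1/10
! Router side (e.g. the N5K SVI doing the L3 boundary): static ARP for the VIP
interface Vlan100
  ip arp 192.168.4.10 03bf.c0a8.040a
```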
Richard

Similar Messages

  • Cisco Nexus 5596T Switch question

    when did the Cisco Nexus 5596T Switch hit the market?

    I don't remember exactly, but it's been a while.
    Looking at the release notes, support appears as of Cisco NX-OS Release 5.2(1)N1(1b), and the software download page shows this release was posted on 25 September 2012.
    Regards

  • SG switches and Microsoft NLB

    Hi,
    does anyone know if the SG300 switches can be used with Microsoft NLB in Multicast mode?
    I know that on traditional Catalyst switches you can statically "map" IPs to MACs and then to multiple ports, but this doesn't seem to work correctly on the SG switches - it gives an error about the MAC not being unicast.
    So, any help or links to Cisco SG examples would be appreciated.
    thanks
    John

    I have not tested it yet, but I want to know this as well.
    Keep in mind that you need to use the multicast MAC address, not your normal MAC address. It is bound to the cluster's unicast IP address.

  • Microsoft NLB and Cisco 4500 VSS

    Hi,
    I have a pair of Cisco 4507 switches in VSS mode. A server (10.4.1.166) using the Microsoft NLB MAC address (03bf.0a04.01a6) is connected to VSS node 1 on port Gi1/6/43. The following is configured on the switch.
    arp 10.4.1.166 03bf.0a04.01a6 ARPA
    mac address-table static 03bf.0a04.01a6 vlan 31 interface Gi1/6/43
    The second command appears differently in running-config but looks good in mac-address-table:
    # show running-config | inc mac address
    mac address-table static 03bf.0a04.01a6 vlan 31 interface Gi6/43
    # show mac address static | inc 01a6
      31      03bf.0a04.01a6   static Gi1/6/43
    Now, from a PC I can ping the VIP address 10.4.1.166 when connected to VSS node 1 or to any other switch connecting to VSS node 1. If the PC is attached to VSS node 2, directly or indirectly, then the ping times out. Doing the same for all the other servers not using Microsoft NLB but connected to node 1 only is successful from anywhere.
    Why is the traffic not traversing the VSL link, i.e. PC -> VSS Node 2 -> VSL -> VSS Node 1 -> Server?
    Thanks,
    Rick.

    Thanks Reza, Please find the output of the commands below. The VSS switch looks to be good and working for all other services.
    #show switch virtual
    Executing the command on VSS member switch role = VSS Active, id = 1
    Switch mode                  : Virtual Switch
    Virtual switch domain number : 1
    Local switch number          : 1
    Local switch operational role: Virtual Switch Active
    Peer switch number           : 2
    Peer switch operational role : Virtual Switch Standby
    Executing the command on VSS member switch role = VSS Standby, id = 2
    Switch mode                  : Virtual Switch
    Virtual switch domain number : 1
    Local switch number          : 2
    Local switch operational role: Virtual Switch Standby
    Peer switch number           : 1
    Peer switch operational role : Virtual Switch Active
    # show switch virtual redundancy
    Executing the command on VSS member switch role = VSS Active, id = 1
                      My Switch Id = 1
                    Peer Switch Id = 2
            Last switchover reason = none
        Configured Redundancy Mode = Stateful Switchover
         Operating Redundancy Mode = Stateful Switchover
    Switch 1 Slot 3 Processor Information :
    -----------------------------------------------
            Current Software state = ACTIVE
                     Image Version = Cisco IOS Software, Catalyst 4500 L3 Switch Software (cat4500e-UNIVERSALK9-M), Version 15.1(2)SG, RELEASE SOFTWARE (fc3)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2012 by Cisco Systems, Inc.
    Compiled Wed 05-Dec-12 04:38 by prod_rel_team
                              BOOT = bootflash:cat4500e-universalk9.SPA.03.04.00.SG.151-2.SG.bin,1;
            Configuration register = 0x102
                      Fabric State = ACTIVE
               Control Plane State = ACTIVE
    Switch 2 Slot 3 Processor Information :
    -----------------------------------------------
            Current Software state = STANDBY HOT (switchover target)
                     Image Version = Cisco IOS Software, Catalyst 4500 L3 Switch Software (cat4500e-UNIVERSALK9-M), Version 15.1(2)SG, RELEASE SOFTWARE (fc3)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2012 by Cisco Systems, Inc.
    Compiled Wed 05-Dec-12 04:38 by prod_rel_team
                              BOOT = bootflash:cat4500e-universalk9.SPA.03.04.00.SG.151-2.SG.bin,1;
            Configuration register = 0x102
                      Fabric State = ACTIVE
               Control Plane State = STANDBY
    Executing the command on VSS member switch role = VSS Standby, id = 2
    show virtual switch redundancy is not supported on the standby
    SKR_4507_01#show switch virtual link port-channel
    Executing the command on VSS member switch role = VSS Active, id = 1
    Flags:  D - down        P - bundled in port-channel
            I - stand-alone s - suspended
            H - Hot-standby (LACP only)
            R - Layer3      S - Layer2
            U - in use      N - not in use, no aggregation
            f - failed to allocate aggregator
            M - not in use, no aggregation due to minimum links not met
            m - not in use, port not aggregated due to minimum links not met
            u - unsuitable for bundling
            d - default port
            w - waiting to be aggregated
    Group  Port-channel  Protocol    Ports
    ------+-------------+-----------+-------------------
    15     Po15(SU)         -        Te1/3/1(P)  Te1/4/1(P)
    16     Po16(SU)         -        Te2/3/1(P)  Te2/4/1(P)
    Executing the command on VSS member switch role = VSS Standby, id = 2
    Flags:  D - down        P - bundled in port-channel
            I - stand-alone s - suspended
            H - Hot-standby (LACP only)
            R - Layer3      S - Layer2
            U - in use      N - not in use, no aggregation
            f - failed to allocate aggregator
            M - not in use, no aggregation due to minimum links not met
            m - not in use, port not aggregated due to minimum links not met
            u - unsuitable for bundling
            d - default port
            w - waiting to be aggregated
    Group  Port-channel  Protocol    Ports
    ------+-------------+-----------+-------------------
    15     Po15(SU)         -        Te1/3/1(P)  Te1/4/1(P)
    16     Po16(SU)         -        Te2/3/1(P)  Te2/4/1(P)
    #show run int gi1/6/43
    interface GigabitEthernet1/6/43
     switchport access vlan 31
     switchport mode access
     spanning-tree portfast
     spanning-tree guard root
    Regards,
    Rick.

  • Anybody had used Microsoft NLB to LoadBalance LDAP traffic?

    Hi all.
    I got some lines from a Microsoft spokesperson, and he says that NLB (Microsoft Network Load Balancing Manager) can balance LDAP traffic.
    I'm trying to find a way to do this without a hardware load balancer.
    Does anybody have experience in this field?
    Thanks.

    Paolo,
    I really thought you were using the IGMP option because of your statement:
    "2) Other hosts in the same VLAN do not receive any broadcast related to this cluster (multicast is working)"
    Without the IGMP option, NLB uses a locally administered multicast MAC address with the format 03:BF:<IP-Address-of-the-Cluster>. Since this is not an IANA-assigned multicast MAC address (01-00-5E prefix), IGMP snooping cannot avoid the flooding of those frames throughout the entire VLAN, which is the only way a switch can handle such frames. The recommendation for avoiding/containing this flooding is to configure static MAC entries for the multicast cluster MAC (binding it exclusively to the required ports). Those static entries will then also be listed in the "show mac address-table" output.
    With the IGMP option, you can make use of IGMP snooping in order to avoid the flooding, so static MAC entries are not required in this case and the multicast cluster MAC can be learned dynamically by IGMP snooping. It should then be listed in the "show mac address-table multicast" output.
    HTH
    Rolf

  • Nexus 5596T connecting to 3Com 5500

    Good Day all,
    I have a query. Our customer will be replacing their 3Com core switch with a Nexus 5596 switch. Their access switches will then be connecting to the two 5596T switches, which will be clustered.
    Can a single 3Com switch be configured to connect two ports, one to each 5K? And will aggregating the two ports on the 3Com end allow an active/active link to both 5Ks?
    Virtual port channels are supported on Nexus 2Ks, and this can be used to connect to both Nexus 5Ks, but will a 3Com allow this? I think it should; theoretically the two 5Ks are clustered and will be seen as one switch by the 3Com.
    What are your thoughts?

    Hi,
    The port channel is not coming up on the Nexus, and on the 3Com side, when a "display stp brief" is done, I see one of the links as blocking and one as forwarding. This configuration didn't include port channeling on the 3Com.
    It depends upon how the port-channel on the Nexus is configured, but I would say that's expected behaviour. If the 3Com isn't configured for link aggregation, the Nexus port-channel is not receiving LACPDUs, so the ports would go into an Individual state. I expect that if you ran the show port-channel summary command on the Nexus, you would have seen the Individual (I) flag set for the two interfaces.
    When port channeling is configured on the 3Com end, one of the links goes down. The same steps were performed on a Cisco 3560 and it worked perfectly, so I am not certain what's different with the 3Com.
    So how is the port-channel on the Nexus configured? On the physical interfaces, are you using the command channel-group 12 mode active (dynamic LAG with LACP) or simply channel-group 12 (static LAG)?
    Can you paste the output of a show port-channel summary, show lacp counters and show lacp neighbor? Also any log messages that are displayed on the Nexus when the link goes down would be useful.
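    For reference, a dynamic LACP bundle on the Nexus side would be configured along the following lines (the port-channel number and interfaces are placeholders); if the 3Com end only supports static aggregation, "mode on" would be needed on both ends instead:

```
interface Ethernet1/1-2
  switchport mode trunk
  ! LACP negotiation; use "channel-group 12 mode on" for a static LAG
  channel-group 12 mode active
```

    Once the bundle negotiates, show port-channel summary should show the member ports with the (P) flag rather than (I).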
    Regards

  • WAAS Inline Adaper and Microsoft NLB (ISA Server Array)

    Hi
    I would like to place a WAAS device with a 4-port inline adapter between an MS ISA firewall and the LAN switches. The ISA servers are unfortunately forming an array and using NLB, which causes the switches to do unknown unicast flooding.
                / Switch A --------------- LAN0   WAN0  ------------ ISA1 ------------- Switch C ---------- Router A
    LAN -- |            |                               WAAS                        Array                        |       HSRP     |
                \ Switch B --------------- LAN1   WAN1  ------------ ISA2 ------------- Switch D ---------- Router B
    Will the WAAS have problems since it sees all the traffic on both inline groups? Is this setup possible?
    kind regards
    Tobias

    Gary,
    Yes, you just need to configure your firewall to allow TCP options (specifically option 33, 0x21 in hex), then configure the WAEs for directed mode.
    The firewall will see a TCP 3-way handshake at first so the two WAEs can auto-discover each other and negotiate a UDP directed-mode tunnel.
    Once the auto-discovery phase is complete, traffic sent over the WAN side of the connection will be encapsulated in the UDP 4050 tunnel (so your firewall must allow this traffic through as well).
    Please see the configuration guide section on directed mode here which explains in more detail, and let me know if you have other questions.
    http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v421/configuration/guide/network.html#wpxref53362
    Cheers,
    Mike

  • Microsoft NLB

    Our environment consists of two core 6509 switches running HSRP, and vCenter 4.1 (ESX hosts).
    We formed an MS cluster with two VM guests, one on ESX1 and the other on ESX2.
    Each ESX host has three uplinks to the core 6509s.
    We configured the router according to the Cisco guidelines for NLB, using the IP and MAC address of the cluster.
    The configuration is not quite working: we don't see load sharing across cluster members, and failover capabilities also don't work.
    Any help or advice would be really appreciated.
    Should I be looking at solutions other than Cisco NLB support in tandem with VMware/ESX environments?
    Thanks

    What mode of MS NLB are you currently trying to set up? The mode will determine the best way to configure the network. MS NLB offers unicast mode, multicast mode and IGMP mode.
    I recommend using one of the multicast modes to avoid flooding in the VLAN. Both multicast modes utilize a unicast IP with a multicast mac-address.
    In multicast mode MS uses a 03xx.xxxx.xxxx multicast address outside of the IANA range. IGMP snooping will not dynamically program this address for you. You will want to statically configure the virtual mac-address for the cluster to the physical ports of the servers and on all trunks ports between the switches in the path to avoid flooding.
    Example: (multicast mac can be programmed to multiple ports)
    mac-address-table static 0300.5e11.1111 vlan 200 interface fa2/3 fa2/4
    Another possibility would be to configure MS NLB in IGMP mode. Now the virtual mac-address will be in the IANA range 0100.5exx.xxxx. IGMP snooping will program the virtual mac-address for you once it receives a join from a member of the cluster. Multicast will be forwarded between switches using the IGMP snooping mrouter port that is dynamically programmed when using PIM or an IGMP snooping querier in the VLAN.
    Since the virtual IP uses a multicast  mac-address it is unreachable outside the local subnet.  To address this you will need to configure a static ARP entry on each device with a L3 interface in the server vlan.
    Example:
    arp 10.10.10.25 0300.5e11.1111
    I must warn you of a possible bug you can hit with the 6500. CSCsw87563 "Packets with multicast mac and unicast IP are software routed by cat6500". The bug is fixed in the following IOS releases:
    12.2(18)SXF17
    12.2(33)SXH5
    12.2(33)SXI1
    If PIM is required due to other multicast applications in the VLAN please review the bug provided. It provides additional details and all workarounds available.
    Regards,
    Dale

  • Microsoft NLB or NIC Teaming which is recommended

    I have been searching Cisco's site trying to find which method is preferred for MS server load balancing.

    Also keep in mind that if you are not tied to the Windows platform, this is a built-in function of Linux. You get a free OS, and channel bonding out of the box. It takes about 4 minutes to set up (well, I've done it a few hundred times, it might take you longer) and the documentation is readily available.
    I'm running an FTP server now with 6 NICs in it, from different manufacturers, and it works like a charm. I hate to waste 6 ports, but without gigabit uplinks, what are you going to do?

  • Exchange 2010 NLB on Nexus1000v - UCS - Cat4500

    Server infrastructure: Microsoft Server 2012 Hyper-V installed on UCS blade servers. The network infrastructure is Nexus 1000v for Hyper-V, with FI62xx fabric interconnects (end-host mode) uplinked to a Catalyst 4510 core switch.
    Plan: Deploy Exchange 2010 NLB with two servers, each with one network card, NLB mode: IGMP multicast
    Configured:
    - Catalyst: static ARP for Cluster VIP
    - Nexus1000v: disabled IGMP snooping on servers VLAN
    The whole configuration is acting strangely: it works for some clients but not for others, and if we stop one node in the NLB cluster, more things stop working while some keep working fine.
    The Nexus 1000v configuration guide describes only the NLB unicast scenario.
    I suppose something is missing in the configuration.

    N1k only supports Unicast NLB.  For multicast & multicast+IGMP NLB there are a few things we can do that are not ideal because there will be excessive traffic flooding.
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_5_1/release/notes/n1000v_rn.html#wp117941
    NLB with multicast (non-IGMP)-
    The NLB cluster uses a unicast IP address and non-IGMP multicast mac (03:bf) so IGMP is not used. N1k floods this frame.
    This method could overwhelm the network in some situations.
    1.    Use a dedicated VLAN for NLB VMs to limit mcast replication & flooding.
    NLB with Multicast+IGMP-
    Microsoft violates RFC2236 by putting a unicast IP in the IGMP Group messages.  N1k drops these messages since they violate the RFC.  CSCue32210 - "Add support for Microsoft NLB - Multicast+IGMP mode in Nexus 1000v" is targeted for a future release.  Before this feature exists we can configure the network as follows:
    1.    Dedicate a VLAN for NLB VMs to limit mcast replication & flooding.
    2.    Disable IGMP snooping on that vlan
    vlan 10
    no ip igmp snooping
    3.    Add a static entry on upstream router for NLB cluster IP & shared MAC.
    int vlan 10
    ip arp 14.17.124.40 0100.5e7f.7c28
    4.    Use mac-pinning configuration with manual pinning NLB vEths to one set of uplinks.  This will isolate flooding to a single upstream fabric interconnect & switch.
    port-profile type veth NLB-VM
      channel-group auto mode on mac-pinning relative
      pinning id 0 backup 1   <-these numbers may differ in your environment
    Matthew

  • Enabling VM Guest NLB w/Multicast IGMP on 2012 Hyper-V host w/ converged SCVMM fabric switch

    What a mouthful.
    As short as possible: 
    WHAT I'M ATTEMPTING:
    I'm trying to build a new NLB cluster for a 2008 R2 SP1 Remote Desktop Services farm. And I'm trying to do it the right way, with multicast igmp, not unicast. 
    The two guest VMs with NLB install converge fine. VIP gets this:
    IP: 192.168.100.157
    MAC: 01-00-5e-7f-64-9d
    NLB NIC is on the same VLAN & "Converged switch" in VMM as our mgmt/server traffic (That is to say it's on production VLAN, not on a separate vlan) 
    PROBLEM:
    Can't ping 100.157 from the VM guest itself, from the host, or from the Cisco 6509 switch.
    A Cisco MAC address lookup does not see that MAC anywhere.
    show ip igmp groups shows no IGMP traffic at all. Clearing counters shows no multicast increment.
    FURTHERMORE:
    Host is setup thusly:
    - Dell R810
    - 8x1GbE Broadcom 5709c in a Server 2012 LACP/HASH team built via VMM powershell cmdlets
    - On the physical switch side, those 8 nics are in a Cisco port-channel, trunked, all VLANs allowed
    -  Host has no "physical" nics per se, as in a 2008 R2 hyper-v host. Instead Host has these:
    Set-VMNetworkAdapter -ManagementOS -Name "Live Migrate" -MinimumBandwidthWeight 35
    Set-VMNetworkAdapter -ManagementOS -Name "MGMT" -MinimumBandwidthWeight 25
    Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI #1" -MinimumBandwidthWeight 0
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI #2" -MinimumBandwidthWeight 0
    Set-VMNetworkAdapter -ManagementOS -Name "Aux" -MinimumBandwidthWeight 0
    Get-VMSwitch outputs this on the converged v-switch: 
    ComputerName : My-host
    Name : My awesome switch
    Id : e2377ce3-12b4-4243-9f51-e14a21f91844
    Notes :
    SwitchType : External
    AllowManagementOS : True
    NetAdapterInterfaceDescription : Microsoft Network Adapter Multiplexor
    Driver
    AvailableVMQueues : 0
    NumberVmqAllocated : 0
    IovEnabled : False
    IovVirtualFunctionCount : 0
    IovVirtualFunctionsInUse : 0
    IovQueuePairCount : 0
    IovQueuePairsInUse : 0
    AvailableIPSecSA : 0
    NumberIPSecSAAllocated : 0
    BandwidthPercentage : 0
    BandwidthReservationMode : Weight
    DefaultFlowMinimumBandwidthAbsolute : 0
    DefaultFlowMinimumBandwidthWeight : 1
    Extensions : {Microsoft NDIS Capture, Microsoft
    Windows Filtering Platform, Microsoft
    VMM DHCPv4 Server Switch Extension}
    IovSupport : False
    IovSupportReasons : {This network adapter does not support
    SR-IOV.}
    IsDeleted : False
    Question:
    Aside from a few of my favorite MS MVPs (shout out to WorkingHardInIt for having this same question), I can't find much documentation on employing 2008 R2 NLB on a guest VM within a fabric-oriented, VMM-built 2012 Hyper-V converged switch (no network virtualization...yet).
    Yes, I know all about VMM NLB, but 1) I'm trying to wedge NLB in after building these VMs without a service template (NLB is the audible, essentially) and 2) MS NLB is configured in providers and I've created the requisite VIP templates.
    Even so, I ought to be able to create an NLB cluster without VMM's assistance in this scenario, correct? Suboptimal, I know, but possible, yes? Essentially I've put two synthetic NICs on each VM, set IPs manually, and assigned them to the same VLAN. I can ping each synthetic NIC, but not the cluster IP.
    And yes: these particular vNICs have Mac Address Spoofing enabled. 
    Cisco:
    I have a TAC case open with Cisco, but they can't quite figure it out either. IGMP snooping is enabled across the switch. And they insist that the old static ARP entry to resolve this problem is no longer necessary, and that Microsoft now complies with the relevant RFCs.
    Possible solution:
    The only thing I can think of is flipping the MulticastForwarding parameter below from Disabled to Enabled. Has anybody tried that on a converged virtual switch on the Hyper-V host? Is my virtual converged switch protecting me from multicast IGMP packets?
    PS C:\utilities> Get-NetIPv4Protocol
    DefaultHopLimit : 128
    NeighborCacheLimit(Entries) : 1024
    RouteCacheLimit(Entries) : 128
    ReassemblyLimit(Bytes) : 1560173184
    IcmpRedirects : Enabled
    SourceRoutingBehavior : DontForward
    DhcpMediaSense : Enabled
    MediaSenseEventLog : Disabled
    IGMPLevel : All
    IGMPVersion : Version3
    MulticastForwarding : Disabled
    GroupForwardedFragments : Disabled
    RandomizeIdentifiers : Enabled
    AddressMaskReply : Disabled
    Thanks for any thoughts. 
    Robert

    Sorry for the poor follow-up Steven. We are using Server 2012 Hyper-V, not VMWare, on the hosts. You can close this but for the benefit of anyone who comes across it: 
    After working with Cisco, we decided not to implement multicast IGMP. Cisco says you still need to create a static ARP entry on the physical switch, even though my cluster IP address and Microsoft NLB 2008 R2 were set up with IGMP multicast, not multicast or unicast. Here was his email:
    Yes, we will need the static mapping for the NLB server in this case because the NLB MAC address is multicast and the IP address is unicast. I was under the impression that even the server would be using IGMP but that's not the case. We won't need to do the mapping for the nodes though if they use IGMP. To this end, following is the configuration that should make this work:
    arp 192.168.100.157 0100.5e7f.649d arpa
    mac address-table static 0000.0000.649d vlan <> interface <> disable-snooping   <- This is the switch interface where the NLB server is located
    interface vlan <>
    ip pim sparse-dense-mode   <- This is needed for the switch to elicit IGMP joins from the nodes
    end
    I don't think it got through to him that there was a virtual layer 2/3 Hyper-V switch on top of 8 teamed GbE interfaces in LACP/hash mode. "Where the NLB server is located" = a Cisco port-channel bound to one of six physical hosts; the NLB VM itself could be on any of those port-channels at any given time (we have a six-node Hyper-V cluster).
    Once I enabled PIM I did see activity, but we killed this later, as according to Cisco we would have had to implement the same configuration on 40+ managed routers at sites globally.
    Robert

  • Sharepoint 2013 Foundation three tier farm with two Webservers in NLB

    Hello,
    I have been struggling with a problem for the last three days.
    I have installed and configured a SharePoint 2013 three-tier farm with SharePoint 2013 Foundation and MS SQL 2014 Express. This is a test farm and all the servers are Windows 2012 R2.
    I have one SQL server, one application server and two web servers. The two web servers are configured with multicast NLB. The NLB name is "sharepoint.ws.domain.net", and the IP of the NLB is also in our DNS zone. I have made a web application with the name "sharepoint.ws.domain.net" on port 80 (the NLB name) and a site collection with the same name.
    Now, when I am working on the SharePoint site, I very often get a login window, or I get the message "An error occurred while processing the request on the server. The status code returned from the server was: 0".
    The error comes when I try to create a sub-site (mostly with no permissions inheritance)... but not always. I also sometimes get the same message when I upload files (MS Office documents and PDF files).
    The login window comes when I am navigating through the sites... but also not always. I go to the site with IE11 and the site is also in the intranet security sites zone.
    Can you help me on this one...
    Kind Regards
    Ioannis Kyriakidis

    With no hostname on the Web Application, you have to create host-named site collections, which complicates things a bit.
    As far as the NLB setup goes, you create web applications the same way you would otherwise. NLB is simply installed on both web servers and placed into the NLB VIP (virtual IP). The DNS A record points at the VIP.
    Also, set up your Windows NLB using unicast instead of multicast. If you have certain types of switches that block unicast ARP from multiple clients, e.g. Cisco, you may have to make an exception for them (e.g. http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/107995-microsoft-nlb.html).
    Trevor Seward
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Using NLB over Application Server in SharePoint 2010?

    I have two application servers that I need to load balance using Microsoft NLB.
    My queries:
    1) Will just load balancing the application servers load balance the SharePoint services as well?
    2) How would a service application on the application server respond to a request coming via the VIP? What I want to ask is: where in IIS are the physical IPs configured?
    3) Once I configure NLB and have the VIP pointing to the two physical IPs, how will the web server know that it has to send requests to the VIP (virtual IP)? Where is this configuration done?

    In the SharePoint world an 'application server' is a server that does not handle user traffic directly. It doesn't host the websites that users access but instead handles the behind the scenes processing such as searches, PowerPivot etc.
    For those servers you do not need to use load balancing, SharePoint handles load balancing internally for those services.
    If you want to load balance the web front ends (WFEs) which serve websites to users then this blog covers it quite comprehensively:
    http://blogs.technet.com/b/praveenh/archive/2010/12/17/setting-up-load-balancing-on-a-sharepoint-farm-running-on-windows-server-2008.aspx

  • Using Microsoft Network Load Balancing for Livecycle ES 2/2.5 with JBoss clustering?

    Hi,
    Has anyone tried using Microsoft NLB for LiveCycle with JBoss clustering and got it working? Were you able to log in to LiveCycle's admin UI page with the NLB IP?
    My environment:
    - 2 JBoss application servers (different IP addresses)
    - Horizontally clustered
    - LC ES2 installed on both servers
    If you set this up successfully, I hope you can share your experience.
    Thank you.

    Thanks!
    Just a few more questions...hehehe
    In the document Configuring LiveCycle ES2 Application Server Clusters Using JBoss, page 35, item 3.4: did you have to configure the Caching Locators? If yes, where did you put them, on only one machine or on all of the nodes?
    On page 29, item 2.7 (Testing the JBoss Application Server cluster), it says that for testing we can run the command specifying the server, which in my case is:
    run.bat -c lc_sqlserver_cl -b <ipAddress>
    But in Appendix C (Configuring JBoss as a Windows Service), it says: call run.bat -c all -b <ipAddress>
    So when should I start JBoss with "lc_sqlserver_cl" and when with "all"?

  • Windows NLB Issues in VMware environment

    Hi there,
    We are planning to use a Windows NLB cluster as a high-availability solution, and found several blog posts from VMware describing issues with Windows NLB and unicast network configurations.
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006580
    I am not sure whether the above issue applies only to the WS2003 OS or also exists in WS2012/WS2012 R2.
    Any help is appreciated.
    Thanks,
    Vineeth

    Hi Vineethk,
    Could you clarify which issue you have hit? Windows NLB still uses unicast and multicast modes, but NLB on VMware needs some VMware product configuration.
    You can refer to the following VMware solutions to resolve your issue.
    Microsoft NLB not working properly in Unicast Mode (1556)
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1556
    Microsoft Network Load Balancing Multicast and Unicast operation modes (1006580)
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006580
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006778
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006525
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1006558
    The related KB:
    NLB Best Practices:
    http://technet.microsoft.com/en-us/library/cc740265(WS.10).aspx
    Configuration options for WLBS hosts connected to layer 2 switches
    http://support.microsoft.com/default.aspx?scid=kb;EN-US;193602
    I’m glad to be of help to you!
