IDR Hotfix 123621-01 IGMP Multicasting

I have several T2000 servers running Solaris 10 08/07 with BEA installed on them. BEA recommends installing IDR Hotfix 123621-01 for an IGMP multicasting problem. I cannot find anything on this and was wondering if anyone knew whether it was rolled into the latest releases of the OS, or whether it still exists at all.
Thanks!

Similar Messages

  • Windows 7 and IGMP Multicast

    We have an established piece of software that uses IGMP UDP multicast messages to establish direct TCP communication. The software joins the multicast group using the address 224.100.0.1, then simultaneously sends UDP multicast messages while listening for
    UDP multicast messages from other copies of the same software running on other machines on the local network. The messages contain information then used for establishing direct TCP connections for further communication.
    This software has been running successfully for many years now on Windows XP machines. We are in the process of upgrading these machines to Windows 7, resulting in locations with mixed Windows 7 and XP machines on the same network, and have encountered
    a frustrating issue. Sometimes the Windows 7 machine will mysteriously refuse to send the multicast messages. The software is running fine on Windows 7, but the UDP multicast messages are not even reaching the router, much less any other machines on the network.
    There is only one network interface on the Windows 7 machine, so it's not the known issue of Windows 7 multicast getting confused about which interface to broadcast on. The firewall is turned off, and the network adapter itself is set to allow broadcast
    messages through. What else could be blocking/misdirecting the multicast messages?
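    For reference, the join/send/listen pattern described above boils down to something like this minimal PowerShell sketch using the .NET UdpClient class (the port number is an assumption; the post does not give the real one):
    # Receiver: bind a socket and join the 224.100.0.1 group (port 5000 is a placeholder)
    $group = [System.Net.IPAddress]::Parse('224.100.0.1')
    $port  = 5000
    $rx = New-Object System.Net.Sockets.UdpClient($port)
    $rx.JoinMulticastGroup($group)
    # Sender: announce this node to the group
    $tx = New-Object System.Net.Sockets.UdpClient
    $bytes = [System.Text.Encoding]::ASCII.GetBytes('hello from this node')
    $tx.Send($bytes, $bytes.Length, (New-Object System.Net.IPEndPoint($group, $port))) | Out-Null
    # Receive one datagram from another copy of the software (blocks until one arrives)
    $remote = New-Object System.Net.IPEndPoint([System.Net.IPAddress]::Any, 0)
    [System.Text.Encoding]::ASCII.GetString($rx.Receive([ref]$remote))
    If a sketch like this sends and receives on the affected Windows 7 machine, the blockage is more likely below the application (adapter binding, offload settings, or a filter driver) than in the software itself.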

    Did you turn off the firewall for all profiles?
    Click Start, click All Programs, click Administrative Tools, and then click Windows Firewall with Advanced Security.
    In the navigation pane, right-click Windows Firewall with Advanced Security on Local Computer, and then click Properties.
    On each of the Domain Profile, Private Profile, and Public Profile tabs, change the Firewall state option
    to Off (not recommended).
    Click OK to save your changes.
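    The same steps can be done in one shot from an elevated prompt; on Windows 7 the netsh form is the one that applies (the newer PowerShell firewall cmdlets only arrived with Windows 8/Server 2012):
    # Disable the firewall for all profiles (Domain, Private, Public)
    netsh advfirewall set allprofiles state off
    # Re-enable it once testing is finished
    netsh advfirewall set allprofiles state on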
    Arnav Sharma | http://arnavsharma.net/ Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading
    the thread.

  • Enabling VM Guest NLB w/Multicast IGMP on 2012 Hyper-V host w/ converged SCVMM fabric switch

    What a mouthful.
    As short as possible: 
    WHAT I'M ATTEMPTING:
    I'm trying to build a new NLB cluster for a 2008 R2 SP1 Remote Desktop Services farm. And I'm trying to do it the right way, with multicast igmp, not unicast. 
    The two guest VMs with NLB install converge fine. VIP gets this:
    IP: 192.168.100.157
    MAC: 01-00-5e-7f-64-9d
    NLB NIC is on the same VLAN & "Converged switch" in VMM as our mgmt/server traffic (That is to say it's on production VLAN, not on a separate vlan) 
    PROBLEM:
    Can't ping 100.157. From VM guest itself, from host, or from Cisco 6509 switch. 
    Cisco show mac address lookup does not see that MAC anywhere
    show ip igmp groups shows no IGMP traffic at all. Clearing counters shows no multicast increment.
    FURTHERMORE:
    Host is setup thusly:
    - Dell R810
    - 8x1GbE Broadcom 5709c in a Server 2012 LACP/HASH team built via VMM powershell cmdlets
    - On the physical switch side, those 8 nics are in a Cisco port-channel, trunked, all VLANs allowed
    -  Host has no "physical" nics per se, as in a 2008 R2 hyper-v host. Instead Host has these:
    Set-VMNetworkAdapter -ManagementOS -Name "Live Migrate" -MinimumBandwidthWeight 35
    Set-VMNetworkAdapter -ManagementOS -Name "MGMT" -MinimumBandwidthWeight 25
    Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI #1" -MinimumBandwidthWeight 0
    Set-VMNetworkAdapter -ManagementOS -Name "iSCSI #2" -MinimumBandwidthWeight 0
    Set-VMNetworkAdapter -ManagementOS -Name "Aux" -MinimumBandwidthWeight 0
    Get-VMSwitch outputs this on the converged v-switch: 
    ComputerName : My-host
    Name : My awesome switch
    Id : e2377ce3-12b4-4243-9f51-e14a21f91844
    Notes :
    SwitchType : External
    AllowManagementOS : True
    NetAdapterInterfaceDescription : Microsoft Network Adapter Multiplexor Driver
    AvailableVMQueues : 0
    NumberVmqAllocated : 0
    IovEnabled : False
    IovVirtualFunctionCount : 0
    IovVirtualFunctionsInUse : 0
    IovQueuePairCount : 0
    IovQueuePairsInUse : 0
    AvailableIPSecSA : 0
    NumberIPSecSAAllocated : 0
    BandwidthPercentage : 0
    BandwidthReservationMode : Weight
    DefaultFlowMinimumBandwidthAbsolute : 0
    DefaultFlowMinimumBandwidthWeight : 1
    Extensions : {Microsoft NDIS Capture, Microsoft Windows Filtering Platform, Microsoft VMM DHCPv4 Server Switch Extension}
    IovSupport : False
    IovSupportReasons : {This network adapter does not support SR-IOV.}
    IsDeleted : False
    Question:
    Aside from a few of my favorite MS MVPs (shout out to WorkingHardInIt for having this same question), I can't find much documentation on employing 2008 R2 NLB on a guest VM within a fabric-oriented, VMM-built 2012 Hyper-V converged switch (no network virtualization...yet).
    Yes I know all about VMM NLB but 1) I'm trying to wedge NLB in after building these VMs without a service template (NLB is the audible, essentially) and 2) MS NLB is configured in providers & I've created requisite VIP templates. 
    Even so, I ought to be able to create an NLB cluster without VMM's assistance in this scenario, correct? Suboptimal, I know, but possible, yes? Essentially I've put two synthetic NICs on each VM, set IPs manually, and assigned them to the same VLAN. I can ping
    each synthetic NIC, but not the cluster IP. 
    And yes: these particular vNICs have Mac Address Spoofing enabled. 
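    (As a sketch, with placeholder VM names rather than the exact commands used here, that setting can be checked and enabled from the host like this:)
    # Show the spoofing setting on the NLB vNICs (VM names are placeholders)
    Get-VMNetworkAdapter -VMName 'NLB-Node1','NLB-Node2' | Select-Object VMName, Name, MacAddressSpoofing
    # Enable it where it is still Off
    Set-VMNetworkAdapter -VMName 'NLB-Node1' -MacAddressSpoofing On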
    Cisco:
    I have a TAC case open with Cisco, but they can't quite figure it out either. IGMP Snooping enabled across the switch. And they insist that the old static arp entry to resolve this problem is no longer necessary, that Microsoft now complies with relevant
    RFCs
    Possible Solution:
    Only thing I can think of is flipping the MulticastForwarding param below from disabled to enabled. Anybody ever tried it on a converged virtual switch on the hypervisor? Is my virtual converged switch protecting me from multicast IGMP packets?
    PS C:\utilities> Get-NetIPv4Protocol
    DefaultHopLimit : 128
    NeighborCacheLimit(Entries) : 1024
    RouteCacheLimit(Entries) : 128
    ReassemblyLimit(Bytes) : 1560173184
    IcmpRedirects : Enabled
    SourceRoutingBehavior : DontForward
    DhcpMediaSense : Enabled
    MediaSenseEventLog : Disabled
    IGMPLevel : All
    IGMPVersion : Version3
    MulticastForwarding : Disabled
    GroupForwardedFragments : Disabled
    RandomizeIdentifiers : Enabled
    AddressMaskReply : Disabled
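    (For anyone weighing the same flip, the change itself is a one-liner; whether this host-level IPv4 setting influences traffic through the converged vSwitch is an assumption, not something verified here:)
    # Enable host-level IPv4 multicast forwarding, then confirm
    Set-NetIPv4Protocol -MulticastForwarding Enabled
    Get-NetIPv4Protocol | Select-Object MulticastForwarding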
    Thanks for any thoughts. 
    Robert

    Sorry for the poor follow-up Steven. We are using Server 2012 Hyper-V, not VMWare, on the hosts. You can close this but for the benefit of anyone who comes across it: 
    After working with Cisco, we decided not to implement multicast IGMP. Cisco says you still need to create a static ARP entry on the physical switch, though my cluster IP address & Microsoft NLB 2008 R2 were set up with igmp multicast, not multicast or
    unicast. Here was his email:
    Yes, we will need the static mapping for the NLB server in this case because the NLB MAC address is multicast and the IP address is unicast. I was under the impression that even the server would be using IGMP but that's not the case. We won't need to do the mapping for the nodes though if they use IGMP. To this end, following is the configuration that should make this work:
    arp 192.168.100.157 0100.5e7f.649d arpa
    mac address-table static 0000.0000.649d vlan <> interface <> disable-snooping   <- This is the switch interface where the NLB server is located
    interface vlan<>
     ip pim sparse-dense-mode   <- This is needed for the switch to elicit IGMP joins from the nodes
    end
    I don't think it got through to him that there was a virtual Layer 2/3 Hyper-V switch on top of 8 teamed GbE interfaces in LACP/hash. "Where the NLB server is located" = a Cisco port-channel bound to one of six physical hosts; the NLB VM itself could be on any of those port channels at any given time (we have a six-node Hyper-V cluster).
    Once I enabled PIM I did see activity, but we killed it later: according to Cisco we would have had to implement the same thing on 40+ managed routers at sites globally.
    Robert

  • Some basic problems with multicast, IGMP & NLB

    Hi out there
    We have two DC's with a 10G interconnection in between - these connections are run as L2 links - put into a set of Nexus 5000 (the old nx5020) - acting as access switches - and uplinked to a set of Nexus 7009 which act as L3 switches for us.
    We  have a cluster of vmware boxes in each site and are running MS windows  2008 machines with MS NLB for TerminalServices - in IGMP multicast mode -  in VLAN 21.
    Now I looked in the log of the nexus 7000 and found that the PIM DR is "flapping" between the two sites from time to time:
    2013  Nov 25 22:50:58 ve-coresw-01 %PIM-5-DR_CHANGE:  pim [26128]  DR change  from 172.21.159.253 to 172.21.144.3 on interface Vlan21
    2013 Nov  25 22:51:54 ve-coresw-01 %PIM-5-DR_CHANGE:  pim [26128]  DR change from  172.21.144.3 to 172.21.159.253 on interface Vlan21
    2013 Nov 25  23:26:07 ve-coresw-01 %PIM-5-DR_CHANGE:  pim [26128]  DR change from  172.21.159.253 to 172.21.144.3 on interface Vlan21
    2013 Nov 25  23:26:10 ve-coresw-01 %PIM-5-DR_CHANGE:  pim [26128]  DR change from  172.21.144.3 to 172.21.159.253 on interface Vlan21
    I am not that familiar with multicast but the basic concepts are there - in the vrf I have defined
    ip pim ssm range 232.0.0.0/8
    the vlan is defined as:
    vlan configuration 21
      layer-2 multicast lookup mac
    vlan 2001
    under the SVI interface vlan 21 I have also defined - and there is a sample showing the NLB
    interface Vlan21
      vrf member DMZ_21
      no ip redirects
      ip address 172.21.144.3/20
      ip pim sparse-mode
      ip arp 172.21.149.19 0100.5E7F.9513
    this flapping should only occur if the keep-alives between the two sites are missed 3 times
    The uplinks to the nexus 5000 are defined as mrouters
    vlan 21
      ip igmp snooping mrouter interface port-channel5
      ip igmp snooping mrouter interface port-channel16
    SW5020-01# sh ip igmp snooping vl 21
    IGMP Snooping information for vlan 21
      IGMP snooping enabled
      IGMP querier present, address: 172.21.144.3, version: 2, interface port-channel5  -> the DR on the nx7k
      Switch-querier disabled
      IGMPv3 Explicit tracking enabled
      IGMPv2 Fast leave disabled
      IGMPv1/v2 Report suppression enabled
      IGMPv3 Report suppression disabled
      Link Local Groups suppression enabled
      Router port detection using PIM Hellos, IGMP Queries
      Number of router-ports: 3
      Number of groups: 3
      VLAN vPC function enabled
      Active ports:
        Po10        Po15    Eth1/3  Eth1/11
        Eth1/12     Eth1/13 Eth1/14 Eth1/15
        Eth1/16     Eth1/17 Eth1/18 Eth1/19
        Eth1/20     Eth1/25 Eth1/26 Eth1/27
        Eth1/28     Eth1/29 Eth1/30 Eth1/31
        Eth1/32     Po16    Po5
    The link between the two sites - and boxes - is running error-free. As far as I can see there haven't been any problems in that vlan since ??
    If I look at e.g. spanning tree, the topology hasn't changed for a long time in that vlan (2 weeks).
    Could I harden the IGMP multicast setup?
    What is happening when a DR is changing? Will the multicast stop working, or what happens?
    As far as I understood, the DR is the service which forwards the multicast traffic to the groups, so if suddenly some re-negotiation occurs I would expect that the active traffic will be interrupted.
    here are the actual MS NLB cluster addresses:
    SW5020-01# sh ip igmp snooping groups vl 21
    Type: S - Static, D - Dynamic, R - Router port
    Vlan  Group Address      Ver  Type  Port list
    21  */*                -    R     Po10 Po16 Po5
    21  239.255.149.19     v1   D     Eth1/14 Eth1/19 Eth1/32
    21  239.255.149.24     v1   D     Eth1/12 Eth1/15 Eth1/16
                                        Eth1/26 Eth1/31
    21  239.255.255.250    v2   D     Po15 Eth1/11 Eth1/28
                                        Eth1/29
    SW5020-01#
    Any suggestions?

    What Is OneClickStarter.exe?
    OneClickStarter.exe is a type of EXE file associated with TuneUp Utilities 2013 developed by AVG Technologies for the Windows Operating System. The latest known version of OneClickStarter.exe is 13.0.4000.189, which was produced for Windows.
    This EXE file carries a popularity rating of 1 star and a security rating of "UNKNOWN".
    Sounds like you have some misbehaving software on your system.  I would suggest a clean install to see if you still have all the problems you are reporting.

  • Multicast to Host but not to Guest

    Hi all
    I have a strange situation. To give you some background, I have recently built 2 Hyper V3 2 node server 2012 clusters in a failover environment; all has been working without issue. The kit moves from location to location, so is frequently turned on and shutdown.
    The other day upon bringing the server up, it was noted that the servers on cluster2 could not receive or stream multicast traffic, whilst those on cluster1 were fine. Initial thoughts were that it was a single guest machine problem, but upon rebooting the
    guest this made no difference. All guests were moved off of one node, rebooted the node and then shifted back, again to no avail. Wireshark was used to see whether the guests were receiving multicast traffic, and no they weren’t. Wireshark was also installed
    on the hosts, and it was being received to there.
    The guest in question was failed over to cluster1 and then worked without issues. The kit is currently in transit so I’m unable to test anything but has anyone seen anything similar to this before. As I have said, this situation was not present the last
    time the clusters were turned on, only this time.
    A little bit of background that may help: the hosts are HP385 G8 servers, and 3 ports from each HP server are connected to a Cisco 3850 via LACP. The fourth port on each server is connected to the other server as the heartbeat connection.
    Many thanks

    Hi Chris,
    Are you trying to enable NLB (Network Load Balancing) in Multicast mode or are you really trying to achieve something else?
    The fact is, multicast issues can be caused by a lot of things. I don't want to go into too much detail, but give me a bit more information: Are you using multicast or IGMP multicast? Are you trying to reach your guests from a source located behind a router? Have you tried accessing the guests from the local subnet? Any difference? You are aware that you have to configure your network devices (e.g. Cisco Catalyst switches) for multicast traffic, right?
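    As a rough starting point, it may also be worth comparing the Hyper-V side of the working and failing clusters, for example (switch and VM names below are placeholders, not taken from your setup):
    # Compare virtual switch extensions between the clusters; a capture/filter extension
    # enabled on one but not the other could explain the different behaviour
    Get-VMSwitchExtension -VMSwitchName 'ClusterSwitch' | Select-Object Name, Enabled, Running
    # Check the vNIC settings on an affected guest
    Get-VMNetworkAdapter -VMName 'GuestVM' | Select-Object VMName, SwitchName, MacAddressSpoofing
    Get-VMNetworkAdapterVlan -VMName 'GuestVM'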
    Boudewijn Plomp, BPMi Infrastructure & Security | Please remember, if you see a post that helped you please click "Vote as Helpful" and if it answered your question, please click "Mark as Answer".

  • Multicast traffic flooding

    Hi
    We are using Network Load Balancing on some Windows 2003 servers which is configured as IGMP multicast.
    The servers are connected to a 3550 SMI switch which is connected to a HP4108.
    Before I configured either of these switches I connected a packet sniffer and saw the multicast traffic appearing on all ports on both switches.
    I configured the HP switch as a multicast querier and now on the 3550 I see the multicast MAC addresses logged against the server ports. The 3550 also detects the multicast router as you can see:
    prod-3550#sh mac-address-table mult
    Vlan Mac Address Type Ports
    1 0100.5e7f.4601 IGMP Fa0/10, Fa0/12, Gi0/1
    1 0100.5e7f.4602 IGMP Fa0/9, Fa0/11, Gi0/1
    prod-3550#
    prod-3550#sh ip igmp snoo mrouter
    Vlan ports
    1 Gi0/1(dynamic)
    The 3550 is still spamming the multicast traffic to all ports whereas the HP switch is only replicating this to ports that are part of the multicast group.
    Can anyone tell me why the 3550 is doing this or what else I need to check?
    Thanks in advance.

    Kate
    You do not tell us much about how the 3550 is configured. From your description of the problem I am going to assume that you do not have IGMP snooping configured. This link will show you how to configure IGMP on the 3550:
    http://www.cisco.com/univercd/cc/td/doc/product/lan/c3550/12225sec/3550scg/swigmp.htm
    If my assumption is not correct and you do have IGMP snooping configured, then you will need to tell us more about how the 3550 is configured so we can determine why it is propagating multicast to all ports.
    HTH
    Rick

  • Network Load Balancing - Multicast IPv6

    I have a two servers with network load balancing. They are configured to use IGMP Multicast which works well with IPv4.  The switch correctly detects the group and sends the traffic to only the ports connected to the servers.
    However I can't get IPv6 working outside of the servers' subnet. You can access the load-balanced IPv6 address from within the servers' subnet but machines outside the subnet cannot access it.
    Does load balancing properly support IPv6? Should it not support Multicast Listener Discovery (MLD) to work properly with IPv6?
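    One quick check on the MLD side is to list the IPv6 multicast groups each host has actually joined; MLD reports are only sent for groups that appear here (just a diagnostic sketch, run on each cluster node):
    # List joined IPv6 multicast groups per interface
    netsh interface ipv6 show joins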
    Thanks

    Thanks for your reply. 
    Yes - you are correct. We are using an IPv6 address as the cluster IP address for incoming connections but it can't be accessed outside of the subnet. The cluster has both a link-local and a global address - both are only accessible from within the subnet.
    The two servers that are part of the load balancing cluster both have IPv6 addresses assigned to their network adapters - these are accessible outside the subnet. In fact 80% of all our network traffic is IPv6 - routing is working fine between all servers, workstations and devices on our various subnets. The problem is purely affecting the load balancing IPv6 address.
    The IP config and route tables are below.  Thanks for your help.
    Regards, Daniel
    Microsoft Windows [Version 6.1.7601]
    Copyright (c) 2009 Microsoft Corporation. All rights reserved.
    M:\>ipconfig /all
    Windows IP Configuration
    Host Name . . . . . . . . . . . . : indium
    Primary Dns Suffix . . . . . . . :
    Node Type . . . . . . . . . . . . : Hybrid
    IP Routing Enabled. . . . . . . . : No
    WINS Proxy Enabled. . . . . . . . : No
    DNS Suffix Search List. . . . . . :
    Ethernet adapter Public:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
    Physical Address. . . . . . . . . : 00-15-5D-CA-6C-04
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv6 Address. . . . . . . . . . . : 2001:630:34:1010::42(Preferred)
    IPv6 Address. . . . . . . . . . . : 2001:630:34:1010::40(Preferred)
    Link-local IPv6 Address . . . . . : fe80::4c7b:41a3:be85:e6c4%10(Preferred)
    Link-local IPv6 Address . . . . . : fe80::95f6:2da7:dcdb:1fc1%10(Preferred)
    IPv4 Address. . . . . . . . . . . : 10.0.0.42(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.252.0
    IPv4 Address. . . . . . . . . . . : 10.0.0.40(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.252.0
    Default Gateway . . . . . . . . . : 2001:630:34:1010::1
    10.0.0.1
    DHCPv6 IAID . . . . . . . . . . . : 234886493
    DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-D0-9F-CD-00-15-5D-01-14-35
    DNS Servers . . . . . . . . . . . : 2001:630:34:1010::10
    2001:630:34:1010::8
    10.0.0.10
    10.0.0.8
    NetBIOS over Tcpip. . . . . . . . : Disabled
    Microsoft Windows [Version 6.1.7601]
    Copyright (c) 2009 Microsoft Corporation. All rights reserved.
    M:\>ipconfig /all
    Windows IP Configuration
    Host Name . . . . . . . . . . . . : aluminium
    Primary Dns Suffix . . . . . . . :
    Node Type . . . . . . . . . . . . : Hybrid
    IP Routing Enabled. . . . . . . . : No
    WINS Proxy Enabled. . . . . . . . : No
    DNS Suffix Search List. . . . . . :
    Ethernet adapter Public:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
    Physical Address. . . . . . . . . : 00-15-5D-01-37-04
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    IPv6 Address. . . . . . . . . . . : 2001:630:34:1010::43(Preferred)
    IPv6 Address. . . . . . . . . . . : 2001:630:34:1010::40(Preferred)
    Link-local IPv6 Address . . . . . : fe80::95f6:2da7:dcdb:1fc1%10(Preferred)
    Link-local IPv6 Address . . . . . : fe80::fcab:aeb9:175d:9994%10(Preferred)
    IPv4 Address. . . . . . . . . . . : 10.0.0.43(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.252.0
    IPv4 Address. . . . . . . . . . . : 10.0.0.40(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.252.0
    Default Gateway . . . . . . . . . : 2001:630:34:1010::1
    10.0.0.1
    DHCPv6 IAID . . . . . . . . . . . : 234886493
    DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-BF-55-42-00-15-5D-01-13-45
    DNS Servers . . . . . . . . . . . : 2001:630:34:1010::10
    2001:630:34:1010::8
    10.0.0.10
    10.0.0.8
    NetBIOS over Tcpip. . . . . . . . : Disabled
    Microsoft Windows [Version 6.1.7601]
    Copyright (c) 2009 Microsoft Corporation. All rights reserved.
    M:\>route print
    IPv6 Route Table
    ===========================================================================
    Active Routes:
    If Metric Network Destination Gateway
    10 261 ::/0 2001:630:34:1010::1
    1 306 ::1/128 On-link
    10 261 2001:630:34:1010::/64 On-link
    10 261 2001:630:34:1010::40/128 On-link
    10 261 2001:630:34:1010::42/128 On-link
    10 261 fe80::/64 On-link
    10 261 fe80::4c7b:41a3:be85:e6c4/128
    On-link
    10 261 fe80::95f6:2da7:dcdb:1fc1/128
    On-link
    1 306 ff00::/8 On-link
    10 261 ff00::/8 On-link
    ===========================================================================
    Persistent Routes:
    If Metric Network Destination Gateway
    0 4294967295 ::/0 2001:630:34:1010::1
    ===========================================================================

  • Wireless bridge supporting igmp

    Has anybody got suggestions for a wireless bridge I can use on my youview setup.
    Tried a few 18 months ago, iplayer and nowtv worked but not IP channels.
    At the time I suspected it was down to incorrect igmp routing, my powerline setup is becoming problematic so I'm returning to a wireless solution
    Does anyone have a confirmed working wireless solution with ALL the ip facilities of youview including premium IP channels?
    My router is an older style bt homehub
    Ta
    Steve

    the reason why this doesn't work is that a BT HH3 doesn't pass IGMP/multicast packets onto the wireless AP.
    The supposed reason being that multicast could swamp the wireless network.
    It doesn't matter what client one uses; if IGMP packets aren't passed to the wireless AP (by the router) then you won't be able to use youview wirelessly.
    by "use" I mean IPTV multicast channels (essentials, BT Sport etc), NOT iPlayer or Now TV.
    anybody claiming to have IPTV channels working with a HH3 over wireless is telling porkies, unless they know something that I don't.

  • Multicast mac-address Nexus 7k

    Hi,
    i'm going to use Nexus 7000 in Data Center.
    During configuration analysis, I found I need to define a static MAC address (mac-address-static) configuration for the multicast MAC address of a Check Point firewall cluster.
    In "Layer 2 Switching Configuration Guide, Release 4.1.pdf" documentation speak about
    "Configuring a Static MAC Address
    [..]You cannot configure broadcast or multicast addresses as static MAC addresses[..]"
    Do you have a suggestion for how to manage this problem, and why is it not possible to configure a static multicast MAC address?
    Regards
    Dino

    Joseph - The ClusterXL A/A configuration is a variation of the  StoneSoft or Rainfinity clustering technologies that have been used to  cluster Solaris and other *NIX flavored servers and firewalls for  years.  (In fact, StoneSoft filed suit against Check Point in Europe 8  or 9 years ago for patent violations, and lost.)  These configurations  were very common on Check Point clusters running on Solaris from the  late 90's forward - and, as you describe, have unicast IP's with a  multicast MAC for the VIP.  Even from the days of installing these on  the brand new (at the time) 2900 series switches you had to do exactly  as you state above - static MAC entries (or in some cases port mirrors)  so traffic was directed to both active switch ports.  In Active/Passive  mode Check Point ClusterXL clusters are almost always "plug and play"  today - rarely do the switches need anything beyond speed/duplex  settings.  The VIP assumes the MAC of the physical NIC it is currently  bound to, and therefore there are no issues as far as switch config or  proxy ARP entries on the gateways.  All of these issues have to do with  traffic flowing to the VIP and through the firewall, and the ability of  the switch to correctly identify which physical switch port(s) the VIP  is currently patched in to.  This is one of three types of traffic  associated with ClusterXL itself.  The second is state synchronization,  which is accomplished through a crossover cable and therefore not  relevant.  Even when using a switch state sync is a typical TCP 18181  connection from a unicast IP/unicast MAC on one gateway to the other  through a dedicated interface pair.
    The challenge described by CJ is not with the traffic  flowing to the VIP, however.  It is an entirely separate process - Check  Point Clustering Protocol (aka CPHA if filtering in WireShark) is  essentially the heart beat traffic.  Every interface pair within a Check  Point cluster continually communicates with its "partner" interface on  the other cluster members.  If any packet takes over 100ms or shows more  than a 5% loss the gateway is forced in to "probing" mode where it  falls back to ICMP to determine the state of the other cluster member.   Depending on the CPHA timing settings an active gateway will failover to  the passive in as quickly as 500ms or so.  ClusterXL will fail over the  entire gateway to the standby to avoid complications with asynchronous  routing.
    Out of the box, CCP is configured to use  multicast, but it supports broadcast as well. To change this in real  time (no restart required) simply issue the command:
    cphaconf set_ccp {broadcast/multicast}
    At  the Ethernet level, CCP traffic will always have a source MAC of the  Magic MAC of 00:00:00:00:xx:yy where XX is the “Cluster ID” – something  identical on each cluster member but unique from one cluster to another,  and YY is the cluster priority (00, 01, etc.) based on the priority  levels set on cluster members within Dashboard on the cluster object.  The destination MAC will always be the Ethernet broadcast of  ff:ff:ff:ff:ff:ff.
    At the IP level the source of CCP  will always appear as 0.0.0.0. The destination will always be the  network address (ie, x.x.x.0).
    Similarly in multicast mode you will see the same traffic  at the IP level but at the Ethernet level the destination will now be a  IPv4 multicast MAC (ie, 01:00:5e:4e:c2:1e).
    In a tcpdump with the -w flag opened in WireShark and a filter applied of just "cpha" (without the quotes) you should see a continual stream of traffic with the same source and destination IPs on all packets (0.0.0.0 and the network IP), the destination of either a bcast or mcast MAC, and the source MAC alternating between 00:00:00:00:xx:00 and 00:00:00:00:xx:01.
    Long story short, the problem CJ is describing is a behavior on the 7K where a packet capture taken on the Check Point interface itself (ie, tcpdump -i eth0 -w capture.cap) ONLY shows CPHA traffic from its own source MAC and no packets from its partner. A tcpdump on the 7K itself will show traffic from both.
    As CJ mentioned, a simple NxOS upgrade will fix the issue per:
    This one: CSCtl67036 - basically prior to NX-OS 5.1(3) the Nexus will discard packets that have a source of 0.0.0.0, which in broadcast mode is exactly what the CCP heartbeat is. We bypassed this one. CSCsx47620 is the bug for the static multicast MAC address feature, but it requires 5.2 code on the 7K.
    (NOTE:Additional RAM may be required for the 5.2 update)
    Also note that Check Point gateways do support IGMP  multicast groups, given that you have the correct license. It is a  feature of SecurePlatform Professional on the higher end gateways or as a  relatively inexpensive upgrade on the lower end boxes or open  platforms. For lab purposes you can simply type “pro enable” at the CLI  (without the quotes). As of the latest build there is no technical  limitation (no license check) so you can enable advanced routing  features as needed for testing in a lab. For step by step details on  configuring IGMP on SPLAT Pro go to the Check Point support site and  search for sk32702.
    This can be a frustrating issue to troubleshoot, so hopefully this helps someone avoid the headaches I ran in to.

  • IGMP limitations (SRW2048)?

    Hello,
    we just replaced our old switches with 30 SRW2048 switches.
    It seems like their IGMP/Multicast support is quite limited. Basically IGMP is working as expected but as soon as there are more than 16 active IGMP groups (show ip igmp snooping group), any additional Multicast traffic is flooded on all ports and doesn't appear in the list.
    Can anyone confirm this behaviour?
    Firmware version is 1.2.2

    I have since found out that the Linksys switch will do IGMP snooping but will not act as a querier. You will need to have one switch in the system acting as a querier, and then the Linksys will use the lookup table for the multicast groups from the querier to do the blocking until it gets a multicast request.
    I tested the theory with an Allied Telesis switch (AT8624) and it works.
    The Cisco will have it also; you have to pay for the feature!

  • Route IGMP link-local groups

    There are two VLANs in network:
    VLAN 30: 192.168.1.0/24
    VLAN 40: 192.168.2.0/24
    The IGMP multicast group 224.0.0.251 is used by devices in both VLANs.
    I tried to configure the router to enable multicast routing, but it seems the router cannot join that multicast group:
    ip multicast-routing distributed
    3560g-client(config-if)#ip igmp join-group 224.0.0.251
    Illegal multicast group address
    So is there anyway to make that multicast group routable?
    Thank you.

    The multicast address you are trying to join is a link-local multicast address and does not follow the normal join procedure. Any multicast address in 224.0.0.x is a link-local address, and a few are reserved for protocols which run over a link: 224.0.0.9 is used for RIP, 224.0.0.10 for EIGRP, and similarly 224.0.0.251 is used for mDNS. We will see the same error message for all of these groups.
    R6_ASR6(config-if)#ip igmp join-group 224.0.0.9
    Illegal multicast group address
    R6_ASR6(config-if)#ip igmp join-group 224.0.0.10
    Illegal multicast group address
    R6_ASR6(config-if)#
    http://en.wikipedia.org/wiki/Multicast_address
    http://en.wikipedia.org/wiki/Multicast_DNS
    --- Please don't forget to rate helpful posts -----
    Regards,
    Akash

  • Help with TP-Link W8960N

    Keith_Beddoe's guide to replacing HH3 with W8960N seemed very thorough, but I still can't make it work. Please tell me where I'm going wrong.
    1. I assume the BT red cable can be used to connect the modem to the TP-link router?
    2. Is the userid/password actually "broadbanduser@..." or do I substitute my own username & password?
    3. In section 4.4.2.6 (adding WAN over ETH interface) I can set up the ETH interface but the next step fails: Pressing the ADD button on figure 4-7 (ATM-EoA-PPPoE) just results in an error message saying "cannot use DSL & ETH together" [or something to that effect]
    Any ideas what I have missed?

    Thanks for replies so far.  Q1 & Q2 seem to be sorted.
    I think I set it up exactly as in the thread with Kazhop but ....
    After setting up the Layer2 interface I have:
    atm0....... and
    eth0.2/LAN4
    Do I remove either of these at this stage?
    When I go to WAN Service set-up it displays the PPPoE but pressing ADD just results in:
      "No available interfaces. DSL wan connection and ETH layer2 interface coexist"
    Pressing Edit allows me to set the parameters in the table for username, password, MTU, Full Cone NAT,....IGMP Multicast
    Save and apply seems OK but displays "Choose Add, or Remove to configure a WAN service over a selected interface.
    ETH and PTM/ATM service can not coexist." above the table entry.
    Maybe it's right but it seems like it is complaining about something.
    BTW I tried a static DNS of 8.8.8.8 and now using DNS supplied by WAN. I'll switch to open DNS when I have the rest of it working.
    Also, I have not yet connected the red cable to test internet connection as it is currently in use by others.
    Is that right?  I got this far last night but when I tried to connect it failed.

  • Deploying 2x Exchange Server 2013 CAS server email traffic high availability during patching & reboot

    Hi people,
    What is the best way to utilize VMware technology to host two Exchange Server 2013 CAS role VMs in my production environment to ensure that the email traffic is not halted during server patching?
    Previously in Exchange Server 2007 I was using Windows NLB (IGMP multicast) on my ESXi 4.1; now with ESXi 5.1 and 2013 I wonder if there is any better way to make sure that a server failover does not disrupt the mail flow between the smarthost and the CAS server role.
    Thanks

    Hey AlbertWT,
    Can you clarify exactly what you mean when you say "server patching?"  Do you mean patching at the ESXi host level or something within the guest?
    As you probably know Exchange 2013 CAS no longer needs NLB or even a hardware load balancer.  Due to changes in the architecture, even simple DNS round robin is "enough" to load balance the CAS role.  NLB has its own set of headaches which you are probably all too familiar with so getting rid of that can help remove a lot of complexity from the situation.
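    A minimal sketch of what DNS round robin means in practice - two A records with the same name, one per CAS server (zone name, host name and addresses below are placeholders, not your values):
    # Register both CAS servers under one name; clients rotate between the returned addresses
    Add-DnsServerResourceRecordA -ZoneName 'contoso.com' -Name 'mail' -IPv4Address '10.0.0.21'
    Add-DnsServerResourceRecordA -ZoneName 'contoso.com' -Name 'mail' -IPv4Address '10.0.0.22'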
    If you can clarify what you mean by "server patching" and "server failover" in your post I think that would be helpful for me to give you a more definitive answer.
    Matt
    http://www.thelowercasew.com

  • Users complain slow when opening own calendar and Free/Busy schedule ?

    Hi,
    I'm using Exchange Server 2007 SP3 (no CU at all) with the following setup as VMware Virtual Machine:
    Mailbox Server role (CCR enabled across two different AD sites, same domain)
    PRODMBX1-VM - Production mailbox
    DRMBX1-VM - Recovery mailbox
    HT-CAS role (IGMP Multicast using Windows NLB)
    PRODHTCAS1-VM
    PRODHTCAS2-VM
    DR-HTCAS1-VM
    User connects to the Exchange server via the CCR cluster name using Outlook 2007 SP3. When the user clicks on the calendar tab, it works fine, but when checking other people's calendars it takes more than 3-5 minutes with a frozen display most of the time (End task and restart Outlook), and sometimes the balloon pop-up appears saying "Connecting to ExcCluster01.domain.com".
    Why is this happening quite so often?
    Any suggestion would be greatly appreciated.
    /* Server Support Specialist */

    Hi,
    According to your description, I understand that the shared calendar can be opened but syncs slowly when opening another user's calendar. Is that right? If I have misunderstood, please feel free to let me know.
    Does the issue happen to all users or only some specific users? We can open other users' calendars in OWA to check whether that works well. As for Outlook users, please make sure the Outlook 2007 users are using Outlook in Exchange Cached Mode and are configured correctly with the following settings:
    1. Click Account Settings on the File tab.
    2. In the Account Settings dialog box, click the Email tab, click to select the Exchange account, and then click
    Change.
    3. Click More Settings. On the Advanced tab, make sure the Use Cached Exchange Mode and Download Shared Folders check boxes have been checked.
    4. Finish it and restart Outlook.
    Regards,
    Winnie Liang
    TechNet Community Support

  • ADFS server in NLB cluster unable to reach all servers in the same subnet

    I have 2 ADFS (3.0) virtual servers (server 2012 R2 on VMware) in an NLB cluster (setup for Office 365 initially) and want to be able to use the SAML to connect to a couple of Linux servers in the same network to allow SSO to the Linux boxes.
    It was working then stopped and now the primary FS server (FS1) cannot ping either Linux box or one of our WS08R2 file and print servers. It can ping all other servers in the same network.
    I tried to get a packet capture with MS NetMon 3.4 but it only picked up the successful ping requests.
    Firewall is disabled but that made no difference.
    NLB cluster configured in Unicast mode as I found Office 365 and another outside service didn't want to work using Multicast or IGMP Multicast.
    The really bizarre thing is the secondary FS vm can ping the other boxes even with "ping server -S clusteraddress"
    Any suggestions as to where to look to track this down will be most welcome.
    Cheers
    David
    Cheers, David

    Hi,
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thanks for your understanding and support.
