Redundant Multicast Switching

All, I have a customer with an L2 network consisting of multiple VLANs, multiple access switches, and two L2/L3 core switches.  Both core switches have an SVI for each VLAN, using HSRP to provide redundant default gateways.
The main core switch is a Cisco 6500 (running 12.2(33)SXJ6) and the backup core switch is a Cisco 4500.
One of the applications on this network uses multicast and needs the protection of the redundant core switches (this is critical public infrastructure and so requires the protection; if one switch fails the other must continue to support the service).
I initially tried configuring "ip pim dense-mode" on the VLAN interfaces (one of the solutions in Cisco document #68131, which describes a problem and solutions for configuring multicast on an L2 network) to make the switch act as an "mrouter".  When I configure PIM on the applicable VLANs on one switch (the main core), the multicast applications work properly, but when I then configure PIM on the applicable VLANs on the other (backup) switch, IGMP snooping seems to fail and all interfaces on the VLAN get the multicast traffic whether they have joined the group or not (effectively causing a DoS attack on the interfaces that haven't joined the group).
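(That is, on each relevant SVI, something like the following; Vlan100 is a placeholder I am using for illustration:)
interface Vlan100
 ip pim dense-mode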
Another solution from document #68131 is to enable the IGMP querier feature on the L2 switches (and, I assume, remove the ip pim configuration).  This should make the switch act as an mrouter "proxy".
I have also read the chapter (chapter 38) in the IOS configuration guide on "Configuring IGMP Snooping", which has a section on configuring redundant IGMP snooping queriers.  I am thinking of trying the configuration that section suggests: remove ip pim and, instead, configure "ip igmp snooping querier" on the appropriate VLAN interfaces on both switches.  Unfortunately I do not have a lab to test this in, so I am currently limited to trying it on the live network (scary!).
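Roughly, what that section suggests would look like this on each core switch (a sketch only; Vlan100 is a placeholder, and the syntax shown is the 6500 SVI form from that chapter, so the 4500 equivalent may differ):
! per multicast VLAN, on both core switches
interface Vlan100
 no ip pim dense-mode
 ip igmp snooping querier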
So, my questions.  First, and in general, does anyone have any words of wisdom for me?  Second, if my network has only mrouter "proxies" but no actual mrouter (as I believe will be the case if I am using only the querier configurations), will that cause any problems with the multicast applications?
I am under some immediate pressure to solve this redundancy issue so any help would be greatly appreciated.

Steve
Can I have both the ip pim and the IGMP snooping querier configurations on the same (routed) VLAN interface?
You don't need to. The IGMP querier function is only used when you don't have PIM enabled on the VLAN interface. When you enable PIM on an L3 interface, it sends out IGMP queries, and the switch listens to the responses with IGMP snooping so it can map the multicast MAC address to the correct ports.  If you don't have PIM enabled, something still needs to send those IGMP queries, otherwise the switch has nothing to listen for; that is what the IGMP snooping querier does. So it's one or the other: with PIM enabled you do not need the querier.
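A quick way to see which device is acting as the querier and which ports snooping has learned as mrouter ports (commands as found on most Catalyst IOS trains; check availability on your particular releases):
show ip igmp snooping mrouter
show ip igmp snooping querier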
To be honest, I didn't understand a lot of what you said about your physical connectivity, other than each switch sees the other switches via trunk links.
So when you enable PIM on the 6500 only, multicast works properly for all clients on all switches. When you enable it on both switches, multicast is then flooded to all interfaces on both switches?
Can you just clarify what you mean by all interfaces, i.e. do you mean all end devices on all the switches start seeing multicast traffic?
When you enabled PIM on the 4500, did you enable it on all L3 interfaces at the same time?
I am just trying to get a picture of which switches were affected and how they connect back to the core switches. Like I say, I did not really follow your setup because I have no experience of that.  Is each access switch in effect connected to both core switches, or does each connect to only one or the other?
Jon

Similar Messages

  • Layer 2 multicast switching capacity

    Will you tell me the "Layer 2 multicast switching capacity" of the Catalyst switches (1924, 2900, 2950, 3500, 3550, 3750, 6500) and the 7600 router?
    Is it the same as the Layer 2 unicast switching capacity?
    Also, is Layer 2 multicast switching processed in hardware or software?
    Thanks.

    Probably not. Why? Because multicast is copied to all ports or, with IGMP snooping and/or CGMP enabled, copied only to ports interested in such traffic. That takes more processing than the MAC-address-table lookup used for unicast.
    All switching occurs in hardware ASICs.
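    (For context, the constraining behaviour mentioned above is IGMP snooping, which is on by default on Catalyst IOS switches, while CGMP applies to older switch/router combinations; a minimal sketch, with VLAN 100 as a placeholder:)
    ip igmp snooping
    ip igmp snooping vlan 100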

  • Redundant Cisco Switches

    diannewaters wrote: Comparing the SG300-28 and the SG500-28 seems like a good way to start, although I'm still really confused about how the fail-over works.
    The SG300 line is not stackable, whereas the SG500 line is stackable (sort of) via a 5 Gb/s crossover cable that plugs into a specific GBIC port on the front of the switch. If you want to protect against a single switch failure, you want a stackable switch. In this case, if your servers have dual network adapters, you would plug one server NIC into each switch; that way, if either switch fails, the server stays online. This works the same for the SG500 series and more expensive IOS-based switches. The 2960 and 3850 mentioned above have a dedicated high-speed stacking connection, whereas the SG500 is limited to 5 Gb/s on the stacking cable. For devices that only have a...

    What's the budget? At the less costly end, the SG300 or SG500 will fit the bill. These are small-business-style switches with a GUI management system, although they do have a very similar command-line interface to the more advanced switches I'm about to talk about... At the more costly end, the 2960X-R will fit, although they are not as feature-rich as a 3850XR, which has full Layer 3 features and functionality. Both of these are managed via the Cisco command line; if you are not familiar with this, the learning curve will be steep. They are the best switches out there though, with reliability, speed, throughput and features that are second to none. http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-2960-x-series-switches/data_sheet.......

  • WLC - AP Groups - Multicast - Bonjour - Apple TV v3

    Good morning,
    First off, I should say that I have followed the Apple Bonjour deployment guide, except for the interface-group portion.
    I have searched high and low, here and there, to no avail.
    http://www.cisco.com/en/US/products/hw/wireless/ps4570/products_tech_note09186a0080bb1d7c.shtml
    I am aware that the Bonjour gateway IOS may or may not come out in Oct/Nov 2012, which may be my only option at this point.
    Is this not working because of my AP group setup, or have I missed something?
    I can only get Bonjour to work if multicast - unicast mode is selected, but then our network slowly grinds to a halt, so it is not an option.
    When I first connect to the wireless I see 1 Bonjour device for about 3 minutes, and then it disappears.
    I cannot see the Apple TV at all with an iPad; AirPlay does not appear at all.
    We have the following setup.
    2 campuses - Campus 2 is a similar setup, but the WLCs are a higher model on code 7.2, and the clients and subnets are double
    Campus 1
    2 x WLC 4404 on code 7.0.230.0
    30 AP groups mapped to 30 interfaces using subnets with /23 subnet masks
    multicast - multicast mode is set, with multicast addresses of
    controller 1: 239.239.5.1 and
    controller 2: 239.239.5.2
    multicast is enabled
    IGMP snooping as well
    On the switch, multicast routing is enabled
    all AP-group subnets and mgmt VLANs are PIM enabled, dense mode
    I set up a trunk to an Ubuntu server to act as a Bonjour gateway, installed avahi and vlan, and
    mapped all AP and mgmt VLANs to the Ubuntu server.
    avahi sees the following + more
    + eth0.136 IPv6 Apple TV                                      _airplay._tcp        local
    + eth0.136 IPv4 Apple TV                                      _airplay._tcp        local
    + eth0.134 IPv6 Apple TV                                      _airplay._tcp        local
    + eth0.134 IPv4 Apple TV                                      _airplay._tcp        local
    + eth0.132 IPv6 Apple TV                                      _airplay._tcp        local
    + eth0.132 IPv4 Apple TV                                      _airplay._tcp        local
    + eth0.130 IPv6 Apple TV                                      _airplay._tcp        local
    and more; it goes on forever
    + eth0.136 IPv4 xyz Library                             Apple Home Sharing   local
    show ip multicast
      Multicast Routing: enabled
      Multicast Multipath: disabled
      Multicast Route limit: No limit
      Multicast Triggered RPF check: enabled
      Multicast Fallback group mode: Dense
    show ip multicast interface vlan 128
    Vlan128 is up, line protocol is up
      Internet address is x.x.128.1/23
      Multicast routing: enabled
      Multicast switching: fast
      Multicast packets in/out: 14671352/276693
      Multicast boundary: not set
      Multicast TTL threshold: 0
      Multicast Tagswitching: disabled
    Where do I go from here?

    Thanks Yahya and Stephen
    I have tried to simplify my config as much as possible.
    wlc 4404
    Ethernet Multicast Forwarding............... Enable
    Ethernet Broadcast Forwarding............... Enable
    AP Multicast/Broadcast Mode................. Multicast   Address : 239.239.5.1
    IGMP snooping............................... Enabled
    IGMP timeout................................ 60 seconds
    IGMP Query Interval......................... 20 seconds
    I have an interface created 10.x.x.x/23
    I have created a new SSID, APPLETV, and assigned the interface
    I have added the SSID to just 1 AP Group
    show network multicast mgid summary
    Layer2 MGID Mapping:
    InterfaceName                    vlanId   MGID
    2upadhoc                         136      27
    Layer3 MGID Mapping:
    Number of Layer3 MGIDs........................... 11
    My VLAN does not show up here.
    I only have 2 devices in this VLAN, the Apple TV and the iPad.
    Checking the switch for all required VLANs:
    show ip multicast
      Multicast Routing: enabled
      Multicast Multipath: disabled
      Multicast Route limit: No limit
      Multicast Triggered RPF check: enabled
      Multicast Fallback group mode: Dense
    admin interface
    Management, AP-Manager
    Vlan12 is up, line protocol is up
      Internet address is x.x.x.1/24
      Multicast routing: enabled
      Multicast switching: fast
      Multicast packets in/out: 238489978/724352
      Multicast boundary: not set
      Multicast TTL threshold: 0
      Multicast Tagswitching: disabled
    AP vlan
    Vlan222 is up, line protocol is up
      Internet address is x.y.z.1/24
      Multicast routing: enabled
      Multicast switching: fast
      Multicast packets in/out: 11423/238338583
      Multicast boundary: not set
      Multicast TTL threshold: 0
      Multicast Tagswitching: disabled
    The test Apple TV Vlan
    Vlan136 is up, line protocol is up
      Internet address is x.xx.1/23
      Multicast routing: enabled
      Multicast switching: fast
      Multicast packets in/out: 156740/0
      Multicast boundary: not set
      Multicast TTL threshold: 0
      Multicast Tagswitching: disabled
    interface Vlan12
    ip pim dense-mode
    interface Vlan222
    ip pim dense-mode
    interface Vlan136
    ip pim dense-mode
    show ip igmp groups
    Group Address    Interface                Uptime    Expires   Last Reporter
    224.0.1.39       Vlan136                  2d00h     00:02:35  x.x.x.1
    So, just to recap:
    Same subnet in an AP group
    New SSID
    multicast enabled on WLC - using multicast - multicast mode
    Broadcast forwarding enabled
    Switch - multicast routing enabled
    all VLANs enabled for PIM
    2 devices - added an iMac to see if I could home share through iTunes.
    End result:
    no Bonjour clients, no Apple TV, no AirPlay
    Bonjour gateway device - although it's on the same subnet, it shouldn't be needed
    eth0.12   Link encap:Ethernet  HWaddr bc:30:5b:x:x:x 
              inet addr:x.x.x.244  Bcast:x.x.x.255  Mask:255.255.255.0
              inet6 addr: fe80::be30:5bff:fed6:a178/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:55005 errors:0 dropped:115 overruns:0 frame:0
              TX packets:23003 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:2776156 (2.7 MB)  TX bytes:11285256 (11.2 MB)
    eth0.136  Link encap:Ethernet  HWaddr bc:30:5b:x:x:x 
              inet addr:x.x.x.9  Bcast:x.x.x.255  Mask:255.255.254.0
              inet6 addr: fe80::be30:5bff:fed6:a178/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:42167 errors:0 dropped:115 overruns:0 frame:0
              TX packets:22340 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:3251242 (3.2 MB)  TX bytes:10373581 (10.3 MB)
    eth0.222  Link encap:Ethernet  HWaddr bc:30:5b:xx:xx:xx 
              inet addr:x.x.x.9  Bcast:x.x.x.255  Mask:255.255.255.0
              inet6 addr: fe80::be30:5bff:fed6:a178/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:152397 errors:0 dropped:115 overruns:0 frame:0
              TX packets:23768 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:12795709 (12.7 MB)  TX bytes:11318103 (11.3 MB)
    + eth0.222 IPv6 67665ACD317A45B0                              _appletv-v2._tcp     local
    + eth0.222 IPv4 67665ACD317A45B0                              _appletv-v2._tcp     local
    + eth0.136 IPv6 67665ACD317A45B0                              _appletv-v2._tcp     local
    + eth0.136 IPv4 67665ACD317A45B0                              _appletv-v2._tcp     local
    + eth0.12 IPv6 67665ACD317A45B0                              _appletv-v2._tcp     local
    + eth0.12 IPv4 67665ACD317A45B0                              _appletv-v2._tcp     local
    Should Bonjour work on the same subnet with these settings?
    I am going to have to read more about the interface groups and the multicast VLAN.
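    (For anyone comparing notes, two verification commands that map to the outputs above; MGID 27 and VLAN 136 come from the MGID table earlier, and command availability should be checked against your particular switch/WLC releases:)
    show ip igmp snooping groups vlan 136     (on the switch: which ports have joined which groups)
    show network multicast mgid detail 27     (on the WLC: which clients sit behind a given MGID)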

  • Active/Standby Failover with pair of 5510s and redundant L2 links

    Hi
    I just got two ASA5510-SEC-BUN-K9 units and I'm wondering: is it possible to implement an Active/Standby failover configuration (routed mode) with two ASA5510s and a redundant pair of switches on both the inside and outside interfaces? In other words, I would like to have two L2 links from each ASA (in the pair of ASAs) to each L2 switch (in the pair of redundant L2 switches). The configuration I would like to achieve is just like the one in the Cisco Security Appliance Command Line Configuration Guide, page B-23, figure B-8, with the only difference that I wouldn't go with multiple security contexts (I want Active/Standby failover).
    Thanks in advance
    Zoran Milenkovic

    Hello Zoran,
    Absolutely. You can have 2 ASAs configured in Active/Standby mode. For reference, here is a link with a network connectivity diagram based on PIX; however, the connectivity would still be the same with ASAs:
    http://www.cisco.com/en/US/docs/security/pix/pix63/configuration/guide/failover.html#wp1053462
    The difference is that on the ASA you can only have LAN-based failover, hence you'll need to use one additional interface on both ASAs for the failover link. You can connect these two failover-link interfaces directly using a crossover cable.
    Apart from this, please refer to the following link on how to configure LAN-based Active/Standby failover:
    http://www.cisco.com/en/US/docs/security/asa/asa72/configuration/guide/failover.html#wp1064158
    Also make sure that both ASAs have the required hardware/software/license, per the following link:
    http://www.cisco.com/en/US/docs/security/asa/asa72/configuration/guide/failover.html#wp1047269
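    As a rough sketch of what the LAN-based failover pieces look like on the primary unit (Ethernet0/3 and the addresses are placeholders chosen for illustration, not recommendations for your setup):
    ! dedicated failover link on the primary ASA (example only)
    failover lan unit primary
    failover lan interface FOLINK Ethernet0/3
    failover interface ip FOLINK 10.1.1.1 255.255.255.252 standby 10.1.1.2
    interface Ethernet0/3
     no shutdown
    failover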
    Hope this helps.
    Regards,
    Vibhor.

  • Which is prioritized for multicast traffic if fast switching and CEF are both enabled?

    Hello
    Here is the related configuration and the output of the show commands below.
    In my understanding there are 3 switching modes: process (CPU), fast switching, and CEF switching.
    But if fast switching and CEF switching are both enabled, which switching mode is prioritized for multicast traffic?
    interface Vlan302
    ip address 10.0.20.1 255.255.255.0
    3750X#sh ip int vlan 302
    Vlan302 is down, line protocol is down
      Internet address is 10.0.20.1/24
      Broadcast address is 255.255.255.255
      *omit
      IP fast switching is enabled
      IP Flow switching is disabled
      IP CEF switching is enabled
      IP CEF switching turbo vector
      IP Null turbo vector
      IP multicast fast switching is enabled
      IP multicast distributed fast switching is enabled
      IP route-cache flags are Fast, CEF
      *omit
    interface Vlan301
    ip address 10.0.10.1 255.255.255.0
    no ip mroute-cache
    3750X#sh ip int vlan 301
    Vlan301 is down, line protocol is down
      Internet address is 10.0.10.1/24
      Broadcast address is 255.255.255.255
      *omit
      IP fast switching is enabled
      IP Flow switching is disabled
      IP CEF switching is enabled
      IP CEF switching turbo vector
      IP Null turbo vector
      IP multicast fast switching is disabled
      IP multicast distributed fast switching is disabled
      IP route-cache flags are Fast, CEF, No Distributed
      *omit
    Product : Cat3750X
    IOS version :  15.0(2)SE5
    Best Regards,
    Masanobu Hiyoshi

    I'm not 100% certain, but I believe fast switching and CEF switching apply to unicast, not multicast.  Your "ip mroute-cache" command enables/disables multicast fast switching.
    On a 3750, switching should be hardware-based for both unicast and multicast, unless TCAM resources are insufficient.  If hardware switching falls back to software switching, you'll likely find that process vs. fast vs. CEF vs. multicast fast switching doesn't matter: all are too slow.
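    (To make the knob explicit, a sketch of toggling it on the SVI from the original post; Vlan301 is taken from the configuration above:)
    interface Vlan301
     ip mroute-cache
     ! default: multicast fast switching enabled; "no ip mroute-cache" forces
     ! multicast on this SVI to be process switched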

  • Multicast between wireless clients on same AP and Mobility Group

    We have an autonomous wireless setup with a WLSM and WLSE. I have an issue where I have 2 wireless clients that need to communicate using a multicast address for an application to work.
    The clients can ping each other, but the multicast stream is not working between them. The SSID is part of a mobility group that sits on a Cat 6509 Sup720.
    A debug on the Sup720 shows the upstream multicast from one of the clients, but you see no activity downstream to the other client.
    Tunnel multicast stats:
    Tunnel176 is up, line protocol is up
    Internet address is 53.32.176.33/27
    Multicast routing: enabled
    Multicast switching: fast
    Multicast packets in/out: 2619/0
    Multicast boundary: not set
    Multicast TTL threshold: 0
    Multicast Tagswitching: disabled
    Sup720 debug:
    May 9 15:15:31: IGMP(0): Send v2 Report for 224.0.1.40 on Tunnel176
    May 9 15:15:31: IGMP(0): Received v2 Report on Tunnel176 from 53.32.176.33 for 224.0.1.40
    May 9 15:15:31: IGMP(0): Received Group record for group 224.0.1.40, mode 2 from 53.32.176.33 for 0 sources
    May 9 15:15:31: IGMP(0): Updating EXCLUDE group timer for 224.0.1.40
    May 9 15:15:31: IGMP(0): MRT Add/Update Tunnel176 for (*,224.0.1.40) by 0
    May 9 15:16:15: IP(0): s=53.32.176.40 (Tunnel176) d=225.0.0.38 id=11765, prot=17, len=68(54), mroute olist null
    May 9 15:16:15: IP(0): s=53.32.176.40 (Tunnel176) d=225.0.0.37 id=11766, prot=17, len=68(54), mroute olist null
    May 9 15:16:25: IGMP(0): Send v2 general Query on Tunnel176
    May 9 15:16:25: IGMP(0): Set report delay time to 0.9 seconds for 224.0.1.40 on Tunnel176
    Please help, as I need to get the 2 clients communicating using the multicast stream.
    Thanks
    Martin

    Have you enabled multicast mode on the controller?
    If so, in what mode? Unicast or multicast?
    If you selected multicast mode, do you see the controller joining the original stream and sending it to the LWAPP distribution group and on to the other APs?
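    (For reference, on a WLC the commands being asked about look roughly like this; the group address is a placeholder:)
    config network multicast global enable
    config network multicast mode multicast 239.1.1.1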

  • A Problem with VSS in the 6509

    Hello,
    I have a problem with VSS in the 6500: when switch 1 is active, after a few months it will switch over to switch 2 and switch 1 reloads.
    config:
    switch virtual domain 1
     switch mode virtual
     switch 1 priority 110
     switch 2 priority 110
    interface Port-channel1
     description To 6509-B-VSS
     no switchport
     no ip address
     switch virtual link 1
     mls qos trust cos
     no mls qos channel-consistency
    interface Port-channel2
     description 6509-A-VSS
     no switchport
     no ip address
     switch virtual link 2
     mls qos trust cos
     no mls qos channel-consistency
    interface GigabitEthernet1/4/20
     no switchport
     no ip address
     dual-active fast-hello
    interface TenGigabitEthernet1/5/4
     description To 6509-B-VSS
     no switchport
     no ip address
     mls qos trust cos
     channel-group 1 mode on
    interface TenGigabitEthernet1/5/5
     description To 6509-B-VSS
     no switchport
     no ip address
     mls qos trust cos
     channel-group 1 mode on
    interface GigabitEthernet2/4/20
     no switchport
     no ip address
     dual-active fast-hello
    interface TenGigabitEthernet2/5/4
     description To 6509-A-VSS
     no switchport
     no ip address
     mls qos trust cos
     channel-group 2 mode on
    interface TenGigabitEthernet2/5/5
     description To 6509-A-VSS
     no switchport
     no ip address
     mls qos trust cos
     channel-group 2 mode on
    show switch virtual redundancy
                      My Switch Id = 2
                    Peer Switch Id = 1
            Last switchover reason = active unit removed
        Configured Redundancy Mode = sso
         Operating Redundancy Mode = sso
    Switch 2 Slot 5 Processor Information :
            Current Software state = ACTIVE
           Uptime in current state = 5 hours, 24 minutes
                     Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-IPBASEK9-M), Version 12.2(33)SXJ5, RELEASE SOFTWARE (fc2)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2013 by Cisco Systems, Inc.
    Compiled Thu 31-Jan-13 14:30 by prod_rel_team
                              BOOT = sup-bootdisk:/s72033-ipbasek9-mz.122-33.SXJ5.bin,12;
            Configuration register = 0x2102
                      Fabric State = ACTIVE
               Control Plane State = ACTIVE
    Switch 1 Slot 5 Processor Information :
            Current Software state = STANDBY HOT (switchover target)
           Uptime in current state = 5 hours, 18 minutes
                     Image Version = Cisco IOS Software, s72033_rp Software (s72033_rp-IPBASEK9-M), Version 12.2(33)SXJ5, RELEASE SOFTWARE (fc2)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2013 by Cisco Systems, Inc.
    Compiled Thu 31-Jan-13 14:30 by prod_rel_team
                              BOOT = sup-bootdisk:/s72033-ipbasek9-mz.122-33.SXJ5.bin,12;
            Configuration register = 0x2102
                      Fabric State = ACTIVE
               Control Plane State = STANDBY
    show log
    Jun 21 11:20:52.742: %VSL-SW2_SPSTBY-5-VSL_CNTRL_LINK:  New VSL Control Link Te2/5/4
    Jun 21 11:20:52.786: %VSLP-SW2_SPSTBY-3-VSLP_LMP_FAIL_REASON: Te2/5/4: Link down
    Jun 21 11:20:52.786: %VSLP-SW2_SPSTBY-2-VSL_DOWN:   Last VSL interface Te2/5/4 went down
    Jun 21 11:20:52.790: %VSLP-SW2_SPSTBY-2-VSL_DOWN:   All VSL links went down while switch is in Standby role
    Jun 21 11:20:52.790: %DUAL_ACTIVE-SW2_SPSTBY-1-VSL_DOWN: VSL is down - switchover, or possible dual-active situation has occurred
    Jun 21 11:20:53.622: %SYS-SW2_SPSTBY-3-LOGGER_FLUSHED: System was paused for 00:00:00 to ensure console debugging output.
    Jun 21 11:20:54.678: %C6KPWR-SP-4-PSOK: power supply 1 turned on.
    Jun 21 11:20:54.682: %C6KPWR-SP-4-PSOK: power supply 2 turned on.
    Jun 21 11:20:54.738: %SATVS_IBC-SW2_SP-5-VSL_DOWN_SCP_DROP: VSL inactive - dropping cached SCP packet: (SA/DA:0x4/0x4, SSAP/DSAP:0x19/0x0, OP/SEQ:0x320/0x8FD9, SIG/INFO:0x1/0x502, eSA:0000.0500.0000)
    Jun 21 11:20:56.490: %SATVS_IBC-5-VSL_DOWN_SCP_DROP: VSL inactive - dropping cached SCP packet: (SA/DA:0x14/0x4, SSAP/DSAP:0x18/0x0, OP/SEQ:0x19/0x98, SIG/INFO:0x1/0x502, eSA:0000.1500.0000)
    Jun 21 11:21:02.125: %VSDA-SW2_SP-3-LINK_DOWN: Interface Gi2/4/20 is no longer dual-active detection capable
    Jun 21 11:23:27.719: %VSLP-SW2_SP-5-RRP_ROLE_RESOLVED: Role resolved as ACTIVE  by VSLP
    Jun 21 11:23:27.719: %VSL-SW2_SP-5-VSL_CNTRL_LINK:  New VSL Control Link Te2/5/4
    Jun 21 11:23:27.799: %VSLP-SW2_SP-5-VSL_UP:  Ready for control traffic
    Jun 21 11:24:43.346: %PFINIT-SW2_SP-5-CONFIG_SYNC: Sync'ing the startup configuration to the standby Router.
    Jun 21 11:25:35.304: %VSLP-SW2_SP-5-VSL_UP:  Ready for data traffic
    Jun 21 11:25:14.946: %C6KPWR-SW1_SPSTBY-4-PSOK: power supply 1 turned on.
    Jun 21 11:25:14.950: %C6KPWR-SW1_SPSTBY-4-PSOK: power supply 2 turned on.
    Jun 21 11:25:16.975: %FABRIC-SW1_SPSTBY-5-CLEAR_BLOCK: Clear block option is off for the fabric in slot 5.
    Jun 21 11:25:16.975: %FABRIC-SW1_SPSTBY-5-FABRIC_MODULE_ACTIVE: The Switch Fabric Module in slot 5 became active.
    Jun 21 11:26:13.738: %SYS-SW1_SPSTBY-5-RESTART: System restarted --
    Jun 21 11:26:20.446: %SYS-SW1_SPSTBY-3-LOGGER_FLUSHED: System was paused for 00:01:47 to ensure console debugging output.
    Jun 21 11:26:32.056: %RF-SW2_SP-5-RF_TERMINAL_STATE: Terminal state reached for (SSO)
    Jun 21 11:27:56.962: %SYS-SW1_SPSTBY-3-LOGGER_FLUSHED: System was paused for 00:01:24 to ensure console debugging output.
    Jun 21 11:27:53.970: %SYS-DFC4-5-RESTART: System restarted --
    Jun 21 11:28:34.222: %VSDA-SW1_SPSTBY-5-LINK_UP: Interface Gi1/4/20 is now dual-active detection capable
    Jun 21 11:28:36.236: %VSDA-SW2_SP-5-LINK_UP: Interface Gi2/4/20 is now dual-active detection capable
    Best Regards

    Hi,
    Information of Last System Crash - SP
    Writing crashinfo to bootflash:crashinfo_SP_20140621-192052-TW 00 07 00 00 00 00 00 00 00 0F 00 00 00 00 00 00 10 D6 0A 06 FD 37 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00
    Jun 21 11:19:43.723: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.723: SW1_SP: IPC: Message 504CA554 timed out waiting for Ack
    Jun 21 11:19:43.723: SW1_SP: IPC:  MSG: ptr: 0x504CA554, flags: 0x24100, retries: 8, seq: 0x315E1ED, refcount: 1, rpc_result = 0x0, data_buffer = 0x504104BC, header = 0x8A453C8, data = 0x8A453E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57837, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AF38, lo: 0x8A453E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 5E 0A 06 01 01 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 08 00 00
    Jun 21 11:19:43.723: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504CA554, flags: 0x24100, retries: 8, seq: 0x315E1ED, refcount: 1, rpc_result = 0x0, data_buffer = 0x504104BC, header = 0x8A453C8, data = 0x8A453E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57837, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AF38, lo: 0x8A453E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 5E 0A 06 01 01 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 08 00 00
    Jun 21 11:19:43.727: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.727: SW1_SP: IPC: Message 4528FE20 timed out waiting for Ack
    Jun 21 11:19:43.727: SW1_SP: IPC:  MSG: ptr: 0x4528FE20, flags: 0x24100, retries: 8, seq: 0x315E1EE, refcount: 1, rpc_result = 0x0, data_buffer = 0x4520B23C, header = 0x8C884C8, data = 0x8C884E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57838, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AF39, lo: 0x8C884E8  || DATA: 2C D5 00 01 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 87 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 40 0A 06 0A 15 01 04 00 00 00 87 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00
    Jun 21 11:19:43.727: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x4528FE20, flags: 0x24100, retries: 8, seq: 0x315E1EE, refcount: 1, rpc_result = 0x0, data_buffer = 0x4520B23C, header = 0x8C884C8, data = 0x8C884E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57838, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AF39, lo: 0x8C884E8  || DATA: 2C D5 00 01 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 87 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 40 0A 06 0A 15 01 04 00 00 00 87 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00
    Jun 21 11:19:43.727: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.727: SW1_SP: IPC: Message 4529E380 timed out waiting for Ack
    Jun 21 11:19:43.727: SW1_SP: IPC:  MSG: ptr: 0x4529E380, flags: 0x24100, retries: 8, seq: 0x315E1EF, refcount: 1, rpc_result = 0x0, data_buffer = 0x50418920, header = 0x8A93DC8, data = 0x8A93DE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57839, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AF3A, lo: 0x8A93DE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 BC 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 12 00 00 00 00 00 00 08 9E 0A 06 9A 0B 01 04 00 00 00 BC 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 16 00 00
    Jun 21 11:19:43.727: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x4529E380, flags: 0x24100, retries: 8, seq: 0x315E1EF, refcount: 1, rpc_result = 0x0, data_buffer = 0x50418920, header = 0x8A93DC8, data = 0x8A93DE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57839, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AF3A, lo: 0x8A93DE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 BC 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 12 00 00 00 00 00 00 08 9E 0A 06 9A 0B 01 04 00 00 00 BC 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 16 00 00
    Jun 21 11:19:43.731: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.731: SW1_SP: IPC: Message 45238524 timed out waiting for Ack
    Jun 21 11:19:43.731: SW1_SP: IPC:  MSG: ptr: 0x45238524, flags: 0x24100, retries: 8, seq: 0x315E1F0, refcount: 1, rpc_result = 0x0, data_buffer = 0x50372694, header = 0x8468FC8, data = 0x8468FE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57840, sz: 1332, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AF3B, lo: 0x8468FE8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 05 00 00 00 00 00 00 03 74 0A 06 FD 4C 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00
    Jun 21 11:19:43.731: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x45238524, flags: 0x24100, retries: 8, seq: 0x315E1F0, refcount: 1, rpc_result = 0x0, data_buffer = 0x50372694, header = 0x8468FC8, data = 0x8468FE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57840, sz: 1332, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AF3B, lo: 0x8468FE8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 05 00 00 00 00 00 00 03 74 0A 06 FD 4C 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00
    Jun 21 11:19:43.731: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.731: SW1_SP: IPC: Message 4524C89C timed out waiting for Ack
    Jun 21 11:19:43.731: SW1_SP: IPC:  MSG: ptr: 0x4524C89C, flags: 0x24100, retries: 6, seq: 0x315E1F1, refcount: 1, rpc_result = 0x0, data_buffer = 0x451B45EC, header = 0x894FCC8, data = 0x894FCE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57841, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFE9, lo: 0x894FCE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00 00 00 00 00 01 CF 0A 06 01 06 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00
    Jun 21 11:19:43.731: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x4524C89C, flags: 0x24100, retries: 6, seq: 0x315E1F1, refcount: 1, rpc_result = 0x0, data_buffer = 0x451B45EC, header = 0x894FCC8, data = 0x894FCE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57841, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFE9, lo: 0x894FCE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00 00 00 00 00 01 CF 0A 06 01 06 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 01 00 00
    Jun 21 11:19:43.731: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.731: SW1_SP: IPC: Message 45268348 timed out waiting for Ack
    Jun 21 11:19:43.731: SW1_SP: IPC:  MSG: ptr: 0x45268348, flags: 0x24100, retries: 6, seq: 0x315E1F2, refcount: 1, rpc_result = 0x0, data_buffer = 0x50462254, header = 0x8D4EFC8, data = 0x8D4EFE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57842, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFEA, lo: 0x8D4EFE8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 86 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 38 00 00 00 00 00 00 26 AB 0A 06 05 41 01 04 00 00 00 86 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 19 00 00
    Jun 21 11:19:43.731: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x45268348, flags: 0x24100, retries: 6, seq: 0x315E1F2, refcount: 1, rpc_result = 0x0, data_buffer = 0x50462254, header = 0x8D4EFC8, data = 0x8D4EFE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57842, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFEA, lo: 0x8D4EFE8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 86 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 38 00 00 00 00 00 00 26 AB 0A 06 05 41 01 04 00 00 00 86 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 19 00 00
    Jun 21 11:19:43.735: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.735: SW1_SP: IPC: Message 45281D58 timed out waiting for Ack
    Jun 21 11:19:43.735: SW1_SP: IPC:  MSG: ptr: 0x45281D58, flags: 0x24100, retries: 6, seq: 0x315E1F3, refcount: 1, rpc_result = 0x0, data_buffer = 0x45201860, header = 0x8C2CEC8, data = 0x8C2CEE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57843, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFEB, lo: 0x8C2CEE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 B9 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 04 24 0A 06 97 0D 01 04 00 00 00 BB 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00
    Jun 21 11:19:43.735: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x45281D58, flags: 0x24100, retries: 6, seq: 0x315E1F3, refcount: 1, rpc_result = 0x0, data_buffer = 0x45201860, header = 0x8C2CEC8, data = 0x8C2CEE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57843, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFEB, lo: 0x8C2CEE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 B9 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 04 24 0A 06 97 0D 01 04 00 00 00 BB 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00
    Jun 21 11:19:43.735: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.735: SW1_SP: IPC: Message 504A2D18 timed out waiting for Ack
    Jun 21 11:19:43.735: SW1_SP: IPC:  MSG: ptr: 0x504A2D18, flags: 0x24100, retries: 6, seq: 0x315E1F4, refcount: 1, rpc_result = 0x0, data_buffer = 0x451F3A88, header = 0x8BA92C8, data = 0x8BA92E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57844, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFEC, lo: 0x8BA92E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 05 00 00 00 00 00 00 01 48 0A 06 FD 2F 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00
    Jun 21 11:19:43.735: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504A2D18, flags: 0x24100, retries: 6, seq: 0x315E1F4, refcount: 1, rpc_result = 0x0, data_buffer = 0x451F3A88, header = 0x8BA92C8, data = 0x8BA92E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57844, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFEC, lo: 0x8BA92E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 05 00 00 00 00 00 00 01 48 0A 06 FD 2F 01 04 00 00 00 DD 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00
    Jun 21 11:19:43.735: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.735: SW1_SP: IPC: Message 504AA128 timed out waiting for Ack
    Jun 21 11:19:43.735: SW1_SP: IPC:  MSG: ptr: 0x504AA128, flags: 0x24100, retries: 6, seq: 0x315E1F5, refcount: 1, rpc_result = 0x0, data_buffer = 0x45228364, header = 0x8D9C8C8, data = 0x8D9C8E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57845, sz: 568, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFED, lo: 0x8D9C8E8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 02 26 00 0D 00 10 02 20 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 08 00 00 00 00 00 00 02 3D 0A 06 FE 6C 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 2D 00 00
    Jun 21 11:19:43.739: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504AA128, flags: 0x24100, retries: 6, seq: 0x315E1F5, refcount: 1, rpc_result = 0x0, data_buffer = 0x45228364, header = 0x8D9C8C8, data = 0x8D9C8E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57845, sz: 568, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56AFED, lo: 0x8D9C8E8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 02 26 00 0D 00 10 02 20 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 08 00 00 00 00 00 00 02 3D 0A 06 FE 6C 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 2D 00 00
    Jun 21 11:19:43.739: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.739: SW1_SP: IPC: Message 504C15B4 timed out waiting for Ack
    Jun 21 11:19:43.739: SW1_SP: IPC:  MSG: ptr: 0x504C15B4, flags: 0x24100, retries: 4, seq: 0x315E1F6, refcount: 1, rpc_result = 0x0, data_buffer = 0x503C193C, header = 0x87593C8, data = 0x87593E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57846, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B08B, lo: 0x87593E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 3B 00 00 00 00 00 00 10 A7 0A 06 01 06 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 01 3B 00 00
    Jun 21 11:19:43.739: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504C15B4, flags: 0x24100, retries: 4, seq: 0x315E1F6, refcount: 1, rpc_result = 0x0, data_buffer = 0x503C193C, header = 0x87593C8, data = 0x87593E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57846, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B08B, lo: 0x87593E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 3B 00 00 00 00 00 00 10 A7 0A 06 01 06 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 01 3B 00 00
    Jun 21 11:19:43.739: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.739: SW1_SP: IPC: Message 45273050 timed out waiting for Ack
    Jun 21 11:19:43.739: SW1_SP: IPC:  MSG: ptr: 0x45273050, flags: 0x24100, retries: 4, seq: 0x315E1F7, refcount: 1, rpc_result = 0x0, data_buffer = 0x5044348C, header = 0x8C29BC8, data = 0x8C29BE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57847, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B08C, lo: 0x8C29BE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 89 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 02 00 00 00 00 00 00 02 24 0A 06 0C 18 01 04 00 00 00 98 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00
    Jun 21 11:19:43.739: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x45273050, flags: 0x24100, retries: 4, seq: 0x315E1F7, refcount: 1, rpc_result = 0x0, data_buffer = 0x5044348C, header = 0x8C29BC8, data = 0x8C29BE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57847, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B08C, lo: 0x8C29BE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 89 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 02 00 00 00 00 00 00 02 24 0A 06 0C 18 01 04 00 00 00 98 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00
    Jun 21 11:19:43.743: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.743: SW1_SP: IPC: Message 452A33E4 timed out waiting for Ack
    Jun 21 11:19:43.743: SW1_SP: IPC:  MSG: ptr: 0x452A33E4, flags: 0x24100, retries: 4, seq: 0x315E1F8, refcount: 1, rpc_result = 0x0, data_buffer = 0x50432830, header = 0x8B8A5C8, data = 0x8B8A5E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57848, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B08D, lo: 0x8B8A5E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 C4 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 04 24 0A 06 A2 0B 01 04 00 00 00 C5 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00
    Jun 21 11:19:43.743: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x452A33E4, flags: 0x24100, retries: 4, seq: 0x315E1F8, refcount: 1, rpc_result = 0x0, data_buffer = 0x50432830, header = 0x8B8A5C8, data = 0x8B8A5E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57848, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B08D, lo: 0x8B8A5E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 C4 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 04 24 0A 06 A2 0B 01 04 00 00 00 C5 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00
    Jun 21 11:19:43.743: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.743: SW1_SP: IPC: Message 4523FFA4 timed out waiting for Ack
    Jun 21 11:19:43.743: SW1_SP: IPC:  MSG: ptr: 0x4523FFA4, flags: 0x24100, retries: 4, seq: 0x315E1F9, refcount: 1, rpc_result = 0x0, data_buffer = 0x5036F47C, header = 0x844B3C8, data = 0x844B3E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57849, sz: 772, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B08E, lo: 0x844B3E8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 02 F2 00 0D 00 16 02 EC 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 04 00 00 00 00 00 00 01 46 0A 06 FE 5A 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 04 00 00
    Jun 21 11:19:43.743: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x4523FFA4, flags: 0x24100, retries: 4, seq: 0x315E1F9, refcount: 1, rpc_result = 0x0, data_buffer = 0x5036F47C, header = 0x844B3C8, data = 0x844B3E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57849, sz: 772, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B08E, lo: 0x844B3E8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 02 F2 00 0D 00 16 02 EC 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 04 00 00 00 00 00 00 01 46 0A 06 FE 5A 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 04 00 00
    Jun 21 11:19:43.743: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.743: SW1_SP: IPC: Message 45279200 timed out waiting for Ack
    Jun 21 11:19:43.743: SW1_SP: IPC:  MSG: ptr: 0x45279200, flags: 0x24100, retries: 2, seq: 0x315E1FA, refcount: 1, rpc_result = 0x0, data_buffer = 0x5039C3B0, header = 0x85F65C8, data = 0x85F65E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57850, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B123, lo: 0x85F65E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00 00 00 00 00 01 CF 0A 06 01 06 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 02 00 00
    Jun 21 11:19:43.747: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x45279200, flags: 0x24100, retries: 2, seq: 0x315E1FA, refcount: 1, rpc_result = 0x0, data_buffer = 0x5039C3B0, header = 0x85F65C8, data = 0x85F65E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57850, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B123, lo: 0x85F65E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00 00 00 00 00 01 CF 0A 06 01 06 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 02 00 00
    Jun 21 11:19:43.747: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.747: SW1_SP: IPC: Message 504D3544 timed out waiting for Ack
    Jun 21 11:19:43.747: SW1_SP: IPC:  MSG: ptr: 0x504D3544, flags: 0x24100, retries: 2, seq: 0x315E1FB, refcount: 1, rpc_result = 0x0, data_buffer = 0x5044AE34, header = 0x8C71FC8, data = 0x8C71FE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57851, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B124, lo: 0x8C71FE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 88 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 04 00 00 00 00 00 00 04 48 0A 06 0B 24 01 04 00 00 00 89 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 04 00 00
    Jun 21 11:19:43.747: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504D3544, flags: 0x24100, retries: 2, seq: 0x315E1FB, refcount: 1, rpc_result = 0x0, data_buffer = 0x5044AE34, header = 0x8C71FC8, data = 0x8C71FE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57851, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B124, lo: 0x8C71FE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 88 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 04 00 00 00 00 00 00 04 48 0A 06 0B 24 01 04 00 00 00 89 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 04 00 00
    Jun 21 11:19:43.747: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.747: SW1_SP: IPC: Message 504D5940 timed out waiting for Ack
    Jun 21 11:19:43.747: SW1_SP: IPC:  MSG: ptr: 0x504D5940, flags: 0x24100, retries: 2, seq: 0x315E1FC, refcount: 1, rpc_result = 0x0, data_buffer = 0x4521415C, header = 0x8CDD4C8, data = 0x8CDD4E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57852, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B125, lo: 0x8CDD4E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 BE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 0E 6D 0A 06 9C 5C 01 04 00 00 00 BF 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 08 00 00
    Jun 21 11:19:43.747: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504D5940, flags: 0x24100, retries: 2, seq: 0x315E1FC, refcount: 1, rpc_result = 0x0, data_buffer = 0x4521415C, header = 0x8CDD4C8, data = 0x8CDD4E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57852, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B125, lo: 0x8CDD4E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 BE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 0E 6D 0A 06 9C 5C 01 04 00 00 00 BF 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 08 00 00
    Jun 21 11:19:43.751: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.751: SW1_SP: IPC: Message 452989EC timed out waiting for Ack
    Jun 21 11:19:43.751: SW1_SP: IPC:  MSG: ptr: 0x452989EC, flags: 0x24100, retries: 2, seq: 0x315E1FD, refcount: 1, rpc_result = 0x0, data_buffer = 0x45172660, header = 0x86DCEC8, data = 0x86DCEE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57853, sz: 1060, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B126, lo: 0x86DCEE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 07 00 00 00 00 00 00 03 3E 0A 06 FE 1D 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 03 BF 00 00
    Jun 21 11:19:43.751: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x452989EC, flags: 0x24100, retries: 2, seq: 0x315E1FD, refcount: 1, rpc_result = 0x0, data_buffer = 0x45172660, header = 0x86DCEC8, data = 0x86DCEE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57853, sz: 1060, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B126, lo: 0x86DCEE8  || DATA: 2C D5 00 00 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 07 00 00 00 00 00 00 03 3E 0A 06 FE 1D 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 03 BF 00 00
    Jun 21 11:19:43.751: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.751: SW1_SP: IPC: Message 504C9DAC timed out waiting for Ack
    Jun 21 11:19:43.751: SW1_SP: IPC:  MSG: ptr: 0x504C9DAC, flags: 0x24108, retries: 0, seq: 0x315E1FE, refcount: 1, rpc_result = 0x0, data_buffer = 0x451D3748, header = 0x8A772C8, data = 0x8A772E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57854, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B1D4, lo: 0x8A772E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00 00 00 00 00 01 CF 0A 06 01 06 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 97 00 00
    Jun 21 11:19:43.751: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504C9DAC, flags: 0x24108, retries: 0, seq: 0x315E1FE, refcount: 1, rpc_result = 0x0, data_buffer = 0x451D3748, header = 0x8A772C8, data = 0x8A772E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57854, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B1D4, lo: 0x8A772E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00 00 00 00 00 01 CF 0A 06 01 06 01 04 00 00 00 3F 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 97 00 00
    Jun 21 11:19:43.751: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.751: SW1_SP: IPC: Message 4525926C timed out waiting for Ack
    Jun 21 11:19:43.751: SW1_SP: IPC:  MSG: ptr: 0x4525926C, flags: 0x24108, retries: 0, seq: 0x315E1FF, refcount: 1, rpc_result = 0x0, data_buffer = 0x50405568, header = 0x89DD1C8, data = 0x89DD1E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57855, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B1D5, lo: 0x89DD1E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 88 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00 00 00 00 00 01 80 0A 06 0B 24 01 04 00 00 00 8A 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 02 00 00
    Jun 21 11:19:43.751: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x4525926C, flags: 0x24108, retries: 0, seq: 0x315E1FF, refcount: 1, rpc_result = 0x0, data_buffer = 0x50405568, header = 0x89DD1C8, data = 0x89DD1E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57855, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B1D5, lo: 0x89DD1E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 88 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 06 00 00 00 00 00 00 01 80 0A 06 0B 24 01 04 00 00 00 8A 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 02 00 00
    Jun 21 11:19:43.755: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.755: SW1_SP: IPC: Message 452512E0 timed out waiting for Ack
    Jun 21 11:19:43.755: SW1_SP: IPC:  MSG: ptr: 0x452512E0, flags: 0x24108, retries: 0, seq: 0x315E200, refcount: 1, rpc_result = 0x0, data_buffer = 0x451AF00C, header = 0x891CCC8, data = 0x891CCE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57856, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B1D6, lo: 0x891CCE8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 C0 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0B 00 00 00 00 00 00 04 B2 0A 06 9E 0B 01 04 00 00 00 C3 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 17 00 00
    Jun 21 11:19:43.755: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x452512E0, flags: 0x24108, retries: 0, seq: 0x315E200, refcount: 1, rpc_result = 0x0, data_buffer = 0x451AF00C, header = 0x891CCC8, data = 0x891CCE8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57856, sz: 1008, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B1D6, lo: 0x891CCE8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 03 E0 00 0D 00 1D 03 DA 01 04 00 00 00 C0 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0B 00 00 00 00 00 00 04 B2 0A 06 9E 0B 01 04 00 00 00 C3 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 17 00 00
    Jun 21 11:19:43.755: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:43.755: SW1_SP: IPC: Message 504A9C90 timed out waiting for Ack
    Jun 21 11:19:43.755: SW1_SP: IPC:  MSG: ptr: 0x504A9C90, flags: 0x24108, retries: 0, seq: 0x315E201, refcount: 1, rpc_result = 0x0, data_buffer = 0x45158028, header = 0x85E22C8, data = 0x85E22E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57857, sz: 736, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B1D7, lo: 0x85E22E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 02 D0 00 0D 00 15 02 CA 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 03 4E 0A 06 FE 4C 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 01 C2 00 00
    Jun 21 11:19:43.755: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504A9C90, flags: 0x24108, retries: 0, seq: 0x315E201, refcount: 1, rpc_result = 0x0, data_buffer = 0x45158028, header = 0x85E22C8, data = 0x85E22E8  || HDR: src: 0x10000, dst: 0x315001B, index: 0, seq: 57857, sz: 736, type: 882, flags: 0x400, ext_flags: 0x0, hi: 0xA56B1D7, lo: 0x85E22E8  || DATA: 2C D5 00 02 0E 0B 00 00 00 00 00 20 00 00 02 D0 00 0D 00 15 02 CA 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 0A 00 00 00 00 00 00 03 4E 0A 06 FE 4C 01 04 00 00 00 DE 00 00 00 00 00 00 00 07 00 00 00 00 00 00 01 C2 00 00
    Jun 21 11:19:43.755: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:19:52.687: %CPU_MONITOR-SW1_SP-6-NOT_HEARD: CPU_MONITOR messages have not been heard for 120 seconds [21/1]
    Jun 21 11:20:22.687: %CPU_MONITOR-SW1_SP-6-NOT_HEARD: CPU_MONITOR messages have not been heard for 150 seconds [21/1]
    Jun 21 11:20:33.175: SW1_SP: IPC: Message 504F68E8 timed out waiting for Ack
    Jun 21 11:20:33.175: SW1_SP: IPC:  MSG: ptr: 0x504F68E8, flags: 0x34101, retries: 21, seq: 0x315AABD, refcount: 2, rpc_result = 0x0, data_buffer = 0x4517147C, header = 0x86D24C8, data = 0x86D24E8  || HDR: src: 0x10000, dst: 0x315000E, index: 1, seq: 43709, sz: 80, type: 1, flags: 0x1404, ext_flags: 0x0, hi: 0xA56AE85, lo: 0x86D24E8  || DATA: 00 00 00 15 00 00 00 00 00 00 07 D1 00 00 00 0A 00 00 00 0C 00 00 00 0F 00 00 00 05 00 00 00 0A 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Jun 21 11:20:33.175: SW1_SP: IPC: Send failed: IPC msg timeout MSG: ptr: 0x504F68E8, flags: 0x34101, retries: 21, seq: 0x315AABD, refcount: 2, rpc_result = 0x0, data_buffer = 0x4517147C, header = 0x86D24C8, data = 0x86D24E8  || HDR: src: 0x10000, dst: 0x315000E, index: 1, seq: 43709, sz: 80, type: 1, flags: 0x1404, ext_flags: 0x0, hi: 0xA56AE85, lo: 0x86D24E8  || DATA: 00 00 00 15 00 00 00 00 00 00 07 D1 00 00 00 0A 00 00 00 0C 00 00 00 0F 00 00 00 05 00 00 00 0A 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    Jun 21 11:20:33.175: SW1_SP: -Traceback= 4042B17C 404543D0 40454508 408A5DE4 4044FF98 408AAFE8 408AAFD4
    Jun 21 11:20:33.179: %C6K_PROCMIB-SW1_SP-3-IPC_TRANSMIT_FAIL: Failed to send process statistics update : error code = timeout
    -Traceback= 40BF9E18 40BF9E68 40BFA070 40BFA2D4 408AAFE8 408AAFD4
    Jun 21 11:20:52.687: %CPU_MONITOR-SW1_SP-3-TIMED_OUT: CPU_MONITOR messages have failed, resetting system [21/1]

• What are the host network requirements for a 2012 R2 failover cluster using Fibre Channel?

I've seen comments on here regarding how the heartbeat signal isn't really required anymore - is that true? We started using Hyper-V in its infancy and have upgraded gleefully every step of the way. With 2012 R2, we also upgraded from 1 Gb iSCSI to 8 Gb Fibre Channel. Currently, I have three NICs in use on each host: one set to "No cluster communication" on its own VLAN; another set to "Allow cluster network communication on this network" but NOT allowing clients, on a different VLAN; and lastly the public network, which allows both cluster communication and clients (public VLAN).
Is it still necessary to have all three of these NICs in use? If the heartbeat isn't necessary anymore, is there any reason not to have two public IPs (two for fault tolerance) and do away with the rest of the networks? Does live migration still use Ethernet if FC is available? I wasn't sure how these requirements have changed since Hyper-V first came out.
If it matters, we have 5 servers with 160 GB RAM and 8 NICs each, dual HBAs connected to redundant FC switches, going to two SANs. We're running around 30 VMs right now.
Can someone share their knowledge regarding the proper setup for my environment? Many thanks!

    Hi,
You can set up a cluster with a single network, but that leaves you with a single point of failure on the networking front; it is still recommended to have a heartbeat network.
Live migration still happens over Ethernet; it has nothing to do with FC. Don't get confused: you previously had iSCSI for storage, which used one of your VLANs, and now you have FC for storage.
Your hardware specs look good. You can set up the following networks -
1. Public network - team two or more NICs (for bandwidth aggregation)
2. Heartbeat network - don't use a teamed adapter
3. Live migration network - team two or more NICs (for bandwidth aggregation)
Plan carefully and diagram the design so you can spot and remove single points of failure everywhere.
Feel free to ask if you have more questions.
    Regards
    Prabhash
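
For reference, those roles can be set with the FailoverClusters PowerShell module. This is a minimal sketch, assuming networks named as below (the names are placeholders for whatever your cluster networks are actually called):
Import-Module FailoverClusters
# Role values: 0 = no cluster communication, 1 = cluster only (heartbeat), 3 = cluster and client (public)
(Get-ClusterNetwork "Public").Role        = 3
(Get-ClusterNetwork "Heartbeat").Role     = 1
(Get-ClusterNetwork "LiveMigration").Role = 1
Running Get-ClusterNetwork with no arguments afterwards lists each network and its current role, which is an easy way to verify the change.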

• mDNSResponder causing system crash

Ever since upgrading to 10.5.7, I've had frequent, daily system crashes. I did an archive-and-install from the disks, then used the combo updater to get back to 10.5.7. The console logs show that just prior to every crash there is a lot of mDNSResponder activity for a few seconds. Any suggestions on what might be causing this? Currently I think it may be Little Snitch (2.1.4) interacting with something else, so I plan to disable it for a while and see if I stop crashing. Logs are posted below.
    6/6/09 11:39:00 AM mDNSResponder[16] CheckNATMappings: Failed to allocate port 5350 UDP multicast socket for NAT-PMP announcements
    6/6/09 11:39:00 AM mDNSResponder[16] mDNSPlatformUDPSocket: SetupSocket 5350 failed error -1 errno 48 (Address already in use)
    6/6/09 11:39:00 AM mDNSResponder[16] CheckNATMappings: Failed to allocate port 5350 UDP multicast socket for NAT-PMP announcements
    6/6/09 11:39:01 AM mDNSResponder[16] mDNSPlatformUDPSocket: SetupSocket 5350 failed error -1 errno 48 (Address already in use)
    6/6/09 11:39:01 AM mDNSResponder[16] CheckNATMappings: Failed to allocate port 5350 UDP multicast socket for NAT-PMP announcements
    6/6/09 11:39:01 AM mDNSResponder[16] mDNSPlatformUDPSocket: SetupSocket 5350 failed error -1 errno 48 (Address already in use)
[... the same two messages alternate a few times per second until 11:39:24 ...]
    6/6/09 11:39:24 AM mDNSResponder[16] CheckNATMappings: Failed to allocate port 5350 UDP multicast socket for NAT-PMP announcements
    6/6/09 11:39:24 AM mDNSResponder[16] mDNSPlatformUDPSocket: SetupSocket 5350 failed error -1 errno 48 (Address already in use)

    Shams Shirley,
    The link you gave points directly to the one I gave you.
If you follow [the link I gave you|http://openradar.appspot.com/radar?id=3406], you'll see that it gives a diff to apply as a workaround. One other way to make the mDNSResponder problem go away is to put your Mac on the internet side of your router.
You should know that [portmap|http://developer.apple.com/documentation/Darwin/Reference/ManPages/man8/portmap.8.html] is built into OS X, as is the [kill|http://developer.apple.com/documentation/Darwin/Reference/ManPages/man1/kill.1.html] command.
Yes, finding and removing the conflicting app that opens up UDP ALL-SYSTEMS.MCAST.NET:5350 with SO_REUSEPORT would take care of the issue. What do you see when you run [netstat|http://developer.apple.com/documentation/Darwin/Reference/ManPages/man1/netstat.1.html] with the -g (multicast) switch?
    -Wayne
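
For that last question, something along these lines should identify the conflicting listener. A sketch using the stock OS X tools (port 5350 is taken from the logs above; everything else is generic):
netstat -g                   # multicast group memberships per interface
sudo lsof -nP -i UDP:5350    # which process(es) hold UDP port 5350
The lsof output names the owning process, which should point to the app to remove or reconfigure.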

• When ip cef is enabled, timeouts occur

First, I want to enable NetFlow on our routers, and in order to do that I need to enable IP CEF. But when I enable CEF, the point-to-point VPN sites connected to the router stay connected, yet the terminals that connect to our Citrix farm will not connect - from what I can tell, they time out on the connection. Disabling CEF, they can connect; enabling CEF, they can't.
This is really odd behavior, since we can still remotely access the site, but the terminals just can't connect when ip cef is enabled.
I attached the router config; I removed the tunnels and various other things that are not relevant (I believe).

I'm not 100% certain, but I believe fast switching and CEF switching apply to unicast, not multicast. Your "ip mroute-cache" command enables/disables fast multicast switching.
On a 3750, switching should be hardware-based for both unicast and multicast, unless TCAM resources are insufficient. If hardware switching falls back to software switching, you'll likely find that process vs. fast vs. CEF vs. multicast switching doesn't matter - all are too slow.
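
For the original NetFlow goal, classic IOS NetFlow needs CEF enabled globally, flow capture on each interface of interest, and an export target. A minimal sketch; the interface name and the collector address/port are examples only, not from the attached config:
ip cef
!
interface GigabitEthernet0/0
 ip flow ingress
!
ip flow-export version 5
ip flow-export destination 192.0.2.10 9996
"show ip cache flow" then confirms flows are being recorded before you point a collector at it.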

  • RAC connection problem with interconnect NIC failure

    We have an 11g 2-node test RAC setup on RHEL 4 that is configured to have no load balancing (client or server), with Node2 existing as a failover node only. Connection and vip failover works fine in most situations (public interface fail, node fail, cable pull, node 2 interconnect fail, interconnect switch fail etc etc).
    When the node1 interconnect card failure is emulated (ifdown eth1):
    node2 gets evicted and reboots
    failover of existing connections occurs
    VIP from node2 is relocated to node1
However, new connection attempts from both clients and the server receive an ORA-12541: TNS:no listener message.
The underlying issue is that in the event of an interconnect failure, the lowest-numbered node is supposed to survive - and it appears this includes the situation where the lowest-numbered node is the one with the failed interconnect NIC, i.e. the one with the hardware fault.
    I checked this with Oracle via an iTAR quite some time ago (under 10g) and they eventually confirmed that this eviction of the healthy 2nd node is correct behaviour. In 10g, this situation would result in the remaining instance failing due to the unavailable NIC, however I did not get the chance to fully test and resolve this with Oracle.
    In 11g, the alert log continuously reports the NIC's unavailability. The instance remains up, but new connections cannot be established. If the NIC is re-enabled then new connections are able to be established. At all times, srvctl status nodeapps on the surviving node and lsnrtcl show that the listener is functional.
    The alert log reports the following, regarding a failed W000 or M000 process:
    ospid 13165: network interface with IP address 192.168.1.1 no longer operational
    requested interface 19.2.168.1.1 not found. Check output from ifconfig command
    ORA-603 : opidrv aborting process W000 ospid (16474_2083223480)
    Process W000 died, see its trace file
The W000 trace file refers to an invalid IP address 192.168.1.1 (the interconnect IP address), which is obviously why the process died.
    Finally, if I restart the remaining instance via srvctl stop/start instance with the NIC still unavailable, the instance will allow new connections and does not report the failures of the W000/M000 process or appear to care about the failed NIC.
    Before I go down the iTAR path or start posting details of the configuration, has anyone else experienced/resolved this, or can anyone else test it out?
    Thanks for any input,
    Gavin
Listener.ora is:
SID_LIST_LISTENER_NODE1 =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
      (SID_NAME = RAC_INST))
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
      (SID_NAME = RAC_INST1))
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
      (SID_NAME = RAC_INST2)))
SID_LIST_LISTENER_NODE2 =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
      (SID_NAME = RAC_INST))
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
      (SID_NAME = RAC_INST2))
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
      (SID_NAME = RAC_INST1)))
LISTENER_NODE1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = vip-NODE1)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = NODE1)(PORT = 1521)(IP = FIRST))))
LISTENER_NODE2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = vip-NODE2)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL = TCP)(HOST = NODE2)(PORT = 1521)(IP = FIRST))))

    Thanks for your reply.
    There is no NIC bonding - the interconnect is a single, dedicated Gigabit link connected via a dedicated switch: plenty of bandwidth.
    I know that providing interconnect NIC redundancy would provide a fallback position on this (although how far do you go: redundant interconnect switches as well?), and that remains an option.
    However that's not the point. RAC does not require a redundant interconnect - as a high-availability solution it should inherently provide a failover position that continues to provide an available instance, as it does for all other component failures.
    Unless I've made a mistake in the configuration (which is very possible, but all the other successful failover scenarios suggest I haven't), then this could be a scenario that renders a 2-node cluster unavailable to new connections.
    Gavin
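
For anyone trying to reproduce this, the interfaces registered with Clusterware can be listed with oifcfg. A sketch; the CRS home path and subnets are illustrative, not from this cluster:
$ $CRS_HOME/bin/oifcfg getif
eth0  10.1.1.0     global  public
eth1  192.168.1.0  global  cluster_interconnect
Downing the interface listed as cluster_interconnect on node 1 (ifdown eth1) should then reproduce the eviction behaviour described above.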

  • CSS : ISC problem

    Hello,
I noticed in the CSS11500 documentation that Cisco advises against using Layer 2 devices for Inter-Switch Communication (see the link at the end). After sniffing, it appears ISC works with broadcast Ethernet frames.
So how should we link two redundant content switches that are several kilometers apart?
My problem is that, in spite of Cisco's recommendations, I've linked the CSSes through two C65xx. The link works but goes down for a very short time periodically (every 10 minutes).
This seems to have no impact on reliability...
    http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/css11000series/v6.10/configuration/advanced/guide/ASR.html#wp1038263
    Thank you,
    David

Oh... so a Cisco backup CSS can never be a real backup if the CSSes have to be in the same room to work... What happens in case of fire?
My company owns 11 CSSes and they are too critical to group in one place. So I think Cisco should investigate this problem as soon as possible!

  • IPS mode with IDSM-2 module on Cat6K

    Hi,
I have installed the IDSM-2 module in a Catalyst 6509 switch. Referring to the configuration guide for IPS 6.0, there are multiple modes I can configure - inline, inline VLAN pair, promiscuous, and VLAN group mode - so I'm wondering which would be the best solution.
The Catalyst 6509 is acting as the core/distribution switch with multiple VLANs configured (around 20), and the customer wants the IPS deployed in such a way that it covers the traffic from all the VLANs.
Also note that there is a redundant Catalyst 6509 which also has an IDSM-2 module installed; can these two IDSM-2 modules be deployed in an active/standby or active/active combination?
Can someone throw some light on this, please?
    Regards
    Vijay.
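
For what it's worth, promiscuous coverage of many VLANs is typically done by SPANning them to the module's data port. A sketch from the supervisor CLI, assuming the IDSM-2 sits in slot 5 and the VLANs are 1-20 (both are examples, not from the original post):
monitor session 1 source vlan 1 - 20 rx
monitor session 1 destination intrusion-detection-module 5 data-port 1
For inline or inline-VLAN-pair modes the data ports are instead configured as trunks on the switch, and the pairing is done from the IPS 6.0 CLI/IDM rather than with a SPAN session.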

    A sensor can enter bypass mode for several reasons, including, but not limited to:
    1) Analysis Engine reconfiguration
2) Global Correlation updates
3) Daily signature DB self-purge
    4) sensorApp failure
    Most of these reasons are benign. I have written Supportability Enhancement CSCtg69012 so that each bypass log will show the reason for entering bypass mode.
    The bug is available via the CCO Bug Toolkit: http://tools.cisco.com/Support/BugToolKit/action.do?hdnAction=searchBugs.
    You may review the bug and click on the "Save Bug" button at the bottom of the page to receive email updates as changes are made to the bug's state.
    To fully diagnose your issue, I suggest opening a TAC case where we will request a "show tech," including debug level logs. This will allow us to see what is triggering the sensor to enter bypass mode.
    Thank you,
    Blayne Dreier
    Cisco TAC IDS Team
    **Please check out our Podcast**
    TAC Security Show: http://www.cisco.com/go/tacsecuritypodcast

  • IPS 4200 Fault tolerance

Hi, is it possible to have two IPS 4200 appliances in a failover or high-availability pair? Or is a single appliance with hardware bypass the only option?
    Thanks

In data centers like these, redundant routers, switches, and even power supplies help ensure business continuity during an outbreak. The IPS appliances, however, do not support stateful failover. IPS devices maintain state for traffic flows and may drop traffic from an asymmetric flow, so it is important to factor this into the design.
    You can use the bypass mode as a diagnostic tool and a failover protection mechanism. You can set the sensor in a mode where all the IPS processing subsystems are bypassed and traffic is permitted to flow between the inline pairs directly. The bypass mode ensures that packets continue to flow through the sensor when the sensor's processes are temporarily stopped for upgrades or when the sensor's monitoring processes fail. There are three modes: on, off, and automatic. By default, bypass mode is set to automatic.
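
The mode is set under the interface service in the sensor CLI. A sketch of the IPS 6.x commands (the "sensor" prompt name is illustrative):
sensor# configure terminal
sensor(config)# service interface
sensor(config-int)# bypass-mode auto
sensor(config-int)# exit
Exiting the service prompts you to apply the changes; "auto" is the default and is what gives the fail-open behaviour described above.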
