Catalyst 6500 Forwarding collapsed due to SSDP traffic

Hi all,
One of my customers had an issue related to multicast traffic.
A couple of days ago he received an SSDP-based DDoS. Traffic was around 300 Mbps, the two devices facing the internet struggled badly, and traffic was severely affected. There was no elevated CPU usage; per the customer, this was a TCAM issue related to the multicast nature of SSDP.
He is also planning to implement CoPP on those boxes, but he also wanted to know whether there are any other open issues on the IOS they are currently running: (s72033_rp-IPSERVICESK9-M), Version 12.2(18)SXF7, RELEASE SOFTWARE (fc1).
If you could kindly share some knowledge I would greatly appreciate it.
with kind regards,
Lance
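
For reference, a minimal CoPP sketch of the kind being considered here, assuming a Sup720 with mls qos enabled; the ACL/class/policy names and the police rates are illustrative assumptions, not taken from this thread (SSDP uses UDP port 1900):

ip access-list extended SSDP-FLOOD
 permit udp any any eq 1900
class-map match-all COPP-SSDP
 match access-group name SSDP-FLOOD
policy-map COPP-POLICY
 class COPP-SSDP
  police 32000 1000 conform-action transmit exceed-action drop
control-plane
 service-policy input COPP-POLICY

Note that CoPP protects the supervisor CPU path only; it would not by itself relieve the TCAM/replication pressure described above, so it complements rather than replaces filtering the attack traffic at the edge.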


Similar Messages

  • Configuring the Catalyst 6500 Switch for IPS Inline Operation of the IDSM

    I understand how to configure the Catalyst 6500 switch so that the monitoring ports are access ports in two separate VLANs for inline operation.
    However, I don't see any documentation that describes how the desired VLAN traffic gets forced through the IPS.
    In promiscuous mode, you can use VACLs to copy/capture and forward the desired traffic to the IDSM for analysis. I'm not seeing how to get the desired traffic through the IPS.
    Note that the host 6500 is running native IOS 12.2(18)SXE.
    Thanks for any assistance.

    A transparent firewall is a fairly good comparison.
    Let's say you have vlan 10 with 100 PCs and 1 router for the network.
    If you want to apply a transparent firewall on that vlan, you cannot simply put one interface of the firewall on vlan 10. Nothing would go through the firewall.
    Instead you have to create a new vlan, let's say 1010. Now you place one interface of the firewall on vlan 10 and the other on vlan 1010. Still nothing is going through the firewall. So now you move that router from vlan 10 to vlan 1010. All you change is the vlan; the IP address and netmask of the router stay the same.
    The transparent firewall bridges vlan 10 and vlan 1010. The PCs on vlan 10 are still able to communicate to and through the router, but must go through the transparent firewall to do so.
    The firewall is transparent because it does not IP-route between the 2 vlans; instead the same IP subnet exists on both vlans and the firewall transparently bridges traffic between them.
    The transparent firewall can do firewalling between the PCs on vlan 10 and the router on vlan 1010. But if PC A on vlan 10 talks to PC B on vlan 10, then the transparent firewall does not see and cannot block that traffic.
    An inline sensor is very similar to the transparent firewall and will bridge between the 2 vlans. And similarly, an inline sensor is able to monitor traffic between PCs on vlan 10 and the router on vlan 1010, but will not be able to monitor traffic between 2 PCs on vlan 10.
    Now the router on one vlan and the PCs on the other vlan is a typical deployment for inline sensors, but your vlans do not have to be divided that way. You could choose to place some servers in one vlan, and desktop PCs in the other vlan. You subdivide the vlans in whatever way makes sense for your deployment.
    Now for monitoring multiple vlans the same principle still applies. You can't monitor traffic between machines on the same vlan. So for each of the vlans you want to monitor, you will need to create a new vlan and split the machines between the 2 vlans.
    In your case with Native IOS you are limited to only 1 pair of vlans for inline monitoring, but your desired deployment would require 20 vlan pairs.
    The 5.1 IPS software now has the capability to handle the 20 pairs, but the Native IOS software does not yet have the capability to send the 40 vlans (20 pairs) to the IDSM-2.
    The Native IOS changes are in testing right now, but I have not heard a release date for them.
    Cat OS, however, has already made these changes. So here is a basic breakdown of what you could do in Cat OS, which you can also use in preparation for a Native IOS deployment when it gets released.
    For vlans 10-20 and 300-310 that you want monitored, you will need to break each of those vlans into 2 vlans.
    Let's say we make it simple and add 500 to each vlan in order to create the new vlan for each pair.
    So you have the following pairs:
    10/510, 11/511, 12/512, etc...
    300/800, 301/801, 302/802, etc....
    You set up the sensor port to trunk all 40 vlans:
    set trunk 5/7 10-20,300-310,510-520,800-810
    (Then clear all other vlans off that trunk to keep things clean.)
    In the IDSM-2 configuration, create the 20 inline vlan pairs on interface GigabitEthernet0/7 (see the sketch after this message for one such pair).
    Now, on each of the 20 original vlans, move the default router for that vlan from the original vlan to the 500+ vlan.
    At this point you should ordinarily be good to go. The IDSM-2 won't be monitoring traffic that stays within each of the original 20 vlans, but it would monitor traffic getting routed in and out of each of the 20 vlans.
    Because of a switch bug, you may have to move an additional PC into the same vlan as the router if the switch/MSFC is being used as the router and you are deploying with an IDSM-2.
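
    As a sketch of one such inline vlan pair from the IDSM-2 CLI (the same command syntax appears in the next thread below; the vlan numbers follow the 10/510 example above, and the remaining pairs would be subinterfaces 2 through 20):

    sensor# conf t
    sensor(config)# service interface
    sensor(config-int)# physical-interface GigabitEthernet0/7
    sensor(config-int-phy)# admin-state enabled
    sensor(config-int-phy)# subinterface-type inline-vlan-pair
    sensor(config-int-phy-inl)# subinterface 1
    sensor(config-int-phy-inl-sub)# vlan1 10
    sensor(config-int-phy-inl-sub)# vlan2 510
    sensor(config-int-phy-inl-sub)# description vlan 10/510 pair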

  • IDSM on catalyst 6500 to provide IOS Inline mode support

    I am currently evaluating what method to apply in my 6500. I would like to ask whether IOS Version 12.2(33)SXI2a supports inline mode and inline vlan pair mode with the IDSM-2, and what configuration should be done on the switch in order for traffic from multiple vlans to flow through an inline interface of the IDSM-2. In my case I have 16 user vlans and 1 server vlan on the Catalyst 6500. The task is to protect the servers from the users. The requirement is to configure inline mode to monitor the traffic from these 16 vlans when they access the servers. But as we know, the IDSM-2 has only two logical sensing ports. So my question is: how would you configure the switch to forward the traffic from these 16 vlans to the IDSM-2 module via only ONE sensing port, since the other sensing port will be configured in the server vlan? Because as far as I know, when you configure inline mode on IOS, you have to configure the sensing ports in access mode (while in CatOS, you configure them as TRUNK ports). That works when you have only two vlans, but in my case I have 16 vlans to monitor in inline mode. Please suggest a solution.
    Any prompt reply would be greatly appreciated.
    Many Thanks in advance

    Hi Mubin,
       If you're looking to monitor all the traffic from the user VLANs to the server VLANs then the simplest way to configure the IDSM-2 would be inline on the server VLAN segment.  All traffic destined to the servers (from the users or anywhere else) has to traverse that VLAN.  Assuming you have something like this to start:
    VLAN 100-120 (users) ====== Switch ------ VLAN 200 (servers)
    you'd drop the IDSM-2 inline on VLAN 200 by using a helper VLAN:
    VLAN 100-120 (users) ====== Switch ----- VLAN 201 (server gateway) ----- IDSM-2 (bridging 201 to 200) ----- VLAN 200 (servers)
    To do this you'll need to perform the following steps:
    1.  Designate a new VLAN to use as a helper VLAN for your current server VLAN.  I'll use 201 for this example and assume your current server VLAN is 200.
    Create the helper VLAN on the switch:
    switch# conf t
    switch(config)# vlan 201
    2.  Configure the IDSM-2 to bridge the helper VLAN and the server VLAN (200-201)
    sensor# conf t
    sensor(config)# service interface
    sensor(config-int)# physical-interface GigabitEthernet0/7
    sensor(config-int-phy)# admin-state enabled
    sensor(config-int-phy)# subinterface-type inline-vlan-pair
    sensor(config-int-phy-inl)# subinterface 1
    sensor(config-int-phy-inl-sub)# vlan1 200
    sensor(config-int-phy-inl-sub)# vlan2 201
    sensor(config-int-phy-inl-sub)# description Server-Helper pair
    sensor(config-int-phy-inl-sub)# exit
    sensor(config-int-phy-inl)# exit
    sensor(config-int-phy)# exit
    sensor(config-int)# exit
    Apply Changes:?[yes]:
    3.  Configure the switch to trunk the helper and server VLANs to the IDSM-2 module.  I assume the module is in slot 5 in the example.  Replace the 5 with the correct slot for your deployment:
    switch# conf t
    switch(config)# intrusion-detection module 5 data-port 1 trunk allowed-vlan 200,201
    switch(config)# intrusion-detection module 5 data-port 1 autostate include
    *Warning! This next step may cause a brief outage even if everything is configured correctly. You'll probably want to schedule a window to do this.*
    4.  Finally, force the traffic from the server VLAN through the IDSM-2 by moving the server VLAN gateway from VLAN 200 (where it is currently) to the helper VLAN you created.  To do this, remove the SVI from VLAN 200 and apply the same IP address to VLAN 201.  I assume the current server gateway is 192.168.1.1/24
    switch# conf t
    switch(config)#int vlan 200
    switch(config-int)#no ip addr
    switch(config-int)#int vlan 201
    switch(config-int)#ip addr 192.168.1.1 255.255.255.0
    switch(config-int)#exit
    switch(config)#exit
    switch# wr mem
    Now, when the servers try to contact 192.168.1.1 (their gateway) they'll have to be bridged through the IDSM-2 to reach VLAN 201 and in the process all traffic destined to them or sourced from them will be inspected.  Do not put any hosts or servers in the helper VLAN (201) or they will not be inspected.
    Best Regards,
    Justin
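
    As a quick verification sketch for step 4 (standard IOS commands; the VLAN numbers follow Justin's example): confirm the gateway address now lives on the helper VLAN and that both VLANs are active before closing the change window.

    switch# show ip interface brief | include Vlan2
    switch# show vlan id 200
    switch# show vlan id 201

    Vlan201 should show 192.168.1.1 and be up/up, and Vlan200 should have no IP address assigned.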

  • Replacement catalyst 6500 switches under redundancy environment

    Hi everyone,
    I plan to replace old core catalyst 6500 switches with new ones for the purpose of reinforcement.
    Now two core catalyst 6500 switches are working under redundancy environment.
    There are many catalyst 6500 switches as distribution switch connect to each core catalyst
    6500 switches as attached.
    I think there are two ways to replace core catalyst 6500 switches.
    [One]
    Replacing one core catalyst 6500 switches first, then one week later, replacing another core
    catalyst 6500 switch. And all traffic will be handled another core catalyst 6500 switch automatically
    by EIGRP routing during replacement.
    Advantage:
    One another core catalyst 6500 switch continues operating even if the replacement fail.
    Disadvantage:
    Two core catalyst 6500 switches will operate in a different version (CatOS, MSFC IOS) for one week.
    Any problem might be happened due to this issue.
    [Two]
    Replacing both core catalyst 6500 switches at the same time.
    Advantage:
    Replacement will be finished at one time
    Disadvantage:
    If the replacement fail, whole network goes to down and it cause critical situation.
    I have to replace successfully so I would like know good information about this, such as
    best practice, case study and so on.
    Your information would be greatly appreciated.
    Best regards,

    Hi,
    If I were you, I will go for option 1.
    This option will give us the time to observe the traffic pattern, time to get the network and EIGRP to stabilize and even to check for any issues on the IOS part.
    This will give you time frame to work out for any issue if it happens in between the weeks time.This will gibe you tha time to see for any imcompatibilty issues as such.
    HTH, Please rate if it does.
    -amit singh

  • IPS 45xx/43xx/42xx appliance and Catalyst 6500 Inline Mode issues

    Hello to everyone!
    We have recently got our new IPS 4510 appliance and for now there is a task to develop a connection scheme to our backbone multilayer switch (Catalyst 6500).
    There are several server's and user's VLANs connected to 6500.
    6500 performs inter-vlan routing.
    The main task is to "insert" IPS appliance between traffic path from any VLAN to server's VLANs.
    The additional task is to provide failover in "fail-open" manner (We have only one 4510 appliance. So if 4510 fails then traffic should continue passing without inspections).
    As I understood from this document https://supportforums.cisco.com/docs/DOC-12206, the only way to implement inline mode when using a multilayer switch is to move the default gateway address for the inspected subnet onto the other VLAN's SVI.
    If we replace the IDSM-2 with an IPS appliance, I suppose we can use the hardware-bypass feature as a failover measure (in case the IPS fails, traffic between the bridged VLANs will still be forwarded).
    But what if there are several VLANs that should be monitored?
    As I understand it, in such a scheme we would need an additional inline pair for each monitored VLAN.
    But what if we have 20 VLANs for servers and 50 VLANs for users?
    Can VLAN-group mode handle this problem?
    As far as I can tell, VLAN groups cannot provide bridging between two different VLANs. Am I right?
    And will using VLAN groups make the hardware-bypass feature useless?
    I tried to simulate the first scenario in Cisco Packet Tracer (I used a bridge to simulate an IPS appliance in interface-pair inline mode):
    Maybe this is a Packet Tracer bug, but traffic went through the IPS only when it was sent from VLAN 10 to VLAN 100.
    The return traffic from VLAN 100 to VLAN 10 went through the Catalyst directly.
    When the Catalyst received the frame it said:
    "The frame destination MAC address matches the MAC address of the active VLAN interface."
    After that it decapsulated the PDU from the Ethernet frame and sent the IP packet directly to VLAN 10.
    Does this mean there is a need to change the SVI's MAC address?
    Thanks for any advice in advance.

    Here is my guess at how to realise my scenario:
    The config on the Cat6k should look something like this:
    ip routing
    interface Ge1/0
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10-12,110-112
    switchport mode trunk
    switchport nonegotiate
    switchport vlan mapping enable
    switchport vlan mapping 110 10
    switchport vlan mapping 111 11
    switchport vlan mapping 112 12
    interface Ge1/1
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 10-12
    switchport mode trunk
    switchport nonegotiate
    interface vlan 2
    ip address 10.0.2.1 255.255.255.0
    interface vlan 3
    ip address 10.0.3.1 255.255.255.0
    interface Vlan4
    ip address 10.0.4.1 255.255.255.0
    interface Vlan110
    ip address 10.0.10.1 255.255.255.0
    interface Vlan111
    ip address 10.0.11.1 255.255.255.0
    interface Vlan112
    ip address 10.0.12.1 255.255.255.0
    no interface Vlan10
    no interface Vlan11
    no interface Vlan12
    The IPS should operate in VLAN-group inline mode. We could separate traffic by VLAN tag to inspect with different virtual sensors, or use one virtual sensor for all trunk traffic.
    Traffic routed from any VLAN to VLANs 10-12 should go through the IPS.
    If the IPS gets powered off, the hardware-bypass feature should provide bridging between the trunk ports.
    In theory it should work; it remains to test it in practice (see the sketch below for the sensor side).
    Thoughts / suggestions?
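
    As a rough sketch of the sensor side, assuming an inline interface pair carved into VLAN groups (this is from memory of the IPS 7.x CLI guide, not from the thread; verify the exact submode names against the guide for your release, and the interface/group numbers are illustrative):

    sensor# configure terminal
    sensor(config)# service interface
    sensor(config-int)# inline-interfaces TRUNK-PAIR
    sensor(config-int-inl)# interface1 GigabitEthernet0/0
    sensor(config-int-inl)# interface2 GigabitEthernet0/1
    sensor(config-int-inl)# subinterface-type vlan-group
    sensor(config-int-inl-vg)# subinterface 1
    sensor(config-int-inl-vg-sub)# vlans 10-12
    sensor(config-int-inl-vg-sub)# exit

    Note that a VLAN group inspects traffic per VLAN across the pair; it does not bridge between two different VLAN IDs, which matches the concern raised above.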

  • Catalyst 6500 - Nexus 7000 migration

    Hello,
    I'm planning a platform migration from Catalyst 6500 to Nexus 7000. The old network consists of two pairs of 6500s as server distribution, configured with HSRPv1 as FHRP, rapid-pvst, and OSPF as IGP. Furthermore, the Cat6500s utilize MPLS/L3VPN with BGP for 2/3 of the vlans. Otherwise, the topology is quite standard, with a number of 6500 and CBS3020/3120 switches as server access.
    In preparing for the migration, VTP will be discontinued and vlans have been manually "copied" from the 6500s to the N7Ks. Bridge assurance is enabled downstream toward the new N55K access switches, but toward the 6500s the upcoming etherchannels will run in "normal" mode, trying to avoid any problems with BA this way. For now, only L2 will be utilized on the N7K, as we're awaiting the 5.2 release, which includes MPLS/L3VPN. But all servers/blade switches will be migrated prior to that.
    The questions arise when migrating Layer 3 functionality, incl. HSRP. As per my understanding, HSRP in NX-OS has been modified slightly to better align with the vPC feature and to avoid sub-optimal forwarding across the vPC peer link. But that aside, is there anything that would complicate a "sliding" FHRP migration? I'm thinking of configuring SVIs on the N7Ks, configuring them with unused IPs and assigning the same virtual IP, only decrementing the priority to a value below the current standby router. Spanning-tree priority will also, if necessary, be modified to better align with HSRP.
    From a routing perspective, I'm thinking of configuring OSPF/BGP etc. similar to that of the 6500s, only tweaking the metrics (cost, localpref etc.) to constrain forwarding on the 6500s, and subsequently migrating both routing and FHRP at the same time. Maybe not in a big-bang style, but stepwise. Is there anything in particular one should be aware of when doing this? At present this seems like a valid approach to me, but maybe someone has experience with this (good/bad), so I'm hoping someone has some insight they would like to share.
    Topology drawing is attached.
    Thanks
    /Ulrich

    In a normal scenario, yes. But not in vPC. HSRP is a bit different in a vPC environment. Even though an SVI is not the HSRP primary, it will still forward traffic. Please see the white paper below.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
    I suggest setting up the SVIs on the N7K but leaving them in the down state. When you are ready to use the N7K as the gateway for the SVIs, shut down the SVIs on the C6K one at a time and bring up the N7K SVIs. By "ready" I mean the spanning-tree root is on the N7K along with all the L3 northbound links (toward the core).
    I had a customer who did the same thing you are trying to do, to avoid downtime. However, out of the 50+ SVIs, there was 1 SVI for which HSRP would not establish between the C6K and N7K, and we ended up moving everything to the N7K on the fly during the migration. Yes, they were down for about 30 sec - 1 min per SVI, but it was less painful and wasted less time than trying to figure out what was wrong or chasing NX-OS bugs.
    HTH,
    jerry
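
    As a minimal sketch of that staged-SVI approach in NX-OS (the VLAN ID, addresses, and priority here are illustrative assumptions; priority 90 stands in for "a value below the current standby router"):

    feature interface-vlan
    feature hsrp
    interface Vlan100
      description staged gateway, kept shut until cutover
      shutdown
      ip address 10.1.100.3/24
      hsrp 100
        priority 90
        preempt
        ip 10.1.100.1

    At cutover, shut the matching C6K SVI and issue "no shutdown" here, as jerry describes.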

  • 15.1(2)SY1 on Catalyst 6500

    Hi,
    We are planning to upgrade two of our Catalyst 6500 switches to version 15.1(2)SY1 Advanced IP Services.
    The switches have dual supervisors and are currently running version 12.2(33)SXI11, but we have faced some issues and also would also like to enable some new features (e.g. BFD). The switches are running a fairly simple configuration with OSPF, MPLS and MP-BGP with about 30 VRFs.
    Are you aware of any major issues with 15.1(2)SY1 and would discourage the planned upgrade? I am aware that the version was only released in December, but since there are many bug fixes I thought this version might be better than e.g. 15.1(2)SY.
    Thanks in advance for your help!
    Best regards,
    Harry

    We replaced all (~20) of our Sup720s (SXI4a) with Sup2Ts during late 2012 and are running the Advanced Enterprise 15.0(1)SY image. We did not have any issues with that code, and many of our distribution switches are still running it.
    Then we upgraded two core switches to 15.1(1)SY mid last year and another two core switches to 15.1(2)SY late last year to accommodate the WS-X6904-40G. With both of these newer codes we hit a couple of bugs that still have no proper fix:
    CSCue58955: sup2t: LC file systems are not destroyed in Active upon reset
    %SNMP-3-INPUT_QFULL_ERR: Packet dropped
    There is a workaround for this, but it will impact NetFlow data if you are using that.
    For me, 15.0(1)SY is much better for an Enterprise environment (based on my experience) compared to the two later releases. But due to certain limitations we had to go to the newer codes whether we liked it or not.
    These bugs may not be relevant to you if you are not running a Sup2T; anyway, I just thought I'd share this experience.
    HTH
    Rasika
    **** Pls rate all useful responses ****

  • Connecting Nexus 5548 to Catalyst 6500 VS-S720-10G

    Good day,
    Could anyone out there please assist me with basic connectivity/configuration of these 2 devices so that they can communicate, e.g. be able to ping each other's management interfaces?
    Nexus Configuration:
    vrf context management
      ip route 0.0.0.0/0 10.200.1.4
    vlan 1
    interface mgmt0
      ip address 10.200.1.2/16
    Catalyst 6500:
    interface Vlan1
    description Nexus
    ip address 10.200.1.4 255.255.0.0
    interface TenGigabitEthernet5/4
    switchport
    Note: I am able to see all the devices through the "show cdp neighbors" command. Please assist.

    Nexus# sh ip int mgmt0
    IP Interface Status for VRF "management"(2)
    mgmt0, Interface status: protocol-up/link-up/admin-up, iod: 2,
    IP address: 10.13.37.201, IP subnet: 10.13.37.128/25
    IP broadcast address: 255.255.255.255
    IP multicast groups locally joined: none
    IP MTU: 1500 bytes (using link MTU)
    IP primary address route-preference: 0, tag: 0
    IP proxy ARP : disabled
    IP Local Proxy ARP : disabled
    IP multicast routing: disabled
    IP icmp redirects: enabled
    IP directed-broadcast: disabled
    IP icmp unreachables (except port): disabled
    IP icmp port-unreachable: enabled
    IP unicast reverse path forwarding: none
    IP load sharing: none
    IP interface statistics last reset: never
    IP interface software stats: (sent/received/forwarded/originated/consumed)
    Unicast packets : 0/83401/0/20/20
    Unicast bytes : 0/8083606/0/1680/1680
    Multicast packets : 0/18518/0/0/0
    Multicast bytes : 0/3120875/0/0/0
    Broadcast packets : 0/285/0/0/0
    Broadcast bytes : 0/98090/0/0/0
    Labeled packets : 0/0/0/0/0
    Labeled bytes : 0/0/0/0/0
    Nexus# sh cdp nei
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
    S - Switch, H - Host, I - IGMP, r - Repeater,
    V - VoIP-Phone, D - Remotely-Managed-Device,
    s - Supports-STP-Dispute
    Device-ID   Local Intrfce   Hldtme   Capability   Platform        Port ID
    3560        mgmt0           178      S I          WS-C3560-24PS   Fas0/23
    6500        Eth1/32         135      R S I        WS-C6509-E      Ten5/4
    Nexus# ping 10.13.37.201 vrf management
    PING 10.13.37.201 (10.13.37.201): 56 data bytes
    64 bytes from 10.13.37.201: icmp_seq=0 ttl=255 time=0.278 ms
    64 bytes from 10.13.37.201: icmp_seq=1 ttl=255 time=0.174 ms
    64 bytes from 10.13.37.201: icmp_seq=2 ttl=255 time=0.169 ms
    64 bytes from 10.13.37.201: icmp_seq=3 ttl=255 time=0.165 ms
    64 bytes from 10.13.37.201: icmp_seq=4 ttl=255 time=0.165 ms
    --- 10.13.37.201 ping statistics ---
    5 packets transmitted, 5 packets received, 0.00% packet loss
    round-trip min/avg/max = 0.165/0.19/0.278 ms
    Nexus# ping 10.13.37.202
    PING 10.13.37.202 (10.13.37.202): 56 data bytes
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 0 timed out
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 1 timed out
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 2 timed out
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 3 timed out
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 4 timed out
    --- 10.13.37.202 ping statistics ---
    5 packets transmitted, 0 packets received, 100.00% packet loss
    Nexus# ping 10.13.37.203
    PING 10.13.37.203 (10.13.37.203): 56 data bytes
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 0 timed out
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 1 timed out
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 2 timed out
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 3 timed out
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 4 timed out
    --- 10.13.37.203 ping statistics ---
    5 packets transmitted, 0 packets received, 100.00% packet loss
    3560#ping 10.13.37.201
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 10.13.37.201, timeout is 2 seconds:
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
    Note: Now I want to be able to ping the Nexus (10.13.37.201) from the 6509 (10.13.37.203), and also to ping both the 3560 (10.13.37.202) and the 6509 (10.13.37.203) from the Nexus. How can I do that? I can ping the Nexus from the 3560 as shown above. (See the note below.)
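
    A likely explanation, based on the output above (an interpretation, not from the original thread): mgmt0 lives in the dedicated management VRF, and the "No route to host" errors appear because the pings to .202 and .203 were issued in the default VRF, which has no route to that subnet. A sketch of the VRF-aware ping, using the addresses shown above:

    Nexus# ping 10.13.37.202 vrf management
    Nexus# ping 10.13.37.203 vrf management

    For the reverse direction, the 6509 needs a reachable interface in the 10.13.37.128/25 subnet (or a route to it); CDP succeeding only proves L2 adjacency, not IP reachability.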

  • Catalyst 6500 Central Fwd Card for WS-X67xx modules ?

    Hi,
    I have a BOM that has this part number in it:
    WS-F6700-CFC Catalyst 6500 Central Fwd Card for WS-X67xx modules 1
    Though in the Cisco configurator it doesn't show up. Should it instead be WS-F6700-DFC3A, Cisco Catalyst 6500 Distributed Forwarding Daughter Card-3A for 67xx modules?
    Are they the same?
    thanks
    Allan

    Allan,
    The two cards are not the same. One is the CFC, i.e. the Centralized Forwarding Card, and the other is the Distributed Forwarding Card. DFC cards download the CEF tables, both the FIB and the adjacency table, onto the module itself.
    WS-X67xx modules that ship with CFCs are field-upgradable to DFCs.
    http://www.cisco.com/en/US/products/hw/switches/ps708/prod_module_installation_guide09186a00801d3b60.html#wp59534
    regards,
    -amit singh
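
    As a quick way to check what a given chassis actually carries (a standard IOS command; output abridged and illustrative): the Sub-Module section of "show module" lists the forwarding card per slot, e.g.

    6500# show module
    Mod  Sub-Module                   Model
      5  Centralized Forwarding Card  WS-F6700-CFC

    A DFC-equipped module would show a WS-F6700-DFC3x entry there instead.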

  • Two Nexus 5020 vPC etherchannel with Two Catalyst 6500 VSS

    Hi,
    we are fighting with a 40 Gbps etherchannel between 2 Nexus 5000s and 2 Catalyst 6500s, but the etherchannel never comes up. Here is the config:
    NK5-1
    interface port-channel30
      description Trunk hacia VSS 6500
      switchport mode trunk
      vpc 30
      switchport trunk allowed vlan 50-54
      speed 10000
    interface Ethernet1/3
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      beacon
      channel-group 30
    interface Ethernet1/4
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      channel-group 30
    NK5-2
    interface port-channel30
      description Trunk hacia VSS 6500
      switchport mode trunk
      vpc 30
      switchport trunk allowed vlan 50-54
      speed 10000
    interface Ethernet1/3
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      beacon
      channel-group 30
    interface Ethernet1/4
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      beacon
      channel-group 30
    Catalyst 6500 VSS
    interface Port-channel30
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    interface TenGigabitEthernet2/1/2
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    channel-protocol lacp
    channel-group 30 mode passive
    interface TenGigabitEthernet2/1/3
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    channel-protocol lacp
    channel-group 30 mode passive
    interface TenGigabitEthernet1/1/2
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    channel-protocol lacp
    channel-group 30 mode passive
    interface TenGigabitEthernet1/1/3
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    channel-protocol lacp
    channel-group 30 mode passive
    The "Show vpc 30" is as follows
    N5K-2# sh vpc 30
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    30     Po30        down*  success     success                    -         
    But the "Show vpc Consistency-parameters vpc 30" is
    N5K-2# sh vpc consistency-parameters vpc 30
        Legend:
            Type 1 : vPC will be suspended in case of mismatch
    Name                    Type  Local Value  Peer Value
    Shut Lan                1     No           No
    STP Port Type           1     Default      Default
    STP Port Guard          1     None         None
    STP MST Simulate PVST   1     Default      Default
    mode                    1     on           -
    Speed                   1     10 Gb/s      -
    Duplex                  1     full         -
    Port Mode               1     trunk        -
    Native Vlan             1     1            -
    MTU                     1     1500         -
    Allowed VLANs           -     50-54        50-54
    Local suspended VLANs   -     -            -
    We would appreciate any advice,
    Thank you very much for your time...
    Jose

    Hi Lucien,
    here is the "show vpc brief"
    N5K-2# sh vpc brief
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
    vPC domain id                   : 5  
    Peer status                     : peer adjacency formed ok     
    vPC keep-alive status           : peer is alive                
    Configuration consistency status: success
    Per-vlan consistency status     : success                      
    Type-2 consistency status       : success
    vPC role                        : secondary                    
    Number of vPCs configured       : 2  
    Peer Gateway                    : Disabled
    Dual-active excluded VLANs      : -
    Graceful Consistency Check      : Enabled
    vPC Peer-link status
    id   Port   Status Active vlans   
    1    Po5    up     50-54                                                   
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    30     Po30        down*  success     success                    -         
    31     Po31        down*  failed      Consistency Check Not      -         
                                          Performed                            
    *************************************************************************+
    *************************************************************************+
    N5K-1# sh vpc brief
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
    vPC domain id                   : 5  
    Peer status                     : peer adjacency formed ok     
    vPC keep-alive status           : peer is alive                
    Configuration consistency status: success
    Per-vlan consistency status     : success                      
    Type-2 consistency status       : success
    vPC role                        : primary                      
    Number of vPCs configured       : 2  
    Peer Gateway                    : Disabled
    Dual-active excluded VLANs      : -
    Graceful Consistency Check      : Enabled
    vPC Peer-link status
    id   Port   Status Active vlans   
    1    Po5    up     50-54                                                   
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    30     Po30        down*  failed      Consistency Check Not      -         
                                          Performed                            
    31     Po31        down*  failed      Consistency Check Not      -         
                                          Performed             
    I have changed the lacp on both devices to active:
    On Nexus N5K-1/-2
    interface Ethernet1/3
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      channel-group 30 mode active
    interface Ethernet1/4
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      channel-group 30 mode active    
    On Catalyst 6500
    interface TenGigabitEthernet2/1/2-3
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    switchport mode trunk
    channel-protocol lacp
    channel-group 30 mode active
    interface TenGigabitEthernet1/1/2-3
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    switchport mode trunk
    channel-protocol lacp
    channel-group 30 mode active
    Thanks for your time.
    Jose
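
    One thing worth double-checking, as an observation on the configs above rather than something confirmed in the thread: the consistency output shows "Port Mode trunk" locally with no peer value, and the original VSS Port-channel30 lacks "switchport mode trunk". A sketch of the port-channel interface with that line included:

    interface Port-channel30
     switchport
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 50-54
     switchport mode trunk

    Also note that "channel-group 30" with no mode keyword on the N5K means mode on, which never bundles with an LACP passive peer; moving both sides to mode active, as done above, removes that mismatch.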

  • How to remove the WiSM2 from the Catalyst 6500 series switch?

    Hello, can you explain to me how to safely remove the WiSM2 from the Catalyst 6500 series switch?
    According to the documentation "Catalyst 6500 Series Wireless Services Module 2 Installation and Verification Note":
    To remove the WiSM2, perform these steps:
    Step 1   Shut down the module by one of these methods:
    In privileged mode from the router prompt, enter the hw-mod module mod shutdown command. Note: if you enter this command to shut down the module, you must enter the following commands in global configuration mode to restart (power down, and then power up) the module:
    Router# no power enable module mod
    Router# power enable module mod
    If the module does not respond to any commands, press the SHUTDOWN button located on the front panel of the module.
    Step 2   Verify that the WiSM2 shuts down. Do not remove the module from the switch until the POWER LED is off.
    But, in the case of Step 1 (first method), I do not see a "shutdown" option in the "hw-mod module 3" command...
    All I am prompted with is:
    c6500#hw-module module 3 ?
    boot           Specify boot options for the module through Power Management Bus control register
    reset          Reset specified component
    simulate       Simulate options for the module
    Is this a hidden option? The IOS version of the c6500 is 12.2(33)SXJ1.
    In the case of Step 2 (second method), there is no such button on the front panel of the module?
    Also, is it better to remove the module configuration manually, or to use the "module clear-config" command prior to removing the module?

    Good catch.
    As to which one is true, I will get back to you if I find something soon.
    http://www.cisco.com/en/US/docs/wireless/module/wism2/installation/note/WiSM_2.html#wp34727
    The above link is the procedure to remove the WiSM2. That procedure does not make it look like the WiSM2 is hot-swappable.
    http://www.cisco.com/en/US/docs/wireless/module/wism2/installation/note/WiSM_2.html#wp34621
    "All modules, including the supervisor engine (if you have redundant supervisor engines), support hot swapping. You can add, replace, or remove modules without interrupting the system power or causing other software or interfaces to shut down. For more information about hot-swapping modules, see the Catalyst 6500 Series Switch Module Installation Guide."
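
    As a sketch of the graceful power-down path that the install note itself quotes (slot 3 follows the example in the question):

    c6500# configure terminal
    c6500(config)# no power enable module 3
    (wait for the POWER LED to go off before pulling the module)
    c6500(config)# power enable module 3

    The final "power enable" step is only needed if the module is being re-seated rather than removed for good.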

  • Connecting LC/APC fiber patch cords to Cisco Catalyst 6500 & Cisco Access 3750 switches

    I have an LC/APC fiber patch cord infrastructure and I want to connect it to Cisco Catalyst 6500 & Cisco Access 3750 switches. What type of transceiver should be used?
    I read a note on Cisco website stating the following for Cisco SFP+ transceivers:
    Note: "Only connections with patch cords with PC or UPC connectors are supported. Patch cords with APC connectors are not supported. All cables and cable assemblies used must be compliant with the standards specified in the standards section"

    Thank you, but my question is that I have a single-mode fiber patch cord with an LC/APC connector, while Cisco's note says to use only LC/PC or LC/UPC connectors with SFP+ transceivers.
    So what type of transceiver should I use to connect an LC/APC patch cord to Cisco switches? Is there another type, or can SFP+ still be used?

  • Cisco Catalyst 6500 version 12.2(33)SXI13 configured as DHCP server for a VLAN responds to Windows 7 client with status code NOA

    Can anyone help figure out why the Catalyst 6509 is not able to assign an IPv6 address? Thank you.
    Cisco Catalyst 6500 version 12.2(33)SXI13 configured as DHCP server for a VLAN responds to Windows 7 client with status code NOADDRS-AVAIL(2). My configuration on the 6500 for the DHCPv6 server is:
    ipv6 dhcp database disk0://DHCPV6-DB
    ipv6 dhcp pool VLAN206IPV6
     prefix-delegation pool VLAN206IPV6-POOL
     dns-server 2620:B700:0:1001::53
     domain-name global.bio.com
    ipv6 local pool VLAN206IPV6-POOL 2620:B700:0:12C7::/65 65
    interface Vlan206
     description *** IPv6 Subnet ***  
     ip address 10.2.104.2 255.255.255.0
     ipv6 address 2620:B700:0:12C7::2/64
     ipv6 nd prefix 2620:B700:0:12C7::/64 14400 14400 no-autoconfig
     ipv6 nd managed-config-flag
     ipv6 dhcp server VLAN206IPV6
     standby version 2
     standby 0 ip 10.2.104.1
     standby 0 preempt
     standby 6 ipv6 2620:B700:0:12C7::1/64
     standby 6 preempt
    I'm getting a result from my debug as follows:
    Apr 10 16:28:02.873 PDT: %LINK-3-UPDOWN: Interface GigabitEthernet2/2, changed state to up
    Apr 10 16:28:02.873 PDT: %LINK-SP-3-UPDOWN: Interface GigabitEthernet2/2, changed state to up
    Apr 10 16:28:02.877 PDT: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet2/2, changed state to up
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: Received SOLICIT from FE80::5D5E:7EBD:CDBF:2519 on Vlan206
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: detailed packet contents
    Apr 10 16:28:03.861 PDT:   src FE80::5D5E:7EBD:CDBF:2519 (Vlan206)
    Apr 10 16:28:03.861 PDT:   dst FF02::1:2
    Apr 10 16:28:03.861 PDT:   type SOLICIT(1), xid 8277025
    Apr 10 16:28:03.861 PDT:   option ELAPSED-TIME(8), len 2
    Apr 10 16:28:03.861 PDT:     elapsed-time 101
    Apr 10 16:28:03.861 PDT:   option CLIENTID(1), len 14
    Apr 10 16:28:03.861 PDT:     00010001195FD895F01FAF10689E
    Apr 10 16:28:03.861 PDT:   option IA-NA(3), len 12
    Apr 10 16:28:03.861 PDT:     IAID 0x0FF01FAF, T1 0, T2 0
    Apr 10 16:28:03.861 PDT:   option UNKNOWN(39), len 32
    Apr 10 16:28:03.861 PDT:   option VENDOR-CLASS(16), len 14
    Apr 10 16:28:03.861 PDT:   option ORO(6), len 8
    Apr 10 16:28:03.861 PDT:     DOMAIN-LIST,DNS-SERVERS,VENDOR-OPTS,UNKNOWN
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: Option IA-NA(3) is not supported yet
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: Sending ADVERTISE to FE80::5D5E:7EBD:CDBF:2519 on Vlan206
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: detailed packet contents
    Apr 10 16:28:03.861 PDT:   src FE80::21D:E6FF:FEE4:4400
    Apr 10 16:28:03.861 PDT:   dst FE80::5D5E:7EBD:CDBF:2519 (Vlan206)
    Apr 10 16:28:03.861 PDT:   type ADVERTISE(2), xid 8277025
    Apr 10 16:28:03.861 PDT:   option SERVERID(2), len 10
    Apr 10 16:28:03.865 PDT:     00030001001DE6E44400
    Apr 10 16:28:03.865 PDT:   option CLIENTID(1), len 14
    Apr 10 16:28:03.865 PDT:     00010001195FD895F01FAF10689E
    Apr 10 16:28:03.865 PDT:   option STATUS-CODE(13), len 15
    Apr 10 16:28:03.865 PDT:     status code NOADDRS-AVAIL(2)
    Apr 10 16:28:03.865 PDT:     status message: NOADDRS-AVAIL

    Hello,
    maybe you are hitting the following bug:
    IPv6 Address Assignment Support for IPv6 DHCP Server
    CSCse81385
    Hope this helps
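
    For context, an interpretation of the debug output above (not part of the original reply): the line "Option IA-NA(3) is not supported yet" indicates that this IOS release's DHCPv6 server only supports prefix delegation, not stateful address assignment, which is why the client's IA-NA request is answered with NOADDRS-AVAIL. On releases that do support IA-NA, the pool would carry an address range instead of a prefix-delegation pool; a hedged sketch using the prefix from the thread (the "address prefix" command exists in later IOS trains):

    ipv6 dhcp pool VLAN206IPV6
     address prefix 2620:B700:0:12C7::/64
     dns-server 2620:B700:0:1001::53
     domain-name global.bio.com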

  • Hi, I have a Catalyst 6500 with X6K-SUP2-2GE; the IOS and boot loader images have been wiped out, it starts in ROMmon SP mode, and I can't switch to the RP to download an IOS via Xmodem

    Hi, I have a Catalyst 6500 with X6K-SUP2-2GE. The IOS and boot loader images have been wiped out. It starts in ROMmon SP mode and I can't switch to the RP to start downloading the IOS using Xmodem. Xmodem shouldn't work in ROMmon SP mode, but it is not even giving the
    "not executable" message. The slot0: and disk0: devices are not accessible and I can't see the files inside; when I try "dir slot0:" or "dir disk0:" it says they can't be opened, and when I try to boot from them there's nothing as well. What can I do to load an IOS image onto bootflash: or slot0:? Each time I load the image using Xmodem, at the end it gives me *** System received a Software forced crash ***
    signal=0x17, code=0x5, context=0x0
    When I run the commands:
    rommon 1 > boot bootflash:
    boot: cannot determine first file name on device "bootflash:"
    rommon 2 > boot slot0:
    boot: cannot open "slot0:"
    boot: cannot determine first file name on device "slot0:"
    BTW, System Bootstrap, version 7.1.
    I'm looking at formatting the PCMCIA card using a PC (FAT16), copying the boot image onto it, and then trying to load from the PCMCIA; if that works I'll reformat it using the Supervisor Engine 2.
    Does anyone have another idea I can use? Thanks in advance.

    This is a potentially complex issue.
    Is this SUP configured to run as native IOS or hybrid CatOS?
    While in ROMMON, can you run the "dev" command and see what drives are recognized? Then "dir" the drives that the SUP recognizes.
    Can you provide the screen captures as it boots?
    You would be better served by opening a TAC case.
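
    A sketch of those two ROMmon checks (standard ROMmon commands; the device list shown will vary with the hardware):

    rommon 1 > dev
    rommon 2 > dir bootflash:
    rommon 3 > dir slot0:

    If "dev" does not list slot0:/disk0: at all, or "dir" fails on every device, that points to a card or filesystem problem rather than just a missing image, which is another reason to involve TAC.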

  • Installing a new network card on a Cisco Catalyst 6500 in VSS mode

    Hi All.
    I need to install a new line card in a Cisco Catalyst 6500 running in VSS mode. Do I need to follow any special procedure, or do I just insert the new card and the Catalyst automatically recognizes it?
    Thank you so much.

    Hi,
    Just insert the blade and the switch should recognize it. For the 6500 series the blades are hot-swappable.
    HTH
