VPC on Nexus 5000 with Catalyst 6500 (no VSS)

Hi, I'm pretty new to the Nexus and UCS world, so I have many questions I hope you can help answer.
The diagram below shows the configuration we are looking to deploy. We chose this design because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to both 6500s.
The blades in our UCS chassis have Intel dual-port cards, so they do not support full failover.
My questions are:
- Is this my best deployment choice?
- vPC depends heavily on the management interface on the Nexus 5000 for peer-keepalive monitoring, so what happens if the vPC breaks because:
     - one of the 6500s goes down?
          - What about STP?
          - What happens to the EtherChannels on the remaining 6500?
     - the management interface goes down for some other reason?
          - Which Nexus becomes the primary?
Below is the list of devices involved and the configuration for the Nexus 5000s and the 6500s.
Any help is appreciated.
Devices
- 2 Cisco Catalyst 6506, each with two WS-SUP720-3B (no VSS)
- 2 Cisco Nexus 5010
- 2 Cisco UCS 6120XP
- 2 UCS chassis
     - 4 Cisco B200-M1 blades (2 per chassis)
          - Dual-port 10Gb Intel card (1 per blade)
vPC Configuration on Nexus 5000
TACSWN01
feature vpc
vpc domain 5
reload restore delay 300
peer-keepalive destination 10.11.3.10
role priority 10
!--- Enables vPC, defines the vPC domain, and sets the keepalive peer
int ethernet 1/9-10
channel-group 50 mode active
!--- Puts the interfaces on Po50
int port-channel 50
switchport mode trunk
spanning-tree port type network
vpc peer-link
!--- Po50 configured as the peer-link for vPC
int ethernet 1/17-18
description UCS6120-A
switchport mode trunk
channel-group 51 mode active
!--- Associates the interfaces to Po51, connected to UCS6120xp-A
int port-channel 51
switchport mode trunk
vpc 51
spanning-tree port type edge trunk
!--- Associates vPC 51 to Po51
int ethernet 1/19-20
description UCS6120-B
switchport mode trunk
channel-group 52 mode active
!--- Associates the interfaces to Po52, connected to UCS6120xp-B
int port-channel 52
switchport mode trunk
vpc 52
spanning-tree port type edge trunk
!--- Associates vPC 52 to Po52
!--- CONFIGURATION for the connection to the Catalyst 6506s
int ethernet 1/1-3
description Cat6506-01
switchport mode trunk
channel-group 61 mode active
!--- Associates the interfaces to Po61, connected to Cat6506-01
int port-channel 61
switchport mode trunk
vpc 61
!--- Associates vPC 61 to Po61
int ethernet 1/4-6
description Cat6506-02
switchport mode trunk
channel-group 62 mode active
!--- Associates the interfaces to Po62, connected to Cat6506-02
int port-channel 62
switchport mode trunk
vpc 62
!--- Associates vPC 62 to Po62
TACSWN02
feature vpc
vpc domain 5
reload restore delay 300
peer-keepalive destination 10.11.3.9
role priority 20
!--- Enables vPC, defines the vPC domain, and sets the keepalive peer
int ethernet 1/9-10
channel-group 50 mode active
!--- Puts the interfaces on Po50
int port-channel 50
switchport mode trunk
spanning-tree port type network
vpc peer-link
!--- Po50 configured as the peer-link for vPC
int ethernet 1/17-18
description UCS6120-A
switchport mode trunk
channel-group 51 mode active
!--- Associates the interfaces to Po51, connected to UCS6120xp-A
int port-channel 51
switchport mode trunk
vpc 51
spanning-tree port type edge trunk
!--- Associates vPC 51 to Po51
int ethernet 1/19-20
description UCS6120-B
switchport mode trunk
channel-group 52 mode active
!--- Associates the interfaces to Po52, connected to UCS6120xp-B
int port-channel 52
switchport mode trunk
vpc 52
spanning-tree port type edge trunk
!--- Associates vPC 52 to Po52
!--- CONFIGURATION for the connection to the Catalyst 6506s
int ethernet 1/1-3
description Cat6506-01
switchport mode trunk
channel-group 61 mode active
!--- Associates the interfaces to Po61, connected to Cat6506-01
int port-channel 61
switchport mode trunk
vpc 61
!--- Associates vPC 61 to Po61
int ethernet 1/4-6
description Cat6506-02
switchport mode trunk
channel-group 62 mode active
!--- Associates the interfaces to Po62, connected to Cat6506-02
int port-channel 62
switchport mode trunk
vpc 62
!--- Associates vPC 62 to Po62
vPC Verification
show vpc consistency-parameters
!--- Shows the vPC compatibility parameters
show feature
!--- Verifies that the vpc and lacp features are enabled
show vpc brief
!--- Displays information about vPC Domain
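Two more checks are worth adding here (an editorial suggestion, not part of the original post; both are standard NX-OS commands):
show vpc peer-keepalive
!--- Displays the keepalive status, relevant to the failure questions above
show vpc role
!--- Shows which peer is currently the operational primary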
Etherchannel configuration on TAC 6500s
TACSWC01
interface range GigabitEthernet2/38 - 43
description TACSWN01 (Po61 vPC61)
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
no ip address
channel-group 61 mode active

TACSWC02
interface range GigabitEthernet2/38 - 43
description TACSWN02 (Po62 vPC62)
switchport
switchport trunk encapsulation dot1q
switchport mode trunk
no ip address
channel-group 62 mode active

ihernandez81,
Between c1-r1 & c1-r2 there are no L2 links; ditto for d6-s1 & d6-s2. We did have a routed link just to allow orphan traffic.
All the c1-r1 & c1-r2 HSRP communications (we use GLBP as well) go from c1-r1 to c1-r2 via hosp-n5k-s1 & hosp-n5k-s2. Port channels 203 & 204 carry exactly the same VLANs.
The same is the case on the d6-s1 & d6-s2 side, except we converted them to a VSS cluster, so we only have Po203, with 4 x 10 Gb links going to the 5Ks (2 from each VSS member to each 5K).
As you can tell, what we were doing was extending VM VLANs between 2 data centers prior to the arrival of the 7010s and UCS chassis, which worked quite well.
If you got on any 5K you would see 2 port channels, 203 & 204, going to each 6500; again, when one pair went to VSS, Po204 went away.
I know, I know, they are not the same thing... but if you view the 5Ks like a 3750 stack: how would you hook up a 3750 stack to 2 6500s, and if you did, why would you run an L2 link between the 6500s?
For us, using 4 x 10G ports between the 6509s took ports that were too expensive (we had 6704s), so we used the 5Ks.
Our blocking link was on one of the links between site1 & site2. If we did not have WAN connectivity there would have been no blocking or loops.
Caution: if you go with 7Ks, beware of the inability to do L2/L3 via vPCs.
Better?
One of the nice things about working with some of this stuff is that, as long as you maintain L2 connectivity while migrating, things tend to keep working, unless they really break.

Similar Messages

  • Trunking on Nexus 5000 to Catalyst 4500

    I have 2 devices, one on each end of a point-to-point link. One side has a Nexus 5000, the other end a Catalyst 4500. We want a trunk port on both sides allowing a single VLAN for the moment. I have not worked with Nexus before. Could someone look at the port configurations and let me know if they look OK?
    nexus 5000
    interface Ethernet1/17
      description
      switchport mode trunk
      switchport trunk allowed vlan 141
      spanning-tree guard root
      spanning-tree bpdufilter enable
      speed 1000
    Catalyst 4500
    interface GigabitEthernet3/39
    description
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 141
    switchport mode trunk
    speed 1000
    spanning-tree bpdufilter enable
    spanning-tree guard root

    Thanks guys, we found the issue. The Catalyst is on my side and the Nexus is on the hosting center's side. The hosting center moved their connection to a different Nexus 5000 and the connection came right up. We dropped the spanning-tree guard root.
    It was working on the previous Nexus when we set the native VLAN to 141, so we thought it was the point-to-point link dropping the tags.
    The hosting center engineer thinks it might have to do with the vPC peer-link loop prevention on the previous Nexus.
    Anyway, it is working the way we need it to.

  • Connecting Nexus 5548 to Catalyst 6500 VS-S720-10G

    Good day,
    Could anyone out there please assist me with basic connectivity/configuration of these two devices so that they can communicate, e.g., ping each other's management interfaces?
    Nexus Configuration:
    vrf context management
      ip route 0.0.0.0/0 10.200.1.4
    vlan 1
    interface mgmt0
      ip address 10.200.1.2/16
    Catalyst 6500:
    interface Vlan1
    description Nexus
    ip address 10.200.1.4 255.255.0.0
    interface TenGigabitEthernet5/4
    switchport
    Note: I can see all the devices with the "sh cdp nei" command. Please assist.

    Nexus# sh ip int mgmt0
    IP Interface Status for VRF "management"(2)
    mgmt0, Interface status: protocol-up/link-up/admin-up, iod: 2,
    IP address: 10.13.37.201, IP subnet: 10.13.37.128/25
    IP broadcast address: 255.255.255.255
    IP multicast groups locally joined: none
    IP MTU: 1500 bytes (using link MTU)
    IP primary address route-preference: 0, tag: 0
    IP proxy ARP : disabled
    IP Local Proxy ARP : disabled
    IP multicast routing: disabled
    IP icmp redirects: enabled
    IP directed-broadcast: disabled
    IP icmp unreachables (except port): disabled
    IP icmp port-unreachable: enabled
    IP unicast reverse path forwarding: none
    IP load sharing: none
    IP interface statistics last reset: never
    IP interface software stats: (sent/received/forwarded/originated/consumed)
    Unicast packets : 0/83401/0/20/20
    Unicast bytes : 0/8083606/0/1680/1680
    Multicast packets : 0/18518/0/0/0
    Multicast bytes : 0/3120875/0/0/0
    Broadcast packets : 0/285/0/0/0
    Broadcast bytes : 0/98090/0/0/0
    Labeled packets : 0/0/0/0/0
    Labeled bytes : 0/0/0/0/0
    Nexus# sh cdp nei
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
    S - Switch, H - Host, I - IGMP, r - Repeater,
    V - VoIP-Phone, D - Remotely-Managed-Device,
    s - Supports-STP-Dispute
    Device-ID Local Intrfce Hldtme Capability Platform Port ID
    3560 mgmt0 178 S I WS-C3560-24PS Fas0/23
    6500 Eth1/32 135 R S I WS-C6509-E Ten5/4
    Nexus# ping 10.13.37.201 vrf management
    PING 10.13.37.201 (10.13.37.201): 56 data bytes
    64 bytes from 10.13.37.201: icmp_seq=0 ttl=255 time=0.278 ms
    64 bytes from 10.13.37.201: icmp_seq=1 ttl=255 time=0.174 ms
    64 bytes from 10.13.37.201: icmp_seq=2 ttl=255 time=0.169 ms
    64 bytes from 10.13.37.201: icmp_seq=3 ttl=255 time=0.165 ms
    64 bytes from 10.13.37.201: icmp_seq=4 ttl=255 time=0.165 ms
    --- 10.13.37.201 ping statistics ---
    5 packets transmitted, 5 packets received, 0.00% packet loss
    round-trip min/avg/max = 0.165/0.19/0.278 ms
    Nexus# ping 10.13.37.202
    PING 10.13.37.202 (10.13.37.202): 56 data bytes
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 0 timed out
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 1 timed out
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 2 timed out
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 3 timed out
    ping: sendto 10.13.37.202 64 chars, No route to host
    Request 4 timed out
    --- 10.13.37.202 ping statistics ---
    5 packets transmitted, 0 packets received, 100.00% packet loss
    Nexus# ping 10.13.37.203
    PING 10.13.37.203 (10.13.37.203): 56 data bytes
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 0 timed out
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 1 timed out
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 2 timed out
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 3 timed out
    ping: sendto 10.13.37.203 64 chars, No route to host
    Request 4 timed out
    --- 10.13.37.203 ping statistics ---
    5 packets transmitted, 0 packets received, 100.00% packet loss
    3560#ping 10.13.37.201
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 10.13.37.201, timeout is 2 seconds:
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
    Note: Now I want to be able to ping the Nexus (10.13.37.201) from the 6509 (10.13.37.203), and also to ping both the 3560 (10.13.37.202) and the 6509 (10.13.37.203) from the Nexus. How can I do that? I can ping the Nexus from the 3560, as shown above.
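    An editorial note on the outputs above, hedged: mgmt0 lives in VRF "management", so pings from the Nexus to that subnet must specify the VRF; the failed pings above were sourced from the default VRF, hence "No route to host". For the 6509 to be reachable, it also needs an interface in the 10.13.37.128/25 subnet on the same VLAN as the 3560 management port. A minimal sketch, with the VLAN number purely hypothetical:
    ping 10.13.37.203 vrf management
    !--- On the Nexus: source the ping from the management VRF
    vlan 37
    interface Vlan37
     ip address 10.13.37.203 255.255.255.128
    !--- On the 6509, assuming a hypothetical VLAN 37 carries the management subnet and is trunked toward the 3560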

  • CS-MANAGER: Catalyst 6500 in VSS mode not supported????

    We have recently moved from CSM 3.2 to CSM 3.3.1 Service Pack 3.
    We are now trying to add a Cisco Catalyst 6513 configured in VSS mode, running 12.2(33)SXI5 software. When we try to add the device from the network, it starts discovering the configuration, and when it reaches 15% we get the message "Discovery Failed - Invalid Device: Catalyst in VSS mode is not Supported".
    I've attached a screenshot showing the error.
    Any idea what I can do to manage this device in CSM?
    Best Regards,
    Nicolás

    Hello Nicolas
    VSS is not supported even in the latest CSM release 4.1, AFAIK:
    http://www.cisco.com/en/US/docs/security/security_management/cisco_security_manager/security_manager/4.1/compatibility/information/csmsd410.html
    Check with your Cisco account team whether it is going to be added anytime soon.
    Please rate if you find the input helpful.
    Regards
    Farrukh

  • Ask the Expert: Different Flavors and Design with vPC on Cisco Nexus 5000 Series Switches

    Welcome to the Cisco® Support Community Ask the Expert conversation.  This is an opportunity to learn and ask questions about Cisco® NX-OS.
    The biggest limitation to a classic port channel communication is that the port channel operates only between two devices. To overcome this limitation, Cisco NX-OS has a technology called virtual port channel (vPC). A pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel attached devices. The two devices that act as the logical port channel endpoint are actually two separate devices. This setup has the benefits of hardware redundancy combined with the benefits offered by a port channel, for example, loop management.
    vPC technology is the main factor for success of Cisco Nexus® data center switches such as the Cisco Nexus 5000 Series, Nexus 7000 Series, and Nexus 2000 Series Switches.
    This event is focused on discussing all possible types of vPC, along with best practices, failure scenarios, Cisco Technical Assistance Center (TAC) recommendations, and troubleshooting.
    Vishal Mehta is a customer support engineer for the Cisco Data Center Server Virtualization Technical Assistance Center (TAC) team based in San Jose, California. He has been working in TAC for the past 3 years with a primary focus on data center technologies, such as the Cisco Nexus 5000 Series Switches, Cisco Unified Computing System™ (Cisco UCS®), Cisco Nexus 1000V Switch, and virtualization. He presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE® certification (number 37139) in routing and switching, and service provider.
    Nimit Pathak is a customer support engineer for the Cisco Data Center Server Virtualization TAC team based in San Jose, California, with a primary focus on data center technologies such as Cisco UCS, the Cisco Nexus 1000V Switch, and virtualization. Nimit holds a master's degree in electrical engineering from Bridgeport University and has CCNA® and CCNP® certifications; he is also working toward a Cisco data center CCIE® certification while pursuing an MBA degree from Santa Clara University.
    Remember to use the rating system to let Vishal and Nimit know if you have received an adequate response. 
    Because of the volume expected during this event, Vishal and Nimit might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Switching & Routing, shortly after the event. This event lasts through August 29, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Gustavo
    Please see my responses to your questions:
    Yes, almost all routing protocols use multicast to establish adjacencies. We are dealing with two different types of traffic: control plane and data plane.
    Control plane: to establish a routing adjacency, the first packet (hello) is punted to the CPU. So in the case of the triangle routed vPC topology specified in the Operations Guide link, multicast for routing adjacencies will work. The hello packets will be exchanged across all 3 routers, and adjacency will be formed over the vPC links:
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_L3_w_vpc_5500platform.html#wp999181
    Now for the data plane we have two types of traffic: unicast and multicast.
    Unicast traffic will not have forwarding issues as such, but because Layer 3 ECMP and the port channel run independent hash calculations, it is possible that Layer 3 ECMP chooses N5k-1 as the Layer 3 next hop for a destination address while the port-channel hashing chooses the physical link toward N5k-2. In this scenario, N5k-2 receives packets from R with the N5k-1 MAC as the destination MAC.
    Sending traffic over the peer link to the correct gateway is acceptable for data forwarding, but it is suboptimal because it makes traffic cross the peer link when it could be routed directly.
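    (An editorial aside, hedged: this is the case the vPC peer-gateway option addresses; with it enabled, either peer can route frames destined to its peer's router MAC locally instead of punting them across the peer link.)
    vpc domain 5
      peer-gateway
    !--- Allows each vPC peer to act as the active gateway for packets addressed to its peer's router MAC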
    For that topology, multicast traffic might see complete loss, because when a PIM router is connected to Cisco Nexus 5500 Platform switches in a vPC topology, the PIM join messages are received by only one switch, while the multicast data might be received by the other switch.
    Loop avoidance works a little differently on the Nexus 5000 versus the Nexus 7000.
    Similarity: on both products, loop avoidance is possible thanks to the VSL bit.
    The VSL bit is set in the DBUS header, internal to the Nexus.
    It is not something set in the Ethernet packet that can be identified. The VSL bit is set on the port ASIC of the port used for the vPC peer link, so if you have Nexus A and Nexus B configured for vPC and a packet leaves Nexus A toward Nexus B, Nexus B will set the VSL bit on the ingress port ASIC. This is not something that traverses the peer link.
    This mechanism is used for loop prevention within the chassis.
    The idea is that if the packet came in on the peer link from the vPC peer, the system assumes the vPC peer has already forwarded it out the vPC-enabled port channels toward the end device, so the egress vPC interface's port ASIC filters the packet on egress.
    Differences: on the Nexus 5000, when it has to do an L3-to-L2 lookup to forward traffic, the VSL bit is cleared, so the traffic is not dropped, unlike on the Nexus 7000 and Nexus 3000.
    It still does loop prevention, but the L3-to-L2 lookup works differently on the Nexus 5000 and the Nexus 7000.
    For more details please see below presentation:
    https://supportforums.cisco.com/sites/default/files/session_14-_nexus.pdf
    DCI scenario: if both pairs are Nexus 5000s, separation of the L3/L2 links is not needed.
    But in most scenarios I have seen a pair of Nexus 5000s with a pair of Nexus 7000s over DCI, or 2 pairs of Nexus 7000s over DCI. If Nexus 7000s are used, then separate L3 and L2 links are definitely required, as described in the presentation linked above.
    Let us know if you have further questions.
    Thanks,
    Vishal

  • Two Nexus 5020 vPC etherchannel with Two Catalyst 6500 VSS

    Hi,
    we are fighting with a 40-Gbps EtherChannel between 2 Nexus 5000s and 2 Catalyst 6500s, but the EtherChannel never comes up. Here is the config:
    NK5-1
    interface port-channel30
      description Trunk hacia VSS 6500
      switchport mode trunk
      vpc 30
      switchport trunk allowed vlan 50-54
      speed 10000
    interface Ethernet1/3
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      beacon
      channel-group 30
    interface Ethernet1/4
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      channel-group 30
    NK5-2
    interface port-channel30
      description Trunk hacia VSS 6500
      switchport mode trunk
      vpc 30
      switchport trunk allowed vlan 50-54
      speed 10000
    interface Ethernet1/3
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      beacon
      channel-group 30
    interface Ethernet1/4
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      beacon
      channel-group 30
    Catalyst 6500 VSS
    interface Port-channel30
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    interface TenGigabitEthernet2/1/2
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    channel-protocol lacp
    channel-group 30 mode passive
    interface TenGigabitEthernet2/1/3
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    channel-protocol lacp
    channel-group 30 mode passive
    interface TenGigabitEthernet1/1/2
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    channel-protocol lacp
    channel-group 30 mode passive
    interface TenGigabitEthernet1/1/3
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    channel-protocol lacp
    channel-group 30 mode passive
    The "show vpc 30" output is as follows:
    N5K-2# sh vpc 30
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    30     Po30        down*  success     success                    -         
    But the "show vpc consistency-parameters vpc 30" output is:
    N5K-2# sh vpc consistency-parameters vpc 30
        Legend:
            Type 1 : vPC will be suspended in case of mismatch
    Name                    Type  Local Value   Peer Value
    Shut Lan                1     No            No
    STP Port Type           1     Default       Default
    STP Port Guard          1     None          None
    STP MST Simulate PVST   1     Default       Default
    mode                    1     on            -
    Speed                   1     10 Gb/s       -
    Duplex                  1     full          -
    Port Mode               1     trunk         -
    Native Vlan             1     1             -
    MTU                     1     1500          -
    Allowed VLANs           -     50-54         50-54
    Local suspended VLANs   -     -             -
    We will appreciate any advice.
    Thank you very much for your time...
    Jose

    Hi Lucien,
    here is the "show vpc brief"
    N5K-2# sh vpc brief
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
    vPC domain id                   : 5  
    Peer status                     : peer adjacency formed ok     
    vPC keep-alive status           : peer is alive                
    Configuration consistency status: success
    Per-vlan consistency status     : success                      
    Type-2 consistency status       : success
    vPC role                        : secondary                    
    Number of vPCs configured       : 2  
    Peer Gateway                    : Disabled
    Dual-active excluded VLANs      : -
    Graceful Consistency Check      : Enabled
    vPC Peer-link status
    id   Port   Status Active vlans   
    1    Po5    up     50-54                                                   
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    30     Po30        down*  success     success                    -         
    31     Po31        down*  failed      Consistency Check Not      -         
                                          Performed                            
    *************************************************************************+
    *************************************************************************+
    N5K-1# sh vpc brief
    Legend:
                    (*) - local vPC is down, forwarding via vPC peer-link
    vPC domain id                   : 5  
    Peer status                     : peer adjacency formed ok     
    vPC keep-alive status           : peer is alive                
    Configuration consistency status: success
    Per-vlan consistency status     : success                      
    Type-2 consistency status       : success
    vPC role                        : primary                      
    Number of vPCs configured       : 2  
    Peer Gateway                    : Disabled
    Dual-active excluded VLANs      : -
    Graceful Consistency Check      : Enabled
    vPC Peer-link status
    id   Port   Status Active vlans   
    1    Po5    up     50-54                                                   
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    30     Po30        down*  failed      Consistency Check Not      -         
                                          Performed                            
    31     Po31        down*  failed      Consistency Check Not      -         
                                          Performed             
    I have changed LACP on both devices to active:
    On Nexus N5K-1/-2
    interface Ethernet1/3
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      channel-group 30 mode active
    interface Ethernet1/4
      switchport mode trunk
      switchport trunk allowed vlan 50-54
      channel-group 30 mode active    
    On Catalyst 6500
    interface TenGigabitEthernet2/1/2-3
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    switchport mode trunk
    channel-protocol lacp
    channel-group 30 mode active
    interface TenGigabitEthernet1/1/2-3
    switchport
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 50-54
    switchport mode trunk
    channel-protocol lacp
    channel-group 30 mode active
    Thanks for your time.
    Jose
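    An editorial footnote, hedged: the original N5K configs bundled Eth1/3-4 with plain "channel-group 30", i.e., static mode on, while the 6500 side ran LACP passive; a static channel never negotiates with an LACP-only peer, which matches the "mode ... on / -" row in the consistency parameters above. With both ends on LACP active, the bundle can be verified with:
    show port-channel summary
    !--- Member ports should show (P) and Po30 should show flags (SU) once LACP converges
    show vpc 30
    !--- Status should move from down* to up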

  • Cisco Nexus 9508 vPC with Catalyst switches

    Hi,
    I am Karthik.
    We are going to deploy a Nexus 9508 running NX-OS in our data center; the existing network has about 50 Catalyst L2 and L3 switches.
    If we run vPC between the 9Ks and the Catalyst switches, are there restrictions on which Catalyst models can connect into a vPC on the 9K?
    Can you clarify this?
    Thanks in advance for the valuable response!

    Hi,
    We have 4500 series switches with Sup 6-E engines, plus the Nexus 9508 and N2232PP FEX. When we try to configure FEX between these switches, the Nexus 9508 shows an unknown-feature error.
    The current NX-OS version is n9000-dk9.6.1.2.I2.2.bin.

  • Catalyst 6500 - Nexus 7000 migration

    Hello,
    I'm planning a platform migration from Catalyst 6500 to Nexus 7000. The old network consists of two pairs of 6500s as server distribution, configured with HSRPv1 as the FHRP, rapid-pvst, and OSPF as the IGP. Furthermore, the Cat6500s run MPLS/L3VPN with BGP for 2/3 of the VLANs. Otherwise, the topology is quite standard, with a number of 6500s and CBS3020/3120s as server access.
    In preparing for the migration, VTP will be discontinued and VLANs have been manually "copied" from the 6500s to the N7Ks. Bridge assurance is enabled downstream toward the new N55K access switches, but toward the 6500s the upcoming EtherChannels will run in "normal" mode, to avoid any problems with BA. For now, only L2 will be used on the N7Ks, as we're awaiting the 5.2 release, which includes MPLS/L3VPN. But all servers/blade switches will be migrated prior to that.
    The questions arise when migrating Layer 3 functionality, including HSRP. As I understand it, HSRP in NX-OS has been modified slightly to better align with the vPC feature and to avoid suboptimal forwarding across the vPC peer link. But that aside, is there anything that would complicate a "sliding" FHRP migration? I'm thinking of configuring SVIs on the N7Ks with unused IPs, assigning the same virtual IP, and only decrementing the priority to a value below the current standby router. Spanning-tree priority will also, if necessary, be modified to better align with HSRP.
    From a routing perspective, I'm thinking of configuring OSPF/BGP etc. similar to the 6500s, only tweaking the metrics (cost, local-pref, etc.) to constrain forwarding to the 6500s, and subsequently migrating both routing and FHRP at the same time; maybe not big bang, but stepwise. Is there anything in particular one should be aware of when doing this? At present this seems like a valid approach to me, but maybe someone has experience with this (good/bad), so I'm hoping someone has insight they would like to share.
    Topology drawing is attached.
    Thanks
    /Ulrich

    In a normal scenario, yes. But not in vPC. HSRP is a bit different in a vPC environment: even though an SVI is not the HSRP primary, it will still forward traffic. Please see the white paper below.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html
    I suggest you set up the SVIs on the N7Ks but leave them in the down state. Until you are ready to use the N7K as the gateway for the SVIs, shut down the SVIs on the C6K one at a time and bring up the N7K SVIs. By "ready" I mean the spanning-tree root is on the N7K along with all the L3 northbound links (toward the core).
    I had a customer who did the same thing you are trying to do, to avoid downtime. However, out of the 50+ SVIs, we had 1 SVI for which HSRP would not establish between the C6K and N7K, and we ended up moving everything to the N7K on the fly during the migration. Yes, they were down for about 30 sec to 1 min per SVI, but it was less painful and wasted less time, because we did not need to figure out what was wrong or chase NX-OS bugs.
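    To illustrate the staged approach, a minimal sketch with purely hypothetical VLAN and addressing (VLAN 100, 10.1.100.0/24); the virtual IP matches the existing C6K HSRP group and the SVI stays shut until cutover:
    feature interface-vlan
    feature hsrp
    interface Vlan100
      shutdown
      ip address 10.1.100.3/24
      hsrp 1
        priority 90
        ip 10.1.100.1
    !--- At cutover: shut the C6K SVI, then "no shutdown" here and raise the HSRP priority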
    HTH,
    jerry

  • Multicast: duplicated packets on nexus 7k with vpc and HSRP

    Hi guys,
    I'm testing a multicast deployment in the lab shown below. The sender and the receiver are connected to the 6500 in two different VLANs: the sender is in VLAN 23 and the receiver in VLAN 500. They are connected to the 6500 with a trunk link. There is a vPC between the two Nexus 7Ks and the 6500.
    Furthermore, HSRP is running on VLAN interfaces 23 and 500 on both Nexus.
    I have configured the minimum to use PIM-SM with a static RP. The RP is the 3750 above the Nexus switches. (*,G) and (S,G) states are created correctly.
    IGMP snooping is enabled on the 6500 and on both Nexus.
    I'm using iperf to generate the flow, and NetFlow and SNMP to monitor what happens.
    Everything works: my receiver receives the flow, and it takes the correct path. My problem is that I see four times more multicast traffic on VLAN interface 500 on both Nexus, yet this traffic is sent only once to the receiver (which is the correct behavior), and the excess traffic does not appear outbound on any other physical interface.
    Indeed, I send one flow, and the two Nexus receive it (one from the peer link and the other from the 6500) in VLAN 23 (for example, 25 packets inbound).
    But when the flow is routed into VLAN 500, there are 100 packets outbound on each VLAN 500 interface on each Nexus.
    And when monitoring all physical interfaces, I only see 25 packets outbound on the interface toward the receiver; the excess never leaves the box.
    I have attached the graphs from one of the Nexus for VLAN 23 and VLAN 500. NetFlow says the same thing in bits/s.
    Has anyone seen this before? Any idea about the packet duplication?
    Thanks for any comment,
    Regards,
    Configuration:
    Nexus 1: n7000-s1-dk9.5.2.7.bin, 2 SUP1, 1 N7K-M132XP-12, 1 N7K-M148GS-11
    Nexus 2: n7000-s1-dk9.5.2.7.bin, 2 SUP1, 1 N7K-M132XP-12, 1 N7K-M148GS-11
    6500: s72033-adventerprisek9_wan-mz.122-33.SXI5.bin (12.2(33)SXI5)
    3750: c3750-ipservicesk9-mz.122-50.SE5.bin (12.2(50)SE5)

    Hi Kuldeep,
    If you intend to put those routers on a non-vPC VLAN, you may create a new inter-switch trunk between the N7Ks and allow that non-vPC VLAN on it. However, if they will be on a vPC VLAN, it is best to create two links to the N7K pair and create a vPC; otherwise, configure those ports as orphan ports, which will leverage the vPC peer link.
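    A minimal sketch of the first option, with a purely hypothetical VLAN and interface (the point is only that the VLAN stays off the vPC peer link and rides a separate inter-switch trunk):
    vlan 999
    interface Ethernet3/1
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 999
    !--- Repeat on the other N7K, and do not allow VLAN 999 on the vPC peer link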
    HTH
    Jay Ocampo

  • Mix of Nexus 5500 & Catalyst 6500

    Hi there,
    Does the Nexus 5500 series require a Nexus 7000 parent device, or is it supported with a Catalyst 6509?
    If this is the scenario :
    CAT6509 - Core Layer 3
    CAT6509 - Distribution Layer 2/3
    NX-5548 - Access Layer 2
    And FYI, no VSS on Catalyst 6509.
    Thanks in advance,
    Gerard

    Hello Chad,
    Cool, thanks, that is perfect. We're eventually moving to Nexus at the core and distribution really soon. Yeah, that's what I've understood about the Fabric Extender requirement for a parent 5K or 7K too...
    Gerard

  • Catalyst 6500 12.2(33)SXI13 as DHCPv6 server for a VLAN responds to Windows 7 client with status code NOADDRS-AVAIL

    Can anyone help figure out why the Catalyst 6509 is not able to assign an IPv6 address? Thank you.
    The Cisco Catalyst 6500, running 12.2(33)SXI13 and configured as the DHCP server for a VLAN, responds to the Windows 7 client with status code NOADDRS-AVAIL(2). My DHCPv6 server configuration on the 6500 is:
    ipv6 dhcp database disk0://DHCPV6-DB
    ipv6 dhcp pool VLAN206IPV6
     prefix-delegation pool VLAN206IPV6-POOL
     dns-server 2620:B700:0:1001::53
     domain-name global.bio.com
    ipv6 local pool VLAN206IPV6-POOL 2620:B700:0:12C7::/65 65
    interface Vlan206
     description *** IPv6 Subnet ***  
     ip address 10.2.104.2 255.255.255.0
     ipv6 address 2620:B700:0:12C7::2/64
     ipv6 nd prefix 2620:B700:0:12C7::/64 14400 14400 no-autoconfig
     ipv6 nd managed-config-flag
     ipv6 dhcp server VLAN206IPV6
     standby version 2
     standby 0 ip 10.2.104.1
     standby 0 preempt
     standby 6 ipv6 2620:B700:0:12C7::1/64
     standby 6 preempt
    I'm getting a result from my debug as follows:
    Apr 10 16:28:02.873 PDT: %LINK-3-UPDOWN: Interface GigabitEthernet2/2, changed state to up
    Apr 10 16:28:02.873 PDT: %LINK-SP-3-UPDOWN: Interface GigabitEthernet2/2, changed state to up
    Apr 10 16:28:02.877 PDT: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet2/2, changed state to up
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: Received SOLICIT from FE80::5D5E:7EBD:CDBF:2519 on Vlan206
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: detailed packet contents
    Apr 10 16:28:03.861 PDT:   src FE80::5D5E:7EBD:CDBF:2519 (Vlan206)
    Apr 10 16:28:03.861 PDT:   dst FF02::1:2
    Apr 10 16:28:03.861 PDT:   type SOLICIT(1), xid 8277025
    Apr 10 16:28:03.861 PDT:   option ELAPSED-TIME(8), len 2
    Apr 10 16:28:03.861 PDT:     elapsed-time 101
    Apr 10 16:28:03.861 PDT:   option CLIENTID(1), len 14
    Apr 10 16:28:03.861 PDT:     00010001195FD895F01FAF10689E
    Apr 10 16:28:03.861 PDT:   option IA-NA(3), len 12
    Apr 10 16:28:03.861 PDT:     IAID 0x0FF01FAF, T1 0, T2 0
    Apr 10 16:28:03.861 PDT:   option UNKNOWN(39), len 32
    Apr 10 16:28:03.861 PDT:   option VENDOR-CLASS(16), len 14
    Apr 10 16:28:03.861 PDT:   option ORO(6), len 8
    Apr 10 16:28:03.861 PDT:     DOMAIN-LIST,DNS-SERVERS,VENDOR-OPTS,UNKNOWN
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: Option IA-NA(3) is not supported yet
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: Sending ADVERTISE to FE80::5D5E:7EBD:CDBF:2519 on Vlan206
    Apr 10 16:28:03.861 PDT: IPv6 DHCP: detailed packet contents
    Apr 10 16:28:03.861 PDT:   src FE80::21D:E6FF:FEE4:4400
    Apr 10 16:28:03.861 PDT:   dst FE80::5D5E:7EBD:CDBF:2519 (Vlan206)
    Apr 10 16:28:03.861 PDT:   type ADVERTISE(2), xid 8277025
    Apr 10 16:28:03.861 PDT:   option SERVERID(2), len 10
    Apr 10 16:28:03.865 PDT:     00030001001DE6E44400
    Apr 10 16:28:03.865 PDT:   option CLIENTID(1), len 14
    Apr 10 16:28:03.865 PDT:     00010001195FD895F01FAF10689E
    Apr 10 16:28:03.865 PDT:   option STATUS-CODE(13), len 15
    Apr 10 16:28:03.865 PDT:     status code NOADDRS-AVAIL(2)
    Apr 10 16:28:03.865 PDT:     status message: NOADDRS-AVAIL

    Hello,
    you may be hitting the following bug:
    IPv6 Address Assignment Support for IPv6 DHCP Server
    CSCse81385
    Hope this helps
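    A hedged footnote: the debug shows "Option IA-NA(3) is not supported yet", and the pool above only configures prefix delegation. On IOS releases that do support IA-NA address assignment, the pool would also need an address range, along these lines:
    ipv6 dhcp pool VLAN206IPV6
     address prefix 2620:B700:0:12C7::/64
     dns-server 2620:B700:0:1001::53
     domain-name global.bio.com
    !--- "address prefix" serves IA-NA requests; "prefix-delegation" only serves IA-PD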

  • Catalyst 6500 with WS-X6K-SUP2-2GE: IOS and bootloader wiped, stuck in ROMmon SP mode, cannot load IOS via Xmodem

    Hi, I have a Catalyst 6500 with a WS-X6K-SUP2-2GE whose IOS and bootloader images have been wiped out. It starts in ROMmon SP mode, and I can't switch to the RP to download an IOS image using Xmodem. Xmodem shouldn't work in ROMmon SP mode, but it doesn't even give the
    "not executable" message. slot0: and disk0: are not accessible and I can't see the files inside: when I try "dir slot0:" or "dir disk0:" it says the device can't be opened, and when I try to boot from them there's nothing either. What can I do to load an IOS image onto bootflash: or slot0:? Each time I load the image using Xmodem, at the end it gives me *** System received a Software forced crash ***
    signal=0x17, code=0x5, context=0x0
    When I run the commands:
    rommon 1 > boot bootflash:
    boot: cannot determine first file name on device "bootflash:"
    rommon 2 > boot slot0:
    boot: cannot open "slot0:"
    boot: cannot determine first file name on device "slot0:"
    BTW, System Bootstrap, version 7.1.
    I'm looking to format the PCMCIA card as FAT16 on a PC, copy the boot image onto it, and then try to load from the PCMCIA card; if that works, I'll reformat it using the Supervisor Engine 2.
    Anyone have another idea I can use? Thanks in advance.

    This is a potentially complex issue.
    Is this SUP configured to run native IOS or CatOS hybrid?
    While in ROMmon, can you run the "dev" command to see which drives are recognized, and then "dir" the drives that the SUP recognizes?
    Can you provide screen captures as it boots?
    You would be better served by opening a TAC case.

  • Catalyst 6500 with CatOS ISCSI

    Hi, I'm configuring a Catalyst 6500 running CatOS for iSCSI.
    Following the recommendations, I have to configure portfast, jumbo frames, and flow control, and disable unicast storm control.
    - Portfast: on the server and iSCSI SAN ports
         > set spantree portfast
    - Jumbo frames:
         > set port jumbo
    - Flow Control:
         > set port flow control receive desired
    Questions:
    1. Where do I have to configure flow control? Only on the SAN ports and the server NICs, or on the server switch ports too?
    2. Unicast storm control: how can I configure this option?
    Thanks

    We are having the same exact problem. We've done what you've tried, with no luck as well. The strange thing is that in another building we have the same setup, but with a 6148V blade, and that Tandberg has no issues. We're using a 6148AF in the building where we're having problems. We've tried with a 6348 blade and it works fine. I'm thinking it's something with the 6148AF firmware (ver. 8.2(2)).
    Were you able to solve your problem?

  • NX-OS firmware upgrade on Nexus 5548 with Enhanced vPC and dual-active FEX

    Hi All,
    Please tell me how to do an NX-OS firmware upgrade on a Nexus 5548 with Enhanced vPC and dual-active FEX, without downtime for the FEX.
    The servers are connected to the FEX.
    Attached the diagram.

    Hi,
    If the 5500s are layer 2 with vPC running between them, then you can use ISSU to upgrade.
    Here is the doc to follow:
    ISSU Support for vPC Topologies
    An ISSU is completely supported when two switches are paired in a vPC configuration. In a vPC configuration, one switch functions as a primary switch and the other functions as a secondary switch .They both run the complete switching control plane, but coordinate forwarding decisions to have optimal forwarding to devices at the other end of the vPC. Additionally, the two devices appear as a single device that supports EtherChannel (static and 802.3ad) and provide simultaneously data forwarding services to that device.
    While upgrading devices in a vPC topology, you should start with the switch that is the primary switch. The vPC secondary device should be upgraded after the ISSU process completes successfully on the primary device. The two vPC devices continue their control-plane communication during the entire ISSU process (except when the ISSU process resets the CPU of the switch being upgraded).
    This example shows how to determine the vPC operational role of the switch:
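    (The example itself did not survive the archiving; a minimal reconstruction, hedged:)
    switch# show vpc role
    !--- Reports "vPC role : primary" or "vPC role : secondary" for the local switch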
    link:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/upgrade/513_N1_1/n5k_upgrade_downgrade_513.html
    HTH

  • NEXUS 5000: read CPU Load with SNMP

    Hello,
    I tried to read the Nexus 5000 CPU load via:
    cseSysCPUUtilization 1.3.6.1.4.1.9.9.305.1.1.1
    but there is a timeout:
    Timeout: No Response from 10.100.224.16
    Other MIB values, like the system values, read out fine.
    We use SNMPv3.
    Thanks
    ylvie

    Hi Ylvie,
    There are a few sources for CPU utilisation values over SNMP; the classic ones are from CISCO-PROCESS-MIB:
    cpmCPUTotal5secRev    1.3.6.1.4.1.9.9.109.1.1.1.1.6
    cpmCPUTotal1minRev    1.3.6.1.4.1.9.9.109.1.1.1.1.7
    cpmCPUTotal5minRev    1.3.6.1.4.1.9.9.109.1.1.1.1.8
    And there is a newer source in CISCO-SYSTEM-EXT-MIB: cseSysCPUUtilization (1.3.6.1.4.1.9.9.305.1.1.1). Unlike the averaged values from CISCO-PROCESS-MIB, cseSysCPUUtilization returns an un-smoothed value and typically shows more erratic results, so I would recommend the objects from CISCO-PROCESS-MIB, i.e., cpmCPUTotal5secRev, instead.
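    For example, a hedged net-snmp query (the SNMPv3 user and passwords are placeholders):
    snmpget -v3 -l authPriv -u <user> -a SHA -A <authpass> -x AES -X <privpass> 10.100.224.16 1.3.6.1.4.1.9.9.109.1.1.1.1.6.1
    The trailing .1 is the instance index; if it does not respond, walk 1.3.6.1.4.1.9.9.109.1.1.1.1.6 first to find the valid instances.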
    Thanks
    Afroj
