Nexus 5000 vPC and FabricPath considerations

Hello community,
I'm currently implementing a FabricPath environment that includes Nexus 5548UPs as well as Nexus 7009s.
NX-OS on the N5Ks is 6.0(2)N1(2).
Regarding the FP configuration on the N5Ks, I wonder what the best practice is for the peer link. Is it necessary to configure the port-channel like below:
interface port-channel2
  description VPC+ Peer Link
  switchport mode fabricpath
  spanning-tree port type network
  vpc peer-link
There are several VLANs configured in FP mode.
As I understand it, we can remove the command:
spanning-tree port type network
Can anyone confirm this?
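For reference, a minimal vPC+ peer-link sketch might look like the following (the domain ID, emulated switch ID, and keepalive address are placeholders, not taken from your setup; please verify against the FabricPath configuration guide for 6.0(2)N1(2)). A port in switchport mode fabricpath runs FabricPath IS-IS rather than spanning tree, which is why spanning-tree port type network has no effect there and can be removed:

```
vpc domain 10
  fabricpath switch-id 101                 ! emulated switch ID, required for vPC+
  peer-keepalive destination 192.168.1.2   ! placeholder mgmt address
!
interface port-channel2
  description vPC+ Peer Link
  switchport mode fabricpath
  vpc peer-link
```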
I also noticed a "cosmetic" problem: on ports 1/9 and 1/10 on both N5Ks it is not possible to execute the "speed" command.
When the speed command is executed I receive the following error:
ERROR: Ethernet1/9: Configuration does not match the port capability
Also, please note that after the vPC and FP configuration we did not do a reload!
Thanks
Udo

Hi Simon -
I have done some testing in the lab on ISSU with FEXes in both Active/Active and Straight-Through mode, and it works.
Disabling Bridge Assurance (BA) on the N5K (except on the vPC peer link) is one of the requirements for ISSU.
In a recent lab test with the topology below, BA was configured on vpc 101 between the N5Ks and the Cat6k. We ran a repeated ping between the SVI interfaces of the c3750 and the Cat6k.
          c3750
            ||
           vPC
            ||
    N5K =====vPC===== N5K
            ||
         vpc 101
            ||
          Cat6k
When we changed the port type to disable BA, we observed some ping drops, around 20-30 packets.
I am not sure what your network looks like, but hopefully this gives you some idea about ISSU. As a general recommendation, schedule a change window for changes like this, and certainly for ISSU.
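As context: BA runs on a link when the port type is network (with bridge assurance globally enabled, which is the default), so disabling it is just a port-type change. A sketch of the change described above, using the vpc 101 port-channel from the topology:

```
interface port-channel101
  ! was: spanning-tree port type network (BA active)
  spanning-tree port type normal    ! disables Bridge Assurance on this vPC
```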
regards,
Michael

Similar Messages

  • Nexus 5548UP VPC and/or VRRP problem

    Hi, I have two 5548UPs + L3 card with the LAN_ENTERPRISE_SERVICES_PKG and FC licenses.
    These two Nexus are the core of my network.
    Eight stacks of 2960S switches are connected to both NX with an EtherChannel formed by two SX-1G or two SR-10G links.
    I've checked the config and made a lot of tests, and everything works fine. BUT, two days after people started working in the new building, about half of the PCs couldn't even reach the default gateway (the Nexus VRRP address).
    I turned off VRRP and it worked for a few minutes.
    The problem disappears if I shut down one of the links to NX01 or NX02.
    I followed the destination MAC of one affected PC and the ARP table looks OK, but I guess the problem is related to ARP table corruption anyway.
    system image file is:   bootflash:///n5000-uk9.5.2.1.N1.1a.bin
    Thanks in advance!
    Guido./
    interface Vlanxx
      no shutdown
      ip address 10.xx.xx.1/24
      ip ospf passive-interface
      ip router ospf 1 area 0.0.0.0
      ip dhcp relay address xxxxx
      vrrp 80
        address 10.xx.xx.1
    ! Actually it is in shutdown
    interface port-channel55
      switchport mode trunk
      switchport trunk allowed vlan 1-300,303-4094
      ip dhcp snooping trust
      speed 10000
      vpc 55
    interface port-channel111
      switchport mode trunk
      switchport trunk allowed vlan 1-224
      ip dhcp snooping trust
      spanning-tree port type network
      speed 10000
      vpc peer-link

    Looks like you have configured the same IP on the physical interface and for the virtual address. Is this a typo, or is it actually configured on the device?
    !----------- NX01 ----------------------------------------------
    interface Vlan80
      no shutdown
      ip address 10.xx.80.1/24
      ip ospf passive-interface
      ip router ospf 1 area 0.0.0.0
      ip dhcp relay address xxxxx
      vrrp 80
        address 10.xx.80.1
    ! Actually it is in shutdown
    !----------- NX02 ----------------------------------------------
    !NX02
    interface Vlan80
      no shutdown
      ip address 10.xx.80.2/24
      ip ospf passive-interface
      ip router ospf 1 area 0.0.0.0
      ip dhcp relay address x
      vrrp 80
        address 10.x.80.1
    Also: "Peer Gateway : Disabled".
    Peer-gateway is optional but can be turned on so that both peers forward traffic addressed to either peer's gateway MAC.
    Thanks
    Ajay
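    To illustrate Ajay's point, here is a sketch of how the VRRP configuration could look with distinct interface addresses and a shared virtual address (the addresses and priority are placeholders; the "xx" octets are kept as in the original post):

    ```
    !----------- NX01 ------------------
    interface Vlan80
      ip address 10.xx.80.2/24
      vrrp 80
        address 10.xx.80.1   ! virtual IP only; not assigned to either SVI
        priority 120
        no shutdown          ! VRRP groups are shut by default on NX-OS
    !----------- NX02 ------------------
    interface Vlan80
      ip address 10.xx.80.3/24
      vrrp 80
        address 10.xx.80.1
        no shutdown
    ```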

  • Nexus 5000 vPC suspended during reload delay period

    Hi ,
    after reloading one vPC peer switch, the box comes up and all vPC member ports on it stay in suspended state until the reload delay timer expires.
    Unfortunately the links of the vPC member ports are already up. This behaviour causes problems when we connect a Cisco UCS FI with an LACP port-channel to a vPC on the N5Ks.
    Because the link of the suspended port is up, the FI detects the port as up and running and, because of missing LACP BPDUs, puts it into individual state. At this point the FI has two uplinks: one port-channel and one individual port. After 30 seconds the FI starts re-pinning the servers over these two uplinks, but the individual port is not in forwarding state on the reloaded N5K until the reload delay timer expires.
    So during this period all the servers pinned to the individual port are blackholed.
    Possible workarounds:
    1. Create a pin group for the port-channel and pin all servers to it, so that if one channel member goes to individual state, no server is pinned to that individual port. This could be a solution.
    2. Configure the port-channel on the FI for "suspend individual". Unfortunately I could not find a way to achieve this. It would prevent the individual port from being considered a possible uplink port, so no pinning to it would happen.
    3. Find a way to keep the link itself down on the suspended vPC member ports during the delay restore time. (In my opinion this would be the best approach.)
    I am not sure if configuring "suspend individual" on the vPC on the N5K would help.
    any other ideas?
    Hubert
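    For what it's worth, the N5K-side knobs related to workaround 3 would be along these lines (the timer value and port-channel number are placeholders, and availability of "lacp suspend-individual" depends on the NX-OS release, so treat this as a sketch to check against your version's configuration guide):

    ```
    vpc domain 5
      delay restore 360           ! seconds to keep vPC member ports suspended after reload
    interface port-channel51
      lacp suspend-individual     ! suspend members that do not receive LACP PDUs
    ```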

    What I really want is a command I can use to prevent VPC from turning off ports at all.  I'd much rather have an active-active situation than have my entire network go down just because the primary VPC peer rebooted. VPC is not designed correctly to deal with that situation.  And yes, it has happened.  Multiple times with different VPC keepalive setups.

  • Ask the Expert: Different Flavors and Design with vPC on Cisco Nexus 5000 Series Switches

    Welcome to the Cisco® Support Community Ask the Expert conversation.  This is an opportunity to learn and ask questions about Cisco® NX-OS.
    The biggest limitation to a classic port channel communication is that the port channel operates only between two devices. To overcome this limitation, Cisco NX-OS has a technology called virtual port channel (vPC). A pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel attached devices. The two devices that act as the logical port channel endpoint are actually two separate devices. This setup has the benefits of hardware redundancy combined with the benefits offered by a port channel, for example, loop management.
    vPC technology is the main factor for success of Cisco Nexus® data center switches such as the Cisco Nexus 5000 Series, Nexus 7000 Series, and Nexus 2000 Series Switches.
    This event is focused on discussing all possible types of vPC, along with best practices, failure scenarios, Cisco Technical Assistance Center (TAC) recommendations, and troubleshooting.
    Vishal Mehta is a customer support engineer for the Cisco Data Center Server Virtualization Technical Assistance Center (TAC) team based in San Jose, California. He has been working in TAC for the past 3 years with a primary focus on data center technologies, such as the Cisco Nexus 5000 Series Switches, Cisco Unified Computing System™ (Cisco UCS®), Cisco Nexus 1000V Switch, and virtualization. He presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE® certification (number 37139) in routing and switching, and service provider.
    Nimit Pathak is a customer support engineer for the Cisco Data Center Server Virtualization TAC team based in San Jose, California, with a primary focus on data center technologies such as Cisco UCS, the Cisco Nexus 1000V Switch, and virtualization. Nimit holds a master's degree in electrical engineering from Bridgeport University and has CCNA® and CCNP® certifications. He is also working toward a Cisco data center CCIE® certification while pursuing an MBA degree from Santa Clara University.
    Remember to use the rating system to let Vishal and Nimit know if you have received an adequate response. 
    Because of the volume expected during this event, Vishal and Nimit might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Switching & Routing, shortly after the event. This event lasts through August 29, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Gustavo
    Please see my responses to your questions:
    Yes, almost all routing protocols use multicast to establish adjacencies. We are dealing with two different types of traffic: control plane and data plane.
    Control plane: to establish a routing adjacency, the first packet (hello) is punted to the CPU. So in the case of the triangle routed vPC topology specified in the Operations Guide link, multicast for routing adjacencies will work. The hello packets will be exchanged across all 3 routers and adjacencies will be formed over the vPC links.
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_L3_w_vpc_5500platform.html#wp999181
    Now for the data plane we have two types of traffic: unicast and multicast.
    Unicast traffic will not have any forwarding issues, but because Layer 3 ECMP and the port-channel run independent hash calculations, it is possible that Layer 3 ECMP chooses N5k-1 as the Layer 3 next hop for a destination address while the port-channel hash chooses the physical link toward N5k-2. In this scenario, N5k-2 receives packets from R with the N5k-1 MAC as the destination MAC.
    Sending traffic over the peer link to the correct gateway is acceptable for data forwarding, but it is suboptimal because it makes traffic cross the peer link when it could be routed directly.
    For that topology, multicast traffic might see complete loss, because when a PIM router is connected to Cisco Nexus 5500 Platform switches in a vPC topology, the PIM join messages are received by only one switch, while the multicast data might be received by the other switch.
    Loop avoidance works a little differently on the Nexus 5000 and Nexus 7000.
    Similarity: on both products, loop avoidance relies on the VSL bit.
    The VSL bit is set in the DBUS header internal to the Nexus.
    It is not something that is set in the ethernet packet that can be identified. The VSL bit is set on the port asic for the port used for the vPC peer link, so if you have Nexus A and Nexus B configured for vPC and a packet leaves Nexus A towards Nexus B, Nexus B will set the VSL bit on the ingress port ASIC. This is not something that would traverse the peer link.
    This mechanism is used for loop prevention within the chassis.
    The idea is that if the packet came in on the peer link from the vPC peer, the system assumes the vPC peer has already forwarded it out its vPC-enabled port-channels towards the end device, so the egress vPC interface's port ASIC will filter the packet on egress.
    Differences: on the Nexus 5000, when it has to do an L3-to-L2 lookup to forward traffic, the VSL bit is cleared, so the traffic is not dropped, unlike on the Nexus 7000 and Nexus 3000.
    It still does loop prevention, but the L3-to-L2 lookup differs between the Nexus 5000 and Nexus 7000.
    For more details please see below presentation:
    https://supportforums.cisco.com/sites/default/files/session_14-_nexus.pdf
    DCI scenario: if both pairs are Nexus 5000s, then separation of the L3/L2 links is not needed.
    But in most scenarios I have seen a pair of Nexus 5000s with a pair of Nexus 7000s over DCI, or two pairs of Nexus 7000s over DCI. If Nexus 7000s are used, then separate L3 and L2 links are definitely required, as described in the presentation linked above.
    Let us know if you have further questions.
    Thanks,
    Vishal

  • VPC on Nexus 5000 with Catalyst 6500 (no VSS)

    Hi, I'm pretty new to the Nexus and UCS world, so I have many questions that I hope you can help answer.
    The diagram below shows the configuration we are looking to deploy. We designed it this way because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to the 6500s.
    The blades in our UCS chassis have Intel dual-port cards, so they do not support full failover.
    Questions I have are.
    - Is this my best deployment choice?
    - vPC depends heavily on the management interface on the Nexus 5000 for peer-keepalive monitoring, so what is going to happen if the vPC breaks because:
         - one of the 6500s goes down
              - STP?
              - What is going to happen with the EtherChannels on the remaining 6500?
         - the management interface goes down for any other reason
              - which one is going to be the primary Nexus?
    Below is the list of devices involved and the configuration for the Nexus 5000s and 6500s.
    Any help is appreciated.
    Devices
    ·         2 Cisco Catalyst 6506 with two WS-SUP720-3B each (no VSS)
    ·         2 Cisco Nexus 5010
    ·         2 Cisco UCS 6120xp
    ·         2 UCS Chassis
         -    4  Cisco  B200-M1 blades (2 each chassis)
              - Dual 10Gb Intel card (1 per blade)
    vPC Configuration on Nexus 5000
    TACSWN01
    TACSWN02
    feature vpc
    vpc domain 5
    reload restore
    reload restore   delay 300
    peer-keepalive destination 10.11.3.10
    role priority 10
    !--- Enables vPC, define vPC domain and peer   for keep alive
    int ethernet 1/9-10
    channel-group 50   mode active
    !--- Put Interfaces on Po50
    int port-channel 50
    switchport mode   trunk
    spanning-tree port   type network
    vpc peer-link
    !--- Po50 configured as Peer-Link for vPC
    inter ethernet 1/17-18
    description   UCS6120-A
    switchport mode   trunk
    channel-group 51   mode active
    !--- Associates interfaces to Po51 connected   to UCS6120xp-A  
    int port-channel 51
    switchport mode trunk
    vpc 51
    spanning-tree port type edge trunk
    !--- Associates vPC 51 to Po51
    inter ethernet 1/19-20
    description   UCS6120-B
    switchport mode   trunk
    channel-group 52   mode active
    !--- Associates interfaces to Po52 connected to UCS6120xp-B
    int port-channel 52
    switchport mode trunk
    vpc 52
    spanning-tree port type edge trunk
    !--- Associates vPC 52 to Po52
    !----- CONFIGURATION for Connection to   Catalyst 6506
    Int ethernet 1/1-3
    description   Cat6506-01
    switchport mode   trunk
    channel-group 61   mode active
    !--- Associate interfaces to Po61 connected   to Cat6506-01
    Int port-channel 61
    switchport mode   trunk
    vpc 61
    !--- Associates vPC 61 to Po61
    Int ethernet 1/4-6
    description   Cat6506-02
    switchport mode   trunk
    channel-group 62   mode active
    !--- Associate interfaces to Po62 connected   to Cat6506-02
    Int port-channel 62
    switchport mode   trunk
    vpc 62
    !--- Associates vPC 62 to Po62
    feature vpc
    vpc domain 5
    reload restore
    reload restore   delay 300
    peer-keepalive destination 10.11.3.9
    role priority 20
    !--- Enables vPC, define vPC domain and peer   for keep alive
    int ethernet 1/9-10
    channel-group 50   mode active
    !--- Put Interfaces on Po50
    int port-channel 50
    switchport mode   trunk
    spanning-tree port   type network
    vpc peer-link
    !--- Po50 configured as Peer-Link for vPC
    inter ethernet 1/17-18
    description   UCS6120-A
    switchport mode   trunk
    channel-group 51   mode active
    !--- Associates interfaces to Po51 connected   to UCS6120xp-A  
    int port-channel 51
    switchport mode trunk
    vpc 51
    spanning-tree port type edge trunk
    !--- Associates vPC 51 to Po51
    inter ethernet 1/19-20
    description   UCS6120-B
    switchport mode   trunk
    channel-group 52   mode active
    !--- Associates interfaces to Po52 connected to UCS6120xp-B
    int port-channel 52
    switchport mode trunk
    vpc 52
    spanning-tree port type edge trunk
    !--- Associates vPC 52 to Po52
    !----- CONFIGURATION for Connection to   Catalyst 6506
    Int ethernet 1/1-3
    description   Cat6506-01
    switchport mode   trunk
    channel-group 61   mode active
    !--- Associate interfaces to Po61 connected   to Cat6506-01
    Int port-channel 61
    switchport mode   trunk
    vpc 61
    !--- Associates vPC 61 to Po61
    Int ethernet 1/4-6
    description   Cat6506-02
    switchport mode   trunk
    channel-group 62   mode active
    !--- Associate interfaces to Po62 connected   to Cat6506-02
    Int port-channel 62
    switchport mode   trunk
    vpc 62
    !--- Associates vPC 62 to Po62
    vPC Verification
    show vpc consistency-parameters
    !--- show compatibility parameters
    Show feature
    !--- Use it to verify that vpc and lacp features are enabled.
    show vpc brief
    !--- Displays information about vPC Domain
    Etherchannel configuration on TAC 6500s
    TACSWC01
    TACSWC02
    interface range GigabitEthernet2/38 - 43
    description   TACSWN01 (Po61 vPC61)
    switchport
    switchport trunk   encapsulation dot1q
    switchport mode   trunk
    no ip address
    channel-group 61   mode active
    interface range GigabitEthernet2/38 - 43
    description   TACSWN02 (Po62 vPC62)
    switchport
    switchport trunk   encapsulation dot1q
    switchport mode   trunk
    no ip address
    channel-group 62   mode active

    ihernandez81,
    Between c1-r1 & c1-r2 there are no L2 links; the same goes for d6-s1 & d6-s2. We did have a routed link just to allow orphan traffic.
    All the c1-r1 & c1-r2 HSRP communications (we use GLBP as well) go from c1-r1 to c1-r2 via hosp-n5k-s1 & hosp-n5k-s2. Port-channels 203 & 204 carry exactly the same VLANs.
    The same is true on the d6-s1 & d6-s2 side, except we converted them to a VSS cluster, so we only have po203, with 4 x 10Gb links going to the 5Ks (2 from each VSS member to each 5K).
    As you can tell, what we were doing was extending VM VLANs between 2 data centers prior to the arrival of the 7010s and UCS chassis, which worked quite well.
    From any 5K you would see 2 port-channels (203 & 204) going to each 6500; again, when one pair went to VSS, po204 went away.
    I know, I know, they are not the same thing... but if you view the 5Ks like a 3750 stack: how would you hook up a 3750 stack to 2 6500s, and if you did, why would you run an L2 link between the 6500s?
    For us, using 4 x 10G ports between the 6509s was too expensive (we had 6704s), so we used the 5Ks.
    Our blocking link was on one of the links between site1 & site2. If we did not have WAN connectivity there would have been no blocking or loops.
    Caution: if you go with 7Ks, beware of the inability to do L2/L3 via vPCs.
    Better?
    One of the nice things about working with this stuff is that, as long as you maintain L2 connectivity, things you migrate tend to keep working, unless they really break.

  • 6500-VSS and NEXUS 56XX vPC interoperability

    Hello, is it possible to establish a port-channel between a pair of Cisco 6500s running VSS mode and a pair of Nexus 5000s running vPC? The design would be back-to-back: VSS -- Port-Channel -- vPC.
    I also want to support L2 and L3 flows between the two pairs.
    I have read many forums but I am not sure it works.
    Is such a design, if it works, supported by Cisco?
    Thanks a lot for your help.

    Hi Tlequertier,
    We have VSS 6509Es with Sup 2Ts and 6908 modules. These have a 40Gb/s (4 x 10Gb/s) uplink to our Nexus 5548UP vPC switches.
    So we have a fully meshed EtherChannel between the 4 physical switches (2 x N5548UP and 2 x 6509E).
    Kind regards,
    Tim

  • SAN Port-Channel between Nexus 5000 and Brocade 5100

    I have a Nexus 5000 running in NPV mode connected to a Brocade 5100 FC switch using two FC ports on a native FC module in the Nexus 5000. I would like to configure these two physical links as one logical link using a SAN Port-Channel/ISL-Trunk. An ISL trunking license is already installed on the Brocade 5100. The Nexus 5000 is running NX-OS 4.2(1), the Brocade 5100 Fabric OS 6.20. Does anybody know if this is a supported configuration? If so, how can this be configured on the Nexus 5000 and the Brocade 5100? Thank you in advance for any comments.
    Best regards,
    Florian

    I tried that, and I could see the status light on the ports come on, but it still showed not connected.
    I configured another switch (a 3560) with the same config and the same fiber layout, and the connection came up on it. I just can't seem to get it working on the 4506. Could it be something with the supervisor? Could it be wanting to use the 10Gb port instead of the 1Gb ports?

  • Nexus 7000 with VPC and HSRP Configuration

    Hi Guys,
    I would like to know how to implement HSRP with the following setup:
    There are 2 Nexus 7000s connected with a vPC peer link. Each Nexus 7000 has a FEX attached to it.
    The server has two connections, one going to the FEX on each Nexus 7K (vPC). The FEXes are not dual-homed; as far as I know that is not currently supported.
    R(A)                R(S)
     |                    |
    7K ----Peer Link---- 7K
     |                    |
    FEX                 FEX
    Server connected to both FEX
    The question: we have two routers, one connected to each Nexus 7K, running HSRP (one active, one standby). How do I configure HSRP on the Nexus switches, and how will traffic be routed from the standby Nexus switch to the active Nexus switch? (I know HSRP works differently here, as both peers can forward packets.) Will the traffic go to the secondary switch and then via the peer link to the active switch and then to the active router? (From what I have read, packets from end hosts that arrive via the peer link will get dropped.)
    Has anyone implemented this before ?
    Thanks

    Hi Kuldeep,
    If you intend to put those routers on a non-vPC VLAN, you may create a new inter-switch trunk between the N7Ks and allow that non-vPC VLAN on it. However, if they will be on a vPC VLAN, it is best to create two links to the N7K pair and create a vPC; otherwise, configure those ports as orphan ports, which will leverage the vPC peer link.
    HTH
    Jay Ocampo
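    As a rough illustration (VLAN, addresses, and priority are placeholders, not taken from this thread), HSRP on a vPC pair is configured per SVI just as usual; the vPC-specific behaviour is that the HSRP standby peer also forwards traffic arriving on its own vPC legs:

    ```
    ! N7K-1; N7K-2 is identical except for the interface address and priority
    feature hsrp
    interface Vlan100
      ip address 10.1.100.2/24
      hsrp 100
        ip 10.1.100.1     ! virtual gateway address used by the hosts
        priority 110
    ```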

  • Nexus 5548UP - HSRP and vPC, tracking required?

    Hi,
    We have two Nexus 5548UPs that are vPC and HSRP peers.
    I've had some feedback that I should use the tracking function to shut the vPC down in the case of a Layer 3 problem; the thing is, I'm not sure it's required. This article recommends implementing tracking when your L2 peer link and L3 interfaces are on the same module (which they are in my case): http://www.cisco.com/en/US/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
    But this article says not to use tracking: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/design_guide_c07-625857.pdf
    Has anyone got any real-world experience and can offer some feedback? I don't mind putting it in; I just want to understand why.
    Thanks,
    Nick.

    Hi Nick,
    There are two kinds of tracking that can be used in a Nexus environment:
    HSRP tracking and vPC object tracking.
    When a single line card carries the vPC peer link (and the uplinks), vPC object tracking is recommended.
    HSRP tracking is used to track the L3 uplinks to the core.
    Using vPC with HSRP/VRRP object tracking may lead to traffic blackholing when the tracked object is triggered,
    so it is better to use a separate L3 inter-switch link instead of HSRP tracking.
    Hope this helps.
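    For reference, vPC object tracking (the option recommended above when the peer link and uplinks share one line card) is configured roughly as follows; the interface names and object numbers are placeholders:

    ```
    track 1 interface Ethernet1/1 line-protocol      ! L3 uplink
    track 2 interface port-channel10 line-protocol   ! vPC peer link
    track 10 list boolean or
      object 1
      object 2
    vpc domain 1
      track 10    ! vPC is suspended only if all tracked objects go down
    ```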

  • Multicast: duplicated packets on nexus 7k with vpc and HSRP

    Hi guys,
    I'm testing a multicast deployment in the lab shown below. The sender and the receiver are connected to the 6500 in two different VLANs: the sender is in VLAN 23 and the receiver in VLAN 500. They are connected to the 6500 with a trunk link. There is vPC between the two Nexus 7Ks and the 6500.
    Furthermore, HSRP is running on VLAN interfaces 23 and 500 on both Nexus.
    I have configured the minimum needed to use PIM-SM with a static RP. The RP is the 3750 above the Nexus. The (*,G) and (S,G) states are created correctly.
    IGMP snooping is enabled on the 6500 and the two Nexus.
    I'm using iperf to generate my flow, and NetFlow and SNMP to monitor what happens.
    Everything works: my receiver receives the flow and it takes the correct route. My problem is that I see four times more multicast traffic on VLAN interface 500 on both Nexus, but this traffic is only sent once to the receiver (which is the correct behaviour), and the excess traffic is not seen outbound on any other physical interface.
    Indeed, I'm sending one flow, and the two Nexus receive it (one from the peer link and the other from the 6500) in VLAN 23 (for example, 25 packets inbound).
    But when the flow is routed into VLAN 500, there are 100 packets outbound on each VLAN 500 interface on each Nexus.
    And when monitoring all physical interfaces, I only see 25 packets outbound on the interface connected to the receiver; the excess never leaves the box.
    I have attached the graphs I obtained on one of the Nexus for VLAN 23 and VLAN 500. NetFlow says the same thing in bits/s.
    Has anyone already seen this? Any idea about the duplication of the packets?
    Thanks for any comment,
    Regards,
    Configuration:
    Nexus 1: n7000-s1-dk9.5.2.7.bin, 2 SUP1, 1 N7K-M132XP-12, 1 N7K-M148GS-11
    Nexus 2: n7000-s1-dk9.5.2.7.bin, 2 SUP1, 1 N7K-M132XP-12, 1 N7K-M148GS-11
    6500: s72033-adventerprisek9_wan-mz.122-33.SXI5.bin (12.2(33)SXI5)
    3750: c3750-ipservicesk9-mz.122-50.SE5.bin (12.2(50)SE5)


  • Nexus 5000 - Odd Ethernet interface behavior (link down inactive)

    Hi Guys,
    This may sound really trivial, but it is very odd behavior.
    - We have a server connected to 2 Nexus 5000s (for resiliency)
    - When there is no config on the Ethernet interfaces whatsoever, the interface is UP/UP and there is a minimal amount of traffic on the link, e.g.
    Ethernet1/16 is up
      Hardware: 1000/10000 Ethernet, address: 000d.ece7.85d7 (bia 000d.ece7.85d7)
      Description: shipley-p1.its RK14/A13
      MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is access
      full-duplex, 10 Gb/s, media type is 1/10g
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Rate mode is dedicated
      Switchport monitor is off
      Last link flapped 00:00:07
      Last clearing of "show interface" counters 05:42:32
      30 seconds input rate 0 bits/sec, 0 packets/sec
      30 seconds output rate 96 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        input rate 0 bps, 0 pps; output rate 8 bps, 0 pps
      RX
        0 unicast packets  0 multicast packets  0 broadcast packets
        0 input packets  0 bytes
        0 jumbo packets  0 storm suppression packets
        0 runts  0 giants  0 CRC  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        0 unicast packets  163 multicast packets  0 broadcast packets
        163 output packets  15883 bytes
        0 jumbo packets
        0 output errors  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble
        0 Tx pause
      1 interface resets
    - As soon as I configure the link as an access port, the link goes down, flagged "inactive", e.g.
    sh int e1/16
    Ethernet1/16 is down (inactive)
      Hardware: 1000/10000 Ethernet, address: 000d.ece7.85d7 (bia 000d.ece7.85d7)
      Description: shipley-p1.its RK14/A13
      MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is access
      auto-duplex, 10 Gb/s, media type is 1/10g
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Rate mode is dedicated
      Switchport monitor is off
      Last link flapped 05:38:03
      Last clearing of "show interface" counters 05:41:33
      30 seconds input rate 0 bits/sec, 0 packets/sec
      30 seconds output rate 0 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        input rate 0 bps, 0 pps; output rate 0 bps, 0 pps
      RX
        0 unicast packets  0 multicast packets  0 broadcast packets
        0 input packets  0 bytes
        0 jumbo packets  0 storm suppression packets
        0 runts  0 giants  0 CRC  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        0 unicast packets  146 multicast packets  0 broadcast packets
        146 output packets  13083 bytes
        0 jumbo packets
        0 output errors  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble
        0 Tx pause
      0 interface resets
    - This behavior is seen on both 5Ks
    - I've tried using a different set of ports, changing SFPs, and changing the fibre cabling, to no avail
    - I can't seem to understand this behavior: why would configuring the port cause the link to go down?
    - If anyone has experienced this before, or could shed some light on this behavior, it would be appreciated.
    sh ver
    Cisco Nexus Operating System (NX-OS) Software
    TAC support: http://www.cisco.com/tac
    Copyright (c) 2002-2010, Cisco Systems, Inc. All rights reserved.
    The copyrights to certain works contained herein are owned by
    other third parties and are used and distributed under license.
    Some parts of this software are covered under the GNU Public
    License. A copy of the license is available at
    http://www.gnu.org/licenses/gpl.html.
    Software
      BIOS:      version 1.2.0
      loader:    version N/A
      kickstart: version 4.2(1)N1(1)
      system:    version 4.2(1)N1(1)
      power-seq: version v1.2
      BIOS compile time:       06/19/08
      kickstart image file is: bootflash:/n5000-uk9-kickstart.4.2.1.N1.1.bin
      kickstart compile time:  4/29/2010 19:00:00 [04/30/2010 02:38:04]
      system image file is:    bootflash:/n5000-uk9.4.2.1.N1.1.bin
      system compile time:     4/29/2010 19:00:00 [04/30/2010 03:51:47]
    thanks
    Sheldon

    I had an identical issue.
    Two interfaces on two different FEXes were INACTIVE. I have two Nexus 5596s in vPC and A/A FEXes.
    I also use the config-sync feature.
    The very same configuration was applied to other ports on other FEXes, and those were working with no problems.
    interface Ethernet119/1/1
      inherit port-profile PP-Exchange2003
    I checked the VLAN status associated with this profile and it was active (of course it was; other ports were OK).
    I solved it by removing the port profile from the port and re-applying it... voila, the port changed state to up!
    Very strange.
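    For anyone hitting the same symptom, the workaround described above is just a remove/re-apply of the port profile on the affected interface (names taken from the example above):

    ```
    interface Ethernet119/1/1
      no inherit port-profile PP-Exchange2003
      inherit port-profile PP-Exchange2003
    ```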

  • UCS C-Series VIC-1225 to Nexus 5000 setup

    Hello,
    Hello,
    I have two Nexus 5000s set up with a vPC peer link. I also have a Cisco C240 M3 server with a VIC-1225 card that will be running ESX 5.1, plus four 2248 fabric extenders. I have been searching for best-practice information on how to set up this equipment. The Nexus switches are already running, so it's mostly a question of connecting the C240 and the VIC-1225 to them. I assume it's better to connect directly to the switches rather than to the fabric extenders, in order to minimize hops?
    All the documentation I have found involves setup/configuration with Fabric Interconnects, which I don't have and have been told I don't need. Does anyone have any info on this, and can you point me in the right direction to set this up correctly?
    More specifically, how should I connect the VIC-1225 card to the Nexus switches? Just create a regular vPC/port-channel to the Nexuses, using LACP set to active?
    Do I need to make any configuration changes on the VIC card via the CIMC on the C240 server to make this work?

    Hello again, I'm stuck.
    This is what I have done: I created the vPC between my ESX host and my two Nexus 5000 switches, but it doesn't seem to come up:
    S02# sh port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    Group Port-       Type     Protocol  Member Ports
          Channel
    4     Po4(SD)     Eth      LACP      Eth1/9(D)
    vPC info:
    S02# sh vpc 4
    vPC status
    id     Port        Status Consistency Reason                     Active vlans
    4      Po4         down*  success     success                    -
    vPC config:
    interface port-channel4
      switchport mode trunk
      switchport trunk allowed vlan 20,27,30,50,100,500-501
      spanning-tree port type edge trunk
      vpc 4
    interface Ethernet1/9
      switchport mode trunk
      switchport trunk allowed vlan 20,27,30,50,100,500-501
      spanning-tree port type edge trunk
      channel-group 4 mode active
    I'm unsure what I must configure on the Cisco C240 M3 (ESX host) side to make this work. I only have the two default interfaces (eth0 and eth1) on the VIC-1225 installed in the ESX host, and both have the VLAN mode set to TRUNK.
    Any ideas on what I am missing?
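    One thing worth checking (an assumption, since the vSwitch config isn't shown): a standard vSphere vSwitch does not run LACP, so with `channel-group 4 mode active` the N5K side never completes negotiation and the member port stays suspended. If the host uses a standard vSwitch with the load-balancing policy set to "Route based on IP hash", the switch side needs a static channel instead, e.g. on each N5K:

    ```
    interface Ethernet1/9
      no channel-group 4
      channel-group 4 mode on
    ```

    Keeping LACP (mode active) would instead require a vSphere Distributed Switch with an LACP-enabled uplink group on the host side.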
    Message was edited by: HDA

  • What are best practices for connecting asa to nexus 5000

    Just trying to get a feel for the best way to connect redundant ASAs to redundant Nexus 5000s.
    Using a vPC VLAN is fine, but then running a routing protocol over it isn't supported, so putting static routes on the 5000s works; however, the 5000 doesn't support IP SLA yet, so you can't really stop distributing the default route if your internet goes down. Just looking for what is recommended.
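    A minimal sketch of the static-route approach (the next-hop address here is made up): point the default at the ASA failover pair's shared inside IP, which stays with the active unit across a failover, so the route itself doesn't need to change:

    ```
    ! 10.1.1.1 = ASA inside interface IP, shared by the active/standby pair
    ip route 0.0.0.0/0 10.1.1.1
    ```

    As the post notes, without IP SLA tracking the 5000 keeps this route up even if the internet path behind the ASA fails.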

    You want to test a RAC upgrade on a non-RAC database. If you ask me, that is a risk, but it depends on many things:
    Application configuration: if your application is configured for RAC, FAN, etc., you cannot test it on non-RAC systems.
    Cluster upgrade: if your standalone database is RAC One Node, you can probably test your cluster upgrade there. If you have a non-RAC database, you will not be able to test the cluster upgrade or CRS.
    Database upgrade: there are differences when you upgrade a RAC vs. a non-RAC database which you will not be able to test.
    I think the best way for you is to convert your standalone database to a RAC One Node database and test that; it will take you close to a multi-node RAC.

  • Trunking on Nexus 5000 to Catalyst 4500

    I have 2 devices, one on each end of a point-to-point link.  One side has a Nexus 5000, the other end a Catalyst 4500.  We want a trunk port on both sides to allow a single VLAN for the moment.  I have not worked with Nexus before.  Could someone look at the port configurations and let me know if they look OK?
    nexus 5000
    interface Ethernet1/17
      description
      switchport mode trunk
      switchport trunk allowed vlan 141
      spanning-tree guard root
      spanning-tree bpdufilter enable
      speed 1000
    Catalyst 4500
    interface GigabitEthernet3/39
    description
    switchport trunk encapsulation dot1q
    switchport trunk allowed vlan 141
    switchport mode trunk
    speed 1000
    spanning-tree bpdufilter enable
    spanning-tree guard root

    Thanks guys, we found the issue.  The Catalyst is on my side and the Nexus is on the side of the hosting center.  The hosting center moved the connection to a different Nexus 5000 and it came right up.  We dropped the spanning-tree guard root.
    It was working on the previous Nexus once we set the native VLAN to 141, so we thought it was the point-to-point link dropping the tags.
    The hosting center engineer thinks it might have to do with the vPC peer-link loop prevention on the previous Nexus.
    Anyway, it is working the way we need it to.
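    For reference, a plainer interconnect config (assuming only the native-VLAN mismatch actually needed fixing): drop root guard and BPDU filter, and pin the native VLAN to 141 on both ends so untagged frames land in the right VLAN, something like:

    ```
    interface Ethernet1/17
      switchport mode trunk
      switchport trunk native vlan 141
      switchport trunk allowed vlan 141
      speed 1000
    ```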

  • The meaning of Interface Ethernet250/1 under the Nexus 2000 is connected to Nexus 5000 switch

    Dear all,
       Recently, I prepared and deployed a network monitoring system to monitor our new-generation Nexus network.  Using snmpwalk to query interface information from the Nexus 5000 switch (one Nexus 2000 is connected to it via FlexLink), I found that besides the normal Nexus 5000 and 2000 ports (ifName Ethernet1/1, Ethernet1/2, ... Ethernet190/1/1, Ethernet190/1/2, ...), a series of interfaces with ifName Ethernet250/1, Ethernet250/2, ... appear in the interface SNMP tree.   Logged into the Nexus 5000, the show interface command only returns information on the normal interfaces, not on Ethernet250/1, ...
       Would someone know what these are (is Ethernet250/1 a logical interface like a port channel or a VLAN?) and how to monitor them?  Thanks in advance.
    HC Wong

    I've not seen that myself. Could it perhaps be a VPC (Virtual Portchannel)?
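    One way to narrow it down on the 5K itself is to map SNMP ifIndex values back to interface names; NX-OS has a command for this (output format varies by release):

    ```
    show interface snmp-ifindex | include Eth250
    ```

    Matching the ifIndex reported by snmpwalk against this output should reveal which logical or internal interface Ethernet250/x corresponds to.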
