Multicast: duplicated packets on Nexus 7K with vPC and HSRP

Hi guys,
I'm testing a multicast deployment on the lab shown below. The sender and the receiver are connected to the 6500 in two different VLANs: the sender is in VLAN 23 and the receiver in VLAN 500. They reach the 6500 over a trunk link, and there is a vPC between the two Nexus 7Ks and the 6500.
HSRP is also running on VLAN interfaces 23 and 500 on both Nexus switches.
I have configured the minimum needed for PIM-SM with a static RP; the RP is the 3750 above the Nexus pair. The (*,G) and (S,G) states are created correctly.
IGMP snooping is enabled on the 6500 and on both Nexus switches.
I'm using iperf to generate the flow, and NetFlow and SNMP to monitor what happens.
Everything works: the receiver gets the flow and it takes the expected path. My problem is that I see four times more multicast traffic on VLAN interface 500 on both Nexus switches, yet the traffic is only sent once to the receiver (which is the correct behaviour), and the excess traffic does not show up outbound on any other physical interface.
To be precise, I'm sending one flow and both Nexus switches receive it in VLAN 23 (one from the peer link, the other from the 6500), for example 25 packets inbound.
But once the flow is routed into VLAN 500, I see 100 packets outbound on interface VLAN 500 on each Nexus.
When I monitor all physical interfaces, I only see 25 packets outbound on the interface facing the receiver, and the surplus never leaves the box.
I have attached the graphs obtained on one of the Nexus switches for VLAN 23 and VLAN 500. NetFlow reports the same thing in bits/s.
Has anyone seen this before? Any idea what could explain the packet duplication?
Thanks for any comment,
Regards,
Configuration:
Nexus 1: n7000-s1-dk9.5.2.7.bin, 2 SUP1, 1 N7K-M132XP-12, 1 N7K-M148GS-11
Nexus 2: n7000-s1-dk9.5.2.7.bin, 2 SUP1, 1 N7K-M132XP-12, 1 N7K-M148GS-11
6500: s72033-adventerprisek9_wan-mz.122-33.SXI5.bin (12.2(33)SXI5)
3750: c3750-ipservicesk9-mz.122-50.SE5.bin (12.2(50)SE5)
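For reference, this is roughly the minimal PIM-SM/HSRP configuration described above as it would look on one of the Nexus 7Ks. It is only a sketch: the IP addresses, HSRP virtual IPs and the RP address are hypothetical placeholders, not values from the actual lab.
feature pim
feature hsrp
feature interface-vlan
!--- Hypothetical static RP (the 3750) for all multicast groups
ip pim rp-address 10.0.0.1 group-list 224.0.0.0/4
interface Vlan23
  no shutdown
  ip address 10.23.0.2/24
  ip pim sparse-mode
  hsrp 23
    ip 10.23.0.1
interface Vlan500
  no shutdown
  ip address 10.50.0.2/24
  ip pim sparse-mode
  hsrp 500
    ip 10.50.0.1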


Similar Messages

  • Nexus 7000 with VPC and HSRP Configuration

    Hi Guys,
    I would like to know how to implement HSRP with the following setup:
    There are 2 Nexus 7000 connected with VPC Peer link. Each of the Nexus 7000 has a FEX attached to it.
    The server has two connections, one going to the FEX on each Nexus 7K (vPC). The FEXs are not dual-homed; as far as I know that is not currently supported.
    R(A)             R(S)
     |                |
    7K --Peer Link-- 7K
     |                |
    FEX              FEX
    Server connected to both FEX
    The question is: we have two routers, one connected to each Nexus 7K, running HSRP (one active, one standby). How do I configure HSRP on the Nexus switches, and how will traffic be routed from the standby Nexus to the active Nexus (I know HSRP works differently here, since both switches can forward packets)? Will traffic go to the secondary switch and then across the peer link to the active switch and on to the active router? (From what I have read, packets from end hosts that cross the peer link will get dropped.)
    Has anyone implemented this before ?
    Thanks

    Hi Kuldeep,
    If you intend to put those routers on a non-vPC VLAN, you may create a new inter-switch trunk between the N7Ks and allow that non-vPC VLAN on it. However, if they will be on a vPC VLAN, it is best to create two links to the N7K pair and create a vPC; otherwise, configure those ports as orphan ports, which will leverage the vPC peer link.
    HTH
    Jay Ocampo
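    As a rough illustration of the first option, the dedicated non-vPC trunk could look something like this on each N7K; the VLAN and interface numbers are hypothetical placeholders:
    !--- Separate inter-switch trunk for non-vPC VLANs, kept off the vPC peer link
    vlan 900
      name ROUTER-PEERING
    interface Ethernet1/48
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 900
      no shutdown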

  • Peer-Switch with vPC and non-vPC Vlan Port-Channels

    Hi,                 
    in a design guide I have noticed that it is best practice to split vPC and non-vPC VLANs onto different inter-switch port-channels. However, when I try to use the peer-switch function, the port-channel interface carrying the non-vPC VLANs moves into blocking state. The spanning-tree pseudo-information option has no effect. Is peer-switch possible in this kind of topology?
    Greeting,
    Stephan

    I believe it is absolutely possible, specifically because peer-switch and spanning-tree pseudo-information are specific and local to the Cisco Fabric Services running as part of vPC. I personally have a lab with a vPC domain made up of two N5Ks. They are peer-switches with pseudo-information configured, and they run MST on the non-vPC links independently of vPC.
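    For context, a minimal sketch of the peer-switch and pseudo-information pieces on such a pair looks roughly like this; the domain number, VLAN and priority values are placeholders:
    vpc domain 10
      peer-switch
    spanning-tree pseudo-information
      vlan 100 root priority 4096
      vlan 100 designated priority 8192
    The pseudo-information only affects the vPC VLANs, which is consistent with MST running independently on the non-vPC links, as described above.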

  • Deep Packet Inspection (DPI) - coping with it and/or defeating it

    Hi,
    Now that the "DPI Genie" is well out of the bottle (or toothpaste tube) and will never go back to where it came from no matter what,
    I have been looking for ways and means to:
    1- cope with it when it comes time to send/receive emails of minor/modest security concern (maybe everything, since the data miners can be intent on seizing anything about anyone they choose to); and,
    2- for ways to defeat it when it comes time to send very confidential material (legally privileged).
    I have read what I could at http://www.securemac.com/ and even though that site provides credible ideas, I believe it is the REAL-World-Users that could enlighten me (anyone) with genuine and workable solutions.
    So, with utmost humility and focus, I ask those who have considered this more than just an interesting topic to please suggest/offer if you will, the most cost-efficient method(s) for #'s 1 & 2 above.
    Some "joke" of a method I have witnessed is where a sender saves a Word file in TIFF and sends it as an attachment (perhaps with a pw protected file, too). Believe it or not I've seen this from a couple of gov't senders (mind you, they weren't asking me or telling me to get or use a pw).
    No, the application as I see it is to send text and perhaps the odd graphic or two from me to one other; and from one other back to me. And not have either of us jump through hoops of fire or wade through libraries of "how-to" in order to do this.
    For example, I've looked at some proxy site sellers of downloadable (on-board) tunnel type encryption subscriber services; and, looked at sites that say "na, just login to our proxy and to heck with on-board software sellers." -- each has its appeals and doubt-filled approach. Both seem rather price-y for a home office user over the course of 1, 5 or ten years.
    So, is there a Mac OSX-friendly package or method that is not going to be cost-prohibitive? And be able to send/rec/send/rec routinely between my Mac and, say, his or her Microsoft PC?
    Your direction and especially your specifics are very much appreciated.
    THank you,
    Max

    Without 1 reply, can we assume everyone is comfortable with the new rules?
    Someone can OPEN AND READ YOUR MAIL.
    For example, See last paragraph in following link:
    "Using detailed application knowledge, ixDPI can filter for an URL, an e-mail address, an attachment, a mime-type, a login, or a password. These filtering criteria can be combined to isolate specific flows..."
    https://www.dpacket.org/articles/myth-7-all-ip-traffic-can-be-recorded
    See the opening paragraphs of the main page (note: can "tunneling" be cracked, too?)
    https://www.dpacket.org/articles/seven-myths-ip-networks-need-deep-dpi-0
    Anyone running Mac's the least concerned about this?
    We all know about how Comcast got castigated by the FCC, but who's checking up on them and the phone company? Even if checked on, when the "checkers" go home what then, or, how about the new business model of the commercial private service sellers (e.g., PI's for a hostile takeover monitoring executives at home or an "Extreme ex-?" like in Glen Close/Michael Douglas)??
    IF not concerned, then pls suggest how a small office/home office user can also be as comfortable as you?
    Again, if there is a way without paying through the nose, I sure would like some suggestions.
    Thanks

  • Unstable vMotion behavior over DCI with vPC?

    hi out there
    I need some ideas to track down a problem. We have a DC running a VMware ESXi 4.1 cluster (2 x 2 sets of blade servers, one set at each site) with a DR site, interconnected with 4 x 10G fibres over which we have established 2 x 2-port port-channels (Cisco Nexus 5K with vPC): one vPC port-channel with two 10G links for iSCSI and one vPC port-channel, also with two 10G links, for the "non-iSCSI traffic", i.e. everything else, so the iSCSI traffic is fully separated from the rest of the network. This gives us a "simple" Layer 2 DC interconnect with a latency between the sites of ~1 ms, and no errors reported by any of the involved devices. The iSCSI storage consists of two EMC VNX 5500 controllers, one at each site, each with a local SAN array.
    My problem is that from time to time, when we issue a vMotion or clone of a VM between the sites, we get either an extremely slow response (which will probably end in a timeout) or the operation fails with a timeout, for example: "disk clone failed. Canceling Storage vMotion. Storage vMotion clone operation failed. Cause: Connection timed out".
    Any suggestions on how to track this down? It is a bit hard to trace the network connections since it is 10 Gig (we haven't got any sniffer equipment yet that can keep up with a 10 Gig interface). Could there be buffer allocation problems on the Nexus switches (no errors logged; any suggestions on which debug level to use)?
    best regards /ti

    Hi - we have a similar setup, but we use the N5Ks to service the DCI and vPC as purely L2 and then run the L3 on the N7Ks. You need to have all the same VLANs on the vPC as far as I know. You can't fool it, but you might be able to trick something with some Q-in-Q trunks between the two sets of N7Ks.
    best regards /ti

  • Ask the Expert: Different Flavors and Design with vPC on Cisco Nexus 5000 Series Switches

    Welcome to the Cisco® Support Community Ask the Expert conversation.  This is an opportunity to learn and ask questions about Cisco® NX-OS.
    The biggest limitation to a classic port channel communication is that the port channel operates only between two devices. To overcome this limitation, Cisco NX-OS has a technology called virtual port channel (vPC). A pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel attached devices. The two devices that act as the logical port channel endpoint are actually two separate devices. This setup has the benefits of hardware redundancy combined with the benefits offered by a port channel, for example, loop management.
    vPC technology is the main factor for success of Cisco Nexus® data center switches such as the Cisco Nexus 5000 Series, Nexus 7000 Series, and Nexus 2000 Series Switches.
    This event is focused on discussing all possible types of vPC, along with best practices, failure scenarios, Cisco Technical Assistance Center (TAC) recommendations, and troubleshooting.
    Vishal Mehta is a customer support engineer for the Cisco Data Center Server Virtualization Technical Assistance Center (TAC) team based in San Jose, California. He has been working in TAC for the past 3 years with a primary focus on data center technologies, such as the Cisco Nexus 5000 Series Switches, Cisco Unified Computing System™ (Cisco UCS®), Cisco Nexus 1000V Switch, and virtualization. He presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE® certification (number 37139) in routing and switching, and service provider.
    Nimit Pathak is a customer support engineer for the Cisco Data Center Server Virtualization TAC team based in San Jose, California, with a primary focus on data center technologies such as Cisco UCS, the Cisco Nexus 1000V Switch, and virtualization. Nimit holds a master's degree in electrical engineering from Bridgeport University and has CCNA® and CCNP® certifications. He is also working toward a Cisco data center CCIE® certification while pursuing an MBA degree from Santa Clara University.
    Remember to use the rating system to let Vishal and Nimit know if you have received an adequate response. 
    Because of the volume expected during this event, Vishal and Nimit might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Switching & Routing, shortly after the event. This event lasts through August 29, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Gustavo
    Please see my responses to your questions:
    Yes, almost all routing protocols use multicast to establish adjacencies. We are dealing with two different types of traffic: control plane and data plane.
    Control plane: to establish a routing adjacency, the first packet (hello) is punted to the CPU. So in the case of the triangle routed vPC topology described in the operations guide link below, multicast for routing adjacencies will work. The hello packets will be exchanged across all three routers and the adjacencies will be formed over the vPC links.
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_L3_w_vpc_5500platform.html#wp999181
    For the data plane we have two types of traffic: unicast and multicast.
    Unicast traffic will not have any forwarding issues, but because Layer 3 ECMP and the port channel run independent hash calculations, it is possible that Layer 3 ECMP chooses N5k-1 as the Layer 3 next hop for a destination address while the port-channel hashing chooses the physical link toward N5k-2. In this scenario, N5k-2 receives packets from R with the N5k-1 MAC as the destination MAC.
    Sending traffic over the peer-link to the correct gateway is acceptable for data forwarding, but it is suboptimal because it makes traffic cross the peer link when the traffic could be routed directly.
    For that topology, multicast traffic might experience complete loss: when a PIM router is connected to Cisco Nexus 5500 Platform switches in a vPC topology, the PIM join messages are received by only one switch, while the multicast data might be received by the other switch.
    Loop avoidance works a little differently on the Nexus 5000 and the Nexus 7000.
    Similarity: for both products, loop avoidance relies on the VSL bit.
    The VSL bit is set in the DBUS header internal to the Nexus.
    It is not something that is set in the Ethernet packet and can be identified externally. The VSL bit is set on the port ASIC for the port used for the vPC peer link, so if you have Nexus A and Nexus B configured for vPC and a packet leaves Nexus A towards Nexus B, Nexus B will set the VSL bit on the ingress port ASIC. This bit does not traverse the peer link.
    This mechanism is used for loop prevention within the chassis.
    The idea is that if the packet came in on the peer link from the vPC peer, the system assumes the vPC peer has already forwarded it out its vPC-enabled port-channels towards the end device, so the egress vPC interface's port ASIC filters the packet on egress.
    Differences: on the Nexus 5000, when it has to do an L3-to-L2 lookup to forward traffic, the VSL bit is cleared and the traffic is not dropped, unlike on the Nexus 7000 and Nexus 3000.
    It still does loop prevention but the L3-to-L2 lookup is different in Nexus 5000 and Nexus 7000.
    For more details please see below presentation:
    https://supportforums.cisco.com/sites/default/files/session_14-_nexus.pdf
    DCI scenario: if both pairs are Nexus 5000, then separating the L3 and L2 links is not needed.
    But in most scenarios I have seen a pair of Nexus 5000 with a pair of Nexus 7000 over DCI, or two pairs of Nexus 7000 over DCI. If Nexus 7000 are used, then separate L3 and L2 links are definitely required, as described in the presentation linked above.
    Let us know if you have further questions.
    Thanks,
    Vishal

  • Nexus 5K OSPF with vPC

    Hi,
    I know that running IGPs, more specifically OSPF, with 7Ks and vPCs is well documented, but when it comes to the same thing on 5Ks I am still a little confused.
    My topology is:
    5K01 and 5K02 are connected and are vPC peers. I currently have a management network on VLAN 114; both 5Ks have SVIs on it and are currently OSPF neighbors over their vPC using this VLAN.
    I have an MPLS router (service provider PE), which is actually two routers clustered together, so logically it is a single router. The 5Ks will be connecting to this PE router via some switches over a vPC, and it needs to become an OSPF neighbor of both 5Ks.
    Looking at this post:
    http://adamraffe.com/2013/03/08/l3-over-vpc-nexus-7000-vs-5000/
    It suggests that I can just add VLAN 114 to the vPC up to the PE and enable OSPF on the interface on the PE, although this will not support multicast, and I don't really want to restrict myself, as that may be a future requirement.
    What I thought might be a better solution would be to designate a new VLAN, allow it on the vPC up to the PE, use it for the OSPF neighborships between the 5Ks and the PE, and not allow it over the vPC peer link, leaving the 5Ks' own neighborship on VLAN 114.
    Can someone tell me what the best practice/supported topology is here and maybe provide some cisco links?
    Thanks a lot in advance.

    You have to be very careful when configuring L3 services and interfaces while using VPC. 
    Take a look at this document:
    http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
    Also, take a look at this post:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    You can create a VLAN used exclusively for Nexus-to-Nexus routing-protocol peering (iBGP in this example; the same pattern applies to your OSPF case). Use a new 'access' link between the two switches and place it in the new VLAN. Make sure that this VLAN does not traverse the vPC peer link. Then create SVIs on each switch for that VLAN and peer over that link. Then you can create an L3 link on each Nexus to peer with your external neighbors.
    The point you want to make sure you understand is the vPC loop-prevention mechanism, which says: "If a packet is received on a vPC port and traverses the vPC peer link, it is not allowed to egress on a vPC port."
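    A rough sketch of that recommendation, translated to the OSPF case from the original question (all VLAN IDs, interfaces and addresses are hypothetical, and this assumes the 5Ks have the Layer 3 module/license):
    feature ospf
    feature interface-vlan
    router ospf 1
    vlan 200
      name L3-PEERING
    !--- Dedicated non-vPC link between the two 5Ks; do not allow VLAN 200 on the vPC peer link
    interface Ethernet1/32
      switchport mode access
      switchport access vlan 200
    interface Vlan200
      no shutdown
      ip address 10.200.0.1/30
      ip router ospf 1 area 0.0.0.0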

  • VPC on Nexus 5000 with Catalyst 6500 (no VSS)

    Hi, I'm pretty new to the Nexus and UCS world, so I have many questions; I hope you can help me get some answers.
    The diagram below shows the configuration we are looking to deploy. It is built this way because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to the 6500s.
    Our blades in the UCS chassis have Intel dual-port cards, so they do not support full failover.
    The questions I have are:
    - Is this my best deployment choice?
    - vPC depends heavily on the management interface on the Nexus 5000 for the peer-keepalive monitoring, so what is going to happen if the vPC breaks because:
         - one of the 6500s goes down
              - STP?
              - What is going to happen to the EtherChannels on the remaining 6500?
         - the management interface goes down for any other reason
              - which one is going to be the primary Nexus?
    Below is the list of devices involved and the configuration for the Nexus 5000s and 6500s.
    Any help is appreciated.
    Devices
    ·         2 Cisco Catalyst 6506 with two WS-SUP720-3B each (no VSS)
    ·         2 Cisco Nexus 5010
    ·         2 Cisco UCS 6120xp
    ·         2 UCS Chassis
         -    4  Cisco  B200-M1 blades (2 each chassis)
              - Dual 10Gb Intel card (1 per blade)
    vPC Configuration on Nexus 5000
    TACSWN01:
    feature vpc
    vpc domain 5
      reload restore
      reload restore delay 300
      peer-keepalive destination 10.11.3.10
      role priority 10
    !--- Enables vPC, defines the vPC domain and the peer-keepalive peer
    interface ethernet 1/9-10
      channel-group 50 mode active
    !--- Puts the interfaces into Po50
    interface port-channel 50
      switchport mode trunk
      spanning-tree port type network
      vpc peer-link
    !--- Po50 configured as the vPC peer link
    interface ethernet 1/17-18
      description UCS6120-A
      switchport mode trunk
      channel-group 51 mode active
    !--- Associates the interfaces to Po51, connected to UCS6120xp-A
    interface port-channel 51
      switchport mode trunk
      vpc 51
      spanning-tree port type edge trunk
    !--- Associates vPC 51 to Po51
    interface ethernet 1/19-20
      description UCS6120-B
      switchport mode trunk
      channel-group 52 mode active
    !--- Associates the interfaces to Po52, connected to UCS6120xp-B
    interface port-channel 52
      switchport mode trunk
      vpc 52
      spanning-tree port type edge trunk
    !--- Associates vPC 52 to Po52
    !----- Configuration for the connection to the Catalyst 6506s
    interface ethernet 1/1-3
      description Cat6506-01
      switchport mode trunk
      channel-group 61 mode active
    !--- Associates the interfaces to Po61, connected to Cat6506-01
    interface port-channel 61
      switchport mode trunk
      vpc 61
    !--- Associates vPC 61 to Po61
    interface ethernet 1/4-6
      description Cat6506-02
      switchport mode trunk
      channel-group 62 mode active
    !--- Associates the interfaces to Po62, connected to Cat6506-02
    interface port-channel 62
      switchport mode trunk
      vpc 62
    !--- Associates vPC 62 to Po62
    TACSWN02:
    feature vpc
    vpc domain 5
      reload restore
      reload restore delay 300
      peer-keepalive destination 10.11.3.9
      role priority 20
    !--- Enables vPC, defines the vPC domain and the peer-keepalive peer
    interface ethernet 1/9-10
      channel-group 50 mode active
    !--- Puts the interfaces into Po50
    interface port-channel 50
      switchport mode trunk
      spanning-tree port type network
      vpc peer-link
    !--- Po50 configured as the vPC peer link
    interface ethernet 1/17-18
      description UCS6120-A
      switchport mode trunk
      channel-group 51 mode active
    !--- Associates the interfaces to Po51, connected to UCS6120xp-A
    interface port-channel 51
      switchport mode trunk
      vpc 51
      spanning-tree port type edge trunk
    !--- Associates vPC 51 to Po51
    interface ethernet 1/19-20
      description UCS6120-B
      switchport mode trunk
      channel-group 52 mode active
    !--- Associates the interfaces to Po52, connected to UCS6120xp-B
    interface port-channel 52
      switchport mode trunk
      vpc 52
      spanning-tree port type edge trunk
    !--- Associates vPC 52 to Po52
    !----- Configuration for the connection to the Catalyst 6506s
    interface ethernet 1/1-3
      description Cat6506-01
      switchport mode trunk
      channel-group 61 mode active
    !--- Associates the interfaces to Po61, connected to Cat6506-01
    interface port-channel 61
      switchport mode trunk
      vpc 61
    !--- Associates vPC 61 to Po61
    interface ethernet 1/4-6
      description Cat6506-02
      switchport mode trunk
      channel-group 62 mode active
    !--- Associates the interfaces to Po62, connected to Cat6506-02
    interface port-channel 62
      switchport mode trunk
      vpc 62
    !--- Associates vPC 62 to Po62
    vPC Verification
    show vpc consistency-parameters
    !--- Shows the compatibility parameters
    show feature
    !--- Use it to verify that the vpc and lacp features are enabled
    show vpc brief
    !--- Displays information about the vPC domain
    Etherchannel configuration on TAC 6500s
    TACSWC01:
    interface range GigabitEthernet2/38 - 43
      description TACSWN01 (Po61 vPC61)
      switchport
      switchport trunk encapsulation dot1q
      switchport mode trunk
      no ip address
      channel-group 61 mode active
    TACSWC02:
    interface range GigabitEthernet2/38 - 43
      description TACSWN02 (Po62 vPC62)
      switchport
      switchport trunk encapsulation dot1q
      switchport mode trunk
      no ip address
      channel-group 62 mode active

    ihernandez81,
    Between the c1-r1 & c1-r2 there are no L2 links, ditto with d6-s1 & d6-s2.  We did have a routed link just to allow orphan traffic.
    All the c1-r1 & c1-r2 HSRP communications (we use GLBP as well) go from c1-r1 to c1-r2 via hosp-n5k-s1 & hosp-n5k-s2. Port channels 203 & 204 carry exactly the same VLANs.
    The same is the case on the d6-s1 & d6-s2 side, except we converted them to a VSS cluster, so we only have Po203 with 4 x 10Gb links going to the 5Ks (2 from each VSS member to each 5K).
    As you can tell, what we were doing was extending VM VLANs between two data centers prior to the arrival of the 7010s and UCS chassis, which worked quite well.
    If you got onto any 5K you would see two port channels, 203 & 204, going to each 6500; again, when one pair went to VSS, Po204 went away.
    I know, I know, they are not the same things... but if you view the 5Ks like a 3750 stack, how would you hook up a 3750 stack to two 6500s, and if you did, why would you run an L2 link between the 6500s?
    For us, using 4 x 10G ports between the 6509s would have taken ports that were too expensive (we had 6704s), so we used the 5Ks.
    Our blocking link was on one of the links between site 1 & site 2. If we did not have WAN connectivity there would have been no blocking or loops.
    Caution: if you go with 7Ks, beware of the restrictions on combining L2 and L3 over vPCs.
    Better?
    One of the nice things about working with this stuff is that, as long as you maintain L2 connectivity while migrating things, they tend to keep working, unless they really break.

  • NX-OS firmware upgrade on Nexus 5548 with Enhanced vPC and dual active FEX

    Hi All,
    Please tell me how to perform an NX-OS firmware upgrade on a Nexus 5548 with Enhanced vPC and dual active FEX, without downtime for the FEX.
    The servers are connected to the FEX.
    Attached the diagram.

    Hi,
    If the 5500s are Layer 2 only, with vPC running between them, then you can use ISSU to upgrade.
    Here is the doc to follow:
    ISSU Support for vPC Topologies
    An ISSU is completely supported when two switches are paired in a vPC configuration. In a vPC configuration, one switch functions as the primary switch and the other functions as the secondary switch. They both run the complete switching control plane but coordinate forwarding decisions to provide optimal forwarding to devices at the other end of the vPC. Additionally, the two devices appear as a single device that supports EtherChannel (static and 802.3ad) and simultaneously provide data forwarding services to that device.
    When upgrading devices in a vPC topology, you should start with the switch that is the vPC primary. The vPC secondary device should be upgraded after the ISSU process completes successfully on the primary device. The two vPC devices continue their control plane communication during the entire ISSU process (except when the ISSU process resets the CPU of the switch being upgraded).
    This example shows how to determine the vPC operational role of the switch:
    link:
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/upgrade/513_N1_1/n5k_upgrade_downgrade_513.html
    HTH
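    In outline, and with placeholder image names rather than a specific release, the procedure looks like this on each 5500 (primary first):
    !--- Determine the vPC role; run the ISSU on the vPC primary first, then the secondary
    show vpc role
    !--- Check that the upgrade will be non-disruptive before starting
    show install all impact kickstart bootflash:n5000-uk9-kickstart.x.y.z.bin system bootflash:n5000-uk9.x.y.z.bin
    !--- Perform the in-service upgrade
    install all kickstart bootflash:n5000-uk9-kickstart.x.y.z.bin system bootflash:n5000-uk9.x.y.z.bin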

  • Nexus 7000 and 2000. Is FEX supported with vPC?

    I know this was not supported a few months ago, curious if anything has changed?

    Hi Jenny,
    I think the answer will depend on what you mean by is FEX supported with vPC?
    When connecting a FEX to the Nexus 7000, you're able to run vPC from the host interfaces of a pair of FEX to an end system running IEEE 802.1AX (802.3ad) Link Aggregation. This is shown in illustration 7 of the diagram on the post Nexus 7000 Fex Supported/Not Supported Topologies.
    What you're not able to do is run vPC on the FEX network interfaces that connect up to the Nexus 7000, i.e., dual-homing the FEX to two Nexus 7000s. This is shown in illustrations 8 and 9 under the FEX topologies not supported on the same page.
    There's some discussion on this in the forum post DualHoming 2248TP-E to N7K that explains why it's not supported, but essentially it offers no additional resilience.
    From that post:
    The view is that when connecting FEX to the Nexus 7000, dual-homing does not add any level of resilience to the design. A server with dual NICs can attach to two FEX, so there is no need to connect the FEX to two parent switches. A server with only a single NIC can only attach to a single FEX, but given that the FEX is supported by a fully redundant Nexus 7000 (supervisors, fabrics, power, I/O modules, etc.), availability is limited by the single FEX, and so dual-homing does not increase availability.
    Regards
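    For illustration, the supported host vPC across the host interfaces of two single-homed FEX (illustration 7) looks roughly like this; the FEX, interface, VLAN and vPC numbers below are hypothetical, and the same vPC number is configured on both N7Ks towards their respective FEX:
    !--- On N7K-1 (mirror on N7K-2 using its own FEX host interface)
    interface Ethernet101/1/1
      switchport
      switchport access vlan 10
      channel-group 100 mode active
    interface port-channel 100
      switchport access vlan 10
      vpc 100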

  • Routing issue in Nexus 7009 due to vPC or hsrp

    We have two sites. At the first site we have two Nexus 7009 switches (Nexus A & Nexus B), and the other site is a remote site with two 6500 switches (design attached).
    We are using HSRP on the Nexus switches, and Nexus A is the active gateway for all VLANs.
    Users at one of my remote sites (in VLAN 30) are not able to communicate with VLAN 20 at the Nexus site, especially if the host in VLAN 20 takes its forwarding path through Nexus switch B.
    I can ping both VLAN 20 physical addresses and the gateway (VLAN 20 is configured on both Nexus switches with HSRP) from VLAN 30, which is configured on the remote-site 6500 switch.
    OSPF area 0 is the routing protocol running between the two sites.
    VLAN 10 is used as a management VLAN on both Nexus switches and builds the neighborship with the WAN router; this means the WAN router has two neighbors, Nexus A and Nexus B, but Nexus B builds its neighborship via Nexus A, because we have a single link from the WAN router, which terminates on Nexus A.
    There is one Layer 2 switch between Nexus A and the WAN router; on the Nexus A side that switch port is in a vPC because we are planning to pull a second link to Nexus B later.
    All users are connected to an edge switch, and the edge switch has redundant uplinks to Nexus A and B with vPC configured.
    After troubleshooting we observed that if a user in VLAN 20 wants to communicate with VLAN 30 (remote site), and the traffic takes Nexus B as the forwarding path, it gets dropped.
    I ran a tracert from a PC and it shows the route up to the SVI on Nexus B; after that the packets don't seem to find a route, even though the VLAN 30 routes are present in the routing table of Nexus B. We don't have any access lists or firewalls in this path.

    Hi,
    I suspect that in your scenario traffic is being dropped due to the characteristics of vPC. The routing table on Nexus B may hold a next-hop address for the destination IP, but if that next hop is the Nexus A address on VLAN 20, the traffic has to be forwarded across the vPC peer link, and that runs into the vPC loop-prevention rule.
    When you attach a Layer 3 device to a vPC domain, peering of routing protocols using a VLAN that is also carried on the vPC peer link is not supported. If routing-protocol adjacencies are needed between vPC peer devices and a generic Layer 3 device, you must use physical routed interfaces for the interconnection.
    You can configure VLAN interfaces for Layer 3 connectivity on the vPC peer devices to link to Layer 3 of the network for applications such as HSRP and PIM. However, Cisco recommends that you configure a separate Layer 3 link for routing from the vPC peer devices, rather than using a VLAN network interface for this purpose.
    Take a look at the following URL, this article helps to explain the characteristics of vPC and routing over the peer-link:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    Regards
    Allan.
    Hope you find this is helpful.
    Sent from Cisco Technical Support iPad App
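    A rough sketch of the recommended dedicated Layer 3 interconnect, applied on each vPC peer towards the Layer 3 device (interface numbers and addressing are hypothetical, and OSPF area 0 is used to match the scenario above):
    !--- Routed point-to-point link, outside any vPC VLAN, used for the routing-protocol adjacency
    interface Ethernet3/1
      no switchport
      ip address 192.168.255.1/30
      ip router ospf 1 area 0.0.0.0
      no shutdown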

  • Unicast Flooding on Nexus 5020 with ESXi 5 vMotion

    We recently began testing VMware ESXi 5.0 on our production network. After observing some heavy discards (3-10 million at times) on the 10G uplinks FROM our core 6509s TO the Nexus 5Ks, we began some investigation. We started by capturing traffic on vPCs from the Nexus 5Ks to the 6509s and found a tremendous amount of unicast vMotion traffic transmitting from the 6509s to the Nexus 5Ks. Unicast vMotion traffic should never touch the 6509 core switches since it is Layer 2 traffic. We found that our problem was twofold.
    Problem number one was that on the ESXi 5 test cluster we had the vMotion and the management VM kernel NICs in the same subnet. This is a known issue in which ESXi replies back using the management virtual MAC address instead of the vMotion virtual MAC address, so the switch never learns the vMotion virtual MAC address and therefore floods all of the vMotion traffic. We fixed problem number one by creating a new subnet for the vMotion VM kernel NICs, and we also created a new isolated VLAN across the Nexus 5Ks that does not extend to the cores, modifying the vDistributed switch port group as necessary.
    To verify that the vMotion traffic was no longer flooding, we captured traffic locally on the N5K, not using SPAN but simply eavesdropping on the vMotion VLAN from an access port. The testing procedure involved watching the CAM table on the 5K, waiting for the vMotion MAC addresses to age out, then starting a vMotion from one host to another. Doing this, we were able to consistently capture flooded vMotion traffic onto the spectator host doing the captures. The difference from problem one was that the flooding did not include the entire vMotion conversation as before: when vMotioning 1-2 servers we saw anywhere from 10 ms to a full second of flooding, then it would stop. The amount of flooding varied but depended greatly on whether the traffic traversed the vPC between the 5Ks or not. We were able to make the flooding much worse by forcing the traffic across the vPC between the N5Ks.
    Has anyone else observed this behavior with N5Ks or VMware on another switching platform?
    We were able to eliminate the vMotion flooding by pinging both vMotion hosts before beginning the vMotion. It seems that if VMware would setup a ping to verify connectivity between the vMotion hosts before starting the vMotion it would eliminate the flooding.
    A brief description of the network..
    Two 6509 core switches with layer 2 down to two Nexus 5020 running NX-OS version 5.0(3)N2(2b) using 2232PP FEX for top-of-rack.  For testing purposes each ESXi host is dual-homed with one 10G link (CNA) to each N5K through the FEX.  VMware is using vDistributed switch with a test port-group defined for the ESXi 5 boxes.
    For curiosity's sake, we also looked at packet captures from ESX 4.1, where we saw similar unicast flooding, although nowhere near as many packets as with ESXi 5.
    We have a case open with TAC and VMware to track down the issue but were curious if anyone else has observed similar behavior or had any thoughts.
    Thanks
    Cody

    Essentially the fix was to (a) turn off MAC aging on the vMotion VLAN on the 5K, (b) remove the L3 addressing from the vMotion VLAN by not extending it to the 6K, and, for good measure, (c) dedicate 2 x 10G ports per server just for multi-NIC vMotion. These three measures did the trick.
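    For what it's worth, the knob referred to in (a) is the NX-OS mac address-table aging-time setting. Whether it can be scoped to a single VLAN, and whether aging can be disabled outright, varies by platform and release, so treat this as a sketch to verify on your own 5K:
    !--- Raise MAC aging well above the vMotion interval so the vmknic entries do not age out between migrations
    mac address-table aging-time 3600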

  • Nexus 5548UP with 1Gb ether to 10Gb ether on same switch

    I have a Nexus 5548UP switch with a number of systems attached, some of which have 1Gb NICs running various OSes and others that have 10Gb NICs. When I SSH from a 1Gb box to a 1Gb box everything is fine; when I SSH from a 1Gb box to a 10Gb box it does not work, but 10Gb to 10Gb boxes work. The switch is a fresh install and has one VLAN, 51; the first two ports are set up with LACP, and the only difference is the speed setting of 1000 vs 10000 for the attached boxes. Is there some trick or additional configuration needed to make 1Gb-to-10Gb work for SSH and other applications? I am setting up some test boxes now to have a repeatable environment and will update when completed.
    !Command: show running-config
    !Time: Fri Jan  9 18:30:44 2009
    version 5.2(1)N1(4)
    hostname ISC5548B
    no feature telnet
    feature lacp
    feature lldp
    ...... account info removed
    ssh key rsa 2048
    ip domain-lookup
    class-map type qos class-fcoe
    class-map type queuing class-fcoe
      match qos-group 1
    class-map type queuing class-all-flood
      match qos-group 2
    class-map type queuing class-ip-multicast
      match qos-group 2
    class-map type network-qos class-fcoe
      match qos-group 1
    class-map type network-qos class-all-flood
      match qos-group 2
    class-map type network-qos class-ip-multicast
      match qos-group 2
    vrf context management
      ip route 0.0.0.0/0 192.168.52.1
    vlan 1
    vlan 51
      name InfoSec
    port-profile default max-ports 512
    interface port-channel1
      switchport access vlan 51
      speed 1000
    interface Ethernet1/1
      lacp port-priority 500
      switchport access vlan 51
      speed 1000
      channel-group 1 mode active
    interface Ethernet1/2
      switchport access vlan 51
      speed 1000
      channel-group 1 mode active
    interface Ethernet1/3
      switchport access vlan 51
      speed 1000
    interface Ethernet1/4
      switchport access vlan 51
      speed 1000
    interface Ethernet1/5
      switchport access vlan 51
      speed 1000
    interface Ethernet1/6
      switchport access vlan 51
      speed 1000
    interface Ethernet1/7
      switchport access vlan 51
      speed 1000
    interface Ethernet1/8
      switchport access vlan 51
      speed 1000
    interface Ethernet1/9
      switchport access vlan 51
      speed 1000
    interface Ethernet1/32
      switchport access vlan 51
      speed 1000
    interface Ethernet2/1
      switchport access vlan 51
    interface Ethernet2/2
      switchport access vlan 51
    interface Ethernet2/3
      switchport access vlan 51
    interface Ethernet2/4
      switchport access vlan 51
    interface Ethernet2/5
      switchport access vlan 51
    interface Ethernet2/6
      switchport access vlan 51
    interface Ethernet2/7
      switchport access vlan 51
    interface Ethernet2/8
      switchport access vlan 51
    interface Ethernet2/14
      switchport access vlan 51
    interface Ethernet2/15
      switchport access vlan 51
    interface Ethernet2/16
      switchport access vlan 51
    interface mgmt0
      ip address 192.168.52.4/24
    line console
    line vty
    boot kickstart bootflash:/n5000-uk9-kickstart.5.2.1.N1.4.bin
    boot system bootflash:/n5000-uk9.5.2.1.N1.4.bin
    ISC5548B# show int brief
    Ethernet      VLAN    Type Mode   Status  Reason                   Speed     Port
    Interface                                                                    Ch #
    Eth1/1        51      eth  access up      none                       1000(D) 1
    Eth1/2        51      eth  access up      none                       1000(D) 1
    Eth1/3        51      eth  access up      none                       1000(D) --
    Eth1/4        51      eth  access up      none                       1000(D) --
    Eth1/5        51      eth  access up      none                       1000(D) --
    Eth1/6        51      eth  access up      none                       1000(D) --
    Eth1/7        51      eth  access up      none                       1000(D) --
    Eth1/8        51      eth  access up      none                       1000(D) --
    Eth1/9        51      eth  access up      none                       1000(D) --
    Eth1/10       51      eth  access up      none                       1000(D) --
    Eth1/11       51      eth  access up      none                       1000(D) --
    Eth1/12       51      eth  access up      none                       1000(D) --
    Eth1/13       51      eth  access down    Link not connected         1000(D) --
    Eth1/14       51      eth  access down    Link not connected         1000(D) --
    Eth1/15       51      eth  access up      none                       1000(D) --
    Eth1/16       51      eth  access down    Link not connected         1000(D) --
    Eth1/17       51      eth  access down    SFP not inserted           1000(D) --
    Eth1/18       51      eth  access down    SFP not inserted           1000(D) --
    Eth1/28       51      eth  access down    SFP not inserted           1000(D) --
    Eth1/29       51      eth  access down    SFP not inserted           1000(D) --
    Eth1/30       51      eth  access down    SFP not inserted           1000(D) --
    Eth1/31       51      eth  access down    SFP not inserted           1000(D) --
    Eth1/32       51      eth  access down    SFP not inserted           1000(D) --
    Eth2/1        51      eth  access up      none                        10G(D) --
    Eth2/2        51      eth  access up      none                        10G(D) --
    Eth2/3        51      eth  access up      none                        10G(D) --
    Eth2/4        51      eth  access up      none                        10G(D) --
    Eth2/5        51      eth  access up      none                        10G(D) --
    Eth2/6        51      eth  access up      none                        10G(D) --
    Eth2/7        51      eth  access down    SFP not inserted            10G(D) --
    Eth2/8        51      eth  access down    SFP not inserted            10G(D) --
    Eth2/9        51      eth  access down    SFP not inserted            10G(D) --
    Eth2/10       51      eth  access down    SFP not inserted            10G(D) --
    Eth2/11       51      eth  access down    SFP not inserted            10G(D) --
    Eth2/12       51      eth  access down    SFP not inserted            10G(D) --
    Eth2/13       51      eth  access down    Link not connected          10G(D) --
    Eth2/14       51      eth  access down    SFP not inserted            10G(D) --
    Eth2/15       51      eth  access down    Link not connected          10G(D) --
    Eth2/16       51      eth  access down    Link not connected          10G(D) --
    Port-channel VLAN    Type Mode   Status  Reason                    Speed   Protocol
    Interface
    Po1          51      eth  access up      none                      a-1000(D)  lacp
    Port   VRF          Status IP Address                              Speed    MTU
    mgmt0  --           up     192.168.52.4                            100      1500
    ISC5548B# show interface ethernet 1/3
    Ethernet1/3 is up
     Dedicated Interface
      Hardware: 1000/10000 Ethernet, address: 8c60.4f49.818a (bia 8c60.4f49.818a)
      MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
      reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is access
      full-duplex, 1000 Mb/s, media type is 10G
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Rate mode is dedicated
      Switchport monitor is off
      EtherType is 0x8100
      Last link flapped 1d02h
      Last clearing of "show interface" counters 1d20h
      30 seconds input rate 0 bits/sec, 0 packets/sec
      30 seconds output rate 304 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        input rate 0 bps, 0 pps; output rate 200 bps, 0 pps
      RX
        0 unicast packets  0 multicast packets  0 broadcast packets
        0 input packets  0 bytes
        0 jumbo packets  0 storm suppression bytes
        0 runts  0 giants  0 CRC  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        202 unicast packets  87027 multicast packets  1077 broadcast packets
        88306 output packets  7601016 bytes
        0 jumbo packets
        0 output errors  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble 0 output discard
        0 Tx pause
      4 interface resets
    ISC5548B# show interface ethernet 2/1
    Ethernet2/1 is up
     Dedicated Interface
      Hardware: 1000/10000 Ethernet, address: 8c60.4f23.5de0 (bia 8c60.4f23.5de0)
      MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
      reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is access
      full-duplex, 10 Gb/s, media type is 10G
      Beacon is turned off
      Input flow-control is off, output flow-control is off
      Rate mode is dedicated
      Switchport monitor is off
      EtherType is 0x8100
      Last link flapped 1d20h
      Last clearing of "show interface" counters never
      30 seconds input rate 0 bits/sec, 0 packets/sec
      30 seconds output rate 312 bits/sec, 0 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        input rate 0 bps, 0 pps; output rate 200 bps, 0 pps
      RX
        285 unicast packets  0 multicast packets  218 broadcast packets
        503 input packets  32724 bytes
        0 jumbo packets  0 storm suppression bytes
        0 runts  0 giants  0 CRC  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        406 unicast packets  89109 multicast packets  1077 broadcast packets
        90592 output packets  7827146 bytes
        0 jumbo packets
        0 output errors  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble 0 output discard
        0 Tx pause
      1 interface resets

    I have a laptop connected to a port on the switch, and I am currently adding a SUSE 11 SP2 box to another port that has a 10G card for a NIC. Once I finish the install I will post more specific tests using ping and SSH to illustrate.
    Here is the output you requested.
    ISC5548B# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    Group Port-       Type     Protocol  Member Ports
          Channel
    1     Po1(SU)     Eth      LACP      Eth1/1(P)    Eth1/2(P)
    ISC5548B# sh int po1
    port-channel1 is up
      Hardware: Port-Channel, address: 8c60.4f49.8188 (bia 8c60.4f49.8188)
      MTU 1500 bytes, BW 2000000 Kbit, DLY 10 usec
      reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA
      Port mode is access
      full-duplex, 1000 Mb/s
      Input flow-control is off, output flow-control is off
      Switchport monitor is off
      EtherType is 0x8100
      Members in this channel: Eth1/1, Eth1/2
      Last clearing of "show interface" counters never
      30 seconds input rate 560 bits/sec, 0 packets/sec
      30 seconds output rate 3064 bits/sec, 2 packets/sec
      Load-Interval #2: 5 minute (300 seconds)
        input rate 128 bps, 0 pps; output rate 2.74 Kbps, 2 pps
      RX
        578695 unicast packets  11167 multicast packets  1056 broadcast packets
        590918 input packets  760063851 bytes
        0 jumbo packets  0 storm suppression bytes
        0 runts  0 giants  0 CRC  0 no buffer
        0 input error  0 short frame  0 overrun   0 underrun  0 ignored
        0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
        0 input with dribble  0 input discard
        0 Rx pause
      TX
        392417 unicast packets  434693 multicast packets  2984 broadcast packets
        830094 output packets  320257324 bytes
        0 jumbo packets
        0 output errors  0 collision  0 deferred  0 late collision
        0 lost carrier  0 no carrier  0 babble 0 output discard
        0 Tx pause
      1 interface resets

  • Question re. behaviour of single homed FEX with vPC

    Hi Folks,
    I have been looking at configuring Nexus 5Ks with FEX modules.  Referring to the Cisco documentation;
    http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/layer2/513_n1_1/b_Cisco_n5k_layer2_config_gd_rel_513_N1_1_chapter_01001.html
    In Figure 3, showing a single-homed FEX with vPC topology, I'm curious what happens if one of the 5Ks fails. For example, if the 5K on the left-hand side of the diagram fails, do the ports on the attached FEX that the server connects to go down? If not, I would assume the server has no way of knowing that there is no longer a valid path through those links and will continue to use them?
    Many thanks in advance,
    Shane.

    Hello Shane.
    Depending on the type of failure, both N5Ks can take corrective action, and the end host will always know that one of its port-channel members is down.
    For example, if one 5K crashes or is reloaded, all of its connected FEXes will go offline. A FEX is not a standalone switch and cannot work without its "master" switch.
    Also, the links that go from the FEX to the end host will be in vPC mode, which means that all vPC redundancy features/advantages will be present.
    HTH,
    Alex 

  • Nexus 5000 vpc and fabricpath considerations

    Hello community,
    I'm currently in the process of implementing a FabricPath environment which includes Nexus 5548UPs as well as Nexus 7009s.
    NX-OS on the N5Ks is 6.0(2)N1(2).
    Regarding the FabricPath config on the N5K, I wonder what the best practice is for the peer link. Is it necessary to configure the port-channel like below?
    interface port-channel2
      description VPC+ Peer Link
      switchport mode fabricpath
      spanning-tree port type network
      vpc peer-link
    There are several VLANs configured as FabricPath VLANs.
    As I understand it, we can remove the command:
    spanning-tree port type network
    Can anyone confirm this?
    Also I noticed a "cosmetic" problem: on two ports, 1/9 and 1/10, on both N5Ks it isn't possible to execute the "speed" command.
    When the command speed is executed I receive the following error:
    ERROR: Ethernet1/9: Configuration does not match the port capability
    Also please note that after the vPC and FabricPath configuration we do not do a reload.
    Thanks
    Udo

    Hi Simon -
    We have done some testing in the lab on ISSU with FEXes in both active/active and straight-through fashion, and it works.
    Disabling bridge assurance (BA) on the N5K (except on the vPC peer link) is one of the requirements for ISSU.
    In a recent lab test with the following topology, BA was configured on vpc 101 between the N5Ks and the Cat6k. We ran a repeated ping between the SVI interfaces of the c3750 and the Cat6k.
              c3750
                ||
               vPC
                ||
    N5K =====vPC===== N5K
                ||
             vpc 101
                ||
              Cat6k
    When we changed the spanning-tree port type to disable BA, we observed some ping drops, around 20-30.
    I am not sure what your network looks like, but hopefully this gives you some idea about ISSU. As a general recommendation, schedule a change window for changes like this, even for an ISSU.
    regards,
    Michael
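    For reference, the change described above (taking a link out of bridge assurance) amounts to something like the following on the relevant vPC port-channel; Po101 here is the hypothetical interface carrying vpc 101, and as noted it can cause brief ping loss, so do it in a change window:
    interface port-channel 101
      !--- "network" enables bridge assurance on the link; "normal" disables it
      spanning-tree port type normal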
