FCOE over VPC

Hi All,
According to the attached scenario:
Can I make a VFC between two Nexus switches? I don't want another separate dedicated link for storage traffic. I want to send storage traffic only on eth 1/4 from the downstream Nexus 1 to the northbound Nexus 2, i.e. Nexus 1 ---> Nexus 2.
LAN traffic will follow both links in the vPC as usual: Nexus 1 ---> Nexus 2 and Nexus 1 ---> Nexus 3.
Would it work if I create interface vfc 23 on only two Nexus switches, i.e. on Nexus 1 and Nexus 2?
Nexus 1 conf:
interface vfc 23
  bind interface ethernet 1/4   (binding the physical port, not port-channel 23)
vsan database
  vsan 1011
  vsan 1011 interface vfc 23
vlan 1011
  fcoe vsan 1011
interface port-channel 23
  switchport mode trunk
  spanning-tree port type network
interface ethernet 1/4
  channel-group 23 mode active
interface ethernet 1/5
  channel-group 23 mode active
Nexus 2 conf:
interface vfc 23
  bind interface ethernet 1/4
vsan database
  vsan 1011
  vsan 1011 interface vfc 23
vlan 1011
  fcoe vsan 1011
interface port-channel 23
  switchport mode trunk
  spanning-tree port type network
  vpc 23
interface ethernet 1/4
  channel-group 23 mode active
Nexus 3 conf:
interface port-channel 23
  switchport mode trunk
  spanning-tree port type network
  vpc 23
interface ethernet 1/4
  channel-group 23 mode active
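For reference, the design that is generally supported on the N5K keeps the FCoE uplink outside the vPC entirely and binds the vFC to that dedicated link. A minimal sketch (interface, VLAN, and VSAN numbers taken from the post; the feature command and allowed-VSAN trunking line are assumptions):

```
! Dedicated FCoE uplink, kept out of port-channel 23
feature fcoe
vlan 1011
  fcoe vsan 1011
vsan database
  vsan 1011
  vsan 1011 interface vfc 23
interface ethernet 1/4
  ! not a member of the vPC port channel
  switchport mode trunk
  switchport trunk allowed vlan add 1011
interface vfc 23
  bind interface ethernet 1/4
  switchport trunk allowed vsan 1011
  no shutdown
```

The LAN traffic then rides the remaining vPC members while FCoE stays on the point-to-point link toward a single upstream switch, preserving fabric separation.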

Hello Stephen,
One of the reasons why FCoE over vPC (between UCS and N5K) is not supported is that UCS currently allows all VSANs on all uplinks, and they cannot be pruned.
HTH
Padma

Similar Messages

  • UCS - san-port-channel over vpc

    Hi,
    I'm hoping someone can help out with this,
    I have a setup with two Nexus 5548s connected over a vPC peer link and a keepalive. These two Nexus switches have a standard vPC connection to an upstream 6500; this is working fine.
    Then there are multiple native FC connections from the 5548s to 6248s. I know there is the option to configure FC port channels on UCS 2.0, but is it possible to put FC interfaces of the Nexus 5548s into a SAN port channel across a vPC between the two Nexus 5548s?
    So, for example, can I port-channel the following across a vPC, or is a SAN port channel restricted to ports on the same Nexus switch only?
    Fabric A  fc1/31 & fc1/32    uplinked to  -> NX5K-1 port fc1/31 & NX5K-2 port fc1/32 (san-port-channel 10 over a vPC)
    Fabric B  fc1/31 & fc1/32    uplinked to  -> NX5K-1 port fc1/32 & NX5K-2 port fc1/31 (san-port-channel 11 over a vPC)
    Thanks,
    Ray.

    This is not possible. vPC is a Layer 2 Ethernet port channel only; a SAN port channel cannot be built across a vPC.
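    In other words, each 5548 would need its own SAN port channel built only from its local FC ports. A minimal sketch for NX5K-1 (channel number from the post; using both local ports toward one fabric, `channel mode active` and the NPIV feature line are assumptions to match an FI-side FC port channel):

    ```
    ! NX5K-1 only: SAN port channel members must be local to one switch
    feature npiv
    interface san-port-channel 10
      channel mode active
    interface fc1/31
      channel-group 10 force
      no shutdown
    interface fc1/32
      channel-group 10 force
      no shutdown
    ```

    NX5K-2 would carry its own, separate SAN port channel for the other fabric.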

  • Layer 3 peering over VPC+

    Hi, we are doing a customer deployment in which 2 x N7Ks are FabricPath enabled and are doing vPC+ to all the devices that are dual-attached to them. We need to connect ASAs to them, and the customer wants dynamic Layer 3 peering (not static routes).
    I have yet to test this in a lab environment, but will the ASAs see the 2 x N7Ks as two different routing peers (the same as if you connect them to a vPC domain)?
    What would be the best way to interconnect the ASAs with the N7Ks?

    I am afraid this is an unsupported design and may lead to traffic loss when packets need to be switched via the peer link between the two N7Ks.
    A simple design would be to use Layer 2 links between the N7Ks and the ASA, with VLAN interfaces on both N7Ks, then peer the ASA with both of them. Assuming you use two ASAs in active/standby, you still have redundancy if the single link to the active device goes down.
    Oh, one more thing: do some failover testing with the ASA and the dynamic routing protocol. If you use OSPF, get ready for a disappointing surprise.
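    A minimal sketch of that Layer 2 + SVI design on one N7K (VLAN number, addressing, and interface numbers are assumptions; the second N7K mirrors it with its own SVI address):

    ```
    feature interface-vlan
    feature ospf
    vlan 100
      name ASA-PEERING
    interface Ethernet3/1
      description L2 link to ASA
      switchport
      switchport access vlan 100
      no shutdown
    interface Vlan100
      ip address 10.0.100.2/24
      ip router ospf 1 area 0.0.0.0
      no shutdown
    ```

    The ASA then forms an OSPF adjacency with the SVI on each N7K, so routing survives the loss of either switch.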

  • FC/FCoE from N5k to UCS/FI

    I'm working on a new Data Center design, which very closely matches the hardware in the following document:
    http://www.cisco.com/en/US/products/ps9670/products_configuration_example09186a0080c13e92.shtml
    I'll have SANs directly attached to the N5k's, with UCS/FI where we are planning to utilize FCoE downstream to the UCS/FI environment.
    Now, what I am curious about is why, in the above document (which was updated in June 2013), there is a separate FCoE port channel and another Ethernet vPC. Is there a limitation with FCoE over vPC links, or is it just a matter of keeping fabric A traffic away from fabric B?
    I am also assuming the document uses an FCoE port channel, as opposed to a SAN port channel, for the additional throughput of 10Gb FCoE versus 8Gb FC links.
    I see the disclaimer note about the two separate links and the FI operating in NPIV mode from the FC perspective, but it just isn't helping me.
    CCNP, CCIP, CCDP, CCNA: Security/Wireless
    Blog: http://ccie-or-null.net/       

    Hello Stephen,
    One of the reasons why FCoE over vPC (between UCS and N5K) is not supported is that UCS currently allows all VSANs on all uplinks, and they cannot be pruned.
    HTH
    Padma

  • FCoE through LACP or VPC

    Hello,
    I'm designing a server with two dual-port CNAs (four ports in total) per server. The server will be connected to two Nexus 5K switches configured as a vPC pair.
    What are the options for connecting these four CNA ports to the 5Ks? Is it possible to use a single vPC link and configure 4x10G ports to get 40 Gbps for Ethernet and FCoE frames? We think it is possible for Ethernet data frames, but each CNA sees its link to the N5K separately, without any knowledge of the vPC.
    The following links give a good idea of the supported designs, but each shows an example with two CNA ports per server.
    http://brasstacksblog.typepad.com/brass-tacks/2011/05/eliminating-the-san-air-gap-requirement-with-fcoe-and-vpc.html
    http://bradhedlund.com/2010/12/09/great-questions-on-fcoe-vntag-fex-vpc/
    Also, for the supported and unsupported designs, can you explain the reasoning behind the single-port EtherChannel requirement for the vPC? Why can't we dual-attach the CNAs to each N5K separately?
    Thanks in Advance,
    Best Regards,

    You are correct in that you cannot bind all four interfaces of the CNA. EtherChannel is really for Ethernet traffic only. When it comes to FC(oE), each interface on the CNA has a unique WWN; there is no concept of a virtual WWN. So each interface going into a Nexus switch needs to be a one-port EtherChannel.
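    The one-port EtherChannel design described here might look like the following on one of the N5Ks (interface, VSAN, channel, and vPC numbers are assumptions; the second N5K mirrors it with a vFC in the other fabric's VSAN):

    ```
    ! one CNA port in its own single-member port channel
    interface Ethernet1/10
      switchport mode trunk
      channel-group 101 mode active
    interface port-channel 101
      switchport mode trunk
      vpc 101
    ! the vFC is bound to the port channel, not the physical port
    interface vfc101
      bind interface port-channel 101
      no shutdown
    vsan database
      vsan 10
      vsan 10 interface vfc101
    ```

    Ethernet traffic can then hash across the vPC while each fabric's FCoE stays pinned to its own single-member channel.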

  • VFC in VPC

    Hi All,
    I have two Nexus 7018 chassis and one Nexus 5548UP, running vPC. Below the N5K I have an end host with a CNA card running FCoE traffic. From the end host's perspective it is in a vPC with the N5K, and the VFC is also running on the same link.
    But from the N5K I have run another link for VFC traffic, from the N5K to one N7K only, not the other. For the vPC there are separate cross-connected links.
    My question is: why can't we make a VFC over vPC links? When I configured a VFC over the vPC, it did not come up. Why do we have to run another link for FCoE traffic between the 5K and 7K?
    Why can't it run over the vPC link?
    Note: my N7K modules are FCoE capable.

    Duplicate posts. 
    Go HERE.

  • Ask the Expert: Different Flavors and Design with vPC on Cisco Nexus 5000 Series Switches

    Welcome to the Cisco® Support Community Ask the Expert conversation.  This is an opportunity to learn and ask questions about Cisco® NX-OS.
    The biggest limitation to a classic port channel communication is that the port channel operates only between two devices. To overcome this limitation, Cisco NX-OS has a technology called virtual port channel (vPC). A pair of switches acting as a vPC peer endpoint looks like a single logical entity to port channel attached devices. The two devices that act as the logical port channel endpoint are actually two separate devices. This setup has the benefits of hardware redundancy combined with the benefits offered by a port channel, for example, loop management.
    vPC technology is the main factor for success of Cisco Nexus® data center switches such as the Cisco Nexus 5000 Series, Nexus 7000 Series, and Nexus 2000 Series Switches.
    This event is focused on discussing all possible types of vPC, along with best practices, failure scenarios, Cisco Technical Assistance Center (TAC) recommendations, and troubleshooting.
    Vishal Mehta is a customer support engineer for the Cisco Data Center Server Virtualization Technical Assistance Center (TAC) team based in San Jose, California. He has been working in TAC for the past 3 years with a primary focus on data center technologies, such as the Cisco Nexus 5000 Series Switches, Cisco Unified Computing System™ (Cisco UCS®), Cisco Nexus 1000V Switch, and virtualization. He presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE® certification (number 37139) in routing and switching, and service provider.
    Nimit Pathak is a customer support engineer for the Cisco Data Center Server Virtualization TAC team based in San Jose, California, with primary focus on data center technologies, such as Cisco UCS, the Cisco Nexus 1000v Switch, and virtualization. Nimit holds a master's degree in electrical engineering from Bridgeport University and has CCNA® and CCNP® certifications. He is also working on a Cisco data center CCIE® certification while pursuing an MBA degree from Santa Clara University.
    Remember to use the rating system to let Vishal and Nimit know if you have received an adequate response. 
    Because of the volume expected during this event, Vishal and Nimit might not be able to answer every question. Remember that you can continue the conversation in the Network Infrastructure Community, under the subcommunity LAN, Switching & Routing, shortly after the event. This event lasts through August 29, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hello Gustavo
    Please see my responses to your questions:
    Yes, almost all routing protocols use multicast to establish adjacencies. We are dealing with two different types of traffic: control plane and data plane.
    Control plane: to establish a routing adjacency, the first packet (hello) is punted to the CPU. So in the case of the triangle routed vPC topology specified in the Operations Guide link, multicast for routing adjacencies will work. The hello packets will be exchanged across all three routers, and adjacency will be formed over the vPC links.
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/operations/n5k_L3_w_vpc_5500platform.html#wp999181
    Now for the data plane we have two types of traffic: unicast and multicast.
    Unicast traffic will not have any forwarding issues. However, because Layer 3 ECMP and the port channel run independent hash calculations, it is possible that Layer 3 ECMP chooses N5k-1 as the Layer 3 next hop for a destination address while the port-channel hash chooses the physical link toward N5k-2. In this scenario, N5k-2 receives packets from R with the N5k-1 MAC as the destination MAC.
    Sending traffic over the peer link to the correct gateway is acceptable for data forwarding, but it is suboptimal because it makes traffic cross the peer link when it could be routed directly.
    For that topology, multicast traffic might see complete loss: when a PIM router is connected to Cisco Nexus 5500 Platform switches in a vPC topology, the PIM join messages are received by only one switch, while the multicast data might be received by the other switch.
    Loop avoidance works a little differently on the Nexus 5000 and Nexus 7000.
    Similarity: for both products, loop avoidance is possible due to the VSL bit.
    The VSL bit is set in the DBUS header, internal to the Nexus.
    It is not something set in the Ethernet packet that can be identified. The VSL bit is set on the port ASIC for the port used for the vPC peer link: if you have Nexus A and Nexus B configured for vPC and a packet leaves Nexus A towards Nexus B, Nexus B will set the VSL bit on the ingress port ASIC. This is not something that traverses the peer link.
    This mechanism is used for loop prevention within the chassis.
    The idea is that if the packet came in on the peer link from the vPC peer, the system assumes the vPC peer has already forwarded it out its vPC-enabled port channels towards the end device, so the egress vPC interface's port ASIC filters the packet on egress.
    Differences: on the Nexus 5000, when it has to do an L3-to-L2 lookup to forward traffic, the VSL bit is cleared, so the traffic is not dropped, unlike on the Nexus 7000 and Nexus 3000.
    It still does loop prevention, but the L3-to-L2 lookup differs between the Nexus 5000 and Nexus 7000.
    For more details please see below presentation:
    https://supportforums.cisco.com/sites/default/files/session_14-_nexus.pdf
    DCI scenario: if both pairs are Nexus 5000, separation of the L3/L2 links is not needed.
    But in most scenarios I have seen a pair of Nexus 5000 with a pair of Nexus 7000 over DCI, or two pairs of Nexus 7000 over DCI. If Nexus 7000s are used, then separate L3 and L2 links are definitely required, as described in the presentation linked above.
    Let us know if you have further questions.
    Thanks,
    Vishal

  • Increase of priority flow control counters in a FCoE environment

    Hi,
    I need some input about what is normal in an FCoE environment with regard to priority flow control counters.
    I see the RxPPP and TxPPP counters increasing in an end-to-end FCoE environment; however, we don't see high traffic rates to/from the storage array (<1 Gbps), and the customer has not reported slow transfers from storage.
    Setup:
    CNA hosts - FEX2232 - N5K - EMC-VNX
    FCoE end-to-end, No native FibreChannel.
    CNA adapters = Emulex.
    Is this a problem I should look into further, or is it normal?
    I have read the troubleshooting guide.
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/nexus5000/sw/troubleshooting/guide/n5K_ts_fcoe.html
    some output:
    N5K-2# sh inter priority-flow-control
    ============================================================
    Port               Mode Oper(VL bmap)  RxPPP      TxPPP    
    ============================================================
    FEX101-2232 1-8
    Ethernet1/1        Auto Off           0          211994    
    Ethernet1/2        Auto Off           0          0         
    Ethernet1/3        Auto Off           2891830    0         
    Ethernet1/4        Auto Off           6269410    0         
    Ethernet1/5        Auto Off           12109662   0         
    Ethernet1/6        Auto Off           79534      0         
    Ethernet1/7        Auto Off           0          0         
    Ethernet1/8        Auto Off           0          0         
    FEX102-2232 9-16
    Ethernet1/9        Auto Off           0          9994780   
    Ethernet1/10       Auto Off           0          0         
    Ethernet1/11       Auto Off           24678      0         
    Ethernet1/12       Auto Off           0          0         
    Ethernet1/13       Auto Off           4316       0         
    Ethernet1/14       Auto Off           136        0         
    Ethernet1/15       Auto Off           0          0         
    Ethernet1/16       Auto Off           0          0         
    !VNX-FCOE-ATTACHED SP_A
    Ethernet1/19       Auto On  (8)       1888566    10100200  
    !VNX-FCOE-ATTACHED SP_A
    Ethernet1/20       Auto On  (8)       10414603   1367098   
    Ethernet1/23       Auto Off           0          0         
    Ethernet1/24       Auto Off           0          0         
    Ethernet1/25       Auto Off           0          0         
    Ethernet1/26       Auto Off           0          0         
    Ethernet1/27       Auto Off           0          0         
    Ethernet1/28       Auto Off           0          0         
    Ethernet1/30       Auto Off           0          0         
    Ethernet1/32       Auto On  (8)       0          0         
    Ethernet1/38       Auto Off           0          0         
    Ethernet1/39       Auto Off           0          0         
    Ethernet1/40       Auto Off           0          0         
    Ethernet1/41       Auto On  (8)       0          0         
    Ethernet1/42       Auto On  (8)       0          0         
    Ethernet1/43       Auto On  (8)       0          0         
    Ethernet1/44       Auto On  (8)       0          0         
    !CNA atttached hosts.
    Ethernet101/1/1    Auto On  (8)       37109      33155     
    Ethernet101/1/2    Auto On  (8)       120        32336     
    Ethernet101/1/3    Auto On  (8)       274        34404     
    Ethernet101/1/4    Auto On  (8)       1924       64754     
    Ethernet101/1/5    Auto On  (8)       144        14684     
    Ethernet101/1/24   Auto On  (8)       4296788    4466      
    Ethernet101/1/25   Auto On  (8)       104520     22        
    Ethernet101/1/26   Auto On  (8)       838        30824     
    Ethernet101/1/27   Auto On  (8)       796        7770      
    Ethernet101/1/28   Auto On  (8)       13749152   1684      
    Ethernet101/1/29   Auto On  (8)       5912918    1276      
    Ethernet101/1/30   Auto On  (8)       3296026    2292      
    Ethernet101/1/31   Auto On  (8)       0          80        
    Ethernet101/1/32   Auto On  (8)       0          0         
    Ethernet102/1/1    Auto On  (8)       75656      323512    
    Ethernet102/1/2    Auto On  (8)       0          5632      
    Ethernet102/1/3    Auto On  (8)       4278       173828    
    Ethernet102/1/4    Auto On  (8)       0          0         
    Ethernet102/1/28   Auto On  (8)       2872       300046    
    Ethernet102/1/29   Auto On  (8)       28216      11808124  
    Ethernet102/1/30   Auto On  (8)       4792       441340    
    Ethernet102/1/31   Auto On  (8)       0          0         
    Ethernet102/1/32   Auto On  (8)       1040       201214    
    Ethernet1/19       Auto On  (8)       1888566    10100200  
    Ethernet1/20       Auto On  (8)       10414603   1367098   
    Just a one of the servers
    GDC-CORE-SW04# sh inter e101/1/29 priority-flow-control
    ============================================================
    Port               Mode Oper(VL bmap)  RxPPP      TxPPP    
    ============================================================
    Ethernet101/1/29   Auto On  (8)       5912932    1276      
    GDC-CORE-SW04# sh inter e101/1/29 | in pause
        0 Rx pause
        0 Tx pause
    GDC-CORE-SW04# sh inter e101/1/29 priority-flow-control
    ============================================================
    Port               Mode Oper(VL bmap)  RxPPP      TxPPP    
    ============================================================
    Ethernet101/1/29   Auto On  (8)       5913378    1276      
    GDC-CORE-SW04#
    GDC-CORE-SW04# sh inter e101/1/29 | in pause
        0 Rx pause
        0 Tx pause
    VNX
    GDC-CORE-SW04# sh inter priority-flow-control | in Ethernet1/19 ne 1
    Ethernet1/19       Auto On  (8)       1889064    10100536  
    Ethernet1/20       Auto On  (8)       10414603   1367164   
    GDC-CORE-SW04# sh inter e1/19 | in "input rate"
      30 seconds input rate 133346744 bits/sec, 16668343 bytes/sec, 10762 packets/sec
        input rate 96.22 Mbps, 8.31 Kpps; output rate 118.68 Mbps, 8.83 Kpps
    GDC-CORE-SW04# sh inter e1/20 | in "input rate"
      30 seconds input rate 36024752 bits/sec, 4503094 bytes/sec, 2342 packets/sec
        input rate 33.96 Mbps, 2.09 Kpps; output rate 24.95 Mbps, 1.50 Kpps

    Hi, thank you for your answer.
    I actually already enabled the system QoS manually, since we run the 5.0(2)N2 release.
    We will implement FCoE over the 5500s. What I'm worried about is the warning message I posted above on the Nexus 7009s connected to my 5500s after I enabled the system QoS.
    Could it be caused by the DCBX feature? Might I have a problem if the priority-flow-control settings between the N5K and N7K don't match?
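    To compare the negotiated PFC state on both ends of the N5K-to-N7K link, the same commands used earlier in this thread can be run on each side (interface numbers are assumptions):

    ```
    ! run on both switches for the interconnect ports and compare
    ! the Oper state and VL bitmap, plus the DCBX negotiation details
    show interface ethernet 1/19 priority-flow-control
    show system internal dcbx info interface ethernet 1/19
    ```

    A mismatch in the operational PFC state or the no-drop class between the two sides is a common source of such warnings.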

  • VPC+, aka L3 on back-to-back vPC domains

    Hi,
    Please consider this scenario, where L2 VLANs span two data centers and R1-R4 are L2/L3 N7K routers (replacing existing 6Ks).
    (I wish VSS were also available on the N7K; it would make life 10x easier!)
                          R1                  |                     R2
                           ||                    |                     ||
    vPC peer-link  ||    =======MAN=======   ||  vPC peer-link
                           ||                    |                     ||
                          R3                  |                    R4
                                   Site A         Site B
    Attached to R1 and R3 there are servers (dual-attached via 6K access switches) that may need to communicate with other servers in the same VLAN on the other side of the MAN. The VLANs are trunked over the MAN, so that's fine. This traffic can go via R1 or R3, both for L2 (vPC) and for L3 (HSRP vPC enhancements).
    Anyway, there is also a global OSPF domain for inter-VLAN communication and for reaching outside the DCs via other routers attached to the cloud above.
    I've heard there is a kind of enhancement request (or bug? CSCtc71813) to have this kind of back-to-back vPC scenario transparently handle L3 data (should the peer-gateway command also deliver control-plane L3 info?). There are two workarounds available for this design:
    1. Define an additional router-on-a-stick using an extra VDC on each 7K. In this case, for R1 for example, we would use three VDCs: one for admin, one for L2, and one for R1.
    2. Define static routes to tell each 7K how to reach the other 7Ks' L3 next hops.
    a) What is the best workaround to choose in order to smooth a later upgrade to the vPC version that handles this issue?
    b) Are there any more caveats I don't see? I haven't seen any link on CCO, so I am unsure how to proceed with the design.
    c) I would be tempted to think that adding static routes is the better choice, because they would be easier to remove once vPC+ is there. What static routes shall I add? R1 to R2, R1 to R4, and so on? I'm missing the details of this implementation.
    d) How would vPC+ look once (and when?) it is there?
    Thanks for your valuable input in advance.
    G.

    To expand on Lucian's comment, because I'm sure the next thought will be: can I run OSPF over a VLAN and just carry THAT over my vPC? You don't want to do this either.
    We don't support running routing protocols over vPC-enabled VLANs.
    What happens is that your 6500 will form routing adjacencies with each Nexus 7000, let's say 7000-1 and 7000-2. Note my picture below.
    Let's say R1 is trying to send to a network behind R2. R1 is adjacent to 7000-1 and 7000-2, and we have equal-cost paths. CEF chooses 7000-1 to route the packet; however, EtherChannel load balancing chooses the physical link to 7000-2. 7000-2 then needs to switch the packet over the vPC peer link to 7000-1. 7000-1 receives the packet and tries to send it out a vPC member port to R2, but the egress port drops the packet. This happens because we don't allow packets received from a vPC member link and carried over the vPC peer link to be sent out another vPC member link.
    I'd suggest running an L3 link from your 6500 to each Nexus 7000 if you do want to do L3 on it.
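    The suggested dedicated L3 link might be sketched like this on one N7K (OSPF process tag, area, interface, and addressing are assumptions; repeat with a second /30 toward the other N7K):

    ```
    feature ospf
    interface Ethernet3/1
      description routed point-to-point link to 6500
      no switchport
      ip address 10.1.1.1/30
      ip router ospf 1 area 0.0.0.0
      no shutdown
    ```

    Because this interface is routed and carries no vPC VLANs, the adjacency and forwarding path avoid the vPC loop-prevention rule entirely.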

  • Nexus 5K OSPF with vPC

    Hi,
    I know it is well documented using IGPs, more specifically OSPF, with 7Ks and vPCs, but when it comes to the same thing on 5Ks I am still a little confused.
    My topology is:
    5K01 and 5K02 are connected and are vPC peers. I currently have a management network on VLAN 114; both 5Ks have SVIs on it and are currently OSPF neighbors over their vPC using this VLAN.
    I have an MPLS router (service provider PE), which is actually two routers clustered, so logically in this instance it is one router. The 5Ks will be connecting to this PE router via some switches over a vPC, and it needs to become an OSPF neighbor to both 5Ks.
    Looking at this post:
    http://adamraffe.com/2013/03/08/l3-over-vpc-nexus-7000-vs-5000/
    It suggests that I can just add VLAN 114 to the vPC up to the PE and turn on OSPF on the interface on the PE, although this will not support multicast, and I don't really want to restrict myself, as this may be a future requirement.
    What I thought might be a better solution would be to designate a new VLAN, allow it on the vPC up to the PE, and use that for the OSPF neighborships between the 5Ks and the PE, without allowing it over the vPC peer link, leaving the 5Ks' neighborship on VLAN 114.
    Can someone tell me what the best-practice/supported topology is here, and maybe provide some Cisco links?
    Thanks a lot in advance.

    You have to be very careful when configuring L3 services and interfaces while using VPC. 
    Take a look at this document:
    http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
    Also, take a look at this post:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    You can create a VLAN used exclusively for Nexus-to-Nexus iBGP peering. Use a new 'access' link between the two switches and place it in the new VLAN. Make sure that this VLAN does not traverse the vPC peer link. Then create SVIs on each switch for that VLAN and peer over that link. Then you can create an L3 link on each Nexus to peer with your eBGP neighbors.
    The point you want to make sure you understand is the vPC loop-prevention mechanism: "If a packet is received on a vPC port and traverses the vPC peer link, it is not allowed to egress on a vPC port."
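    The steps above can be sketched as follows on Nexus-1 (VLAN, interface, addressing, and AS numbers are all assumptions; Nexus-2 mirrors it with the other /30 address):

    ```
    feature interface-vlan
    feature bgp
    vlan 999
      name IBGP-PEERING
    ! dedicated non-vPC link between the two switches
    interface Ethernet1/32
      switchport mode trunk
      switchport trunk allowed vlan 999
      no shutdown
    ! keep the peering VLAN off the vPC peer link
    interface port-channel 1
      switchport trunk allowed vlan remove 999
    interface Vlan999
      ip address 10.99.99.1/30
      no shutdown
    router bgp 65000
      neighbor 10.99.99.2 remote-as 65000
    ```

    Since VLAN 999 never touches the peer link, traffic between the SVIs is never subject to the vPC loop-prevention drop.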

  • VPC for L3 links

    Hi,
    I have two Cat 6509s working as core switches, mostly on L3 interfaces running OSPF, further connected to the campus distribution (2x6509) and the data centre distribution (2x6509). I have to replace both core switches with two Nexus 7Ks with the same configuration. Is there any possibility that I can use vPC on L3 links? Is it recommended, or what is the way to make both Nexus switches act as a single cluster?
    Sanjay

    To expand on Lucian's comment, because I'm sure the next thought will be: can I run OSPF over a VLAN and just carry THAT over my vPC? You don't want to do this either.
    We don't support running routing protocols over vPC-enabled VLANs.
    What happens is that your 6500 will form routing adjacencies with each Nexus 7000, let's say 7000-1 and 7000-2. Note my picture below.
    Let's say R1 is trying to send to a network behind R2. R1 is adjacent to 7000-1 and 7000-2, and we have equal-cost paths. CEF chooses 7000-1 to route the packet; however, EtherChannel load balancing chooses the physical link to 7000-2. 7000-2 then needs to switch the packet over the vPC peer link to 7000-1. 7000-1 receives the packet and tries to send it out a vPC member port to R2, but the egress port drops the packet. This happens because we don't allow packets received from a vPC member link and carried over the vPC peer link to be sent out another vPC member link.
    I'd suggest running an L3 link from your 6500 to each Nexus 7000 if you do want to do L3 on it.

  • FCoE on 1 gig

    Hi All,
    I know that FCoE should run on lossless, enhanced Ethernet, i.e. 10G and higher. I want to know: is it technically possible to run FCoE on 1 Gig? I mean, can we do such a configuration on Cisco or any other vendor?

    Hi John,
    No Cisco product supports FCoE over 1G. Fibre Channel moves too much data to be constrained to such a slow speed, which is why the industry standard uses 10G.
    Ron

  • Nexus 5596 to join Brocade fabric

    Hi all, I tried to search but didn't find a "real" answer; it is not clear whether it is possible or not.
    Problem: I have always worked with homogeneous Brocade fabrics. Some days ago my company asked me whether it is possible to add a Nexus 5596UP to an existing (6-switch) production Brocade fabric. I underlined "production" because we should not interrupt SAN traffic when joining the fabric, nor should we reboot and/or reconfigure the existing switches.
    I found a lot of info about "interoperability mode" on MDS switches, but nothing on the Nexus.
    Can someone help me? Thanks.

    According to the datasheet
    http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/data_sheet_c78-618603.html
    Native interop modes 1, 2, 3, and 4 are supported!
    Therefore you are OK and have a TAC-supported configuration.
    Fibre Channel and FCoE Features (Requires Storage Services License)
    ●  T11 standards-compliant FCoE (FC-BB-5)
    ●  T11 FCoE Initialization Protocol (FIP) (FC-BB-5)
    ●  Any 10 Gigabit Ethernet port configurable as FCoE
    ●  SAN administration separate from LAN administration
    ●  FCP
    ●  Fibre Channel forwarding (FCF)
    ●  Fibre Channel standard port types: E, F, and NP
    ●  Fibre Channel enhanced port types: VE, TE, and VF
    ●  F-port trunking
    ●  F-port channeling
    ●  Direct attachment of FCoE and Fibre Channel targets
    ●  Up to 240 buffer credits per native Fibre Channel port
    ●  Up to 32 VSANs per switch
    ●  Fibre Channel (SAN) PortChannel
    ●  Native Interop Mode 1
    ●  Native Interop Mode 2
    ●  Native Interop Mode 3
    ●  Native Interop Mode 4
    ●  VSAN trunking
    ●  Fabric Device Management Interface (FDMI)
    ●  Fibre Channel ID (FCID) persistence
    ●  Distributed device alias services
    ●  In-order delivery
    ●  Port tracking
    ●  Cisco N-Port Virtualization (NPV) technology
    ●  N-port identifier virtualization (NPIV)
    ●  Fabric services: Name server, registered state change notification (RSCN), login services, and name-server zoning
    ●  Per-VSAN fabric services
    ●  Cisco Fabric Services
    ●  Diffie-Hellman Challenge Handshake Authentication Protocol (DH-CHAP) and Fibre Channel Security Protocol (FC-SP)
    ●  Distributed device alias services
    ●  Host-to-switch and switch-to-switch FC-SP authentication
    ●  Fabric Shortest Path First (FSPF)
    ●  Fabric binding for Fibre Channel
    ●  Standard zoning
    ●  Port security
    ●  Domain and port
    ●  Enhanced zoning
    ●  SAN PortChannels
    ●  Cisco Fabric Analyzer
    ●  Fibre Channel traceroute
    ●  Fibre Channel ping
    ●  Fibre Channel debugging
    ●  Cisco Fabric Manager support
    ●  Storage Management Initiative Specification (SMI-S)
    ●  Boot from SAN over VPC/EVPC

  • VFC Consideration

    Hi all,
    I want to know why VFC is not supported over vPC. I have one Nexus 5548UP as an access switch and two N5K UP models upstream, in a vPC with the downstream 5548UP, like a triangle. I want to configure a VFC over the vPC, but I believe it is not supported.
    The solution is that I have to make the VFC on different links, other than the vPC member ports, i.e. one-to-one between switches rather than one-to-multiple switches in a vPC.
    Why is VFC not supported over vPC?
    Kindly help...
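    For context, here is a minimal sketch of the workaround described above. The interface and vfc numbers are assumptions for illustration (Eth1/10 as a spare non-vPC port, vfc110, VSAN 1011 per the earlier posts), not taken from a working configuration: the vFC is bound to a dedicated point-to-point link outside the vPC, so FCoE rides a single-homed path to one upstream switch while LAN traffic continues to use the vPC.

```
! Access Nexus 5548UP -- dedicated FCoE uplink, NOT a vPC member port
vlan 1011
  fcoe vsan 1011
vsan database
  vsan 1011

interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 1011
  no shutdown

interface vfc110
  bind interface Ethernet1/10
  no shutdown

vsan database
  vsan 1011 interface vfc110
```

    The same vFC/VSAN configuration would be mirrored on the single upstream switch terminating Eth1/10; the second upstream switch carries Ethernet traffic only.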

    In FCoE, storage (Fibre Channel) traffic is not supported over the vPC peer link. With that in mind, here is a scenario from my data center:
    I have a NetApp filer connected to my downstream N5Ks in a vPC, and a vFC is also running over that vPC. In that case, how does traffic flow over the vPC peer link?
    Can you explain?

  • Vfc Error disabled

    Hi,
    I have a problem connecting a CNA (QLogic 8152) to a Nexus 5010: the network part is working fine, but not the FCoE.
    What I have noticed is that the vfc is down (errdisabled), even though DCBX is working:
    Nex1# sh system internal dcbx info interface ethernet 1/16
    Interface info for if_index: 0x1a00f000(Eth1/16)
    tx_enabled: TRUE
    rx_enabled: TRUE
    dcbx_enabled: TRUE
    DCX Protocol: CEE
    This is the port configuration:
    interface port-channel16
      switchport mode trunk
      vpc 16
      switchport trunk native vlan 501
      switchport trunk allowed vlan 501,810
      spanning-tree port type edge
      flowcontrol receive on
      flowcontrol send on
    interface Ethernet1/16
      switchport mode trunk
      switchport trunk native vlan 501
      switchport trunk allowed vlan 501,810
      spanning-tree port type edge
      flowcontrol receive on
      flowcontrol send on
      channel-group 16 mode active
    Any ideas what the problem can be? I have seen that if I change "channel-group 16 mode active" to "channel-group 16 mode on" the interface comes up, but then network connectivity is lost...
    Br
    Per

    http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/operations/n5k_fcoe_ops.html#wp1080158
    --snip--
    LACP and FCoE To The Host
    Today, when deploying FCoE over a host-facing vPC, the vFC interface is  bound to the port channel interfaces associated with the vPC.  This  requires that the port channel interface be up and forwarding before  FCoE traffic can be switched.  Cisco recommends when running vPC in an  Ethernet environment is to use LACP in order to negotiate the parameters  on both sides of the port channel to ensure that configurations between  both sides is consistent.
    However, if there are inconsistencies in any of the Ethernet  configuration parameters LACP uses to bring up the port channel  interface, both sides of the virtual port channel will remain down.   This means that FCoE traffic from the host is now dependent on the  correct configuration on the LAN/Ethernet side.  When this dependency  occurs, Cisco recommends that you use the static port channel  configuration (channel-group # mode on) when deploying vPC and FCoE to  the same host.
    --snip--
    I'm guessing there's something about the way your CNA / host is handling LACP that caused some kind of mismatch. A packet trace may give a clue. Did you try 'mode passive' as well?
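    Following the Cisco recommendation quoted above, a hedged sketch of what the host-facing side might look like with a static port channel. The Ethernet and port-channel numbers come from the thread; the vfc number and the mapping of FCoE VLAN 810 to VSAN 810 are assumptions for illustration:

```
! Static port channel (no LACP), per the quoted recommendation
interface Ethernet1/16
  channel-group 16 mode on       ! was "mode active" (LACP)

! Bind the vFC to the port-channel interface for host-facing vPC FCoE
interface vfc16
  bind interface port-channel16
  no shutdown

vsan database
  vsan 810 interface vfc16       ! assumed VSAN for FCoE VLAN 810
```

    With a static channel-group, the port channel comes up without LACP negotiation, so an LACP mismatch on the CNA side can no longer keep the vFC down; the trade-off is that misconfigurations are no longer detected by the protocol.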
