FCoE design

I currently have a Nexus 5010 with 10Gb hosts and fibre-attached storage at location A. At location B, I have an additional Nexus 5010 with a 10Gb host with a CNA that will be using FCoE to connect to storage.
I'd like for the host at location B to connect via FCoE to the fibre-attached storage at location A. What configuration steps do I need to take to make location A's storage visible over FCoE? As I see it, I need to enable FCoE, allow the FCoE VLAN, and tie that VLAN to the VSAN. At that point, should the host at location B be able to see location A's storage? Thanks
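As I understand the steps described above, a minimal NX-OS sketch would look like the following (the VLAN, VSAN, and interface numbers here are assumptions for illustration, not taken from the actual setup):

```
feature fcoe
vsan database
  vsan 100
vlan 100
  fcoe vsan 100
! Server-facing trunk carrying both the data VLAN and the FCoE VLAN
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,100
! Virtual FC interface bound to the physical port
interface vfc1
  bind interface Ethernet1/1
  no shutdown
vsan database
  vsan 100 interface vfc1
```

Note this only brings up FCoE on the local switch; making location A's storage visible to the host at B still requires an FC or FCoE link between the two 5010s carrying that VSAN.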

The document below describes a multihop FCoE solution.
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/tech_eval/mFCoEwp.html
However, they still use dedicated links for the FCoE traffic from the N5K towards the N7Ks and a dedicated vPC for the normal LAN traffic. They even use dedicated VDCs for storage and normal traffic.
Therefore I am not sure whether it is supported to mix the Ethernet traffic with the FCoE traffic on one link.

Similar Messages

  • Office network design ideas..

    Hey all, we are upgrading to a Cisco network and wanted some input on our possible network design...
    Currently we have:
    A Juniper SSG 140 and IDP for our firewall and IDS
    3com (layer2/3) switches for our desktops
    2 Dell PowerConnect 5424 switches for our servers and firewalls
    2 Dell PowerConnect 5424 switches (separate network) for our SAN/VM hosts
    This is what we are thinking of for our next solution
    ASA 5512 for our firewall (I read we could possibly get a 25% performance improvement for user VPN connections?)
    2 WS-C3750x-48t-e (I think this does Layer 2/3) for our desktops
    2 WS-C3750x-48t-e for our firewalls/servers
    2 WS-C3750x-24P-L for our SAN/VM hosts
    The problem is that the different network service providers who are going to implement this for us are giving us different solutions.
    Some suggest 3560X for desktops and 4948 for servers, while others are telling me 3750X for desktops and Nexus 3048 switches for the SAN.
    Some are telling me we can keep SAN + VM + core traffic on the same switches and just separate them with VLANs, while others are telling me we should get separate switches for them.
    Basically, we just want better PERFORMANCE and REDUNDANCY (especially with our core + SAN/VM traffic) without going overboard and spending a ton of money.
    More thoughts:
    We need Layer 2/3 switches for core + SAN
    Do we need 10G ports?
    Let me know your thoughts...

    Hi There,
    the hardware selection actually depends on the network/site topology, number of users, traffic load, and other factors.
    That covers the IP network; for the SAN, do you mean iSCSI, FCoE, or pure FC SAN? These are different things and may change the hardware selection.
    In general, the 3560 is a good access switch, and the 3750 provides the same capabilities with improved performance and support for StackWise (the 3750 is a good option especially if you are planning to stack them).
    L3 is supported on both, but consider the license/image you buy with regard to the features you need.
    Nexus switches are the best fit for the data center as they are designed for data center switching; however, you need to know the port density, 1G or 10G, whether you need any FC SAN, the DC load/capacity, whether any L3 function is required, and the future growth. Then you can decide whether a Nexus 3K or 5K is right for you.
    N5K
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/data_sheet_c78-618603.html
    N3K
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps11541/at_a_glance_c45-648255.pdf
    if you have a network topology with more details of what you need, post it here for more discussion
    hope this helps
    if helpful rate

  • Please shed some light on Data Center design

    Hi,
        I want you guys to recommend what the design should be. I'm familiar with HP blade system. Let me clarify the existing device.
    1. HP Blade with Flex Fabric. It supports FCOE.
    2. MDS SAN switch for the storage
    3. Network Switch for IP network.
    4. HP Storage.
        The HP Blade has 2 interface types, for the IP network (Network Switch) and Fibre Channel (SAN).
       What is the benefit of using a Nexus switch and FCoE with my existing devices? What should a new design with a Nexus switch look like? Please guide me with ideas.
    THX
    Toshi 

    Hi, Toshi:
    Most of these chat boards have become quite boring. Troubleshooting OSPF LSA problems is old news. But I do pop my head in every now and then. Also, there are so many other companies out there doing exciting things in the data center. You have Dell, Brocade, Arista, Juniper, etc. So one runs the risk of developing a myopic view of the world of IT by lingering around this board for too long.
    If you want to use the new B22 FEX for the HP c7000 blade chassis, you certainly can. That means the Nexus will receive the FCoE traffic and leverage its FCF functionality; either separate the Ethernet and FC traffic there, or create a VE-port instantiation with another FCF for multihop deployments. Good luck fighting the SAN team on that one! Another aspect of using the HP B22 is the fact that the FEX is largely plug and play, so you don't have to manage the Flex Fabric switches.
    HTH

  • Best Practice - Flexpod Design

    I am working through a 5548, UCS, and NetApp design. We are using FC, not FCoE. I have followed the FlexPod deployment standard to a "T" but have a couple of questions. First, following our physical layout (EoR), we are placing a pair of 5548s at the end of each row to handle FC within that row (client request). We have various FC devices throughout each row, with UCS in one row, NetApp in another, and so forth. The question I have is in regards to "best practice" with the FlexPod standard. Nowhere have I found a FlexPod design document which shows a cascade/aggregation design using an EoR switch connected to another EoR switch, with a target/initiator separated by two 5548s (NPIV/NPV). Is such a design NOT recommended? Can it be done within the standard? The second question is in regards to the actual configuration. In this mode, TARGET ---- 5548(row1) ----- 5548(row2) ---- Initiator, I assume the first 5548 is in NPV mode, the second in NPIV mode. Correct?
    We have not implemented things in this fashion before, so I am looking for some standards documents/configurations, etc. related to this. Your help is greatly appreciated...

    The link between the NPV-NPIV Core is not an ISL.
    The link between the NPV-NPIV Core is  F-port type. NPV Switch does not run Fibre Channel services, therefore has NO Fibre Channel Domain ID. 
    NP - Node Proxy port type is introduced on the NPV Switch since it sends requests to the NPIV Core for processing and then relays any applicable information to the downstream hosts.
    As far as FLEXPOD this Doc talks about 5548 in NPIV with UCS in NPV mode.
    http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns944/whitepaper__c07-727095.html
    This might not be a full match, but it touches on the features you are discussing.
    I hope this helps.
    Regards,
    Carlos

  • Design question, UCS and Nexus 5k - FCP

    Hi,
    I need some advice from (Mainly a Nexus person); I have drawn and attached the proposed solution (below).
    I am designing a solution with 3 UCS chassis, Nexus 5K and 2X NetApp 3240 (T1 and T2). FC will be used to access disk on the SAN. Also, Non UCS compute will need access to the T2 SAN only. (UCS will access T1 and T2). It is a requirement for this solution that non UCS devices do not connect to the same Nexus switches that the UCS chassis use.
    UCS Compute:
    the 3 chassis will connect to 2 6296 FIs, which will cross-connect to 2 Nexus 5Ks through an FC port channel; the Nexus 5Ks will be configured in NPIV mode to provide access to the SAN. FC from each Nexus 5K to the NetApp controllers will be provided through a total of 4 FC port channels (2 FC member ports per PC), with one from each Nexus 5K going to controller A and the other to controller B.
    Non UCS compute:
    These will connect directly through their HBAs to their own Nexus 5Ks and then to the T2 SAN, these will be zoned to never have access to the T1 SAN.
    Questions:
    1-      As the UCS compute will need to have access to T1, what is the best way to connect the Nexus 5Ks on the LHS in the image below to the Nexus on the RHS (this should be an FC connection)?
    2-      Can Fibre Channel be configured in a vPC domain like Ethernet? Is this a better way for this solution?
    3-      Is FC better than FCoE for this solution? I hear FCoE is still not highly recommended.
    4-      Each NetApp controller is only capable of pushing 20Gbps max, which is why each port channel connecting to each controller has only 2 members. However, I’m connecting 4 port channel members from each Fabric Interconnect (6296) to each Nexus switch. Is this a waste? Remember that connectivity from each FI is also required to the T2 SAN.

    Max,
    What you are implementing is traditional FlexPod design with slight variations.
    I recommend looking at the FlexPod design zone for some additional material if you have not done so yet.
    http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns743/ns1050/landing_flexpod.html
    To answer your questions:
    1) FC and FCoE do not support vPC. If UCS only needs access to T1, then there is no need to have an ISL between sites. If UCS needs access to T1 and T2, then the best option would be to set up VSAN trunking on the N5K and UCS and configure the vHBAs accordingly.
    2) Both should work just fine. If you go with FCoE, then UCS would need to be on the latest version for multi-hop FCoE support.
    3) If you are only worried about storage throughput, then yes, you will never utilize a 40Gb port channel if your source is a 20Gb port channel. What are your projected peak and average loads on this network?
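    The VSAN trunking option mentioned in 1) might be sketched as follows on the N5K (a rough sketch only; the VSAN numbers and vfc ID are hypothetical):

    ```
    vsan database
      vsan 10 name T1
      vsan 20 name T2
    ! Trunking F-port toward the UCS fabric interconnect, carrying both VSANs
    interface vfc10
      switchport mode F
      switchport trunk mode on
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20
    ```

    The vHBAs on the UCS side would then be placed in VSAN 10 or 20 as required.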

  • Fcoe for ethernet AND san

    I'm having a bit of a brain fart....
    I see that you can create a "unified" uplink from the 6248s to an upstream Nexus 5K.  Can I use this single uplink for both VSAN and VLAN traffic, or is the best practice to separate them into an Ethernet uplink and an FCoE SAN uplink?
    Thanks in advance.

    I'm not sure of any immediate disadvantages.  A lot of the design-related guidance to separate LAN and SAN is about pacing the integration of the two networks.
    If you're already running FCoE and you're not migrating from a pure LAN and Fibre Channel environment to FCoE, then I'd run both on a unified link.
    Please rate helpful posts.
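    For reference, a unified uplink of the kind discussed here is typically a single trunk carrying both the data VLAN and the FCoE VLAN, with the vfc bound to the link; a minimal sketch (VLAN 10 for data and VLAN/VSAN 100 for FCoE are assumed values):

    ```
    feature fcoe
    vlan 100
      fcoe vsan 100
    ! One trunk port channel carries LAN and SAN traffic together
    interface port-channel10
      switchport mode trunk
      switchport trunk allowed vlan 10,100
    interface vfc10
      bind interface port-channel10
      no shutdown
    vsan database
      vsan 100 interface vfc10
    ```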

  • Data center design guide

    Hi all,
    is anybody familiar with a good Cisco data center design guide involving Nexus 2000, 5000 & 7000 with FCoE?
    thanks,

    Hi ,
    Check out the below link on Data center design with Nexus switches
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572831-00_Dsgn_Nexus_vPC_DG.pdf
    Hope to Help !!
    Ganesh.H
    Remember to rate the helpful post

  • Best Design practices of SAN with the MDS 9513,MDS 9509 and Brocade 8510

    Hi
          I am searching for the best design to implement Cisco MDS 9513, 9509, Brocade 8510, storage and UCS all combined in one topology. Also, please suggest a tool to compare MDS and Brocade 8510 performance.

    Boomi,
    Both MDS and Brocade will serve the basic features of storage networking. Both can be mixed and matched to achieve redundancy, which you already have. However, if you are looking for a tool or perfmon, there isn't much to compare; you can use IOmeter or Akkori. I see that you have enterprise-level hardware in your setup. I am not sure what other line cards you have installed and what applications are running through them; if you have remote sites (SAN islands), then the real difference in features and best practices can be discussed. For example, IVR, FCIP, iSCSI, FCoE, etc.
    Thanks,
    Nisar
    Sent from Cisco Technical Support iPad App

  • Nexus 7000, 2000, FCOE and Fabric Path

    Hello,
    I have a couple of design questions that I am hoping some of you can help me with.
    I am working on a Dual DC Upgrade. It is pretty standard design, customer requires a L2 extension between the DC for Vmotion etc. Customer would like to leverage certain features of the Nexus product suite, including:
    Trust Sec
    VDC
    VPC
    High Bandwidth Scalability
    Unified I/O
    As always, cost is a major issue and consolidation is encouraged where possible. I have worked on a couple of Nexus designs in the past and have leveraged the 7000, 5000, 2000 and 1000 in the DC.
    The feedback that I am getting back from Customer seems to be mirrored in Cisco's technology roadmap. This relates specifically to the features supported in the Nexus 7000 and Nexus 5000.
    Many large enterprise Customers ask the question of why they need to have the 7000 and 5000 in their topologies as many of the features they need are supported in both platforms and their environments will never scale to meet such a modular, tiered design.
    I have a few specific questions that I am hoping can be answered:
    The Nexus 7000 only supports the 2000 on the M series I/O Modules; can FCOE be implemented on a 2000 connected to a 7000 using the M series I/O Module?
    Is the F Series I/O Module the only I/O Module that supports FCOE?
    Are there any plans to introduce the native FC support on the Nexus 7000?
    Are there any plans to introduce full fabric support (230 Gbps) to the M series I/O module?
    Are there any plans to introduce Fabric path to the M series I/O module?
    Are there any plans to introduce L3 support to the F series I/O Module?
    Is the entire 2000 series allocated to a single VDC or can individual 2000 series ports be allocated to a VDC?
    Is TrustSec only supported on multi-hop DCI links when using the ASR on an EoMPLS pseudowire?
    Are there any plans to introduce TrustSec and VDC to the Nexus 5500?
    Thanks,
    Colm

    Hello Allan
    The only I/O card which cannot co-exist with other cards in the same VDC is the F2, due to its specific hardware implementation.
    All other cards can be mixed.
    Regarding the fabric versions: Fabric-2 gives much higher throughput compared with Fabric-1.
    So in order to get full speed from F2/M2 modules you will need Fab-2 modules.
    Fab-2 modules won't give any advantage to M1/F1 modules.
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-685394.html
    http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/prodcut_bulletin_c25-688075.html
    HTH,
    Alex

  • FCoE and FC

    Hi all
    Perhaps the community could settle an argument that is stopping everyone here from doing any real work.
    Like a lot of shops we're tossing around FCOE and trying to weigh up the pros and cons. This has moved up the agenda as we're now about to fit out a new data centre and want something that's good for 5-7 years.
    The stumbling block we've hit is should we follow what seems to be the trend and go with a pair of Nexus 5000 in the top of each rack for the 'first hop' or wait for FCOE blades to come out for the 9513?
    The 5000 plan gets everything in and cabled up from day one and we're good if FCOE takes off in the next 12 months. The wait-for-MDS plan is attractive to the bean counters but could leave us unable to react to server connectivity demands in the short term.
    What's the opinion round the virtual water cooler?

    DMX 4's. You have soo much money it doesn't matter.. :)
    My major stumbling block is hundreds of enterprise storage ports all at 4 Gbps. 95% of our SAN-connected servers are HP C-class blades, and they are fairly hefty systems and relatively new.
    I have seen no mention at all of CNA's for any of our current infrastructure. I also have core/edge and NPV/NPIV solutions and so far, nothing Nexus wise is worthwhile. But, give it a couple of years and that might change.
    I am sure my infrastructure is similar to most organisations'. Blades running virtual servers seem to be rampant in the data centre. That's really what has bemused me with Brocade: they created their own 8 Gbps HBAs. Has anyone bought one? Does anyone use chassis servers anymore?
    I just spent huge dollars replacing our aging FC switches because accountants say so. I also have huge dollars invested in native fibre channel DR solutions across data centres. What can the FCoE do for me right now?
    The really big question is: Do I want to hand over my nice neat well designed SAN to our network admin?
    Stephen

  • FC/FCoE from N5k to UCS/FI

    I'm working on a new Data Center design, which very closely matches the hardware in the following document:
    http://www.cisco.com/en/US/products/ps9670/products_configuration_example09186a0080c13e92.shtml
    I'll have SANs directly attached to the N5k's, with UCS/FI where we are planning to utilize FCoE downstream to the UCS/FI environment.
    Now, what I am curious about is why in the above document (which was updated in June 2013) there is a separate FCoE port channel and another Ethernet vPC. I am curious about the reasoning behind this. Is there a limitation with FCoE over vPC links, or is it just a matter of keeping the traffic from fabric A away from fabric B?
    Also, I am assuming the document uses an FCoE port-channel as opposed to a san-port-channel for the additional throughput of the 10Gb FCoE links as opposed to the 8Gb FC links.
    I see the disclaimer note about the 2x separate links and the FI operating in NPIV mode from the FC perspective, but that just isn't helping me.
    CCNP, CCIP, CCDP, CCNA: Security/Wireless
    Blog: http://ccie-or-null.net/       

    Hello Stephen,
    One of the reasons why FCoE over vPC (between UCS and N5K) is not supported is that UCS currently allows all VSANs on all uplinks, and they cannot be pruned.
    HTH
    Padma

  • Help in 4001L Switch FCoE Example

    Hi guys,
       Beginning with Release 4.0(1a)N1(1) on the Nexus 5020, if I want to configure FCoE on an interface, I can use the following commands:
       vlan 101
       fcoe vsan 1
       interface Ethernet 1/1
       switchport mode trunk
       switchport trunk native vlan 2
       switchport trunk allowed vlan 2,101
       spanning-tree bpduguard enable
       spanning-tree port type edge trunk
       interface vfc 1
       bind interface Ethernet 1/1
       no shutdown
       vsan database
       vsan 1 interface vfc 1
       In this example, VLAN 101 is used by VSAN 1 and VLAN 2 is used for the LAN traffic from the server.
       Now I need to configure this same server connected not to the Nexus 5020, but to a Nexus 4001L.
       This Nexus 4001L will be connected to the Nexus 5020 by EtherChannel.
       Imagine the server is connected to the Ethernet 1/1 port, and ports Ethernet 1/15 to 1/20 are used in a port channel to connect the Nexus 4001L
       to the Nexus 5020:
       Server --- Ethernet 1/1 --- Nexus 4001L ---- PortChannel 1 (1/15 - 1/20) ---- PortChannel 1 (1/1 - 1/6) Nexus 5020
       Is the configuration below correct?
       In Nexus 4001L
      feature fip-snooping
       vlan 101
         fip-snooping enable
         fip-snooping fc-map XXXXXX
       interface port-channel 1
       interface ethernet 1/15-20
         channel-group 1
       interface port-channel 1
         switchport mode trunk
         switchport trunk allowed vlan 2,101
         fip-snooping port-mode fcf
         no shut
      In Nexus 5020
       interface port-channel 1
       interface ethernet 1/1-6
         channel-group 1
       interface port-channel 1
         switchport mode trunk
         switchport trunk allowed vlan 2,101
         no shut
       If yes, how can I determine the fc-map value I should use in VLAN 101?
       Thanks a Lot
          My Best Regards,
          Andre Lomonaco
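    On the fc-map question at the end: assuming defaults, the FCF (the Nexus 5020 here) uses the standard default FC-Map of 0E:FC:00, and the FIP snooping bridge must be set to match whatever the FCF advertises. A hedged sketch for checking it (exact commands and output vary by release):

    ```
    ! On the Nexus 5020 (the FCF): display the FC-Map currently in use
    show fcoe
    ! If it has never been changed, the default is 0e:fc:00; it can also be set explicitly:
    fcoe fc-map 0e:fc:00
    ```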


  • FCoE Port Channels on Single Nexus 5K - possible?

    I'm looking for information regarding FCoE on a single Nexus 5548. I'm trying to set up a port channel from a NetApp filer's CNA adapters (2 twinax cables). I was told that for some reason port channels do not work in a single Nexus 5K design (i.e. no vPC), but I didn't know if that simply meant without the use of LACP. I've configured it in both fashions, and it seems that the VFCs do not want to come up, meaning that my filers cannot log in to the SAN. This behavior is what was described to me as what would happen if I tried doing a port channel in this way: basically I've bound the VFCs to the port channels, and since the port channels are composed of ports on the same switch, it just doesn't work. Seems odd that this would be the case, though it would normally force me to simply buy a second N5K (sneaky sneaky).
    Any tips?

    That's not my issue. I'm using only one Nexus 5K, meaning that if I want to have a port channel running between the 5K and the NetApp filers, I have to bind the virtual Fibre Channel interfaces to that port channel. I believe that in doing so, when there is more than one link as a member of that port channel, the VFCs simply go down by design and will not come up. For some reason VFCs must be bound to a single physical interface, whether that interface is a single physical port or a port channel that's only configured to have one port in it.
    Either way, I had to configure the NetApp side to be active/passive. I've bound the VFCs to each port (4 in total, so 4 VFCs as well). No port channels are being used on the Nexus side. This is not ideal, but it works.
    I would still like to see some documentation regarding the behavior of the VFCs when presented with multiple links in this way. Has anyone seen any documentation regarding this?
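    For comparison, the arrangement that reportedly does come up on a single 5K is a vfc bound to a one-member port channel; a sketch under that assumption (all interface and ID numbers are made up):

    ```
    ! Single physical member: the vfc binding to this port channel is accepted
    interface Ethernet1/10
      channel-group 110 mode active
    interface port-channel110
      switchport mode trunk
      switchport trunk allowed vlan 10,100
    interface vfc110
      bind interface port-channel110
      no shutdown
    ```

    Adding a second member to port-channel110 would then be the point at which the vfc goes down, per the behavior described above.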

  • N7k N2K and n5k N2K design questions

    Data center with dual N7Ks; we want to add N2Ks for top-of-rack server access. I understand the model where I can set my servers up with a vPC EtherChannel to a pair of 2Ks, with one 2K attached to each 7K. This model appears to meet all of the no-single-attached-device rules of vPC. The question I have is how I can attach traditional server connectivity that doesn't support EtherChannel. In this model, if I attach to a single 2K, or team (without EtherChannel) to both 2Ks, haven't I violated the no-single-attached-device rule of vPC?
    I have a similar issue with the 2232 and N5K model. In order to support FCoE on a 2232, the only supported configuration is to attach the 2232 to a single 5K. In all of the design models I have seen, the server is depicted with a vPC EtherChannel to the two 2232s that are attached to their respective 5Ks. In this design, if my server doesn't support an EtherChannel and I am forced to utilize traditional teaming, I have broken the no-single-attached-device rule of vPC again.
    This no-single-attached-device rule of vPC makes it really hard to utilize the FEX in either of these scenarios. What is the recommendation for connecting servers that don't support EtherChannel in these two models?

    Hi,
    Take a look at the document Nexus 7000 Fex Supported/Not Supported Topologies and you'll see what options you currently have.
    As I mentioned, up to and including NX-OS 6.2, a single FEX can only be connected to a single Nexus 7000. This is what is shown as Figures 8, 9 and 10 in the unsupported topologies section.
    A server can be connected to two different FEX, and those two FEX could be connected to a single Nexus 7000 or two different Nexus 7000. The options are shown in Figures 6 and 7.
    Regards

  • FCoE through LACP or VPC

    Hello,
    I am designing a server with 2 dual-port CNAs (4 ports in total) per server. The server will be connected to 2 Nexus 5K switches which will be configured as a vPC pair.
    What are the options for connecting these 4 CNA ports to the 5Ks? Is it possible to use a single vPC link and configure 4x10G ports to make 40Gbps for Ethernet and FCoE frames? We think it is possible for Ethernet data frames, but each CNA sees the link to the N5K separately, without any knowledge of the vPC links.
    The following links may give a good idea on the supported designs, but each gives an example of 2 CNA ports for the server.
    http://brasstacksblog.typepad.com/brass-tacks/2011/05/eliminating-the-san-air-gap-requirement-with-fcoe-and-vpc.html
    http://bradhedlund.com/2010/12/09/great-questions-on-fcoe-vntag-fex-vpc/
    Also, for the supported and unsupported designs, can you explain the reasoning behind the single-port etherchannel requirement for the vPC? Why can we not dual-attach the CNAs to each N5K separately?
    Thanks in Advance,
    Best Regards,

    You are correct in that you cannot bind all four interfaces of the CNA. Etherchannel in reality is for Ethernet traffic only. When it comes to FC(oE), each interface on the CNA will have a unique WWN; there is no concept of a virtual WWN. So each interface going into a Nexus switch needs to be a one-port etherchannel.
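    The one-port etherchannel described above can be sketched as follows on each N5K (interface and ID numbers are hypothetical; VLAN 10 is assumed to carry data and VLAN 100 FCoE):

    ```
    ! One CNA port lands on its own single-member vPC port channel
    interface Ethernet1/5
      channel-group 105 mode active
    interface port-channel105
      switchport mode trunk
      switchport trunk allowed vlan 10,100
      vpc 105
    ! The vfc binds to that single-member port channel
    interface vfc105
      bind interface port-channel105
      no shutdown
    ```

    Each of the remaining CNA ports would get its own one-member port channel and vfc in the same way.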
