Nexus as a Storage Switch?

Hi Guys
With the new Unified Ports concept available on the Nexus 5K, can we say that the Nexus 5K is as good a storage switch (one to which SAN boxes can be directly connected) as the MDS 9124/9148...
There is no clarity on this, and there is a lot of speculation in the channel community. It would help if Cisco came out with a response to dispel the doubts.
I also see a Nexus Advanced SAN License for the Nexus 7K. Does this mean that the N7K can also be used as an MDS 9500?
Thanks
Sumesh

The N7K also has FC switching code, but only for FCoE-connected devices. The N7K doesn't support FC 4/8G SFPs; you could, however, connect SAN FCoE front-end ports and servers with CNAs to the N7K and do all zoning there.
I guess it would be possible to connect some 5548s over FCoE to the N7K and use it as your FC core, with the N5Ks as your FC edge. All SAN-related configuration and FCoE devices must live in a dedicated storage VDC on the N7K, and you must have an F1/F2 line card to support FCoE.
An MDS 9500 with the FCoE card could also connect to the N7K for FC switching. For example, if your FC core is a 9500, you could connect hosts with CNAs and a SAN array with FCoE front-end ports to the N7K, and all of the legacy FC connections would stay on the 9500. The 9500s and the N7K could then connect together via the 9500's FCoE card.
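For illustration, carving out that dedicated storage VDC looks roughly like this (a minimal sketch from memory of NX-OS 6.x syntax; the VDC name and interface range are hypothetical, so verify against the N7K FCoE configuration guide):
install feature-set fcoe
! run the above once from the default VDC, then create the storage VDC
vdc FCOE type storage
  allocate interface Ethernet3/1-8
switchto vdc FCOE
  feature-set fcoe
  feature npiv
! npiv only matters if edge switches attach in NPV mode
The F1/F2 ports you allocate are then managed entirely from the storage VDC, which also speaks to the ownership question below: the SAN team can own that VDC outright.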
Lots of cool ways to unify LAN/SAN these days. All depends on what you have now and what the long term plans are.
The biggest challenge I have faced with trying to get FCoE into customers is political. With FCoE, who owns the switch: the LAN or SAN team? Sure, you can use RBAC to give the SAN team access to only the FC side, but there are still political hurdles to get over before FCoE becomes mainstream.

Similar Messages

  • Configuring Nexus 5k for SAN switching between UCS 6100s and NetApp storage

    We are trying to get our UCS environment setup to do SAN boot to LUNs on our NetApp storage. We have a pair of Nexus 5Ks with the Enterprise / SAN license on them and the 6-port FC module installed. What needs to be done to configure the Nexus 5000 to operate as the SAN switch between the storage target and the UCS environment?

    I'm still not seeing the LUN on the NetApp device from my service profile. Here are the outputs from the two
    commands you referenced here along with a few other commands if they help at all.
    Port fc2/1 is the connection to the UCS 6100, with FCID 0x640004 being the vHBA in my server profile.
    Port fc2/5 is the NetApp target. I have the LUN masked to the vHBA port name 20:00:00:25:b5:01:00:ff
    I have just the WWPN from the vHBA in my server profile and the WWPN of the NetApp target port zoned together. I'm not seeing any FC traffic at the NetApp, though, looking at the statistics. Do I need to include something else in my zoning?
    Again, any assistance would be appreciated. This is obviously our first venture into FC...
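    For reference, a minimal N5K-side sketch of what this setup usually involves (not your actual config; VSAN 100 and the NetApp target pWWN are hypothetical placeholders, and feature npiv matters because the UCS 6100 runs in end-host/NPV mode by default):
    feature fcoe
    feature npiv
    vsan database
      vsan 100
      vsan 100 interface fc2/1
      vsan 100 interface fc2/5
    zone name UCS-NETAPP vsan 100
      member pwwn 20:00:00:25:b5:01:00:ff
      member pwwn 50:0a:09:xx:xx:xx:xx:xx
    zoneset name FABRIC-A vsan 100
      member UCS-NETAPP
    zoneset activate name FABRIC-A vsan 100
    If the zoneset is active and both pWWNs appear in 'show flogi database', the usual next suspect is LUN masking on the NetApp side.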

  • Connecting a Dell B22 FEX to Nexus 6001 and storage

    Hi Folks,
    We have a setup where we are running a Dell B22 FEX in a blade environment and want to connect the B22 FEX to a Cisco Nexus 6001 switch. As per the NX-OS release notes, B22 FEX to Nexus 6001 connectivity is supported.
    Now, after connecting to the Nexus 6001, how do I get access to the storage pool or SAN fabric? As per another discussion thread, the Nexus 6001 does not support direct fabric attachment at this point in time. So how do we bridge these two elements to a storage fabric?
    As per 6.02 release note:
    "Support for DELL FEX
    Added support for the Cisco Nexus B22 Dell Fabric Extender for Cisco Nexus 6000 Series switches starting with the 6.0(2)N1(2) release."
    This is the exact reason we bought it. We have an environment where we are running the Dell B22 FEX. The idea is to connect the B22 FEX into the Nexus 6001. We are confused at this point: after connecting the Dell B22 FEX to the Nexus 6001, how do we access the storage network or storage fabric?
    Thanks,

    Hi Rays,
    The function should be the same for all FEXs regardless of what parent switch they connect to.
    If you are referring to this comment:
    "You can connect any edge switch that leverages a link redundancy mechanism not dependent on spanning tree, such as Cisco FlexLink or vPC"
    FlexLink is a different technology that does not use STP, but not every switch platform supports it. FlexLink is not used very often, as other technologies like VSS, vPC, and stacking have emerged.
    HTH
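    For completeness, attaching the B22 to the 6001 parent is the standard FEX association (a minimal sketch; the FEX number, port-channel number, and uplink ports are hypothetical):
    feature fex
    fex 101
      description B22-DELL
    interface port-channel101
      switchport mode fex-fabric
      fex associate 101
    interface Ethernet1/5-6
      switchport mode fex-fabric
      fex associate 101
      channel-group 101
    Since the 6001 has no native FC ports, storage access from the blades would then be FCoE through the FEX (assuming the B22's FCoE support applies to your setup), either terminating on the 6001 itself or carried up to a switch that does have native FC.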

  • KEYNOTE DIFFERENCES BETWEEN A CATALYST LAN SWITCH & A STORAGE SWITCH (MDS)

    Hi Guys,
    I have a very basic query. I wanted to know the difference between a switch which we connect to our campus networks and switches connected to storage area networks. I don't mean the cost and stuff, but more how they forward packets and the technologies. If I have a Catalyst switch with 10Gb ports and a SAN switch with 10Gb ports, what would be the performance difference? What would be the difference in their forwarding mechanisms?

    Hi Rustom,
    The big difference is that the 10Gb ports on the Catalyst are Ethernet and the 10Gb ports on the MDS switch are Fibre Channel only. You can't use the MDS ports for Ethernet.
    Jim

  • Storage Switch Recommendation

    SAN fabric switches - which SAN switch would you recommend that will support both iSCSI and Fibre Channel, with more than 25 additional fibre ports on each switch? The customer currently has four 16-port Cisco switches he's looking to replace.

    The reason the 9124 has such a great price point is that Cisco developed a totally buttkicking chipset on the back end to handle the Fibre Channel traffic at high speed, instead of the Brocade way of chaining a whole bunch of chipsets together and trying to cover it up with buffer memory. In the end this results in major cost savings, which it looks like Cisco is passing on to the consumer.
    Now, I would venture to guess that iSCSI is not in the current capabilities of that chipset.
    Also, it kind of goes against the segment where I see the 9124 marketed. I see the target market of the 9124 as the smaller storage implementations where the storage "network" aspect is almost an afterthought.
    While multiprotocol access such as iSCSI is nice for these segments, the cost to provide it in the switches would, I think, price Cisco out of the market in this segment.
    --Colin

  • Nexus 5548UP for SAN Switch

    We have purchased a Nexus 5548UP switch for our SAN environment. I've only configured it for jumbo frames and am currently testing the performance, as some say to use jumbo frames while others say not to. My question is (and I know this is a loaded question): how do you guys go about configuring your SAN switches? What exactly are you configuring on them, such as for performance tuning or the like? The reason I ask is that we have the switch and it's pretty much working, but I'd like to learn more about it, how to tune it, and how to fully take advantage of it.

    That is correct, between two Dell OptiPlex workstations. We also tested between the server that will go into prod and an OptiPlex, but once again the OptiPlex could be the limiting factor. You think that could be it? Here's my config, so you know how it's set up right now.
    version 5.2(1)N1(7)
    hostname N5K
    feature telnet
    feature lldp
    feature vtp
    username admin password 5 $1$oZ1234567898765432123456/  role network-admin
    banner motd #Nexus 5000 Switch
    ip domain-lookup
    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
        multicast-optimize
    system qos
      service-policy type network-qos jumbo
    snmp-server user admin network-admin auth md5 0x422842c9d320064123456789fc703d20
     priv d3264f23456789000c703d20 localizedkey
    vrf context management
    port-profile default max-ports 512
    interface Ethernet1/1
      flowcontrol receive on
      flowcontrol send on
    interface Ethernet1/2
    interface Ethernet1/3
    interface Ethernet1/4
    interface Ethernet1/5
      flowcontrol receive on
      flowcontrol send on
    interface Ethernet1/6
    interface Ethernet1/7
    interface Ethernet1/8
    interface Ethernet1/9
      flowcontrol receive on
      flowcontrol send on
    interface Ethernet1/10
    interface Ethernet1/11
    interface Ethernet1/12
    interface Ethernet1/13
      flowcontrol receive on
      flowcontrol send on
    interface Ethernet1/14
    interface Ethernet1/15
    interface Ethernet1/16
    interface Ethernet1/17
    interface Ethernet1/18
    interface Ethernet1/19
    interface Ethernet1/20
    interface Ethernet1/21
    interface Ethernet1/22
    interface Ethernet1/23
    interface Ethernet1/24
    interface Ethernet1/25
    interface Ethernet1/26
    interface Ethernet1/27
    interface Ethernet1/28
    interface Ethernet1/29
    interface Ethernet1/30
    interface Ethernet1/31
    interface Ethernet1/32
    interface mgmt0
      ip address 11.11.11.11/24
    line console
      exec-timeout 720
    line vty
    boot kickstart bootflash:/n5000-uk9-kickstart.5.2.1.N1.7.bin
    boot system bootflash:/n5000-uk9.5.2.1.N1.7.bin
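    One quick sanity check on a config like this is to confirm the jumbo MTU is actually applied per class and to watch the interface counters during a test (standard NX-OS show commands; the interface number is just an example):
    show policy-map system type network-qos
    show queuing interface ethernet 1/1
    show interface ethernet 1/1 counters detailed
    From the workstations themselves, a ping with the don't-fragment bit set and a payload near 9000 bytes is the usual end-to-end MTU test; if that passes, the OptiPlex NICs or drivers are the more likely bottleneck.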

  • Thoughts on interconnecting Nexus 3548 and 3750 switches

    Hi,
    I have two nexus 3548 switches.
    I have created port-channel 1 on both switches to group eth1/47 with eth1/48.
    I have 4 SFPs, 2 per switch, to connect to a single 3750 that I want to group together as well.
    So I have gi1/0/31 going to eth1/1 on nexus1 and gi1/0/32 going to eth1/2 on nexus1.
    I have gi1/0/33 going to eth1/1 on nexus2 and gi1/0/32 going to eth1/2 on nexus2.
    When I create the port group on the 3750, do I create one group with all 4 ports, or will I have to create 2, one per Nexus switch?
    Thanks

    Thanks for the replies. I finally got to test the hardware and config yesterday.
    Just so I am clear: the vPC peer-link and peer-keepalive links are only for interconnecting the two Nexus switches. I think I got that right. 'show vpc brief' seems to say it is up. I am using Po5.
    vPC domain id : 1
    Peer status : peer adjacency formed ok
    vPC keep-alive status : peer is alive
    Configuration consistency status : success
    Per-vlan consistency status : success
    Type-2 consistency status : success
    vPC role : secondary
    Number of vPCs configured : 0
    Peer Gateway : Disabled
    Dual-active excluded VLANs : -
    Graceful Consistency Check : Enabled
    Auto-recovery status : Disabled
    vPC Peer-link status
    id Port Status Active vlans
    1 Po5 up 1
    Next, do I create a port-channel on Nex1 and Nex2 for the two ports that connect to the 3750 (Po6 for example), or do I add the two links to the 3750 to Po5? I thought I would add them to Po5, but since I am mixing 1G and 10G ports it doesn't seem to like it.
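    The usual design is a separate vPC port-channel for the downstream 3750, never added to the peer-link Po5 (a rough sketch of that approach; Po6, trunk mode, and the 3750 port range are assumptions based on your description):
    ! on each Nexus 3548
    interface port-channel6
      switchport mode trunk
      vpc 6
    interface Ethernet1/1-2
      switchport mode trunk
      channel-group 6 mode active
    ! on the 3750, all four uplinks go in one EtherChannel,
    ! since vPC makes the Nexus pair look like a single switch
    interface range GigabitEthernet1/0/31 - 34
      switchport trunk encapsulation dot1q
      switchport mode trunk
      channel-group 6 mode active
    Mixing 1G and 10G members in a single port-channel isn't supported, which is why adding the 1G links to Po5 failed.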

  • Nexus 5596 Layer 2 Switching Fabric Statistics

    Hi everybody
    Basic question: I would like to know how much traffic the switch is forwarding at Layer 2.
    In other words, a simple set of counters that in an aggregated way includes statistics from all interfaces.
    Any show commands or SNMP MIBs are welcomed.
    Thanks in advance

    • Cisco Nexus 5548P and 5548UP: Layer 2 hardware forwarding at 960 Gbps or 714.24 mpps; Layer 3 performance of up to 160 Gbps or 240 mpps
    • Cisco Nexus 5596UP and 5596T: Layer 2 hardware forwarding at 1920 Gbps or 1428 mpps; Layer 3 performance of up to 160 Gbps or 240 mpps
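    Note those datasheet figures are capacity, not live traffic, and as far as I know there is no single aggregate Layer 2 counter on the 5596. The usual workaround is to sum the per-interface counters, either from the CLI or by polling the standard IF-MIB 64-bit octet counters (standard OIDs shown below):
    show interface counters brief
    ! or poll via SNMP and sum across all interfaces:
    ! ifHCInOctets   1.3.6.1.2.1.31.1.1.1.6
    ! ifHCOutOctets  1.3.6.1.2.1.31.1.1.1.10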

  • Cisco LAN Management Solution is required to support Cisco Nexus 5548P and 5596UP switches?

    Hi,
    Could someone help to know what Cisco LAN Management Solution is required to support Cisco Nexus 5548P switches and Cisco Nexus 5596UP switches?
    These new Cisco switches are being implemented on a customer network, and the customer has asked us to confirm that this equipment will be supported by an LMS solution (the customer is currently using LMS 3.2.1).
    Can someone help?
    Thanks in advance,
    guruiz

    Some very limited Nexus support is present in LMS 3.2.1 - see the supported device table here.
    To get more complete support, including the 5596UP, they need to upgrade to LMS 4.x (e.g. LMS 4.2.2 is the latest and is sold under the Cisco Prime Infrastructure 1.2 umbrella). The major upgrade from 3.x to 4.x requires purchasing an upgrade license.
    Some functions (namely User Tracking) will not be available on the 5K due to non-support of the requisite MIB on the device. I believe LMS still doesn't let you do VLAN management on 5Ks - you need to use DCNM for that if you want to do it from a GUI.
    See the table here for LMS 4.2 device support.

  • Nexus 9396PX Data centre switch not booting.

    Our Nexus 9300 switch is not loading its code; it drops to the loader> prompt, and I tried the boot command with no luck.
    Do we need a crossover cable to netboot from a TFTP server?
    I configured the IP settings in the loader through the set commands.
    PC-------------Nexus9300 (mgmt0) port.
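    If it helps, recovery from the loader usually looks like the sketch below (commands as I recall them for the N9K loader; the addresses and image filename are hypothetical placeholders). On the cabling question: most mgmt0 ports auto-negotiate MDI/MDI-X, so a straight-through cable to the PC normally works, but a crossover does no harm.
    loader> set ip 192.168.1.10 255.255.255.0
    loader> set gw 192.168.1.1
    loader> boot tftp://192.168.1.20/nxos-image-file.bin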

    Thanks Colin,
    I was thinking towards a 6500.
    For a small setup with currently about 20 clients, looking to increase to about 50 in the next year, would a single 6500 or a pair of 6500s be a suitable core?
    Is there any benefit in setting up an MPLS core with routers etc.? I have done it in labs but can't see how to scale it for a real-world implementation.
    I am thinking the VRF separation on the 6500 is going to be sufficient, and I can forget MPLS?
    Roger

  • How is the switching capacity of the Nexus 7710 and Nexus 7718 calculated?

    Hi everybody.
    I found in the Nexus 7700 fabric datasheet that there is 220G per slot.
    For example, take the Nexus 7710:
    there is 220G per slot per fabric module, so the result is 6 * 220G * 8 = 10560G, and full duplex gives 21120G = 21T.
    Where does the 42T come from?

    Hi,
    The 42 Tbps number Cisco are quoting is for the Nexus 7718 platform. If you look at the Cisco Nexus 7700 Switches Data Sheet you'll see it quotes:
    The Cisco Nexus 7710 has six fabric module slots to provide simultaneously active fabric channels to each of the I/O and supervisor modules. Through the parallel forwarding fabric architecture, the Cisco Nexus 7710 can achieve 21 Tbps of forwarding capacity or more.
    And for the Nexus 7718:
    The Cisco Nexus 7700 18-Slot Switch has six fabric module slots to provide simultaneously active fabric channels to each of the I/O and supervisor modules. Through the parallel forwarding fabric architecture, the Cisco Nexus 7700 18-Slot Switch can achieve 42 Tbps of forwarding capacity or more.
    So for the Nexus 7718 platform we have 220 Gbps * 6 Fabric Modules * 16 I/O slots totaling 21,120 Gbps. If we double that for full duplex we'll have 42,240 Gbps or 42 Tbps.
    Regards

  • Two Fabric redundancy and storage flapping

    Hello!
    I have a fairly new Nexus 5548 implementation, using the Nexus strictly for storage. I have two 5548s for two different fabrics, for redundancy. They are two separate fabrics, and the Nexus switches are not stacked, so they are managed individually. When I have both Nexus switches online, my VMware side starts flapping and losing storage, which causes my ESXi hosts to lock up and VMs to go unresponsive. This does not happen 100% of the time, but it happens intermittently and is sometimes catastrophic to datacenter services. When I shut down one of the Nexus switches, storage comes back and everything is healthy.
    All hosts are connected via 4Gb FC (supposedly HP can't do 8Gb without problems).
    5/6 of my hosts are on HP c7000 enclosures via 10Gb FlexFabric switches, the rest via UCS.
    A NetApp clustered pair is the target. When the ESXi hosts lose storage, they are still FLOGI'd in to the storage and fabric.
    ESXi 5.0 with the newest (and correct) drivers. VMware tech support sees no problems, other than that "storage is getting pulled from the host".
    Newest firmware on everything HP & UCS.
    Nexus running 5.0(3)N2(2a).
    Using per-initiator zoning.
    Fabric-A and Fabric-B are different VSANs.
    Any ideas? Do I have a design flaw in my fabric? HP, Cisco, NetApp, and VMware all pretty much have no clue, so this forum is a shot in the dark. Thanks for ANY ideas you guys can provide.

    Could it be some kind of LUN trespass on the storage array? LUNs start bouncing between the controllers and eventually the host times out. Have you tried a different failover (NMP) policy in ESXi?
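    To check and, if needed, change the path selection policy from the ESXi 5.x shell, something like this (the naa identifier is a hypothetical placeholder; which PSP is right depends on the NetApp's ALUA configuration, so check NetApp's interoperability guidance first):
    esxcli storage nmp device list
    esxcli storage nmp device set --device naa.60a98000xxxxxxxx --psp VMW_PSP_RR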

  • Catalyst 3850 vs Nexus 3K

    Hi ,
    I have a question. If the only requirement for the DC is to have iSCSI for some servers, is it better to go for the Nexus 3K or should I choose the Cat 3850? Apparently with 10 Gig uplinks the Cat 3850 is more expensive. Can someone please tell me what the benefits of each would be in this scenario?
    Fawad

    Hi ,
    The Cat 3850 is mainly used to unify wired and wireless networks: Cisco Catalyst 3850 switches combine wired and wireless by supporting wireless tunnel termination and full wireless LAN controller functionality.
    On the other hand, the Nexus 3000 is a low-latency datacenter top-of-rack switch that unifies data and storage traffic, and it is well suited to your iSCSI requirement, so I would suggest the Nexus 3000.
    HTH
    Regards,
    VS.Suresh.
    *Plz rate the useful posts*

  • Documentation needed about networking config storage environments

    Hello,
    In my job there are more and more environments where I have to manage storage switches in VMware ESX environments with NetApp storage.
    I was wondering if there is any documentation available about how to configure storage switches in these environments. I have checked with VMware, but with no luck.
    Any help is appreciated. Thank you and kind regards,
    Ralph Willemsen
    Arnhem, Netherlands

    Ralph,
    Although you haven't mentioned what type of switch is connecting the storage, I am assuming it is a Cisco MDS storage switch.
    Here is the MDS switch configuration guide:
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/ov.html
    If you are looking for configuration of the storage devices themselves, that has to come from either VMware or NetApp.
    This support forum is particularly focused on the Cisco MDS/Nexus switching side. Are there any issues configuring or attaching these devices on the Cisco switch side?
    When you connect any server running VMware or a NetApp storage device to an MDS switch, there is nothing special required on the port side. The device is supposed to negotiate automatically with the switch port and come up into the online state.
    The first thing to check is whether the port is up. If yes, the switch side is good.
    Second is zoning: create zones for the devices that need to be zoned together and activate the zoneset. That's it.
    The rest of the configuration is on the device side, meaning the VMware or NetApp side. How you allocate disks/LUNs/volumes etc. is outside of switch configuration.
    Have you opened a case with VMware or NetApp, or posted the question on those vendors' support sites?
    Hope this gives the right direction.
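    The port-up and zoning checks above map to a handful of standard MDS show commands (the interface and VSAN numbers are just placeholders):
    show interface fc1/1
    show flogi database
    show fcns database vsan 10
    show zoneset active vsan 10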

  • Nexus N7k 10 slot ISSU failure

    Hi, 
    I am trying to ISSU upgrade our Nexus 7K 10-slot switch from 5.2.7 to 6.2.8a. The ISSU has failed multiple times due to the standby supervisor failing to come back online, with the message:
    Install has failed. Return code 0x4093001e (Standby failed to come online)
    Please identify the cause of the failure, and try 'install all' again.
    Start type: SRV_OPTION_RESTART_STATELESS (23)
    Death reason: SYSMGR_DEATH_REASON_NEED_COPYRS (19)
    Last heartbeat 0.00 secs ago
    System image name: n7000-s1-dk9.6.2.8a.bin
    System image version: 6.2(8a) S2
    Exit code: SYSMGR_EXITCODE_NEED_COPYRS (66)
    Has anyone else experienced this before and found a solution? Any suggestions are welcome at this stage. I've followed the ISSU guidelines to the letter. Please help!!
    thanks, Sujohn

    Hi,
    Are you following these guidelines before attempting to upgrade?
    Before attempting to use ISSU to upgrade to any software image version, follow these guidelines:
    Scheduling: Schedule the upgrade when your network is stable and steady. Ensure that everyone who has access to the device or the network is not configuring the device or the network during this time. You cannot configure a device during an upgrade.
    Space: Verify that sufficient space is available in the location where you are copying the images. This location includes the active and standby supervisor module bootflash: (internal to the device). Internal bootflash: has approximately 250 MB of free space available.
    Hardware: Avoid power interruption during any install procedure, which can corrupt the software image.
    Connectivity to remote servers: Configure the IPv4 or IPv6 address for the 10/100/1000 BASE-T Ethernet port connection (interface mgmt0), and ensure that the device has a route to the remote server. The device and the remote server must be in the same subnetwork if you do not have a router to route traffic between subnets.
    Software images: Ensure that the specified system and kickstart images are compatible with each other. If the kickstart image is not specified, the device uses the current running kickstart image. If you specify a different system image, ensure that it is compatible with the running kickstart image. Retrieve the images either locally (images are available on the switch) or remotely (images are in a remote location and you specify the destination using the remote server parameters and the filename to be used locally).
    Before an upgrade from Cisco NX-OS Release 6.1(x) to Release 6.2, apply either "limit-resource module-type f1" or "limit-resource module-type f2" to the storage VDC, and check that the following storage VDC configurations are removed:
    Shared F2(F1) interfaces with a storage VDC that supports only F1(F2)
    F1 and F2 interfaces in the same storage VDC
    The default Control Plane Policing (CoPP) policy does not change when you upgrade the Cisco NX-OS software.
    CoPP MAC policies are supported beginning with Cisco NX-OS Release 5.1, and default policies are installed upon execution of the initial setup script. However, if you use ISSU to upgrade to Cisco NX-OS Release 6.0(1), the default CoPP policies for the following features must be manually configured: FabricPath, OTV, L2PT, LLDP, DHCP, and DOT1X. For more information on the default CoPP policies, see the Cisco Nexus 7000 Series NX-OS Security Configuration Guide.
    When you upgrade to Cisco NX-OS Release 6.0(1), the policy attached to the control plane is treated as a user-configured policy. Check the CoPP profile using the show copp profile command and make any required changes.
    The upgrade to Cisco NX-OS Release 6.0(1) in an OTV network is disruptive. You must upgrade all edge devices in the site and configure the site identifier on all edge devices in the site before traffic is restored. You can prepare OTV for ISSU to Cisco NX-OS Release 6.0(1) in a dual-homed site to minimize this disruption. See the Cisco Nexus 7000 Series NX-OS OTV Configuration Guide for information on how to prepare OTV for ISSU to Cisco NX-OS Release 6.0(1) in a dual-homed site. An edge device with an older Cisco NX-OS release in the same site can cause traffic loops. You should upgrade all edge devices in the site during the same upgrade window. You do not need to upgrade edge devices in other sites as OTV interoperates between sites with different Cisco NX-OS versions.
    The upgrade from Cisco NX-OS Release 5.2(1) or from Release 6.0(1) to Release 6.1(1) in an OTV network is non-disruptive.
    VPC peers can only operate dissimilar versions of the Cisco NX-OS software during the upgrade or downgrade process. Operating VPC peers with dissimilar versions, after the upgrade or downgrade process is complete, is not supported.
    Starting with Cisco NX-OS Release 6.1(1), Supervisor 2 is supported. Therefore, there is no upgrade of Supervisor 2 from a previous Cisco NX-OS release.
    Link:
    http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/6_x/nx-os/upgrade/guide/b_Cisco_Nexus_7000_Series_NX-OS_Software_Upgrade_and_Downgrade_Guide_Release_6-x.html#con_317522
    HTH
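    If all of those guidelines check out, it is worth re-running the pre-install checks before the next attempt (standard NX-OS commands; the kickstart filename is assumed from the usual naming convention to match your system image):
    show incompatibility system bootflash:n7000-s1-dk9.6.2.8a.bin
    show install all impact kickstart bootflash:n7000-s1-kickstart.6.2.8a.bin system bootflash:n7000-s1-dk9.6.2.8a.bin
    install all kickstart bootflash:n7000-s1-kickstart.6.2.8a.bin system bootflash:n7000-s1-dk9.6.2.8a.bin
    If the standby supervisor still dies with SYSMGR_DEATH_REASON_NEED_COPYRS, running 'copy running-config startup-config vdc-all' before the install is a commonly missed step, since ISSU requires the configuration to be saved across all VDCs.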
