UCS connecting to SAN switches

I've configured UCS connecting directly into the SAN before, but not to SAN switches.  My understanding of the main differences between the two is this:
1) With SAN switches, you configure vSANs.  To do this, go to the SAN tab -> SAN Cloud and configure your vSAN IDs there.  Then, in your vHBA template, select the appropriate vSAN ID you configured under SAN Cloud.  Is there anything else to do here?
2) Whereas with direct-attached storage you connect FI-A to SPA and SPB and FI-B to SPA and SPB, with SAN switches FI-A goes to 'switch-1' and FI-B goes to 'switch-2'.  Is this correct?  If so, does anyone know why it has to be this way?

No problem, tbone-111
As far as I know, only two vHBAs are supported at this time.  It's best to use one from the A side and one from the B side.
The vHBAs that are used depend on which WWPNs you zoned to your boot LUNs.
For the example below, assume the following conventions:
1.) My vHBAs are named vHBA_A1, vHBA_B1, vHBA_A2, and vHBA_B2 (A for the A-side fabric, B for the B-side)
2.) vHBA_A1 is the first HBA on my A side, and vHBA_B1 is the first HBA on my B side
AND
3.) These HBAs' WWPNs are zoned properly to my boot LUNs' Storage Processor (SP) ports on the FC switch
AND
4.) These HBAs' WWPNs are LUN-masked properly on my array
Those are logical ANDs, by the way; they must ALL be true to prevent headaches later on.
5.) Disk array SP ports A0 and B1 are connected to my A-fabric FC switch; B0 and A1 are connected to my B fabric.
6.) This example boot policy will have the A fabric as the primary boot path.
-FYI, it's a best practice to have at least two SAN boot policies (one for the A side and one for the B side, evenly distributed among your Service Profiles).
In my boot policy, I check the enforce vHBA naming option and specify my vHBA names (sorry, I don't have screenshots available; I just started with a new company and don't have lab access yet).
My SAN Primary setting might be: Primary Target: A0's WWPN, and Secondary Target: B1's WWPN.
My SAN Secondary setting might be: Primary Target: B0's WWPN, and Secondary Target: A1's WWPN.
It would be a good idea to confer with your Storage Admins while you are setting up the first couple of servers to boot on each Fabric to help with any troubleshooting.
If zoning and LUN masking are configured correctly, you should see WWPNs listed during the UCS Server Boot sequence at the point where the FC Boot driver is being loaded.
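On the switch side, a quick sanity check is to confirm the vHBA actually performed its fabric login and that the active zone contains both the initiator and the SP ports. A minimal sketch for an MDS-based fabric (the VSAN number and all WWPNs below are placeholders, not values from the example above):

```
! Fabric A switch: verify the blade's vHBA performed FLOGI
show flogi database vsan 10

! Verify the active zoneset contains the vHBA and the SP ports (A0, B1)
show zoneset active vsan 10

! A single-initiator boot zone for this path might look like:
zone name esx01-vHBA_A1-boot vsan 10
  member pwwn 20:00:00:25:b5:aa:00:01   ! vHBA_A1 (placeholder WWPN)
  member pwwn 50:06:01:60:00:00:00:01   ! SP port A0 (placeholder)
  member pwwn 50:06:01:68:00:00:00:01   ! SP port B1 (placeholder)
```

Brocade and McData switches have equivalent commands, but the syntax differs by vendor.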
Of course, get Cisco TAC on the line if you run into any obstacles that you cannot get past.
I hope that helps,
Trevor
======================================================================================
If my answers have been helpful in any way, please rate accordingly.  Thanks!

Similar Messages

  • Connecting two san switch in cascade

I am planning to connect two SAN switches in cascade, because the 2/16 SAN switch that I am using now has only one unused port and I plan to add four more servers. So my least expensive solution is to use a second 4/16 switch that I have.
- My questions:
How do I do that?
Do I need a special license?
Do I need to make any other changes to the port configuration for this purpose?
Is there anything I should watch out for?
    Thank you
    Ehab

    The answers to your questions are really going to depend on the make and model of your switches.
    Some switch types require a license to configure E_Port (switch-to-switch) functionality, but most do not. Whichever port(s) you use to make the connection will need to either be configured as an E_Port or configured in a mode that will allow it to auto-negotiate itself to E (this could be a GL_Port, Gx_Port, or U_Port depending on switch vendor) after the switches establish communication.
If the switch that you are adding has zoning defined on it that differs from the existing switch, you may run into conflicts, so it is often best to simply clear the zoning config on the switch being added. Finally, make sure that each switch has a unique Domain ID; otherwise the two switches will segment rather than merge. By default, most switches are set to Domain ID 1, so just make sure they are not both set to 1.
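On a Cisco MDS, for example, the cascade comes down to a port mode and a unique domain ID; a minimal sketch (the interface number, domain ID, and VSAN are assumptions for illustration):

```
! On the switch being added: set a unique domain ID before merging
fcdomain domain 2 static vsan 1

! Configure the cascade port as an E_Port (or leave it in auto mode
! so it can negotiate to E after the switches establish communication)
interface fc1/16
  switchport mode E
  no shutdown
```

After the link comes up, the merged fabric can be verified with `show fcdomain domain-list` and `show topology`.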
    hth.
    -j

  • Cisco UCS an McData SAN Switch

Hello everyone,
We are trying to establish a connection between ESX on a UCS and a NetApp MetroCluster. The SAN infrastructure is based on McData SAN switches.
In the past, I administered the complete infrastructure with HP/Brocade SAN switches without problems. No special settings were necessary there: WWN pools and vHBAs, and the connection was up.
The SAN admin said NPV is activated on the port and NPIV is supported. He also sees my WWPNs and has configured the zoning on the SAN. Unfortunately, our ESX servers do not see any presented LUN.
I would be very grateful for tips and information on the necessary configuration. The vSphere environment runs on 5.5, UCS on 2.1. I have already tested all the settings in UCS. I have also played a bit with the VSANs, which was never necessary in the past. The FIs run in End Host Mode. An ESX rescan shows no paths or targets, and a LIP brought no changes either.
Since I have no access to the SAN configuration, I first want to make sure that I haven't forgotten anything on the UCS side. Switching mode should not be an issue.
Best regards,
Bastian

From your PuTTY logs, I don't see any issues...
e1/17 and e1/21 are both up, and I also don't see any blocked VLANs.  Where's this "ENM Loop" you speak of?
    Ethernet1/21 is up
      Hardware: 1000/10000 Ethernet, address: 547f.eeda.6f9c (bia 547f.eeda.6f9c)
      Description: U: Uplink
    RE-SLCFI1-A(nxos)# sh int trunk
    Port          Native  Status        Port
                  Vlan                  Channel
    Eth1/17       500     trunking      --
    Eth1/21       501     trunking      --
    Port          Vlans Allowed on Trunk
    Eth1/17       1,5,500,999
    Eth1/21       1,501,999
    Port          Vlans Err-disabled on Trunk
    Eth1/17       none
    Eth1/21       none
    Port          STP Forwarding
    Eth1/17       1,5,500,999
    Eth1/21       1,501,999

  • Regarding San Switch

    Dear All,                  
I have some queries about the SAN switches DS-C9148D-4G48P-K9 and DS-C9148D-8G48P-K9. We need the cable specification for the connectivity between the SAN switch and the servers and tape library, and for SAN switch-to-switch links.
Waiting for your feedback.
    Thanks,
    Whead-Ul-Islam

    The parts listed are port licenses.  The following document may be of value.
    Cisco MDS 9000 Family Pluggable Transceivers Data Sheet
    http://www/en/US/prod/collateral/ps4159/ps6409/ps4358/product_data_sheet09186a00801bc698.html
    Thank You,
    Dan Laden

  • UCS servers with IBM's TotalStorage SAN16B-2 SAN Switch

    Hi Experts,
We have an IBM TotalStorage SAN16B-2 SAN switch and a Cisco UCS 6120 Fabric Interconnect, and I didn't find any document about compatibility between them.
This is a SAN switch with each port capable of delivering throughput of up to 4 Gbps.
Can we use this SAN switch to connect to the UCS Fabric Interconnect?
    Thanks,
    Mazhar

    As long as the switch supports NPIV it should be fine.
    Regards,
    Robert

  • UCS connection to McData SAN infrastructure

    Hi,
I'm trying to connect a UCS to a SAN based on McData SAN switches. I configured the WWN pools and the vHBAs. The SAN admin told me NPV is activated on the port and NPIV is supported.
On the switches, my WWPNs are visible, and the SAN admin has already configured the zoning and the presentation. If I rescan the SAN on my ESX hosts, I can't see any paths or presented LUNs at all. A LIP reset had no success.
In the past, I had access to the whole environment, and no special configuration was needed to get this working. At that time I was working with Brocade/HP SAN switches. Today I'm only responsible for the UCS and the vSphere environment, and the other admins have no experience with UCS.
Any tips or hints would be very helpful to get this working. Do I have to make any special configurations? Does anyone have experience with UCS and McData SAN who can share the needed configuration with me?
    Kind Regards
    Bastian

    Hi,
    many thanks for your help.
    show npv flogi-table shows:
    SERVER                                                                  EXTERNAL
    INTERFACE VSAN FCID     PORT NAME               NODE NAME               INTERFACE
    vfc791    1000 0x636215 20:00:00:25:b5:10:a1:00 20:00:00:25:b5:10:c1:00 fc2/1
    vfc793    1000 0x636214 20:00:00:25:b5:10:a1:01 20:00:00:25:b5:10:c1:03 fc2/1
    vfc795    1000 0x636217 20:00:00:25:b5:10:a1:02 20:00:00:25:b5:10:c1:01 fc2/1
    vfc797    1000 0x636218 20:00:00:25:b5:10:a1:03 20:00:00:25:b5:10:c1:02 fc2/1
    vfc805    1000 0x636216 20:00:00:25:b5:10:a1:04 20:00:00:25:b5:10:c1:04 fc2/1
    UCSM Version is 2.1(3a)
    adapter 1/1/1 # connect
    adapter 1/1/1 (top):1# attach-fls
    adapter 1/1/1 (fls):1# vnic
    vnic ecpu type state   lif
    8    1    fc   active  5
    9    2    fc   active  6
    adapter 1/1/1 (fls):2# login 8
    lifid: 5
      ID   PORTNAME                 NODENAME                 FID
    adapter 1/1/1 (fls):3# lunmap 8
    adapter 1/1/1 (fls):4# lunlist 8
    % Command not found
    adapter 1/1/1 (fls):5# login 9
    lifid: 6
      ID   PORTNAME                 NODENAME                 FID
    adapter 1/1/1 (fls):6# lunmap 9
    adapter 1/1/1 (fls):7#
No output at login doesn't sound good. Is this because the PLOGI wasn't done correctly? "lunlist" returning "Command not found" is also strange; does that command only work after a successful login?
    Kind Regards
    Bastian
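For what it's worth, when the flogi table looks fine (as above) but the ESX host still sees no LUNs, the next checks are usually the upstream switch's name server and zoning. A hedged sketch of the checks (VSAN 1000 is taken from the flogi table above; McData syntax will differ, the second half assumes an MDS-style upstream switch):

```
! On the FI (NX-OS mode): confirm NPV state and uplink logins
show npv status
show npv flogi-table

! On the upstream SAN switch (MDS syntax; McData differs):
show fcns database vsan 1000     ! initiator AND target should both appear
show zoneset active vsan 1000    ! active zone must contain both WWPNs
```

If the initiator WWPN appears in the name server but the target does not (or the zone is not in the active zoneset), the problem is on the switch/array side rather than in UCS.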

  • Configuring Nexus 5k for SAN switching between UCS 6100s and NetApp storage

    We are trying to get our UCS environment setup to do SAN boot to LUNs on our NetApp storage. We have a pair of Nexus 5Ks with the Enterprise / SAN license on them and the 6-port FC module installed. What needs to be done to configure the Nexus 5000 to operate as the SAN switch between the storage target and the UCS environment?

I'm still not seeing the LUN on the NetApp device from my service profile. Here are the outputs from the two commands you referenced, along with a few other commands in case they help.
Port fc2/1 is the connection to the UCS 6100, with FCID 0x640004 being the vHBA in my server profile.
Port fc2/5 is the NetApp target. I have the LUN masked to the vHBA port name 20:00:00:25:b5:01:00:ff.
I have just the WWPN from the vHBA in my server profile and the WWPN of the NetApp target port zoned together. I'm not seeing any FC traffic at the NetApp, though, from looking at the statistics. Do I need to include something else in my zoning?
Again, any assistance would be appreciated. This is obviously our first venture into FC...
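For reference, the minimal Nexus 5000 configuration to act as the FC switch between a UCS FI (in end-host/NPV mode) and a storage target looks roughly like the sketch below. The VSAN number and the NetApp WWPN are assumptions for illustration; the vHBA WWPN is the one quoted above:

```
feature npiv                     ! required so the FI can log in multiple WWPNs

vsan database
  vsan 100
  vsan 100 interface fc2/1       ! uplink from the UCS 6100
  vsan 100 interface fc2/5       ! NetApp target port

zone name ucs-blade1-netapp vsan 100
  member pwwn 20:00:00:25:b5:01:00:ff   ! vHBA from the service profile
  member pwwn 50:0a:09:81:00:00:00:01   ! NetApp target port (placeholder)

zoneset name fabric-a vsan 100
  member ucs-blade1-netapp
zoneset activate name fabric-a vsan 100
```

The VSAN on the fc2/1 uplink must also match the VSAN configured for the vHBA in UCSM, or the FLOGI will land in the wrong fabric.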

  • Query: Best practice SAN switch (network) access control rules?

    Dear SAN experts,
    Are there generic SAN (MDS) switch access control rules that should always be applied within the SAN environment?
    I have a specific interest in network-based access control rules/CLI-commands with respect to traffic flowing through the switch rather than switch management traffic (controls for traffic flowing to the switch).
    Presumably one would want to provide SAN switch demarcation between initiators and targets using VSAN, Zoning (and LUN Zoning for fine grained access control and defense in depth with storage device LUN masking), IP ACL, Read-Only Zone (or LUN).
    In a LAN environment controlled by a (gateway) firewall, there are (best practice) generic firewall access control rules that should be instantiated regardless of enterprise network IP range, TCP services, topology etc.
    For example, the blocking of malformed TCP flags or the blocking of inbound and outbound IP ranges outlined in RFC 3330 (and RFC 1918).
    These firewall access control rules can be deployed regardless of the IP range or TCP service traffic used within the enterprise. Of course there are firewall access control rules that should also be implemented as best practice that require specific IP addresses and ports that suit the network in which they are deployed. For example, rate limiting as a DoS preventative, may require knowledge of server IP and port number of the hosted service that is being DoS protected.
    So my question is, are there generic best practice SAN switch (network) access control rules that should also be instantiated?
    regards,
    Will.

    Hi William,
That's a pretty wide net you're casting there, but I'll do my best to give you some insight into the matter.
Speaking pure Fibre Channel, your only real way of controlling which nodes can access which other nodes is zones.
    for zones there are a few best practices:
* Default zone: don't use it, unless you're running FICON.
* Single-initiator zones: one host, many storage targets. Don't put two initiators in one zone or they'll try logging into each other, which at best will give you a performance hit and at worst will bring down your systems.
* Don't mix zoning types: you can zone on WWN or on port, and Cisco NX-OS will give you a plethora of other options, like device-alias or LUN zoning. Don't use different types in one zone.
* Device-alias zoning is definitely recommended, with Enhanced Zoning and Enhanced Device Alias enabled, since it will make replacing HBAs a heck of a lot less painful in your fabric.
    * LUN zoning is being deprecated, so avoid. You can achieve the same effect on any modern array by doing lun masking.
    * Read-Only exists, but again any modern array should be able to make a lun read-only.
    * QoS on Zoning: Isn't really an ACL method, more of a congestion control.
VSANs are a way to separate your physical fabric into several logical fabrics.  There's one huge distinction from VLANs: as a rule of thumb, you should put things that you want to talk to each other in the same VSAN. There's no concept of a broadcast domain in FC the way it exists in Ethernet, so VSANs don't serve as isolation for that. Routing in Fibre Channel (IVR, or Inter-VSAN Routing) is possible, but quickly becomes a pain if you use it heavily or structurally. Keep IVR for exceptions; use VSANs for logical units of hosts and storage that belong together.  A good example would be to put each of two remote datacenters in its own VSAN, create a third VSAN for the ports on the array that provide replication between the DCs, and use IVR to give management hosts in-band access to all arrays.
    When using IVR, maintain a manual and minimal topology. IVR tends to become very complex very fast and auto topology isn't helping this.
Traditional IP ACLs (permit this protocol to that destination on such a port, and deny other combinations) are very rare on management interfaces, since those are usually connected to already-separated segments. The same goes for Fibre Channel over IP links (which connect to Ethernet interfaces in your storage switch).
They are quite logical to use, and work the same on an MDS as on a traditional Ethernet switch, when you want to use IP over FC (not to be confused with FC over IP). But then you'll logically be using your switch as an L2/L3 device.
    I'm personally not an IP guy, but here's a quite good guide to setting up IP services in a FC fabric:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/4_1/configuration/guides/cli_4_1/ipsvc.html
    To protect your san from devices that are 'slow-draining' and can cause congestion, I highly recommend enabling slow-drain policy monitors, as described in this document:
    http://www.cisco.com/en/US/partner/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/int/nxos/intf.html#wp1743661
    That's a very brief summary of the most important access-control-related Best Practices that come to mind.  If any of this isn't clear to you or you require more detail, let me know. HTH!
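As a concrete illustration of the device-alias recommendation above, a minimal MDS sketch (the alias names, VSAN number, and WWPNs are placeholders):

```
! Enhanced zoning and enhanced device-alias mode
zone mode enhanced vsan 10
device-alias mode enhanced

! Aliases survive HBA swaps: update the pwwn once and every zone follows
device-alias database
  device-alias name esx01-hba0 pwwn 20:00:00:25:b5:aa:00:01
  device-alias name array-spa0 pwwn 50:06:01:60:00:00:00:01
device-alias commit

zone name esx01-array vsan 10
  member device-alias esx01-hba0
  member device-alias array-spa0
```

With enhanced zoning, changes are staged session-wide and pushed fabric-wide on commit, which also prevents two admins from making conflicting edits at the same time.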

  • Windows 2012 Hyper-V Virtual SAN Switch Best Practice

I have a four-node cluster running W2012 Datacenter edition on each node.  All systems have Emulex HBAs compatible with Hyper-V and Virtual SAN technology. The hosts are connected to a Brocade SAN switch with NPIV enabled. I've created a Virtual SAN on each host. The vSAN is connected to two physical HBAs, and each HBA is connected to a different physical SAN switch.
    I have some questions about the best practice of the whole system.
1. Do I have to create a vSAN per physical HBA, or one vSAN that includes both HBAs?
2. Should I add two virtual HBAs per VM for fault tolerance, or one virtual HBA connected to a vSAN with two HBAs?
3. Do I have to create a virtual port for each VM in the HBA GUI (OCManager)?
Any best practices or advice?
    Thanks

    Hi,
first, you will do almost the same as you do on a physical SAN; normally you build two fabrics, right?
If you have that in place, you will also configure two vSANs to build up your two separate paths.
Here are my top links for the topic:
    Hyper-V Virtual Fibre Channel Design Guide
    http://blogs.technet.com/b/privatecloud/archive/2013/07/23/hyper-v-virtual-fibre-channel-design-guide.aspx
A very good blog with all the scenarios you can have, from single fabric to multi-fabric.
Here is the TechNet info on "Hyper-V Virtual Fibre Channel Overview":
    http://technet.microsoft.com/en-us/library/hh831413.aspx
    and the "Implement Hyper-V Virtual Fibre Channel"
    http://technet.microsoft.com/en-us/library/dn551169.aspx
Good advice is also to install this hotfix:
    http://support.microsoft.com/kb/2894032/en-us
It is an attractive feature, but if one point in the chain mishandles your NPIV packets you are lost. In my personal view, you also lose what is for me the most important reason to use Hyper-V: the separation of hardware and software :-)
One problem I had once: after live migration of a VM with a configured vFC, the destination host lost its FC connection, but only if the host was one SAN hop away from the HP storage. It turned out to be an HP driver problem; using the original QLogic driver fixed everything. So, as with some HP networking drivers for Broadcom, don't trust every update; test it before putting it into production.
Hope that helps!
    Udo

  • Do we need to create two zones for Two HBA for a host connected with SAN ?

Hi, while creating zones, do we need to create two zones for the two HBAs of a host connected to the SAN? Or is one zone enough for a host that has two HBAs? We have two 9124s for our SAN fabric.
Since I found one zone like the one below, I am a little confused: if a host has two HBAs connected to the SAN, should I expect two zones for every host?
From the zone set, I ran the command "show zoneset":
    zone name SQLSVR-X-NNN_CX4 vsan 1
        pwwn 50:06:NN:NN:NN:NN:NN:NN
        pwwn 50:06:NN:NN:NN:NN:NN:NN
              pwwn 10:00:NN:NN:NN:NN:NN:NN
But I found only one zone for the server's HBA2; at the same time, in the fabric, I found switches A and B showing the WWNs of those HBAs on their connected N ports. It's not only this server, but all hosts. Can you help me clarify whether we need to create one zone per HBA?

If you have two independent fabrics between hosts and storage, I think the configurations below are recommended.
Scenario 1: 2 single-port HBAs (redundancy across HBA / storage port)
HBA1 - port 0 ---------> Fabric A ----------> Storage port ( FAx/CLx )
HBA2 - port 0 ---------> Fabric B ----------> Storage port ( FAy/CLy )
Scenario 2: 2 dual-port HBAs
HBA1 - port 0 -------> Fabric A ---------> Storage port ( FAx/CLx )
HBA2 - port 0 ---------> Fabric A ---------> Storage port ( FAs/CLs )
HBA1 - port 1 --------> Fabric B --------> Storage port ( FAy/CLy )
HBA2 - port 1 ---------> Fabric B --------> Storage port ( FAt/CLt )
The zone in your output is in VSAN 1. If it's a production VSAN, note that Cisco doesn't recommend using VSAN 1 (the default VSAN) for production.
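To answer the original question directly: with single-initiator zoning, you would typically end up with one zone per HBA, on whichever fabric that HBA attaches to. A sketch (all WWPNs are placeholders following the masked pattern in the output above, and the VSAN numbers are assumptions):

```
! Fabric A switch: zone for HBA1
zone name SQLSVR-X-NNN_HBA1_CX4 vsan 10
  member pwwn 10:00:00:00:c9:00:00:01   ! HBA1 (placeholder)
  member pwwn 50:06:01:60:00:00:00:01   ! array SP port on fabric A (placeholder)

! Fabric B switch: zone for HBA2
zone name SQLSVR-X-NNN_HBA2_CX4 vsan 20
  member pwwn 10:00:00:00:c9:00:00:02   ! HBA2 (placeholder)
  member pwwn 50:06:01:68:00:00:00:01   ! array SP port on fabric B (placeholder)
```

Each fabric only ever sees one of the host's HBAs, which is why you find a single zone per HBA on each switch rather than one zone containing both.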

  • Why a SAN switch can see vHBA WWPNs but not WWNN?

I have set up UCSM, but the SAN switches (EMC) can only see the WWPNs on each server's vHBA, not the WWNN. They are all UCS B200 M2 models. I saw that the WWNNs are already assigned to all service profiles, as shown in the Storage tab.
Has anyone encountered this before? I have been digging at this for more than two hours with no clue.
I have rebooted the blades, disassociated/reassociated blades, re-acked blades, and unbound/rebound the vHBA templates on the servers' vHBAs, but no luck.
Appreciate any help here.
    kind regards,

Simon is correct. UCSM, under the Storage tab, is the only place you would see the WWNN. You only zone the WWPNs. If I may ask, what reason would you have to need to see the WWNN if the paths to the HBAs are being seen?
Just curious?
    Sent from Cisco Technical Support iPhone App

  • JBOD 9-pin connector to SAN switch

    I'm a newbie to SAN.
There are JBODs with a 9-pin D-type connector, but my SAN switch has a fibre SFP connector. Is there a cable to connect them?

Yes. There is something called a MIA (Media Interface Adapter), and it will convert copper to fibre. Usually you plug the MIA into the back of the JBOD and attach the fibre to the MIA and the switch. MIAs often require SC-type fibre connections, whereas the MDS uses LC-type connectors, so just make sure you get a fibre cable that is SC on one end and LC on the other. These cables are common, since most patch panels use the SC interface.

  • WWN does not show up on Cisco SAN Switch

We use Cisco SAN switches MDS 9134 and MDS 9222i with NX-OS 4.2(3). Our HP c7000 blade enclosure is connected to the SAN switches through a Fibre Channel Virtual Connect card. So far, everything has been working OK. The problem is that when I connect the 8th and 9th blade servers (BL860c i2 Integrity servers) to the c7000 enclosure, none of their WWNs show up on the Cisco switches, neither on the MDS 9134 nor on the MDS 9222i. Firmware on the HP blades is up to date. The Onboard Administrator shows that the HBAs are logged in to Virtual Connect.
Does anyone know a solution for this issue?
    Best Regards,
    Robert

    Hi Robert,
The output of "show flogi database" will show you all the WWNs of devices connected directly to a particular switch.
    Is NPV involved in your topology?
    Regards,
    Ken
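If Virtual Connect is passing the blade logins through an NPIV-style uplink, the switch port must allow multiple FLOGIs. A quick-check sketch on the MDS side (the interface number is a placeholder):

```
! Confirm NPIV is enabled (required for multiple logins on one port)
show feature | include npiv
feature npiv

! All blade logins riding the Virtual Connect uplink should appear here
show flogi database interface fc1/1
```

If only the Virtual Connect uplink's own WWN appears and the blades' WWNs do not, the issue is typically NPIV support (or a login limit) on that port.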

  • Can we schedule backup of cisco san switch 9148 through windows backup utility.

Can we schedule a backup of a Cisco SAN switch 9148 through the Windows backup utility?

Sure, you can write a batch/Perl/PowerShell/whatever script that will connect to your switch and then back up the configuration. You need to decide where you are going to back it up to; possible options include TFTP, SCP, FTP, and SFTP.  For example, I back up to a TFTP server; my Perl script connects to the switch and runs this command:
copy startup-config tftp://tfpserver-ip/mds9513.$(TIMESTAMP).config
TIMESTAMP is actually a built-in variable that will be replaced with the date/time configured on the switch.
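Alternatively, the MDS can schedule the copy itself with the built-in command scheduler, so no external Windows utility is needed at all. A sketch (the TFTP server IP, filenames, and times are assumptions):

```
feature scheduler

! Job: copy the startup config off-box, timestamped
scheduler job name backup-config
  copy startup-config tftp://10.1.1.10/mds9148.$(TIMESTAMP).config
end-job

! Run it every day at 23:00 (switch-local time)
scheduler schedule name nightly
  job name backup-config
  time daily 23:00
```

`show scheduler schedule` and `show scheduler logfile` confirm the schedule and whether past runs succeeded.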

  • No SAN Switch Logins

    I have the typical setup:
    2 x 6248 Interconnects
    2 x Brocade SAN Switches
    EMC Storage
All cross-connected for redundancy. However, I am not seeing any logins to the second SAN switch when I boot a blade. I looked at all the vSAN configs, etc., and it all looks good. I was wondering if anyone had any other ideas?
    Thanks

    When you are talking SAN and say "Cross Connected" for redundancy it kind of freaks me out.
    Topology should be
    FIA => BrocadeA
    FIB => BrocadeB
BrocadeA and BrocadeB should never cross paths.
You would have two vHBAs per Service Profile.
It would look like this:
vHBA_A => FIA => BrocadeA
vHBA_B => FIB => BrocadeB
The name servers on each Brocade should only see the WWPNs for the vHBAs on their fabric.
BUT, to maybe answer the question you actually asked: the vHBA template assigns either fabric A or fabric B. That is why you need two vHBAs for a dual fabric. If you want to see logins on both switches, make sure you create a vHBA_A and a vHBA_B set to the corresponding fabric/FI.
    See Below for example.
    Craig
