Connecting Fabric Interconnect to iSCSI array

I need to connect my Fabric Interconnect to a new iSCSI array. There are a number of UCS blades that need to connect to iSCSI LUNs. The FI is currently connected to the rest of the network through Nexus 7000 switches. Should I connect directly from the FI to the iSCSI array, or go through the 7000s and then to the array?

Hi,
This is more of a design question; you have to think about what will need access to the iSCSI storage array. For example, if only UCS blades will have access to this storage array, you may want to consider connecting it directly, as iSCSI traffic won't have to go through your N7Ks if both fabrics are active.
If you want another type of server, such as HP or IBM, to access the storage, you may want to consider connecting the storage array to the N7Ks if your fabrics are configured in end-host mode. Again, this will depend on your current implementation.
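If you do attach the array directly, the FI ports facing it must be configured as appliance ports. A minimal UCS Manager CLI sketch (the slot/port numbers are hypothetical; verify the scopes against your UCSM version):

UCS-A# scope eth-storage
UCS-A /eth-storage # scope fabric a
UCS-A /eth-storage/fabric # create interface 1 20
UCS-A /eth-storage/fabric/interface # commit-buffer

Repeat for fabric B, and make sure the iSCSI VLAN is allowed on the appliance ports as well as on the blades' vNICs.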

Similar Messages

  • Connecting Fabric Interconnect with MDS

    Dear Team,
    Kindly share how to configure connectivity between a Fabric Interconnect and an MDS. What are the steps to follow to establish a successful connection between the FI and the MDS?
    We have an IBM tape library that we are trying to connect to the FI via the MDS, and we want the tape library to be detected on a guest VM running Windows 2008 on ESXi (the service profile already has 2 vHBAs added). We are using Fibre Channel connectivity.
    Also, please advise which mode we should keep the FI in (FC switching, end-host, etc.).
    Thanks and Regards
    Jose

    Hi Jose,
    The common steps to configure FC on the FI and MDS are as follows:
    1. With the FI in NPV mode, ensure that all FC uplinks and the blades' vHBAs have been assigned to the correct VSAN.
        Verify that the vHBAs FLOGI into the FI:
        show npv flogi-table
        show npv status
    Example
    FI-A(nxos)# sh npv flogi-table
    SERVER                                                                  EXTERNAL
    INTERFACE VSAN FCID             PORT NAME               NODE NAME       INTERFACE
    vfc703    10   0x330001 20:00:00:cc:1e:dc:01:0a 20:00:00:25:b5:00:00:03 San-po102
    vfc711    10   0x330002 20:00:00:cc:1e:dc:02:0a 20:00:00:25:b5:00:00:02 San-po102
    Total number of flogi = 2.
    FI-A(nxos)# sh npv status
    npiv is enabled
    disruptive load balancing is disabled
    External Interfaces:
    ====================
      Interface: san-port-channel 102, State: Up
            VSAN:   10, State: Up, FCID: 0x330000
      Number of External Interfaces: 1
    Server Interfaces:
    ==================
      Interface: vfc703, VSAN:   10, State: Up
      Interface: vfc711, VSAN:   10, State: Up
      Number of Server Interfaces: 2
    2. Enable NPIV mode on the MDS and configure the link to the FI as an F port. Assign this port to the correct VSAN.
        The FI will act as a node proxy, so verify that all vHBAs have been detected on the upstream NPIV switch.
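    For example, the MDS side would look something like this (a sketch only; the member interfaces are hypothetical, while the VSAN and san-port-channel numbers match the outputs below):
    SW2(config)# feature npiv
    SW2(config)# interface san-port-channel 102
    SW2(config-if)# switchport mode F
    SW2(config-if)# channel mode active
    SW2(config)# vsan database
    SW2(config-vsan-db)# vsan 10 interface san-port-channel 102
    SW2(config)# interface fc1/1-2
    SW2(config-if)# channel-group 102 force
    SW2(config-if)# no shutdown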
    SW2(config)# sh npiv status
    NPIV is enabled
    SW2(config)# sh flogi database
    INTERFACE        VSAN    FCID           PORT NAME               NODE NAME      
    San-po102        10    0x330000  24:66:00:2a:6a:05:e9:00 20:0a:00:2a:6a:05:e9:01
    San-po102        10    0x330001  20:00:00:cc:1e:dc:01:0a 20:00:00:25:b5:00:00:03
                               [BLADE1-SAN-A]
    San-po102        10    0x330002  20:00:00:cc:1e:dc:02:0a 20:00:00:25:b5:00:00:02
                               [BLADE2-SAN-A]
    3. Configure zoning on the MDS (to determine whether or not the issue is with zoning, you can configure the default zone to permit).
    Example: zone default-zone permit vsan 1
    Verify the zoning on the upstream NPIV switch:
    SW2(config)# show zoneset active vsan 10
    zoneset name ZONESET-VSAN10 vsan 10
      zone name INE-VSAN10 vsan 10
      * fcid 0x330001 [device-alias BLADE1-SAN-A]
      * fcid 0x330002 [device-alias BLADE2-SAN-A]
      * fcid 0x2a0000 [device-alias FC-SAN10]
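    For reference, the zoning shown above would be built along these lines (a sketch, not the exact original config; the device-alias names are taken from the output):
    SW2(config)# zone name INE-VSAN10 vsan 10
    SW2(config-zone)# member device-alias BLADE1-SAN-A
    SW2(config-zone)# member device-alias BLADE2-SAN-A
    SW2(config-zone)# member device-alias FC-SAN10
    SW2(config)# zoneset name ZONESET-VSAN10 vsan 10
    SW2(config-zoneset)# member INE-VSAN10
    SW2(config)# zoneset activate name ZONESET-VSAN10 vsan 10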
    4. Ensure that the blades' vHBAs have been discovered on the storage device, and configure LUN masking on the storage.
    I really don't have enough VMware knowledge to help troubleshoot the VM side, sorry!
    Thanks,
    Gofi

  • Connecting Fabric Interconnects together with FC

    Hi,
    Is it possible to link two Fabric Interconnects together with FC uplinks?
    If I have two sites, with a pair of Fabric Interconnects in site A and a pair in site B, is it possible to connect FI-A of site A to FI-A of site B via an FC uplink port? Or should I use a Cisco MDS or Brocade switch to make the connection?
    At each site we have a server with FC connections, and each needs to access the storage on the other side. Is it possible if I place the FIs in FC switching mode?
    Kind regards,
    Gert

    Hi Robert,
    Thanks for the answer. At each site there is only one server with FC connections, so buying MDS switches just for that is a high cost. Is it possible to connect them via FC, or do we really have to buy FC switches? It's only for one server per site...
    Thanks in advance and regards,
    Gert

  • Appliance port on 6248 Fabric Interconnects

    Is it possible to set up a storage device with a CIFS share on the 6248s through a port defined as an appliance port?  The GUI guide says this:
    "Appliance ports are only used to connect fabric interconnects to directly attached NFS storage."
    But what would be the difference if it were a CIFS share rather than an NFS mount?
    I'm trying to set up a storage unit that does both NFS and CIFS, but the backup software that I want to point at it through the FIs needs to see it as a CIFS share, since it runs on a Windows platform.
    Any and all comments welcome.
    Thanks,
    Chad

    iSCSI, CIFS and NFS targets are all valid uses for the appliance port. 
    All the appliance port does is allow MAC learning on the interface. 
    Regards,
    Robert

  • Fabric Interconnect 6248 & 5548 Connectivity on 4G SFP with FC

    Hi,
    Recently I came across a scenario where I connected a 4G SFP to the expansion module of a 6248 Fabric Interconnect at one end and a 4G SFP to a 5548UP at the other end. I was unable to establish FC connectivity between the two devices, but the moment I connected the 4G SFP to the fixed module of the 6248, connectivity was established between them.
    I would like to know whether I have to make any changes on the FI's expansion module to get the connectivity working, or whether this kind of behavior is expected.
    Do let me know if you need any other information on this.
    Regards,
    Amit Vyas

    Yes. On FI-B, ports 15-16 should be in VSAN 101 instead of 100; I have made that correction.
    Q: Are you migrating the FC ports from the fixed to the expansion module?
         A: As of now I am not migrating the FC ports, but in the near future I will have to migrate them to the expansion module, and I don't want to waste time troubleshooting at that point.
    Q: Is my understanding correct that you have 2 links from each FI to a 5548, and no FC port-channel?
         A: Yes, your understanding is correct; we have 2 links from each FI to a 5548, and no FC port-channel is configured.
    I will do the FC port-channel later on, once I am able to fix the connectivity issue.
    I will try to put the 4G SFP in the expansion module and will provide the output of "show interface brief".
    Following is the output of "show interface brief" from both 5548UP switches:
    Primary5548_SW# show interface brief
    Interface  Vsan   Admin  Admin   Status          SFP    Oper  Oper   Port
                      Mode   Trunk                          Mode  Speed  Channel
                             Mode                                 (Gbps)
    fc1/29     100    auto   on      up               swl    F       4    --
    fc1/30     100    auto   on      up               swl    F       4    --
    fc1/31     100    auto   on      up               swl    F       4    --
    fc1/32     100    auto   on      up               swl    F       4    --
    Ethernet      VLAN    Type Mode   Status  Reason                   Speed     Port
    Interface                                                                    Ch #
    Eth1/1        1       eth  access down    Link not connected          10G(D) --
    Eth1/2        1       eth  access down    Link not connected          10G(D) --
    Eth1/3        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/4        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/5        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/6        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/7        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/8        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/9        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/10       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/11       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/12       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/13       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/14       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/15       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/16       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/17       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/18       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/19       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/20       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/21       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/22       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/23       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/24       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/25       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/26       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/27       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/28       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/1        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/2        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/3        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/4        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/5        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/6        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/7        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/8        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/9        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/10       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/11       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/12       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/13       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/14       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/15       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/16       1       eth  access down    SFP not inserted            10G(D) --
    Port   VRF          Status IP Address                              Speed    MTU
    mgmt0  --           up     172.20.10.82                            1000     1500
    Interface  Vsan   Admin  Admin   Status      Bind                 Oper    Oper
                      Mode   Trunk               Info                 Mode    Speed
                             Mode                                            (Gbps)
    vfc1       100    F     on     errDisabled Ethernet1/1              --
    Primary5548_SW#
    Secondary5548_SW# show interface brief
    Interface  Vsan   Admin  Admin   Status          SFP    Oper  Oper   Port
                      Mode   Trunk                          Mode  Speed  Channel
                             Mode                                 (Gbps)
    fc1/29     101    auto   on      up               swl    F       4    --
    fc1/30     101    auto   on      up               swl    F       4    --
    fc1/31     101    auto   on      up               swl    F       4    --
    fc1/32     101    auto   on      up               swl    F       4    --
    Ethernet      VLAN    Type Mode   Status  Reason                   Speed     Port
    Interface                                                                    Ch #
    Eth1/1        1       eth  access down    Link not connected          10G(D) --
    Eth1/2        1       eth  access down    Link not connected          10G(D) --
    Eth1/3        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/4        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/5        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/6        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/7        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/8        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/9        1       eth  access down    SFP not inserted            10G(D) --
    Eth1/10       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/11       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/12       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/13       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/14       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/15       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/16       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/17       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/18       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/19       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/20       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/21       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/22       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/23       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/24       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/25       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/26       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/27       1       eth  access down    SFP not inserted            10G(D) --
    Eth1/28       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/1        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/2        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/3        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/4        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/5        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/6        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/7        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/8        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/9        1       eth  access down    SFP not inserted            10G(D) --
    Eth2/10       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/11       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/12       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/13       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/14       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/15       1       eth  access down    SFP not inserted            10G(D) --
    Eth2/16       1       eth  access down    SFP not inserted            10G(D) --
    Port   VRF          Status IP Address                              Speed    MTU
    mgmt0  --           up     172.20.10.84                            1000     1500
    Interface  Vsan   Admin  Admin   Status      Bind                 Oper    Oper
                      Mode   Trunk               Info                 Mode    Speed
                             Mode                                            (Gbps)
    vfc1       1      F     on     errDisabled Ethernet1/1              --
    Secondary5548_SW#

  • Is there a way to see SAN LUNs from the UCS fabric interconnect?

    Hello,
    I have recently added a 3rd and 4th vHBA to a B440 running RHEL5.
    Originally I had 2 vHBAs, and each created a path to 172 LUNs (I was multipathed to 172 LUNs).
    Now I have added two additional vHBAs, and my SAN admin assures me he has mapped, zoned, and masked all 172 LUNs to the 2 new vHBAs.
    So, at this time I should have 4 paths to 172 LUNs... but I don't.
    What I see is 4 paths to 71 LUNs.
    I still see 2 paths to all 172 LUNs, but only see 4 paths on 71 of them.
    It's almost like my SAN admin either mapped or masked only 71 of the 172 LUNs to my 3rd and 4th path.
    From the RHEL5 OS I have rescanned the FC paths, rebooted, run lsscsi, and looked in the lowest-level places for what devices are present on my FC channels; I always see 172 devices on the 2 original FC paths and 71 devices on the 2 new FC paths.
    My question is: is there a way to see all devices (LUNs) associated with a vHBA from the command line on the Fabric Interconnect?
    My configuration is UCS with 2 FIs, using 8Gb FC connections to Cisco MDS 9500 directors, to an EMC DMX4.
    Thanks in advance,
    Gene

    wdey,
    The link you supplied in your response is the answer.
    You can look at the devices on the FC boot paths, but not while the OS is running.
    I have a test server I could reboot to try what you have in your document, and it does work.
    Too bad Cisco could not create a Linux utility RPM that would allow looking down the fabric while the OS is running. I'm quite sure Emulex has this kind of utility, called HBAnyware... at least for Solaris.
    In any case, thanks for the link to your article, it was VERY good!
    My problem with seeing a shorter list of LUNs on 2 of my FC paths may get solved this afternoon by a change my SAN admin is going to make. If it does, I will post the fix.
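    For anyone else looking for the FI-side check referred to above: it is typically done from the VIC adapter shell rather than from NX-OS. A rough sketch (the chassis/slot/adapter and vNIC indices are hypothetical, and the LUN list is only populated while the vHBA is logged in, e.g. during the boot phase):
    UCS-A# connect adapter 1/1/1
    adapter 1/1/1 # connect
    adapter 1/1/1 (top):1# attach-fls
    adapter 1/1/1 (fls):1# vnic
    adapter 1/1/1 (fls):2# lunlist 7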

  • 6500 / fabric interconnect

    Hi all,
    Is there any issue with connecting a Fabric Interconnect cluster to a 6504?
    We are reviewing a connection between these two devices without any Nexus devices in between... is there any problem with that?

    Hello,
    Yes, you can connect a UCS FI to a 6500 switch.
    Make sure you configure STP PortFast on the 6500 switch ports connecting to the FI (if it is operating in end-host mode); see the sketch below.
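    A minimal IOS sketch of the 6500-side port configuration (the interface and VLAN numbers are hypothetical; on newer code the keyword is spanning-tree portfast edge trunk):
    interface TenGigabitEthernet1/1
      description Uplink to UCS FI-A (end-host mode)
      switchport
      switchport mode trunk
      switchport trunk allowed vlan 10,20
      spanning-tree portfast trunk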
    Padma

  • UCS-Fabric Interconnect Replacement

    Hello everybody,
    I have a UCS domain with a failed Fabric Interconnect, and I have to replace it. As I understand it, there is nothing to do in the configuration of the new FI: I just have to add it to the existing domain and give it an IP address. But I have a question about the firmware version: will the firmware install automatically on the new Fabric Interconnect, or do I have to install it manually? The firmware version in my domain is 2.2(1b).
    Thanks for the answer
    Best Regards 
    François Fiocre

    Hi Claudio
    I assume you are talking about 10G copper, not 1G copper;
    and if 10G: CX-4 (Twinax) cables, or 10GBASE-T?
    There is a 12-port 10GBASE-T module, but it is only supported on the Nexus 5596T, so it is of no use on a Nexus 5596UP switch or a UCS FI.
    Anyway, it is not recommended to connect UCS FI uplinks to a Nexus 22xx; just think about the oversubscription! You would like to have as much bandwidth northbound as possible.

  • Fabric Interconnect To MDS 9148 FC Port Channel/Trunks

    Is it possible to create both a port channel and a trunk on a Cisco MDS 9148? That is possible, correct?
    Or can I only trunk between the two MDS switches, and not from the MDS to the Fabric Interconnect?
    These FC links would be connected to a 2248XP Fabric Interconnect, carrying multiple VSANs from the MDS to the Fabric Interconnect.
    Thanks

    Hello,
    Yes, you can have trunk links grouped into a port channel (F port-channel trunking) between the FI and the MDS; see the sketch below the link.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_0101.html
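    The MDS side would look roughly like this (a sketch; the port-channel, VSAN, and member port numbers are hypothetical):
    feature npiv
    feature fport-channel-trunk
    interface san-port-channel 1
      switchport mode F
      switchport trunk mode on
      switchport trunk allowed vsan 10
      switchport trunk allowed vsan add 20
      channel mode active
    interface fc1/1-2
      switchport mode F
      channel-group 1 force
      no shutdown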
    Padma

  • 6120 Fabric Interconnect

    Is it possible to connect both an Ethernet switch and an FC switch from a 6120 Fabric Interconnect using the 8-port expansion module?

    Dhinesh,
    Which expansion module are you using? N10-E0440 or N10-E0080 (both of them have eight ports).
    If it's the E0440 - yes, you can use it for both Ethernet uplinks as well as FC uplinks.
    The E0080 has just FC ports.
    (The ports on the module that are colored green are FC ports.)
    Here is the URL to the spec sheet for your reference:
    http://www.cisco.com/en/US/prod/collateral/ps10265/ps10276/spec_sheet_c17-665945.pdf
    ./Abhinav

  • Fabric Interconnect and storage solutions

    Dear all,
    I would like to clarify a question: can a storage solution such as the EMC VNX5300 be connected directly to a Fabric Interconnect, or is functionality lost compared to connecting it through a Nexus 5000 switch? I would like someone to clarify what functionality, if any, is lost with a direct connection, and whether connecting to the FI or to the Nexus is advisable.
    Regards

    In versions of UCS earlier than 2.1, you had the option to use Direct Attached Storage (DAS) with UCS. However, you needed a SAN switch connected to the FI so that the switch could push the zone database to the FI. That is, the UCS platform was not able to build a zone database.
    With the release of version 2.1, UCS now has the ability to build its own zone database. You can have DAS with UCS without the need for a SAN switch to push the zoning configuration.
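    For reference, local zoning requires the FIs to be in FC switching mode, which is a one-time change that reboots the fabric interconnect. A minimal UCSM CLI sketch (assuming the standard fc-uplink scope; verify against your UCSM version):
    UCS-A# scope fc-uplink
    UCS-A /fc-uplink # set mode switch
    UCS-A /fc-uplink # commit-buffer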
    Hope this helps.

  • Cisco UCS 6200 Fabric InterConnects Needed?

    Currently looking to put Cisco UCS 5100 blade chassis into our datacenter. Our core currently has a 7K with F2 48-port blades and FCoE licenses. We currently have our ESXi servers terminating directly on the 7K for both 10G Ethernet and storage via FCoE. If we decide to replace our standalone rackmount servers with UCS blades, can we connect the blade chassis directly to the 7K, or are the 6200 fabric interconnects required? If so, what purpose do they serve? Thanks.

    Yes, a fabric interconnect is required for a UCS blade solution; it can aggregate a maximum of 20 chassis of 8 blades each, for a total of 160 blades.
    UCS Manager runs on the fabric interconnect as an application, allowing you to do firmware management for the server farm (BIOS, I/O adapter, CIMC, ...); the concept of service profiles implements server abstraction and mobility across hardware.

  • Use UCS Fabric Interconnect 6100 with Extension Module E0060 (6 FC Ports)

    Hi Everybody,
    I have a problem in my architecture... I think!
    I want to use the Expansion Module E0060 (6 FC ports) on a UCS Fabric Interconnect 6100 to connect my storage array (2 controllers with 2 FC ports each), instead of using a Nexus 5000 between the FI 6100 and my storage array.
    I tried to find details or a case study for this approach, but found nothing.
    I only found that the best practice is to use a Nexus 5000, but I have enough FC ports with this module... I think, but I'm not an expert.
    Thanks for your help.

    Thanks for your answer, Michael.
    But I really don't understand why I can't use the E0060 module (6 FC ports) in the FI 61xx instead of a Nexus 5000 (I hope that's not a commercial goal).
    As you know, the hardware architecture of the FI 61xx is the same as the Nexus 5000; the difference is that the 61xx has added features and UCS Manager embedded.
    Now, if the 61xx has limited features, like NPV or other things, I will understand, but until now I have not found a real answer on the Cisco site or forum (all Cisco documents mention that the best practice and the perfect design is to use the 61xx with a Nexus 5000 or MDS switch between UCS and the storage array).
    I hope that you understand my problem.

  • Monitoring traffic between UCS fabric extenders and fabric interconnect?

    What is the best way to monitor the traffic between the UCS fabric extenders (chassis) and the fabric interconnect? Specifically, I am looking for parameters to keep an eye on to determine when we may need to move from 2 links per fabric extender to 4 to handle greater I/O needs.
    Thanks.
    - Klaus

    One way you could monitor usage is by looking at the interface stats just as you would for any switch uplink.
    Connect to the cluster CLI:
    connect nxos
    Look at the input/output rate of your server uplink interfaces.
    In my example I'm using fixed ports 1 through 4 on my FIs, connecting to the IOMs of the chassis.
    UCS-A(nxos)# show int eth1/1-4 | include rate
      30 seconds input rate 17114328 bits/sec, 1089 packets/sec
      30 seconds output rate 8777736 bits/sec, 693 packets/sec
        input rate 5.07 Mbps, 198 pps; output rate 1.03 Mbps, 99 pps
      30 seconds input rate 2376 bits/sec, 0 packets/sec
      30 seconds output rate 1584 bits/sec, 2 packets/sec
        input rate 1.58 Kbps, 0 pps; output rate 1.58 Kbps, 3 pps
      30 seconds input rate 2376 bits/sec, 0 packets/sec
      30 seconds output rate 31680 bits/sec, 20 packets/sec
        input rate 1.58 Kbps, 0 pps; output rate 30.89 Kbps, 18 pps
      30 seconds input rate 2376 bits/sec, 0 packets/sec
      30 seconds output rate 1584 bits/sec, 1 packets/sec
        input rate 1.58 Kbps, 0 pps; output rate 1.58 Kbps, 3 pps
    If you notice your two links pushing near 10G consistently, it might be time to add another 2 links.
    Other than this, you can use SNMP to log the stats and review usage on a daily/weekly/monthly basis.
    Robert

  • Failed:Error disabled - SFP vendor not supported on Fabric Interconnect 6248UP

    We are connecting an FC uplink on a Fabric Interconnect 6248UP to an MDS 9124. On the Fabric Interconnect side we use an 8Gbps SFP (DS-SFP-FC8G-SW); on the MDS 9124 side there is a 4Gbps SFP. This should work when the speed is fixed at 4Gbps on both sides (I was told). When the SFPs and cabling are connected, I get an error in UCS Manager on the FC uplink port we are using:
    Failed:Error disabled - SFP vendor not supported.
    I cannot change the speed on the FC uplink (I can only set the user label). The Fabric Interconnect is configured for FC on the last 8 ports, and that is where the SFP for the storage is located (port 31).

    I have a similar problem. I'm using port 1/41 and get this output:
    show system firmware expand | head lines 10
    UCSM:
       Running-Vers: 2.1(3a)
       Package-Vers: 2.1(3a)A
       Activate-Status: Ready
    Catalog:
       Running-Vers: 2.1(3a)T
       Package-Vers: 2.1(3a)A
       Activate-Status: Ready
    sh interface fc 1/41
    fc1/41 is down (Error disabled - SFP vendor not supported)
       Hardware is Fibre Channel, SFP is Unknown(0)
       Port WWN is 20:29:00:2a:6a:7e:7b:00
       Admin port mode is F, trunk mode is off
       snmp link state traps are enabled
       Port vsan is 1
       Receive data field Size is 2112
       Beacon is turned off
       1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
       1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
         0 frames input, 0 bytes
           0 discards, 0 errors
           0 CRC, 0 unknown class
           0 too long, 0 too short
         0 frames output, 0 bytes
           0 discards, 0 errors
         0 input OLS, 0 LRR, 0 NOS, 0 loop inits
         0 output OLS, 0 LRR, 0 NOS, 0 loop inits
       last clearing of "show interface" counters never
    show int fc 1/41 transceiver details
    fc1/41 sfp is present but not supported
       name is CISCO-FINISAR  
       part number is FTLX8571D3BCL-C2
       revision is A  
       serial number is FNS17401NYG    
       FC Transmitter type is Unknown(0)
       FC Transmitter supports Unknown(0) link length
       Transmission medium is Unknown(0)
       Supported speeds are - Min speed: -1 Mb/s, Max speed: -1 Mb/s
       Nominal bit rate is 10300 MBits/sec
       Link length supported for 50/125mm fiber is 80 m(s)
       Link length supported for 62.5/125mm fiber is 20 m(s)
       No tx fault, no rx loss, no sync exists, diagnostic monitoring type is 0x68
       SFP Diagnostics Information:
                                         Alarms                 Warnings
    Is this a firmware problem?
