VM-FEX in UCS

Hi, I am doing VM-FEX in UCS in my lab environment on ESXi 5.1. I have configured a dynamic vNIC connection policy on the service profile template with 50 vNICs.
When configuring the port groups from UCSM, I noticed that it automatically creates an uplink-pg and a deleted-pg. Questions:
- Does the uplink-pg have any significance in VM-FEX? I believe we are passing VLANs to the port groups/port profiles in the DVS that we create on UCSM, without using the uplink as a trunk interface.
- As I have created 50 dynamic vNICs on the service profiles/blades, does this mean I can also have a maximum of 50 port groups on the DVS? If so, how can I find out the one-to-one association between a port group and a dynamic vNIC?

uplink-pg
A specific port configuration that is pushed into vCenter as a distributed port group that can be assigned to VMs and vNICs.
deleted-pg
A placeholder used when the switch is set up for control traffic: when a port profile is deleted, all of its vNICs are parked here, so do not assign vNICs from this port group.
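For what it's worth, a port profile can also be created from the UCSM CLI, roughly like this (a sketch from memory - the profile name "WEB-PP" and VLAN name "vlan100" are placeholders, and the exact scopes should be verified against the VM-FEX for VMware CLI configuration guide for your release):

UCS-A# scope system
UCS-A /system # scope vm-mgmt
UCS-A /system/vm-mgmt # scope profile-set
UCS-A /system/vm-mgmt/profile-set # create port-profile WEB-PP
UCS-A /system/vm-mgmt/profile-set/port-profile* # create vlan vlan100
UCS-A /system/vm-mgmt/profile-set/port-profile/vlan* # set default-net yes
UCS-A /system/vm-mgmt/profile-set/port-profile/vlan* # commit-buffer

Once committed, the port profile appears in vCenter as a distributed port group alongside the auto-created uplink-pg and deleted-pg.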

Similar Messages

  • Nexus 5548UP FCoE to C series UCS

    OK, here is the scenario:
    Nexus 5548 switch ---- FEX 2232 --- UCS C-Series (C220)
    Snippets of the configuration:
    vlan 1100
    fcoe vsan 1100
    vsan database
    vsan 1100
    fex 100
    description 2232-A
    fcoe
    int ethernet100/1/1
    switchport
    switchport mode trunk
    switchport trunk allowed vlan 10,1100           
    interface vfc100
    bind interface ethernet100/1/1
    switchport trunk allowed vsan 1100
    no shut
    As configured above, the vfc interface won't trunk unless I do the following:
    vsan database
    vsan 1100 interface vfc100
    I found this document that describes an example configuration for mapping VSANs to VLANs (page 8).

    Each FC and/or vFC interface must be a member of a VSAN; by default, they all initially belong to VSAN 1.
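    Putting the fix together with the original snippets, the working configuration should look roughly like this (reassembled from the pieces above, so treat it as a sketch rather than a verified config):

    vlan 1100
      fcoe vsan 1100
    vsan database
      vsan 1100
      vsan 1100 interface vfc100
    interface vfc100
      bind interface ethernet100/1/1
      switchport trunk allowed vsan 1100
      no shutdown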

  • Cisco UCS

    Looking for a design guide where I can see exactly how to design and configure Cisco UCS blade servers with (2) 2200 IOMs to Fabric Interconnects. I'd like to see how Fabric Interconnects connect to Cisco FEXes.
    The design guides I have downloaded from Cisco only talk about Cisco FEXes at a very high level, without going into much detail about how they connect to Fabric Interconnects.
    If I understand correctly, the Fabric Interconnects themselves have many server ports that connect down to chassis IOMs. I am trying to understand how server ports on the Fabric Interconnects work in conjunction with Cisco FEXes.

    Hi Abbasali,
    Cisco FEXes in UCS are used for C-Series server integration. You won't be able to use the FEXes to expand the number of ports on your Fabric Interconnect.
    C-Series integration documentation below
    http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c-series_integration/ucsm2-1/b_UCSM2-1_C-Integration/b_UCSM2-1_C-Integration_chapter_01.html
    Don't forget to rate any useful posts.

  • Cisco UCS with FEX system Component requirements

    Hi ,
    I am looking for a quick suggestion on the components required to implement VMware vSAN using a FEX system with maximum fabric throughput. Ideally, a configuration without multipathing using a single fabric switch, with the ability to upgrade to a multipath fabric later.
    We are in a great rush to deliver a proof of concept, and I was hoping you folks could advise me on this. If you want to discuss it in more detail, I am available.
    Appreciate your help here.
    Who can I reach to pull up, build, and quote the configurations for implementing a complete UCS solution?
    Are these below components absolutely required ?
    2 x Nexus 2232PP 10GE Fabric Extenders
    The Cisco Nexus 2232PP 10GE provides 32 lossless, low-latency 10 Gb Ethernet and Fibre Channel over Ethernet (FCoE) SFP+ server ports and eight 10 Gb Ethernet/FCoE SFP+ uplink ports in a compact 1-rack-unit (1RU) form factor. It enables seamless inclusion of UCS rack servers into a UCS Manager domain alongside UCS blade servers when connected to UCS 6200 Fabric Interconnects, providing converged LAN, SAN, and management connectivity.
    We currently have these below servers :
    2 x  UCSC-BASE-M2-C460
            524 GB RAM
            No SSDs
            No VIC Card
            4 x SAS 1 TB Drives
            1 x L1 Intel 1 Gbps Ethernet Adapter
            1 x L2 10Gbps Ethernet Adapter
            1 x LSI Mega RAID SAS 9240-8i (no RAID 5 support, Need a card that supports RAID5)
    1 x  UCSC-C240-M3L
            132 GB RAM
            30 TB SAS HDD
            1 x VIC Card
            1 x Intel I350 Gigabit Ethernet Card with 4 ports
            1 x LSI Mega RAID SAS 9240-8i (no RAID 5 support, Need a card that supports RAID5)
    1 x 5548UP Nexus Switch (will I be able to use this switch in place of the Nexus 2232PP 10GE Fabric Extenders to achieve a complete UCS solution?)

    Cisco UCS Manager 2.2 supports an option to connect the C-Series Rack-Mount Server directly to the Fabric Interconnects. You do not need the Fabric Extenders. This option enables Cisco UCS Manager to manage the C-Series Rack-Mount Servers using a single cable for both management traffic and data traffic.
    If you need high performance low latency (storage, for VMware VSAN), direct connection to UCS Fabric Interconnect and/or N5k is recommended.

  • In depth UCS FEX questions

    With regard to the 2100 FEX module in UCS, what exactly does it do besides simply acting as a mux?
    Does it have any buffers (ingress and egress)?
    Does it have any switching intelligence at all?
    How does it handle the reception of DCB frames (PFC, ETS, etc.) from the 6100 or the VIC on the other side? Does it just pass them on?
    What does it do with regard to FIP and FIP snooping?
    Need answers ASAP!
    Thank you!

    Ex,
    The 2100 operates identically to the FEX 2000 series for the N5K. I know you've asked quite a bit of detail about that device already; the same applies to the 2104XP IOM for UCS.
    Did you have any other specific questions on the UCS IOM (FEX)?
    Regards,
    Robert

  • Question on UCS Livemigration/vMotion with VM-Fex

    Hi,
    I want to know whether, and how, live migration/vMotion works if I have
    a) VM-FEX, and
    b) 2 UCS domains, each in one active datacenter, with both datacenters connected via an L2 datacenter interconnect, so they are active/active.
    I know live migration/vMotion will work within one UCS domain, but across two UCS domains I do not know.
    thanks Alois

    No. VM-FEX is managed by the Fabric Interconnect cluster. For this reason, you can't use VM-FEX to migrate between separate UCS clusters.
    Regards,
    Robert

  • UCS FEX Question

    I'm wondering if someone can clear up one thing for me...
    What exactly does a UCS FEX do besides aggregate the server traffic and push it up to the Fabric Interconnects? What I am wondering is: what functionality/technology does it need to possess to do that?
    From my understanding, a conventional blade switch needs to be DCB-capable (supporting at least PFC and ETS) and do FIP snooping to be used as an FCoE pass-through. Is that the case with a UCS FEX? I don't think so; it seems like nothing more than a "dumb" mux that has no intelligence, no code to upgrade, and simply passes DCB traffic between the CNA and the FI.
    What am I missing, if anything?

    Manish, as usual, great stuff. Thank you..
    So, what you are telling me is in line with my thoughts about the UCS FEX. There is no switching intelligence in it and -- more pertinent to this discussion -- no Data Center Bridging (CEE) capability to speak of. It's just a traffic mux/demux.
    That means the UCS FEX HIF ports will NOT act as a DCB-enabled 802.1D switch port does and generate a PFC PAUSE frame back to the server CNA if their input buffers get overwhelmed. I imagine then that the UCS FEX leaves it up to the Fabric Interconnect's ingress port to generate the PAUSE, and the FEX simply passes it on to the CNA. Correct? That would mean the FEX has no input buffers to speak of; if it did, it would have to be able to PAUSE traffic that was overrunning them - or so I would think.
    The same can then be said for the FEX and ETS. The FEX does not contain the hardware to participate in ETS: there is no egress scheduling algorithm, no recognition of packet prioritization, and no mechanism to honor any prioritization that could be leveraged to dynamically assign bandwidth to the configured traffic class serviced according to the ETS transmission scheduling algorithm (Algorithm 2, per IEEE 802.1Qaz v2.4).
    Lastly, the FEX does not participate in DCBX either - the link initialization semantics occur between the CNA and the Fabric Interconnect port.
    I really appreciate your time and help. Thank you!

  • Cisco UCS 6324 Fabric Interconnect Used as a FEX?

    Hi,
    Is it possible to use the Cisco UCS 6324 Fabric Interconnect as a FEX to uplink UCS 5108 Blade Chassis to Cisco UCS 6200 Series Fabric Interconnects? Thank you.

    How does it work if I want to add a second 5108 to a pair of 6324s? Having the 6324s replace the IOM (FEX) modules is cool, but would you need to buy IOMs for the second chassis and then use up either 4 x 10 Gb ports or the 1 x 40 Gb port, or would I need to purchase 2 more 6324s?
    Thanks in advance.

  • Where to download VM-FEX software on UCS manager 2.1(1a)?

    I'm looking for the .vib file to install it on the ESXi host.
    On UCS Manager 2.0, there used to be a link on the main page to download the file.
    On 2.1(1a), it's not there.
    Any idea where I can find it?
    Thanks in advance.

    Hi Thomas,
    VNMC (Virtual Network Management Center) provides centralized multi-device and policy management for virtual network services such as Cisco vASA and VSG.
    Please post your question on UCS support forum (https://supportforums.cisco.com/community/netpro/data-center/unified-computing) if you still have this question.
    Thanks,
    Ranga

  • Error setting up vm-fex on c-series UCS and UCSM

    I seem to keep getting an error about SR-IOV BIOS policy settings, or 'cannot derive mac address from virtual port', when I follow the guides for setting up VM-FEX with UCSM and VMware 5.1.
    Are there any fully featured guides that go through more than just each 'side' of the configuration (UCSM or VMware)?
    I want to know, from the beginning, which UCSM method I should use if I want UCSM to manage the VLANs/DVS in VMware, so that I don't have to keep creating VLANs in UCSM, adding them to service profiles, then creating the VM networks for those VLANs in VMware, and finally assigning a VM to use the new VLAN.
    And then, assuming the answer is VM-FEX, how to actually get that configured in UCSM and how to appropriately configure the service profiles for that use.
    When you integrate VMware and UCSM, should existing VMs show up in UCSM, or only after they are set to use the DVS? All I see is the DVS and those port profiles.
    All the documentation I've found simply shows one 'part' of this whole setup, and I haven't been able to find a solution guide describing how to configure the service profiles in UCSM to take advantage of VM-FEX that works for me. I keep getting the above-mentioned errors when I try to associate the service profile with my UCS C210 equipment.
    Thoughts?

    Resolved. 
    Jason was using the VM passthrough adapter policy and didn't have his static vNICs properly configured.
    Robert

  • Ask the Expert: Cisco UCS B-Series Latest Version New Features

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the Cisco UCS Manager 2.2(1) release, which delivers several important features and major enhancements in the fabric, compute, and operational areas. Some of these features include fabric scaling, VLANs, VIFs, IGMP groups, network endpoints, unidirectional link detection (UDLD) support, support for virtual machine queue (VMQ), direct connect C-Series to FI without FEX, direct KVM access, and several other features.
    Teclus Dsouza is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has over 15 years of total IT experience. He has worked across different technologies and a wide range of data center products. He is an expert in Cisco Nexus 1000V and Cisco UCS products. He has more than 6 years of experience on VMware virtualization products.  
    Chetan Parik is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has seven years of total experience. He has worked on a wide range of Cisco data center products such as Cisco UCS and Cisco Nexus 1000V. He also has five years of experience on VMware virtualization products.
    Remember to use the rating system to let Teclus and Chetan know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through May 9, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Jackson,
    Yes, it is possible. Connect the storage array to the Fabric Interconnects using two 10 Gb links per storage processor. Connect each SP to both Fabric Interconnects and configure the ports on the Fabric Interconnects as "Appliance" ports from UCSM.
    For more information on how to connect NetApp storage using other protocols like iSCSI or FCoE, please check the URL below.
    http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6100-series-fabric-interconnects/whitepaper_c11-702584.html
    Regards
    Teclus Dsouza

  • Adapter FEX vs VM-FEX

    I'm having a hard time wrapping my head around the differences and particulars of Adapter FEX and VM-FEX.
    I come from the networking side of the house and not the virtualization side, so my misunderstanding may be on the virtualization end.

    Hi Eric
    Have a look at this new CVD document, which has a lot of information about VM-FEX
    FlexPod Datacenter with VMware vSphere 5.5 Update 1 Design Guide
    Last Updated: August 11, 2014
    http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi55u1_design.pdf
    Walter.

  • Adapter FEX and VM Fex

    Hi!
    I have a questions about Cisco Adapter FEX and VM Fex.
    As I understand it, Cisco Adapter FEX gives you multiple vNICs on one mezzanine card, and each of them appears as a virtual port on the Fabric Interconnect.
    Likewise for VM-FEX: each VM NIC appears as a virtual port on the Fabric Interconnect.
    Am I right?
    Thank you!

    Adapter FEX is I/O adapter virtualization, which is agnostic to the OS and therefore works in virtualized as well as non-virtualized environments.
    VM-FEX (also called hypervisor bypass) is available for several hypervisors:
    Cisco UCS Manager VM-FEX for Hyper-V CLI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for KVM CLI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for VMware CLI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for Hyper-V GUI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for KVM GUI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for VMware GUI Configuration Guide, Release 2.2   
    Cisco UCS Manager VM-FEX for Hyper-V CLI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for KVM CLI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for VMware CLI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for Hyper-V GUI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for KVM GUI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for VMware GUI Configuration Guide, Release 2.1   
    Cisco UCS Manager VM-FEX for KVM CLI Configuration Guide   
    Cisco UCS Manager VM-FEX for KVM GUI Configuration Guide   
    Cisco UCS Manager VM-FEX for VMware CLI Configuration Guide   
    Cisco UCS Manager VM-FEX for VMware GUI Configuration Guide   
    example VM-FEX for VMware ESX
    VM-FEX (previously known as VN-Link) is a method to extend the network fabric all the way down to the VMs. With VM-FEX, the Fabric Interconnects handle switching for the ESXi host's VMs. UCSM utilizes the vCenter dVS Application Programming Interface (API) to this end, so VM-FEX shows up as a dVS in the ESXi host.
    There are many benefits to VM-FEX:
    Reduced CPU overhead on the ESX host
    Faster performance
    VMware DirectPath I/O with vMotion support
    Network management moved up to the FIs rather than on the ESXi host
    Visibility into vSphere with UCSM
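    On the ESXi side, enabling VM-FEX essentially comes down to installing the Cisco VEM .vib on the host and then adding the host to the UCSM-created dVS in vCenter. A rough sketch, assuming the .vib has already been copied to the host (the exact file name varies by release and is a placeholder here):

    ~ # esxcli software vib install -v /tmp/cisco-vem-v151-esx.vib
    ~ # esxcfg-vswitch -l      (the UCSM dVS should now be listed)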

  • UCS Shortcomings?

    I regularly hear a few specific arguments critiquing the UCS that I would like someone who knows the UCS ecosystem well to clarify before my organization adopts it.
    1. The Cisco UCS system is a totally proprietary and closed system, meaning:
         a) The Cisco UCS chassis cannot support other vendors' blades. For example, you can't place an HP, IBM, or Dell blade in a Cisco UCS 5100 chassis.
         b) The Cisco UCS can only be managed by the Cisco UCS Manager - no 3rd-party management tool can be leveraged.
         c) Two Cisco 6100 Fabric Interconnects can indeed support 320 server blades (40 chassis, as Cisco claims), but only with an unreasonable amount of oversubscription. The more realistic number is two (2) 6100s for every four (4) 5100 UCS chassis (32 servers), which will yield a more reasonable oversubscription ratio of 4:1.
         d) A maximum of 14 UCS chassis can be managed by the UCS Manager, which resides in the 6100 Fabric Interconnects. This creates islands of management domains -- 14 chassis per island -- which presents an interesting challenge if you indeed try to manage 40 UCS chassis (320 servers) with the same pair of Fabric Interconnects.
         e) The UCS blade servers can only use Cisco NIC cards (Palo).
         f) Cisco Palo cards use a proprietary version of interface virtualization and cannot support the open SR-IOV standard.
         g) The Cisco 5100 chassis can only be uplinked to the Fabric Interconnects, so any shop that already has ToR switches will have to replace them.
    I would really appreciate it if anyone can give me bulleted responses to these issues. I already posted this question on Brad Hedlund's web blog -- he really knows his stuff. But I know there are a lot of knowledgeable professionals on here, too.
    Thanks!

    Robert, thank you very much for those most informative answers. I really appreciate it.
    I have responded to your points in blue. Can you look them over right quick and give me your thoughts?
    Thanks, again!
           a) The Cisco UCS chassis cannot support other vendors' blades. For example, you can't place an HP, IBM, or Dell blade in a Cisco UCS 5100 chassis.
    [Robert] - True.  This is standard in the industry.  You can't put IBM blades in an HP c7000 Chassis or vice-versa can you?
    I believe the Dell blade chassis can support blades from HP and IBM. I would have to double-check that.
         b) The Cisco UCS can only be managed by the Cisco UCS manager – no 3rd party management tool can be leveraged.
    [Robert] - False. UCS has a completely open API. You can use XML, SMASH/CLP, IPMI, or WS-MAN. There are already applications with built-in support for UCS from vendors such as HP (OpenManage), IBM (Tivoli), BMC BladeLogic, Altiris, Netcool, etc. There's even a Microsoft SCOM plugin being developed. See here for more information: http://www.cisco.com/en/US/prod/ps10265/ps10281/ucs_manager_ecosystem.html
    This is very interesting. I had no idea that the Cisco UCS ecosystem can be managed by other vendor management solutions. Can a 3rd party platform be used in lieu of UCS manager (as opposed to just using 3rd party plug-ins) ? Just curious...
           c) Two Cisco 6100 Fabric Interconnects can indeed support 320 server blades (40 chassis, as Cisco claims), but only with an unreasonable amount of oversubscription. The more realistic number is two (2) 6100s for every four (4) 5100 UCS chassis (32 servers), which will yield a more reasonable oversubscription ratio of 4:1.
    [Robert] Your oversubscription rate can vary from 2:1 all the way to 8:1, depending on how many uplinks are in use with the current IOM hardware. With a 6120XP you can support "up to" 20 chassis (using a single 10G uplink between each chassis and the FI), assuming you're using the expansion slot for your Ethernet & FC uplink connectivity. You can support "up to" 40 chassis with the 6140XP in the same regard. Depending on your bandwidth requirements, you might choose to scale this to at least 2 uplinks per IOM/chassis (20 Gb of redundant uplink). This would give you 2 uplinks from each chassis to each Interconnect, supporting a total of 80 servers with an oversubscription rate of 4:1. Choosing the level of oversubscription requires an understanding of the underlying technology - 10G FCoE. FCoE is a lossless technology which provides greater efficiency in data transmission than standard Ethernet. No retransmissions & no dropped frames = higher performance & efficiency. Due to this efficiency you can allow for higher oversubscription rates, because there is less chance of contention. Of course, each environment is unique. If you have some "really" high bandwidth requirements, you can increase the uplinks between the FI & chassis. For most customers we've found that 2 uplinks never come close to saturation. Your best bet is to analyse/monitor the actual traffic and decide what you require.
    Actually, I should have checked this out for myself. What I posted was preposterous and I should have spotted that right off the bat. And you should have hung me out to dry for it! :-) Correct me if I'm wrong, but two (2) 6140 FIs can handle A LOT more than just 4 UCS blade chassis, as I stated earlier. In fact, if a 4:1 oversubscription ratio is desired (as in my question), two 6140 FIs can handle 20 UCS chassis, not 4. Each chassis will have 4 total uplinks to the 6140s - 2 uplinks per FEX, each FEX to its FI. That equates to 160 servers.
    If 320 servers are desired, the oversubscription ratio will have to go up to 8:1 - each chassis with 2 uplinks, one from each FEX to each FI.
    Is all this about oversubscription ratios correct?
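    For what it's worth, the arithmetic behind those ratios (assuming 8 half-width blades per chassis at 10 Gb each per fabric, as in the discussion above):

    8 blades x 10 Gb   = 80 Gb of server-facing bandwidth per IOM
    2 uplinks x 10 Gb  = 20 Gb toward the FI  ->  80/20 = 4:1
    1 uplink x 10 Gb   = 10 Gb toward the FI  ->  80/10 = 8:1

    At 8:1, a 6140's 40 fixed ports can take one uplink from each of 40 chassis (320 servers) per fabric.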
           d) A maximum of 14 UCS chassis can be managed by the UCS Manager, which resides in the 6100 Fabric Interconnects. This creates islands of management domains -- 14 chassis per island -- which presents an interesting challenge if you indeed try to manage 40 UCS chassis (320 servers) with the same pair of Fabric Interconnects.
    [Robert] False. Since the release of UCS we have limited the # of chassis supported. This is to ensure a controllable deployment in customer environments. With each version of software released we're increasing that #. The # of chassis is theoretically limited only by the # of ports on the Fabric Interconnects (taking into account your uplink configuration). With the latest version, 1.4, the supported chassis count has been increased to 20. Most customers are test-driving UCS and are nowhere near this limitation. Customers requiring more than this amount (or the full 40-chassis limit) can discuss it with their Cisco account manager for special consideration.
    It's always funny how competitors comment on "UCS management islands". If you look at the competition and take into consideration the chassis, KVM, console/iLO/RAC/DRAC, Ethernet switch, and Fibre Channel switch management elements, UCS has a fraction of the management points when scaling beyond hundreds of servers.
    I understand. Makes sense.
         e) The UCS blade servers can only use Cisco NIC cards (Palo).
    [Robert] False. Any UCS blade server can use an Emulex CNA, a QLogic CNA, an Intel 10G NIC, a Broadcom 10G NIC, or... our own Virtual Interface Card - aka Palo. UCS offers a range of options to suit various customer preferences.
    Interesting. I didn't know that.
         f) Cisco Palo cards use a proprietary version of interface virtualization and cannot support the open SR-IOV standard.
    [Robert] Palo is SR-IOV capable. Palo was deliberately designed not to be SR-IOV dependent. This removes dependencies on the OS vendors to provide driver support. As we have control over this, Cisco can provide the drivers for various OSes without relying on vendors to release patch/driver updates. Microsoft, Red Hat, and VMware have all been certified to work with Palo.
    Correct, SR-IOV is a function of the NIC card and its drivers, but it does need support from the hypervisor. That having been said, can a non-Cisco NIC (perhaps one of the ones you mentioned above) that supports SR-IOV be used with a Cisco blade server in the UCS chassis?
           g) The Cisco 5100 chassis can only be uplinked to the Fabric Interconnects, so any shop that already has ToR switches will have to replace them.
    [Robert] Not necessarily true. The Interconnects are just that - they interconnect the chassis to your Ethernet & FC networks. The FIs act as your access switches for the blades, which should connect into your distribution/core, solely due to the 10G interface requirements. I've seen people uplink UCS into a pair of Nexus 5000s which in turn connect to their data center core 6500s/Nexus 7000s. This doesn't mean you can't reprovision or make use of ToR switches; you're just freeing up a heap of ports that would be required to connect a legacy non-unified-I/O chassis.
    I understand what you mean, but if a client has a ToR design already in use, those ToRs must be ripped out. For example, let's say they had Brocade B-8000s at the ToR; it's not as if they can keep them in place and connect the UCS 5100 chassis to them. The 5100 needs the FIs.
    Regards,
    Joe

  • Clearing UCS Manager fault about unused adapter uplink from C-series

    I have a number of C240M3 servers in a UCS infrastructure.  They all have two VIC 1225 adapters and when they were initially connected, the technicians accidentally connected the second adapter on some of them to the FEXs.  For now at least, we only want the first adapters used (both ports - one to each FEX).  However, since adapter 2 was connected at one point, UCS Manager won't stop giving these faults:
    F0206 - Adapter 5/2 is unreachable
    F0209 - Adapter uplink interface 5/2/1 link state: unavailable
    There doesn't seem to be a way to clear the faults without reconnecting that second adapter. Re-acknowledging the server didn't help, and I'm not sure I want to try decommissioning the servers and then getting them to appear in UCS again.
    Does anyone know how to manually clear these faults?

    Hi Kenny,
    Correct; the second PCIe card is not connected to a FEX anymore. I tried re-acknowledging the FEXes, but that didn't help, unfortunately. (If it helps, the fault appears in the individual servers' Faults tab - the FEXes don't show any faults.)
