UCS chassis QoS support

I bought a UCS chassis with two blades that are 'qualified' for Cisco voice products (CM, CCX, etc). I decided not to buy the $20,000 worth of VMware Enterprise Plus licenses and the Nexus 1000v switch, since the ROI is simply not there (I'd rather have 4 separate pieces of hardware than spend that much money on software). Anyway, so now my questions are:
1. What QoS options are available (if any) on the uplink from the chassis to the 6120s, and from the 6120s to my 4500 switch?
2. What QoS options are available (if any) on individual hosts inside my VMware environment?
3. Can I buy the Nexus 1000v switch (which I believe is a few grand) and get some QoS without buying the stupidly expensive VMware Enterprise Plus?
...or do I just return the chassis?
Thanks!

Terry,
The UC on UCS deployment design took the hardware features and UC software requirements into account and concluded that there will be no congestion for the IP traffic until it leaves the Fabric Interconnect.
So for the following traffic flow, no additional QoS (L2 CoS) configuration is needed:
UC App > vSwitch > physical adapter > IOM > FI > upstream switch
However, if you have other virtual machines in the blade that contend for bandwidth, then your environment might need the Nexus 1000v or QoS configuration on the UCS.
Please refer to the following doc for additional information:
http://docwiki.cisco.com/wiki/QoS_Design_Considerations_for_Virtual_UC_with_UCS#QoS_Design_Considerations_for_VMs_with_Cisco_UCS_B-Series_Blade_Servers
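If you do end up needing QoS inside UCS later (e.g. other VMs start contending for bandwidth), the knob is the QoS system classes in UCSM. A minimal sketch from the UCSM CLI follows; the scope paths, class name and CoS value are illustrative, so verify them against the CLI guide for your UCSM version:

```
UCS-A# scope eth-server
UCS-A /eth-server # scope qos
UCS-A /eth-server/qos # scope eth-classified platinum
UCS-A /eth-server/qos/eth-classified # set cos 5
UCS-A /eth-server/qos/eth-classified # enable
UCS-A /eth-server/qos/eth-classified # commit-buffer
```

A QoS policy on the vNIC then references that class so the UC traffic carries the chosen CoS across the fabric.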
HTH
Padma

Similar Messages

  • QoS on the UCS Chassis


    Terry,
    1. UCS traffic will maintain its CoS markings on egress from the UCS system. It's then up to the upstream switch to honor those markings accordingly. All CoS markings between the chassis and FI will be honored on the IOM to prioritize traffic accordingly in the event of congestion, using PFC. Have a read of the attached paper about UCS & QoS; it should help you get an understanding of how the system handles QoS/CoS.
    2. VMware vSphere 5 does offer some new automatic quality of service options. Have a read through:
    What's new in vSphere 5 Networking
    3. No. Any distributed switch (VMware vDS or Cisco 1000v DVS) requires the Enterprise Plus license. This includes both the Cisco 1000v and VN-Link in hardware (PTS with the Palo M81KR adapter).
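    Illustrating point 1: once frames leave the FI with their CoS intact, the upstream switch just needs to trust them. On a Catalyst 4500 running classic (non-MQC) QoS, that might look like the sketch below; the interface name is assumed:

    ```
    qos
    interface TenGigabitEthernet1/1
     description Uplink from UCS 6120 Fabric Interconnect
     qos trust cos
    ```

    On newer supervisors QoS is MQC-based, so check the platform configuration guide for the equivalent policy.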
    Regards,
    Robert

  • UCS Chassis 5108 both IOMs inaccessible

    Dear forum users,
    Problem description: Sometimes our UCS Chassis 5108 and both IOMs suddenly become INACCESSIBLE. This happens frequently (it happened 5 times in just one month).
    Temporary solution: unplug and re-plug the chassis power cords twice, and the chassis comes back to normal.
    Once the chassis is in the inaccessible state, this is what we see on the chassis:
    - All the blade LEDs on chassis-1 show amber
    - The 8 ports on IOM A and IOM B show NO LED lit (not even amber)
    - A sudden increase in fan speed on chassis-1
    - The FI ports connected to chassis-1 show NO LED lit
    - Please find attached (Chassis-1_Fault.jpg) Fault message on chassis-1
    We had tried the following:
    - Tried re-seating the uplink cable
    - Tried replacing the uplink cable; the port is still down
    - Reboot FI
    - Decommission the chassis-1
    - Recommission the chassis-1
    Below is our setup environment:
    - 2x UCS 6248UP
    - 2x UCS 5108
    - 4x UCS 2208XP (2 IOM per chassis)
    - 2x UCS B200 M2
    - 5x UCS B250 M2
    - Firmware 2.0(3b)
    - Each IOM has 4 Uplink cables connect to FI
    - Chassis Discovery Policy
    1. Action (4 Links)
    2. Link Grouping Preference (Port Channel)
    We hope someone has experience with the same problem or has some advice to share.
    Please let me know if you need any further details of the problem or other information.
    Looking forward to a positive reply.

    Hello Mohamed,
    Please open a TAC service request with UCSM and Chassis tech support log files.
    Also, include approximate date & time of the event.
    Padma

  • UCS 1.4 support for PVLAN

    Hi all,
    Cisco advises that UCS 1.4 supports PVLANs. But I see the following comments about PVLANs in UCS 1.4:
    "UCS extends PVLAN support for non-virtualised deployments (without vSwitch)."
    "UCS release 1.4(1) provides support for isolated PVLAN support for physical server access ports or for Palo CNA vNIC ports."
    Does this mean PVLANs won't work for virtual machines if the VMs are connected to UCS by a Nexus 1000v or vDS, even though I am using the Palo (M81KR) card?
    Could anybody confirm that?
    Thank you very much!

    Have not got that working so far...how would that traffic flow work?
    1000v -> 6120 -> 5020 -> 6500s
    Two 10GbE interfaces, one on each fabric, to the blades. All VLANs (including the PVLAN parent and child VLAN IDs) are defined and added to the server templates, so they are propagated to each ESX host.
    At this point, nothing can do layer 3 except the 6500s. Let's say my primary VLAN ID for one PVLAN is 160 and the isolated VLAN ID is 161...
    On the Nexus 1000v:
    vlan 160
      name TEN1-Maint-PVLAN
      private-vlan primary
      private-vlan association 161
    vlan 161
      name TEN1-Maint-Iso
      private-vlan isolated
    port-profile type vethernet TEN1-Maint-PVLAN-Isolated
      description TEN1-Maint-PVLAN-Isolated
      vmware port-group
      switchport mode private-vlan host
      switchport private-vlan host-association 160 161
      no shutdown
      state enabled
    port-profile type vethernet TEN1-Maint-PVLAN-Promiscuous
      description TEN1-Maint-PVLAN-Promiscuous
      vmware port-group
      switchport mode private-vlan promiscuous
      switchport private-vlan mapping 160 161
      no shutdown
      state enabled
    port-profile type ethernet system-uplink
      description Physical uplink from N1Kv to physical switch
      vmware port-group
      switchport mode trunk
      switchport trunk allowed vlan all
      channel-group auto mode on mac-pinning
      no shutdown
      system vlan 20,116-119,202,1408
      state enabled
    This works fine to and from VMs on the same ESX host (the PVLAN port-profiles work as expected). If I move a VM over to another host, nothing works; I'm pretty sure not even within the same promiscuous port-profile. How does the 6120 handle this traffic? What do the frames get tagged with when they leave the 1000v?

  • QoS support for UltraSparc T1

    Hi,
    I am just curious whether there is any hardware or OS (e.g., Solaris 10) QoS support for L2 cache sharing among threads on an UltraSPARC T1 machine? Thanks.

    Not explicitly. There are some things done by the OS, like stack slewing, that will, as a side effect, minimize hot bank problems in the level-2 cache.
    Stack slewing:
    http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/os/exec.c

  • UCS chassis PSU error

    Hi team ,
    We are getting an error on a PSU in a Cisco UCS chassis.
    It shows NA under the performance & voltage tab.
    We tried reseating the PSU as well as removing the power cable & connecting it back.
    The PSU is powered on & operational.
    Can anyone suggest why this issue occurs & how to get rid of it?

    Hi Ashish,
    It is possible that you are hitting I2C issues. I recommend opening a TAC case to have an engineer look at your environment and confirm that the I2C bus is congested.
    Here are your options to clear the I2C bus:
    1. You can reseat the IOMs. You will need to reseat one IOM at a time, starting with the IOM connected to the subordinate fabric. This should not cause a complete outage; however, you should perform it during a maintenance window. Please note, this method does not clear the I2C bus 100% of the time. You may have to use option 2 if that is the case.
    2. The second option is to power the chassis down and then back up. This method clears the I2C bus 100% of the time. However, the downside is that it is service impacting and will need to be done during a maintenance window.

  • UCS Chassis Not Recognized

    Hello,
    I am having a strange issue with a new UCS environment that we just brought online. One of the four UCS chassis had an issue with its firmware upgrade, specifically the IO modules. All of the other components on all of the other UCS chassis upgraded just fine, but these two IO modules hung during the firmware upgrade. I tried to downgrade the firmware on all of the other components to match the hung IO modules to see if they would come back online, and they did not. I also moved the IO modules into a known working chassis and they worked after an auto-upgrade of the firmware! I then swapped them back to their original chassis thinking that they would be OK, but they went down again, stating a communication problem, and the 6140 reported them as being link-down. I restarted the entire chassis and also swapped the uplink cables, and still they show offline. As a last-ditch effort I deleted the entire chassis and went back to re-add it, and now it will not even detect the chassis.
    I know that the IO modules are fine because I can put them in another chassis and they will come up, but as soon as I put them back in their original chassis, the 6140s report the link as being down and it will not recognize the chassis. Does this sound like a hardware issue with the chassis itself? Any ideas?

    Can you try using the cables from your "good" Chassis on the failing one?
    I'd suggest you completely power down the bad chassis.  Take the cables from a good chassis, move them over, then power the bad Chassis back up.
    If this doesn't work let me know.  Also ensure your Chassis discovery policy is set to 1 link for now.
    Robert

  • UCS Chassis I/O problems

    Anyone else having Cisco UCS 6140/5108 chassis problems that cause the fans to act up and the I/O card to have power and temperature alarms (version 1.3.0), even when the environmental conditions are OK?

    FYI, there are a couple of fixes in this release for these problems. I believe the problem you are describing is related to this bug, which is addressed in the release discussed.
    http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCtl43716

  • UCS chassis uplinks recommendation

    Hi All,
    We currently have a UCS B chassis with 4 blades & 4 uplinks to the FI 6124 (2 from each FEX). We are now looking to populate the chassis with 4 additional blades & were wondering if there is a best practice/guidance document on how many uplinks one should have with a fully populated chassis.
    Thanks
    Kassim

    Kassim
    It all depends on the environment and what degree of oversubscription you are comfortable with between the IOM and the FI.
    As you are already running UCS, I would start by looking at the current utilization.
    The 2104 is characterized by 8 downlinks (server-facing, or HIFs) and up to 4 uplinks (FI-facing, or NIFs).
    A starting config can be 1, 2 or 4 links, i.e. 2^x links for static pinning.
    Within UCS, QoS can be configured so you can specify a minimum bandwidth guarantee at times of congestion.
    If you just want to get a feel for what's out there, most of the time I have come across 2 links per side.
    Again, that is what I have seen most, and not a best practice/guidance by any stretch.
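    As a sketch of that first step (checking utilization), the IOM-to-FI port counters are visible from the FI's NX-OS shell; the port number below is illustrative:

    ```
    UCS-A# connect nxos
    UCS-A(nxos)# show interface ethernet 1/1
    ```

    The input/output rate counters there give a feel for how close the existing uplinks are to saturation before you add links.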
    Thanks
    --Manish

  • UCS B200 M2 support

    Hi All,
    I have a customer who has UCS B200 M2 with 5 blades installed. I will use one blade to install the following:
    1. PUB CUCM 8.6
    2. PUB CUC
    3. Cisco Unified Presence
    All of the above will be installed on one blade, and the SUB CUCM, SUB CUC and Presence will be installed on a different blade in the same chassis.
    My question: with regard to Cisco support, is it possible to install a SUB CUCM and other virtual machines on the same blade, or must the blade be dedicated to Cisco Unified applications such as CUCM and CUC, i.e. no other virtual machine should be installed?
    Regards

    Check out this link:
    http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines
    Virtual Machines (VMs) are categorized as follows for purposes of this UC support policy:
    3rd-party application VMs (or simply 3rd-party app VMs): a VM for a non-Cisco application, such as VMware vCenter, 3rd-party Cisco Technology Developer Program applications, non-Cisco-provided TFTP/SFTP/DNS/DHCP servers, directories, groupware, file/print, CRM, customer home-grown applications, etc.
    Cisco does not support non-UC or 3rd-party application VMs running on "Cisco UC Virtualization Hypervisor" or "Cisco UC Virtualization Foundation" (as described at Unified Communications VMware Requirements). If you want to deploy non-UC / 3rd-party applications, you must deploy on VMware vSphere Standard, Advanced, Enterprise or Enterprise Plus Edition.
    Each Cisco UC app supports one of the following four types of co-residency:
    Full:
    The co-resident application mix may contain UC app VMs together with Cisco non-UC VMs and 3rd-party application VMs. The deployment must follow the General Rules for Co-residency and the Physical/Virtual Hardware Sizing rules below. The deployment must also follow the Special Rules for non-UC and 3rd-party Co-residency below.
    Keep reading on the link:
    HTH, please rate all useful posts!
    Chris

  • Number of class maps (QOS) supported on 7200 and 7600

    Hi,
    I have a few queries on class maps for QoS; putting them forward for your comments/inputs.
    1. I want to know if there are any limitations on the number of class maps (to be applied inbound/outbound) that can be configured on the 7200 and 7600 routers.
    2. Is there any limitation on the number of class maps in general, or will it depend on the sum total of bandwidth configured in the classes? I mean, which one will be the deciding factor: is the limit with respect to the number of configured classes, or can the number of classes not exceed the consolidated bandwidth configured on the interface?
    Kindly share details on the same and any recommendations.
    Thanks in advance!

    From: http://www.cisco.com/en/US/tech/tk543/tk545/technologies_q_and_a_item09186a00800cdfab.shtml
    "Q. How many classes does a Quality of Service (QoS) policy support?
    A. In Cisco IOS versions earlier than 12.2 you could define a maximum of only 256 classes, and you could define up to 256 classes within each policy if the same classes are reused for different policies. If you have two policies, the total number of classes from both policies should not exceed 256. If a policy includes Class-Based Weighted Fair Queueing (CBWFQ) (meaning it contains a bandwidth [or priority] statement within any of the classes), the total number of classes supported is 64.
    In Cisco IOS versions 12.2(12),12.2(12)T, and 12.2(12)S, this limitation of 256 global class-maps was changed, and it is now possible to configure up to 1024 global class-maps and to use 256 class-maps inside the same policy-map."
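    To make the counting concrete, here is a small sketch of a CBWFQ policy; class names and rates are illustrative:

    ```
    class-map match-all VOICE
     match dscp ef
    class-map match-all VIDEO
     match dscp af41
    !
    policy-map WAN-EDGE
     class VOICE
      priority 512
     class VIDEO
      bandwidth 1024
     class class-default
      fair-queue
    ```

    WAN-EDGE holds three classes (VOICE, VIDEO and class-default); because it uses priority/bandwidth statements (CBWFQ), the pre-12.2(12) limit for such a policy is 64 classes rather than 256.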

  • Can IPV6 QOS support in Cisco 3750x switches

    Hi 
    I have tried IPv6 QoS using class maps on Catalyst 3750 switches, but the platform does not support it.
    Has anyone configured IPv6 QoS on Cisco 3750-X switches? Is it supported?
    Cisco 3750 config
    policy-map up
      class bwtest-up
      police 2048000 128000 exceed-action drop
    policy-map down
     class bwtest-down
      police 512000 128000 exceed-action drop
      trust dscp
    class-map match-all bwtest-up
     match access-group name bwup
    class-map match-all bwtest-down
     match access-group name bwdown
    ipv6 access-list bwup
     permit ipv6 2402:xxxx:x:x::/64 any
    ipv6 access-list bwdown
     permit ipv6 any 2402:xxxx:x:x::/64
    L3(config)#int g1/0/4
    L3(config-if)#service-policy input up
    QoS: class(bwtest-up) IPv6 class not supported on interface GigabitEthernet1/0/4 ( error)
    Please help!

    interface GigabitEthernet1/0/4
     description ##Test LAN-IPV##
     no switchport
     bandwidth 2048
     no ip address
     load-interval 30
     speed 100
     duplex full
     ipv6 address 2402:xxxx:x:x::1/64
     ipv6 enable
     ipv6 ospf 200 area 0
    end
    Switch software version:
    Cisco IOS Software, C3750 Software (C3750-IPSERVICESK9-M), Version 12.2(55)SE9, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2014 by Cisco Systems, Inc.
    Compiled Mon 03-Mar-14 22:45 by prod_rel_team
    Image text-base: 0x01000000, data-base: 0x02F00000
    ROM: Bootstrap program is C3750 boot loader
    BOOTLDR: C3750 Boot Loader (C3750-HBOOT-M) Version 12.2(44)SE5, RELEASE SOFTWARE (fc1)
    Cherry uptime is 6 days, 7 hours, 23 minutes
    System returned to ROM by power-on
    System restarted at 07:04:50 IST Thu Mar 19 2015
    System image file is "flash:/c3750-ipservicesk9-mz.122-55.SE9.bin"

  • UCS C-series support for Serial Attached SCSI (SAS) HBA

    Hi Team,
    Do we have a UCS C-series solution to support a Serial Attached SCSI (SAS) HBA or anything like this?
    Sample is from third party: SC08Ge Host Bus Adapter
    Reference:
    http://h18000.www1.hp.com/products/servers/proliantstorage/adapters/sc08ge/index.html
    Thanks,
    Jojo

    Jojo,
    Currently we do not have a specific solution for Serial Attached SCSI. The adapters that we support can be seen at the following link.
    http://www.cisco.com/en/US/prod/ps10265/ps10493/c-series_adapter.html
    Thanks,
    Bill

  • QoS supported on EHWIC-D-8ESG?

    Hi,
    I'm finding it very difficult to find any documentation on which QoS features are supported on the Cisco 1921 router with the EHWIC-D-8ESG module.
    CISCO1921/K9
    Cisco IOS Software, C1900 Software (C1900-UNIVERSALK9-M), Version 15.2(4)M5, RELEASE SOFTWARE (fc2)
    EHWIC-D-8ESG
    Does it support any marking, policing and queueing? How does it handle incoming CoS and DSCP markings; are these markings kept intact?
    Daniel Dib
    CCIE #37149
    Please rate helpful posts.       

    Hi Daniel,
    So per the data sheet on these modules at:
    http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/high-speed-wan-interface-cards/data_sheet_c78-660124.html
    It states it supports IOS QoS. Therefore, classification (including NBAR), marking, LLQ/CBWFQ and even HQoS policies are all supported. Treat it like any other GE interface and apply a service-policy to it.
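    As a sketch of such an HQoS policy on the EHWIC port (the interface name, rates and class names are assumed, not taken from the data sheet):

    ```
    class-map match-all VOICE
     match dscp ef
    !
    policy-map CHILD-QUEUE
     class VOICE
      priority percent 20
     class class-default
      fair-queue
    !
    policy-map PARENT-SHAPE
     class class-default
      shape average 50000000
      service-policy CHILD-QUEUE
    !
    interface GigabitEthernet0/1/0
     service-policy output PARENT-SHAPE
    ```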
    HTH.
    -tim

  • UCS Shortcomings?

    I regularly hear a few specific arguments critiquing the UCS that I would like someone who knows the UCS ecosystem well to clarify before my organization adopts it.
    1. The Cisco UCS system is a totally proprietary and closed system, meaning:
     a) The Cisco UCS chassis cannot support other vendors' blades. For example, you can't place an HP, IBM or Dell blade in a Cisco UCS 5100 chassis.
     b) The Cisco UCS can only be managed by the Cisco UCS Manager; no 3rd-party management tool can be leveraged.
     c) Two Cisco 6100 Fabric Interconnects can indeed support 320 server blades (40 chassis, as Cisco claims), but only with an unreasonable amount of oversubscription. The more realistic number is two (2) 6100s for every four (4) 5100 UCS chassis (32 servers), which will yield a more reasonable oversubscription ratio of 4:1.
     d) A maximum of 14 UCS chassis can be managed by the UCS Manager, which resides in the 6100 Fabric Interconnects. This creates islands of management domains: 14 chassis per island, which presents an interesting challenge if you indeed try to manage 40 UCS chassis (320 servers) with the same pair of Fabric Interconnects.
     e) The UCS blade servers can only use Cisco NIC cards (Palo).
     f) Cisco Palo cards use a proprietary version of interface virtualization and cannot support the open SR-IOV standard.
     g) The Cisco 5100 chassis can only be uplinked to the Fabric Interconnects, so any shop that already has ToR switches will have to replace them.
    I would really appreciate it if anyone can give me bulleted responses to these issues. I already posted this question on Brad Hedlund's web blog -- he really knows his stuff. But I know there are a lot of knowledgeable professionals on here, too.
    Thanks!

    Robert, thank you very much for those most informative answers. I really appreciate it.
    I have responded to your points in blue. Can you look them over right quick and give me your thoughts?
    Thanks, again!
     a) The Cisco UCS chassis cannot support other vendors' blades. For example, you can't place an HP, IBM or Dell blade in a Cisco UCS 5100 chassis.
    [Robert] - True. This is standard in the industry. You can't put IBM blades in an HP c7000 chassis or vice versa, can you?
    I believe the Dell blade chassis can support blades from HP and IBM. I would have to double-check that.
         b) The Cisco UCS can only be managed by the Cisco UCS manager – no 3rd party management tool can be leveraged.
    [Robert] - False. UCS has a completely open API. You can use any of XML, SMASH/CLP, IPMI, WS-MAN. There are already applications that have built-in support for UCS from vendors such as HP (OpenManage), IBM (Tivoli), BMC BladeLogic, Altiris, Netcool etc. There's even a Microsoft SCOM plugin being developed. See here for more information: http://www.cisco.com/en/US/prod/ps10265/ps10281/ucs_manager_ecosystem.html
    This is very interesting. I had no idea that the Cisco UCS ecosystem can be managed by other vendors' management solutions. Can a 3rd-party platform be used in lieu of UCS Manager (as opposed to just using 3rd-party plug-ins)? Just curious...
     c) Two Cisco 6100 Fabric Interconnects can indeed support 320 server blades (40 chassis, as Cisco claims), but only with an unreasonable amount of oversubscription. The more realistic number is two (2) 6100s for every four (4) 5100 UCS chassis (32 servers), which will yield a more reasonable oversubscription ratio of 4:1.
    [Robert] Your oversubscription rate can vary from 2:1 all the way to 8:1, depending on how many uplinks are in use with the current IOM hardware. With a 6120XP you can support "up to" 20 chassis (using single 10G uplinks between chassis and FI), assuming you're using the expansion slot for your Ethernet & FC uplink connectivity. You can support "up to" 40 chassis using the 6140XP in the same regard. Depending on your bandwidth requirements, you might choose to scale this to at least 2 uplinks per IOM/chassis (20G redundant uplink). This would give you 2 uplinks from each chassis to each interconnect, supporting a total of 80 servers with an oversubscription rate of 4:1. Choosing the level of oversubscription requires an understanding of the underlying technology: 10G FCoE. FCoE is a lossless technology which provides greater efficiencies in data transmission than standard Ethernet. No retransmissions & no dropped frames = higher performance & efficiency. Due to this efficiency you can allow for higher oversubscription rates, due to less chance of contention. Of course, each environment is unique. If you have some "really" high bandwidth requirements you can increase the uplinks between the FI & chassis. For most customers we've found that 2 uplinks never come close to saturation. Your best bet is to analyse/monitor the actual traffic and decide what you require.
    Actually, I should have checked this out for myself. What I posted was preposterous and I should have spotted that right off the bat. And you should have hung me out to dry for it! :-) Correct me if I'm wrong, but two (2) 6140 FICs can handle A LOT more than just 4 UCS blade chassis, as I stated earlier. In fact, if a 4:1 OS ratio is desired (as in my question), two 6140 FICs can handle 20 UCS chassis, not 4. Each chassis will have 4 total uplinks to the 6140s: 2 uplinks per FEX per FIC. That equates to 160 servers.
    If 320 servers are desired, the OS ratio will have to go up to 8:1: each chassis with 2 uplinks, one from each FEX to each FIC.
    Is all this about OS ratios correct?
     d) A maximum of 14 UCS chassis can be managed by the UCS Manager, which resides in the 6100 Fabric Interconnects. This creates islands of management domains: 14 chassis per island, which presents an interesting challenge if you indeed try to manage 40 UCS chassis (320 servers) with the same pair of Fabric Interconnects.
    [Robert] False. Since the release of UCS we have limited the # of chassis supported. This is to ensure a controllable deployment in customer environments. With each version of software released, we're increasing that #. The # of chassis is limited theoretically only by the # of ports on the fabric interconnects (taking into account your uplink configuration). With the latest version, 1.4, the supported chassis count has been increased to 20. Most customers are test-driving UCS and are not near this limitation. Customers requiring more than this amount (or the full 40-chassis limit) can discuss it with their Cisco account manager for special consideration.
    It's always funny how competitors comment on "UCS management islands". If you look at the competition and take into consideration the chassis, KVM, console/iLO/RAC/DRAC, Ethernet switch and fiber switch management elements, UCS has a fraction of the management points when scaling beyond hundreds of servers.
    I understand. Makes sense.
     e) The UCS blade servers can only use Cisco NIC cards (Palo).
    [Robert] False. Any UCS blade server can use an Emulex CNA, a QLogic CNA, an Intel 10G NIC, a Broadcom 10G NIC or... our own Virtual Interface Card, aka Palo. UCS offers a range of options to suit various customer preferences.
    Interesting. I didn't know that.
     f) Cisco Palo cards use a proprietary version of interface virtualization and cannot support the open SR-IOV standard.
    [Robert] Palo is SR-IOV capable. Palo was deliberately designed not to be SR-IOV dependent. This removes dependencies on the OS vendors to provide driver support. As we have control over this, Cisco can provide the drivers for various OSes without relying on vendors to release patch/driver updates. Microsoft, Red Hat and VMware have all been certified to work with Palo.
    Correct, SR-IOV is a function of the NIC card and its drivers, but it does need support from the hypervisor. That having been said, can a non-Cisco NIC (perhaps one of the ones you mentioned above) that supports SR-IOV be used with a Cisco blade server in the UCS chassis?
     g) The Cisco 5100 chassis can only be uplinked to the Fabric Interconnects, so any shop that already has ToR switches will have to replace them.
    [Robert] Not necessarily true. The interconnects are just that: they interconnect the chassis to your Ethernet & FC networks. The FIs act as your access switches for the blades, which should connect into your distribution/core, solely due to the 10G interface requirements. I've seen people uplink UCS into a pair of Nexus 5000s which in turn connect to their data-center core 6500s/Nexus 7000s. This doesn't mean you can't re-provision or make use of ToR switches; you're just freeing up a heap of ports that would be required to connect a legacy non-unified-I/O chassis.
    I understand what you mean, but if a client has a ToR design already in use, those ToRs must be ripped out. For example, let's say they had Brocade B-8000s at the ToR; it's not as if they can keep them in place and connect the UCS 5100 chassis to them. The 5100 needs the FICs.
    Regards,
    Joe
