Cisco UCS C210M2-VCD2 network connectivity

Hi,
I am going to install three C210 M2 servers and I am quite new to the UCS world. As per the design I am supposed to install:
4 x VMs (CUCM, CUC, Presence, UCCX) Primary/Publisher on UCS 01
4 x VMs (CUCM, CUC, Presence, UCCX) Subscriber/Failover on UCS 02
2 x VMs (MeetingPlace) on UCS 03
I have a couple of 3750X access switches for network connectivity. My question is how do I configure the network and the NIC cards on the UCS servers. As far as I know, every server has 2 x 10 Gig NICs and 1 x NIC for integrated management (CIMC). I am confused about how many NICs each VM will have and how to map all of these virtual machine NICs onto the 2 physical NICs per UCS server. Any suggestions please.
Regards,
Asif

Asif,
Nexus 1000V would be helpful here, but if that is not an option and you are using the standard vSwitch, then both onboard NICs should be used as uplinks in a single vSwitch for redundancy.
Here are some design suggestions that might also help:
Redundant physical LAN interfaces are recommended.
One or two pairs of teamed NICs for UC VM traffic. One pair is sufficient on the C210 due to the low load per VM.
One pair of NICs (teamed or dedicated) for VMware-specific traffic (management, vMotion, VMware HA, etc.).
     SIDENOTE: A NIC teaming pair can be split across the motherboard and the PCIe card if the UCS C-Series supports it; this protects against a PCIe card failure.
Once we know exactly what vSwitch you are using, as the last post asked, we may be able to pinpoint more, but this is based on using the standard vSwitch; a rough sketch is below.
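As a rough sketch of that single-vSwitch layout on ESXi 5.x (the vSwitch, vmnic names, port group name and VLAN ID below are placeholders; on ESXi 4.x the equivalent esxcfg-vswitch commands apply):

# Both onboard 10G NICs as active uplinks on one standard vSwitch
# (skip the first command if vSwitch0 already exists, which is the default)
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1
# One port group per traffic type (UC VMs shown here), tagged with its VLAN
esxcli network vswitch standard portgroup add --portgroup-name=UC-VMs --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=UC-VMs --vlan-id=10

Each VM (CUCM, CUC, etc.) then just attaches its virtual NIC to the appropriate port group; the VMs never need to know about the physical NIC pair underneath.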
Hope this helps.

Similar Messages

  • Nexus 5548 to UCS-C series network connectivity.

    Hi There,
    I am new to the Nexus world, mainly from the telecom side. I have a situation where a vendor would like to deploy two Cisco UCS C-Series servers for a voice deployment.
    Each UCS C240 M3 server has 4 NICs: 2 NICs bonded as an 802.1Q trunk will connect to the primary Nexus 5K switch, and the other two, also bonded, will connect to the secondary Nexus switch. We have a vPC domain and no FEXes, so we have to connect these two servers directly to the Nexus 5Ks.
    My question: is it possible with teamed NICs to connect to 2 different Nexus switches?
    Can anyone guide me on how I can achieve this design? See attached.
    Thanks Much

    It will also depend on the server network settings, such as the OS, software switch flavor, and the NIC teaming option on the server.
    x- if you run ESXi, then you may check out the following KB article.
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
    x- if you use N1Kv, then you can use LACP or static port-channeling, but be sure to make it consistent on the upstream switches (a rough sketch is below).
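    As an illustration of the port-channel option on the upstream side (the interface and VLAN numbers are placeholders; the secondary 5K would get the mirror-image config for the other NIC pair, and the server end must be configured to match, LACP or static):

    ! Nexus 5K (primary) - lands the first 2-NIC bond from the server
    feature lacp
    interface port-channel101
      description C240-M3 voice server NIC team
      switchport mode trunk
      switchport trunk allowed vlan 100,110
    interface Ethernet1/11-12
      switchport mode trunk
      switchport trunk allowed vlan 100,110
      channel-group 101 mode active
    ! use "channel-group 101 mode on" instead if the server team is static rather than LACP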
    hope it helps.
    Michael

  • Standby cisco ACE loadbalancer issues (network connectivity)

    Hi ALL,
    We are having issues with the secondary (standby) ACE load balancer module on a 6500 switch. We see that the load balancer is not able to get onto the network, which also leads to problems with fault tolerance. Following is the FT status found on the load balancer for one of the contexts (the same pattern is seen on all the contexts):
    switch/Admin# sh ft group status
    FT Group                     : 1
    Configured Status            : in-service
    Maintenance mode             : MAINT_MODE_OFF
    My State                     : FSM_FT_STATE_ACTIVE
    Peer State                   : FSM_FT_STATE_UNKNOWN
    Peer Id                      : 1
    No. of Contexts              : 1
    'sh arp' on all the contexts shows the gateway/rserver to be unreachable. Please find the output below for one of the contexts (the same pattern is seen on the LB for all other contexts):
    switch/1_Context# sh arp
    Context CSD_Context
    ================================================================================
    IP ADDRESS      MAC-ADDRESS        Interface  Type      Encap  NextArp(s) Status
    ================================================================================
    172.21.128.97   00.00.00.00.00.00  vlan942   GATEWAY    -                   dn
    172.21.128.103  00.0b.fc.fe.1b.09  vlan942   ALIAS      LOCAL     _         up
    172.21.128.105  00.12.43.dc.93.23  vlan942   INTERFACE  LOCAL     _         up
    7.0.0.4         00.0b.fc.fe.1b.09  vlan943   NAT        LOCAL     _         up
    - 7.0.0.6
    172.21.147.196  00.0b.fc.fe.1b.09  vlan943   ALIAS      LOCAL     _         up
    172.21.147.198  00.12.43.dc.93.24  vlan943   INTERFACE  LOCAL     _         up
    172.21.147.200  00.00.00.00.00.00  vlan943   RSERVER    -       * 3 req     dn
    172.21.147.202  00.00.00.00.00.00  vlan943   RSERVER    -       * 2 req     dn
    172.21.147.204  00.00.00.00.00.00  vlan943   RSERVER    -                   dn
    172.21.147.206  00.00.00.00.00.00  vlan943   RSERVER    -                   dn
    172.21.147.208  00.00.00.00.00.00  vlan943   RSERVER    -       * 3 req     dn
    172.21.147.210  00.00.00.00.00.00  vlan943   RSERVER    -       * 2 req     dn
    172.21.147.212  00.00.00.00.00.00  vlan943   RSERVER    -       * 1 req     dn
    172.21.147.214  00.00.00.00.00.00  vlan943   RSERVER    -       * 1 req     dn
    172.21.147.216  00.00.00.00.00.00  vlan943   RSERVER    -       * 3 req     dn
    7.0.0.1         00.0b.fc.fe.1b.09  vlan943   NAT        LOCAL     _         up
    - 7.0.0.3
    The problem is that we see this only on the secondary load balancer; the primary is running just fine.
    I can also see some traffic denials in the Admin context under resource usage:
    switch/Admin# sh resource usage
                                                         Allocation
            Resource         Current       Peak        Min        Max       Denied
    Context: Admin
      conc-connections              9          9     160000    6560000          0
      mgmt-connections              0         46       2000      82000          0
      proxy-connections             0          4      20972     859830          0
      xlates                        0          0      20972     859830          0
      bandwidth                     0   17715713   10000000  535000000    5799749
        throughput                  0   17710993   10000000  410000000    5799749
        mgmt-traffic rate           0       4720          0  125000000          0
      connection rate               0         43      20000     820000          0
      ssl-connections rate          0          0        100       4100          0
      mac-miss rate                 0          1         40       1640          0
      inspect-conn rate             0          0        120       4920          0
      acl-memory                56336      56336    1570072   64460552          6
      sticky                        0          0      83886          0          0
      regexp                        0          0      20972     859832          0
      syslog buffer             82944      82944      82944    3447808          0
      syslog rate                   0         44       2000      82000         25
    Context: INTEGRATION_Context
      conc-connections              0       3934     160000          0          0
      mgmt-connections              0         98       2000          0          0
      proxy-connections             0         33      20972          0          0
      xlates                        0          0      20972          0          0
      bandwidth                     0   10019910   10000000  125000000      40857
        throughput                  0   10000000   10000000          0      40857
        mgmt-traffic rate           0      19910          0  125000000          0
      connection rate               0         49      20000          0          0
      ssl-connections rate          0          0        100          0          0
      mac-miss rate                 0         32         40          0          0
      inspect-conn rate             0         58        120          0          0
      acl-memory                11920      11920    1570072          0          0
      sticky                        0          1      83886          0          0
      regexp                        0          0      20972          0          0
      syslog buffer                 0      82944      82944    3447808          0
      syslog rate                   0        312       2000          0          0
    These two contexts above are the only ones whose bandwidth resource usage exceeds the limit, but I am not sure this is the issue: since there is just no traffic on the secondary, how can the bandwidth reach the threshold? Can anyone throw some light on this issue?
    thanks and regards
    kiran

    VLAN configuration on the Standby_ACE switch:
    svclc multiple-vlan-interfaces
    svclc module 1 vlan-group 1,4,12,13,
    svclc vlan-group 1  968
    svclc vlan-group 12  132
    svclc vlan-group 13  367-372,374,375,379,380,538,805,807,808,818,913,915
    svclc vlan-group 13  917-920,922-924,933,934,937,938,942-949,972,976-979,983
    svclc vlan-group 13  984
    ip subnet-zero
    no ip source-route
    VLANs on the standby ACE:
    switch/Admin# sh vlans
    Vlans configured on SUP for this module
    vlan132  vlan360  vlan367-375  vlan379-380  vlan538  vlan805  vlan807-808  vlan818  vlan913  vlan915
    vlan917-920  vlan922-924  vlan930  vlan933-934  vlan937-938  vlan942-949  vlan968  vlan971-972
    vlan976-979  vlan983-984
    switch/Admin#
    Active_LB_host_switch is the switch hosting the active ACE; it is connected on Ten7/4 and Ten8/4, which are bundled into
    port-channel Po72.
    CDP neighbor hosting the active ACE
    Active_LB_host_switch
                     Ten 7/4           148          R S I     WS-C6513  Ten 7/4
    Active_LB_host_switch
                     Ten 8/4           156          R S I     WS-C6513  Ten 8/4
    Po72 allows all the VLANs configured for the ACE modules.
    Port                Vlans allowed on trunk
    Po72                132,140,181,359-383,538,668,702,805-808,815-816,818-820,836,907,909-920,922-925,
                929-935,937-949,967-973,976-984,987,3212
    VLAN 968 is the FT VLAN, and the same has been allowed on the trunk port.
    Everything looks good to me, but I am still not sure why the ACE module is not coming onto the network. It was working fine
    a few months back, but all of a sudden it lost network connectivity. I am not even able to ping the physical IP of the
    ACE module.
    thanks and regards
    kiran
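    For reference, commands along these lines are what is usually checked next to see whether the FT VLAN is actually passing between the two modules (illustrative only; exact command forms vary slightly between ACE software versions, and the peer/group/VLAN numbers are taken from the output above):

    switch/Admin# show ft peer 1 detail
    switch/Admin# show ft group 1 detail
    switch/Admin# show vlans
    switch/1_Context# show interface vlan 942

    If the FT peer never leaves the UNKNOWN state, the usual suspects are the FT VLAN (968 here) not actually forwarding end to end across Po72, or a mismatched FT interface configuration on one of the modules.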

  • Migration Options UCS-C210M2-VCD2

    Hi All,
    With the recent EOLs on these M2 bundles, I have a question as to which would be the best migration path, or which bundle would be similar to this:
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6790/ps5748/ps378/end_of_life_notice_c51-701928.pdf
    thank you!

    Hello Kuku,
    It is a Voice sales team PID. If you are looking for the same price/performance as the C210 M2, it would be the TRC based on the C220 M3 server model.
    http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware
    HTH
    Padma

  • Unity Connection VM installation on UCS C210M2

    I am attempting to upgrade my Unity Connection from 7.1.5b on an MCS server to a VM on UCS, running 8.5a.
    I am having a strange problem: when I try to install CUC 8.5.1 on a VM, the installer says the product is not supported on the current hardware.
    I have tried numerous OVA templates and I have also tried increasing resources in the default templates, all to no avail.
    Templates tried -
    CUC_1000_user_v1.0_vmv7.ova
    CUC_8.6_vmv7_v1.5.ova
    Any ideas ??
    Any help appreciated.

    Take a look at your UCS server and ensure that it meets the following requirements:
    Table 6 Cisco UCS-C210M2 Platforms
    Model numbers: VTG: UCS-C210M2; R210-2121605W
    Supplier: Cisco
    Form factor: 2RU
    Mounting: Rack-mount
    Physical processor(s): Two Intel 2.53 GHz quad-core Xeon E5640 80W CPUs, 12 MB cache, DDR3 1066 MHz
    Physical RAM (as shipped): 24 GB (six 4 GB DDR3-1333MHz RDIMM/PC3-10600/dual-rank 1Gb DRAMs)
    Physical hard disk: 10 x 146 GB 6Gb SAS 15K RPM SFF HDD, hot plug, drive sled mounted
    RAID controller: LSI 6G MegaRAID PCIe Card (RAID 0, 1, 5, 6, 10, 60), 512WC
    RAID configuration: 2 drives in RAID 1 (VMware); 8 drives in RAID 5 (virtual machines)
    Total available disk space: RAID 1 array: 136 GB; RAID 5 array: 1022 GB
    Hardware reconfiguration required: None
    CD-ROM or DVD-ROM drives: 1 DVD-ROM drive
    Tape drive: Not applicable
    Power supply: Redundant on the chassis
    More information: http://www.cisco.com/en/US/partner/products/ps10516/index.html
    Source:
    http://www.cisco.com/en/US/partner/docs/voice_ip_comm/connection/8x/supported_platforms/8xcucspl.html#wp760981
    HTH
    Adam

  • Windows 2008 R2 on Cisco UCS B200M networking problems

    This is driving me completely nuts.  Let me start by saying I am new to blade servers and Cisco UCS.  I did take an introduction class, and it seemed straight-forward enough.  I have a chassis with two B200M blades, on which I am trying to configure two Windows 2008 R2 servers, which I will eventually make Hyper-V servers.  This is all in a test environment, so I can do anything I want to on them.
    Right now I have installed W2008 directly on hard disks on the B200M hardware.
    The problem is this: even though I think I've configured the network hardware correctly, using the Cisco VIC driver software, I cannot get networking to work in any reliable way. I cannot even get ping to work consistently. I can ping my local server address, but I cannot ping my gateway (HSRP address). When I try, I get a "Reply from 10.100.1.x: Destination host unreachable" (x being each particular server's last octet). I CAN, however, ping the individual IP addresses of the core switches. I can also ping some, but not all, of the other devices that share the servers' subnet. There are no errors being generated, the ARP tables (for those devices I can ping) look good, and netstat looks OK. But I cannot get outside the local subnet...
    Except when I can.
    There are times when I can get all the way out to the Internet, and I can download patches from Microsoft.  When it works, it works as expected.  But if I reboot the server, oftentimes networking stops working.  Yet another reboot can get things going again.  This happens even though I've made no changes to either the UCS configs or the OS.
    I cannot figure out any reason why it works at some times and not at others. I've made sure I have a native VLAN set, and I've tried pinning to specific ports on the Fabric Interconnects. There is just no rhyme or reason to it.
    Anyone know of where I can look?  I'm very familiar with Windows on stand-alone boxes (although it's no longer my area of expertise), and I manage a global WAN (BGP, OSPF, Nexus 7k, etc.) so I'm no dummy when it comes to networking, but I am utterly stumped on this one.        

    The problem was this: while the NICs on the blade server are called vNIC0 and vNIC1, Windows was calling vNIC1 "Local Area Connection" and vNIC0 "Local Area Connection 2". So what I configured in UCS did not match what I was configuring in Windows. Completely, utterly ridiculous.
    Anyway, networking is working now without any issues. Thanks for your suggestion; it did get me looking in the right direction.
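    For anyone hitting the same mismatch, comparing MAC addresses is a quick way to map the UCSM vNICs to the Windows adapter names (the output below is illustrative; the adapter name and MAC are placeholders, 00:25:B5 being the common UCS MAC pool prefix):

    C:\> getmac /v /fo list

    Connection Name:  Local Area Connection
    Network Adapter:  Cisco VIC Ethernet Interface
    Physical Address: 00-25-B5-00-00-01
    ...

    Match each Physical Address against the MAC assigned to the corresponding vNIC in the service profile's vNIC list in UCSM, then rename the Windows connections to match.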

  • Ask the Expert : Initial Set Up and LAN Connectivity for Cisco UCS Servers

    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions related to the initial setup of UCS C- and B-Series servers, including LAN connectivity from the UCS perspective, with Cisco subject matter expert Kenny Perez.
    In particular, Kenny will cover topics such as: ESXi/Windows installations, RAID configurations (best practices for good performance and configuration), VLAN/jumbo frames configuration for B-Series and C-Series servers, pools/policies/upgrades/templates/troubleshooting tips for blade and rack servers, Fabric Interconnect configuration, and general compatibility of hardware/software/drivers, amongst other topics.
    Kenny Perez is a technical leader in the Cisco Technical Assistance Center, where he works in the Server Virtualization support team. His main job consists of supporting customers who implement and manage Cisco UCS B-Series and C-Series. He has a background in computing, networking, and VMware ESXi, has 3+ years of experience supporting UCS servers, and is VCP certified.
    Remember to use the rating system to let Kenny know if he has given you an adequate response. 
    This event lasts through October 10th, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi,
    Actually, we have a UCS 6248 Fabric Interconnect; the first twelve ports are enabled, and the same shows in Cisco UCS Manager.
    But when more ports become active via an expansion module, can UCSM manage those too, or is another licence needed for UCSM as well?

  • Problem connecting Broadcom NetXtreme II 57711 to Cisco UCS 6120

    Hi all,
    I ordered a rackmount server (C210 M1) with a Broadcom NetXtreme II 57711, which I would like to connect to a Cisco UCS 6120 in switching mode, but I can't make it work. The Broadcom has 2 slots for SFP/SFP+, so I inserted a Cisco 10GBase-SR SFP into it. I also inserted the same SFP into the Cisco 6120.
    I also tried to cross the fiber, but I still got no link.
    I tried to look up supported SFPs from Broadcom but have had no luck so far. Can anybody help, please?
    The only technical specification I found is http://www.cdw.com/shop/products/default.aspx?EDC=1997284 . I don't know if it is reliable, but I can't make much of it.
    Mat

    Martin,
    Both the 6100 Fabric Interconnect and the Broadcom support SFP+; below I have included some supporting documentation.
    The first link shows the supported adapters for the C-Series servers (Overview of Cisco C-Series Adapters .pdf), which has a link to a datasheet for the Broadcom. The second link provides a list of supported SFPs.
    http://www.cisco.com/en/US/prod/ps10265/ps10493/c-series_adapter.html
    http://www.cisco.com/en/US/partner/prod/collateral/modules/ps5455/data_sheet_c78-455693.html
    The output of 'show interface brief' on the fabric interconnect will display which interfaces have an SFP present; you can also use the command
    'show interface ethernet x/y transceiver details' for more information.
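    For example (the hostname and port number are placeholders, and the output is trimmed to the fields worth checking):

    FI-A# connect nxos
    FI-A(nxos)# show interface brief
    FI-A(nxos)# show interface ethernet 1/10 transceiver details
        transceiver is present
        type is 10Gbase-SR
        ...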
    If you still have issues, you can use a process of elimination: change the cable and the SFPs, and try a different port on the fabric interconnect. Plug another
    supported device into the fabric interconnect to see if it comes up. You can use a similar process to rule a problem with the server in or out.
    Bill

  • Cisco Unity Connection 8.X and Cisco UCS

    Hi
    We are in the planning phase for Unity Connection 8.X on Cisco UCS C-Series. The Cisco Unified Communications SRND 8.0 states that it requires reserving one physical core per physical server.
    What does that really mean?

    See sample depiction below of applications and physical core usage on a server with 2 CPUs and 8 total physical cores. In white is the reservation of one physical core.
    Regards.

  • Cisco UCS network uplink on aggregation layer

    Hello Cisco Community,
    We are using our UCS (version 2.2(1d)) for ESXi hosts. Each host has 3 vNICs as follows:
    vnic 0 = VLAN 10 --> Fabric A, Failover Fabric B
    vnic 1 = VLAN 20 --> Fabric B, Failover Fabric A
    vnic 2 = VLAN 100 --> Fabric A, Failover Fabric B
    Currently the UCS is connected to the access layer (Catalyst 6509), and we are migrating to Nexus (vPC). As you know, Cisco UCS Fabric Interconnects can handle Layer 2 traffic themselves, so we are planning to connect our UCS Fabric Interconnects directly to our new Layer 3 Nexus switches.
    Has anyone connected UCS directly to the Layer 3 tier? Do we have to pay attention to anything? Are there any recommendations?
    thanks in advance
    best regards
    /Danny

    We are using ESXi 5.5 with a dvSwitch (distributed vSwitch). In our Cisco UCS power workshop we discussed the pros and cons of hardware and software failover, and we committed to using hardware failover. It is very fast and we currently have no problems.
    This is a never-ending and misunderstood story: your design should provide load balancing AND failover. Hardware failover only gives you the latter. In your design you just use one fabric per VLAN; what a waste!
    And think about a failover on an ESXi host with 200 VMs: at least 200 gARP messages have to be sent out, and this is load on the FI CPU. Most likely dozens or more ESXi servers are impacted...
    Cisco best practice: if you use a soft switch, let it do the load balancing and failover; don't use hardware fabric failover.
    See the attachment (the paper is not up to date):
    For ESX Server running vSwitch/DVS/Nexus 1000v and using Cisco UCS Manager Version 1.3 and below, it is recommended that fabric failover not be enabled, as that will require a chatty server for predictable failover. Instead, create regular vNICs and let the soft switch send gARPs for VMs. vNICs should be assigned in pairs (Fabric A and B) so that both fabrics are utilized.
    Cisco UCS version 1.4 has introduced the Fabric Sync feature, which enhances the fabric failover functionality for hypervisors, as gARPs for VMs are sent out by the standby FI on failover. It does not necessarily reduce the number of vNICs, as load sharing among the fabrics is highly recommended. Also recommended is to keep fabric failover disabled on the vNICs, avoiding the use of the Fabric Sync feature in 1.4 for ESX-based soft switches, for quicker failover.
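    As a rough illustration of that last point (the names below are placeholders, and on a dvSwitch the same teaming policy is set through vCenter rather than esxcli): with fabric failover disabled on both vNICs, the soft switch keeps the fabric-A and fabric-B uplinks active and handles load balancing and failover itself.

    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1 --load-balancing=portid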

  • Just like HP servers, does Cisco UCS C-Series Standalone supports Network Fault Tolerance ?

    Just like HP servers, does a standalone Cisco UCS C-Series server support Network Fault Tolerance (NIC teaming)? If yes, where can I find this option?
    Windows: 2008 R2


  • Ask the Expert: Cisco UCS B-Series Latest Version New Features

    Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about the Cisco UCS Manager 2.2(1) release, which delivers several important features and major enhancements in the fabric, compute, and operational areas. Some of these features include fabric scaling, VLANs, VIFs, IGMP groups, network endpoints, unidirectional link detection (UDLD) support, support for virtual machine queue (VMQ), direct connect C-Series to FI without FEX, direct KVM access, and several other features.
    Teclus Dsouza is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has over 15 years of total IT experience. He has worked across different technologies and a wide range of data center products. He is an expert in Cisco Nexus 1000V and Cisco UCS products. He has more than 6 years of experience on VMware virtualization products.  
    Chetan Parik is a customer support engineer from the Server Virtualization team at the Cisco Technical Assistance Center in Bangalore, India. He has seven years of total experience. He has worked on a wide range of Cisco data center products such as Cisco UCS and Cisco Nexus 1000V. He also has five years of experience on VMware virtualization products.
    Remember to use the rating system to let Teclus and Chetan know if you have received an adequate response. 
    Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under subcommunity Unified Computing, shortly after the event. This event lasts through May 9, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.

    Hi Jackson,
    Yes, it is possible. Connect the storage array to the fabric interconnects using two 10 Gb links per storage processor. Connect each SP to both fabric interconnects and configure the ports on the fabric interconnects as "Appliance" ports from UCSM.
    For more information on how to connect NetApp storage using other protocols like iSCSI or FCoE, please check the URL below.
    http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6100-series-fabric-interconnects/whitepaper_c11-702584.html
    Regards
    Teclus Dsouza

  • Cisco UCS RAID SAS 2008M-8i Mezzanine Card Performance?

    I recently purchased a UCS C220 bundle, which includes a Cisco UCS RAID SAS 2008M-8i Mezzanine Card.  I'm planning on deploying this as a standalone server running XenServer 6.2 in the near future.
    I'm happy with the unit and testing is good, except for one aspect. The disk IO throughput on it seems to fall far short of what I expected. I have a desktop PC with an Intel DZ77GA-70K motherboard as a lab spare, and the disk IO I can achieve from that machine with the same disks exceeds what the C220 seems to be able to achieve, on a consistent, repeatable basis.
    The testing I am doing is based around two benchmarks:
    1. A 400G file copy between the two machines, over the network back-to-back to note the maximum sustained throughput, and
    2. A mix of 'sysbench' runs against local IO, covering sequential and random reads and writes, with the command: "sysbench --test=fileio --file-total-size=150G --file-test-mode=seqrd --init-rng=on --max-time=300 --max-requests=0 run"
    For #1, I run a copy to/from an HP MicroServer Gen8, which has 12TB of disk space on it in a Linux RAID0 configuration (4x3TB Seagate drives). If I copy files to or from this HP server to the Intel DZ77GA-70K, I am able to easily saturate the 1G network, achieving a sustained 960 Mbit/s for an hour or more at a time. If I then take the exact same SATA disks from the DZ77GA-70K, connect them to the UCS box, and do the exact same network copy with the exact same OS, I'm only able to get around ~400-500 Mbit/s of sustained throughput.
    For #2, which is entirely local to the C220, the results come in around 105-110 MByte/sec on a sequential read, which drops to around 2 MByte/sec on a random read or write test. The enormous drop is no surprise - random reads/writes are a pretty tough IO load - but I would expect sequential reading to be much better. I can get consistent sysbench seqrd results from the MicroServer of around 300 MByte/sec, for example.
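    (Side note for anyone reproducing this: sysbench fileio expects the test files to exist before the run shown above, so the full sequence looks roughly like this, with the same file size and test mode as in the command quoted earlier.)

    sysbench --test=fileio --file-total-size=150G prepare
    sysbench --test=fileio --file-total-size=150G --file-test-mode=seqrd --init-rng=on --max-time=300 --max-requests=0 run
    sysbench --test=fileio --file-total-size=150G cleanup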
    I can consistently replicate this with Redhat 6.5, as well as Gentoo (running the latest linux kernel) as well as from a Xen 6.2SP1 Hypervisor install on the C220 (tested from the Dom0 domain itself, as well as a Linux guest) all 64 bit.  Jumbo frames are enabled end-to-end also, and CPU is not bottlenecking.  Latest firmware is installed on all components.  The ucs-cxxx-drivers.1.5.4a.iso image states that for the Redhat and Xen systems, that the required drivers are included in the OS, so I don't need to worry about installing them separately.  Presumably the Gentoo system has even newer drivers again because it has a very new kernel, but alas the throughputs are the same on all of those systems.
    I have tried with SATA as well as a SAS drive, and the test results are also practically the same.  All disks in all servers are Seagate 6.0 Gb/s units, and none of the servers are swapping to disk at any stage.
    I am happy with network IO - I can completely saturate the 1G ports easily, and I'm convinced that's not a part of the problem here.
    What could cause this sort of performance?  Storage card logs in CIMC don't indicate anything is wrong and none of the OS's are indicating issues of any sort, but it certainly does seem something isn't right in that I'm getting significantly superior performance from a desktop motherboard and the MicroServer, than an enterprise grade server, when testing with the exact same hard drives.
    Questions:
    - Is the 2008M-8i card considered a low-end RAID card or should I be getting reasonable throughput from it?  I was anticipating performance at least as good as a desktop motherboard, but this doesn't seem to be the case.  The RAID card as a component is more expensive than an entire MicroServer or Intel Motherboard so it should run much better, yes?
    - What sort of performance should I expect out of this card on a single sequential read or write?
    - Can this RAID card run drives just as JBOD's or do all disks have to be initialised in an array (even if just a RAID0 array with 1 disk)?  It seems if they are added to the server they do not show up to any OS until they are initialised as part of an array, although I haven't delved into the BIOS settings of the card itself (only from CIMC so far).
    - I recall seeing something about best practice of having two virtual drives on these cards, what is the impact in running more, given the card certainly allows more to be created (I currently have 4 while I am testing)
    -  I noticed on Cacti graphs while rebuilding a RAID1 array that the CPU ran hotter while the array was being rebuilt, and cooled down once the rebuild had completed, which indicates the rebuild was using up CPU on the host hardware.  Should this not have been entirely transparent to the system if the RAID activity is offloaded to this card, or is an increase in CPU to be expected?
    I'm very keen to find out others experiences of this card, what people have done to get good throughput out of it, or if I should go back to a whitebox server with an Intel board   :-)
    Thanks,
    Reuben

    Hello Reuben,
    I reached out to colleagues who are more knowledgeable on this topic, and here is their response.
    - Is the 2008M-8i card considered a low-end RAID card or should I be getting reasonable throughput from it? I was anticipating performance at least as good as a desktop motherboard, but this doesn't seem to be the case. The RAID card as a component is more expensive than an entire MicroServer or Intel motherboard so it should run much better, yes? -
    The 2008M-8i card is an entry/value card with expected performance better than software RAID. This card doesn't utilise cache memory in a standard RAID 0/1 configuration.
    - What sort of performance should I expect out of this card on a single sequential read or write? - We should expect around 1 GB/s sequential read (refer to http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/unified-computing/whitepaper_c11-728200.html)
    - Can this RAID card run drives just as JBODs or do all disks have to be initialised in an array (even if just a RAID0 array with 1 disk)? It seems that if they are added to the server they do not show up to any OS until they are initialised as part of an array, although I haven't delved into the BIOS settings of the card itself (only from CIMC so far). -
    This card can be used in JBOD mode. To enable JBOD mode you need to use MegaCLI commands; this option is not present in the default configuration.
    Please note: once JBOD mode is enabled, it cannot be reverted back to the default RAID mode setting.
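    (For reference, the change is made with MegaCLI commands along these lines - a sketch only: exact flag spelling varies a little between MegaCLI builds, and the adapter index and enclosure:slot values are placeholders.)

    MegaCli64 -AdpSetProp EnableJBOD 1 -a0
    MegaCli64 -PDMakeJBOD -PhysDrv[252:1] -a0
    MegaCli64 -PDList -a0

    After the PDMakeJBOD step, the drive should report "Firmware state: JBOD" in the PDList output and become visible to the OS without a virtual drive.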
    - I recall seeing something about a best practice of having two virtual drives on these cards; what is the impact of running more, given the card certainly allows more to be created (I currently have 4 while I am testing)? -
    This doesn't apply to this card as it does not have any cache. Can you please point us to the document about the best practice of having two virtual drives?
    - I noticed on Cacti graphs while rebuilding a RAID1 array that the CPU ran hotter while the array was being rebuilt, and cooled down once the rebuild had completed, which indicates the rebuild was using up CPU on the host hardware. Should this not have been entirely transparent to the system if the RAID activity is offloaded to this card, or is an increase in CPU to be expected? - Creating/deleting/modifying a RAID volume is a CPU-independent operation.

  • Ethernet controller driver not installing after Windows Server 2008 installation on Cisco UCS blade

    I am using MDT 2010 to deploy Windows Server 2008 to a Cisco UCS B200 blade. The OS and hotfixes install with no problem. The system reboots and attempts to connect to the deployment share to continue with the task sequence and I receive an error stating
    "A connection to the deployment share could not be made. The following networking device did not have a driver installed. PCI\VEN_1137&DEV_0043&SUBSYS_00481137&REV_A2"
    The driver in question is the driver for the Ethernet controller for the UCS blade. More specifically, it is the network driver for the Palo card in the UCS Chassis. When I Ignore or Abort the install, I am able to manually install the driver and restart the
    task sequence. The build then completes successfully with no further issue. I just cannot seem to automate the driver install. Has anyone dealt with this type of issue?

    Vik and stealthfield - thank you for your responses.
    I was preparing a script to manually move the driver into the C:\Windows\inf location when I had to stop to create a new task sequence for a different build. It was while building this task sequence that I found the solution to the problem.
    In the Post Install -> Inject Drivers step of the task sequence, I noticed that it was executing a command of type "Inject Drivers" with a selection profile of "All Drivers" and "Install only matching drivers from the selection profile" selected.
    I changed this to run a command of type "Run Command Line" and specified that it execute "%Scriptroot%\ZTIdrivers.wsf". I noticed that this was selected in one of my other builds that was working properly, so I thought I'd give it a try - IT WORKED.
    So to sum up, the solution was to change the type of command being executed in the "Post Install -> Inject Drivers" step of the task sequence. After this change, the machine came up and continued the task sequence with no further issues.
    Thanks again for the responses - They were helpful and I do appreciate it.

  • Cisco UCS FC Direct Attach Question

    We are looking at Cisco UCS as a replacement for our existing servers and SAN switches. As I understand it, the fabric interconnects can replace our existing SAN switches, and we will still be able to zone the ports just like we do on our SAN switches today.
    Can someone confirm how using the fabric interconnects as a replacement for our SAN switches will work? I read that the fabric interconnects have to be in switch mode for this to work. How does this affect the other connections we will have to our Ethernet network?
    Thanks.

    Q1: The 6120s run in NPV (N_Port Virtualization) mode, which means the FI acts as a proxy for the blade vHBAs and doesn't participate in FLOGI (fabric logins). When a fabric switch runs in this mode it requires an uplink to a fabric switch for FLOGI and zoning. Because of this, you cannot directly attach the 6120 to an array's front-end fiber ports.
    Q2: Yes, you can uplink the 6120 to a Brocade fabric as long as the Brocade supports NPIV.
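    For reference, if the upstream fabric switch were a Cisco MDS, NPIV is enabled along these lines (a minimal sketch; on a Brocade the equivalent is the per-port NPIV setting, which is on by default in recent FOS releases):

    mds-switch# configure terminal
    mds-switch(config)# feature npiv
    mds-switch(config)# end
    mds-switch# show npiv status
    NPIV is enabled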
