RAID best practice on UCS C210 M2

Hi everyone,
I have a C210 M2 server with an LSI MegaRAID 9261-8i card and 16 hard drives of 146 GB each.
If I am going to run CUCM, CUC, and CUPS to support up to 5,000 users, what is the best practice for configuring virtual drives?
Do I need any hot spare drives?
Thanks.

Hello,
Please go through the following two links:
http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware#UC_on_UCS_Tested_Reference_Configurations
http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Supported_Applications
I am not sure whether your hardware configuration falls under a TRC or is specs-based, and whether it can support the OVAs for that many users across all apps.
If it does, the RAID recommendation I see is:
     2 drives for ESXi in RAID 1
     The rest of the HDDs in RAID 5
If it is specs-based, you can configure a hot spare for the RAID 5 volume.
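For a rough sanity check on capacity, here is a small Python sketch of the layout above (purely illustrative; the 16 x 146 GB drive count comes from the question, and reserving one hot spare is an assumption, not a requirement):

```python
# Illustrative capacity math for the layout discussed above:
# 2 drives in RAID 1 for ESXi, the rest in RAID 5, optionally
# reserving one drive as a hot spare (an assumption, see note above).

DRIVE_GB = 146
TOTAL_DRIVES = 16

def raid1_usable(drives: int, drive_gb: float) -> float:
    """RAID 1 mirrors the data, so usable space is half the raw space."""
    return drives * drive_gb / 2

def raid5_usable(drives: int, drive_gb: float) -> float:
    """RAID 5 spends one drive's worth of space on parity."""
    return (drives - 1) * drive_gb

esxi_gb = raid1_usable(2, DRIVE_GB)            # 146 GB for the hypervisor
hot_spares = 1                                 # assumed
data_drives = TOTAL_DRIVES - 2 - hot_spares    # 13 drives left for RAID 5
data_gb = raid5_usable(data_drives, DRIVE_GB)  # 12 * 146 = 1752 GB

print(esxi_gb, data_gb)
```

Whatever split you pick, check the resulting datastore size against the combined OVA disk footprints of the UC apps you plan to deploy.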
HTH
Padma

Similar Messages

  • SQL Server Best Practices Architecture UCS and FAS3270

    Hey there,
    We are moving from an EMC SAN and physical servers to a NetApp FAS3270 and a virtual environment on Cisco UCS B200 M3. Traditionally, best practice for SQL Server databases is to separate the following files onto separate LUNs and/or volumes: database data files, transaction log files, and TempDB data files. I have also seen additional separation of system data files (Master, Model, MSDB, Distribution, Resource DB, etc.) and indexes.
    Depending on the size of the database and its I/O requirements, you can add multiple files per database. The goal is to provide optimal performance, and the method of choice is to separate reads and writes (random and sequential activity). If you have 30 disks, is it better to separate them, or to leave the files in one continuous pool? For example:
    12 drives RAID 10 (data files)
    10 drives RAID 10 (log files)
    8 drives RAID 10 (TempDB)
    Please don't get too caught up in the numbers used in the example; the focus is on whether (using a FAS3270) it is better practice to separate or consolidate drives/volumes for SQL Server databases.
    Thanks!

    Hi Michael,
    It's a completely different world with NetApp! As a rule of thumb, you don't need separate spindles for different workloads (like SQL databases and logs): you just put them into separate flexible volumes, which can share the same aggregate (i.e. a grouping of physical disks).
    For more detailed info about SQL on NetApp, have a look at this doc:
    http://www.netapp.com/us/system/pdf-reader.aspx?pdfuri=tcm:10-61005-16&m=tr-4003.pdf
    Regards,
    Radek

  • Environment and RAID best practices please...

    Hello all,
    I have a scenario that I've never encountered before, and it's a very important one, so I would like some tips from you experts...
    I'll have to run Oracle 10.2.0.1.0 on Windows Server 2003 Enterprise Edition in the following scenario:
    a server that contains a RAID 10 of 6 hard disks, and a storage array that also contains a RAID 10 of 6 more disks.
    What would be best for Oracle performance: splitting user data and indexes, or leaving that to the RAID 10 and splitting redo and archives?
    Like this:
    Situation 1
    Server: USER DATA, TEMP TS
    Storage: USER INDEX, REDO and ARCHIVES
    Situation 2
    Server: USER DATA, USER INDEX, TEMP
    Storage: REDO and ARCHIVES
    Are there any tips for that? What is the best way to split the files in this scenario?
    Any kind of help would be appreciated.
    Regards

    "I would like to put UNDO with the user data rather than the indexes... pretty difficult to explain, but I think it will be better. And one more thing: INDEX can go with the archives."
    That's an interesting thought and makes sense. Are there any other whitepapers that endorse this view?
    I asked for RAID disk layout advice for a hybrid OLTP/batch app earlier, in regards to a disk layout with an index closer to that.
    I understand the reasons for separating archive logs and redo logs out, but I wasn't so sure about undo/data/index/temp. I've separated data/index, obviously, but I wasn't sure whether dedicating an entire additional RAID 1+0 pair for UNDO and a pair for TEMP was efficient. Oracle's whitepapers (see this one on S.A.M.E. and this ASM-centric one) seem to point to a "stripe and mirror everything" approach to eliminate any bottlenecks, by virtue of spreading as far as you can across as many spindles as you can. I'm uncertain at what threshold this takes effect, as it seems to apply only to larger SANs and not to disk arrays in the 10-15 disk range.
    <BR>
    Does anybody else have experience with this 'medium-sized' layout range?

  • Best practice RAID configuration for UCS C260 M2 for Unified Communications?

    I have two UCS C260 M2 servers with 16 drives (PID: C260-BASE-2646), and I am trying to figure out the best practice for setting up the RAID.
    I will be running CUCM, CUP, CUC, Prime, etc. for an environment of about 2000 phones.
    If anyone can offer real-world suggestions, that would be great. I also have a redundant server.

    The RAID setup depends a bit on your specific configuration; however, there is a guide for Cisco Collaboration on Virtual Servers that you can review here:
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/virtual/CUCM_BK_CF3D71B4_00_cucm_virtual_servers/CUCM_BK_CF3D71B4_00_cucm_virtual_servers_chapter_010.html#CUCM_TK_C3AD2645_00
    If your server is spec'd as a Tested Reference Configuration (TRC), then the C260 M2 TRC1 would have 16 HDDs that you would configure/split into 2 x 8-HDD RAID 5 arrays.
    Hailey
    Please rate helpful posts!

  • Best practice for configuring virtual drive on UCS

    Hi,
    I have two C210 M2 servers with the LSI 6G MegaRAID 9261-8i card and 10 hard drives of 135 GB each. When I tried automatic selection for the RAID configuration, the system created one virtual drive with RAID 6. My concern is: what is the best practice for configuring virtual drives? Is it RAID 1 plus RAID 5, or everything in one drive with RAID 6? Any help will be appreciated.
    Thanks.

    Since you are planning to run UC apps on the server, the voice applications have specified their recommendations here:
    http://docwiki.cisco.com/wiki/Tested_Reference_Configurations_%28TRC%29
    I believe your C210 server specs could match TRC #1, where you need to have:
    RAID 1 - the first two HDDs, for VMware
    RAID 5 - the remaining 8 HDDs, as the datastore for virtual machines (CUCM and CUC)
    HTH
    Padma

  • Raid Configuration MCS 7845 (best practice)

    I'm wondering what the best practice is for RAID configuration. I'm looking for examples of 4-disk and 6-disk setups, and also which drives to pull when breaking the mirror.
    Is it possible to have RAID 1+0 across 4/6 drives with the mirroring set so that you would pull the top or bottom drives on an MCS 7835/7845?
    I'm also confused that, using the SmartStart Array Configuration utility, I seem to be able to create one logical drive using RAID 1+0 with only 2 drives; how is that possible?
    Any links to directions would be appreciated.

    ICM 7.0, CVP 4.x, CCM 4.2.3, Unity, and the Collaboration Server 5.0 and E-Mail Manager options for ICM.
    But to keep it simple, let's look at a Rogger setup.
    Sorry for the delayed response.

  • Best Practice for Networking in UCS required

    Hi
    We are planning to deploy UCS in our environment. Fabric Interconnects A and B will need to connect to a pair of Catalyst 4900M switches. What is the best practice for connecting them? How should the 4900 switches be configured? Can I do port channels in UCS?
    Appreciate your help.
    Regards
    Kumar

    I highly recommend you review Brad Hedlund's videos regarding UCS networking here:
    http://bradhedlund.com/2010/06/22/cisco-ucs-networking-best-practices/
    You may want to focus on Part 10 in particular, as this talks about running UCS in end-host mode without vPC or VSS.
    Regards,
    Matt

  • What is "best practice" to set up and configure a Mac Mini server with dual 1 TB drives, using RAID 1?

    I have been handed a new, out-of-the-box Mac Mini server with two 1 TB drives in it. The contractor suggested RAID 1 for the setup. I have done some research
    and found out that creating the software RAID takes away the recovery partition, so I have been reading up on how to create a recovery "disk" using a thumb drive. That part of the operation I am comfortable with, but there are other issues/concerns that I have.
    Basically, what is the "best practice" to set up the Mini, configure the RAID, and then start the server? I am assuming the steps would be something like this:
    1) start up the Mini and run through the normal Mavericks setup/config - keep it plain and vanilla
    2) grab a copy of the Server app and store it offline in a safe place
    3) perform the RAID configuration / reinstall of OS X Mavericks using the recovery tools
    4) copy down and start the Server app
    This might be considered a very simplified version of this article (http://support.apple.com/kb/HT4886 - Mac mini server (Late 2012 and Mid 2011): How to install OS X Server on a software RAID volume), with the biggest difference being that I grab a copy of the Server app off the Mini before I reinstall, since I did not purchase it from the App Store; rather, it came with the Mini.
    Is there a best practice /  how-to tutorial somewhere that I can follow/learn from? Am I on the right track or headed for a train wreck?
    thanks in advance

    I think this article will answer your question. Hope this helps: http://wisebyte.blogspot.com/2014/01/best-configuration-for-mac-mini-server.html

  • NetApp direct connect to UCS best practices

    Folks,
    I have installed many FlexPods this year, but all involved either Nexus 5Ks or 7Ks with vPC. This protects the NFS LUN connections from both network outages and Fabric Interconnect outages. What is not clear to me is the best practice for connecting a NetApp directly to the UCS via appliance ports. Appliance ports seem like a great idea, but they seem to add issues to designs, both in VMware and in the network.
    Does anyone have a configuration example covering the NetApp, UCS, and VMware sides?
    I thought I would ask the group their opinion.
    Cheers,
    David Jarzynka

    Hi David
    Can you clarify your last posting a little bit?
    I have never installed direct-attached NetApp storage on UCS either.
    One drawback I see: if your NetApp system is connected to the FIs in active/standby mode, then, for example, all servers connect through FI A to the NetApp system (the NIC with the VMkernel for all servers active on FI A). If the NetApp link fails and switches over to the standby link on fabric B, all the traffic will go from the server to FI A, to the uplink switch, to FI B, and then to the NetApp system, because the servers are not aware that the NetApp system did a failover. Not all companies will have 10G uplink switches, so this can cause a bottleneck.
    What other things do you see? I agree completely with you: everyone says it is a wonderful feature on the slides, but I don't think it's that smart in practice.
    Thanks for a short reply.
    Cheers
    Patrick

  • Raid harddrive best practices

    Hi
    I'm editing an indie feature in Premiere Pro CC 2014 and just bought a LaCie 5big 20 TB hard drive that's configured as RAID 5. It's the new Thunderbolt 2 version; although I have an iMac with Thunderbolt 1, it was almost the same price and has RAID 5.
    My question is: what's the best practice for where I should put Premiere's scratch disks, cache files, and preview files? The iMac's hard drive, or should I reconfigure them to the LaCie?
    Any other advice is welcome as well. It's my first RAID.
    Thanks!

    See Tweakers Page and specifically Tweakers Page - External Drives and Tweakers Page - Disk Setup

  • RAID Level Configuration Best Practices

    Hi Guys ,
    We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
    Please share your thoughts on RAID configuration for SQL data, log, tempdb, and backup files.
    File --> RAID level
    SQL data files -->
    SQL log files -->
    Tempdb data -->
    Tempdb log -->
    Backup files -->
    Any other configuration best practices are more than welcome,
    like memory settings at the OS level and LUN settings.
    Also: best practices for configuring SQL Server in Hyper-V with clustering.
    Thank you
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    Hi,
    If you can spend some bucks, you should go for RAID 10 for all files. Also, as a best practice, keeping database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive depending on usage, but it's always good to use a dedicated drive for tempdb.
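To see why RAID 10 is usually preferred for write-heavy SQL files, here is a rough Python sketch of the classic per-write I/O penalty for each RAID level (textbook back-of-envelope numbers; controller caching changes the real-world picture, and the 8 x 150-IOPS spindle figures are made up for illustration):

```python
# Classic rule-of-thumb write penalty: each logical write costs
# this many physical disk I/Os at the given RAID level.
WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 5": 4, "RAID 10": 2}

def effective_write_iops(disks: int, iops_per_disk: int, level: str) -> float:
    """Approximate write IOPS a set of spindles can sustain at a RAID level."""
    return disks * iops_per_disk / WRITE_PENALTY[level]

# Hypothetical 8 spindles at 150 IOPS each:
print(effective_write_iops(8, 150, "RAID 5"))   # 300.0
print(effective_write_iops(8, 150, "RAID 10"))  # 600.0
```

On the same spindle count, RAID 5's parity update roughly halves write throughput versus RAID 10, which is why log and tempdb files in particular are steered toward RAID 10.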
    For memory settings, please refer to this link for setting max server memory.
    You should monitor SQL Server memory usage using the counters below, taken from this link:
    SQLServer:Buffer Manager--Buffer Cache Hit Ratio (BCHR): if your BCHR is high (90 to 100), it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily come down to 60 or 70, or maybe less, but that does not mean memory pressure; it means the query requires a large amount of memory and will take it. After that query completes you will see BCHR rising again.
    SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It is a common misconception to take 300 as a baseline for PLE, but it is not: I read in Jonathan Kehayias's book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the most RAM one would see was 4-6 GB. Now, with 200 GB of RAM in the picture, this value is no longer correct. He also gave a (tentative) formula for calculating it: take the base counter value of 300 presented by most resources, then determine a multiple of this value based on the configured buffer cache size, which is the 'max server memory' sp_configure option in SQL Server, divided by 4 GB.
    So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has worked well for me, so I would recommend you use it.
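That tentative formula fits in a few lines of Python (the 300-per-4-GB baseline is the one quoted above from Kehayias, not an official Microsoft threshold):

```python
def ple_baseline(max_server_memory_gb: float, base_per_4gb: int = 300) -> float:
    """Tentative Page Life Expectancy baseline: 300 seconds for every
    4 GB of 'max server memory' configured for the buffer pool."""
    return (max_server_memory_gb / 4) * base_per_4gb

# The worked example above: 32 GB allocated to the buffer pool.
print(ple_baseline(32))   # 2400.0
```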
    SQLServer:Buffer Manager--Checkpoint Pages/sec: this counter is important for detecting memory pressure, because if the buffer cache is small, lots of new pages need to be brought into, and flushed out of, the buffer pool; under load, the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and you need to increase it, either by enlarging the buffer pool or by adding physical RAM and then making adequate changes to the buffer pool size. Technically this value should be low; on a line graph in perfmon, it should stay near the baseline on a stable system.
    SQLServer:Buffer Manager--Free Pages: this value should not be low; you always want to see a high value for it.
    SQLServer:Memory Manager--Memory Grants Pending: if you see memory grants pending, your server is facing a SQL Server memory crunch, and increasing memory would be a good idea. For memory grants, please read this article:
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
    SQLServer:Memory Manager--Target Server Memory: this is the amount of memory SQL Server is trying to acquire.
    SQLServer:Memory Manager--Total Server Memory: this is the memory SQL Server has currently acquired.
    For other settings, I would suggest you talk to your vendor; storage questions, in my opinion, should be directed to the vendor.
    Below would surely be a good read
    SAN storage best practice For SQL Server
    SQLCAT best practice for SQL Server storage
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it.
    My TechNet Wiki Articles

  • FC port channels between MDS and UCS FI best practice?

    Hi,
    We would like to create FC port channels between our UCS FIs and MDS 9250 switches.
    At the moment we have 2 separate 8 Gbps links to the FIs.
    Are there any disadvantages or reasons to NOT do this?
    Is it a best practice?
    Thanks.

    As Walter said, having port-channels is best practice.  Here is a little more information on why.
    Let's take your example of two 8Gbps links, not in a port-channel ( and no static pinning ) for Fibre Channel connectivity:
    Hosts on the UCS get automatically assigned ( pinned ) to the individual uplinks in a round-robin fashion.
    (1) If you have some hosts that are transferring a lot of data to and from storage, these hosts can end up pinned to the same uplink, which could hurt their performance.
    In a port-channel, the hosts are pinned to the port-channel and not individual links.
    (2) Since hosts are assigned to an individual link, if that link goes down, the hosts have to log back into the fabric over the remaining working link, and now all hosts share a single link. Hosts will not get re-pinned to a link until they leave and rejoin the fabric, so to load-balance them again you would have to take them out of the fabric and add them back, again via log out, power off, reload, etc.
    If the links are in a port-channel, the loss of one link will reduce the bandwidth of course, but when the link is restored, no hosts have to be logged out to regain the bandwidth.
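The round-robin pinning behaviour described in (1) and (2) can be sketched in a few lines of Python (the host and FC interface names are made up for illustration):

```python
from itertools import cycle

def pin_hosts(hosts, uplinks):
    """Round-robin pin each host to an individual FC uplink (no port-channel)."""
    next_link = cycle(uplinks)
    return {host: next(next_link) for host in hosts}

hosts = ["esx1", "esx2", "esx3", "esx4"]
pinning = pin_hosts(hosts, ["fc1/1", "fc1/2"])
# esx1/esx3 land on fc1/1, esx2/esx4 on fc1/2.

# If fc1/1 fails, its hosts must log back into the fabric over fc1/2,
# and they stay there even after fc1/1 is restored:
after_failure = {h: "fc1/2" if link == "fc1/1" else link
                 for h, link in pinning.items()}
```

With a port-channel, the hosts would instead be pinned to the channel as a whole, so a member-link failure only reduces bandwidth and no re-login is needed.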
    Best regards,
    Jim

  • ASM BEST PRACTICES FOR 'DATA' DISKGROUP(S)

    In our quest to reduce operating costs we are consolidating databases and eliminating RAC in favor of standalone servers. This is a business decision that is a certainty.  Our SAN has been upgraded, and the new database servers are newer, faster, etc.
    Our database version is 11.2.0.4 with Grid Infrastructure 12.1.0.1. Our data diskgroup is RAID-5 and our fra is RAID-1+0.  ASM has external redundancy.  All disks are of equal size with equal storage performance and availability.
    Previously our databases were on separate clusters by function: OLTP, REPORTING and ENTERPRISE CONTENT MANAGEMENT. Development/Acceptance shared a cluster, while production was separate.
    The new architecture combines different functions onto one server for dev/acc, and another for production. This means they will all be using the same ASM instance. Typically we followed Oracle's recommendation to have two disk groups, one for data and the other for FRA. That worked well when a single database was the only one using the data diskgroup. Now that we are combining databases, is the best practice still to have one data diskgroup and one FRA diskgroup? For example, production will house 3 databases: OLTP is 500 GB, Reporting is 1.3 TB, and Enterprise Content Management is 6 TB and growing.
    My concern is that if all 3 databases access the same data diskgroup, the smaller OLTP must traverse the 6 TB of content management data. Or is this thinking flawed?
    Does this warrant separate diskgroups?  Are there pros and cons to this?
    Any insights are appreciated.
    Best Regards,
    Sherrie

    I have many issues to deal with in this 'consolidation', but budget reduction is happening in state and regional government.  Our SAN storage is for our enterprise infrastructure and not part of my money-savings directive.  We are also migrating to UCS blades for the infrastructure, also not part of my budget reduction contribution. Oracle licensing is our biggest software cost, this is where my directive lies.  We've always been conservative and done more with less, now we will do with less, but different because the storage and hardware are awesome. 
    We've been consolidating databases onto RAC clusters and standalones since we started doing Oracle.  For the last 7 years we've supported ASM, 6 databases and 2 passive standby instances (with Data Guard) on a 2-node cluster totalling 64gb of memory.  The new UCS blades have 256gb of memory.  I get that each database must support its background processes.  If I add up the sga, pga allocated, background processes they take up about 130gb of memory, but also consider that there is an overhead to RAC.  In all the years we've had Oracle, most of our failures, outages or downtime was because of RAC.  On the plus side of that, the seamless failover saved us most times (not all times), but required administrative time for troubleshooting.
    I would love to go the Oracle 12c and use its multitenant architecture, but I have 3rd party applications that don't yet support it.  11.2 might be our last release unless I can reduce costs.  Consolidation is real and much needed, I believe why Oracle responded to the market with multitenancy. 
    But back to my first question about how many diskgroups should service a group of databases. What I'm hearing, and think I agree with, is that one data diskgroup will suffice, because the ASM instance knows where to retrieve the data; waste will be reduced, and so will management overhead.
    I still need to do some ciphering and by no means have a final plan, but thank you all for your insights and contributions.

  • (Request for:) Best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC

    Could you please share your best practices for setting up a new Windows Server 2012 r2 Hyper-V Virtualized AD DC, that will be running on a new WinSrv 2012 r2 host server.   (This
    will be for a brand new network setup, new forest, domain, etc.)
    Specifically, your best practices regarding:
    the sizing of non virtual and virtual volumes/partitions/drives,  
    the use of sysvol, logs, & data volumes/drives on hosts & guests,
    RAID levels for the host and the guest(s),  
    IDE vs SCSI and drivers both non virtual and virtual and the booting there of,  
    disk caching settings on both host and guests.  
    Thanks so much for any information you can share.

    A bit of non essential additional info:
    We are small to midrange school district who, after close to 20 years on Novell networks, have decided to design and create a new Microsoft network and migrate all of our data and services
    over to the new infrastructure .   We are planning on rolling out 2012 r2 servers with as much Hyper-v virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undergo this project, and most of the information was pretty solid with little ambiguity, except for
    information regarding virtualizing the DCs, which as been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to performing this under Server 2008 R2; we haven't really
    seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Best Practice for a Print Server

    What is the best practice for a print server serving over 25 printers, 10 of which are colour lasers and the rest black-and-white lasers?
    Hardware
    At the moment we have one server 2Ghz Dual G5 with 4GB Ram and xserve RAID. The server is also our main Open directory server, with about 400+ clients.
    I want to order a new server and want to know the best type of setup for the optimal print server.
    Thanks

    Since print servers need RAM and spool space, but not a lot of processing power, I'd go with a Mac Mini packed with RAM and the biggest HD you can get into it. Then load a copy of Mac OS X Server (Tiger) on it and configure your print server there.
    Another option, if you don't mind used equipment, is to pick up an old G4 or G5 Xserve, load it up with RAM and disk space, and put Tiger on that.
    Good luck!
    -Gregg
