RAID Level Configuration Best Practices

Hi all,
   We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
Please share your thoughts on RAID configuration for SQL data, log, tempdb, and backup files.
File type          RAID level
SQL data files  -->
SQL log files   -->
Tempdb data     -->
Tempdb log      -->
Backup files    -->
Any other configuration best practices are more than welcome, such as memory settings at the OS level, LUN settings, and best practices for configuring SQL Server in Hyper-V with clustering.
Thank you

Hi,
If you can spend some extra money, go for RAID 10 for all files. As a best practice, keeping database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a separate drive, depending on usage; it is always good to use a dedicated drive for tempdb.
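To make the capacity side of that trade-off concrete, here is a rough sketch in Python. The function and figures are my own simplification, not from the thread; real arrays vary by controller, stripe size, and hot-spare setup.

```python
# Rough usable-capacity figures for common RAID levels, assuming
# n equal disks of disk_gb each. Illustrative only.

def usable_gb(level: str, n: int, disk_gb: int) -> int:
    if level == "RAID0":     # striping, no redundancy
        return n * disk_gb
    if level == "RAID1":     # mirroring (n should be even)
        return (n // 2) * disk_gb
    if level == "RAID5":     # striping with one disk's worth of parity
        return (n - 1) * disk_gb
    if level == "RAID10":    # striped mirrors (n should be even)
        return (n // 2) * disk_gb
    raise ValueError(f"unknown level: {level}")

# Example: 8 x 300 GB disks
for lvl in ("RAID0", "RAID1", "RAID5", "RAID10"):
    print(lvl, usable_gb(lvl, 8, 300), "GB usable")
```

RAID 10 costs half the raw capacity, which is the "shed some bucks" part of the recommendation above.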
For memory settings, please refer to the documentation on setting max server memory.
You should monitor SQL Server memory usage using the counters below:
SQLServer:Buffer Manager--Buffer Cache Hit Ratio (BCHR): If your BCHR is high, say 90 to 100, it indicates that you do not have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily drop to 60 or 70, or even less, but that does not mean memory pressure; it means the query requires a lot of memory and will take it. After that query completes you will see BCHR rising again.
SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool; the longer it stays, the better. It is a common misconception to take 300 as a baseline for PLE, but it is not. I read in Jonathan Kehayias's book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the maximum RAM one typically saw was 4-6 GB. Now, with 200 GB of RAM in the picture, that value is no longer correct. He also gave a (tentative) formula for calculating it: take the base value of 300 presented by most resources, and then determine a multiple of it based on the configured buffer cache size, which is the 'max server memory' sp_configure option in SQL Server, divided by 4 GB. So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has served me well, so I would recommend you use it.
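That formula can be sketched as a one-liner (function name is mine):

```python
def ple_baseline_seconds(max_server_memory_gb: float) -> float:
    """Tentative PLE baseline: 300 seconds per 4 GB of buffer pool
    ('max server memory'). A rule of thumb, not a hard limit."""
    return (max_server_memory_gb / 4) * 300

print(ple_baseline_seconds(32))  # 32 GB buffer pool -> 2400.0
```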
SQLServer:Buffer Manager--Checkpoint Pages/sec: This counter is important for detecting memory pressure, because if the buffer cache is small, lots of new pages need to be brought in and flushed out of the buffer pool. Under that load the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and you need to increase it, either by increasing buffer pool memory or by adding physical RAM and then adjusting the buffer pool size accordingly. This value should normally be low; if you look at the line graph in perfmon, it should stay near the baseline on a stable system.
SQLServer:Buffer Manager--Free Pages: This value should not be low; you always want to see a high value here.
SQLServer:Memory Manager--Memory Grants Pending: If you see memory grants pending, your server is facing a SQL Server memory crunch and increasing memory would be a good idea. For memory grants, please read this article:
http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
SQLServer:Memory Manager--Target Server Memory: This is the amount of memory SQL Server is trying to acquire.
SQLServer:Memory Manager--Total Server Memory: This is the memory SQL Server has currently acquired.
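As a rough sketch of how these last three counters might be read together (the 90% threshold below is my own illustrative assumption, not an official guideline):

```python
def memory_pressure_hint(target_kb: int, total_kb: int,
                         grants_pending: int) -> str:
    """Crude interpretation of the SQL Server memory counters above.

    Assumption: treating Total below 90% of Target as notable is an
    arbitrary illustrative threshold, not a documented limit.
    """
    if grants_pending > 0:
        return "memory crunch: queries are waiting for memory grants"
    if total_kb < target_kb * 0.9:
        return "Total is well below Target: still ramping up, or under pressure"
    return "memory looks stable"

print(memory_pressure_hint(32_000_000, 31_500_000, 0))
```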
For other settings I would suggest discussing with your vendor; storage questions, in my opinion, should be directed to the vendor. The following would surely be a good read:
SAN storage best practices for SQL Server
SQLCAT best practices for SQL Server storage

Similar Messages

  • Oracle SLA Metrics and System Level Metrics Best Practices

    I hope this is the right forum...
    Hey everyone,
    This is what I am looking for: we have several SLAs set up, we have defined many business metrics, and we are trying to map them to system-level metrics. One key area for us is Oracle. I was wondering if there is a best practice guide out there for SLAs when dealing with Oracle, or even better, system-level metric best practices.
    Any help would be appreciated.

    Hi
    Can you also include the following in the FAQ?
    1) If ODP.NET was installed prior to this beta version, what is the best practice? Uninstall it before installing this, etc.?
    2) As multiple Oracle homes have become the norm these days, and this is a client-only install, it should probably be non-intrusive and non-invasive. I hope that is being addressed.
    3) Is this a precursor to future developments, like some of the app servers evolving to support .NET natively, and so on?
    4) Where does BPEL fit in this scheme of things? Is it being added as well, so that Eclipse and .NET VS 2003 developers can use a common web service framework?
    Regards
    Sundar
    It was interesting to see the options offered for changing the spelling of "Webservice" (the first suggestion was "WEBSTER").

  • DNS Configured-Best Practice on Snow Leopard Server?

    How many of you configure and run DNS on your Snow Leopard server as a best practice, even if that server is not the primary DNS server on the network, and you are not using Open Directory? Is configuring DNS a best practice if your server has a FQDN name? Does it run better?
    I had an Apple engineer once tell me (this is back in the Tiger Server days) that the servers just run better when DNS is configured correctly, even if all you are doing is file sharing. Is there some truth to that?
    I'd like to hear from you either way, whether you're an advocate for configuring DNS in such an environment, or if you're not.
    Thanks.

    Ok, local DNS services (unicast DNS) are typically straightforward to set up, very useful to have, and can be necessary for various modern network services, so I'm unsure why this is even particularly an open question.  Which leads me to wonder what other factors might be under consideration here, or what I'm missing.
    The Bonjour mDNS stuff is certainly very nice, too.  But not everything around supports Bonjour, unfortunately.
    As for being authoritative, the self-hosted out-of-the-box DNS server is authoritative for its own zone.  That's how DNS works for this stuff.
    And as for querying other DNS servers from that local DNS server (or, if you decide to reconfigure it and deploy and start using DNS services on your LAN), then that's how DNS servers work.
    And yes, caching of DNS responses, both within the DNS clients and within the local DNS server, is typical.  This also means there is no need for references to ISP or other DNS servers on your LAN for frequent translations; no other caching servers and no other forwarding servers are required.

  • IP over Infiniband network configuration best practices

    Hi EEC Team,
    A question I've been asked a few times, do we have any best practices or ideas on how best to implement the IPoIB network?
    Should it be Class B or C?
    Also, what are your thoughts in regards to the netmask, if we use /24 it doesn't give us the ability to visually separate two different racks (ie Exalogic / Exadata), whereas netmask /23, we can do something like:
    Exalogic: 192.168.10.0
    Exadata: 192.168.11.0
    While still being on the same subnet.
    Your thoughts?
    Gavin

    I think it depends on a couple of factors, such as the following:
    a) How many racks will be connected together on the same IPoIB fabric
    b) What rack configuration do you have today, and do you foresee any expansion in the future - it is possible that you will move from a purely physical environment to a virtual environment, and you should consider the number of virtual hosts and their IP requirements when choosing a subnet mask.
    Class C (/24) with 256 IP values is a good start. However, you may want to choose a mask of length 23 or even 22 to ensure that you have enough IPs for running the required number of WLS, OHS, Coherence Server instances on two or more compute nodes assigned to a department for running its application.
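Guru's point about mask length can be checked with Python's standard ipaddress module: a /23 spans two adjacent /24 blocks, matching the Exalogic/Exadata layout Gavin sketched (the host addresses below are arbitrary examples):

```python
import ipaddress

# A /23 supernet spans two adjacent /24 blocks, so 192.168.10.0/24
# (Exalogic) and 192.168.11.0/24 (Exadata) both fall inside it.
supernet = ipaddress.ip_network("192.168.10.0/23")
exalogic = ipaddress.ip_address("192.168.10.5")
exadata  = ipaddress.ip_address("192.168.11.5")

print(exalogic in supernet, exadata in supernet)  # True True
print(supernet.num_addresses)                     # 512
```

So the two racks remain in one subnet while staying visually separable by their third octet.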
    In general, when setting a net mask, it is always important that you consider such growth projections and possibilities.
    By the way, in my view, Exalogic and Exadata need not be in the same IP subnet, especially if you want to separate application traffic from database traffic. Of course, they can be separated by VLANs too.
    Hope this helps.
    Thanks
    Guru

  • Server 2008 R2 RDS HA Licensing configuration best practices

    Hello
    What is the best practice for setting up an HA licensing environment for RDS?  I'm using a mixture of RDS CALs for my internal/AD users and an External Connector license for my external/Internet users.
    Daddio

    Hi,
    To ensure high availability you want to have a fallback license server in your environment. The recommended method for configuring Terminal Services Licensing servers for high availability is to install at least two of them in Enterprise Mode with available Terminal Services CALs. Each server will then advertise itself in Active Directory as an enterprise license server under the following Lightweight Directory Access Protocol (LDAP) path: //CN=TS-Enterprise-License-Server,CN=site name,CN=sites,CN=configuration-container.
    For more details on how to set up your license server environment for redundancy and fallback, see the "Configuring License Servers for High Availability" section in the Windows Server 2003 Terminal Server Licensing whitepaper.
    Regards,
    Dollar Wang
    Forum Support

  • SRST Configuration - Best Practices

    We are starting a new Unified Communication deployment and will have an SRST at each remote location. I am wondering if there are any best practices in regards to the configuration of the SRST.
    For example, does it matter which interface is specified as the source address? I have seen some say it needs to be the LAN address and others say it needs to be a loopback address. Since the phones themselves will be attached to a VLAN on a switch that is connected to the router, is there a benefit either way? Are there any considerations not really covered in the base configuration that should be treated as best practice?
    I am sure I will have more questions as we progress so thanks for the patience in advance...
    Brent                    

    Hi Brent,
    The loopback is used because it is an interface that remains up regardless of the physical layer, so provided that appropriate routing is in place, the lo address will be reachable through the physical interfaces.
    Best practices on the top of my mind should include looking at the release notes for the software version you're using, check network requirements and compatibility matrix, interworking, caveats, and reserve time for testing.
    I'm sure you'll be just fine
    hth

  • RDM Configuration (Best Practices...)

    Folks, when attaching multiple RDMs to a VM, would you add one RDM per vSCSI adapter, or multiple RDMs to the same vSCSI adapter?
    Also, when creating the RDM, would you keep it with the virtual machine, or store it on a separate datastore (RDM pointers)?
    Just looking for some best practices. 

    Stuarty1874 wrote:
    Folks, when attaching multiple RDMs to a VM, would you add one RDM per vSCSI adapter, or multiple RDMs to the same vSCSI adapter?
    Multiple.
    Also, when creating the RDM, would you keep it with the virtual machine, or store it on a separate datastore (RDM pointers)?
    Keep it with the virtual machine.
    Just looking for some best practices.
    Also you might find this good
    http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-storage-guide.pdf
    Start at page 135.

  • UC on UCS RAID with TRCs best practices

    Hi,
    We bought UCS servers to do UC on UCS. The servers are TRC#1 (C240 M3S), hence with 16x300GB drives.
    I am following this guide to create the RAID (I actually thought they would come pre-configured, but that does not seem to be the case):
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/virtual/CUCM_BK_CF3D71B4_00_cucm_virtual_servers.pdf
    When it comes to setting up the RAID for the C240-M3, it says I should create two RAID 5 arrays of 8 disks each, one per SAS adapter.
    The thing is, on my servers I apparently have only one adapter, which controls all 16 disks. It might be a newer card that was not available at the time the guide was written.
    So my question is: should I still configure two RAID 5 volumes although I only have one SAS adapter, or can I use a single RAID 5 (or other) volume?
    If I stick with two volumes, are there recommendations, for example, to put some UC apps on one volume and others on the other? These servers will be used for two clusters, so I was thinking of using one datastore per cluster.
    Thanks in advance for your thoughts
    Aurelien

    Define "Best"?
    It really comes down to what your requirements are, i.e. which applications you are going to use, whether you are going to use SAN, how many applications, what your budget is, etc.
    Here is a link to Cisco's UC on UCS wiki:
    http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware
    HTH,
    Chris

  • Deployment specific configuration - best practice

    I'm trying to figure out the best way to set up deployment-specific configurations.
    From what I've seen, I can configure things like session timeouts and datasources. What I'd like to configure is a set of programmatically accessible parameters. We're connecting to a BPM server and we need to configure the URL, username, and password, and make these available to our operating environment so we can set up the connection.
    Can we set up the parameters via a deployment descriptor?
    What about Foreign JNDI? Can I create a simple JNDI provider(from a file perhaps?) and access the values?
    Failing these, I'm looking at stuffing the configuration into the database and pulling it from there.
    Thanks

    Which version of the product are you using?
    Putting the configs in web.xml config params, as in this example:
    https://codesamples.samplecode.oracle.com/servlets/tracking/remcurreport/true/template/ViewIssue.vm/id/S461/nbrresults/103
    will allow you to change the values per deployment easily with a deployment plan.
    Another alternative, in 10.3.2 and later, is a feature that allows resources like normal properties files to be overridden by putting them in the plan directory. I don't have the link to that one right now.

  • RMAN/TSM Configuration - Best Practices?

    Wondering how best to configure RMAN with TSM from a network perspective. Dual HBA's? Suggestions?
    Thanks.

    * The OWB client software will reside on each user's windows desktop.
    Yes.
    * Should there be one repository that will deploy to the runtime environments of TEST, USER ACCEPTANCE and PRODUCTION environments? or should there be three repositories, one for each environment?
    One.
    * If there is only a single repository that will deploy to all three environments, should it reside on TEST, USER ACCEPTANCE or PRODUCTION server?
    Test, but you need a repository on the other environments too.
    * Does OWB have sufficient version control capabilities to drive three environments simultaneously?
    No, no version control at all. You can buy a third-party tool like http://www.scm4all.com
    * Is it possible to have an additional development repository on a laptop to do remote development of source/target definition, ETL etc and then deploy it to a centralized development repository? Perhaps it is possible to generate OMB+ scripts from the laptop repository and then run them against the centralized repository on the shared server.
    Use Export/Import in OWB via MDL files.
    Regards
    Detlef

  • WS application configuration - best practice advice

    Hi there,
    I'm a newbie on web services.
    I'm currently developing a Web Service using JAX-WS 2.1 and deploying on Tomcat 6.
    I'm wondering where to put my application's configuration. If I was developing a standalone java application I would put it in a Java Properties file or perhaps use the Java Preferences API.
    But this is not a standalone java application, it is something that runs in a container, in this case Tomcat.
    I have a feeling that web.xml is the intended place to put this information, but I've also read posts discouraging this. I'm also in the dark as to how to access the information in web.xml (the context parameters section) from within my web service.
    Basically, I would like to use a solution which is as standard as possible, i.e. not tied to any specific container.
    Thanks.

    What is the model number of the switch?
    Normally, an unmanaged switch (one whose settings cannot be changed) does not have an IP address.  So if your switch has an address (you said it was 192.168.2.12), I would assume that it can be changed, and that it must support either a GUI or some other way to set or reset the switch.
    Since Router1 is using the 192.168.1.x subnet, the switch would need a 192.168.1.x address (assuming it even has an IP address); otherwise Router1 will not be able to access the switch.
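    A quick Python check illustrates the mismatch (addresses taken from the post):

```python
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")       # Router1's subnet
switch = ipaddress.ip_address("192.168.2.12")      # switch's reported address

# 192.168.2.12 falls outside 192.168.1.0/24, so Router1 cannot reach
# the switch's management interface until the switch is readdressed
# into the 192.168.1.x range.
print(switch in lan)                                # False
print(ipaddress.ip_address("192.168.1.12") in lan)  # True
```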
    I would suggest that you initially set up your two routers without the switch and make sure they are working properly, then add the switch.  Normally you should not need to change any settings on your routers when you add the switch.
    To setup your two routers, see my post at this URL:
    http://forums.linksys.com/linksys/board/message?board.id=Wireless_Routers&message.id=108928

  • Configuration best practice

    My final deployed solution is going to consist of several PC-Fieldpoint pairs all located on the company intranet. Each PC talks to its own Fieldpoint and not any other. I have the sequences and Users.ini stored on the server.
    Now I'm trying to come up with the best place to store the FieldPoint IP address on the station. I know I could create a property loader and an associated XML file, and will do that if it is the preferred method. But it seems like overkill to store one IP address.
    Is there a better place to store information like this? I'm a beginner, so no suggestion is too obvious for me.
    I'm trying to make a system that will be maintainable by a real TestStand expert - either me in the future or my replacement<g>.

    Hi jtdavies,
    It sounds like the use of Station Globals would be the ideal route for your application. Attached is an easy example showing the use of Station Globals.
    Nestor
    Attachments:
    stationGlobals_access2.zip ‏38 KB

  • WLC Configuration Best practices - no updates since 2008?

    There have been no updates to this doc for almost 4 years.
    http://www.cisco.com/en/US/partner/tech/tk722/tk809/technologies_tech_note09186a0080810880.shtml
    That's a long time for wireless, especially since it still references release 5.2, and we are now at 7.0.  Plus quite a few new AP families have been announced, 802.11n, CleanAir, etc.  I think this document is overdue for an update.  Have there not been any lessons learned since 2008?  Can anyone from Cisco comment on this?

    Guys:
    I agree with you; many docs are old, pretty old.
    You can use the Feedback button at the bottom of the doc page and send your feedback to Cisco.
    Most of the time they will reply to you, and you can discuss your opinion that the doc is very old.
    I've done this with more than one doc and config example that described the configuration with images from version 3.x. They updated some of the docs to reflect later releases (6.x and 7.x).
    They have no problem with updating the docs; they have a good team that creates and updates them. Just be positive, hit that Feedback button, and tell them, and they'll surely help. (If not, please tell me; I have a personal contact with the wireless docs manager.)
    HTH,
    Amjad

  • Advice on RAID Sets, Volume Sets, and RAID Levels of the Volume Sets using an Areca Controller

    I have read through a lot of information on disk usage, storage rules for an editing rig, users inquiries/member responses in this forum and I thank each and every one of you – especially Harm.
    In building my new workstation, I purchased five (5) WD 1T, 7k, 64M SATAIII hard drives and an Areca RAID card, ARC-1880ix-16-4G, which I plan to use primarily as my media/data disk array.  The workstation will use a 128GB SATAIII SSD as the OS/program drive and I will transfer two (2) WD Raptor/10k SATA 70GB drives from my current system for pagefile/scratch/render use.  I tentatively plan on using a mobo SATAIII port for the SSD and mobo SATA ports with a software RAID (level 0) for the 10k Raptors.
    In reading the Areca Instruction manual, I am now considering exactly how I should configure the 5 physical 1TB drives in terms of RAID Level(s), Volume Sets, and RAID Sets.  I must admit that I like the opportunity of allowing for a Dedicated Hot Spare as I am generally distrustful of the MTBF data that drive vendors tout and have the bad experience in the past of losing data from a mal-configured RAID array and a single drive hardware failure (admittedly, my fault!).
    In line with the logic that one doesn't want to perform disk reads while writing at the same time (or vice versa), I am thinking the approach above should work OK, using the mobo disk interface plus both software and external hardware RAID controllers, without having to create separate RAID level configurations within a Volume Set or further dividing the physical drives into separate RAID Sets.  I know in forum messages that Harm noted that he had 17 drives, and I can envision a benefit to having separate RAID Sets in that situation, but I am not at that point yet.
    To some degree I think it might be best to just create one RAID Level on one Volume Set on one RAID Set, but want to solicit thoughts from veteran controller users on their workflows/thoughts in these regards.
    Anyone care to share thoughts/perspectives?  Thanks
    Bill

    Thanks for the speedy feedback Harm - I appreciate it.
    I was thinking RAID level 3 as well.
    Of course, it's always something!   I purchased the Caviar Blacks by mistake, which are non-TLER.   I will work with EggHead to return the ones I purchased and replace them with RE4 versions, as I'm not thrilled about the possibility of the controller unnecessarily declaring the volume/disks degraded. And although I have the DOS utility WDTLER, with which one is supposed to be able to enable/disable TLER on WD drives, I suspect WD is way beyond that now anyway with current builds.
    I agree with you about just testing the performance of the options for the raptors - on the mobo and then on the controller.  When I benchmark them I'll post the results in case others are curious.
    Thanks again....off to EggHead!

  • SD EDI with many partners / best practice

    I need some input on how best to approach a large EDI project.
    We will be accepting orders from about 80 customers.  Each one will be drop-shipping products from our warehouse to their customers.  I've set up the config for one location to take advantage of the EDPAR external/internal customer conversion table.  The IDoc uses the sold-to party as the KU-type partner profile name (e.g. 237661), which allows me to use the EDPAR conversion.  I'm able to get the IDoc processed now through to the finished order.
    Question:  How do I scale this?  Is this the best way to handle 80 partners?  If so, I will have to have one EDI translation per sold-to.  Should we really be hard-coding a sold-to account# as the partner profile name at the EDI translation level or is there a more generic way to handle this?
    It seems like there should be a way to say the partner profile for this customer group is EDIGRP01 and then use the incoming sold-to (external customer number) to determine which IDoc partner profile to use, or use user exits to make that logic happen.  I want to use the configurable best practices here, but it sure seems like a lot of work, with hard-coded account numbers to boot.
    Thank you for your thoughts.

    Reynolds, the partner profiles are there to identify the message type, the process code, and the function module that posts the IDocs. These partner numbers will not be used anywhere else. For creating sales orders you need the sales area, sold-to and ship-to numbers, and material numbers.
    These three values will be converted using EDPAR and the customer-material info records.
    Could you please explain why the validation of the customer number is required?
    If you really need customer validation, sales order creation does it automatically when determining the sales area or material number.
    A question for you: what configuration did you do for the automatic conversion of external partner numbers into internal customer numbers? Is it used for outbound
    IDocs as well? I am doing some outbound messages for order acknowledgment,
    where I need the external partner numbers to be passed in the IDoc and EDI message,
    but the automatic translation is not taking place. It is also not happening for inbound for me. Could you please tell me what I am missing?
    Please mail me at [email protected]
    Thanks for the help.
    Regards,
    Praveen
