Best practices for I/O and storage performance

I'm building/buying a new server and was planning on going virtual.
Dual Xeon 2620 v3 with 64 GB RAM; we have about 15 users plus 14 remote users.
Main server: 2008/2012, SQL
2nd server: 2008/2012, file storage
3rd server: Terminal Services / Citrix (may not be needed, still evaluating)
Here is my concern.
Hyper-V would be installed on a mirrored pair of 120 GB SAS drives (RAID 1). I've been told this is unnecessary because Hyper-V doesn't require much space, and that putting it on an SSD would only speed up booting the hypervisor itself;
therefore, even if I put it on a slow 5400 RPM drive, that would only affect the initial boot of Hyper-V (which I don't plan on rebooting often). Is this true, and would the page file be an issue?
I was then planning on four 600 GB 15K SAS drives in RAID 10, and I would use that array for the datastores of all three servers.
I've been told that I/O contention on those drives will affect performance and that each server should be on its own separate physical drives (RAID volume).
Is this common? Should I be using separate disks for each virtual machine?
nambi

Do not create "silos" or "islands" of storage: it is a) a management headache and b) an effective way to steal IOPS from your configuration (see the rough IOPS arithmetic below). OBR10 (One Big RAID 10) is the way to go. See:
http://community.spiceworks.com/topic/262196-one-big-raid-10-the-new-standard-in-server-storage
Good luck :)
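As a rough worked example of why one pooled array usually beats per-VM volumes (assuming ~175 random IOPS per 15K SAS spindle, a rule-of-thumb figure rather than a measurement):

    4 spindles x ~175 IOPS            = ~700 random read IOPS for the whole RAID 10 array
    RAID 10 write penalty of 2        = ~350 random write IOPS for the array
    two separate RAID 1 pairs instead = each VM capped at ~350 read / ~175 write IOPS,
                                        even while the other pair sits idle

With OBR10, any one VM can borrow the whole array's IOPS whenever the other workloads are quiet.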
Hyper-V Shared Nothing Cluster. Only two Hyper-V hosts needed.

Similar Messages

  • ASM on SAN datafile size best practice for performance?

    Is there a 'best practice' for datafile size with respect to performance?
    In our current production, we have 25GB datafiles for all of our tablespaces in ASM on 10gR1, but I was wondering what the difference would be if I used, say, 50GB datafiles. Is 25GB a kind of midpoint so the data can be striped across multiple datafiles for better performance?

    We will be using Red Hat Linux AS 4 update u on 64-bit AMD Opterons. The complete database will be on ASM...not the binaries, though. All of the datafiles we currently have in our production system are 25GB files. We will be using RMAN-->Veritas tape backup and RMAN-->disk backup. I just didn't know if anybody out there was using smallfile tablespaces with 50GB datafiles or not. I can see that one of our tablespaces will probably be close to 4TB.
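    A quick bit of arithmetic on that 4TB tablespace (rounded, and assuming the sizes stay as described):

        4 TB / 25 GB per datafile ≈ 160 datafiles
        4 TB / 50 GB per datafile ≈  80 datafiles

    Either count is well below the 1022-datafile limit of a smallfile tablespace. Note, though, that a smallfile datafile tops out at roughly 4 million blocks (about 32 GB at an 8 KB block size), so 50GB datafiles would imply a 16 KB or larger block size, or bigfile tablespaces.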

  • Best practices for dealing with Exceptions on storage members

    We recently encountered an issue where one of our DistributedCaches was terminating itself and restarting due to a RuntimeException being thrown from our code (see below). As usual, the issue was in our own code, and we have updated it so that it does not throw a RuntimeException under any circumstances.
    I would like to know if there are any best practices for Exception handling, other than catching Exceptions and logging them. Should we always trap Exceptions and ensure that they do not bubble back up to code that is running from the Coherence jar? Is there a way to configure Coherence so that our DistributedCaches do not terminate even when custom Filters and such throw RuntimeExceptions?
    thanks, Aidan
    Exception below:
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=201, BackupPartitions=204}
    2010-02-09 12:40:39.222/88477.977 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=48): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException

    Bob - Here is the full stacktrace:
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): An exception (java.lang.RuntimeException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=StyleCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=1021, BackupCount=1, AssignedPartitions=205, BackupPartitions=204}
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47): Terminating DistributedCache due to unhandled exception: java.lang.RuntimeException
    2010-02-09 13:04:22.653/90182.274 Oracle Coherence GE 3.4.2/411 <Error> (thread=DistributedCache:StyleCache, member=47):
    java.lang.RuntimeException: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:84)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readAsObjectArray(PofBufferReader.java:3328)
         at com.tangosol.io.pof.PofBufferReader.readObjectArray(PofBufferReader.java:2168)
         at com.tangosol.util.filter.ArrayFilter.readExternal(ArrayFilter.java:243)
         at com.tangosol.io.pof.PortableObjectSerializer.initialize(PortableObjectSerializer.java:153)
         at com.tangosol.io.pof.PortableObjectSerializer.deserialize(PortableObjectSerializer.java:128)
         at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3284)
         at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2599)
         at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:348)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.net.message.requestMessage.distributedCacheRequest.partialRequest.FilterRequest.read(FilterRequest.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:4)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    Caused by: java.lang.ClassNotFoundException: com.edmunds.vehicle.Style$PublicationState
         at java.lang.Class.forName0(Native Method)
         at java.lang.Class.forName(Class.java:169)
         at com.edmunds.common.coherence.EdmundsEqualsFilter.readExternal(EdmundsEqualsFilter.java:82)
         ... 25 more
    2010-02-09 13:04:23.122/90182.743 Oracle Coherence GE 3.4.2/411 <Info> (thread=Main Thread, member=47): Restarting Service: StyleCache
    Our code was doing something simple like:
        catch (Exception e) {
            throw new RuntimeException(e);
        }
    Would using the ensureRuntimeException call do anything for us here?
    Edited by: aidanol on Feb 12, 2010 11:41 AM
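    As a hedged sketch only (not the poster's actual code, and not a fix for the ClassNotFoundException above, which means the filter's classes must still be deployed to every storage-enabled node's classpath): one defensive pattern is to keep exceptions from escaping a custom filter so they never reach the cache service thread, and to use Base.ensureRuntimeException where a checked exception must still propagate. The class name and POF property index below are hypothetical.

        import java.io.IOException;
        import java.util.Map;

        import com.tangosol.io.pof.PofReader;
        import com.tangosol.io.pof.PofWriter;
        import com.tangosol.io.pof.PortableObject;
        import com.tangosol.util.Base;
        import com.tangosol.util.filter.EntryFilter;

        // Hypothetical filter: matches entries equal to a target value, but never
        // lets an exception escape to the Coherence service thread.
        public class GuardedEqualsFilter implements EntryFilter, PortableObject {

            private Object m_oValue;

            public GuardedEqualsFilter() {
                // no-arg constructor required for POF deserialization
            }

            public GuardedEqualsFilter(Object oValue) {
                m_oValue = oValue;
            }

            public boolean evaluate(Object o) {
                try {
                    return m_oValue == null ? o == null : m_oValue.equals(o);
                } catch (Exception e) {
                    // Log and treat the entry as a non-match instead of killing the service.
                    Base.log("GuardedEqualsFilter evaluation failed: " + e);
                    return false;
                }
            }

            public boolean evaluateEntry(Map.Entry entry) {
                return evaluate(entry.getValue());
            }

            public void readExternal(PofReader in) throws IOException {
                try {
                    m_oValue = in.readObject(0);
                } catch (Exception e) {
                    // Wrap rather than rethrow raw, as asked about above.
                    throw Base.ensureRuntimeException(e, "failed to deserialize filter value");
                }
            }

            public void writeExternal(PofWriter out) throws IOException {
                out.writeObject(0, m_oValue);
            }
        }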

  • What is the best practice for creating master pages and styles with translated text?

    I format translated text all the time for my company. I want to create a set of master pages and styles for each language and then import those styles into future translated documents. That way, the formatting can be done quickly and easily.
    What are the best practices for doing this? As a company, we have tried this in the past, but without success. I'd like to know what other people are doing in this regard.
    Thank you!

    I create a master template that is usually devoid of content, except that I define as many of the paragraph styles as I believe can or will be used, with examples of their use in the body of the document; in effect, a style guide for that client. When beginning a new document for that client, I import those styles from the Paragraph Styles panel.
    The exception to this is when, in a rush, I begin the documentation first and then start the new work. Even then, in the new work I still pull in those defined paragraph and/or object styles via their panels.
    There are times I need new styles. If they have broader applicability than a one-off instance or publication, I open the style template for that client, import the new style(s) from the publication containing them, and create example paragraphs and usage instructions.
    Take care, Mike

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
    I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are still useful).
    Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type, which includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Running "ncs stop" does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
    Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes from the output of ps -ef when logged into the shell as root.
    Thanks guys!

    Amihan_Zerrudo wrote:
    1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive (did I get this right?).
    You should look at the functional requirements rather than the cost. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then do so. If it only needs to be request scoped (e.g. single-page form data), then keep it request scoped.
    2.) If the JSP page where I use that <jsp:useBean> is to be accessed hundreds of times a day, will that strain my server resources? Right now I am using the Sun GlassFish Server.
    It will certainly consume resources; just supply enough CPU speed and memory to the server. You cannot expect a webserver running on a Pentium 500 MHz with 256 MB of memory to flawlessly serve 100 simultaneous users in the same second, but you can expect it to serve 100 users per 24 hours.
    3.) Can you suggest best practices for memory management given the architecture I described above?
    Just write code so that it doesn't unnecessarily consume memory, and only allocate memory when your application needs to. You should let the hardware depend on the application requirements, not let the application depend on the hardware specs.
    4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun GlassFish Server take care of that, or will I have to purchase a powerful server?
    GlassFish is just application server software; it is not server hardware. Your concerns are rather hardware related.
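    To make the request-scope vs. session-scope distinction concrete, here is a minimal servlet-style sketch; the class, attribute names and /page.jsp path are hypothetical, and <jsp:useBean scope="request"> / scope="session" behave roughly like the setAttribute calls shown:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpSession;

        // Hypothetical servlet illustrating the two bean scopes discussed above.
        public class ScopeDemoServlet extends HttpServlet {

            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {

                // Request scope: lives only for this request/response cycle,
                // e.g. data backing a single form or page. Freed as soon as
                // the response is rendered.
                request.setAttribute("formData", "only needed to render this page");

                // Session scope: lives for as long as the user's session does,
                // e.g. the logged-in user. This is what costs server memory
                // for every concurrent session.
                HttpSession session = request.getSession();
                session.setAttribute("loggedInUser", "amihan");

                request.getRequestDispatcher("/page.jsp").forward(request, response);
            }
        }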

  • Tips n Tricks/Best Practices for integrating iPhone, iPad and MacBook Pro

    My wife just purchased an iPhone, iPad and MacBook Pro for her non-profit consulting business, and I was wondering whether a tips-and-tricks or best-practices guide exists for efficiently and productively integrating these devices.

    http://www.apple.com/icloud/

  • Best Practice For Database Parameter ARCH_LAG_TARGET and DBWR CHECKPOINT

    Hi,
    As a best practice, I need to know the recommendation or guideline for these two database parameters.
    I found that for ARCH_LAG_TARGET, Oracle recommends setting it to 1800 seconds (30 minutes).
    Maybe someone can guide me on these two parameters...
    Cheers

    Dear unsolaris,
    First of all, if you want to track full and incremental checkpoints, set the LOG_CHECKPOINTS_TO_ALERT parameter to TRUE. You will then see the checkpoint SCNs and the completion times in the alert log.
    A full checkpoint is triggered when a log switch happens, and the checkpoint position in the controlfile is written to the datafile headers. For just a tiny amount of time the database can be consistent even though it is open and in read/write mode.
    The ARCH_LAG_TARGET parameter is disabled (set to 0) by default. Here is the definition of that parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
    If you want to set this parameter, Oracle recommends 1800, as you have said. This can vary from database to database, and it is better to verify it through your own testing.
    Regards.
    Ogan
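    If it helps, here is a minimal JDBC sketch of setting both parameters dynamically; the connection URL and credentials are placeholders, the session needs the ALTER SYSTEM privilege, SCOPE=BOTH assumes an spfile is in use, and a DBA would normally run the same statements from SQL*Plus:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class SetCheckpointParams {
            public static void main(String[] args) throws SQLException {
                String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";   // placeholder
                try (Connection conn = DriverManager.getConnection(url, "system", "password");
                     Statement stmt = conn.createStatement()) {

                    // Log full/incremental checkpoint activity to the alert log.
                    stmt.execute("ALTER SYSTEM SET log_checkpoints_to_alert = TRUE SCOPE=BOTH");

                    // Force a log switch/archive at least every 1800 seconds,
                    // the commonly quoted recommendation discussed above.
                    stmt.execute("ALTER SYSTEM SET arch_lag_target = 1800 SCOPE=BOTH");
                }
            }
        }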

  • Best practice for migrating between environments and versions?

    Hi to all,
    we have a full suite of solutions custom-developed in SAP BPC 7.0, SP 7. We'd like to understand whether
    - there are best practices for copying these applications from one environment to another (another client)
    - there are best practices when the client has a newer version of SAP BPC (they will install 7.5, while we're still stuck on 7.0).
    Thank you very much
    Daniele

    Hi Daniele
    I am not entirely sure what you are asking. Could you please provide additional information?
    Are you looking for best-practice recommendations for governance, for example change transports between DEV, QA and PRD in BPC 7.0?
    If so, what is the best method? Server Manager backup and restore, etc.?
    And
    best-practice recommendations on how to upgrade to a different version of BPC, for example upgrading from BPC 7.0 to 7.5 or 10.0?
    Kind Regards
    Daniel

  • Best Practice for Buy in Set and Dismantle for Sales

    Hi All SAP Masters,
    We have a scenario where we purchase an item as a "set"; the set contains a few components (something like a material BOM). For example, a machine which comes with several parts. However, when the user receives this set from the supplier, they dismantle certain part(s) from the set/"machine" and sell them separately to the customer as a component/"single item".
    What is the best practice process to adopt in SAP?
    Please help. Thank you.
    Warmest Regards,
    Edwin

    If your client has the PP module, then follow these steps.
    Consider A to be the purchased material, which will be dismantled into B and C:
    1) Create a BOM for material B:
        assign the header material A as a consumption component with positive quantity,
        and add component C as a by-product with negative quantity in the BOM.
    2) Maintain the backflush indicator for A and C in the material master MRP2 view.
    3) Create a routing for B and maintain automatic GR for the final operation.
    4) Create a production order for B.
    5) Confirm the order in CO11N: A will be consumed with movement type 261, C will be received with movement type 531,
    and B will be received with movement type 101.
    Once the stock is posted to unrestricted use, you can sell B and C.

  • Best Practice for saving all fields and searches in capital letters

    I want to save all fields on all my pages in CAPS and also to search in CAPS; e.g., if the user enters search criteria in lowercase, it should automatically be converted to caps. What is the best practice for doing that?

    Hi,
    There are already so many discussions on this in this forum, some of the links are:
    Uppercase
    How to convert user input in the page to upper case?
    Sireesha
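    Building on the threads above, here is a hedged, framework-agnostic sketch of the usual approach: normalize values to uppercase both when saving and when comparing search criteria. The class and method names are hypothetical; in ADF you would call something like this from a setter, a value-change listener, or a view-object expression.

        import java.util.Locale;

        public final class UpperCaseUtil {

            private UpperCaseUtil() {
            }

            // Use an explicit Locale so behaviour doesn't change on, e.g., Turkish systems.
            public static String toUpper(String value) {
                return value == null ? null : value.trim().toUpperCase(Locale.ENGLISH);
            }

            public static boolean matches(String storedValue, String searchCriteria) {
                // Both sides are normalized, so lowercase input still matches uppercase data.
                String stored = toUpper(storedValue);
                String search = toUpper(searchCriteria);
                return stored != null && search != null && stored.contains(search);
            }

            public static void main(String[] args) {
                System.out.println(toUpper("smith"));                 // SMITH
                System.out.println(matches("JOHN SMITH", "smith"));   // true
            }
        }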

  • Best Practices for SSO between NWBC and BOBJ CMC

    What are the best practices in this scenario:
    - NWBC client (using SAP ECC logon credentials)
    - BOBJ client (configured using Windows AD credentials)
    I would like my users to log into NWBC but be automatically logged into the CMC for running Crystal Reports inside the NWBC GUI.
    Thanks
    Shane Kelly

    Yes. We're not using the Portal, only SAP GUI up till now,
    but we've recently configured our DEV server to run NWBC.
    Normally my users log into CMC InfoView in a browser, but with NWBC I can bring InfoView directly into the UI.
    However, it asks for a sign-on every time.
    I'd like to configure SSO from NWBC to BOBJ InfoView somehow.

  • Home Networking Best Practice for Performance

    Hi there, first-time poster. I have 3 wireless routers at home (Linksys WRT54Gs). I have a WEP password set up for security and everything works great. My only question is: is there anything I can look for in the settings that may boost intranet and internet performance? Just trying to make sure I have the settings set for the best performance. Two of the routers have Linksys firmware on them; the other has something called Talisman. These were given to me.
    I know that one thing I can do is change the antennas on them to boost the wireless signal, but I'm not sure if that improves performance; I assume that would come from settings. Thanks in advance.

    So you need to improve the wireless signal on your computers. Here are some settings you can change on your router; I think this might improve the wireless signal strength on your computers.
    Open an Internet Explorer browser page on your wired computer (desktop). In the address bar, type 192.168.1.1.
    Leave the username blank, and for the password use "admin" in lowercase.
    For the wireless settings, please do the following:
    Click on the Wireless tab.
    - Select manual configuration. Wireless Network Mode should be Mixed.
    - Provide a unique name in the Wireless Network Name (SSID) box in order to differentiate your network from your neighbours' networks.
    - Set the wireless channel to 11 (2.462 GHz). Wireless SSID Broadcast should be Enabled, and then click Save Settings.
    Please make a note of the Wireless Network Name (SSID), as this is the network identifier.
    For wireless security:
    Click on the sub-tab under Wireless > Wireless Security.
    Change the wireless security mode to WEP; Encryption should be 64 bits. Leave the passphrase blank, don't type in anything.
    Under WEP Key 1, type in any 10 numbers (numbers only, no letters, e.g. your 10-digit phone number) and click Save Settings.
    Please make a note of WEP Key 1, as this is the security key for the wireless network.
    Click on Advanced Wireless Settings.
    Change the Beacon Interval to 75, change the Fragmentation Threshold to 2304, change the RTS Threshold to 2304, and click "Save Settings".
    Now see if you can locate your wireless network and attempt to connect, and then check the signal strength on your computers.

  • Best practice for performing spell check in ADF

    Hi,
    I would like to know if there is a way to perform spell check in ADF. What is the best way to do it? Does ADF have some built-in functionality for that, or do I need an external library?
    Any help will be appreciated.
    Thank you in advance,
    Abraham
    I'm using Jdevelper 11.1.1.4.0 - Build JDEVADF_11.1.1.4.0_GENERIC_101227.1736.5923

    A couple of related threads:
    http://kr.forums.oracle.com/forums/thread.jspa?threadID=2167553
    spell check for the input text
    Maybe you can check whether there is an Ajax way of doing this through JSF.
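    Purely as a hedged illustration of the "external/custom check" route (this is not an ADF or JSF API): a trivial server-side dictionary lookup could look like the sketch below. The word list is hypothetical; a real solution would load a proper dictionary or use a dedicated spell-check library, and on the client side some browsers offer built-in spell checking for text inputs.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Locale;
        import java.util.Set;

        // Minimal spell check: flag any word not found in a known-word set.
        public class SimpleSpellCheck {

            private final Set<String> dictionary;

            public SimpleSpellCheck(Set<String> dictionary) {
                this.dictionary = dictionary;
            }

            // Returns the words from the input that are not in the dictionary.
            public List<String> findUnknownWords(String text) {
                List<String> unknown = new ArrayList<String>();
                for (String word : text.toLowerCase(Locale.ENGLISH).split("[^a-z]+")) {
                    if (!word.isEmpty() && !dictionary.contains(word)) {
                        unknown.add(word);
                    }
                }
                return unknown;
            }

            public static void main(String[] args) {
                // Tiny hypothetical dictionary; a real one would be loaded from a file.
                Set<String> words = new HashSet<String>(Arrays.asList("the", "quick", "brown", "fox"));
                SimpleSpellCheck checker = new SimpleSpellCheck(words);
                System.out.println(checker.findUnknownWords("The quik brown fox"));   // [quik]
            }
        }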

  • Best Practice for very large itunes and photo library..using Os X Server

    Ok setup....
    One iMac, one new MacBook Pro, and one MacBook, all on Leopard. Wired and wireless, all AirPort Extremes and Expresses.
    I have purchased a Mac mini plus a FireWire 800 2 TB RAID drive.
    I have a 190 GB ever-increasing music library (I rip one-to-one, no compression) and a 300 GB photo library.
    So, the question: will it be easier to set up OS X Server on the mini and access my iTunes library via that?
    Is it easy to do so?
    I only rip via the iMac, so the library is connected to that and shared to the laptops. How does one go about making the iMac automatically connect to the music if I transfer all of it to the server?
    The photo bit can wait depending on the answer to the music..
    many thanks
    Adrian

    I have a much larger itunes collection (500gb/ 300k songs, a lot more photos, and several terabytes of movies). I share them out via a linux server. We use apple TV for music/video and the bottleneck appears to be the mac running itunes in the middle. I have all of the laptops (macbook pros) set up with their own "instance" of itunes that just references the files on the server. You can enable sharing on itunes itself, but with a library this size performance on things like loading cover art and browsing the library is not great. Please note also I haven't tried 8.x so there may be some performance enhancements that have improved things.
    There is a lag on accessing music/video on the server of a second or so. I suspect that this is due to speed in the mac accessing the network shares, but it's not bad and you never know it once the music starts or the video starts. Some of this on the video front may be the codec settings I used to encode the video.
    I suspect that as long as you are doing just music, this isn't going to be an issue for you with a mini. I also suspect that you don't need OSX server at all. You can just do a file share in OSX and give each machine a local itunes instance pointing back at the files on the server and have a good setup.

  • Best Practice for update to iPhone and iTouch

    OK, when 3.0 comes down the pike, what is the best way to get 3.0 as a "clean" install? Currently 2.2.1 is on both. If I do a restore, will the system only pick up 3.0, or will it see 2.2.1, which is currently on the hard drive? With that in mind, how can I delete the 2.2.1 version of the iPhone and iTouch software? Sorry for two questions in one post.
    Steve H

    When firmware update 2.0 was released, the entire iPhone was erased first, including the existing firmware - just as when restoring an iPhone with iTunes - followed by 2.0 being installed, which was followed by the iPhone's backup being transferred to the iPhone.
    The same may apply with firmware update 3.0, with your iPhone's backup being updated immediately beforehand. If not, firmware version 2.2.1 will be updated to 3.0.
    If 2.2.1 is updated and you want a "clean" install of 3.0, you can follow the initial upgrade by restoring your iPhone with iTunes.
