NTFS cluster size is set to 4K instead of the recommended 64K.

We have found that our partition is not aligned and need some feedback on a few things.
Here are our numbers:
Starting partition offset = 32,256 bytes
Stripe size = 128K (131,072 bytes)
Cluster size = 4K (4,096 bytes)
We are seeing a high "Avg. Disk Queue Length" and high average disk usage in Windows Performance Monitor.
My question is this: how important is the NTFS cluster size of 4K? I know the recommendation is 64K, but how badly would a 4K cluster size ALSO affect performance, given that we know our partition is misaligned?
Thanks,
Ld

> My question is this: how important is the NTFS cluster size of 4K? I know
> the recommendation is 64K, but how badly would a 4K cluster size
It's very important from a performance perspective, especially when you're facing a huge database and a large number of disks. 64K is the rule-of-thumb allocation unit size for SQL Server, and can be considered the minimum unit size in this case. Imagine it cut down to 4K, which is 1/16 of that: the disk arms then need roughly 16 times as many operations to grab the same amount of data.
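To make the arithmetic concrete, here is a tiny Python sketch comparing how many cluster-sized reads it takes to pull the same amount of data at 4K versus 64K (the 1 MB figure is just an example):

data = 1 * 1024 * 1024                      # example: 1 MB of data to read
for cluster in (4 * 1024, 64 * 1024):
    ops = data // cluster
    print(f"{cluster // 1024}K clusters: {ops} cluster-sized reads")
# 4K clusters: 256 cluster-sized reads
# 64K clusters: 16 cluster-sized reads -> the 1/16 ratio mentioned above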
> Starting partition offset = 32,256 bytes
> how badly would a 4K cluster size ALSO
> affect performance, given that we know our partition is misaligned?
Starting offset has a similar impact as the allocation unit size does, but as far as I know it usually only matters on old-fashioned storage devices; I may be wrong here. The last offset-related issue I saw was on an HP EVA 5000 four years ago. Since then, the starting offset is typically set as part of the initial optimization performed by the storage vendor at installation time, with no manual change needed from the customer. But to be on the safe side, please do check with your storage vendor.
In either case, allocation unit size or starting offset, it is very difficult to change once the system has gone live in production, because it requires downtime or a transitional storage device.
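For reference, the standard alignment checks from Microsoft's partition-alignment guidance boil down to two remainders, which can be run against the numbers in this thread; a minimal Python sketch:

offset = 32_256       # starting partition offset from this thread, in bytes
stripe = 131_072      # 128K stripe unit size
cluster = 4_096       # 4K NTFS allocation unit
# Both remainders should be 0 on a properly aligned volume.
print("offset % stripe  =", offset % stripe)    # 32256 -> misaligned
print("stripe % cluster =", stripe % cluster)   # 0     -> fine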
Regards,

Similar Messages

  • What is the preferred NTFS cluster size for a disk that a mailbox database will be installed on?

    I'm testing the Exchange 2013 deployment in our lab environment. We have an Exchange 2010 server whose mailbox databases are installed on disks formatted with a 64K allocation unit size (65,536 bytes per allocation unit). Is this also recommended for Exchange 2013?
    Remco

    Hi,
    NTFS allocation unit size represents the smallest amount of disk space that can be allocated to hold a file.
    Supported: all allocation unit sizes.
    Best practice: 64 KB for both .edb and log file volumes.
    http://technet.microsoft.com/en-us/library/ee832792(v=exchg.150).aspx
    Volume configurations for the Exchange 2013 Mailbox server role
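    If you want to confirm what a volume is actually formatted with, here is a minimal Python sketch (Windows only; it calls the Win32 GetDiskFreeSpaceW API through ctypes, and the drive letter is just an example):

    import ctypes

    def allocation_unit_size(root="D:\\"):
        # Ask Windows for sectors-per-cluster and bytes-per-sector,
        # then multiply to get the allocation unit (cluster) size.
        spc, bps = ctypes.c_ulong(), ctypes.c_ulong()
        free_c, total_c = ctypes.c_ulong(), ctypes.c_ulong()
        ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
            ctypes.c_wchar_p(root), ctypes.byref(spc), ctypes.byref(bps),
            ctypes.byref(free_c), ctypes.byref(total_c))
        if not ok:
            raise ctypes.WinError()
        return spc.value * bps.value

    print(allocation_unit_size("D:\\"))   # 65536 on a 64 KB-formatted volume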
    Terence Yu
    TechNet Community Support

  • Does the administrator have to set the cluster size for RAID 0+1, 3, and 5 according to the Microsoft TechNet document?

    Hi everyone,
    I always thank you for providing helpful information in this forum.
    I have a plan to set up hard drives in RAID 0+1, 3, and 5. Should I set the cluster size for each RAID type according to the URL provided by Microsoft TechNet? That is, I want to set a cluster size that matches what the disk head reads in a single access.
    The URL is:
    https://support.microsoft.com/kb/140365
    Thanks

    Hi OlegSmirnovPetrov,
    Additionally, on Windows Server 2008 and later, disk partition alignment is enabled by default; on Windows Server 2003 and earlier you need to handle it yourself. A quick way to inspect your offsets is sketched after the links below. For more information, please refer to the following articles:
    1. Best practices for using dynamic disks on Windows Server 2003-based computers
    http://support.microsoft.com/kb/816307
    2. Disk Partition Alignment (Sector Alignment): Make the Case: Save Hundreds of Thousands of Dollars
    http://blogs.msdn.com/b/jimmymay/archive/2009/05/08/disk-partition-alignment-sector-alignment-make-the-case-with-this-template.aspx
    3. General Hardware/OS/Network Guidelines for a SQL Box
    http://blogs.msdn.com/b/cindygross/archive/2011/03/10/general-hardware-os-network-guidelines-for-a-sql-box.aspx (please refer to the storage specifications)
    4. Disk Partition Alignment Best Practices for SQL Server
    http://msdn.microsoft.com/en-us/library/dd758814.aspx
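    As a quick way to see your actual starting offsets, here is a Python sketch (Windows only; wmic is deprecated on recent builds, so treat this as illustrative rather than definitive):

    import subprocess

    # List every partition's starting offset in bytes.
    out = subprocess.run(
        ["wmic", "partition", "get", "Name,StartingOffset"],
        capture_output=True, text=True, check=True).stdout
    print(out)
    # 1048576 (1 MiB) is the aligned Windows 2008+ default;
    # 32256 is the classic misaligned XP/2003 offset.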

  • Raid-0 Stripe & Cluster Size

    I just ordered 2 10k RPM Raptor S-ATA Drives from newegg, they should arrive shortly. I plan to configure my system with them as Raid-0 for increased performance, I just read the "Raid Setup Guide 865/875 LSR/FIS2R Rev 1.01" by Vango and it seems that my Mobo can be configured as Raid-0 with either the Intel ICH5R Controller or the promise controller.
    I will use the Promise controller for RAID, since it seems to be faster. Now I have another question.
    What about stripe size and cluster size? My research turned up too many suggestions with very different settings, and I can't decide what to do. Can someone suggest a good configuration? The Intel RAID manual suggests a 128 KB stripe for best performance and says nothing about cluster size. Vango posted somewhere that he used a 64 KB stripe, but gave no info on cluster size.
    I will be using two 36 GB WD Raptors in RAID-0 as my main and only Windows array (Windows, apps, and games installed on it), with a PATA drive for backups and movie storage. My computer is used mostly for office work, web-page creation, playing EverQuest (a big game), and watching DivX movies. I use WinXP Pro SP1.
    Can someone suggest general stripe/cluster settings that give good performance for this kind of usage? What is the easiest (best) way to change the 4K default cluster size on the array after Windows is installed, and should I even bother? I have Partition Magic and other software available, but I don't know the best procedure.
    Thanks in Advance

    I've always just used the 4K cluster size that Windows creates if you use NTFS. I honestly don't think this makes a big difference. If you want a different size, use PM to format the drive that way before installing XP. I would recommend against converting from one size to another. I did this once and ended up with all my files labeled in DOS 8.3 format. (This was NOT good for my 1000+ MP3s.)
    I use 64k stripe size as a compromise. My research showed that people were getting the "best scores" using a small stripe size. This seemed to come at the cost of CPU usage going up and I'm unconvinced these scores relate much to how I actually use my HDD's. They say if all your files are 128K and bigger you don't need a smaller stripe size. If you're using the Raid as your XP drive you'll actually have lots of small files so I would recommend something smaller than 128K. Maybe try 32k?
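    As a rough illustration of that trade-off, here is a small Python sketch counting how many stripe units a single file-sized read touches at the stripe sizes discussed above (the 256 KB file is just an example):

    file_size = 256 * 1024                       # example: one 256 KB file
    for stripe_kb in (32, 64, 128):
        stripe = stripe_kb * 1024
        units = -(-file_size // stripe)          # ceiling division
        print(f"{stripe_kb}K stripe: read spans {units} stripe units")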
    Let us know how it goes.

  • Best Cluster Size for RAID 0

    Hi all,
    When I get my replacement SATA HDD back I will be creating another RAID 0 array.  My Windows XP cluster size is 4K, but I have a choice when I am creating my RAID array.  All hardware in Sig.
    The system is being used mainly as a gaming machine, but I will be doing other things with it also. I'm looking for the best balance (slightly in favour of the gaming).
    I heard that the cluster size of the drive should match the cluster size of the RAID array. So if my drive cluster is 4K, should I set my RAID cluster size to 4K, or should I set them both higher?
    any information is more than welcome,
    Andrew
    P.S. I did do a search through the forums, but could not find much recent information.

    The "EASIEST" way to change your cluster size is to have a 3rd drive with Win XP on it....here is what you need to do.
    1. Have 3rd drive with Win XP
    2. Go to bios and change boot order to 3rd drive b4 Raid
    3. Once in wondows goto "Disk Management" (as soon as you click on it you will have a window pop up and ask you to select drive and it will also want to know if you want to convert drive to a dynamic disk...I always choose no)
    4. You will see your raid drives as 1 BIG drive..now all you do is right click on the drive and click partion...primary partition...set size...now is where you can choose cluster size...you will have 3 boxes to check off...
    5. NTFS or FAT32...Volume Label....Cluster Size......
    6. After you check all of that off then you click off quick format and Voila...you have done it now do same on rest of drives....
    7. once you have setup all your partitons and selected cluster sizes...shutdown PC unplug 3rd drive...change boot order so CD rom will be a bootable device b4 raid device and you are good to go on a clean install....remember once you get into the window when setting up XP you have choices to format drives again and choose where XP get sinstalled ...choose to leave as is...no need to format again cuz it will default to 4k cluster again.....
    let me know how this goes for ....I have been doing this trick now for a LONG time and I know for a fact that this is a fast and easy way...without using any 3rd party software.....
    my advice to you is try 16/16 then 32/32
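    If you want to double-check the cluster size a format actually produced, here is a minimal Python sketch (Windows only, usually needs an elevated prompt; it just parses fsutil output, and the drive letter is an example):

    import subprocess

    out = subprocess.run(
        ["fsutil", "fsinfo", "ntfsinfo", "C:"],
        capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Bytes Per Cluster" in line:
            print(line.strip())        # e.g. "Bytes Per Cluster : 4096"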

  • How do I retrieve binary cluster data from a file without the presence of the cluster size in the data?

    Hey guys,  I'm trying to read a binary data file created by a C++ program that didn't append sizes to the structures that were used when writing out the data.  I know the format of the structures and have created a cluster typedef in LabView.  However the unflatten from string function expects to see additional bytes of data identifying the size of the cluster in the file.   This just plain bites!  I need to retrieve this data and have it formatted correctly without doing it manually for each and every single element.  Please Help!
    Message Edited by AndyP123 on 06-04-2008 11:42 AM

    Small update. I have fixed-size arrays in the clusters of data in the file, and I have been using arrays in my typedefs in LabView, just defining x number of indexes in the arrays and setting them as the default value under Data Operations. LabView may maintain the default values, but it still treats an array as an unknown-size data type. This is what causes LabView to expect the cluster size to be appended to the file contents during an unflatten. I can circumvent this in the simplest of cases by using clusters of the same type of data in LabView to represent a fixed-size array in the file. However, I can't go around using clusters of data to represent fixed-size arrays BECAUSE I have several multi-dimensional arrays of data in the file. To represent one of those as a cluster I would have to add a single value for every element and make sure they are lined up sequentially according to every dimension of the array. That gets mighty hairy, mighty fast.
    EDIT:  Didn't see that other reply before I went and slapped this in here.  I'll try that trick and let you know how it works.......
    Message Edited by AndyP123 on 06-04-2008 12:11 PM
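    For comparison, outside LabView this kind of prefix-less, fixed-layout file is usually read by describing the record explicitly. A minimal Python sketch, with an entirely hypothetical record layout (one little-endian int32 plus a fixed 4-element float64 array) standing in for the real C++ structures:

    import struct

    RECORD = struct.Struct("<i4d")     # int32 + 4 doubles, no size prefix

    with open("data.bin", "rb") as f:  # hypothetical file name
        while True:
            chunk = f.read(RECORD.size)
            if len(chunk) < RECORD.size:
                break                  # EOF (or a trailing partial record)
            value, *samples = RECORD.unpack(chunk)
            print(value, samples)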

  • Maximum cluster size?

    Hi all,
    I'm using LV 7.1 and am trying to access a function in a DLL.  The function takes a pointer to a data structure that is large and complex.  I have calculated that the structure is just under 15kbytes.
    I have built the structure as a cluster and then attempted to pass the cluster into the call library function node with this parameter set as adapt to type.  I get memory error messages and when analysing the size of the cluster I am producing it appears to be much smaller than required.
    I have also tried creating an array and passing that, but I think that won't work, as it needs to be a fixed-size array. According to what I've read, that can only be achieved by changing it to a cluster, and a cluster is limited to a size of 256.
    Does anybody have a suggestion of how to overcome this?
    If any more detail is required then I'm happy to supply. 
    Dave.

    John.P wrote:
    Hi Dave,
    You have already received some good advice from the community but I wanted to offer my opinion.
    I am unsure as to why the cluster size will not exceed 45, my only suggestion is to try using a type cast node as this is the suggested method for converting from array to cluster with greater than 256 elements.
    If this still does not work then in this case I would recommend that you do use a wrapper DLL, it is more work but due to the complexity of the cluster you are currently trying to create I would suggest this is a far better option.
    Have a look at this KB article about wrapper DLL's.
    Hope this helps,
    John P
    John, I am having a hard time converting an array of greater than 256 elements to a cluster.  I attempted to use the type cast node you suggested and didn't have any luck.  Please see the attached files... I’m sure I’m doing something wrong.  The .txt file has a list of 320 elements.  I want to run the VI so that in the end I have a cluster containing equal number of integer indicators/elements inside.  But more importantly, I don't want to have to build a cluster of 320 elements.  I'd like to just change the number of elements in the .txt file and have the cluster automatically be populated with the correct number of elements and their values.  No more, no less.   One of the good things about the convert array to cluster was that you could tell the converter how many elements to expect and it would automatically populate the cluster with that number of elements (up to 256 elements only).  Can the type cast node do the same thing?  Do you have any advice?  I posted this question with more detail to my application at the link below... no luck so far.  
    http://forums.ni.com/ni/board/message?board.id=170&thread.id=409766&view=by_date_ascending&page=1
    Message Edited by PhilipJoeP on 05-20-2009 06:11 PM
    Attachments:
    cluster_builder.vi 9 KB
    config.txt 1 KB
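    As a side note, the same "pass a pointer to a large fixed-layout structure into a DLL" problem looks like this in Python's ctypes; the DLL name and field layout below are hypothetical, purely to show the shape of the approach:

    import ctypes

    class Params(ctypes.Structure):
        # Hypothetical layout -- it must match the DLL header exactly,
        # including any packing pragmas (set _pack_ if needed).
        _fields_ = [
            ("count", ctypes.c_int32),
            ("values", ctypes.c_double * 1800),   # fixed array, ~14.4 KB
        ]

    p = Params(count=3)
    p.values[0] = 1.0
    print(ctypes.sizeof(p))            # ~14.4 KB, near the 15 KB estimate
    # lib = ctypes.CDLL("mylib.dll")   # hypothetical DLL name
    # lib.process(ctypes.byref(p))     # pass a pointer to the structure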

  • Array to Cluster, Automate cluster size?

    I often use the Array to Cluster VI to quickly change an array of data into a cluster of data that I can then connect to a Waveform Chart.  Sometimes the number of plots can be different which results in extra plots (full of zeros) on the graph, or missing plots if the array is larger than the current cluster size.  I know I can right-click on the node and set the cluster size (up to 256) manually.  I could also use a case structure with as many Array to Cluster nodes as I need, set them individually and wire an Array Size to the case structure selector but that's kind of a PITA. 
    My question is whether anyone knows a way to control the cluster size value programmatically. It seems that if I can right-click it and do it manually, there must be some way to automate it, but I sure can't figure it out. It would be nice if you could simply wire your desired value into an optional input on the node itself. Any ideas will be much appreciated.
    Using LabVIEW: 7.1.1, 8.5.1 & 2013
    Solved!
    Go to Solution.

    I'm under the impression it's impossible.  See this idea for related discussion.
    Tim Elsey
    LabVIEW 2010, 2012
    Certified LabVIEW Architect

  • Optimal NTFS block size for Oracle 11G on Windows 2008 R2 (OLTP)

    Hi All,
    We are currently setting up an Oracle 11G instance on a Windows 2008 R2 server and were looking to see if there was an optimal NTFS block size. I've read the following: http://docs.oracle.com/cd/E11882_01/win.112/e10845/specs.htm
    But it only mentioned the block sizes that can be used (2K - 16K). Basically, what I got out of it was that the different block sizes affect the maximum number of database files possible for each database.
    Is there an optimal NTFS block size for Oracle 11G OLTP system on Windows?
    Thanks in advance

    > Is there an optimal NTFS block size for Oracle 11G OLTP system on Windows?
    Ideally the file-system block size should equal the Oracle tablespace block size,
    or at least divide evenly into the Oracle block size.
    For example, if the Oracle block size is 8K, the NTFS block size is best set to 8K, but 4K or 2K will also work.
    Both must also be a whole multiple of the disk sector size. Older disks had 512-byte sectors;
    contemporary HDDs usually have an internal sector size of 4K.
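    The divisibility rules above are easy to check mechanically; a minimal Python sketch (the 512-byte sector default reflects the older disks mentioned above):

    def sizes_compatible(oracle_bs, fs_bs, sector=512):
        # The Oracle block size should be a whole multiple of the FS block
        # size, and both should be whole multiples of the disk sector size.
        return (oracle_bs % fs_bs == 0
                and oracle_bs % sector == 0
                and fs_bs % sector == 0)

    print(sizes_compatible(8192, 8192))   # True : matched sizes
    print(sizes_compatible(8192, 2048))   # True : FS block divides DB block
    print(sizes_compatible(4096, 8192))   # False: FS block bigger than DB block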

  • Cluster size

    hello
    Can anybody tell me what the cluster size should be when partitioning a SATA HDD, for both NTFS and FAT32?
    PENTIUM 2.4GHZ 800MHZ FSB HTT ENABLED
    MSI 865PE NEO2-PS (D.O.T DISABLED)
    2x256 KINGSTON DDR 400MHZ 3-3-3-8 (PAT TURBO)
    INNO 3D 5900XT 128MB @ 450/850(REGULAR OVERCLOCKING)
    LG T710BH 17' BRIGHT VIEW MONITOR
    80GB SEAGATE SATA HARD DISK
    SAMSUNG DVD+CD-RW 16x52x32x52
    SAMSUNG CD-RW 52x24x52
    REALTECK ALC 655 SOUND
    REALTECK LAN CARD
    ST-LABS USB 2.0 PCI CARD
    US ROBOTICS 56K EXTERNAL MODEM
    PIXEL VIEW PLAY TV PRO HD
    CREATIVE INSPIRE 5200 5.1 SPEAKER
    CODGEN 400WATT PPOWER SUPPLY
    WINDOWS XP PRO SP1
    WINDOWS XP HOME SP1

    Article about NTFS
    Article 2 on NTFS
    NTFS: 4 KB default cluster size for volumes of 2,049 MB or larger.
    Articles on FAT32
    FAT32 default cluster sizes:
    Drive size            Default cluster size
    Less than 512 MB      512 bytes
    32 GB or more         32 KB

  • Array to cluster with adjustable cluster size

    Hi all
    Here I have a dynamic 1D array and I need to convert it into a cluster, so I use the Array to Cluster function. But I notice that the cluster size is a fixed value. How can I adjust the cluster size to match the 1D array size?
    Anyone, please advise.
    Thanks....

    I won't disagree with any of the previous posters, but would point out a conversion technique I just recently tried and found to work well for my own particular purposes.  I've given the method a pretty good workout and not found any obvious flaws yet, but can't 100% guarantee the behavior in all settings.
    Anyhow, I've got a fairly good sized project that includes quite a few similar but distinct clusters of booleans.  Each has been turned into a typedef, complete with logical names for each cluster element.  For some of the data processing I do, I need to iterate over each boolean element in a cluster, do some evaluations, and generate an output boolean cluster.  I first structured the code to use the "Cluster to Array" primitive, then auto-index over the resulting array of booleans, perform the evaluations and auto-index an output array-of-booleans, then finally convert back using the "Array to Cluster" primitive.  I, too, was kinda bothered by having to hardcode cluster sizes in there...
    I found I could instead use the "Typecast" primitive to convert the output array back to my cluster. I simply fed the input cluster into the middle terminal to define the datatype. Then the output cluster is automatically the right size and right datatype.
    This still is NOT an adjustable cluster size, but it had the following benefits:
    1. If the size of my typedef'ed cluster changes during development by adding or removing boolean elements, none of the code breaks!  I don't have to go searching through my code for all the "Array to Cluster" primitives, identifying the ones I need to inspect, and then manually changing the cluster size on them one at a time!
    2. Some of my processing functions were quite similar to one another.  This method allowed me to largely reuse code.  I merely had to replace the input and output clusters with the appropriate new typedef.  Again, no hardcoded cluster sizes hidden in "Array to Cluster" primitives, and no broken code.
    Dunno if your situation is similar, but it gave me something similar to auto-sizing at programming time.  (You should test the behavior when you feed arrays of the wrong size into the "Typecast" primitive.  It worked for my app's needs, but you should make sure it's right for yours.)
    -Kevin P.

  • FAT32 Cluster size?

    I am trying to format my SD card to FAT32 with the cluster size set to 32 KB. Is there any way I can do this?

    Maybe "mkfs.vfat -F 32 -S 32768"?

  • Large Cluster Size

    I believe cluster size is limited to 436 members by default. We would like to increase this to about 550 temporarily. Can someone remind me of the setting to do this? Are there any critical drawbacks? (I know it's not ideal.) Thanks in advance... Andrew.

    There were some changes made in a 3.7.1.x release to support large clusters. Previously there were issues with the preferred MTU, which at 64K was causing OOM errors in the Publisher/Receiver.

  • Where is the setting to set up pages instead of spreads? Need to pdf separate pages from a spread setup.

    Where is the setting to set up pages instead of spreads? I need to export separate PDF pages from a spread setup.

    This is the selection you're looking for:

  • Can a custom page size be set on a LaserJet 1100?

    Can a custom page size be set on a LaserJet 1100? Windows XP

    Hi,
    Based on this information, yes:
    Paper: A4, Letter, Legal, and custom paper sizes. Envelope: Executive, B5, C5, DL, Monarch, Com-10, and custom envelope sizes.
    Regards.
    BH
