Read a hard disk partition's cluster size

Is there a way to read the cluster size of a hard disk partition?
Are there different approaches necessary for different operating systems?
thx, cheers clownfish

clownfish wrote:
Is there a way to read the cluster size of a hard disk partition?
Yes, but not in plain Java; there is nothing in the standard library for it.
Are there different approaches necessary for different operating systems?
Yes, the native API will need to be used, and it varies between operating systems. Typically, the cluster size is given in sectors (as opposed to bytes).
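
For illustration, here is a minimal sketch of the command-line approach in Java. It assumes fsutil fsinfo ntfsinfo on Windows (which prints a "Bytes Per Cluster" line and needs admin rights) and GNU stat -f on Linux (which reports the filesystem block size); both output formats can vary, so treat this as a starting point rather than a finished implementation.

import java.io.BufferedReader;
import java.io.InputStreamReader;

/**
 * Sketch: read a partition's cluster/block size by invoking native OS
 * tools and parsing their output. Assumes fsutil (Windows, admin rights
 * required) and GNU stat (Linux) are available on the PATH.
 */
public class ClusterSize {
    public static void main(String[] args) throws Exception {
        String os = System.getProperty("os.name").toLowerCase();
        if (os.contains("win")) {
            // "fsutil fsinfo ntfsinfo C:" prints a "Bytes Per Cluster" line
            Process p = new ProcessBuilder("fsutil", "fsinfo", "ntfsinfo", "C:").start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains("Bytes Per Cluster")) {
                        System.out.println(line.trim());
                    }
                }
            }
        } else {
            // GNU stat: with -f, "%s" is the filesystem block size in bytes
            Process p = new ProcessBuilder("stat", "-f", "-c", "%s", "/").start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                System.out.println("Block size: " + r.readLine() + " bytes");
            }
        }
    }
}

As an aside, JDK 10 and later expose java.nio.file.FileStore#getBlockSize(), which returns the number of bytes per block on platforms that support it, so on a modern JVM you may not need to leave Java at all.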

Similar Messages

  • Raid-0 Stripe & Cluster Size

    I just ordered two 10k RPM Raptor S-ATA drives from Newegg; they should arrive shortly. I plan to configure my system with them as RAID-0 for increased performance. I just read the "Raid Setup Guide 865/875 LSR/FIS2R Rev 1.01" by Vango, and it seems that my mobo can be configured as RAID-0 with either the Intel ICH5R controller or the Promise controller.
    I will use the Promise controller for my RAID, since it seems to be faster. Now I have another question.
    What about stripe size/cluster size? My research is turning up too many suggestions with very different settings, and I can't decide what to do. Can someone suggest some good settings? The Intel RAID manual suggests a 128 KB stripe for best performance and says nothing about cluster size. Vango posted somewhere that he used a 64 KB stripe, but gave no info on cluster size.
    I will be using two 36 GB WD Raptors in RAID-0 as my main and only Windows array (disk) (I will install Windows, apps, and games to it), then use a PATA drive for backups and movie storage. My computer is used mostly for working with Office, creating web pages, playing EverQuest (a big game), and watching video (DivX movies). I use WinXP Pro SP1.
    Can someone suggest some general stripe/cluster settings that give good performance for this kind of usage? What is the easiest (best) way to change the 4 KB default cluster size on the array after I get Windows installed on it? Should I bother changing the cluster size at all? I have Partition Magic and other software available to do this, but I don't know the best procedure.
    Thanks in Advance

    I've always just used the 4K cluster size that Windows creates if you use NTFS. I honestly don't think this makes a big difference. If you want a different size, use PM to format the drive that way before installing XP. I would recommend against converting from one size to another; I did this once and ended up with all my files labeled in DOS 8.3 format (this was NOT good for my 1000+ MP3s).
    I use a 64K stripe size as a compromise. My research showed that people were getting the "best scores" using a small stripe size, but this seemed to come at the cost of CPU usage going up, and I'm unconvinced those scores relate much to how I actually use my HDDs. They say that if all your files are 128K and bigger, you don't need a smaller stripe size. If you're using the RAID as your XP drive you'll actually have lots of small files, so I would recommend something smaller than 128K. Maybe try 32K?
    Let us know how it goes.

  • Does the administrator have to set the cluster size for RAID 0+1, 3 and 5 according to the Microsoft tech document?

    Hi everyone,
    I always thank you for providing helpful information in this forum.
    As I have a plan to set up hard drives for RAID 0+1, 3 and 5, should I set the cluster size for each RAID type according to the URL provided by Microsoft TechNet? I mean, I want to set the cluster size to match the amount of data transferred in one hard-disk head access.
    The URL is:
    https://support.microsoft.com/kb/140365
    Thanks

    Hi OlegSmirnovPetrov,
    Additionally, on Windows Server 2008 and later versions, disk partition alignment is enabled by default; on Windows Server 2003 and earlier versions you need to enable it manually.
    For more information, please refer to the following articles:
    1. Best practices for using dynamic disks on Windows Server 2003-based computers
    http://support.microsoft.com/kb/816307
    2. Disk Partition Alignment (Sector Alignment): Make the Case: Save Hundreds of Thousands of Dollars
    http://blogs.msdn.com/b/jimmymay/archive/2009/05/08/disk-partition-alignment-sector-alignment-make-the-case-with-this-template.aspx
    3. General Hardware/OS/Network Guidelines for a SQL Box
    http://blogs.msdn.com/b/cindygross/archive/2011/03/10/general-hardware-os-network-guidelines-for-a-sql-box.aspx (please refer to the Storage specifications)
    4. Disk Partition Alignment Best Practices for SQL Server
    http://msdn.microsoft.com/en-us/library/dd758814.aspx
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • NTFS Cluster size is set at 4k instead of recommended 64k.

    We have found that our partition is not aligned and need to get some feedback on a few things.
    Here are our numbers:
    Starting partition offset = 32,256
    Stripe Size = 128k (131,072)
    Cluster size = 4k (4096)
    We are experiencing high "Avg Queue length" and high "avg disk usage" in Windows Performance Monitor...
    My question is this: how important is the NTFS cluster size at 4K? I know the recommendation is 64K, but how badly would a 4K cluster size ALSO affect performance, given we know that our partition is misaligned?
    Thanks,
    Ld

    > My question is this: how important is the NTFS cluster size at 4K? I know
    > the recommendation is 64K, but how badly would a 4K cluster size
    It's very important from a performance perspective, especially when you're dealing with a huge database and a large number of disks. 64K is the rule-of-thumb allocation unit size for SQL Server and can be considered the minimum unit size, at least in this case. If it's cut down to 4K, which is 1/16 of that, the disk arms need 16 times as many operations to grab the same amount of data.
    > Starting partition offset = 32,256
    > ALSO
    > affect performance given we know that our partition is misaligned?
    The starting offset has a similar impact to the allocation unit size, but as far as I know it usually only bites on older storage devices; I may be wrong here. The last offset-related issue I saw was on an HP EVA 5000 four years ago. Since then, the starting offset has been part of the initial optimization performed when the device is installed by the storage vendor, with no manual change needed from the customer. But to be on the safe side, please do check with your storage vendor to make sure.
    For either the allocation unit size or the starting offset, changes are very difficult once the system has gone live in production, because of the downtime and the transition to new storage involved.
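    As a back-of-the-envelope check (a hypothetical sketch using only the figures quoted in the question), alignment just means the starting offset is an even multiple of the stripe and cluster sizes:

    public class AlignmentCheck {
        public static void main(String[] args) {
            long offsetBytes  = 32_256;   // starting partition offset from the question
            long stripeBytes  = 131_072;  // 128K stripe size
            long clusterBytes = 4_096;    // 4K NTFS cluster size

            // Aligned only when the offset is an even multiple of these sizes.
            System.out.println("offset % stripe  = " + offsetBytes % stripeBytes);  // 32256 -> misaligned
            System.out.println("offset % cluster = " + offsetBytes % clusterBytes); // 3584  -> misaligned

            // The 4K-vs-64K point above: one 64K unit costs sixteen 4K I/Os.
            System.out.println("4K I/Os per 64K unit = " + (65_536 / clusterBytes)); // 16
        }
    }

    For what it's worth, Windows Server 2008 and later default new partitions to a 1,048,576-byte (1 MB) starting offset, which divides evenly by every common stripe size; offsets like 32,256 usually come from volumes created under Windows 2003 or earlier.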
    Regards,

  • How do I retrieve binary cluster data from a file without the presence of the cluster size in the data?

    Hey guys, I'm trying to read a binary data file created by a C++ program that didn't append sizes to the structures that were used when writing out the data. I know the format of the structures and have created a cluster typedef in LabVIEW. However, the Unflatten From String function expects to see additional bytes of data identifying the size of the cluster in the file. This just plain bites! I need to retrieve this data and have it formatted correctly without doing it manually for each and every single element. Please help!
    Message Edited by AndyP123 on 06-04-2008 11:42 AM

    Small update. I have fixed-size arrays in the clusters of data in the file, and I have been using arrays in my typedefs in LabVIEW, just defining x number of indexes in the arrays and setting them as the default value under Data Operations. LabVIEW may maintain the default values, but it still treats an array as a data type of unknown size. This is what causes LabVIEW to expect the cluster size to be appended to the file contents during an unflatten. I can circumvent this in the simplest of cases by using clusters of the same type of data in LabVIEW to represent a fixed-size array in the file. However, I can't go around using clusters of data to represent fixed-size arrays BECAUSE I have several multi-dimensional arrays of data in the file. To represent one of those as a cluster I would have to add a single value for every element and make sure they are lined up sequentially according to every dimension of the array. That gets mighty hairy, mighty fast.
    EDIT:  Didn't see that other reply before I went and slapped this in here.  I'll try that trick and let you know how it works.......
    Message Edited by AndyP123 on 06-04-2008 12:11 PM
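    Side note for anyone hitting the same problem outside LabVIEW: the underlying task is just walking a fixed-layout binary record that has no size header. A rough Java sketch of the same idea (the field layout and file name are hypothetical; match them to the real C++ struct, including any compiler padding, and check the byte order):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.file.Files;
    import java.nio.file.Path;

    /**
     * Sketch: parse fixed-layout C++ structs dumped to disk with no size
     * prefix. The field layout below is made up; derive the real one from
     * the struct declaration, padding included, before trusting the values.
     */
    public class FixedRecordReader {
        public static void main(String[] args) throws IOException {
            ByteBuffer buf = ByteBuffer.wrap(Files.readAllBytes(Path.of("data.bin")))
                                       .order(ByteOrder.LITTLE_ENDIAN); // typical for x86 C++
            while (buf.remaining() >= 16) {        // 16 bytes per record in this layout
                int    id     = buf.getInt();      // int32 field
                float  value  = buf.getFloat();    // float field
                double sample = buf.getDouble();   // one element of a fixed array
                System.out.printf("id=%d value=%f sample=%f%n", id, value, sample);
            }
        }
    }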

  • Maximum cluster size?

    Hi all,
    I'm using LV 7.1 and am trying to access a function in a DLL. The function takes a pointer to a data structure that is large and complex; I have calculated that the structure is just under 15 kbytes.
    I have built the structure as a cluster and then attempted to pass the cluster into the Call Library Function Node with the parameter set to "Adapt to Type". I get memory error messages, and when analysing the size of the cluster I am producing, it appears to be much smaller than required.
    I have also tried creating an array and passing that, but I think that won't work, as it needs to be a fixed-size array, which, according to what I've read, can only be achieved by changing it to a cluster, and that is limited to a size of 256 elements.
    Does anybody have a suggestion of how to overcome this?
    If any more detail is required then I'm happy to supply. 
    Dave.

    John.P wrote:
    Hi Dave,
    You have already received some good advice from the community, but I wanted to offer my opinion.
    I am unsure as to why the cluster size will not exceed 45; my only suggestion is to try using a Type Cast node, as this is the suggested method for converting from an array to a cluster with more than 256 elements.
    If this still does not work, then in this case I would recommend that you use a wrapper DLL. It is more work, but given the complexity of the cluster you are currently trying to create, I would suggest this is a far better option.
    Have a look at this KB article about wrapper DLLs.
    Hope this helps,
    John P
    John, I am having a hard time converting an array of more than 256 elements to a cluster. I attempted to use the Type Cast node you suggested and didn't have any luck. Please see the attached files... I'm sure I'm doing something wrong. The .txt file has a list of 320 elements. I want to run the VI so that in the end I have a cluster containing an equal number of integer indicators/elements inside. But more importantly, I don't want to have to build a cluster of 320 elements by hand. I'd like to just change the number of elements in the .txt file and have the cluster automatically be populated with the correct number of elements and their values. No more, no less. One of the good things about Array To Cluster was that you could tell the converter how many elements to expect and it would automatically populate the cluster with that number of elements (up to 256 elements only). Can the Type Cast node do the same thing? Do you have any advice? I posted this question, with more detail on my application, at the link below... no luck so far.
    http://forums.ni.com/ni/board/message?board.id=170&thread.id=409766&view=by_date_ascending&page=1
    Message Edited by PhilipJoeP on 05-20-2009 06:11 PM
    Attachments:
    cluster_builder.vi 9 KB
    config.txt 1 KB
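
    For comparison outside LabVIEW: the wrapper-DLL advice above boils down to treating the ~15 KB structure as one opaque native block and passing a pointer, instead of modeling every field. A hedged sketch of that idea with Java's foreign function API (Java 22+); the library and function names here are hypothetical:

    import java.lang.foreign.*;
    import java.lang.invoke.MethodHandle;

    /**
     * Sketch (hypothetical library/function names): allocate the big struct
     * as one raw native block and hand the DLL a pointer to it, reading
     * individual fields back out by offset afterwards.
     */
    public class BigStructCall {
        public static void main(String[] args) throws Throwable {
            try (Arena arena = Arena.ofConfined()) {
                MemorySegment struct = arena.allocate(15 * 1024);  // zero-filled native block

                SymbolLookup lib = SymbolLookup.libraryLookup("mylib", arena);
                MethodHandle fn = Linker.nativeLinker().downcallHandle(
                        lib.find("process_struct").orElseThrow(),
                        FunctionDescriptor.ofVoid(ValueLayout.ADDRESS));

                fn.invoke(struct);  // the DLL fills in the struct

                // Read a field back by offset, e.g. a 32-bit int at byte 8:
                int field = struct.get(ValueLayout.JAVA_INT, 8);
                System.out.println("field = " + field);
            }
        }
    }

    This is essentially what a thin wrapper DLL buys you in LabVIEW too: the wrapper owns the struct layout, and the caller only ever handles a pointer-sized value.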

  • [Solved] Kernel failed to re-read the partition for Udisk

    Reproduction procedure:
    * plug in a USB disk
    * use `fdisk` to partition the disk: `fdisk /dev/sdb`
    * save the partition table; an error will be raised.
    Here is the error raised by fdisk:
    Disk /dev/sdb: 3.7 GiB, 3974103040 bytes, 7761920 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x2dfbdb2a
    Command (m for help): n
    Partition type
    p primary (0 primary, 0 extended, 4 free)
    e extended (container for logical partitions)
    Select (default p): p
    Partition number (1-4, default 1):
    First sector (2048-7761919, default 2048):
    Last sector, +sectors or +size{K,M,G,T,P} (2048-7761919, default 7761919):
    Created a new partition 1 of type 'Linux' and of size 3.7 GiB.
    Command (m for help): w
    The partition table has been altered.
    Calling ioctl() to re-read partition table.
    Re-reading the partition table failed.: Success
    The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).
    Syncing disks.
    When using kpartx to re-read the partition table, it still raises an error:
    # sudo kpartx -a -s /dev/sdb
    device-mapper: reload ioctl on sdb1 failed: Invalid argument
    create/reload failed on sdb1
    The following is the error output when running it under strace:
    # sudo strace kpartx -a -s /dev/sdb
    open("/etc/udev/udev.conf", O_RDONLY|O_CLOEXEC) = 4
    fstat(4, {st_mode=S_IFREG|0644, st_size=49, ...}) = 0
    mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2f5cff0000
    read(4, "# see udev.conf(5) for details\n\n"..., 4096) = 49
    read(4, "", 4096) = 0
    close(4) = 0
    munmap(0x7f2f5cff0000, 4096) = 0
    access("/run/udev/control", F_OK) = 0
    open("/dev/urandom", O_RDONLY) = 4
    read(4, "\v~", 2) = 2
    semget(0xd4d7e0b, 1, IPC_CREAT|IPC_EXCL|0600) = 1179649
    semctl(1179649, 0, SETVAL, 0x1) = 0
    semctl(1179649, 0, GETVAL, 0xffffffffffffffff) = 1
    close(4) = 0
    semop(1179649, {{0, 1, 0}}, 1) = 0
    semctl(1179649, 0, GETVAL, 0xffffffffffffffff) = 2
    ioctl(3, DM_DEV_CREATE, 0xf4ff30) = 0
    ioctl(3, DM_TABLE_LOAD, 0xf4ff10) = -1 EINVAL (Invalid argument)
    write(2, "device-mapper: reload ioctl on s"..., 60device-mapper: reload ioctl on sdb1 failed: Invalid argument) = 60
    write(2, "\n", 1
    But if you unplug the disk and plug it in again, the partition table is re-read and I can see sdb1.
    My system is up to date as of 2015-04-14.
    Has anybody met this error, and can you tell me how to fix it? Thanks a lot.
    Last edited by Jeffrey4l (2015-04-30 02:01:35)

    alphaniner wrote:
    Jeffrey4l wrote:
    Actually, I first met this issue with the following steps:
    * qemu-img create disk 10G
    * qemu-nbd -c /dev/nbd0 disk
    * fdisk /dev/nbd0
    Writing the partition table raises the above error.
    I don't get any error when writing the table to an nbd.
    It's probably not relevant, but I load the nbd module manually with "modprobe nbd nbds_max=2 max_part=4"
    I do load that module.
    I also tried losetup on a file; it also raised an error when writing the partition info.
    So I wonder whether I forgot some module or service that is used for hot re-partitioning.

  • Best Cluster Size for RAID 0

    Hi all,
    When I get my replacement SATA HDD back I will be creating another RAID 0 array. My Windows XP cluster size is 4K, but I have a choice when I am creating my RAID array. All hardware is in my sig.
    The system is used mainly as a gaming machine, but I will be doing other things with it also. I'm looking for the best balance (slightly in favour of the gaming).
    I heard that the cluster size of the drive should match the cluster size of the RAID array. So if my drive cluster is 4K, should I set my RAID cluster size to 4K, or should I set them higher?
    Any information is more than welcome,
    Andrew
    P.S. I did do a search through the forums, but could not find much recent information.

    The "EASIEST" way to change your cluster size is to have a 3rd drive with Win XP on it....here is what you need to do.
    1. Have 3rd drive with Win XP
    2. Go to bios and change boot order to 3rd drive b4 Raid
    3. Once in wondows goto "Disk Management" (as soon as you click on it you will have a window pop up and ask you to select drive and it will also want to know if you want to convert drive to a dynamic disk...I always choose no)
    4. You will see your raid drives as 1 BIG drive..now all you do is right click on the drive and click partion...primary partition...set size...now is where you can choose cluster size...you will have 3 boxes to check off...
    5. NTFS or FAT32...Volume Label....Cluster Size......
    6. After you check all of that off then you click off quick format and Voila...you have done it now do same on rest of drives....
    7. once you have setup all your partitons and selected cluster sizes...shutdown PC unplug 3rd drive...change boot order so CD rom will be a bootable device b4 raid device and you are good to go on a clean install....remember once you get into the window when setting up XP you have choices to format drives again and choose where XP get sinstalled ...choose to leave as is...no need to format again cuz it will default to 4k cluster again.....
    let me know how this goes for ....I have been doing this trick now for a LONG time and I know for a fact that this is a fast and easy way...without using any 3rd party software.....
    my advice to you is try 16/16 then 32/32

  • SSAS 2008 R2 reading all partitions

    Hi,
    The report is very slow, and in SQL Profiler we see SSAS reading all of the year partitions even when we filter on one month. The cube partitions are monthly, and the cube size is around 150 GB. A measure column uses Last Non Empty. Cube performance is very slow. We read the prefetching article and tried the options, with no luck. How can we restrict it from reading all partitions? Any help/suggestions?
    Thanks
    Sreeni

    Hi Sreeni,
    I hope I am wrong but I think sorting out this type of issue remotely is very difficult.
    The book I can recommend is "Microsoft SQL Server 2008 Analysis Services Unleashed".
    The only other generic advice I can give is to narrow down the problem domain. In this case this means:
    1) Creating a separate copy of your BIDS SSAS solution with a minimum number of dimensions and a single measure group with two partitions.
    2) Thinking of any other way you can make that separate copy of your BIDS SSAS solution as simple as possible whilst still being able to demonstrate the unexpected partition scans, e.g. removing calculated members from your single measure group.
    This should bring you much closer to a resolution. The worst-case scenario is that you would need to contract a third-party on-site consultant, where your joint efforts would be much more focused.
    In short, by creating a stripped-down version of your cube you are much more likely to identify the specific area of your cube design that is causing the unexpected scans.
    I hope this helps,
    Kind Regards,
    Kieran.
    Kieran Patrick Wood http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood http://kieranwood.wordpress.com/

  • Array to cluster with adjustable cluster size

    Hi all
    Here I have a dynamic 1D array and I need to convert it into a cluster, so I use the Array To Cluster function. But I notice that the cluster size is a fixed value. How can I adjust the cluster size according to the 1D array size?
    Anyone please advise.
    Thanks...

    I won't disagree with any of the previous posters, but would point out a conversion technique I just recently tried and found to work well for my own particular purposes.  I've given the method a pretty good workout and not found any obvious flaws yet, but can't 100% guarantee the behavior in all settings.
    Anyhow, I've got a fairly good sized project that includes quite a few similar but distinct clusters of booleans.  Each has been turned into a typedef, complete with logical names for each cluster element.  For some of the data processing I do, I need to iterate over each boolean element in a cluster, do some evaluations, and generate an output boolean cluster.  I first structured the code to use the "Cluster to Array" primitive, then auto-index over the resulting array of booleans, perform the evaluations and auto-index an output array-of-booleans, then finally convert back using the "Array to Cluster" primitive.  I, too, was kinda bothered by having to hardcode cluster sizes in there...
    I found I could instead use the "Typecast" primitive to convert the output array back to my cluster. I simply fed the input cluster into the middle terminal to define the datatype. Then the output cluster is automatically the right size and right datatype.
    This still is NOT an adjustable cluster size, but it had the following benefits:
    1. If the size of my typedef'ed cluster changes during development by adding or removing boolean elements, none of the code breaks!  I don't have to go searching through my code for all the "Array to Cluster" primitives, identifying the ones I need to inspect, and then manually changing the cluster size on them one at a time!
    2. Some of my processing functions were quite similar to one another.  This method allowed me to largely reuse code.  I merely had to replace the input and output clusters with the appropriate new typedef.  Again, no hardcoded cluster sizes hidden in "Array to Cluster" primitives, and no broken code.
    Dunno if your situation is similar, but it gave me something similar to auto-sizing at programming time.  (You should test the behavior when you feed arrays of the wrong size into the "Typecast" primitive.  It worked for my app's needs, but you should make sure it's right for yours.)
    -Kevin P.

  • Disk Utility partitions with wrong sizes

    I'm trying to partition an external drive to clone/back up my TiBook's hard drive, but I can't get Disk Utility to produce the right partition sizes. I'm using an 80 GB 2.5" drive (the same size as my current internal drive). I make four partitions, set the sizes, then run the partition, and after it is done my partitions are different sizes. I've tried 'Locking' them, but it still doesn't work.
    I played with the sizes, and I still can't get the partitions to come out at the sizes I set.

    If the two drives are not exactly the same, then don't expect to be able to partition both of them exactly the same. Different drives will have different specifications and are not identical simply because they are both advertised as 80 GB drives. Even two drives with the same model number from the same manufacturer may differ if the firmware isn't the same on each due to different dates of manufacture. Why is it so critical for you that they be exactly the same?

  • FAT32 Cluster size?

    I am trying to format my SD card to FAT32 with the cluster size set to 32 KB. Is there any way I can do this?

    Maybe "mkfs.vfat -F 32 -S 32768"?

  • Large Cluster Size

    I believe the cluster size is limited to 436 members by default. We would like to increase this to about 550 temporarily. Can someone remind me of the setting to do this? Are there any critical drawbacks (I know it's not ideal)? Thanks in advance... Andrew.

    There were changes made in one of the 3.7.1.x releases to support large clusters. Previously there were issues with the preferred MTU, which was 64K, causing OOM in the Publisher/Receiver.

  • Cluster Size issues

    Hi,
    I am running SQL Server 2008 R2 on Windows Server 2008 R2. My databases reside on a RAID 5 configuration.
    Recently I had to replace one of the HDDs in the RAID with a different HDD. The result is that I now have two HDDs with a physical and logical sector size of 512 bytes, and one with a 3072-byte physical and 512-byte logical sector size.
    Since the rebuild, the databases and SQL had been fine: I could read from and write to the databases, and backups had no issues either. Today, however (two months after the RAID rebuild), I could no longer access the databases, and backups did not work either. I kept getting this error when trying to detach the database or back it up:
    TITLE: Microsoft SQL Server Management Studio
    Alter failed for Database 'dbname'.  (Microsoft.SqlServer.Smo)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Alter+Database&LinkId=20476
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    Cannot use file 'D:\dbname.MDF' because it was originally formatted with sector size 512 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
    Cannot use file 'D:\dblogname_1.LDF' because it was originally formatted with sector size 512 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
    Database 'dbname' cannot be opened due to inaccessible files or insufficient memory or disk space.  See the SQL Server errorlog for details.
    ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5178)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500&EvtSrc=MSSQLServer&EvtID=5178&LinkId=20476
    BUTTONS:
    OK
    My temporary solution was to move the DB to my C: drive and attach it from there. This is not ideal, as I am losing the redundancy of the RAID.
    Can anybody tell me if this is because of the hard drive with the larger sector size? (This is the only logical explanation I have.) And why would it only happen now?
    I am sorry if this is the wrong forum for this question.

    Apparently it was not until recently that the database spilled over onto the new disk. No, I don't know too much about RAIDs.
    But it seems obvious that you need to make sure that all disks in the RAID have the same sector size.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • K7n2 delta ilsr - cluster size

    How exactly would I change my cluster size? Also, if I just wanted to do it for the RAID array, not the OS volume, could I do that without a clean Windows install?

    Quote
    Originally posted by loopyloops
    Quote
    Originally posted by Bonz
    Yeah, I have to agree with Raven on this: don't mess with your cluster sizes, because it usually leads to a reload of your system. Windows wants that decided early on in the install if you want to change it for real. I have never seen a benefit myself when I experimented with it; if I dropped it lower than the default 4096 it drags, and you can waste a lot of space cranking it up, though (pictures are brutal), but your speed does improve. Most of these things are trial-and-error, though, so do as you see fit for your setup.
    Bonz
    This has to do with my upcoming RAID 0 array.
    I think I'll just go with a 16K stripe / 4K cluster.
