RAID0 Stripe size

I'm using the 3ware 8506-4LP with 4 WD 250 GB Caviar drives (7200 rpm). What would be the ideal stripe size for:
-ReiserFS
-Reiser4
-XFS
Does the file system make a difference? What's the best stripe size for Linux in general, for everyday desktop use?
My options range from 64 KB all the way up to 1 MB.
I've read that 64 KB is ideal for Windows.
With the 4-way RAID I'm getting 100 MB/s in HDTach with a stripe size of 512 KB (which seems a little low).
My current plan is to use two of the drives with a 64 KB stripe size for Windows and the other two for Linux. I'm a general desktop user: a bit of DivX encoding and lots of compiling. I'm on an AMD FX-55 with 2 GB of RAM.
So what would be the ideal Linux stripe size?
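For the XFS case specifically: whatever stripe size you pick on the controller, XFS can be told the array geometry so it aligns allocations to stripe boundaries. A minimal sketch, assuming a 64 KB stripe unit over two data disks and a hypothetical device /dev/sda1 (just the mechanism, not a recommendation):
# su = stripe unit written per disk, sw = number of data disks in the stripe
mkfs.xfs -d su=64k,sw=2 /dev/sda1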


Similar Messages

  • Solaris Volume Manager: changing RAID0 stripe interlace size (-i 1024)

    Hi,
    I have 2 Volume Manager RAID0 stripe volumes (d61 & d62) which are mirrored (d60, see below). When the submirrors were created we omitted to increase the interlace size (-i 1024). Can I metaclear d61, recreate it with -i 1024, and reattach/resync, then metaclear d62, recreate it with -i 1024, and reattach/resync? I.e., during the transition I would have one half of the mirror with one interlace size and the second half of the mirror with a different interlace size. Or is it best to back up the volume, recreate the whole stripe/mirror volume, and then restore the data?
    Thanks in advance
    Kevin.....
    # metastat d60
    d60: Mirror
    Submirror 0: d61
    State: Okay
    Submirror 1: d62
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 1432370625 blocks (683 GB)
    d61: Submirror of d60
    State: Okay
    Size: 1432370625 blocks (683 GB)
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c3t0d0s0 0 No Okay Yes
    c3t1d0s0 4375 No Okay Yes
    c3t2d0s0 4375 No Okay Yes
    c3t3d0s0 4375 No Okay Yes
    c3t4d0s0 4375 No Okay Yes
    d62: Submirror of d60
    State: Okay
    Size: 1432370625 blocks (683 GB)
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c4t0d0s0 0 No Okay Yes
    c4t1d0s0 4375 No Okay Yes
    c4t2d0s0 4375 No Okay Yes
    c4t3d0s0 4375 No Okay Yes
    c4t4d0s0 4375 No Okay Yes

    Hi,
    Sun support confirmed that you can have the two halves of the mirror with different interlace sizes. This is obviously not the optimum setup, but it will allow me to detach d61, recreate it with the correct interlace size, reattach d61 and let it resync, then detach d62, recreate it with the correct interlace size, and finally reattach d62 and let that resync.
    Kevin....
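    A sketch of that sequence, assuming the five-slice layout shown in the metastat output above and that -i 1024b (1024 blocks, i.e. 512 KB) is the intended interlace; verify against your own metastat before running anything:
    # Detach, destroy, and rebuild the first submirror with the new interlace
    metadetach d60 d61
    metaclear d61
    metainit d61 1 5 c3t0d0s0 c3t1d0s0 c3t2d0s0 c3t3d0s0 c3t4d0s0 -i 1024b
    metattach d60 d61    # reattaching triggers a full resync of d61
    # Wait for the resync to complete before touching the other half
    metastat d60
    # Then repeat for the second submirror
    metadetach d60 d62
    metaclear d62
    metainit d62 1 5 c4t0d0s0 c4t1d0s0 c4t2d0s0 c4t3d0s0 c4t4d0s0 -i 1024b
    metattach d60 d62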

  • RAID Stripe size for HD Capture

    Hey everyone,
    So I just captured a lot of uncompressed 8-bit 4:2:2 1080i60.
    The video alone requires ~118 MB/s sustained transfer speed for live capture without dropped frames.
    I used our new Mac Pro. We built an internal RAID with 3 250 GB drives all striped together. I had to decide which stripe size to use. Since we have proportionally fewer files than the average RAID (200 or so at the most), and because our files are HUGE (2-40 GB), I figured a large stripe size would be appropriate. I set it to the max that Mac OS X software RAID supports, though I must confess I don't remember exactly what that was.
    We had little trouble with capture, sustaining over 200 MB/s bandwidth with this setup according to the Blackmagic drive speed test. However, when the drives filled up, they (expectedly) slowed down quite a bit. I frequently ran speed tests, and when throughput got low I had to switch to shooting DVCPRO HD to avoid dropped frames.
    We had similar experiences before, when I had this same setup with the default stripe size, though overall it seemed a bit better with larger stripe sizes.
    My questions are thus:
    1) Is there any way to use a really large stripe (2 MB or so) without buying a hardware RAID controller?
    2) Is my thinking correct that if I have a small number of gigantic files I should use a large stripe size?
    3) If I were to wipe out all my drives (including the 250 GB startup drive) and boot off the CD, could I tie all 4 drives together into RAID-0 and then install Mac OS X to that (and presumably partition it for a data storage volume)?
    4) Does anyone know of a good 2-4 port eSATA PCI Express card that has drivers for the Mac Pro? I know Sonnet has a 2-port, but it's one internal SATA and one eSATA. I need at least 2 eSATA ports for it to be worth my time, because I have a 4-drive eSATA RAID box with 4 320 GB drives in it. I can use the motherboard's additional 2 SATA connectors through a backplate SATA -> eSATA converter, and with 2 more eSATA ports through a PCIe card I would be able to use all 4 eSATA drives at once.
    4 ports would be even better, as I would like to leave internal ports free for a BD-R or HD DVD-R (or whatever they're called) drive later on.
    Thanks!
    -Derek Prestegard

    1) Is there any way to use a really large stripe (2 MB or so) without buying a hardware RAID controller?
    256k is the largest block size available with Disk Utility.
    2) Is my thinking correct that if I have a small number of gigantic files I should use a large stripe size?
    128k is the largest size I would ever think about using. I even use 32k in many situations. You can test it and see what works best for you. Large block sizes do not always translate into higher performance. I find more drives in the striped RAID set helps me more than larger block sizes.
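    One rough way to compare settings yourself is a sequential dd test; a minimal sketch, assuming the array is mounted at a hypothetical /Volumes/RAID (dd measures raw sequential throughput only, unlike the Blackmagic test):
    # write a 4 GB test file; dd prints bytes/sec when it finishes
    dd if=/dev/zero of=/Volumes/RAID/ddtest bs=1m count=4096
    rm /Volumes/RAID/ddtest   # remove the test file afterwards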
    3) If I were to wipe out all my drives (including the 250 GB startup drive) and boot off the CD, could I tie all 4 drives together into RAID-0 and then install Mac OS X to that (and presumably partition it for a data storage volume)?
    You could boot from a FW800 external and use all 4 internals for a RAID, but I think you will be happier with SATA host adapters.
    4) Does anyone know of a good 2-4 port eSATA PCI Express card that has drivers for the Mac Pro?
    Here are a few options for SATA host adapters on a Mac Pro that I use. The WiebeTech Tera Card TCES0-2e SATA host adapter provides two SATA ports and works with the Mac Pro using SiI-3132 Mac drivers 1.1.6. You can see a review at amug.org:
    http://www.amug.org/amug-web/html/amug/reviews/articles/wiebetech/tces0/
    The FirmTek SeriTek/2SE2-E 2-port host adapter can be found here:
    http://www.firmtek.com/seritek/seritek-2se2-e/
    It works best on the Quad, as it provides boot support and SMART drive support. On the Mac Pro you use the FirmTek cardbus 2SM2-E Mac driver until FirmTek can build new EFI drivers for the Mac Pro. No boot support is provided yet, but it does pass SMART data to Mac OS X. Eventually the card will have Mac Pro boot support, which will be nice.
    Both cards use the SiI-3132 controller. If you mix the WiebeTech and the FirmTek cards, the Silicon Image Mac driver version 1.1.6 will take over and block the SMART data that the FirmTek card supplies. As this is the case, I would go with one brand or the other, but not mix them in the same Mac Pro.
    I have used three cards to provide six external SATA ports. You could use two cards with your external 4-bay enclosure. I would create a striped RAID with the 4 external drives and two internal drives. This should provide you with the performance you need to handle 1080 HD. If you want more power you could use seven hard drives: 3 internal and 4 external.
    Have fun!

  • Max_io_size equivalent in Linux and block/stripe sizes

    I'm configuring a Red Hat Linux 7.1 server for Oracle 9i Release 2. I'm trying to determine the best db_block_size and db_file_multiblock_read_count parameters. I know that these Oracle settings depend on the OS block size and the max_io_size of the OS.
    Does anyone know what the Linux equivalent of the max_io_size parameter (Solaris) is, and how I set it in Linux? Does resetting it involve reinstalling Linux? Any suggestions on an appropriate range to set it? Is the default Linux 1 KB block size OK? (The server is a Compaq DL380 with a 1.4 GHz processor and 1 GB RAM.)
    Additionally, I have a Compaq 5300 Series RAID (5i integrated) that we plan to configure as RAID 0+1. Our controller only goes up to a stripe size of 256 KB, with a default of 128 KB. For a "general"-type database that could hold up to 80 GB of data over 50 or so tables, with roughly equal numbers of full-table scans and indexed scans, would you suggest I set the stripe size at 256 KB for the most flexibility down the road?
    I don't fully understand what it takes to configure Linux and RAID for the best I/O for Oracle, so I'd really appreciate any suggestions, tips, or doc references.
    Thanks,
    Deb
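    As an aside on how those two parameters interact with the stripe size: the largest read Oracle issues on a full-table scan is db_block_size * db_file_multiblock_read_count, and a common rule of thumb is to make the stripe size a multiple of that product. A quick check with illustrative values (assumptions, not recommendations):
    # hypothetical settings, for the arithmetic only
    DB_BLOCK_SIZE_KB=8
    DB_FILE_MULTIBLOCK_READ_COUNT=16
    echo $(( DB_BLOCK_SIZE_KB * DB_FILE_MULTIBLOCK_READ_COUNT ))   # 128 (KB per full-scan read)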

    It applies to both sd and ssd, inside sd.conf and inside /etc/system.
    Below is a TNF report of the I/O sizes from my process, showing that the kernel is breaking the I/O down.
    Sorry I wasn't clear on this part.
    62.059582 16.185079 480 1 0x3000338ecc0 0 strategy device: 584115552256 block: 60396848 size: 1048576 buf: 0x30000a78340 flags: 34088209
    306.154426 17.819569 480 1 0x3000338ecc0 0 strategy device: 584115552256 block: 60398896 size: 1048576 buf: 0x300035dcc00 flags: 34088209

  • Moving RAID0 stripe from SATA 1&2 to 3&4

    Like the topic says, I want to move my RAID0 stripe from SATA 1&2 to 3&4. I'm pretty new to RAID setups, so I thought I'd ask first whether it's possible without losing data and while still being able to boot from that array. Also, some info on how it should be done would be nice. I have everything backed up just in case, but don't really feel like reinstalling XP if not needed.
    edit: This is on the MSI K8N Neo, and I have 2 WD 120 GB SATA drives.

    Thanks for all the info, guys. I decided to make the move, and all went pretty well. Moving the array didn't need any more setting up; I just hooked the drives up to the other 2 ports in the right order, and the array was detected without any further configuration in the RAID BIOS.
    Though I made one little mistake the first time: I had SATA 3&4 disabled in the BIOS, so when I switched my array over and tried to boot Windows, it just kept crashing and rebooting because it didn't have the drivers loaded. So I switched back to 1&2, enabled 3&4 in the BIOS, let the drivers install in WinXP, rebooted, switched to 3&4 again, and it worked like a charm.
    Now let's see if I can overclock this baby a bit.
    And good luck to the others who are going to switch. It went almost problem-free for me, but always make that backup just in case.

  • Stripe size, please help

    I am about to set up a RAID 0 array on my K7N2 Delta board with 2 x 80 GB Samsung SATA drives. Any ideas what I would need to set the stripe size to for a Windows XP configuration? Also, does the array BIOS set it for you for optimal performance by default?

    djcla,
    I use a 16k stripe vs. the default 64k. I benchmarked all the options and found 16k optimal. Here were my latest hard drive performance numbers:
    Take Care,
    Richard

  • Setting RAID stripe size?

    Does anyone know how to set the stripe size used for a RAID 5 setup? Even just knowing the default stripe size would be helpful.
    I have searched in the forums and user manual but I have not found the ability to do this anywhere. Thanks in advance.

    Don't use Partition Magic; you can't repartition a RAID array. A cluster size of 128 is good. After making the RAID 0 array, your BIOS will then have the option to boot from the array as if it were one HD instead of two. Install your OS. While Windows is loading the driver database, you need to press F6 to load third-party SCSI or RAID drivers; you should have a floppy with the drivers. After that, you're done. Make sure there are no boot partitions on the single drives before you configure the array, or you might have problems installing Windows. At least, that's been my experience.

  • Stripe size for scratch disk array

    I am building a CS5/64 workstation running on Win 7/64 that will be used to edit 1-4 GB images. The scratch disks will consist of a RAID 0 array using 3-4 WD 600 GB 10K drives, short-stroked, on an Areca card.
    What is the best stripe size for a 3-4 disk array for large images? Does Adobe publish how they read/write to the scratch disk, the block size, etc.?
    Larry

    Right click on the root of your C: drive, and choose Properties.
    Click the Hardware tab, select the drive (array) you'd like to set advanced caching on, and click the [Properties] button.
    Click the Policies tab, and note the setting of the [ ] Turn off Windows write-cache buffer flushing on the device checkbox. This may not be available, depending on the drivers.
    Note, specifically, that this feature can cause quite a lot of disk data to end up in your RAM for a while, if an application gets significantly ahead of the drive's ability to write data.  This is where the warning about having good battery backup comes in.  I'll add my own comment:  Your system should be very stable as well.  You don't want it crashing when a lot of writes are pending.
    -Noel

  • Stripe Size for BE6000 RAID 10 Configuration.

    Hey,
    What should the stripe size be for the BE6000 RAID configuration, should you have to rebuild the volume? When we got a BE6000 and looked, the stripe size seemed to be set to 64K, but I see documentation saying that 128K should be used (although that seems to refer to the RAID 5 configurations).
    Anyone know?
    Thanks,
    Joey

    The documentation at http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/virtual/CUCM_BK_CF3D71B4_00_cucm_virtual_servers/CUCM_BK_CF3D71B4_00_cucm_virtual_servers_chapter_010.html doesn't make it seem like it only applies to RAID 5.

  • SE 6140 RAID 1+0 stripe size only up to 512 KB?

    Question: can I set a 1 MB stripe size on an SE 6140 array? The storage engineer says the maximum is 512 KB.

    Go to the Help section in CAM. Under the section about storage profiles, I found the supported stripe sizes are ... 8k, 16k, 32k, 64k, 128k, 256k, & 512k.
    Seems you got good advice from your engineer.

  • ASM with block device - stripe size question

    We are implementing a 2-node RAC system on Linux/RHEL, 10g Release 2. The hardware is an HP StorageWorks SAN storage array. We plan to use external redundancy for the disk groups, and I read in one of the RAC SIG Best Practices for ASM documents to select a stripe size of 1 MB, or as close to that as possible. The storage array's maximum stripe size is 64K. Can anyone discuss/explain whether this will be an issue if we're using external redundancy?

    Hi buddy,
    "Can anyone discuss/explain if this will be an issue if we're using External redundancy?" The idea is to improve the I/O operations as much as possible. Up to 10g, ASM had two options for allocating space for its files: coarse and fine. For coarse, the AU (Allocation Unit) was 1 MB; for fine, its size is 128 KB.
    Most database files use the 1 MB AU. Because of this, when ASM allocates 1 MB of space, if you have a stripe size of 1 MB you guarantee a better distribution of the I/O across all the disks the LUN belongs to.
    Take a look here and here
    Hope it helps,
    Cerreia
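    For what it's worth, a 64K array stripe doesn't defeat that distribution, it just slices each allocation finer: every 1 MB coarse AU simply spans several stripe units, which still wrap across the LUN's disks. The arithmetic:
    # 1 MB ASM allocation unit over a 64 KB array stripe unit
    echo $(( 1024 / 64 ))   # = 16 stripe units touched per AU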

  • RAID stripe size?

    Is 512 KB a good stripe size compared to the default 64?
    Thanks
    Lee

    Quote
    Originally posted by lugen
    Hi,
    I have the modded BIOS; I can change almost anything about it.
    Lee
    Can you?  I am using the 5.4 modded BIOS and am unable to change the stripe size. If I set it to anything different from the original, Windows will not load.
    The computer just hangs after the RAID BIOS screen with a little blinking cursor at the top left.

  • Changing Stripe sizes?

    Hi,
    I would LIKE to experiment with stripe sizes. I have 512 now with the modded BIOS for my KT3. If I were to delete an array and try a different stripe size for my RAID 0, would I have to reformat?
    Thanks

    http://kunibert.50megs.com/index.html
    I'm going to make an image of my RAID and TRY to drop it from 512 to 256.
    Just need to find Ghost for XP.
    Lee

  • Stripe size

    What is the optimal stripe size/formatting for a drive supporting a SQL 2014 DB on Windows 2012 R2? Is it 4K or 64K?
    All SQL databases are on Fusion iO cards, and our primary database is approximately 3 TB, used for both OLAP and OLTP.
    Using Fusion iO cards through Storage Spaces, we have run IOMeter testing as follows:
    With 64K transfers, 100% read, 0% write: 64K formatting gives 8,200 I/Os per sec; 4K formatting gives 262,000 I/Os per sec.
    With 64K transfers, 50% read, 50% write: 64K formatting gives 7,000 I/Os per sec; 4K formatting gives 11,000 I/Os per sec.
    Thanks in advance
    André

    Hello,
    The optimal stripe size is 64 KB for any version of SQL Server. This is because SQL Server works with 8 KB pages and groups them into extents; each extent is formed of 8 pages of 8 KB each, i.e. 64 KB.
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com
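    If the "formatting" in those tests refers to the NTFS allocation unit rather than the controller stripe, the 64 KB unit is selected when the volume is formatted. A sketch at the Windows command line, assuming a hypothetical data volume E: (note /Q performs a quick format and erases the volume's contents):
    format E: /FS:NTFS /A:64K /Q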

  • Max no. of drives in a RAID0 stripe?

    Hi,
    I suspect I know the answer already (which is no..), but can I increase my RAID from a 2-drive stripe RAID0 to a 4-drive stripe RAID0 on my K7T266-Pro2 onboard RAID controller?
    Looking at the RocketRAID PCI cards, you can have up to 4 drives on a stripe array, and on the quad-controller version up to 8!
    I suspect the PCI bus is the main limitation anyhow, at a theoretical max of 133 MB/s (@33 MHz), but it sounds good!
    Cheers for any advice on speeding my discs up!
    Graeme
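    The PCI figure is easy to sanity-check: conventional 32-bit PCI at 33 MHz transfers at most 4 bytes per clock, shared by every device on the bus.
    # 33.33 MHz * 4 bytes per cycle, in MB/s
    echo '33.33 * 4' | bc   # = 133.32, the usual quoted ~133 MB/s ceiling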

    Quote
    Originally posted by WaltC
    I hope this is not meant as a general comment on RAID 0 versus standard IDE, because if so it's completely wrong. My RAID 0 setup performs roughly 2x as fast as the standard IDE hookup in sustained operation with the same drive, and this ratio holds well across a spectrum of benchmarks. The performance difference is obvious and noticeable even in the absence of benchmarks.
    The specific make and model of drive has a lot to do with speed improvements as well. WD's JB models don't do as well as expected in RAID-0 because the cache algorithm is optimized for its 8 MB in a standard desktop environment. Also witness Seagate's still relatively new policy of replacing 'Cuda IV models that do poorly in RAID-0 with models that have revised firmware, which do quite a bit better. I've seen no mention of these things in this forum, but they are well known elsewhere.
