Optimizing RAID: stripe/chunk size

I'm trying to figure out how to optimize the RAID chunk/stripe size for our Oracle 8i server. For example, let's say that we have:
- 4 drives in the RAID stripe set
- 16 KB Oracle block size
- MULTIBLOCK_READ_COUNT=16
Now the big question is what the optimal setting is for the chunk/stripe size. As far as I can see, we would have two alternatives:
- case 1: stripe size = 256 KB
- case 2: stripe size = 64 KB
In case 1, all i/o would be spread out over all 4 drives. In case 2, we'd be able to isolate a lot of i/o to separate drives, so that each drive serves different i/o calls. My guess is that case 1 would work better where there's a lot of random disk i/o.
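The trade-off can be sketched with a little arithmetic (a rough illustration, not Oracle guidance; "stripe size" is taken here to mean the per-disk chunk, and reads are assumed to be aligned):

```python
# How many drives does one full-scan multiblock read touch?
block_kb = 16        # Oracle block size from the question
mbrc = 16            # MULTIBLOCK_READ_COUNT
drives = 4
scan_io_kb = block_kb * mbrc          # 256 KB per multiblock read

drives_touched = {}
for chunk_kb in (256, 64):
    # An aligned read spans ceil(io / chunk) chunks, one per drive (capped).
    drives_touched[chunk_kb] = min(drives, -(-scan_io_kb // chunk_kb))
print(drives_touched)                  # {256: 1, 64: 4}
```

With a 256 KB chunk a multiblock read stays on one drive, leaving the other three free to serve concurrent requests; with a 64 KB chunk it fans out across all four, which favors a single sequential stream. A random single-block 16 KB read fits inside one chunk either way.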
Does anyone have any thoughts or experience to share on this topic?
Thanks,
Alex Algard
WhitePages.com

It does not matter much. Do not mix software RAID and hardware RAID. A single OS I/O operation may be served from one disk or span several disks. Do not forget about track-to-track seek time.
Practice is the measure of truth :)
For example, http://www.fcenter.ru/fc-articles/Technical/20000918/hi-end.gif

Similar Messages

  • Raid-0 Stripe & Cluster Size

    I just ordered 2 10k RPM Raptor S-ATA Drives from newegg, they should arrive shortly. I plan to configure my system with them as Raid-0 for increased performance, I just read the "Raid Setup Guide 865/875 LSR/FIS2R Rev 1.01" by Vango and it seems that my Mobo can be configured as Raid-0 with either the Intel ICH5R Controller or the promise controller.
    I will use the Promise controller for the RAID, since it seems to be faster. Now I have another question.
    What about stripe size/cluster size? My research is turning up too many suggestions with very different settings, and I can't decide what to do. Can someone suggest a good setting? The Intel RAID manual suggests a 128 KB stripe for best performance and says nothing about cluster size. Vango posted somewhere that he used a 64 KB stripe, but gave no info on cluster size.
    I will be using two 36 GB WD Raptors in RAID-0 as my main and only Windows array (disk) (I will install Windows, apps, and games to it), then use a PATA drive for backups and movie storage. My computer is used mostly for office work, creating web pages, playing EverQuest (a big game), and watching video (DivX movies). I use WinXP Pro SP1.
    Can someone suggest general stripe/cluster settings that give good performance for this kind of usage? What is the easiest (best) way to change the 4K default cluster size on the array after I get Windows installed on it? Should I bother changing the cluster size at all? I have Partition Magic and other software available to do this, but I don't know the best procedure.
    Thanks in Advance

    I've always just used the 4K cluster size that Windows creates if you use NTFS. I honestly don't think this makes a big difference. If you want a different size, use PM to format the drive that way before installing XP. I would recommend against converting from one size to another. Did this once and ended up with all my files labeled in DOS 8.3 format.   (this was NOT good for my 1000+ MP3's)
    I use 64k stripe size as a compromise. My research showed that people were getting the "best scores" using a small stripe size. This seemed to come at the cost of CPU usage going up and I'm unconvinced these scores relate much to how I actually use my HDD's. They say if all your files are 128K and bigger you don't need a smaller stripe size. If you're using the Raid as your XP drive you'll actually have lots of small files so I would recommend something smaller than 128K. Maybe try 32k?
    Let us know how it goes.
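    The stripe-size rule of thumb above can be illustrated with a quick calculation (file sizes here are arbitrary examples, not measurements):

```python
# How many stripe-sized chunks does a file of a given size span?
def chunks_spanned(file_kb, stripe_kb):
    return -(-file_kb // stripe_kb)    # ceiling division

for stripe_kb in (32, 64, 128):
    spans = {size: chunks_spanned(size, stripe_kb) for size in (4, 48, 200, 2048)}
    print(stripe_kb, spans)
```

    Only files larger than the stripe actually get split across both drives; a small file occupies a single chunk at any of these stripe sizes. That is why a smaller stripe tends to help when the workload is dominated by small files, as on an XP system drive.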

  • Setting RAID stripe size?

    Does anyone know how you can set the stripe size used for a RAID 5 setup? Even just knowing what the default stripe size used would be helpful.
    I have searched in the forums and user manual but I have not found the ability to do this anywhere. Thanks in advance.

    Don't use Partition Magic; you can't partition RAID. A cluster size of 128 is good. After making the RAID-0 array, your BIOS will have the option to boot from the array as if it were one HD instead of two. Install your OS. While Windows is loading its driver database, press F6 to load third-party SCSI or RAID drivers; you should have a floppy with the drivers. After that you're done. Make sure there are no boot partitions on the single drives before you configure the array, or you might have problems installing Windows. At least that's been my experience.

  • Software RAID0 for Video Editing+Storage: What Stripe Block Size ?

    Hi
    I'm planning to set up a RAID-0 with 2x 2 TB HDDs to store and edit the videos from my digital camera. Most files will be 5 to 15 GB. In the past I had RAID-0 arrays on RHEL with the (old) default stripe block size of 64 KB. Since the new array will only contain very big files, would it be better to go with a 512 KB stripe block size? 512 is also now the default in the GNOME disk utility, which I use for my partitions.
    Another question: does the so-called "raid lag" exist? I think I've seen occasional stuttering in movies when they're played from a RAID-0/5 without cache enabled in the player (with cache it's fine). Games also seem to occasionally freeze when installed on RAID (I had this problem in the past with Wine games installed on RAID-0; they sometimes froze when loading data, which never happened on a normal HDD).
    Many thanks in advance for your suggestions

    That is a hard question to answer. Nothing is best for everyone. However, if I were to generalize, I would put it this way: if you want to cut everything from short promos to Hollywood pictures, you'd want:
    A high end windows pc (only cause mac pro hasn't been updated in ages)
    Avid Media Composer with Nitrus DX
    Two monitors
    Broadcast monitor
    HD Deck
    pimping 5.1 speakers
    A good mixer
    You are looking at over $70,000 or $80,000, and it could approach even more. HD decks run at least 15k.
    If price is not an issue then there you go....
    However, this is not realistic for most people, nor the best solution by any means. I run a MacBook Pro with Avid (as primary), Final Cut 7, Final Cut X (for practice; I haven't had to use it for a job yet), and Premiere (just in case).
    I am a Final Cut child who grew up on it and love it, but everything I have done in the last few years is on Avid.
    Have a second monitor..
    I am very portable, and the rest of the gear I usually get wherever I work. I am looking into getting a good broadcast monitor connected via AJA Thunderbolt.
    Like I said, this is a very open question; there is no "best", it all depends what you will be doing. If you get Avid (which can do everything, but is clunky as **** and counterintuitive) and you are only cutting wedding videos and short-format stuff, it would be overkill galore. Just get FCP X in that case: simple, easy, one app.
    Be more specific and you will get clearer answers.

  • JMS Chunk Size

    I was going through the JMS Performance Guide, Section 4.2.3, about tuning chunk size. The section provides a guideline for estimating the optimal chunk size: 4096 * n - 16, where the 16 bytes are for internal headers. Based on the text, the chunk size should be 4080 (also the default). My question is: wouldn't the suggested calculation 4096 * n - 16 double-count the 16 bytes? Shouldn't 4080 * n + 16 be the correct calculation for the final size, and the chunk size simply be 4080 * n?
    Thanks for your help.
    Gabe

    Hi Gabe,
    The internal header is one per chunk, so I think (4096 * n) - 16 is correct.
    Tom
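    Spelled out in code, the sizing rule looks like this (a sketch based only on the figures quoted above; the 16-byte header is assumed to be added once per chunk):

```python
HEADER_BYTES = 16                      # internal header, one per chunk

def suggested_chunk_size(n):
    # Guideline from the performance guide as quoted: 4096 * n - 16
    return 4096 * n - HEADER_BYTES

for n in (1, 2, 4):
    allocated = suggested_chunk_size(n) + HEADER_BYTES
    print(n, suggested_chunk_size(n), allocated)
    assert allocated % 4096 == 0       # chunk + header fills exact 4 KB pages

print(suggested_chunk_size(1))         # 4080, the documented default
```

    So there is no double counting: the header is paid once per chunk, and subtracting it once makes chunk-plus-header land exactly on a multiple of 4096.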

  • What is the best Practice to improve MDIS performance in setting up file aggregation and chunk size

    Hello Experts,
    In our project we have planned some parameter changes to improve MDIS performance, and we want to know the best practice for setting up file aggregation and chunk size when importing large numbers of small files (each file contains one record and is about 2 to 3 KB) through the automatic import process.
    Below are the current settings in production:
    Chunk Size = 2000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 5
    Records per Minute Processed = 37
    And we made the below settings in the development system:
    Chunk Size = 70000
    No. of Chunks Processed in Parallel = 40
    File Aggregation = 25
    Records per Minute Processed = 111
    After making the above changes the import process improved, but we want an expert opinion before making these changes in production, because there is a huge difference between what is in prod and what we changed in Dev.
    thanks in advance,
    Regards
    Ajay
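    The two measured configurations can be compared directly (numbers taken from the post above; this is a comparison of observations, not a predictive model):

```python
prod = {"chunk_size": 2000,  "parallel": 40, "aggregation": 5,  "rec_per_min": 37}
dev  = {"chunk_size": 70000, "parallel": 40, "aggregation": 25, "rec_per_min": 111}

speedup = dev["rec_per_min"] / prod["rec_per_min"]
print(f"observed speedup: {speedup:.1f}x")   # observed speedup: 3.0x
```

    Since parallelism is unchanged, the 3x improvement comes from the larger chunk size and higher aggregation; whether it carries over to production data volumes is exactly the question for the experts.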

    Hi Ajay,
    The SAP default values are as below:
    Chunk Size = 50000
    No. of Chunks Processed in Parallel = 5
    File aggregation: depends largely on the data. If you have only one or two records being sent at a time, it is better to cluster them together and send them in one shot, instead of sending one record at a time.
    Records per minute processed: same as above.
    Regards,
    Vag Vignesh Shenoy

  • Large Block Chunk Size for LOB column

    Oracle 10.2.0.4:
    We have a table with 2 LOB columns. The average blob size of one column is 122K and of the other is 1K, so I am planning to move the column with the big blob size to a 32K chunk size. Some questions I have:
    1. Do I need to create a new tablespace with a 32K block size and then create the table with a 32K chunk size for that LOB column, or can I just create a table with a 32K chunk size in the existing tablespace, which has an 8K block size? What are the advantages or disadvantages of one approach over the other?
    2. Currently db_cache_size is set to "0", do I need to adjust some parameters for large chunk/block size?
    3. If I create a 32K chunk, is that chunk shared with other rows? For example, if I insert a 2K blob, would the remaining 30K be available for other rows? The following link says the 30K will be wasted space:
    [LOB performance|http://www.oracle.com/technology/products/database/application_development/pdf/lob_performance_guidelines.pdf]
    Below is the output of v$db_cache_advice:
    select
       size_for_estimate          c1,
       buffers_for_estimate       c2,
       estd_physical_read_factor  c3,
       estd_physical_reads        c4
    from
       v$db_cache_advice
    where
       name = 'DEFAULT'
    and
       block_size  = (SELECT value FROM V$PARAMETER
                       WHERE name = 'db_block_size')
    and
       advice_status = 'ON';
    C1 (size_for_estimate, MB)     C2 (buffers_for_estimate)     C3 (estd_physical_read_factor)     C4 (estd_physical_reads)
    2976     368094     1.2674     150044215     
    5952     736188     1.2187     144285802     
    8928     1104282     1.1708     138613622     
    11904     1472376     1.1299     133765577     
    14880     1840470     1.1055     130874818     
    17856     2208564     1.0727     126997426     
    20832     2576658     1.0443     123639740     
    23808     2944752     1.0293     121862048     
    26784     3312846     1.0152     120188605     
    29760     3680940     1.0007     118468561     
    29840     3690835     1     118389208     
    32736     4049034     0.9757     115507989     
    35712     4417128     0.93     110102568     
    38688     4785222     0.9062     107284008     
    41664     5153316     0.8956     106034369     
    44640     5521410     0.89     105369366     
    47616     5889504     0.8857     104854255     
    50592     6257598     0.8806     104258584     
    53568     6625692     0.8717     103198830     
    56544     6993786     0.8545     101157883     
    59520     7361880     0.8293     98180125     

    With only a 1K LOB you are going to want an 8K chunk size: as the Oracle document on LOBs referenced in the thread above explains, the chunk size is the allocation unit.
    Each LOB column has its own LOB segment, so each column can have its own LOB chunk size.
    The LOB data type is not known for being space efficient.
    There are major changes in 11g, with Secure Files available to replace traditional LOBs, now called Basic Files. The differences appear to be mostly in how the LOB data segments are managed by Oracle.
    HTH -- Mark D Powell --

  • Lob Chunk size defaulting to 8192

    10.2.0.3 Enterprise on Linux x86-64
    DB is a warehouse and we truncate and insert and or update 90% of all records daily.
    I have a table with almost 8 million records, and I ran the query below to get the max length of the LOB.
    select max(dbms_lob.getlength(C500170300)) as T356_C500170300 from T356 where C500170300 is not null;
    The max for the LOB was 404 bytes and the chunk size is 8192, and I think that is causing the LOB to have around 50 GB of wasted space that is being read during the updates. I tried creating a duplicate table called T356TMP with a LOB chunk size of 512, and then 1024; both times it changed back to 8192.
    I thought it had to be a multiple or divisor of the tablespace block size.
    Based on what is happening, is it true that the smallest chunk size I can use is the tablespace block size, or am I doing something wrong?
    Thanks

    Hmm, I think you might be on the wrong forum. This is the Berkeley DB Java Edition forum.

  • Lob Chunk size defaulting to 8192

    10.2.0.3 Enterprise on Linux x86-64
    DB is a warehouse and we truncate and insert and or update 90% of all records daily.
    I have a table with almost 8 million records, and I ran the query below to get the max length of the LOB.
    select max(dbms_lob.getlength(C500170300)) as T356_C500170300 from T356 where C500170300 is not null;
    The max for the LOB was 404 bytes and the chunk size is 8192, and I think that is causing the LOB to have around 50 GB of wasted space that is being read during the updates. I tried creating a duplicate table called T356TMP with a LOB chunk size of 512, and then 1024; both times it changed back to 8192.
    I thought it had to be a multiple or divisor of the tablespace block size.
    Based on what is happening, is it true that the smallest chunk size I can use is the tablespace block size, or am I doing something wrong?
    Thanks
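    The "around 50 GB of wasted space" estimate can be sanity-checked (a rough sketch assuming each out-of-line LOB occupies at least one full chunk, the chunk being the allocation unit, and that stored lengths sit near the 404-byte maximum):

```python
rows = 8_000_000       # "almost 8 million records"
chunk_bytes = 8192     # current LOB chunk size
lob_bytes = 404        # max(dbms_lob.getlength(...)) reported above

slack = rows * (chunk_bytes - lob_bytes)
print(f"~{slack / 2**30:.0f} GiB of per-chunk slack")   # ~58 GiB
```

    which is in the same ballpark as the figure quoted; the true number depends on the actual average LOB length rather than the maximum.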

    SaveMeorYouDBA wrote:
    Thanks for the replies. I should have included that I have looked at the size of the LOB.
    Bytes: 65,336,770,560
    Max data length: 404
    Chunk size: 8192
    Cache: NO
    Storage in row: DISABLED
    Record count: 8,253,158
    Per the tool you help design. Statspack Analyzer
    You have 594,741 table fetch continued row actions during this period. Migrated/chained rows always cause double the I/O for a row fetch and "table fetch continued row" (chained row fetch) happens when we fetch BLOB/CLOB columns (if the avg_row_len > db_block_size), when we have tables with > 255 columns, and when PCTFREE is too small. You may need to reorganize the affected tables with the dbms_redefinition utility and re-set your PCTFREE parameters to prevent future row chaining.
    I didn't help to design Statspack Analyzer - if I had, it wouldn't churn out so much rubbish. Have you seen anyone claim that I help in the design ? If so, can you let me know where that claim was published so I can have it deleted.
    Your LOBs are defined as nocache, disable storage in row.
    This means that
    a) each LOB will take at least one block in the lob segment, irrespective of how small the actual data might be
    b) the redo generated by writing one lob will be roughly the same as a full block, even if the data is a single byte.
    In passing - if you fetch an "out of row" LOB, the statistics do NOT report a "table fetch continued row". (Update: so if your continued fetches amount to a significant fraction of your accesses by rowid then, based on your other comments, you may have a pctfree issue to think about).
    >
    I have not messed with PCTFREE yet, I assumed I had a bigger issue with how often the CLOB is accessed and that it is causing extra IO to read 7.5K of wasted space for each logical block read.
    It's always a little tricky trying to decide what resources are being "wasted" and how to interpret "waste" - but it's certainly a good idea not to mess about with configuration details until you have identified exactly where the problem is.
    About 85% of the rows in this table are 384 bytes, so I don't think enable storage in row would be the best fit either.
    Reason:
    1. The block size is 8K; it would cause chaining, since 4K plus 384 bytes means only one complete row fits in each block; the second row would have about 90 bytes in the block and the rest in another block.
    2. The LOB is only accessed during the main update/insert, and only about 15% of the time during read SQL.
    Where did the 4K come from? Is this the avg_row_len when the LOB is stored out of line? If that's the case, then you're really quite unlucky; it's one of those awkward sizes that makes it harder to pick a best block size. If you got it from somewhere else, then what is the avg_row_len currently?
    When you query the data, how many rows do you expect to return from a single query ? How many of those rows do you expect to find packed close together ? I won't go into the arithmetic just yet, but if your row length is about 4KB, then you may be better off storing the LOBs in row anyway.
    Would it make better sense to create a tablespace for this smaller LOB with a block size of 1024?
    If it really makes sense to store your LOBs out of row, then you want to use the smallest possible block size (which is 2KB) - which means setting up a db_2k_cache_size and creating a tablespace of 2KB blocks. I would also suggest that you create the LOBs as CACHE rather than NOCACHE to reduce the I/O costs - particularly the redo costs.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    Edited by: Jonathan Lewis on Oct 4, 2009 7:27 PM

  • Adjusting chunk size for virtual harddisks

    My data partition with VHD images constantly runs out of space, so I have decided to repartition the drive in the next days. While doing that, I will redo most or all image files to sparsify the data inside the guests. There is one issue:
    To reduce the amount of space needed on the host, I want to reduce the "allocation chunk size" for (dynamically expanding/sparse) virtual hard disks. It's my understanding that when a guest writes to a filesystem block, the host actually allocates more than the "guest block size". For example, if a guest ext3 filesystem has a 4K block size and it writes to block #123, the host will not just allocate space for this single 4K block at offset #123; it may allocate a much larger chunk in case the guest also attempts to write to #124, and so on.
    A few months ago I read somewhere that this "allocation chunk size" can be adjusted, either when the VHD image is created or globally for all images. It was done with some PowerShell cmdlet, AFAIK.
    For my purpose I want to reduce it to a minimum, even if it comes with some performance cost.
    How can this property be adjusted?
    Thanks.
    Olaf
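    The space amplification being described can be sketched as follows (illustrative only; 2 MiB is assumed here as the dynamic-VHD allocation block size, a commonly cited default, and any guest write is assumed to materialize the whole host block containing it):

```python
HOST_BLOCK = 2 * 1024 * 1024    # assumed dynamic-VHD allocation block (2 MiB)
GUEST_BLOCK = 4 * 1024          # guest ext3 block size from the example

amplification = HOST_BLOCK // GUEST_BLOCK
print(f"a lone {GUEST_BLOCK // 1024} KiB guest write can allocate "
      f"{amplification}x its size on the host")
```

    Under these assumptions a single isolated 4 KiB write can back 512 times its own size, which is why shrinking the allocation block (if the tooling allows it) reduces host space usage at some performance cost.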

    Hi Olaf,
    It seems that this is beyond my ability to explain fully, but I have read this article on the relationship between VHDs and physical disks:
    http://support.microsoft.com/kb/2515143
    If I understand correctly, physical and virtual disks only have two "sector" sizes (512 bytes and 4K; VHD only supports 512). As for the "allocation chunk size" you mentioned, I think it is determined by the file system (such as NTFS or ext3).
    Maybe the PowerShell cmdlet you mentioned is "Set-VHD"; it has a parameter "PhysicalSectorSizeType" with only two values, 512 and 4096.
    For details please refer to the following link:
    http://technet.microsoft.com/en-us/library/hh848561.aspx
    Hope this helps.
    Best Regards
    Elton Ji

  • Storagetek 6140 - chunk size? - veritas and sun cluster tuning?

    hi, we've just got a 6140 and i did some raw write and read tests -> very nice box!
    current config: 16 fc-disks (300gbyte / 2gbit/sec): 1x hotspare, 15x raid5 (512kibyte chunk)
    3 logical volumes: vol1: 1.7tbyte, vol2: 1.7tbyte, vol3 rest (about 450gbyte)
    on 2x t2000 coolthread server (32gibyte mem each)
    it seems the max write perf (from my tests) is:
    512kibyte chunk / 1mibyte blocksize / 32 threads
    -> 230mibyte/sec (write) transfer rate
    my tests:
    * chunk size: 16ki / 512ki
    * threads: 1/2/4/8/16/32
    * blocksize (kibyte): .5/1/2/4/8/16/32/64/128/256/512/1024/2048/4096/8192/16384
    Has anyone out there done tests with other chunk sizes?
    How about tuning the Veritas filesystem and Sun Cluster?
    Veritas FS: I've read so far about write_pref_io and write_nstream...
    I guess setting them to write_pref_io=1048576, write_nstream=32 would be best in this scenario, right?
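    For what it's worth, the full-stripe arithmetic for this configuration works out as follows (a sketch assuming all 15 RAID-5 disks participate in each stripe, with one chunk per stripe holding parity):

```python
disks = 15
chunk_kib = 512                  # per-disk chunk from the config above
data_chunks = disks - 1          # RAID-5: one chunk of parity per stripe
full_stripe_kib = data_chunks * chunk_kib
print(full_stripe_kib, "KiB of data per full stripe")   # 7168 KiB = 7 MiB
```

    If this arithmetic applies, writes aligned to 7 MiB would fill whole stripes and avoid parity read-modify-write, so a preferred write size of 1 MiB (1048576) would cover only a fraction of one stripe.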

    I've responded to your question in the following thread you started:
    https://communities.oracle.com/portal/server.pt?open=514&objID=224&mode=2&threadid=570778&aggregatorResults=T578058T570778T568581T574494T565745T572292T569622T568974T568554T564860&sourceCommunityId=465&sourcePortletId=268&doPagination=true&pagedAggregatorPageNo=1&returnUrl=https%3A%2F%2Fcommunities.oracle.com%2Fportal%2Fserver.pt%3Fopen%3Dspace%26name%3DCommunityPage%26id%3D8%26cached%3Dtrue%26in_hi_userid%3D132629%26control%3DSetCommunity%26PageID%3D0%26CommunityID%3D465%26&Portlet=All%20Community%20Discussions&PrevPage=Communities-CommunityHome
    Regards
    Nicolas

  • RAID Stripe, slice failure.

    I have a RAID stripe set up on 3 disks, and I had a slice failure. Does anybody know how I can remove the slice and recover the data on the other 2 disks?
    Since OS X and all Unix-y OSes like to keep files contiguous, the loss should be minimal if I can recover the data from the two remaining good drives - any suggestions?

    I do have a backup, problem is, all the data I really need to recover is newer than my backup. 8-( Logically it ought to be recoverable -- because *nix file systems like to keep files contiguous, it stands to reason that the files themselves are contiguous on the two remaining good slices.
    I understand and do not doubt that you speak from experience -- it's just that I can understand a Windows file system being unable to recover said data, with files fragmented from one end of the disk to the other. But a *nix file system??? There is really no good reason why not.
    None that I can think of anyway. It's not like Darwin fragments a file all over the place. 8-(

  • RAID Stripe size for HD Capture

    Hey everyone,
    So I just captured a lot of Uncompressed 8 bit 4:2:2 1080i60.
    The video alone requires ~ 118 MB/s sustained transfer speed for live capture without dropped frames.
    I used our new MacPro. We built an internal RAID with 3 250 GB drives all striped together. I had to decide which stripe size to use. Since we have proportionally fewer files than the average RAID (200 or so at the most), and because our files are HUGE (2-40 GB), I figured a large stripe size would be appropriate. I set it to the max that Mac OS X software RAID supports, though I must confess I don't remember exactly what that was.
    We had little trouble with capture, sustaining over 200 MB/s bandwidth with this setup according to the Blackmagic drive speed test. However, when the drives filled up, they (expectedly) slowed down quite a bit. I frequently did speed tests, and when it got low I had to switch into shooting DVCPRO HD to avoid dropped frames.
    We had similar experiences before, when I had this same setup with the default stripe size, though it seemed overall a bit better with larger stripe sizes.
    My questions are thus:
    1) Is there any way to use a really large stripe (2MB or so) without buying a hardware RAID controller?
    2) Is my thinking correct that if I have a small number of gigantic files I should use a large stripe size?
    3) If I were to wipe out all my drives (including the 250 gig startup drive), and boot off the CD, could I tie all 4 drives together into RAID-0, and then install Mac OS X to that (and presumably partition it for a data storage volume)?
    4) Does anyone know of a good 2-4 port eSATA PCI Express card that has drivers for the MacPro? I know Sonnet has a 2 port, but it's one internal SATA and one eSATA. I need at least 2 eSATA ports for it to be worth my time, because I have a 4 drive eSATA RAID box with 4 320GB drives in it. I can use the motherboard's additional 2 SATA connectors through a backplate SATA -> eSATA converter, and with 2 more eSATA ports through a PCIe card, I would be able to use all 4 eSATA drives at once.
    4 ports would be even better as I would like to leave internal ports for a BD-R or HD-DVD-R (or whatever they're called) drive later on.
    Thanks!
    -Derek Prestegard
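    The ~118 MB/s figure above checks out arithmetically: uncompressed 8-bit 4:2:2 averages 2 bytes per pixel, and 1080i60 delivers 30 full frames per second. A quick sketch (assuming the standard 1920x1080 frame geometry):

    ```python
    # Sustained bandwidth needed for uncompressed 8-bit 4:2:2 1080i60 capture.
    WIDTH, HEIGHT = 1920, 1080
    BYTES_PER_PIXEL = 2    # 4:2:2 chroma subsampling at 8 bits averages 2 bytes/pixel
    FRAMES_PER_SEC = 30    # 1080i60 = 60 fields/s = 30 full frames/s

    bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
    bytes_per_sec = bytes_per_frame * FRAMES_PER_SEC
    mib_per_sec = bytes_per_sec / 2**20

    print(f"{mib_per_sec:.1f} MiB/s")  # ≈ 118.7 MiB/s, matching the figure above
    ```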

    1) Is there any way to use a really large stripe (2MB
    or so) without buying a hardware RAID controller?
    256k is the largest block size available with Disk Utility.
    2) Is my thinking correct that if I have a small
    number of gigantic files I should use a large stripe
    size?
    128k is the largest size I would ever think about using. I even use 32k in many situations. You can test it and see what works best for you. Large block sizes do not always translate into higher performance. I find more drives in the striped RAID set helps me more than larger block sizes.
    3) If I were to wipe out all my drives (including the
    250 gig startup drive), and boot off the CD, could I
    tie all 4 drives together into RAID-0, and then
    install Mac OS X to that (and presumably partition it
    for a data storage volume)?
    You could boot from a FW800 external and use all 4 internals
    for a RAID but I think you will be happier with SATA host adapters.
    4) Does anyone know of a good 2-4 port eSata PCI
    Express card that has drivers for the MacPro
    Here are a few options for SATA host adapters on a Mac Pro that I use. The WiebeTech Tera Card TCES0-2e SATA host adapter provides two SATA ports and works with the Mac Pro using SiI-3132 Mac drivers 1.1.6. You can see a review at amug.org:
    http://www.amug.org/amug-web/html/amug/reviews/articles/wiebetech/tces0/
    The FirmTek SeriTek/2SE2-E 2-port host adapter can be found here:
    http://www.firmtek.com/seritek/seritek-2se2-e/
    It works best on the Quad as it provides boot support and SMART drive support. On the Mac Pro you use the Firmtek cardbus 2SM2-E Mac driver until FirmTek can build new EFI drivers for the Mac Pro. No boot support is provided yet but it does pass SMART data to Mac OS X. Eventually the card will have Mac Pro boot support which will be nice.
    Both cards use the SiI-3132 controller. If you mix the WiebeTech and the FirmTek cards the Silicon Image Mac driver version 1.1.6 will take over and block the SMART data info that the FirmTek card supplies. As this is the case, I would go with one brand or the other but not mix them in the same Mac Pro.
    I have used three cards to provide six external SATA ports. You could use two cards with your external 4-bay enclosure. I would create a striped RAID with 4 external drives and two internal drives. This should provide you with the performance you need to handle 1080HD. If you want more power you could use 7 hard drives = 3 int. and 4 external.
    Have fun!
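    The point above, that more drives help more than bigger blocks, can be illustrated with a toy mapping of sequential byte offsets to member disks (the stripe size and drive count below are just illustrative parameters, not Disk Utility's internals):

    ```python
    def disk_for_offset(offset: int, stripe_size: int, n_disks: int) -> int:
        """Return which member disk of a RAID-0 set serves a given byte offset."""
        return (offset // stripe_size) % n_disks

    # A sequential read of a huge file rotates through every disk regardless of
    # stripe size; smaller stripes just cycle through the spindles faster, so a
    # single large request keeps all of them busy at once.
    stripe = 128 * 1024            # 128 KiB stripe
    disks = 3                      # three-drive striped set, as in the post above
    offsets = [i * stripe for i in range(6)]
    print([disk_for_offset(o, stripe, disks) for o in offsets])  # [0, 1, 2, 0, 1, 2]
    ```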

  • Raid Stripe size?

    Is 512 KB a good stripe size compared to the default 64 KB?
    Thanks
    Lee

    Quote
    Originally posted by lugen
    Hi,
        I have the Modded bios I can change almost anything about it.
    Lee
    Can you?  I am using the 5.4 modded bios and am unable to change the stripe size.  If I go to anything different than the original, Windows will not load.
    Computer will just hang after the raid bios screen with a little blinking cursor at the top left.

  • Optimizing chunks size for filesending with NetStream.send() method

    Hi,
    I would like to implement a p2p file-sending application. Unfortunately, object replication is not the best solution for this, so I need to implement it myself.
    The process is the following:
    Once the user has selected the file for sending, I load it into memory. After loading completes, I cut the file's data into chunks and send these chunks with the NetStream.send() method (this lets me track the progress of the transfer).
    How can I choose the chunk size that gives the fastest transfer? In other words, what is the best size for a chunk?
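    The chunking step described above is language-agnostic. As a rough sketch (in Python rather than ActionScript, and with an arbitrary 64 KiB chunk size, since the optimal value is exactly what's being asked), slicing loaded file data into fixed-size pieces looks like:

    ```python
    def split_into_chunks(data: bytes, chunk_size: int = 64 * 1024) -> list[bytes]:
        """Cut a fully loaded file into fixed-size chunks for sending one at a time.

        Each chunk would then be handed to the transport (NetStream.send() in the
        original post); sending N chunks yields N progress ticks on the receiver.
        """
        return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    payload = b"x" * 200_000                  # stand-in for the loaded file data
    chunks = split_into_chunks(payload)
    print(len(chunks), len(chunks[0]), len(chunks[-1]))  # 4 65536 3392
    ```

    The general tradeoff: smaller chunks give finer-grained progress reporting but more per-message overhead, while larger chunks amortize overhead at the cost of coarser progress and longer stalls on a slow link; measuring a few sizes against your actual peers is the only reliable way to pick one.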

    Hi there
    Please submit a Wish Form to ask for some future version of Captivate to offer such a feature! (Link is in my sig)
    Cheers... Rick
    Helpful and Handy Links
    Captivate Wish Form/Bug Reporting Form
    Adobe Certified Captivate Training
    SorcerStone Blog
    Captivate eBooks
