Performance of mirrored storage vs RAID 5

Which drive configuration is better for an Oracle database: mirroring the disks, or a RAID 5 configuration? We will be running dual P4 2.4GHz Xeon processors with 2GB RAM and three 36GB 15K RPM drives.
Thanks.

Mirroring has better performance, but it is limited by drive size. The better scheme for a large database (>200GB) is RAID 10, a "stripe of mirrors". In any case, don't forget some spare drives.
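For a rough feel of the trade-offs with the three 36GB drives mentioned in the question, here is a small back-of-the-envelope sketch (the figures are generic rules of thumb and the layout names are only illustrative, not from this thread):

```python
# Back-of-the-envelope comparison for 3 x 36 GB drives (plus a 4th for RAID 10).
# Figures are generic rules of thumb, not measurements of any specific array.
DISK_GB = 36

options = [
    # (layout, drives needed, usable GB, note)
    ("RAID 1 mirror + hot spare",   3, DISK_GB,     "survives 1 failure; each write hits 2 disks"),
    ("RAID 5 across 3 disks",       3, DISK_GB * 2, "survives 1 failure; parity cost on writes"),
    ("RAID 10 (stripe of mirrors)", 4, DISK_GB * 2, "survives 1 failure per mirror; needs a 4th drive"),
]
for layout, drives, usable, note in options:
    print(f"{layout:30s} drives={drives}  usable={usable:>3} GB  {note}")
```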

Similar Messages

  • Error when mirroring two striped raid sets

    I have an external 5 drive enclosure (sonnet fusion 500P) connected to my mac pro via serial host adapter (sonnet tempo SATA E4P).
    While attempting to mirror two striped raid sets, or vice versa, I get a "disk utility internal error". I am able to stripe or mirror no problem, it is only when I try to mirror the two striped sets that the error appears and makes me restart disk utility.
    Any ideas what is causing this error?

    I had the same problem. I could not get the Disk Utility application to create a RAID 10, which is created by striping two mirrors. I was able to figure out how to use diskutil to configure a RAID 10. Here are the steps that I used:
    1. Open Terminal. Use the command "diskutil list".
    2. Look for the disk with a * next to it and make sure that the real disk name is listed under "type name". Write down the disk identifiers (e.g. disk4) of the volumes you want to work with.
    3. Use the following diskutil command to create the mirrors.
    "diskutil createRAID mirror RAID10 HFS+ disk4 disk6"
    Replace disk4 disk6 with the disk identifiers you discovered in step 2 above. This command creates one mirror; run it twice to create two mirror sets.
    4. As each mirror is created, write down the disk number reported in the line: Creating file system on RAID volume "disk5".
    5. Use the numbers of the mirror disks, as discovered in step 4 to create the stripe of the two mirrors. Use the command:
    "diskutil createRAID stripe RAID10 HFS+ disk1 disk5"
    Substitute disk1 disk5 in the line above with the disk numbers discovered in step 4 above.
    Your RAID 10 is now completed.
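    If you want to script the same procedure, here is a rough sketch in Python (purely illustrative: it wraps the exact diskutil commands quoted in the steps above, the disk identifiers are placeholders for the ones you found in step 2, and the output parsing assumes the 'Creating file system on RAID volume "diskN"' line mentioned in step 4):

```python
# Hypothetical helper wrapping the diskutil commands quoted in the steps above.
# Disk identifiers are placeholders - substitute the ones from `diskutil list`.
import re
import subprocess

def create_mirror(name, members):
    """Create one mirror set and return the RAID device that diskutil reports."""
    result = subprocess.run(
        ["diskutil", "createRAID", "mirror", name, "HFS+", *members],
        capture_output=True, text=True, check=True,
    )
    # Step 4: diskutil prints e.g.  Creating file system on RAID volume "disk5"
    match = re.search(r'RAID volume "?(disk\d+)', result.stdout)
    if not match:
        raise RuntimeError("could not find the new RAID device in diskutil output")
    return match.group(1)

# Step 3: create the two mirror sets (placeholder member disks).
mirror_a = create_mirror("MirrorA", ["disk4", "disk6"])
mirror_b = create_mirror("MirrorB", ["disk2", "disk3"])

# Step 5: stripe the two mirror devices into the final RAID 10 volume.
subprocess.run(
    ["diskutil", "createRAID", "stripe", "RAID10", "HFS+", mirror_a, mirror_b],
    check=True,
)
```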
    Performance with this setup will be higher with a direct connect 4-bay enclosure like the FirmTek SeriTek/2eEN4 - http://firmtek.stores.yahoo.net/sata2een4.html
    However, this should also work with a 5-bay PM enclosure, just slower. Personally, I switched from the diskutil-created RAID 10 to using the HighPoint RocketRAID 2314 for RAID 10 configurations.
    http://www.amug.org/amug-web/html/amug/reviews/articles/highpoint/2314/
    The reason is that while I can create a RAID 10 with diskutil, it is complicated to build and complicated to rebuild when a failure occurs. The RocketRAID 2314 web manager makes all of this easy and rebuilds RAID 10 automatically. It just works, while using diskutil sends you to the Apple Discussion Boards every time a problem occurs. This is one of the reasons I recommend the RocketRAID 2314 over the Tempo E4P SATA host adapter.
    If I had a E4P and the 5-bay enclosure I would be happy with a 5 drive striped RAID set or purchase the RocketRAID 2314 so that I could easily support RAID 5 and 10 configurations.
    http://www.amazon.com/exec/obidos/ASIN/B000NAXGIU/arizomacinusergr
    Have fun!

  • Asm mirroring vs. hardware raid

    We are planning a new installation of Oracle 10g Standard Edition with RAC.
    What is best to use: asm mirroring or hardware raid?
    Thank you,
    Marius

    I found this link http://www.revealnet.com/newsletter-v6/0905_D.htm which has an interesting comparison.
    We have an iSCSI.
    Porzer: I'm thinking the same, but I don't have experience with Oracle and I want to hear from someone with more experience :)
    Thank you,
    Marius

  • Performance Issue on a Xraid (Raid 5) for video digitalization

    Hello Everyone
    I have a Mac Pro (2 x 3GHz, 8 GB RAM, Fibre Channel card) directly linked to an Xraid via fibre (there is no Xsan and no Xserve as controller).
    The Xraid is composed of 8 x 750 GB HDs and set up as RAID 5.
    We are working in video production/editing and it is important for us to have a good level of security for our data, but also to be able to digitize rushes directly on the RAID.
    We are working essentially in HD format with Panasonic cams, and thus with a codec that is meant to require a 100 Mb/s data rate.
    When trying to capture (Final Cut Studio 2) directly on the RAID, we get dropped frames, and when I run a little utility to test the write speed on the RAID it tells me that the rate is around 95-98 Mb/s, which probably explains the dropped frames.
    My question is: is it normal to have such a low write data rate with this Xraid configuration?
    If so, is there a way to improve it by some means?
    Thanks in advance
    Julien

    How are your 8 drives configured? That's an unusual, and certainly sub-optimal, configuration.
    RAID has three components that typically impact performance. One is the bus architecture (in this case a fiber channel/ATA hybrid), second is disk capacity (speed degrades as the array fills), and the third is the number of drives.
    You can't do much about the bus architecture, other than trust that Apple have scaled it appropriately.
    It's worth checking the disk space to see if that's impacting performance, but I suspect the drive configuration is going to be the issue.
    Since you're running 8 drives I'm guessing you're running two 4-drive arrays.
    The optimal performance for any array is achieved with the largest number of drives. Since the XServe RAID controllers are designed for 7 drives each, you're essentially running each array at 4/7ths of its potential.
    Of course, you don't gain much by switching all the drives onto a single controller since now you're limited by the single fiber channel link to the host, although whether one link/7 drives is better or worse than 2 links/4 drives each is something that will have to be tested.
    Your best solution may be to expand the array to 14 drives to boost the disk throughput.

  • Mirrored Disk Set Raid 1 Question

    Hello,
    I set up a mirrored drive on my test xserve and it seems to flow nicely. I have been thinking about doing this for my production servers like my email server and file servers. Has anyone had any bad experiences using the software raid in a production environment? If it is just there as a feature to list on the web, I will not do it. If it is a nice and solid feature, I will move forward with it.
    Any opinions would be helpful to me.
    Cheers.

    Virtually ALL my production servers use RAID 1 mirrors.
    There's no issue with it as far as reliability is concerned. The write overhead is more than compensated for by the ability to sleep at night.

  • Raid Performance and Rebuild Issues

    Rebuilding a Raid array
    What happens when you have a Raid array and one (or more) disk(s) fail?
    First let's consider the work-flow impact of using a Raid array or not. You may want to refresh your memory about Raids by reading Adobe Forums: To RAID or not to RAID, that is the... again.
    Sustained transfer rates are a major factor in determining how 'snappy' your editing experience will be when editing multiple tracks. For single-track editing most modern disks are fast enough, but when editing complex codecs like AVCHD, DSLR, RED or EPIC, when using uncompressed or AVC-Intra 100 Mbps codecs, or when using multi-cam or multiple tracks, the sustained transfer speed can quickly become a bottleneck and limit the 'snappy' feeling during editing.
    For that reason many use raid arrays to remove that bottleneck from their systems, but this also raises the question:
    What happens when one or more of my disks fail?
    Actually, it is simple. Single disks or single level striped arrays will lose all data. And that means that you have to replace the failed disk and then restore the lost data from a backup before you can continue your editing. This situation can become extremely bothersome if you consider the following scenario:
    At 09:00 you start editing and you finish editing by 17:00 and have a planned backup scheduled at 21:00, like you do every day. At 18:30 one of your disks fails, before your backup has been made. All your work from that day is lost, including your auto-save files, so a complete day of editing is irretrievably lost. You only have the backup from the previous day to restore your data, but that can not be done before you have installed a new disk.
    This kind of scenario is not unheard of and even worse, this usually happens at the most inconvenient time, like on Saturday afternoon before a long weekend and you can only buy a new disk on Tuesday...(sigh).
    That is the reason many opt for a mirrored or parity array, despite the much higher cost (dedicated raid controller, extra disks and lower performance than a striped array). They buy safety, peace-of-mind and a more efficient work-flow.
    Consider the same scenario as above and again one disk fails.  No worry, be happy!! No data lost at all and you could continue editing, making the last changes of the day. Your planned backup will proceed as scheduled and the next morning you can continue editing, after having the failed disk replaced. All your auto-save files are intact as well.
    The chances of two disks failing simultaneously are extremely slim, but if cost is no object and safety is everything, some consider using a raid6 array to cover that eventuality. See the article quoted at the top.
    Rebuilding data after a disk failure
    In the case of a single disk or striped arrays, you have to use your backup to rebuild your data. If the backup is not current, you lose everything you did after your last backup.
    In the case of a mirrored array, the raid controller will write all data on the mirror to the newly installed disk. Consider it a disk copy from the mirror to the new disk. This is a fast way to get back to full speed. No need to get out your (possibly older) backup and restore the data. Since the controller does this in the background, you can continue working on your time-line.
    In the case of parity raids (3/5/6) one has to make a distinction between distributed parity raids (5/6) and dedicated parity raid (3).
    Dedicated parity, raid3
    If a disk fails, the data can be rebuilt by reading all remaining disks (all but the failed one) and writing the rebuilt data only to the newly replaced disk. So writing to a single disk is enough to rebuild the array. There are actually two possibilities that can impact the rebuild of a degraded array. If the dedicated parity drive failed, the rebuilding process is a matter of recalculating the parity info (relatively easy) by reading all remaining data and writing the parity to the new dedicated disk. If a data disk failed, then the data needs to be rebuilt, based on the remaining data and the parity, and this is the most time-consuming part of rebuilding a degraded array.
    Distributed parity, raid5 or raid6
    If a disk fails, the data can be rebuilt by reading all remaining disks (all but the failed one), rebuilding the data, recalculating the parity information and writing the data and parity information to the replacement disk. This is always time-consuming.
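    To make the parity idea concrete, here is a minimal toy sketch (illustrative only, not how any particular controller is implemented) showing how single-parity XOR lets the contents of a failed disk be recomputed from the surviving disks:

```python
# Toy single-parity example: parity is the XOR of the data blocks, so any one
# missing block can be recomputed by XOR-ing everything that is still readable.
from functools import reduce

def xor_blocks(blocks):
    """XOR equally-sized blocks byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks of one stripe, one per data disk
parity = xor_blocks(data)            # what the parity disk stores for this stripe

# Suppose the disk holding b"BBBB" fails: rebuild it from the rest plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == b"BBBB"
print("rebuilt block:", rebuilt)
```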
    The impact of 'hot-spares' and other considerations
    When an array is protected by a hot spare, if a disk drive in that array fails the hot spare is automatically incorporated into the array and takes over for the failed drive. When an array is not protected by a hot spare, if a disk drive in that array fails, remove and replace the failed disk drive. The controller detects the new disk drive and begins to rebuild the array.
    If you have hot-swappable drive bays, you do not need to shut down the PC, you can simply slide out the failed drive and replace it with a new disk. Remember, when a drive has failed and the raid is running in 'degraded' mode, there is no further protection against data loss, so it is imperative that you replace the failed disk at the earliest moment and rebuild the array to a 'healthy' state.
    Rebuilding a 'degraded' array can be done automatically or manually, depending on the controller in use and often you can set the priority of the rebuilding process higher or lower, depending on the need to continue regular work versus the speed required to repair the array to its 'healthy' status.
    What are the performance gains to be expected from a raid and how long will a rebuild take?
    The most important column in the table below is the sustained transfer rate. It is indicative and no guarantee that your raid will achieve exactly the same results. That depends on the controller, the on-board cache and the disks in use. The more tracks you use in your editing, the higher the resolution you use, the more complex your codec, the more you will need a high sustained transfer rate and that means more disks in the array.
    Sidebar: While testing a new time-line for the PPBM6 benchmark, using a large variety of source material, including RED and EPIC 4K, 4:2:2 MXF, XDCAM HD and the like, the required sustained transfer rate for simple playback of a pre-rendered time-line was already over 300 MB/s, even with 1/4 resolution playback, because of the 4:4:4:4 full quality deBayering of the 4K material.
    Final thoughts
    With the increasing popularity of file-based formats, the importance of backups of your media can not be stressed enough. In the past one always had the original tape if disaster struck, but no longer. You need regular backups of your media and projects. With single disks and (R)aid0 you take the risk of complete data loss, because of the lack of redundancy. Backups cost extra disks and extra time to create and restore in case of disk failure.
    The need for backups in case of mirrored raids is far less, since there is complete redundancy. Sure, mirrored raids require double the number of disks but you save on the number of backup disks and you save time to create and restore backups.
    In the case of parity raids, the need for backups is greater than with mirrored arrays, but less than with single disks or striped arrays, and in the case of 'hot-spares' the need for backups is further reduced. Initially, a parity array may look like a costly endeavor. The raid controller and the number of disks make it expensive, but consider what you get: more speed, more storage space, easier administration, fewer backups required, less time for those backups, and continued working in case of a drive failure, even though somewhat sluggish. The cost, and the peace-of-mind it brings, is often worth more than continuing with single disks or striped arrays.

    Raid3 is better suited for video editing work, because it is more efficient when using large files, as clips usually are. Raid5 is better suited in high I/O environments, where lots of small files need to be accessed all the time, like news sites, webshops and the like. Raid3 will usually have a better rebuild time than raid5.
    But, and there is always a but, raid3 requires an Areca controller. LSI and other controller brands do not support raid3. And Areca is not exactly cheap...
    Keep in mind that a single disk shows declining performance when the fill rate increases. See the example below:
    A Raid3 or Raid30 will not show that behavior. The performance remains nearly constant even if fill rates go up:
    Note that both charts were created with Samsung Spinpoint F1 disks, an older and slower generation of disks and with an older generation Areca ARC-1680iX-12.

  • Using RAID 1 mirror for backups

    I am considering setting up a RAID 1 volume (1 primary and 1 secondary disk) and using the secondary mirror as the source for archive to a file-server. This should give me a perfect snapshot in time of my volume and the read access from the volume would not affect system performance too much. The strategy I have in mind is to remove the mirror from the RAID 1 volume, mount it as a normal volume and make a backup from it (using EMC Retrospect) to a file server over our network. Once the backup is done, I would unmount the mirror and reconnect it to the primary disk to recreate the RAID volume.
    My questions are:
    1) Is this possible with OS X's built-in RAID functionality?
    2) Am I better off with using SoftRAID 3?
    3) Will the rebuild of the RAID volume be as if it was from scratch (that is, would it take too long for a 150GB volume)?
    4) If I have two 150GB disks (SATA), can I make the entire disk one RAID volume and still boot from it?
    5) If the answer to Q4 is "no", with OS X's built-in RAID, can I make a RAID volume out of partitions from two disks? In other words, am I forced to use whole disks as RAID members in OS X?
    Your help in this would be very much appreciated.
    Mac Pro   Mac OS X (10.4.7)  

    Using mirrored RAID is NOT my sole backup method; it is part of it. The idea is to back up a secondary mirror to a tape- or disk-based backup system on our file server using EMC Retrospect. This would provide recovery from deleted or modified files.
    The need to unmount a mirror for backup comes from my desire to have perfect snapshots-in-time for backup. Backing up a live filesystem can lead to inconsistent states in the backup. For example, backing up 50GB over a network can take several hours. Suppose you are writing an article that consists of several files, and a draft of file #1 has already been copied to the backup. A little later, while editing file #2, you decide to move some text from it to file #1. After the move, file #2 is backed up with the moved text removed from it. The end result in the backup is that the text is in neither file #1 nor file #2.
    Backing up live file-systems has this inherent danger. One way of avoiding it is to create an instant snapshot of the entire file system on a second drive and back up from that drive. The mirrored secondary drive in the RAID system would have this snapshot, continually updated. It would have to be disconnected from the RAID set before the backup from it is started in order to preserve the snapshot's consistency.

  • RAID Mirror failed - help interpreting error code?

    Hi all - I run the primary disk mirrored on my Dual 2.0 Xserve (10.4.7) using Apple's software RAID. No problems for two years, but this a.m. I found a warning on System Monitor showing my RAID as degraded - one disk had dropped out of the set. All functions continued forward without interruption.
    Here's all the system log has to say:
    Aug 12 18:29:34 mail kernel[0]: AppleRAID::completeRAIDRequest - error 0xe00002ca detected for set "Xserver80G" (9477A440-E50F-11D9-B65B-000D939C5BEC), member E612F8B0-3082-4222-A9FE-B58D35CDA3B8, set byte offset = 907558912.
    Aug 12 18:32:39 mail kernel[0]: AppleRAID::recover() member E612F8B0-3082-4222-A9FE-B58D35CDA3B8 from set "Xserver80G" (9477A440-E50F-11D9-B65B-000D939C5BEC) has been marked offline.
    Aug 12 18:32:39 mail kernel[0]: AppleRAID::restartSet - restarting set "Xserver80G" (9477A440-E50F-11D9-B65B-000D939C5BEC).
    The System Monitor shows no pre-failure warnings on the HD's themselves, and I'm assuming there was an error in the mirroring, and the RAID software gracefully dropped the offending volume.
    My questions are: Any help interpreting the error code so I can narrow down the likely cause, if any? Any ideas what causes this kind of error? Is it likely to recur?
    Any help much appreciated. I'll be rebuilding the set tomorrow after working hours.

    The RAID set got out of sync. You might see this from time to time if you use Apple's raid driver.
    Take a look at softraid.com for a better driver.

  • Which hard drive to choose when a hard drive failed from mirrored raid

    This is my mirrored RAID setup: four external HDs making a total of two mirrored external HD RAID sets. In one set, an HD is out of sync and has failed. I chose auto-rebuild when I created the RAIDs, but it did not occur. Question: which HD do I choose to rebuild when I use Disk Utility? Please see the picture below. Thank you.

    Any suggestions? Or am I in the wrong discussion board?

  • To RAID or not to RAID, that is the question

    People often ask: Should I raid my disks?
    The question is simple, unfortunately the answer is not. So here I'm going to give you another guide to help you decide when a raid array is advantageous and how to go about it. Notice that this guide also applies to SSDs, with the exception of the parts about mechanical failure.
     What is a RAID?
    RAID is the acronym for "Redundant Array of Inexpensive Disks". The concept originated at the University of California, Berkeley in 1987 and was intended to create large storage capacity with smaller disks, without the need for the very reliable disks that were very expensive at that time, often tenfold the price of smaller disks. Today prices of hard disks have fallen so much that it often is more attractive to buy a single 1 TB disk than two 500 GB disks. That is the reason that today RAID is often described as "Redundant Array of Independent Disks".
    The idea behind RAID is to have a number of disks co-operate in such a way that it looks like one big disk. Note that 'Spanning' is not in any way comparable to RAID, it is just a way, like inverse partitioning, to extend the base partition to use multiple disks, without changing the method of reading and writing to that extended partition.
     Why use a RAID?
    Now, with today's lower disk prices, why would a video editor consider a raid array? There are two reasons:
    1. Redundancy (or security)
    2. Performance
    Notice that it can be a combination of both reasons, it is not an 'either/or' reason.
     Does a video editor need RAID?
    No, if the above two reasons, redundancy and performance, are not relevant. Yes, if either or both reasons are relevant.
    Re 1. Redundancy
    Every mechanical disk will eventually fail, sometimes on the first day of use, sometimes only after several years of usage. When that happens, all data on that disk are lost and the only solution is to get a new disk and recreate the data from a backup (if you have one) or through tedious and time-consuming work. If that does not bother you and you can spare the time to recreate the data that were lost, then redundancy is not an issue for you. Keep in mind that disk failures often occur at inconvenient moments, on a weekend when the shops are closed and you can't get a replacement disk, or when you have a tight deadline.
    Re 2. Performance
    Opponents of RAID will often say that any modern disk is fast enough for video editing and they are right, but only to a certain extent. As fill rates of disks go up, performance goes down, sometimes by 50%. As the number of disk activities goes up, like accessing (reading or writing) the pagefile, media cache, previews, media, project file and output file, performance goes down the drain. The more tracks you have in your project, the more strain is put on your disk. 10 tracks require 10 times the bandwidth of a single track. The more applications you have open, the more your pagefile is used. This is especially apparent on systems with limited memory.
    The following chart shows how fill rates on a single disk will impact performance:
    Remember that I said previously the idea behind RAID is to have a number of disks co-operate in such a way that it looks like one big disk. That means a RAID will not fill up as fast as a single disk and not experience the same performance degradation.
    RAID basics
     Now that we have established the reasons why people may consider RAID, let's have a look at some of the basics.
    Single or Multiple? 
    There are three methods to configure a RAID array: mirroring, striping and parity check. These are called levels and levels are subdivided in single or multiple levels, depending on the method used. A single level RAID0 is striping only and a multiple level RAID15 is a combination of mirroring (1) and parity check (5). Multiple levels are designated by combining two single levels, like a multiple RAID10, which is a combination of single level RAID0 with a single level RAID1.
    Hardware or Software? 
    The difference is quite simple: hardware RAID controllers have their own processor and usually their own cache. Software RAID controllers use the CPU and the RAM on the motherboard. Hardware controllers are faster but also more expensive. For RAID levels without parity check, like Raid0, Raid1 and Raid10, software controllers are quite good with a fast PC.
    The common Promise and Highpoint cards are all software controllers that (mis)use the CPU and RAM memory. Real hardware RAID controllers all use their own IOP (I/O Processor) and cache (ever wondered why these hardware controllers are expensive?).
    There are two kinds of software RAID's. One is controlled by the BIOS/drivers (like Promise/Highpoint) and the other is solely OS dependent. The first kind can be booted from, the second one can only be accessed after the OS has started. In performance terms they do not differ significantly.
    For the technically inclined: Cluster size, Block size and Chunk size
     In short: Cluster size applies to the partition and Block or Stripe size applies to the array.
    With a cluster size of 4 KB, data are distributed across the partition in 4 KB parts. Suppose you have a 10 KB file, three full clusters will be occupied: 4 KB - 4 KB - 2 KB. The remaining 2 KB is called slackspace and can not be used by other files. With a block size (stripe) of 64 KB, data are distributed across the array disks in 64 KB parts. Suppose you have a 200 KB file, the first part of 64 KB is located on disk A, the second 64 KB is located on disk B, the third 64 KB is located on disk C and the remaining 8 KB on disk D. Here there is no slackspace, because the block size is subdivided into clusters. When working with audio/video material a large block size is faster than smaller block size. Working with smaller files a smaller block size is preferred.
    Sometimes you have an option to set 'Chunk size', depending on the controller. It is the minimal size of a data request from the controller to a disk in the array and is only useful when striping is used. Suppose you have a block size of 16 KB and you want to read a 1 MB file. The controller needs to read 64 blocks of 16 KB. With a chunk size of 32 KB the first two blocks will be read from the first disk, the next two blocks from the next disk, and so on. If the chunk size is 128 KB, the first 8 blocks will be read from the first disk, the next 8 blocks from the second disk, etcetera. Smaller chunks are advisable with smaller files, larger chunks are better for larger (audio/video) files.
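    As a small worked version of the examples above (illustrative only), the following shows on which member disk each 64 KB block of the 200 KB file lands in a 4-disk stripe:

```python
# Which disk holds each 64 KB block of a 200 KB file on a 4-disk stripe,
# matching the A/B/C/D example in the text above.
STRIPE_KB, N_DISKS, FILE_KB = 64, 4, 200

n_blocks = -(-FILE_KB // STRIPE_KB)            # ceiling division -> 4 blocks
for b in range(n_blocks):
    start = b * STRIPE_KB
    size = min(STRIPE_KB, FILE_KB - start)     # the last block is only 8 KB
    disk = chr(ord("A") + (b % N_DISKS))       # round-robin across disks A..D
    print(f"block {b}: {size:>2} KB -> disk {disk}")
# Prints 64 KB on A, B and C and the remaining 8 KB on D; a larger stripe size
# means fewer, bigger requests, which is why big audio/video files prefer it.
```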
    RAID Levels
     For a full explanation of various RAID levels, look here: http://www.acnc.com/04_01_00/html
    What are the benefits of each RAID level for video editing and what are the risks and benefits of each level to help you achieve better redundancy and/or better performance? I will try to summarize them below.
    RAID0
    The Band-AID of RAID. There is no redundancy! The risk of losing all data is a multiple of the number of disks in the array. A 2-disk array carries twice the risk of a single disk, an X-disk array carries X times the risk of losing it all.
    A RAID0 is perfectly OK for data that you will not worry about if you lose them. Like pagefile, media cache, previews or rendered files. It may be a hassle if you have media files on it, because it requires recapturing, but not the end-of-the-world. It will be disastrous for project files.
    Performance-wise, a RAID0 is almost X times as fast as a single disk, X being the number of disks in the array.
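    A hedged sketch of those two multipliers: assuming some per-disk annual failure probability (the 3% below is purely an assumed figure for illustration), the chance of losing a RAID0 grows roughly linearly with the number of disks, while sequential throughput scales up with them:

```python
# Illustration of the "X times the risk / X times the speed" point for RAID0.
# The 3% annual failure rate per disk is an assumption, not a measured value.
p_disk = 0.03

for n in (1, 2, 4, 8):
    p_array = 1 - (1 - p_disk) ** n    # any single failure loses the whole array
    print(f"{n} disk(s): ~{p_array:.1%} chance per year of losing the array, "
          f"~{n}x single-disk sequential throughput")
```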
    RAID1
     The RAID level for the paranoid. It gives no performance gain whatsoever. It gives you redundancy, at the cost of a disk. If you are meticulous about backups and make them all the time, RAID1 may be a better solution, because you can never forget to make a backup, you can restore instantly. Remember backups require a disk as well. This RAID1 level can only be advised for the C drive IMO if you do not have any trust in the reliability of modern-day disks. It is of no use for video editing.
    RAID3
    The RAID level for video editors. There is redundancy! There is only a small performance hit when rebuilding an array after a disk failure, thanks to the dedicated parity disk. There is quite a performance gain achievable, but the drawback is that it requires a hardware controller from Areca. You could do worse, but apart from it being the Rolls-Royce amongst the hardware controllers, it is expensive like the car.
    Performance-wise it will achieve around 85% x (X-1) on reads and 60% x (X-1) on writes relative to a single disk, with X being the number of disks in the array. So with a 6-disk array in RAID3, you get around 0.85 x (6-1) = 425% of the performance of a single disk on reads and 300% on writes.
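    The factors quoted above as a tiny calculator (the 100 MB/s single-disk figure is just an assumed example, not a measurement):

```python
# RAID3 estimate using the ~85% x (X-1) read and ~60% x (X-1) write factors above.
def raid3_estimate(n_disks, single_disk_mb_s=100):   # 100 MB/s is an assumption
    reads = 0.85 * (n_disks - 1) * single_disk_mb_s
    writes = 0.60 * (n_disks - 1) * single_disk_mb_s
    return reads, writes

r, w = raid3_estimate(6)
print(f"6-disk RAID3: ~{r:.0f} MB/s reads, ~{w:.0f} MB/s writes "
      f"(about 425% / 300% of one disk)")
```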
    RAID5 & RAID6
     The RAID level for non-video applications with distributed parity. This makes for a somewhat severe hit in performance in case of a disk failure. The double parity in RAID6 makes it ideal for NAS applications.
    The performance gain is slightly lower than with a RAID3. RAID6 requires a dedicated hardware controller, RAID5 can be run on a software controller but the CPU overhead negates to a large extent the performance gain.
    RAID10
     The RAID level for paranoids in a hurry. It delivers the same redundancy as RAID 1, but since it is a multilevel RAID, combined with a RAID0, delivers twice the performance of a single disk at four times the cost, apart from the controller. The main advantage is that you can have two disk failures at the same time without losing data, but what are the chances of that happening?
    RAID30, 50 & 60
    Just striped arrays of RAID 3, 5 or 6, which double the speed while keeping redundancy at the same level.
    EXTRAS
    RAID level 0 is striping, RAID level 1 is mirroring and RAID levels 3, 5 & 6 are parity check methods. For parity check methods, dedicated controllers offer the possibility of defining a hot-spare disk. A hot-spare disk is an extra disk that does not belong to the array, but is instantly available to take over from a failed disk in the array. Suppose you have a 6-disk RAID3 array with a single hot-spare disk and assume one disk fails. What happens? The data on the failed disk can be reconstructed in the background onto the hot-spare, while you keep working with negligible impact on performance. In mere minutes your system is back at the performance level you were at before the disk failure. Sometime later you take out the failed drive, replace it with a new drive and define that as the new hot-spare.
    As stated earlier, dedicated hardware controllers use their own IOP and their own cache instead of using the memory on the mobo. The larger the cache on the controller, the better the performance, but the main benefits of cache memory are when handling random R+W activities. For sequential activities, like with video editing it does not pay to use more than 2 GB of cache maximum.
    REDUNDANCY(or security)
    Not using RAID entails the risk of a drive failing and losing all data. The same applies to using RAID0 (or better said AID0), only multiplied by the number of disks in the array.
    RAID1 or 10 overcomes that risk by offering a mirror, an instant backup in case of failure at high cost.
    RAID3, 5 or 6 offers protection for disk failure by reconstructing the lost data in the background (1 disk for RAID3 & 5, 2 disks for RAID6) while continuing your work. This is even enhanced by the use of hot-spares (a double assurance).
    PERFORMANCE
    RAID0 offers the best performance increase over a single disk, followed by RAID3, then RAID5 and finally RAID6. RAID1 does not offer any performance increase.
    Hardware RAID controllers offer the best performance and the best options (like adjustable block/stripe size and hot-spares), but they are costly.
     SUMMARY
    If you only have 3 or 4 disks in total, forget about RAID. Set them up as individual disks or, the better alternative, get more disks for better redundancy and better performance. What does it cost today to buy an extra disk, compared to the downtime you have when a single disk fails?
    If you have room for at least 4 or more disks, apart from the OS disk, consider a RAID3 if you have an Areca controller, otherwise consider a RAID5.
    If you have even more disks, consider a multilevel array by striping a parity check array to form a RAID30, 50 or 60.
    If you can afford the investment get an Areca controller with battery backup module (BBM) and 2 GB of cache. Avoid as much as possible the use of software raids, especially under Windows if you can.
    RAID, if properly configured will give you added redundancy (or security) to protect you from disk failure while you can continue working and will give you increased performance.
    Look carefully at this chart to see what a properly configured RAID can do for performance and compare it to the earlier single-disk chart to see the performance difference, while taking into consideration that you can have one disk (in each array) fail at the same time without data loss:
    Hope this helps in deciding whether RAID is worthwhile for you.
    WARNING: If you have a power outage without a UPS, all bets are off.
    A power outage can destroy the contents of all your disks if you don't have a proper UPS. A BBM may not be sufficient to help in that case.

    Harm,
    thanks for your comment.
    Your understanding  was absolutely right.
    Sorry, my mistake, it's a QNAP 639 PRO, populated with five 1TB drives, one of which is empty.
    So, as I understand it, in my configuration you suggest NOT to use RAID 0. I'm not willing to have more drives in my workstation, because when my projects are finished I archive to the QNAP or to another external drive.
    My only intention is to have as much speed and as much performance as possible while developing a project.
    BTW, I also use the QNAP as a media center in combination with a Sony PS3 to play the encoded files.
    For my final understanding:
    C:  i understand
    D: i understand
    E and F: does it mean that when I create a project on E, all my captured and project-used MPEG files should be located on F? Or which media on F do you mean?
    Following your suggestions I want to rebuild Harm's Best Vista64-Benchmark computer to reach maximum speed and performance. Can I in general use those hardware components (except for so many HD drives and except for the Areca raid controller) in my drive configuration C to F? Or would you suggest some changes for my situation?

  • I have some questions regarding setting up a software RAID 0 on a Mac Pro

    I have some questions regarding setting up a software RAID 0 on a Mac pro (early 2009).
    These questions might seem stupid to many of you, but, as my last, in fact my one and only, computer before the Mac Pro was a IICX/4/80 running System 7.5, I am a complete novice regarding this particular matter.
    A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
    The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
    Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, Silverfast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5, when it becomes available, and I will probably change over to Lightroom 3, which is currently in Beta, because I have had problems with Aperture, which, until recent upgrades (HD, RAM & graphics card) to my system, would not even load images for print. All I had was a blank preview page, and a constant, frozen "loading" message - the symbol underneath remained static, instead of revolving!
    It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
    My intention is to buy another 3 VR300s to form a 4-drive RAID 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them, and currently they are out of stock at all but the more expensive UK resellers.
    I should be most grateful to receive advice regarding the following questions:
    QUESTION 1:
    I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
    If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
    Or would I have to reinstall the OS, applications and software again?
    I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
    QUESTION 2:
    I understand that the failure of just one drive will result in all the data in a Raid 0 array being lost.
    Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
    Even so, it would be worth the risk to gain the optimum performance provided by RAID 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
    QUESTION 3:
    Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
    Or would this not be of any use anyway, in the event of a single drive failure?
    QUESTION 4:
    Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
    QUESTION 5:
    If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th. drive in the event of a single drive failure in the 3 drive RAID 0 array which had been selected for startup?
    Apologies if these seem stupid questions, but I am trying to determine the best option without foregoing optimum performance.

    Well said.
    Steps to set up RAID
    Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.
    Recovering from a hard drive failure on a mirrored array
    1. Open Disk Utility in (/Applications/Utilities).
    2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
    3. If an issue with the disk is indicated, click Rebuild.
    4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
    5. Repeat steps 1 and 2.
    6. Drag the icon of the new disk on top of that of the removed disk.
    7. Click Rebuild.
    http://support.apple.com/kb/HT2559
    Drive A + B = VOLUME ONE
    Drive C + D = VOLUME TWO
    What you put on those volumes is of course up to you and easy to do.
    A system really only needs to be backed up "as needed" like before you add or update or install anything.
    /Users can be backed up hourly, daily, weekly schedule
    Media files as needed.
    Things that hurt performance:
    Page outs
    Spotlight - disable this for boot drive and 'scratch'
    SCRATCH: Temporary space; erased between projects and steps.
    http://en.wikipedia.org/wiki/Standard_RAID_levels
    (normally I'd link to Wikipedia but I can't load right now)
    Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
    To help understand and configure your 2009 Nehalem Mac Pro:
    http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
    http://macperformanceguide.com/
    http://www.macgurus.com/guides/storageaccelguide.php
    http://www.macintouch.com/readerreports/harddrives/index.html
    http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
    http://kb2.adobe.com/cps/404/kb404440.html

  • Windows 8 64-bit, failed installations and RAID 1 resync on HP ENVY h8-1520z

    I recently purchased an HP Envy h8-1520z with a single SSD and 8 GB RAM running Windows 8 Pro 64-bit. I installed 2 Western Digital 500 GB WDC WD5000AAKX-001CA0 HDDs on SATA ports 5 & 6 and configured them as RAID 1 in the BIOS. I have started installing apps on the RAID 1 array (E:) since the SSD is only 256 GB.
    While installing software, some installs go fine, but often the installation hangs with no errors indicated and I have to kill the installer task through Task Manager. I usually follow that with a system shutdown or restart, and almost every time that results in a resync of the RAID array. Sometimes the shutdown or restart is quick, and other times it takes several minutes with intermittent disk activity as indicated by the drive light. I have sometimes given up after 10 minutes or so and powered down by holding down the power button. I can understand a resync after the forced power down, but it seems that all it takes to cause a resync is a failed installation, and that seems to happen a lot, even with software that explicitly says it is Windows 8 compatible. Sometimes the applications run fine even after an apparently failed installation and other times not so much.
    Has anybody else seen this behavior or have any suggestions?  I’d like to get this sorted out before I load any more software.

    Hi,
    Did you check each hard drive for straps? There should not be any straps installed on the hard drive. These drives do have strapping posts.
    Did you use the AMD RaidExpert to create the array volume?  Did you wait until the RAID migration was complete before using the array volume?
    Check the AMD RaidExpert for errors. It's possible that you have a bad hard drive(s).
    Many applications want to be installed on the C drive.  Your SSD is 256GB so you should have plenty of space for most applications and the application performance will be improved.  Use the RAID 1 array to keep and store your data. The mirroring process of RAID 1 will protect it in case one hard drive fails.
    You might want to consider removing all of the data, etc. from the array and using the AMD RaidExpert to delete the array volume. I would then run hard drive diagnostics. If the hard drives are free from errors, then use Disk Management to separately reformat each hard drive. Now go back to the AMD RaidExpert and create your new array volume.
    Be sure to review the software and bios updates for your PC.
    HP DV9700, t9300, Nvidia 8600, 4GB, Crucial C300 128GB SSD
    HP Photosmart Premium C309G, HP Photosmart 6520
    HP Touchpad, HP Chromebook 11
    Custom i7-4770k,Z-87, 8GB, Vertex 3 SSD, Samsung EVO SSD, Corsair HX650,GTX 760
    Custom i7-4790k,Z-97, 16GB, Vertex 3 SSD, Plextor M.2 SSD, Samsung EVO SSD, Corsair HX650, GTX 660TI
    Windows 7/8 UEFI/Legacy mode, MBR/GPT

  • Adding a raid to an 8-core

    Hi everyone!
    Was wondering if anyone could provide me with helpful links to info regarding adding a RAID to my 8-core. Right now all I have is the stock 250GB HD. Would like to start out and make a 2-drive raid with a couple of raptor 150s.
    I have no idea how to do this. Could someone please point out the way for me? I would like to make the raid the boot drive (300gb) which I assume would be enough for a boot drive, or am I being too stingy? I was hoping for the speed increase the raptors seem to give.
    I would also eventually like to add another 2 drive raid for storage/scratch/recording drive after I set up the boot drive and make sure it runs properly.
    Also, any recommendations for a back-up drive?
    Should I get a USB or firewire?
    Any brands I should avoid?
    Sams has a really cheap western digital WD My Book Hard Drive - 320GB for $119.88 - see link below:
    http://www.samsclub.com/shopping/navigate.do?dest=5&item=362125
    Will that do OK? Do you select a backup drive that is slightly higher in capacity than the drive you back up?
    I know these are fundamental / rudimentary questions, but I am ignorant here, any help is greatly appreciated.
    THANKS!!!
    8-core Mac PRO   Mac OS X (10.4.9)   ATI Radeon X1900 XT - 23" cinema display

    I was suggesting launching Disk Utility, then going to HELP menu, "Disk Utility Help" is the only option there, and enter "RAID" in the Help Viewer application. Which I think can pull stuff down from Apple off the net or finds the pdf help file or manual for the program (not sure where it comes from myself!)
    Help Viewer -> search "RAID" -> How to Create a RAID is a good place to begin.
    And yes, I keep my Raptor (even in RAID) to a minimum of files (OS and applications can be anywhere from 20GB up to 80GB).
    Some things are a matter of trial and error and testing, and will change and evolve over time, that is for sure.
    One of the easier ways to get into RAID is to test it out. Back up what you have and be sure you have everything. Then set up a RAID with 2-3 drives. Create two slices - meaning partition the drives first into equal-sized areas, then combine "volume 1" from each and create a striped volume. Test it out. Do some work and copy some files.
    You can get almost 200-235MB/sec with three WD RE2s using the outer 1/3. Do you need speed and performance, a minimum for video editing? Or quiet and multiple independent drives for audio? JBOD can work best for some audio applications.
    The manual and demo for http://www.softraid.com is a good tutorial (quick start and manual, now supports 10.4.10 booting on the Mac Pro). It costs $149, but it lets you do things like create and delete RAIDs and volumes without reformatting entire drives. It is also more reliable and robust (it alerts immediately if there is an I/O error) and has much better mirroring. Apple RAID has evolved and changed over the years, even from two years ago when Tiger (10.4.0) came out, which saw a 10% boost over 10.3.9 "Panther". The SoftRAID guide will help walk you through what to do, the how and why.
    Some articles and reference resources on SATA and RAID:
    MacGurus FAQ Help are threads, entries on RAID, SATA Drives, SATA Controllers and whatnot. RAID Reference
    RAID Tutorial
    RAID Terminology
    Barefeats RAID (dated but handy)
    Single vs RAID Boot Drive?
    PCGuide (old): Why Use RAID?
    PortMultiplicationGuide
    storageaccelguide
    Optimize Photoshop pdf: http://homepage.mac.com/boots911/.Public/PhotoshopAccelerationBasics2.3.pdf
    http://docs.info.apple.com/article.html?artnum=106594
    Mac Pro 2GHz 4GB 10K Raptor Cinema HD   Mac OS X (10.4.10)   WD RE RAID Aaxeon FW800 PCIe Sonnet Tempo APC RS1500 Vista
    PS: I used to spend as much time around Mt Evans as possible, after work and on weekends decades ago!

  • Data Warehouse RAID Configuration

    Hi,
    We are moving two of our databases from Oracle 9i to 10g.
    The first one is our Staging database.
    The second is the datawarehouse database.
    Most of the activity in the Staging database is writing data.
    Most of the activity in the datawarehouse database is reading data
    (the only data that is written arrives by transportable tablespace from the Staging
    database to the datawarehouse database).
    The size of each one is about 1 TB.
    Our backup and recovery strategy is based on a cold backup every weekend.
    1. Could someone please suggest a RAID level for each of those databases?
    2. Could someone advise on using ASM for the datawarehouse?
    Regards

    RAID-Definitions
    http://linux.cudeso.be/raid.php
    RAID 0 : Striped Disk Array without Fault Tolerance
    RAID 1 : Mirroring and Duplexing
    RAID 2 : Hamming Code ECC
    RAID 3 : Parallel transfer with parity
    RAID 4 : Independent Data disks with shared Parity disk
    RAID 5 : Independent Data disks with distributed parity blocks
    RAID 6 : Independent Data disks with two independent distributed parity schemes
    RAID 7 : Optimized Asynchrony for High I/O Rates as well as High Data Transfer Rates
    RAID 10 : Very High Reliability combined with High Performance
    RAID 53 : High I/O Rates and Data Transfer Performance
    RAID 0+1 : High Data Transfer Performance
    RAID0, RAID5, RAID10 and RAID0+1 are the most commonly used RAID levels.
    RAID5 is known to have a slower write transaction rate.
    "Our backup and recovery strategy is based on a cold backup every weekend" - You understand this backup strategy means you can potentially lose up to 7 days of data, right?

  • Iozone website benchmark results on Xserve RAID

    Hi,
    I'd like to try and find out how the Apple RAID was configured for the
    iozone benchmark. The results of the benchmark are here:
    http://www.iozone.org/src/current/Xserver.xls
    I asked the question directly to iozone and the answer was that the
    benchmark was run in the Apple booth at some tradeshow/conference.
    The booth guys let the iozone guys run the benchmark there.
    The write results in the iozone benchmark are about 2.5 times better than
    the write results we are getting so I'd like to try and figure out how it was
    configured.
    I'm going to be using the Apple RAID in a write-intensive database
    application running MySQL. Here's some info on my setup:
    My system is a Sun v40 4xOpteron with 8Gig of RAM. I'm running
    Solaris 10 release 03/05.
    The apple RAID has all 14 disks and is fibre connected. The disks
    are set up so that the mirroring is done in the Apple RAID. This produces
    a bunch of LUNs. These are all striped together at the OS level, so
    the setup is RAID 10. Note that this version of the OS only supports
    a LUN size of 2TB max.
    Thanks for any help,
    Mike
      Mac OS X (10.4.6)   Xserve RAID

    The last tab in the spreadsheet tells you how the RAID was configured, namely RAID 5 128 stripes.
    From what I recall when I ran iozone against one of my XServe RAIDs, their figures came out a little higher than mine, but not dramatically. I'll see if I can find the data dumps for comparison.
    In the meantime I would look at how you're configuring the RAID. Publishing a series of mirrors and using striping at the host level seems less than ideal. You're forcing the XServe RAID to write the data twice on each controller, as well as requiring the OS to manage which LUNs it's writing to.
    (remember, RAID 1 write performance is lower than other RAID levels)
    You would be better off running either RAID 0+1 (striping on the XServe RAID with each side mirrored by the server), or RAID 5, leaving everything up to the XServe RAID - the XServe RAID's performance at RAID 5 is not significantly lower than RAID 0 and it eliminates any overhead on the server side.
    If it wasn't for the volume size limitation in Solaris I would recommend RAID 50 over 10 (RAID 5 on the XServe RAID, striped on the host) but that would likely exceed the 2TB volume limit.
    Other things to check are the write caches on the drive (use only if you're in a stable power environment).
