RAID 5 or RAID 10?

Hi,
which one is better, RAID 5 or RAID 10?
Thanks

Hi
Sorry for asking a few questions before giving you any feedback:
Do you mean using RAID 5 or RAID 10 with Oracle?
What OS will be used?
What will it be used for? Please explain what you expect it to do.
RAID can be designed to provide increased data reliability or increased I/O performance.
RAID 10 = RAID 0 + RAID 1, combining the features of both.
RAID 0 helps to increase performance by striping volume data across multiple disk drives.
RAID 1 provides disk mirroring which duplicates your data.
For RAID 5:
RAID 5 costs more for write-intensive applications than RAID 1.
RAID 5 is less outage resilient than RAID 1.
RAID 5 suffers massive performance degradation during partial outage.
RAID 5 is less architecturally flexible than RAID 1.
Correcting RAID 5 performance problems can be very expensive.
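To see why writes cost more on RAID 5 than on RAID 1, consider the classic small-write penalty: each small random write on RAID 5 must read the old data and old parity and then write the new data and new parity (four disk I/Os), while RAID 1 just writes both mirror copies (two disk I/Os). A minimal sketch in Python, an illustrative model only, not a benchmark:

    # Small-write penalty model: disk I/Os generated per user write.
    # RAID 5: read old data + read old parity + write data + write parity = 4
    # RAID 1: write to both mirror copies = 2
    PENALTY = {"RAID 1": 2, "RAID 5": 4}

    def backend_ios(user_writes, level):
        """Backend disk I/Os caused by small random user writes."""
        return user_writes * PENALTY[level]

    for level in ("RAID 1", "RAID 5"):
        print(f"{level}: 1000 user writes -> {backend_ios(1000, level)} disk I/Os")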
If your next question is purely OS (Linux) based, please post it in the Linux section of the forum: Forum Home » Technologies » Linux

Similar Messages

  • Z77a-g45 RAID problem

    I'm having big problems with this board after a BIOS update to version 2.5, which I assume also updated the Intel RST ROM to 11.5.0.1414. Every time I try to install an operating system (Windows 7 Ultimate x64 or Win Server 2008 R2 x64) on a RAID configuration (RAID 10, mirroring, etc.), I can partition the virtual RAID disk and it starts to copy all the needed files, BUT after the first reboot Windows remains stuck at the "Starting Windows" screen. I've tried the "Load driver" option in the Windows install step but with no luck: no matter what drivers I choose to load, the system refuses to load them (I've tried different driver versions, x64, x86, etc.).
    If I choose not to use the RAID, selecting "AHCI" instead of "RAID" in the UEFI BIOS, the OS installation works just fine.
    I really need a redundant config as this machine is supposed to be a file server for a small company.
    What do I need to do to solve this problem?
    PS: excuse my English - I'm not a native speaker.

    The old machine which this new one should replace worked 4+ years, 24/7, using the same software ("fake" Intel ICHxR) RAID with no problems. As the performance requirements are very low, a mirrored virtual drive is just enough; the only real advantage of an add-on RAID controller, in this particular case, would be VMware ESXi (or other virtualization platform) compatibility.
    What intrigues me more is the fact that this PC worked using a RAID 10 config for a week or so, before I made the BIOS update. Since then, I've been unable to install an OS on that machine.
    When I browse for drivers in the Windows install step, it finds a compatible one, but I receive a message stating something like I have to provide signed 32-bit or 64-bit drivers. I've tried different "F6" drivers from both the Intel and MSI sites but no luck.

  • To RAID or not to RAID, that is the question

    People often ask: Should I raid my disks?
    The question is simple, unfortunately the answer is not. So here I'm going to give you another guide to help you decide when a raid array is advantageous and how to go about it. Notice that this guide also applies to SSD's, with the exception of the parts about mechanical failure.
     What is a RAID?
     RAID is the acronym for "Redundant Array of Inexpensive Disks". The concept originated at the University of California, Berkeley in 1987 and was intended to create large storage capacity from smaller disks, without the need for the highly reliable disks of the day, which were very expensive at that time, often tenfold the price of smaller disks. Today prices of hard disks have fallen so much that it often is more attractive to buy a single 1 TB disk than two 500 GB disks. That is the reason that today RAID is often described as "Redundant Array of Independent Disks".
    The idea behind RAID is to have a number of disks co-operate in such a way that they look like one big disk. Note that 'spanning' is not in any way comparable to RAID; it is just a way, like inverse partitioning, to extend the base partition over multiple disks, without changing the method of reading and writing to that extended partition.
     Why use a RAID?
     Now, with these lower disk prices today, why would a video editor consider a raid array? There are two reasons:
    1. Redundancy (or security)
    2. Performance
    Notice that it can be a combination of both reasons, it is not an 'either/or' reason.
     Does a video editor need RAID?
    No, if the above two reasons, redundancy and performance, are not relevant. Yes, if either or both reasons are relevant.
    Re 1. Redundancy
    Every mechanical disk will eventually fail, sometimes on the first day of use, sometimes only after several years of usage. When that happens, all data on that disk are lost and the only solution is to get a new disk and recreate the data from a backup (if you have one) or through tedious and time-consuming work. If that does not bother you and you can spare the time to recreate the data that were lost, then redundancy is not an issue for you. Keep in mind that disk failures often occur at inconvenient moments, on a weekend when the shops are closed and you can't get a replacement disk, or when you have a tight deadline.
    Re 2. Performance
    Opponents of RAID will often say that any modern disk is fast enough for video editing, and they are right, but only to a certain extent. As fill rates of disks go up, performance goes down, sometimes by 50%. As the number of activities on the disk goes up, like accessing (reading or writing) the pagefile, media cache, previews, media, project file and output file, performance goes down the drain. The more tracks you have in your project, the more strain is put on your disk: 10 tracks require 10 times the bandwidth of a single track. The more applications you have open, the more your pagefile is used. This is especially apparent on systems with limited memory.
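    To get a feel for the bandwidth argument, here is a minimal Python sketch; the ~30 MB/s per-track rate is an assumption for illustration, since real bitrates vary widely by codec:

        # Rough sustained-read requirement for multi-track editing.
        # The per-track rate is an assumed figure, purely for illustration.
        MB = 1024 * 1024
        track_rate = 30 * MB              # assumption: ~30 MB/s per video track

        for tracks in (1, 4, 10):
            need = tracks * track_rate / MB
            print(f"{tracks:2d} track(s): ~{need:.0f} MB/s sustained read required")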
    The original post included a chart showing how fill rates on a single disk impact performance.
    Remember that I said previously the idea behind RAID is to have a number of disks co-operate in such a way that it looks like one big disk. That means a RAID will not fill up as fast as a single disk and not experience the same performance degradation.
    RAID basics
     Now that we have established the reasons why people may consider RAID, let's have a look at some of the basics.
    Single or Multiple? 
    There are three methods to configure a RAID array: mirroring, striping and parity check. These are called levels, and levels are subdivided into single or multiple levels, depending on the method used. A single level RAID0 is striping only, while a multiple level RAID15 is a combination of mirroring (1) and parity check (5). Multiple levels are designated by combining two single levels, like a multiple level RAID10, which is a combination of a single level RAID0 with a single level RAID1.
    Hardware or Software? 
    The difference is quite simple: hardware RAID controllers have their own processor and usually their own cache. Software RAID controllers use the CPU and the RAM on the motherboard. Hardware controllers are faster but also more expensive. For RAID levels without parity check, like RAID0, RAID1 and RAID10, software controllers are quite good on a fast PC.
    The common Promise and Highpoint cards are all software controllers that (mis)use the CPU and RAM memory. Real hardware RAID controllers all use their own IOP (I/O Processor) and cache (ever wondered why these hardware controllers are expensive?).
    There are two kinds of software RAID's. One is controlled by the BIOS/drivers (like Promise/Highpoint) and the other is solely OS dependent. The first kind can be booted from, the second one can only be accessed after the OS has started. In performance terms they do not differ significantly.
    For the technically inclined: Cluster size, Block size and Chunk size
     In short: Cluster size applies to the partition and Block or Stripe size applies to the array.
    With a cluster size of 4 KB, data are distributed across the partition in 4 KB parts. Suppose you have a 10 KB file: three clusters will be occupied, 4 KB + 4 KB + 2 KB. The remaining 2 KB is called slackspace and cannot be used by other files. With a block size (stripe) of 64 KB, data are distributed across the array disks in 64 KB parts. Suppose you have a 200 KB file: the first 64 KB part is located on disk A, the second 64 KB on disk B, the third 64 KB on disk C and the remaining 8 KB on disk D. Here there is no slackspace, because the block size is subdivided into clusters. When working with audio/video material a large block size is faster than a small one; when working with smaller files a smaller block size is preferred. A sketch of both calculations follows below.
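    Here is a minimal Python sketch of the two calculations above; the numbers mirror the 10 KB / 4 KB cluster and 200 KB / 64 KB stripe examples:

        import math

        def slack(file_kb, cluster_kb):
            """Unusable space in the last cluster occupied by a file."""
            return math.ceil(file_kb / cluster_kb) * cluster_kb - file_kb

        def stripe(file_kb, block_kb, disks):
            """Distribute a file across array disks in block_kb stripes."""
            parts, i = [], 0
            while file_kb > 0:
                part = min(block_kb, file_kb)
                parts.append((disks[i % len(disks)], part))
                file_kb -= part
                i += 1
            return parts

        print(slack(10, 4))             # -> 2 (KB of slackspace)
        print(stripe(200, 64, "ABCD"))  # -> [('A', 64), ('B', 64), ('C', 64), ('D', 8)]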
    Sometimes you have an option to set 'Chunk size', depending on the controller. It is the minimal size of a data request from the controller to a disk in the array, and it is only relevant when striping is used. Suppose you have a block size of 16 KB and you want to read a 1 MB file: the controller needs to read 64 blocks of 16 KB. With a chunk size of 32 KB, the first two blocks will be read from the first disk, the next two blocks from the next disk, and so on. If the chunk size is 128 KB, the first 8 blocks will be read from the first disk, the next 8 blocks from the second disk, etcetera. Smaller chunks are advisable with smaller files; larger chunks are better for larger (audio/video) files.
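    A companion sketch for the chunk size examples, under the same assumptions (16 KB blocks, 1 MB file, 4 disks):

        def disk_visits(file_kb, block_kb, chunk_kb, n_disks):
            """Which disk serves each run of consecutive blocks."""
            blocks = file_kb // block_kb          # 1024 // 16 = 64 blocks
            per_visit = chunk_kb // block_kb      # blocks read per disk visit
            return [(f"disk {i % n_disks}", per_visit)
                    for i, _ in enumerate(range(0, blocks, per_visit))]

        print(disk_visits(1024, 16, 32, 4)[:4])   # 2 blocks per disk visit
        print(disk_visits(1024, 16, 128, 4)[:4])  # 8 blocks per disk visit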
    RAID Levels
     For a full explanation of various RAID levels, look here: http://www.acnc.com/04_01_00/html
    What are the benefits of each RAID level for video editing and what are the risks and benefits of each level to help you achieve better redundancy and/or better performance? I will try to summarize them below.
    RAID0
     The Band AID of RAID. There is no redundancy! The risk of losing all data is a multiple of the number of disks in the array: a 2 disk array carries twice the risk of a single disk, an X disk array carries X times the risk of losing it all.
    A RAID0 is perfectly OK for data that you will not worry about losing, like the pagefile, media cache, previews or rendered files. It may be a hassle if you have media files on it, because it requires recapturing, but it is not the end of the world. It would be disastrous for project files.
    Performance wise a RAID0 is almost X times as fast as a single disk, X being the number of disks in the array.
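    As a rough illustration of that risk multiplier, a minimal sketch; the 3% annual failure rate is an assumption, real rates vary per drive model:

        # The array survives a year only if every member disk survives.
        # The 3% annual failure rate is an assumption for illustration only.
        p_disk = 0.97

        for disks in (1, 2, 4, 8):
            p_loss = 1 - p_disk ** disks
            print(f"{disks} disk RAID0: ~{p_loss * 100:.1f}% chance of losing everything per year")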
    RAID1
     The RAID level for the paranoid. It gives no performance gain whatsoever. It gives you redundancy, at the cost of a disk. Unless you are meticulous about backups and make them all the time, RAID1 may be a better solution, because you can never forget to make a backup and you can restore instantly. Remember backups require a disk as well. IMO this RAID1 level can only be advised for the C drive, if you do not have any trust in the reliability of modern-day disks. It is of no use for video editing.
    RAID3
    The RAID level for video editors. There is redundancy! There is only a small performance hit when rebuilding an array after a disk failure, due to the dedicated parity disk. There is quite a performance gain achievable, but the drawback is that it requires a hardware controller from Areca. You could do worse, but apart from being the Rolls-Royce amongst hardware controllers, it is expensive like the car.
    Performance wise it will achieve around 85% × (X-1) on reads and 60% × (X-1) on writes, relative to a single disk, with X being the number of disks in the array. So with a 6 disk array in RAID3, you get around 0.85 × (6-1) = 425% of the performance of a single disk on reads and 300% on writes.
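    That rule of thumb in a minimal Python sketch; the 85%/60% factors are the author's estimates, not measurements:

        def raid3_speedup(x):
            """Author's rule of thumb: throughput relative to one disk."""
            return 0.85 * (x - 1), 0.60 * (x - 1)

        reads, writes = raid3_speedup(6)
        print(f"6 disk RAID3: ~{reads:.0%} reads, ~{writes:.0%} writes of a single disk")
        # -> ~425% reads, ~300% writes, matching the text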
    RAID5 & RAID6
     The RAID level for non-video applications with distributed parity. This makes for a somewhat severe hit in performance in case of a disk failure. The double parity in RAID6 makes it ideal for NAS applications.
    The performance gain is slightly lower than with a RAID3. RAID6 requires a dedicated hardware controller, RAID5 can be run on a software controller but the CPU overhead negates to a large extent the performance gain.
    RAID10
     The RAID level for paranoids in a hurry. It delivers the same redundancy as RAID1, but since it is a multilevel RAID combined with RAID0, it delivers twice the performance of a single disk at four times the cost, apart from the controller. The main advantage is that you can have two disk failures at the same time without losing data, but what are the chances of that happening?
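    For the curious, here is a minimal sketch of those chances for a 4 disk RAID10; the pairing of disks into mirrors is an assumption of the model:

        import random

        def survives(n_disks=4, trials=100_000):
            """Fraction of simultaneous double failures that spare the data.
            Assumes disks (0,1) and (2,3) form the two mirror pairs."""
            ok = 0
            for _ in range(trials):
                a, b = random.sample(range(n_disks), 2)
                if a // 2 != b // 2:      # failures hit different mirror pairs
                    ok += 1
            return ok / trials

        print(f"~{survives():.0%} of double failures are survivable (analytically 2/3)")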
    RAID30, 50 & 60
     Just striped arrays of RAID 3, 5 or 6 which doubles the speed while keeping redundancy at the same level.
    EXTRAS
     RAID level 0 is striping, RAID level 1 is mirroring and RAID levels 3, 5 & 6 are parity check methods. For parity check methods, dedicated controllers offer the possibility of defining a hot-spare disk. A hot-spare disk is an extra disk that does not belong to the array, but is instantly available to take over from a failed disk in the array. Suppose you have a 6 disk RAID3 array with a single hot-spare disk and assume one disk fails. What happens? The data on the failed disk are reconstructed in the background onto the hot-spare, while you keep working with negligible impact on performance. In mere minutes your system is back at the performance level you had before the disk failure. Sometime later you take out the failed drive, replace it with a new drive and define that as the new hot-spare.
    As stated earlier, dedicated hardware controllers use their own IOP and their own cache instead of using the memory on the mobo. The larger the cache on the controller, the better the performance, but the main benefits of cache memory are when handling random R+W activities. For sequential activities, like with video editing it does not pay to use more than 2 GB of cache maximum.
    REDUNDANCY(or security)
    Not using RAID entails the risk of a drive failing and losing all data. The same applies to using RAID0 (or better said AID0), only multiplied by the number of disks in the array.
    RAID1 or 10 overcomes that risk by offering a mirror, an instant backup in case of failure at high cost.
    RAID3, 5 or 6 offers protection for disk failure by reconstructing the lost data in the background (1 disk for RAID3 & 5, 2 disks for RAID6) while continuing your work. This is even enhanced by the use of hot-spares (a double assurance).
    PERFORMANCE
     RAID0 offers the best performance increase over a single disk, followed by RAID3, then RAID5 and finally RAID6. RAID1 does not offer any performance increase.
    Hardware RAID controllers offer the best performance and the best options (like adjustable block/stripe size and hot-spares), but they are costly.
     SUMMARY
     If you only have 3 or 4 disks in total, forget about RAID. Set them up as individual disks, or the better alternative, get more disks for better redundancy and better performance. What does it cost today to buy an extra disk when compared to the downtime you have when a single disk fails?
    If you have room for at least 4 or more disks, apart from the OS disk, consider a RAID3 if you have an Areca controller, otherwise consider a RAID5.
    If you have even more disks, consider a multilevel array by striping a parity check array to form a RAID30, 50 or 60.
    If you can afford the investment, get an Areca controller with a battery backup module (BBM) and 2 GB of cache. Avoid the use of software RAIDs as much as possible, especially under Windows.
    RAID, if properly configured, will give you added redundancy (or security) to protect you from disk failure while you continue working, and will give you increased performance.
    The original post included a chart showing what a properly configured RAID can do for performance; compare it to the earlier single disk chart to see the difference, taking into consideration that one disk (in each array) can fail at the same time without data loss.
    Hope this helps in deciding whether RAID is worthwhile for you.
    WARNING: If you have a power outage without a UPS, all bets are off.
    A power outage can destroy the contents of all your disks if you don't have a proper UPS. A BBM may not be sufficient to help in that case.

    Harm,
    thanks for your comment.
    Your understanding  was absolutely right.
    Sorry, my mistake: it's a QNAP 639 PRO, populated with five 1 TB drives; one bay is empty.
    So to my understanding, in my configuration you suggest NOT to use RAID-0. I'm not willing to have more drives in my workstation, because when my projects are finished I archive to the QNAP or to another external drive.
    My only intention is to have as much speed and performance as possible while developing a project.
    BTW, I also use the QNAP as a media center in combination with a Sony PS3 to play the encoded files.
    For my final understanding:
    C: I understand
    D: I understand
    E and F: does it mean that when I create a project on E, all my captured and project-used MPEG files should be located on F? Or which media on F do you mean?
    Following your suggestions I want to rebuild Harm's Best Vista64-Benchmark comp to reach maximum speed and performance. Can I in general use those hardware components (except so many HD drives and except the Areca RAID controller) in my drive configuration C to F? Or would you suggest some changes in my situation?

  • How can I set up a Mac pro with internal Raid 5

    I have a Mac pro running Leopard server. I would like to format 3 of the drives as an internal Raid 5.
    Can this be done?
    Can it be done without an internal Raid card?
    or
    Can it be done if I install a Mac Pro Raid card?
    Thanks,
    RT

    RAID5 or RAID10 or such does require add-on software and / or add-on hardware, yes.
    And I'd tend to go toward external RAID storage here, as that is what the most common packages offer and as that tends to have the greatest expansion. This would involve a PCIe RAID controller and external storage here, though Apple does offer the Mac Pro RAID card for use with the internal drives.
    The integrated Apple software RAID has RAID0 and RAID1 capabilities. Not RAID5.
    As for any additional questions, here's [the FAQ|http://support.apple.com/kb/HT1346] for the Mac Pro RAID card, and here is [the manual|images.apple.com/server/docs/RAIDUtility_UserGuide.pdf].
    External RAID can involve Apple Xsan and Promise and related, or third-party FC SAN storage, or any of various third-party direct-attach storage (DAS) options.

  • Thinkserver RD650 & 720ix RAID - How to get all drives showing up in TDM ??

    Hi, I have just received an RD650 with a 720ix controller. The chassis I have is the one that provides a hybrid of disk form factors: at the front left of the server it has 6 x 2.5" bays, then 3 x 3.5" bays, which make up the first half of the backplane. The rest of the front of the server, and the other half of the backplane, is populated by 6 x 3.5" bays. I have 6 hard drives in total: 2 x 300GB 2.5" 15k SAS and 4 x 3TB 3.5" 7.2k SAS. However, when I go into storage management in the TDM, the drive count is only 5 and I do not get the option to include all 4 of my 3.5" drives in a virtual drive; I only get the option of including 3. The last 3.5" drive, on the right-hand side of the server and backplane, is not showing as available. I know this controller supports up to 32 drives, so what do I need to do to get this drive to show up as available? I'll post an image as well: the green outlines are my drives which ARE available, the red box is the one 3.5" drive which is NOT showing as available (all lights are on and the drive seems to be working normally; if I swap that drive for another I still get the same, 3 of the 3.5" drives available instead of all 4). I would REALLY appreciate some help with this. I need that drive showing up for RAID10 or this server goes back! Thank you, LD.

    Yeah, sorry, I meant to update this sooner: it was a dead drive. I thought I had checked it in other bays properly, but obviously not. This server only has 2 USB ports! Crazy: if you are flashing firmware or loading a driver bundle into the TDM via USB, you have to unplug either the keyboard or the mouse. What genius decided this? All servers should have at least 3 USB ports, whether you are going to run them headless eventually or not. Also, ALL four of the 3.5" drive caddies are cracked and broken where the screw attaches the plastic front part to the metal! This must have been done in the factory, AND one of the sprung screws that holds the RAID card down is snapped! Anyway, the original problem is solved.

  • RAID question

    We currently have our 10g database connected to our storage array (RAID10). We're going to upgrade our servers to 64-bit machines, and in the process the physical DBA wants to start storing our data on the local drive system (RAID5).
    There is sufficient space on the new server, and it's easily upgradable if our growth estimates are incorrect. The general consensus here is that in the case of a disk failure, a modern RAID5 system no longer takes an unacceptable amount of time to rebuild a disk.
    As the logical DBA, I am concerned about changes, especially when I don't fully understand the technology. Is there anything I should be worried about, or good questions I could ask in order to ensure that we get the same performance level and availability? My knowledge of RAID is purely theoretical, and comes from articles I've accessed on the web.
    Thanks
    -cf

    The first thing that comes to my mind is that your primary concern will be the write speed impact on your system.
    RAID 5 has a superior disk read rate but a moderate to low write rate because of the parity generation.
    RAID 10 has better I/O rates overall.
    It's also interesting to know that there's a distinction between physical and logical DBAs somewhere. :)

  • T5220 RAID10 best practice

    Hello world!
    Sorry if this is possibly a duplicate topic, but I couldn't find the solution here.
    We have a T5220 box with 4 146G HDDs. The software deployment requirement was that we should build a RAID-10 volume, allocating about 60GB for the root partition; the UFS filesystem is mentioned. The LSI SAS controller inside does not provide RAID10 functionality, only 2 volumes of either RAID0 or RAID1 each. My suggestion was to construct mirrored volumes in hardware through raidctl and then stripe them with SVM, or to make RAID10 through ZFS functionality during the initial Solaris setup if possible, in spite of the deployment requirements. The question is: how best to do it? The real problem is that the root partition should reside on that volume, and striping the static root partition with SVM is something I had never thought about. Starting from the initial disk layout and Solaris setup with a root partition 2 times smaller than expected, and then, after constructing the metadevices and the stripe, growing the root partition with growfs - is that a possible and optimal solution? I already have practice building RAID10 with SVM, but there were 2 additional HDDs there, fully mirrored to each other, separating system and data storage.
    By the way, the second question is: why is there such conservatism in the minds of some skilled specialists when the choice of volume and filesystem construction comes up? Why do some people not recommend hardware RAID within the server's controller (I understand that some controllers are really inconvenient, like the ones in these boxes, not supporting expected functionality such as RAID10; I know about the StorageTek RAID cards, but those are optional and cost money), giving preference to SVM instead? The first subquestion: why not construct a full hardware RAID from the internal disks, providing a single volume to the system and software? My choice is ZFS, but UFS is still actively used and deployed (and, with such conservative views, would be chosen for deployments for many years to come). I understand that UFS is much older and time-proven, but it sometimes disappoints me when I reboot a system with an uptime of more than a year and find logical errors in the filesystem; as a not very experienced spec I don't know yet whether that was UFS's fault or an application crashing the filesystem, but in my view consistency should be guaranteed by the filesystem and its drivers. Whether such problems would occur in ZFS I don't know; I am trying to trust the slogans saying "no fsck", because there should be no logical problems at all when everything is written in I/O transactions. One big pro is that ZFS is used on some storage appliances, where the storage must have the most reliable subsystem possible to provide reliable data storage. The second subquestion is about RAID features: SVM RAID vs ZFS's?
    Every opinion is welcome, thanks in advance!
    Edited by: Muxamed on Sep 10, 2010 2:23 PM

    I lied about the unsupported h/w RAID 10. It is called RAID 1E internally. I've already made the volume. But the other questions regarding UFS/SVM vs ZFS are still open.

  • Please help me arrange my hard drive setup, testing out different Raid arrays.

    Hi folks! If I could bother you for your time to help me figure out the best options for my computer setup, that would be much appreciated!! I'm editing mostly RED footage, so 5K and 4K files.
    Here's what my computer build is right now:
    Asus Rampage IV Extreme
    i7-3930K CPU (OC'd to 4.4ghz)
    64GB ram
    GTX 680 video card (2GB model)
    Windows 7 Pro
    Mostly Premiere CS6 and After Effects
    Here's the fun part and the part that I'm still baffled by regardless of how many videos and searches I've done online, what to do with my hard drives!?
    Right now I have my Mobo setup in Raid mode with these plugged in to the motherboard;
    2 x 250gb Samsung pro SSD's on 6gb/s ports (Raid 0) (Windows 7 OS)
    2 x 300gb WD Velociraptors 3gb/s ports (Raid 0) (empty)
    3 Ware 9750-8i Raid controller +
    8 x 3TB Seagate Barracuda drives ready to rock and roll.
    Now based on everything I've read, I'm leaning towards setting up 4 x 3TB drives in Raid0 for my footage (making constant backups with synctoy onto an external Drobo)
    4 x 3TB drives in Raid0 for exporting footage. (with continuous backups)
    That leaves me wondering where the best place is to set up the media cache, cache files, project files and anything else. Also, I've left the OS pagefile on the SSD's, but I'm wondering if I should set that up somewhere else?
    Using Crystal Disk mark, I'm getting these results so far:
    OS on Samsung Pro SSD's in Raid0
    The 4 x 3TB drives in various Raid setups:
    Raid0
    Raid5
    Raid10
    Velociraptors in Raid0
    All this is due to many months of recent work on a computer build that would get bogged down, where editing became painfully slow. I'm trying out a proper dedicated RAID card instead of just 2 drives in Raid0 off the motherboard, hoping this will make a big difference.
    Any expert advice on here would be greatly appreciated!!!!

    Let's start with the easy part:
    C: SSD for OS, programs & pagefile
    D: 2 x Veliciraptor in raid0 for media cache and previews
    With 8 disks available on the 3Ware controller you can go many different directions, and they depend on your workflow. Let me give two extreme examples and then you may be pointed in the right direction for your editing practice.
    Workflow consists of quick and dirty editing, usually one day or less and multiple export formats each day (example Vimeo, YouTube, BDR, DVD and web), or
    Workflow consists of extensive editing, usually a week or longer and exports to a few formats at the end of the editing (example BDR and DVD only).
    If your typical workflow looks like 1, your idea of two 4x raid0 arrays can work, because the load is rather nicely distributed over the two arrays. Of course the drawback is that there is no redundancy, so you depend very much on the backups to your Drobo, and restoring data after data loss can be time consuming, coming from the Drobo.
    If your workflow looks more like 2, then you can consider one 7x raid5 array for projects and media and a single drive for exports. That means that your raid5 will be noticeably faster than either of the 4x raid0 arrays you considered, and offer redundancy as well. As a rough rule of thumb, a 4x disk raid0 will be almost 4 times faster than a single disk; a 7x raid5 will be around 5 - 5.5 times faster than a single disk. You profit from the extra speed during the whole editing week or longer.
    The export to a single disk may be somewhat slower, but how long does it take to write say 25 GB to a single disk? Around 160 seconds. If you use a hot swappable drive cage, you can even use multiple export disks, say one for Client A, another one for Client B, etc. In your initial idea of a 4x raid0 that time may be 40 seconds, so you gain around 2 minutes on the export, but lose that in the whole editing week.
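    To put numbers on that trade-off, a minimal sketch; the 160 MB/s single-disk write speed is an assumption consistent with the times quoted above:

        # Export-time comparison for a 25 GB file (assumed sustained write
        # speeds: ~160 MB/s for one disk, idealized 4x scaling for raid0).
        export_mb = 25 * 1024
        single = 160                      # MB/s, assumption
        raid0_4x = 4 * single

        print(f"single disk: ~{export_mb / single:.0f} s")    # ~160 s
        print(f"4x raid0:    ~{export_mb / raid0_4x:.0f} s")  # ~40 s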
    Just my $ 0.02

  • Data File Management on SAN & RAID

    Hi everyone,
    this is more a question for some generic feedback than a particular problem. I'm advising on a system which runs 10g, archiving, Flashback and standby on a really fast machine. The whole database, which currently is some 5GB, runs pretty much out of a 40GB SGA :)
    When I started working with Oracle we managed data files in great detail and paid much attention to disk placement; that was back at 8i. Today we have NAS, SAN and RAID systems which make I/O tracking a lot harder. This particular system runs an HP storage system with virtual RAIDs, and everything appears under just 1 mount point to the OS and to the DB.
    I'm aware of the standard rules, e.g. logs on RAID10 etc., but I'm just wondering how everyone of you out there setting up production systems usually deals with the placement of DB files, log files, archive files and flashback files on today's low-end or enterprise-class storage systems?
    If you need a particular problem to answer ..... the issue here is that IT says it's not the storage system, but out of 13h of database time I have over 3h of log file sync and write wait events.
    Thanks.

    Well the first thing I'd do with a 5GB database is not run a 40GB SGA.
    But to your question ... I like to place my files so that, hypothetically, I can lose a disk shelf not just a disk, and that in so doing if I lose storage I can lose the database but not its associated backups, archived redo logs and redo logs. Or, if I lose the backups I still have a running database.
    And 1 LUN is almost never the right answer to any question if performance is an issue.
    Also familiarize yourself with Clos networks and create multiple paths from everywhere to everywhere. This means that blades and 1U servers are almost never the right answer to a database question.

  • Hyper Disk Layout and Raid For Essintal Server 2012 R2 With Exchange

    Hi, would a RAID mirror be a good enough configuration for a single server running both Essentials Server 2012 and Exchange 2013? I'm new to Exchange and looking for input and suggestions.
    The user load is very light, just 5 users.
    Boot disk: 120 GB SSD on its own controller, running 2012 with Hyper-V installed.
    2 x 3 TB RAID 1 on an LSI 1064e RAID controller: Essentials Server 2012; disks are fixed-size VHDX, two each.
    2 x 2 TB RAID 1 on the LSI 1064e RAID controller: the 2012 R2 server; disks are fixed-size VHDX, two each.
    System specs: 2 x AMD Opteron 4122 with 32 GB of RAM, 4 cores to each OS.
    Andy A

    1) Boot Hyper-V from cheap SATA or even a USB stick (see link below). No point in wasting an SSD for that. Completely agree with Eric. See:
    Run Hyper-V from USB Flash
    http://technet.microsoft.com/en-us/library/jj733589.aspx
    2) Don't use the RAID controllers in RAID mode; rather (as you already got them) stick with HBA mode, passing the disks AS IS, add some more SSDs, and configure Storage Spaces in RAID10 equivalent mode with the SSDs as a flash cache. See:
    Storage Spaces Overview
    http://technet.microsoft.com/en-us/library/hh831739.aspx
    Having a single pool touching all spindles will give you better IOPS compared to creating "islands of storage" that are a waste of performance and a management hell.
    3) Come up with IOPS requirements for your workload (no idea from the above), keeping in mind that RAID10 provides ALL IOPS for reads and half the IOPS for writes (because of the mirroring). As a single SATA disk can do maybe 120-150 IOPS and a single SAS disk up to 200 (you don't provide any model names so we have to guess), you can calculate how many IOPS your config would give in the best and worst case scenarios (the write-back cache from above will help, but you always need to start from the WORST case). See the calculator link below, and the rough sketch after it.
    IOPS Calculator
    http://www.wmarow.com/strcalc/
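    As a back-of-envelope version of that advice, a minimal Python sketch (not the linked calculator; the per-disk IOPS figures are the guesses from above):

        # RAID10: reads hit all spindles; each write costs two backend I/Os,
        # so effective write IOPS are half. Per-disk IOPS are assumed guesses.
        def raid10_iops(n_disks, iops_per_disk):
            return n_disks * iops_per_disk, n_disks * iops_per_disk // 2

        reads, writes = raid10_iops(4, 150)   # 4 SATA disks at ~150 IOPS each
        print(f"4 disk RAID10: ~{reads} read IOPS, ~{writes} write IOPS")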
    Hope this helped a bit :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Raid 1 with two external drives

    Hi Everyone,
    I've been searching for some answers on this and haven't quite found what I've been looking for, so I figured I'd just ask. I have two identical 1 TB G-Drives daisy-chained and hooked up to my iMac via FireWire 800. I want to make one of the drives RAID 1 to mirror all the information on the other. What is the best way to go about doing this?
    I was also wondering: would it be better just to have Time Machine do backups of one drive onto the other? Any advantages/disadvantages to either method?
    Thanks in advance everyone!
    Cheers,
    Scott

    scotty morrison wrote:
    These drives will be used to store masters and dailies from photo and video shoots as well as the final edits from those shoots, so I'd qualify them as "mission critical".
    Instead of "making" your own RAID1 array, you may want to investigate some "turn-key" RAID solutions. For example [this one at Macsales|http://eshop.macsales.com/shop/Mercury-EliteAL-Pro-RAID]. (That's just one RAID1 turn-key unit that Macsales has.)
    Since you mention video, perhaps even consider RAID5 for more capacity, or RAID10 or RAID0+1 for higher speeds. RAID0 would be good for the edits "scratch space", but not for any "real" storage. Thus the use of RAID10 or 0+1. (If you really want to get the max performance, do some research on the differences between 10 & 0+1.)
    ...I'm now thinking, however, that I should have my masters in RAID1 and, on separate drives, have my live edits backed up by Time Machine as they change. Would this be a good thing to do?
    Again, RAID1 is not a backup, so regardless of where you have the "masters", they should be backed up. With the backup you can then recover from the loss of a drive; it will just take a little more time to get running again compared to RAID1. Thus the "need" for RAID1 is debatable for the masters and final edits, etc.
    Also realize that RAID1 & RAID5 generally have slower performance than an individual drive (or RAID0) due to the overhead of writing the mirror or parity info. Many "workstations" have an individual drive to boot from, then a RAID0 array for the editing "scratch space", and the final version of the file on some other storage (i.e. a RAID5 SAN).
    Be careful with reliance on Time Machine for backups. Time Machine is good, but sometimes Time Machine can't back up files that are in use. (e.g.: [iPhoto|http://support.apple.com/kb/HT4116].) This is true of most backup systems unless you get some really expensive software, so for something you're considering "mission critical" like these video files, I'd be looking at an additional backup system. (Even something as lowly as manually copying files to an external hard drive or an Automator workflow.)
    To give another perspective: RAID1 if you absolutely can't have this system down for even one second, which means your boot drive needs RAID1. Your "scratch space" on RAID0 or an SSD, since you want speed and can deal with the loss of a drive. For your "masters" & "final edits", you need to consider long-term storage (a.k.a. archiving). The largest hard drives currently are 3TB. Being video, I'm going to guess you'll outgrow this quickly. With RAID1, you'd be "stuck" at 3TB until larger hard drives come out...and then you'd have to buy two of them...then be "stuck" again. So IMHO you should be looking at a RAID5 system that can grow. It can be a "slower" RAID5, since you're looking at long-term storage rather than max performance like the scratch disks. (If you need a little more performance, then RAID10, but you'd be sacrificing capacity.) For example, the 8 or 16 drive Drobo: start with 2 or 3 drives now, then add more drives as needed. Drobos of this capacity aren't cheap, but if these files are as "mission critical" as you think, then a Drobo or similar is probably a worthwhile investment. But really, it appears to me that a proper backup procedure for your masters/final edits would be sufficient. RAID1 then is just icing on the cake, albeit very good icing.

  • Systemd-fsck complains that my hardware raid is in use and fail init

    Hi all,
    I have a hardware RAID of two SSD drives. It seems to be properly recognized everywhere, and I can mount it manually and use it without any problem. The issue is that when I add it to /etc/fstab, my system no longer starts cleanly.
    I get the following error( part of the journalctl messages) :
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd[1]: Found device /dev/md126p1.
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd[1]: Starting File System Check on /dev/md126p1...
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: /dev/md126p1 is in use. <--------------------- THIS ERROR
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: e2fsck: Cannot continue, aborting.<----------- THIS ERROR
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: fsck failed with error code 8.
    Jan 12 17:16:21 biophys02.phys.tut.fi systemd-fsck[523]: Ignoring error.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Started File System Check on /dev/md126p1.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Mounting /home1...
    Jan 12 17:16:22 biophys02.phys.tut.fi mount[530]: mount: /dev/md126p1 is already mounted or /home1 busy
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: home1.mount mount process exited, code=exited status=32
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Failed to mount /home1.
    Jan 12 17:16:22 biophys02.phys.tut.fi systemd[1]: Dependency failed for Local File Systems.
    Does anybody understand what is going on? What is mounting /dev/md126p1 before systemd-fsck runs? This is my /etc/fstab:
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    # /dev/sda1
    UUID=4d9f4374-fe4e-4606-8ee9-53bc410b74b9 / ext4 rw,relatime,data=ordered 0 1
    #home raid 0
    /dev/md126p1 /home1 ext4 rw,relatime,data=ordered 0 1
    The issue is that after the error I'm dropped into the emergency mode console, and just pressing Control+D continues booting the system, and the mount point seems okay. This is the output of 'systemctl show home1.mount':
    Id=home1.mount
    Names=home1.mount
    Requires=systemd-journald.socket [email protected] -.mount
    Wants=local-fs-pre.target
    BindsTo=dev-md126p1.device
    RequiredBy=local-fs.target
    WantedBy=dev-md126p1.device
    Conflicts=umount.target
    Before=umount.target local-fs.target
    After=local-fs-pre.target systemd-journald.socket dev-md126p1.device [email protected] -.mount
    Description=/home1
    LoadState=loaded
    ActiveState=active
    SubState=mounted
    FragmentPath=/run/systemd/generator/home1.mount
    SourcePath=/etc/fstab
    InactiveExitTimestamp=Sat, 2013-01-12 17:18:27 EET
    InactiveExitTimestampMonotonic=130570087
    ActiveEnterTimestamp=Sat, 2013-01-12 17:18:27 EET
    ActiveEnterTimestampMonotonic=130631572
    ActiveExitTimestampMonotonic=0
    InactiveEnterTimestamp=Sat, 2013-01-12 17:16:22 EET
    InactiveEnterTimestampMonotonic=4976341
    CanStart=yes
    CanStop=yes
    CanReload=yes
    CanIsolate=no
    StopWhenUnneeded=no
    RefuseManualStart=no
    RefuseManualStop=no
    AllowIsolate=no
    DefaultDependencies=no
    OnFailureIsolate=no
    IgnoreOnIsolate=yes
    IgnoreOnSnapshot=no
    DefaultControlGroup=name=systemd:/system/home1.mount
    ControlGroup=cpu:/system/home1.mount name=systemd:/system/home1.mount
    NeedDaemonReload=no
    JobTimeoutUSec=0
    ConditionTimestamp=Sat, 2013-01-12 17:18:27 EET
    ConditionTimestampMonotonic=130543582
    ConditionResult=yes
    Where=/home1
    What=/dev/md126p1
    Options=rw,relatime,rw,stripe=64,data=ordered
    Type=ext4
    TimeoutUSec=1min 30s
    ExecMount={ path=/bin/mount ; argv[]=/bin/mount /dev/md126p1 /home1 -t ext4 -o rw,relatime,data=ordered ; ignore_errors=no ; start_time=[Sat, 2013-01-12 17:18:27 EET] ; stop_time=[Sat, 2013-
    ControlPID=0
    DirectoryMode=0755
    Result=success
    UMask=0022
    LimitCPU=18446744073709551615
    LimitFSIZE=18446744073709551615
    LimitDATA=18446744073709551615
    LimitSTACK=18446744073709551615
    LimitCORE=18446744073709551615
    LimitRSS=18446744073709551615
    LimitNOFILE=4096
    LimitAS=18446744073709551615
    LimitNPROC=1031306
    LimitMEMLOCK=65536
    LimitLOCKS=18446744073709551615
    LimitSIGPENDING=1031306
    LimitMSGQUEUE=819200
    LimitNICE=0
    LimitRTPRIO=0
    LimitRTTIME=18446744073709551615
    OOMScoreAdjust=0
    Nice=0
    IOScheduling=0
    CPUSchedulingPolicy=0
    CPUSchedulingPriority=0
    TimerSlackNSec=50000
    CPUSchedulingResetOnFork=no
    NonBlocking=no
    StandardInput=null
    StandardOutput=journal
    StandardError=inherit
    TTYReset=no
    TTYVHangup=no
    TTYVTDisallocate=no
    SyslogPriority=30
    SyslogLevelPrefix=yes
    SecureBits=0
    CapabilityBoundingSet=18446744073709551615
    MountFlags=0
    PrivateTmp=no
    PrivateNetwork=no
    SameProcessGroup=yes
    ControlGroupModify=no
    ControlGroupPersistent=no
    IgnoreSIGPIPE=yes
    NoNewPrivileges=no
    KillMode=control-group
    KillSignal=15
    SendSIGKILL=yes
    Last edited by hseara (2013-01-13 19:31:00)

    Hi Hatter, I'm a little confused about your statement not to use RAID right now. I'm new to the Mac, awaiting the imminent delivery of my first Mac Pro quad core with a 1TB RAID10 setup. As far as I know, it's software RAID, not the RAID card (pricey!). My past understanding of RAID10 on any system is that it offers the best combination of speed and safety (backups), since the drives are striped and mirrored: one drive dies, quick replacement, and you're up and running a ton quicker than if you had gone RAID5 (20 min writes per 5G of data?). Or were you suggesting not to do RAID with the RAID card..?
    I do plan to use an external drive for archival backups of settings, setups etc., because as we all know, even the best foolproof plans can be kicked in the knees by Murphy.
    My rig is destined to be my video editing machine, so the combo of quad core, 4G+ memory and RAID10 should make this quite the machine.. but I'm curious why you wouldn't suggest RAID..
    And if you could explain this one: I see in the forums a lot of people running Boot Camp or Parallels, which I assume is what you use to run multiple OSes on your Mac systems, so that you can run Mac OS and Windblows on the same machine.. but why is everyone leaning towards Vista when those of us on Windblows are trying to avoid it like the plague? I've already dumped Vista from two PCs and installed XP for a quicker, less bloated PC. Is Vista the only MS OS that will co-exist with Mac systems? Just curious..
    Thanks in advance.. Good Holidays

  • RAID level for Redo

    Hi,
    My storage admin created RAID10 and RAID5 volumes for the database. I would like to know which RAID level is best for keeping the REDO logs. Can someone tell me what's best for REDO?
    Thanks

    Hello,
    In the case of redo logs you need performance and availability; that is why I put this comparison together for you.
    I suggest you use RAID 10 for the redo logs, because they are write-intensive, and RAID 10 is faster in write operations and more redundant.
    RAID 5 vs RAID10
    Data Loss and Data Recovery
    Let us start off by having RAID 5 explained. In RAID 5, parity information equivalent to one disk's capacity is distributed across all the disks. If there are 5 disks in the storage system, the capacity of 4 of them is available for data and the equivalent of one disk holds parity. If one of the disks in the array fails, the data can be recovered, but in the event of a second disk failure the recovery is not possible. RAID 10, on the other hand, is a combination of RAID 0 and RAID 1. A RAID 10 storage scheme requires an even number of disks: a striped set of disks is paired with a mirrored copy of that set. RAID 10 can survive the failure of one disk in every mirrored pair, and in the case of a disk failure all the remaining disks can be used effectively without any impact on the storage scheme.
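    A minimal sketch of the capacity side of that comparison (standard formulas for n equal disks, shown for illustration only):

        # Usable capacity: RAID5 loses one disk's worth to parity;
        # RAID10 loses half the disks to mirrors.
        def usable_tb(n, size_tb, level):
            return (n - 1) * size_tb if level == "RAID5" else n / 2 * size_tb

        for level in ("RAID5", "RAID10"):
            print(f"{level}, 6 x 1 TB: {usable_tb(6, 1.0, level):.1f} TB usable")
        # RAID5 survives exactly one failed disk; RAID10 one per mirror pair.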
    Performance
    RAID 5 performance in read operations is quite good, though its write performance is quite slow compared to RAID 10. RAID 10 is thus used for systems which require high write performance; it follows that RAID 5 is not suited for systems like heavy databases, which require high speed write performance.
    Redundancy
    The RAID 10 arrays are more data redundant than the RAID 5 arrays. This makes RAID 10 an ideal option for the cases where high data redundancy is required.
    Architectural Flexibility
    RAID 10 provides more architectural flexibility, as compared to RAID 5. The amount of free space left is also minimized, if you use a RAID 10 data storage scheme.
    Controller Requirement
    RAID 5 demands a high end card for good storage performance. If the work of the RAID 5 controller is done by the operating system, it will slow down the performance of the computer. For RAID 10, any hardware controller can be used.
    Applications
    RAID 10 finds a wide variety of applications. Systems with RAID 0, RAID 1 or RAID 5 storage schemes are often replaced with a RAID 10 storage scheme. They are mainly used for medium sized databases. RAID 5 disks are primarily used in the processes that require transactions. Relational databases are among the other fields that run very well under a RAID 5 storage scheme.
    With this, I complete the RAID 5 vs RAID 10 comparison. This comparison, I hope, will help you in deciding on the right storage scheme to suit your purpose.
    kind regards
    Mohamed

  • RAID Configurations for Cisco servers

    Hi All,
    What is the RAID configuration for Cisco appliances (version 8.5) like CUCM, CUPS, CUIC, Unity etc.? Will the RAID configuration be done during installation itself, or do we need to do it explicitly?
    Regards,
    Adithya

    Hi Geoff,
    Thanks for the reply. I just wanted to know whether this RAID configuration is similar to the other server RAIDs where we install Cisco applications (like OS & application software on RAID 1 and database on RAID10).
    Regards,
    Adithya

  • RAID 10 and ASM

    Hi
    Can someone give the difference between RAID 10 and ASM?
    From my understanding both are doing striping of data.
    If we are implementing RAID 10, do we then need to implement ASM (if yes, what is the advantage)?
    Thanks

    user10394804 wrote:
    Can someone give the difference between RAID 10 and ASM?
    What is the difference between a car and the road? ASM = vehicle. RAID = road.
    In other words, ASM is a Storage (or Volume) Manager System that runs on different RAID implementations. The RAID implementation can be at the actual storage layer (hardware RAID), in which case RAID is external to ASM. Or ASM can itself be used to implement RAID (software RAID).
    From my understanding both are doing striping of data.
    ASM automatically stripes data across the disks (e.g. RAID10 disks) in a diskgroup. RAID10 is a combination of striping and mirroring. So yes, both use striping - but ASM != RAID10. These are 2 very different layers in the storage tier.
    If we are implementing RAID 10, do we then need to implement ASM (if yes, what is the advantage)?
    Yes. ASM manages the storage layer for you. It can perform dynamic load balancing. With ASM 11gR2, there are even more features introduced that make ASM a very important part in tying the storage layer effectively and efficiently to the o/s layer and the database layer.
    Oracle specifically recommends using ASM for RAC.
