RAID5 Help

I used to use Arch Linux a while ago, but went back to Windows for gaming, and because I needed Visual Basic for one of my school courses.  Now I've decided to go back to Arch Linux.  The problem is that my new computer has 3 HDDs in RAID5, and since I'm not removing my Windows installation I can't just recreate the array.  I am using the RAID function on my EVGA 680i (which is hardware-assisted software RAID, from what I have read).  I couldn't find anything useful on the EVGA forums.  I was hoping someone would be able to help me set this up so I don't have to disable my array.  Thanks in advance.

You just need to find out which module supports the RAID controller on your EVGA 680i. Once you load it, you'll get new device nodes in /dev (like /dev/raid/c0d0p0 etc.) and you can use those instead of "/dev/sdb1".
It's probably better to rebuild the RAID5 using Linux software RAID, though: if your RAID controller or your mainboard dies, your RAID dies as well, unless you happen to find the exact same hardware. With software RAID you simply replace the mainboard, or put the disks in a different system, and the RAID is still recognized as such.
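If you do want to keep the existing array and just see it from Linux, the 680i's on-board RAID is nForce fakeraid, which dmraid can usually activate. A rough sketch (the nvidia_* names are examples, not your actual device names):
# load device-mapper and look for BIOS/fakeraid metadata on the disks
modprobe dm_mod
dmraid -r     # lists the disks carrying RAID metadata and the format found
dmraid -ay    # activates all detected sets
# the array then shows up under /dev/mapper, e.g. /dev/mapper/nvidia_xxxxxxxx,
# with its partitions as nvidia_xxxxxxxxN; use those in /etc/fstab instead of /dev/sdXN
ls -l /dev/mapper/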

Similar Messages

  • Disk failure changed RAID5 to RAID0+1, not bootable. Help needed!

    I have a Mac Pro (early 2008) 3,1 with a Mac Pro RAID card and 4 WDC Black 1TB HDs. After booting up the system, the RAID card gave an error message that
    bay 3 had a failure. Indeed, HD 3 was not listed in the system overview anymore. A moment later it was attached to the array RS1 again and was "roaming".
    The RAID set RS1 was even changed from RAID 5 to RAID 0+1.
    After a second bootup, my Mac Pro won't boot anymore. I even tried a few keystrokes at bootup:
    +Cmd-R
    +Option
    +Eject
    +Shift
    None of the bootup combinations work anymore...
    Anybody have a suggestion? Help much appreciated!!!
    Greetings

    Hi Grant,
    I let the Mac do its work overnight again, but no luck so far. This morning I studied some forums and these are the listings from within Terminal.
    Any thoughts?
    Thanks in advance
    Apple raidutil version: 1.3.0
    General Status: Issues Found
                        The volume [R1V1:RS1  [ID: 1]] is degraded.
                        The volume [R1V2:RS1  [ID: 1]] is degraded.
                        The volume [R1V3:RS1  [ID: 1]] is degraded.
                        The volume [R1V4:RS1  [ID: 1]] is degraded.
                        Raidset (RS1) is not viable.
    Battery Status: Charged
    Controller  #1: Hardware Version 1.00/Firmware M-2.0.5.5
                    Write Cache enabled
    Volume            Status            Type       Drives       Size  Comments
    R1V1              Is Viable         RAID 0+1   1,2      500.00GB  Is Degraded                     
    R1V2              Is Viable         RAID 0+1   1,2      250.00GB  Is Degraded                     
    R1V3              Is Viable         RAID 0+1   1,2      250.00GB  Is Degraded                     
    R1V4              Is Viable         RAID 0+1   1,2        1.00TB  Is Degraded     
    Drives  Raidset       Size      Flags
    Bay #1  RS1             1.00TB   IsMemberOfRAIDSet:RS1 IsReliable
    Bay #2  RS1             1.00TB   IsMemberOfRAIDSet:RS1 IsReliable
    Bay #3  <none>          1.00TB   IsSpare IsReliable
    Bay #4  <none>          1.00TB   IsSpare IsReliable

  • I have 2 new HDs to add to PC - where will they help the most?

    I recently purchased 2 new hard drives (1TB Samsung F1) that match the ones I currently use in my Intel Core i7 PC running Windows 7.  I'm wondering where these new HDs will do me the most good, as far as performance, future growth, etc. 90% of my work is in 1080i at about 130Mbps.  Oh yeah, I'm on a budget, so thanks for not suggesting I buy 10 more drives and a high-end RAID card ;-)  FYI, I'm currently using the motherboard's Intel RAID controller.... I know - I would love to buy a real RAID card!!
    Here is my current HD set-up - total of 4 Samsung 1TB F1 7200 RPM:
    Drive c: OS and apps
    Drive d: 2 disk RAID0 - I set this for "capture" within each PPRO project. I also send Media Encoder renders here because the source files are mostly rendered files on the e drive. Plus I often want to back up these encodes along with the capture files.
    Drive e: "render drive"  - target for PPRO preview renders, caches  (including Cineform renders)
    * I realize the flaw with the above now - since I use the Cineform codec for most edits, almost every clip ends up needing to be rendered, so my poor solo render drive is actually doing more work and playing bigger bit-rate files than my capture array. (I may or may not get a capture card at this point)
    ** the other issue that comes into play here is which files need to be kept and backed up and which drives can be regularly restriped. From my short time with PPRO, it would seem that none of the PPRO preview or cache files need to be saved or backed up....just the capture files and any media encoder files that I want to keep.  Am I missing anything here?
    So, I see my options as:
    add both new HDs to render array to create a 3 disk RAID0 for renders/previews, leaving the 2 disk RAID0 for captures
    add (1) HD to render to create a 2 disk RAID0 render array and use the second new HD somehow to improve performance or workflow
    Thanks for your advice on how best to use the two new HDs to improve my system. I think that I only have 6 SATA plugs on my motherboard (ASUS P6T V2), so this fills out my internal HD options.
    Thanks!
               Jim

    Baz,
    Thanks for your helpful post!  I am trudging through an upgrade to Windows 7 retail from the RC. It worked but is not very stable, so I may have to do a clean install.
    Anyway - since you just went through setting up a RAID5 using the Intel mobo feature, I was hoping that you would be willing to reply with a basic "how-to" or list of the steps involved.  It has been 8 months since I played with the hard drive arrangement and I can't remember what I did or how, i.e., do I start by doing things in the BIOS, how do I stripe the RAID5, etc.  Any help would be appreciated!  In the meantime, I'll download the Intel Matrix software as you suggested.
    Thanks,
                  Jim
    My current arrangement is below. I have two new Samsung F1 HDs to add to these. FYI, the new drives are the same make/size as the existing ones, but are the commercial grade. The existing HDs are the ordinary grade....I assume that these will work together in a RAID5 OK.
    MoBo = ASUS P6T Deluxe V2
    CPU = Core i7
    Disk 0 = OS, boot, page file, primary partition
    Disk 1 = video files  (capture)  *  this is a RAID0 using two 1TB Samsung F1
    Disk 2 = renders  * single 1TB Samsung F1

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
    I've successfully installed Clusterware and ASM on a 5-node system. I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
    I have a SAN which was previously used for a 10g ASM RAC setup, so I'm reusing the candidate volumes that ASM has found.
    I had noticed that on the previous incarnation several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now....this is all on a SAN....which basically has two pools of drives set up each in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32. Each of these logical volumes is about 65 GB.
    Pool #2...has ASM33 - ASM48 volumes....each of which is about 16GB in size.
    I used ASM33 from pool#2...by itself to contain my cluster voting disk and OCR.
    My question is: with this type of setup, would creating as many disk groups as listed above really do any good for performance? I was thinking that with all of this on a SAN, with logical volumes on top of a couple of sets of RAID5 disks, would divisions at the disk group level with external redundancy really accomplish anything?
    I was thinking of starting with about half of the ASM1-ASM31 'disks'...to create one large DATADG disk group, which would house all of the database instances data, indexes....etc. I'd keep the remaining large candidate disks as needed for later growth.
    I was going to start with the pool of the smaller disks (except the 1 already dedicated to cluster needs) to basically serve as a decently sized RECOVERYDG...to house logs, flashback area...etc. It appears this pool is separate from pool #1...so, possibly some speed benefits there.
    But really...is there any need to separate the diskgroups, based on a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why...links on this info...etc.
    Thank you in advance,
    cayenne

    The best practice is to use 2 disk groups, one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (to a point, in what I've seen) the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, just FYI), such as backup/recovery and life-cycle management. Typically you will still get benefit from double striping, i.e. having a SAN with RAID groups presenting multiple LUNs to ASM, and then having ASM use those LUNs in disk groups. I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add in pairs, as this will provide optimal performance (at least it did in my testing). You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically done based on your database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As databases grow beyond the threshold the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different since you only need to resize the ASM LUNs. I'd also recommend having at least 2 of each standard sized LUNs ready to go in case you need space in an emergency. Even with capacity management you never know when something just consumes space too quickly.
    ASM is all about space savings, performance, and management :-).
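    In case a concrete starting point helps, here is a minimal sketch of the two-disk-group layout described above, run as SYSASM in SQL*Plus. The ORCL:ASM* disk strings are assumptions (ASMLib-style names); substitute whatever your candidate disks actually show up as:
    -- one DATA and one recovery disk group, external redundancy only
    -- (the SAN's RAID5 already provides the protection), starting with 4 LUNs each
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK 'ORCL:ASM1', 'ORCL:ASM2', 'ORCL:ASM3', 'ORCL:ASM4';
    CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
      DISK 'ORCL:ASM34', 'ORCL:ASM35', 'ORCL:ASM36', 'ORCL:ASM37';
    More LUNs can then be added in pairs later with ALTER DISKGROUP ... ADD DISK as the databases grow.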
    Hope this helps.

  • Please help me arrange my hard drive setup, testing out different Raid arrays.

    Hi folks! If I could bother you for your time to help me figure out what the best options are for my computer setup that would be much appreciated!! I'm editing mostly RED footage so 5K and 4K files.
    Here's what my computer build is right now:
    Asus Rampage IV Extreme
    i7-3930K CPU (OC'd to 4.4GHz)
    64GB ram
    GTX 680 video card (2GB model)
    Windows 7 Pro
    Mostly Premiere CS6 and After Effects
    Here's the fun part, and the part that I'm still baffled by regardless of how many videos and searches I've done online: what to do with my hard drives!?
    Right now I have my mobo set up in RAID mode with these plugged in to the motherboard:
    2 x 250GB Samsung Pro SSDs on 6Gb/s ports (RAID 0) (Windows 7 OS)
    2 x 300GB WD VelociRaptors on 3Gb/s ports (RAID 0) (empty)
    3Ware 9750-8i RAID controller +
    8 x 3TB Seagate Barracuda drives ready to rock and roll.
    Now, based on everything I've read, I'm leaning towards setting up 4 x 3TB drives in RAID 0 for my footage (making constant backups with SyncToy onto an external Drobo)
    and 4 x 3TB drives in RAID 0 for exported footage (with continuous backups).
    That leaves me wondering where the best place is to put the media cache, cache files, project files and anything else. Also, I've left the OS pagefile on the SSDs, but I'm wondering if I should set that up somewhere else?
    Using CrystalDiskMark, I'm getting these results so far:
    [CrystalDiskMark screenshots: OS on the Samsung Pro SSDs in RAID 0; the 4 x 3TB drives in RAID 0, RAID 5, and RAID 10; the VelociRaptors in RAID 0]
    All this is due to many months of recent work on a computer build that kept getting bogged down, where editing became painfully slow. I'm trying out a proper dedicated RAID card instead of just 2 drives in RAID 0 off the motherboard, hoping this will make a big difference.
    Any expert advice on here would be greatly appreciated!!!!

    Let's start with the easy part:
    C: SSD for OS, programs & pagefile
    D: 2 x Velociraptor in raid0 for media cache and previews
    With 8 disks available on the 3Ware controller you can go many different directions, and they depend on your workflow. Let me give two extreme examples and then you may be pointed in the right direction for your editing practice.
    1. Workflow consists of quick and dirty editing, usually one day or less, and multiple export formats each day (for example Vimeo, YouTube, BDR, DVD and web), or
    2. Workflow consists of extensive editing, usually a week or longer, and exports to a few formats at the end of the editing (for example BDR and DVD only).
    If your typical workflow looks like 1, your idea of two 4x raid0 arrays can work, because the load is rather nicely distributed over the two arrays. Of course the drawback is that there is no redundancy, so you depend very much on the backups to your Drobo, and restoring data after data loss can be time consuming, coming from the Drobo.
    If your workflow looks more like 2, then you can consider one 7x raid5 array for projects and media and a single drive for exports. That means that your raid5 will be noticeably faster than either of the 4x raid0 arrays you considered, and offer redundancy as well. As a rough rule of thumb, a 4-disk raid0 will be almost 4 times faster than a single disk, and a 7x raid5 will be around 5 - 5.5 times faster than a single disk. You profit from the extra speed during the whole editing week or longer.
    The export to a single disk may be somewhat slower, but how long does it take to write, say, 25 GB to a single disk? Around 160 seconds. If you use a hot-swappable drive cage, you can even use multiple export disks, say one for client A, another one for client B, etc. With your initial idea of a 4x raid0 that time may be 40 seconds, so you gain around 2 minutes on the export, but lose that over the whole editing week.
    Just my $ 0.02

  • Urgent help needed for login problem B310

    I have a B310, and I use veritouch for login. My 4-year-old kid accidentally changed the password method without my knowledge. After a restart, I can't log in. I tried the forgotten-password procedure in Windows 7, but it doesn't seem to work. I tried Windows password bypass software, but it shows no password set. I'm unable to get past it via Safe Mode or F8 as well. Please help me... thanks

    Hi raid5, and welcome to the Lenovo User Community!
    I'm sorry you are locked out of your B310. Unfortunately the Community Rules do not allow discussion of methods to defeat passwords. 
    I don't work for Lenovo. I'm a crazy volunteer!

  • New Computer Help

    I have been reading through these forums for the past couple of weeks and I just want to first say thanks to all the people that post here; it has really helped me get to the planned build I have now.
    I primarily use Premiere and After Effects, so this PC needs to be good at both of them. I currently have CS 5.5 but I intend to get CS6 as soon as my college offers it. I work mostly with footage I get from the program FRAPS to make music videos. I don't know exactly what kind of codec FRAPS files use, so I have no idea if what I have built is way too powerful or not enough. I do know that the files are .avi, have a 1920 x 1080 resolution, and are pretty much completely uncompressed (they are roughly 1GB/min). If anyone here knows more about FRAPS then that would be great. Since I make music videos I end up using lots of very short clips and many different layers at a time. My budget for this entire project is around 3 - 4 grand.
    My current planned build is this:
    Mobo: Asus P9x79 WS
    CPU: i7-3930K
    GPU: GTX 670, 4GB, EVGA SC
    RAM: 2 sets of Ripjaws Z series 32GB (4x8GB) 240-pin DDR3 1600
    PSU: Corsair Gold AX1200
    Hard disks: 1 Intel 520 120GB ssd SATA III, 3 Seagate Barracuda 3TB SATA III, 2 Seagate Barracuda 1TB SATA III, and a 2TB external which I might make internal.
    Raid Card: ???? (help)
    Case: Cooler Master Cosmos II
    Cooling: Corsair H100
    This whole setup costs just under $3500 at the moment, and I already have all other peripherals and software. I do plan to be overclocking.  Most of the questions I still have are of the "is this necessary/appropriate" type, and by that I mean will I notice a difference, because I know that Adobe can use everything I throw at it. First off, is the 4GB GTX needed or will the 2GB do? Also, is 64GB of RAM overkill, and does it work with everything else?
    Disk Setup
    This has been by far the most confusing aspect of this build for me, and after reading Harm's Generic Disk Setup this is what I came up with.
    Use the SSD as the boot drive and have Adobe on it
    Have the 3  Barracuda 3TB in raid 5 and put media and projects there.
    Have the 2  Barracuda 1TB in raid 0 for the Pagefile and Media cache.
    Take apart my 2TB external and make it internal using my Mobo's built-in SSD cache to store the Previews and Exports.
    The external drive is something I already have, and I figured why not use it as well; everything else is still just planned and not final in any way, and I am completely open to any kind of suggestions. You will also note my lack of a RAID card, which I know I need, but I cannot figure out what I need out of one or how to compare them to each other, so any advice on what RAID card to get would be super helpful.
    Any other comments or suggestions are more than welcome
    Thanks

    First of all, this looks pretty good. However, raid5 using an on-board controller is lousy, and the mobo does not have sufficient SATA3 connections to make that work, so you are relegated to using SATA2 ports.
    I'm not familiar with Fraps, apart from the fact that it is mostly used by gamers to record gameplay, often in combination with Matroska. Well, Matroska does not work with PR, so if that is the case, you need to convert first before you can edit. But when you say a data rate of around 1 GB/min, that will require a lot of memory and a serious disk setup. So, from that point of view 64 GB RAM is not overkill, but your disk setup may need some work.
    I would suggest a good raid controller, either Areca ARC-1882 or LSI and at least one or more additional Barracuda 3 TB disks. You won't need the storage space, but the additional disks in the same array will make the volume much faster and you can use the SATA3 connections on the controller in combination with the 64 MB cache on each drive very well.
    Last, make sure that when you set up your system, you only use the boot disk, the Intel 520, for reading and not for writing. So put your page file on another volume, set the Windows environment variables to point to another volume, and do not let Adobe use your C: drive. If you do not do that, then you had better get a Corsair Performance Pro SSD, which is faster and shows less performance degradation in the steady state.

  • RAID5 Disk Failure - What do I do now?

    I'm pretty green when it comes to RAID, but we just set one up in a brand new MacPro 2 weeks ago. I've got the Apple RAID card with a 2g drive in bay 2, a 1.5g in bay 3, and a 2g drive in bay 4. I used the Raid Utility to create a RAID5 between the three.
    I had left some files converting over the weekend, and I just got into work to find a message from RAID Utility that one of the drives attached to the RAID card has failed. It looks like it's the drive in Bay 2. Do I just replace it? The last message RAID Utility left under Events states "Degraded RAID set RS2 - No spare available for rebuild."
    Looking to the left under Drives, it shows the 2g drive in bay 2 is roaming. I have no clue what that means. The hardware viewer lists the status of the drive in bay 2 as
    Assigned: no
    Failed: no
    Foreign: no
    Missing: no
    Reliable: Yes
    Roaming: Yes
    Spare: no
    Any help would be much appreciated.
    Aaron

    "Degraded RAID set RS2 - No spare available for rebuild."
    What do I do now?
    Option A) you buy another drive, install it in your Mac, partition it the way you want, and add the spare drive to the RAID set, and have it rebuild your complete redundant set.
    The "failed" drive may be really bad, or may have a marginal block that needs to be re-written to come clean, or somewhere in between. You should completely re-initialize it with write Zeroes over the entire surface (takes several hours). If it comes out clean, you may be able to use it as a spare or set it to work doing backups. But a drive that develops ANY Bad Blocks in the field is likely to fail within 6 months of constant use.
    If you expect to continue to use RAID, most users suggest that a matched set (or nearly-matched set) of drives is the best way to go. Different sizes mean different operating characteristics, and that can cause problems (less so for mirroring than for other RAID levels). For use in a RAID, NO other files should be placed in other partitions on the same drive, as any access to those other files will slow down your RAID to worse than "ordinary" speeds.
    Option B) decide that this RAID setup is not for you. Copy your files off to another drive and stop using the RAID card.

  • How to find which disk is faulty in a degraded RAID5 array? Then fix?

    (I already posted a question in an old RAID thread, but I was told people who know the answers don't read old threads... And I checked the Sticky.)
    My 4-disk RAID5 array (the K8N "PC1" machine) suddenly degraded today, and one disk is marked as faulty. The manuals aren't especially helpful in finding out which particular disk is faulty, and nothing points out what kind of error it is, or what I should (safely) try...
    Here's from the Event Viewer:
    Quote
    The description for Event ID ( 7 ) in Source ( nvraid ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: .
    From MediaShield: [screenshot of the degraded array]
    nVidia reference boards have a "Disk Alert" utility showing the connector for the faulty disk, which seems like a good start (just follow the cable...). But it doesn't seem like the K8N has anything similar? That MediaShield picture at least doesn't make it 100% clear which of the disks was thrown out of the array.
    Samsung has a DOS program that can test their disks; boot from a floppy, I guess. I will download that and see if I can pinpoint the disk. But if not, how? And if I find which one is marked faulty, how do I find out if it's really faulty or if there was a "glitch" in the RAID routine?

    <F10> at startup gets me into the MediaShield Utility, showing the degraded array.
    It gives info about the three disks in the array, and the one removed:
    Code:
    Adapt  Channel  Index
      1       0       0
      1       1       1
      2       0       2
      2       1       3
    This indicates that the disk marked faulty is indeed also the one connected to the SATA connector with the highest number.
    So I guess I can try removing that disk and see if the machine still starts up, then temporarily put in a 500GB disk I have in an eSATA box (not containing anything useful), then rebuild the array (for a couple of days?), and then try testing the faulty-marked disk with the Samsung HUTIL program.
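    Another thing I might try alongside the Samsung DOS tool: smartmontools also runs under Windows and can read the drive's own SMART error log, which should settle whether the disk is genuinely failing or the RAID layer just kicked it out. A sketch, assuming the nvraid controller passes SMART through (it may not on every port):
    # list the drives smartctl can see
    smartctl --scan
    # health summary, attributes and error log for the suspect disk
    # (/dev/sda is a placeholder -- use whatever --scan reports)
    smartctl -a /dev/sda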

  • Help Configuring Raid 1 & Raid 5 on the P67A-GD65 (B3) motherboard

    I just purchased the P67A-GD65 (B3) motherboard for a custom system and have a few questions in regards to setting up RAID.
    I have the following HDs
    2x 1TB
    3x 3TB
    I would like to set up RAID 1 with the two 1TB drives and RAID 5 with the three 3TB drives.  I connected the two 1TB drives to ports 7 & 8 so they would use the Marvell SE9128, and the three 3TB drives to ports 1, 2, 3 to use the Intel P67 (B3) PCH.  The documentation says that SATA ports 1-6 support RAID 0/1/5/10 and SATA ports 7-8 support RAID 0/1.
    The two 1TB in RAID 1 would be my boot drives (going to run VMWARE ESXi with multiple VMs) and the three 3TB RAID 5 drives are going to be my storage.
    I was under the impression that you could use both the Intel P67 (B3) PCH and the Marvell SE9128 at the same time, but it looks like I can only configure one or the other.  If I have neither of the two set up, I can hit CTRL + I to get to Intel or CTRL + M to get to Marvell, but when I set one of them up I can't boot into the other to set up my second RAID. 
    At the end of the day, I bought this system to run ESXi 5.0 with multiple VMs and a shared storage area.  If I can't utilize both controllers, then what's the point of having two on the motherboard?  I made sure I enabled RAID for the HDs in the BIOS settings. 
    If anyone has configured their systems like this please let me know how. 
    NOTE: As others have mentioned, if you put in all your memory at one time then the system will go into a continuous reboot state.  You should put in only one stick first and then the others.  Right now I have two 4GB sticks in and will add the others once I figure out what is going on with the RAID.

    Quote from: Jack on 29-September-11, 16:56:49
    Please stop flashing the same version all over again, before something goes wrong.  
    It's great to have that info now... is it posted anywhere on the website?
    Quote from: Jack on 29-September-11, 16:56:49
    Please make sure that the memory frequency is manually set to DDR3-1333, the Command Rate to 2T and the memory voltage to 1.575V.  Then retest with all modules.
    I changed the frequency to DDR3-1333 and the voltage, but I don't know how to change the Command Rate or where it is located.  Any help?
    Also,
    I'm still having problems getting the system to recognize the Marvell RAID config when I boot with the VMware ESXi 5.0 CD and try to select where I want to install... When I go into the Marvell settings, the system shows that I have two drives in a RAID 1 config:
    PD 0: ST31000528AS -> PORT ID = 0; PD ID = 0; Type = SATA PD; Status = Configured; Size 953869MD; Feature Support = NCQ 3G 48Bits; Current Speed = 3G
    PD 8: ST31000528AS -> PORT ID = 1; PD ID = 8; Type = SATA PD; Status = Configured; Size 953869MD; Feature Support = NCQ 3G 48Bits; Current Speed = 3G
    under the HBA 0: Marvell 0 settings it says the following:
    VENDOR ID: 1B4B
    DEVICE ID: 91A3
    REVISION ID: B1
    BIOS VERSION: 1.0.0.1027
    FIRMWARE VERSION: 2.1.2.1505
    PCIe Speed Rate: 5.0Gbps
    Configure SATA as: IDE Mode -> In the BIOS I have the setting set to RAID
    Not sure if it matters but here's how I have the Intel Raid setup...
    RAID Volumes
    ID = 0; NAME = Storage; LEVEL = RAID5(Parity); STRIP = 64KB; SIZE = 5.4TB; STATUS = Normal (it's in green); BOOTABLE = Yes
    Physical Devices:
    Port Device Mod Ser  Size  Type/Status(Vol ID)
    0 Hitachi blah blah 2.7TB Member Disk (0)
    1 Hitachi blah blah 2.7TB Member Disk (0)
    2 Hitachi blah blah 2.7TB Member Disk (0)
    Any thoughts?  I'm trying to boot from the two 1TB drives set up in the Marvell RAID 1 config, and use the RAID5 through the Intel controller for all my storage needs.  
    Thanks again for helping out.

  • Replacing Xserve, how to deal with RAID5+0 array

    We're finally looking at replacing our 8+ year old Xserve, which has served us faithfully, with a new Nehalem model. The question is what to do about the two XRAID arrays that we have attached via Fibre Channel. Both are fully stocked with 14 disks apiece, configured as RAID5 but then combined into a single logical drive via software RAID0.
    The question is - if we replace the old XServe with a new one, will we have to completely rebuild the RAID5+0? If so, can this be done without data loss?
    We do have the means to migrate all the data off and then back on after reconstruction but that is, of course, a lengthy and thus unappealing option.
    Any help will be most appreciated.
    Thanks,
    Cary Talbot
    Vicksburg, MS

    Hi Tony -
    Many thanks for the reply but unfortunately neither thread you referenced addressed the issue at hand.
    What I've got is two XRAID boxes with 14 disks each. All four of the RAID arrays (2 per box, of course) are RAID5. However, rather than having 4 separate logical volumes, we combined the RAID5 arrays together with a software RAID0 on the Xserve host. I understand that switching out the Xserve box shouldn't have any effect on the data stored on the XRAID disks themselves, but the question becomes: can we simply reconstruct the software RAID0 on the new Xserve and not lose all the data stored on the four RAID5 arrays? Has anyone tried this?
    Thanks for any advice/tips/experiences anyone can offer.
    Cary
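    In the meantime, one thing I plan to check on the current host is what the OS itself records about the stripe set, since the software RAID metadata should live on the member volumes rather than in the Xserve itself (a sketch; on older OS X releases the verb is diskutil listRAID rather than appleRAID):
    # show all Apple software RAID sets, their members and status as seen by this host
    diskutil appleRAID list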

  • Raid5 issue after upgrade

    TLDR = scared to death to answer 'yes' to the following from e2fsck:
    Superblock has an invalid journal (inode 8).
    Clear<y>? cancelled!
    After upgrading to linux 3.2.14-1 (along with other important packages like udev, mkinitcpio, etc.) my raid5 array would not assemble.  I don't have the original message, but after two days have passed and watching the forums I'm sure it was related to this: https://bugs.archlinux.org/task/29344
    Anyway, I'm past that issue and am now having filesystem problems with my raid array after recreating it.  Note I'm currently using
    filesystem 2012.2-2
    linux 3.2.14-1
    mkinitcpio 0.8.5-1
    udev 181-5
    During the past two days of tinkering I've never let fsck touch the filesystem, nor have I done anything like mkfs on it.  The only thing I've done is re-create the array using mdadm and then see if I can mount it.
    The raid5 array contains /dev/sdb1, /dev/sdc1 and /dev/sdd1.  My initial thought was that somehow the drive labels got renamed during the kernel upgrade, so I've tried re-creating the array with all six possible permutations of the disks.  Nothing worked.  I know the original order was sdb1/sdc1/sdd1.
    Here is the dumpe2fs info for each of the disks:
    # dumpe2fs /dev/sdb1
    dumpe2fs 1.42.1 (17-Feb-2012)
    Filesystem volume name: <none>
    Last mounted on: /mnt/raid
    Filesystem UUID: 08bb1bfd-4932-494a-93ba-fe8fd37ea22c
    Filesystem magic number: 0xEF53
    Filesystem revision #: 1 (dynamic)
    Filesystem features: has_journal ext_attr resize_inode dir_index filetype sparse_super large_file
    Filesystem flags: signed_directory_hash
    Default mount options: (none)
    Filesystem state: clean
    Errors behavior: Continue
    Filesystem OS type: Linux
    Inode count: 122101760
    Block count: 488379968
    Reserved block count: 488379
    Free blocks: 106448913
    Free inodes: 121612868
    First block: 0
    Block size: 4096
    Fragment size: 4096
    Reserved GDT blocks: 907
    Blocks per group: 32768
    Fragments per group: 32768
    Inodes per group: 8192
    Inode blocks per group: 512
    RAID stride: 16
    RAID stripe width: 32
    Filesystem created: Thu Oct 8 23:09:57 2009
    Last mount time: Mon Apr 2 10:19:36 2012
    Last write time: Fri Apr 6 19:15:06 2012
    Mount count: 13
    Maximum mount count: 38
    Last checked: Sun Oct 30 06:54:03 2011
    Check interval: 15552000 (6 months)
    Next check after: Fri Apr 27 06:54:03 2012
    Lifetime writes: 12 GB
    Reserved blocks uid: 0 (user root)
    Reserved blocks gid: 0 (group root)
    First inode: 11
    Inode size: 256
    Required extra isize: 28
    Desired extra isize: 28
    Journal inode: 8
    Default directory hash: half_md4
    Directory Hash Seed: 84ce591f-34ae-43d0-846c-d8dae91ef4b9
    Journal backup: inode blocks
    dumpe2fs: A block group is missing an inode table while reading journal inode
    # dumpe2fs /dev/sdc1
    dumpe2fs 1.42.1 (17-Feb-2012)
    dumpe2fs: Bad magic number in super-block while trying to open /dev/sdc1
    Couldn't find valid filesystem superblock.
    # dumpe2fs /dev/sdd1
    dumpe2fs 1.42.1 (17-Feb-2012)
    dumpe2fs: Filesystem revision too high while trying to open /dev/sdd1
    Couldn't find valid filesystem superblock.
    When I create the array I see the following from mdadm, which is expected as it sees the existing filesystem:
    # mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --metadata=0.90 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm: layout defaults to left-symmetric
    mdadm: chunk size defaults to 512K
    mdadm: layout defaults to left-symmetric
    mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=1953519872K mtime=Mon Apr 2 10:19:36 2012
    mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Sun Apr 8 10:01:02 2012
    mdadm: layout defaults to left-symmetric
    mdadm: /dev/sdc1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Sun Apr 8 10:01:02 2012
    mdadm: layout defaults to left-symmetric
    mdadm: /dev/sdd1 appears to contain an ext2fs file system
    size=2066852100K mtime=Sat Apr 7 05:08:08 2029
    mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Sun Apr 8 10:01:02 2012
    mdadm: size set to 976759808K
    Continue creating array? y
    mdadm: array /dev/md0 started.
    The raid array is created without issue:
    # cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sdb1[0] sdd1[2] sdc1[1]
    1953519616 blocks level 5, 512k chunk, algorithm 2 [3/3] [UUU]
    # mdadm --detail /dev/md0
    /dev/md0:
    Version : 0.90
    Creation Time : Sun Apr 8 14:56:14 2012
    Raid Level : raid5
    Array Size : 1953519616 (1863.02 GiB 2000.40 GB)
    Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
    Raid Devices : 3
    Total Devices : 3
    Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Sun Apr 8 18:07:19 2012
    State : clean
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0
    Layout : left-symmetric
    Chunk Size : 512K
    UUID : fe541eb6:764599e3:5d64d1b1:6d03292e (local to host jackshrimp)
    Events : 0.19
    Number Major Minor RaidDevice State
    0 8 17 0 active sync /dev/sdb1
    1 8 33 1 active sync /dev/sdc1
    2 8 49 2 active sync /dev/sdd1
    # mdadm --examine --scan
    ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000
    spares=1
    ARRAY /dev/md0 UUID=fe541eb6:764599e3:5d64d1b1:6d03292e
    This is how I originally created the filesystem on the array (back in 10/2009).  Here with "-n" to show the superblock locations:
    # mkfs.ext3 -n -v -m .1 -b 4096 /dev/md0
    mke2fs 1.42.1 (17-Feb-2012)
    fs_types for mke2fs.conf resolution: 'ext3'
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=16 blocks, Stripe width=32 blocks
    122101760 inodes, 488379904 blocks
    488379 blocks (0.10%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=0
    14905 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848
    I've tried running e2fsck with (-b) each of those blocks and none worked.  Here is what I get from e2fsck:
    # e2fsck /dev/md0
    e2fsck 1.42.1 (17-Feb-2012)
    e2fsck: Group descriptors look bad... trying backup blocks...
    e2fsck: Filesystem revision too high when using the backup blocks
    e2fsck: going back to original superblock
    e2fsck: Group descriptors look bad... trying backup blocks...
    e2fsck: Filesystem revision too high when using the backup blocks
    e2fsck: going back to original superblock
    Superblock has an invalid journal (inode 8).
    Clear<y>? cancelled!
    e2fsck: Illegal inode number while checking ext3 journal for /dev/md0
    /dev/md0: ********** WARNING: Filesystem still has errors **********
    While I'm quite sure the data is still there, I'm freaking out about letting fsck do anything!
    Should I let fsck do what it wants to do?  Any thoughts on how I can recover this array?  Any suggestions on other things I should try?
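    One more thing I noticed while comparing the outputs above: the original mkfs reported Stride=16 blocks with a 4096-byte block size, which seems to point at a 64K chunk on the original array, while every re-create so far has taken mdadm's 512K default ("chunk size defaults to 512K").  With the wrong chunk size the filesystem would look scrambled exactly like this even when the member order is right.  So before letting fsck write anything, I'm thinking of trying (a sketch, not run yet):
    # stop the current attempt, then re-create with the old geometry:
    # same members, same order, 0.90 metadata, but 64K chunk instead of 512K;
    # --assume-clean skips the resync so parity isn't rewritten while guessing
    mdadm --stop /dev/md0
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --metadata=0.90 \
          --chunk=64 --assume-clean /dev/sdb1 /dev/sdc1 /dev/sdd1
    # then check the filesystem read-only before trusting the layout
    fsck.ext3 -n /dev/md0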
    Thanks for any and all help. 

    Then it looks similar to bug 6651232
    May I suggest you get in touch with Oracle support for this?

  • Partitioning a Wide Table on RAID5--Need Advice

    We will soon have to add a new table to our database that is 2 Gig in size and growing, so we are considering partitioning it for speed. I've a lot of Oracle experience, but haven't used partitioning before.
    NOTE:
    -- The database is on RAID5 disks
    -- The table is 70 fields wide, so I'm only showing relevant fields
    -- We are still at Oracle 9i, unfortunately
    -- The data is re-loaded once per month; we never change it--SELECTS only
    Create table TRANSL1 (
    COMPANY_CD varchar2(5),
    POL_NO varchar2(10),
    COMMUNITY varchar2(6),
    ADDRESSKEY varchar2(25),
    STATE varchar2(2),
    FLOOD_ZONE varchar2(9),
    POST_FIRM varchar2(1) )
    The primary key will be company_cd + pol_no.
    I'm told the users usually pull up the data on one of the following:
    -- company_cd + pol_no
    -- state
    -- flood_zone
    -- addresskey
    -- post_firm
    -- company_cd by itself
    -- the first 2 characters of community
    -- they can also pull up on any other combination of fields they need
    After doing tons of reading on it, here's what I was thinking of doing.
    1. hash partition the table on company_cd
    2. local partition index on company_cd
    3. local partition index on pol_no
    4. regular index on state
    5. regular index on flood_zone
    6. regular index addresskey
    7. regular index on post_firm
    8. function index on community
    9. Create a normal primary key on company_cd + pol_no
    Does this look OK? Should the regular indexes be globally partitioned instead? Should we even be using partitioning with so many indexes?
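    In case it helps, here is roughly the DDL I have in mind for steps 1-3, 8 and 9 (just a sketch; the partition count and index names are placeholders, nothing is decided yet):
    -- 1. hash partition on company_cd (8 partitions is a placeholder)
    Create table TRANSL1 (
    COMPANY_CD varchar2(5),
    POL_NO varchar2(10),
    COMMUNITY varchar2(6),
    ADDRESSKEY varchar2(25),
    STATE varchar2(2),
    FLOOD_ZONE varchar2(9),
    POST_FIRM varchar2(1) )
    partition by hash (company_cd) partitions 8;
    -- 2. and 3. local indexes on the partition key columns
    create index transl1_company_ix on transl1 (company_cd) local;
    create index transl1_polno_ix on transl1 (pol_no) local;
    -- 8. function-based index on the first 2 characters of community
    create index transl1_comm_fx on transl1 (substr(community, 1, 2));
    -- 9. normal primary key on company_cd + pol_no
    alter table transl1 add constraint transl1_pk primary key (company_cd, pol_no);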

    It is true that partitioning your tables can help with 'speed'. That means different things to different folks. It can also impede things seriously if not properly administered. Partitioning is just another tool. If used incorrectly, the tool can cause the job to take longer.
    Just a note... you mention using RAID5. Oracle partitioning has more to do with logically partitioning your tables (although different partitions may or may not reside in the same tablespaces, which in turn may or may not reside in the same location(s)) for ease of administration and improved performance; the use of RAID5 here is secondary. I mention this because I've seen a few people associate Oracle Partitioning with disk partitioning.
    Anyways, look over the following doc:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14223/parpart.htm#g1020112
    And a book that will throw much light on partitioning is Tom Kyte's book Expert Oracle Database Architecture.
    Hope this helps.

  • X4540 Windows RAID5 Performance

    We have good performance from our X4500/X4540 hardware when running ZFS on Solaris, but we now have a requirement to use a couple of them for Windows directly. We have tested an X4540 with Windows 2008 x64 and various RAID5 volumes, both using the storage tools native to the OS and using Veritas SFW, but we have very poor write performance. The sequential read performance is fine (>500MB/s), but 18MB/s for sequential writes on an 8-disk RAID5 volume is unacceptable when a single disk or a mirrored pair can sustain >60MB/s. I have been using Iometer to generate the load, using the same test profile I use for other storage. My suspicion is that the software RAID stack in Windows is restricting the performance, as a Dell PE2850 with a 5-disk U320 software RAID5 volume also has poorer than expected performance. Before we use our Premier contract to ask Microsoft about this, I thought it was worth asking here if anyone else has experience of this? It's so bad we've thought about running Solaris on it and then Windows in VirtualBox, presenting iSCSI targets from the host to the guest.
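    For what it's worth, the Solaris + VirtualBox idea would probably look something like this on the ZFS side (a sketch; the pool, volume name and size are placeholders, and this uses the legacy shareiscsi property from the Solaris 10 era rather than COMSTAR):
    # carve a block volume out of the existing pool
    zfs create -V 2T tank/winvol
    # export it as an iSCSI target and make sure the target daemon is running
    zfs set shareiscsi=on tank/winvol
    svcadm enable svc:/system/iscsitgt:default
    # confirm the target exists, then point the Windows guest's initiator at it
    iscsitadm list target -v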

    Thanks for letting me know I'm not alone with my RAID 5 performance issue. I posted on some Microsoft forums too, but all I've had back is a recommendation to use RAID 10, which IMHO was less than helpful.
    I hadn't tried the Nvidia teaming, but I have just configured a failover pair using two unused interfaces on an X4540, and I get a fifth interface listed with the device name "NVIDIA nForce Networking Controller Virtual", and it appears to operate as expected. Then I configured it as 802.3ad and it hung entirely. Not good.

  • WARNING: Page83 data not standards compliant Promise 7 Disk RAID5

    Hello experts.
    I have the following issue. After a distribution upgrade from Solaris 9 to Solaris 10 11/06 we are getting the following error message in /var/adm/messages.
    *genunix: [ID 336213 kern.warning] WARNING: Page83 data not standards compliant Promise 7 Disk RAID5 V0.0*
    The OS specs: Release 5.10 Version Generic_118833-33 64-bit
    The HW specs: Sun Fire 240 (Firmware OBP 4.16.6 2005/05/09 13:03 Sun Fire V210/V240,Netra 240)
    2x SCSI 73 GB HDDs on the internal SCSI controller
    Promise VTrak 15100 RAID system with two disk arrays of 1.5 TB each on an LSI 1030R controller
    IBM TL 4000 with 2 DLT drives and one robot on an LSI 1030se controller.
    After a firmware update we still have the same failure in /var/adm/messages.
    Where could the fault be? I'm a little bit confused about that. I urgently need help.
    Thanks in advance.

    You may have already got the answer you needed, but here is a little more detail on what is probably happening:
    Sun has a function that screens for Page 83 SPC-3 spec violations, and it was based on SPC-3 rev 14. At the time that check was rolled out, EUI-64 descriptors could apparently only be 8 bytes long; now 12- and 16-byte lengths are also supported. Sun should have this in their system as a known bug by now. It shouldn't stop anything from working, but it is annoying... Opening a call with Sun, as the others said, would most likely get you to this information too.
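    If you want to reassure yourself that it really is cosmetic, a quick check along these lines should do (a sketch):
    # confirm the warning is only logged for the Promise LUNs, not the internal disks
    grep Page83 /var/adm/messages
    # and that the LUNs are still visible and usable
    echo | format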
