Degraded RAID

We have an earlier-generation Xserve RAID, set up as RAID 5, where a drive has failed, so the status is Degraded. The failed drive was a 250 GB Hitachi, which seems to be no longer available. Instead I have ordered a 250 GB WD Caviar Blue PATA.
Should it be as easy as pulling out the dead drive, popping in the new one, and putting it back in position in the RAID? Will the RAID repair itself, including formatting the new drive (assuming the WD is good)?

Hi Tod,
Sorry to reply to a different post; I'm unsure how to email privately, but we are having a RAID issue I thought you might be able to help with.
We currently have an Xserve RAID in a RAID 50 config with 14 drives (7 on each side). One drive went down, so we took it out and ordered the part (now running 7 on the left and 6 on the right), and the RAID data was fine. While trying to configure another RAID for temporary use, I accidentally (here's where everyone shrugs) reformatted the original RAID with all our data (non-destructively, not writing anything over it), but of course I lost the original volume that created the single RAID across the 14 disks. Help. Do you know of any tools that can find the old volume and restore it for us (no, we hadn't backed up the RAID volume)? Right now we're using Virtual Lab to try to restore. Any ideas or help would be MUCH appreciated.
Thanks.

Similar Messages

  • Upgrading Controllers firmware on a degraded RAID 5 array

    Hi guys
    I have been told that upgrading the controller firmware on a degraded RAID array can cause a loss of configuration. I have been doing this for over a year and it has never happened; it sounds more like a myth to me than anything else, but I would like some advice on it, if possible.
    Is it true that the RAID configuration can be lost by upgrading the RAID controller? In theory, isn't the RAID configuration kept on the hard drives?
    Thanks in advance,
    Pedro

    No - this is not a myth. It is entirely possible to lose access to the storage either temporarily or permanently.
    It is extremely bad practice to upgrade any system with a degraded component no matter how small.
    Which particular array have you been upgrading?
    In most arrays, the configuration is saved on multiple disks - on some arrays as an n-way mirror, where n is the number of disks in the tray. However, unless you know exactly what the implications are, it is extremely important that firmware updates are not carried out on degraded arrays.

  • Degraded RAID 1 SATAs

    Everything had been working great on my 939 FX-53 (2 GB Corsair XMS 3200Pro, two SATA Seagate 160s in RAID 1, BFG 6800 Ultra) until I accidentally screwed with the bus speeds (I'm a complete moron), which locked up the system and shut down the COM ports. The machine wouldn't boot, etc.
    I finally loaded failsafe BIOS settings and ultimately got everything straightened out, but while the BIOS was in this mode it defaulted to booting from my mirrored drive, and Windows loaded (slowly), apparently for the first time, even telling me I had only several days left to register it.
    After resetting everything, including the RAID 1, everything works fine except it shows both hard drives as "degraded" - even though the Nvidia utility shows each drive as healthy, with the mirroring as degraded. At the F10 screen, hitting R to rebuild does nothing, but they appear to still be mirroring, though I'm still getting the red flashing warning on boot that the mirrored drive is degraded.
    Now my PC takes FOREVER to boot and shut down but otherwise seems to run fine.
    How can I "re-mirror" these drives?
    Thank you for any help!

    One of the things that I did when I was building my system is to figure out how to do this.
    What you need to do is to delete one of the degraded drives.  What happened is that at one point in time, the PC booted with only one drive active, which put the two drives out of sync.  Now there are two active drives and the system does not know which one is supposed to be the "master".  So by deleting one, you force the remaining one to be the master, and the other will be mirrored off of it.
    When you boot, hit F10 to go into the RAID setup.  Delete the degraded RAID that you do not want.  Then Ctrl-X and reboot.  I am not sure if you have to assign the deleted drive back to the RAID configuration or not.  But if you can, go ahead and do so.  Make sure that the first drive is designated as the "boot" drive.  Then continue to re-boot and the drives should start the mirroring process.
    I hope this helps.

  • Need help with RAID card and degraded RAID 5 errors

    Dear all,
    I recently purchased a used Apple RAID card for my 2008 Mac Pro 8-core. The installation went smoothly; the card was immediately recognized and the battery reconditioned within one night.
    So I started setting up a RAID set with the 4 identical drives I had already used as a software RAID. But each time the RAID 5 volume is created, the status turns red some time later and the RAID is listed as "degraded"!
    A closer look at the log reveals:
    19:42:54 Drive carrier 00:01 inserted
    19:42:27 Background task aborted: Task=Init,Scope=DRVGRP,Group=RS1
    19:42:27 Degraded RAID set RS1 - No spare available for rebuild
    19:42:26 Degraded RAID set RS1
    19:42:22 Drive carrier 00:01 removed
    15:10:57 Created volume “R1V1” on RAID set “RS1”
    So it seems that the drive in Bay 1 somehow gets lost (removed) a few hours after the volume is created, and soon after it gets "reinserted"...
    Of course, the drive is NOT actually removed; nobody touched the Mac Pro! I also repeated the same procedure 3 times and the result was always the same.
    I also tried setting up JBOD and different RAID levels, which all work without a problem. Only when choosing RAID 5 (which is what I bought the card for) does the problem reappear.
    Does anyone have a solution or hint for me? Many thanks in advance!

    One drive completely broke down later. I replaced that drive and the problem has been gone since!

  • Xserve Degraded RAID Array

    Hello
    I have 2 Xserve Dual Core servers experiencing the same issue.
    On initial setup I used two 500 GB modules to create a mirrored drive, then installed the OS plus the applications that I am running. After a day or so the RAIDs were degraded. I formatted the degraded drive and rebuilt the mirror. Upon restart the RAID was degraded again. I replaced the drive and hit the same issue again. I originally thought it was bad hardware on one machine, but I am experiencing the same issue on both. I have not only reformatted but have also replaced with 3 different drives on BOTH Xserves. I even tried removing the Apple drives and replacing them with off-the-shelf Seagate 500 GB SATA drives, without any luck.
    Any ideas???
    Thanks in advance

    Thanks for the Welcome.
    I haven't tried anything with RAM yet, but I will.
    The odd part is that it is happening on 2 different machines. I initially created the RAID, then installed the OS, then installed my server apps. Everything seemed fine for a few days and then a degraded drive. After a few rebuilds and recreating the RAID, I formatted the drives using the Zero All Data option, rebooted the server, and didn't launch any applications - just let the server run - and after one day the RAID was degraded. I also get a degraded RAID if I shut down the server and power it back up.
    What I will do today is try installing just the factory RAM in both servers, format the drives, recreate the RAIDs, and see what happens.
    I installed 16 GB of Crucial RAM in one server and 4 GB of Crucial RAM in the other.
    I will let you know

  • Degraded RAID Repair

    How can I repair a degraded RAID?
    All the instructions I have read seem to require a damaged disk, but my disks are OK...
    Name: Data Mirrored
    Unique ID: Data Mirrored2e749102dd1e11d8822a000d939c63fc
    Type: Mirror
    Status: Degraded
    Device Node: disk3
    # Device Node Status
    0 disk2 OK
    1 disk1 OK
    xserve   Mac OS X (10.3.8)  

    Hi
    I have seen this before; I don't have an answer why, but I can help you fix it.
    You need someone to look at the front of the Xserve and notice which drive is active and which one is not (blue lights flashing). After you have determined which drive is not being used, just pop it out of the bay, wait a minute or so, then pop it back in.
    Once it is inserted you will be able to run the diskutil repairMirror ....... command to rebuild the RAID.
    Be very SURE which drive is not being used before doing this.
    Ed
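
    For reference, a rough sketch of what that looks like in Terminal once the inactive drive has been reseated. The device nodes come from the listing above; the argument order shown for repairMirror is an assumption about that era's diskutil, so confirm it by running diskutil with no arguments first.
    # confirm which member of the mirror dropped out of the set
    diskutil checkRAID
    # re-add the reseated member (disk1 here) to the mirror set (disk3 here)
    sudo diskutil repairMirror disk3 disk1
    # re-run checkRAID periodically to watch the rebuild progress
    diskutil checkRAID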

  • Degraded RAID on IronPort C160

    Our IronPort C160 has a degraded RAID. How do I identify the bad hard drive and if I replace it will the RAID rebuild itself? Opening a TAC case is not an option.
    Thanks,

    Unfortunately, the hard drives in the C160 appliance are not hot-swappable.  You would need to open a support case through TAC and have the appliance RMA'd - if it is still under contract.  Starting with the C170, the drives are hot-swappable.
    Hope this helps!
    -Robert

  • Degraded RAID, trying to repair integrity

    We pulled a major bonehead move.
    We were installing another server in the rack and we didn't have the Xserve locked. My employee and I both popped drives from our Xserve at the same time and degraded the RAID.
    Setup: Dual G5 xServe, MegaRaid card, 3x300GB drives in RAID 5
    I suppressed the urge to immediately panic, especially since the server was idle when it got hosed. I powered down right away and booted off the install CD.
    I used the megaraid CLI utility to re-flag the drives as optimal and brought the server right back up.
    I'm noticing some odd behavior and a bit of corruption, so I assume the RAID consistency is blown; I pretty much expected that to be the case.
    The problem is, our backups are crap. I've been fighting with the administration for funding for a proper backup system for nearly a year and I've failed so far. Formatting and a bare-metal restore from tape is not an option.
    I'm running: megaraid -chkcon 0 -start
    ... and it's taking forever according to -status. The documentation is very vague about what the parameters actually do, so I'm in a bind. I can run the consistency check and have the server down for at least a day, but what do I get at the end?
    Can or does chkcon repair consistency problems?
    Here are my options as far as I know - suggestions?
    - Is there a way to repair the consistency of a degraded RAID volume?
    - Should I back up what I can, scrub the RAID, and then just restore what I can from current data and old crappy backups?
    - Should I intentionally degrade the drives that went offline, one at a time, and then use the megaraid utility to properly return them to online with megaraid -rebuild pd (pd being the physical drive)?
    - Should I try the official suggestions for moving RAIDs from server to server? Apparently Apple suggests that you can move a RAID by pulling the drives, moving the card without drives, booting to the install disk and doing megaraid -destroyconfig, shutting down, putting the drives back in, and then letting the card magically rebuild the RAID from metadata. This sounds like trouble to me.
    Any suggestions (other than don't leave my Xserve unlocked... especially when working in the rack)?
    thanks,
    Steven.

    I was able to pop in the drives, boot off install DVD, and mark them as on-line. The server came right back up.
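
    For later readers, the megaraid invocations discussed above would run roughly in this order. This is only a sketch assembled from the flags the poster mentions - logical drive 0 is taken from the post, and pd stays a placeholder for the physical drive number - so check the MegaRAID CLI documentation before running any of it.
    # start a consistency check on logical drive 0, then poll its progress
    megaraid -chkcon 0 -start
    megaraid -status
    # return a single offline physical drive to the array by rebuilding it (pd = physical drive number)
    megaraid -rebuild pd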

  • Degraded RAID set (mirror)

    I am running a pair of 2 GB external drives in RAID 1 (mirroring) using the OS X Disk Utility.  Recently I noticed that the set shows its RAID status as Degraded, and one of the two drives is indicated as "Missing," which keeps the "Rebuild RAID Set" button greyed out.  However, I can verify each of them separately and they appear to be okay.  Only one of the drives appears to be partitioned correctly; the "missing" drive simply shows a RAID slice for the entire 2 GB.
    I would recreate the mirror set, except that I don't have anywhere to store the 1.3 GB still on the good drive, and I believe I cannot create the set again without erasing the contents.  (I back up to the RAID set as well as use it for un-backed-up storage, which I think is safe as long as the RAID set is working.)
    Any ideas how to get the "missing drive" to reappear, so the system can rebuild the set?  Or any other ideas to get out of this problem?  Thanks.

    It doesn't change the problem, but obviously I meant TB (not GB)
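
    For anyone else staring at the same greyed-out Rebuild button: the missing member can usually be identified from Terminal first. A minimal sketch, assuming the checkRAID verb that this era of diskutil provides:
    # list every disk and slice the system can see
    diskutil list
    # show the mirror set, its device node, and which member is reported Missing
    diskutil checkRAID
    Once the stray RAID slice has a device node, re-adding it with diskutil repairMirror should trigger the rebuild; the argument order differs between OS releases, so check diskutil's usage output before running it.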

  • Degraded RAID set help

    Hello Support Community,
    I have a 4 bay RAID set that all of a sudden is showing as degraded.  I have two pairs of disks, each striped, and those two pairs are then mirrored.  Not sure if this is a correct RAID format, but it is what it is right now.  So both striped sets say online with no problems, but the mirrored set of those two pairs shows that it is degraded.  See below.
    Any thoughts as to what might be happening?  I have the option set to automatically rebuild the RAID.  How do I know this is happening?  How long should I expect this to take?  It's a 4 TB RAID in its current config, and there is about 3.7 TB of data on it.  I have everything backed up in two other locations.  Am I better off starting from scratch?  Or should I just let this thing run for 2 weeks and see what happens?  Any help would be greatly appreciated.
    Thanks
    Dave

    No, I was doing all of this in Disk Utility.  I left the array running overnight, and it's back online now.  Not sure what got screwed up.
    Thanks
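
    For future readers asking the same "how do I know a rebuild is happening?" question: the set's status and rebuild progress can be checked from Terminal rather than waiting blind. A sketch, assuming a reasonably recent diskutil (older releases expose the same listing through diskutil checkRAID):
    # list all AppleRAID sets, their members, and any rebuild progress
    diskutil appleRAID list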

  • Degraded RAID 1 - which drive to buy?

    Dear All
    I have 6 Xserves (2009) with RAID 1 on each.
    Today one of my RAID 1 sets is degraded. I can't find the same hard drive for sale to repair the RAID 1.
    Can someone point me to a compatible hard drive? Does a list exist?
    My hard drive is a WDC WD1002FBYS-43P1B0.
    Perhaps any SATA hard drive would work...
    OS : 10.5.8
    Fred

    Hi Fred,
    You should still be able to buy the Apple 1 TB drive module from Apple or from a reseller. Although some non-Apple drives work in the Xserve, it's a bit hit and miss, so if the machines are in a production environment it's best to get the real thing.
    Thanks
    Beatle

  • Proper removal of a disk from a degraded RAID

    Hi folks,
    I have an Xserve G4 with 4 drives; two are mirrored to the other two, so System and Data are each mirrored. One of the drives doesn't work any more and the RAID is degraded. What is the proper way of solving this?
    Server Monitor says the failing drive is Drive 3 in internal bay 3, but when I remove it (the third drive from the left), the volume unmounts and users lose access to their data.
    When I remove Drive 4 (the first from the right side), the volume doesn't unmount, but the whole server freezes. Is that normal?


  • Degraded RAID set

    Hi All,
    Dual-core 2.0 running 10.4.2. I set up a RAID set and one of the drives almost immediately reported a SMART failure, so I replaced it. No problem, no downtime, even. The machine functioned fine with just the one drive.
    Now...
    Bought a replacement second drive. Added it to the RAID set, rebuilt it. No problem, but the RAID set still reports as degraded. I cannot delete the damaged drive because I don't have it anymore (hindsight is 20/20). How can I delete the non-existent drive?
    Thanks,
    Danny
    Dualcore 2.0 G5   Mac OS X (10.4.2)  

    Yeah, but if you've only got 2 slices and one of them is out to lunch, well, it's not rocket science to know what your risk is ...
    Well, actually, one failed and has been replaced. It was pretty easy to add the new drive to the RAID set (just drag and drop into the RAID and then rebuild it). The machine worked fine without the failed drive, hence...
    What worries me is that I don't know for sure if the RAID array is actually working as designed, and I haven't found a definitive statement about what exactly 'degraded status' means. Does it mean 'not working as well as it should but still doing the job', or does it mean 'this RAID array isn't doing diddly and sometime soon you're gonna be up the creek'?
    As you could see, the RAID was working very well. One drive died and the machine continued working fine, so there was NO downtime (except to switch off the G5 and yank out the failed drive). I'm guessing degraded means that one of the registered slices is missing. But if you have another registered slice, then you're fine.
    More specifically, if my primary drive actually does die, is there a complete dupe on the second drive in the array or not?
    As in my example, that's exactly right. The hard drives are exact copies.
    I guess I should go ahead and do a hard backup on an external drive and then just pull out the primary drive (it's hot-swappable) and see what happens.
    Yes, RAID is not backup. If you have a corrupted directory structure due to software, then you would have to rely on a backup. But it's nice to have both, really, as the most common problem with disks is hardware failure and this minimizes the problems involved.
    To tell you the truth, though, I'm not sure I'd bother again. I find that using psync (available from bombich.com under the Carbon Copy Cloner section) works fine. It creates a fully working system disk and can be set to clone the start-up drive daily. If the main disk fails, most people won't even notice that they're starting up from the second one.
    So back to the question: how do I remove this degraded status?
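
    To actually answer that last question for anyone searching later: the stale member normally has to be removed from the set in Terminal, since Disk Utility has nothing to drag out once the physical drive is gone. On current systems the relevant verb is diskutil appleRAID remove; on a 10.4.2 machine like this one the legacy equivalent may be named differently, so check diskutil's own usage output first. A sketch with placeholder identifiers:
    # note the RAID set's device or UUID and the UUID of the member that no longer exists
    diskutil appleRAID list
    # drop that member from the set so the status can return to Online
    sudo diskutil appleRAID remove <missing-member-UUID> <RAID-set-device-or-UUID>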

  • Help with Degraded RAID?

    I have a RAID array established (mirrored, aka RAID 1, across two external FireWire drives; it's not being used as my system volume). The array has been in place for quite some time, and I've recovered from several failures previously (note to self: don't buy any more FireWire drives that don't come back on after a power failure).
    I've just recovered from another failure, where the RAID set refused to recognize the existing drive as previously belonging to the set. I re-added the drive to the array and the drive was successfully rebuilt.
    So far, so good. So what's the problem? The array is listed as "degraded." I tried following the advice listed here:
    http://docs.info.apple.com/article.html?artnum=106987
    Executing the command "diskutil checkRAID" reveals the results at the end of this message. What can I do about the duplicate node 2? All of the diskutil commands require the Device Node as a parameter, and they don't recognize "2" or "Unknown" as a valid value.
    Help would be much appreciated ...
    RAID SETS
    Name: Backup
    Unique ID: 59256911-7EF5-4FA4-B355-776D8187E8C7
    Type: Mirror
    Status: Degraded
    Device Node: disk2
    Apple RAID Version: 2
    # Device Node UUID Status
    1 disk1s10 536A7B2E-5F78-420E-AD36-98E4F21E01A5 Online
    2 disk3s3 3E52C861-2ABF-4446-BAC6-C09DE2E95D86 Online
    2 Unknown Missing/Damaged
    ----------------------------------------------------------------------

    Well, I still don't know what happened originally. Perhaps I recovered the array pre-10.4 and some state was unhappy when I upgraded to Tiger? I'd still like to know the cause, but here's what I did to cure it ...
    I backed the whole array up using Disk Utility. Just to be safe.
    I used "diskutil destroyRAID" for which the documentation is nothing short of terrifying. I pretty much expected all the volumes to be wiped and to have to restore the whole thing. I was pleasantly surprised to find that the two volumes in the mirror were simply disconnected and both were intact duplicates of one another.
    I unmounted one of the two copies, used "diskutil enableRAID mirror" to turn the other one into a single-set RAID and used Disk Utility to add the unmounted disk back into the array and started the rebuild.
    All appears to be well again. This was under 10.4.6 so your mileage may vary. When in doubt, back EVERYTHING up first.
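
    Condensed into commands, the recovery described above looks roughly like the sketch below. The device nodes are the ones from the checkRAID listing earlier in this thread and the verbs are the 10.4-era ones the poster used; treat it as an outline rather than a recipe, and back everything up first, exactly as they did.
    # break the mirror apart; in this case both halves remained intact, independent volumes
    sudo diskutil destroyRAID disk2
    # unmount one of the two copies so it can be re-added later
    diskutil unmount disk3s3
    # turn the other copy back into a one-member mirror set
    sudo diskutil enableRAID mirror disk1s10
    # finally, add the unmounted copy back into the array in Disk Utility and let it rebuild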

  • Degraded RAID set now missing

    I have, or should I say had, a mirrored RAID set of two 250 GB drives. It would no longer boot, so I bought a new drive (same size) and attempted to rebuild the set. This went fine for a time but then failed. Now I can no longer see the RAID set. I have tried booting from the server CD, and it stays booted just long enough to show me that the RAID set is gone, or to open Terminal, before it shuts itself down.
    I desperately need the data on this server. My most recent backup to tape was the end of January.
    G5 Server   Mac OS X (10.3.9)  

    Both hard drives were ruined by a power surge.
