Rebuild a degraded RAID1

I'm currently preparing for emergency cases, specifically rebuilding a mirrored RAID.
My hardware is an Xserve G5 running OS X Server 10.4.8 with two mirrored 80 GB hard disks. I do backups by creating a clone of the RAID volume via Intego's Personal Backup X4.
Basically I have to deal with the situation where one hard disk fails, the RAID1 degrades and the mirror needs to be rebuilt:
While it is clear that I swap the broken hard disk for a new one of the same size or larger, I found conflicting information when searching the forum:
Do I need to rebuild the mirror manually, or is it done automatically as soon as the new HD is put in?
In Disk Utility the option to automatically rebuild the RAID is unchecked and greyed out. Can I turn this on while the RAID is running (maybe via the command line)?
If a manual rebuild is required, how do I proceed after putting in the new HD? Can I simply open a Terminal session and execute "diskutil repairMirror <raid volume> <new disk>"?
<raid volume> would be the label of the RAID (as it is shown at the desktop's HD icon)?
<new disk> would be what exactly? And how can I determine this name?
When doing this with Disk Utility, its documentation says to select the RAID drive and then drag & drop the new HD into the RAID volume view on the right side:
"Select the RAID drive" would mean which line: "76,2 GB serverraid" (the first line) or "serverraid" (the second line)? When marking the first line, it shows the RAID on the right; when marking the second line, it seems to let me build a new RAID based upon the selected one?
"Drag & drop the new HD" would refer to which line: the first (e.g. "76,7 GB Maxtor 1234ABC") or the second (e.g. "76,7 GB disk0s3")?
(Hope I could explain it right; if not, how can I insert screenshots here? I would then generate one so you get an understanding.)
Is there anything else to take care of, or are additional steps required?
I know these are probably stupid questions, but it is essential for me to get familiar with this so it works on the first try (when it is needed).

>Do I need to rebuild the mirror manually, or is it done automatically as soon as the new HD is put in?
That depends on whether the RAID is set to auto-rebuild.
>In Disk Utility the option to automatically rebuild the RAID is unchecked and greyed out. Can I turn this on while the RAID is running (maybe via the command line)?
This can only be set when the RAID is created. Offhand I don't know of any way of changing it once the RAID is live.
>If a manual rebuild is required, how do I proceed after putting in the new HD? Can I simply open a Terminal session and execute "diskutil repairMirror <raid volume> <new disk>"?
Yes.
><new disk> would be what exactly? And how can I determine this name?
Use diskutil list.
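A minimal sketch of the flow (the device identifiers here are hypothetical, so read the real ones off your own output; I'm also assuming repairMirror accepts the RAID's volume name, as the syntax you quoted suggests):

diskutil list
# the freshly inserted, empty disk typically shows up as a plain
# Apple_partition_scheme entry with no RAID slice, e.g. disk1
diskutil repairMirror serverraid disk1
# diskutil attaches disk1 to the mirror and starts the resync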
"select the RAID" drive would mean which "76,2 GB serverraid" (first line) or "serverraid" (second line). When marking the first line, it shows the RAID in the right, when marking the second line, it seems to let me build a new RAID based upon the selected one as base ?
To repair a RAID via the GUI, select the existing RAID volume and drag another disk to the pane where it lists the disks that constitute the array.
>I know these are probably stupid questions, but it is essential for me to get familiar with this so it works on the first try (when it is needed).
I disagree. The time to try this first is BEFORE you need it.
When you're in a failure state the adrenaline is running and you're likely to make mistakes. By testing this first you'll be more confident when the time comes to do it for real.
Don't worry if you don't have a spare Xserve to try this on. The whole RAID process is identical in Mac OS X client, and you can test all of this with a couple of FireWire drives attached to your desktop.
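For example, a throwaway test mirror might be set up like this (a sketch using the Tiger-era createRAID syntax; disk2 and disk3 stand in for the two FireWire drives, and this erases them, so verify the identifiers with diskutil list first):

# build a mirrored set named "testraid", formatted HFS+, from the two drives
diskutil createRAID mirror testraid HFS+ disk2 disk3
# unplug one drive to simulate a failure, attach a replacement,
# then practice the repair exactly as described above
diskutil checkRAID
# checkRAID reports each set's status, including degraded/rebuilding members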

Similar Messages

  • Raid Performance and Rebuild Issues

    Rebuilding a Raid array
    What happens when you have a Raid array and one (or more) disk(s) fail?
    First let's consider the work-flow impact of using a Raid array or not. You may want to refresh your memory about Raids by reading Adobe Forums: To RAID or not to RAID, that is the... again.
    Sustained transfer rates are a major factor in determining how 'snappy' your editing experience will be when editing multiple tracks. For single-track editing most modern disks are fast enough, but when editing complex codecs like AVCHD, DSLR, RED or EPIC, when using uncompressed or AVC-Intra 100 Mbps codecs, or when using multi-cam or multiple tracks, the sustained transfer speed can quickly become a bottleneck and limit the 'snappy' feeling during editing.
    For that reason many use raid arrays to remove that bottleneck from their systems, but this also raises the question:
    What happens when one or more of my disks fail?
    Actually, it is simple. Single disks or single level striped arrays will lose all data. And that means that you have to replace the failed disk and then restore the lost data from a backup before you can continue your editing. This situation can become extremely bothersome if you consider the following scenario:
    At 09:00 you start editing, you finish by 17:00, and you have a backup scheduled at 21:00, like you do every day. At 18:30 one of your disks fails, before your backup has been made. All your work from that day, including your auto-save files, is irretrievably lost. You only have the previous day's backup to restore from, and that cannot be done before you have installed a new disk.
    This kind of scenario is not unheard of and, even worse, it usually happens at the most inconvenient time, like on a Saturday afternoon before a long weekend when you can only buy a new disk on Tuesday... (sigh).
    That is the reason many opt for a mirrored or parity array, despite the much higher cost (dedicated raid controller, extra disks and lower performance than a striped array). They buy safety, peace-of-mind and a more efficient work-flow.
    Consider the same scenario as above and again one disk fails. No worry, be happy!! No data is lost at all and you can continue editing, making the last changes of the day. Your planned backup will proceed as scheduled and the next morning you can continue editing, after having the failed disk replaced. All your auto-save files are intact as well.
    The chances of two disks failing simultaneously are extremely slim, but if cost is no object and safety is everything, some consider using a raid6 array to cover that eventuality. See the article quoted at the top.
    Rebuilding data after a disk failure
    In the case of a single disk or striped arrays, you have to use your backup to rebuild your data. If the backup is not current, you lose everything you did after your last backup.
    In the case of a mirrored array, the raid controller will write all data on the mirror to the newly installed disk. Consider it a disk copy from the mirror to the new disk. This is a fast way to get back to full speed. No need to get out your (possibly older) backup and restore the data. Since the controller does this in the background, you can continue working on your time-line.
    In the case of parity raids (3/5/6) one has to make a distinction between distributed parity raids (5/6) and dedicated parity raid (3).
    Dedicated parity, raid3
    If a disk fails, the data can be rebuilt by reading all remaining disks (all but the failed one) and writing the rebuilt data only to the newly replaced disk, so writing to a single disk is enough to rebuild the array. There are actually two possibilities that can impact the rebuild of a degraded array. If the dedicated parity drive failed, the rebuilding process is a matter of recalculating the parity info (relatively easy) by reading all remaining data and writing the parity to the new dedicated disk. If a data disk failed, then the data needs to be rebuilt, based on the remaining data and the parity, and this is the most time-consuming part of rebuilding a degraded array.
    Distributed parity, raid5 or raid6
    If a disk fails, the data can be rebuilt by reading all remaining disks (all but the failed one), rebuilding the data, recalculating the parity information and writing both to the replacement disk. This is always time-consuming.
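    The arithmetic behind all parity rebuilds is plain XOR. A minimal sketch in shell, with made-up byte values, just to show the identity at work:

    # parity is the byte-wise XOR of the data disks: p = d1 ^ d2 ^ d3
    d1=$((0xA5)); d2=$((0x3C)); d3=$((0x0F))
    p=$(( d1 ^ d2 ^ d3 ))
    # if disk 2 dies, its contents fall out of the survivors plus parity
    d2_rebuilt=$(( p ^ d1 ^ d3 ))   # yields 0x3C again
    printf 'parity=%#x rebuilt=%#x\n' "$p" "$d2_rebuilt"

    This is also why rebuild time scales with reading every surviving disk in full, and why a parity rebuild takes so much longer than a mirror's straight copy.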
    The impact of 'hot-spares' and other considerations
    When an array is protected by a hot spare and a disk drive in that array fails, the hot spare is automatically incorporated into the array and takes over for the failed drive. When an array is not protected by a hot spare and a disk drive fails, remove and replace the failed disk drive; the controller detects the new disk drive and begins to rebuild the array.
    If you have hot-swappable drive bays, you do not need to shut down the PC, you can simply slide out the failed drive and replace it with a new disk. Remember, when a drive has failed and the raid is running in 'degraded' mode, there is no further protection against data loss, so it is imperative that you replace the failed disk at the earliest moment and rebuild the array to a 'healthy' state.
    Rebuilding a 'degraded' array can be done automatically or manually, depending on the controller in use and often you can set the priority of the rebuilding process higher or lower, depending on the need to continue regular work versus the speed required to repair the array to its 'healthy' status.
    What are the performance gains to be expected from a raid and how long will a rebuild take?
    The most important column in the table below (not reproduced here) is the sustained transfer rate. It is indicative and no guarantee that your raid will achieve exactly the same results; that depends on the controller, the on-board cache and the disks in use. The more tracks you use in your editing, the higher the resolution and the more complex your codec, the more you will need a high sustained transfer rate, and that means more disks in the array.
    Sidebar: While testing a new time-line for the PPBM6 benchmark, using a large variety of source material including RED and EPIC 4K, 4:2:2 MXF, XDCAM HD and the like, the required sustained transfer rate for simple playback of a pre-rendered time-line was already over 300 MB/s, even with 1/4 resolution playback, because of the 4:4:4:4 full-quality deBayering of the 4K material.
    Final thoughts
    With the increasing popularity of file-based formats, the importance of backups of your media cannot be stressed enough. In the past one always had the original tape if disaster struck, but no longer. You need regular backups of your media and projects. With single disks and (R)aid0 you risk complete data loss, because of the lack of redundancy. Backups cost extra disks and extra time to create, and to restore in case of disk failure.
    The need for backups in case of mirrored raids is far less, since there is complete redundancy. Sure, mirrored raids require double the number of disks but you save on the number of backup disks and you save time to create and restore backups.
    In the case of parity raids, the need for backups is greater than with mirrored arrays but less than with single disks or striped arrays, and with 'hot-spares' the need for backups is further reduced. Initially, a parity array may look like a costly endeavor; the raid controller and the number of disks make it expensive. But consider what you get: more speed, more storage space, easier administration, fewer backups required, less time for those backups, and continued working in case of a drive failure, even if somewhat sluggishly. The peace-of-mind it brings is often worth more than continuing with single disks or striped arrays.

    Raid3 is better suited for video editing work, because it is more efficient when using large files, as clips usually are. Raid5 is better suited in high I/O environments, where lots of small files need to be accessed all the time, like news sites, webshops and the like. Raid3 will usually have a better rebuild time than raid5.
    But, and there is always a but, raid3 requires an Areca controller. LSI and other controller brands do not support raid3. And Areca is not exactly cheap...
    Keep in mind that a single disk shows declining performance as its fill rate increases, whereas a Raid3 or Raid30 will not show that behavior: performance remains nearly constant even as fill rates go up. (The original post illustrated this with two charts, created with Samsung Spinpoint F1 disks, an older and slower generation of disks, and an older-generation Areca ARC-1680iX-12.)

  • LSI_SAS Driver error?

    Hello, we have this new box, a TS100 6432 17U, and RAID 10 is configured with 4 x 500 GB physical disks to give us a 1 TB virtual drive. Recently the following log entries have started to show up in the system events.
    - The driver has detected an error on device \Device\RaidPort0
    Then at some point all hell breaks loose and the array goes into rebuild mode (degraded). We also see a lot of suspicious log entries in the MegaRAID Storage Manager (MSM), like:
    - Link lost on SAS port: 0 PHY = 0
    - Link lost on SAS port: 2 PHY = 2
    - Link lost on SAS port: 3 PHY = 3
    - Controller ID: 0 PD Removed 0:1
    - Controller ID: 0 PD Inserted 0:1
    So I kinda think that I should change DISK 0, because I also receive a lot, and I mean a lot, of these:
    Unrecoverable medium error during rebuild: PD 0 Location...
    Any idea besides this?

    welcome to the forum!
    you'd be better off calling lenovo support on this one.   if you do have a bad drive on port 0, they'll be able to send you a new one right away.   until then, you'll be stuck with a degraded array -- which i know from prior experience can be painfully slow.   if it's not a bad drive then they'll be able to diagnose it.   trying to solve something like this on a forum isn't easy.
    good luck and please post back with your results.
    ThinkStation C20
    ThinkPad X1C · X220 · X60T · s30 · 600

  • Degraded mirrored raid- won't rebuild

    I've had two 500 GB drives mirrored. When I set up the drives I did not check "automatically rebuild".
    I just discovered (not too long after installing Leopard) that one slice was "damaged" and the raid was degraded. I tried to rebuild, but got an error message that said Disk Utility was unable to recognize the filesystem. I finally erased the "damaged" drive, verified it, and am ready to try rebuilding the set, but I don't want to lose data from the remaining drive of the original raid.
    How should I proceed?
    Thanks-
    Ford

    Well I'm sure VERY few people have multiples of EVERYTHING. I still have my data.
    All I want to do is mirror a fresh drive with the other drive that was part of a mirrored raid set. That should be possible... it seems...
    Please don't admonish me for not having more backups. I have a total of 1.25 terabytes of HD space with a lot of redundancy.

  • Rebuild RAID1 - How long is this supposed to take?

    Hello all
    i had a drive die today on an xserve raid1. i replaced the drive and the rebuild started (1:19PM CST). it's been a few hours and the GUI on Disk Utility hasn't really changed much. all it says is:
    Rebuilding RAID - and after this it has a progress bar with a "barbers pole" spinning blue and white.
    there is no indication of how long it has left to build. it doesn't even look like there is any activity on the LEDs on the server in the appropriate drive bays. i'm scared to cancel out of the rebuild but i have no idea if it's even doing anything.
    fyi - two 320GB drives. about 109GB used on the originating drive....
    any ideas?
    thanks,
    nick

    fixed it myself. rebooted the server and now it's doing the rebuild properly.
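    For anyone else stuck watching the barber's pole: the command line gives a more concrete readout than Disk Utility's progress bar. A sketch (verbs per the Tiger/Leopard-era diskutil, so check the man page on your version):

    # report each RAID set's status; a resyncing member shows a percentage
    diskutil checkRAID
    # re-run it every few minutes to confirm the percentage is moving
    while true; do diskutil checkRAID; sleep 300; done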

  • Should RAID1 boot degraded?

    I can no longer boot from my RAID1 array with one of the drives disconnected.
    I know this used to work but I haven't tested it for a couple of years now and I'm wondering whether it's still supposed to work or whether I now have to assemble the arrays manually.
    I have three sets of partitions that assemble as md0 (/), md1 (swap) and md2 (/home), plus several others that are probably not relevant.
    My HOOKS line in mkinitcpio.conf is:
    HOOKS="base udev autodetect pata scsi sata mdadm_udev resume filesystems"
    What I see if I boot with one drive disconnected is:
    Booting the kernel.
    :: running early hook [udev]
    :: running hook [udev]
    :: Triggerng events...
    :: running hook [resume]
    [      2.682471] read-error on swap-device (9:1:0)
    :: mounting '/dev/md0' on real root
    Segmentation fault
    then I get dropped to the shell.
    I guess the segfault occurs because the array md0 has not been assembled.
    cat /proc/mdstat shows all the arrays are recognised but none have been assembled, eg:
    md0: inactive sda1[0](S)
    I haven't yet tried assembling by hand - mostly because I've forgotten where I mount the assembled (degraded) array to continue.
    Anyone know whether this should work or not or do I have to assemble the arrays by hand now?
    Pete
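    A sketch of the manual route from the emergency shell, based on the device names in the post (on Arch's initramfs the real root is mounted at /new_root; double-check before running):

    # assemble the root array; --run starts it even with a member missing
    mdadm --assemble --run /dev/md0 /dev/sda1
    # mount it where the init script expects the real root, then resume boot
    mount /dev/md0 /new_root
    exit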


  • Degraded array will not rebuild with new disk

    what do I need to do? inserting a new disk doesn't start the rebuild.
    I have tried everything that I know to do from RAID Admin 1.5.1.
    thanks

    Justinian, Hello and welcome to the Apple Boards,
    A couple of tips on using the boards:
    1) Start a new topic rather than resurrecting an old one. This will help to bring your post to a fresh set of eyes rather than digging up an old topic that people feel they have already read. It will also give you the capability to mark it as "solved" because it is your topic not someone else's.
    2) Start your post with what your set-up is and what you have already tried. This will keep people from suggesting things you've already done or asking questions about your set-up like "What kind of RAID is it?"
    So assuming you have a RAID 5 set-up, did you lose a drive? And you pulled the drive, replaced it and made sure the drive was available in RAID Admin, but it didn't rebuild automatically?
    Was the drive an exact replacement from Apple? It's not one byte smaller than the others or something goofy like that? The drive's SMART status shows up as OK?
    About calling AppleCare - are you calling the number on the front of the contract you received back from Apple, or just the general help number? I've only ever used the number I was given when I enrolled our Xserves and Xserve RAIDs and I have never had to hold when I've called.
    =Tod

  • Can I create a mirrored RAID1 from an existing striped RAID0 without erasing the data?

    I have a 1.5 TB striped RAID0 with 3 500 GB drives. I have a clone of the data from the 1.5TB RAID on a non-RAID 1TB drive and a 500GB drive. I'd like to create a RAID1 mirrored set out of all of these disks. I can partition the 1TB into 2 500GB drives and combine that with the other 500 GB. I'd rather not erase the data to create the mirror. Is there a way to rebuild a mirrored set from the existing 1.5 TB striped set, or do I need to start all over? I have a third copy of the data that I could put on the new mirrored RAID if necessary.
    The data is Aperture, iTunes and iMovie libraries. To back up a minute, is having a RAID1 a good option or is there a better solution that I am not considering?
    PS I'm using a 2.53 GHz MacBook Pro unibody with 8 GB of RAM.

    First off a mirrored RAID requires two drives of equal size (could be two striped RAIDs of equal size.) So, as I understand what you have to work with you can create a single 500 GB mirrored RAID using two of the 500 GB drives.
    You could create a striped RAID array using two of the 500 GB drives, then combine it with the 1 TB drive you have to create a 1 TB mirrored RAID. But this would not be the best alternative because if one of the smaller drives in the striped array fails then you lose everything on those drives. Not so bad as long as the single 1 TB drive is OK.
    Also, you might find this information helpful:
    RAID Basics
    For basic definitions and a discussion of what a RAID is and the different types of RAIDs, see RAIDs. For additional discussion, plus advantages and disadvantages of RAIDs and different RAID arrays, see:
    RAID Tutorial;
    RAID Array and Server:
    Hardware and Service Comparison.
    Hardware or Software RAID?
    RAID Hardware Vs RAID Software - What is your best option?
    RAID is a method of combining multiple disk drives into a single entity in order to improve the overall performance and reliability of your system. The different options for combining the disks are referred to as RAID levels. There are several different levels of RAID available depending on the needs of your system. One of the options available to you is whether you should use a Hardware RAID solution or a Software RAID solution.
    RAID Hardware is always a disk controller to which you can cable up the disk drives. RAID Software is a set of kernel modules coupled together with management utilities that implement RAID in Software and require no additional hardware.
    Pros and cons
    Software RAID is more flexible than Hardware RAID. Software RAID is also considerably less expensive. On the other hand, a Software RAID system requires more CPU cycles and power to run well than a comparable Hardware RAID system. Also, because Software RAID operates on a partition-by-partition basis, where a number of individual disk partitions are grouped together, as opposed to Hardware RAID systems which generally group together entire disk drives, Software RAID tends to be slightly more complicated to run. This is because it has more available configurations and options. An added benefit of the slightly more expensive Hardware RAID solution is that many Hardware RAID systems incorporate features that are specialized for optimizing the performance of your system.
    For more detailed information on the differences between Software RAID and Hardware RAID you may want to read: Hardware RAID vs. Software RAID: Which Implementation is Best for my Application?
    Do You Really Need a RAID?
    There is only one thing a RAID provides - more space. Beyond that a RAID can't help you with:
    Accidental deletion or user error
    Viruses or malware
    Theft or catastrophic damage
    Data corruption due to other failed hardware or power loss
    Striped RAIDs have a higher failure risk than a single drive
    The purpose of a RAID is to provide high speed mass storage for specialized needs like video editing, working with extremely large files, and storing huge amounts of data.
    If your array fails it means complete loss of data and hours of time to rebuild. RAIDs degrade over time, necessitating many hours of restoration. And if you don't know much about RAIDs, then you really don't need one.
    You can use a RAID for backup. But unless your backup needs involve TBs of data requiring rapid and frequent access, why bother? Time Machine works in the background; it's not like you have to sit there waiting for your backup to be completed. Furthermore, you're buying two drives possibly to solve a problem where a single drive will do. And one drive is less expensive than two.
    Ignoring overhead, two drives in a RAID 0 (striped) array should perform about twice as fast. However, as the array fills up with files, that performance will degrade.
    RAID was a technology that in its time was meant to solve a problem. Large-capacity, fast drives were extremely expensive. Small drives were cheaper but slower. However, combining these cheaper drives into arrays gave faster performance and the larger capacity needed for data storage. Thus the reason why it's called Redundant Array of Inexpensive Drives. But today you can buy a 3 TB drive with performance that's better than the 1 TB drives of two or three years ago.

  • RAID1 - harddisk partly dropped out

    Hi all,
    I'm irritated by the behavior of my RAID1, because one of the hard disks partly dropped out of the array.
    I have two 1 TB hard disks in my machine. I partitioned both with a 100 MB partition for /boot and a second partition using almost all of the remaining space (in fact, I partitioned one of them and cloned the partition table). At the end of the disks there are some MB of unused space.
    I created a RAID1 via mdadm on these hard disks, using both 100 MB partitions for the first RAID device and the two big ones for the second. Then I installed /boot on the 100 MB device and the rest of the system into an encrypted LUKS container on the second RAID device.
    Everything worked fine and booted up as it should, but recently something strange happened: one of the partitions forming the RAID that contains the encrypted container dropped out of the RAID, while the /boot RAID is still in perfect shape (at least according to /proc/mdstat). I have rebooted a few times since; that doesn't change anything. The bigger RAID device remains with one disk (the same hard disk drops out every time), while the /boot RAID still has both partitions. I tried to reassemble, but that doesn't work either.
    I'm quite confused at the moment. I assume it can't be a problem with the data on the RAID, because the RAID is the ground structure of my configuration; everything was installed onto the RAID device, so in that case I think both partitions should have dropped out. The other possibility is a hardware problem, but because the /boot array is still working fine I'm fairly sure the hard disk itself is working. I can write to /boot without triggering problems; I even had a kernel update in the meantime, which is written into /boot. I also ran a smartctl test on the affected hard disk. The results seem fine to me, but I have to admit I'm not that experienced in this matter (I've never had a defective hard disk yet), so I'll place the results down below.
    I would be glad if someone can help me figure out why just one of the two partitions dropped out of my RAID and what's best to do.
    Both RAID devices were definitely working before (according to /proc/mdstat) and I haven't changed anything in their configuration. The only thing was activating SMART support, because I noticed it was disabled for some reason, but as far as I remember that wasn't coincident with the described problem, and besides, it should have affected both RAID devices.
    Thanks in advance!
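    For reference, putting a dropped member back usually looks like this (a sketch; the md and partition names are assumptions based on the description above):

    # see which member mdadm thinks is missing
    mdadm --detail /dev/md1
    # try a re-add first; with a write-intent bitmap this only resyncs changes
    mdadm /dev/md1 --re-add /dev/sdb2
    # if the re-add is refused, a plain add triggers a full rebuild instead
    mdadm /dev/md1 --add /dev/sdb2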
    smartctl -t long /dev/sdb
    #waited
    smartctl -a /dev/sdb > down-below
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.7.9-1-ARCH] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Model Family: Western Digital Caviar Black
    Device Model: WDC WD1002FAEX-00Z3A0
    Serial Number: WD-WCATR3167936
    LU WWN Device Id: 5 0014ee 204f24ec5
    Firmware Version: 05.01D05
    User Capacity: 1.000.204.886.016 bytes [1,00 TB]
    Sector Size: 512 bytes logical/physical
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS (minor revision not indicated)
    SATA Version is: SATA 2.6, 6.0 Gb/s
    Local Time is: Tue Feb 26 23:31:02 2013 CET
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    General SMART Values:
    Offline data collection status: (0x82) Offline data collection activity
    was completed without error.
    Auto Offline Data Collection: Enabled.
    Self-test execution status: ( 0) The previous self-test routine completed
    without error or no self-test has ever
    been run.
    Total time to complete Offline
    data collection: (17280) seconds.
    Offline data collection
    capabilities: (0x7b) SMART execute Offline immediate.
    Auto Offline data collection on/off support.
    Suspend Offline collection upon new
    command.
    Offline surface scan supported.
    Self-test supported.
    Conveyance Self-test supported.
    Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering
    power-saving mode.
    Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported.
    General Purpose Logging supported.
    Short self-test routine
    recommended polling time: ( 2) minutes.
    Extended self-test routine
    recommended polling time: ( 200) minutes.
    Conveyance self-test routine
    recommended polling time: ( 5) minutes.
    SCT capabilities: (0x3037) SCT Status supported.
    SCT Feature Control supported.
    SCT Data Table supported.
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
    3 Spin_Up_Time 0x0027 177 169 021 Pre-fail Always - 4125
    4 Start_Stop_Count 0x0032 099 099 000 Old_age Always - 1455
    5 Reallocated_Sector_Ct 0x0033 159 159 140 Pre-fail Always - 324
    7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
    9 Power_On_Hours 0x0032 091 091 000 Old_age Always - 6749
    10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
    11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
    12 Power_Cycle_Count 0x0032 099 099 000 Old_age Always - 1411
    192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 50
    193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 1404
    194 Temperature_Celsius 0x0022 117 106 000 Old_age Always - 30
    196 Reallocated_Event_Count 0x0032 001 001 000 Old_age Always - 273
    197 Current_Pending_Sector 0x0032 200 199 000 Old_age Always - 111
    198 Offline_Uncorrectable 0x0030 200 199 000 Old_age Offline - 51
    199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
    200 Multi_Zone_Error_Rate 0x0008 194 183 000 Old_age Offline - 1224
    SMART Error Log Version: 1
    No Errors Logged
    SMART Self-test log structure revision number 1
    No self-tests have been logged. [To run self-tests, use: smartctl -t]
    SMART Selective self-test log data structure revision number 1
    SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
    1 0 0 Not_testing
    2 0 0 Not_testing
    3 0 0 Not_testing
    4 0 0 Not_testing
    5 0 0 Not_testing
    Selective self-test flags (0x0):
    After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.

    Ovion wrote: Isn't there a possibility to make the hard drive just ignore the faulty sectors?
    If you rebuild the bad array on the same drive, the bad sectors should automatically be ignored. As I understand it (please correct me if wrong), SATA drives 'swap out' bad sectors after they fail checksums, replacing them with spare ones (a drive feature for this purpose). No idea why the automatic swapping out of the bad apples led to the degraded state in your case. So faulty sectors are generally ignored; maybe there are just too many of them. You should really consider the advice above. In your output:
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    5 Reallocated_Sector_Ct 0x0033 159 159 140 Pre-fail Always - 324
    196 Reallocated_Event_Count 0x0032 001 001 000 Old_age Always - 273
    197 Current_Pending_Sector 0x0032 200 199 000 Old_age Always - 111
    198 Offline_Uncorrectable 0x0030 200 199 000 Old_age Offline - 51
    ID #5 shows the bad sectors already replaced from the drive's internal pool of spare sectors.
    ID #197 shows the number of sectors pending to be marked good/bad (this should be zero! Maybe that's the problem relating to the degraded state, or ID #198).
    edit: Note that for ID #5 the drive is close to the threshold already, and it is only going to get worse. If anything, use it for something you won't miss until then.
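    One way to force the pending sectors to resolve before re-adding the member is a destructive write pass. A sketch (this DESTROYS everything on the disk, so only run it on a member you are about to resync anyway):

    # write-mode badblocks overwrites every sector, forcing the drive
    # to remap any that fail
    badblocks -wsv /dev/sdb
    # afterwards Current_Pending_Sector should have dropped to zero
    smartctl -A /dev/sdb | grep -i -e pending -e reallocated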

  • RAID degraded - mirrored striped volumes - how to see if mirroring is working

    I've set up a mirrored raid drive. It keeps showing as 'degraded', even though I've tested each disk separately and they come up OK.
    What does 'rebuilding' mean in this report?
    How do I sort out this 'degraded' status?
    How can I tell if mirroring is working correctly?
    The set up is:
    ===============================================================================
    Name:                 Saturn
    Unique ID:            CF319F64-987B-42FC-8C14-751766B37A49
    Type:                 Mirror
    Status:               Degraded
    Size:                 8.0 TB (8000084869120 Bytes)
    Rebuild:              manual
    Device Node:          disk15
    #  DevNode   UUID                                  Status     Size
    0  -none-    384CF206-BCD5-4804-BDB8-FF1C956EEF64  Online     8000084869120
    1  -none-    F886018B-EEE1-4875-853E-BCD4298683F8  0% (Rebuilding) 8000084869120
    ===============================================================================
    Name:                 RAID0.1
    Unique ID:            384CF206-BCD5-4804-BDB8-FF1C956EEF64
    Type:                 Stripe
    Status:               Online
    Size:                 8.0 TB (8000219643904 Bytes)
    Rebuild:              manual
    Device Node:          -
    #  DevNode   UUID                                  Status     Size
    0  disk14s2  78CEBE17-8AFA-4849-A5B2-B73D9906FFE2  Online     2000054910976
    1  disk10s2  5D06DB08-6A3E-4041-A3F8-7E195D5B80DD  Online     2000054910976
    2  disk9s2   E430EC4F-AE2C-4B78-B2CC-04ED88315D3A  Online     2000054910976
    3  disk8s2   46EED2A6-BA53-4D54-8B69-FF5D650B97A0  Online     2000054910976
    ===============================================================================
    Name:                 RAID0.0
    Unique ID:            F886018B-EEE1-4875-853E-BCD4298683F8
    Type:                 Stripe
    Status:               Online
    Size:                 8.0 TB (8000084901888 Bytes)
    Rebuild:              manual
    Device Node:          -
    #  DevNode   UUID                                  Status     Size
    0  disk16s2  0B5223E9-2750-493B-A08E-01DD30E65065  Online     2000021225472
    1  disk5s2   F473AEC5-34A0-444E-AE62-DE755ECCE8A5  Online     2000021225472
    2  disk13s2  BDCBFE64-5771-4F3A-AECA-959B20844CD6  Online     2000021225472
    3  disk11s2  63AF3296-C427-4805-9FB5-B496205F49E8  Online     2000021225472
    ===============================================================================

    Hi,
    As I know it, RAID1, or mirroring, is the technique of writing the same data to more than one disk drive. Mirrors are usually used to guard against data loss due to drive failure. Each drive in a mirror contains an identical copy of the data, so there is no need to compare.
    If the drive is bootable, you should have got a warning when you booted the machine saying that either drive 0 or drive 1 was corrupt/non-bootable/missing, and you would then be given a choice to boot from the second drive.
    Kate Li
    TechNet Community Support
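    On the 'how do I sort out degraded' part concretely: with Apple software RAID the usual path is to let the resync shown in the listing finish, or to swap a genuinely failed member via diskutil. A sketch (the UUID is taken from the listing above, <newDevice> is a placeholder, and the exact appleRAID verbs vary by OS X version, so check diskutil appleRAID on your system first):

    # show set status; the member marked (Rebuilding) should show a
    # growing percentage if the resync is actually progressing
    diskutil appleRAID list
    # if a member has genuinely failed, remove it from the mirror set
    diskutil appleRAID remove F886018B-EEE1-4875-853E-BCD4298683F8 disk15
    # then add the replacement stripe set or disk back as a member
    diskutil appleRAID add member <newDevice> disk15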

  • RAID 1 comes up degraded after reboot, even though all components are new and no data is on the drives... help!

    Hi there, I'm trying to build a RAID1 (mirrored) from 2 identical drives in an external 2-bay enclosure. I'm able to build the RAID set, and it ends up having an online status when I'm finished. I'm also able to copy data to the new RAID and work on it. However, when I restart my computer the next day, the RAID comes up as degraded and automatically starts the (9-hour!) rebuild process, even if there is no data on the RAID (I've destroyed and recreated the RAID several times). I've already erased both drives with the zero-out option to map out any possible bad blocks, and I've also checked the connections in the enclosure and reseated the drives, although they mount just fine as individual disks. I'm kind of at a loss here. Any advice or experience with a similar issue? The drives and enclosure are all brand new (I know that's not a guarantee, but it's a bit early for wear-related failure as they've not been in use for more than a week). I'm running 10.6.8 on a 17" laptop, 2.66 GHz quad core with 8 GB of RAM. The enclosure is connected through USB 2.0. Any help would be greatly appreciated, thanks!
    -Andrew


  • RAID1 issues on K7N2 Delta2-LSR

    I originally started down the road of trying to setup a RAID0 with two new WD Raptors and discovered one was bad - WinXP Pro install fails on SATA RAID K7N2 Delta2-LSR
    So I continued on with installing WinXP on the remaining Raptor and figured I could set up a RAID1 array when the replacement Raptor arrived. Today, I popped in the replacement Raptor on the SATA2 controller, booted into WinXP, and confirmed the new drive was recognized. I could see the new drive in Disk Management and all looked good.
    I rebooted into bios, enabled the RAID Config on SATA Primary and SATA Secondary, saved and exited.  Upon reboot, I was able to go into the nVidia RAID setup (F10). 
    The NVIDIA RAID UTILITY - Array List showed one Mirror 74G array with a Status of Degraded.  I hit [Enter] Detail and [A] Add to add the new hard drive to the array.  Then I selected [R] Rebuild which didn't seem to do anything (no HD activity), but the Array List reported the array Status as Healthy.  I never went through the process of defining a new array where drives are added from the left to the right side of the Define New Array screen since the Array List initially showed one Mirror 74G array, which is what I wanted.
    I exited [Ctrl-X] Exit and the array failed to boot.  The NVIDIA BIOS section "Detecting array..." failed to detect the array.  Rebooting confirmed the same. 
    To make matters worse, I couldn't get into Bios (Delete) or the NVIDIA Setup (F10).  I had to unplug the second Raptor to get into the Bios, upon which I disabled the RAID Config.  My original Raptor remains intact with WinXP.
    I would have figured that a mirrored array could be created on the fly from a master hard drive - is this not the case?
    How do I go about fixing this problem since now I cannot connect the second Raptor and get into Bios/NVIDIA Setup?  Does the BIOS need to be reset?
    Again, any help would be greatly appreciated!

    I initially thought a change in direction of the topic belonged in a new thread, but I see the value in keeping it all together. I'll keep everything from this point on in the 3rd thread.
    1st thread - WinXP Pro install fails on SATA RAID K7N2 Delta2-LSR - due to bad HD
    2nd thread - RAID1 issues on K7N2 Delta2-LSR - this thread
    3rd thread - Bad K7N2 Delta2-LSR?

  • Disk Utility / diskutil RAID Mirror rebuild fails: mistakenly thinks disk is too small

    Has anyone else had this issue?
    I've just migrated my Lion Server install to a slightly upgraded Mac Mini running Mavericks and am running into horrible trouble rebuilding my Apple RAID disk, which I established on Lion and would like to repair using Mavericks. The drives being used are the same units; the RAID got out of sync due to a power outage, and rather than wait to migrate until the old machine was done rebuilding the array like a week later, I just figured I could repair the RAID after migration.
    Not so, however. Disk Utility, when I attempt to rebuild the raid using the second drive, tells me that it is too small.
    When I look in diskutil, here's what it shows me:
    /dev/disk2
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:      GUID_partition_scheme                        *3.0 TB     disk2
       1:                        EFI EFI                     314.6 MB   disk2s1
       2:         Apple_RAID_Offline                         3.0 TB     disk2s2
       3:                 Apple_Boot Boot OS X               134.2 MB   disk2s3
    /dev/disk3
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:      GUID_partition_scheme                        *3.0 TB     disk3
       1:                        EFI                         209.7 MB   disk3s1
       2:                 Apple_RAID                         3.0 TB     disk3s2
       3:                 Apple_Boot Boot OS X               134.2 MB   disk3s3
    Notice that the EFI partition on disk3 (the intact RAID slice) is the correct size, 209.7 MB or 200 MiB. The EFI partition on disk2, however, which was created moments earlier by Disk Utility in its failed attempt to rebuild the array, is 100 MiB too large. As a result, the Apple_RAID partition on disk2 is about 100 MiB smaller than the one on disk3. Hence this error.
    So my question is: what the **** is going on here, and what can I do about it? Is this just a Mavericks bug, that it creates EFI partitions of the wrong size? The disks are seen as identical in diskutil. Is there anything I can do to rebuild my array?
    I've considered rebuilding it on the old server, but then the next time it gets messed up I'm just going to run into the same problem, as whichever slice fails will have a correctly-sized EFI partition, and if Disk Utility insists on creating an incorrectly-sized one during the rebuild process it's just going to fail again.
    Please halp! Thank you!

    OK, so here is the solution for my problem
    what i did was the following:
    - as i wrote before, i erased the disk with command line (diskutil zeroDisk disk1), then the pending sector was gone.
    - i decided to rebuild the raid in recovery mode to prevent further system crashes. so i added the previously zeroed disk1 to the degraded raid. when disk utility finished adding the disk it stalled but showed the raid ONLINE. so i decided to hard shutdown my mac mini - because it wasn't possible to restart via the interface of the recovery mode.
    after starting the mac the system loaded and everything was working again properly! including disk utility still showing raid1 online.
    btw, time machine was working correctly again when i deleted the .inprogress-file on the backup-disk.
    maybe someone else is able to use this information - that would be great!

  • REBUILD INDEX vs DROP/CREATE INDEX

    Hi there,
    Has anyone already experienced performance degradation after REBUILD INDEX? Would it be better to perform DROP/CREATE INDEX instead?
    Thank you very much for any reply.
    Best regards,
    Helena

    Hi,
    >>so is it then better to DROP/CREATE them ?
    Well, in fact I learned that when you rebuild an index, Oracle creates the new index from the old index and does not perform sorting while building it, which results in a performance enhancement. In this case, depending on the size of your data, you need sufficient space in a tablespace to store the old as well as the new index (while creating the new index). Another advantage is that Oracle can use the old index to answer queries while it builds the new one, using [alter index <index_name> rebuild online].
    Cheers

  • Rebuild Index VS Drop and Rebuild?

    Hey all,
    I am currently redesigning a weekly process (weekly because we pre-determined the rate of index fragmentation) for specific indexes that get massive updates. The old process has proved able to fix and maintain report performance.
    In this process we rebuild specific indexes using the below command:
    Alter index index_name rebuild online;
    This command takes around 10 min for selected indexes.
    Testing the below took 2 min for 6 or 7 indexes.
    Drop Index Index_Name;
    Create Index Index_Name on Table_name (Col1, col, ..);
    I know that indexes might not be used, and the application performance would be degraded with stale or non-existent stats. But our production and all our test DBs have procedures that daily gather stats on them.
    I tested the below script to make sure that execution plan does not change:
    SELECT ProductID, ProductName, MfrID FROM PRODUCT WHERE MFRID = 'Mfr1';
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 37 | 3737 | 13 (0)|
    | 1 | TABLE ACCESS BY INDEX ROWID| PRODUCT | 37 | 3737 | 13 (0)|
    | 2 | INDEX RANGE SCAN | PRODUCT_X1 | 37 | | 3 (0)|
    dropping PRODUCT_X1 and recreating it only changed the cost to 12.
    Gathering the stats again took the cost to 14.
    No performance issues were faced and index was still used.
    My question is: Is there any oracle recommendation that requires rebuilding the index instead of dropping and recreating it?
    Is there any side effect to my approach that I did not consider?
    Thank you

    Charlov wrote: I am currently redesigning a weekly process (weekly because we pre-determined the rate of index fragmentation)
    Nice. Not only have you defined and located index fragmentation, but you have also measured the rate at which it occurs.
    Could you please share your definition of index fragmentation, how you detect it, and how you measure the rate of change of this fragmentation.
    I am curious about all this since it can be repeatedly shown that Oracle btree indexes are never fragmented.
    http://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth-ii.pdf
    Charlov wrote: The old process has proved to be able to fix and maintain report performance.
    Great, so you have traces and run-time statistics from before and after the rebuild that highlight this mysterious fragmentation, show how the fragmentation caused the report to be slow, and detail what effects the rebuild had that caused the reports to perform better.
    Please share them as these would be an interesting discussion point since no one has been able to show previously how an index rebuild caused a report to run faster or even show the fragmentation that caused it to be slow in the first place.
    I mean, it would be a pity if the report was just slow because of an inefficient plan, and compressing an index or two that probably shouldn't be used in the first place appears to temporarily speed it up. Could you imagine rebuilding indexes every week because some developer put the wrong hint in a query? That would be pretty funny.
