Striped + Mirrored software RAID no longer mounting

I have a Mac Pro 4,1 using the 4 internal bays, each with a 1.5TB drive. Drives 1 & 2 are mirrored, drives 3 & 4 are mirrored, and those two mirror sets are then striped together into a volume called 'Titanic.' It's been working great for years, but just recently Titanic will no longer mount. My mirrors don't show any errors, but it seems I lost the stripe configuration. I updated to Yosemite last week. When running verify and repair I get:
Invalid number of allocation blocks. The volume could not be verified completely. File system check exit code is 8.
Running diskutil checkRAID in Terminal I get this:
AppleRAID sets (3 found)
===============================================================================
Name:                 Mirror2
Unique ID:            43EF978D-B5CF-4410-82A9-CB0E880B8759
Type:                 Mirror
Status:               Online
Size:                 1.5 TB (1499957919744 Bytes)
Rebuild:              manual
Device Node:          -
#  DevNode   UUID                                  Status     Size
0  disk2s2   29788AC9-0B7A-4E7E-94BF-E9FD8F697421  Online     1499957919744
1  disk3s2   6DDF597F-B4A3-4AEB-AB8F-77BD4360A791  Online     1499957919744
===============================================================================
===============================================================================
Name:                 Titanic
Unique ID:            140B7C4C-9A76-4B5B-BFBF-FFE7968A99AB
Type:                 Stripe
Status:               Offline
Size:                 3.0 TB (2999915773952 Bytes)
Rebuild:              manual
Device Node:          -
#  DevNode   UUID                                  Status     Size
-  -none-    0B55A0E0-CFB1-4B38-84CF-794F5F2AD1D6  Missing/Damaged
1  -none-    43EF978D-B5CF-4410-82A9-CB0E880B8759  Online     1499957886976
===============================================================================
===============================================================================
Name:                 Mirror1
Unique ID:            0B55A0E0-CFB1-4B38-84CF-794F5F2AD1D6
Type:                 Mirror
Status:               Online
Size:                 1.5 TB (1499957919744 Bytes)
Rebuild:              manual
Device Node:          disk5
#  DevNode   UUID                                  Status     Size
0  disk4s2   A2103B4C-FF2C-4126-8D5C-DCBF68890AC2  Online     1499957919744
1  disk1s2   B549FC0E-D281-487F-9017-61F077D0E3A4  Online     1499957919744
===============================================================================
Essentially if my mirrors are fine can I repair the striping? I have everything backed up offsite, but before I put my resources in that direction is there anything I can do to repair the problem?

My jab at it: was this Apple software RAID or SoftRAID, for the RAID 0+1?
Was the set created under Mountain Lion or earlier, and never rebuilt since?
The drives are enterprise-class 7.2K 1.5TB; have they been verified with checksums and scanned for weak sectors?
SoftRAID today checks in the background for media and I/O errors, as well as offering stepped-read mirrors (three members should be the minimum).
I don't see any reason not to erase and restore, probably to a new set of 3-4 2TB drives with the current OS.
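If you do go the erase-and-restore route, the whole job can be done from Terminal. A rough sketch, only worth trying after double-checking the offsite backup, since deleting and recreating the stripe destroys whatever is on it (the second device name below is an example; use the device nodes diskutil actually reports for the two mirrors):
diskutil appleRAID list
diskutil appleRAID delete 140B7C4C-9A76-4B5B-BFBF-FFE7968A99AB
diskutil appleRAID create stripe Titanic JHFS+ disk5 disk6
The first command confirms both mirrors are Online and shows their device nodes, the second removes the dead 'Titanic' stripe set, and the third re-stripes across the two mirror device nodes. Then restore the data from backup.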

Similar Messages

  • Can I install Snow Leopard and boot from software RAID 1 (mirror)?

    I have a Mac Pro (quad core 2.66 GHz) on order for my office workstation. Yeah, I know new ones are probably coming out early next year but due to budget and upcoming projects I need one now. What I'd like to do is replace the pre-installed 640GB drive with two 1 TB drives and mirror them. The 640GB drive will be redeployed to another machine in the office. Can I boot from the Snow Leopard install DVD, go to Disk Utility, setup a RAID 1 with the two drives, install Snow Leopard to the mirror and then boot off the mirror set?
    I've searched and found offhand comments to the effect that installing to and booting from a software mirror is OK, but I'd like to know for sure that it's OK. Any experience that you have with such a configuration would be nice to hear.

    Yes. But before you do read the following:
    RAID Basics
    For basic definitions and a discussion of what a RAID is and the different types of RAID, see RAIDs. For additional discussion, plus the advantages and disadvantages of RAIDs and different RAID arrays, see:
    RAID Tutorial;
    RAID Array and Server: Hardware and Service Comparison.
    Hardware or Software RAID?
    RAID Hardware Vs RAID Software - What is your best option?
    RAID is a method of combining multiple disk drives into a single entity in order to improve the overall performance and reliability of your system. The different options for combining the disks are referred to as RAID levels. There are several different levels of RAID available depending on the needs of your system. One of the options available to you is whether you should use a Hardware RAID solution or a Software RAID solution.
    RAID Hardware is always a disk controller to which you can cable up the disk drives. RAID Software is a set of kernel modules coupled together with management utilities that implement RAID in Software and require no additional hardware.
    Pros and cons
    Software RAID is more flexible than Hardware RAID. Software RAID is also considerably less expensive. On the other hand, a Software RAID system requires more CPU cycles and power to run well than a comparable Hardware RAID System. Also, because Software RAID operates on a partition by partition basis where a number of individual disk partitions are grouped together, as opposed to Hardware RAID systems which generally group together entire disk drives, Software RAID tends to be slightly more complicated to run. This is because it has more available configurations and options. An added benefit to the slightly more expensive Hardware RAID solution is that many Hardware RAID systems incorporate features that are specialized for optimizing the performance of your system.
    For more detailed information on the differences between Software RAID and Hardware RAID you may want to read: Hardware RAID vs. Software RAID: Which Implementation is Best for my Application?
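    To make the install-and-boot workflow concrete, here is a minimal Terminal sketch of creating the mirror from the Snow Leopard installer (Utilities > Terminal). The set name BootMirror and the disk numbers are placeholders; run diskutil list first, since the numbering on your machine may differ:
    diskutil list
    diskutil appleRAID create mirror BootMirror JHFS+ disk0 disk1
    Then quit Terminal and point the installer at the new 'BootMirror' volume; Disk Utility's RAID tab does the same thing graphically.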

  • How to restore a software raid mirror after a drive failure

    I set up a software RAID mirror with two hard drives in a Mac Pro. Then one failed, as reported by Disk Utility. I replaced the drive. It does not seem possible to restore this RAID short of copying the files to a third location and then erasing and establishing a new RAID. Is there a way to simply "restore"?

    Question: Do I need special software to administer the Mac Pro RAID Card or the Xserve RAID Card?
    Answer: Normal administration can be carried out using the RAID Utility (found in /Applications/Utilities) or by using the raidutil command. For more information refer to the User’s Guide or man raidutil.
    The command-line utility should be available in Single-User mode.
    To run RAID Utility, you may need to boot to an alternate source of Mac OS to be able to manipulate the Boot drive.
    This article suggests using the Make Spare command:
    RAID Utility 1.0 Help > If a Disk Fails
    Message was edited by: Grant Bennet-Alder
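    The question above was about a Disk Utility software mirror rather than the RAID card, so for completeness: a hedged sketch of re-adding a replacement disk to an AppleRAID mirror from Terminal (disk3 is an example; check diskutil list and diskutil appleRAID list for the real device name and the mirror set's UUID):
    diskutil appleRAID list
    diskutil appleRAID add member disk3 <UUID of the mirror set>
    If AutoRebuild is enabled on the set the rebuild starts on its own; otherwise kick it off from Disk Utility or with diskutil appleRAID update AutoRebuild 1 <set>.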

  • Software mirrored disk raid crashes Mac.

    I had two 1TB disks mirrored raid on a Mac Pro (software raid). Took them out and sold the computer. When I connect either one externally to my new iMac they crash my computer. Please help!

    What brand and connection type of enclosure ?

  • Striped software raid managed or referenced library

    I am setting up a striped software RAID using a couple of internal drives on my MacPro in hopes of getting a little more speed out of Aperture 3. My question is should I keep a managed library on the RAID or should I keep a referenced library on the main startup drive and store masters on the striped RAID. Which would be faster? I am still using Aperture 2 right now and waiting to get an answer before I upgrade to 3. I plan to backup whatever is on the RAID to a Drobo.
    Thanks for the help, y'all

    I would use referenced even if you were putting both on the RAID. Two drives striped is not going to be a big benefit.
    You will get more of a benefit from putting your library on a separate drive from your raw masters, and here's why: raw files fall squarely into the category of large-block sequential reads that most current drives handle just fine, and once a file is read it's in RAM, so I usually don't recommend putting the RAWs on a striped RAID of just two drives.
    The real storage bottleneck is when the library is trying to write all of its filesystem previews and database changes to the drive at the same time as it's trying to read raw masters.
    If you understand hard drive tech, you will know that asking one volume to handle small-block random read/writes and large-block sustained reads at the same time is a recipe for a slowdown...
    I'd put the Library on a dedicated SSD and your raws on your bigger internal drives.
    Or... Just try putting the library on one drive and relocating the masters to another drive and see if you notice a difference first. I sure did!

  • Software raid won't boot after updating to "mdadm" in mkinitcpio.conf

    After a power outage I've discovered the config I was using (with raid in mkinitcpio.conf) no longer works; it's mdadm now - that's fine.  I've updated that and re-run mkinitcpio successfully, however my system is unable to boot from the root filesystem /dev/md2, like so:
    Waiting for 10 seconds for device /dev/md2 ...
    Root device '/dev/md2' doesn't exist. Attempting to create it.
    ERROR: Unable to determine major/minor number of root device '/dev/md2'.
    You are being dropped to a recovery shell
        Type 'exit' to try and continue booting
    /bin/sh: can't access tty; job control turned off
    [ramfs /]#
    As far as I can see from reading various threads and http://wiki.archlinux.org/index.php/Ins … AID_or_LVM I'm doing the right things now (although I'm not using lvm at all, which makes the installation document a little confusing).
    I think I've included all the appropriate bits of config here that should be working.  I assume I've missed something fundamental - any ideas?
    menu.lst:
    # (0) Arch Linux
    title  Arch Linux  [/boot/vmlinuz26]
    root   (hd0,0)
    kernel /vmlinuz26 root=/dev/md2 ro
    initrd /kernel26.img
    mkinitcpio.conf:
    HOOKS="base udev autodetect pata scsi mdadm sata filesystems"
    fstab:
    /dev/md1 /boot ext3 defaults 0 1
    /dev/md2 / ext3 defaults 0 1
    mdadm.conf
    ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=7ae70fa6:9f54ba0a:21
    47a9fe:d45dbc0c
    ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=20560268:8a089af7:e6
    043406:dbdabe38
    Thanks!

    Hi magec, that's quite helpful - I've certainly got further.
    Before I was doing this to set up the chroot (which is what is suggested in the wiki article about setting up software raid):
    mdadm -A /dev/md1 /dev/sda1 /dev/sdb1
    mdadm -A /dev/md2 /dev/sda2 /dev/sdb2
    mount /dev/md2 /mnt
    mount /dev/md1 /mnt/boot
    mount -o bind /dev /mnt/dev
    mount -t proc none /mnt/proc
    chroot /mnt /bin/bash
    But based on your suggestion this is working better:
    mdadm -A /dev/md1 /dev/sda1 /dev/sdb1
    mdadm -A /dev/md2 /dev/sda2 /dev/sdb2
    mount /dev/md2 /mnt
    mount /dev/md1 /mnt/boot
    mount -t proc none /mnt/proc
    mount -t sysfs none /mnt/sys
    mount -n -t ramfs none /mnt/dev
    cp -Rp /dev/* /mnt/dev
    chroot /mnt /bin/bash
    The boot is now getting further, but now I'm getting:
    md: md2 stopped.
    md: bind<sdb2>
    md: bind<sda2>
    raid1: raid set md2 active with 2 out of 2 mirrors
    md2: detected capacity change from 0 to 32218349568
    mdadm: /dev/md2 has been started with 2 drives.
    md2: Waiting 10 seconds for device /dev/md2 ...
    unknown partition table
    mount: mounting /dev/md2 on /new_root failed: No such device
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    /bin/sh: can't access tty; job contol turned off
    [ramfs /]#
    The bit that really confuses me is this:
    [ramfs /]# cat /proc/mdstat
    Personalities : [raid1]
    md2 : active raid1 sda2[0] sdb2[1]
    31463232 blocks [2/2] [UU]
    md1 : active raid1 sda1[0] sdb1[1]
    208704 blocks [2/2] [UU]
    unused devices: <none>
    [ramfs /]# mount /dev/md2 /new_root
    mount: mounting /dev/md2 on /new_root failed: No such file or directory
    [ramfs /]# ls /dev/md2
    /dev/md2
    [ramfs /]#
    So the array is up, the device node is there but it can't be mounted?  Very strange.
    Last edited by chas (2010-05-02 11:24:09)
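    One more thing that sometimes bites here (a guess from the symptoms, not something confirmed in this thread): the initramfs assembles the arrays from /etc/mdadm.conf as it existed when the image was built, so after changing the hook it is worth regenerating both from inside the chroot. A sketch for that era of Arch, where the preset was still called kernel26 (on current systems it is linux):
    mdadm --examine --scan >> /etc/mdadm.conf
    mkinitcpio -p kernel26
    The first command appends fresh ARRAY lines (remove any stale ones first); the second rebuilds the initramfs with the mdadm hook baked in.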

  • [Solved] Move Software RAID 5 Array From NAS To Arch

    Edit: I probably never had a problem at all, the error in dmesg probably just scared me; after I disconnected it I noticed that /dev/md127 was 8.1 TB, the exact size of my RAID array node in my NAS (which was /dev/md0), I just overlooked it. I reconnected it to my pc and mounted /dev/md127 to /mnt/raid and got this wonderful sight!
    [bran@ra ~]$ ls /mnt/raid
    data lost+found meta sys
    [bran@ra ~]$ ls /mnt/raid/data/
    data ftproot module _NAS_Media _NAS_Piczza_ _NAS_Recycle_RAID _P2P_DownLoad_ stackable _SYS_TMP TV USBHDD
    download htdocs Movies _NAS_NFS_Exports_ NAS_Public nzbget-downloads PLEX_CONFIG sys tmp USBCopy
    I bought a Thecus N4520 a few months ago and it's ok but programs crash a lot and they're hard to debug, apps have to be updated manually and the whole thing is moderately underpowered. I'm trying to move the software RAID 5 array from the NAS to my desktop, the kernel seems to detect that there is a RAID array but all these drives aren't part of it. I'm pretty new to RAID and I'm just getting my feet wet with it.
    When I try to assemble the RAID array, it just tells me that it isn't an md array. How can I get it to build my array?
    [bran@ra ~]$ sudo mdadm --assemble /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mdadm: device /dev/sdb exists but is not an md array.
    Found this little chunk of info in dmesg, it says that the md devices have unknown partition tables.
    [ 3.262225] md: raid1 personality registered for level 1
    [ 3.262483] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.10
    [ 3.262508] iTCO_wdt: Found a Patsburg TCO device (Version=2, TCOBASE=0x0460)
    [ 3.262585] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
    [ 3.262933] md/raid1:md126: active with 4 out of 4 mirrors
    [ 3.262961] md126: detected capacity change from 0 to 536850432
    [ 3.263272] RAID1 conf printout:
    [ 3.263274] --- wd:4 rd:4
    [ 3.263276] disk 0, wo:0, o:1, dev:sdc3
    [ 3.263276] disk 1, wo:0, o:1, dev:sdb3
    [ 3.263277] disk 2, wo:0, o:1, dev:sdd3
    [ 3.263278] disk 3, wo:0, o:1, dev:sde3
    [ 3.263501] md: bind<sde4>
    [ 3.264810] md: bind<sdb2>
    [ 3.268262] async_tx: api initialized (async)
    [ 3.272632] md: raid6 personality registered for level 6
    [ 3.272636] md: raid5 personality registered for level 5
    [ 3.272637] md: raid4 personality registered for level 4
    [ 3.272905] md/raid:md127: device sdb2 operational as raid disk 1
    [ 3.272908] md/raid:md127: device sde2 operational as raid disk 3
    [ 3.272910] md/raid:md127: device sdd2 operational as raid disk 2
    [ 3.272911] md/raid:md127: device sdc2 operational as raid disk 0
    [ 3.273211] md/raid:md127: allocated 0kB
    [ 3.273241] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
    [ 3.273243] RAID conf printout:
    [ 3.273244] --- level:5 rd:4 wd:4
    [ 3.273245] disk 0, o:1, dev:sdc2
    [ 3.273246] disk 1, o:1, dev:sdb2
    [ 3.273247] disk 2, o:1, dev:sdd2
    [ 3.273248] disk 3, o:1, dev:sde2
    [ 3.273273] md127: detected capacity change from 0 to 8929230716928
    [ 3.273322] RAID conf printout:
    [ 3.273326] --- level:5 rd:4 wd:4
    [ 3.273329] disk 0, o:1, dev:sdc2
    [ 3.273331] disk 1, o:1, dev:sdb2
    [ 3.273332] disk 2, o:1, dev:sdd2
    [ 3.273360] disk 3, o:1, dev:sde2
    [ 3.283617] md126: unknown partition table
    [ 3.309239] md127: unknown partition table
    [ 3.312660] md: bind<sdb4>
    [ 3.318291] md/raid1:md124: not clean -- starting background reconstruction
    [ 3.318296] md/raid1:md124: active with 4 out of 4 mirrors
    [ 3.318333] md124: detected capacity change from 0 to 10736291840
    [ 3.318385] RAID1 conf printout:
    [ 3.318391] --- wd:4 rd:4
    [ 3.318395] disk 0, wo:0, o:1, dev:sdc4
    [ 3.318398] disk 1, wo:0, o:1, dev:sdb4
    [ 3.318402] disk 2, wo:0, o:1, dev:sdd4
    [ 3.318405] disk 3, wo:0, o:1, dev:sde4
    [ 3.319890] md124: unknown partition table
    [ 3.323462] md: bind<sde1>
    [ 3.338094] md/raid1:md125: active with 4 out of 4 mirrors
    [ 3.338225] md125: detected capacity change from 0 to 2146414592
    [ 3.338253] RAID1 conf printout:
    [ 3.338258] --- wd:4 rd:4
    [ 3.338262] disk 0, wo:0, o:1, dev:sdc1
    [ 3.338266] disk 1, wo:0, o:1, dev:sdb1
    [ 3.338268] disk 2, wo:0, o:1, dev:sdd1
    [ 3.338271] disk 3, wo:0, o:1, dev:sde1
    Here's my full dmesg
    mdadm.conf
    # The designation "partitions" will scan all partitions found in /proc/partitions
    DEVICE partitions
    ARRAY /dev/md127 metadata=1.2 name=(none):0 UUID=d1d14afc:23490940:a0f7f996:d7b87dfb
    ARRAY /dev/md126 metadata=1.2 name=(none):50 UUID=d43d5dd6:9446766e:1a7486f4:b811e16d
    ARRAY /dev/md125 metadata=1.2 name=(none):10 UUID=f502437a:d27d335a:d11578d5:6e119d58
    ARRAY /dev/md124 metadata=1.2 name=(none):70 UUID=ea980643:5c1b79e8:64f1b4cb:2462799b
    Last edited by brando56894 (2014-04-21 22:51:01)

    Sorry, I numbered them to show the flow of information; this was also just a place for me to store info as I worked through it. I managed to get it to work by creating a partition that takes up the whole drive and is actually 22 GB larger than on all the other drives (since I found out that they had root, swap and home partitions that are no longer needed).
    I should be able to resize the other partitions without a problem, correct? They're EXT4. Should I unmount the raid array and do them individually, remount the array, let it sync and do the next? Or just unmount the array, resize all of them, mount it and let it sync?
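    For anyone landing here with the same "exists but is not an md array" message: mdadm was pointed at the whole disks, while the dmesg output shows the RAID 5 superblocks live on the second partition of each drive. A hedged sketch of assembling from the member partitions instead (the md0 name is just a choice; device names follow the dmesg above):
    mdadm --examine /dev/sd[b-e]2
    mdadm --assemble --scan
    mdadm --assemble /dev/md0 /dev/sd[b-e]2
    mount /dev/md0 /mnt/raid
    The --examine pass confirms which partitions carry the superblocks; --assemble --scan uses mdadm.conf, and the explicit --assemble form works without it.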

  • Adding A New Drive To A Software RAID 5 Array

    Edit 3: Just mounted the partitions and I can delete them because they contain nothing special. Is it safe to expand the 2nd partition of each drive to fill up the left over 22 GB?
    Edit 2: I just deleted all the partitions off of my new drive and created one partition, then added it to the array and it works just fine. My next question is, can I delete all the smaller partitions and expand /dev/sd[x]2 to reclaim all the space (about 70 GB)?
    One of my drives failed and Western Digital sent me a new drive, except it was an external drive instead of an internal drive, so I cracked it open and the label looked different. Turns out it's just refurbished and it's the same model as my other drives (WD Caviar Green 3 TB).
    I've read through the wiki article on Software RAID and created the partitions exactly the same as on my other drives, but while creating the main 2.7 TB partition it says that the ending sector is out of range when it isn't. I'm new to all this so I have no idea what to do. From what I've read there normally aren't this many partitions per disk, correct? I also have md124, md125 and md126 for the other partitions; md127 is for the 2.7 TB partitions. I took the array out of my Thecus N4520. I have a 3 TB external drive and a 1TB internal, along with another 500 GB drive. Would I be better off destroying the RAID set and creating a fresh RAID 5 set, considering I'm losing about 90 GB to the smaller partitions I don't need?
    /dev/sdc
    Disk /dev/sdc: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 00636413-FB4D-408D-BC7F-EBAF880FBE6D
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    /dev/sdd
    Disk /dev/sdd: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): C5900FF4-95A1-44BD-8A36-E1150E4FC458
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    /dev/sde
    Disk /dev/sde: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2B5527AC-9D53-4506-B31F-28736A0435BD
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    new drive: /dev/sdf
    Disk /dev/sdf: 5860467633 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 93F9EF48-998D-4EF9-B5B7-936D4D3C7030
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860467599
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 5813281700 sectors (2.7 TiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00 Linux RAID
    2 47187968 47187969 1024 bytes FD00 Linux RAID
    3 46139392 47187951 512.0 MiB FD00 Linux RAID
    4 2048 20973559 10.0 GiB FD00 Linux RAID
    5 20973568 41945071 10.0 GiB FD00 Linux RAID
    When I type in 5860491263 as the end sector, gdisk does nothing and just wants more input. If I type +2.7T it accepts it, but really it just creates a partition that's 1 KB in size!
    I am able to create a 2.7 TB partition with an end sector of 5860467599; this won't screw anything up, will it?
    Edit 1: just tried it and got this
    [root@ra /home/bran]# mdadm --add /dev/md127 /dev/sdf2
    mdadm: /dev/sdf2 not large enough to join array
    [root@ra /home/bran]# fdisk -l /dev/sdf
    Disk /dev/sdf: 2.7 TiB, 3000559428096 bytes, 5860467633 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 93F9EF48-998D-4EF9-B5B7-936D4D3C7030
    Device Start End Size Type
    /dev/sdf1 41945088 46139375 2G Linux RAID
    /dev/sdf2 47187968 5860467599 2.7T Linux RAID
    /dev/sdf3 46139392 47187951 512M Linux RAID
    /dev/sdf4 2048 20973559 10G Linux RAID
    /dev/sdf5 20973568 41945071 10G Linux RAID
    Last edited by brando56894 (2014-04-28 00:47:29)
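    On the "not large enough to join array" message: before adding, it helps to compare what the array expects from a member with what the new partition actually provides. A hedged sketch (partition names as in the listings above):
    blockdev --getsize64 /dev/sdc2
    blockdev --getsize64 /dev/sdf2
    mdadm --examine /dev/sdc2 | grep -i 'dev size'
    mdadm --add /dev/md127 /dev/sdf2
    The first two print the raw partition sizes in bytes (the new one must be at least as large as the existing members), --examine shows what the md superblock says each member uses, and the --add is the retry once the new partition is big enough.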


  • Need help with formatting a software RAID 5 array with xfs

    Hi,
    i'm tying to format a software RAID 5 array, using the xfs filesystem with the following command:
    # mkfs.xfs -v -m 0.5 -b 4096 -E stride=64,stripe-width=128 /dev/md0
    but all I get is the attached error message. It works fine when I use the ext4 filesystem. Any ideas?
    Thanks!
    http://i.imgur.com/cooLBwH.png
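    A guess at the cause, since the screenshot isn't visible here: -m 0.5, -b 4096 and -E stride=64,stripe-width=128 look like mke2fs (ext4) options, which would explain why the same invocation works for ext4 but errors out for xfs. mkfs.xfs spells block size and stripe geometry differently; a sketch with values equivalent to stride=64/stripe-width=128 at a 4096-byte block size (256 KiB chunk, 2 data disks):
    mkfs.xfs -b size=4096 -d su=256k,sw=2 /dev/md0
    Or simply run mkfs.xfs /dev/md0 and let it read the geometry from the md device itself.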


  • Software RAID Failure - my experience and solution

    I just wanted to share this information with the iCloud community.
    I searched a bit and did not find much information that was useful with regard to my software RAID issue.
    I have 27 inch Mid 2011 iMac with SSD and Hard drive which has been great.
    I added an external hard drive (I think if I mention any brand name the moderator will delete this post) which includes a nice aluminum case with two 3 TB hard drives within it, and it has a big blue light on the front and is connected via Thunderbolt. This unit is about 2 years old and I have it configured as a 3 TB mirrored RAID (RAID 1) via a software RAID configured in Mac OS Disk Utility.
    I had at one point a minor glitch which was fixed using another piece of software (again if I mention a brand the moderator will delete this post) which is like a 'Harddrive Fighter' or similar type name LOL.   So otherwise that RAID has served me well as a site for my Time Machine back up and Aperture Vault, etc.  (I created a 1.5 TB Sparse bundle for Time Machine so that the backup would not use the entire 3 TBs)
    I recently purchased a second aluminum block of drives, and set that up as a 4 TB RAID 1.
    Each of the two RAIDs are set with the option of “Automatically rebuild RAID mirror sets” checked.
    I put only about 400 GB on the new RAID to let it sit for a 'burn-in' period.
    A few days ago the monitoring software from the vendor who sells the aluminum block of drives told me I had a problem.  One of the drives had “Failed.”   The monitoring software strangely enough does not distinguish the drives so you can figure out which pair had the issue, so I assumed it was the New 8 TB model.  Long story short, it was the older 6 TB model, but that does not matter for this discussion.
    I contacted the vender and this is part of their response.
    “This is an indication that the Disk Utility application in Mac had a momentary problem communicating with the drive mechanism. As a result, it marked that drive as "failed" in the header information. Unfortunately, once this designation is applied to a drive by the OS, the Disk Utility will thereafter refuse to attempt any further operations with that disk until the incorrect "failed" marker is manually cleared off the drive.”
    That did not sound very good to me…..back up killed by a SOFTWARE GLITCH?
    “The solution is to remove the corrupted volume header, and allow the generation of a new one….This command will need to be done for each disk in the array… (using Terminal)…
    diskutil zerodisk (identifier)
    …3. After everything is finished, you should be able to exit Terminal, and go back into the Disk Utility Application to re-configure the RAID array on the device.”
    Furthermore they said.
    “If the Disk Utility has placed a flag into the RAID array header (which exists on both drives) then performing this procedure on a single drive will not correct anything.”
    And…
    “When a drive actually does fail, it typically stops appearing in the Disk Utility application altogether. In that circumstance, it will never be marked "failed" by the Disk Utility, so the header erase operation is not needed.”
    This all sounded like a bad idea to me. And what does the Vendor's RAID monitor software say then?  "Disk Really Really FAILED, check for a fire."
    As I tried to figure out which drive was actually the bad RAID pair I stumbled on a solution.
    First I noted that the OS Disk Utility did NOT show a fault in the RAID. It listed both RAIDs as "Online." Thus no rebuilding was needed and it did not begin the rebuild process.
    The Vendor's disk monitor software saw some fault, but the Mac was still able to read and write to the RAID, both disks in the mirror.  I wrote a folder to the RAID and, with various rebooting steps, I pulled the "Bad" drive and looked at the "Good" drive: the folder was there. I put the Bad drive back in and pulled the Good drive, and the folder was there on the "bad" drive.  So it wrote to both drives.  AND THE VENDOR'S MONITORING SOFTWARE SHOWED THE PREVIOUSLY LABELED 'BAD' DRIVE AS 'GOOD' AND THE MISSING DRIVE SLOT AS 'BAD'.
    My stumbled-upon FIX: I moved a bunch of files off the failed RAID to the new RAID, but before I had moved the sparse bundle, a folder of 500 GB of movies, and some other really big folders, the Disk Utility window (which I still had open) showed that the RAID had a defect and began rebuilding the mirror set itself, out of the blue!   I don't know why this happened.  But moving about half of the data off of it perhaps did something?  Any ideas?
    This process took a few hours as best I can tell (I let it run overnight) and the next day the RAID was fine and the Vendor's RAID monitor no longer showed a fault.
    So, the Vendor's RAID monitoring software reported a "FAILED" drive without any specific error codes to look up.  Perhaps they could give the user more info on the specific fault?  The support line of the Vendor said with certainty "the Volume Header is corrupted" and THE ONLY FIX is to completely ZERO THE DRIVE! This was not necessary, as it turns out.
    And the stick in the eye to me…..
    “I've also sometimes seen the drives get marked as "failed" by the disk utility due to a shaky connection. In some cases, swapping the ends of the Thunderbolt cable will help with this. Something to try, perhaps, if your problems come back. “
    Ya Right…..
    Mike

    Follow up.
    After going through the zeroing process and rebuilding the RAID set three times, with various configurations, LaCie finally agreed to repair the unit under warranty.
    I tried swapping the power supplies and Thunderbolt cables, and tried taking the drive out of the chain with its newer big brother.  It still failed after a few days.
    I just wanted to share more of what I learned with regard to rebuilding the RAID sets via the Terminal.  The commands can be typed partially and a help paragraph will come up with VERY cryptic descriptions of the proper use of the commands.
    First, in Terminal you can use the command "diskutil appleRAID list" to list those drives which are in the RAID.  This gives you the ID for each physical drive. For example:
    AppleRAID sets (1 found)
    ===============================================================================
    Name:                 LaCie RAID 3TB
    Unique ID:            84A93ADF-A7CA-4E5A-B8AE-8B4A8A6960CA
    Type:                 Mirror
    Status:               Online
    Size:                 3.0 TB (3000248991744 Bytes)
    Rebuild:              manual
    Device Node:          disk4
    #  DevNode   UUID                                  Status     Size
    0  disk3s2   D53F6A81-89F1-4FB3-86A9-8808006683C2  Online     3000248991744
    -  disk2s2   E58CA8F5-1D2C-423A-B4BE-FBAA80F85879  Spare      3000248991744
    ===============================================================================
    In my situation with the failed RAID, I had an extra disk in this with the status of Missing/Failed. 
    The command is "diskutil appleRAID remove" and the cryptic help paragraph says:
    Usage:  diskutil appleRAID remove MemberDeviceName|MemberUUID
            RAIDSetVolumePath|RAIDSetDeviceName|RAIDSetUUID
    MemberDeviceName|MemberUUID is the member UUID (or device name) listed by the "diskutil appleRAID list" command, and
    RAIDSetVolumePath|RAIDSetDeviceName|RAIDSetUUID is the Device Node for the RAID, which here is /dev/disk4.
    I used this command to remove the third entry (Missing/Failed); I did not copy the Terminal window text on that one, so I cannot show the list of three disks.
    I could not remove the disk2s2 disk listed as SPARE, as it gave an error message:
    Michaels-iMac:~ mike_aronis$ diskutil appleraid remove E58CA8F5-1D2C-423A-B4BE-FBAA80F85879 /dev/disk4
    Started RAID operation on disk4 LaCie RAID 3TB
    Removing disk from RAID
    Changing the disk type
    Can't resize the file system on the disk "disk2s2"
    Error: -69827: The partition cannot be resized
    But I was able to remove it using the graphical interface Disk Utility program using the delete key.
    I then rebuilt the RAID set by dragging the second drive back into the RAID set.
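    For reference, I believe the Terminal equivalent of dragging the drive back into the set is the add verb. A rough sketch, assuming the second drive is disk2 and the set is still disk4 as above:
    diskutil appleRAID add member disk2 /dev/disk4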
    I could not get the command "diskutil appleRAID update AutoRebuild 1 /dev/disk4" to work; it tried to execute but it HUNG.  I put the two drives into my newer LaCie 2big as my own attempt at further troubleshooting the RAID (this was not suggested by LaCie tech), rebuilt the RAID, and now I am going to leave it set up that way for a few days before I ship the old unit back, just to see if the old drives work fine in the new RAID box (thus proving the RAID box is the problem). I tried the AutoRebuild 1 command just now and it gave an error:
    Michaels-iMac:~ mike_aronis$ diskutil appleraid update autorebuild 1 /dev/disk4
    Error updating RAID: Couldn't modify RAID (-69848)
    Michaels-iMac:~ mike_aronis$
    In my haste to rebuild the RAID set for the third or fourth time, as LaCie led me through the test-this-and-test-that phase, I forgot to click the "Auto Rebuild" option in the Disk Utility program.
    Question for the more experienced:
    As I was working on this issue, I noticed that each time I rebooted and did work in the Terminal (with and without the RAID plugged into the Thunderbolt connection) the list of drives would change, and my main boot drive would not stay listed as drive 0!  Sometimes it would be drive 0, sometimes the RAID would be listed as drive 0.  It's strange to me... I would have thought the designation for drive 0 and drive 1 would always go to my two built-in drives (SSD and spinning drive).
    Mike

  • Software RAID issue

    I have just installed two 200 GB Seagate HDs into my machine and set them up as a software RAID 0. Also installed are the original 80 GB drive and a Hitachi 160 GB drive. The two new drives functioned long enough for me to copy all of my raw digital home video.
    Now, when I try to boot the grey Apple screen comes up and the machine hangs. When I disconnect the drives, it boots right up.
    I thought maybe my power supply was giving up the ghost until I reconnected them and successfully rebooted with Drive Genius 1.2. I rebuilt the raid set (maintaining all data) and rebooted into OSX successfully. Woohoo!
    But not so fast. Next reboot started the same thing. And now Drive Genius has no effect.
    I really don't want to lose the data on these drives. Is there a problem with the software RAID in 10.4.7? Any ideas of other things to try? Help?!?!
    Thanks,
    Andy
    G4 1.25DP FW800   Mac OS X (10.4.7)  

    Possibly a couple of things "aren't quite right" with your setup.
    SoftRAID 3.5 supports a mirror for the boot drive, but not a striped set.
    The two IDE buses are not identical, making for some slight problems with RAID across the two.
    You really can't RAID if both drives are together in one drive cage on the same IDE bus.
    You need a PCI controller to RAID in an MDD. Most people at this point in time opt for Serial ATA drives and a controller. If you want a bootable SATA drive...
    http://www.firmtek.com/seritek
    If you must boot from a striped RAID: it's not always a good idea, and it may not offer much in real-world performance; check www.barefeats.com for some tests.
    Better: a dedicated boot drive for OS/apps, and use the other drives for media, data, scratch, even for /Users.
    Two ATA drives on the same bus have to share the bus and contend for I/O.
    You can create a striped RAID with Disk Utility and boot from it.
    I assume that you read the SoftRAID QuickStart and Manual, which really are helpful. There are a number of sites that go into more detail about RAID.
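    If you do go the Disk Utility route, the Terminal form (from memory of Tiger-era diskutil; run diskutil with no arguments to see the exact verbs on 10.4.7, and note the set name and disk numbers below are placeholders) is something like:
    diskutil createRAID stripe FastScratch HFS+ disk1 disk2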

  • Mac OS X 10.4 - Software RAID 10???

    Hello All...
    As a motion graphics artist by trade and a musician at night, I am constantly faced with the on-going dilemma of my storage needs. As you may probably already know, these two activities eat up drive space and require a speedy storage solution as well.
    Here's my current setup:
    - PowerMac G5 Dual 2GHz (PCI-X) w/ 3GB of RAM.
    - Sonnet Tempo-X eSATA 4x4 PCI-X card.
    - MacGurus Burly 4 bay SATA enclosure.
    - 2 250GB WD drives in the Burly enclosure.
    I have the two 250GB WD drives in the external SATA enclosure software-RAIDed together to form one RAID 0 (striped) volume, using the OS X disk utility RAID feature. While that has resulted in an excellent, fast, big disk... I am starting to run into errors, and starting to lose sleep over the fact that the current setup is not redundant / reliable.
    My initial solution was to buy two more disks, RAID them together as another striped volume and run backups from the first two-disk WD striped volume to the new striped RAID. So, four disks, two striped raids with two disks each.... But when I called Sonnet Tech Support, the support guy said that OS X 10.4 has support for RAID 10. That changes everything! If I could fill the Burly enclosure with 4 of the same disks and software RAID 10 them together, I'd be in business... I think...
    Does OS X 10.4 really allow you to create a software RAID 10???
    If so, do you think this is the most ideal solution for what I'm working with?
    Keep in mind, I'm working with a low budget. No, XServes/XServe-RAID is NOT an option!
    Any input is greatly appreciated!
    TIA

    Hi, noka.
    Yes, it supports RAID 10. See "Disk Utility 10.5 Help: Protecting your data against hardware failure with a mirrored RAID set."
    However, you may not get the performance you expect.
    FWIW and IMO, unless one is running a high-volume transaction server with a 99.999% ("Five Nines") availability requirement, RAID is overkill. For example, unless you're running a bank, a brokerage, or a major e-commerce site, you're probably spending sums of time and money with RAID that could be applied elsewhere.
    RAID is high on the "geek chic" scale, low on the practicality scale, and very high on the "complex to troubleshoot" scale when problems arise. The average user, even one in your lines of business, is better served by implementing a comprehensive Backup and Recovery solution and using it regularly.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
    Note: The information provided in the link(s) above is freely available. However, because I own The X Lab™, a commercial Web site to which some of these links point, the Apple Discussions Terms of Use require I include the following disclosure statement with this post:
    I may receive some form of compensation, financial or otherwise, from my recommendation or link.
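    For what it's worth, the nested setup (RAID 10, i.e. a stripe of mirrors) can also be built step by step: create two mirrors, then stripe the two resulting RAID volumes together. A rough sketch in modern diskutil syntax (on 10.4 the verb was createRAID, and every name and disk number below is a placeholder):
    diskutil appleRAID create mirror MirrorA JHFS+ disk2 disk3
    diskutil appleRAID create mirror MirrorB JHFS+ disk4 disk5
    diskutil appleRAID create stripe Media JHFS+ disk6 disk7
    Here disk6 and disk7 stand for the device nodes of MirrorA and MirrorB once they exist. This is the same layout as the 'Titanic' volume at the top of this page.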

  • External RAID Won't Mount

    Hi all -
    I have a SimpleTech Quad-interface RAID which I'm running as RAID 1 for mirroring, which is supposed to be protecting archived video projects. I've been in the middle of a weeks-long NIGHTMARE trying to get an eSata Expresscard to work on this MBP, which is another story.
    After numerous crashes, forced quits, etc. testing various eSATA cards, my RAID won't mount. It shows up the same whether I'm using FW400, FW800, or USB. When it powers up, the Mac says it can't recognize the drive (Initialize, Ignore, or Eject), and Disk Utility sees the box but only identifies the volume as disk1s2.
    I have opened the box and attached the drives separately, and get the same response on both disks individually. Disk Utility can't verify or repair, and DiskWarrior can't see them at all.
    I know I can reformat the RAID, but I really want to save these files. Is there any way I can do that?
    Thanks a lot in advance.
    Dave

    You may have to reformat the RAID.
    Do keep in mind that even RAIDs need to be backed up.

  • Software raid 1 drive(s) failed

    Curious what to do about a RAID 1 drive(s) failure? I've searched the community and cannot find a similar issue/answer.
    Configuration:
    One 1TB OS & App Drive is fine
    Two 3TB RAID 1 Drives for Video scratch are fine
    Two 2TB RAID 1 Data Drives are my issue
         I'm using the software raid in OSX and it shows "2TB Data Drive/Mirrored Offline"
         Both drives have "Failed" next to them
    I have backed up this Data Drive in Time Machine and have a recent full backup to a single drive that is off site.  I believe I have a good backup strategy with Time Machine and offsite backups via Voyager S3.  I guess we'll find out!
    My question is: how do I find out if it's a HW issue or a SW issue with this configuration?  And then how do I go about recovering the drives?
    Initially when I looked in Disk Utility it appeared that only one of the 2TB drives had "Failed" next to it but now after a reboot "Failed" is next to both drives.
    My initial hope was that one of the drives had failed and I'd be able to just replace that drive and the RAID System would rebuild from the other drive.
    Any assistance would be appreciated.
    Thanks

    TommyH wrote:
    My initial hope was that one of the drives had failed and I'd be able to just replace that drive and the RAID System would rebuild from the other drive.
    RAID 1 just copies the same data to two (or more) drives at the same time; it's only for mission-critical requirements where a drive failure during writing would be catastrophic. Like taking purchase orders or getting a phone call from ET, for example.
    RAID 0 is dangerous, as the data path is split across however many drives are in the RAID 0 set; one drive glitches and all data is lost. But it provides insane speeds.
    RAID 5 is much more reliable: it combines many drives (4 or more, for more speed), splitting the data path, but also has a redundancy factor so that if a drive dies it can be replaced and the data recovered from the other drives. Usually a RAID 5 is in an external enclosure with its own cooling and hardware controllers, not software-based RAID where your CPU is being overloaded.
    There are other forms of RAID, some of them combinations, like RAID 1+0, RAID 6, RAID 10 and so forth; you can find out more about the more eclectic RAID types online.
    RAID 5 is more ideally suited to video storage requirements, with perhaps a RAID 0 as a scratch disk, or a RAID 0+1 (the RAID 0 is mirrored to another RAID 0) if you're going to spend a while working on something, which increases the potential for failure.
    Once the work is completed, it's sent to the RAID 5 where it's safe with its redundancy and speed.
    http://eshop.macsales.com/shop/hard-drives/RAID/Desktop
    The thing to remember with the failed RAID 1 Data drive is that the data is still on the drive, even though it "failed" and perhaps the directory is messed up a little. All the drive needs is to be told it's not part of a RAID 1 anymore, that it's a normal drive; a simple fix, but I don't know exactly how to go about it (see the hedged sketch at the end of this reply).
    You have two drives and two chances to recover the data. Simply disconnect one data drive and reboot; see if it will mount and/or whether you can repair the drive in Disk Utility.
    If not, pull one drive out, stick another blank one in there, download Data Rescue, and simply recover the files to the new blank drive. DR works by reading the files themselves, not the file structure or anything else.
    Software RAID is unreliable; it depends upon the CPU. Opt for an external RAID that's hardware-based instead. Use eSATA if at all possible.
    Also consider using backup software, like Carbon Copy Cloner, which can be scheduled to make a backup of your data during the middle of the night. This way, if something happens to either your boot drive or a data drive, the clone won't immediately copy the issue to the other drive like a RAID 1 does.
    Also, Carbon Copy Cloner makes "hold Option" bootable OS X boot drive clones, which Time Machine doesn't.
    Also, TM kicks in what, like once an hour? That's likely what caused your issue; CCC will run when you schedule it.  TM is for consumers, not video pros like yourself.
    If you need even heavier iron as an external RAID setup, and more expertise, then see these guys
    http://www.macgurus.com/
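    The hedged sketch mentioned above, for telling a surviving mirror member it's a plain drive again. This only works if the set will come online at all; if it stays Offline, the Data Rescue route is the safer bet. The member UUID and device node come from the listing:
    diskutil appleRAID list
    diskutil appleRAID remove <UUID of the failed member> /dev/diskN
    Removing the dead member should leave the set with just the good drive, which can then be repaired and copied in Disk Utility as usual.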

  • Possible to increase the total size of a software raid set?

    Hi,
    I need to increase the size of a software RAID set which is internal to one of the Xserves - originally it had 2x 400G drives.
    I've swapped drives into the RAID so they are both now 500G drives.
    diskutil info on the RAID set shows that the RAID is still a 400G volume, as expected.
    Question is - can I grow the Raid Total Size to use all the capacity of the member volumes?
    I guess the alternative is to remove the drives from the raid, enable raid on one of them and then add the other drive as a member... but this would mean I would have to take the raid offline.
    The raid sets are not my system disks.
    Any clues?
    TIA
    Campbell
    XServes   Mac OS X (10.4.5)   12 Macs & far too many PCs

    Thanks for your help.
    I rebuilt the array using enableRaid. Worked fine with Raid Volume offline for approx 5 minutes - although I had to degrade the array (and go bare with no mirror) twice in the process.
    In case anyone is interested, this is what the process was:
    1 Swap the larger drive (say, disk 2) into the RAID array and let the array rebuild.
    2 Disable file services.
    3 removeFromRAID disk 2 (leave it mounted).
    4 Unmount the RAID set and eject the remaining original drive (say, disk 1). Remove that drive - it's the only original backup.
    5 enableRaid on disk 2
    Providing enableRaid is ok -
    6 Insert a fresh larger size drive in the place of the removed drive (disk 1)
    7 Unmount new drive and addToRaid
    8 (Rebuild array). Providing rebuilding ok -
    9 Start file services again.
    Data, permissions & ACLs were all intact. So now users can fill the rest of the raid up with more MP3s this week <sigh>
    Maybe not the best approach but it worked well for me.
    XServes   Mac OS X (10.4.5)   12 Macs & several hundred too many PCs
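    For anyone repeating this, the corresponding Terminal commands (Tiger-era verbs, as named in the steps above; the argument order is from memory, so check diskutil's built-in help, and the device names are placeholders) look roughly like:
    diskutil removeFromRAID disk2 diskX
    diskutil enableRAID mirror disk2
    diskutil addToRAID member disk1 diskY
    where diskX is the old RAID set's device node and diskY is the new one's.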
