Encrypted root on software RAID with 2.6.33

Hi,
I ran an update earlier today on my HTPC, hoping that the mantis driver included in the 2.6.33 kernel would work with my TV tuner card, when I noticed that the configuration for software RAIDs had changed.
So I followed the instructions on: Installing with Software RAID or LVM, added the dm_mod module to the MODULES list and replaced raid with mdadm in the HOOKS list, resulting in
HOOKS="base udev autodetect sata mdadm usbinput encrypt filesystems"
After that I successfully rebuilt the image and decided it was time for a reboot, but after running the udev, mdadm and encrypt hooks, the screen shows:
Waiting 10 seconds for device /dev/mapper/root ...
root device '/dev/mapper/root' doesn't exist. Attempting to create it.
ERROR: Unable to determine major/minor number of root device '/dev/mapper/root'.
You are being dropped to a recovery shell.
cat /proc/mdstat shows no configured raid arrays, while assembling them with mdassemble works.
I've tried to boot with and without specifying the raid arrays in grub's kernel line, always with the same result.
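For reference, the variant with the arrays on the kernel line looked roughly like this (the partition names are guesses on my part; root= and cryptdevice= match the messages below):
kernel /boot/vmlinuz26 root=/dev/mapper/root cryptdevice=/dev/md2:root md=2,/dev/sda3,/dev/sdb3 ro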
Even after booting from a live CD and chrooting into the system to rebuild the image - same error message.
Any ideas what I've done wrong?
Best regards

I just checked my mdadm.conf again and compared it with the output of the commands listed here and on the wiki page.
# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=13db5ec4:4d02ad99:5be3d20e:87599807
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=dda2a7de:a998f117:3f649c17:a7325bd1
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=26a15bc5:2345a817:f8c460fb:3d47969b
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=be48517f:ebc6eabb:f8463e19:ac167733
# mdadm -D --scan
ARRAY /dev/md0 metadata=0.90 UUID=13db5ec4:4d02ad99:5be3d20e:87599807
ARRAY /dev/md1 metadata=0.90 UUID=dda2a7de:a998f117:3f649c17:a7325bd1
ARRAY /dev/md2 metadata=0.90 UUID=26a15bc5:2345a817:f8c460fb:3d47969b
ARRAY /dev/md3 metadata=0.90 UUID=be48517f:ebc6eabb:f8463e19:ac167733
# mdadm --detail --scan
ARRAY /dev/md0 metadata=0.90 UUID=13db5ec4:4d02ad99:5be3d20e:87599807
ARRAY /dev/md1 metadata=0.90 UUID=dda2a7de:a998f117:3f649c17:a7325bd1
ARRAY /dev/md2 metadata=0.90 UUID=26a15bc5:2345a817:f8c460fb:3d47969b
ARRAY /dev/md3 metadata=0.90 UUID=be48517f:ebc6eabb:f8463e19:ac167733
I'm going to change this and rebuild the kernel image.
Don't know where the difference comes from :/
Okay - the difference comes from running the command in the Arch live CD system versus in the chrooted environment.
But both somehow don't seem to work for me.
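To rule out the live CD vs. chroot discrepancy I regenerated the file from the on-disk superblocks inside the chroot (a sketch - back up the old file first):
mdadm --examine --scan > /etc/mdadm.conf
mkinitcpio -p kernel26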
When running "mkinitcpio -p kernel26", this is the output:
==> Building image "default"
==> Running command: /sbin/mkinitcpio -k 2.6.33-ARCH -c /etc/mkinitcpio.conf -g /boot/kernel26.img
:: Begin build
:: Parsing hook [base]
:: Parsing hook [udev]
:: Parsing hook [autodetect]
find: "/sys/devices/": No such file or directory
:: Parsing hook [sata]
:: Parsing hook [mdadm]
Custom /etc/mdadm.conf file will be used in initramfs for assembling arrays.
:: Parsing hook [usbinput]
:: Parsing hook [encrypt]
:: Parsing hook [filesystems]
:: Generating module dependencies
:: Generating image '/boot/kernel26.img'...SUCCESS
==> SUCCESS
So /etc/mdadm.conf should be used?
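To double-check, the image contents can be listed (assuming the default gzipped cpio format):
zcat /boot/kernel26.img | cpio -it | grep mdadm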
lilsirecho wrote: When using mdadm for a raid0 in my machine, the assembly of the raid md0 is made during initrd. A statement for md0 and its 2 drives appears during boot-up.
I can't see any statement about creating any of my mdX devices.
It's just
Running Hook [udev]
Triggering udev uevents ... done
Running hook [mdadm]
Running hook [encrypt]
Waiting 10 seconds for device /dev/md2 ...
Waiting 10 seconds for device /dev/mapper/root ...
Root device '/dev/mapper/root' doesn't exist. Attempting to create it.
ERROR: Unable to determine major/minor number of root device '/dev/mapper/root'.
You are being dropped to a recovery shell.
Last edited by flo (2010-04-21 19:25:03)

Similar Messages

  • How to install on a software raid with dmraid?

    In Linux 2.4 the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6 the device-mapper framework can do, among other nice things like LVM and EVMS, the same kind of work as ATARAID in 2.4. While the new code handling the RAID's I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using device-mapper for RAID, detection would go to userspace.
    Heinz Mauelshagen created the dmraid tool to detect RAID sets and create mappings for them. The supported controllers are (mostly cheap) fake-RAID IDE/SATA controllers which have BIOS functions on them. The most common ones are Promise FastTrak as well as HPT 37x, Intel, VIA and LSI; Serial ATA RAID controllers like Silicon Image Medley and NVIDIA nForce are supported as well.
    I would like to ask if someone out there has managed to set up a RAID machine with dmraid - and I am asking about a full RAID setup, nothing like RAID for /home only.
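    For what it's worth, the basic dmraid workflow looks like this (a sketch - run as root; set names will differ per controller):
    dmraid -r    # list the RAID sets described by the BIOS metadata
    dmraid -ay   # activate all sets; block devices appear under /dev/mapper/
    dmraid -an   # deactivate them again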

    Loosec,
    I see that you have a handle on the dmraid package, recently upgraded I see.
    I have an application for RAID that does not involve booting from RAID - a data system only.
    I want to generate an archive for pacman using a RAID array, for install speed.
    My first attempts used mdadm techniques and provided a software RAID of the hde and hdg drives, which did not produce an improved hdparm read speed.
    What changes to the dmraid procedures will provide a non-bootable raid0 system which will archive pacman packages and provide a combined RAID read speed at least 50% greater than the normal 40 MB/s?
    Performance figures for raid0 with dmraid haven't been available in the forums. Perhaps these are disappointing?
    Basically: how do I make a raid0 system with dmraid but not make it bootable?
    EDIT:  Solved my problem with mdadm and mke2fs, fstab entry and /mnt/md entry.
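    For anyone searching later, the mdadm route looks roughly like this (a sketch - the partition numbers and ext2 are assumptions):
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hde1 /dev/hdg1
    mke2fs /dev/md0
    mkdir -p /mnt/md
    echo '/dev/md0 /mnt/md ext2 defaults 0 0' >> /etc/fstab
    mount /mnt/md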
    Last edited by lilsirecho (2008-03-14 07:50:00)

  • Can't mount LUKS on RAID with single click

    Hello
    I use KDE and I want to create an encrypted volume on software RAID 1, which I could mount from time to time as simply as with one click in Dolphin plus entering the password.
    I tried it with a single drive, and it works: I click on the drive, I'm asked for the password, and I can see another, unencrypted volume.
    When it comes to RAID... I'm asked for the password, but the unencrypted volume doesn't appear (I can't see any error on the bottom bar because of the window refresh). The mapper device is created under /dev/mapper/luks_crypto_..., which I can mount manually.
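    Manually it looks like this (a sketch - the device and mapper name below are placeholders, not my real ones):
    cryptsetup luksOpen /dev/md0 luks_crypto_example   # prompts for the passphrase
    mount /dev/mapper/luks_crypto_example /mnt/raid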
    Anybody have experience with such configuration ? Any ideas what's wrong ? Where to start ?
    Any help would be greatly appreciated.
    Kind Regards,
    osazon

    Viper_Scull wrote:
    In Thunar Preferences > Advanced, do you have "enable volume management" checked?
    If so, configure it and mark the first two options under Removable storage.
    Didn't work. I still get Not Authorized message. Also I logged out and in.
    And here is the everything.log about USB drive connecting:
    Jul 21 09:23:33 lynx kernel: [ 2239.496516] usb 2-1: new high speed USB device number 6 using ehci_hcd
    Jul 21 09:23:33 lynx mtp-probe: checking bus 2, device 6: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1"
    Jul 21 09:23:33 lynx kernel: [ 2239.624659] scsi8 : usb-storage 2-1:1.0
    Jul 21 09:23:33 lynx mtp-probe: bus: 2, device: 6 was not an MTP device
    Jul 21 09:23:34 lynx kernel: [ 2241.054851] scsi 8:0:0:0: Direct-Access     takeMS   Mem-drive EasyII 1100 PQ: 0 ANSI: 0 CCS
    Jul 21 09:23:34 lynx kernel: [ 2241.055144] sd 8:0:0:0: Attached scsi generic sg2 type 0
    Jul 21 09:23:34 lynx kernel: [ 2241.057409] sd 8:0:0:0: [sdb] 8028160 512-byte logical blocks: (4.11 GB/3.82 GiB)
    Jul 21 09:23:34 lynx kernel: [ 2241.058419] sd 8:0:0:0: [sdb] Write Protect is off
    Jul 21 09:23:34 lynx kernel: [ 2241.058423] sd 8:0:0:0: [sdb] Mode Sense: 43 00 00 00
    Jul 21 09:23:34 lynx kernel: [ 2241.058425] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    Jul 21 09:23:34 lynx kernel: [ 2241.061294] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    Jul 21 09:23:34 lynx kernel: [ 2241.062306]  sdb: sdb1
    Jul 21 09:23:34 lynx kernel: [ 2241.063904] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    Jul 21 09:23:34 lynx kernel: [ 2241.063909] sd 8:0:0:0: [sdb] Attached SCSI removable disk
    Last edited by freyr (2011-07-21 07:28:57)

  • Installing grub2 1.99rc1 on Linux Software RAID (/dev/md[0-x])

    Hello!
    In the last days I tried to install Arch Linux x64 on my server. This server has 4 SATA drives. I'm using Linux software RAID with the following configuration:
    mdadm --create --verbose /dev/md0 --auto=yes --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm --create --verbose /dev/md1 --auto=yes --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    mdadm --create --verbose /dev/md2 --auto=yes --level=0 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    md0 --> / (including /boot)
    md1 --> /data (my storage partition)
    md2 --> my swap partition
    In the last step of the installer I didn't install grub and left the installer without installing a bootloader.
    I mounted the /dev folder to /mnt/dev and chrooted in my new system:
    chroot /mnt bash
    In my new system I installed the new grub2 bootloader with:
    pacman -S grub2-bios
    In the next step I tried to install the bootloader to the MBR with the following command:
    grub_bios-install --boot-directory=/boot --no-floppy --recheck /dev/md0
    But I get the following error:
    /sbin/grub-probe: error: no such disk.
    Auto-detection of a filesystem of /dev/md0 failed.
    Please report this together with the output of "/sbin/grub-probe --device-map="/boot/grub/device.map" --target=fs -v /boot/grub" to <[email protected]>
    I also tried to install the grub2 directly on my discs with the following command:
    grub_bios-install --boot-directory=/boot --no-floppy --recheck /dev/sda
    But the error is the same as above.
    Following the instruction of the error message I executed the following command:
    /sbin/grub-probe --device-map="/boot/grub/device.map"
    I get a large debug output:
    /sbin/grub-probe: info: scanning hd3,msdos2 for LVM.
    /sbin/grub-probe: info: the size of hd3 is 3907029168.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: scanning hd3,msdos1 for LVM.
    /sbin/grub-probe: info: the size of hd3 is 3907029168.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: scanning hd4 for LVM.
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: scanning hd4,msdos1 for LVM.
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: changing current directory to /dev.
    /sbin/grub-probe: info: opening md0.
    /sbin/grub-probe: error: no such disk.
    There is a tutorial for installing grub2 on Arch Linux in the wiki:
    https://wiki.archlinux.org/index.php/Grub2
    But I can't find any solution for my problem there.
    Does anybody know a solution for this problem?
    Thanks for help
    Best regards,
    Flasher
    Last edited by Flasher (2011-03-03 20:13:13)

    hbswn wrote: Maybe it cannot handle the new mdadm format. Here the other partitions have version 0.90. Only the failing md0 has version 1.2:
    I'm sure that is the cause of the problem - but not the new format. The old one (0.90) is what's incompatible with grub2.
    I hit the same problem and today I managed to fix it.
    What I did: I copied /boot to a tmp directory, destroyed /dev/md0 (my /boot) with metadata 0.90, created a new /dev/md0 (metadata 1.2), and copied the /boot content back.
    After that, grub_bios-install /dev/sda && grub_bios-install /dev/sdb finished without any errors.
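    Roughly, the sequence was (a sketch - partition names, RAID level and filesystem are assumptions here; adjust to your layout):
    cp -a /boot /tmp/boot                  # save the contents
    umount /boot
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sda1 /dev/sdb1
    mdadm --create /dev/md0 --metadata=1.2 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext2 /dev/md0
    mount /dev/md0 /boot
    cp -a /tmp/boot/. /boot/               # restore the contents
    grub_bios-install /dev/sda && grub_bios-install /dev/sdb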

  • RAID with Disk Utility question

    I've set up a RAID array in Disk Utility for an external SATA NAS that I have. The Silicon Image Si3132 drivers (the SATARAID5 drivers) are not working for Snow Leopard, so I've installed the base non-RAID drivers for 10.6.
    I'm using Disk Utility to set up the RAID for now, until I get updated drivers from SI. Here's my question:
    Let's just say that my Mac Pro one day decides to die and I have to rebuild a new OS on the machine. I will then have this external drive where I've configured a software RAID with Mac OS X. Will I be able to rebuild this RAID and recover the data, or will I lose all of the data if the Mac needs to be rebuilt?
    The external enclosure has its own controller, but these silicon image drivers do not seem to want to work as they are kernel panicking.
    Will I be able to recover the RAID config with disk utility if I rebuild the machine? My thinking is NO

    You have four internal hard drives and the ability to boot from FireWire or USB, and I would include Time Machine plus a clone (Carbon Copy Cloner or SuperDuper). Some WHS NAS servers also support Time Machine and SuperDuper, but I would keep local backups - two at minimum.
    I would not rely on SI or RAID.
    All you need to boot from (and I'd have at least two) is a basic 30GB partition for Mac OS: a working copy of your system as is, a copy of the last version installed as well, AND one "emergency" boot partition for disk maintenance and repairs.
    Then use the NAS as a second line of backup for your data, disk images, etc.

  • Software RAID 0,1,10

    With Mac OS X 10.5 there appears to be the ability to configure software RAID with multiple partitions. I am looking for some expert advice on using Mac OS X Server to achieve the backup of 5-10 Mac OS X systems (10.5 or earlier), striping and mirroring backup data from remote systems to multiple drives, and storing common files (an iTunes library) in a single location for all 5 licensed machines, for single-point access. Mac OS X Server will be purchased if a favorable response suggests this is the correct route. The other option is to source a NAS (network attached storage) device, e.g. a hard drive with Ethernet connectivity and backup software.
    Assumption: Mac OSX Server running on a dedicated Mac with multiple drives or multiple partitions.
    Assumption: Mac OSX Server supports RAID 0,1,10 (striping & mirroring) and ability to serve common iTunes library.
    Any expert advice or experience with OSX Server vs. NAS will be greatly appreciated.

    Visited the Genius Bar and received all the answers to using RAID 1 for mirroring, centralizing the shared music/movie library, etc. Got all the answers for an effective backup, mirroring and server solution.

  • Need help with formatting a software RAID 5 array with xfs

    Hi,
    I'm trying to format a software RAID 5 array using the xfs filesystem, with the following command:
    # mkfs.xfs -v -m 0.5 -b 4096 -E stride=64,stripe-width=128 /dev/md0
    but all I get is the attached error message. It works fine when I use the ext4 filesystem. Any ideas?
    Thanks!
    http://i.imgur.com/cooLBwH.png
    -- mod edit: read the Forum Etiquette and only post thumbnails http://wiki.archlinux.org/index.php/For … s_and_Code [jwr] --
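    A guess, since the screenshot isn't quoted here: -m 0.5, -b 4096 and -E stride=... are mke2fs/ext4 options, which would explain why ext4 works. mkfs.xfs spells the same geometry differently; the equivalent would look something like this (stride 64 blocks x 4096 bytes = 256k stripe unit; stripe-width/stride = 2 data disks):
    mkfs.xfs -b size=4096 -d su=256k,sw=2 /dev/md0
    (mkfs.xfs usually detects md geometry by itself, so a plain mkfs.xfs /dev/md0 may be enough.)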

    Sorry, I numbered them to show the flow of information; this was also just a place for me to store info as I worked through it. I managed to get it to work by creating a partition that takes up the whole drive and is actually 22 GB larger than on all the other drives (since I found out that they had root, swap and home partitions that are no longer needed).
    I should be able to resize the other partitions without a problem, correct? They're ext4. Should I unmount the RAID array and do them individually - remount the array, let it sync, and do the next? Or just unmount the array, resize all of them, mount it and let it sync?

  • Cannot get software RAID-1 going on root (/) on Sun V240's

    Hi All,
    I for the life of me cannot get software RAID-1 going on the root file system. I've followed the instructions in the Solaris Volume Manager Admin Guide and I think I'm doing it right. This is on a Sun V240 with two 137GB drives.
    I basically do the following:
    Disk0 - c1t0d0s0
    Disk1 - c1t1d0s0
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
    prtvtoc /dev/rdsk/c1t1d0s2 (Double check that disk 1 looks like disk 2)
    metadb -af -c 2 /dev/rdsk/c1t0d0s3 /dev/rdsk/c1t0d0s4
    metadb -af -c 2 /dev/rdsk/c1t1d0s3 /dev/rdsk/c1t1d0s4
    metainit -f d10 1 1 /dev/rdsk/c1t0d0s0
    metainit -f d20 1 1 /dev/rdsk/c1t1d0s0
    metainit d0 -m d10
    metaroot d0
    lockfs -fa
    reboot
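    (The guide's remaining step, attaching the second submirror, was to come after the reboot:)
    metattach d0 d20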
    Before I reboot, I do a metastat and it shows the following:
    # metastat
    d0: Mirror
        Submirror 0: d10
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 20494464 blocks (9.8 GB)
    d10: Submirror of d0
        State: Okay        
        Size: 20494464 blocks (9.8 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s0          0     No            Okay   Yes
    d20: Concat/Stripe
        Size: 20494464 blocks (9.8 GB)
        Stripe 0:
            Device     Start Block  Dbase   Reloc
            c1t1d0s0          0     No      Yes
    Upon reboot, I get this:
    Rebooting with command: boot                                         
    Boot device: /pci@1c,600000/scsi@2/disk@0,0:a  File and args:
    SunOS Release 5.10 Version Generic 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    Cannot open mirrored root device, error 19
    Cannot remount root on /pseudo/md@0:0,0,blk fstype ufs
    panic[cpu1]/thread=180e000: vfs_mountroot: cannot remount root
    000000000180b960 genunix:vfs_mountroot+2b8 (18aa800, 0, 185e6f8, 3000153dc40, 1859ce8, 4)
      %l0-3: 0000000000000000 0000000000000003 00000000018a40b0 00000000018a40b0
      %l4-7: 000000000185b800 00000000011cb000 00000000018aa800 0000000001834340
    000000000180ba20 genunix:main+88 (1813c98, 1011c00, 1834340, 18a7c00, 0, 1813800)
      %l0-3: 000000000180e000 0000000000000001 000000000180c000 0000000001835200
      %l4-7: 0000000070002000 0000000000000001 000000000181ba54 0000000000000000
    syncing file systems... done
    skipping system dump - no dump device configured
    rebooting...
    HELP! What am I doing wrong? Thanks!

    You created too many metadata databases.
    I'm not kidding, you did. You created 8, which I suspect triggered a bug which currently exists in Solaris 9 / Solaris 10.
    This bug occurs with the following disks: MAT3073N, MAT3147N, MAT3300N and ST373207LC. If you have any of those disks in your system (which can be determined with iostat -En), you must use fewer than 8 replicas.
    Check out bug report 6244431, if you have a SunSolve login linked to a contract number.
    You could try to boot up the system from a CD-ROM or similar and add "md_devid_destroy=1;" to the /kernel/drv/md.conf file above the 'Begin MDD database info' line.
    Hopefully your system will come up after this; if it does, the best workaround is to remove a few replicas and the above line from md.conf, then reboot.
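    For reference, the edit looks like this (same file and marker line as described above):
    # in /kernel/drv/md.conf, above the 'Begin MDD database info' line:
    md_devid_destroy=1;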
    Best regards,
    //Magnus

  • Encrypted root with btrfs

    So I'm trying to set up my system with /dev/sda1 as a 500MB boot partition and /dev/sda2 as a plain dm-crypted btrfs partition.
    I've got it all set up and I can chroot into it. Everything is fine, except it won't boot into a functional system.
    I've installed mkinitcpio-btrfs from the AUR and re-ran mkinitcpio.
    I can't figure out where the problem is. Any ideas?
    FSTAB
    # /etc/fstab: static file system information
    # <file system> <dir> <type> <options> <dump> <pass>
    #### /dev/mapper/btrfs LABEL=btrfs
    ####UUID=78a05d43-f52a-4086-9255-5062e5fcbb94 / btrfs rw,relatime,space_cache,subvol=__active/rootvol 0 0
    #### /dev/mapper/btrfs LABEL=btrfs
    ####UUID=78a05d43-f52a-4086-9255-5062e5fcbb94 /home btrfs rw,relatime,space_cache,subvol=__active/home 0 0
    # /dev/mapper/btrfs LABEL=btrfs
    /dev/mapper/btrfs / btrfs rw,noatime,compress=lzo,discard,autodefrag,inode_cache,subvol=__active/rootvol 0 0
    /dev/sda1 /boot ext2 defaults 0 2
    # /dev/mapper/btrfs LABEL=btrfs
    /dev/mapper/btrfs /home btrfs rw,noatime,compress=lzo,discard,autodefrag,inode_cache,subvol=__active/home 0 0
    # /dev/mapper/btrfs LABEL=btrfs
    /dev/mapper/btrfs /mnt/defvol btrfs rw,noatime,compress=lzo,discard,autodefrag,inode_cache 0 0
    syslinux.cfg
    # Config file for Syslinux -
    # /boot/syslinux/syslinux.cfg
    # Comboot modules:
    # * menu.c32 - provides a text menu
    # * vesamenu.c32 - provides a graphical menu
    # * chain.c32 - chainload MBRs, partition boot sectors, Windows bootloaders
    # * hdt.c32 - hardware detection tool
    # * reboot.c32 - reboots the system
    # To Use: Copy the respective files from /usr/lib/syslinux to /boot/syslinux.
    # If /usr and /boot are on the same file system, symlink the files instead
    # of copying them.
    # If you do not use a menu, a 'boot:' prompt will be shown and the system
    # will boot automatically after 5 seconds.
    # Please review the wiki: https://wiki.archlinux.org/index.php/Syslinux
    # The wiki provides further configuration examples
    DEFAULT arch
    PROMPT 0 # Set to 1 if you always want to display the boot: prompt
    TIMEOUT 50
    # You can create syslinux keymaps with the keytab-lilo tool
    #KBDMAP de.ktl
    # Menu Configuration
    # Either menu.c32 or vesamenu.c32 must be copied to /boot/syslinux
    UI menu.c32
    #UI vesamenu.c32
    # Refer to http://syslinux.zytor.com/wiki/index.php/Doc/menu
    MENU TITLE Arch Linux
    #MENU BACKGROUND splash.png
    MENU COLOR border 30;44 #40ffffff #a0000000 std
    MENU COLOR title 1;36;44 #9033ccff #a0000000 std
    MENU COLOR sel 7;37;40 #e0ffffff #20ffffff all
    MENU COLOR unsel 37;44 #50ffffff #a0000000 std
    MENU COLOR help 37;40 #c0ffffff #a0000000 std
    MENU COLOR timeout_msg 37;40 #80ffffff #00000000 std
    MENU COLOR timeout 1;37;40 #c0ffffff #00000000 std
    MENU COLOR msg07 37;40 #90ffffff #a0000000 std
    MENU COLOR tabmsg 31;40 #30ffffff #00000000 std
    # boot sections follow
    # TIP: If you want a 1024x768 framebuffer, add "vga=773" to your kernel line.
    LABEL arch
    MENU LABEL Arch Linux
    LINUX ../vmlinuz-linux
    APPEND root=/dev/mapper/btrfs cryptdevice=/dev/sda2:btrfs rw
    INITRD ../initramfs-linux.img
    LABEL archfallback
    MENU LABEL Arch Linux Fallback
    mkinitcpio.conf
    # vim:set ft=sh
    # MODULES
    # The following modules are loaded before any boot hooks are
    # run. Advanced users may wish to specify all system modules
    # in this array. For instance:
    # MODULES="piix ide_disk reiserfs"
    MODULES="crc32c"
    # BINARIES
    # This setting includes any additional binaries a given user may
    # wish into the CPIO image. This is run last, so it may be used to
    # override the actual binaries included by a given hook
    # BINARIES are dependency parsed, so you may safely ignore libraries
    BINARIES=""
    # FILES
    # This setting is similar to BINARIES above, however, files are added
    # as-is and are not parsed in any way. This is useful for config files.
    FILES=""
    # HOOKS
    # This is the most important setting in this file. The HOOKS control the
    # modules and scripts added to the image, and what happens at boot time.
    # Order is important, and it is recommended that you do not change the
    # order in which HOOKS are added. Run 'mkinitcpio -H <hook name>' for
    # help on a given hook.
    # 'base' is _required_ unless you know precisely what you are doing.
    # 'udev' is _required_ in order to automatically load modules
    # 'filesystems' is _required_ unless you specify your fs modules in MODULES
    # Examples:
    ## This setup specifies all modules in the MODULES setting above.
    ## No raid, lvm2, or encrypted root is needed.
    # HOOKS="base"
    ## This setup will autodetect all modules for your system and should
    ## work as a sane default
    # HOOKS="base udev autodetect block filesystems"
    ## This setup will generate a 'full' image which supports most systems.
    ## No autodetection is done.
    # HOOKS="base udev block filesystems"
    ## This setup assembles a pata mdadm array with an encrypted root FS.
    ## Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
    # HOOKS="base udev block mdadm encrypt filesystems"
    ## This setup loads an lvm2 volume group on a usb device.
    # HOOKS="base udev block lvm2 filesystems"
    ## NOTE: If you have /usr on a separate partition, you MUST include the
    # usr, fsck and shutdown hooks.
    HOOKS="base udev autodetect encrypt lvm2 modconf block filesystems keyboard fsck btrfs"
    # COMPRESSION
    # Use this to compress the initramfs image. By default, gzip compression
    # is used. Use 'cat' to create an uncompressed image.
    #COMPRESSION="gzip"
    #COMPRESSION="bzip2"
    #COMPRESSION="lzma"
    #COMPRESSION="xz"
    #COMPRESSION="lzop"
    #COMPRESSION="lz4"
    # COMPRESSION_OPTIONS
    # Additional options for the compressor
    #COMPRESSION_OPTIONS=""
    And this is what happens when I try to boot into it:
    http://i.imgur.com/FeaBjvC.jpg
    -- mod edit: read the Forum Etiquette and only post thumbnails http://wiki.archlinux.org/index.php/For … s_and_Code [jwr] --

    falconindy wrote: You're not passing any subvol in your bootloader config.
    Thanks, that was the problem. It's running perfectly now. It's a great feeling diving into something deep in the command line and accomplishing it - making things work in Arch is one of the most satisfying things ever.
    Strongly recommend against mkinitcpio-btrfs, BTW. It's unmaintained, archaic, and unnecessary for most folks. Device assembly can be done entirely with udev.
    So what do I need as far as hooks? Just encrypt, or do I still need btrfs?
    And is it going to create problems? I've already done it. Does it require a reformat and starting over? Can I uninstall mkinitcpio-btrfs and just run mkinitcpio again? Or should I just leave it be since it's working?
    edit: added rootflags=subvol=__active/rootvol
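    For anyone finding this later, the full APPEND line presumably ends up like this (the subvolume path is taken from the fstab above):
    APPEND root=/dev/mapper/btrfs rootflags=subvol=__active/rootvol cryptdevice=/dev/sda2:btrfs rw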
    Last edited by risho (2014-09-07 06:29:28)

  • Help with software RAID on system disk

    I've been trying to get my OS system disk mirrored using diskutil's software RAID (my server has 2 internal 80GB drives). I did what homework I could, and here's where I'm stuck. I'm running 10.4.3.
    n506:~ root# uname -a
    Darwin n506.local 8.3.0 Darwin Kernel Version 8.3.0: Mon Oct 3 20:04:04 PDT 2005; root:xnu-792.6.22.obj~2/RELEASE_PPC Power Macintosh powerpc
    n506:~ root#
    According to my research, I needed to boot from an OS disk other than the one I want to mirror, so I installed a second instance of OS X on my second drive, booted on that and ran diskutil enableRAID against the system drive I want to protect. This appeared to work. A new disk device was added (the RAID device) with its only component being the disk I ran enable RAID on. Then I booted from that disk. Here's the problem.
    When I run diskutil addToRAID to add the second disk to the raid group, the command does partition the disk the same way the primary is partitioned (according to diskutil list), but the command always fails with this error:
    n506:~ root# diskutil addToRAID member disk0 disk2
    2006-05-27 23:49:50.012 DiskManagementTool[362] AppleRAIDUpdateSet failed to modify the set (null)
    There was an error adding the disk to the RAID (-9960)
    n506:~ root#
    The system log shows this:
    n506:~ root# tail -1 /var/log/system.log
    May 27 23:49:50 n506 kernel[0]: AppleRAIDMember::readRAIDHeader: failed, no header signature present on disk0s4.
    n506:~ root#
    [NOTE: I should add here that, for reasons unknown to me, my system created /dev/disk0 from the drive in the second bay, and /dev/disk1 from the drive in the first bay (the primary disk I'm trying to mirror). I have no idea why OS X set up the devices like that, but I haven't really worried about it]
    I've done what research I could and have found no information on this problem. I've tried zero-ing the second disk and rerunning the command (along with rebooting), but I always seem to get this error.
    Any help would be greatly appreciated. I've been at it for days now and I'm really frustrated.
    G5 server Mac OS X (10.4.3)

    If anyone out there is interested in this thread, I upgraded to 10.4.6 and tada:
    n506:~ root# diskutil addToRAID member disk0 disk2
    The disk has been added to the RAID
    n506:~ root# diskutil checkRAID
    RAID SETS
    Name: Disk1
    Unique ID: 1CC5982A-B353-438C-A69F-8D66924449C0
    Type: Mirror
    Status: Degraded
    Device Node: disk2
    Apple RAID Version: 2
    #  Device Node   UUID                                   Status
    1  disk0s3       219611E9-F35F-48A4-BD4F-F56038094613   2% (Rebuilding)
    0  disk1s3       0EDD559B-FEE8-4F73-B94D-AF5590AD8BF2   Online
    n506:~ root#

  • Can I install boot camp on a machine with a Software Raid-0?

    I have a machine with a software RAID-0 running Snow Leopard. I'd like to install Boot Camp, but I only have 2 SATA hard drives. Windows does not need to run in a RAID on this machine, and I'm fine partitioning both drives so that I have a 30GB Windows 7 partition. Both drives are 500GB each.
    Can I take 30GB from each drive, run a software RAID on the rest, and still use Boot Camp?
    Please let me know.

    No, and 30GB is too small for Windows 7 64-bit Pro for your Mac.
    Get a 3rd drive. Assuming this is a Mac Pro you have 4 internal drive bays.

  • Problems with Grub, Dualbooting, and a software raid

    I've been searching through Google and any forums I could find for a possible fix and cannot find anything that works.
    I have 2 hard drives: the first one is Windows, the second one is Arch Linux. They are set up through a software RAID and I cannot get grub to see them.
    The error I get is
    Step 1.5 complete
    Loading Grub Please Wait..
    Error 21
    I've used the Super Grub Disk to no avail (I had to load it onto a USB drive because it wasn't working with my USB CD drive).
    My setup is: SDA = Windows drive
    SDB = Arch Linux
    sdb1 = /               (100gb)
    sdb2 = /home       (190gb)
    sdb3 = swap         (6gb)
    I've checked the menu.lst (and will post it when I get back on the other computer to try again);
    last time I checked, it looked to me like sdb0 and sdb1 were not in there,
    so I added them - pretty sure it at least needed sdb1.
    (I found out it was on sdb1 by loading a live Linux and using grub> find /boot/grub/stage1 to verify.)
    I've tried changing the settings of my RAID in the BIOS. I have 3 options: Disabled (at which point I don't get any grub error messages, just a non-system disk error),
    RAID - not how I normally run, but I tried it, no change (still Error 21) -
    and IDE - how it normally runs, unsuccessfully.
    I'd really appreciate some help on this, thank you.

    Here is the menu.lst in /boot/grub:
    # Config file for GRUB - The GNU GRand Unified Bootloader
    # /boot/grub/menu.lst
    # DEVICE NAME CONVERSIONS
    # Linux Grub
    # /dev/fd0 (fd0)
    # /dev/sda (hd0)
    # /dev/sda1 (hd0,0)
    # /dev/sdb (hd1)
    # /dev/sdb1 (hd1,0)
    # /dev/sdb2 (hd1,1)
    # /dev/sda3 (hd0,2)
    # FRAMEBUFFER RESOLUTION SETTINGS
    # +-------------------------------------------------+
    # | 640x480 800x600 1024x768 1280x1024
    # ----+--------------------------------------------
    # 256 | 0x301=769 0x303=771 0x305=773 0x307=775
    # 32K | 0x310=784 0x313=787 0x316=790 0x319=793
    # 64K | 0x311=785 0x314=788 0x317=791 0x31A=794
    # 16M | 0x312=786 0x315=789 0x318=792 0x31B=795
    # +-------------------------------------------------+
    # for more details and different resolutions see
    # http://wiki.archlinux.org/index.php/GRUB#Framebuffer_Resolution
    # general configuration:
    timeout 5
    default 0
    color light-blue/black light-cyan/blue
    # boot sections follow
    # each is implicitly numbered from 0 in the order of appearance below
    # TIP: If you want a 1024x768 framebuffer, add "vga=773" to your kernel line.
    # (0) Arch Linux
    title Arch Linux
    root (hd1,0)
    kernel /boot/vmlinuz26 root=/dev/disk/by-uuid/0e70c4ce-e3b3-42f8-8c2b-6c6414e$
    initrd /boot/kernel26.img
    # (1) Arch Linux
    title Arch Linux Fallback
    root (hd1,0)
    kernel /boot/vmlinuz26 root=/dev/disk/by-uuid/0e70c4ce-e3b3-42f8-8c2b-6c6414e$
    initrd /boot/kernel26-fallback.img
    # (2) Windows
    title Windows
    rootnoverify (hd0,0)
    makeactive
    chainloader +1
    and here is the device.map
    (fd0) /dev/fd0
    (hd0) /dev/sda
    (hd1) /dev/sdb
    some more info from grub
    grub> find /boot/grub/stage1
    (hd1,0)
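    Error 21 means GRUB can't see the disk it was pointed at - often a BIOS drive-mapping issue. A common next step (a sketch, using the mapping from the device.map above) is to reinstall stage1 from the grub shell:
    grub> root (hd1,0)
    grub> setup (hd1)
    (or setup (hd0) if the BIOS actually boots the Windows disk first)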
    Last edited by m3lkor (2009-03-15 19:14:07)

  • Moving Thunderbolt DAS with software RAID 1 to a different Mac

    Background: I am preparing to wipe my iMac (currently Mavericks) to then do a clean install of Yosemite. Attached to my iMac is a "WD MyBook Thunderbolt Duo". This external "DAS" drive model is a single enclosure that contains two 2TB physical drives and attaches to the iMac with a single Thunderbolt cable. I am not using any Western Digital software on my iMac! Instead, I have used Disk Utility to manage the drive and have created a single 2TB software RAID 1 array (HFS+ Journaled) incorporating both physical drives as slices. The RAID 1 volume mounts as "DAS" on my Desktop and currently contains the majority of my media files.
    Before beginning the repartitioning, formatting, and doing a clean install of Yosemite on the iMac's internal HDD, I plan on disconnecting the DAS's thunderbolt cable so it's safe from the install process. 
    Question:  What I need to know is what to expect when I eventually reconnect the DAS to the new Yosemite installation.  Will my RAID 1 volume "DAS" simply mount on the desktop as expected?
    What I don't understand is where software RAID slice and volume configurations are stored.  Is the software RAID 1 configuration stored as part of the partition table on the drives themselves or is it stored on the Mac?  I am hoping to hear that the software RAID configuration is all on the drives themselves and therefore the drive can be reattached to a new Mac and it will just recognize the configuration and mount the volume normally with no additional effort.  Can someone please confirm this to ease my mind (or save me before I go ahead with this).
    Sorry to put this in the OS X Server forum but I thought that a software RAID question using Disk Utility would be best answered from the expertise here.
    I appreciate any info you can provide.

    Yes, the software RAID configuration is stored on the drives themselves, so it will be safe to disconnect it from one Mac and reconnect it to a second Mac.
    Note to others: any Mac that supports Thunderbolt is going to be running a new enough version of OS X so as to not cause a problem.

  • Software RAID (level 0) with Timemachine Backup

    Hi
    I am in the process of ordering a Mac Pro (two 2.26GHz quad-core CPUs, 12GB) and will be using four 1.5TB Seagate Barracuda hard drives. I have read many posts on these discussion pages relating to RAID, etc., but can't seem to find an answer to my query.
    Is it possible and effective to stripe two drives together and use that as my boot/files volume, and then stripe the other two and use that as a Time Machine backup?
    Also, is Apple's built-in software RAID able to do this without much strain on the CPU? The £560 RAID card is not looking too pleasing to my wallet!
    Many thanks in advance, and apologies if this has been covered elsewhere.

    Time Machine does not benefit from anything other than a mirror, perhaps, and even then I prefer to have two backup sets as a priority, and only then go with a mirror.
    I can think of better uses for internal drives than to have two devoted just to Time Machine.
    And "0+1" was briefly tried by many and given up, and is not supported post-Tiger; it does have too much overhead and too many problems - and it's slow.
    A fast boot drive, a dedicated scratch drive, and a 3rd drive for media files and data would be a good start. An SSD or 10K VelociRaptor boot drive, perhaps.

  • Server 10.5 on PPC with software RAID not bootable?

    I read in this article (http://docs.info.apple.com/article.html?path=ServerAdmin/10.5/en/c5sa15.html) that "No PPC-based Macs support booting from software RAID volumes."
    I believe that this is not true! My software dealer told me there should be no problems doing this. I called Apple, and the guy I spoke with also told me that it works fine. I have actually now set up a software RAID 1 (PPC G5 2.3GHz, 2x500GB HD) from which I boot, and it does work!
    My question is: are there any risks in continuing to run this installation?
    /Andreas

    One risk is that you can't partition an Apple software RAID 1 into separate boot and data volumes, so either one could potentially overwrite "the other": an overfull share can damage the system, and vice versa.
    And write performance can suffer, because writes aren't simultaneous (?).
    Another risk is that the server won't tell you when one disk fails, so if you don't notice before the other one also fails, the data on the first one is old (backups are essential, as usual). Or you may think "there is a failover disk available" when it too already failed a while ago (though of course no worse than having only 1 drive and no mirror).
    Depending on the cause of failure - a software glitch or a mechanical one - the disks can be in different "shape" if you have to try restoring the data.
    A good idea is to have an extra drive (FireWire?) that you back up to every night (a one-way sync, replacing/removing old/removed files).
