Software RAID on SATA disks (solved)

Hi all.
I'm trying to set up software RAID, following this guide: http://wiki2.archlinux.org/index.php/In … d-or%20LVM
My setup is sda & sdb.
/dev/md/0  /boot        (sda1 & sdb1)
/dev/md/1  swap         (sda2 & sdb2)
/dev/md/2  /            (sda3 & sdb3)
/dev/md/3  /mnt/storage (sda4 & sdb4)
I'm running it as RAID level 0 with ReiserFS.
Partitioning the drives, setting up the RAID arrays, making the file systems, installing and configuring the system, copying the GRUB files, mounting devfs and proc, and chrooting into /mnt all work without any problem.
My problem begins with setting up GRUB.
grub> root (hd0,0) (and other combos) gives me: Filesystem type unknown, partition type 0xfd
grub> setup (hd0) (and other combos) gives me: Error 17: Cannot mount selected partition
I used to run an hda disk as swap & / with my SATA disks as one large md0 storage array with no problem, but after running hdparm -t on the devices I thought: why let that speed go to waste storing stuff I don't use regularly?
cat /proc/mdstat gives me:
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10]
md1 : active raid0 sdb2[1] sda2[0]
      1959680 blocks 64k chunks
md2 : active raid0 sdb3[1] sda3[0]
      17382144 blocks 64k chunks
md3 : active raid0 sdb4[1] sda4[0]
      293041536 blocks 64k chunks
md0 : active raid0 sdb1[1] sda1[0]
      192512 blocks 64k chunks
unused devices: <none>
The output is the same both in the GRUB install session and when booting Arch from my hda disk.
I disconnected my hda during the attempt to configure the software RAID.
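A side note on the sizes in that mdstat output: a RAID 0 array's capacity is simply the sum of its members, which the md3 line illustrates. A quick sketch of the arithmetic (the per-member block count is inferred by halving md3's total across two equal partitions; it is not stated in the post):

```shell
# RAID 0 capacity = sum of member sizes (no redundancy).
# Per-member figures are inferred for illustration: md3's total
# split across two equal partitions.
sda4_blocks=146520768
sdb4_blocks=146520768
md3_blocks=$((sda4_blocks + sdb4_blocks))
echo "$md3_blocks"   # prints 293041536, matching the md3 line above
```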
So if anyone knows what I'm doing wrong, please let me know.
PS: If the problem lies with GRUB and SATA disks in a software RAID setup, I wouldn't mind making a boot partition on my hda and using the rest as a drive for backups and important files. I just don't want to destroy my working Arch install only to find it won't work anyway.
Thanks.

If anyone is interested, I finally got it up and running with some small modifications.
Instead of what's in the wiki:
root   (hd0,0)
kernel /boot/vmlinuz26 root=/dev/md/2 ro
I used:
root   (hd1,0)
kernel /vmlinuz26 root=/dev/md/2 ro
(my sdb is detected as drive 0 and sda as drive 1)
And it works  8)
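For anyone else hitting this drive-ordering surprise: GRUB legacy can report which BIOS drive actually holds the boot files. With /boot on its own partition, a check from the grub> prompt might look like this (the (hd1,0) result is illustrative, matching this poster's machine):

```text
grub> find /grub/stage1
 (hd1,0)
```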

Similar Messages

  • [SOLVED] how to install ArchLinux on a simple software raid 0

    I have two 256GB disks and I want them to be in raid 0.
    I tried following this tutorial: https://wiki.archlinux.org/index.php/In … AID_or_LVM
    but this tutorial has the added complication of LVM and raid 1 which I don't need.
    I made 3 partitions on each of the disks:
    sda1 - 100MB for /boot
    sda2 - 2048MB for swap
    sda3 - raid 0 md0
    sdb1 - unused
    sdb2 - 2048 for swap
    sdb3 - raid 0 md0 for /
I can assemble and format sda1 (ext4), sda2 (swap), sdb2 (swap), and md0 (ext4), and install all the packages.
I also configured mdadm.conf by pressing Alt+F2 in the installer and executing: mdadm --examine --scan > /mnt/etc/mdadm.conf
    and added mdadm hook to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems')
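That hook placement can be sanity-checked in shell: the mdadm hook must appear before filesystems in the HOOKS line, and a quick grep confirms the ordering. (The sample HOOKS line below is an assumption, modeled on the one used later in this thread.)

```shell
# Check that 'mdadm' precedes 'filesystems' in a mkinitcpio-style HOOKS line.
# The sample line is hypothetical, modeled on this thread's configuration.
hooks='HOOKS="base udev autodetect pata scsi sata mdadm filesystems"'
mdadm_pos=$(printf '%s\n' "$hooks" | tr ' "' '\n\n' | grep -n '^mdadm$' | cut -d: -f1)
fs_pos=$(printf '%s\n' "$hooks" | tr ' "' '\n\n' | grep -n '^filesystems$' | cut -d: -f1)
if [ "$mdadm_pos" -lt "$fs_pos" ]; then
  echo "ok: mdadm runs before filesystems"
else
  echo "bad: mdadm must come before filesystems"
fi
```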
configured the boot loader with GRUB outside of the installer as indicated in the tutorial.
But when I boot I get:
    md0: unknown partition table
    Error: Unable to determine the file system of /dev/sda3
    please help.
    Last edited by 99Percent (2011-06-29 20:21:52)

    fyi this is how I finally set up my simple 2 drive raid 0:
    1. Create a bootable USB ArchLinux with UNetBootin
    2. Boot with the USB
3. # /arch/setup
4. Select source: internet (highly recommended; I found out the UNetBootin sources are not 100% reliable, so this is just to be sure)
5. Partition the two drives with 100 MB for /boot and 100 MB for swap (setup requires it; I have 8 GB of memory, so I decided I don't need much swap space) and the rest of both sda and sdb, which will make up your RAID 0.
6. Alt+F2 to another terminal and create the RAID like this:
# modprobe raid0 (not sure if this is actually necessary)
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
7. Go back to the setup screen with Ctrl+Alt+F1.
8. Go to Prepare Hard Drives > Manually configure block devices, filesystems and mountpoints. Add /boot (ext2) for sda1, swap for sdb1, and the desired filesystem for md0 (I chose ReiserFS). Ignore the rest of the devices.
9. Select packages. Add base-devel just in case, but nothing else, and install the packages.
10. Alt+F2 again to the terminal and run:
# mdadm --examine --scan > /mnt/etc/mdadm.conf
(this will configure mdadm.conf to use your RAID as created)
11. Go back to the setup screen again with Ctrl+Alt+F1.
12. Select Configure System.
13. Edit /etc/rc.conf, adding raid0 to MODULES= like this:
MODULES=(raid0)
(again, not sure if this is necessary, but it works for me)
14. Edit /etc/mkinitcpio.conf, adding dm_mod to MODULES= like this:
MODULES="dm_mod"
15. Also add the mdadm hook to HOOKS= in /etc/mkinitcpio.conf, before filesystems; in my case it went like this:
HOOKS="base udev autodetect pata scsi sata mdadm filesystems"
16. I went ahead and uncommented a few mirrors in the mirrorlist file so I wouldn't have to deal with that later (and maybe it helps along the way).
17. Also set a root password. Not sure if it is even necessary, but maybe some components require it.
18. Go to Configure Bootloader and select GRUB.
19. When asked "Do you have your system installed on software raid?", answer Yes.
20. When asked to edit the menu.lst file, don't edit anything, just exit.
21. When asked "Do you want to install grub to the MBR of each harddisk from your boot array?", answer Yes.
22. You will get "Error: Missing/Invalid root device:" and "GRUB was NOT successfully installed." Ignore those messages.
23. Exit the install.
24. Remove the USB stick and:
# reboot
25. The boot will fail and you will get a grub> prompt; type the following commands:
grub> root (hd0,0)
grub> setup (hd0)
grub> reboot
That's it!
    Last edited by 99Percent (2011-06-28 18:51:59)

  • [Solved] Move Software RAID 5 Array From NAS To Arch

Edit: I probably never had a problem at all; the error in dmesg just scared me. After I disconnected it, I noticed that /dev/md127 was 8.1 TB, the exact size of my RAID array node on my NAS (which was /dev/md0); I had simply overlooked it. I reconnected it to my PC, mounted /dev/md127 at /mnt/raid, and got this wonderful sight!
    [bran@ra ~]$ ls /mnt/raid
    data lost+found meta sys
    [bran@ra ~]$ ls /mnt/raid/data/
    data ftproot module _NAS_Media _NAS_Piczza_ _NAS_Recycle_RAID _P2P_DownLoad_ stackable _SYS_TMP TV USBHDD
    download htdocs Movies _NAS_NFS_Exports_ NAS_Public nzbget-downloads PLEX_CONFIG sys tmp USBCopy
I bought a Thecus N4520 a few months ago and it's OK, but programs crash a lot and are hard to debug, apps have to be updated manually, and the whole thing is moderately underpowered. I'm trying to move the software RAID 5 array from the NAS to my desktop; the kernel seems to detect that there is a RAID array, but all these drives supposedly aren't part of it. I'm pretty new to RAID and just getting my feet wet with it.
    When I try to assemble the RAID array, it just tells me that it isn't an md array. How can I get it to build my array?
    [bran@ra ~]$ sudo mdadm --assemble /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mdadm: device /dev/sdb exists but is not an md array.
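A hedged note for anyone reading later: the dmesg output below shows the array members are partitions (sdb2, sdc2, and so on), while the assemble command above was given whole disks. A sketch of assembling from the member partitions instead (device names taken from this thread; not runnable without the actual drives):

```text
# The superblocks live on the partitions, not the bare disks, so point
# mdadm at the partitions (names as they appear in this thread's dmesg):
mdadm --assemble /dev/md127 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
# or let mdadm scan for everything declared in mdadm.conf:
mdadm --assemble --scan
```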
    Found this little chunk of info in dmesg, it says that the md devices have unknown partition tables.
    [ 3.262225] md: raid1 personality registered for level 1
    [ 3.262483] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.10
    [ 3.262508] iTCO_wdt: Found a Patsburg TCO device (Version=2, TCOBASE=0x0460)
    [ 3.262585] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
    [ 3.262933] md/raid1:md126: active with 4 out of 4 mirrors
    [ 3.262961] md126: detected capacity change from 0 to 536850432
    [ 3.263272] RAID1 conf printout:
    [ 3.263274] --- wd:4 rd:4
    [ 3.263276] disk 0, wo:0, o:1, dev:sdc3
    [ 3.263276] disk 1, wo:0, o:1, dev:sdb3
    [ 3.263277] disk 2, wo:0, o:1, dev:sdd3
    [ 3.263278] disk 3, wo:0, o:1, dev:sde3
    [ 3.263501] md: bind<sde4>
    [ 3.264810] md: bind<sdb2>
    [ 3.268262] async_tx: api initialized (async)
    [ 3.272632] md: raid6 personality registered for level 6
    [ 3.272636] md: raid5 personality registered for level 5
    [ 3.272637] md: raid4 personality registered for level 4
    [ 3.272905] md/raid:md127: device sdb2 operational as raid disk 1
    [ 3.272908] md/raid:md127: device sde2 operational as raid disk 3
    [ 3.272910] md/raid:md127: device sdd2 operational as raid disk 2
    [ 3.272911] md/raid:md127: device sdc2 operational as raid disk 0
    [ 3.273211] md/raid:md127: allocated 0kB
    [ 3.273241] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
    [ 3.273243] RAID conf printout:
    [ 3.273244] --- level:5 rd:4 wd:4
    [ 3.273245] disk 0, o:1, dev:sdc2
    [ 3.273246] disk 1, o:1, dev:sdb2
    [ 3.273247] disk 2, o:1, dev:sdd2
    [ 3.273248] disk 3, o:1, dev:sde2
    [ 3.273273] md127: detected capacity change from 0 to 8929230716928
    [ 3.273322] RAID conf printout:
    [ 3.273326] --- level:5 rd:4 wd:4
    [ 3.273329] disk 0, o:1, dev:sdc2
    [ 3.273331] disk 1, o:1, dev:sdb2
    [ 3.273332] disk 2, o:1, dev:sdd2
    [ 3.273360] disk 3, o:1, dev:sde2
    [ 3.283617] md126: unknown partition table
    [ 3.309239] md127: unknown partition table
    [ 3.312660] md: bind<sdb4>
    [ 3.318291] md/raid1:md124: not clean -- starting background reconstruction
    [ 3.318296] md/raid1:md124: active with 4 out of 4 mirrors
    [ 3.318333] md124: detected capacity change from 0 to 10736291840
    [ 3.318385] RAID1 conf printout:
    [ 3.318391] --- wd:4 rd:4
    [ 3.318395] disk 0, wo:0, o:1, dev:sdc4
    [ 3.318398] disk 1, wo:0, o:1, dev:sdb4
    [ 3.318402] disk 2, wo:0, o:1, dev:sdd4
    [ 3.318405] disk 3, wo:0, o:1, dev:sde4
    [ 3.319890] md124: unknown partition table
    [ 3.323462] md: bind<sde1>
    [ 3.338094] md/raid1:md125: active with 4 out of 4 mirrors
    [ 3.338225] md125: detected capacity change from 0 to 2146414592
    [ 3.338253] RAID1 conf printout:
    [ 3.338258] --- wd:4 rd:4
    [ 3.338262] disk 0, wo:0, o:1, dev:sdc1
    [ 3.338266] disk 1, wo:0, o:1, dev:sdb1
    [ 3.338268] disk 2, wo:0, o:1, dev:sdd1
    [ 3.338271] disk 3, wo:0, o:1, dev:sde1
    Here's my full dmesg
    mdadm.conf
    # The designation "partitions" will scan all partitions found in /proc/partitions
    DEVICE partitions
    ARRAY /dev/md127 metadata=1.2 name=(none):0 UUID=d1d14afc:23490940:a0f7f996:d7b87dfb
    ARRAY /dev/md126 metadata=1.2 name=(none):50 UUID=d43d5dd6:9446766e:1a7486f4:b811e16d
    ARRAY /dev/md125 metadata=1.2 name=(none):10 UUID=f502437a:d27d335a:d11578d5:6e119d58
    ARRAY /dev/md124 metadata=1.2 name=(none):70 UUID=ea980643:5c1b79e8:64f1b4cb:2462799b
    Last edited by brando56894 (2014-04-21 22:51:01)

Sorry, I numbered them to show the flow of information; this was also just a place for me to store info as I worked through it. I managed to get it to work by creating a partition that takes up the whole drive and is actually 22 GB larger than those on the other drives (since I found out they had root, swap and home partitions that are no longer needed).
    I should be able to resize the other partitions without a problem, correct? They're EXT4. Should I unmount the raid array and do them individually, remount the array, let it sync and do the next? Or just unmount the array, resize all of them, mount it and let it sync?

  • Installing grub2 1.99rc1 on Linux Software RAID (/dev/md[0-x])

    Hello!
    In the last days I tried to install Archlinux x64 on my server. This server consists of 4 SATA drives. I'm using Linux software Raid with the following configuration:
    mdadm --create --verbose /dev/md0 --auto=yes --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm --create --verbose /dev/md1 --auto=yes --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    mdadm --create --verbose /dev/md2 --auto=yes --level=0 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    md0 --> / including /boot
md1 --> /data (my storage partition)
md2 --> my swap partition
In the last step of the installer I didn't install GRUB, and I left the installer without installing a bootloader.
I mounted the /dev folder to /mnt/dev and chrooted into my new system:
    chroot /mnt bash
    In my new system I installed the new grub2 bootloader with:
    pacman -S grub2-bios
    In the next step I tried to install the bootloader to the MBR with the following command:
    grub_bios-install --boot-directory=/boot --no-floppy --recheck /dev/md0
    But I get the following error:
    /sbin/grub-probe: error: no such disk.
    Auto-detection of a filesystem of /dev/md0 failed.
    Please report this together with the output of "/sbin/grub-probe --device-map="/boot/grub/device.map" --target=fs -v /boot/grub" to <[email protected]>
I also tried to install grub2 directly on my disks with the following command:
    grub_bios-install --boot-directory=/boot --no-floppy --recheck /dev/sda
    But the error is the same as above.
    Following the instruction of the error message I executed the following command:
/sbin/grub-probe --device-map="/boot/grub/device.map"
    I get a large debug output:
    /sbin/grub-probe: info: scanning hd3,msdos2 for LVM.
    /sbin/grub-probe: info: the size of hd3 is 3907029168.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: scanning hd3,msdos1 for LVM.
    /sbin/grub-probe: info: the size of hd3 is 3907029168.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: scanning hd4 for LVM.
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: scanning hd4,msdos1 for LVM.
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: changing current directory to /dev.
    /sbin/grub-probe: info: opening md0.
    /sbin/grub-probe: error: no such disk.
    There is a tutorial for installing grub2 at ArchLinux in the wiki:
    https://wiki.archlinux.org/index.php/Grub2
    But I can't find any solution for my problem.
    Does anybody know a solution for this problem?
    Thanks for help
    Best regards,
    Flasher
    Last edited by Flasher (2011-03-03 20:13:13)

    hbswn wrote:Maybe it cannot handle the new mdadm format. Here the other partitions have version 0.90. Only the failing md0 has version 1.2:
I'm sure that is the cause of the problem, but it's not the new format: the old one (0.90) is what's incompatible with grub2.
    I hit the same problem and today I managed to fix it.
What I did: I copied /boot to a tmp directory, destroyed /dev/md0 (my /boot) with metadata 0.90, created a new /dev/md0 (metadata 1.2), and copied the /boot content back.
    After that, grub_bios-install /dev/sda && grub_bios-install /dev/sdb finished without any errors.
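Spelling out the rebuild described above as commands might look like this (a sketch only: device names, RAID level, filesystem, and paths are assumed, and recreating an array destroys its contents, hence the backup first):

```text
cp -a /boot /tmp/boot-backup                        # save the current /boot
umount /boot
mdadm --stop /dev/md0                               # tear down the 0.90 array
mdadm --create /dev/md0 --metadata=1.2 --level=1 \
      --raid-devices=2 /dev/sda1 /dev/sdb1          # recreate with 1.2 metadata
mkfs.ext2 /dev/md0
mount /dev/md0 /boot
cp -a /tmp/boot-backup/. /boot/                     # restore the files
grub_bios-install /dev/sda && grub_bios-install /dev/sdb
```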

  • How to install on a software raid with dmraid?

    In linux 2.4 the ATARAID kernel framework provided support for Fake Raid (software RAID assisted by the BIOS). For linux 2.6 the device-mapper framework can do, among other nice things like LVM and EVMS, the same kind of work as ATARAID in 2.4. While the new code handling the RAID's I/O still runs in the kernel, device-mapper stuff is generally configured by a userspace application. It was clear that when using the device-mapper for RAID, detection would go to userspace.
    Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The controllers supported are (mostly cheap) Fake-RAID IDE / SATA controllers which have BIOS functions on it. Most common ones are: Promise Fasttrack controllers as well as HPT 37x, Intel, VIA and LSI. Also serial ata RAID controllers like Silicon Image Medley and Nvidia Nforce are supported by the program.
    I would like to ask if someone out there managed to set up a raid machine with dmraid, and I am asking for a full raid setup, nothing like raid for /home only.

    Loosec;
    I see that you have a handle on the dmraid package, recently upgraded I see.
    I have an application for raid that does not involve booting with raid, data system only.
    I desire to generate an archive for pacman using a raid array for purposes of install speed.
My first attempts used mdadm techniques and produced a software RAID of the hde and hdg drives, which did not show an improved hdparm read speed.
What changes to the dmraid procedure will provide a non-bootable RAID 0 system which will archive pacman packages and provide a combined RAID read speed at least 50% greater than the normal 40 MB/sec?
    Performance figures for raid0 with dmraid haven't been available in the forums.  Perhaps these are disappointing?
    Basically, how to make a raid0 system  with dmraid but not make it bootable?
    EDIT:  Solved my problem with mdadm and mke2fs, fstab entry and /mnt/md entry.
    Last edited by lilsirecho (2008-03-14 07:50:00)

Setting up RAID 0 in Disk Utility

I want to create a RAID 0 in Disk Utility on my Mac Pro 2.8, 8 cores.
I have 2 drives that I've been using in my previous PowerMac G5, striped as RAID 0 with a Sonnet 8-port eSATA controller card; they worked fine and fast.
In my Mac Pro, do I need an eSATA controller card like one from Sonnet with 4 ports in order to achieve a RAID configuration and speed?
Without a controller card, striping 2 drives, does Leopard see the 2 drives as one large drive without the speed?
Another question: when striping the drives as a RAID 0 configuration, it offers a choice of sector block size: 16, 32, 64, 128, and 256.
For video editing, which sector size would be best suited?
    any suggestions would be appreciated.
    thanks
    Sigi

Sigi wrote: "In my Mac Pro do I need an eSATA controller card like one from Sonnet with 4 ports in order to achieve a RAID configuration and speed?"
If you want to do software RAID 0 with external drives or enclosures, you can use either the ODD SATA ports with an eSATA adapter or any eSATA controller card. It doesn't have to be from Sonnet (although they are very solid).
Sigi wrote: "Without a controller card, striping 2 drives, does Leopard see the 2 drives as one large drive without the speed?"
This question is a little confusing... first, you don't need a controller card to stripe 2 drives together; it can be done merely in Disk Utility. Also, whenever you make a RAID, it always shows up as 1 drive.
    For the block size, if you're doing mainly video, higher sizes will be better, as your files are bigger in size. I've never heard much on using 256K, but afaik the video norm is 128K. However, if your files are very large, I don't see why not to go for 256K.

  • Software raid won't boot after updating to "mdadm" in mkinitcpio.conf

After a power outage I discovered that the config I was using (with the raid hook in mkinitcpio.conf) no longer works; it's mdadm now, and that's fine. I've updated that and re-run mkinitcpio successfully; however, my system is unable to boot from the root filesystem /dev/md2:
    Waiting for 10 seconds for device /dev/md2 ...
    Root device '/dev/md2' doesn't exist. Attempting to create it.
    ERROR: Unable to determine major/minor number of root device '/dev/md2'.
    You are being dropped to a recovery shell
        Type 'exit' to try and continue booting
    /bin/sh: can't access tty; job control turned off
    [ramfs /]#
    As far as I can see from reading various threads and http://wiki.archlinux.org/index.php/Ins … AID_or_LVM I'm doing the right things now (although I'm not using lvm at all, which makes the installation document a little confusing).
    I think I've included all the appropriate bits of config here that should be working.  I assume I've missed something fundamental - any ideas?
    menu.lst:
    # (0) Arch Linux
    title  Arch Linux  [/boot/vmlinuz26]
    root   (hd0,0)
    kernel /vmlinuz26 root=/dev/md2 ro
    initrd /kernel26.img
    mkinitcpio.conf:
    HOOKS="base udev autodetect pata scsi mdadm sata filesystems"
    fstab:
    /dev/md1 /boot ext3 defaults 0 1
    /dev/md2 / ext3 defaults 0 1
    mdadm.conf
    ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=7ae70fa6:9f54ba0a:21
    47a9fe:d45dbc0c
    ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=20560268:8a089af7:e6
    043406:dbdabe38
    Thanks!

Hi magec, that's quite helpful - I've certainly got further.
Before, I was doing this to set up the chroot (which is what is suggested in the wiki article about setting up software RAID):
    mdadm -A /dev/md1 /dev/sda1 /dev/sdb1
    mdadm -A /dev/md2 /dev/sda2 /dev/sdb2
    mount /dev/md2 /mnt
    mount /dev/md1 /mnt/boot
    mount -o bind /dev /mnt/dev
    mount -t proc none /mnt/proc
    chroot /mnt /bin/bash
    But based on your suggestion it's working better
    mdadm -A /dev/md1 /dev/sda1 /dev/sdb1
    mdadm -A /dev/md2 /dev/sda2 /dev/sdb2
    mount /dev/md2 /mnt
    mount /dev/md1 /mnt/boot
    mount -t proc none /mnt/proc
    mount -t sysfs none /mnt/sys
    mount -n -t ramfs none /mnt/dev
    cp -Rp /dev/* /mnt/dev
    chroot /mnt /bin/bash
    The boot is now getting further, but now I'm getting:
    md: md2 stopped.
    md: bind<sdb2>
    md: bind<sda2>
    raid1: raid set md2 active with 2 out of 2 mirrors
    md2: detected capacity change from 0 to 32218349568
    mdadm: /dev/md2 has been started with 2 drives.
    md2: Waiting 10 seconds for device /dev/md2 ...
    unknown partition table
    mount: mounting /dev/md2 on /new_root failed: No such device
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
/bin/sh: can't access tty; job control turned off
    [ramfs /]#
    The bit that really confuses me is this:
    [ramfs /]# cat /proc/mdstat
    Personalities : [raid1]
    md2 : active raid1 sda2[0] sdb2[1]
    31463232 blocks [2/2] [UU]
    md1 : active raid1 sda1[0] sdb1[1]
    208704 blocks [2/2] [UU]
    unused devices: <none>
    [ramfs /]# mount /dev/md2 /new_root
    mount: mounting /dev/md2 on /new_root failed: No such file or directory
    [ramfs /]# ls /dev/md2
    /dev/md2
    [ramfs /]#
    So the array is up, the device node is there but it can't be mounted?  Very strange.
    Last edited by chas (2010-05-02 11:24:09)

  • Can you move a software raid 1 from one mac to another

    I have a 2 disc software raid 1 on my powermac and I want to move it to my mac pro. Does anyone know if I can do this?

    It appears that in Mac OS X 10.4, and again in the transition to 10.7, RAID format may have undergone dramatic changes. If you are trying to move a RAID array across those boundaries, you may not have the best results moving the drives directly.

  • Software RAID issue

I have just installed two 200 GB Seagate hard drives in my machine and set them up as a software RAID 0. Also installed are the original 80 GB drive and a Hitachi 160 GB drive. The two new drives functioned long enough for me to copy all of my raw digital home video.
    Now, when I try to boot the grey Apple screen comes up and the machine hangs. When I disconnect the drives, it boots right up.
    I thought maybe my power supply was giving up the ghost until I reconnected them and successfully rebooted with Drive Genius 1.2. I rebuilt the raid set (maintaining all data) and rebooted into OSX successfully. Woohoo!
    But not so fast. Next reboot started the same thing. And now Drive Genius has no effect.
I really don't want to lose the data on these drives. Is there a problem with software RAID in 10.4.7? Any ideas of other things to try? Help?!?!
    Thanks,
    Andy
    G4 1.25DP FW800   Mac OS X (10.4.7)  

    Possibly a couple things that "aren't quite right" with your setup.
SoftRAID 3.5 supports mirroring for the boot drive, but not striping.
    The two IDE buses are not identical, making for some slight problems with RAID on the two.
    You really can't RAID if both drives are together in one drive cage on the same IDE bus.
    You need a PCI controller to RAID in MDD. Most people at this point in time opt for Serial ATA drives and controller. If you want a bootable SATA drive...
    http://www.firmtek.com/seritek
If you must boot from a striped RAID (not always a good idea, and it may not offer much in real-world performance; check www.barefeats.com for some tests), consider instead a dedicated boot drive for OS/apps and use the other drives for media, data, scratch, even for /Users.
    Two ATA drives on the same bus, have to share the bus, contend for I/O.
    You can create a stripe raid with Disk Utility and boot.
    I assume that you read the SoftRAID QuickStart and Manual which really are helpful. There are a number of sites etc that can go into more about RAID.

  • Mac OS X 10.4 - Software RAID 10???

    Hello All...
    As a motion graphics artist by trade and a musician at night, I am constantly faced with the on-going dilemma of my storage needs. As you may probably already know, these two activities eat up drive space and require a speedy storage solution as well.
    Here's my current setup:
    - PowerMac G5 Dual 2GHz (PCI-X) w/ 3GB of RAM.
    - Sonnet Tempo-X eSATA 4x4 PCI-X card.
    - MacGurus Burly 4 bay SATA enclosure.
    - 2 250GB WD drives in the Burly enclosure.
    I have the two 250GB WD drives in the external SATA enclosure software-RAIDed together to form one RAID 0 (striped) volume, using the OS X disk utility RAID feature. While that has resulted in an excellent, fast, big disk... I am starting to run into errors, and starting to lose sleep over the fact that the current setup is not redundant / reliable.
    My initial solution was to buy two more disks, RAID them together as another striped volume and run backups from the first two-disk WD striped volume to the new striped RAID. So, four disks, two striped raids with two disks each.... But when I called Sonnet Tech Support, the support guy said that OS X 10.4 has support for RAID 10. That changes everything! If I could fill the Burly enclosure with 4 of the same disks and software RAID 10 them together, I'd be in business... I think...
    Does OS X 10.4 really allow you to create a software RAID 10???
    If so, do you think this is the most ideal solution for what I'm working with?
    Keep in mind, I'm working with a low budget. No, XServes/XServe-RAID is NOT an option!
    Any input is greatly appreciated!
    TIA

    Hi, noka.
    Yes, it supports RAID 10. See "Disk Utility 10.5 Help: Protecting your data against hardware failure with a mirrored RAID set."
    However, you may not get the performance you expect.
    FWIW and IMO, unless one is running a high-volume transaction server with a 99.999% ("Five Nines") availability requirement, RAID is overkill. For example, unless you're running a bank, a brokerage, or a major e-commerce site, you're probably spending sums of time and money with RAID that could be applied elsewhere.
    RAID is high on the "geek chic" scale, low on the practicality scale, and very high on the "complex to troubleshoot" scale when problems arise. The average user, even one in your lines of business, is better served by implementing a comprehensive Backup and Recovery solution and using it regularly.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
    Note: The information provided in the link(s) above is freely available. However, because I own The X Lab™, a commercial Web site to which some of these links point, the Apple Discussions Terms of Use require I include the following disclosure statement with this post:
    I may receive some form of compensation, financial or otherwise, from my recommendation or link.

  • Can I install boot camp on a machine with a Software Raid-0?

    I have a machine with a software Raid-0 running Snow Leopard. I'd like to install Boot Camp but I only have 2 SATA hard drives. Windows does not need to run in a RAID on this machine and I'm fine partitioning both drives so that I have a 30GB Windows 7 partition. Both drives are 500GB each.
Can I partition each drive, take 30 GB from each, run a software RAID on the remainder, and still use Boot Camp?
    Please let me know.

    No, and 30GB is too small for Windows 7 64-bit Pro for your Mac.
    Get a 3rd drive. Assuming this is a Mac Pro you have 4 internal drive bays.

  • Migrating Software RAID 1 ?!

Hello all, I just installed my new SSD, on which I'm doing a fresh install of my OS, but I have now run into an issue with regaining the software RAID 1 I had set up on 2 external drives with the old OS. Can I migrate this RAID 1 to the fresh OS? The drives are not coming up at all at the moment, but if I boot the old OS disk they appear as normal.
    any help would be great
    thanks

Did a bit more research; SCSI and storage used to be my hobby, and the Mac Pro my passion, so I tried to dig into this further. SSDs are very popular, and people want to have 2 SSDs along with 3-4 traditional mechanical hard drives. I just don't see going with an external-type solution.
    Short answer: ain't gonna boot.
    You want a mirror - that should be fine.
    I was not familiar at all with one item, but I advise people to not put USB/FW cases on SATA controllers, too many problems and issues.
*Raidon 4-bay 19in 1RU eSATA enclosure*
    Like this?
    http://eshop.macsales.com/item/Raidon/SR4WBS2/
    Is the controller one of the very few that are bootable? most are not.
    To be bootable, the card would have to have Mac EFI firmware. Also, the EFI ROM version can "interfere" with support for cards and being bootable.
There are always drivers and firmware when it comes to booting. It is supposed to be "driver-less," translation: built into OS X 10.5.x and later.
NewerTech uses a Marvell chipset; you can google their 6G chipset and see who else uses it. I just picked up a Marvell-based 6G 2-port card, so I'm interested in why MINE is having trouble (conflicts with the Sonnet Silicon Image card, most likely). I have a couple of Sonnet cards, E4P and E2P, and there are drivers.
    the Caldigit Fasta-2E is not bootable with the Mac Pro. In addition, while most FirmTek controllers are bootable with the PowerMac G4 and G5, they are not bootable with the Apple Mac Pro. Bootable External drives Mac Pro?
    *NewerTech MAXPower eSATA 6G*
    +Does NOT support booting on any Macintosh platform.+
    http://eshop.macsales.com/item/Newer%20Technology/MXPCIE6GS2/
    Or this one,
    *eSATA 6G PCIe 2.0 RAID Capable Controller Card*
    http://eshop.macsales.com/item/Newer%20Technology/MXPCIE6GRS/
    http://www.newertech.com/products/pcieraidesata.php
    Reviewed here (and this should be req'd site to check also for Mac Pro owners)
    http://macperformanceguide.com/Reviews-MAXPowereSATA6G.html
    http://www.xlr8yourmac.com/IDE/NewerTech6GeSATA/Newertech6GeSataCard.html
    There is also FirmTek.
    http://firmtek.com/seritek/seritek-2me4-e/perform/
    http://eshop.macsales.com/item/Firmtek/SATAE6G/
    Highpoint has always depended on driver.
    Popular kit to install two SSDs in optical drive bay:
    http://eshop.macsales.com/item/Other%20World%20Computing/MM352A52MP9/
    External dual drive (SATA only, which is what I use and want for native SATA speeds)
http://eshop.macsales.com/item/Other%20World%20Computing/MESATATBEK/?APC=XLR8YourMac09
    The ideal would be to boot off the two SSDs, and yes it would help to have a PCIe with internal ports, then use them in stripe RAID, not mirror (you can backup the OS and have a clone on external FW800; on sparse disk image of 100GB; on a small internal drive even), and when it changes.
    http://www.xlr8yourmac.com/IDE/SSDin_Mac_Pro/SSD_install_inMacPro.htm#storytop
    For an internal controller, the HighPoint RocketRAID 640 would be worth a look, IF it were supported on the Mac at all:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16816115077
    http://www.tweaktown.com/reviews/3309/highpointrocketraid_640_sata_6gb_s_4_port_pci_e_2_0controller/index.html
    http://www.bit-tech.net/hardware/2011/01/04/high-point-rocketraid-640-review/5
    To boot in 64-bit mode, all drivers (and, I believe, even some application plug-ins) have to be 64-bit, and not everything is 64-bit compatible.
    • one SSD in the optical drive bay for the system
    • no need to mirror
    • no controller needed
    • use the 2nd SSD for scratch or as an expensive offline backup
    If the SSD fails, swap it out, boot from the clone, rebuild and restore.
    No need for a mirror, AND disk writes are where SSDs can suffer and slow down.
    But DO consider SoftRAID 4.x: great SSD support and a driver that is useful even for non-RAID disks, and their mirror RAID is a notch above.
    http://www.softraid.com
    I found Highpoint RR Quad on Apple Store:
    Is this compatible with the new 2010 Mac Pros? bootable?
    http://store.apple.com/us/product/H1113LL/A#compatibility
    I think you would get more Mac Pro owners to read and help in the Mac Pro forum. And don't forget to stop by the XLR8YourMac web site for news and tips about hardware upgrades and reports.

  • Software Raid 5 - Performance

    Hi all,
    we have a problem with a software RAID 5. Its read performance is acceptable (61 MB/s through the UFS filesystem) while its write performance is very, very bad: 3 MB/s through UFS is not acceptable for a machine with six 1.35 GHz CPUs and 7 very fast (10K RPM) Fibre Channel disks.
    RAID 1 write performance is okay (42 MB/s), as is the performance of the individual disks (read: 74 MB/s).
    The RAID 5 comprises 5 disks (140 GB each, 10K RPM) and was built using the standard commands from the man page and online help.
    The hardware: SunFire 890, 6 CPUs (1.35 GHz UltraSPARC IV), 24 GB memory, 7 disks (140 GB, 10K RPM each). The machine runs SunOS 5.10.
    The question is: what are the options to speed up the write performance of the RAID 5?
    A much cheaper Athlon 64-based Linux system with slower SATA disks is much faster at reading (180 MB/s) and writing (around 50 MB/s) on a similar software RAID 5.
    While searching the net I found some benchmarks indicating that write and read on a Solaris software RAID 5 should "normally" be nearly the same.
    Any ideas what to do?
    Greetings and thanks in advance,
    Jan
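    The poster says the array was "built using the standard commands from the man page"; on Solaris 10 that normally means Solaris Volume Manager's metainit. A minimal sketch of such a setup follows. The device names (c1t1d0s0 ...) and metadevice name d50 are hypothetical, and the commands are only echoed as a dry run so they can be reviewed before being run on a real machine; check the metainit man page for your release.

```shell
#!/bin/sh
# Dry-run sketch of a Solaris Volume Manager RAID-5 setup.
# Hypothetical member slices; nothing is actually created here.
DISKS="c1t1d0s0 c1t2d0s0 c1t3d0s0 c1t4d0s0 c1t5d0s0"
echo "metainit d50 -r $DISKS -i 64b"   # -r = RAID 5, -i = interlace (stripe unit)
echo "metastat d50"                    # wait for initialization before newfs
echo "newfs /dev/md/rdsk/d50"
```

    The interlace (-i) is one of the few knobs SVM exposes that affects RAID-5 write behaviour; if the array was created with the default, recreating it with a different interlace is worth testing before looking elsewhere.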

    Hi,
    thanks for your answer. But... I don't think that is the problem here. It is clear that those factors slow down writes compared to the native write performance of the underlying disks, but the slowdown is IMO an order of magnitude too high.
    As already mentioned, an Athlon Linux system with 4 disks also shows a write slowdown (compared to reading), but write performance is still 25-30% of read performance. For this Sun machine it is 5% of read performance. If the problem were caused by contention and queuing, it should apply there too, right?
    Or, to use the measurements: the Athlon with Linux does around 50 MB/s write and 180 MB/s read, which makes sense for a fileserver with Gigabit Ethernet. The Sun does 61 MB/s read, which is acceptable for such a server, while the write speed of 3 MB/s is too slow even for a single 100 Mbit/s client.
    Some benchmarks I found on the net do not show such a big performance gap between read and write on Solaris software RAID 5, so I still suspect there is a fundamental problem in our installation.
    What about others... does anyone have numbers from experiments such as writing 1 GB with dd to an empty partition and measuring the time? The same for reading, after a reboot or remount in order to empty the filesystem cache.
    Greetings,
    Jan
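    For anyone who wants to reproduce the dd experiment described above, here is a minimal sketch. TARGET and the 64 MB size are placeholder choices: point TARGET at a file on the RAID-5 filesystem and raise count to 1024 for the 1 GB run.

```shell
#!/bin/sh
# Rough sequential write/read test with dd.
TARGET=${TARGET:-/tmp/ddtest.bin}
dd if=/dev/zero of="$TARGET" bs=1024k count=64   # write pass
sync
# NOTE: the read pass below will be served from the filesystem cache unless
# the filesystem is remounted (or the box rebooted) between the two passes.
dd if="$TARGET" of=/dev/null bs=1024k            # read pass
```

    Time each dd with time(1) and divide bytes by elapsed seconds; Solaris dd does not print a throughput figure the way GNU dd does.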

  • Will OS X software RAID 0 on external drives be recognized elsewhere?

    Are RAID setups created in Disk Utility recognized by other computers? Say I have 2 external drives (over eSATA on my '08 Mac Pro) in a RAID 0 from Disk Utility; can I plug these two drives into another Mac Pro and have them work?
    Basically, are software RAID setups from Disk Utility readable on all Macs?

    Both systems would have to have their own controllers or ports, but yes, the set should move and work just the same on both.
    There are port-multiplier cases and controllers, there are the 2 ODD (optical drive bay) SATA ports people use, and there are direct-connect external SATA controllers.
    You can even create a RAID internally or externally and move it and it will work; change the drive bays and the order of your drives and the RAID will still work and be intact.
    What often doesn't work are eSATA/FireWire or eSATA/USB combo cases.
    Mac Pro Discussion focus on using and expanding is here:
    http://discussions.apple.com/category.jspa?categoryID=194
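    For reference, the Disk Utility RAID set can also be inspected and (re)created from the command line. A dry-run sketch follows; the set name, filesystem, and disk identifiers are hypothetical, the commands are echoed rather than executed, and the exact verbs should be checked against `diskutil appleRAID` on your OS X version. On the second Mac the AppleRAID set is assembled automatically at mount time; `list` just lets you confirm member status.

```shell
#!/bin/sh
# macOS-only commands, echoed as a dry run (names are hypothetical).
echo "diskutil appleRAID list"                            # show RAID sets and member status
echo "diskutil appleRAID create stripe MyRAID JHFS+ disk2 disk3"   # how such a set is made
```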

  • Best Throughput for RAID - Internal SATA vs. PCI RAID Card

    Hi, we're upgrading our old server, which has been running on an upgraded G4. We put 3 SATA drives in it, connected to a HighPoint 1820a RAID card running RAID 5.
    We're moving this to an older G5 1.6 GHz model (2003). It has 2 internal SATA drive bays but can be expanded with an adapter kit. It also has plain PCI slots (not PCI-X or PCIe).
    So I'm wondering what would provide better performance (throughput, I guess):
    • Using the 2 internal bays with a software RAID 1?
    • Putting our old HighPoint RAID card in a PCI slot and running RAID 1 or possibly RAID 5 (with another drive)?
    I know the RAID card doesn't use the processor for RAID functions, but in a RAID 1 the CPU overhead is fairly minimal anyway, correct? I'm mostly wondering whether the older plain PCI slot would be the bottleneck compared to the onboard SATA controllers.
    THANKS!

    Well, we're most concerned with data security and reliability. We're not streaming video or anything else that would demand extreme throughput; the server mostly serves files to 4-5 users. So I imagine the performance gains would only be noticed when opening/closing files off the server, correct?
    And we already own the HighPoint card, which has 8 internal SATA ports, so we'll probably use it. It works fine on the PCI bus. So we can run it with 4 drives as either two RAID 1 sets (boot & files), a RAID 5, or a RAID 10.
    We'll probably stick with Western Digital or Seagate 'enterprise' SATA drives, such as the WD RE3.
    So, do you think we would see much real-world difference between the RAID options (software RAID 1, hardware RAID 1, hardware RAID 5, hardware RAID 10)?
    Thanks!
