[Solved] Move Software RAID 5 Array From NAS To Arch

Edit: It turns out I probably never had a problem at all; the error in dmesg just scared me. After I disconnected the drives I noticed that /dev/md127 was 8.1 TB, the exact size of the RAID array node in my NAS (/dev/md0), and I had simply overlooked it. I reconnected the drives to my PC, mounted /dev/md127 at /mnt/raid, and got this wonderful sight:
[bran@ra ~]$ ls /mnt/raid
data lost+found meta sys
[bran@ra ~]$ ls /mnt/raid/data/
data ftproot module _NAS_Media _NAS_Piczza_ _NAS_Recycle_RAID _P2P_DownLoad_ stackable _SYS_TMP TV USBHDD
download htdocs Movies _NAS_NFS_Exports_ NAS_Public nzbget-downloads PLEX_CONFIG sys tmp USBCopy
I bought a Thecus N4520 a few months ago. It's OK, but programs crash a lot and are hard to debug, apps have to be updated manually, and the whole thing is moderately underpowered. I'm trying to move the software RAID 5 array from the NAS to my desktop. The kernel seems to detect that there is a RAID array, but it acts as though these drives aren't part of it. I'm pretty new to RAID and just getting my feet wet.
When I try to assemble the array, mdadm just tells me the device isn't an md array. How can I get it to build my array?
[bran@ra ~]$ sudo mdadm --assemble /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: device /dev/sdb exists but is not an md array.
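As it turned out (see the edit above), nothing needed assembling: the dmesg output below shows the kernel had already built the array as /dev/md127 from the second partition of each disk, which is also why --assemble against the whole disk /dev/sdb complains that it is not an md array; the md superblocks live on sdb2, sdc2, sdd2 and sde2. A rough sketch of the checks that get from there to a mounted array, matching the edit above (/mnt/raid is just the mount point used there):
# See what the kernel already assembled at boot
cat /proc/mdstat
sudo mdadm --detail /dev/md127
# The superblocks are on the partitions, not the bare disks
sudo mdadm --examine /dev/sdb2
# Mount the assembled array
sudo mount /dev/md127 /mnt/raid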
I found this little chunk of info in dmesg; it says that the md devices have unknown partition tables.
[ 3.262225] md: raid1 personality registered for level 1
[ 3.262483] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.10
[ 3.262508] iTCO_wdt: Found a Patsburg TCO device (Version=2, TCOBASE=0x0460)
[ 3.262585] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[ 3.262933] md/raid1:md126: active with 4 out of 4 mirrors
[ 3.262961] md126: detected capacity change from 0 to 536850432
[ 3.263272] RAID1 conf printout:
[ 3.263274] --- wd:4 rd:4
[ 3.263276] disk 0, wo:0, o:1, dev:sdc3
[ 3.263276] disk 1, wo:0, o:1, dev:sdb3
[ 3.263277] disk 2, wo:0, o:1, dev:sdd3
[ 3.263278] disk 3, wo:0, o:1, dev:sde3
[ 3.263501] md: bind<sde4>
[ 3.264810] md: bind<sdb2>
[ 3.268262] async_tx: api initialized (async)
[ 3.272632] md: raid6 personality registered for level 6
[ 3.272636] md: raid5 personality registered for level 5
[ 3.272637] md: raid4 personality registered for level 4
[ 3.272905] md/raid:md127: device sdb2 operational as raid disk 1
[ 3.272908] md/raid:md127: device sde2 operational as raid disk 3
[ 3.272910] md/raid:md127: device sdd2 operational as raid disk 2
[ 3.272911] md/raid:md127: device sdc2 operational as raid disk 0
[ 3.273211] md/raid:md127: allocated 0kB
[ 3.273241] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
[ 3.273243] RAID conf printout:
[ 3.273244] --- level:5 rd:4 wd:4
[ 3.273245] disk 0, o:1, dev:sdc2
[ 3.273246] disk 1, o:1, dev:sdb2
[ 3.273247] disk 2, o:1, dev:sdd2
[ 3.273248] disk 3, o:1, dev:sde2
[ 3.273273] md127: detected capacity change from 0 to 8929230716928
[ 3.273322] RAID conf printout:
[ 3.273326] --- level:5 rd:4 wd:4
[ 3.273329] disk 0, o:1, dev:sdc2
[ 3.273331] disk 1, o:1, dev:sdb2
[ 3.273332] disk 2, o:1, dev:sdd2
[ 3.273360] disk 3, o:1, dev:sde2
[ 3.283617] md126: unknown partition table
[ 3.309239] md127: unknown partition table
[ 3.312660] md: bind<sdb4>
[ 3.318291] md/raid1:md124: not clean -- starting background reconstruction
[ 3.318296] md/raid1:md124: active with 4 out of 4 mirrors
[ 3.318333] md124: detected capacity change from 0 to 10736291840
[ 3.318385] RAID1 conf printout:
[ 3.318391] --- wd:4 rd:4
[ 3.318395] disk 0, wo:0, o:1, dev:sdc4
[ 3.318398] disk 1, wo:0, o:1, dev:sdb4
[ 3.318402] disk 2, wo:0, o:1, dev:sdd4
[ 3.318405] disk 3, wo:0, o:1, dev:sde4
[ 3.319890] md124: unknown partition table
[ 3.323462] md: bind<sde1>
[ 3.338094] md/raid1:md125: active with 4 out of 4 mirrors
[ 3.338225] md125: detected capacity change from 0 to 2146414592
[ 3.338253] RAID1 conf printout:
[ 3.338258] --- wd:4 rd:4
[ 3.338262] disk 0, wo:0, o:1, dev:sdc1
[ 3.338266] disk 1, wo:0, o:1, dev:sdb1
[ 3.338268] disk 2, wo:0, o:1, dev:sdd1
[ 3.338271] disk 3, wo:0, o:1, dev:sde1
Here's my full dmesg
mdadm.conf
# The designation "partitions" will scan all partitions found in /proc/partitions
DEVICE partitions
ARRAY /dev/md127 metadata=1.2 name=(none):0 UUID=d1d14afc:23490940:a0f7f996:d7b87dfb
ARRAY /dev/md126 metadata=1.2 name=(none):50 UUID=d43d5dd6:9446766e:1a7486f4:b811e16d
ARRAY /dev/md125 metadata=1.2 name=(none):10 UUID=f502437a:d27d335a:d11578d5:6e119d58
ARRAY /dev/md124 metadata=1.2 name=(none):70 UUID=ea980643:5c1b79e8:64f1b4cb:2462799b
Last edited by brando56894 (2014-04-21 22:51:01)

Sorry, I numbered them to show the flow of information; this was also just a place for me to store notes as I worked through it. I managed to get it to work by creating a partition that takes up the whole drive and is actually 22 GB larger than the partitions on the other drives (since I found out they had root, swap and home partitions that are no longer needed).
I should be able to resize the other partitions without a problem, correct? They're ext4. Should I unmount the RAID array, do them individually, remount the array, let it sync, and then do the next? Or just unmount the array, resize all of them, mount it again and let it sync?
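For what it's worth, the conventional way to enlarge the member partitions of a live md array is one disk at a time, so the array is only ever degraded by a single member during each rebuild; a current backup is still strongly advised. A minimal sketch under that assumption, using the device names from this thread (the exact partition numbers on your disks may differ):
# Repeat for each original disk in turn (sdb shown as the example)
sudo mdadm --manage /dev/md127 --fail /dev/sdb2
sudo mdadm --manage /dev/md127 --remove /dev/sdb2
# Repartition sdb so sdb2 covers the reclaimed space, then re-add it
sudo mdadm --manage /dev/md127 --add /dev/sdb2
# Wait for the resync to finish before touching the next disk
cat /proc/mdstat
# After all members are enlarged, grow the array and then the ext4 filesystem
sudo mdadm --grow /dev/md127 --size=max
sudo umount /mnt/raid
sudo e2fsck -f /dev/md127
sudo resize2fs /dev/md127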

Similar Messages

  • Can you move a software raid 1 from one mac to another

    I have a 2 disc software raid 1 on my powermac and I want to move it to my mac pro. Does anyone know if I can do this?

    It appears that in Mac OS X 10.4, and again in the transition to 10.7, RAID format may have undergone dramatic changes. If you are trying to move a RAID array across those boundaries, you may not have the best results moving the drives directly.

  • Moving RAID Array from old to new workstation

    I've completed building a new rig. Win10 is working. I have a RAID 0 array of two 750 GB drives in the old rig, and I've installed two 1 TB drives in the new rig. Disk Management shows both the old and the new HDs, but not as a single RAID array each (RAID-OLD and RAID-NEW, as I named them during the BIOS RAID setup). Could I have some guidance about this? I also have the contents backed up on a USB 3 external drive, but I would like to keep both RAID arrays rather than transfer the data via USB.

    Hi CWO4 Mann,
    Hardware needs a driver before Windows can use it. When you move your RAID to the new system, make sure you have installed the appropriate RAID controller driver.
    I suggest you download the driver while in Windows 10 Technical Preview and install it in Windows 8 compatibility mode to check whether it works well.
    Alex Zhao
    TechNet Community Support

  • Image Software on nVidia RAID arrays

    Has anyone been able to get a disk imaging program to work with the nVidia RAID array? I have tried Acronis True Image, Norton Ghost 9.0 and the old Drive Image, but I have been unable to get the rescue/recovery disks to recognize the RAID array. With Acronis, it appears they don't have the Linux drivers for the RAID array. Ghost 9.0 is supposed to support RAID arrays, but when I try to load the drivers I get a message that it can't find nvraid.sys, despite the fact that I supply that file on the floppy disk. I spent 1.5 hours on the phone with Symantec Technical Support, and their only answer was that I would receive a phone call from a supervisor in 3 days.
    I can get Norton Ghost to work from within Windows, but in order to restore an image of my C: drive I need to gain access to the RAID array from the recovery disk.

    I have sort of got it to work with Ghost 9.
    I have written a plug-in for nvraid/nvide for the BartPE (Windows PE) CD made with PE Builder 3032.
    V2I images can then be browsed with the image browser, and files/catalogs can be restored to the array's partitions.
    The thing that does not yet work properly is the V2I Protector executable, because the Bart plug-in for .NET gives problems (I have not sorted that out yet).
    The BartPE CD is, of course, also equipped with the Ghost 9 plugin.
    So basically the PC is booted with a bootable Windows PE CD (like the Ghost 9 CD), Windows runs from CD and RAM, and all the nvraid arrays are visible. Restoration is done with the V2I image browser.
    The plug-in ( INF file ) :
    Code:
    ; PE Builder v3 plug-in INF file
    ; NVIDIA RAID (Windows XP)
    ; Created by Syar2003
    [Version]
    Signature= "$Windows NT$"
    [PEBuilder]
    Name="DSK_NvRaid"
    Enable=1
    [SourceDisksFiles]
    nvraid.sys=4,,1
    nvatabus.sys=4,,1
    [SetValue]
    "txtsetup.sif","SourceDisksFiles","nvraid.sys", "1,,,,,,_x,4,1,0,0"
    "txtsetup.sif","SourceDisksFiles","NvAtaBus.sys", "1,,,,,,_x,4,1,0,0"
    "txtsetup.sif","SCSI.Load","nvatabus", "nvatabus.sys,4"
    "txtsetup.sif","SCSI.Load","nvraid", "nvraid.sys,4"
    "txtsetup.sif","SCSI","nvatabus", """NVIDIA NForce Storage Controller"""
    "txtsetup.sif","SCSI","nvraid", """NVIDIA RAID CLASS DRIVER"""
    "txtsetup.sif","HardwareIdsDatabase","PCI\VEN_10DE&DEV_008E", """nvatabus"""
    "txtsetup.sif","HardwareIdsDatabase","PCI\VEN_10DE&DEV_0085", """nvatabus"""
    "txtsetup.sif","HardwareIdsDatabase","PCI\VEN_10DE&DEV_00D5", """nvatabus"""
    "txtsetup.sif","HardwareIdsDatabase","PCI\VEN_10DE&DEV_00EE", """nvatabus"""
    "txtsetup.sif","HardwareIdsDatabase","PCI\VEN_10DE&DEV_00E3", """nvatabus"""
    "txtsetup.sif","HardwareIdsDatabase","PCI\VEN_10DE&DEV_00E5", """nvatabus"""
    "txtsetup.sif","HardwareIdsDatabase","*_NVRAIDBUS", """nvraid"""
    "txtsetup.sif","HardwareIdsDatabase","GenNvRaidDisk", """nvraid"""
    If the BartPE CD is equipped with Ghost 8, you should be able to restore full disks/partitions with the *.gho extension.
    But then the images must be built with Ghost 8 (from the BartPE CD first).
    Though I have not had the time to test that yet.
    Links:
    Plug-in repository
    http://www.reatogo.de/BartPE/BartPE_plugins/repository.htm
    PeBuilder
    http://www.nu2.nu/pebuilder/

  • Unable to add partition on raid array, device or resource busy.

    Greetings,
    I want to be able to create a disk image of a software RAID on one of my Arch boxes.
    I'm able to create my image with G4U successfully. I'm also able to restore my image without error on my new box.
    When my system boots up, I make sure that my RAID arrays are up by checking cat /proc/mdstat.
    I can see that md1 and md2 are 2 of 2 and active RAID 1, but when I look at md0 this is what I get:
    md0 : active raid1 sdb3[1]
               6289344 blocks [2/1] [_/U]
    I tried to add the partition sda3 to the md0 array with this command:
    mdadm --manage /dev/md0 --add /dev/sda3
    The output of this command gives me this:
    mdadm: Cannot open /dev/sda3: Device or resource busy.
    It seems that this error only occurs on the /dev/md0 (/) array. I'm 100% sure that both my image and my drive (VMware HDD) are good.
    This is my partition table:
    /dev/sda1 /dev/sdb1 = /boot (md1) 100MB
    /dev/sda2 /dev/sdb2 = swap (md2) 2048MB
    /dev/sda3 /dev/sdb3 = / (md0) 8GB
    I have also tried the image creation with Acronis... same error.

    I solved my issue.
    My menu.lst was wrong:
    kernel /kernel26 root=/dev/md0 ro
    I should add this to my menu.lst:
    kernel /kernel26 root=/dev/md0 ro  md=0,/dev/sda3,/dev/sdb3
    Now it works.
    I followed the Arch Linux RAID guide, which says:
    Nowadays (2009.02), with the mdadm hook in the initrd it is no longer necessary to add kernel parameters concerning the RAID array(s).
    That is wrong in my case, because my install is 2009.02; if someone could add a note to the wiki it would be useful.
    Thanks for your support
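    A quick way to confirm that a re-added member is actually rebuilding, sketched with the array and partition names used in this thread:
    # Watch the recovery progress
    cat /proc/mdstat
    # More detail on member state and sync status
    mdadm --detail /dev/md0
    # Inspect the superblock on the re-added partition if something still looks off
    mdadm --examine /dev/sda3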

  • Will OS X software RAID 0 on an external drive be recognized elsewhere?

    Are RAID setups created in Disk Utility recognized by other computers? Say I have 2 external drives (over eSATA on my '08 Mac Pro) in a RAID 0 from Disk Utility, can I plug these two drives into another Mac Pro and have them work?
    Basically, are software RAID setups from Disk Utility readable on all Macs?

    Both systems would have to have their own controllers or ports, but yes, they should move and work just the same on both.
    There are Port Multiplier cases and controllers, and there are the 2 ODD ports people use, or direct connect SATA external controllers.
    You can even create a RAID, internal or external, and move it and it will work; you can also change the drive bay and the order of your drives, and the RAID will still work and be intact.
    What often doesn't work are eSATA/FW or eSATA/USB cases.
    The Mac Pro discussion forum, focused on using and expanding the Mac Pro, is here:
    http://discussions.apple.com/category.jspa?categoryID=194

  • Software RAID apps?

    Is there any alternative to SoftRAID and Disk Utility for creating software RAID arrays? Disk Utility falls short in a couple of areas. SoftRAID is perfect for my purpose, but I find the US$180 price off-putting. Unfortunately, after searching with Google, I cannot find an alternative to it.

    If it weren't for the bugs, I'd be delighted with AppleRAID.
    To have something that almost works for nothing or works for $180 is not an easy choice. You can get at least 4TB of storage for $180!

  • Can I move Array from one Xserve RAID to another and keep the data

    I'd like to move a set of disk with an existing Array from one Xserve RAID to another. Can I simply shutdown both Xserve RAIDs and move the disk over, assuming I put the disk back in the same order?

    Yes, you should be able to do this.
    BE SURE YOU HAVE A BACKUP FIRST.

  • Unable to move existing installation to software RAID-1 setup

    I have an existing Arch Linux amd64 installation which I want to move to a RAID-1 setup.
    I followed the following Wiki article:
    Arch Linux Wiki - Installing with Software RAID or LVM
    with the necessary changes applied, and with some supplemental advice from the equivalent Gentoo Wiki article:
    Gentoo Linux Wiki - HOWTO Install on Software RAID
    mainly for the "--assume-clean" command-line switch to mdadm.
    With a Larch live CD I took a tarball backup of each partition.
    Then I partitioned the disks with cfdisk
    and copied their partition tables to their RAID-1 mirrors:
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    Now, I cannot boot Arch on the RAID-1 setup.
    After initramfs is finished, I get a kernel panic as the "/" partition cannot be mounted.
    The mkinitcpio.conf that created my kernel is this:
    MODULES="ahci pata_amd ata_generic sata_nv sata_sil"
    HOOKS="base udev autodetect pata scsi sata usbinput keymap raid lvm2 filesystems"
    My rc.conf is this:
    MODULES=(raid1 powernow-k8 fuse forcedeth ac battery button fan thermal loop)
    My fstab ("/" and "/boot" lines) is this:
    /dev/md0 /boot ext3 noatime 0 1
    /dev/md1 / ext3 noatime 0 1
    And my menu.lst is this:
    timeout 5
    default 0
    color light-blue/black light-cyan/blue
    title Arch Linux
    root (hd0,0)
    kernel /vmlinuz26 root=/dev/md1 ro swncq=1 md=0,/dev/sda1,/dev/sdb1 md=1,/dev/sda2,/dev/sdb2
    initrd /kernel26.img
    I even tried another shot by passing all ten RAID-1 arrays (md0 to md9) as kernel parameters, but still, no effect.
    Any help would be greatly appreciated.
    PS: I have created a correct mdadm.conf file with:
    mdadm -D --scan >>/etc/mdadm.conf
    Last edited by wantilles (2008-09-08 08:28:20)

    My experience with RAID suggests that mdadm setups work after boot-up as "software RAID": the array is assembled and identified with the mdadm --assemble command.
    IMHO, the question for booting from mdadm is: how does the user "assemble" the array before the elements needed for mdadm --assemble are accessible?
    My understanding of booting with RAID involves the use of dmraid, a "software" RAID program which can be adapted to boot use. AFAIK it involves changes to the boot sequence to prevent the loss of dmraid parameters during boot processes such as hardware detection.
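    As a counterpoint to the dmraid suggestion: for mdadm arrays the boot-time chicken-and-egg problem is normally solved by the initramfs, where the mdadm hook assembles the arrays listed in /etc/mdadm.conf before the root filesystem is mounted. A minimal sketch of that route, assuming the mdadm mkinitcpio hook and the kernel26 preset of that era (the HOOKS line is adapted from the one in the question, with the raid hook swapped for mdadm; keep lvm2 only if LVM is actually in use):
    # Record the arrays so the initramfs knows what to assemble
    mdadm --detail --scan >> /etc/mdadm.conf
    # /etc/mkinitcpio.conf: mdadm must come before filesystems
    HOOKS="base udev autodetect pata scsi sata usbinput keymap mdadm filesystems"
    # Rebuild the initramfs with the hook and mdadm.conf baked in
    mkinitcpio -p kernel26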

  • Adding A New Drive To A Software RAID 5 Array

    Edit 3: Just mounted the partitions and I can delete them because they contain nothing special. Is it safe to expand the 2nd partition of each drive to fill up the left over 22 GB?
    Edit 2: I just deleted all the partitions off of my new drive and created one partition, then added it to the array and it works just fine. My next question is, can I delete all the smaller partitions and expand /dev/sd[x]2 to reclaim all the space (about 70 GB)?
    One of my drives failed and Western Digital sent me a new drive, except it was an external drive instead of an internal one, so I cracked it open. The label looked different, but it turns out it's just refurbished and is the same model as my other drives (WD Caviar Green 3 TB).
    I've read through the wiki article on Software RAID and created the partitions exactly the same as on my other drives, but while creating the main 2.7 TB partition gdisk says that the ending sector is out of range, even though it doesn't look like it should be. I'm new to all this, so I have no idea what to do. From what I've read there normally aren't this many partitions per disk, correct? I also have md124, md125 and md126 for the other partitions; md127 is for the 2.7 TB partitions. I took the array out of my Thecus N4520. I have a 3 TB external drive and a 1 TB internal one, along with another 500 GB drive. Would I be better off destroying the RAID set and creating a fresh RAID 5 set, considering I'm losing about 90 GB if I don't need the smaller partitions?
    /dev/sdc
    Disk /dev/sdc: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 00636413-FB4D-408D-BC7F-EBAF880FBE6D
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    /dev/sdd
    Disk /dev/sdd: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): C5900FF4-95A1-44BD-8A36-E1150E4FC458
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    /dev/sde
    Disk /dev/sde: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2B5527AC-9D53-4506-B31F-28736A0435BD
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    new drive: /dev/sdf
    Disk /dev/sdf: 5860467633 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 93F9EF48-998D-4EF9-B5B7-936D4D3C7030
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860467599
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 5813281700 sectors (2.7 TiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00 Linux RAID
    2 47187968 47187969 1024 bytes FD00 Linux RAID
    3 46139392 47187951 512.0 MiB FD00 Linux RAID
    4 2048 20973559 10.0 GiB FD00 Linux RAID
    5 20973568 41945071 10.0 GiB FD00 Linux RAID
    When I type in 5860491263 as the end sector, gdisk does nothing and just wants more input. If I type +2.7T it accepts it, but it actually just creates a partition that's 1 KB in size!
    I am able to create a 2.7 TB partition with an end sector of 5860467599; this won't screw anything up, will it?
    Edit 1: just tried it and got this
    [root@ra /home/bran]# mdadm --add /dev/md127 /dev/sdf2
    mdadm: /dev/sdf2 not large enough to join array
    [root@ra /home/bran]# fdisk -l /dev/sdf
    Disk /dev/sdf: 2.7 TiB, 3000559428096 bytes, 5860467633 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 93F9EF48-998D-4EF9-B5B7-936D4D3C7030
    Device Start End Size Type
    /dev/sdf1 41945088 46139375 2G Linux RAID
    /dev/sdf2 47187968 5860467599 2.7T Linux RAID
    /dev/sdf3 46139392 47187951 512M Linux RAID
    /dev/sdf4 2048 20973559 10G Linux RAID
    /dev/sdf5 20973568 41945071 10G Linux RAID
    Last edited by brando56894 (2014-04-28 00:47:29)

    Sorry, I numbered them to show the flow of information; this was also just a place for me to store notes as I worked through it. I managed to get it to work by creating a partition that takes up the whole drive and is actually 22 GB larger than the partitions on the other drives (since I found out they had root, swap and home partitions that are no longer needed).
    I should be able to resize the other partitions without a problem, correct? They're ext4. Should I unmount the RAID array, do them individually, remount the array, let it sync, and then do the next? Or just unmount the array, resize all of them, mount it again and let it sync?
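    For context, the replacement disk is slightly smaller than the originals (5860467633 sectors versus 5860533168), which is why gdisk rejected the old end sector 5860491263, and the sdf2 shown above ended up only 1024 bytes long, which is why mdadm said it was not large enough to join the array. A minimal sketch of the whole-disk-partition approach described in this reply, using sgdisk and mdadm (device names are taken from this thread; double-check that /dev/sdf really is the new, empty disk before zapping it):
    # Wipe the cloned Thecus layout from the replacement disk
    sgdisk --zap-all /dev/sdf
    # Create one partition spanning the whole disk, typed as Linux RAID (fd00)
    sgdisk -n 1:0:0 -t 1:fd00 -c 1:"Linux RAID" /dev/sdf
    # Add it to the degraded array and watch the rebuild
    mdadm --manage /dev/md127 --add /dev/sdf1
    cat /proc/mdstat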

  • Can I install Snow Leopard and boot from software RAID 1 (mirror)?

    I have a Mac Pro (quad core 2.66 GHz) on order for my office workstation. Yeah, I know new ones are probably coming out early next year but due to budget and upcoming projects I need one now. What I'd like to do is replace the pre-installed 640GB drive with two 1 TB drives and mirror them. The 640GB drive will be redeployed to another machine in the office. Can I boot from the Snow Leopard install DVD, go to Disk Utility, setup a RAID 1 with the two drives, install Snow Leopard to the mirror and then boot off the mirror set?
    I've searched and found offhand comments to the effect that installing to and booting from a software mirror is OK, but I'd like to know for sure that it's OK. Any experience that you have with such a configuration would be nice to hear.

    Yes. But before you do, read the following:
    RAID Basics
    For basic definitions and a discussion of what a RAID is and the different types of RAID, see RAIDs. For additional discussion, plus the advantages and disadvantages of RAIDs and different RAID arrays, see:
    RAID Tutorial;
    RAID Array and Server: Hardware and Service Comparison.
    Hardware or Software RAID?
    RAID Hardware Vs RAID Software - What is your best option?
    RAID is a method of combining multiple disk drives into a single entity in order to improve the overall performance and reliability of your system. The different options for combining the disks are referred to as RAID levels. There are several different levels of RAID available depending on the needs of your system. One of the options available to you is whether you should use a Hardware RAID solution or a Software RAID solution.
    RAID Hardware is always a disk controller to which you can cable up the disk drives. RAID Software is a set of kernel modules coupled together with management utilities that implement RAID in Software and require no additional hardware.
    Pros and cons
    Software RAID is more flexible than Hardware RAID. Software RAID is also considerably less expensive. On the other hand, a Software RAID system requires more CPU cycles and power to run well than a comparable Hardware RAID System. Also, because Software RAID operates on a partition by partition basis where a number of individual disk partitions are grouped together as opposed to Hardware RAID systems which generally group together entire disk drives, Software RAID tends to be slightly more complicated to run. This is because it has more available configurations and options. An added benefit to the slightly more expensive Hardware RAID solution is that many Hardware RAID systems incorporate features that are specialized for optimizing the performance of your system.
    For more detailed information on the differences between Software RAID and Hardware RAID you may want to read: Hardware RAID vs. Software RAID: Which Implementation is Best for my Application?
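    To make the question's plan concrete: the same mirror can also be created from Terminal (available under Utilities in the Snow Leopard installer) before running the install. A minimal sketch, assuming the two 1 TB drives appear as disk1 and disk2 (check with diskutil list first; the set name and format here are just example values):
    # Confirm which identifiers the two 1 TB drives received
    diskutil list
    # Create a journaled HFS+ mirror named "MirrorSet" from the two whole disks
    diskutil appleRAID create mirror MirrorSet JHFS+ disk1 disk2
    # Verify the new set before installing Snow Leopard onto it
    diskutil appleRAID list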

  • [SOLVED] how to install ArchLinux on a simple software raid 0

    I have two 256GB disks and I want them to be in raid 0.
    I tried following this tutorial: https://wiki.archlinux.org/index.php/In … AID_or_LVM
    but this tutorial has the added complication of LVM and raid 1 which I don't need.
    I made 3 partitions on each of the disks:
    sda1 - 100MB for /boot
    sda2 - 2048MB for swap
    sda3 - raid 0 md0
    sdb1 - unused
    sdb2 - 2048 for swap
    sdb3 - raid 0 md0 for /
    I can assemble and format sda1 (ext4), sda2 (swap), sdb2 (swap) and md0 (ext4) and install all the packages.
    I also configured mdadm.conf by pressing Alt+F2 in the installer and executing: mdadm --examine --scan > /mnt/etc/mdadm.conf
    and added the mdadm hook to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems').
    I configured the bootloader with GRUB outside of the installer as indicated in the tutorial.
    But when I boot I get:
    md0: unknown partition table
    Error: Unable to determine the file system of /dev/sda3
    please help.
    Last edited by 99Percent (2011-06-29 20:21:52)

    FYI, this is how I finally set up my simple 2-drive RAID 0:
    1. Create a bootable USB ArchLinux with UNetbootin
    2. Boot with the USB
    3. # /arch/setup
    4. Select source: internet (highly recommended, because I found out UNetbootin sources are not 100% reliable, though not necessarily so; IOW, just to be sure)
    5. Partition the two drives with 100 MB for /boot and 100 MB for swap (setup requires it; I have 8 GB of memory and decided I don't need much swap space) and the rest of both sda and sdb, which will make your RAID 0.
    6. ALT+F2 to another terminal and create the RAID like this:
    # modprobe raid0 (not sure if this is actually necessary)
    # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
    7. Go back to the setup screen with CTRL+ALT+F1
    8. Go to Prepare Hard Drives > Manually configure block devices, filesystems and mountpoints. Add /boot (ext2) for sda1, swap for sdb1 and the desired filesystem for md0 (I chose reiserfs). Ignore the rest of the devices.
    9. Select packages. Add base-devel just in case, but nothing else, and install the packages.
    10. ALT+F2 again to the terminal and run:
    # mdadm --examine --scan > /mnt/etc/mdadm.conf
    (this will configure mdadm.conf to use your RAID as created)
    11. Go back to the setup screen again with CTRL+ALT+F1
    12. Select Configure system
    13. Edit /etc/rc.conf, adding raid0 to MODULES= like this:
    MODULES=(raid0)
    (again, not sure if this is necessary, but it works for me)
    14. Edit /etc/mkinitcpio.conf, adding dm_mod to MODULES= like this:
    MODULES="dm_mod"
    15. Also add the mdadm hook to HOOKS= in /etc/mkinitcpio.conf, before filesystems; in my case it went like this:
    HOOKS="base udev autodetect pata scsi sata mdadm filesystems"
    16. I went ahead and uncommented a few mirrors in the mirrorlist file so I wouldn't have to deal with that later (and maybe it helps along the way).
    17. Also set a root password. Not sure if it is even necessary, but maybe some components require it.
    18. Go to Configure bootloader and select GRUB.
    19. When asked "Do you have your system installed on software raid?" answer Yes.
    20. When asked to edit the menu.lst file, don't edit anything, just exit.
    21. When asked "Do you want to install grub to the MBR of each harddisk from your boot array?" answer Yes.
    22. You will get "Error: Missing/Invalid root device:" and "GRUB was NOT successfully installed." Ignore those messages.
    23. Exit the install.
    24. Remove the USB stick and:
    # reboot
    25. The boot will fail and you will get a grub> prompt; type the following commands:
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> reboot
    That's it!
    Last edited by 99Percent (2011-06-28 18:51:59)
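    One thing the walkthrough edits but never runs explicitly is regenerating the initramfs after changing /etc/mkinitcpio.conf (the old installer's "Configure system" step normally did this for you). A minimal sketch of how one might redo and verify it from the installed system, assuming the kernel26 preset of that release:
    # Rebuild the initramfs so the mdadm hook and dm_mod module are included
    mkinitcpio -p kernel26
    # Confirm the striped array is assembled and healthy
    cat /proc/mdstat
    mdadm --detail /dev/md0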

  • Need help with formatting a software RAID 5 array with xfs

    Hi,
    I'm trying to format a software RAID 5 array with the XFS filesystem, using the following command:
    # mkfs.xfs -v -m 0.5 -b 4096 -E stride=64,stripe-width=128 /dev/md0
    but all I get is the attached error message. It works fine when I use the ext4 filesystem. Any ideas?
    Thanks!
    http://i.imgur.com/cooLBwH.png
    -- mod edit: read the Forum Etiquette and only post thumbnails http://wiki.archlinux.org/index.php/For … s_and_Code [jwr] --
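    A guess, since the error message is only linked as a screenshot: -v, -m 0.5 and -E stride=...,stripe-width=... are mkfs.ext4 options, not mkfs.xfs ones, which would explain why the same style of invocation works for ext4 but fails for XFS. mkfs.xfs takes stripe geometry as -d su=...,sw=..., and on an md device it normally detects the geometry on its own. A minimal sketch (the su/sw values below are just the XFS equivalent of stride=64/stripe-width=128 with 4 KiB blocks, i.e. a 256 KiB chunk over 2 data disks; adjust to the real array):
    # Usually enough on md RAID: mkfs.xfs reads the stripe geometry from the device
    mkfs.xfs /dev/md0
    # Or state it explicitly: 256 KiB stripe unit, 2 data disks
    mkfs.xfs -b size=4096 -d su=256k,sw=2 /dev/md0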


  • I would like to migrate from Aperture. What happens to my Masters, which are on a RAID array, and what do I do with my Vault, which is on a separate drive?

    I am sorry, but I do not understand what happens to my RAW original Master files, which I keep offline on a RAID array, or what I do with my Vault, which has all my edited photos. Sorry for such a simple question, but would someone please help me lift the fog?
    Thanks,
    Rob

    Dear John,
    Apologies, as I am attempting to get to the bottom of this migration for my wife (who is away on assignment) and I am not 100% certain about the technical aspects of Aperture, so excuse my ignorance.
    She has about 6 TB worth of RAW master images (several hundred thousand) which, as explained, are on an external RAID drive. She uses a separate drive as a Vault. Can I assume that this Vault contains all of her edits, file structures, metadata, etc.?
    So, step by step: she can import her referenced masters into Lightroom from her RAID and still keep them there? Is that correct?
    The managed files that are backed up by her Vault are in the Pictures folder of her Mac Pro, but not in a structure that looks like her Aperture library? This means Lightroom will just organize all the managed files simply by the date in the metadata? Am I correct? (Sorry for being so tech illiterate.)
    How do I ensure she imports into Lightroom in exactly the same format as she runs her workflow in Aperture? (Projects organized by year and shoot location, and albums within those projects with sub-locations, species, etc.) What exactly do I need to do in Aperture to organize the managed files so they create a mirror of the Aperture structure on my internal hard drive?
    There are a couple of points I am unsure about in regard to Lightroom. Does it work in the same way as Aperture? Meaning, can she still keep master files on an external RAID and have Lightroom reference them? If the answer is yes, how do you back up your managed (edited) work in Lightroom? (Can you still use an external drive as a Vault?) Will the Vault she uses now be able to continue to back up managed files post-migration?

  • Movies purchased from iTunes to stream to LG smart TV from NAS via DLNA?

    I want to stream HD movies purchased from iTunes and stored on a WD NAS drive to an LG smart TV (42LN578V). The movies have standard iTunes DRM protection and so won't currently play on the TV. Why is this not possible?! Surely DRM or an iTunes app for the TV would offer sufficient protection for copyright holders? My PC isn't up to the job (so I can't 'cast' to the TV), and that would be a backwards step anyway. It must play on the TV in HD and 5.1. Is this not possible?!

    iTunes is running on a Windows laptop, by the way. I don't want it to have to be on in order to watch the movies. Can an Apple TV box be an 'authorised device' for playing iTunes movies and then stream them directly from the NAS?
