Software RAID in Solaris 8

Hi All,
I have already set up striping across 3 disks for the home slice.
Striping is a destructive process for the disk slices, so all the data in home was destroyed. However, metatool won't allow striping on root, swap and usr because those filesystems would have to be unmounted first. Can anyone suggest a way around this problem?
Thanks,
CK wong

Hi,
You will find a full procedure for mirroring the root filesystem(s) in the Disksuite documentation on http://docs.sun.com. Just do a search for Disksuite 4.2.1 and look under Creating Objects in the Users Guide.
Basically, the -f option on metainit is used to force the creation of the metadevice using a mounted filesystem. However, please refer to the documentation as there are some extra steps when mirroring filesystems such as root. Failure to follow these steps can lead to a system that won't boot.
Note, you will not be able to concat or stripe root, /usr, /var, swap, etc...
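For reference, the documented sequence for mirroring root looks roughly like this (device names illustrative; follow the Users Guide for the exact steps):
metainit -f d11 1 1 c0t0d0s0 (force a submirror on the mounted root slice)
metainit d12 1 1 c0t1d0s0 (submirror on the second disk)
metainit d10 -m d11 (one-way mirror)
metaroot d10 (updates /etc/vfstab and /etc/system)
lockfs -fa
reboot
metattach d10 d12 (attach the second submirror after the reboot)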
Regards,
Ben Humphreys
Customer Technical Support

Similar Messages

  • Solaris 10 x86 - software RAID

    I'm trying to set up software RAID on an x86 server with Solaris 10,
    with 2 ATA/IDE disks on an Intel i865 board.
    I have mirrored all the data partitions - that works normally -
    and made the 2nd disk bootable.
    What can I do with the BOOT partition? Mirror it? Format it as pcfs and copy the data over from the 1st disk? Copy it with dd?

    You can use SMC for other purposes but it won't help you with RAID.
    Sol 10 1/06 has raidctl which handles LSI1030 and LSI1064 RAID-enabled controllers (from raidctl(1M)).
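    On one of those supported controllers, for example, a hardware mirror can be created from within Solaris with something like this (disk names illustrative):
    raidctl -c c1t0d0 c1t1d0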
    Some of the PERCs (most?) are LSI but I don't know if they are the chipsets used by your PowerEdge (I doubt it).
    Generally you can break it down like this for x86:
    If you are using hardware RAID with Solaris 10 x86 you have to use pre-Solaris (i.e. on the RAID controller) management, or hope that the manufacturer of the device has a Solaris management agent/interface (good luck).
    The only exception to this that I know of is the RAID that comes with V20z, V40z, X4100, X4200.
    Otherwise you will want to go with SVM or VxVM and manage RAID within Solaris (software RAID).
    SMC etc are only going to show you stuff if SVM is involved and VxVM has its own interface, otherwise the disks are controlled by PERC and just hanging out as far as Solaris is concerned.
    Hope this helps.

  • Software Raid 5 - Performance

    Hi all,
    we have a problem with a software RAID 5. Its read performance is acceptable (61 MB/s via the UFS filesystem) while its write performance is very, very bad: 3 MB/s through the UFS filesystem is not acceptable for a machine with 6 1.35 GHz CPUs and 7 very fast (10K rpm) Fibre Channel disks.
    RAID 1 write performance is okay (42 MB/s), as is the performance of the individual disks (read: 74 MB/s).
    The RAID 5 includes 5 disks (140 GB each, 10K) and was built using the standard commands from the man page and online help.
    The hardware: SunFire 890, 6 CPUs (1.35 GHz US IV), 24 GB Memory, 7 disks (140 GB, 10K each). The machine runs SunOS 5.10.
    The question is: What are the options to speed up write performance of RAID 5?
    A much cheaper Athlon 64 based Linux system with slower SATA disks is much faster at reading (180 MB/s) and writing (around 50 MB/s) on a similar software RAID 5.
    While searching the net I found some benchmarks that indicate that "normally" write and read performance on Solaris software RAID 5 should be nearly the same.
    Are there any ideas what to do?
    Greetings and thanks in advance,
    Jan

    Hi,
    thanks for your answer. But... I don't think this is the problem here. It is clear that these factors slow down writes compared to the native write performance of the underlying disks, but the slowdown is IMO one order of magnitude too large.
    As already mentioned, an Athlon Linux system with 4 disks also shows a write slowdown (compared to reading), but write performance is still 25%...30% of read performance. For this Sun machine it is 5% of read performance. If the problem were caused by contention and queuing, it should apply there too, right?
    Or, to use the measurements: the Athlon with Linux gets around 50 MB/s write and 180 MB/s read, which makes sense for a fileserver with Gigabit Ethernet. The Sun gets 61 MB/s read, which is acceptable for such a server, while the write speed of 3 MB/s is too slow even for a single 100 Mbit/s client.
    The benchmarks I found on the net do not show such a big performance gap between read and write on Solaris software RAID 5, so I still suspect there is a fundamental problem in our installation.
    What about others... does anyone have numbers from experiments such as writing 1 GB with dd to an empty partition and measuring the time? The same for reading, after a reboot or remount in order to empty the fs cache.
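    For example (mount point illustrative):
    time dd if=/dev/zero of=/raid5/testfile bs=1024k count=1024 (write 1 GB)
    (remount or reboot to flush the cache, then:)
    time dd if=/raid5/testfile of=/dev/null bs=1024k (read it back)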
    Greetings,
    Jan

  • Cannot get software RAID-1 going on root (/) on Sun V240's

    Hi All,
    I for the life of me cannot get software RAID-1 going on the root file-system. I've followed the instructions in Solaris Volume Manager Admin. Guide and I think I'm doing it right. This is in a Sun V240 with two 137GB drives.
    I basically do the following:
    Disk0 - c1t0d0s0
    Disk1 - c1t1d0s0
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
    prtvtoc /dev/rdsk/c1t1d0s2 (double-check that disk 1's table matches disk 0's)
    metadb -af -c 2 /dev/rdsk/c1t0d0s3 /dev/rdsk/c1t0d0s4
    metadb -af -c 2 /dev/rdsk/c1t1d0s3 /dev/rdsk/c1t1d0s4
    metainit -f d10 1 1 /dev/rdsk/c1t0d0s0
    metainit -f d20 1 1 /dev/rdsk/c1t1d0s0
    metainit d0 -m d10
    metaroot d0
    lockfs -fa
    reboot
    Before I reboot, I do a metastat and it shows the following:
    # metastat
    d0: Mirror
        Submirror 0: d10
          State: Okay        
        Pass: 1
        Read option: roundrobin (default)
        Write option: parallel (default)
        Size: 20494464 blocks (9.8 GB)
    d10: Submirror of d0
        State: Okay        
        Size: 20494464 blocks (9.8 GB)
        Stripe 0:
            Device     Start Block  Dbase        State Reloc Hot Spare
            c1t0d0s0          0     No            Okay   Yes
    d20: Concat/Stripe
        Size: 20494464 blocks (9.8 GB)
        Stripe 0:
            Device     Start Block  Dbase   Reloc
            c1t1d0s0          0     No      Yes
    Upon reboot, I get this:
    Rebooting with command: boot                                         
    Boot device: /pci@1c,600000/scsi@2/disk@0,0:a  File and args:
    SunOS Release 5.10 Version Generic 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.
    Use is subject to license terms.
    Cannot open mirrored root device, error 19
    Cannot remount root on /pseudo/md@0:0,0,blk fstype ufs
    panic[cpu1]/thread=180e000: vfs_mountroot: cannot remount root
    000000000180b960 genunix:vfs_mountroot+2b8 (18aa800, 0, 185e6f8, 3000153dc40, 1859ce8, 4)
      %l0-3: 0000000000000000 0000000000000003 00000000018a40b0 00000000018a40b0
      %l4-7: 000000000185b800 00000000011cb000 00000000018aa800 0000000001834340
    000000000180ba20 genunix:main+88 (1813c98, 1011c00, 1834340, 18a7c00, 0, 1813800)
      %l0-3: 000000000180e000 0000000000000001 000000000180c000 0000000001835200
      %l4-7: 0000000070002000 0000000000000001 000000000181ba54 0000000000000000
    syncing file systems... done
    skipping system dump - no dump device configured
    rebooting...
    HELP! What am I doing wrong? Thanks!

    You created too many metadevice state databases...
    I'm not kidding, you did. You created 8, which I suspect triggered a bug which currently exists in Solaris 9 / Solaris 10.
    This bug occurs with the following disks: MAT3073N, MAT3147N, MAT3300N and ST373207LC; if you have any of those disks in your system (this can be determined with iostat -En) you must use fewer than 8 replicas.
    Check out bug report 6244431, if you have a SunSolve login linked to a contract number.
    You could try booting the system from a CD-ROM or similar and adding "md_devid_destroy=1;" to the /kernel/drv/md.conf file above the 'Begin MDD database info' line.
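    That is, the top of the database section of md.conf would look like this after the edit:
    md_devid_destroy=1;
    # Begin MDD database info (do not edit) (existing line)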
    Hopefully your system will come up after this; if it does, the best workaround is to remove a few replicas, remove the above line from md.conf, and reboot.
    Best regards,
    //Magnus

  • I have some questions regarding setting up a software RAID 0 on a Mac Pro

    I have some questions regarding setting up a software RAID 0 on a Mac pro (early 2009).
    These questions might seem stupid to many of you, but, as my last (in fact my one and only) computer before the Mac Pro was a IIcx/4/80 running System 7.5, I am a complete novice regarding this particular matter.
    A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
    The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
    Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, Silverfast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5, when it becomes available, and I will probably change over to Lightroom 3, which is currently in Beta, because I have had problems with Aperture, which, until recent upgrades (HD, RAM & graphics card) to my system, would not even load images for print. All I had was a blank preview page, and a constant, frozen "loading" message - the symbol underneath remained static, instead of revolving!
    It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
    My intention is to buy another 3 VR300s to form a 4-drive RAID 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them, and currently they are out of stock at all but the more expensive UK resellers.
    I should be most grateful to receive advice regarding the following questions:
    QUESTION 1:
    I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
    If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
    Or would I have to reinstall the OS, applications and software again?
    I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
    QUESTION 2:
    I understand that the failure of just one drive will result in all the data in a Raid 0 array being lost.
    Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
    Even so, it would be worth the risk to gain the optimum performance provided by RAID 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
    QUESTION 3:
    Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
    Or would this not be of any use anyway, in the event of a single drive failure?
    QUESTION 4:
    Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
    QUESTION 5:
    If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th drive in the event of a single drive failure in the 3-drive RAID 0 array that had been selected for startup?
    Apologies if these seem stupid questions, but I am trying to determine the best option without foregoing optimum performance.

    Well said.
    Steps to set up RAID
    Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.
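    The same set can also be created from Terminal with diskutil; a sketch, with the set name and disk identifiers being illustrative:
    diskutil appleRAID create stripe FastSet JHFS+ disk1 disk2 disk3 disk4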
    Recovering from a hard drive failure on a mirrored array
    1. Open Disk Utility in (/Applications/Utilities).
    2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
    3. If an issue with the disk is indicated, click Rebuild.
    4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
    5. Repeat steps 1 and 2.
    6. Drag the icon of the new disk on top of that of the removed disk.
    7. Click Rebuild.
    http://support.apple.com/kb/HT2559
    Drive A + B = VOLUME ONE
    Drive C + D = VOLUME TWO
    What you put on those volumes is of course up to you and easy to do.
    A system really only needs to be backed up "as needed" like before you add or update or install anything.
    /Users can be backed up hourly, daily, weekly schedule
    Media files as needed.
    Things that hurt performance:
    Page outs
    Spotlight - disable this for boot drive and 'scratch'
    SCRATCH: Temporary space; erased between projects and steps.
    http://en.wikipedia.org/wiki/Standard_RAID_levels
    (normally I'd link to Wikipedia but I can't load right now)
    Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
    To help understand and configure your 2009 Nehalem Mac Pro:
    http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
    http://macperformanceguide.com/
    http://www.macgurus.com/guides/storageaccelguide.php
    http://www.macintouch.com/readerreports/harddrives/index.html
    http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
    http://kb2.adobe.com/cps/404/kb404440.html

  • Software raid won't boot after updating to "mdadm" in mkinitcpio.conf

    After a power outage I've discovered the config I was using (with 'raid' in mkinitcpio.conf) no longer works; it's 'mdadm' now - that's fine. I've updated that and re-run mkinitcpio successfully, however my system is unable to boot from the root filesystem /dev/md2, like so:
    Waiting for 10 seconds for device /dev/md2 ...
    Root device '/dev/md2' doesn't exist. Attempting to create it.
    ERROR: Unable to determine major/minor number of root device '/dev/md2'.
    You are being dropped to a recovery shell
        Type 'exit' to try and continue booting
    /bin/sh: can't access tty; job control turned off
    [ramfs /]#
    As far as I can see from reading various threads and http://wiki.archlinux.org/index.php/Ins … AID_or_LVM I'm doing the right things now (although I'm not using lvm at all, which makes the installation document a little confusing).
    I think I've included all the appropriate bits of config here that should be working.  I assume I've missed something fundamental - any ideas?
    menu.lst:
    # (0) Arch Linux
    title  Arch Linux  [/boot/vmlinuz26]
    root   (hd0,0)
    kernel /vmlinuz26 root=/dev/md2 ro
    initrd /kernel26.img
    mkinitcpio.conf:
    HOOKS="base udev autodetect pata scsi mdadm sata filesystems"
    fstab:
    /dev/md1 /boot ext3 defaults 0 1
    /dev/md2 / ext3 defaults 0 1
    mdadm.conf
    ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=7ae70fa6:9f54ba0a:21
    47a9fe:d45dbc0c
    ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=20560268:8a089af7:e6
    043406:dbdabe38
    Thanks!
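    One thing worth double-checking in the HOOKS line above: mkinitcpio runs hooks in order, and the mdadm hook can only assemble arrays once the disk drivers are loaded, so mdadm normally comes after pata/scsi/sata, e.g.:
    HOOKS="base udev autodetect pata scsi sata mdadm filesystems"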

    Hi magec, that's quite helpful - I've certainly got further.
    Before I was doing this to set up the chroot (which is what is suggested in the wiki article about setting up software raid):
    mdadm -A /dev/md1 /dev/sda1 /dev/sdb1
    mdadm -A /dev/md2 /dev/sda2 /dev/sdb2
    mount /dev/md2 /mnt
    mount /dev/md1 /mnt/boot
    mount -o bind /dev /mnt/dev
    mount -t proc none /mnt/proc
    chroot /mnt /bin/bash
    But based on your suggestion it's working better:
    mdadm -A /dev/md1 /dev/sda1 /dev/sdb1
    mdadm -A /dev/md2 /dev/sda2 /dev/sdb2
    mount /dev/md2 /mnt
    mount /dev/md1 /mnt/boot
    mount -t proc none /mnt/proc
    mount -t sysfs none /mnt/sys
    mount -n -t ramfs none /mnt/dev
    cp -Rp /dev/* /mnt/dev
    chroot /mnt /bin/bash
    The boot is now getting further, but now I'm getting:
    md: md2 stopped.
    md: bind<sdb2>
    md: bind<sda2>
    raid1: raid set md2 active with 2 out of 2 mirrors
    md2: detected capacity change from 0 to 32218349568
    mdadm: /dev/md2 has been started with 2 drives.
    md2: Waiting 10 seconds for device /dev/md2 ...
    unknown partition table
    mount: mounting /dev/md2 on /new_root failed: No such device
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    /bin/sh: can't access tty; job control turned off
    [ramfs /]#
    The bit that really confuses me is this:
    [ramfs /]# cat /proc/mdstat
    Personalities : [raid1]
    md2 : active raid1 sda2[0] sdb2[1]
    31463232 blocks [2/2] [UU]
    md1 : active raid1 sda1[0] sdb1[1]
    208704 blocks [2/2] [UU]
    unused devices: <none>
    [ramfs /]# mount /dev/md2 /new_root
    mount: mounting /dev/md2 on /new_root failed: No such file or directory
    [ramfs /]# ls /dev/md2
    /dev/md2
    [ramfs /]#
    So the array is up, the device node is there but it can't be mounted?  Very strange.
    Last edited by chas (2010-05-02 11:24:09)

  • Can you move a software raid 1 from one mac to another

    I have a 2-disk software RAID 1 on my PowerMac and I want to move it to my Mac Pro. Does anyone know if I can do this?

    It appears that in Mac OS X 10.4, and again in the transition to 10.7, RAID format may have undergone dramatic changes. If you are trying to move a RAID array across those boundaries, you may not have the best results moving the drives directly.

  • [Solved] Move Software RAID 5 Array From NAS To Arch

    Edit: I probably never had a problem at all; the error in dmesg just scared me. After I disconnected it I noticed that /dev/md127 was 8.1 TB, the exact size of my RAID array node in my NAS (which was /dev/md0); I had just overlooked it. I reconnected it to my PC and mounted /dev/md127 on /mnt/raid and got this wonderful sight!
    [bran@ra ~]$ ls /mnt/raid
    data lost+found meta sys
    [bran@ra ~]$ ls /mnt/raid/data/
    data ftproot module _NAS_Media _NAS_Piczza_ _NAS_Recycle_RAID _P2P_DownLoad_ stackable _SYS_TMP TV USBHDD
    download htdocs Movies _NAS_NFS_Exports_ NAS_Public nzbget-downloads PLEX_CONFIG sys tmp USBCopy
    I bought a Thecus N4520 a few months ago and it's OK, but programs crash a lot and are hard to debug, apps have to be updated manually, and the whole thing is moderately underpowered. I'm trying to move the software RAID 5 array from the NAS to my desktop; the kernel seems to detect that there is a RAID array, but that all these drives aren't part of it. I'm pretty new to RAID and I'm just getting my feet wet with it.
    When I try to assemble the RAID array, it just tells me that it isn't an md array. How can I get it to build my array?
    [bran@ra ~]$ sudo mdadm --assemble /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mdadm: device /dev/sdb exists but is not an md array.
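    Judging from the dmesg output below, the RAID 5 members are the second partitions (sdb2, sdc2, sdd2, sde2) rather than the whole disks, and mdadm --assemble also expects the array node as its first argument, so the invocation would look more like this (array name illustrative):
    sudo mdadm --assemble /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2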
    Found this little chunk of info in dmesg, it says that the md devices have unknown partition tables.
    [ 3.262225] md: raid1 personality registered for level 1
    [ 3.262483] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.10
    [ 3.262508] iTCO_wdt: Found a Patsburg TCO device (Version=2, TCOBASE=0x0460)
    [ 3.262585] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
    [ 3.262933] md/raid1:md126: active with 4 out of 4 mirrors
    [ 3.262961] md126: detected capacity change from 0 to 536850432
    [ 3.263272] RAID1 conf printout:
    [ 3.263274] --- wd:4 rd:4
    [ 3.263276] disk 0, wo:0, o:1, dev:sdc3
    [ 3.263276] disk 1, wo:0, o:1, dev:sdb3
    [ 3.263277] disk 2, wo:0, o:1, dev:sdd3
    [ 3.263278] disk 3, wo:0, o:1, dev:sde3
    [ 3.263501] md: bind<sde4>
    [ 3.264810] md: bind<sdb2>
    [ 3.268262] async_tx: api initialized (async)
    [ 3.272632] md: raid6 personality registered for level 6
    [ 3.272636] md: raid5 personality registered for level 5
    [ 3.272637] md: raid4 personality registered for level 4
    [ 3.272905] md/raid:md127: device sdb2 operational as raid disk 1
    [ 3.272908] md/raid:md127: device sde2 operational as raid disk 3
    [ 3.272910] md/raid:md127: device sdd2 operational as raid disk 2
    [ 3.272911] md/raid:md127: device sdc2 operational as raid disk 0
    [ 3.273211] md/raid:md127: allocated 0kB
    [ 3.273241] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
    [ 3.273243] RAID conf printout:
    [ 3.273244] --- level:5 rd:4 wd:4
    [ 3.273245] disk 0, o:1, dev:sdc2
    [ 3.273246] disk 1, o:1, dev:sdb2
    [ 3.273247] disk 2, o:1, dev:sdd2
    [ 3.273248] disk 3, o:1, dev:sde2
    [ 3.273273] md127: detected capacity change from 0 to 8929230716928
    [ 3.273322] RAID conf printout:
    [ 3.273326] --- level:5 rd:4 wd:4
    [ 3.273329] disk 0, o:1, dev:sdc2
    [ 3.273331] disk 1, o:1, dev:sdb2
    [ 3.273332] disk 2, o:1, dev:sdd2
    [ 3.273360] disk 3, o:1, dev:sde2
    [ 3.283617] md126: unknown partition table
    [ 3.309239] md127: unknown partition table
    [ 3.312660] md: bind<sdb4>
    [ 3.318291] md/raid1:md124: not clean -- starting background reconstruction
    [ 3.318296] md/raid1:md124: active with 4 out of 4 mirrors
    [ 3.318333] md124: detected capacity change from 0 to 10736291840
    [ 3.318385] RAID1 conf printout:
    [ 3.318391] --- wd:4 rd:4
    [ 3.318395] disk 0, wo:0, o:1, dev:sdc4
    [ 3.318398] disk 1, wo:0, o:1, dev:sdb4
    [ 3.318402] disk 2, wo:0, o:1, dev:sdd4
    [ 3.318405] disk 3, wo:0, o:1, dev:sde4
    [ 3.319890] md124: unknown partition table
    [ 3.323462] md: bind<sde1>
    [ 3.338094] md/raid1:md125: active with 4 out of 4 mirrors
    [ 3.338225] md125: detected capacity change from 0 to 2146414592
    [ 3.338253] RAID1 conf printout:
    [ 3.338258] --- wd:4 rd:4
    [ 3.338262] disk 0, wo:0, o:1, dev:sdc1
    [ 3.338266] disk 1, wo:0, o:1, dev:sdb1
    [ 3.338268] disk 2, wo:0, o:1, dev:sdd1
    [ 3.338271] disk 3, wo:0, o:1, dev:sde1
    Here's my full dmesg
    mdadm.conf
    # The designation "partitions" will scan all partitions found in /proc/partitions
    DEVICE partitions
    ARRAY /dev/md127 metadata=1.2 name=(none):0 UUID=d1d14afc:23490940:a0f7f996:d7b87dfb
    ARRAY /dev/md126 metadata=1.2 name=(none):50 UUID=d43d5dd6:9446766e:1a7486f4:b811e16d
    ARRAY /dev/md125 metadata=1.2 name=(none):10 UUID=f502437a:d27d335a:d11578d5:6e119d58
    ARRAY /dev/md124 metadata=1.2 name=(none):70 UUID=ea980643:5c1b79e8:64f1b4cb:2462799b
    Last edited by brando56894 (2014-04-21 22:51:01)


  • Adding A New Drive To A Software RAID 5 Array

    Edit 3: Just mounted the partitions and I can delete them because they contain nothing special. Is it safe to expand the 2nd partition of each drive to fill up the left over 22 GB?
    Edit 2: I just deleted all the partitions off of my new drive and created one partition, then added it to the array and it works just fine. My next question is, can I delete all the smaller partitions and expand /dev/sd[x]2 to reclaim all the space (about 70 GB)?
    One of my drives failed and Western Digital sent me a new drive, except it was an external drive instead of an internal drive, so I cracked it open and the label looked different. Turns out it's just refurbished and it's the same model as my other drives (WD Caviar Green 3 TB).
    I've read through the wiki article on Software RAID and created the partitions exactly the same as on my other drives, but while creating the main 2.7 TB partition it says that the ending sector is out of range, when it isn't. I'm new to all this so I have no idea what to do. From what I've read there normally aren't this many partitions per disk, correct? I also have md124, md125 and md126 for the other partitions; md127 is for the 2.7 TB partitions. I took the array out of my Thecus N4520. I have a 3 TB external drive and a 1 TB internal, along with another 500 GB drive. Would I be better off destroying the RAID set and creating a fresh RAID 5 set, considering I'm losing about 90 GB if I don't need the smaller partitions?
    /dev/sdc
    Disk /dev/sdc: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 00636413-FB4D-408D-BC7F-EBAF880FBE6D
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    /dev/sdd
    Disk /dev/sdd: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): C5900FF4-95A1-44BD-8A36-E1150E4FC458
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    /dev/sde
    Disk /dev/sde: 5860533168 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2B5527AC-9D53-4506-B31F-28736A0435BD
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860533134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 43941 sectors (21.5 MiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00
    2 47187968 5860491263 2.7 TiB FD00 THECUS
    3 46139392 47187951 512.0 MiB FD00
    4 2048 20973559 10.0 GiB FD00 i686-THECUS
    5 20973568 41945071 10.0 GiB FD00
    new drive: /dev/sdf
    Disk /dev/sdf: 5860467633 sectors, 2.7 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 93F9EF48-998D-4EF9-B5B7-936D4D3C7030
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 5860467599
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 5813281700 sectors (2.7 TiB)
    Number Start (sector) End (sector) Size Code Name
    1 41945088 46139375 2.0 GiB FD00 Linux RAID
    2 47187968 47187969 1024 bytes FD00 Linux RAID
    3 46139392 47187951 512.0 MiB FD00 Linux RAID
    4 2048 20973559 10.0 GiB FD00 Linux RAID
    5 20973568 41945071 10.0 GiB FD00 Linux RAID
    When I type in 5860491263 as the end sector gdisk does nothing, it just wants more input. If I type +2.7T it accepts it, but really it just creates a partition that's 1 KB in size!
    I am able to create a 2.7 TB partition with an end sector of 5860467599; this won't screw anything up, will it?
    Edit 1: just tried it and got this
    [root@ra /home/bran]# mdadm --add /dev/md127 /dev/sdf2
    mdadm: /dev/sdf2 not large enough to join array
    [root@ra /home/bran]# fdisk -l /dev/sdf
    Disk /dev/sdf: 2.7 TiB, 3000559428096 bytes, 5860467633 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 93F9EF48-998D-4EF9-B5B7-936D4D3C7030
    Device Start End Size Type
    /dev/sdf1 41945088 46139375 2G Linux RAID
    /dev/sdf2 47187968 5860467599 2.7T Linux RAID
    /dev/sdf3 46139392 47187951 512M Linux RAID
    /dev/sdf4 2048 20973559 10G Linux RAID
    /dev/sdf5 20973568 41945071 10G Linux RAID
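    Note the numbers in the two listings: the replacement drive is 5860533168 - 5860467633 = 65535 sectors smaller than the other drives. So sdf2, ending at sector 5860467599, holds 5860467599 - 47187968 + 1 = 5813279632 sectors, versus 5860491263 - 47187968 + 1 = 5813303296 sectors on the existing members: 23664 sectors (about 11.6 MiB) short, and mdadm will not accept a member smaller than the rest of the array, hence the "not large enough" error.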
    Last edited by brando56894 (2014-04-28 00:47:29)

    Sorry, I numbered them to show the flow of information; this was also just a place for me to store info as I worked through it. I managed to get it to work by creating a partition that takes up the whole drive and is actually 22 GB larger than on all the other drives (since I found out that they had root, swap and home partitions that are no longer needed).
    I should be able to resize the other partitions without a problem, correct? They're EXT4. Should I unmount the RAID array and do them individually, remount the array, let it sync and do the next? Or just unmount the array, resize all of them, mount it and let it sync?

  • Need help with formatting a software RAID 5 array with xfs

    Hi,
    i'm tying to format a software RAID 5 array, using the xfs filesystem with the following command:
    # mkfs.xfs -v -m 0.5 -b 4096 -E stride=64,stripe-width=128 /dev/md0
    but all I get is the attached error message. It works fine when I use the ext4 filesystem. Any ideas?
    Thanks!
    http://i.imgur.com/cooLBwH.png
    -- mod edit: read the Forum Etiquette and only post thumbnails http://wiki.archlinux.org/index.php/For … s_and_Code [jwr] --
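    For what it's worth, -m 0.5 and -E stride=,stripe-width= are mke2fs (ext2/3/4) options, which is presumably why the same command line works with ext4 but fails with xfs. mkfs.xfs takes its stripe geometry via -d; assuming 4 KiB blocks, stride=64 / stripe-width=128 would translate to something like:
    mkfs.xfs -b size=4096 -d su=256k,sw=2 /dev/md0
    (mkfs.xfs can usually detect an md device's geometry by itself, so a plain mkfs.xfs /dev/md0 may be all that's needed.)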


  • How to install owb server-side software on UNIX Solaris (SPARC-64)?

    Hi there,
    How to install owb server-side software on UNIX Solaris (SPARC-64)?
    I've read the install guide
    and it mentions
    3. Start the installer by entering the following at the prompt:
    cd mount_point
    ./runInstaller
    I don't have access to any graphical interface on the UNIX box (e.g. X Windows), nor any CD with the software, just the Solaris software downloaded from the web - does this include runInstaller,
    and is runInstaller just a command line interface?
    I had hoped I would be able to download the software from the Oracle website and then simply run a setup script.
    Is it possible to do this? Would I simply substitute cd mount_point with cd <directory I put the software in>?
    I've never run Oracle Universal Installer on UNIX before.
    Many Thanks
    Edited by: user575470 on Feb 15, 2009 7:54 AM
    Edited by: user575470 on Feb 15, 2009 8:06 AM

    Hi,
    You can install the server-side software from the downloaded software.
    You don't need the CD
    You do need an X-windows client on your computer to connect to the server OR work directly on the server.
    Without X-windows you cannot start the Oracle Universal Installer.
    Maybe there is a command line installation, I have never used this.
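    For what it's worth, the download normally does include runInstaller; you would run it from the unpacked directory instead of a CD mount point, with DISPLAY pointing at an X server, e.g. (paths illustrative):
    cd /u01/stage/Disk1 (wherever you unpacked the download)
    DISPLAY=yourpc:0.0; export DISPLAY (or connect with: ssh -X oracle@server)
    ./runInstaller
    OUI also has a non-interactive mode driven by a response file, along the lines of:
    ./runInstaller -silent -responseFile /u01/stage/response/ee.rsp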
    I hope this helps.
    Regards,
    Emile

  • Installing grub2 1.99rc1 on Linux Software RAID (/dev/md[0-x])

    Hello!
    Over the last few days I have been trying to install Arch Linux x64 on my server. The server has 4 SATA drives. I'm using Linux software RAID with the following configuration:
    mdadm --create --verbose /dev/md0 --auto=yes --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm --create --verbose /dev/md1 --auto=yes --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    mdadm --create --verbose /dev/md2 --auto=yes --level=0 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
    md0 --> / including /boot
    md1 --> /data (my storage partition)
    md2 --> my swap partition
    In the last step of the installer I didn't install grub, and left the installer without installing a bootloader.
    I mounted the /dev folder to /mnt/dev and chrooted into my new system:
    chroot /mnt bash
    In my new system I installed the new grub2 bootloader with:
    pacman -S grub2-bios
    In the next step I tried to install the bootloader to the MBR with the following command:
    grub_bios-install --boot-directory=/boot --no-floppy --recheck /dev/md0
    But I get the following error:
    /sbin/grub-probe: error: no such disk.
    Auto-detection of a filesystem of /dev/md0 failed.
    Please report this together with the output of "/sbin/grub-probe --device-map="/boot/grub/device.map" --target=fs -v /boot/grub" to <[email protected]>
    I also tried to install grub2 directly on my disks with the following command:
    grub_bios-install --boot-directory=/boot --no-floppy --recheck /dev/sda
    But the error is the same as above.
    Following the instructions in the error message I executed the following command:
    /sbin/grub-probe --device-map="/boot/grub/device.map"
    I get a large debug output:
    /sbin/grub-probe: info: scanning hd3,msdos2 for LVM.
    /sbin/grub-probe: info: the size of hd3 is 3907029168.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: scanning hd3,msdos1 for LVM.
    /sbin/grub-probe: info: the size of hd3 is 3907029168.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: scanning hd4 for LVM.
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: scanning hd4,msdos1 for LVM.
    /sbin/grub-probe: info: the size of hd4 is 2068992.
    /sbin/grub-probe: info: no LVM signature found
    /sbin/grub-probe: info: changing current directory to /dev.
    /sbin/grub-probe: info: opening md0.
    /sbin/grub-probe: error: no such disk.
    There is a tutorial for installing grub2 at ArchLinux in the wiki:
    https://wiki.archlinux.org/index.php/Grub2
    But I can't find any solution for my problem.
    Does anybody know a solution for this problem?
    Thanks for help
    Best regards,
    Flasher
    Last edited by Flasher (2011-03-03 20:13:13)

    hbswn wrote: Maybe it cannot handle the new mdadm format. Here the other partitions have version 0.90. Only the failing md0 has version 1.2:
    I'm sure that is the cause of the problem - but it's not the new format; the old one (0.90) is the one that is incompatible with grub2.
    I hit the same problem and today I managed to fix it.
    What I did: I copied /boot to a tmp directory, destroyed /dev/md0 (my /boot), which had metadata 0.90, created a new /dev/md0 (metadata 1.2), and copied the /boot content back.
    After that, grub_bios-install /dev/sda && grub_bios-install /dev/sdb finished without any errors.
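    A sketch of that migration, assuming a two-disk mirror for /boot (device names illustrative; note that mdadm --create destroys the old contents, hence the backup):
    cp -a /boot /tmp/boot.bak
    umount /boot
    mdadm --stop /dev/md0
    mdadm --create /dev/md0 --level=1 --metadata=1.2 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext3 /dev/md0
    mount /dev/md0 /boot
    cp -a /tmp/boot.bak/. /boot
    grub_bios-install /dev/sda && grub_bios-install /dev/sdb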

  • Software RAID Failure - my experience and solution

    I just wanted to share this information with the iCloud community.
    I searched a bit and did not find much information that was useful with regard to my software RAID issue.
    I have a 27-inch Mid 2011 iMac with an SSD and a hard drive, which has been great.
    I added an external hard drive (I think if I mention any brand name the moderator will delete this post) which comes in a nice aluminum case with two 3 TB hard drives inside, has a big blue light on the front, and is connected via Thunderbolt. This unit is about 2 years old and I have it configured as a 3 TB mirrored RAID (RAID 1) set up with the Mac OS Disk Utility.
    At one point I had a minor glitch, which was fixed using another piece of software (again, if I mention a brand the moderator will delete this post) with a name something like 'Harddrive Fighter', LOL. So otherwise that RAID has served me well as a site for my Time Machine backup and Aperture Vault, etc. (I created a 1.5 TB sparse bundle for Time Machine so that the backup would not use the entire 3 TB.)
    I recently purchased a second aluminum block of drives, and set that up as a 4 TB RAID 1.
    Each of the two RAIDs is set with the "Automatically rebuild RAID mirror sets" option checked.
    I put only about 400 GB on the new RAID to let it sit for a 'burn-in period.'
    A few days ago the monitoring software from the vendor who sells the aluminum block of drives told me I had a problem. One of the drives had "Failed." The monitoring software, strangely enough, does not distinguish the drives in a way that lets you figure out which pair had the issue, so I assumed it was the new 8 TB model. Long story short, it was the older 6 TB model, but that does not matter for this discussion.
    I contacted the vendor and this is part of their response.
    “This is an indication that the Disk Utility application in Mac had a momentary problem communicating with the drive mechanism. As a result, it marked that drive as "failed" in the header information. Unfortunately, once this designation is applied to a drive by the OS, the Disk Utility will thereafter refuse to attempt any further operations with that disk until the incorrect "failed" marker is manually cleared off the drive.”
    That did not sound very good to me…..back up killed by a SOFTWARE GLITCH?
    “The solution is to remove the corrupted volume header, and allow the generation of a new one….This command will need to be done for each disk in the array… (using Terminal)…
    diskutil zerodisk (identifier)
    …3. After everything is finished, you should be able to exit Terminal, and go back into the Disk Utility Application to re-configure the RAID array on the device.”
    Furthermore they said.
    “If the Disk Utility has placed a flag into the RAID array header (which exists on both drives) then performing this procedure on a single drive will not correct anything.”
    And…
    “When a drive actually does fail, it typically stops appearing in the Disk Utility application altogether. In that circumstance, it will never be marked "failed" by the Disk Utility, so the header erase operation is not needed.”
    This all sounded like a bad idea to me. And what does the Vendor's RAID monitor software say then? "Disk Really Really FAILED, check for a fire."
    As I tried to figure out which drive was actually the bad RAID pair, I stumbled on a solution.
    First I noted that the OS Disk Utility did NOT show a fault in the RAID. It listed both RAIDs as "Online." Thus no rebuilding was needed and it did not begin the rebuild process.
    The Vendor's disk monitor software saw some fault, but the Mac was still able to read and write to the RAID, both disks in the mirror. I wrote a folder to the RAID and, with various rebooting steps, I pulled the "Bad" drive and looked at the "Good" drive... the folder was there. I put the Bad drive back in and pulled the Good drive, and the folder was there on the "bad" drive too. So it wrote to both drives. AND THE VENDOR'S MONITORING SOFTWARE SHOWED THE PREVIOUSLY LABELED 'BAD' DRIVE AS 'GOOD' AND THE MISSING DRIVE SLOT AS 'BAD'.
    My stumbled-upon FIX: I moved a bunch of files off the failed RAID to the new RAID, but before I moved the sparse bundle, a folder of 500 GB of movies and some other really big folders, the DISK UTILITY WINDOW (which I still had open) now showed that the RAID had a defect and began rebuilding the mirror set itself, out of the blue! I don't know why this happened. But moving about 1/2 of the data off of it perhaps did something? Any ideas?
    This process took a few hours as best I can tell (I let it run overnight) and the next day the RAID was fine and the Vendor's RAID monitor did not show a fault any longer.
    So, the Vendor's RAID monitoring software reported a "FAILED" drive without any specific error codes to look up. Perhaps they could give the user more info on the specific fault? The support line of the Vendor said with certainty "the Volume Header is corrupted" and THE ONLY FIX is to completely ZERO THE DRIVE! This was not necessary, as it turns out.
    And the stick in the eye to me…..
    “I've also sometimes seen the drives get marked as "failed" by the disk utility due to a shaky connection. In some cases, swapping the ends of the Thunderbolt cable will help with this. Something to try, perhaps, if your problems come back. “
    Ya Right…..
    Mike

    Follow up.
    After going through the zeroing process and rebuilding the RAID set three times, with various configurations, LaCie finally agreed to repair the unit under warranty.
    I tried swapping the power supplies and Thunderbolt wires, and tried taking the drive unit out of series with its newer big brother. And it still failed after a few days.
    I just wanted to share more of what I learned with regard to rebuilding the RAID sets via the Terminal.  The commands can be typed partially and a help paragraph will come up to give VERY cryptic descriptions of the proper use of the commands.
    First, under Terminal you can use the command "diskutil appleRAID list" to list the drives which are in the RAID. This gives you the ID number for each physical drive. For example:
    AppleRAID sets (1 found)
    ===============================================================================
    Name:                 LaCie RAID 3TB
    Unique ID:            84A93ADF-A7CA-4E5A-B8AE-8B4A8A6960CA
    Type:                 Mirror
    Status:               Online
    Size:                 3.0 TB (3000248991744 Bytes)
    Rebuild:              manual
    Device Node:          disk4
    #  DevNode   UUID                                  Status     Size
    0  disk3s2   D53F6A81-89F1-4FB3-86A9-8808006683C2  Online     3000248991744
    -  disk2s2   E58CA8F5-1D2C-423A-B4BE-FBAA80F85879  Spare      3000248991744
    ===============================================================================
    In my situation with the failed RAID, I had an extra disk in this with the status of Missing/Failed. 
    The command is "diskutil appleRAID remove" and the cryptic help paragraph says:
    Usage:  diskutil appleRAID remove MemberDeviceName|MemberUUID
            RAIDSetVolumePath|RAIDSetDeviceName|RAIDSetUUID
    MemberDeviceName|MemberUUID  is the number listed in the "diskutil appleRAID List" command,  and
    RAIDSetVolumePath|RAIDSetDeviceName|RAIDSetUUID is the Device Node for the RAID which here is /dev/disk4.
    I used this command to remove the third entry (missing/failed), I did not copy the terminal window text on that one, so I cannot show the list of three disks.
    I could not remove the disk2s2 disk listed as SPARE, as it gave an error message:
    Michaels-iMac:~ mike_aronis$ diskutil appleraid remove E58CA8F5-1D2C-423A-B4BE-FBAA80F85879 /dev/disk4
    Started RAID operation on disk4 LaCie RAID 3TB
    Removing disk from RAID
    Changing the disk type
    Can't resize the file system on the disk "disk2s2"
    Error: -69827: The partition cannot be resized
    But I was able to remove it using the graphical interface Disk Utility program using the delete key.
    I then rebuilt the RAID set by dragging the second drive back into the RAID set.
    I could not get the command "diskutil appleRAID update AutoRebuild 1 /dev/disk4" to work; even though it was trying to execute, it HUNG. I put the two drives into my newer LaCie 2big as my attempt at further troubleshooting the RAID (this was not suggested by LaCie tech), rebuilt the RAID, and now I am going to leave it set up that way for a few days before I ship it back, just to see if the old drives work fine in the new RAID box (thus proving the RAID box is the problem). I tried the AutoRebuild 1 command just now and it gave an error.
    Michaels-iMac:~ mike_aronis$ diskutil appleraid update autorebuild 1 /dev/disk4
    Error updating RAID: Couldn't modify RAID (-69848)
    Michaels-iMac:~ mike_aronis$
    In my haste to rebuild the RAID set for the third or fourth time as LaCie led me through the test-this-and-test-that phase, I forgot to click the "Auto Rebuild" option in the Disk Utility program.
    Question for the more experienced:
    As I was working on this issue, I noticed that each time I rebooted and did work in the Terminal (with and without the RAID plugged in to the Thunderbolt connection) the list of drives would change, and my main boot drive would not stay listed as drive 0! Sometimes it would be drive 0, sometimes the RAID would be listed as drive 0. It's strange to me... I would have thought the designations for drive 0 and drive 1 would always be my two built-in drives (SSD and spinning drive).
    Mike

  • Software RAID issue

    I have just installed two 200 GB Seagate HDs in my machine and set them up as a software RAID 0. Also installed are the original 80 GB drive and a Hitachi 160 GB drive. The two new drives functioned long enough for me to copy over all of my raw digital home video.
    Now, when I try to boot the grey Apple screen comes up and the machine hangs. When I disconnect the drives, it boots right up.
    I thought maybe my power supply was giving up the ghost until I reconnected them and successfully rebooted with Drive Genius 1.2. I rebuilt the raid set (maintaining all data) and rebooted into OSX successfully. Woohoo!
    But not so fast. Next reboot started the same thing. And now Drive Genius has no effect.
    I really don't want to lose the data on these drives. Is there a problem with the software RAID in 10.4.7? Any ideas of other things to try? Help?!?!
    Thanks,
    Andy
    G4 1.25DP FW800   Mac OS X (10.4.7)  

    Possibly a couple of things that "aren't quite right" with your setup.
    SoftRAID 3.5 supports mirroring for the boot drive, but not striping.
    The two IDE buses are not identical, making for some slight problems with RAID on the two.
    You really can't RAID if both drives are together in one drive cage on the same IDE bus.
    You need a PCI controller to RAID in MDD. Most people at this point in time opt for Serial ATA drives and controller. If you want a bootable SATA drive...
    http://www.firmtek.com/seritek
    If you must boot from a striped RAID: it's not always a good idea and may not actually offer much real-world performance (check www.barefeats.com for some tests, for one). Better:
    A dedicated boot drive for OS/Apps, and use the other drives for media, data, scratch, even for /Users.
    Two ATA drives on the same bus have to share the bus and contend for I/O.
    You can create a striped RAID with Disk Utility and boot from it.
    I assume that you read the SoftRAID QuickStart and Manual which really are helpful. There are a number of sites etc that can go into more about RAID.

  • Mac OS X 10.4 - Software RAID 10???

    Hello All...
    As a motion graphics artist by trade and a musician at night, I am constantly faced with the on-going dilemma of my storage needs. As you may probably already know, these two activities eat up drive space and require a speedy storage solution as well.
    Here's my current setup:
    - PowerMac G5 Dual 2GHz (PCI-X) w/ 3GB of RAM.
    - Sonnet Tempo-X eSATA 4x4 PCI-X card.
    - MacGurus Burly 4 bay SATA enclosure.
    - 2 250GB WD drives in the Burly enclosure.
    I have the two 250GB WD drives in the external SATA enclosure software-RAIDed together to form one RAID 0 (striped) volume, using the OS X disk utility RAID feature. While that has resulted in an excellent, fast, big disk... I am starting to run into errors, and starting to lose sleep over the fact that the current setup is not redundant / reliable.
    My initial solution was to buy two more disks, RAID them together as another striped volume and run backups from the first two-disk WD striped volume to the new striped RAID. So, four disks, two striped raids with two disks each.... But when I called Sonnet Tech Support, the support guy said that OS X 10.4 has support for RAID 10. That changes everything! If I could fill the Burly enclosure with 4 of the same disks and software RAID 10 them together, I'd be in business... I think...
    Does OS X 10.4 really allow you to create a software RAID 10???
    If so, do you think this is the most ideal solution for what I'm working with?
    Keep in mind, I'm working with a low budget. No, XServes/XServe-RAID is NOT an option!
    Any input is greatly appreciated!
    TIA

    Hi, noka.
    Yes, it supports RAID 10. See "Disk Utility 10.5 Help: Protecting your data against hardware failure with a mirrored RAID set."
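    In Disk Utility this is done by creating two mirrored sets and then dragging those sets into a striped set; from Terminal the equivalent would be something along these lines (an untested sketch: set names and disk identifiers are illustrative, with disk5/disk6 standing for the device nodes of the two new mirror sets):
    diskutil appleRAID create mirror MirrorA JHFS+ disk1 disk2
    diskutil appleRAID create mirror MirrorB JHFS+ disk3 disk4
    diskutil appleRAID create stripe Raid10 JHFS+ disk5 disk6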
    However, you may not get the performance you expect.
    FWIW and IMO, unless one is running a high-volume transaction server with a 99.999% ("Five Nines") availability requirement, RAID is overkill. For example, unless you're running a bank, a brokerage, or a major e-commerce site, you're probably spending sums of time and money with RAID that could be applied elsewhere.
    RAID is high on the "geek chic" scale, low on the practicality scale, and very high on the "complex to troubleshoot" scale when problems arise. The average user, even one in your lines of business, is better served by implementing a comprehensive Backup and Recovery solution and using it regularly.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
    Note: The information provided in the link(s) above is freely available. However, because I own The X Lab™, a commercial Web site to which some of these links point, the Apple Discussions Terms of Use require I include the following disclosure statement with this post:
    I may receive some form of compensation, financial or otherwise, from my recommendation or link.
