ZFS Root Pool Restore of an EFI-labelled Disk

Hi Team
Please let me know the procedure for backing up and restoring an EFI-labelled root pool using zfs send/receive.
Note: the original operating system is installed on a T5 server with the latest firmware, where the default disk label is EFI instead of SMI as with earlier firmware versions.
The operating system is Solaris 11.1.
I also need to know how to expand a LUN that holds an EFI-labelled disk without losing its data.
Expecting a positive response soon.
Regards
Arun

Hi,
What you need to do is very easy. Here is a procedure that I use:
1)  Make a recursive snapshot of the rpool:
       zfs snapshot -r rpool@<snapshotname>
2) Send that snapshot somewhere safe. The dump and swap snapshots don't need to be in the stream, so destroy them first:
       zfs destroy rpool/dump@<snapshotname>
       zfs destroy rpool/swap@<snapshotname>
       zfs send -R rpool@<snapshotname> | gzip > /net/<ipaddress>/<share>/<snapshotname>.gz
3) Once the above is done you can do the following.
     Boot from DVD, make sure you have a disk available, and start creating the rpool.
     The rpool can be created with an EFI or SMI label:
       to use an EFI label:  zpool create rpool c0d0
       to use an SMI label:  zpool create rpool c0d0s0 => make sure that the disk is labelled and that all cylinders are in s0.
4) Create the new root pool on your disk:
      zpool create rpool <disk>
5) Receive the data again and recreate dump and swap:
     gzcat /mnt/<snapshotname>.gz | zfs receive -Fv rpool
      zfs create -V 4G rpool/dump
      zfs create -V 4G rpool/swap
6) Check the list of boot environments and make the restored BE bootable:
        beadm list
        beadm mount <bootenv> /tmp/mnt
        bootadm install-bootloader -P rpool
       devfsadm -Cn -r /tmp/mnt
       touch /tmp/mnt/reconfigure
       beadm umount <bootenv>
       beadm activate <bootenv>
This is for Solaris 11, but it also works for Solaris 10; only the last part (step 6) is different.
I need to look this up again, but if I remember correctly, for Solaris 10 you need to set the bootfs property on the rpool (zpool set bootfs=rpool/ROOT/<bootenv> rpool).
If you want, I have a script that makes a backup of the rpool to an NFS share.
Hope this helps
Regards
Filip
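
For illustration, a minimal sketch of such a backup-to-NFS script (not Filip's actual script; the snapshot name and variable names are assumptions, and the NFS path placeholders come from the thread):

    #!/bin/sh
    # Sketch only: back up the root pool to an NFS share with zfs send.
    # SNAP and DEST are placeholders to adapt to your environment.
    SNAP=backup-`date +%Y%m%d`
    DEST=/net/<ipaddress>/<share>

    # Recursive snapshot of the whole root pool
    zfs snapshot -r rpool@$SNAP

    # The dump and swap volumes don't need to be in the stream
    zfs destroy rpool/dump@$SNAP
    zfs destroy rpool/swap@$SNAP

    # Send the full recursive stream, compressed, to the NFS share
    zfs send -R rpool@$SNAP | gzip > $DEST/rpool-$SNAP.gz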

Similar Messages

  • Scrub ZFS root pool

    Does anyone see any issue in having a cron job that scrubs the ZFS root pool rpool periodically?
    Let's say every Sunday at midnight (00:00 of Sunday).
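
    A minimal sketch of such a cron entry (assuming root's crontab; in Solaris cron the fields are minute, hour, day-of-month, month, day-of-week, and 0 means Sunday):

        # run a scrub of the root pool every Sunday at 00:00
        0 0 * * 0 /usr/sbin/zpool scrub rpool

    zpool scrub returns right away and the scrub runs in the background; progress can be checked later with zpool status rpool.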


  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seemed to go well.
    After I was done, I wondered whether using the whole disk for the zpool was a good idea or not. I did some looking around but I didn't see anything that suggested it was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9 GB partition on both drives so that the root ZFS is created on that to make the zpool, then mirror it to the other drive, and then create another ZFS pool from the remaining disk?

    route1 wrote:
    Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.

    Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.

  • Solaris 10 with zfs root install and VMWare-How to grow disk?

    I have a Solaris 10 instance installed on an ESX host. During the install, I selected a 20 GB disk. Now I would like to grow the disk from 20 GB to 25 GB. I made the change in VMware, but now the issue seems to be on the Solaris side. I haven't seen anything on how to grow the FS in Solaris. Someone mentioned using fdisk to manually change the number of cylinders, but that seems awkward. I am using a zfs root install too.
    bash-3.00# fdisk /dev/rdsk/c1t0d0s0
    Total disk size is 3263 cylinders
    Cylinder size is 16065 (512 byte) blocks
                                                  Cylinders
         Partition   Status    Type          Start   End   Length    %
         =========   ======    ============  =====   ===   ======   ===
             1       Active    Solaris2          1   2609    2609    80
    This shows the expanded number of cylinders. but a format command does not.
    bash-3.00# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci1000,30@10/sd@0,0
    Specify disk (enter its number):
    Any ideas?
    Thanks.

    That's the MBR label on the disk. That's easy to modify with fdisk.
    Inside the Solaris partition is another (VTOC) label. That one is harder to modify. It's what you see when you run 'format' -> 'print' -> 'partition' or 'prtvtoc'.
    To resize it, the only method I'm aware of is to record the slices somewhere, then destroy the label or run 'format -e' and create a new label for the autodetect device. Once you have the new label in place, you can recreate the old slices. All the data on the disk should be stable.
    Then you can make use of the space on the disk for new slices, for enlarging the last slice, or if you have a VM of some sort managing the disk.
    Darren
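
    A rough sketch of that sequence for the disk in this post (assuming the root pool lives on s0 of c1t0d0; the interactive format steps are abbreviated as comments, and autoexpand/online -e assume a Solaris 10 update recent enough to support them):

        # Record the current slice table before touching the label
        prtvtoc /dev/rdsk/c1t0d0s2 > /var/tmp/c1t0d0.vtoc

        # Relabel so the new geometry is picked up, then recreate s0
        # format -e c1t0d0  ->  type / auto configure  ->  partition  ->  recreate slice 0  ->  label

        # Let the root pool grow into the extra space
        zpool set autoexpand=on rpool
        zpool online -e rpool c1t0d0s0
        zpool list rpool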

  • S10 x86 ZFS on VMWare - Increase root pool?

    I'm running Solaris 10 x86 on VMWare.
    I need more space in the zfs root pool.
    I doubled the provisioned space in Hard disk 1, but it is not visible to the VM (format).
    I tried creating a 2nd HD, but root pool can't have multiple VDEVs.
    How can I add space to my root pool without rebuilding?

    Hi,
    This is what I did in single user (it may fail in multi user):
    -> format -> partition -> print
    Current partition table (original):
    Total disk cylinders available: 1302 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 1301 9.97GB (1301/0/0) 20900565
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wm 0 - 1301 9.97GB (1302/0/0) 20916630
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> fdisk
    Total disk size is 1566 cylinders
    Cylinder size is 16065 (512 byte) blocks
                                                  Cylinders
         Partition   Status    Type          Start   End   Length    %
         =========   ======    ============  =====   ===   ======   ===
             1       Active    Solaris2          1   1304    1304    83
    -> format -> fdisk -> delete partition 1
    -> format -> fdisk -> create SOLARIS2 partition with 100% of the disk
    Total disk size is 1566 cylinders
    Cylinder size is 16065 (512 byte) blocks
                                                  Cylinders
         Partition   Status    Type          Start   End   Length    %
         =========   ======    ============  =====   ===   ======   ===
             1       Active    Solaris2          1   1565    1565   100
    format -> partition -> print
    Current partition table (original):
    Total disk cylinders available: 1563 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 0 0 (0/0/0) 0
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 1562 11.97GB (1563/0/0) 25109595
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> partition -> 0 cyl=1 size=1562e
    Current partition table (unnamed):
    Total disk cylinders available: 1563 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 1 - 1562 11.97GB (1562/0/0) 25093530
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 1562 11.97GB (1563/0/0) 25109595
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    -> format -> partition -> label
    zpool set autoexpand=on rpool
    zpool list
    zpool scrub rpool
    zpool status
    Best regards,
    Ibraima
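
    If autoexpand was still off when the partition was grown, the same space can also be claimed per device; a sketch (the device name here is a placeholder, take the real one from zpool status rpool):

        # expand one device in place instead of (or in addition to) autoexpand
        zpool online -e rpool c1t0d0s0
        zpool list rpool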

  • Can't boot with zfs root - Solaris 10 u6

    Having installed Solaris 10 u6 on one disk with native ufs, I made this work by adding the following entries:
    /etc/driver_aliases
    glm pci1000,f
    /etc/path_to_inst
    <long pci string for my scsi controller> glm
    which are needed since the driver selected by default is the ncrs scsi controller driver, which does not work in 64-bit.
    Now I would like to create a new boot env. on a second disk on the same scsi controller, but use zfs instead.
    Using Live Upgrade to create a new boot env on the second disk with zfs as file system worked fine.
    But when trying to boot of it I get the following error
    spa_import_rootpool: error 22
    panic[cpu0]/thread=fffffffffbc26ba0: cannot mount root path /pci@0,0-pci1002,4384@14,4/pci1000@1000@5/sd@1,0:a
    Well that's the same error I got with ufs before making the above mentioned changes to /etc/driver_aliases and /etc/path_to_inst.
    But that seems not to be enough when using zfs.
    What am I missing ??

    Hmm, I dropped the live upgrade from ufs to zfs because I was not 100% sure it worked.
    Then I did a reinstall, selecting zfs during the install, and made the changes to driver_aliases and path_to_inst before the first reboot.
    The system came up fine on the first reboot, did use the glm scsi driver, and ran in 64-bit.
    But that was it. When the system was then rebooted (at which point it made a new boot-archive) it stopped working. Same error as before.
    I have managed to get it to boot in 32-bit mode but still get the same error (that's independent of which scsi driver is used).
    In all cases it does show the SunOS Release banner, it does load the driver (ncrs or glm), and it detects the disks with the correct path and numbering.
    But it fails to load the file system.
    So basically the current status is no-go if you need to use the ncrs/glm scsi driver to access the disks with your zfs root pool.
    Failsafe boot works and can mount the zfs root pool, but that's no fun as a server OS :(

  • Trouble mirroring root pool onto larger disk

    Hi,
    I have Solaris 11 Express with a root pool installed on a 500 GB disk. I'd like to migrate it to a 2 TB disk. I've followed the instructions on the ZFS troubleshooting guide (http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk) and the Oracle ZFS Administration Guide (http://download.oracle.com/docs/cd/E19253-01/819-5461/ghzvx/index.html) pretty carefully. However, things still don't work: after re-silvering, I switch my BIOS to boot from the 2 TB disk and at boot, some kind of error message appears for < 1 second before the machine reboots itself. Is there any way I can view this message? I.e., is this message written to the log anywhere?
    As far as I can tell, I've set up all the partitions and slices correctly (VTOC below). The only error message I get is when I do:
    # zpool attach rpool c9t0d0s0 c13d1s0
    (c9t0d0s0 is the 500 GB original disk, c13d1s0 is the 2 TB new disk)
    I get:
    invalid vdev specification
    use '-f' to override the following errors:
    /dev/dsk/c13d1s0 overlaps with /dev/dsk/c13d1s2
    But that's a well known bug and I use "-f" to force it since the backup slice shouldn't matter. If anyone has any ideas, I really appreciate it.
    Here's my disk layout
    =============================================================
    500 GB disk
    fdisk
    Total disk size is 60801 cylinders
    Cylinder size is 16065 (512 byte) blocks
                                                  Cylinders
         Partition   Status    Type          Start   End     Length    %
         =========   ======    ============  =====   =====   ======   ===
             1       Active    Solaris2          1   60800    60800   100
    VTOC:
    partition> p
    Current partition table (original):
    Total disk cylinders available: 60798 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 60797 465.73GB (60797/0/0) 976703805
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 60797 465.74GB (60798/0/0) 976719870
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    =============================================================
    2 TB disk:
    fdisk:
    Total disk size is 60799 cylinders
    Cylinder size is 64260 (512 byte) blocks
                                                  Cylinders
         Partition   Status    Type          Start   End     Length    %
         =========   ======    ============  =====   =====   ======   ===
             1       Active    Solaris2          1   60798    60798   100
    VTOC:
    partition> p
    Current partition table (original):
    Total disk cylinders available: 60796 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 60795 1.82TB (60795/0/0) 3906686700
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 60795 1.82TB (60796/0/0) 3906750960
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 31.38MB (1/0/0) 64260
    9 unassigned wm 0 0 (0/0/0) 0
    =============================================================

    Thanks for the suggestions! I fixed the problem. I took a video of the boot sequence using my iPhone and managed to catch the error messages. It appears that ZFS (specifically, zpools) aren't very robust to devices changing ports (i.e., names).
    My original boot device (500 GB) was on c9t0d0 (SATA port 0). My new boot device was on port c13d1 (on a PCI SATA card). The problem was a combination of devices getting renamed.
    After successfully attaching the 2TB disk to create a mirror, I of course could boot off the original 500GB disk. The problem was I didn't try to boot off the 2TB disk on the PCI card. Instead I swapped the cables, which led to the zpool freaking out about not being able to find the device (I only discovered this through the video! Automatic reboot on a kernel panic might not be such a great idea after all...). The other thing I originally tried was removing the 500 GB disk and try booting off the PCI card, but it seems that my BIOS isn't very robust to devices being removed either - it renames devices in its "list of hard drives" in such a way that it fails to boot from the default device. Manually rearranging the list, or using the boot sequence selector (F8) made it all work.
    In the end, since I really didn't want to boot off a PCI card, I simply detached the 500 GB disk, attached a different 2 TB disk to SATA port 0, and mirrored onto that. Finally, I detached the 2 TB disk on the PCI card (I don't have enough physical slots in the machine to hold that last disk!).
    Just to tie up any loose ends, does anyone know how to tell ZFS that a device has changed position (or name)? My data zpool is running pretty happily as in raid-z2. But if I take one of the disks and attach it to another SATA port, it complains that the device is missing. If I do a zpool replace, is it smart enough to recognize that the disk simply moved, and not waste a day re-silvering?
    Similarly, is there a way to change the port of a disk attached to the root pool without using an extra disk and doing two mirrors?
    Thanks!
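
    Not a definitive answer, but for a data pool the usual way to make ZFS re-scan device paths after moving a disk to another port is an export/import cycle, which just re-reads the labels and does not resilver; a sketch (the pool name is a placeholder):

        zpool export tank
        # re-import; add -d <dir> if the devices are not under /dev/dsk
        zpool import tank
        zpool status tank

    This doesn't help for the root pool, since it can't be exported while the system is booted from it.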

  • Cloning a ZFS rooted zone does a copy rather than snapshot and clone?

    Solaris 10 05/08 and 10/08 on SPARC
    When I clone an existing zone that is stored on a ZFS filesystem the system creates a copy rather than take a ZFS snapshot and clone as the documentation suggests;
    Using ZFS to Clone Non-Global Zones and Other Enhancements
    Solaris 10 6/06 Release: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool,
    zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm
    clone will take a ZFS snapshot of the source zonepath and set up the target zonepath.

    Currently I have a ZFS root pool for the global zone, the boot environment is s10u6;
    rpool 10.4G 56.5G 94K /rpool
    rpool/ROOT 7.39G 56.5G 18K legacy
    rpool/ROOT/s10u6 7.39G 56.5G 6.57G /
    rpool/ROOT/s10u6/zones 844M 56.5G 27K /zones
    rpool/ROOT/s10u6/zones/moetutil 844M 56.5G 844M /zones/moetutil
    My first zone is called moetutil and is up and running. I create a new zone ready to clone the original one;
    -bash-3.00# zonecfg -z newzone 'create; set autoboot=true; set zonepath=/zones/newzone; add net; set address=192.168.0.10; set physical=ce0; end; verify; commit; exit'
    -bash-3.00# zoneadm list -vc
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - moetutil installed /zones/moetutil native shared
    - newzone configured /zones/newzone native shared
    Now I clone it;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning zonepath /zones/moetutil...
    I'm expecting to see;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
    Instead of copying, a ZFS clone has been created for this zone.
    What am I missing?
    Thanks
    Mark

    Hi Mark,
    Sorry, I don't have an answer but I'm seeing the exact same behavior - also with S10u6. Please let me know if you get an answer.
    Thanks!
    Dave

  • How so I protect my root file system? - x86 solaris 10 - zfs data pools

    Hello all:
    I'm new to ZFS and am trying to understand it better before I start building a new file server. I'm looking for a low-cost file server for smaller projects I support and would like to use the ZFS capabilities. If I install Solaris 10 on an x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root filesystem? The files in the ZFS file system are well protected, but what about my operating system files down in the root ufs filesystem? If the root filesystem gets corrupted, do I lose the zfs filesystem too? Or can I independently rebuild the root filesystem and just remount the zfs filesystem? Should I install Solaris 10 on a mirrored set of drives? Can the root filesystem be zfs too? I'd like to be able to use a fairly simple PC to do this, perhaps one that doesn't have built-in raid. I'm not looking for 10 terabytes of storage, maybe just four 500GB sata disks connected into a raidz zpool.
    thanks,

    patrickez wrote:
    If I install Solaris 10 on an x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root filesystem?

    Solaris 10 doesn't yet support ZFS for a root filesystem, but it is working in some OpenSolaris distributions.
    You could use Sun Volume Manager to create a mirror for your root filesystem.

    The files in the ZFS file system are well protected, but what about my operating system files down in the root ufs filesystem? If the root filesystem gets corrupted, do I lose the zfs filesystem too?

    No. They're separate filesystems.

    Or can I independently rebuild the root filesystem and just remount the zfs filesystem?

    Yes. (Actually, you can import the ZFS pool you created.)

    Should I install solaris 10 on a mirrored set of drives?

    If you have one, that would work as well.

    Can the root filesystem be zfs too?

    Not currently in Solaris 10. The initial root support in OpenSolaris will require the root pool be only a single disk or mirrors. No striping, no raidz.
    Darren
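
    As a rough sketch of that layout (UFS root mirrored with Sun Volume Manager, data in a separate raidz pool; every device name below is an example, not from the thread):

        # data pool: four SATA disks in one raidz vdev
        zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

        # SVM root mirror: state database replicas on a small slice,
        # then mirror the UFS root slice of both boot disks
        metadb -a -f -c 2 c0t0d0s7 c0t1d0s7
        metainit -f d11 1 1 c0t0d0s0
        metainit d12 1 1 c0t1d0s0
        metainit d10 -m d11
        metaroot d10          # updates /etc/vfstab and /etc/system
        # reboot, then attach the second half of the mirror:
        # metattach d10 d12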

  • Can ZFS storage pools share a physical drive w/ the root (UFS) file system?

    I wonder if I'm missing something here, because I was under the impression ZFS offered ultimate flexability until I encountered the following fine print 50 pages into the ZFS Administration Guide:
    "Before creating a storage pool, you must determine which devices will store your data. These devices must be disks of at least 128 Mbytes in size, and _they must not be in use by other parts of the operating system_. The devices can be individual slices on a preformatted disk, or they can be entire disks that ZFS formats as a single large slice."
    I thought it was frustrating that ZFS couldn't be used as a boot disk, but the fact that I can't even use the rest of the space on the boot drive for ZFS is aggravating. Or am I missing something? The following text appears elsewhere in the guide, and suggests that I can use the 7th slice:
    "A storage device can be a whole disk (c0t0d0) or _an individual slice_ (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not need to be specially formatted."
    Currently, I've just installed Solaris 10 (6/11) on an Ultra 10. I removed the slice for /export/users (c0t0d0s7) from the default layout during the installation. So there's approx 6 GB in UFS space, and 1/2 GB in swap space. I want to make the 70GB of unused HDD space a ZFS pool.
    Suggestions? I read somewhere that the other slices must be unmounted before creating a pool. How do I unmount the root partition, then use the ZFS tools that reside in that unmounted space to create a pool?
    Edited by: MindFuq on Oct 20, 2007 8:12 PM

    It's not convenient for me to post that right now, because my ultra 10 is offline (for some reason the DNS never got set up properly, and creating an /etc/resolv.conf file isn't enough to get it going).
    Anyway, you're correct, I can see that there is overlap with the cylinders.
    During installation, I removed slice 7 from the table. However, under the covers the installer created a 'backup' partition (slice 2), which used the rest of the space (~74.5GB), so the installer didn't leave the space unused as I had expected. Strangely, the backup partition overlapped; it started at zero as the swap partition did, and it ended ~3000 cylinders beyond the root partition. I trusted the installer to be correct about things, and simply figured it was acceptable for multiple partitions to share a cylinder. So I deleted slice 2, and created slice 7 using the same boundaries as slice 2.
    So next I'll have to remove the zfs pool, and shrink slice 7 so it goes from cylinder 258 to ~35425.
    [UPDATE] It worked. Thanks Alex! When I ran zpool create tank c0t0d0s7, there was no error.
    Edited by: MindFuq on Oct 22, 2007 8:15 PM
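
    For anyone hitting the same thing, a quick sanity check before handing a slice to ZFS is to print the slice map and make sure the new slice doesn't share cylinders with root or swap; a sketch using the device from this thread:

        # show the full slice table (sector ranges per slice)
        prtvtoc /dev/rdsk/c0t0d0s2

        # then, once s7 has its own non-overlapping range
        zpool create tank c0t0d0s7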

  • Replacing a root pool disk example

    Replacing a root pool disk has several use cases:
    1. Replace a small disk with a larger disk
    2. Replace a failed disk with a replacement disk
    3. Outright move a rpool to a different disk because you want to
    The easiest way is to attach the replacement disk to an existing root pool disk.
    I like this approach better than an outright replacement with zpool replace because
    you can ensure the new disk is bootable while both disks are still attached.
    On an x86 system running S11.1, it's even easier (if you reinstall rather than upgrade) because your
    rpool disk contains an EFI label and you don't have to mess with any labeling.
    On a SPARC system running S11.1 or a SPARC or x86 system running S11, you'll still need to
    apply an SMI (VTOC) label and s0, if necessary.
    See the example below.
    Thanks, Cindy
    1. Create a new BE just to identify that all rpool data is available on the replacement
    disk:
    # beadm list
    BE Active Mountpoint Space Policy Created
    s11u1_24b NR / 4.02G static 2012-12-05 10:24
    # beadm create s11u1_backup
    # beadm list
    BE Active Mountpoint Space Policy Created
    s11u1_24b NR / 4.02G static 2012-12-05 10:24
    s11u1_backup - - 172.0K static 2012-12-11 08:46
    2. Identify the existing root pool disk. You can see that this disk has an EFI label because
    the device identifier is d0, not s0.
    # zpool status rpool
    pool: rpool
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    rpool ONLINE 0 0 0
    c0t5000CCA012C8323Cd0 ONLINE 0 0 0
    3. Attach the replacement disk.
    # zpool attach rpool c0t5000CCA012C8323Cd0 c0t5000C500438124F3d0
    Make sure to wait until resilver is done before rebooting.
    You will see a message from FMA that the pool device is DEGRADED. This
    is because the pool data is being resilvered onto the new disk.
    4. Check the resilvering progress:
    # zpool status
    pool: rpool
    state: DEGRADED
    status: One or more devices is currently being resilvered. The pool will
    continue to function in a degraded state.
    action: Wait for the resilver to complete.
    Run 'zpool status -v' to see device specific details.
    scan: resilver in progress since Tue Dec 11 08:49:57 2012
    42.6G scanned out of 71.7G at 132M/s, 0h3m to go
    42.6G resilvered, 59.44% done
    config:
    NAME STATE READ WRITE CKSUM
    rpool DEGRADED 0 0 0
    mirror-0 DEGRADED 0 0 0
    c0t5000CCA012C8323Cd0 ONLINE 0 0 0
    c0t5000C500438124F3d0 DEGRADED 0 0 0 (resilvering)
    5. When resilvering is complete, check that you can boot from the new disk.
    You will need to boot the new disk specifically from either a SPARC boot PROM
    or an x86 BIOS.
    # zpool status
    pool: rpool
    state: ONLINE
    scan: resilvered 71.7G in 0h8m with 0 errors on Tue Dec 11 08:58:45 2012
    config:
    NAME STATE READ WRITE CKSUM
    rpool ONLINE 0 0 0
    mirror-0 ONLINE 0 0 0
    c0t5000CCA012C8323Cd0 ONLINE 0 0 0
    c0t5000C500438124F3d0 ONLINE 0 0 0
    6. If you can boot successfully from the new disk, detach the old disk.
    # zpool detach rpool c0t5000CCA012C8323Cd0
    Confirm your BE info is intact.
    # beadm list
    BE Active Mountpoint Space Policy Created
    s11u1_24b NR / 4.03G static 2012-12-05 10:24
    s11u1_backup - - 172.0K static 2012-12-11 08:46
    7. If the new disk is larger than the existing disk, you will need to do one of the following
    to see the expanded pool space.
    # zpool set autoexpand=on rpool
    # zpool online -e rpool c0t5000C500438124F3d0
    8. Set the SPARC boot PROM or the x86 BIOS to boot from the new disk.

    Where you're getting confused is that you do want to mirror, temporarily.
    Make sure that s0 on the new disk covers the whole disk.
    Then attach it to the existing disk, as a mirror.
    Wait for it to resilver.
    Then detach the old disk.
    autoexpand is used when the existing disk is getting bigger. That's not what you want to do.
    You want to replace one disk with a second, larger disk.
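
    Put as commands, and reusing the device names from Cindy's example (assuming an SMI-labelled new disk whose s0 covers the whole disk, per the SPARC/S11 note above):

        # attach the larger disk as a temporary mirror of the current rpool disk
        zpool attach rpool c0t5000CCA012C8323Cd0 c0t5000C500438124F3d0

        # wait for the resilver to finish
        zpool status rpool

        # after test-booting from the new disk, drop the old one
        zpool detach rpool c0t5000CCA012C8323Cd0

        # claim the extra capacity of the larger disk
        zpool online -e rpool c0t5000C500438124F3d0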

  • Change ZFS root dataset name for root file system

    Hi all
    A quick one.
    I accepted the default ZFS root dataset name for the root file system during Solaris 10 installation.
    Can I change it to another name afterward without reinstalling the OS? For example,
    zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
    zfs rename rpool/ROOT/s10s_u6wos_07b/var rpool/ROOT/`hostname`/var
    Thank you.

    Renaming the root pool is not recommended.

  • Booting from a mirrored disk on a zfs root system

    Hi all,
    I am a newbie here.
    I have a zfs root system with mirrored disks c0t0d0s0 and c1t0d0s0; grub has been installed on c0t0d0s0 and OS booting is just fine.
    Now the question is: if I want to boot the OS from the mirrored disk c1t0d0s0, how can I achieve that?
    OS is solaris 10 update 7.
    I installed grub on c1t0d0s0 and assume menu.lst needs to be changed (but I don't know how); so far no luck.
    # zpool status zfsroot
    pool: zfsroot
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    zfsroot ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c1t0d0s0 ONLINE 0 0 0
    c0t0d0s0 ONLINE 0 0 0
    # bootadm list-menu
    The location for the active GRUB menu is: /zfsroot/boot/grub/menu.lst
    default 0
    timeout 10
    0 s10u6-zfs
    1 s10u6-zfs failsafe
    # tail /zfsroot/boot/grub/menu.lst
    title s10u6-zfs
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
    module /platform/i86pc/boot_archive
    title s10u6-zfs failsafe
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel /boot/multiboot kernel/unix -s -B console=ttya
    module /boot/x86.miniroot-safe
    Appreciate anyone can provide some tips.
    Thanks.
    Mizuki

    This is what I have in my notes... not sure if I wrote them or not. This is a SPARC example as well. I believe on my x86 I still have to tell the BIOS to boot the mirror.
    After attaching the mirror (if the mirror was not present during the initial install) you need to fix the boot block:
    #installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
    If the primary then fails, you need to tell the OBP to boot from the mirror:
    ok>boot disk1
    for example
    Apparently there is a way to set the OBP to search for a bootable disk automatically.
    Good notes on all kinds of zfs and boot issues here:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Boot_Issues
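
    For the x86 case in the original question, the rough equivalent of installboot is installgrub; a sketch (slice name taken from the post, run from the live system):

        # write GRUB stage1/stage2 to the second half of the mirror,
        # including the master boot sector (-m), so it can boot on its own
        installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

    With both halves carrying GRUB, menu.lst normally doesn't need editing because findroot locates the BE signature on whichever disk is booted; what usually has to change is the BIOS boot-device order.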

  • Unbootable Solaris 10 x86 installed on ZFS root file system

    Hi all,
    I have an unbootable Solaris 10 x86 installed on a ZFS root file system, on an IDE HDD.
    The BIOS keeps showing the message
    DISK BOOT FAILURE , PLEASE INSERT SYSTEM BOOT DISK
    please note :
    1- the HDD is connected properly and recognized by the system
    2- GRUB doesn't show any messages
    Is there any guide to recover the system, or a detailed procedure to boot the system again?
    Thanks,,,

    It's not clear if this is a recently installed system that is refusing to boot OR if the system was working fine and crashed.
    If it's the former, I would suggest you check the BIOS settings to make sure it's booting from the right hard disk. In any case, the Solaris 10 installation should have written the GRUB stage1 and stage2 blocks to the beginning of the disk.
    If the system crashed and is refusing to boot, you can try to boot from a Solaris 10 installation DVD. Choose the single user shell option and see if it can find your system. You should be able to use format/devfsadm/etc to do the actual troubleshooting. If your disk is still responding, try a `zpool import` to see if there is any data that ZFS can recognize (it usually has many backup uberblocks and disk labels scattered around the disk).
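
    A sketch of that recovery path (the pool name rpool is the usual default and an assumption here; adjust to whatever zpool import reports):

        # boot the Solaris 10 DVD and choose the single-user shell option, then:
        devfsadm                      # make sure device nodes exist
        format                        # confirm the IDE disk is visible
        zpool import                  # list pools ZFS can find on it
        zpool import -f -R /a rpool   # import under an alternate root for inspection

    If the pool imports cleanly, the disk and its data are fine, and the problem is most likely the GRUB blocks or the BIOS boot order, which installgrub and the BIOS setup can fix.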

  • Convert ZFS root file system to UFS with data.

    Hi, I need to convert my ZFS root file systems to UFS and boot from the other disk as a slice (/dev/dsk/c1t0d0s0).
    I am OK with splitting the hard disk from the root pool mirror. Any ideas on how this can be achieved?
    Please suggest. Thanks,

    from the same document that was quoted above in the Limitations section:
    Limitations
    Version 2.0 of the Oracle VM Server for SPARC P2V Tool has the following limitations:
    Only UFS file systems are supported.
    Only plain disks (/dev/dsk/c0t0d0s0), Solaris Volume Manager metadevices (/dev/md/dsk/dNNN), and VxVM encapsulated boot disks are supported on the source system.
    During the P2V process, each guest domain can have only a single virtual switch and virtual disk server. You can add more virtual switches and virtual disk servers to the domain after the P2V conversion.
    Support for VxVM volumes is limited to the following volumes on an encapsulated boot disk: rootvol, swapvol, usr, var, opt, and home. The original slices for these volumes must still be present on the boot disk. The P2V tool supports Veritas Volume Manager 5.x on the Solaris 10 OS. However, you can also use the P2V tool to convert Solaris 8 and Solaris 9 operating systems that use VxVM.
    You cannot convert Solaris 10 systems that are configured with zones.
