Solaris 10 update 6 - Jumpstart with ZFS root

Please help me solve a problem with JumpStart and ZFS root in update 6:
On server:
#grep pool profile
pool zfs all auto auto auto
...ok, but on a client:
#/sbin/install-solaris
...<skip>...
ERROR: Field 1 - Keyword "auto" is invalid
Solaris installation program exited.
The manual says that this keyword was introduced in Update 6 (10/08).
What's wrong?
Thanks.

Could someone elaborate on the solution a little more? I copied the check file from the 10_08 DVD, to no avail. The Solaris install root is 10/08 and I've correctly updated the solaris_media_locations file, but I still get this error. I cannot find any mention of the word "pool" in my profiles, though of course I am using "rpool".
ERROR: Field 1 - Keyword "pool" is invalid
Solaris installation program exited.
Thanks,
E.R.
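
For reference, the pool keyword that 10/08 introduced takes five fields (pool name, pool size, swap size, dump size, and the vdev list), so a minimal ZFS-root profile looks roughly like the sketch below; the disk and BE names are only placeholders:
install_type initial_install
system_type standalone
cluster SUNWCreq
pool rpool auto auto auto mirror c1t0d0s0 c1t1d0s0
bootenv installbe bename s10u6_zfs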

Similar Messages

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except that the zones there have UFS roots, and Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But even if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit
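    If the shared directory is still needed inside the zone once the upgrade is finished, it can presumably be added back with zonecfg; a sketch reusing the dir and special paths from the mount error above:
    zonecfg -z edd313
    add fs
    set dir=/u04
    set special=/global/ora_export/stage
    set type=lofs
    end
    verify
    commit
    exit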

  • Solaris 10 with zfs root install and VMWare-How to grow disk?

    I have a Solaris 10 instance installed on an ESX host. During the install, I selected a 20 GB disk. Now I would like to grow the disk from 20 GB to 25 GB. I made the change in VMware, but now the issue seems to be on the Solaris side. I haven't seen anything on how to grow the filesystem in Solaris. Someone mentioned using fdisk to manually change the number of cylinders, but that seems awkward. I am using a ZFS root install, too.
    bash-3.00# fdisk /dev/rdsk/c1t0d0s0
    Total disk size is 3263 cylinders
    Cylinder size is 16065 (512 byte) blocks
    Cylinders
    Partition Status Type Start End Length %
    ========= ====== ============ ===== === ====== ===
    1 Active Solaris2 1 2609 2609 80
    This shows the expanded number of cylinders, but the format command does not.
    bash-3.00# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci1000,30@10/sd@0,0
    Specify disk (enter its number):
    Any ideas?
    Thanks.

    That's the MBR label on the disk. That's easy to modify with fdisk.
    Inside the Solaris partition is another (VTOC) label. That one is harder to modify. It's what you see when you run 'format' -> 'print' -> 'partition' or 'prtvtoc'.
    To resize it, the only method I'm aware of is to record the slices somewhere, then destroy the label or run 'format -e' and create a new label for the autodetected device. Once you have the new label in place, you can recreate the old slices. All the data on the disk should remain intact.
    Then you can make use of the extra space on the disk for new slices, for enlarging the last slice, or for growing a volume if you have a volume manager of some sort managing the disk.
    Darren
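    One way to record the slice table before the relabel and put it back afterwards is with prtvtoc and fmthard (a sketch; the device name follows the example above):
    # save the current VTOC before relabelling with 'format -e'
    prtvtoc /dev/rdsk/c1t0d0s2 > /var/tmp/c1t0d0.vtoc
    # after writing the new, larger label, restore the original slices
    fmthard -s /var/tmp/c1t0d0.vtoc /dev/rdsk/c1t0d0s2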

  • Can't boot with zfs root - Solaris 10 u6

    Having installed Solaris 10 u6 on one disk with native UFS, I made it work by adding the following entries:
    /etc/driver_aliases
    glm pci1000,f
    /etc/path_to_inst
    <long pci string for my scsi controller> glm
    These are needed because the driver selected by default is the ncrs SCSI controller driver, which does not work in 64-bit mode.
    Now I would like to create a new boot env. on a second disk on the same scsi controller, but use zfs instead.
    Using Live Upgrade to create a new boot env on the second disk with zfs as file system worked fine.
    But when trying to boot of it I get the following error
    spa_import_rootpool: error 22
    panic[cpu0]/thread=fffffffffbc26ba0: cannot mount root path /pci@0,0-pci1002,4384@14,4/pci1000@1000@5/sd@1,0:a
    Well, that's the same error I got with UFS before making the above-mentioned changes to /etc/driver_aliases and /etc/path_to_inst.
    But that seems not to be enough when using zfs.
    What am I missing ??

    Hmm, I dropped the Live Upgrade from UFS to ZFS because I was not 100% sure it worked.
    Then I did a reinstall, selecting ZFS during the install, and made the changes to driver_aliases and path_to_inst before the first reboot.
    The system came up fine on the first reboot, used the glm SCSI driver, and ran in 64-bit mode.
    But that was it. When the system was then rebooted (at which point it built a new boot archive) it stopped working. Same error as before.
    I have managed to get it to boot in 32-bit mode, but still the same error (that is independent of which SCSI driver is used).
    In all cases it does print the SunOS Release banner, it does load the driver (ncrs or glm), and it detects the disks at the correct path and numbering.
    But it fails to mount the file system.
    So basically the current status is no-go if you need the ncrs/glm SCSI driver to access the disks holding your ZFS root pool.
    Failsafe boot works and can mount the ZFS root pool, but that's no fun as a server OS :(
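    For what it's worth, the usual drill from failsafe is roughly the following (a sketch, assuming failsafe has mounted the installed ZFS root under /a and the driver_aliases/path_to_inst edits have been re-applied there):
    # rebuild the boot archive of the root that failsafe mounted on /a
    bootadm update-archive -R /a
    umount /a
    reboot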

  • Canvio 3tb update not compatible with ZFS

    Hello,
    I have been using multiple Canvio 3TB USB drives (all purchased this year) in striped sets on a backup server running FreeBSD 10 and utilising the ZFS filesystem. Recently I purchased two more identical drives to increase the pool, but these are not correctly recognised by ZFS. The most significant difference appears to be that the later drives report a 32 KB stripesize, whereas this is zero on the earlier models.
    The disks work perfectly well on a Win7 system, so there is nothing faulty in the hardware.
    Is it possible to change the disk settings or firmware to be the same as the slightly earlier models?
    This is the working configuration on the older disks
    ========================================
    diskinfo -v da6
    da6
            4096            # sectorsize
            3000592982016   # mediasize in bytes (2.7T)
            732566646       # mediasize in sectors
            0               # stripesize
            0               # stripeoffset
            45600           # Cylinders according to firmware.
            255             # Heads according to firmware.
            63              # Sectors according to firmware.
            201308090203EC  # Disk ident.
    ......and this is the failing configuration on the latest disks
    ============================================
    diskinfo -v da0
    da0
            4096            # sectorsize
            3000592977920   # mediasize in bytes (2.7T)
            732566645       # mediasize in sectors
            32768           # stripesize
            0               # stripeoffset
            45600           # Cylinders according to firmware.
            255             # Heads according to firmware.
            63              # Sectors according to firmware.
            20140619016424  # Disk ident.
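    If it helps when comparing the old and new drives, zdb with no arguments dumps the cached configuration of the imported pools, so the ashift that ZFS actually chose for each vdev can be checked with something like:
    zdb | grep ashift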


  • Help - Custom Jumpstart 10/09 x86 with ZFS

    Hi,
    I'm trying to JumpStart Solaris 10/09 x86 with a ZFS root partition. Everything seems to be going okay. My problems are:
    1. The server does not automatically boot after completing the JumpStart.
    2. After installation, every time I reboot the newly jumpstarted machine, I see the following messages on the console, which I don't see with a regular installation:
    # init 6
    propagating updated GRUB menu
    File </boot/grub/menu.lst> propagation successful
    File </etc/lu/GRUB_backup_menu> propagation successful
    File </etc/lu/menu.cksum> propagation successful
    File </sbin/bootadm> propagation successful
    Do you know why I see these propagating messages?
    My rules.ok file is
    root@dev # cat rules.ok
    any - - my_profile -
    # version=2 checksum=3418
    root@dev #
    root@dev # cat my_profile
    install_type initial_install
    system_type standalone
    cluster SUNWCreq
    pool rpool auto auto auto mirror c1t0d0s0 c1t1d0s0
    bootenv installbe bename s10x_u8wos_08a dataset /var
    root@dev #
    Thank you,
    Jacob

    Okay, I figured out why I'm getting the "File </boot/grub/menu.lst> propagation successful", etc. messages. It's because I had
    bootenv installbe bename s10x_u8wos_08a dataset /var
    in my JumpStart profile. When that line exists, an alternate boot environment is created and Live Upgrade is installed. With that line removed, Live Upgrade is not installed and no alternate boot environment is created. But now the problem is that I would like a separate dataset for /var, and unfortunately it is not possible to create one without the bootenv line.
    Does anyone know how I could have the "best of both worlds", i.e. have a separate dataset for /var without installing a new boot environment?
    Thank you,
    Jacob.

  • ZFS root filesystem & slice 7 for metadb (SUNWjet)

    Hi,
    I'm planning to use a ZFS root filesystem in a Sun Cluster 3.3 environment. As described in the documentation, when we use a UFS shared diskset we need to create a small slice for the metadb on slice 7. In a standard installation we can't create slice 7 when installing Solaris with a ZFS root, but we can create it with the JumpStart profile below:
    # example Jumpstart profile -- ZFS with
    # space on s7 left out of the zpool for SVM metadb
    install_type initial_install
    cluster SUNWCXall
    filesys c0t0d0s7 32
    pool rpool auto 2G 2G c0t0d0s0
    So my question is: when we use SUNWjet (the JumpStart Enterprise Toolkit), how can we write a profile similar to the JumpStart profile above?
    Thanks very much for your help.

    This can be done with JET
    You create the template as normal.
    Then create a profile file with the slice 7 line.
    Then edit the template to use it.
    see
    ---8<
    # It is also possible to append additional profile information to the JET
    # derived one. Do this using the base_config_profile_append variable, but
    # don't forget to fill out the remaining base_config_profile variables.
    base_config_profile=""
    base_config_profile_append="
    ---8<
    It is how OpsCentre (which uses JET) does it.
    JET questions are best asked on the external JET alias at Yahoo Groups (until the forum is set up on OTN).
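    A sketch of what that might look like in the client template; whether base_config_profile_append takes literal profile lines or the path of a file to append may depend on the JET version, so treat this as an assumption and check the comments in your own template:
    base_config_profile=""
    base_config_profile_append="filesys c0t0d0s7 32"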

  • Ldmp2v  and ZFS  root source system question

    hi
    Reading the ldmp2v doc, it seems to imply that P2V only supports source systems with a UFS root.
    This is fine for S8 and S9 systems.
    What about newer S10 systems with a ZFS root?
    Thanks


  • CLUSTERING WITH ZFS BOOT DISK

    hi guys,
    I'm looking to create a new cluster on two standalone servers.
    The two servers boot from a ZFS rpool, and I don't know whether the installation procedure laid out the boot disk with a dedicated slice for the global-devices filesystem.
    Is it possible to install Sun Cluster with a ZFS rpool boot disk?
    What do I have to do?
    Alessio

    Hi!
    I have a 10-node Sun Cluster.
    All nodes have a mirrored ZFS rpool.
    Is it better to create the mirrored ZFS boot disk after installing Sun Cluster, or not? I created the ZFS mirror when installing the Solaris 10 OS.
    But I don't see any problem with doing it after installing Sun Cluster or Solaris 10.
    P.S. You can also use a UFS global-devices filesystem with a ZFS root.
    Anatoly S. Zimin
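    For reference, attaching the second side of a root-pool mirror after installation is normally a zpool attach plus (on SPARC) installing the boot block on the new disk, as in the sketch below; the device names are only examples:
    # attach the second disk to the existing root pool
    zpool attach rpool c0t0d0s0 c1t0d0s0
    # put a ZFS boot block on the newly attached disk (SPARC)
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0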

  • Convert ZFS root file system to UFS with data.

    Hi, I need to convert my ZFS root file system to UFS and boot from the other disk as a slice (/dev/dsk/c1t0d0s0).
    I am OK with splitting the disk out of the root pool mirror. Any ideas on how this can be achieved?
    Please suggest. Thanks,

    from the same document that was quoted above in the Limitations section:
    Limitations
    Version 2.0 of the Oracle VM Server for SPARC P2V Tool has the following limitations:
    Only UFS file systems are supported.
    Only plain disks (/dev/dsk/c0t0d0s0), Solaris Volume Manager metadevices (/dev/md/dsk/dNNN), and VxVM encapsulated boot disks are supported on the source system.
    During the P2V process, each guest domain can have only a single virtual switch and virtual disk server. You can add more virtual switches and virtual disk servers to the domain after the P2V conversion.
    Support for VxVM volumes is limited to the following volumes on an encapsulated boot disk: rootvol, swapvol, usr, var, opt, and home. The original slices for these volumes must still be present on the boot disk. The P2V tool supports Veritas Volume Manager 5.x on the Solaris 10 OS. However, you can also use the P2V tool to convert Solaris 8 and Solaris 9 operating systems that use VxVM.
    You cannot convert Solaris 10 systems that are configured with zones.

  • ZFS root problem after iscsi target experiment

    Hello all.
    I need help with this situation... I've installed Solaris 10u6, patched it, and created a branded full zone. Everything went well until I started to experiment with an iSCSI target according to this document: http://docs.sun.com/app/docs/doc/817-5093/fmvcd?l=en&a=view&q=iscsi
    After setting up the iSCSI discovery address of my iSCSI target, Solaris hung, and the only way out was to send a break from the service console. Then I got these messages during boot:
    SunOS Release 5.10 Version Generic_138888-01 64-bit
    /dev/rdsk/c5t216000C0FF8999D1d0s0 is clean
    Reading ZFS config: done.
    Mounting ZFS filesystems: (1/6)cannot mount 'root': mountpoint or dataset is busy
    (6/6)
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Jan 23 14:25:42 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Jan 23 14:25:42 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    ---- Many services are affected by this error; unfortunately one of them is system-log, so I cannot find any relevant information about why this happens.
    bash-3.00# svcs -xv
    svc:/system/filesystem/local:default (local file system mounts)
    State: maintenance since Fri Jan 23 14:25:42 2009
    Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
    See: http://sun.com/msg/SMF-8000-KS
    See: /var/svc/log/system-filesystem-local:default.log
    Impact: 32 dependent services are not running:
    svc:/application/psncollector:default
    svc:/system/webconsole:console
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/milestone/multi-user-server:default
    svc:/system/basicreg:default
    svc:/system/zones:default
    svc:/application/graphical-login/cde-login:default
    svc:/system/iscsitgt:default
    svc:/application/cde-printinfo:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/system/dumpadm:default
    svc:/system/fmd:default
    svc:/system/sysidtool:net
    svc:/network/rpc/bind:default
    svc:/network/nfs/nlockmgr:default
    svc:/network/nfs/status:default
    svc:/network/nfs/mapid:default
    svc:/application/sthwreg:default
    svc:/application/stosreg:default
    svc:/network/inetd:default
    svc:/system/sysidtool:system
    svc:/system/postrun:default
    svc:/system/filesystem/volfs:default
    svc:/system/cron:default
    svc:/application/font/fc-cache:default
    svc:/system/boot-archive-update:default
    svc:/network/shares/group:default
    svc:/network/shares/group:zfs
    svc:/system/sac:default
    [ Jan 23 14:25:40 Executing start method ("/lib/svc/method/fs-local") ]
    WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    [ Jan 23 14:25:42 Method "start" exited with status 95 ]
    Finally, here is the output of the zpool list command, where everything about the ZFS pools looks OK:
    NAME SIZE USED AVAIL CAP HEALTH ALTROOT
    root 68G 18.5G 49.5G 27% ONLINE -
    storedgeD2 404G 45.2G 359G 11% ONLINE -
    I would appreciate any help.
    thanks in advance,
    Berrosch

    OK, I've tried installing s10u6 with the default rpool and moving the root user to the /rpool directory (which is nonsense of course, it was just for testing purposes), and everything went OK.
    Another experiment was with the root pool named 'root' and the root user in /root; everything went OK as well.
    The next try was with the root pool 'root', the root user in /root, and enabling the iSCSI initiator:
    # svcs -a |grep iscsi
    disabled 16:31:07 svc:/network/iscsi_initiator:default
    # svcadm enable iscsi_initiator
    # svcs -a |grep iscsi
    online 16:34:11 svc:/network/iscsi_initiator:default
    and voila! the problem is here...
    Mounting ZFS filesystems: (1/5)cannot mount 'root': mountpoint or dataset is busy
    (5/5)
    svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
    Feb 9 16:37:35 svc.startd[7]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
    Feb 9 16:37:35 svc.startd[7]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
    Seems to be a bug in the iSCSI implementation, some hard-coded reference to 'root' in the source code or something like that...
    Martin
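    A quick way to see what is sitting on the dataset's mountpoint when the "mountpoint or dataset is busy" message appears (a sketch, assuming the pool is named root and keeps its default /root mountpoint):
    zfs get -r mountpoint,mounted root
    fuser -cu /root
    df -h /root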

  • Booting from a mirrored disk on a zfs root system

    Hi all,
    I am a newbie here.
    I have a ZFS root system with mirrored disks c0t0d0s0 and c1t0d0s0; GRUB has been installed on c0t0d0s0 and the OS boots just fine.
    Now the question is: if I want to boot the OS from the mirrored disk c1t0d0s0, how can I achieve that?
    The OS is Solaris 10 update 7.
    I installed GRUB on c1t0d0s0 and assume menu.lst needs to be changed (but I don't know how); so far no luck.
    # zpool status zfsroot
    pool: zfsroot
    state: ONLINE
    scrub: none requested
    config:
    NAME STATE READ WRITE CKSUM
    zfsroot ONLINE 0 0 0
    mirror ONLINE 0 0 0
    c1t0d0s0 ONLINE 0 0 0
    c0t0d0s0 ONLINE 0 0 0
    # bootadm list-menu
    The location for the active GRUB menu is: /zfsroot/boot/grub/menu.lst
    default 0
    timeout 10
    0 s10u6-zfs
    1 s10u6-zfs failsafe
    # tail /zfsroot/boot/grub/menu.lst
    title s10u6-zfs
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
    module /platform/i86pc/boot_archive
    title s10u6-zfs failsafe
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel /boot/multiboot kernel/unix -s -B console=ttya
    module /boot/x86.miniroot-safe
    Appreciate anyone can provide some tips.
    Thanks.
    Mizuki

    This is what I have in my notes... not sure if I wrote them or not. This is a SPARC example as well. I believe on my x86 box I still have to tell the BIOS to boot the mirror.
    After attaching the mirror (if the mirror was not present during the initial install) you need to fix the boot block:
    #installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
    If the primary then fails you need to set the obp to the mirror:
    ok>boot disk1
    for example
    Apparently there is a way to set the obp to search for a bootable disk automatically.
    Good notes on all kinds of zfs and boot issues here:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Boot_Issues
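    The automatic search mentioned above is just the OBP boot-device list; a sketch (the disk aliases vary from machine to machine, so check them with devalias first):
    ok setenv boot-device disk0 disk1
    ok printenv boot-device
    or, from the running OS:
    eeprom boot-device="disk0 disk1"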

  • Lucreate with zfs system

    Hello,
    I am relatively new to using Live Upgrade to patch Solaris 10 systems, but so far I have found it to work pretty well. I have come across an oddity on one system that I would like to have explained. The system has Solaris 10 installed with one ZFS pool, rpool, and when I create an alternate BE to patch with "lucreate -n altBEname", it does not use ZFS snapshots/clones to create the alternate BE quickly. It looks like it is creating a full copy (like lucreate does on our systems using UFS), and it takes about 45 minutes to an hour to complete. On other Solaris 10 systems installed with ZFS, the lucreate command completes in a minute or so, and a snapshot along with the alternate BE shows up in the zfs list output. On this system there is no snapshot, only the alternate BE. Below is output from commands run on the system to show what I am trying to describe. Any ideas what might be the problem? Thanks:
    bash# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    Solaris_10_012011_patched yes yes yes no -
    Solaris_10_022011_patched yes no no yes -
    bash# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 22.1G 112G 97K /rpool
    rpool/ROOT 17.0G 112G 21K legacy
    rpool/ROOT/Solaris_10_012011_patched 9.18G 112G 9.18G /
    rpool/ROOT/Solaris_10_022011_patched 7.87G 112G 7.87G /
    rpool/dump 1.00G 112G 1.00G -
    rpool/export 44K 112G 23K /export
    rpool/export/home 21K 112G 21K /export/home
    rpool/swap 4.05G 112G 4.05G -
    bash# lucreate -n Solaris_10_072011_patched
    Analyzing system configuration.
    Comparing source boot environment <Solaris_10_012011_patched> file systems
    with the file system(s) you specified for the new boot environment.
    Determining which file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <Solaris_10_072011_patched>.
    Source boot environment is <Solaris_10_012011_patched>.
    Creating boot environment <Solaris_10_072011_patched>.
    Creating file systems on boot environment <Solaris_10_072011_patched>.
    Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/Solaris_10_072011_patched>.
    /usr/lib/lu/lumkfs: test: unknown operator zfs
    Populating file systems on boot environment <Solaris_10_072011_patched>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Copying.
    Creating shared file system mount points.
    Creating compare databases for boot environment <Solaris_10_072011_patched>.
    Creating compare database for file system </>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <Solaris_10_072011_patched>.
    Updating compare databases on boot environment <Solaris_10_022011_patched>.
    Making boot environment <Solaris_10_072011_patched> bootable.
    Population of boot environment <Solaris_10_072011_patched> successful.
    Creation of boot environment <Solaris_10_072011_patched> successful.
    bash# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 30.8G 103G 97K /rpool
    rpool/ROOT 25.8G 103G 21K legacy
    rpool/ROOT/Solaris_10_012011_patched 9.22G 103G 9.22G /
    rpool/ROOT/Solaris_10_022011_patched 7.87G 103G 7.87G /
    rpool/ROOT/Solaris_10_072011_patched 8.70G 103G 8.70G /
    rpool/dump 1.00G 103G 1.00G -
    rpool/export 44K 103G 23K /export
    rpool/export/home 21K 103G 21K /export/home
    rpool/swap 4.05G 103G 4.05G -
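    The "/usr/lib/lu/lumkfs: test: unknown operator zfs" line above suggests the Live Upgrade scripts themselves may be at fault; one hedged first check is to compare the installed LU package versions and patch stamps against a system where lucreate behaves correctly, for example:
    pkginfo -l SUNWlucfg SUNWlur SUNWluu | egrep 'PKGINST|VERSION|PSTAMP'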

    Hi,
    I have installed Solaris 10 x86 in VMware. The root disk is currently on a UFS filesystem, "c0d0s0" on disk "c0d0".
    I have added another new disk and created a root pool (rpool), "c0t2d0s0" on "c0t2d0".
    lustatus shows c0d0s0 as the current boot environment.
    When I try to create a new ZFS boot environment it gives me an error. Please help:
    "lucreate -c c0d0s0 -n zfsBE -p rpool"
    tells me "unknown option -- p".
    cat /etc/release shows me this:
    Solaris 10 5/08 s10x_u5wos_10 X86
    Please help!

  • [SOLVED] Installing on ZFS root: "ZFS: cannot find bootfs" on boot.

    I have been experimenting with ZFS filesystems on external HDDs for some time now to get more comfortable with using ZFS in the hopes of one day reinstalling my system on a ZFS root.
    Today, I tried installing a system on an USB external HDD, as my first attempt to install on ZFS (I wanted to try in a safe, disposable environment before I try this on my main system).
    My partition configuration (from gdisk):
    Command (? for help): p
    Disk /dev/sdb: 3907024896 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2FAE5B61-CCEF-4E1E-A81F-97C8406A07BB
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907024862
    Partitions will be aligned on 8-sector boundaries
    Total free space is 0 sectors (0 bytes)
    Number Start (sector) End (sector) Size Code Name
    1 34 2047 1007.0 KiB EF02 BIOS boot partition
    2 2048 264191 128.0 MiB 8300 Linux filesystem
    3 264192 3902828543 1.8 TiB BF00 Solaris root
    4 3902828544 3907024862 2.0 GiB 8300 Linux filesystem
    Partition #1 is for grub, obviously. Partition #2 is an ext2 partition that I mount on /boot in the new system. Partition #3 is where I make my ZFS pool.
    Partition #4 is an ext4 filesystem containing another minimal Arch system for recovery and setup purposes. GRUB is installed on the other system on partition #4, not in the new ZFS system.
    I let grub-mkconfig generate a config file from the system on partition #4 to boot that. Then, I manually edited the generated grub.cfg file to add this menu entry for my ZFS system:
    menuentry 'ZFS BOOT' --class arch --class gnu-linux --class gnu --class os {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux zfs=bootfs zfs_force=1 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    }
    My ZFS configuration:
    # zpool list
    NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
    External2TB 1.81T 6.06G 1.81T 0% 1.00x ONLINE -
    # zpool status :(
    pool: External2TB
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    External2TB ONLINE 0 0 0
    usb-WD_Elements_1048_575836314135334C32383131-0:0-part3 ONLINE 0 0 0
    errors: No known data errors
    # zpool get bootfs
    NAME PROPERTY VALUE SOURCE
    External2TB bootfs External2TB/ArchSystemMain local
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    External2TB 14.6G 1.77T 30K none
    External2TB/ArchSystemMain 293M 1.77T 293M /
    External2TB/PacmanCache 5.77G 1.77T 5.77G /var/cache/pacman/pkg
    External2TB/Swap 8.50G 1.78T 20K -
    The reason for the above configuration is that after I get this system to work, I want to install a second system in the same zpool on a different dataset, and have them share a pacman cache.
    GRUB "boots" successfully, in that it loads the kernel and the initramfs as expected from the 2nd GPT partition. The problem is that the kernel does not load the ZFS:
    ERROR: device '' not found. Skipping fsck.
    ZFS: Cannot find bootfs.
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    and I am left in busybox in the initramfs.
    What am I doing wrong?
    Also, here is my /etc/fstab in the new system:
    # External2TB/ArchSystemMain
    #External2TB/ArchSystemMain / zfs rw,relatime,xattr 0 0
    # External2TB/PacmanCache
    #External2TB/PacmanCache /var/cache/pacman/pkg zfs rw,relatime,xattr 0 0
    UUID=8b7639e2-c858-4ff6-b1d4-7db9a393578f /boot ext4 rw,relatime 0 2
    UUID=7a37363e-9adf-4b4c-adfc-621402456c55 none swap defaults 0 0
    I also tried to boot using "zfs=External2TB/ArchSystemMain" in the kernel options, since that was the more logical way to approach my intention of having multiple systems on different datasets. It would allow me to simply create separate grub menu entries for each, with different boot datasets in the kernel parameters. I also tried setting the mount points to "legacy" and uncommenting the zfs entries in my fstab above. That didn't work either and produced the same results, and that was why I decided to try to use "bootfs" (and maybe have a script for switching between the systems by changing the ZFS bootfs and mountpoints before reboot, reusing the same grub menuentry).
    Thanks in advance for any help.
    Last edited by tajjada (2013-12-30 20:03:09)

    Sounds like a zpool.cache issue. I'm guessing your zpool.cache inside your arch-chroot is not up to date. So on boot the ZFS hook cannot find the bootfs. At least, that's what I assume the issue is, because of this line:
    ERROR: device '' not found. Skipping fsck.
    If your zpool.cache was populated, it would spit out something other than an empty string.
    Some assumptions:
    - You're using the ZFS packages provided by demizer (repository or AUR).
    - You're using the Arch Live ISO or some version of it.
    On cursory glance your configuration looks good. But verify anyway. Here are the steps you should follow to make sure your zpool.cache is correct and up to date:
    Outside arch-chroot:
    - Import pools (not using '-R') and verify the mountpoints.
    - Make a copy of the /etc/zfs/zpool.cache before you export any pools. Again, make a copy of the /etc/zfs/zpool.cache before you export any pools. The reason for this is once you export a pool the /etc/zfs/zpool.cache gets updated and removes any reference to the exported pool. This is likely the cause of your issue, as you would have an empty zpool.cache.
    - Import the pool containing your root filesystem using the '-R' flag, and mount /boot within.
    - Make sure to copy your updated zpool.cache to your arch-chroot environment.
    Inside arch-chroot:
    - Make sure your bootloader is configured properly (i.e. read 'mkinitcpio -H zfs').
    - Use the 'udev' hook and not the 'systemd' one in your mkinitcpio.conf. The zfs-utils package does not have a ported hook (as of 0.6.2_3.12.6-1).
    - Update your initramfs.
    Outside arch-chroot:
    - Unmount filesystems.
    - Export pools.
    - Reboot.
    Inside new system:
    - Make sure to update the hostid then rebuild your initramfs. Then you can drop the 'zfs_force=1'.
    Good luck. I enjoy root on ZFS myself. However, I wouldn't recommend swap on ZFS. Despite what the ZoL tracker says, I still ran into deadlocks on occasion (as of a month ago). I cannot say definitively what the cause of the issue was, but it went away when I moved swap off ZFS to a dedicated partition.
    Last edited by NVS (2013-12-29 14:56:44)
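    For anyone following along, here is a command-level sketch of the steps above (the pool, device, and preset names follow this thread's example and are otherwise assumptions):
    # outside the chroot: import normally, save the cache, then re-import under /mnt
    zpool import External2TB
    cp /etc/zfs/zpool.cache /root/zpool.cache.saved
    zpool export External2TB
    zpool import -R /mnt External2TB
    mount /dev/sdb2 /mnt/boot
    cp /root/zpool.cache.saved /mnt/etc/zfs/zpool.cache
    # inside the chroot: rebuild the initramfs with the zfs hook in mkinitcpio.conf
    arch-chroot /mnt mkinitcpio -p linux
    # then clean up, export, and reboot
    umount /mnt/boot
    zpool export External2TB
    reboot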

  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris, 10/08, over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root configuration and all seemed to go well.
    After I was done, I wondered whether using the whole disk for the zpool was a good idea or not. I did some looking around but didn't see anything that said it was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9 GB slice on both drives, create the root zpool on one slice and mirror it to the other drive, and then create another ZFS pool from the remaining space?

    route1 wrote:
    "Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable."
    Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.
