ZFS root and Live upgrade

Is it possible to create /var as its own ZFS dataset when using Live Upgrade? With UFS, there's the -m option to lucreate. It seems like any Live Upgrade to a ZFS root results in just the root, dump, and swap datasets for the boot environment.
merill

Hey man.
I banged my head against the wall with the same question :-)
One thing that might help you out anyway is that I found a way to move UFS filesystems into the new ZFS pool.
Let's say you have a UFS filesystem holding an application server and other data on /app, which lives on c1t0d0s6.
When you create the new ZFS-based BE, /app is treated as a shared filesystem.
To move it into the new BE, comment out the lines in /etc/vfstab for the filesystems you want moved,
then run lucreate to create the ZFS BE.
After that, create a new dataset for /app, but give it a temporary mountpoint.
Copy all your data over,
rename (or unmount) the original /app,
and set the dataset's mountpoint to /app.
That's it, all your data is now on ZFS.
Hope it will be useful.
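For example, a rough sketch of that sequence (untested; the pool, dataset, and device names are placeholders for your own):
# comment out the /app line in /etc/vfstab, then build the ZFS boot environment
lucreate -n zfsBE -p rpool
# create a dataset for /app under a temporary mountpoint
zfs create -o mountpoint=/app.new rpool/app
# copy the data, preserving permissions and links
cd /app && find . -depth -print | cpio -pdm /app.new
# move the old UFS copy out of the way and take over the mountpoint
umount /app
zfs set mountpoint=/app rpool/app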

Similar Messages

  • Volume as install disk for Guest Domain and Live Upgrade

    Hi Folks,
    I am new to LDOMs and have some questions - any pointers, examples would be much appreciated:
    (1) With support for volumes to be used as whole disks added in LDOM release 1.0.3, can we export a whole LUN under either VERITAS DMP or MPxIO control to a guest domain and install Solaris on it? Any gotchas or special config required to do this?
    (2) Can Solaris Live Upgrade be used with guest LDOMs, or is this ability limited to control domains?
    Thanks

    The answer to your #1 question is YES.
    Here's my MPxIO-enabled device:
    non-STMS device name                 STMS device name
    /dev/rdsk/c2t50060E8010029B33d16     /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    /dev/rdsk/c3t50060E8010029B37d16     /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    Create the virtual disk backend using slice 2:
    ldm add-vdsdev /dev/dsk/c4t4849544143484920373730313036373530303136d0s2 77bootdisk@primary-vds01
    Add the virtual disk to the guest domain:
    ldm add-vdisk apps bootdisk@primary-vds01 ldom1
    The virtual disk will be imported as c0d0, which is the whole LUN.
    Bind and start ldom1, then install the OS (I used JumpStart); it partitioned the boot disk c0d0 as / (15 GB) and swap for the remaining space (10 GB).
    When you run the format print command against this disk on both the guest and the primary domain, you'll see the same slice/size information:
    Part Tag Flag Cylinders Size Blocks
    0 root wm 543 - 1362 15.01GB (820/0/0) 31488000
    1 swap wu 0 - 542 9.94GB (543/0/0) 20851200
    2 backup wm 0 - 1362 24.96GB (1363/0/0) 52339200
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    I haven't used DMP, but HDLM (Hitachi Dynamic Link Manager) does not seem to be supported by LDOMs, as I cannot make it work :(
    I have no answer to your second question, unfortunately.
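    For completeness, the bind/start step mentioned above would look roughly like this (an untested sketch; the service, disk, and domain names follow the example above):
    # export the LUN (slice 2 = whole disk) as a virtual disk backend
    ldm add-vdsdev /dev/dsk/c4t4849544143484920373730313036373530303136d0s2 bootdisk@primary-vds01
    # hand it to the guest domain
    ldm add-vdisk bootdisk bootdisk@primary-vds01 ldom1
    # bind resources and start the guest, then install over the network
    ldm bind ldom1
    ldm start ldom1
    telnet localhost 5000   # guest console; the port depends on your vcc configuration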

  • DiskSuite and Live Upgrade 2.0

    I have two Solaris 7 boxes running DiskSuite to mirror the OS disk onto another drive.
    I need to upgrade to Solaris 8. In the past I have used Live Upgrade to do so, when I had enough free disk space to partition an existing disk or use an unused disk for the Solaris 8 system files.
    In this case, I do not have sufficient free space on the boot disk. So, what is the best approach? It seems that I would have to:
    1. unmirror the file systems
    2. install Solaris 8 onto the old mirror drive using LU 2.0
    3. make the old mirror drive the boot drive
    4. re-establish mirroring, making sure it syncs the right way, from the Solaris 8 disk to the old boot disk
    Comments, suggestions?
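    A rough sketch of those steps with SVM/DiskSuite and Live Upgrade commands (untested; d0/d20, the disk names, and the media path are placeholders for your own layout):
    # 1. detach and release the second submirror so its disk becomes free
    metadetach d0 d20        # d0 = root mirror, d20 = submirror on c0t1d0s0
    metaclear d20
    # 2. build the Solaris 8 BE on the freed slice and upgrade it
    lucreate -c Solaris7 -n Solaris8 -m /:/dev/dsk/c0t1d0s0:ufs
    luupgrade -u -n Solaris8 -s /cdrom/sol_8_sparc/s0
    # 3. switch to the new BE and reboot
    luactivate Solaris8
    init 6
    # 4. once Solaris 8 is verified, re-create the mirror in the other direction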

    I recently built a system (specs below) and installed this card (MSI GF4 Ti4200 VTD8X MS8894, 128MB DDR), and when I try to use Live Update 2 (version 3.33.000, from the CD that came with the card), I get the same message:
    "Warning!!! Your Display Card does not support MSI Live Update 2 function.  Note: MSI Live Update 2 supports the Display Cards of MSI only."
    I'm using the drivers/BIOS that came on the CD: Driver version 6.13.10.4107, BIOS version 4.28.20.05.11.  I see on the nVidia site that they have the 4109 drivers out now, should I try those?  ?(
    I have also made sure to do the suggested modifications to IE (and I don't have PC-cillin installed):
    "Note: In order to operate this application properly, please note the following suggests.
    -Set the IE security setting 'Download signed ActiveX controls' to [Enable] or [Prompt]. (System default is [Prompt]).
    -Disable 'WebTrap' of PC-cillin(R) or any web based anti-virus application when executing MSITM Live Update 2TM.
    -Update Microsoft® Windows® Installer"
    I downloaded a newer version of LiveUpdate (3.35.000) and installed it (after completely uninstalling the old version), and got the same results. Nothing on my system is currently overclocked.
    Help!
    System specs:
    -Soyo SY-KT400 DRAGON Ultra (Platinum Edition) with latest BIOS & Chipset Drivers
    -AMD Athlon XP Thoroughbred 2100+
    -MSI GF4 Ti4200 VTD8X (MS-8894)
    -WD Caviar Special Edition 80 GB HDD, 8 MB Cache
    -512 MB Crucial PC2700 DDR (one stick, in DIMM #1)
    -TDK 40/12/48 CD R/RW
    -Daewoo 905DF Dynaflat 19" Monitor
    -Windows XP Home Edition, SP1/all other updates current
    -On-Board CMedia 6-channel audio
    -On-Board VIA 10/100 Ethernet
    -Altec-Lansing ATP3 Speakers

  • Non-Global zones and Live Upgrade

    Good afternoon,
    Trying to find an answer for a question that I have.
    Currently we have two T5140 servers. One of them is our production Sun Messaging Server and the other is the backup. The zones live on SAN-attached disks (currently mounted on the production server) and each server is "aware" of them; they are only mounted on one server at a time. My question is: can I do a Live Upgrade on the backup server (from Solaris 10 u10 to Solaris 10 u11) and then detach/export the NGZs from the production system and use "update on attach" to upgrade the NGZs to Solaris 10 u11? And if I don't upgrade the production box (global zone) to u11 and have to move my NGZs back to it, will "update on attach" roll the NGZs back to u10?
    We have a test system that we will work through to test Live Upgrade without detaching the zones, but I wanted to gauge the feasibility of doing it the way I describe in the paragraph above.
    Thanks in advance for your help!!
    Doug

    Found my answer: BigAdmin Feature Article: The Zones Update on Attach Feature and Patching in the Solaris 10 OS
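    For reference, the detach / update-on-attach cycle described above looks roughly like this (a sketch only; the zone name and zonepath are placeholders, and whether attach -u will take a zone back down to an older update is exactly the open question here):
    # on the source host: halt and detach the zone
    zoneadm -z msgzone halt
    zoneadm -z msgzone detach
    # on the upgraded host: make the zonepath storage visible, configure the zone
    # from its detached state, then attach with update-on-attach
    zonecfg -z msgzone 'create -a /zones/msgzone'
    zoneadm -z msgzone attach -u
    zoneadm -z msgzone boot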

  • UCE and live upgrade (again)

    Hi, does anyone have any information on how one might use UCE to patch inactive LU BEs?
    Thanks
    --tim

    Thanks!
    --tim
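    Independent of UCE itself, the stock Live Upgrade tooling can apply patches to an inactive BE, which may be a usable fallback (a sketch; the BE name, patch directory, and patch ID are only examples):
    # add one or more patches to the inactive BE, then activate it
    luupgrade -t -n sol10-patched -s /var/tmp/10_Recommended/patches 142910-17
    luactivate sol10-patched
    init 6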

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade on other nodes with the exact same system configuration, except that the zones there are UFS root, and Live Upgrade works fine on them.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try removing the lofs mount from the zone configuration and running it again, but even if that works I still need a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with zones on a ZFS root in Solaris 10 update 9, but ran into a similar problem along the way.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313-s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit
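    After activating and booting the upgraded BE, the removed filesystem can presumably be added back along these lines (a sketch; the dir/special values are taken from the error output above and may not match the entry that was actually removed):
    zonecfg -z edd313
    add fs
    set dir=/u04
    set special=/global/ora_export/stage
    set type=lofs
    end
    commit
    exit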

  • Live upgrade, zones and separate mount points

    Hi,
    We have quite a large environment of Solaris zones located on VxVM/VxFS. I know this is a questionable configuration, but the choice was made before I got here and now we need to upgrade the environment. The Veritas guides say it is fine to locate zones on Veritas, but I am not sure Sun would approve.
    Anyway, since all zones are located on separate volumes, I want to create a new volume for every zonepath, something like:
    lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
    This works fine for a while after the integration of 6620317 in 121430-23, but when the new environment is to be activated I get errors, see below [1]. If I look at the commands executed by lucreate, I see that the global root is mounted, but my zone root does not seem to have been mounted before the call to zoneadmd [2]. While this might not be a supported configuration, VxVM seems to be supported and I think there are a few people out there with zonepaths on separate disks. Live Upgrade apparently has no problem copying the files off the VxFS filesystem, that part has been done, but the new filesystems do not seem to get mounted correctly.
    Has anyone tried something similar, or any idea how to solve this?
    The system is s10s_u4 with kernel patch 127111-10 and Live Upgrade patches 121430-25 and 121428-10.
    1:
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    Copying.
    Creating shared file system mount points.
    Copying root of zone <zone01>.
    Creating compare databases for boot environment <upgrade>.
    Creating compare database for file system </zones/zone01>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <upgrade>.
    Making boot environment <upgrade> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
    zoneadm: zone 'zone01': call to zoneadmd failed
    ERROR: unable to mount zone <zone01> in </.alt.upgrade>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
    umount: /dev/dsk/c2t1d0s0 not mounted
    ERROR: cannot unmount </dev/dsk/c2t1d0s0>
    ERROR: cannot mount boot environment by name <upgrade>
    ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
    ERROR: Update of loader failed.
    ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    2:
    0 21191 21113 /usr/lib/lu/lumount -f upgrade
    0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
    0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
    0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
    0 21196 21192 mount swap /.alt.upgrade/var/run
    0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
    0 21197 21192 mount swap /.alt.upgrade/tmp
    0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
    0 21199 21198 /bin/expr 2 - 1
    0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
    0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
    0 21202 21198 false
    0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
    0 21206 21204 sed s/\([^\]\)::/\1:-:/
    0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
    0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
    0 21210 21203 false
    0 21211 21203 gettext unable to mount zone <%s> in <%s>
    0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up
    Edited by: henrikj_ on Sep 8, 2008 11:55 AM Added Solaris release and patch information.

    I updated my manual pages and got a reminder of the zonename field of the -m option to lucreate. But I still have no success: if I have the root filesystem for the zone in vfstab, it tries to mount the current zone root into the alternate BE:
    # lucreate -u upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
    <snip>
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
    Mounting file systems for boot environment <upgrade>.
    ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
    allowable number of mount points exceeded
    ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
    ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
    ERROR: unmounting partially mounted boot environment file systems
    If I try to do the same but with the filesystem removed from vfstab, I get another error:
    <snip>
    Creating boot environment <upgrade>.
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
    Mounting file systems for boot environment <upgrade>.
    Calculating required sizes of file systems for boot environment <upgrade>.
    Populating file systems on boot environment <upgrade>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point <…>.
    <snip>
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    If I let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs

  • X86 server: Live Upgrade - now no grub

    Hi,
    I did a live upgrade of a server in a test environment; the test server mirrors our production hosts. The problem I have is that I do not have the luxury of doing the live upgrade on another slice of the same disk. I am working with an x4600, so my primary root disk is c3t0d0 and the alternate disk is c3t1d0. Since I do not have the ability to use c3t0d0 as the location of another BE, I am stuck using the only other disk, c3t1d0. So I am left breaking the mirror and live upgrading to c3t1d0. Now the problem: once I complete the LU successfully and everything is tested, we make the new BE gold and basically sync the old BE drive (c3t0d0) up to the new BE (c3t1d0). HERE COMES MY ISSUE: once I do that, I no longer have GRUB.
    I receive this error when I run bootadm list-menu:
    # bootadm list-menu
    mount: /dev/md/dsk/d0 no such device
    bootadm: mount of /dev/md/dsk/d0 (fstype ufs) failed
    bootadm: cannot find GRUB menu
    And you guessed it, once I reboot, I am hosed. So I am left booting into single-user mode off of my jumpstart server. Once I do that, I add the following to menu.lst on both disks (c3t0d0 and c3t1d0):
    #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
    title Solaris 10 8/07 s10x_u4wos_10 X86
    kernel /platform/i86pc/multiboot
    module /platform/i86pc/boot_archive
    #---------------------END BOOTADM--------------------
    #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
    title Solaris failsafe
    kernel /boot/multiboot kernel/unix -s -B console=ttya
    module /boot/x86.miniroot-safe
    #---------------------END BOOTADM--------------------
    title Solaris 10 8/07 s10x_u4wos_10 X86 (single-user)
    kernel /platform/i86pc/multiboot -s -B console=ttya
    module /platform/i86pc/boot_archive
    The server comes up just fine, but when I do a bootadm list-menu, it STILL reports the following:
    # bootadm list-menu
    mount: /dev/md/dsk/d0 or /tmp/GRUB_slice_mntpt.891, no such file or directory
    bootadm: mount of /dev/md/dsk/d0 (fstype ufs) failed
    bootadm: cannot find GRUB menu
    Any help would be appreciated. I know this is not the ideal usage of Live Upgrade, but when I have no other option, what am I supposed to do?
    thanks

    cat /etc/lu/GRUB_slice
    Set PHYS_SLICE and LOG_SLICE to the correct slice.
    Also check /etc/lu/GRUB_root and make sure the correct device (either the single disk or the metadevice) is listed there.
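    If the menu entries are in place but the boot blocks went stale after re-mirroring, reinstalling GRUB on both halves of the mirror is also worth a try (a sketch; it assumes root is on slice 0 of each disk):
    # reinstall GRUB stage1/stage2 on both disks
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0
    # rebuild the boot archive for good measure
    bootadm update-archive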

  • Live upgrade only for zfs root?

    Is Live Upgrade the only upgrade option for a ZFS root on 5/09? Is this true? I have tried to do live upgrades previously and have had no luck, particularly on my old Blade 1000 with an 18 GB drive.

    Reading over this post I see it is a little unclear: I am trying to upgrade a u6 installation that has a ZFS root to u7.
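    For what it's worth, the usual Live Upgrade sequence for taking a ZFS-root u6 system to u7 looks roughly like this (a sketch; the BE name and media path are placeholders):
    # clone the current ZFS root into a new BE (a snapshot/clone, so little extra space is needed)
    lucreate -n s10u7
    # upgrade the new BE from the u7 media
    luupgrade -u -n s10u7 -s /cdrom/sol_10_509_sparc
    # switch over and reboot
    luactivate s10u7
    init 6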

  • Live upgrade - solaris 8/07 (U4) , with non-global zones and SC 3.2

    Dears,
    I need to use Live Upgrade for SC 3.2 with non-global zones, going from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents, but couldn't find one complete document covering the whole process.
    I know that upgrading Solaris 10 with non-global zones is supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Solaris 10 update 9 - live upgrade issues with ZFS

    Hi
    After doing a live upgrade from Solaris 10 update 8 to Solaris 10 update 9, the alternate boot environment I created is no longer bootable.
    I completed all the pre-upgrade steps, such as:
    - Installing the latest version of Live Upgrade from the update 9 ISO.
    - Creating and testing the new boot environment.
    - Creating a sysidcfg file, used by the live upgrade, that has auto_reg=disable in it.
    There are also no errors while creating the boot environment or even when activating it.
    Here is the error I get:
    SunOS Release 5.10 Version Generic_14489-06 64-bit
    Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
    NOTICE: zfs_parse_bootfs: error 22
    Cannot mount root on altroot/37 fstype zfs
    *panic[cpu0]/thread=fffffffffbc28040: vfs mountroot: cannot mount root*
    ffffffffffbc4a8d0 genunix:main+107 ()
    Skipping system dump - no dump device configured
    Does anyone know how I can fix this?
    Edited by: user12099270 on 02-Feb-2011 04:49

    Found the culprit... *142910-17*... breaks it
    System has findroot enabled GRUB
    Updating GRUB menu default setting
    GRUB menu default setting is unaffected
    Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10x_u8wos_08a> as <mount-point>//boot/grub/menu.lst.prev.
    File </etc/lu/GRUB_backup_menu> propagation successful
    Successfully deleted entry from GRUB menu
    Validating the contents of the media </admin/x86/Patches/10_x86_Recommended/patches>.
    The media contains 204 software patches that can be added.
    Mounting the BE <s10x_u8wos_08a_Jan2011>.
    Adding patches to the BE <s10x_u8wos_08a_Jan2011>.
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Done!
    The following requested patches have packages not installed on the system
    Package SUNWio-tools from directory SUNWio-tools in patch 142910-17 is not installed on the system. Changes for package SUNWio-tools will not be applied to the system.
    Package SUNWzoneu from directory SUNWzoneu in patch 142910-17 is not installed on the system. Changes for package SUNWzoneu will not be applied to the system.
    Package SUNWpsm-ipp from directory SUNWpsm-ipp in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-ipp will not be applied to the system.
    Package SUNWsshdu from directory SUNWsshdu in patch 142910-17 is not installed on the system. Changes for package SUNWsshdu will not be applied to the system.
    Package SUNWsacom from directory SUNWsacom in patch 142910-17 is not installed on the system. Changes for package SUNWsacom will not be applied to the system.
    Package SUNWmdbr from directory SUNWmdbr in patch 142910-17 is not installed on the system. Changes for package SUNWmdbr will not be applied to the system.
    Package SUNWopenssl-commands from directory SUNWopenssl-commands in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-commands will not be applied to the system.
    Package SUNWsshdr from directory SUNWsshdr in patch 142910-17 is not installed on the system. Changes for package SUNWsshdr will not be applied to the system.
    Package SUNWsshcu from directory SUNWsshcu in patch 142910-17 is not installed on the system. Changes for package SUNWsshcu will not be applied to the system.
    Package SUNWsshu from directory SUNWsshu in patch 142910-17 is not installed on the system. Changes for package SUNWsshu will not be applied to the system.
    Package SUNWgrubS from directory SUNWgrubS in patch 142910-17 is not installed on the system. Changes for package SUNWgrubS will not be applied to the system.
    Package SUNWzoner from directory SUNWzoner in patch 142910-17 is not installed on the system. Changes for package SUNWzoner will not be applied to the system.
    Package SUNWmdb from directory SUNWmdb in patch 142910-17 is not installed on the system. Changes for package SUNWmdb will not be applied to the system.
    Package SUNWpool from directory SUNWpool in patch 142910-17 is not installed on the system. Changes for package SUNWpool will not be applied to the system.
    Package SUNWudfr from directory SUNWudfr in patch 142910-17 is not installed on the system. Changes for package SUNWudfr will not be applied to the system.
    Package SUNWxcu4 from directory SUNWxcu4 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu4 will not be applied to the system.
    Package SUNWarc from directory SUNWarc in patch 142910-17 is not installed on the system. Changes for package SUNWarc will not be applied to the system.
    Package SUNWtftp from directory SUNWtftp in patch 142910-17 is not installed on the system. Changes for package SUNWtftp will not be applied to the system.
    Package SUNWaccu from directory SUNWaccu in patch 142910-17 is not installed on the system. Changes for package SUNWaccu will not be applied to the system.
    Package SUNWppm from directory SUNWppm in patch 142910-17 is not installed on the system. Changes for package SUNWppm will not be applied to the system.
    Package SUNWtoo from directory SUNWtoo in patch 142910-17 is not installed on the system. Changes for package SUNWtoo will not be applied to the system.
    Package SUNWcpc from directory SUNWcpc.i in patch 142910-17 is not installed on the system. Changes for package SUNWcpc will not be applied to the system.
    Package SUNWftdur from directory SUNWftdur in patch 142910-17 is not installed on the system. Changes for package SUNWftdur will not be applied to the system.
    Package SUNWypr from directory SUNWypr in patch 142910-17 is not installed on the system. Changes for package SUNWypr will not be applied to the system.
    Package SUNWlxr from directory SUNWlxr in patch 142910-17 is not installed on the system. Changes for package SUNWlxr will not be applied to the system.
    Package SUNWdcar from directory SUNWdcar in patch 142910-17 is not installed on the system. Changes for package SUNWdcar will not be applied to the system.
    Package SUNWnfssu from directory SUNWnfssu in patch 142910-17 is not installed on the system. Changes for package SUNWnfssu will not be applied to the system.
    Package SUNWpcmem from directory SUNWpcmem in patch 142910-17 is not installed on the system. Changes for package SUNWpcmem will not be applied to the system.
    Package SUNWlxu from directory SUNWlxu in patch 142910-17 is not installed on the system. Changes for package SUNWlxu will not be applied to the system.
    Package SUNWxcu6 from directory SUNWxcu6 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu6 will not be applied to the system.
    Package SUNWpcmci from directory SUNWpcmci in patch 142910-17 is not installed on the system. Changes for package SUNWpcmci will not be applied to the system.
    Package SUNWarcr from directory SUNWarcr in patch 142910-17 is not installed on the system. Changes for package SUNWarcr will not be applied to the system.
    Package SUNWscpu from directory SUNWscpu in patch 142910-17 is not installed on the system. Changes for package SUNWscpu will not be applied to the system.
    Package SUNWcpcu from directory SUNWcpcu in patch 142910-17 is not installed on the system. Changes for package SUNWcpcu will not be applied to the system.
    Package SUNWopenssl-include from directory SUNWopenssl-include in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-include will not be applied to the system.
    Package SUNWdtrp from directory SUNWdtrp in patch 142910-17 is not installed on the system. Changes for package SUNWdtrp will not be applied to the system.
    Package SUNWhermon from directory SUNWhermon in patch 142910-17 is not installed on the system. Changes for package SUNWhermon will not be applied to the system.
    Package SUNWpsm-lpd from directory SUNWpsm-lpd in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-lpd will not be applied to the system.
    Package SUNWdtrc from directory SUNWdtrc in patch 142910-17 is not installed on the system. Changes for package SUNWdtrc will not be applied to the system.
    Package SUNWhea from directory SUNWhea in patch 142910-17 is not installed on the system. Changes for package SUNWhea will not be applied to the system.
    Package SUNW1394 from directory SUNW1394 in patch 142910-17 is not installed on the system. Changes for package SUNW1394 will not be applied to the system.
    Package SUNWrds from directory SUNWrds in patch 142910-17 is not installed on the system. Changes for package SUNWrds will not be applied to the system.
    Package SUNWnfsskr from directory SUNWnfsskr in patch 142910-17 is not installed on the system. Changes for package SUNWnfsskr will not be applied to the system.
    Package SUNWudf from directory SUNWudf in patch 142910-17 is not installed on the system. Changes for package SUNWudf will not be applied to the system.
    Package SUNWixgb from directory SUNWixgb in patch 142910-17 is not installed on the system. Changes for package SUNWixgb will not be applied to the system.
    Checking patches that you specified for installation.
    Done!
    Approved patches will be installed in this order:
    142910-17
    Checking installed patches...
    Executing prepatch script...
    Installing patch packages...
    Patch 142910-17 has been successfully installed.
    See /a/var/sadm/patch/142910-17/log for details
    Executing postpatch script...
    Creating GRUB menu in /a
    Installing grub on /dev/rdsk/c2t0d0s0
    stage1 written to partition 0 sector 0 (abs 16065)
    stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
    Patch packages installed:
    BRCMbnx
    SUNWaac
    SUNWahci
    SUNWamd8111s
    SUNWcakr
    SUNWckr
    SUNWcry
    SUNWcryr
    SUNWcsd
    SUNWcsl
    SUNWcslr
    SUNWcsr
    SUNWcsu
    SUNWesu
    SUNWfmd
    SUNWfmdr
    SUNWgrub
    SUNWhxge
    SUNWib
    SUNWigb
    SUNWintgige
    SUNWipoib
    SUNWixgbe
    SUNWmdr
    SUNWmegasas
    SUNWmptsas
    SUNWmrsas
    SUNWmv88sx
    SUNWnfsckr
    SUNWnfscr
    SUNWnfscu
    SUNWnge
    SUNWnisu
    SUNWntxn
    SUNWnv-sata
    SUNWnxge
    SUNWopenssl-libraries
    SUNWos86r
    SUNWpapi
    SUNWpcu
    SUNWpiclu
    SUNWpsdcr
    SUNWpsdir
    SUNWpsu
    SUNWrge
    SUNWrpcib
    SUNWrsgk
    SUNWses
    SUNWsmapi
    SUNWsndmr
    SUNWsndmu
    SUNWtavor
    SUNWudapltu
    SUNWusb
    SUNWxge
    SUNWxvmpv
    SUNWzfskr
    SUNWzfsr
    SUNWzfsu
    PBE GRUB has no capability information.
    PBE GRUB has no versioning information.
    ABE GRUB is newer than PBE GRUB. Updating GRUB.
    GRUB update was successfull.
    Unmounting the BE <s10x_u8wos_08a_Jan2011>.
    The patch add to the BE <s10x_u8wos_08a_Jan2011> completed.
    Still need to know how to resolve it though...
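    Until the root cause is found, the previous BE can at least be reactivated to get the machine booting again (a sketch; the BE names are taken from the log above, and this assumes the patched BE was the one activated):
    # fall back to the pre-patch environment
    luactivate s10x_u8wos_08a
    init 6
    # if the patched BE <s10x_u8wos_08a_Jan2011> won't boot far enough to run luactivate,
    # boot the old BE's GRUB entry (or failsafe) and run luactivate from there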

  • Best practices for ZFS file systems when using live upgrade?

    I would like feedback on how to lay out ZFS file systems to deal with files that change constantly during the Live Upgrade process. For the rest of this post, let's assume I am building a very active FreeRadius server with log files that are constantly updated and must be preserved in any boot environment during the LU process.
    Here is the ZFS layout I have come up with (swap, home, etc. omitted):
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              11.0G  52.0G    94K  /rpool
    rpool/ROOT                         4.80G  52.0G    18K  legacy
    rpool/ROOT/boot1                   4.80G  52.0G  4.28G  /
    rpool/ROOT/boot1/zones-root         534M  52.0G    20K  /zones-root
    rpool/ROOT/boot1/zones-root/zone1   534M  52.0G   534M  /zones-root/zone1
    rpool/zone-data                      37K  52.0G    19K  /zones-data
    rpool/zone-data/zone1-runtime        18K  52.0G    18K  /zones-data/zone1-runtime
    There are 2 key components here:
    1) The ROOT file system - This stores the / file systems of the local and global zones.
    2) The zone-data file system - This stores the data that will be changing within the local zones.
    Here is the configuration for the zone itself:
    <zone name="zone1" zonepath="/zones-root/zone1" autoboot="true" bootargs="-m verbose">
      <inherited-pkg-dir directory="/lib"/>
      <inherited-pkg-dir directory="/platform"/>
      <inherited-pkg-dir directory="/sbin"/>
      <inherited-pkg-dir directory="/usr"/>
      <filesystem special="/zones-data/zone1-runtime" directory="/runtime" type="lofs"/>
      <network address="192.168.0.1" physical="e1000g0"/>
    </zone>
    The key components here are:
    1) The local zone / is shared in the same file system as global zone /
    2) The /runtime file system in the local zone is stored outside of the global rpool/ROOT file system in order to maintain data that changes across the live upgrade boot environments.
    The system (local and global zone) will operate like this:
    The global zone is used to manage zones only.
    Application software that has constantly changing data will be installed in the /runtime directory within the local zone. For example, FreeRadius will be installed in: /runtime/freeradius
    During a live upgrade the / file system in both the local and global zones will get updated, while /runtime is mounted untouched in whatever boot environment that is loaded.
    Does this make sense? Is there a better way to accomplish what I am looking for? Is this setup going to cause any problems?
    What I would really like is to not have to worry about any of this and just install the application software wherever the software supplier defaults to. It would be great if the system somehow magically knew to leave my changing data alone across boot environments.
    Thanks in advance for your feedback!
    --Jason

    Hello "jemurray".
    Have you read this document? (page 198)
    http://docs.sun.com/app/docs/doc/820-7013?l=en
    Then the solution is:
    01.- Create an alternate boot enviroment
    a.- In a new rpool
    b.- In the same rpool
    02.- Upgrade this new enviroment
    03.- Then I've seen that you have the "radious-zone" in a sparse zone (it's that right??) so, when you update the alternate boot enviroment you will (at the same time) upgrading the "radious-zone".
    This maybe sound easy but you should be carefull, please try this in a development enviroment
    Good luck
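    To make the data-outside-ROOT idea in the question concrete, the separate runtime dataset could be created like this (a sketch; the pool and dataset names follow the layout shown above):
    # keep mutable application data outside rpool/ROOT so lucreate never clones or copies it
    zfs create -o mountpoint=/zones-data rpool/zone-data
    zfs create rpool/zone-data/zone1-runtime
    # the zone then sees it through the lofs mount declared in its configuration:
    #   add fs; set dir=/runtime; set special=/zones-data/zone1-runtime; set type=lofs; end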

  • Solaris 10 with zfs root install and VMWare-How to grow disk?

    I have a Solaris 10 instance installed on an ESX host. During the install I selected a 20 GB disk. Now I would like to grow the disk from 20 GB to 25 GB. I made the change in VMware, but now the issue seems to be on the Solaris side. I haven't seen anything on how to grow the filesystem in Solaris. Someone mentioned using fdisk to manually change the number of cylinders, but that seems awkward. I am using a ZFS root install, too.
    bash-3.00# fdisk /dev/rdsk/c1t0d0s0
    Total disk size is 3263 cylinders
    Cylinder size is 16065 (512 byte) blocks
                                     Cylinders
    Partition   Status   Type        Start   End    Length    %
    =========   ======   =========   =====   ====   ======   ===
        1       Active   Solaris2        1   2609     2609    80
    This shows the expanded number of cylinders, but the format command does not:
    bash-3.00# format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci1000,30@10/sd@0,0
    Specify disk (enter its number):
    Any ideas?
    Thanks.

    That's the MBR label on the disk. That's easy to modify with fdisk.
    Inside the Solaris partition is another (VTOC) label. That one is harder to modify. It's what you see when you run 'format' -> 'print' -> 'partition', or 'prtvtoc'.
    To resize it, the only method I'm aware of is to record the slices somewhere, then destroy the label or run 'format -e' and write a new label for the autodetected geometry. Once you have the new label in place, you can recreate the old slices. All the data on the disk should remain intact.
    Then you can make use of the extra space for new slices, for enlarging the last slice, or for whatever volume manager is handling the disk.
    Darren
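    Since this is a ZFS root, the pool also has to be told about the new space once the partition and slice have been grown; on recent Solaris 10 updates something like this should work (a sketch; it assumes rpool sits on c1t0d0s0):
    # after enlarging slice 0 with fdisk/format, let the pool grow into it
    zpool set autoexpand=on rpool     # property available on newer pool versions
    zpool online -e rpool c1t0d0s0    # explicitly expand the vdev
    zpool list rpool                  # verify the new SIZE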

  • Ldmp2v  and ZFS  root source system question

    Hi,
    reading the ldmp2v doc, it seems to imply that P2V only supports source systems with a UFS root.
    That is fine for S8 and S9 systems.
    What about newer S10 systems with a ZFS root?
    Thanks

    Check the links:
    Transfer global settings - Multiple source systems
    Re: Difference between Transfer Global Setting & Transfer Exchange rates
    Regards,
    B

  • Cloning a ZFS rooted zone does a copy rather than snapshot and clone?

    Solaris 10 05/08 and 10/08 on SPARC
    When I clone an existing zone that is stored on a ZFS filesystem, the system creates a copy rather than taking a ZFS snapshot and clone as the documentation suggests:
    Using ZFS to Clone Non-Global Zones and Other Enhancements
    Solaris 10 6/06 Release: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool,
    zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm
    clone will take a ZFS snapshot of the source zonepath and set up the target zonepath.
    Currently I have a ZFS root pool for the global zone; the boot environment is s10u6:
    rpool 10.4G 56.5G 94K /rpool
    rpool/ROOT 7.39G 56.5G 18K legacy
    rpool/ROOT/s10u6 7.39G 56.5G 6.57G /
    rpool/ROOT/s10u6/zones 844M 56.5G 27K /zones
    rpool/ROOT/s10u6/zones/moetutil 844M 56.5G 844M /zones/moetutil
    My first zone is called moetutil and is up and running. I create a new zone ready to clone the original one;
    -bash-3.00# zonecfg -z newzone 'create; set autoboot=true; set zonepath=/zones/newzone; add net; set address=192.168.0.10; set physical=ce0; end; verify; commit; exit'
    -bash-3.00# zoneadm list -vc
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - moetutil installed /zones/moetutil native shared
    - newzone configured /zones/newzone native shared
    Now I clone it;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning zonepath /zones/moetutil...
    I'm expecting to see;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
    Instead of copying, a ZFS clone has been created for this zone.
    What am I missing?
    Thanks
    Mark

    Hi Mark,
    Sorry, I don't have an answer but I'm seeing the exact same behavior - also with S10u6. Please let me know if you get an answer.
    Thanks!
    Dave
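    One thing worth checking (a guess, not a confirmed fix): for the snapshot/clone path to kick in, both zonepaths have to be on ZFS in the same pool, and the source zonepath generally needs to be a dataset of its own so it can be snapshotted. A quick way to verify the layout (dataset names follow the listing above):
    # the source zonepath should be its own dataset, mounted at the zonepath
    zfs list -o name,mountpoint rpool/ROOT/s10u6/zones/moetutil
    # the parent of the target zonepath should be on ZFS in the same pool
    df -n /zones /zones/moetutil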
