Live Upgrade - Solaris 10 8/07 (U4) with non-global zones and SC 3.2

Dears,
I need to use Live Upgrade on a Sun Cluster 3.2 configuration with non-global zones, going from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and then update the cluster to 3.2 U3.
I don't know where to start. I've read lots of documents, but couldn't find a single complete document that covers the whole process.
I know that Live Upgrade of Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure whether it is supported with SC.
Appreciate your help.

Hi,
I am not sure whether this document:
http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
has been on the list of docs you found already.
If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on Attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should contain valuable information for other deployments as well.
Regards
Hartmut
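
For orientation, a minimal sketch of the Live Upgrade flow on one node (the BE name and install-image path are assumptions; the cluster-specific steps, including the Sun Cluster 3.2 U3 upgrade itself, should be taken from the Sun Cluster Upgrade Guide and the document above):
# create the alternate boot environment; non-global zones are copied/cloned into it
lucreate -n s10u8
# upgrade the ABE from a Solaris 10 10/09 image (path is an example)
luupgrade -u -n s10u8 -s /net/installserver/export/s10u8
# activate the ABE and reboot into it
luactivate s10u8
init 6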

Similar Messages

  • To break out of a non-global zone and become root user in the global zone

    Hi folks
    "to break out of a non-global zone and become root user in the global zone through a kernel bug exploit"
    Is this possible and has SUN allready a fix/workaround/patch for that?
    Cheers

    Is it possible there's a bug in the kernel? Sure.
    Someone would need to find and identify such a bug before it could be fixed. I've not heard of the discovery of a bug like this. You could check the bug database at www.opensolaris.org.
    Darren

  • Sun Cluster 3.2, Live Upgrade with non-global zones

    I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones, running Solaris 10 8/07 (U4), which I would like to upgrade to Solaris 10 10/08 (U6). The root filesystem of the non-global zones is ZFS on shared SAN disks, so it can be failed over.
    For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
    The tricky part is going to be performing a Live Upgrade on the non-global zones, as their root fs is on the shared disk. I have a free internal disk on each of the nodes for ABEs. But when I run the lucreate command, is it going to put the ABE of the zones on the internal disk as well, or can I specify the ABE location for the non-global zones? Ideally I want this to be on the shared disk.
    Any assistance gratefully received

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on Attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should contain valuable information for other deployments as well.
    Regards
    Hartmut
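
    As a side note on placing the ABE: lucreate's -m option controls where the new BE's global root file systems go, while zone roots on ZFS appear to be cloned within their existing pool (the lucreate output elsewhere on this page shows exactly that). A rough sketch, with device names that are purely assumptions:
    lucreate -n sol10u6 \
      -m /:/dev/dsk/c1t1d0s0:ufs \
      -m /var:/dev/dsk/c1t1d0s3:ufs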

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except the zones are UFS root, and there Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try removing the lofs from the zone configuration and trying again, but if that works I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313-s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit
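
    If the removed lofs mount is still needed, it can presumably be added back once the new BE has been activated; a sketch using the zone from the error above (the dir and special values are assumptions based on the failed mount):
    zonecfg -z edd313
    add fs
    set dir=/u04
    set special=/global/ora_export/stage
    set type=lofs
    end
    commit
    exit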

  • Sharing software package with non-global zone

    I've installed Solaris Studio 12.3 in the global zone on Solaris 11, following the instructions from http://pkg-register.oracle.com, and I want to make it available to the non-global zones. Do I need to install it independently for all zones, or can I export/share it?

    All files are normally installed in /opt/solarisstudio12.3. You can share it, but it is recommended to install it independently in each zone if you want different versions installed.
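
    If you do decide to share a single installation, a hedged zonecfg sketch of a read-only lofs mount of the Studio directory into a zone (the zone name is an assumption):
    zonecfg -z appzone
    add fs
    set dir=/opt/solarisstudio12.3
    set special=/opt/solarisstudio12.3
    set type=lofs
    add options [ro]
    end
    commit
    exit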

  • Patching sol10 with non-global zones.

    The patch README says to perform the installation in single-user mode. If the system is in single-user mode, will the non-global zones get patched as well?

    Hello
    I found a couple of docs that can help you. See the one that describes how to create a "BE" in a solaris10 branded zone; this is the safest way to patch a solaris10 branded zone:
    How to Install a Solaris Patchset in a Branded Zone (Doc ID 1489197.1)
    And if you are using Solaris 11.1, you can use something similar to LU:
    How to create and patch a new boot environment in an Oracle Solaris 10 Zone on an Oracle Solaris 11.1 system (Doc ID 1558773.1)
    Regards
    Eze
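
    Going back to the original question about native non-global zones on Solaris 10: by default a patch added with patchadd in the global zone is also applied to all installed non-global zones, so patching in single-user mode from the global zone should cover them. A minimal sketch (the patch ID and staging path are placeholders):
    # bring the system down to single-user mode, as the patch README requires
    init S
    # apply the patch from the global zone; unless -G is used,
    # patchadd also applies it to every installed non-global zone
    patchadd /var/tmp/137137-09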

  • Non-Global Zones and startup scripts

    Created a non-global zone on a Solaris 10 box.
    Boots up ok and I can login with zlogin.
    It doesn't seem to run any of the scripts in /etc/rc2.d or /etc/rc3.d
    I know Solaris 10 uses the "Service Management Facility" for most services now,
    but can it still run legacy scripts from /etc/init.d?
    Also, I can't get sshd to start in the non-global zone.
    # svcs -a |grep ssh2
    offline 11:44:58 svc:/network/ssh:default
    # svcadm enable -t svc:/network/ssh:default
    # svcs -a |grep ssh2
    offline 11:44:58 svc:/network/ssh:default
    Anyone got any ideas?
    Michael

    These services are offline in the non-global zone, which is why none of the
    rc2.d or rc3.d scripts are being run:
    offline Dec_12 svc:/milestone/multi-user-server:default
    offline Dec_12 svc:/milestone/multi-user:default
    Any idea how to enable these, and why they are offline?
    Michael
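
    A hedged sketch of how one might dig into why those milestones (and ssh) are offline and enable them, using the FMRIs from the output above:
    # show why a service is offline, including unsatisfied dependencies
    svcs -x svc:/network/ssh:default
    svcs -x svc:/milestone/multi-user:default
    # enable the milestones persistently (no -t, so the setting survives a reboot)
    svcadm enable svc:/milestone/multi-user:default
    svcadm enable svc:/milestone/multi-user-server:default
    svcs -a | grep multi-user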

  • Non-global zones and unix sockets

    Hello, I have a problem with local zones and UNIX socket sharing. I've created a directory in the global zone, e.g. /zones/shared, and added it to the zones via 'add fs, type=lofs'. In one zone I'm putting the MySQL socket in it, and I want the other local zones to be able to use it. Is it possible to share a socket between zones?
    After all my experiments I always get 'can't connect to mysql ... (146)'; 146 is the 'connection refused' error.

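
    For reference, a minimal zonecfg sketch of the lofs sharing setup described above (the zone name is an assumption):
    zonecfg -z dbzone1
    add fs
    set dir=/shared
    set special=/zones/shared
    set type=lofs
    end
    commit
    exit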

  • Non-Global zones and Live Upgrade

    Good afternoon,
    Trying to find an answer for a question that I have.
    Currently we have two T5140 servers. One of them is our production Sun Messaging Server and the other is the backup. The zones are on SAN-attached disks (currently running on the production server) and each server is "aware" of them; they are only mounted on one server at a time. My question is: can I do a Live Upgrade on the backup server (from Solaris 10 U10 to Solaris 10 U11), then detach/export the NGZs from the production system and use "update on attach" to upgrade the NGZs to Solaris 10 U11? If I don't upgrade the production box (global zone) to U11 and later have to move my NGZs back to it, will "update on attach" roll the NGZs back to U10?
    We have a test system that we will be working through to test Live Upgrade without detaching the zones, but I wanted to check the feasibility of doing it the way I have mentioned in the above paragraph.
    Thanks in advance for your help!!
    Doug

    Found my answer: BigAdmin Feature Article: The Zones Update on Attach Feature and Patching in the Solaris 10 OS
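
    For reference, a hedged sketch of the detach / update-on-attach sequence being discussed (the zone name is an assumption; it also assumes the zone's SAN storage and configuration are already available on the target node):
    # on the node the zone is leaving
    zoneadm -z msgzone detach
    # on the node that has been Live Upgraded to U11
    zoneadm -z msgzone attach -u
    zoneadm -z msgzone boot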

  • Oracle 10 g non-global zones with asynchronous I/O

    Hi,
    I note that using direct I/O (by setting the forcedirectio option while
    mounting the database file systems) and bypassing the file system
    cache may improve database performance significantly, but this should
    be done only for file systems in which database files and redo log
    files exist. If direct I/O is used and there is not enough database
    buffer cache, it may even decrease the performance by moving the
    problem from double buffering to a lack of database buffer cache. So,
    this performance tuning must be planned carefully, and the database
    buffer cache should be sized properly. The direct I/O option should
    not be used for other file systems used by other applications because
    they still need the UFS buffer cache.
    Now, I have Oracle database installed inside a non-global zone and I
    see a lot of Asynchronous I/O wait warnings in the Oracle Alert log
    file. Storage mount points with UFS filesystem contain the Oracle
    datafiles and redo log files. In addition, two Oracle datafiles of 10
    GB each reside on the local disks. The Oracle init.ora parameter to
    set asynchronous I/O for Oracle database files is
    FILESYSTEMIO_OPTIONS=SETALL.
    Although the above parameter was set during the database installation,
    the aiowait warnings don't seem to disappear.
    Can I use the "forcedirectio" option at the Operating System /etc/
    vfstab file for Oracle datafiles and redo log files?
    Or, should I just move the Oracle database files residing on the local
    disks to the external storage? Will this take care of aiowait warnings
    and if yes, how? The storage is a DAS.
    Regards
    Sandeep

    I presume you compiled PHP on the Sun server. Was this done using gcc or the Sun ONE C compiler?
    If the latter, you can also use the flag --enable-nonportable-atomics when you run configure.
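
    Regarding the forcedirectio question above, a hedged sketch of what a vfstab entry for a file system holding only datafiles and redo logs might look like (the devices and mount point are assumptions):
    # /etc/vfstab
    /dev/dsk/c2t0d0s6  /dev/rdsk/c2t0d0s6  /u02/oradata  ufs  2  yes  forcedirectio,logging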

  • How to retrieve #  on-line procs in a non-global zone with resource pool

    Is there any way to retrieve the number of online processors of the machine from within a non-global zone bound to a resource pool?
    sysconf does not return this value. In fact, this is an excerpt from the man page:
    "If the caller is in a non-global zone and the pools facility is active, sysconf(_SC_NPROCESSORS_CONF) and sysconf(_SC_NPROCESSORS_ONLN) return the number of processors in the processor set of the pool to which the zone is bound."

    So, from within a local zone that's in a pool (e.g. a pool with 8 CPUs), you want to query how many CPUs really exist in the global zone (e.g. the global zone may actually have 16 CPUs)? I don't think that's possible; in fact, for security reasons it's probably intentionally disabled.
    A quick workaround would be a script/cron job in the global zone that writes a small file into the filesystem of the local zone; from within that zone you could then read the CPU count.
    I'm interested though: what are you trying to set up?
    Regards,
    [email protected]
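
    A minimal sketch of that workaround, run from the global zone (the zone name and target path are assumptions):
    # write the global zone's processor count into a file visible inside the local zone
    /usr/sbin/psrinfo | wc -l > /zones/myzone/root/var/tmp/global_cpu_count
    # example global-zone crontab entry to refresh it every 15 minutes
    0,15,30,45 * * * * /usr/sbin/psrinfo | wc -l > /zones/myzone/root/var/tmp/global_cpu_count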

  • Lucreate and non-global zones

    Hi - I'm trying to get my head around Live Upgrade now that I've switched to ZFS on Solaris 10 for our test servers. The problem I have is that we have a number of non-global zones, and when I run the lucreate command I get a number of warnings:
    lucreate -n CPU_2012-07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <CPU_2012-07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <CPU_2012-07>.
    Populating file systems on boot environment <CPU_2012-07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    WARNING: Directory </export/zones/tdukwxstestz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwxstestz01> is shared between BEs, remapping to <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Directory </export/zones/tdukwbprepz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwbprepz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwbprepz01> is shared between BEs, remapping to <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/export/zones/tdukwbprepz01> on <rpool/export/zones/tdukwbprepz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwbprepz01@CPU_2012-07> on <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Creating snapshot for <rpool/export/zones/tdukwxstestz01> on <rpool/export/zones/tdukwxstestz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwxstestz01@CPU_2012-07> on <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10@CPU_2012-07> on <rpool/ROOT/CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10/var@CPU_2012-07> on <rpool/ROOT/CPU_2012-07/var>.
    Mounting ABE <CPU_2012-07>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <CPU_2012-07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <CPU_2012-07> bootable.
    Population of boot environment <CPU_2012-07> successful.
    Creation of boot environment <CPU_2012-07> successful.
    So ALL my non-global zones live under /export/zones/<zonename> - what do all the WARNINGS mean?
    I then applied the Oracle CPU, activated the ABE and shut down the server. When it came back up, none of the zones would start, and this seems to be because all the zonepaths and references to the zones are now labelled with CPU_2012-07 on the end. I can edit the zone XML files to fix this, but I am sure this is not the recommended method and it is something I would prefer not to do.
    So basically I think I have not set my ZFS resource pools up correctly to take into account my non-global zones and where I have created them.
    My zfs list output looks like this now, unfortunately I don't have the output prior to me starting this work:
    zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 91.2G 456G 106K /rpool
    rpool/ROOT 9.06G 456G 31K legacy
    rpool/ROOT/10 38.2M 456G 4.34G /.alt.10
    rpool/ROOT/10/var 22.6M 24.0G 3.60G /.alt.10/var
    rpool/ROOT/CPU_2012-07 9.02G 456G 4.34G /
    rpool/ROOT/CPU_2012-07@CPU_2012-07 566M - 4.34G -
    rpool/ROOT/CPU_2012-07/var 4.12G 456G 4.11G /var
    rpool/ROOT/CPU_2012-07/var@CPU_2012-07 13.8M - 3.58G -
    rpool/dump 2.00G 456G 2.00G -
    rpool/export 5.94G 456G 35K /export
    rpool/export/home 76.9M 23.9G 76.9M /export/home
    rpool/export/zones 5.87G 456G 36K /export/zones
    rpool/export/zones/tdukwbprepz01 21.7M 456G 323M /export/zones/tdukwbprepz01
    rpool/export/zones/tdukwbprepz01-10 321M 31.7G 312M /export/zones/tdukwbprepz01-10
    rpool/export/zones/tdukwbprepz01-10@CPU_2012-07 8.50M - 312M -
    rpool/export/zones/tdukwxstestz01 29.6M 456G 5.49G /export/zones/tdukwxstestz01
    rpool/export/zones/tdukwxstestz01-10 5.51G 26.5G 5.47G /export/zones/tdukwxstestz01-10
    rpool/export/zones/tdukwxstestz01-10@CPU_2012-07 32.1M - 5.48G -
    rpool/logs 8.23G 23.8G 8.23G /logs
    rpool/swap 66.0G 458G 64.0G -
    Any help would be greatly appreciated.
    Thanks - Julian.

    OK, so I've been tinkering with this. I'm not sure this is my exact problem, but a few people have reported issues with the following patch:
    121430-xx
    It gives the exact same WARNINGS when trying to create an ABE via lucreate when non-global zones are present. One of the suggestions was to go back to an earlier revision of this patch, and then someone said it was fixed in revision 71. So I installed the very latest revision, 121430-81, and now it fails with a different error. Fortunately this time I have a capture of the before and after:
    BEFORE:
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    1 build14 running /export/zones/build14 native shared
    bash-3.2# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 70.0G 477G 106K /rpool
    rpool/ROOT 1.98G 477G 31K legacy
    rpool/ROOT/10 1.98G 477G 1.95G /
    rpool/ROOT/10/var 28.8M 24.0G 28.8M /var
    rpool/dump 2.00G 477G 2.00G -
    rpool/export 36.5M 477G 33K /export
    rpool/export/home 35K 24.0G 35K /export/home
    rpool/export/zones 36.4M 477G 32K /export/zones
    rpool/export/zones/build14 36.4M 32.0G 36.4M /export/zones/build14
    rpool/logs 3.78M 32.0G 3.78M /logs
    rpool/swap 66.0G 543G 16K -
    bash-3.2# df -h |grep rpool
    rpool/ROOT/10 547G 1.9G 477G 1% /
    rpool/ROOT/10/var 24G 29M 24G 1% /var
    rpool/export 547G 33K 477G 1% /export
    rpool/export/home 24G 35K 24G 1% /export/home
    rpool/export/zones 547G 32K 477G 1% /export/zones
    rpool/export/zones/build14 32G 36M 32G 1% /export/zones/build14
    rpool/logs 32G 3.8M 32G 1% /logs
    rpool 547G 106K 477G 1% /rpool
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    bash-3.2# lucreate -n 10-CPU_2012_07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <10-CPU_2012_07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <10-CPU_2012_07>.
    Populating file systems on boot environment <10-CPU_2012_07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10/var@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07/var>.
    Mounting ABE <10-CPU_2012_07>.
    Generating file list.
    Copying data from PBE <10> to ABE <10-CPU_2012_07>.
    100% of filenames transferred
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <10-CPU_2012_07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to mount zone <build14> in </.alt.tmp.b-0ob.mnt>.
    zoneadm: zone 'build14': zone root /export/zones/build14/root already in use by zone build14
    zoneadm: zone 'build14': call to zoneadmd failed
    ERROR: Unable to mount non-global zones of ABE <10-CPU_2012_07>: cannot make ABE bootable.
    ERROR: umount: /.alt.tmp.b-0ob.mnt/var/run busy
    ERROR: cannot unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: failed to unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: cannot fully unmount boot environment - <1>: file systems remain mounted
    ERROR: Unable to make boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to populate file systems on boot environment <10-CPU_2012_07>.
    Removing incomplete BE <10-CPU_2012_07>.
    ERROR: Cannot make file systems for boot environment <10-CPU_2012_07>.
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    10-CPU_2012_07 no no no yes -
    So the very latest Live Upgrade patch doesn't seem to have fixed this; I get even more errors now.
    Again any help would be greatly appreciated.
    Thanks - Julian.
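
    Before retrying, it may be worth confirming the installed Live Upgrade patch revision and clearing out the incomplete BE; a hedged sketch using the names from the output above:
    # check which revision of the Live Upgrade patch is installed
    showrev -p | grep 121430
    # remove the incomplete boot environment, then verify
    ludelete 10-CPU_2012_07
    lustatus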

  • After installing 137137-09 patch OK in global zone, bad in non global zone

    Hi all,
    scratching my head with this one.
    Installed 137137-09 fine on a Sun Fire V210. The machine has one non-global zone running a proxy server (nothing very exciting there!). The non-global zone has a local filesystem attached, but I don't think this is the issue (on my test V210 I created the same sort of filesystem and was unable to replicate the problem :( ).
    So 137137-09 is fine in the global zone (I had the non-global zone halted when the patch was installed), and it is also installed in the non-global zone (i.e. when the zone boots it reports rev 137137-09 via uname), but in the patch log in the non-global zone I get this:
    PKG=SUNWust2.v
    Original package not installed.
    pkgadd: ERROR: ERROR: unable to get zone brand: zonecfg_get_brand: No such zone configured
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: cannot open
    pkgadd: ERROR: checkinstall script did not complete successfully
    Dryrun complete.
    No changes were made to the system.
    I'm not sure if the branding error is causing the checkinstall script failure or if they are unrelated. There don't seem to be any obvious permission problems that I can find. I have checked that all the pkg and patch utility patches are up to date on the system. Searching on the brand error gives me a link to a problem with 127127-11, but that was installed on the system before the local zone was created, and all the other seemingly relevant patches (e.g. 119254) are up to date or at a higher revision than recommended.
    I see the same problem on a M5000 which has two non global zones on it.
    Both machines had the Solaris 10 5/08 update bundle applied when it came out, and have had Recommended patch sets applied at regular intervals since.
    This issue only came to light when trying the latest bundles with 138888-01/02 in them; those fail to install on the global zones because the non-global zone install dies claiming 137137-09 is not installed (which is plainly wrong).
    I've tried to recreate this on a test server but unfortunately everything works as it should, even though the test server has a similar history in terms of patches and original setup to the others.
    I'm planning to detach the non-global zone and try an attach -u to see if it will update the patches properly, but I'm not holding out much hope on that one (I need to wait for a maintenance window when I can take the zone down, in a couple of days).
    Any ideas?

    Well, I am following up to my own post it seems I have determined what is causing the problem, or at least situations where the problem can be reproduced which I have been able to do on my test system.
    It seems that if the zone's zonepath is under /usr (e.g. /usr/zones, /usr/local/zones, or some other path under /usr), the patchadd of 137137-09 will fail with a log similar to the one posted above, and this will stop further kernel patches (e.g. 138888-02) from being added.
    The test system had everything patched to current, and searching the web I can't find any other instances of this being an issue, but I have reproduced the problem on my test machine (which had worked OK because its test zones were in a filesystem mounted as /zones). When I used zoneadm -z <zonename> move to relocate a zone to /usr/local and applied 137137-09, the same problem came up.
    I'm not sure what is causing this issue. I imagine it might have to do with some sort of confusion between the patch utilities and the read-only loopback filesystems in the sparse-root zone, but I can't be sure.
    Maybe someone at sun will see this and figure out what the deal is :)
    When I moved my test zone back to /zones the patch applied perfectly, so it's definitely related to having it in /usr or /usr/local (I tried both locations, even though they are separate UFS filesystems on my test server).
    Oh, I am running DiskSuite to mirror filesystems on my V210s, which may or may not have anything to do with it.
    Hope this helps someone in the future at least!
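
    For reference, a hedged sketch of relocating a zone out of /usr with zoneadm move, matching the workaround described above (the zone name is taken from the patch log path; the target path is an assumption):
    # the zone must be halted before it can be moved
    zoneadm -z cotchin halt
    # move the zonepath from /usr/local/zones/cotchin to a filesystem outside /usr
    zoneadm -z cotchin move /zones/cotchin
    zoneadm -z cotchin boot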

  • FilesystemMountPoints for ufs disks mounted to non-global zones

    Hello,
    I have a SAN UFS disk to be used as failover storage, mounted into non-global zones (NGZ).
    Solaris 10 nodes using Cluster 3.2
    I'm looking for the correct value for the property FilesystemMountPoints and the vfstab entry required for a failover disk mounted to a NGZ.
    Should the path NOT include the NGZ root path?
    From the man page for SUNW.HAStoragePlus, for the property FilesystemMountPoints:
    You can specify both the path in a non-global zone and the path in a global zone, in this format:
    Non-GlobalZonePath:GlobalZonePath
    The global zone path is optional. If you do not specify a global zone path, Sun Cluster assumes that the path in
    the non-global zone and in the global zone are the same. If you specify the path as
    Non-GlobalZonePath:GlobalZonePath, you must specify Global-ZonePath in the global zone's /etc/vfstab.
    The default setting for this property is an empty list.
    You can use the SUNW.HAStoragePlus resource type to make a file system available to a non-global zone. To enable
    the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global zone and in the
    non-global zone. The SUNW.HAStoragePlus resource type makes the file system available to the non-global zone
    by mounting the file system in the global zone. The resource type then performs a loopback mount in the
    non-global zone.
    Each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in all
    global zones. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in non-global zones.
    SUNW.HAStoragePlus resources that specify local file systems can only belong in a failover resource group
    with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. You
    can specify both local and global file system mount points at the same time.
    Any file system whose mount point is present in the FilesystemMountPoints extension property is assumed to
    be local if its /etc/vfstab entry satisfies both of the following conditions:
    1. The non-global mount option is specified.
    2. The "mount at boot" field for the entry is set to "no."
    In my situation, I want to mount the disk to /mysql_data on the NGZ called ftp_zone. So, which is the correct setup?
    a. FilesystemMountPoints=/mysql_data:/zones/ftp_zone/root/mysql_data
    Global zone vfstab entry /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zones/ftp_zone/root/mysql_data ufs 1 no logging
    NGZ mount point /mysql_data
    OR
    b. FilesystemMountPoints=/mysql_data:/mysql_data (can be condensed to simply /mysql_data)
    Global zone vfstab entry /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /mysql_data ufs 1 no logging
    NGZ mount point /mysql_data
    Should the path NOT include the NGZ root path?
    And should the fsck pass # be 1 or 2?
    Looking at this example from p. 26 of
    http://wikis.sun.com/download/attachments/24543510/820-4690.pdf
    This example doesn't mention the entry in vfstab.
    Create a resource group that can hold services in nodea zonex and nodeb zoney
    nodea# clresourcegroup create -n nodea:zonex,nodeb:zoney test-rg
    Make sure the HAStoragePlus resource is registered
    nodea# clresourcetype register SUNW.HAStoragePlus
    Now add a UFS [or VxFS] fail-over file system: mount /bigspace1 to /fail-over/export/install in the NGZ
    nodea# clresource create -t SUNW.HAStoragePlus -g test-rg \
    -p FilesystemMountPoints=/fail-over/export/install:/bigspace1 \
    ufs-hasp-rs
    Thank you!

    Hi,
    /zones/oracle-z is my root directory of the zone.
    * add the device to the zone :
    root@mpbxapp1 # zonecfg -z oracle-z
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/dsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/rdsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> exit
    * add FS to NGZ's /etc/vfstab : ( You may omit this step, I don't know why but it works without this step :) )
    root@mpbxapp1 # vi /zones/oracle-z/root/etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/oracle ufs 1 no logging
    * add FS to global zone's /etc/vfstab :
    root@mpbxapp1 # vi /etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /zonefs/oracle ufs 1 no logging
    * set the FilesystemMountPoints property :
    root@mpbxapp1 # /usr/cluster/bin/clresource set -p FilesystemMountPoints=/global/oracle:/zonefs/oracle oracle-hastp
    With this configuration you can ensure that the FS is not directly accessible from the master (global) zone under the same path. Actually, it is accessible, but via a different path. For example, Oracle cannot be started/stopped from the master zone because the control file cannot be accessed. :)
    Hope this helps,
    Murat
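
    Adapting this pattern to the names in the original question would look roughly like the following (a sketch only; the global-zone mount point /zonefs/mysql_data is an assumption, and the fsck pass simply mirrors the entry above):
    FilesystemMountPoints=/mysql_data:/zonefs/mysql_data
    Global zone /etc/vfstab entry: /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zonefs/mysql_data ufs 1 no logging
    Mount point inside ftp_zone: /mysql_data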

  • Always install applications into non-global zones?

    I am planning on taking full advantage of Containers and Zones as I migrate servers and applications to Solaris 10. During this migration process, I believe I will initially need to run just one application on a server. I fear that if I do this in the global zone I will lose flexibility down the road for future projects and workloads. So, should I consider always installing applications in a non-global zone and never install applications in the global zone? This would keep the global zone as the controller of the non-global zones and ensure that I can always add more non-global zones later without having to worry about what is running in the global zone.
    Are there any thoughts or comments on this topic?

    Yes, we've found it's best to run applications in non-global zones. Here are a few benefits. Basically, we only put an application in the global zone if it requires it (like Oracle RAC); note that non-RAC instances of Oracle will run in a non-global zone just fine.
    Reasons to put applications in non-global zones
    o Increased security (self contained environment)
    o Increased flexibility for provisioning resources (CPU, memory, etc) when/if we decide to run multiple applications on the same hardware
    o Increased flexibility in starting up temporary environments to debug issues in parallel to the primary environment (i.e. in another non-global zone on the same server)
    o Works well with Sun Cluster (i.e. we cluster the non-global zones so that they can run across several hosts)
    o Improved troubleshooting and performance diagnosis, as the applications are isolated to a non-global zone
    o Simplified environment for the application admins, as the environment can be fine-tuned for their needs (i.e. only let them see what they need)
    o Disaster recovery is much faster for a non-global zone
