Live upgrade with zones

Hi,
While trying to create a new boot environment in Solaris 10 update 6, I'm getting the following errors for my zone:
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
ERROR: unable to mount zones:
zoneadm: zone 'OTM1_wa_lab': "/usr/lib/fs/lofs/mount -o ro /.alt.tmp.b-AKc.mnt/swdump /zones/app/OTM1_wa_lab-zfsBE/lu/a/swdump" failed with exit code 33
zoneadm: zone 'OTM1_wa_lab': call to zoneadmd failed
ERROR: unable to mount zone <OTM1_wa_lab> in </.alt.tmp.b-AKc.mnt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Unable to remount ABE <zfsBE>: cannot make ABE bootable
ERROR: no boot environment is mounted on root device <rootpool/ROOT/zfsBE>
Making the ABE <zfsBE> bootable FAILED.
Although my zone is running fine:
zoneadm -z OTM1_wa_lab list -v
ID NAME STATUS PATH BRAND IP
3 OTM1_wa_lab running /zones/app/OTM1_wa_lab native shared
Does anybody know what could be the reason for this?

http://opensolaris.org/jive/thread.jspa?messageID=322728

Similar Messages

  • Live Upgrade with Zones - still not working?

    Hi Guys,
    I'm trying to do Live Upgrade from Solaris 10 update 3 to update 4 with a non-global zone installed. It's driving me crazy now.
    I did everything as described in the documentation: installed SUNWlucfg and supposedly updated SUNWluu and SUNWlur (supposedly, because they are exactly the same as they were in update 3), both from packages and with the script from the update 4 DVD, and installed all patches mentioned in infodoc 72099. The lucreate process still complains about missing patches, and I've checked five times that they are installed. They are. It doesn't even allow me to create a second BE. Once I detached the zone, everything went smoothly, but I was under the impression that Live Upgrade with zones would work in update 4.
    It did create a second BE before SUNWlucfg was installed, but failed at the update stage with exactly the same message: install patches according to 72099. After installing SUNWlucfg, the Live Upgrade process fails instantly; that's real progress, I must admit.
    Is it still "mission impossible" to Live Upgrade with non-global zones installed? Or have I missed something?
    Any ideas or success stories are greatly appreciated. Thanks.
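    One way (a hedged sketch, not from the original thread) to double-check that the Live Upgrade packages and patches really are at the expected levels before running lucreate; the patch ID shown is only an example, substitute the ones listed in infodoc 72099:
    # confirm the LU packages were replaced from the target release's media
    pkginfo -l SUNWlucfg SUNWluu SUNWlur | egrep 'PKGINST|VERSION'
    # check whether a given patch from infodoc 72099 is installed (example ID only)
    showrev -p | grep 121430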

    I upgraded from u3 to u5.
    The upgrade went fine and the zones boot up, but there are problems.
    sshd doesn't work.
    svcs -vx prints out this:
    svc:/network/rpc/gss:default (Generic Security Service)
    State: uninitialized since Fri Apr 18 09:54:33 2008
    Reason: Restarter svc:/network/inetd:default is not running.
    See: http://sun.com/msg/SMF-8000-5H
    See: man -M /usr/share/man -s 1M gssd
    Impact: 8 dependent services are not running:
    svc:/network/nfs/client:default
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/system/webconsole:console
    svc:/milestone/multi-user-server:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/network/inetd:default (inetd)
    State: maintenance since Fri Apr 18 09:54:41 2008
    Reason: Restarting too quickly.
    See: http://sun.com/msg/SMF-8000-L5
    See: man -M /usr/share/man -s 1M inetd
    See: /var/svc/log/network-inetd:default.log
    Impact: This service is not running.
    It seems as though the container was not upgraded.
    more /etc/release in the container shows this:
    Solaris 10 11/06 s10s_u3wos_10 SPARC
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 14 November 2006
    How do I get it to fix the inetd service?
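    A hedged sketch of the usual way to recover an SMF service that is in maintenance; check the log first, since clearing only helps once the underlying fault is gone (run inside the non-global zone):
    tail /var/svc/log/network-inetd:default.log   # see why inetd kept restarting
    svcadm clear svc:/network/inetd:default       # take inetd out of maintenance
    svcadm enable svc:/network/rpc/gss:default    # re-enable stuck dependents if needed
    svcs -xv                                      # confirm nothing is left in maintenance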

  • Sun Live Upgrade with local zones Solaris 10

    I have an M800 server running the global root (/) filesystem on a local disk and 6 local zones on another local disk. I am running Solaris 10 8/07.
    I used Live Upgrade to patch the system and created a new BE (lucreate). Both root filesystems are mirrored (RAID-1).
    When I ran lucreate, it copied all 6 local zones' root filesystems to the global root filesystem and failed with not enough space.
    What is the best procedure for using Live Upgrade with local zones?
    Note: I used Live Upgrade with the global zone only, and it worked without any problem.
    regards,

    I have been trying to use luupgrade for Solaris 10 on SPARC, 05/09 -> 10/09.
    lucreate is successful, but luactivate directs me to install 'the rest of the packages' in order to make the BE stable enough to activate. I try to find the packages indicated, but find only "virtual packages" which contain only a pkgmap.
    I installed upgrade 6 on a spare disk to make sure my u7 installation was not defective, but got similar results.
    I got beyond luactivate on x86 a while ago, but had other snags which I left unattended.

  • Sun Cluster 3.2, live upgrade with non-global zones

    I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones, running Solaris 10 8/07 (U4), which I would like to upgrade to Solaris 10 10/08 (U6). The root filesystem of the non-global zones is ZFS and on shared SAN disks so that they can be failed over.
    For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
    The tricky part is going to be performing a live upgrade on the non-global zones, as their root filesystem is on the shared disk. I have a free internal disk on each of the nodes for ABE environments. But when I run the lucreate command, is it going to put the ABE of the zones on the internal disk as well, or can I specify the location of the ABE for the non-global zones? Ideally I want this to be on the shared disk.
    Any assistance gratefully received.

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Solaris Live Upgrade with NGZ

    Hi
    I am trying to perform a Live Upgrade on my 2 servers. Both of them have NGZs installed, and those NGZs are in a different zpool, not in the rpool, and are also on an external disk.
    I have installed all the latest patches required for LU to work properly, but when I perform an lucreate I start having problems... (new_s10BE is the new BE I'm creating)
    On my 1st server:
    I have a global zone and 1 NGZ named mddtri. This is the error I am getting:
    ERROR: unable to mount zone <mddtri> in </.alt.tmp.b-VBb.mnt>.
    zoneadm: zone 'mddtri': zone root /zoneroots/mddtri/root already in use by zone mddtri
    zoneadm: zone 'mddtri': call to zoneadm failed
    ERROR: unable to mount non-global zones of ABE: cannot make bootable
    ERROR: cannot unmount </.alt.tmp.b-VBb.mnt/var/run>
    ERROR: unable to make boot environment <new_s10BE> bootable
    On my 2nd Server:
    I have a global zone and 10 NGZs. This is the error I am getting:
    WARNING: Directory </zoneroots/zone1> zone <global> lies on a filesystem shared between BEs, remapping path to </zoneroots/zone1/zone1-new_s10BE>
    WARNING: Device <zone1> is shared between BEs, remapping to <zone1-new_s10BE>
    (This happens for all the running NGZs.)
    Duplicating ZFS datasets from PBE to ABE.
    ERROR: The dataset <zone1-new_s10BE> is on top of ZFS pool. Unable to clone. Please migrate the zone to dedicated dataset.
    ERROR: Unable to create a duplicate of <zone1> dataset in PBE. <zone1-new_s10BE> dataset in ABE already exists.
    Reverting state of zones in PBE <old_s10BE>
    ERROR: Unable to copy file system from boot environment <old_s10BE> to BE <new_s10BE>
    ERROR: Unable to populate file systems from boot environment <new_s10BE>
    Help, I need to sort this out a.s.a.p!

    Hi,
    I have the same problem with an attached A5200 with mirrored disks (Solaris 9, Volume Manager). Whereas the "critical" partitions should be copied to a second system disk, the mirrored partitions should be shared.
    Here is a script with lucreate.
    #!/bin/sh
    Logdir=/usr/local/LUscripts/logs
    if [ ! -d ${Logdir} ]
    then
    echo "${Logdir} does not exist"
    exit 1
    fi
    /usr/sbin/lucreate \
    -l ${Logdir}/$0.log \
    -o ${Logdir}/$0.error \
    -m /:/dev/dsk/c2t0d0s0:ufs \
    -m /var:/dev/dsk/c2t0d0s3:ufs \
    -m /opt:/dev/dsk/c2t0d0s4:ufs \
    -m -:/dev/dsk/c2t0d0s1:swap \
    -n disk0
    And here is the output:
    root@ahbgbld800x:/usr/local/LUscripts > ./lucreate_disk0.sh
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    INFORMATION: Unable to determine size or capacity of slice </dev/md/RAID-INT/dsk/d0>.
    ERROR: An error occurred during creation of configuration file.
    ERROR: Cannot create the internal configuration file for the current boot environment <disk3>.
    Assertion failed: *ptrKey == (unsigned long long)_lu_malloc, file lu_mem.c, line 362
    Abort - core dumped

  • Solaris 10 Live Upgrade with Veritas Volume Manager 4.1

    What is the latest version of Live Upgrade?
    I need to upgrade our systems from Solaris 8 to Solaris 10. All our systems have Veritas VxVM 4.1, with the OS disks encapsulated and mirrored.
    What's the best way to do the Live Upgrade? Does anyone have clean documents for this?

    There are more things that you need to do.
    Read the Veritas install guide -- it has a pretty good section on what needs to be done.
    http://www.sun.com/products-n-solutions/hardware/docs/Software/Storage_Software/VERITAS_Volume_Manager/

  • Live migration with zones

    Hi all,
    I have been reading up on building a "SPARC Private Cloud" with LDOMs, using whitepapers from Oracle. One thing pops out from the text which really confuses me:
    From pages 8 and 10:
    "VMs may also be securely live migrated or automatically started or restarted across any servers in their respective pools. *Zones are cold migrated*"
    "Secure live migrationā€”Move domains off of servers that are undergoing planned maintenance. *Zones are cold-migrated*."
    Does this really mean that if I have zones inside an LDOM guest, I can live migrate the LDOM guest but not the zones? Hence the zones will go down if I do this? If so, what's the reason behind this? It's hard to grasp the idea that the OS itself can be live migrated, but not the zones inside it that are using the same kernel, binaries, etc. from it...
    Links:
    https://blogs.oracle.com/infrared/entry/building_private_iaas_with_sparc
    http://www.oracle.com/us/groups/public/@otn/documents/webcontent/1659149.pdf
    - Jukka

    Lumi, I'm pretty sure they are comparing LDOMs with zones on a standalone system (i.e. no LDOMs).
    When you migrate a domain, everything the guest kernel is doing should emerge as it was before.
    Migration might take a bit longer than for the GZ alone, since you're using more virtual memory.
    To move an NGZ between standalone GZs, you would indeed have to halt, detach, attach, and boot it.
    But please don't take my word for it... feel free to try both methods for yourself. =-)
    The only limitation for zones in LDOMs that I'm aware of: You cannot currently set elastic power policy.
    Other than that, I don't see why you couldn't keep zones running inside your guest as it moves around.
    Hope that helps... -cheers, CSB
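    For reference, a minimal sketch of the halt/detach/attach/boot sequence mentioned above; the zone name and zonepath are hypothetical, and the zonepath storage has to be visible to the target host before the attach:
    # on the source global zone
    zoneadm -z ngz1 halt
    zoneadm -z ngz1 detach
    # present /zones/ngz1 to the target host, then on the target global zone:
    zonecfg -z ngz1 create -a /zones/ngz1   # register the detached zone from its zonepath
    zoneadm -z ngz1 attach                  # add -u to update packages/patches to the target's level
    zoneadm -z ngz1 boot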

  • Live Upgrade with VDI

    Is there any reason why LU will not work with VDI and its built-in MySQL cluster?
    I attempted to LU a VDI 3.2.1 environment, and after rebooting a secondary node the service would not come back up. Unconfiguring and reconfiguring brought everything back to normal. Was this just an anomaly, or is there a procedure for using LU with VDI?

    Well LU does work fine with VDI. I upgraded a second VDI cluster without problems.

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except the zones are UFS root, and Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit
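    If the loopback mount is still needed inside the zone, a hedged sketch of adding it back with zonecfg once the upgrade and activation are finished; /global/some/path is a placeholder for the real global-zone directory:
    zonecfg -z edd313
    add fs
    set dir=/u05
    set special=/global/some/path
    set type=lofs
    end
    commit
    exit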

  • Live upgrade - Solaris 10 8/07 (U4), with non-global zones and SC 3.2

    Dear all,
    I need to use Live Upgrade for SC 3.2 with non-global zones, going from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and to update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents but couldn't find one complete document covering the whole process.
    I know that upgrading Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help.

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Sparse zones live upgrade

    Hi
    I have a problem with Live Upgrade from Solaris 10 9/10 to 8/11 on a sparse zone.
    The installation in the global zone is good, but the sparse zone cannot boot because the zonepath changed.
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - pbspfox1 installed /zoneprod/pbspfox1-s10u10-baseline native excl
    the initial path is /zoneprod/pbspfox1
    #zfs list
    zoneprod/pbspfox1@s10u10-baseline 22.4M - 2.18G -
    zoneprod/pbspfox1-s10u10-baseline
    # luactivate zoneprod/pbspfox1@s10u10-baseline
    ERROR: The boot environment Name <zoneprod/pbspfox1@s10u10-baseline> is too long - a BE name may contain no more than 30 characters.
    ERROR: Unable to activate boot environment <zoneprod/pbspfox1@s10u10-baseline>.
    How do I upgrade pbspfox1?
    Please help
    Walter

    I'm not exactly sure what happened here, but the zone name doesn't change. If the zone path is wrong, I'd try using zonecfg to change the zonepath to the proper value and then boot the zone normally.
    zonecfg -z pbspfox1
    set zonepath=/zone/pbspfox1 (or whatever is the proper path)
    verify
    commit
    exit
    zoneadm -z pbspfox1 boot
    If the zone didn't get properly updated, you might be able to update it by detaching the zone:
    zoneadm -z pbspfox1 detach
    and doing an update reattach
    zoneadm -z pbspfox1 attach -u
    Disclaimer: All of the above was done from memory without testing, I may have gotten some things wrong.
    Hopefully this will help. I've had similar issues in the past, but I'm not sure I've had exactly the same problem, so I can't tell for sure whether this will help you or not. It is what I'd try. Of course, try to avoid getting yourself into a position where you can't back something out if necessary. This kind of thing can be messy and may require more than one try. If I remember correctly, there were some issues with the Live Upgrade software as shipped with Solaris 10 8/11. I'd get it patched up to current levels ASAP to avoid additional issues.

  • Upgrading Solaris with Zones

    I have just found the following statement in the Solaris 10 Install Guide, section "Upgrade Limitations":
    "If you have configured Solaris Zones on your system, you are not able to upgrade until you have unconfigured and uninstalled your non-global zones"
    I have a serious problem with this limitation. It seems to be impossible to upgrade a Solaris 10 system with zones configured. I would have to shut down and uninstall my zones and applications to upgrade the OS.
    There's not even a way to move the affected zones to another system to keep the application running if the host with the global zone needs maintenance due to an OS upgrade or HW maintenance, or if it simply has too many zones configured on it and all resources are exhausted.
    With these limitations I don't see much reasonable use for zones. In particular, once a zone and an application are set up on a physical box with a particular OS release, I am stuck with it for all time.
    How are you going to solve these problems, and when? What are zones good for in the current implementation? I am still waiting for convincing arguments to use zones...

    I guess I'll state for the record, lest anyone should be confused by my last post, that Live Upgrade doesn't allow the upgrade anymore. I guess I got away with it before LU was updated to restrict such activity.
    ERROR: Unable to upgrade boot environment <Solaris10-B58>.
    INFORMATION: Boot environment <Solaris10-B58> has one or more non-global
    zones installed. This version of Live Upgrade cannot upgrade a boot
    Guess I'd better figure out how to back up my zones so I can cleanly remove them. A simple tar will probably do the trick for now.
    benr.

  • Live upgrade, zones and separate mount points

    Hi,
    We have a quite large zone environment based on Solaris zones located on VxVM/VxFS. I know this is a doubtful configuration, but the choice was made before I got here and now we need to upgrade the environment. The Veritas guides say it's fine to locate zones on Veritas, but I am not sure Sun would approve.
    Anyway, since all zones are located on a separate volume, I want to create a new one for every zonepath, something like:
    lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
    This works fine for a while after the integration of 6620317 in 121430-23, but when the new environment is to be activated I get errors; see below [1]. If I look at the commands executed by lucreate, I see that the global root is mounted, but my zone root does not seem to have been mounted before the call to zoneadmd [2]. While this might not be a supported configuration, VxVM seems to be supported and I think there are a few people out there with zonepaths on separate disks. Live Upgrade apparently has no issues with the files moved from the VxFS filesystem (that part has been done), but the new filesystems do not seem to get mounted correctly.
    Has anyone tried something similar, or does anyone have an idea how to solve this?
    The system is s10s_u4 with kernel 127111-10 and Live Upgrade patches 121430-25 and 121428-10.
    1:
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    Copying.
    Creating shared file system mount points.
    Copying root of zone <zone01>.
    Creating compare databases for boot environment <upgrade>.
    Creating compare database for file system </zones/zone01>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <upgrade>.
    Making boot environment <upgrade> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
    zoneadm: zone 'zone01': call to zoneadmd failed
    ERROR: unable to mount zone <zone01> in </.alt.upgrade>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
    umount: /dev/dsk/c2t1d0s0 not mounted
    ERROR: cannot unmount </dev/dsk/c2t1d0s0>
    ERROR: cannot mount boot environment by name <upgrade>
    ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
    ERROR: Update of loader failed.
    ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    2:
    0 21191 21113 /usr/lib/lu/lumount -f upgrade
    0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
    0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
    0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
    0 21196 21192 mount swap /.alt.upgrade/var/run
    0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
    0 21197 21192 mount swap /.alt.upgrade/tmp
    0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
    0 21199 21198 /bin/expr 2 - 1
    0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
    0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
    0 21202 21198 false
    0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
    0 21206 21204 sed s/\([^\]\)::/\1:-:/
    0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
    0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
    0 21210 21203 false
    0 21211 21203 gettext unable to mount zone <%s> in <%s>
    0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up

    I updated my manual pages and got a reminder of the zonename field for the -m option of lucreate. But I still have no success: if I have the root filesystem for the zone in vfstab, it tries to mount the current zone root into the alternate BE:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
    <snip>
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
    Mounting file systems for boot environment <upgrade>.
    ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
    allowable number of mount points exceeded
    ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
    ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
    ERROR: unmounting partially mounted boot environment file systems
    If I try to do the same but with the filesystem removed from vfstab, then I get another error:
    <snip>
    Creating boot environment <upgrade>.
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
    Mounting file systems for boot environment <upgrade>.
    Calculating required sizes of file systems for boot environment <upgrade>.
    Populating file systems on boot environment <upgrade>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mountED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    If I let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs

  • Upgrading Solaris with Zones installed

    So Solaris 5/09 was released. Previously I have upgraded Solaris the simplest way: by downloading the DVD, throwing it in, and running the upgrade. However, that was before we started using the zones feature, and I'm not sure if there is anything special I have to take into account now.
    They're simply native zones. Is there anything special I have to do, or will the simple disk upgrade still work?

    Upgrading 10/08 to 5/09, including zones, should be very clean, with or without Live Upgrade. The advantage of Live Upgrade is minimal downtime and a safe backout if the upgrade does cause something to break. If you use ZFS root filesystems, it is even easier and 'wastes' very little disk space.
    Mark
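    A minimal sketch of the ZFS-root Live Upgrade flow Mark describes, assuming the 5/09 media is mounted at /cdrom and an arbitrary name for the new BE:
    lucreate -n s10u7                  # clone the current BE (cheap ZFS snapshots/clones)
    luupgrade -u -n s10u7 -s /cdrom    # upgrade the inactive BE from the installation media
    luactivate s10u7                   # make the new BE active on the next boot
    init 6                             # reboot with init/shutdown so the LU boot scripts run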

  • How to upgrade (with a clean install) an OEL4 server to OL5 on a live system

    My problem is that I cannot find any meaningful information on how to perform an OL upgrade on a live system with a running database.
    The system is running OEL 4.9 (migrated from RHEL 4 to OEL) and I want to upgrade it to OL 5.7 (I wanted to use 6.1, but that is not yet certified for 11.2).
    I know that using Anaconda's upgrade is not supported, so I guess I will have to use a clean install (the system is full of legacy RPMs and old drivers that were needed in previous 4.x releases, and dependencies that I wouldn't want to upgrade).
    So I want to do a clean install of 5.7 on the server, but what is the process? There are 3 LVs: 1 for the OS (with partitions for /, /boot, /tmp, /usr, /var and /home),
    1 for the Oracle software/data files, and 1 for exports/archive logs and such.
    If I do a clean install on the OS LV (after taking a backup of everything), will I be able to reuse the existing Oracle software setup on the 2nd logical volume by creating the oracle user with the same uid/gid, running root.sh, etc.? Will it start and be able to mount/open the database? (I am assuming the installation will need to be rebuilt.) There is also a home for 11.2 middleware web-tier utilities (for APEX).
    I have been searching in vain for some guides/insight into the correct procedure for upgrading OEL4 to OL5 (or OL6 if it gets certified in the next few months).
    Any help appreciated.
    Oli

    "My problem is that I cannot find any meaningful information on how to perform an OL upgrade on a live system with a running database."
    Firstly, it is good to know your system is running satisfactorily.
    After taking a full backup (and checking that the backup is good!), you must shut down the RDBMS instances.
    1) Reboot from the distribution media.
    2) Choose a full install; you do not want to upgrade.
    3) Be very careful to click on the checkbox to "use custom setup" so that you will get a display of the current storage setup.
    4) In the Anaconda Disk Druid screen, edit the displayed LVMs to use their old mount points.
    5) Make absolutely certain that the checkboxes to reformat the LVMs for your database setup are clear, not checked.
    6) Do install the default RPM package selection; the OL distro is configured to install the necessary prerequisites for an RDBMS setup.
    7) After the installation completes, be sure to install the "oracle-validated" RPM package to do the necessary tuning.
    8) Add back all the user accounts.
    9) Move on to bringing up the RDBMS.
    It looks scary, and it is, but it can be done. Practice installing this way before doing it for real -- you want to be familiar with each installation step and to know where you are along the process.
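    As a hedged illustration of steps 7 and 8, one way to put the oracle account back with its original numeric IDs; the uid/gid values and group names are examples only, take the real ones from the backed-up /etc/passwd and /etc/group:
    # recreate the groups and the oracle user with the pre-install uid/gid (example values)
    groupadd -g 500 oinstall
    groupadd -g 501 dba
    useradd -u 500 -g oinstall -G dba -m -d /home/oracle oracle
    # pull in the RDBMS prerequisites and kernel settings on OL5
    yum install oracle-validated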
