Partial live upgrade

I have made a partial distribution and I would like to perform a live
upgrade. The partition that is being upgraded has been identified; however,
I am not able to implement the live upgrade.
Has anyone ever tried to implement code changes while an application is
still running, and if so, how were you able to accomplish the upgrade?
Thank you.

Thanks, Pavel.
I have all the patches installed on my servers that are described in Solaris[TM] Live Upgrade Software: Minimum Patch Requirements:
https://support.us.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=1004881.1
Can 10_sparc_lustarter_patchset.zip be downloaded from MOS? I searched in Patches and Updates but can't find it. Can you send me the link or the internal NFS location of the 10_sparc_lustarter_patchset.zip file?
Thanks
Tiffany

Similar Messages

  • Re: (forte-users) Partial live upgrade

    Hi,
    It should be possible using the Compatibility Level, for instance. Then, you
    will be able to install CL1 while CL0 is still running. You could test
    CL1 in an IVP to verify that the installation is correct and then
    uninstall CL0. Maybe the partition should be seen as a reference
    partition. You can find some information in the Nomadic PC section of the
    documentation (you can find a sample on
    http://perso.club-internet.fr/dnguyen/). The normal case is that you need to
    upgrade your server while still having old versions of your clients. You can use
    converters to manage the interface modifications between versions.
    Other ways are possible with libraries, also using the Compatibility Level, as
    Conductor does with processes (they are conditioned as libraries with their
    own compatibility level at each distribution). The engine creates the process
    instances with the latest level of the library, but the instances already there
    are still running the old version.
    One last solution is to manage service reconnection, using
    ReleaseConnection to rebind service objects from the client. So you can
    reinstall a new instance of your partition and then use it without restarting
    your client. Depending on the OS, you may have a lock on the file, so you will
    need to stop your partition, reinstall it, and restart the partition. To make
    this possible, you should not have autostart activated on your environment
    during the manipulation. This solution does not need the Compatibility
    Level.
    Hope this helps,
    Daniel Nguyen
    Freelance Forte Consultant
    http://perso.club-internet.fr/dnguyen/
    andrea harper wrote:
    I have made a partial distribution and I would like to perform a live
    upgrade. The partition that is being upgraded has been identified; however,
    I am not able to implement the live upgrade.
    Has anyone ever tried to implement code changes while an application is
    still running, and if so, how were you able to accomplish the upgrade?
    Thank you.

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except the zones are UFS root, and Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit
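    If the removed filesystem is still needed inside the zone, it can presumably be added back once the upgrade is finished. A minimal sketch, assuming the original mount was /u05 backed by a /global path in the global zone (the special= value below is hypothetical, since the original setting is not shown above):
    zonecfg -z edd313
    add fs
    set dir=/u05 ;# mount point inside the zone
    set special=/global/u05 ;# hypothetical global-zone path - substitute the original value
    set type=lofs
    end
    verify ;# check change
    commit ;# commit change
    exit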

  • Live upgrade, zones and separate mount points

    Hi,
    We have a quite large zone environment based on Solaris zones located on VxVM/VxFS. I know this is a doubtful configuration, but the choice was made before I got here and now we need to upgrade the environment. The Veritas guides say it is fine to locate zones on Veritas, but I am not sure Sun would approve.
    Anyway, since all zones are located on a separate volume, I want to create a new one for every zonepath, something like:
    lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
    This works fine for a while after the integration of 6620317 in 121430-23, but when the new environment is to be activated I get errors, see below[1]. If I look at the commands executed by lucreate, I see that the global root is mounted, but my zone root does not seem to have been mounted before the call to zoneadmd[2]. While this might not be a supported configuration, VxVM seems to be supported and I think that there are a few people out there with zonepaths on separate disks. Live upgrade probably has no issues with the files moved from the VxFS filesystem, that part has been done, but the new filesystems do not seem to get mounted correctly.
    Anyone tried something similar or has any idea on how to solve this?
    The system is an s10s_u4 with kernel 127111-10 and Live Upgrade patches 121430-25 and 121428-10.
    1:
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mount point </zones/zone01>.
    Copying.
    Creating shared file system mount points.
    Copying root of zone <zone01>.
    Creating compare databases for boot environment <upgrade>.
    Creating compare database for file system </zones/zone01>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <upgrade>.
    Making boot environment <upgrade> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
    zoneadm: zone 'zone01': call to zoneadmd failed
    ERROR: unable to mount zone <zone01> in </.alt.upgrade>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
    umount: /dev/dsk/c2t1d0s0 not mounted
    ERROR: cannot unmount </dev/dsk/c2t1d0s0>
    ERROR: cannot mount boot environment by name <upgrade>
    ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
    ERROR: Update of loader failed.
    ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
    Making the ABE <upgrade> bootable FAILED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    2:
    0 21191 21113 /usr/lib/lu/lumount -f upgrade
    0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
    0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
    0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
    0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
    0 21196 21192 mount swap /.alt.upgrade/var/run
    0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
    0 21197 21192 mount swap /.alt.upgrade/tmp
    0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
    0 21199 21198 /bin/expr 2 - 1
    0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
    0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
    0 21202 21198 false
    0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
    0 21206 21204 sed s/\([^\]\)::/\1:-:/
    0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
    0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
    0 21210 21203 false
    0 21211 21203 gettext unable to mount zone <%s> in <%s>
    0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up

    I updated my manual pages and got a reminder about the zonename field for the -m option of lucreate. But I still have no success: if I have the root filesystem for the zone in vfstab, it tries to mount the current root into the alternate BE:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
    <snip>
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
    Mounting file systems for boot environment <upgrade>.
    ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
    allowable number of mount points exceeded
    ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
    ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
    ERROR: unmounting partially mounted boot environment file systems
    If I try to do the same but with the filesystem removed from vfstab, I get another error:
    <snip>
    Creating boot environment <upgrade>.
    Creating file systems on boot environment <upgrade>.
    Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
    Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
    Mounting file systems for boot environment <upgrade>.
    Calculating required sizes of file systems for boot environment <upgrade>.
    Populating file systems on boot environment <upgrade>.
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point </>.
    Populating contents of mountED.
    ERROR: Unable to make boot environment <upgrade> bootable.
    ERROR: Unable to populate file systems on boot environment <upgrade>.
    ERROR: Cannot make file systems for boot environment <upgrade>.
    If i let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
    # lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs

  • Lucreate 'ERROR: mount: /export: invalid argument' - Live Upgrade u8 to u9

    I'm trying to update several servers running Solaris Cluster 3.2 from u8 to u9 using live upgrade. The first server (the quorum server) worked just fine; the next one (a cluster member) goes down like this:
    # lucreate -n solaris-10-u9
    ERROR: mount: /export: Invalid argument
    ERROR: cannot mount mount point </.alt.tmp.b-pob.mnt/export> device </export>
    ERROR: failed to mount file system </export> on </.alt.tmp.b-pob.mnt/export>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to mount ABE <solaris-10-u9>
    ERROR: Unable to clone the existing file systems from boot environment <s10x_u8wos_08a> to create boot environment <solaris-10-u9>.
    ERROR: Cannot make file systems for boot environment <solaris-10-u9>.
    I followed all the necessary steps: removed the installed Live Upgrade packages and installed the ones from the u9 ISO...
    Any ideas would be greatly appreciated.

    The answer, at least in my case:
    When I originally installed this cluster, I apparently misread the part of the documentation that led me to disable lofs. The documentation states that you need to disable lofs only if BOTH of two conditions are met:
    1) You are running HA for NFS to serve a locally available filesystem, AND
    2) you are running automountd.
    In my case, I have no need for automountd, so I disabled the autofs service, re-enabled lofs, and am proceeding with the upgrade.
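    For reference, a rough sketch of that change (hedged: the exact exclude line in /etc/system may vary, and a reboot is needed for it to take effect):
    svcadm disable svc:/system/filesystem/autofs ;# stop the automounter
    vi /etc/system ;# delete the line that reads: exclude: lofs
    init 6 ;# reboot so the lofs module can load again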

  • Patching broken in Live Upgrade from Solaris 9 to Solaris 10

    I'm using Live Upgrade to upgrade a Solaris 9 system to Solaris 10. I installed the
    LU packages from Solaris 10 11/06 plus the latest Live Upgrade patches. Everything
    went fine until I attempted to use `luupgrade -t' to apply Recommended and Security patches to the
    Solaris 10 boot environment. It gave me this error:
    ERROR: The boot environment <sol10env> supports non-global zones. The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.
    Can anyone tell me if there is a way to get the Recommended patches installed without having to first activate and boot up to 10?
    Thanks in advance.
    'chele

    Tried it - got kind of excited for a couple of seconds... but then it failed:
    # ./install_cluster -R /.alt.sol10env
    Patch cluster install script for Solaris 10 Recommended Patch Cluster
    WARNING SYSTEMS WITH LIMITED DISK SPACE SHOULD NOT INSTALL PATCHES:
    .(standard stuff about space)
    in only partially loaded patches. Check and be sure adequate disk space
    is available before continuing.
    Are you ready to continue with install? [y/n]: y
    Determining if sufficient save space exists...
    Sufficient save space exists, continuing...
    ERROR: OS is not valid for this cluster. Exiting.

  • 8/07 live upgrade Seg Faults

    I'm having a bit of trouble trying to live upgrade an 11/06 system to 08/07.
    # luupgrade -u -s /export/install/install.Sol10_sparc.0807 -n 0807 -j /var/adm/luupgrade.profile
    -o /var/adm/L.luupgrade.out.0807 -l /var/adm/L.luupgrade.err.0807
    After the operation is almost complete, this eventually produces in the upgrade log:
    Restarting upgrade:
    Segmentation Fault
    Cannot find non-global zone list.
    Resuming partially completed upgrade
    Processing profile
    Cannot find non-global zone list.
    ERROR: The upgrade script terminated abnormally
    Some other info:
    # cat /etc/release
    Solaris 10 11/06 s10s_u3wos_10 SPARC
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 14 November 2006
    # pkginfo -l SUNWlur
    PKGINST: SUNWlur
    NAME: Live Upgrade (root)
    CATEGORY: application
    ARCH: sparc
    VERSION: 11.10,REV=2005.01.10.00.03
    # pkginfo -l SUNWluu
    PKGINST: SUNWluu
    NAME: Live Upgrade (usr)
    CATEGORY: application
    ARCH: sparc
    VERSION: 11.10,REV=2005.01.10.00.03
    # pkginfo -l SUNWlucfg
    PKGINST: SUNWlucfg
    NAME: Live Upgrade Configuration
    CATEGORY: application
    ARCH: sparc
    VERSION: 11.10,REV=2007.03.09.13.13
    No zones installed
    # zoneadm list
    global
    Any ideas on where to start tracking down the problem?

    Hello,
    Did you successfully fix that?
    I'm encountering the same problem.
    Thx
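    One thing worth checking, going by the pkginfo output above: SUNWlur and SUNWluu are still at REV=2005 while SUNWlucfg is at REV=2007, so the Live Upgrade tools may not match the 8/07 target. A commonly documented remedy (a sketch, untested here) is to reinstall the LU packages from the target media before running luupgrade:
    pkgrm SUNWlucfg SUNWluu SUNWlur ;# remove the old Live Upgrade packages
    pkgadd -d /export/install/install.Sol10_sparc.0807/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu ;# add the 8/07 versions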

  • How to delete file systems from a Live Upgrade environment

    How to delete non-critical file systems from a Live Upgrade boot environment?
    Here is the situation.
    I have a Sol 10 upd 3 machine with 3 disks which I intend to upgrade to Sol 10 upd 6.
    Current layout
    Disk 0: 16 GB:
    /dev/dsk/c0t0d0s0 1.9G /
    /dev/dsk/c0t0d0s1 692M /usr/openwin
    /dev/dsk/c0t0d0s3 7.7G /var
    /dev/dsk/c0t0d0s4 3.9G swap
    /dev/dsk/c0t0d0s5 2.5G /tmp
    Disk 1: 16 GB:
    /dev/dsk/c0t1d0s0 7.7G /usr
    /dev/dsk/c0t1d0s1 1.8G /opt
    /dev/dsk/c0t1d0s3 3.2G /data1
    /dev/dsk/c0t1d0s4 3.9G /data2
    Disk 2: 33 GB:
    /dev/dsk/c0t2d0s0 33G /data3
    The data file systems are not in use right now, and I was thinking of
    partitioning data3 into 2 or 3 file systems and then creating
    a new BE.
    However, the system already has a BE (named s10), and that BE lists
    all of the filesystems, including the data ones.
    # lufslist -n 's10'
    boot environment name: s10
    This boot environment is currently active.
    This boot environment will be active on next system boot.
    Filesystem fstype device size Mounted on Mount Options
    /dev/dsk/c0t0d0s4 swap 4201703424 - -
    /dev/dsk/c0t0d0s0 ufs 2098059264 / -
    /dev/dsk/c0t1d0s0 ufs 8390375424 /usr -
    /dev/dsk/c0t0d0s3 ufs 8390375424 /var -
    /dev/dsk/c0t1d0s3 ufs 3505453056 /data1 -
    /dev/dsk/c0t1d0s1 ufs 1997531136 /opt -
    /dev/dsk/c0t1d0s4 ufs 4294785024 /data2 -
    /dev/dsk/c0t2d0s0 ufs 36507484160 /data3 -
    /dev/dsk/c0t0d0s5 ufs 2727290880 /tmp -
    /dev/dsk/c0t0d0s1 ufs 770715648 /usr/openwin -
    I browsed the Solaris 10 Installation Guide and the man pages
    for the lu commands, but cannot find how to remove the data
    file systems from the BE.
    How do I do a live upgrade on this system?
    Thanks for your help.

    Thanks for the tips.
    I commented out the entries in /etc/vfstab, and also had to remove the files /etc/lutab and /etc/lu/ICF.1,
    and then could create the Boot Environment from scratch.
    I was also able to create another boot environment and copy into it,
    but now I'm facing a different problem: an error when trying to upgrade.
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy
    Name                       Complete Now    On Reboot Delete Status
    s10                        yes      yes    yes       no     -
    s10u6                      yes      no     no        yes    -
    Now, I have the Solaris 10 Update 6 DVD image on another machine
    which shares out the directory. I mounted it on this machine,
    did a lofiadm and mounted that at /cdrom.
    # ls -CF /cdrom /cdrom/boot /cdrom/platform
    /cdrom:
    Copyright                     boot/
    JDS-THIRDPARTYLICENSEREADME   installer*
    License/                      platform/
    Solaris_10/
    /cdrom/boot:
    hsfs.bootblock   sparc.miniroot
    /cdrom/platform:
    sun4u/   sun4us/  sun4v/
    Now I did luupgrade and I get this error:
    # luupgrade -u -n s10u6 -s /cdrom    
    ERROR: The media miniroot archive does not exist </cdrom/boot/x86.miniroot>.
    ERROR: Cannot unmount miniroot at </cdrom/Solaris_10/Tools/Boot>.
    I find it strange that this SPARC machine is complaining about x86.miniroot.
    BTW, the machine that the DVD image is on happens to be x86 running Sol 10.
    I thought that wouldn't matter, as it is just NFS sharing a directory which holds a DVD image.
    What am I doing wrong?
    Thanks.
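    One hedged guess: luupgrade decides which miniroot to look for based on the installed Live Upgrade packages, so if SUNWluu/SUNWlur were ever pulled from an x86 image, a SPARC machine will go looking for x86.miniroot. Verifying the architecture of both the media and the LU packages costs little:
    uname -p ;# sparc on this machine
    ls /cdrom/boot ;# sparc.miniroot is present, as shown above
    pkginfo -l SUNWluu | grep ARCH ;# should report ARCH: sparc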

  • Solaris 10 5/08 live upgrade only for customers with a service plan?

    Live upgrade fails due to a missing /usr/bin/7za,
    which seems to be installed by adding patch 137322-01 on x86, according to the release notes: http://docs.sun.com/app/docs/doc/820-4078/installbugs-114?l=en&a=view
    But this patch (and this may also be the case for the SPARC patch) is only available to customers with a valid service plan.
    Does this mean that from now on it is required to purchase a service plan if you want to run Solaris 10 and use the normal procedures for system upgrades?
    A bit disappointing ...
    Regards
    /Flemming

  • DiskSuite and Live Upgrade 2.0

    I have two Solaris 7 boxes running DiskSuite to mirror the O/S disk onto another drive.
    I need to upgrade to Solaris 8. In the past I have used Live Upgrade to do so, when I had enough free disk space to partition an existing disk or to use an unused disk for the Solaris 8 system files.
    In this case, I do not have sufficient free space on the boot disk. So, what is the best approach? It seems that I would have to:
    1. unmirror the file system
    2. install Solaris 8 onto the old mirror drive using LU 2.0
    3. make the old mirror drive the boot drive
    4. re-establish mirroring, being sure that it goes the right way from the Solaris 8 disk to the old boot disk
    Comments, suggestions?
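    For step 1, a minimal SDS/DiskSuite sketch (the metadevice names d10 and d12 are placeholders; take the real ones from metastat):
    metastat -p ;# list the mirrors and their submirrors
    metadetach d10 d12 ;# detach the submirror on the second disk from mirror d10
    metaclear d12 ;# delete the detached submirror so LU can reuse the slice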

    I recently built a system (specs below) and installed this card (MSI GF4 Ti4200 VTD8X MS8894, 128MB DDR), and when I try to use Live Update 2 (version 3.33.000, from the CD that came with the card), I get the same message:
    "Warning!!! Your Display Card does not support MSI Live Update 2 function.  Note: MSI Live Update 2 supports the Display Cards of MSI only."
    I'm using the drivers/BIOS that came on the CD: Driver version 6.13.10.4107, BIOS version 4.28.20.05.11.  I see on the nVidia site that they have the 4109 drivers out now; should I try those?
    I have also made sure to do the suggested modifications to IE (and I don't have PC-cillin installed):
    "Note: In order to operate this application properly, please note the following suggests.
    -Set the IE security setting 'Download signed ActiveX controls' to [Enable] or [Prompt]. (System default is [Prompt]).
    -Disable 'WebTrap' of PC-cillin(R) or any web based anti-virus application when executing MSI(TM) Live Update 2(TM).
    -Update Microsoft® Windows® Installer"
    I downloaded a newer version of LiveUpdate (3.35.000), and installed it (after completely uninstalling the old version), and got the same results.  Nothing on my system is currently overclocked.
    Help!
    System specs:
    -Soyo SY-KT400 DRAGON Ultra (Platinum Edition) with latest BIOS & Chipset Drivers
    -AMD Athlon XP Thoroughbred 2100+
    -MSI GF4 Ti4200 VTD8X (MS-8894)
    -WD Caviar Special Edition 80 GB HDD, 8 MB Cache
    -512 MB Crucial PC2700 DDR (one stick, in DIMM #1)
    -TDK 40/12/48 CD R/RW
    -Daewoo 905DF Dynaflat 19" Monitor
    -Windows XP Home Edition, SP1/all other updates current
    -On-Board CMedia 6-channel audio
    -On-Board VIA 10/100 Ethernet
    -Altec-Lansing ATP3 Speakers

  • Live Upgrade 2.0 (from Sol8 K23 to Sol9 K05)

    Installed LU 2.0 from the Solaris 9 CDs.
    Created a new BE (= a copy of my Sol8 K23).
    Started the upgrade of my inactive BE to Sol9:
    insert Solaris 9 CD 1of2
    luupgrade -u -n <INACT_BE> -s /cdrom/sol_9_403_sparc/s0
    --> runs fine
    eject cdrom
    insert Solaris 9 CD 2of2
    luupgrade -i -n <INACT_BE> -s /cdrom/sol_9_405_sparc_2 -O '-nodisplay'
    After a few questions, the upgrade starts;
    it first upgrades Live Upgrade OK,
    then it starts upgrading Solaris,
    and then it fails.
    I checked the logs on the <INACT_BE> and found in
    /var/sadm/install/logs/Solaris_9_packages...
    that it failed on installing SUNWnsm (Netscape 7) because it was already installed!!
    It is right that I had SUNWnsm on my Solaris 8 system!!
    Why is this causing LU to fail?
    It should just skip that package and go on to the next.
    For the sake of it, I uninstalled Netscape 7 from my <INACT_BE> using pkgrm -R.
    I then restarted the LU using CD 2of2; now it goes further, but it fails on package SUNWjhrt (Java), which also existed!!
    Am I missing something, or is LU just unusable?? Thanks

    Fred,
    I personally have never read that caveat. What is recommended is to always run the same version on components that use the same firmware bundle. In other words, for a B series upgrade you need the Infrastructure bundle (which includes firmware for the Fabric Interconnects, IOMs, and UCSM) and also the Server bundle (which includes the firmware for the CIMC, BIOS, and adapter).
    Bottom line, the recommendation is to run exactly the same version for components that use firmware from the same bundle, BUT UCSM 2.1 introduces an enhancement, "Mixed version support (for infra and server bundles firmware)", which allows the combination of SOME infrastructure bundles with some server bundles.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/release/notes/UCS_28313.html#wp58530 << Look for
    "Operational enhancements"
    These are the possible configurations I am aware of:
    2.1(1f) infrastructure and 2.0(5a)+ server firmware
    2.1(2a) infrastructure and 2.1(1f)+ server firmware
    I hope that helps.
    Rate ALL helpful answers.
    -Kenny

  • Live upgrade - Solaris 8/07 (U4), with non-global zones and SC 3.2

    Dears,
    I need to use live upgrade for SC 3.2 with non-global zones, from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents but couldn't find one complete document that covers the whole process.
    I know that upgrading Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Sparse zones live upgrade

    Hi
    I have a problem with live upgrade from Solaris 10 9/10 to 8/11 on a sparse zone.
    The installation in the global zone is good, but the sparse zone cannot boot because the zonepath changed.
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - pbspfox1 installed /zoneprod/pbspfox1-s10u10-baseline native excl
    the initial path is /zoneprod/pbspfox1
    #zfs list
    zoneprod/pbspfox1@s10u10-baseline 22.4M - 2.18G -
    zoneprod/pbspfox1-s10u10-baseline
    # luactivate zoneprod/pbspfox1@s10u10-baseline
    ERROR: The boot environment Name <zoneprod/pbspfox1@s10u10-baseline> is too long - a BE name may contain no more than 30 characters.
    ERROR: Unable to activate boot environment <zoneprod/pbspfox1@s10u10-baseline>.
    how to upgrade pbspfox1?
    Please help
    Walter

    I'm not exactly sure what happened here, but the zone name doesn't change. If the zone path is wrong, I'd try using zonecfg to change the zonepath to the proper value and then boot the zone normally.
    zonecfg -z pbspfox1
    set zonepath=/zone/pbspfox1 (or whatever is the proper path)
    verify
    commit
    exit
    zoneadm -z pbspfox1 boot
    If the zone didn't get properly updated, you might be able to update it by detaching the zone:
    zoneadm -z pbspfox1 detach
    and doing an update reattach
    zoneadm -z pbspfox1 attach -u
    Disclaimer: All of the above was done from memory without testing, I may have gotten some things wrong.
    Hopefully this will help. I've had similar issues in the past, but I'm not sure I've had exactly the same problem, so I can't tell for sure whether this will help you or not. It is what I'd try. Of course, try to avoid getting yourself into a position where you can't back something out if necessary. This kind of thing can be messy and may require more than one try. If I remember correctly, there were some issues with the live upgrade software as shipped with Solaris 10 8/11. I'd get it patched up to current levels ASAP to avoid additional issues.
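    One more detail from the output above: luactivate was given a ZFS snapshot name, but it expects a boot environment name. Something along these lines (the BE name is hypothetical; take the real one from lustatus):
    lustatus ;# list boot environment names
    luactivate s10u10-baseline ;# activate by BE name, not by ZFS dataset/snapshot
    init 6 ;# after luactivate, use init or shutdown, not reboot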

  • Live Upgrade from 8 to 10 keeping Oracle install

    Hi everyone,
    I'm trying to figure out how to do a Live Upgrade from Solaris 8 (SPARC) to 10 and still keep my Oracle available after the reboot.
    Since all of the Oracle datafiles are on /un or /opt/oracle, I figure I can just create a new BE for /, /etc and /var. From there I'd just need to edit /etc/vfstab and /etc/init.d/ (for db startups), copy over /var/opt/oracle/, and mount /opt/oracle.
    Does that sound right? Has anyone done this?
    On a side note, I'm still trying to figure out Live Upgrade. My system is configured with RAID 1 under Solstice (metatool). I'm concerned about being able to access the MetaDB once the BE switch goes through. Should I set up a new mirror for the new BE prior to running LU? Or should I configure the mirror for the new BE once the switchover has gone through?

    Hello dfkoch,
    To upgrade from ColdFusion 8/ColdFusion 9 to ColdFusion 10, please download the setup for ColdFusion 10 from http://www.adobe.com/support/coldfusion/downloads.html#cf10productdownloads and install it with the serial number.
    The upgrade price can be checked at www.adobe.com or alternatively you can call our sales team.
    Hope this helps.
    Regards,
    Anit Kumar

  • Solaris 10 update 9 - live upgrade issues with ZFS

    Hi
    After doing a live upgrade from Solaris 10 update 8 to Solaris 10 update 9 the alternate boot environment I created is no longer bootable.
    I have completed all the pre-upgrade steps like:
    - Installing the latest version of live upgrade from the update 9 ISO.
    - Creating and testing the new boot environment.
    - Creating a sysidcfg file, used by the live upgrade, that has auto_reg=disable in it.
    There are also no errors while creating the boot environment or even when activating it.
    Here is the error I get:
    SunOS Release 5.10 Version Generic_14489-06 64-bit
    Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
    NOTICE: zfs_parse_bootfs: error 22
    Cannot mount root on altroot/37 fstype zfs
    panic[cpu0]/thread=fffffffffbc28040: vfs mountroot: cannot mount root
    ffffffffffbc4a8d0 genunix:main+107 ()
    Skipping system dump - no dump device configured
    Does anyone know how I can fix this?

    Found the culprit... 142910-17... breaks it.
    System has findroot enabled GRUB
    Updating GRUB menu default setting
    GRUB menu default setting is unaffected
    Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10x_u8wos_08a> as <mount-point>//boot/grub/menu.lst.prev.
    File </etc/lu/GRUB_backup_menu> propagation successful
    Successfully deleted entry from GRUB menu
    Validating the contents of the media </admin/x86/Patches/10_x86_Recommended/patches>.
    The media contains 204 software patches that can be added.
    Mounting the BE <s10x_u8wos_08a_Jan2011>.
    Adding patches to the BE <s10x_u8wos_08a_Jan2011>.
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Done!
    The following requested patches have packages not installed on the system
    Package SUNWio-tools from directory SUNWio-tools in patch 142910-17 is not installed on the system. Changes for package SUNWio-tools will not be applied to the system.
    Package SUNWzoneu from directory SUNWzoneu in patch 142910-17 is not installed on the system. Changes for package SUNWzoneu will not be applied to the system.
    Package SUNWpsm-ipp from directory SUNWpsm-ipp in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-ipp will not be applied to the system.
    Package SUNWsshdu from directory SUNWsshdu in patch 142910-17 is not installed on the system. Changes for package SUNWsshdu will not be applied to the system.
    Package SUNWsacom from directory SUNWsacom in patch 142910-17 is not installed on the system. Changes for package SUNWsacom will not be applied to the system.
    Package SUNWmdbr from directory SUNWmdbr in patch 142910-17 is not installed on the system. Changes for package SUNWmdbr will not be applied to the system.
    Package SUNWopenssl-commands from directory SUNWopenssl-commands in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-commands will not be applied to the system.
    Package SUNWsshdr from directory SUNWsshdr in patch 142910-17 is not installed on the system. Changes for package SUNWsshdr will not be applied to the system.
    Package SUNWsshcu from directory SUNWsshcu in patch 142910-17 is not installed on the system. Changes for package SUNWsshcu will not be applied to the system.
    Package SUNWsshu from directory SUNWsshu in patch 142910-17 is not installed on the system. Changes for package SUNWsshu will not be applied to the system.
    Package SUNWgrubS from directory SUNWgrubS in patch 142910-17 is not installed on the system. Changes for package SUNWgrubS will not be applied to the system.
    Package SUNWzoner from directory SUNWzoner in patch 142910-17 is not installed on the system. Changes for package SUNWzoner will not be applied to the system.
    Package SUNWmdb from directory SUNWmdb in patch 142910-17 is not installed on the system. Changes for package SUNWmdb will not be applied to the system.
    Package SUNWpool from directory SUNWpool in patch 142910-17 is not installed on the system. Changes for package SUNWpool will not be applied to the system.
    Package SUNWudfr from directory SUNWudfr in patch 142910-17 is not installed on the system. Changes for package SUNWudfr will not be applied to the system.
    Package SUNWxcu4 from directory SUNWxcu4 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu4 will not be applied to the system.
    Package SUNWarc from directory SUNWarc in patch 142910-17 is not installed on the system. Changes for package SUNWarc will not be applied to the system.
    Package SUNWtftp from directory SUNWtftp in patch 142910-17 is not installed on the system. Changes for package SUNWtftp will not be applied to the system.
    Package SUNWaccu from directory SUNWaccu in patch 142910-17 is not installed on the system. Changes for package SUNWaccu will not be applied to the system.
    Package SUNWppm from directory SUNWppm in patch 142910-17 is not installed on the system. Changes for package SUNWppm will not be applied to the system.
    Package SUNWtoo from directory SUNWtoo in patch 142910-17 is not installed on the system. Changes for package SUNWtoo will not be applied to the system.
    Package SUNWcpc from directory SUNWcpc.i in patch 142910-17 is not installed on the system. Changes for package SUNWcpc will not be applied to the system.
    Package SUNWftdur from directory SUNWftdur in patch 142910-17 is not installed on the system. Changes for package SUNWftdur will not be applied to the system.
    Package SUNWypr from directory SUNWypr in patch 142910-17 is not installed on the system. Changes for package SUNWypr will not be applied to the system.
    Package SUNWlxr from directory SUNWlxr in patch 142910-17 is not installed on the system. Changes for package SUNWlxr will not be applied to the system.
    Package SUNWdcar from directory SUNWdcar in patch 142910-17 is not installed on the system. Changes for package SUNWdcar will not be applied to the system.
    Package SUNWnfssu from directory SUNWnfssu in patch 142910-17 is not installed on the system. Changes for package SUNWnfssu will not be applied to the system.
    Package SUNWpcmem from directory SUNWpcmem in patch 142910-17 is not installed on the system. Changes for package SUNWpcmem will not be applied to the system.
    Package SUNWlxu from directory SUNWlxu in patch 142910-17 is not installed on the system. Changes for package SUNWlxu will not be applied to the system.
    Package SUNWxcu6 from directory SUNWxcu6 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu6 will not be applied to the system.
    Package SUNWpcmci from directory SUNWpcmci in patch 142910-17 is not installed on the system. Changes for package SUNWpcmci will not be applied to the system.
    Package SUNWarcr from directory SUNWarcr in patch 142910-17 is not installed on the system. Changes for package SUNWarcr will not be applied to the system.
    Package SUNWscpu from directory SUNWscpu in patch 142910-17 is not installed on the system. Changes for package SUNWscpu will not be applied to the system.
    Package SUNWcpcu from directory SUNWcpcu in patch 142910-17 is not installed on the system. Changes for package SUNWcpcu will not be applied to the system.
    Package SUNWopenssl-include from directory SUNWopenssl-include in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-include will not be applied to the system.
    Package SUNWdtrp from directory SUNWdtrp in patch 142910-17 is not installed on the system. Changes for package SUNWdtrp will not be applied to the system.
    Package SUNWhermon from directory SUNWhermon in patch 142910-17 is not installed on the system. Changes for package SUNWhermon will not be applied to the system.
    Package SUNWpsm-lpd from directory SUNWpsm-lpd in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-lpd will not be applied to the system.
    Package SUNWdtrc from directory SUNWdtrc in patch 142910-17 is not installed on the system. Changes for package SUNWdtrc will not be applied to the system.
    Package SUNWhea from directory SUNWhea in patch 142910-17 is not installed on the system. Changes for package SUNWhea will not be applied to the system.
    Package SUNW1394 from directory SUNW1394 in patch 142910-17 is not installed on the system. Changes for package SUNW1394 will not be applied to the system.
    Package SUNWrds from directory SUNWrds in patch 142910-17 is not installed on the system. Changes for package SUNWrds will not be applied to the system.
    Package SUNWnfsskr from directory SUNWnfsskr in patch 142910-17 is not installed on the system. Changes for package SUNWnfsskr will not be applied to the system.
    Package SUNWudf from directory SUNWudf in patch 142910-17 is not installed on the system. Changes for package SUNWudf will not be applied to the system.
    Package SUNWixgb from directory SUNWixgb in patch 142910-17 is not installed on the system. Changes for package SUNWixgb will not be applied to the system.
    Checking patches that you specified for installation.
    Done!
    Approved patches will be installed in this order:
    142910-17
    Checking installed patches...
    Executing prepatch script...
    Installing patch packages...
    Patch 142910-17 has been successfully installed.
    See /a/var/sadm/patch/142910-17/log for details
    Executing postpatch script...
    Creating GRUB menu in /a
    Installing grub on /dev/rdsk/c2t0d0s0
    stage1 written to partition 0 sector 0 (abs 16065)
    stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
    Patch packages installed:
    BRCMbnx
    SUNWaac
    SUNWahci
    SUNWamd8111s
    SUNWcakr
    SUNWckr
    SUNWcry
    SUNWcryr
    SUNWcsd
    SUNWcsl
    SUNWcslr
    SUNWcsr
    SUNWcsu
    SUNWesu
    SUNWfmd
    SUNWfmdr
    SUNWgrub
    SUNWhxge
    SUNWib
    SUNWigb
    SUNWintgige
    SUNWipoib
    SUNWixgbe
    SUNWmdr
    SUNWmegasas
    SUNWmptsas
    SUNWmrsas
    SUNWmv88sx
    SUNWnfsckr
    SUNWnfscr
    SUNWnfscu
    SUNWnge
    SUNWnisu
    SUNWntxn
    SUNWnv-sata
    SUNWnxge
    SUNWopenssl-libraries
    SUNWos86r
    SUNWpapi
    SUNWpcu
    SUNWpiclu
    SUNWpsdcr
    SUNWpsdir
    SUNWpsu
    SUNWrge
    SUNWrpcib
    SUNWrsgk
    SUNWses
    SUNWsmapi
    SUNWsndmr
    SUNWsndmu
    SUNWtavor
    SUNWudapltu
    SUNWusb
    SUNWxge
    SUNWxvmpv
    SUNWzfskr
    SUNWzfsr
    SUNWzfsu
    PBE GRUB has no capability information.
    PBE GRUB has no versioning information.
    ABE GRUB is newer than PBE GRUB. Updating GRUB.
    GRUB update was successfull.
    Unmounting the BE <s10x_u8wos_08a_Jan2011>.
    The patch add to the BE <s10x_u8wos_08a_Jan2011> completed.
    Still need to know how to resolve it though...
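    A hedged starting point for debugging the zfs_parse_bootfs panic after booting back into the old BE: check that the pool's bootfs property and the GRUB menu entry both point at the new BE's root dataset (the pool name rpool is an assumption here):
    zpool get bootfs rpool ;# should name the new BE's root dataset
    zfs list -r rpool/ROOT ;# confirm that dataset exists
    grep bootfs /rpool/boot/grub/menu.lst ;# compare with the GRUB entry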

Maybe you are looking for

  • Photoshop cs4 access denied when trying to install from disc in Windows 7 64bit.

    Photoshop cs4 access denied when trying to install from disc in Windows 7 64bit. I tried it in safe mode and it starts to install but get an error there as well. What do I do?

  • HT201210 3194 error while trying to update iphone 3g to 4.2

    gota 3194 error while updating iphone 3g to 4.2

  • Internet Explorer in CVI

    Hi, we want to impletent a Internet-page with Internet Explorer in our CVI software. With the following code it is possible to do this:     sprintf(WebAdress, "http://%s", IP);     // Get Object Handle from ActiveX control and load initial page     G

  • Psql procedure to call 2 procedures(one by one)

    Hi All, I am very new to plsql, great if any1 could help me in getting to the right track my problem is , I need to write a stored procedure which is going to call another stored procedure(lets call this first stored proc as procedure_A) and I have t

  • Problem with windows 7 Upgrade website

    Hi, I recently purchased a Thinkpad t400 with vista installed in it. I am eligible for free upgrade to windows 7. But when I try to click on "Request Upgrade" button in "https://ebiz3.mentormediacorp.com/LenovoWindows7Upgrade/Select_Lan.aspx", my web