Solaris 10 live upgrade failed

I am trying to upgrade Solaris 10 U6 (10/08) to U7 (05/09) with Live Upgrade, but it failed with the following errors:
# luupgrade -u -n solaris_u7 -s /mnt/cdrom
No entry for BE <solaris_u7> in GRUB menu
Copying failsafe kernel from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is <ufs>
Mounting miniroot at </mnt/cdrom/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt/cdrom>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <solaris_u7>.
Checking for GRUB menu on ABE <solaris_u7>.
Saving GRUB menu on ABE <solaris_u7>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <solaris_u7>.
Performing the operating system upgrade of the BE <solaris_u7>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
ERROR: Installation of the packages from this media of the media failed; pfinstall returned these diagnostics:
Processing profile
Loading local environment and services
Generating upgrade actions
WARNING: SUNWwgetu depends on SUNWgcmn, which is not selected
WARNING: SUNWi15cs depends on SUNWi15rf, which is not selected
WARNING: SUNWi1cs depends on SUNWxwplt, which is not selected
WARNING: SUNWbind depends on SUNWbindr, which is not selected
WARNING: SUNWale depends on SUNWxwrtl, which is not selected
WARNING: SUNWale depends on SUNWxwplt, which is not selected
ERROR: No upgradeable file systems found at specified mount point.
Restoring GRUB menu on ABE <solaris_u7>.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Did it really fail due to package dependencies? Would installing the missing packages on the running BE fix the problem?

You need to run the Solaris_10/Tools/Installers/liveupgrade20 script from the CD or install server when upgrading to 10 u7. I ran into this myself. Run the script and accept all the defaults; Live Upgrade will then not produce the errors you saw.
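    The fix above can be sketched as a short dry-run script. The media path and BE name are taken from the post; removing SUNWlucfg, SUNWluu and SUNWlur before installing the target release's LU packages is the documented refresh step. RUN="echo" only prints the plan, so nothing is executed:

```shell
# Dry-run sketch of the LU package refresh (RUN="" would execute for
# real on a Solaris host; paths/BE name are from the original post).
RUN="echo"
{
  # Remove the Live Upgrade packages that came with the running release:
  $RUN pkgrm SUNWlucfg SUNWluu SUNWlur
  # Install the matching LU packages from the *target* (u7) media:
  $RUN /mnt/cdrom/Solaris_10/Tools/Installers/liveupgrade20 -noconsole -nodisplay
  # Retry the upgrade:
  $RUN luupgrade -u -n solaris_u7 -s /mnt/cdrom
} > /tmp/lu_refresh_plan.txt
cat /tmp/lu_refresh_plan.txt
```

    Always refresh the LU packages from the release you are upgrading TO, not the one you are running.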
Linuxbass :-)

Similar Messages

  • Solaris Live Upgrade

    I didn't find a specific place for this topic, so feel free to move it.
    I've read a lot about Solaris Live Upgrade, but I'd like to know whether I can copy a BE from one computer to another over the network via NFS.
    Or is there other software that can make an image of my system and deploy it to the other computers (they are all identical), so I won't need to configure each of them?
    Thanks for any help.

    Thanks for answering.
    I read about JumpStart installation, but is it possible to install a customized image that includes more than just the partitioning, packages and patches selected? I want to share a whole configured system; for example, the DNS, IP and user account settings.
    So far I have only found a way to share an installation image with the selected packages and patches, but the package configuration files still have to be set up later on each client.
    Is it possible?
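    One established way to deploy a whole configured system (not just packages and patches) is a Flash archive served over NFS and installed via JumpStart. A minimal sketch, assuming hypothetical host and path names, with the flarcreate step as a dry run:

```shell
# Sketch: capture a fully configured system as a Flash archive and
# point JumpStart clients at it. Host/path names are hypothetical;
# RUN="echo" prints the command instead of running it.
RUN="echo"
$RUN flarcreate -n golden-image -x /var/tmp /var/tmp/golden.flar \
    > /tmp/flar_plan.txt
# A JumpStart profile on each client would then reference the archive:
cat > /tmp/profile.demo <<'EOF'
install_type      flash_install
archive_location  nfs install-server:/export/flar/golden.flar
partitioning      explicit
filesys           c0t0d0s0 free /
EOF
cat /tmp/profile.demo
```

    Because the archive is a snapshot of the running system, it carries the DNS, IP and user settings; per-host items (hostname, IP) are then fixed up by the sysidcfg/JumpStart machinery.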

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except that the zones are UFS root, and there Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to remove the lofs from the zone configuration and try again, but if that works I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info                 # display current settings
    remove fs dir=/u05   # remove the filesystem linked to a "/global/" filesystem in the global zone
    verify               # check the change
    commit               # commit the change
    exit
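    The same fix can be applied non-interactively with a zonecfg command file (a sketch; the zone and filesystem names are the ones from the post above):

```shell
# The interactive session above as a zonecfg command file. On Solaris
# it would be applied with:  zonecfg -z edd313 -f /tmp/fix-edd313.cmds
cat > /tmp/fix-edd313.cmds <<'EOF'
remove fs dir=/u05
verify
commit
EOF
cat /tmp/fix-edd313.cmds
```

    This is handy when the same lofs removal has to be repeated on several zones before an lucreate run.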

  • Solaris Live Upgrade with NGZ

    Hi
    I am trying to perform a Live Upgrade on my two servers. Both of them have NGZs installed, and those NGZs are on a different zpool (not the rpool) on an external disk.
    I have installed all the latest patches required for LU to work properly, but when I run lucreate I start having problems... (new_s10BE is the new BE I'm creating)
    On my 1st server:
    I have a global zone and one NGZ named mddtri. This is the error I am getting:
    ERROR: unable to mount zone <mddtri> in </.alt.tmp.b-VBb.mnt>.
    zoneadm: zone 'mddtri': zone root /zoneroots/mddtri/root already in use by zone mddtri
    zoneadm: zone 'mddtri': call to zoneadm failed
    ERROR: unable to mount non-global zones of ABE: cannot make bootable
    ERROR: cannot unmount </.alt.tmp.b-VBb.mnt/var/run>
    ERROR: unable to make boot environment <new_s10BE> bootable
    On my 2nd server:
    I have a global zone and 10 NGZs. This is the error I am getting:
    WARNING: Directory </zoneroots/zone1> zone <global> lies on a filesystem shared netween BEs, remapping path to </zoneroots/zone1/zone1-new_s10BE>
    WARNING: Device <zone1> is shared between BEs, remmapping to <zone1-new_s10BE>
    (This happens for all running NGZs.)
    Duplicating ZFS datasets from PBE to ABE.
    ERROR: The dataset <zone1-new_s10BE> is on top of ZFS pool. Unable to clone. Please migrate the zone  to dedicated dataset.
    ERROR: Unable to create a duplicate of <zone1> dataset in PBE. <zone1-new_s10BE> dataset in ABE already exists.
    Reverting state of zones in PBE <old_s10BE>
    ERROR: Unable to copy file system from boot environment <old_s10BE> to BE <new_s10BE>
    ERROR: Unable to populate file systems from boot environment <new_s10BE>
    Help, I need to sort this out a.s.a.p!

    Hi,
    I have the same problem with an attached A5200 with mirrored disks (Solaris 9, Volume Manager). The "critical" partitions should be copied to a second system disk, while the mirrored partitions should be shared.
    Here is a script with lucreate.
    #!/bin/sh
    Logdir=/usr/local/LUscripts/logs
    if [ ! -d ${Logdir} ]
    then
        echo "${Logdir} does not exist"
        exit 1
    fi
    /usr/sbin/lucreate \
    -l ${Logdir}/$0.log \
    -o ${Logdir}/$0.error \
    -m /:/dev/dsk/c2t0d0s0:ufs \
    -m /var:/dev/dsk/c2t0d0s3:ufs \
    -m /opt:/dev/dsk/c2t0d0s4:ufs \
    -m -:/dev/dsk/c2t0d0s1:swap \
    -n disk0
    And here is the output
    root@ahbgbld800x:/usr/local/LUscripts > ./lucreate_disk0.sh
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    INFORMATION: Unable to determine size or capacity of slice </dev/md/RAID-INT/dsk/d0>.
    ERROR: An error occurred during creation of configuration file.
    ERROR: Cannot create the internal configuration file for the current boot environment <disk3>.
    Assertion failed: *ptrKey == (unsigned long long)_lu_malloc, file lu_mem.c, line 362
    Abort - core dumped

  • Solaris SPARC upgrade failing

    I am trying to apply a Solaris 10 upgrade release to my E450 server, but get the following error during the install process:
    "Could not reinitialize system state, please exit installer and try again"
    Retrying the upgrade gives the same error.
    The current release on the server is 10 1/06 and I am trying to move this on to 10 11/06.
    Anyone got any idea what it is complaining about? The upgrade process worked OK on my V480/V490 servers.

    Obviously Sun (excuse me, Oracle) does not intend to support the solaris-10/sparc/firefox-3.6 combination. Firefox 3.6 requires the NPAPI and/or NPRuntime interfaces to java, but oracle does not provide the new "libnpjp2.so" java plugin with the jre-6u18-sparc-solaris java download.
    A search at www.oracle.com does not turn up any support announcement about this. You have to learn it the hard way. :(
    If you want to continue to use sparc-solaris, downgrade to firefox-3.5 so you can use the old libjavaplugin_oji.so module. Otherwise run Linux or Mac OS X to stay in the unix camp.

  • Solaris 10 upgrade fails with "mount failed"

    I'm trying to upgrade an aged Ultra 80 to Solaris 10u9 (from Solaris 10u2 or so), so that it can be used for our product testing purposes. After several unsuccessful attempts, I finally freed up enough space on the / filesystem (5.2GB free) by moving /opt to a newly installed second drive.
    However, the upgrade now fails with
    "Mount failed for either root, swap, or other file system.
    Pfinstall failed. Exit stat= java.lang.UNIXProcess@e2892b 2"
    At the end of the log file it says
    "Checking c0t0d0s0 for an upgradeable Solaris image.
    ERROR: Mount failed for either root, swap, or other file system."
    I'm stumped. It doesn't say which file system it's having problems with.
    Any ideas as to how to troubleshoot this problem, or what the cause might be? At 30 minutes per upgrade attempt, I'd like to know what the cause is rather than pursuing trial and error.
    Thanks.

    Look at your vfstab and see if there are any mounts that need a specific driver, like a SAN-provided LUN. Comment them out during the upgrade: the upgrade tries to mount everything in your /etc/vfstab, and if it doesn't have the driver/software to do so, it will fail...
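    The vfstab edit suggested above can be scripted. A minimal sketch that comments out entries under a hypothetical /san mount-point prefix, run against a demo copy rather than the real /etc/vfstab:

```shell
# Sketch: comment out vfstab entries whose mount point lives on SAN
# storage before the upgrade. The /san prefix is a made-up example,
# and this operates on a demo copy -- always back up the real file.
cat > /tmp/vfstab.demo <<'EOF'
/dev/dsk/c0t0d0s0  /dev/rdsk/c0t0d0s0  /          ufs  1  no   -
/dev/dsk/c5t60d0s2 /dev/rdsk/c5t60d0s2 /san/lun1  ufs  2  yes  -
EOF
# Field 3 is the mount point; prefix matching lines with "#":
awk '$3 ~ /^\/san\// { print "#" $0; next } { print }' \
    /tmp/vfstab.demo > /tmp/vfstab.upgrade
cat /tmp/vfstab.upgrade
```

    After the upgrade completes, restore the original entries from the backup.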

  • First Solaris 11 upgrade fails

    Trying to update Solaris 11, it says "unable to clone the current boot environment",
    manual creation gives this:
    sudo beadm create test
    updatevfstab: failed to open vfstab (/tmp/.be.5Saiwr/etc/vfstab): No such file or directory
    be_copy: failed to update new BE's vfstab (test)
    be_copy: destroying partially created boot environment
    Unable to create test.
    Unable to find message for error code: 1
    cat /etc/vfstab
    #device                 device          mount             FS      fsck    mount   mount
    #to mount               to fsck         point             type    pass    at boot options
    /devices                -               /devices          devfs   -       no      -
    /proc                   -               /proc             proc    -       no      -
    ctfs                    -               /system/contract  ctfs    -       no      -
    objfs                   -               /system/object    objfs   -       no      -
    sharefs                 -               /etc/dfs/sharetab sharefs -       no      -
    fd                      -               /dev/fd           fd      -       no      -
    swap                    -               /tmp              tmpfs   -       yes     -
    rpool/ROOT/release11-1  -               /                 zfs     -       no      -
    /dev/zvol/dsk/ssd/swap  -               -                 swap    -       no      -

    Originally it was OpenSolaris, then upgraded to Solaris 11 Express and then to Solaris 11.
    beadm list -ds
    Error getting boot configuration from pool rpool: Error while processing the /rpool/boot/grub/menu.lst file ([Errno 13] Permission denied: '/rpool/boot/grub/menu.lst')
    BE/Dataset/Snapshot Active Mountpoint Space Policy Created
    dev-latest-2
    rpool/ROOT/dev-latest-2 - - 5.20G static 2010-07-27 15:52
    orsol
    rpool/ROOT/orsol - - 5.72M static 2011-01-13 14:46
    release11
    rpool/ROOT/release11 - - 3.65M static 2011-11-17 18:32
    release11-1
    rpool/ROOT/release11-1 NR / 33.53G static 2011-11-17 18:51
    rpool/ROOT/release11-1@2010-09-07-09:32:33 - - 2.22G static 2010-09-07 13:32
    rpool/ROOT/release11-1@2011-08-30-11:54:22 - - 1.59G static 2011-08-30 15:54
    rpool/ROOT/release11-1@2011-09-26-14:15:49 - - 45.35M static 2011-09-26 18:15
    rpool/ROOT/release11-1@2011-10-17-08:30:51 - - 221.18M static 2011-10-17 12:30
    rpool/ROOT/release11-1@2011-11-07-10:25:09 - - 287.96M static 2011-11-07 14:25
    rpool/ROOT/release11-1@2011-11-17-14:32:46 - - 18.38M static 2011-11-17 18:32
    rpool/ROOT/release11-1@2011-11-17-14:51:04 - - 11.81M static 2011-11-17 18:51
    rpool/ROOT/release11-1@2011-12-02-09:25:26 - - 410.0K static 2011-12-02 13:25
    rpool/ROOT/release11-1@2011-12-02-09:37:22 - - 326.5K static 2011-12-02 13:37
    rpool/ROOT/release11-1@2011-12-02-09:40:54 - - 276.0K static 2011-12-02 13:40
    rpool/ROOT/release11-1@2011-12-02-09:43:06 - - 229.5K static 2011-12-02 13:43
    rpool/ROOT/release11-1@2011-12-02-09:45:16 - - 290.0K static 2011-12-02 13:45
    rpool/ROOT/release11-1@2011-12-02-10:03:45 - - 35.0K static 2011-12-02 14:03
    rpool/ROOT/release11-1@2011-12-02-10:04:08 - - 35.5K static 2011-12-02 14:04
    rpool/ROOT/release11-1@2011-12-02-10:05:40 - - 301.5K static 2011-12-02 14:05
    rpool/ROOT/release11-1@2011-12-05-08:46:41 - - 273.0K static 2011-12-05 12:46
    rpool/ROOT/release11-1@2011-12-05-08:50:44 - - 279.0K static 2011-12-05 12:50
    rpool/ROOT/release11-1@2011-12-05-08:54:36 - - 260.5K static 2011-12-05 12:54
    rpool/ROOT/release11-1@2011-12-05-08:57:39 - - 118.5K static 2011-12-05 12:57
    rpool/ROOT/release11-1@dev - - 696.84M static 2010-06-24 12:50
    rpool/ROOT/release11-1@install - - 2.19G static 2008-09-08 15:45
    rpool/ROOT/release11-1@snapshot1 - - 6.07M static 2011-09-09 14:03
    rpool/ROOT/release11-1@snapshot2 - - 3.37M static 2011-09-09 15:47
    support
    rpool/ROOT/support - - 3.28M static 2011-08-30 15:54
    support-11
    rpool/ROOT/support-11 - - 3.69M static 2011-09-26 18:15
    support-12
    rpool/ROOT/support-12 - - 5.73M static 2011-10-17 12:30
    support-13
    rpool/ROOT/support-13 - - 20.76M static 2011-11-07 14:25

  • Solaris 10 5/08 live upgrade only for customers with serviceplan ?

    Live Upgrade fails due to a missing /usr/bin/7za,
    which seems to be installed by adding patch 137322-01 on x86, according to the release notes: http://docs.sun.com/app/docs/doc/820-4078/installbugs-114?l=en&a=view
    But this patch (the same may be true of the SPARC patch) is only available to customers with a valid service plan.
    Does this mean that from now on it's required to purchase a service plan if you run Solaris 10 and use the normal procedures for system upgrades?
    A bit disappointing ...
    Regards
    /Flemming
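    A quick preflight check for the missing binary can be scripted (a sketch; it only reports whether /usr/bin/7za exists, which is what Live Upgrade to 10 5/08 or later needs on the running BE):

```shell
# Preflight sketch: report whether the 7za binary Live Upgrade needs
# is present before attempting the upgrade.
check_7za() {
    if [ -x /usr/bin/7za ]; then
        echo "7za: present"
    else
        echo "7za: missing - apply the LU/7za patch for your platform first"
    fi
}
check_7za > /tmp/7za_check.txt
cat /tmp/7za_check.txt
```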


  • DB upgrade during a Solaris upgrade using live upgrade feature

    Hello,
    Here is the scenario. We are currently upgrading our OS from Solaris 9 to Solaris 10, and there is also a major database upgrade involved. To help mitigate downtime, the SAs are using the Solaris Live Upgrade feature. The DB to be upgraded is currently at 9i and the proposed end state is 10.2.0.3.
    Does anyone know if it is possible to do at least part of the database upgrade in the alternate boot environment created during the live upgrade process? So let's say I am able to get the database partly upgraded to 10.2.0.1. Then I want to run the patch set for 10.2.0.3 in the alternate boot environment and upgrade the instance there too, so that when the alternate boot environment is booted and becomes the primary boot environment I can, in effect, just start up the database.
    That's sort of a high level simplified version of what I think may be possible.
    Does anyone have any recommendations? We don't have any other high availability solutions currently in place so options such as using data guard do not apply.
    Thanks in advance!

    Hi magNorth,
    I'm not a Solaris expert, but I'd always recommend not doing both steps (OS upgrade and database upgrade) at the same time. I've seen too many live scenarios where one or the other has caused issues - and it's fairly tough for all three parties (you as the customer, Sun Support, Oracle Support) to find out in the end what was/is causing them.
    So my recommendation would be:
    1) Do your Solaris upgrade from 9 to 10 and check for the most recent Solaris patches - once this has been finished and run for one or two weeks successfully in production then go to step 2
    2) Do your database upgrade - I would suggest not going to 10.2.0.3 but directly to 10.2.0.4, because 10.2.0.4 has ~2000 bug fixes plus the security fixes from April 2008 - but that's your decision. If 10.2.0.3 is a must, then at least check Note:401435.1 for known issues and alerts in 10.2.0.3, and in both cases the Upgrade Companion: Note:466181.1
    Kind regards
    Mike

  • After Solaris live upgrade disks unavailable

    Hi All,
    we have two Sun Fire 6800 cluster nodes. OS and kernel version: Solaris 9 9/05 s9s_u8wos_05 SPARC. After a Solaris Live Upgrade the new root disks are continuously failing. During boot with the new boot environment Solaris_9_905 the system waits and then the boot disks become unavailable; the cluster crashes and starts with the old boot environment. If the failed devices are removed and then reinserted, the devices become available again.
    Could anybody help us? Any idea?
    regards
    Josef

    Hi Tim,
    We have Sun Cluster 3.1. Could this be a known issue? I'll look for a description of LU.
    However, it seems like a simple hardware failure; it just lost the disks.
    Messages:
    root@mcl01:~#metastat -p
    d60 -m d61 d62 d63 1
    d61 1 1 c1t1d0s6
    d62 1 1 c1t5d0s6
    d63 1 1 c1t4d0s6
    d50 -m d51 d52 d53 1
    d51 1 1 c1t1d0s5
    d52 1 1 c1t5d0s5
    d53 1 1 c1t4d0s5
    d30 -m d31 d32 d33 1
    d31 1 1 c1t1d0s3
    d32 1 1 c1t5d0s3
    d33 1 1 c1t4d0s3
    d0 -m d1 d2 d3 1
    d1 1 1 c1t1d0s0
    d2 1 1 c1t5d0s0
    d3 1 1 c1t4d0s0
    0. c1t0d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000870e3de0,0
    1. c1t2d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871421d4,0
    2. c1t3d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871432fc,0
    3. c1t5d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871421a1,0
    root@mcl02:~#metastat -p
    d60 -m d61 d62 d63 1
    d61 1 1 c0t1d0s6
    d62 1 1 c4t1d0s6
    d63 1 1 c4t2d0s6
    d50 -m d51 d52 d53 1
    d51 1 1 c0t1d0s5
    d52 1 1 c4t1d0s5
    d53 1 1 c4t2d0s5
    d30 -m d31 d32 d33 1
    d31 1 1 c0t1d0s3
    d32 1 1 c4t1d0s3
    d33 1 1 c4t2d0s3
    d0 -m d1 d2 d3 1
    d1 1 1 c0t1d0s0
    d2 1 1 c4t1d0s0
    d3 1 1 c4t2d0s0
    0. c0t0d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@0,0
    !!! 1. c0t1d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@1,0
    2. c0t2d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@2,0
    297. c4t0d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@0,0
    !!! 298. c4t1d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@1,0
    !!! 299. c4t2d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@2,0
    And
    "format" doesn't show some of these disks, or says something like "disk type undefined".
    thx
    J

  • Looking for information on best practices using Live Upgrade to patch LDOMs

    This is in Solaris 10. I am relatively new to this style of patching... I have a T5240 with 4 LDOMs: a control LDOM and three clients. I have some fundamental questions I'd like help with.
    Namely:
    #1. The client LDOMs have zones running in them. Do I need to init 0 the zones, or can I just zoneadm halt them regardless of state? I.e., if one is running a database, will halting the zone essentially snapshot it, or will it attempt to shut it down? Is this even a necessary step?
    #2. What is the recommended reboot order for the LDOMs? Do I need to init 0 the client LDOMs and then reboot the control LDOM, or can I leave the client LDOMs running, just reboot the control, and then reboot the clients after the control comes up?
    #3. Oracle is running in several of the zones on the client LDOMs; what considerations need to be made for this?
    I am sure other things will come up during the conversation but I have been looking for an hour on Oracle's site for this and the only thing I can find is old Sun Docs with broken links.
    Thanks for any help you can provide,
    pipelineadmin

    Before you use live upgrade, or any other patching technique for Solaris, please be sure to read http://docs.oracle.com/cd/E23823_01/html/E23801/index.html which includes information on upgrading systems with non-global zones. Also, go to support.oracle.com and read Oracle Solaris Live Upgrade Information Center [ID 1364140.1]. These really are MANDATORY READING.
    For the individual questions:
    #1. During the actual maintenance you don't have to do anything to the zone - just operate it as normal. That's the purpose of the "live" in "live upgrade": you're applying patches on a live, running system under normal operations. When you are finished with that process you can then reboot into the new "boot environment". This will become clearer after reading the above documents. Do as you normally would before taking a planned outage: shut the databases down using the database commands for a graceful shutdown. A zone halt will abruptly stop the zone and is not a good idea for a database. Alternatively, if you can take application outages, you could (smoothly) shut down the applications and then their domains, detach the zones (zoneadm detach) and then do a live upgrade. Some people like that because it makes things faster. After the live upgrade you would reboot and then zoneadm attach the zones again. The fact that the Solaris instance is running within a logical domain really is mostly beside the point with respect to this process.
    As you can see, there are a LOT of options and choices here, so it's important to read the doc. I ***strongly*** recommend you practice on a test domain so you can get used to the procedure. That's one of the benefits of virtualization: you can easily set up test environments so you can test out procedures. Do it! :-)
    #2 First, note that you can update the domains individually at separate times, just as if they were separate physical machines. So, you could update the guest domains one week (all at once or one at a time), reboot them into the new Solaris 10 software level, and then a few weeks later (or whenever) update the control domain.
    If you had set up your T5240 in a split-bus configuration with an alternate I/O domain providing virtual I/O for the guests, you would be able to upgrade the extra I/O domain and the control domain one at a time in a rolling upgrade - without ever having to reboot the guests. That's really powerful for providing continuous availability. Since you haven't done that, the answer is that at the point you reboot the control domain the guests will lose their I/O. They don't crash, and technically you could just let them continue until the control domain comes back up, at which time the I/O devices reappear. For an important application like a database I wouldn't recommend that. Instead: shut down the guests, then reboot the control domain, then bring the guest domains back up.
    #3. The fact that the Oracle database is running in zones inside those domains really isn't an issue. You should study the zones administration guide to understand the operational aspects of running with zones, and make sure that the patches are compatible with the version of Oracle.
    I STRONGLY recommend reading the documents mentioned at the top, and setting up a test domain to practice on. It shouldn't be hard for you to find the documentation. Go to www.oracle.com and hover your mouse over "Oracle Technology Network". You'll see a window with a menu of choices, one of which is "Documentation" - click on that. From there, click on System Software, and it takes you right to the links for Solaris 10 and 11.
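    The detach-upgrade-reattach flow described above can be sketched roughly as follows. This is a minimal outline, not a runbook; the BE name (s10u8_be), zone name (dbzone) and media path are hypothetical, and the database shutdown step depends on your application:

    ```shell
    # Gracefully shut down the database inside the zone first (using the
    # database's own shutdown commands), then halt and detach the zone
    zoneadm -z dbzone halt
    zoneadm -z dbzone detach

    # Create an alternate boot environment and upgrade it from the media
    lucreate -n s10u8_be
    luupgrade -u -n s10u8_be -s /cdrom/sol_10_1009_sparc

    # Activate the new BE and reboot into it
    luactivate s10u8_be
    init 6

    # After the reboot, reattach the zone; -u updates it to the new software level
    zoneadm -z dbzone attach -u
    zoneadm -z dbzone boot
    ```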

  • Volume as install disk for Guest Domain and Live Upgrade

    Hi Folks,
    I am new to LDOMs and have some questions - any pointers, examples would be much appreciated:
    (1) With support for volumes to be used as whole disks added in LDOM release 1.0.3, can we export a whole LUN under either VERITAS DMP or mpxio control to a guest domain and install Solaris on it? Any gotchas or special config required to do this?
    (2) Can Solaris Live Upgrade be used with guest LDOMs? Or is this ability limited to control domains?
    Thanks

    The answer to your #1 question is YES.
    Here's my mpxio-enabled device:
    non-STMS device name                STMS device name
    /dev/rdsk/c2t50060E8010029B33d16    /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    /dev/rdsk/c3t50060E8010029B37d16    /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    Create the virtual disk using slice 2:
    ldm add-vdsdev /dev/dsk/c4t4849544143484920373730313036373530303136d0s2 bootdisk@primary-vds01
    Add the virtual disk to the guest domain:
    ldm add-vdisk apps bootdisk@primary-vds01 ldom1
    The virtual disk will be imported as c0d0, which is the whole LUN itself.
    Bind and start ldom1, then install the OS (I used JumpStart); it partitioned the boot disk c0d0 as / 15GB, with swap taking the remaining space (10GB).
    When you run format's print command on this disk in both the guest and primary domain, you'll see the same slice/size information:
    Part      Tag    Flag     Cylinders        Size            Blocks
      0       root    wm     543 - 1362      15.01GB    (820/0/0)   31488000
      1       swap    wu       0 -  542       9.94GB    (543/0/0)   20851200
      2     backup    wm       0 - 1362      24.96GB    (1363/0/0)  52339200
      3 unassigned    wm       0               0        (0/0/0)            0
      4 unassigned    wm       0               0        (0/0/0)            0
      5 unassigned    wm       0               0        (0/0/0)            0
      6 unassigned    wm       0               0        (0/0/0)            0
      7 unassigned    wm       0               0        (0/0/0)            0
    I haven't used DMP, but HDLM (Hitachi Dynamic Link Manager) does not seem to be supported by LDOMs, as I cannot make it work :(
    I have no answer for your second question, unfortunately.

  • Applying individual patches to a Live Upgrade Environment

    Hi all
    Is it possible to apply individual patches to a Live Upgrade Environment? More specifically, is it possible to apply a kernel patch to the LUE?
    I was thinking that the command would look like this:
    patchadd -R /alt_env_root /location/144500-19
    In the README I don't find anything about patching a LUE; it only mentions installing the patch in single-user mode or when the system is close to totally idle.
    Dean

    Yes, you can apply individual patches, including kernel patches, to the LU environment.
    For that you first need to create an ABE, then run:
    luupgrade -n mytestBE -t -s /patchesfolder 166981-17
    You can refer to the links below:
    http://www.oracle.com/technetwork/server-storage/solaris10/solaris-live-upgrade-wp-167900.pdf
    http://www.oracle.com/technetwork/systems/articles/lu-patch-jsp-139117.html
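    Putting the above together, patching an ABE can be done either through luupgrade -t or by mounting the ABE and pointing patchadd at it with -R, as in the original question. A sketch of both routes; the BE name (patchedBE) and patch location are hypothetical:

    ```shell
    # Create the alternate boot environment (if it doesn't already exist)
    lucreate -n patchedBE

    # Route 1: apply the patch to the ABE via luupgrade
    luupgrade -t -n patchedBE -s /var/tmp/patches 144500-19

    # Route 2: mount the ABE and run patchadd against its alternate root
    lumount patchedBE /mnt
    patchadd -R /mnt /var/tmp/patches/144500-19
    luumount patchedBE

    # Either way, activate the patched BE and reboot into it
    luactivate patchedBE
    init 6
    ```

    Either route leaves the running BE untouched until the reboot, which is the point of patching through Live Upgrade.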

  • Creating Boot Environment for Live Upgrade

    Hello.
    I'd like to upgrade a Sun Fire 280R system running Solaris 8 to Solaris 10 U4. I'd like to use Live Upgrade to do this. As that's going to be my first LU of a system, I've got some questions. Before I start, I'd like to mention that I have read the "Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning" ([820-0178|http://docs.sun.com/app/docs/doc/820-0178]) document. Nonetheless, I'd also appreciate pointers to more "hands-on" documentation/howtos regarding Live Upgrade.
    The system that I'd like to upgrade has these filesystems:
    (winds02)askwar$ df
    Filesystem          1k-blocks       Used  Available  Use%  Mounted on
    /dev/md/dsk/d30       4129290     684412    3403586   17%  /
    /dev/md/dsk/d32       3096423    1467161    1567334   49%  /usr
    /dev/md/dsk/d33       2053605     432258    1559739   22%  /var
    swap                  7205072         16    7205056    1%  /var/run
    /dev/dsk/c3t1d0s6   132188872   61847107   69019877   48%  /u04
    /dev/md/dsk/d34      18145961    5429315   12535187   31%  /opt
    /dev/md/dsk/d35       4129290      77214    4010784    2%  /export/home
    It has 2 built-in hard disks, which form those metadevices. You can find the "metastat" output at http://askwar.pastebin.ca/697380. I'm now planning to break the mirrors for /, /usr, /var and /opt. To do so, I'd run
    metadetach d33 d23
    metaclear d23
    d23 is/used to be c1t1d0s4. I'd do this for d30, d32 and d34 as well. The plan is that I'd be able to use these newly freed slices on c1t1d0 for LU. I know that I'm in trouble if c1t0d0 dies now. But that's okay, as that system isn't being used anyway right now...
    Or wait, I can use lucreate to do that as well, can't I? So, instead of manually detaching the mirror, I could do:
    lucreate -n s8_2_s10 -m /:/dev/md/dsk/d30:preserve,ufs \
    -m /usr:/dev/md/dsk/d32:preserve,ufs \
    -m /var:/dev/md/dsk/d33:preserve,ufs \
    -m /opt:/dev/md/dsk/d34:preserve,ufs
    Does that sound right? I'd assume that I'd then have a new boot environment called "s8_2_s10", which uses the contents of the old metadevices. Or would the correct command rather be:
    lucreate -n s8_2_s10_v2 \
    -m /:/dev/md/dsk/d0:mirror,ufs \
    -m /:/dev/md/dsk/d20:detach,attach,preserve \
    -m /usr:/dev/md/dsk/d2:mirror,ufs \
    -m /usr:/dev/md/dsk/d22:detach,attach,preserve \
    -m /var:/dev/md/dsk/d3:mirror,ufs \
    -m /var:/dev/md/dsk/d23:detach,attach,preserve \
    -m /opt:/dev/md/dsk/d4:mirror,ufs \
    -m /opt:/dev/md/dsk/d24:detach,attach,preserve
    What would be the correct way to create the new boot environment? As I said, I haven't done this before, so I'd really appreciate some help.
    Thanks a lot,
    Alexander Skwar

    I replied to this thread: Re: lucreate and non-global zones, so as not to duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • Sun Docs on Live Upgrade

    New from Sun:
    Upgrading With Solaris Live Upgrade:
    http://docs.sun.com/app/docs/doc/817-5505/6mkv5m1ke?a=view

    Frederick is correct that UM does not support download-only of patches; however, the CLI command smpatch does. Take a look at the man page for smpatch, specifically the download subcommand.
    BTW, being able to use UM to patch Live Upgrade images sounds like a great future feature.
    -Dave
