Solaris 9 Live Upgrade to Solaris 10: segmentation fault

Pre-upgrade work:
apply the patches from Infodoc 72099
upgrade the Live Upgrade packages to the Solaris 10 versions
lucreate -m /:/dev/dsk/c0t1d0s0:ufs -m /var:/dev/dsk/c0t1d0s1:ufs -m /opt:/dev/dsk/c0t1d0s3:ufs -n sol10
This works fine:
Boot Environment  Is       Active Active    Can    Copy
Name              Complete Now    On Reboot Delete Status
sol9              yes      yes    yes       no     -
sol10             yes      no     no        yes    -
But luupgrade -u -n sol10 -s /sol11 fails:
Validating the contents of the media </sol11>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <sol10>.
Determining packages to install or upgrade for BE <sol10>.
Performing the operating system upgrade of the BE <sol10>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
/usr/sbin/luupgrade[677]: 6549 Segmentation Fault(coredump)
ERROR: Installation of the packages from this media of the media failed; pfinstall returned these diagnostics:
The Solaris upgrade of the boot environment <sol10> failed.
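For reference, the "upgrade the Live Upgrade packages" step above is normally done by replacing the Solaris 9 LU packages with the ones shipped on the Solaris 10 media before running lucreate/luupgrade; stale LU packages are a common cause of luupgrade core dumps. A rough sketch (the media mount point is an assumption, not taken from the post):
# remove the old Live Upgrade packages (SUNWlucfg may not exist on Solaris 9)
pkgrm SUNWlucfg SUNWluu SUNWlur
# add the Live Upgrade packages from the Solaris 10 media
pkgadd -d /cdrom/cdrom0/s0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu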

I would break your mirror and do a Live Upgrade onto the detached half of the mirror, then boot from that disk.
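If the root file systems are under Solaris Volume Manager, that would look roughly like this; the metadevice and slice names below are placeholders, not taken from the post:
# detach one submirror of the root mirror to free a disk
metadetach d10 d12
# clear the freed submirror so its underlying slice can be reused
metaclear d12
# build the new boot environment directly on the freed slice
lucreate -m /:/dev/dsk/c0t1d0s0:ufs -n sol10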

Similar Messages

  • Solaris Live Upgrade

    I didn't find a specific place for this topic, so feel free to move it.
    I have read a lot about Solaris Live Upgrade, but I would like to know if I can copy a BE from one computer to another over the network via NFS.
    Or is there other software that can make an image of my system and deploy it to the other computers (they are all identical), so I don't need to configure each of them?
    Thanks for any help.

    Thanks for answering.
    I read about JumpStart installation, but is it possible to install a customized image that carries more than just the partitioning, packages and patches selected? I want to share a whole configured system, for example the DNS, IP and user account settings.
    So far I have only found a way to share an installation image with the selected packages and patches, and the configuration files of the packages still have to be set up on each client afterwards.
    Is it possible?
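    One option worth mentioning (treat it as a suggestion, it is not something stated in this thread): JumpStart can install a Solaris Flash archive, which captures the whole configured master system rather than just packages and patches. A sketch, with the archive name, path and server as assumptions:
    # capture the fully configured master system into a Flash archive
    flarcreate -n master-image -c /export/images/master-image.flar
    # minimal JumpStart profile referencing that archive
    install_type flash_install
    archive_location nfs install-server:/export/images/master-image.flar
    partitioning explicit
    filesys c0t0d0s0 free /
    Per-host identity such as hostname and IP address is still supplied through sysidcfg or the JumpStart rules at install time.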

  • Live upgrade from solaris 8 to 10

    How do I upgrade from Solaris 8 to 10 using Live Upgrade on a Sun Netra T1 machine?

    There is a good intro to LU at
    http://www.sun.com/bigadmin/collections/installation.html
    beyond that, see docs.sun.com
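    Beyond those pointers, the basic sequence looks roughly like this once the LU packages on the Solaris 8 side have been replaced with the ones from the Solaris 10 media (the disk slice and media path are placeholders):
    # create the alternate boot environment on a spare slice
    lucreate -c sol8 -m /:/dev/dsk/c0t1d0s0:ufs -n sol10
    # upgrade the alternate BE from the Solaris 10 media
    luupgrade -u -n sol10 -s /cdrom/cdrom0/s0
    # activate the new BE and reboot with init 6 (not reboot or halt)
    luactivate sol10
    init 6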

  • Solaris Live Upgrade with NGZ

    Hi
    I am trying to perform a Live Upgrade on my 2 servers. Both of them have non-global zones (NGZ) installed, and those zones live in a different zpool on an external disk, not in the rpool.
    I have installed all the latest patches required for LU to work properly, but when I run lucreate I start having problems (new_s10BE is the new BE I'm creating).
    On my 1st Server:
    I have a global zone and 1 NGZ named mddtri. This is the error I am getting:
    ERROR: unable to mount zone <mddtri> in </.alt.tmp.b-VBb.mnt>.
    zoneadm: zone 'mddtri': zone root /zoneroots/mddtri/root already in use by zone mddtri
    zoneadm: zone 'mddtri': call to zoneadm failed
    ERROR: unable to mount non-global zones of ABE: cannot make bootable
    ERROR: cannot unmount </.alt.tmp.b-VBb.mnt/var/run>
    ERROR: unable to make boot environment <new_s10BE> bootable
    On my 2nd Server:
    I have a global zone and 10 NGZ. This is the error I am getting:
    WARNING: Directory </zoneroots/zone1> zone <global> lies on a filesystem shared between BEs, remapping path to </zoneroots/zone1/zone1-new_s10BE>
    WARNING: Device <zone1> is shared between BEs, remapping to <zone1-new_s10BE>
    This happens for all the running NGZ.
    Duplicating ZFS datasets from PBE to ABE.
    ERROR: The dataset <zone1-new_s10BE> is on top of ZFS pool. Unable to clone. Please migrate the zone  to dedicated dataset.
    ERROR: Unable to create a duplicate of <zone1> dataset in PBE. <zone1-new_s10BE> dataset in ABE already exists.
    Reverting state of zones in PBE <old_s10BE>
    ERROR: Unable to copy file system from boot environment <old_s10BE> to BE <new_s10BE>
    ERROR: Unable to populate file systems from boot environment <new_s10BE>
    Help, I need to sort this out a.s.a.p!

    Hi,
    I have the same problem with an attached A5200 with mirrored disks (Solaris 9, Volume Manager). The "critical" partitions should be copied to a second system disk, while the mirrored partitions should be shared.
    Here is a script with lucreate.
    #!/bin/sh
    # create a new boot environment "disk0" on c2t0d0, logging to ${Logdir}
    Logdir=/usr/local/LUscripts/logs
    if [ ! -d ${Logdir} ]
    then
        echo ${Logdir} does not exist
        exit 1
    fi
    /usr/sbin/lucreate \
        -l ${Logdir}/$0.log \
        -o ${Logdir}/$0.error \
        -m /:/dev/dsk/c2t0d0s0:ufs \
        -m /var:/dev/dsk/c2t0d0s3:ufs \
        -m /opt:/dev/dsk/c2t0d0s4:ufs \
        -m -:/dev/dsk/c2t0d0s1:swap \
        -n disk0
    And here is the output
    root@ahbgbld800x:/usr/local/LUscripts > ./lucreate_disk0.sh
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    INFORMATION: Unable to determine size or capacity of slice </dev/md/RAID-INT/dsk/d0>.
    ERROR: An error occurred during creation of configuration file.
    ERROR: Cannot create the internal configuration file for the current boot environment <disk3>.
    Assertion failed: *ptrKey == (unsigned long long)_lu_malloc, file lu_mem.c, line 362
    Abort - core dumped

  • Patching broken in Live Upgrade from Solaris 9 to Solaris 10

    I'm using Live Upgrade to upgrade a Solaris 9 system to Solaris 10. I installed the
    LU packages from Solaris 10 11/06 plus the latest Live Upgrade patches. Everything
    went fine until I attempted to use `luupgrade -t' to apply Recommended and Security patches to the
    Solaris 10 boot environment. It gave me this error:
    ERROR: The boot environment <sol10env> supports non-global zones. The current boot environment does not support non-global zones. Releases prior to Solaris 10 cannot be used to maintain Solaris 10 and later releases that include support for non-global zones. You may only execute the specified operation on a system with Solaris 10 (or later) installed.
    Can anyone tell me if there is a way to get the Recommended patches installed without having to first activate and boot up to 10?
    Thanks in advance.
    'chele

    Tried it - got kind of excited for a couple of seconds......but then it failed:
    # ./install_cluster -R /.alt.sol10env
    Patch cluster install script for Solaris 10 Recommended Patch Cluster
    WARNING: SYSTEMS WITH LIMITED DISK SPACE SHOULD NOT INSTALL PATCHES:
    ...(standard text about space; running out of disk space during installation may result
    in only partially loaded patches). Check and be sure adequate disk space
    is available before continuing.
    Are you ready to continue with install? [y/n]: y
    Determining if sufficient save space exists...
    Sufficient save space exists, continuing...
    ERROR: OS is not valid for this cluster. Exiting.

  • Live Upgrade on Solaris 11

    Following my posts on LU on Solaris 10, I now need to do the same on Solaris 11.2.  The two machines are brand new - not being used in anger.
    I saw the useful blog https://blogs.oracle.com/oem/entry/using_ops_center_to_update
    One of our machines has the following:
    beadm list
    BE                 Active Mountpoint Space  Policy Created
    solaris            -      -          12.99M static 2014-06-21 05:39
    solaris-1          NR     /          25.76G static 2014-09-10 12:00
    solaris-1-backup-1 -      -          323.0K static 2014-09-29 11:45
    solaris-1-backup-2 -      -          172.0K static 2014-11-12 12:26
    When asked, the person who looked after the machine was not sure how these were created. I would like to know how to check whether the two boot environments are the same and, if so, whether one of them can be deleted, and also whether the backups are required and whether they are auto-generated. Essentially, I am a total newbie on Live Upgrade, but I see it as the only way to apply patches and other packages.
    Regards
    SC

    Hi,
    please note that Live Upgrade is obsolete and isn't used on Solaris 11 and above. Solaris 11
    uses pkg(1) and beadm(1M) to manage boot environments.
    The output from beadm list shows the currently existing boot environments. Those environments
    are created by pkg(1) when updating the system or in certain situations when destructive package
    operations take place (the "-backup" environments).
    Please see
    Updating to Oracle Solaris 11.2
    for more information on upgrading Solaris.
    In your example output the solaris boot environment is most likely the result of an
    initial installation. Then someone later updated the environment resulting in a new
    environment solaris-1 (which is also the current boot environment). Some pkg
    operations then caused some backup boot environments to be created. This is
    done to make sure that a fall-back exists should there be a problem with the newly
    installed packages. If you don't need to go back to that stage you can also remove them.
    It is recommended to always keep a known-good boot environment around just in case.
    If you are happy with the current one you can ask beadm to create such an environment
    like this:
    # beadm create my-known-good-be
    Please note: When using the pkg command to update the system you can
    also specify a custom name for the new boot environment, e.g.
    # pkg update --be-name=s11.2
    would name the new environment s11.2 instead of some generic name like "solaris-X".
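    And once you are sure you no longer need to fall back to one of the automatically created backup environments, it can simply be removed, for example:
    # beadm destroy solaris-1-backup-1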
    Regards,
      Ronald

  • Solaris 8 upgrade to Solaris 10 terminology question

    I have been researching upgrading my E420R and V440s from Solaris 8 to Solaris 10. I have Solaris 10 CDs and am confused about the statement that says "If you want to upgrade a system that has non-global zones installed you have to use the DVDs". There are no zones in Solaris 8, so is the entire system considered the global zone? Can I just upgrade with the CDs I have?
    I just want to upgrade (not live upgrade, etc).
    Thank you for clearing this up.

    I'm a little confused about the statement about the zone. Because it is Solaris 8, is the entire OS considered one global zone?
    Solaris 8 has never had zone support, nor will it ever have zone support. So from that perspective I guess you could call it a global zone if you wish, but at the time Solaris 8 was active the concept of a global zone did not exist. For the purposes of your question, yes, it's only a global zone with no non-global zones.
    You're reading way too much into this.
    alan

  • Solaris 10 update 9 - live upgrade issues with ZFS

    Hi
    After doing a live upgrade from Solaris 10 update 8 to Solaris 10 update 9 the alternate boot environment I created is no longer bootable.
    I have completed all the pre-upgrade steps like:
    - Installing the latest version of live upgrade from the update 9 ISO.
    - Create and test the new boot environment.
    - Create a sysidcfg file used by the live upgrade that has auto_reg=disable in it.
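    For reference, the sysidcfg mentioned in the last step only needs that one keyword, and it is passed to luupgrade with -k; the BE name and media path below are placeholders:
    # /var/tmp/no-autoreg contains a single line:
    auto_reg=disable
    # then run the upgrade against the alternate BE
    luupgrade -u -n s10u9BE -s /path/to/u9/media -k /var/tmp/no-autoreg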
    There are also no errors while creating the boot environment or even when activating it.
    Here is the error I get:
    SunOS Release 5.10 Version Generic_14489-06 64-bit
    Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
    NOTICE: zfs_parse_bootfs: error 22
    Cannot mount root on altroot/37 fstype zfs
    panic[cpu0]/thread=fffffffffbc28040: vfs mountroot: cannot mount root
    ffffffffffbc4a8d0 genunix:main+107 ()
    Skipping system dump - no dump device configured
    Does anyone know how I can fix this?
    Edited by: user12099270 on 02-Feb-2011 04:49

    Found the culprit... 142910-17... breaks it
    System has findroot enabled GRUB
    Updating GRUB menu default setting
    GRUB menu default setting is unaffected
    Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10x_u8wos_08a> as <mount-point>//boot/grub/menu.lst.prev.
    File </etc/lu/GRUB_backup_menu> propagation successful
    Successfully deleted entry from GRUB menu
    Validating the contents of the media </admin/x86/Patches/10_x86_Recommended/patches>.
    The media contains 204 software patches that can be added.
    Mounting the BE <s10x_u8wos_08a_Jan2011>.
    Adding patches to the BE <s10x_u8wos_08a_Jan2011>.
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Done!
    The following requested patches have packages not installed on the system
    Package SUNWio-tools from directory SUNWio-tools in patch 142910-17 is not installed on the system. Changes for package SUNWio-tools will not be applied to the system.
    Package SUNWzoneu from directory SUNWzoneu in patch 142910-17 is not installed on the system. Changes for package SUNWzoneu will not be applied to the system.
    Package SUNWpsm-ipp from directory SUNWpsm-ipp in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-ipp will not be applied to the system.
    Package SUNWsshdu from directory SUNWsshdu in patch 142910-17 is not installed on the system. Changes for package SUNWsshdu will not be applied to the system.
    Package SUNWsacom from directory SUNWsacom in patch 142910-17 is not installed on the system. Changes for package SUNWsacom will not be applied to the system.
    Package SUNWmdbr from directory SUNWmdbr in patch 142910-17 is not installed on the system. Changes for package SUNWmdbr will not be applied to the system.
    Package SUNWopenssl-commands from directory SUNWopenssl-commands in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-commands will not be applied to the system.
    Package SUNWsshdr from directory SUNWsshdr in patch 142910-17 is not installed on the system. Changes for package SUNWsshdr will not be applied to the system.
    Package SUNWsshcu from directory SUNWsshcu in patch 142910-17 is not installed on the system. Changes for package SUNWsshcu will not be applied to the system.
    Package SUNWsshu from directory SUNWsshu in patch 142910-17 is not installed on the system. Changes for package SUNWsshu will not be applied to the system.
    Package SUNWgrubS from directory SUNWgrubS in patch 142910-17 is not installed on the system. Changes for package SUNWgrubS will not be applied to the system.
    Package SUNWzoner from directory SUNWzoner in patch 142910-17 is not installed on the system. Changes for package SUNWzoner will not be applied to the system.
    Package SUNWmdb from directory SUNWmdb in patch 142910-17 is not installed on the system. Changes for package SUNWmdb will not be applied to the system.
    Package SUNWpool from directory SUNWpool in patch 142910-17 is not installed on the system. Changes for package SUNWpool will not be applied to the system.
    Package SUNWudfr from directory SUNWudfr in patch 142910-17 is not installed on the system. Changes for package SUNWudfr will not be applied to the system.
    Package SUNWxcu4 from directory SUNWxcu4 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu4 will not be applied to the system.
    Package SUNWarc from directory SUNWarc in patch 142910-17 is not installed on the system. Changes for package SUNWarc will not be applied to the system.
    Package SUNWtftp from directory SUNWtftp in patch 142910-17 is not installed on the system. Changes for package SUNWtftp will not be applied to the system.
    Package SUNWaccu from directory SUNWaccu in patch 142910-17 is not installed on the system. Changes for package SUNWaccu will not be applied to the system.
    Package SUNWppm from directory SUNWppm in patch 142910-17 is not installed on the system. Changes for package SUNWppm will not be applied to the system.
    Package SUNWtoo from directory SUNWtoo in patch 142910-17 is not installed on the system. Changes for package SUNWtoo will not be applied to the system.
    Package SUNWcpc from directory SUNWcpc.i in patch 142910-17 is not installed on the system. Changes for package SUNWcpc will not be applied to the system.
    Package SUNWftdur from directory SUNWftdur in patch 142910-17 is not installed on the system. Changes for package SUNWftdur will not be applied to the system.
    Package SUNWypr from directory SUNWypr in patch 142910-17 is not installed on the system. Changes for package SUNWypr will not be applied to the system.
    Package SUNWlxr from directory SUNWlxr in patch 142910-17 is not installed on the system. Changes for package SUNWlxr will not be applied to the system.
    Package SUNWdcar from directory SUNWdcar in patch 142910-17 is not installed on the system. Changes for package SUNWdcar will not be applied to the system.
    Package SUNWnfssu from directory SUNWnfssu in patch 142910-17 is not installed on the system. Changes for package SUNWnfssu will not be applied to the system.
    Package SUNWpcmem from directory SUNWpcmem in patch 142910-17 is not installed on the system. Changes for package SUNWpcmem will not be applied to the system.
    Package SUNWlxu from directory SUNWlxu in patch 142910-17 is not installed on the system. Changes for package SUNWlxu will not be applied to the system.
    Package SUNWxcu6 from directory SUNWxcu6 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu6 will not be applied to the system.
    Package SUNWpcmci from directory SUNWpcmci in patch 142910-17 is not installed on the system. Changes for package SUNWpcmci will not be applied to the system.
    Package SUNWarcr from directory SUNWarcr in patch 142910-17 is not installed on the system. Changes for package SUNWarcr will not be applied to the system.
    Package SUNWscpu from directory SUNWscpu in patch 142910-17 is not installed on the system. Changes for package SUNWscpu will not be applied to the system.
    Package SUNWcpcu from directory SUNWcpcu in patch 142910-17 is not installed on the system. Changes for package SUNWcpcu will not be applied to the system.
    Package SUNWopenssl-include from directory SUNWopenssl-include in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-include will not be applied to the system.
    Package SUNWdtrp from directory SUNWdtrp in patch 142910-17 is not installed on the system. Changes for package SUNWdtrp will not be applied to the system.
    Package SUNWhermon from directory SUNWhermon in patch 142910-17 is not installed on the system. Changes for package SUNWhermon will not be applied to the system.
    Package SUNWpsm-lpd from directory SUNWpsm-lpd in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-lpd will not be applied to the system.
    Package SUNWdtrc from directory SUNWdtrc in patch 142910-17 is not installed on the system. Changes for package SUNWdtrc will not be applied to the system.
    Package SUNWhea from directory SUNWhea in patch 142910-17 is not installed on the system. Changes for package SUNWhea will not be applied to the system.
    Package SUNW1394 from directory SUNW1394 in patch 142910-17 is not installed on the system. Changes for package SUNW1394 will not be applied to the system.
    Package SUNWrds from directory SUNWrds in patch 142910-17 is not installed on the system. Changes for package SUNWrds will not be applied to the system.
    Package SUNWnfsskr from directory SUNWnfsskr in patch 142910-17 is not installed on the system. Changes for package SUNWnfsskr will not be applied to the system.
    Package SUNWudf from directory SUNWudf in patch 142910-17 is not installed on the system. Changes for package SUNWudf will not be applied to the system.
    Package SUNWixgb from directory SUNWixgb in patch 142910-17 is not installed on the system. Changes for package SUNWixgb will not be applied to the system.
    Checking patches that you specified for installation.
    Done!
    Approved patches will be installed in this order:
    142910-17
    Checking installed patches...
    Executing prepatch script...
    Installing patch packages...
    Patch 142910-17 has been successfully installed.
    See /a/var/sadm/patch/142910-17/log for details
    Executing postpatch script...
    Creating GRUB menu in /a
    Installing grub on /dev/rdsk/c2t0d0s0
    stage1 written to partition 0 sector 0 (abs 16065)
    stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
    Patch packages installed:
    BRCMbnx
    SUNWaac
    SUNWahci
    SUNWamd8111s
    SUNWcakr
    SUNWckr
    SUNWcry
    SUNWcryr
    SUNWcsd
    SUNWcsl
    SUNWcslr
    SUNWcsr
    SUNWcsu
    SUNWesu
    SUNWfmd
    SUNWfmdr
    SUNWgrub
    SUNWhxge
    SUNWib
    SUNWigb
    SUNWintgige
    SUNWipoib
    SUNWixgbe
    SUNWmdr
    SUNWmegasas
    SUNWmptsas
    SUNWmrsas
    SUNWmv88sx
    SUNWnfsckr
    SUNWnfscr
    SUNWnfscu
    SUNWnge
    SUNWnisu
    SUNWntxn
    SUNWnv-sata
    SUNWnxge
    SUNWopenssl-libraries
    SUNWos86r
    SUNWpapi
    SUNWpcu
    SUNWpiclu
    SUNWpsdcr
    SUNWpsdir
    SUNWpsu
    SUNWrge
    SUNWrpcib
    SUNWrsgk
    SUNWses
    SUNWsmapi
    SUNWsndmr
    SUNWsndmu
    SUNWtavor
    SUNWudapltu
    SUNWusb
    SUNWxge
    SUNWxvmpv
    SUNWzfskr
    SUNWzfsr
    SUNWzfsu
    PBE GRUB has no capability information.
    PBE GRUB has no versioning information.
    ABE GRUB is newer than PBE GRUB. Updating GRUB.
    GRUB update was successfull.
    Unmounting the BE <s10x_u8wos_08a_Jan2011>.
    The patch add to the BE <s10x_u8wos_08a_Jan2011> completed.
    Still need to know how to resolve it though...

  • DB upgrade during a Solaris upgrade using live upgrade feature

    Hello,
    Here is the scenario. We are currently upgrading our OS from Solaris 9 to Solaris 10. And there is also a major database upgrade involved too. To help mitigate downtime, the Solaris Live Upgrade feature is being used by the SA's. The DB to be upgraded is currently at 9i and the proposed upgrade end state is at 10.2.0.3.
    Does anyone know if it is possible to do at least part of the database upgrade in the alternate boot environment created during the live upgrade process? So let's say that I am able to get the database partly upgraded to 10.2.0.1. Then I want to run the patch set for 10.2.0.3 in the alternate boot environment and upgrade the instance there too, so that when the alternate boot environment becomes the active boot environment, I can, in effect, just start up the database.
    That's sort of a high level simplified version of what I think may be possible.
    Does anyone have any recommendations? We don't have any other high availability solutions currently in place so options such as using data guard do not apply.
    Thanks in advance!

    Hi magNorth,
    I'm not a Solaris expert, but I'd always recommend not doing both steps (OS upgrade and database upgrade) at the same time. I've seen too many live scenarios where one or the other has caused issues - and it's fairly tough for all three parties (you as the customer, Sun Support, Oracle Support) to find out in the end what was/is causing the issues.
    So my recommendation would be:
    1) Do your Solaris upgrade from 9 to 10 and check for the most recent Solaris patches - once this has been finished and run for one or two weeks successfully in production then go to step 2
    2) Do your database upgrade - I would suggest not going to 10.2.0.3 but directly to 10.2.0.4, because 10.2.0.4 has ~2000 bug fixes plus the security fixes from April 2008 - but that's your decision. If 10.2.0.3 is a must, then at least check Note:401435.1 for known issues and alerts in 10.2.0.3, and in both cases the Upgrade Companion: Note:466181.1
    Kind regards
    Mike

  • After Solaris live upgrade disks unavailable

    Hi All,
    we have two Sun Fire 6800 cluster nodes. OS and kernel version: Solaris 9 9/05 s9s_u8wos_05 SPARC. After a Solaris Live Upgrade, the new root disks keep failing. During boot of the new boot environment Solaris_9_905 the system waits and then the boot disks become unavailable, the cluster crashes and starts with the old boot environment. If the failed devices are removed and then re-inserted, they become available again.
    Could anybody help us? Any idea?
    regards
    Josef

    Hi Tim,
    We have Sun Cluster 3.1. Could that be the cause of the error? I'll look for a description of LU.
    However, it looks like a simple hardware failure: it just lost the disks.
    Messages:
    root@mcl01:~#metastat -p
    d60 -m d61 d62 d63 1
    d61 1 1 c1t1d0s6
    d62 1 1 c1t5d0s6
    d63 1 1 c1t4d0s6
    d50 -m d51 d52 d53 1
    d51 1 1 c1t1d0s5
    d52 1 1 c1t5d0s5
    d53 1 1 c1t4d0s5
    d30 -m d31 d32 d33 1
    d31 1 1 c1t1d0s3
    d32 1 1 c1t5d0s3
    d33 1 1 c1t4d0s3
    d0 -m d1 d2 d3 1
    d1 1 1 c1t1d0s0
    d2 1 1 c1t5d0s0
    d3 1 1 c1t4d0s0
    0. c1t0d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000870e3de0,0
    1. c1t2d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871421d4,0
    2. c1t3d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871432fc,0
    3. c1t5d0
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000000871421a1,0
    root@mcl02:~#metastat -p
    d60 -m d61 d62 d63 1
    d61 1 1 c0t1d0s6
    d62 1 1 c4t1d0s6
    d63 1 1 c4t2d0s6
    d50 -m d51 d52 d53 1
    d51 1 1 c0t1d0s5
    d52 1 1 c4t1d0s5
    d53 1 1 c4t2d0s5
    d30 -m d31 d32 d33 1
    d31 1 1 c0t1d0s3
    d32 1 1 c4t1d0s3
    d33 1 1 c4t2d0s3
    d0 -m d1 d2 d3 1
    d1 1 1 c0t1d0s0
    d2 1 1 c4t1d0s0
    d3 1 1 c4t2d0s0
    0. c0t0d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@0,0
    !!! 1. c0t1d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@1,0
    2. c0t2d0
    /ssm@0,0/pci@18,700000/pci@1/scsi@2/sd@2,0
    297. c4t0d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@0,0
    !!! 298. c4t1d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@1,0
    !!! 299. c4t2d0
    /ssm@0,0/pci@1c,700000/pci@1/scsi@2/sd@2,0
    And
    "format" doesn't show some of these disks or says "disk type undefined" (- something like this)
    thx
    J

  • Sparse zones live upgrade

    Hi
    I have a problem with Live Upgrade from Solaris 10 9/10 to 8/11 on a sparse zone.
    The installation in the global zone is fine, but the sparse zone cannot boot because the zonepath changed.
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - pbspfox1 installed /zoneprod/pbspfox1-s10u10-baseline native excl
    the initial path is /zoneprod/pbspfox1
    #zfs list
    zoneprod/pbspfox1@s10u10-baseline 22.4M - 2.18G -
    zoneprod/pbspfox1-s10u10-baseline
    # luactivate zoneprod/pbspfox1@s10u10-baseline
    ERROR: The boot environment Name <zoneprod/pbspfox1@s10u10-baseline> is too long - a BE name may contain no more than 30 characters.
    ERROR: Unable to activate boot environment <zoneprod/pbspfox1@s10u10-baseline>.
    how to upgrade pbspfox1?
    Please help
    Walter

    I'm not exactly sure what happened here, but the zone name doesn't change. If the zonepath is wrong, I'd try using zonecfg to change the zonepath to the proper value and then boot the zone normally.
    zonecfg -z pbspfox1
    set zonepath=/zone/pbspfox1 (or whatever is the proper path)
    verify
    commit
    exit
    zoneadm -z pbspfox1 boot
    If the zone didn't get properly updated, you might be able to update it by detaching the zone:
    zoneadm -z pbspfox1 detach
    and doing an update reattach
    zoneadm -z pbspfox1 attach -u
    Disclaimer: All of the above was done from memory without testing, I may have gotten some things wrong.
    Hopefully this will help. I've had similar issues in the past, but I'm not sure I've had exactly the same problem, so I can't tell for sure whether this will help you or not. It is what I'd try. Of course, try to avoid getting yourself into a position where you can't back something out if necessary. This kind of thing can be messy and may require more than one try. If I remember correctly there were some issues with the live upgrade software as shipped with Solaris 10 8/11. I'd get it patched up to current levels ASAP to avoid additional issues.

  • Live Upgrade from 8 to 10 keeping Oracle install

    Hi everyone,
    I'm trying to figure out how to do a Live Upgrade from Solaris 8 (SPARC) to 10 and still keep my Oracle available after the reboot.
    Since all of the Oracle datafiles are on /un or /opt/oracle I figure I can just create a new BE for /, /etc and /var. From there I'd just need to edit /etc/vfstab, /etc/init.d/ (for db startups), copy over /var/opt/oracle/, mount /opt/oracle.
    Does that sound right? Has anyone done this?
    On a side note, I'm still trying to figure out Live Upgrade. My system is configured with RAID 1 under Solstice DiskSuite (metatool). I'm concerned about being able to access the metadb once the BE switch goes through. Should I set up a new mirror for the new BE prior to running LU? Or should I configure the mirror for the new BE once the switchover has gone through?
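    For what it's worth, the kind of lucreate described in the first paragraph would look roughly like this; the disk slices are placeholders, and any file system not listed with -m (such as /opt/oracle) is shared between the BEs rather than copied:
    lucreate -c sol8 \
        -m /:/dev/dsk/c0t1d0s0:ufs \
        -m /var:/dev/dsk/c0t1d0s1:ufs \
        -n sol10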

    Hello dfkoch,
    To upgrade from ColdFusion 8/ColdFusion 9 to ColdFusion 10, please download the setup for ColdFusion 10 from http://www.adobe.com/support/coldfusion/downloads.html#cf10productdownloads and install the same with the serial number.
    The upgrade price can be checked at www.adobe.com or alternatively you can call our sales team.
    Hope this helps.
    Regards,
    Anit Kumar

  • Looking for information on best practices using Live Upgrade to patch LDOMs

    This is in Solaris 10. Relatively new to the style of patching... I have a T5240 with 4 LDOMS. A control LDOM and three clients. I have some fundamental questions I'd like help with..
    Namely:
    #1. The client LDOMs have zones running in them. Do I need to init 0 the zones, or can I just zoneadm halt them regardless of state? I.e. if a zone is running a database, will halting the zone essentially snapshot it, or will it attempt to shut it down? Is this even a necessary step?
    #2. What is the recommended reboot order for the LDOMs? Do I need to init 0 the client LDOMs and then reboot the control LDOM, or can I leave the client LDOMs running, reboot the control, and then reboot the clients after the control comes up?
    #3. Oracle: it's running in several of the zones on the client LDOMs. What considerations need to be made for this?
    I am sure other things will come up during the conversation but I have been looking for an hour on Oracle's site for this and the only thing I can find is old Sun Docs with broken links.
    Thanks for any help you can provide,
    pipelineadmin

    Before you use live upgrade, or any other patching technique for Solaris, please be sure to read http://docs.oracle.com/cd/E23823_01/html/E23801/index.html which includes information on upgrading systems with non-global zones. Also, go to support.oracle.com and read Oracle Solaris Live Upgrade Information Center [ID 1364140.1]. These really are MANDATORY READING.
    For the individual questions:
    #1. During the actual maintenance you don't have to do anything to the zone - just operate it as normal. That's the purpose of the "live" in "live upgrade" - you're applying patches on a live, running system under normal operations. When you are finished with that process you can then reboot into the new "boot environment". This will become clearer after reading the above documents. Do as you normally would before taking a planned outage: shut the databases down using the database commands for a graceful shutdown. A zone halt will abruptly stop the zone and is not a good idea for a database. Alternatively, if you can take application outages, you could (smoothly) shut down the applications and then their domains, detach the zones (zoneadm detach) and then do a live upgrade. Some people like that because it makes things faster. After the live upgrade you would reboot and then zoneadm attach the zones again. The fact that the Solaris instance is running within a logical domain really is mostly beside the point with respect to this process.
    As you can see, there are a LOT of options and choices here, so it's important to read the doc. I strongly recommend you practice on a test domain so you can get used to the procedure. That's one of the benefits of virtualization: you can easily set up test environments so you can test out procedures. Do it! :-)
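    A minimal sketch of the detach-before-upgrade variant described above; the zone name, BE name and the omitted lucreate/luupgrade step are placeholders:
    # shut the database down cleanly, then detach its zone from the running BE
    zoneadm -z dbzone detach
    # ...create and patch/upgrade the alternate BE here with lucreate/luupgrade...
    # activate the new BE and reboot with init 6
    luactivate newBE
    init 6
    # once the new BE is up, reattach the zone with an update attach
    zoneadm -z dbzone attach -u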
    #2 First, note that you can update the domains individually at separate times, just as if they were separate physical machines. So, you could update the guest domains one week (all at once or one at a time), reboot them into the new Solaris 10 software level, and then a few weeks later (or whenever) update the control domain.
    If you had set up your T5240 in a split-bus configuration with an alternate I/O domain providing virtual I/O for the guests, you would be able to upgrade the extra I/O domain and the control domain one at a time in a rolling upgrade - without ever having to reboot the guests. That's really powerful for providing continuous availability. Since you haven't done that, the answer is that at the point you reboot the control domain the guests will lose their I/O. They don't crash, and technically you could just have them continue until the control domain comes back up, at which time the I/O devices reappear. For an important application like a database I wouldn't recommend that. Instead: shut down the guests, then reboot the control domain, then bring the guest domains back up.
    3. The fact that Oracle database is running in zones inside those domains really isn't an issue. You should study the zones administration guide to understand the operational aspects of running with zones, and make sure that the patches are compatible with the version of Oracle.
    I STRONGLY recommend reading the documents mentioned at top, and setting up a test domain to practice on. It shouldn't be hard for you to find documentation. Go to www.oracle.com and hover your mouse over "Oracle Technology Network". You'll see a window with a menu of choices, one of which is "Documentation" - click on that. From there, click on System Software, and it takes you right to the links for Solaris 10 and 11.

  • Volume as install disk for Guest Domain and Live Upgrade

    Hi Folks,
    I am new to LDOMs and have some questions - any pointers, examples would be much appreciated:
    (1) With support for volumes to be used as whole disks added in LDOM release 1.0.3, can we export a whole LUN under either VERITAS DMP or mpxio control to guest domain and install Solaris on it ? Any gotchas or special config required to do this ?
    (2) Can Solaris Live Upgrade be used with Guest LDOMs ? or is this ability limited to Control Domains ?
    Thanks

    The answer to your #1 question is YES.
    Here's my mpxio enabled device.
    non-STMS device name                 STMS device name
    /dev/rdsk/c2t50060E8010029B33d16     /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    /dev/rdsk/c3t50060E8010029B37d16     /dev/rdsk/c4t4849544143484920373730313036373530303136d0
    create the virtual disk using slice 2
    ldm add-vdsdev /dev/dsk/c4t4849544143484920373730313036373530303136d0s2 77bootdisk@primary-vds01
    add the virtual disk to the guest domain
    ldm add-vdisk apps bootdisk@primary-vds01 ldom1
    The virtual disk will be imported as c0d0, which is the whole LUN itself.
    Bind and start ldom1 and install the OS (I used JumpStart); it partitioned the boot disk c0d0 as / (15GB) and swap on the remaining space (10GB).
    When you run format and print the partition table for this disk in both the guest and the primary domain, you'll see the same slice/size information:
    Part Tag Flag Cylinders Size Blocks
    0 root wm 543 - 1362 15.01GB (820/0/0) 31488000
    1 swap wu 0 - 542 9.94GB (543/0/0) 20851200
    2 backup wm 0 - 1362 24.96GB (1363/0/0) 52339200
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    I haven't used DMP, but HDLM (Hitachi Dynamic Link Manager) doesn't seem to be supported by LDOMs, as I cannot make it work :(
    Unfortunately I have no answer to your second question.

  • Applying individual patches to a Live Upgrade Environment

    Hi all
    Is it possible to apply individual patches to a Live Upgrade Environment? More specifically, is it possible to apply a kernel patch to the LUE?
    I was thinking that the command would look like this:
    patchadd -R /alt_env_root /location/144500-19
    In the README I don't find anything about patching a LUE, only a mention of installing it in single-user mode or when the system is close to totally idle.
    Dean

    Yes, you can apply individual patches and kernel patches in the LU environment.
    For that you first need to create the ABE, then:
    luupgrade -n mytestBE -t -s /patchesfolder 166981-17
    You can refer to the links below:
    http://www.oracle.com/technetwork/server-storage/solaris10/solaris-live-upgrade-wp-167900.pdf
    http://www.oracle.com/technetwork/systems/articles/lu-patch-jsp-139117.html
