Live upgrade, zones and separate mount points

Hi,
We have a fairly large zone environment based on Solaris zones located on VxVM/VxFS. I know this is a questionable configuration, but the choice was made before I got here, and now we need to upgrade the environment. The Veritas guides say it is fine to locate zones on Veritas storage, but I am not sure Sun would approve.
Anyway, since all zones are located on a separate volume, I want to create a new one for every zonepath, something like:
lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
This works fine for a while (after the integration of 6620317 in 121430-23), but when the new environment is to be activated I get errors, see below [1]. If I look at the commands executed by lucreate, I see that the global root is mounted, but my zone root does not seem to have been mounted before the call to zoneadmd [2]. While this might not be a supported configuration, VxVM does seem to be supported, and I think there are a few people out there with zonepaths on separate disks. Live Upgrade probably has no issue with the files moved from the VxFS filesystem; that part has been done. But the new filesystems do not seem to get mounted correctly.
Has anyone tried something similar, or does anyone have an idea how to solve this?
The system is s10s_u4 with kernel patch 127111-10 and Live Upgrade patches 121430-25 and 121428-10.
1:
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point </zones/zone01>.
Copying.
Creating shared file system mount points.
Copying root of zone <zone01>.
Creating compare databases for boot environment <upgrade>.
Creating compare database for file system </zones/zone01>.
Creating compare database for file system </>.
Updating compare databases on boot environment <upgrade>.
Making boot environment <upgrade> bootable.
ERROR: unable to mount zones:
zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
zoneadm: zone 'zone01': call to zoneadmd failed
ERROR: unable to mount zone <zone01> in </.alt.upgrade>
ERROR: unmounting partially mounted boot environment file systems
ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
umount: /dev/dsk/c2t1d0s0 not mounted
ERROR: cannot unmount </dev/dsk/c2t1d0s0>
ERROR: cannot mount boot environment by name <upgrade>
ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
ERROR: Update of loader failed.
ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
Making the ABE <upgrade> bootable FAILED.
ERROR: Unable to make boot environment <upgrade> bootable.
ERROR: Unable to populate file systems on boot environment <upgrade>.
ERROR: Cannot make file systems for boot environment <upgrade>.
2:
0 21191 21113 /usr/lib/lu/lumount -f upgrade
0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
0 21196 21192 mount swap /.alt.upgrade/var/run
0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
0 21197 21192 mount swap /.alt.upgrade/tmp
0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
0 21199 21198 /bin/expr 2 - 1
0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
0 21202 21198 false
0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
0 21206 21204 sed s/\([^\]\)::/\1:-:/
0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
0 21210 21203 false
0 21211 21203 gettext unable to mount zone <%s> in <%s>
0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up
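The trace format above (UID, PID, PPID, ARGS) matches what the DTraceToolkit's execsnoop prints. A minimal sketch of how such a trace can be captured, assuming execsnoop is installed and in PATH: run it in one terminal and re-run the failing lucreate in another:
# log every exec() on the system while lucreate runs elsewhere
execsnoop > /tmp/lu-exec.log &
lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs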
Edited by: henrikj_ on Sep 8, 2008 11:55 AM Added Solaris release and patch information.

I updated my manual pages and got a reminder about the zonename field for the -m option of lucreate. But I still have no success: if I have the root filesystem for the zone in vfstab, it tries to mount the current root into the alternate BE:
# lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
<snip>
Creating file systems on boot environment <upgrade>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
Mounting file systems for boot environment <upgrade>.
ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
allowable number of mount points exceeded
ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
ERROR: unmounting partially mounted boot environment file systems
If I try the same thing but with the filesystem removed from vfstab, I get a different error:
<snip>
Creating boot environment <upgrade>.
Creating file systems on boot environment <upgrade>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
Mounting file systems for boot environment <upgrade>.
Calculating required sizes of file systems for boot environment <upgrade>.
Populating file systems on boot environment <upgrade>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point <...> FAILED.
ERROR: Unable to make boot environment <upgrade> bootable.
ERROR: Unable to populate file systems on boot environment <upgrade>.
ERROR: Cannot make file systems for boot environment <upgrade>.
If I let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
# lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs
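One workaround I may try next (a sketch only, untested on this setup): pre-create the UFS filesystem on the new volume and make sure <zonepath>/root already exists on it, so that zoneadmd's stat of /.alt.upgrade/zones/zone01/root can succeed once the volume is mounted. Device names below match the volumes from the earlier lucreate attempts:
# build the filesystem on the new VxVM volume and seed the zone root directory
newfs /dev/vx/rdsk/zone01/zone01_root02
mount -F ufs /dev/vx/dsk/zone01/zone01_root02 /mnt
mkdir -m 755 /mnt/root      # zoneadmd stats <zonepath>/root before mounting
umount /mnt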

Similar Messages

  • Unexpected disconnection external disk and different mount points

    Dear community,
    I have an application that needs to read and write data from an external disk called "external"
    If the volume is accidentally unmounted (by improperly unplugging it), it will remount as "external_1" in the terminal,
    and my app won't see it as the original valid destination.
    According to this documentation:
    https://support.apple.com/en-us/HT203258
    a reboot is needed to solve it, optionally removing the wrong, unused mount points before rebooting.
    Would there be a way to force OS X to remount the volume on the original mount point automatically,
    or to check the disk UUID and bypass the different mount point name (at the app or OS level)?
    Thanks for any clue on that.

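    A sketch of one way to force the volume back onto its original mount point by UUID; the UUID below is a placeholder (get the real one from diskutil info), and -mountPoint needs the target directory to exist:
    # find the volume UUID of the wrongly remounted disk
    diskutil info /Volumes/external_1 | grep "Volume UUID"
    # unmount it, clear the stale duplicate mount point, remount at the original path
    diskutil unmount /Volumes/external_1
    sudo rmdir /Volumes/external_1
    sudo mkdir -p /Volumes/external
    sudo diskutil mount -mountPoint /Volumes/external 01234567-89AB-CDEF-0123-456789ABCDEF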

  • How do I add volume and change mount point of groups?

    how do you add an external volume and change the mount point for users and groups?

    There are no volume automation curves in the iPad version of GarageBand, but you can add a fade-out:
    http://help.apple.com/garageband/ipad/2.0/index.html#chs2a762fad
    Add a fade-out
    You can add an automatic fade-out to the end of a song. When you turn on Fade Out, the last ten seconds of the song fade to silence. If you extend the last section by adding or moving regions, the fade-out adjusts to the new end of the song. You hear the fade-out when you play or share the song, but not while recording.
    Open the song settings.
    Turn Fade Out on.
    Tap Fade Out again to turn off the automatic fade-out.

  • Live Upgrade grub and mount errors on 11/06

    Hi,
    Whilst creating an initial BE on 10 11/06 I got some mount errors ...
    invalid option 'r'
    usage: mount [-o opts] <path>
    ...which subsequently left my BEs a bit messed up. I could lustatus just fine, but couldn't remove the copy. I discovered there was no ICF.2 file, so I created it with what should have been the correct information. Now I get a different message:
    ERROR: No suitable candidate slice for GRUB menu on boot disk:
    I'm using SVM, and the second BE is basically a detached mirror.
    What I'd like to do now is just clear the whole lot up and start again.
    So, is there a manual way of clearing up LU? Like, removing lutab and the information under /etc/lu, for example?
    BTW, I applied 10_x86_Recommended, and then removed and installed SUNWluu, lur, and luzone from snv_66 - which appears to have removed the 'mount' errors.
    Any help gratefully received!
    Thanks,
    --Mark

  • Solaris Zones and NFS mounts

    Hi all,
    Got a customer who wants to separate his web environments on the same node. The releases of Apache, Java and PHP are different, so it kind of makes sense. Seems a perfect opportunity to implement zoning. It looks quite straightforward to set up (I'm sure I'll find out it's not). The only concern I have is that all zones will need access to a single NFS mount from a NAS storage array that we have. Is this going to be a problem to configure, and how would I get them to mount automatically on boot?
    Cheers

    Not necessarily. You can create (from the global zone) a /zone/zonename/etc/dfs/dfstab (NOT /zone/zonename/root/etc/dfs/dfstab; notice you don't use the root dir), and from global do a shareall and the zone will start serving. Check your multi-level ports and make sure they are correct. You will run into some problems if you are running Trusted Extensions or the NFS share is ZFS, but they can be overcome rather easily.
    EDIT: I believe you have to be running TX for this to work. I'll double check.
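    On the original question of mounting the NAS share in every zone at boot, a sketch (server and path names are placeholders): each non-global zone has its own /etc/vfstab, so an entry there mounts the share when the zone boots:
    # inside each zone (fields: device to mount, device to fsck, mount point,
    # FS type, fsck pass, mount at boot, mount options)
    nas01:/export/web  -  /web  nfs  -  yes  rw,soft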
    Message was edited by:
    AdamRichards

  • ZFS mount points and zones

    folks,
    a little history: we've been running cluster 3.2.x with failover zones (using the containers data service) where the zoneroot is installed on a failover zpool (using HAStoragePlus). It's worked OK, but could be better given the real problems surrounding the lack of agents that work in this config (we're mostly an Oracle shop). We've been using the Joost manifests inside the zones, which are OK and have worked, but we wouldn't mind giving the Oracle data services a go, not to mention the more than a little painful patching process in the current setup...
    We've started to look at failover applications amongst zones on the nodes, so we'd have something like node1:zone and node2:zone as potentials, with the apps failing between them on node failure and switchover. This way we'd actually be able to use the agents for Oracle (DB, AS and EBS).
    With the current cluster we create various ZFS volumes within the pool (such as oradata) and, through the zone boot resource, have them mounted where we want inside the zone (in this case $ORACLE_BASE/oradata), with the global zone having the mount point /export/zfs/<instance>/oradata.
    Is there a way of achieving something like this with failover apps inside static zones? I know we can set the volume mountpoint to be what we want, but we rather like having the various Oracle zones all share a similar install (/app/oracle etc.).
    We haven't looked at zone clusters at this stage, if for no other reason than time...
    Or is there a better way?
    thanks muchly,
    nelson

    I must be missing something... any ideas what and where?
    nelson
    devsun012~> zpool import Zbob
    devsun012~> zfs list|grep bob
    Zbob 56.9G 15.5G 21K /export/zfs/bob
    Zbob/oracle 56.8G 15.5G 56.8G /export/zfs/bob/oracle
    Zbob/oratab 1.54M 15.5G 1.54M /export/zfs/bob/oratab
    devsun012~> zpool export Zbob
    devsun012~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    1 bob running /opt/zones/bob native shared
    devsun013~> zoneadm -z bob list -v
    ID NAME STATUS PATH BRAND IP
    16 bob running /opt/zones/bob native shared
    devsun012~> clrt list|egrep 'oracle_|HA'
    SUNW.HAStoragePlus:6
    SUNW.oracle_server:6
    SUNW.oracle_listener:5
    devsun012~> clrg create -n devsun012:bob,devsun013:bob bob-rg
    devsun012~> clrslh create -g bob-rg -h bob bob-lh-rs
    devsun012~> clrs create -g bob-rg -t SUNW.HAStoragePlus \
    root@devsun012 > -p FileSystemMountPoints=/app/oracle:/export/zfs/bob/oracle \
    root@devsun012 > bob-has-rs
    clrs: devsun013:bob - Entry for file system mount point /export/zfs/bob/oracle is absent from global zone /etc/vfstab.
    clrs: (C189917) VALIDATE on resource bob-has-rs, resource group bob-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource bob-has-rs in resource group bob-rg on node devsun013:bob failed.
    clrs: (C891200) Failed to create resource "bob-has-rs".
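    The validate message points at the missing piece: with FileSystemMountPoints, HAStoragePlus expects a matching entry in the global zone's /etc/vfstab on every node that can host the resource group. A sketch of that entry (the dataset may also need mountpoint=legacy so manual mounts are allowed; mount-at-boot stays "no" since the cluster does the mounting):
    # on both devsun012 and devsun013, in the global zone's /etc/vfstab:
    Zbob/oracle  -  /export/zfs/bob/oracle  zfs  -  no  -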

  • Lucreate 'ERROR: mount: /export: invalid argument' - Live Upgrade u8 to u9

    I'm trying to update several servers running Solaris Cluster 3.2 from u8 to u9 using Live Upgrade. The first server (the quorum server) worked just fine; the next one (a cluster member) goes down like this:
    # lucreate -n solaris-10-u9
    ERROR: mount: /export: Invalid argument
    ERROR: cannot mount mount point </.alt.tmp.b-pob.mnt/export> device </export>
    ERROR: failed to mount file system </export> on </.alt.tmp.b-pob.mnt/export>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to mount ABE <solaris-10-u9>
    ERROR: Unable to clone the existing file systems from boot environment <s10x_u8wos_08a> to create boot environment <solaris-10-u9>.
    ERROR: Cannot make file systems for boot environment <solaris-10-u9>.
    I followed all the necessary steps: removed the installed Live Upgrade packages and installed the ones from the u9 ISO...
    Any ideas would be greatly appreciated.
    Edited by: 801033 on Oct 8, 2010 5:11 AM
    Edited by: 801033 on Oct 8, 2010 5:28 AM
    Edited by: 801033 on Oct 8, 2010 5:33 AM

    The answer, at least in my case:
    When I originally installed this cluster, I apparently misread the part of the documentation that led me to disable lofs. The documentation states that you need to disable lofs only if BOTH of two conditions are met:
    1) you are running HA for NFS to serve a locally available filesystem, AND
    2) you are running automountd.
    In my case, I have no need for automountd, so I disabled the autofs service, re-enabled lofs, and am proceeding with the upgrade.
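    A sketch of those two steps (an "exclude: lofs" line in /etc/system is the usual way lofs ends up disabled; a reboot is needed for the change to take effect):
    # disable the automounter
    svcadm disable svc:/system/filesystem/autofs:default
    # re-enable lofs by removing or commenting out this line in /etc/system:
    #   exclude: lofs
    # then reboot and re-run lucreate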

  • How to create Mount Point in Linux

    Hi,
    I'm new to the Linux OS and want to install Oracle EBS R12 on OEL 5.6. I have downloaded the software and installed all required packages for OEL.
    Now I want to create the staging directory at location /u01/StageR12.
    Can anyone guide me on how to create the mount point /u01 and how to assign 53 GB of space to it?
    Reply awaited...

    You're on the right path.
    Keep in mind though what you need partition wise just for o/s install. You should have at least 3 partitions:
    - a ?? GB partition for the / (root) mount (the size depends on what you will install and how much space you will need for running that s/w)
    - a 128MB or larger partition for the /boot mount
    - a ?? GB partition as swap space mount (typically rule of thumb is 2x RAM for 4GB and lower, else around 25% of total RAM)
    I would not create separate partitions for the old-style /u01 mount points for Oracle. That part of the OFA (Optimal Flexible Architecture) standards is old, pre-ASM and pre- automated Oracle-managed data files.
    You need space for installing an Oracle Home. I do not see the need for a dedicated mount point for that. I use the root file system.
    You need space for the actual database. If you use Oracle ASM, then I would create raw partitions (across disks) to be used for database storage. These will not be mounted or formatted. Instead these will be assigned to an ASM diskgroup (and striped by default) and used for database storage.
    If you do not use ASM, then you need a cooked file system for storage of database files. In that case separate mount point(s) make sense. Also, it makes sense not to manually apply the old OFA standards, but instead use Oracle-managed database files. In that case you need a single mount point (e.g. /oracledb); set that as the base directory for Oracle to use for the database. Oracle will create an OFA-compliant database directory tree and files under that base directory/mount point.
    Keep in mind that the more mount points you have for the database, the more difficult your job becomes to manage storage. The easiest is a single mount point for the database and using Oracle to manage the OFA compliant side for you on that mount point.
    Also, you cannot and should not attempt some kind of manual striping layout of the database across multiple mountpoints. This may have made some sense back 10+ years ago. It no longer does.
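    If you do decide you want a dedicated /u01, a sketch of creating it (the device name /dev/sdb1 is a placeholder; ext3 is the OEL 5.6 default):
    # create the filesystem, mount it persistently, and add the staging directory
    mkfs -t ext3 /dev/sdb1
    mkdir -p /u01
    echo "/dev/sdb1  /u01  ext3  defaults  1 2" >> /etc/fstab
    mount /u01
    mkdir -p /u01/StageR12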

  • Mount point getting decreased frequently

    Hi,
    Free space on the database mount point in our Oracle E-Business Suite environment shrinks every couple of days.
    I cross-checked the trace files; they are generated normally, and I also store the archive log files on a separate mount point.
    Could anyone kindly guide me on this issue?
    Thanks & Regards
    Kesav

    Kesavan G wrote:
    Hi,
    Free space on the database mount point in our Oracle E-Business Suite environment shrinks every couple of days.
    I cross-checked the trace files; they are generated normally, and I also store the archive log files on a separate mount point.
    Could anyone kindly guide me on this issue?
    Thanks & Regards
    Kesav
    Hi,
    What are the operating system and EBS versions?
    Do you have any scheduled scripts in crontab/scheduled tasks to delete trace and old files?
    Do you store only database-related files, or also application-related files? If you store application-related files on the same mount point, check whether you have scheduled the "Purge Concurrent Request and/or Manager Data" program.
    Any RMAN backups in this mount point?
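    A sketch for narrowing down what is consuming the space (paths are placeholders for the actual mount point):
    # largest directories under the mount point
    du -sk /u01/oradata/* | sort -n | tail
    # files larger than ~10 MB that changed in the last two days
    find /u01 -type f -mtime -2 -size +20480 -print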
    Thanks

  • HANA Mount Point for MCOS

    Hi All,
    We have implemented a BW on HANA system, all as a single instance on one piece of hardware. Now we have to implement HANA in MCOS for the DEV and QAS systems.
    The current file system is:
    Filesystem        Size  Used  Avail  Use%  Mounted on
    /dev/sda1          63G   11G    50G   18%  /
    devtmpfs          127G  280K   127G    1%  /dev
    tmpfs             213G  100K   213G    1%  /dev/shm
    /dev/sapmntdata   1.1T   16G   1.1T    2%  /sapmnt
    We need to know the new mount points for an MCOS-type implementation, where our DEV and QAS will reside in the same HANA DB box.
    Thanks,
    Sharib
    Message was edited by: Sharib Tasneem

    Hi All,
    There is no separate mount point needed for an MCOS HANA system.
    You need to provide the log directory, data directory and shared folder during MCOS creation.
    The log directory will be /<Shared Directory>/log/SID2, which in our case was /sapmnt/log/SID2.
    The data directory will be /<Shared Directory>/data/SID2, which in our case was /sapmnt/data/SID2.
    And the shared location, which is /sapmnt/ in our case.
    Use HLM (the HANA Lifecycle Management tool) to create the new HANA SID. It was a cakewalk for updating the HANA revision, client, etc.
    Thanks and Regards,
    Sharib
    Note: You need to create the SID2 directory inside that location and give the <SID1>adm user appropriate permission to write there.
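    A sketch of the directory preparation from the note (SID2 and the adm user name are placeholders):
    # create data and log directories for the second SID under the shared mount
    mkdir -p /sapmnt/data/SID2 /sapmnt/log/SID2
    # give the first system's adm user write access, per the note above
    chown sid1adm:sapsys /sapmnt/data/SID2 /sapmnt/log/SID2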
    Message was edited by: Sharib Tasneem

  • Nfs mount point does not allow file creations via java.io.File

    Folks,
    I have mounted an nfs drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files no problem. They appear in iFS as I'd expect. However, if I write to the nfs mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, BUT only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native nfs mountpoint works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK!! So why not via a unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    I have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution has been not to create the file via java.io.File.createNewFile() before adding content via an OutputStream. If the file creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an nfs mount point to iFS (at the operating system level, rather than adding a folder path relationship via the Java API)?
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncomment the line below to reproduce the failure:
        //   java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        // file.createNewFile();
        // Letting FileOutputStream create the file implicitly avoids the error.
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead = 0;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • How to keep aggregated link interface across live upgrade partitions?

    I'm running Solaris 10 11/06 s10x_u3wos_10 X86 on a 2100. I have nge1 and nge0 in an aggregated link interface aggr1. I have the /etc/hostname.aggr1 file, and aggr1 comes up fine on reboots. I also have a Live Upgrade partition, and it doesn't come up when I activate my Live Upgrade partition. I get a message on the console saying it failed to plumb aggr1, so I can see it was at least trying. I do see the /etc/hostname.aggr1 on the Live Upgrade partition and it has the correct IP & netmask, so that file synchronized fine. I don't have any errors or warnings in /etc/lu/sync.log. My other link interface bge0 (not aggregated) comes up fine on the Live Upgrade partition, it is just the aggregated one that doesn't come up. Also, when I removed the aggregated interface, nge1 and nge0 came up fine on the Live Upgrade partition too. It seems to be just the aggregated interface that has problems. Is there something else I need to do on the original partition to make sure aggr1 will come up on the Live Upgrade partition?
    This seemed similar to bug id 6369648, but I have the patch 120991-02 installed, and I don't lose aggr1 on reboots. I only lose it after activating the other Live Upgrade partition.

    I just found a file, /etc/aggregation.conf, which was not added to the Live Upgrade synclist (/etc/lu/synclist). I'm hoping that by adding that file, it will get synchronized and the plumb will work.
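    A sketch of that addition (synclist entries are "file action" pairs; OVERWRITE replaces the file on the target BE with the source BE's copy at sync time):
    # copy the aggregation config to the newly activated BE during sync
    echo "/etc/aggregation.conf    OVERWRITE" >> /etc/lu/synclist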

  • Live upgrade only for zfs root?

    Is Live Upgrade only for ZFS root on 5/09? Is this true? I have tried to do live upgrades previously and have had no luck, particularly on my old Blade 1000 with an 18 GB drive.

    Reading over this post I see it is a little unclear. I am trying to upgrade a u6 installation that has a zfs root to u7.

  • Which patch to apply when live upgrading to a new OS

    Hello Guys,
    when you use Live Upgrade to upgrade to a new OS level, which patches do you apply before performing the upgrade? I used the prerequisite patches from 10REC when performing Live Upgrade patching. Do you use these same prerequisite patches when upgrading to a NEW OS LEVEL?
    Your response will be truly appreciated.
    Thanks

    Hi.
    I hope this helps:
    http://docs.oracle.com/cd/E26505_01/html/E28038/preconfig-17.html#luplanning-7
    Ensure that you have the most recently updated patch list by consulting http://support.oracle.com (My Oracle Support). Search for the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support.
    The patches listed in knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support are subject to change at any time. These patches potentially fix defects in Live Upgrade, as well as fix defects in components that Live Upgrade depends on. If you experience any difficulties with Live Upgrade, please check and make sure that you have the latest Live Upgrade patches installed.
    If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the patch cluster recommended for the Java 2 runtime environment in order to run the Live Upgrade installer and install the packages. To install the Live Upgrade packages, use the pkgadd command, or install the recommended patch cluster for the Java 2 runtime environment. The patch cluster is available on http://support.oracle.com (My Oracle Support).
    For instructions about installing the Live Upgrade software, see Installing Live Upgrade.
    Required patches are available via MOS (My Oracle Support: http://support.oracle.com), but to access that site you must have an active service contract.
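    As a related sketch, the usual Live Upgrade package refresh before upgrading to a new release (the media path is a placeholder for wherever the target release's image is mounted; SUNWlucfg only exists on newer releases, so omit it where absent):
    # replace the old Live Upgrade packages with the ones from the target release
    pkgrm SUNWlucfg SUNWluu SUNWlur
    pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu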
    Regards.

  • Live upgrade - solaris 8/07 (U4) , with non-global zones and SC 3.2

    Dears,
    I need to use Live Upgrade for SC 3.2 with non-global zones, going from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and to update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents, but couldn't find one complete document covering the whole process.
    I know that upgrading Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut
