Sun Live Upgrade with local zones on Solaris 10

I have an M800 server with the global root (/) filesystem on a local disk and 6 local zones on another local disk. I am running Solaris 10 8/07 (SunOS 5.10).
I used Live Upgrade to patch the system and created a new BE (lucreate). Both root filesystems are mirrored as RAID-1.
When I ran lucreate, it copied all 6 local zone root filesystems to the global root filesystem and failed with not enough space.
What is the best procedure for using Live Upgrade with local zones?
Note: I have used Live Upgrade with the global zone only, and it worked without any problem.
regards,
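A minimal sketch of the kind of pre-check and lucreate invocation that keeps the copied zone roots off the global root slice; the BE names and device names (c1t1d0s0, c1t1d0s5) are placeholders, not taken from the system above:

# How much space will the copy need, and where do the zone roots live?
df -h /
zoneadm list -vc

# Map the ABE root to a slice big enough for the global root plus
# copies of all six zone roots (placeholder devices)
lucreate -c currentBE -n patchedBE \
    -m /:/dev/dsk/c1t1d0s0:ufs

# If the zone roots sit on their own vfstab filesystem, it is treated as
# shared between BEs and the zone roots are remapped on that filesystem,
# unless you map it into the new BE as well:
#   lucreate ... -m /zones:/dev/dsk/c1t1d0s5:ufs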

Similar Messages

  • Live Upgrade with Zones - still not working?

    Hi Guys,
    I'm trying to do a Live Upgrade from Solaris 10 update 3 to update 4 with a non-global zone installed. It's driving me crazy now.
    I did everything as described in the documentation: installed SUNWlucfg and supposedly updated SUNWluu and SUNWlur (supposedly, because they are exactly the same as in update 3), both from packages and with the script from the update 4 DVD, and installed all patches mentioned in 72099, but the lucreate process still complains about missing patches. I've checked five times that they are installed; they are. It doesn't even allow me to create a second BE. Once I detached the zone, everything went smoothly, but I had the impression that Live Upgrade with zones would work in update 4.
    It did create a second BE before SUNWlucfg was installed, but failed at the upgrade stage with exactly the same message: install patches according to 72099. After installation of SUNWlucfg the Live Upgrade process fails instantly; that's real progress, I must admit.
    Is it still "mission impossible" to Live Upgrade with non-global zones installed, or have I missed something?
    Any ideas or success stories are greatly appreciated. Thanks.

    I upgraded from u3 to u5.
    The upgrade went fine, the zones boot up but there are problems.
    sshd doesn't work
    svcs -vx prints out this:
    svc:/network/rpc/gss:default (Generic Security Service)
    State: uninitialized since Fri Apr 18 09:54:33 2008
    Reason: Restarter svc:/network/inetd:default is not running.
    See: http://sun.com/msg/SMF-8000-5H
    See: man -M /usr/share/man -s 1M gssd
    Impact: 8 dependent services are not running:
    svc:/network/nfs/client:default
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/system/webconsole:console
    svc:/milestone/multi-user-server:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/network/inetd:default (inetd)
    State: maintenance since Fri Apr 18 09:54:41 2008
    Reason: Restarting too quickly.
    See: http://sun.com/msg/SMF-8000-L5
    See: man -M /usr/share/man -s 1M inetd
    See: /var/svc/log/network-inetd:default.log
    Impact: This service is not running.
    It seems as though the container was not upgraded.
    more /etc/release in the container shows this:
    Solaris 10 11/06 s10s_u3wos_10 SPARC
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 14 November 2006
    How do I get it to fix the inetd service?
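    For the inetd "Restarting too quickly" state shown above, a minimal sequence to try inside the non-global zone is sketched below; the zone name is a placeholder, and if the zone's packages really are still at u3 (as /etc/release suggests), clearing the service may not stick until the zone itself is brought level with the global zone:
    # From the global zone, log in to the affected zone
    zlogin myzone
    # Inside the zone: see why inetd went to maintenance
    svcs -xv svc:/network/inetd:default
    tail /var/svc/log/network-inetd:default.log
    # Clear the maintenance state and bring the dependents back
    svcadm clear svc:/network/inetd:default
    svcadm enable -r svc:/network/ssh:default
    svcs -xv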

  • Sun cluster 3.20, live upgrade with non-global zones

    I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones running Solaris 10 8/07 u4, which I would like to upgrade to Solaris 10 u6 10/08. The root filesystem of the non-global zones is ZFS and on shared SAN disks so that it can be failed over.
    For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
    The tricky part is going to be performing a Live Upgrade on the non-global zones, as their root fs is on the shared disk. I have a free internal disk on each of the nodes for ABE environments. But when I run the lucreate command, is it going to put the ABE of the zones on the internal disk as well, or can I specify the location of the ABE for the non-global zones? Ideally I want this to be on the shared disk.
    Any assistance gratefully received

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut
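    Before running lucreate it can also help to confirm, read-only, exactly where each zone root lives, since that is what determines whether the ABE copy of a zone lands on the internal disk or stays on shared storage; pool and path names below are placeholders:
    # Where is each zone root configured?
    zoneadm list -cv
    # Which datasets and devices back those paths?
    zfs list -r zonepool
    df -h /zones
    # Note: lucreate's -m mappings cover the global critical filesystems;
    # zone roots on a filesystem shared between BEs get remapped to
    # <zonepath>-<BEname> on that same filesystem (see the warnings in
    # the next thread).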

  • Solaris Live Upgrade with NGZ

    Hi
    I am trying to perform a Live Upgrade on my 2 servers. Both of them have NGZs installed, and those NGZs are on a different zpool, not on the rpool, and are also on an external disk.
    I have installed all the latest patches required for LU to work properly, but when I perform an lucreate I start having problems... (new_s10BE is the new BE I'm creating)
    On my 1st Server:
    I have a global zone and 1 NGZ named mddtri. This is the error I am getting:
    ERROR: unable to mount zone <mddtri> in </.alt.tmp.b-VBb.mnt>.
    zoneadm: zone 'mddtri': zone root /zoneroots/mddtri/root already in use by zone mddtri
    zoneadm: zone 'mddtri': call to zoneadm failed
    ERROR: unable to mount non-global zones of ABE: cannot make bootable
    ERROR: cannot unmount </.alt.tmp.b-VBb.mnt/var/run>
    ERROR: unable to make boot environment <new_s10BE> bootable
    On my 2nd Server:
    I have a global zone and 10 NGZs. This is the error I am getting:
    WARNING: Directory </zoneroots/zone1> zone <global> lies on a filesystem shared between BEs, remapping path to </zoneroots/zone1/zone1-new_s10BE>
    WARNING: Device <zone1> is shared between BEs, remapping to <zone1-new_s10BE>
    This happens for all the running NGZs.
    Duplicating ZFS datasets from PBE to ABE.
    ERROR: The dataset <zone1-new_s10BE> is on top of ZFS pool. Unable to clone. Please migrate the zone  to dedicated dataset.
    ERROR: Unable to create a duplicate of <zone1> dataset in PBE. <zone1-new_s10BE> dataset in ABE already exists.
    Reverting state of zones in PBE <old_s10BE>
    ERROR: Unable to copy file system from boot environment <old_s10BE> to BE <new_s10BE>
    ERROR: Unable to populate file systems from boot environment <new_s10BE>
    Help, I need to sort this out a.s.a.p!

    Hi,
    I have the same problem with an attached A5200 with mirrored disks (Solaris 9, Volume Manager). Whereas the "critical" partitions should be copied to a second system disk, the mirrored partitions should be shared.
    Here is a script with lucreate.
    #!/bin/sh
    # Wrapper around lucreate that logs to a fixed directory
    Logdir=/usr/local/LUscripts/logs
    if [ ! -d ${Logdir} ]
    then
        echo ${Logdir} does not exist
        exit 1
    fi
    # Create the new BE "disk0" on the second system disk
    /usr/sbin/lucreate \
    -l ${Logdir}/$0.log \
    -o ${Logdir}/$0.error \
    -m /:/dev/dsk/c2t0d0s0:ufs \
    -m /var:/dev/dsk/c2t0d0s3:ufs \
    -m /opt:/dev/dsk/c2t0d0s4:ufs \
    -m -:/dev/dsk/c2t0d0s1:swap \
    -n disk0
    And here is the output
    root@ahbgbld800x:/usr/local/LUscripts >./lucreate_disk0.sh
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    INFORMATION: Unable to determine size or capacity of slice </dev/md/RAID-INT/dsk/d0>.
    ERROR: An error occurred during creation of configuration file.
    ERROR: Cannot create the internal configuration file for the current boot environment <disk3>.
    Assertion failed: *ptrKey == (unsigned long long)_lu_malloc, file lu_mem.c, line 362
    Abort - core dumped
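    For the "dataset is on top of ZFS pool" error earlier in this thread, the usual remedy is to give each zone root a dedicated child dataset before retrying lucreate; a hedged sketch, with pool, dataset and path names made up:
    # The zone must be halted before its zonepath can be moved
    zoneadm -z zone1 halt
    # Create a dedicated dataset instead of using the pool's top-level dataset
    zfs create zonepool/zone1
    zfs set mountpoint=/zoneroots/zone1-new zonepool/zone1
    # Move the zonepath onto the new dataset and restart the zone
    zoneadm -z zone1 move /zoneroots/zone1-new
    zoneadm -z zone1 boot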

  • Solaris 10 Live Upgrade with Veritas Volume Manager 4.1

    What is the latest version of Live Upgrade?
    I need to upgrade our systems from Solaris 8 to Solaris 10. All our systems have Veritas VxVM 4.1, with the OS disks encapsulated and mirrored.
    What's the best way to do the Live Upgrade? Does anyone have clean documents for this?

    There are more things that you need to do.
    Read the Veritas install guide -- it has a pretty good section on what needs to be done.
    http://www.sun.com/products-n-solutions/hardware/docs/Software/Storage_Software/VERITAS_Volume_Manager/
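    Whatever the VxVM layout, the Live Upgrade tools themselves should come from the release you are upgrading to, so it is worth refreshing them from the Solaris 10 media first; a sketch, assuming the media is mounted under /cdrom (the path is illustrative):
    # Remove the Live Upgrade packages that shipped with the old release
    # (skip any that are not installed)
    pkgrm SUNWlucfg SUNWluu SUNWlur
    # Install the versions from the Solaris 10 target media
    cd /cdrom/cdrom0/Solaris_10/Product
    pkgadd -d . SUNWlucfg SUNWlur SUNWluu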

  • Live upgrade with zones

    Hi,
    While trying to create a new boot environment in Solaris 10 update 6, I'm getting the following errors for my zone:
    Updating compare databases on boot environment <zfsBE>.
    Making boot environment <zfsBE> bootable.
    ERROR: unable to mount zones:
    zoneadm: zone 'OTM1_wa_lab': "/usr/lib/fs/lofs/mount -o ro /.alt.tmp.b-AKc.mnt/swdump /zones/app/OTM1_wa_lab-zfsBE/lu/a/swdump" failed with exit code 33
    zoneadm: zone 'OTM1_wa_lab': call to zoneadmd failed
    ERROR: unable to mount zone <OTM1_wa_lab> in </.alt.tmp.b-AKc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
    ERROR: Unable to remount ABE <zfsBE>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device <rootpool/ROOT/zfsBE>
    Making the ABE <zfsBE> bootable FAILED.
    Although my zone is running fine
    zoneadm -z OTM1_wa_lab list -v
    ID NAME STATUS PATH BRAND IP
    3 OTM1_wa_lab running /zones/app/OTM1_wa_lab native shared
    Does anybody know what could be the reason for this?

    http://opensolaris.org/jive/thread.jspa?messageID=322728
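    The exit code 33 on the lofs mount of /swdump usually points at an fs resource in the zone configuration that lucreate cannot reproduce under the /.alt.* mount; one workaround, mirroring the fix quoted further down this page, is to take the entry out of the zone configuration for the duration of the upgrade. The zone and dir names are from the error above; everything else is illustrative:
    # Note the current fs settings, then temporarily remove the entry
    zonecfg -z OTM1_wa_lab
        info fs
        remove fs dir=/swdump
        verify
        commit
        exit
    # Retry lucreate; afterwards, restore the entry with "add fs" using the
    # dir, special and type values noted from "info fs".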

  • Live Upgrade not working from Solaris 10 05/09 - 10/09

    I have a Blade 1000 that used to run SXCE, and I used to LU all of the time. I recently rejumpstarted it back to Solaris 10 05/09 to match production.
    Now that 10/09 is out, I went to do a normal LU; however, it completely bombs out. I'm assuming it's because of the weird device name, yet I can't figure out what is causing it. Any ideas?
    ========
    = Before =
    ========
    root@dxnpnc05:~ # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 9.24G 24.0G 54.5K /rpool
    rpool/ROOT 2.46G 24.0G 18K legacy
    rpool/ROOT/sol10u7 2.46G 24.0G 2.46G /
    rpool/appl 18K 24.0G 18K /appl
    rpool/export 2.41G 24.0G 18K /export
    rpool/export/home 2.41G 24.0G 2.41G /home
    rpool/local 1.63M 24.0G 1.63M /usr/local
    rpool/opt 279K 24.0G 279K /opt
    rpool/perl 23K 24.0G 23K /usr/perl5/site_perl
    rpool/pkg 107M 24.0G 107M /usr/pkg
    rpool/pkgsrc 263M 24.0G 263M /usr/pkgsrc
    rpool/swap 4G 27.4G 545M -
    root@dxnpnc05:~ # lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    sol10u7 yes yes yes no -
    =========
    = Creation =
    =========
    root@dxnpnc05:~ # lucreate -c sol10u7 -n sol10u8
    Analyzing system configuration.
    Comparing source boot environment <sol10u7> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <sol10u8>.
    Source boot environment is <sol10u7>.
    Creating boot environment <sol10u8>.
    Cloning file systems from boot environment <sol10u7> to create boot environment <sol10u8>.
    Creating snapshot for <rpool/ROOT/sol10u7> on <rpool/ROOT/sol10u7@sol10u8>.
    Creating clone for <rpool/ROOT/sol10u7@sol10u8> on <rpool/ROOT/sol10u8>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/sol10u8>.
    ERROR: cannot open ' ': invalid dataset name
    ERROR: cannot mount mount point </.alt.tmp.b-0lg.mnt/opt> device < >
    ERROR: failed to mount file system < > on </.alt.tmp.b-0lg.mnt/opt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to mount ABE <sol10u8>
    ERROR: Unable to clone the existing file systems from boot environment <sol10u7> to create boot environment <sol10u8>.
    ERROR: Cannot make file systems for boot environment <sol10u8>.
    ======
    = After =
    ======
    root@dxnpnc05:~ # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 9.24G 24.0G 54.5K /rpool
    rpool/ROOT 2.46G 24.0G 18K legacy
    rpool/ROOT/sol10u7 2.46G 24.0G 2.46G /
    rpool/ROOT/sol10u7@sol10u8 68.5K - 2.46G -
    rpool/ROOT/sol10u8 110K 24.0G 2.46G legacy
    rpool/appl 18K 24.0G 18K /appl
    rpool/export 2.41G 24.0G 18K /export
    rpool/export/home 2.41G 24.0G 2.41G /home
    rpool/local 1.63M 24.0G 1.63M /usr/local
    rpool/opt 279K 24.0G 279K /opt
    rpool/perl 23K 24.0G 23K /usr/perl5/site_perl
    rpool/pkg 107M 24.0G 107M /usr/pkg
    rpool/pkgsrc 263M 24.0G 263M /usr/pkgsrc
    rpool/swap 4G 27.4G 545M -
    root@dxnpnc05:~ # lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    sol10u7 yes yes yes no -
    sol10u8 no no no yes -
    Any ideas? Thanks!

    I have been trying to use luupgrade for Solaris 10 on SPARC, 05/09 -> 10/09.
    lucreate is successful, but luactivate directs me to install 'the rest of the packages' in order to make the BE stable enough to activate. I try to find the packages indicated, but find only "virtual packages" which contain only a pkgmap.
    I installed update 6 on a spare disk to make sure my u7 installation was not defective, but got similar results.
    I got beyond luactivate on x86 a while ago, but had other snags which I left unattended.
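    Before retrying, the half-created BE and the leftover datasets visible in the "After" listing above usually need to be cleaned up; a hedged sketch using the names from this thread:
    # Remove the broken boot environment from Live Upgrade's view
    ludelete sol10u8
    # If ludelete leaves the ZFS clone or snapshot behind, remove them too
    zfs destroy rpool/ROOT/sol10u8
    zfs destroy rpool/ROOT/sol10u7@sol10u8
    # Confirm the cleanup
    lustatus
    zfs list -r rpool/ROOT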

  • Live Upgrade with VDI

    Is there any reason why LU will not work with VDI and its built-in MySQL cluster?
    I attempted to LU a VDI 3.2.1 environment, and after rebooting a secondary node the service would not come back up. Unconfiguring and reconfiguring brought everything back to normal. Was this just an anomaly, or is there a procedure for using LU with VDI?

    Well LU does work fine with VDI. I upgraded a second VDI cluster without problems.

  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
    We also use Live Upgrade with the exact same system configuration on other nodes, except that the zones are UFS root, and there Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
    Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
    zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit
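    After a zonecfg change like the one above, it is worth confirming that the ABE now mounts and unmounts cleanly before activating it; a short check, using the BE name from this post:
    # Mount the alternate BE (including its zones), inspect it, release it
    lumount s10u9c33zfs /mnt
    cat /mnt/etc/release
    luumount s10u9c33zfs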

  • Best Practices for patching Sun Clusters with HA-Zones using LiveUpgrade?

    We've been running Sun Cluster for about 7 years now, and I for one love it. About a year ago, we started consolidating our standalone web servers into a 3-node cluster using multiple HA-Zones. For the most part, everything about this configuration works great! One problem we're having is with patching. So far, the only documentation I've been able to find that talks about patching clusters with HA-Zones is the following:
    http://docs.sun.com/app/docs/doc/819-2971/6n57mi2g0
    Sun Cluster System Administration Guide for Solaris OS
    How to Apply Patches in Single-User Mode with Failover Zones
    This documentation works, but has two major drawbacks:
    1) The nodes/zones have to be patched in single-user mode, which translates to major downtime for patching.
    2) If there are any problems during the patching process, or after the cluster is up, there is no simple back-out process.
    We've been using a small test cluster to test out using Live Upgrade with HA-Zones. We've worked out most of the bugs, but we are still in a position of patching our HA-Zoned clusters based on home-grown steps, and not anything blessed by Oracle/Sun.
    How are others patching Sun Cluster nodes with HA-Zones? Has anyone found or been given Oracle/Sun documentation that lists the steps to patch Sun Clusters with HA-Zones using Live Upgrade?
    Thanks!

    Hi Thomas,
    there is a blueprint that deals with this problem in much more detail. It is based on configurations that use ZFS for both the root and the zone roots, but it should be applicable to other environments as well: "Maintaining Solaris with Live Upgrade and Update On Attach" (http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach)
    Unfortunately, due to some redirection work in the joint Sun and Oracle network, access to the blueprint is currently not available. If you send me an email with your contact data I can send you a copy via email. (You'll find my address on the web.)
    Regards
    Hartmut
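    For reference, the generic Live Upgrade patching loop that such home-grown procedures are usually built around looks roughly like the sketch below; the BE name, patch directory and patch IDs are placeholders, and none of the cluster- or HA-zone-specific switchover steps are shown:
    # Create the alternate BE (with a ZFS root this is a snapshot/clone)
    lucreate -n patched-be
    # Apply patches to the alternate BE while the node stays in service
    luupgrade -t -n patched-be -s /var/tmp/patches 123456-01 123457-02
    # Activate the patched BE and reboot into it during the maintenance window
    luactivate patched-be
    init 6    # use init or shutdown, not reboot, so the BE switch completes
    # To back out, luactivate the previous BE and run init 6 again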

  • Solaris 10 U1 needs to be patched to Sun UC 1.0.4 for local zones support

    The version of Sun UC (Update Connection) integrated into S10 U1 does not support systems with local zones configured. This has since been fixed, but you first need to patch the system to upgrade to Sun UC 1.0.4. This can be done on a SPARC system as follows:
    $ smpatch download -i 121118-06
    $ smpatch add -i 121118-06
    For X86 the patch is 121119-06.
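    To confirm the patch actually landed before rerunning smpatch, something like this should do (121118-06 is the SPARC patch from the post above):
    showrev -p | grep 121118
    smpatch update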

  • Live Upgrade hangs at Configuring devices. on Sun V440

    Hi all,
    I am trying to upgrade a V440 which is running Solaris 8 2/04 to Solaris 10 8/07.
    I have patched the V440 Solaris 8 build to the latest recommended patch cluster, and have also installed p7zip utility and followed the recommended patching procedures in Sun Doc 206844.
    I have installed the Live Upgrade packages from the Solaris Media Kit (8/07) and successfully created the Solaris 10 Boot Environment based on the Solaris 8 boot environment, upgraded it and activated it.
    However, on rebooting, the new Solaris 10 boot environment hangs at "Configuring devices." and fails to proceed any further in the boot process.
    Can anyone assist please?

    Done it!
    It seems the only way I could get the V440 to upgrade to Solaris 10 was to upgrade the alternate boot environment to Solaris 9 (which required a reinstallation of the SAN foundation kit - which might be where Solaris 10 was hanging), and then a further upgrade of the alternate boot environment to Solaris 10.
    I might try uninstalling the SAN foundation kit, and then an upgrade from Solaris 8 to Solaris 10 and see if this confirms my theory. Will keep you posted.
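    When a freshly activated BE stalls at "Configuring devices.", a verbose boot from the OBP can show which driver or device is being probed when it hangs; a hedged sketch of the usual diagnostics, none of it specific to this V440:
    ok boot -v                   # verbose boot: note the last device probed
    ok boot -m milestone=none    # minimal boot to get a shell and read logs
    # once at the shell:
    tail -50 /var/adm/messages
    svcadm milestone all         # continue the normal boot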

  • Smpatch/local zones, error even after zone uninstalled, fix

    Solaris 10, Blade 1500 (silver)
    Running smpatch update kept coming back with -
    "This operation is not supported by this application for systems with local zones."
    even though I had uninstalled the local zone I'd created earlier. I mv'ed the /etc/zones/index file to a new name and all was well. smpatch update completed successfully.
    Just a tip,
    D

    Correct; however, doing an uninstall on a zone would leave the above-mentioned index file populated with the removed zone. zoneadm list -c would still report the zone as there until the index file was edited/replaced/removed.
    It's most likely my limited knowledge of zones, but I thought the uninstall was the way to fully remove a zone.
    Anywho, it's just a tip ;)
    D
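    For completeness, the sequence that removes a zone from /etc/zones/index without hand-editing the file is an uninstall followed by a zonecfg delete; a minimal sketch with a placeholder zone name:
    zoneadm -z oldzone halt         # if it is still running
    zoneadm -z oldzone uninstall -F
    zonecfg -z oldzone delete -F    # this is what removes the index entry
    zoneadm list -cv                # should no longer list the zone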

  • Removing a slice from Live Upgrade

    Some time ago I used Live Upgrade to update a Solaris 9 4/04 V490 to Solaris 10 6/06. The internal disks (2 x 146 GB) are mirrored using SVM. I used a couple of extra slices for the upgrade (/ and /var). I now need to set up a new filesystem for an Oracle Grid Control install, so I need to reallocate one of the slices currently allocated to Solaris 9 (/var). I assume that I need to remove the slice/metadevice from Live Upgrade and then delete the SVM submirrors and mirror.
    What steps/commands are needed to cleanup Live Upgrade and reuse the meta disk?
    Thanks,
    sysglen

    Assuming you have a filesystem on the volume, then you can't.
    Solaris has no support for shrinking filesystems.
    Well, not until ZFS, but that's a different issue.
    If you don't have a filesystem on the volume, you should be able to use metaclear to scrap the volume, then re-add the devices you do want.
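    Assuming the Solaris 9 BE is no longer needed, a rough outline of the cleanup is to drop the BE first and then tear down the SVM metadevices that backed its /var slice; the BE and metadevice names below are placeholders:
    lustatus                 # confirm which BE owns the old /var slice
    ludelete Solaris9BE
    # Clear the mirror first, then its submirrors (placeholder metadevices)
    metaclear d30
    metaclear d31 d32
    metastat -p              # verify they are gone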

  • Live Upgrade - Solaris 10 8/07 (U4), with non-global zones and SC 3.2

    Dears,
    I need to use Live Upgrade for SC 3.2 with non-global zones, Solaris 10 U4 to Solaris 10 10/09 (latest release), and update the cluster to 3.2 U3.
    I don't know where to start. I've read lots of documents, but couldn't find one complete document to cover the whole process.
    I know that upgrade for Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut
