Live Upgrade with VDI
Is there any reason why LU will not work with VDI and its built-in MySQL cluster?
I attempted to LU a VDI 3.2.1 environment, and after rebooting a secondary node the service would not come back up. Unconfiguring and reconfiguring brought everything back to normal. Was this just an anomaly, or is there a procedure for using LU with VDI?
Well, LU does work fine with VDI. I upgraded a second VDI cluster without problems.
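For anyone retrying this, a minimal sketch of the standard LU cycle; the BE names and the /cdrom/cdrom0 media path are placeholder assumptions, not details from the original post:

```shell
# Standard Live Upgrade cycle, sketched as reviewable command strings.
# BE names and media path below are illustrative assumptions.
PBE=vdi_current                 # current boot environment
ABE=vdi_upgrade                 # alternate BE that gets upgraded
lucreate_cmd="lucreate -c $PBE -n $ABE"
luupgrade_cmd="luupgrade -u -n $ABE -s /cdrom/cdrom0"
luactivate_cmd="luactivate $ABE"
# Print for review; on a real system run each in turn after checking.
printf '%s\n' "$lucreate_cmd" "$luupgrade_cmd" "$luactivate_cmd"
```

After activation, reboot with `init 6` rather than `reboot` so the boot environment switch completes cleanly.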
Similar Messages
-
Live Upgrade with Zones - still not working ?
Hi Guys,
I'm trying to do Live Upgrade from Solaris update 3 to update 4 with a non-global zone installed. It's driving me crazy now.
I did everything as described in the documentation: installed SUNWlucfg and supposedly updated SUNWluu and SUNWlur (supposedly, because they are exactly the same as in update 3), both from packages and with the script from the update 4 DVD, and installed all patches mentioned in 72099. But the lucreate process still complains about missing patches, and I've checked five times that they're installed. They are. It doesn't even allow me to create a second BE. Once I detached the zone, everything went smoothly, but I was under the impression that Live Upgrade with zones would work in Update 4.
It did create a second BE before SUNWlucfg was installed, but failed at the upgrade stage with exactly the same message: install patches according to 72099. After installation of SUNWlucfg, the Live Upgrade process fails instantly; that's real progress, I must admit.
Is it still "mission impossible" to Live Upgrade with non-global zones installed? Or have I missed something?
Any ideas or success stories are greatly appreciated. Thanks.
I upgraded from u3 to u5.
The upgrade went fine, the zones boot up but there are problems.
sshd doesn't work.
svcs -xv prints out this:
svc:/network/rpc/gss:default (Generic Security Service)
State: uninitialized since Fri Apr 18 09:54:33 2008
Reason: Restarter svc:/network/inetd:default is not running.
See: http://sun.com/msg/SMF-8000-5H
See: man -M /usr/share/man -s 1M gssd
Impact: 8 dependent services are not running:
svc:/network/nfs/client:default
svc:/system/filesystem/autofs:default
svc:/system/system-log:default
svc:/milestone/multi-user:default
svc:/system/webconsole:console
svc:/milestone/multi-user-server:default
svc:/network/smtp:sendmail
svc:/network/ssh:default
svc:/network/inetd:default (inetd)
State: maintenance since Fri Apr 18 09:54:41 2008
Reason: Restarting too quickly.
See: http://sun.com/msg/SMF-8000-L5
See: man -M /usr/share/man -s 1M inetd
See: /var/svc/log/network-inetd:default.log
Impact: This service is not running.
It seems as though the container was not upgraded.
more /etc/release in the container shows this:
Solaris 10 11/06 s10s_u3wos_10 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 14 November 2006
How do I get it to fix the inetd service? -
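One way to approach it, assuming the dependency chain shown above is the whole problem: list every service in maintenance, then `svcadm clear` each FMRI (starting with inetd) and let SMF restart the dependents. The filter below parses `svcs -a`-style output; the piped sample line stands in for real output so the sketch can be tried anywhere:

```shell
# Print the FMRI of every service in maintenance from `svcs -a` output
# (columns: STATE STIME FMRI).
list_maintenance() {
    awk '$1 == "maintenance" { print $3 }'
}
# Inside the zone:  svcs -a | list_maintenance
# then, per FMRI:   svcadm clear svc:/network/inetd:default
printf 'maintenance     9:54:41 svc:/network/inetd:default\n' | list_maintenance
```

If inetd immediately returns to maintenance ("Restarting too quickly"), check /var/svc/log/network-inetd:default.log first rather than clearing it repeatedly.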
Solaris 10 Live Upgrade with Veritas Volume Manager 4.1
What is the latest version of Live Upgrade?
I need to upgrade our systems from Solaris 8 to Solaris 10. All our systems have Veritas VxVM 4.1, with the O.S. disks encapsulated and mirrored.
What's the best way to do the Live Upgrade? Does anyone have clean documents for this?
There are more things that you need to do.
Read the Veritas install guide -- it has a pretty good section on what needs to be done.
http://www.sun.com/products-n-solutions/hardware/docs/Software/Storage_Software/VERITAS_Volume_Manager/ -
Sun Live Upgrade with local zones Solaris 10
I have an M800 server running the global root (/) filesystem on a local disk and 6 local zones on another local disk. I am running Solaris 5.10 8/07.
I used Live Upgrade to patch the system and created a new BE (lucreate). Both root filesystems are mirrored as RAID-1.
When I ran lucreate, it copied all 6 local zones' root filesystems to the global root fs and failed with not enough space.
What is the best procedure for using LU with local zones?
Note: I have used LU with the global zone only, and it worked without any problem.
Regards,
I have been trying to use luupgrade for Solaris 10 on SPARC, 05/09 -> 10/09.
lucreate is successful, but luactivate directs me to install 'the rest of the packages' in order to make the BE stable enough to activate. I try to find the packages indicated, but find only "virtual packages" which contain only a pkgmap.
I installed upgrade 6 on a spare disk to make sure my u7 installation was not defective, but got similar results.
I got beyond luactivate on x86 a while ago, but had other snags which I left unattended. -
Sun cluster 3.20, live upgrade with non-global zones
I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones, running Sol 10 8/07 U4, which I would like to upgrade to Sol 10 U6 10/08. The root filesystem of the non-global zones is ZFS on shared SAN disks so that they can be failed over.
For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
The tricky part is going to be performing a Live Upgrade on the non-global zones, as their root fs is on the shared disk. I have a free internal disk on each of the nodes for ABE environments. But when I run the lucreate command, is it going to put the ABE of the zones on the internal disk as well, or can I specify the location of the ABE for the non-global zones? Ideally I want this to be on the shared disk.
Any assistance gratefully received.
Hi,
I am not sure whether this document:
http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
has been on the list of docs you found already.
If you click on the download link, it won't work. But if you use the Tools icon in the upper right hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
Regards
Hartmut -
Hi
I am trying to perform a Live Upgrade on my 2 servers; both of them have NGZs installed, and those NGZs are on a different zpool, not the rpool, and on an external disk.
I have installed all the latest patches required for LU to work properly, but when I perform an lucreate I start having problems... (new_s10BE is the new BE I'm creating.)
On my 1st Server:
I have a global zone and 1 NGZ named mddtri. This is the error I am getting:
ERROR: unable to mount zone <mddtri> in </.alt.tmp.b-VBb.mnt>.
zoneadm: zone 'mddtri': zone root /zoneroots/mddtri/root already in use by zone mddtri
zoneadm: zone 'mddtri': call to zoneadm failed
ERROR: unable to mount non-global zones of ABE: cannot make bootable
ERROR: cannot unmount </.alt.tmp.b-VBb.mnt/var/run>
ERROR: unable to make boot environment <new_s10BE> bootable
On my 2nd Server:
I have a global zone and 10 NGZs. This is the error I am getting:
WARNING: Directory </zoneroots/zone1> zone <global> lies on a filesystem shared between BEs, remapping path to </zoneroots/zone1/zone1-new_s10BE>
WARNING: Device <zone1> is shared between BEs, remapping to <zone1-new_s10BE>
*This happens for all the running NGZs.*
Duplicating ZFS datasets from PBE to ABE.
ERROR: The dataset <zone1-new_s10BE> is on top of ZFS pool. Unable to clone. Please migrate the zone to dedicated dataset.
ERROR: Unable to create a duplicate of <zone1> dataset in PBE. <zone1-new_s10BE> dataset in ABE already exists.
Reverting state of zones in PBE <old_s10BE>
ERROR: Unable to copy file system from boot environment <old_s10BE> to BE <new_s10BE>
ERROR: Unable to populate file systems from boot environment <new_s10BE>
Help, I need to sort this out a.s.a.p!
Hi,
I have the same problem with an attached A5200 with mirrored disks (Solaris 9, Volume Manager). Whereas the "critical" partitions should be copied to a second system disk, the mirrored partitions should be shared.
Here is a script with lucreate.
#!/bin/sh
# Create a new BE on disk c2t0d0, logging per script name.
Logdir=/usr/local/LUscripts/logs
Script=`basename "$0" .sh`
if [ ! -d "${Logdir}" ]
then
echo "${Logdir} does not exist" >&2
exit 1
fi
/usr/sbin/lucreate \
-l "${Logdir}/${Script}.log" \
-o "${Logdir}/${Script}.error" \
-m /:/dev/dsk/c2t0d0s0:ufs \
-m /var:/dev/dsk/c2t0d0s3:ufs \
-m /opt:/dev/dsk/c2t0d0s4:ufs \
-m -:/dev/dsk/c2t0d0s1:swap \
-n disk0
And here is the output
root@ahbgbld800x:/usr/local/LUscripts >./lucreate_disk0.sh
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
INFORMATION: Unable to determine size or capacity of slice </dev/md/RAID-INT/dsk/d0>.
ERROR: An error occurred during creation of configuration file.
ERROR: Cannot create the internal configuration file for the current boot environment <disk3>.
Assertion failed: *ptrKey == (unsigned long long)_lu_malloc, file lu_mem.c, line 362<br />
Abort - core dumped -
Hi,
While trying to create a new boot environment in Solaris 10 update 6, I'm getting the following errors for my zone:
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
ERROR: unable to mount zones:
zoneadm: zone 'OTM1_wa_lab': "/usr/lib/fs/lofs/mount -o ro /.alt.tmp.b-AKc.mnt/swdump /zones/app/OTM1_wa_lab-zfsBE/lu/a/swdump" failed with exit code 33
zoneadm: zone 'OTM1_wa_lab': call to zoneadmd failed
ERROR: unable to mount zone <OTM1_wa_lab> in </.alt.tmp.b-AKc.mnt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Unable to remount ABE <zfsBE>: cannot make ABE bootable
ERROR: no boot environment is mounted on root device <rootpool/ROOT/zfsBE>
Making the ABE <zfsBE> bootable FAILED.
Although my zone is running fine:
zoneadm -z OTM1_wa_lab list -v
ID NAME STATUS PATH BRAND IP
3 OTM1_wa_lab running /zones/app/OTM1_wa_lab native shared
Does anybody know what could be the reason for this?
http://opensolaris.org/jive/thread.jspa?messageID=322728
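Failures like this lofs mount (exit code 33) usually trace back to a zone fs resource whose special path lives on a filesystem shared between boot environments, like the /swdump entry in the error above. A small parser for `zonecfg -z <zone> info fs`-style output can list the dir/special pairs to inspect; the piped sample stands in for real zonecfg output, so this is a sketch, not the confirmed fix:

```shell
# Pair up "dir:" and "special:" lines from zonecfg fs output so
# mounts shared between BEs are easy to spot.
list_fs_pairs() {
    awk '/^[[:space:]]*dir:/ { d = $2 } /^[[:space:]]*special:/ { print d, $2 }'
}
# Real usage: zonecfg -z OTM1_wa_lab info fs | list_fs_pairs
printf '    dir: /swdump\n    special: /swdump\n' | list_fs_pairs
```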
-
Live Upgrade fails on cluster node with zfs root zones
We are having issues using Live Upgrade in the following environment:
-UFS root
-ZFS zone root
-Zones are not under cluster control
-System is fully up to date for patching
We also use Live Upgrade with the exact same system configuration on other nodes, except the zones are UFS root, and there Live Upgrade works fine.
Here is the output of a Live Upgrade:
bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
Determining types of file systems supported
Validating file system requests
The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <sol10> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <sol10-20110505>.
Source boot environment is <sol10>.
Creating boot environment <sol10-20110505>.
Creating file systems on boot environment <sol10-20110505>.
Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
Mounting file systems for boot environment <sol10-20110505>.
Calculating required sizes of file systems for boot environment <sol10-20110505>.
Populating file systems on boot environment <sol10-20110505>.
Checking selection integrity.
Integrity check OK.
Preserving contents of mount point </>.
Preserving contents of mount point </var>.
Copying file systems that have not been preserved.
Creating shared file system mount points.
Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
list of <2> potential problems (issues) that were encountered while
populating boot environment <sol10-20110505>.
INFORMATION: You must review the issues listed in
</tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
resolved. In general, you can ignore warnings about files that were
skipped because they did not exist or could not be opened. You cannot
ignore errors such as directories or files that could not be created, or
file systems running out of disk space. You must manually resolve any such
problems before you activate boot environment <sol10-20110505>.
Creating compare databases for boot environment <sol10-20110505>.
Creating compare database for file system </var>.
Creating compare database for file system </>.
Updating compare databases on boot environment <sol10-20110505>.
Making boot environment <sol10-20110505> bootable.
ERROR: unable to mount zones:
WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
zoneadm: zone 'img1': call to zoneadmd failed
ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
Making the ABE <sol10-20110505> bootable FAILED.
ERROR: Unable to make boot environment <sol10-20110505> bootable.
ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
ERROR: Cannot make file systems for boot environment <sol10-20110505>.
Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try removing the lofs from the zone configuration and then try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
Thanks
I was able to successfully do a Live Upgrade with zones on a ZFS root in Solaris 10 update 9.
When attempting to do a "lumount s10u9c33zfs", it gave the following error:
ERROR: unable to mount zones:
zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
zoneadm: zone 'edd313': call to zoneadmd failed
ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
ERROR: unmounting partially mounted boot environment file systems
ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
ERROR: cannot mount boot environment by name <s10u9c33zfs>
The solution in this case was:
zonecfg -z edd313
info ;# display current setting
remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
verify ;# check change
commit ;# commit change
exit -
How to upgrade (with a clean install) an OEL4 server to OL5 on a live system
My problem is that I cannot find any meaningful information on how to perform an OL upgrade on a live system with a running database.
The system is running OEL 4.9 (migrated from RHEL 4 to OEL) and I want to upgrade it to OL 5.7 (wanted to use 6.1 but that is not yet certified for 11.2)
I know that using Anaconda's upgrade is not supported, so I guess I will have to use a clean install. (The system is full of legacy RPMs and old drivers that were needed in previous 4.x releases, and dependencies that I wouldn't want to upgrade.)
So I want to do a clean install of 5.7 on the server, but what is the process? There are 3 LVs: 1 for the OS (with partitions for /, /boot, /tmp, /usr, /var and /home),
1 for the Oracle software / data files, and 1 for exports / archive logs and such.
If I do a clean install on the OS LV (after taking a backup of everything), will I be able to reuse the existing Oracle software setup on the 2nd logical volume? Create the oracle user with the same uid/gid, run root.sh, etc. Will it start and be able to mount/open the database? (I am assuming the need to rebuild the installation.) There is also a home for 11.2 middleware web-tier utilities (for APEX).
I have been searching in vain for guides / insight into the correct procedure for upgrading OEL4 to OL5 (or OL6 if it gets certified in the next few months).
Any help appreciated.
Oli
My problem is that I cannot find any meaningful information on how to perform an OL upgrade on a live system with a running database.
Firstly, it is good to know your system is running satisfactorily.
After taking a full backup (and checking that the backup is good!), you must shut down the RDBMS instances.
1) Reboot from the distribution media.
2) Choose a full install; you do not want to upgrade.
3) Be very careful to click on the checkbox to "use custom setup" so that you will get a display of the current storage setup.
4) In the Anaconda Disk Druid screen, edit the displayed LVMs to use their old mount points.
5) Make absolutely certain that the checkboxes to reformat the LVMs for your database setup are clear, not checked.
6) Do install the default RPM package selection; the OL distro is configured to install the necessary prerequisites for an RDBMS setup.
7) After the installation completes, be sure to install the "oracle-validated" RPM package to do the necessary tuning.
8) Add back all the user accounts.
9) Move on to bringing up the RDBMS.
It looks scary, and it is, but it can be done. Practice installing this way before doing it for real -- you want to be familiar with each installation step and to know where you are in the process. -
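Step 8 above (adding the accounts back) is less error-prone if the uid/gid pairs are captured before the reinstall. A sketch, assuming the EL4/EL5 convention that regular accounts start at uid 500; the piped line is sample data, not output from this system:

```shell
# Keep name, uid and gid of non-system accounts from a passwd file.
save_users() {
    awk -F: '$3 >= 500 && $1 != "nfsnobody" { print $1, $3, $4 }'
}
# Before the reinstall:  save_users < /etc/passwd > /root/users.saved
# Afterwards, recreate each user with the same ids (groupadd/useradd)
# so ownership on the preserved Oracle LV still resolves correctly.
printf 'oracle:x:500:501:Oracle owner:/home/oracle:/bin/bash\n' | save_users
```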
Solaris 10 5/08 live upgrade only for customers with a service plan?
Live upgrade fails due to missing /usr/bin/7za
Which seems to be installed by adding patch 137322-01 on x86 according to release notes http://docs.sun.com/app/docs/doc/820-4078/installbugs-114?l=en&a=view
But this patch (and this may also be the case for the SPARC patch) is only available to customers with a valid service plan.
Does this mean that from now on it is required to purchase a service plan to run Solaris 10 and use the normal procedures for system upgrades?
A bit disappointing ...
Regards
/Flemming -
Live upgrade - solaris 8/07 (U4) , with non-global zones and SC 3.2
Dears,
I need to use Live Upgrade for SC 3.2 with non-global zones, from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and update the cluster to 3.2 U3.
I don't know where to start; I've read lots of documents, but couldn't find one complete document covering the whole process.
I know that upgrading Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it is supported with SC.
Appreciate your help.
Hi,
I am not sure whether this document:
http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
has been on the list of docs you found already.
If you click on the download link, it won't work. But if you use the Tools icon in the upper right hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
Regards
Hartmut -
Solaris 10 update 9 - live upgrade issues with ZFS
Hi
After doing a live upgrade from Solaris 10 update 8 to Solaris 10 update 9 the alternate boot environment I created is no longer bootable.
I have completed all the pre-upgrade steps, like:
- Installing the latest version of Live Upgrade from the update 9 ISO.
- Creating and testing the new boot environment.
- Creating a sysidcfg file, used by the Live Upgrade, that has auto_reg=disable in it.
There are also no errors while creating the boot environment, or even when activating it.
Here is the error I get:
SunOS Release 5.10 Version Generic_14489-06 64-bit
Copyright (c) 1983, 2010, Oracle and/or its affiliates. All rights reserved.
NOTICE: zfs_parse_bootfs: error 22
Cannot mount root on altroot/37 fstype zfs
*panic[cpu0]/thread=fffffffffbc28040: vfs mountroot: cannot mount root*
ffffffffffbc4a8d0 genunix:main+107 ()
Skipping system dump - no dump device configured
Does anyone know how I can fix this?
Edited by: user12099270 on 02-Feb-2011 04:49
Found the culprit... *142910-17*... breaks it.
System has findroot enabled GRUB
Updating GRUB menu default setting
GRUB menu default setting is unaffected
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <s10x_u8wos_08a> as <mount-point>//boot/grub/menu.lst.prev.
File </etc/lu/GRUB_backup_menu> propagation successful
Successfully deleted entry from GRUB menu
Validating the contents of the media </admin/x86/Patches/10_x86_Recommended/patches>.
The media contains 204 software patches that can be added.
Mounting the BE <s10x_u8wos_08a_Jan2011>.
Adding patches to the BE <s10x_u8wos_08a_Jan2011>.
Validating patches...
Loading patches installed on the system...
Done!
Loading patches requested to install.
Done!
The following requested patches have packages not installed on the system
Package SUNWio-tools from directory SUNWio-tools in patch 142910-17 is not installed on the system. Changes for package SUNWio-tools will not be applied to the system.
Package SUNWzoneu from directory SUNWzoneu in patch 142910-17 is not installed on the system. Changes for package SUNWzoneu will not be applied to the system.
Package SUNWpsm-ipp from directory SUNWpsm-ipp in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-ipp will not be applied to the system.
Package SUNWsshdu from directory SUNWsshdu in patch 142910-17 is not installed on the system. Changes for package SUNWsshdu will not be applied to the system.
Package SUNWsacom from directory SUNWsacom in patch 142910-17 is not installed on the system. Changes for package SUNWsacom will not be applied to the system.
Package SUNWmdbr from directory SUNWmdbr in patch 142910-17 is not installed on the system. Changes for package SUNWmdbr will not be applied to the system.
Package SUNWopenssl-commands from directory SUNWopenssl-commands in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-commands will not be applied to the system.
Package SUNWsshdr from directory SUNWsshdr in patch 142910-17 is not installed on the system. Changes for package SUNWsshdr will not be applied to the system.
Package SUNWsshcu from directory SUNWsshcu in patch 142910-17 is not installed on the system. Changes for package SUNWsshcu will not be applied to the system.
Package SUNWsshu from directory SUNWsshu in patch 142910-17 is not installed on the system. Changes for package SUNWsshu will not be applied to the system.
Package SUNWgrubS from directory SUNWgrubS in patch 142910-17 is not installed on the system. Changes for package SUNWgrubS will not be applied to the system.
Package SUNWzoner from directory SUNWzoner in patch 142910-17 is not installed on the system. Changes for package SUNWzoner will not be applied to the system.
Package SUNWmdb from directory SUNWmdb in patch 142910-17 is not installed on the system. Changes for package SUNWmdb will not be applied to the system.
Package SUNWpool from directory SUNWpool in patch 142910-17 is not installed on the system. Changes for package SUNWpool will not be applied to the system.
Package SUNWudfr from directory SUNWudfr in patch 142910-17 is not installed on the system. Changes for package SUNWudfr will not be applied to the system.
Package SUNWxcu4 from directory SUNWxcu4 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu4 will not be applied to the system.
Package SUNWarc from directory SUNWarc in patch 142910-17 is not installed on the system. Changes for package SUNWarc will not be applied to the system.
Package SUNWtftp from directory SUNWtftp in patch 142910-17 is not installed on the system. Changes for package SUNWtftp will not be applied to the system.
Package SUNWaccu from directory SUNWaccu in patch 142910-17 is not installed on the system. Changes for package SUNWaccu will not be applied to the system.
Package SUNWppm from directory SUNWppm in patch 142910-17 is not installed on the system. Changes for package SUNWppm will not be applied to the system.
Package SUNWtoo from directory SUNWtoo in patch 142910-17 is not installed on the system. Changes for package SUNWtoo will not be applied to the system.
Package SUNWcpc from directory SUNWcpc.i in patch 142910-17 is not installed on the system. Changes for package SUNWcpc will not be applied to the system.
Package SUNWftdur from directory SUNWftdur in patch 142910-17 is not installed on the system. Changes for package SUNWftdur will not be applied to the system.
Package SUNWypr from directory SUNWypr in patch 142910-17 is not installed on the system. Changes for package SUNWypr will not be applied to the system.
Package SUNWlxr from directory SUNWlxr in patch 142910-17 is not installed on the system. Changes for package SUNWlxr will not be applied to the system.
Package SUNWdcar from directory SUNWdcar in patch 142910-17 is not installed on the system. Changes for package SUNWdcar will not be applied to the system.
Package SUNWnfssu from directory SUNWnfssu in patch 142910-17 is not installed on the system. Changes for package SUNWnfssu will not be applied to the system.
Package SUNWpcmem from directory SUNWpcmem in patch 142910-17 is not installed on the system. Changes for package SUNWpcmem will not be applied to the system.
Package SUNWlxu from directory SUNWlxu in patch 142910-17 is not installed on the system. Changes for package SUNWlxu will not be applied to the system.
Package SUNWxcu6 from directory SUNWxcu6 in patch 142910-17 is not installed on the system. Changes for package SUNWxcu6 will not be applied to the system.
Package SUNWpcmci from directory SUNWpcmci in patch 142910-17 is not installed on the system. Changes for package SUNWpcmci will not be applied to the system.
Package SUNWarcr from directory SUNWarcr in patch 142910-17 is not installed on the system. Changes for package SUNWarcr will not be applied to the system.
Package SUNWscpu from directory SUNWscpu in patch 142910-17 is not installed on the system. Changes for package SUNWscpu will not be applied to the system.
Package SUNWcpcu from directory SUNWcpcu in patch 142910-17 is not installed on the system. Changes for package SUNWcpcu will not be applied to the system.
Package SUNWopenssl-include from directory SUNWopenssl-include in patch 142910-17 is not installed on the system. Changes for package SUNWopenssl-include will not be applied to the system.
Package SUNWdtrp from directory SUNWdtrp in patch 142910-17 is not installed on the system. Changes for package SUNWdtrp will not be applied to the system.
Package SUNWhermon from directory SUNWhermon in patch 142910-17 is not installed on the system. Changes for package SUNWhermon will not be applied to the system.
Package SUNWpsm-lpd from directory SUNWpsm-lpd in patch 142910-17 is not installed on the system. Changes for package SUNWpsm-lpd will not be applied to the system.
Package SUNWdtrc from directory SUNWdtrc in patch 142910-17 is not installed on the system. Changes for package SUNWdtrc will not be applied to the system.
Package SUNWhea from directory SUNWhea in patch 142910-17 is not installed on the system. Changes for package SUNWhea will not be applied to the system.
Package SUNW1394 from directory SUNW1394 in patch 142910-17 is not installed on the system. Changes for package SUNW1394 will not be applied to the system.
Package SUNWrds from directory SUNWrds in patch 142910-17 is not installed on the system. Changes for package SUNWrds will not be applied to the system.
Package SUNWnfsskr from directory SUNWnfsskr in patch 142910-17 is not installed on the system. Changes for package SUNWnfsskr will not be applied to the system.
Package SUNWudf from directory SUNWudf in patch 142910-17 is not installed on the system. Changes for package SUNWudf will not be applied to the system.
Package SUNWixgb from directory SUNWixgb in patch 142910-17 is not installed on the system. Changes for package SUNWixgb will not be applied to the system.
Checking patches that you specified for installation.
Done!
Approved patches will be installed in this order:
142910-17
Checking installed patches...
Executing prepatch script...
Installing patch packages...
Patch 142910-17 has been successfully installed.
See /a/var/sadm/patch/142910-17/log for details
Executing postpatch script...
Creating GRUB menu in /a
Installing grub on /dev/rdsk/c2t0d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)
Patch packages installed:
BRCMbnx
SUNWaac
SUNWahci
SUNWamd8111s
SUNWcakr
SUNWckr
SUNWcry
SUNWcryr
SUNWcsd
SUNWcsl
SUNWcslr
SUNWcsr
SUNWcsu
SUNWesu
SUNWfmd
SUNWfmdr
SUNWgrub
SUNWhxge
SUNWib
SUNWigb
SUNWintgige
SUNWipoib
SUNWixgbe
SUNWmdr
SUNWmegasas
SUNWmptsas
SUNWmrsas
SUNWmv88sx
SUNWnfsckr
SUNWnfscr
SUNWnfscu
SUNWnge
SUNWnisu
SUNWntxn
SUNWnv-sata
SUNWnxge
SUNWopenssl-libraries
SUNWos86r
SUNWpapi
SUNWpcu
SUNWpiclu
SUNWpsdcr
SUNWpsdir
SUNWpsu
SUNWrge
SUNWrpcib
SUNWrsgk
SUNWses
SUNWsmapi
SUNWsndmr
SUNWsndmu
SUNWtavor
SUNWudapltu
SUNWusb
SUNWxge
SUNWxvmpv
SUNWzfskr
SUNWzfsr
SUNWzfsu
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successfull.
Unmounting the BE <s10x_u8wos_08a_Jan2011>.
The patch add to the BE <s10x_u8wos_08a_Jan2011> completed.
Still need to know how to resolve it though... -
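Until the 142910-17 problem is understood, one hedged workaround is to drop that patch from the Recommended cluster's patch_order file before running the cluster install (patch_order is the standard ordering file inside 10_x86_Recommended; the patch id comes from the log above, the other ids below are just sample data):

```shell
# Remove one patch id from a patch_order-style list read on stdin.
drop_patch() {
    grep -v "^$1"
}
# Usage: drop_patch 142910-17 < patch_order > patch_order.trimmed
printf '119254-75\n142910-17\n142934-02\n' | drop_patch 142910-17
```

The skipped patch can then be investigated and applied on its own once the boot problem is resolved.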
How to delete file systems from a Live Upgrade environment
How to delete non-critical file systems from a Live Upgrade boot environment?
Here is the situation.
I have a Sol 10 upd 3 machine with 3 disks which I intend to upgrade to Sol 10 upd 6.
Current layout
Disk 0: 16 GB:
/dev/dsk/c0t0d0s0 1.9G /
/dev/dsk/c0t0d0s1 692M /usr/openwin
/dev/dsk/c0t0d0s3 7.7G /var
/dev/dsk/c0t0d0s4 3.9G swap
/dev/dsk/c0t0d0s5 2.5G /tmp
Disk 1: 16 GB:
/dev/dsk/c0t1d0s0 7.7G /usr
/dev/dsk/c0t1d0s1 1.8G /opt
/dev/dsk/c0t1d0s3 3.2G /data1
/dev/dsk/c0t1d0s4 3.9G /data2
Disk 2: 33 GB:
/dev/dsk/c0t2d0s0 33G /data3
The data file systems are not in use right now, and I was thinking of
partitioning the data3 into 2 or 3 file systems and then creating
a new BE.
However, the system already has a BE (named s10) and that BE lists
all of the filesystems, incl the data ones.
# lufslist -n 's10'
boot environment name: s10
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem fstype device size Mounted on Mount Options
/dev/dsk/c0t0d0s4 swap 4201703424 - -
/dev/dsk/c0t0d0s0 ufs 2098059264 / -
/dev/dsk/c0t1d0s0 ufs 8390375424 /usr -
/dev/dsk/c0t0d0s3 ufs 8390375424 /var -
/dev/dsk/c0t1d0s3 ufs 3505453056 /data1 -
/dev/dsk/c0t1d0s1 ufs 1997531136 /opt -
/dev/dsk/c0t1d0s4 ufs 4294785024 /data2 -
/dev/dsk/c0t2d0s0 ufs 36507484160 /data3 -
/dev/dsk/c0t0d0s5 ufs 2727290880 /tmp -
/dev/dsk/c0t0d0s1 ufs 770715648 /usr/openwin -
I browsed the Solaris 10 Installation Guide and the man pages
for the lu commands, but cannot find how to remove the data
file systems from the BE.
How do I do a live upgrade on this system?
Thanks for your help.
Thanks for the tips.
I commented out the entries in /etc/vfstab, also had to remove the files /etc/lutab and /etc/lu/ICF.1
and then could create the Boot Environment from scratch.
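For anyone hitting the same thing, the cleanup above could be sketched like this (the device names and BE names are assumptions taken from this thread, and the DRYRUN wrapper is hypothetical: it only prints the commands, so nothing destructive runs by accident):

```shell
# Sketch: reset stale Live Upgrade state so lucreate can start clean.
# Assumptions: paths and BE names come from the post; DRYRUN=1 means print only.
DRYRUN=1
run() { if [ "$DRYRUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run cp /etc/vfstab /etc/vfstab.orig    # back up vfstab before commenting
                                       # out the data-filesystem entries
run rm /etc/lutab /etc/lu/ICF.1        # drop the stale LU configuration
run lucreate -c s10 -n s10u6 \
    -m /:/dev/dsk/c0t2d0s0:ufs         # recreate the BE (slice is assumed)
```

Set DRYRUN=0 only after double-checking the device names against lustatus and format output.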
I was also able to create another boot environment and copied into it,
but now I'm facing a different problem, error when trying to upgrade.
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
s10 yes yes yes no -
s10u6 yes no no yes -
Now, I have the Solaris 10 Update 6 DVD image on another machine
which shares out the directory. I mounted it on this machine,
did a lofiadm and mounted that at /cdrom.
# ls -CF /cdrom /cdrom/boot /cdrom/platform
/cdrom:
Copyright boot/
JDS-THIRDPARTYLICENSEREADME installer*
License/ platform/
Solaris_10/
/cdrom/boot:
hsfs.bootblock sparc.miniroot
/cdrom/platform:
sun4u/ sun4us/ sun4v/
Now I did luupgrade and I get this error:
# luupgrade -u -n s10u6 -s /cdrom
ERROR: The media miniroot archive does not exist </cdrom/boot/x86.miniroot>.
ERROR: Cannot unmount miniroot at </cdrom/Solaris_10/Tools/Boot>.
I find it strange that this sparc machine is complaining about x86.miniroot.
BTW, the machine on which the DVD image is happens to be x86 running Sol 10.
I thought that wouldn't matter, as it is just NFS sharing a directory which has a DVD image.
What am I doing wrong?
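For the record, the error text suggests luupgrade went looking for an x86 miniroot. A quick way to check what the mounted image actually contains (just a sketch, with /cdrom as the mount point assumed above) is:

```shell
# Sketch: report which architecture an install-media mount is for.
# A SPARC luupgrade needs boot/sparc.miniroot on the media.
media_arch() {
  if   [ -e "$1/boot/sparc.miniroot" ]; then echo sparc
  elif [ -e "$1/boot/x86.miniroot" ];  then echo x86
  else echo "no miniroot found: check the mount"
  fi
}
media_arch /cdrom
```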
Thanks.
-
DiskSuite and Live Upgrade 2.0
I have two Solaris 7 boxes running DiskSuite to mirror the O/S disk onto another drive.
I need to upgrade to Solaris 8. In the past I have used Live Upgrade to do so, when I had enough free disk space to partition an existing disk, or an unused disk to hold the Solaris 8 system files.
In this case, I do not have sufficient free space on the boot disk. So, what is the best approach? It seems that I would have to:
1. unmirror the file system
2. install Solaris 8 onto the old mirror drive using LU 2.0
3. make the old mirror drive the boot drive
4. re-establish mirroring, being sure that it goes the right way from the Solaris 8 disk to the old boot disk
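Steps 1 and 4 above would look roughly like this in SVM/DiskSuite terms (the metadevice names d10/d11/d12 are hypothetical, and the DRYRUN wrapper only prints the commands, so nothing is actually detached):

```shell
# Sketch: break and later re-form a DiskSuite root mirror around Live Upgrade.
# d10 = mirror, d11/d12 = submirrors (hypothetical); DRYRUN=1 means print only.
DRYRUN=1
run() { if [ "$DRYRUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run metadetach d10 d12   # step 1: detach the second submirror, freeing its disk
run metaclear d12        # remove the now-unused submirror metadevice
# ... steps 2-3: lucreate/luupgrade onto the freed disk, luactivate, reboot ...
run metattach d10 d12    # step 4 sketch only: in practice the mirror is rebuilt
                         # around the new Solaris 8 root before re-attaching
```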
Comments, suggestions?
I recently built a system (specs below) and installed this card (MSI GF4 Ti4200 VTD8X MS8894, 128MB DDR), and when I try to use Live Update 2 (version 3.33.000, from the CD that came with the card), I get the same message:
"Warning!!! Your Display Card does not support MSI Live Update 2 function. Note: MSI Live Update 2 supports the Display Cards of MSI only."
I'm using the drivers/BIOS that came on the CD: driver version 6.13.10.4107, BIOS version 4.28.20.05.11. I see on the nVidia site that they have the 4109 drivers out now; should I try those?
I have also made sure to do the suggested modifications to IE (and I don't have PC-cillin installed):
"Note: In order to operate this application properly, please note the following suggests.
-Set the IE security setting 'Download signed ActiveX controls' to [Enable] or [Prompt]. (System default is [Prompt]).
-Disable 'WebTrap' of PC-cillin(R) or any web based anti-virus application when executing MSITM Live Update 2TM.
-Update Microsoft® Windows® Installer"
I downloaded a newer version of LiveUpdate (3.35.000) and installed it (after completely uninstalling the old version), and got the same results. Nothing on my system is currently overclocked.
Help!
System specs:
-Soyo SY-KT400 DRAGON Ultra (Platinum Edition) with latest BIOS & Chipset Drivers
-AMD Athlon XP Thoroughbred 2100+
-MSI GF4 Ti4200 VTD8X (MS-8894)
-WD Caviar Special Edition 80 GB HDD, 8 MB Cache
-512 MB Crucial PC2700 DDR (one stick, in DIMM #1)
-TDK 40/12/48 CD R/RW
-Daewoo 905DF Dynaflat 19" Monitor
-Windows XP Home Edition, SP1/all other updates current
-On-Board CMedia 6-channel audio
-On-Board VIA 10/100 Ethernet
-Altec-Lansing ATP3 Speakers
-
Live Upgrade 2.0 (from Sol8 K23 to Sol9 K05)
Installed LU 2.0 from solaris 9 CDs
Created a new BE (= copy of my Sol8 K23)
Start upgrade on my inactive BE to Sol9
insert Solaris 9 CD 1of2
luupgrade -u -n <INACT_BE> -s /cdrom/sol_9_403_sparc/s0
--> runs fine
eject cdrom
insert Solaris 9 CD 2of2
luupgrade -i -n <INACT_BE> -s /cdrom/sol_9_405_sparc_2 -O '-nodisplay'
After a few questions, the upgrade starts:
it first upgrades Live Upgrade OK,
then it starts upgrading Solaris,
then it fails.
I checked the logs on the <INACT_BE> and found in
/var/sadm/install/logs/Solaris_9_packages...
that it failed installing SUNWnsm (Netscape 7) because it was already installed!
It is true that I had SUNWnsm on my Solaris 8 system!
Why is this causing LU to fail ?
It should just skip that package and go to the next one.
For the sake of it, I deinstalled Netscape 7 from my <INACT_BE> using pkgrm -R.
I then restarted the LU using CD 2of2; now it goes further but fails on package SUNWjhrt (Java), which also already existed!
Am I missing something, or is LU just unusable?? Thanks
Fred,
I personally have never read that caveat. What is recommended is to always run the same version on components that use the same firmware bundle; in other words, for a B-series upgrade you need the Infrastructure bundle (which includes firmware for the Fabric Interconnects, IOMs and UCSM) and also the Server bundle (which includes the firmware for the CIMC, BIOS and adapter).
Bottom line: the recommendation is to run exactly the same version for components whose firmware comes from the same bundle, BUT UCSM 2.1 introduces an enhancement, "Mixed version support (for infra and server bundles firmware)", which allows combining SOME infrastructure bundles with some server bundles.
http://www.cisco.com/en/US/docs/unified_computing/ucs/release/notes/UCS_28313.html#wp58530 << Look for
"Operational enhancements"
These are the possible configurations I am aware of:
2.1(1f) infrastructure and 2.0(5a)+ server firmware
2.1(2a) infrastructure and 2.1(1f)+ server firmware
I hope that helps.
Rate ALL helpful answers.
-Kenny
Maybe you are looking for
-
I closed firefox and rebooted. After rebooting, I tried to load Firefox but the firefox.exe file was nowhere on the hard disk. I would like to restore my previous windows and tabs. How do I do this?
-
Due to a scam invading my computer recently, I cannot block these site in Parental Controls. This feature locks immediately and I must use Force Quit to close. Any ideas as to how to unlock Parental Controls? The two scams appear to be gone as I c
-
Problem with displaying "\n" in notepad
Hi, I wonder if anyone has had this kind of problem before and knows how to fix it. I tried both FileWriter and RandomAccessFile. For FileWriter, I write: FileWriter file = new FileWriter("output.txt"); file.write("test\ntest\ntest"); For RandomAccessFile,
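One thing worth knowing here: classic Windows Notepad only breaks lines on CRLF ("\r\n"), so a file written with bare "\n" shows up as a single long line even though the bytes are correct. In Java the usual fix is to write System.lineSeparator() (or use a "%n" format) instead of a literal "\n". A byte-level sketch of the difference (shell, with made-up file names):

```shell
# Old Notepad needs CRLF ("\r\n") line endings; bare "\n" renders as one line.
printf 'test\ntest\ntest'     > lf.txt    # what write("test\ntest\ntest") produces
printf 'test\r\ntest\r\ntest' > crlf.txt  # what Notepad expects
# count the carriage-return bytes in each file
echo "lf.txt: $(tr -cd '\r' < lf.txt | wc -c | tr -d ' ') CRs"
echo "crlf.txt: $(tr -cd '\r' < crlf.txt | wc -c | tr -d ' ') CRs"
```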
-
Oracle 9i Client crashes on Windows XP SP2
Hi, I am currently trying to install the Oracle 9i (9.2.0.1.0)Client on Windows XP with installed SP2. A first fresh installation is always possible but trying to deinstall or show up the installed products leads to a crash of the OUI with an java.la
-
How can I highlight or underline?
Having converted the PDF to .docx, I can't figure out how to underline/highlight text (as I can do in Word).