Creating Boot Environment for Live Upgrade
Hello.
I'd like to upgrade a Sun Fire 280R system running Solaris 8 to Solaris 10 U4, using Live Upgrade. As this is going to be my first LU of a system, I've got some questions. Before I start, I'd like to mention that I have read the "Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning" ([820-0178|http://docs.sun.com/app/docs/doc/820-0178]) document. Nonetheless, I'd also appreciate pointers to more "hands-on" documentation/howtos regarding Live Upgrade.
The system that I'd like to upgrade has these filesystems:
(winds02)askwar$ df
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/md/dsk/d30 4129290 684412 3403586 17% /
/dev/md/dsk/d32 3096423 1467161 1567334 49% /usr
/dev/md/dsk/d33 2053605 432258 1559739 22% /var
swap 7205072 16 7205056 1% /var/run
/dev/dsk/c3t1d0s6 132188872 61847107 69019877 48% /u04
/dev/md/dsk/d34 18145961 5429315 12535187 31% /opt
/dev/md/dsk/d35 4129290 77214 4010784 2% /export/home
It has 2 built-in hard disks, which form those metadevices. You can find the "metastat" output at http://askwar.pastebin.ca/697380. I'm now planning to break the mirrors for /, /usr, /var and /opt. To do so, I'd run
metadetach d33 d23
metaclear d23
d23 is/used to be c1t1d0s4. I'd do this for d30, d32 and d34 as well. The plan is that I'd be able to use these newly freed slices on c1t1d0 for LU. I know that I'm in trouble if c1t0d0 dies in the meantime. But that's okay, as that system isn't being used anyway right now...
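Spelled out for all four filesystems, the manual approach would look roughly like the sketch below. Note that only the d33/d23 pairing is stated above; the other submirror names (d20, d22, d24) are assumptions following the same dNx naming pattern and must be checked against the metastat output first. The script only echoes each command (a dry run), so it can be reviewed before doing anything destructive:

```shell
#!/bin/sh
# Dry-run sketch: detach and clear the second-disk submirrors so their
# slices on c1t1d0 become free for Live Upgrade.
# Submirror names d20/d22/d24 are assumed; verify with metastat before use.
run() { echo "+ $*"; }   # swap for: eval "$@"  to actually execute

for pair in "d30 d20" "d32 d22" "d33 d23" "d34 d24"; do
  set -- $pair
  run metadetach "$1" "$2"   # break the mirror; d3x keeps running on one half
  run metaclear  "$2"        # delete the submirror metadevice, freeing its slice
done
```

With the slices freed, they can then be handed to lucreate as targets for the new boot environment.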
Or wait, I can use lucreate to do that as well, can't I? So, instead of manually detaching the mirror, I could do:
lucreate -n s8_2_s10 -m /:/dev/md/dsk/d30:preserve,ufs \
-m /usr:/dev/md/dsk/d32:preserve,ufs \
-m /var:/dev/md/dsk/d33:preserve,ufs \
-m /opt:/dev/md/dsk/d34:preserve,ufs
Does that sound right? I'd assume that I'd then have a new boot environment called "s8_2_s10", which uses the contents of the old metadevices. Or would the correct command rather be:
lucreate -n s8_2_s10_v2 \
-m /:/dev/md/dsk/d0:mirror,ufs \
-m /:/dev/md/dsk/d20:detach,attach,preserve \
-m /usr:/dev/md/dsk/d2:mirror,ufs \
-m /usr:/dev/md/dsk/d22:detach,attach,preserve \
-m /var:/dev/md/dsk/d3:mirror,ufs \
-m /var:/dev/md/dsk/d23:detach,attach,preserve \
-m /opt:/dev/md/dsk/d4:mirror,ufs \
-m /opt:/dev/md/dsk/d24:detach,attach,preserve
What would be the correct way to create the new boot environment? As I said, I haven't done this before, so I'd really appreciate some help.
Thanks a lot,
Alexander Skwar
I replied to this thread: Re: lucreate and non-global zones, so as not to duplicate content, but for some reason it was locked. So I'll post here...
The thread was locked because you were not replying to it. You were hijacking that other person's discussion from 2012 to ask your own new question.
You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.
Similar Messages
-
Highlevel information on creating a environment for self learning
I am new to SAP and hoping to create an environment at home for self-learning. I have a strong background in infrastructure / Unix / systems administration / Oracle database etc. I was unable to find high-level information on how I can go about creating an environment for learning SAP Basis and NetWeaver administration. Can someone please clarify the following:
a) Is it possible to get a trial version that I can use to install SAP, or will I have to buy a license?
b) Since I am not using it for production - can I renew this trial version by changing the date, or are there ways to request new trial keys?
c) Can you please provide the list of all the software that I need to install to get a full SAP environment set up?
Edited by: neeraj mendiratta on Dec 26, 2007 1:58 PM
Hi,
You can buy the IDES version online. See the below thread:
http://www.microsoft.com/communities/newsgroups/en-us/default.aspx?dg=microsoft.public.biztalk.general&tid=4a9dac5f-0e74-4422-aef5-aa2bcc7b3831&p=1
Have a look at any search site, where you can find many sites to purchase SAP IDES.
However, you can simulate the landscape. If you buy the license, you will have access to SAP market place, download the latest patches and also look for solutions.
Also, all the software that is required (apart from the OS) is bundled in the package. You need not buy any off-the-shelf software additionally.
Rgds,
Raghu
Reward, if you find the solution informative. -
RFE: smpatch for Live Upgrade boot environments
We use smpatch extensively, along with a local patch server, to keep our Solaris servers
and workstations up to date on patches. I'm relatively satisfied with this facility.
I'd like to use smpatch to apply patches to a Live Upgrade boot environment, but it
doesn't offer that option. All I really need to do is to point it at an alternate root to do
the analysis and patch download. Live Upgrade already has the ability to apply patches
from a local directory. I've had to turn to the competition, pca, to do the analysis and
download.
Please request that this ability be added to smpatch.
Unfortunately, man pages are not usually updated after an initial release. However, there is a change request, 6481979, to add it to the man pages. The option is now present in the smpatch help (shown when no parameters are provided to the command), but only as "-b boot-env". As an example:
$ smpatch add -b altboot -i 111111-11
The relevant change requests were 6366823 for Update Connection and, historically, 4974240 for smpatch. As the realisation detection used in an analysis may depend on active software or drivers to extract data, this cannot be statically extracted from a system image, so a correct analysis cannot be done. That appears to be why only add, remove and update were given the boot environment option.
Lucreate fails to create boot environment
Hi,
I'm trying to create a boot environment, but the lucreate fails with the following error message:
# lucreate -n solaris10 -m /:/dev/dsk/c0t2d0s0:ufs
Please wait while your system configuration is determined.
Determining what file systems should be in the new BE.
/usr/sbin/lustatus: illegal option -- d
USAGE: lustatus [-l error_log] [-o outfile] ( [-n] "BE_name" )
WARNING: The BE_name should be enclosed in double quotes.
Template entry /:/dev/dsk/c0t2d0s0:ufs skipped.
luconfig: ERROR: Template filesystem definition failed for /, all devices are not applicable..
ERROR: Configuration of BE failed.I have tried the BE_name with and without double quotes but still no luck. I have also checked the target partition and it does contain the "wm" flag:
partition> print
Current partition table (original):
Total disk cylinders available: 33916 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 32969 132.81GB (32970/0/0) 278530560
1 unassigned wm 0 0 (0/0/0) 0
2 backup wm 0 - 33915 136.62GB (33916/0/0) 286522368
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 swap wu 32970 - 33915 3.81GB (946/0/0) 7991808
Does anybody have an idea what causes this issue? I would greatly appreciate any help.
Thanks!
Cindy
Lucreate -m fails to create boot environment. error: -m not recognized?
I'm trying to use the following command to create a boot environment on c1t1d0s0 (currently the system is booted from c1t0d0s0). As you can see below, the -m option is not recognized. What am I doing wrong? Please help?
# lucreate -m /:/c1t1d0s0:ufs \ -m -:/dev/dsk/c1t1d0s1:swap -m /usr:/dev/dsk/c1t1d0s6:ufs -n solaris 10
ERROR: command line argument(s) < -m> not recognized
Usage: lucreate -n BE_name [ -A BE_description ] [ -c BE_name ]
[ -C ( boot_device | - ) ] [ -f exclude_list-file [ -f ... ] ] [ -I ]
[ -l error_log-file ] [ -M slice_list-file [ -M ... ] ]
[ -m mountPoint:devicePath:fsOptions [ -m ... ] ] [ -o out_file ]
[ -s ( - | source_BE_name ) ] [ -x exclude_dir/file [ -x ... ] ] [ -X ]
[ -y include_dir/file [ -y ... ] ] [ -Y include_list-file [ -Y ... ] ]
[ -z filter_list-file ]
conrad_user wrote:
I'm trying to use the following command to create a boot environment on c1t1d0s0 (currently the system is booted from c1t0d0s0). As you can see below, the -m option is not recognized. What am I doing wrong? Please help?
# lucreate -m /:/c1t1d0s0:ufs \ -m -:/dev/dsk/c1t1d0s1:swap -m /usr:/dev/dsk/c1t1d0s6:ufs -n solaris 10
ERROR: command line argument(s) < -m> not recognized
No, it seems to be saying " -m" (so <space>-m, not -m) is what's not recognized.
In your command line, you have a backslash between ufs and the -m. That's escaping the space in front of the argument. Any reason you've put the backslash there?
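Putting that together, a corrected invocation (an editor's sketch, not taken from the thread) would fix the stray backslash plus two further problems visible in the original command: the root device is missing its /dev/dsk/ prefix, and the BE name "solaris 10" contains an unquoted space. The command is only echoed here as a dry run:

```shell
#!/bin/sh
# Corrected lucreate call (dry run: echoed, not executed).
# Fixes relative to the failing command above:
#   1. no mid-line "\ " (which escaped the space and glued " -m" into one
#      unrecognized argument)
#   2. full device path /dev/dsk/c1t1d0s0 for the root slice
#   3. a BE name without a space: solaris10
CMD='lucreate -m /:/dev/dsk/c1t1d0s0:ufs -m -:/dev/dsk/c1t1d0s1:swap -m /usr:/dev/dsk/c1t1d0s6:ufs -n solaris10'
echo "$CMD"   # as root, run: eval "$CMD"
```

If line continuations are wanted for readability, the backslash must be the very last character on the line, immediately followed by the newline.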
Darren -
Guideline to Creating Production Environment for BPC 7.5
Hello All:
Does anyone know of any published guidelines for moving a complete development environment into production?
We are upgrading to BPC 7.5 MS and are quickly approaching the stage where we can deploy our environment into production.
I've been told it involves creating a backup of your DEV environment and simply restoring it in production. Is it that basic? What are the manual adjustments needed afterwards?
Any input/insight would be great!
Thanks,
Nathan
No manual adjustment for this.
Using Server Manager you are able to do the backup into development.
Also with server manager you are able to do restore into production.
No other manual steps are necessary.
But this can be done only once.
After that, working in parallel with production and development (backing up dev and restoring it into production) no longer works, because you will lose data from production.
Backup restore from Server manager is not a transport mechanism. It is really a backup restore functionality.
Regards
Sorin Radulescu. -
Safe boot required for firmware upgrades??
The two recent firmware upgrades, EFI and SMC, both required me to do a safe boot for the firmware to install. Is this normal behaviour? Trying to do the firmware install from a normal boot results in an installation failure.
Possibly related, the "fixed" version of Parallels crashes the Mac during the install. If I try the same install during a safeboot, it will install. However, the system will panic during a normal boot until Parallels is uninstalled.
Thoughts?? Advice??? Thanks!
-Phil
Safe Mode boot…
http://docs.info.apple.com/article.html?artnum=107393
You get updates by running Software Update (in System Preferences, or the 2nd item in the Apple menu).
Sizing the hardware and environment for the upgrade
Hi,
Can anyone tell me what the hardware and environment sizing requirements are when upgrading from SRM 2.0 to SRM 5.0?
Thanks!
Sonali
Hi,
Check more details on the upgrade using the following link:
http://service.sap.com/spau.
BR,
Disha.
Pls reward points for helpful answers. -
Create boot disc for snow leopard
Can you create a boot disc DVD for Snow Leopard without the original installation copy?
No. You need the installer DVD that came originally with your computer or a retail disc with a later version of OS X. The current retail Snow Leopard DVD installs 10.6.3. If your computer came with a later version of Snow Leopard, then you will need the DVD that came with your computer.
-
Does anyone know of any issues with using a T2000 as a master flash archive source to install a clone onto a T5220 inactive boot environment using Live Upgrade? I created an empty boot environment first on the T5220, using slice 7 (17 GB). I tried this, and it would not boot the T2000 boot environment when I activated it. No errors; it just booted the original T5220 boot environment. Are the T1 and T2 processors that show up as sun4v architecture not really considered the same architecture?
Reading over this post I see it is a little unclear. I am trying to upgrade a u6 installation that has a zfs root to u7.
-
Lucreate - Cannot make file systems for boot environment
Hello!
I'm trying to use Live Upgrade to upgrade one of "my" Sparc servers from Solaris 10 U5 to Solaris 10 U6. To do that, I first installed the patches listed in [Infodoc 72099|http://sunsolve.sun.com/search/document.do?assetkey=1-9-72099-1] and then installed SUNWlucfg, SUNWlur and SUNWluu from the S10U6 sparc DVD iso. I then did:
--($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <d100> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <S10U6_20081207>.
Source boot environment is <d100>.
Creating boot environment <S10U6_20081207>.
Creating file systems on boot environment <S10U6_20081207>.
Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d200>.
Mounting file systems for boot environment <S10U6_20081207>.
Calculating required sizes of file systems for boot environment <S10U6_20081207>.
ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
So the problem is:
ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
Well - why's that?
I can do a "newfs /dev/md/dsk/d200" just fine.
When I try to remove the incomplete S10U6_20081207 BE, I get yet another error :(
/bin/nawk: can't open file /etc/lu/ICF.2
Quellcodezeilennummer 1
Boot environment <S10U6_20081207> deleted.
I get this error consistently (I have run the lucreate many times now).
lucreate used to work fine, "once upon a time", when I brought the system from S10U4 to S10U5.
Would anyone maybe have an idea about what's broken there?
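As an aside on the secondary problem (the nawk error about the missing /etc/lu/ICF.2 during deletion): a workaround sometimes reported for this class of failure, offered here as an assumption rather than something confirmed in this thread, is to recreate the missing ICF file so the deletion scripts have something to open. Dry-run sketch:

```shell
#!/bin/sh
# Dry-run sketch (commands echoed, not executed) for cleaning up a
# half-created BE when /etc/lu/ICF.N is missing. The BE name and ICF
# number are taken from the messages above; verify both on your system.
run() { echo "+ $*"; }

run touch /etc/lu/ICF.2        # give nawk an (empty) file to open
run ludelete S10U6_20081207    # the delete can now run its scripts
run lustatus                   # confirm only the original BE remains
```

This does not address the underlying "Cannot make file systems" failure, only the cleanup of the incomplete boot environment.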
--($ ~)-- LC_ALL=C metastat
d200: Mirror
Submirror 0: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31458321 blocks (15 GB)
d20: Submirror of d200
State: Okay
Size: 31458321 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s0 0 No Okay Yes
d100: Mirror
Submirror 0: d10
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31458321 blocks (15 GB)
d10: Submirror of d100
State: Okay
Size: 31458321 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 0 No Okay Yes
d201: Mirror
Submirror 0: d21
State: Okay
Submirror 1: d11
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2097414 blocks (1.0 GB)
d21: Submirror of d201
State: Okay
Size: 2097414 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s1 0 No Okay Yes
d11: Submirror of d201
State: Okay
Size: 2097414 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s1 0 No Okay Yes
hsp001: is empty
Device Relocation Information:
Device Reloc Device ID
c1t1d0 Yes id1,sd@THITACHI_DK32EJ-36NC_____434N5641
c1t0d0 Yes id1,sd@SSEAGATE_ST336607LSUN36G_3JA659W600007412LQFN
--($ ~)-- /bin/df -k | grep md
/dev/md/dsk/d100 15490539 10772770 4562864 71% /
Thanks,
Michael
Hello.
(sys01)root# devfsadm -Cv
(sys01)root#
To be on the safe side, I even rebooted after having run devfsadm.
--($ ~)-- sudo env LC_ALL=C LANG=C lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
d100 yes yes yes no -
--($ ~)-- sudo env LC_ALL=C LANG=C lufslist d100
boot environment name: d100
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem fstype device size Mounted on Mount Options
/dev/md/dsk/d100 ufs 16106660352 / logging
/dev/md/dsk/d201 swap 1073875968 - -
In the rebooted system, I re-did the original lucreate:
--($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
Copying.
Excellent! It now works!
Thanks a lot,
Michael -
Live Upgrade fails on cluster node with zfs root zones
We are having issues using Live Upgrade in the following environment:
-UFS root
-ZFS zone root
-Zones are not under cluster control
-System is fully up to date for patching
We also use Live Upgrade with the exact same same system configuration on other nodes except the zones are UFS root and Live Upgrade works fine.
Here is the output of a Live Upgrade:
bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
Determining types of file systems supported
Validating file system requests
The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <sol10> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <sol10-20110505>.
Source boot environment is <sol10>.
Creating boot environment <sol10-20110505>.
Creating file systems on boot environment <sol10-20110505>.
Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
Mounting file systems for boot environment <sol10-20110505>.
Calculating required sizes of file systems for boot environment <sol10-20110505>.
Populating file systems on boot environment <sol10-20110505>.
Checking selection integrity.
Integrity check OK.
Preserving contents of mount point </>.
Preserving contents of mount point </var>.
Copying file systems that have not been preserved.
Creating shared file system mount points.
Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
list of <2> potential problems (issues) that were encountered while
populating boot environment <sol10-20110505>.
INFORMATION: You must review the issues listed in
</tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
resolved. In general, you can ignore warnings about files that were
skipped because they did not exist or could not be opened. You cannot
ignore errors such as directories or files that could not be created, or
file systems running out of disk space. You must manually resolve any such
problems before you activate boot environment <sol10-20110505>.
Creating compare databases for boot environment <sol10-20110505>.
Creating compare database for file system </var>.
Creating compare database for file system </>.
Updating compare databases on boot environment <sol10-20110505>.
Making boot environment <sol10-20110505> bootable.
ERROR: unable to mount zones:
WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
zoneadm: zone 'img1': call to zoneadmd failed
ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
Making the ABE <sol10-20110505> bootable FAILED.
ERROR: Unable to make boot environment <sol10-20110505> bootable.
ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
ERROR: Cannot make file systems for boot environment <sol10-20110505>.
Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
Thanks
I was able to successfully do a Live Upgrade with zones with a ZFS root in Solaris 10 update 9.
When attempting to do a "lumount s10u9c33zfs", it gave the following error:
ERROR: unable to mount zones:
zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313 -s10u9c33zfs/lu/a/u04" failed with exit code 111
zoneadm: zone 'edd313': call to zoneadmd failed
ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
ERROR: unmounting partially mounted boot environment file systems
ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
ERROR: cannot mount boot environment by name <s10u9c33zfs>
The solution in this case was:
zonecfg -z edd313
info ;# display current setting
remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
verify ;# check change
commit ;# commit change
exit -
Live upgrade, zones and separate mount points
Hi,
We have a quite large zone environment based on Solaris zones located on VxVM/VxFS. I know this is a doubtful configuration, but the choice was made before I got here, and now we need to upgrade the environment. Veritas guides say it's fine to locate zones on Veritas, but I am not sure Sun would approve.
Anyway, since all zones are located on a separate volume, I want to create a new one for every zonepath, something like:
lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /zones/zone01:/dev/vx/dsk/zone01/zone01_root02:ufs
This works fine for a while after integration of 6620317 in 121430-23, but when the new environment is to be activated I get errors; see below [1]. If I look at the command executed by lucreate, I see that the global root is mounted, but my zone root does not seem to have been mounted before the call to zoneadmd [2]. While this might not be a supported configuration, VxVM seems to be supported, and I think there are a few people out there with zonepaths on separate disks. Live Upgrade probably has no issues with the files moved from the VxFS filesystem (that part has been done), but the new filesystems do not seem to get mounted correctly.
Anyone tried something similar, or has any idea on how to solve this?
The system is s10s_u4 with kernel 127111-10 and Live Upgrade patches 121430-25, 121428-10.
1:
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point </zones/zone01>.
Copying.
Creating shared file system mount points.
Copying root of zone <zone01>.
Creating compare databases for boot environment <upgrade>.
Creating compare database for file system </zones/zone01>.
Creating compare database for file system </>.
Updating compare databases on boot environment <upgrade>.
Making boot environment <upgrade> bootable.
ERROR: unable to mount zones:
zoneadm: zone 'zone01': can't stat /.alt.upgrade/zones/zone01/root: No such file or directory
zoneadm: zone 'zone01': call to zoneadmd failed
ERROR: unable to mount zone <zone01> in </.alt.upgrade>
ERROR: unmounting partially mounted boot environment file systems
ERROR: umount: warning: /dev/dsk/c2t1d0s0 not in mnttab
umount: /dev/dsk/c2t1d0s0 not mounted
ERROR: cannot unmount </dev/dsk/c2t1d0s0>
ERROR: cannot mount boot environment by name <upgrade>
ERROR: Unable to determine the configuration of the target boot environment <upgrade>.
ERROR: Update of loader failed.
ERROR: Unable to umount ABE <upgrade>: cannot make ABE bootable.
Making the ABE <upgrade> bootable FAILED.
ERROR: Unable to make boot environment <upgrade> bootable.
ERROR: Unable to populate file systems on boot environment <upgrade>.
ERROR: Cannot make file systems for boot environment <upgrade>.
2:
0 21191 21113 /usr/lib/lu/lumount -f upgrade
0 21192 21191 /etc/lib/lu/plugins/lupi_bebasic plugin
0 21193 21191 /etc/lib/lu/plugins/lupi_svmio plugin
0 21194 21191 /etc/lib/lu/plugins/lupi_zones plugin
0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
0 21195 21192 mount /dev/dsk/c2t1d0s0 /.alt.upgrade
0 21196 21192 mount -F tmpfs swap /.alt.upgrade/var/run
0 21196 21192 mount swap /.alt.upgrade/var/run
0 21197 21192 mount -F tmpfs swap /.alt.upgrade/tmp
0 21197 21192 mount swap /.alt.upgrade/tmp
0 21198 21192 /bin/sh /usr/lib/lu/lumount_zones -- /.alt.upgrade
0 21199 21198 /bin/expr 2 - 1
0 21200 21198 egrep -v ^(#|global:) /.alt.upgrade/etc/zones/index
0 21201 21198 /usr/sbin/zonecfg -R /.alt.upgrade -z test exit
0 21202 21198 false
0 21205 21204 /usr/sbin/zoneadm -R /.alt.upgrade list -i -p
0 21206 21204 sed s/\([^\]\)::/\1:-:/
0 21207 21203 zoneadm -R /.alt.upgrade -z zone01 mount
0 21208 21207 zoneadmd -z zone01 -R /.alt.upgrade
0 21210 21203 false
0 21211 21203 gettext unable to mount zone <%s> in <%s>
0 21212 21203 /etc/lib/lu/luprintf -Eelp2 unable to mount zone <%s> in <%s> zone01 /.alt.up
Edited by: henrikj_ on Sep 8, 2008 11:55 AM. Added Solaris release and patch information.
I updated my manual pages and got a reminder of the zonename field for the -m option of lucreate. But I still have no success: if I have the root filesystem for the zone in vfstab, it tries to mount the current root into the alternate BE:
# lucreate -u upgrade -m /:/dev/dsk/c2t1d0s0:ufs -m /:/dev/vx/dsk/zone01/zone01_rootvol02:ufs:zone01
<snip>
Creating file systems on boot environment <upgrade>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_rootvol02>.
Mounting file systems for boot environment <upgrade>.
ERROR: UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/zone01/zone01_rootvol is already mounted, /.alt.tmp.b-gQg.mnt/zones/zone01 is busy,
allowable number of mount points exceeded
ERROR: cannot mount mount point </.alt.tmp.b-gQg.mnt/zones/zone01> device </dev/vx/dsk/zone01/zone01_rootvol>
ERROR: failed to mount file system </dev/vx/dsk/zone01/zone01_rootvol> on </.alt.tmp.b-gQg.mnt/zones/zone01>
ERROR: unmounting partially mounted boot environment file systems
If I try to do the same, but with the filesystem removed from vfstab, then I get another error:
<snip>
Creating boot environment <upgrade>.
Creating file systems on boot environment <upgrade>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c2t1d0s0>.
Creating <ufs> file system for </> in zone <zone01> on </dev/vx/dsk/zone01/zone01_upgrade>.
Mounting file systems for boot environment <upgrade>.
Calculating required sizes of file systems for boot environment <upgrade>.
Populating file systems on boot environment <upgrade>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mountED.
ERROR: Unable to make boot environment <upgrade> bootable.
ERROR: Unable to populate file systems on boot environment <upgrade>.
ERROR: Cannot make file systems for boot environment <upgrade>.
If I let lucreate copy the zonepath to the same slice as the OS, the creation of the BE works fine:
# lucreate -n upgrade -m /:/dev/dsk/c2t1d0s0:ufs -
Lucreate 'ERROR: mount: /export: invalid argument' - Live Upgrade u8 to u9
I'm trying to update several servers running Solaris Cluster 3.2 from u8 to u9 using Live Upgrade. The first server (the quorum server) worked just fine; the next one (a cluster member) goes down like this:
# lucreate -n solaris-10-u9
ERROR: mount: /export: Invalid argument
ERROR: cannot mount mount point </.alt.tmp.b-pob.mnt/export> device </export>
ERROR: failed to mount file system </export> on </.alt.tmp.b-pob.mnt/export>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Unable to mount ABE <solaris-10-u9>
ERROR: Unable to clone the existing file systems from boot environment <s10x_u8wos_08a> to create boot environment <solaris-10-u9>.
ERROR: Cannot make file systems for boot environment <solaris-10-u9>.
I followed all the necessary steps, removed the installed Live Upgrade packages and installed the ones from the u9 iso...
Any ideas would be greatly appreciated.
Edited by: 801033 on Oct 8, 2010 5:11 AM
Edited by: 801033 on Oct 8, 2010 5:28 AM
Edited by: 801033 on Oct 8, 2010 5:33 AM
The answer, at least in my case:
When I originally installed this cluster, I apparently misread the part of the documentation which led me to disable lofs. The documentation states that you need to disable lofs only if BOTH of two conditions are met:
1) You are running HA for NFS to serve a locally available filesystem, AND
2) you are running automountd.
In my case, I have no need for automountd, so I disabled the autofs service, re-enabled lofs, and am proceeding with the upgrade.
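For anyone hitting the same thing, the fix described above amounts to roughly the following (a hedged sketch: the `exclude: lofs` line in /etc/system and the autofs service FMRI are the standard ones, but verify them against your own system before applying). Commands are only echoed, as a dry run:

```shell
#!/bin/sh
# Sketch of the fix described above: re-enable lofs and disable automountd.
# Dry run: each command is echoed, not executed; drop run() to apply for real.
run() { echo "+ $*"; }

# 1. Re-enable lofs by commenting out the exclude line in /etc/system.
#    The line that disables lofs normally reads:  exclude: lofs
#    (shown with sed to stdout here; write the edited file back when applying)
run sed -e 's/^exclude: lofs/* exclude: lofs/' /etc/system

# 2. Disable the automounter, since it is not needed in this setup.
run svcadm disable svc:/system/filesystem/autofs:default

# 3. A reboot is required for the /etc/system change to take effect.
run init 6
```

After the reboot, lucreate should no longer trip over the lofs mounts.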
Live Upgrade not working from Solaris 10 05/09 - 10/09
I have a Blade 1000 that used to run SXCE, and I used to LU all of the time. I recently rejumpstarted it back to Solaris 10 05/09 to match production.
Now that 10/09 is out, I went to do a normal LU; however, it completely bombs out. I'm assuming it's because of the weird device name, yet I can't figure out what is causing it. Any ideas?
========
= Before =
========
root@dxnpnc05:~ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               9.24G  24.0G  54.5K  /rpool
rpool/ROOT          2.46G  24.0G    18K  legacy
rpool/ROOT/sol10u7  2.46G  24.0G  2.46G  /
rpool/appl            18K  24.0G    18K  /appl
rpool/export        2.41G  24.0G    18K  /export
rpool/export/home   2.41G  24.0G  2.41G  /home
rpool/local         1.63M  24.0G  1.63M  /usr/local
rpool/opt            279K  24.0G   279K  /opt
rpool/perl            23K  24.0G    23K  /usr/perl5/site_perl
rpool/pkg            107M  24.0G   107M  /usr/pkg
rpool/pkgsrc         263M  24.0G   263M  /usr/pkgsrc
rpool/swap             4G  27.4G   545M  -
root@dxnpnc05:~ # lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol10u7                    yes      yes    yes       no     -
=========
= Creation =
=========
root@dxnpnc05:~ # lucreate -c sol10u7 -n sol10u8
Analyzing system configuration.
Comparing source boot environment <sol10u7> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <sol10u8>.
Source boot environment is <sol10u7>.
Creating boot environment <sol10u8>.
Cloning file systems from boot environment <sol10u7> to create boot environment <sol10u8>.
Creating snapshot for <rpool/ROOT/sol10u7> on <rpool/ROOT/sol10u7@sol10u8>.
Creating clone for <rpool/ROOT/sol10u7@sol10u8> on <rpool/ROOT/sol10u8>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/sol10u8>.
ERROR: cannot open ' ': invalid dataset name
ERROR: cannot mount mount point </.alt.tmp.b-0lg.mnt/opt> device < >
ERROR: failed to mount file system < > on </.alt.tmp.b-0lg.mnt/opt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Unable to mount ABE <sol10u8>
ERROR: Unable to clone the existing file systems from boot environment <sol10u7> to create boot environment <sol10u8>.
ERROR: Cannot make file systems for boot environment <sol10u8>.
======
= After =
======
root@dxnpnc05:~ # zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       9.24G  24.0G  54.5K  /rpool
rpool/ROOT                  2.46G  24.0G    18K  legacy
rpool/ROOT/sol10u7          2.46G  24.0G  2.46G  /
rpool/ROOT/sol10u7@sol10u8  68.5K      -  2.46G  -
rpool/ROOT/sol10u8           110K  24.0G  2.46G  legacy
rpool/appl                    18K  24.0G    18K  /appl
rpool/export                2.41G  24.0G    18K  /export
rpool/export/home           2.41G  24.0G  2.41G  /home
rpool/local                 1.63M  24.0G  1.63M  /usr/local
rpool/opt                    279K  24.0G   279K  /opt
rpool/perl                    23K  24.0G    23K  /usr/perl5/site_perl
rpool/pkg                    107M  24.0G   107M  /usr/pkg
rpool/pkgsrc                 263M  24.0G   263M  /usr/pkgsrc
rpool/swap                     4G  27.4G   545M  -
root@dxnpnc05:~ # lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol10u7                    yes      yes    yes       no     -
sol10u8                    no       no     no        yes    -
Any ideas? Thanks!

I have been trying to use luupgrade for Solaris 10 on SPARC, 05/09 -> 10/09.
lucreate is successful, but luactivate directs me to install "the rest of the packages" in order to make the BE stable enough to activate. I try to find the packages indicated, but find only "virtual packages" which contain only a pkgmap.
I installed upgrade 6 on a spare disk to make sure my u7 installation was not defective, but got similar results.
I got beyond luactivate on x86 a while ago, but had other snags which I left unattended.
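For the "cannot open ' ': invalid dataset name" failure above, one place to look is the ICF file that lucreate complained about (/etc/lu/ICF.2). Each line there describes one file system of the boot environment; the error suggests an entry whose device/dataset field came out blank. The snippet below builds a small hypothetical sample in that assumed colon-separated layout (BE name, mount point, device, fstype, size) and flags the broken entry; on a real system you would run the awk line against /etc/lu/ICF.2 itself:

```shell
# Hypothetical ICF excerpt -- the real file is /etc/lu/ICF.2; the
# field layout (BE:mountpoint:device:fstype:size) is an assumption.
cat > /tmp/icf.sample <<'EOF'
sol10u8:/:rpool/ROOT/sol10u8:zfs:0
sol10u8:/opt: :zfs:0
EOF

# Flag entries whose device field is blank -- these are what produce
# the "invalid dataset name" / "device < >" errors during lucreate.
awk -F: '$3 ~ /^ *$/ {print "blank device for mountpoint " $2}' /tmp/icf.sample
```

If a mount point shows up with a blank device, check `zfs get -r mountpoint rpool` for datasets with non-standard mount points (here rpool/opt on /opt, rpool/local on /usr/local, and so on), which older LU releases are known to mishandle.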