Lucreate fails to create boot environment
Hi,
I'm trying to create a boot environment, but lucreate fails with the following error message:
# lucreate -n solaris10 -m /:/dev/dsk/c0t2d0s0:ufs
Please wait while your system configuration is determined.
Determining what file systems should be in the new BE.
/usr/sbin/lustatus: illegal option -- d
USAGE: lustatus [-l error_log] [-o outfile] ( [-n] "BE_name" )
WARNING: The BE_name should be enclosed in double quotes.
Template entry /:/dev/dsk/c0t2d0s0:ufs skipped.
luconfig: ERROR: Template filesystem definition failed for /, all devices are not applicable..
ERROR: Configuration of BE failed.
I have tried the BE_name with and without double quotes but still no luck. I have also checked the target partition and it does contain the "wm" flag:
partition> print
Current partition table (original):
Total disk cylinders available: 33916 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 32969 132.81GB (32970/0/0) 278530560
1 unassigned wm 0 0 (0/0/0) 0
2 backup wm 0 - 33915 136.62GB (33916/0/0) 286522368
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 swap wu 32970 - 33915 3.81GB (946/0/0) 7991808
Does anybody have an idea what causes this issue? I would greatly appreciate any help.
Thanks!
Cindy
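(For reference, the -m template entry that got skipped has three colon-separated fields: mount point, device path, and FS options. A quick shell sketch of how such a spec decomposes; this is illustration only, not how lucreate parses it internally:)

```shell
# Decompose a lucreate -m spec of the form mountPoint:devicePath:fsOptions.
# Illustration only; not lucreate's internal parser.
spec="/:/dev/dsk/c0t2d0s0:ufs"
mntpt=${spec%%:*}      # before the first colon -> /
fsopts=${spec##*:}     # after the last colon   -> ufs
middle=${spec#*:}      # strip the mount point and its colon
device=${middle%:*}    # drop the FS options    -> /dev/dsk/c0t2d0s0
printf 'mount=%s device=%s fs=%s\n' "$mntpt" "$device" "$fsopts"
```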
conrad_user wrote:
I'm trying to use the following command to create a boot environment on c1t1d0s0 (currently the system is booted from c1t0d0s0). As you can see below, the -m option is not recognized. What am I doing wrong? Please help!
# lucreate -m /:/c1t1d0s0:ufs \ -m -:/dev/dsk/c1t1d0s1:swap -m /usr:/dev/dsk/c1t1d0s6:ufs -n solaris 10
ERROR: command line argument(s) < -m> not recognized
No, it seems to be saying " -m" (so <space>-m, not -m) is what's not recognized.
In your command line, you have a backslash between ufs and the -m. That's escaping the space in front of the argument. Any reason you've put the backslash there?
Darren
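Darren's point is easy to reproduce in any shell: a backslash in the middle of a line escapes the space that follows it, so the next option is delivered to the command as a literal " -m" argument. A throwaway sketch (printargs is my own helper, not part of Live Upgrade):

```shell
# Throwaway helper: print each argument the shell actually delivers.
printargs() {
    for a in "$@"; do
        printf '[%s]\n' "$a"
    done
}

# The backslash before " -m" escapes the space, so the shell glues the
# space and "-m" into one argument -- exactly what lucreate complains about.
printargs -m /:/c1t1d0s0:ufs \ -m -:/dev/dsk/c1t1d0s1:swap
```

The third argument prints as [ -m], with the leading space preserved; a backslash only continues the line when it is the very last character before the newline.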
Similar Messages
-
Lucreate -m fails to create boot environment. Error: -m not recognized?
I'm trying to use the following command to create a boot environment on c1t1d0s0 (currently the system is booted from c1t0d0s0). As you can see below, the -m option is not recognized. What am I doing wrong? Please help!
# lucreate -m /:/c1t1d0s0:ufs \ -m -:/dev/dsk/c1t1d0s1:swap -m /usr:/dev/dsk/c1t1d0s6:ufs -n solaris 10
ERROR: command line argument(s) < -m> not recognized
Usage: lucreate -n BE_name [ -A BE_description ] [ -c BE_name ]
[ -C ( boot_device | - ) ] [ -f exclude_list-file [ -f ... ] ] [ -I ]
[ -l error_log-file ] [ -M slice_list-file [ -M ... ] ]
[ -m mountPoint:devicePath:fsOptions [ -m ... ] ] [ -o out_file ]
[ -s ( - | source_BE_name ) ] [ -x exclude_dir/file [ -x ... ] ] [ -X ]
[ -y include_dir/file [ -y ... ] ] [ -Y include_list-file [ -Y ... ] ]
[ -z filter_list-file ]
conrad_user wrote:
I'm trying to use the following command to create a boot environment on c1t1d0s0 (currently the system is booted from c1t0d0s0). As you can see below, the -m option is not recognized. What am I doing wrong? Please help!
# lucreate -m /:/c1t1d0s0:ufs \ -m -:/dev/dsk/c1t1d0s1:swap -m /usr:/dev/dsk/c1t1d0s6:ufs -n solaris 10
ERROR: command line argument(s) < -m> not recognized
No, it seems to be saying " -m" (so <space>-m, not -m) is what's not recognized.
In your command line, you have a backslash between ufs and the -m. That's escaping the space in front of the argument. Any reason you've put the backslash there?
Darren
-
Creating Boot Environment for Live Upgrade
Hello.
I'd like to upgrade a Sun Fire 280R system running Solaris 8 to Solaris 10 U4. I'd like to use Live Upgrade to do this. As this is going to be my first LU of a system, I've got some questions. Before I start, I'd like to mention that I have read the "Solaris 10 8/07 Installation Guide: Solaris Live Upgrade and Upgrade Planning" ([820-0178|http://docs.sun.com/app/docs/doc/820-0178]) document. Nonetheless, I'd also appreciate pointers to more "hands-on" documentation/howtos regarding Live Upgrade.
The system that I'd like to upgrade has these filesystems:
(winds02)askwar$ df
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/md/dsk/d30 4129290 684412 3403586 17% /
/dev/md/dsk/d32 3096423 1467161 1567334 49% /usr
/dev/md/dsk/d33 2053605 432258 1559739 22% /var
swap 7205072 16 7205056 1% /var/run
/dev/dsk/c3t1d0s6 132188872 61847107 69019877 48% /u04
/dev/md/dsk/d34 18145961 5429315 12535187 31% /opt
/dev/md/dsk/d35 4129290 77214 4010784 2% /export/home
It has two built-in hard disks, which form those metadevices. You can find the "metastat" output at http://askwar.pastebin.ca/697380. I'm now planning to break the mirrors for /, /usr, /var and /opt. To do so, I'd run
metadetach d33 d23
metaclear d23
d23 is/used to be c1t1d0s4. I'd do this for d30, d32 and d34 as well. Plan is, that I'd be able to use these newly freed slices on c1t1d0 for LU. I know that I'm in trouble when c1t0d0 now dies. But that's okay, as that system isn't being used anyway right now...
Or wait, I can use lucreate to do that as well, can't I? So, instead of manually detaching the mirror, I could do:
lucreate -n s8_2_s10 -m /:/dev/md/dsk/d30:preserve,ufs \
-m /usr:/dev/md/dsk/d32:preserve,ufs \
-m /var:/dev/md/dsk/d33:preserve,ufs \
-m /opt:/dev/md/dsk/d34:preserve,ufs
Does that sound right? I'd assume that I'd then have a new boot environment called "s8_2_s10", which uses the contents of the old metadevices. Or would the correct command rather be:
lucreate -n s8_2_s10_v2 \
-m /:/dev/md/dsk/d0:mirror,ufs \
-m /:/dev/md/dsk/d20:detach,attach,preserve \
-m /usr:/dev/md/dsk/d2:mirror,ufs \
-m /usr:/dev/md/dsk/d22:detach,attach,preserve \
-m /var:/dev/md/dsk/d3:mirror,ufs \
-m /var:/dev/md/dsk/d23:detach,attach,preserve \
-m /opt:/dev/md/dsk/d4:mirror,ufs \
-m /opt:/dev/md/dsk/d24:detach,attach,preserve
What would be the correct way to create the new boot environment? As I said, I haven't done this before, so I'd really appreciate some help.
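To compare the two candidate invocations side by side before committing to either, a dry-run wrapper can assemble and echo the command line. build_lucreate below is a hypothetical helper of my own, not an LU tool:

```shell
# Hypothetical dry-run helper: assemble a lucreate command line from a BE
# name and a list of -m specs, then print it for review instead of running it.
build_lucreate() {
    name=$1; shift
    cmd="lucreate -n $name"
    for spec in "$@"; do
        cmd="$cmd -m $spec"
    done
    printf '%s\n' "$cmd"
}

build_lucreate s8_2_s10 \
    /:/dev/md/dsk/d30:preserve,ufs \
    /usr:/dev/md/dsk/d32:preserve,ufs
```

Eyeballing the echoed line also catches the backslash/quoting mistakes that lucreate otherwise reports as unrecognized arguments.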
Thanks a lot,
Alexander Skwar
I replied to this thread: Re: lucreate and non-global zones, so as not to duplicate content, but for some reason it was locked. So I'll post here...
The thread was locked because you were not replying to it.
You were hijacking that other person's discussion from 2012 to ask your own new post.
You have now properly asked your question and people can pay attention to you and not confuse you with that other person.
-
System Image Utility fails to create boot image
I am not able to successfully build a boot image with the System Image Utility. The build starts and runs for about 1 minute, and then I get hundreds of ditto messages saying "No space left on device". There's plenty of space left on the device. Eventually I get a GUI message stating that there was an error creating the image. The image is of course unusable. This only happens on a boot image. I have no problem making an install image from the same source. Is it just me??
Xserve G5 Dual, Mac OS X (10.4.5)
I had similar problems, with much head-scratching as the result.
I found that whenever I tried to create a boot image with System Image Utility (SIU) using an image file of my existing system as the source, SIU would fail with the annoying "No space left on device" message every time. I did a little investigating and found that SIU always created a 400 MB disk image file to copy to. So the error message was correct, as my source was way over 4 GB.
I checked the manual and found the embarrassingly simple solution. It's not mentioned directly; rather, it is stated that when you want to create a boot image from an existing system, you should boot the machine containing the desired system from an alternate source and then run SIU on that machine. The "trick" is that you're running SIU with the existing system mounted as a disk.
So I went back to my Xserve, mounted the image so it appeared on the desktop. Ran SIU and chose the mounted volume as the source instead of the image file, and hey presto!
MacBook Pro, Xserve, eMac, iMac... any Mac I can get my hands on, Mac OS X (10.4.6)
-
Lucreate - Cannot make file systems for boot environment
Hello!
I'm trying to use Live Upgrade to upgrade one of "my" Sparc servers from Solaris 10 U5 to Solaris 10 U6. To do that, I first installed the patches listed in [Infodoc 72099|http://sunsolve.sun.com/search/document.do?assetkey=1-9-72099-1] and then installed SUNWlucfg, SUNWlur and SUNWluu from the S10U6 sparc DVD iso. I then did:
--($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <d100> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <S10U6_20081207>.
Source boot environment is <d100>.
Creating boot environment <S10U6_20081207>.
Creating file systems on boot environment <S10U6_20081207>.
Creating <ufs> file system for </> in zone <global> on </dev/md/dsk/d200>.
Mounting file systems for boot environment <S10U6_20081207>.
Calculating required sizes of file systems for boot environment <S10U6_20081207>.
ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
So the problem is:
ERROR: Cannot make file systems for boot environment <S10U6_20081207>.
Well - why's that?
I can do a "newfs /dev/md/dsk/d200" just fine.
When I try to remove the incomplete S10U6_20081207 BE, I get yet another error :(
/bin/nawk: can't open file /etc/lu/ICF.2
Source code line number 1
Boot environment <S10U6_20081207> deleted.
I get this error consistently (I have run the lucreate many times now).
lucreate used to work fine, "once upon a time", when I brought the system from S10U4 to S10U5.
Would anyone maybe have an idea about what's broken there?
--($ ~)-- LC_ALL=C metastat
d200: Mirror
Submirror 0: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31458321 blocks (15 GB)
d20: Submirror of d200
State: Okay
Size: 31458321 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s0 0 No Okay Yes
d100: Mirror
Submirror 0: d10
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 31458321 blocks (15 GB)
d10: Submirror of d100
State: Okay
Size: 31458321 blocks (15 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s0 0 No Okay Yes
d201: Mirror
Submirror 0: d21
State: Okay
Submirror 1: d11
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2097414 blocks (1.0 GB)
d21: Submirror of d201
State: Okay
Size: 2097414 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t1d0s1 0 No Okay Yes
d11: Submirror of d201
State: Okay
Size: 2097414 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1t0d0s1 0 No Okay Yes
hsp001: is empty
Device Relocation Information:
Device Reloc Device ID
c1t1d0 Yes id1,sd@THITACHI_DK32EJ-36NC_____434N5641
c1t0d0 Yes id1,sd@SSEAGATE_ST336607LSUN36G_3JA659W600007412LQFN
--($ ~)-- /bin/df -k | grep md
/dev/md/dsk/d100 15490539 10772770 4562864 71% /
Thanks,
Michael
Hello.
(sys01)root# devfsadm -Cv
(sys01)root#
To be on the safe side, I even rebooted after having run devfsadm.
--($ ~)-- sudo env LC_ALL=C LANG=C lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
d100 yes yes yes no -
--($ ~)-- sudo env LC_ALL=C LANG=C lufslist d100
boot environment name: d100
This boot environment is currently active.
This boot environment will be active on next system boot.
Filesystem fstype device size Mounted on Mount Options
/dev/md/dsk/d100 ufs 16106660352 / logging
/dev/md/dsk/d201 swap 1073875968 - -
In the rebooted system, I re-did the original lucreate:
--($ ~)-- time sudo env LC_ALL=C LANG=C PATH=/usr/bin:/bin:/sbin:/usr/sbin:$PATH lucreate -n S10U6_20081207 -m /:/dev/md/dsk/d200:ufs
Copying.
Excellent! It now works!
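For anyone hitting the same wall, a pre-flight sanity check in the same spirit (my own sketch, not an LU utility): confirm the target node exists and is a block device before lucreate runs. A missing or stale node is exactly what devfsadm -Cv rebuilds.

```shell
# Sketch: sanity-check a lucreate target device node before running lucreate.
check_target() {
    dev=$1
    if [ ! -e "$dev" ]; then
        echo "missing: $dev (run 'devfsadm -Cv' to rebuild device links)"
        return 1
    fi
    if [ ! -b "$dev" ]; then
        echo "not a block device: $dev"
        return 1
    fi
    echo "ok: $dev"
}

check_target /dev/md/dsk/d200 || echo "fix the device links, then retry lucreate"
```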
Thanks a lot,
Michael
-
Hi All
I tried to create a new boot environment using lucreate on Solaris 10 release 10/08:
/usr/sbin/lucreate -c live -m /:/dev/dsk/c0t1d0s0:ufs \
-m -:/dev/dsk/c0t1d0s1:swap \
-m /tmp:/dev/dsk/c0t1d0s4:ufs \
-m /usr:/dev/dsk/c0t1d0s6:ufs \
-m /var:/dev/dsk/c0t1d0s5:ufs \
-m /local:/dev/dsk/c0t1d0s7:ufs -n cloned
my disk lay out on c0t0d0 and c0t1d0 are identical, df shows a pretty straight forward config:
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 966815 560068 348739 62% /
/dev/dsk/c0t0d0s6 5045478 474884 4520140 10% /usr
/dev/dsk/c0t0d0s5 8072501 204880 7786896 3% /var
/dev/dsk/c0t0d0s4 966815 1198 907609 1% /tmp
/dev/dsk/c0t0d0s7 39306115 790864 38122190 3% /local
but lucreate failed miserably.
Updating system configuration files.
The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <cloned>.
Source boot environment is <live>.
Creating boot environment <cloned>.
Creating file systems on boot environment <cloned>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c0t1d0s0>.
Creating <ufs> file system for </local> in zone <global> on </dev/dsk/c0t1d0s7>.
Creating <ufs> file system for </tmp> in zone <global> on </dev/dsk/c0t1d0s4>.
Creating <ufs> file system for </usr> in zone <global> on </dev/dsk/c0t1d0s6>.
Creating <ufs> file system for </var> in zone <global> on </dev/dsk/c0t1d0s5>.
Mounting file systems for boot environment <cloned>.
ERROR: mount point </.alt.tmp.b-L2b.mnt/tmp> is already in use, mounted on </dev/dsk/c0t1d0s4>
ERROR: failed to create mount point </.alt.tmp.b-L2b.mnt/tmp> for file system <swap>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
WARNING: Attempting to correct problems detected with file systems for boot environment <cloned>.
Performing file system check of device </dev/rdsk/c0t1d0s0>.
I am running out of ideas, as everything looks correct to me.
Thanks for your help!
Hi,
Try devfsadm -Cv (it might be a small c). If you have already run the lucreate command many times, you will need to clear the failed devices before it will work. The command you're using looks fine, though. Also, are you using x86 or SPARC? x86 is very buggy. And one final thing: have you installed the Live Upgrade patch cluster?
Regards
chris
-
I have the following file systems on my fully patched Solaris 10 machine:
/export/home
/ZONES/reg-otm3
/ZONES/reg-otm4
Note that reg-otm3 and reg-otm4 are whole-root zones, up and running.
The above file systems are all part of my SAN network. I want to create an ABE on one of my internal disks, but lucreate barfs when it sees the zones:
root@reg-otm2 # lucreate -c "San_Disk" -m /:/dev/dsk/c0t1d0s0:ufs -n "Internal_Disk_1"
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <San_Disk>.
Creating initial configuration for primary boot environment <San_Disk>.
The device </dev/dsk/c3t50060E8005B00B14d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <San_Disk> PBE Boot Device </dev/dsk/c3t50060E8005B00B14d0s0>.
Comparing source boot environment <San_Disk> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <Internal_Disk_1>.
Source boot environment is <San_Disk>.
Creating boot environment <Internal_Disk_1>.
Creating file systems on boot environment <Internal_Disk_1>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c0t1d0s0>.
Mounting file systems for boot environment <Internal_Disk_1>.
Calculating required sizes of file systems for boot environment <Internal_Disk_1>.
Populating file systems on boot environment <Internal_Disk_1>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Copying root of zone <reg-otm3> to </.alt.tmp.b-Nwg.mnt/ZONES/reg-otm3-Internal_Disk_1>.
ERROR: Internal error: /.alt.tmp.b-Nwg.mnt/ZONES/reg-otm3-Internal_Disk_1 is missing
Copying root of zone <reg-otm4> to </.alt.tmp.b-Nwg.mnt/ZONES/reg-otm4-Internal_Disk_1>.
ERROR: Internal error: /.alt.tmp.b-Nwg.mnt/ZONES/reg-otm4-Internal_Disk_1 is missing
Creating compare databases for boot environment <Internal_Disk_1>.
Creating compare database for file system </>.
Updating compare databases on boot environment <Internal_Disk_1>.
Making boot environment <Internal_Disk_1> bootable.
ERROR: unable to mount zones:
zoneadm: /.alt.tmp.b-aMg.mnt/ZONES/reg-otm3-Internal_Disk_1: No such file or directory
could not verify zonepath /.alt.tmp.b-aMg.mnt/ZONES/reg-otm3-Internal_Disk_1 because of the above errors.
zoneadm: zone reg-otm3 failed to verify
ERROR: unable to mount zone <reg-otm3> in </.alt.tmp.b-aMg.mnt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Unable to remount ABE <Internal_Disk_1>: cannot make ABE bootable
ERROR: no boot environment is mounted on root device </dev/dsk/c0t1d0s0>
Making the ABE <Internal_Disk_1> bootable FAILED.
ERROR: Unable to make boot environment <Internal_Disk_1> bootable.
ERROR: Unable to populate file systems on boot environment <Internal_Disk_1>.
ERROR: Cannot make file systems for boot environment <Internal_Disk_1>.
I just want to be able to tell lucreate to ignore the zones, but I get the same results when I use the "-x" qualifiers as well.
Any ideas?
TIA
Rick
1) I would move the creation of the frame to init. I like to create objects only when they are needed. The frame is not needed when the class is created, but it is needed when the applet starts.
2) You forgot to create a container that can hold objects. Try creating a JPanel where you can add your buttons. Then add the JPanel to the ContentPane.
import java.awt.Color;
import java.awt.FlowLayout;
import javax.swing.JApplet;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class JAppletExample extends JApplet {
    JFrame jfrmTest;
    JPanel jpnlTest;

    public void init() {
        // initializing objects
        jfrmTest = new JFrame("This is a test");
        jpnlTest = new JPanel();
        // setting parameters
        jpnlTest.setBackground(Color.white);
        jpnlTest.setLayout(new FlowLayout());
        jpnlTest.add(new JButton("Button 1"));
        jpnlTest.add(new JButton("Button 2"));
        jpnlTest.add(new JButton("Button 3"));
        // adding JPanel to ContentPane
        jfrmTest.getContentPane().add(jpnlTest);
        // setting size and visibility
        jfrmTest.setSize(400, 150);
        jfrmTest.setVisible(true);
    }
}
Hope this helps, Rommie.
-
Lucreate fails with - .../var directory not empty
Hi,
I have Solaris 10u8 with the patch cluster from February 2010 and patch 121430-44 installed on a V445.
/ and /var are on separate datasets.
When I attempt to run lucreate, I get an error as it attempts to mount /.alt.tmp.b-vI.mnt/var, due to the directory /.alt.tmp.b-vI.mnt/var/tmp already existing.
lucreate appears to create a /var/tmp directory in the root dataset of the new BE during creation and is then unable to mount the var dataset.
It returns a "directory not empty" error
Attempted this multiple times same issue.
Any suggestions would be greatly appreciated.
bash-3.00# lucreate -n test
Analyzing system configuration.
Comparing source boot environment <sol10u8_0210> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <test>.
Source boot environment is <sol10u8_0210>.
Creating boot environment <test>.
Cloning file systems from boot environment <sol10u8_0210> to create boot environment <test>.
Creating snapshot for <rpool/ROOT/s10s_u8wos_08a> on <rpool/ROOT/s10s_u8wos_08a@test>.
Creating clone for <rpool/ROOT/s10s_u8wos_08a@test> on <rpool/ROOT/test>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/test>.
Creating snapshot for <rpool/ROOT/s10s_u8wos_08a/var> on <rpool/ROOT/s10s_u8wos_08a/var@test>.
Creating clone for <rpool/ROOT/s10s_u8wos_08a/var@test> on <rpool/ROOT/test/var>.
Setting canmount=noauto for </var> in zone <global> on <rpool/ROOT/test/var>.
ERROR: cannot mount '/.alt.tmp.b-vI.mnt/var': directory is not empty
ERROR: cannot mount mount point </.alt.tmp.b-vI.mnt/var> device <rpool/ROOT/test/var>
ERROR: failed to mount file system <rpool/ROOT/test/var> on </.alt.tmp.b-vI.mnt/var>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Unable to mount ABE <test>
ERROR: Unable to clone the existing file systems from boot environment <sol10u8_0210> to create boot environment <test>.
ERROR: Cannot make file systems for boot environment <test>.
Thanks.
Edited by: Brian79 on Mar 14, 2010 5:36 AM
I think I figured it out...
In the / dataset there is a /var/tmp that is missing the sticky bit (mode 0755). There was an anomaly during the u7 to u8 luupgrade that caused this, I think. It was a couple of weeks ago, and it didn't take me too long to overcome it with 'chmod 1777 /var/tmp'. But that was with the /var dataset mounted.
Unfortunately, zfs just mounts over the bad /var/tmp in the / dataset, causing lumount to barf during the lucreate process because there's already something in /var/tmp when it tries to mount the var dataset clone.
I dropped to single-user mode, force-unmounted /var, and deleted /var/tmp (which still had the 0755 permissions). lucreate works just fine now...
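Mike's check and fix can be rehearsed safely first. The sketch below uses a mktemp scratch directory rather than the real /var/tmp, since touching the live one while the dataset is mounted over it was exactly the trap:

```shell
# Rehearse the sticky-bit check/fix on a scratch directory.
d=$(mktemp -d)
chmod 0755 "$d"                    # simulate the bad mode left in the / dataset
flag=$(ls -ld "$d" | cut -c10)     # 10th mode character is 't' when sticky is set
if [ "$flag" != t ] && [ "$flag" != T ]; then
    echo "sticky bit missing; applying chmod 1777"
    chmod 1777 "$d"
fi
ls -ld "$d" | cut -c1-10           # now reads drwxrwxrwt
rmdir "$d"
```

On the real system the same one-liner check against the hidden /var/tmp (with /var unmounted) tells you whether you are hitting this bug before you rerun lucreate.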
Good luck,
Mike
-
Hi,
I'm running lucreate on a Sol10u7 x86 system as I wanted to get it to u8 level. I installed:
SUNWlucfg
SUNWlur
SUNWluu
from u8 and then a patch: 121431-58
System is not zoned and it is on ZFS with following pools:
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c1t0d0s0 ONLINE 0 0 0
errors: No known data errors
pool: spool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
spool ONLINE 0 0 0
c0t0d0 ONLINE 0 0 0
This is what happens:
Creating Alternative Boot Environment..
lucreate -n s10x_u8
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <s10x_u7wos_08>.
Current boot environment is named <s10x_u7wos_08>.
Creating initial configuration for primary boot environment <s10x_u7wos_08>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10x_u7wos_08> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <s10x_u7wos_08> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <s10x_u8>.
Source boot environment is <s10x_u7wos_08>.
Creating boot environment <s10x_u8>.
Cloning file systems from boot environment <s10x_u7wos_08> to create boot environment <s10x_u8>.
Creating snapshot for <rpool/ROOT/s10x_u7wos_08> on <rpool/ROOT/s10x_u7wos_08@s10x_u8>.
Creating clone for <rpool/ROOT/s10x_u7wos_08@s10x_u8> on <rpool/ROOT/s10x_u8>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/s10x_u8>.
Creating snapshot for <rpool/ROOT/s10x_u7wos_08/var> on <rpool/ROOT/s10x_u7wos_08/var@s10x_u8>.
Creating clone for <rpool/ROOT/s10x_u7wos_08/var@s10x_u8> on <rpool/ROOT/s10x_u8/var>.
Setting canmount=noauto for </var> in zone <global> on <rpool/ROOT/s10x_u8/var>.
ERROR: Root slice device </dev/dsk/c1t0d0s0> for BE <s10x_u8> is not a block device: .
ERROR: Cannot make file systems for boot environment <s10x_u8>.
Please help,
Cheers,
Tom
devfsadm fixed the issue.
-
Help creating a clone of my boot environment
I am running Solaris 10
I am booting from SAN and want to clone my boot environment to another SAN LUN. I keep running into the error: ERROR: Root pool <vnxsanpool> is not a bootable pool. I have formatted the disk with an EFI label using format -e, and have a 60 GB root partition (whole disk) on slice 0 and a 60 GB backup partition on slice 2. I created the zpool with zpool create disk0, and then try to do lucreate -n vnxBE -p rootpool.
Thanks for your help.
Hi,
Solaris 10 needs an SMI (VTOC) label on the root pool disk. This is a long-standing boot limitation.
Convert the disk label to SMI by using format -e. Watch that it doesn't put on a default label with 128MB in slice 0.
If it does, move the disk space back to s0.
See these pointers:
SPARC: Setting up Disks for ZFS File Systems (Task Map) - System Administration Guide: Devices and File Systems
x86: Setting Up Disks for ZFS File Systems (Task Map) - System Administration Guide: Devices and File Systems
Thanks, Cindy
-
Need Best Practice for creating BE in ZFS boot environment with zones
Good Afternoon -
I have a Sparc system with a ZFS root file system and zones. I need to create a BE whenever we do patching or upgrades of the O/S. I ran into issues when test-booting the newBE, where the zones did not show up. I tried to go back to the original BE by running luactivate on it and received errors. I did a fresh install of the O/S from cdrom on a ZFS filesystem, then ran the following commands to create the zones, create the BE, activate it, and boot off of it. Please tell me if there are any steps left out or if the sequence was incorrect.
# zfs create -o canmount=noauto rpool/ROOT/S10be/zones
# zfs mount rpool/ROOT/S10be/zones
# zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z1
# zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z2
# zfs mount rpool/ROOT/s10be/zones/z1
# zfs mount rpool/ROOT/s10be/zones/z2
# chmod 700 /zones/z1
# chmod 700 /zones/z2
# zonecfg -z z1
Myzone: No such zone configured
Use create to begin configuring a new zone
Zonecfg:myzone> create
Zonecfg:myzone> set zonepath=/zones/z1
Zonecfg:myzone> verify
Zonecfg:myzone> commit
Zonecfg:myzone>exit
# zonecfg -z z2
Myzone: No such zone configured
Use create to begin configuring a new zone
Zonecfg:myzone> create
Zonecfg:myzone> set zonepath=/zones/z2
Zonecfg:myzone> verify
Zonecfg:myzone> commit
Zonecfg:myzone>exit
# zoneadm -z z1 install
# zoneadm -z z2 install
# zlogin -C -e 9. z1
# zlogin -C -e 9. z2
Output from zoneadm list -v:
# zoneadm list -v
ID NAME STATUS PATH BRAND IP
0 global running / native shared
2 z1 running /zones/z1 native shared
4 z2 running /zones/z2 native shared
Now for the BE create:
# lucreate -n newBE
# zfs list
rpool/ROOT/newBE 349K 56.7G 5.48G /.alt.tmp.b-vEe.mnt <--showed this same type mount for all f/s
# zfs inherit -r mountpoint rpool/ROOT/newBE
# zfs set mountpoint=/ rpool/ROOT/newBE
# zfs inherit -r mountpoint rpool/ROOT/newBE/var
# zfs set mountpoint=/var rpool/ROOT/newBE/var
# zfs inherit -r mountpoint rpool/ROOT/newBE/zones
# zfs set mountpoint=/zones rpool/ROOT/newBE/zones
and did it for the zones too.
When I ran luactivate newBE, it came up with errors, so again I changed the mountpoints. Then I rebooted.
Once it came up, I ran luactivate newBE again and it completed successfully. I ran lustatus and got:
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
s10s_u8wos_08a yes yes no no -
newBE yes no yes no -
Ran init 0
ok boot -L
picked item two which was newBE
then boot.
It came up, but df showed no zones, zfs list showed no zones, and when I cd'd into /zones there was nothing there.
Please help!
thanks, julie
The issue here is that lucreate adds an entry to the vfstab in newBE for the zfs filesystems of the zones. You need to lumount newBE /mnt, then edit /mnt/etc/vfstab and remove the entries for any zfs filesystems. Then, if you luumount it, you can continue. It's my understanding that this has been reported to Sun, and the fix is in the next release of Solaris.
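The hand-edit can be sketched with sample data. The vfstab lines below are hypothetical entries of my own invention; the fourth whitespace-separated field in a vfstab line is the FS type, which is what the edit keys on:

```shell
# Build a sample vfstab fragment (hypothetical entries) and filter out the
# zfs lines, mimicking the manual edit of /mnt/etc/vfstab after
# 'lumount newBE /mnt'.
sample=$(mktemp)
cat > "$sample" <<'EOF'
/dev/dsk/c0t0d0s1  -  -       swap  -  no   -
rpool/ROOT/newBE/zones  -  /zones  zfs  -  yes  -
/proc  -  /proc   proc  -  no   -
EOF

awk '$4 != "zfs"' "$sample"    # keep every line whose fstype is not zfs
```

In practice you would write the filtered output to a new file and move it into place after eyeballing it, rather than editing the live vfstab in a pipeline.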
-
Failed to Create MDT Boot Image in SCCM 2007
Hello Everyone,
I am trying to create an MDT-integrated boot image in SCCM to enable DART integration on it. But when I try to create the boot image, it fails with the error message below.
Started processing.
Creating boot image.
Copying WIM file.
Mounting WIM file.
WIM file mounted.
Setting Windows PE system root.
Set Windows PE system root.
Set Windows PE scratch space.
Adding standard components.
Adding extra content from: C:\Users\ADMINI~1\AppData\Local\Temp\1\i5wqsynb.efm
Unmounting WIM.
Copying WIM to the package source directory.
Creating boot image package.
Error while importing Microsoft Deployment Toolkit Task Sequence.
Failed to insert OSD binaries into the WIM file
Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlQueryException: The ConfigMgr Provider reported an error.
---> System.Management.ManagementException: Generic failure
at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
at System.Management.ManagementObject.Put(PutOptions options)
at Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlResultObject.Put(ReportProgress progressReport)
--- End of inner exception stack trace ---
at Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlResultObject.Put(ReportProgress progressReport)
I have searched the Internet for the same error, and I guess it was a permission issue. But my environment is as below.
AD & SCCM Server on Same Machine
The folder has full permissions for SYSTEM, the SCCM computer account, and Domain Administrators, and I even gave Everyone Full Control.
This is a LAB Environment with No Antivirus, No UAC
ANY HELP ON THIS Folks..... (:-()
Ram
Hi Ychinnari,
As your question is related to SCCM, it is not supported here.
You could post it in the SCCM forum for better support. Thanks for your understanding.
SCCM forum link:
https://social.technet.microsoft.com/Forums/systemcenter/en-US/home?category=configurationmanager
Best regards,
Youjun Tang
-
Zones not booting - failed to create devlinks: Interrupted system call
I just installed the latest 10_Recommended cluster with the 118833-36 kernel patch, and now my zones won't boot. I get the error:
(root)Yes Master?> zoneadm list -iv
ID NAME STATUS PATH
0 global running /
- samba installed /export/home/zones/samba
- web installed /export/home/zones/web
- dhcp installed /export/home/zones/dhcp
- dns installed /export/home/zones/dns
- vs1 installed /zdata/zones/vs1
- dss installed /zdata/zones/dss
- test installed /zdata/zones/test
(root)Yes Master?> zoneadm -z test boot
failed to create devlinks: Interrupted system call
console setup: device initialization failed
zoneadm: zone 'test': could not start zoneadmd
zoneadm: zone 'test': call to zoneadmd failed
Also, running devfsadm or drvconfig;devlinks from the global zone will core dump.
Any ideas..??
tia..
First, you gotta change your prompt to something less embarrassing when you post to a public forum :)
I'd forget about the zones problem and focus on why devfsadm core dumps -- that's the core of the problem (no pun intended...no, really!).
First, review the install logs of all the patches installed during the recent service (/var/sadm/patch/*/log). Even if they all show a good completion, check the messages they generated; sometimes they have errors that aren't bad enough to cause a complete failure of the patch. The KJP (118833-36) is probably a good one to start with.
Next I'd "truss" the devfsadm command while it core dumps then get a stack trace of the core (run "adb <corefile>" and type "$C" -- that's old school, I'm sure mdb is much cooler...).
Throw some of the strings from those against sunsolve and see if something sticks.
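The log-review and trace-then-stack-trace steps described above can be sketched roughly as follows (the truss output path and core-file name are assumptions, not from the thread):

```shell
# 1. Review the patch install logs for errors that did not fail the patch
#    (log location per the advice above)
grep -i -e error -e fail /var/sadm/patch/*/log

# 2. Trace devfsadm system calls while it dumps core
truss -f -o /var/tmp/devfsadm.truss devfsadm -v

# 3. Pull a stack trace from the resulting core file
#    ($C also works in mdb's adb-compatibility mode)
echo '$C' | mdb core
```

Anything suspicious in the truss tail or the stack trace is then a candidate string to search against sunsolve.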
Good luck! -
Creating Standby server ---- Flash archive/Alternate Boot Environment.
Dear All,
I want to patch-upgrade the centralized login server, which holds more than 200 users. To avoid a production outage during the upgrade downtime, we are planning to create all the logins and applications on the alternate server manually. But I am hesitant (lazy) to create the logins and copy the applications to the standby server, and since "chpasswd" is not in Solaris, I can't simply copy the /etc/passwd and shadow files to the standby server.
So I plan to achieve this by:
i) creating a flash archive of the existing login server
ii) creating an alternate boot environment on the standby server and restoring the flash archive into it
Is the above plan achievable? How do I create a flash archive of the entire Solaris OS, and how do I deploy it into the alternate boot environment?
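A minimal sketch of steps (i) and (ii), with illustrative archive paths, slice names, and install-image location that are assumptions rather than anything stated in the thread:

```shell
# i) Create a compressed flash archive of the running login server
#    (-n names the archive; -c compresses; output path is illustrative)
flarcreate -n loginserver -c /export/flash/loginserver.flar

# Sanity-check the archive metadata before deploying it
flar info /export/flash/loginserver.flar

# ii) On the standby host: create an alternate BE on a spare slice
#     (slice name is an assumption), then install the archive into it
lucreate -n standbyBE -m /:/dev/dsk/c1t1d0s0:ufs
luupgrade -f -n standbyBE -s /net/installserver/export/s10image \
    -a /export/flash/loginserver.flar
```

The -s argument to luupgrade must point at a Solaris OS image (network path shown here is hypothetical); luactivate and a reboot would then bring the restored BE up.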
Regards,
Siva
Done, I've given the same paths in the standby database [datafiles, controlfiles] as in the primary database.
Moreover, I am manually switching logfiles at the primary database [alter system switch logfile]; they are copied and applied successfully at the standby database. But the archived files that I copied with the cold backup are not applied yet. In fact, they are not returned by the query below:
select name, applied from v$archived_log;
How can I apply those archived logs? [the archives that were copied with the cold backup]
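One possible approach (an assumption on my part, not confirmed anywhere in the thread) is that manually copied archives must first be registered on the standby before managed recovery will apply them; the filename and path below are purely illustrative:

```shell
# Register each cold-backup archive on the standby, then restart managed
# recovery so it can apply them (run on the standby as the oracle user)
sqlplus -s "/ as sysdba" <<'EOF'
ALTER DATABASE REGISTER LOGFILE '/oraarch/ARCH_1_100.ARC';
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF
```

After registration the logs should show up in v$archived_log and be eligible for apply.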
Regards. -
Lucreate fails w/msg cannot check device name d41 for device path abbre
I'm using Live Upgrade to install Solaris 10 8/07 on a V490 currently running Solaris 9 4/04. Sol9 is using SVM to mirror two internal drives with file systems for /, /var and swap. I used format and formatted two new slices for / and /var. LU has been removed and the liveupgrade20 script used to install LU from the Solaris 10 CD. I believe the next step is to lucreate the BE, but the lucreate is failing:
root@animal # lucreate -A 'dualboot' -m /:/dev/md/dsk/d40:ufs,mirror -m /:/dev/dsk/c1t0d0s4,d41:attach -m /var:/dev/md/dsk/d50:ufs,mirror -m /var:/dev/dsk/c1t0d0s5,d51:attach -m /var:/dev/dsk/c1t1d0s5,d52:attach -n sol10
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
ERROR: cannot check device name <d41> for device path abbreviation
ERROR: cannot determine if device name <d41> is abbreviated device path
ERROR: cannot create new boot environment using file systems as configured
ERROR: please review all file system configuration options
ERROR: cannot create new boot environment using options provided
It's probably something simple, as this is the first time I'm doing an upgrade on my own.
Thanks for any ideas,
Glen
I received help elsewhere.
To summarize using the full metadevice names worked:
lucreate -A 'dualboot' -m /:/dev/md/dsk/d40:ufs,mirror -m /:/dev/dsk/c1t0d0s4,/dev/md/dsk/d41:attach -m /:/dev/dsk/c1t1d0s4,/dev/md/dsk/d42:attach -m /var:/dev/md/dsk/d50:ufs,mirror -m /var:/dev/dsk/c1t0d0s5,/dev/md/dsk/d51:attach -m /var:/dev/dsk/c1t1d0s5,/dev/md/dsk/d52:attach -n sol10
(Note: Using the short names (d41, d42 etc) worked with Solaris 10 6/06, but fails with 8/07.)
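Once a lucreate like the one above succeeds, the result can be checked with something like this (a sketch; the metadevice names follow the command above):

```shell
# List boot environments and their completion status
lustatus

# Verify the submirrors attached to the new root and /var mirrors
metastat d40 d50
```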
sysglen