Smpatch add -b fails for ZFS root

This is Solaris-10U6, x86, patched to current patchlevels as of this afternoon:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
be20081218                 yes      no     no        yes    -
be20081229                 yes      yes    yes       no     -
# smpatch analyze 2>&1 | tee /var/tmp/,patchlist
122912-14 SunOS 5.10_x86: Apache 1.3 Patch
# lucreate -n testbe
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <be20081229> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <testbe>.
Source boot environment is <be20081229>.
Creating boot environment <testbe>.
Cloning file systems from boot environment <be20081229> to create boot environment <testbe>.
Creating snapshot for <rpool/ROOT/be20081229> on <rpool/ROOT/be20081229@testbe>.
Creating clone for <rpool/ROOT/be20081229@testbe> on <rpool/ROOT/testbe>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/testbe>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <be20081218> as <mount-point>//boot/grub/menu.lst.prev.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <testbe> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <testbe> in GRUB menu
Population of boot environment <testbe> successful.
Creation of boot environment <testbe> successful.
# smpatch download -x idlist=/var/tmp/,patchlist
122912-14 has been validated.
# smpatch add -b testbe -x idlist=/var/tmp/,patchlist
Ckecking the currently running boot enviornment ...
Currently running boot enviornment name is [be20081229].
Checking the destination boot environment [testbe] ...
Copying the currently running BE into inactive BE [testbe] ...
(This process will take some time, please wait a moment.)
ERROR: File systems on ABE <testbe> have insufficient space for repopulation from boot environment <be20081229>. It is recommended to delete this BE and create a fresh BE.
/usr/sbin/lumake: lumake into testbe failed
#

I think the problem is generated by the following line in /usr/sbin/lumake:
$LUBIN/lucomp_size -p $PBE_NAME -i ${ICF} -O $INODE_ICF -n $ABE_NAME
This command calls /usr/lib/lu/lucomp_size, and it is this which returns 1 and ultimately causes the "insufficient space" error.
To find out what is going wrong when lumake checks the size, please enable the printing of commands and arguments and run the command again:
# script /var/tmp/smpatch.out
# set -x
# smpatch add -b testbe -x idlist=/var/tmp/,patchlist
# set +x
# exit
Once this has completed, please look through /var/tmp/smpatch.out, then run the corresponding lucomp_size command and check its output and exit code:
# /usr/lib/lu/lucomp_size <args>
# echo $?
If you are absolutely certain that there is enough space and just want to make this work now, you could remove the following lines from lumake:
  $LUBIN/lucomp_size -p $PBE_NAME -i ${ICF} -O $INODE_ICF -n $ABE_NAME
  if [ "$?" -ne "0" ] ; then
    # Size is not sufficient
    ${LUPRINTF} -Eelp2 "`gettext 'File systems on ABE <%s> have insufficient \
space for repopulation from boot environment <%s>. It is recommended to \
delete this BE and create a fresh BE.'`" "${ABE_NAME}" "${PBE_NAME}"
    err_exit_script 1
  fi
I would recommend that you open a support call so that the issue can be progressed more quickly.
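Before editing lumake, it is also worth checking how much space ZFS itself reports for the clone and its pool; a quick sanity check (dataset names taken from the transcript above):
# zfs list -o name,used,avail,refer rpool/ROOT/be20081229 rpool/ROOT/testbe
# zpool list rpool
A freshly cloned BE should show a tiny USED value, so a genuine shortage would normally show up as low AVAIL on the pool rather than on the clone.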

Similar Messages

  • Live upgrade only for zfs root?

Is Live Upgrade the only upgrade path for a ZFS root on 5/09? Is this true? I have tried to do live upgrades previously and have had no luck, particularly on my old Blade 1000 with an 18GB drive.

    Reading over this post I see it is a little unclear. I am trying to upgrade a u6 installation that has a zfs root to u7.
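For reference, the Live Upgrade sequence for taking a ZFS-root u6 BE to u7 is short; a sketch (the BE name and media path are examples, not from the post):
# lucreate -n u7BE
# luupgrade -u -n u7BE -s /mnt/s10u7_image
# luactivate u7BE
# init 6
Note that after luactivate the reboot must be done with init 6 (or shutdown), not with reboot, or the BE switch will not take effect.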

  • Flarecreate for zfs root dataset and ignore multiple dataset

    Hi All,
I want to write a script to create flar images on multiple servers. On non-ZFS filesystems I use the -X option to point at a file that excludes mounts that differ between servers,
but on ZFS the -X option is not working. I want multiple mounts to be ignored on ZFS-based systems during flarcreate.
I can use the -D option to ignore datasets on the server, but that does not serve my purpose, as I maintain a common file to exclude the mounts on all the different servers.
Please help me with this.
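One way to keep a single shared exclude file and still satisfy flarcreate on ZFS is to expand that file into repeated -D options at run time; a minimal sketch (the file name /etc/flar_excludes and the archive path are hypothetical):
#!/bin/sh
# build one -D argument per non-empty line of the shared exclude file
DARGS=""
while read ds; do
  [ -n "$ds" ] && DARGS="$DARGS -D $ds"
done < /etc/flar_excludes
flarcreate -n "`hostname`_flar" $DARGS /flar/`hostname`.flar
Since -D takes dataset names, the shared file would need to list datasets rather than mount points on the ZFS systems.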


  • Live Upgrade fails on cluster node with zfs root zones

    We are having issues using Live Upgrade in the following environment:
    -UFS root
    -ZFS zone root
    -Zones are not under cluster control
    -System is fully up to date for patching
We also use Live Upgrade with the exact same system configuration on other nodes, except the zones are UFS root, and there Live Upgrade works fine.
    Here is the output of a Live Upgrade:
    bash-3.2# lucreate -n sol10-20110505 -m /:/dev/md/dsk/d302:ufs,mirror -m /:/dev/md/dsk/d320:detach,attach,preserve -m /var:/dev/md/dsk/d303:ufs,mirror -m /var:/dev/md/dsk/d323:detach,attach,preserve
    Determining types of file systems supported
    Validating file system requests
    The device name </dev/md/dsk/d302> expands to device path </dev/md/dsk/d302>
    The device name </dev/md/dsk/d303> expands to device path </dev/md/dsk/d303>
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    Comparing source boot environment <sol10> file systems with the file
    system(s) you specified for the new boot environment. Determining which
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating configuration for boot environment <sol10-20110505>.
    Source boot environment is <sol10>.
    Creating boot environment <sol10-20110505>.
    Creating file systems on boot environment <sol10-20110505>.
    Preserving <ufs> file system for </> on </dev/md/dsk/d302>.
    Preserving <ufs> file system for </var> on </dev/md/dsk/d303>.
    Mounting file systems for boot environment <sol10-20110505>.
    Calculating required sizes of file systems for boot environment <sol10-20110505>.
    Populating file systems on boot environment <sol10-20110505>.
    Checking selection integrity.
    Integrity check OK.
    Preserving contents of mount point </>.
    Preserving contents of mount point </var>.
    Copying file systems that have not been preserved.
    Creating shared file system mount points.
    Creating snapshot for <data/zones/img1> on <data/zones/img1@sol10-20110505>.
    Creating clone for <data/zones/img1@sol10-20110505> on <data/zones/img1-sol10-20110505>.
    Creating snapshot for <data/zones/jdb3> on <data/zones/jdb3@sol10-20110505>.
    Creating clone for <data/zones/jdb3@sol10-20110505> on <data/zones/jdb3-sol10-20110505>.
    Creating snapshot for <data/zones/posdb5> on <data/zones/posdb5@sol10-20110505>.
    Creating clone for <data/zones/posdb5@sol10-20110505> on <data/zones/posdb5-sol10-20110505>.
    Creating snapshot for <data/zones/geodb3> on <data/zones/geodb3@sol10-20110505>.
    Creating clone for <data/zones/geodb3@sol10-20110505> on <data/zones/geodb3-sol10-20110505>.
    Creating snapshot for <data/zones/dbs9> on <data/zones/dbs9@sol10-20110505>.
    Creating clone for <data/zones/dbs9@sol10-20110505> on <data/zones/dbs9-sol10-20110505>.
    Creating snapshot for <data/zones/dbs17> on <data/zones/dbs17@sol10-20110505>.
    Creating clone for <data/zones/dbs17@sol10-20110505> on <data/zones/dbs17-sol10-20110505>.
    WARNING: The file </tmp/.liveupgrade.4474.7726/.lucopy.errors> contains a
    list of <2> potential problems (issues) that were encountered while
    populating boot environment <sol10-20110505>.
    INFORMATION: You must review the issues listed in
    </tmp/.liveupgrade.4474.7726/.lucopy.errors> and determine if any must be
    resolved. In general, you can ignore warnings about files that were
    skipped because they did not exist or could not be opened. You cannot
    ignore errors such as directories or files that could not be created, or
    file systems running out of disk space. You must manually resolve any such
    problems before you activate boot environment <sol10-20110505>.
    Creating compare databases for boot environment <sol10-20110505>.
    Creating compare database for file system </var>.
    Creating compare database for file system </>.
    Updating compare databases on boot environment <sol10-20110505>.
    Making boot environment <sol10-20110505> bootable.
    ERROR: unable to mount zones:
    WARNING: zone jdb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/jdb3-sol10-20110505 does not exist.
    WARNING: zone posdb5 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/posdb5-sol10-20110505 does not exist.
    WARNING: zone geodb3 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/geodb3-sol10-20110505 does not exist.
    WARNING: zone dbs9 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs9-sol10-20110505 does not exist.
    WARNING: zone dbs17 is installed, but its zonepath /.alt.tmp.b-tWc.mnt/zoneroot/dbs17-sol10-20110505 does not exist.
    zoneadm: zone 'img1': "/usr/lib/fs/lofs/mount /.alt.tmp.b-tWc.mnt/global/backups/backups/img1 /.alt.tmp.b-tWc.mnt/zoneroot/img1-sol10-20110505/lu/a/backups" failed with exit code 111
    zoneadm: zone 'img1': call to zoneadmd failed
    ERROR: unable to mount zone <img1> in </.alt.tmp.b-tWc.mnt>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
    ERROR: Unable to remount ABE <sol10-20110505>: cannot make ABE bootable
    ERROR: no boot environment is mounted on root device </dev/md/dsk/d302>
    Making the ABE <sol10-20110505> bootable FAILED.
    ERROR: Unable to make boot environment <sol10-20110505> bootable.
    ERROR: Unable to populate file systems on boot environment <sol10-20110505>.
    ERROR: Cannot make file systems for boot environment <sol10-20110505>.
Any ideas why it can't mount that "backups" lofs filesystem into /.alt? I am going to try to remove the lofs from the zone configuration and try again. But if that works, I still need to find a way to use lofs filesystems in the zones while using Live Upgrade.
    Thanks

    I was able to successfully do a Live Upgrade with Zones with a ZFS root in Solaris 10 update 9.
    When attempting to do a "lumount s10u9c33zfs", it gave the following error:
    ERROR: unable to mount zones:
zoneadm: zone 'edd313': "/usr/lib/fs/lofs/mount -o rw,nodevices /.alt.s10u9c33zfs/global/ora_export/stage /zonepool/edd313-s10u9c33zfs/lu/a/u04" failed with exit code 111
    zoneadm: zone 'edd313': call to zoneadmd failed
    ERROR: unable to mount zone <edd313> in </.alt.s10u9c33zfs>
    ERROR: unmounting partially mounted boot environment file systems
    ERROR: No such file or directory: error unmounting <rpool1/ROOT/s10u9c33zfs>
    ERROR: cannot mount boot environment by name <s10u9c33zfs>
    The solution in this case was:
    zonecfg -z edd313
    info ;# display current setting
    remove fs dir=/u05 ;#remove filesystem linked to a "/global/" filesystem in the GLOBAL zone
    verify ;# check change
    commit ;# commit change
    exit
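Once the new BE has been activated and booted, the removed filesystem can be added back with zonecfg; a sketch (the global-zone source path is hypothetical):
zonecfg -z edd313
add fs
set dir=/u05
set special=/global/u05
set type=lofs
end
commit
exit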

  • Question about using ZFS root pool for whole disk?

I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seemed to go well.
After I was done, I wondered whether using the whole disk for the zpool was a good idea or not. I did some looking around, but I didn't see anything that suggested it was good or bad.
I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
Would it be smarter to set up, say, a 9GB partition on both drives so that the root ZFS pool is created on that, mirror it to the other drive, and then create another ZFS pool from the remaining disk?

    route1 wrote:
    Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.

  • Change ZFS root dataset name for root file system

    Hi all
    A quick one.
    I accepted the default ZFS root dataset name for the root file system during Solaris 10 installation.
    Can I change it to another name afterward without reinstalling the OS? For example,
    zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
    zfs rename rpool/ROOT/s10s_u6wos_07b/var rpool/ROOT/`hostname`/var
    Thank you.

    Renaming the root pool is not recommended.
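Note that the question is about the root dataset rather than the pool. Renaming a boot dataset can be made to work, but the pool's bootfs property and the boot menu have to be kept in sync; a sketch using the names from the question:
# zfs rename rpool/ROOT/s10s_u6wos_07b rpool/ROOT/`hostname`
# zpool set bootfs=rpool/ROOT/`hostname` rpool
On x86 the bootfs line in /rpool/boot/grub/menu.lst would also need the new name, and Live Upgrade's BE records may still refer to the old name, so check lustatus afterwards.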

  • ZFS root filesystem & slice 7 for metadb (SUNWjet)

    Hi,
I'm planning to use a ZFS root filesystem in a Sun Cluster 3.3 environment. As written in the documentation, when we use a UFS shared diskset we need to create a small slice for the metadb on slice 7. In a standard installation we can't create slice 7 when we install Solaris with a ZFS root, but we can create it with the JumpStart profile below:
    # example Jumpstart profile -- ZFS with
    # space on s7 left out of the zpool for SVM metadb
    install_type initial_install
    cluster SUNWCXall
    filesys c0t0d0s7 32
    pool rpool auto 2G 2G c0t0d0s0
So my question is: when we use SUNWjet (JumpStart(tm) Enterprise Toolkit), how can we write a profile similar to the JumpStart profile above?
Thanks very much for your best answer.

This can be done with JET.
    You create the template as normal.
    Then create a profile file with the slice 7 line.
    Then edit the template to use it.
    see
    ---8<
    # It is also possible to append additional profile information to the JET
    # derived one. Do this using the base_config_profile_append variable, but
    # don't forget to fill out the remaining base_config_profile variables.
    base_config_profile=""
    base_config_profile_append="
    ---8<
    It is how OpsCentre (which uses JET) does it.
JET questions are best asked on the external JET alias at Yahoo Groups (until the forum is set up on OTN).
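Putting that together, the template edit might look like the following sketch (the appended line mirrors the JumpStart example above; treat the exact variable usage as an assumption and check the comments in your own template):
base_config_profile=""
base_config_profile_append="
filesys c0t0d0s7 32
"
The rpool line itself would normally still come from the template's own ZFS root settings, so only the slice 7 reservation needs appending.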

  • Failed to Add Update Source for WUAgent Error = 0x87d00692 - HELP!!

I am having trouble with my clients receiving updates. I notice a lot of my clients where my primary site is are receiving updates, but all of my clients in my secondary sites are not. Below is output from the WUAHandler.log on a client system.
    Group policy settings were overwritten by a higher authority (Domain Controller) to: Server HTTPS://SERVER.COMPANY.COM:443 and Policy ENABLED WUAHandler 4/12/2013 8:21:28 AM 4220 (0x107C)
    Failed to Add Update Source for WUAgent of type (2) and id ({AA27FC20-A281-46CC-B04F-D0940B5E072F}). Error = 0x87d00692. WUAHandler 4/12/2013 8:21:28 AM 4220 (0x107C)
The server stated above is correct, and we have no GPOs applied to our environment for an update server.
I did just update to SCCM 2012 SP1 and have applied all the hotfixes for WSUS. I am at a roadblock. Would a site reset be advised? Perhaps a WSUS and SUP reinstall? Any help would be appreciated; I am out of ideas.
My environment:
    1 primary server - roles- sup, dp, mp
    3 secondary sites - role - dp and mp - clients under these sites do not get updates
    Ryan Ventimiglio

When I have checked this, my current SCCM server address that's listed in the WUAHandler log is set in the registry for the client. So this is the same address, so it's not like anything is conflicting.
What's strange about this is that everything was working fine before the SP1 upgrade. I just ran a site reset but no change. Do you think a WSUS/SUP uninstall/reinstall would be the next step? Is this done typically? Any other
things I should be checking? I know we do not have any GPOs set in our environment for this, so this can't be the problem.
    Ryan Ventimiglio

  • Failed to get root path for 'UED'

    Good afternoon.
    I have a problem with my IDES installation in University.
After a server reboot, MaxDB doesn't want to come up.
    This is the log:
    $ tail startdb.log
    Error! Connection failed to node dolphin for database UED: database not found                    
    Opening Database...
    Failed to get root path for 'UED'
    Error! Connection failed to node dolphin for database UED: database not found                    
    Fri Aug 24 17:11:11 CEST 2007
    Connect to the database to verify the database is now open
    dbmcli check finished with return code: 0
    If I try to run dbmcli manually, I receive the same error:
    $dbmcli -u control,****** -d UED
    Failed to get root path for 'UED'
    Error! Connection failed to node (local) for database UED: database not found
    Any suggestions??
    Regards,
    Luca
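The "Failed to get root path" message generally points at the DBM server not finding the database's registration data; a first check worth running (a sketch; db_enum is a standard dbmcli command, and /etc/opt/sdb is the usual registry location on Unix):
$ dbmcli db_enum
$ cat /etc/opt/sdb
If UED is missing from both, the registration was lost in the reboot and needs to be restored before dbmcli -d UED can connect.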

    Hi Markus,
    I think that my problem is not completely solved.
In fact, the IDES system is now up, but I'm not able to log in to it.
When I select the UED system in the menu of my Java GUI and press the connect button, I can't get to the SAP login window.
I'm sure that it is not a network problem of my laptop (the problem persists also with other SAPGUIs).
I've tried to reboot the server, but the situation is always the same. No way to get into the system; the "connecting" popup remains.
If I try to see the processes that are running on the server, via the "top -u uedadm" command, this is the situation:
    3791 uedadm    16   0 87988 9768 1996 S    0  0.2   0:00.00 sapstartsrv                                                                               
    4775 uedadm    15   0 54664 1580  904 S    0  0.0   0:00.04 csh                                                                               
    4911 uedadm    16   0 54664 1604  920 S    0  0.0   0:00.05 csh                                                                               
    5388 uedadm    20   0 26788 1896 1064 S    0  0.0   0:00.04 sapstart                                                                               
    5408 uedadm    16   0 35736 5748 3024 S    0  0.1   0:00.06 ms.sapUED_DVEBM                                                                               
    5409 uedadm    16   0 4480m 100m  86m S    0  2.5   0:00.95 UED_00_DP                                                                               
    5410 uedadm    16   0 27528 2988 2224 S    0  0.1   0:00.05 co.sapUED_DVEBM                                                                               
    5411 uedadm    16   0 27488 2952 2216 S    0  0.1   0:00.04 se.sapUED_DVEBM                                                                               
    5412 uedadm    18   0 17620 2432 1800 S    0  0.1   0:00.01 ig.sapUED_DVEBM                                                                               
    5413 uedadm    16   0  210m  12m 3352 S    0  0.3   0:00.20 igsmux_mt                                                                               
    5414 uedadm    16   0  183m  18m  10m S    0  0.5   0:00.16 igspw_mt                                                                               
    5415 uedadm    16   0  183m  18m  10m S    0  0.5   0:00.19 igspw_mt                                                                               
    5430 uedadm    15   0  188m 8296 5972 S    0  0.2   0:00.05 gwrd                                                                               
    5431 uedadm    18   0  164m 4376 2844 S    0  0.1   0:00.63 icman                                                                               
    5432 uedadm    17   0 4486m  22m 7988 S    0  0.6   0:00.01 UED_00_DIA_W0                                                                               
    5433 uedadm    17   0 4486m  22m 7976 S    0  0.6   0:00.01 UED_00_DIA_W1                                                                               
    5434 uedadm    16   0 5201m  79m  63m S    0  2.0   0:00.82 UED_00_DIA_W2                                                                               
    5435 uedadm    17   0 4486m  22m 7976 S    0  0.6   0:00.01 UED_00_DIA_W3                                                                               
    5436 uedadm    17   0 4486m  22m 7980 S    0  0.6   0:00.02 UED_00_DIA_W4                                                                               
    5437 uedadm    16   0 4486m  22m 7980 S    0  0.6   0:00.02 UED_00_DIA_W5                                                                               
    5438 uedadm    16   0 4486m  22m 7976 S    0  0.6   0:00.01 UED_00_DIA_W6                                                                               
    5439 uedadm    16   0 4486m  22m 7976 S    0  0.6   0:00.02 UED_00_DIA_W7                                                                               
    5440 uedadm    17   0 4486m  22m 7980 S    0  0.6   0:00.01 UED_00_DIA_W8                                                                               
    5441 uedadm    16   0 4486m  22m 7976 S    0  0.6   0:00.02 UED_00_DIA_W9                                                                               
    5442 uedadm    17   0 4486m  22m 7976 S    0  0.6   0:00.02 UED_00_DIA_W10                                                                               
    5443 uedadm    17   0 4486m  22m 7984 S    0  0.6   0:00.01 UED_00_DIA_W11                                                                               
    5444 uedadm    17   0 4486m  22m 7972 S    0  0.6   0:00.02 UED_00_UPD_W12                                                                               
    5445 uedadm    17   0 4486m  22m 7972 S    0  0.6   0:00.01 UED_00_UPD_W13                                                                               
    5446 uedadm    17   0 4486m  22m 7976 S    0  0.6   0:00.01 UED_00_UPD_W14                                                                               
    5447 uedadm    16   0 4486m  22m 7976 S    0  0.6   0:00.02 UED_00_ENQ_W15                                                                               
    5448 uedadm    16   0 4486m  22m 7976 S    0  0.6   0:00.02 UED_00_BTC_W16                                                                               
    5449 uedadm    17   0 4486m  22m 7980 S    0  0.6   0:00.01 UED_00_BTC_W17                                                                               
    5450 uedadm    17   0 4486m  22m 7976 S    0  0.6   0:00.01 UED_00_BTC_W18      
I don't think it is a performance problem on the server, judging by the output of the "free -m" command:
                       total       used       free     shared    buffers     cached
    Mem:          3946       2249       1697          0         50        950
    -/+ buffers/cache:       1247       2699
    Swap:         8001          0          8001
Can you please give me some suggestions?
    Thank you very much.

  • S10u 7 - smpatch in BE fails - patchadd: /dev/null: cannot create

    Solaris 10 5/09 s10x_u7wos_08 X86 with ZFS Root + Zones
    smpatch on non active BE with Zones fails:
    # smpatch update -b s10x_u7wos_08p1
    Ckecking the currently running boot enviornment ...
    Currently running boot enviornment name is [s10x_u7wos_08p].
    Checking the destination boot environment [s10x_u7wos_08p1] ...
    Installing patches from /var/sadm/spool...
    Copying the currently running BE into inactive BE [s10x_u7wos_08p1] ...
    (This process will take some time, please wait a moment.)
    Installing update(s) onto the inactive boot environment [s10x_u7wos_08p1] ...
    Failed to install patch 119901-07.
    Utility used to install the update failed with exit code 1.
System has findroot enabled GRUB
No entry for BE <s10x_u7wos_08p1> in GRUB menu
Validating the contents of the media </var/sadm/spool/119901-07.jar.dir>.
The media contains 1 software patches that can be added.
All 1 patches will be added because you did not specify any specific patches to add.
Mounting the BE <s10x_u7wos_08p1>.
Adding patches to the BE <s10x_u7wos_08p1>.
Validating patches...
Loading patches installed on the system...Done!
Loading patches requested to install.Done!
Checking patches that you specified for installation.Done!
Approved patches will be installed in this order:
119901-07
Preparing checklist for non-global zone check...
Checking non-global zones...
This patch passes the non-global zone check.
119901-07
Summary for zones:
Zone master-template
Rejected patches: None.
Patches that passed the dependency check: 119901-07
Zone master-template-clone
Rejected patches: None.
Patches that passed the dependency check: 119901-07
Patching global zone
Adding patches...
Checking installed patches...
Verifying sufficient filesystem capacity (dry run method)...
Installing patch packages...
Patch 119901-07 has been successfully installed. See /a/var/sadm/patch/119901-07/log for details
Patch packages installed: SUNWPython SUNWTiff SUNWTiff-devel SUNWgnome-img-viewer-share
Done!
Patching non-global zones...
Patching zone master-template
Adding patches...
Checking installed patches...
Patchadd is terminating.
Done!
Unmounting the BE <s10x_u7wos_08p1>.
The patch add to the BE <s10x_u7wos_08p1> failed (with result code <8>).
/usr/lib/patch/patchadd[4]: /dev/null: cannot create
/usr/lib/patch/patchadd[6]: /dev/null: cannot create
sort: insufficient available file descriptors
Patch 119901-07 failed in non-global zone SUNWlu-master-template.
Patch 119901-07 wasn't installed in zones: master-template-clone
    Failed to install patch 119901-07.
    ALERT: Failed to install patch 119901-07.
    Any ideas?
    TIA.
    /phs
    Peter H. Seybold
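For what it's worth, the "/dev/null: cannot create" lines together with "sort: insufficient available file descriptors" look like the patch tooling hitting its file-descriptor limit inside the alternate BE rather than a real /dev/null problem; a quick check before retrying (a sketch, not a confirmed fix):
# ulimit -n
# ulimit -n 1024
# smpatch update -b s10x_u7wos_08p1
It is also worth confirming that /dev/null exists as a character device inside the mounted BE (ls -lL /a/dev/null while the BE is mounted).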


  • [SOLVED] Installing on ZFS root: "ZFS: cannot find bootfs" on boot.

    I have been experimenting with ZFS filesystems on external HDDs for some time now to get more comfortable with using ZFS in the hopes of one day reinstalling my system on a ZFS root.
    Today, I tried installing a system on an USB external HDD, as my first attempt to install on ZFS (I wanted to try in a safe, disposable environment before I try this on my main system).
    My partition configuration (from gdisk):
    Command (? for help): p
    Disk /dev/sdb: 3907024896 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2FAE5B61-CCEF-4E1E-A81F-97C8406A07BB
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907024862
    Partitions will be aligned on 8-sector boundaries
    Total free space is 0 sectors (0 bytes)
Number  Start (sector)  End (sector)  Size        Code  Name
     1              34          2047  1007.0 KiB  EF02  BIOS boot partition
     2            2048        264191  128.0 MiB   8300  Linux filesystem
     3          264192    3902828543  1.8 TiB     BF00  Solaris root
     4      3902828544    3907024862  2.0 GiB     8300  Linux filesystem
    Partition #1 is for grub, obviously. Partition #2 is an ext2 partition that I mount on /boot in the new system. Partition #3 is where I make my ZFS pool.
    Partition #4 is an ext4 filesystem containing another minimal Arch system for recovery and setup purposes. GRUB is installed on the other system on partition #4, not in the new ZFS system.
    I let grub-mkconfig generate a config file from the system on partition #4 to boot that. Then, I manually edited the generated grub.cfg file to add this menu entry for my ZFS system:
    menuentry 'ZFS BOOT' --class arch --class gnu-linux --class gnu --class os {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux zfs=bootfs zfs_force=1 rw quiet
    echo 'Loading initial ramdisk ...'
initrd /initramfs-linux.img
}
    My ZFS configuration:
# zpool list
NAME         SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
External2TB  1.81T  6.06G  1.81T   0%  1.00x  ONLINE  -
# zpool status
  pool: External2TB
 state: ONLINE
  scan: none requested
config:
        NAME                                                       STATE   READ WRITE CKSUM
        External2TB                                                ONLINE     0     0     0
          usb-WD_Elements_1048_575836314135334C32383131-0:0-part3  ONLINE     0     0     0
errors: No known data errors
# zpool get bootfs
NAME         PROPERTY  VALUE                       SOURCE
External2TB  bootfs    External2TB/ArchSystemMain  local
# zfs list
NAME                        USED   AVAIL  REFER  MOUNTPOINT
External2TB                 14.6G  1.77T    30K  none
External2TB/ArchSystemMain   293M  1.77T   293M  /
External2TB/PacmanCache     5.77G  1.77T  5.77G  /var/cache/pacman/pkg
External2TB/Swap            8.50G  1.78T    20K  -
    The reason for the above configuration is that after I get this system to work, I want to install a second system in the same zpool on a different dataset, and have them share a pacman cache.
    GRUB "boots" successfully, in that it loads the kernel and the initramfs as expected from the 2nd GPT partition. The problem is that the kernel does not load the ZFS:
    ERROR: device '' not found. Skipping fsck.
    ZFS: Cannot find bootfs.
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    and I am left in busybox in the initramfs.
    What am I doing wrong?
    Also, here is my /etc/fstab in the new system:
    # External2TB/ArchSystemMain
    #External2TB/ArchSystemMain / zfs rw,relatime,xattr 0 0
    # External2TB/PacmanCache
    #External2TB/PacmanCache /var/cache/pacman/pkg zfs rw,relatime,xattr 0 0
    UUID=8b7639e2-c858-4ff6-b1d4-7db9a393578f /boot ext4 rw,relatime 0 2
    UUID=7a37363e-9adf-4b4c-adfc-621402456c55 none swap defaults 0 0
    I also tried to boot using "zfs=External2TB/ArchSystemMain" in the kernel options, since that was the more logical way to approach my intention of having multiple systems on different datasets. It would allow me to simply create separate grub menu entries for each, with different boot datasets in the kernel parameters. I also tried setting the mount points to "legacy" and uncommenting the zfs entries in my fstab above. That didn't work either and produced the same results, and that was why I decided to try to use "bootfs" (and maybe have a script for switching between the systems by changing the ZFS bootfs and mountpoints before reboot, reusing the same grub menuentry).
    Thanks in advance for any help.
    Last edited by tajjada (2013-12-30 20:03:09)

    Sounds like a zpool.cache issue. I'm guessing your zpool.cache inside your arch-chroot is not up to date. So on boot the ZFS hook cannot find the bootfs. At least, that's what I assume the issue is, because of this line:
    ERROR: device '' not found. Skipping fsck.
    If your zpool.cache was populated, it would spit out something other than an empty string.
    Some assumptions:
    - You're using the ZFS packages provided by demizer (repository or AUR).
    - You're using the Arch Live ISO or some version of it.
    On cursory glance your configuration looks good. But verify anyway. Here are the steps you should follow to make sure your zpool.cache is correct and up to date:
    Outside arch-chroot:
    - Import pools (not using '-R') and verify the mountpoints.
    - Make a copy of the /etc/zfs/zpool.cache before you export any pools. Again, make a copy of the /etc/zfs/zpool.cache before you export any pools. The reason for this is once you export a pool the /etc/zfs/zpool.cache gets updated and removes any reference to the exported pool. This is likely the cause of your issue, as you would have an empty zpool.cache.
    - Import the pool containing your root filesystem using the '-R' flag, and mount /boot within.
    - Make sure to copy your updated zpool.cache to your arch-chroot environment.
    Inside arch-chroot:
    - Make sure your bootloader is configured properly (i.e. read 'mkinitcpio -H zfs').
    - Use the 'udev' hook and not the 'systemd' one in your mkinitcpio.conf. The zfs-utils package does not have a ported hook (as of 0.6.2_3.12.6-1).
    - Update your initramfs.
    Outside arch-chroot:
    - Unmount filesystems.
    - Export pools.
    - Reboot.
    Inside new system:
    - Make sure to update the hostid then rebuild your initramfs. Then you can drop the 'zfs_force=1'.
Good luck. I enjoy root on ZFS myself. However, I wouldn't recommend swap on ZFS. Despite what the ZoL tracker says, I still ran into deadlocks on occasion (as of a month ago). I cannot say definitively what the cause of the issue was, but it resolved when I moved swap off ZFS to a dedicated partition.
    Last edited by NVS (2013-12-29 14:56:44)
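Condensing the steps above into a single command sequence (a sketch; the pool name is from the post, while /mnt and the /dev/sdb2 boot partition are taken from the gdisk output):
# outside the arch-chroot:
zpool import External2TB                          # plain import populates /etc/zfs/zpool.cache
cp /etc/zfs/zpool.cache /root/zpool.cache.bak     # save it BEFORE any export wipes it
zpool export External2TB
zpool import -R /mnt External2TB                  # re-import rooted at /mnt
mount /dev/sdb2 /mnt/boot
cp /root/zpool.cache.bak /mnt/etc/zfs/zpool.cache
arch-chroot /mnt
# inside the arch-chroot:
mkinitcpio -p linux                               # rebuild the initramfs with the zfs hook
exit
umount /mnt/boot
zpool export External2TB
During the first plain import, datasets whose mountpoint is / may refuse to mount over the live root; that is harmless here, since the only goal of that step is to populate the cache file.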

  • ERROR: Ldap Authentication failed for dap during installation of iAS 6.0 SP3

I am attempting to install iAS Enterprise Edition (6.0 SP3) on Solaris 2.8 using the Typical option in the base setup. I am trying to install a new Directory Server, as I don't have an existing one.
    During the installation I got the following error.
    ERROR: Ldap Authentication failed for url ldap://hostname:389/o=NetScape Root user id admin (151: Unknown Error)
    Fatal Slapd did not add Directory server information to config Server.
    Warning slapd could'nt populate with ldif file Yes error code 151.
    ERROR:Failure installing iPlanet Directory Server.
    Do you want to continue: ( I entered yes )
    Configuring Administration Server Segmentation fault core dumped.
    Error: Failure installing Netscape Administration Server.
    Do you want to continue:( I responded with yes).
    And during the Extraction I got the following
    ERROR:mple_bind: Can't connect to the LDAP server - No route to host
    ERROR: Unable to connect to LDAP Directory Server
    Hostname: hostname
    Port: 389
    User: cn=Directory Manager
    Password: <password-for-cn=Directory Manager
    Please make sure this Directory Server is currently running.
    You might need to run 'stop-slapd' and then
    'start-slapd' in the Directory Server home directory, in order to restart
    LDAP. When finished, press ENTER to continue, or S to skip this step:
    Start registering Bootstrap EJB...
    javax.naming.NameNotFoundException
    at java.lang.Throwable.fillInStackTrace(Native Method)
    at java.lang.Throwable.fillInStackTrace(Compiled Code)
    at java.lang.Throwable.<init>(Compiled Code)
    at java.lang.Exception.<init>(Compiled > Code)
    at javax.naming.NamingException.<init>(NamingException.java:114)
    at javax.naming.NameNotFoundException.<init>(NameNotFoundException.java: 48)
    at com.netscape.server.jndi.RootContext.resolveCtx(Unknown Source)
    "ldaperror" 76 lines, 2944 characters
    at com.netscape.server.jndi.RootContext.resolveCtx(Unknown Source)
    at com.netscape.server.jndi.RootContext.bind(Unknown Source)
    at com.netscape.server.jndi.RootContext.bind(Unknown Source)
    at javax.naming.InitialContext.bind(InitialContext.java:371)
    at com.netscape.server.deployment.EjbReg.deployToNaming(Unknown Source)
    at com.netscape.server.deployment.EjbReg.registerEjbJar(Compiled Code)
    at com.netscape.server.deployment.EjbReg.registerEjbJar(Compiled Code)
    at com.netscape.server.deployment.EjbReg.run(Compiled Code)
    at com.netscape.server.deployment.EjbReg.main(Unknown Source)
    Start registering iAS 60 Fortune Application...
    Start iPlanet Application Server
    Start iPlanet Application Server
    Start Web Server iPlanet-WebServer-Enterprise/6.0SP1 B08/20/200100:58
    warning: daemon is running as super-user
    [LS ls1] http://gedemo1.plateau.com, port 80 ready
    to accept requests
    startup: server started successfully.
After completion of the installation, I tried to start the console, but I got the following error:
"Can't connect to the admin server. The URL is not correct or the server is not running."
Finally, when I started the admin tool (iASTT), it shows that iAS1 was registered (marked with a red cross) and says "can't login, make sure the user name & password are correct" when I click on it.
    Thanks in advance for any help
    Madhavi

    Hi,
    Make sure that the directory server is installed first. If it is running
    ok, then you can try adding an admin user, please check the following
    technote.
    http://knowledgebase.iplanet.com/ikb/kb/articles/4106.html
    regards
    Swami
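A quick way to confirm the Directory Server is actually up and listening before re-running the setup (a sketch; the host and port are from the error output):
# netstat -an | grep 389
# ldapsearch -h hostname -p 389 -b "" -s base "objectclass=*"
The "No route to host" during extraction also suggests checking name resolution for the host, since the installer appears to have resolved the hostname to an unreachable address.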

  • Booting from a mirrored disk on a zfs root system

    Hi all,
I am a newbie here.
I have a ZFS root system with mirrored disks c0t0d0s0 and c1t0d0s0; GRUB has been installed on c0t0d0s0 and OS booting is just fine.
Now the question is: if I want to boot the OS from the mirrored disk c1t0d0s0, how can I achieve that?
The OS is Solaris 10 update 7.
I installed GRUB to c1t0d0s0 and assume menu.lst needs to be changed (but I don't know how); somehow, no luck.
    # zpool status zfsroot
    pool: zfsroot
    state: ONLINE
    scrub: none requested
    config:
        NAME          STATE   READ WRITE CKSUM
        zfsroot       ONLINE     0     0     0
          mirror      ONLINE     0     0     0
            c1t0d0s0  ONLINE     0     0     0
            c0t0d0s0  ONLINE     0     0     0
    # bootadm list-menu
    The location for the active GRUB menu is: /zfsroot/boot/grub/menu.lst
    default 0
    timeout 10
    0 s10u6-zfs
    1 s10u6-zfs failsafe
    # tail /zfsroot/boot/grub/menu.lst
    title s10u6-zfs
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
    module /platform/i86pc/boot_archive
    title s10u6-zfs failsafe
    findroot (BE_s10u6-zfs,0,a)
    bootfs zfsroot/ROOT/s10u6-zfs
    kernel /boot/multiboot kernel/unix -s -B console=ttya
    module /boot/x86.miniroot-safe
    Appreciate anyone can provide some tips.
    Thanks.
    Mizuki

This is what I have in my notes... not sure if I wrote them or not. This is a SPARC example as well; I believe on my x86 I still have to tell the BIOS to boot the mirror.
    After attaching mirror (if the mirror was not present during the initial install) you need to fix the boot block.
    #installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
    If the primary then fails you need to set the obp to the mirror:
    ok>boot disk1
    for example
    Apparently there is a way to set the obp to search for a bootable disk automatically.
    Good notes on all kinds of zfs and boot issues here:
    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Boot_Issues
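Since the question is actually about an x86 box (it has a menu.lst), the x86 equivalent of the boot-block step is installgrub; a sketch with the device from the question:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
On SPARC, the OBP can be told to try both halves of the mirror in order:
ok> setenv boot-device disk disk1
With findroot-style menu.lst entries there is normally nothing to change in the menu itself, since findroot locates the BE signature on whichever disk the firmware boots from.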

  • Can't boot with zfs root - Solaris 10 u6

I installed Solaris 10 u6 on one disk with native UFS and made this work by adding the following entries:
    /etc/driver_aliases
    glm pci1000,f
    /etc/path_to_inst
<long pci string for my scsi controller> glm
which are needed since the driver selected by default is the ncrs SCSI controller driver, which does not work in 64-bit.
    Now I would like to create a new boot env. on a second disk on the same scsi controller, but use zfs instead.
    Using Live Upgrade to create a new boot env on the second disk with zfs as file system worked fine.
But when trying to boot off it I get the following error:
    spa_import_rootpool: error 22
    panic[cpu0]/thread=fffffffffbc26ba0: cannot mount root path /pci@0,0-pci1002,4384@14,4/pci1000@1000@5/sd@1,0:a
Well, that's the same error I got with UFS before making the above-mentioned changes to /etc/driver_aliases and path_to_inst.
But that seems not to be enough when using ZFS.
    What am I missing ??

Hmm, I dropped the live upgrade from UFS to ZFS because I was not 100% sure it worked.
Then I did a reinstall, selecting ZFS during the install, and made the changes to driver_aliases and path_to_inst before the first reboot.
The system came up fine on the first reboot, used the glm SCSI driver, and ran in 64-bit.
But that was it. When the system was then rebooted (at which point it built a new boot archive) it stopped working. Same error as before.
I have managed to get it to boot in 32-bit mode, but still the same error (that's independent of which SCSI driver is used).
In all cases it does print the SunOS release banner, it does load the driver (ncrs or glm), and it detects the disks on the correct path with the correct numbering.
But it fails to mount the file system.
So basically the current status is no-go if you need the ncrs/glm SCSI driver to access the disks holding your ZFS root pool.
Failsafe boot works and can mount the ZFS root pool, but that's no fun as a server OS :(
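One thing worth trying, given that the very first boot works and the failure shows up only after the boot archive is rebuilt: boot failsafe, let it mount the broken BE (it normally offers to mount it on /a), confirm the driver_aliases change survived, and rebuild the archive from there; a sketch:
# grep glm /a/etc/driver_aliases
# bootadm update-archive -R /a
# reboot
If the glm line is missing from /a/etc/driver_aliases, the new boot archive was built from a reverted configuration, which would explain the mount failure.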

  • How to add more disk space into /   root file system

    Hi All,
    Linux  2.6.18-128
Can anyone please let us know how to add more disk space to the "/" root file system?
I have added a new hard disk with 20GB of space.
    [root@rac2 shm]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda1             965M  767M  149M  84% /
    /dev/hda7             1.9G  234M  1.6G  13% /var
    /dev/hda6             2.9G   69M  2.7G   3% /tmp
    /dev/hda3             7.6G  4.2G  3.0G  59% /usr
    /dev/hda2              18G   12G  4.8G  71% /u01
    LABLE=/               2.0G     0  2.0G   0% /dev/shm
    /dev/hdb2             8.9G  149M  8.3G   2% /vm
    [root@rac2 shm]#

    Dude! wrote:
I would actually question whether or not more disks increase the risk of a disk failure. One disk can break as likely as one of two or more disks.
Simple stats. Buying 2 lottery tickets instead of one gives you 2 chances to win the lottery prize, not 1, even though the odds of winning per ticket remain unchanged.
    2 disks buy you 2 tickets in The-Drive-Failure lottery.
Back in the 90's, BT (British Telecom) had an 80+ node OPS cluster built with Pyramid MPP hardware. They had a dedicated store of SCSI disks for replacing failed ones, as there were disk failures fairly often due to the sheer number of disks. (A Pyramid MPP chassis looked like a Xmas tree with all the SCSI drive LEDs, and BT had several.)
    In my experience - one should rather expect a drive failure sooner, than later. And have some kind of contingency plan in place to recover from the failure.
The use of symbolic links instead of striping the filesystem protects from the complete loss of the enchilada if a volume member fails, but it does not reduce the risk of losing data.
    I would rather buy a single ticket for the drive failure lottery for a root drive, than 2 tickets in this case. And using symbolic links to "offload" non-critical files to the 2nd drive means that its lottery ticket prize is not a non-bootable server due to a toasted root drive.
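As a concrete illustration of the symbolic-link approach (a sketch; /vm is the second disk's mount point from the df output, and /opt is just an example of a tree living under /):
# move the tree to the second disk, then link it back into place
cp -a /opt /vm/opt
rm -rf /opt
ln -s /vm/opt /opt
Mounting the new partition directly over a subdirectory via /etc/fstab achieves the same relief without the symlink, at the cost of committing the whole partition to that one path.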
