/boot not present in non-global zones

Hi,
/boot directory is not present in non-global zones. Is there any specific reason for that?
On Sun x86 systems, /boot/solaris/bootenv.rc is used to duplicate the “eeprom” functionality. Since bootenv.rc provides that boot PROM functionality and /boot itself is not present in zones, the eeprom command fails in x86 zones.
Is there any specific reason why /boot is not present in Zones ?
Thanks in advance,
Chanthu
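A quick way to confirm the difference described above (the zone name is just an example, and the exact error wording may vary by release):
global# ls -d /boot
/boot
global# zlogin myzone ls -d /boot
/boot: No such file or directory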

Hi George,
You might get away with creating global mounts and creating directories below the mount point. Now comes the trick: you create HAStoragePlus resources with FilesystemMountPoints=/global_mount/mysql1, and you must set AffinityOn to false.
This creates a lofs mount into the zone.
I must admit that I never tried this myself, but it should work. Of course you will get a performance penalty if you create tables over the wire. Creating tables means creating small files. It is worth a try.
It would be better if you had more and smaller LUNs, so that you could restrict one LUN to a pair of zones.
Kind Regards
Detlef
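A minimal sketch of the commands Detlef describes, with hypothetical resource-group and resource names (the clresource syntax matches the HAStoragePlus examples further down this page):
# clresourcetype register SUNW.HAStoragePlus
# clresource create -t SUNW.HAStoragePlus -g mysql-rg \
  -p FilesystemMountPoints=/global_mount/mysql1 \
  -p AffinityOn=false mysql1-hasp-rs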

Similar Messages

  • Administration file for non-global zones

    I want to install a package in the global zone. What attributes can I set in the administration file? I use the administration file for relocating packages by altering basedir. In the global zone the package is relocated, but the installation continues for the non-global zones in the package's own basedir.
    Is it possible to specify the pkgadd -G behaviour (install only in the current zone) in an administration file?
    Regards
    Jezz
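    A hedged sketch for reference (file, directory and package names are hypothetical): the basedir relocation goes in the admin file, while restricting the install to the current zone is, as far as I know, done with the -G flag on the pkgadd command line rather than with an admin-file keyword:
    # cat /var/tmp/reloc.admin
    basedir=/opt/relocated
    instance=overwrite
    partial=nocheck
    idepend=nocheck
    rdepend=nocheck
    # pkgadd -G -a /var/tmp/reloc.admin -d /var/spool/pkg MYWpkg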


  • GUI interface for non-global zones

    My Goal:
    Create multiple zones, each running different services, thus eliminating the need for multiple servers without using VMware.
    What I'm realizing:
    Everything I've read points back to non-global zones being only a console-based environment. Does anyone know if it's possible to log in to non-global zones with a GUI interface?
    Thanks,
    Rick

    We use the CDE login mechanism. From the CDE login screen on the global zone:
    [] Select Options, Remote Login, Enter Host Name from the CDE login screen.
    [] Enter the hostname (not the zone name!) of the non-global zone in the Enter the host name box.
    [] Click OK.
    [] Once the CDE login screen appears with the hostname of the non-global zone listed at the top, log in as sysadmin.
    Notes: If the non-global zone or the system was recently booted, wait a few minutes and check to make sure that the cde-login service is running using the command:
    svcs -a | grep cde-login
    Also, if you have restricted /etc/Xaccess, you'll need to add your non-global zone to it.
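    If the service turns out to be disabled rather than merely slow to start, a short sketch for enabling it (abbreviated FMRI form):
    # svcs cde-login
    # svcadm enable cde-login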

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here...The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new post.
    You have now properly asked your question and people can pay attention to you and not confuse you with that other person.
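    For anyone comparing boot environments after a run like the one above, lufslist shows which file systems Live Upgrade associates with each BE, which makes the remapping of the zone datasets easier to follow (BE names taken from the output above):
    # lufslist s10s_u10wos_17b
    # lufslist patch20130408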

  • Installing & booting non-global zones

    Hi all,
    I am having problems with installing and booting a non-global zone. My configuration is as follows:
    zonepath: /export/small-zone
    autoboot: true
    pool:
    inherit-pkg-dir:
    dir: /lib
    inherit-pkg-dir:
    dir: /platform
    inherit-pkg-dir:
    dir: /sbin
    inherit-pkg-dir:
    dir: /usr
    net:
    address: 192.168.1.101
    physical: hme0
    There are no problems when I verify the zone. However, upon installing, I get the following errors:
    <SUNWexplu SUNWexplo SUNWesr SUNWtftpr SUNWdtdmr SUNWmdu SUNWbsr .....(and many other packages ) >
    Despite all this, I am able to boot the new non-global zone. However, when I do `zlogin -C -e\@ small-zone', I only get the message "[Connected to zone 'small-zone' console]". I don't get the install sequence as expected. I am consequently unable to get the console login.
    Can anyone please help me ?
    Any help would be much appreciated.

    Hi swa.. it's Lucky... thanks for your time...
    The output of "df -hk /export/small-zone" is:
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c0t0d0s0 9.6G 9.4G 118M 99% /
    the out put of "zoneadm list -cv" is :
    ID NAME STATUS PATH
    0 global running /
    6 small-zone running /export/small-zone
    Now, I know it seems like it's working, but it's not quite working properly. I tried a "zlogin -C small-zone" before I ever booted up. It just hangs with the message "[Connected to zone 'small-zone' console]"
    Then upon booting up, I get:
    "[NOTICE : zone booting up]
    Sun OS Release 5.10 Version Generic 64-bit
    Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved. Use is subject to licence terms.
    INIT : can not open /etc/default/init. Environment not initialised.
    INIT : can not stat inittab, errno:2.
    INIT : Absent svc.startd entry a bad contract template. Not starting svc.startd.
    Requesting maintenance mode
    (see /lib/svc/share/README for additional information)
    ** Unable to retrieve 'root' entry in shadow password file ***
    Entering maintenance mode
    It seems that the initial configuration where you set up the usernames and passwords has been totally bypassed due to a failure. I have no idea why this is so... Any help would be much appreciated.
    Once again, thanks for your help.
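    A hedged suggestion based on the df output above: with the root file system at 99%, the zone install most likely ran out of space partway through, which would explain the missing /etc/default/init and inittab inside the zone. After freeing space, a clean reinstall would look like this:
    # zoneadm -z small-zone halt
    # zoneadm -z small-zone uninstall -F
    # zoneadm -z small-zone install
    # zoneadm -z small-zone boot
    # zlogin -C small-zone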

  • Not all non-global zones updated for DST

    We have one server with Solaris 10 and four non-global zones. I installed patch 122032-03 in the global zone and it installed successfully, according to the log. With the DST change on 3/11, TWO of the non-global zones and the global zone updated correctly to daylight time, but the other TWO non-global zones DID NOT. Does anyone know what would cause this?
    I have also tried to manually change the time on the two non-global zones and have not been able to; as root I get the message "not owner"
    ainsworth:hughesm> su -
    Password:
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have mail.
    # date
    Tue Mar 13 12:02:45 PST 2007
    # date -u
    Tue Mar 13 20:03:16 GMT 2007
    # date
    Tue Mar 13 12:04:31 PST 2007
    # date 0313130007
    date: Not owner
    usage: date [-u] mmddHHMM[[cc]yy][.SS]
    date [-u] [+format]
    date -a [-]sss[.fff]
    Fortunately, these were just test zones. They were set up by a previous admin to be used for pgpftp, so I'm wondering if there are some special security configurations that are preventing the time change.

    Thanks for replying.
    I rebooted from the global zone. All the zones have the same uptime as the global zone, except one that was rebooted more recently.
    Quick question - how do I tell if it's a sparse zone or full zone?
    One of the zones that the time change worked on:
    $ zdump -v US/Pacific | grep 2007
    US/Pacific Tue Mar 13 22:37:59 2007 UTC = Tue Mar 13 15:37:59 2007 PDT isdst=1
    US/Pacific Sun Mar 11 09:59:59 2007 UTC = Sun Mar 11 01:59:59 2007 PST isdst=0
    US/Pacific Sun Mar 11 10:00:00 2007 UTC = Sun Mar 11 03:00:00 2007 PDT isdst=1
    US/Pacific Sun Nov 4 08:59:59 2007 UTC = Sun Nov 4 01:59:59 2007 PDT isdst=1
    US/Pacific Sun Nov 4 09:00:00 2007 UTC = Sun Nov 4 01:00:00 2007 PST isdst=0
    tsbackup:hughesm> cd /usr/share/lib/zoneinfo; ls -al | grep Pac
    drwxr-xr-x 2 root bin 1024 Jan 19 11:19 Pacific
    cathedral:hughesm> cd /usr/share/lib/zoneinfo; ls -al | grep Pac (the global zone)
    drwxr-xr-x 2 root bin 1024 Jan 19 11:19 Pacific
    One zone that didn't work: (the other one that did not work is the same)
    # zdump -v US/Pacific | grep 2007
    US/Pacific Tue Mar 13 22:45:33 2007 UTC = Tue Mar 13 14:45:33 2007 PST isdst=0
    US/Pacific Sun Apr 1 09:59:59 2007 UTC = Sun Apr 1 01:59:59 2007 PST isdst=0
    US/Pacific Sun Apr 1 10:00:00 2007 UTC = Sun Apr 1 03:00:00 2007 PDT isdst=1
    US/Pacific Sun Oct 28 08:59:59 2007 UTC = Sun Oct 28 01:59:59 2007 PDT isdst=1
    US/Pacific Sun Oct 28 09:00:00 2007 UTC = Sun Oct 28 01:00:00 2007 PST isdst=0
    # uname -a
    SunOS albina 5.10 Generic_118822-02 sun4u sparc SUNW,Ultra-4
    # cd /usr/share/lib/zoneinfo (non-global zone that did not update)
    # ls -al | grep Pac
    drwxr-xr-x 2 root bin 1024 Apr 20 2005 Pacific
    I was thinking of trying to apply the patch within the zone itself, but when I tried smpatch analyze, it didn't list it:
    # smpatch analyze
    120900-04 SunOS 5.10: libzonecfg Patch
    121133-02 SunOS 5.10: zones library and zones utility patch
    119254-27 SunOS 5.10: Install and Patch Utilities Patch
    119574-02 SunOS 5.10: su patch
    121453-02 SunOS 5.10: Sun Update Connection Client Foundation
    121118-08 SunOS 5.10: Sun Update Connection System Client 1.0.8
    121081-05 SunOS 5.10: Connected Customer Agents 1.1.0
    122231-01 SunOS 5.10 Sun Connection agents, transport certificate update
    I attempted to add the patch using smpatch, but I've never run it here before so it's probably not configured right:
    # smpatch update -i 122032-03
    122032-03 cannot be validated.
    com.sun.patchpro.model.PatchProRuntimeException: Unexpected throwable
    at com.sun.patchpro.cli.PatchServices.waitForThread(PatchServices.java:1284)
    at com.sun.patchpro.cli.PatchServices.installPatches(PatchServices.java:1121)
    at com.sun.patchpro.cli.PatchServices.main(PatchServices.java:510)
    Caused by:
    java.lang.Throwable: ERROR: Failed to validate the digital signature(s).
    at com.sun.patchpro.model.PatchProModel$InnerDownloadPatchThread.downloadPatchFailed(PatchProModel.java:2855)
    at com.sun.patchpro.server.GroupPatchDownloader.dispatchFailedEvent(GroupPatchDownloader.java:384)
    at com.sun.patchpro.server.GroupPatchDownloader.downloadPatchFailed(GroupPatchDownloader.java:335)
    at com.sun.patchpro.server.ServerPatchServiceProvider.dispatchFailedEvent(ServerPatchServiceProvider.java:2577
    at com.sun.patchpro.server.ServerPatchServiceProvider.validatePatchBundle(ServerPatchServiceProvider.java:2196
    at com.sun.patchpro.server.ServerPatchServiceProvider.requestDownload(ServerPatchServiceProvider.java:1780)
    at com.sun.patchpro.server.ServerPatchServiceProvider.performDownloadPatches(ServerPatchServiceProvider.java:1
    2)
    at com.sun.patchpro.server.ServerPatchServiceProvider.downloadPatches(ServerPatchServiceProvider.java:860)
    at com.sun.patchpro.server.PatchServerProxy.downloadPatches(PatchServerProxy.java:142)
    at com.sun.patchpro.server.GroupPatchDownloader.downloadPatches(GroupPatchDownloader.java:124)
    at com.sun.patchpro.model.PatchProModel.performPatchDownload(PatchProModel.java:1932)
    at com.sun.patchpro.model.PatchProStateMachine$10.run(PatchProStateMachine.java:526)
    at com.sun.patchpro.util.State.run(State.java:266)
    at java.lang.Thread.run(Thread.java:595)
    So then I attempted to add the patch using patchadd:
    # patchadd 122032-03
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Done!
    Checking patches that you specified for installation.
    Done!
    Global patches.
    0 Patch 122032-03 is for global zone only - cannot be installed on local zone.
    No patches to install.
    under /var/sadm/patch/122032-03 on the Global zone, the log shows:
    -rw-r--r-- 1 root root 2666 Jan 19 11:19 log
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    WARNING: /usr/share/lib/zoneinfo/Africa/Timbuktu <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Argentina/ComodRivadavia <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Indiana/Indianapolis <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/America/Indianapolis <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Kentucky/Louisville <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/America/Louisville <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/CST6CDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/EST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/EST5EDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/Europe/Belfast <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/HST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/MST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/MST7MDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/PST8PDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/Pacific/Yap <no longer a regular file>
    Dryrun complete.
    No changes were made to the system.
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    WARNING: /usr/share/lib/zoneinfo/Africa/Timbuktu <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Argentina/ComodRivadavia <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Indiana/Indianapolis <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/America/Indianapolis <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Kentucky/Louisville <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/America/Louisville <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/CST6CDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/EST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/EST5EDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/Europe/Belfast <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/HST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/MST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/MST7MDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/PST8PDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/Pacific/Yap <no longer a regular file>
    Installation of <SUNWcsu> was successful.
    On the non-global zones, either there is nothing under /var/sadm/patch or there isn't even a patch directory under /var/sadm. Is there somewhere else to look?
    Thanks.
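    On the sparse-versus-whole-root question above: a sparse root zone lists inherit-pkg-dir entries (typically /lib, /platform, /sbin and /usr) and therefore shares /usr/share/lib/zoneinfo with the global zone via lofs, while a whole-root zone keeps its own copy under its zonepath. A quick check (zone name is hypothetical):
    # zonecfg -z myzone info inherit-pkg-dir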

  • Adding a cdrw to a non-global zone

    Hi all,
    I am attempting to add a cdrw on a laptop running Solaris 10 to a
    non-global zone via the following (after browsing the archives of this
    list as well as related forums and documentation):
    "cdrw -l" when run in the global zone reports
    "/dev/rdsk/c1t0d0s2" as the sole CD writer attached. I have previously
    burnt cdr(s) using this, so the functionality of the drive is not an issue.
    I then proceeded to configure a zone, "zulu01", and added a device via
    the following using zonecfg (I have omitted the other configuration data
    which is standard, root path, standard inherit-pkg-dir)
    "add device"
    "set match=/dev/rdsk/*"
    "end"
    "commit"
    "verify"
    I then installed the zone
    "zoneadm -z zulu01 install"
    zoneadm does the usual and reports success.
    I boot the zone
    "zoneadm -z zulu01 boot"
    and login via "zlogin -C zulu01"
    Inside zulu01, running "cdrw -l"
    reports "No CD Writers found."
    a "ls /dev/rdsk" shows that c1t0d0s2 is present.
    I am aware that adding such a device is not recommended, but it is
    supposedly possible?
    Please advise on what I am doing wrong, or is it not possible to add a
    cdrw to a non-global zone?
    Thanks in advance.
    Regards,
    Jeremy.

    This should work. A shot in the dark: can you try a tool other than cdrw, cdrecord for example? Also make sure that volume management is not running in the global zone (/etc/init.d/volmgt stop).
    Blaise
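    One more thing that may be worth trying (an untested sketch): export the block device node as well as the raw one, then reboot the zone so the new entries are created under its /dev:
    global# zonecfg -z zulu01
    zonecfg:zulu01> add device
    zonecfg:zulu01:device> set match=/dev/dsk/c1t0d0s2
    zonecfg:zulu01:device> end
    zonecfg:zulu01> commit
    zonecfg:zulu01> exit
    global# zoneadm -z zulu01 reboot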

  • Non-global zone network configuration

    Hi,
    Zones are a new thing for me so please excuse me if this is a basic query... I have recently jumpstarted a system using a jumpstart script that was developed by somebody else. It creates two non-global zones and configures their network interfaces.
    I have unplumbed one of the virtual interfaces for a particular zone because the IP address it was using is actually being used by another system on the network. However, when I reboot the zone, the interface is re-assigned the same IP address again. The IP address in question is not in /etc/hosts on any of the zones, and in the non-global zones the "hostname.<interface>" files do not exist at all. Also, the IP address is not in sysidcfg in any of the zones.
    So basically, interface e1000g0:2 is being assigned an IP address that was configured by the jumpstart script, so perhaps the jumpstart script has placed that IP address in some file that is read when the zone is booting. I have even checked rc scripts just in case but I cannot find the IP address anywhere. Would anybody please be able to tell me where the configuration information could be coming from in this scenario (nsswitch.conf specifies only files).
    Thank you in advance...

    It's in the zone config.
    zonecfg -z <zone in question> info
    It should list a net address and physical device. You can then use:
    zonecfg -z <zone in question>
    From here you can remove the net statements, or change the address if you want to keep using the network card in your zone.
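    A sketch of that session, with hypothetical addresses and zone name:
    # zonecfg -z myzone info net
    # zonecfg -z myzone
    zonecfg:myzone> select net address=192.168.1.50
    zonecfg:myzone:net> set address=192.168.1.60
    zonecfg:myzone:net> end
    zonecfg:myzone> commit
    zonecfg:myzone> exit
    # zoneadm -z myzone reboot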

  • Non-global zone in "shutting_down" state.. Hung in this state

    Hi.. My server is running Solaris 10. It has two non-global zones hosted in it, in which the database is running.
    There was a complaint from the database team that they were not able to log in to the server. When I checked, the status of the local zones was fine. But when I tried to "# zlogin" to them, it hung. So I tried "# zlogin -S <zone_name>" and I was able to log in in failsafe mode, but not able to execute any command in it. Any command, from "uptime" to "zfs list", hung and I had to forcefully log out.
    So I tried to halt the non-global zones first and then boot them. But here, they got stuck in the "shutting_down" state.
    When I tried to kill the processes of the non-global zones using "kill -9", it failed to kill them.
    So I rebooted the global zone, which fixed the issue. But then, 10 days later, the same issue came up.
    I followed the same steps to fix the issue, but I'm afraid it might come up again since I think rebooting the global zone is only a temporary fix.
    I logged a call with Oracle Support for this, but the server looks fine from the explorer output that was provided.
    Has anyone faced this same problem? What can I do to fix this issue permanently?

    If you encounter the issue again in future, please get a system crash dump by panicking the global zone. This will allow us (support) to review the crash dump and understand why the zone failed to shut down. It will have been waiting on a resource, and without the dump there's simply no way to know what or why.
    IIRC we recently (within the past month) did a putback of a fix for a bug (which I can't find the ID of right now) whereby if a zone hangs on the way down we fork a new instance of the zone and leave the old references in their hung state. So it's worth ensuring that you're running the latest patch set.
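    If panicking a production global zone is not an option, a hedged alternative is to capture a live dump and look at what the zone's processes are blocked on before rebooting (zone name is hypothetical):
    # dumpadm                 (confirm the dump device and savecore directory)
    # savecore -L             (capture a live, non-panic crash dump)
    # ps -fz myzone           (list processes still belonging to the hung zone)
    # pstack <pid>            (see where a stuck process is blocked)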

  • Problem with exporting devices to non-global zone

    Hi,
    I have a problem with exporting devices to my Solaris zones (I'm trying to add support for mounting /dev/lofi/* in my non-global zone).
    I created a config for my zone.
    Here it is:
    $ zonecfg -z sapdev info
    zonename: sapdev
    zonepath: /export/home/zones/sapdev
    brand: native
    autoboot: true
    bootargs:
    pool:
    limitpriv: default,sys_time
    scheduling-class:
    ip-type: shared
    fs:
    dir: /sap
    special: /dev/dsk/c1t44d0s0
    raw: /dev/rdsk/c1t44d0s0
    type: ufs
    options: []
    net:
    address: 194.29.128.45
    physical: ce0
    device
    match: /dev/lofi/1
    device
    match: /dev/rlofi/1
    device
    match: /dev/lofi/2
    device
    match: /dev/rlofi/2
    attr:
    name: comment
    type: string
    value: "This is SAP developement zone"
    global# lofiadm
    Block Device File
    /dev/lofi/1 /root/SAP_DB2_9_LUW.iso
    /dev/lofi/2 /usr/tmp/fsfile
    I rebooted the non-global zone, and even rebooted the global zone, but after that there are still no /dev/*lofi/* files in the sapdev zone.
    What am I doing wrong? Maybe I reduced my Solaris 10 u4 SPARC installation too much.
    Can anybody help me?
    Thanks for help,
    Marek

    I experienced the same problem on my system Sol 10 08/07.
    Normally, when the zone enters the READY state during boot, its zoneadmd will run devfsadm -z <zone>. In my understanding this is to create the necessary device files in ZONEPATH/dev.
    This worked well until recently. Now only the directories are created.
    It seems as if devfsadm -z is broken. Somebody should open a call with Sun.
    As a workaround you can easily copy the device files into the zone. It is important not to copy the symbolic link but the target.
    # cp /dev/lofi/1 ZONEPATH/dev/lofi
    Hope this helps,
    Konstantin Gremliza
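    An alternative workaround (untested here) that avoids exporting lofi device nodes altogether: mount the ISO in the global zone and hand it to the zone as a read-only lofs file system (mount point names are hypothetical; the zone needs a reboot for the fs resource to be mounted):
    global# mount -F hsfs -o ro /dev/lofi/1 /mnt/sapiso
    global# zonecfg -z sapdev
    zonecfg:sapdev> add fs
    zonecfg:sapdev:fs> set dir=/sapiso
    zonecfg:sapdev:fs> set special=/mnt/sapiso
    zonecfg:sapdev:fs> set type=lofs
    zonecfg:sapdev:fs> add options [ro,nodevices]
    zonecfg:sapdev:fs> end
    zonecfg:sapdev> commit
    zonecfg:sapdev> exit
    global# zoneadm -z sapdev reboot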

  • After installing 137137-09 patch OK in global zone, bad in non global zone

    Hi all,
    scratching my head with this one.
    Installed 137137-09 fine on a Sun Fire V210. The machine has one non-global zone running a proxy server (nothing very exciting there!). The non-global zone has a local filesystem attached, but I don't think this is the issue (on my test V210 I created the same sort of filesystem and was unable to replicate the problem :( ).
    So 137137-09 is fine in the global zone (I had the non-global zone halted when the patch was installed). It is also installed in the non-global zone (ie, when the zone boots it says it's at rev 137137-09 via uname). In the patch log in the non-global zone I get this:
    PKG=SUNWust2.v
    Original package not installed.
    pkgadd: ERROR: ERROR: unable to get zone brand: zonecfg_get_brand: No such zone configured
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: cannot open
    pkgadd: ERROR: checkinstall script did not complete successfully
    Dryrun complete.
    No changes were made to the system.
    I'm not sure if the branding error is causing the checkinstall postpatch script error or if they are not related. There don't seem to be any obvious permission problems that I can find. I have checked that all the packaging and patch utility patches are up to date on the system. Searching on the brand error gives me a link to a problem with 127127-11, but that was installed on the system before the local zone was created, and all the other seemingly appropriate patches (eg: 119254) are up to date or at a higher revision than recommended.
    I see the same problem on a M5000 which has two non global zones on it.
    Both machines had the Solaris 10 5/08 update bundle applied when it came out, and have had recommended patch sets applied at regular intervals since.
    This issue only came to light when trying the latest bundles with 138888-01/02 in them, and those fail to install on the global zones because the non-global zone install dies claiming 137137-09 is not installed (which is plainly wrong).
    I've tried to recreate this on a test server but unfortunately everything works as it should, even though the test server has a similar history in terms of patches and original setup to the others.
    I'm planning to try to detach the non-global zone and try an attach -u to see if it will update the patches properly, but I'm not holding out much hope on that one (I need to wait for a maintenance window when I can take the zone down in a couple of days).
    Any ideas?

    Well, I am following up to my own post. It seems I have determined what is causing the problem, or at least a situation where the problem can be reproduced, which I have been able to do on my test system.
    It seems that if the zone container's zonepath is in /usr (eg: /usr/zones, /usr/local/zones, or some other path under /usr), the patchadd of 137137-09 will fail with a log similar to the one posted above, and this will stop further kernel patches (eg: 138888-02) from being added.
    The test system had everything patched to current, and searching the web I can't find any other instances of this being an issue, but I have reproduced this problem on my test machine (which worked OK before because its test zones were in a filesystem mounted as /zones). When I used zoneadm -z <zonename> move to move a zone to /usr/local and applied 137137-09, the same problem came up.
    Not sure what is causing this issue.. I imagine it might have to do with some sort of confusion between the patch utilities and the read-only loopback filesystems in the sparse root zone, but I can't be sure.
    Maybe someone at Sun will see this and figure out what the deal is :)
    When I moved my test zone back to /zones the patch applied perfectly, so it's definitely related to having it in /usr or /usr/local (I tried both locations, even though they are separate ufs filesystems on my test server).
    Oh I am running DiskSuite to mirror filesystems on my V210's which may or may not have anything to do with it.
    Hope this helps someone in the future at least!
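    For anyone hitting the same thing, moving the zonepath out of /usr before patching is straightforward (the zone must be halted first; the target path is just an example):
    # zoneadm -z cotchin halt
    # zoneadm -z cotchin move /zones/cotchin
    # zoneadm -z cotchin boot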

  • FilesystemMountPoints for ufs disks mounted to non-global zones

    Hello,
    I have a SAN ufs disk to be used as a failover storage, mounted to non-global zones (NGZ).
    Solaris 10 nodes using Cluster 3.2
    I'm looking for the correct value for the property FilesystemMountPoints and the vfstab entry required for a failover disk mounted to a NGZ.
    Should the path NOT include the NGZ root path?
    From the man page for SUNW.HAStoragePlus, for the property FilesystemMountPoints:
    You can specify both the path in a non-global zone and the path in a global zone, in this format:
    Non-GlobalZonePath:GlobalZonePath
    The global zone path is optional. If you do not specify a global zone path, Sun Cluster assumes that the path in
    the non-global zone and in the global zone are the same. If you specify the path as
    Non-GlobalZonePath:GlobalZonePath, you must specify Global-ZonePath in the global zone's /etc/vfstab.
    The default setting for this property is an empty list.
    You can use the SUNW.HAStoragePlus resource type to make a file system available to a non-global zone. To enable
    the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global zone and in the
    non-global zone. The SUNW.HAStoragePlus resource type makes the file system available to the non-global zone
    by mounting the file system in the global zone. The resource type then performs a loopback mount in the
    non-global zone.
    Each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in all
    global zones. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in non-global zones.
    SUNW.HAStoragePlus resources that specify local file systems can only belong in a failover resource group
    with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. You
    can specify both local and global file system mount points at the same time.
    Any file system whose mount point is present in the FilesystemMountPoints extension property is assumed to
    be local if its /etc/vfstab entry satisfies both of the following conditions:
    1. The non-global mount option is specified.
    2. The "mount at boot" field for the entry is set to "no."
    In my situation, I want to mount the disk to /mysql_data on the NGZ called ftp_zone. So, which is the correct setup?
    a. FilesystemMountPoints=/mysql_data:/zones/ftp_zone/root/mysql_data
    Global zone vfstab entry /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zones/ftp_zone/root/mysql_data ufs 1 no logging
    NGZ mount point /mysql_data
    OR
    b. FilesystemMountPoints=/mysql_data:/mysql_data (can be condensed to simply /mysql_data)
    Global zone vfstab entry /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /mysql_data ufs 1 no logging
    NGZ mount point /mysql_data
    Should the path NOT include the NGZ root path?
    And should the fsck pass # be 1 or 2?
    Looking at this example from p. 26 of
    http://wikis.sun.com/download/attachments/24543510/820-4690.pdf
    This example doesn't mention the entry in vfstab.
    Create a resource group that can hold services in nodea zonex and nodeb zoney
    nodea# clresourcegroup create -n nodea:zonex,nodeb:zoney test-rg
    Make sure the HAStoragePlus resource is registered
    nodea# clresourcetype register SUNW.HAStoragePlus
    Now add a UFS [or VxFS] fail-over file system: mount /bigspace1 to failover/export/install in NGZ
    nodea# clresource create -t SUNW.HAStoragePlus -g test-rg \
    -p FilesystemMountPoints=/fail-over/export/install:/bigspace1 \
    ufs-hasp-rs
    Thank you!

    Hi,
    /zones/oracle-z is my root directory of the zone.
    * add the device to the zone :
    root@mpbxapp1 # zonecfg -z oracle-z
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/dsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/rdsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> exit
    * add FS to NGZ's /etc/vfstab : ( You may omit this step, I don't know why but it works without this step :) )
    root@mpbxapp1 # vi /zones/oracle-z/root/etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/oracle ufs 1 no logging
    * add FS to global zone's /etc/vfstab :
    root@mpbxapp1 # vi /etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /zonefs/oracle ufs 1 no logging
    * set the FilesystemMountPoints property :
    root@mpbxapp1 # /usr/cluster/bin/clresource set -p FilesystemMountPoints=/global/oracle:/zonefs/oracle oracle-hastp
    With this configuration you can ensure that the FS is not directly accessible from the master zone. Actually, it's accessible, but with a different path. For example, from the master zone Oracle cannot be started/stopped because the controlfile cannot be accessed. :)
    Hope this helps,
    Murat
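    Adapting Murat's pattern to the mysql_data question above (untested; the resource and resource-group names are hypothetical), the global zone's vfstab carries the global-zone path and the property lists the zone path first:
    global# grep mysql /etc/vfstab
    /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zonefs/mysql_data ufs 1 no logging
    global# /usr/cluster/bin/clresource create -t SUNW.HAStoragePlus -g ftp-rg \
      -p FilesystemMountPoints=/mysql_data:/zonefs/mysql_data \
      mysql-hasp-rs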

  • Lucreate and non-global zones

    Hi - I'm trying to get my head around Live Upgrade now that I've switched to ZFS on Solaris 10 for our test servers. The problem I have is that we have a number of non-global zones, and when I ran the lucreate command I got a number of warnings:
    lucreate -n CPU_2012-07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <CPU_2012-07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <CPU_2012-07>.
    Populating file systems on boot environment <CPU_2012-07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    WARNING: Directory </export/zones/tdukwxstestz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwxstestz01> is shared between BEs, remapping to <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Directory </export/zones/tdukwbprepz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwbprepz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwbprepz01> is shared between BEs, remapping to <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/export/zones/tdukwbprepz01> on <rpool/export/zones/tdukwbprepz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwbprepz01@CPU_2012-07> on <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Creating snapshot for <rpool/export/zones/tdukwxstestz01> on <rpool/export/zones/tdukwxstestz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwxstestz01@CPU_2012-07> on <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10@CPU_2012-07> on <rpool/ROOT/CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10/var@CPU_2012-07> on <rpool/ROOT/CPU_2012-07/var>.
    Mounting ABE <CPU_2012-07>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <CPU_2012-07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <CPU_2012-07> bootable.
    Population of boot environment <CPU_2012-07> successful.
    Creation of boot environment <CPU_2012-07> successful.
    So ALL my non-global zones live under /export/zones/<zonename> - what do all the WARNINGS mean?
    I then applied the Oracle CPU, activated the ABE and shut down the server. When it came back up, none of the zones would start, and this seems to be because all the zonepaths and references to the zones are now labelled with CPU_2012-07 on the end. I can edit the zone XML files to fix this, but I am sure this is not the recommended method and it is something I would prefer not to do.
    So basically I think I have not set my ZFS resource pools up correctly to take into account my non-global zones and where I have created them.
    My zfs list output looks like this now; unfortunately I don't have the output from before I started this work:
    zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 91.2G 456G 106K /rpool
    rpool/ROOT 9.06G 456G 31K legacy
    rpool/ROOT/10 38.2M 456G 4.34G /.alt.10
    rpool/ROOT/10/var 22.6M 24.0G 3.60G /.alt.10/var
    rpool/ROOT/CPU_2012-07 9.02G 456G 4.34G /
    rpool/ROOT/CPU_2012-07@CPU_2012-07 566M - 4.34G -
    rpool/ROOT/CPU_2012-07/var 4.12G 456G 4.11G /var
    rpool/ROOT/CPU_2012-07/var@CPU_2012-07 13.8M - 3.58G -
    rpool/dump 2.00G 456G 2.00G -
    rpool/export 5.94G 456G 35K /export
    rpool/export/home 76.9M 23.9G 76.9M /export/home
    rpool/export/zones 5.87G 456G 36K /export/zones
    rpool/export/zones/tdukwbprepz01 21.7M 456G 323M /export/zones/tdukwbprepz01
    rpool/export/zones/tdukwbprepz01-10 321M 31.7G 312M /export/zones/tdukwbprepz01-10
    rpool/export/zones/tdukwbprepz01-10@CPU_2012-07 8.50M - 312M -
    rpool/export/zones/tdukwxstestz01 29.6M 456G 5.49G /export/zones/tdukwxstestz01
    rpool/export/zones/tdukwxstestz01-10 5.51G 26.5G 5.47G /export/zones/tdukwxstestz01-10
    rpool/export/zones/tdukwxstestz01-10@CPU_2012-07 32.1M - 5.48G -
    rpool/logs 8.23G 23.8G 8.23G /logs
    rpool/swap 66.0G 458G 64.0G -
    Any help would be greatly appreciated.
    Thanks - Julian.
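    One way to sanity-check what the ABE actually contains before activating it is to mount it and look at the remapped zone paths (lumount/luumount are standard Live Upgrade commands; the BE name is taken from the output above):
    # lumount CPU_2012-07 /mnt
    # ls /mnt/export/zones
    # luumount CPU_2012-07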

    OK, so I've been tinkering with this. I'm not sure this is my exact problem, but a few people have reported issues with the following patch:
    121430-xx
    It gives the exact same WARNINGS when trying to create an ABE via lucreate when you have non-global zones. One of the suggestions was to go back to an earlier version of this patch, and then someone said it was fixed in revision 71 of the patch. So I installed the very latest version, 121430-81, and now it fails with a different error. Fortunately this time I have a screen shot of the before and after:
    BEFORE:
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    1 build14 running /export/zones/build14 native shared
    bash-3.2# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 70.0G 477G 106K /rpool
    rpool/ROOT 1.98G 477G 31K legacy
    rpool/ROOT/10 1.98G 477G 1.95G /
    rpool/ROOT/10/var 28.8M 24.0G 28.8M /var
    rpool/dump 2.00G 477G 2.00G -
    rpool/export 36.5M 477G 33K /export
    rpool/export/home 35K 24.0G 35K /export/home
    rpool/export/zones 36.4M 477G 32K /export/zones
    rpool/export/zones/build14 36.4M 32.0G 36.4M /export/zones/build14
    rpool/logs 3.78M 32.0G 3.78M /logs
    rpool/swap 66.0G 543G 16K -
    bash-3.2# df -h |grep rpool
    rpool/ROOT/10 547G 1.9G 477G 1% /
    rpool/ROOT/10/var 24G 29M 24G 1% /var
    rpool/export 547G 33K 477G 1% /export
    rpool/export/home 24G 35K 24G 1% /export/home
    rpool/export/zones 547G 32K 477G 1% /export/zones
    rpool/export/zones/build14 32G 36M 32G 1% /export/zones/build14
    rpool/logs 32G 3.8M 32G 1% /logs
    rpool 547G 106K 477G 1% /rpool
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    bash-3.2# lucreate -n 10-CPU_2012_07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <10-CPU_2012_07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <10-CPU_2012_07>.
    Populating file systems on boot environment <10-CPU_2012_07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10/var@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07/var>.
    Mounting ABE <10-CPU_2012_07>.
    Generating file list.
    Copying data from PBE <10> to ABE <10-CPU_2012_07>.
    100% of filenames transferred
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <10-CPU_2012_07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to mount zone <build14> in </.alt.tmp.b-0ob.mnt>.
    zoneadm: zone 'build14': zone root /export/zones/build14/root already in use by zone build14
    zoneadm: zone 'build14': call to zoneadmd failed
    ERROR: Unable to mount non-global zones of ABE <10-CPU_2012_07>: cannot make ABE bootable.
    ERROR: umount: /.alt.tmp.b-0ob.mnt/var/run busy
    ERROR: cannot unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: failed to unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: cannot fully unmount boot environment - <1>: file systems remain mounted
    ERROR: Unable to make boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to populate file systems on boot environment <10-CPU_2012_07>.
    Removing incomplete BE <10-CPU_2012_07>.
    ERROR: Cannot make file systems for boot environment <10-CPU_2012_07>.
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    10-CPU_2012_07 no no no yes -
    So the very latest Live Upgrade patch doesn't seem to have fixed this; I get even more errors now.
    Again any help would be greatly appreciated.
    Thanks - Julian.
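    Before retrying, the half-built BE and any mounts left behind by the failed attempt usually need cleaning up first; a hedged sketch (the mount point is taken from the errors above):
    # df -h | grep alt.tmp
    # umount /.alt.tmp.b-0ob.mnt/var/run
    # ludelete -f 10-CPU_2012_07
    # lustatus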

  • Pkgmap files missing in global zone, can't build non-global zone

    My Solaris 10 server is missing the pkgmap files for its packages. As a result, I can't build a non-global zone. Is there a way to recreate the pkgmap files?
    The OS on the Solaris 10 server was installed via Jumpstart (initial install). However, the Jumpstart process used a Solaris 9 boot server, which seems to have caused the missing pkgmap problem.
    Does anyone know of any other problems which would result from a version mismatch between a boot and installation server during the jumpstart process?
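    A hedged check (the package name is just an example): as far as I know, non-global zone installs draw from each package's pspool save area under /var/sadm/pkg, so inspecting one of those directories shows whether the pkgmap copies are actually missing:
    # ls /var/sadm/pkg/SUNWcsr/save/pspool/SUNWcsr/
    # ls /var/sadm/pkg/SUNWcsr/save/pspool/SUNWcsr/pkgmap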

    Hi, I have problems with building transmission from svn too:
    $ versionpkg
    ==> retrieving latest revision number from svn... 3730
    ==> newer revision detected: 3730
    ==> Entering fakeroot environment
    ==> Making package: transmission-svn 3730-1 (Di 6. Nov 08:28:38 CET 2007)
    ==> Checking Runtime Dependencies...
    ==> Checking Buildtime Dependencies...
    ==> Retrieving Sources...
    ==> Validating source files with md5sums
    ==> Extracting Sources...
    ==> Removing existing pkg/ directory...
    ==> Starting build()...
    Fetching external item into 'Transmission/third-party/libevent'
    Checked out external at revision 477.
    Checked out revision 3730.
    ==> SVN checkout done or server timeout
    ==> Starting make...
    ./autogen.sh: line 16: autoreconf: command not found
    Creating aclocal.m4 ...
    Running glib-gettextize...  Ignore non-fatal messages.
    Copying file mkinstalldirs
    Copying file po/Makefile.in.in
    Please add the files
      codeset.m4 gettext.m4 glibc21.m4 iconv.m4 isc-posix.m4 lcmessage.m4
      progtest.m4
    from the /aclocal directory to your autoconf macro directory
    or directly to your aclocal.m4 file.
    You will also need config.guess and config.sub, which you can get from
    ftp://ftp.gnu.org/pub/gnu/config/.
    Making aclocal.m4 writable ...
    Running intltoolize...
    PKGBUILD: line 33: ./configure: No such file or directory
    make: *** No targets specified and no makefile found.  Stop.
    ==> ERROR: Build Failed.  Aborting...
    ==> ERROR: Reverting pkgver...
    I don't know what's up with autoreconf.
    I hope someone can help me!
    greez

  • Non-global zones & vlan

    I'm in a zoning frenzy now and have created a zone that connects to a VLAN interface "ceXXX000". The zone is reachable from within its subnet but cannot route traffic beyond the gateway (which cannot be set). Any ideas? I left the VLAN interface as 0.0.0.0 since the global zone does not need to talk on that VLAN.
    Also, while changing the IP of the non-global zone, I missed "zoneadm halt". That resulted in the zone not being able to boot (or do anything). Rebooting recovers the zone(s). Is there any way to work around that? zoneadmd was running for the zone.
    The machine is b63.

    I'm in a zoning frenzy now and have created a zone that connects to a VLAN interface "ceXXX000". The zone is reachable from within its subnet but cannot route traffic beyond the gateway (which cannot be set). Any ideas? I left the VLAN interface as 0.0.0.0 since the global zone does not need to talk on that VLAN.
    You need to add the default gateway for the zone manually, in the global zone:
    # route add default <gateway> -ifp ceXXX000
    You can only do this when the zone is in the "ready" state. This is not very convenient; we'll try to improve this in the future (not in the initial release of Solaris 10, though).
    Also, while changing the IP of the non-global zone, I missed "zoneadm halt". That resulted in the zone not being able to boot (or do anything). Rebooting recovers the zone(s). Is there any way to work around that? zoneadmd was running for the zone.
    Did you change the IP address in the zone configuration (using zonecfg) or directly using ifconfig? There is a known bug when you set the IP address in zonecfg to one that's already configured on another interface (this bug will be fixed in the next Solaris Express release). Otherwise, can you post the output of "pstack" on the zoneadmd process? Thanks.
    Blaise
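    Putting the two pieces of the reply together, a hedged sketch of the sequence (the zone name and gateway address are hypothetical):
    # zoneadm -z myzone ready
    # route add default 192.168.10.1 -ifp ceXXX000
    # zoneadm -z myzone boot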
