Update packages for newbie in a non-global zone

Hi,
I'm new to this forum and a complete newbie. I recently installed OpenSolaris and created a non-global zone (big-zone). I can install new packages manually in my non-global zone from a terminal, but I don't know whether it's possible to use the Package Manager (the same tool I use when I'm logged into my global zone in a GNOME session).
I also don't know whether it's possible to install GNOME in a non-global zone, because it's much easier for me to use the Package Manager in graphical mode. Typing a command for every package I want to update takes a very long time.
Thanks a lot in advance, and sorry for my English.
jaymachine

I know about patchadd -G; it is used when the patch has already been downloaded. I was looking for an automated patch procedure. I use the Java GUI Update Manager in the global zone, but it will not patch the Java Communications Suite installed in a sparse zone.
Sparse zones are really cool. I have the entire Communications Suite installed in a zone with SMF for all the components; if anything seizes up, it's a few minutes to reboot, piece of cake. It also lets me use one host server while all the webserver / application instances are on port 80.
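For what it's worth, a minimal sketch of the two patchadd modes being discussed (the patch ID and download path are made up for illustration):
# apply an already-downloaded patch to the current zone only
patchadd -G /var/tmp/123456-01
# run from the global zone without -G, patchadd applies the patch to the global zone
# and to any non-global zones where the affected packages are installed
patchadd /var/tmp/123456-01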

Similar Messages

  • Live upgrade - solaris 8/07 (U4) , with non-global zones and SC 3.2

    Dear all,
    I need to use Live Upgrade on SC 3.2 with non-global zones, going from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and then update the cluster to 3.2 U3.
    I don't know where to start. I've read lots of documents, but couldn't find one complete document that covers the whole process.
    I know that upgrading Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure whether it is supported with SC.
    I'd appreciate your help.

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut
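    In case a bare-bones outline helps, this is roughly what the generic Live Upgrade flow looks like, leaving out the prerequisite patching and all of the Sun Cluster-specific steps covered in the docs (the BE name and media mount point are just examples):
    # create the alternate boot environment (non-global zones are copied/cloned into it)
    lucreate -n s10u8
    # upgrade the ABE from the Solaris 10 10/09 media mounted at /mnt
    luupgrade -u -n s10u8 -s /mnt
    # activate the new BE and reboot into it
    luactivate s10u8
    init 6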

  • Not all non-global zones updated for DST

    We have one server with Solaris 10 and four non-global zones. I installed patch 122032-03 in the global zone and it installed successfully, according to the log. With the DST change on 3/11, two of the non-global zones and the global zone switched correctly to daylight time, but the other two non-global zones did not. Does anyone know what would cause this?
    I have also tried to manually change the time on the two non-global zones and have not been able to; even as root I get the message "not owner":
    ainsworth:hughesm> su -
    Password:
    Sun Microsystems Inc. SunOS 5.10 Generic January 2005
    You have mail.
    # date
    Tue Mar 13 12:02:45 PST 2007
    # date -u
    Tue Mar 13 20:03:16 GMT 2007
    # date
    Tue Mar 13 12:04:31 PST 2007
    # date 0313130007
    date: Not owner
    usage: date [-u] mmddHHMM[[cc]yy][.SS]
    date [-u] [+format]
    date -a [-]sss[.fff]
    Fortunately, these were just test zones. They were set up by a previous admin to be used for pgpftp, so I'm wondering whether some special security configuration is preventing the time change.

    Thanks for replying.
    I rebooted from the global zone. All the zones have the same uptime as the global zone, except one that was rebooted more recently.
    Quick question - how do I tell if it's a sparse zone or full zone?
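    As a side note on the sparse vs. full question (offered tentatively): a sparse-root zone inherits /usr, /lib, /sbin and /platform read-only from the global zone, so either of these quick checks should tell you (the zone name is a placeholder):
    # from the global zone: a sparse-root zone lists those directories here
    zonecfg -z <zonename> info inherit-pkg-dir
    # or inside the zone itself: in a sparse zone /usr shows up as a lofs mount
    df -n /usr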
    One of the zones that the time change worked on:
    $ zdump -v US/Pacific | grep 2007
    US/Pacific Tue Mar 13 22:37:59 2007 UTC = Tue Mar 13 15:37:59 2007 PDT isdst=1
    US/Pacific Sun Mar 11 09:59:59 2007 UTC = Sun Mar 11 01:59:59 2007 PST isdst=0
    US/Pacific Sun Mar 11 10:00:00 2007 UTC = Sun Mar 11 03:00:00 2007 PDT isdst=1
    US/Pacific Sun Nov 4 08:59:59 2007 UTC = Sun Nov 4 01:59:59 2007 PDT isdst=1
    US/Pacific Sun Nov 4 09:00:00 2007 UTC = Sun Nov 4 01:00:00 2007 PST isdst=0
    tsbackup:hughesm> cd /usr/share/lib/zoneinfo; ls -al | grep Pac
    drwxr-xr-x 2 root bin 1024 Jan 19 11:19 Pacific
    cathedral:hughesm> cd /usr/share/lib/zoneinfo; ls -al | grep Pac (the global zone)
    drwxr-xr-x 2 root bin 1024 Jan 19 11:19 Pacific
    One zone where it didn't work (the other zone that did not work looks the same):
    # zdump -v US/Pacific | grep 2007
    US/Pacific Tue Mar 13 22:45:33 2007 UTC = Tue Mar 13 14:45:33 2007 PST isdst=0
    US/Pacific Sun Apr 1 09:59:59 2007 UTC = Sun Apr 1 01:59:59 2007 PST isdst=0
    US/Pacific Sun Apr 1 10:00:00 2007 UTC = Sun Apr 1 03:00:00 2007 PDT isdst=1
    US/Pacific Sun Oct 28 08:59:59 2007 UTC = Sun Oct 28 01:59:59 2007 PDT isdst=1
    US/Pacific Sun Oct 28 09:00:00 2007 UTC = Sun Oct 28 01:00:00 2007 PST isdst=0
    # uname -a
    SunOS albina 5.10 Generic_118822-02 sun4u sparc SUNW,Ultra-4
    # cd /usr/share/lib/zoneinfo (non-global zone that did not update)
    # ls -al | grep Pac
    drwxr-xr-x 2 root bin 1024 Apr 20 2005 Pacific
    I was thinking of trying to apply the patch within the zone itself, but when I tried smpatch analyze, it didn't list it:
    # smpatch analyze
    120900-04 SunOS 5.10: libzonecfg Patch
    121133-02 SunOS 5.10: zones library and zones utility patch
    119254-27 SunOS 5.10: Install and Patch Utilities Patch
    119574-02 SunOS 5.10: su patch
    121453-02 SunOS 5.10: Sun Update Connection Client Foundation
    121118-08 SunOS 5.10: Sun Update Connection System Client 1.0.8
    121081-05 SunOS 5.10: Connected Customer Agents 1.1.0
    122231-01 SunOS 5.10 Sun Connection agents, transport certificate update
    I attempted to add the patch using smpatch, but I've never run it here before so it's probably not configured right:
    # smpatch update -i 122032-03
    122032-03 cannot be validated.
    com.sun.patchpro.model.PatchProRuntimeException: Unexpected throwable
    at com.sun.patchpro.cli.PatchServices.waitForThread(PatchServices.java:1284)
    at com.sun.patchpro.cli.PatchServices.installPatches(PatchServices.java:1121)
    at com.sun.patchpro.cli.PatchServices.main(PatchServices.java:510)
    Caused by:
    java.lang.Throwable: ERROR: Failed to validate the digital signature(s).
    at com.sun.patchpro.model.PatchProModel$InnerDownloadPatchThread.downloadPatchFailed(PatchProModel.java:2855)
    at com.sun.patchpro.server.GroupPatchDownloader.dispatchFailedEvent(GroupPatchDownloader.java:384)
    at com.sun.patchpro.server.GroupPatchDownloader.downloadPatchFailed(GroupPatchDownloader.java:335)
    at com.sun.patchpro.server.ServerPatchServiceProvider.dispatchFailedEvent(ServerPatchServiceProvider.java:2577
    at com.sun.patchpro.server.ServerPatchServiceProvider.validatePatchBundle(ServerPatchServiceProvider.java:2196
    at com.sun.patchpro.server.ServerPatchServiceProvider.requestDownload(ServerPatchServiceProvider.java:1780)
    at com.sun.patchpro.server.ServerPatchServiceProvider.performDownloadPatches(ServerPatchServiceProvider.java:1
    2)
    at com.sun.patchpro.server.ServerPatchServiceProvider.downloadPatches(ServerPatchServiceProvider.java:860)
    at com.sun.patchpro.server.PatchServerProxy.downloadPatches(PatchServerProxy.java:142)
    at com.sun.patchpro.server.GroupPatchDownloader.downloadPatches(GroupPatchDownloader.java:124)
    at com.sun.patchpro.model.PatchProModel.performPatchDownload(PatchProModel.java:1932)
    at com.sun.patchpro.model.PatchProStateMachine$10.run(PatchProStateMachine.java:526)
    at com.sun.patchpro.util.State.run(State.java:266)
    at java.lang.Thread.run(Thread.java:595)
    So then I attempted to add the patch using patchadd:
    # patchadd 122032-03
    Validating patches...
    Loading patches installed on the system...
    Done!
    Loading patches requested to install.
    Done!
    Checking patches that you specified for installation.
    Done!
    Global patches.
    0 Patch 122032-03 is for global zone only - cannot be installed on local zone.
    No patches to install.
    Under /var/sadm/patch/122032-03 in the global zone, the log shows:
    -rw-r--r-- 1 root root 2666 Jan 19 11:19 log
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    WARNING: /usr/share/lib/zoneinfo/Africa/Timbuktu <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Argentina/ComodRivadavia <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Indiana/Indianapolis <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/America/Indianapolis <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Kentucky/Louisville <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/America/Louisville <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/CST6CDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/EST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/EST5EDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/Europe/Belfast <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/HST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/MST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/MST7MDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/PST8PDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/Pacific/Yap <no longer a regular file>
    Dryrun complete.
    No changes were made to the system.
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    WARNING: /usr/share/lib/zoneinfo/Africa/Timbuktu <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Argentina/ComodRivadavia <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Indiana/Indianapolis <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/America/Indianapolis <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/America/Kentucky/Louisville <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/America/Louisville <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/CST6CDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/EST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/EST5EDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/Europe/Belfast <no longer a regular file>
    WARNING: /usr/share/lib/zoneinfo/HST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/MST <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/MST7MDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/PST8PDT <no longer a linked file>
    WARNING: /usr/share/lib/zoneinfo/Pacific/Yap <no longer a regular file>
    Installation of <SUNWcsu> was successful.
    On the non-global zones, either there is nothing under /var/sadm/patch or there isn't even a patch directory under /var/sadm. Is there somewhere else to look?
    Thanks.
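    One more place to look, offered as a suggestion rather than a definitive answer: the zone's own patch database. Run inside each non-global zone, this should show whether 122032-03 is registered there even when /var/sadm/patch is empty:
    # inside the non-global zone
    showrev -p | grep 122032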

  • Administration file for non-global zones

    I want to install a package in the global zone. What attributes can I set in the administration file? I use an administration file to relocate packages by altering basedir. In the global zone the package is relocated, but the installation continues into the non-global zones using the package's default basedir.
    Is it possible to specify the pkgadd -G behaviour (install only in the current zone) in an administration file?
    Regards
    Jezz


  • Sharing software package with non-global zone

    I've installed Solaris Studio 12.3 in the global zone on Solaris 11, following the instructions from http://pkg-register.oracle.com, and I want to make it available to the non-global zones. Do I need to install it independently in every zone, or can I export/share it?

    All files are normally installed in /opt/solarisstudio12.3. You can share that directory, but installing it independently in each zone is recommended if you want different versions installed.
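    If you do decide to share it, a sketch of a read-only loopback mount of the Studio directory into a zone might look like this (the zone name is just an example; the zone needs to be rebooted to pick up the new fs resource):
    # zonecfg -z myzone
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/opt/solarisstudio12.3
    zonecfg:myzone:fs> set special=/opt/solarisstudio12.3
    zonecfg:myzone:fs> set type=lofs
    zonecfg:myzone:fs> add options ro
    zonecfg:myzone:fs> end
    zonecfg:myzone> exit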

  • FilesystemMountPoints for ufs disks mounted to non-global zones

    Hello,
    I have a SAN ufs disk to be used as a failover storage, mounted to non-global zones (NGZ).
    Solaris 10 nodes using Cluster 3.2
    I'm looking for the correct value for the property FilesystemMountPoints and the vfstab entry required for a failover disk mounted to a NGZ.
    Should the path NOT include the NGZ root path?
    From the man page for SUNW.HAStoragePlus, for the property FilesystemMountPoints:
    You can specify both the path in a non-global zone and the path in a global zone, in this format:
    Non-GlobalZonePath:GlobalZonePath
    The global zone path is optional. If you do not specify a global zone path, Sun Cluster assumes that the path in
    the non-global zone and in the global zone are the same. If you specify the path as
    Non-GlobalZonePath:GlobalZonePath, you must specify Global-ZonePath in the global zone's /etc/vfstab.
    The default setting for this property is an empty list.
    You can use the SUNW.HAStoragePlus resource type to make a file system available to a non-global zone. To enable
    the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global zone and in the
    non-global zone. The SUNW.HAStoragePlus resource type makes the file system available to the non-global zone
    by mounting the file system in the global zone. The resource type then performs a loopback mount in the
    non-global zone.
    Each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in all
    global zones. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in non-global zones.
    SUNW.HAStoragePlus resources that specify local file systems can only belong in a failover resource group
    with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. You
    can specify both local and global file system mount points at the same time.
    Any file system whose mount point is present in the FilesystemMountPoints extension property is assumed to
    be local if its /etc/vfstab entry satisfies both of the following conditions:
    1. The non-global mount option is specified.
    2. The "mount at boot" field for the entry is set to "no."
    In my situation, I want to mount the disk to /mysql_data on the NGZ called ftp_zone. So, which is the correct setup?
    a. FilesystemMountPoints=/mysql_data:/zones/ftp_zone/root/mysql_data
    Global zone vfstab entry /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zones/ftp_zone/root/mysql_data ufs 1 no logging
    NGZ mount point /mysql_data
    OR
    b. FilesystemMountPoints=/mysql_data:/mysql_data (can be condensed to simply /mysql_data)
    Global zone vfstab entry /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /mysql_data ufs 1 no logging
    NGZ mount point /mysql_data
    Should the path NOT include the NGZ root path?
    And should the fsck pass # be 1 or 2?
    Looking at this example from p. 26 of
    http://wikis.sun.com/download/attachments/24543510/820-4690.pdf
    This example doesn't mention the entry in vfstab.
    Create a resource group that can hold services in nodea zonex and nodeb zoney
    nodea# clresourcegroup create -n nodea:zonex,nodeb:zoney test-rg
    Make sure the HAStoragePlus resource is registered
    nodea# clresourcetype register SUNW.HAStoragePlus
    Now add a UFS [or VxFS] fail-over file system: mount /bigspace1 on /fail-over/export/install in the NGZ
    nodea# clresource create -t SUNW.HAStoragePlus -g test-rg \
    -p FilesystemMountPoints=/fail-over/export/install:/bigspace1 \
    ufs-hasp-rs
    Thank you!

    Hi,
    /zones/oracle-z is my root directory of the zone.
    * add the device to the zone :
    root@mpbxapp1 # zonecfg -z oracle-z
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/dsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/rdsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> exit
    * add the FS to the NGZ's /etc/vfstab (you may omit this step; I don't know why, but it works without it :) ):
    root@mpbxapp1 # vi /zones/oracle-z/root/etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/oracle ufs 1 no logging
    * add FS to global zone's /etc/vfstab :
    root@mpbxapp1 # vi /etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /zonefs/oracle ufs 1 no logging
    * set the FilesystemMountPoints property :
    root@mpbxapp1 # /usr/cluster/bin/clresource set -p FilesystemMountPoints=/global/oracle:/zonefs/oracle oracle-hastp
    With this configuration you can make sure the FS is not directly accessible from the master (global) zone, or rather, it is accessible but under a different path. For example, Oracle cannot be started/stopped from the master zone because the control file cannot be accessed. :)
    Hope this helps,
    Murat
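    Applied to the ftp_zone / /mysql_data case from the original post and following the same pattern (the global-zone mount point /zonefs/mysql_data, the resource group and the resource name below are just examples, and the fsck pass is shown as 1 simply to match the example above):
    # global zone /etc/vfstab entry:
    /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zonefs/mysql_data ufs 1 no logging
    # HAStoragePlus resource:
    clresource create -t SUNW.HAStoragePlus -g ftp-rg \
    -p FilesystemMountPoints=/mysql_data:/zonefs/mysql_data \
    mysql-hasp-rs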

  • Separate private ip addresses for non-global zones

    I'm testing zones on one of our administrative servers and I'm wondering about the following scenario.
    Zones can easily eat up a lot of IP addresses, and I decided to try the following. The machine has, in its global zone, a standard private address in the admin (192.168.129.0) segment on hme0. I have also given it another address, 192.168.229.1, configured on hme0:1, which I intend to be the default router for the non-global zones.
    Zone 1 has 192.168.229.10 as its primary address, and I have tried to set its default router to 192.168.229.1 by various methods based on what I have read here, including adding that address to the defaultrouter file in the global zone.
    Zone 2 has 192.168.229.20 as its primary address and is intended to have the same default router of 192.168.229.1.
    So far I've not been able to make this work. Am I barking up the wrong tree?
    TIA

    Sorry for the late reply.
    So if I understand correctly, you want to put all your zones in a dedicated IP network (192.168.229.0/24).
    To do this, you don't need to configure the global zone as default gateway for the zones (which doesn't work, as you noticed). You want to indicate to the zones that they can reach the other network (192.168.129.0/24) just by sending packets on hme0. To do so, you need to create interface routes in every zone:
    # route add net 192.168.129.0/24 192.168.229.10 -interface
    (and the same for Zone 2 with its own address, etc.)
    The global zone then needs to advertise itself as gateway for the 192.168.229.0/24 network to the other hosts. I think in.routed(1M) can do this using special configuration in the gateways(4) file, but I don't know how. Otherwise, if you can administer the real router that the other hosts use, you can add a static route: destination 192.168.229.0/24, gateway [global zone IP].
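    Spelled out with the addresses from the original post (the 192.168.129.x address of the global zone in the last line is a placeholder):
    # Zone 1, primary address 192.168.229.10
    route add net 192.168.129.0/24 192.168.229.10 -interface
    # Zone 2, primary address 192.168.229.20
    route add net 192.168.129.0/24 192.168.229.20 -interface
    # on the real router (if it happens to be a Solaris box), a static route back:
    route add net 192.168.229.0/24 192.168.129.x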
    hope this helps,
    Blaise

  • Zfs package difference in Global and Non-Global zones

    I have a T2000 hosting many zones. The global zone and all but one non-global zone have the three ZFS packages installed (SUNWzfskr, SUNWzfsr, SUNWzfsu). Because this one non-global zone is missing the ZFS packages, kernel patch 120011-14 also didn't install in that single non-global zone.
    I am curious: can I install SUNWzfskr, SUNWzfsr and SUNWzfsu in the non-global zone that is missing the packages?
    Any ideas how to resolve the kernel patch discrepancy between the global and non-global zone?

    Patch 122640-05 installs the SUNWzfskr, SUNWzfsr and SUNWzfsu packages if they are not already installed on the system.
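    A rough sketch of checking and then fixing an affected zone, based on the reply above (patchadd run in the global zone patches the non-global zones as well by default):
    # inside the affected non-global zone: see which ZFS packages are present
    pkginfo | grep SUNWzfs
    # from the global zone: apply the patch that delivers the missing packages
    patchadd 122640-05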

  • GUI interface for non-global zones

    My Goal:
    Create multiple zones, each running different services, thus eliminating the need for multiple servers, without using VMware.
    What I'm realizing:
    Everything I've read points to non-global zones being a console-based environment only. Does anyone know if it's possible to log in to non-global zones with a GUI interface?
    Thanks,
    Rick

    We use the CDE login mechanism. From the CDE login screen on the global zone:
    1. Select Options, Remote Login, Enter Host Name from the CDE login screen.
    2. Enter the hostname (not the zone name!) of the non-global zone in the "Enter the host name" box.
    3. Click OK.
    4. Once the CDE login screen appears with the hostname of the non-global zone listed at the top, log in as sysadmin.
    Notes: If the non-global zone or the system was recently booted, wait a few minutes and check to make sure that the cde-login service is running using the command:
    svcs -a | grep cde-login
    Also, if you have restricted /etc/Xaccess, you'll need to add your non-global zone to it.
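    If the service turns out not to be running inside the zone, something like this should bring it up (the short FMRI is assumed to resolve to the CDE login service):
    # inside the non-global zone
    svcadm enable cde-login
    # if it still doesn't come online, ask SMF why:
    svcs -xv cde-login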

  • Missing utilities in a Solaris 11 non-global zone

    Hi,
    I created a non-global zone on Solaris 11 and found that many utilities are missing in the non-global zone. For example, in the non-global zone /usr/xpg4/bin contains only 2 utilities, whereas in the global zone I have 68. I copied the few utilities I need (e.g. id, grep, egrep, ...) from my global zone. I need to enable rlogin, telnet and ftp in my Solaris 11 non-global zone. I installed pkg:/service/network/legacy-remote-utilities, but no luck. In another thread about rlogin on zones in Solaris 11 I found a workaround:
    Copy 2 binaries and 2 .xml manifests from the GZ to the NGZ:
    cp /usr/sbin/in.rlogind
    cp /lib/svc/manifest/network/login.xml
    cp /usr/sbin/in.rshd
    cp /lib/svc/manifest/network/shell.xml
    Question 1: what about the other services?
    Question 2: conceptually, the zone should have all the utilities that are available in the global zone. Why are so many utilities missing? Am I doing anything wrong, or is it a zone limitation? We are seeing this issue only on Solaris 11; on Solaris 10 everything works fine.

    What you observed is normal. The basic Solaris 11 zone install gives you a somewhat minimal system. If you want additional packages, you can install them. If you want the zone install to have what you would install from a CD, I suppose you could do the following:
    pkg install slim_install
    pkg uninstall slim_install
    My understanding is that the slim_install package contains dependencies that pull in all of the desktop software but doesn't contain any content itself, which is why you can (and should) remove it afterwards.
    That said, normally one uses a zone for a particular purpose. A better approach might be to install only the software that the zone needs for that purpose. That saves space, limits security exposure and reduces maintenance overhead. If your purpose is to have a full user environment, that may mean including all the slim_install packages and maybe others as well.
    I would recommend that you not install services by copying files. If you need a service find out what package contains that service and install the package in the zone. That way you won't break maintenance via pkg update.
    So - your questions:
    1. A Solaris 11 zone install is minimal, presumably to make it easy to set up simple single-function zones. Additional packages can be added with "pkg install" to provide any necessary services.
    2. Solaris 10 zones work differently and import most packages from the global zone. With Solaris 10 sparse zones, you actually use the same files as the global zone. Solaris 11 zones are different in that they are a separate install. The basic install is minimal, presumably to allow for small and simple single-function zones. You are not doing anything wrong with respect to the basic install; this is just how things work.
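    To make point 1 concrete for the rlogin example, a sketch of the package-based route (run inside the zone; the search output tells you the delivering package, so the package and service names below are only what I would expect them to be):
    # find which package delivers the missing binary
    pkg search -r in.rlogind
    # install the package rather than copying files from the global zone
    pkg install pkg:/service/network/legacy-remote-utilities
    # then enable the service instance
    svcadm enable svc:/network/login:rlogin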

  • Unable to add a device (e.g. /dev/cua0) to a non-global zone

    Hi,
    I've installed Solaris 10u4 on an x86 machine with the latest patches, applied with the smpatch utility.
    The history:
    I installed Solaris 10u3 without any patches, a quite minimal installation. I created a non-global zone, added a ZFS dataset, added networking, and added one serial device (/dev/cua0). I installed HylaFAX from Blastwave in that zone, using the modem attached to /dev/cua0, and everything worked fine apart from some sendmail issues. Because of issues with Samba, which I needed on this machine, I tried to update the machine, but ended up in dependency hell due to the minimal installation and gave up. Instead I did a fresh install of Solaris 10u4, again with the latest patches applied via smpatch. I then created a new zone and wanted to add the device /dev/cua0 as in the s10u3 installation, but the device doesn't appear in the non-global zone, so I've installed HylaFAX in the global zone temporarily.
    The question: any ideas or workarounds to bring the async device into a non-global zone again?
    I'm not a newbie on *nix-like systems (several years with BSD and GNU/Linux), but for Solaris I would classify myself as a newbie ;-)
    thanks in advance.
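    For reference, the zonecfg steps being described would look roughly like this (the zone name is just an example); the puzzle is why the device node does not appear in the zone afterwards:
    # zonecfg -z faxzone
    zonecfg:faxzone> add device
    zonecfg:faxzone:device> set match=/dev/cua0
    zonecfg:faxzone:device> end
    zonecfg:faxzone> commit
    zonecfg:faxzone> exit
    # reboot the zone so the device node can be created
    zoneadm -z faxzone reboot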

    Hmm. If that didn't work, then it's possible you're running into a different problem.
    But I checked again and this is the one I was thinking of. Toward the bottom, some patches are referenced. I suppose they won't hurt, but I'm worried you're seeing something related to the 'cua' device rather than the general problem of device creation.
    http://www.opensolaris.org/jive/thread.jspa?messageID=171187
    Darren

  • After installing 137137-09 patch OK in global zone, bad in non global zone

    Hi all,
    scratching my head with this one.
    Installed 137137-09 fine on a Sun Fire V210. The machine has one non-global zone running a proxy server (nothing very exciting there!). The non-global zone has a local filesystem attached, but I don't think this is the issue (on my test V210 I created the same sort of filesystem and was unable to replicate the problem :( ).
    So 137137-09 is fine in the global zone (I had the non-global zone halted when the patch was installed), and it is also installed in the non-global zone (i.e., when the zone boots, uname says it is at rev 137137-09). But in the patch log in the non-global zone I get this:
    PKG=SUNWust2.v
    Original package not installed.
    pkgadd: ERROR: ERROR: unable to get zone brand: zonecfg_get_brand: No such zone configured
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: cannot open
    pkgadd: ERROR: checkinstall script did not complete successfully
    Dryrun complete.
    No changes were made to the system.
    I'm not sure if the branding error is causing the checkinstall postpatch script error or if they are unrelated. There don't seem to be any obvious permission problems that I can find. I have checked that all the pkg and patch utility patches are up to date on the system. Searching on the brand error gives me a link to a problem with 127127-11, but that patch was installed on the system before the local zone was created, and all the other seemingly relevant patches (e.g. 119254) are up to date or at a higher revision than recommended.
    I see the same problem on a M5000 which has two non global zones on it.
    Both machines had the Solaris 10 5/08 update bundle applied when it came out, and have had recommended patch sets applied at regular intervals since.
    This issue only came to light when trying the latest bundles containing 138888-01/02, which fail to install in the global zones because the non-global zone install dies claiming 137137-09 is not installed (which is plainly wrong).
    I've tried to recreate this on a test server but unfortunately everything works as it should, even though the test server has a similar history in terms of patches and original setup to the others.
    I'm planning to detach the non-global zone and try an attach -u to see if it will update the patches properly, but I'm not holding out much hope on that one (I need to wait for a maintenance window when I can take the zone down, in a couple of days).
    Any ideas?

    Well, I am following up on my own post. It seems I have determined what is causing the problem, or at least a situation in which the problem can be reproduced, which I have been able to do on my test system.
    It seems that if the zone container's zonepath is in /usr (eg: /usr/zones, /usr/local/zones, or some other path under /usr) the patchadd of 137137-09 will fail with the log similar to posted above, and this will stop further kernel patches (eg: 138888-02) being added.
    The test system had everything patched to current, and searching the web I can't find any other reports of this being an issue, but I have reproduced the problem on my test machine (which originally worked OK because its test zones were on a filesystem mounted as /zones). When I used zoneadm -z <zonename> move to move a zone to /usr/local and applied 137137-09, the same problem came up.
    Not sure what is causing this issue. I imagine it might have to do with some sort of confusion between the patch utilities and the read-only loopback filesystems in the sparse-root zone, but I can't be sure.
    Maybe someone at Sun will see this and figure out what the deal is :)
    When I moved my test zone back to /zones the patch applied perfectly, so it's definitely related to having the zone in /usr or /usr/local (I tried both locations, even though they are separate UFS filesystems on my test server).
    Oh, I am running DiskSuite to mirror the filesystems on my V210s, which may or may not have anything to do with it.
    Hope this helps someone in the future at least!
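    For anyone hitting the same thing, a sketch of the workaround described above, using the zone name from the earlier log and an example target path outside /usr:
    # the zone has to be halted before it can be moved
    zoneadm -z cotchin halt
    # move the zonepath out of /usr/local
    zoneadm -z cotchin move /zones/cotchin
    zoneadm -z cotchin boot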

  • Lucreate and non-global zones

    Hi - I'm trying to get my head around Live Upgrade now that I've switched to ZFS on Solaris 10 for our test servers. The problem is that we have a number of non-global zones, and when I run the lucreate command I get a number of warnings:
    lucreate -n CPU_2012-07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <CPU_2012-07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <CPU_2012-07>.
    Populating file systems on boot environment <CPU_2012-07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    WARNING: Directory </export/zones/tdukwxstestz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwxstestz01> is shared between BEs, remapping to <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Directory </export/zones/tdukwbprepz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwbprepz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwbprepz01> is shared between BEs, remapping to <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/export/zones/tdukwbprepz01> on <rpool/export/zones/tdukwbprepz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwbprepz01@CPU_2012-07> on <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Creating snapshot for <rpool/export/zones/tdukwxstestz01> on <rpool/export/zones/tdukwxstestz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwxstestz01@CPU_2012-07> on <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10@CPU_2012-07> on <rpool/ROOT/CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10/var@CPU_2012-07> on <rpool/ROOT/CPU_2012-07/var>.
    Mounting ABE <CPU_2012-07>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <CPU_2012-07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <CPU_2012-07> bootable.
    Population of boot environment <CPU_2012-07> successful.
    Creation of boot environment <CPU_2012-07> successful.
    So ALL my non-global zones live under /export/zones/<zonename> - what do all the WARNINGS mean?
    I then applied the Oracle CPU, activated the ABE and shut down the server. When it came back up, none of the zones would start, and this seems to be because all the zonepaths and references to the zones now have CPU_2012-07 appended. I can edit the zone XML files to fix this, but I'm sure that is not the recommended method, and it's something I would prefer not to do.
    So basically I think I have not set up my ZFS pools and datasets correctly to take into account my non-global zones and where I created them.
    My zfs list output looks like this now; unfortunately I don't have the output from before I started this work:
    zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 91.2G 456G 106K /rpool
    rpool/ROOT 9.06G 456G 31K legacy
    rpool/ROOT/10 38.2M 456G 4.34G /.alt.10
    rpool/ROOT/10/var 22.6M 24.0G 3.60G /.alt.10/var
    rpool/ROOT/CPU_2012-07 9.02G 456G 4.34G /
    rpool/ROOT/CPU_2012-07@CPU_2012-07 566M - 4.34G -
    rpool/ROOT/CPU_2012-07/var 4.12G 456G 4.11G /var
    rpool/ROOT/CPU_2012-07/var@CPU_2012-07 13.8M - 3.58G -
    rpool/dump 2.00G 456G 2.00G -
    rpool/export 5.94G 456G 35K /export
    rpool/export/home 76.9M 23.9G 76.9M /export/home
    rpool/export/zones 5.87G 456G 36K /export/zones
    rpool/export/zones/tdukwbprepz01 21.7M 456G 323M /export/zones/tdukwbprepz01
    rpool/export/zones/tdukwbprepz01-10 321M 31.7G 312M /export/zones/tdukwbprepz01-10
    rpool/export/zones/tdukwbprepz01-10@CPU_2012-07 8.50M - 312M -
    rpool/export/zones/tdukwxstestz01 29.6M 456G 5.49G /export/zones/tdukwxstestz01
    rpool/export/zones/tdukwxstestz01-10 5.51G 26.5G 5.47G /export/zones/tdukwxstestz01-10
    rpool/export/zones/tdukwxstestz01-10@CPU_2012-07 32.1M - 5.48G -
    rpool/logs 8.23G 23.8G 8.23G /logs
    rpool/swap 66.0G 458G 64.0G -
    Any help would be greatly appreciated.
    Thanks - Julian.

    OK, so I've been tinkering with this. I'm not sure this is my exact problem, but a few people have reported issues with patch 121430-xx: it gives the exact same WARNINGS when you create an ABE via lucreate and have non-global zones. One of the suggestions was to go back to an earlier version of the patch, and then someone said it was fixed in revision 71. So I installed the very latest version, 121430-81, and now it fails with a different error. Fortunately, this time I have a screen shot of the before and after:
    BEFORE:
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    1 build14 running /export/zones/build14 native shared
    bash-3.2# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 70.0G 477G 106K /rpool
    rpool/ROOT 1.98G 477G 31K legacy
    rpool/ROOT/10 1.98G 477G 1.95G /
    rpool/ROOT/10/var 28.8M 24.0G 28.8M /var
    rpool/dump 2.00G 477G 2.00G -
    rpool/export 36.5M 477G 33K /export
    rpool/export/home 35K 24.0G 35K /export/home
    rpool/export/zones 36.4M 477G 32K /export/zones
    rpool/export/zones/build14 36.4M 32.0G 36.4M /export/zones/build14
    rpool/logs 3.78M 32.0G 3.78M /logs
    rpool/swap 66.0G 543G 16K -
    bash-3.2# df -h |grep rpool
    rpool/ROOT/10 547G 1.9G 477G 1% /
    rpool/ROOT/10/var 24G 29M 24G 1% /var
    rpool/export 547G 33K 477G 1% /export
    rpool/export/home 24G 35K 24G 1% /export/home
    rpool/export/zones 547G 32K 477G 1% /export/zones
    rpool/export/zones/build14 32G 36M 32G 1% /export/zones/build14
    rpool/logs 32G 3.8M 32G 1% /logs
    rpool 547G 106K 477G 1% /rpool
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    bash-3.2# lucreate -n 10-CPU_2012_07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <10-CPU_2012_07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <10-CPU_2012_07>.
    Populating file systems on boot environment <10-CPU_2012_07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10/var@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07/var>.
    Mounting ABE <10-CPU_2012_07>.
    Generating file list.
    Copying data from PBE <10> to ABE <10-CPU_2012_07>.
    100% of filenames transferred
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <10-CPU_2012_07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to mount zone <build14> in </.alt.tmp.b-0ob.mnt>.
    zoneadm: zone 'build14': zone root /export/zones/build14/root already in use by zone build14
    zoneadm: zone 'build14': call to zoneadmd failed
    ERROR: Unable to mount non-global zones of ABE <10-CPU_2012_07>: cannot make ABE bootable.
    ERROR: umount: /.alt.tmp.b-0ob.mnt/var/run busy
    ERROR: cannot unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: failed to unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: cannot fully unmount boot environment - <1>: file systems remain mounted
    ERROR: Unable to make boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to populate file systems on boot environment <10-CPU_2012_07>.
    Removing incomplete BE <10-CPU_2012_07>.
    ERROR: Cannot make file systems for boot environment <10-CPU_2012_07>.
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    10-CPU_2012_07 no no no yes -
    So the very latest Live Upgrade patch doesn't seem to have fixed this; I get even more errors now.
    Again any help would be greatly appreciated.
    Thanks - Julian.
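    (A small housekeeping note, offered tentatively: the incomplete BE left behind by the failed run can usually be removed before the next attempt, once any stale /.alt.tmp.* mounts have been unmounted.)
    ludelete 10-CPU_2012_07
    lustatus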

  • Disabling copy of /etc/rc scripts to non-global zones

    Hi,
    How would I disable copying of all (or some of) the rc2.d scripts to non-global zones during the zone install procedure? Some of the services do not make sense inside a non-global zone, because they may be related to physical devices that cannot be managed from the non-global zone.
    Thanks!

    David:
    With SMF, isn't it necessary that the rc.d script actually register with the monitoring service? If that is the case and an application doesn't register, then it is not monitored by SMF.
    There could be applications that have their own drivers, which are loaded as part of an rc.d script. Each of these application scripts now has to be zone-aware. If there were a way to avoid installing rc.d scripts in zones, you wouldn't have this problem (of trying to load drivers inside a zone).
    Let's say there are 2 packages, A and B, with the following characteristics:
    . B is dependent on A
    . B needs to be installed in the zone.
    . A loads kernel modules / drivers so cannot be installed in the zone.
    A solution I can think of is to build package A with ALLZONES=true and HOLLOW=true. As I understand the use of these variables, only A's packaging info should get updated in the non-global zone, and none of package A's files (binaries, scripts, etc.) should get installed in the non-global zone. If that works, then you avoid the rc.d script problem and still satisfy the package dependencies.
    I would appreciate your response on the use of these variables and how Sun packages deal with such dependencies.
    Thanks!
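    For reference, this behaviour is controlled by pkginfo(4) parameters; a minimal, purely hypothetical pkginfo for package A might look like this (SUNW_PKG_ALLZONES and SUNW_PKG_HOLLOW are the parameter names behind the ALLZONES/HOLLOW shorthand above):
    PKG=EXMPdrva
    NAME=hypothetical driver package A
    VERSION=1.0
    ARCH=sparc
    CATEGORY=system
    BASEDIR=/
    SUNW_PKG_ALLZONES=true
    SUNW_PKG_HOLLOW=true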

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones, so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    "I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here..."
    The thread was locked because you were not replying to it. You were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.
