Is it possible to patch Global Zone and only specific Non-Global Zones?

Hi Champs,
Is it possible to patch the Global Zone and only specific Non-Global Zones? The idea is to patch only the DEV zones on the system, test the applications, and then patch only the STG zones on the same server!
Not sure if it is possible, but just throwing the question out there...
Cheers,
Nitin

M10vir wrote:
> Yes, if you have branded (non-sparse) zone!
Branded zones and sparse zones don't have the relation that you imply. In Solaris 10, native zones can be sparse or whole-root (non-sparse, as you say). Zones that are not native zones are branded zones. Branded zones on Solaris 10 include Solaris Legacy Containers, previously known as Solaris 8 Containers and Solaris 9 Containers. That add-on product allows you to run Solaris 8 and Solaris 9 application environments under a thin layer of virtualization provided by the brands framework. solaris8 and solaris9 branded zones can be patched independently of each other and of the global zone.
Solaris 11 has no "native zones" - all zones use the brands framework. The "solaris" brand does no emulation and in that respect is very similar to native zones on Solaris 10. Solaris 11 also provides Solaris 10 Zones via the solaris10 brand. This allows zones or the global zone from a Solaris 10 system to be transferred to a Solaris 11 system and run as solaris10 zones. When running on Solaris 11, solaris10 zones can each be patched independently from each other and the Solaris 11 global zone. Technically, Solaris 11 doesn't have patches - it just has newer versions of packages to which the system is updated.
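For example, on Solaris 11 a solaris10 branded zone is patched from inside the zone with the ordinary Solaris 10 tools, while the global zone is updated through IPS. A minimal sketch (the zone name "dev-zone" and the patch location are hypothetical):
global# zlogin dev-zone
dev-zone# patchadd /var/tmp/137137-09
dev-zone# exit
global# pkg update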

Similar Messages

  • Segregation of global layout and user-specific layouts for IW38 with activity 23

    Hi All,
    We have an issue where users are able to change the global layout in the IW38 transaction, which causes a lot of confusion for other users. Can anyone help me investigate how access to the global layout and user-specific layouts can be segregated?
    We would like to achieve the following:
    1) Can we control access in such a way that users can only change layouts specific to them?
    2) They should not be able to change the global layout.
    3) List the roles having authorization object S_ALV_LAYO with activity 23.
    If anyone has already faced this issue, can you please let me know how you resolved it?
    Thanks in Advance.

    Hi David,
    > The standard S_ALV_LAYO object is, so far as I know, no use for normal day to day end users as it over-rides the restrictions?
    It's up to the business, but ours wanted the managers to set a global layout for all other users to display only... sort of a standard report. I agree with you on the challenge of deciding who gets long-term S_ALV_LAYO with activity 23 global maintenance access. We used to have S_ALV_LAYO in every role that contained report access (an honest design mistake), which is why we created a separate role containing just S_ALV_LAYO with activity 23, assigned to managers only.
    In our generic system role that is given to everyone, we have both S_ALV_LAYO (blank) and F_IT_ALV (activities 01, 02), so all non-manager users can create and save local layouts and still view global layouts.
    F_IT_ALV activity 70 gives the authority to overwrite others' local layouts (the "Manage Layouts" access), so we only gave 01 and 02. Be careful here, as giving 03 (display) will make all other activities inactive.
    Best,
    Alice

  • Ssh takes me to the global zone instead of the non-global zone

    I have set up my first Solaris 10 server with a new zone. The ce device is set up on the zone as well as the global zone.
    Output from ifconfig on the global zone:
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.16.1.217 netmask ffffff00 broadcast 172.16.1.255
    ether 0:3:ba:f2:a1:54
    ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 172.16.1.199 netmask ffffff00 broadcast 172.16.1.255
    ether 0:3:ba:f2:a1:54
    Output from the non-global zone:
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.16.1.199 netmask ffff0000 broadcast 172.16.255.255
    ether 0:3:ba:f2:a1:54
    When I ssh into the non-global zone, I end up in the global zone. Can I ssh straight into the non-global zone? Am I missing something in the zone setup that keeps me from being able to ssh into it?
    Any help is appreciated. I have been racking my brain on this for several hours.
    Thanks ahead of time.

    TAdriver wrote:
    > The one thing I have found in the documentation is that if you set the network as an exclusive IP, you can only assign the physical name using zonecfg. You can't set the IP address or the default router. In fact, if you try to set either of those, you get an error saying you can't set those using an exclusive IP type.
    Correct. When doing a shared-IP zone, the zone has no privileges to do IP-level things. So the global zone (via the zone configuration) creates the virtual interface and sets the IP address. Then when the zone is booted, the interface is given to it.
    With an exclusive-IP zone, the zone can do all this work itself. From its perspective, it's handed an interface like a regular machine. So the IP settings are done within the zone (/etc/hosts, /etc/hostname.XXX, /etc/netmasks).
    Darren
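    A minimal sketch of both styles (the zone names and the ce2 NIC are hypothetical; ip-type needs Solaris 10 8/07 or later). Shared-IP, where the global zone fixes the address in the zone configuration:
    global# zonecfg -z myzone
    zonecfg:myzone> set ip-type=shared
    zonecfg:myzone> add net
    zonecfg:myzone:net> set physical=ce1
    zonecfg:myzone:net> set address=172.16.1.199/24
    zonecfg:myzone:net> end
    zonecfg:myzone> commit
    Exclusive-IP, where only the NIC is assigned and the address is configured inside the zone:
    global# zonecfg -z otherzone
    zonecfg:otherzone> set ip-type=exclusive
    zonecfg:otherzone> add net
    zonecfg:otherzone:net> set physical=ce2
    zonecfg:otherzone:net> end
    zonecfg:otherzone> commit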

  • Pkgmap files missing in global zone, can't build non-global zone

    My Solaris 10 server is missing the pkgmap files for its packages. As a result, I can't build a non-global zone. Is there a way to recreate the pkgmap files?
    The OS on the Solaris 10 server was installed via Jumpstart (initial install). However, the Jumpstart process used a Solaris 9 boot server, which seems to have caused the missing-pkgmap problem.
    Does anyone know of any other problems which would result from a version mismatch between a boot and installation server during the jumpstart process?

    Hi, I have problems with building transmission from svn too:
    $ versionpkg
    ==> retrieving latest revision number from svn... 3730
    ==> newer revision detected: 3730
    ==> Entering fakeroot environment
    ==> Making package: transmission-svn 3730-1 (Di 6. Nov 08:28:38 CET 2007)
    ==> Checking Runtime Dependencies...
    ==> Checking Buildtime Dependencies...
    ==> Retrieving Sources...
    ==> Validating source files with md5sums
    ==> Extracting Sources...
    ==> Removing existing pkg/ directory...
    ==> Starting build()...
    Fetching external item into 'Transmission/third-party/libevent'
    Checked out external at revision 477.
    Checked out revision 3730.
    ==> SVN checkout done or server timeout
    ==> Starting make...
    ./autogen.sh: line 16: autoreconf: command not found
    Creating aclocal.m4 ...
    Running glib-gettextize...  Ignore non-fatal messages.
    Copying file mkinstalldirs
    Copying file po/Makefile.in.in
    Please add the files
      codeset.m4 gettext.m4 glibc21.m4 iconv.m4 isc-posix.m4 lcmessage.m4
      progtest.m4
    from the /aclocal directory to your autoconf macro directory
    or directly to your aclocal.m4 file.
    You will also need config.guess and config.sub, which you can get from
    ftp://ftp.gnu.org/pub/gnu/config/.
    Making aclocal.m4 writable ...
    Running intltoolize...
    PKGBUILD: line 33: ./configure: No such file or directory
    make: *** No targets specified and no makefile found.  Stop.
    ==> ERROR: Build Failed.  Aborting...
    ==> ERROR: Reverting pkgver...
    I don't know what's up with autoreconf.
    I hope someone can help me!
    greez

  • Non-global zone network configuration

    Hi,
    Zones are a new thing for me, so please excuse me if this is a basic query... I have recently jumpstarted a system using a Jumpstart script that was developed by somebody else. It creates two non-global zones and configures their network interfaces.
    I have unplumbed one of the virtual interfaces for a particular zone, because the IP address it was using is actually in use by another system on the network. However, when I reboot the zone, the interface is re-assigned the same IP address. The IP address in question is not in /etc/hosts in any of the zones, and in the non-global zones the "hostname.<interface>" files do not exist at all. The IP address is not in sysidcfg in any of the zones either.
    So basically, interface e1000g0:2 is being assigned an IP address that was configured by the Jumpstart script, so perhaps the script placed that IP address in some file that is read when the zone boots. I have even checked the rc scripts just in case, but I cannot find the IP address anywhere. Would anybody be able to tell me where the configuration information could be coming from in this scenario (nsswitch.conf specifies only files)?
    Thank you in advance...

    It's in the zone config:
    zonecfg -z <zone in question> info
    It should list a net address and physical device. You can then use:
    zonecfg -z <zone in question>
    From there you can remove the net statements, or change the address if you want to keep using the network card in your zone.
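    For example (a minimal sketch; the zone name and address are hypothetical):
    # zonecfg -z myzone info net
    # zonecfg -z myzone
    zonecfg:myzone> remove net address=172.16.1.199
    zonecfg:myzone> commit
    zonecfg:myzone> exit
    On the next reboot of the zone the stale address will no longer be plumbed.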

  • FilesystemMountPoints for ufs disks mounted to non-global zones

    Hello,
    I have a SAN UFS disk to be used as failover storage, mounted to non-global zones (NGZ).
    Solaris 10 nodes running Sun Cluster 3.2.
    I'm looking for the correct value for the FilesystemMountPoints property and the vfstab entry required for a failover disk mounted to an NGZ.
    Should the path NOT include the NGZ root path?
    From the man page for SUNW.HAStoragePlus, for the property FilesystemMountPoints:
    You can specify both the path in a non-global zone and the path in a global zone, in this format:
    Non-GlobalZonePath:GlobalZonePath
    The global zone path is optional. If you do not specify a global zone path, Sun Cluster assumes that the path in
    the non-global zone and in the global zone are the same. If you specify the path as
    Non-GlobalZonePath:GlobalZonePath, you must specify GlobalZonePath in the global zone's /etc/vfstab.
    The default setting for this property is an empty list.
    You can use the SUNW.HAStoragePlus resource type to make a file system available to a non-global zone. To enable
    the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global zone and in the
    non-global zone. The SUNW.HAStoragePlus resource type makes the file system available to the non-global zone
    by mounting the file system in the global zone. The resource type then performs a loopback mount in the
    non-global zone.
    Each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in all
    global zones. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in non-global zones.
    SUNW.HAStoragePlus resources that specify local file systems can only belong in a failover resource group
    with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. You
    can specify both local and global file system mount points at the same time.
    Any file system whose mount point is present in the FilesystemMountPoints extension property is assumed to
    be local if its /etc/vfstab entry satisfies both of the following conditions:
    1. The non-global mount option is specified.
    2. The "mount at boot" field for the entry is set to "no."
    In my situation, I want to mount the disk to /mysql_data on the NGZ called ftp_zone. So, which is the correct setup?
    a. FilesystemMountPoints=/mysql_data:/zones/ftp_zone/root/mysql_data
    Global zone vfstab entry /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zones/ftp_zone/root/mysql_data ufs 1 no logging
    NGZ mount point /mysql_data
    OR
    b. FilesystemMountPoints=/mysql_data:/mysql_data (can be condensed to simply /mysql_data)
    Global zone vfstab entry /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /mysql_data ufs 1 no logging
    NGZ mount point /mysql_data
    Should the path NOT include the NGZ root path?
    And should the fsck pass # be 1 or 2?
    Looking at this example from p. 26 of
    http://wikis.sun.com/download/attachments/24543510/820-4690.pdf
    This example doesn't mention the entry in vfstab.
    Create a resource group that can hold services in nodea zonex and nodeb zoney
    nodea# clresourcegroup create -n nodea:zonex,nodeb:zoney test-rg
    Make sure the HAStoragePlus resource is registered
    nodea# clresourcetype register SUNW.HAStoragePlus
    Now add a UFS [or VxFS] fail-over file system: mount /bigspace1 to failover/export/install in NGZ
    nodea# clresource create -t SUNW.HAStoragePlus -g test-rg \
    -p FilesystemMountPoints=/fail-over/export/install:/bigspace1 \
    ufs-hasp-rs
    Thank you!

    Hi,
    /zones/oracle-z is my root directory of the zone.
    * add the device to the zone :
    root@mpbxapp1 # zonecfg -z oracle-z
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/dsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/rdsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> exit
    * add FS to NGZ's /etc/vfstab: (You may omit this step; I don't know why, but it works without it :))
    root@mpbxapp1 # vi /zones/oracle-z/root/etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/oracle ufs 1 no logging
    * add FS to global zone's /etc/vfstab :
    root@mpbxapp1 # vi /etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /zonefs/oracle ufs 1 no logging
    * set the FilesystemMountPoints property :
    root@mpbxapp1 # /usr/cluster/bin/clresource set -p FilesystemMountPoints=/global/oracle:/zonefs/oracle oracle-hastp
    With this configuration you can ensure that the FS is not directly accessible from the global zone. Actually, it's accessible, but under a different path. For example, Oracle cannot be started/stopped from the global zone because the controlfile cannot be accessed. :)
    Hope this helps,
    Murat
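    Restating Murat's pattern for the original question (a sketch only; the global-zone mount point /zonefs/mysql_data and the resource name mysql-hasp-rs are hypothetical; note that neither path includes the NGZ root path /zones/ftp_zone/root):
    Global zone /etc/vfstab entry:
    /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zonefs/mysql_data ufs 1 no logging
    Set the property with the NGZ path first, then the global-zone path:
    # /usr/cluster/bin/clresource set -p FilesystemMountPoints=/mysql_data:/zonefs/mysql_data mysql-hasp-rs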

  • Are volume manager commands available inside non-global zones

    My application requires the use of volume manager commands to create a new filesystem and expand an existing filesystem inside the non-global zone. Is this supported?
    Or is the only option to create the filesystem in the global zone and assign it to non-global zones?

    ArunZone wrote:
    > My application requires the use of volume manager commands to create a new filesystem and expand an existing filesystem inside the non-global zone. Is this supported?
    No. There's no zone knowledge in SVM, so it must be restricted to the global zone only. If you could use ZFS instead, you could delegate a filesystem to a zone and create/modify within the zone.
    > Or is the only option to create the filesystem in the global zone and assign it to non-global zones?
    If you must use SVM, yes.
    Darren
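    A minimal sketch of the ZFS alternative Darren mentions (the pool "tank" and zone name "myzone" are hypothetical):
    global# zfs create tank/myzone-data
    global# zonecfg -z myzone
    zonecfg:myzone> add dataset
    zonecfg:myzone:dataset> set name=tank/myzone-data
    zonecfg:myzone:dataset> end
    zonecfg:myzone> commit
    After the zone is rebooted, root inside the zone can create, destroy and resize filesystems under tank/myzone-data with the ordinary zfs commands.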

  • After installing 137137-09 patch OK in global zone, bad in non global zone

    Hi all,
    scratching my head with this one.
    Installed 137137-09 fine on a Sun Fire V210. The machine has one non-global zone running a proxy server (nothing very exciting there!). The non-global zone has a local filesystem attached, but I don't think this is the issue (on my test V210 I created the same sort of filesystem and was unable to replicate the problem :( ).
    So 137137-09 is fine in the global zone (I had the non-global zone halted when the patch was installed). It is also installed in the non-global zone (i.e. when the zone boots it reports rev 137137-09 via uname), but in the patch log in the non-global zone I get this:
    PKG=SUNWust2.v
    Original package not installed.
    pkgadd: ERROR: ERROR: unable to get zone brand: zonecfg_get_brand: No such zone configured
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: cannot open
    pkgadd: ERROR: checkinstall script did not complete successfully
    Dryrun complete.
    No changes were made to the system.
    I'm not sure if the branding error is causing the checkinstall postpatch script error or if they are unrelated. There don't seem to be any obvious permission problems that I can find. I have checked that all the pkg and patch utility patches are up to date on the system. Searching on the brand error gives me a link to a problem with 127127-11, but that was installed on the system before the local zone was created, and all the other seemingly appropriate patches (e.g. 119254) are up to date or at a higher revision than recommended.
    I see the same problem on an M5000 which has two non-global zones on it.
    Both machines had the Solaris 10 5/08 update bundle applied when it came out, and have had recommended patch sets applied at regular intervals since.
    This issue only came to light when trying the latest bundles containing 138888-01/02, which fail to install in the global zone because the non-global-zone install dies claiming 137137-09 is not installed (which is plainly wrong).
    I've tried to recreate this on a test server but unfortunately everything works as it should, even though the test server has a similar history in terms of patches and original setup to the others.
    I'm planning to detach the non-global zone and try an attach -u to see if it will update the patches properly, but I'm not holding out much hope on that one (I need to wait for a maintenance window when I can take the zone down, in a couple of days).
    Any ideas?

    Well, I am following up to my own post. It seems I have determined what is causing the problem, or at least situations where it can be reproduced, which I have been able to do on my test system.
    It seems that if the zone's zonepath is in /usr (e.g. /usr/zones, /usr/local/zones, or some other path under /usr), the patchadd of 137137-09 will fail with a log similar to the one posted above, and this will stop further kernel patches (e.g. 138888-02) from being added.
    The test system had everything patched to current, and searching the web I can't find any other instances of this being an issue, but I have reproduced the problem on my test machine (which originally worked OK because its test zones were in a filesystem mounted as /zones). When I used zoneadm -z <zonename> move to relocate a zone to /usr/local and applied 137137-09, the same problem came up.
    Not sure what is causing this issue. I imagine it might have to do with some sort of confusion between the patch utilities and the read-only loopback filesystems in the sparse root zone, but I can't be sure.
    Maybe someone at Sun will see this and figure out what the deal is :)
    When I moved my test zone back to /zones the patch applied perfectly, so it's definitely caused by having the zone in /usr or /usr/local (I tried both locations, even though they are separate UFS filesystems on my test server).
    Oh, I am running DiskSuite to mirror filesystems on my V210s, which may or may not have anything to do with it.
    Hope this helps someone in the future at least!
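    For reference, the relocation itself (a sketch; the target path is hypothetical, and the zone must be halted first):
    global# zoneadm -z cotchin halt
    global# zoneadm -z cotchin move /zones/cotchin
    global# zoneadm -z cotchin boot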

  • Whole root or sparse root zones and patching

    Hi all,
    A while back, I did some cluster patching tests on a system with only sparse root zones and one with whole root zones... and I seem to recall that the patch times were about equal, which surprised me. I had thought the sparse root model mainly patches the global zone, and that even though patchadd may need to run through the sparse NGZs, it isn't doing much other than updating /var/sadm info in the NGZs.
    Has anyone seen this to be true, or are there major patching improvements using a "sparse" root NGZ model over a "whole" root model?
    thanks much.

    My testing showed the same results and I was a bit surprised as well. As I dug into it further, my understanding was that the majority of the patch application time goes into figuring out what to patch, not actually copying files around. That work must be done for sparse zones in the same way as for whole-root zones; we just save the few milliseconds of actually backing up and replacing the files.
    I suspect there is a large amount of slack that could be optimized in the patching process (both with and without zones), but I don't understand it nearly well enough to say that with any authority.

  • Lucreate and non-global zones

    Hi - I'm trying to get my head around Live Upgrade now that I've switched to ZFS on Solaris 10 for our test servers. The problem is that we have a number of non-global zones, and when I run the lucreate command I get a number of warnings:
    lucreate -n CPU_2012-07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <CPU_2012-07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <CPU_2012-07>.
    Populating file systems on boot environment <CPU_2012-07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    WARNING: Directory </export/zones/tdukwxstestz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwxstestz01> is shared between BEs, remapping to <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Directory </export/zones/tdukwbprepz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwbprepz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwbprepz01> is shared between BEs, remapping to <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/export/zones/tdukwbprepz01> on <rpool/export/zones/tdukwbprepz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwbprepz01@CPU_2012-07> on <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Creating snapshot for <rpool/export/zones/tdukwxstestz01> on <rpool/export/zones/tdukwxstestz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwxstestz01@CPU_2012-07> on <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10@CPU_2012-07> on <rpool/ROOT/CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10/var@CPU_2012-07> on <rpool/ROOT/CPU_2012-07/var>.
    Mounting ABE <CPU_2012-07>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <CPU_2012-07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <CPU_2012-07> bootable.
    Population of boot environment <CPU_2012-07> successful.
    Creation of boot environment <CPU_2012-07> successful.
    So ALL my non-global zones live under /export/zones/<zonename> - what do all the WARNINGS mean?
    I then applied the Oracle CPU, activated the ABE and shut down the server. When it came back up, none of the zones would start, and this seems to be because all the zonepaths and references to the zones now have CPU_2012-07 appended. I can edit the zone XML files to fix this, but I'm sure that is not the recommended method and it's something I would prefer not to do.
    So basically I think I have not set my ZFS resource pools up correctly to take into account my non-global zones and where I have created them.
    My zfs list output looks like this now; unfortunately I don't have the output from before I started this work:
    zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 91.2G 456G 106K /rpool
    rpool/ROOT 9.06G 456G 31K legacy
    rpool/ROOT/10 38.2M 456G 4.34G /.alt.10
    rpool/ROOT/10/var 22.6M 24.0G 3.60G /.alt.10/var
    rpool/ROOT/CPU_2012-07 9.02G 456G 4.34G /
    rpool/ROOT/CPU_2012-07@CPU_2012-07 566M - 4.34G -
    rpool/ROOT/CPU_2012-07/var 4.12G 456G 4.11G /var
    rpool/ROOT/CPU_2012-07/var@CPU_2012-07 13.8M - 3.58G -
    rpool/dump 2.00G 456G 2.00G -
    rpool/export 5.94G 456G 35K /export
    rpool/export/home 76.9M 23.9G 76.9M /export/home
    rpool/export/zones 5.87G 456G 36K /export/zones
    rpool/export/zones/tdukwbprepz01 21.7M 456G 323M /export/zones/tdukwbprepz01
    rpool/export/zones/tdukwbprepz01-10 321M 31.7G 312M /export/zones/tdukwbprepz01-10
    rpool/export/zones/tdukwbprepz01-10@CPU_2012-07 8.50M - 312M -
    rpool/export/zones/tdukwxstestz01 29.6M 456G 5.49G /export/zones/tdukwxstestz01
    rpool/export/zones/tdukwxstestz01-10 5.51G 26.5G 5.47G /export/zones/tdukwxstestz01-10
    rpool/export/zones/tdukwxstestz01-10@CPU_2012-07 32.1M - 5.48G -
    rpool/logs 8.23G 23.8G 8.23G /logs
    rpool/swap 66.0G 458G 64.0G -
    Any help would be greatly appreciated.
    Thanks - Julian.

    OK, so I've been tinkering with this. I'm not sure this is my exact problem, but a few people have reported issues with the following patch:
    121430-xx
    It gives the exact same WARNINGS when trying to create an ABE via lucreate when you have non-global zones. One of the suggestions was to go back to an earlier version of this patch, and then someone said it was fixed in revision 71. So I installed the very latest version, 121430-81, and now it fails with a different error. Fortunately this time I have a screen shot of the before and after:
    BEFORE:
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    1 build14 running /export/zones/build14 native shared
    bash-3.2# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 70.0G 477G 106K /rpool
    rpool/ROOT 1.98G 477G 31K legacy
    rpool/ROOT/10 1.98G 477G 1.95G /
    rpool/ROOT/10/var 28.8M 24.0G 28.8M /var
    rpool/dump 2.00G 477G 2.00G -
    rpool/export 36.5M 477G 33K /export
    rpool/export/home 35K 24.0G 35K /export/home
    rpool/export/zones 36.4M 477G 32K /export/zones
    rpool/export/zones/build14 36.4M 32.0G 36.4M /export/zones/build14
    rpool/logs 3.78M 32.0G 3.78M /logs
    rpool/swap 66.0G 543G 16K -
    bash-3.2# df -h |grep rpool
    rpool/ROOT/10 547G 1.9G 477G 1% /
    rpool/ROOT/10/var 24G 29M 24G 1% /var
    rpool/export 547G 33K 477G 1% /export
    rpool/export/home 24G 35K 24G 1% /export/home
    rpool/export/zones 547G 32K 477G 1% /export/zones
    rpool/export/zones/build14 32G 36M 32G 1% /export/zones/build14
    rpool/logs 32G 3.8M 32G 1% /logs
    rpool 547G 106K 477G 1% /rpool
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    bash-3.2# lucreate -n 10-CPU_2012_07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <10-CPU_2012_07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <10-CPU_2012_07>.
    Populating file systems on boot environment <10-CPU_2012_07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10/var@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07/var>.
    Mounting ABE <10-CPU_2012_07>.
    Generating file list.
    Copying data from PBE <10> to ABE <10-CPU_2012_07>.
    100% of filenames transferred
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <10-CPU_2012_07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to mount zone <build14> in </.alt.tmp.b-0ob.mnt>.
    zoneadm: zone 'build14': zone root /export/zones/build14/root already in use by zone build14
    zoneadm: zone 'build14': call to zoneadmd failed
    ERROR: Unable to mount non-global zones of ABE <10-CPU_2012_07>: cannot make ABE bootable.
    ERROR: umount: /.alt.tmp.b-0ob.mnt/var/run busy
    ERROR: cannot unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: failed to unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: cannot fully unmount boot environment - <1>: file systems remain mounted
    ERROR: Unable to make boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to populate file systems on boot environment <10-CPU_2012_07>.
    Removing incomplete BE <10-CPU_2012_07>.
    ERROR: Cannot make file systems for boot environment <10-CPU_2012_07>.
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    10-CPU_2012_07 no no no yes -
    So the very latest Live Upgrade patch doesn't seem to have fixed this; I get even more errors now.
    Again any help would be greatly appreciated.
    Thanks - Julian.

  • NFS and non global zones

    Hi,
    I've read numerous threads about mounting NFS shares in non-global zones but have still not been able to resolve my issue.
    I have five T3-2s being used as standalone SAP servers running Solaris 10u9 and numerous sparse non-global zones. Basically, I have a 1 TB HDS LUN presented to one T3-2 and have NFS-shared it out as /stage to the remaining four global zones, which works as expected.
    However, I am unable to mount the shared NFS filesystem in the non-global zones.
    When I try to mount the NFS share from the non-global zone itself I receive RPC errors. I have also tried configuring the non-global zone with the NFS mount (from the global zone) as lofs, but the zone won't boot, and manually mounting the NFS mount from the global zone, which looks like it works, but when I do a df in the non-global zone I receive stat errors.
    I've even tried linking the NFS share on the global zone to the non-global zone directory, but that produces a strange linkage when the zone is booted.
    Numerous threads say this is not supported, but I can't believe Oracle, after ~6-7 years of zones and numerous threads on the subject, wouldn't have resolved this issue.
    I could easily mount the storage locally and lofs it to the non-global zone, but unfortunately I don't have the storage capacity available, which is why I thought NFS mounting in the non-global zone would work!
    Any suggestions would be gratefully received!
    Thanks.

    If you are trying to mount an NFS filesystem in a non-global zone from the global zone of the same server, use lofs instead.
    You can mount the same filesystem in all non-global zones using lofs, and all non-global zones will have read/write access to it.
    If it is the global zone of some other server, then you can use NFS. But before that, check the way it is exported on the NFS server, i.e. whether the client from which you are trying to mount has permission to do so.
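    A minimal sketch of the lofs approach on the T3-2 that owns the LUN (the zone name "sapzone" is hypothetical; /stage is the path from the question):
    global# zonecfg -z sapzone
    zonecfg:sapzone> add fs
    zonecfg:sapzone:fs> set dir=/stage
    zonecfg:sapzone:fs> set special=/stage
    zonecfg:sapzone:fs> set type=lofs
    zonecfg:sapzone:fs> end
    zonecfg:sapzone> commit
    global# zoneadm -z sapzone reboot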

  • Can I import one non-global zone from one machine to another?

    If I create a non-global zone on one disk on machine A, is it possible to make a copy of that disk and import the non-global zone to machine B? If yes, how do I import the non-global zone?
    Thanks!

    It should be possible if your machines are installed in the same way, because you need the same environment (patches, packages, ...).
    If this is true, you should export your zone definition on machine A (zonecfg export) and import it on machine B (zonecfg -f ...).
    Then create the new zone on B. When finished, take your zonepath with all its data on A and copy it to B. That should be all.
    With this solution I hope it would be possible to have a shadow instance on B and the active instance on A. If you have your whole zonepath on external disks (e.g. EMC), you only have to mount your disks on B and start your zone.
    harruh
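    A minimal sketch of that flow using detach/attach (the zone name and zonepath are hypothetical; detach/attach needs Solaris 10 11/06 or later):
    A# zoneadm -z myzone halt
    A# zoneadm -z myzone detach
    (copy the zonepath /export/zones/myzone from A to B, or remount the shared disk on B)
    B# zonecfg -z myzone create -a /export/zones/myzone
    B# zoneadm -z myzone attach
    B# zoneadm -z myzone boot
    If the patch and package levels on B differ, plain attach will refuse; on newer Solaris 10 updates, zoneadm attach -u can update the zone to match the new host.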

  • Non-global zones on a SAN???

    Hi everyone, I have a question that's probably been asked before, and I'm sure many others are interested in knowing the answer.
    Is it possible to store non-global zone(s) on a SAN? The idea being that if the server hosting the non-global zone(s) dies, they can be brought up on another server that also has access to the same SAN. This is sort of what VMware can do. It would be great if Solaris 10 non-global zones could do it too.
    Stewart

    Yes, it is possible to do this. In fact, if you use Sun Cluster (now free), it can be set up so that the zones automatically start on another node within the cluster. Basically, any application that can run in a non-global zone can be clustered.
    This also helps greatly with resource balancing, as you can move zones between servers as needed. Note that the zone does have to shut down and start again, but that usually takes less than a minute.

  • Jumpstart  zones and applications

    Hello,
    Is it possible to write some rules to create zones and install applications into those zones when automating an install with Jumpstart?
    Alternatively, can you create a system image of several zones and install that zone image using Jumpstart?
    Basically, what I want to do is install Solaris 10, create 2 zones and install Oracle 10g into those zones.
    regards
    neville

    From within the zone, you can see what pool you're bound to by simply using the -q argument to poolbind(1M) with a valid pid, such as "poolbind -q $$". Alternatively, you can use the pooladm(1M) command with no arguments. Note that if you don't have pools active, this will result in a "Facility is not active" message, but otherwise you'll see the details about the pool this zone is bound to.
    From the global zone, you can see the actual pool the zone is currently bound to by doing something like "zlogin myzone 'poolbind -q $$'". And you can see which pool the zone will attempt to bind to the next time it reboots by using the "zonecfg -z myzone info pool" command.
    Does this help?
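    Putting those together (a sketch; the zone name "myzone" is taken from the examples above):
    myzone$ poolbind -q $$
    myzone$ pooladm
    global# zlogin myzone 'poolbind -q $$'
    global# zonecfg -z myzone info pool
    The first two run inside the zone; the last two, run from the global zone, show the zone's current binding and its next-boot binding respectively.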

  • DNS client in a non-global zone

    Hello,
    I want to configure only the non-global zone as a DNS client, with:
    /etc/resolv.conf
    /etc/defaultdomain
    /etc/nsswitch.conf
    Is this OK, or is this a system-wide (global) issue?
    -- Nick

    Yes. The /etc file system is private to each zone (in both the sparse and whole-root models), so each zone can have its own DNS settings (as well as other private things, like a different time zone and such).
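    A minimal sketch, run inside the non-global zone (the domain and server address are hypothetical):
    zone# echo "domain example.com" > /etc/resolv.conf
    zone# echo "nameserver 192.0.2.53" >> /etc/resolv.conf
    zone# echo "example.com" > /etc/defaultdomain
    zone# cp /etc/nsswitch.dns /etc/nsswitch.conf
    The stock /etc/nsswitch.dns template sets "hosts: files dns"; the global zone's copies of these files are left untouched.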
