Lucreate and non-global zones

Hi - I'm trying to get my head around Live Upgrade now that I've switched to ZFS on Solaris 10 for our test servers. The problem is that we have a number of non-global zones, and when I run the lucreate command I get a number of warnings:
lucreate -n CPU_2012-07
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <CPU_2012-07>.
Source boot environment is <10>.
Creating file systems on boot environment <CPU_2012-07>.
Populating file systems on boot environment <CPU_2012-07>.
Temporarily mounting zones in PBE <10>.
Analyzing zones.
WARNING: Directory </export/zones/tdukwxstestz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwxstestz01-CPU_2012-07>.
WARNING: Device <rpool/export/zones/tdukwxstestz01> is shared between BEs, remapping to <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
WARNING: Directory </export/zones/tdukwbprepz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwbprepz01-CPU_2012-07>.
WARNING: Device <rpool/export/zones/tdukwbprepz01> is shared between BEs, remapping to <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/export/zones/tdukwbprepz01> on <rpool/export/zones/tdukwbprepz01@CPU_2012-07>.
Creating clone for <rpool/export/zones/tdukwbprepz01@CPU_2012-07> on <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
Creating snapshot for <rpool/export/zones/tdukwxstestz01> on <rpool/export/zones/tdukwxstestz01@CPU_2012-07>.
Creating clone for <rpool/export/zones/tdukwxstestz01@CPU_2012-07> on <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@CPU_2012-07>.
Creating clone for <rpool/ROOT/10@CPU_2012-07> on <rpool/ROOT/CPU_2012-07>.
Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@CPU_2012-07>.
Creating clone for <rpool/ROOT/10/var@CPU_2012-07> on <rpool/ROOT/CPU_2012-07/var>.
Mounting ABE <CPU_2012-07>.
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <CPU_2012-07>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <10>.
Making boot environment <CPU_2012-07> bootable.
Population of boot environment <CPU_2012-07> successful.
Creation of boot environment <CPU_2012-07> successful.
So ALL my non-global zones live under /export/zones/<zonename> - what do all the WARNINGS mean?
I then applied the Oracle CPU, activated the ABE and shut down the server. When it came back up none of the zones would start, and this seems to be because all the zonepaths and references to the zones now have CPU_2012-07 appended. I can edit the zone XML files to fix this, but I'm sure that is not the recommended method and it's something I would prefer not to do.
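(For what it's worth, my understanding is that the supported way to change a zonepath is zonecfg against a halted, detached zone rather than editing the XML under /etc/zones directly; a rough, unverified sketch using one of my zone names:
zoneadm -z tdukwxstestz01 halt
zoneadm -z tdukwxstestz01 detach
zonecfg -z tdukwxstestz01 "set zonepath=/export/zones/tdukwxstestz01"
zoneadm -z tdukwxstestz01 attach
I haven't tried this here, so treat it as a sketch only.)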
So basically I think I have not set up my ZFS pools/datasets correctly to take into account my non-global zones and where I have created them.
My zfs list output now looks like this; unfortunately I don't have the output from before I started this work:
zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 91.2G 456G 106K /rpool
rpool/ROOT 9.06G 456G 31K legacy
rpool/ROOT/10 38.2M 456G 4.34G /.alt.10
rpool/ROOT/10/var 22.6M 24.0G 3.60G /.alt.10/var
rpool/ROOT/CPU_2012-07 9.02G 456G 4.34G /
rpool/ROOT/CPU_2012-07@CPU_2012-07 566M - 4.34G -
rpool/ROOT/CPU_2012-07/var 4.12G 456G 4.11G /var
rpool/ROOT/CPU_2012-07/var@CPU_2012-07 13.8M - 3.58G -
rpool/dump 2.00G 456G 2.00G -
rpool/export 5.94G 456G 35K /export
rpool/export/home 76.9M 23.9G 76.9M /export/home
rpool/export/zones 5.87G 456G 36K /export/zones
rpool/export/zones/tdukwbprepz01 21.7M 456G 323M /export/zones/tdukwbprepz01
rpool/export/zones/tdukwbprepz01-10 321M 31.7G 312M /export/zones/tdukwbprepz01-10
rpool/export/zones/tdukwbprepz01-10@CPU_2012-07 8.50M - 312M -
rpool/export/zones/tdukwxstestz01 29.6M 456G 5.49G /export/zones/tdukwxstestz01
rpool/export/zones/tdukwxstestz01-10 5.51G 26.5G 5.47G /export/zones/tdukwxstestz01-10
rpool/export/zones/tdukwxstestz01-10@CPU_2012-07 32.1M - 5.48G -
rpool/logs 8.23G 23.8G 8.23G /logs
rpool/swap 66.0G 458G 64.0G -
Any help would be greatly appreciated.
Thanks - Julian.

OK, so I've been tinkering with this. I'm not sure this is my exact problem, but a few people have reported issues with the following patch:
121430-xx
It gives the exact same WARNINGS when you try to create an ABE via lucreate and you have non-global zones. One of the suggestions was to go back to an earlier revision of the patch, and then someone said it was fixed in revision 71. So I installed the very latest revision, 121430-81, and now it fails with a different error. Fortunately this time I have a screenshot of the before and after:
BEFORE:
bash-3.2# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 build14 running /export/zones/build14 native shared
bash-3.2# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 70.0G 477G 106K /rpool
rpool/ROOT 1.98G 477G 31K legacy
rpool/ROOT/10 1.98G 477G 1.95G /
rpool/ROOT/10/var 28.8M 24.0G 28.8M /var
rpool/dump 2.00G 477G 2.00G -
rpool/export 36.5M 477G 33K /export
rpool/export/home 35K 24.0G 35K /export/home
rpool/export/zones 36.4M 477G 32K /export/zones
rpool/export/zones/build14 36.4M 32.0G 36.4M /export/zones/build14
rpool/logs 3.78M 32.0G 3.78M /logs
rpool/swap 66.0G 543G 16K -
bash-3.2# df -h |grep rpool
rpool/ROOT/10 547G 1.9G 477G 1% /
rpool/ROOT/10/var 24G 29M 24G 1% /var
rpool/export 547G 33K 477G 1% /export
rpool/export/home 24G 35K 24G 1% /export/home
rpool/export/zones 547G 32K 477G 1% /export/zones
rpool/export/zones/build14 32G 36M 32G 1% /export/zones/build14
rpool/logs 32G 3.8M 32G 1% /logs
rpool 547G 106K 477G 1% /rpool
bash-3.2# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
10 yes yes yes no -
bash-3.2# lucreate -n 10-CPU_2012_07
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <10-CPU_2012_07>.
Source boot environment is <10>.
Creating file systems on boot environment <10-CPU_2012_07>.
Populating file systems on boot environment <10-CPU_2012_07>.
Temporarily mounting zones in PBE <10>.
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@10-CPU_2012_07>.
Creating clone for <rpool/ROOT/10@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07>.
Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@10-CPU_2012_07>.
Creating clone for <rpool/ROOT/10/var@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07/var>.
Mounting ABE <10-CPU_2012_07>.
Generating file list.
Copying data from PBE <10> to ABE <10-CPU_2012_07>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <10-CPU_2012_07>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <10>.
Making boot environment <10-CPU_2012_07> bootable.
ERROR: Unable to mount zone <build14> in </.alt.tmp.b-0ob.mnt>.
zoneadm: zone 'build14': zone root /export/zones/build14/root already in use by zone build14
zoneadm: zone 'build14': call to zoneadmd failed
ERROR: Unable to mount non-global zones of ABE <10-CPU_2012_07>: cannot make ABE bootable.
ERROR: umount: /.alt.tmp.b-0ob.mnt/var/run busy
ERROR: cannot unmount </.alt.tmp.b-0ob.mnt/var/run>
ERROR: failed to unmount </.alt.tmp.b-0ob.mnt/var/run>
ERROR: cannot fully unmount boot environment - <1>: file systems remain mounted
ERROR: Unable to make boot environment <10-CPU_2012_07> bootable.
ERROR: Unable to populate file systems on boot environment <10-CPU_2012_07>.
Removing incomplete BE <10-CPU_2012_07>.
ERROR: Cannot make file systems for boot environment <10-CPU_2012_07>.
bash-3.2# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
10 yes yes yes no -
10-CPU_2012_07 no no no yes -
So the very latest Live Upgrade patch doesn't seem to have fixed this; I get even more errors now.
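For anyone trying to reproduce this, the installed revision of the Live Upgrade patch can be confirmed with something like:
showrev -p | grep 121430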
Again any help would be greatly appreciated.
Thanks - Julian.

Similar Messages

  • Lucreate not working with ZFS and non-global zones

    I replied to the thread Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • Zfs package difference in Global and Non-Global zones

    I have a T2000 hosting many zones. The global zone and all but one non-global zone have 3 ZFS packages installed (SUNWzfskr, SUNWzfsr, SUNWzfsu). Because this one non-global zone is missing the ZFS packages, kernel patch 120011-14 also didn't install on that single non-global zone.
    I am curious: can I install SUNWzfskr, SUNWzfsr and SUNWzfsu on the non-global zone that is missing the packages?
    Any ideas how to resolve the kernel patch discrepancy between the global and non-global zone?

    Patch 122640-05 installs the SUNWzfskr, SUNWzfsr and SUNWzfsu packages if they are not already installed on the system.
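    A quick way to confirm which zones are actually missing the packages before patching is pkginfo run via zlogin from the global zone; a rough sketch (the zone name is a placeholder):
    zlogin myzone pkginfo SUNWzfskr SUNWzfsr SUNWzfsu
    Running the same pkginfo command directly in the global zone gives the baseline to compare against.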

  • Route between global and non-global zones

    Hi Folks,
    I haven't been able to find an answer to this question searching the archives, so I'll try here. My global zone gets its IP (10.153.197.n) via DHCP, and I've had to use 192.168.1.n addresses for the non-global zones. Is there a simple route statement I can issue to allow communication between the global and non-global zones? I'm running Solaris 10 x86 03/2005.
    Thanks very much,
    -Adam vonNieda

    If you're only interested in passing traffic between the global zone and the non-global zones, just add a virtual interface to the global zone.
    For example, in the global zone:
    ifconfig ce0:4 plumb 192.168.1.x netmask + broadcast + up
    Then you will be able to pass traffic between the global and non-global zones.
    If you're looking for the global zone to proxy traffic between the non-global zones and the rest of the network, take a look at http://balance.sf.net
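    To make that virtual interface persist across reboots, the usual approach is a matching hostname file in the global zone; a rough sketch (interface name and address are examples only):
    echo "192.168.1.1 netmask 255.255.255.0 broadcast + up" > /etc/hostname.ce0:4
    The contents of /etc/hostname.ce0:4 are handed to ifconfig at boot, so it mirrors the command above.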

  • Running commands across global and non-global zones

    Other than using ssh and public-key access, is there a better way to run a command in both the global and non-global zones? I need to disable some services (svcadm disable ...) in both the global and non-global zones.
    Thanks,
    Roger S.

    You can run commands in the non-global zone with the "zlogin" command from the global zone.
    Running commands in a non-global zone from another non-global zone only works over the network (ssh or any other network-based method).
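    For a one-off like this, a small loop over zlogin from the global zone is usually enough; a rough sketch (the service FMRI is a placeholder, and the loop only reaches zones that are currently running):
    for z in `zoneadm list -p | cut -d: -f2 | grep -v '^global$'`; do
        zlogin $z svcadm disable svc:/application/example:default
    done
    svcadm disable svc:/application/example:default
    The last command covers the global zone itself.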

  • PHP in Solaris 10 and Non-Global Zones: Problem of performance?

    Hi friends
    We are seeing poor performance with applications developed in PHP on Solaris 10, in both non-global and global zones, while on the Intel platform (Xeon and Pentium) performance is very good. The difference between the two platforms is about 200% approx.: one second on Intel versus 9, 12 or 20 seconds on Solaris, depending on the model.
    Our tests were developed in:
    1. SF T2000 server Solaris 10 global zone
    2. SF T2000 server Solaris 10 non-global zone
    3. SF280R server Solaris 10 non-global zone
    4. V240 server with 1 GB memory, 1*US III-i 1.0 GHz and Solaris 9 (really this version for test and comparisons)
    5. V240 server with 8GB memory, 2*US III-i 1.5Ghz and Solaris 9 (really this version for test and comparisons too)
    Intel platforms were:
    1. Intel Pentium 4 2GHz 2GB memory, Linux Fedora and PHP 4.4.4
    2. Intel Xeon 2 core, 2.33GHz 2GB memory, Linux Fedora and PHP 4.4.3
    Versions of products are:
    1. Solaris 9 or Solaris 10
    2. PHP 4.4.7 downloaded from http://www.php.net/downloads.php
    3. Apache 2.0.59
    4. MySQL 4.1.15-log
    Our PHP configure and installation steps were:
    ./configure --prefix=/usr/local/php-4.4.7 \
    --with-pear \
    --with-openssl=/usr/local/ssl \
    --with-gettext \
    --with-ldap=/usr/local \
    --with-iconv \
    --enable-ftp \
    --with-dom \
    --with-mime-magic \
    --enable-mbstring \
    --with-zlib \
    --enable-track-vars \
    --enable-sigchild \
    --disable-ctype \
    --disable-overload \
    --disable-tokenizer \
    --disable-posix \
    --with-gd \
    --with-apxs2=/usr/local/apache2.0.53/bin/apxs \
    --with-mysql  \
    --with-pgsql \
    --with-oci8=/oracle/product/9.2.0 \
    --with-oracle=/oracle/product/9.2.0  \
    --with-png-dir=/usr/local \
    --with-zlib-dir=/usr/local \
    --with-freetype-dir=/usr/local \
    --with-jpeg-dir=/usr/local
    make
    make install
    Questions:
    Is there any problem with PHP on SunFire T2000 servers or 64-bit platforms?
    Is there any PHP flag that should be used to compile PHP as 64-bit or multithreaded?
    I look forward to any comments or suggestions about our problem with PHP compilation and performance on Solaris 10. Thanks a lot.
    Sergio.

    I presume you compiled PHP on the Sun server; was this done using gcc or the Sun ONE C compiler?
    If the latter, then you can also use the flag --enable-nonportable-atomics when you run configure.
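    If the Sun compiler is in use, the flag is simply appended to the configure run; a minimal, illustrative sketch only (CC=cc assumes the Sun compiler is found as cc; the prefix and apxs paths are the ones from the post above, and the rest of the original option list would be kept as-is):
    CC=cc ./configure --prefix=/usr/local/php-4.4.7 \
        --with-apxs2=/usr/local/apache2.0.53/bin/apxs \
        --enable-nonportable-atomics
    make
    make install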

  • Lsof and non-global zones

    Hi - I wonder if someone could help with an issue I'm trying to troubleshoot. I have a number of T2000 servers all running multiple zones, and at peak periods I'm seeing issues with a particular application accessing a plain-text log file. The server, although busy, is coping well and not particularly loaded. I've wondered if I'm hitting some sort of open file limit on the server but am unsure how to check this. I can see that ulimit -n reports 256.
    I've also been trying to use lsof to see what open files an application has, but this doesn't appear to work when logged into the non-global zone; all I get is:
    lsof -p 5508
    lsof-5.10: can't read namelist from /dev/ksyms
    If I run the same command in the global zone I can see various output about the zone, but none of it relates to the application's log file, which is currently being written to.
    Does anyone have any ideas on how to do this or what else I could check?
    Thanks - Julian.

    For security/isolation reasons, /dev/ksyms is not presented to zones. You must run your lsof commands at the global zone. Sorry.
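    A rough sketch of the same check run from the global zone instead (the PID is the one from the post above):
    lsof -p 5508
    pfiles 5508
    pfiles ships with Solaris and lists a process's open file descriptors, so it works even where lsof can't read /dev/ksyms.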

  • NFS and non global zones

    Hi,
    I've read numerous threads about mounting NFS shares in non-global zones but have still not been able to resolve my issue.
    I have 5 T3-2s which are being used as standalone SAP servers running Solaris 10u9 and numerous sparse non-global zones. Basically I have a 1 TB HDS LUN presented to 1 T3-2 and have NFS shared this out as /stage to the remaining 4 global zones, which works as expected.
    However, I am unable to mount the shared NFS filesystem in the non-global zones.
    When I try to mount the NFS share from the non-global zone itself I receive RPC errors. I have also tried configuring the non-global zone with the NFS mount (from the global zone) as lofs, but the zone won't boot. I've also tried manually mounting the NFS mount from the global zone, which looks like it works, but when I do a df in the non-global zone I receive stat errors.
    I've even tried linking the NFS share on the global zone to the non-global zone directory, but that produces a strange linkage when the zone is booted.
    Numerous threads say this is not supported, but I can't believe that Oracle, after ~6/7 years of zones and numerous threads on the subject, wouldn't have resolved this issue.
    I could easily mount the storage locally and lofs it into the non-global zone, but unfortunately I don't have the storage capacity available, which is why I thought NFS mounting in the non-global zone would work!
    Any suggestions would be gratefully received!
    Thanks.

    If you are trying to mount an NFS file system in a non-global zone from the global zone of the same server, use lofs instead.
    You can mount the same file system into all non-global zones using lofs, and all non-global zones will have read/write access to it.
    If it is the global zone of some other server, then you can use NFS. But before that, check the way it is exported on the NFS server and whether the client from which you are trying to mount it has permission to do so.
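    For the same-server case, a rough sketch of the lofs wiring in zonecfg (the zone name is a placeholder and /stage is the path already NFS-mounted in the global zone, as described above):
    zonecfg -z myzone
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/stage
    zonecfg:myzone:fs> set special=/stage
    zonecfg:myzone:fs> set type=lofs
    zonecfg:myzone:fs> end
    zonecfg:myzone> exit
    The zone picks up the new fs resource on its next reboot; a one-off mount can also be done by hand from the global zone with mount -F lofs /stage <zonepath>/root/stage.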

  • SMCnsnmp in shared-ip non-global zone errors due to duplicate I/F index

    Hi all,
    I have Solaris 10 zones using the shared-ip model, with Net-SNMP installed in the global and non-global zones.
    snmpd starts normally in the global zone, but fails to start in the non-global zones, reporting this error ...
    $ sudo tail /zones/roots/uxNNNz4/root/var/log/snmpd.log
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    error on subcontainer 'interface container' insert (-1)
    This error was reported on OpenSolaris some time ago, reference ...
    (http://prefetch.net/blog/index.php/2009/05/10/net-snmp-should-now-work-in-an-opensolaris-non-global-zone) ...
    Net-snmp does not work in an opensolaris non-global zone:
    +"error on subcontainer ‘interface container’ insert (-1)"+
    These errors are caused by opensolaris bug #6640675, which causes all interfaces to be assigned an index value of 0 (this leads net-snmp to think there are duplicate interfaces). The fix was just integrated into Nevada, so hopefully the code will be back ported to Solaris 10.
    Example ifconfig in global zone (note index 2 for global and shared-ip VIPs)...
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    zone ux560z1
    inet 127.0.0.1 netmask ff000000
    lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    zone ux560z2
    inet 127.0.0.1 netmask ff000000
    lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    zone ux560z3
    inet 127.0.0.1 netmask ff000000
    lo0:4: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    zone ux560z4
    inet 127.0.0.1 netmask ff000000
    nxge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.25.4.2 netmask fffffc00 broadcast 172.25.7.255
    ether 0:21:28:ba:9e:e4
    nxge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    zone ux560z1
    inet 172.25.4.3 netmask fffffc00 broadcast 172.25.7.255
    nxge0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    zone ux560z2
    inet 172.25.4.4 netmask fffffc00 broadcast 172.25.7.255
    nxge0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    zone ux560z3
    inet 172.25.4.5 netmask fffffc00 broadcast 172.25.7.255
    nxge0:4: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    zone ux560z4
    inet 172.25.4.6 netmask fffffc00 broadcast 172.25.7.255
    QUESTIONS:
    1. Has the bug been reported for Solaris 10?
    2. Is a Solaris 10 patch available?
    3. Is there a workaround or other ideas to get SNMP working in a Solaris shared-IP zone?
    4. Exclusive-IP should fix it, but does that require a dedicated NIC per zone?
    Thank You,
    KW

    The CR you cite: 6640675
    was fixed in S10 over a year ago. You'll need a contract to get the patch.

  • LDOMs or Non Global Zones

    I am thinking of migrating many machines to our as-yet-unused T5-2 boxes. One of the consultants suggested using LDOMs. However, there will be at least 4 different Oracle DBs running in the boxes, and I am not sure which is the best way to go. Also, even after reading the documents, I could not figure out whether you can have LDOMs (VM for SPARC) and non-global zones side by side on Solaris 11.2. Also, being a 16-core machine, I was told I would lose 2 cores by using LDOMs. Views would be helpful for my understanding.
    Regards
    SC-BBK

    Hello
    I suggest you read this doc too; there are plenty of ways to do a setup with LDOMs, and you can achieve really good redundancy for net and vdisk.
    http://www.oracle.com/technetwork/server-storage/vm/ovmsparc-best-practices-2334546.pdf
    The primary domain can run Solaris 11.2 and a guest can run Solaris 10 without any problem, but the Solaris 10 release has to be supported on that T5.
    In Solaris 11.2 you can also have Solaris 10 non-global zones.
    It is more than VMware... in any guest LDOM running Solaris 11.2 you can have non-global zones for Solaris 10, Solaris 11.2 and kernel zones. You can assign hardware to each domain dynamically if you want (CPU, memory, disks); it is all explained in the admin manual: http://docs.oracle.com/cd/E38405_01/html/E38406/index.html
    Regards
    Eze

  • Can I upgrade patches to non-global zones separate from a global zone?

    Normally, one would assume that you want to keep global and non-global zones in sync. However, at the software company I work for we could potentially want to test on different patch levels of Solaris 10 simultaneously. I can't bring down the global zone and change its patch set every time I would need this. My only option would be to have separate hardware and a separate global zone for each patch set, which kinda defeats the purpose IMHO.
    Anybody out there know if this is possible?

    Whole-root zones allow you to have different levels of an application installed in different zones.
    But they don't really provide a good mechanism for testing different patch levels of Solaris itself.
    Since there's really only one copy of Solaris running, it's just providing different views of itself.
    If you want to actually test Solaris patch levels you need to do "real" virtualisation rather than the para-virtualisation provided by zones.
    So either something like LDOMs on SPARC hardware, or VMware or equivalent on x86.

  • Is it possible to patch Global Zone and only specific Non-Global Zones?

    Hi Champs,
    Is it possible to patch the global zone and only specific non-global zones? The idea is to patch only the DEV zones on the system and test the applications, and then patch only the STG zones on the same server.
    Not sure if it is possible, but just throwing the question out there...
    Cheers,
    Nitin

    M10vir wrote:
    Yes, if you have branded (non-sparse) zone!

    Branded zones and sparse zones don't have the relation that you imply. In Solaris 10, native zones can be sparse or whole-root (non-sparse, as you say). Zones that are not native zones are branded zones. Branded zones on Solaris 10 include Solaris Legacy Containers, previously known as Solaris 8 Containers and Solaris 9 Containers. That add-on product allows you to run Solaris 8 and Solaris 9 application environments under a thin layer of virtualization provided by the brands framework. solaris8 and solaris9 branded zones can be patched independently of each other and of the global zone.
    Solaris 11 has no "native zones" - all zones use the brands framework. The "solaris" brand does no emulation and in that respect is very similar to native zones on Solaris 10. Solaris 11 also provides Solaris 10 Zones via the solaris10 brand. This allows zones or the global zone from a Solaris 10 system to be transferred to a Solaris 11 system and run as solaris10 zones. When running on Solaris 11, solaris10 zones can each be patched independently from each other and the Solaris 11 global zone. Technically, Solaris 11 doesn't have patches - it just has newer versions of packages to which the system is updated.

  • To break out of a non-global zone and become root user in the global zone

    Hi folks
    "to break out of a non-global zone and become root user in the global zone through a kernel bug exploit"
    Is this possible, and does Sun already have a fix/workaround/patch for that?
    Cheers

    Is it possible there's a bug in the kernel? Sure.
    Someone would need to find and identify such a bug before it could be fixed. I've not heard of the discovery of a bug like this. You could check the bug database at www.opensolaris.org.
    Darren

  • Live upgrade - solaris 8/07 (U4) , with non-global zones and SC 3.2

    Dears,
    I need to use Live Upgrade with SC 3.2 and non-global zones to go from Solaris 10 U4 to Solaris 10 10/09 (the latest release) and to update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents, but couldn't find one complete document covering the whole process.
    I know that upgrading Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Non-Global Zones and startup scripts

    Created a non-global zone on a Solaris 10 box.
    Boots up ok and I can login with zlogin.
    It doesn't seem to run any of the scripts in /etc/rc2.d or /etc/rc3.d
    I know Solaris 10 uses the "Service Management Facility" for most services now,
    but it should still run legacy scripts in /etc/init.d, right?
    Also I can't get sshd to start on the non-global zone.
    # svcs -a |grep ssh2
    offline 11:44:58 svc:/network/ssh:default
    # svcadm enable -t svc:/network/ssh:default
    # svcs -a |grep ssh2
    offline 11:44:58 svc:/network/ssh:default
    Anyone got any ideas?
    Michael

    These services are offline in the non-global zone, which is why none of the
    rc2.d or rc3.d scripts are being run:
    offline Dec_12 svc:/milestone/multi-user-server:default
    offline Dec_12 svc:/milestone/multi-user:default
    Any idea how to enable these, and why they are offline?
    Michael
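    In case it helps anyone landing here, a rough sketch of the usual way to dig into why those milestones are offline and to bring them up (FMRIs as shown in the output above):
    svcs -x
    svcs -d svc:/milestone/multi-user:default
    svcadm enable svc:/milestone/multi-user:default svc:/milestone/multi-user-server:default
    svcs -x explains which dependency is holding a service offline, and svcs -d lists the milestone's dependencies so the real culprit can be enabled or cleared.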
