SMCnsnmp in shared-ip non-global zone errors due to duplicate I/F index

Hi all,
I have Solaris 10 zones using the shared-ip model, with Net-SNMP installed in the global and non-global zones.
snmpd starts normally in the global zone, but fails to start in the non-global zones, reporting this error ...
$ sudo tail /zones/roots/uxNNNz4/root/var/log/snmpd.log
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
error on subcontainer 'interface container' insert (-1)
This error was reported on OpenSolaris some time ago, reference ...
(http://prefetch.net/blog/index.php/2009/05/10/net-snmp-should-now-work-in-an-opensolaris-non-global-zone) ...
Net-snmp does not work in an opensolaris non-global zone:
+"error on subcontainer ‘interface container’ insert (-1)"+
These errors are caused by opensolaris bug #6640675, which causes all interfaces to be assigned an index value of 0 (this leads net-snmp to think there are duplicate interfaces). The fix was just integrated into Nevada, so hopefully the code will be back ported to Solaris 10.
Example ifconfig in global zone (note index 2 for global and shared-ip VIPs)...
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone ux560z1
inet 127.0.0.1 netmask ff000000
lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone ux560z2
inet 127.0.0.1 netmask ff000000
lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone ux560z3
inet 127.0.0.1 netmask ff000000
lo0:4: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
zone ux560z4
inet 127.0.0.1 netmask ff000000
nxge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 172.25.4.2 netmask fffffc00 broadcast 172.25.7.255
ether 0:21:28:ba:9e:e4
nxge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone ux560z1
inet 172.25.4.3 netmask fffffc00 broadcast 172.25.7.255
nxge0:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone ux560z2
inet 172.25.4.4 netmask fffffc00 broadcast 172.25.7.255
nxge0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone ux560z3
inet 172.25.4.5 netmask fffffc00 broadcast 172.25.7.255
nxge0:4: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
zone ux560z4
inet 172.25.4.6 netmask fffffc00 broadcast 172.25.7.255
QUESTIONS:
1. Has the bug been reported for Solaris 10?
2. Is a Solaris 10 patch available?
3. Is there a workaround or other way to get SNMP working in a Solaris shared-IP zone?
4. Exclusive-IP should fix it, but does that require a dedicated NIC per zone?
Thank You,
KW

The CR you cite, 6640675, was fixed in S10 over a year ago. You'll need a support contract to get the patch.
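A quick, hedged way to confirm the symptom the blog post describes (every interface inside the zone reporting the same ifIndex) is to compare what the non-global zone sees with the global zone's own view; the zone name below is taken from the snmpd.log path above:
# run from the global zone; uxNNNz4 is the zone from the log path above
$ sudo zlogin uxNNNz4 /usr/sbin/ifconfig -a | grep index
# compare with the global zone's own view
$ /usr/sbin/ifconfig -a | grep index
# if every interface inside the zone reports one and the same index value (the blog
# post says 0), snmpd's interface-container inserts fail exactly as logged above,
# and the fix is the Solaris 10 kernel patch that delivers CR 6640675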

Similar Messages

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it.
    You were hijacking that other person's discussion from 2012 to ask your own new post.
    You have now properly asked your question and people can pay attention to you and not confuse you with that other person.

  • Sharing software package with non-global zone

    I've installed Solaris Studio 12.3 to a global zone on Solaris 11 following instructions from http://pkg-register.oracle.com which I want to make available to the non-global zones.  Do I need to install it independently for all zones, or can I export/share it?

    All files are normally installed in /opt/solarisstudio12.3. You can share that directory, but installing it independently in each zone is recommended if you want different versions in different zones.
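    If you do want a single shared install, here is a minimal sketch of a read-only loopback mount of the global zone's /opt/solarisstudio12.3 into a non-global zone (the zone name 'devzone' is only an example):
    # run in the global zone; the mount appears when the zone is next booted
    zonecfg -z devzone
    zonecfg:devzone> add fs
    zonecfg:devzone:fs> set dir=/opt/solarisstudio12.3
    zonecfg:devzone:fs> set special=/opt/solarisstudio12.3
    zonecfg:devzone:fs> set type=lofs
    zonecfg:devzone:fs> add options ro
    zonecfg:devzone:fs> end
    zonecfg:devzone> commit
    zonecfg:devzone> exit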

  • Sharing mounts on non global zone in Solaris 10

    Dear All,
    We have a SAP Development server installed in a non-global zone on a Solaris 10 machine. We want to share /usr/sap/trans [in DEV's non-global zone] with another machine's non-global zone, which will host the Quality server. I'm sure it can be done; however, since I am not the OS admin I have been unable to find out how, in spite of reading some posts on the internet.
    Can anyone please help me out here.
    regards, Sean.

    We did it.
    regards, Sean.
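    Since the thread doesn't record how it was done, here is a hedged sketch of one common approach: a non-global zone cannot act as an NFS server in Solaris 10, so the DEV host's global zone exports the trans directory that lives under the DEV zone's root, and the Quality host's non-global zone mounts it. The zonepath and hostnames below are assumptions:
    # on the DEV host's global zone (zonepath /zones/DEV assumed)
    share -F nfs -o rw=qas-host /zones/DEV/root/usr/sap/trans
    # on the Quality host, from inside its non-global zone
    mount -F nfs dev-host:/zones/DEV/root/usr/sap/trans /usr/sap/trans
    # add matching lines to /etc/dfs/dfstab and the zone's /etc/vfstab to make it permanent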

  • Ssh takes me to the global zone instead of the non-global zone

    I have set up my first Solaris 10 server with a new zone. The ce device is set up on the zone as well as the global zone.
    Output from ifconfig on the global zone:
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.16.1.217 netmask ffffff00 broadcast 172.16.1.255
    ether 0:3:ba:f2:a1:54
    ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
    inet 172.16.1.199 netmask ffffff00 broadcast 172.16.1.255
    ether 0:3:ba:f2:a1:54
    Output from the non-global zone:
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.16.1.199 netmask ffff0000 broadcast 172.16.255.255
    ether 0:3:ba:f2:a1:54
    When I ssh to the non-global zone's address, I end up in the global zone. Can I ssh straight into the non-global zone? Am I missing something in the zone setup that keeps me from being able to ssh into the non-global zone?
    Any help is appreciated. I have been racking my brain on this for several hours.
    Thanks ahead of time.

    TAdriver wrote:
    The one thing I have found in the documentation is that if you set the network as an exclusive IP, you can only assign the physical name using zonecfg. You can't set the IP address or the default router. In fact, if you try to set either of those, you get an error saying you can't set those using an exclusive IP type.
    Correct. When doing a shared-IP zone, the zone has no privileges to do IP-level things. So the global zone (via the zone configuration) creates the virtual interface and sets the IP address. Then when the zone is booted, the interface is given to it.
    With an exclusive-IP zone, the zone can do all this work itself. From its perspective, it's handed an interface like a regular machine. So the IP settings are done within the zone (/etc/hosts, /etc/hostname.XXX, /etc/netmasks).
    Darren
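    To make that concrete, here is a hedged zonecfg sketch of the two models, reusing the ce1 interface and address from the output above ('myzone' is only an example name):
    # shared-IP: the global zone owns the IP configuration
    zonecfg -z myzone
    zonecfg:myzone> set ip-type=shared
    zonecfg:myzone> add net
    zonecfg:myzone:net> set physical=ce1
    zonecfg:myzone:net> set address=172.16.1.199/24
    zonecfg:myzone:net> end

    # exclusive-IP: only the NIC is named here; the zone sets its own address,
    # netmask and router in /etc/hostname.ce1, /etc/netmasks and /etc/defaultrouter
    zonecfg:myzone> set ip-type=exclusive
    zonecfg:myzone> add net
    zonecfg:myzone:net> set physical=ce1
    zonecfg:myzone:net> end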

  • Failing to install pkg on non-global zone

    (root)@syslog1:~# pkgadd -d . SUNWant
    Processing package instance <SUNWant> from </home/iqbala>
    Jakarta ANT(sparc) 11.10.0,REV=2005.01.08.05.16
    WARNING: Stale lock installed for pkgrm, pkg SUNWaspell quit in remove-initial state.
    Removing lock.
    Using </> as the package base directory.
    ## Processing package information.
    ERROR: Cannot allocate memory for package object array.
    pkgadd: ERROR: memory allocation failure
    pkgadd: ERROR: unable to process pkgmap
    Installation of <SUNWant> failed (internal error).
    No changes were made to the system.
    (root)@syslog1:~#
    (root)@syslog1:~# zonename
    syslog
    This non-global zone is capped to 1 GB physical memory out of the 2 GB total on the T1000.
    (root)@syslog-global:~# uname -a
    SunOS syslog-global 5.10 Generic_137137-09 sun4v sparc SUNW,Sun-Fire-T1000
    (root)@syslog-global:~# zoneadm list
    global
    syslog
    (root)@syslog-global:~# zonename
    global
    (root)@syslog-global:~# zonecfg -z syslog info
    zonename: syslog
    zonepath: /syslog
    brand: native
    autoboot: true
    bootargs: -m verbose
    pool:
    limitpriv: default,sys_time
    scheduling-class: FSS
    ip-type: shared
    inherit-pkg-dir:
         dir: /lib
    inherit-pkg-dir:
         dir: /platform
    inherit-pkg-dir:
         dir: /sbin
    inherit-pkg-dir:
         dir: /usr
    fs:
         dir: /var/logs
         special: /var/logs
         raw not specified
         type: lofs
         options: []
    fs:
         dir: /usr/local
         special: /syslog-local/usr/local
         raw not specified
         type: lofs
         options: []
    net:
         address: 192.168.0.114
         physical: aggr1
         defrouter: 192.168.0.1
    dedicated-cpu:
         ncpus: 1-8
         importance: 10
    capped-memory:
         physical: 1G
         [swap: 512M]
    attr:
         name: comment
         type: string
         value: "syslog server"
    rctl:
         name: zone.max-swap
         value: (priv=privileged,limit=536870912,action=deny)
    (root)@syslog-global:~# prstat -Z
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    13118 root 7184K 5952K sleep 1 0 52:00:54 0.5% nco_p_syslog/10
    11730 root 162M 123M sleep 59 0 38:51:35 0.1% splunkd/22
    7324 root 12M 8280K sleep 59 0 0:58:06 0.0% syslogd/25
    266 root 97M 24M sleep 49 0 31:45:02 0.0% poold/8
    209 daemon 8104K 3080K sleep 59 0 24:39:56 0.0% rcapd/1
    29553 root 2496K 2024K cpu4 59 5 0:00:00 0.0% splunk-optimize/1
    21578 root 38M 36M sleep 59 0 0:01:10 0.0% puppetd/2
    29554 root 6088K 3712K cpu0 49 0 0:00:00 0.0% prstat/1
    24244 root 5760K 3104K sleep 49 0 0:00:00 0.0% bash/1
    1024 noaccess 171M 96M sleep 59 0 8:41:32 0.0% java/18
    27771 noaccess 189M 100M sleep 1 0 4:44:36 0.0% java/18
    274 daemon 3192K 496K sleep 59 0 0:00:00 0.0% statd/1
    279 daemon 2816K 576K sleep 60 -20 0:00:00 0.0% nfs4cbd/2
    326 root 2304K 40K sleep 59 0 0:00:00 0.0% cimomboot/1
    151 root 2576K 344K sleep 59 0 0:00:00 0.0% drd/2
    ZONEID NPROC SWAP RSS MEMORY TIME CPU ZONE
    3 47 465M 513M 25% 99:54:00 0.7% syslog
    0 42 391M 466M 23% 71:04:39 0.1% global
    Total: 89 processes, 386 lwps, load averages: 0.21, 0.26, 0.26
    Am I hitting a bug?

    If your pkg wants to install into /usr or another inherit-pkg-dir, it can't, because those directories are shared read-only.
    Verify where the pkg copies its files.
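    Two hedged checks that may help pin this down: whether the package delivers into one of the inherited (read-only) directories listed in the zone config above, and whether the zone's 512 MB swap cap is what is making pkgadd's allocation fail:
    # delivery paths are in the package's pkgmap (field 4); the package was unpacked
    # under /home/iqbala, and /lib /platform /sbin /usr are inherited read-only here
    awk '{print $4}' /home/iqbala/SUNWant/pkgmap | egrep '^/?(usr|lib|sbin|platform)/' | head
    # inside the syslog zone: how much of the capped swap (zone.max-swap = 512M) is left?
    swap -s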

  • SFTP chroot from non-global zone to zfs pool

    Hi,
    I am unable to create an SFTP chroot inside a zone to a shared folder on the global zone.
    Inside the global zone:
    I have created a zfs pool (rpool/data) and then mounted it to /data.
    I then created some shared folders: /data/sftp/ipl/import and /data/sftp/ipl/export
    I then created a non-global zone and added a file system that loops back to /data.
    Inside the zone:
    I then did the usual stuff to create a chroot sftp user, similar to: http://nixinfra.blogspot.com.au/2012/12/openssh-chroot-sftp-setup-in-linux.html
    I modified the /etc/ssh/sshd_config file and hard-wired the ChrootDirectory to /data/sftp/ipl.
    When I attempt to sftp into the zone, an error message is displayed in the zone -> fatal: bad ownership or modes for chroot directory /data/
    Multiple web sites warn that folder ownership and access privileges are important. However, issuing chown -R root:iplgroup /data made no difference. Perhaps it is something to do with the fact the folders were created in the global zone?
    If I create a simple shared folder inside the zone it works, e.g. /data3/ftp/ipl......ChrootDirectory => /data3/ftp/ipl
    If I use the users home directory it works. eg /export/home/sftpuser......ChrootDirectory => %h
    FYI. The reason for having a ZFS shared folder is to allow separate SFTP and FTP zones and a common/shared data repository for FTP and SFTP exchanges with remote systems. e.g. One remote client pushes data to the FTP server. A second remote client pulls the data via SFTP. Having separate zones increases security?
    Any help would be appreciated to solve this issue.
    Regards John

    sanjaykumarfromsymantec wrote:
    Hi,
    I want to do IPC between zones (communication between processes running in two different zones). So what are the different techniques that can be used? I am not interested in TCP/IP (AF_INET) sockets.
    Zones are designed to prevent most visibility between non-global zones and other zones. So network communication (like you might use between two physical machines) is the most common method.
    You could mount a global zone filesystem into multiple non-global zones (via lofs) and have your programs push data there. But you'll probably have to poll for updates. I'm not certain that's easier or better than network communication.
    Darren
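    The pasted reply above answers a different question; for the actual sshd error, the usual cause of "bad ownership or modes for chroot directory" is that sshd insists every directory component of ChrootDirectory be owned by root and not writable by group or other, including /data itself, even though it was created in the global zone. A hedged sketch, run inside the zone, using the paths from the post above:
    chown root:root /data /data/sftp /data/sftp/ipl
    chmod 755 /data /data/sftp /data/sftp/ipl
    # only the leaf directories the sftp user writes to should belong to iplgroup
    chown root:iplgroup /data/sftp/ipl/import /data/sftp/ipl/export
    chmod 775 /data/sftp/ipl/import /data/sftp/ipl/export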

  • Lucreate and non-global zones

    Hi - I'm trying to get my head around Live Upgrades now that I've switched to ZFS on Solaris 10 for our test servers. The problem I have is we have a number of non-global zones and when I ran the lucreate command I get a number of warnings:
    lucreate -n CPU_2012-07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <CPU_2012-07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <CPU_2012-07>.
    Populating file systems on boot environment <CPU_2012-07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    WARNING: Directory </export/zones/tdukwxstestz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwxstestz01> is shared between BEs, remapping to <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    WARNING: Directory </export/zones/tdukwbprepz01> zone <global> lies on a filesystem shared between BEs, remapping path to </export/zones/tdukwbprepz01-CPU_2012-07>.
    WARNING: Device <rpool/export/zones/tdukwbprepz01> is shared between BEs, remapping to <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/export/zones/tdukwbprepz01> on <rpool/export/zones/tdukwbprepz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwbprepz01@CPU_2012-07> on <rpool/export/zones/tdukwbprepz01-CPU_2012-07>.
    Creating snapshot for <rpool/export/zones/tdukwxstestz01> on <rpool/export/zones/tdukwxstestz01@CPU_2012-07>.
    Creating clone for <rpool/export/zones/tdukwxstestz01@CPU_2012-07> on <rpool/export/zones/tdukwxstestz01-CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10@CPU_2012-07> on <rpool/ROOT/CPU_2012-07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@CPU_2012-07>.
    Creating clone for <rpool/ROOT/10/var@CPU_2012-07> on <rpool/ROOT/CPU_2012-07/var>.
    Mounting ABE <CPU_2012-07>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <CPU_2012-07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <CPU_2012-07> bootable.
    Population of boot environment <CPU_2012-07> successful.
    Creation of boot environment <CPU_2012-07> successful.
    So ALL my non-global zones live under /export/zones/<zonename> - what do all the WARNINGS mean?
    I then applied the Oracle CPU, activated the ABE and shut down the server. When it came back up, none of the zones would start, and this seems to be because all the zonepaths and references to the zones are now labelled with CPU_2012-07 on the end. I can edit the zone xml files to fix this, but I am sure this is not the recommended method and it is something I would prefer not to do.
    So basically I think I have not set my ZFS resource pools up correctly to take into account my non-global zones and where I have created them.
    My zfs list output looks like this now, unfortunately I don't have the output prior to me starting this work:
    zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 91.2G 456G 106K /rpool
    rpool/ROOT 9.06G 456G 31K legacy
    rpool/ROOT/10 38.2M 456G 4.34G /.alt.10
    rpool/ROOT/10/var 22.6M 24.0G 3.60G /.alt.10/var
    rpool/ROOT/CPU_2012-07 9.02G 456G 4.34G /
    rpool/ROOT/CPU_2012-07@CPU_2012-07 566M - 4.34G -
    rpool/ROOT/CPU_2012-07/var 4.12G 456G 4.11G /var
    rpool/ROOT/CPU_2012-07/var@CPU_2012-07 13.8M - 3.58G -
    rpool/dump 2.00G 456G 2.00G -
    rpool/export 5.94G 456G 35K /export
    rpool/export/home 76.9M 23.9G 76.9M /export/home
    rpool/export/zones 5.87G 456G 36K /export/zones
    rpool/export/zones/tdukwbprepz01 21.7M 456G 323M /export/zones/tdukwbprepz01
    rpool/export/zones/tdukwbprepz01-10 321M 31.7G 312M /export/zones/tdukwbprepz01-10
    rpool/export/zones/tdukwbprepz01-10@CPU_2012-07 8.50M - 312M -
    rpool/export/zones/tdukwxstestz01 29.6M 456G 5.49G /export/zones/tdukwxstestz01
    rpool/export/zones/tdukwxstestz01-10 5.51G 26.5G 5.47G /export/zones/tdukwxstestz01-10
    rpool/export/zones/tdukwxstestz01-10@CPU_2012-07 32.1M - 5.48G -
    rpool/logs 8.23G 23.8G 8.23G /logs
    rpool/swap 66.0G 458G 64.0G -
    Any help would be greatly appreciated.
    Thanks - Julian.

    OK, so I've been tinkering with this. I'm not sure this is my exact problem, but a few people have reported issues with the following patch:
    121430-xx
    in that it gives the exact same WARNINGS when trying to create an ABE via lucreate when you have non-global zones. One of the suggestions was to go back to an earlier version of this patch, and then someone said it was fixed in revision 71 of the patch. So I installed the very latest version, 121430-81, and now it fails with a different error. Fortunately this time I have a screen shot of the before and after:
    BEFORE:
    bash-3.2# zoneadm list -cv
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    1 build14 running /export/zones/build14 native shared
    bash-3.2# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 70.0G 477G 106K /rpool
    rpool/ROOT 1.98G 477G 31K legacy
    rpool/ROOT/10 1.98G 477G 1.95G /
    rpool/ROOT/10/var 28.8M 24.0G 28.8M /var
    rpool/dump 2.00G 477G 2.00G -
    rpool/export 36.5M 477G 33K /export
    rpool/export/home 35K 24.0G 35K /export/home
    rpool/export/zones 36.4M 477G 32K /export/zones
    rpool/export/zones/build14 36.4M 32.0G 36.4M /export/zones/build14
    rpool/logs 3.78M 32.0G 3.78M /logs
    rpool/swap 66.0G 543G 16K -
    bash-3.2# df -h |grep rpool
    rpool/ROOT/10 547G 1.9G 477G 1% /
    rpool/ROOT/10/var 24G 29M 24G 1% /var
    rpool/export 547G 33K 477G 1% /export
    rpool/export/home 24G 35K 24G 1% /export/home
    rpool/export/zones 547G 32K 477G 1% /export/zones
    rpool/export/zones/build14 32G 36M 32G 1% /export/zones/build14
    rpool/logs 32G 3.8M 32G 1% /logs
    rpool 547G 106K 477G 1% /rpool
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    bash-3.2# lucreate -n 10-CPU_2012_07
    Analyzing system configuration.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <10-CPU_2012_07>.
    Source boot environment is <10>.
    Creating file systems on boot environment <10-CPU_2012_07>.
    Populating file systems on boot environment <10-CPU_2012_07>.
    Temporarily mounting zones in PBE <10>.
    Analyzing zones.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/10> on <rpool/ROOT/10@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07>.
    Creating snapshot for <rpool/ROOT/10/var> on <rpool/ROOT/10/var@10-CPU_2012_07>.
    Creating clone for <rpool/ROOT/10/var@10-CPU_2012_07> on <rpool/ROOT/10-CPU_2012_07/var>.
    Mounting ABE <10-CPU_2012_07>.
    Generating file list.
    Copying data from PBE <10> to ABE <10-CPU_2012_07>.
    100% of filenames transferred
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <10-CPU_2012_07>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <10>.
    Making boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to mount zone <build14> in </.alt.tmp.b-0ob.mnt>.
    zoneadm: zone 'build14': zone root /export/zones/build14/root already in use by zone build14
    zoneadm: zone 'build14': call to zoneadmd failed
    ERROR: Unable to mount non-global zones of ABE <10-CPU_2012_07>: cannot make ABE bootable.
    ERROR: umount: /.alt.tmp.b-0ob.mnt/var/run busy
    ERROR: cannot unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: failed to unmount </.alt.tmp.b-0ob.mnt/var/run>
    ERROR: cannot fully unmount boot environment - <1>: file systems remain mounted
    ERROR: Unable to make boot environment <10-CPU_2012_07> bootable.
    ERROR: Unable to populate file systems on boot environment <10-CPU_2012_07>.
    Removing incomplete BE <10-CPU_2012_07>.
    ERROR: Cannot make file systems for boot environment <10-CPU_2012_07>.
    bash-3.2# lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    10 yes yes yes no -
    10-CPU_2012_07 no no no yes -
    So the very latest Live Upgrade patch doesn't seem to have fixed this; I get even more errors now.
    Again any help would be greatly appreciated.
    Thanks - Julian.
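    For what it's worth, two hedged checks before retrying: confirm which revision of the Live Upgrade patch discussed above is actually installed, and clean up the half-built BE that lucreate left behind:
    # which rev of the LU patch is on the system?
    showrev -p | grep 121430
    # the failed BE shows up in lustatus as incomplete and deletable
    ludelete 10-CPU_2012_07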

  • Non-global zones and unix sockets

    Hello, I have a problem with local zones and unix socket sharing. I've created a directory in the global zone, e.g. /zones/shared, and added it to the zones via 'add fs, type=lofs'. In one zone I'm putting the mysql socket in it and I want the other local zones to be able to use it. Is it possible to share a socket between zones?
    After all my experiments I'm always getting 'can't connect to mysql ... (146)'; 146 is the 'connection refused' error.

    These services are offline in the non-global zone, which is why none of the rc2.d or rc3.d scripts are being run:
    offline Dec_12 svc:/milestone/multi-user-server:default
    offline Dec_12 svc:/milestone/multi-user:default
    Any idea how to enable these, and why they are offline?
    Michael
    Created a non-global zone on a Solaris 10 box.
    Boots up ok and I can login with zlogin.
    It doesn't seem to run any of the scripts in /etc/rc2.d or /etc/rc3.d.
    I know Solaris 10 uses the "Service Management Facility" for most services now, but shouldn't it still run legacy scripts in /etc/init.d?
    Also I can't get sshd to start in the non-global zone.
    # svcs -a |grep ssh2
    offline 11:44:58 svc:/network/ssh:default
    # svcadm enable -t svc:/network/ssh:default
    # svcs -a |grep ssh2
    offline 11:44:58 svc:/network/ssh:default
    Anyone got any ideas?
    Michael
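    For the offline-services question above, SMF can usually explain the blockage itself; a hedged sketch:
    # ask SMF why ssh and the multi-user milestones are offline and what they are waiting on
    svcs -x svc:/network/ssh:default
    svcs -x svc:/milestone/multi-user:default
    svcs -d svc:/milestone/multi-user:default
    # the legacy /etc/rc2.d and /etc/rc3.d scripts only run once the multi-user milestones
    # come online, so clearing the dependency svcs -x reports should bring both back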

  • NFS and non global zones

    Hi,
    I've read numerous threads about mounting NFS shares in non-global zones but have still not been able to resolve my issue.
    I have 5 T3-2s which are being used as standalone SAP servers running Solaris 10u9 and numerous sparse non-global zones. Basically I have a 1 TB HDS LUN presented to one T3-2 and have NFS-shared this out as /stage to the remaining 4 global zones, which works as expected.
    However, I am unable to mount the shared NFS filesystem in the non-global zones.
    When I try to mount the NFS share from the non-global zone itself I receive RPC errors. I have also tried configuring the non-global zone with the NFS mount (from the global zone) as lofs, but the zone won't boot, and also manually mounting the NFS mount from the global zone, which looks like it works, but when I do a df in the non-global zone I receive stat errors.
    I've even tried linking the NFS share on the global zone to the non-global zone directory, but that produces a strange linkage when the zone is booted.
    Numerous threads say this is not supported, but I can't believe Oracle, after ~6/7 years of zones and numerous threads on the subject, wouldn't have resolved this issue.
    I could easily mount the storage locally and lofs it to the non-global zone, but unfortunately I don't have the storage capacity available, which is why I thought NFS mounting in the non-global zone would work!
    Any suggestions would be gratefully received!
    Thanks.

    If you are trying to mount an NFS file system in a non-global zone from the global zone of the same server, use lofs instead.
    You can mount the same file system into all non-global zones using lofs, and all non-global zones will have read/write access to it.
    If it is the global zone of some other server, then you can use NFS. But before that, check the way it is exported on the NFS server, i.e. whether the client from which you are trying to mount has permission to do so.
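    A hedged sketch of both cases described above (zone name, host name and zonepath are examples):
    # same host: loop the global zone's /stage into the zone on the fly
    # (for a permanent mount, add an fs resource of type lofs with zonecfg instead)
    mount -F lofs /stage /zones/sapzone1/root/stage
    # different host: NFS-mount from inside the non-global zone itself,
    # provided the server's share permits this client
    mount -F nfs nfshost:/stage /stage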

  • Add tape device to non-global zone

    Hi,
    I have a SCSI attached Ultrium tape device attached and configured against the global zone.
    The /dev/rmt/0* definitions in the global zone are links to ../../devices/pci@2*
    I need to be able to use this tape device from the non-global zones.
    To enable this, I have done the following:
    zonecfg -z <zone name>
    add device
    set match=/dev/rmt/0
    end
    verify
    commit
    exit
    I repeated the above for /dev/rmt/0m and /dev/rmt/0mn
    Then I restarted the zone with the command:
    zoneadm -z <zone name> reboot
    After the reboot, I can see the device when using "mt -f /dev/rmt/0 status", but whenever I try to write a SAP brbackup to the new (initialised and not write protected) tape within the drive I get the following error:
    BR0278E Command output of 'LANG=C cd /oracle/<SID>/sapbackup && /usr/sap/<SID>/SYS/exe/run/brtools -f detach LANG=C cpio -iuvB .tape
    sh: /dev/rmt/0mn: cannot open
    BR0280I BRBACKUP time stamp: 2012-04-04 08.21.41
    BR0279E Return code from 'LANG=C cd /oracle/<SID>/sapbackup && /usr/sap/<SID>/SYS/exe/run/brtools -f detach LANG=C cpio -iuvB .tape.
    BR0359E Restore of /oracle/<SID>/sapbackup/.tape.hdr0 from /dev/rmt/0mn failed due to previous errors
    Have I created the device incorrectly, or does anyone have any ideas what could be the reason the write fails?
    Any help appreciated.
    Edited by: user11329299 on 04-Apr-2012 01:09

    Hi,
    Just to bring you up to speed, I have now fixed the issue.
    The resolution was all within the iniSID.sap file that the backup is using. I have changed a number of parameters within this file:
    1. tape_copy_cmd = dd (was cpio)
    2. rewind = "mt -f $ rew; sleep 30" (was "mt -f $ rew")
    3. rewind_offline = "mt -f $ offline; sleep 30" (was "mt -f $ offline")
    4. tape_pos_cmd = "mt -f $ fsf $; sleep 30" (was "mt -f $ fsf $")
    5. tape_size = 500G (was 18000M)
    After making those changes, the backup started from within DB13. I believe that the main culprit was the tape_copy_cmd, but the others were changed to allow the tape drive time to become online again after any query.
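    For reference, the resulting entries would look roughly like this in the backup profile (the file name init<SID>.sap is assumed from the standard SAP naming; the values are the ones listed above):
    tape_copy_cmd = dd
    rewind = "mt -f $ rew; sleep 30"
    rewind_offline = "mt -f $ offline; sleep 30"
    tape_pos_cmd = "mt -f $ fsf $; sleep 30"
    tape_size = 500G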

  • How to know global zone in case non global zone is hung

    I have non-global zones: nonglobalzone1, nonglobalzone2, nonglobalzone3...
    I am working on nonglobalzone1, giving remote support.
    If nonglobalzone1 hangs, I need to know which global zone nonglobalzone1 is installed on so I can reboot it from there. If nonglobalzone1 is hung I cannot run "arp -a" inside it and work out the global zone by trial and error.
    In this case, how can I reboot nonglobalzone1? I have the same question in the case of LDoms as well.
    Thanks in advance.

    Hi.
    It's not clear what "non-global zone is hung" means. If it really hangs, you can't do anything in that zone.
    1) If you have access to the global zone, you can list all the zones running on that host:
    zoneadm list -cv
    To reboot a local zone from the global zone you just need: zoneadm -z <zone_name> reboot
    2) Zones do not support live migration, so once a zone is started it cannot change global zone.
    Create a script that puts the global zone's name in a file; when needed, just read the contents of this file.
    This file can be created from the global zone when the local zone is started (or created/moved).
    Since zone migration is not a quick operation, you could also just keep a file (or database) listing which zone is running on which host.
    For LDoms it looks very much the same.
    Regards.
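    A minimal sketch of the "record the global zone name in a file" idea from point 2, run from each global zone (the target file name is an assumption):
    #!/bin/sh
    # writes the global zone's hostname into every running local zone
    for z in `zoneadm list | grep -v '^global$'`; do
        zp=`zonecfg -z $z info zonepath | awk '{print $2}'`
        uname -n > $zp/root/etc/global-zone-name
    done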

  • Problem to migrate a non-global zone to a different machine.

    Hi, recently I tried to migrate a non-global zone to a different machine, but it doesn't work.
    1. First, this is the structure of my machine with my non-global zone:
    host1# uname -a
    SunOS testsolaris 5.11 snv_101b i86pc i386 i86pc
    host1# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    big-zone 1.71G 1.64G 20K /big-zone
    big-zone/export 1.71G 1.64G 22K /big-zone/export
    big-zone/export/big-zone 1.67G 1.64G 21K /big-zone/export/big-zone
    big-zone/export/big-zone/ROOT 1.67G 1.64G 18K legacy
    big-zone/export/big-zone/ROOT/zbe 1.67G 1.64G 1.66G legacy
    big-zone/export/zonetest 41.8M 1.64G 21K /big-zone/export/zonetest
    big-zone/export/zonetest/ROOT 41.8M 1.64G 18K legacy
    big-zone/export/zonetest/ROOT/zbe 41.8M 1.64G 1.66G /big-zone/export/zonetest/root
    rpool 8.35G 7.28G 72K /rpool
    rpool/ROOT 6.86G 7.28G 18K legacy
    rpool/ROOT/opensolaris 6.86G 7.28G 6.73G /
    rpool/dump 575M 7.28G 575M -
    rpool/export 375M 7.28G 21K /export
    rpool/export/home 18K 7.28G 18K /export/home
    rpool/export/small-zone 375M 7.28G 21K /export/small-zone
    rpool/export/small-zone/ROOT 375M 7.28G 18K legacy
    rpool/export/small-zone/ROOT/zbe 375M 7.28G 375M legacy
    rpool/swap 575M 7.78G 56.8M -
    2. Second, I detached my non-global zone "zonetest" with these commands:
    host1# zoneadm -z zonetest halt
    host1# zoneadm -z zonetest detach
    3. Third, I moved my zonepath to my new host.
    host1# cd /big-zone/export
    host1# tar cf zonetest.tar zonetest
    host1# sftp jay@new-host
    host1# put zonetest.tar
    Uploading ….
    host1# quit
    4. Unpack my .tar file
    host2# cd /big-zone/export
    host2# tar xf zonetest.tar
    So, after this, I think that my zonepath has been transferred to my new host.
    This is the structure of my new host:
    jay@alien:~$ uname -a
    SunOS alien 5.11 snv_101b i86pc i386 i86pc Solaris
    jay@alien:~$ zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 18.3G 73.3G 72K /rpool
    rpool/ROOT 2.98G 73.3G 18K legacy
    rpool/ROOT/opensolaris 2.98G 73.3G 2.85G /
    rpool/dump 1023M 73.3G 1023M -
    rpool/export 13.3G 73.3G 19K /export
    rpool/export/home 13.3G 73.3G 19K /export/home
    rpool/export/home/jay 13.3G 73.3G 13.3G /export/home/jay
    rpool/swap 1023M 73.9G 321M -
    zdata 10.7G 80.8G 9.65G /zdata
    zdata/zones 1.08G 80.8G 18K /zdata/zones
    zdata/zones/zonetest 1.08G 80.8G 1.08G /big-zone/export/
    *I have a mountpoint to /big-zone/export
    5. I tried to configure my zone on my new host and I received an error message:
    host2# zonecfg -z zonetest
    zonetest: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:zonetest> create -a /big-zone/export/zonetest
    invalid path to detached zone
    zonecfg:zonetest>

    And my new big-zone (on the second host) shows this in the /big-zone/export/zonetest folder:
    jay@alien:/zdata/zones# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 23.5G 68.0G 72K /rpool
    rpool/ROOT 6.31G 68.0G 18K legacy
    rpool/ROOT/opensolaris 6.31G 68.0G 6.18G /
    rpool/dump 1023M 68.0G 1023M -
    rpool/export 15.2G 68.0G 19K /export
    rpool/export/home 15.2G 68.0G 19K /export/home
    rpool/export/home/jay 15.2G 68.0G 15.2G /export/home/jay
    rpool/swap 1023M 68.6G 361M -
    zdata 11.6G 79.9G 10.7G /zdata
    zdata/zones 921M 79.9G 18K /zdata/zones
    zdata/zones/web 921M 79.9G 21K /zdata/zones/web
    zdata/zones/web/ROOT 921M 79.9G 18K legacy
    zdata/zones/web/ROOT/zbe 921M 79.9G 921M legacy
    zdata/zones/zonetest             54K  79.9G    18K  /big-zone/export/zonetest
    zdata/zones/zonetest/ROOT 36K 79.9G 18K legacy
    zdata/zones/zonetest/ROOT/zbe 18K 79.9G 18K legacy
    jay@alien:/zdata/zones/zonetest# pwd
    /zdata/zones/zonetest
    jay@alien:/zdata/zones/zonetest# ls -ls
    total 6
    3 drwxr-xr-x 2 root sys 2 Feb 8 2009 dev
    3 drwxr-xr-x 16 root root 19 Feb 8 2009 root
    jay@alien:/zdata/zones/zonetest# cd root
    jay@alien:/zdata/zones/zonetest/root# ls -ls
    total 52902
    1 lrwxrwxrwx 1 root root 9 Feb 1 20:29 bin -> ./usr/bin
    3 drwxr-xr-x 13 root sys 15 Feb 8 2009 dev
    11 drwxr-xr-x 55 root sys 168 Feb 8 2009 etc
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:26 home
    15 drwxr-xr-x 9 root bin 241 Feb 4 2009 lib
    3 drwxr-xr-x 2 root sys 2 Jan 22 16:23 mnt
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:26 net
    3 drwxr-xr-x 4 root sys 4 Jan 24 15:26 opt
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:23 proc
    3 drwx------ 3 root root 7 Feb 6 2009 root
    5 drwxr-xr-x 2 root sys 47 Jan 22 16:24 sbin
    3 drwxr-xr-x 4 root root 4 Jan 22 16:23 system
    3 drwxrwxrwt 2 root sys 2 Feb 8 2009 tmp
    5 drwxr-xr-x 30 root sys 42 Feb 6 2009 usr
    3 drwxr-xr-x 32 root sys 32 Feb 6 2009 var
    52835 -rw-r--r-- 1 root root 42882560 Jan 22 16:35 webmin-1.441.pkg
    jay@alien:/zdata/zones/zonetest/root#
    I think my problem is there ...
    jay@alien:/big-zone/export/zonetest# pwd
    /big-zone/export/zonetest
    jay@alien:/big-zone/export/zonetest# ls -ls
    total 8
    2 ---------- 1 root root 114 Dec 31 1969 @LongLink
    3 drwxr-xr-x 2 root root 2 Feb 1 21:10 root
    3 drwx------ 4 root root 4 Feb 1 21:10 zonetest
    jay@alien:/big-zone/export/zonetest# cd zonetest/
    jay@alien:/big-zone/export/zonetest/zonetest# ls -ls
    total 6
    3 drwxr-xr-x 2 root sys 2 Feb 8 2009 dev
    3 drwxr-xr-x 4 root root 5 Feb 1 21:10 root
    jay@alien:/big-zone/export/zonetest/zonetest# cd root
    jay@alien:/big-zone/export/zonetest/zonetest/root# ls -ls
    total 7
    1 lrwxrwxrwx 1 root root 9 Feb 1 21:10 bin -> ./usr/bin
    3 drwxr-xr-x 4 root root 4 Jan 22 16:23 system
    3 drwxr-xr-x 23 root sys 28 Feb 1 21:11 usr
    I think I have a problem with my zfs mountpoint but I don't know how to resolve this.
    Edited by: jaymachine on Feb 26, 2009 6:16 PM
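    Based on the ls output above, the tar was extracted so that the detached zone actually sits one level deeper, at /big-zone/export/zonetest/zonetest, which would explain the "invalid path to detached zone" message. A hedged sketch of attaching it from where it really is (alternatively, re-extract the archive so the zone lands directly at /big-zone/export/zonetest):
    host2# zonecfg -z zonetest 'create -a /big-zone/export/zonetest/zonetest'
    host2# zoneadm -z zonetest attach      # -u updates packages on attach, if the zone brand supports it
    host2# zoneadm -z zonetest boot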

  • Format disks in a non-global zone

    Hi,
    Can I format a disk inside a non-global zone which has been assigned (add device) to this non-global zone?
    Thanks

    In order to allow the zone "administrator" to manage his/her own disk.
    This is what I've got when I try to format the disk in the non-global zone:
    AVAILABLE DISK SELECTIONS:
    0. c4t600A0B8000179A5000002EFD42AFC76Fd0 <SUN-CSM100_R_FC-0610 cyl 20478 alt 2 hd 64 sec 64> sx2_v2
    ssd22 at scsi_vhci0 slave 0
    1. c4t600A0B8000179A5000002F0A4355CEE9d0 <SUN-CSM100_R_FC-0610 cyl 20478 alt 2 hd 64 sec 64>
    ssd24 at scsi_vhci0 slave 0
    Specify disk (enter its number): 1
    selecting c4t600A0B8000179A5000002F0A4355CEE9d0
    [disk unformatted]
    FORMAT MENU:
    disk - select a disk
    type - select (define) a disk type
    partition - select (define) a partition table
    current - describe the current disk
    format - format and analyze the disk
    repair - repair a defective sector
    label - write label to the disk
    analyze - surface analysis
    defect - defect list management
    backup - search for backup labels
    verify - read and display labels
    save - save new disk/partition definitions
    inquiry - show vendor, product and revision
    volname - set 8-character volume name
    !<cmd> - execute <cmd>, then return
    quit
    format> format
    Ready to format. Formatting cannot be interrupted.
    Continue? yes
    Beginning format. The current time is Thu Nov 10 12:59:50 2005
    Inquiry failed
    failed
    Warning: Unable to get capacity. Cannot check geometry
    Warning: error reading backup label.
    Warning: error reading backup label.
    Warning: error reading backup label.
    Warning: error reading backup label.
    Warning: error reading backup label.
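    For what it's worth, format(1M) operates on the raw device, so the zone would need the /dev/rdsk nodes delegated as well as /dev/dsk; a hedged sketch using the second disk from the menu above. Even then, format's SCSI inquiry may still be blocked by the zone's restricted privilege set, in which case labeling the LUN from the global zone is the usual fallback.
    zonecfg -z <zone name>
    zonecfg:<zone name>> add device
    zonecfg:<zone name>:device> set match=/dev/dsk/c4t600A0B8000179A5000002F0A4355CEE9d0*
    zonecfg:<zone name>:device> end
    zonecfg:<zone name>> add device
    zonecfg:<zone name>:device> set match=/dev/rdsk/c4t600A0B8000179A5000002F0A4355CEE9d0*
    zonecfg:<zone name>:device> end
    zonecfg:<zone name>> commit
    zonecfg:<zone name>> exit
    zoneadm -z <zone name> reboot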

  • Application pkg install in non Global zone

    I have got another virtual instance up on a Solaris box, and I am trying to install an application pkg which writes to
    /usr
    which is mounted read-only.
    Is it possible to create a policy to write under that folder,
    e.g. /usr/writable, where you can add/change stuff in the non-global zone?
    Thanks

    You'll need to build a "whole root" zone to do that. You currently have a "sparse root" zone.
    There are two types of non-global zone root file system models: sparse and whole root. The sparse root zone model optimizes the sharing of objects. The whole root zone model provides the maximum configurability. These concepts are discussed in Chapter 18, Planning and Configuring Non-Global Zones (Tasks).
    (from http://docs.sun.com/app/docs/doc/817-1592/6mhahuooc?a=view )
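    A hedged sketch of configuring a whole-root zone: "create -b" starts from a blank template with no inherit-pkg-dir entries, so the install copies everything and /usr becomes writable inside the zone (zone name and path are examples):
    zonecfg -z wholerootzone
    zonecfg:wholerootzone> create -b
    zonecfg:wholerootzone> set zonepath=/zones/wholerootzone
    zonecfg:wholerootzone> set autoboot=true
    zonecfg:wholerootzone> commit
    zonecfg:wholerootzone> exit
    zoneadm -z wholerootzone install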
