Intrusion Detection system in non-global zone

I have a zone configured with exclusive-ip. The zone will be used for an intrusion detection system, and the software needs low-level access to the network interface (hence exclusive-ip). The problem I'm having is that I also need an interface for local login and management of the zone, but I cannot dedicate another physical interface exclusively to that purpose. The best scenario would be a combination of exclusive-ip and shared-ip, but that's not possible. VLANs would be another option, but the version of Solaris I'm using does not have Crossbow.
I'm currently using Solaris 10 138888-08
Any suggestions?

Hi Kevin
As you mentioned yourself, I would use VLAN tagging. You do not need Crossbow to be able to use VLANs.
I am assuming the switch port you are connected to can be configured for tagged VLANs?
E.g.
Let's say your server's physical NIC is e1000g0. Get the switch port configured as a VLAN trunk with two tagged VLANs, e.g. VLANs 100 and 200.
You can then use e1000g100000 (for VLAN 100) and e1000g200000 (for VLAN 200) in your exclusive-IP zone config. One will carry the traffic for your IDS and the other can be used as your login/management network.
Solaris will handle all the tagging/untagging for you automatically when you plumb the interfaces e1000g100000 and e1000g200000. The formula for the numeric part of the interface name is:
(VLAN ID * 1000) + NIC instance number
E.g. if your physical NIC is bge3 and you had a VLAN ID of 150, the interface to plumb would be called bge150003.
I believe the Solaris IP services manual should explain this.
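For example, the exclusive-IP zone config could reference the two tagged interfaces along these lines (a sketch; the zone name and the management address are made up):
global# zonecfg -z idszone
zonecfg:idszone> set ip-type=exclusive
zonecfg:idszone> add net
zonecfg:idszone:net> set physical=e1000g100000
zonecfg:idszone:net> end
zonecfg:idszone> add net
zonecfg:idszone:net> set physical=e1000g200000
zonecfg:idszone:net> end
zonecfg:idszone> commit
Inside the zone you would then plumb the management interface as usual, e.g. "ifconfig e1000g200000 plumb 192.168.10.5 netmask 255.255.255.0 up", while the IDS listens on e1000g100000.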
hope this helps
Martin

Similar Messages

  • NFS and non global zones

    Hi,
    I've read numerous threads about mounting NFS shares in non-global zones but have still not been able to resolve my issue.
    I have 5 T3-2s being used as standalone SAP servers running Solaris 10u9 with numerous sparse non-global zones. Basically, I have a 1 TB HDS LUN presented to one T3-2 and have NFS-shared it out as /stage to the remaining 4 global zones, which works as expected.
    However, I am unable to mount the shared NFS filesystem in the non-global zones.
    When I try to mount the NFS share from the non-global zone itself I receive RPC errors. I have also tried configuring the non-global zone with the NFS mount (from the global zone) as lofs, but then the zone won't boot; manually mounting the NFS mount from the global zone looks like it works, but when I do a df in the non-global zone I receive stat errors.
    I've even tried linking the NFS share on the global zone into the non-global zone directory, but that produces a strange link when the zone is booted.
    Numerous threads say this is not supported, but I can't believe that after ~6-7 years of zones and numerous threads on the subject Oracle wouldn't have resolved this issue.
    I could easily mount the storage locally and lofs it into the non-global zone, but unfortunately I don't have the storage capacity available, which is why I thought NFS mounting in the non-global zone would work!
    Any suggestions would be gratefully received!
    Thanks.

    If you are trying to mount an NFS file system in a non-global zone from the global zone of the same server, use lofs instead.
    You can mount the same file system into all non-global zones using lofs, and all non-global zones will have read/write access to it.
    If it is the global zone of some other server, then you can use NFS. But before that, check how it is exported on the NFS server and whether the client from which you are trying to mount has permission to do so.
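    For the same-server case, the lofs mount is added to the zone configuration roughly as follows (a sketch; the zone name is made up, /stage is the path from the post):
    global# zonecfg -z sapzone
    zonecfg:sapzone> add fs
    zonecfg:sapzone:fs> set dir=/stage
    zonecfg:sapzone:fs> set special=/stage
    zonecfg:sapzone:fs> set type=lofs
    zonecfg:sapzone:fs> end
    zonecfg:sapzone> commit
    The zone then sees the global zone's /stage at /stage after its next boot.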

  • Userdel command is missing in non-global zone

    Hi,
    I am trying to remove a user account in one of the non-global zones, but the 'userdel' command is missing on the system. This system is a non-global zone. I am able to remove the same account in another non-global zone. Please find the system details:
    root@mars # uname -a
    SunOS mars 5.9 Generic_Virtual sun4u sparc SUNW,Sun-Fire
    Can you please let me know which package I have to install on this box to perform user administration tasks?
    Thanks,
    Ram.
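    One way to find out which package delivers the command is to ask pkgchk on a system where it does exist (a sketch; /usr/sbin/userdel is the standard location):
    # pkgchk -l -p /usr/sbin/userdel
    The output lists the package(s) that reference the file, which is the package to install here.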

    Hi.
    No.
    'df -h swap' will show how much total swap is currently available on the whole system.
    To check the limit you can use, for example:
    prctl -n zone.max-swap -i process $$
    Regards.

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it. You were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • Is it possible to patch Global Zone and only specific Non-Global Zones?

    Hi Champs,
    Is it possible to patch the global zone and only specific non-global zones? The idea is to patch only the DEV zones on the system, test the applications, and then patch only the STG zones on the same server.
    Not sure if it is possible, but just throwing the question out there...
    Cheers,
    Nitin

    M10vir wrote:
    Yes, if you have branded (non-sparse) zone!Branded zones and sparse zones don't have the relation that you imply. In Solaris 10, native zones can be sparse or whole-root (non-sparse, as you say). Zones that are not native zones are branded zones. Branded zones on Solaris 10 include Solaris Legacy Containers, previously known as Solaris 8 Containers and Solaris 9 Containers. That add-on product allows you to run Solaris 8 and Solaris 9 application environments under a thin layer of virtualization provided by the brands framework. solaris8 and solaris9 branded zones can be patched independently of each other and of the global zone.
    Solaris 11 has no "native zones" - all zones use the brands framework. The "solaris" brand does no emulation and in that respect is very similar to native zones on Solaris 10. Solaris 11 also provides Solaris 10 Zones via the solaris10 brand. This allows zones or the global zone from a Solaris 10 system to be transferred to a Solaris 11 system and run as solaris10 zones. When running on Solaris 11, solaris10 zones can each be patched independently from each other and the Solaris 11 global zone. Technically, Solaris 11 doesn't have patches - it just has newer versions of packages to which the system is updated.
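    As a rough illustration of the two models described above (a sketch; the zone name is hypothetical): the Solaris 11 side is updated with IPS, while a solaris10 branded zone is patched from inside with the usual Solaris 10 tools.
    global11# pkg update
    global11# zlogin mys10zone
    mys10zone# patchadd /var/tmp/<patch-id>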

  • Failing to install pkg on non-global zone

    (root)@syslog1:~# pkgadd -d . SUNWant
    Processing package instance <SUNWant> from </home/iqbala>
    Jakarta ANT(sparc) 11.10.0,REV=2005.01.08.05.16
    WARNING: Stale lock installed for pkgrm, pkg SUNWaspell quit in remove-initial state.
    Removing lock.
    Using </> as the package base directory.
    ## Processing package information.
    ERROR: Cannot allocate memory for package object array.
    pkgadd: ERROR: memory allocation failure
    pkgadd: ERROR: unable to process pkgmap
    Installation of <SUNWant> failed (internal error).
    No changes were made to the system.
    (root)@syslog1:~#
    (root)@syslog1:~# zonename
    syslog
    This non-global zone is capped at 1 GB of physical memory out of the 2 GB total on the T1000.
    (root)@syslog-global:~# uname -a
    SunOS syslog-global 5.10 Generic_137137-09 sun4v sparc SUNW,Sun-Fire-T1000
    (root)@syslog-global:~# zoneadm list
    global
    syslog
    (root)@syslog-global:~# zonename
    global
    (root)@syslog-global:~# zonecfg -z syslog info
    zonename: syslog
    zonepath: /syslog
    brand: native
    autoboot: true
    bootargs: -m verbose
    pool:
    limitpriv: default,sys_time
    scheduling-class: FSS
    ip-type: shared
    inherit-pkg-dir:
         dir: /lib
    inherit-pkg-dir:
         dir: /platform
    inherit-pkg-dir:
         dir: /sbin
    inherit-pkg-dir:
         dir: /usr
    fs:
         dir: /var/logs
         special: /var/logs
         raw not specified
         type: lofs
         options: []
    fs:
         dir: /usr/local
         special: /syslog-local/usr/local
         raw not specified
         type: lofs
         options: []
    net:
         address: 192.168.0.114
         physical: aggr1
         defrouter: 192.168.0.1
    dedicated-cpu:
         ncpus: 1-8
         importance: 10
    capped-memory:
         physical: 1G
         [swap: 512M]
    attr:
         name: comment
         type: string
         value: "syslog server"
    rctl:
         name: zone.max-swap
         value: (priv=privileged,limit=536870912,action=deny)
    (root)@syslog-global:~# prstat -Z
    PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
    13118 root 7184K 5952K sleep 1 0 52:00:54 0.5% nco_p_syslog/10
    11730 root 162M 123M sleep 59 0 38:51:35 0.1% splunkd/22
    7324 root 12M 8280K sleep 59 0 0:58:06 0.0% syslogd/25
    266 root 97M 24M sleep 49 0 31:45:02 0.0% poold/8
    209 daemon 8104K 3080K sleep 59 0 24:39:56 0.0% rcapd/1
    29553 root 2496K 2024K cpu4 59 5 0:00:00 0.0% splunk-optimize/1
    21578 root 38M 36M sleep 59 0 0:01:10 0.0% puppetd/2
    29554 root 6088K 3712K cpu0 49 0 0:00:00 0.0% prstat/1
    24244 root 5760K 3104K sleep 49 0 0:00:00 0.0% bash/1
    1024 noaccess 171M 96M sleep 59 0 8:41:32 0.0% java/18
    27771 noaccess 189M 100M sleep 1 0 4:44:36 0.0% java/18
    274 daemon 3192K 496K sleep 59 0 0:00:00 0.0% statd/1
    279 daemon 2816K 576K sleep 60 -20 0:00:00 0.0% nfs4cbd/2
    326 root 2304K 40K sleep 59 0 0:00:00 0.0% cimomboot/1
    151 root 2576K 344K sleep 59 0 0:00:00 0.0% drd/2
    ZONEID NPROC SWAP RSS MEMORY TIME CPU ZONE
    3 47 465M 513M 25% 99:54:00 0.7% syslog
    0 42 391M 466M 23% 71:04:39 0.1% global
    Total: 89 processes, 386 lwps, load averages: 0.21, 0.26, 0.26
    Am I hitting a bug?

    If your pkg wants to install files into /usr or another inherit-pkg-dir, it can't, because those directories are shared read-only.
    Verify where the pkg copies its files.
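    One way to verify that before installing is to list the pathnames the package will deliver (a sketch; the directory is the one shown in the pkgadd output above):
    (root)@syslog1:~# pkgchk -l -d /home/iqbala SUNWant | grep Pathname
    If the pathnames fall under /usr or another inherit-pkg-dir, a sparse zone cannot install them.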

  • DNS client in a non-global zone

    Hello,
    I want to configure only the non-global zone as a DNS client, with
    /etc/resolv.conf
    /etc/defaultdomain
    /etc/nsswitch.conf
    Is this OK, or is this a global, system-wide issue?
    -- Nick

    Yes. The /etc file system is private to each zone (in both the sparse and whole-root models), so each zone can have its own DNS settings (as well as private things like a different time zone and such).
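    A minimal sketch of the zone-local setup (the domain and nameserver address are placeholders):
    zone# cat /etc/resolv.conf
    domain example.com
    nameserver 192.0.2.53
    zone# cp /etc/nsswitch.dns /etc/nsswitch.conf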

  • Problem to migrate a non-global zone to a different machine.

    Hi, recently I tried to migrate a non-global zone to a different machine, but it doesn't work.
    1. First, this is the structure of my machine with my non-global zone:
    host1# uname -a
    SunOS testsolaris 5.11 snv_101b i86pc i386 i86pc
    host1# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    big-zone 1.71G 1.64G 20K /big-zone
    big-zone/export 1.71G 1.64G 22K /big-zone/export
    big-zone/export/big-zone 1.67G 1.64G 21K /big-zone/export/big-zone
    big-zone/export/big-zone/ROOT 1.67G 1.64G 18K legacy
    big-zone/export/big-zone/ROOT/zbe 1.67G 1.64G 1.66G legacy
    big-zone/export/zonetest 41.8M 1.64G 21K /big-zone/export/zonetest
    big-zone/export/zonetest/ROOT 41.8M 1.64G 18K legacy
    big-zone/export/zonetest/ROOT/zbe 41.8M 1.64G 1.66G /big-zone/export/zonetest/root
    rpool 8.35G 7.28G 72K /rpool
    rpool/ROOT 6.86G 7.28G 18K legacy
    rpool/ROOT/opensolaris 6.86G 7.28G 6.73G /
    rpool/dump 575M 7.28G 575M -
    rpool/export 375M 7.28G 21K /export
    rpool/export/home 18K 7.28G 18K /export/home
    rpool/export/small-zone 375M 7.28G 21K /export/small-zone
    rpool/export/small-zone/ROOT 375M 7.28G 18K legacy
    rpool/export/small-zone/ROOT/zbe 375M 7.28G 375M legacy
    rpool/swap 575M 7.78G 56.8M -
    2. Second, I detached my non-global zone "zonetest" with these commands:
    host1# zoneadm -z zonetest halt
    host1# zoneadm -z zonetest detach
    3. Third, I moved my zonepath to the new host.
    host1# cd /big-zone/export
    host1# tar cf zonetest.tar zonetest
    host1# sftp jay@new-host
    host1# put zonetest.tar
    Uploading ….
    host1# quit
    4. Unpack my .tar file
    host2# cd /big-zone/export
    host2# tar xf zonetest.tar
    So after this, I think my zonepath has been transferred to the new host.
    This is the structure of the new host:
    jay@alien:~$ uname -a
    SunOS alien 5.11 snv_101b i86pc i386 i86pc Solaris
    jay@alien:~$ zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 18.3G 73.3G 72K /rpool
    rpool/ROOT 2.98G 73.3G 18K legacy
    rpool/ROOT/opensolaris 2.98G 73.3G 2.85G /
    rpool/dump 1023M 73.3G 1023M -
    rpool/export 13.3G 73.3G 19K /export
    rpool/export/home 13.3G 73.3G 19K /export/home
    rpool/export/home/jay 13.3G 73.3G 13.3G /export/home/jay
    rpool/swap 1023M 73.9G 321M -
    zdata 10.7G 80.8G 9.65G /zdata
    zdata/zones 1.08G 80.8G 18K /zdata/zones
    zdata/zones/zonetest 1.08G 80.8G 1.08G /big-zone/export/
    *I have a mountpoint to /big-zone/export
    5. I tried to configure my zone on the new host and I received an error message:
    host2# zonecfg -z zonetest
    zonetest: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:zonetest> create -a /big-zone/export/zonetest
    invalid path to detached zone
    zonecfg:zonetest>

    And my new big-zone (on the second host) shows this in the /big-zone/export/zonetest folder:
    jay@alien:/zdata/zones# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 23.5G 68.0G 72K /rpool
    rpool/ROOT 6.31G 68.0G 18K legacy
    rpool/ROOT/opensolaris 6.31G 68.0G 6.18G /
    rpool/dump 1023M 68.0G 1023M -
    rpool/export 15.2G 68.0G 19K /export
    rpool/export/home 15.2G 68.0G 19K /export/home
    rpool/export/home/jay 15.2G 68.0G 15.2G /export/home/jay
    rpool/swap 1023M 68.6G 361M -
    zdata 11.6G 79.9G 10.7G /zdata
    zdata/zones 921M 79.9G 18K /zdata/zones
    zdata/zones/web 921M 79.9G 21K /zdata/zones/web
    zdata/zones/web/ROOT 921M 79.9G 18K legacy
    zdata/zones/web/ROOT/zbe 921M 79.9G 921M legacy
    zdata/zones/zonetest             54K  79.9G    18K  /big-zone/export/zonetest
    zdata/zones/zonetest/ROOT 36K 79.9G 18K legacy
    zdata/zones/zonetest/ROOT/zbe 18K 79.9G 18K legacy
    jay@alien:/zdata/zones/zonetest# pwd
    /zdata/zones/zonetest
    jay@alien:/zdata/zones/zonetest# ls -ls
    total 6
    3 drwxr-xr-x 2 root sys 2 Feb 8 2009 dev
    3 drwxr-xr-x 16 root root 19 Feb 8 2009 root
    jay@alien:/zdata/zones/zonetest# cd root
    jay@alien:/zdata/zones/zonetest/root# ls -ls
    total 52902
    1 lrwxrwxrwx 1 root root 9 Feb 1 20:29 bin -> ./usr/bin
    3 drwxr-xr-x 13 root sys 15 Feb 8 2009 dev
    11 drwxr-xr-x 55 root sys 168 Feb 8 2009 etc
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:26 home
    15 drwxr-xr-x 9 root bin 241 Feb 4 2009 lib
    3 drwxr-xr-x 2 root sys 2 Jan 22 16:23 mnt
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:26 net
    3 drwxr-xr-x 4 root sys 4 Jan 24 15:26 opt
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:23 proc
    3 drwx------ 3 root root 7 Feb 6 2009 root
    5 drwxr-xr-x 2 root sys 47 Jan 22 16:24 sbin
    3 drwxr-xr-x 4 root root 4 Jan 22 16:23 system
    3 drwxrwxrwt 2 root sys 2 Feb 8 2009 tmp
    5 drwxr-xr-x 30 root sys 42 Feb 6 2009 usr
    3 drwxr-xr-x 32 root sys 32 Feb 6 2009 var
    52835 -rw-r--r-- 1 root root 42882560 Jan 22 16:35 webmin-1.441.pkg
    jay@alien:/zdata/zones/zonetest/root#
    I think my problem is there ...
    jay@alien:/big-zone/export/zonetest# pwd
    /big-zone/export/zonetest
    jay@alien:/big-zone/export/zonetest# ls -ls
    total 8
    2 ---------- 1 root root 114 Dec 31 1969 @LongLink
    3 drwxr-xr-x 2 root root 2 Feb 1 21:10 root
    3 drwx------ 4 root root 4 Feb 1 21:10 zonetest
    jay@alien:/big-zone/export/zonetest# cd zonetest/
    jay@alien:/big-zone/export/zonetest/zonetest# ls -ls
    total 6
    3 drwxr-xr-x 2 root sys 2 Feb 8 2009 dev
    3 drwxr-xr-x 4 root root 5 Feb 1 21:10 root
    jay@alien:/big-zone/export/zonetest/zonetest# cd root
    jay@alien:/big-zone/export/zonetest/zonetest/root# ls -ls
    total 7
    1 lrwxrwxrwx 1 root root 9 Feb 1 21:10 bin -> ./usr/bin
    3 drwxr-xr-x 4 root root 4 Jan 22 16:23 system
    3 drwxr-xr-x 23 root sys 28 Feb 1 21:11 usr
    I think I have a problem with my ZFS mountpoint, but I don't know how to resolve it.
    Edited by: jaymachine on Feb 26, 2009 6:16 PM
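    For what it's worth, the @LongLink entry and the doubled zonetest/zonetest directory suggest the tar copy mangled the zone root, which is why zonecfg rejects the path. A sketch of an alternative that keeps the ZFS dataset layout intact (dataset and host names are taken from the post; privileges and exact mountpoints are glossed over, so treat the details as assumptions):
    host1# zoneadm -z zonetest halt
    host1# zoneadm -z zonetest detach
    host1# zfs snapshot -r big-zone/export/zonetest@move
    host1# zfs send -R big-zone/export/zonetest@move | ssh jay@alien zfs receive -u zdata/zones/zonetest
    host2# zfs set mountpoint=/zdata/zones/zonetest zdata/zones/zonetest
    host2# zonecfg -z zonetest "create -a /zdata/zones/zonetest"
    host2# zoneadm -z zonetest attach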

  • Application pkg install in non Global zone

    I have got another virtual instance up on a Solaris box. I am trying to install an application pkg which writes to
    /usr
    which is mounted as read-only.
    Is it possible to create a policy to allow writing under that folder,
    e.g. /usr/writable, where you can add/change things in the non-global zone?
    thanks

    You'll need to build a "whole root" zone to do that. You currently have a "sparse root" zone.
    There are two types of non-global zone root file system models: sparse and whole root. The sparse root zone model optimizes the sharing of objects. The whole root zone model provides the maximum configurability. These concepts are discussed in Chapter 18, Planning and Configuring Non-Global Zones (Tasks).
    (from http://docs.sun.com/app/docs/doc/817-1592/6mhahuooc?a=view )
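    A sketch of how a whole-root zone could be configured on Solaris 10 (the zone name and path are made up); "create -b" starts from a blank template with no inherit-pkg-dir entries, so /usr belongs to the zone and is writable:
    global# zonecfg -z appzone
    zonecfg:appzone> create -b
    zonecfg:appzone> set zonepath=/zones/appzone
    zonecfg:appzone> commit
    zonecfg:appzone> exit
    global# zoneadm -z appzone install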

  • SFTP chroot from non-global zone to zfs pool

    Hi,
    I am unable to create an SFTP chroot inside a zone to a shared folder on the global zone.
    Inside the global zone:
    I have created a zfs pool (rpool/data) and then mounted it to /data.
    I then created some shared folders: /data/sftp/ipl/import and /data/sftp/ipl/export
    I then created a non-global zone and added a file system that loops back to /data.
    Inside the zone:
    I then did the usual steps to create a chrooted sftp user, similar to: http://nixinfra.blogspot.com.au/2012/12/openssh-chroot-sftp-setup-in-linux.html
    I modified the /etc/ssh/sshd_config file and hard-wired the ChrootDirectory to /data/sftp/ipl.
    When I attempt to sftp into the zone, an error message is displayed in the zone -> fatal: bad ownership or modes for chroot directory /data/
    Multiple web sites warn that folder ownership and access privileges are important. However, issuing chown -R root:iplgroup /data made no difference. Perhaps it is something to do with the fact that the folders were created in the global zone?
    If I create a simple shared folder inside the zone it works, e.g. /data3/ftp/ipl......ChrootDirectory => /data3/ftp/ipl
    If I use the users home directory it works. eg /export/home/sftpuser......ChrootDirectory => %h
    FYI. The reason for having a ZFS shared folder is to allow separate SFTP and FTP zones and a common/shared data repository for FTP and SFTP exchanges with remote systems. e.g. One remote client pushes data to the FTP server. A second remote client pulls the data via SFTP. Having separate zones increases security?
    Any help would be appreciated to solve this issue.
    Regards John
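    For what it's worth, sshd requires every directory in the ChrootDirectory path to be owned by root and not writable by group or other, which matches the error above. A sketch of checking and fixing that from inside the zone (paths as posted; whether the lofs-mounted directories keep these modes is worth verifying):
    zone# ls -ld /data /data/sftp /data/sftp/ipl
    zone# chown root:root /data /data/sftp /data/sftp/ipl
    zone# chmod 755 /data /data/sftp /data/sftp/ipl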

    sanjaykumarfromsymantec wrote:
    Hi,
    I want to do IPC between zones (communication between processes running in two different zones). So what are the different techniques that can be used? I am not interested in TCP/IP (AF_INET) sockets.
    Zones are designed to prevent most visibility between non-global zones and other zones, so network communication (like you might use between two physical machines) is the most common method.
    You could mount a global zone filesystem into multiple non-global zones (via lofs) and have your programs push data there. But you'll probably have to poll for updates. I'm not certain that's easier or better than network communication.
    Darren

  • Unable to add a device (e.g. /dev/cua0) to a non-global zone

    Hi,
    I've installed Solaris 10u4 on an x86 machine with the latest patches, applied with the smpatch utility.
    The history:
    I installed Solaris 10u3 without any patches, a fairly minimal installation. I created a non-global zone, added a ZFS dataset, added networking, and added one serial device (/dev/cua0). I installed hylafax from Blastwave in that zone, using the modem attached to /dev/cua0, and everything worked fine apart from some sendmail issues. Due to issues with samba, which I needed on this machine, I tried to update it; after ending up in dependency hell because of the minimal installation, I gave up. Instead I did a fresh install of Solaris 10u4, again with the latest patches applied via smpatch. I then created a new zone and want to add the device /dev/cua0 as in the s10u3 installation, but the device doesn't appear in the non-global zone, so I've installed hylafax in the global zone temporarily.
    The question: any ideas or workarounds to bring the async device into a non-global zone again?
    I'm not a newbie with *nix-like systems (several years with BSD and GNU/Linux), but for Solaris I would classify myself as a newbie ;-)
    thanks in advance.
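    For reference, the device resource itself is normally added like this, followed by a zone reboot so the entry is created under the zone's /dev (a sketch; the zone name is made up):
    global# zonecfg -z faxzone
    zonecfg:faxzone> add device
    zonecfg:faxzone:device> set match=/dev/cua0
    zonecfg:faxzone:device> end
    zonecfg:faxzone> commit
    zonecfg:faxzone> exit
    global# zoneadm -z faxzone reboot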

    Hmm. If that didn't work, then it's possible you're running into a different problem.
    But I checked again and this is the one I was thinking of. Toward the bottom, some patches are referenced. I suppose they won't hurt, but I'm worried you're seeing something related to the 'cua' device rather than the general problem of device creation.
    http://www.opensolaris.org/jive/thread.jspa?messageID=171187
    Darren

  • Non-global zone network configuration

    Hi,
    Zones are a new thing for me so please excuse me if this is a basic query... I have recently jumpstarted a system using a jumpstart script that was developed by somebody else. It creates two non-global zones and configures their network interfaces.
    I have unplumbed one of the virtual interfaces for a particular zone because the IP address it was using is actually being used by another system on the network. However, when I reboot the zone, the interface is re-assigned the same IP address again. The IP address in question is not in /etc/hosts on any of the zones, and in the non-global zones the "hostname.<interface>" files do not exist at all. Also, the IP address is not in sysidcfg in any of the zones.
    So basically, interface e1000g0:2 is being assigned an IP address that was configured by the jumpstart script, so perhaps the jumpstart script has placed that IP address in some file that is read when the zone is booting. I have even checked the rc scripts just in case, but I cannot find the IP address anywhere. Would anybody be able to tell me where the configuration information could be coming from in this scenario (nsswitch.conf specifies only files)?
    Thank you in advance...

    It's in the zone config:
    zonecfg -z <zone in question> info
    It should list a net address and physical device. You can then use:
    zonecfg -z <zone in question>
    From there you can remove the net resource, or change the address if you want to keep using the network card in your zone.
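    A sketch of that removal (substitute the duplicate address the jumpstart script configured):
    global# zonecfg -z <zone in question>
    zonecfg> remove net address=<duplicate IP>
    zonecfg> commit
    zonecfg> exit
    global# zoneadm -z <zone in question> reboot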

  • Non-global zone in "shutting_down" state.. Hung in this state

    Hi. My server is running Solaris 10. It hosts two non-global zones in which the databases run.
    There was a complaint from the database team that they were not able to log in to the server. When I checked, the status of the local zones looked fine, but when I tried to "# zlogin" to them, it hung. So I tried "# zlogin -S <zone_name>" and was able to log in in failsafe mode, but could not execute any command there. Any command, from "uptime" to "zfs list", hung and I had to log out forcefully.
    So I tried to halt the non-global zones first and then boot them, but they got stuck in the "shutting_down" state.
    When I tried to kill the processes of the non-global zones with "kill -9", it failed to kill them.
    So I rebooted the global zone, which fixed the issue. But then, 10 days later, the same issue came up.
    I followed the same steps to fix it, but I'm afraid this issue might come up again, since I think rebooting the global zone is only a temporary fix.
    I logged a call with Oracle Support for this, but the server looks fine from the explorer output that was provided.
    Has anyone faced this same problem? What can I do to fix this issue permanently?

    If you encounter the issue again in future, please get a system crash dump by panicking the global zone. This will allow us (support) to review the crash dump and understand why the zone failed to shut down. It will have been waiting on a resource, and without the dump there's simply no way to know what or why.
    IIRC we recently (within the past month) did a putback of a fix for a bug (which I can't find the ID of right now) whereby if a zone hangs on the way down we fork a new instance of the zone and leave the old refs in their hung state. So it's worth ensuring that you're running the latest patch set.
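    If it does recur, the dump can be captured with the standard tools (a sketch; check the dump device sizing with dumpadm first):
    global# dumpadm                # confirm the dump device and savecore directory
    global# savecore -L            # take a live crash dump without bringing the system down, or
    global# reboot -d              # force a panic and crash dump, then reboot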

  • Non-global zone networking

    I've created a non-global zone with a pair of anet devices. I plan to do IPMP inside the non-global zone to manage interface redundancy. The anet config is rather simple -- I have a net0 and net1 whose lower-link's are net2 and net3 respectively.
    Inside the zone, it looks like everything is ready to go. My two VNICs are up.
    zone# dladm show-link
    LINK CLASS MTU STATE OVER
    net0 vnic 1500 up ?
    net1 vnic 1500 up ?
    So I try to plumb them (if I can still use that term).
    zone# ipadm create-ip net0
    zone# ipadm create-ip net1
    zone# ipadm show-if
    IFNAME CLASS STATE ACTIVE OVER
    lo0 loopback ok yes --
    net0 ip down no --
    net1 ip down no --
    That's strange -- why are they not up?
    zone# ifconfig net0 up; ifconfig net1 up
    zone# ipadm show-if
    IFNAME CLASS STATE ACTIVE OVER
    lo0 loopback ok yes --
    net0 ip ok yes --
    net1 ip ok yes --
    Aaah. Much better. Now I can get on with my life.
    # ipadm create-ipmp -i net0 -i net1 ipmp0
    # ipadm create-addr -T static -a 192.168.1.104/24 ipmp0/v4
    So my question is: why did I have to resort to running an ifconfig up on these interfaces? ifconfig is dead to me -- or so I'd like to think. :)
    What is the "right" way to deal with this problem?

    Figured this out.
    The issue was that I had just done a zlogin to the zone after it was built (which was 3 weeks ago). I had completely forgotten that I had not yet completed the system configuration, so the svc:/milestone/config:default service was offline, along with its many dependencies.
    Basically, I manually configured the network information before telling the system configuration that I was going to do so.
    Strange behaviour -- but that's what happens when you don't follow the order of operations.
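    A quick way to spot that state inside a freshly built zone (a sketch; these are the standard Solaris 11 service and command names):
    zone# svcs svc:/milestone/config:default
    zone# sysconfig configure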

  • Problem with exporting devices to non-global zone

    Hi,
    I have a problem with exporting devices to my Solaris zones (I am trying to add support for mounting /dev/lofi/* in my non-global zone).
    I created a config for my zone.
    Here it is:
    $ zonecfg -z sapdev info
    zonename: sapdev
    zonepath: /export/home/zones/sapdev
    brand: native
    autoboot: true
    bootargs:
    pool:
    limitpriv: default,sys_time
    scheduling-class:
    ip-type: shared
    fs:
         dir: /sap
         special: /dev/dsk/c1t44d0s0
         raw: /dev/rdsk/c1t44d0s0
         type: ufs
         options: []
    net:
         address: 194.29.128.45
         physical: ce0
    device
         match: /dev/lofi/1
    device
         match: /dev/rlofi/1
    device
         match: /dev/lofi/2
    device
         match: /dev/rlofi/2
    attr:
         name: comment
         type: string
         value: "This is SAP developement zone"
    global# lofiadm
    Block Device File
    /dev/lofi/1 /root/SAP_DB2_9_LUW.iso
    /dev/lofi/2 /usr/tmp/fsfile
    I rebooted the non-global zone, and even rebooted the global zone, and after that there are no /dev/*lofi/* files in the sapdev zone.
    What did I do wrong? Maybe I reduced my Solaris 10 u4 SPARC installation too much.
    Can anybody help me?
    Thanks for help,
    Marek

    I experienced the same problem on my Solaris 10 08/07 system.
    Normally, when the zone enters the READY state during boot, its zoneadmd will run devfsadm -z <zone>. In my understanding this is what creates the necessary device files in ZONEPATH/dev.
    This worked well until recently. Now only the directories are created.
    It seems as if devfsadm -z is broken. Somebody should open a call with Sun.
    As a workaround you can easily copy the device files into the zone. It is important not to copy the symbolic link but the target.
    # cp /dev/lofi/1 ZONEPATH/dev/lofi
    Hope this helps,
    Konstantin Gremliza
