Using a Fibre Channel HBA with a non-global zone.

I am trying to let a non-global zone use a dual-port HBA. Please note the goal is to use the HBA itself, including the SAN devices behind it, not just a single device on the SAN. Does anyone know if and how this can be done?
[root@global:/]# more /etc/release
                       Solaris 10 8/07 s10s_u4wos_12b SPARC ...
[root@global:/]# zonecfg -z localzone info
zonename: localzone
zonepath: /zones/localzone
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
net:
        address: x.x.x.x/24
        physical: qfe0
device
        match: /dev/fc/fp[0-1]
device
        match: /dev/cfg/c[1-2]
device
        match: /dev/*dsk/c[1-2]*
[root@global:/]# fcinfo hba-port
HBA Port WWN: 210000e08b083b41
        OS Device Name: /dev/cfg/c1
        Manufacturer: QLogic Corp.
        Model: QLA2342
        Firmware Version: 3.3.24
        FCode/BIOS Version: No Fcode found
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200000e08b083b41
HBA Port WWN: 210100e08b283b41
        OS Device Name: /dev/cfg/c2
        Manufacturer: QLogic Corp.
        Model: QLA2342
        Firmware Version: 3.3.24
        FCode/BIOS Version: No Fcode found
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 200100e08b283b41
[root@localzone:dev]# ls fc
fp0  fp1
[root@localzone:dev]# ls cfg
c1  c2
[root@localzone:dev]# ls dsk | grep s0
c1t500601613021934Dd0s0
c1t500601693021934Dd0s0
c1t50060482D52D5608d0s0
c1t50060482D52D5626d0s0
c2t500601613021934Dd0s0
c2t500601693021934Dd0s0
c2t50060482D52D5608d0s0
c2t50060482D52D5626d0s0
[root@localzone:dev]# ls rdsk | grep s0
c1t500601613021934Dd0s0
c1t500601693021934Dd0s0
c1t50060482D52D5608d0s0
c1t50060482D52D5626d0s0
c2t500601613021934Dd0s0
c2t500601693021934Dd0s0
c2t50060482D52D5608d0s0
c2t50060482D52D5626d0s0
[root@localzone:dev]# fcinfo hba-port
No Adapters Found.

You cannot present devices directly to the NGZ (what a mouthful to say/type... sheesh! What's wrong with "local zones", Sun?).
You can present filesystems and/or ZFS pools, but not HBAs or other devices directly (AFAIK).
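If presenting storage rather than the HBA itself is acceptable, the usual pattern is to hand the zone a filesystem or a delegated ZFS dataset. A minimal zonecfg sketch, assuming a UFS filesystem on one of the SAN LUNs listed above; the mount point /sandata and the dataset name tank/sanpool are hypothetical:
[root@global:/]# zonecfg -z localzone
zonecfg:localzone> add fs
zonecfg:localzone:fs> set dir=/sandata
zonecfg:localzone:fs> set special=/dev/dsk/c1t500601613021934Dd0s0
zonecfg:localzone:fs> set raw=/dev/rdsk/c1t500601613021934Dd0s0
zonecfg:localzone:fs> set type=ufs
zonecfg:localzone:fs> end
zonecfg:localzone> add dataset
zonecfg:localzone:dataset> set name=tank/sanpool
zonecfg:localzone:dataset> end
zonecfg:localzone> exit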

Similar Messages

  • Netbackup with Solaris non-global zone!

    Hi,
    How do I install and configure NetBackup in a Solaris 10 non-global zone? What steps do I need to follow?
    Thanks
    Tanvir

    I agree with running it from the global zone. The added benefit is that if you back up the root of all zonepaths, then when you add any new non-global zone within that path, the new server will be backed up automatically.
    In the past we had been installing the client on each server, both global and non-global. On our non-global zones, /usr is not writable but /opt is. We would symlink /usr/openv to /opt/openv from the global zone and then remotely install the client software from the backup master via
    "/usr/openv/netbackup/bin/install_client_files ssh <client>"

  • Can't do traceroute or DNS queries within a non-global zone.

    I'll start by outlining my servers and their roles.
    They are all on the same network, behind the same gateway, plugged into the same switch.
    secure1 = a FreeBSD server running BIND. It's a recursive DNS server. Works perfectly.
    secure2 = a Solaris 10 server.
    zone1 = a zone that was set up before I inherited this environment.
    zone2 = a zone I tried to create, and it mostly worked.
    The problem:
    From zone2 I cannot do DNS queries, and traceroutes past the gateway don't work. At first I suspected the firewall, but everything that doesn't work on zone2 works fine on zone1.
    What does work on zone2
    I can ssh into it
    I can ssh out of it
    I can ping it
    I can ping from it
    I can traceroute from it to secure1
    I can ssh to other hosts out on the internet.
    What doesn't work
    I can't do any DNS queries, whether the DNS server is inside my network or outside of it.
    I can't traceroute past my gateway, though I can from zone1.
    Finally, here's what happens when I do a DNS query:
    zone2# /usr/sbin/host google.com 66.48.78.91
    ;; connection timed out; no servers could be reached
    Oh, I diffed the zone1.xml and zone2.xml files in /etc/zones and, except for things like IP addresses, they are the same.
    Any suggestions would be much appreciated. Thanks, folks.

    ifconfig -a and netstat -rn output from the zone that isn't working properly would help.
    Off the top of my head, my guess is that your default route isn't valid for zone 2.
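    A quick way to check from inside the failing zone (shared-IP zones use the global zone's routing table, so compare against zone1):
    zone2# netstat -rn            # is there a default route, and is the gateway on zone2's subnet?
    zone2# ifconfig -a            # compare the address/netmask with zone1
    zone2# cat /etc/resolv.conf   # confirm the nameserver entries the resolver uses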

  • Whole root zone and Fibre Channel HBA

    I have Solaris 10 x86 loaded on an IBM blade. I've created a whole root zone on it and have installed Symantec NetBackup 6.5.5, configured as a media server. I have an SL8500 tape library in the environment and all the tape drives are Fibre Channel. Now, in the global zone, I can run fcinfo hba-port or scli and the global zone finds the HBAs with no problem. I've added the /dev/cfg/c1 and /dev/cfg/c2 devices to the zone, but running the same commands inside it, it cannot find the HBAs at all.
    Is there something else that I need to do so that the zone can see the HBAs, or is this just not possible in a zone?
    Thanks in advance,
    Ed Coates

    Hi.
    Check the release notes for NetBackup. As far as I know, the media server is only supported in global zones.
    You can add access to /dev/rmt/* to give the zone access to the tape drives, but you will still have a problem with the robot device, and this configuration is not supported.
    Regards.
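    For the tape drive part, a minimal zonecfg sketch (the zone name mediazone is hypothetical):
    global# zonecfg -z mediazone
    zonecfg:mediazone> add device
    zonecfg:mediazone:device> set match=/dev/rmt/*
    zonecfg:mediazone:device> end
    zonecfg:mediazone> exit
    global# zoneadm -z mediazone reboot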

  • Problem to migrate a non-global zone to a different machine.

    Hi, recently I tried to migrate a non-global zone to a different machine, but it doesn't work.
    1. First, this is the structure of my machine with my non-global zone:
    host1# uname -a
    SunOS testsolaris 5.11 snv_101b i86pc i386 i86pc
    host1# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    big-zone 1.71G 1.64G 20K /big-zone
    big-zone/export 1.71G 1.64G 22K /big-zone/export
    big-zone/export/big-zone 1.67G 1.64G 21K /big-zone/export/big-zone
    big-zone/export/big-zone/ROOT 1.67G 1.64G 18K legacy
    big-zone/export/big-zone/ROOT/zbe 1.67G 1.64G 1.66G legacy
    big-zone/export/zonetest 41.8M 1.64G 21K /big-zone/export/zonetest
    big-zone/export/zonetest/ROOT 41.8M 1.64G 18K legacy
    big-zone/export/zonetest/ROOT/zbe 41.8M 1.64G 1.66G /big-zone/export/zonetest/root
    rpool 8.35G 7.28G 72K /rpool
    rpool/ROOT 6.86G 7.28G 18K legacy
    rpool/ROOT/opensolaris 6.86G 7.28G 6.73G /
    rpool/dump 575M 7.28G 575M -
    rpool/export 375M 7.28G 21K /export
    rpool/export/home 18K 7.28G 18K /export/home
    rpool/export/small-zone 375M 7.28G 21K /export/small-zone
    rpool/export/small-zone/ROOT 375M 7.28G 18K legacy
    rpool/export/small-zone/ROOT/zbe 375M 7.28G 375M legacy
    rpool/swap 575M 7.78G 56.8M -
    2. Second, I detached my non-global zone "zonetest" with these commands:
    host1# zoneadm -z zonetest halt
    host1# zoneadm -z zonetest detach
    3. Third, I moved my zonepath to my new host.
    host1# cd /big-zone/export
    host1# tar cf zonetest.tar zonetest
    host1# sftp jay@new-host
    sftp> put zonetest.tar
    Uploading ....
    sftp> quit
    4. Unpack the .tar file:
    host2# cd /big-zone/export
    host2# tar xf zonetest.tar
    So after this, I think my zonepath has been transferred to my new host.
    This is the structure of my new host:
    jay@alien:~$ uname -a
    SunOS alien 5.11 snv_101b i86pc i386 i86pc Solaris
    jay@alien:~$ zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 18.3G 73.3G 72K /rpool
    rpool/ROOT 2.98G 73.3G 18K legacy
    rpool/ROOT/opensolaris 2.98G 73.3G 2.85G /
    rpool/dump 1023M 73.3G 1023M -
    rpool/export 13.3G 73.3G 19K /export
    rpool/export/home 13.3G 73.3G 19K /export/home
    rpool/export/home/jay 13.3G 73.3G 13.3G /export/home/jay
    rpool/swap 1023M 73.9G 321M -
    zdata 10.7G 80.8G 9.65G /zdata
    zdata/zones 1.08G 80.8G 18K /zdata/zones
    zdata/zones/zonetest 1.08G 80.8G 1.08G /big-zone/export/
    * I set the mountpoint to /big-zone/export
    5. I tried to configure my zone on my new host, and I received an error message:
    host2# zonecfg -z zonetest
    zonetest: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:zonetest> create -a /big-zone/export/zonetest
    invalid path to detached zone
    zonecfg:zonetest>

    And my new big-zone (on the second host) shows this in the /big-zone/export/zonetest folder:
    jay@alien:/zdata/zones# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 23.5G 68.0G 72K /rpool
    rpool/ROOT 6.31G 68.0G 18K legacy
    rpool/ROOT/opensolaris 6.31G 68.0G 6.18G /
    rpool/dump 1023M 68.0G 1023M -
    rpool/export 15.2G 68.0G 19K /export
    rpool/export/home 15.2G 68.0G 19K /export/home
    rpool/export/home/jay 15.2G 68.0G 15.2G /export/home/jay
    rpool/swap 1023M 68.6G 361M -
    zdata 11.6G 79.9G 10.7G /zdata
    zdata/zones 921M 79.9G 18K /zdata/zones
    zdata/zones/web 921M 79.9G 21K /zdata/zones/web
    zdata/zones/web/ROOT 921M 79.9G 18K legacy
    zdata/zones/web/ROOT/zbe 921M 79.9G 921M legacy
    zdata/zones/zonetest 54K 79.9G 18K /big-zone/export/zonetest
    zdata/zones/zonetest/ROOT 36K 79.9G 18K legacy
    zdata/zones/zonetest/ROOT/zbe 18K 79.9G 18K legacy
    jay@alien:/zdata/zones/zonetest# pwd
    /zdata/zones/zonetest
    jay@alien:/zdata/zones/zonetest# ls -ls
    total 6
    3 drwxr-xr-x 2 root sys 2 Feb 8 2009 dev
    3 drwxr-xr-x 16 root root 19 Feb 8 2009 root
    jay@alien:/zdata/zones/zonetest# cd root
    jay@alien:/zdata/zones/zonetest/root# ls -ls
    total 52902
    1 lrwxrwxrwx 1 root root 9 Feb 1 20:29 bin -> ./usr/bin
    3 drwxr-xr-x 13 root sys 15 Feb 8 2009 dev
    11 drwxr-xr-x 55 root sys 168 Feb 8 2009 etc
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:26 home
    15 drwxr-xr-x 9 root bin 241 Feb 4 2009 lib
    3 drwxr-xr-x 2 root sys 2 Jan 22 16:23 mnt
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:26 net
    3 drwxr-xr-x 4 root sys 4 Jan 24 15:26 opt
    3 dr-xr-xr-x 2 root root 2 Jan 22 16:23 proc
    3 drwx------ 3 root root 7 Feb 6 2009 root
    5 drwxr-xr-x 2 root sys 47 Jan 22 16:24 sbin
    3 drwxr-xr-x 4 root root 4 Jan 22 16:23 system
    3 drwxrwxrwt 2 root sys 2 Feb 8 2009 tmp
    5 drwxr-xr-x 30 root sys 42 Feb 6 2009 usr
    3 drwxr-xr-x 32 root sys 32 Feb 6 2009 var
    52835 -rw-r--r-- 1 root root 42882560 Jan 22 16:35 webmin-1.441.pkg
    jay@alien:/zdata/zones/zonetest/root#
    I think my problem is there ...
    jay@alien:/big-zone/export/zonetest# pwd
    /big-zone/export/zonetest
    jay@alien:/big-zone/export/zonetest# ls -ls
    total 8
    2 ---------- 1 root root 114 Dec 31 1969 @LongLink
    3 drwxr-xr-x 2 root root 2 Feb 1 21:10 root
    3 drwx------ 4 root root 4 Feb 1 21:10 zonetest
    jay@alien:/big-zone/export/zonetest# cd zonetest/
    jay@alien:/big-zone/export/zonetest/zonetest# ls -ls
    total 6
    3 drwxr-xr-x 2 root sys 2 Feb 8 2009 dev
    3 drwxr-xr-x 4 root root 5 Feb 1 21:10 root
    jay@alien:/big-zone/export/zonetest/zonetest# cd root
    jay@alien:/big-zone/export/zonetest/zonetest/root# ls -ls
    total 7
    1 lrwxrwxrwx 1 root root 9 Feb 1 21:10 bin -> ./usr/bin
    3 drwxr-xr-x 4 root root 4 Jan 22 16:23 system
    3 drwxr-xr-x 23 root sys 28 Feb 1 21:11 usr
    I think I have a problem with my ZFS mountpoints, but I don't know how to resolve this.
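    One way to avoid both the tar nesting and the mountpoint clash is to move the zonepath dataset with zfs send/receive and then attach the zone. An untested sketch, using the dataset and host names from the listings above (note: zonecfg create -a expects the zonepath to directly contain the detached zone's root/ directory and SUNWdetached.xml, not a nested zonetest/ directory; it also assumes jay can receive into zdata/zones, e.g. via pfexec):
    host1# zoneadm -z zonetest halt
    host1# zoneadm -z zonetest detach
    host1# zfs snapshot -r big-zone/export/zonetest@migrate
    host1# zfs send -R big-zone/export/zonetest@migrate | ssh jay@alien pfexec zfs receive zdata/zones/zonetest
    host2# zfs set mountpoint=/big-zone/export/zonetest zdata/zones/zonetest
    host2# zonecfg -z zonetest
    zonecfg:zonetest> create -a /big-zone/export/zonetest
    zonecfg:zonetest> commit
    zonecfg:zonetest> exit
    host2# zoneadm -z zonetest attach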

  • FilesystemMountPoints for ufs disks mounted to non-global zones

    Hello,
    I have a SAN UFS disk to be used as failover storage, mounted in non-global zones (NGZ).
    Solaris 10 nodes using Cluster 3.2.
    I'm looking for the correct value for the FilesystemMountPoints property and the vfstab entry required for a failover disk mounted in an NGZ.
    Should the path NOT include the NGZ root path?
    From the man page for SUNW.HAStoragePlus, for the property FilesystemMountPoints:
    You can specify both the path in a non-global zone and the path in a global zone, in this format:
    Non-GlobalZonePath:GlobalZonePath
    The global zone path is optional. If you do not specify a global zone path, Sun Cluster assumes that the path in
    the non-global zone and in the global zone are the same. If you specify the path as
    Non-GlobalZonePath:GlobalZonePath, you must specify GlobalZonePath in the global zone's /etc/vfstab.
    The default setting for this property is an empty list.
    You can use the SUNW.HAStoragePlus resource type to make a file system available to a non-global zone. To enable
    the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global zone and in the
    non-global zone. The SUNW.HAStoragePlus resource type makes the file system available to the non-global zone
    by mounting the file system in the global zone. The resource type then performs a loopback mount in the
    non-global zone.
    Each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in all
    global zones. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in non-global zones.
    SUNW.HAStoragePlus resources that specify local file systems can only belong in a failover resource group
    with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. You
    can specify both local and global file system mount points at the same time.
    Any file system whose mount point is present in the FilesystemMountPoints extension property is assumed to
    be local if its /etc/vfstab entry satisfies both of the following conditions:
    1. The non-global mount option is specified.
    2. The "mount at boot" field for the entry is set to "no."
    In my situation, I want to mount the disk to /mysql_data on the NGZ called ftp_zone. So, which is the correct setup?
    a. FilesystemMountPoints=/mysql_data:/zones/ftp_zone/root/mysql_data
    Global zone vfstab entry: /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /zones/ftp_zone/root/mysql_data ufs 1 no logging
    NGZ mount point: /mysql_data
    OR
    b. FilesystemMountPoints=/mysql_data:/mysql_data (can be condensed to simply /mysql_data)
    Global zone vfstab entry: /dev/md/ftpabin/dsk/d110 /dev/md/ftpabin/rdsk/d110 /mysql_data ufs 1 no logging
    NGZ mount point: /mysql_data
    Should the path NOT include the NGZ root path?
    And should the fsck pass # be 1 or 2?
    Looking at this example from p. 26 of
    http://wikis.sun.com/download/attachments/24543510/820-4690.pdf
    This example doesn't mention the entry in vfstab.
    Create a resource group that can hold services in nodea:zonex and nodeb:zoney
    nodea# clresourcegroup create -n nodea:zonex,nodeb:zoney test-rg
    Make sure the HAStoragePlus resource is registered
    nodea# clresourcetype register SUNW.HAStoragePlus
    Now add a UFS [or VxFS] failover file system: mount /bigspace1 at /fail-over/export/install in the NGZ
    nodea# clresource create -t SUNW.HAStoragePlus -g test-rg \
    -p FilesystemMountPoints=/fail-over/export/install:/bigspace1 \
    ufs-hasp-rs
    Thank you!

    Hi,
    /zones/oracle-z is the root directory of my zone.
    * add the device to the zone :
    root@mpbxapp1 # zonecfg -z oracle-z
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/dsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> add device
    zonecfg:oracle-z:device> set match=/dev/global/rdsk/d12s0
    zonecfg:oracle-z:device> end
    zonecfg:oracle-z> exit
    * add FS to the NGZ's /etc/vfstab (you may omit this step; I don't know why, but it works without it :) ):
    root@mpbxapp1 # vi /zones/oracle-z/root/etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/oracle ufs 1 no logging
    * add FS to global zone's /etc/vfstab :
    root@mpbxapp1 # vi /etc/vfstab
    /dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /zonefs/oracle ufs 1 no logging
    * set the FilesystemMountPoints property :
    root@mpbxapp1 # /usr/cluster/bin/clresource set -p FilesystemMountPoints=/global/oracle:/zonefs/oracle oracle-hastp
    With this configuration you can make sure the FS is not directly accessible from the global zone. Actually, it is accessible, but under a different path; for example, Oracle cannot be started/stopped from the global zone because the control file cannot be accessed. :)
    Hope this helps,
    Murat
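    To sanity-check the result, you might verify the property and then test a switchover (the resource and prompt names are from Murat's example above; the target node and resource group names are placeholders):
    root@mpbxapp1 # /usr/cluster/bin/clresource show -p FilesystemMountPoints oracle-hastp
    root@mpbxapp1 # /usr/cluster/bin/clresourcegroup switch -n <other-node> <oracle-rg>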

  • Unexpected behavior: Solaris 10, VLAN, IPMP, non-global zones

    I've configured a system with several non-global zones.
    Each of them has IP connectivity via a separate VLAN (one VLAN per non-global zone). The VLANs are established by the global zone and are additionally brought under the control of IPMP.
    I followed the instructions described at:
    http://forum.sun.com/thread.jspa?threadID=21225&messageID=59653#59653
    to create the default routers for the non-global zones.
    In addition to that, I've created the default route for the 2nd IPMP interface (to keep the route in the non-global zone in case of an IPMP failover), i.e.:
    route add default 172.16.3.1 -ifp ce1222000
    route add default 172.16.3.1 -ifp ce1222002
    Furthermore, I've put 172.16.3.1 in the /etc/defaultrouter of the global zone, to ensure it will be the first entry in the routing table (because it's the default router for the global zone).
    Here's the unexpected part:
    I tried to reach an IP target outside the configured subnets, say 172.16.1.3, via ICMP. The router 172.16.3.1 knows the proper route to reach it. The first few tries (I can't remember the exact number) went out through ce1222000 and the associated ICMP replies travelled back through ce1222000. But suddenly the outgoing interface changed to ce1322000 or ce1122000! The default routers configured on those VLANs are not aware of 172.16.1.3 (172.16.1.0/24), and there was no answer. The default routes seemed to be "cycled" between the configured ones.
    Furthermore, the connection from the outside to the non-global zones (which have only one default router configured: that of the VLAN the non-global zone belongs to) broke intermittently.
    So, how do I get the combination of VLANs, IPMP, different default routers, and non-global zones running?
    Got the following config visible in the global zone:
    (the 172.31.x.y addresses are the SC3.1u4 private interconnect)
    netstat -rn
    Routing Table: IPv4
      Destination           Gateway           Flags  Ref   Use   Interface
    172.31.193.1         127.0.0.1            UH        1      0  lo0
    172.16.19.0          172.16.19.6          U         1   4474  ce1322000
    172.16.19.0          172.16.19.6          U         1      0  ce1322000:1
    172.16.19.0          172.16.19.6          U         1   1791  ce1322002
    172.31.1.0           172.31.1.2           U         1 271194  ce5
    172.31.0.128         172.31.0.130         U         1 271158  ce1
    172.16.11.0          172.16.11.6          U         1   8715  ce1122000
    172.16.11.0          172.16.11.6          U         1      0  ce1122000:1
    172.16.11.0          172.16.11.6          U         1   7398  ce1122002
    172.16.3.0           172.16.3.6           U         1   4888  ce1222000
    172.16.3.0           172.16.3.6           U         1      0  ce1222000:1
    172.16.3.0           172.16.3.6           U         1   4236  ce1222002
    172.16.27.0          172.16.27.6          U         1      0  ce1411000
    172.16.27.0          172.16.27.6          U         1      0  ce1411000:1
    172.16.27.0          172.16.27.6          U         1      0  ce1411002
    192.168.0.0          192.168.0.62         U         1  24469  ce3
    172.31.193.0         172.31.193.2         U         1    651  clprivnet0
    172.16.11.0          172.16.11.6          U         1      0  ce1122002:1
    224.0.0.0            192.168.0.62         U         1      0  ce3
    default              172.16.3.1           UG        1   1454
    default              172.16.19.1          UG        1      0  ce1322000
    default              172.16.19.1          UG        1      0  ce1322002
    default              172.16.11.1          UG        1      0  ce1122000
    default              172.16.11.1          UG        1      0  ce1122002
    default              172.16.3.1           UG        1      0  ce1222000
    default              172.16.3.1           UG        1      0  ce1222002
    127.0.0.1            127.0.0.1            UH        4 1048047  lo0
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
    index 1
            inet 127.0.0.1 netmask ff000000
    lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
    index 1
            zone Z-BTO1-1
            inet 127.0.0.1 netmask ff000000
    lo0:2: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
    index 1
            zone Z-BTO1-2
            inet 127.0.0.1 netmask ff000000
    lo0:3: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
    index 1
            zone Z-ITR1-1
            inet 127.0.0.1 netmask ff000000
    lo0:4: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
    index 1
            zone Z-TDN1-1
            inet 127.0.0.1 netmask ff000000
    lo0:5: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232
    index 1
            zone Z-DRB1-1
            inet 127.0.0.1 netmask ff000000
    ce1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500
    index 10
            inet 172.31.0.130 netmask ffffff00 broadcast 172.31.0.255
            ether 0:3:ba:f:63:95
    ce3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
            inet 192.168.0.62 netmask ffffff00 broadcast 192.168.0.255
            groupname ipmp0
            ether 0:3:ba:f:68:1
    ce5: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500
    index 9
            inet 172.31.1.2 netmask ffffff00 broadcast 172.31.1.127
            ether 0:3:ba:d5:b1:44
    ce1122000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
    index 2
            inet 172.16.11.6 netmask ffffff00 broadcast 172.16.11.127
            groupname ipmp2
            ether 0:3:ba:f:63:94
    ce1122000:1:
    flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS>
    mtu 1500 index 2
            inet 172.16.11.7 netmask ffffff00 broadcast 172.16.11.127
    ce1122002:
    flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
    1500 index 3
            inet 172.16.11.8 netmask ffffff00 broadcast 172.16.11.127
            groupname ipmp2
            ether 0:3:ba:f:68:0
    ce1122002:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
    mtu 1500 index 3
            inet 172.16.11.10 netmask ffffff00 broadcast 172.16.11.255
    ce1122002:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
    mtu 1500 index 3
            zone Z-ITR1-1
            inet 172.16.11.9 netmask ffffff00 broadcast 172.16.11.255
    ce1222000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
    index 4
            inet 172.16.3.6 netmask ffffff00 broadcast 172.16.3.127
            groupname ipmp3
            ether 0:3:ba:f:63:94
    ce1222000:1:
    flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS>
    mtu 1500 index 4
            inet 172.16.3.7 netmask ffffff00 broadcast 172.16.3.127
    ce1222002:
    flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
    1500 index 5
            inet 172.16.3.8 netmask ffffff00 broadcast 172.16.3.127
            groupname ipmp3
            ether 0:3:ba:f:68:0
    ce1222002:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
    mtu 1500 index 5
            zone Z-BTO1-1
            inet 172.16.3.9 netmask ffffff00 broadcast 172.16.3.255
    ce1222002:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
    mtu 1500 index 5
            zone Z-BTO1-2
            inet 172.16.3.10 netmask ffffff00 broadcast 172.16.3.255
    ce1322000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
    index 6
            inet 172.16.19.6 netmask ffffff00 broadcast 172.16.19.127
            groupname ipmp1
            ether 0:3:ba:f:63:94
    ce1322000:1:
    flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS>
    mtu 1500 index 6
            inet 172.16.19.7 netmask ffffff00 broadcast 172.16.19.127
    ce1322002:
    flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
    1500 index 7
            inet 172.16.19.8 netmask ffffff00 broadcast 172.16.19.127
            groupname ipmp1
            ether 0:3:ba:f:68:0
    ce1322002:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
    mtu 1500 index 7
            zone Z-TDN1-1
            inet 172.16.19.9 netmask ffffff00 broadcast 172.16.19.255
    ce1411000: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500
    index 12
            inet 172.16.27.6 netmask ffffff00 broadcast 172.16.27.255
            groupname ipmp4
            ether 0:3:ba:f:63:94
    ce1411000:1:
    flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS>
    mtu 1500 index 12
            inet 172.16.27.7 netmask ffffff00 broadcast 172.16.27.255
    ce1411002:
    flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu
    1500 index 13
            inet 172.16.27.8 netmask ffffff00 broadcast 172.16.27.255
            groupname ipmp4
            ether 0:3:ba:f:68:0
    ce1411002:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4>
    mtu 1500 index 13
            zone Z-DRB1-1
            inet 172.16.27.9 netmask ffffff00 broadcast 172.16.27.255
    clprivnet0:
    flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu
    1500 index 11
            inet 172.31.193.2 netmask ffffff00 broadcast 172.31.193.255
            ether 0:0:0:0:0:2

  • How to create a separate /var partition on solaris non-global zone

    Hi
    I have found no simple way to create a separate /var partition in a Solaris non-global zone.
    I am using Solaris 10 u9 and my root pool is ZFS. My zone's zonepath is also a separate ZFS fs.
    But I do not know how to make /var the mountpoint of another ZFS dataset, since /var is not empty.
    I also do not know if there is a way to install a zone with /var as a separate partition (outside '/').
    That would be really useful.
    Any suggestion?
    Thanks

    I meant a separate ZFS fs with mountpoint '/var' in a non-global zone.
    I am insisting because I do not want /var to fill up '/' in the non-global zone.
    With the default non-global zone installation, you cannot avoid that.
    My zonepath itself is a ZFS fs. I also have a ZFS dataset provisioned to the non-global zone.
    I cannot create a ZFS fs out of that dataset and mount it as '/var', because by then the non-global zone has already installed content in '/var'.
    I want '/var' as a separate dir or mountpoint, for the same reason the global zone gives you that option during install.
    Thanks
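    One possible workaround, untested and only a sketch (the zone name myzone, zonepath /zones/myzone, and dataset rpool/myzone-var are hypothetical): populate a legacy-mounted dataset with the existing /var content, then let zonecfg mount it over /var at boot. zonecfg's fs resource with type=zfs requires the dataset to have mountpoint=legacy.
    global# zoneadm -z myzone halt
    global# zfs create rpool/myzone-var
    global# cd /zones/myzone/root/var && find . | cpio -pdm /rpool/myzone-var
    global# rm -rf /zones/myzone/root/var/*
    global# zfs set mountpoint=legacy rpool/myzone-var
    global# zonecfg -z myzone
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/var
    zonecfg:myzone:fs> set special=rpool/myzone-var
    zonecfg:myzone:fs> set type=zfs
    zonecfg:myzone:fs> end
    zonecfg:myzone> exit
    global# zoneadm -z myzone boot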

  • How to use the virtual Fibre Channel feature in IC 3.5?

    Hi guys, I have noticed that Microsoft released IC 3.5 to support virtual Fibre Channel on Red Hat and CentOS Linux.
    the links are:
    http://technet.microsoft.com/en-us/library/dn531026.aspx
    http://blogs.technet.com/b/virtualization/archive/2014/01/02/linux-integration-services-3-5-announcement.aspx
    In the links above, note 2 says:
    While using virtual fibre channel devices, ensure that logical unit number 0 (LUN 0) has been populated. If LUN 0 has not been populated, a Linux virtual machine might not be able to mount fibre channel devices natively.
    But I do not understand how to ensure that logical unit number 0 has been populated. Anyone who has tested these new features, please share your experience. Thank you very much.
    yoke88
    IM:[email protected]

    The same issue as yours; no HBA card can be seen.
    I have tried the command here to list the Hyper-V virtual Fibre Channel card in Oracle Linux 6.5, but no luck, and EMC PowerPath does not support UEK R3, which is installed by default on OL 6.5 x64. I will try Oracle Linux 6.3 with IC 3.5:
    http://mikent.wordpress.com/2012/02/07/how-to-find-wwn-of-hba-on-redhat-6-x/
    [root@computer01 ~]# uname -a
    Linux computer01.localdomain 3.8.13-26.1.1.el6uek.x86_64 #2 SMP Thu Feb 13 19:42:43 PST 2014 x86_64 x86_64 x86_64 GNU/Linux
    [root@computer01 ~]# lspci -nn
    00:00.0 Host bridge [0600]: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) [8086:7192] (rev 03)
    00:07.0 ISA bridge [0601]: Intel Corporation 82371AB/EB/MB PIIX4 ISA [8086:7110] (rev 01)
    00:07.1 IDE interface [0101]: Intel Corporation 82371AB/EB/MB PIIX4 IDE [8086:7111] (rev 01)
    00:07.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 02)
    00:08.0 VGA compatible controller [0300]: Microsoft Corporation Hyper-V virtual VGA [1414:5353]
    [root@computer01 ~]# fdisk -l
    Disk /dev/sda: 53.7 GB, 53687091200 bytes
    255 heads, 63 sectors/track, 6527 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000896c0
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          64      512000   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2              64        6528    51915776   8e  Linux LVM
    Disk /dev/mapper/vg_computer01-lv_root: 49.0 GB, 48964304896 bytes
    255 heads, 63 sectors/track, 5952 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Disk /dev/mapper/vg_computer01-lv_swap: 4194 MB, 4194304000 bytes
    255 heads, 63 sectors/track, 509 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    [root@computer01 ~]# lsmod |grep hv
    hv_netvsc              21641  0
    hv_utils                7440  0
    hv_balloon              9843  0
    hv_storvsc             10236  2
    hv_vmbus               92011  5 hid_hyperv,hv_netvsc,hv_utils,hv_balloon,hv_storvsc
    [root@computer01 ~]# modinfo hv_vmbus
    filename:       /lib/modules/3.8.13-26.1.1.el6uek.x86_64/kernel/drivers/hv/hv_vmbus.ko
    version:        3.1
    license:        GPL
    srcversion:     8BAC864516830DFFF7DC2B1
    alias:          acpi*:VMBus:*
    alias:          acpi*:VMBUS:*
    depends:
    intree:         Y
    vermagic:       3.8.13-26.1.1.el6uek.x86_64 SMP mod_unload modversions
    Using the Microsoft virtual Fibre Channel troubleshooting guide, I see the event logs indicate the vHBA card was successfully added to the VM.
    yoke88
    IM:[email protected]
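    For reference, the method in the blog post linked above boils down to the fc_host sysfs class; if nothing shows up there, the kernel has not bound an FC transport to the virtual HBA (hostN below is a placeholder for whatever entry appears):
    # ls /sys/class/fc_host/
    # cat /sys/class/fc_host/hostN/port_name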

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones, so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408

    I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here...
    The thread was locked because you were not replying to it. You were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • Live upgrade - Solaris 10 8/07 (U4) with non-global zones and SC 3.2

    Dear all,
    I need to use Live Upgrade for SC 3.2 with non-global zones, going from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents but couldn't find one complete document covering the whole process.
    I know that upgrading Solaris 10 with non-global zones has been supported since my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Oracle 10g non-global zones with asynchronous I/O

    Hi,
    I note that using direct I/O (by setting the forcedirectio while
    mounting the database file systems) and bypassing the file system
    cache may improve database performance significantly, but this should
    be done only for file systems in which database files and redo log
    files exist. If direct I/O is used and there is not enough database
    buffer cache, it may even decrease the performance by moving the
    problem from double buffering to a lack of database buffer cache. So,
    this performance tuning must be planned carefully, and the database
    buffer cache should be sized properly. The direct I/O option should
    not be used for other file systems used by other applications because
    they still need the UFS buffer cache.
    Now, I have Oracle database installed inside a non-global zone and I
    see a lot of Asynchronous I/O wait warnings in the Oracle Alert log
    file. Storage mount points with UFS filesystem contain the Oracle
    datafiles and redo log files. In addition, two Oracle datafiles of 10
    GB each reside on the local disks. The Oracle init.ora parameter to
    set asynchronous I/O for Oracle database files is
    FILESYSTEMIO_OPTIONS=SETALL.
    Although the above parameter was set during the database installation,
    the aiowait warnings don't seem to disappear.
    Can I use the "forcedirectio" option at the Operating System /etc/
    vfstab file for Oracle datafiles and redo log files?
    Or, should I just move the Oracle database files residing on the local
    disks to the external storage? Will this take care of aiowait warnings
    and if yes, how? The storage is a DAS.
    Regards
    Sandeep
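    For what it's worth, forcedirectio is just a UFS mount option, so a vfstab line for a filesystem holding only datafiles and redo logs would look roughly like the sketch below (the device names and mount point are made up):
    /dev/dsk/c2t1d0s0 /dev/rdsk/c2t1d0s0 /u01/oradata ufs 2 yes forcedirectio,logging
    As the reasoning above says, this should be limited to the database filesystems and paired with a properly sized database buffer cache.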


  • What options do I have to patch the recommended patchset on Solaris 10 with a bunch of non-global zones?

    With the standard patching process (installcluster), it takes a looong time, since each zone needs to be validated. Is there any option to apply the patchset to the global zone only, then upgrade the non-global zones later?
    If possible, I'd like to use LU.

    You can use LU, but it will depend on your system config. There are instructions in the README of the patchset for installing it on an alternate boot environment (previously created using lucreate).
    If you plan to use LU, read the following docs first to avoid common issues:
    Solaris Live Upgrade Software Patch Requirements(Doc ID 1004881.1)
    List of currently unsupported Live Upgrade (LU) configurations (Doc ID 1396382.1)
    You can also use the Parallel Patching feature to improve performance:
    https://blogs.oracle.com/patch/entry/zones_parallel_patching_feature_now
    Solaris 10 10/09: Zones Parallel Patching to Reduce Patching Time (System Administration Guide: Oracle Solaris Containers…
    What you can't do is patch the global zone now and the non-global zones later (unless the zones are detached). It's a requirement that the global and non-global zones stay synchronized at all times (considering that they share the same kernel).
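    If I remember correctly, parallel patching is configured through /etc/patch/pdo.conf (shipped with the newer patch utilities); a sketch:
    # /etc/patch/pdo.conf
    num_proc=4   # patch up to 4 non-global zones in parallel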

  • Sun cluster 3.20, live upgrade with non-global zones

    I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones running Sol 10 8/07 (U4), which I would like to upgrade to Sol 10 10/08 (U6). The root filesystem of the non-global zones is ZFS and on shared SAN disks, so it can be failed over.
    For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
    The tricky part is going to be performing a Live Upgrade on the non-global zones, as their root fs is on the shared disk. I have a free internal disk on each of the nodes for ABE environments. But when I run the lucreate command, is it going to put the ABE of the zones on the internal disk as well, or can I specify the location of the ABE for the non-global zones? Ideally I want this to be on shared disk.
    Any assistance gratefully received

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work. But if you use the Tools icon in the upper right-hand corner and click on attachments, you'll find the document. Its content is solely based on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
    Regards
    Hartmut

  • Problem with exporting devices to non-global zone

    Hi,
    I have a problem with exporting devices to my Solaris zones (I am trying to add support for mounting /dev/lofi/* in my non-global zone).
    I created a config for my zone.
    Here it is:
    $ zonecfg -z sapdev info
    zonename: sapdev
    zonepath: /export/home/zones/sapdev
    brand: native
    autoboot: true
    bootargs:
    pool:
    limitpriv: default,sys_time
    scheduling-class:
    ip-type: shared
    fs:
            dir: /sap
            special: /dev/dsk/c1t44d0s0
            raw: /dev/rdsk/c1t44d0s0
            type: ufs
            options: []
    net:
            address: 194.29.128.45
            physical: ce0
    device
            match: /dev/lofi/1
    device
            match: /dev/rlofi/1
    device
            match: /dev/lofi/2
    device
            match: /dev/rlofi/2
    attr:
            name: comment
            type: string
            value: "This is SAP developement zone"
    global# lofiadm
    Block Device File
    /dev/lofi/1 /root/SAP_DB2_9_LUW.iso
    /dev/lofi/2 /usr/tmp/fsfile
    I rebooted the non-global zone, and even rebooted the global zone, but after that there are no /dev/*lofi/* files in the sapdev zone.
    What am I doing wrong? Maybe I cut my Sol 10 u4 SPARC installation down too much.
    Can anybody help me?
    Thanks for help,
    Marek

    I experienced the same problem on my system, Sol 10 08/07.
    Normally, when the zone enters the READY state during boot, its zoneadmd will run devfsadm -z <zone>. In my understanding this is to create the necessary device files in ZONEPATH/dev.
    This worked well until recently. Now only the directories are created.
    It seems as if devfsadm -z is broken. Somebody should open a call with Sun.
    As a workaround you can easily copy the device files into the zone. It is important not to copy the symbolic link but the target:
    # cp /dev/lofi/1 ZONEPATH/dev/lofi
    Hope this helps,
    Konstantin Gremliza
