Sun Cluster 3.2, Live Upgrade with non-global zones

I have a two-node cluster with 4 HA-container resource groups holding 4 non-global zones, running Solaris 10 8/07 (U4), which I would like to upgrade to Solaris 10 10/08 (U6). The root filesystem of the non-global zones is ZFS and sits on shared SAN disks so that it can be failed over.
For the Live Upgrade I need to convert the root ZFS to UFS, which should be straightforward.
The tricky part is going to be performing a Live Upgrade on the non-global zones, as their root fs is on the shared disk. I have a free internal disk on each of the nodes for the ABEs. But when I run the lucreate command, is it going to put the ABE of the zones on the internal disk as well, or can I specify the location of the ABE for the non-global zones? Ideally I want this to be on the shared disk.
Any assistance gratefully received

Hi,
I am not sure whether this document:
http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
has been on the list of docs you found already.
If you click on the download link, it won't work, but if you use the Tools icon in the upper right-hand corner and click on Attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
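As far as I can tell you cannot point lucreate at a separate location just for the zone roots: the -m mappings cover the global zone's file systems, and zone roots that live on a file system shared between BEs are copied alongside the originals on that same shared storage (with a -<BEname> suffix, as the lucreate output in the similar threads below shows). A rough sketch of an lucreate onto the internal disk, assuming c0t1d0 is the spare internal disk (device names are examples, not from your config):
/usr/sbin/lucreate -n s10u6_BE \
-m /:/dev/dsk/c0t1d0s0:ufs \
-m /var:/dev/dsk/c0t1d0s3:ufs
lustatus    # verify the ABE is complete before running luupgrade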
Regards
Hartmut

Similar Messages

  • Live Upgrade - Solaris 10 8/07 (U4) with non-global zones and SC 3.2

    Dear all,
    I need to use Live Upgrade for SC 3.2 with non-global zones, going from Solaris 10 U4 to Solaris 10 10/09 (the latest release), and then update the cluster to 3.2 U3.
    I don't know where to start; I've read lots of documents but couldn't find one complete document covering the whole process.
    I know that upgrading Solaris 10 with non-global zones is supported as of my Solaris 10 release, but I am not sure if it's supported with SC.
    Appreciate your help

    Hi,
    I am not sure whether this document:
    http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach
    has been on the list of docs you found already.
    If you click on the download link, it won't work, but if you use the Tools icon in the upper right-hand corner and click on Attachments, you'll find the document. Its content is based solely on configurations with ZFS as root and zone root, but it should have valuable information for other deployments as well.
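    The flow in that blueprint is essentially the standard LU sequence; a hedged sketch, assuming the upgrade media is available under /cdrom/s10u8 (the path is an example) and that the Sun Cluster packages are updated separately per the SC 3.2 upgrade guide:
    /usr/sbin/lucreate -n s10u8_BE                      # clone the current BE, zones included
    /usr/sbin/luupgrade -u -n s10u8_BE -s /cdrom/s10u8  # upgrade the OS in the ABE
    /usr/sbin/luactivate s10u8_BE                       # mark the ABE active for the next boot
    init 6                                              # use init/shutdown after luactivate, never a plain reboot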
    Regards
    Hartmut

  • Sharing software package with non-global zone

    I've installed Solaris Studio 12.3 into the global zone on Solaris 11, following the instructions from http://pkg-register.oracle.com, and I want to make it available to the non-global zones. Do I need to install it independently in every zone, or can I export/share it?

    All files are normally installed in /opt/solarisstudio12.3. You can share that directory, but installing it independently in each zone is recommended if you want different versions in different zones.
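    If you do decide to share a single install, a minimal sketch of loopback-mounting it read-only into a zone (the zone name is an example):
    zonecfg -z myzone
    zonecfg:myzone> add fs
    zonecfg:myzone:fs> set dir=/opt/solarisstudio12.3
    zonecfg:myzone:fs> set special=/opt/solarisstudio12.3
    zonecfg:myzone:fs> set type=lofs
    zonecfg:myzone:fs> add options ro
    zonecfg:myzone:fs> end
    zonecfg:myzone> commit
    zonecfg:myzone> exit
    zoneadm -z myzone reboot    # the mount shows up after the zone reboots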

  • Patching sol10 with non-global zones.

    The patch README says to perform the installation in single-user mode. If the system is in single-user mode, will the non-global zones get patched as well?

    Hello
    I found a couple of docs that can help you; see the one that describes how to create a BE in a solaris10-branded zone, which is the safest way to patch a solaris10 brand zone.
    How to Install a Solaris Patchset in a Branded Zone (Doc ID 1489197.1)
    And if you are using Solaris 11.1, you can use something similar to LU:
    How to create and patch a new boot environment in an Oracle Solaris 10 Zone on a Oracle Solaris 11.1 system (Doc ID 1558773.1)
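    To the original question: as far as I know, a patchadd run in the global zone also walks the native non-global zones by default, whether or not you are in single-user mode. A minimal sketch (the patch ID and path are made-up examples):
    patchadd /var/tmp/patches/123456-07       # global zone: native NGZs are patched too
    patchadd -G /var/tmp/patches/123456-07    # -G would restrict the patch to the global zone
    zlogin myzone showrev -p | grep 123456    # verify the patch shows up inside a zone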
    Regards
    Eze

  • Live Upgrade with Zones - still not working?

    Hi Guys,
    I'm trying to do a Live Upgrade from Solaris 10 Update 3 to Update 4 with a non-global zone installed. It's driving me crazy.
    I did everything as described in the documentation: installed SUNWlucfg, updated SUNWluu and SUNWlur (supposedly updated, because they are exactly the same as in Update 3) both from packages and with the script from the Update 4 DVD, and installed all patches mentioned in infodoc 72099. But the lucreate process still complains about missing patches, and I've checked five times that they are installed. They are. It doesn't even allow me to create the second BE. Once I detached the zone, everything went smoothly, but I was under the impression that Live Upgrade with zones would work in Update 4.
    It did create the second BE before SUNWlucfg was installed, but it failed at the upgrade stage with exactly the same message: install patches according to 72099. After installing SUNWlucfg, the Live Upgrade process fails instantly; that's real progress, I must admit.
    Is it still "mission impossible" to Live Upgrade with non-global zones installed? Or have I missed something?
    Any ideas or success stories are greatly appreciated. Thanks.

    I upgraded from u3 to u5.
    The upgrade went fine and the zones boot up, but there are problems:
    sshd doesn't work.
    svcs -xv prints out this:
    svc:/network/rpc/gss:default (Generic Security Service)
    State: uninitialized since Fri Apr 18 09:54:33 2008
    Reason: Restarter svc:/network/inetd:default is not running.
    See: http://sun.com/msg/SMF-8000-5H
    See: man -M /usr/share/man -s 1M gssd
    Impact: 8 dependent services are not running:
    svc:/network/nfs/client:default
    svc:/system/filesystem/autofs:default
    svc:/system/system-log:default
    svc:/milestone/multi-user:default
    svc:/system/webconsole:console
    svc:/milestone/multi-user-server:default
    svc:/network/smtp:sendmail
    svc:/network/ssh:default
    svc:/network/inetd:default (inetd)
    State: maintenance since Fri Apr 18 09:54:41 2008
    Reason: Restarting too quickly.
    See: http://sun.com/msg/SMF-8000-L5
    See: man -M /usr/share/man -s 1M inetd
    See: /var/svc/log/network-inetd:default.log
    Impact: This service is not running.
    It seems as though the container was not upgraded.
    more /etc/release in the container shows this:
    Solaris 10 11/06 s10s_u3wos_10 SPARC
    Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
    Use is subject to license terms.
    Assembled 14 November 2006
    How do I fix the inetd service?
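    If the zone contents really were upgraded and only SMF is wedged, I assume something like this, run inside the zone, would be enough:
    svcadm clear svc:/network/inetd:default    # clear the maintenance state
    svcadm enable -r svc:/network/ssh:default  # -r also enables the required dependencies
    svcs -xv                                   # re-check for remaining faults
    But since /etc/release inside the container still shows U3, I suspect the zone was never actually touched by the upgrade and the real fix is to redo the upgrade of the zone.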

  • What options do I have to patch the recommended patchset on Solaris 10 with a bunch of non-global zones?

    With the standard patching process (installcluster), it takes a long time since each zone needs to be validated. Is there any option to apply the patchset to the global zone only, and then upgrade the non-global zones later?
    If possible, I'd like to use LU.

    You can use LU, but it will depend on your system config. There are instructions in the README of the patchset for installing it on an alternate boot environment (previously created using lucreate).
    If you plan to use LU, read the following docs first to avoid common issues:
    Solaris Live Upgrade Software Patch Requirements(Doc ID 1004881.1)
    List of currently unsupported Live Upgrade (LU) configurations (Doc ID 1396382.1)
    You can also use the Parallel Patching feature to improve performance:
    https://blogs.oracle.com/patch/entry/zones_parallel_patching_feature_now
    Solaris 10 10/09: Zones Parallel Patching to Reduce Patching Time (System Administration Guide: Oracle Solaris Containers…
    What you can't do is patch the global zone only and the non-global zones later (unless the zones are detached). It's a requirement that the global and non-global zones stay synchronized at all times (considering that they share the same kernel).
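    For the parallel patching feature, the knob is num_proc in /etc/patch/pdo.conf; a rough sketch (the value 4 is only an example, and from what I recall the effective value is capped relative to the number of online CPUs):
    grep num_proc /etc/patch/pdo.conf    # check the current setting
    vi /etc/patch/pdo.conf               # set e.g. num_proc=4
    ./installcluster --s10patchset       # then apply the patchset as usual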

  • After installing 137137-09 patch OK in global zone, bad in non global zone

    Hi all,
    scratching my head with this one.
    Installed 137137-09 fine on a Sun Fire V210. The machine has one non-global zone running a proxy server (nothing very exciting there!). The non-global zone has a local filesystem attached, but I don't think this is the issue (on my test V210 I created the same sort of filesystem and was unable to replicate the problem :( ).
    So 137137-09 is fine in the global zone (I had the non-global zone halted when the patch was installed). It is also installed in the non-global zone (i.e. when the zone boots it reports rev 137137-09 via uname), but in the patch log in the non-global zone I get this:
    PKG=SUNWust2.v
    Original package not installed.
    pkgadd: ERROR: ERROR: unable to get zone brand: zonecfg_get_brand: No such zone configured
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed. This installation
    will attempt to overwrite this package.
    /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: /usr/local/zones/cotchin/lu/dev/.SUNW_patches_1000109009-1847556-000000d3e42faa84/137137-09/FJSVcpcu/install/checkinstall: cannot open
    pkgadd: ERROR: checkinstall script did not complete successfully
    Dryrun complete.
    No changes were made to the system.
    I'm not sure if the branding error is causing the checkinstall postpatch script error or if they are unrelated. There don't seem to be any obvious permission problems that I can find. I have checked that all the pkg and patch utility patches are up to date on the system. Searching on the brand error gives me a link to a problem with 127127-11, but that was installed on the system before the local zone was created, and all the other seemingly relevant patches (e.g. 119254) are up to date or at a higher revision than recommended.
    I see the same problem on an M5000 which has two non-global zones on it.
    Both machines had the Solaris 10 5/08 update bundle applied when it came out, and have had Recommended patch sets applied at regular intervals since.
    This issue only came to light when trying the latest bundles with 138888-01/02 in them; those fail to install on the global zones because the non-global zone install dies claiming 137137-09 is not installed (which is plainly wrong).
    I've tried to recreate this on a test server but unfortunately everything works as it should, even though the test server has a similar history in terms of patches and original setup to the others.
    I'm planning to detach the non-global zone and try an attach -u to see if it will update the patches properly, but I'm not holding out much hope on that one (I need to wait for a maintenance window when I can take the zone down, in a couple of days).
    Any ideas?

    Well, I am following up on my own post. It seems I have determined what is causing the problem, or at least a situation in which the problem can be reproduced, which I have now done on my test system.
    It seems that if the zone's zonepath is under /usr (e.g. /usr/zones, /usr/local/zones, or some other path under /usr), the patchadd of 137137-09 will fail with a log similar to the one posted above, and this stops further kernel patches (e.g. 138888-02) from being added.
    The test system had everything patched to current, and searching the web I can't find any other instances of this being an issue, but I have reproduced the problem on my test machine (which had worked OK because its test zones were in a filesystem mounted as /zones). When I used zoneadm -z <zonename> move to relocate a zone into /usr/local and applied 137137-09, the same problem came up.
    I'm not sure what is causing this issue. I imagine it might have to do with some sort of confusion between the patch utilities and the read-only loopback filesystems in the sparse-root zone, but I can't be sure.
    Maybe someone at Sun will see this and figure out what the deal is :)
    When I moved my test zone back to /zones the patch applied perfectly, so it's definitely down to having it in /usr or /usr/local (I tried both locations, even though they are separate UFS filesystems on my test server).
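    For anyone else hitting this, a minimal sketch of the workaround (the zone name and target path are examples): halt the zone, move its zonepath out of /usr, and re-apply the patch.
    zoneadm -z cotchin halt
    zoneadm -z cotchin move /zones/cotchin    # relocates the zonepath off /usr
    patchadd /var/tmp/137137-09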
    Oh, and I am running DiskSuite to mirror the filesystems on my V210s, which may or may not have anything to do with it.
    Hope this helps someone in the future at least!

  • Solaris Live Upgrade with NGZ

    Hi
    I am trying to perform a Live Upgrade on my two servers. Both of them have NGZs installed, and those NGZs are on a different zpool (not on the rpool) which is on an external disk.
    I have installed all the latest patches required for LU to work properly, but when I run lucreate I start having problems... (new_s10BE is the new BE I'm creating)
    On my 1st Server:
    I have a global zone and one NGZ named mddtri. This is the error I am getting:
    ERROR: unable to mount zone <mddtri> in </.alt.tmp.b-VBb.mnt>.
    zoneadm: zone 'mddtri': zone root /zoneroots/mddtri/root already in use by zone mddtri
    zoneadm: zone 'mddtri': call to zoneadm failed
    ERROR: unable to mount non-global zones of ABE: cannot make bootable
    ERROR: cannot unmount </.alt.tmp.b-VBb.mnt/var/run>
    ERROR: unable to make boot environment <new_s10BE> bootable
    On my 2nd Server:
    I have a global zone and 10 NGZs. This is the error I am getting:
    WARNING: Directory </zoneroots/zone1> zone <global> lies on a filesystem shared between BEs, remapping path to </zoneroots/zone1/zone1-new_s10BE>
    WARNING: Device <zone1> is shared between BEs, remapping to <zone1-new_s10BE>
    This happens for all the running NGZs.
    Duplicating ZFS datasets from PBE to ABE.
    ERROR: The dataset <zone1-new_s10BE> is on top of ZFS pool. Unable to clone. Please migrate the zone  to dedicated dataset.
    ERROR: Unable to create a duplicate of <zone1> dataset in PBE. <zone1-new_s10BE> dataset in ABE already exists.
    Reverting state of zones in PBE <old_s10BE>
    ERROR: Unable to copy file system from boot environment <old_s10BE> to BE <new_s10BE>
    ERROR: Unable to populate file systems from boot environment <new_s10BE>
    Help, I need to sort this out a.s.a.p!
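    If I read the "on top of ZFS pool" error correctly, lucreate wants each zone root on its own child dataset rather than directly on the pool's top-level dataset, so this is roughly what I plan to try (the dataset names and paths are examples based on zone1):
    zfs create -o mountpoint=/zoneroots/zone1-root zone1/zoneroot    # dedicated child dataset below the pool top
    zoneadm -z zone1 halt
    zoneadm -z zone1 move /zoneroots/zone1-root                      # zonepath now sits on its own dataset
    zoneadm -z zone1 boot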

    Hi,
    I have the same problem with an attached A5200 with mirrored disks (Solaris 9, Volume Manager). The "critical" partitions should be copied to a second system disk, whereas the mirrored partitions should be shared.
    Here is a script that runs lucreate:
    #!/bin/sh
    Logdir=/usr/local/LUscripts/logs
    if [ ! -d ${Logdir} ]
    then
      echo "${Logdir} does not exist"
      exit 1
    fi
    /usr/sbin/lucreate \
    -l ${Logdir}/$0.log \
    -o ${Logdir}/$0.error \
    -m /:/dev/dsk/c2t0d0s0:ufs \
    -m /var:/dev/dsk/c2t0d0s3:ufs \
    -m /opt:/dev/dsk/c2t0d0s4:ufs \
    -m -:/dev/dsk/c2t0d0s1:swap \
    -n disk0
    And here is the output
    root@ahbgbld800x:/usr/local/LUscripts > ./lucreate_disk0.sh
    Discovering physical storage devices
    Discovering logical storage devices
    Cross referencing storage devices with boot environment configurations
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    INFORMATION: Unable to determine size or capacity of slice </dev/md/RAID-INT/dsk/d0>.
    ERROR: An error occurred during creation of configuration file.
    ERROR: Cannot create the internal configuration file for the current boot environment <disk3>.
    Assertion failed: *ptrKey == (unsigned long long)_lu_malloc, file lu_mem.c, line 362
    Abort - core dumped

  • How to install a SUN cluster in a non-global zone?

    How do I install Sun Cluster in a non-global zone? If it is the same as installing Sun Cluster in the global zone, then how do I access the global zone's CD-ROM from the non-global zone?
    Please point me to some docs or urls if there are any. Thanks in advance!

    You don't really install the cluster software in zones. You set up a zone cluster once you have configured the global-zone cluster (in other words, once you have clustered the physical systems + OS).
    http://blogs.sun.com/SC/entry/zone_clusters
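    A hedged sketch of what configuring a zone cluster looks like with clzonecluster(1CL); the cluster name, zonepath, address and host names are all examples, so check the blog above and the SC docs for the full procedure:
    clzonecluster configure zc1
    clzc:zc1> create
    clzc:zc1> set zonepath=/zones/zc1
    clzc:zc1> add node
    clzc:zc1:node> set physical-host=phys-node-1
    clzc:zc1:node> set hostname=zc1-node-1
    clzc:zc1:node> add net
    clzc:zc1:node:net> set address=192.168.10.11
    clzc:zc1:node:net> set physical=e1000g0
    clzc:zc1:node:net> end
    clzc:zc1:node> end
    clzc:zc1> commit
    clzc:zc1> exit
    clzonecluster install zc1
    clzonecluster boot zc1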

  • Lucreate not working with ZFS and non-global zones

    I replied to the thread "Re: lucreate and non-global zones" so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 16.6G 257G 106K /rpool
    rpool/ROOT 4.47G 257G 31K legacy
    rpool/ROOT/s10s_u10wos_17b 4.34G 257G 4.23G /
    rpool/ROOT/s10s_u10wos_17b@patch20130408 3.12M - 4.23G -
    rpool/ROOT/s10s_u10wos_17b/var 113M 257G 112M /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408 864K - 110M -
    rpool/ROOT/patch20130408 134M 257G 4.22G /.alt.patch20130408
    rpool/ROOT/patch20130408/var 26.0M 257G 118M /.alt.patch20130408/var
    rpool/dump 1.55G 257G 1.50G -
    rpool/export 63K 257G 32K /export
    rpool/export/home 31K 257G 31K /export/home
    rpool/h 2.27G 257G 2.27G /h
    rpool/security1 28.4M 257G 28.4M /security1
    rpool/swap 8.25G 257G 8.00G -
    tank 12.9G 261G 31K /tank
    tank/swap 8.25G 261G 8.00G -
    tank/zones 4.69G 261G 36K /zones
    tank/zones/DB 1.30G 261G 1.30G /zones/DB
    tank/zones/DB@patch20130408 1.75M - 1.30G -
    tank/zones/DB-patch20130408 22.3M 261G 1.30G /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP 3.34G 261G 3.34G /zones/APP
    tank/zones/APP@patch20130408 2.39M - 3.34G -
    tank/zones/APP-patch20130408 27.3M 261G 3.33G /.alt.patch20130408/zones/APP-patch20130408
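    For what it's worth, this is how I've been checking the ABE before activating it (the mount point is just an example):
    lumount patch20130408 /mnt    # mount the ABE for inspection
    ls /mnt/zones                 # the remapped APP/DB zone roots should show up here
    luumount patch20130408
    lustatus                      # confirm the BE is complete before luactivate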

    "I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here..."
    The thread was locked because you were not replying to it. You were hijacking that other person's discussion from 2012 to post your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • Problem with exporting devices to non-global zone

    Hi,
    I have a problem with exporting devices to my Solaris zones (I'm trying to add support for mounting /dev/lofi/* in my non-global zone).
    I created a config for my zone.
    Here it is:
    $ zonecfg -z sapdev info
    zonename: sapdev
    zonepath: /export/home/zones/sapdev
    brand: native
    autoboot: true
    bootargs:
    pool:
    limitpriv: default,sys_time
    scheduling-class:
    ip-type: shared
    fs:
    dir: /sap
    special: /dev/dsk/c1t44d0s0
    raw: /dev/rdsk/c1t44d0s0
    type: ufs
    options: []
    net:
    address: 194.29.128.45
    physical: ce0
    device
    match: /dev/lofi/1
    device
    match: /dev/rlofi/1
    device
    match: /dev/lofi/2
    device
    match: /dev/rlofi/2
    attr:
    name: comment
    type: string
    value: "This is SAP developement zone"
    global# lofiadm
    Block Device File
    /dev/lofi/1 /root/SAP_DB2_9_LUW.iso
    /dev/lofi/2 /usr/tmp/fsfile
    I rebooted the non-global zone, and even rebooted the global zone, and after that there are no /dev/*lofi/* files in the sapdev zone.
    What am I doing wrong? Maybe I stripped down my Solaris 10 U4 SPARC installation too much.
    Can anybody help me?
    Thanks for help,
    Marek

    I experienced the same problem on my Solaris 10 8/07 system.
    Normally, when the zone enters the READY state during boot, its zoneadmd will run devfsadm -z <zone>. In my understanding this is what creates the necessary device files in ZONEPATH/dev.
    This worked well until recently. Now only the directories are created.
    It seems as if devfsadm -z is broken. Somebody should open a call with Sun.
    As a workaround you can easily copy the device files into the zone. It is important not to copy the symbolic link but the target.
    # cp /dev/lofi/1 ZONEPATH/dev/lofi
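    The same goes for the second device, and ls -lL is a quick way to confirm you copied the block device node rather than the symlink:
    # cp /dev/lofi/2 ZONEPATH/dev/lofi
    # ls -lL ZONEPATH/dev/lofi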
    Hope this helps,
    Konstantin Gremliza

  • Oracle 10g in non-global zones with asynchronous I/O

    Hi,
    I note that using direct I/O (by setting the forcedirectio option while
    mounting the database file systems) and bypassing the file system
    cache may improve database performance significantly, but this should
    be done only for file systems in which database files and redo log
    files exist. If direct I/O is used and there is not enough database
    buffer cache, it may even decrease the performance by moving the
    problem from double buffering to a lack of database buffer cache. So,
    this performance tuning must be planned carefully, and the database
    buffer cache should be sized properly. The direct I/O option should
    not be used for other file systems used by other applications because
    they still need the UFS buffer cache.
    Now, I have Oracle database installed inside a non-global zone and I
    see a lot of Asynchronous I/O wait warnings in the Oracle Alert log
    file. Storage mount points with UFS filesystem contain the Oracle
    datafiles and redo log files. In addition, two Oracle datafiles of 10
    GB each reside on the local disks. The Oracle init.ora parameter to
    set asynchronous I/O for Oracle database files is
    FILESYSTEMIO_OPTIONS=SETALL.
    Although the above parameter was set during the database installation,
    the aiowait warnings don't seem to disappear.
    Can I use the "forcedirectio" option in the operating system's /etc/vfstab
    file for the Oracle datafiles and redo log files?
    Or, should I just move the Oracle database files residing on the local
    disks to the external storage? Will this take care of aiowait warnings
    and if yes, how? The storage is a DAS.
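    In case it matters, this is the sort of /etc/vfstab entry I have in mind for the datafile and redo log file systems (the device names are just examples, not my real config):
    /dev/dsk/c2t0d0s6  /dev/rdsk/c2t0d0s6  /oradata  ufs  2  yes  forcedirectio,logging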
    Regards
    Sandeep

    I presume you compiled PHP on the Sun server; was this done using gcc or the Sun ONE C compiler?
    If the latter, then you can also use the flag --enable-nonportable-atomics when you run configure.

  • Using a Fibre Channel HBA with a non-global zone.

    I am trying to let a non-global zone use a dual port HBA. Please note the goal is to use the HBA including the SAN devices, not just a device on the SAN. Does anyone know if and how this can be done?
    [root@global:/]# more /etc/release
                           Solaris 10 8/07 s10s_u4wos_12b SPARC ...
    [root@global:/]# zonecfg -z localzone info
    zonename: localzone
    zonepath: /zones/localzone
    brand: native
    autoboot: true
    bootargs:
    pool:
    limitpriv:
    scheduling-class:
    ip-type: shared
    net:
            address: x.x.x.x/24
            physical: qfe0
    device
            match: /dev/fc/fp[0-1]
    device
            match: /dev/cfg/c[1-2]
    device
            match: /dev/*dsk/c[1-2]*
    [root@global:/]# fcinfo hba-port
    HBA Port WWN: 210000e08b083b41
            OS Device Name: /dev/cfg/c1
            Manufacturer: QLogic Corp.
            Model: QLA2342
            Firmware Version: 3.3.24
            FCode/BIOS Version: No Fcode found
            Type: N-port
            State: online
            Supported Speeds: 1Gb 2Gb
            Current Speed: 2Gb
            Node WWN: 200000e08b083b41
    HBA Port WWN: 210100e08b283b41
            OS Device Name: /dev/cfg/c2
            Manufacturer: QLogic Corp.
            Model: QLA2342
            Firmware Version: 3.3.24
            FCode/BIOS Version: No Fcode found
            Type: N-port
            State: online
            Supported Speeds: 1Gb 2Gb
            Current Speed: 2Gb
            Node WWN: 200100e08b283b41
    [root@localzone:dev]# ls fc
    fp0  fp1
    [root@localzone:dev]# ls cfg
    c1  c2
    [root@localzone:dev]# ls dsk | grep s0
    c1t500601613021934Dd0s0
    c1t500601693021934Dd0s0
    c1t50060482D52D5608d0s0
    c1t50060482D52D5626d0s0
    c2t500601613021934Dd0s0
    c2t500601693021934Dd0s0
    c2t50060482D52D5608d0s0
    c2t50060482D52D5626d0s0
    [root@localzone:dev]# ls rdsk | grep s0
    c1t500601613021934Dd0s0
    c1t500601693021934Dd0s0
    c1t50060482D52D5608d0s0
    c1t50060482D52D5626d0s0
    c2t500601613021934Dd0s0
    c2t500601693021934Dd0s0
    c2t50060482D52D5608d0s0
    c2t50060482D52D5626d0s0
    [root@localzone:dev]# fcinfo hba-port
    No Adapters Found.

    You cannot present devices directly to the NGZ (what a mouthful to say/type... sheesh! What's wrong with "local zones", Sun?).
    You can present filesystems and/or ZFS pools, but not HBAs or other devices directly (AFAIK).
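    A rough sketch of the usual alternative, delegating a ZFS pool built on the SAN LUNs to the zone (the pool name is an example, and the device name is taken from the listing above only for illustration):
    zpool create sanpool c1t500601613021934Dd0    # build the pool in the global zone
    zonecfg -z localzone
    zonecfg:localzone> add dataset
    zonecfg:localzone:dataset> set name=sanpool
    zonecfg:localzone:dataset> end
    zonecfg:localzone> commit
    zonecfg:localzone> exit
    zoneadm -z localzone reboot                   # the pool is then visible inside the zone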

  • Netbackup with Solaris non-global zone!

    Hi,
    How do I install and configure NetBackup in a Solaris 10 non-global zone? What steps do I need to follow?
    Thanks
    Tanvir

    I agree with running it from the global zone. The added benefit is that if you back up the root of all zonepaths, then when you add any new non-global zone within that path, the new server will be backed up automatically.
    In the past we had been installing the client on each server, both global and non-global. On our non-global zones /usr is not writable but /opt is, so we would symlink /usr/openv to /opt/openv from the global zone and then remotely install the client software from the backup master via:
    "/usr/openv/netbackup/bin/install_client_files ssh <client>"

  • How to retrieve the number of online procs in a non-global zone with a resource pool

    Is there any way to retrieve the number of online processors of the machine when running in a non-global zone with a resource pool?
    sysconf does not return this value. In fact, this is an excerpt from the man page:
    "If the caller is in a non-global zone and the pools facility is active, sysconf(_SC_NPROCESSORS_CONF) and sysconf(_SC_NPROCESSORS_ONLN) return the number of processors in the processor set of the pool to which the zone is bound."

    So, from within a local zone that's in a pool (say, a pool with 8 CPUs), you want to query how many CPUs really exist in the global zone (where the global zone may actually have 16 CPUs)? I don't think that's possible; in fact, for security reasons it's probably intentionally disabled.
    A quick workaround would be a script/cron-job in the global zone that writes a small file in the filesystem of the local zone... then from within that zone you could read the CPU count.
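    Something along these lines, run from the global zone's crontab (the zone path and file name are examples):
    # in the global zone, e.g. from a cron job every 15 minutes:
    psrinfo | grep -c on-line > /zones/myzone/root/var/tmp/global_ncpus
    # inside the local zone:
    cat /var/tmp/global_ncpus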
    I'm interested though: what are you trying to set up?
    Regards,
    [email protected]
