Zone clustering in LDOMs

A customer wishes to cluster zones using Solaris 10 u11 and Sun Cluster 3.3. The customer has two T4-1s, and each server has two LDOMs. ldom1 on each of the two servers has two non-global zones.
Can server1-ldom1 and server2-ldom1 cluster non-global zone 1?

1. If you want to cluster non-global zones, the global zones need to be part of a cluster. So you need to build a cluster with the two LDOMs (ldom1 on each server) first. Then:
2. You have two choices:
- You can build a failover zone using the HA Container agent, but I assume that this is not what you want.
- You can configure a zone cluster consisting of the two non-global zones in the ldom1 domains (see the sketch below).
This is documented for OSC 3.3 here: http://docs.oracle.com/cd/E37745_01/html/E37727/ggzen.html#scrolltoc
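For illustration, once the two ldom1 domains have been configured as a global cluster with scinstall, a zone cluster spanning them could be set up roughly as follows. This is only a minimal sketch; the zone-cluster name, zonepath, hostnames, IP addresses and NIC are placeholders, not values from this setup.
# cat /var/tmp/zc1.cfg
create -b
set zonepath=/zones/zc1
set brand=cluster
set ip-type=shared
set autoboot=true
add node
set physical-host=server1-ldom1
set hostname=zc1-node1
add net
set address=192.168.10.51/24
set physical=e1000g0
end
end
add node
set physical-host=server2-ldom1
set hostname=zc1-node2
add net
set address=192.168.10.52/24
set physical=e1000g0
end
end
commit
exit
# clzc configure -f /var/tmp/zc1.cfg zc1
# clzc install zc1
# clzc boot zc1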
Regards
Hartmut

Similar Messages

  • Creating Oracle-HA config using zone clusters

    We have a three-node Sun Cluster (3.3u1) on Solaris 10 update 9. We are using Hitachi VSP for external storage. Eventually we may go to RAC (we had to drop the RAC licenses for the time being due to budget cuts). For now I want to deploy zone clusters and create several different Oracle-HA installations.
    I've seen several ways of doing this and I'm not sure what the best practice is or what limitations each method has. (So far I've not been able to get any of them working, but I was using the vanilla 3.3 release and have just rebuilt using 3.3u1 and the 145334-09 core patch.)
    My question is do I create the cluster resource in the zone cluster or in the global cluster?
    One document I'm trying to follow does this at a high level (see the command sketch after this list):
    a) Create an HASP resource in the global cluster to give the zone cluster the use of the zpool that will house the Oracle binaries (add it to my zone cluster using "add dataset")
    b) Create a logical hostname resource in the global cluster (add to my zone cluster using "add net")
    Then from within one node of the zone cluster:
    c) Create the oracle resource group
    d) Register the HASP resource type (if not already)
    e) Create an HASP resource to mount the zpool (this seems like a redundant step)
    f) Create a Logical Hostname resource that will be used by the oracle listener (this seems like a redundant step)
    g) Bring the resource group online
    h) Install the Oracle binaries from the cluster node where the zpool is currently mounted
    i) Install the database and configure the listener, ...
    j) Register the oracle database and listener resource types
    k) Create cluster resource for the listener and the database
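    A command-level sketch of steps a) through g), assuming a zone cluster named chuck1, a shared zpool chuck1-u01 and a placeholder logical hostname ora-lh. This is one commonly documented approach, not necessarily the exact procedure from the book; the clzc part runs in the global zone, the rest inside one zone-cluster node:
    # In the global zone: authorize the zpool and the hostname for the zone cluster (steps a and b)
    clzc configure chuck1
    clzc:chuck1> add dataset
    clzc:chuck1:dataset> set name=chuck1-u01
    clzc:chuck1:dataset> end
    clzc:chuck1> add net
    clzc:chuck1:net> set address=ora-lh
    clzc:chuck1:net> set physical=auto
    clzc:chuck1:net> end
    clzc:chuck1> commit
    clzc:chuck1> exit
    # Inside one node of the zone cluster: resource group, HASP and logical hostname (steps c through g)
    clrg create rg-zc-chuck1-oracle
    clrt register SUNW.HAStoragePlus
    clrs create -g rg-zc-chuck1-oracle -t SUNW.HAStoragePlus -p Zpools=chuck1-u01 rs-zc-hasp-chuck1-u01
    clrslh create -g rg-zc-chuck1-oracle -h ora-lh rs-zc-lh-oracle
    clrg online -eM rg-zc-chuck1-oracle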
    I'm confused as to why I create the resources in the global zone (steps a & b) and then again in the zone cluster... Anyone have any ideas?
    Also, I found a Sun engineer's blog that shows doing everything above in the global zone only. (I haven't gotten this to work either.)
    Thanks,

    When I follow the example in the Sun Cluster Essentials book I get the following error (the error is the same with Sun Cluster 3.3 and 3.3u1). My test cluster is now running 3.3u1 with the 145334-09 core patch set.
    When I get to the step where I create the hasp resource within one zone cluster node:
    node01-chuck1:~ # clrs create -g rg-zc-chuck1-oracle -t SUNW.HAStoragePlus -p zpools=chuck1-u01 rs-zc-hasp-chuck1-u01
    clrs: node02:chuck1 - More than one matching zpool for 'chuck1-u01': zpools configured to HAStoragePlus must be unique.
    clrs: (C189917) VALIDATE on resource rs-zc-hasp-chuck1-u01, resource group rg-zc-chuck1-oracle, exited with non-zero exit status.
    clrs: (C720144) Validation of resource rs-zc-hasp-chuck1-u01 in resource group rg-zc-chuck1-oracle on node node02-chuck1 failed.
    clrs: (C891200) Failed to create resource "rs-zc-hasp-chuck1-u01".
    My environment:
    2-node cluster, with physical nodes - node01, node02
    1 local zpool on each physical node called "zones" and mounted as /zones (These are on disks that are only visible to each physical node)
    1 local zfs filesystem on each node called zones/chuck1 and mounted as /zones/chuck1
    1 zone cluster created with zonepath = /zones/chuck1 called chuck1
    1 zpool created on shared storage - chuck1-u01
    At this point all the cluster checks and status commands show everything is healthy. I have done multiple reboots/halts/shutdowns of the zone cluster and no issues that I can see.
    1) I have added the dataset to the zone cluster config from the global zone and rebooted the zone cluster. Note that I still don't see anything when I run "zpool status -v" within a zone cluster node. I would expect to see my chuck1-u01 zpool at this point.
    2) I then created a resource group within one zone cluster node called rg-zc-chuck1-oracle
    3) I then registered the SUNW.HAStoragePlus resource type within one zone cluster node
    4) I then attempted to create the HASP resource type within one zone cluster node and I get the error above.
    Any ideas? I've followed the Sun Cluster Essentials example explicitly. (In my last attempt I skipped doing anything with the logical hostname resource; I was saving that for later once I got the zpool working.) It seems to get confused on the second node.
    Thx

  • Issue when naming resources the same on different zone clusters

    Dear all
    I found a very strange issue related to naming of resources and zone clusters, and would like to know whether what I am doing is supported.
    I have a 4-node cluster; on the first two nodes I have zone cluster A, and on the second two nodes I have zone cluster B. Each zone cluster has a unique shared address configured in it. On each zone cluster, various scalable GDS services with the same names are configured. When creating the GDS resource, the following warning cropped up: “Warning: Scalable service group for resource test has already been created”. When I bring the resource on zone cluster A up, everything works fine, but when I start up the resource on cluster B, for some reason it registers the shared address of zone A on the nodes of zone B, i.e. it gets confused, and thus the service on zone A becomes unavailable from the shared address IP.
    Is the use of the same names for resources on two different zone clusters supported? From my perspective this issue breaks the “containment” concept of zone clusters, since zone cluster B can “confuse” zone cluster A.
    Regards
    Daniel
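    For reference, the kind of configuration described above is typically built along these lines inside each zone cluster. This is a hedged sketch only, with placeholder resource names, hostname and port, not Daniel's actual configuration:
    # Failover group holding the shared address (one unique hostname per zone cluster)
    clrg create sa-rg
    clrssa create -g sa-rg -h zc-a-shared-host sa-rs
    clrg online -eM sa-rg
    # Scalable group with a GDS resource named "test", depending on the shared address
    clrg create -p Maximum_primaries=2 -p Desired_primaries=2 -p RG_dependencies=sa-rg test-rg
    clrt register SUNW.gds
    clrs create -g test-rg -t SUNW.gds -p Scalable=true -p Port_list=8080/tcp \
        -p Start_command=/opt/app/bin/start -p Stop_command=/opt/app/bin/stop \
        -p Network_resources_used=sa-rs test
    clrg online -eM test-rg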

    Daniel,
    > Is the use of same names for resources on two different zone clusters supported? From my perspective this issue breaks the “Containment” concept of zone clusters, since zone cluster B can “confuse” zone cluster A.
    Yes, the use of the same resource name on two different zone clusters is supported.
    > I have a 4 node cluster and on the first two nodes I have zone cluster A. On the second two nodes I have zone cluster B. Each zone cluster has configured in it a unique shared address. On each zone cluster various scalable GDS services having same names are configured. When creating the GDS resource the following warning cropped up “Warning: Scalable service group for resource test has already been created”.
    As mentioned above, using the same resource name is allowed and the above warning message should not be printed. I will open a bug against the product for this.
    > When I put the resource on zone A up everything works fine but when I startup the resource on cluster B, then for some reason it registers the shared address of zone A on the nodes of zone B, i.e. it gets confused, and thus the service on zone A becomes unavailable from the shared address IP.
    This is the issue we could not see in our lab, and we need reproducing steps. We were able to create the shared address resources and scalable resources with the same names in two different zone clusters (except for seeing the above warning message) and to bring them up correctly. This was not a GDS service but a regular Apache scalable service; the choice of data service should not matter here. If you can provide the following, we would be happy to investigate further and send you information:
    1) Reproducing steps (The exact order of the commands that were run to create and bring up the resources)
    2) Was your shared address resource name also the same (of course the IP address has to be different), in addition to the GDS service name?
    3) Any error messages that got printed.
    4) syslog messages related to this failure (or the syslog messages that you have seen during this time frame)
    Thanks,
    Prasanna

  • Telemetry & Zone Clusters

    Does anyone know a good source for configuring cluster telemetry, specifically with zone clusters? I can't find much in the cluster documentation or by searching oracle's website. The sctelemetry man page wasn't very useful. The sun cluster essentials book provides a brief example but not with zone clusters.
    I want to use this feature to monitor memory/CPU usage in my various zone clusters. In our environment, we will have a few three-node clusters with all applications running inside zone clusters, with the "active" cluster nodes staggered across the 3 nodes.
    Lastly, is telemetry really worth the hassle? We are also deploying Ops Center (whose capabilities I don't really know yet). I briefly used an older version of xVM Ops Center at my last gig, but only as a provisioning tool. So with Ops Center and the myriad of DTrace tools available, is telemetry worth messing with?
    Thx for any info,
    Chuck

    That's correct. I checked with the feature's author, and telemetry pre-dates the introduction of zone clusters. What I got back was: "SC can only do cpu.idle monitoring for a zone itself. Anything below that is not monitored, including RG/RS configured inside zones."
    Tim
    ---

  • T2000 Sever, Zones/Containers or LDOM's?

    I'm getting ready to deploy several T2000 servers. One requirement is Solaris 8, but Solaris 8 will not run on the T2000. I was planning on implementing Containers and installing Solaris 8 in the Container. However, I was just told by a co-worker that Containers will not run on the T2000 and that I have to use LDOM's. Is that correct?
    Thanks.
    Daryl Rose

    Dynamic Domains (Hardware Domains) are implemented at the hardware level and split the hardware.
    LDOMS are implemented at the firmware level and split the hardware.
    Containers are a combination of Zones (OS isolation) and Resource Management (CPU, RAM).
    xVM is a hypervisor and/or virtual machine technology.
    If you can install Solaris 10 (or OpenSolaris) you can install Containers. This works on SPARC and x86. For example, you could have 2 LDOMs on a T2000 to split the box into 2 logical servers (there are some best practices for this, mainly around fault tolerance and bandwidth). Then install 2 Solaris 10 instances. Then create 5 Containers on each Solaris 10 installation.
    Not every machine runs LDOMs. Everything will run a Container.
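    For reference, creating one such Container on a Solaris 10 installation looks roughly like this (a minimal sketch; the zone name, zonepath, address and NIC are placeholders):
    # zonecfg -z web01
    zonecfg:web01> create
    zonecfg:web01> set zonepath=/zones/web01
    zonecfg:web01> set autoboot=true
    zonecfg:web01> add net
    zonecfg:web01:net> set address=192.168.1.101/24
    zonecfg:web01:net> set physical=e1000g0
    zonecfg:web01:net> end
    zonecfg:web01> commit
    zonecfg:web01> exit
    # zoneadm -z web01 install
    # zoneadm -z web01 boot
    # zlogin -C web01     (answer the sysid questions on first boot)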
    Cheers
    Neil

  • Using NAS devices inside zone clusters

    I was reading the docs regarding using NAS devices with Sun Cluster. They mention installing a vendor-supplied package onto each cluster node to support their NAS; for example, NetApp has the NTAPclnas package. We are using Hitachi HNAS and I doubt they have an equivalent package. (I'm looking into this, but my hopes are not high.)
    What exactly do I need this for? Are there any problems with simply adding the NFS mounts I need to the /etc/vfstab of each zone cluster node?
    In my case I will be running our Oracle dev/test databases on NFS file systems, and our production clusters will use NFS file systems for writing RMAN backups to. We aren't using RAC (today) but quite possibly will be moving to RAC in the future.
    I was just curious what the level of integration detailed in http://download.oracle.com/docs/cd/E19680-01/html/821-1556/ewplp.html#eyprn provides. Also, will a vanilla /etc/vfstab approach work if Hitachi doesn't have such a package available?
    Thanks for any info.
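    For what it's worth, the "vanilla /etc/vfstab" approach mentioned above would just be an NFS line like the following on each zone-cluster node. The server name, export and mount point are placeholders, and the mount options are the ones commonly quoted for Oracle files over NFS, not an official Hitachi recommendation:
    #device                device to fsck  mount point   FS type  fsck pass  mount at boot  mount options
    hnas01:/vol/oradev     -               /oradata/dev  nfs      -          yes            rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3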

    IIRC, that package provides locking and fencing support. You need to be able to ensure that nodes that are not supposed to be able to write to a piece of shared storage cannot do so. With an NFS server, this is done by revoking the share. The second part is NFS lock release.
    So, if you don't have this package, you don't have these features. Without fencing control, you risk data integrity. Without lock release, you risk not being able to access your files.
    That said, there are some circumstances when you probably don't need them. Just dumping RMAN backups to a dump area may be OK. I don't know quite enough about RMAN to comment.
    Tim
    ---

  • ERROR when configuring zone clusters

    Hi,
    I have installed:
    Oracle Solaris 10 9/10 s10x_u9wos_14a X86
    Sun Cluster 3.2u3 for Solaris 10 i386
    And I have my public network on the 11.0.0.0/24 network on e1000g2 on both nodes
    vm1:
    e1000g2: flags=9000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER> mtu 1500 index 2
    inet 11.0.0.101 netmask ffffff00 broadcast 11.0.0.255
    groupname sc_ipmp0
    ether 8:0:27:6:df:2b
    vm2:
    e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 11.0.0.102 netmask ffffff00 broadcast 11.0.0.255
    groupname sc_ipmp0
    ether 8:0:27:1d:69:a9
    I am configuring a zone cluster with this config file:
    #cat zonecreate.file
    create -b
    set zonepath=/zones/clzone1
    set brand=cluster
    set enable_priv_net=true
    set autoboot=true
    set ip-type=shared
    add node
    set physical-host=vm1
    set hostname=clzone1
    add net
    set address=11.0.0.130
    set physical=e1000g2
    end
    end
    add sysid
    set system_locale=C
    set terminal=xterm
    set security_policy=NONE
    set nfs4_domain=dynamic
    set timezone=MET
    set root_password=/LB7sdasada
    end
    add node
    set physical-host=vm2
    set hostname=clzone2
    add net
    set address=11.0.0.131
    set physical=e1000g2
    end
    end
    commit
    exit
    # clzc configure -f zonecreate.file zc1
    On line 32 of zonecreate.file:
    zc1: CCR transaction error
    Failed to assign a subnet for zone zc1.
    zc1: failed to verify
    zc1: CCR transaction error
    zc1: CCR transaction error
    Failed to assign a subnet for zone zc1.
    zc1: failed to verify
    zc1: CCR transaction error
    Configuration not saved.
    Any idea where the error is coming from?

    Hi,
    Thanks for your quick answer. I tried it, but no luck... Just to point out, I have a normal native zone working OK on the same node:
    [root@vm2:/zones]# zoneadm list -cv (10-14 16:47)
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    1 zone1 running /zones/zone1 native shared
    Modified the File:
    create -b
    set zonepath=/zones/clzone1
    set brand=cluster
    set enable_priv_net=true
    set autoboot=true
    set ip-type=shared
    add node
    set physical-host=vm1
    set hostname=clzone1
    add net
    set address=11.0.0.130/24
    set physical=e1000g2
    end
    end
    add sysid
    set system_locale=C
    set terminal=xterm
    set security_policy=NONE
    set nfs4_domain=dynamic
    set timezone=MET
    set root_password=/LB7qgfbUTwks
    end
    add node
    set physical-host=vm2
    set hostname=clzone2
    add net
    set address=11.0.0.131/24
    set physical=e1000g2
    end
    end
    commit
    exit
    "zonecreate.file" 33 lines, 512 characters
    [root@vm2:/zones]# clzc configure -f zonecreate.file zc1 (10-14 16:46)
    On line 32 of zonecreate.file:
    zc1: CCR transaction error
    Failed to assign a subnet for zone zc1.
    zc1: failed to verify
    zc1: CCR transaction error
    zc1: CCR transaction error
    Failed to assign a subnet for zone zc1.
    zc1: failed to verify
    zc1: CCR transaction error
    Configuration not saved.
    [root@vm2:/zones]# (10-14 16:46)
    [root@vm2:/zones]# cat /etc/netmasks (10-14 16:46)
    # The netmasks file associates Internet Protocol (IP) address
    # masks with IP network numbers.
    #      network-number     netmask
    # The term network-number refers to a number obtained from the Internet Network
    # Information Center.
    # Both the network-number and the netmasks are specified in
    # "decimal dot" notation, e.g:
    #           128.32.0.0 255.255.255.0
    11.0.0.0     255.255.255.0
    a full ifconfig, in case it helps:
    [root@vm2:/zones]# ifconfig -a (10-14 16:46)
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
         inet 127.0.0.1 netmask ff000000
    lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
         zone zone1
         inet 127.0.0.1 netmask ff000000
    e1000g0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
         inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
         ether 8:0:27:b2:3d:f
    e1000g1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
         inet 172.16.1.2 netmask ffffff80 broadcast 172.16.1.127
         ether 8:0:27:25:f5:c8
    e1000g2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
         inet 11.0.0.102 netmask ffffff00 broadcast 11.0.0.255
         groupname sc_ipmp0
         ether 8:0:27:1d:69:a9
    e1000g2:1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
         inet 11.0.0.110 netmask ffffff00 broadcast 11.0.0.255
    e1000g2:2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
         zone zone1
         inet 11.0.0.107 netmask ffffff00 broadcast 11.0.0.255
    e1000g2:3: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 2
         inet 11.0.0.105 netmask ffffff00 broadcast 11.0.0.255
    e1000g3: flags=69040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY,INACTIVE> mtu 1500 index 3
         inet 11.0.0.111 netmask ffffff00 broadcast 11.0.0.255
         groupname sc_ipmp0
         ether 8:0:27:9e:57:93
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 6
         inet 172.16.4.2 netmask fffffe00 broadcast 172.16.5.255
         ether 0:0:0:0:0:2
    clprivnet0:1: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 6
         zone zone1
         inet 172.16.4.66 netmask fffffe00 broadcast 172.16.5.255
    Thnx again

  • LDOM and Zones on same T5120

    I have a client that wants to run Zones and LDOM's on the same T5120 Test and Dev box.
    We don't want to run zones in an LDOM, but rather have 3 zones running branded Solaris 8, and then install LDOM software for testing.
    Is this possible?
    Thanks
    Rod.
    Edited by: pudwell on Dec 6, 2009 11:21 PM

    Rod,
    As I understand it there are no technical reasons not to use both [non global] zones and LDoms on the same server. In some cases I think using zones in an LDom is encouraged.
    Keep in mind that:
    1. When you change the server over to LDoms, ALL Solaris instances will run in an LDom; you cannot run LDom and non-LDom at the same time.
    2. You will need a "Primary control I/O" LDom in addition to the LDom(s) that you are testing; the primary-control-I/O LDom could be your current Test-Dev Solaris instance (I'm not sure this would be recommended, but you did say this is a Test-Dev box).
    3. Each LDom needs real hardware, including memory, CPU(s), disk space and a network connection; you can share the disk and network, but the CPUs and memory cannot be shared.
    A good place to get additional info on LDoms is the [LDoms Community Cookbook|http://wikis.sun.com/display/SolarisLogicalDomains/LDoms+Community+Cookbook]
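    For reference, carving out a guest LDom from the primary domain looks roughly like this (a sketch only; the domain name, CPU/memory sizes, backing disk device, and the primary-vsw0/primary-vds0 services are placeholders that must already exist or be created first):
    ldm add-domain testldg
    ldm add-vcpu 8 testldg
    ldm add-memory 4G testldg
    ldm add-vnet vnet0 primary-vsw0 testldg
    ldm add-vdsdev /dev/dsk/c1t2d0s2 testldg-vol@primary-vds0
    ldm add-vdisk vdisk0 testldg-vol@primary-vds0 testldg
    ldm bind testldg
    ldm start testldg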
    hope this helps.
    Glen
    (just a customer)

  • Q:Creating Solaris 11.1 zone inside Solaris 11.1 LDOM fails

    Hello,
    I have a server where I've created an LDOM, and I'd like to create zones inside that LDOM, but it isn't working at all for me. If I try to create a zone outside of my LDOM it works perfectly fine. What am I missing here?
    Control domain: host-22
    LDOM hostname is host-40.
    Zone name is wls-zone.
    OS: Solaris 11.1
    Server: T5220
    Some information from the SP:
    ============================ FW Version ============================
    Version
    Sun System Firmware 7.4.4.f 2012/09/07 17:21
    ====================== System PROM revisions =======================
    Version
    OBP 4.33.6.a 2012/03/29 11:22
    This is what I am trying when creating the zone. I get an error message when I try to boot the zone, for which I don't know the solution or how to debug it. AFAIK, in Solaris 11.1, the VNIC should be created automatically when creating the zone (as it is on a bare-metal machine without LDOMs)?
    root@host-40:~# zoneadm -z wls-zone boot
    zone 'wls-zone': failed to create vnic for net0: operation failed
    zoneadm: zone 'wls-zone': call to zoneadmd failed
    The zone is very simple:
    root@host-40:~# zonecfg -z wls-zone info
    zonename: wls-zone
    zonepath: /rpool/zones/wls-zone
    brand: solaris
    autoboot: true
    bootargs: -m verbose
    file-mac-profile:
    pool:
    limitpriv:
    scheduling-class:
    ip-type: exclusive
    hostid:
    fs-allowed:
    anet:
         linkname: net0
         lower-link: auto
         allowed-address not specified
         configure-allowed-address: true
         defrouter not specified
         allowed-dhcp-cids not specified
         link-protection: mac-nospoof
         mac-address: random
         mac-prefix not specified
         mac-slot not specified
         vlan-id not specified
         priority not specified
         rxrings not specified
         txrings not specified
         mtu not specified
         maxbw not specified
         rxfanout not specified
         vsi-typeid not specified
         vsi-vers not specified
         vsi-mgrid not specified
         etsbw-lcl not specified
         cos not specified
         pkey not specified
         linkmode not specified
    Some more information executed inside the LDOM:
    root@host-40:~# dladm show-phys
    LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
    net0              Ethernet             up         0      unknown   vnet0
    root@host-40:~# dladm show-link
    LINK                CLASS     MTU    STATE    OVER
    net0                phys      1500   up       --
    And even more information from the control domain (host-22):
    root@host-22:~# ldm list-services
    VCC
        NAME             LDOM             PORT-RANGE
        primary-vcc0     primary          5000-5100
    VSW
        NAME             LDOM             MAC               NET-DEV   ID   DEVICE     LINKPROP   DEFAULT-VLAN-ID PVID VID                  MTU   MODE   INTER-VNET-LINK
        primary-vsw0     primary          00:14:4f:fb:93:c6 net0      0    switch@0              1               1                         1500         on        
        primary-vsw1     primary          00:14:4f:f8:bb:62 net1      1    switch@1              1               1                         1500         on        
    VDS
        NAME             LDOM             VOLUME         OPTIONS          MPGROUP        DEVICE
        primary-vds0     primary          vol1                                           /dev/zvol/dsk/dpool/ldoms/ldmdev/disk_image
                                          vol2                                           /dev/zvol/dsk/dpool/ldoms/ldmprod/disk_image
    root@host-22:~# dladm show-phys
    LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
    net1              Ethernet             up         1000   full      e1000g1
    net2              Ethernet             unknown    0      unknown   e1000g2
    net0              Ethernet             up         1000   full      e1000g0
    net3              Ethernet             unknown    0      unknown   e1000g3
    net4              Ethernet             up         1000   full      vsw0
    net5              Ethernet             up         1000   full      vsw1
    Any ideas what could be wrong?
    Thanks,
    Andy
    Edited by: A Tael on Nov 13, 2012 9:54 PM

    To use a Solaris 11 zone with an exclusive IP based on a VNIC, you need to be at a recent SRU of Solaris 11.1, run a recent version of the Logical Domains Manager (3.0.0.2 or later), and then configure the virtual network device to have additional MAC addresses. Please have a look at https://blogs.oracle.com/jsavit/entry/vnics_on_vnets_now_available
    regards, Jeff
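    Based on the blog post linked above, the extra MAC addresses are added to the guest's virtual network device from the control domain, roughly like this. This is a hedged sketch: the vnet instance name is assumed, the guest may need to be stopped and unbound first, and the exact syntax should be checked against your ldm version:
    # On the control domain (host-22)
    ldm set-vnet alt-mac-addrs=auto,auto,auto vnet0 host-40
    # Then boot the guest again and retry booting the zone with its anet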

  • LDOMs, Solaris zones and Live Migration

    Hi all,
    If you are planning to use Solaris zones inside an LDOM and to use an external zpool as the Solaris zone disk, wouldn't this break one of the requirements for being able to do a Live Migration? If so, do you have any ideas on how to use Solaris zones inside an LDOM and at the same time be able to do a Live Migration, or is it impossible? I know this may sound like a bad idea, but I would very much like to know if it is doable.

    Thanks,
    By external pool I am thinking of the way you probably are doing it: separate LUNs mirrored in a zpool for the zones, coming from two separate I/O/service domains. So even if this zpool exists inside the LDOM as zone storage, this will not prevent LM? That's good news. The requirement "no zpool if Live Migration" must then only be valid for the LDOM storage itself and not for storage attached to the running LDOM. I am also worried about a possible performance penalty from introducing an extra layer of virtualisation. Have you done any tests regarding this?

  • LDOM 1.1 migration with boot disk on zfs volume

    I have a big disk and have set up a few ZFS volumes for guest domains.
    Has anybody successfully configured LDOM 1.1 to migrate guest domains
    between two hosts when the guest domains are on ZFS volumes?
    The two hosts share the FC LUN through the SAN.
    Any suggestions?
    cm

    Can you provide a working example of this? I fail to see any benefit of using files as bootdisks via NFS. It seems you will need a new bootdisk file (approx. 12g for our env.) for every ldom guest. First, how do you save storage space? Then, how do you manage additional storage devices for applications which require a small amount of additional disk space (say 50g)? If you use file allocation (mkfile on NFS allocated to an ldom) then you've gotten yourself into potentially disastrous performance issues (virtual guest storage -> virtual I/O management -> file -> NFS -> ZFS -> SAN Luns). The only solution to the problem as I see it is to use a clustered file system, which is not currently a viable option for the following reasons:
    1. Veritas CVM does not support Branded Zones in an ldom = we can't use it
    2. ZFS is not a cluster file system
    So, this begs the question, what is the "Best Practice" for setting up ldom environments (with migration ability) from a storage perspective? What is the Sun recommendation on this?

  • The hostname test01 is not authorized to be used in this zone cluster

    Hi,
    I have problems registering a LogicalHostname in a Zone Cluster.
    Here my steps:
    - create the ZoneCluster
    # clzc configure test01
    clzc:test01> info
    zonename: test01
    zonepath: /export/zones/test01
    autoboot: true
    brand: cluster
    bootargs:
    pool: test
    limitpriv:
    scheduling-class:
    ip-type: shared
    enable_priv_net: true
    sysid:
    name_service not specified
    nfs4_domain: dynamic
    security_policy: NONE
    system_locale: en_US.UTF-8
    terminal: vt100
    timezone: Europe/Berlin
    node:
    physical-host: farm01a
    hostname: test01a
    net:
    address: 172.19.115.232
    physical: e1000g0
    node:
    physical-host: farm01b
    hostname: test01b
    net:
    address: 172.19.115.233
    physical: e1000g0
    - create a RG
    # clrg create -Z test01 test01-rg
    - create Logicalhostname (with error)
    # clrslh create -g test01-rg -Z test01 -h test01 test01-ip
    clrslh: farm01b:test01 - The hostname test01 is not authorized to be used in this zone cluster test01.
    clrslh: farm01b:test01 - Resource contains invalid hostnames.
    clrslh: (C189917) VALIDATE on resource test01-ip, resource group test01-rg, exited with non-zero exit status.
    clrslh: (C720144) Validation of resource test01-ip in resource group test01-rg on node test01b failed.
    clrslh: (C891200) Failed to create resource "test01:test01-ip".
    Here the entries in /etc/hosts from farm01a and farm01b
    172.19.115.119 farm01a # Cluster Node
    172.19.115.120 farm01b loghost
    172.19.115.232 test01a
    172.19.115.233 test01b
    172.19.115.252 test01
    Hope somebody could help.
    regards,
    Sascha
    Edited by: sbrech on 13.05.2009 11:44

    When I scanned my last example of a zone cluster, I spotted that I had added my logical host to the zone cluster's configuration.
    create -b
    set zonepath=/zones/cluster
    set brand=cluster
    set autoboot=true
    set enable_priv_net=true
    set ip-type=shared
    add inherit-pkg-dir
    set dir=/lib
    end
    add inherit-pkg-dir
    set dir=/platform
    end
    add inherit-pkg-dir
    set dir=/sbin
    end
    add inherit-pkg-dir
    set dir=/usr
    end
    add net
    set address=applh
    set physical=auto
    end
    add dataset
    set name=applpool
    end
    add node
    set physical-host=deulwork80
    set hostname=deulclu
    add net
    set address=172.16.30.81
    set physical=e1000g0
    end
    end
    add sysid
    set root_password=nMKsicI310jEM
    set name_service=""
    set nfs4_domain=dynamic
    set security_policy=NONE
    set system_locale=C
    set terminal=vt100
    set timezone=Europe/Berlin
    end
    I am referring to:
    add net
    set address=applh
    set physical=auto
    end
    So as far as I can see this is missing from your configuration. Sorry for leading you the wrong way.
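    Something along these lines against your existing zone cluster should authorize the hostname (an untested sketch, mirroring the "add net" block from my example above):
    # clzc configure test01
    clzc:test01> add net
    clzc:test01:net> set address=test01
    clzc:test01:net> set physical=auto
    clzc:test01:net> end
    clzc:test01> commit
    clzc:test01> exit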
    Detlef

  • Creating logical host on zone cluster causing SEG fault

    As noted in previous questions, I've got a two node cluster. I am now creating zone clusters on these nodes. I've got two problems that seem to be showing up.
    I have one working zone cluster with the application up and running with the required resources including a logical host and a shared address.
    I am now trying to configure the resource groups and resources on additional zone clusters.
    In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running.
    I log onto the zone and I create a failover resource group, no problem. I then try to create a logical host and I get:
    "Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11"
    This error appears to be happening on the other node, ie: not the one that I'm building from.
    Has anyone seen anything like this, or have any thoughts on where I should go with it?
    Thanks.

    Hi,
    > In some cases when I install the zone cluster the clzc command core dumps at the end. The resulting zones appear to be bootable and running.
    Look at the stack from your core dump and see whether it matches this bug:
    6763940 clzc dumped core after zones were installed
    As far as I know, the above bug is harmless and no functionality should be impacted. This bug is already fixed in the later release.
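    A quick way to compare is to pull the stack straight from the core file, for example (the core file name and location depend on your coreadm settings, so treat the path below as a placeholder):
    # coreadm | grep 'core file pattern'
    # pstack /var/cores/core.clzc.12345 | head -30
    (coreadm shows where and under what name core files are written; pstack then prints the stack of the saved clzc core.)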
    "Method hafoip_validate on resource xxx stopped or terminated due to receipt of signal 11" The above message is not enough to figure out what's wrong. Please look at the below:
    1) Check the /var/adm/messages on the nodes and observe the messages that got printed around the same time that the above
    message got printed and see whether that gives more clues.
    2) Also see whether there is a core dump associated with the above message and that might also provide more information.
    If you need more help, please provide the output for the above.
    Thanks,
    Prasanna Kunisetty

  • Oracle RAC 10g on Solaris 10 in a non-global zone

    I need to run Oracle RAC 10g on Solaris 10 in a non-global zone, as I must cap the CPUs used because of Oracle licensing limitations. My question is a simple one, but one for which I'm getting conflicting information depending upon whom I ask.
    If I want to run RAC in a non-global zone on two nodes, does this require the use of Solaris Cluster?
    I know there are good reasons to use Solaris Cluster, but the company for which I work cannot afford the additional expense of Solaris Cluster at this time. Is it possible to run Oracle RAC 10g in a capped container without Solaris Cluster or is Solaris Cluster absolutely required?
    Thanks in advance for any insight you can provide.

    AFAIK, Oracle 10g RAC is not supported in Solaris containers.
    It is, however, supported in Solaris zone clusters... in order to use it, you would have to use Sun Cluster 3.2 (IINM).

  • Ports management in Solaris 10 Zones

    Hi,
    I am new in this area. We have a software vendor who stated that their applications should be able to:
    - open TCP connections to ports in the ranges 21400-21404 and 50000-50199 on the server
    - telnet port 23 for Galaxy 2000, http port 80 for Viewpoint (Server); etc.
    - IngresNet is on listen address I3 (port 21400).
    We have a Server & Storage infrastructure policy of implementing all Unix based applications on Solaris 10 Zones clustered on two nodes.
    Will there be a problem with the vendor's requests? To me, it is about independent port management within Solaris 10 zones.
    Thanks in advance.

    Does this imply there are no ports which are common/shared between the global and local zones?
    If there are, does it mean that opening one of that group of ports in the local zone automatically implies that its equivalent in the global zone is also opened?
