Sharing resources among resource groups in Sun Cluster 3.1

Hi all,
Is it possible to share a resource among resource groups? For example:
lh: resource of type Logical Hostname =lh-res
/orahome: Oracle binaries and configuration files = orahome-res
/oradata1: Data for instance 1 = oradata1-res
/oradata2: Data for instance 2 = oradata2-res
rg1 ( resource group for Oracle instance 1) ora1-rg = lh + orahome-res + oradata1-res
rg2 (resource group for Oracle instance 2) ora2-rg = lh + orahome-res + oradata2-res
Thanks,
Enrique

Hi Enrique,
If lh represents the same address and the same resource name, then the answer is: no, not possible. A resource can belong to only one resource group.
If it did work and both RGs were running on different nodes, you would create duplicate IP address errors, which cannot be your intent.
Which behavior do you want to achieve?
Detlef
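
For reference, a minimal Sun Cluster 3.1 sketch of the usual alternative (hostnames and resource names below are only illustrative): give each resource group its own LogicalHostname resource rather than sharing one.
scrgadm -a -L -g ora1-rg -j ora1-lh-res -l ora1-lh
scrgadm -a -L -g ora2-rg -j ora2-lh-res -l ora2-lh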

Similar Messages

  • Cannot import a disk group after sun cluster 3.1 installation

    Installed Sun Cluster 3.1u3 on nodes with Veritas VxVM running and disk groups in use. After cluster configuration and a reboot, we can no longer import our disk groups. VxVM displays the message: Disk group dg1: import failed: No valid disk found containing disk group.
    Did anyone run into the same problem?
    The dump of the private region for every single disk in the VM returns the following error:
    # /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/did/rdsk/d22s2
    VxVM vxprivutil ERROR V-5-1-1735 scan operation failed:
    Format error in disk private region
    Any help or suggestion would be greatly appreciated
    Thx
    Max

    If I understand correctly, you had VxVM configured before you installed Sun Cluster - correct? And now that Sun Cluster is installed, you can no longer import your disk groups.
    The first thing you need to know is that you have to register the disk groups with Sun Cluster - this happens automatically with Solaris Volume Manager but is a manual process with VxVM. Note that you will also have to update the configuration after any change to a disk group, e.g. permission changes, volume creation, etc.
    You can use the scsetup menu to achieve this, though it can also be done from the command line with an scconf command.
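    For example, a rough sketch of the command-line form (the disk group and node names are only placeholders):
    scconf -a -D type=vxvm,name=dg1,nodelist=node1:node2
    and, after later changes to the disk group, resynchronize the device group configuration with:
    scconf -c -D name=dg1,sync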
    Having said that, I'm still confused by the error. See if the above solves the problem first.
    Regards,
    Tim
    ---

  • SUN CLUSTER RESOURCE FOR LEGATO CLIENT (LGTO.CLNT) in Oracle database

    Hi everyone,
    I am trying to create an LGTO.clnt resource in the oracle-rg resource group in Sun Cluster 3.2 with the following commands:
    clresource create -g resource_group_name -t LGTO.clnt \
    -x clientname=virtual_hostname -x owned_paths=pathname_1,
    pathname_2[,...] resource_name
    I just need to know what the value of the Owned_Paths variable in the above command should be,
    i.e. which path it is referring to ($ORACLE_HOME, the global devices path, etc.)?

    Hello,
    The Owned_Paths parameter lists the paths (or mount points) the Legato client will be able to back up from.
    To configure a Legato client in the NetWorker console (and have it managed as a cluster client) you need to declare in Owned_Paths the paths you want to save.
    The saveset paths can be directories under the Owned_Paths.
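    For example, a hedged sketch for an Oracle resource group (the virtual hostname, paths and resource name are only placeholders):
    clresource create -g oracle-rg -t LGTO.clnt \
    -x clientname=ora-lh -x owned_paths=/orahome,/oradata1 lgto-clnt-rs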
    Regards
    Pablo Villanueva.

  • Sun Cluster 3.2 without shared storage (Sun StorageTek Availability Suite)

    Hi all.
    I have a two-node Sun Cluster.
    I have configured and installed AVS on these nodes (AVS remote mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    What I did:
    Created a remote mirror with AVS.
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#
    Created resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#
    Created SUNW.HAStoragePlus resource with AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0#
    By default everything works fine.
    But if I need to switch the RG to the other node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0#
    If I change the replication state to logging, everything works.
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0#
    How can I do this without creating an SC agent for it?
    Anatoly S. Zimin

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you just clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC) - rather it is something that has been developed in the open source community. As such it will not be documented in the main Sun Cluster documentation set. Furthermore, support and/or questions for it should be directed to the author of the module.
    Regards,
    Tim
    ---

  • Deploy HA Zones with Sun Cluster

    Hi
    I have 2 physical Solaris 10 servers with a StorEdge array for the shared storage.
    I have installed Sun Cluster 3.3 on both nodes and sorted out the quorum and the shared drive, using a ZFS file system for a mount point.
    Next I installed a non-global zone on one node, with the zone path on the shared file system.
    When I switch the shared file system over, the zone is not installed on the 2nd node.
    So when I try to install the zone on the 2nd node
    I get a "Rootpath is already mounted on this filesystem" error.
    Does anyone know how to set up a Sun Cluster with HA zones, please?

    The option to forcibly attach a zone was added to zoneadm in a Solaris 10 update release. With that option the procedure to configure and install a zone for HA container use can be as follows.
    The assumption is that there is already an RG configured with an HASP resource managing the zpool for the zone root path:
    a) Switch the RG online on node A.
    b) Configure (zonecfg) and install (zoneadm) the zone on node A on shared storage.
    c) Boot the zone and go through the interactive sysidcfg within "zlogin -C zonename".
    d) Switch the RG hosting the HASP resource for the pool to node B.
    e) Configure (zonecfg) the zone on node B.
    f) "Install" the zone by forcibly attaching it: zoneadm -z <zonename> attach -F
    A command-level sketch of these steps follows. The user can then test whether the zone boots on node B, halt it, and proceed with the sczbt resource registration as described at http://download.oracle.com/docs/cd/E18728_01/html/821-2677/index.html.
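    A minimal sketch, with hypothetical zone and resource group names (ha-zone, zone-rg) standing in for the real ones:
    # on node A
    clrg switch -n nodeA zone-rg        # bring the HASP-managed zpool to node A
    zonecfg -z ha-zone                  # configure the zone, zonepath on the shared zpool
    zoneadm -z ha-zone install
    zlogin -C ha-zone                   # complete the interactive sysidcfg
    zoneadm -z ha-zone halt
    # on node B
    clrg switch -n nodeB zone-rg        # move the zpool to node B
    zonecfg -z ha-zone                  # identical configuration as on node A
    zoneadm -z ha-zone attach -F        # forcibly attach instead of installing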
    Regards
    Thorsten

  • Rename Sun Cluster Resource Group

    Hi All,
    We have a 2-node Sun Cluster 3.0 running on 2 x V440 servers. We want to change the resource group name.
    Can I use the "scrgadm -c -g RG_NAME -h nodelist -y property" command to change the resource group name? Can I do this online while the cluster is running, or do I need to bring the cluster into maintenance mode? Any help would be appreciated.
    Thanks.

    You cannot rename a resource group in that way in Sun Cluster.
    You have two options:
    -Recreate the resource group with the new name
    -Use an unsupported procedure to change the name in the CCR. This requires downtime of both nodes and as it is unsupported I am not going to describe it here. If that is what you want to do, please log a call with Sun.
    If you think renaming resource groups would be a useful feature, may I also ask you to contact your Sun Service representative so that they can take proper action and log an RFE for the feature.
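    For the first option, a rough sketch with the SC 3.0-era commands (group, resource and node names are only placeholders; repeat the resource steps for every resource in the group):
    scswitch -F -g old-rg                  # take the group offline
    scswitch -n -j app-res                 # disable each resource
    scrgadm -r -j app-res                  # remove each resource
    scrgadm -r -g old-rg                   # remove the old group
    scrgadm -a -g new-rg -h node1,node2    # create the group under the new name
    scrgadm -a -j app-res -g new-rg -t <resource-type> ...   # re-create the resources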

  • Close/shutdown the Sun Cluster Package/resource Group

    Hi,
    I have a Sun Cluster system.
    I want to know which script runs when Sun Cluster shuts down the package "app-gcota-rg", as I may need to modify it. Where can I find this information on the system?
    In which directory and log file?
    Any suggestions?
    Resource Groups --
    Group Name            Node Name    State
    Group: ora_gcota_rg   ytgcota-1    Online
    Group: ora_gcota_rg   ytgcota-2    Offline
    Group: app-gcota-rg   ytgcota-1    Online
    Group: app-gcota-rg   ytgcota-2    Offline

    Hi,
    you would first find out which resources belong to app-gcota-rg.
    Do a "clrs list -g app-gcota-rg". Then find out which of the resources is the one dealing with your application, and then find out its resource type:
    "clrs show -v <resource name> | fgrep Type". If it is a standard type like HA Oracle, it is an extremely bad idea to hack the scripts, as you'll lose support. If the type is SUNW.gds, the scripts to start, stop and monitor the application are user supplied. You can find their pathnames using:
    "clrs show -v <resource-name> | fgrep _command". This should display the full pathnames.
    Regards
    Hartmut

  • Sun cluster resource group online but faulted

    Hi,
    Recently our storage admin deleted volume d9 from the oradg disk set by mistake. We created a new volume d29 and restored the data to it, but we forgot to remove d9 from the disk set. After a reboot the orarg resource group failed to go online with a faulted status, because the ora-stor resource is faulted.

  • Sun Cluster: Graph resources and resource groups dependencies

    Hi,
    Is there anything like the scfdot (http://opensolaris.org/os/community/smf/scfdot/) to graph resource dependencies in Sun Cluster?
    Regards,
    Ciro

    Solaris 10 8/07 s10s_u4wos_12b SPARC
    + scha_resource_get -O TYPE -R lh-billapp-rs
    + echo SUNW.LogicalHostname:2
    + [ -z sa-billapp-rs ]
    + NETRS=sa-billapp-rs lh-billapp-rs
    + [ true = true -a ! -z sa-billapp-rs lh-billapp-rs ]
    cluster2dot.ksh[193]: test: syntax error
    + + tr -s \n
    + scha_resource_get -O RESOURCE_DEPENDENCIES -R sa-billapp-rs
    DEP=
    + [ true = true -a ! -z sa-billapp-rs lh-billapp-rs ]
    cluster2dot.ksh[193]: test: syntax error
    + + tr -s \n
    + scha_resource_get -O RESOURCE_DEPENDENCIES -R lh-billapp-rs
    DEP=
    + [   !=   ]
    + echo \t\t"lh-billapp-rs";
    + 1>> /tmp/clu-dom3-resources.dot
    + + tr -s \n
    + scha_resource_get -O RESOURCE_DEPENDENCIES_WEAK -R lh-billapp-rs
    DEP_WEAK=
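    For reference, a minimal stand-alone sketch of the same idea, assuming the clrs and scha_resource_get commands from SC 3.2 (the syntax error in the trace above comes from an unquoted multi-word variable inside a test expression in cluster2dot.ksh):
    #!/usr/bin/ksh
    # emit a Graphviz digraph of resource dependencies
    echo "digraph cluster {"
    for r in $(clrs list); do
        for d in $(scha_resource_get -O RESOURCE_DEPENDENCIES -R "$r"); do
            echo "  \"$r\" -> \"$d\";"
        done
    done
    echo "}"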

  • QFS Meta data resource on sun cluster failed

    Hi,
    I'm trying to configure QFS in a cluster environment; when configuring the metadata resource I ran into errors. I tried different types of QFS and none of them worked.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/sharedqfs
    n1u332 - shqfs: Invalid priority (0) for server n1u332FS shqfs: validate_node() failed.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/haqfs
    n1u332 - Mount point /global/haqfs does not have the 'shared' option set.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/hasharedqfs
    n1u332 - has: No /dsk/ string (nodev) in device.Inappropriate path in FS has device component: nodev.FS has: validate_qfsdevs() failed.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    any QFS expert here?

    Hi,
    Yes, we have 5.2; here is the wiki link: http://wikis.sun.com/display/SAMQFSDocs52/Home
    I added the file system through the web console, and it is mounted and working fine.
    After creating the file system I tried to put it under Sun Cluster's management, but it asked for a metadata resource, and when creating the metadata resource I got the errors mentioned above.
    I need to use the QFS file system in a non-RAC environment, just mounting and using the file system. I could mount it on two machines in shared, highly available mode, but the second node is 3 times slower than the node which holds the metadata server when writing (read speed is the same). Could you please let me know whether it is the same in your environment? If so, what do you think the reason is? I see both sides writing to the storage directly, so why is it so slow on one node?
    regards,
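    On the original validation errors, a hedged sketch of the general shape of a shared QFS setup under Sun Cluster (file system, mount point and group names are only examples): the vfstab entry for the shared file system needs the "shared" option, and the SUNW.qfs metadata-server resource is then registered against that mount point.
    # /etc/vfstab entry (family set name, mount point and options are illustrative)
    sharedqfs  -  /global/sharedqfs  samfs  -  no  shared
    # register the resource type, create the group, then the metadata-server resource
    scrgadm -a -t SUNW.qfs
    scrgadm -a -g qfs-mds-rg
    scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs -x QFSFileSystem=/global/sharedqfs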

  • Sun cluster 3.2 - resource hasstorageplus taking too much time to start

    I have a disk resource called "data" that takes too much time to start up when performing a switchover. Any idea what may control this?
    Jan 28 20:28:01 hnmdb02 Cluster.Framework: [ID 801593 daemon.notice] stdout: becoming primary for data
    Jan 28 20:28:02 hnmdb02 Cluster.RGM.rgmd: [ID 350207 daemon.notice] 24 fe_rpc_command: cmd_type(enum):<3>:cmd=<null>:tag=<hnmdb.data.10>: Calling security_clnt_connect(..., host=<hnmdb02>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<0>, ...)
    Jan 28 20:28:02 hnmdb02 Cluster.RGM.rgmd: [ID 316625 daemon.notice] Timeout monitoring on method tag <hnmdb.data.10> has been resumed.
    Jan 28 20:34:57 hnmdb02 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hastorageplus_prenet_start> completed successfully for resource <data>, resource group <hnmdb>, node <hnmdb02>, time used: 23% of timeout <1800 seconds>

    heldermo wrote:
    I have a disk resource called "data" that takes too much time to startup when performing a switchover. Any idea what may control this?
    I'm not sure how this is supposed to be related to Messaging Server. I suggest you ask your question in the Cluster forum:
    http://forums.sun.com/forum.jspa?forumID=842
    Regards,
    Shane.

  • Sun Cluster 3.1 Failover Resource without Logical Hostname

    Maybe it sounds strange, but I need to create a failover service without any network resource in use (or at least with a dependency on a logical hostname created in a different resource group).
    Does anybody know how to do that?

    Well, you don't really NEED a LogicalHostname in an RG. So I guess I am not understanding
    the question.
    Is there an application agent which demands a network resource in the RG? Sometimes
    the VALIDATE method of such agents refuses to work if there is no network resource in
    the RG.
    If so, tell us a bit more about the application. Is it GDS based and generated by the
    Sun Cluster Agent Builder? The Agent Builder has an option of "non Network Aware"; if you
    select that while building your app, it ought to work without a network resource in the RG.
    But maybe I should back up and ask the more basic question of exactly what is REQUIRING
    you to create a LogicalHostname?
    HTH,
    -ashu
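    As an illustration only (resource, group and path names are made up), a GDS-based resource can be created without any network resource by marking it not network aware:
    scrgadm -a -j app-rs -g app-rg -t SUNW.gds \
    -x Start_command="/opt/app/bin/start" \
    -x Probe_command="/opt/app/bin/probe" \
    -x Network_aware=false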

  • Failed to create resource - Error in Sun cluster 3.2

    Hi All,
    I have a 2-node cluster in place. When I try to create a resource, I get the following error.
    Can anybody tell me why? I have Sun Cluster 3.2 on Solaris 10.
    I have created a zpool called testpool.
    clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
    clrs: sun011:test011z - : no error
    clrs: (C189917) VALIDATE on resource hasp-testpool-res, resource group test-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource hasp-testpool-res in resource group test-rg on node sun011:test011z failed.
    clrs: (C891200) Failed to create resource "hasp-testpool-res".
    Regards
    Kumar

    Thorsten,
    testpool was created on one of the cluster nodes and is accessible from both nodes in the cluster. But when it is imported on one node it cannot be accessed from the other node; if the other node needs access, we have to export testpool and import it on that node.
    The storage LUNs allocated to testpool are accessible from all the nodes in the cluster, and testpool can be imported and exported from all the nodes in the cluster.
    Regards
    Kumar
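    One thing worth checking (this is an assumption, not a confirmed fix): SUNW.HAStoragePlus imports the pool itself on the node it chooses, so the pool should not be imported anywhere when the resource is created. A minimal sketch:
    zpool export testpool        # on whichever node currently has the pool imported
    clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res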

  • How to change the primary node for a resource group (Solaris Cluster 3.2)

    I have searched for hours to try to find this answer.
    I want to change the primary node of a resource group.
    example.
    log-rg runs on node1.this.com; it lists node1.this.com first when you do clrg status.
    But we run it on node2.this.com.
    A reboot will have log-rg run on node1, and we then have to switch it by hand to run
    on node2.
    I want it to know that it should always try to run on node2 first, but still fail over to node1 if the situation arises.
    scswitch -z -g log-rg -h node2 (and all the fully qualified versions of this command)
    did not work.
    How can I change the primary node for log-rg (logZ) from node1 to node2???
    thanks!

    Hi.
    Show the current configuration for the RG:
    clrg show -v log-rg
    To change the order of the Nodelist you can:
    clrg remove-node -n node1 log-rg
    clrg add-node -n node1 log-rg
    But this approach is more destructive; there can be problems adding a node back to the RG.
    I don't know why validation of resource log-tiv in resource group log-rg on node1 failed.
    I would need to know more about the configuration, resource types, etc.
    Maybe it is better to create a small test RG first and try moving and changing resources there.
    Nodelist names the candidates that can run these resources, but at any given moment the RG can run on any node from this list.
    Docs about Sun Cluster.
    http://download.oracle.com/docs/cd/E19787-01/820-7360/fxjbo/index.html
    Typical tasks:
    http://download.oracle.com/docs/cd/E19787-01/820-7359/z40002701009474/index.html
    Adding or Removing a Node to or From a Resource Group
    http://download.oracle.com/docs/cd/E19787-01/820-7359/z400043a1055200/index.html

  • Cannot use file for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Serve

    Hi
    Windows Server 2012 cluster running a SQL Server 2012 cluster with 2 instances. One works fine; on the second instance, when I try to create a DB I get this message:
    Cannot use file for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql
    Server does not have a dependency on it.
    CREATE DATABASE failed. Some file names listed could not be created. Check related errors. (Microsoft SQL Server, Error: 5184)
    Any help please
    kam
    KAMEL

    Hi Saurabh,
    Exactly, I have SQL Server 2012 failover clustering in Windows Server 2012 with two nodes and two instances; I run them on the same server, and each instance has three drives: Backup, Data and Log.
    KAMEL
