Cannot import a disk group after Sun Cluster 3.1 installation

Installed Sun Cluster 3.1u3 on nodes with Veritas VxVM running and disk groups in use. After cluster configuration and reboot, we can no longer import our disk groups. VxVM displays the message: Disk group dg1: import failed: No valid disk found containing disk group.
Did anyone run into the same problem?
The dump of the private region for every single disk in VxVM returns the following error:
# /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/did/rdsk/d22s2
VxVM vxprivutil ERROR V-5-1-1735 scan operation failed:
Format error in disk private region
Any help or suggestion would be greatly appreciated
Thx
Max

If I understand correctly, you had VxVM configured before you installed Sun Cluster - correct? And after installing Sun Cluster you can no longer import your disk groups.
The first thing you need to know is that you must register the disk groups with Sun Cluster - this happens automatically with Solaris Volume Manager but is a manual process with VxVM. Note that you will also have to update the configuration after any changes to the disk group, e.g. permission changes, volume creation, etc.
You need to use the scsetup menu to achieve this, though it can be done via the command line using an scconf command.
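For example, on SC 3.1 the registration and later resynchronization look roughly like this (disk group and node names are placeholders; syntax from memory, so double-check the scconf man page before running):
# scconf -a -D type=vxvm,name=dg1,nodelist=node1:node2
# scconf -c -D name=dg1,sync
The first command registers dg1 as a cluster device group on the two nodes; the second pushes any subsequent disk group changes into the cluster configuration.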
Having said that, I'm still confused by the error. See if the above solves the problem first.
Regards,
Tim
---

Similar Messages

  • Cannot create ASM disk groups in DBCA - Oracle 11gR1, Windows 32bit

    Good afternoon,
    Using 11gR1 on Windows XP, I need help installing a database (single instance) using ASM: I am not able to select disks to be stamped for use by ASM. I have done the following steps:
    1. Installed the Oracle 11gR1 software (no database created - just the software)
    2. Connected nine (9) SCSI hard drives to the system
    3. Created a primary partition on each hard drive, formatted the partition but did *not* assign a drive letter
    4. Created a listener service (working properly - checked with lsnrctl  status)
    5. Started DBCA to create a single database instance using ASM
    In DBCA I performed the following:
    a. at the welcome screen, click Next
    b. at "Step 1 of 16", click Next (accepting the default "Create a Database")
    c. at "Step 2 of 16", click Next (accepting the default "General Purpose or Transaction Processing)
    d. at "Step 3 of 16", "Global Database Name" set to "dbca", "SID" set to "dbca", click Next
    e. at "Step 4 of 16", click Next (accepting the defaults)
    f. at "Step 5 of 16", click "Use same administrative passwords for all accounts" and entered password, click Next
    g. at "Step 6 of 16", click "Automatic Storage Management (ASM)", click Next
    Everything seems just fine until I reach the following step
    h. at "Step 7 of 16", click "Create New"
        - The Create Disk Group window pops up (there is nothing shown in the "Select Member Disk Area")
        - set the "Disk Group Name" to  "DATAGROUP"
        - click on "Show All" (no devices shown after clicking)
        - click on "Stamp Disks..."  this causes the "asmtool operation" window to pop up
          - click on "Add or change label", click on Next
          - the "Select Disks" step appears.   There are 11 hard drives listed but NONE of them
            can be selected - all I can do here is click on Cancel*The question:* Why aren't the drives selectable and what do I need to do to make them selectable ?
    Thank you very much for your help,
    John.

    > The question: Why aren't the drives selectable and what do I need to do to make them selectable?
    To answer my own question (now that I've figured it out), the reason is that the partition should not be formatted.
    When Windows displays the "New Partition Wizard", it is important to:
    1. not assign a drive letter
    2. and select "Do not format this partition"
    This will cause Windows to mount the volume, making it accessible to Oracle.
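    For example, a minimal diskpart session that prepares a disk this way (the disk number is a placeholder): create the primary partition, but do not format it and do not assign a letter:
    C:\> diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary
    DISKPART> exit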
    Hopefully this will help someone not fall in this trap as I did,
    John.

  • How to remove Disk Group after deinstall of Grid Infrastructure

    Hello,
    I deinstalled an Oracle Grid Infrastructure for a standalone server. I made the mistake of telling OUI not to drop the Disks/Disk Group. Now when I try to do a new install it shows the disks as members of a Disk Group already on the page 'Create Disk Groups'. How can I get those disks out of the disk group? I tried oracleasm but it does not see any Disk Groups.

    OS is CentOS 5.6
    I am using formatted (ext3) logical volumes. Let me explain a bit why I am doing this. This is a system I lease for my own personal use, so that I can learn Oracle High Availability. Unfortunately, due to how each VM is provisioned, only a single virtual disk (raw disk) is allocated per instance. I have not leased a dedicated server due to the cost, and so do not have/cannot get multiple raw disks. The logical volumes aren't set up with any RAID on my virtual machine; they are just additional logical volumes assigned internally to my server. The underlying host server uses RAID-10.
    When I set up the Disk Group "DATA" for ASM I used the following:
    [root@remarkable:/dev/mapper]pwd
    /dev/mapper
    [root@remarkable:/dev/mapper]ls -l
    total 0
    crw-rw-rw- 1 root root 10, 63 Jul 2 12:55 control
    brw-rw-rw- 1 root disk 253, 0 Jul 2 12:55 VolGroup00-LogVol00
    brw-rw-rw- 1 root disk 253, 2 Aug 10 18:07 VolGroup00-LogVol001
    brw-rw-rw- 1 root disk 253, 3 Aug 10 18:07 VolGroup00-LogVol002
    brw-rw-rw- 1 root disk 253, 4 Aug 10 18:07 VolGroup00-LogVol003
    brw-rw-rw- 1 root disk 253, 1 Jul 2 12:55 VolGroup00-LogVol01
    That worked fine when I set it up initially. But after I did the install, I realized the groups were not set correctly for the oracle user or the ASM home. So I uninstalled ASM and then set the groups correctly for the Oracle user. However, when I uninstalled ASM I mistakenly chose not to blow away the disk group. So now when I try to install the Grid Infrastructure using OUI, it shows the 3 disks I want to use as already being members of a disk group, and I don't know how to change that. I tried the oracleasm deletedisk command, but oracleasm does not think any disk groups exist.
    [root@remarkable:/dev/mapper]oracleasm listdisks
    [root@remarkable:/dev/mapper]oracleasm deletedisk Disk1
    Disk "DISK1" does not exist or is not instantiated
    So I am not sure where to go from here.
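    For the record, the usual way out of this state - an assumption on my part, and destructive - is to zero the ASM header on each stamped device so OUI no longer sees it as a disk group member. The device name below is taken from the listing above purely as an illustration:
    # WARNING: destroys the ASM metadata (and anything else) at the start of the device
    dd if=/dev/zero of=/dev/mapper/VolGroup00-LogVol001 bs=1M count=10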

  • Cleaning up ASM disk group after failed rman duplicate session?

    I am using rman duplicate to create a clone of a production database. The rman duplicate failed in phase 1 (restoration of datafiles), and my datafiles are being restored to an ASM disk group.
    I read in the metalink note Manual Completion of a Failed RMAN Duplicate [ID 360962.1] the following...
    Note : From 10g onwards, if duplicate failed during step 1, which is the restore of datafiles, it is probably best to restart the duplicate process. Any files that have already been restored will be skipped and the duplicate process can continue without manual intervention
    So I followed the advice in the note and started the rman duplicate script over again from the beginning. I am hoping it will have the intelligence to skip any files that have already been restored. But if it does not - since this is 10.1.0.3, and I cannot use the asmcmd command to connect to the ASM instance to delete any files that have been created - how can I delete any of the files, given that the database instance never got created correctly and cannot open?
    Any thought on how I can now cleanup the files in the ASM disk group? Also, has anyone started an rman duplicate again after it has previously failed and did it actually skip the files that are already there as stated in the metalink note?
    Thanks.

    Unfortunately v$asm_file does not have the entire file name or path; it just gives you the following...
    SQL> desc v$asm_file
    Name                 Null?    Type
    GROUP_NUMBER                  NUMBER
    FILE_NUMBER                   NUMBER
    COMPOUND_INDEX                NUMBER
    INCARNATION                   NUMBER
    BLOCK_SIZE                    NUMBER
    BLOCKS                        NUMBER
    BYTES                         NUMBER
    SPACE                         NUMBER
    TYPE                          VARCHAR2(64)
    REDUNDANCY                    VARCHAR2(6)
    STRIPED                       VARCHAR2(6)
    CREATION_DATE                 DATE
    MODIFICATION_DATE             DATE
    This really stinks. I wish I didn't have to work with 10gR1 at all, but the database I am trying to move to a new server and new storage is running this version, and we did not want to upgrade the existing database before attempting the migration, because the upgrade would be reliant upon tape backups, which I do not have a lot of confidence in...
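    For what it is worth, full +DISKGROUP/... paths can usually be reconstructed from the ASM instance by walking v$asm_alias; the hierarchical query below is a common community recipe rather than anything from this thread, and the file name in the DROP FILE example is purely illustrative:
    $ export ORACLE_SID=+ASM
    $ sqlplus / as sysdba
    SQL> SELECT CONCAT('+' || gname, SYS_CONNECT_BY_PATH(aname, '/')) full_path
      2    FROM (SELECT g.name gname, a.parent_index pindex,
      3                 a.name aname, a.reference_index rindex
      4            FROM v$asm_alias a, v$asm_diskgroup g
      5           WHERE a.group_number = g.group_number)
      6   START WITH MOD(pindex, POWER(2, 24)) = 0
      7  CONNECT BY PRIOR rindex = pindex;
    SQL> ALTER DISKGROUP DG1 DROP FILE '+DG1/CLONE/DATAFILE/users.259.1';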

  • Wrong hostname setting after Sun Cluster failover

    Hi Gurus,
    our PI system has been set up to fail over in a Sun Cluster with a virtual hostname s280m (primary host s280, secondary host s281).
    The basis team set up the system profiles to use the virtual hostname, and I did all the steps in SAP Note 1052984 "Process Integration 7.1 High Availability" (my PI is 7.11)
    Now I believe I have substituted "s280m" in every spot where "s280" previously existed, but when I start the system on the DR box (s281), the Java stack throws errors when starting. Both the SCS01 and DVEBMGS00 work directories contain a file called dev_sldregs with the following error:
    Mon Apr 04 11:55:22 2011 Parsing XML document.
    Mon Apr 04 11:55:22 2011 Supplier Name: BCControlInstance
    Mon Apr 04 11:55:22 2011 Supplier Version: 1.0
    Mon Apr 04 11:55:22 2011 Supplier Vendor:
    Mon Apr 04 11:55:22 2011 CIM Model Version: 1.5.29
    Mon Apr 04 11:55:22 2011 Using destination file '/usr/sap/XP1/SYS/global/slddest.cfg'.
    Mon Apr 04 11:55:22 2011 Use binary key file '/usr/sap/XP1/SYS/global/slddest.cfg.key' for data decryption
    Mon Apr 04 11:55:22 2011 Use encryted destination file '/usr/sap/XP1/SYS/global/slddest.cfg' as data source
    Mon Apr 04 11:55:22 2011 HTTP trace: false
    Mon Apr 04 11:55:22 2011 Data trace: false
    Mon Apr 04 11:55:22 2011 Using destination file '/usr/sap/XP1/SYS/global/slddest.cfg'.
    Mon Apr 04 11:55:22 2011 Use binary key file '/usr/sap/XP1/SYS/global/slddest.cfg.key' for data decryption
    Mon Apr 04 11:55:22 2011 Use encryted destination file '/usr/sap/XP1/SYS/global/slddest.cfg' as data source
    Mon Apr 04 11:55:22 2011 ******************************
    Mon Apr 04 11:55:22 2011 *** Start SLD Registration ***
    Mon Apr 04 11:55:22 2011 ******************************
    Mon Apr 04 11:55:22 2011 HTTP open timeout     = 420 sec
    Mon Apr 04 11:55:22 2011 HTTP send timeout     = 420 sec
    Mon Apr 04 11:55:22 2011 HTTP response timeout = 420 sec
    Mon Apr 04 11:55:22 2011 Used URL: http://s280:50000/sld/ds
    Mon Apr 04 11:55:22 2011 HTTP open status: false - NI RC=0
    Mon Apr 04 11:55:22 2011 Failed to open HTTP connection!
    Mon Apr 04 11:55:22 2011 ****************************
    Mon Apr 04 11:55:22 2011 *** End SLD Registration ***
    Mon Apr 04 11:55:22 2011 ****************************
    Notice it is using the wrong hostname (s280 instead of s280m). Where did I forget to change the hostname? Any ideas?
    thanks in advance,
    Peter

    Please note that the PI system is transparent about the failover mechanism used.
    When you configure the parameters according to the mentioned note, it means that when one of the nodes is down, the load will be sent to another node under the same Web Dispatcher/load balancer.
    The Solaris failover solution, by contrast, covers the whole environment, including the Web Dispatcher, database and all nodes.
    Therefore, please check the configuration as per the page below, which talks specifically about the Solaris failover solution for SAP usage:
    http://wikis.sun.com/display/SunCluster/InstallingandConfiguringSunClusterHAfor+SAP
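    One thing worth checking - an assumption on my part, not confirmed in this thread - is the SLD destination file itself, since the "Used URL" in the log above comes from slddest.cfg rather than from the instance profile. It can be regenerated to point at the virtual hostname, roughly like this (flags from memory; check sldreg's help output first):
    # regenerate /usr/sap/XP1/SYS/global/slddest.cfg (prompts for host/port/user)
    sldreg -usekeyfile -configure slddest.cfg
    Entering s280m as the host should make the data supplier register against the virtual name.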

  • Problem creating ASM disk groups after installation

    I just completed the grid install successfully on Oracle Linux (see my earlier thread: Candidate ASM disks not showing up in 11.2 installer (Oracle Linux)). Thanks for the help.
    I am running into the same type of issue I ran into during the install, where I can't see the ASM disks.
    When I run asmca to create the additional disk groups, I don't see the two asmlib volumes.
    I tried changing the disk discovery path from the default and I still can't see the disks.
    $ /usr/sbin/oracleasm listdisks
    CRSVOL1
    DATAVOL1
    FRAVOL1
    Do the block devices need to be owned by grid?
    $ ls -l /dev/sdc
    brw-r----- 1 root disk 8, 32 Aug 2 15:21 /dev/sdc
    $ ls -l /dev/sdd
    brw-r----- 1 root disk 8, 48 Aug 2 15:21 /dev/sdd
    $ ls -l /dev/sde
    brw-r----- 1 root disk 8, 64 Aug 2 15:21 /dev/sde
    How can the installer work fine and see the disks and then asmca can't see the disks?
    It must be a permissions issue. How can I see what the error is that is preventing the disks from showing up?

    node 2:
    $ cd /dev/oracleasm
    $ ls -lL
    total 0
    -rw-rw---- 1 grid asmadmin 0 Aug 15 11:08 .check_iid
    drwxr-xr-x 1 root root 0 Aug 15 11:08 disks/
    -rw-rw---- 1 grid asmadmin 0 Aug 15 11:08 .get_iid
    drwxrwx--- 1 grid asmadmin 0 Aug 15 11:08 iid/
    -rw-rw---- 1 grid asmadmin 0 Aug 15 11:08 .query_disk
    -rw-rw---- 1 grid asmadmin 0 Aug 15 11:08 .query_version
    $ ls -lL disks/*
    brw-rw---- 1 grid asmadmin 8, 33 Aug 15 11:08 disks/CRSVOL1
    brw-rw---- 1 grid asmadmin 8, 49 Aug 15 11:08 disks/DATAVOL1
    brw-rw---- 1 grid asmadmin 8, 65 Aug 15 11:08 disks/FRAVOL1
    node 1:
    $ cd /dev/oracleasm
    $ ls -lL
    total 0
    -rw-rw---- 1 grid asmadmin 0 Aug 15 12:30 .check_iid
    drwxr-xr-x 1 root root 0 Aug 15 12:30 disks/
    -rw-rw---- 1 grid asmadmin 0 Aug 15 12:30 .get_iid
    drwxrwx--- 1 grid asmadmin 0 Aug 15 12:30 iid/
    -rw-rw---- 1 grid asmadmin 0 Aug 15 12:30 .query_disk
    -rw-rw---- 1 grid asmadmin 0 Aug 15 12:30 .query_version
    $ ls -lL disks/*
    brw-rw---- 1 grid asmadmin 8, 33 Aug 15 12:30 disks/CRSVOL1
    brw-rw---- 1 grid asmadmin 8, 49 Aug 15 12:30 disks/DATAVOL1
    brw-rw---- 1 grid asmadmin 8, 65 Aug 15 12:30 disks/FRAVOL1
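    If it helps anyone else: the usual next step here (an assumption based on the symptoms, not a confirmed fix from this thread) is to verify what the ASMLib discovery string actually matches, and then point asmca's disk discovery path at the ASMLib devices:
    # oracleasm-discover ships with the oracleasmlib package; the path may differ
    $ /usr/sbin/oracleasm-discover 'ORCL:*'
    In asmca, a discovery path of either ORCL:* or /dev/oracleasm/disks/* is worth trying.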

  • Invalid node name in Sun Cluster 3.1 installation

    Dear all,
    I need your advice in Sun Cluster 3.1 8/05 installation.
    My colleague was installing Sun Cluster 3.1 8/05 on two Sun Netra 440 servers that had been given the hostnames 01-in-01 and 01-in-02. But when he wanted to configure the cluster, a problem occurred.
    The error message is:
    running scinstall: invalid node name
    When we changed the hostnames to in-01 and in-02, the cluster could be configured without problems.
    Why did this problem happen?
    Is it related to the hostnames beginning with a numeric character? If yes, can you point me to documentation that states this?
    Or maybe you have another explanation?
    Thank you for your help.
    regards,
    Henry

    A bug is being logged against this (though obviously you could manually fix the shell script yourself if you were in a hurry).
    The problem partly stems from the restriction on hostnames having been relaxed: RFC 1123 removed RFC 952's limitation of the first character to alphabetic characters only. See man hosts for more info. I guess our code didn't catch up :-)
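    As a quick illustration of the rule in question (a sketch of the old RFC 952-style check, not the actual scinstall code):
    # RFC 952 required the first character to be a letter; RFC 1123 later allowed digits
    $ echo "01-in-01" | grep -Eq '^[A-Za-z][A-Za-z0-9.-]*$' \
        && echo "valid under RFC 952" || echo "invalid under RFC 952"
    invalid under RFC 952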
    Tim
    ---

  • Sharing resources among resource groups in Sun Cluster 3.1

    Hi all,
    Is it possible to share a resource among resource groups? For example:
    lh: resource of type Logical Hostname =lh-res
    /orahome: Oracle binaries and configuration files = orahome-res
    /oradata1: Data for instance 1 = oradata1-res
    /oradata2: Data for instance 2 = oradata2-res
    rg1 ( resource group for Oracle instance 1) ora1-rg = lh + orahome-res + oradata1-res
    rg2 (resource group for Oracle instance 2) ora2-rg = lh + orahome-res + oradata2-res
    Thanks,
    Enrique

    Hi Enrique,
    if lh represents the same address and the same resource name, then the answer is no, it is not possible: a resource can belong to only one resource group.
    If it did work and both RGs were running on different nodes, you would create duplicate IP address errors, which cannot be your intent.
    Which behavior do you want to achieve?
    Detlef

  • Import Disk Group

    Hi,
    I notice that importing a disk group during failover takes about 60 seconds. Is that expected?
    Is there a way to reduce this time?

    Hi Galsh,
    I assume that you are using Sun Cluster 2.x. If you use SC 3.0.x, there is no visible importing or exporting of disk groups (when using cluster file systems).
    I have found that my import timings decreased when I used "vxedit set nconfig=all <dgname>".
    This keeps a configuration copy on every disk, which reduces the search time for the disk group's configuration.
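    For example (the disk group name is a placeholder):
    # keep a configuration copy on every disk in the group, then verify:
    # vxedit set nconfig=all dg1
    # vxdg list dg1
    The "config disk" lines in the vxdg list output show where the copies live.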
    Hope this was useful.
    Rgds,
    Colin

  • Migrate Sun Cluster (+ RAC) disks to new hardware running Sun Cluster (+ RAC)

    Hello,
    We have old hardware (v490s) running Sun Cluster 3.2 + Oracle RAC 10.2.0.4.0 connected to SAN. We need to move to T4. Oracle advised against including new hardware into existing cluster, so we are planning on building a new cluster with T4's, same software (Solaris 10, Sun Cluster 3.2, RAC 10.2.0.4.0).
    When ready, we plan to shut down existing cluster, zone new cluster to existing disks and bring up everything on new hardware (simply stated).
    Will it work?
    Any gotchas - like need to clear disk ids or Sun Cluster panicking? RAC panicking? Any reference docs out there?
    Thanks
    user12961096

    > Do we absolutely need that in our new setup or could we forgo that additional layer? Would Sun Cluster give us anything that the OS + RAC doesn't give us?
    Yes, Oracle Solaris Cluster does make things a lot easier. It looks after your device space and gives you consistent DID devices for CRS/RAC. It gives you the choice of sQFS, raw metasets, or ASM. It has clprivnet, which is a lot easier and performs better than an IPMP solution. The node failure detection time is <= 10 seconds, which is quicker than CRS on its own, and it uses SCSI fencing instead of a STONITH approach. Finally, you have all the off-the-shelf agents that Solaris Cluster offers.
    However, if you are only doing RAC and you just want ASM and you don't need the last few seconds of failure detection that OSC gives you and you think STONITH is good enough for your fencing purposes, then CRS on its own is perfect. There are many, many deployments both with and without OSC, it's not a simple yes/no answer.
    Having worked for the Solaris Cluster group, I'm still slightly biased toward including it rather than going without. Others have the alternate view! :-)
    Hope that helps,
    Tim
    ---

  • Sun Cluster 3.2 without shared storage (Sun StorageTek Availability Suite)

    Hi all.
    I have a two-node Sun Cluster.
    I have configured and installed AVS on these nodes (AVS remote mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    What I did:
    Created remote mirror with AVS.
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#
    Created a resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#
    Created a SUNW.HAStoragePlus resource with the AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0#
    By default everything works fine. But if I need to switch the RG to the other node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0#
    If I change the state to logging, it all works.
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0#
    How can I do this without creating an SC agent for it?
    Anatoly S. Zimin

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC) - rather it is something that has been developed in the open source community. As such it is not documented in the main Sun SC documentation set. Furthermore, support questions for it should be directed to the author of the module.
    Regards,
    Tim
    ---

  • Sun Cluster resource for Legato client (LGTO.clnt) in an Oracle database

    hi everyone
    I am trying to create an LGTO.clnt resource in the oracle-rg resource group in Sun Cluster 3.2 with the following commands:
    clresource create -g resource_group_name -t LGTO.clnt \
    -x clientname=virtual_hostname -x owned_paths=pathname_1,
    pathname_2[,...] resource_name
    I just need to know what the value of the Owned_Paths variable in the above command should be,
    i.e. what path it is referring to ($ORACLE_HOME, the global devices path, etc.)?

    Hello,
    The Owned_Paths parameter lists the paths (or mountpoints) the Legato client will be able to back up from.
    To configure a Legato client in the NetWorker console (and have it managed as a cluster client), you need to declare in Owned_Paths the paths you want to save.
    A saveset path can be a directory under one of the Owned_Paths.
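    For example (all names below are hypothetical placeholders):
    clresource create -g oracle-rg -t LGTO.clnt \
    -x clientname=ora-lh -x owned_paths=/oradata,/orahome lgto-clnt-rs
    Here /oradata and /orahome are the mountpoints the client may back up from.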
    Regards
    Pablo Villanueva.

  • Cannot use file for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Serve

    Hi
    Windows Server 2012 cluster running a SQL Server 2012 cluster with two instances. One works fine; on the second instance, when I try to create a DB, I get this message:
    Cannot use file  for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql
    Server does not have a dependency on it.
    CREATE DATABASE failed. Some file names listed could not be created. Check related errors. (Microsoft SQL Server, Error: 5184)
    Any help please
    kam
    KAMEL

    Hi Saurabh,
    Exactly - I have SQL Server 2012 failover clustering in Windows Server 2012 with two nodes and two instances; I run them on the same server, and each instance has three drives: Backup, Data, and Log.
    KAMEL
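    For reference, error 5184 usually means the disk holding the new database files is not a dependency of the SQL Server cluster resource. A hedged sketch of the usual fix (resource and disk names are assumptions):
    # PowerShell, run on a cluster node; then retry CREATE DATABASE
    Add-ClusterResourceDependency -Resource "SQL Server (INST2)" -Provider "Cluster Disk 2"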

  • Sun Cluster 3.2 - WARNING: Cannot enable monitoring on resource-group

    clrg online -emM ora-1line-rg
    (C348385) WARNING: Cannot enable monitoring on resource ora-1line-rs because it already has monitoring enabled. To force the monitor to restart, disable monitoring using 'clresource unmonitor ora-1line-rs' and re-enable monitoring using 'clresource monitor ora-1line-rs'.
    (logical host reference)
    (C348385) WARNING: Cannot enable monitoring on resource ora-hastp-rs because it already has monitoring enabled. To force the monitor to restart, disable monitoring using 'clresource unmonitor ora-hastp-rs' and re-enable monitoring using 'clresource monitor ora-hastp-rs'.
    (hastorageplus reference)
    I am able to unmonitor and monitor the resources manually. What is the cause of these WARNING messages? This is for Oracle, and we have yet to complete the installation of HA-Oracle. Oracle is not installed, and tnsnames.ora and listener.ora are not configured. Is this the reason? If so, could someone explain why you cannot bring the resource group online until after the application has been installed.
    Thanks in advance,
    Ryan

    As the manual says for clrs create:
    By default, resources are created in the enabled state with monitoring enabled.
    So when you issue clrg online -emM, it is simply warning you that these other resources weren't disabled. Note they wouldn't have been started, because the RG would have been offline.
    Does that explain it? If not, ask more questions.
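    If you do want to force the monitors to restart, the sequence the warning itself suggests is:
    clresource unmonitor ora-1line-rs
    clresource monitor ora-1line-rs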
    Tim
    ---

  • 2 node Sun Cluster 3.2, resource groups not failing over.

    Hello,
    I am currently running two v490s connected to a 6540 Sun StorageTek array. After attempting to install the latest OS patches, the cluster seems nearly destroyed. I backed out the patches, and right now only one node can process the resource groups properly. The other node will appear to take over the Veritas disk groups but will not mount them automatically. I have been working on this for over a month and have learned a lot and fixed a lot of other issues that came up, but the cluster is just not working properly. Here is some output.
    bash-3.00# clresourcegroup switch -n coins01 DataWatch-rg
    clresourcegroup: (C776397) Request failed because node coins01 is not a potential primary for resource group DataWatch-rg. Ensure that when a zone is intended, it is explicitly specified by using the node:zonename format.
    bash-3.00# clresourcegroup switch -z zcoins01 -n coins01 DataWatch-rg
    clresourcegroup: (C298182) Cannot use node coins01:zcoins01 because it is not currently in the cluster membership.
    clresourcegroup: (C916474) Request failed because none of the specified nodes are usable.
    bash-3.00# clresource status
    === Cluster Resources ===
    Resource Name              Node Name         State     Status Message
    ftp-rs                     coins01:zftp01    Offline   Offline
                               coins02:zftp01    Offline   Offline - LogicalHostname offline.
    xprcoins                   coins01:zcoins01  Offline   Offline
                               coins02:zcoins01  Offline   Offline - LogicalHostname offline.
    xprcoins-rs                coins01:zcoins01  Offline   Offline
                               coins02:zcoins01  Offline   Offline - LogicalHostname offline.
    DataWatch-hasp-rs          coins01:zcoins01  Offline   Offline
                               coins02:zcoins01  Offline   Offline
    BDSarchive-res             coins01:zcoins01  Offline   Offline
                               coins02:zcoins01  Offline   Offline
    I am really at a loss here. Any help appreciated.
    Thanks

    My advice is to open a service call, provided you have a service contract with Oracle. There is much more information required to understand that specific configuration and to analyse the various log files. This is beyond what can be done in this forum.
    From your description I can guess that you want to fail over a resource group between non-global zones. And it looks like the zone coins01:zcoins01 is reported as not currently being in the cluster membership.
    Obviously node coins01 needs to be a cluster member. If it is reported as online and has joined the cluster, then you need to verify if the zone zcoins01 is really properly up and running.
    Specifically you need to verify that it reached the multi-user milestone and all cluster related SMF services are running correctly (ie. verify "svcs -x" in the non-global zone).
    You mention Veritas disk groups. Note that VxVM disk groups are handled at the global cluster level (i.e. in the global zone); a VxVM disk group is not imported for a non-global zone. However, with SUNW.HAStoragePlus you can ensure that file systems on top of VxVM disk groups are mounted into a non-global zone. But again, more information would be required to see how you configured things and why they don't work as you expect.
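    A minimal sketch of that pattern (the mount point is invented for illustration; the resource and zone names come from the output above):
    # RG whose node list names non-global zones, plus an HAStoragePlus resource
    # mounting a VxVM-backed file system (listed in vfstab) into the zone
    clresourcegroup create -n coins01:zcoins01,coins02:zcoins01 DataWatch-rg
    clresource create -g DataWatch-rg -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/datawatch DataWatch-hasp-rs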
    Regards
    Thorsten
