Sun Cluster 3.1u4 - cannot mount global mount points

I have the following in my /etc/vfstab:
/dev/md/controlm/dsk/d20 /dev/md/controlm/rdsk/d20 /global/ctmprod ufs 2 no global,logging
/dev/md/controlm/dsk/d30 /dev/md/controlm/rdsk/d30 /global/ctmprod/oracle ufs 2 no global,logging
when I try to mount them I get the following message:
mount: /dev/md/controlm/dsk/d20 or /global/controlm, no such file or directory
mount: /dev/md/controlm/dsk/d30 or /global/controlm, no such file or directory
If I comment out those lines in /etc/vfstab and mount the file systems manually, they work fine; they just will not mount using the /etc/vfstab entries.
Any help would be great.
Thank you.

Thank you for your response. I went back and double-checked everything, and it turned out that I had created the mount points on one server but not on the other.
Again, thank you.
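For anyone hitting the same thing: with the global option, the mount directory has to exist on every cluster node, because the file system is mounted cluster-wide. A quick check (a minimal sketch, using the device and directory names from the vfstab above) is:
# ls -ld /global/ctmprod /global/ctmprod/oracle     (run on every node)
# mkdir -p /global/ctmprod/oracle                   (creates any missing directories)
# mount /global/ctmprod
# mount /global/ctmprod/oracle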

Similar Messages

  • Pxvfs:mount(): global mounts are not enabled (need to run "clconfig -g")

    Dear All,
    I have a two-node cluster, and one of the cluster nodes rebooted automatically. After it boots, it shows the following errors.
    Mar 11 14:23:21 scrbdomderue005 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/globaldevices:default: Method "/usr/cluster/lib/svc/method/globaldevices start" failed with exit status 96.
    Mar 11 14:23:21 scrbdomderue005 svc.startd[8]: [ID 748625 daemon.error] system/cluster/globaldevices:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Mar 11 14:23:21 scrbdomderue005 Cluster.CCR: [ID 795553 daemon.error] /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@2 is not available in /etc/mnttab.
    Mar 11 15:04:19 scrbdomderue005 cl_runtime: [ID 403317 kern.warning] WARNING: pxvfs:mount(): global mounts are not enabled (need to run "clconfig -g" first)
    scrbdomderue005:/usr/cluster/lib/sc# ./clconfig -g
    cladm: CLGBLMNT_ENABLE: Device busy
    scrbdomderue005:/usr/cluster/lib/sc#
    When I tried to mount it manually, it failed:
    scrbdomderue005:/# mount /dev/dsk/c1t1d0s5 /global/.devices/node@2
    mount: No such device
    mount: Cannot mount /dev/dsk/c1t1d0s5
    But when I tried to mount the device on a new mount point, it mounted and I was able to see the data:
    scrbdomderue005:/# mount /dev/dsk/c1t1d0s5 /mnt
    scrbdomderue005:/mnt# ls
    dev devices lost+found
    Could someone please guide me on how to resolve this issue?
    Thanks in advance.
    veera

    What have you changed recently? Patches? Was it booting successfully before?
    Maybe the /global/.devices/node@X file system is not mounting properly on one node?
    Tim
    ---
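    For reference, a rough recovery sequence for this state (a sketch only, assuming the /global/.devices/node@2 vfstab entry and the underlying slice are still intact) would be to remount the global-devices file system, clear the SMF maintenance state, and re-run device discovery:
    # grep node@2 /etc/vfstab          (confirm the entry still points at the right device)
    # mount /global/.devices/node@2
    # svcadm clear svc:/system/cluster/globaldevices:default
    # /usr/cluster/bin/scgdevs
    If the slice mounts fine on /mnt but not on /global/.devices/node@2, the vfstab entry or the device path is usually what has changed.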

  • /global mount point without mirror

    hi,
    OS: Solaris 10 11/06
    SOFT: Sun Cluster 3.1u4
    The system disks of both nodes use RAID 1 (mirroring).
    After six months I have found that the /global partition is not mirrored and is mounted as a plain partition of the first disk on each node.
    Is it possible to migrate these two partitions into the RAID 1 mirror on each node, and if so, how?
    --mpech

    hi,
    mtalha wrote: I don't think you will have any issues mirroring them now.
    So you mean we won't have any problems with mirroring? Can you outline a simple scenario for doing that, then?
    --mpech
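    For the mirroring itself, a rough SVM outline (a sketch only; d30/d31/d32 and the slice names are illustrative, and note that the metadevice under /global/.devices/node@N must have a name that is unique across the whole cluster, because that file system is mounted globally):
    # metainit -f d31 1 1 c0t0d0s3     (submirror on the existing /global slice)
    # metainit d32 1 1 c0t1d0s3        (submirror on the second disk)
    # metainit d30 -m d31              (create a one-way mirror)
    (edit that node's /etc/vfstab so /global/.devices/node@1 uses /dev/md/dsk/d30 and /dev/md/rdsk/d30, then reboot the node)
    # metattach d30 d32                (attach the second submirror after the reboot)
    Repeat on the second node with different metadevice names (for example d130/d131/d132) so the two global-devices mirrors do not clash.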

  • Does oracle clusterware and oracle RAC require sun cluster

    Hi,
    I have to set up Oracle RAC on Solaris 10 SPARC, so is it necessary to install Sun Cluster 3.2 and the QFS file system on Solaris?
    I have two Sun SPARC servers with Solaris 10 installed and a shared LUN setup (SAN disk, RAID 5 partitions).
    It has to be a two-node setup for RAC load balancing.
    Regards
    Prakash

    Hi Prakash,
    very interesting point. You asked:
    "As per the Oracle Clusterware documents, cluster manager support is only for Windows and Linux. In the case of Solaris SPARC, will the cluster manager get configured?"
    The term "Cluster Manager" refers to a "cluster manager" that Oracle used in 9i times and this one was indeed only available on Linux / Windows.
    Therefore, let me, please, ask you something: Which version of Oracle RAC do you plan to use?
    Because for 9i RAC, you would need Sun or Veritas Cluster on Solaris. The answers given here that Sun Cluster would not be required assume 10g RAC or higher.
    Now, you might see other dependencies which can be resolved by Sun Cluster. I cannot comment on those.
    For the RAW setup: having RAW disks (not raw logical volumes) will be fine without Veritas and ASM on top.
    Hope that helps. Thanks,
    Markus

  • Sun Cluster Core Conflict - on SUN Java install

    Hi
    We had a prototype cluster that we were playing with over two nodes.
    We decided to uninstall the cluster by putting node into single user mode and running scinstall -r.
    Afterwards we found that the Java Availability Suite was a little messed up - maybe because the kernel/registry had not been updated - it thought the cluster and agent software was uninstalled and would not let us re-install. All the executables from /etc/cluster/bin had been removed from the nodes.
    So, On both nodes we ran the uninstall program from /var/sadm/prod/... and then selected cluster and agents to uninstall.
    On the first node, this completely removed the Sun Cluster components and then allowed us to re-install the cluster software successfully.
    On the second node, for some reason, it has left behind the component "Sun Cluster Core", and will not allow us to remove it with the uninstall.
    When we try to re-install we get the following:
    "Conflict - incomplete version of Sun Cluster Core has been detected"
    It then points us to the Sun Cluster upgrade guide on sun.com.
    My question is - how do we 'clean up' this node and remove the sun cluster core so we can re-install the sun cluster software from scratch?
    I don't quite understand how this has been left behind....
    thanks in advance
    S1black.

    You can use prodreg directly to clean up when your de-install has gone bad.
    Use:
    # prodreg browse
    to list the products. You may need to recurse down into the individual items. Then use:
    # prodreg unregister ...
    to unregister and pkgrm to remove the packages manually.
    That has worked for me in the past. Not sure if it is the 'official' way though!
    Regards,
    Tim
    ---
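    As an illustration of the above, the cleanup could look roughly like this (a sketch; the prodreg flags are from memory, so check them against prodreg(1M) and what pkginfo reports on the node before removing anything):
    # prodreg browse                        (recurse down until you find the Sun Cluster Core entry and note its UUID)
    # prodreg unregister -f -u <uuid>       (the UUID reported by prodreg browse)
    # pkginfo | grep -i cluster             (list the Sun Cluster packages still installed)
    # pkgrm <SUNWsc... packages>            (remove the leftover packages reported above)
    After that the installer should no longer detect an incomplete Sun Cluster Core.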

  • Sun cluster failed when switching, mount /global/ I/O error .

    Hi all,
    I am having a problem when switching over between two Sun Cluster nodes.
    Environment:
    Two nodes with Solaris 8 (Generic_117350-27), two Sun D2 arrays, VxVM 3.2, and Sun Cluster 3.0.
    Problem description:
    scswitch failed, so I ran scshutdown and booted up both nodes. One node failed to come up because of a VxVM boot failure.
    The other node boots up normally but cannot mount the /global directories. Mounting manually works fine.
    # mount /global/stripe01
    mount: I/O error
    mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
    # vxdg import globdg
    # vxvol -g globdg startall
    # mount /dev/vx/dsk/globdg/mirror-vol03 /mnt
    # echo $?
    0
    port:root:/global/.devices/node@1/dev/vx/dsk 169# mount /global/stripe01
    mount: I/O error
    mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
    Need help urgently
    Jeff

    I would check your patch levels. I seem to remember there was a linker patch that caused an issue with mounting /global/.devices/node@X.
    Tim
    ---
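    A few checks that might narrow this down (a sketch, using the names from the output above): a global mount needs the device group online and the node's global-devices file system mounted, so verify those before retrying it:
    # scstat -D                              (which node currently masters globdg, and is it online?)
    # df -k | grep /global/.devices          (the node@N file system must be mounted for global mounts to work)
    # scswitch -z -D globdg -h <this-node>   (move the device group to the node doing the mount, if needed)
    # mount /global/stripe01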

  • Cannot mount lofi globaldevice in sun cluster

    I have the following problem:
    df -k shows "df: cannot statvfs /global/.devices/node@1: I/O error", and failover to the standby node fails. After a reboot of the whole cluster, everything returns to normal.
    Are there any steps to reconfigure this other than a reboot?
    Regards,
    Hedgehog

    Can you explain what happened in the time leading up to this problem? When you say "failover fail to standby node" do you mean that your applications did not switch over properly?
    I would have expected that you should have been able to simply remount this file system manually. Did the file system still exist? If you are running ZFS for the root file system, the global-devices file system would be a lofi device backed by a file called /.globaldevices. Maybe you accidentally removed the file?
    Regards,
    Tim
    ---
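    As a follow-up to the /.globaldevices point, a quick way to check the lofi-backed setup on a ZFS-root node (a sketch; paths follow the standard layout, so verify them on your system):
    # ls -l /.globaldevices                  (the backing file for the lofi device should still exist)
    # lofiadm                                (is /.globaldevices still associated with a /dev/lofi device?)
    # df -k /global/.devices/node@1
    # svcadm clear svc:/system/cluster/globaldevices:default     (once the underlying problem is fixed)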

  • Cluster global mount point failed

    hi all:
    I used VMware to build a two-node Sun Cluster. After the build completed successfully, I found some errors in the messages file:
    Jul 17 09:01:33 sun1 genunix: [ID 233861 kern.warning] WARNING: file system 'pxfs' version mismatch
    Jul 17 09:01:34 sun1 svc.startd[8]: [ID 748625 daemon.error] system/cluster/globaldevices:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Jul 17 09:01:35 sun1 Cluster.CCR: [ID 795553 daemon.error] /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@2 is not available in /etc/mnttab.
    # more /etc/vfstab
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c0d0s1 - - swap - no -
    /dev/dsk/c0d0s0 /dev/rdsk/c0d0s0 / ufs 1 no -
    #/dev/dsk/c0d0s7 /dev/rdsk/c0d0s7 /globaldevices ufs 2 yes -
    /devices - /devices devfs - no -
    sharefs - /etc/dfs/sharetab sharefs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes -
    /dev/did/dsk/d1s7 /dev/did/rdsk/d1s7 /global/.devices/node@2 ufs 2 no global
    # modinfo | grep pxfs                    (no output)
    # find /kernel -name pxfs
    /kernel/fs/amd64/pxfs
    /kernel/fs/pxfs
    # modload /kernel/fs/amd64/pxfs
    can't load module: Invalid argument
    # modload /kernel/fs/pxfs
    can't load module: No such device or address
    How do I solve this problem? Thanks.

    I have installed Sun Cluster 3.2 on Solaris 10 U5 (Generic_137112-07) and found the same issue. I have just a single-node cluster, as the second piece of hardware is already running applications in production without cluster. My cluster node will replace it, and then the other machine will be rebuilt to the same OS spec and joined to the cluster later.
    I see a nice big message that says:
    Sep 12 21:19:49 fnhubjcaps01 Cluster.CCR: [ID 674994 daemon.error] /usr/cluster/bin/scgdevs: Filesystem /global/.devices/node@1 is not available in /etc/mnttab.
    Sep 12 21:21:35 fnhubjcaps01 Cluster.CCR: [ID 914260 daemon.warning] Failed to retrieve global fencing status from the global name server
    Sep 12 21:21:35 fnhubjcaps01 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/globaldevices:default: Method "/usr/cluster/lib/svc/method/globaldevices start" failed with exit status 96.
    Sep 12 21:21:35 fnhubjcaps01 svc.startd[8]: [ID 748625 daemon.error] system/cluster/globaldevices:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    On the console I see:
    mount: /dev/did/dsk/d3s5 no such device
    Trying to remount /global/.devices/node@1
    mount: /dev/did/dsk/d3s5 no such device
    WARNING - Unable to mount one or more of the following filesystem(s):
    /global/.devices/node@1
    if this is not repaired, global devices will be unavailable.
    Run mount manually (mount filesystem...).
    after the problems are corrected, please clear the maintenance flag on globaldevices by running the command (lists svcadm clear command)
    The slice I used for /globaldevices was c3t0d0s5, and before the last reboot (see reasoning below) it was indeed mapped to /dev/did/(r)dsk/d3s5. What it is mapped to now I can't say, but clearly the device linkage seems to have changed.
    The kicker here is that all was well: I have 8 resource groups, each with logical host and HAStoragePlus ZFS pool resources. I just did a 'touch /reconfigure; reboot' in order to troubleshoot some SAN LUNs where I only have a single path (the ones functioning normally have two paths). The server comes back online, and BLAM, now the cluster is a mess.
    cldev status shows all DIDs in Unknown status.
    Edited by: joebob23045 on Sep 12, 2008 2:29 PM
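    When the DID linkage looks like it has moved, a hedged sequence for re-checking it (Sun Cluster 3.2 command names; d3 and c3t0d0s5 are the values from the post above):
    # cldevice list -v | grep c3t0d0         (which DID instance does the slice map to now?)
    # cldevice refresh                       (update the DID configuration after the reconfiguration boot)
    # cldevice populate                      (rebuild the global-devices namespace)
    # grep node@1 /etc/vfstab                (make sure the vfstab entry matches the DID path reported above)
    # mount /global/.devices/node@1
    # svcadm clear svc:/system/cluster/globaldevices:default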

  • Testing ha-nfs in two node cluster (cannot statvfs /global/nfs: I/O error )

    Hi all,
    I am testing HA-NFS (failover) on a two-node cluster. I have a Sun Fire V240, an E250, and Netra st A1000/D1000 storage. I have installed Solaris 10 update 6 and the cluster packages on both nodes.
    I have created one global file system (/dev/did/dsk/d4s7) and mounted it as /global/nfs. This file system is accessible from both nodes. I have configured HA-NFS according to the document Sun Cluster Data Service for NFS Guide for Solaris, using the command-line interface.
    The logical host pings from the NFS client, and I have mounted the share there using the logical hostname. For testing purposes I brought one machine down. After this step the file system gives an I/O error (on both server and client), and when I run the df command it shows:
    df: cannot statvfs /global/nfs: I/O error.
    I have configured with following commands.
    #clnode status
    # mkdir -p /global/nfs
    # clresourcegroup create -n test1,test2 -p Pathprefix=/global/nfs rg-nfs
    I added the logical hostname and IP address to /etc/hosts.
    I commented out the hosts and rpc lines in /etc/nsswitch.conf.
    # clreslogicalhostname create -g rg-nfs -h ha-host-1 -N sc_ipmp0@test1,sc_ipmp0@test2 ha-host-1
    # mkdir /global/nfs/SUNW.nfs
    Created a file called dfstab.user-home in /global/nfs/SUNW.nfs; that file contains the following line:
    share -F nfs -o rw /global/nfs
    # clresourcetype register SUNW.nfs
    # clresource create -g rg-nfs -t SUNW.nfs ; user-home
    # clresourcegroup online -M rg-nfs
    Where did I go wrong? Can anyone provide a document on this?
    Any help?
    Thanks in advance.

    test1#  tail -20 /var/adm/messages
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 344672 daemon.error] Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 28 22:28:54 testlab5 Cluster.SMF.DR: [ID 801855 daemon.error]
    Feb 28 22:28:54 testlab5 Error in scha_cluster_get
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d5s0 has changed to OK
    Feb 28 22:28:54 testlab5 Cluster.scdpmd: [ID 489913 daemon.notice] The state of the path to device: /dev/did/rdsk/d6s0 has changed to OK
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 652011 daemon.warning] svc:/system/cluster/scsymon-srv:default: Method "/usr/cluster/lib/svc/method/svc_scsymon_srv start" failed with exit status 96.
    Feb 28 22:28:58 testlab5 svc.startd[8]: [ID 748625 daemon.error] system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 537175 daemon.notice] CMM: Node e250 (nodeid: 1, incarnation #: 1235752006) has become reachable.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 525628 daemon.notice] CMM: Cluster has reached quorum.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node e250 (nodeid = 1) is up; new incarnation number = 1235752006.
    Feb 28 22:29:23 testlab5 Cluster.RGM.rgmd: [ID 377347 daemon.notice] CMM: Node testlab5 (nodeid = 2) is up; new incarnation number = 1235840337.
    Feb 28 22:37:15 testlab5 Cluster.CCR: [ID 499775 daemon.notice] resource group rg-nfs added.
    Feb 28 22:39:05 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:05 testlab5 Cluster.CCR: [ID 491081 daemon.notice] resource ha-host-1 removed.
    Feb 28 22:39:17 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<5>:cmd=<null>:tag=<>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:17 testlab5 Cluster.CCR: [ID 254131 daemon.notice] resource group nfs-rg removed.
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 224900 daemon.notice] launching method <hafoip_validate> for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, timeout <300> seconds
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 375444 daemon.notice] 8 fe_rpc_command: cmd_type(enum):<1>:cmd=</usr/cluster/lib/rgm/rt/hafoip/hafoip_validate>:tag=<rg-nfs.ha-host-1.2>: Calling security_clnt_connect(..., host=<testlab5>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<1>, ...)
    Feb 28 22:39:30 testlab5 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hafoip_validate> completed successfully for resource <ha-host-1>, resource group <rg-nfs>, node <testlab5>, time used: 0% of timeout <300 seconds>
    Feb 28 22:39:30 testlab5 Cluster.CCR: [ID 973933 daemon.notice] resource ha-host-1 added.

  • Sun Cluster 3.2 - Global File Systems

    Sun Cluster has a Global Filesystem (GFS) that supports read-only access throughout the cluster. However, only one node has write access.
    In Linux, a GFS file system can be mounted by multiple nodes for simultaneous read/write access. Shouldn't this be the same for Solaris as well?
    From the documentation that I have read,
    "The global file system works on the same principle as the global device feature. That is, only one node at a time is the primary and actually communicates with the underlying file system. All other nodes use normal file semantics but actually communicate with the primary node over the same cluster transport. The primary node for the file system is always the same as the primary node for the device on which it is built"
    The GFS is also known as Cluster File System or Proxy File system.
    Our client believes that they can have their application "scaled" and that all nodes in the cluster can have the ability to write to the globally mounted file system. My belief was that the only way this can occur is when the application has failed over, and then the write would occur from the "primary" node that is mastering the application at that time. Any input or clarification will be greatly appreciated. Thanks in advance.
    Ryan

    Thank you very much, this helped :)
    And how seamless is remounting of the block-device LUN if one server dies? Should some clustered services (file-system clients such as app servers) be restarted when the master node changes due to failover? Or is it truly seamless, as in a bit of latency added for the duration of mounting the block device on another node, with no fatal interruptions sent to the clients?
    And, is it true that this solution is gratis, i.e. may legally be used for free
    unless the customer wants support from Sun (authorized partners)? ;)
    //Jim
    Edited by: JimKlimov on Aug 19, 2009 4:16 PM

  • Problems mounting global file system

    Hello all.
    I have set up a cluster using two Ultra 10 machines called medusa and ultra10 (not very original, I know) running Sun Cluster 3.1 with a cluster patch bundle installed.
    When one of the Ultra 10 machines boots, it complains about being unable to mount the global file system and for some reason tries to mount the node@1 file system when it is actually node 2.
    On booting, I receive the following message on the machine ultra10:
    Type control-d to proceed with normal startup,
    (or give root password for system maintenance): resuming boot
    If I use control D to continue then the following happens:
    ultra10:
    ultra10:/ $ cat /etc/cluster/nodeid
    2
    ultra10:/ $ grep global /etc/vfstab
    /dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@2 ufs 2 no global
    ultra10:/ $ df -k | grep global
    /dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
    medusa:
    medusa:/ $ cat /etc/cluster/nodeid
    1
    medusa:/ $ grep global /etc/vfstab
    /dev/md/dsk/d32 /dev/md/rdsk/d32 /global/.devices/node@1 ufs 2 no global
    medusa:/ $ df -k | grep global
    /dev/md/dsk/d32 493527 4803 439372 2% /global/.devices/node@1
    Does anyone have any idea why the machine called ultra10 of node ID 2 is trying to mount the node ID 1 global file system when the correct entry is within the /etc/vfstab file?
    Many thanks for any assistance.

    Hmm, so for argument's sake, if I tried to mount both /dev/md/dsk/d50 devices at the same point in the file system on both nodes, would it mount OK?
    I assumed the problem was because the device being used has the same name, and was confusing the Solaris OS when both nodes tried to mount it. Maybe some examples will help...
    My cluster consists of two nodes, Helene and Dione. There is fibre-attached storage used for quorum, and website content. The output from scdidadm -L is:
    1 helene:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2 helene:/dev/rdsk/c0t1d0 /dev/did/rdsk/d2
    3 helene:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
    3 dione:/dev/rdsk/c4t50002AC0001202D9d0 /dev/did/rdsk/d3
    4 dione:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
    5 dione:/dev/rdsk/c0t1d0 /dev/did/rdsk/d5
    This allows me to have identical entries in both host's /etc/vfstab files. There are also shared devices under /dev/global that can be accessed by both nodes. But the RAID devices are not referenced by anything from these directories (i.e. there's no /dev/global/md/dsk/50). I just thought it would make sense to have the option of global meta devices, but maybe that's just me!
    Thanks again Tim! :D
    Pete
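    One note on the original question: the /global/.devices/node@N file systems are mounted globally, so the underlying device is visible cluster-wide and its name must be unique across the cluster. Having /dev/md/dsk/d32 as the global-devices metadevice on both nodes is exactly what produces this symptom. A hedged sketch of the usual fix (d132 is an illustrative new name):
    # metarename d32 d132                    (on one node, with the file system unmounted / the node out of the cluster)
    (then update that node's /etc/vfstab entry to /dev/md/dsk/d132 and /dev/md/rdsk/d132 and reboot it)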

  • My AirPort Extreme and Time Capsule both cannot mount my Seagate USB external hard drive.

    My AirPort Extreme and Time Capsule both cannot mount my Seagate USB external hard drive.
    I am running OS 10.7.5
    Both the TC and AE are updated to 7.6.1
    The Seagate drive is formatted Journaled HFS+.
    I tried everything in Apple's article HT1331, plus the basic reboots and restarts.
    "Next"

    It sounds like you may be confusing "mount" with "recognize".
    Connect the drive to the USB port on the AirPort Extreme
    Open Macintosh HD > Applications > Utilities > AirPort Utility
    Click on the AirPort Extreme icon, then click Edit
    Click the Disks tab
    Does the AirPort Extreme "recognize" your hard drive?  If yes, it will be displayed in the window.
    See example below of a WD 500 GB drive connected to the AirPort Extreme.
    If your drive is formatted HFS+ and it does not appear here, then you will almost certainly need to use a powered USB hub with the hard drive.....even if the hard drive has its own power supply.  The USB port on the AirPort Extreme and Time Capsule is underpowered.
    Post back when the drive is "recognized" and we will next "mount" the drive.....although you likely already know how to do this.

  • How to install a SUN cluster in a non-global zone?

    How do I install Sun Cluster in a non-global zone? If it is the same as installing Sun Cluster in the global zone, then how do I access the global zone's CD-ROM from the non-global zone?
    Please point me to some docs or urls if there are any. Thanks in advance!

    You don't really install the cluster software on zones. You have to set up a zone cluster once you have configured the Global Zone cluster (or in other words, clustered the physical systems + OS).
    http://blogs.sun.com/SC/entry/zone_clusters
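    For completeness, configuring a zone cluster looks roughly like this with clzonecluster (a sketch; the zone name, zonepath, hosts, and addresses are illustrative, zone clusters need Sun Cluster 3.2 1/09 or later, and the full list of required properties is in the clzonecluster(1CL) man page):
    # clzonecluster configure zc1
    clzc:zc1> create
    clzc:zc1> set zonepath=/zones/zc1
    clzc:zc1> add node
    clzc:zc1:node> set physical-host=node1
    clzc:zc1:node> set hostname=zc1-node1
    clzc:zc1:node> add net
    clzc:zc1:node:net> set address=192.168.1.51
    clzc:zc1:node:net> set physical=e1000g0
    clzc:zc1:node:net> end
    clzc:zc1:node> end
    clzc:zc1> commit
    clzc:zc1> exit
    # clzonecluster install zc1
    # clzonecluster boot zc1
    The software comes from the global zone's installation, so there should be no need to access the CD-ROM from inside the non-global zone.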

  • Cannot import a disk group after sun cluster 3.1 installation

    We installed Sun Cluster 3.1u3 on nodes with Veritas VxVM running and disk groups in use. After cluster configuration and a reboot, we can no longer import our disk groups. VxVM displays the message: Disk group dg1: import failed: No valid disk found containing disk group.
    Did anyone run into the same problem?
    The dump of the private region for every single disk in the VM returns the following error:
    # /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/did/rdsk/d22s2
    VxVM vxprivutil ERROR V-5-1-1735 scan operation failed:
    Format error in disk private region
    Any help or suggestion would be greatly appreciated
    Thx
    Max

    If I understand correctly, you had VxVM configured before you installed Sun Cluster, and since installing Sun Cluster you can no longer import your disk groups - correct?
    First thing you need to know is that you need to register the disk groups with Sun Cluster - this happens automatically with Solaris Volume Manager but is a manual process with VxVM. Note you will also have to update the configuration after any changes to the disk group too, e.g. permission changes, volume creation, etc.
    You need to use the scsetup menu to achieve this, though it can be done via the command line using an scconf command.
    Having said that, I'm still confused by the error. See if the above solves the problem first.
    Regards,
    Tim
    ---
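    As an example of the registration step, the command-line form would be roughly (a sketch; dg1 is the disk group from the post, nodeA/nodeB are placeholders, and the option syntax should be double-checked against scconf(1M) - the same thing can be done from the scsetup device-group menu):
    # scconf -a -D type=vxvm,name=dg1,nodelist=nodeA:nodeB     (register the existing VxVM disk group with the cluster)
    # scstat -D                                                (verify the new device group is online)
    # scconf -c -D name=dg1,sync                               (resynchronize after any later VxVM changes to the group)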

  • 2 node Sun Cluster 3.2, resource groups not failing over.

    Hello,
    I am currently running two V490s connected to a 6540 Sun StorageTek array. After attempting to install the latest OS patches, the cluster seems nearly destroyed. I backed out the patches, and right now only one node can process the resource groups properly. The other node will appear to take over the Veritas disk groups but will not mount them automatically. I have been working on this for over a month and have learned a lot and fixed a lot of other issues that came up, but the cluster is just not working properly. Here is some output.
    bash-3.00# clresourcegroup switch -n coins01 DataWatch-rg
    clresourcegroup: (C776397) Request failed because node coins01 is not a potential primary for resource group DataWatch-rg. Ensure that when a zone is intended, it is explicitly specified by using the node:zonename format.
    bash-3.00# clresourcegroup switch -z zcoins01 -n coins01 DataWatch-rg
    clresourcegroup: (C298182) Cannot use node coins01:zcoins01 because it is not currently in the cluster membership.
    clresourcegroup: (C916474) Request failed because none of the specified nodes are usable.
    bash-3.00# clresource status
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    ftp-rs coins01:zftp01 Offline Offline
    coins02:zftp01 Offline Offline - LogicalHostname offline.
    xprcoins coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline - LogicalHostname offline.
    xprcoins-rs coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline - LogicalHostname offline.
    DataWatch-hasp-rs coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline
    BDSarchive-res coins01:zcoins01 Offline Offline
    coins02:zcoins01 Offline Offline
    I am really at a loss here. Any help appreciated.
    Thanks

    My advice is to open a service call, provided you have a service contract with Oracle. There is much more information required to understand that specific configuration and to analyse the various log files. This is beyond what can be done in this forum.
    From your description I can guess that you want to fail over a resource group between non-global zones. And it looks like the zone coins01:zcoins01 is reported as not being in the cluster membership.
    Obviously node coins01 needs to be a cluster member. If it is reported as online and has joined the cluster, then you need to verify if the zone zcoins01 is really properly up and running.
    Specifically you need to verify that it reached the multi-user milestone and all cluster related SMF services are running correctly (ie. verify "svcs -x" in the non-global zone).
    You mention Veritas disk groups. Note that VxVM disk groups are handled at the global cluster level (i.e. in the global zone). The VxVM disk group is not imported for a non-global zone. However, with SUNW.HAStoragePlus you can ensure that file systems on top of VxVM disk groups are mounted into a non-global zone. But again, more information would be required to see how you configured things and why they don't work as you expect.
    Regards
    Thorsten
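    A short checklist along those lines (a sketch; the zone and group names are the ones from the output above):
    # clnode status                            (is coins01 a cluster member?)
    # zoneadm list -cv                         (is zcoins01 installed and running on coins01?)
    # zlogin zcoins01 svcs -x                  (any SMF services in maintenance inside the zone?)
    # zlogin zcoins01 svcs multi-user-server   (did the zone reach the multi-user milestone?)
    # clresourcegroup status DataWatch-rg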
