Sun Cluster 3.2 - WARNING: Cannot enable monitoring on resource-group

clrg online -emM ora-1line-rg
(C348385) WARNING: Cannot enable monitoring on resource ora-1line-rs because it already has monitoring enabled. To force the monitor to restart, disable monitoring using 'clresource unmonitor ora-1line-rs' and re-enable monitoring using 'clresource monitor ora-1line-rs'.
(logical host reference)
(C348385) WARNING: Cannot enable monitoring on resource ora-hastp-rs because it already has monitoring enabled. To force the monitor to restart, disable monitoring using 'clresource unmonitor ora-hastp-rs' and re-enable monitoring using 'clresource monitor ora-hastp-rs'.
(hastorageplus reference)
I am able to unmonitor and monitor the resources manually. What is the cause of these WARNING messages? This is for Oracle, and we have yet to complete the installation of HA-Oracle. Oracle is not installed, and tnsnames.ora and listener.ora are not configured. Is this the reason? If so, could someone explain why you cannot bring the resource group online until after the application has been installed?
Thanks in advance,
Ryan

As the manual says for clrs create:
By default, resources are created in the enabled state with monitoring enabled. So when you issue clrg online -emM, it is simply warning you that monitoring on these other resources was never disabled. Note they wouldn't have been started, because the RG would have been offline.
Does that explain it? If not, ask more questions.
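If you do want to force the monitors to restart, the warning message itself gives the sequence. A minimal sketch using the resource names from the messages above:

    # bounce fault monitoring on each resource named in the warnings
    clresource unmonitor ora-1line-rs
    clresource monitor ora-1line-rs
    clresource unmonitor ora-hastp-rs
    clresource monitor ora-hastp-rs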
Tim
---

Similar Messages

  • Rename Sun Cluster Resource Group

    Hi All,
    We have a 2-node Sun Cluster 3.0 running on 2 x V440 servers. We want to change the resource group name.
    Can I use the "scrgadm -c -g RG_NAME -h nodelist -y property" command to change the resource group name? Can I do this online while the cluster is running, or do I need to bring the cluster to maintenance mode? Any help would be appreciated.
    Thanks.

    You cannot rename a resource group in that way in Sun Cluster.
    You have two options:
    -Recreate the resource group with the new name
    -Use an unsupported procedure to change the name in the CCR. This requires downtime of both nodes and as it is unsupported I am not going to describe it here. If that is what you want to do, please log a call with Sun.
    If you think renaming resource groups is a useful feature may I also ask you to contact your Sun Service representative so that they can take proper action to log an RFE for the feature.
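    For reference, the first option amounts to tearing the group down and rebuilding it under the new name. A hedged sketch with the SC 3.0-era commands (the resource name my-rs, its type, and the node names are hypothetical; every resource and its properties must be recreated exactly as before):

        # take the group offline everywhere, then dismantle it
        scswitch -F -g old-rg
        scswitch -n -j my-rs      # disable each resource in the group
        scrgadm -r -j my-rs       # remove each resource
        scrgadm -r -g old-rg      # remove the now-empty group
        # recreate under the new name and re-add the resources
        scrgadm -a -g new-rg -h node1,node2
        scrgadm -a -j my-rs -g new-rg -t SUNW.HAStoragePlus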

  • Sun cluster resource group online but faulted

    Hi,
    Recently our storage admin deleted volume d9 from the oradg disk set by mistake. To recover, we created a new disk d29 and restored the data to it, but we forgot to remove d9 from the disk set. After a reboot, the orarg resource group failed to go online with a faulted status, because the ora-stor resource is faulted.
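    A hedged sketch of the cleanup this usually calls for, assuming oradg is a Solaris Volume Manager disk set and the leftover d9 is a metadevice (verify with metastat before removing anything):

        # inspect the disk set for the stale component
        metastat -s oradg
        # clear the leftover metadevice so the storage resource can validate again
        metaclear -s oradg d9
        # if d9 were instead a physical drive in the set, it would be dropped with:
        # metaset -s oradg -d /dev/did/rdsk/d9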


  • QFS Meta data resource on sun cluster failed

    Hi,
    I'm trying to configure QFS in a cluster environment, and I hit errors while configuring the metadata resource. I tried different types of QFS; none of them worked.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/sharedqfs
    n1u332 - shqfs: Invalid priority (0) for server n1u332FS shqfs: validate_node() failed.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/haqfs
    n1u332 - Mount point /global/haqfs does not have the 'shared' option set.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/hasharedqfs
    n1u332 - has: No /dsk/ string (nodev) in device. Inappropriate path in FS has device component: nodev. FS has: validate_qfsdevs() failed.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    any QFS expert here?

    Hi,
    Yes, we have 5.2. Here is the wiki link: http://wikis.sun.com/display/SAMQFSDocs52/Home
    I added the file system through the web console, and it's mounted and working fine.
    After creating the file system I tried to put it under Sun Cluster's management, but it asked for a metadata resource, and when creating the metadata resource I got the errors mentioned above.
    I need to use the QFS file system in a non-RAC environment, just mounting and using the file system. I could mount it on two machines in shared mode and highly available mode. In both cases, writes on the second node are three times slower than on the node that has the metadata server, with the same read speed. Could you please let me know whether it's the same in your environment? If so, what do you think is the reason? I can see both sides writing to the storage directly, so why is it so slow on one node?
    regards,
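    For reference, the "does not have the 'shared' option set" error above usually points at the /etc/vfstab entry for the shared QFS file system. A minimal sketch, assuming the family set is named shqfs (the name is an assumption):

        # /etc/vfstab entry for a shared QFS file system
        # device  device-to-fsck  mount-point  FS-type  fsck-pass  at-boot  options
        shqfs     -               /sharedqfs   samfs    -          no       shared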

  • Netegrity SiteMinder Agent on a Sun Cluster

    Hi!
    I have some doubts about how to run the Netegrity SiteMinder Agent on my Sun Cluster. The easiest solution would presumably be to run the Agent as a service on each of the physical cluster nodes.
    The application can be accessed by the physical IP address/DNS entry of the current cluster node, and by the virtual one of the resource group. The users will access it via the virtual one. Now I somehow have to ensure that the agent watches the virtual one, too. Can this be configured? Does DNS take care of that (most likely not)?
    Or do I have to integrate the agent into the cluster software itself?
    Has anybody done that before?
    Thanks for your help,
    greetings,
    Martin

    Philippe,
    DS 6 Sun Cluster Agent was not tested with SC 3.2 in Zones.
    Zone support came with SC 3.2, and DS 6 Cluster Agent was built with SC 3.1, tested with SC 3.1 and 3.2 in the Global zone.
    Regards,
    Ludovic.

  • Switching resource group in 2 node cluster fails

    Hi,
    I configured a 2-node cluster to provide high availability for my Oracle DB 9.2.0.7.
    I created a resource group and named it oracleha-rg,
    and then created the following resources:
    oraclelh-rs for the logical hostname,
    hastp-rs for the HA storage resource,
    oracle-server-rs for the Oracle server resource,
    and listener-rs for the listener.
    Whenever I try to switch the resource group between nodes, it gives me the following in dmesg:
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hafoip_stop> for resource <oraclelh-rs>, resource group <oracleha-rg>, node <DB1>, timeout <300> seconds
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource oraclelh-rs status on node DB1 change to R_FM_UNKNOWN
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource oraclelh-rs status msg on node DB1 change to <Stopping>
    Feb  6 16:17:49 DB1 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 010.050.033.009:0, remote = 000.000.000.000:0, start = -2, end = 6
    Feb  6 16:17:49 DB1 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource oraclelh-rs status on node DB1 change to R_FM_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource oraclelh-rs status msg on node DB1 change to <LogicalHostname offline.>
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hafoip_stop> completed successfully for resource <oraclelh-rs>, resource group <oracleha-rg>, node <DB1>, time used: 0% of timeout <300 seconds>
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource oraclelh-rs state on node DB1 change to R_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <hastorageplus_postnet_stop> for resource <hastp-rs>, resource group <oracleha-rg>, node <DB1>, timeout <1800> seconds
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource hastp-rs status on node DB1 change to R_FM_UNKNOWN
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource hastp-rs status msg on node DB1 change to <Stopping>
    Feb  6 16:17:49 DB1 SC[,SUNW.HAStoragePlus:8,oracleha-rg,hastp-rs,hastorageplus_postnet_stop]: [ID 843127 daemon.warning] Extension properties FilesystemMountPoints and GlobalDevicePaths and Zpools are empty.
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <hastorageplus_postnet_stop> completed successfully for resource <hastp-rs>, resource group <oracleha-rg>, node <DB1>, time used: 0% of timeout <1800 seconds>
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 443746 daemon.notice] resource hastp-rs state on node DB1 change to R_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 784560 daemon.notice] resource hastp-rs status on node DB1 change to R_FM_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 922363 daemon.notice] resource hastp-rs status msg on node DB1 change to <>
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.error] resource group oracleha-rg state on node DB1 change to RG_OFFLINE_START_FAILED
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB1 change to RG_OFFLINE
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 447451 daemon.notice] Not attempting to start resource group <oracleha-rg> on node <DB1> because this resource group has already failed to start on this node 2 or more times in the past 3600 seconds
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 447451 daemon.notice] Not attempting to start resource group <oracleha-rg> on node <DB2> because this resource group has already failed to start on this node 2 or more times in the past 3600 seconds
    Feb  6 16:17:49 DB1 Cluster.RGM.global.rgmd: [ID 674214 daemon.notice] rebalance: no primary node is currently found for resource group <oracleha-rg>.
    Feb  6 16:19:08 DB1 Cluster.RGM.global.rgmd: [ID 603096 daemon.notice] resource hastp-rs disabled.
    Feb  6 16:19:17 DB1 Cluster.RGM.global.rgmd: [ID 603096 daemon.notice] resource oraclelh-rs disabled.
    Feb  6 16:19:22 DB1 Cluster.RGM.global.rgmd: [ID 603096 daemon.notice] resource oracle-rs disabled.
    Feb  6 16:19:27 DB1 Cluster.RGM.global.rgmd: [ID 603096 daemon.notice] resource listener-rs disabled.
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB1 change to RG_OFF_PENDING_METHODS
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB2 change to RG_OFF_PENDING_METHODS
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 224900 daemon.notice] launching method <bin/oracle_listener_fini> for resource <listener-rs>, resource group <oracleha-rg>, node <DB1>, timeout <30> seconds
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 515159 daemon.notice] method <bin/oracle_listener_fini> completed successfully for resource <listener-rs>, resource group <oracleha-rg>, node <DB1>, time used: 0% of timeout <30 seconds>
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB1 change to RG_OFFLINE
    Feb  6 16:19:51 DB1 Cluster.RGM.global.rgmd: [ID 529407 daemon.notice] resource group oracleha-rg state on node DB2 change to RG_OFFLINE
    and the resource group fails to switch...
    any help please?

    Hi,
    this forum is for Oracle Clusterware, not Solaris Cluster. You probably should close this thread and open your question in the corresponding Solaris Cluster forum, to get help.
    Regards
    Sebastian
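    Although the thread was redirected, the log itself contains a strong hint: hastp-rs warns that FilesystemMountPoints, GlobalDevicePaths and Zpools are all empty, so the storage resource manages nothing. A hedged sketch of how such a resource would normally be populated (the mount point /oradata is hypothetical and must have a matching /etc/vfstab entry):

        # give the HAStoragePlus resource something to manage
        clresource set -p FilesystemMountPoints=/oradata hastp-rs
        # then retry the switchover
        clresourcegroup switch -n DB2 oracleha-rg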

  • Sun Cluster, vx mode - "mode: enabled: cluster inactive"

    Hi,
    I have installed Sun Cluster 3.2 on Solaris 9 (Solaris 9 9/05). I want to make it an active-active setup with shared Veritas DGs. This setup also has VxVM 5 (Veritas-5.0_MP1_RP4.4) with rolling patch 4, and Solaris has all the latest patches applied via updatemanager. The shared storage comes from a DMX800.
    In order to get VxVM into cluster mode I have installed licenses for CVM and VCS, and also the ORCLudlm (3.3.4.8) package.
    The Sun Cluster install has all the necessary framework packages. But VxVM refuses to enter cluster mode:
    #vxdctl -c mode
    mode: enabled: cluster inactive
    The issue is that the udlm daemon "dlmmon" isn't starting.
    I also see the errors below:
    cacao: Error: Fail to start cacao agent. (instance default)
    Error: Fail to start cacao agent. (instance default)
    And the messages file on nodeA shows this error:
    [ID 988885 daemon.error] libpnm error: can't connect to PNMd on nodeB
    I am at my wits' end on how to resolve this issue :(
    Any help is appreciated.
    Regards,
    Ashish

    Well, it could be the problem I ran into... and I went round and round for ages trying to figure out what was wrong, before I realised my mistake.
    Assuming you have VxVM/CVM licensed properly, check that ORCLudlm is installed on all nodes. Then create your rac-framework-rg and ensure you have a rac-framework-rs, a rac-udlm-rs AND a rac-cvm-rs resource. Unless you have all of these, and they can be enabled and brought online, you'll have exactly the problem you are seeing.
    Hope that helps,
    Tim
    Edited by: Tim.Read on Feb 19, 2008 4:08 AM
    Ooops missed the rac-udlm-rs ... Doh!
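    A hedged sketch of what that framework group looks like when built with the SC 3.2 commands (group and resource names follow the reply above; check the RAC data service documentation for the exact properties your release expects):

        # register the RAC framework resource types
        clresourcetype register SUNW.rac_framework SUNW.rac_udlm SUNW.rac_cvm
        # scalable framework group spanning the cluster nodes
        clresourcegroup create -S rac-framework-rg
        clresource create -g rac-framework-rg -t SUNW.rac_framework rac-framework-rs
        clresource create -g rac-framework-rg -t SUNW.rac_udlm \
            -p resource_dependencies=rac-framework-rs rac-udlm-rs
        clresource create -g rac-framework-rg -t SUNW.rac_cvm \
            -p resource_dependencies=rac-framework-rs rac-cvm-rs
        clresourcegroup online -emM rac-framework-rg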

  • Can we monitor Sun Cluster events

    Hi,
    Whenever there are events or issues in the cluster, we see logs about them on the node consoles. Is there any way to capture these
    events through some other application? Better, is there any way to send these events, in some standard format (protocol), to an application that can email administrators, so that one need not keep an eye on the console continuously for issues in the cluster?

    Well, I did not get a chance to experiment with CRNP; however, I came up with a strategy that can be used.
    First of all, we can do this only with Solaris 9 and above, as the SUNW.Event resource type is not available with Solaris 8. This resource type needs to be registered with only one resource group on the cluster; the resource group can be related to your data service.
    So, one should first get the SUNW.Event file from the cluster source and copy it to /usr/cluster/lib/rgm/rtreg; also copy the corresponding RT_BASEDIR contents mentioned in this file to the appropriate location. This directory contains the start, stop, check and monitor scripts.
    One can then set the required properties of SUNW.Event.
    All this needs to be done on all the nodes.
    The properties of this resource type cover the port and logical address (which is usually the cluster address) where the events will be sent. One can then write a client to listen on this port and process the events.
    Hope this clarifies the point; please confirm if you see any issues with this.
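    A hedged sketch of the registration step once the type files are in place (the group name my-data-rg and resource name event-rs are hypothetical; the property names for the event port and address vary by release, so check the SUNW.Event documentation):

        # register the event-delivery resource type and attach it to an existing group
        scrgadm -a -t SUNW.Event
        scrgadm -a -j event-rs -g my-data-rg -t SUNW.Event
        scswitch -e -j event-rs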

  • Cannot import a disk group after sun cluster 3.1 installation

    I installed Sun Cluster 3.1u3 on nodes with Veritas VxVM running and disk groups in use. After cluster configuration and a reboot, we can no longer import our disk groups. VxVM displays the message: Disk group dg1: import failed: No valid disk found containing disk group.
    Did anyone run into the same problem?
    The dump of the private region for every single disk in the VM returns the following error:
    # /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/did/rdsk/d22s2
    VxVM vxprivutil ERROR V-5-1-1735 scan operation failed:
    Format error in disk private region
    Any help or suggestion would be greatly appreciated
    Thx
    Max

    If I understand correctly, you had VxVM configured before you installed Sun Cluster - correct? And once you installed Sun Cluster you could no longer import your disk groups.
    The first thing you need to know is that you must register the disk groups with Sun Cluster - this happens automatically with Solaris Volume Manager but is a manual process with VxVM. Note that you will also have to update the configuration after any changes to a disk group, e.g. permission changes, volume creation, etc.
    You can use the scsetup menu to achieve this, though it can also be done from the command line with an scconf command.
    Having said that, I'm still confused by the error. See if the above solves the problem first.
    Regards,
    Tim
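    For reference, a hedged sketch of the registration and resync commands the reply alludes to (the node names are assumptions; dg1 is the disk group from the question):

        # register an existing VxVM disk group as a cluster device group
        scconf -a -D type=vxvm,name=dg1,nodelist=node1:node2
        # resynchronize the cluster's view after any later change to the disk group
        scconf -c -D name=dg1,sync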
    ---

  • SUN Cluster 3.2, Solaris 10, Corrupted IPMP group on one node.

    Hello folks,
    I recently made a network change on nodename2 to add some resilience to IPMP (adding a second interface but still using a single IP address).
    After a reboot, I cannot keep this host from rebooting. For the one minute that it stays up, I do get the following result from scstat that seems to suggest a problem with the IPMP configuration. I rolled back my IPMP change, but it still doesn't seem to register the IPMP group in scstat.
    nodename2|/#scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: nodename1 Online
    Cluster node: nodename2 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: nodename1:bge3 nodename2:bge3 Path online
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: nodename1 1 1 Online
    Node votes: nodename2 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 0 1 Offline
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: jms-ds nodename1 nodename2
    -- Device Group Status --
    Device Group Status
    Device group status: jms-ds Online
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    scstat:  unexpected error.
    I did manage to run scstat on nodename1 while nodename2 was still up between reboots; here is that result (it does not show any IPMP groups on nodename2):
    nodename1|/#scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: nodename1 Online
    Cluster node: nodename2 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: nodename1:bge3 nodename2:bge3 faulted
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: nodename1 1 1 Online
    Node votes: nodename2 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: jms-ds nodename1 -
    -- Device Group Status --
    Device Group Status
    Device group status: jms-ds Degraded
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: nodename1 sc_ipmp1 Online bge2 Online
    IPMP Group: nodename1 sc_ipmp0 Online bge0 Online
    -- IPMP Groups in Zones --
    Zone Name Group Status Adapter Status
    I believe that I should be able to delete the IPMP group for the second node from the cluster and re-add it, but I'm not sure how to go about doing this. I welcome your comments or thoughts on what I can try before rebuilding this node from scratch.
    -AG
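    For what it's worth, on Solaris 10 the IPMP group membership lives in the /etc/hostname.* files rather than in the cluster configuration, so deleting and re-adding the group comes down to fixing those files and replumbing. A hedged sketch for a single-address, two-interface group (interface and group names are assumptions based on the scstat output above):

        # contents of /etc/hostname.bge0 (data address in group sc_ipmp0):
        #   nodename2 group sc_ipmp0 up
        # contents of /etc/hostname.bge2 (standby interface in the same group):
        #   group sc_ipmp0 standby up
        # or apply to the running system without a reboot:
        ifconfig bge2 plumb group sc_ipmp0 standby up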

    I was able to restart both sides of the cluster. Now both sides are online, but neither side can access the shared disk.
    Lots of warnings. I will keep poking....
    Rebooting with command: boot
    Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
    SunOS Release 5.10 Version Generic_141444-09 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Hostname: nodename2
    Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface ce3; disabling probe-based failure detection on it
    Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface bge0; disabling probe-based failure detection on it
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    Booting as part of a cluster
    NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: clcomm: Adapter bge3 constructed
    NOTICE: CMM: Node nodename2: attempting to join cluster.
    NOTICE: CMM: Node nodename1 (nodeid: 1, incarnation #: 1279727883) has become reachable.
    NOTICE: clcomm: Path nodename2:bge3 - nodename1:bge3 online
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
    NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
    NOTICE: CMM: Cluster members: nodename1 nodename2.
    NOTICE: CMM: node reconfiguration #3 completed.
    NOTICE: CMM: Node nodename2: joined cluster.
    NOTICE: CCR: Waiting for repository synchronization to finish.
    WARNING: CCR: Invalid CCR table : dcs_service_9 cluster global.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    ==> WARNING: DCS: Error looking up services table
    ==> WARNING: DCS: Error initializing service 9 from file
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    /dev/md/rdsk/d22 is clean
    Reading ZFS config: done.
    NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
    nodename2 console login: obtaining access to all attached disks
    starting NetWorker daemons:
    Rebooting with command: boot
    Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
    SunOS Release 5.10 Version Generic_141444-09 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Hostname: nodename1
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    Booting as part of a cluster
    NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: clcomm: Adapter bge3 constructed
    NOTICE: CMM: Node nodename1: attempting to join cluster.
    NOTICE: bge3: link up 1000Mbps Full-Duplex
    NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 errors during initiation
    WARNING: Path nodename1:bge3 - nodename2:bge3 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
    NOTICE: bge3: link down
    NOTICE: bge3: link up 1000Mbps Full-Duplex
    NOTICE: CMM: Node nodename2 (nodeid: 2, incarnation #: 1279728026) has become reachable.
    NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 online
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
    NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
    NOTICE: CMM: Cluster members: nodename1 nodename2.
    NOTICE: CMM: node reconfiguration #3 completed.
    NOTICE: CMM: Node nodename1: joined cluster.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    /dev/md/rdsk/d26 is clean
    Reading ZFS config: done.
    NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
    nodename1 console login: obtaining access to all attached disks
    starting NetWorker daemons:
    nsrexecd
    mount: /dev/md/jms-ds/dsk/d100 is already mounted or /opt/esbshares is busy

  • Problem with sun Cluster

    Hi all !
    I have a problem with the cluster: a server cannot see HDDs from the StorEdge arrays.
    Current state:
    - At the "ok" prompt, using the "probe-scsi-all" command: hap203 can detect all 14 HDDs (4 local HDDs, 5 HDDs from 3310_1 and 5 HDDs from 3310_2); hap103 detects only 13 HDDs (4 local, 5 from 3310_1 and only 4 from 3310_2).
    - Using the "format" command on hap203, the server can detect 14 HDDs (0 to 13); but typing "format" on hap103 shows only 9 HDDs (0 to 8).
    - Typing "devfsadm -C" on hap103 gives error notices about the HDDs.
    - Typing "scstat" on hap103: the resource group status on hap103 is "Pending online" and on hap203 it is "Offline".
    - Typing "metastat -s dgsmp" on hap103: it reports "Needs maintenance".
    Help me if you can.
    Many thanks.
    Long.
    -----------------------------ok_log-------------------------
    ########## hap103 ##################
    {3} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 1
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    /pci@1d,700000/pci@2/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1d,700000/pci@2/scsi@4
    /pci@1c,600000/pci@1/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1c,600000/pci@1/scsi@4
    ############ hap203 ###################################
    {3} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 1
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    /pci@1d,700000/pci@2/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1d,700000/pci@2/scsi@4
    /pci@1c,600000/pci@1/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1c,600000/pci@1/scsi@4
    {3} ok
    ------------------------hap103-------------------------
    hap103>
    hap103> format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@8,0
    1. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@9,0
    2. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@a,0
    3. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@b,0
    4. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@c,0
    5. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@0,0
    6. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@1,0
    7. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@2,0
    8. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@3,0
    Specify disk (enter its number): ^D
    hap103>
    hap103>
    hap103>
    hap103> scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: hap103 Online
    Cluster node: hap203 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: hap103:ce7 hap203:ce7 Path online
    Transport path: hap103:ce3 hap203:ce3 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: hap103 1 1 Online
    Node votes: hap203 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d1s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: dgsmp hap103 hap203
    -- Device Group Status --
    Device Group Status
    Device group status: dgsmp Online
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: rg-smp has-res SDP1 SMFswitch
    -- Resource Groups --
    Group Name Node Name State
    Group: rg-smp hap103 Pending online
    Group: rg-smp hap203 Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: has-res hap103 Offline Unknown - Starting
    Resource: has-res hap203 Offline Offline
    Resource: SDP1 hap103 Offline Unknown - Starting
    Resource: SDP1 hap203 Offline Offline
    Resource: SMFswitch hap103 Offline Offline
    Resource: SMFswitch hap203 Offline Offline
    hap103>
    hap103>
    hap103> metastat -s dgsmp
    dgsmp/d120: Mirror
    Submirror 0: dgsmp/d121
    State: Needs maintenance
    Submirror 1: dgsmp/d122
    State: Needs maintenance
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 716695680 blocks
    dgsmp/d121: Submirror of dgsmp/d120
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
    metareplace dgsmp/d120 d5s0 <new device>
    Size: 716695680 blocks
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Hot Spare
    d1s0 0 No Maintenance
    d2s0 0 No Maintenance
    d3s0 0 No Maintenance
    d4s0 0 No Maintenance
    d5s0 0 No Last Erred
    dgsmp/d122: Submirror of dgsmp/d120
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
    metareplace dgsmp/d120 d6s0 <new device>
    Size: 716695680 blocks
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Hot Spare
    d6s0 0 No Last Erred
    d7s0 0 No Okay
    d8s0 0 No Okay
    d9s0 0 No Okay
    d10s0 0 No Resyncing
    hap103> May 6 14:55:58 hap103 login: ROOT LOGIN /dev/pts/1 FROM ralf1
    hap103>
    hap103>
    hap103>
    hap103>
    hap103> scdidadm -l
    1 hap103:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
    2 hap103:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
    3 hap103:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
    4 hap103:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
    5 hap103:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
    6 hap103:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
    7 hap103:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
    8 hap103:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
    9 hap103:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
    10 hap103:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
    11 hap103:/dev/rdsk/c2t0d0 /dev/did/rdsk/d11
    12 hap103:/dev/rdsk/c3t0d0 /dev/did/rdsk/d12
    13 hap103:/dev/rdsk/c3t1d0 /dev/did/rdsk/d13
    14 hap103:/dev/rdsk/c3t2d0 /dev/did/rdsk/d14
    15 hap103:/dev/rdsk/c3t3d0 /dev/did/rdsk/d15
    hap103>
    hap103>
    hap103> more /etc/vfstab
    #device device  mount   FS      fsck    mount   mount
    #to     mount   to      fsck            point           type    pass    at boot options
    #/dev/dsk/c1d0s2        /dev/rdsk/c1d0s2        /usr    ufs     1       yes     -
    fd      -       /dev/fd fd      -       no      -
    /proc   -       /proc   proc    -       no      -
    /dev/md/dsk/d20 -       -       swap    -       no      -
    /dev/md/dsk/d10 /dev/md/rdsk/d10        /       ufs     1       no      logging
    #/dev/dsk/c3t0d0s3      /dev/rdsk/c3t0d0s3      /globaldevices  ufs     2       yes     logging
    /dev/md/dsk/d60 /dev/md/rdsk/d60        /in     ufs     2       yes     logging
    /dev/md/dsk/d40 /dev/md/rdsk/d40        /in/oracle      ufs     2       yes     logging
    /dev/md/dsk/d50 /dev/md/rdsk/d50        /indelivery     ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/md/dsk/d30 /dev/md/rdsk/d30        /global/.devices/node@1 ufs     2       no      global
    /dev/md/dgsmp/dsk/d120  /dev/md/dgsmp/rdsk/d120 /in/smp ufs     2       yes     logging,global
    #RALF1:/in/RALF1 - /inbackup/RALF1 nfs - yes rw,bg,soft
    hap103> df -h
    df: unknown option: h
    Usage: df [-F FSType] [-abegklntVv] [-o FSType-specific_options] [directory | block_device | resource]
    hap103>
    hap103>
    hap103>
    hap103> df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 4339374 3429010 866971 80% /
    /proc 0 0 0 0% /proc
    fd 0 0 0 0% /dev/fd
    mnttab 0 0 0 0% /etc/mnttab
    swap 22744256 136 22744120 1% /var/run
    swap 22744144 24 22744120 1% /tmp
    /dev/md/dsk/d50 1021735 2210 958221 1% /indelivery
    /dev/md/dsk/d60 121571658 1907721 118448221 2% /in
    /dev/md/dsk/d40 1529383 1043520 424688 72% /in/oracle
    /dev/md/dsk/d33 194239 4901 169915 3% /global/.devices/node@2
    /dev/md/dsk/d30 194239 4901 169915 3% /global/.devices/node@1
    ------------------log_hap203---------------------------------
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@8,0
    1. c0t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@9,0
    2. c0t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@a,0
    3. c0t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@b,0
    4. c0t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@c,0
    5. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@8,0
    6. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@9,0
    7. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@a,0
    8. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@b,0
    9. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@c,0
    10. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@0,0
    11. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@1,0
    12. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@2,0
    13. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@3,0
    Specify disk (enter its number): ^D
    hap203>
    hap203> scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: hap103 Online
    Cluster node: hap203 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: hap103:ce7 hap203:ce7 Path online
    Transport path: hap103:ce3 hap203:ce3 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: hap103 1 1 Online
    Node votes: hap203 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d1s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: dgsmp hap103 hap203
    -- Device Group Status --
    Device Group Status
    Device group status: dgsmp Online
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: rg-smp has-res SDP1 SMFswitch
    -- Resource Groups --
    Group Name Node Name State
    Group: rg-smp hap103 Pending online
    Group: rg-smp hap203 Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: has-res hap103 Offline Unknown - Starting
    Resource: has-res hap203 Offline Offline
    Resource: SDP1 hap103 Offline Unknown - Starting
    Resource: SDP1 hap203 Offline Offline
    Resource: SMFswitch hap103 Offline Offline
    Resource: SMFswitch hap203 Offline Offline
    hap203>
    hap203>
    hap203> devfsadm -C
    hap203>
    hap203> scdidadm -l
    1 hap203:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
    2 hap203:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
    3 hap203:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
    4 hap203:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
    5 hap203:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
    6 hap203:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
    7 hap203:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
    8 hap203:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
    9 hap203:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
    10 hap203:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
    16 hap203:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16
    17 hap203:/dev/rdsk/c3t0d0 /dev/did/rdsk/d17
    18 hap203:/dev/rdsk/c3t1d0 /dev/did/rdsk/d18
    19 hap203:/dev/rdsk/c3t2d0 /dev/did/rdsk/d19
    20 hap203:/dev/rdsk/c3t3d0 /dev/did/rdsk/d20
    hap203> May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 63 Error Block: 63
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1097 Error Block: 1097
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (s

    First question is what HBA and driver combination are you using?
    Next do you have MPxIO enabled or disabled?
    Are you using SAN switches? If so whose, what F/W level and what configuration, (ie. single switch, cascade of multiple switches, etc.)
    What are the distances from nodes to storage (include any fabric switches and ISL's if multiple switches) and what media are you using as a transport, (copper, fibre {single mode, multi-mode})?
    What is the configuration of your storage ports, (fabric point to point, loop, etc.)? If loop what are the ALPA's for each connection?
    The more you leave out of your question the harder it is to offer suggestions.
    Feadshipman
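    Beyond the HBA and cabling questions above, the metastat output itself spells out the software-side recovery once the hardware fault is fixed: the errored submirror components must be re-enabled or replaced. A hedged sketch using the component names from the metastat listing (-e re-enables the same slice rather than substituting a new one):

        # after the SCSI/cabling fault is repaired, re-enable the errored components
        metareplace -e dgsmp/d120 d5s0
        metareplace -e dgsmp/d120 d6s0
        # or substitute a fresh device, as metastat's Invoke line suggests:
        # metareplace dgsmp/d120 d5s0 <new device>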

  • LDOM SUN Cluster Interconnect failure

    I am building a test Sun Cluster on Solaris 10 in LDoms 1.3.
    In my environment I have a T5120. I have set up two guest OS instances with some configuration and installed the Sun Cluster software, but when I executed scinstall, it failed.
    Node 2 comes up, but node 1 throws the following messages:
    Boot device: /virtual-devices@100/channel-devices@200/disk@0:a File and args:
    SunOS Release 5.10 Version Generic_139555-08 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: test1
    Configuring devices.
    Loading smf(5) service descriptions: 37/37
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    /usr/cluster/bin/scdidadm: Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node test2 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node test1 (nodeid = 2) with votecount = 0 added.
    NOTICE: clcomm: Adapter vnet2 constructed
    NOTICE: clcomm: Adapter vnet1 constructed
    NOTICE: CMM: Node test1: attempting to join cluster.
    NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
    NOTICE: clcomm: Path test1:vnet1 - test2:vnet1 errors during initiation
    NOTICE: clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
    WARNING: Path test1:vnet1 - test2:vnet1 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    WARNING: Path test1:vnet2 - test2:vnet2 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
    I created the virtual switches and vnets on the primary domain like this (shell history):
    532 ldm add-vsw mode=sc cluster-vsw0 primary
    533 ldm add-vsw mode=sc cluster-vsw1 primary
    535 ldm add-vnet vnet2 cluster-vsw0 test1
    536 ldm add-vnet vnet3 cluster-vsw1 test1
    540 ldm add-vnet vnet2 cluster-vsw0 test2
    541 ldm add-vnet vnet3 cluster-vsw1 test2
    Primary domain:
    bash-3.00# dladm show-dev
    vsw0 link: up speed: 1000 Mbps duplex: full
    vsw1 link: up speed: 0 Mbps duplex: unknown
    vsw2 link: up speed: 0 Mbps duplex: unknown
    e1000g0 link: up speed: 1000 Mbps duplex: full
    e1000g1 link: down speed: 0 Mbps duplex: half
    e1000g2 link: down speed: 0 Mbps duplex: half
    e1000g3 link: up speed: 1000 Mbps duplex: full
    bash-3.00# dladm show-link
    vsw0 type: non-vlan mtu: 1500 device: vsw0
    vsw1 type: non-vlan mtu: 1500 device: vsw1
    vsw2 type: non-vlan mtu: 1500 device: vsw2
    e1000g0 type: non-vlan mtu: 1500 device: e1000g0
    e1000g1 type: non-vlan mtu: 1500 device: e1000g1
    e1000g2 type: non-vlan mtu: 1500 device: e1000g2
    e1000g3 type: non-vlan mtu: 1500 device: e1000g3
    bash-3.00#
    Node 1:
    -bash-3.00# dladm show-link
    vnet0 type: non-vlan mtu: 1500 device: vnet0
    vnet1 type: non-vlan mtu: 1500 device: vnet1
    vnet2 type: non-vlan mtu: 1500 device: vnet2
    -bash-3.00# dladm show-dev
    vnet0 link: unknown speed: 0 Mbps duplex: unknown
    vnet1 link: unknown speed: 0 Mbps duplex: unknown
    vnet2 link: unknown speed: 0 Mbps duplex: unknown
    -bash-3.00#
    Node 2:
    -bash-3.00# dladm show-link
    vnet0 type: non-vlan mtu: 1500 device: vnet0
    vnet1 type: non-vlan mtu: 1500 device: vnet1
    vnet2 type: non-vlan mtu: 1500 device: vnet2
    -bash-3.00#
    -bash-3.00#
    -bash-3.00# dladm show-dev
    vnet0 link: unknown speed: 0 Mbps duplex: unknown
    vnet1 link: unknown speed: 0 Mbps duplex: unknown
    vnet2 link: unknown speed: 0 Mbps duplex: unknown
    -bash-3.00#
    And this is the configuration I gave while running scinstall:
    >>> Cluster Transport Adapters and Cables <<<
    You must identify the two cluster transport adapters which attach
    this node to the private cluster interconnect.
    For node "test1",
    What is the name of the first cluster transport adapter [vnet1]?
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?
    All transport adapters support the "dlpi" transport type. Ethernet
    and Infiniband adapters are supported only with the "dlpi" transport;
    however, other adapter types may support other types of transport.
    For node "test1",
    Is "vnet1" an Ethernet adapter (yes/no) [yes]?
    Is "vnet1" an Infiniband adapter (yes/no) [yes]? no
    For node "test1",
    What is the name of the second cluster transport adapter [vnet3]? vnet2
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?
    For node "test1",
    Name of the switch to which "vnet2" is connected [switch2]?
    For node "test1",
    Use the default port name for the "vnet2" connection (yes/no) [yes]?
    For node "test2",
    What is the name of the first cluster transport adapter [vnet1]?
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?
    For node "test2",
    Name of the switch to which "vnet1" is connected [switch1]?
    For node "test2",
    Use the default port name for the "vnet1" connection (yes/no) [yes]?
    For node "test2",
    What is the name of the second cluster transport adapter [vnet2]?
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?
    For node "test2",
    Name of the switch to which "vnet2" is connected [switch2]?
    For node "test2",
    Use the default port name for the "vnet2" connection (yes/no) [yes]?
    I have set up the configuration like this:
    ldm list -l nodename
    Node 1:
    NETWORK
    NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
    vnet1 primary-vsw0@primary 0 network@0 00:14:4f:f9:61:63 1 1500
    vnet2 cluster-vsw0@primary 1 network@1 00:14:4f:f8:87:27 1 1500
    vnet3 cluster-vsw1@primary 2 network@2 00:14:4f:f8:f0:db 1 1500
    ldm list -l nodename
    Node 2:
    NETWORK
    NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
    vnet1 primary-vsw0@primary 0 network@0 00:14:4f:f9:a1:68 1 1500
    vnet2 cluster-vsw0@primary 1 network@1 00:14:4f:f9:3e:3d 1 1500
    vnet3 cluster-vsw1@primary 2 network@2 00:14:4f:fb:03:83 1 1500
    ldm list-services
    VSW
    NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    primary-vsw0 primary 00:14:4f:f9:25:5e e1000g0 0 switch@0 1 1 1500 on
    cluster-vsw0 primary 00:14:4f:fb:db:cb 1 switch@1 1 1 1500 sc on
    cluster-vsw1 primary 00:14:4f:fa:c1:58 2 switch@2 1 1 1500 sc on
    ldm list-bindings primary
    VSW
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    primary-vsw0 00:14:4f:f9:25:5e e1000g0 0 switch@0 1 1 1500 on
    PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
    vnet1@gitserver 00:14:4f:f8:c0:5f 1 1500
    vnet1@racc2 00:14:4f:f8:2e:37 1 1500
    vnet1@test1 00:14:4f:f9:61:63 1 1500
    vnet1@test2 00:14:4f:f9:a1:68 1 1500
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    cluster-vsw0 00:14:4f:fb:db:cb 1 switch@1 1 1 1500 sc on
    PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
    vnet2@test1 00:14:4f:f8:87:27 1 1500
    vnet2@test2 00:14:4f:f9:3e:3d 1 1500
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    cluster-vsw1 00:14:4f:fa:c1:58 2 switch@2 1 1 1500 sc on
    PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
    vnet3@test1 00:14:4f:f8:f0:db 1 1500
    vnet3@test2 00:14:4f:fb:03:83 1 1500
    Any ideas, team? I believe the cluster interconnect adapters were not brought up successfully.
    I would appreciate any guidance or clue on how to correct the private interconnect for clustering between the two guest LDoms.
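    For anyone debugging the same symptom: once both nodes manage to join the cluster, the transport paths can be inspected with the standard Sun Cluster 3.2 CLI (a minimal sketch; nothing here is specific to this setup):
    # state of each configured transport path
    clinterconnect status
    # configured adapters, switches, and cables
    clinterconnect show
    errno 62 is ETIME on Solaris, i.e. the path initiation timed out. That generally points at the peer's vnet being unreachable over the mode=sc virtual switch rather than at the scinstall answers themselves, so the vsw/vnet bindings shown above are the right place to look.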

    You don't have to stick to the default IPs or subnet. You can change to whatever IPs you need, whatever subnet mask you need, and even change the private hostnames.
    You can do all this during install or even after install.
    Read the cluster install doc at docs.sun.com
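    If the goal is to change the private network address or netmask after installation, the relevant subcommand is cluster set-netprops, which has to be run with every node booted in noncluster mode. A minimal sketch, assuming the default 172.16.0.0 range is kept and only the netmask is resized (check cluster(1CL) on your release for the exact property names):
    # boot all nodes in noncluster mode (boot -x), then on one node:
    cluster set-netprops -p private_netaddr=172.16.0.0 -p private_netmask=255.255.248.0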

  • Sun Cluster 3.2 without shared storage (Sun StorageTek Availability Suite)

    Hi all.
    I have a two-node Sun Cluster.
    I have configured and installed AVS on these nodes (AVS remote mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    Here is what I did:
    Created a remote mirror with AVS:
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#
    Created a resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#
    Created a SUNW.HAStoragePlus resource with the AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0#
    By default, everything works fine.
    But if I need to switch the RG to the second node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0#
    If I put the remote mirror into logging mode, everything works:
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0#
    How can I do this without creating an SC agent for it?
    Anatoly S. Zimin

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC); rather, it is something that has been developed in the open source community. As such it will not be documented in the main Sun Cluster documentation set. Furthermore, support and/or questions for it should be directed to the author of the module.
    Regards,
    Tim
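    For reference, the step the poster performs by hand is the standard AVS failover sequence, and without an agent it stays manual (a hedged sketch reusing the commands already shown; -n merely suppresses the confirmation prompt):
    # on the node being switched to: make the secondary volume writable
    sndradm -n -C local -l
    # the switchover now succeeds
    clrg switch -n v210-node0 avs_test_rg
    # afterwards, resynchronize; the direction and the node to run it on depend
    # on which side now holds the current data - see sndradm(1M) for -m/-u and -r
    The plain switchover fails because AVS write-protects the secondary volume while the set is replicating, so HAStoragePlus cannot mount it until the set is dropped into logging mode.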
    ---

  • Creating a logical hostname in Sun Cluster

    Can someone tell me what exactly a logical hostname in Sun Cluster means?
    When registering a logical hostname resource in a failover group, what exactly do I need to specify?
    For example, I have two nodes in a Sun Cluster. How do I create or configure a logical hostname, and which IP address should it point to (should it point to the IP addresses of the cluster nodes)? Can I get clarification on this?
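    In short: a logical hostname is a floating IP address that moves with a failover resource group. It must be a spare address on the public subnet, resolvable on every node, and must not be the fixed address of either node. A minimal sketch (the hostname, address, and resource names are hypothetical):
    # /etc/hosts on every cluster node
    192.168.10.50   app-lh
    # create a failover group, add the logical hostname resource, bring it online
    clrg create -n node1,node2 abc_rg
    clreslogicalhostname create -g abc_rg -h app-lh abc_lh_rs
    clrg online -emM abc_rg
    Clients connect to app-lh; whichever node currently hosts the group plumbs that address on its public network adapter.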

    Thanks, Thorsten, for your continued help.
    The output of clrs status abc_lg:
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    abc_lg node1 Offline Offline
    node2 Offline Offline
    The status is offline.
    The output of clresourcegroup status:
    === Cluster Resource Groups ===
    Group Name Node Name Suspended Status
    abc_rg node1 No Unmanaged
    node2 No Unmanaged
    You say that the resource should be enabled after creating it. I am using GDS, and I am just following the steps provided in the developer's guide to achieve high availability.
    I have (1) a logical hostname resource and (2) an application resource in my failover resource group.
    When I bring the failover resource group online, what should the status of the resource group and of the resources in it be?
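    Both resources being Offline while the group shows Unmanaged is expected: a newly created resource group starts out unmanaged, and nothing in it runs until the group is managed and brought online. A short sketch using the names from the post:
    clrg manage abc_rg          # take the group out of the Unmanaged state
    clrg online -emM abc_rg     # -e enables resources, -m enables monitoring, -M manages
    clrs status -g abc_rg       # both resources should now be Online on one node
    After that, the logical hostname and the GDS application resource should report Online on the node hosting the group and Offline on the other.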

  • Sun Cluster 3.2 - HAStoragePlus resource taking too much time to start

    I have a disk resource called "data" that takes too much time to start up when performing a switchover. Any idea what may control this?
    Jan 28 20:28:01 hnmdb02 Cluster.Framework: [ID 801593 daemon.notice] stdout: becoming primary for data
    Jan 28 20:28:02 hnmdb02 Cluster.RGM.rgmd: [ID 350207 daemon.notice] 24 fe_rpc_command: cmd_type(enum):<3>:cmd=<null>:tag=<hnmdb.data.10>: Calling security_clnt_connect(..., host=<hnmdb02>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<0>, ...)
    Jan 28 20:28:02 hnmdb02 Cluster.RGM.rgmd: [ID 316625 daemon.notice] Timeout monitoring on method tag <hnmdb.data.10> has been resumed.
    Jan 28 20:34:57 hnmdb02 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hastorageplus_prenet_start> completed successfully for resource <data>, resource group <hnmdb>, node <hnmdb02>, time used: 23% of timeout <1800 seconds>
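    Reading the timestamps, nearly all of the switchover (20:28:02 to 20:34:57, the quoted 23% of the 1800-second timeout) is spent inside hastorageplus_prenet_start, which for a failover file system covers the fsck and mount of everything the resource manages. A hedged first check, assuming a UFS file system (the grep pattern is hypothetical):
    # a UFS file system mounted without the 'logging' option forces a full fsck
    # on every switchover; with logging it reduces to a quick log roll
    grep data /etc/vfstab
    mount -v | grep data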

    heldermo wrote:
    I have a disk resource called "data" that takes too much time to startup when performing a switchover. Any idea what may control this?
    I'm not sure how this is supposed to be related to Messaging Server. I suggest you ask your question in the Cluster forum:
    http://forums.sun.com/forum.jspa?forumID=842
    Regards,
    Shane.
