Cluster Transport Adapter Error - Sun Cluster

I am installing Sun Cluster 3.0 and it gives me an error saying:
failed to add cluster transport adapter - unknown adapter of transport type, trtype=dlpi...
My network card is a SysKonnect - the interface is skge0.
What is wrong? Thanks.

Hi,
I have a similar problem.
I get the same error with Sun Cluster 3.0; the card is a Phobos quad-port.
Did you find a solution to it, or did you have to shell out a few hundred bucks for Sun cards?

Similar Messages

  • What is the name of the first cluster transport adapter?

    Hi,
    I am new to Solaris and just getting into clusters, so pardon me if I ask a question that has already been answered.
    I am installing Sun Cluster on 2 x V120 boxes and I have two interfaces, eri0 and eri1, on each box.
    I have connected the eri0's to a switch and the eri1's using a crossover cable.
    The eri0's are configured for 10.10.7.xxx and the eri1's are in 176.16.0.xxx;
    this was done because I was not able to telnet to the network adapters connected by the crossover cable when they were configured for 10.10.7.x.
    Now the problem: while installing Sun Cluster using scinstall, I am asked for the cluster transport adapter name.
    The script will not work if I give eri0 or eri1; it says:
    Adapter "eri0" is already in use as a public network adapter
    Adapter "eri1" is already in use as a public network adapter.
    These are the only two adapters I have and I don't know how to proceed.
    Now the questions are:
    1. Do I need to configure virtual adapters like eri1:1, eri1:2, or is there some trick that I have missed?
    2. What is the common IP to which the cluster will respond once configured? Is it the IP of the first node, or do I need to provide some other IP?
    Thanks in advance for the help.
    Regards,
    Ramesh.
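
    scinstall refuses any interface that is already plumbed as a public network adapter, so the usual fix is to unconfigure the NIC intended for the interconnect before running the tool. A minimal sketch, assuming eri1 is meant to be the transport adapter and /etc/hostname.eri1 is the only thing configuring it:
    # Remove the persistent public-network configuration for eri1:
    rm /etc/hostname.eri1
    # Unplumb it so scinstall no longer reports it as "in use":
    ifconfig eri1 unplumb
    # Re-run scinstall; eri1 can now be offered as a transport adapter
    # and will be addressed from the cluster's private subnet.
    scinstall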

    Hi,
    the problem starts with the special calls for SCSI reservations, not with the SCSI disk itself.
    The VMware Server documentation describes how to make a virtual bus support SCSI reservations; there is no chance with the Workstation products.
    You may have to "disable" failfast panics by commenting out
    /usr/cluster/lib/sc/cmm_ctl -d -f in the cluster start methods.
    I am not being specific here by intent, because you then become vulnerable to split brain, amnesia, ...
    This is totally unsupported; it may work, but making mistakes will corrupt your data. If it does work, treat it with extreme care.
    For the quorum problems the quorum server helps; you then have to start another VM on your machine to act as the quorum server.
    For demonstration purposes I personally use single-node clusters and have found them sufficient.
    I have heard that the description in the VMware Server guide enables SCSI-2 reservations only, so you are stuck with two-node clusters, or you have to change the reservation type, which is only doable in SC 3.2.
    Kind Regards
    Detlef

  • LDOM SUN Cluster Interconnect failure

    I am building a test Sun Cluster on Solaris 10 in LDoms 1.3.
    In my environment I have a T5120. I set up two guest domains with some configuration, installed the Sun Cluster software, and when I executed scinstall, it failed.
    Node 2 came up, but node 1 throws the following messages:
    Boot device: /virtual-devices@100/channel-devices@200/disk@0:a File and args:
    SunOS Release 5.10 Version Generic_139555-08 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: test1
    Configuring devices.
    Loading smf(5) service descriptions: 37/37
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    /usr/cluster/bin/scdidadm: Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node test2 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node test1 (nodeid = 2) with votecount = 0 added.
    NOTICE: clcomm: Adapter vnet2 constructed
    NOTICE: clcomm: Adapter vnet1 constructed
    NOTICE: CMM: Node test1: attempting to join cluster.
    NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
    NOTICE: clcomm: Path test1:vnet1 - test2:vnet1 errors during initiation
    NOTICE: clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
    WARNING: Path test1:vnet1 - test2:vnet1 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    WARNING: Path test1:vnet2 - test2:vnet2 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    clcomm: Path test1:vnet2 - test2:vnet2 errors during initiation
    I created the virtual switches and vnets on the primary domain like this:
    532 ldm add-vsw mode=sc cluster-vsw0 primary
    533 ldm add-vsw mode=sc cluster-vsw1 primary
    535 ldm add-vnet vnet2 cluster-vsw0 test1
    536 ldm add-vnet vnet3 cluster-vsw1 test1
    540 ldm add-vnet vnet2 cluster-vsw0 test2
    541 ldm add-vnet vnet3 cluster-vsw1 test2
    Primary domain:
    bash-3.00# dladm show-dev
    vsw0 link: up speed: 1000 Mbps duplex: full
    vsw1 link: up speed: 0 Mbps duplex: unknown
    vsw2 link: up speed: 0 Mbps duplex: unknown
    e1000g0 link: up speed: 1000 Mbps duplex: full
    e1000g1 link: down speed: 0 Mbps duplex: half
    e1000g2 link: down speed: 0 Mbps duplex: half
    e1000g3 link: up speed: 1000 Mbps duplex: full
    bash-3.00# dladm show-link
    vsw0 type: non-vlan mtu: 1500 device: vsw0
    vsw1 type: non-vlan mtu: 1500 device: vsw1
    vsw2 type: non-vlan mtu: 1500 device: vsw2
    e1000g0 type: non-vlan mtu: 1500 device: e1000g0
    e1000g1 type: non-vlan mtu: 1500 device: e1000g1
    e1000g2 type: non-vlan mtu: 1500 device: e1000g2
    e1000g3 type: non-vlan mtu: 1500 device: e1000g3
    bash-3.00#
    Node 1:
    -bash-3.00# dladm show-link
    vnet0 type: non-vlan mtu: 1500 device: vnet0
    vnet1 type: non-vlan mtu: 1500 device: vnet1
    vnet2 type: non-vlan mtu: 1500 device: vnet2
    -bash-3.00# dladm show-dev
    vnet0 link: unknown speed: 0 Mbps duplex: unknown
    vnet1 link: unknown speed: 0 Mbps duplex: unknown
    vnet2 link: unknown speed: 0 Mbps duplex: unknown
    -bash-3.00#
    Node 2:
    -bash-3.00# dladm show-link
    vnet0 type: non-vlan mtu: 1500 device: vnet0
    vnet1 type: non-vlan mtu: 1500 device: vnet1
    vnet2 type: non-vlan mtu: 1500 device: vnet2
    -bash-3.00#
    -bash-3.00#
    -bash-3.00# dladm show-dev
    vnet0 link: unknown speed: 0 Mbps duplex: unknown
    vnet1 link: unknown speed: 0 Mbps duplex: unknown
    vnet2 link: unknown speed: 0 Mbps duplex: unknown
    -bash-3.00#
    And this is the configuration I gave while setting up scinstall:
    >>> Cluster Transport Adapters and Cables <<<
    You must identify the two cluster transport adapters which attach
    this node to the private cluster interconnect.
    For node "test1",
    What is the name of the first cluster transport adapter [vnet1]?
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?
    All transport adapters support the "dlpi" transport type. Ethernet
    and Infiniband adapters are supported only with the "dlpi" transport;
    however, other adapter types may support other types of transport.
    For node "test1",
    Is "vnet1" an Ethernet adapter (yes/no) [yes]?
    Is "vnet1" an Infiniband adapter (yes/no) [yes]? no
    For node "test1",
    What is the name of the second cluster transport adapter [vnet3]? vnet2
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?
    For node "test1",
    Name of the switch to which "vnet2" is connected [switch2]?
    For node "test1",
    Use the default port name for the "vnet2" connection (yes/no) [yes]?
    For node "test2",
    What is the name of the first cluster transport adapter [vnet1]?
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?
    For node "test2",
    Name of the switch to which "vnet1" is connected [switch1]?
    For node "test2",
    Use the default port name for the "vnet1" connection (yes/no) [yes]?
    For node "test2",
    What is the name of the second cluster transport adapter [vnet2]?
    Will this be a dedicated cluster transport adapter (yes/no) [yes]?
    For node "test2",
    Name of the switch to which "vnet2" is connected [switch2]?
    For node "test2",
    Use the default port name for the "vnet2" connection (yes/no) [yes]?
    I have set up the configuration like this:
    ldm list -l nodename
    Node 1:
    NETWORK
    NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
    vnet1 primary-vsw0@primary 0 network@0 00:14:4f:f9:61:63 1 1500
    vnet2 cluster-vsw0@primary 1 network@1 00:14:4f:f8:87:27 1 1500
    vnet3 cluster-vsw1@primary 2 network@2 00:14:4f:f8:f0:db 1 1500
    ldm list -l nodename
    Node 2:
    NETWORK
    NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
    vnet1 primary-vsw0@primary 0 network@0 00:14:4f:f9:a1:68 1 1500
    vnet2 cluster-vsw0@primary 1 network@1 00:14:4f:f9:3e:3d 1 1500
    vnet3 cluster-vsw1@primary 2 network@2 00:14:4f:fb:03:83 1 1500
    ldm list-services
    VSW
    NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    primary-vsw0 primary 00:14:4f:f9:25:5e e1000g0 0 switch@0 1 1 1500 on
    cluster-vsw0 primary 00:14:4f:fb:db:cb 1 switch@1 1 1 1500 sc on
    cluster-vsw1 primary 00:14:4f:fa:c1:58 2 switch@2 1 1 1500 sc on
    ldm list-bindings primary
    VSW
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    primary-vsw0 00:14:4f:f9:25:5e e1000g0 0 switch@0 1 1 1500 on
    PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
    vnet1@gitserver 00:14:4f:f8:c0:5f 1 1500
    vnet1@racc2 00:14:4f:f8:2e:37 1 1500
    vnet1@test1 00:14:4f:f9:61:63 1 1500
    vnet1@test2 00:14:4f:f9:a1:68 1 1500
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    cluster-vsw0 00:14:4f:fb:db:cb 1 switch@1 1 1 1500 sc on
    PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
    vnet2@test1 00:14:4f:f8:87:27 1 1500
    vnet2@test2 00:14:4f:f9:3e:3d 1 1500
    NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE INTER-VNET-LINK
    cluster-vsw1 00:14:4f:fa:c1:58 2 switch@2 1 1 1500 sc on
    PEER MAC PVID VID MTU LINKPROP INTERVNETLINK
    vnet3@test1 00:14:4f:f8:f0:db 1 1500
    vnet3@test2 00:14:4f:fb:03:83 1 1500
    Any ideas, team? I believe the cluster interconnect adapters were not set up successfully.
    I need any guidance or clue on how to correct the private interconnect for clustering between the two guest LDoms.

    You don't have to stick to the default IPs or subnet. You can change to whatever IPs you need, whatever netmask you need, and even change the private hostnames.
    You can do all this during install or even after install.
    Read the cluster install doc at docs.sun.com
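
    One thing worth double-checking here (an observation, not from the thread): the names given to "ldm add-vnet" are control-domain names, while inside the guest the instance number comes from the DEVICE column, so network@1 shows up as vnet1 and network@2 as vnet2. A quick sketch to confirm which guest interface rides on which virtual switch before answering scinstall's prompts:
    # On the control domain: map each guest network device to its switch;
    # network@N in the DEVICE column appears as vnetN inside that guest.
    ldm list -l test1
    # Inside the guest: list the instances that actually exist.
    dladm show-link
    # The two adapters given to scinstall must be the ones bound to the
    # mode=sc switches (cluster-vsw0 and cluster-vsw1).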

  • Upgrading Solaris OS (9 to 10)  in sun cluster 3.1 environment

    Hi all ,
    I have to upgrade from Solaris 9 to 10 in a Sun Cluster 3.1 environment.
    Sun Cluster 3.1
    data service - NetBackup 5.1
    Questions:
    1. What is the best way to upgrade from Solaris 9 to 10, and what problems come up while upgrading the OS?
    2. Is Sun Trunking supported in Sun Cluster 3.1?
    Regards
    Ramana

    Hi Ramana
    We used Live Upgrade for upgrading Solaris 9 to 10, and it is the best method for minimizing downtime and risk, but you have to follow the proper procedure, as it is not the same as for standalone Solaris. Live Upgrade with Sun Cluster is different: you have to take global devices and Veritas Volume Manager into consideration while creating the new boot environment.
    Thanks/Regards
    Sadiq
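
    For orientation, a minimal Live Upgrade sketch; the boot-environment name, target slice, and media path are illustrative, and the cluster-specific handling of global devices and VxVM still has to come from the official upgrade guide:
    # Create the alternate boot environment on a spare slice:
    lucreate -n sol10BE -m /:/dev/dsk/c1t1d0s0:ufs
    # Upgrade the inactive BE from the Solaris 10 media:
    luupgrade -u -n sol10BE -s /cdrom/cdrom0
    # Activate it, then reboot with init (not "reboot"):
    luactivate sol10BE
    init 6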

  • VCS to Sun Cluster migration

    I am planning to migrate a 2-node cluster from VCS to Sun Cluster. how much downtime does this involve? is there any documentation that i can reference?

    Hi all,
    In the following I outline the principal steps for migrating a cluster in place. This will be one of the subtopics of an upcoming blog post about VCS to SC migration.
    Pavel, you should definitely revisit SC 3.2, and explicitly the BUI. We had various VCS admins on different projects who told us the gap has become so small that VCS is not worth the additional cost.
    Bear in mind that migrating in place is the most complex scenario; doing it on a completely separate platform is a much simpler process. But let's proceed with the assumptions and process:
    Let us assume a two-node cluster where you want to migrate from VCS with VxVM to Solaris Cluster with Solaris Volume Manager. I assume as well that your data is mirrored. The steps below are an outline of the migration process; for the necessary cluster administration commands you need to consult the appropriate documentation.
    1. Reduce the VCS cluster to a one-node cluster and disconnect the interconnect. The interconnect has to be disconnected to allow a Solaris Cluster installation on the other node; Solaris Cluster checks the interconnect for unwanted traffic.
    2. Split the storage into two halves and disallow access from the VCS cluster to the future Solaris Cluster half. This can be achieved, for example, by modifying the switch zoning or the LUN masking. At this point your application is still running, but you no longer have high availability or data redundancy.
    3. Install a single-node Solaris Cluster on the second host; it is advisable to start with a fresh Solaris install.
    4. Configure the full Solaris Cluster topology with a temporary copy of your data. The data has to be installed by backup/restore, because you are changing the volume manager as well. It is important here that you use different IP addresses for the logical hosts, to avoid duplicate addresses. Now the new single-node Solaris Cluster is ready to take the actual data.
    5. When you are ready for an application downtime, transfer the actual data from the Veritas cluster to the Solaris Cluster again, and shut down the remaining VCS single-node cluster.
    6. Change the IP addresses of the logical hosts in the Solaris Cluster to their final values and enable all relevant resources. From now on your application will be running on the new Solaris Cluster.
    7. Re-establish the interconnect, destroy the VCS cluster, and install the Solaris Cluster packages on the old VCS node, but do not configure the node yet.
    8. Allow data access to the storage for both nodes with the appropriate methods.
    9. Add the second node to the Solaris Cluster, including the Solaris Cluster device groups; this step will take another short application downtime (see the sketch after this list).
    10. Mirror your data. From this point on you have full redundancy and full high availability again.
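
    A rough sketch of steps 9 and 10, assuming SC 3.2 command names; the node name, diskset, and metadevices are illustrative:
    # On the established cluster node: permit the old VCS host to join.
    claccess allow -h oldnode
    # On oldnode: join the existing cluster interactively.
    scinstall    # choose "Add this machine as a node in an existing cluster"
    # Once both nodes are members, re-attach the second half of each mirror:
    metattach -s appdg d120 d122
    metastat -s appdg d120    # watch the resync complete
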
    Cheers
    Detlef

  • Can I use one transport adapter on the nodes of the cluster?

    Hi
    I am new to Sun Cluster. The cluster documentation says that each node should have two network cards: one for public connections and one for the private connection. What if I do not want the nodes to have public connections, except for one node? In other words, I want to use one network card on each node except the first node in the cluster, and users would access the rest of the nodes through the first node. Is that possible? If yes, what should I give as the name of the second transport adapter while installing the cluster software on the nodes?
    Thank You for the help

    Dear,
    We use a cluster for HA under failover conditions. If you have only one network adapter, how would failover work? Also, you can't assign one adapter to two nodes at the same time; you need a minimum of two network adapters per node for a two-node cluster.
    :) Good luck
    Mohammed Tanvir

  • Didadm: unable to determine hostname.  error on Sun cluster 4.0 - Solaris11

    Trying to install Sun Cluster 4.0 on Sun Solaris 11 (x86-64).
    The iSCSI shared quorum disks are available in /dev/rdsk/. I ran:
    devfsadm
    cldevice populate
    But I don't see DID devices getting populated in /dev/did.
    Also, when scdidadm -L is issued, I get the following error. Has anyone seen the same error?
    - didadm: unable to determine hostname.
    I found that in Cluster 3.2 there was Bug 6380956: didadm should exit with an error message if it cannot determine the hostname.
    The Sun Cluster command didadm (didadm -l in particular) requires the hostname to function correctly. It uses the standard C library function gethostname() to achieve this.
    Early in the cluster boot, prior to the service svc:/system/identity:node coming online, gethostname() returns an empty string. This breaks didadm.
    Can anyone point me in the right direction to get past this issue with the shared quorum disk DIDs?

    Let's step back a bit. First, what hardware are you installing on? Is it a supported platform or is it some guest VM? (That might contribute to the problems).
    Next, after you installed Solaris 11, did the system boot cleanly and did all the services come up (svcs -x)? If it did boot cleanly, what does 'uname -n' return? Do commands like 'getent hosts <your_hostname>' work? If there are problems here, Solaris Cluster won't be able to get round them.
    If the Solaris install was clean, what were the results of the above host name commands after OSC was installed? Do the hostnames still resolve? If not, you need to look at why that is happening first.
    Regards,
    Tim
    ---
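
    A compact version of the checks suggested above, together with the identity service from the bug note; all standard Solaris 11 commands:
    svcs -x                           # any services stuck or in maintenance?
    svcs svc:/system/identity:node    # must be online for gethostname()
    uname -n                          # the kernel's idea of the node name
    getent hosts "$(uname -n)"        # does that name resolve?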

  • Sun cluster failed when switching, mount /global/ I/O error .

    Hi all,
    I am having a problem during switching two Sun Cluster nodes.
    Environment:
    Two nodes with Solaris 8 (Generic_117350-27), two Sun D2 arrays, VxVM 3.2 and Sun Cluster 3.0.
    Problem description:
    scswitch failed, so I did an scshutdown and booted both nodes back up. One node failed to come up because of a VxVM boot failure.
    The other node boots up normally but cannot mount the /global directories. Mounting manually works fine.
    # mount /global/stripe01
    mount: I/O error
    mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
    # vxdg import globdg
    # vxvol -g globdg startall
    # mount /dev/vx/dsk/globdg/mirror-vol03 /mnt
    # echo $?
    0
    port:root:/global/.devices/node@1/dev/vx/dsk 169# mount /global/stripe01
    mount: I/O error
    mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
    Need help urgently
    Jeff

    I would check your patch levels. I seem to remember there was a linker patch that caused an issue with mounting /global/.devices/node@X.
    Tim
    ---
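
    A quick way to compare the two nodes' patch levels (standard Solaris 8 commands; the file names are illustrative):
    # On each node, dump the installed patch list:
    showrev -p > /tmp/patches.`uname -n`
    # Copy both lists to one host and diff them; a linker patch present
    # on only one node is exactly the kind of skew to look for:
    diff /tmp/patches.node1 /tmp/patches.node2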

  • Failed to create resource - Error in Sun cluster 3.2

    Hi All,
    I have a 2-node cluster in place. When I try to create a resource, I get the following error.
    Can anybody tell me why? I have Sun Cluster 3.2 on Solaris 10.
    I have created a zpool called testpool.
    clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res
    clrs: sun011:test011z - : no error
    clrs: (C189917) VALIDATE on resource hasp-testpool-res, resource group test-rg, exited with non-zero exit status.
    clrs: (C720144) Validation of resource hasp-testpool-res in resource group test-rg on node sun011:test011z failed.
    clrs: (C891200) Failed to create resource "hasp-testpool-res".
    Regards
    Kumar

    Thorsten,
    testpool was created on one of the cluster nodes and is accessible from both nodes in the cluster. But while it is imported on one node it cannot be accessed from the other; for the other node to get access, we have to export testpool and import it on that node.
    The storage LUNs allocated to testpool are accessible from all the nodes in the cluster, and I am able to import and export testpool from all the nodes in the cluster.
    Regards
    Kumar
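
    Since HAStoragePlus validation has to be able to import the pool on every node in the resource group's node list, it is worth verifying the move by hand first; a sketch using standard ZFS commands, with the clrs line repeated from above:
    # On the node currently holding the pool:
    zpool export testpool
    # On the other node:
    zpool import testpool
    zpool status testpool
    # If both directions work cleanly, retry the resource creation:
    clrs create -g test-rg -t SUNW.HAStoragePlus -p Zpools=testpool hasp-testpool-res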

  • Errors after initial Sun Cluster install

    - SunOS conch 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Fire-V210
    - Sun Cluster 3.2
    I've gone through the scinstall process using the standard answers to the questions. The only exception is that when it came to quorum, I answered that I would set it up later, as I want to try the quorum server. There's no shared storage - I'm seeing if it's possible to create a cluster using IP-based replication.
    I'm getting these error messages every 30 seconds (they look like the result of this legacy service):
    # svcs lrc:/etc/rc3_d/S91initgchb_resd
    STATE STIME FMRI
    legacy_run 16:19:29 lrc:/etc/rc3_d/S91initgchb_resd
    Feb 8 16:38:59 conch Cluster.GCHB_resd: Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 8 16:38:59 conch Cluster.GCHB_resd: GCHB system error: scha_cluster_open failed with 18
    Feb 8 16:38:59 conch : Bad file number
    Feb 8 16:39:29 conch Cluster.GCHB_resd: Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 8 16:39:29 conch Cluster.GCHB_resd: GCHB system error: scha_cluster_open failed with 18
    Feb 8 16:39:29 conch : Bad file number
    Feb 8 16:39:59 conch Cluster.GCHB_resd: Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 8 16:39:59 conch Cluster.GCHB_resd: GCHB system error: scha_cluster_open failed with 18
    Feb 8 16:39:59 conch : Bad file number
    Feb 8 16:40:29 conch Cluster.GCHB_resd: Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 8 16:40:29 conch Cluster.GCHB_resd: GCHB system error: scha_cluster_open failed with 18
    Feb 8 16:40:29 conch : Bad file number
    There are no file system errors, and I'm at a complete loss as to why this problem appears. Can anyone offer any advice?
    Cheers,
    Iain

    Hi,
    there are 2 issues here.
    1. The error messages that you see. I get them on my freshly installed cluster as well. What did I do? I used the JES installer and installed SC 3.2 and SC Geo 3.2 - to be configured later. I think that should only install the packages and not configure any part of them; it seems that it does otherwise. To me GCHB sounds like "global cluster heartbeat". I'll follow up with the developers to get this clarified.
    2. Replication within a cluster and no shared storage. This has several aspects. I, too, see more and more customer demand for this. If you get it to work, let us know. I am not sure, though, why you installed the SC Geo edition to achieve this, as I do not think it will help you here.
    In any case I can only recommend setting up the quorum server before proceeding; otherwise your whole cluster will panic as soon as you do a single reboot. That is by design.
    Regards
    Hartmut
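
    For reference, a minimal quorum-server setup sketch for SC 3.2; the host IP, port, and device name are illustrative:
    # On the quorum-server host (outside the cluster), /etc/scqsd/scqsd.conf
    # carries one line per server instance, e.g.:
    #   /usr/cluster/lib/sc/scqsd -d /var/scqsd -p 9000
    clquorumserver start +
    # On one cluster node, register it as the quorum device:
    clquorum add -t quorum_server -p qshost=192.168.1.50,port=9000 qs1
    clquorum status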

  • Sun cluster 3.1 io error

    Hi,
    I have 2 cluster nodes running Solaris 9 (9/05) with Sun Cluster 3.1. After a migration from Hitachi AMS1000 storage to a Sun StorageTek 9985V, when I shut down one node in the cluster, the volumes mounted on the second node give an I/O error. I have already installed the new patches for the OS, cluster, and SAN, but the problem still persists. Please help me.
    Regards,
    Arun

    Arun,
    You say you migrated to the 9985V - did you do that with backup and restore, or with a replication technology? If it was the latter, you might have inadvertently copied over some SCSI reservation keys. Otherwise, I can't see any reason for the problem.
    SCSI keys can be removed (with extreme care) using the scsi and pgre commands in the /usr/cluster/lib/sc directory.
    Tim
    ---
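
    For orientation, the read-only inspection side of what is described above; the subcommand names are as documented for Sun Cluster, and the DID device is purely illustrative:
    # List SCSI-3 persistent reservation keys on a device:
    /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d4s2
    # List SCSI-2 PGRe emulation keys:
    /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d4s2
    # Stale keys can then be removed with the matching scrub subcommands
    # (scsi -c scrub / pgre -c pgre_scrub) - cluster down, extreme care.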

  • Replacing network adapter from IPMP group (Sun cluster 3.3)

    Hello!
    I need to replace the network devices in an IPMP group from ge0 ge1 ge2 to ce5 ce6 ce7.
    Can I do this procedure online? Something like:
    Creating the files that add them to the IPMP groups: /etc/hostname.ce5, .ce6, .ce7
    Unmonitoring the resource groups
    Unplumbing the old devices and plumbing up the new devices
    # scstat -i
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: node0 ipmp0 Online ge1 Online
    IPMP Group: node0 ipmp0 Online ge0 Online
    IPMP Group: node0 ipmp1 Online ce2 Online
    IPMP Group: node0 ipmp1 Online ce0 Online
    IPMP Group: node1 ipmp0 Online ge1 Online
    IPMP Group: node1 ipmp0 Online ge0 Online
    IPMP Group: node1 ipmp1 Online ce2 Online
    IPMP Group: node1 ipmp1 Online ce0 Online
    /etc/hostname.ge0
    n0-testge0 netmask + broadcast + group ipmp0 deprecated -failover up
    addif node0 netmask + broadcast + up
    /etc/hostname.ge1
    n0-testge1 netmask + broadcast + group ipmp0 deprecated -failover up
    /etc/hostname.ge2
    backupn0 mtu 1500
    # ifconfig -a
    lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
    inet 127.0.0.1 netmask ff000000
    ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
    inet 172.19.1.25 netmask ffffff00 broadcast 172.19.1.255
    groupname ipmp1
    ether 0:14:4f:23:1d:9
    ce0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
    inet 172.19.1.10 netmask ffffff00 broadcast 172.19.1.255
    ce1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 9
    inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
    ether 0:14:4f:23:1d:a
    ce2: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
    inet 172.19.1.26 netmask ffffff00 broadcast 172.19.1.255
    groupname ipmp1
    ether 0:14:4f:26:a4:83
    ce2:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 3
    inet 172.19.1.23 netmask ffffff00 broadcast 172.19.1.255
    ce4: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 8
    inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
    ether 0:14:4f:42:7f:28
    dman0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
    inet 192.168.103.6 netmask ffffffe0 broadcast 192.168.103.31
    ether 0:0:be:aa:1c:58
    ge0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 5
    inet 10.1.0.25 netmask ffffff00 broadcast 10.1.0.255
    groupname ipmp0
    ether 8:0:20:e6:61:a7
    ge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
    inet 10.1.0.10 netmask ffffff00 broadcast 10.1.0.255
    ge1: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 6
    inet 10.1.0.26 netmask ffffff00 broadcast 10.1.0.255
    groupname ipmp0
    ether 0:3:ba:c:74:62
    ge1:1: flags=1001040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,FIXEDMTU> mtu 1500 index 6
    inet 10.1.0.23 netmask ffffff00 broadcast 10.1.0.255
    ge2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
    inet 10.1.2.10 netmask ffffff00 broadcast 10.1.2.255
    ether 8:0:20:b5:25:88
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 10
    inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255
    ether 0:0:0:0:0:1
    Thanks in advance!

    You should be able to replace the adapters in an IPMP group one by one without affecting cluster operation.
    BUT: you must make sure that the status of the new adapter in the IPMP group gets back to normal before you start replacing the next adapter.
    Solaris Cluster only reacts to IPMP group failures, not to failures of individual NICs.
    Note that IPMP is only used for the public network; cluster interconnects are not configured using IPMP. Nevertheless, the same technique can be applied to replace adapters in the cluster interconnect. You need to use the clintr command (IIRC) to replace individual NICs there. Again, make sure that all the NICs of the interconnect are healthy before you continue replacing the next adapter.
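
    A sketch of one public-network swap (ce5 in, ge0 out), assuming Solaris 9 IPMP with test addresses; the n0-testce5 test hostname is hypothetical:
    # Bring the new NIC into the same group with its own test address:
    ifconfig ce5 plumb
    ifconfig ce5 n0-testce5 netmask + broadcast + group ipmp0 deprecated -failover up
    # Wait until in.mpathd / scstat -i reports ce5 Online, then retire ge0:
    if_mpadm -d ge0        # offline it cleanly within the group
    ifconfig ge0 unplumb
    # Persist the change: move the data addresses from /etc/hostname.ge0
    # into /etc/hostname.ce5 and remove the old file.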

  • SUN Cluster 3.2, Solaris 10, Corrupted IPMP group on one node.

    Hello folks,
    I recently made a network change on nodename2 to add some resilience to IPMP (adding a second interface while still using a single IP address).
    After a reboot, I cannot keep this host from rebooting. For the one minute that it stays up, I get the following result from scstat, which seems to suggest a problem with the IPMP configuration. I rolled back my IPMP change, but scstat still doesn't register the IPMP group.
    nodename2|/#scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: nodename1 Online
    Cluster node: nodename2 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: nodename1:bge3 nodename2:bge3 Path online
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: nodename1 1 1 Online
    Node votes: nodename2 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 0 1 Offline
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: jms-ds nodename1 nodename2
    -- Device Group Status --
    Device Group Status
    Device group status: jms-ds Online
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    scstat:  unexpected error.
    I did manage to run scstat on nodename1 while nodename2 was still up between reboots; here is that result (it does not show any IPMP group(s) on nodename2):
    nodename1|/#scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: nodename1 Online
    Cluster node: nodename2 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: nodename1:bge3 nodename2:bge3 faulted
    -- Quorum Summary from latest node reconfiguration --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node (current status) --
    Node Name Present Possible Status
    Node votes: nodename1 1 1 Online
    Node votes: nodename2 1 1 Online
    -- Quorum Votes by Device (current status) --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: jms-ds nodename1 -
    -- Device Group Status --
    Device Group Status
    Device group status: jms-ds Degraded
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: nodename1 sc_ipmp1 Online bge2 Online
    IPMP Group: nodename1 sc_ipmp0 Online bge0 Online
    -- IPMP Groups in Zones --
    Zone Name Group Status Adapter Status
    I believe that I should be able to delete the IPMP group for the second node from the cluster and re-add it, but I'm not sure how to go about doing this. I welcome your comments or thoughts on what I can try before rebuilding this node from scratch.
    -AG

    I was able to restart both sides of the cluster. Now both sides are online, but neither side can access the shared disk.
    Lots of warnings. I will keep poking....
    Rebooting with command: boot
    Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
    SunOS Release 5.10 Version Generic_141444-09 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Hostname: nodename2
    Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface ce3; disabling probe-based failure detection on it
    Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface bge0; disabling probe-based failure detection on it
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    Booting as part of a cluster
    NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: clcomm: Adapter bge3 constructed
    NOTICE: CMM: Node nodename2: attempting to join cluster.
    NOTICE: CMM: Node nodename1 (nodeid: 1, incarnation #: 1279727883) has become reachable.
    NOTICE: clcomm: Path nodename2:bge3 - nodename1:bge3 online
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
    NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
    NOTICE: CMM: Cluster members: nodename1 nodename2.
    NOTICE: CMM: node reconfiguration #3 completed.
    NOTICE: CMM: Node nodename2: joined cluster.
    NOTICE: CCR: Waiting for repository synchronization to finish.
    WARNING: CCR: Invalid CCR table : dcs_service_9 cluster global.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    ==> WARNING: DCS: Error looking up services table
    ==> WARNING: DCS: Error initializing service 9 from file
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    /dev/md/rdsk/d22 is clean
    Reading ZFS config: done.
    NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
    nodename2 console login: obtaining access to all attached disks
    starting NetWorker daemons:
    Rebooting with command: boot
    Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
    SunOS Release 5.10 Version Generic_141444-09 64-bit
    Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hardware watchdog enabled
    Hostname: nodename1
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    Booting as part of a cluster
    NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
    NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: clcomm: Adapter bge3 constructed
    NOTICE: CMM: Node nodename1: attempting to join cluster.
    NOTICE: bge3: link up 1000Mbps Full-Duplex
    NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 errors during initiation
    WARNING: Path nodename1:bge3 - nodename2:bge3 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
    NOTICE: bge3: link down
    NOTICE: bge3: link up 1000Mbps Full-Duplex
    NOTICE: CMM: Node nodename2 (nodeid: 2, incarnation #: 1279728026) has become reachable.
    NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 online
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
    NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
    NOTICE: CMM: Cluster members: nodename1 nodename2.
    NOTICE: CMM: node reconfiguration #3 completed.
    NOTICE: CMM: Node nodename1: joined cluster.
    WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
    /usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
    Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
    /dev/md/rdsk/d26 is clean
    Reading ZFS config: done.
    NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
    nodename1 console login: obtaining access to all attached disks
    starting NetWorker daemons:
    nsrexecd
    mount: /dev/md/jms-ds/dsk/d100 is already mounted or /opt/esbshares is busy
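
    Not from the thread, but worth noting: the repeated "Open failed for quorum device /dev/did/rdsk/d3s2 with error 2" warnings are an ENOENT, which suggests the DID path behind d3 went stale after the iSCSI change. A hedged sketch of the usual checks:
    # Which physical path does the quorum DID device map to?
    scdidadm -L | grep d3
    # If the underlying path moved, repair the DID instance:
    scdidadm -R 3
    # Then confirm the quorum state:
    clquorum status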

  • SC 3.2 - scinstall fails cluster transport discovery

    Hello,
    I'm trying to install a cluster on 2 V240 servers running Solaris 10 U4 (08/07), built from a custom JumpStart profile. Both systems have 4 network connections (2 public, 2 private), spread across a total of 3 VLANs (one public, two private). The cluster software installs just fine, but when it comes time to run 'scinstall' to build the cluster, I get the following error:
      Cluster Creation
        Log file - /var/cluster/logs/install/scinstall.log.4152
        Checking installation status ... done
        The Sun Cluster software is installed on "fusion03".
        The Sun Cluster software is installed on "fusion04".
        Starting discovery of the cluster transport configuration.
        Probes were sent out from all transport adapters configured for this
        node ("fusion03"). But, they were not seen by any of the other nodes.
        This may be due to any number of reasons, including improper cabling
        or a switch which was confused by the probes.
        You can either attempt to correct the problem and try the probes again
        or manually configure the transport. To correct the problem might
        involve re-cabling, changing the configuration, or fixing hardware.
        You must configure the transport manually to configure tagged VLAN
        adapters and non tagged VLAN adapters on the same private interconnect
        VLAN.
        Do you want to try again (yes/no) [yes]?
    I've been poking at this all day, so any pointers would be greatly appreciated.
    fpsm

    I can plumb and configure an IP on all interfaces in both machines and they can see each other (ping and arp both work fine).
    I did find one thing while looking through the log, though. I am configuring these through SSH, and it appears that the SSH banner was causing a bit of a problem. Notice in the output below that the 'expected probe' information is actually words from the legal disclaimer in the SSH banner.
    ===========================
    fusion03
    ===========================
    scrconf -n cmd=discover_send,adapters=bge2:bge3,vlans=0:0,token=suncluster_fusion-dmz,sendcount=30
    ===========================
    fusion04
    ===========================
    ssh root@fusion04 -o "BatchMode yes" -n "/bin/sh -c '/usr/cluster/lib/scadmin/lib/cmd_autodiscovery 0:0 suncluster_fusion-dmz 2 30; /bin/echo SC_COMMAND_STATUS=\$?'"
    quit
    SC_COMMAND_STATUS=0
    All users of this system have consented to, and are subject to, the
    provisions of Corporate Policy Regarding Electronic Communications.
    This system is for the use of authorized users only.  Individuals
    using this computer system without authority, or in excess of their
    authority, are subject to having all of their activities on this
    system monitored and recorded by systems personnel.  In the course
    of monitoring individuals improperly using this system, or in the
    course of system maintenance, the activities of authorized users may
    also be monitored.  Anyone using this system expressly consents to
    such monitoring and is advised that if such monitoring reveals possible
    criminal activity, system personnel may provide the evidence of such
    monitoring to law enforcement officials.
    ===========================
    "fusion04" found an expected probe from "of".
    "fusion04" found an expected probe from "Corporate".
    "fusion04" found an expected probe from "is".
    "fusion04" found an expected probe from "computer".
    "fusion04" found an expected probe from "subject".
    "fusion04" found an expected probe from "and".
    "fusion04" found an expected probe from "individuals".
    "fusion04" found an expected probe from "system".
    "fusion04" found an expected probe from "monitored.".
    "fusion04" found an expected probe from "and".
    "fusion04" found an expected probe from "system".
    "fusion04" found an expected probe from "law".
        Probes were sent out from all transport adapters configured for this
        node ("fusion03"). But, they were not seen by any of the other nodes.
        This may be due to any number of reasons, including improper cabling
        or a switch which was confused by the probes.
        You can either attempt to correct the problem and try the probes again
        or manually configure the transport. To correct the problem might
        involve re-cabling, changing the configuration, or fixing hardware.
        You must configure the transport manually to configure tagged VLAN
        adapters and non tagged VLAN adapters on the same private interconnect
        VLAN.
    I removed the banner and those messages stopped appearing in the log, but it didn't resolve the problem - I still can't get the interconnect to work.
    fpsm
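
    Since scinstall parses the stdout of the commands it runs over ssh, anything printed by sshd or by login scripts can corrupt the probe exchange. A sketch of silencing such output for batch sessions (the sshd_config keywords are standard OpenSSH):
    # In /etc/ssh/sshd_config on every node, then restart sshd:
    #   Banner none
    #   PrintMotd no
    svcadm restart svc:/network/ssh:default
    # Also make sure root's shell start-up files print nothing for
    # non-interactive sessions, e.g. guard any output in .profile:
    if [ -t 0 ]; then
        cat /etc/motd    # only when attached to a terminal
    fi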

  • Problem with sun Cluster

    Hi all!
    I have a problem with the cluster: a server cannot see the HDDs from the StorEdge arrays.
    State:
    - At the "ok" prompt, using the "probe-scsi-all" command: hap203 can detect all 14 HDDs (4 local HDDs, 5 HDDs from 3310_1 and 5 HDDs from 3310_2); hap103 detects only 13 HDDs (4 local, 5 from 3310_1 and only 4 from 3310_2).
    - Using the "format" command on hap203, the server can detect 14 HDDs (0 to 13); but typing "format" on hap103 shows only 9 HDDs (0 to 8).
    - Typing "devfsadm -C" on hap103 gives error notices about the HDDs.
    - Typing "scstat" on hap103: the resource group status is "Pending online" on hap103 and "Offline" on hap203.
    - Typing "metastat -s dgsmp" on hap103 gives a "Needs maintenance" notice.
    Help me if you can.
    Many thanks.
    Long.
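
    Not part of the original post, but the "scsi parity error" / "Target synch. rate reduced" warnings in the logs below point at cabling or termination trouble on a 3310 SCSI chain. Once the physical path is fixed, a typical recovery sketch (component names taken from the metastat output below):
    # Rebuild the device tree and the global device namespace:
    devfsadm -C
    scgdevs
    # Re-enable the errored submirror components one by one, e.g.:
    metareplace -e dgsmp/d120 d5s0
    metastat -s dgsmp    # watch the resync
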
    -----------------------------ok_log-------------------------
    ########## hap103 ##################
    {3} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 1
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    /pci@1d,700000/pci@2/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1d,700000/pci@2/scsi@4
    /pci@1c,600000/pci@1/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1c,600000/pci@1/scsi@4
    ############ hap203 ###################################
    {3} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 1
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    /pci@1d,700000/pci@2/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1d,700000/pci@2/scsi@4
    /pci@1c,600000/pci@1/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1c,600000/pci@1/scsi@4
    {3} ok
    ------------------------hap103-------------------------
    hap103>
    hap103> format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@8,0
    1. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@9,0
    2. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@a,0
    3. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@b,0
    4. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@c,0
    5. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@0,0
    6. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@1,0
    7. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@2,0
    8. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@3,0
    Specify disk (enter its number): ^D
    hap103>
    hap103>
    hap103>
    hap103> scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: hap103 Online
    Cluster node: hap203 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: hap103:ce7 hap203:ce7 Path online
    Transport path: hap103:ce3 hap203:ce3 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: hap103 1 1 Online
    Node votes: hap203 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d1s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: dgsmp hap103 hap203
    -- Device Group Status --
    Device Group Status
    Device group status: dgsmp Online
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: rg-smp has-res SDP1 SMFswitch
    -- Resource Groups --
    Group Name Node Name State
    Group: rg-smp hap103 Pending online
    Group: rg-smp hap203 Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: has-res hap103 Offline Unknown - Starting
    Resource: has-res hap203 Offline Offline
    Resource: SDP1 hap103 Offline Unknown - Starting
    Resource: SDP1 hap203 Offline Offline
    Resource: SMFswitch hap103 Offline Offline
    Resource: SMFswitch hap203 Offline Offline
    hap103>
    hap103>
    hap103> metastat -s dgsmp
    dgsmp/d120: Mirror
    Submirror 0: dgsmp/d121
    State: Needs maintenance
    Submirror 1: dgsmp/d122
    State: Needs maintenance
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 716695680 blocks
    dgsmp/d121: Submirror of dgsmp/d120
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
    metareplace dgsmp/d120 d5s0 <new device>
    Size: 716695680 blocks
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Hot Spare
    d1s0 0 No Maintenance
    d2s0 0 No Maintenance
    d3s0 0 No Maintenance
    d4s0 0 No Maintenance
    d5s0 0 No Last Erred
    dgsmp/d122: Submirror of dgsmp/d120
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
    metareplace dgsmp/d120 d6s0 <new device>
    Size: 716695680 blocks
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Hot Spare
    d6s0 0 No Last Erred
    d7s0 0 No Okay
    d8s0 0 No Okay
    d9s0 0 No Okay
    d10s0 0 No Resyncing
    hap103> May 6 14:55:58 hap103 login: ROOT LOGIN /dev/pts/1 FROM ralf1
    hap103>
    hap103>
    hap103>
    hap103>
    hap103> scdidadm -l
    1 hap103:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
    2 hap103:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
    3 hap103:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
    4 hap103:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
    5 hap103:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
    6 hap103:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
    7 hap103:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
    8 hap103:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
    9 hap103:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
    10 hap103:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
    11 hap103:/dev/rdsk/c2t0d0 /dev/did/rdsk/d11
    12 hap103:/dev/rdsk/c3t0d0 /dev/did/rdsk/d12
    13 hap103:/dev/rdsk/c3t1d0 /dev/did/rdsk/d13
    14 hap103:/dev/rdsk/c3t2d0 /dev/did/rdsk/d14
    15 hap103:/dev/rdsk/c3t3d0 /dev/did/rdsk/d15
    hap103>
    hap103>
    hap103> more /etc/vfstab
    #device device  mount   FS      fsck    mount   mount
    #to     mount   to      fsck            point           type    pass    at boot options
    #/dev/dsk/c1d0s2        /dev/rdsk/c1d0s2        /usr    ufs     1       yes     -
    fd      -       /dev/fd fd      -       no      -
    /proc   -       /proc   proc    -       no      -
    /dev/md/dsk/d20 -       -       swap    -       no      -
    /dev/md/dsk/d10 /dev/md/rdsk/d10        /       ufs     1       no      logging
    #/dev/dsk/c3t0d0s3      /dev/rdsk/c3t0d0s3      /globaldevices  ufs     2       yes     logging
    /dev/md/dsk/d60 /dev/md/rdsk/d60        /in     ufs     2       yes     logging
    /dev/md/dsk/d40 /dev/md/rdsk/d40        /in/oracle      ufs     2       yes     logging
    /dev/md/dsk/d50 /dev/md/rdsk/d50        /indelivery     ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/md/dsk/d30 /dev/md/rdsk/d30        /global/.devices/node@1 ufs     2       no      global
    /dev/md/dgsmp/dsk/d120  /dev/md/dgsmp/rdsk/d120 /in/smp ufs     2       yes     logging,global
    #RALF1:/in/RALF1 - /inbackup/RALF1 nfs - yes rw,bg,soft
    hap103> df -h
    df: unknown option: h
    Usage: df [-F FSType] [-abegklntVv] [-o FSType-specific_options] [directory | block_device | resource]
    hap103>
    hap103>
    hap103>
    hap103> df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 4339374 3429010 866971 80% /
    /proc 0 0 0 0% /proc
    fd 0 0 0 0% /dev/fd
    mnttab 0 0 0 0% /etc/mnttab
    swap 22744256 136 22744120 1% /var/run
    swap 22744144 24 22744120 1% /tmp
    /dev/md/dsk/d50 1021735 2210 958221 1% /indelivery
    /dev/md/dsk/d60 121571658 1907721 118448221 2% /in
    /dev/md/dsk/d40 1529383 1043520 424688 72% /in/oracle
    /dev/md/dsk/d33 194239 4901 169915 3% /global/.devices/node@2
    /dev/md/dsk/d30 194239 4901 169915 3% /global/.devices/node@1
    ------------------log_hap203---------------------------------
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@8,0
    1. c0t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@9,0
    2. c0t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@a,0
    3. c0t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@b,0
    4. c0t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@c,0
    5. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@8,0
    6. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@9,0
    7. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@a,0
    8. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@b,0
    9. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@c,0
    10. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@0,0
    11. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@1,0
    12. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@2,0
    13. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@3,0
    Specify disk (enter its number): ^D
    hap203>
    hap203> scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: hap103 Online
    Cluster node: hap203 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: hap103:ce7 hap203:ce7 Path online
    Transport path: hap103:ce3 hap203:ce3 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: hap103 1 1 Online
    Node votes: hap203 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d1s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: dgsmp hap103 hap203
    -- Device Group Status --
    Device Group Status
    Device group status: dgsmp Online
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: rg-smp has-res SDP1 SMFswitch
    -- Resource Groups --
    Group Name Node Name State
    Group: rg-smp hap103 Pending online
    Group: rg-smp hap203 Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: has-res hap103 Offline Unknown - Starting
    Resource: has-res hap203 Offline Offline
    Resource: SDP1 hap103 Offline Unknown - Starting
    Resource: SDP1 hap203 Offline Offline
    Resource: SMFswitch hap103 Offline Offline
    Resource: SMFswitch hap203 Offline Offline
    hap203>
    hap203>
    hap203> devfsadm -C
    hap203>
    hap203> scdidadm -l
    1 hap203:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
    2 hap203:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
    3 hap203:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
    4 hap203:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
    5 hap203:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
    6 hap203:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
    7 hap203:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
    8 hap203:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
    9 hap203:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
    10 hap203:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
    16 hap203:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16
    17 hap203:/dev/rdsk/c3t0d0 /dev/did/rdsk/d17
    18 hap203:/dev/rdsk/c3t1d0 /dev/did/rdsk/d18
    19 hap203:/dev/rdsk/c3t2d0 /dev/did/rdsk/d19
    20 hap203:/dev/rdsk/c3t3d0 /dev/did/rdsk/d20
    hap203> May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    [this message repeats four more times at Error Level: Retryable and once at Error Level: Fatal, all for block 61 on sd53 (tgt 8)]
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 63 Error Block: 63
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    [the same error on sd57 (tgt 12) then repeats for block 66 (five times Retryable, then once Fatal), block 1097 (once), and block 1100 (four times); the log is cut off mid-message]
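    Repeated "scsi parity error" / "Target synch. rate reduced" warnings like these generally point at the physical bus (cabling, termination, or a failing device) rather than at the cluster software. Two standard Solaris commands (a sketch, not from the original post) show which disks are accumulating errors:

    iostat -En                                  # per-device soft/hard/transport error counters
    grep -c 'parity error' /var/adm/messages    # how often the error has been logged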

    First question: what HBA and driver combination are you using?
    Next, do you have MPxIO enabled or disabled?
    Are you using SAN switches? If so, whose, at what firmware level, and in what configuration (i.e., single switch, cascade of multiple switches, etc.)?
    What are the distances from the nodes to the storage (include any fabric switches and ISLs if there are multiple switches), and what transport media are you using (copper, or single-mode or multi-mode fibre)?
    What is the configuration of your storage ports (fabric point-to-point, loop, etc.)? If loop, what are the AL_PAs for each connection?
    The more you leave out of your question, the harder it is to offer suggestions.
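    Most of that information can be gathered with standard Solaris commands; a sketch (the MPxIO file location varies by Solaris release):

    prtdiag -v                                      # system and I/O slot inventory (HBA models)
    prtconf -D                                      # device tree with the driver bound to each node
    modinfo | grep -i scsi                          # loaded SCSI/FC driver modules and versions
    cfgadm -al                                      # controllers and attached targets
    grep mpxio-disable /kernel/drv/scsi_vhci.conf   # MPxIO master switch (pre-Solaris 10)
    grep mpxio-disable /kernel/drv/fp.conf          # per-port/global switch on Solaris 10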
    Feadshipman
