Bizarre disk reservation problem with Sun Cluster 3.2 - Solaris 10 X4600
We have a four-node X4600 Sun Cluster with shared AMS500 storage. There are over 30 LUNs presented to the cluster.
When either of the two higher nodes (i.e. node ID 2 and node ID 3) is booted, its keys are not added to 4 of the 30 LUNs. These 4 LUNs show up with drive type unknown in format. The only thing these LUNs have in common is that they are larger than 1 TB.
To resolve this I simply scrub the keys and run scgdevs; the LUNs then show up as normal in format and all nodes' keys are present on them.
Has anybody come across this behaviour?
Commands used to resolve the problem:
1. Check keys: # /usr/cluster/lib/sc/scsi -c inkeys -d devicename
2. Scrub keys: # /usr/cluster/lib/sc/scsi -c scrub -d devicename
3. Rebuild global devices: # scgdevs
4. Check keys again: # /usr/cluster/lib/sc/scsi -c inkeys -d devicename
All nodes' keys are now present on the LUN.
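For reference, the whole sequence looks roughly like this; the DID device path /dev/did/rdsk/d15s2 is only an example, substitute whichever of the affected >1 TB LUNs you are checking:
# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d15s2
(lists the registered reservation keys - the freshly booted node's key is missing)
# /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/d15s2
(scrubs all keys from the device)
# scgdevs
(rebuilds the global device namespace, after which the nodes re-register their keys)
# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d15s2
(verify that all four nodes' keys are now present on the LUN)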
Similar Messages
-
Is Oracle 9.2.0.8 compatible with Sun Cluster 3.3 5/11 and 3.3 3/13?
Where can I check the compatibility matrix?
matthew_morris wrote:
This forum is about Oracle professional certifications (i.e. "Oracle Database 12c Administrator Certified Professional"), not about certifying product compatibility.
I concur with Matthew. The release notes for Sun Cluster and Oracle on Solaris might tell you. Oracle 9.2.0.8 is out of support on Solaris, and I recall needing a number of patches to get it to a fit state, and that is without considering Sun Cluster. Extended support for 9.2.0.8 ended about 4 years ago; this is not a combination I would currently be touching with a bargepole! You are best to search on MOS. -
SAP 7.0 on Sun Cluster 3.2 (Solaris 10 / SPARC)
Dear All,
I'm installing a two-node cluster (Sun Cluster 3.2 / Solaris 10 / SPARC) for an HA SAP 7.0 / Oracle 10g database.
The SAP and Oracle software was installed successfully, and I could successfully cluster the Oracle DB; it is tested and working fine.
For SAP I did the following configuration:
# clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=01 -p Ci_services_string=SCS -p Ci_startup_script=startsap_01 -p Ci_shutdown_script=stopsap_01 -p resource_dependencies=sap-hastp-rs,ora-db-res sap-ci-scs-res
# clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=00 -p Ci_services_string=ASCS -p Ci_startup_script=startsap_00 -p Ci_shutdown_script=stopsap_00 -p resource_dependencies=sap-hastp-rs,or-db-res sap-ci-Ascs-res
When trying to bring sap-ci-res-grp online with # clresourcegroup online -M sap-ci-res-grp
it executes the startsap scripts successfully, as follows:
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
stty: : No such device or address
stty: : No such device or address
Starting SAP-Collector Daemon
11:04:57 04.06.2008 LOG: Effective User Id is root
Starting SAP-Collector Daemon
11:04:57 04.06.2008 LOG: Effective User Id is root
* This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
* Usage: saposcol -l: Start OS Collector
* saposcol -k: Stop OS Collector
* saposcol -d: OS Collector Dialog Mode
* saposcol -s: OS Collector Status
* Starting collector (create new process)
* This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
* Usage: saposcol -l: Start OS Collector
* saposcol -k: Stop OS Collector
* saposcol -d: OS Collector Dialog Mode
* saposcol -s: OS Collector Status
* Starting collector (create new process)
saposcol on host eccprd01 started
Starting SAP Instance ASCS00
Startup-Log is written to /export/home/prdadm/startsap_ASCS00.log
saposcol on host eccprd01 started
Running /usr/sap/PRD/SYS/exe/run/startj2eedb
Trying to start PRD database ...
Log file: /export/home/prdadm/startdb.log
Instance Service on host eccprd01 started
Jun 4 11:05:01 eccprd01 SAPPRD_00[26054]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
/usr/sap/PRD/SYS/exe/run/startj2eedb completed successfully
Starting SAP Instance SCS01
Startup-Log is written to /export/home/prdadm/startsap_SCS01.log
Instance Service on host eccprd01 started
Jun 4 11:05:02 eccprd01 SAPPRD_01[26111]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
Instance on host eccprd01 started
Instance on host eccprd01 started
and then it repeats the following warnings in /var/adm/messages until it fails over to the other node:
Jun 4 12:26:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:28 eccprd01 last message repeated 1 time
Jun 4 12:26:28 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:31 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:31 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:34 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:34 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:37 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:37 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:40 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:40 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:43 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:43 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:46 eccprd01 last message repeated 1 time
Jun 4 12:26:46 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:49 eccprd01 last message repeated 1 time
Jun 4 12:26:49 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:52 eccprd01 last message repeated 1 time
Jun 4 12:26:52 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:55 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:55 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:58 eccprd01 last message repeated 1 time
Jun 4 12:26:58 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:01 eccprd01 last message repeated 1 time
Jun 4 12:27:01 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:04 eccprd01 last message repeated 1 time
Jun 4 12:27:04 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:07 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:07 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:10 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:10 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:13 eccprd01 last message repeated 1 time
Jun 4 12:27:13 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:16 eccprd01 last message repeated 1 time
Jun 4 12:27:16 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:19 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:19 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:22 eccprd01 last message repeated 1 time
Jun 4 12:27:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:25 eccprd01 last message repeated 1 time
Jun 4 12:27:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:28 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:28 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:31 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:31 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:34 eccprd01 last message repeated 1 time
Jun 4 12:27:34 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:37 eccprd01 last message repeated 1 time
Jun 4 12:27:37 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:40 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:40 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:43 eccprd01 last message repeated 1 time
Jun 4 12:27:43 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:46 eccprd01 last message repeated 1 time
Jun 4 12:27:46 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dis
Can anyone help me find whether there is an error in the configuration, or what the cause of this problem is? Thanks in advance.
ARSSES
Hi all.
I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
Scenario:
Central Instance (not in cluster): started on one node
Dialog Instance (not in cluster): started on the other node
When I create the resource for SUNW.sap_as like
clrs create -g sap-rg -t SUNW.sap_as .....etc etc
in /var/adm/messages I got lots of WAITING FOR DISPATCHER TO COME UP....
Then, after the timeout, it gives up.
Any clue? What is it trying to connect to or waiting for? I have noticed that it happens somewhere before the startup script....
TIA -
Upgrade from Solaris 8 SPARC with Sun cluster 3.1u3 to Solaris 10 SPARC
Dear All,
We are planning an upgrade of the OS from Solaris 8 SPARC to Solaris 10 SPARC on a two-node active-standby clustered system.
The current major software we have on the Solaris 8 system are:
1: Sun Cluster 3.1u3
2: Oracle 9i 9.2.0.8
3: Veritas File System Vxfs v4.0
4: Sun Solaris 8 2/04 SPARC
Any pointers on what sequence to follow and how the upgrade should be done?
Thanks in advance.
Regards,
Rayyes
I know it can be quite complicated and complex, but Sun provided us with detailed documentation; at least in our case (Solaris 9 to 10) it was very helpful.
You might get better help in the cluster forum http://forums.sun.com/forum.jspa?forumID=842
-- Nick -
SAP Netweaver 7.0 Web Dispatcher HA Setup with Sun Cluster 3.2
Hi,
How do I make the SAP Web Dispatcher highly available? It is not mentioned in the guide 'Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS'.
Since I do not want to install the central instance within the cluster, should I install two standalone web dispatchers on the two nodes and then make them HA? Or maybe just install it once on the shared storage with CFS?
And specifically, what kind of resource type should I use for it? SUNW.sapwebas?
Thanks in advance,
Stephen
Hi all.
I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
Scenario:
Central Instance (not in cluster): started on one node
Dialog Instance (not in cluster): started on the other node
When I create the resource for SUNW.sap_as like
clrs create -g sap-rg -t SUNW.sap_as .....etc etc
in /var/adm/messages I got lots of WAITING FOR DISPATCHER TO COME UP....
Then, after the timeout, it gives up.
Any clue? What is it trying to connect to or waiting for? I have noticed that it happens somewhere before the startup script....
TIA -
Failover Zones / Containers with Sun Cluster Geographic Edition and AVS
Hi everyone,
Is the following solution supported/certified by Oracle/Sun? I did find some docs saying it is but cannot find concrete technical information yet...
* Two sites with a 2-node cluster in each site
* 2x Failover containers/zones that are part of the two protection groups (1x group for SAP, other group for 3rd party application)
* Sun Cluster 3.2 and Geographic Edition 3.2 with Availability Suite for SYNC/ASYNC replication over TCP/IP between the two sites
The Zones and their application need to be able to failover between the two sites.
Thanks!
Wim Olivier
Fritz,
Obviously, my colleagues and I in the Geo Cluster group build and test Geo clusters all the time :-)
We have certainly built and tested Oracle (non-RAC) configurations on AVS. One issue you do have, unfortunately, is that of zones plus AVS (see my Blueprint for more details: http://wikis.sun.com/display/BluePrints/Using+Solaris+Cluster+and+Sun+Cluster+Geographic+Edition). Consequently, you can't build the configuration you described. The alternative is to sacrifice zones for now and wait for the fixes to RG affinities (no idea on the schedule for this feature), or find another way to do this - probably hand-crafted.
If you follow the OHAC pages (http://www.opensolaris.org/os/community/ha-clusters/) and look at the endorsed projects you'll see that there is a Script Based Plug-in on the way (for OHACGE) that I'm writing. So, if you are interested in playing with OHACGE source or the SCXGE binaries, you might see that appear at some point. Of course, these aren't supported solutions though.
Regards,
Tim
--- -
Deploy HA Zones with Sun Cluster
Hi
I have 2 physical Solaris 10 servers with a StorEdge array for the shared storage.
I have installed Sun Cluster 3.3 on both nodes and set up the quorum and the shared storage, using a ZFS file system for a mount point.
Next I installed a non-global zone on one node, with the zone path on the shared file system.
When I switch the shared file system over, the zone is not installed on the 2nd node.
So when I try to install the zone on the 2nd node
I get a "Rootpath is already mounted on this filesystem" error.
Does anyone know how to set up a Sun Cluster with HA Zones, please?
The option to forcibly attach a zone was added to zoneadm in a Solaris 10 update release. With that option, the procedure to configure and install a zone for HA Container use can be:
The assumption is that there is already an RG configured with an HASP resource managing the zpool for the zone root path:
a) Switch the RG online on node A
b) Configure (zonecfg) and install (zoneadm) the zone on node A on shared storage
c) Boot the zone and go through the interactive sysidcfg within "zlogin -C zonename"
d) Switch the RG hosting the HASP resource for the pool to node B
e) Configure (zonecfg) the zone on node B.
f) "Install" the zone by forcibly attaching it: zoneadm -z <zonename> attach -F
The user can then test if the zone boots on node B, halt it and proceed with the sczbt resource registration as described within http://download.oracle.com/docs/cd/E18728_01/html/821-2677/index.html.
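As a rough command-level sketch of the steps above (the zone name appzone and the zonepath /zonepool/appzone are made up; substitute your own names and the path on the shared zpool):
On node A:
# zonecfg -z appzone
zonecfg:appzone> create
zonecfg:appzone> set zonepath=/zonepool/appzone
zonecfg:appzone> commit
zonecfg:appzone> exit
# zoneadm -z appzone install
# zoneadm -z appzone boot
# zlogin -C appzone       (answer the interactive sysidcfg prompts, then disconnect)
# zoneadm -z appzone halt
Switch the RG with the HASP resource (and therefore the zpool) to node B, then on node B:
# zonecfg -z appzone      (repeat the same create / set zonepath / commit as on node A)
# zoneadm -z appzone attach -F
# zoneadm -z appzone boot
Once the zone boots cleanly on node B, halt it and register the sczbt resource as described in the linked documentation.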
Regards
Thorsten -
ODSEE 11.1.1.7 with sun cluster
Hi,
Does Oracle Directory Server 11.1.1.7 support Sun Cluster for active/passive availability? Please share any documents if you have them.
Thanks,
Kasi.
Hi Kasi,
Oracle Directory Server Enterprise Edition 11.1.1.7 doesn't support any OS-layer cluster, since its high-availability model is achieved at the application layer through Multi-Master Replication.
Please refer to the official product documentation available here:
Oracle® Fusion Middleware Directory Server Enterprise Edition
Oracle® Fusion Middleware Deployment Planning Guide for Oracle Directory Server Enterprise Edition 11g Release 1…
Thanks,
Marco -
Sun cluster patch for solaris 10 x86
I have Solaris 10 6/06 installed on an X4100 box with two-node clustering using Sun Cluster 3.1 8/05. I just want to know whether there are any recent patches available for the OS to prevent cluster-related bugs, and what they are. My kernel patch is 118855-19.
Any input is welcome; let me know.
Well, I would run the S10 Update Manager and get the latest patches that way.
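If you prefer the command line to the Update Manager GUI, a minimal sketch with smpatch (assuming the box is registered for updates) would be something like:
# smpatch analyze        (list the patches the update service considers applicable)
# smpatch update         (download and apply them)
# showrev -p | grep 118855    (afterwards, confirm the new revision of the kernel patch)
For a cluster you would normally patch one node at a time and check the Sun Cluster core patch requirements in the patch READMEs before applying.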
Tim
--- -
Recommendations for Multipathing software in Sun Cluster 3.2 + Solaris 10
Hi all, I'm in the process of building a 2-node cluster with the following specs:
2 x X4600
Solaris 10 x86
Sun Cluster 3.2
Shared storage provided by an EMC CX380 SAN
My question is this: what multipathing software should I use? The built-in Solaris 10 multipathing software or EMC's PowerPath?
Thanks in advance,
Stewart
Hi,
According to http://www.sun.com/software/cluster/osp/emc_clarion_interop.xml you can use both.
So in the end it all boils down to:
- cost: Solaris multipathing is free, as it is bundled
- support: Sun can offer better support for the Sun software
You can try browsing this forum to see what others have experienced with PowerPath. From a pure "use as much integrated software as possible" standpoint, I would go with the Solaris drivers.
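If you do go with the bundled Solaris I/O multipathing (MPxIO), a minimal sketch of enabling it on Solaris 10 x86 would be (run on both nodes; stmsboot will prompt for the reboot it needs):
# stmsboot -e            (enable MPxIO on the fibre channel controller ports)
# stmsboot -L            (after the reboot, list the old-to-new device name mappings)
# mpathadm list lu       (on recent Solaris 10 updates, show each LUN and its path count)
If Sun Cluster is already installed, keep in mind that the device names change when MPxIO is enabled, so the DID mappings have to be brought up to date as well (scdidadm); check the Sun Cluster and MPxIO documentation for the supported order of steps.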
Hartmut -
Sun Cluster 3.2/Solaris 10 Excessive ICMP traffic
Hi all,
I have inherited a two-node cluster with a 3510 SAN which I have upgraded to Cluster 3.2/Solaris 10. Apparently this was happening on Cluster 3.0/Solaris 8 as well.
The real interfaces on the two nodes seem to be sending excessive pings to the default gateway they are connected to. The configuration of the network adapters is the same: two NICs on each node are grouped for multihoming and two NICs are configured as private for cluster heartbeats.
The two NICs that are grouped together on each of the servers are the cards generating the traffic.
23:27:52.402377 192.168.200.216 > 192.168.200.1: icmp: echo request [ttl 1]
23:27:52.402392 192.168.200.1 > 192.168.200.216: icmp: echo reply
23:27:52.588793 192.168.200.217 > 192.168.200.1: icmp: echo request [ttl 1]
23:27:52.588806 192.168.200.1 > 192.168.200.217: icmp: echo reply
23:27:52.818690 192.168.200.215 > 192.168.200.1: icmp: echo request [ttl 1]
23:27:52.818714 192.168.200.1 > 192.168.200.215: icmp: echo reply
23:27:53.072442 192.168.200.214 > 192.168.200.1: icmp: echo request [ttl 1]
23:27:53.072479 192.168.200.1 > 192.168.200.214: icmp: echo reply
Here is the setup to one of the servers:
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 2
inet 192.168.200.214 netmask ffffff00 broadcast 192.168.200.255
groupname prod
ether 0:3:ba:43:f4:f4
ce0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.200.212 netmask ffffff00 broadcast 192.168.200.255
ce1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 5
inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
ether 0:3:ba:43:f4:f3
qfe0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 3
inet 192.168.200.216 netmask ffffff00 broadcast 192.168.200.255
groupname prod
ether 0:3:ba:34:95:4
qfe1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
ether 0:3:ba:34:95:5
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 6
inet 172.16.193.1 netmask ffffff00 broadcast 172.16.193.255
ether 0:0:0:0:0:1
Any suggestions on why the excessive traffic?
I would guess these are the IPMP probes (man in.mpathd).
You can start in.mpathd in debug mode to find out.
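For example (paths and values as on a stock Solaris 10 box; the debug flag is -d if memory serves, check in.mpathd(1M)):
# grep FAILURE_DETECTION_TIME /etc/default/mpathd
(the default of 10000 ms drives how often the probes are sent)
# pkill in.mpathd
# /usr/lib/inet/in.mpathd -d
(runs the daemon in the foreground with debug output, so you can match the echo requests you captured against its probe traffic; restart it normally afterwards)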
HTH,
jono -
Hi all !
I have a problem with the cluster; one server cannot see the HDDs from the StorEdge.
State:
- At the "ok" prompt, using the "probe-scsi-all" command: hap203 can detect all 14 HDDs (4 local, 5 from 3310_1 and 5 from 3310_2); hap103 detects only 13 HDDs (4 local, 5 from 3310_1 and only 4 from 3310_2).
- Using the "format" command on hap203, the server can detect 14 HDDs (0 to 13); but typing "format" on hap103 shows only 9 HDDs (0 to 8).
- Typing "devfsadm -C" on hap103 gives errors about the HDDs.
- Typing "scstat" on hap103: the resource group status is "pending online" on hap103 and "offline" on hap203.
- Typing "metastat -s dgsmp" on hap103 reports "needs maintenance".
Help me if you can.
Many thanks.
Long.
-----------------------------ok_log-------------------------
########## hap103 ##################
{3} ok probe-scsi-all
/pci@1f,700000/scsi@2,1
/pci@1f,700000/scsi@2
Target 0
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 1
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 2
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
Target 3
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
/pci@1d,700000/pci@2/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1d,700000/pci@2/scsi@4
/pci@1c,600000/pci@1/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1c,600000/pci@1/scsi@4
############ hap203 ###################################
{3} ok probe-scsi-all
/pci@1f,700000/scsi@2,1
/pci@1f,700000/scsi@2
Target 0
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 1
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 2
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
Target 3
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
/pci@1d,700000/pci@2/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1d,700000/pci@2/scsi@4
/pci@1c,600000/pci@1/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1c,600000/pci@1/scsi@4
{3} ok
------------------------hap103-------------------------
hap103>
hap103> format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@8,0
1. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@9,0
2. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@a,0
3. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@b,0
4. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@c,0
5. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@0,0
6. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@1,0
7. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@2,0
8. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@3,0
Specify disk (enter its number): ^D
hap103>
hap103>
hap103>
hap103> scstat
-- Cluster Nodes --
Node name Status
Cluster node: hap103 Online
Cluster node: hap203 Online
-- Cluster Transport Paths --
Endpoint Endpoint Status
Transport path: hap103:ce7 hap203:ce7 Path online
Transport path: hap103:ce3 hap203:ce3 Path online
-- Quorum Summary --
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node --
Node Name Present Possible Status
Node votes: hap103 1 1 Online
Node votes: hap203 1 1 Online
-- Quorum Votes by Device --
Device Name Present Possible Status
Device votes: /dev/did/rdsk/d1s2 1 1 Online
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: dgsmp hap103 hap203
-- Device Group Status --
Device Group Status
Device group status: dgsmp Online
-- Resource Groups and Resources --
Group Name Resources
Resources: rg-smp has-res SDP1 SMFswitch
-- Resource Groups --
Group Name Node Name State
Group: rg-smp hap103 Pending online
Group: rg-smp hap203 Offline
-- Resources --
Resource Name Node Name State Status Message
Resource: has-res hap103 Offline Unknown - Starting
Resource: has-res hap203 Offline Offline
Resource: SDP1 hap103 Offline Unknown - Starting
Resource: SDP1 hap203 Offline Offline
Resource: SMFswitch hap103 Offline Offline
Resource: SMFswitch hap203 Offline Offline
hap103>
hap103>
hap103> metastat -s dgsmp
dgsmp/d120: Mirror
Submirror 0: dgsmp/d121
State: Needs maintenance
Submirror 1: dgsmp/d122
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 716695680 blocks
dgsmp/d121: Submirror of dgsmp/d120
State: Needs maintenance
Invoke: after replacing "Maintenance" components:
metareplace dgsmp/d120 d5s0 <new device>
Size: 716695680 blocks
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Hot Spare
d1s0 0 No Maintenance
d2s0 0 No Maintenance
d3s0 0 No Maintenance
d4s0 0 No Maintenance
d5s0 0 No Last Erred
dgsmp/d122: Submirror of dgsmp/d120
State: Needs maintenance
Invoke: after replacing "Maintenance" components:
metareplace dgsmp/d120 d6s0 <new device>
Size: 716695680 blocks
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Hot Spare
d6s0 0 No Last Erred
d7s0 0 No Okay
d8s0 0 No Okay
d9s0 0 No Okay
d10s0 0 No Resyncing
hap103> May 6 14:55:58 hap103 login: ROOT LOGIN /dev/pts/1 FROM ralf1
hap103>
hap103>
hap103>
hap103>
hap103> scdidadm -l
1 hap103:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 hap103:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 hap103:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
4 hap103:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
5 hap103:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
6 hap103:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
7 hap103:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
8 hap103:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
9 hap103:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
10 hap103:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
11 hap103:/dev/rdsk/c2t0d0 /dev/did/rdsk/d11
12 hap103:/dev/rdsk/c3t0d0 /dev/did/rdsk/d12
13 hap103:/dev/rdsk/c3t1d0 /dev/did/rdsk/d13
14 hap103:/dev/rdsk/c3t2d0 /dev/did/rdsk/d14
15 hap103:/dev/rdsk/c3t3d0 /dev/did/rdsk/d15
hap103>
hap103>
hap103> more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr ufs 1 yes -
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d20 - - swap - no -
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no logging
#/dev/dsk/c3t0d0s3 /dev/rdsk/c3t0d0s3 /globaldevices ufs 2 yes logging
/dev/md/dsk/d60 /dev/md/rdsk/d60 /in ufs 2 yes logging
/dev/md/dsk/d40 /dev/md/rdsk/d40 /in/oracle ufs 2 yes logging
/dev/md/dsk/d50 /dev/md/rdsk/d50 /indelivery ufs 2 yes logging
swap - /tmp tmpfs - yes -
/dev/md/dsk/d30 /dev/md/rdsk/d30 /global/.devices/node@1 ufs 2 no global
/dev/md/dgsmp/dsk/d120 /dev/md/dgsmp/rdsk/d120 /in/smp ufs 2 yes logging,global
#RALF1:/in/RALF1 - /inbackup/RALF1 nfs - yes rw,bg,soft
vfstab: END
hap103> df -h
df: unknown option: h
Usage: df [-F FSType] [-abegklntVv] [-o FSType-specific_options] [directory | block_device | resource]
hap103>
hap103>
hap103>
hap103> df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d10 4339374 3429010 866971 80% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 22744256 136 22744120 1% /var/run
swap 22744144 24 22744120 1% /tmp
/dev/md/dsk/d50 1021735 2210 958221 1% /indelivery
/dev/md/dsk/d60 121571658 1907721 118448221 2% /in
/dev/md/dsk/d40 1529383 1043520 424688 72% /in/oracle
/dev/md/dsk/d33 194239 4901 169915 3% /global/.devices/node@2
/dev/md/dsk/d30 194239 4901 169915 3% /global/.devices/node@1
------------------log_hap203---------------------------------
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@8,0
1. c0t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@9,0
2. c0t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@a,0
3. c0t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@b,0
4. c0t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@c,0
5. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@8,0
6. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@9,0
7. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@a,0
8. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@b,0
9. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@c,0
10. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@0,0
11. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@1,0
12. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@2,0
13. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@3,0
Specify disk (enter its number): ^D
hap203>
hap203> scstat
-- Cluster Nodes --
Node name Status
Cluster node: hap103 Online
Cluster node: hap203 Online
-- Cluster Transport Paths --
Endpoint Endpoint Status
Transport path: hap103:ce7 hap203:ce7 Path online
Transport path: hap103:ce3 hap203:ce3 Path online
-- Quorum Summary --
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node --
Node Name Present Possible Status
Node votes: hap103 1 1 Online
Node votes: hap203 1 1 Online
-- Quorum Votes by Device --
Device Name Present Possible Status
Device votes: /dev/did/rdsk/d1s2 1 1 Online
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: dgsmp hap103 hap203
-- Device Group Status --
Device Group Status
Device group status: dgsmp Online
-- Resource Groups and Resources --
Group Name Resources
Resources: rg-smp has-res SDP1 SMFswitch
-- Resource Groups --
Group Name Node Name State
Group: rg-smp hap103 Pending online
Group: rg-smp hap203 Offline
-- Resources --
Resource Name Node Name State Status Message
Resource: has-res hap103 Offline Unknown - Starting
Resource: has-res hap203 Offline Offline
Resource: SDP1 hap103 Offline Unknown - Starting
Resource: SDP1 hap203 Offline Offline
Resource: SMFswitch hap103 Offline Offline
Resource: SMFswitch hap203 Offline Offline
hap203>
hap203>
hap203> devfsadm -C
hap203>
hap203> scdidadm -l
1 hap203:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 hap203:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 hap203:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
4 hap203:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
5 hap203:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
6 hap203:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
7 hap203:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
8 hap203:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
9 hap203:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
10 hap203:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
16 hap203:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16
17 hap203:/dev/rdsk/c3t0d0 /dev/did/rdsk/d17
18 hap203:/dev/rdsk/c3t1d0 /dev/did/rdsk/d18
19 hap203:/dev/rdsk/c3t2d0 /dev/did/rdsk/d19
20 hap203:/dev/rdsk/c3t3d0 /dev/did/rdsk/d20
hap203> May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 63 Error Block: 63
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1097 Error Block: 1097
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
First question is what HBA and driver combination are you using?
Next do you have MPxIO enabled or disabled?
Are you using SAN switches? If so, whose, at what firmware level, and in what configuration (i.e. single switch, cascade of multiple switches, etc.)?
What are the distances from nodes to storage (include any fabric switches and ISLs if there are multiple switches), and what media are you using as a transport (copper, fibre {single mode, multi-mode})?
What is the configuration of your storage ports, (fabric point to point, loop, etc.)? If loop what are the ALPA's for each connection?
The more you leave out of your question the harder it is to offer suggestions.
Feadshipman -
Dear Support
Is there any ready-made agent for the ERP Baan application?
Thank you.
Not that I know of. At least it is not on the list of agents that Sun has on its price list.
Regards
Hartmut -
SUN Cluster 3.2, Solaris 10, Corrupted IPMP group on one node.
Hello folks,
I recently made a network change on nodename2 to add some resilience to IPMP (adding a second interface but still using a single IP address).
After a reboot, I cannot keep this host from rebooting. For the one minute that it stays up, I do get the following result from scstat that seems to suggest a problem with the IPMP configuration. I rolled back my IPMP change, but it still doesn't seem to register the IPMP group in scstat.
nodename2|/#scstat
-- Cluster Nodes --
Node name Status
Cluster node: nodename1 Online
Cluster node: nodename2 Online
-- Cluster Transport Paths --
Endpoint Endpoint Status
Transport path: nodename1:bge3 nodename2:bge3 Path online
-- Quorum Summary from latest node reconfiguration --
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node (current status) --
Node Name Present Possible Status
Node votes: nodename1 1 1 Online
Node votes: nodename2 1 1 Online
-- Quorum Votes by Device (current status) --
Device Name Present Possible Status
Device votes: /dev/did/rdsk/d3s2 0 1 Offline
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: jms-ds nodename1 nodename2
-- Device Group Status --
Device Group Status
Device group status: jms-ds Online
-- Multi-owner Device Groups --
Device Group Online Status
-- IPMP Groups --
Node Name Group Status Adapter Status
scstat: unexpected error.
I did manage to run scstat on nodename1 while nodename2 was still up between reboots; here is that result (it does not show any IPMP group(s) on nodename2):
nodename1|/#scstat
-- Cluster Nodes --
Node name Status
Cluster node: nodename1 Online
Cluster node: nodename2 Online
-- Cluster Transport Paths --
Endpoint Endpoint Status
Transport path: nodename1:bge3 nodename2:bge3 faulted
-- Quorum Summary from latest node reconfiguration --
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node (current status) --
Node Name Present Possible Status
Node votes: nodename1 1 1 Online
Node votes: nodename2 1 1 Online
-- Quorum Votes by Device (current status) --
Device Name Present Possible Status
Device votes: /dev/did/rdsk/d3s2 1 1 Online
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: jms-ds nodename1 -
-- Device Group Status --
Device Group Status
Device group status: jms-ds Degraded
-- Multi-owner Device Groups --
Device Group Online Status
-- IPMP Groups --
Node Name Group Status Adapter Status
IPMP Group: nodename1 sc_ipmp1 Online bge2 Online
IPMP Group: nodename1 sc_ipmp0 Online bge0 Online
-- IPMP Groups in Zones --
Zone Name Group Status Adapter Status
I believe that I should be able to delete the IPMP group for the second node from the cluster and re-add it, but I'm not sure how to go about doing this. I welcome your comments or thoughts on what I can try before rebuilding this node from scratch.
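For reference, here is the sort of probe-based, single-data-address, two-interface IPMP setup I was aiming for on Solaris 10 (the interface and group names and the -test host names below are illustrative; the test addresses must resolve to spare IPs reserved for probing):
/etc/hostname.bge0:
nodename2 netmask + broadcast + group sc_ipmp0 up addif nodename2-bge0-test deprecated -failover netmask + broadcast + up
/etc/hostname.bge2:
nodename2-bge2-test netmask + broadcast + deprecated group sc_ipmp0 -failover standby up
If an interface has no test address, in.mpathd disables probe-based failure detection on it and falls back to link-state detection only.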
-AG
I was able to restart both sides of the cluster. Now both sides are online, but neither side can access the shared disk.
Lots of warnings. I will keep poking....
Rebooting with command: boot
Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
SunOS Release 5.10 Version Generic_141444-09 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Hostname: nodename2
Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface ce3; disabling probe-based failure detection on it
Jul 21 10:00:16 in.mpathd[221]: No test address configured on interface bge0; disabling probe-based failure detection on it
/usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
/usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
Booting as part of a cluster
NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
NOTICE: clcomm: Adapter bge3 constructed
NOTICE: CMM: Node nodename2: attempting to join cluster.
NOTICE: CMM: Node nodename1 (nodeid: 1, incarnation #: 1279727883) has become reachable.
NOTICE: clcomm: Path nodename2:bge3 - nodename1:bge3 online
WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
NOTICE: CMM: Cluster has reached quorum.
NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
NOTICE: CMM: Cluster members: nodename1 nodename2.
NOTICE: CMM: node reconfiguration #3 completed.
NOTICE: CMM: Node nodename2: joined cluster.
NOTICE: CCR: Waiting for repository synchronization to finish.
WARNING: CCR: Invalid CCR table : dcs_service_9 cluster global.
WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
==> WARNING: DCS: Error looking up services table
==> WARNING: DCS: Error initializing service 9 from file
/usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
/usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
/dev/md/rdsk/d22 is clean
Reading ZFS config: done.
NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
nodename2 console login: obtaining access to all attached disks
starting NetWorker daemons:
Rebooting with command: boot
Boot device: /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@0,0:a File and args:
SunOS Release 5.10 Version Generic_141444-09 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hardware watchdog enabled
Hostname: nodename1
/usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
/usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
Booting as part of a cluster
NOTICE: CMM: Node nodename1 (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node nodename2 (nodeid = 2) with votecount = 1 added.
WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
NOTICE: clcomm: Adapter bge3 constructed
NOTICE: CMM: Node nodename1: attempting to join cluster.
NOTICE: bge3: link up 1000Mbps Full-Duplex
NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 errors during initiation
WARNING: Path nodename1:bge3 - nodename2:bge3 initiation encountered errors, errno = 62. Remote node may be down or unreachable through this path.
WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
NOTICE: CMM: Cluster doesn't have operational quorum yet; waiting for quorum.
NOTICE: bge3: link down
NOTICE: bge3: link up 1000Mbps Full-Duplex
NOTICE: CMM: Node nodename2 (nodeid: 2, incarnation #: 1279728026) has become reachable.
NOTICE: clcomm: Path nodename1:bge3 - nodename2:bge3 online
WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
NOTICE: CMM: Cluster has reached quorum.
NOTICE: CMM: Node nodename1 (nodeid = 1) is up; new incarnation number = 1279727883.
NOTICE: CMM: Node nodename2 (nodeid = 2) is up; new incarnation number = 1279728026.
NOTICE: CMM: Cluster members: nodename1 nodename2.
NOTICE: CMM: node reconfiguration #3 completed.
NOTICE: CMM: Node nodename1: joined cluster.
WARNING: CMM: Open failed for quorum device /dev/did/rdsk/d3s2 with error 2.
ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
/usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],0:c,raw" - No such file or directory.
Warning: Path node loaded - "../../devices/iscsi/[email protected],0:c,raw".
/usr/cluster/bin/scdidadm: Could not stat "../../devices/iscsi/[email protected],1:c,raw" - No such file or directory.
Warning: Path node loaded - "../../devices/iscsi/[email protected],1:c,raw".
/dev/md/rdsk/d26 is clean
Reading ZFS config: done.
NOTICE: iscsi session(6) iqn.1994-12.com.promise.iscsiarray2 online
nodename1 console login: obtaining access to all attached disks
starting NetWorker daemons:
nsrexecd
mount: /dev/md/jms-ds/dsk/d100 is already mounted or /opt/esbshares is busy -
Cannot import a disk group after Sun Cluster 3.1 installation
We installed Sun Cluster 3.1u3 on nodes with Veritas VxVM running and disk groups in use. After cluster configuration and reboot, we can no longer import our disk groups. VxVM displays the message: Disk group dg1: import failed: No valid disk found containing disk group.
Did anyone run into the same problem?
The dump of the private region for every single disk in the VM returns the following error:
# /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/did/rdsk/d22s2
VxVM vxprivutil ERROR V-5-1-1735 scan operation failed:
Format error in disk private region
Any help or suggestion would be greatly appreciated
Thx
Max
If I understand correctly, you had VxVM configured before you installed Sun Cluster - correct? And when you installed Sun Cluster you could no longer import your disk groups.
The first thing you need to know is that you need to register the disk groups with Sun Cluster - this happens automatically with Solaris Volume Manager but is a manual process with VxVM. Note that you will also have to update the configuration after any changes to the disk group, e.g. permission changes, volume creation, etc.
You need to use the scsetup menu to achieve this, though it can be done via the command line using an scconf command.
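For illustration, the command-line version is roughly as follows (node names below are placeholders; dg1 is the disk group from your error message):
# scconf -a -D type=vxvm,name=dg1,nodelist=node1:node2
(registers the VxVM disk group dg1 as a Sun Cluster device group on both nodes)
# scconf -c -D name=dg1,sync
(resynchronizes the device group after later volume or permission changes)
# scstat -D
(shows the device group status)
Double-check the exact option string against the scconf(1M) man page for your 3.1u3 release.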
Having said that, I'm still confused by the error. See if the above solves the problem first.
Regards,
Tim
---