Deploy HA Zones with Sun Cluster
Hi,
I have two physical Solaris 10 servers with a StorEdge array for the shared storage.
I have installed Sun Cluster 3.3 on both nodes and sorted out the quorum and shared drive, using a ZFS file system for the mount point.
Next I installed a non-global zone on one node, with the zone path on the shared file system.
When I switch the shared file system over, the zone is not installed on the second node.
So when I try to install the zone on the second node,
I get "Rootpath is already mounted on this filesystem".
Does anyone know how to set up a Sun Cluster with HA zones, please?
The option to forcibly attach a zone was added to zoneadm in a Solaris 10 update release. With that option, the procedure to configure and install a zone for HA container use can be:
The assumption is that there is already an RG configured with an HASP resource managing the zpool for the zone root path:
a) Switch the RG online on node A
b) Configure (zonecfg) and install (zoneadm) the zone on node A on shared storage
c) Boot the zone and go through the interactive sysidcfg within "zlogin -C zonename"
d) Switch the RG hosting the HASP resource for the pool to node B
e) Configure (zonecfg) the zone on node B
f) "Install" the zone by forcibly attaching it: zoneadm -z <zonename> attach -F
The user can then test whether the zone boots on node B, halt it, and proceed with the sczbt resource registration as described in http://download.oracle.com/docs/cd/E18728_01/html/821-2677/index.html.
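As a rough command-line sketch of steps a) through f): the RG, zpool, and zone names below are placeholders I made up, and exact syntax varies across Sun Cluster 3.2/3.3 releases, so treat this as an outline rather than a verified recipe.

```shell
# a) On node A: bring the RG (with its HASP resource for the zpool) online
clrg switch -n nodeA zone-rg

# b) Configure and install the zone with its zonepath on the shared zpool
zonecfg -z myzone 'create; set zonepath=/zonepool/myzone; set autoboot=false'
zoneadm -z myzone install

# c) Boot it and complete sysidcfg interactively
zoneadm -z myzone boot
zlogin -C myzone

# d) Move the storage to node B
clrg switch -n nodeB zone-rg

# e) Repeat the same zonecfg on node B, then
# f) "install" by forcibly attaching instead of installing from scratch
zoneadm -z myzone attach -F
```

Note that autoboot stays false because the sczbt resource, not the zone itself, is meant to control when the zone starts.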
Regards
Thorsten
Similar Messages
-
Is oracle 9.2.0.8 compatible with Sun Cluster 3.3 5/11 and 3.3 3/13?
Where can I check the compatibility matrix?

matthew_morris wrote:
This forum is about Oracle professional certifications (i.e. "Oracle Database 12c Administrator Certified Professional"), not about certifying product compatibility.
I concur with Matthew. The release notes for Sun Cluster and Oracle on Solaris might tell you. Oracle 9.2.0.8 is out of support on Solaris, and I recall needing a number of patches to get it into a fit state, and that is without considering Sun Cluster. Extended support for 9.2.0.8 ended about four years ago; this is not a combination I would currently be touching with a bargepole! You are best to search on MOS. -
Failover Zones / Containers with Sun Cluster Geographic Edition and AVS
Hi everyone,
Is the following solution supported/certified by Oracle/Sun? I did find some docs saying it is but cannot find concrete technical information yet...
* Two sites with a 2-node cluster in each site
* 2x Failover containers/zones that are part of the two protection groups (1x group for SAP, other group for 3rd party application)
* Sun Cluster 3.2 and Geographic Edition 3.2 with Availability Suite for SYNC/ASYNC replication over TCP/IP between the two sites
The Zones and their application need to be able to failover between the two sites.
Thanks!
Wim Olivier

Fritz,
Obviously, my colleagues and I in the Geo Cluster group build and test Geo clusters all the time :-)
We have certainly built and tested Oracle (non-RAC) configurations on AVS. One issue you do have, unfortunately, is that of zones plus AVS (see my Blueprint for more details: http://wikis.sun.com/display/BluePrints/Using+Solaris+Cluster+and+Sun+Cluster+Geographic+Edition). Consequently, you can't build the configuration you described. The alternative is to sacrifice zones for now and wait for the fixes to RG affinities (no idea on the schedule for this feature), or find another way to do this, probably hand-crafted.
If you follow the OHAC pages (http://www.opensolaris.org/os/community/ha-clusters/) and look at the endorsed projects, you'll see that there is a Script-Based Plug-in on the way (for OHACGE) that I'm writing. So, if you are interested in playing with the OHACGE source or the SCXGE binaries, you might see that appear at some point. Of course, these aren't supported solutions.
Regards,
Tim
--- -
Upgrade from Solaris 8 SPARC with Sun cluster 3.1u3 to Solaris 10 SPARC
Dear All,
We are planning an upgrade of the OS from Solaris 8 SPARC to Solaris 10 SPARC on a two-node active-standby clustered system.
The current major software we have on the Solaris 8 system are:
1: Sun Cluster 3.1u3
2: Oracle 9i 9.2.0.8
3: Veritas File System Vxfs v4.0
4: Sun Solaris 8 2/04 SPARC
Any pointers as to what sequence and how the upgrade should be done?
Thanks in advance.
Regards,
Ray

Yes, I know it can be quite complicated and complex, but Sun provided us with detailed documentation; at least in our case (Solaris 9 to 10) it was very helpful.
You might get better help in the cluster forum http://forums.sun.com/forum.jspa?forumID=842
-- Nick -
Bizarre disk reservation problem with Sun Cluster 3.2 - Solaris 10 X4600
We have a 4-node X4600 Sun Cluster with shared AMS500 storage. There are over 30 LUNs presented to the cluster.
When either of the two higher nodes (i.e. node id 2 and node id 3) is booted, its keys are not added to 4 out of the 30 LUNs. These 4 LUNs show up as drive type unknown in format. I've noticed that the only thing common to these LUNs is that their size is bigger than 1 TB.
To resolve this I simply scrub the keys and run sgdevs; then they show up as normal in format and all nodes' keys are present on the LUNs.
Has anybody come across this behaviour?
Commands used to resolve problem
1. check keys #/usr/cluster/lib/sc/scsi -c inkeys -d devicename
2. scrub keys #/usr/cluster/lib/sc/scsi -c scrub -d devicename
3. #sgdevs
4. check keys #/usr/cluster/lib/sc/scsi -c inkeys -d devicename
All nodes' keys are now present on the LUN.

Hi,
According to http://www.sun.com/software/cluster/osp/emc_clarion_interop.xml you can use both.
So in the end it all boils down to:
- cost: Solaris multipathing is free, as it is bundled
- support: Sun can offer better support for the Sun software
You can try browsing this forum to see what others have experienced with PowerPath. From a pure "use as much integrated software as possible" standpoint, I would go with the Solaris drivers.
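As a side note, the bundled Solaris multipathing (MPxIO) is typically enabled with the stmsboot tool; this is only a sketch, since the exact behaviour depends on the Solaris release and HBA driver in use.

```shell
stmsboot -e    # enable MPxIO on supported fibre-channel ports (asks to reboot)
stmsboot -L    # after the reboot, list non-STMS to STMS device name mappings
```

The -L listing is useful for updating /etc/vfstab and cluster device references to the new multipathed names.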
Hartmut -
SAP Netweaver 7.0 Web Dispatcher HA Setup with Sun Cluster 3.2
Hi,
How do I make the SAP Web Dispatcher highly available? It is not mentioned in the 'Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS'.
Since I do not want to install the central instance within the cluster, should I install two standalone web dispatchers on the two nodes and then make them HA? Or maybe just install it once on the shared storage with CFS?
And specifically, what kind of resource type should I use for it? SUNW.sapwebas?
Thanks in advance,
Stephen

Hi all.
I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
Scenario:
Central instance (not in cluster): started on one node
Dialog instance (not in cluster): started on the other node
When I create the resource for SUNW.sap_as like
clrs create -g sap-rg -t SUNW.sap_as ... etc.
in /var/adm/messages I get lots of WAITING FOR DISPATCHER TO COME UP....
Then after the timeout it gives up.
Any clue? What does it try to connect to, or wait for? I have noticed that it's something before the startup script...
TIA -
ODSEE 11.1.1.7 with sun cluster
Hi,
Does Oracle Directory Server 11.1.1.7 support Sun Cluster for active/passive availability? Please share any documents you have.
Thanks,
Kasi.

Hi Kasi,
Oracle Directory Server Enterprise Edition 11.1.1.7 doesn't support any OS-layer cluster, since its high-availability model is achieved at the application layer through Multi-Master Replication.
Please refer to the official product documentation available here:
Oracle® Fusion Middleware Directory Server Enterprise Edition
Oracle® Fusion Middleware Deployment Planning Guide for Oracle Directory Server Enterprise Edition 11g Release 1…
Thanks,
Marco -
Hi all!
I have a problem with my cluster: one server cannot see the HDDs from the StorEdge.
State:
- In "ok" mode, using the "probe-scsi-all" command: hap203 can detect all 14 HDDs (4 local HDDs, 5 HDDs from 3310_1 and 5 HDDs from 3310_2); hap103 detects only 13 HDDs (4 local, 5 from 3310_1 and only 4 from 3310_2).
- Using the "format" command on hap203, this server can detect 14 HDDs (0 to 13); but typing "format" on hap103, I only see 9 HDDs (0 to 8).
- Typing "devfsadm -C" on hap103 gives error notices about the HDDs.
- Typing "scstat" on hap103: Resource Group hap103's status is "pending online" and hap203's status is "offline".
- Typing "metastat -s dgsmp" on hap103: notice "needs maintenance".
Help me if you can.
Many thanks.
Long.
-----------------------------ok_log-------------------------
########## hap103 ##################
{3} ok probe-scsi-all
/pci@1f,700000/scsi@2,1
/pci@1f,700000/scsi@2
Target 0
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 1
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 2
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
Target 3
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
/pci@1d,700000/pci@2/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1d,700000/pci@2/scsi@4
/pci@1c,600000/pci@1/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1c,600000/pci@1/scsi@4
############ hap203 ###################################
{3} ok probe-scsi-all
/pci@1f,700000/scsi@2,1
/pci@1f,700000/scsi@2
Target 0
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 1
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 2
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
Target 3
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
/pci@1d,700000/pci@2/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1d,700000/pci@2/scsi@4
/pci@1c,600000/pci@1/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1c,600000/pci@1/scsi@4
{3} ok
------------------------hap103-------------------------
hap103>
hap103> format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@8,0
1. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@9,0
2. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@a,0
3. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@b,0
4. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@c,0
5. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@0,0
6. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@1,0
7. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@2,0
8. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@3,0
Specify disk (enter its number): ^D
hap103>
hap103>
hap103>
hap103> scstat
-- Cluster Nodes --
Node name Status
Cluster node: hap103 Online
Cluster node: hap203 Online
-- Cluster Transport Paths --
Endpoint Endpoint Status
Transport path: hap103:ce7 hap203:ce7 Path online
Transport path: hap103:ce3 hap203:ce3 Path online
-- Quorum Summary --
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node --
Node Name Present Possible Status
Node votes: hap103 1 1 Online
Node votes: hap203 1 1 Online
-- Quorum Votes by Device --
Device Name Present Possible Status
Device votes: /dev/did/rdsk/d1s2 1 1 Online
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: dgsmp hap103 hap203
-- Device Group Status --
Device Group Status
Device group status: dgsmp Online
-- Resource Groups and Resources --
Group Name Resources
Resources: rg-smp has-res SDP1 SMFswitch
-- Resource Groups --
Group Name Node Name State
Group: rg-smp hap103 Pending online
Group: rg-smp hap203 Offline
-- Resources --
Resource Name Node Name State Status Message
Resource: has-res hap103 Offline Unknown - Starting
Resource: has-res hap203 Offline Offline
Resource: SDP1 hap103 Offline Unknown - Starting
Resource: SDP1 hap203 Offline Offline
Resource: SMFswitch hap103 Offline Offline
Resource: SMFswitch hap203 Offline Offline
hap103>
hap103>
hap103> metastat -s dgsmp
dgsmp/d120: Mirror
Submirror 0: dgsmp/d121
State: Needs maintenance
Submirror 1: dgsmp/d122
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 716695680 blocks
dgsmp/d121: Submirror of dgsmp/d120
State: Needs maintenance
Invoke: after replacing "Maintenance" components:
metareplace dgsmp/d120 d5s0 <new device>
Size: 716695680 blocks
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Hot Spare
d1s0 0 No Maintenance
d2s0 0 No Maintenance
d3s0 0 No Maintenance
d4s0 0 No Maintenance
d5s0 0 No Last Erred
dgsmp/d122: Submirror of dgsmp/d120
State: Needs maintenance
Invoke: after replacing "Maintenance" components:
metareplace dgsmp/d120 d6s0 <new device>
Size: 716695680 blocks
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Hot Spare
d6s0 0 No Last Erred
d7s0 0 No Okay
d8s0 0 No Okay
d9s0 0 No Okay
d10s0 0 No Resyncing
hap103> May 6 14:55:58 hap103 login: ROOT LOGIN /dev/pts/1 FROM ralf1
hap103>
hap103>
hap103>
hap103>
hap103> scdidadm -l
1 hap103:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 hap103:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 hap103:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
4 hap103:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
5 hap103:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
6 hap103:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
7 hap103:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
8 hap103:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
9 hap103:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
10 hap103:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
11 hap103:/dev/rdsk/c2t0d0 /dev/did/rdsk/d11
12 hap103:/dev/rdsk/c3t0d0 /dev/did/rdsk/d12
13 hap103:/dev/rdsk/c3t1d0 /dev/did/rdsk/d13
14 hap103:/dev/rdsk/c3t2d0 /dev/did/rdsk/d14
15 hap103:/dev/rdsk/c3t3d0 /dev/did/rdsk/d15
hap103>
hap103>
hap103> more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr ufs 1 yes -
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d20 - - swap - no -
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no logging
#/dev/dsk/c3t0d0s3 /dev/rdsk/c3t0d0s3 /globaldevices ufs 2 yes logging
/dev/md/dsk/d60 /dev/md/rdsk/d60 /in ufs 2 yes logging
/dev/md/dsk/d40 /dev/md/rdsk/d40 /in/oracle ufs 2 yes logging
/dev/md/dsk/d50 /dev/md/rdsk/d50 /indelivery ufs 2 yes logging
swap - /tmp tmpfs - yes -
/dev/md/dsk/d30 /dev/md/rdsk/d30 /global/.devices/node@1 ufs 2 no global
/dev/md/dgsmp/dsk/d120 /dev/md/dgsmp/rdsk/d120 /in/smp ufs 2 yes logging,global
#RALF1:/in/RALF1 - /inbackup/RALF1 nfs - yes rw,bg,soft
vfstab: END
hap103> df -h
df: unknown option: h
Usage: df [-F FSType] [-abegklntVv] [-o FSType-specific_options] [directory | block_device | resource]
hap103>
hap103>
hap103>
hap103> df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d10 4339374 3429010 866971 80% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 22744256 136 22744120 1% /var/run
swap 22744144 24 22744120 1% /tmp
/dev/md/dsk/d50 1021735 2210 958221 1% /indelivery
/dev/md/dsk/d60 121571658 1907721 118448221 2% /in
/dev/md/dsk/d40 1529383 1043520 424688 72% /in/oracle
/dev/md/dsk/d33 194239 4901 169915 3% /global/.devices/node@2
/dev/md/dsk/d30 194239 4901 169915 3% /global/.devices/node@1
------------------log_hap203---------------------------------
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@8,0
1. c0t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@9,0
2. c0t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@a,0
3. c0t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@b,0
4. c0t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@c,0
5. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@8,0
6. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@9,0
7. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@a,0
8. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@b,0
9. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@c,0
10. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@0,0
11. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@1,0
12. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@2,0
13. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@3,0
Specify disk (enter its number): ^D
hap203>
hap203> scstat
-- Cluster Nodes --
Node name Status
Cluster node: hap103 Online
Cluster node: hap203 Online
-- Cluster Transport Paths --
Endpoint Endpoint Status
Transport path: hap103:ce7 hap203:ce7 Path online
Transport path: hap103:ce3 hap203:ce3 Path online
-- Quorum Summary --
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node --
Node Name Present Possible Status
Node votes: hap103 1 1 Online
Node votes: hap203 1 1 Online
-- Quorum Votes by Device --
Device Name Present Possible Status
Device votes: /dev/did/rdsk/d1s2 1 1 Online
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: dgsmp hap103 hap203
-- Device Group Status --
Device Group Status
Device group status: dgsmp Online
-- Resource Groups and Resources --
Group Name Resources
Resources: rg-smp has-res SDP1 SMFswitch
-- Resource Groups --
Group Name Node Name State
Group: rg-smp hap103 Pending online
Group: rg-smp hap203 Offline
-- Resources --
Resource Name Node Name State Status Message
Resource: has-res hap103 Offline Unknown - Starting
Resource: has-res hap203 Offline Offline
Resource: SDP1 hap103 Offline Unknown - Starting
Resource: SDP1 hap203 Offline Offline
Resource: SMFswitch hap103 Offline Offline
Resource: SMFswitch hap203 Offline Offline
hap203>
hap203>
hap203> devfsadm -C
hap203>
hap203> scdidadm -l
1 hap203:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 hap203:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 hap203:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
4 hap203:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
5 hap203:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
6 hap203:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
7 hap203:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
8 hap203:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
9 hap203:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
10 hap203:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
16 hap203:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16
17 hap203:/dev/rdsk/c3t0d0 /dev/did/rdsk/d17
18 hap203:/dev/rdsk/c3t1d0 /dev/did/rdsk/d18
19 hap203:/dev/rdsk/c3t2d0 /dev/did/rdsk/d19
20 hap203:/dev/rdsk/c3t3d0 /dev/did/rdsk/d20
hap203> May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 63 Error Block: 63
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1097 Error Block: 1097
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0

First question is: what HBA and driver combination are you using?
Next, do you have MPxIO enabled or disabled?
Are you using SAN switches? If so, whose, at what F/W level, and in what configuration (i.e. single switch, cascade of multiple switches, etc.)?
What are the distances from nodes to storage (include any fabric switches and ISLs if there are multiple switches), and what media are you using as a transport (copper; fibre, single-mode or multi-mode)?
What is the configuration of your storage ports (fabric point-to-point, loop, etc.)? If loop, what are the ALPAs for each connection?
The more you leave out of your question, the harder it is to offer suggestions.
Feadshipman -
Update only global zone with patch cluster?
Is there a way to apply a Recommended and Security patch cluster to a global zone WITHOUT applying it to any non-global zones?
Much akin to patchadd -G?

Sound familiar?
[http://opensolaris.org/jive/thread.jspa?threadID=105001&tstart=0]
This guy killed a process as a workaround:
[http://alittlestupid.com/2009/07/04/solaris-zone-stuck-in-shutting_down-state/]
We patched some SPARC systems recently with no issues, though that's little consolation to you x86 admins. -
Dear Support
Is there any ready agent for the ERP Baan application?
Thank you

Not that I know of. At least it is not on the list of agents that Sun has on its price list.
Regards
Hartmut -
I need a manual or some advice for introducing a third node into a RAC with Sun Cluster. I don't know whether quorum votes readjust automatically or I have to add new quorum votes manually, whether I have to add a third mediator in SVM, etc.
Many thanks, and sorry for my English.

After you have added your nodes to the cluster, you will need to expand the RG's node list to include the new nodes if you need the RG to run on them. This is not automatic. Something like:
# clrg set -n <nodelist> <rg_name>
Is what you need.
I'm not sure I understand what you said about the quorum count. Only nodes and quorum devices (QD) or quorum servers (QS) get a vote; cabinets do not. So each node gets one vote, and a QD/QS gets a vote count equal to the number of nodes it connects to minus one. Thus with a two-node cluster you have 3 votes with one QD. With a four-node cluster with one fully connected QD/QS, you have 7 votes (after re-adding it).
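As a quick sanity check, the vote arithmetic above can be sketched in a few lines of shell (the node counts are just examples, not from this thread):

```shell
# A quorum device/server (QD/QS) gets (connected nodes - 1) votes;
# each node gets exactly 1 vote.
votes() {
    nodes=$1
    qd=$(( nodes - 1 ))       # QD/QS vote count
    echo $(( nodes + qd ))    # total cluster votes
}
votes 2   # 2-node cluster + 1 QD  -> prints 3
votes 4   # 4-node cluster + 1 QD  -> prints 7
```

This matches the two figures quoted above: 3 votes for a two-node cluster and 7 for a four-node one.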
Hope that helps,
Tim
P.S. <shameless plug> I can recommend a good book on the product: "Oracle Solaris Cluster Essentials" ;-) -
Sun Cluster 3.1 with SAN 6320 - Any Known Issues?
Hello,
We are moving to new Sun hardware with following configurations.
Solaris 8,Sun Cluster 3.1, Oracle 8.0.6.3 on two V1280 connected to Sun Stordege SAN 6320. SAN is also connected to 5 other machines including one windows 2000.
Following were the limitations which we came across during the testing phase.
1. The maximum number of LUNs you can have on a 6320 co-existing with Sun Cluster is 16. (You cannot configure more than 16 LUNs on a 6320!)
2. The maximum number of cluster nodes you can have with a 6320 is four.
Refer:
http://docs-pdf.sun.com/816-3381/816-3381.pdf
Bug ID: 4840853
Is anybody else there, already moved/moving to any such configuration and wants to give some tips and suggestions. Please let me know.
Thanks
Sair
An update on the same:
we are having issues with SAN 6320.
The SAN hangs when we use 7 nodes with Sun Cluster 3.1 simultaneously accessing the volumes. No volume is being accessed from more than a single node.
will update later... -
Availability of Sun Cluster 3.2
Has anyone some news about the release date and the new features of Sun Cluster 3.2. It was once announced by end of 2006.
Fritz
Hi Tim
I more or less expected this answer from you ;-)
We are planning to use Sun Cluster to switch Zones/Containers as GDS between nodes. We currently have some installations with Sun Cluster 3.1, but we are now in the process of evaluating a framework to deploy Solaris Zones at large scale. This would also include containers in a clustered environment, where Sun Cluster 3.2 seems to have some interesting new features.
Unfortunately I did not have the resources to participate in the beta program.
Regards
Fritz -
IDS 5.0 and Sun-Cluster
I want to make IDS 5.0 highly available using Sun Cluster. I have a few questions about it.
1. Can I install IDS on local disk and make it highly available? The Sun Cluster doc says it should be on shared disk.
2. I have an already installed IDS; what steps should I follow to make it highly available using Sun Cluster?
I suppose that the answer is that it is not a simple task and it depends on the kind of cluster you want to deploy.
I suggest that you carefully read the documentation of Sun Cluster and specifically the Directory Server specific parts.
The way to do it is different with Sun Cluster 2 and Sun Cluster 3.0....
Or you can request help from Sun Professional Services...
Regards,
Ludovic. -
Sun Cluster 3.0 MQ Series 5.2 configuration
Hi All,
we have to review the MQ Series installation/configuration on two Solaris 8 machines clustered with Sun Cluster 3.0. The present configuration has a global filesystem /var/mqm with one queue manager.
According to the Sun Cluster 3.1 data service for WebSphere MQ (5.3 ndr), there are two possible filesystem layouts:
FFS: with local qmgrs (data and log) at each cluster node
GFS: with global filesystem qmgrs (data and log).
Are there any special consideration about shmem and ipc directories in <qmgr>/data?
Does this scenario also apply to 3.0/5.2?
Does the FFS configuration allow persistent-message failover at takeover?
Are there any dataservices/docs available for MQ on 3.0?
Thanks in advance.
Deploying multiple qmgrs requires /var/mqm to be mounted as a GFS. The reason for this is to overcome IPC key clashes. The recommended file system layout is as follows (-> represents a symlink, assuming two qmgrs, qmgr1 & qmgr2):
Using FFS (recommended - /local/mqm etc.. are mounted as FFS with /etc/vfstab)
/var/mqm -> /global/mqm
/global/mqm/qmgrs/qmgr1 -> /local/mqm/qmgr/qmgr1
/global/mqm/qmgrs/qmgr2 -> /local/mqm/qmgr/qmgr2
/global/mqm/log/qmgr1 -> /local/mqm/log/qmgr1
/global/mqm/log/qmgr2 -> /local/mqm/log/qmgr2
Using GFS (mainly early SC3.0 as HAStoragePlus wasn't available until later on)
All mounted as GFS with /etc/vfstab
/var/mqm -> /global/mqm
/global/mqm/qmgrs/qmgr1
/global/mqm/qmgrs/qmgr2
/global/mqm/log/qmgr1
/global/mqm/log/qmgr2
Finally, FFS (Failover File System) is recommended because, at present, whenever GFS is used for the qmgr & log files, MQ Series is unable to determine that the qmgr may have been started on another node. For example, assuming GFS with MQ Series started on Node A, it is possible (but don't do it) to also start MQ Series on Node B.
The Sun Cluster Agent provides some protection against this. Instead, it is recommended to deploy FFS as above.
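As a sketch, the FFS symlink layout above can be reproduced like this, using a scratch directory in place of the real /global and /local mounts (all paths here are illustrative stand-ins, not the live cluster file systems):

```shell
# Recreate the recommended FFS layout under a scratch root.
# $BASE/local stands in for the failover file systems,
# $BASE/global for the global file system, $BASE/var_mqm for /var/mqm.
BASE=$(mktemp -d)

mkdir -p "$BASE/local/mqm/qmgrs/qmgr1" "$BASE/local/mqm/qmgrs/qmgr2" \
         "$BASE/local/mqm/log/qmgr1"   "$BASE/local/mqm/log/qmgr2" \
         "$BASE/global/mqm/qmgrs"      "$BASE/global/mqm/log"

# /var/mqm -> /global/mqm
ln -s "$BASE/global/mqm" "$BASE/var_mqm"

# Each qmgr's data and log directory points at its failover file system,
# so MQ sees one namespace while the data fails over with the node.
for q in qmgr1 qmgr2; do
  ln -s "$BASE/local/mqm/qmgrs/$q" "$BASE/global/mqm/qmgrs/$q"
  ln -s "$BASE/local/mqm/log/$q"   "$BASE/global/mqm/log/$q"
done

ls -l "$BASE/global/mqm/qmgrs"
```

On a real cluster the /local file systems would be FFS entries in /etc/vfstab under HAStoragePlus control; the symlink shape is the part this sketch demonstrates.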
The agent for WebSphere MQ for SC 3.1 is available and supported on SC3.0 update 3 as well as SC3.1. There is also a patch available for the WebSphere MQ Agent which deals with IPC cleanup, for single or multiple qmgrs.
Docs available can be found at
http://docs.sun.com/db/prod/7192#hic - Just select Sun Cluster Data Service for WebSphere MQ
Finally, the above scenario also applies to SC3.0/5.2 as well as SC3.1/5.3, and either GFS or FFS allows persistent messages to be available after a failover.
Regards
Neil