Scswitch problem on Sun Cluster 3.0.
I am having a problem using the scswitch command to switch over a Sun Cluster 3.0 system on Solaris 8 with Veritas VxVM 3.2. The nodes are connected to Sun D2 disk arrays.
I am unable to bring the resource down:
porter:root:~ 103# scswitch -n -j hastorage-res
scswitch: tds-rg: resource group is undergoing a reconfiguration, please try again later
My configuration is shown below.
porter:root:~ 102# scstat -g
-- Resource Groups and Resources --
Group Name Resources
Resources: tds-rg tdsdi tdsdi-2 hastorage-res ora_tds ora_listener tds-res SLAPD-res
-- Resource Groups --
Group Name Node Name State
Group: tds-rg porter Pending online
Group: tds-rg bert Offline
-- Resources --
Resource Name Node Name State Status Message
Resource: tdsdi porter Offline Unknown - Starting
Resource: tdsdi bert Offline Offline - LogicalHostname offline.
Resource: tdsdi-2 porter Offline Unknown - Starting
Resource: tdsdi-2 bert Offline Offline - LogicalHostname offline.
Resource: hastorage-res porter Offline Offline
Resource: hastorage-res bert Offline Offline
Resource: ora_tds porter Offline Offline
Resource: ora_tds bert Offline Offline
Resource: ora_listener porter Offline Offline
Resource: ora_listener bert Offline Offline
Resource: tds-res porter Offline Offline
Resource: tds-res bert Offline Offline
Resource: SLAPD-res porter Offline Offline
Resource: SLAPD-res bert Offline Offline
I have no idea how to fix this problem.
Any ideas would be highly appreciated.
Jeff
Once you have the tds-rg offline, try using scsetup to update the state of the VxVM disk groups, then try switching just the disk group back and forth. Once that works, try bringing the RG online.
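For example (a sketch only; I am assuming the VxVM device group registered with the cluster is called tds-dg, so substitute your actual device group name):
scswitch -z -D tds-dg -h bert
scswitch -z -D tds-dg -h porter
If both switches succeed, try the resource group itself:
scswitch -z -g tds-rg -h porter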
Tim
---
Similar Messages
-
Hi all!
I have a problem with the cluster: one server cannot see the HDDs from the StorEdge arrays.
Current state:
- At the "ok" prompt, the "probe-scsi-all" command on hap203 detects all 14 HDDs (4 local, 5 from 3310_1 and 5 from 3310_2); hap103 detects only 13 HDDs (4 local, 5 from 3310_1 and only 4 from 3310_2).
- The "format" command on hap203 detects 14 HDDs (0 to 13); typing "format" on hap103 shows only 9 HDDs (0 to 8).
- Typing "devfsadm -C" on hap103 reports errors about the HDDs.
- Typing "scstat" on hap103 shows the resource group's status as "pending online" on hap103 and "offline" on hap203.
- Typing "metastat -s dgsmp" on hap103 reports "needs maintenance".
Help me if you can.
Many thanks.
Long.
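For reference, once the missing disk or its cabling on 3310_2 is fixed, a typical rescan sequence on the affected node would be something like this (scdidadm and scgdevs are the Sun Cluster 3.x commands that rebuild the DID and global device namespaces):
hap103> devfsadm -C
hap103> scdidadm -C
hap103> scdidadm -r
hap103> scgdevs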
-----------------------------ok_log-------------------------
########## hap103 ##################
{3} ok probe-scsi-all
/pci@1f,700000/scsi@2,1
/pci@1f,700000/scsi@2
Target 0
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 1
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 2
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
Target 3
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
/pci@1d,700000/pci@2/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1d,700000/pci@2/scsi@4
/pci@1c,600000/pci@1/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1c,600000/pci@1/scsi@4
############ hap203 ###################################
{3} ok probe-scsi-all
/pci@1f,700000/scsi@2,1
/pci@1f,700000/scsi@2
Target 0
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 1
Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
Target 2
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
Target 3
Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
/pci@1d,700000/pci@2/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1d,700000/pci@2/scsi@4
/pci@1c,600000/pci@1/scsi@5
Target 8
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target 9
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target a
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target b
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target c
Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
Target f
Unit 0 Processor SUN StorEdge 3310 D1159
/pci@1c,600000/pci@1/scsi@4
{3} ok
------------------------hap103-------------------------
hap103>
hap103> format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@8,0
1. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@9,0
2. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@a,0
3. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@b,0
4. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@c,0
5. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@0,0
6. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@1,0
7. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@2,0
8. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@3,0
Specify disk (enter its number): ^D
hap103>
hap103>
hap103>
hap103> scstat
-- Cluster Nodes --
Node name Status
Cluster node: hap103 Online
Cluster node: hap203 Online
-- Cluster Transport Paths --
Endpoint Endpoint Status
Transport path: hap103:ce7 hap203:ce7 Path online
Transport path: hap103:ce3 hap203:ce3 Path online
-- Quorum Summary --
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node --
Node Name Present Possible Status
Node votes: hap103 1 1 Online
Node votes: hap203 1 1 Online
-- Quorum Votes by Device --
Device Name Present Possible Status
Device votes: /dev/did/rdsk/d1s2 1 1 Online
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: dgsmp hap103 hap203
-- Device Group Status --
Device Group Status
Device group status: dgsmp Online
-- Resource Groups and Resources --
Group Name Resources
Resources: rg-smp has-res SDP1 SMFswitch
-- Resource Groups --
Group Name Node Name State
Group: rg-smp hap103 Pending online
Group: rg-smp hap203 Offline
-- Resources --
Resource Name Node Name State Status Message
Resource: has-res hap103 Offline Unknown - Starting
Resource: has-res hap203 Offline Offline
Resource: SDP1 hap103 Offline Unknown - Starting
Resource: SDP1 hap203 Offline Offline
Resource: SMFswitch hap103 Offline Offline
Resource: SMFswitch hap203 Offline Offline
hap103>
hap103>
hap103> metastat -s dgsmp
dgsmp/d120: Mirror
Submirror 0: dgsmp/d121
State: Needs maintenance
Submirror 1: dgsmp/d122
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 716695680 blocks
dgsmp/d121: Submirror of dgsmp/d120
State: Needs maintenance
Invoke: after replacing "Maintenance" components:
metareplace dgsmp/d120 d5s0 <new device>
Size: 716695680 blocks
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Hot Spare
d1s0 0 No Maintenance
d2s0 0 No Maintenance
d3s0 0 No Maintenance
d4s0 0 No Maintenance
d5s0 0 No Last Erred
dgsmp/d122: Submirror of dgsmp/d120
State: Needs maintenance
Invoke: after replacing "Maintenance" components:
metareplace dgsmp/d120 d6s0 <new device>
Size: 716695680 blocks
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Hot Spare
d6s0 0 No Last Erred
d7s0 0 No Okay
d8s0 0 No Okay
d9s0 0 No Okay
d10s0 0 No Resyncing
hap103> May 6 14:55:58 hap103 login: ROOT LOGIN /dev/pts/1 FROM ralf1
hap103>
hap103>
hap103>
hap103>
hap103> scdidadm -l
1 hap103:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 hap103:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 hap103:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
4 hap103:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
5 hap103:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
6 hap103:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
7 hap103:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
8 hap103:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
9 hap103:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
10 hap103:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
11 hap103:/dev/rdsk/c2t0d0 /dev/did/rdsk/d11
12 hap103:/dev/rdsk/c3t0d0 /dev/did/rdsk/d12
13 hap103:/dev/rdsk/c3t1d0 /dev/did/rdsk/d13
14 hap103:/dev/rdsk/c3t2d0 /dev/did/rdsk/d14
15 hap103:/dev/rdsk/c3t3d0 /dev/did/rdsk/d15
hap103>
hap103>
hap103> more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr ufs 1 yes -
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d20 - - swap - no -
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no logging
#/dev/dsk/c3t0d0s3 /dev/rdsk/c3t0d0s3 /globaldevices ufs 2 yes logging
/dev/md/dsk/d60 /dev/md/rdsk/d60 /in ufs 2 yes logging
/dev/md/dsk/d40 /dev/md/rdsk/d40 /in/oracle ufs 2 yes logging
/dev/md/dsk/d50 /dev/md/rdsk/d50 /indelivery ufs 2 yes logging
swap - /tmp tmpfs - yes -
/dev/md/dsk/d30 /dev/md/rdsk/d30 /global/.devices/node@1 ufs 2 no global
/dev/md/dgsmp/dsk/d120 /dev/md/dgsmp/rdsk/d120 /in/smp ufs 2 yes logging,global
#RALF1:/in/RALF1 - /inbackup/RALF1 nfs - yes rw,bg,soft
hap103> df -h
df: unknown option: h
Usage: df [-F FSType] [-abegklntVv] [-o FSType-specific_options] [directory | block_device | resource]
hap103>
hap103>
hap103>
hap103> df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d10 4339374 3429010 866971 80% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 22744256 136 22744120 1% /var/run
swap 22744144 24 22744120 1% /tmp
/dev/md/dsk/d50 1021735 2210 958221 1% /indelivery
/dev/md/dsk/d60 121571658 1907721 118448221 2% /in
/dev/md/dsk/d40 1529383 1043520 424688 72% /in/oracle
/dev/md/dsk/d33 194239 4901 169915 3% /global/.devices/node@2
/dev/md/dsk/d30 194239 4901 169915 3% /global/.devices/node@1
------------------log_hap203---------------------------------
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@8,0
1. c0t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@9,0
2. c0t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@a,0
3. c0t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@b,0
4. c0t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/pci@1/scsi@5/sd@c,0
5. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@8,0
6. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@9,0
7. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@a,0
8. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@b,0
9. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1d,700000/pci@2/scsi@5/sd@c,0
10. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@0,0
11. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@1,0
12. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@2,0
13. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1f,700000/scsi@2/sd@3,0
Specify disk (enter its number): ^D
hap203>
hap203> scstat
-- Cluster Nodes --
Node name Status
Cluster node: hap103 Online
Cluster node: hap203 Online
-- Cluster Transport Paths --
Endpoint Endpoint Status
Transport path: hap103:ce7 hap203:ce7 Path online
Transport path: hap103:ce3 hap203:ce3 Path online
-- Quorum Summary --
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node --
Node Name Present Possible Status
Node votes: hap103 1 1 Online
Node votes: hap203 1 1 Online
-- Quorum Votes by Device --
Device Name Present Possible Status
Device votes: /dev/did/rdsk/d1s2 1 1 Online
-- Device Group Servers --
Device Group Primary Secondary
Device group servers: dgsmp hap103 hap203
-- Device Group Status --
Device Group Status
Device group status: dgsmp Online
-- Resource Groups and Resources --
Group Name Resources
Resources: rg-smp has-res SDP1 SMFswitch
-- Resource Groups --
Group Name Node Name State
Group: rg-smp hap103 Pending online
Group: rg-smp hap203 Offline
-- Resources --
Resource Name Node Name State Status Message
Resource: has-res hap103 Offline Unknown - Starting
Resource: has-res hap203 Offline Offline
Resource: SDP1 hap103 Offline Unknown - Starting
Resource: SDP1 hap203 Offline Offline
Resource: SMFswitch hap103 Offline Offline
Resource: SMFswitch hap203 Offline Offline
hap203>
hap203>
hap203> devfsadm -C
hap203>
hap203> scdidadm -l
1 hap203:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
2 hap203:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
3 hap203:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
4 hap203:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
5 hap203:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
6 hap203:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
7 hap203:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
8 hap203:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
9 hap203:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
10 hap203:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
16 hap203:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16
17 hap203:/dev/rdsk/c3t0d0 /dev/did/rdsk/d17
18 hap203:/dev/rdsk/c3t1d0 /dev/did/rdsk/d18
19 hap203:/dev/rdsk/c3t2d0 /dev/did/rdsk/d19
20 hap203:/dev/rdsk/c3t3d0 /dev/did/rdsk/d20
hap203> May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 63 Error Block: 63
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1097 Error Block: 1097
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
First question is what HBA and driver combination are you using?
Next do you have MPxIO enabled or disabled?
Are you using SAN switches? If so whose, what F/W level and what configuration, (ie. single switch, cascade of multiple switches, etc.)
What are the distances from nodes to storage (include any fabric switches and ISL's if multiple switches) and what media are you using as a transport, (copper, fibre {single mode, multi-mode})?
What is the configuration of your storage ports, (fabric point to point, loop, etc.)? If loop what are the ALPA's for each connection?
The more you leave out of your question the harder it is to offer suggestions.
Feadshipman -
6140 Replication Problem in Sun Cluster
Hi,
I am not able to mount a replicated volume from a cluster system (primary site) on a non-cluster system (DR site). Replication is done by a 6140 storage array. At the primary site the volume is configured in a metaset under Solaris Cluster 3.2; at the DR site it was mapped to a non-cluster system after suspending the replication.
I also tried to mount the volume at the DR site by creating a metaset there, putting the volume under it and mounting it from that metaset, but this did not work either.
The errors are logged below:
drserver # mount -F ufs /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0 /mnt/
mount: /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0 is not this fstype
drserver #
drserver #
drserver #
drserver #
drserver # fstyp -v /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0
Unknown_fstyp (no matches)
drserver #
I will be grateful for any workaround you may have. Please note that replication from the non-cluster system works fine; it is only from the cluster system that it fails with the above errors.
I am not sure how you can run Solaris 10 Update 8, since to my knowledge that has not been released.
What is available is Solaris 10 05/09, which would be Update 7.
You are not describing what exact problem you have (such as specific error messages you see), or what exactly you did to end up in this situation.
I would recommend opening a support case to get a more structured analysis of your problem.
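One thing worth checking first: if the replicated volume held an SVM metadevice on the primary, the UFS file system may sit at an offset inside the metadevice rather than at the start of the slice, so fstyp on the bare slice can fail. A sketch, assuming Solaris 10 SVM (the set name drset and metadevice d0 are examples, not your actual names):
drserver # metaimport -r -v
drserver # metaimport -s drset c3t600A0B80004832A600002D554B74AC56d0
drserver # mount -F ufs /dev/md/drset/dsk/d0 /mnt
The first command reports disksets that are importable on this host; the second imports the set from the mapped disk.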
Regards
Thorsten -
Sun Cluster 3.0 and VxVM 3.2 problems at boot
I've a little problem with a two-node cluster (2 x 480R + 2 x 3310, each with a single RAID controller).
Each 3310 has 3 RAID5 LUNs.
I've mirrored these 3 LUNs with VxVM, and I've also mirrored the 2 internal (OS) disks.
One of the disks of the first 3310 is the quorum disk.
Every time I boot the nodes, I see an error at block 0 of the quorum disk, and then a tedious resynchronization of the mirrors starts (sometimes also of the OS mirror).
Why does this happen?
Thanks.
Regards,
Mauro.
We did another test today and again the resource group went into a STOP_FAILED state. On this occasion, the export of the corresponding ZFS pool timed out. We were able to successfully bring the resource group online on the desired cluster node, and subsequent failovers worked fine. Something strange is happening when the zpool is being exported (error correction, perhaps?). Once the zpool has been exported, further imports of it seem to work fine.
When we first had the problem, we were able to manually export and import the zpools, though they did take quite some time to export/import.
"zpool list" shows we have a total of 7 zpools.
"zfs list" shows we have a total of 27 zfs file systems.
Are there any specific Sun or other links about known problems with Sun Cluster and ZFS? -
Sun Cluster 3.2u3 clprivnet0 problem
Hello,
I have a strange behaviour after building a cluster. Everything looks fine and works EXCEPT for the clprivnet0 interface. Communication over this interface fails; for example, creating a metaset or listing IPMP groups fails with a timeout for 172.16.4.1 - 172.16.4.2.
Strangely, there is some communication: snooping the clprivnet0 interface during a ping (or a metaset creation attempt), I can see the ARP request and reply going through, but after that the conversation does not continue (no ICMP or TCP).
The system is Solaris 10u8 with Sun Cluster 3.2u3. I have built it with Solaris 10u8 plus the Recommended Patch Cluster from the May EIS DVD, and also with just plain Solaris 10u8. I have to use u8 (and likewise 3.2u3) due to a software requirement - the ACSLS software.
Has anyone had such an issue, or an idea what the problem could be? This thing is really weird and I have been fighting with it for a week.
Best regards,
Gyula
A /dev/ip setting caused the problem ...
-
Sun Cluster failed to switchover
Hi,
I have configured a two-node Sun Cluster, and it was working fine all these days.
Since yesterday, I have been unable to fail over the cluster to the second node;
instead, the resources are stopped and started again on the first node.
When I use the command "scswitch -z -g oracle_failover_rg -h MFIN-SOL02" on the first node, I get these messages on the console:
Sep 28 17:53:16 MFIN-SOL01 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 010.010.007.120:0, remote = 000.000.000.000:0, start = -2, end = 6
Sep 28 17:53:16 MFIN-SOL01 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
Please suggest how I can solve this problem.
Those messages aren't important here. I think they might be related to the fault monitor being stopped.
As I said in the previous post, you need to diagnose this bit by bit. Try the procedure manually, i.e. stop Oracle on node 1, manually switch over the disks and storage to node 2, mount the file system, bring up the logical address, and start the database.
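A rough sketch of that manual test (the device group, metadevice, mount point and interface below are examples, not your actual names):
# on MFIN-SOL01: stop the listener and database by hand, then move the storage
scswitch -z -D oracle-dg -h MFIN-SOL02
# on MFIN-SOL02: mount the file system, plumb the logical address, start Oracle
mount /dev/md/oracle-dg/dsk/d100 /oracle
ifconfig ce0 addif 10.10.7.120 netmask + broadcast + up
Whichever step hangs or fails is where to dig further.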
I expect there is something wrong with your configuration, e.g. incorrect listener configuration.
There is also a way of increasing the debug level for the Oracle agent. This is documented in the manuals IIRC.
Regards,
Tim
--- -
Sun Cluster failed when switching; mount /global gives an I/O error.
Hi all,
I am having a problem during switching two Sun Cluster nodes.
Environment:
Two nodes with Solaris 8 (Generic_117350-27), 2 Sun D2 arrays, VxVM 3.2 and Sun Cluster 3.0.
Problem description:
scswitch failed, so I ran scshutdown and booted both nodes. One node failed to come up because of a VxVM boot failure.
The other node boots normally but cannot mount the /global directories. Mounting manually works fine.
# mount /global/stripe01
mount: I/O error
mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
# vxdg import globdg
# vxvol -g globdg startall
# mount /dev/vx/dsk/globdg/mirror-vol03 /mnt
# echo $?
0
port:root:/global/.devices/node@1/dev/vx/dsk 169# mount /global/stripe01
mount: I/O error
mount: cannot mount /dev/vx/dsk/globdg/stripe-vol01
Need help urgently.
Jeff
I would check your patch levels. I seem to remember there was a linker patch that caused an issue with mounting /global/.devices/node@X.
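To compare the patch sets of the two nodes, something along these lines works on Solaris 8 (showrev -p lists the installed patches):
# showrev -p | sort > /tmp/patches.`hostname`
Run it on both nodes and diff the two files; any linker or cluster patch present on one node but not the other is a good suspect.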
Tim
--- -
Hi all,
Need some help from all out there
In our Sun Cluster 3.1 Data Service for Oracle RAC 9.2.0.7 (Solaris 9) configuration, my team encountered
ora-29701 *Unable to connect to Cluster Manager*
during the startup of the Oracle RAC database instances on the Oracle RAC Server resources.
We tried the attached workaround from Oracle. It works the first time, but it no longer works once the server is rebooted.
Kindly help me check whether anyone has encountered the same problem and been able to resolve it. Thanks.
Bug No. 4262155
Filed 25-MAR-2005 Updated 11-APR-2005
Product Oracle Server - Enterprise Edition Product Version 9.2.0.6.0
Platform Linux x86
Platform Version 2.4.21-9.0.1
Database Version 9.2.0.6.0
Affects Platforms Port-Specific
Severity Severe Loss of Service
Status Not a Bug. To Filer
Base Bug N/A
Fixed in Product Version No Data
Problem statement:
ORA-29701 DURING DATABASE CREATION AFTER APPLYING 9.2.0.6 PATCHSET
*** 03/25/05 07:32 am ***
TAR:
PROBLEM:
Customer applied 9.2.0.6 patchset over 9.2.0.4 patchset.
While creating the database, customer receives following error:
ORA-29701: unable to connect to Cluster Manager
However, if customer goes from 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the problem does not occur.
DIAGNOSTIC ANALYSIS:
It seems that the problem is with libskgxn9.so shared library.
For 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the install log shows the following:
installActions2005-03-22_03-44-42PM.log:,
[libskgxn9.so->%ORACLE_HOME%/lib/libskgxn9.so 7933 plats=1=>[46]langs=1=> en,fr,ar,bn,pt_BR,bg,fr_CA,ca,hr,cs,da,nl,ar_EG,en_GB,et,fi,de,el,iw,hu,is,in, it,ja,ko,es,lv,lt,ms,es_MX,no,pl,pt,ro,ru,zh_CN,sk,sl,es_ES,sv,th,zh_TW, tr,uk,vi]]
installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]]
For 9.2.0.4 -> 9.2.0.6, install log shows:
installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]] does not exist.
This means that while patching from 9.2.0.4 -> 9.2.0.5, Installer copies the libcmdll.so library into libskgxn9.so, while patching from 9.2.0.4 -> 9.2.0.6 does not.
ORACM is located in /app/oracle/ORACM which is different than ORACLE_HOME in customer's environment.
WORKAROUND:
Customer is using the following workaround:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_on ioracle ipc_udp
RELATED BUGS:
Bug 4169291
Check if the following MOS note helps:
Series of ORA-7445 Errors After Applying 9.2.0.7.0 Patchset to 9.2.0.6.0 Database (Doc ID 373375.1) -
SAP 7.0 on SUN Cluster 3.2 (Solaris 10 / SPARC)
Dear All,
I'm installing a two-node cluster (Sun Cluster 3.2 / Solaris 10 / SPARC) for an HA SAP 7.0 / Oracle 10g database.
The SAP and Oracle software was installed successfully, and I was able to cluster the Oracle DB; it is tested and working fine.
For SAP I did the following configuration:
# clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=01 -p Ci_services_string=SCS -p Ci_startup_script=startsap_01 -p Ci_shutdown_script=stopsap_01 -p resource_dependencies=sap-hastp-rs,ora-db-res sap-ci-scs-res
# clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=00 -p Ci_services_string=ASCS -p Ci_startup_script=startsap_00 -p Ci_shutdown_script=stopsap_00 -p resource_dependencies=sap-hastp-rs,or-db-res sap-ci-Ascs-res
When trying to bring sap-ci-res-grp online with # clresourcegroup online -M sap-ci-res-grp
it executes the startsap scripts successfully, as follows:
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
stty: : No such device or address
stty: : No such device or address
Starting SAP-Collector Daemon
11:04:57 04.06.2008 LOG: Effective User Id is root
Starting SAP-Collector Daemon
11:04:57 04.06.2008 LOG: Effective User Id is root
* This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
* Usage: saposcol -l: Start OS Collector
* saposcol -k: Stop OS Collector
* saposcol -d: OS Collector Dialog Mode
* saposcol -s: OS Collector Status
* Starting collector (create new process)
* This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
* Usage: saposcol -l: Start OS Collector
* saposcol -k: Stop OS Collector
* saposcol -d: OS Collector Dialog Mode
* saposcol -s: OS Collector Status
* Starting collector (create new process)
saposcol on host eccprd01 started
Starting SAP Instance ASCS00
Startup-Log is written to /export/home/prdadm/startsap_ASCS00.log
saposcol on host eccprd01 started
Running /usr/sap/PRD/SYS/exe/run/startj2eedb
Trying to start PRD database ...
Log file: /export/home/prdadm/startdb.log
Instance Service on host eccprd01 started
Jun 4 11:05:01 eccprd01 SAPPRD_00[26054]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
/usr/sap/PRD/SYS/exe/run/startj2eedb completed successfully
Starting SAP Instance SCS01
Startup-Log is written to /export/home/prdadm/startsap_SCS01.log
Instance Service on host eccprd01 started
Jun 4 11:05:02 eccprd01 SAPPRD_01[26111]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
Instance on host eccprd01 started
Instance on host eccprd01 started
and then it repeats the following warnings in /var/adm/messages until it fails over to the other node:
Jun 4 12:26:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:26:28 eccprd01 last message repeated 1 time
Jun 4 12:26:28 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Jun 4 12:27:46 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
Can anyone help me find out whether there is an error in the configuration, or what the cause of this problem is? Thanks in advance.
ARSSES
Hi all.
I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
Scenario:
Central Instance (not in cluster): started on one node
Dialog Instance (not in cluster): started on the other node
When I create the resource for SUNW.sap_as like
clrs create -g sap-rg -t SUNW.sap_as ... etc.
I get lots of WAITING FOR DISPATCHER TO COME UP... in /var/adm/messages.
Then, after the timeout, it gives up.
Any clue? What is it trying to connect to or waiting for? I have noticed that it happens before the startup script...
TIA -
File System Sharing using Sun Cluster 3.1
Hi,
I need help on how to set up and configure two Sun Solaris 10 servers to share a remote file system created on a SAN disk (SAN LUN).
The files in the file system should be readable and writable from both Solaris servers concurrently.
As a security policy, NFS mounts are not allowed. Someone suggested it can be done using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 are really appreciated.
thanks
Suresh
You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
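A minimal sketch of such a global mount (the metaset and mount point names are examples): an /etc/vfstab entry like
/dev/md/shared-ds/dsk/d100 /dev/md/shared-ds/rdsk/d100 /global/share ufs 2 yes global,logging
on both nodes makes the file system cluster-wide; with the global mount option it is readable and writable from every node at once.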
What is wrong with the security of NFS? If it is set up properly I don't think this should be a problem.
The other option would be to use shared QFS, but without Sun Cluster.
Regards,
Tim
--- -
Real Application Cluster on Sun Solaris 8 and Sun Cluster 3
Hello,
we want to install Oracle 9i Enterprise Edition in combination with the Oracle Real Application Clusters option on 2 nodes. Each node (a 12-CPU SMP machine) will run Sun Solaris 8 and the Sun Cluster 3 service.
Does this configuration work with Oracle RAC? I found no information about it anywhere. Is there anything I have to pay special attention to during installation?
Thank you for helping and best regards from Berlin/Germany
Michael Wuttke
Forms and Reports services work fine on Solaris 8.
My problem is on the client side.
I have to use Solaris 8 with Netscape for the Forms clients, and I wasn't able to make it work with the Java plugin.
Any solution?
Mauro -
Upgrading Solaris OS (9 to 10) in sun cluster 3.1 environment
Hi all ,
I have to upgrade Solaris 9 to 10 under Sun Cluster 3.1.
Sun Cluster 3.1
data service - Netbackup 5.1
Questions:
1. What is the best way to upgrade Solaris 9 to 10, and what problems might arise while upgrading the OS?
2. Is Sun Trunking supported in Sun Cluster 3.1?
Regards
Ramana
Hi Ramana,
We used Live Upgrade to go from Solaris 9 to 10, and it is the best method for minimising downtime and risk, but you have to follow the proper procedure, as it is not the same as for standalone Solaris. Live Upgrade with Sun Cluster is different: you have to take the global devices and Veritas Volume Manager into consideration when creating the new boot environment.
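A bare-bones sketch of the Live Upgrade sequence itself (the boot environment names, mirror device and media path are examples):
# lucreate -c s9BE -n s10BE -m /:/dev/md/dsk/d110:ufs
# luupgrade -u -n s10BE -s /cdrom/sol_10_sparc
# luactivate s10BE
# init 6
On cluster nodes the Sun Cluster upgrade guide adds extra steps around /global/.devices and the volume manager, as noted above.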
Thanks/Regards
Sadiq -
Bizzare Disk reservation probelm with sun cluster 3.2 - solaris 10 X 4600
We have a 4-node X4600 Sun Cluster with shared AMS500 storage. There are over 30 LUNs presented to the cluster.
When either of the two higher nodes (i.e. node id 2 and node id 3) is booted, its keys are not added to 4 of the 30 LUNs. These 4 LUNs show up as drive type unknown in format. The only thing these LUNs have in common is that they are bigger than 1 TB.
To resolve this I simply scrub the keys and run scgdevs; they then show up as normal in format, and all nodes' keys are present on the LUNs.
Has anybody come across this behaviour?
Commands used to resolve problem
1. check keys #/usr/cluster/lib/sc/scsi -c inkeys -d devicename
2. scrub keys #/usr/cluster/lib/sc/scsi -c scrub -d devicename
3. #scgdevs
4. check keys #/usr/cluster/lib/sc/scsi -c inkeys -d devicename
All nodes' keys are now present on the LUN.
Hi,
according to http://www.sun.com/software/cluster/osp/emc_clarion_interop.xml you can use both.
So in the end it all boils down to:
- cost: Solaris multipathing is free, as it is bundled
- support: Sun can offer better support for the Sun software
You can try browsing this forum to see what others have experienced with PowerPath. From a pure "use as much integrated software as possible" standpoint, I would go with the Solaris drivers.
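If you do go with the bundled Solaris multipathing (MPxIO), enabling it on Solaris 10 is a single command (a sketch; it rewrites the device paths and asks for a reboot):
# stmsboot -e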
Hartmut -
Information about Sun Cluster 3.1 5Q4 and Storage Foundation 4.1
Hi,
I have 2 Sun Fire V440s with Solaris 9, latest release 9/05, with the latest cluster patches, QLogic fibre HBA cards, and seven disks shared on an EMC CLARiiON CX500. I have installed and configured Sun Cluster 3.1 and Veritas Storage Foundation 4.1 MP1. My problem is that when I run the format command on each node, I see the disks in a different order, and Veritas SF 4.1 also picks up the disks in a different order.
1. Is Storage Foundation 4.1 compatible with Sun Cluster 3.1 2005Q4?
2. Do you have a how-to or other procedure for Storage Foundation 4.1 with Sun Cluster 3.1?
I'm very confused by Veritas Storage Foundation.
Thanks!
J-F Aubin
This combination does not work today, but it will be available later.
Since Sun and Veritas are two separate companies, it takes more
time than expected to synchronize releases. Products supported by
Sun for Sun Cluster installation undergo extensive testing, which also
takes time.
-- richard -
Shared Tuxedo 8.0 Binaries on a SUN Cluster 3.0
I know perfectly well that in every installation document BEA strongly advises against sharing executables across remote file systems (NFS etc.). Still, I need to ask whether any of you have experience with a Solaris 8 / Sun Cluster 3.0 environment where 2 or more nodes share disks through the same Sun Cluster 3 setup. The basic idea is to have the Tuxedo 8 binaries installed only once, and to separate all the "dynamic" files (tmconfig, tlog devices etc.) into their own respective directories (/node1, /node2 etc.), while they all remain on the clustered disks.
Thank you for a quick response.
Best regards,
Raoul
We have the same problem with two Sun E420s and a D1000 storage array.
The problem is related to the settings in the file
/etc/system
added by the cluster installation:
set rpcmod:svc_default_stksize=0x4000
set ge:ge_intr_mode=0x833
The second line tries to configure a Gigabit Ethernet interface that does not exist.
We commented out the two lines and everything now works fine.
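For reference, comments in /etc/system start with an asterisk, so the disabled lines look like this:
* set rpcmod:svc_default_stksize=0x4000
* set ge:ge_intr_mode=0x833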
I'm interested to know what you think about Sun Cluster 3.0 and what your experience has been.
email : [email protected]
Stefano