6140 Replication Problem in Sun Cluster

Hi,
I am not able to mount a replicated volume from a cluster system (primary site) on a non-cluster system (DR site). The replication is done by the 6140 storage. At the primary site the volume is configured in a metaset on a system running Solaris Cluster 3.2, and at the DR site it was mapped to a non-cluster system after suspending the replication.
I also tried to mount the volume at the DR site (non-cluster system) by creating a metaset there, putting the volume under it and mounting it from that metaset, but this did not work either.
Following are the errors logged:
drserver # mount -F ufs /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0 /mnt/
mount: /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0 is not this fstype
drserver #
drserver #
drserver #
drserver #
drserver # fstyp -v /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0
Unknown_fstyp (no matches)
drserver #
I will be grateful for any workaround for this. Please note that replication from the non-cluster system is working fine; it is only from the cluster system that it does not work and shows the above errors.
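
One avenue worth checking (a hedged sketch, not a verified fix): because the source volume sits inside an SVM metaset on the primary site, the replicated LUN carries diskset state, and the UFS file system may live on a metadevice rather than directly on slice 0 of the LUN. On Solaris 10 the DR host can usually import such a diskset with metaimport; the set name "drset" and the metadevice "d0" below are placeholders:

# inspect the slice layout of the replicated LUN first
drserver # prtvtoc /dev/rdsk/c3t600A0B80004832A600002D554B74AC56d0s2
# report disksets that can be imported, then import one under a new set name
drserver # metaimport -r -v
drserver # metaimport -s drset c3t600A0B80004832A600002D554B74AC56d0
# mount the metadevice rather than the raw slice
drserver # metastat -s drset
drserver # mount -F ufs /dev/md/drset/dsk/d0 /mnt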

I am not sure how you can run Solaris 10 Update 8, since to my knowledge that has not been released.
What is available is Solaris 10 05/09, which would be Update 7.
You are not describing what exact problem you have (such as the specific error messages you see) or what exactly you did to end up in the situation you are in.
I would recommend opening a support case to get a more structured analysis of your problem.
Regards
Thorsten

Similar Messages

  • JNDI replication problems in WebLogic cluster.

    I need to implement a replicable property in the cluster: each server could
    update it and new value should be available for all cluster. I tried to bind
    this property to JNDI and got several problems:
    1) On each rebinding I got error messages:
    <Nov 12, 2001 8:30:08 PM PST> <Error> <Cluster> <Conflict start: You tried
    to bind an object under the name example.TestName in the jndi tree. The
    object you have bound java.util.Date from 10.1.8.114 is non clusterable and
    you have tried to bind more than once from two or more servers. Such objects
    can only deployed from one server.>
    <Nov 12, 2001 8:30:18 PM PST> <Error> <Cluster> <Conflict Resolved:
    example.TestName for the object java.util.Date from 10.1.9.250 under the
    bind name example.TestName in the jndi tree.>
    As I understand it, this is the designed behavior for non-RMI objects. Am I
    correct?
    2) Replication still happens, but I get random results: I bind an object on
    server 1, get it from server 2, and they are not always the same, even with a
    delay of several seconds between the operations (tested with 0-10 sec.); and
    while a lookup returns the old version after 10 sec., a second attempt without
    any delay can return the correct result.
    Any ideas how to ensure correct replication? I need the lookup to return the
    object I bound on a different server.
    3) Even when lookup returns correct result, Admin Console in
    Server->Monitoring-> JNDI Tree shows an error for bound object:
    Exception
    javax.naming.NameNotFoundException: Unable to resolve example. Resolved: ''
    Unresolved:'example' ; remaining name ''
    My configuration: admin server + 3 managed servers in a cluster.
    JNDI bind and lookup are done from a stateless session bean. The session bean is
    clusterable and deployed to all servers in the cluster. The client invokes session
    methods directly on the servers via the t3 protocol.
    Thank you for any help.

    It is not a good idea to use JNDI to replicate application data. Did you consider
    using JMS for this? Or JavaGroups (http://sourceforge.net/projects/javagroups/) -
    there is an example of a distributed hashtable in the examples.
    Dimitri

  • Problem with Sun Cluster

    Hi all!
    I've a problem with the cluster: one of the servers cannot see all the HDDs from the StorEdge arrays.
    State:
    - at the "ok" prompt, using the "probe-scsi-all" command: hap203 can detect all 14 HDDs (4 local HDDs, 5 HDDs from 3310_1 and 5 HDDs from 3310_2); hap103 detects only 13 HDDs (4 local, 5 from 3310_1 and only 4 from 3310_2).
    - using the "format" command, hap203 can detect 14 HDDs (0 to 13); but typing "format" on hap103 shows only 9 HDDs (0 to 8).
    - typing "devfsadm -C" on hap103 ----> error notices about the HDDs.
    - typing "scstat" on hap103 ----------> Resource Group: hap103's status is "Pending online" and hap203's status is "Offline".
    - typing "metastat -s dgsmp" on hap103: notice "Needs maintenance".
    Help me if you can.
    Many thanks.
    Long.
    -----------------------------ok_log-------------------------
    ########## hap103 ##################
    {3} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 1
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    /pci@1d,700000/pci@2/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1d,700000/pci@2/scsi@4
    /pci@1c,600000/pci@1/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1c,600000/pci@1/scsi@4
    ############ hap203 ###################################
    {3} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 1
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    /pci@1d,700000/pci@2/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1d,700000/pci@2/scsi@4
    /pci@1c,600000/pci@1/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1c,600000/pci@1/scsi@4
    {3} ok
    ------------------------hap103-------------------------
    hap103>
    hap103> format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@8,0
    1. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@9,0
    2. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@a,0
    3. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@b,0
    4. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@c,0
    5. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@0,0
    6. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@1,0
    7. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@2,0
    8. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@3,0
    Specify disk (enter its number): ^D
    hap103>
    hap103>
    hap103>
    hap103> scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: hap103 Online
    Cluster node: hap203 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: hap103:ce7 hap203:ce7 Path online
    Transport path: hap103:ce3 hap203:ce3 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: hap103 1 1 Online
    Node votes: hap203 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d1s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: dgsmp hap103 hap203
    -- Device Group Status --
    Device Group Status
    Device group status: dgsmp Online
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: rg-smp has-res SDP1 SMFswitch
    -- Resource Groups --
    Group Name Node Name State
    Group: rg-smp hap103 Pending online
    Group: rg-smp hap203 Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: has-res hap103 Offline Unknown - Starting
    Resource: has-res hap203 Offline Offline
    Resource: SDP1 hap103 Offline Unknown - Starting
    Resource: SDP1 hap203 Offline Offline
    Resource: SMFswitch hap103 Offline Offline
    Resource: SMFswitch hap203 Offline Offline
    hap103>
    hap103>
    hap103> metastat -s dgsmp
    dgsmp/d120: Mirror
    Submirror 0: dgsmp/d121
    State: Needs maintenance
    Submirror 1: dgsmp/d122
    State: Needs maintenance
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 716695680 blocks
    dgsmp/d121: Submirror of dgsmp/d120
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
    metareplace dgsmp/d120 d5s0 <new device>
    Size: 716695680 blocks
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Hot Spare
    d1s0 0 No Maintenance
    d2s0 0 No Maintenance
    d3s0 0 No Maintenance
    d4s0 0 No Maintenance
    d5s0 0 No Last Erred
    dgsmp/d122: Submirror of dgsmp/d120
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
    metareplace dgsmp/d120 d6s0 <new device>
    Size: 716695680 blocks
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Hot Spare
    d6s0 0 No Last Erred
    d7s0 0 No Okay
    d8s0 0 No Okay
    d9s0 0 No Okay
    d10s0 0 No Resyncing
    hap103> May 6 14:55:58 hap103 login: ROOT LOGIN /dev/pts/1 FROM ralf1
    hap103>
    hap103>
    hap103>
    hap103>
    hap103> scdidadm -l
    1 hap103:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
    2 hap103:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
    3 hap103:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
    4 hap103:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
    5 hap103:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
    6 hap103:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
    7 hap103:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
    8 hap103:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
    9 hap103:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
    10 hap103:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
    11 hap103:/dev/rdsk/c2t0d0 /dev/did/rdsk/d11
    12 hap103:/dev/rdsk/c3t0d0 /dev/did/rdsk/d12
    13 hap103:/dev/rdsk/c3t1d0 /dev/did/rdsk/d13
    14 hap103:/dev/rdsk/c3t2d0 /dev/did/rdsk/d14
    15 hap103:/dev/rdsk/c3t3d0 /dev/did/rdsk/d15
    hap103>
    hap103>
    hap103> more /etc/vfstab
    #device device  mount   FS      fsck    mount   mount
    #to     mount   to      fsck            point           type    pass    at boot options
    #/dev/dsk/c1d0s2        /dev/rdsk/c1d0s2        /usr    ufs     1       yes     -
    fd      -       /dev/fd fd      -       no      -
    /proc   -       /proc   proc    -       no      -
    /dev/md/dsk/d20 -       -       swap    -       no      -
    /dev/md/dsk/d10 /dev/md/rdsk/d10        /       ufs     1       no      logging
    #/dev/dsk/c3t0d0s3      /dev/rdsk/c3t0d0s3      /globaldevices  ufs     2       yes     logging
    /dev/md/dsk/d60 /dev/md/rdsk/d60        /in     ufs     2       yes     logging
    /dev/md/dsk/d40 /dev/md/rdsk/d40        /in/oracle      ufs     2       yes     logging
    /dev/md/dsk/d50 /dev/md/rdsk/d50        /indelivery     ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/md/dsk/d30 /dev/md/rdsk/d30        /global/.devices/node@1 ufs     2       no      global
    /dev/md/dgsmp/dsk/d120  /dev/md/dgsmp/rdsk/d120 /in/smp ufs     2       yes     logging,global
    #RALF1:/in/RALF1 - /inbackup/RALF1 nfs - yes rw,bg,soft
    hap103> df -h
    df: unknown option: h
    Usage: df [-F FSType] [-abegklntVv] [-o FSType-specific_options] [directory | block_device | resource]
    hap103>
    hap103>
    hap103>
    hap103> df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 4339374 3429010 866971 80% /
    /proc 0 0 0 0% /proc
    fd 0 0 0 0% /dev/fd
    mnttab 0 0 0 0% /etc/mnttab
    swap 22744256 136 22744120 1% /var/run
    swap 22744144 24 22744120 1% /tmp
    /dev/md/dsk/d50 1021735 2210 958221 1% /indelivery
    /dev/md/dsk/d60 121571658 1907721 118448221 2% /in
    /dev/md/dsk/d40 1529383 1043520 424688 72% /in/oracle
    /dev/md/dsk/d33 194239 4901 169915 3% /global/.devices/node@2
    /dev/md/dsk/d30 194239 4901 169915 3% /global/.devices/node@1
    ------------------log_hap203---------------------------------
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@8,0
    1. c0t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@9,0
    2. c0t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@a,0
    3. c0t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@b,0
    4. c0t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@c,0
    5. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@8,0
    6. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@9,0
    7. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@a,0
    8. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@b,0
    9. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@c,0
    10. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@0,0
    11. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@1,0
    12. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@2,0
    13. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@3,0
    Specify disk (enter its number): ^D
    hap203>
    hap203> scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: hap103 Online
    Cluster node: hap203 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: hap103:ce7 hap203:ce7 Path online
    Transport path: hap103:ce3 hap203:ce3 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: hap103 1 1 Online
    Node votes: hap203 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d1s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: dgsmp hap103 hap203
    -- Device Group Status --
    Device Group Status
    Device group status: dgsmp Online
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: rg-smp has-res SDP1 SMFswitch
    -- Resource Groups --
    Group Name Node Name State
    Group: rg-smp hap103 Pending online
    Group: rg-smp hap203 Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: has-res hap103 Offline Unknown - Starting
    Resource: has-res hap203 Offline Offline
    Resource: SDP1 hap103 Offline Unknown - Starting
    Resource: SDP1 hap203 Offline Offline
    Resource: SMFswitch hap103 Offline Offline
    Resource: SMFswitch hap203 Offline Offline
    hap203>
    hap203>
    hap203> devfsadm -C
    hap203>
    hap203> scdidadm -l
    1 hap203:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
    2 hap203:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
    3 hap203:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
    4 hap203:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
    5 hap203:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
    6 hap203:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
    7 hap203:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
    8 hap203:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
    9 hap203:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
    10 hap203:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
    16 hap203:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16
    17 hap203:/dev/rdsk/c3t0d0 /dev/did/rdsk/d17
    18 hap203:/dev/rdsk/c3t1d0 /dev/did/rdsk/d18
    19 hap203:/dev/rdsk/c3t2d0 /dev/did/rdsk/d19
    20 hap203:/dev/rdsk/c3t3d0 /dev/did/rdsk/d20
    hap203> May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 63 Error Block: 63
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1097 Error Block: 1097
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (s

    First question: what HBA and driver combination are you using?
    Next, do you have MPxIO enabled or disabled?
    Are you using SAN switches? If so, whose, at what firmware level, and in what configuration (i.e. a single switch, a cascade of multiple switches, etc.)?
    What are the distances from the nodes to the storage (include any fabric switches and ISLs if there are multiple switches), and what media are you using as a transport (copper, fibre {single-mode, multi-mode})?
    What is the configuration of your storage ports (fabric point-to-point, loop, etc.)? If loop, what are the AL_PAs for each connection?
    The more you leave out of your question, the harder it is to offer suggestions.
    Feadshipman
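
    To gather some of the information asked for above, a rough sketch of commands that could be run on each node (driver names and config file paths are typical for Solaris 8/9/10 and may differ on your release):

    # HBAs and the drivers bound to them
    prtconf -D
    modinfo | egrep 'qlc|emlx|glm|qus'
    # controller / target status
    cfgadm -al
    # MPxIO globally enabled or disabled (if the scsi_vhci driver is installed)
    grep mpxio-disable /kernel/drv/scsi_vhci.conf
    # FC devices visible to the host (fibre-attached storage only)
    luxadm probe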

  • Scswitch problem on Sun Cluster 3.0.

    I am having a problem when using the scswitch command to switch over on Solaris 8 with Veritas VxVM 3.2. The nodes are connected to Sun D2 disk arrays.
    I managed to bring the resource down.
    porter:root:~ 103# scswitch -n -j hastorage-res
    scswitch: tds-rg: resource group is undergoing a reconfiguration, please try again later
    Please see my configuration as below.
    porter:root:~ 102# scstat -g
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: tds-rg tdsdi tdsdi-2 hastorage-res ora_tds ora_listener
    tds-res SLAPD-res
    -- Resource Groups --
    Group Name Node Name State
    Group: tds-rg porter Pending online
    Group: tds-rg bert Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: tdsdi porter Offline Unknown - Starting
    Resource: tdsdi bert Offline Offline - LogicalH
    ostname offline.
    Resource: tdsdi-2 porter Offline Unknown - Starting
    Resource: tdsdi-2 bert Offline Offline - LogicalH
    ostname offline.
    Resource: hastorage-res porter Offline Offline
    Resource: hastorage-res bert Offline Offline
    Resource: ora_tds porter Offline Offline
    Resource: ora_tds bert Offline Offline
    Resource: ora_listener porter Offline Offline
    Resource: ora_listener bert Offline Offline
    Resource: tds-res porter Offline Offline
    Resource: tds-res bert Offline Offline
    Resource: SLAPD-res porter Offline Offline
    Resource: SLAPD-res bert Offline Offline
    I have no idea how to fix this problem.
    Any idea is highly appreciated.
    Jeff

    Once you have the tds-rg offline, try using scsetup to update the state of the VxVM disk groups, then try switching just the disk group back and forth. Once that works, try bringing the RG online.
    Tim
    ---
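
    For reference, on Sun Cluster 3.0 the device group and resource group switches described above would look roughly like this (the device group name tds-dg is only an example; use the name shown by scstat -D):

    # check device group status and switch it between nodes
    scstat -D
    scswitch -z -D tds-dg -h bert
    scswitch -z -D tds-dg -h porter
    # once the device group switches cleanly, bring the resource group online
    scswitch -z -g tds-rg -h porter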

  • Sun Cluster 3.0 and VxVM 3.2 problems at boot

    I've a little problem with a two-node cluster (2 x 480R + 2 x 3310, each with a single RAID controller).
    Each 3310 has 3 (RAID 5) LUNs.
    I've mirrored these 3 LUNs with VxVM, and I've also mirrored the 2 internal (OS) disks.
    One of the disks of the first 3310 is the quorum disk.
    Every time I boot the nodes, I see an error at "block 0" of the quorum disk and then a tedious synchronization of the mirrors starts (sometimes also of the OS mirror).
    Why does this happen?
    Thanks.
    Regards,
    Mauro.

    We did another test today and again the resource group went into a STOP_FAILED state. On this occasion, the export for the corresponding ZFS pool timed-out. We were able to successfully bring the resource group online on the desired cluster node. Subsequent failovers worked fine. There's something strange happening when the zpool is being exported (eg error correction?). Once the zpool is exported, further imports of it seem to work fine.
    When we first had the problem, we were able to manually export and import the zpools, though they did take quite some time to export/import.
    "zpool list" shows we have a total of 7 zpools.
    "zfs list" shows we have a total of 27 zfs file systems.
    Are there any specific Sun (or other) links to known problems with Sun Cluster and ZFS?
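
    As an aside, once a resource ends up in STOP_FAILED after a stop method times out, the flag normally has to be cleared before the group can be managed again; a rough sketch (resource and node names below are placeholders):

    # Sun Cluster 3.2 syntax
    clresource clear -f STOP_FAILED -n node1 zpool-hasp-rs
    # Sun Cluster 3.0/3.1 syntax
    scswitch -c -h node1 -j zpool-hasp-rs -f STOP_FAILED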

  • Sun cluster, 6140 and 'cross-connections'

    This was brought up in the storage forum by somebody else, but the responses never answered the original question:
    In the 6140 setup document located at http://docs.sun.com/source/819-7497-11/chapter3.html#50589714_93886 it shows two different ways to cable a host to a 6140 via a SAN switch (figures 3-3 and 3-4).
    It states that the setup in 3-4 is not supported in a Sun Cluster environment.
    The problem is that, given the active/passive nature of the 6140, the setup shown in 3-4 is the obvious one to use, since it prevents having to force all of the LUNs over in the event of an HBA port or switch failure.
    To make life more interesting, the 6140 setup doc does not make note of which version of Sun Cluster it is not supported on, what the bug ID is, or any information to tell whether the restriction is still valid.
    So, does this restriction still exist? If so, for what version of Sun Cluster? What version of Solaris?

    Thanks for the clarification.
    As an aside, it would be nice if, in the future, the documentation could contain a bit more information than just a simple note saying 'this is not supported'. A reference to a bug ID or an info doc would go a long way in helping folks determine whether the restriction is still valid.
    --john

  • Sun Cluster & 6130/6140 thru switch with cross-connections not supported?

    Hi:
    I noticed that the 6140 does not support cross-connecting the 2 controllers to 2 switches for higher availability when using Sun Cluster:
    http://docs.sun.com/source/819-7497-10/chapter3.html
    Does anyone know why this restriction is there?
    Thanks!

    Since there was no real answer to the question in this forum, I cross posted this issue to the cluster forum.
    See http://forum.java.sun.com/thread.jspa?threadID=5261282&tstart=0 for the full thread.
    Basically, the restriction against cross-connections is no longer valid and the documentation should be updated to remove the note.
    This is all a good thing, because I had my 6140's wired into my sun cluster environment via the 'cross-connections' method diagramed in figure 3-4. :-)

  • Storagetek 6140 - chunk size? - veritas and sun cluster tuning?

    hi, we've just got a 6140 and i did some raw write and read tests -> very nice box!
    current config: 16 fc-disks (300gbyte / 2gbit/sec): 1x hotspare, 15x raid5 (512kibyte chunk)
    3 logical volumes: vol1: 1.7tbyte, vol2: 1.7tbyte, vol3 rest (about 450gbyte)
    on 2x t2000 coolthread server (32gibyte mem each)
    it seems the max write perf (from my tests) is:
    512kibyte chunk / 1mibyte blocksize / 32 threads
    -> 230mibyte/sec (write) transfer rate
    my tests:
    * chunk size: 16ki / 512ki
    * threads: 1/2/4/8/16/32
    * blocksize (kibyte): .5/1/2/4/8/16/32/64/128/256/512/1024/2048/4096/8192/16384
    Has anyone out there done tests with other chunk sizes?
    How about tuning the Veritas file system and Sun Cluster?
    Veritas FS: so far I've read about write_pref_io and write_nstream...
    I guess setting them to write_pref_io=1048576 and write_nstream=32 would be the best in this scenario, right?

    I've responded to your question in the following thread you started:
    https://communities.oracle.com/portal/server.pt?open=514&objID=224&mode=2&threadid=570778&aggregatorResults=T578058T570778T568581T574494T565745T572292T569622T568974T568554T564860&sourceCommunityId=465&sourcePortletId=268&doPagination=true&pagedAggregatorPageNo=1&returnUrl=https%3A%2F%2Fcommunities.oracle.com%2Fportal%2Fserver.pt%3Fopen%3Dspace%26name%3DCommunityPage%26id%3D8%26cached%3Dtrue%26in_hi_userid%3D132629%26control%3DSetCommunity%26PageID%3D0%26CommunityID%3D465%26&Portlet=All%20Community%20Discussions&PrevPage=Communities-CommunityHome
    Regards
    Nicolas
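
    For reference, the VxFS tunables mentioned in the question are normally inspected and set per file system with vxtunefs; a minimal sketch using the values proposed above (the mount point is a placeholder):

    # show the current tunables for a mounted VxFS file system
    vxtunefs /mnt/vol1
    # set the preferred write size and number of write streams at run time
    vxtunefs -o write_pref_io=1048576,write_nstream=32 /mnt/vol1
    # the same values can be made persistent in /etc/vx/tunefstab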

  • Sun Cluster 3.2u3 clprivnet0 problem

    Hello,
    I have a strange behaviour after building a cluster. Everything looks fine and works EXCEPT for the clprivnet0 interface. Communication over this interface is not successful; for example, creating a metaset or listing IPMP groups fails with a timeout for 172.16.4.1 - 172.16.4.2.
    Strangely, there is some communication: snooping the clprivnet0 interface during a ping (or a metaset creation attempt) I can see the ARP request and reply going through, but after that the communication does not continue (ICMP, TCP).
    The system is Solaris 10u8 with Sun Cluster 3.2u3. I have built the system with Solaris 10u8 plus the Recommended Patch Cluster from the May EIS DVD, and also with plain Solaris 10u8. I have to use u8 due to a software requirement (the same applies to 3.2u3) - the ACSLS software.
    Has anyone had such an issue, or an idea of what the problem could be? This thing is really weird and I have been fighting with it for a week.
    Best regards,
    Gyula

    a /dev/ip setting caused the problem ...
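
    For anyone hitting something similar: the post above does not say which /dev/ip setting was at fault, but the tunables that most commonly interfere with traffic on clprivnet0 can be checked with ndd, for example:

    ndd -get /dev/ip ip_forwarding
    ndd -get /dev/ip ip_strict_dst_multihoming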

  • Sun Cluster 3.2  without share storage. (Sun StorageTek Availability Suite)

    Hi all.
    I have a two-node Sun Cluster.
    I have configured and installed AVS on these nodes (AVS remote mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    What I did:
    Created a remote mirror with AVS.
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#
    Created a resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#
    Created a SUNW.HAStoragePlus resource with the AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0#
    By default everything works fine.
    But if I need to switch the RG to the second node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0#
    If I change the state to logging mode, everything works.
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0#
    How can I do this without creating an SC agent for it?
    Anatoly S. Zimin

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC) - rather, it is something that has been developed in the open source community. As such it will not be documented in the main Sun SC documentation set. Furthermore, support and/or questions for it should be directed to the author of the module.
    Regards,
    Tim
    ---
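
    For completeness, the manual workaround the original poster found (drop the remote mirror into logging mode, switch the resource group, then resync when done) could be scripted along these lines; this is only a sketch of the steps already shown above, not a supported agent:

    # on the node that is currently primary for the AVS set
    sndradm -n -C local -l
    # from any node: move the resource group to the other node
    clrg switch -n v210-node0 avs_test_rg
    # later, resume replication; add -r to reverse the direction if the roles have swapped
    sndradm -n -u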

  • TimesTen database in Sun Cluster environment

    Hi,
    Currently we have our application, together with the TimesTen database, installed at the customer site on two different nodes (running Sun Solaris 10). The second node acts as a backup to provide failover functionality, although right now only manual failover is supported.
    We are now looking into a hot-standby / high-availability solution using the Sun Cluster software. As understood from the documentation, applications can be 'plugged in' to Sun Cluster using agents that monitor the application. Sun Cluster agents are already available for certain applications such as:
    # MySQL
    # Oracle 9i, 10g (HA and RAC)
    # Oracle 9iAS Application Server
    # PostgreSQL
    (See http://www.sun.com/software/solaris/cluster/faq.jsp#q_19)
    Our question is whether a Sun Cluster agent is already (freely) available for TimesTen. If so, where can we find it? If not, should we write a specific agent for TimesTen ourselves, or handle database problems from the application?
    Does anyone have experience using TimesTen in a Sun Cluster environment?
    Thanks in advance!

    Yes, we use 2-way replication, but we don't use cache connect. The replication is created like this on both servers:
    create replication MYDB.REPSCHEME
    element SERVER01_DS datastore
    master MYDB on "SERVER01_REP"
    transmit nondurable
    subscriber MYDB on "SERVER02_REP"
    element SERVER02_DS datastore
    master MYDB on "SERVER02_REP"
    transmit nondurable
    subscriber MYDB on "SERVER01_REP"
    store MYDB on "SERVER01_REP"
    port 16004
    failthreshold 500
    store MYDB on "SERVER02_REP"
    port 16004
    failthreshold 500
    The application runs on SERVER01 and is standby on SERVER02. If an invalid state is detected in the application, the application on SERVER01 is stopped and the application on SERVER02 is started.
    In addition to this, we want to fail over if the database on SERVER01 is in an invalid state. What should we have monitored by the clustering agent to detect an invalid state in TT?
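
    As a rough idea of what a custom probe could check (a sketch only; ttStatus and ttIsql are standard TimesTen utilities, but the exact health test and output wording depend on the TimesTen release):

    #!/bin/sh
    # hypothetical probe for the MYDB data store used in the replication scheme above;
    # exit non-zero to signal the cluster framework that the database looks unhealthy

    # is the TimesTen main daemon answering at all?
    ttStatus > /dev/null 2>&1 || exit 100

    # can we still connect to the data store?
    ttIsql -e "quit" MYDB > /dev/null 2>&1 || exit 101

    # is the replication agent still reported as running?
    # (adjust the pattern to your version's ttStatus output)
    ttStatus | grep -i replication | grep -iq running || exit 102

    exit 0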

  • Errors after initial Sun Cluster install

    - SunOS conch 5.10 Generic_118833-36 sun4u sparc SUNW,Sun-Fire-V210
    - Sun Cluster 3.2
    I've gone through the scinstall process using the standard answers to the questions. The only exception is that, when it came to quorum, I answered that I would set it up later, as I want to try the quorum server. There's no shared storage - I'm seeing if it's possible to create a cluster using IP-based replication.
    I'm getting these error messages every 30 seconds (looks like a result of:
    # svcs lrc:/etc/rc3_d/S91initgchb_resd
    STATE STIME FMRI
    legacy_run 16:19:29 lrc:/etc/rc3_d/S91initgchb_resd
    Feb 8 16:38:59 conch Cluster.GCHB_resd: Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 8 16:38:59 conch Cluster.GCHB_resd: GCHB system error: scha_cluster_open failed with 18
    Feb 8 16:38:59 conch : Bad file number
    Feb 8 16:39:29 conch Cluster.GCHB_resd: Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 8 16:39:29 conch Cluster.GCHB_resd: GCHB system error: scha_cluster_open failed with 18
    Feb 8 16:39:29 conch : Bad file number
    Feb 8 16:39:59 conch Cluster.GCHB_resd: Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 8 16:39:59 conch Cluster.GCHB_resd: GCHB system error: scha_cluster_open failed with 18
    Feb 8 16:39:59 conch : Bad file number
    Feb 8 16:40:29 conch Cluster.GCHB_resd: Unable to open door descriptor /var/run/rgmd_receptionist_door
    Feb 8 16:40:29 conch Cluster.GCHB_resd: GCHB system error: scha_cluster_open failed with 18
    Feb 8 16:40:29 conch : Bad file number
    There's no file system errors, and I'm at a complete loss as to why there appears to be this problem. Can anyone offer any advice?
    Cheers,
    Iain

    Hi,
    there are 2 issues here.
    1. The error messages that you see. I get them on my freshly installed cluster as well. What did I do? I used the JES installer and installed SC 3.2 and SC Geo 3.2 - to be configured later. I think it should only install the packages but not configure any part of them; it seems that it does otherwise. To me gchb sounds like global cluster heartbeat. I'll follow up with the developers to get this clarified.
    2. Replication within a cluster and no shared storage. This has several aspects. I, too, see more and more customer demand for this. If you get it to work, let us know. I am not sure, though, why you installed the SC Geo edition to achieve this, as I do not think it will help you here.
    In any case I can only recommend setting up the quorum server before proceeding, otherwise your whole cluster will panic as soon as you do a single reboot. That is by design.
    Regards
    Hartmut
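
    As a rough outline of the quorum server setup recommended above (host, port and device names are examples): configure and start the quorum server on a machine outside the cluster, then register it as a quorum device from one cluster node.

    # on the quorum server host: /etc/scqsd/scqsd.conf should contain a line such as
    #   /usr/cluster/lib/sc/scqsd -d /var/scqsd -p 9000
    clquorumserver start +

    # on one cluster node (check clquorum(1CL) for the exact type string on your release)
    clquorum add -t quorum_server -p qshost=192.168.1.50 -p port=9000 qs1
    clquorum status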

  • Sun cluster 3.1 io error

    Hi,
    I have 2 cluster nodes running Solaris 9 9/05 with Sun Cluster 3.1. After a migration from Hitachi AMS1000 storage to a Sun StorageTek 9985V, when I shut down one node in the cluster, the mounted volumes on the second node give I/O errors. I have already installed the new patches for the OS, cluster and SAN, but the problem still persists. Please help me.
    Regards,
    Arun

    Arun,
    You say you migrated to the 9985V - did you do that with backup and restore or with a replication technology? If it was the latter, you might have inadvertently copied over some SCSI reservation keys. Otherwise, I can't see any reason for the problem.
    SCSI keys can be removed (with extreme care) using the scsi and pgre commands in the /usr/cluster/lib/sc directory.
    Tim
    ---
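
    For reference, the reservation keys mentioned above can be listed read-only before deciding whether anything needs to be scrubbed; a cautious sketch, with the DID device below as a placeholder:

    # list SCSI-3 persistent reservation keys and reservations on a device (read-only)
    /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d4s2
    /usr/cluster/lib/sc/scsi -c inresv -d /dev/did/rdsk/d4s2
    # list SCSI-2 PGRe emulation keys (read-only)
    /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d4s2
    # removal is done with the scrub / pgre_scrub options - only with extreme care, as noted above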

  • Encountered ora-29701 during Sun Cluster for Oracle RAC 9.2.0.7 startup (UR

    Hi all,
    Need some help from everyone out there.
    In our Sun Cluster 3.1 Data Service for Oracle RAC 9.2.0.7 (Solaris 9) configuration, my team encountered
    ora-29701 *Unable to connect to Cluster Manager*
    during the startup of the Oracle RAC database instances on the Oracle RAC server resources.
    We tried the attached workaround from Oracle. This workaround works well the first time, but it no longer works once the server is rebooted.
    Kindly help me check whether anyone has encountered the same problem and been able to resolve it. Thanks.
    Bug No. 4262155
    Filed 25-MAR-2005 Updated 11-APR-2005
    Product Oracle Server - Enterprise Edition Product Version 9.2.0.6.0
    Platform Linux x86
    Platform Version 2.4.21-9.0.1
    Database Version 9.2.0.6.0
    Affects Platforms Port-Specific
    Severity Severe Loss of Service
    Status Not a Bug. To Filer
    Base Bug N/A
    Fixed in Product Version No Data
    Problem statement:
    ORA-29701 DURING DATABASE CREATION AFTER APPLYING 9.2.0.6 PATCHSET
    *** 03/25/05 07:32 am ***
    TAR:
    PROBLEM:
    Customer applied 9.2.0.6 patchset over 9.2.0.4 patchset.
    While creating the database, customer receives following error:
         ORA-29701: unable to connect to Cluster Manager
    However, if customer goes from 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the problem does not occur.
    DIAGNOSTIC ANALYSIS:
    It seems that the problem is with libskgxn9.so shared library.
    For 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the install log shows the following:
    installActions2005-03-22_03-44-42PM.log:,
    [libskgxn9.so->%ORACLE_HOME%/lib/libskgxn9.so 7933 plats=1=>[46]langs=1=> en,fr,ar,bn,pt_BR,bg,fr_CA,ca,hr,cs,da,nl,ar_EG,en_GB,et,fi,de,el,iw,hu,is,in, it,ja,ko,es,lv,lt,ms,es_MX,no,pl,pt,ro,ru,zh_CN,sk,sl,es_ES,sv,th,zh_TW, tr,uk,vi]]
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]]
    For 9.2.0.4 -> 9.2.0.6, install log shows:
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]] does not exist.
    This means that while patching from 9.2.0.4 -> 9.2.0.5, Installer copies the libcmdll.so library into libskgxn9.so, while patching from 9.2.0.4 -> 9.2.0.6 does not.
    ORACM is located in /app/oracle/ORACM which is different than ORACLE_HOME in customer's environment.
    WORKAROUND:
    Customer is using the following workaround:
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk rac_on ioracle ipc_udp
    RELATED BUGS:
    Bug 4169291

    Check whether the following MOS note helps:
    Series of ORA-7445 Errors After Applying 9.2.0.7.0 Patchset to 9.2.0.6.0 Database (Doc ID 373375.1)

  • SAP 7.0 on SUN Cluster 3.2 (Solaris 10 / SPARC)

    Dear All,
    I'm installing a two-node cluster (Sun Cluster 3.2 / Solaris 10 / SPARC) for a HA SAP 7.0 / Oracle 10g database.
    The SAP and Oracle software were successfully installed, and I could successfully cluster the Oracle DB; it is tested and working fine.
    For SAP I did the following configuration:
    # clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=01 -p Ci_services_string=SCS -p Ci_startup_script=startsap_01 -p Ci_shutdown_script=stopsap_01 -p resource_dependencies=sap-hastp-rs,ora-db-res sap-ci-scs-res
    # clresource create -g sap-ci-res-grp -t SUNW.sap_ci_v2 -p SAPSID=PRD -p Ci_instance_id=00 -p Ci_services_string=ASCS -p Ci_startup_script=startsap_00 -p Ci_shutdown_script=stopsap_00 -p resource_dependencies=sap-hastp-rs,or-db-res sap-ci-Ascs-res
    and when trying to bring the sap-ci-res-grp online with # clresourcegroup online -M sap-ci-res-grp
    it executes the startsap scripts successfully as follows:
    Sun Microsystems Inc.     SunOS 5.10     Generic     January 2005
    stty: : No such device or address
    stty: : No such device or address
    Starting SAP-Collector Daemon
    11:04:57 04.06.2008 LOG: Effective User Id is root
    Starting SAP-Collector Daemon
    11:04:57 04.06.2008 LOG: Effective User Id is root
    * This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
    * Usage: saposcol -l: Start OS Collector
    * saposcol -k: Stop OS Collector
    * saposcol -d: OS Collector Dialog Mode
    * saposcol -s: OS Collector Status
    * Starting collector (create new process)
    * This is Saposcol Version COLL 20.94 700 - V3.72 64Bit
    * Usage: saposcol -l: Start OS Collector
    * saposcol -k: Stop OS Collector
    * saposcol -d: OS Collector Dialog Mode
    * saposcol -s: OS Collector Status
    * Starting collector (create new process)
    saposcol on host eccprd01 started
    Starting SAP Instance ASCS00
    Startup-Log is written to /export/home/prdadm/startsap_ASCS00.log
    saposcol on host eccprd01 started
    Running /usr/sap/PRD/SYS/exe/run/startj2eedb
    Trying to start PRD database ...
    Log file: /export/home/prdadm/startdb.log
    Instance Service on host eccprd01 started
    Jun 4 11:05:01 eccprd01 SAPPRD_00[26054]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
    /usr/sap/PRD/SYS/exe/run/startj2eedb completed successfully
    Starting SAP Instance SCS01
    Startup-Log is written to /export/home/prdadm/startsap_SCS01.log
    Instance Service on host eccprd01 started
    Jun 4 11:05:02 eccprd01 SAPPRD_01[26111]: Unable to open trace file sapstartsrv.log. (Error 11 Resource temporarily unavailable) [ntservsserver.cpp 1863]
    Instance on host eccprd01 started
    Instance on host eccprd01 started
    and then it repeats the following warnings in /var/adm/messages until it fails over to the other node:
    Jun 4 12:26:22 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    Jun 4 12:26:25 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-scs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dispatcher to come up.
    Jun 4 12:26:28 eccprd01 last message repeated 1 time
    (the same two messages repeat for both resources every few seconds until 12:27:46)
    Jun 4 12:27:46 eccprd01 last message repeated 1 time
    Jun 4 12:27:46 eccprd01 SC[SUNW.sap_ci_v2,sap-ci-res-grp,sap-ci-Ascs-res,sap_ci_svc_start]: [ID 183934 daemon.notice] Waiting for SAP Central Instance main dis
    Can anyone help me find out whether there is an error in the configuration, or what the cause of this problem is? Thanks in advance.
    ARSSES

    Hi all.
    I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
    Scenario:
    Central instance (not in cluster): started on one node
    Dialog instance (not in cluster): started on the other node
    When I create the resource for SUNW.sap_as like
    clrs create -g sap-rg -t SUNW.sap_as .....etc etc
    in /var/adm/messages I get lots of WAITING FOR DISPATCHER TO COME UP....
    Then after the timeout it gives up.
    Any clue? What is it trying to connect to or waiting for? I have noticed that it's something before the startup script....
    TIA
