OCFS2: a problem in my cluster

I'm working on a cluster, and when I try to start the ocfs2 service it reports:
"Starting cluster ocfs2: Failed
cluster ocfs2 created
node 1 added
node 2 added
o2cb_ctl: Configuration error discovered while populating cluster ocfs2. None of its nodes were considered local...
Stopping cluster ocfs2: OK"
Thanks for your help, and sorry.

I'm facing the same problem!
Waiting for a solution.
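
The "None of its nodes were considered local" message from o2cb_ctl typically means that no node stanza in /etc/ocfs2/cluster.conf has a name matching the machine's own hostname (or an ip_address that is configured on a local interface). A minimal sketch of what to compare on each node, assuming a two-node cluster named ocfs2 with example hostnames rac1/rac2 and example addresses:

# /etc/ocfs2/cluster.conf -- must be identical on every node
# (ocfs2console generates this file in the exact layout o2cb expects)
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = rac2
        cluster = ocfs2

# On each node, the matching stanza's "name" must equal the output of:
hostname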

Similar Messages

  • Ocfs2 mounting problem

    I am just installing a 2-node RAC. I cannot mount the ocfs2 filesystem: the mount command shows errors both at startup and on the command line, even though ocfs2console mounts the same partition successfully!
    [root@rac1 ~]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L /crs1 /dev/sde1
    mkfs.ocfs2 1.2.1
    Overwriting existing ocfs2 partition.
    Proceed (y/N): y
    Filesystem label=/crs1
    Block size=4096 (bits=12)
    Cluster size=32768 (bits=15)
    Volume size=1073512448 (32761 clusters) (262088 blocks)
    2 cluster groups (tail covers 505 clusters, rest cover 32256 clusters)
    Journal size=16777216
    Initial number of node slots: 4
    Creating bitmaps: done
    Initializing superblock: done
    Writing system files: done
    Writing superblock: done
    Formatting Journals: done
    Writing lost+found: done
    mkfs.ocfs2 successful
    [root@rac1 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sde /crs1
    ocfs2_hb_ctl: Bad magic number in superblock while reading uuid
    mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not
    permitted"
    [root@rac1 ~]# /etc/rc.d/init.d/o2cb status
    Module "configfs": Loaded
    Filesystem "configfs": Mounted
    Module "ocfs2_nodemanager": Loaded
    Module "ocfs2_dlm": Loaded
    Module "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking cluster ocfs2: Online
    Checking heartbeat: Not active

    Hi, I have the same problem here:
    [root@node1 ~]# mounted.ocfs2 -d
    Device FS UUID Label
    /dev/sdb1 ocfs2 456a44f0-1b41-4a42-b2d2-3538aa003e50 /u03
    [root@node1 ~]# service ocfs2 start
    Starting Oracle Cluster File System (OCFS2) ocfs2_hb_ctl: Bad magic number in inode while reading uuid
    mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
    [FAILED]
    but it is OK on node2.
    Does anybody have an idea?
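
    A note on the first post above: the filesystem was created on the partition /dev/sde1, but the mount command points at the whole disk /dev/sde, which by itself would explain the "Bad magic number in superblock" from ocfs2_hb_ctl. Assuming that is the only issue, the mount would look like:
    [root@rac1 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sde1 /crs1
    For the second post the device looks right, so it may be worth comparing /etc/ocfs2/cluster.conf and the output of "service o2cb status" on node1 against node2 before anything else.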

  • Ocfs2 mount problem due to datavolume option.

    hi,
    I am trying to install Oracle RAC on Fedora Core 6 over iSCSI.
    When I try to mount, I get:
    mount.ocfs2: Invalid argument while mounting /dev/mapper/rac-crs on /mnt/crs. Check 'dmesg' for more information on this error.
    The error in dmesg:
    ocfs2: Unmounting device (253,11) on (node 255)
    (27354,0):ocfs2_parse_options:753 ERROR: Unrecognized mount option "datavolume" or missing value
    Versions of the ocfs2 packages:
    [root@server ~]# rpm -qa | grep ocfs
    ocfs2console-1.2.2-1
    ocfs2-tools-devel-1.2.2-1
    ocfs2-2.6.9-42.EL-1.2.4-2
    ocfs2-tools-1.2.2-1
    I googled and found that this datavolume option is tied to the 1.2 release, but it is also needed for the voting disk. The threads discussing this problem are already 7 to 8 months old.
    How can I make it work? Is any newer version available now, or is there another way to fix it?
    thanks for any response.
    regards,
    Nirmal Tom.V

    Hi,
    what parameters do you use to mount the ocfs2 filesystem? They should be:
    /dev/sdb1 /u02 ocfs2 _netdev,datavolume,nointr  0 0
    Regards
    Angel
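
    The kernel-module package listed above (ocfs2-2.6.9-42.EL-1.2.4-2) is built for a 2.6.9 EL kernel, while Fedora Core 6 runs a much newer kernel, so the ocfs2 module actually loaded is almost certainly the in-kernel one, which does not understand the Oracle-specific datavolume option. A quick way to confirm which module is in use (a sketch; output will differ per system):
    # which kernel is running, and where does the loaded ocfs2 module come from?
    uname -r
    modinfo ocfs2 | egrep 'filename|version'
    # the installed kmod package targets a different kernel entirely
    rpm -q ocfs2-2.6.9-42.EL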

  • JNDI replication problems in WebLogic cluster.

    I need to implement a replicable property in the cluster: each server can update it and the new value should be available to the whole cluster. I tried to bind this property to JNDI and ran into several problems:
    1) On each rebinding I got error messages:
    <Nov 12, 2001 8:30:08 PM PST> <Error> <Cluster> <Conflict start: You tried
    to bind an object under the name example.TestName in the jndi tree. The
    object you have bound java.util.Date from 10.1.8.114 is non clusterable and
    you have tried to bind more than once from two or more servers. Such objects
    can only deployed from one server.>
    <Nov 12, 2001 8:30:18 PM PST> <Error> <Cluster> <Conflict Resolved:
    example.TestName for the object java.util.Date from 10.1.9.250 under the
    bind name example.TestName in the jndi tree.>
    As I understand it, this is the designed behavior for non-RMI objects. Am I correct?
    2) Replication still happens, but I get random results: I bind an object on server 1 and look it up from server 2, and they are not always the same, even with a delay of several seconds between the operations (tested with 0-10 sec.). Sometimes the lookup returns the old version after 10 seconds, while a second attempt with no delay returns the correct result.
    Any ideas how to ensure correct replication? I need the lookup to return the object I bound on a different server.
    3) Even when the lookup returns the correct result, the Admin Console under Server -> Monitoring -> JNDI Tree shows an error for the bound object:
    Exception
    javax.naming.NameNotFoundException: Unable to resolve example. Resolved: ''
    Unresolved:'example' ; remaining name ''
    My configuration: admin server + 3 managed servers in a cluster.
    JNDI bind and lookup are done from a stateless session bean. The bean is clusterable and deployed to all servers in the cluster. The client invokes the session bean methods directly on the servers over the t3 protocol.
    Thank you for any help.

    It is not a good idea to use JNDI to replicate application data. Did you consider using JMS for this? Or JavaGroups (http://sourceforge.net/projects/javagroups/) - there is an example of a distributed hashtable in its examples.
    Dimitri

  • 6140 Replication Problem in Sun Cluster

    Hi,
    I am not able to mount a replicated volume from a cluster system (primary site) on a non-cluster system (DR site). Replication is done by the 6140 storage array. At the primary site the volume is configured in a metaset under Solaris Cluster 3.2; at the DR site it was mapped to a non-cluster system after suspending the replication.
    I even tried to mount the volume at the DR site by creating a metaset there, putting the volume under it, and mounting it from there, but that did not work either.
    Here is the log of the errors:
    drserver # mount -F ufs /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0 /mnt/
    mount: /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0 is not this fstype
    drserver #
    drserver # fstyp -v /dev/dsk/c3t600A0B80004832A600002D554B74AC56d0s0
    Unknown_fstyp (no matches)
    drserver #
    I would be grateful for any workaround. Please note that the replication set up from a non-cluster system works fine; only the one from the cluster system fails with the errors above.

    I am not sure how you can run Solaris 10 Update 8, since to my knowledge that is not released.
    What is available is Solaris 10 05/09, which would be Update 7.
    You are not describing what exact problem you have (like specific error messages you see), or what exactly you did to end up in the situation you have.
    I would recommend to open a support case to get a more structured analysis of your problem.
    Regards
    Thorsten
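
    Coming back to the original mount failure above: since the volume lives in an SVM metaset on the cluster, the UFS filesystem may well have been created on a metadevice inside that diskset rather than directly on slice 0 of the LUN, in which case mounting the raw c3t...d0s0 device on the DR host will always report the wrong fstype. Under that assumption, a rough sketch of a non-destructive check on the DR host (set and metadevice names are placeholders; metaimport needs Solaris 10 or a suitably patched Solaris 9):
    # report disksets that can be imported from the replicated LUN
    metaimport -r -v
    # import the reported set under a local name, then mount its metadevice, not the raw slice
    metaimport -s drset c3t600A0B80004832A600002D554B74AC56d0
    mount -F ufs /dev/md/drset/dsk/d100 /mnt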

  • Problem with Sun Cluster

    Hi all !
    I have a problem with the cluster: one server cannot see all the HDDs from the StorEdge arrays.
    Current state:
    - At the OBP "ok" prompt, "probe-scsi-all" on hap203 detects all 14 HDDs (4 local, 5 from 3310_1 and 5 from 3310_2); hap103 detects only 13 HDDs (4 local, 5 from 3310_1 and only 4 from 3310_2).
    - With the "format" command, hap203 detects 14 HDDs (0 to 13), but hap103 only sees 9 HDDs (0 to 8).
    - Running "devfsadm -C" on hap103 reports errors about the HDDs.
    - In "scstat" on hap103, the resource group status is "Pending online" on hap103 and "Offline" on hap203.
    - "metastat -s dgsmp" on hap103 reports "Needs maintenance".
    Help me if you can.
    Many thanks.
    Long.
    -----------------------------ok_log-------------------------
    ########## hap103 ##################
    {3} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 1
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    /pci@1d,700000/pci@2/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1d,700000/pci@2/scsi@4
    /pci@1c,600000/pci@1/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1c,600000/pci@1/scsi@4
    ############ hap203 ###################################
    {3} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 1
    Unit 0 Disk SEAGATE ST373307LSUN72G 0507 143374738 Blocks, 70007 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ72NSUN72G PQ0B 143374738 Blocks, 70007 MB
    /pci@1d,700000/pci@2/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1d,700000/pci@2/scsi@4
    /pci@1c,600000/pci@1/scsi@5
    Target 8
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target 9
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target a
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target b
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target c
    Unit 0 Disk FUJITSU MAP3735N SUN72G 0401
    Target f
    Unit 0 Processor SUN StorEdge 3310 D1159
    /pci@1c,600000/pci@1/scsi@4
    {3} ok
    ------------------------hap103-------------------------
    hap103>
    hap103> format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@8,0
    1. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@9,0
    2. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@a,0
    3. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@b,0
    4. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@c,0
    5. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@0,0
    6. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@1,0
    7. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@2,0
    8. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@3,0
    Specify disk (enter its number): ^D
    hap103> scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: hap103 Online
    Cluster node: hap203 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: hap103:ce7 hap203:ce7 Path online
    Transport path: hap103:ce3 hap203:ce3 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: hap103 1 1 Online
    Node votes: hap203 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d1s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: dgsmp hap103 hap203
    -- Device Group Status --
    Device Group Status
    Device group status: dgsmp Online
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: rg-smp has-res SDP1 SMFswitch
    -- Resource Groups --
    Group Name Node Name State
    Group: rg-smp hap103 Pending online
    Group: rg-smp hap203 Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: has-res hap103 Offline Unknown - Starting
    Resource: has-res hap203 Offline Offline
    Resource: SDP1 hap103 Offline Unknown - Starting
    Resource: SDP1 hap203 Offline Offline
    Resource: SMFswitch hap103 Offline Offline
    Resource: SMFswitch hap203 Offline Offline
    hap103>
    hap103>
    hap103> metastat -s dgsmp
    dgsmp/d120: Mirror
    Submirror 0: dgsmp/d121
    State: Needs maintenance
    Submirror 1: dgsmp/d122
    State: Needs maintenance
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 716695680 blocks
    dgsmp/d121: Submirror of dgsmp/d120
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
    metareplace dgsmp/d120 d5s0 <new device>
    Size: 716695680 blocks
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Hot Spare
    d1s0 0 No Maintenance
    d2s0 0 No Maintenance
    d3s0 0 No Maintenance
    d4s0 0 No Maintenance
    d5s0 0 No Last Erred
    dgsmp/d122: Submirror of dgsmp/d120
    State: Needs maintenance
    Invoke: after replacing "Maintenance" components:
    metareplace dgsmp/d120 d6s0 <new device>
    Size: 716695680 blocks
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Hot Spare
    d6s0 0 No Last Erred
    d7s0 0 No Okay
    d8s0 0 No Okay
    d9s0 0 No Okay
    d10s0 0 No Resyncing
    hap103> May 6 14:55:58 hap103 login: ROOT LOGIN /dev/pts/1 FROM ralf1
    hap103> scdidadm -l
    1 hap103:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
    2 hap103:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
    3 hap103:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
    4 hap103:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
    5 hap103:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
    6 hap103:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
    7 hap103:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
    8 hap103:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
    9 hap103:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
    10 hap103:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
    11 hap103:/dev/rdsk/c2t0d0 /dev/did/rdsk/d11
    12 hap103:/dev/rdsk/c3t0d0 /dev/did/rdsk/d12
    13 hap103:/dev/rdsk/c3t1d0 /dev/did/rdsk/d13
    14 hap103:/dev/rdsk/c3t2d0 /dev/did/rdsk/d14
    15 hap103:/dev/rdsk/c3t3d0 /dev/did/rdsk/d15
    hap103>
    hap103>
    hap103> more /etc/vfstab
    #device device  mount   FS      fsck    mount   mount
    #to     mount   to      fsck            point           type    pass    at boot options
    #/dev/dsk/c1d0s2        /dev/rdsk/c1d0s2        /usr    ufs     1       yes     -
    fd      -       /dev/fd fd      -       no      -
    /proc   -       /proc   proc    -       no      -
    /dev/md/dsk/d20 -       -       swap    -       no      -
    /dev/md/dsk/d10 /dev/md/rdsk/d10        /       ufs     1       no      logging
    #/dev/dsk/c3t0d0s3      /dev/rdsk/c3t0d0s3      /globaldevices  ufs     2       yes     logging
    /dev/md/dsk/d60 /dev/md/rdsk/d60        /in     ufs     2       yes     logging
    /dev/md/dsk/d40 /dev/md/rdsk/d40        /in/oracle      ufs     2       yes     logging
    /dev/md/dsk/d50 /dev/md/rdsk/d50        /indelivery     ufs     2       yes     logging
    swap    -       /tmp    tmpfs   -       yes     -
    /dev/md/dsk/d30 /dev/md/rdsk/d30        /global/.devices/node@1 ufs     2       no      global
    /dev/md/dgsmp/dsk/d120  /dev/md/dgsmp/rdsk/d120 /in/smp ufs     2       yes     logging,global
    #RALF1:/in/RALF1 - /inbackup/RALF1 nfs - yes rw,bg,soft
    hap103> df -h
    df: unknown option: h
    Usage: df [-F FSType] [-abegklntVv] [-o FSType-specific_options] [directory | block_device | resource]
    hap103> df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 4339374 3429010 866971 80% /
    /proc 0 0 0 0% /proc
    fd 0 0 0 0% /dev/fd
    mnttab 0 0 0 0% /etc/mnttab
    swap 22744256 136 22744120 1% /var/run
    swap 22744144 24 22744120 1% /tmp
    /dev/md/dsk/d50 1021735 2210 958221 1% /indelivery
    /dev/md/dsk/d60 121571658 1907721 118448221 2% /in
    /dev/md/dsk/d40 1529383 1043520 424688 72% /in/oracle
    /dev/md/dsk/d33 194239 4901 169915 3% /global/.devices/node@2
    /dev/md/dsk/d30 194239 4901 169915 3% /global/.devices/node@1
    ------------------log_hap203---------------------------------
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@8,0
    1. c0t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@9,0
    2. c0t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@a,0
    3. c0t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@b,0
    4. c0t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1c,600000/pci@1/scsi@5/sd@c,0
    5. c1t8d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@8,0
    6. c1t9d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@9,0
    7. c1t10d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@a,0
    8. c1t11d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@b,0
    9. c1t12d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1d,700000/pci@2/scsi@5/sd@c,0
    10. c3t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@0,0
    11. c3t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@1,0
    12. c3t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@2,0
    13. c3t3d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@1f,700000/scsi@2/sd@3,0
    Specify disk (enter its number): ^D
    hap203>
    hap203> scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: hap103 Online
    Cluster node: hap203 Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: hap103:ce7 hap203:ce7 Path online
    Transport path: hap103:ce3 hap203:ce3 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: hap103 1 1 Online
    Node votes: hap203 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d1s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: dgsmp hap103 hap203
    -- Device Group Status --
    Device Group Status
    Device group status: dgsmp Online
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: rg-smp has-res SDP1 SMFswitch
    -- Resource Groups --
    Group Name Node Name State
    Group: rg-smp hap103 Pending online
    Group: rg-smp hap203 Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: has-res hap103 Offline Unknown - Starting
    Resource: has-res hap203 Offline Offline
    Resource: SDP1 hap103 Offline Unknown - Starting
    Resource: SDP1 hap203 Offline Offline
    Resource: SMFswitch hap103 Offline Offline
    Resource: SMFswitch hap203 Offline Offline
    hap203>
    hap203>
    hap203> devfsadm -C
    hap203>
    hap203> scdidadm -l
    1 hap203:/dev/rdsk/c0t8d0 /dev/did/rdsk/d1
    2 hap203:/dev/rdsk/c0t9d0 /dev/did/rdsk/d2
    3 hap203:/dev/rdsk/c0t10d0 /dev/did/rdsk/d3
    4 hap203:/dev/rdsk/c0t11d0 /dev/did/rdsk/d4
    5 hap203:/dev/rdsk/c0t12d0 /dev/did/rdsk/d5
    6 hap203:/dev/rdsk/c1t8d0 /dev/did/rdsk/d6
    7 hap203:/dev/rdsk/c1t9d0 /dev/did/rdsk/d7
    8 hap203:/dev/rdsk/c1t10d0 /dev/did/rdsk/d8
    9 hap203:/dev/rdsk/c1t11d0 /dev/did/rdsk/d9
    10 hap203:/dev/rdsk/c1t12d0 /dev/did/rdsk/d10
    16 hap203:/dev/rdsk/c2t0d0 /dev/did/rdsk/d16
    17 hap203:/dev/rdsk/c3t0d0 /dev/did/rdsk/d17
    18 hap203:/dev/rdsk/c3t1d0 /dev/did/rdsk/d18
    19 hap203:/dev/rdsk/c3t2d0 /dev/did/rdsk/d19
    20 hap203:/dev/rdsk/c3t3d0 /dev/did/rdsk/d20
    hap203> May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@8,0 (sd53):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
    May 6 15:05:58 hap203 scsi: Requested Block: 61 Error Block: 61
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG6Y
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 8 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 63 Error Block: 63
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Fatal
    May 6 15:05:58 hap203 scsi: Requested Block: 66 Error Block: 66
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1097 Error Block: 1097
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (scsi parity error), ASCQ: 0x0, FRU: 0x0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5 (qus1):
    May 6 15:05:58 hap203 Target synch. rate reduced. tgt 12 lun 0
    May 6 15:05:58 hap203 scsi: WARNING: /pci@1c,600000/pci@1/scsi@5/sd@c,0 (sd57):
    May 6 15:05:58 hap203 Error for Command: write Error Level: Retryable
    May 6 15:05:58 hap203 scsi: Requested Block: 1100 Error Block: 1100
    May 6 15:05:58 hap203 scsi: Vendor: FUJITSU Serial Number: 0444Q0GG2L
    May 6 15:05:58 hap203 scsi: Sense Key: Aborted Command
    May 6 15:05:58 hap203 scsi: ASC: 0x47 (s

    First question is what HBA and driver combination are you using?
    Next do you have MPxIO enabled or disabled?
    Are you using SAN switches? If so, whose, at what firmware level, and in what configuration (i.e. single switch, cascade of multiple switches, etc.)?
    What are the distances from the nodes to the storage (include any fabric switches and ISLs if there are multiple switches), and what media are you using as a transport (copper, or single-mode/multi-mode fibre)?
    What is the configuration of your storage ports (fabric point-to-point, loop, etc.)? If loop, what are the AL_PAs for each connection?
    The more you leave out of your question the harder it is to offer suggestions.
    Feadshipman
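
    The SCSI warnings above (aborted commands, parity errors, "Target synch. rate reduced") on the /pci@1c,600000/pci@1/scsi@5 bus often point at a physical problem on that shared SCSI chain (cabling, termination, or a failing disk/controller on the 3310) rather than at Sun Cluster itself. A couple of generic Solaris commands that may help gather the details the reply above asks for (a sketch; nothing here is specific to this site):
    # per-device soft/hard/transport error counters
    iostat -En
    # attachment-point view of the controllers and targets
    cfgadm -al
    # after the bus is fixed, rebuild the device tree and the DID database
    devfsadm -C
    scdidadm -r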

  • Exchange 2010 - Clustering & DAG problems after restoring CLUSTER from domain

    Hi there!
    SITE 1
    Primary EXCHANGE server (2010SP3 with latest CU) with all the roles installed
    SITE 2
    Secondary Exchange server (2010 SP3 with latest CU) with only the Mailbox role, for DAG purposes.
    SITE 1 and SITE 2 are connected with a site-to-site VPN.
    Both servers run Windows Server 2008 R2 Enterprise.
    About 3-4 months ago we accidentally deleted the DAG object from the domain. We managed to restore it using an AD restore and checked that the DAG is a member of all the required Exchange groups in the domain.
    Now we have a big problem: if the site-to-site VPN drops, our primary Exchange server in SITE1 stops working.
    If the VPN between the sites drops, OWA becomes unavailable, as if the Exchange servers thought the server in SITE2 were the primary.
    Please advise us how to track down and repair the root of the problem.
    With best regards,
    bostjanc

    Running the command:
    Get-MailboxDatabaseCopyStatus -Server "exchangesrvname" | FL MailboxServer,*database*,Status,ContentIndexState
    gives output showing that all the databases are healthy:
    Example of 1 database report:
    MailboxServer      : ExchangeSRVname
    DatabaseName       : DatabaseName1
    ActiveDatabaseCopy : exchange2010
    Status             : Mounted
    ContentIndexState  : Healthy
    Running the command:
    Test-ReplicationHealth -Server "exchange2010.halcom.local" | FL
    also reports that everything is fine.
    We still need to solve this issue, so we will unmark the thread as answered.
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ClusterService
    CheckDescription : Checks if the cluster service is healthy.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ReplayService
    CheckDescription : Checks if the Microsoft Exchange Replication service is running.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ActiveManager
    CheckDescription : Checks that Active Manager is running and has a valid role.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : TasksRpcListener
    CheckDescription : Checks that the Tasks RPC Listener is running and is responding to remote requests.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : TcpListener
    CheckDescription : Checks that the TCP Listener is running and is responding to requests.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ServerLocatorService
    CheckDescription : Checks that the Server Locator Service is running and is responding to requests.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : DagMembersUp
    CheckDescription : Verifies that the members of a database availability group are up and running.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : ClusterNetwork
    CheckDescription : Checks that the networks are healthy.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : QuorumGroup
    CheckDescription : Checks that the quorum and witness for the database availability group is healthy.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    RunspaceId       : c8c20c41-7b3e-463c-9c98-56785d62c74b
    Server           : ExchangeSRVname
    Check            : FileShareQuorum
    CheckDescription : Verifies that the path used for the file share witness can be reached.
    Result           : Passed
    Error            :
    Identity         :
    IsValid          : True
    ObjectState      : New
    bostjanc

  • OCFS2 Configuration Problem

    Hi Guys
    I am trying to configure OCFS2 on my 2 nodes. Here is my configuration:
    OS: CentOS Plus 5.2
    Kernel: Linux rac1 2.6.18-128.1.6.el5.centos.plus #1 SMP Thu Apr 2 12:53:36 EDT 2009 i686 i686 i386 GNU/Linux
    I have installed the following ocfs2 RPMs:
    [root@rac1 ~]# rpm -qa |grep ocfs
    ocfs2-tools-1.4.1-1.el5
    ocfs2console-1.4.1-1.el5
    ocfs2-2.6.18-128.1.6.el5-1.4.1-1.el5
    [root@rac1 ~]#
    When I try to enable o2cb, I get the following errors:
    [root@rac1 ~]# /etc/init.d/o2cb enable
    Writing O2CB configuration: OK
    Loading filesystem "ocfs2_dlmfs": Unable to load filesystem "ocfs2_dlmfs"
    Failed
    [root@rac1 ~]#
    [root@rac1 ~]# /etc/init.d/o2cb status
    Driver for "configfs": Loaded
    Filesystem "configfs": Mounted
    Driver for "ocfs2_dlmfs": Not loaded
    Checking O2CB cluster ocfs2: Offline
    I have disabled SELinux:
    [root@rac1 ~]# cat /etc/selinux/config
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    # enforcing - SELinux security policy is enforced.
    # permissive - SELinux prints warnings instead of enforcing.
    # disabled - SELinux is fully disabled.
    SELINUX=disabled
    # SELINUXTYPE= type of policy in use. Possible values are:
    # targeted - Only targeted network daemons are protected.
    # strict - Full SELinux protection.
    SELINUXTYPE=targeted
    [root@rac1 ~]#
    Any help will be highly appreciated
    Regards
    PG

    I just updated all my machines using RHN. Well, the result is this:
    [root@IPPRDN1 ~]# /etc/init.d/o2cb start
    Loading filesystem "ocfs2_dlmfs": Unable to load filesystem "ocfs2_dlmfs"
    Failed
    What is going on here is that I used OCFS2 RPMs downloaded from the Oracle site some time ago; specifically, these packages:
    [root@IPPRDN1 ocfs2]# ls -la
    total 15324
    drwx------ 3 root root 4096 Sep 14 06:19 .
    drwx------ 10 root root 4096 Sep 13 13:20 ..
    -rwx------ 1 root root 461341 Sep 7 06:26 ocfs2-1_4-usersguide.pdf
    -rwx------ 1 root root 333906 Sep 7 06:26 ocfs2-2.6.18-194.11.3.el5-1.4.7-1.el5.x86_64.rpm
    -rwx------ 1 root root 10611977 Sep 7 06:26 ocfs2-2.6.18-194.11.3.el5-debuginfo-1.4.7-1.el5.x86_64.rpm
    -rwx------ 1 root root 340353 Sep 14 04:56 ocfs2console-1.4.4-1.el5.x86_64.rpm
    -rwx------ 1 root root 1549393 Sep 14 04:56 ocfs2-tools-1.4.4-1.el5.x86_64.rpm
    -rwx------ 1 root root 2191217 Sep 14 04:56 ocfs2-tools-debuginfo-1.4.4-1.el5.x86_64.rpm
    -rwx------ 1 root root 136780 Sep 14 04:56 ocfs2-tools-devel-1.4.4-1.el5.x86_64.rpm
    drwx------ 6 root root 4096 Sep 13 11:57 .svn
    Now the OCFS2 partitions are not mounted at boot time. My /etc/fstab file looks like this (ocfs2 records only):
    /dev/sdb /ip ocfs2 _netdev,datavolume,nointr        0       0
    /dev/sdc /ip_backup ocfs2 _netdev,nointr                         0       0
    And finally I get the message Unable to load filesystem "ocfs2_dlmfs". So what is the root cause? It seems to be quite prosaic. Before the update I ran kernel 2.6.18-194.11.3.el5, and the ocfs2 drivers were installed accordingly under /lib/modules/2.6.18-194.11.3.el5/kernel/fs/ocfs2.
    After the update my uname -r changed to 2.6.18-194.26.1.el5, and here is most likely the issue:
    -bash: cd: /lib/modules/2.6.18-194.26.1.el5/kernel/fs/ocfs2: No such file or directory
    So I see 3 solutions:
    - compile ocfs2 from source;
    - edit grub (bash# nano /etc/grub.conf) to boot the old 2.6.18-194.11.3.el5 kernel again - I did this, booted the old kernel, and of course everything worked well again;
    - since there are no precompiled RPM packages for the new 2.6.18-194.26.1.el5 kernel yet, wait until they are ready and then install them on the new kernel.
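
    Both posts above appear to come down to the same root cause: the ocfs2 kernel-module package has to be built for the exact kernel that is running (the first poster runs the centos.plus kernel but installed the RPM built for the stock 2.6.18-128.1.6.el5 kernel; the second updated the kernel without a matching kmod). A quick way to verify the match (a sketch):
    # running kernel vs. installed ocfs2 kmod package
    uname -r
    rpm -qa | grep '^ocfs2'
    # the module directory must exist for the running kernel
    ls /lib/modules/$(uname -r)/kernel/fs/ocfs2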

  • Session lost problem in Weblogic cluster and Iplanet proxy

              We have an environment like this
              Iplanet 4.1--> Two Weblogic servers.(WLS 6.1 sp1)
              The Weblogic servers are not really clustered, but we are using the Weblogic Cluster
              attribute in the obj.conf to configure the proxy (the Weblogic servers are really
              independent servers with the same application deployed).
              This is working fine in our development environment. No problem with session and
              load balancing. Although this architecture is not documented in Weblogic docs,
              I have seen several references to this type of architecture on the web and it
              seems to be working.
              Now we are moving to our pre-production environment. The same architecture exists,
              with only the following differences:
              1. Both the weblogic servers run on multihomed machines. We have bound the Weblogic
              servers to specific IP addresses using the Listen address option.
              2. The same IP addresses are there in the proxy plug-in conf file.
              3. There is a firewall between the Iplanet and Weblogic.
              Now if only one weblogic is running, the application works fine. The moment we
              turn the other weblogic on, the application starts misbehaving. The session seems
              to get lost and the proxy forwards requests randomly.
              What could be the reason?
              Regards
              Anup
              

              Hi
              The problem got solved. There was an older version of libproxy.so in the Iplanet
              proxy
              Regards
              Anup
              Yeshwant Kamat <[email protected]> wrote:
              >Anup,
              >
              >Is there a reason you are not clustering the WLS instances? Remove the
              >firewall in your
              >prod environment and see if that makes a difference.
              >
              >Anup wrote:
              >
              >> Hi Mike
              >> 1. As per the documentation WebLogic Server is set up to handle session
              >tracking
              >> by default. So we are not doing anything special. Since the application
              >works
              >> fine with one Weblogic and multiple clients connecting to it through
              >Iplanet ,
              >> I think there is no problem with Session tracking as such.
              >>
              >> 2.We are using the default cookie name for session (JSESSIONID). So
              >we haven't
              >> done anything extra in the proxy set up or weblogic.xml
              >>
              >> 3. I have watched the cookie that comes on the browser
              >> PAACk3iviDm4ZuMPIbB9TpTTw9slk40IEC02MKjpu14EZ9ayzqaP!-1196227542!gmbpds054!7015!7016.
              >>
              >> gmbpds054 is the DNS name of one of the weblogic servers. So that is
              >also fine.
              >>
              >> What else could be the problem.
              >> Regards
              >> Anup
              >>
              >> "Mike Reiche" <[email protected]> wrote:
              >> >
              >> >If you have session tracking turned on in weblogic, creating a session
              >> >will write
              >> >a cookie back to the browser. iPlanet does sticky load balancing
              >based
              >> >on the
              >> >IP address in this cookie. So -
              >> >
              >> >1) do you have session tracking turned on?
              >> >
              >> >2) is the cookie getting written to your browser?
              >> >
              >> >3) are iPlanet and WebLogic using the same cookie? (same name)
              >> >
              >> >Mike
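
    For reference, the "Weblogic Cluster attribute in the obj.conf" mentioned above is the WebLogicCluster parameter of the iPlanet NSAPI proxy plug-in (the same libproxy.so that turned out to be stale). A rough sketch of the relevant obj.conf entries for WLS 6.1 - the shared-library path, the ppath pattern and the host:port list are placeholders:
    Init fn="load-modules" funcs="wl_proxy,wl_init" shlib="/path/to/libproxy.so"
    Init fn="wl_init"
    <Object name="weblogic" ppath="*/myapp/*">
    Service fn="wl_proxy" WebLogicCluster="server1:7001,server2:7001"
    </Object>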
              

  • Conflict problem in a cluster

    Hello,
              We are using WLS 6.0 SP2 clustering (EJB) on AIX 4.3. We have two nodes
              in the cluster, and when the servers are started and join the cluster, we
              get the error below on both nodes.
              Does anyone have an idea?
              Thanks
              Emmanuel
              <Sep 25, 2001 11:30:49 AM CEST> <Info> <Cluster> <Adding server 7108643983580856455S:172.18.50.1:[6001,6001,6002,6002,6001,6002,-1] to cluster view>
              <Sep 25, 2001 11:30:50 AM CEST> <Info> <Cluster> <Adding 7108643983580856455S:172.18.50.1:[6001,6001,6002,6002,6001,6002,-1] to the cluster>
              <Sep 25, 2001 11:30:51 AM CEST> <Error> <Cluster> <Conflict start: You tried to bind an object under the name weblogic.transaction.coordinators.DemoServer in the jndi tree. The object you have bound weblogic.transaction.internal.CoordinatorImpl from 172.18.50.2 is non clusterable and you have tried to bind more than once from two or more servers. Such objects can only deployed from one server.>
              <Sep 25, 2001 11:30:51 AM CEST> <Error> <Cluster> <Conflict start: You tried to bind an object under the name weblogic.transaction.coordinators.DemoServer in the jndi tree. The object you have bound weblogic.transaction.internal.CoordinatorImpl_WLStub from 172.18.50.1 is non clusterable and you have tried to bind more than once from two or more servers. Such objects can only deployed from one server.>
              <Sep 25, 2001 11:30:52 AM CEST> <Error> <Cluster> <Conflict start: You tried to bind an object under the name weblogic.management.home.DemoServer in the jndi tree. The object you have bound weblogic.management.internal.AdminMBeanHomeImpl from 172.18.50.2 is non clusterable and you have tried to bind more than once from two or more servers. Such objects can only deployed from one server.>
              <Sep 25, 2001 11:30:52 AM CEST> <Error> <Cluster> <Conflict start: You tried to bind an object under the name weblogic.management.adminhome in the jndi tree. The object you have bound weblogic.management.internal.AdminMBeanHomeImpl_WLStub from 172.18.50.1 is non clusterable and you have tried to bind more than once from two or more servers. Such objects can only deployed from one server.>
              

    Emmanuel,
              Do you have the admin server as part of the cluster? If yes, then remove it and
              configure a managed server to be part of the cluster.
              

  • Problem sending Tab/Cluster control image to Word report.

    I've set up an option for the user to send a control image to a Word report, where they can decide to print and/or save from Word (I've found the normal Print option too slow with the printer available). The Word document opens fine, but I am losing data from my indicators, without which there's not much point in printing!
    Is there a problem with clusters? I have a chart and a table on another page of the tab control in my main program and they come out fine.
    I'm using LV8 with the Report toolkit and Word 2000
    Thanks
    Ian
    Attachments:
    Send to Word.vi (34 KB)
    Send to Wordp.png (22 KB)

    This problem has been reported in previous posts on the LabVIEW forums.  LabVIEW R&D is aware of the problem, and it should be fixed in a future LabVIEW version.  For now, the only workaround I know of is to make extra space in your clusters so the controls are visible in the report.  I know it's ugly, but it's the only thing I know that works (other than reverting back to LabVIEW 7.1).
    -D
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman

  • Solaris boot problem after Patch cluster install

    Just wondering if anyone can assist. I have just installed the Solaris 10 x86 recommended patch cluster on a 16-disk server, where the first 2 disks are a mirrored pool called rpool and the remaining 14 disks are a RAID-Z pool called spool. After installing the patches successfully and rebooting the server, I get the following error:
    NOTICE: Can not read the pool label from '/pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@0,0:a /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@1,0:a'
    NOTICE: spa_import_rootpool: error 5
    Cannot mount root on /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@0,0:a /pci@0,0/pci8086,25f8@4/pci111d,801c@0/pci111d,801c@4/pci108e,286@0/disk@1,0:a fstype zfs
    panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
    fffffffffbc4b190 genunix:vfs_mountroot+323 ()
    fffffffffbc4b1d0 genunix:main+a9 ()
    fffffffffbc4b1e0 unix:_start+95 ()
    skipping system dump - no dump device configured
    rebooting...
    It looks like Solaris 10 cannot find the ZFS filesystem and keeps rebooting in normal mode. When I boot to failsafe mode and type zfs list, I can see the rpool (mirrored) filesystem but not the spool (RAID-Z) filesystem. Any ideas how I can fix the boot problem and get the spool back in the safest way possible?

    Same problem here with the current recommended patch set.
    NOTICE: Can not read the pool label from '/pci@0,0/pci10de,375@f/pci1000,3150@0/sd@0,0:a /pci@0,0/pci10de,375@f/pci1000,3150@0/sd@1,0:a'
    NOTICE: spa_import_rootpool: error 5
    Cannot mount root on /pci@0,0/pci10de,375@f/pci1000,3150@0/sd@0,0:a /pci@0,0/pci10de,375@f/pci1000,3150@0/sd@1,0:a fstype zfs
    panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
    fffffffffbc4b190 genunix:vfs_mountroot+323 ()
    fffffffffbc4b1d0 genunix:main+a9 ()
    fffffffffbc4b1e0 unix:_start+95 ()
    What the hell happened to the boot sector or the pool or whatever?
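
    A few generic checks from failsafe mode that are sometimes useful after a patch cluster breaks ZFS booting (a sketch only; device names are placeholders and the right fix depends on what the pool status actually shows):
    # is the root pool importable and healthy?
    zpool import
    zpool status rpool
    # does the pool still know which dataset to boot from?
    zpool get bootfs rpool
    # if the boot archive is stale, rebuild it against the root mounted at /a
    bootadm update-archive -R /a
    # if the GRUB stages were clobbered, reinstall them on both halves of the root mirror (x86)
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0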

  • Problem found in cluster install

    Hi,
    While installing a cluster server (an additional J2EE instance on a new server) in EP6 SP2 Patch 5 with J2EE PL26, I see one error - only one, a surprise - in the server node startup on the cluster. Does anyone know if it is significant?
      Error : Background synchronization of application [bcbici] did not succeed, because of : java.rmi.RemoteException: Error occured while synchronizing application [bcbici]. Committing deploy to container - EJBContainer failed. com.inqmy.services.deploy.container.DeploymentException: ID020230 Cannot load bean <SrvContainerBean>, because Exception occured in object instantiation: java.lang.NoClassDefFoundError: com/sap/bcb/common/InternalException; nested exception is:
            com.inqmy.services.deploy.container.DeploymentException: ID020230 Cannot load bean <SrvContainerBean>, because Exception occured in object instantiation: java.lang.NoClassDefFoundError: com/sap/bcb/common/InternalException

    Everything else - including irj - synchronised, or reported that it did. I checked the number of files - even allowing for log and other trace files, the numbers were of the order of 4000 files adrift. Worrying.
    I understand that many files are not copied anyway - SDM, other tools .... because they are centrally operated?

  • Problem encountered in Cluster Communication for WLS 6.1 SP2

              Hi,
              Currently, we have an admin server with 2 clusters of managed servers. We have
              noticed that the managed servers in one cluster communicate with the servers
              in the other cluster using port 7001. Is there any way to stop the communication
              between the 2 clusters? It causes our servers to hang when one of the managed
              servers hangs. Thanks.
              We are using WebLogic 6.1 SP 2.
              Regards,
              apple
              

              They are different. We have configured them differently.
              "Sree Bodapati" <[email protected]> wrote:
              >Is the multicast address for the two clusters different?
              >
              >sree
              >
              

  • Having a problem starting my cluster after my servers have been relocated

    crsctl check crs hangs indefinitely. CRS is healthy on only 1 node, node #1 in the cluster. Please help.
    Edited by: 895124 on Nov 4, 2011 8:22 AM

    Check if Clusterware can see all the nodes:
    #$CRS_HOME/bin> olsnodes
    If you can see all the nodes, do the following:
    #$CRS_HOME/bin> ./crsctl disable crs
    # reboot the node
    After the system comes back up, run:
    #$CRS_HOME/bin> ./crsctl start crs
    # tail /var/log/messages
    Paste the output here so we can see what exactly is going on. BTW, crsctl disable crs only stops the CRS stack from auto-starting on the next reboot; you can always run 'crsctl enable crs' to put it back.
    HTH,
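
    If crsctl itself hangs, the Clusterware alert log is usually more informative than the command output. A sketch of where to look (the exact path depends on the Clusterware version and home; the layout below is the pre-12c one):
    # Clusterware alert log for this node
    tail -100 $CRS_HOME/log/$(hostname)/alert$(hostname).log
    # node list with node numbers, as seen by the healthy node
    olsnodes -n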
