Shared Storage Check

Hi all,
We are planning to add a node to our existing RAC deployment (database: 10gR2, OS: Sun Solaris 5.9). Currently the shared storage is an IBM SAN.
When I run the shared storage check using cluvfy, it fails to detect any shared storage. Given that I can probably ignore this error message (since cluvfy doesn't work with SAN, I believe), how can I check whether the storage is shared or not?
Note
When I look at the partition table from both servers, it looks the same (for the SAN drive, of course), but the names/labels of the storage devices are different. For example, the existing node shows c6t0d0, but the new node, which is to be added, shows something different. Is that OK?
regards,
Muhammad Riaz

Never mind. I found a solution at http://www.idevelopment.info.
(1) Create the following directory structure on the second node (same as the first node), with the same permissions as on the existing node:
/asmdisks
 - crs
 - disk1
 - disk2
 - vote
(2) Use ls -lL /dev/rdsk/<Disk> to find the major and minor IDs of the shared disk, then bind those IDs to the relevant device files above with the mknod command:
# ls -lL /dev/rdsk/c4t0d0*
crw-r-----   1 root     sys       32,256 Aug  1 11:16 /dev/rdsk/c4t0d0s0
crw-r-----   1 root     sys       32,257 Aug  1 11:16 /dev/rdsk/c4t0d0s1
crw-r-----   1 root     sys       32,258 Aug  1 11:16 /dev/rdsk/c4t0d0s2
crw-r-----   1 root     sys       32,259 Aug  1 11:16 /dev/rdsk/c4t0d0s3
crw-r-----   1 root     sys       32,260 Aug  1 11:16 /dev/rdsk/c4t0d0s4
crw-r-----   1 root     sys       32,261 Aug  1 11:16 /dev/rdsk/c4t0d0s5
crw-r-----   1 root     sys       32,262 Aug  1 11:16 /dev/rdsk/c4t0d0s6
crw-r-----   1 root     sys       32,263 Aug  1 11:16 /dev/rdsk/c4t0d0s7
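# (minor numbers taken from the ls -lL output above: 257 = slice 1, 259 = slice 3, 260 = slice 4, 261 = slice 5)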
mknod /asmdisks/crs   c 32 257
mknod /asmdisks/disk1 c 32 260
mknod /asmdisks/disk2 c 32 261
mknod /asmdisks/vote  c 32 259
# ls -lL /asmdisks
total 0
crw-r--r--   1 root     oinstall  32,257 Aug  3 09:07 crs
crw-r--r--   1 oracle   dba       32,260 Aug  3 09:08 disk1
crw-r--r--   1 oracle   dba       32,261 Aug  3 09:08 disk2
crw-r--r--   1 oracle   oinstall  32,259 Aug  3 09:08 vote
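
To double-check sharedness without cluvfy, a simple manual test (my own sketch, not from the idevelopment article) is to write a marker block from one node and read it back from the other. Run it only against a slice that holds no data yet, because it overwrites the first block:
# On the existing node: write a recognizable tag to the first block of an UNUSED slice
echo "shared-test-marker" > /tmp/tag
dd if=/tmp/tag of=/dev/rdsk/c4t0d0s4 bs=512 count=1
# On the new node: read the first block back through whatever c#t#d# name the same LUN has there
dd if=/dev/rdsk/<Disk>s4 bs=512 count=1 2>/dev/null | strings
If the new node reads back the tag, the LUN is shared. The differing c#t#d# names you noticed are fine: they are host-local labels, which is exactly why the mknod trick above maps each node's own device names to common /asmdisks paths.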

Similar Messages

  • Shared storage check failed on nodes

    Hi friends,
    I am installing RAC 10g on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the command below
    ./runcluvfy.sh stage -post hwos -n rac1,rac2, I get the following error:
    node connectivity check failed.
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sde on nodes:
    rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1
    Shared storage check failed on nodes "rac2,rac1"
    Please help me, anyone; it's urgent.
    Thanks,
    poorna.

    Hello,
    It seems that your storage is not accessible from both nodes. If you want, you can follow these steps to configure 10g RAC on VMware.
    Steps to configure a two-node 10g RAC on RHEL-4
    Remark-1: H/W requirement for RAC
    a) 4 Machines
    1. Node1
    2. Node2
    3. storage
    4. Grid Control
    b) 2 switches
    c) 6 straight cables
    Remark-2: S/W requirement for RAC
    a) 10g clusterware
    b) 10g database
    Both must be the same version (e.g., 10.2.0.1.0)
    Remark-3: RPMs requirement for RAC
    a) all 10g rpms (better to use RHEL-4 and choose the 'Everything' option to install all the rpms)
    b) 4 new rpms are required for installation:
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    ------------ Start Machine Preparation --------------------
    1. Prepare 3 machines
    i. node1.oracle.com
    eth0 (192.9.201.183) - for public network
    eth1 (10.0.0.1) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    ii. node2.oracle.com
    eth0 (192.9.201.187) - for public network
    eth1 (10.0.0.2) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    iii. openfiler.oracle.com
    eth0 (192.9.201.182) - for public network
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    NOTE:-
    -- Here eth0 of all the nodes should be connected by Public N/W using SWITCH-1
    -- eth1 of all the nodes should be connected by Private N/W using SWITCH-2
    2. Network configuration
    #vim /etc/hosts
    192.9.201.183 node1.oracle.com node1
    192.9.201.187 node2.oracle.com node2
    192.9.201.182 openfiler.oracle.com openfiler
    10.0.0.1 node1-priv.oracle.com node1-priv
    10.0.0.2 node2-priv.oracle.com node2-priv
    192.9.201.184 node1-vip.oracle.com node1-vip
    192.9.201.188 node2-vip.oracle.com node2-vip
    3. Prepare both nodes for installation
    a. Set Kernel Parameters (/etc/sysctl.conf)
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
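    After saving these values, apply them without a reboot (a standard step, not shown in the original):
    # sysctl -p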
    b. Configure /etc/security/limits.conf file
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    c. Configure /etc/pam.d/login file
    session required /lib/security/pam_limits.so
    d. Create user and groups on both nodes
    # groupadd oinstall
    # groupadd dba
    # groupadd oper
    # useradd -g oinstall -G dba oracle
    # passwd oracle
    e. Create the required directories and set the ownership and permissions.
    # mkdir -p /u01/crs1020
    # mkdir -p /u01/app/oracle/product/10.2.0/asm
    # mkdir -p /u01/app/oracle/product/10.2.0/db_1
    # chown -R oracle:oinstall /u01/
    # chmod -R 755 /u01/
    f. Set the environment variables
    $ vi .bash_profile
    ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
    ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
    #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
    #LANG="en_US"; export LANG
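    The profile above exports only the base and CRS home. A fuller sketch, assuming the database home created in step (e), would also set ORACLE_HOME and PATH:
    ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1; export ORACLE_HOME
    PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH; export PATH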
    4. Storage configuration
    PART-A Open-filer Set-up
    Install openfiler on a machine (Leave 60GB free space on the hdd)
    a) Login to root user
    b) Start iSCSI target service
    # service iscsi-target start
    # chkconfig --level 345 iscsi-target on
    PART-B Configuring storage on openfiler
    a) From any client machine, open the browser and access the openfiler console (port 446):
    https://192.9.201.182:446/
    b) Open the System tab and update the local N/W configuration for both nodes with netmask (255.255.255.255).
    c) From the Volume tab click "create a new physical volume group".
    d) From "block Device managemrnt" click on "(/dev/sda)" option under 'edit disk' option.
    e) Under "Create a partition in /dev/sda" section create physical Volume with full size and then click on 'CREATE'.
    f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"
    g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".
    h) Then go to the "Volume Section" on the right hand side tab and then click on "Add Volumes" and then specify the Volume name (ex- racvol1) and use all space and specify the "Filesytem/Volume type" as ISCSI and then click on CREATE.
    i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.
    j) then goto the 'LUN Mapping" and click on "MAP".
    k) then goto the "Network ACL" and allow both node from there and click on UPDATE.
    Note:- To create multiple volumes with openfiler we need to use Multipathing that is quite complex that’s why here we are going for a single volume. Edit the property of each volume and change access to allow.
    l) Install the iscsi-initiator rpm on both nodes to access the iscsi disk:
    #rpm -ivh iscsi-initiator-utils-----------
    m) Make an entry about openfiler in the iscsi.conf file on both nodes:
    #vim /etc/iscsi.conf (in RHEL-4)
    In this file you will find the line "#DiscoveryAddress=192.168.1.2"; remove the comment and specify your storage IP address there.
    OR
    #vim /etc/iscsi/iscsi.conf (in RHEL-5)
    In this file you will find the line "#ins.address = 192.168.1.2"; remove the comment and specify your storage IP address there.
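    For example, with the openfiler address used above, the edited line (RHEL-4 syntax) ends up as:
    DiscoveryAddress=192.9.201.182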
    n) #service iscsi restart (on both nodes)
    o) From both nodes, run this command to access the openfiler volume:
    # iscsiadm -m discovery -t sendtargets -p 192.9.201.182
    p) #service iscsi restart (on both nodes)
    q) #chkconfig --level 345 iscsi on (on both nodes)
    r) Make 3 primary partitions and 1 extended partition, and within the extended partition create 11 logical partitions:
    A. Prepare partitions
    1. #fdisk /dev/sdb
    :n (new partition)
    :e (extended)
    Part No. 1
    First Cylinder: <accept default>
    Last Cylinder: <accept default, use the whole disk>
    :p (print the table)
    :n (new partition)
    :l (logical)
    First Cylinder: <accept default>
    Last Cylinder: +1024M
    (repeat :n / :l until all 11 logical partitions are created)
    2. Note the /dev/sdb* names.
    3. #partprobe
    4. Log in as root on node2 and run partprobe.
    B. On node1, log in as root and create the following raw devices:
    # raw /dev/raw/raw5 /dev/sdb5
    # raw /dev/raw/raw6 /dev/sdb6
    (... and so on for each partition ...)
    # raw /dev/raw/raw12 /dev/sdb12
    Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above.
    - Repeat the same on node2.
    C. On node1 as root user
    # vi /etc/sysconfig/rawdevices
    /dev/raw/raw5 /dev/sdb5
    /dev/raw/raw6 /dev/sdb6
    /dev/raw/raw7 /dev/sdb7
    /dev/raw/raw8 /dev/sdb8
    /dev/raw/raw9 /dev/sdb9
    /dev/raw/raw10 /dev/sdb10
    /dev/raw/raw11 /dev/sdb11
    /dev/raw/raw12 /dev/sdb12
    /dev/raw/raw13 /dev/sdb13
    /dev/raw/raw14 /dev/sdb14
    /dev/raw/raw15 /dev/sdb15
    D. Restart the raw service:
    #service rawdevices restart
    Assigning devices:
    /dev/raw/raw5 --> /dev/sdb5
    /dev/raw/raw5: bound to major 8, minor 21
    /dev/raw/raw6 --> /dev/sdb6
    /dev/raw/raw6: bound to major 8, minor 22
    /dev/raw/raw7 --> /dev/sdb7
    /dev/raw/raw7: bound to major 8, minor 23
    /dev/raw/raw8 --> /dev/sdb8
    /dev/raw/raw8: bound to major 8, minor 24
    /dev/raw/raw9 --> /dev/sdb9
    /dev/raw/raw9: bound to major 8, minor 25
    /dev/raw/raw10 --> /dev/sdb10
    /dev/raw/raw10: bound to major 8, minor 26
    /dev/raw/raw11 --> /dev/sdb11
    /dev/raw/raw11: bound to major 8, minor 27
    /dev/raw/raw12 --> /dev/sdb12
    /dev/raw/raw12: bound to major 8, minor 28
    /dev/raw/raw13 --> /dev/sdb13
    /dev/raw/raw13: bound to major 8, minor 29
    /dev/raw/raw14 --> /dev/sdb14
    /dev/raw/raw14: bound to major 8, minor 30
    /dev/raw/raw15 --> /dev/sdb15
    /dev/raw/raw15: bound to major 8, minor 31
    done
    E. Repeat the same thing on node2 also
    F. To make these partitions accessible to the oracle user, run these commands on both nodes:
    # chown -R oracle:oinstall /dev/raw/raw*
    # chmod -R 755 /dev/raw/raw*
    G. To keep these partitions accessible after a restart, add these entries on both nodes:
    # vi /etc/rc.local
    chown -R oracle:oinstall /dev/raw/raw*
    chmod -R 755 /dev/raw/raw*
    5. SSH configuration (user equivalence)
    On node1:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node2:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node1:- $cd .ssh
    $cat *.pub>>node1
    On node2:- $cd .ssh
    $cat *.pub>>node2
    On node1:- $scp node1 node2:/home/oracle/.ssh
    On node2:- $scp node2 node1:/home/oracle/.ssh
    On node1:- $cat node*>>authorized_keys
    On node2:- $cat node*>>authorized_keys
    Now test the ssh configuration from both nodes
    $ vim a.sh
    ssh node1 hostname
    ssh node2 hostname
    ssh node1-priv hostname
    ssh node2-priv hostname
    $ chmod +x a.sh
    $./a.sh
    The first time you'll have to give the password; after that it never asks for a password.
    6. To run the cluster verifier
    On node1:- $cd /…/stage…/cluster…/cluvfy
    $./runcluvfy.sh stage -pre crsinst -n node1,node2
    The first time, it will complain about four missing RPMs, but remember to install these rpms by double-clicking because of the dependencies; it is better to install them in this order (rpm-3, rpm-4, rpm-1, rpm-2):
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    Then run cluvfy again, check that it comes back clean, and start the clusterware installation.

  • Runcluvfy.bat comp ssa - Shared storage check failed

    I've run cluvfy on Windows 2003 64-bit on a SAN with 3 nodes and found that the shared storage check is unsuccessful (whereas it succeeds on Windows 2003 32-bit).
    C:\Documents and Settings\Administrator>C:\_Ly1\102010_win64_x64_clusterware\clusterware\cluvfy\runcluvfy.bat comp ssa -n rac1,rac2,rac3
    The system cannot find the file specified.
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    Shared storage check failed on nodes "rac2,rac1,rac3".
    Verification of shared storage accessibility was unsuccessful on all the nodes.
    C:\Documents and Settings\Administrator>
    I'm not sure whether this will cause the Clusterware installation to fail or not.
    Here, I captured the failure screen:
    http://alkaspace.com/is.php?i=30223&img=clusterware-fai.jpg
    Please help me out. Thank you!!

    I just ran into this myself while building an enterprise system on Win Server 2003. The answers here did not sit well with me and I needed to be sure of the shared storage prior to proceeding further. Researching Metalink did not uncover any relevant information either. I then opened an SR with Oracle and I did get back a satisfactory response which allowed me to verify my shared storage. The entire text of their solution can be found at http://www.webofwood.com/rac/oracle-response-to-shared-storage-check-failed-on-nodes/. Basically, it is a method of using another utility to identify the storage device names used by Windows and then writing and reading to them from each node to verify each node 'sees' what is written by the other node(s). If this check is successful, you can then proceed.

  • 11gR2 Verification of shared storage accessibility

    Friends,
    I do not understand how this is possible. I am trying to apply the 11.2.0.2.5 PSU on a 2-node cluster running on RHEL 5.5 VMs. I followed ORACLE-BASE examples when installing this laptop RAC.
    I am not using any ACFS and none of the GI and DB homes are shared. But, on node 2, cluvfy THINKS the database home is shared.
    [oracle@rac1 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle/product/11.2.0.2/db_1 -n rac1,rac2 -display_status
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/oracle/product/11.2.0.2/db_1" is not shared
    Shared storage check failed on nodes "rac2,rac1"
    Verification of shared storage accessibility was unsuccessful on all the specified nodes.
    NODE_STATUS::rac2:VFAIL
    NODE_STATUS::rac1:VFAIL
    OVERALL_STATUS::VFAIL
    [oracle@rac1 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle_grid/product/11.2.0.2 -n rac1,rac2 -display_status
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/oracle_grid/product/11.2.0.2" is not shared
    Shared storage check failed on nodes "rac2,rac1"
    Verification of shared storage accessibility was unsuccessful on all the specified nodes.
    NODE_STATUS::rac2:VFAIL
    NODE_STATUS::rac1:VFAIL
    OVERALL_STATUS::VFAIL
    [oracle@rac1 trace]$ hostname
    rac1
    [oracle@rac1 trace]$ echo $ORACLE_HOSTNAME
    rac1
    [oracle@rac1 trace]$
    [oracle@rac2 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle/product/11.2.0.2/db_1 -n rac1,rac2 -display_status
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/oracle/product/11.2.0.2/db_1" is shared
    Shared storage check was successful on nodes "rac2,rac1"
    Verification of shared storage accessibility was successful.
    NODE_STATUS::rac2:SUCC
    NODE_STATUS::rac1:SUCC
    OVERALL_STATUS::SUCC
    [oracle@rac2 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle_grid/product/11.2.0.2/ -n rac1,rac2 -display_status
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/oracle_grid/product/11.2.0.2/" is not shared
    Shared storage check failed on nodes "rac2,rac1"
    Verification of shared storage accessibility was unsuccessful on all the specified nodes.
    NODE_STATUS::rac2:VFAIL
    NODE_STATUS::rac1:VFAIL
    OVERALL_STATUS::VFAIL
    [oracle@rac2 trace]$ hostname
    rac2
    [oracle@rac2 trace]$ echo $ORACLE_HOSTNAME
    rac2
    [oracle@rac2 trace]$
    I cannot determine the reason and do not know how to fix it.
    Any help?
    Thank you.

    Hi,
    CLUVFY COMP SSA checks whether the given storage/location is shared.
    If you do "cluvfy comp ssa -t software" cluvfy checks if your software home is shared.
    It tells you it is not. Hence the check fails (which is correct, because you said the DB home is not shared).
    So where is the problem?
    CLUVFY COMP SSA only makes sense for checking sharedness. If a location is not shared, there is no sense in testing it.
    Regards
    Sebastian
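    As a side note (my own example, assuming the documented 11.2 cluvfy syntax): CLUVFY COMP SSA is more useful pointed at storage that is actually meant to be shared, e.g. with -t data against the device itself (/dev/sdb here is just a placeholder):
    $ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -n rac1,rac2 -s /dev/sdb -t data -display_status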

  • When installing clusterware, shared storage trouble

    I was trying to install clusterware. When I typed the location of the OCR, I got the error below:
    Oracle Cluster Registry (OCR) is not shared across all the nodes in the cluster
    Then I found I cannot mount ocfs2 on both nodes at the same time, but I can mount it on either node if it is unmounted on the other.
    Can anyone give me a hand?
    Environment is as following:
    - OS: Oracle Linux 5 (update 4)
    - Openfiler 3 + ocfs2
    - Oracle 10gR2

    Hi;
    Please see:
    http://kr.forums.oracle.com/forums/thread.jspa?messageID=4254569
    Oracle 10g RAC install- OPEN FAIL ON DEV
    Oracle Cluster Registry (OCR) is not shared across all the nodes . . .
    After OCFS2 install/configure ~ Shared storage check check fails
    Problem of using OCFS2 as shared storage to install RAC 10g on VMware
    Regards
    Helios

  • My wife and I share one iTunes account but have separate Apple IDs for our devices - prior to 8.1 we shared storage on iCloud; now we can't, and when I try to buy more storage on her phone, instead of accessing the shared iTunes account it tries to sign

    My wife and I share one iTunes account but have separate Apple IDs for our devices. Prior to 8.1 we shared storage on iCloud; now we can't, and when I try to buy more storage on her phone, instead of accessing the shared iTunes account it tries to sign in to iTunes using her Apple ID. I checked the iTunes ID and password on both devices. Can anyone help?

    Have a look here...
    http://macmost.com/setting-up-multiple-ios-devices-for-messages-and-facetime.html

  • Shared storage is not showing in Server Pool physical disks

    Hi, I have added a couple of images to give you an exact picture of what I am doing; please have a look.
    OVM can see Shared storage from Openfiler
    http://www.picpanda.com/viewer.php?file=u1pf8hre1oqufkx6vyr3.gif
    OVM Server Pool (physical disk) can see the shared storage name
    http://www.picpanda.com/viewer.php?file=0h2dn48x6gh5avr6fob.gif
    OVM does not show the physical disk from the shared storage
    http://www.picpanda.com/viewer.php?file=c3njd7bg4exyjpymsit.gif
    If I remember right, when I tried an NFS share from openfiler it worked before... but this time I am trying an iSCSI target.
    Is there anything I have to do to make it work?
    Thanks

    Fosiul wrote:
    If I remember right, when I tried an NFS share from openfiler it worked before... but this time I am trying an iSCSI target. Is there anything I have to do to make it work?
    Check to see if the physical disk appears under the default volume group for your SAN. If not, refresh the SAN and wait until it does. Also, make sure you have an admin server configured for that SAN and that your Oracle VM servers are in the default access group.

  • BWA OS Upgrade Error Shared Storage

    We have recently upgraded our IBM BWA to SUSE Linux 11.1, upgraded GPFS to 3.4.0-7, and recompiled RDAC. Everything seemed to work fine and the queries execute, but when I run checkBIA, I get these warning/error messages:
    OK: ====== Shared Storage ======
    OK: Storage: /usr/sap/BXQ/TRX00/index
    OK: Logdir: /usr/sap/BXQ/SYS/global/trex/checks/report_2012-02-28_102556
    OK: Local avg directory creation/deletion: 1330.9 dirs/s, exp: 700.0 dirs/s
    ERROR: Parallel remote avg 4 hosts: 24.4 MB/s, exp: 31.0 MB/s
    ERROR: 1 hosts parallel iterative remote avg: 57.9 MB/s, exp: 90.0 MB/s
    WARNING: 2 hosts parallel iterative remote avg: 42.8 MB/s, exp: 45.0 MB/s
    WARNING: 4 hosts parallel iterative remote avg: 26.4 MB/s, exp: 31.0 MB/s
    ERROR: Serial remote avg: 59.4 MB/s, exp: 90.0 MB/s
    OK: ====== Landscape Reorganization ======
    Anyone seen this before, or have any idea what may be the cause? Thank you
    Karl

    Hello Karl,
    First, BWA is not supported on all SUSE versions. Please check http://service.sap.com/pam > "BW Accelerator 7.20". It's certainly not supported for BWA 7.0.
    Secondly, you should not upgrade the OS unless specifically instructed to do so by SAP Support or if security patches are required. Moving from SUSE 9 or 10 to 11, especially, is not recommended.
    Finally, because of #1, please contact IBM to make sure you have a supported OS version for your specific BWA hardware. IBM also needs to make sure that your hardware fulfills the minimum specifications for network and storage throughput (which is what the script checks).
    Thanks,
    Marc Bernard
    SAP Customer Solution Adoption (CSA)

  • Shared Storage RAC

    Hello,
    This is a real-world Oracle RAC 11gR2 installation; I need to configure the shared storage for a 2-node RAC on Red Hat Enterprise Linux 5.
    Could you please send me step-by-step instructions on how to do it? I want to use Device Mapper Multipath for that and ASM for storage.
    Thank you

    899660 wrote:
    Hello,
    This is a real-world Oracle RAC 11gR2 installation; I need to configure the shared storage for a 2-node RAC on Red Hat Enterprise Linux 5.
    Could you please send me step-by-step instructions on how to do it? I want to use Device Mapper Multipath for that and ASM for storage.
    Thank you
    Hi,
    Shared storage is a hardware device; it can't be created by you. Of course, you can use that shared device to configure ASM.
    Check the links below:
    http://martincarstenbach.wordpress.com/2010/11/16/configuration-device-mapper-multipath-on-oel5-update-5/
    http://www.oracle.com/technetwork/database/device-mapper-udev-crs-asm.pdf
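    For orientation, a minimal sketch of the kind of /etc/multipath.conf those guides build up (the WWID is a placeholder; get the real one from multipath -ll or scsi_id):
    defaults {
        user_friendly_names yes
    }
    multipaths {
        multipath {
            wwid  <WWID from multipath -ll>
            alias asmdisk1
        }
    }
    With an alias set, the LUN shows up as /dev/mapper/asmdisk1 with the same stable name on both nodes, which is what you then present to ASM.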

  • RAC with OCFS2 shared storage

    Hi all
    I want to create a RAC env in Oracle VM 2.2 (one server), with local disks which I used to create an LVM for the OCR in the guests:
    - two guests with Oracle Enterprise Linux 5
    - both have the ocfs2 rpm installed
    When I want to create the shared storage for the OCR, I configure cluster.conf:
    - service o2cb configure -> all ok -> on both nodes
    - service o2cb enable -> ok -> on both nodes
    - then mkfs.ocfs2 in node1
    - mount -t ocfs2 in node1
    - mount -t ocfs2 in node 2:
    [root@lin2 ~]# mount -t ocfs2 /dev/sde1 /ocr
    mount.ocfs2: Transport endpoint is not connected while mounting /dev/sde1 on /ocr. Check 'dmesg' for more information on this error.
    Jun 27 22:57:23 lin2 kernel: (o2net,1454,0):o2net_connect_expired:1664 ERROR: no connection established with node 0 after 30.0 seconds, giving up and returning errors.
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_request_join:1036 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_try_to_join_domain:1210 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_join_domain:1488 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_register_domain:1754 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):ocfs2_dlm_init:2808 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):ocfs2_mount_volume:1447 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: ocfs2: Unmounting device (8,65) on (node 1)
    Can you help me see where I am making a mistake?
    Thank you, Brano

    Please find the answer at the link below:
    http://wiki.oracle.com/page/Oracle+VM+Server+Configuration-usingOCFS2+in+a+group+of+VM+hosts+to+share+block+storage
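    One extra suggestion of mine (not from the link): "Transport endpoint is not connected" from o2net usually means the nodes cannot reach each other on the OCFS2 cluster port (7777/tcp by default; see /etc/ocfs2/cluster.conf). So before anything else, verify on both guests:
    # service o2cb status
    # iptables -L -n | grep 7777
    # telnet <other-node> 7777
    The first command should show the cluster stack online on both nodes, the second reveals any firewall rule blocking the o2net port, and the third (run from each node toward the other) confirms the port is actually reachable.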

  • Server Pool WITHOUT shared storage

    The documentation defines Server Pool as:
    "Logically an autonomous region that contains one or more physical Oracle VM Servers."
    Therefore, should it be possible to add multiple servers (physically separate VM servers) to the same Server Pool even though they are NOT using shared storage? When I tried to add the second VM Server to the Server Pool I received the following error:
    During adding servers ([vmoracle2]) to server pool (VM_Server_Pool), Cluster setup failed:
    (OVM-1011 OVM Manager communication with vmoracle1 for operation HA Setup for Oracle VM
    Agent 2.2.0 failed: <Exception: SR '/dev/sda3' not supported: type 'ocfs2.local' not in
    ['nfs', 'ocfs2.cluster']> )
    Thanks.

    Nothing is as easy as it seems when it comes to Oracle VM.
    When trying to create a new Server Pool to accommodate my second VM Server, I received the following error:
    2010-03-08 18:18:24.575 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:24.752 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:24.755 NOTIFICATION Checking agent vmoracle2 is active or not?
    2010-03-08 18:18:24.916 NOTIFICATION [Server Pool Management][Server][vmoracle2]:Check agent (vmoracle2) connectivity.
    2010-03-08 18:18:30.304 NOTIFICATION entering into assign vs action...
    2010-03-08 18:18:30.311 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:30.482 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:30.483 NOTIFICATION Checking agent vmoracle2 is active or not?
    2010-03-08 18:18:30.638 NOTIFICATION [Server Pool Management][Server][vmoracle2]:Check agent (vmoracle2) connectivity.
    2010-03-08 18:18:45.236 NOTIFICATION Getting agent version for agent:vmoracle2 ...
    2010-03-08 18:18:45.410 NOTIFICATION The agent version is 2.3-19
    2010-03-08 18:18:45.434 NOTIFICATION master server is:vmoracle2
    2010-03-08 18:18:45.435 NOTIFICATION Start to check cluster for server pool
    2010-03-08 18:18:45.581 WARNING failed:<Exception: Cluster root not found.>
    StackTrace:
    File "/opt/ovs-agent-2.3/OVSSiteCluster.py", line 535, in cluster_precheck
    clusterprecheck(single_node, ha_enable)
    File "/opt/ovs-agent-2.3/OVSSiteCluster.py", line 515, in clusterprecheck
    if not cluster_root_sr_uuid: raise Exception("Cluster root not found.")
    2010-03-08 18:18:45.582 NOTIFICATION Failed check cluster for server pool
    2010-03-08 18:18:45.583 ERROR [Server Pool Management][Server Pool][VMORACLE2_Server_Pool]:Check prerequisites to create server pool (VMORACLE2_Server_Pool) failed: (OVM-1011 OVM Manager communication with vmoracle2 for operation Pre-check cluster root for Server Pool failed:
    <Exception: Cluster root not found.>
    )
    2010-03-08 18:18:45.607 NOTIFICATION Exception Message:OVM-1011 OVM Manager communication with vmoracle2 for operation Pre-check cluster root for Server Pool failed:
    <Exception: Cluster root not found.>
    The "*Test Connection*" succeeded just fine prior to clicking NEXT on the "Create Server Pool" page.
    Any suggestions?

  • Runcluvfy on shared storage

    Hey, I might just be forgetting something.
    I am going to install a rac cluster on SLES11 SP1.
    I am using multipathing, where the shared storage is provided by two SANs.
    I see all LUNs and did a raw device mapping via disk by-id (all LUNs have a partition on them).
    I am able to write with dd on the raw devices from both rac nodes at the same time.
    I set the permissions on the raw devices.
    Installed the cluvfy rpm on both nodes.
    When I start cluvfy it passes the first checks successfully.
    ./runcluvfy.sh stage -post hwos -n raca,racb -verbose
    But when it starts checking the shared storage, it fails.
    What prerequisites need to be configured for a successful check of the storage devices?
    Chris
    cluvfy comes from the 10.2.0.1 installation package

    Christian wrote:
    Hey, I might just be forgetting something.
    I am going to install a rac cluster on SLES11 SP1.
    I am using multipathing, where the shared storage is provided by two SANs.
    I see all LUNs and did a raw device mapping via disk by-id (all LUNs have a partition on them).
    I am able to write with dd on the raw devices from both rac nodes at the same time.
    I set the permissions on the raw devices.
    Installed the cluvfy rpm on both nodes.
    When I start cluvfy it passes the first checks successfully.
    ./runcluvfy.sh stage -post hwos -n raca,racb -verbose
    But when it starts checking the shared storage, it fails.
    What prerequisites need to be configured for a successful check of the storage devices?
    Chris
    cluvfy comes from the 10.2.0.1 installation package
    Do you want to do a pre-check or a post-check?
    If you would like to do a pre-check, check with
    ./runcluvfy.sh stage -pre crsinst -n raca,racb -verbose
    HTH,
    Refer to the installation guide for reference.
    Good Luck.

  • Backup Exec 9.2 SSO (shared storage option) SCSI LTO

    Greetings, all...
    I have a 2-node cluster setup at a particular client. Backup Exec is licensed for SSO, and unfortunately, while I have the HP Ultrium 960 in the middle of a shared SCSI bus between the two servers, because it's not on the SAN, Backup Exec apparently refuses to recognize it as a valid shared storage device.
    I was wondering if anyone has been able to get around this in Backup Exec, as the drive is indeed shared (can be seen) by both servers. When using a clustered setup, but without SSO, it is difficult to keep the media management in sync, as each node is given its own subdirectory instead of sharing the media management db.
    TIA

    Rachelsdad,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://support.novell.com/forums/

  • Accessing shared storage

    For some reason I just cannot access the shared storage device on my home network. The device appears as a media device, but not as a computer as it normally does. I have tried every kind of troubleshooting, but nothing seems to work.

    Hi,
    If the issue did not occur before, you may try System Restore to return to a point before the problem appeared.
    Please try the following:
    1. Check the permission settings of the network storage device: add Everyone on the Sharing tab and the Security tab.
    2. Adjust the "File sharing connections" encryption setting:
    a. Go to "Control Panel\Network and Internet\Network and Sharing Center\Advanced sharing settings".
    b. Adjust the File sharing connections setting and check if the computer can access the network storage device.
    3. Please temporarily disable Windows Firewall and security software.
    4. Try to manually use UNC (\\server\share) and IP (\\ip.add.res.s\share) to access the device for a test.
    Please also use Device Manager to verify that your network adapter is working correctly and reinstall the drivers.
    If the shared folder still can’t be accessed, please re-create the connection and check the result again.
    Hope it helps.
    Regards,
    Blair Deng
    TechNet Community Support

  • Find shared storage in clustered nodes

    Hi,
    How do I check shared storage on clustered nodes?
    OS: Solaris
    Regards,
    M@rk....

    I've just discovered that one of the SCSI cards was faulty, which explains why I couldn't see all the disks from one of the nodes.
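    For the record, a couple of Solaris-side checks (my own suggestions, in the same spirit as the mknod thread at the top of this page): list what each node actually sees, and for FC SANs compare WWNs, which identify the LUN itself and therefore match across nodes even when the c#t#d# names differ.
    # quick disk inventory on each node
    echo | format
    # for fibre-channel storage, list attached devices with their WWNs
    luxadm probe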
