Find shared storage in clustered nodes

Hi,
How do I check for shared storage across clustered nodes?
OS – Solaris
Regards,
M@rk....

I've just discovered that one of the SCSI cards was faulty, which explains why I couldn't see all the disks from one of the nodes.
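For anyone with the same question: one rough way to confirm that two Solaris nodes really see the same LUNs is to compare the disk inventory and serial numbers reported on each node (a sketch; run on every node and compare the output):
# echo | format        (lists every disk the node can see)
# iostat -En           (reports vendor, product and serial number per device; matching serial numbers on both nodes indicate the same physical LUN)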

Similar Messages

  • Pointing existing RAC nodes to a fresh Shared Storage discarding old one

    Hi,
    I have a RAC Setup with the Primary Database on Oracle 10gR2.
For this setup, there is also a Physical Standby Database setup (using a Data Guard configuration) with a 30-minute delay.
Assume that the "Shared Storage" of the Primary DB fails completely.
In the above scenario, my plan is to populate a "fresh" shared storage device using the Physical Standby Database setup and then "point" the RAC nodes to the new "Shared Storage".
Is this possible?
Simply put, how can I refresh the Primary database using the Standby Database?
Please help with the utilities (RMAN, Data Guard, other non-Oracle products, etc.) that can be used to do this.
    Regards
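(No reply was captured here, so the following is only a hedged sketch of the usual approach, not the thread's answer: with the primary's shared storage gone, the physical standby is activated as the new primary and the RAC nodes are then pointed at the storage it lives on. Verify every step against the Data Guard documentation for your release before relying on it.)
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
SQL> ALTER DATABASE OPEN;   -- some releases require a clean shutdown/startup before opening
Afterwards the activated database is backed up (RMAN) and a new standby is rebuilt once replacement storage is available.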

Is the following shared device configuration fine for 10g RAC on Windows 2003?
• 1 SCSI drive
• Two PCI network adapters on each node in the cluster.
• Storage cables to attach the shared storage device to all computers.
Regards.

Choice of shared storage for Oracle VM clustering feature

    Hi,
I would like to experiment with the Oracle VM clustering feature over multiple OVM servers. One requirement is shared storage, which can be provided by iSCSI/FC SAN or NFS. These types of external storage are usually very expensive. For testing purposes, what other options for shared storage can be used? Can someone share their experience?

    You don't need to purchase an expensive SAN storage array for this. A regular PC running Linux or Solaris will do just fine to act as an iSCSI target or to provide NFS shares via TCP/IP. Googling for "linux iscsi target howto" reveals a number of hits like this one: "RHEL5 iSCSI Target/Initiator" - http://blog.hamzahkhan.com/?p=55
    For Solaris, this book might be useful: "Configuring Oracle Solaris iSCSI Targets and Initiators (Tasks)" - http://download.oracle.com/docs/cd/E18752_01/html/817-5093/fmvcd.html
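To make the Solaris route concrete, a rough COMSTAR sketch looks like this (Solaris 11 style commands; the pool and volume names are placeholders, so check the linked documentation for your release):
# zfs create -V 20g rpool/iscsivol                      (carve out a ZFS volume to export)
# svcadm enable stmf                                    (start the SCSI target framework)
# stmfadm create-lu /dev/zvol/rdsk/rpool/iscsivol       (create a logical unit backed by the volume)
# stmfadm add-view <LU-GUID-printed-by-create-lu>       (expose the LU to all hosts)
# svcadm enable -r svc:/network/iscsi/target:default
# itadm create-target                                   (create an iSCSI target with an auto-generated IQN)
The RAC/OVM nodes then discover it with their usual iSCSI initiator tools.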

  • Shared storage check failed on nodes

    hi friends,
I am installing RAC 10g on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the command below,
./runcluvfy.sh stage -post hwos -n rac1,rac2, I get the error below:
    node connectivity check failed.
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sde on nodes:
    rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1
    Shared storage check failed on nodes "rac2,rac1"
Please help me, anyone; it's urgent.
    Thanks,
    poorna.
    Edited by: 958010 on 3 Oct, 2012 9:47 PM

    Hello,
It seems that your storage is not accessible from both nodes. If you want, you can follow these steps to configure 10g RAC on VMware.
Steps to configure a two-node 10g RAC on RHEL-4
    Remark-1: H/W requirement for RAC
    a) 4 Machines
    1. Node1
    2. Node2
    3. storage
    4. Grid Control
b) 2 switches
c) 6 straight cables
Remark-2: S/W requirement for RAC
a) 10g clusterware
b) 10g database
Both must be the same version (e.g., 10.2.0.1.0)
Remark-3: RPMs requirement for RAC
a) all 10g rpms (better to use RHEL-4 and choose the "Everything" option to install all the rpms)
    b) 4 new rpms are required for installations
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    ------------ Start Machine Preparation --------------------
    1. Prepare 3 machines
    i. node1.oracle.com
eth0 (192.9.201.183) - for public network
eth1 (10.0.0.1) - for private n/w
gateway (192.9.201.1)
subnet (255.255.255.0)
ii. node2.oracle.com
eth0 (192.9.201.187) - for public network
eth1 (10.0.0.2) - for private n/w
gateway (192.9.201.1)
subnet (255.255.255.0)
iii. openfiler.oracle.com
eth0 (192.9.201.182) - for public network
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    NOTE:-
    -- Here eth0 of all the nodes should be connected by Public N/W using SWITCH-1
    -- eth1 of all the nodes should be connected by Private N/W using SWITCH-2
2. Network configuration
#vim /etc/hosts
    192.9.201.183 node1.oracle.com node1
    192.9.201.187 node2.oracle.com node2
    192.9.201.182 openfiler.oracle.com openfiler
10.0.0.1 node1-priv.oracle.com node1-priv
    10.0.0.2 node2-priv.oracle.com node2-priv
    192.9.201.184 node1-vip.oracle.com node1-vip
    192.9.201.188 node2-vip.oracle.com node2-vip
3. Prepare both nodes for installation
    a. Set Kernel Parameters (/etc/sysctl.conf)
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    b. Configure /etc/security/limits.conf file
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    c. Configure /etc/pam.d/login file
    session required /lib/security/pam_limits.so
    d. Create user and groups on both nodes
    # groupadd oinstall
    # groupadd dba
    # groupadd oper
    # useradd -g oinstall -G dba oracle
    # passwd oracle
    e. Create required directories and set the ownership and permission.
# mkdir -p /u01/crs1020
# mkdir -p /u01/app/oracle/product/10.2.0/asm
# mkdir -p /u01/app/oracle/product/10.2.0/db_1
# chown -R oracle:oinstall /u01/
# chmod -R 755 /u01/
    f. Set the environment variables
    $ vi .bash_profile
    ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
    ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
    #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
#LANG="en_US"; export LANG
4. Storage configuration
    PART-A Open-filer Set-up
    Install openfiler on a machine (Leave 60GB free space on the hdd)
    a) Login to root user
    b) Start iSCSI target service
    # service iscsi-target start
# chkconfig --level 345 iscsi-target on
PART-B Configuring Storage on openfiler
a) From any client machine, open a browser and access the openfiler console (port 446):
    https://192.9.201.182:446/
    b) Open system tab and update the local N/W configuration for both nodes with netmask (255.255.255.255).
    c) From the Volume tab click "create a new physical volume group".
    d) From "block Device managemrnt" click on "(/dev/sda)" option under 'edit disk' option.
    e) Under "Create a partition in /dev/sda" section create physical Volume with full size and then click on 'CREATE'.
    f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"
    g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".
    h) Then go to the "Volume Section" on the right hand side tab and then click on "Add Volumes" and then specify the Volume name (ex- racvol1) and use all space and specify the "Filesytem/Volume type" as ISCSI and then click on CREATE.
    i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.
j) Then go to "LUN Mapping" and click on "MAP".
k) Then go to "Network ACL", allow both nodes from there and click on UPDATE.
Note:- To create multiple volumes with openfiler we would need to use multipathing, which is quite complex; that's why we are going with a single volume here. Edit the property of each volume and change access to allow.
f) Install the iscsi-initiator rpm on both nodes to access the iscsi disk
#rpm -ivh iscsi-initiator-utils-----------
g) Make an entry about openfiler in the iscsi.conf file on both nodes.
#vim /etc/iscsi.conf (in RHEL-4)
In this file you will find a line "#DiscoveryAddress=192.168.1.2"; remove the comment and specify your storage IP address (192.9.201.182 here).
OR
#vim /etc/iscsi/iscsi.conf (in RHEL-5)
In this file you will find a line "#ins.address = 192.168.1.2"; remove the comment and specify your storage IP address here.
h) #service iscsi restart (on both nodes)
i) From both nodes run this command to access the openfiler volume:
# iscsiadm -m discovery -t sendtargets -p 192.9.201.182
j) #service iscsi restart (on both nodes)
k) #chkconfig --level 345 iscsi on (on both nodes)
l) Make 3 primary partitions and 1 extended partition, and within the extended partition make 11 logical partitions
    A. Prepare partitions
    1. #fdisk /dev/sdb
    :e (extended)
    Part No. 1
    First Cylinder:
    Last Cylinder:
    :p
    :n
    :l
    First Cylinder:
    Last Cylinder: +1024M
    2. Note the /dev/sdb* names.
    3. #partprobe
    4. Login as root user on node2 and run partprobe
    B. On node1 login as root user and create following raw devices
    # raw /dev/raw/raw5 /dev/sdb5
#raw /dev/raw/raw6 /dev/sdb6
# raw /dev/raw/raw12 /dev/sdb12
Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above
    -Repeat the same thing on node2
    C. On node1 as root user
# vi /etc/sysconfig/rawdevices
    /dev/raw/raw5 /dev/sdb5
    /dev/raw/raw6 /dev/sdb6
    /dev/raw/raw7 /dev/sdb7
    /dev/raw/raw8 /dev/sdb8
    /dev/raw/raw9 /dev/sdb9
    /dev/raw/raw10 /dev/sdb10
    /dev/raw/raw11 /dev/sdb11
    /dev/raw/raw12 /dev/sdb12
    /dev/raw/raw13 /dev/sdb13
    /dev/raw/raw14 /dev/sdb14
    /dev/raw/raw15 /dev/sdb15
    D. Restart the raw service (# service rawdevices restart)
    #service rawdevices restart
    Assigning devices:
    /dev/raw/raw5 --> /dev/sdb5
    /dev/raw/raw5: bound to major 8, minor 21
    /dev/raw/raw6 --> /dev/sdb6
    /dev/raw/raw6: bound to major 8, minor 22
    /dev/raw/raw7 --> /dev/sdb7
    /dev/raw/raw7: bound to major 8, minor 23
    /dev/raw/raw8 --> /dev/sdb8
    /dev/raw/raw8: bound to major 8, minor 24
    /dev/raw/raw9 --> /dev/sdb9
    /dev/raw/raw9: bound to major 8, minor 25
    /dev/raw/raw10 --> /dev/sdb10
    /dev/raw/raw10: bound to major 8, minor 26
    /dev/raw/raw11 --> /dev/sdb11
    /dev/raw/raw11: bound to major 8, minor 27
    /dev/raw/raw12 --> /dev/sdb12
    /dev/raw/raw12: bound to major 8, minor 28
    /dev/raw/raw13 --> /dev/sdb13
    /dev/raw/raw13: bound to major 8, minor 29
    /dev/raw/raw14 --> /dev/sdb14
    /dev/raw/raw14: bound to major 8, minor 30
    /dev/raw/raw15 --> /dev/sdb15
    /dev/raw/raw15: bound to major 8, minor 31
    done
    E. Repeat the same thing on node2 also
F. To make these partitions accessible to the oracle user, run these commands on both nodes:
# chown -R oracle:oinstall /dev/raw/raw*
# chmod -R 755 /dev/raw/raw*
G. To make these partitions accessible after a restart, add these entries on both nodes:
# vi /etc/rc.local
chown -R oracle:oinstall /dev/raw/raw*
chmod -R 755 /dev/raw/raw*
5. SSH configuration (user equivalence)
On node1:- $ssh-keygen -t rsa
$ssh-keygen -t dsa
On node2:- $ssh-keygen -t rsa
$ssh-keygen -t dsa
On node1:- $cd .ssh
$cat *.pub>>node1
On node2:- $cd .ssh
$cat *.pub>>node2
On node1:- $scp node1 node2:/home/oracle/.ssh
On node2:- $scp node2 node1:/home/oracle/.ssh
On node1:- $cat node*>>authorized_keys
On node2:- $cat node*>>authorized_keys
    Now test the ssh configuration from both nodes
    $ vim a.sh
    ssh node1 hostname
    ssh node2 hostname
    ssh node1-priv hostname
    ssh node2-priv hostname
    $ chmod +x a.sh
    $./a.sh
The first time you will have to give the password; after that it will never ask for a password.
6. Run the cluster verifier
On node1 :-$cd /…/stage…/cluster…/cluvfy
$./runcluvfy.sh stage -pre crsinst -n node1,node2
The first time it will ask for four new RPMs, but remember to install these RPMs by double-clicking because of the dependencies, so it is better to install them in this order (rpm-3, rpm-4, rpm-1, rpm-2):
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
Then run cluvfy again and check that it gives a clean result, then start the clusterware installation.
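Once the storage really is visible from both nodes, the sharedness check from the original post can be rerun on its own with the storage component of cluvfy (a sketch; adjust the device name to whatever the iSCSI disk shows up as):
$./runcluvfy.sh comp ssa -n rac1,rac2 -s /dev/sde -verbose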

  • Clustering without shared storage

    Is there a way to cluster two Sun Fire 280R without a shared storage?

    Not so sure on the argument there... The 'Sun Cluster Overview for Solaris OS' document (available on docs.sun.com) clearly states on Page 9...
'A cluster is two or more systems, or nodes, that work together as a single, continuously available system ...'
    Thus you can have a two-node cluster. To do so, however, you would NEED shared storage to configure the Quorum device. This device is, essentially, the third vote. In a split-brain situation, where interconnects have failed and both sides of the cluster think they're the only node active, the Quorum device is used to determine which node stays in the cluster. This was historically done by all nodes racing to place a SCSI reservation on the nominated Quorum device. The node which fails this would panic, instigated by the failfast driver, to ensure data integrity. How it is actually done now I'm not quite sure, but there is still a race for quorum by all nodes (p.22 Sun Cluster Overview). Thus the Quorum device is required for a two-node cluster, and the cluster would not fail completely in the event of a single node failure.
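For what it's worth, once such a cluster is running, the quorum configuration and vote counts can be inspected from any node (a sketch; the second command applies to Sun Cluster 3.2 and later):
# scstat -q          (quorum votes per node and per quorum device)
# clquorum status    (Sun Cluster 3.2+ equivalent)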
    Hope this helps
    Glennog

  • Disk replication for Shared Storage in Weblogic server

    Hi,
Why do we need disk replication in WebLogic Server for shared storage systems? What is the advantage of it, and how can this disk replication be achieved in WebLogic for the shared storage that contains the common configurations and software used by a pool of client machines? Please clarify.
    Thanks.

    Hi,
I am not a middleware expert. However, ACFS (Oracle Cloud File System) is a clustering filesystem which also has functionality for replication:
http://www.oracle.com/technetwork/database/index-100339.html
Maybe you will also find the information you need on the MAA website: www.oracle.com/goto/maa
    Regards
    Sebastian
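(Purely as a rough illustration of the ACFS replication workflow, not taken from the thread: replication is initialized on the standby file system first and then on the primary pointing at the standby. The exact acfsutil options vary by release, so treat the flags, user name and mount points below as assumptions and confirm them in the ACFS administration guide.)
# on the standby site
acfsutil repl init standby -u repladmin /standby_acfs
# on the primary site
acfsutil repl init primary -s repladmin@standby-host /primary_acfs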

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an associated HAFS application which can fail over onto any machine in the local cluster.
DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a Read-Only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrants are Read-Only.
This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "Shared storage" for the cluster, which is common sense; otherwise no application fail-over is possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
Stating quite simply that DFSr does not support Cluster Shared Volumes makes absolutely no sense at all after stating that clusters are supported in replication groups and a TechNet guide is provided to set up and configure this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes - none at all.
My question: I need some clarification, is the text meant to read "between" Clustered Shared Volumes?
The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate / write data between two clusters running a HAFS configuration, in a DFS replication group.
If, for instance, as a test, local / logical storage is mounted to a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on initial write, and even higher for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on initial write and only 260 files per minute when attempting to update data / amend files.
By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, DFSr configuration, and replication group configuration - the only factor left that makes any difference is replicating from shared clustered storage to another shared clustered storage LUN.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all other evidence provided around DFSr and clustering, however it seems to lean towards why we may be seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul
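One way to quantify the slowdown while testing (not from the original post; the replication group, folder and member names below are placeholders) is to watch the DFSR backlog between the sending and receiving members during a test copy:
dfsrdiag backlog /rgname:MyReplGroup /rfname:MyReplFolder /smem:NODE-A /rmem:NODE-B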

    Hello Shaon Shan,
I am also seeing the same scenario at one of my customer sites.
We have two file servers running on Hyper-V 2012 R2 as guest VMs using Cluster Shared Volumes. Even the data partition drive is part of a CSV.
It's really confusing whether DFS replication on CSV is supported or not, and what the consequences would be of using it.
To my knowledge we have some customers using Hyper-V 2008 R2 where DFS has been configured and running fine on CSV for more than 4 years without any issue.
I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul

  • Shared storage client timed out error

    Hello everybody, please help
I have been at this now for about 2 days and still can't find the source of my issue using FCP to render a project through Compressor to multiple Macs using Qmaster.
What is happening is that I start the render and it gets sent to the second computer; I can see the processor ramping up, then after (+-) 30 seconds I get the error below and the render fails.
    Here is my set up:
    MacBook1 as the cluster controller
    Macbook2 as the service node
connected via a gigabit switch using an Ethernet cable
    The error I keep getting is this ("Macintosh-7" is the name of MacBook1, "chikako-komatsus-computer" is the name of MacBook2):
3x HOST [chikako-komatsus-computer.local] Shared storage client timed out while subscribing to "nfs://Macintosh-7.local/Volumes/portable/Cluster_scratch/4AD40699-B5BD6A1A/shared"
The volume mentioned in the error above is a shared FireWire drive connected to MacBook1. It has full read and write privileges for everyone. This drive is where the project file and all the source video are located. MacBook1, via the Qmaster system preferences, is pointing to a folder "Cluster_scratch" on this drive.
I have been mounting this drive from MacBook2 using the Connect to Server option in the Finder under the Go menu. This method seems to only let me connect to this drive using AFP; is this my problem?
I have "allowed all incoming traffic" in the firewall on MacBook1.
What is funny (not really) is that I can compress a previously compiled video with the cluster if I don't go through Final Cut Pro!
    Any help with this would be greatly appreciated.
    Thanks

    I also administer a managed cluster with 6 machines, and have been using it successfully for almost a year now. But the only encoding that is submitted is directly through Compressor, never via FCP.
With QMaster, it sees a Quickcluster and a managed cluster the same way. While they are set up differently, the principle is the same: QMaster only sees services.
    Exporting out of FCP to any cluster has always been slow. If you want to harness the power of distributed encoding, you could export a Quicktime reference file and take that into Compressor to be submitted to the cluster for encoding.
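If the AFP-versus-NFS mounting mentioned in the question turns out to matter, one quick test is to mount the share over NFS from Terminal on the service node (a sketch using the path from the error message; the local mount point is arbitrary):
sudo mkdir -p /private/tmp/cluster_scratch
sudo mount -t nfs -o resvport Macintosh-7.local:/Volumes/portable/Cluster_scratch /private/tmp/cluster_scratch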

  • Problem of using OCFS2 as shared storage to install RAC 10g on VMware

    Hi, all
I am installing a RAC 10g cluster with two Linux nodes on VMware. I created a shared 5G disk for the two nodes as a shared storage partition. Using the OCFS2 tools, I formatted this shared storage partition and successfully auto-mounted it on both nodes.
Before installing, I used the command "runcluvfy.sh stage -pre crsinst -n node1,node2" to check the installation prerequisites. Everything is OK except an error "Could not find a suitable set of interfaces for VIPs.". By searching the web, I found this error can be safely ignored.
OCFS2 works well on both nodes; I formatted the shared partition as an ocfs2 file system and configured o2cb to auto-start the ocfs service. I mounted the shared disk on both nodes at the /ocfs directory. By adding an entry to both nodes' /etc/fstab, this partition is auto-mounted when the system boots. I can access files in the shared partition from both nodes.
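(For reference, a typical /etc/fstab line for an OCFS2 volume holding Oracle 10g cluster files looks like the sketch below; the datavolume and nointr mount options are the ones usually required for OCR and voting files, and the device name is simply the one from this thread.)
/dev/sdb1   /ocfs   ocfs2   _netdev,datavolume,nointr   0 0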
My problem is that, when installing clusterware, at the stage "Specify Oracle Cluster Registry" I enter "/ocfs/OCRFILE" for the OCR Location and "/ocfs/OCRFILE_Mirror" for the OCR Mirror Location, but I get the following error:
----- Error Message ----
The location /ocfs/OCRFILE, entered for the Oracle Cluster Registry(OCR) is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system that is visible by the same name on all nodes of the cluster.
------ Error Message ---
I don't know why the OUI can't recognize /ocfs as a shared partition. On both nodes, using the command "mounted.ocfs2 -f", I get this result:
Device FS Nodes
/dev/sdb1 ocfs2 node1, node2
What could be wrong? Any help is appreciated!
Additional information:
    1) uname -r
    2.6.9-42.0.0.0.1.EL
    2) Permission of shared partition
    $ls -ld /ocfs/
    drwxrwxr-x 6 oracle dba 4096 Aug 3 18:22 /ocfs/

    Hello
I am not sure how relevant the following solution is to your problem (regardless of when it was originally posted, it may help someone reading this thread); here is what I faced and here is how I fixed it:
I was setting up RAC using VMware. I prepared rac1 [installed the OS, configured disks, users, etc.] and then made a copy of it as rac2. So far so good. When, as per the guide I was following for RAC configuration, I started the OCFS2 configuration, I faced the following error on rac2 when I tried to mount /dev/sdb1:
    ===================================================
[root@rac2 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
ocfs2_hb_ctl: OCFS2 directory corrupted while reading uuid
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
    ===================================================
After a lot of "googling around", I finally bumped into a page where the kind person who posted the solution said [in my words below, with more detail]:
    o shutdown both rac1 and rac2
    o in VMWare, "edit virtual machine settings" for rac1
    o remove the disk [make sure you drop the correct one]
    o recreate it and select *"allocate all disk space now"* [with same name and in the same directory where it was before]
o start rac1, login as *"root"* and *"fdisk /dev/sdb"* [or whichever is/was the disk where you are installing ocfs2]
    Once done, repeat the steps for configuring OCFS2. I was successfully able to mount the disk on both machines.
All this problem was apparently caused by not choosing the "allocate all disk space now" option while creating the disk to be used for OCFS2.
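(In the same spirit, RAC-on-VMware guides that describe this kind of shared vmdk usually also quote a few .vmx settings so that both VMs may open the disk at once; the lines below are only an illustration to check against your own VMware release, controller and file names, not an exact recipe. disk.locking disables the lock that normally prevents two VMs from opening the same vmdk, and sharedBus puts the shared disk on its own SCSI controller.)
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
scsi1:0.fileName = "ocfs_shared.vmdk"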
    If you still have any questions or problem, email me at [email protected] and I'll try to get back to you at my earliest.
    Good luck!
    Muhammad Amer
    [email protected]

  • Shared Storage Check

    Hi all,
    We are planning to add a node to our existing RAC deployment (Database: 10gr2 and Sun Solaris 5.9 OS). Currently the shared storage is IBM SAN.
When I run the shared storage check using cluvfy, it fails to detect any shared storage. Given that I can ignore this error message (since cluvfy doesn't work with SAN, I believe), how can I check whether the storage is shared or not?
Note
When I look at the partition table from both servers, it looks the same (for the SAN drive, of course), but the names/labels of the storage devices are different (for example, the existing node shows c6t0d0 but the new node, which is to be added, shows something different. Is that OK?).
    regards,
    Muhammad Riaz

    Never mind. I found solution from http://www.idevelopment.info.
(1) Create the following directory structure on the second node (same as the first node) with the same permissions as on the existing node:
/asmdisks
- crs
- disk1
- disk2
- vote
(2) Use ls -lL /dev/rdsk/<Disk> to find out the major and minor IDs of the shared disk, and attach those IDs to the relevant device nodes above using the mknod command:
    # ls -lL /dev/rdsk/c4t0d0*
    crw-r-----   1 root     sys       32,256 Aug  1 11:16 /dev/rdsk/c4t0d0s0
    crw-r-----   1 root     sys       32,257 Aug  1 11:16 /dev/rdsk/c4t0d0s1
    crw-r-----   1 root     sys       32,258 Aug  1 11:16 /dev/rdsk/c4t0d0s2
    crw-r-----   1 root     sys       32,259 Aug  1 11:16 /dev/rdsk/c4t0d0s3
    crw-r-----   1 root     sys       32,260 Aug  1 11:16 /dev/rdsk/c4t0d0s4
    crw-r-----   1 root     sys       32,261 Aug  1 11:16 /dev/rdsk/c4t0d0s5
    crw-r-----   1 root     sys       32,262 Aug  1 11:16 /dev/rdsk/c4t0d0s6
    crw-r-----   1 root     sys       32,263 Aug  1 11:16 /dev/rdsk/c4t0d0s7
    mknod /asmdisks/crs      c 32 257
    mknod /asmdisks/disk1      c 32 260
    mknod /asmdisks/disk2      c 32 261
    mknod /asmdisks/vote      c 32 259
    # ls -lL /asmdisks
    total 0
    crw-r--r--   1 root     oinstall  32,257 Aug  3 09:07 crs
    crw-r--r--   1 oracle   dba       32,260 Aug  3 09:08 disk1
    crw-r--r--   1 oracle   dba       32,261 Aug  3 09:08 disk2
    crw-r--r--   1 oracle   oinstall  32,259 Aug  3 09:08 vote

  • RAC with OCFS2 shared storage

    Hi all
I want to create a RAC environment in Oracle VM 2.2 (one server), with local disks which I used to create an LVM volume for the OCR in the guests:
- two guests with Oracle Enterprise Linux 5
- both have the ocfs2 rpm installed
When I want to create the shared storage for the OCR I configure cluster.conf:
    - service o2cb configure -> all ok -> on both nodes
    - service o2cb enable -> ok -> on both nodes
    - then mkfs.ocfs2 in node1
    - mount -t ocfs2 in node1
    - mount -t ocfs2 in node 2:
    [root@lin2 ~]# mount -t ocfs2 /dev/sde1 /ocr
    mount.ocfs2: Transport endpoint is not connected while mounting /dev/sde1 on /ocr. Check 'dmesg' for more information on this error.
    Jun 27 22:57:23 lin2 kernel: (o2net,1454,0):o2net_connect_expired:1664 ERROR: no connection established with node 0 after 30.0 seconds, giving up and returning errors.
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_request_join:1036 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_try_to_join_domain:1210 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_join_domain:1488 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):dlm_register_domain:1754 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):ocfs2_dlm_init:2808 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: (mount.ocfs2,9327,0):ocfs2_mount_volume:1447 ERROR: status = -107
    Jun 27 22:57:23 lin2 kernel: ocfs2: Unmounting device (8,65) on (node 1)
Can you help me figure out where I am making a mistake?
Thank you, Brano

Please find the answer at the link below:
    http://wiki.oracle.com/page/Oracle+VM+Server+Configuration-usingOCFS2+in+a+group+of+VM+hosts+to+share+block+storage
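In addition to the link above, the "no connection established with node 0" message usually points to a connectivity or configuration mismatch between the nodes; a few quick checks (a sketch, run on both guests):
# service o2cb status              (the cluster must be online on both nodes)
# cat /etc/ocfs2/cluster.conf      (node names must match `uname -n` and the file must be identical on both nodes)
# iptables -L -n | grep 7777       (the o2net interconnect uses TCP port 7777 by default; it must not be blocked)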

  • No Shared Storage Available (RAC Installation)

    Hello Guys,
I am in the process of installing RAC 10g R2 on the Windows 2000 operating system. Well, I am going for 2-node clustering. The problem is that we don't have any shared storage system like a SAN. Is it possible to use another computer's HDD for storing data files? All other files can be stored in different drives of the 2 nodes.... OR is it possible to store datafiles on these nodes?
Please guide me... and what type of storage would that be called? Obviously not ASM, but would it be OCFS?
    Please help.
    Regards,
    Imran

Well, we are doing it for testing purposes... when we go for the production installation then obviously we will keep our data files on shared storage....
I have read the document but it is not clear to me... can we keep data files on any one of the nodes?
    Regards,
    Imran

  • Backup Exec 9.2 SSO (shared storage option) SCSI LTO

    Greetings, all...
    I have a 2-node cluster setup at a particular client. Backup Exec is licensed for SSO, and unfortunately, while I have the HP Ultrium 960 in the middle of a shared SCSI bus between the two servers, because it's not on the SAN, Backup Exec apparently refuses to recognize it as a valid shared storage device.
    I was wondering if anyone has been able to get around this in Backup Exec, as the drive is indeed shared (can be seen) by both servers. When using a clustered setup, but without SSO, it is difficult to keep the media management in sync, as each node is given its own subdirectory instead of sharing the media management db.
    TIA

    Rachelsdad,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://support.novell.com/forums/

  • Eliminating shared storage

I'm experimenting with, and brainstorming, ideas on how to eliminate shared storage.
Is there a way to migrate local filesystems (on local drives) to and from nodes by placing them in the /dev/global directory?
    Or is this wishful thinking?

The answer is: it depends.
On SC 3.2 with SRDF and TrueCopy you can use storage-based replication as a substitute for shared storage, and we are implementing PostgreSQL WAL file shipping as a replacement for shared storage. But for generic two-node clusters, apart from that it is wishful thinking right now. You would need something like iSCSI and host-based mirroring on a Thumper where you have enough LUNs, but this is future stuff.
    Detlef

  • Cheap shared storage for test RAC

    Hi All,
Is there a cheap shared storage device available for creating a test RAC environment? I used to create RAC with VMware, but that environment is not very stable.
    Regards

    Two options:
    The Oracle VM templates can be used to build clusters of any number of nodes using Oracle Database 11g Release 2, which includes Oracle 11g Rel. 2 Clusterware, Oracle 11g Rel. 2 Database, and Oracle Automatic Storage Management (ASM) 11g Rel. 2, patched to the latest, recommended patches.
    This is supported for Production.
    http://www.oracle.com/technetwork/server-storage/vm/rac-template-11grel2-166623.html
    Learn how to set up and configure an Oracle RAC 11g Release 2 development cluster on Oracle Linux for less than US$2,700.
The information in the guide below is not validated by Oracle, is not supported by Oracle, and should only be used at your own risk; it is for educational purposes only.
    http://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-088677.html
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Dec 10, 2012 10:59 AM
