Shared Storage for RAC

Dear All,
What are the best options for a shared storage system for Oracle RAC 10g R2 on the Windows operating system?
How can a disk be shared in Windows so that it is available to all RAC nodes dynamically?
I need help from people who have configured RAC on Windows and have used the shared disk option.
Regards,
Imran

In production, the only realistic option is to turn to certified SAN or NAS vendors.
The issue is simple: although many types of shared storage allow multiple machines to be connected to the same disk, only certified shared storage allows those machines to write to the same disk sectors and blocks. Most storage solutions have built-in protection to stop exactly that from happening.
For a small shop, I certainly recommend a NetApp NAS.

Similar Messages

  • Firewire storage for RAC

    DB Version: 11.2.0.2
    OS : Solaris 5.10
    We are thinking of setting up a RAC DB (development) with a FireWire 800 device as our shared storage. We are thinking of buying the 2-port FireWire storage device mentioned in the URL below, along with 2 FireWire PCI cards for both of our machines.
    http://www.lacie.com/asia/products/product.htm?id=10330
    I have read in other OTN posts that RAC with FireWire storage is only good for demo purposes. Does this mean that it is not good for development DBs at least?

    T.Boyd wrote:
    I have read in other OTN posts that RAC with firewire storage is only good for Demo purpose. Does this mean that it is not good for development DBs at least?
    Oracle "supports" this in a pure development environment. There is (or was) an Oracle-owned mailing list that dealt specifically with using FireWire shared storage for RAC.
    Some years ago I tried it (with a LaCie drive), but could not get both RHEL3 servers to open a connection to the drive. The configuration was correct, but the second server always failed to establish a connection (complaining that no more connections to the drive were supported). I put that down as a driver bug of sorts, and was not keen to go bug hunting and build the driver from source code. I left it at that, but according to the Oracle docs I read at the time, this was a "valid" configuration for testing RAC.

  • The best option to create shared storage for Oracle 11gR2 RAC on OEL 5?

    Hello,
    Could you please tell me the best option for creating shared storage for Oracle 11gR2 RAC on Oracle Enterprise Linux 5, in a production environment? And could you help me create the shared storage? There are no additional steps in the Oracle installation guide; there are steps only for ASM disk creation.
    Thank you.

    Here are the names of the partitions and their permissions. The partitions with 146 GB, 438 GB, and 438 GB of capacity are my storage. Two of the three disks (the 438 GB ones) were configured as RAID 5, and the remaining disk was configured as RAID 0. My storage is a Dell MD3000i connected to the nodes via Ethernet.
    Node 1
    [root@rac1 home]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:39 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:40 /dev/sda1
    brw-r----- 1 root disk 8, 16 Aug 8 17:39 /dev/sdb
    brw-r----- 1 root disk 8, 17 Aug 8 17:39 /dev/sdb1
    brw-r----- 1 root disk 8, 32 Aug 8 17:40 /dev/sdc
    brw-r----- 1 root disk 8, 48 Aug 8 17:41 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 18:26 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:43 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 18:34 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:43 /dev/sdf1
    brw-r----- 1 root disk 8, 96 Aug 8 18:34 /dev/sdg
    brw-r----- 1 root disk 8, 97 Aug 8 18:43 /dev/sdg1
    [root@rac1 home]# fdisk -l
    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8844 71039398+ 83 Linux
    Disk /dev/sdb: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 4079 32764536 82 Linux swap / Solaris
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 17784 142849948+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    Disk /dev/sdg: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 53352 428549908+ 83 Linux
    Node 2
    [root@rac2 ~]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:50 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:51 /dev/sda1
    brw-r----- 1 root disk 8, 2 Aug 8 17:50 /dev/sda2
    brw-r----- 1 root disk 8, 16 Aug 8 17:51 /dev/sdb
    brw-r----- 1 root disk 8, 32 Aug 8 17:52 /dev/sdc
    brw-r----- 1 root disk 8, 33 Aug 8 18:54 /dev/sdc1
    brw-r----- 1 root disk 8, 48 Aug 8 17:52 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 17:52 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:54 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 17:52 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:54 /dev/sdf1
    [root@rac2 ~]# fdisk -l
    Disk /dev/sda: 145.4 GB, 145492017152 bytes
    255 heads, 63 sectors/track, 17688 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8796 70653838+ 83 Linux
    /dev/sda2 8797 12875 32764567+ 82 Linux swap / Solaris
    Disk /dev/sdc: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 17784 142849948+ 83 Linux
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 53352 428549908+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    [root@rac2 ~]#
    Thank you.
    Edited by: user12144220 on Aug 10, 2011 1:10 AM

  • Considering shared storage for Oracle RAC 10g

    Hi, guys!
    My Oracle RAC will run on VMware ESXi 5.5, so both nodes and the shared storage are VMs. Don't blame me for this; I don't have another choice.
    I am choosing the shared storage for Oracle RAC, between an NFS server and an iSCSI server; either can be set up on Red Hat Linux or FreeNAS.
    Can you guys help me make the choice?
    Red Hat or FreeNAS?
    iSCSI or NFS?
    Any help will be appreciated.

    JohnWatson wrote:
    NFS is really easy. Create your zero-filled files, set the ownership and access modes, and point your asm_diskstring at them. Much simpler than configuring an iSCSI target and initiators, and then messing about with ASMlib or udev.
    I recorded a public lecture that (if I remember correctly) describes it here: Oracle ASM Free Tutorial
    I will be using OCFS2 as the cluster FS. Does it make any difference for NFS vs. iSCSI?
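    The NFS-file approach JohnWatson describes can be sketched in a few shell commands. This is a minimal illustration, not a supported recipe: the directory and file name are placeholders (a real setup would use an actual NFS mount and files sized in gigabytes, not 16 MiB), and the chown and ALTER SYSTEM steps are shown as comments because they need root and a running ASM instance.

    ```shell
    # Stand-in for the NFS mount point (placeholder path)
    ASMDIR=/tmp/asmdisks
    mkdir -p "$ASMDIR"

    # Zero-filled file to act as an ASM "disk" (16 MiB here purely for brevity)
    dd if=/dev/zero of="$ASMDIR/disk1" bs=1M count=16 2>/dev/null

    # Ownership and mode the ASM instance expects; the chown needs root:
    #   chown oracle:dba "$ASMDIR/disk1"
    chmod 660 "$ASMDIR/disk1"

    # Then point ASM's discovery string at the files, e.g. in the ASM instance:
    #   ALTER SYSTEM SET asm_diskstring = '/tmp/asmdisks/*' SCOPE=SPFILE;
    ```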

  • Shared Disks For RAC

    Hi,
    I plan to use shared disks to create an Oracle RAC using ASM. What options do I have? OCFS2, or any other option?
    Can someone point me to a document on how I can use the shared disks for RAC?
    Thanks.

    javed555 wrote:
    I plan to use shared disks to create Oracle RAC using ASM. What options do I have?
    You have two options:
    1. Create shared virtual, i.e. file-backed disks. These files will be stored in /OVS/sharedDisk/ and made available to each guest
    2. Expose physical devices directly to each guest, e.g. an LVM partition or a multipath LUN.
    With both options, the disks show up as devices in the guests and you would then provision them with ASM, exactly the same way as if your RAC nodes were physical.
    OCFS2 or NFS are required to create shared storage for Oracle VM Servers. This is to ensure the /OVS mount point is shared between multiple Oracle VM Servers.
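    Option 1 above (a file-backed shared virtual disk) can be sketched as follows. The /OVS/sharedDisk/ path comes from the reply, but a temporary stand-in directory is used here so the commands run anywhere, and the vm.cfg line is a hypothetical example of attaching the image to a guest in shared-write mode.

    ```shell
    # Stand-in for /OVS/sharedDisk/ so the sketch runs outside an Oracle VM server
    OVSDIR=/tmp/OVS/sharedDisk
    mkdir -p "$OVSDIR"

    # File-backed "disk" image to be shared by the RAC guests (64 MiB here for
    # brevity; a real shared disk would be many GB)
    dd if=/dev/zero of="$OVSDIR/racdisk1.img" bs=1M count=64 2>/dev/null

    # In each guest's vm.cfg the image would then be attached in shared-write
    # mode (the trailing '!' allows concurrent writers), e.g.:
    #   disk = ['file:/OVS/sharedDisk/racdisk1.img,xvdb,w!']
    ```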

  • Doubts about shared disk for RAC

    Hi All,
    I am really new to RAC. Even after reading various documents, I still have many doubts regarding the shared storage and file systems needed for RAC.
    1. Clusterware has to be installed on a shared file system like OCFS2. What type of hard drive is required to install OCFS2 so that it can be accessed from all nodes?
    Does it have to be an external hard drive, or can we use any simple hard disk for the shared storage?
    If we use an external hard drive, does it need to be connected to a separate server altogether, or can it be connected to any one of the nodes in the cluster?
    Apart from the shared drives, approximately what size of hard disk is required for each node (for just a testing environment)?
    I would sincerely appreciate a reply!
    Thanks in advance.

    Clusterware has to be installed on shared storage. RAC also requires shared storage for the database.
    Shared storage can be managed via many methods.
    1. Some sites using Linux or UNIX-based OSes choose to use RAW disk devices. This method is not frequently used due to the unpleasant management overhead and long-term manageability for RAW devices.
    2. Many sites use cluster filesystems. On Linux and Windows, Oracle offers OCFS2 as one (free) cluster filesystem. Other vendors also offer add-on products for some OSes that provide supported cluster filesystems (like GFS, GPFS, VxFS, and others). Supported cluster filesystems may be used for Clusterware files (OCR and voting disks) as well as database files. Check Metalink for a list of supported cluster filesystems.
    3. ASM can be used to manage shared storage used for database files. Unfortunately, due to architecture decisions made by Oracle, ASM cannot currently be used for Clusterware files (OCR and voting disks). It is relatively common to see ASM used for DB files and either RAW or a cluster filesystem used for Clusterware files. In other words, ASM and cluster filesystems and RAW are not mutually exclusive.
    As for hardware--I have not seen any hardware capable of easily connecting multiple servers to internal storage. So, shared storage is always (in my experience) housed externally. You can find some articles on OTN and other sites (search Google for them) that use firewire drives or a third computer running openfiler to provide the shared storage in test environments. In production environments, SAN devices are commonly employed to provide concurrent access to storage from multiple servers.
    Hope this helps!
    Message was edited by:
    Dan_Norris

  • In the old Mobile me storage I had shared storage for my family and all of our devices. How do I breakup the 55GB across my apple accounts now that iCloud is by device by user?

    In the old MobileMe storage I had shared storage for my family and all of our devices. How do I break up the 55 GB across my Apple accounts now that iCloud is per device, per user? Does anyone else have this issue? I also need to correct the storage from my work, where I do not have Safari and cannot download iCloud to my desktop.

    That storage moves to the master account. Since there are no shared accounts like that in iCloud, the people in your family will each get the complimentary 5 GB from Apple, and they will let you know if they need any more. You will not be able to manage storage from a Windows desktop.

  • To use NFS mount as shared storage for calendar

    hi all,
    Colocated IM deployment: "To ensure high availability, Oracle Calendar Server is placed on a Cold Failover Cluster. Cold Failover Cluster installation requires shared storage for ORACLE_HOME and oraInventory."
    Q: Can an NFS mount be used as the shared storage? Has anyone tried it? Thanks.

    Hi Arnaud!
    This is of course a test environment on my laptop. I WOULD NEVER do this in production or even mention this to a customer :-)
    In this environment I do not care for performance but it is not slow.
    cu
    Andreas

  • Cheap shared storage for test RAC

    Hi All,
    Is there a cheap shared storage device available for creating a test RAC environment? I used to create RAC with VMware, but the environment was not very stable.
    Regards

    Two options:
    The Oracle VM templates can be used to build clusters of any number of nodes with Oracle Database 11g Release 2, including Oracle 11g Rel. 2 Clusterware, Oracle 11g Rel. 2 Database, and Oracle Automatic Storage Management (ASM) 11g Rel. 2, patched to the latest recommended patches.
    This is supported for Production.
    http://www.oracle.com/technetwork/server-storage/vm/rac-template-11grel2-166623.html
    Learn how to set up and configure an Oracle RAC 11g Release 2 development cluster on Oracle Linux for less than US$2,700.
    The information in the guide below is not validated by Oracle, is not supported by Oracle, and should be used only at your own risk; it is for educational purposes only.
    http://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-088677.html
    Regards,
    Levi Pereira
    Edited by: Levi Pereira on Dec 10, 2012 10:59 AM

  • No Shared Storage Available (RAC Installation)

    Hello Guys,
    I am in the process of installing RAC 10g R2 on the Windows 2000 operating system, going for 2-node clustering. The problem is that we don't have any shared storage system like a SAN. Is it possible to use another computer's HDD for storing the data files? All other files can be stored on different drives of the 2 nodes... or is it possible to store the datafiles on these nodes?
    Please guide me. And what type of storage would that be called? Obviously not ASM, but would it be OCFS?
    Please help.
    Regards,
    Imran

    Well, we are doing it for testing purposes; when we go for the production installation, we will obviously keep our data files on shared storage.
    I have read the document, but it is not clear to me. Can we keep the data files on any one of the nodes?
    Regards,
    Imran

  • Choice of shared storage for Oralce VM clustering feature

    Hi,
    I would like to experiment with the Oracle VM clustering feature over multiple OVM servers. One requirement is shared storage, which can be provided by an iSCSI/FC SAN or NFS. These types of external storage are usually very expensive. For testing purposes, what other options for shared storage can be used? Can someone share your experience?

    You don't need to purchase an expensive SAN storage array for this. A regular PC running Linux or Solaris will do just fine to act as an iSCSI target or to provide NFS shares via TCP/IP. Googling for "linux iscsi target howto" reveals a number of hits like this one: "RHEL5 iSCSI Target/Initiator" - http://blog.hamzahkhan.com/?p=55
    For Solaris, this book might be useful: "Configuring Oracle Solaris iSCSI Targets and Initiators (Tasks)" - http://download.oracle.com/docs/cd/E18752_01/html/817-5093/fmvcd.html
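    On Linux, the iSCSI-target idea above can be sketched with the tgtadm tool from scsi-target-utils, which is what the RHEL5-era guides linked here use. This is a hedged sketch: the backing-file path, tiny 32 MiB size, and IQN are all placeholders, and the tgtadm commands are shown as comments because they need root and a running tgtd daemon.

    ```shell
    # Backing store for the iSCSI LUN (placeholder path, 32 MiB for brevity)
    IMG=/tmp/iscsi-disk1.img
    dd if=/dev/zero of="$IMG" bs=1M count=32 2>/dev/null

    # With the tgtd daemon running (scsi-target-utils), the file would be
    # exported roughly like this; run as root, and the IQN is illustrative:
    #   tgtadm --lld iscsi --op new --mode target --tid 1 \
    #          -T iqn.2011-01.com.example:rac.disk1
    #   tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b "$IMG"
    #   tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
    ```

    Each RAC node would then log in to the target with an iSCSI initiator (e.g. iscsiadm) and see the LUN as a local block device.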

  • Storage for RAC & ASM

    I am planning to install Oracle 10g RAC with ASM for our Oracle 10.2.0.4 database on Solaris 10. We have a Sun StorEdge SAN.
    I would like to get your suggestions for the best storage infrastructure for the RAC/ASM.
    Can someone share their storage design: RAID LUNs and disk layout for the RAC environment? Do you have RAID volumes or individual disks presented to the OS? If RAID volumes, what RAID level are you using? Since ASM stripes and mirrors (normal or high redundancy), how have you laid out your storage for ASM?
    Should I create one RAID LUN and present it to the operating system, so that the ASM diskgroup can be created from that LUN, or should individual disks be presented?
    If anyone can point me to any documentation that can put some light on the storage design and layout, it would be really helpful.
    Thanks in advance!

    Refer to these; they may help you.
    http://www.oracle.com/technetwork/database/clustering/tech-generic-unix-new-166583.html
    http://www.orafaq.com/node/66
    http://www.dba-oracle.com/real_application_clusters_rac_grid/cluster_file_system.htm
    Regards,

  • Do I have to have Shared Storage for Vm 3.0 ??

    Hi
    I used Oracle VM 2.2, where I could install a VM using the server's own storage.
    Example:
    Server A has VM 2.2 installed and has 2 TB of storage.
    From Oracle VM Manager 2.2, I can create a VM on Server A using that 2 TB of storage.
    Does this rule apply to VM 3.0?
    Or for VM 3.0, do you have to have shared storage, like a Sun array or something, to hold the VMs' files?
    Please let me know.
    thanks

    Hi
    For the last 8 hours I have been trying to figure out how to use local storage for a VM.
    "The choice is yours to use the local disks either to provision logical storage volumes as disks for virtual machines or to install a storage repository. If you place a storage repository on the local disk, an OCFS2 file system is installed."
    I have configured 2 servers for the test.
    Server A: partition type
    [root@ov2 vm]# df -T
    Filesystem Type 1K-blocks Used Available Use% Mounted on
    /dev/sda2 ext3 3050092 575588 2317068 20% /
    /dev/sdb1 ocfs2 937496576 3124224 934372352 1% /vm
    /dev/sda1 ext3 101086 37328 58539 39% /boot
    tmpfs tmpfs 323728 6644 317084 3% /dev/shm
    none tmpfs 318720 40 318680 1% /var/lib/xenstored
    So according to the document: "If you place a storage repository on the local disk, an OCFS2 file system is installed."
    Because /dev/sdb1 is an OCFS2 file system, I should be able to put it into a storage repository, but I can't.
    Server B :
    fdisk -l /dev/sdb
    WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
    WARNING: The size of this disk is 3.0 TB (2996997980160 bytes).
    DOS partition table format can not be used on drives for volumes
    larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
    partition table format (GPT).
    Disk /dev/sdb: 2996.9 GB, 2996997980160 bytes
    255 heads, 63 sectors/track, 364364 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 267350 2147483647+ ee EFI GPT
    This one is unpartitioned.
    So according to the document: "The choice is yours to use the local disks either to provision logical storage volumes as disks for virtual machines."
    Now when I add this one into hardware:
    3600605b00380680015dd0ed432ef8876     Local Storage Volume Group     Generic Local Storage Array @ ov1.domain.lan     SAN     None     2791.17     0.0     ov1.domain.lan
    It can be seen, but as a generic local storage array, and I can't do anything with it.
    Can anyone please give me some advice on creating a VM on either of these servers with this local storage?
    I would really appreciate your help.
    thanks

  • Shared storage devices RAC

    Hello,
    I was doing a Grid Infrastructure installation for RAC on Linux. At the point where the disk group needs to be created, I changed the device discovery path to QRCL* as instructed in the guide I followed. But after that, the list of candidate disks went empty.
    The /etc/init.d/oracleasm listdisks command does not return any disks. I can't delete the disks (it says they are not instantiated), and I also can't create them again. Please help, I'm stuck :(

    Where the disk group needs to be created I changed the device discovery path to QRCL*
    Give the full path of the disks in the device discovery path, like /dev/dsk*

  • Please help me deciding the storage for RAC !!

    Hello,
    I am looking for some help deciding on hardware in order to install 10g R2 RAC. I have two IBM P5 series boxes with IBM D24 storage (SCSI storage). I followed Oracle's Clusterware installation guide. It says you have to create physical volumes in "concurrent mode", which my system does not support. Even if I import this physical volume with LVs in concurrent mode, it gives an error that this is not a concurrent PV. I found that HACMP is needed to do this, but at the same time I found that HACMP is not needed. I am very confused.
    Can we use a SAN and get rid of this kind of problem? Is there any software with the SAN for concurrency?
    If you have any suggestions, they are highly appreciated.
    Thanks

    1. I think what Ashok was saying is that you don't need to put these disks under the volume manager at all--just address them as raw disk devices. I don't recall where the raw disk devices are found on AIX, but you shouldn't have to do any importing or creating of physical volumes in order to use ASM.
    2. I think all the storage arrays I've used have allowed access from more than one host. It's usually an access control list or something similar.
    Dan
