Storage for RAC & ASM

I am planning to install Oracle 10g RAC with ASM for our Oracle 10.2.0.4 database on Solaris 10. We have a Sun StorEdge SAN.
I would like to get your suggestions for the best storage infrastructure for the RAC/ASM.
Can someone share their storage design - RAID LUNs and disk layout - for the RAC environment? Do you have RAID volumes or individual disks presented to the OS? If RAID volumes, what RAID level are you using? Since ASM stripes and mirrors (normal or high redundancy), how have you laid out your storage for ASM?
Should I create one RAID LUN and present it to the operating system so that the ASM diskgroup can be created from that LUN, or should individual disks be presented?
If anyone can point me to any documentation that sheds some light on the storage design and layout, it would be really helpful.
Thanks in advance!

Refer to the links below; they may help you.
http://www.oracle.com/technetwork/database/clustering/tech-generic-unix-new-166583.html
http://www.orafaq.com/node/66
http://www.dba-oracle.com/real_application_clusters_rac_grid/cluster_file_system.htm
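As a minimal illustration (the disk names below are hypothetical Solaris raw device paths, not from this thread): if the StorEdge array already provides RAID protection, a common layout is to present several equally sized LUNs and let ASM stripe across them with external redundancy, e.g.:
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/c2t1d1s6', '/dev/rdsk/c2t1d2s6', '/dev/rdsk/c2t1d3s6';
With hardware RAID you would normally not also mirror in ASM; NORMAL or HIGH redundancy is for when ASM itself must provide the protection.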
Regards,

Similar Messages

  • Firewire storage for RAC

    DB Version: 11.2.0.2
    OS : Solaris 5.10
    We are thinking of setting up a RAC DB (development) with a FireWire 800 device as our shared storage. We are thinking of buying the 2-port FireWire storage device mentioned in the URL below, along with 2 FireWire PCI cards for both of our machines.
    http://www.lacie.com/asia/products/product.htm?id=10330
    I have read in other OTN posts that RAC with FireWire storage is only good for demo purposes. Does this mean that it is not good even for development DBs?

    T.Boyd wrote:
    > I have read in other OTN posts that RAC with FireWire storage is only good for demo purposes. Does this mean that it is not good even for development DBs?
    Oracle "supports" this in a pure development environment. There is (or was) an Oracle-owned mailing list that dealt specifically with this - using FireWire shared storage for RAC.
    Some years ago I tried it (with a LaCie drive), but I could not get both RHEL3 servers to open a connection to the drive. The config was correct, but the 2nd server always failed to establish a connection (complaining that no more connections to the drive were supported). I put that down as a driver bug of sorts and was not keen to go bug hunting and build the driver from source code. I left it at that, but according to the docs I read (from Oracle) at the time, this was a "valid" config for testing RAC.

  • Creating Standby for RAC ASM database using RMAN

    We have a primary site with a 3-node RAC ASM database, and we take a daily RMAN backup with the following script:
    run {
    allocate channel c1 device type disk format 'g:\rmanbackup\%U';
    backup database;
    backup archivelog from time 'trunc(sysdate-1)' until time 'sysdate';
    }
    We have configured a 3-node RAC cluster at the standby site and have copied the RMAN backup folder to one of the nodes there.
    Please help us restore the RMAN backup. The backup size is around 200GB. We do not know how to use the DUPLICATE option in RMAN: if it restores from the primary location, transferring 200GB over the network will not be possible.
    We need a solution to restore it directly from the backup folder available on the DR server. We are not using a catalog.
    OS : Windows IA 64-bit
    RDBMS : Oracle 10.2.0.4
    Storage : ASM
    DB Nodes at Primary Site: Node1, Node2, Node3
    DB instances at primary site : ORCL1, ORCL2, ORCL3
    DB Nodes at Standby Site: Node101, Node102, Node103
    DB instances at Standby site : ORCL1, ORCL2, ORCL3
    DB Name : ORCL on both the sites.

    When you create the standby, you can use cataloged backups stored somewhere local to your standby servers.
    For that, they need to be copied to the standby server (or taken from production there). After restoring the standby controlfile on the standby server, use RMAN to catalog the backups to be used. Then you can use DUPLICATE... and it will read your local backup files.
    Is the directory g:\rmanbackup of your RMAN script local to the standby servers?
    The docs for this have a lot of details: http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/rcmbackp.htm
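    A minimal sketch of the idea (the connect aliases are hypothetical, and the backup pieces must already sit on the standby host under the same g:\rmanbackup path, so only the controlfile is read over the network, not the 200GB of backups):
    rman target sys/***@ORCL_PRIM auxiliary sys/***@ORCL_STBY
    run {
    allocate auxiliary channel a1 device type disk;
    duplicate target database for standby dorecover nofilenamecheck;
    }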
    Regards.

  • Please help me decide on the storage for RAC!!

    Hello,
    I am looking for some help deciding on hardware in order to install 10g R2 RAC. I have two IBM P5-series boxes with IBM D24 storage (SCSI storage). I followed Oracle's Clusterware installation guide. It says you have to create physical volumes in "concurrent mode", which my system does not support; even if I import the physical volume with LVs in concurrent mode, it gives an error that this is not a concurrent PV. I found that HACMP is needed to do this, but at the same time I found that HACMP is not needed... I am very confused.
    Could a SAN get rid of this kind of problem? Is there any software with SAN for concurrency?
    Any suggestions are highly appreciated.
    Thanks

    1. I think what Ashok was saying is that you don't need to put these disks under the volume manager at all - just address them as raw disk devices. I don't recall where the raw disk devices are found on AIX, but you shouldn't have to do any importing or creating of physical volumes in order to use ASM; see the sketch after the next point.
    2. I think all the storage arrays I've used have allowed access from more than one host. It's usually an access control list or something similar.
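    A hedged sketch of that approach (the rhdisk names are hypothetical AIX raw devices, not from this thread), run against the ASM instance after giving the devices oracle:dba ownership:
    ALTER SYSTEM SET asm_diskstring = '/dev/rhdisk*' SCOPE=SPFILE;
    CREATE DISKGROUP DATA NORMAL REDUNDANCY
      DISK '/dev/rhdisk4', '/dev/rhdisk5';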
    Dan

  • Shared Storage for RAC

    Dear All,
    What are the best options for a shared storage system for Oracle RAC 10g R2 on the Windows operating system?
    How do you share a disk in Windows so that it is available to all RAC nodes in dynamic mode?
    I need help from people who have configured RAC on the Windows operating system and have used the shared disk option.
    Regards,
    Imran

    In production, the only realistic options are to turn to certified SAN or NAS vendors.
    The issue is simple: even though many types of shared storage allow you to have multiple machines connected to the same disk, certified shared storage allows these machines to write to the same disk sector and block. Most storage solutions have built-in protection to stop exactly that from happening.
    For a small shop, I certainly recommend NetApp NAS.

  • OVM disks for RAC implementation

    Dear All
    Is there any guide available on how to create the disks for RAC ASM in OVM 3.3.1 using Fibre Channel block-level storage?
    Thanks
    George

    You are right, you can't use virtual disks for RAC configuration. Have a look here (especially page 18):
    http://www.oracle.com/technetwork/products/clustering/oracle-rac-in-oracle-vm-environment-131948.pdf
    Using physical disks means that you create LUNs on your storage array connected to the Oracle VM Servers by Fibre Channel. You map these LUNs to all servers in the pool, or to all standalone servers where you are going to install the virtual machines that will be Clusterware nodes. Then you rediscover storage in Oracle VM Manager, mark these LUNs as "shared" in OVMM and add them to your virtual machines as "Physical disks" (by editing guest properties in OVMM).
    Alternatively you can directly map iSCSI or NFS storage to your guests. By "directly" I mean you use IP addresses and software in your guests as iSCSI initiator or NFS client - without engaging Oracle VM in the middle.
    Regards,
    Michal

  • The best option to create a shared storage for Oracle 11gR2 RAC in OEL 5?

    Hello,
    Could you please tell me the best option for creating shared storage for Oracle 11gR2 RAC on Oracle Enterprise Linux 5, in a production environment? And could you help with creating the shared storage? There is no additional step for it in the Oracle installation guide; there are steps only for ASM disk creation.
    Thank you.

    Here are the names of the partitions and their permissions. The partitions with 146 GB, 438 GB and 438 GB of capacity are my shared storage. The two 438 GB disks were configured as RAID 5 and the remaining disk as RAID 0. My storage is a Dell MD3000i, connected to the nodes through Ethernet.
    Node 1
    [root@rac1 home]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:39 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:40 /dev/sda1
    brw-r----- 1 root disk 8, 16 Aug 8 17:39 /dev/sdb
    brw-r----- 1 root disk 8, 17 Aug 8 17:39 /dev/sdb1
    brw-r----- 1 root disk 8, 32 Aug 8 17:40 /dev/sdc
    brw-r----- 1 root disk 8, 48 Aug 8 17:41 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 18:26 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:43 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 18:34 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:43 /dev/sdf1
    brw-r----- 1 root disk 8, 96 Aug 8 18:34 /dev/sdg
    brw-r----- 1 root disk 8, 97 Aug 8 18:43 /dev/sdg1
    [root@rac1 home]# fdisk -l
    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8844 71039398+ 83 Linux
    Disk /dev/sdb: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 4079 32764536 82 Linux swap / Solaris
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 17784 142849948+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    Disk /dev/sdg: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 53352 428549908+ 83 Linux
    Node 2
    [root@rac2 ~]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:50 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:51 /dev/sda1
    brw-r----- 1 root disk 8, 2 Aug 8 17:50 /dev/sda2
    brw-r----- 1 root disk 8, 16 Aug 8 17:51 /dev/sdb
    brw-r----- 1 root disk 8, 32 Aug 8 17:52 /dev/sdc
    brw-r----- 1 root disk 8, 33 Aug 8 18:54 /dev/sdc1
    brw-r----- 1 root disk 8, 48 Aug 8 17:52 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 17:52 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:54 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 17:52 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:54 /dev/sdf1
    [root@rac2 ~]# fdisk -l
    Disk /dev/sda: 145.4 GB, 145492017152 bytes
    255 heads, 63 sectors/track, 17688 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8796 70653838+ 83 Linux
    /dev/sda2 8797 12875 32764567+ 82 Linux swap / Solaris
    Disk /dev/sdc: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 17784 142849948+ 83 Linux
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 53352 428549908+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    [root@rac2 ~]#
    Thank you.

  • Oracle White Paper - Storage Options for RAC on Linux

    Hi,
    This is in reference to the white paper "Storage Options for RAC on Linux" by Umadevi Byrappa.
    It states on page 10: to use NAS for RAC database file storage, select the file system storage option in OUI, or the clustered file system storage option in DBCA.
    I disagree. The file storage option of Oracle 10g RAC only shows:
    1. Clustered file system.
    2. ASM.
    3. Raw devices.
    When I try to select CFS and select the shared directory on NFS, it says <Directory_Name> is not a clustered file system or shared on both <Server_1> and <Server_2>, and at this point I am stuck.
    1. I don't want to use OCFS, as it does not support NAS.
    2. Selecting CFS doesn't recognise the mounted shared volume as a valid storage device option; it can only store the OCR file and CSS file.
    3. So I have to use ASM with zero-padded files, which I don't want to, but there is no other option. (PART NO. B10766-02, PAGE C-6)
    Also, I would like Oracle to provide a backup/recovery option, or a document which tells me how I could recover the database when I use zero-padded files.
    What would be the best option in the above scenario?
    I hope that by applying a patch or with a workaround it will somehow show "File system" only. I'll be the happiest man in that case.
    Any suggestions and corrections are most welcome. I hope I am wrong.
    Nadeem ( [email protected] )

    NFS isn't a clustered file system at all - by RFC it is an exported file system with access controls.
    If your vendor offers an NFS solution over and beyond that - more power to them. However, that isn't "NFS".
    http://www.faqs.org/rfcs/rfc3010.html
    You can use NFS in a clustered server environment (to mount apps, read-only data, and for synchronous access to files); however, it doesn't support the concurrency needed for RDBMS transactions out of the box - if a vendor is supporting this promise, then whether to trust it is for you to decide.
    However, I stand by my statement that NFS is a file system exported across your network and not a full clustered file system.

  • ASM best configuration disks on EMC storage in RAC

    After reading the ASM best practices, which suggest using one disk group for all datafiles and one disk group for the FRA, I was wondering how to optimize I/O on the storage.
    I mean... what about also adding a disk group for the redo logs of the two RAC instances?
    Moreover... what are the best practices for using the several physical disks offered by the storage, compared with the possibility of using different LUNs on the same big disk of the storage?
    Example...
    My storage has 2 disks of about 2TB each:
    1) I could partition one disk into two LUNs and use these two LUNs for the DG_DATA and DG_FRA ASM disk groups. Every LUN would be partitioned into small units.
    2) I could use the two disks, with one disk for the DG_DATA disk group and the other for DG_FRA.
    3) I could partition each of the two disks into two LUNs (giving 4 LUNs), add to DG_DATA the two LUNs that sit on different physical disks, and use the other two LUNs for DG_FRA.
    Hope you can solve my doubts,
    thanks Marco

    Here's a brief comment...
    There is no one answer that fits all, because it all depends on your SAN environment and the storage architecture. Some configs are good for one system but disastrous for another.
    Also, we must make a clear distinction between the terms LUNs and disks. Because DBAs interact with storage at the host level, we use them synonymously, which is understandable. However, for SAN and storage people, they should not be treated as the same, and they are not.
    These are some of the guidelines I have learned...
    1) Use small spindles with the highest RPM for your disk arrays. We rarely have control over that, but DBAs should at least understand and obtain these specs from the SAN admins.
    2) Understand your system's potential bottlenecks and load-balance around them. For example, I've seen systems bottleneck at the HBA, DA, spindle, or even host level. This is where you need to take care when you lay out DB storage. You do not want 2 I/O-intensive apps accessing the same LUNs on the same controller while the 2nd controller remains semi-idle. Most systems should auto-balance them, but at least these apps' primary or preferred access paths should be separated.
    3) Use the expected DB size to decide on your LUN size. You cannot lay out a TB-scale database using 12GB LUNs, nor can you use a 500GB LUN to lay out a 50GB DB.
    4) For most storage systems now, data separation occurs as a result of data protection, management convenience, and to a lesser extent for performance reasons or I/O type (random vs. sequential). Again, the accuracy of this statement varies from one system to another, but at least this is the trend.
    5) Always, always measure your I/O metrics. There are simple yet effective measurements you (or the SAN admin) can take to assess I/O performance. Use metrics like service times and latency. Also, most applications have typical I/O profiles, like PIO/sec and sequential vs. random reads.
    6) On some systems, I have seen DBAs request LUNs carved from RAID10 arrays for redo logs and RAID5 for the remainder of the database. You can certainly exercise this option, as sketched below.
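    As a hedged illustration of that last option (device paths are hypothetical; with EMC PowerPath they would typically be /dev/emcpower* pseudo-devices):
    CREATE DISKGROUP DG_REDO EXTERNAL REDUNDANCY
      DISK '/dev/emcpowera1', '/dev/emcpowerb1';  -- LUNs carved from a RAID10 array
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+DG_REDO') SIZE 512M;
    ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 ('+DG_REDO') SIZE 512M;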
    Hope this helps
    Thank you

  • Cheap shared storage for test RAC

    Hi All,
    Is there a cheap shared storage device available to create a test-environment RAC? I used to create RAC with VMware, but that environment is not very stable.
    Regards

    Two options:
    The Oracle VM templates can be used to build clusters of any number of nodes using Oracle Database 11g Release 2, which includes Oracle 11g Rel. 2 Clusterware, Database, and Automatic Storage Management (ASM), patched to the latest recommended patches.
    This is supported for Production.
    http://www.oracle.com/technetwork/server-storage/vm/rac-template-11grel2-166623.html
    Learn how to set up and configure an Oracle RAC 11g Release 2 development cluster on Oracle Linux for less than US$2,700.
    The information in this guide below is not validated by Oracle, is not supported by Oracle, and should only be used at your own risk; it is for educational purposes only.
    http://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-088677.html
    Regards,
    Levi Pereira

  • Storage Knowledge for RAC

    Please advise me on a few books for developing my knowledge of EMC storage.
    We are planning to buy EMC for a RAC setup.
    There are not many books around that topic.

    Learning Oracle,
    I don't think this or any other Oracle forum is a place for free consultancy, especially if you don't specify any details.
    Richmond Shee and others have published a book on the Oracle Wait Interface; it is pretty indispensable.
    Oracle Press also has a book out on ASM.
    As this is not the place to post free abstracts of books made by volunteers, I suggest you visit sites like www.amazon.com or www.bn.com or whatever vendor you have access to.
    Sybrand Bakker
    Senior Oracle DBA

  • Storage related question for RAC

    Hi Gurus,
    I have a question about storage design. My company already has a 2-node Oracle RAC set up on Linux, with 11g for Clusterware and ASM and 10g for the database (we are still using 10g for the database for application-support reasons). This RAC is dedicated to one database shared by several applications.
    Now we have decided to add two more nodes and additional storage to it.
    The existing storage uses EMC SAN DMX 4574 (raw partition) disks, and the additional storage disks provided by our storage team are EMC SAN DMX 3752.
    Is it possible to combine these two different disk series (from the same vendor) into one single storage subsystem visible to the entire RAC database? All 4 nodes will be used for the same database and application. It's a production system, so we wanted to confirm before proceeding, as we don't have a test scenario.
    Shall we connect these different disks to the switches so that ASM can see all the disks at once and they can be used for the database?
    Please advise.
    Regards,
    vasu

    Hi Vasu,
    > The existing storage uses EMC SAN DMX 4574 (raw partitions) disks. And the additional storage disks provided by our storage team are EMC SAN DMX 3752. Is it possible to combine these two different disk series into one single storage subsystem visible to the entire RAC database?
    First of all, you have to ask the OS support team whether there is any kind of problem mixing them. I don't think it would be a problem, but as you don't have any test environment, you'd better ask them.
    On the ASM side it's not a problem; you can work with this kind of configuration. In fact, I have a couple of environments using this configuration: an expensive storage tier with better performance for critical tablespaces, and a lower-cost tier for historical data and other data that is not accessed all the time, as sketched below.
    > Shall we connect these different disks to the switches so that ASM can see all the disks at once and they can be used for the database?
    Well, as I said before, I don't think it would be a problem, but again... you should ask the OS support team. On the ASM side, you should not mix disks with different behaviour in the same diskgroup; use disks of the same size and with similar or equal performance characteristics. As ASM balances I/O over the disks in a diskgroup, it's important to configure it this way in order to avoid performance problems.
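    A hedged sketch of the tiering idea (diskgroup names and raw device paths are hypothetical):
    CREATE DISKGROUP DG_FAST EXTERNAL REDUNDANCY
      DISK '/dev/raw/raw1', '/dev/raw/raw2';    -- LUNs on the faster DMX array
    CREATE DISKGROUP DG_HIST EXTERNAL REDUNDANCY
      DISK '/dev/raw/raw11', '/dev/raw/raw12';  -- LUNs on the slower DMX array
    CREATE TABLESPACE HIST_DATA DATAFILE '+DG_HIST' SIZE 10G;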
    Hope it helps,
    Cerreia

  • Considering shared storage for Oracle RAC 10g

    Hi, guys!
    My Oracle RAC will run on VMware ESXi 5.5, so both nodes and the shared storage are VMs. Don't blame me for this; I don't have another choice.
    I am choosing shared storage for Oracle RAC: an NFS or an iSCSI server, either of which can be built on Red Hat Linux or FreeNAS.
    Can you guys help me make the choice?
    Red Hat or FreeNAS?
    iSCSI or NFS?
    Any help will be appreciated.

    JohnWatson wrote:
    NFS is really easy. Create your zero-filled files, set the ownership and access modes, and point your asm_diskstring at them. Much simpler than configuring an iSCSI target and initiators, and then messing about with ASMlib or udev.
    I recorded a public lecture that (if I remember correctly) describes it here, Oracle ASM Free Tutorial
    I will be using OCFS2 as the cluster FS. Does it make any difference for NFS vs iSCSI?
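    For reference, a hedged sketch of the NFS-file route JohnWatson describes (paths are hypothetical; the zero-filled files would be created beforehand, e.g. with dd, and given oracle ownership and 660 modes on the NFS mount):
    ALTER SYSTEM SET asm_diskstring = '/u01/nfs_asm/disk*' SCOPE=SPFILE;
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/u01/nfs_asm/disk1', '/u01/nfs_asm/disk2';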

  • Installing RAC/ASM for OCP preparation

    Hello Fellow Forumites,
    I just passed OCA in 11g.
    Now I want to study for OCP.
    My approach has always been to self-study before registering for an instructor-led class. To achieve this, I am currently using Sybex (Sybex OCP: Oracle Database 11g Administrator Certified Professional Study Guide, Exam 1Z0-053).
    My question is: how do I install RAC and ASM on my server?
    I have checked the hardware requirements for RAC and my server passed. It is running Windows Server 2003, but when I try to configure RAC/ASM, I get an error telling me that it cannot be installed on my server.
    Your kind help will be highly appreciated.
    I forgot to mention that I am using Enterprise Edition 11g R2.
    Thanks
    Harrison

    If you're getting an error installing software, you're probably better off posting this in the Database Installation forum.
    When you do, it would be quite helpful to post the exact error stack you are getting and exactly what step of the installation process you are on.
    Justin

  • Archivelogs deleted in RAC ASM Production for standby

    Hi,
    DB: 10.2.0.4
    OS: AIX 5.3 L
    This is 2 node RAC ASM database with single standby on ASM.
    I set the retention policy to 2 days for my live database. While running the tape backup script, archivelogs were deleted; the deleted archivelogs were generated just before or during the tape backup. The backup script did not honour the retention policy on the primary: all archivelogs except those generated after the backup were deleted.
    Now my standby is waiting for the deleted archivelogs.
    How can I bring my standby back in sync with the primary?
    Please understand my position; I request your help on this.
    Regards,
    Sunand

    Hi Jean & Abhi,
    Thank you very much for your replies.
    I got around the problem by taking an incremental backup on the primary from the standby's SCN value.
    Actually, I did not have a backup of the deleted archivelogs. Now my standby database is in sync with the primary and working fine.
    About recovering the deleted archivelogs from a backup piece: if I run the script below, will it restore the specified archivelogs to our archivelog destination, or will it recover them into the database? I have never done this before. Could you explain it to me, please?
    run {
    allocate channel t1 type sbt.....;
    restore archivelog from sequence X until sequence Y thread 1;
    }
    Also, my database is on ASM. Once we restore the archivelogs, how can I move them to the standby? I mean, moving them from ASM to the file system and registering them with the standby database.
    Thank you very much,
    Regards,
    Sunand
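
    For what it's worth, one common way to copy archivelogs out of ASM onto a plain file system is RMAN's BACKUP AS COPY (the destination format below is just a placeholder, and X/Y are the sequence numbers as in the script above):
    run {
    allocate channel d1 device type disk;
    backup as copy archivelog from sequence X until sequence Y thread 1 format '/u01/arch_out/%U';
    }
    Each copied file could then be made known to the standby with ALTER DATABASE REGISTER LOGFILE '<copied file>';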
