RAC with ASM and shared disks?

Hi all,
Can someone clarify this little point, please? If I use ASM as my storage with a RAC database, I have to configure the nodes with shared disks. At least this is what the UG says ...
When you create a disk group for a cluster or add new disks to an existing clustered disk group, you only need to prepare the underlying physical storage on shared disks. The shared disk requirement is the only substantial difference between using ASM in a RAC database compared to using it in a single-instance Oracle database. ASM automatically re-balances the storage load after you add or delete a disk or disk group.
With my 9i databases, I used HACMP to allow for concurrent VG access among the nodes. My questions are ...
1) How can I share this storage as stated above without using HACMP? My understanding is that with 10g I no longer have to use it.
2) Can Oracle's Clusterware be used to share storage? I have not seen any indication that it does.
3) Does this mean I still have to use HACMP with 10g CRS to allow shared storage?
Thank you

"...meaning visible to all the participating nodes, which you don't need HACMP..."
This is one step forward, but still not clear. On Unix, storage is presented to ASM as raw volumes. As such, how can these volumes be visible on all nodes without using HACMP (or whatever 3rd-party clusterware you are using)? Presenting raw volumes on several nodes is not something that is done at the OS level without some clusterware functionality.
I do understand that storage or LUNs can be shared at the SAN fabric level. But then, these LUNs are carved in big chunks, and I would like to be able to allocate storage at a much more granular level using raw partitions.
So all in all, here are my questions ...
1) On Unix platforms, can ASM disks be LUNs, raw volumes, or maybe both?
2) If raw volumes, how are these shared (or made visible) without using 3rd-party clusterware? Having managed 9i RAC, I know it was HACMP's job to make these volumes visible on all nodes; otherwise, we had to import/export the VGs on all nodes to make them visible.
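For what it's worth, my tentative understanding of how this would look on AIX without HACMP, once the same LUNs are zoned to every node through the SAN (hdisk numbers hypothetical):
# on each node: clear any PVID so no volume group ever claims the disks
chdev -l hdisk4 -a pv=clear
chdev -l hdisk5 -a pv=clear
# give the Oracle software owner access to the raw character devices
chown oracle:dba /dev/rhdisk4 /dev/rhdisk5
chmod 660 /dev/rhdisk4 /dev/rhdisk5
ASM would then find them through its discovery string, e.g. ASM_DISKSTRING='/dev/rhdisk*', with no VGs and no clusterware-managed volumes involved. Please correct me if this is wrong.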
Thank you

Similar Messages

  • RAC with ASM and without ASM

    Hi all,
we are planning to install an 11g RAC instance, active/active, and we are using SAN storage with RAID 10.
I know ASM is a nice feature, but it needs more maintenance down the road. This is what I take from the manuals and training, for patching and so on, because it is maintained as an instance.
Why do I need ASM, since I have a SAN and I can control mirroring, etc.?
I need a solid answer here: why do I need to use this feature when it can already be covered by another facility like the SAN?
    Best Regards,

What I have found in the RAC world is that there is maintenance no matter which way you go. A cluster file system will require upgrades, patches, etc. Raw volumes will require extra effort in allocation and so on, as well as increasing the number of files in the database. ASM requires an additional instance on each node to maintain, which is quite simple, and rolling patches in ASM are slowly becoming a reality. I have found that removing the management of raw volumes is more trouble than the maintenance of the ASM instances, and the added benefits of ASM definitely outweigh the maintenance. I found that the cluster file system maintenance is pretty well a wash.
As for ASM being widely used, the most recent RAC clusters (last 3) I have built have all been ASM: 1 on HP-UX and 2 on Linux (Red Hat and Oracle Enterprise Linux), and the future clusters coming up that I know of are all going to be ASM as well. While it may be true that a lot of existing RAC environments have not yet gone to ASM, almost all new RAC environments are. It is certainly taking hold. If you look at the effort on a large database to move to ASM from raw volumes or a cluster file system, it can appear to be a lot of work, and that is true, but in the long run my experience with ASM has been positive, so I would not hesitate to recommend that new RAC clusters be built with ASM and that existing clusters have a migration plan in place. With some cluster file systems, like Veritas, GPFS, etc., there is additional cost involved, where ASM does not have the additional cost, so moving existing clusters can save $$. Raw volume management may not fall on the DBA, but someone has to manage all those volumes at the SAN level, and that is additional management; it just may not sit with the DBA.
    Just my additional 2 cents worth.
    Hope this helps.

  • Production RAC with ASM and DR with non-rac and ASM

    Hi,
I have a question about whether this configuration is feasible or not. The production environment is 10gR2 RAC running on a 2-node cluster. The DR site will have a single instance with ASM. Disks from the primary site will be mirrored to the DR site using EMC SRDF. Would I be able to bring up a single instance at the DR site on the mirrored ASM disks once the split is done? We are currently not incorporating Data Guard for the DR solution.
    Wanted to check if anyone has done like this before.
    Thanks

This is feasible. SRDF is capable of providing data at the DR site that is always in a "crash-consistent state", without doing anything in the production site's RAC environment. Starting the database at the DR site will cause it to run crash recovery first. The SRDF synchronous or asynchronous feature should be used in this case.
At the DR site, your default OS device files could be different. Either change the ASM disk string or configure the device files for the disks so that they are the same as the shared device files on the production side. This way, you can use the ASM configuration as is.
Also, the database is to be started in non-cluster mode, by commenting out the cluster-related parameters (i.e., cluster_database and related settings) in the init*.ora parameter file.
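For example, a minimal sketch of the DR-side parameter changes (file names and device paths hypothetical):
# init+ASM.ora at the DR site: adjust discovery if the device paths differ
asm_diskstring='/dev/emcpower*'
# initORCL.ora at the DR site: cluster parameters commented out for non-cluster startup
db_name=ORCL
#cluster_database=true
#cluster_database_instances=2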

  • Does managing Oracle 10g RAC with ASM require full root access?

We currently have three entirely separate support areas: Unix, Storage, and DBA. We're now considering Oracle 10g RAC with ASM, and as part of the assessment we are trying to work out if we can still draw similar support boundaries. I know that installing RAC and configuring ASM requires root access, but will the DBAs continue to need root access to manage and support RAC? If so, does anyone know if the commands they need can be RBAC'd, or if we just need to share root access going forwards? I've had a look at a number of docs, including http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/toc.htm, which is fairly informative, but none of them seem to mention the requirement for root access on Solaris. I'm guessing they just assume that it's available, but that's not generally the case in our environment.
    All advice / info welcome!

    I would have thought that the only reason you would need root access once RAC and ASM had been set up would be to add more disks to the ASM configuration. This would be needed to change the ownership on the raw LUNs or to make additions to metasets (SVM/Oban) or diskgroups (VxVM/CVM). Beyond that, I can't imagine needing root access.
    I'm sure others will chime in if they can think of other reasons!
    Regards,
    Tim
    Edited by: Tim.Read on Jun 3, 2008 2:25 AM
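For example, the one-off work that typically still needs root when new LUNs are presented boils down to ownership and permission changes on the raw devices (device path hypothetical):
# run as root on each node so ASM can open the new LUN
chown oracle:dba /dev/rdsk/c3t5d0s6
chmod 660 /dev/rdsk/c3t5d0s6
Commands like these are good candidates for a Solaris RBAC profile or a sudo rule rather than full root access.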

  • Best practice for oracle 10.2 RAC with ASM

Has anyone tried/installed Oracle 10.2 RAC with ASM and CRS?
What is the best practice?
1. Separate homes for CRS, ASM, and the Oracle Database?
2. A separate home for CRS and the same home for ASM and the Oracle Database?
We set up the test environment with separate CRS, ASM, and Oracle Database homes, but we have tons of issues with the listener, spfile, and tnsnames.ora files. So, seeking advice from the gurus who have implemented/tested the same.

I am getting ready to install the 10gR2 database software (10gR2 Clusterware was just installed) and I want to have one home for ASM and another for the database, as you suggest. I have been told that 10gR2 was to have a smaller set of binaries that can be used for the ASM home ... but I am not sure how I go about installing it. A first look at the installer does not make it obvious ... Is it a custom install option?

  • Binding Luns in red hat linux5 while installing RAC with ASM

    Hi All,
I am in the process of installing RAC with ASM. Our OS team has presented the shared LUNs on the cluster nodes. Now I need to bind/map them to raw partitions. I did the step below. Could you please let me know the next steps to be done for the binding to be completed?
    # cat /etc/udev/rules.d/60-raw.rules
    # This file and interface are deprecated.
    # Applications needing raw device access should open regular
    # block devices with O_DIRECT.
    # Enter raw device bindings here.
    # An example would be:
    # ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
    # to bind /dev/raw/raw1 to /dev/sda, or
    # ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
    # to bind /dev/raw/raw2 to the device with major 8, minor 1.
    ACTION=="add", KERNEL=="/dev/emcpowerg", RUN+="/bin/raw /dev/raw/OCR.dbf %N"
    ACTION=="add", KERNEL=="/dev/emcpowerd", RUN+="/bin/raw /dev/raw/VOTING.dbf %N"
    ACTION=="add", KERNEL=="/dev/emcpowerf", RUN+="/bin/raw /dev/raw/ASM1 %N"
    ACTION=="add", KERNEL=="/dev/emcpowerc", RUN+="/bin/raw /dev/raw/ASM2 %N"
    ACTION=="add", KERNEL=="/dev/emcpowere", RUN+="/bin/raw /dev/raw/ASM3 %N"
    ACTION=="add", KERNEL=="/dev/emcpowerb", RUN+="/bin/raw /dev/raw/ASM4 %N"
    # cd /etc/udev/rules.d/
    # udevtest /dev/raw/OCR.dbf | grep mode
    # raw -qa
    # start_udev
    Starting udev: OK
    # raw -qa
    # ls -l /dev/raw
    ls: /dev/raw: No such file or directory
    Thanks for all your support,
    Sravan

hi
You should have got feedback on your other post referring to the same question:
Bindind raw partitions in Red hat linux5 for Oracle RAC with ASM install
regards,
hub
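For reference, a hedged sketch of what the rules probably need to look like on RHEL5: KERNEL matches the device name without the /dev/ prefix, and the raw binding target must be of the form /dev/raw/rawN (the emcpower letters are taken from the post above):
# /etc/udev/rules.d/60-raw.rules (sketch)
ACTION=="add", KERNEL=="emcpowerg", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="emcpowerd", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="emcpowerf", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="emcpowerc", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="emcpowere", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="emcpowerb", RUN+="/bin/raw /dev/raw/raw6 %N"
# ownership and permissions for the bound raw devices (adjust the OCR/voting owners to your install)
ACTION=="add", KERNEL=="raw[1-6]", OWNER="oracle", GROUP="dba", MODE="0660"
After editing, re-run start_udev and check the result with raw -qa and ls -l /dev/raw.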

  • SQLLoader issues of Oracle RAC with ASM

One of our clients wants to use Oracle RAC with ASM for our application. I just want to know if there would be any two-phase commit transaction or SQL*Loader issues with ASM.
    Database is Oracle 10g

    ASM works only at storage layer and has nothing to do with:
    - distributed transactions
    - client executable that connects to database instance: SQL*Loader, SQL*Plus, etc.
RAC also has nothing to do with distributed transactions: a RAC database is a single database with multiple instances, but still a single database; there is no need to use distributed transactions just because you have a RAC database.
    Edited by: P. Forstmann on 24 févr. 2011 13:27
    Edited by: P. Forstmann on 24 févr. 2011 13:31
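In other words, SQL*Loader connects over SQL*Net like any other client and never sees the storage layer; for example (service name and control file hypothetical):
sqlldr userid=scott/tiger@RACSVC control=emp.ctl log=emp.log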

  • Migrating Non ASM, Non RMAN to New Server with ASM and RMAN - Possible?

We currently have a database (Oracle 10g R1) on a Sun Solaris server that is NOT using ASM or RMAN. The database is about 300 GB. We are getting a new server and we want to install Oracle 10g R2 with ASM and RMAN and migrate the database.
I have seen the documentation on migrating non-ASM to an ASM server, but the methods all use RMAN. Is it possible to migrate to an ASM database without using RMAN? Would Data Pump import/export work if I created a new database on the new server with all the same tablespaces? Or do I have to bite the bullet, use RMAN on the old server, and do the backup?
    Thanks.

If you're not using RMAN, that doesn't mean you can't use it to perform a single backup; RMAN is included in every Oracle RDBMS installation.
This is only a sample of how to do it:
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '<file_system_path>/%U.DBF';
# first we configure the default disk channel, then run the backup
RMAN> RUN
{
ALLOCATE CHANNEL DEFAULTCHANNEL TYPE DISK;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
BACKUP DATABASE;
SHUTDOWN;
}
Then once you have the backup, you can do what you want.
    It should also be possible to manually restore the database from the original datafiles but it's better to follow the solution involving RMAN.
    Bye Alessandro
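As a follow-up, once a disk group exists on the new server, the actual move into ASM can be done with an image copy and a switch; a minimal sketch, assuming a disk group named +DATA already exists:
RMAN> STARTUP MOUNT;
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
RMAN> SWITCH DATABASE TO COPY;
RMAN> ALTER DATABASE OPEN;
Note this covers the datafiles; the control files, redo logs, and temp files need to be recreated in or moved to ASM separately.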

  • How to convert single instance10g db to 11gR2 RAC with ASM

    Hi,
I need your help deciding on a plan for how to convert a single-instance 10g database to 11gR2 RAC with ASM.
I can have about 6 to 8 hours of downtime to upgrade and move to RAC with ASM.
The db size is about 1.5 TB, on AIX.
Here is my plan ...
1) install 11gR2 RAC with ASM on two nodes
2) verify the RAC installation and clustered ASM
3) install 10g Oracle binaries (yes, 10g)
4) shut down the production db (machine prod)
5) make a copy of production and restore it on the 1st node (using a shadow image, so it's quick, and it's a file system)
6) upgrade the db to 11g (still on the file system)
7) after a successful upgrade, move to ASM (RMAN backup)
8) add the other node
Does it look okay? OR is there a better approach to save time?
Can someone help me?
    Thanks...

    Thanks ...
So here is what I thought ... suggest if something is not right ...
1) install 11gR2 Grid Infrastructure on nodes A and B with ASM
2) stop the cluster on both nodes
3) shut down the prod db on 10g (downtime starts)
4) take an RMAN cold backup
5) restore the RMAN backup on node A into ASM (as if single-instance 11g, no RAC parameters)
6) mount and run the upgrade script for 11g, then open the db with 11g
7) after a successful upgrade, shut down the db on node A
8) change all RAC-related parameters (spfile, undo, redo) for the RAC environment on both nodes
9) open the db in the RAC environment
Can I do it this way?
My only question is: even though I installed RAC on nodes A and B, in steps 5 and 6 I'm using only node A as if it were a single instance. Is that possible?
If it is, then I'm good to go ...
    Thanks for all suggestions.
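For step 8, a sketch of the parameter and thread changes; instance names PROD1/PROD2, the +DATA disk group, and the sizes are hypothetical:
ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE;
ALTER SYSTEM SET instance_number=1 SID='PROD1' SCOPE=SPFILE;
ALTER SYSTEM SET instance_number=2 SID='PROD2' SCOPE=SPFILE;
ALTER SYSTEM SET undo_tablespace='UNDOTBS1' SID='PROD1' SCOPE=SPFILE;
CREATE UNDO TABLESPACE undotbs2 DATAFILE '+DATA' SIZE 2G;
ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SID='PROD2' SCOPE=SPFILE;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 ('+DATA') SIZE 512M, GROUP 4 ('+DATA') SIZE 512M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;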

  • RAC with ASM

    Hi,
We will be implementing Oracle 10g RAC with ASM.
Just wondering how many production systems are running on ASM.
Are there any known issues with ASM for performance and maintenance?
Please advise.
    Thanks in advance.

Check for more information about the difference between OCFS and ASM in the Metalink doc:
Automatic Storage Management (ASM) and Oracle Cluster File System (OCFS) in Oracle10g
Doc ID: Note:255359.1
ASM provides many advantages over pure raw devices; I believe it's fairly standard to use ASM together with RAC in 10g, perhaps with some files that ASM doesn't handle on OCFS2.

  • How do i licence a two node RAC with ASM infrastructure

I am trying to estimate the licensing of a two-node RAC with ASM as the storage manager and Grid Control as the management tool.

    Hi,
Oracle licensing depends on a lot of things: the edition of Oracle you're running (Standard or Enterprise), the optional packages you need, and so on.
Read through http://docs.oracle.com/cd/E11882_01/license.112/e10594/editions.htm#BABDJGGI
It also depends on whether you license named users or processors. You can get the prices from the online Oracle store:
    https://oraclestore.oracle.com/OA_HTML/ibeCCtdMinisites.jsp?language=US&ref=ibeCZzpHome.jsp

  • 12.1.3 EBS single node 11.2.0.3 database to 2 node RAC with ASM

    Hi,
We are planning to convert a single-node, ordinary (local) file system 11.2.0.3 database to a 2-node RAC with ASM.
Please help me in creating a roadmap for the same, e.g.:
1. Create the shared raw file system
2. Create disk groups for ASM
3. Convert the local file system database to ASM first
etc., etc.
This is the first big task, hence the need for expert guidance.

    Please refer to:
    Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12 (Doc ID 823587.1)
    Oracle E-Business Suite Release 12 High Availability Documentation Roadmap (Doc ID 1072636.1)
    Thanks,
    Hussein

  • Manual of adding the new node on RAC with ASM

Hello everybody,
Does someone have the manual for adding a new node to RAC with ASM on Solaris?
regards
    Spaulonci

Go to http://www.oracle.com/technology/documentation/index.html, select your database version (you didn't say which) and search for a manual named 'Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide'.
    Werner
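In that guide the procedure centers on addNode.sh, run from an existing node as the software owner; a minimal sketch (node and VIP names hypothetical):
cd $CRS_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
This is followed by the root scripts the installer tells you to run, and by the same addNode.sh step from the ASM and database homes.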

  • RAC with Dbnode1 and Dbnode2

I have RAC with Dbnode1 and Dbnode2, and my application submits a job on Dbnode1. The job is running and Dbnode1 goes down.
Is it possible for the running job to automatically move to Dbnode2?
1. The App1 and App2 nodes and the DBNode1 and DBNode2 nodes are running.
2. An application batch is submitted successfully from Appnode1/DBnode1, and DBNode1 goes down in the middle of the batch.
3. The pending job does not switch automatically to DBnode2.

995587 wrote:
I have RAC with Dbnode1 and Dbnode2 ... Is it possible for the running job to automatically move to Dbnode2?
Yes, it is completely possible, but your application needs to support Oracle RAC.
    Client Failover Best Practices for Highly Available Oracle Databases: Oracle Database 11g Release 2
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr2-client-failover-173305.pdf
    Application Failover with Oracle Database 11g
    http://www.oracle.com/technetwork/database/app-failover-oracle-database-11g-173323.pdf
    How to develop it:
    Transparent Application Failover in OCI
    http://docs.oracle.com/cd/E14072_01/appdev.112/e10646.pdf
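As a concrete approach for batch jobs, a sketch using a service with a failover instance plus a DBMS_SCHEDULER job class bound to it (all names hypothetical):
# from the OS: define a service with a preferred and an available instance
srvctl add service -d ORCL -s BATCH_SVC -r ORCL1 -a ORCL2
-- in SQL*Plus: scheduler jobs placed in this class only run where BATCH_SVC is up
BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'BATCH_CLASS',
    service        => 'BATCH_SVC');
END;
/
A job that dies mid-run on Dbnode1 is not transparently resumed; it is restarted on Dbnode2 according to its schedule and restartable attribute, which is why the application itself must tolerate a restart.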

  • Oracle 10g RAC design with ASM and OCFS

    Hi all,
I have a question about a proposed Oracle 10g Release 2 RAC design for a 2-node cluster.
ASM can store database files, but not the Oracle binaries or the OCR and voting disk. In addition, OCFS version 1 does not support a shared Oracle home. We plan to use OCFS version 2 with ASM version 2 on Red Hat Enterprise Linux Server 4 with Oracle 10g Release 2 (10.2.0.1).
With OCFS v2, a shared Oracle home and shared OCR and voting disks are supported. My question is: does the following proposed architecture make sense for OCFS v2 with ASM v2 on Red Hat Linux 4?
    Oracle 10g Release 2 on Red Hat Enterprise Linux Server 4:
    OCFS V2:
    - shared Oracle home and binaries
    - shared OCR and vdisk files
- CRS software on a shared OCFS v2 filesystem
    - spfile
    - controlfiles
    - tnsnames.ora
    ASM v2 with ASMLib v2:
    Proposed ASM disk groups:
    - data_dg for application data
    - backupdg for flashback and archivelogs
    - undo_rac1dg ASM diskgroup for undo tablespace for racnode1
    - undo_rac2dg ASM diskgroup for undo tablespace for racnode2
    - redo_rac1dg ASM diskgroup to hold redo logs for racnode1
    - redo_rac2dg ASM diskgroup to hold redo logs for racnode2
    - temp1dg temp tablespace for racnode1
    - temp2dg temp tablespace for racnode2
    Does this sound like a good initial design?
    Ben Prusinski, Senior DBA
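For reference, each proposed disk group would be created along these lines from the ASM instance (ASMLib disk labels hypothetical):
CREATE DISKGROUP data_dg EXTERNAL REDUNDANCY
DISK 'ORCL:DATA1', 'ORCL:DATA2';
with the same pattern repeated for the undo, redo, temp, and backup groups.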

OK Tim, thanks for the advice.
I think NetBackup can be integrated with RMAN, but I don't want to lose time on this (political).
To summarize:
ORACLE_HOME and CRS_HOME on each node (RAID 1 and NTFS)
Shared storage:
Disk 1 and disk 2: RAID 1: - raw partition 1 for the OCR
- raw partition 2 for the voting disk
- OCFS for the FLASH_RECOVERY_AREA
Disks 3, 4 and 5: RAID 0 - raw devices with ASM normal redundancy, 1 disk group for database files.
This is a running project here; we will start testing the design on VMware and then go for the production setup.
    Regards
