11gR2 OCR and ASM, recommendation please

For 11gR2 - storing OCR on ASM
I see posts recommending that the OCR be stored in a separate disk group from database or recovery files, but no detail on why.
I'm merely seeking to understand the recommendation, I'm not questioning it.
Can someone please detail why the OCR should be separated?
Thanks

One reason could be to separate access to Oracle Database files from the Oracle Clusterware files. Originally, I believe it was recommended to use separate disk groups for Oracle Clusterware files and Oracle Database data files because storing the voting disk in Oracle ASM requires more failure groups than is required for other disk groups.
A normal redundancy disk group normally requires 2 failure groups (or two independent disk devices), but when you store a voting disk in a normal redundancy disk group 3 failure groups (or 3 disk devices) are required.
For example, if you have a normal redundancy disk group that stores the OCR, voting disks, and data files, and you want 150 GB of space for the database files, then you would need 3 disks with a total size of 450 GB. If you use separate disk groups for the Oracle Clusterware files and Oracle Database files, and both disk groups are normal redundancy, then you would still need three disks, but only 306 GB of disk space (assuming each disk in the Oracle Clusterware disk group is a 2 GB partition).
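For illustration, a minimal sketch of that layout in ASM SQL (the disk paths and the CRS_DATA name are placeholders, not from the post; the sizes follow the example above):
-- small normal-redundancy disk group for OCR and voting files:
-- three 2 GB disks, i.e. three failure groups (6 GB total)
CREATE DISKGROUP CRS_DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/asm-crs1'
  FAILGROUP fg2 DISK '/dev/asm-crs2'
  FAILGROUP fg3 DISK '/dev/asm-crs3'
  ATTRIBUTE 'compatible.asm' = '11.2';
-- normal-redundancy data disk group: two 150 GB disks (two failure groups)
-- give roughly 150 GB of usable space for database files (300 GB total)
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/asm-data1'
  FAILGROUP fg2 DISK '/dev/asm-data2'
  ATTRIBUTE 'compatible.asm' = '11.2';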

Similar Messages

  • OCR and ASM dependancy in 11.2

    Grid Version: 11.2.0.2
    Platform : Solaris 10
    Question1.
Since ASM's configuration information is stored in the OCR, if the OCR is lost (e.g. due to a corrupt LUN in the OCR's disk group), will the ASM instance crash?
Question2.
If the ASM instance crashes (e.g. someone accidentally killed a mandatory ASM process), will the OCR be accessible, given that the OCR is stored in an ASM disk group?
Question3.
What precautions can I take to protect the OCR from failures?

    Tom wrote:
    Hi Levi, Kuljeet
Because of the OLR (Oracle Local Registry), I was under the impression that ASM won't crash even if the OCR is lost.
    http://www.linkedin.com/groups/How-restore-OCR-in-11gR2-3156190.S.93908910
    Hi Tom,
When Clusterware starts, three files are involved.
OLR - the first to be read and opened. This file is local to the node and contains the information about where the voting disk is stored, plus the information needed to start ASM (e.g. the ASM discovery string).
VOTING DISK - the second file to be opened and read; reading the voting files only depends on the OLR being accessible. ASM starts after CSSD; ASM does not start if CSSD is offline (i.e. the voting files are missing).
OCR - finally the ASM instance starts and mounts all disk groups; then the Clusterware daemon (CRSD) opens and reads the OCR, which is stored in a disk group.
So, once ASM has started, ASM does not depend on the OCR or OLR being online; ASM depends on CSSD (the voting disk) being online.
There is an exclusive mode to start ASM without CSSD (but it is only for restoring the OCR or voting disk).
    Regards,
    Levi Pereira
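As a quick check of that sequence from the OS, commands along these lines can be used (a sketch; it assumes the 11.2 Grid home is in the PATH, and the output varies by version):
# 1. OLR - local to the node, read first
ocrcheck -local
# 2. Voting files - read from the disk headers, CSSD must be up
crsctl query css votedisk
# 3. ASM - must be up before CRSD can read the OCR from a disk group
srvctl status asm
# 4. OCR - read by CRSD once the disk group is mounted
ocrcheck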

  • OCR and voting disks on ASM, problems in case of fail-over instances

    Hi everybody
if at your site you:
- have an 11.2 fail-over cluster using Grid Infrastructure (CRS, OCR, voting disks),
where you have yourself created additional CRS resources to handle single-node db instances,
their listeners, their disks and so on (which are started on only one node at a time,
and can fail over from that node and restart on another);
- have put the OCR and voting disks into an ASM disk group (as strongly suggested by Oracle);
then you might have problems (as we had), because you might:
- reach the maximum number of disk groups handled by an ASM instance (only 63, above which you get ORA-15068);
- experience delays (especially with multipath) and find spurious CRS resources, etc.
whenever you dismount disk groups from one node and mount them on another.
So, if both conditions are true, you might be interested in this story;
please keep reading for the boring details.
    One step backward (I'll try to keep it simple).
    Oracle Grid Infrastructure is mainly used by RAC db instances,
    which means that any db you create usually has one instance started on each node,
    and all instances access read / write the same disks from each node.
    So, ASM instance on each node will mount diskgroups in Shared Mode,
    because the same diskgroups are mounted also by other ASM instances on the other nodes.
    ASM instances have a spfile parameter CLUSTER_DATABASE=true (and this parameter implies
    that every diskgroup is mounted in Shared Mode, among other things).
    In this context, it is quite obvious that Oracle strongly recommends to put OCR and voting disks
    inside ASM: this (usually called CRS_DATA) will become diskgroup number 1
    and ASM instances will mount it before CRS starts.
Then additional disk groups will be added by users, for DATA, REDO, FRA etc. of each RAC db,
and will be mounted later when a RAC db instance starts on the specific node.
In the case of a fail-over cluster, where instances are not RAC-type and there is
only one instance running (on one of the nodes) at any time for each db, it is different.
None of the db instances' disk groups need to be mounted in Shared Mode,
because they are used by only one instance at a time
(on the contrary, they should be mounted in Exclusive Mode).
Yet, if you follow Oracle's advice and put the OCR and voting disks inside ASM, then:
- at installation OUI will start the ASM instance on each node with CLUSTER_DATABASE=true;
- the first disk group, which contains the OCR and voting disks, will be mounted in Shared Mode;
- all other disk groups, used by each db instance, will be mounted in Shared Mode too,
even if you take care that they are mounted by only one ASM instance at a time.
At our site, for our three-node cluster, this fact has two consequences.
One consequence is that we hit the ORA-15068 limit (max 63 disk groups) earlier than expected:
- none of the instances on this cluster are Production (only Test, Dev, etc.);
- we planned to have usually 10 instances on each node, each of them with 3 disk groups (DATA, REDO, FRA),
so 30 disk groups per node, for a total of 90 disk groups (30 instances) on the cluster;
- in case one node failed, the surviving two should take over the resources of the failing node,
in the worst case: one node with 60 disk groups (20 instances), the other with 30 disk groups (10 instances);
- in case two nodes failed, the only surviving node would not be able to mount all the extra disk groups
(because of the limit of max 63 disk groups mounted by an ASM instance), so the rest would remain unmounted
and their db instances stopped (they are not Production instances).
But it didn't work that way: since ASM has the parameter CLUSTER_DATABASE=true, you cannot mount 90 disk groups;
you can mount only 62 globally (once a disk group is mounted on one node, it is given a number between 2 and 63,
and disk groups mounted on other nodes cannot reuse that number).
So as a matter of fact we can mount only about 21 disk groups (about 7 instances) on each node.
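One way to see how the global group numbers are being consumed across the cluster is a query along these lines from any ASM instance (a sketch):
-- which instance has each disk group mounted, and which global number it holds
SELECT inst_id, group_number, name, state
  FROM gv$asm_diskgroup
 ORDER BY group_number, inst_id;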
The second consequence is that every time our hand-written CRS scripts dismount disk groups
from one node and mount them on another, there are delays in the range of seconds (especially with multipath).
Also, we found in the CRS log that whenever we mounted disk groups (on one node only),
additional spurious resources of type ora*.dg were created on the fly behind the scenes,
maybe to accommodate the fact that on the other nodes those disk groups were left unmounted
(once again, the instances here are single-node, not RAC-type).
    That's all.
Did anyone run into similar problems?
We opened an SR with Oracle asking what options we have here, and we are disappointed by their answer.
    Regards
    Oscar

    Hi Klaas-Jan
- best practices require that online redo log files also be kept in a separate disk group, in case of ASM logical corruption (we are a little bit paranoid): if the DATA dg gets corrupted, you can restore the full backup plus the archived redo logs plus the online redo logs (otherwise you will stop at the latest archived log).
So we have 3 disk groups for each db instance: DATA, REDO, FRA.
- in the case of a fail-over cluster (active-passive), Oracle provides some template CRS scripts (in $CRS_HOME/crs/crs/public) that you edit and change at will; you can also create additional scripts for any extra resources you need (Oracle agents, backup agents, file systems, monitoring tools, etc.).
About our problem, the only solution is to move the OCR and voting disks out of ASM and change the pfile of all ASM instances (parameter CLUSTER_DATABASE from true to false).
Oracle's answers were a little bit odd:
- first they told us to use Grid Standalone (without CRS, OCR, voting disks at all), but we told them that we needed a fail-over solution;
- then they told us to use RAC One Node, which actually has some better features: in case of a planned fail-over it might be able to migrate
client sessions without causing a reconnect (for SELECTs only, not for a running transaction), but we already have a few fail-over clusters and we cannot change them all.
    So we plan to move OCR and voting disks into block devices (we think that the other solution, which needs a Shared File System, will take longer).
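In outline, that relocation might look something like this (a sketch only, run as root; the device paths and the +CRS_DATA disk group name are placeholders, and raw/block devices for these files are only supported in upgraded configurations, so the notes below should be checked first):
# add an OCR copy outside ASM, then drop the copy stored in the disk group
ocrconfig -add /dev/raw/ocr1
ocrconfig -delete +CRS_DATA
# replace the voting files currently in ASM with copies on the new devices
crsctl replace votedisk /dev/raw/vote1 /dev/raw/vote2 /dev/raw/vote3
crsctl query css votedisk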
Thanks Marko for pointing us to the OCFS2 pros / cons.
We asked Oracle to confirm that it is supported; they said yes, but it is discouraged (and also doesn't work with OUI or ASMCA).
Anyway, that's the simplest approach; this is a non-Prod cluster, so we'll start here and, if everything is fine, after a while we'll do it on the Prod ones as well.
    - Note 605828.1, paragraph 5, Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5
    - Note 428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE)
    -"Grid Infrastructure Install on Linux", paragraph 3.1.6, Table 3-2
    Oscar

  • OCR and VOT disk, in the same asm disk or different disks ?! and how ?!

What does Oracle recommend regarding the OCR and voting disks in RAC 11gR2 on ASM?
Should they be placed on the same disk or on different disks?
And how?
    Thanks a lot,
    Regards,
    Gehad.

Best practice is that the OCR/voting disks should be on ASM. Whether they end up on the same disk or on different disks depends on the ASM disk group. If we want to keep them in separate failure groups, then we should create multiple failure groups.
The minimum number of disks (failure groups) is: External: 1; Normal: 3; High: 5.
I hope I have been clear with my answers.
    Regards,
    Dheeraj Vaish
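To check how the disks of an existing disk group are spread over failure groups, a query like this can be run in the ASM instance (a sketch):
-- one row per ASM disk, with the failure group it belongs to
SELECT group_number, failgroup, name, path
  FROM v$asm_disk
 ORDER BY group_number, failgroup;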

  • Recover OCR and VOTE disk after complete corruption of ASM disk groups.

    Hi Gurus,
I am simulating a recovery situation to practise recovery of the OCR and voting files after complete corruption of the ASM-related disks and disk groups. I have set up my environment as follows:
    Environment: RAC
    OS: OEL 5.5 32-bit
    GI Version: 11.2.0.2.0
    ASM Disk groups: +OCR, +DATA
    OCR, Vote Files location: +OCR
    ASM Redundancy: External
    ASM Disks: /dev/asm-disk1, /dev/asm-disk2
    /dev/asm-disk1 - mapped on +OCR
    /dev/asm-disk2 - mapped on +DATA
With the above configuration in place I have manually corrupted the +OCR and +DATA disk groups with the dd command. I used this command to completely corrupt the +OCR disk group:
dd if=/dev/zero of=/dev/asm-disk1
I have manual backups as well as automatic backups of the OCR and voting disk. I am not using ASMLib.
    I followed this link:
    http://docs.oracle.com/cd/E11882_01/rac.112/e17264/adminoc.htm#TDPRC237
When I tried to recover the OCR, I could not do so, as there is no disk group left for ASM to restore the OCR and voting disk to. I could not re-create the OCR and DATA disk groups because I cannot connect to the ASM instance. If you have a solution or workaround for my situation, please describe it; that would be greatly appreciated.
    Thanks and Regards,
    Suresh.

Please go through the following document, which has the detailed steps to restore the OCR:
    How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems [ID 1062983.1]
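In outline, that note's procedure looks roughly like this (a sketch only; follow the note itself, and treat the disk path, disk group name and backup location as placeholders):
crsctl start crs -excl -nocrs            # as root, on one node only (11.2.0.2+)
sqlplus / as sysasm                      # recreate the lost disk group
  create diskgroup OCR external redundancy disk '/dev/asm-disk1'
    attribute 'compatible.asm' = '11.2';
ocrconfig -restore <automatic_ocr_backup>   # e.g. under $GRID_HOME/cdata/<cluster_name>/
crsctl replace votedisk +OCR             # put the voting files back into the disk group
crsctl stop crs -f                       # then restart the stack normally on all nodes
crsctl start crs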

  • Backup and Restore OCR,Voting Disk and ASM Disks in new SAN-10g RAC

    Dear Friends,
I am using a 10g R2 RAC setup on Linux.
My OCR, voting disk and ASM disks for DBF files are on a SAN box.
Now I am reorganising the SAN by scrapping the entire SAN and creating new LUNs (it's a must),
so please let me know:
1) How do I take a backup of the OCR and voting disk from the existing SAN, and how do I restore them to the new LUNs after the SAN reorganisation?
2) How do I take a backup of the existing databases from the existing SAN, and how do I restore them to the new LUNs after the SAN reorganisation?
I will be doing this during planned downtime only.
    Regards,
    DB

For step 1 you should follow the Metalink doc.
For step 2, here is a simple backup script.
I have done this on Windows for you.
    D:\app\ranjit\product\11.2.0\dbhome_1\BIN>rman target /
    Recovery Manager: Release 11.2.0.1.0 - Production on Wed Feb 8 21:48:47 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    connected to target database: ORCL (DBID=1299593730)
    RMAN> run
{
allocate channel c1 device type disk format 'D:\app\ranjit\rman\%U';
allocate channel c2 device type disk format 'D:\app\ranjit\rman\%U';
backup database;
backup current controlfile format 'D:\app\ranjit\rman\%U';
}
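Going back to step 1, the Metalink doc boils down to commands like these for 10g (a sketch; the device and file names are placeholders):
# OCR: list the automatic backups and take a manual export as well (as root)
ocrconfig -showbackup
ocrconfig -export /backup/ocr_before_san_move.exp
# voting disk: with the clusterware stopped, copy it block-for-block with dd
crsctl query css votedisk
dd if=/dev/raw/raw_vote1 of=/backup/votedisk_before_san_move.bak bs=1M
# after the new LUNs are in place, restore with ocrconfig -import and dd in the
# opposite direction, following the note step by step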
    Regards

  • Advantage of having OCR and Voting disk on ASM

What are the advantages of putting the OCR and voting disk on ASM from 11g onwards?

Well, other than the sharing thing, you don't have to go RAIDing an additional shared disk either. If you have properly configured ASM, redundancy should be built in as well, either soft or hard.
Not sure what other advantages you may need. There's the I/O thing with ASM, but that's not really an advantage per se with OCR and vote. I may be contradicted by others, but I've never seen a performance hit of any kind attributed to OCR and voting files on non-ASM disk.

  • Oracle 11gR2 RAC Installation - ASM Disks - Need advice on configurations

    Hi Guys
How many disks are needed for voting, data, log and failover? Should each be in a separate disk group, or can they be in a single disk group? Please advise. Thanks.

    Hi Friend,
    Option 1 :
If Oracle Clusterware is used with normal redundancy, we require the following (for failover purposes as well):
1. Two OCR files - 280 MB each
2. Three voting disks - 280 MB each
Total - approx. 1.4 GB
Option 2 :
Oracle recommends that the disk used for the file system be on a RAID. When you use external redundancy, the minimum requirement is one OCR and one voting disk of 280 MB each.
Choices :
1. External redundancy -> minimum no. of disks (1) -> one OCR (280 MB) and one voting disk (280 MB) -> 560 MB
2. Normal redundancy -> minimum no. of disks (3) -> two OCR (560 MB) and three voting disks (840 MB) -> 1.4 GB
3. High redundancy -> minimum no. of disks (5) -> three OCR (840 MB) and five voting disks (1.4 GB) -> approx. 2.2 GB
    So, Choose based on redundancy...
    Hope it helps..
    Note : Use ASM for storing above files..
    Thanks
    LaserSoft

  • Confusion with OCFS2 File system for OCR and Voting disk RHEL 5, Oracle11g,

    Dear all,
    I am in the process of installing Oracle 11g 3 Node RAC database
    The environment on which i have to do this implementation is as follows:
    Oracle 11g.
    Red Hat Linux 5 x86
    Oracle Clusterware
    ASM
    EMC Storage
250 GB of storage.
    SAN
As of now I am in the process of installing Oracle Clusterware on the 3 nodes.
I have performed these tasks for the cluster install:
    1. Configure Kernel Parameters
    2. Configure User Limits
    3. Modify the /etc/pam.d/login file
    4. Configure Operating System Users and Groups for Oracle Clusterware
    5. Configure Oracle Clusterware Owner Environment
    6. Install CVUQDISK rpm package
    7. Configure the Hosts file
    8. Verify the Network Setup
    9. Configure the SSH on all Cluster Nodes (User Equivalence)
10. Enable SSH on all Cluster Nodes (User Equivalence)
11. Install Oracle Cluster File System (OCFS2)
12. Verify the Installation of Oracle Cluster File System (OCFS2)
13. Configure OCFS2 (/etc/ocfs2/cluster.conf)
14. Configure the O2CB Cluster Stack for OCFS2
BUT, after this I am a little bit confused about how to proceed further. The next step is to format the disks and mount OCFS2, create the software directories, and so on and so forth.
I asked my system admin to provide me two partitions so that I could format them with the OCFS2 file system.
He wrote back to me saying:
"Is this what you want before I do it?
/dev/emcpowera1 is 3GB and formatted OCFS2.
/dev/emcpowera2 is 3GB and formatted OCFS2.
Are those big enough for you? If not, I can re-size and re-format them
before I mount them on the servers.
The SAN is shared storage. /dev/emcpowera is one of three LUNs on
the shared storage, and it's 214GB. Right now there are only two
partitions on it - the ones I listed below. I can repartition the LUN any
way you want it.
Where do you want these mounted at:
/dev/emcpowera1
/dev/emcpowera2
I was thinking this mounting technique would work like so:
emcpowera1: /u01/shared_config/OCR_config
emcpowera2: /u01/shared_config/voting_disk
Let me know how you'd like them mounted."
Please recommend what I should convey to him so that I can ask him for exactly the right thing.
My second question is: as we are using ASM, which I am going to configure after the Clusterware installation, should I install Openfiler?
Please refer to the environment information I provided above and make recommendations.
As of now I am using Jeffrey Hunter's guide to install the entire setup. Do you think that install guide fits my environment?
    http://www.oracle.com/technology/pub/articles/hunter_rac11gr1_iscsi.html?rssid=rss_otn_articles
    Kind regards
    MK
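For what it's worth, a sketch of what the admin could run for those two partitions (the labels, node-slot count and mount options are assumptions to be checked against the OCFS2 documentation for your release):
# format each partition with OCFS2 (run once, from one node)
mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocr_config  /dev/emcpowera1
mkfs.ocfs2 -b 4K -C 32K -N 4 -L voting_disk /dev/emcpowera2
# create the mount points and mount on every node
mkdir -p /u01/shared_config/OCR_config /u01/shared_config/voting_disk
mount -t ocfs2 -o datavolume,nointr /dev/emcpowera1 /u01/shared_config/OCR_config
mount -t ocfs2 -o datavolume,nointr /dev/emcpowera2 /u01/shared_config/voting_disk
# plus matching /etc/fstab entries (with _netdev) so they mount after o2cb at boot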

Thanks for your reply Mufalani,
You have managed to solve half of my query. But I am still stuck on what kind of mount point I should ask the system admin to create for the OCR and voting disk. Should I go with the mount points he is mentioning?
Let me put forth a few more questions here.
1. Is 280 MB OK for the OCR and voting disks respectively?
2. Should I ask the system admin to create 4 voting disk mount points and two for the OCR?
3. As mentioned by the system admin:
/u01/shared_config/OCR_config
/u01/shared_config/voting_disk
Is this OK for creating the OCR and voting disks?
4. Can I use the OCFS2 file system for formatting the disks instead of using them as raw devices?
5. As you mentioned that Openfiler is not needed for configuring ASM... could you provide me the links which will guide me to create the partition disks, voting disks and OCR disks? I could not locate them in the docs or elsewhere. I did find a couple of them, but was unable to identify a suitable one for my environment.
    Regards
    MK

  • Cloning 11i non split to split configuration with RAC and ASM

    Hello Hussein,
I just want to ask for some ideas on the best way to clone our UAT/DEV environment to our PROD environment.
Right now there is no RAC or ASM setup for the source system, which is still 9.2.0.5, but the plan is to convert to ASM + 10g RAC.
Can you please let me know the best way to set up PROD out of our UAT environment?
    Here's my options:
    1. Install a fresh prod system
    2. Convert source system to ASM + RAC before cloning to target - setup as below:
    (Source)
    APPS server - 32bit
    DB server (2 node RAC + ASM) - 32bit
    (Target)
    APPS server - 32bit
    DB server (2 node RAC + ASM) - 64bit
    3. Clone existing target system (non RAC and non ASM)
    copy source APPL directories to target
    Install 64 bit Oracle 10g to the target system
clone/convert database source (9i 32bit) to database target (10g 64bit) using RMAN.
    Install clusterware 11gR2
    Convert database to RAC
Can you please let me know the best approach to do this? For a fresh install it will take some time to apply the current patch level and other patches.
For Option 2, it seems a bit complicated to do 32-bit to 64-bit cloning on RAC. It would be appreciated if you could provide the doc ID for this.
For Option 3, I am not sure how smooth the conversion from 32-bit to 64-bit will be.
    Appreciate your insights on this.
    Regards,
    jeffrey

    Hi Jeffrey,
    Since you are on 9.2.0.5, I assume you are running Oracle Apps 11i and not R12.
1. Install a fresh prod system
This option requires applying all patches (as you mentioned above), plus you will have to convert to ASM/RAC on the source/target instance. I would not recommend this approach since it would require extra work/time.
    2. Convert source system to ASM + RAC before cloning to target - setup as below:
    (Source)
    APPS server - 32bit
    DB server (2 node RAC + ASM) - 32bit
    (Target)
    APPS server - 32bit
DB server (2 node RAC + ASM) - 64bit
What are the source and target database versions?
    As per (Certified RAC Scenarios for E-Business Suite Cloning [ID 783188.1]) this is supported by Rapid Clone. So, in this case you need to convert the source instance to RAC and migrate to ASM then use Rapid Clone to clone the application/database.
    Cloning Oracle Applications Release 11i with Rapid Clone [ID 230672.1] -- 6. Cloning a RAC System
    You will have to convert the target database then from 32-bit to 64-bit.
    3. Clone existing target system (non RAC and non ASM)
    copy source APPL directories to target
    Install 64 bit Oracle 10g to the target system
clone/convert database source (9i 32bit) to database target (10g 64bit) using RMAN.
    Install clusterware 11gR2
Convert database to RAC
Here you will have to convert to RAC/ASM on both the source/target instances -- you are eliminating the patches part from Option 1, but again extra work needs to be done to convert the database from 32-bit to 64-bit on the target instance, plus converting to RAC and migrating to ASM (on both instances).
    Based on the above, I would recommend and suggest you go with Option 2.
    Thanks,
    Hussein

  • 11GR2 2nodes CRSD ASM - Failed to open file in dirty mode

    Hi...
we are facing a problem with a two-node 11gR2 cluster.
It does not matter whether node one or node two is started first: whichever node starts first comes up normally.
The node started second fails with the error messages below ......
    vi .../emcrsp.log
    2011-04-17 10:19:14.406: [  OCRASM][4090540208]ASM Error Stack : ORA-15077: could not locate ASM instance serving a required diskgroup
    2011-04-17 10:19:14.408: [  OCRASM][4090540208]proprasmo: kgfoCheckMount returned [7]
    2011-04-17 10:19:14.408: [  OCRASM][4090540208]proprasmo: The ASM instance is down
    2011-04-17 10:19:14.416: [  OCRRAW][4090540208]proprioo: Failed to open [+DGCONF]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
    2011-04-17 10:19:14.416: [  OCRRAW][4090540208]proprioo: No OCR/OLR devices are usable
    2011-04-17 10:19:14.416: [  OCRASM][4090540208]proprasmcl: asmhandle is NULL
    2011-04-17 10:19:14.416: [  OCRRAW][4090540208]proprinit: Could not open raw device
    2011-04-17 10:19:14.416: [  OCRASM][4090540208]proprasmcl: asmhandle is NULL
    2011-04-17 10:19:14.416: [ default][4090540208]a_init:7!: Backend init unsuccessful : [26]
    [   CLWAL][738463920]clsw_Initialize: OLR initlevel [30000]
    2011-04-17 10:19:15.272: [  OCRASM][3128352944]proprasmo: Failed to open file in dirty mode
    2011-04-17 10:19:15.272: [  OCRASM][3128352944]proprasmo: Error in open/create file in dg [DGCONF]
    [  OCRASM][3128352944]SLOS : SLOS: cat=8, opn=kgfolclcpi1, dep=402, loc=kgfokge
The interconnect is up and running.
We tried to recreate the OCR and voting disk from the daily backup, without any result.
Does anyone have an idea?
    Thanks *T

    Hi Paul,
yes, ASM is down.
That was confusing me. If I shut down the other node, +ASM can start and the clusterware comes up normally.
It looks like only one node can use the voting disk or OCR....
The behaviour looks as if the interconnect were down, but it is not ;-(
One node (whichever comes up first) starts normally and takes all the cluster resources... the SCAN... the VIPs...
and the second node shows this error message.
Thanks
*T

  • OCR and Voting Disk Mirror

    Hi,
We have an Oracle 10.2.0.4 RAC database. Now we are planning to mirror the OCR and voting disk. Currently we are using a SAN mirror copy. The SAN admin assures us that we don't need our own mirror copy as it is being mirrored at the disk level. Can anyone please advise whether we need to mirror these ourselves, and if so, what advantage we get over the regular SAN copy?

If you use a SAN or highly available storage + RAID, you don't need to mirror the OCR/voting disk.
Anyway, if you can do it, mirroring the OCR and multiplexing the voting disk is a good thing that Oracle recommends.
By the way, if you cannot mirror the OCR and multiplex the voting disk on 10g, you should back up the voting file (with the Linux "dd" command) after setup completes and after every add node/delete node operation.
    http://download.oracle.com/docs/cd/B19306_01/rac.102/b28759/adminoc.htm#sthref148
    And Check OCR file automatic backup:
    $ ocrconfig -showbackup
    http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/ocrsyntax.htm
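If you do decide to mirror them yourself, the 10.2 commands look roughly like this (a sketch; the raw device names are placeholders, and in 10.2 voting disks can normally only be added with the clusterware stopped, hence -force):
# add a second OCR location (as root)
ocrconfig -replace ocrmirror /dev/raw/raw_ocr2
# add two more voting disks so there are three in total
crsctl add css votedisk /dev/raw/raw_vote2 -force
crsctl add css votedisk /dev/raw/raw_vote3 -force
crsctl query css votedisk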

  • OCR and votingdisk

I believed that the voting disk and OCR have to be on raw devices.
Can we create the OCR and voting disk in directories
in RAC?
    check here
    db1:/oradata/d01/CRS$ crsctl query css votedisk
    0. 0 /oradata/d01/CRS/VOTE
    1. 0 /oradata/d01/CRS/VOTE
    2. 0 /oradata/d01/CRS/VOTE
    located 3 votedisk(s).
VOTE
is the name of the voting disk; they've created this disk in the directory ccbs/oradata/u01/CRS.
I am wondering how?
This is on Solaris.
Any idea?
I cannot find any symbolic links here either.
Please guide.
    kai

Hi Ashok,
Yes, we are using OCFS only... thanks for the info. I understand that if you are not formatting with OCFS (i.e. you use ASM), then you have to use raw devices for the OCR and voting disk, right? If using OCFS, we can have the OCR and voting disk in directories. Can you refer me to some links to understand these RAC concepts well?
Hi Chandra,
Thanks for the reply. Yes, they are the OCR and voting disk only, and they are updated every minute. Yes, crsctl query css votedisk displays the output, and I understand from this command what the configured voting disks are... is there any command to check the OCR like this? In other words, I want to display all the configured and mirrored OCRs.. any commands to display that?
    thanks for helping pal s
    Kai
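For the OCR, the closest equivalents to crsctl query css votedisk are along these lines (a sketch):
# shows every configured OCR location (primary and mirror) and its integrity
ocrcheck
# lists the automatic OCR backups kept by the clusterware
ocrconfig -showbackup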

  • AIX 6.1 - Do we need HACMP to store the OCR and Voting Disks on Raw Logical

We are planning to install 11g R1 RAC on AIX 6.1. We will use Clusterware and ASM. We would like to avoid GPFS and HACMP.
We plan to put the Clusterware OCR and voting disks on raw logical volumes.
I do not believe we need HACMP; see the passage below from the doc:
    Oracle® Clusterware
    Installation Guide
    11g Release 1 (11.1) for AIX Based Systems
    B28258-05
    Configuring Raw Disk Devices for Oracle Clusterware Without HACMP or GPFS
    If you are installing Oracle RAC on an AIX cluster without HACMP or GPFS, then you
    must use shared raw disk devices for the Oracle Clusterware files. You can also use
    shared raw disk devices for database file storage. However, Oracle recommends that
    you use Automatic Storage Management to store database files in this situation.
    This section describes how to configure the shared raw disk devices for Oracle
    Clusterware files (Oracle Cluster Registry and Oracle Clusterware voting disk). It also
    describes how to configure shared raw devices for Oracle ASM and for Database files,
    if you intend to install Oracle Database, and you need to create new disk devices.
    Question:
    Do we need HACMP to store the OCR and Voting Disks on Raw Logical Volumes? According to the passage above we do not.

    You can archive log either in shared location or separate location associated with each instance. Oracle recommends using a shared location for all the instances in a RAC configuration. Check the topic "Location of Archived Logs for the Cluster File System Archiving Scheme" in Clusters Administration and Deployment Guide

  • RAC and ASM issue

    Hi All,
I am getting an error message in the ASM alert log saying "NOTE: ASMB process exiting due to lack of ASM file activity".
This leads to frequent crashes of node 1. Please check the detailed errors below and suggest a solution.
    Thu Mar 24 07:05:11 2011
    LMD0 (ospid: 32493) has not called a wait for 94 secs.
    GES: System Load is HIGH.
    GES: Current load is 55.87 and high load threshold is 20.00
    Thu Mar 24 07:06:32 2011
    LMD0 (ospid: 32493) has not called a wait for 174 secs.
    GES: System Load is HIGH.
    GES: Current load is 71.23 and high load threshold is 20.00
    Thu Mar 24 07:06:36 2011
Trace dumping is performing id=[cdmp_20110324070635]
    Thu Mar 24 07:07:49 2011
Trace dumping is performing id=[cdmp_20110324070635]
    Thu Mar 24 07:08:16 2011
    Waiting for clusterware split-brain resolution
    Thu Mar 24 07:18:17 2011
Errors in file /u01/app/oracle/diag/asm/asm/+ASM1/trace/+ASM1_lmon_32484.trc (incident=60073):
ORA-29740: evicted by member 1, group incarnation 120
Incident details in: /u01/app/oracle/diag/asm/asm/+ASM1/incident/incdir_60073/+ASM1_lmon_32484_i60073.trc
Thu Mar 24 07:18:19 2011
Trace dumping is performing id=[cdmp_20110324071819]
Errors in file /u01/app/oracle/diag/asm/asm/+ASM1/trace/+ASM1_lmon_32484.trc:
ORA-29740: evicted by member 1, group incarnation 120
LMON (ospid: 32484): terminating the instance due to error 29740
System state dump is made for local instance
System State dumped to trace file /u01/app/oracle/diag/asm/asm/+ASM1/trace/+ASM1_diag_32459.trc
Trace dumping is performing id=[cdmp_20110324071820]
    Instance terminated by LMON, pid = 32484
    Thu Mar 24 07:18:31 2011
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 eth1 172.20.223.0 configured from OCR for use as a cluster interconnect
    Interface type 1 eth0 172.20.222.0 configured from OCR for use as  a public interface
    Picked latch-free SCN scheme 2
    Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/11.1.0/asm_1/dbs/arch
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    Starting up ORACLE RDBMS Version: 11.1.0.7.0.
Using parameter settings in server-side pfile /u01/app/oracle/product/11.1.0/asm_1/dbs/initASM1.ora
    System parameters with non-default values:
    large_pool_size          = 12M
    instance_type            = "asm"
    cluster_database         = TRUE
    instance_number          = 1
    asm_diskstring           = "ORCL:*"
    asm_diskgroups           = "REDO01"
    asm_diskgroups           = "REDO02"
    asm_diskgroups           = "DATA"
    asm_diskgroups           = "RECOVERY"
    diagnostic_dest          = "/u01/app/oracle"
    Cluster communication is configured to use the following interface(s) for this instance
172.20.223.25
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    Thu Mar 24 07:18:36 2011
    PMON started with pid=2, OS id=23120
    Thu Mar 24 07:18:36 2011
    VKTM started with pid=3, OS id=23123 at elevated priority
    VKTM running at (20)ms precision
    Thu Mar 24 07:18:36 2011
    DIAG started with pid=4, OS id=23127
    Thu Mar 24 07:18:37 2011
    PING started with pid=5, OS id=23129
    Thu Mar 24 07:18:37 2011
    PSP0 started with pid=6, OS id=23131
    Thu Mar 24 07:18:37 2011
    DIA0 started with pid=7, OS id=23133
    Thu Mar 24 07:18:37 2011
    LMON started with pid=8, OS id=23135
    Thu Mar 24 07:18:37 2011
    LMD0 started with pid=9, OS id=23137
    Thu Mar 24 07:18:37 2011
    LMS0 started with pid=10, OS id=23148 at elevated priority
    Thu Mar 24 07:18:37 2011
    MMAN started with pid=11, OS id=23152
    Thu Mar 24 07:18:38 2011
    DBW0 started with pid=12, OS id=23170
    Thu Mar 24 07:18:38 2011
    LGWR started with pid=13, OS id=23176
    Thu Mar 24 07:18:38 2011
    CKPT started with pid=14, OS id=23218
    Thu Mar 24 07:18:38 2011
    SMON started with pid=15, OS id=23224
    Thu Mar 24 07:18:38 2011
    RBAL started with pid=16, OS id=23237
    Thu Mar 24 07:18:38 2011
    GMON started with pid=17, OS id=23239
    lmon registered with NM - instance id 1 (internal mem no 0)
    Reconfiguration started (old inc 0, new inc 124)
    ASM instance
    List of nodes:
0 1 2
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    * allocate domain 1, invalid = TRUE
    * allocate domain 2, invalid = TRUE
    * allocate domain 3, invalid = TRUE
    * allocate domain 4, invalid = TRUE
    * domain 0 valid = 1 according to instance 1
    * domain 1 valid = 1 according to instance 1
    * domain 2 valid = 1 according to instance 1
    * domain 3 valid = 1 according to instance 1
    * domain 4 valid = 1 according to instance 1
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Submitted all GCS remote-cache requests
    Post SMON to start 1st pass IR
    Fix write in gcs resources
    Reconfiguration complete
    Thu Mar 24 07:18:40 2011
    LCK0 started with pid=18, OS id=23277
    ORACLE_BASE from environment = /u01/app/oracle
    Thu Mar 24 07:18:41 2011
    SQL> ALTER DISKGROUP ALL MOUNT
    NOTE: cache registered group DATA number=1 incarn=0xf7063e39
    NOTE: cache began mount (not first) of group DATA number=1 incarn=0xf7063e39
    NOTE: cache registered group RECOVERY number=2 incarn=0xf7063e3a
    NOTE: cache began mount (not first) of group RECOVERY number=2 incarn=0xf7063e3a
    NOTE: cache registered group REDO01 number=3 incarn=0xf7163e3b
    NOTE: cache began mount (not first) of group REDO01 number=3 incarn=0xf7163e3b
    NOTE: cache registered group REDO02 number=4 incarn=0xf7163e3c
    NOTE: cache began mount (not first) of group REDO02 number=4 incarn=0xf7163e3c
    NOTE:Loaded lib: /opt/oracle/extapi/32/asm/orcl/1/libasm.so
    NOTE: Assigning number (1,0) to disk (ORCL:ASM_DATA1)
    NOTE: Assigning number (1,1) to disk (ORCL:ASM_DATA2)
    NOTE: Assigning number (2,0) to disk (ORCL:ASM_RECO1)
    NOTE: Assigning number (3,0) to disk (ORCL:ASM_LOG1)
    NOTE: Assigning number (4,0) to disk (ORCL:ASM_LOG2)
    kfdp_query(): 5
    kfdp_queryBg(): 5
    NOTE: cache opening disk 0 of grp 1: DATA1 label:ASM_DATA1
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache opening disk 1 of grp 1: DATA2 label:ASM_DATA2
    NOTE: cache mounting (not first) group 1/0xF7063E39 (DATA)
    kjbdomatt send to node 1
    kjbdomatt send to node 2
    NOTE: attached to recovery domain 1
    NOTE: LGWR attempting to mount thread 1 for diskgroup 1
    NOTE: LGWR mounted thread 1 for disk group 1
    NOTE: opening chunk 1 at fcn 0.10794571 ABA
    NOTE: seq=81 blk=1313
    NOTE: cache mounting group 1/0xF7063E39 (DATA) succeeded
    NOTE: cache ending mount (success) of group DATA number=1 incarn=0xf7063e39
    kfdp_query(): 6
    kfdp_queryBg(): 6
    NOTE: cache opening disk 0 of grp 2: RECO1 label:ASM_RECO1
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 2/0xF7063E3A (RECOVERY)
    kjbdomatt send to node 1
    kjbdomatt send to node 2
    NOTE: attached to recovery domain 2
    NOTE: LGWR attempting to mount thread 1 for diskgroup 2
    NOTE: LGWR mounted thread 1 for disk group 2
    NOTE: opening chunk 1 at fcn 0.10436377 ABA
    NOTE: seq=48 blk=4298
    NOTE: cache mounting group 2/0xF7063E3A (RECOVERY) succeeded
    NOTE: cache ending mount (success) of group RECOVERY number=2 incarn=0xf7063e3a
    kfdp_query(): 7
    kfdp_queryBg(): 7
    NOTE: cache opening disk 0 of grp 3: LOG1 label:ASM_LOG1
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 3/0xF7163E3B (REDO01)
    kjbdomatt send to node 1
    kjbdomatt send to node 2
    NOTE: attached to recovery domain 3
    NOTE: LGWR attempting to mount thread 1 for diskgroup 3
    NOTE: LGWR mounted thread 1 for disk group 3
    NOTE: opening chunk 1 at fcn 0.229332 ABA
    NOTE: seq=30 blk=10690
    NOTE: cache mounting group 3/0xF7163E3B (REDO01) succeeded
    NOTE: cache ending mount (success) of group REDO01 number=3 incarn=0xf7163e3b
    kfdp_query(): 8
    kfdp_queryBg(): 8
    NOTE: cache opening disk 0 of grp 4: LOG2 label:ASM_LOG2
    NOTE: F1X0 found on disk 0 fcn 0.0
    NOTE: cache mounting (not first) group 4/0xF7163E3C (REDO02)
    kjbdomatt send to node 1
    kjbdomatt send to node 2
    NOTE: attached to recovery domain 4
    NOTE: LGWR attempting to mount thread 1 for diskgroup 4
    NOTE: LGWR mounted thread 1 for disk group 4
    NOTE: opening chunk 1 at fcn 0.225880 ABA
    NOTE: seq=30 blk=10556
    NOTE: cache mounting group 4/0xF7163E3C (REDO02) succeeded
    NOTE: cache ending mount (success) of group REDO02 number=4 incarn=0xf7163e3c
    kfdp_query(): 9
    kfdp_queryBg(): 9
    NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 1
    SUCCESS: diskgroup DATA was mounted
    kfdp_query(): 10
    kfdp_queryBg(): 10
    NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 2
    SUCCESS: diskgroup RECOVERY was mounted
    kfdp_query(): 11
    kfdp_queryBg(): 11
    NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 3
    SUCCESS: diskgroup REDO01 was mounted
    kfdp_query(): 12
    kfdp_queryBg(): 12
    NOTE: Instance updated compatible.asm to 10.1.0.0.0 for grp 4
    SUCCESS: diskgroup REDO02 was mounted
    SUCCESS: ALTER DISKGROUP ALL MOUNT
    Thu Mar 24 08:26:28 2011
    Starting background process ASMB
    Thu Mar 24 08:26:28 2011
    ASMB started with pid=20, OS id=9597
    NOTE: ASMB process exiting due to lack of ASM file activity for 5 seconds
    Thu Mar 24 08:27:39 2011
    Starting background process ASMB
    Thu Mar 24 08:27:39 2011
    ASMB started with pid=25, OS id=10735
    NOTE: ASMB process exiting due to lack of ASM file activity for 5 seconds
Do I need to set the compatible parameter?
    Regards,
    Vish

    It looks to me like your server is absolutely buried, and ASM may just be an innocent bystander. What is going on in the database when this happens? Also, run sar samples at 30 second intervals up to when this happens to see what is happening. It's overhead, but you need to find what is causing the problem with the server(s).
    Are you swapping?
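For the sampling suggested above, something like this could be left running on each node (a sketch; the 30-second interval and the counts are arbitrary, and each command can run in its own session):
# CPU usage, run queue / load average, and swapping, every 30 seconds for an hour
sar -u 30 120
sar -q 30 120
sar -W 30 120
vmstat 30 120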
