10.2.0.4 RAC / OCFS2 upgrade to 11g R2 RAC / ASM

We are in the planning phase of an upgrade from 10.2.0.4 RAC on an OCFS2 cluster to 11g R2 RAC with ASM. Is there documentation that will guide me in the best possible and least problematic direction? I have recently moved into the DBA world from a Senior Developer position, so my questions might be fairly beginner and I apologize for that now. What I am thinking I would do is create a new RAID 10 partition on our SAN and place my ASM disks there. Once the 11g database is installed and configured, my plan is to take the current database out of service, back it up, and then recover it into the 11g ASM storage. Is this possible when coming from OCFS2? Additionally, I have found the certification matrix on support.oracle.com, but that seems to certify only the database against a specific OS. Is there a certification matrix for the hardware I will be installing this on, especially my QLogic InfiniBand HCA and its firmware version?

You should try posting this question on the Database Upgrade forum here: http://goo.gl/hOYZ
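For the "back it up and recover into ASM" part: RMAN can read the existing datafiles on OCFS2 and write image copies into an ASM disk group, so that part is possible. A rough sketch only (the disk group name +DATA is an assumption; control files, online redo logs and temp files have to be relocated separately, and the actual 10.2 to 11.2 dictionary upgrade is still done afterwards with DBUA or catupgrd.sql):

$ rman target /
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> SQL 'ALTER DATABASE OPEN';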

Similar Messages

  • ASM Installation on Oracle 11g Clusterware (RAC) environment

    Hi All,
    I am trying to set up Oracle 11g Standard Edition RAC+ASM on a Red Hat Linux 5.0 VMware box. The RAC setup itself completed successfully. For this I have used raw devices mounted via NFS.
    In many forums I have read that ASM installation in a cluster environment is different from a non-cluster environment.
    I don't know how to start the ASM installation in a RAC environment. Please share any documents if you have them.
    Thanks,
    Rakesh

    Will the steps need to be done on both nodes of the cluster, or is it sufficient to do them on the first node?
    Raw devices are needed for the OCR, the voting disk, the ASM spfile and the ASM disk groups.
    Before setting up the raw devices you have to fdisk (create partitions) on the shared storage:
    node1:
    # fdisk /dev/sdf
    # ls /dev/sdf*
    node2:
    # ls /dev/sdf*
    If the partitions are not found on node2, just run fdisk there as well, list the partitions and write (w) the table so the kernel re-reads it.
    fdisk how? http://linux.about.com/od/commands/l/blcmdl8_fdisk.htm
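    A rough outline of that partitioning step (the device name /dev/sdf is just an example; partprobe is one way to make node2 see the new partition table without rebooting):
    node1:
    # fdisk /dev/sdf          # n to create each partition, then w to write the table
    # partprobe /dev/sdf
    node2:
    # partprobe /dev/sdf      # re-read the table created on node1
    # ls /dev/sdf*            # the partitions should now be visible here too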
    Example /etc/sysconfig/rawdevices (do this on every node):
    #Oracle OCR file, ~280M
    /dev/raw/raw1 /dev/sdf1
    #Oracle voting file, ~280M
    /dev/raw/raw2 /dev/sdf2
    #Oracle ASM spfile, ~50M
    /dev/raw/raw3 /dev/sdf3
    #Oracle ASM disk group 1
    /dev/raw/raw4 /dev/sdg1
    #Oracle ASM disk group 2
    /dev/raw/raw5 /dev/sdh1
    The oracle user must be able to read /dev/raw/raw*; you should see this ownership on every node:
    http://oraclepitstop.wordpress.com/2008/02/15/raw-devices-on-rhel-5-or-oel-5/
    ls -la /dev/raw/raw*
    crw-rw---- 1 root oinstall 162, 1 Jan 13 12:53 /dev/raw/raw1
    crw-rw---- 1 oracle oinstall 162, 2 Jan 13 12:53 /dev/raw/raw2
    crw-rw---- 1 oracle oinstall 162, 3 Jan 13 12:53 /dev/raw/raw3
    crw-rw---- 1 oracle oinstall 163, 1 Jan 13 12:53 /dev/raw/raw4
    crw-rw---- 1 oracle oinstall 164, 1 Jan 13 12:53 /dev/raw/raw5
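    If the ownership does not look like that, something along these lines would fix it for the current boot (oracle/oinstall are assumptions matching the listing above; the permanent fix on RHEL/OEL 5 is a udev rule or an rc.local entry as described in the link above):
    # chown root:oinstall /dev/raw/raw1        # OCR stays owned by root
    # chown oracle:oinstall /dev/raw/raw[2-5]
    # chmod 660 /dev/raw/raw*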
    Set up SSH user equivalence on node1, but you have to be able to log in to every node without a passphrase before installing. See:
    http://www.puddingonline.com/~dave/publications/SSH-with-Keys-HOWTO/document/html/SSH-with-Keys-HOWTO-5.html
    example:
    node01:
    $ ssh node01 hostname
    node01
    $ ssh node02 hostname
    node02
    - Install + Setup Clusterware:
    OCR = /dev/raw/raw1
    VOTE = /dev/raw/raw2
    - Install Oracle Database for ASM Home
    spfile for ASM = /dev/raw/raw3
    - after ASM has started, create disk groups from /dev/raw/raw4 and /dev/raw/raw5 (see the sketch below)
    - Install Oracle Database for RDBMS Home
    - Create Database to use ASM diskgroups
    http://www.oracle-base.com/articles/11g/OracleDB11gR1RACInstallationOnRHEL5UsingVMwareESXAndNFS.php
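    As a rough sketch of that disk group step (the names DATA and FRA are assumptions, external redundancy assumes the storage itself is already mirrored, and the ASM instance's asm_diskstring must include /dev/raw/raw* for the disks to be discovered):
    SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/raw/raw4';
    SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/raw/raw5';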
    Did you follow the same procedure to set up Oracle 11g RAC+ASM on RHEL 5.0? I have done a test 11g RAC on RHEL 4 + 11g + ASM (on raw devices);
    in production, I use ASMLib + ASM + 10g.
    You can read these notes on Metalink for more ideas:
    465001.1
    357492.1
    605828.1
    564580.1
    on http://startoracle.com/2007/09/30/so-you-want-to-play-with-oracle-11gs-rac-heres-how/
    Oracle 11g’s RAC.. I think.. that can help you ;)
    Good Luck
    Example from IBM... Deploying Oracle RAC 11g R1 on RHEL 5 or SLES 10 with Oracle ASM on the IBM DS3400, DS4200, DS4700, and DS4800 Storage Subsystems
    http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101323

  • 10g RAC upgrade to 11g RAC

    How to perform 10g RAC upgrade to 11g with Dataguard in place?
    OS=LINUX
    RDBMS=10.2.0.4
    DG=PHYSICAL STANDBY
    Below is my understanding
    1)Install 11g CRS in NEW_CRS_HOME
    2)Install 11g in NEW_ASM_HOME
    3)Install 11g in NEW_RDBMS_HOME
    4)Export the new NEW_ORACLE_HOME and start up the database with startup upgrade.
    5) Run the catupgrd.sql script.
    Are there any other steps involved?
    In case I have to roll back, how do I roll back CRS?

    Personally I would use DBUA instead of catupgrd.sql, but make sure you follow the upgrade documentation for pre-checks etc. You will also need to use netca to recreate the SQL*Net listeners in the 11g homes and register them with 11g CRS.
    CRS is generally fairly quick to re-install and re-register databases, and this is cleaner than trying to restore OCR and voting disks etc. If you prefer to keep a rollback path, be sure you back up the OCR and voting disks (e.g. using dd), and back up /etc/oracle, CRS_HOME, the /etc/init* scripts etc.
    The Data Guard standby database should roll forward through the upgrade, but you will need to manually register the standby with 11g CRS. Make sure you have db_create_file_dest and the log file destinations set, as well as standby_file_management=auto. Personally I would manually recover the standby after the upgrade in case I need to use it to fall back.
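    For the OCR and voting disk backups mentioned above, a rough sketch (the raw device paths and backup directory are assumptions; ocrconfig -export is the supported logical backup, dd gives a raw image):
    # ocrconfig -export /backup/ocr_before_upgrade.exp
    # dd if=/dev/raw/raw1 of=/backup/ocr_before_upgrade.img bs=1M
    # crsctl query css votedisk
    # dd if=/dev/raw/raw2 of=/backup/votedisk_before_upgrade.img bs=1M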

  • RAC node upgrade issue

    We have our company's database on Oracle Real application clusters database consisting of two RAC nodes. We would like to perform some hardware upgrades on both the RAC nodes. Could anyone please tell if it is OK to shutdown one instance at a time and remove all the network/interconnect cables from it and at the same time the other RAC node keeps working. After one node is upgraded and all the network/interconnect cables are connected back to it, will everything be just OK like before or are there are certain things to be cautious about ?
    Thanks in advance

    I think you have to do a clean shutdown so that no instance recovery is required when you re-open the database.
    SQL> SHUTDOWN TRANSACTIONAL
    would allow all the current transactions to complete and then shut down the instance.

  • Upgrade 10.2.0.4 RAC to 11.2.0.2 RAC using a transient logical standby?

    Has anyone performed a successful upgrade from 10.2.0.4 RAC to 11.2.0.2 RAC using a transient logical standby? I found one reference to this white paper "http://www.oracle.com/technetwork/database/features/availability/maa-wp-10gr2-transrollupg-168590.pdf" that describes steps to use a transient logical standby to upgrade to 11.1.0.6, but I wasn't sure if it also works for upgrades to 11.2.0.2.
    I have a 3 node 10.2.0.4 RAC cluster running on RHEL 4.8 with a 3 node Data Guard physical standby (which the customer wants to keep as a physical standby after the upgrade) that I need to upgrade to 11.2.0.2 with as little downtime as possible. This approach seems like a possible solution, but I want to know if it is proven.

    Hi,
    I have a 3 node 10.2.0.4 RAC cluster running on RHEL 4.8 with a 3 node Data Guard physical standby (which the customer wants to keep as a physical standby after the upgrade) that I need to upgrade to 11.2.0.2 with as little downtime as possible. This approach seems like a possible solution, but I want to know if it is proven.
    I suggest you log an SR and confirm the same.
    thanks,
    X A H E E R

  • Need to upgrade 9.2.0.6 rac to 10.2.0.4 rac

    Hi All,
    I need to upgrade a 9.2.0.6 2-node RAC to 10.2.0.4 2-node RAC under 11.5.10.2 EBS.
    Env details:
    Total 4 nodes:
    1st node: web and forms
    2nd node: web and forms
    3rd node: reports, PCP node 1, DB node 1
    4th node: reports, PCP node 2, DB node 2
    OS env: Linux x86-64 2.9 on all 4 nodes.
    Only the financial modules are running, with some customized reports.
    Please help me with at least some doc IDs.
    Thanks a mil in advance.
    Cheers

    I'm in the process of upgrading an EBS database from 9.2.0.8 RAC to 10.2.0.4 RAC.
    I was not able to find any one document that provides a procedure to upgrade EBS on RAC. I'm cobbling together a procedure from the following Metalink notes:
    362203.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 10g Release 2 (10.2.0)
    316889.1 - Complete Checklist for Manual Upgrades to 10gR2 (DB Upgrade Assistant failed for me...)
    362135.1 - Configuring Oracle Applications Release 11i with Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management (we're not doing ASM, though).
    Hope that helps. If you happen to run across a Metalink note that does address upgrading an EBS database that is already RAC, please reply :-)
    Regards,
    Jerry

  • Rac 10gr2 upgrade

    Hi experts,
    We are prototyping a RAC database upgrade from 10.1.0 to 10.2.0.4. I just created a new directory as the 10.2.0 ORACLE_HOME. Since the 10gR1 database is running, all the environment variables set by the login profile point to the current 10.1.0 configuration. I need advice on a few questions:
    1) At this point, I only want to use the OUI to install the 10gR2 binaries and the 10.2.0.4 patch set binaries. I thought I could just kick off runInstaller to get it done. However, after reading the Oracle documentation and a few articles, I am not sure whether I should set any environment variables first.
    One article recommends
    . set ORACLE_BASE=/u01/app/oracle
    . set ORACLE_SID=orcl1 # Each RAC node must have a unique Oracle SID!
    . set LD_LIBRARY_PATH=$ORACLE_HOME/lib
    . unset ORACLE_HOME
    Others:
    . set ORACLE_HOME
    . set PATH
    . set LD_LIBRARY_PATH
    . set ORA_CRS_HOME
    . unset TNS_ADMIN
    I know I need to have all of them set properly prior to 10gr2 upgrade. Do I need to set any of them at all just to install 10gr2 binaries and patchset via OUI? If so, which ones?
    2) There are a few 10.2.0.4 patches that need to be applied with opatch to each cluster node.
    I noticed some of the post-install steps involve SQL executed via SQL*Plus.
    My guess is to hold off on these SQL steps temporarily and execute them after the upgrade to 10.2.0.4 completes. Is this the correct interpretation of the patch application?
    Thanks, Newbie
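    Not an authoritative answer, but the kind of minimal environment the first article quoted above describes before launching runInstaller for the new home looks roughly like this (all paths and the SID are assumptions for node 1):
    $ export ORACLE_BASE=/u01/app/oracle
    $ export ORACLE_SID=orcl1                 # each RAC node needs a unique SID
    $ export ORA_CRS_HOME=/u01/app/oracle/product/crs
    $ unset ORACLE_HOME                       # let the OUI prompt for the new 10.2 home
    $ unset TNS_ADMIN
    $ ./runInstaller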

    There is a CRS PSU for CRS 10.2:
    Patch# 8705958 - 10.2.0.4.2 for CRS PSU 2
    Should the PSU be applied to CRS prior to the database upgrade from 10.1 to 10.2? Can it be applied after the upgrade? What is the general guideline on this?
    Thanks.

  • RAC - hardware upgrade

    Env:
    10.2.0.4 with ASM
    RedHat Linux
    We are planning to replace the hardware (no DB upgrade, but with a Linux OS upgrade from RHEL 4 to RHEL 5) for our RAC database. It appears I have to add the new nodes and remove the existing ones to accomplish this.
    So if I have four existing RAC nodes :a,b,c,d and the replacement new nodes are: aa,bb,cc,dd
    then
    -add aa,bb,cc,dd
    -shut down a,b,c,d (see the sketch at the end of this post)
    -Make sure things are fine, which may mean running the new nodes for a week (rollback is easy: shut down the new, bring up the old).
    -drop a,b,c,d
    RAC is now made up of aa,bb,cc,dd
    And if we were to retain the same host names, since RAC host name (public) cannot be changed, I have to go through another iteration of delete-node and add-node. Something like :
    drop aa
    rename host aa as a
    add a
    All changes will be done in a maintenance window. Any suggestions or comments?
    Thanks
    -Madhu
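    A rough sketch of the shutdown step only (the database name ORCL and instance names ORCL1..ORCL4 are assumptions; run once per old node):
    $ srvctl stop instance -d ORCL -i ORCL1
    $ srvctl stop nodeapps -n a
    Repeat for b, c and d; the instances on aa, bb, cc and dd keep running during the trial week.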

    I have not considered DG because it requires double the storage (our DB size is 5TB), plus you have to consider licensing costs. The rollback should be easy with my approach too: just shut down the new cluster and fall back to the old one. I don't plan on removing the old cluster nodes until about a week of successful running on the new cluster.
    Regarding the raw device deprecation, the only thing that should impact us is the OCR and voting disk, which we can move to NetApp NFS if needed. I will let my sysadmin worry about that.
    I have checked with Oracle support, and the analyst agrees with my approach (not sure how qualified he is, though). Surprisingly, judging by the information on Metalink and the internet, very few people seem to have upgraded hardware on RAC.
    Thanks for your input.
    -Madhu

  • Database Vault 11g with RAC

    According to an Oracle whitepaper, "The Oracle Home where DV is to be installed does not contain any ASM instance."
    But ASM and the RAC DB are located in the same home (default installation) at my site.
    I am just puzzled about how I can fulfill this requirement. Could any expert give me some advice?

    PiqousKerberos wrote:
    Johna Pakas wrote:
    I have built a test environment using a single instance only, same OS version and DB version, without Clusterware and RAC.
    I did the same thing by installing ASM and the database in the same home, creating the database, and then installing DBV.
    I also registered the existing database instance with Database Console.
    Eventually it turned out there was no problem.
    Is there any supporting reason why I must separate the Oracle Homes?
    From Oracle note [ID 793739.1] -> 12 Things to Check for a Successful Database Vault Installation in 10gR2:
    #1. Oracle Label Security is correctly installed and enabled.
    #2. Database console is functional.
    #3. The database to be installed should figure in the oratab file
    #4. the sqlnet files (tnsnames.ora and listener.ora) are placed in the default location:
    #5. The Oracle Home where DV is to be installed does not contain any ASM instance.
    #6. In case of RAC srvctl must work fine on all nodes.
    #7. The database vault must be installed:
    + For a single instance database it must be installed in a home that does not have the RAC option enabled.
    + For a RAC database it must be installed in a home that has the RAC option enabled.
    You cannot run a single instance database vault from a RAC enabled home.
    #8. In the database there must be a temporary tablespace named TEMP.
    #9. The service name used in jdbc_str must appear as an entry in tnsnames.ora and must match the ORACLE_SID in name and case.
    #10. The installer expects the listener name to be LISTENER and this is the only listener it starts. If using a different listener name, it must be started before running runInstaller or dvca.
    #11. Respect the password restrictions. Use the alphanumeric, characters, numbers and any of the $,#,_ characters. Password must be at least 8 characters long and contain at least one character from each of the groups mentioned above.
    #12. In addition to all the above, the 10.2.0.3 installation expects the ORACLE_SID to be lowercase. If this is too restrictive, it would be recommended to upgrade to the 10.2.0.4 database and database vault releases.
    Certainly, you can install DBV with ASM. However, Oracle support doesn't guarantee its stability and patch availability.
    That's fine. Let me discuss with the stakeholders. My next task is to split RAC and ASM.

  • 11g R2  RAC book

    Please suggest a good 11g R2 RAC book.

    The book details look very good, especially the "Real World Deployments" part.
    Book Details:
    --Design, implement and support complex Oracle 11g RAC environments for real world deployments
    --Understand the sophisticated components that make up your Oracle RAC environment, such as the role of High Availability, the required RAC architecture, the RAC installation and upgrade process, and much more!
    --Get hold of new RAC components such as ASM (Automatic Storage Management) features, performance tuning, and troubleshooting
    --Deploy Oracle 11g with complex standard off-the-shelf ERP systems like Oracle EBS
    --Packed with practical, real-world examples, expert tips, and troubleshooting advice on how to administer a complex Oracle 11g RAC environment
    --Bonus Oracle 11g R2 RAC information included

  • Oracle Database upgrade to 11G (Host_Command Issues)

    Sorry I'm posting this again. The original post was in Database - Upgrade; I'm not getting a lot of clicks in that category.
    We recently upgraded to Oracle 11g (Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production). Our PL/SQL release: (PL/SQL Release 11.2.0.2.0 - Production). We use a lot of host commands inside our PL/SQL stored procedures. One of these involves fetching data from an external server and pushing the data into Oracle tables.
    Most of this process runs fine, except the execution never completes. There is no issue data-wise; it's just that the host command never receives the completion status and appears to keep running even though nothing is going on in the background.
    The Oracle E-Business Suite concurrent program based on this stored procedure also never completes, though it isn't doing anything in the background. For testing purposes, our DBA added two insert statements, one before and one after the host command. The insert before the host command worked but not the one after. Please help! This started happening after we upgraded to 11g three weeks ago!
    Thanks, Naveen Gagadam.

    Please do not post duplicates - "Upgrade to 11g, Host Command Inside Stored Proc doesn't end".
    I have moved your duplicate post to the EBS forums.
    Srini

  • Problem with statement after upgrading to 11g

    Hello,
    We recently upgraded to 11g from 9i and one of our statements we routinely use no longer works. The statement is:
    delete from ALLEMPLOYEES x where exists( (select * from ALLEMPLOYEES where email_id=x.email_id) minus (select * from X_ALLEMPLOYEES where email_id=x.email_id));
    This statement deletes no rows from the ALLEMPLOYEES table. When we run this statement as a check:
    (select * from ALLEMPLOYEES) MINUS (select * from X_ALLEMPLOYEES);
    We find many rows produced. Why would the delete fail now? We have colleagues still using 9i who use this same delete statement, and it works for them with no trouble.

    Do you have proper indexes?
    SQL> select  *
      2    from  v$version
      3  /
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL> create table emp1 as select * from emp where deptno != 30
      2  /
    Table created.
    SQL> create index emp1_idx_comm on emp1(comm)
      2  /
    Index created.
    SQL> create index emp_idx_comm on emp(comm)
      2  /
    Index created.
    SQL> explain plan for
      2  delete emp e
      3    where exists(
      4                  select  *
      5                    from  emp
      6                    where comm = e.comm
      7                 minus
      8                  select  *
      9                    from  emp1
    10                    where comm = e.comm
    11                )
    12  /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 1994471334
    | Id  | Operation              | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT       |              |     1 |    28 |     9  (34)| 00:00:01 |
    |   1 |  DELETE                | EMP          |       |       |            |          |
    |   2 |   NESTED LOOPS         |              |     1 |    28 |     9  (34)| 00:00:01 |
    |   3 |    VIEW                | VW_SQ_1      |    14 |   182 |     8  (25)| 00:00:01 |
    |   4 |     MINUS              |              |       |       |            |          |
    |   5 |      SORT UNIQUE       |              |    14 |   532 |            |          |
    PLAN_TABLE_OUTPUT
    |   6 |       TABLE ACCESS FULL| EMP          |    14 |   532 |     3   (0)| 00:00:01 |
    |   7 |      SORT UNIQUE       |              |     8 |   696 |            |          |
    |   8 |       TABLE ACCESS FULL| EMP1         |     8 |   696 |     3   (0)| 00:00:01 |
    |*  9 |    INDEX RANGE SCAN    | EMP_IDX_COMM |     1 |    15 |     0   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       9 - access("VW_COL_1"="E"."COMM")
           filter("E"."COMM" IS NOT NULL)
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    26 rows selected.
    SQL> explain plan for
      2  delete emp
      3    where comm is not null
      4     and comm not in (
      5                      select  comm
      6                        from  emp1
      7                        where comm is not null
      8                     )
      9  /
    Explained.
    SQL> @?\rdbms\admin\utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 123997034
    | Id  | Operation          | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | DELETE STATEMENT   |               |     3 |    84 |     1   (0)| 00:00:01 |
    |   1 |  DELETE            | EMP           |       |       |            |          |
    |   2 |   NESTED LOOPS ANTI|               |     3 |    84 |     1   (0)| 00:00:01 |
    |*  3 |    INDEX FULL SCAN | EMP_IDX_COMM  |     4 |    60 |     1   (0)| 00:00:01 |
    |*  4 |    INDEX RANGE SCAN| EMP1_IDX_COMM |     1 |    13 |     0   (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
       3 - filter("COMM" IS NOT NULL)
       4 - access("COMM"="COMM")
           filter("COMM" IS NOT NULL)
    Note
       - dynamic sampling used for this statement (level=2)
    22 rows selected.
    SY.

  • 11g R2 RAC - Grid Infrastructure installation - "root.sh" fails on node#2

    Hi there,
    I am trying to create a two-node 11g R2 RAC on OEL 5.5 (32-bit) using VMware virtual machines. I have correctly configured both nodes. The Cluster Verification Utility returns the following error (which I believe can be ignored):
    Checking daemon liveness...
    Liveness check failed for "ntpd"
    Check failed on nodes:
    rac2,rac1
    PRVF-5415 : Check to see if NTP daemon is running failed
    Clock synchronization check using Network Time Protocol(NTP) failed
    Pre-check for cluster services setup was unsuccessful on all the nodes.
    During the Grid Infrastructure installation (for a Cluster option), things go very smoothly until I run root.sh on node 2. orainstRoot.sh ran OK on both nodes. root.sh ran OK on node 1 and ends with:
    Checking swap space: must be greater than 500 MB.   Actual 1967 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.
    [root@rac1 ~]#
    root.sh fails on rac2 (the 2nd node) with the following error:
    CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
    CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
    Timed out waiting for the CRS stack to start.
    [root@rac2 ~]#
    I know this info may not be enough to figure out what the problem is. Please let me know what I should look for to find and fix the issue. It's been almost two weeks now :-(
    Regards
    Amer

    Hi Zheng,
    ocssd.log is HUGE, so I am pasting a few of the last lines of the log file hoping they may give some clue:
    2011-07-04 19:49:24.007: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2180 > margin 1500  cur_ms 36118424 lastalive 36116244
    2011-07-04 19:49:26.005: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 4150 > margin 1500 cur_ms 36120424 lastalive 36116274
    2011-07-04 19:49:26.006: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 4180 > margin 1500  cur_ms 36120424 lastalive 36116244
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:27.997: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:33.001: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:37.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:43.000: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:49:48.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:49:48.005: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.003: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:12.008: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1660 > margin 1500 cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:12.009: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1660 > margin 1500  cur_ms 36166424 lastalive 36164764
    2011-07-04 19:50:15.796: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 2130 > margin 1500  cur_ms 36170214 lastalive 36168084
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:16.996: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1540 > margin 1500 cur_ms 36172244 lastalive 36170704
    2011-07-04 19:50:17.826: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1570 > margin 1500  cur_ms 36172244 lastalive 36170674
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:21.999: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1740 > margin 1500 cur_ms 36180424 lastalive 36178684
    2011-07-04 19:50:26.011: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1620 > margin 1500  cur_ms 36180424 lastalive 36178804
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:27.004: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: DiskPingThread for voting file ORCL:DATA sched delay 1700 > margin 1500 cur_ms 36182414 lastalive 36180714
    2011-07-04 19:50:28.002: [    CSSD][2997803920]clssnmvSchedDiskThreads: KillBlockThread for voting file ORCL:DATA sched delay 1790 > margin 1500  cur_ms 36182414 lastalive 36180624
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:31.998: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-07-04 19:50:37.001: [    CSSD][2901298064]clssnmSendingThread: sending status msg to all nodes
    2011-07-04 19:50:37.002: [    CSSD][2901298064]clssnmSendingThread: sent 5 status msgs to all nodes
    <end of log file>
    And the alertrac2.log contains:
    [root@rac2 rac2]# cat alertrac2.log
    Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
    2011-07-02 16:43:51.571
    [client(16134)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_16134.log.
    2011-07-02 16:43:57.125
    [client(16134)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 16:44:43.214
    [ohasd(16188)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 16:45:06.446
    [ohasd(16188)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 16:53:30.061
    [ohasd(16188)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 16:53:55.042
    [cssd(17674)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 16:54:38.334
    [cssd(17674)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    [cssd(17674)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 16:54:38.464
    [cssd(17674)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 16:54:39.174
    [ohasd(16188)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 16:55:43.430
    [cssd(17945)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 16:56:02.852
    [cssd(17945)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 16:56:04.061
    [cssd(17945)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 16:56:18.350
    [cssd(17945)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 16:56:29.283
    [ctssd(18020)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 16:56:29.551
    [ctssd(18020)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 16:56:29.615
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 16:56:29.616
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 16:56:29.641
    [ctssd(18020)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(18052)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18056)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:01:40.963
    [ohasd(16188)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.asm'. Details at (:CRSPE00111:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ohasd/ohasd.log.
    [client(18590)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(18594)]CRS-10001:ACFS-9322: done.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:27:46.385
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 17:46:48.717
    [crsd(22519)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:49.641
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:51.459
    [crsd(22553)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:51.776
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:53.928
    [crsd(22574)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:53.956
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:55.834
    [crsd(22592)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:56.273
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:46:57.762
    [crsd(22610)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:46:58.631
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:00.259
    [crsd(22628)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:00.968
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:02.513
    [crsd(22645)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:03.309
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:05.081
    [crsd(22663)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:05.770
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:07.796
    [crsd(22681)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:08.257
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:10.733
    [crsd(22699)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:11.739
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:13.547
    [crsd(22732)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 17:47:14.111
    [ohasd(16188)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 17:47:14.112
    [ohasd(16188)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 17:58:18.459
    [ctssd(18020)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    [client(26883)]CRS-10001:ACFS-9200: Supported
    2011-07-02 18:13:34.627
    [ctssd(18020)]CRS-2405:The Cluster Time Synchronization Service on host rac2 is shutdown by user
    2011-07-02 18:13:42.368
    [cssd(17945)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:15:13.877
    [client(27222)]CRS-2106:The OLR location /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/client/ocrconfig_27222.log.
    2011-07-02 18:15:14.011
    [client(27222)]CRS-2101:The OLR was formatted using version 3.
    2011-07-02 18:15:23.226
    [ohasd(27261)]CRS-2112:The OLR service started on node rac2.
    2011-07-02 18:15:23.688
    [ohasd(27261)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
    2011-07-02 18:15:24.064
    [ohasd(27261)]CRS-2772:Server 'rac2' has been assigned to pool 'Free'.
    2011-07-02 18:16:29.761
    [ohasd(27261)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    2011-07-02 18:16:30.190
    [gpnpd(28498)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:16:41.561
    [cssd(28562)]CRS-1713:CSSD daemon is started in exclusive mode
    2011-07-02 18:16:49.111
    [cssd(28562)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:16:49.166
    [cssd(28562)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    [cssd(28562)]CRS-1636:The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1 and is terminating; details at (:CSSNM00006:) in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log
    2011-07-02 18:17:01.122
    [cssd(28562)]CRS-1603:CSSD on node rac2 shutdown by user.
    2011-07-02 18:17:06.917
    [ohasd(27261)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac2'.
    2011-07-02 18:17:23.602
    [mdnsd(28485)]CRS-5602:mDNS service stopping by request.
    2011-07-02 18:17:36.217
    [gpnpd(28732)]CRS-2328:GPNPD started on node rac2.
    2011-07-02 18:17:43.673
    [cssd(28794)]CRS-1713:CSSD daemon is started in clustered mode
    2011-07-02 18:17:49.826
    [cssd(28794)]CRS-1707:Lease acquisition for node rac2 number 2 completed
    2011-07-02 18:17:49.865
    [cssd(28794)]CRS-1605:CSSD voting file is online: ORCL:DATA; details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/cssd/ocssd.log.
    2011-07-02 18:18:03.049
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 rac2 .
    2011-07-02 18:18:06.160
    [ctssd(28861)]CRS-2403:The Cluster Time Synchronization Service on host rac2 is in observer mode.
    2011-07-02 18:18:06.220
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
    2011-07-02 18:18:06.238
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:18:06.239
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 18:18:06.794
    [ctssd(28861)]CRS-2401:The Cluster Time Synchronization Service started on host rac2.
    [client(28891)]CRS-10001:ACFS-9327: Verifying ADVM/ACFS devices.
    [client(28895)]CRS-10001:ACFS-9322: done.
    2011-07-02 18:18:33.465
    [crsd(29020)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:33.575
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:35.757
    [crsd(29051)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:36.129
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:38.596
    [crsd(29066)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:39.146
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:41.058
    [crsd(29085)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:41.435
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:44.255
    [crsd(29101)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:45.165
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:47.013
    [crsd(29121)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:47.409
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:50.071
    [crsd(29136)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:50.118
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:51.843
    [crsd(29156)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:52.373
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:54.361
    [crsd(29171)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:54.772
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:56.620
    [crsd(29202)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:57.104
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:58.997
    [crsd(29218)]CRS-1013:The OCR location in an ASM disk group is inaccessible. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/crsd/crsd.log.
    2011-07-02 18:18:59.301
    [ohasd(27261)]CRS-2765:Resource 'ora.crsd' has failed on server 'rac2'.
    2011-07-02 18:18:59.302
    [ohasd(27261)]CRS-2771:Maximum restart attempts reached for resource 'ora.crsd'; will not restart.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 18:49:58.070
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:21:33.362
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 19:52:05.271
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:22:53.696
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 20:53:43.949
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:24:32.990
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 21:55:21.907
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 21:55:21.908
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:26:45.752
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 22:57:54.682
    [ctssd(28861)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/grid/oracle/product/11.2.0/grid/log/rac2/ctssd/octssd.log.
    2011-07-02 22:57:54.683
    [ctssd(28861)]CRS-2409:The clock on host rac2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
    2011-07-02 23:07:28.603
    [cssd(28794)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.020 seconds
    2011-07-02 23:07:35.621
    [cssd(28794)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.010 seconds
    2011-07-02 23:07:39.629
    [cssd(28794)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 3.000 seconds
    2011-07-02 23:07:42.641
    [cssd(28794)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 205080558
    2011-07-02 23:07:44.751
    [cssd(28794)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
    2011-07-02 23:07:45.326
    [ctssd(28861)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
    2011-07-04 19:46:26.008
    [ohasd(27261)]CRS-8011:reboot advisory message from host: rac1, component: mo155738, with time stamp: L-2011-07-04-19:44:43.318
    [ohasd(27261)]CRS-8013:reboot advisory message text: clsnomon_status: need to reboot, unexpected failure 8 received from CSS
    [root@rac2 rac2]#
    This log file starts with a complaint that the OLR is not accessible. Here is what I see on rac2:
    -rw------- 1 root oinstall 272756736 Jul  2 18:18 /u01/grid/oracle/product/11.2.0/grid/cdata/rac2.olr
    And I guess the rest of the problems start from this.
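    Not a definitive diagnosis, but since crsd keeps reporting that the OCR in the ASM disk group is inaccessible on rac2, a few quick checks on rac2 may narrow it down (assuming ASMLib is in use, since the voting file shows up as ORCL:DATA):
    # /etc/init.d/oracleasm listdisks   # DATA should be listed on rac2 as well
    # /etc/init.d/oracleasm scandisks   # rescan if it is not
    # crsctl check crs                  # state of the local CRS stack
    $ cluvfy comp ssa -n rac1,rac2      # shared storage accessibility from both nodes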

  • HSODBC not working after upgrade to 11G

    Hi,
    I am having trouble with database links that worked under 10g but no longer work after upgrading to 11g. For this installation I am running Oracle on 32-bit Windows
    and am using a Sybase ODBC driver to connect to a Sybase database.
    I did catch the fact that hsodbc has been replaced by dg4odbc and have changed my listener configuration accordingly. However, when I try to use the database links I receive ORA-28500 errors: "Incorrect syntax near 'FROM'".
    Any idea on what the problem might be or how to troubleshoot?
    Thanks!

    This kind of problem is commonly related to the QUOTED IDENTIFIER option in the ODBC driver.
    - Open the ODBC Administrator and step through the configuration; you'll find an option called EnableQuotedIdentifiers - make sure it is checked.
    - Also make sure you have set the parameter HS_FDS_SUPPORT_STATISTICS=FALSE in your init<dg4odbc>.ora file.
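    For reference, a minimal gateway init file sketch (the file name initdg4odbc.ora, the default location $ORACLE_HOME/hs/admin, and the DSN name sybase_dsn are assumptions to adapt to your setup):
    # $ORACLE_HOME/hs/admin/initdg4odbc.ora
    HS_FDS_CONNECT_INFO = sybase_dsn
    HS_FDS_TRACE_LEVEL = off
    HS_FDS_SUPPORT_STATISTICS = FALSE
    The matching listener.ora SID_DESC should now use PROGRAM = dg4odbc instead of hsodbc, and the tnsnames.ora entry used by the database link still needs (HS = OK).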

  • Oracle 11g R2 RAC installation on AIX

    hi,
    I want to set up Oracle 11g R2 RAC on IBM hardware with AIX. Is there a good document or cookbook explaining the step-by-step installation procedure? I would be thankful.
    Thanks

    Hi;
    Please check the links below, which cover 11gR2 RAC installation on AIX:
    11gR2 RAC installation in AIX help urgent
    Re: 11gR2 RAC installation in AIX help urgent...!!!!
    11gR2 RAC Installation Manual (with ASM+Rawdevice)
    http://www.goodus.com/knowledge_pds/%EA%B8%B0%EC%88%A0%EB%85%B8%ED%8A%B8_11gR2_RAC_Guide.pdf
    Regards,
    Helios
