Global Resource Directory frozen (Oracle RAC database)

I have a 2-node Oracle cluster on an x86_64 Solaris 10 system, and I am getting the following error while starting the ASM instance on node 2.
Here are the errors from the log file.
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
IPC Send timeout detected. Sender: ospid 2388 [[email protected] (PING)]
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
I have disabled the firewalls on both nodes.
Can anyone please help me with this?

Without four digit version info?
No way, Jose
Sybrand Bakker
Senior Oracle DBA
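
For reference, the full four-digit version being asked for here can be read from the instance itself; a minimal query, valid on any release discussed in this thread:
SQL> -- the banner shows the full version, e.g. 10.2.0.4.0
SQL> SELECT banner FROM v$version;
SQL> -- per-component versions and patch levels
SQL> SELECT comp_name, version, status FROM dba_registry;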

Similar Messages

  • ASM ORA-27041 'Global Resource Directory partially frozen for dirty detach'

    Hi there,
    I am running a 10gR2 RAC on two nodes with ASM. About once every three days, the following happens to the ASM instance of one of the nodes, but it's not predictable which node will be next. In other words: both ASM instances have failed so far, but never both at the same time. The least I can say is that the cluster always kept working. But what the f... happens here? Please have a look at the following alert log extract of the ASM system.
    ---------------------------------------snip---------------------------------------------
    <normal operational output until here>
    Tue Jan 17 05:33:28 2006
    WARNING: cache failed to read fn=2 blk=0 from disk(s): 1 0
    ORA-27041: unable to open file
    NOTE: cache initiating offline of disk 1 group 2
    NOTE: cache initiating offline of disk 0 group 2
    WARNING: offlining disk 1.4051955519 (AWISRACVOL_0001) with mask 0x3
    WARNING: offlining disk 0.4051955520 (AWISRACVOL_0000) with mask 0x3
    NOTE: PST update: grp = 2, dsk = 1, mode = 0x6
    NOTE: PST update: grp = 2, dsk = 0, mode = 0x6
    Tue Jan 17 05:33:28 2006
    ERROR: too many offline disks in PST (grp 2)
    Tue Jan 17 05:33:28 2006
    NOTE: PST not enabling heartbeating (grp 2): group dismounted
    Tue Jan 17 05:33:28 2006
    NOTE: halting all I/Os to diskgroup AWISRACVOL
    NOTE: active pin found: 0x0x1019bcc38
    NOTE: active pin found: 0x0x1019bcce8
    Tue Jan 17 05:33:33 2006
    ERROR: PST-initiated MANDATORY DISMOUNT of group AWISRACVOL
    NOTE: cache dismounting group 2/0x9D831FCE (AWISRACVOL)
    NOTE: dbwr not being msg'd to dismount
    Tue Jan 17 05:33:33 2006
    kjbdomdet send to node 0
    detach from dom 2, sending detach message to node 0
    Tue Jan 17 05:33:33 2006
    Dirty detach reconfiguration started (old inc 2, new inc 2)
    List of nodes:
    0 1
    Global Resource Directory partially frozen for dirty detach
    * dirty detach - domain 2 invalid = TRUE
    553 GCS resources traversed, 0 cancelled
    1067 GCS resources on freelist, 6165 on array, 6165 allocated
    Dirty Detach Reconfiguration complete
    Tue Jan 17 05:33:34 2006
    WARNING: dirty detached from domain 2
    Tue Jan 17 05:33:34 2006
    SUCCESS: diskgroup AWISRACVOL was dismounted
    Tue Jan 17 05:35:25 2006
    WARNING: cache failed to read fn=1 blk=2 from disk(s): 0 1
    ORA-27041: unable to open file
    NOTE: cache initiating offline of disk 0 group 1
    NOTE: cache initiating offline of disk 1 group 1
    WARNING: offlining disk 0.4051955518 (AWISAUXVOL_0000) with mask 0x3
    WARNING: offlining disk 1.4051955517 (AWISAUXVOL_0001) with mask 0x3
    NOTE: PST update: grp = 1, dsk = 0, mode = 0x6
    NOTE: PST update: grp = 1, dsk = 1, mode = 0x6
    Tue Jan 17 05:35:25 2006
    ERROR: too many offline disks in PST (grp 1)
    Tue Jan 17 05:35:25 2006
    NOTE: PST not enabling heartbeating (grp 1): group dismounted
    Tue Jan 17 05:35:25 2006
    NOTE: halting all I/Os to diskgroup AWISAUXVOL
    NOTE: active pin found: 0x0x1019bcc38
    NOTE: active pin found: 0x0x1019bcce8
    NOTE: active pin found: 0x0x1019bcd98
    Tue Jan 17 05:35:25 2006
    ERROR: PST-initiated MANDATORY DISMOUNT of group AWISAUXVOL
    NOTE: cache dismounting group 1/0x9D731FCD (AWISAUXVOL)
    Tue Jan 17 05:35:26 2006
    kjbdomdet send to node 0
    detach from dom 1, sending detach message to node 0
    Tue Jan 17 05:35:26 2006
    Dirty detach reconfiguration started (old inc 2, new inc 2)
    List of nodes:
    0 1
    Global Resource Directory partially frozen for dirty detach
    * dirty detach - domain 1 invalid = TRUE
    5052 GCS resources traversed, 0 cancelled
    Dirty Detach Reconfiguration complete
    Tue Jan 17 05:35:27 2006
    WARNING: dirty detached from domain 1
    Tue Jan 17 05:35:27 2006
    SUCCESS: diskgroup AWISAUXVOL was dismounted
    <log ends here>
    ---------------------------------------snap--------------------------------------------
    Needless to say, the database based on this ASM instance terminates with complaints about non-readable controlfiles and so on.
    It was no problem to restart the ASM and DB instances after this happened, so no persistent damage was done, but I find it highly alarming. Does anybody have an idea, or can explain, what Oracle is doing here?
    Thanks in advance,
    Martin Klier

    Hi,
    it seems we have found the solution to this problem; it has been a whole bunch of trouble:
    - First of all, the interconnect should never be a crossover cable. Oracle says so in the official RAC FAQ, and describes the reason why not: "b) Instability. We have seen different problems e.g. ORA-29740 at configurations using crossover cable, and other errors." I had known this, but the errors only appeared weeks after the switchover to crossover cabling.
    - The ASM-used devices / raw devices MUST be owned by oracle:oinstall (or :dba, depending on the setup); if they are owned by root:disk (with oracle as a member of disk), the ASM instance produces exactly the error above.
    - Having this fixed, I still got an ORA-27041 on every start of ASM, but without "Global Resource Directory partially frozen"; the system otherwise worked fine. Solution: provide an ASM_DISKSTRING parameter, to prevent ASM from trying to use other (maybe CRS-used) raw devices than its own. (See the sketch after this post.)
    The whole thing took away three weeks of my life and a bunch of hair :)
    Martin
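
    To make the ASM_DISKSTRING point concrete: the parameter limits ASM discovery to the devices ASM should actually scan. A minimal sketch against the ASM instance, where the path pattern is a hypothetical example that must match the real device names (if ASM runs from a pfile, set the parameter there instead):
    SQL> -- restrict discovery; takes effect at the next ASM restart
    SQL> ALTER SYSTEM SET asm_diskstring='/dev/rdsk/asm*' SCOPE=SPFILE;
    SQL> -- verify what discovery sees afterwards
    SQL> SELECT path, header_status FROM v$asm_disk;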

  • Enterprise User Security (EUS) with Oracle RAC database

    Hi all,
    I'm experiencing a problem configuring centralized AAA on Oracle OID for an Oracle RAC database.
    My environment is:
    1) Oracle OID 10g (192.168.15.245 - rh4oidserver.klab.it)
    2) Oracle RAC database 11g
    I successfully configured a standalone Oracle database to authenticate users against the centralized OID repository, but I'm running into various problems doing the same things with RAC.
    In depth:
    1) Oracle RAC works correctly and internal users (SYS, Oracle, etc.) are correctly authenticated and authorized against the database
    2) Oracle RAC registers itself in OID (see attached snapshot)
    3) I run sqlplus to connect to Oracle RAC using OID users and I get the following error: ORA-28030 Server encountered problems accessing LDAP directory service
    Using a sniffer, I can see a reset message after the SSL handshake (SSL v3 encrypted alert), but I don't understand the root cause...
    Host file on RAC server is:
    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1          localhost.localdomain localhost
    ::1          localhost6.localdomain6 localhost6
    # Public
    192.168.15.177          orclrac1.klab.it orclrac1
    192.168.15.178 orclrac2.klab.it orclrac2
    #Private
    192.168.1.100          orclrac1-priv.klab.it orclrac1-priv
    192.168.1.105 orclrac2-priv.klab.it orclrac2-priv
    #Virtual
    192.168.15.88 orclrac1-vip.klab.it orclrac1-vip
    192.168.15.96 orclrac2-vip.klab.it orclrac2-vip
    92.168.15.184 openfiler.klab.it openfiler
    192.168.1.90 openfiler-priv.klab.it openfiler-priv
    192.168.15.246     acti.klab.it acti
    #192.168.1.245 rh4oidserver.klab.it rh4oidserver
    192.168.15.245 rh4oidserver.klab.it rh4oidserver
    tnsnames.ora is:
    # tnsnames.ora Network Configuration File: /u01/app/oracle/product/11.1.0/db_1/network/admin/tnsnames.ora
    # Generated by Oracle configuration tools.
    RACDB1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = orclrac1-vip)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = racdb.klab.it)
          (INSTANCE_NAME = racdb1)
        )
      )
    RACDB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = orclrac1-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = orclrac2-vip)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = racdb.klab.it)
        )
      )
    LISTENERS_RACDB =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = orclrac1-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = orclrac2-vip)(PORT = 1521))
      )
    RACDB2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = orclrac2-vip)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = racdb.klab.it)
          (INSTANCE_NAME = racdb2)
        )
      )
    ldap.ora is:
    # ldap.ora Network Configuration File: /u01/app/oracle/product/11.1.0/db_1/network/admin/ldap.ora
    # Generated by Oracle configuration tools.
    DIRECTORY_SERVERS= (rh4oidserver.klab.it:389:636)
    DEFAULT_ADMIN_CONTEXT = "dc=dbtest101,dc=klab,dc=it"
    DIRECTORY_SERVER_TYPE = OID
    sqlnet.ora is:
    # sqlnet.ora.orclrac1 Network Configuration File: /u01/app/oracle/product/11.1.0/db_1/network/admin/sqlnet.ora.orclrac1
    # Generated by Oracle configuration tools.
    NAMES.DIRECTORY_PATH= (LDAP,TNSNAMES)
    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
          (DIRECTORY = /u01/app/oracle/admin/racdb)
        )
      )
    listener.ora is:
    # listener.ora.orclrac1 Network Configuration File: /u01/app/oracle/product/11.1.0/db_1/network/admin/listener.ora.orclrac1
    # Generated by Oracle configuration tools.
    LISTENER_ORCLRAC1 =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = orclrac1-vip)(PORT = 1521)(IP = FIRST))
          (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.15.177)(PORT = 1521)(IP = FIRST))
        )
      )
    LISTENER_ORCLRAC2 =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = orclrac1-vip)(PORT = 1521)(IP = FIRST))
          (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.15.178)(PORT = 1521)(IP = FIRST))
        )
      )
    Thanks in advance for any help or suggestion.
    Antonio

    Hello bipkary,
    what version are you using?
    The following link tells you everything about EUS in Oracle 10g R2:
    http://download.oracle.com/docs/cd/B19306_01/network.102/b14269/toc.htm
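
    When ORA-28030 shows up right after an SSL handshake reset, as described above, two quick checks from each RAC node usually narrow it down; a sketch, assuming the wallet directory from the posted sqlnet.ora:
    $ # confirm the database wallet opens and lists the expected certificates
    $ orapki wallet display -wallet /u01/app/oracle/admin/racdb
    $ # confirm an SSL bind to the OID SSL port works (-U 1 = SSL with no authentication)
    $ ldapbind -h rh4oidserver.klab.it -p 636 -U 1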

  • Use Oracle RAC Database 10g on SunOS nova 5.9

    Hello!
    We use Oracle RAC Database 10g Enterprise Edition Release 10.1.0.5.0 - 64bit, consisting of two nodes, on SunOS nova 5.9 Generic_117171-17 sun4u sparc SUNW,Sun-Fire-V440.
    Connection string as follows:
    jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=
    (PROTOCOL=TCP)(HOST=host2)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=proddb))).
    A clustered Java application performs a direct JDBC INSERT of a record into the database through oracle.jdbc.pool.OracleDataSource.
    The table has the following fields:
    FIELD1 VARCHAR2(100 BYTE) NOT NULL,
    FIELD2 VARCHAR2(100 BYTE) NOT NULL,
    FIELD3 VARCHAR2(100 BYTE) NOT NULL,
    FIELD4 VARCHAR2(100 BYTE),
    FIELD5 VARCHAR2(100 BYTE),
    FIELD6 VARCHAR2(100 BYTE),
    FIELD7 INTEGER,
    FIELD8 BLOB,
    FIELD9 BLOB
    Approximately 60 milliseconds later, another application node performs a SELECT of that record, which returns an empty result. However, the next SELECT returns the needed result.
    The defect occurs with roughly 50% probability.
    Could the problem be connected with a defect in Oracle RAC or the JDBC driver?
    Thanks in advance!

    RAC propagates commit messages using a method which piggy-backs on the next available inter-instance message. I think it's called the Lamport method, but in any case it means you have an uncontrollable delay between committed data in your two instances.
    Look at the init.ora parameter max_commit_propagation_delay, and consider setting it to 0; a sketch follows below. The parameter is briefly described in the Oracle Reference Guide.
    cheers,
    -Mark
    www.remidata.com/book_nuts2soup.htm
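
    A hedged illustration of Mark's suggestion; max_commit_propagation_delay exists in 9i/10gR1 (it is deprecated in later releases) and is a static parameter, so the change needs an instance restart on all nodes:
    SQL> -- 0 selects broadcast-on-commit: SCNs propagate immediately at commit time
    SQL> ALTER SYSTEM SET max_commit_propagation_delay=0 SCOPE=SPFILE SID='*';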

  • Connect Database Host Name in Oracle RAC Database

    Hi All,
    I am using Oracle SES 11g to create a "Table Source" and have the following question.
    I have to add a new table source to crawl; in the field "Database Host Name" I want to connect to an Oracle RAC database server with two nodes.
    I have searched the documentation and can't find anything relevant to that scenario.
    Can anyone help me with that?
    Thank you.
    NG

    Check this: Rittman Mead Consulting » Blog Archive » Oracle BI EE 11g – Managing Host Name Changes

  • RMAN backup in ORACLE RAC database

    Hi there,
    We are using an RMAN backup strategy, taking backups from one node (e.g. node 1). The database is a RAC database, version 9.2.3, and the OS is Solaris 5.10.
    This is the configuration we use, as follows:
    Connecting to the production database using the catalog database:
    rman catalog=rman/rman@CATDB target=sys/oracle@PRODDB (node 2)
    RMAN> show all;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 30;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/rmanback/_%F';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 5;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/rmanback/CMWPROD_%s.bak';
    CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'SYS/oracle@PRODDB(node2)';
    CONFIGURE MAXSETSIZE TO 7 G;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/9.2.3/dbs/snapcf_CMWPROD1.f';
    As the above config shows, RMAN is connecting to the PROD DB (node 2). But we need to take backups (datafiles as well as archive logs) from both nodes (i.e. from node 1 as well as node 2).
    Is there any way to take backups from both nodes?
    Thanks,
    Balu.

    Duplicate thread
    RMAN configuration in ORACLE RAC database
    Khurram

  • Help required for creating an Oracle RAC database having multiple instances

    Hello Guys!
    I want to create one database having 2 instances on Oracle 11g R1.
    I have 2 nodes in this Oracle RAC.
    The database name will be Val,
    and the instance name on node 1 will be val1 and the instance name on node 2 will be val2.
    Raw storage will be used, that is, without using Automatic Storage Management or an Oracle Cluster File System.
    Can anyone help with how to do that?
    Thanks

    Hi,
    "Wonderful example, thanks for the link." You're welcome :)
    "I'm a little confused which option to choose in the shared storage option of DBCA, i.e. Cluster File System or raw devices?" I suggest you use ASM for your database storage requirement and select raw devices. Oracle recommends ASM storage for a shared database.
    thanks,
    X A H E E R

  • Changing the Physical IP address of Oracle RAC database Server

    Hi All,
    We are planning to change the physical IP address of an Oracle RAC database server. I would like to know what changes need to be made on the Oracle side.
    Thanks in Advance

    Check document 283684.1 on metalink and/or
    http://orcl-experts.info/index.php?name=FAQ&id_cat=9
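
    As a taste of what document 283684.1 walks through: the cluster's network definitions live in the OCR and are inspected and changed with oifcfg; the interface name and subnet below are hypothetical examples:
    $ # show the interfaces currently registered in the OCR
    $ oifcfg getif
    $ # re-register the public subnet after the address change (example values)
    $ oifcfg delif -global eth0
    $ oifcfg setif -global eth0/10.10.10.0:public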

  • RMAN configuration in ORACLE RAC database

    Hi there,
    We are using an RMAN backup strategy, taking backups from one node (e.g. node 1). The database is a RAC database, version 9.2.3, and the OS is Solaris 5.10.
    This is the configuration we use, as follows:
    Connecting to the production database using the catalog database:
    rman catalog=rman/rman@CATDB target=sys/oracle@PRODDB (node 2)
    RMAN> show all;
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 30;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/rmanback/_%F';
    CONFIGURE DEVICE TYPE DISK PARALLELISM 5;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/rmanback/CMWPROD_%s.bak';
    CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'SYS/oracle@PRODDB(node2)';
    CONFIGURE MAXSETSIZE TO 7 G;
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/9.2.3/dbs/snapcf_CMWPROD1.f';
    As the above config shows, RMAN is connecting to the PROD DB (node 2). But we need to take backups (datafiles as well as archive logs) from both nodes (i.e. from node 1 as well as node 2).
    Is there any way to take backups from both nodes?
    Thanks,
    Balu.

    You will need to set up an NFS share so that both nodes can access each other's archived redo log areas. You also need to ensure your LOG_ARCHIVE_DEST_n parameter settings are properly set. Once set up correctly, either node will be able to run the backup and include the other's archived redo log files. Since this is RAC, the data files are shared by design and either node can already back up those files.
    Multiple channels would allow 'simultaneous' backup of the archived redo logs; see the sketch below.
    I would highly recommend using a CFS (a single shared file system where all nodes store their archived redo logs, distinguished by thread) to store the archived redo logs; this reduces administration and configuration, especially when you get ready to scale.
    Hope this helps.
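
    A sketch of the multi-channel approach on 9i, assuming hypothetical per-instance aliases PRODDB1 and PRODDB2 that resolve to node 1 and node 2, and that each node can read the other's archive destination (e.g. via the NFS share described above):
    RMAN> run {
      allocate channel c1 device type disk connect 'sys/oracle@PRODDB1' format '/rmanback/CMWPROD_%s.bak';
      allocate channel c2 device type disk connect 'sys/oracle@PRODDB2' format '/rmanback/CMWPROD_%s.bak';
      backup database plus archivelog;
    }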

  • Manual creation of Oracle RAC Database

    Hello Guru's,
    I successfully installed Oracle Grid Infrastructure (Clusterware) and Oracle RDBMS 11.2.0.1.0 for RAC, with 2 nodes, on Linux 5.3 64-bit.
    All the Clusterware services and the ASM instance are running fine on both nodes.
    Now I'm planning to create the database manually on node 1 (RAC-NODE1).
    1) Configured the environment variables as follows:
    export TMP=/tmp
    export ORACLE_HOSTNAME=`hostname`
    export ORACLE_SID=finance1
    export ORACLE_UNQNAME=finance
    export ORACLE_BASE=/rdbms1/app/oracle/
    export ORACLE_HOME=/rdbms1/app/oracle/product/11.2.0/db_home/
    export ORACLE_TERM=xterm
    export PATH=$ORACLE_HOME/bin:$PATH:.
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    2) Created the pfile and modified the parameters as follows:
    db_name='finance'
    processes = 200
    audit_trail ='db'
    db_block_size=8192
    db_domain='vod.com'
    diagnostic_dest='/oradump/oradata/finance/dump'
    sessions=200
    remote_login_passwordfile='EXCLUSIVE'
    undo_tablespace='UNDOTBS1'
    control_files = +CTRL
    compatible ='11.2.0'
    job_queue_processes = 10
    undo_management = 'AUTO'
    finance1.instance_name = finance1
    db_create_file_dest = +DBFILE
    db_create_online_log_dest_1 = +REDO1
    db_create_online_log_dest_2 = +REDO2
    3) Created the password file as follows:
    $ orapwd file=orapwfinance1 password=syspassword entries=10
    4) Edited the /etc/oratab file as follows:
    finance1:/rdbms1/app/oracle/product/11.2.0/db_home:N
    5) Created the RAC DB creation script as follows:
    CREATE DATABASE FINANCE
    DATAFILE '+DBFILE' SIZE 500M
    AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED
    EXTENT MANAGEMENT LOCAL
    SYSAUX DATAFILE '+DBFILE' SIZE 500M
    AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED
    DEFAULT TEMPORARY TABLESPACE TEMP
    TEMPFILE '+DBFILE' SIZE 100M AUTOEXTEND ON NEXT 10M MAXSIZE 500M
    UNDO TABLESPACE UNDOTBS1
    DATAFILE '+DBFILE' SIZE 100M AUTOEXTEND ON NEXT 10M MAXSIZE 500M
    DEFAULT TABLESPACE USERDATA
    DATAFILE '+DBFILE' SIZE 100M AUTOEXTEND ON NEXT 10M MAXSIZE 500M
    LOGFILE
    GROUP 1 ('+REDO1','+REDO2') SIZE 20M,
    GROUP 2 ('+REDO1','+REDO2') SIZE 20M,
    GROUP 3 ('+REDO1','+REDO2') SIZE 20M
    MAXINSTANCES 8
    MAXLOGHISTORY 300
    MAXLOGFILES 64
    MAXLOGMEMBERS 5
    MAXDATAFILES 150
    USER SYS IDENTIFIED BY "sys_123"
    USER SYSTEM IDENTIFIED BY "system_123";
    6) Started the instance as follows:
    $ sqlplus / as sysdba
    SQL> startup nomount
    ORACLE instance started.
    Total System Global Area 217157632 bytes
    Fixed Size 2211928 bytes
    Variable Size 159387560 bytes
    Database Buffers 50331648 bytes
    Redo Buffers 5226496 bytes
    SQL> @rac_db.sql
    CREATE DATABASE FINANCE
    ERROR at line 1:
    ORA-01501: CREATE DATABASE failed
    ORA-00200: control file could not be created
    ORA-00202: control file: '+CTRL'
    ORA-15045: ASM file name '+CTRL' is not in reference form
    ORA-17502: ksfdcre:5 Failed to create file +CTRL
    ORA-15081: failed to submit an I/O operation to a disk
    This shows that Oracle was unable to create the control file on the +CTRL diskgroup; I found that there is a problem with the permissions on the disks in the diskgroups.
    brw-rw-r-- 1 grid asmadmin 8, 49 Apr 8 04:25 CTRLDISK1
    brw-rw-r-- 1 grid asmadmin 8, 50 Apr 8 04:25 CTRLDISK2
    brw-rw-r-- 1 grid asmadmin 8, 51 Apr 8 04:25 CTRLDISK3
    brw-rw-r-- 1 grid asmadmin 8, 52 Apr 8 04:25 CTRLDISK4
    All the disks in the diskgroup have 664 permissions and are owned by the grid user with group asmadmin.
    # id grid
    uid=1001(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)
    # id oracle
    uid=1002(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),505(asmdba)
    Please help me to overcome the above problem.
    regards,
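
    On the ORA-15045 in the stack above: '+CTRL' names only a disk group, not a file, so it is not a valid reference once an existing control file must be reopened. The usual pattern after CREATE DATABASE succeeds is to read the fully qualified OMF name back and store it; the file name below is a hypothetical OMF-generated example:
    SQL> SELECT name FROM v$controlfile;
    SQL> -- store the full name in the spfile (or edit control_files in the pfile instead)
    SQL> ALTER SYSTEM SET control_files='+CTRL/finance/controlfile/current.261.812345679' SCOPE=SPFILE SID='*';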

    Thanks for the comments,
    I successfully created the database, but once I restarted it, it was unable to mount because it could not open the '+CTRL' control file.
    SQL> startup
    ORACLE instance started.
    Total System Global Area 217157632 bytes
    Fixed Size 2211928 bytes
    Variable Size 159387560 bytes
    Database Buffers 50331648 bytes
    Redo Buffers 5226496 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    alert log:
    ORACLE_BASE from environment = /rdbms1/app/oracle/
    Thu Apr 11 09:36:34 2013
    ALTER DATABASE MOUNT
    ORA-00210: cannot open the specified control file
    ORA-00202: control file: '+CTRL'
    ORA-17503: ksfdopn:2 Failed to open file +CTRL
    ORA-15045: ASM file name '+CTRL' is not in reference form
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    Thu Apr 11 09:36:42 2013
    NOTE: initiating MARK startup
    raw devices:
    # ls -l /dev/sd*
    brw-rw-r-- 1 grid asmadmin 8, 16 Apr 11 09:04 /dev/sdb
    brw-rw-r-- 1 grid asmadmin 8, 17 Apr 11 09:51 /dev/sdb1
    brw-rw-r-- 1 grid asmadmin 8, 18 Apr 11 09:51 /dev/sdb2
    brw-rw-r-- 1 grid asmadmin 8, 19 Apr 11 09:51 /dev/sdb3
    brw-rw-r-- 1 grid asmadmin 8, 20 Apr 11 09:45 /dev/sdb4
    brw-rw-r-- 1 grid asmadmin 8, 32 Apr 11 09:04 /dev/sdc
    brw-rw-r-- 1 grid asmadmin 8, 33 Apr 11 09:30 /dev/sdc1
    brw-rw-r-- 1 grid asmadmin 8, 34 Apr 11 09:30 /dev/sdc2
    brw-rw-r-- 1 grid asmadmin 8, 35 Apr 11 09:30 /dev/sdc3
    brw-rw-r-- 1 grid asmadmin 8, 36 Apr 11 09:05 /dev/sdc4
    brw-rw-r-- 1 grid asmadmin 8, 48 Apr 11 09:04 /dev/sdd
    brw-rw-r-- 1 grid asmadmin 8, 49 Apr 11 09:51 /dev/sdd1
    brw-rw-r-- 1 grid asmadmin 8, 50 Apr 11 09:51 /dev/sdd2
    brw-rw-r-- 1 grid asmadmin 8, 51 Apr 11 09:51 /dev/sdd3
    brw-rw-r-- 1 grid asmadmin 8, 52 Apr 11 09:30 /dev/sdd4
    brw-rw-r-- 1 grid asmadmin 8, 64 Apr 11 09:04 /dev/sde
    brw-rw-r-- 1 grid asmadmin 8, 65 Apr 11 09:51 /dev/sde1
    brw-rw-r-- 1 grid asmadmin 8, 66 Apr 11 09:51 /dev/sde2
    brw-rw-r-- 1 grid asmadmin 8, 67 Apr 11 09:51 /dev/sde3
    brw-rw-r-- 1 grid asmadmin 8, 68 Apr 11 09:30 /dev/sde4
    brw-rw-r-- 1 grid asmadmin 8, 80 Apr 11 09:04 /dev/sdf
    brw-rw-r-- 1 grid asmadmin 8, 81 Apr 11 09:51 /dev/sdf1
    brw-rw-r-- 1 grid asmadmin 8, 82 Apr 11 09:51 /dev/sdf2
    brw-rw-r-- 1 grid asmadmin 8, 83 Apr 11 09:51 /dev/sdf3
    brw-rw-r-- 1 grid asmadmin 8, 84 Apr 11 09:30 /dev/sdf4
    brw-rw-r-- 1 grid asmadmin 8, 96 Apr 11 09:04 /dev/sdg
    brw-rw-r-- 1 grid asmadmin 8, 97 Apr 11 09:51 /dev/sdg1
    brw-rw-r-- 1 grid asmadmin 8, 98 Apr 11 09:51 /dev/sdg2
    brw-rw-r-- 1 grid asmadmin 8, 99 Apr 11 09:51 /dev/sdg3
    brw-rw-r-- 1 grid asmadmin 8, 100 Apr 11 09:30 /dev/sdg4
    brw-rw-r-- 1 grid asmadmin 8, 112 Apr 11 09:04 /dev/sdh
    brw-rw-r-- 1 grid asmadmin 8, 113 Apr 11 09:51 /dev/sdh1
    brw-rw-r-- 1 grid asmadmin 8, 114 Apr 11 09:51 /dev/sdh2
    brw-rw-r-- 1 grid asmadmin 8, 115 Apr 11 09:51 /dev/sdh3
    brw-rw-r-- 1 grid asmadmin 8, 116 Apr 11 09:30 /dev/sdh4
    asm disks:
    [09:52 AM [email protected] disks]# ll
    brw-rw-r-- 1 grid asmadmin 8, 65 Apr 11 09:05 ARCLOGDISK1
    brw-rw-r-- 1 grid asmadmin 8, 66 Apr 11 09:05 ARCLOGDISK2
    brw-rw-r-- 1 grid asmadmin 8, 67 Apr 11 09:05 ARCLOGDISK3
    brw-rw-r-- 1 grid asmadmin 8, 68 Apr 11 09:05 ARCLOGDISK4
    brw-rw-r-- 1 grid asmadmin 8, 49 Apr 11 09:05 CTRLDISK1
    brw-rw-r-- 1 grid asmadmin 8, 50 Apr 11 09:05 CTRLDISK2
    brw-rw-r-- 1 grid asmadmin 8, 51 Apr 11 09:05 CTRLDISK3
    brw-rw-r-- 1 grid asmadmin 8, 52 Apr 11 09:05 CTRLDISK4
    brw-rw-r-- 1 grid asmadmin 8, 33 Apr 11 09:05 DBFILEDISK1
    brw-rw-r-- 1 grid asmadmin 8, 34 Apr 11 09:05 DBFILEDISK2
    brw-rw-r-- 1 grid asmadmin 8, 35 Apr 11 09:05 DBFILEDISK3
    brw-rw-r-- 1 grid asmadmin 8, 36 Apr 11 09:05 DBFILEDISK4
    brw-rw-r-- 1 grid asmadmin 8, 113 Apr 11 09:05 FRADISK1
    brw-rw-r-- 1 grid asmadmin 8, 114 Apr 11 09:05 FRADISK2
    brw-rw-r-- 1 grid asmadmin 8, 115 Apr 11 09:05 FRADISK3
    brw-rw-r-- 1 grid asmadmin 8, 116 Apr 11 09:05 FRADISK4
    brw-rw-r-- 1 grid asmadmin 8, 81 Apr 11 09:05 REDODISK1
    brw-rw-r-- 1 grid asmadmin 8, 82 Apr 11 09:05 REDODISK2
    brw-rw-r-- 1 grid asmadmin 8, 83 Apr 11 09:05 REDODISK3
    brw-rw-r-- 1 grid asmadmin 8, 84 Apr 11 09:05 REDODISK4
    brw-rw-r-- 1 grid asmadmin 8, 97 Apr 11 09:05 REDODISK5
    brw-rw-r-- 1 grid asmadmin 8, 98 Apr 11 09:05 REDODISK6
    brw-rw-r-- 1 grid asmadmin 8, 99 Apr 11 09:05 REDODISK7
    brw-rw-r-- 1 grid asmadmin 8, 100 Apr 11 09:05 REDODISK8
    brw-rw-r-- 1 grid asmadmin 8, 17 Apr 11 09:05 VOTEDISK1
    brw-rw-r-- 1 grid asmadmin 8, 18 Apr 11 09:05 VOTEDISK2
    brw-rw-r-- 1 grid asmadmin 8, 19 Apr 11 09:05 VOTEDISK3
    brw-rw-r-- 1 grid asmadmin 8, 20 Apr 11 09:05 VOTEDISK4
    [09:53 AM [email protected] ~]# ls -l /rdbms1/app/oracle/product/11.2.0/db_home/bin/oracle
    -rwsr-s--x 1 oracle asmadmin 210824720 Apr 8 13:39 /rdbms1/app/oracle/product/11.2.0/db_home/bin/oracle
    I know this problem comes from improper assignment of permissions, but I could not pinpoint it; kindly clarify.
    One more thing I need to know: whenever I reboot the nodes, the raw devices and ASM disks come back with the default permissions brw-------, and I have to change them every time to brw-rw-r--. Is there any way to make these disk and device permissions permanent? (See the sketch after this post.)
    Thanks in advance.
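
    On the last question: ownership set with chown/chmod lives only in the running system and is reset at boot. On Linux this is usually made persistent with a udev rule; the file name and device matches below are hypothetical and must be adapted to the real partitions (the rule-reload command varies with the udev version, so a reboot is the safe test):
    # /etc/udev/rules.d/99-oracle-asmdevices.rules
    # applied at every boot: owner grid, group asmadmin, mode 0660
    KERNEL=="sdc[1-4]", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sdd[1-4]", OWNER="grid", GROUP="asmadmin", MODE="0660"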

  • Cannot start RAC database

    Hi,
    Oracle RAC database 10.2.0.3 / RedHat 4 with 2 nodes.
    In the beginning we had an error ORA-600 [12803], so only SYS could connect to the database. I found note 1026653.6; this note says that we need to create the AUDSES$ sequence, but before that we have to restart the database.
    When we stopped the database we got another ORA-600, and now it's impossible to start it!!
    Here is a copy of our alert file:
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.3.0.
    System parameters with non-default values:
    processes = 300
    sessions = 335
    sga_max_size = 524288000
    __shared_pool_size = 310378496
    __large_pool_size = 4194304
    __java_pool_size = 8388608
    __streams_pool_size = 8388608
    spfile = +DATA/osista/spfileosista.ora
    nls_language = FRENCH
    nls_territory = FRANCE
    nls_length_semantics = CHAR
    sga_target = 524288000
    control_files = DATA/osista/controlfile/control01.ctl, DATA/osista/controlfile/control02.ctl
    db_block_size = 8192
    __db_cache_size = 184549376
    compatible = 10.2.0.3.0
    log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
    db_file_multiblock_read_count= 16
    cluster_database = TRUE
    cluster_database_instances= 2
    db_create_file_dest = +DATA
    db_recovery_file_dest = +FLASH
    db_recovery_file_dest_size= 68543315968
    thread = 2
    instance_number = 2
    undo_management = AUTO
    undo_tablespace = UNDOTBS2
    undo_retention = 29880
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
    local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
    remote_listener = LISTENERS_OSISTA
    job_queue_processes = 10
    background_dump_dest = /oracle/product/admin/OSISTA/bdump
    user_dump_dest = /oracle/product/admin/OSISTA/udump
    core_dump_dest = /oracle/product/admin/OSISTA/cdump
    audit_file_dest = /oracle/product/admin/OSISTA/adump
    db_name = OSISTA
    open_cursors = 300
    pga_aggregate_target = 104857600
    aq_tm_processes = 1
    Cluster communication is configured to use the following interface(s) for this instance
    172.16.0.2
    Wed Jun 13 11:04:30 2012
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    PMON started with pid=2, OS id=8560
    DIAG started with pid=3, OS id=8562
    PSP0 started with pid=4, OS id=8566
    LMON started with pid=5, OS id=8570
    LMD0 started with pid=6, OS id=8574
    LMS0 started with pid=7, OS id=8576
    LMS1 started with pid=8, OS id=8580
    MMAN started with pid=9, OS id=8584
    DBW0 started with pid=10, OS id=8586
    LGWR started with pid=11, OS id=8588
    CKPT started with pid=12, OS id=8590
    SMON started with pid=13, OS id=8592
    RECO started with pid=14, OS id=8594
    CJQ0 started with pid=15, OS id=8596
    MMON started with pid=16, OS id=8598
    Wed Jun 13 11:04:31 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=17, OS id=8600
    Wed Jun 13 11:04:31 2012
    starting up 1 shared server(s) ...
    Wed Jun 13 11:04:31 2012
    lmon registered with NM - instance id 2 (internal mem no 1)
    Wed Jun 13 11:04:31 2012
    Reconfiguration started (old inc 0, new inc 2)
    List of nodes:
    1
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Wed Jun 13 11:04:31 2012
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Wed Jun 13 11:04:31 2012
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Wed Jun 13 11:04:31 2012
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:04:31 2012
    LMS 1: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:04:31 2012
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration complete
    LCK0 started with pid=20, OS id=8877
    Wed Jun 13 11:04:43 2012
    alter database mount
    Wed Jun 13 11:04:43 2012
    This instance was first to mount
    Wed Jun 13 11:04:43 2012
    Starting background process ASMB
    ASMB started with pid=25, OS id=10068
    Starting background process RBAL
    RBAL started with pid=26, OS id=10072
    Wed Jun 13 11:04:47 2012
    SUCCESS: diskgroup DATA was mounted
    Wed Jun 13 11:04:51 2012
    Setting recovery target incarnation to 1
    Wed Jun 13 11:04:52 2012
    Successful mount of redo thread 2, with mount id 3005749259
    Wed Jun 13 11:04:52 2012
    Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
    Completed: alter database mount
    Wed Jun 13 11:05:06 2012
    alter database open
    Wed Jun 13 11:05:06 2012
    This instance was first to open
    Wed Jun 13 11:05:06 2012
    Beginning crash recovery of 1 threads
    parallel recovery started with 2 processes
    Wed Jun 13 11:05:07 2012
    Started redo scan
    Wed Jun 13 11:05:07 2012
    Completed redo scan
    61 redo blocks read, 4 data blocks need recovery
    Wed Jun 13 11:05:07 2012
    Started redo application at
    Thread 1: logseq 7924, block 3, scn 506098125
    Wed Jun 13 11:05:07 2012
    Recovery of Online Redo Log: Thread 1 Group 2 Seq 7924 Reading mem 0
    Mem# 0: +DATA/osista/onlinelog/group_2.372.742132543
    Wed Jun 13 11:05:07 2012
    Completed redo application
    Wed Jun 13 11:05:07 2012
    Completed crash recovery at
    Thread 1: logseq 7924, block 64, scn 506118186
    4 data blocks read, 4 data blocks written, 61 redo blocks read
    Switch log for thread 1 to sequence 7925
    Picked broadcast on commit scheme to generate SCNs
    db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    SUCCESS: diskgroup FLASH was mounted
    SUCCESS: diskgroup FLASH was dismounted
    Thread 1 advanced to log sequence 7926
    SUCCESS: diskgroup FLASH was mounted
    SUCCESS: diskgroup FLASH was dismounted
    Thread 1 advanced to log sequence 7927
    Wed Jun 13 11:05:11 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=31, OS id=12747
    Wed Jun 13 11:05:11 2012
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARC1 started with pid=32, OS id=12749
    Wed Jun 13 11:05:12 2012
    Thread 2 opened at log sequence 7176
    Current log# 4 seq# 7176 mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
    Wed Jun 13 11:05:12 2012
    ARC1: Becoming the 'no FAL' ARCH
    ARC1: Becoming the 'no SRL' ARCH
    Wed Jun 13 11:05:12 2012
    Successful open of redo thread 2
    Wed Jun 13 11:05:12 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Wed Jun 13 11:05:12 2012
    ARC0: Becoming the heartbeat ARCH
    Wed Jun 13 11:05:12 2012
    SMON: enabling cache recovery
    Wed Jun 13 11:05:15 2012
    Successfully onlined Undo Tablespace 20.
    Wed Jun 13 11:05:15 2012
    SMON: enabling tx recovery
    Wed Jun 13 11:05:15 2012
    Database Characterset is AL32UTF8
    Wed Jun 13 11:05:16 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc:
    ORA-00600: internal error code, arguments: [kokiasg1], [], [], [], [], [], [], []
    Wed Jun 13 11:05:16 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc:
    ORA-00600: internal error code, arguments: [kokiasg1], [], [], [], [], [], [], []
    Error 600 happened during db open, shutting down database
    USER: terminating instance due to error 600
    Instance terminated by USER, pid = 9174
    ORA-1092 signalled during: alter database open...
    Wed Jun 13 11:06:16 2012
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 eth0 172.16.0.0 configured from OCR for use as a cluster interconnect
    Interface type 1 bond0 132.147.160.0 configured from OCR for use as a public interface
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.3.0.
    System parameters with non-default values:
    processes = 300
    sessions = 335
    sga_max_size = 524288000
    __shared_pool_size = 314572800
    __large_pool_size = 4194304
    __java_pool_size = 8388608
    __streams_pool_size = 8388608
    spfile = +DATA/osista/spfileosista.ora
    nls_language = FRENCH
    nls_territory = FRANCE
    nls_length_semantics = CHAR
    sga_target = 524288000
    control_files = DATA/osista/controlfile/control01.ctl, DATA/osista/controlfile/control02.ctl
    db_block_size = 8192
    __db_cache_size = 180355072
    compatible = 10.2.0.3.0
    log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
    db_file_multiblock_read_count= 16
    cluster_database = TRUE
    cluster_database_instances= 2
    db_create_file_dest = +DATA
    db_recovery_file_dest = +FLASH
    db_recovery_file_dest_size= 68543315968
    thread = 2
    instance_number = 2
    undo_management = AUTO
    undo_tablespace = UNDOTBS2
    undo_retention = 29880
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
    local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
    remote_listener = LISTENERS_OSISTA
    job_queue_processes = 10
    background_dump_dest = /oracle/product/admin/OSISTA/bdump
    user_dump_dest = /oracle/product/admin/OSISTA/udump
    core_dump_dest = /oracle/product/admin/OSISTA/cdump
    audit_file_dest = /oracle/product/admin/OSISTA/adump
    db_name = OSISTA
    open_cursors = 300
    pga_aggregate_target = 104857600
    aq_tm_processes = 1
    Cluster communication is configured to use the following interface(s) for this instance
    172.16.0.2
    Wed Jun 13 11:06:16 2012
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    PMON started with pid=2, OS id=18682
    DIAG started with pid=3, OS id=18684
    PSP0 started with pid=4, OS id=18695
    LMON started with pid=5, OS id=18704
    LMD0 started with pid=6, OS id=18721
    LMS0 started with pid=7, OS id=18735
    LMS1 started with pid=8, OS id=18753
    MMAN started with pid=9, OS id=18767
    DBW0 started with pid=10, OS id=18788
    LGWR started with pid=11, OS id=18796
    CKPT started with pid=12, OS id=18799
    SMON started with pid=13, OS id=18801
    RECO started with pid=14, OS id=18803
    CJQ0 started with pid=15, OS id=18805
    MMON started with pid=16, OS id=18807
    Wed Jun 13 11:06:17 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=17, OS id=18809
    Wed Jun 13 11:06:17 2012
    starting up 1 shared server(s) ...
    Wed Jun 13 11:06:17 2012
    lmon registered with NM - instance id 2 (internal mem no 1)
    Wed Jun 13 11:06:17 2012
    Reconfiguration started (old inc 0, new inc 2)
    List of nodes:
    1
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Wed Jun 13 11:06:18 2012
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Wed Jun 13 11:06:18 2012
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Wed Jun 13 11:06:18 2012
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:06:18 2012
    LMS 1: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:06:18 2012
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration complete
    LCK0 started with pid=20, OS id=18816
    Wed Jun 13 11:06:18 2012
    ALTER DATABASE MOUNT
    Wed Jun 13 11:06:18 2012
    This instance was first to mount
    Wed Jun 13 11:06:18 2012
    Reconfiguration started (old inc 2, new inc 4)
    List of nodes:
    0 1
    Wed Jun 13 11:06:18 2012
    Starting background process ASMB
    Wed Jun 13 11:06:18 2012
    Global Resource Directory frozen
    Communication channels reestablished
    ASMB started with pid=22, OS id=18913
    Starting background process RBAL
    * domain 0 valid = 0 according to instance 0
    Wed Jun 13 11:06:18 2012
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Wed Jun 13 11:06:18 2012
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Wed Jun 13 11:06:18 2012
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Wed Jun 13 11:06:18 2012
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:06:18 2012
    LMS 1: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:06:18 2012
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    RBAL started with pid=23, OS id=18917
    Reconfiguration complete
    Wed Jun 13 11:06:22 2012
    SUCCESS: diskgroup DATA was mounted
    Wed Jun 13 11:06:26 2012
    Setting recovery target incarnation to 1
    Wed Jun 13 11:06:26 2012
    Successful mount of redo thread 2, with mount id 3005703530
    Wed Jun 13 11:06:26 2012
    Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
    Completed: ALTER DATABASE MOUNT
    Wed Jun 13 11:06:27 2012
    ALTER DATABASE OPEN
    This instance was first to open
    Wed Jun 13 11:06:27 2012
    Beginning crash recovery of 1 threads
    parallel recovery started with 2 processes
    Wed Jun 13 11:06:27 2012
    Started redo scan
    Wed Jun 13 11:06:27 2012
    Completed redo scan
    61 redo blocks read, 4 data blocks need recovery
    Wed Jun 13 11:06:28 2012
    Started redo application at
    Thread 2: logseq 7176, block 3
    Wed Jun 13 11:06:28 2012
    Recovery of Online Redo Log: Thread 2 Group 4 Seq 7176 Reading mem 0
    Mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
    Wed Jun 13 11:06:28 2012
    Completed redo application
    Wed Jun 13 11:06:28 2012
    Completed crash recovery at
    Thread 2: logseq 7176, block 64, scn 506138248
    4 data blocks read, 4 data blocks written, 61 redo blocks read
    Picked broadcast on commit scheme to generate SCNs
    Wed Jun 13 11:06:28 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=28, OS id=19692
    Wed Jun 13 11:06:28 2012
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARC1 started with pid=29, OS id=19695
    Wed Jun 13 11:06:28 2012
    Thread 2 advanced to log sequence 7177
    Thread 2 opened at log sequence 7177
    Current log# 3 seq# 7177 mem# 0: +DATA/osista/onlinelog/group_3.291.742134597
    Successful open of redo thread 2
    Wed Jun 13 11:06:28 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Wed Jun 13 11:06:28 2012
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    Wed Jun 13 11:06:28 2012
    ARC1: Becoming the heartbeat ARCH
    Wed Jun 13 11:06:28 2012
    SMON: enabling cache recovery
    Wed Jun 13 11:06:28 2012
    db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    SUCCESS: diskgroup FLASH was mounted
    SUCCESS: diskgroup FLASH was dismounted
    Wed Jun 13 11:06:31 2012
    Successfully onlined Undo Tablespace 20.
    Wed Jun 13 11:06:31 2012
    SMON: enabling tx recovery
    Wed Jun 13 11:06:31 2012
    Database Characterset is AL32UTF8
    Wed Jun 13 11:06:31 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_19596.trc:
    ORA-00600: internal error code, arguments: [kokiasg1], [], [], [], [], [], [], []
    Wed Jun 13 11:06:32 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_19596.trc:
    ORA-00600: internal error code, arguments: [kokiasg1], [], [], [], [], [], [], []
    Error 600 happened during db open, shutting down database
    USER: terminating instance due to error 600
    Instance terminated by USER, pid = 19596
    ORA-1092 signalled during: ALTER DATABASE OPEN...
    Wed Jun 13 11:11:35 2012
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Interface type 1 eth0 172.16.0.0 configured from OCR for use as a cluster interconnect
    Interface type 1 bond0 132.147.160.0 configured from OCR for use as a public interface
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.3.0.
    System parameters with non-default values:
    processes = 300
    sessions = 335
    sga_max_size = 524288000
    __shared_pool_size = 318767104
    __large_pool_size = 4194304
    __java_pool_size = 8388608
    __streams_pool_size = 8388608
    spfile = +DATA/osista/spfileosista.ora
    nls_language = FRENCH
    nls_territory = FRANCE
    nls_length_semantics = CHAR
    sga_target = 524288000
    control_files = DATA/osista/controlfile/control01.ctl, DATA/osista/controlfile/control02.ctl
    db_block_size = 8192
    __db_cache_size = 176160768
    compatible = 10.2.0.3.0
    log_archive_dest_1 = LOCATION=USE_DB_RECOVERY_FILE_DEST
    db_file_multiblock_read_count= 16
    cluster_database = TRUE
    cluster_database_instances= 2
    db_create_file_dest = +DATA
    db_recovery_file_dest = +FLASH
    db_recovery_file_dest_size= 68543315968
    thread = 2
    instance_number = 2
    undo_management = AUTO
    undo_tablespace = UNDOTBS2
    undo_retention = 29880
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=OSISTAXDB)
    local_listener = (address=(protocol=tcp)(port=1521)(host=132.147.160.243))
    remote_listener = LISTENERS_OSISTA
    job_queue_processes = 10
    background_dump_dest = /oracle/product/admin/OSISTA/bdump
    user_dump_dest = /oracle/product/admin/OSISTA/udump
    core_dump_dest = /oracle/product/admin/OSISTA/cdump
    audit_file_dest = /oracle/product/admin/OSISTA/adump
    db_name = OSISTA
    open_cursors = 300
    pga_aggregate_target = 104857600
    aq_tm_processes = 1
    Cluster communication is configured to use the following interface(s) for this instance
    172.16.0.2
    Wed Jun 13 11:11:35 2012
    cluster interconnect IPC version:Oracle UDP/IP (generic)
    IPC Vendor 1 proto 2
    PMON started with pid=2, OS id=16101
    DIAG started with pid=3, OS id=16103
    PSP0 started with pid=4, OS id=16105
    LMON started with pid=5, OS id=16107
    LMD0 started with pid=6, OS id=16110
    LMS0 started with pid=7, OS id=16112
    LMS1 started with pid=8, OS id=16116
    MMAN started with pid=9, OS id=16120
    DBW0 started with pid=10, OS id=16132
    LGWR started with pid=11, OS id=16148
    CKPT started with pid=12, OS id=16169
    SMON started with pid=13, OS id=16185
    RECO started with pid=14, OS id=16203
    CJQ0 started with pid=15, OS id=16219
    MMON started with pid=16, OS id=16227
    Wed Jun 13 11:11:36 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=17, OS id=16229
    Wed Jun 13 11:11:36 2012
    starting up 1 shared server(s) ...
    Wed Jun 13 11:11:36 2012
    lmon registered with NM - instance id 2 (internal mem no 1)
    Wed Jun 13 11:11:36 2012
    Reconfiguration started (old inc 0, new inc 2)
    List of nodes:
    1
    Global Resource Directory frozen
    * allocate domain 0, invalid = TRUE
    Communication channels reestablished
    Master broadcasted resource hash value bitmaps
    Non-local Process blocks cleaned out
    Wed Jun 13 11:11:36 2012
    LMS 0: 0 GCS shadows cancelled, 0 closed
    Wed Jun 13 11:11:36 2012
    LMS 1: 0 GCS shadows cancelled, 0 closed
    Set master node info
    Submitted all remote-enqueue requests
    Dwn-cvts replayed, VALBLKs dubious
    All grantable enqueues granted
    Post SMON to start 1st pass IR
    Wed Jun 13 11:11:36 2012
    LMS 1: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:11:36 2012
    LMS 0: 0 GCS shadows traversed, 0 replayed
    Wed Jun 13 11:11:36 2012
    Submitted all GCS remote-cache requests
    Fix write in gcs resources
    Reconfiguration complete
    LCK0 started with pid=20, OS id=16235
    Wed Jun 13 11:11:37 2012
    ALTER DATABASE MOUNT
    Wed Jun 13 11:11:37 2012
    This instance was first to mount
    Wed Jun 13 11:11:37 2012
    Starting background process ASMB
    ASMB started with pid=22, OS id=16343
    Starting background process RBAL
    RBAL started with pid=23, OS id=16347
    Wed Jun 13 11:11:44 2012
    SUCCESS: diskgroup DATA was mounted
    Wed Jun 13 11:11:49 2012
    Setting recovery target incarnation to 1
    Wed Jun 13 11:11:49 2012
    Successful mount of redo thread 2, with mount id 3005745065
    Wed Jun 13 11:11:49 2012
    Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
    Completed: ALTER DATABASE MOUNT
    Wed Jun 13 11:22:25 2012
    alter database open
    This instance was first to open
    Wed Jun 13 11:22:26 2012
    Beginning crash recovery of 1 threads
    parallel recovery started with 2 processes
    Wed Jun 13 11:22:26 2012
    Started redo scan
    Wed Jun 13 11:22:26 2012
    Completed redo scan
    61 redo blocks read, 4 data blocks need recovery
    Wed Jun 13 11:22:26 2012
    Started redo application at
    Thread 1: logseq 7927, block 3
    Wed Jun 13 11:22:26 2012
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 7927 Reading mem 0
    Mem# 0: +DATA/osista/onlinelog/group_1.283.742132543
    Wed Jun 13 11:22:26 2012
    Completed redo application
    Wed Jun 13 11:22:26 2012
    Completed crash recovery at
    Thread 1: logseq 7927, block 64, scn 506178382
    4 data blocks read, 4 data blocks written, 61 redo blocks read
    Switch log for thread 1 to sequence 7928
    Picked broadcast on commit scheme to generate SCNs
    Wed Jun 13 11:22:27 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=31, OS id=13010
    Wed Jun 13 11:22:27 2012
    ARC0: Archival started
    ARC1: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARC1 started with pid=32, OS id=13033
    Wed Jun 13 11:22:27 2012
    Thread 2 opened at log sequence 7178
    Current log# 4 seq# 7178 mem# 0: +DATA/osista/onlinelog/group_4.289.742134597
    Successful open of redo thread 2
    Wed Jun 13 11:22:27 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Wed Jun 13 11:22:27 2012
    ARC0: Becoming the 'no FAL' ARCH
    ARC0: Becoming the 'no SRL' ARCH
    Wed Jun 13 11:22:27 2012
    ARC1: Becoming the heartbeat ARCH
    Wed Jun 13 11:22:27 2012
    SMON: enabling cache recovery
    Wed Jun 13 11:22:30 2012
    db_recovery_file_dest_size of 65368 MB is 0.61% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    SUCCESS: diskgroup FLASH was mounted
    SUCCESS: diskgroup FLASH was dismounted
    Wed Jun 13 11:22:31 2012
    Successfully onlined Undo Tablespace 20.
    Wed Jun 13 11:22:31 2012
    SMON: enabling tx recovery
    Wed Jun 13 11:22:32 2012
    Database Characterset is AL32UTF8
    Wed Jun 13 11:22:32 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_11751.trc:
    ORA-00600: internal error code, arguments: [kokiasg1], [], [], [], [], [], [], []
    Wed Jun 13 11:22:33 2012
    Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_11751.trc:
    ORA-00600: internal error code, arguments: [kokiasg1], [], [], [], [], [], [], []
    Error 600 happened during db open, shutting down database
    USER: terminating instance due to error 600
    Instance terminated by USER, pid = 11751
    ORA-1092 signalled during: alter database open...
    regards,

    Hi;
    "Errors in file /oracle/product/admin/OSISTA/udump/osista2_ora_9174.trc" - did you check the trace file?
    "ORA-00600: internal error code, arguments: [kokiasg1], [], [], [], [], [], [], []" - you are getting an Oracle internal error (ORA-600), which means you may need to work with the Oracle support team. Please see the note below; if it does not help, I suggest you log an SR:
    Troubleshoot an ORA-600 or ORA-7445 Error Using the Error Lookup Tool [ID 153788.1]
    For future RAC issues, please use Forum Home » High Availability » RAC, ASM & Clusterware Installation, which is the dedicated RAC forum.
    Regards
    Helios
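
    For completeness, the fix the original poster quotes from note 1026653.6 amounts to recreating the SYS.AUDSES$ sequence once the instance can be opened; the attribute values below are an assumption for illustration, and the DDL in the MOS note should be followed verbatim:
    SQL> -- connect / as sysdba; take the exact attributes from the note
    SQL> CREATE SEQUENCE sys.audses$ START WITH 1 INCREMENT BY 1 CACHE 20;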

  • How to install Oracle 11.2.0.4.0 RAC Database on Grid Infrastructure 12.1.0.2.0 (Windows 2012 R2)

    Hi all.
    This is the scenario:
    I have a 2-node Microsoft Windows 2012 R2 x86_64 cluster; the nodes are part of a domain.
    I installed Oracle 12cR1 (12.1.0.2.0) as follows:
    The installation was done using a domain administrator named "installoracle" that has been EXPLICITLY promoted to local administrator on both nodes.
    There is one domain user named "oracle12c" with no administrator privileges either in the domain or on the nodes.
    The installation, after a lot of customization in the OS of the nodes, went ok, perfect indeed; not even a warning during runInstaller execution.
    All services and resources are up & running. I have rebooted the nodes and everything comes back up ok. In Windows services I can see some services started with the "oracle12c" user, and all the Windows groups (DBA, SYSKM, etc.) have been created.
    Now I have to install an Oracle RAC database in version 11gR2 (11.2.0.4.0) to comply with application requirements.
    So I execute runInstaller (setup.exe) logged in as administrator via the domain administrator "installoracle" (the one that was able to do the smooth 12c GI installation) and the GUI comes up without problems, but at the 4th screen, the one where you choose what kind of installation you want (Single, RAC or RAC One Node), when I select the RAC option the GUI returns the error: CRS is not running on the local node...
    Finally, in order to deliver some 11.2.0.4.0 database for my customer, I tried a single-node SW installation, which I did using "installoracle". The SW has been installed on one of the nodes. Then I created a database, stored the datafiles in ASM (this ASM is part of the 12c Clusterware), and also registered the 11gR2 single-instance database with the Grid Infrastructure listener.
    So, what's next? Why this behavior? I did similar installations on Linux/Solaris with no issue (well, as everyone can imagine, there are always some issues, but once solved you get GI in 12c and databases in the release that you want: 12c, 11g, 10g...).
    Any clue?
    Thanks in advance!!
    Best Regards!!

    To go back to the old version (without changing the compatible parameter), you can perform a downgrade.
    Instructions are available at:
    How to Downgrade from Database 11.2 to Previous Release (Includes 11.2.0.4 - 11.2.0.1) (Doc ID 883335.1)
    In this case you will keep all the data files, including the new ones.
    But remember the database may not be in the same state as before the upgrade; it may have invalid objects of the higher version after the downgrade.
    Downgrade is applicable only to successfully upgraded databases. If the upgrade itself ran into issues, do not attempt a downgrade.
    Your second approach is not correct. Note that each datafile header carries the DB version. We cannot open the database in normal mode when the header version and the binary version are different.

  • Uninstall Oracle 11gr2 RAC database in grid infrastructure

    Hi all,
    After several attempts to install my Oracle RAC database with Grid Infrastructure, I now want to do a fresh installation, as I have attempted it 3 times and now have the whole procedure for installing the database and RAC.
    Actually I have installed it correctly, but now I want to clean up my servers, remove all Oracle installation directories, and do a fresh installation.
    My question is: what is the procedure to uninstall an Oracle RAC database and Clusterware with Grid Infrastructure and clean up the Oracle base installation?
    The architecture is:
    GRID and clusterware: Oracle grid 11gR2
    Database: Oracle database 11gR2
    Database and grid storage: ASM
    OS: linux centos 6
    Thank you.
    Raluce.

    The deinstallation of Oracle GI can be a not-so-easy thing to do, because it contains many components one should be aware of. A proper deinstall is important because it will save you from many issues with the next install on these servers.
    In general we need to be sure that:
    1. all software is stopped properly
    2. the home is removed from oraInventory
    3. the binaries are removed
    4. /etc/oracle is cleared
    5. the OCR and voting devices are cleared using dd (see the sketch after this reply)
    6. /etc/oratab is updated
    7. .profile is updated
    8. init.d files in /etc are cleared
    Usually it is recommended to use the deconfigure scripts; if they fail for some reason, the manual procedure should be followed:
    How to Deconfigure/Reconfigure(Rebuild OCR) or Deinstall Grid Infrastructure [ID 1377349.1]
    How to Deinstall Oracle Clusterware Home Manually [ID 1364419.1]
    As a general recommendation, it is a good idea to save your CRS configuration for future reference.
    Regards
    Ed Rudans
    http://erudans.blogspot.com
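
    A sketch of step 5 from the list above, clearing former OCR/voting devices with dd; the device name is a hypothetical example, and since dd is destructive the device must be triple-checked first:
    # DESTRUCTIVE - wipe the header of a former OCR or voting device
    $ dd if=/dev/zero of=/dev/mapper/ocr_vote1 bs=1M count=100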

  • How do I find a RAC database in Oracle SR support

    Hi Friends,
    I could not find a product name for Oracle RAC/database in the product box when trying to open an SR.
    I need to select a name from the dropdown list; however, I cannot find a name for Oracle RAC, Real Application Clusters, or RAC database.
    Any suggestion?
    thanks
    Jim

    Oracle Server - Enterprise Edition
    Oracle Server - Personal Edition
    Oracle Server - Standard Edition
    It's there. Look harder!
    RAC is selected in the Problem drop-down.

  • Oracle RAC 11g query

    Hi All,
    I am new to Oracle RAC 11g and facing an issue; I request everyone's help.
    1) What does the instance ID that we get from gv$session mean in an Oracle RAC setup?
    2) What is the difference between session failure and node failure?
    3) If a session fails, should the client connect to node 1 (to which the session was previously connected) or to node 2?
    Thanks

    "1) What does the instance ID that we get from gv$session mean in an Oracle RAC setup?" The instance ID is unique for each instance in the clustered database.
    "2) What is the difference between session failure and node failure?" Session failure - when the connection to an instance is lost, SESSION failover results only in the establishment of a new connection to another Oracle RAC node.
    Node failure/eviction - when, say, the hardware fails, the Cluster Manager reports the change in the cluster's membership to the Global Resource Directory (GRD),
    the resource directory which consists of both the Global Enqueue Service and the Global Cache Service.
    "3) If a session fails, should the client connect to node 1 (to which the session was previously connected) or to node 2?" You have to configure this in a SERVICE; see the sketch below.
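
    To make 1) concrete: INST_ID in gv$session is the number of the instance that owns the session, and a TAF-enabled service covers 3); the database, instance and service names below are hypothetical:
    SQL> -- which instance is each user session on?
    SQL> SELECT inst_id, sid, username FROM gv$session WHERE username IS NOT NULL;
    $ # a service with basic TAF, so failed sessions reconnect to a surviving instance
    $ srvctl add service -d MYDB -s OLTP -r MYDB1,MYDB2 -P BASIC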
