11.1.0.6 RAC
Hi, we are using 11.1.0.6.0 Clusterware and RDBMS.
For disaster recovery we use Metro Mirror to copy all the binaries, but the mirroring has problems copying the ASM file system. Has anyone seen a similar scenario? Any pointers would help a lot.
Once the binaries are copied from the two nodes to the other two nodes, will we be able to bring up the cluster, and after that does recreating the disk group and restoring from backup work?
Hi,
See if this can help you:
I found this in a PDF:
Scenario 4: Metro Mirror Remote Clone of an Oracle RAC 10g database
This exercise shows how to clone an Oracle RAC 10g database on ASM to a remote DS8000 and Oracle RAC cluster using Metro Mirror.
http://levipereira.wordpress.com/2011/05/13/leveraging-ds8000-series-advanced-copy-services-for-oracle-user-managed-backup-and-recovery/
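For the restore path asked about above (bring the cluster up on the copied binaries, recreate the disk group, restore from backup), a rough sketch could look like the following. The disk names, backup location, and disk group name are placeholders, not details from this thread:
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/rdsk/disk10', '/dev/rdsk/disk11';   (against the ASM instance)
RMAN> RESTORE CONTROLFILE FROM '/backup/ctl_backup.ctl';
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
Whether the copied homes start cleanly on the DR nodes depends on the Clusterware configuration (hostnames, paths, OCR/voting devices) matching; the PDF linked above covers the storage-level side.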
Regards,
Levi Pereira
Similar Messages
-
MULTIPLE USERS 10G RAC ORACLE_HOME INSTALL WITH ASM/CRS
Hi,
We need to install multiple 10g RAC databases on a two-node Sun server cluster. Below is our configuration:
1) Sun Solaris (ver 10) with Sun Cluster 3.2
2) One ASM/CRS install (by 1 OS account)
3) Four ORACLE_HOME 10g database install (by 4 different OS user accounts)
We would like to use one ASM instance for all four databases with appropriate privileges.
OS User: OS Group
======== =========
oraasm dbaasm - (ASM and CRS install owner)
ora1 dbaora1 - first db owner
ora2 dbaora2 - second db owner
ora3 dbaora3 - third db owner
ora4 dbaora4 - fourth db owner
I understand that certain privileges need to be shared between ASM/CRS and DB owners. Please let me know the steps to be followed to complete this install.
Thanks in advance.
Hi,
Please read the documentation: http://download.oracle.com/docs/html/B10766_08/intro.htm
- You can install and operate multiple Oracle homes and different versions of Oracle cluster database software on the same computer as described in the following points:
-You can install multiple Oracle Database 10g RAC homes on the same node. The multiple homes feature enables you to install one or more releases on the same machine in multiple Oracle home directories. However, each node can have only one CRS home.
-In addition, you cannot install Oracle Database 10g RAC into an existing single-instance Oracle home. If you have an Oracle home for Oracle Database 10g, then use a different Oracle home, and one that is available across the entire cluster for your new installation. Similarly, if you have an Oracle home for an earlier Oracle cluster database software release, then you must also use a different home for the new installation.
If the OUI detects an earlier version of a database, then the OUI asks you about your upgrade preferences. You have the option to upgrade one of the previous-version databases with DBUA or to create a new database using DBCA. The information collected during this dialog is passed to DBUA or DBCA after the software is installed.
- You can use the OUI to complete some of the de-install and re-install steps for Oracle Database 10g Real Application Clusters if needed.
Note:
Do not move Oracle binaries from one Oracle home to another because this causes dynamic link failures.
If you are using ASM with Oracle database instances from multiple database homes on the same node, then Oracle recommends that you run the ASM instance from an Oracle home that is distinct from the database homes. In addition, the ASM home should be installed on every cluster node. This prevents the accidental removal of ASM instances that are in use by databases from other homes during the de-installation of a database's Oracle home. -
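As a sketch of the privilege sharing asked about above (group names follow the poster's scheme; verify the exact requirements against the install guide for your release), each database owner typically needs secondary membership in the OSDBA group of the ASM home so its instances can connect to the ASM instance. On Solaris 10 this could look like:
# usermod -G dbaora1,dbaasm ora1
# usermod -G dbaora2,dbaasm ora2
# usermod -G dbaora3,dbaasm ora3
# usermod -G dbaora4,dbaasm ora4
# id ora1
Check with id that dbaasm now appears in each owner's group list before running the installers.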
Error while running runcluvfy.sh(11g RAC on CentOS 5(RHEL 5))
Oracle Version: 11G
Operating System: Centos 5 (RHEL 5) : Linux centos51-rac-1 2.6.18-128.1.6.el5 #1 SMP Wed Apr 1 09:19:18 EDT 2009 i686 i686 i386 GNU/Linux
Question (including full error messages and setup scripts where applicable):
I am attempting to install Oracle 11g in a RAC configuration with CentOS 5 (Red Hat 5) as the operating system. I get the following error:
ERROR : Cannot Identify the operating system. Ensure that the correct software is being executed for this operating system
Verification cannot complete
I get this error message when I run runcluvfy.sh to verify that my configuration is clusterable. I don't know why.
I edited /etc/redhat-release and entered "Red Hat Enterprise Linux AS release 4 (Nahant Update 7)" to try to fool the installer into thinking it is Red Hat 4, but it still shows the same message.
Anyone knows how to fix this ?
Please help me.
http://www.idevelopment.info/data/Oracle/DBA_tips/Linux/LINUX_20.shtml
runcluvfy.sh will not work on CentOS because the Cluster Verification Utility checks the operating system version via the redhat-release package, and CentOS ships its own centos-release package instead, so you must build and install the redhat-release package:
Get rpm-build to be able to build rpm’s:
[root@centos5 ~]# yum install rpm-build
Get source rpm of redhat-release
[root@centos5 ~]# wget ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/redhat-release-5Server-5.1.0.2.src.rpm
Build package:
[root@centos5 ~]# rpmbuild --rebuild redhat-release-5Server-5.1.0.2.src.rpm
Install newly generated rpm:
[root@centos5 ~]# rpm -Uvh --force /usr/src/redhat/RPMS/i386/redhat-release-5Server-5.1.0.2.i386.rpm -
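After installing the rebuilt package you can confirm what cluvfy will now see; assuming the 5Server source RPM above, the release file should read something like:
[root@centos5 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.1 (Tikanga)
Re-run runcluvfy.sh afterwards.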
Error in Creation of Dataguard for RAC
My pfile of RAC looks like:
RACDB2.__large_pool_size=4194304
RACDB1.__large_pool_size=4194304
RACDB2.__shared_pool_size=92274688
RACDB1.__shared_pool_size=92274688
RACDB2.__streams_pool_size=0
RACDB1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/RACDB/adump'
*.background_dump_dest='/u01/app/oracle/admin/RACDB/bdump'
*.cluster_database_instances=2
*.cluster_database=true
*.compatible='10.2.0.1.0'
*.control_files='+DATA/racdb/controlfile/current.260.627905745','+FLASH/racdb/controlfile/current.256.627905753'
*.core_dump_dest='/u01/app/oracle/admin/RACDB/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.db_name='RACDB'
*.db_recovery_file_dest='+FLASH'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDBXDB)'
*.fal_client='RACDB'
*.fal_server='RACDG'
RACDB1.instance_number=1
RACDB2.instance_number=2
*.job_queue_processes=10
*.log_archive_config='DG_CONFIG=(RACDB,RACDG)'
*.log_archive_dest_1='LOCATION=+FLASH/RACDB/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDB'
*.log_archive_dest_2='SERVICE=RACDG VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDG'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='DEFER'
*.log_archive_format='%t_%s_%r.arc'
*.log_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
*.remote_listener='LISTENERS_RACDB'
*.remote_login_passwordfile='exclusive'
*.service_names='RACDB'
*.sga_target=167772160
*.standby_file_management='AUTO'
RACDB2.thread=2
RACDB1.thread=1
*.undo_management='AUTO'
RACDB2.undo_tablespace='UNDOTBS2'
RACDB1.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/app/oracle/admin/RACDB/udump'
My pfile of Dataguard Instance in nomount state looks like:
RACDG.__db_cache_size=58720256
RACDG.__java_pool_size=4194304
RACDG.__large_pool_size=4194304
RACDG.__shared_pool_size=96468992
RACDG.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/RACDG/adump'
*.background_dump_dest='/u01/app/oracle/admin/RACDG/bdump'
##*.cluster_database_instances=2
##*.cluster_database=true
*.compatible='10.2.0.1.0'
##*.control_files='+DATA/RACDG/controlfile/current.260.627905745','+FLASH/RACDG/controlfile/current.256.627905753'
*.core_dump_dest='/u01/app/oracle/admin/RACDG/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATADG'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
*.db_name='RACDB'
*.db_recovery_file_dest='+FLASHDG'
*.db_recovery_file_dest_size=2147483648
*.dispatchers='(PROTOCOL=TCP) (SERVICE=RACDGXDB)'
*.FAL_CLIENT='RACDG'
*.FAL_SERVER='RACDB'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(RACDB,RACDG)'
*.log_archive_dest_1='LOCATION=+FLASHDG/RACDG/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=RACDG'
*.log_archive_dest_2='SERVICE=RACDB VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=RACDB'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.log_archive_format='%t_%s_%r.arc'
*.log_file_name_convert='+DATADG/RACDG','+DATA/RACDB'
*.open_cursors=300
*.pga_aggregate_target=16777216
*.processes=150
##*.remote_listener='LISTENERS_RACDG'
*.remote_login_passwordfile='exclusive'
SERVICE_NAMES='RACDG'
sga_target=167772160
standby_file_management='auto'
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
user_dump_dest='/u01/app/oracle/admin/RACDG/udump'
DB_UNIQUE_NAME=RACDG
and here is what I am doing on the standby location:
[oracle@dg01 ~]$ echo $ORACLE_SID
RACDG
[oracle@dg01 ~]$ rman
Recovery Manager: Release 10.2.0.1.0 - Production on Tue Jul 17 21:19:21 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect auxiliary /
connected to auxiliary database: RACDG (not mounted)
RMAN> connect target sys/xxxxxxx@RACDB
connected to target database: RACDB (DBID=625522512)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 2007-07-17 22:27:08
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
contents of Memory Script:
restore clone standby controlfile;
sql clone 'alter database mount standby database';
executing Memory Script
Starting restore at 2007-07-17 22:27:10
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl4.ctl
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/software/backup/ctl4.ctl tag=TAG20070717T201921
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:23
output filename=+DATADG/racdg/controlfile/current.275.628208075
output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
Finished restore at 2007-07-17 22:27:34
sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/17/2007 22:27:43
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database
RMAN>
Any help to clear this error will be appreciated.
Message was edited by:
Bal
Hi,
Thanks everybody for helping me on this issue.
As suggested, I took the log_file_name_convert and db_file_name_convert parameters out of my RAC primary database, but I am still getting the same error.
Any help will be appreciated.
SQL> show parameter convert
NAME TYPE VALUE
db_file_name_convert string
log_file_name_convert string
SQL>
oracle@dg01<3>:/u01/app/oracle> rman
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jul 18 17:07:49 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN> connect auxiliary /
connected to auxiliary database: RACDB (not mounted)
RMAN> connect target sys/xxx@RACDB
connected to target database: RACDB (DBID=625522512)
RMAN> duplicate target database for standby;
Starting Duplicate Db at 2007-07-18 17:10:53
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=156 devtype=DISK
contents of Memory Script:
restore clone standby controlfile;
sql clone 'alter database mount standby database';
executing Memory Script
Starting restore at 2007-07-18 17:10:54
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /software/backup/ctl5.ctr
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=/software/backup/ctl5.ctr tag=TAG20070718T170529
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:33
output filename=+DATADG/racdg/controlfile/current.275.628208075
output filename=+FLASHDG/racdg/controlfile/backup.268.628208079
Finished restore at 2007-07-18 17:11:31
sql statement: alter database mount standby database
released channel: ORA_AUX_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/18/2007 17:11:43
RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs2.265.627906771 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/example.264.627905917 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/users.259.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/sysaux.257.627905385 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/undotbs1.258.627905395 conflicts with a file used by the target database
RMAN-05001: auxiliary filename +DATA/racdb/datafile/system.256.627905375 conflicts with a file used by the target database -
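One hint worth checking (an observation on the pfiles above, not something confirmed in the thread): on the auxiliary instance, DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT pairs are interpreted as (primary pattern, standby pattern), and the standby pfile shown earlier lists them in the opposite order, so the primary's '+DATA/racdb/...' names are never converted and collide with the target's files. The standby pfile entries would then need to be:
*.db_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
*.log_file_name_convert='+DATA/RACDB','+DATADG/RACDG'
with no convert parameters needed on the primary for the duplication itself.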
How to install 11gR2 RAC on 64 bit linux OS
I am completely new to RAC and need to install and stand up RAC on a 64-bit Linux OS. I have good knowledge of installing Oracle Database Enterprise Edition 11gR2.
Can you guide me as to how to start. I am looking for leads. Probably we will have 2 nodes.
Thank you very much for helping me in advance.
If you are a My Oracle Support (Metalink) user, check out these two notes created by the Oracle RAC Assurance Team. They are excellent.
NOTE: 810394.1 RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
NOTE: 811306.1 RAC Assurance Support Team: RAC Starter Kit (Linux)
In the Linux note mentioned above there is a link to a Linux Step by Step Instruction Guide. This step by step instruction guide is the best start to finish document I've seen for how to set-up and install Oracle RAC. I believe the guide is written for installing release 11.2.0.2. -
In Oracle RAC, if a user runs a SELECT query and, while the rows are being fetched, the node serving it is evicted, how does failover to another node work internally?
The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrators Guide, the section on Transparent Application Failover.
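On the client side this relies on Transparent Application Failover being enabled in the connect descriptor; a minimal tnsnames.ora sketch (host and service names here are illustrative, not from the question):
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )
TYPE=SELECT is what lets an in-flight fetch continue after the surviving instance re-executes the cursor.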
-
RMAN, RAC, NFS, and server lock ups
Good day. My environment is:
--a 2-node RAC
--Enterprise Edition 11.2.0.3
--RHEL 5.1
The goal is to use RMAN to push backups to a shared NFS mount (on a different server). Both nodes will have access to this location (in the event one node goes down, the other can still run backups). Easy, right?
Wrong.
I've tried every NFS mount option in the book. Most work just fine, some don't. When I use the recommended NFS mount options:
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
or
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,forcedirectio,vers=3,suid
The mount works normally. I can "ls" and "mkdir" and "touch" and "vi" and "cp" files back and forth from the NFS backup location to the RAC node all day long. No problems. However, when I try to do almost anything in RMAN which requires writing to the NFS backup location such as the command "backup archive all delete input;" (or even things as simple as a Crosscheck or RMAN configuration change which writes any changes back to the autobackup ControlFile) the node locks up. There are no errors (or if there are, I don't know where to find them), even when I use RMAN log.
Just to recap: I run a Crosscheck (or any RMAN process that writes to the NFS backup location), the node will lock up, and I can let it sit for a day, inaccessible, with CRSCTL on the other node saying it's offline, and the node will never come out of a "frozen" state. It cannot be pinged or connected to.
I think I can safely rule out NFS mount options at this point.
I understand (after extensive reading of MOS docs and testing) that RAC RMAN can and does suffer from inefficient I/O when writing to an NFS mount. I don't think that's the culprit either. The autobackup ControlFile is not that big and I cannot see how running a simple Crosscheck would lock an entire node.
I am hoping someone has encountered this in the past and hopefully it's just a simple misconfiguration somewhere.
My NFS line in /etc/fstab is (these options are for supporting 11.2.0.3, 11.1.0.7, and 10.2.0.4/5 simultaneously): server.domain:/NFS_Export /backup nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0
Before you installed GI, did you by chance do a yum update? I've encountered a similar issue which ended up being due to mkinitrd creating a corrupted kernel; mkinitrd is invoked during the GI installation when the ADVM drivers are added and in my case mkinitrd created a new kernel prior to the new kernel being installed. Second to that, make sure you have the matching kernel headers to your kernel version. If they are different then you could probably get away with just creating a new kernel with mkinitrd and relinking GI/RDBMS homes, but be prepared to wipe GI and reinstall. -
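A quick way to check the kernel/headers mismatch mentioned above (package names as on RHEL 5; adjust for your release):
[root@racnode1 ~]# uname -r
[root@racnode1 ~]# rpm -q kernel-headers kernel-devel
If the versions disagree, install the matching packages or rebuild the initrd with mkinitrd and relink the GI/RDBMS homes, as described above.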
Gns is getting failed with error CRS-2632 during RAC installation
Hello guys, I am new to Oracle RAC and I am trying to configure a two-node Oracle 11gR2 RAC setup on OEL 5.4 using GNS. Everything works great until I execute the root.sh script on the first node.
It gives me this error:
CRS-2674: Start of 'ora.gns' on 'host01' failed
CRS-2632: There are no more servers to try to place resource 'ora.gns' on that would satisfy its placement policy
start gns ... failed
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
When I check the status of the cluster resources I get this output:
[root@host01 ~]# crs_stat -t
Name Type Target State Host
ora.DATA.dg ora....up.type ONLINE ONLINE host01
ora....N1.lsnr ora....er.type OFFLINE OFFLINE
ora....N2.lsnr ora....er.type OFFLINE OFFLINE
ora....N3.lsnr ora....er.type OFFLINE OFFLINE
ora.asm ora.asm.type ONLINE ONLINE host01
ora.eons ora.eons.type ONLINE ONLINE host01
ora.gns ora.gns.type ONLINE OFFLINE
ora.gns.vip ora....ip.type ONLINE OFFLINE
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE host01
ora.host01.gsd application OFFLINE OFFLINE
ora.host01.ons application ONLINE ONLINE host01
ora.host01.vip ora....t1.type ONLINE ONLINE host01
ora....network ora....rk.type ONLINE ONLINE host01
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE host01
ora....ry.acfs ora....fs.type OFFLINE OFFLINE
ora.scan1.vip ora....ip.type OFFLINE OFFLINE
ora.scan2.vip ora....ip.type OFFLINE OFFLINE
ora.scan3.vip ora....ip.type OFFLINE OFFLINE
These are my GNS configuration file entries
vi /var/named/chroot/etc/named.conf
options {
listen-on port 53 { 192.9.201.59; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; };
allow-query-cache { any; };
};
zone "." IN {
type hint;
file "named.ca";
};
zone "localdomain" IN {
type master;
file "localdomain.zone";
allow-update { none; };
};
zone "localhost" IN {
type master;
file "localhost.zone";
allow-update { none; };
};
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
allow-update { none; };
};
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
type master;
file "named.ip6.local";
allow-update { none; };
};
zone "255.in-addr.arpa" IN {
type master;
file "named.broadcast";
allow-update { none; };
};
zone "0.in-addr.arpa" IN {
type master;
file "named.zero";
allow-update { none; };
};
zone "example.com" IN {
type master;
file "forward.zone";
allow-transfer { 192.9.201.180; };
};
zone "201.9.192.in-addr.arpa" IN {
type master;
file "reverse.zone";
};
zone "0.0.10.in-addr.arpa" IN {
type master;
file "reverse1.zone";
};
vi /var/named/chroot/var/named/forward.zone
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS server1
IN A 192.9.201.59
server1 IN A 192.9.201.59
host01 IN A 192.9.201.181
host02 IN A 192.9.201.182
host03 IN A 192.9.201.183
openfiler IN A 192.9.201.184
host01-priv IN A 10.0.0.2
host02-priv IN A 10.0.0.3
host03-priv IN A 10.0.0.4
vi /var/named/chroot/var/named/reverse.zone
$ORIGIN cluster01.example.com.
@ IN NS cluster01-gns.cluster01.example.com.
cluster01-gns IN A 192.9.201.180
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS server1.example.com.
59 IN PTR server1.example.com.
184 IN PTR openfiler.example.com.
181 IN PTR host01.example.com.
182 IN PTR host02.example.com.
183 IN PTR host03.example.com.
vi /var/named/chroot/var/named/reverse1.zone
$TTL 86400
@ IN SOA server1.example.com. root.server1.example.com. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
IN NS server1.example.com.
2 IN PTR host01-priv.example.com.
3 IN PTR host02-priv.example.com.
4 IN PTR host03-priv.example.com.
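Before re-running root.sh it may be worth confirming that the delegation actually resolves from a cluster node; a sketch of the checks, using the addresses from the zone files above:
[root@host01 ~]# dig @192.9.201.59 -t NS cluster01.example.com
[root@host01 ~]# dig @192.9.201.180 cluster01.example.com
The first should show the NS referral to cluster01-gns; the second will only answer once the GNS daemon itself is up on the GNS VIP.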
Please suggest me what i am doing wrong
Edited by: 1001408 on Apr 21, 2013 9:17 AM
Edited by: 1001408 on Apr 21, 2013 9:22 AM
Hello guys, I finally found the mistake I was making:
while configuring the public IP for the nodes I was not setting a default gateway. I assumed that since all these machines are on the same network with the same IP range they would not need a gateway, but my assumption did not match what Oracle expects. Finally happy to see 11gR2 with GNS running on my personal laptop.
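For reference, the persistent fix on OEL 5 is a one-line change plus a network restart (the gateway address is illustrative):
[root@host01 ~]# echo "GATEWAY=192.9.201.1" >> /etc/sysconfig/network
[root@host01 ~]# service network restart
[root@host01 ~]# route -n
route -n should then show a 0.0.0.0 destination via the gateway. The VIP agent uses the gateway as a ping target for its public-network check, which would explain why ora.gns.vip could not start without one.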
cheers
Rahul -
Oracle 11gR2 RAC cluster issue
We have a 2-node Oracle 11gR2 RAC on an HP-UX 11.31 environment. It was running for the last 2 months without any issue.
We hit a network configuration issue and node 1 got rebooted today. After the reboot the cluster did not start on node 1; the database is running on node 2.
grid@hublhp4:/app/oracle/grid/product/11.2.0.1/log/hublhp4/crsd$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager
grid@hublhp4:/app/oracle/grid/product/11.2.0.1/log/hublhp4/crsd$ crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
grid@hublhp4:/app/oracle/grid/product/11.2.0.1/log/hublhp4/crsd$ ocrcheck
PROT-602: Failed to retrieve data from the cluster registry
PROC-26: Error while accessing the physical storage ASM error [SLOS: cat=8, opn=kgfolclcpi1, dep=301, loc=kgfokge
AMDU-00301: Unable to open file tmp-AMIPOCR01.ocr
AMDU-00204: Disk N0002 is in currently mounted diskgroup AMIPOCR01
AMDU-00201: Disk N0002: '/dev/rdisk/ora_OCR
] [8]
grid@hublhp4:/app/oracle/grid/product/11.2.0.1/log/hublhp4/crsd$ olsnodes -n
hublhp4 1
hublhp5 2
any idea please.
Edited by: ManoRangasamy on Jul 5, 2011 6:38 PM
Hi,
Please post the ASM alert log, crsd.log, and ocssd.log from node 1.
It might be that node 1 can't see the ASM disks, or that their permissions accidentally changed when the node rebooted.
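A quick check along those lines, using the device name from the ocrcheck error above (the expected owner and group depend on how GI was installed, e.g. grid:asmadmin):
grid@hublhp4:~$ ls -l /dev/rdisk/ | grep ora_
If the raw devices reverted to root ownership on reboot, restore the ownership and permissions (and make them persistent), then try crsctl start crs again.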
Cheers -
Started an 11.2.0.2.0 Grid Infrastructure installation for a 2-node RAC on HP-UX 11.31 Itanium 64.
Copying the software to the remote node and linking the libraries completed successfully (up to 76%), but we got an issue while executing root.sh on node 1:
sph1erp:/oracle/11.2.0/grid #sh root.sh
Running Oracle 11g root script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/11.2.0/grid
Enter the full pathname of the local bin directory: [usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'sys'..
Operation successful.
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'sph1erp'
CRS-2676: Start of 'ora.mdnsd' on 'sph1erp' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'sph1erp'
CRS-2676: Start of 'ora.gpnpd' on 'sph1erp' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'sph1erp'
CRS-2672: Attempting to start 'ora.gipcd' on 'sph1erp'
CRS-2676: Start of 'ora.gipcd' on 'sph1erp' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'sph1erp' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'sph1erp'
CRS-2672: Attempting to start 'ora.diskmon' on 'sph1erp'
CRS-2676: Start of 'ora.diskmon' on 'sph1erp' succeeded
CRS-2676: Start of 'ora.cssd' on 'sph1erp' succeeded
ASM created and started successfully.
Disk Group OCRVOTE created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'sys'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk ab847ed2b4f04f2dbfb875226d2bb194.
Successful addition of voting disk 85c05a5b30384f8dbff48cc069de7a7c.
Successful addition of voting disk 649196fbdd614f9cbf26a9a0e6670a6e.
Successful addition of voting disk 8815dfcee2e64f64bf00b9c76626ab41.
Successful addition of voting disk 8ce55fe5534f4f77bfa9f54187592707.
Successfully replaced voting disk group with +OCRVOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
1. ONLINE ab847ed2b4f04f2dbfb875226d2bb194 (/dev/oracle/ocrvote1) [OCRVOTE]
2. ONLINE 85c05a5b30384f8dbff48cc069de7a7c (/dev/oracle/ocrvote2) [OCRVOTE]
3. ONLINE 649196fbdd614f9cbf26a9a0e6670a6e (/dev/oracle/ocrvote3) [OCRVOTE]
4. ONLINE 8815dfcee2e64f64bf00b9c76626ab41 (/dev/oracle/ocrvote4) [OCRVOTE]
5. ONLINE 8ce55fe5534f4f77bfa9f54187592707 (/dev/oracle/ocrvote5) [OCRVOTE]
Located 5 voting disk(s).
Start of resource "ora.cluster_interconnect.haip" failed
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'sph1erp'
CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the following error:
Start action for HAIP aborted
CRS-2674: Start of 'ora.cluster_interconnect.haip' on 'sph1erp' failed
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'sph1erp'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'sph1erp' succeeded
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Clusterware stack
Failed to start High Availability IP at /oracle/11.2.0/grid/crs/install/crsconfig_lib.pm line 1046.
/oracle/11.2.0/grid/perl/bin/perl -I/oracle/11.2.0/grid/perl/lib -I/oracle/11.2.0/grid/crs/install /oracle/11.2.0/grid/crs/install/rootcrs.pl execution failed
sph1erp:/oracle/11.2.0/grid #
Last few lines from the CRS log on node 1, where the error appeared:
[ctssd(6467)]CRS-2401:The Cluster Time Synchronization Service started on host sph1erp.
2011-02-25 23:04:16.491
[oracle/11.2.0/grid/bin/orarootagent.bin(6423)]CRS-5818:Aborted command 'start for resource: ora.cluster_interconnect.haip 1 1' for resource 'ora.cluster_interconnect.haip'. Details at (:CRSAGF00113:) {0:0:178} in /oracle/11.2.0/grid/log/sph1erp/agent/ohasd/orarootagent_root/orarootagent_root.log.
2011-02-25 23:04:20.521
[ohasd(5513)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.cluster_interconnect.haip'. Details at (:CRSPE00111:) {0:0:178} in /oracle/11.2.0/grid/log/sph1erp/ohasd/ohasd.log.
Few lines from /oracle/11.2.0/grid/log/sph1erp/agent/ohasd/orarootagent_root/orarootagent_root.log:
=====================================================================================================
2011-02-25 23:04:16.823: [ USRTHRD][16] {0:0:178} Starting Probe for ip 169.254.74.54
2011-02-25 23:04:16.823: [ USRTHRD][16] {0:0:178} Transitioning to Probe State
2011-02-25 23:04:17.177: [ USRTHRD][15] {0:0:178} [NetHAMain] thread stopping
2011-02-25 23:04:17.177: [ USRTHRD][15] {0:0:178} Thread:[NetHAMain]isRunning is reset to false here
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} Thread:[NetHAMain]stop }
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} thread cleaning up
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} pausing thread
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} posting thread
2011-02-25 23:04:17.178: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop {
2011-02-25 23:04:17.645: [ USRTHRD][16] {0:0:178} [NetHAWork] thread stopping
2011-02-25 23:04:17.645: [ USRTHRD][16] {0:0:178} Thread:[NetHAWork]isRunning is reset to false here
2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop }
2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop {
2011-02-25 23:04:17.645: [ USRTHRD][12] {0:0:178} Thread:[NetHAWork]stop }
2011-02-25 23:04:17.891: [ora.cluster_interconnect.haip][12] {0:0:178} [start] Start of HAIP aborted
2011-02-25 23:04:17.892: [ AGENT][12] {0:0:178} UserErrorException: Locale is
2011-02-25 23:04:17.893: [ora.cluster_interconnect.haip][12] {0:0:178} [start] clsnUtils::error Exception type=2 string=
CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the following error:
Start action for HAIP aborted
2011-02-25 23:04:17.893: [ AGFW][12] {0:0:178} sending status msg [CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the foll
owing error:
Start action for HAIP aborted
] for start for resource: ora.cluster_interconnect.haip 1 1
2011-02-25 23:04:17.893: [ora.cluster_interconnect.haip][12] {0:0:178} [start] clsn_agent::start }
2011-02-25 23:04:17.894: [ AGFW][10] {0:0:178} Agent sending reply for: RESOURCE_START[ora.cluster_interconnect.haip 1 1] ID 4098:661
2011-02-25 23:04:18.552: [ora.diskmon][12] {0:0:154} [check] DiskmonAgent::check {
2011-02-25 23:04:18.552: [ora.diskmon][12] {0:0:154} [check] DiskmonAgent::check } - 0
2011-02-25 23:04:19.573: [ AGFW][10] {0:0:154} Agent received the message: AGENT_HB[Engine] ID 12293:669
2011-02-25 23:04:20.510: [ora.cluster_interconnect.haip][18] {0:0:178} [start] got lock
2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] tryActionLock }
2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] abort }
2011-02-25 23:04:20.511: [ora.cluster_interconnect.haip][18] {0:0:178} [start] clsn_agent::abort }
2011-02-25 23:04:20.511: [ AGFW][18] {0:0:178} Command: start for resource: ora.cluster_interconnect.haip 1 1 completed with status: TIMEDOUT
2011-02-25 23:04:20.512: [ora.cluster_interconnect.haip][8] {0:0:178} [check] NetworkAgent::init enter {
2011-02-25 23:04:20.513: [ora.cluster_interconnect.haip][8] {0:0:178} [check] NetworkAgent::init exit }
2011-02-25 23:04:20.517: [ AGFW][10] {0:0:178} Agent sending reply for: RESOURCE_START[ora.cluster_interconnect.haip 1 1] ID 4098:661
2011-02-25 23:04:20.519: [ USRTHRD][8] {0:0:178} Ocr Context init default level 23886304
2011-02-25 23:04:20.519: [ default][8]clsvactversion:4: Retrieving Active Version from local storage.
[ CLWAL][8]clsw_Initialize: OLR initlevel [70000]
Few lines from /oracle/11.2.0/grid/log/sph1erp/ohasd/ohasd.log:
=====================================================================================================
2011-02-25 23:04:21.627: [UiServer][30] {0:0:180} Done for ctx=6000000002604ce0
2011-02-25 23:04:21.642: [UiServer][31] Closed: remote end failed/disc.
2011-02-25 23:04:26.139: [ CLSINET][33]Returning NETDATA: 1 interfaces
2011-02-25 23:04:26.139: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_interconnect'
2011-02-25 23:04:26.973: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
2011-02-25 23:04:26.973: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
2011-02-25 23:04:26.992: [UiServer][30] {0:0:181} processMessage called
2011-02-25 23:04:26.993: [UiServer][30] {0:0:181} Sending message to PE. ctx= 6000000001b440f0
2011-02-25 23:04:26.993: [UiServer][30] {0:0:181} Sending command to PE: 67
2011-02-25 23:04:26.994: [ CRSPE][29] {0:0:181} Processing PE command id=173. Description: [Stat Resource : 600000000135f760]
2011-02-25 23:04:26.997: [UiServer][30] {0:0:181} Done for ctx=6000000001b440f0
2011-02-25 23:04:27.012: [UiServer][31] Closed: remote end failed/disc.
2011-02-25 23:04:31.135: [ CLSINET][33]Returning NETDATA: 1 interfaces
2011-02-25 23:04:31.135: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_interconnect'
2011-02-25 23:04:32.318: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
2011-02-25 23:04:32.318: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
2011-02-25 23:04:32.332: [UiServer][30] {0:0:182} processMessage called
2011-02-25 23:04:32.333: [UiServer][30] {0:0:182} Sending message to PE. ctx= 6000000001b45ef0
2011-02-25 23:04:32.333: [UiServer][30] {0:0:182} Sending command to PE: 68
2011-02-25 23:04:32.334: [ CRSPE][29] {0:0:182} Processing PE command id=174. Description: [Stat Resource : 600000000135f760]
2011-02-25 23:04:32.338: [UiServer][30] {0:0:182} Done for ctx=6000000001b45ef0
2011-02-25 23:04:32.352: [UiServer][31] Closed: remote end failed/disc.
2011-02-25 23:04:36.155: [ CLSINET][33]Returning NETDATA: 1 interfaces
2011-02-25 23:04:36.155: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_interconnect'
2011-02-25 23:04:37.683: [UiServer][31] CS(60000000014b0790)set Properties ( root,60000000012e0260)
2011-02-25 23:04:37.683: [UiServer][31] SS(6000000001372270)Accepted client connection: saddr =(ADDRESS=(PROTOCOL=ipc)(DEV=92)(KEY=OHASD_UI_SOCKET))daddr = (ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET))
2011-02-25 23:04:37.702: [UiServer][30] {0:0:183} processMessage called
2011-02-25 23:04:37.703: [UiServer][30] {0:0:183} Sending message to PE. ctx= 6000000002604ce0
2011-02-25 23:04:37.703: [UiServer][30] {0:0:183} Sending command to PE: 69
2011-02-25 23:04:37.704: [ CRSPE][29] {0:0:183} Processing PE command id=175. Description: [Stat Resource : 600000000135f760]
2011-02-25 23:04:37.708: [UiServer][30] {0:0:183} Done for ctx=6000000002604ce0
2011-02-25 23:04:37.722: [UiServer][31] Closed: remote end failed/disc.
2011-02-25 23:04:41.156: [ CLSINET][33]Returning NETDATA: 1 interfaces
2011-02-25 23:04:41.156: [ CLSINET][33]# 0 Interface 'lan2',ip='10.10.16.50',mac='3c-4a-92-48-71-be',mask='255.255.255.240',net='10.10.16.48',use='cluster_interconnect'
What could be the issue?
Experts, please help me. This is a setup for the production environment, so please respond as soon as possible. Thanks.
Regards,
Manish

Thanks Sebastian for your input.
Yes, my lan2 is used for the cluster_interconnect, with netmask 255.255.255.240.
Below are the IPs used for RAC:
Public
Node1: 10.10.1.173/255.255.240.0
Node2: 10.10.1.174/255.255.240.0
Private
Node1: 10.10.16.50/255.255.255.240
Node2: 10.10.16.51/255.255.255.240
Virtual
Node1: 10.10.1.191/255.255.240.0
Node2: 10.10.1.192/255.255.240.0
SCAN (Defined in DNS)
10.10.1.193/255.255.240.0
10.10.1.194/255.255.240.0
10.10.1.195/255.255.240.0
As you said, I will scrap the GI software again and retry with 255.255.255.0.
I believe Redundant Interconnect and ora.cluster_interconnect.haip are present in version 11.2.0.2.0.
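As a quick sanity check before re-running the installer, the address plan above can be validated with a short script. This is illustrative only, using the IPs listed in this thread; it shows that both private IPs already fit in the wider /24 subnet, so widening the netmask by itself would not force new addresses:

```python
import ipaddress

# Private interconnect subnet as currently configured (/28 = 255.255.255.240)
current = ipaddress.ip_network("10.10.16.48/255.255.255.240")
# Proposed wider subnet (/24 = 255.255.255.0)
proposed = ipaddress.ip_network("10.10.16.0/255.255.255.0")

private_ips = [ipaddress.ip_address("10.10.16.50"),
               ipaddress.ip_address("10.10.16.51")]

# Both private IPs belong to either subnet, so the addresses themselves
# would not need to change when only the netmask is widened.
for ip in private_ips:
    assert ip in current and ip in proposed

print(current.num_addresses - 2)  # usable host addresses in the /28 -> 14
```

Note the /28 still leaves 14 usable host addresses, so it is not too small for a two-node interconnect by itself; the netmask change is being tried here to rule out a subnet-definition mismatch.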
Oracle says:
Redundant Interconnect without any 3rd-party IP failover technology (bond, IPMP or similar) is supported natively by Grid Infrastructure starting from 11.2.0.2. Multiple private network adapters can be defined either during the installation phase or afterward using oifcfg. The Oracle Database, CSS, OCR, CRS, CTSS, and EVM components in 11.2.0.2 employ it automatically.
Grid Infrastructure can activate a maximum of four private network adapters at a time, even if more are defined. The ora.cluster_interconnect.haip resource will start one to four link-local HAIP addresses on private network adapters for interconnect communication for Oracle RAC, Oracle ASM, Oracle ACFS, etc.
Grid automatically picks link-local addresses from the reserved 169.254.*.* subnet for HAIP, and it will not attempt to use any 169.254.*.* address that is already in use for another purpose. With HAIP, by default, interconnect traffic is load-balanced across all active interconnect interfaces, and the corresponding HAIP address fails over transparently to other adapters if one fails or becomes non-communicative.
The number of HAIP addresses is decided by how many private network adapters are active when Grid comes up on the first node in the cluster. If there is only one active private network, Grid will create one; if two, Grid will create two; and if more than two, Grid will create four HAIPs. The number of HAIPs will not change even if more private network adapters are activated later; a restart of the clusterware on all nodes is required for new adapters to become effective.
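The sizing rule in that last paragraph can be sketched as a small function. This is my paraphrase of the quoted behaviour for illustration, not Oracle code:

```python
def haip_count(active_private_nics: int) -> int:
    """Number of link-local HAIP addresses Grid creates, per the quoted
    rule: one active private NIC -> 1 HAIP, two -> 2, more than two -> 4."""
    if active_private_nics <= 0:
        return 0
    if active_private_nics <= 2:
        return active_private_nics
    return 4

# The count is fixed at first-node startup; activating NICs later
# does not change it until the clusterware is restarted on all nodes.
assert [haip_count(n) for n in (1, 2, 3, 8)] == [1, 2, 4, 4]
```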
In my setup, NIC teaming is configured for both the public and private interfaces. So I am thinking of breaking the NIC teaming, because HAIP internally searches for the next available NIC and cannot find one, as all four are already in use by the OS-level NIC teaming.
My only concern is: since I am going to change the subnet for the private IPs, should I also change the private IP addresses?
Thanks for the support.
Regards,
Manish -
Active Session Spike on Oracle RAC 11gR2 on HP-UX
Dear Experts,
We need urgent help, please, as we are facing very poor performance in the production database.
We are running Oracle 11g RAC on an HP-UX environment. Following is the ADDM report. Kindly check it and help me figure out the issue so we can resolve it as soon as possible.
---------Instance 1---------------
ADDM Report for Task 'TASK_36650'
Analysis Period
AWR snapshot range from 11634 to 11636.
Time period starts at 21-JUL-13 07.00.03 PM
Time period ends at 21-JUL-13 09.00.49 PM
Analysis Target
Database 'MCMSDRAC' with DB ID 2894940361.
Database version 11.2.0.1.0.
ADDM performed an analysis of instance mcmsdrac1, numbered 1 and hosted at
mcmsdbl1.
Activity During the Analysis Period
Total database time was 38466 seconds.
The average number of active sessions was 5.31.
Summary of Findings
Description Active Sessions Recommendations
Percent of Activity
1 CPU Usage 1.44 | 27.08 1
2 Interconnect Latency .07 | 1.33 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
Finding 1: CPU Usage
Impact is 1.44 active sessions, 27.08% of total activity.
Host CPU was a bottleneck and the instance was consuming 99% of the host CPU.
All wait times will be inflated by wait for CPU.
Host CPU consumption was 99%.
Recommendation 1: Host Configuration
Estimated benefit is 1.44 active sessions, 27.08% of total activity.
Action
Consider adding more CPUs to the host or adding instances serving the
database on other hosts.
Action
Session CPU consumption was throttled by the Oracle Resource Manager.
Consider revising the resource plan that was active during the analysis
period.
Finding 2: Interconnect Latency
Impact is .07 active sessions, 1.33% of total activity.
Higher than expected latency of the cluster interconnect was responsible for
significant database time on this instance.
The instance was consuming 110 kilobits per second of interconnect bandwidth.
20% of this interconnect bandwidth was used for global cache messaging, 21%
for parallel query messaging and 7% for database lock management.
The average latency for 8K interconnect messages was 42153 microseconds.
The instance is using the private interconnect device "lan2" with IP address
172.16.200.71 and source "Oracle Cluster Repository".
The device "lan2" was used for 100% of interconnect traffic and experienced 0
send or receive errors during the analysis period.
Recommendation 1: Host Configuration
Estimated benefit is .07 active sessions, 1.33% of total activity.
Action
Investigate cause of high network interconnect latency between database
instances. Oracle's recommended solution is to use a high speed
dedicated network.
Action
Check the configuration of the cluster interconnect. Check OS setup like
adapter setting, firmware and driver release. Check that the OS's socket
receive buffers are large enough to store an entire multiblock read. The
value of parameter "db_file_multiblock_read_count" may be decreased as a
workaround.
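To put that last action in concrete terms: a multiblock read of `db_file_multiblock_read_count` blocks must fit in the OS socket receive buffer. A rough sizing calculation, with illustrative values (check your own `db_block_size` and MBRC settings):

```python
def min_socket_buffer(db_block_size: int, mbrc: int) -> int:
    """Smallest OS socket receive buffer (bytes) that can hold one
    full multiblock read, per the ADDM recommendation above."""
    return db_block_size * mbrc

# With an 8 KB block size and an MBRC of 128, one multiblock read
# is 1 MB -- larger than many OS default receive buffers.
assert min_socket_buffer(8192, 128) == 1048576
# Halving db_file_multiblock_read_count halves the requirement,
# which is why ADDM suggests decreasing it as a workaround.
assert min_socket_buffer(8192, 64) == 524288
```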
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional Information
Miscellaneous Information
Wait class "Application" was not consuming significant database time.
Wait class "Cluster" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Wait class "User I/O" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
Hard parsing of SQL statements was not consuming significant database time.
The database's maintenance windows were active during 100% of the analysis
period.
----------------Instance 2 --------------------
ADDM Report for Task 'TASK_36652'
Analysis Period
AWR snapshot range from 11634 to 11636.
Time period starts at 21-JUL-13 07.00.03 PM
Time period ends at 21-JUL-13 09.00.49 PM
Analysis Target
Database 'MCMSDRAC' with DB ID 2894940361.
Database version 11.2.0.1.0.
ADDM performed an analysis of instance mcmsdrac2, numbered 2 and hosted at
mcmsdbl2.
Activity During the Analysis Period
Total database time was 2898 seconds.
The average number of active sessions was .4.
Summary of Findings
Description Active Sessions Recommendations
Percent of Activity
1 Top SQL Statements .11 | 27.65 5
2 Interconnect Latency .1 | 24.15 1
3 Shared Pool Latches .09 | 22.42 1
4 PL/SQL Execution .06 | 14.39 2
5 Unusual "Other" Wait Event .03 | 8.73 4
6 Unusual "Other" Wait Event .03 | 6.42 3
7 Unusual "Other" Wait Event .03 | 6.29 6
8 Hard Parse .02 | 5.5 0
9 Soft Parse .02 | 3.86 2
10 Unusual "Other" Wait Event .01 | 3.75 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Findings and Recommendations
Finding 1: Top SQL Statements
Impact is .11 active sessions, 27.65% of total activity.
SQL statements consuming significant database time were found. These
statements offer a good opportunity for performance improvement.
Recommendation 1: SQL Tuning
Estimated benefit is .05 active sessions, 12.88% of total activity.
Action
Investigate the PL/SQL statement with SQL_ID "d1s02myktu19h" for
possible performance improvements. You can supplement the information
given here with an ASH report for this SQL_ID.
Related Object
SQL statement with SQL_ID d1s02myktu19h.
begin dbms_utility.validate(:1,:2,:3,:4); end;
Rationale
The SQL Tuning Advisor cannot operate on PL/SQL statements.
Rationale
Database time for this SQL was divided as follows: 13% for SQL
execution, 2% for parsing, 85% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "d1s02myktu19h" was executed 48 times and had
an average elapsed time of 7 seconds.
Rationale
Waiting for event "library cache pin" in wait class "Concurrency"
accounted for 70% of the database time spent in processing the SQL
statement with SQL_ID "d1s02myktu19h".
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"63wt8yna5umd6" are responsible for 100% of the database time spent on
the PL/SQL statement with SQL_ID "d1s02myktu19h".
Related Object
SQL statement with SQL_ID 63wt8yna5umd6.
begin DBMS_UTILITY.COMPILE_SCHEMA( 'TPAUSER', FALSE ); end;
Recommendation 2: SQL Tuning
Estimated benefit is .02 active sessions, 4.55% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"fk3bh3t41101x".
Related Object
SQL statement with SQL_ID fk3bh3t41101x.
SELECT MEM.MEMBER_CODE ,MEM.E_NAME,Pol.Policy_no
,pol.date_from,pol.date_to,POL.E_NAME,MEM.SEX,(SYSDATE-MEM.BIRTH_DATE
) AGE,POL.SCHEME_NO FROM TPAUSER.MEMBERS MEM,TPAUSER.POLICY POL WHERE
POL.QUOTATION_NO=MEM.QUOTATION_NO AND POL.BRANCH_CODE=MEM.BRANCH_CODE
and endt_no=(select max(endt_no) from tpauser.members mm where
mm.member_code=mem.member_code AND mm.QUOTATION_NO=MEM.QUOTATION_NO)
and member_code like '%' || nvl(:1,null) ||'%' ORDER BY MEMBER_CODE
Rationale
The SQL spent 92% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "fk3bh3t41101x" was executed 14 times and had
an average elapsed time of 4.9 seconds.
Rationale
At least one execution of the statement ran in parallel.
Recommendation 3: SQL Tuning
Estimated benefit is .02 active sessions, 3.79% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"7mhjbjg9ntqf5".
Related Object
SQL statement with SQL_ID 7mhjbjg9ntqf5.
SELECT SUM(CNT) FROM (SELECT COUNT(PROC_CODE) CNT FROM
TPAUSER.TORBINY_PROCEDURE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
:B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND PR_EFFECTIVE_DATE<=
:B2 AND PROC_CODE = :B1 UNION SELECT COUNT(MED_CODE) CNT FROM
TPAUSER.TORBINY_MEDICINE WHERE BRANCH_CODE = :B6 AND QUOTATION_NO =
:B5 AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND M_EFFECTIVE_DATE<= :B2
AND MED_CODE = :B1 UNION SELECT COUNT(LAB_CODE) CNT FROM
TPAUSER.TORBINY_LAB WHERE BRANCH_CODE = :B6 AND QUOTATION_NO = :B5
AND CLASS_NO = :B4 AND OPTION_NO = :B3 AND L_EFFECTIVE_DATE<= :B2 AND
LAB_CODE = :B1 )
Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 0% for SQL execution,
0% for parsing, 100% for PL/SQL execution and 0% for Java execution.
Rationale
SQL statement with SQL_ID "7mhjbjg9ntqf5" was executed 31 times and had
an average elapsed time of 3.4 seconds.
Rationale
Top level calls to execute the SELECT statement with SQL_ID
"a11nzdnd91gsg" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "7mhjbjg9ntqf5".
Related Object
SQL statement with SQL_ID a11nzdnd91gsg.
SELECT POLICY_NO,SCHEME_NO FROM TPAUSER.POLICY WHERE QUOTATION_NO
=:B1
Recommendation 4: SQL Tuning
Estimated benefit is .01 active sessions, 3.03% of total activity.
Action
Investigate the SELECT statement with SQL_ID "4uqs4jt7aca5s" for
possible performance improvements. You can supplement the information
given here with an ASH report for this SQL_ID.
Related Object
SQL statement with SQL_ID 4uqs4jt7aca5s.
SELECT DISTINCT USER_ID FROM GV$SESSION, USERS WHERE UPPER (USERNAME)
= UPPER (USER_ID) AND USERS.APPROVAL_CLAIM='VC' AND USER_ID=:B1
Rationale
The SQL spent only 0% of its database time on CPU, I/O and Cluster
waits. Therefore, the SQL Tuning Advisor is not applicable in this case.
Look at performance data for the SQL to find potential improvements.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "4uqs4jt7aca5s" was executed 261 times and had
an average elapsed time of 0.35 seconds.
Rationale
At least one execution of the statement ran in parallel.
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"91vt043t78460" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "4uqs4jt7aca5s".
Related Object
SQL statement with SQL_ID 91vt043t78460.
begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V0000
4); end;
Recommendation 5: SQL Tuning
Estimated benefit is .01 active sessions, 3.03% of total activity.
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"7kt28fkc0yn5f".
Related Object
SQL statement with SQL_ID 7kt28fkc0yn5f.
SELECT COUNT(*) FROM TPAUSER.APPROVAL_MASTER WHERE APPROVAL_STATUS IS
NULL AND (UPPER(CODED) = UPPER(:B1 ) OR UPPER(PROCESSED_BY) =
UPPER(:B1 ))
Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "7kt28fkc0yn5f" was executed 1034 times and
had an average elapsed time of 0.063 seconds.
Rationale
Top level calls to execute the PL/SQL statement with SQL_ID
"91vt043t78460" are responsible for 100% of the database time spent on
the SELECT statement with SQL_ID "7kt28fkc0yn5f".
Related Object
SQL statement with SQL_ID 91vt043t78460.
begin TPAUSER.RECEIVE_NEW_FAX_APRROVAL(:V00001,:V00002,:V00003,:V0000
4); end;
Finding 2: Interconnect Latency
Impact is .1 active sessions, 24.15% of total activity.
Higher than expected latency of the cluster interconnect was responsible for
significant database time on this instance.
The instance was consuming 128 kilobits per second of interconnect bandwidth.
17% of this interconnect bandwidth was used for global cache messaging, 6% for
parallel query messaging and 8% for database lock management.
The average latency for 8K interconnect messages was 41863 microseconds.
The instance is using the private interconnect device "lan2" with IP address
172.16.200.72 and source "Oracle Cluster Repository".
The device "lan2" was used for 100% of interconnect traffic and experienced 0
send or receive errors during the analysis period.
Recommendation 1: Host Configuration
Estimated benefit is .1 active sessions, 24.15% of total activity.
Action
Investigate cause of high network interconnect latency between database
instances. Oracle's recommended solution is to use a high speed
dedicated network.
Action
Check the configuration of the cluster interconnect. Check OS setup like
adapter setting, firmware and driver release. Check that the OS's socket
receive buffers are large enough to store an entire multiblock read. The
value of parameter "db_file_multiblock_read_count" may be decreased as a
workaround.
Symptoms That Led to the Finding:
Inter-instance messaging was consuming significant database time on this
instance.
Impact is .06 active sessions, 14.23% of total activity.
Wait class "Cluster" was consuming significant database time.
Impact is .06 active sessions, 14.23% of total activity.
Finding 3: Shared Pool Latches
Impact is .09 active sessions, 22.42% of total activity.
Contention for latches related to the shared pool was consuming significant
database time.
Waits for "library cache lock" amounted to 5% of database time.
Waits for "library cache pin" amounted to 17% of database time.
Recommendation 1: Application Analysis
Estimated benefit is .09 active sessions, 22.42% of total activity.
Action
Investigate the cause for latch contention using the given blocking
sessions or modules.
Rationale
The session with ID 17 and serial number 15595 in instance number 1 was
the blocking session responsible for 34% of this recommendation's
benefit.
Symptoms That Led to the Finding:
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 4: PL/SQL Execution
Impact is .06 active sessions, 14.39% of total activity.
PL/SQL execution consumed significant database time.
Recommendation 1: SQL Tuning
Estimated benefit is .05 active sessions, 12.5% of total activity.
Action
Tune the entry point PL/SQL "SYS.DBMS_UTILITY.COMPILE_SCHEMA" of type
"PACKAGE" and ID 6019. Refer to the PL/SQL documentation for additional
information.
Rationale
318 seconds spent in executing PL/SQL "SYS.DBMS_UTILITY.VALIDATE#2" of
type "PACKAGE" and ID 6019.
Recommendation 2: SQL Tuning
Estimated benefit is .01 active sessions, 1.89% of total activity.
Action
Tune the entry point PL/SQL
"SYSMAN.EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS" of type "PACKAGE" and
ID 68654. Refer to the PL/SQL documentation for additional information.
Finding 5: Unusual "Other" Wait Event
Impact is .03 active sessions, 8.73% of total activity.
Wait event "DFS lock handle" in wait class "Other" was consuming significant
database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 8.73% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .03 active sessions, 8.27% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 5.05% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Module "TOAD
9.7.2.5".
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 3.21% of total activity.
Action
Investigate the cause for high "DFS lock handle" waits in Module
"toad.exe".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 6: Unusual "Other" Wait Event
Impact is .03 active sessions, 6.42% of total activity.
Wait event "reliable message" in wait class "Other" was consuming significant
database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 6.42% of total activity.
Action
Investigate the cause for high "reliable message" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .03 active sessions, 6.42% of total activity.
Action
Investigate the cause for high "reliable message" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 4.13% of total activity.
Action
Investigate the cause for high "reliable message" waits in Module "TOAD
9.7.2.5".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 7: Unusual "Other" Wait Event
Impact is .03 active sessions, 6.29% of total activity.
Wait event "enq: PS - contention" in wait class "Other" was consuming
significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .03 active sessions, 6.29% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits. Refer to
Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .02 active sessions, 6.02% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Service
"mcmsdrac".
Recommendation 3: Application Analysis
Estimated benefit is .02 active sessions, 4.93% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits with
P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
"3599" respectively.
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 2.74% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Module
"Inbox Reader_92.exe".
Recommendation 5: Application Analysis
Estimated benefit is .01 active sessions, 2.74% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits in Module
"TOAD 9.7.2.5".
Recommendation 6: Application Analysis
Estimated benefit is .01 active sessions, 1.37% of total activity.
Action
Investigate the cause for high "enq: PS - contention" waits with
P1,P2,P3 ("name|mode, instance, slave ID") values "1347616774", "1" and
"3598" respectively.
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
Finding 8: Hard Parse
Impact is .02 active sessions, 5.5% of total activity.
Hard parsing of SQL statements was consuming significant database time.
Hard parses due to cursor environment mismatch were not consuming significant
database time.
Hard parsing SQL statements that encountered parse errors was not consuming
significant database time.
Hard parses due to literal usage and cursor invalidation were not consuming
significant database time.
The Oracle instance memory (SGA and PGA) was adequately sized.
No recommendations are available.
Symptoms That Led to the Finding:
Contention for latches related to the shared pool was consuming
significant database time.
Impact is .09 active sessions, 22.42% of total activity.
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 9: Soft Parse
Impact is .02 active sessions, 3.86% of total activity.
Soft parsing of SQL statements was consuming significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .02 active sessions, 3.86% of total activity.
Action
Investigate application logic to keep open the frequently used cursors.
Note that cursors are closed by both cursor close calls and session
disconnects.
Recommendation 2: Database Configuration
Estimated benefit is .02 active sessions, 3.86% of total activity.
Action
Consider increasing the session cursor cache size by increasing the
value of parameter "session_cached_cursors".
Rationale
The value of parameter "session_cached_cursors" was "100" during the
analysis period.
Symptoms That Led to the Finding:
Contention for latches related to the shared pool was consuming
significant database time.
Impact is .09 active sessions, 22.42% of total activity.
Wait class "Concurrency" was consuming significant database time.
Impact is .1 active sessions, 24.96% of total activity.
Finding 10: Unusual "Other" Wait Event
Impact is .01 active sessions, 3.75% of total activity.
Wait event "IPC send completion sync" in wait class "Other" was consuming
significant database time.
Recommendation 1: Application Analysis
Estimated benefit is .01 active sessions, 3.75% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits. Refer
to Oracle's "Database Reference" for the description of this wait event.
Recommendation 2: Application Analysis
Estimated benefit is .01 active sessions, 3.75% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits with P1
("send count") value "1".
Recommendation 3: Application Analysis
Estimated benefit is .01 active sessions, 2.59% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits in
Service "mcmsdrac".
Recommendation 4: Application Analysis
Estimated benefit is .01 active sessions, 1.73% of total activity.
Action
Investigate the cause for high "IPC send completion sync" waits in
Module "TOAD 9.7.2.5".
Symptoms That Led to the Finding:
Wait class "Other" was consuming significant database time.
Impact is .15 active sessions, 38.29% of total activity.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional Information
Miscellaneous Information
Wait class "Application" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Wait class "Network" was not consuming significant database time.
Wait class "User I/O" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.
The database's maintenance windows were active during 100% of the analysis
period.
Please help.

Hello experts, please do the needful. It's really very urgent.
Thanks,
Syed -
Hi All
I am installing Oracle RAC 10g 10.2.0.1 on HP-UX B.11.31 U ia64 but cannot complete the installation.
hosts file
#Public IPs
10.144.1.111 spgdb01
10.144.1.112 spgdb02
#Private IPs
10.144.2.2 spgdb01p
10.144.2.3 spgdb02p
#Virtual IPs
10.144.1.113 spgdb01v
10.144.1.114 spgdb02v
I ran the installation with runInstaller without errors; the copy and link phases completed OK. But when I run root.sh, it cannot complete, as follows:
Checking to see if Oracle CRS stack is already configured
Checking to see if any 9i GSD is up
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 0: spgdb01 spgdb01p spgdb01
node 1: spgdb02 spgdb02p spgdb02
Creating OCR keys for user 'root', privgrp 'sys'..
Operation successful.
Now formatting voting device: /ora/crs/votedisk01
waitpid(-1, 0x7fffdf50, WUNTRACED) .................................................................................................... [sleeping]
Now formatting voting device: /oracle/oradata1/crs/votedisk02
Now formatting voting device: /oracle/oradata2/crs/votedisk03
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
====================
I have waited for 10 minutes but it still has not completed.
Additionally, in the log from runInstaller I see:
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-04-28_12-13-31AM. Please wait ...-bash-4.2$ Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.
Private Interconnect : null
Private Interconnect : null
Private Interconnect : null
Private Interconnect : null
So, please help me fix this issue.
Thank you.
I had this problem and resolved it by transferring the file to the installation server with the correct FTP data type (binary).
On page 54 of the install guide (..Server\Oracle_Business_Intelligence\doc\doc\bi.1013\b31765.pdf) that comes with the installation files, there is an instruction to make sure that any ftp activity is done in binary.
This may not have occurred with the license.xml file if you use a tool that offers the "feature" of automatic data-type recognition.
Hope this helps. -
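Following the binary-FTP advice above, a quick way to confirm installation media survived the transfer intact is to compare checksums on both sides. The sketch below is purely illustrative (the file names are hypothetical): an ASCII-mode FTP transfer strips CR bytes, so the checksums will differ.

```shell
# Simulate a source file and two copies of it: one transferred in binary
# mode (byte-identical) and one in ASCII mode (CR bytes stripped).
printf 'abc\r\n' > /tmp/demo_src.bin    # pretend this is the source media
printf 'abc\r\n' > /tmp/demo_bin.bin    # binary-mode copy: byte-identical
printf 'abc\n'   > /tmp/demo_ascii.bin  # ascii-mode copy: CR stripped
src=$(cksum < /tmp/demo_src.bin   | awk '{print $1}')
bin=$(cksum < /tmp/demo_bin.bin   | awk '{print $1}')
asc=$(cksum < /tmp/demo_ascii.bin | awk '{print $1}')
[ "$src" = "$bin" ] && echo "binary copy OK"
[ "$src" != "$asc" ] && echo "ascii copy corrupted"
rm -f /tmp/demo_src.bin /tmp/demo_bin.bin /tmp/demo_ascii.bin
```

In practice you would run `cksum` (or `md5sum`) against the actual media on the source and destination hosts and re-transfer in binary mode if the values differ.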
APEX Application behaviour in a RAC setup
Hi
Caveat first: I'm pretty new to Oracle RAC and just looking into it as an option. We have an APEX application currently running in Oracle 11gR2 single node currently and are considering HA for this.
My question is: What would be the expected behaviour seen by a User of an APEX application, in the event of a node failure, when running with an OHS / RAC configuration? Will they get "transparent fail-over" and see nothing or will they see an error?
I appreciate I could post in the APEX forum, but feel that is probably more of a development forum and possibly someone here has had to look at things at this level.
I have read what I think may be the definitive reference for this:
http://www.oracle.com/technetwork/developer-tools/apex/learnmore/apex-rac-wp-133532.pdf
but while it covers most of what I want, I don't believe it answers my question.
This states:
"The Transparent Application Failover (TAF) feature of Oracle Net Services is a runtime failover for high-availability environments. It enables client applications to automatically reconnect to the database if the connection fails and, optionally, resume a SELECT statement that was in progress. The reconnection happens automatically from within the Oracle Call Interface (OCI) library. For applications that do insert, update or delete transactions, the application must trap the error when the failure occurs, rollback the transaction, and then resubmit. If the application is not written to be TAF aware, the session will get disconnected."
However (as I understand it), APEX runs in the database and would fail with the database; it isn't a typical "client application" connecting to Oracle via a TAF-aware connection pool. It is essentially a large PL/SQL package, and TAF only covers SELECT statements, not packages.
Maybe I'm over-reading this and it's simpler than that: APEX/mod_plsql might just handle it?
- APEX user/HTTP session state is stored in the database (APEX: Understanding session state), so it is available on the other nodes
- mod_plsql in OHS can detect the error being returned and reissue the request to a good server, and APEX on that instance can retrieve the user's HTTP state and process the request (the APEX/RAC doc states that mod_plsql can see an error from the database, clean the connection up, and form a new connection, but not that it will retry the request for the client on the other APEX/DB node).
I'm really just after a (transparent/non-transparent) statement based on experience, but an outline of how the components behave would be useful.
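For context, the TAF behaviour quoted above is configured on the client side in tnsnames.ora; a typical entry looks like the following sketch (the host names, service name, and retry values are hypothetical, not from this thread):

```
MYDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = mydb)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 20)
        (DELAY = 5)
      )
    )
  )
```

Since mod_plsql connects through its own DAD connect string rather than a TAF-aware OCI pool in the application, this is part of why, as the white paper suggests, TAF alone may not make APEX failover fully transparent.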
Thanks in advance
Dave
Hi
Any chance of getting that link outside of Metalink? - I'm trying to get our customer support id, but no luck at present.
I'm aware that APEX can run with RAC (as per the link I posted) - I'm really after next level info around behaviour in that environment.
Thanks
Dave -
Oracle Upgrade from Oracle RAC 9.2.0.6 to Oracle 10.2.0.4
Hi All,
Currently, we are running 4 node Oracle RAC environment with below mentioned configuration.
OS: Sun Solaris 5.9
Hardware: Sun E2900
Oracle Version: 9.2.0.6
Veritas Cluster Server: 4.1
We want to upgrade to Oracle 10g and are currently analyzing the options for doing so. The current database size is approximately 1 TB, and we want to minimize application downtime during the upgrade.
As part of the upgrade, we also need to upgrade Veritas Cluster Server from 4.1 to 5.1 to support Oracle 10g. It would be a great help if someone could share some guidelines for performing this task.
We are currently thinking of a piecemeal approach, where we upgrade each node individually and then put it back into the cluster. There are some complexities involved, and it's a really high-risk approach.
Thanks a lot in advance for help
Regards,
Manoj
Oracle 10g RAC requires you to install Oracle Clusterware. Oracle supports running it alongside third-party clustering software. Not sure why you're so anxious to upgrade Veritas Cluster Server when it will be trivial on the 10g DB hosts.
-
Difference between RAC and MySQL Cluster !
Difference between RAC and MySQL Cluster
Please give me a thorough explanation, with examples, useful links, and anything else relevant, covering:
(1) Italian dealers/distributors for MySQL
(2) Difference between RAC and MySQL Cluster
(3) Pricing for MySQL and PostgreSQL
(4) How, and in what ways, MySQL delivers support
(5) Security features: MySQL vs. Oracle
(6) Management console: MySQL vs. Oracle
Thanks in advance !
MySQL Cluster
Ha ha, most amusing.
I suggest you try googling for answers to these things. This is a site dedicated to the Oracle database, the questions are answered by volunteers (not Oracle employees) and we are primarily geeks rather than marketing droids. If you have a specific Oracle question please feel free to post anytime.
Thank you for your interest.
Arrivederci, APC -
Database is not starting after RAC installation
Hi All,
I have installed 10g R2 2-node RAC on RHEL4 without any problem, but after the installation the database is not starting and I'm not able to connect. It shows the error below:
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux Error: 2: No such file or directory
Can anybody help what might be the problem?
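ORA-01034 generally just means no instance is running for the current ORACLE_SID, so the first things to check are the environment and the instance status. A minimal sketch (the SID and database names below are assumptions based on the alert log in the reply, not confirmed by the poster):

```shell
# ORA-01034 checklist sketch. The instance name "bsaprod1" and database
# name "bsaprod" are assumptions for illustration only.
export ORACLE_SID=bsaprod1
echo "SID set to: $ORACLE_SID"
# On a real RAC node you would then run (not executed here):
#   srvctl status database -d bsaprod     # is any instance up?
#   srvctl start database -d bsaprod      # start via Clusterware
#   sqlplus / as sysdba                   # then: STARTUP
```

If the SID doesn't match the instance name registered in the OCR, sqlplus will report exactly this "shared memory realm does not exist" error even when the database is otherwise healthy.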
Thanks,
Praveen
Hi,
Here are the contents of the alert log file from the first node:
Mon Feb 19 16:45:59 2007
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 eth1 10.250.90.0 configured from OCR for use as a cluster interconnect
Interface type 1 eth0 10.250.90.0 configured from OCR for use as a public interface
Shared memory segment for instance monitoring created
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
IMODE=BR
ILAT =36
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.2.0.
System parameters with non-default values:
processes = 300
sessions = 335
sga_max_size = 1073741824
sga_target = 1073741824
db_block_size = 8192
compatible = 10.2.0.2.0
log_archive_dest_1 = LOCATION=+BSA_DATA1/bsaprod/
log_archive_format = %t_%s_%r.dbf
db_file_multiblock_read_count= 16
cluster_database_instances= 1
db_create_file_dest = +ORCL_DATA1
instance_number = 1
undo_management = AUTO
undo_tablespace = UNDOTBS1
remote_login_passwordfile= EXCLUSIVE
db_domain =
dispatchers = (PROTOCOL=TCP) (SERVICE=bsaprodXDB)
remote_listener = LISTENERS_BSAPROD
job_queue_processes = 10
background_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/bdump
user_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/udump
core_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/cdump
audit_file_dest = /u01/app/oracle/oracle/admin/bsaprod/adump
db_name = bsaprod
open_cursors = 300
pga_aggregate_target = 262144000
Cluster communication is configured to use the following interface(s) for this instance
10.250.90.107
Mon Feb 19 16:46:00 2007
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=31282
DIAG started with pid=3, OS id=31284
PSP0 started with pid=4, OS id=31286
LMON started with pid=5, OS id=31300
LMD0 started with pid=6, OS id=31315
MMAN started with pid=7, OS id=31317
DBW0 started with pid=8, OS id=31319
LGWR started with pid=9, OS id=31321
CKPT started with pid=10, OS id=31323
SMON started with pid=11, OS id=31325
RECO started with pid=12, OS id=31327
CJQ0 started with pid=13, OS id=31329
MMON started with pid=14, OS id=31331
Mon Feb 19 16:46:00 2007
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=15, OS id=31333
Mon Feb 19 16:46:00 2007
starting up 1 shared server(s) ...
Mon Feb 19 16:46:00 2007
lmon registered with NM - instance id 1 (internal mem no 0)
Mon Feb 19 16:46:01 2007
Reconfiguration started (old inc 0, new inc 2)
List of nodes:
0
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Resources and enqueues cleaned out
Resources remastered 0
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Submitted all GCS remote-cache requests
Post SMON to start 1st pass IR
Reconfiguration complete
Mon Feb 19 16:46:01 2007
CREATE DATABASE "bsaprod"
MAXINSTANCES 32
MAXLOGHISTORY 1
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE SIZE 300M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE SIZE 120M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 SIZE 51200K,
GROUP 2 SIZE 51200K
USER SYS IDENTIFIED BY *USER SYSTEM IDENTIFIED BY
Mon Feb 19 16:46:01 2007
Starting background process ASMB
ASMB started with pid=19, OS id=31365
Starting background process RBAL
RBAL started with pid=20, OS id=31369
Loaded ASM Library - Generic Linux, version 2.0.2 (KABI_V2) library for asmlib interface
Mon Feb 19 16:46:05 2007
SUCCESS: diskgroup ORCL_DATA1 was mounted
SUCCESS: diskgroup ORCL_DATA1 was dismounted
Mon Feb 19 16:46:06 2007
SUCCESS: diskgroup ORCL_DATA1 was mounted
Mon Feb 19 16:46:06 2007
Database mounted in Exclusive Mode
Mon Feb 19 16:46:09 2007
Successful mount of redo thread 1, with mount id 3309011657
Assigning activation ID 3309011657 (0xc53b82c9)
Thread 1 opened at log sequence 1
Current log# 1 seq# 1 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Successful open of redo thread 1
Mon Feb 19 16:46:09 2007
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Mon Feb 19 16:46:09 2007
SMON: enabling cache recovery
Mon Feb 19 16:46:09 2007
create tablespace SYSTEM datafile SIZE 300M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL online
Mon Feb 19 16:46:20 2007
Completed: create tablespace SYSTEM datafile SIZE 300M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL online
Mon Feb 19 16:46:20 2007
create rollback segment SYSTEM tablespace SYSTEM
storage (initial 50K next 50K)
Completed: create rollback segment SYSTEM tablespace SYSTEM
storage (initial 50K next 50K)
Mon Feb 19 16:46:24 2007
CREATE SMALLFILE UNDO TABLESPACE UNDOTBS1 DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
Mon Feb 19 16:46:31 2007
Successfully onlined Undo Tablespace 1.
Completed: CREATE SMALLFILE UNDO TABLESPACE UNDOTBS1 DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
Mon Feb 19 16:46:31 2007
create tablespace SYSAUX datafile SIZE 120M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO online
Completed: create tablespace SYSAUX datafile SIZE 120M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO online
Mon Feb 19 16:46:36 2007
CREATE SMALLFILE TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
Completed: CREATE SMALLFILE TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
Mon Feb 19 16:46:36 2007
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP
Completed: ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP
Mon Feb 19 16:46:36 2007
ALTER DATABASE DEFAULT TABLESPACE SYSTEM
Completed: ALTER DATABASE DEFAULT TABLESPACE SYSTEM
Mon Feb 19 16:46:38 2007
SMON: enabling tx recovery
Mon Feb 19 16:46:39 2007
Threshold validation cannot be done before catproc is loaded.
Threshold validation cannot be done before catproc is loaded.
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=22, OS id=32303
Mon Feb 19 16:46:39 2007
Completed: CREATE DATABASE "bsaprod"
MAXINSTANCES 32
MAXLOGHISTORY 1
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
DATAFILE SIZE 300M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE SIZE 120M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE SIZE 20M AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 SIZE 51200K,
GROUP 2 SIZE 51200K
USER SYS IDENTIFIED BY *USER SYSTEM IDENTIFIED BY
Mon Feb 19 16:46:39 2007
CREATE SMALLFILE UNDO TABLESPACE "UNDOTBS2" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
Completed: CREATE SMALLFILE UNDO TABLESPACE "UNDOTBS2" DATAFILE SIZE 200M AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
Mon Feb 19 16:46:46 2007
CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE SIZE 5M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE SIZE 5M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
Mon Feb 19 16:46:46 2007
ALTER DATABASE DEFAULT TABLESPACE "USERS"
Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
Mon Feb 19 16:47:05 2007
Thread 1 advanced to log sequence 2
Current log# 2 seq# 2 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 16:47:34 2007
Thread 1 cannot allocate new log, sequence 3
Checkpoint not complete
Current log# 2 seq# 2 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 3
Current log# 1 seq# 3 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 16:48:10 2007
Thread 1 cannot allocate new log, sequence 4
Checkpoint not complete
Current log# 1 seq# 3 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 4
Current log# 2 seq# 4 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 16:49:05 2007
Thread 1 cannot allocate new log, sequence 5
Checkpoint not complete
Current log# 2 seq# 4 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 5
Current log# 1 seq# 5 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 16:50:19 2007
Thread 1 cannot allocate new log, sequence 6
Checkpoint not complete
Current log# 1 seq# 5 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 6
Current log# 2 seq# 6 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 16:51:00 2007
Thread 1 cannot allocate new log, sequence 7
Checkpoint not complete
Current log# 2 seq# 6 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 7
Current log# 1 seq# 7 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 16:51:54 2007
Thread 1 cannot allocate new log, sequence 8
Checkpoint not complete
Current log# 1 seq# 7 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 8
Current log# 2 seq# 8 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 16:55:32 2007
Thread 1 cannot allocate new log, sequence 9
Checkpoint not complete
Current log# 2 seq# 8 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 9
Current log# 1 seq# 9 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 16:56:35 2007
Thread 1 cannot allocate new log, sequence 10
Checkpoint not complete
Current log# 1 seq# 9 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 10
Current log# 2 seq# 10 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 16:56:52 2007
Thread 1 cannot allocate new log, sequence 11
Checkpoint not complete
Current log# 2 seq# 10 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 11
Current log# 1 seq# 11 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 16:57:04 2007
Thread 1 cannot allocate new log, sequence 12
Checkpoint not complete
Current log# 1 seq# 11 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 12
Current log# 2 seq# 12 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 16:57:14 2007
Thread 1 cannot allocate new log, sequence 13
Checkpoint not complete
Current log# 2 seq# 12 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 13
Current log# 1 seq# 13 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 16:57:31 2007
Thread 1 cannot allocate new log, sequence 14
Checkpoint not complete
Current log# 1 seq# 13 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 14
Current log# 2 seq# 14 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 16:58:13 2007
Thread 1 cannot allocate new log, sequence 15
Checkpoint not complete
Current log# 2 seq# 14 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 15
Current log# 1 seq# 15 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 16:59:42 2007
Thread 1 cannot allocate new log, sequence 16
Checkpoint not complete
Current log# 1 seq# 15 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 16
Current log# 2 seq# 16 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 17:00:33 2007
Thread 1 cannot allocate new log, sequence 17
Checkpoint not complete
Current log# 2 seq# 16 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 17
Current log# 1 seq# 17 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 17:01:49 2007
Thread 1 cannot allocate new log, sequence 18
Checkpoint not complete
Current log# 1 seq# 17 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 18
Current log# 2 seq# 18 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 17:02:23 2007
Thread 1 cannot allocate new log, sequence 19
Checkpoint not complete
Current log# 2 seq# 18 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 19
Current log# 1 seq# 19 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 17:04:06 2007
Thread 1 cannot allocate new log, sequence 20
Checkpoint not complete
Current log# 1 seq# 19 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 20
Current log# 2 seq# 20 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 17:05:22 2007
Thread 1 cannot allocate new log, sequence 21
Checkpoint not complete
Current log# 2 seq# 20 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 21
Current log# 1 seq# 21 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 17:06:22 2007
Thread 1 cannot allocate new log, sequence 22
Checkpoint not complete
Current log# 1 seq# 21 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 22
Current log# 2 seq# 22 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 17:08:42 2007
Thread 1 cannot allocate new log, sequence 23
Checkpoint not complete
Current log# 2 seq# 22 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 23
Current log# 1 seq# 23 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 17:09:43 2007
Thread 1 cannot allocate new log, sequence 24
Checkpoint not complete
Current log# 1 seq# 23 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 24
Current log# 2 seq# 24 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 17:10:42 2007
Thread 1 cannot allocate new log, sequence 25
Checkpoint not complete
Current log# 2 seq# 24 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 25
Current log# 1 seq# 25 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 17:11:31 2007
Thread 1 cannot allocate new log, sequence 26
Checkpoint not complete
Current log# 1 seq# 25 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 26
Current log# 2 seq# 26 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 17:12:01 2007
Thread 1 cannot allocate new log, sequence 27
Checkpoint not complete
Current log# 2 seq# 26 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 27
Current log# 1 seq# 27 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 17:13:12 2007
Thread 1 cannot allocate new log, sequence 28
Checkpoint not complete
Current log# 1 seq# 27 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 28
Current log# 2 seq# 28 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 17:14:05 2007
Thread 1 cannot allocate new log, sequence 29
Checkpoint not complete
Current log# 2 seq# 28 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 29
Current log# 1 seq# 29 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 17:14:57 2007
Thread 1 cannot allocate new log, sequence 30
Checkpoint not complete
Current log# 1 seq# 29 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Thread 1 advanced to log sequence 30
Current log# 2 seq# 30 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Mon Feb 19 17:16:29 2007
Thread 1 cannot allocate new log, sequence 31
Checkpoint not complete
Current log# 2 seq# 30 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_2.258.614969167
Thread 1 advanced to log sequence 31
Current log# 1 seq# 31 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Mon Feb 19 17:17:54 2007
Starting background process EMN0
EMN0 started with pid=25, OS id=15651
Mon Feb 19 17:17:54 2007
Shutting down instance: further logons disabled
Mon Feb 19 17:17:55 2007
Stopping background process QMNC
Mon Feb 19 17:17:55 2007
Stopping background process CJQ0
Mon Feb 19 17:17:57 2007
Stopping background process MMNL
Mon Feb 19 17:17:58 2007
Stopping background process MMON
Mon Feb 19 17:17:59 2007
Shutting down instance (immediate)
License high water mark = 1
Mon Feb 19 17:17:59 2007
Stopping Job queue slave processes
Mon Feb 19 17:17:59 2007
Job queue slave processes stopped
All dispatchers and shared servers shutdown
Mon Feb 19 17:18:01 2007
ALTER DATABASE CLOSE NORMAL
Mon Feb 19 17:18:01 2007
SMON: disabling tx recovery
SMON: disabling cache recovery
Mon Feb 19 17:18:03 2007
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
Thread 1 closed at log sequence 31
Successful close of redo thread 1
Mon Feb 19 17:18:04 2007
Completed: ALTER DATABASE CLOSE NORMAL
Mon Feb 19 17:18:04 2007
ALTER DATABASE DISMOUNT
Mon Feb 19 17:18:04 2007
SUCCESS: diskgroup ORCL_DATA1 was dismounted
Mon Feb 19 17:18:04 2007
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
Mon Feb 19 17:18:09 2007
freeing rdom 0
Mon Feb 19 17:18:12 2007
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 eth1 10.250.90.0 configured from OCR for use as a cluster interconnect
Interface type 1 eth0 10.250.90.0 configured from OCR for use as a public interface
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
IMODE=BR
ILAT =36
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.2.0.
System parameters with non-default values:
processes = 300
sessions = 335
sga_max_size = 1073741824
sga_target = 1073741824
control_files = +ORCL_DATA1/bsaprod/controlfile/current.256.614969167
db_block_size = 8192
compatible = 10.2.0.2.0
log_archive_dest_1 = LOCATION=+BSA_DATA1/bsaprod/
log_archive_format = %t_%s_%r.dbf
db_file_multiblock_read_count= 16
cluster_database_instances= 1
db_create_file_dest = +ORCL_DATA1
instance_number = 1
undo_management = AUTO
undo_tablespace = UNDOTBS1
remote_login_passwordfile= EXCLUSIVE
db_domain =
dispatchers = (PROTOCOL=TCP) (SERVICE=bsaprodXDB)
remote_listener = LISTENERS_BSAPROD
job_queue_processes = 10
background_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/bdump
user_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/udump
core_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/cdump
audit_file_dest = /u01/app/oracle/oracle/admin/bsaprod/adump
db_name = bsaprod
open_cursors = 300
pga_aggregate_target = 262144000
Cluster communication is configured to use the following interface(s) for this instance
10.250.90.107
Mon Feb 19 17:18:13 2007
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=16124
DIAG started with pid=3, OS id=16126
PSP0 started with pid=4, OS id=16128
LMON started with pid=5, OS id=16130
LMD0 started with pid=6, OS id=16132
MMAN started with pid=7, OS id=16134
DBW0 started with pid=8, OS id=16136
LGWR started with pid=9, OS id=16138
CKPT started with pid=10, OS id=16140
SMON started with pid=11, OS id=16142
RECO started with pid=12, OS id=16144
CJQ0 started with pid=13, OS id=16146
MMON started with pid=14, OS id=16148
Mon Feb 19 17:18:13 2007
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=15, OS id=16150
Mon Feb 19 17:18:13 2007
starting up 1 shared server(s) ...
Mon Feb 19 17:18:13 2007
lmon registered with NM - instance id 1 (internal mem no 0)
Mon Feb 19 17:18:14 2007
Reconfiguration started (old inc 0, new inc 2)
List of nodes:
0
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Resources and enqueues cleaned out
Resources remastered 0
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Submitted all GCS remote-cache requests
Post SMON to start 1st pass IR
Reconfiguration complete
Mon Feb 19 17:18:14 2007
ALTER DATABASE MOUNT
Mon Feb 19 17:18:14 2007
Starting background process ASMB
ASMB started with pid=19, OS id=16182
Starting background process RBAL
RBAL started with pid=20, OS id=16186
Loaded ASM Library - Generic Linux, version 2.0.2 (KABI_V2) library for asmlib interface
Mon Feb 19 17:18:18 2007
SUCCESS: diskgroup ORCL_DATA1 was mounted
Mon Feb 19 17:18:23 2007
Setting recovery target incarnation to 1
Mon Feb 19 17:18:23 2007
Successful mount of redo thread 1, with mount id 3309012054
Mon Feb 19 17:18:23 2007
Database mounted in Exclusive Mode
Completed: ALTER DATABASE MOUNT
Mon Feb 19 17:18:23 2007
alter database archivelog
Completed: alter database archivelog
Mon Feb 19 17:18:23 2007
alter database open
Mon Feb 19 17:18:23 2007
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=22, OS id=16393
Mon Feb 19 17:18:23 2007
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=23, OS id=16395
Mon Feb 19 17:18:23 2007
Thread 1 opened at log sequence 31
Current log# 1 seq# 31 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Successful open of redo thread 1
Mon Feb 19 17:18:23 2007
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Mon Feb 19 17:18:23 2007
SMON: enabling cache recovery
Mon Feb 19 17:18:23 2007
ARC0: STARTING ARCH PROCESSES
Mon Feb 19 17:18:23 2007
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Mon Feb 19 17:18:23 2007
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
ARC2 started with pid=24, OS id=16422
Mon Feb 19 17:18:23 2007
Successfully onlined Undo Tablespace 1.
Mon Feb 19 17:18:23 2007
SMON: enabling tx recovery
Mon Feb 19 17:18:23 2007
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=25, OS id=16449
Mon Feb 19 17:18:25 2007
Completed: alter database open
Mon Feb 19 17:18:25 2007
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 51200K,
GROUP 4 SIZE 51200K
Completed: ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 51200K,
GROUP 4 SIZE 51200K
Mon Feb 19 17:18:28 2007
ALTER DATABASE ENABLE PUBLIC THREAD 2
Completed: ALTER DATABASE ENABLE PUBLIC THREAD 2
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
Mon Feb 19 17:18:29 2007
Starting background process EMN0
EMN0 started with pid=26, OS id=16564
Mon Feb 19 17:18:29 2007
Shutting down instance: further logons disabled
Mon Feb 19 17:18:29 2007
Stopping background process QMNC
Mon Feb 19 17:18:29 2007
Stopping background process CJQ0
Mon Feb 19 17:18:30 2007
Stopping background process MMNL
Mon Feb 19 17:18:31 2007
Stopping background process MMON
Mon Feb 19 17:18:32 2007
Shutting down instance (immediate)
License high water mark = 3
Mon Feb 19 17:18:32 2007
Stopping Job queue slave processes
Mon Feb 19 17:18:32 2007
Job queue slave processes stopped
All dispatchers and shared servers shutdown
Mon Feb 19 17:18:34 2007
ALTER DATABASE CLOSE NORMAL
Mon Feb 19 17:18:34 2007
SMON: disabling tx recovery
SMON: disabling cache recovery
Mon Feb 19 17:18:34 2007
Shutting down archive processes
Archiving is disabled
Mon Feb 19 17:18:39 2007
ARCH shutting down
ARC2: Archival stopped
Mon Feb 19 17:18:44 2007
ARCH shutting down
ARC1: Archival stopped
Mon Feb 19 17:18:49 2007
ARCH shutting down
ARC0: Archival stopped
Mon Feb 19 17:18:50 2007
Thread 1 closed at log sequence 31
Successful close of redo thread 1
Mon Feb 19 17:18:51 2007
Completed: ALTER DATABASE CLOSE NORMAL
Mon Feb 19 17:18:51 2007
ALTER DATABASE DISMOUNT
Mon Feb 19 17:18:51 2007
SUCCESS: diskgroup ORCL_DATA1 was dismounted
Mon Feb 19 17:18:51 2007
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
Mon Feb 19 17:18:56 2007
freeing rdom 0
Mon Feb 19 17:18:59 2007
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 eth1 10.250.90.0 configured from OCR for use as a cluster interconnect
Interface type 1 eth0 10.250.90.0 configured from OCR for use as a public interface
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.2.0.
System parameters with non-default values:
processes = 300
sessions = 335
sga_max_size = 1073741824
spfile = +BSA_DATA1/bsaprod/spfilebsaprod.ora
sga_target = 1073741824
control_files = +ORCL_DATA1/bsaprod/controlfile/current.256.614969167
db_block_size = 8192
compatible = 10.2.0.2.0
log_archive_dest_1 = LOCATION=+BSA_DATA1/bsaprod/
log_archive_format = %t_%s_%r.dbf
db_file_multiblock_read_count= 16
cluster_database = TRUE
cluster_database_instances= 2
db_create_file_dest = +ORCL_DATA1
thread = 1
instance_number = 1
undo_management = AUTO
undo_tablespace = UNDOTBS1
remote_login_passwordfile= EXCLUSIVE
db_domain =
dispatchers = (PROTOCOL=TCP) (SERVICE=bsaprodXDB)
remote_listener = LISTENERS_BSAPROD
job_queue_processes = 10
background_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/bdump
user_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/udump
core_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/cdump
audit_file_dest = /u01/app/oracle/oracle/admin/bsaprod/adump
db_name = bsaprod
open_cursors = 300
pga_aggregate_target = 262144000
Cluster communication is configured to use the following interface(s) for this instance
10.250.90.107
Mon Feb 19 17:19:00 2007
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=17373
DIAG started with pid=3, OS id=17375
PSP0 started with pid=4, OS id=17377
LMON started with pid=5, OS id=17379
LMD0 started with pid=6, OS id=17381
LMS0 started with pid=7, OS id=17383
LMS1 started with pid=8, OS id=17387
MMAN started with pid=9, OS id=17391
DBW0 started with pid=10, OS id=17393
LGWR started with pid=11, OS id=17395
CKPT started with pid=12, OS id=17397
SMON started with pid=13, OS id=17399
RECO started with pid=14, OS id=17412
CJQ0 started with pid=15, OS id=17428
MMON started with pid=16, OS id=17430
Mon Feb 19 17:19:00 2007
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=17, OS id=17432
Mon Feb 19 17:19:00 2007
starting up 1 shared server(s) ...
Mon Feb 19 17:19:01 2007
lmon registered with NM - instance id 1 (internal mem no 0)
Mon Feb 19 17:19:01 2007
Reconfiguration started (old inc 0, new inc 2)
List of nodes:
0
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Mon Feb 19 17:19:01 2007
LMS 1: 0 GCS shadows cancelled, 0 closed
Mon Feb 19 17:19:01 2007
LMS 0: 0 GCS shadows cancelled, 0 closed
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Mon Feb 19 17:19:01 2007
LMS 1: 0 GCS shadows traversed, 0 replayed
Mon Feb 19 17:19:01 2007
LMS 0: 0 GCS shadows traversed, 0 replayed
Mon Feb 19 17:19:01 2007
Submitted all GCS remote-cache requests
Post SMON to start 1st pass IR
Fix write in gcs resources
Reconfiguration complete
LCK0 started with pid=20, OS id=17460
Mon Feb 19 17:19:02 2007
ALTER DATABASE MOUNT
Mon Feb 19 17:19:02 2007
This instance was first to mount
Mon Feb 19 17:19:02 2007
Starting background process ASMB
ASMB started with pid=22, OS id=17466
Starting background process RBAL
RBAL started with pid=23, OS id=17470
Loaded ASM Library - Generic Linux, version 2.0.2 (KABI_V2) library for asmlib interface
Mon Feb 19 17:19:06 2007
SUCCESS: diskgroup ORCL_DATA1 was mounted
Mon Feb 19 17:19:09 2007
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
Mon Feb 19 17:19:10 2007
Setting recovery target incarnation to 1
Mon Feb 19 17:19:10 2007
Successful mount of redo thread 1, with mount id 3308997510
Mon Feb 19 17:19:10 2007
Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
Completed: ALTER DATABASE MOUNT
Mon Feb 19 17:19:11 2007
ALTER DATABASE OPEN
This instance was first to open
Picked broadcast on commit scheme to generate SCNs
Mon Feb 19 17:19:11 2007
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=26, OS id=17679
Mon Feb 19 17:19:11 2007
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=27, OS id=17706
Mon Feb 19 17:19:11 2007
Thread 1 opened at log sequence 31
Current log# 1 seq# 31 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Successful open of redo thread 1
Mon Feb 19 17:19:11 2007
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Mon Feb 19 17:19:11 2007
ARC0: STARTING ARCH PROCESSES
Mon Feb 19 17:19:11 2007
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Mon Feb 19 17:19:11 2007
SMON: enabling cache recovery
Mon Feb 19 17:19:11 2007
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
ARC2 started with pid=28, OS id=17708
Mon Feb 19 17:19:11 2007
Successfully onlined Undo Tablespace 1.
Mon Feb 19 17:19:11 2007
SMON: enabling tx recovery
Mon Feb 19 17:19:11 2007
Database Characterset is WE8ISO8859P1
Mon Feb 19 17:19:11 2007
Instance recovery: looking for dead threads
Instance recovery: lock domain invalid but no dead threads
Mon Feb 19 17:19:12 2007
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
Mon Feb 19 17:19:12 2007
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=29, OS id=17735
Mon Feb 19 17:19:13 2007
Completed: ALTER DATABASE OPEN
Mon Feb 19 17:20:11 2007
Shutting down archive processes
Mon Feb 19 17:20:16 2007
ARCH shutting down
ARC2: Archival stopped
Mon Feb 19 17:23:35 2007
Starting background process EMN0
EMN0 started with pid=74, OS id=28350
Mon Feb 19 17:23:35 2007
Shutting down instance: further logons disabled
Mon Feb 19 17:23:36 2007
Stopping background process QMNC
Mon Feb 19 17:23:36 2007
Stopping background process CJQ0
Mon Feb 19 17:23:38 2007
Stopping background process MMNL
Mon Feb 19 17:23:39 2007
Stopping background process MMON
Mon Feb 19 17:23:40 2007
Shutting down instance (immediate)
License high water mark = 34
Mon Feb 19 17:23:40 2007
Stopping Job queue slave processes
Mon Feb 19 17:23:40 2007
Job queue slave processes stopped
All dispatchers and shared servers shutdown
Mon Feb 19 17:24:07 2007
ALTER DATABASE CLOSE NORMAL
Mon Feb 19 17:24:07 2007
SMON: disabling tx recovery
SMON: disabling cache recovery
Mon Feb 19 17:24:08 2007
Shutting down archive processes
Archiving is disabled
Mon Feb 19 17:24:18 2007
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
ARC1: Archiving disabled
ARCH shutting down
Mon Feb 19 17:24:18 2007
ARC1: Archival stopped
Mon Feb 19 17:24:18 2007
ARC0: Becoming the heartbeat ARCH
Mon Feb 19 17:24:18 2007
ARC0: Archiving disabled
ARCH shutting down
ARC0: Archival stopped
Mon Feb 19 17:24:19 2007
Thread 1 closed at log sequence 31
Successful close of redo thread 1
Mon Feb 19 17:24:19 2007
Completed: ALTER DATABASE CLOSE NORMAL
Mon Feb 19 17:24:19 2007
ALTER DATABASE DISMOUNT
Mon Feb 19 17:24:19 2007
SUCCESS: diskgroup ORCL_DATA1 was dismounted
Mon Feb 19 17:24:19 2007
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
Mon Feb 19 17:24:25 2007
freeing rdom 0
Mon Feb 19 17:24:35 2007
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 eth1 10.250.90.0 configured from OCR for use as a cluster interconnect
Interface type 1 eth0 10.250.90.0 configured from OCR for use as a public interface
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.2.0.
System parameters with non-default values:
processes = 300
sessions = 335
sga_max_size = 1073741824
__shared_pool_size = 255852544
__large_pool_size = 4194304
__java_pool_size = 4194304
__streams_pool_size = 0
spfile = +BSA_DATA1/bsaprod/spfilebsaprod.ora
sga_target = 1073741824
control_files = +ORCL_DATA1/bsaprod/controlfile/current.256.614969167
db_block_size = 8192
__db_cache_size = 801112064
compatible = 10.2.0.2.0
log_archive_dest_1 = LOCATION=+BSA_DATA1/bsaprod/
log_archive_format = %t_%s_%r.dbf
db_file_multiblock_read_count= 16
cluster_database = TRUE
cluster_database_instances= 2
db_create_file_dest = +ORCL_DATA1
thread = 1
instance_number = 1
undo_management = AUTO
undo_tablespace = UNDOTBS1
remote_login_passwordfile= EXCLUSIVE
db_domain =
dispatchers = (PROTOCOL=TCP) (SERVICE=bsaprodXDB)
remote_listener = LISTENERS_BSAPROD
job_queue_processes = 10
background_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/bdump
user_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/udump
core_dump_dest = /u01/app/oracle/oracle/admin/bsaprod/cdump
audit_file_dest = /u01/app/oracle/oracle/admin/bsaprod/adump
db_name = bsaprod
open_cursors = 300
pga_aggregate_target = 262144000
Cluster communication is configured to use the following interface(s) for this instance
10.250.90.107
Mon Feb 19 17:24:35 2007
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=29950
DIAG started with pid=3, OS id=29952
PSP0 started with pid=4, OS id=29954
LMON started with pid=5, OS id=29956
LMD0 started with pid=6, OS id=29958
LMS0 started with pid=7, OS id=29960
LMS1 started with pid=8, OS id=29964
MMAN started with pid=9, OS id=29968
DBW0 started with pid=10, OS id=29970
LGWR started with pid=11, OS id=29972
CKPT started with pid=12, OS id=29974
SMON started with pid=13, OS id=29976
RECO started with pid=14, OS id=29978
CJQ0 started with pid=15, OS id=29980
MMON started with pid=16, OS id=29982
Mon Feb 19 17:24:36 2007
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=17, OS id=29984
Mon Feb 19 17:24:36 2007
starting up 1 shared server(s) ...
Mon Feb 19 17:24:36 2007
lmon registered with NM - instance id 1 (internal mem no 0)
Mon Feb 19 17:24:39 2007
Reconfiguration started (old inc 0, new inc 4)
List of nodes:
0 1
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
* domain 0 not valid according to instance 1
* domain 0 valid = 0 according to instance 1
Mon Feb 19 17:24:39 2007
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Mon Feb 19 17:24:39 2007
LMS 1: 0 GCS shadows cancelled, 0 closed
Mon Feb 19 17:24:39 2007
LMS 0: 0 GCS shadows cancelled, 0 closed
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Mon Feb 19 17:24:39 2007
LMS 0: 0 GCS shadows traversed, 0 replayed
Mon Feb 19 17:24:39 2007
LMS 1: 0 GCS shadows traversed, 0 replayed
Mon Feb 19 17:24:39 2007
Submitted all GCS remote-cache requests
Post SMON to start 1st pass IR
Fix write in gcs resources
Reconfiguration complete
LCK0 started with pid=20, OS id=30069
Mon Feb 19 17:24:40 2007
ALTER DATABASE MOUNT
Mon Feb 19 17:24:41 2007
Starting background process ASMB
ASMB started with pid=22, OS id=30121
Starting background process RBAL
RBAL started with pid=23, OS id=30125
Loaded ASM Library - Generic Linux, version 2.0.2 (KABI_V2) library for asmlib interface
Mon Feb 19 17:24:45 2007
SUCCESS: diskgroup ORCL_DATA1 was mounted
Mon Feb 19 17:24:49 2007
Setting recovery target incarnation to 1
Mon Feb 19 17:24:49 2007
Successful mount of redo thread 1, with mount id 3309017062
Mon Feb 19 17:24:49 2007
Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
Mon Feb 19 17:24:51 2007
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
SUCCESS: diskgroup BSA_DATA1 was mounted
SUCCESS: diskgroup BSA_DATA1 was dismounted
Mon Feb 19 17:24:52 2007
Completed: ALTER DATABASE MOUNT
Mon Feb 19 17:24:52 2007
ALTER DATABASE OPEN
Picked broadcast on commit scheme to generate SCNs
Mon Feb 19 17:24:52 2007
LGWR: STARTING ARCH PROCESSES
ARC0 started with pid=25, OS id=30406
Mon Feb 19 17:24:53 2007
ARC0: Archival started
ARC1: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
ARC1 started with pid=26, OS id=30422
Mon Feb 19 17:24:53 2007
Thread 1 opened at log sequence 31
Current log# 1 seq# 31 mem# 0: +ORCL_DATA1/bsaprod/onlinelog/group_1.257.614969167
Successful open of redo thread 1
Mon Feb 19 17:24:53 2007
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Mon Feb 19 17:24:53 2007
ARC0: STARTING ARCH PROCESSES
Mon Feb 19 17:24:53 2007
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
Mon Feb 19 17:24:53 2007
SMON: enabling cache recovery
Mon Feb 19 17:24:53 2007
ARC2: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
ARC0: Becoming the heartbeat ARCH
ARC2 started with pid=27, OS id=30436
Mon Feb 19 17:25:07 2007
Successfully onlined Undo Tablespace 1.
Mon Feb 19 17:25:07 2007
SMON: enabling tx recovery
Mon Feb 19 17:25:07 2007
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=29, OS id=30882
Mon Feb 19 17:26:31 2007
Completed: ALTER DATABASE OPEN
Mon Feb 19 17:26:38 2007
ALTER SYSTEM SET service_names='bsaprod','bsatest' SCOPE=MEMORY SID='bsaprod1';
Mon Feb 19 17:26:53 2007
Shutting down archive processes
Mon Feb 19 17:26:58 2007
ARCH shutting down
ARC2: Archival stopped
Mon Feb 19 17:32:16 2007
Error: KGXGN aborts the instance (6)
Mon Feb 19 17:32:16 2007
Errors in file /u01/app/oracle/oracle/admin/bsaprod/bdump/bsaprod1_lmon_29956.trc:
ORA-29702: error occurred in Cluster Group Service operation
LMON: terminating instance due to error 29702
Mon Feb 19 17:32:17 2007
System state dump is made for local instance
System State dumped to trace file /u01/app/oracle/oracle/admin/bsaprod/bdump/bsaprod1_diag_29952.trc
Mon Feb 19 17:32:18 2007
Trace dumping is performing id=[cdmp_20070219173217]
Mon Feb 19 17:32:21 2007
Instance terminated by LMON, pid = 29956
Here is also the output of crs_stat -t on the first node:
Name            Type         Target   State    Host
ora....d1.inst  application  ONLINE   OFFLINE
ora....d2.inst  application  ONLINE   OFFLINE
ora....od1.srv  application  ONLINE   OFFLINE
ora....od2.srv  application  ONLINE   OFFLINE
ora....test.cs  application  ONLINE   OFFLINE
ora.bsaprod.db  application  ONLINE   OFFLINE
ora....SM1.asm  application  ONLINE   UNKNOWN  dsrvbd003
ora....ER.lsnr  application  ONLINE   UNKNOWN  dsrvbd003
ora....03.lsnr  application  ONLINE   UNKNOWN  dsrvbd003
ora....003.gsd  application  ONLINE   UNKNOWN  dsrvbd003
ora....003.ons  application  ONLINE   UNKNOWN  dsrvbd003
ora....003.vip  application  ONLINE   ONLINE   dsrvbd003
ora....SM2.asm  application  ONLINE   OFFLINE
ora....04.lsnr  application  ONLINE   OFFLINE
ora....004.gsd  application  ONLINE   OFFLINE
ora....004.ons  application  ONLINE   OFFLINE
ora....004.vip  application  ONLINE   OFFLINE
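To pull the failure signatures out of a long alert log like the one pasted above, a small grep wrapper helps. This is only a sketch: the function name scan_alert and the example log path are mine, not from the thread.

```shell
# scan_alert: print numbered lines matching the cluster-failure
# signatures seen in this thread (ORA-29702, KGXGN aborts,
# instance termination by LMON).
scan_alert() {
    # $1 = path to an alert log, e.g. under background_dump_dest:
    #      /u01/app/oracle/oracle/admin/bsaprod/bdump/alert_bsaprod1.log
    grep -n -E 'ORA-29702|KGXGN|terminating instance' "$1"
}
```

Run against the node 1 alert log, this would surface the 17:32:16 KGXGN abort and the ORA-29702 shown above; the LMON trace file named on those lines is where the real diagnosis starts.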