All connections go to the 2nd node only in a 2-node RAC cluster
Hello,
I have a 10.2.0.3 database on a two-node RAC cluster with only one service configured. The service is set to be preferred on both nodes.
However, all connections are landing on Node 2 only. Any idea where to look?
$> srvctl config service -d PSDB
psdbsrv1 PREF: psdb1 psdb2 AVAIL:
Thanks,
MM
Application is using the following connection string.
jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = PQ2-PS-db-01-vip)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = PQ2-PS-db-02-vip)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = PSDBSRV1)(FAILOVER_MODE =(TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))
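A first thing to check (a hedged sketch; the database and service names are taken from the post, and the expected output is not shown) is whether the service is actually running on both instances and registered with both local listeners:

```
srvctl status service -d PSDB -s psdbsrv1   # should list both instances
lsnrctl services                            # run on each node; look for PSDBSRV1 handlers
```

If the service is up on both nodes, also verify that REMOTE_LISTENER is set on both instances, since listener cross-registration is what makes both client- and server-side load balancing work.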
--MM
Similar Messages
-
Install multiple RAC databases on a 2-node RAC cluster
I am installing 5 RAC databases on a 2-node RAC cluster. I have setup SCAN using 3 IP addresses.
Do I have to use SCAN listener for all databases?
When installing the 3rd database, I get an ORA-12537: TNS connection closed error.
ENV: 11gR2 2-node RH5.x
Thanks!
I have setup SCAN using 3 IP addresses. Do I have to use SCAN listener for all databases?
These 3 SCAN IPs will work for all databases running under this cluster setup. You may also use the VIPs to make connections, as in 10g.
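The SCAN side of the configuration can be checked quickly (a hedged sketch of standard 11gR2 commands; output not shown):

```
srvctl config scan            # the SCAN name and its 3 IPs
srvctl config scan_listener   # SCAN listener names and ports
srvctl status scan_listener   # where each SCAN listener is running
```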
I get ORA-12537: TNS connection closed error.
That looks like a connectivity/configuration issue; please see the MOS note that covers it in detail:
How to Troubleshoot Connectivity Issue with 11gR2 SCAN Name [ID 975457.1] -
Multiple databases/instances on a 4-node RAC cluster including Physical Standby
OS: Windows 2003 Server R2 X64
DB: 10.2.0.4
Virtualization: NONE
Node Configuration: x64 architecture - 4-Socket Quad-Core (16 CPUs)
Node Memory: 128GB RAM
We are planning the following on the above-mentioned 4-node RAC cluster:
Node 1: DB1 with instanceDB11 (Active-Active: Load-balancing & Failover)
Node 2: DB1 with instanceDB12 (Active-Active: Load-balancing & Failover)
Node 3: DB1 with instanceDB13 (Active-Passive: Failover only) + DB2 with instanceDB21 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB31 (Active-Active: Load-balancing & Failover) + DB4 with instance41 (Active-Active: Load-balancing & Failover)
Node 4: DB1 with instanceDB14 (Active-Passive: Failover only) + DB2 with instanceDB22 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB32 (Active-Active: Load-balancing & Failover) + DB4 with instance42 (Active-Active: Load-balancing & Failover)
Note: DB1 will be the physical primary PROD OLTP database and will be open in READ-WRITE mode 24x7x365.
Note: DB2 will be a Physical Standby of DB1 and will be open in Read-Only mode for reporting purposes during the day-time, except for 3 hours at night when it will apply the logs.
Note: DB3 will be a Physical Standby of a remote database DB4 (not part of this cluster) and will be mounted in Managed Recovery mode for automatic failover/switchover purposes.
Note: DB4 will be the physical primary Data Warehouse DB.
Note: Going to 11g is NOT an option.
Note: Data Guard broker will be used across the board.
Please answer/advise of the following:
1. Is the above configuration supported and why so? If not, what are the alternatives?
2. Is the above configuration recommended and why so? If not, what are the recommended alternatives?
Hi,
As far as I understand, there's nothing wrong with the configuration, but you need to consider the points below while finalizing the design:
1. Number of CPUs on each server
2. Memory on each server
3. If you have a RAC physical standby, then apply (MRP0) will run on only one instance.
4. You are configuring a physical standby on the 3rd and 4th nodes of DB1's 4-node cluster, where the DB13 and DB14 instances are used only for failover. If there is a disaster or power failure in the entire data center, you lose both primary and standby (assuming primary and physical standby reside in the same data center), so this may not be a highly available architecture. If you use extended RAC for this configuration, it makes sense to place Nodes 1 and 2 in Datacenter A and Nodes 3 and 4 in Datacenter B.
Thanks,
Keyur -
Automatic restart of services on a 1-node RAC cluster with Clusterware
How do we enable a service to automatically start up when the DB starts up?
Thanks,
Dave
srvctl enable service -d DB
Thanks for your reply, M. Nauman. I researched that command and found we do have it enabled, and that it only works if the database instance was previously taken down. Since the database does not go down on an Archiver Hung error (we are using the FRA with an alternate location), this never kicks in and brings the service up. What we are looking for is something that will trigger when the archive logs error out and switch from the FRA (Flash Recovery Area) to our alternate disk location, or, more precisely, when they go back to a Valid status (on the FRA, after we've run an archive log backup to clear it).
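For reference, the enable command is usually given a specific service as well; a hedged sketch with hypothetical database and service names:

```
srvctl enable service -d MYDB -s MYSRV
srvctl start service -d MYDB -s MYSRV
srvctl status service -d MYDB
```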
I found out from our two senior DBAs that our other 2-node RAC environment does not suffer from this problem, only the newly created 1-node RAC cluster environment. The problem is we don't know what the difference is (a parameter on the DB, or the cluster, or what) or how to set it.
Anyone know?
Thanks,
Gib
Message was edited by:
Gib2008 -
Routing all connections through one node in a 2-node RAC cluster
Hi everyone
My client has the following requirement: an active/active RAC cluster (eg node1/node2), but with only one of the nodes being used (node1) and the other sitting there just in case.
For things like services, I'm sure this is straightforward enough: just have them set to preferred on node1 and available on node2.
For connections, I imagine I would just have the VIPs in order in the tnsnames file, but with LOAD_BALANCE=OFF, so they go through the TNS entries in order (i.e. node 1, then node 2); this would still allow the VIP to fail over if node 1 is down.
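That preferred/available split is created with srvctl; a hedged sketch with hypothetical database, service, and instance names:

```
# -r = preferred instances, -a = available (failover-only) instances
srvctl add service -d MYDB -s MYSRV -r MYDB1 -a MYDB2
srvctl start service -d MYDB -s MYSRV
```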
Does that sound about right? Have I missed anything?
Many thanks
Rup
user573914 wrote:
My client has the following requirement: an active/active RAC cluster (eg node1/node2), but with only one of the nodes being used (node1) and the other sitting there just in case.
Why? What is the reason for a "+just in case+" node, and when and how is it "enabled" when that just-in-case situation occurs?
This does not make any kind of sense from a high availability or redundancy view.
For connections, I imagine I would just have the vips in order in the tns file, but with LOAD_BALANCING=OFF, so they go through the tns entries in order (i.e node 1, then node 2), so this would still allow the vip to failover if node 1 is down.
Does that sound about right? Have I missed anything?
Won't work on 10g, and may not work on 11g. The Listener can and does hand off connections, depending on what the TNS connection string says. If you do not connect via a SID entry but via a SERVICE entry, and that service is available on multiple nodes, you may not (and often will not) be connected to the instance on the single IP that you used in your TNS connection.
Basic example:
// note that this TEST-RAC alias refers to a single specific IP of a cluster, and use
// SERVICE_NAME as the request
/home/billy> tnsping test-rac
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 18-JAN-2011 09:06:33
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/usr/lib/oracle/xe/app/oracle/product/10.2.0/server/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS=(PROTOCOL=TCP)(HOST= 196.1.83.116)(PORT=1521)) (LOAD_BALANCE=no) (CONNECT_DATA=(SERVER=shared)(SERVICE_NAME=myservicename)))
OK (50 msec)
// now connecting to the cluster using this TEST-RAC TNS alias - and despite we listing a single
// IP in our TNS connection, we are handed off to a different RAC node (as the service is available
// on all nodes)
// and this also happens despite our TNS connection explicitly requesting no load balancing
/home/billy> sqlplus scott/tiger@test-rac
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jan 18 09:06:38 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Data Mining and Real Application Testing options
SQL> !lsof -n -p $PPID | grep TCP
sqlplus 5432 billy 8u IPv4 2199967 0t0 TCP 10.251.93.58:33220->196.1.83.127:37031 (ESTABLISHED)
SQL>
So we connected to RAC node 196.1.83.116, and that listener handed us off to RAC node 196.1.83.127. The 11gR2 Listener seems to behave differently: it does not do a handoff (from a quick test I did on a 11.2.0.1 RAC) in the above scenario.
This issue aside - how do you deal with the just-in-case situation? How do you get clients to connect to node 2 when node 1 is down? Do you rely on the virtual IP of node 1 being switched to node 2? Is this a 100% safe and guaranteed method?
It can take some time (minutes, perhaps more) for a virtual IP address to fail over to another node. During that time, any client connection using that virtual IP will fail. Is this acceptable?
I dunno - I dislike this concept of your client's: treating the one RAC node as some kind of standby database for a just-in-case situation. I fail to see any logic in that approach. -
Theoretical question on having a "standby" instance on a 4-node RAC cluster.
We have been having discussions on how we can improve the overall stability and performance of our RAC cluster. Currently we are running multiple databases on a 2 node cluster. The idea is that we will add two more nodes to the cluster and then spread the instances across the 4 nodes but only have two active and 1 as a standby in the event of node failure.
In other words:
Node1, Node2, and Node3 would run DB1
Node4, Node3, and Node2 would run DB2
Additionally DB1_Instance3 would be shutdown on Node3 and only started if Node1 or 2 failed.
This would be the same for DB2 where DB2_Instance3 on Node2 would only be started if Node3 or Node4 failed.
Underneath them all would be the same ASM database shared for all 4 nodes.
Has anyone seen such a configuration? Is anyone aware of a white paper that discusses it?
The parameter active_instance_count is a 9i parameter and should only be set if you want to run RAC with a 2-node cluster: one instance active (all users connected there) and the second instance up but with no one actively using it. If instance one goes away, all work is moved to instance two. This is hot failover.
If you are adding a new instance to a RAC database, it is recommended to use DBCA or EM to add the instance. If you do not want users connected to the instance all the time, you can either leave it running but not enable any services on it until you need it, or stop the instance with srvctl stop instance. The cluster will not try to restart the instance automatically.
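The stop/start approach above can be sketched as follows (instance and service names are hypothetical):

```
# keep the third instance registered but down; Clusterware will not restart it
srvctl stop instance -d DB1 -i DB1_3
# after a node failure, bring it up and move the service onto it
srvctl start instance -d DB1 -i DB1_3
srvctl relocate service -d DB1 -s mysrv -i DB1_1 -t DB1_3
```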
With the services option, when you create a service, you define which instances you wish it to run on, and you can dynamically change where it runs. End users just connect to the service, and the listener directs the connection to the least-loaded instance providing the service at connect time. See the Workload Management with RAC 10g white paper on OTN (otn.oracle.com/rac) or Chapter 6 of the Admin & Deployment Guide. -
Failover takes 14 minutes on a 2-node RAC cluster installed on VirtualBox
I have a 2-node RAC 10gR2 installation without ASM on VirtualBox, for learning purposes; the storage is kept on the rac1 node and shared via NFS.
I connected a third machine to be used as a client by creating a service.
But when I shut down the rac2 node to cause a failover, it takes almost 14-15 minutes to reconnect to the rac1 node,
even though the VIP of rac2 is taken over by rac1 in not more than 30 seconds.
So I checked the listener log on the rac1 node:
the correct service update shows up after 14 minutes, every time I try.
Tried a lot of googling but found nothing.
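One client-side setting worth ruling out while debugging (a hedged sketch, not a confirmed fix for this particular delay) is a connect timeout in the client's sqlnet.ora, so dead addresses are abandoned quickly instead of waiting on TCP timeouts:

```
# client-side sqlnet.ora
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 3
```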
This is the service
RAC =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (FAILOVER = ON)
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip.localdomain)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip.localdomain)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC.WORLD)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 1)
      )
    )
  )
18-DEC-2013 18:34:48 * node_down * rac2 * 0
18-DEC-2013 18:34:48 * service_down * RAC2 * 0
18-DEC-2013 18:34:48 * service_update * RAC1 * 0
18-DEC-2013 18:34:48 * service_update * RAC1 * 0
18-DEC-2013 18:35:48 * service_update * RAC1 * 0
18-DEC-2013 18:35:48 * service_update * RAC1 * 0
18-DEC-2013 18:36:48 * service_update * RAC1 * 0
18-DEC-2013 18:36:48 * service_update * RAC1 * 0
18-DEC-2013 18:37:48 * service_update * RAC1 * 0
18-DEC-2013 18:37:48 * service_update * RAC1 * 0
18-DEC-2013 18:38:48 * service_update * RAC1 * 0
18-DEC-2013 18:38:48 * service_update * RAC1 * 0
18-DEC-2013 18:39:45 * service_update * RAC1 * 0
18-DEC-2013 18:39:45 * service_update * RAC1 * 0
18-DEC-2013 18:40:12 * service_update * RAC1 * 0
18-DEC-2013 18:40:12 * service_update * RAC1 * 0
18-DEC-2013 18:40:48 * service_update * RAC1 * 0
18-DEC-2013 18:40:48 * service_update * RAC1 * 0
18-DEC-2013 18:41:01 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=rac1.localdomain)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=LISTENER_RAC1)(VERSION=169869568)) * status * 0
18-DEC-2013 18:41:49 * service_update * RAC1 * 0
18-DEC-2013 18:41:49 * service_update * RAC1 * 0
18-DEC-2013 18:42:31 * service_update * RAC1 * 0
18-DEC-2013 18:42:31 * service_update * RAC1 * 0
18-DEC-2013 18:43:49 * service_update * RAC1 * 0
18-DEC-2013 18:43:49 * service_update * RAC1 * 0
18-DEC-2013 18:44:49 * service_update * RAC1 * 0
18-DEC-2013 18:44:49 * service_update * RAC1 * 0
18-DEC-2013 18:45:40 * service_update * RAC1 * 0
18-DEC-2013 18:45:40 * service_update * RAC1 * 0
18-DEC-2013 18:45:49 * service_update * RAC1 * 0
18-DEC-2013 18:45:49 * service_update * RAC1 * 0
18-DEC-2013 18:46:49 * service_update * RAC1 * 0
18-DEC-2013 18:46:49 * service_update * RAC1 * 0
18-DEC-2013 18:47:49 * service_update * RAC1 * 0
18-DEC-2013 18:47:49 * service_update * RAC1 * 0
18-DEC-2013 18:47:54 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=RAC.WORLD)(FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC)(RETRIES=180)(DELAY=1))(CID=(PROGRAM=sqlplus)(HOST=client.localdomain)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.10.10.103)(PORT=32774)) * establish * RAC.WORLD * 0
18-DEC-2013 18:47:55 * service_update * RAC1 * 0
18-DEC-2013 18:47:55 * service_update * RAC1 * 0
These entries came up when the failover state resumed:
[19-DEC-2013 15:17:05:546] ntt2err: entry
[19-DEC-2013 15:17:05:547] ntt2err: soc 9 error - operation=5, ntresnt[0]=530, ntresnt[1]=113, ntresnt[2]=0
[19-DEC-2013 15:17:05:547] ntt2err: exit
[19-DEC-2013 15:17:05:547] nttrd: exit
[19-DEC-2013 15:17:05:547] nsprecv: error exit
[19-DEC-2013 15:17:05:547] nserror: entry
[19-DEC-2013 15:17:05:547] nserror: nsres: id=0, op=68, ns=12570, ns2=12560; nt[0]=530, nt[1]=113, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
[19-DEC-2013 15:17:05:547] nsrdr: error exit
[19-DEC-2013 15:17:05:547] nsdo: nsctxrnk=0
[19-DEC-2013 15:17:05:547] nsdo: error exit
[19-DEC-2013 15:17:05:547] nioqrc: recv: packet reader error -> translated to IFCR_EOF
[19-DEC-2013 15:17:05:547] nioqer: entry
[19-DEC-2013 15:17:05:547] nioqer: incoming err = 12151
[19-DEC-2013 15:17:05:547] nioqce: entry
[19-DEC-2013 15:17:05:547] nioqce: exit
[19-DEC-2013 15:17:05:547] nioqer: returning err = 3113
[19-DEC-2013 15:17:05:547] nioqer: exit
[19-DEC-2013 15:17:05:547] nioqrc: exit
[19-DEC-2013 15:17:05:547] nioqrs: entry
[19-DEC-2013 15:17:05:547] nioqrs: state = normal (0)
[19-DEC-2013 15:17:05:547] nioqrs: reset called, but connection in EOF state.
[19-DEC-2013 15:17:05:547] nioqrs: exit
[19-DEC-2013 15:17:05:547] nioqds: entry
[19-DEC-2013 15:17:05:547] nioqds: disconnecting...
[19-DEC-2013 15:17:05:547] nsclose: entry
[19-DEC-2013 15:17:05:547] nstimarmed: entry
[19-DEC-2013 15:17:05:547] nstimarmed: no timer allocated
[19-DEC-2013 15:17:05:547] nstimarmed: normal exit
[19-DEC-2013 15:17:05:547] nttctl: entry
[19-DEC-2013 15:17:05:547] nttctl: entry
[19-DEC-2013 15:17:05:547] nsdo: entry
[19-DEC-2013 15:17:05:547] nsdo: cid=0, opcode=66, *bl=0, *what=0, uflgs=0x0, cflgs=0x2
[19-DEC-2013 15:17:05:548] nsdo: rank=64, nsctxrnk=0
[19-DEC-2013 15:17:05:548] nsdo: nsctx: state=1, flg=0x1004009, mvd=0
[19-DEC-2013 15:17:05:548] nsdo: entry
There are a lot more entries, but I can't really paste all of them here.
I still have no clue whatsoever regarding what is actually happening. -
SOA with Oracle 2 node RAC cluster
Hi All,
Just a simple doubt: I have successfully installed and configured SOA Suite 11.1.1.3 & BAM in one WLS 10.3.3 domain on a Linux box and can access all the applications like the BAM console, BPEL console, etc. I can also see all my data sources deployed under Data Sources with a single-node database.
1. Now I have to re-configure this whole SOA Suite with RAC (a 2-node database cluster). What changes or configuration are needed to implement SOA Suite with a RAC database?
2. Do I need to create a "Multi Data Source" to configure RAC with SOA Suite?
Thanks
Sam
DB wrote:
This is regarding Oracle RAC, so if there is a specific category, please let me know.
I have installed OEL Linux 5.6 as the guest OS (using VirtualBox) on two laptops.
I want to install a 2-node Oracle 10gR2 RAC with OEL Linux as the OS and each laptop as one node.
I read the docs and understood that there must be shared storage for Oracle Clusterware and Oracle ASM for Oracle RAC to work.
Please let me know the steps to create shared storage for Oracle Clusterware and Oracle ASM (considering VirtualBox OEL) and to configure the public, private, and virtual IPs.
I already have a document for creating a 2-node Oracle RAC using VirtualBox with two nodes on the same laptop, so please don't suggest that doc.
Thanks,
DB
Maybe my step-by-step RAC installation guide can help you somehow?
http://kamranagayev.wordpress.com/2011/04/05/step-by-step-installing-oracle-10g-rac-on-vmware/ -
I cannot get all my ringtones to transfer to my VIP email alerts. There is a specific tone I need for one VIP email alert, but that one doesn't transfer; however, it IS in my ringtones and text tones. How can I get that particular tone for my VIP email?
Only Apple Account Security could help at this point. You can try calling Apple Support in Canada - you'll have to find one of the several ways, such as Skype, to call an 800 number from outside of the relevant country - and ask for Account Security and see if they can help. Or you can find a friend who speaks Chinese and ask them to help you talk to Apple Support in China. There are really no other options that I know of.
Note, by the way, that these are user-to-user support forums. You aren't speaking with Apple when you post here.
Regards. -
Parallel queries are failing in an 8-node RAC DB
While running queries with parallel hints, the queries are failing with
ORA-12805 parallel query server died unexpectedly
Upon checking the alert logs, I couldn't find anything about ORA-12805, but I did find this error. Please help me fix this problem:
Fatal NI connect error 12537, connecting to:
(LOCAL=NO)
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.7.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.7.0 - Production
Time: 15-MAY-2012 16:49:15
Tracing not turned on.
Tns error struct:
ns main err code: 12537
TNS-12537: TNS:connection closed
ns secondary err code: 12560
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
ORA-609 : opiodr aborting process unknown ospid (18807_47295439087424)
Tue May 15 16:49:16 2012
A couple of thoughts come immediately to mind:
1. When I read ... "Tracing not turned on" ... I wonder to myself ... why not turn on tracing?
2. When I read ... "Version 11.1.0.7.0" ... I wonder to myself ... why not apply all of the patches Oracle has created in the last 3 years and see if having a fully patched version addresses the issue?
3. When I read ... "parallel query server died" ... I wonder whether you have gone to support.oracle.com and looked up the causes and solutions for Parallel Query Server dying?
Of course I also wonder why you have an 8-node cluster, as that adds substantial complexity, which leads me to wonder: "is it happening on only one node, or all nodes?"
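Point 1 can be acted on with a couple of lines in the server's sqlnet.ora (a hedged sketch; the trace directory is hypothetical):

```
# sqlnet.ora on the database server
TRACE_LEVEL_SERVER = 16
TRACE_DIRECTORY_SERVER = /u01/app/oracle/net_trace
```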
Hope this helps. -
HELP !! only one ASM instance up at any given time on a 2 node RAC cluster
OS: Solaris 10
Oracle: 10.2.0.4
Problem: Installing ASM, dbca hangs, and errors out with End of communication channel.
Only one ASM instance can be brought up at any given time
PLEASE HELP !!!!
- nk
alert log
======
Thu Oct 1 23:09:54 2009
lmon registered with NM - instance id 1 (internal mem no 0)
Thu Oct 1 23:09:54 2009
Reconfiguration started (old inc 0, new inc 12)
ASM instance
List of nodes:
0 1
Global Resource Directory frozen
Communication channels reestablished
Thu Oct 1 23:24:59 2009
Errors in file /oracle/admin/+ASM/bdump/+asm1_lmon_1999.trc:
ORA-00481: LMON process terminated with error
Thu Oct 1 23:24:59 2009
LMON: terminating instance due to error 481
Thu Oct 1 23:24:59 2009
System state dump is made for local instance
Thu Oct 1 23:24:59 2009
Trace dumping is performing id=[cdmp_20091001232459]
Thu Oct 1 23:25:00 2009
Instance terminated by LMON, pid = 1999
Thu Oct 1 23:25:03 2009
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 bge1 192.168.10.0 configured from OCR for use as a cluster interconnect
Interface type 1 bge0 10.134.246.32 configured from OCR for use as a public interface
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as /oracle/product/10.2.0/db_1/dbs/arch
Autotune of undo retention is turned off.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.4.0.
System parameters with non-default values:
large_pool_size = 12582912
instance_type = asm
cluster_database = TRUE
instance_number = 1
remote_login_passwordfile= EXCLUSIVE
background_dump_dest = /oracle/admin/+ASM/bdump
user_dump_dest = /oracle/admin/+ASM/udump
core_dump_dest = /oracle/admin/+ASM/cdump
asm_diskgroups =
Cluster communication is configured to use the following interface(s) for this instance
192.168.10.1
Thu Oct 1 23:25:03 2009
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=17923
DIAG started with pid=3, OS id=17925
PSP0 started with pid=4, OS id=17927
LMON started with pid=5, OS id=17929
LMD0 started with pid=6, OS id=17931
LMS0 started with pid=7, OS id=17933
MMAN started with pid=8, OS id=17937
DBW0 started with pid=9, OS id=17939
LGWR started with pid=10, OS id=17941
CKPT started with pid=11, OS id=17943
SMON started with pid=12, OS id=17945
RBAL started with pid=13, OS id=17955
GMON started with pid=14, OS id=17957
Thu Oct 1 23:25:04 2009
lmon registered with NM - instance id 1 (internal mem no 0)
Thu Oct 1 23:25:04 2009
Reconfiguration started (old inc 0, new inc 14)
ASM instance
List of nodes:
0 1
Global Resource Directory frozen
Communication channels reestablished
Thu Oct 1 23:40:09 2009
Errors in file /oracle/admin/+ASM/bdump/+asm1_lmon_17929.trc:
ORA-00481: LMON process terminated with error
Thu Oct 1 23:40:09 2009
LMON: terminating instance due to error 481
Thu Oct 1 23:40:09 2009
System state dump is made for local instance
Thu Oct 1 23:40:09 2009
Trace dumping is performing id=[cdmp_20091001234009]
Thu Oct 1 23:40:10 2009
Instance terminated by LMON, pid = 17929
Thu Oct 1 23:40:12 2009
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Interface type 1 bge1 192.168.10.0 configured from OCR for use as a cluster interconnect
Interface type 1 bge0 10.134.246.32 configured from OCR for use as a public interface
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as /oracle/product/10.2.0/db_1/dbs/arch
Autotune of undo retention is turned off.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.4.0.
System parameters with non-default values:
large_pool_size = 12582912
instance_type = asm
cluster_database = TRUE
instance_number = 1
remote_login_passwordfile= EXCLUSIVE
background_dump_dest = /oracle/admin/+ASM/bdump
user_dump_dest = /oracle/admin/+ASM/udump
core_dump_dest = /oracle/admin/+ASM/cdump
asm_diskgroups =
Cluster communication is configured to use the following interface(s) for this instance
192.168.10.1
Thu Oct 1 23:40:13 2009
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=26086
DIAG started with pid=3, OS id=26088
PSP0 started with pid=4, OS id=26090
LMON started with pid=5, OS id=26092
LMD0 started with pid=6, OS id=26094
LMS0 started with pid=7, OS id=26096
MMAN started with pid=8, OS id=26100
DBW0 started with pid=9, OS id=26102
LGWR started with pid=10, OS id=26112
CKPT started with pid=11, OS id=26114
SMON started with pid=12, OS id=26116
RBAL started with pid=13, OS id=26118
GMON started with pid=14, OS id=26120
Thu Oct 1 23:40:13 2009
lmon registered with NM - instance id 1 (internal mem no 0)
Thu Oct 1 23:40:14 2009
Reconfiguration started (old inc 0, new inc 16)
ASM instance
List of nodes:
0 1
Global Resource Directory frozen
Communication channels reestablished -
Different hardware for a 2-node RAC cluster
Dear All,
Can I implement a 2-node RAC cluster on two nodes with different hardware, like different server models?
Regards
hungry_dba wrote:
Dear All,
Can I implement a 2-node RAC cluster on two nodes with different hardware, like different server models?
Hi,
Also you can read tech note below:
*RAC: Frequently Asked Questions [ID 220970.1]*
Can I have different servers in my Oracle RAC? Can they be from different vendors? Can they be different sizes?
Oracle RAC does support a cluster with nodes that have different hardware configurations. An example is a cluster with 3 nodes with 4 CPUs and another node with 6 CPUs. This can easily occur when adding a new node after the cluster has been in production for a while. For this type of configuration, customers must consider some additional features to get the optimal cluster performance. The servers used in the cluster can be from different vendors; this is fully supported as long as they run the same binaries. Since many customers implement Oracle RAC for high availability, you must make sure that your hardware vendor will support the configuration. If you have a failure, will you get support for the hardware configuration?
Regards,
Levi Pereira -
Getting Error in starting VIP in 3 NODE RAC Cluster in VMWARE
hi
Please can someone help me with why VIPCA is failing to start the VIP on RAC node 3? It gives the errors CRS-1006 and CRS-0215: no more members. The network configuration is as follows:
/etc/hosts
127.0.0.1 localhost.localdomain localhost
#Public IP
192.168.2.131 rac1.sun.com rac1
192.168.2.132 rac2.sun.com rac2
192.168.2.133 rac3.sun.com rac3
#Private IP
10.10.10.31 rac1-priv rac1-priv
10.10.10.32 rac2-priv rac2-priv
10.10.10.33 rac3-priv rac3-priv
#Virtual IP
192.168.2.131 rac1-vip.sun.com rac1-vip
192.168.2.132 rac2-vip.sun.com rac2-vip
192.168.2.133 rac3-vip.sun.com rac3-vip
/etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac1.sun.com
GATEWAY=192.168.2.1
Thanks in Advance
You should use some other, new IPs for the VIPs.
PLEASE CHANGE THE VIP IPs and try again:
192.168.2.131 rac1-vip.sun.com rac1-vip
192.168.2.132 rac2-vip.sun.com rac2-vip
192.168.2.133 rac3-vip.sun.com rac3-vip
Change the IPs to some others not used by any machines.
sample /etc/hosts file
127.0.0.1 localhost.localdomain localhost
# Public
10.1.10.201 rac1.localdomain rac1
10.1.10.202 rac2.localdomain rac2
#Private
10.1.9.201 rac1-priv.localdomain rac1-priv
10.1.9.202 rac2-priv.localdomain rac2-priv
#Virtual
*10.1.10.203 rac1-vip.localdomain rac1-vip*
*10.1.10.204 rac2-vip.localdomain rac2-vip* -
OCR and Voting Disk File Permissions for an NFS 2-node RAC cluster
This is a fresh reinstall of Oracle 11gR2 Clusterware on Red Hat Enterprise Linux 64bit and I'm using NFS as my shared storage.
I've created three file systems, /ocrVote01, /ocrVote02, and /ocrVote03, as NFS shares and exported them to the two nodes that will use them as shared OCR and voting disks. Currently I'm getting errors when I run root.sh.
I've done the following:
chown rac:dba -R /ocrVote01 /ocrVote02 /ocrVote03
It appears that the root.sh script is trying to create a directory /ocrVote01/storage and getting permission denied.
Any idea why root cannot create this directory?
root.sh also gets a permission-denied error at /u01/gridinfra/11.2.0/GI/crs/install/crsconfig_lib.pm line 4478.
Any ideas on what file permissions should be set for /ocrVote01, /ocrVote02, and /ocrVote03?
Thanks
32352 close(3) = 0
32352 write(2, "mkdir /ocrVote01/storage/: Permi"..., 112) = 112
| 00000 6d 6b 64 69 72 20 2f 6f 63 72 56 6f 74 65 30 31 **mkdir /o crVote01** |
| 00010 2f 73 74 6f 72 61 67 65 2f 3a 20 50 65 72 6d 69 /storage /: Permi |
| 00020 73 73 69 6f 6e 20 64 65 6e 69 65 64 20 61 74 20 ssion de nied at |
| 00030 2f 75 30 31 2f 67 72 69 64 69 6e 66 72 61 2f 31 /u01/gri dinfra/1 |
| 00040 31 2e 32 2e 30 2f 47 49 2f 63 72 73 2f 69 6e 73 1.2.0/GI /crs/ins |
| 00050 74 61 6c 6c 2f 63 72 73 63 6f 6e 66 69 67 5f 6c tall/crs config_l |
| 00060 69 62 2e 70 6d 20 6c 69 6e 65 20 34 34 37 38 0a ib.pm li ne 4478. |
32352 exit_group(25) = ?
After reading the OTN feedback, which is very much appreciated, I decided to look deeper into my NFS configuration and found that /etc/exports had the incorrect configuration for a shared disk on which root can make the appropriate changes to the voting disk and CRS files. The correct /etc/fstab configuration when using NFS as shared storage for the voting disk and CRS:
192.168.1.21:/ocrVote01 /ocrVote01 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 1 2
192.168.1.21:/ocrVote02 /ocrVote02 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 1 2
192.168.1.21:/ocrVote03 /ocrVote03 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 1 2 -
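The corrected /etc/exports itself isn't shown above; for root on the RAC nodes to create directories over NFS, the export typically needs no_root_squash. A hedged server-side sketch (the subnet is hypothetical):

```
# /etc/exports on 192.168.1.21; re-export with 'exportfs -ra' after editing
/ocrVote01 192.168.1.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
/ocrVote02 192.168.1.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
/ocrVote03 192.168.1.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
```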
How to write Parallel DML in a 2-node RAC Cluster
Any ideas on how to write a DML statement that will run in parallel on a two-node cluster? I would like to scale a DML statement within a RAC environment. Thanks
Check out [this article|http://www.oracle.com/technology/pub/articles/conlon_rac.html].
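Beyond the linked article, the basic pattern is to enable parallel DML for the session and hint the statement; with default settings the query coordinator can recruit parallel slaves on both RAC instances. A hedged sketch with hypothetical table names:

```
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 8) */ INTO target_tab t
SELECT /*+ PARALLEL(s, 8) */ * FROM source_tab s;

COMMIT;
```

In 10g, instance placement of the slaves can additionally be constrained with the INSTANCE_GROUPS and PARALLEL_INSTANCE_GROUP parameters if cross-instance traffic becomes a concern.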