Oracle RAC (job distribution to individual nodes)

Is it possible in Real Application Clusters to dedicate one set of nodes to insertion jobs, another set to search jobs, and another set to Text indexing jobs?

If you use the job scheduler, you can assign each job to a separate job class, and to each job class you can assign a different service, offered by different instances in the cluster. When the job starts, it connects to the service defined for its job class ... and there you go.
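For illustration, here is a minimal sketch of that setup; the service, job class, and job names below are made up, and the service itself would be created with srvctl (e.g. srvctl add service -d MYDB -s TEXTIDX_SVC -r MYDB3,MYDB4) so that it runs only on the instances meant for text indexing:

-- map a job class to the service offered by the indexing instances
BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'TEXTIDX_CLASS',
    service        => 'TEXTIDX_SVC');
END;
/
-- any job placed in that class connects to TEXTIDX_SVC and therefore
-- runs only on the instances that offer it
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'SYNC_TEXT_INDEX',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN ctx_ddl.sync_index(''MY_TEXT_IDX''); END;',
    job_class  => 'TEXTIDX_CLASS',
    enabled    => TRUE);
END;
/

The same pattern applies to the insert and search workloads: one service per node set, one job class per service.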

Similar Messages

  • Oracle RAC with more than two nodes?

    Hello,
    Does anybody know of a reference client project that uses Oracle RAC with more than two nodes in a Linux environment?
    Many thanks!
    Norman Bock

    Hello Norman,
    XioTech is a SAN company that has a project called "THE TENS". They configured and ran a 10-node Oracle 9i RAC on Red Hat Linux. I understand they want to see if 32 nodes are possible. I am sure that if you ask them, they would be happy to give you the details.
    http://www.xiotech.com/
    Cheers

  • How to increase the SGA in Oracle RAC 10g with 2 nodes

    How do I increase sga_max_size and sga_target in Oracle RAC 10g with 2 nodes?
    I have Oracle 10g on HP-UX 11i in a RAC (2 nodes)
    with an 8 GB SGA, and I want to increase it to 12 GB.
    Can I alter these parameters without shutting down the entire database? And can I apply the change on one node first and on the second node later?
    On the first node I used:
    1- alter system set sga_max_size=16g scope=spfile;
    2- alter system set sga_target=12g scope=spfile;
    Then I restarted the instances one by one:
    srvctl stop instance -d my_database -i my_instance1 -o immediate
    srvctl start instance -d my_database -i my_instance1
    3- On the second node:
    srvctl stop instance -d my_database -i my_instance2 -o immediate
    srvctl start instance -d my_database -i my_instance2
    But my SGA is still the same 8 GB. Why doesn't it change?
    I changed these parameters and restarted the instance on the first node, then stopped and started the second node with srvctl, but my SGA does not change; it stays at 8 GB. However, these changes are in the spfile:
    prd2.sga_max_size=8589934592#internally adjusted
    prd1.sga_max_size=8589934592#internally adjusted
    *.sga_max_size=17179869184
    prd2.sga_target=8589934592
    prd1.sga_target=8589934592
    *.sga_target=12884901888
    prd2.thread=2
    prd1.thread=1
    How can I apply these changes node by node, or do I need to shut down the entire database?
    I need to make these changes without affecting my application, because I cannot shut down both nodes...

    Hi,
    I just checked on a test RAC configuration (HP-UX, 10.2.0.4)
    You don't need to stop the database.
    Keep your "rolling" original scenario but change :
    alter system set sga_max_size=16g scope spfile;
    alter system set sga_target=12g cope spfile;by
    alter system set sga_max_size=16g scope spfile sid = 'PRD1';
    alter system set sga_target=12g scope spfile sid = 'PRD1';
    alter system set sga_max_size=16g scope spfile sid = 'PRD2';
    alter system set sga_target=12g cope spfile sid = 'PRD2';Actually
    alter system set sga_max_size=16g scope spfile;
    alter system set sga_max_size=16g scope spfile SID='*';changes globally the values for every instance in the spfile ("*.XXXXXX" is updated) but it does not remove the specific entries already assigned to one particular instance (and it is your case !)
    Alternatively you could reset the values assigned specifically to one instance with "alter system reset" to have only "*.XXXX" for those parameters.
    Best regards
    Phil
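    To make the rolling change concrete, here is a rough sketch combining the reset and the global setting described above (the instance and database names are the poster's; check the SID spelling against your own spfile before running anything like this):
    -- remove the instance-specific entries so that only the *.<parameter> lines remain
    alter system reset sga_max_size scope=spfile sid='prd1';
    alter system reset sga_max_size scope=spfile sid='prd2';
    alter system reset sga_target scope=spfile sid='prd1';
    alter system reset sga_target scope=spfile sid='prd2';
    -- set the global values once
    alter system set sga_max_size=16g scope=spfile sid='*';
    alter system set sga_target=12g scope=spfile sid='*';
    -- then bounce one instance at a time, for example:
    -- srvctl stop instance -d my_database -i my_instance1 -o immediate
    -- srvctl start instance -d my_database -i my_instance1
    -- (repeat for my_instance2)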

  • Oracle RAC - Targeting clients at particular nodes?

    We have a deployment case that wanted to find out best practices.
    Currently there is a two node RAC setup.
    We have an application with one component that is write-heavy with only minor reads, and another component that is mostly reads with minor writes. Each component uses multiple connections, and there is a logical separation of which tables they write to.
    The currently proposed solution is to target the write-heavy component at one node/instance, while distributing the other component, which does a lot of reading and some writing, across both RAC instances.
    The question is what the best paradigm for this is: is there any documentation on when to target components at single instances versus letting RAC do its own distribution? There are different user accounts for the components that do the high-volume writing versus the ones that mostly select.
    Thanks.

    Hi,
    The question is what the best paradigm for this is: is there any documentation on when to target components at single instances versus letting RAC do its own distribution? There are different user accounts for the components that do the high-volume writing versus the ones that mostly select.
    I'm assuming you're using version 10g or later; Oracle RAC 9i does not have this feature.
    You can direct connections to nodes handling a common workload by using Oracle Services.
    To manage workloads or a group of applications, you can define services that you assign to a particular application or to a subset of an application's operations. You can also group work by type under services.
    Oracle recommends that all users who share a service have the same service level requirements. You can define specific characteristics for services and each service can be a separate unit of work. There are many options that you can take advantage of when using services. Although you do not have to implement these options, using them helps optimize application performance.
    When you define a service, you define which instances normally support that service. These are known as the PREFERRED instances. You can also define other instances to support a service if the service's preferred instance fails. These are known as AVAILABLE instances.
    Services are integrated with Resource Manager, which enables you to restrict the resources that are used by the users who connect with a service in an instance. The Resource Manager enables you to map a consumer group to a service so that users who connect with the service are members of the specified consumer group.
    When you use a service (11.1 or later) and you execute a SQL statement in parallel, the parallel processes run only on the instances that offer the service with which you originally connected to the database. This is the default behavior. It does not affect other parallel operations such as parallel recovery or the processing of GV$ queries. To override this behavior, set a value for the PARALLEL_INSTANCE_GROUP initialization parameter.
    There is more to learn about how services work than about how to configure them.
    Understanding how they work is essential to configuring them properly.
    http://download.oracle.com/docs/cd/B28359_01/rac.111/b28254/hafeats.htm#CHDGEBED
    http://www.ardentperf.com/pub/schneider-services.pdf
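    As a rough illustration of the PREFERRED/AVAILABLE idea (the service, database, and instance names below are hypothetical; srvctl syntax as in 10g/11.1):
    # a write-heavy service pinned to instance 1, with instance 2 as failover target
    srvctl add service -d MYDB -s OLTP_WRITE -r MYDB1 -a MYDB2
    # a read-mostly service spread across both instances
    srvctl add service -d MYDB -s REPORTING -r MYDB1,MYDB2
    srvctl start service -d MYDB -s OLTP_WRITE
    srvctl start service -d MYDB -s REPORTING
    # clients then use SERVICE_NAME=OLTP_WRITE or SERVICE_NAME=REPORTING in their connect strings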
    Any questions just ask.
    Regards,
    Levi Pereira

  • Oracle 11g RAC on CentOS 5.2 - node isn't working after hardware failure

    Hi,
    We have a 2-node Oracle RAC 11g running on identical nodes with Centos 5.2 as operating system.
    Due to a drive failure and a reboot yesterday, the ASM disk, which is connected via iSCSI, changed its name from sdd to sdc (the faulty RAID drive was removed, hence the change).
    The Oracle services didn't start, which appeared perfectly logical at the time, given that the ASM disk suddenly wasn't where it was supposed to be. But even today, after a replacement disk was inserted, the RAID was rebuilt, and the ASM disk is back to its original name in /dev/, the node doesn't seem to work properly. Some of the Oracle services are running, but there seems to be a problem with the listener, which might also explain why the node does not set its virtual IP after starting up.
    Luckily, the other node is unaffected and working perfectly fine.
    I'm far from having a clue about Oracle; it was sheer luck (that, and the very usable documentation from Oracle) that I managed to get the cluster running in the first place. I know my way around Linux, so if it's an operating-system-related problem I'll be able to fix it, but I can't figure out what the problem with Oracle is.
    The listener doesn't seem to be running:
    [oracle@serv-211 ~]$ lsnrctl status
    LSNRCTL for Linux: Version 11.1.0.6.0 - Production on 08-JUN-2010 11:12:00
    Copyright (c) 1991, 2007, Oracle. All rights reserved.
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    TNS-12541: TNS:no listener
    TNS-12560: TNS:protocol adapter error
    TNS-00511: No listener
    Linux Error: 111: Connection refused
    Starting it manually works, but not with the result I hoped for:
    [root@serv-211 oracle]# lsnrctl status
    LSNRCTL for Linux: Version 11.1.0.6.0 - Production on 08-JUN-2010 11:32:16
    Copyright (c) 1991, 2007, Oracle. All rights reserved.
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for Linux: Version 11.1.0.6.0 - Production
    Start Date 08-JUN-2010 11:30:42
    Uptime 0 days 0 hr. 1 min. 34 sec
    Trace Level support
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Log File /opt/oracle/11g/diag/tnslsnr/serv-211/listener/alert/log.xml
    Listener Trace File /opt/oracle/11g/diag/tnslsnr/serv-211/listener/trace/ora_22694_140227439994592.trc
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=serv-211)(PORT=1521)))
    The listener supports no services
    The command completed successfully
    [root@serv-211 oracle]#
    While other services appear to be fine:
    [oracle@serv-211 ~]$ crsctl check crs
    Cluster Synchronization Services appears healthy
    Cluster Ready Services appears healthy
    Event Manager appears healthy
    I feel a little lost in the vast amount of log files; a pointer in the right direction might actually be enough. I can't imagine it can be that serious.
    Please assist
    Regards

    Hi,
    As usual, as soon as I post a problem on a board, I find the solution myself.
    Although this way isn't really "linuxy", deleting the listener with the netca tool and adding a new one solved it.
    Regards.
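    For anyone hitting the same symptom, one quick check worth trying first (a sketch, reusing the node name from the post; verify names in your own cluster) is the nodeapps resource, since the VIP and the listener are both managed there:
    # as the clusterware/oracle owner
    srvctl status nodeapps -n serv-211    # shows the state of the VIP, GSD, listener and ONS
    srvctl start nodeapps -n serv-211     # restarts the VIP and the listener together
    crs_stat -t                           # overall view of the cluster resources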

  • Oracle RAC 2-node architecture: node 2 always gets evicted

    Hi,
    I have an Oracle RAC database with a simple 2-node architecture (hosts run RHEL 5.5 x86_64). The problem we are facing is that whenever there is a network failure on either node, node 2 always gets evicted (rebooted). We do not see any abnormal errors in the alert.log file on either node.
    The steps followed and results are:
    Node-1# service network restart
    Result: Node-2 evicted
    Node-2# service network restart
    Result: Node-2 evicted
    I would like to know why node 1 never gets evicted even when the network is down or restarted on node 1 itself. Is this normal?
    Regards,
    Raj

    Hi,
    Please find the output below:
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 50% heartbeat fatal, removal in 14.120 seconds
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) is impending reconfig, flag 132108, misstime 15880
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: local diskTimeout set to 27000 ms, remote disk timeout set to 27000, impending reconfig status(1)
    2011-06-03 16:36:05.994: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 760 > margin 750 cur_ms 1480138014 lastalive 1480137254
    2011-06-03 16:36:07.493: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:07.493: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:08.084: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 850 > margin 750 cur_ms 1480140104 lastalive 1480139254
    2011-06-03 16:36:09.831: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 75% heartbeat fatal, removal in 7.110 seconds
    2011-06-03 16:36:10.122: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 880 > margin 750 cur_ms 1480142134 lastalive 1480141254
    2011-06-03 16:36:11.112: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 860 > margin 750 cur_ms 1480143124 lastalive 1480142264
    2011-06-03 16:36:12.212: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 950 > margin 750 cur_ms 1480144224 lastalive 1480143274
    2011-06-03 16:36:12.487: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:12.487: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:13.840: [    CSSD][1216194880]clssnmPollingThread: local diskTimeout set to 200000 ms, remote disk timeout set to 200000, impending reconfig status(0)
    2011-06-03 16:36:14.881: [    CSSD][1205705024]clssgmTagize: version(1), type(13), tagizer(0x494dfe)
    2011-06-03 16:36:14.881: [    CSSD][1205705024]clssgmHandleDataInvalid: grock HB+ASM, member 2 node 2, birth 21
    2011-06-03 16:36:17.487: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:17.487: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:22.486: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:22.486: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: [network] failed recv attempt endp 0x2eb80c0 [0000000001fed69c] { gipcEndpoint : localAddr 'gipc://prddbs01:80b3-6853-187b-4d2e#192.168.7.1#33842', remoteAddr 'gipc://prddbs02:gm_prddbs-cluster#192.168.7.2#60074', numPend 4, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x1e10, pidPeer 0, flags 0x2616, usrFlags 0x0 }, req 0x2aaaac308bb0 [0000000001ff4b7d] { gipcReceiveRequest : peerName '', data 0x2aaaac2e3cd8, len 10240, olen 0, off 0, parentEndp 0x2eb80c0, ret gipc
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos op : sgipcnTcpRecv
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos dep : Connection reset by peer (104)
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos loc : recv
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos info: dwRet 4294967295, cookie 0x2aaaac308bb0
    2011-06-03 16:36:23.162: [    CSSD][1205705024]clssgmeventhndlr: Disconnecting endp 0x1fed69c ninf 0x2aaab0000f90
    2011-06-03 16:36:23.162: [    CSSD][1205705024]clssgmPeerDeactivate: node 2 (prddbs02), death 0, state 0x80000001 connstate 0x1e
    2011-06-03 16:36:23.162: [GIPCXCPT][1205705024]gipcInternalDissociate: obj 0x2eb80c0 [0000000001fed69c] { gipcEndpoint : localAddr 'gipc://prddbs01:80b3-6853-187b-4d2e#192.168.7.1#33842', remoteAddr 'gipc://prddbs02:gm_prddbs-cluster#192.168.7.2#60074', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x1e10, pidPeer 0, flags 0x261e, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
    2011-06-03 16:36:32.494: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:37.493: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:37.494: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:40.598: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 90% heartbeat fatal, removal in 2.870 seconds, seedhbimpd 1
    2011-06-03 16:36:42.497: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:42.497: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:43.476: [    CSSD][1216194880]clssnmPollingThread: Removal started for node prddbs02 (2), flags 0x20000, state 3, wt4c 0
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: Initiating sync 178830908
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssscUpdateEventValue: NMReconfigInProgress val 1, changes 57
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: local disk timeout set to 27000 ms, remote disk timeout set to 27000
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: new values for local disk timeout and remote disk timeout will take effect when the sync is completed.
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: Starting cluster reconfig with incarnation 178830908
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetupAckWait: Ack message type (11)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetupAckWait: node(1) is ALIVE
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908), indicating EXADATA fence initialization complete
    2011-06-03 16:36:43.476: [    CSSD][1237174592]List of nodes that have ACKed my sync: NULL
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmWaitForAcks: Ack message type(11), ackCount(1)
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: Node prddbs01, number 1, is EXADATA fence capable
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssscUpdateEventValue: NMReconfigInProgress val 1, changes 58
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: local disk timeout set to 27000 ms, remote disk timeout set t:
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Sending Event(2), type 2, incarn 178830907
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Node[1] state = 3, birth = 178830889, unique = 1305623432
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Node[2] state = 5, birth = 178830907, unique = 1307103307
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: Acknowledging sync: src[1] srcName[prddbs01] seq[73] sync[178830908]
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmSendAck: node 1, prddbs01, syncSeqNo(178830908) type(11)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmStartNMMon: node 1 active, birth 178830889
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleAck: src[1] dest[1] dom[0] seq[0] sync[178830908] type[11] ackCount(0)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmStartNMMon: node 2 active, birth 178830907
    2011-06-03 16:36:43.476: [    CSSD][1240850064]NMEVENT_SUSPEND [00][00][00][06]
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908), indicating EXADATA fence initialization complete
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmUpdateEventValue: CmInfo State val 5, changes 190
    2011-06-03 16:36:43.476: [    CSSD][1237174592]List of nodes that have ACKed my sync: 1
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmSuspendAllGrocks: Issue SUSPEND
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmWaitForAcks: done, msg type(11)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion:node1 product/protocol (11.2/1.4)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion: properties common to all nodes: 1,2,3,4,5,6,7,8,9,10,11,12,13,14
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion: min product/protocol (11.2/1.4)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmQueueGrockEvent: groupName(IG+ASMSYS$USERS) count(2) master(1) event(2), incarn 22, mbrc 2, to member 1, events 0x0, state 0x0
    2011-06-03 16:36:43.477: [    CSSD][1237174592]clssnmSetMinMaxVersion: max product/protocol (11.2/1.4)
    2011-06-03 16:36:43.477: [    CSSD][1237174592]clssnmNeedConfReq: No configuration to change
    etc.etc....
    Let me know if any other logfile is required. There are no unusual messages in /var/log/messages.
    Regards,
    Raj

  • Oracle RAC One Node

    Why did Oracle introduce Oracle RAC One Node in 11gR2?
    I have gone through the Oracle documents on Oracle RAC One Node, but I couldn't find much added advantage beyond instance failover. We already have technologies like Data Guard and some third-party software for instance failover.
    So why did Oracle introduce RAC One Node in the new release, i.e. 11gR2?
    What exactly does Oracle want to provide with Oracle RAC One Node?
    Thanks...
    Bharath

    Why RAC One Node?
    Oracle RAC One Node is a single instance of Oracle RAC running on one node in a cluster. The benefit of the RAC One Node option is that it allows you to consolidate many databases into one cluster without a lot of overhead, while also providing the high-availability benefits of failover protection, as well as online rolling patch application and rolling upgrades of the Oracle Clusterware.
    Another aspect of RAC One Node is that it allows you to limit the CPU utilization of individual database instances within the cluster through a feature called Resource Manager instance caging, which gives you the ability to change the limit dynamically as required.
    Furthermore, with RAC One Node there is no hard limit on server scalability: if applications outgrow the resources that a single node can supply, you can upgrade them online to full Oracle RAC.
    In the event that the node running Oracle RAC One Node becomes saturated and runs out of resources, you can migrate the instance to another node in the cluster using a utility called Omotion. Omotion in Oracle RAC 11gR2 allows you to migrate a running instance to another server without downtime or disruption of service in your environment.
    I hope that helps you understand.
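    For reference, the online migration itself is a one-liner in later 11gR2 releases (a sketch; the database and node names are placeholders, and on 11.2.0.1 the separate Omotion utility is used instead of this srvctl verb):
    # relocate the RAC One Node database to another server in the cluster, online
    srvctl relocate database -d mydb -n node2
    # optionally bound how long the old and new instances may overlap (minutes)
    srvctl relocate database -d mydb -n node2 -w 15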

  • Recommendations: Oracle RAC 10g in Solaris 10 Local/Logical Containers

    Dear Oracle experts et al.,
    I have a couple of questions about an Oracle 10g RAC implementation on Solaris and seek your advice. We are attempting to implement Oracle 10g RAC on the Solaris OS on the SPARC platform.
    1. We are wondering whether Oracle 10g RAC can be implemented in Solaris Local/Logical Containers. I was assuming that Oracle always links itself against OS binaries and libraries during software installation and hence needs an OS image/root disk to install onto. However, with containers, I assume we have a single Solaris installation and configuration that is shared with the containers configured inside it. In that situation, how does the Oracle installation proceed? Do I need to look at a scenario where the Global Container/Zone holds the Oracle installation and this image is shared out to the zones/containers? If so, which OS filesystems would need to be shared with these zones/containers?
    Additionally, even if this approach is supported, is it a recommended approach? I am unsure about the stability and functionality of Oracle in such cases and am not able to completely conceptualize it. However, I assume there are certain items that need to be appropriately taken care of. It would help if you could share observations from your experience.
    2. The idea of RAC we are looking at is to have multiple Oracle installations on top of a native clustering solution, say Veritas Cluster or Sun Cluster. Do we still need the Oracle cluster solution, Clusterware (CRS), on top of this to achieve Oracle clustering? Will I be able to install Oracle as a standalone installation on top of a native clustering solution such as Veritas Cluster or Sun Cluster?
    Our requirement is to have the above-mentioned multiple Oracle installations spread across two (2) separate hardware platforms, say Node A and Node B, and configure our cluster solution to behave as active-passive across Node A and Node B. In other words, I will configure a clustering solution like VRTS/SunCluster as active-passive, then have 3 Oracle installations on Node A and another 3 on Node B. I will configure one database for each of these Oracle software installations (with the idea of not having Clusterware between the clustering solution VRTS/SunCluster and the Oracle installation, if that works). I will then run 3 databases on each of these nodes. If any downtime happens on one of the nodes, say Node A, I will fail all Oracle databases and software over to the alternate available node, Node B in this case, using the native clustering solution, and I will want each database to behave as it did earlier on Node A. I am not sure, though, whether I will be able to bring a database up on Node B once the OS-level resources have failed over.
    We want to use Oracle 10g RAC Release 2 EE on the latest Solaris 10 OS release, or the one before the latest.
    Please share your thoughts.
    Regards!
    Sarat

    Sarat Chandra C wrote:
    Dear Oracle experts et al.,
    I have a couple of questions about an Oracle 10g RAC implementation on Solaris and seek your advice. We are attempting to implement Oracle 10g RAC on the Solaris OS on the SPARC platform.
    1. We are wondering whether Oracle 10g RAC can be implemented in Solaris Local/Logical Containers.
    My understanding is that RAC in a Zone (Container) is not supported by Oracle, and would not work anyway. Regardless of installation, RAC needs to do cluster-level work on the cluster configuration, changing network addresses dynamically and sending guaranteed messages over the cluster interconnect. None of this can be done in a Local Zone in Solaris, because Local Zones have fewer permissions than the Global Zone. This is part of the design of Solaris Zones, and has nothing to do with how Oracle RAC itself works on them.
    This is all down to the security model of Zones, and Local Zones lack the ability to do certain things, to stop them reconfiguring themselves and impacting other Zones. Hence RAC cannot do dynamic cluster reconfiguration in a Local Zone, such as changing virtual network addresses when a node fails.
    My understanding is that RAC just cannot work in a Local Zone. This was certainly true 5 years ago (mid 2005), and was a result of the inherent design and implementation of Zones in Solaris. Things may have changed, so check the Solaris documentation, and check if Oracle RAC is supported in Local Zones. However, as I said, this limitation was inherent in the design of Zones, so I do not see how Sun could possibly have changed it so that RAC would work in a Local Zone.
    To me, your only option is the Global Zone, which pretty much destroys the argument for having Zones on a Solaris system, unless you can host other non-Oracle applications in the other Zones.
    2. The idea of RAC we are looking at is to have multiple Oracle installations on top of a native clustering solution, say Veritas Cluster or Sun Cluster. Do we still need the Oracle cluster solution, Clusterware (CRS), on top of this to achieve Oracle clustering? Will I be able to install Oracle as a standalone installation on top of a native clustering solution such as Veritas Cluster or Sun Cluster?
    I am not sure the term 'native' is correct. All cluster software is low level and has components that run within the operating system, whether it is Sun Cluster, Veritas Cluster Server, or Oracle Clusterware. They are all as 'native' to Solaris as each other, and they all perform the same functions for Oracle RAC around cluster management: which nodes are members of the cluster, heartbeats between nodes, reliable fast message delivery, and so on.
    You only need one piece of Cluster software. So pick one and use it. If you use the Sun or Veritas cluster products, then you do not need the Oracle Clusterware software. But I would use it, because it is free (included with RAC), is from Oracle themselves and so guaranteed to work, is fully supported, and is one less third party product to deal with. Having an all Oracle software stack makes things simpler and more reliable, as far as I am concerned. You can be sure that Oracle will have fully tested RAC on their own Clusterware, and be able to replicate any issues in their own support environments.
    Officially the Sun and Veritas products will work and are supported. But when you get a problem with your Cluster environment, who are you going to call? You really want to avoid "finger pointing" when you have a problem, with each vendor blaming the cause of the problem on another vendor. Using an all Oracle stack is simpler, and ensures Oracle will "own" all your support problems.
    Also future upgrades between versions will be simpler, as Oracle will release all their software together, and have tested it together. When using third party Cluster software, you have to wait for all vendors to release new versions of their own software, and then wait again while it is tested against all the different third party software that runs on it. I have heard of customers stuck on old versions of certain cluster products, who cannot upgrade because there are no compatible combinations in the support matrices between the cluster product and Oracle database versions.
    I will configure a clustering solution like VRTS/SunCluster as active-passive, then have 3 Oracle installations on Node A and another 3 on Node B.
    As I said before, these Oracle installations will actually all be in the same Global Zone, because RAC will not go into Local Zones.
    John

  • Oracle RAC Interconnect, PowerVM VLANs, and the Limit of 20

    Hello,
    Our company has a requirement to build a multitude of Oracle RAC clusters on AIX using Power VM on 770s and 795 hardware.
    We presently have 802.1q trunking configured on our Virtual I/O Servers, and have currently consumed 12 of 20 allowed VLANs for a virtual ethernet adapter. We have read the Oracle RAC FAQ on Oracle Metalink and it seems to otherwise discourage the use of sharing these interconnect VLANs between different clusters. This puts us in a scalability bind; IBM limits VLANs to 20 and Oracle says there is a one-to-one relationship between VLANs and subnets and RAC clusters. We must assume we have a fixed number of network interfaces available and that we absolutely have to leverage virtualized network hardware in order to build these environments. "add more network adapters to VIO" isn't an acceptable solution for us.
    Does anyone know if Oracle can afford any flexibility which would allow us to host multiple Oracle RAC interconnects on the same 802.1q trunk VLAN? We will independently guarantee the bandwidth, latency, and redundancy requirements are met for proper Oracle RAC performance, however we don't want a design "flaw" to cause us supportability issues in the future.
    We'd like it very much if we could have a bunch of two-node clusters all sharing the same private interconnect. For example:
    Cluster 1, node 1: 192.168.16.2 / 255.255.255.0 / VLAN 16
    Cluster 1, node 2: 192.168.16.3 / 255.255.255.0 / VLAN 16
    Cluster 2, node 1: 192.168.16.4 / 255.255.255.0 / VLAN 16
    Cluster 2, node 2: 192.168.16.5 / 255.255.255.0 / VLAN 16
    Cluster 3, node 1: 192.168.16.6 / 255.255.255.0 / VLAN 16
    Cluster 3, node 2: 192.168.16.7 / 255.255.255.0 / VLAN 16
    Cluster 4, node 1: 192.168.16.8 / 255.255.255.0 / VLAN 16
    Cluster 4, node 2: 192.168.16.9 / 255.255.255.0 / VLAN 16
    etc.
    Whereas the concern is that Oracle Corp will only support us if we do this:
    Cluster 1, node 1: 192.168.16.2 / 255.255.255.0 / VLAN 16
    Cluster 1, node 2: 192.168.16.3 / 255.255.255.0 / VLAN 16
    Cluster 2, node 1: 192.168.17.2 / 255.255.255.0 / VLAN 17
    Cluster 2, node 2: 192.168.17.3 / 255.255.255.0 / VLAN 17
    Cluster 3, node 1: 192.168.18.2 / 255.255.255.0 / VLAN 18
    Cluster 3, node 2: 192.168.18.3 / 255.255.255.0 / VLAN 18
    Cluster 4, node 1: 192.168.19.2 / 255.255.255.0 / VLAN 19
    Cluster 4, node 2: 192.168.19.3 / 255.255.255.0 / VLAN 19
    Which eats one VLAN per RAC cluster.

    Thank you for your answer!!
    I think I roughly understand the argument behind a 2-node RAC and a 3-node or greater RAC. We, unfortunately, were provided with two physical pieces of hardware to virtualize to support production (and two more to support non-production) and as a result we really have no place to host a third RAC node without placing it within the same "failure domain" (I hate that term) as one of the other nodes.
    My role is primarily as a system engineer, and, generally speaking, our main goals are eliminating single points of failure. We may be misusing 2-node RACs to eliminate single points of failure since it seems to violate the real intentions behind RAC, which is used more appropriately to scale wide to many nodes. Unfortunately, we've scaled out to only two nodes, and opted to scale these two nodes up, making them huge with many CPUs and lots of memory.
    Other options, notably the active-passive failover cluster we have in HACMP or PowerHA on the AIX / IBM Power platform is unattractive as the standby node drives no resources yet must consume CPU and memory resources so that it is prepared for a failover of the primary node. We use HACMP / PowerHA with Oracle and it works nice, however Oracle RAC, even in a two-node configuration, drives load on both nodes unlike with an active-passive clustering technology.
    All that aside, I am posing the question to both IBM and our Oracle DBAs (who will ask Oracle Support). Typically the answers we get vary widely depending on the experience and skill level of the support personnel we reach on both the Oracle and IBM sides... so on a suggestion from a colleague (Hi Kevin!) I posted here. I'm concerned that the answer from Oracle Support will unthinkingly be "you can't do that, my script says to tell you the absolute most rigid interpretation of the support document", while all the time the same document talks of the use of NFS and/or iSCSI storage (eye roll).
    We have a massive deployment of Oracle EBS, and honestly the interconnect doesn't even touch 100 Mbit/s even though the configuration has been checked multiple times by Oracle and IBM, and with the knowledge that Oracle EBS is supposed to heavily leverage RAC. I haven't met a single person who doesn't look at our environment and suggest jumbo frames. It's a joke at this point... comments like "OMG YOU DON'T HAVE JUMBO FRAMES" and/or "OMG YOU'RE NOT USING INFINIBAND WHATTA NOOB" are commonplace when new DBAs are hired. I maintain that the utilization numbers don't support this.
    I can tell you that we have 8Gb fiber channel storage and 10Gb network connectivity. I would probably assume that there were a bottleneck in the storage infrastructure first. But alas, I digress.
    Mainly I'm looking for a real-world answer to this question. Aside from violating every last recommendation and making Oracle support folk gently weep at the suggestion, are there any issues with sharing interconnects between RAC environments that would prevent them from functioning and/or reduce their stability?
    We have rapid spanning tree configured, as far as I know, and our network folks have tuned the timers razor thin. We have Nexus 5k and Nexus 7k network infrastructure. The typical issues you'd find with standard spanning tree really don't affect us, because our network people are just that damn good.

  • Oracle RAC with QFS shared storage going down when one disk fails

    Hello,
    I have an Oracle RAC in my testing environment. The configuration follows:
    nodes: V210
    Shared Storage: A5200
    #clrg status
    Group Name Node Name Suspended Status
    rac-framework-rg host1 No Online
    host2 No Online
    scal-racdg-rg host1 No Online
    host2 No Online
    scal-racfs-rg host1 No Online
    host2 No Online
    qfs-meta-rg host1 No Online
    host2 No Offline
    rac_server_proxy-rg host1 No Online
    host2 No Online
    #metastat -s racdg
    racdg/d200: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
    Device Start Block Dbase Reloc
    d3s0 0 No No
    racdg/d100: Concat/Stripe
    Size: 143237376 blocks (68 GB)
    Stripe 0:
    Device Start Block Dbase Reloc
    d2s0 0 No No
    #more /etc/opt/SUNWsamfs/mcf
    racfs 10 ma racfs - shared
    /dev/md/racdg/dsk/d100 11 mm racfs -
    /dev/md/racdg/dsk/d200 12 mr racfs -
    When the disk /dev/did/dsk/d2 failed (I failed it by removing it from the array), Oracle RAC went offline on both nodes, and then both nodes panicked and rebooted. Now #clrg status shows the output below.
    Group Name Node Name Suspended Status
    rac-framework-rg host1 No Pending online blocked
    host2 No Pending online blocked
    scal-racdg-rg host1 No Online
    host2 No Online
    scal-racfs-rg host1 No Online
    host2 No Pending online blocked
    qfs-meta-rg host1 No Offline
    host2 No Offline
    rac_server_proxy-rg host1 No Pending online blocked
    host2 No Pending online blocked
    CRS is not started on either node. I would like to know if anybody has faced this kind of problem when using QFS on a diskgroup. When one disk fails, Oracle is not supposed to go offline, since the other disk is still working, and my QFS configuration is supposed to mirror these two disks!
    Many thanks in advance
    Ushas Symon

    I'm not sure why you say QFS is mirroring these disks. Shared QFS has no inherent mirroring capability; it relies on the underlying volume manager (VM) or the array to do that for it. If you need to mirror your storage, you do it at the VM level by creating a mirrored metadevice.
    Tim
    ---
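    For what it's worth, here is a rough sketch of what a mirrored metadevice for that diskgroup could look like with Solaris Volume Manager (the DID device names below are illustrative, not taken from the post; put the submirrors on disks in separate arrays or controllers):
    # create two one-way submirrors from separate DID devices
    metainit -s racdg d101 1 1 /dev/did/rdsk/d2s0
    metainit -s racdg d102 1 1 /dev/did/rdsk/d3s0
    # create the mirror on the first submirror, then attach the second
    metainit -s racdg d100 -m d101
    metattach -s racdg d100 d102
    # the mcf file can then keep referencing /dev/md/racdg/dsk/d100 as before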

  • Connect Database Host Name in an Oracle RAC Database

    Hi All,
    I am using Oracle SES 11g to create a "Table Source" and have the following question.
    I have added a new table source to crawl; in the "Database Host Name" field I want to connect to an Oracle RAC database server with two nodes.
    I have searched the documentation and can't find anything relevant to this scenario.
    Can anyone help me with that?
    Thank you.
    NG

    Check this Rittman Mead Consulting blog post: "Oracle BI EE 11g – Managing Host Name Changes".
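    If the source definition accepts a full connect string rather than a single host, one common way to reach a two-node RAC is a descriptor that lists both node VIPs, as in the sketch below (hostnames, port, and service name are placeholders; whether the SES "Database Host Name" field accepts this form is something to verify for your release):
    jdbc:oracle:thin:@(DESCRIPTION=
      (ADDRESS_LIST=
        (LOAD_BALANCE=ON)(FAILOVER=ON)
        (ADDRESS=(PROTOCOL=TCP)(HOST=racnode1-vip)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=racnode2-vip)(PORT=1521)))
      (CONNECT_DATA=(SERVICE_NAME=myservice)))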

  • Oracle RAC concept: load balancing

    Hello All,
    I have a question about Oracle RAC and how it balance loading.
    What I know about Oracle RAC is that it balances load between its nodes, so when a new session connects to the database, it is directed to the node with less load.
    Suppose I have an application that connects to the database with a small number of sessions, and some sessions execute huge processes (huge load) while the other sessions execute small processes (in terms of load).
    Below is an example scenario that I would like an answer for:
    Say I have a two-node RAC database.
    My application connected to this RAC database wants to execute three processes P1, P2, and P3, each with a corresponding session S1, S2, and S3.
    Let's say that P1 will take 60% of the database resources (memory, etc.), P2 will take 1%, and P3 1%.
    So my question is: is this session load balancing or process load balancing? Suppose that, at random (managed by Oracle RAC), S1 is connected to node 1 and uses node 1's resources, so the load on node 1 is 60%; when S2 and S3 need to connect, they will go to node 2 since node 1 is loaded. In that case node 1 will be using 60% of its resources (because of S1 executing P1) while node 2 uses only 2% (because of S2 and S3 executing P2 and P3).
    Is that how Oracle RAC works, or does it balance the load of S1 (60%) between node 1 and node 2?
    I am asking this because my application is not a user-facing application, so it connects to the database with a small number of sessions, and one of those sessions may execute a huge process while the others execute small processes.
    So in that case, how does RAC do the load balancing?
    Regards,

    I hope these links help you:
    Thread: Server side Load balancing in RAC
    Server Side Load Balancing Testing
    read Oracle documentation:
    http://www.oracle.com/pls/db102/search?word=server+side+load+balancing&partno=
    http://www.oracleracexpert.com/2010/01/oracle-rac-load-balancing-and-failover.html
    http://www.databasejournal.com/features/oracle/article.php/3659411/Oracle-RAC-Administration---Part-15-Connection-Load-Balancing-and-FAN.htm
    http://oracleinstance.blogspot.com/2010/08/transparent-application-failover-taf.html
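    To make the session-level nature of this concrete: connection load balancing spreads new sessions across instances, but each session then runs entirely on the instance it landed on (unless you use parallel execution across nodes). A client-side TNS alias like the sketch below (hostnames and service name are placeholders) enables connect-time balancing:
    RACDB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521)))
        (CONNECT_DATA =
          (SERVICE_NAME = racdb_svc)))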

  • Insert slow in Oracle RAC

    Dear friends,
    I implemented Oracle RAC on 2 Windows server nodes, with iSCSI storage disks for ASM. SELECT performance is almost twice that of a standalone Oracle machine, and the load is split almost equally. But when I tried INSERT, UPDATE, and DELETE, performance degraded and is poorer than on the standalone machine. Is this due to waiting for disk access?
    Regards,
    Aravind K R

    Aravind K R wrote:
    Dear friends,
    I implemented Oracle RAC on 2 Windows server nodes, with iSCSI storage disks for ASM. SELECT performance is almost twice that of a standalone Oracle machine, and the load is split almost equally. But when I tried INSERT, UPDATE, and DELETE, performance degraded and is poorer than on the standalone machine. Is this due to waiting for disk access?
    It may or may not be due to disk access. In general, for SELECTs, RAC gives better performance. But for DML, what you get is scalability; a straight speedup, in general, won't be there. Since you are on RAC, there are lots of interconnect messages exchanged between the nodes, so besides looking at disk access, make sure to check the performance of the HBAs, since they, in addition to the disks, are one of the most important factors that can affect performance.
    HTH
    Aman....
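    One quick way to see whether the DML slowdown is coming from interconnect (global cache) traffic rather than disk is to compare the 'gc' wait events across instances, for example with a query like this sketch:
    -- top RAC-related waits per instance (TIME_WAITED is in centiseconds)
    SELECT inst_id, event, total_waits, time_waited
    FROM   gv$system_event
    WHERE  event LIKE 'gc%'
    ORDER  BY time_waited DESC;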

  • In Oracle RAC, how does a SELECT fail over to another node if the serving node is evicted mid-fetch?

    In Oracle RAC, if a user runs a SELECT query and, while the data is being fetched, the node serving it is evicted, how does failover to another node happen internally?

    The query is re-issued as a flashback query and the client process can continue to fetch from the cursor. This is described in the Net Services Administrator's Guide, in the section on Transparent Application Failover.
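    For reference, a minimal sketch of a TAF-enabled TNS entry that lets a SELECT resume its fetch after a node failure (hostnames and service name are placeholders; SELECT-type failover replays the fetch but does not fail over in-flight transactions):
    RACDB_TAF =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = racdb_svc)
          (FAILOVER_MODE =
            (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))))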

  • ASM instances on a 2-node Oracle RAC 10g R2 on Red Hat 4 U1

    Hi all
    I'm experiencing a problem configuring diskgroups under the +ASM instances on a two-node Oracle RAC.
    I followed the official guide and also official documents from the Metalink site, but I'm stuck on the visibility of the ASM disks.
    I created fake disks on NFS with NetApp certified storage, binding them to block devices with the usual trick "losetup /dev/loopX /nfs/disk1",
    ran "oracleasm createdisk DISKX /dev/loopX" on one node and
    "oracleasm scandisks" on the other.
    With "oracleasm listdisks" I can see the disks at the OS level on both nodes, but when I try to create and mount a diskgroup in the ASM instances, all is well on the instance where I create the diskgroup, while the other one doesn't see the disks at all, and the diskgroup mount fails with:
    ERROR: no PST quorum in group 1: required 2, found 0
    Tue Sep 20 16:22:32 2005
    NOTE: cache dismounting group 1/0x6F88595E (DG1)
    NOTE: dbwr not being msg'd to dismount
    ERROR: diskgroup DG1 was not mounted
    Any help would be appreciated.
    Thanks a lot.
    Antonello

    I'm having this same problem. Did you ever find a solution?
