OIM RAC Cluster

Hi, has anyone installed OIM in a RAC environment on a 2-node cluster? I have OIM up and running on one node, and all is fine. We have two servers, both with the App Server installed, and both nodes share the RAC database. I tried to install OIM on the second node, but it fails, I think, when it tries to reach the database. I know we should not have to run the prepare.sh script again, so how do we just lay the OIM product down?

Hi,
How do I install OIM 11g in high-availability mode? Can anyone please provide the steps or any documentation for installing it?

Similar Messages

  • Multiple databases/instances on 4-node RAC Cluster including Physical Stand

    OS: Windows 2003 Server R2 X64
    DB: 10.2.0.4
    Virtualization: NONE
    Node Configuration: x64 architecture - 4-Socket Quad-Core (16 CPUs)
    Node Memory: 128GB RAM
    We are planning the following on the above-mentioned 4-node RAC cluster:
    Node 1: DB1 with instanceDB11 (Active-Active: Load-balancing & Failover)
    Node 2: DB1 with instanceDB12 (Active-Active: Load-balancing & Failover)
    Node 3: DB1 with instanceDB13 (Active-Passive: Failover only) + DB2 with instanceDB21 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB31 (Active-Active: Load-balancing & Failover) + DB4 with instance41 (Active-Active: Load-balancing & Failover)
    Node 4: DB1 with instanceDB14 (Active-Passive: Failover only) + DB2 with instanceDB22 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB32 (Active-Active: Load-balancing & Failover) + DB4 with instance42 (Active-Active: Load-balancing & Failover)
    Note: DB1 will be the physical primary PROD OLTP database and will be open in READ-WRITE mode 24x7x365.
    Note: DB2 will be a Physical Standby of DB1 and will be open in Read-Only mode for reporting purposes during the day-time, except for 3 hours at night when it will apply the logs.
    Note: DB3 will be a Physical Standby of a remote database DB4 (not part of this cluster) and will be mounted in Managed Recovery mode for automatic failover/switchover purposes.
    Note: DB4 will be the physical primary Data Warehouse DB.
    Note: Going to 11g is NOT an option.
    Note: Data Guard broker will be used across the board.
    Please answer/advise of the following:
    1. Is the above configuration supported and why so? If not, what are the alternatives?
    2. Is the above configuration recommended and why so? If not, what are the recommended alternatives?

    Hi,
As far as I understand, there's nothing wrong with the configuration, but you need to consider the points below when finalizing the design.
1. Number of CPUs on each server.
2. Memory on each server.
3. If you have a RAC physical standby, then apply (MRP0) will run on only one instance.
4. You are configuring the physical standby of DB1 on the 3rd and 4th nodes of the same 4-node cluster, with the DB13 and DB14 instances used only for failover. If you have a disaster or power failure across the entire data center, you lose both primary and standby (assuming primary and physical standby reside in the same data center), so this may not be a highly available architecture. It does make sense if you use extended RAC for this configuration, with Nodes 1 and 2 in Datacenter A and Nodes 3 and 4 in Datacenter B.
    Thanks,
    Keyur

  • Interesting load issue on a new 11.2.0.3.0 RAC cluster

    Hi All,
    This is for a two node 11.2.0.3.0 Std RAC cluster running RHEL 5.4 x64.
    I've built a good few RAC clusters before, and this is a new issue in 11.2.0.3.0 (I haven't seen it in 10.2.0.4/5, 11.1.0.6/7, or 11.2.0.1/2). What I've noticed is that the grid infrastructure processes are "busier" on both nodes than they were in previous releases. These include, but are not limited to, ocssd.bin, gipcd.bin, and oraagent.bin.
    Load isn't "high", but the database isn't in use and the load on the server is sitting at around 1.05, whereas on other idle clusters it would be a quarter of that, on average. Has anyone else observed this behavior? If possible, provide a MOS article.
    If not, I will escalate this to Oracle and see what they say.
    Thanks.

It seems the grid processes in 11g are not fully optimized/tested, so a higher reported load can appear even though no 'real' load is on the system.
A few months ago we had a similar situation on an 11.2.0.2 two-node RAC.
On one node, the grid user (the eons resource) was causing high CPU usage, with OS load varying from 3-4 even though the database was almost completely inactive and no software other than Oracle was installed on the nodes; on the second node, the load at the same time varied from 0.2-0.5.
I resolved it by stopping and starting the eons resource on the overloaded node.
    There are several articles on support.oracle.com about similar situations with grid.
    Some of them are:
    High Resource Usage by 11.2.0.1 EONS [ID 1062675.1]
    Bug 9378784: EONS HIGH RESOURCE USAGE

  • RMAN Backup to Disk in a RAC cluster...?

    We have a two-node RAC cluster, using Linux and ASM. Pretty typical setup.
    We are backing up to disk via RMAN. Right now, that filesystem is mounted on node #1. It's a SAN volume but is not presently clustered.
    My questions...
    (1) Is it best practice that only one node out of the cluster performs the backups?
    (2) Or is there a config where all nodes participate in the backup?
    My concerns are what happens when node #1 fails (presumably we'd have to mount the volume on node #2), and also the asymmetrical load during backups.
    Thank you!

    Hello;
    Managing backup and recovery for RAC databases is no different from managing those for single-instance databases.
    RMAN backs up the database not the individual instance.
    This may help :
    http://www.databasejournal.com/features/oracle/article.php/3665211/Oracle-RAC-Administration--Backing-up-your-RAC-with-RMAN.htm
    Instance Recovery in Oracle RAC
    http://docs.oracle.com/cd/E11882_01/rac.112/e16795/backup.htm#i492578
    Best Regards
    mseberg

  • Oracle RAC Cluster Health Monitor on Windows 2008 R2 64Bit

Hello colleagues,
I run a 2-node RAC cluster 11.2.0.2 64-bit on Windows 2008 R2 64-bit. I installed Berkeley DB version 4.6.21 successfully.
After that I installed crfpack.zip (CHM) as described in the README.txt.
    F:\Software\ClusterHealthcheck\install>perl crfinst.pl -i eirac201,eirac202 -b F:\BerkeleyDB -m eirac201
    Performing checks on nodes: "eirac201 eirac202" ...
    Assigning eirac202 as replica
    Installing on nodes "eirac202 eirac201" ...
    Generating cluster wide configuration file for eirac202...
    Generating cluster wide configuration file for eirac201...
    Configuration complete on nodes "eirac202 eirac201" ...
    Please run "perl C:\"program files"\oracrf\install\crfinst.pl -f, optionally specifying BDB location with -b <bdb location> as Admin
    user on each node to complete the install process.
    F:\Software\ClusterHealthcheck\install>c:
    C:\Users\etmtst>cd \
    C:\>cd "Program Files"
    C:\Program Files>cd oracrf
    C:\Program Files\oracrf>cd install (on both nodes as described in the README.txt)
    C:\Program Files\oracrf\install>perl crfinst.pl -f -b F:\BerkeleyDB
    01/30/12 16:42:21 OracleOsToolSysmService service installed
    Installation completed successfully at C:\"program files"\oracrf...
    C:\Program Files\oracrf\install>runcrf
01/30/12 16:44:03 StartService(OracleOsToolSysmService) failed: (1053) The service did not respond to the start or control request in a timely fashion.
    01/30/12 16:44:03 OracleOsToolSysmService service started
It says here that OracleOsToolSysmService was started, but it was not!
Starting it manually gives the same error.
Has anybody else had this problem?
    regards and greetings, Abraham

There will be a new version of the standalone CHM/OS for Windows, working with 11.2.0.2 and earlier versions, available on OTN in the near future. The older version you are using has not been tested, and due to the infrastructure changes in 11.2 it is not expected to work. The integrated CHM/OS included as part of the 11.2.0.3 GI installation does work, as does the new GUI (CHMOSG) now available for download.

  • Multiple databases in one single RAC cluster

Hi, I would like to know if one can have multiple databases running on a single RAC cluster. We have several databases in our shop and would like to consolidate all of them into a single 3-4 node RAC cluster running 10.2 and 11.1 databases.
I am a newbie to RAC and would like clarification from anyone who has done this; a Google search comes up with only a few hits on the topic, so I wonder whether it is even doable.
In our case we have one database supporting critical applications and a few others that are less critical but used very heavily between 9 and 5. What is the use of RAC if I cannot consolidate all my databases into one cluster, or do I need a separate cluster for each critical database?
I have been through all the Oracle docs, which keep repeating one database, multiple instances and one instance, one machine, one node; they don't even advise running multiple instances on a single node.
    I appreciate any insight.
    Thanks.

    ora-sql-dba wrote:
Can you give more details on how you would set up multiple databases running different versions on a single RAC cluster? I am yet to find any documentation that supports or even elaborates on this topic.
You can configure a cluster with 12 nodes. Then, using dbca, configure a dev instance for nodes 1 and 2, a prod1 instance for nodes 3 to 6, and a prod2 instance for nodes 7 to 12.
    You also can configure each of these instances for all 12 nodes. And use it on all 12 nodes.
    Or, after configuring it for all 12 nodes, you can start the dev instance on nodes 1 and 2, prod1 on 3 - 6 and prod2 on the remaining nodes. If dev needs more power, you can for example shutdown prod2 on node 12 and start another dev instance there.
My issue is with the 2nd option - running more than one instance on the same node or server. Why? Each instance has a basic resource footprint in terms of shared memory needed, system processes required (like db writer, log writer, sys monitor), etc. It does not make sense to pay for that same footprint more than once on a server. Each time you do, you must reduce the resources available to each instance.
So instead of using (for example) 60% of that server's memory as the SGA of a single instance, with 2 instances on that server you have to reduce the SGA of each to 30% of system memory - effectively crippling those instances by 50%. They will have smaller buffer caches, require more physical I/O, and be more limited in what they can do.
    So unless you have very sound technical reasons for running more than one instance on a server (RAC or non-RAC), do not.
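The footprint arithmetic in that argument can be made concrete. A minimal sketch (the 60%/30% figures come from the post above; the 128 GB server size is an assumed example):

```python
def sga_per_instance(server_ram_gb, sga_fraction=0.60, instances=1):
    """Evenly split a fixed SGA memory budget across co-hosted instances."""
    return server_ram_gb * sga_fraction / instances

# One instance on a (hypothetical) 128 GB server gets the whole 60% budget...
single = sga_per_instance(128)                # ~76.8 GB for buffer cache etc.
# ...while two co-hosted instances each get only half of it.
shared = sga_per_instance(128, instances=2)   # ~38.4 GB each
```

Halving the buffer cache does not simply halve throughput - the extra physical I/O it forces is often the larger cost, which is the point being made above.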

  • 10.1.0.3 Agent on 10.2 RAC Cluster

Has anyone successfully installed the 10.1.0.3 OEM agent on a Linux-based 10.2 RAC cluster? I cannot seem to find any documentation on this. I have the 10.1.0.3 agent running on 10.2 "non-RAC" nodes, but the installer seems to have difficulty when I try to configure it for my 10.2 cluster.

Never mind. Support got back to me and said that it's not supported. It looks like I'll have to wait for the next release of OEM before I can effectively monitor 10.2 with Grid Control.

  • JVM patch required for DST on 10.2.0.2 RAC cluster

    I have looked all over the internet and Metalink for information regarding the JVM patching on a RAC cluster and haven't found anything useful, so I apologize if this question has already been asked multiple times. Also if there is a forum dedicated to DST issues, please point me in that direction.
    I have a 10.2.0.2 RAC cluster so I know I have to do the JVM patching required because of the DST changes. The README for 5075470 says to follow post-implementation steps in the fix5075470README.txt file. Step 3 of those instructions say to bounce the database, and then not allow the use of java until step 4 is complete (which is to run the fix5075470b.sql script).
    Here's my question: since this is a RAC database, does that mean I have to shutdown both instances, start them back up, run the script, and then let users log back in? IN OTHER WORDS, AN OUTAGE IS REQUIRED?
    Is there a way around having to take an outage? Can I bounce each instance separately (in a rolling fashion) so there's no outage, and then run the script even though users are logged on if I think java isn't being used by the application? Is there a way to confirm whether or not it's being used? If I confirm the application isn't using java, is it ok to run the script while users are logged on?
    Any insight would be greatly appreciated.
    Thanks,
    Susan

According to Note 414309.1, USA 2007 DST Changes: Frequently Asked Questions and Problems for Oracle JVM Patches, question 4 (Does the database need to be down before the OJVM patch is applied?), the bounce is necessary. It says nothing about a rolling upgrade in RAC.
    You might file an SR asking if a rolling upgrade is possible.

  • Using dbca to extend RAC cluster error

    Hi all,
    I'm trying to extend my 11gR2 RAC cluster (POC) using the Oracle documentation (http://vishalgupta.com/oracle/docs/Database11.2/rac.112/e10718/adddelunix.htm). I've already cloned and extended Clusterware and ASM (Grid Infrastructure) to the new node, as well as cloned the RAC database software to the new node. When I run the below statement to have dbca extend add a new instance on the node for the RAC I get the error shown:
    CMD:
    $ORACLE_HOME/bin/dbca -silent -addInstance -nodeList newnode13 -gdbName racdb -instanceName racdb4 -sysDBAUserName sys
    -sysDBAPassword manager123
    ERROR:
    cat racdb0.log
    "Adding instance" operation on the admin managed database racdb requires instance configured on local node. There is no instance configured on the local node "newnode13".
    I set ORACLE_HOME before running dbca, and I've also tried setting ORACLE_SID to both racdb4 and racdb, no change. My environment is below, any help is appreciated.
    OS: SLES 11.1
    Database: 11.2.0.1
    Existing Nodes: node01,node02, node03
    New Node: newnode13
    DB Name: racdb
    Instances: racdb1, racdb2, racdb3
    New Instance: racdb4
    Thanks.

    Silly me, I was running the command from the new node instead of an existing node. I guess it was a rough weekend after all. Thanks all!

  • Automatic restart of services on a 1 node rac cluster with Clusterware

How do we enable a service to automatically start up when the db starts up?
    Thanks,
    Dave

srvctl enable service -d DB
Thanks for your reply, M. Nauman. I researched that command and found we do have it enabled, but it only works if the database instance was previously taken down. Since the database does not go down on an Archiver Hung error (we are using the FRA with an alternate location), this never kicks in and brings up the service. What we are looking for is something that triggers when archiving errors out and switches from the FRA (Flash Recovery Area) to our alternate disk location - or, more precisely, when it goes back to a Valid status (on the FRA, after we've run an archive-log backup to clear it).
I found out from our two senior DBAs that our other 2-node RAC environment does not suffer from this problem, only the newly created 1-node RAC cluster environment. The problem is we don't know what the difference is (a parameter on the db or cluster or what) and how to set it.
    Anyone know?
    Thanks,
    Gib
    Message was edited by:
    Gib2008

  • Is there a way to config WLS to fail over from a primary RAC cluster to a DR RAC cluster?

    Here's the situation:
    We have two Oracle RAC clusters, one in a primary site, and the other in a DR site
    Although they run active/active using some sort of replication (Oracle Streams? not sure), we are being asked to use only the one currently being used as the primary to prevent latency & conflict issues
    We are using this only for read-only queries.
    We are not concerned with XA
    We're using WebLogic 10.3.5 with MultiDatasources, using the Oracle Thin driver (non-XA for this use case) for instances
    I know how to set up MultiDatasources for an individual RAC cluster, and I have been doing that for years.
    Question:
    Is there a way to configure MultiDatasources (mDS) in WebLogic to allow for automatic failover between the two clusters, or does the app have to be coded to failover from an mDS that's not working to one that's working (with preference to a currently labelled "primary" site).
    Note:
    We still want to have load balancing across the current "primary" cluster's members
    Is there a "best practice" here?

    Hi Steve,
There are two ways to connect WLS to an Oracle RAC:
1. Use the Oracle RAC service URL, which contains the details of all the RAC nodes with their respective IP addresses and DNS names.
2. Connect to the primary cluster as you are currently doing, and use an MDS to load-balance/fail over between multiple nodes in the primary RAC (if applicable).
    In case of a primary RAC node failure and a switch to the DR RAC nodes, use WLST scripts to change the connection URL and restart the application to remove any old connections.
    Such DB failover tests can be conducted in a test/reference environment to set up the required log monitoring and the subsequent steps to measure the timelines.
    Thanks,
    Souvik.
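Souvik's second option reduces to ordered retry across the two clusters. A minimal sketch of that logic in Python (the datasource callables are hypothetical placeholders for JNDI lookups of the primary and DR multi-datasources; none of this is WLS API):

```python
def get_connection(datasources):
    """Try each datasource factory in preference order: primary first, DR last.

    `datasources` is an ordered list of zero-argument callables, each of which
    returns a connection or raises on failure (placeholders for lookups of the
    primary and DR multi-datasources).
    """
    last_error = None
    for ds in datasources:
        try:
            return ds()
        except Exception as exc:
            last_error = exc  # remember why this cluster was skipped
    raise ConnectionError("all clusters unavailable") from last_error

# Usage sketch: the primary cluster is down, so the DR source is used.
def primary():
    raise OSError("primary RAC cluster unreachable")

def dr():
    return "connection-to-DR"

conn = get_connection([primary, dr])  # falls through to dr()
```

Note the order of the list encodes the "prefer the primary site" rule; load balancing within each cluster stays the job of each multi-datasource.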

  • All connections are connecting to 2nd node only in a 2 Node RAC Cluster

    Hello,
I have a 10.2.0.3 database on a two-node RAC cluster with only one service configured. The service is set to be preferred on both nodes.
However, all the connections are landing on Node 2 only. Any idea where to look?
    $> srvctl config service -d PSDB
    psdbsrv1 PREF: psdb1 psdb2 AVAIL:
    Thanks,
    MM

    Application is using the following connection string.
    jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = PQ2-PS-db-01-vip)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = PQ2-PS-db-02-vip)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = PSDBSRV1)(FAILOVER_MODE =(TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))
--MM
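A first sanity check is to confirm what that descriptor actually requests. A small offline helper, sketched in Python (plain string matching, not an Oracle API; note that even with client-side LOAD_BALANCE=yes, the listener's server-side load balancing, driven by the service configuration, still picks the final instance, so a skew toward one node can have server-side causes):

```python
import re

def descriptor_hosts(url):
    """Extract the HOST values from a JDBC thin / TNS connect descriptor."""
    return re.findall(r"\(HOST\s*=\s*([^)]+?)\s*\)", url, re.IGNORECASE)

def load_balancing_on(url):
    """True if the descriptor requests client-side connect-time load balancing."""
    return re.search(r"\(LOAD_BALANCE\s*=\s*(yes|on|true)\s*\)",
                     url, re.IGNORECASE) is not None

# The connect string from the post above (FAILOVER_MODE clause elided):
url = ("jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)"
       "(HOST = PQ2-PS-db-01-vip)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)"
       "(HOST = PQ2-PS-db-02-vip)(PORT = 1521)) (LOAD_BALANCE = yes) "
       "(CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = PSDBSRV1)))")

print(descriptor_hosts(url))   # both VIPs are present
print(load_balancing_on(url))  # True
```

Here both VIPs are listed and LOAD_BALANCE is on, so the client side looks correct; the next place to look would be each instance's LOCAL_LISTENER/REMOTE_LISTENER registration and the service's load advisory.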

  • Routing all connections through a one node in a 2 node RAC cluster

    Hi everyone
My client has the following requirement: an active/active RAC cluster (e.g. node1/node2), but with only one of the nodes being used (node1) and the other sitting there just in case.
For things like services, I'm sure this is straightforward enough - just set them to preferred on node 1 and available on node 2.
For connections, I imagine I would just list the VIPs in order in the tns file, but with LOAD_BALANCE=OFF, so connections go through the tns entries in order (i.e. node 1, then node 2); this would still allow the VIP to fail over if node 1 is down.
    Does that sound about right? Have I missed anything?
    Many thanks
    Rup

user573914 wrote:
My client has the following requirement: an active/active RAC cluster (e.g. node1/node2), but with only one of the nodes being used (node1) and the other sitting there just in case.
Why? What is the reason for a "just in case" node - and when and how is it "enabled" when that just-in-case situation occurs?
This does not make any kind of sense from a high-availability or redundancy view.
For connections, I imagine I would just list the VIPs in order in the tns file, but with LOAD_BALANCE=OFF, so connections go through the tns entries in order (i.e. node 1, then node 2); this would still allow the VIP to fail over if node 1 is down.
Does that sound about right? Have I missed anything?
Won't work on 10g - and may not work on 11g. The Listener can and does hand off connections, depending on what the TNS connect string says. If you connect not via a SID entry but via a SERVICE entry, and that service is available on multiple nodes, you may not (and often will not) be connected to the instance at the single IP you used in your TNS connection.
    Basic example:
    // note that this TEST-RAC alias refers to a single specific IP of a cluster, and use
    // SERVICE_NAME as the request
    /home/billy> tnsping test-rac
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 18-JAN-2011 09:06:33
    Copyright (c) 1997, 2005, Oracle.  All rights reserved.
    Used parameter files:
    /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS=(PROTOCOL=TCP)(HOST= 196.1.83.116)(PORT=1521)) (LOAD_BALANCE=no) (CONNECT_DATA=(SERVER=shared)(SERVICE_NAME=myservicename)))
    OK (50 msec)
    // now connecting to the cluster using this TEST-RAC TNS alias - and despite we listing a single
    // IP in our TNS connection, we are handed off to a different RAC node (as the service is available
    // on all nodes)
    // and this also happens despite our TNS connection explicitly requesting no load balancing
    /home/billy> sqlplus scott/tiger@test-rac
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jan 18 09:06:38 2011
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Data Mining and Real Application Testing options
    SQL> !lsof -n -p $PPID | grep TCP
    sqlplus 5432 billy    8u  IPv4 2199967      0t0     TCP 10.251.93.58:33220->196.1.83.127:37031 (ESTABLISHED)
SQL>
So we connected to RAC node 196.1.83.116 - and that listener handed us off to RAC node 196.1.83.127. The 11gR2 Listener seems to behave differently - it did not do a handoff (from a quick test I did on an 11.2.0.1 RAC) in the above scenario.
    This issue aside - how do you deal with just-in-case situation? How do you get clients to connect to node 2 when node 1 is down? Do you rely on the virtual IP of node 1 to be switched to node 2? Is this a 100% safe and guaranteed method?
    It can take some time (minutes, perhaps more) for a virtual IP address to fail over to another node. During that time, any client connection using that virtual IP will fail. Is this acceptable?
I dunno - I dislike your client's concept of treating one RAC node as a kind of standby database for a just-in-case situation. I fail to see any logic in that approach.
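For reference, the ordered, non-load-balanced address list Rup described can be generated mechanically. A sketch in Python with placeholder host and service names, and subject to the listener-handoff caveat demonstrated above:

```python
def ordered_descriptor(hosts, service, port=1521):
    """Build a TNS descriptor that tries each VIP in the given order
    (no client-side load balancing), moving to the next address only
    when the previous one cannot be reached."""
    addrs = "".join(
        f"(ADDRESS=(PROTOCOL=TCP)(HOST={h})(PORT={port}))" for h in hosts
    )
    return (f"(DESCRIPTION=(ADDRESS_LIST={addrs}"
            f"(LOAD_BALANCE=off)(FAILOVER=on))"
            f"(CONNECT_DATA=(SERVICE_NAME={service})))")

# Placeholder VIP and service names:
tns = ordered_descriptor(["node1-vip", "node2-vip"], "myservicename")
print(tns)
```

This only controls which listener the client contacts first; as shown in the tnsping/sqlplus transcript, a service available on multiple nodes can still be handed off to another instance.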

  • Why do we use reverse proxy for Oracle RAC Cluster setup

    Hello All,
I got this question lately: "Why do we use a reverse proxy for an Oracle RAC cluster setup?" I know we use a reverse proxy at the middleware level for multiple security reasons.
    Thanks..

    "why do we use reverse proxy for Oracle RAC Cluster setup".
    I wouldn't. I wouldn't use a proxy of any sort for the Cluster Interconnect for sure.
    Cheers,
    Brian

  • Multiple listeners on a RAC cluster?

    Hello -
    I have a RAC cluster that will contain between 5 and 10 databases. Would it be a best practice to have a separate listener for each database, or one listener for the entire cluster?
    Thanks,
    mike

    I do not think one listener would work in my situation, please comment.
    I have two and three node RACs running multiple installations of E-Business Suite, each installation is owned by a different userid with primary group dba.
Autoconfig does not seem aware of the concept of a single shared listener; an execution of autoconfig is only aware of its current context and so will generate its own listener.ora and tnsnames.ora.
    Thanks for your insights.
    Jerry
