RAC using Sun GeoClusters with Failover

Hi,
My customer is in the process of investigating and deploying Sun GeoClusters to fail over a RAC setup from one location to another; the distance between the primary and failover sites is 1200 km, and they are going to use TrueCopy to replicate the storage across the sites.
I am in the process of gathering information and need to find out more detail, and whether anyone has any knowledge of this software. If anybody knows of clients who are using the same (some URLs), please let me know.
Regards
Manoj

TrueCopy is a way of replicating storage offsite. RAC works using a single source for the database, which means that RAC cannot be used simultaneously at both locations with the files being used locally.
If my memory serves me well, Hitachi TrueCopy was OSCP (Oracle Storage Compatibility Program) certified, but the OSCP program seems to have been discontinued as of January 2007 (see http://www.oracle.com/technology/deploy/availability/htdocs/oscp.html).
That means that you can use TrueCopy to replicate the storage layer to another location (according to the OSCP note), and use the replicated storage to start up the RAC database in case of failover.

Similar Messages

  • TES v6.1 want to install a Win Tidal Agent on a Cluster Resource with Fail Over... help

    I am working with my Windows admins, and they want to know how to install the Win Tidal Agent on a Cluster Resource with Fail Over.  Currently running Tidal Master (UNIX) v6.1.0.483.
    Thanks,
    Rich

    Please refer to the Agent Installation and Configuration Guide PDF from Cisco. The steps to configure the agents in a cluster are explained in the section "Configuring the Agents for a Cluster".

  • Help With Fail Over

    I have been playing around with Directory Server failover with IMS 5.2; however, it's not working too well. So far I have:
    configured local.ugldaphost to ldap-a ldap-b
    No problem there. On LDAP server a (Master) I have set up replication of the following:
    dc=domain,dc=blah,dc=blah
    o=internet
    Using this method I get "Can't connect to LDAP server" once I have logged in to IMS 5.2 webmail, trying to access my folders (which works) and personal address book (which doesn't). Looking at the HTTP logs, it states:
    [04/Sep/2003:10:33:49 +0100] ice httpd[2724]: General Debug: ldappool::new_conn failed: Can't connect to the LDAP server Connection refused
    [04/Sep/2003:10:33:49 +0100] ice httpd[2724]: General Debug: ldappool::access_pool_get 0/0 valid connections
    [04/Sep/2003:10:33:49 +0100] ice httpd[2724]: General Debug: PAB_Search() error: Can't connect to the LDAP server
    [04/Sep/2003:10:33:49 +0100] ice httpd[2724]: General Error: Cannot search address book at ou=user, ou=people, o=internet,dc=domain,dc=BLAH,dc=BLAH,o=pab: Can't connect to the LDAP server
    Do I also need to replicate:
    o=NetscapeRoot
    o=pab
    I have tried replicating simply o=NetscapeRoot, and I could no longer log in as admin to the directory server console on the REPLICA (ldap-b).
    Can anybody help me out?

    Hi,
    To successfully have a 'low cost' failover iDS5/iMS5 scenario, you need to do
    a number of things. Or, if you have a wad of cash, use Veritas Cluster
    (HA iPlanet agents), etc., or Directory Server proxies (iDAR). :(
    Currently I'm using a 'low cost' failover technique.
    None of what I'm about to describe is in the SunONE documentation for iMS.
    I have tested this in the lab; it all works, and it's now in production.
    Before you do anything, test in the lab first, so you feel comfortable with
    the setup.
    OK, my scenario:
    Primary LDAP = ldap-a
    Secondary LDAP = ldap-b
    Mailserver = mta1
    iDS5 = iDS5.1p1
    iMS5 = iMS5.2p1
    1. Install iDS5 on ldap-a. It acts as the User dir. and Config dir. server.
    2. Prep ldap-a for the iMS5 install (run ims_dssetup.pl).
         - YES to schema files/indices
    3. Install iMS5 on mta1, using ldap-a as User and Config dir. server.
         - I have iMS5 configured in Direct LDAP mode.
    4. Install iDS5 on ldap-b, using ldap-b as the User and ldap-a as the Config dir. server.
         - No need to populate the User tree on ldap-b (i.e. example users from install)
         - MUST USE ldap-a as the config server, as you will be replicating this tree;
    if not, you will not be able to access the admin server, as you stated.
    5. Also run ims_dssetup.pl on ldap-b.
         - YES to schema files/indices
    6. Setup multi-master replication between ldap-a and ldap-b.
         - See Admin Guide, have a good read, needs correct setup !
         - This allows read/write, thus seamless to email user for password
         changing, PAB writes...if LDAP has failed over.
         - Replicate all suffixes
              o=isp           (User dir.)
              o=internet      (DC tree)
              o=pab          (PAB tree)
              o=NetscapeRoot     (Config. dir)
         - Init the consumer (ldap-b) from ldap-a for all suffixes.
    Note: The only problem with multi-master is uniqueness plugins (if you use them);
    it's no problem as long as you use ldap-a as the master. See the iDS Admin guide.
    7. Now ldap-b requires a change to allow iMS5 to write to o=NetscapeRoot
    in the event of failover. Otherwise you get the error message "ldap server unavailable,
    no configuration server, using locally cached values..."
    Managing Console Fail Over
    If you have a multi-master installation with o=NetscapeRoot replicated
    between your two masters, ldap-a and ldap-b, you can modify the console
    on the second server (ldap-b) so that it uses ldap-b's instance instead
    of ldap-a's. (By default, writes with ldap-b's console would be made to
    ldap-a then replicated over.)
    To accomplish this, you must:
    Shut down the Administration Server and Directory Server.
    Change these files to reflect ldap-b's values:
    'serverRoot'/userdb/dbswitch.conf:
    directory default ldap://ldap-b:389/o%3DNetscapeRoot
    'serverRoot'/admin-serv/config/adm.conf:
    ldapHost: ldap-b
    ldapPort: 389
    'serverRoot'/shared/config/dbswitch.conf:
    directory default ldap://ldap-b:389/o%3DNetscapeRoot
    'serverRoot'/slapd-serverID/config/dse.ldif:
    nsslapd-pluginarg0: ldap://ldap-b:389/o%3DnetscapeRoot
    Note: assuming your LDAP TCP port is 389
    Turn off the pass through authentication (PTA) plug-in on ldap-b by editing
    its dse.ldif file.
    In a text editor, open the 'serverRoot'/slapd-serverID/config/dse.ldif file.
    Locate the entry for the PTA plug-in:
    dn: cn=Pass Through Authentication,cn=plugins,cn=config
    Change nsslapd-pluginEnabled: on to nsslapd-pluginEnabled: off.
    Restart the Directory Server and Administration Server.
    8. Now on mta1, using configutil, set these options to the following values:
    **a. local.ldaphost = "ldap-a ldap-b"
              - Required to use both servers as Config dir. servers;
    in the event of failover, config is taken from ldap-b.
         b. local.ugldaphost = "ldap-a ldap-b"
              - Required to use both servers for User dir. lookups in the event of
    failover.
         c. local.service.pab = "ldap-a ldap-b"
              - Required to use both servers for PAB lookups/additions in the event
    of failover.
         ** To make the config dir. fail over, shut down the Admin server and change
    'serverroot'/shared/config/dbswitch.conf:
    directory default ldap://ldap-a ldap-b:389/o%3DNetscapeRoot
         Restart the Admin server.
    OK, that's all there is to it.
    Now test everything: fail over LDAP, and test logins for email POP/webmail, and IMAP if used.
    Test email connections (i.e. inbound/outbound email conns).
    Note:
    Whichever LDAP server fails, iMS5 will continue to use the other LDAP server, even when
    the failed LDAP server comes back online. Either stop/start iDS5 on the current LDAP
    server or stop/start iMS5.
    .....Well it worked for me !
    Good luck ;)
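    A note on how those space-separated host lists (e.g. local.ugldaphost = "ldap-a ldap-b") behave: the client simply walks the list in order and uses the first server that answers. Below is a minimal Python sketch of that selection logic; the hostnames and the `is_reachable` check are illustrative placeholders, not part of iMS itself.

```python
def pick_ldap_host(hosts, is_reachable):
    """Return the first reachable host from a space-separated
    failover list such as "ldap-a ldap-b", or None if all are down."""
    for host in hosts.split():
        if is_reachable(host):
            return host
    return None

# Simulate ldap-a being down: the client fails over to ldap-b.
up = {"ldap-b"}
print(pick_ldap_host("ldap-a ldap-b", lambda h: h in up))  # -> ldap-b
```

    This is also why, after a failed server comes back, clients keep talking to the second host until something restarts: the walk only happens when a new connection is made.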

  • RAC: using Sun cluster or Oracle clusterware?

    Going to build RAC on SUN machine (Solaris 10) and Sun SAN storage, couldn't decide how to build the foundation-----use Sun cluster 3.1 or Oracle clusterware?
    Any idea? Thanks.

    Starting with 10g Release 2, Oracle provides its own independent cluster management, i.e., Clusterware, which can be used as a replacement for the OS-specific cluster software.
    You can still use the OS-level cluster; in this case, Oracle Clusterware depends on the OS cluster for the node information.
    If you are on 10g, you can safely go with Oracle Clusterware.
    Jaffar

  • How Front End pool deals with fail over to keep user state?

    Hello all, I searched a lot of articles to understand how Lync 2010 keeps user state if a failure happens on a Front End pool node, but didn't find anything clear.
    I found some MS info about this topic: "The Front End Servers maintain transient information—such as logged-on state and control information for an IM, Web, or audio/video (A/V) conference—only for the duration of a user’s session. This configuration is an advantage because in the event of a Front End Server failure, the clients connected to that server can quickly reconnect to another Front End Server that belongs to the same Front End pool."
    As I read it, the client uses DNS to reconnect to another Front End in the pool. When it reconnects to an available server, does the user lose what he/she was doing in the Lync client? Can the server that is now hosting the session recover all the "user's session data"? If so, how?
    Regards, EEOC.

    The presence information and other dynamic user data is stored in the RTCDYN database on the backend SQL database in a 2010 pool:
    http://blog.insidelync.com/2011/04/the-lync-server-databases/  If you fail over to another pool member, this pool member has access to the same data.
    Ongoing conversations and the like are cached at the workstation.
    SWC Unified Communications

  • Two ACS4.0 box using win- can connect with cross over cable

    Hi
    We have 2 ACS 4.0 boxes. Internal replication is happening from ACS1 (primary) to ACS2 (secondary), but not from ACS2 to ACS1. Why?
    Also I need one suggestion: can we connect the 2 ACS boxes through a crossover cable for sync?
    At present they are connected to 2 different cores (ACS1 to core 1 & ACS2 to core 2) and the cores are interconnected.
    What is the normal practice?
    Regards
    Naga

    Hi Naga,
    The purpose of replication in ACS is for the primary server to overwrite the secondary server's settings that you have chosen.
    This is by design; replication is meant to be one-way, not bi-directional.
    The Cisco Secure ACS Solution Engine supports the operation of only one Ethernet connector at a time. Concurrent operation of both Ethernet connectors is not supported.
    To get redundancy with ACS, you need replication set up between TWO ACS boxes; it is not
    possible to set up a NIC failover in the same chassis.
    Regards,
    Jagdeep

  • BPEL, clustering and fail over

    How is failure of a BPEL PM node monitored so that in-flight processes can be automatically restarted on one of the surviving PM nodes?

    BPEL PM has a built-in retry mechanism and end-point load balancing:
    <partnerLinkBinding name="RatingService">
    <property name="wsdlLocation">
    http://localhost:8080/axis/services/RatingService1?wsdl
    http://localhost:8080/axis/services/RatingService2?wsdl
    </property>
    </partnerLinkBinding>
    <partnerLinkBinding name="FlakyService">
    <property name="wsdlLocation">http://localhost:8080/axis/services/FlakyService?wsdl</property>
    <property name="location">http://localhost:2222/axis/services/FlakyService</property>
    <property name="retryCount">2</property>
    <property name="retryInterval">60</property>
    </partnerLinkBinding>
    Please refer to the Resilient Flow Demo, packaged in the product and documented on OTN, for more information:
    http://www.oracle.com/technology/products/ias/bpel/htdocs/orabpel_technotes.tn007.html
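    The retryCount/retryInterval properties above boil down to a retry loop over the candidate endpoints. Here is a hedged Python sketch of that idea; the `call` function and the endpoint URLs are placeholders, not BPEL PM internals.

```python
import time

def invoke_with_retry(endpoints, call, retry_count=2, retry_interval=60):
    """Try each endpoint in turn; on failure, wait retry_interval seconds
    and retry the whole list up to retry_count additional times."""
    for attempt in range(retry_count + 1):
        for url in endpoints:
            try:
                return call(url)
            except IOError:
                continue  # this endpoint failed, try the next one
        if attempt < retry_count:
            time.sleep(retry_interval)
    raise IOError("all endpoints failed after %d attempts" % (retry_count + 1))
```

    With two wsdlLocation entries as in the RatingService binding above, a failure of the first endpoint is absorbed transparently by moving on to the second.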

  • Physical standby database fail-over

    Hi,
    I am working on Oracle 10.2.0.3 on Solaris SPARC 64-bit.
    I have a Data Guard configuration with a single physical standby database that uses real-time apply. We had a major application upgrade yesterday, and before the start of the upgrade we cancelled the media recovery and disabled (deferred) log_archive_dest_n so that it doesn't ship the archive logs to the standby site. We left the Data Guard configuration in this mode in case of a rollback.
    Primary:
    alter system set log_archive_dest_state_2='DEFER';
    alter system switch logfile;
    Standby:
    alter database recover managed standby database cancel;
    Due to application upgrade induced problems, we had to fail over to the physical standby, which had not been in sync with the primary since yesterday. I used the following method to fail over, since I did not want to apply any redo from yesterday.
    Standby:
    alter database activate physical standby database;
    alter database open;
    shutdown immediate;
    startup
    So, after this step, the database was a standalone database which doesn't have any standby databases yet (it still has the log_archive_config and log_archive_dest_n parameters set, but I have DEFERred the log_archive_dest_n pointing to the old primary). I have even changed the archive log deletion policy to NONE:
    RMAN> configure archivelog deletion policy to none;
    After the fail-over was completed, the log sequence started from sequence 1. We cleared the FRA to make space for the new archive logs and started a FULL database backup (backup incremental level 0 database plus archivelog delete input). The backup succeeded, but we got these alerts in the backup log that RMAN cannot delete the archive logs:
    RMAN-08137: WARNING: archive log not deleted as it is still needed
    My questions here are:
    1) Even though I have disabled the log_archive_dest_n parameters, why is RMAN not able to delete the archive logs after backup when there is no standby database for this failed-over database?
    2) Are all the old backups marked unusable after a fail-over is performed?
    FYI... flashback database was not used in this case as it did not serve our purpose.
    Any information or documentation links would be greatly appreciated.
    Thanks,
    Harris.

    Thanks for the reply.
    The FINISH FORCE approach works in some cases, but if there is an archive gap (though none was reported in our case), it might not work (DOCID: 846087.1). So we followed the Switch-over & Fail-over best practices, which mention this "ACTIVATE PHYSICAL STANDBY" method for a fail-over when you intend not to apply any archive logs. The process we followed is the right one.
    Anyhow, we got the issue resolved. Below is the resolution path.
    1) Even if you DEFER the LOG_ARCHIVE_DEST_STATE_N parameters on the primary, there are some situations where the primary database in a Data Guard configuration will not delete the archive logs due to SCN issues. This issue may or may not arise in all fail-over scenarios. If it does, then do the following checks:
    Follow DOCID: 803635.1, which provides a PL/SQL procedure to check for problematic SCNs in a Data Guard configuration even when the physical standby databases are no longer available (i.e., the Data Guard parameters log_archive_config and log_archive_dest_n='SERVICE=...' are still set, even though the corresponding LOG_ARCHIVE_DEST_STATE_N parameters are DEFERred).
    If this procedure returns any rows, then the primary database is not able to delete the archive logs because it still thinks there is a standby database, and it keeps the archive logs because of the SCN conflict.
    So, the best thing to do is remove the DG-related parameters from the spfile (log_archive_config and the log_archive_dest_n parameters).
    After I made these changes, I ran a test backup using "backup archivelog all delete input"; the archive logs got deleted after the backup without any issues.
    Thanks,
    Harris.

  • ISE fail over

    Hi, I have 2 ISE 3315s working in standalone mode.
    I have 2 sites:
    ISE_1 is installed on site 1 and manages user groupe_1
    ISE_2 is installed on site 2 and manages user groupe_2
    I am planning to use the 2 ISEs in failover.
    I would like to configure:
    1. ISE_1 to be primary for user groupe_1 and secondary (backup) for user groupe_2
    2. ISE_2 to be primary for user groupe_2 and secondary (backup) for user groupe_1
    Please, how can I configure this?
    Which modifications would I need on the switch, WLC and ISE?
    Thanks in advance for your help

    Hello,
    In this case, you can use a simple 2-node deployment scenario. In this scenario you will have ISE-1 as primary admin, secondary monitor, and PSN; you'll have ISE-2 as secondary admin, primary monitor, and PSN.
    Be aware of these points:
    1- If ISE-1 goes down, you have to access the ISE-2 GUI and promote it manually.
    2- If ISE-2 fails, no problem; the monitoring persona failover happens automatically.
    3- To load balance the users you are talking about, you have to do this based on NADs. For example, if you have 4 switches, do the following:
    A. Make SW1 and SW2 point to ISE-1 and ISE-2 as the RADIUS servers, but give higher priority to ISE-1.
    B. Make SW3 and SW4 point to ISE-1 and ISE-2 as the RADIUS servers, but give higher priority to ISE-2.
    So you have divided the job between the two nodes; if one is down, the other will handle all the communications with the NADs.
    Check this document for all the info you may need regarding distributed deployments (and yes, the connection speed between the two nodes should be 1 Gbps):
    http://www.cisco.com/en/US/solutions/collateral/ns340/ns414/ns742/ns744/docs/howto_50_ise_deployment_tg.pdf

  • It needs to be recovered by hand when a fail-over occurs.

    Hi all,
    I have a database using Oracle 10g (10.1.0.5) SE on Windows Server 2003, in an HA cluster using Oracle Fail Safe (3.3.4).
    It used to fail over smoothly.
    But now it needs to be recovered by hand when a fail-over occurs.
    Even with no connections to it, it needs to be recovered by hand.
    Has anyone met the same situation, or does someone understand it?
    Thanks in advance.
    Tom

    Hi Frank...
    I see from your post that no one was able to help you; however, I encountered this EXACT same scenario and don't know what to do to fix it. What did you finally do???  -- Andy

  • Audio Applications in Unity Fail-over

    Hi all,
    I am going to install Cisco Unity with fail-over, and from what I remember, I should rebuild applications like the Auto Attendant on the secondary server, because this is not part of the replication.
    Am I right? Or is there no need to rebuild the applications?

    Hi JFV,
    That is no longer the case.
    How Standby Redundancy Works in Cisco Unity 8.x
    Cisco Unity standby redundancy uses failover functionality to provide duplicate Cisco Unity servers for disaster recovery. The primary server is located at the primary facility, and the secondary server is located at the disaster-recovery facility.
    Standby redundancy functions in the following manner:
    •Data is replicated to the secondary server, with the exceptions noted in the "Data That Is Not Replicated in Cisco Unity 8.x" section.
    •Automatic failover is disabled.
    •In the event of a loss of the primary server, the secondary server is manually activated.
    Data That Is Not Replicated in Cisco Unity 8.x
    Changes to the following Cisco Unity settings are not replicated between the primary and secondary servers. You must manually change values on both servers.
    •Registry settings
    •Recording settings
    •Phone language settings
    •GUI language settings
    •Port settings
    •Integration settings
    •Conversation scripts
    •Key mapping scripts (can be modified through the Custom Key Map tool)
    •Media Master server name settings
    •Exchange message store, when installed on the secondary server
    http://www.cisco.com/en/US/docs/voice_ip_comm/unity/8x/failover/guide/8xcufg040.html#wp1099338
    Cheers!
    Rob

  • What are the advantages of using an internal table with a work area

    Hi,
    can anyone tell me
    What are the advantages of using an internal table with a work area
    over an internal table with a header line?
    thnks in adv
    regards
    nagi

    HI,
    Internal tables are a standard data type object which exists only during the runtime of a program. They are used to perform table calculations on subsets of database tables and for reorganising the contents of database tables according to the user's needs.
    http://help.sap.com/saphelp_nw04/helpdata/en/fc/eb35de358411d1829f0000e829fbfe/content.htm
    <b>Difference between Work Area and Header Line</b>
    While adding records to or retrieving records from an internal table, we have to keep the record somewhere temporarily.
    The area where this record is kept is called the work area for the internal table. The area must have the same structure as that of the internal table. An internal table consists of a body and an optional header line.
    A header line is an implicit work area for the internal table. Whether the itab has a header line or not depends on how the internal table is declared.
    e.g.
    data: begin of itab occurs 10,
    ab type c,
    cd type i,
    end of itab. " this table will have the header line.
    data: wa_itab like itab. " explicit work area for itab
    data: itab1 like itab occurs 10. " table is without header line.
    The header line is a field string with the same structure as a row of the body, but it can only hold a single row.
    It is a buffer used to hold each record before it is added or each record as it is retrieved from the internal table. It is the default work area for the internal table
    1) The difference between a with-header-line and a without-header-line internal table.
    ex:-
    a) Data: itab like mara occurs 0 with header line.
    b) Data: itab like mara occurs 0.
    a) Data : itab like mara occurs 0 with header line.
    table is with header line
    b) Data: itab like mara occurs 0.
    table is without header line
    2) Work area / field string vs. internal table:
    which one is preferable for good performance, and why?
    - The header line is a field string with the same structure as a row of the body, but it can only hold a single row, whereas an internal table can have more than one record.
    In short, you can define a work area for an internal table, which means that area must have the same structure as that of the internal table and can hold one record only.
    Regards,
    Padmam.

  • Sun Studio 11 with Rational ClearCase (for its VCS)

    I'm trying to use Sun Studio 11 with Rational ClearCase.
    IMHO, it is possible to use ClearCase in it as a VCS,
    because an error message in Sun Studio 11 indicated http://vcsgeneric.netbeans.org/profile/ to add external VCS support.
    Then, my questions are...
    1. Is it possible to use Rational ClearCase for the VCS?
    2. If it is possible, which module do I have to get from the site
    (http://vcsgeneric.netbeans.org/profiles/)?
    3. How to Install it into Sun Studio 11?
    Thanks in advance.

    First of all, the Sun Studio IDE is shipped without ClearCase support,
    and there is no official way to add this feature. Technically it is possible
    to add a plugin module, compatible with NetBeans 3.5, which will
    provide ClearCase support, but it is not a supported configuration.
    So it is not recommended to experiment with a Sun Studio installation
    which is used for real work. The answers below are mostly my guess,
    because we did not try Sun Studio with ClearCase.
    1. Is it possible to use Rational ClearCase for the VCS?
    Yes, I think there is a ClearCase plugin module for NetBeans 3.5:
    http://vcsgeneric.netbeans.org/profiles/index.html
    2. If it is possible, which module do I have to get from the site
    (http://vcsgeneric.netbeans.org/profiles/)?
    I think this one:
    http://vcsgeneric.netbeans.org/files/documents/67/117/clearcase-profile.nbm
    3. How to install it into Sun Studio 11?
    You have to be able to write to the installation tree, which means you
    have to be root, or you have to change access modes. This module
    will be installed in the /opt/netbeans/3.5V11/ directory, so this directory
    and its subdirectories should be writable by you. You have to start
    NetBeans using the script /opt/netbeans/3.5V11/bin/runide.sh, and
    upgrade it with the new plugin module. I can provide more details,
    but I want to repeat again that this is an unsupported configuration,
    so if you want to try it on your own system as an experiment,
    let me know and I'll describe the process in detail.
    Sorry about this problem.
    Thanks,
    Nik

  • Best Practices for patching Sun Clusters with HA-Zones using LiveUpgrade?

    We've been running Sun Cluster for about 7 years now, and I for
    one love it. About a year ago, we started consolidating our
    standalone web servers into a 3-node cluster using multiple
    HA-Zones. For the most part, everything about this configuration
    works great! One problem we're having is with patching. So far,
    the only documentation I've been able to find that talks about
    patching clusters with HA-Zones is the following:
    http://docs.sun.com/app/docs/doc/819-2971/6n57mi2g0
    Sun Cluster System Administration Guide for Solaris OS
    How to Apply Patches in Single-User Mode with Failover Zones
    This documentation works, but has two major drawbacks:
    1) The nodes/zones have to be patched in single-user mode, which
    translates to major downtime for patching.
    2) If there are any problems during the patching process, or
    after the cluster is up, there is no simple back-out process.
    We've been using a small test cluster to test out using
    LiveUpgrade with HA-Zones. We've worked out most of the bugs, but we
    are still in the position of patching our HA-Zoned clusters based
    on home-grown steps, and not anything blessed by Oracle/Sun.
    How are others patching Sun Cluster nodes with HA-Zones? Has
    anyone found/been given Oracle/Sun documentation that lists the
    steps to patch Sun Clusters with HA-Zones using LiveUpgrade?
    Thanks!

    Hi Thomas,
    there is a blueprint that deals with this problem in much more detail. It is based on configurations that use ZFS exclusively, i.e. for root and the zone roots, but it should be applicable to other environments as well: "Maintaining Solaris with Live Upgrade and Update On Attach" (http://wikis.sun.com/display/BluePrints/Maintaining+Solaris+with+Live+Upgrade+and+Update+On+Attach)
    Unfortunately, due to some redirection work in the joint Sun and Oracle network, access to the blueprint is currently not available. If you send me an email with your contact data I can send you a copy via email. (You'll find my address on the web)
    Regards
    Hartmut

  • Is there a way to config WLS to fail over from a primary RAC cluster to a DR RAC cluster?

    Here's the situation:
    We have two Oracle RAC clusters, one in a primary site, and the other in a DR site
    Although they run active/active using some sort of replication (Oracle Streams? not sure), we are being asked to use only the one currently being used as the primary to prevent latency & conflict issues
    We are using this only for read-only queries.
    We are not concerned with XA
    We're using WebLogic 10.3.5 with MultiDatasources, using the Oracle Thin driver (non-XA for this use case) for instances
    I know how to set up MultiDatasources for an individual RAC cluster, and I have been doing that for years.
    Question:
    Is there a way to configure MultiDatasources (mDS) in WebLogic to allow automatic failover between the two clusters, or does the app have to be coded to fail over from an mDS that's not working to one that is working (with preference to the currently labelled "primary" site)?
    Note:
    We still want to have load balancing across the current "primary" cluster's members
    Is there a "best practice" here?

    Hi Steve,
    There are 2 ways to connect WLS to an Oracle RAC:
    1. Use the Oracle RAC service URL, which contains the details of all the RAC nodes and their respective IP addresses and DNS names.
    2. Connect to the primary cluster as you are currently doing, and use an MDS to load-balance/fail over between multiple nodes in the primary RAC (if applicable).
        In case of a primary RAC node failure and a switch to the DR RAC nodes, use WLST scripts to change the connection URL and restart the application to remove any old connections.
        Such DB fail-over tests can be conducted in a test/reference environment to set up the required log monitoring and subsequent steps, and to measure the timelines.
    Thanks,
    Souvik.
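    The routing described above (prefer the primary cluster, round-robin across its members, and fall back to DR only when every primary member fails) can be sketched as application-side logic. This is an illustrative sketch only; the class name and the `borrow` callback are hypothetical and not a WebLogic API, which handles the single-cluster case internally.

```python
class ClusterFailoverSource:
    """Hand out member data sources from the primary cluster in
    round-robin order; if every primary member fails, fall back to
    the DR cluster. Purely illustrative of the routing logic."""

    def __init__(self, primary, dr):
        self.pools = {"primary": primary, "dr": dr}

    def get_connection(self, borrow):
        for site in ("primary", "dr"):
            members = self.pools[site]
            # Round-robin: rotate the member list on each call.
            for _ in range(len(members)):
                member = members.pop(0)
                members.append(member)
                try:
                    return borrow(member)
                except ConnectionError:
                    continue  # this member is down, try the next one
        raise ConnectionError("both clusters unavailable")

# Example: all members healthy, so the primary cluster serves the request.
src = ClusterFailoverSource(["rac1-node1", "rac1-node2"], ["rac2-node1"])
print(src.get_connection(lambda m: m))  # -> rac1-node1
```

    The same shape applies whether the fallback is done in code or by a WLST script that swaps the URL: the key design point is that DR members are never borrowed while any primary member still answers.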
