Replication between a 2-node RAC environment and a standalone database

I would like to find out whether we can set up replication between a (2-node) RAC environment and a standalone database located at a different location. Any help regarding this would be greatly appreciated.

Thanks for the reply.
Consider for a moment that I cannot implement Data Guard or Streams -- I believe both involve licensing issues -- so the only option left is writing my own code. If I write my own code, what are the prerequisites, and what do I have to keep in mind (technically) before I start implementing it? Any help or any lead would be greatly appreciated.
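For illustration, a home-grown approach usually means trigger-based change capture pushed to the remote site over a database link. A minimal sketch, with all table, sequence, and column names hypothetical:

CREATE SEQUENCE emp_chg_seq;

CREATE TABLE emp_changes (
  change_id NUMBER PRIMARY KEY,
  empno     NUMBER,
  op        VARCHAR2(1),         -- 'I', 'U', or 'D'
  changed   DATE DEFAULT SYSDATE
);

-- Capture every change to the replicated table into the queue table
CREATE OR REPLACE TRIGGER emp_capture
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
  INSERT INTO emp_changes (change_id, empno, op)
  VALUES (emp_chg_seq.NEXTVAL,
          NVL(:NEW.empno, :OLD.empno),
          CASE WHEN INSERTING THEN 'I' WHEN UPDATING THEN 'U' ELSE 'D' END);
END;
/

A scheduled job would then drain emp_changes and apply the rows on the standalone database over a database link. The prerequisites this implies: a primary key on every replicated table, a guaranteed apply order, conflict rules if both sides can change data, and recovery logic for when the link is down.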

Similar Messages

  • Is there any way to enable event log replication between two nodes in a Windows 2008 failover cluster?


    Hi,
    As far as I know, there is no log replication function between failover cluster nodes. If you want unified log management, you can refer to the following related KB:
    Configure Computers to Forward and Collect Events
    http://technet.microsoft.com/en-us/library/cc748890.aspx
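    If you go the event-forwarding route, the basic setup is roughly as follows (a sketch; the subscription itself is created in Event Viewer or from a saved XML definition):
    REM On each source node: allow the collector to read events remotely
    winrm quickconfig -q
    REM On the collector node: enable the Windows Event Collector service
    wecutil qc /q
    REM Create a subscription from an XML definition (file name is a placeholder)
    wecutil cs subscription.xml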
    Hope this helps.

  • Configure replication between directory server 5.1 and 5.2

    We have two Directory Servers running on different machines: 5.1 and a new 5.2. All databases have been successfully backed up from 5.1 and restored to the new 5.2. In this scenario, we would like to set up the 5.1 and new 5.2 Directory Servers for multi-master replication.
    As described in the Sun documentation, we copied a few LDIF files from the new 5.2 to 5.1 so that both schemas are up to date.
    The new 5.2 instance is running fine. The 5.1 server, however, fails to start, as shown below.
    # ./start-slapd
    [31/May/2005:14:07:43 +0800] dse - The entry cn=schema in file /usr/iplanet/servers/slapd-ifpdev02/config/schema/50ns-admin.ldif is invalid, error code 21 (Invalid syntax) - object class nsAdminServer: Unknown required attribute type "nsServerID"
    [31/May/2005:14:07:43 +0800] dse - Please edit the file to correct the reported problems and then restart the server.
    Any help from you guys would be greatly appreciated.

    I recommend that you read the Release Notes for DS 5.2; there are some notes on replication between 5.1 and 5.2.
    ===
    In Directory Server 5.2, the schema file 11rfc2307.ldif has been altered to conform to rfc2307. If replication is enabled between 5.2 servers and 5.1 servers, the rfc2307 schema MUST be corrected on the 5.1 servers, or replication will not work correctly.
    Workaround
    To ensure correct replication between Directory Server 5.2 and Directory Server 5.1, perform the following tasks:
    * For zip installations, remove the 10rfc2307.ldif file from the 5.1 schema directory and copy the 5.2 11rfc2307.ldif file to the 5.1 schema directory. (5.1 Directory Server Solaris packages already include this change.)
    * Copy the following files from the 5.2 schema directory into the 5.1 schema directory, overwriting the 5.1 copies of these files:
    11rfc2307.ldif, 50ns-msg.ldif, 30ns-common.ldif, 50ns-directory.ldif, 50ns-mail.ldif, 50ns-mlm.ldif, 50ns-admin.ldif, 50ns-certificate.ldif, 50ns-netshare.ldif, 50ns-legacy.ldif, and 20subscriber.ldif.
    * Restart the Directory Server 5.1 server.
    * In the Directory Server 5.2 server, set the nsslapd-schema-repl-useronly attribute under cn=config to on.
    * Configure replication on both servers.
    * Initialize the replicas.
    ===
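    For a zip installation, the file copies in the workaround come down to something like this (the 5.1 path is taken from your error log; the 5.2 schema path is a placeholder for your install):
    DS51_SCHEMA=/usr/iplanet/servers/slapd-ifpdev02/config/schema
    DS52_SCHEMA=/path/to/ds52/config/schema    # adjust to your 5.2 server root
    rm $DS51_SCHEMA/10rfc2307.ldif
    for f in 11rfc2307.ldif 50ns-msg.ldif 30ns-common.ldif 50ns-directory.ldif \
             50ns-mail.ldif 50ns-mlm.ldif 50ns-admin.ldif 50ns-certificate.ldif \
             50ns-netshare.ldif 50ns-legacy.ldif 20subscriber.ldif; do
      cp $DS52_SCHEMA/$f $DS51_SCHEMA/
    done
    ./start-slapd    # restart the 5.1 server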
    Also search for "migrate", "repl", or "5.1" in the Release Notes and read the relevant information.
    http://docs.sun.com/source/817-7611/index.html
    Another guide is "Installation and Migration Guide"
    http://docs.sun.com/app/docs/doc/817-7608
    HTH.
    Gary

  • Replication between 130 nodes and 1 Data Center

    Hi everyone.
    I have 130 database nodes (Oracle Standard Edition One) separated by large distances, and 1 data center with 3 nodes (Oracle Real Application Clusters 10g R2). The connection between the nodes and the data center is through various ISPs (WAN).
    I have exactly the same database design on the nodes and in the data center.
    The data center is a repository of data for reporting to directors, and it dictates the business rules that guide all nodes.
    Each node has approximately 15 machines connected through a desktop application. In other words, a desktop application with a backend database (the node).
    My idea of replication is not instant: when a transaction commits on a node, it is replicated to the data center later. Images would also replicate overnight because they are heavy, approximately 1 MB per image; each image corresponds to one transaction.
    On the other hand, I have to replicate some data from the data center to the nodes: business rules, for example new company names, new persons, new prohibitions, etc.
    My problem is determining the best way to replicate data from the nodes to the data center.
    Could somebody please suggest the best solution?
    Thanks in advance.

    Last I checked, Streams and multi-master replication require Enterprise Edition databases at both ends, which rules them out for the sort of deployment you're envisioning.
    If a given table will only ever be modified on the nodes or on the master site, never both, you can build everything as read-only materialized views. This would probably require, though, that the server at the data center have 130 copies of each table, 1 per node. For schemas of any size, this obviously gets complicated very quickly. For asynchronous replication to work, you'd need to schedule periodic refreshes, which assumes that you have relatively stable internet connections between the nodes and the data center.
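    As a sketch of that read-only materialized view approach (table, database link, and group names are all hypothetical):
    -- On each node (the master for its own rows): log changes for fast refresh
    CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;
    -- At the data center: one read-only MV per node, over that node's database link
    CREATE MATERIALIZED VIEW orders_node001
      REFRESH FAST
      AS SELECT * FROM orders@node001;
    -- Pull changes periodically, e.g. every 15 minutes
    BEGIN
      DBMS_REFRESH.MAKE(
        name      => 'node001_grp',
        list      => 'orders_node001',
        next_date => SYSDATE,
        interval  => 'SYSDATE + 15/1440');
    END;
    /
    The data-center-to-node direction (business rules) can be handled the same way in reverse, with the data center as the master for those tables.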
    I guess I would tend to question the utility of having so many nodes. Is it really necessary to have so many? Or could you just beef up the master and have everyone connect directly?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • How to add a second database alongside an existing two-node RAC environment

    Hi,
    I was wondering if anyone can help me with this.
    My Environment:
    1. Two-node RAC cluster database (11.2.0.2) with ASM running perfectly (Oracle SID = test-1)
    2. I have installed a second single-instance DB on node 2 (Oracle SID = test-2) with an NTFS file system for the datafiles
    3. The database is up and running, but I am not able to connect to it from any client.
    4. I am getting ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    5. The database (test-2) is registered with the grid LISTENER
    6. TNSPING from the client machine responds OK
    Am I missing something here? Happy to provide more info if requested.
    Thanks,
    PS

    C:\Users\root.test_prod>lsnrctl
    LSNRCTL for 64-bit Windows: Version 11.2.0.2.0 - Production on 23-FEB-2012 11:58:05
    Copyright (c) 1991, 2010, Oracle. All rights reserved.
    Welcome to LSNRCTL, type "help" for information.
    LSNRCTL> status
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for 64-bit Windows: Version 11.2.0.2.0 - Production
    Start Date 18-FEB-2012 19:51:47
    Uptime 4 days 16 hr. 6 min. 24 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File C:\app\11.2.0\grid\network\admin\listener.ora
    Listener Log File C:\app\11.2.0\grid\log\diag\tnslsnr\IRIS11G-DB-2\listener\alert\log.xml
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\LISTENERipc)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.30.0.202)(PORT=1520)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.30.0.215)(PORT=1520)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
    Instance "+asm2", status READY, has 1 handler(s) for this service...
    Service "test-11gRAC.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Service "test-11gRXDB.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Service "irisapps.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Service "test-2" has 1 instance(s).
    Instance "test-2", status READY, has 1 handler(s) for this service...
    Service "test-2XDB" has 1 instance(s).
    Instance "test-2", status READY, has 1 handler(s) for this service...
    The command completed successfully
    LSNRCTL> service
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
    Instance "+asm2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:1322 refused:0 state:ready
    LOCAL SERVER
    Service "test-11gRAC.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:1829 refused:0 state:ready
    LOCAL SERVER
    Service "test-11gRXDB.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: IRIS11G-DB-2, pid: 4496>
    (ADDRESS=(PROTOCOL=tcp)(HOST=IRIS11G-DB-2.test_prod.internal)(PORT=57695))
    Service "irisapps.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:1829 refused:0 state:ready
    LOCAL SERVER
    Service "test-2" has 1 instance(s).
    Instance "test-2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    LOCAL SERVER
    Service "test-2XDB" has 1 instance(s).
    Instance "test-2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: IRIS11G-DB-2, pid: 9340>
    (ADDRESS=(PROTOCOL=tcp)(HOST=IRIS11G-DB-2.test_prod.internal)(PORT=58653))
    The command completed successfully
    LSNRCTL>
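    Since the listener shows service test-2 as READY on the non-default port 1520, the client side is the first thing to check: ORA-12514 means the client reached a listener but asked for a service name that listener does not know. A tnsnames.ora entry along these lines (host and port taken from the listener output above) should work:
    TEST-2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.30.0.202)(PORT = 1520))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = test-2)
        )
      )
    Note that TNSPING only tests the ADDRESS section, so it can succeed even when the SERVICE_NAME is wrong, which matches the symptoms described.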

  • Best way to create a 2-node RAC environment from an existing setup

    Hello all,
    I have a 2-node RAC (10.2.0.3) running on Solaris 10 as my prod database.
    We are planning to have another 2-node RAC for DEV purposes, also on 10.2.0.3.
    [Due to certain reasons this will act as PROD for a few weeks, so we need an exact copy of the DB.]
    I cannot afford any downtime.
    I am planning to:
    Install CRS and upgrade to 10.2.0.3
    Install RAC and upgrade to 10.2.0.3
    Duplicate the database using RMAN
    Are there any better ways I could replicate the environment, using Grid Control (10.2.0.2), Data Guard, or anything else?
    TIA,
    JJ

    I don't think you're going to achieve no downtime, but if you get the DB copied to the 2nd cluster (using RMAN or whatever method you like) and apply all the logs, then your downtime can be limited to the time it takes to apply the last log or two once you shut down the primary site (a la Data Guard). That should also allow you to avoid data loss by applying the last logs (you'll likely have to manually copy and apply them). I agree with DbaKerber that Data Guard may not be a bad solution here. You're not going to get zero downtime, but I think it would be the safest way to get the shortest downtime window with no data loss.
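    A rough sketch of the RMAN copy step (in 10g, DUPLICATE is backup-based, so take a backup of prod, make it visible to the new cluster, and start the auxiliary instance NOMOUNT first; connection strings and names are placeholders):
    # Connect to the source (target) and the new instance (auxiliary)
    rman target sys@prod auxiliary sys@devdb
    RMAN> DUPLICATE TARGET DATABASE TO devdb;
    Data Guard then reduces the cutover to copying and applying the last archived logs.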

  • Replication between linux on z (zLinux) and linux x86

    Hey guys,
    I'm looking for the best/recommended solution for setting up replication between Linux on z (s390x), which is a big-endian platform, and Linux x86, which is little-endian. I know Data Guard would not work, but what would you suggest instead?
    Source DB: 10.2.0.4 on Linux x86_64
    Destination DB: 10.2.0.4 on Linux s390x

    What are the requirements for the replication?
    - maximum latency
    - volume of changes (bytes/time)
    - one-directional / bi-directional / multi-directional
    - synchronous / asynchronous
    - any specific/exotic data types
    - is any transformation of values required
    - is any filtering of rows based on column values or other criteria (such as user or time) required
    - other?

  • Static listener registration: 11gR2 RAC primary and standalone standby

    Hi Guys,
    I have to build a single-instance physical standby (Data Guard) for an 11.2.0.1 RAC primary. The concern, or rather the confusion: we have three SCAN listeners across the 2-node cluster, and in order to have static registration for duplicating the database to the standby, I need to put an entry in listener.ora.
    Do I have to create static entries for the default listener as well as for the SCAN listeners?

    My understanding is No.
    The Oracle instance registers itself with the SCAN listeners automatically, so I believe just the default (local) listener needs the static entry.
    This document has SCAN listener examples, and none of them use a _DGMGRL entry:
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-rac-standby-133152.pdf
    Best Regards
    mseberg
    This document seems to support my statement:
    11.2 Scan and Node TNS Listener Setup Examples [ID 1070607.1]
    Also of interest :
    11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained [ID 887522.1]
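    For reference, a static entry in the standby host's local listener.ora would look roughly like this (SID, home path, and domain are placeholders for your environment):
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = stby.example.com)
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
          (SID_NAME = stby)
        )
      )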

  • Difference between add-in Java-Stack and standalone stacks

    Hello experts,
    Is there any difference between a Java stack installed as an add-in to the ABAP AS and two standalone stacks, aside from the fact that the standalone systems are physically separated? What other reasons are there to choose one or the other? Is it performance or scalability? Are there any differences for developers (for example, on XI or BI), or is the difference only important for administrators?
    Best regards,
    David

    Hi David,
    I've just found the following document is uploaded on SDN.
    [How to Deploy SAP NetWeaver: Dual Stack vs. Separated Stacks|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d074d7de-8d55-2b10-1e94-fb2e9d2893d1]
    Hope this helps you.
    regards,
    Yoshi

  • Database replication between Oracle RAC and Oracle Standalone DB in 9i

    Hi,
    We currently have a 4-node RAC environment with 4 Oracle instances. Due to an availability issue, we want to take one instance out and make it a standalone Oracle database, so we will be left with 3 nodes on the RAC database and one standalone database. We want to implement Oracle replication on this solution, where a partial database from the RAC environment needs to be replicated to the standalone node.
    Both the RAC and the standalone machine will run Oracle 9i.
    The partial data that we are looking to replicate is about 25 GB. Some of the questions we have:
    1. Is there any other replication mechanism apart from materialized views for this solution?
    2. Is it feasible to synchronize around 25 GB of data between servers?
    3. What is the estimated time for a refresh or synchronization?
    The methods that we are currently evaluating for this replication are:
    1. Multi-master replication
    2. Oracle 9i Data Guard
    Please let me know if we need to look at some other replication methods as well.
    Thanks in advance for your help.

    ManojMac wrote:
    1. Is there any other replication mechanism apart from materialized views for this solution?
    Streams is another option.
    2. Is it feasible to synchronize around 25 GB of data between servers?
    Sure. It depends on the rate of change, your latency requirements, whether the standalone database has the horsepower to apply all the changes generated by the other three nodes, etc.
    3. What is the estimated time for a refresh or synchronization?
    That depends on the architecture, the network connection, whether you are doing incremental refreshes, etc. And it depends on what time you're measuring -- you might be measuring the latency between the RAC cluster and the standalone database, the time it takes to incrementally refresh a single materialized view when there have been no changes, or the time it takes to do a complete refresh of an entire refresh group, pulling 25 GB of data over the network.
    The methods that we are currently evaluating for this replication are:
    1. Multi-master replication
    2. Oracle 9i Data Guard
    Data Guard is not an option if you only want to replicate a subset of the data. The two realistic options are materialized views and Streams. Are you anticipating that you will be making changes on both nodes? If not, you can use simple materialized views rather than multi-master replication.
    Since 9.2 is no longer covered by Premier Support, are you planning to upgrade to a supported version in the near future? In particular, Streams works a lot better in later versions of the database.
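    To make the measurement point concrete, the two refresh calls involved look like this (materialized view and refresh group names are hypothetical):
    -- Fast (incremental) refresh of a single materialized view
    EXEC DBMS_MVIEW.REFRESH('ORDERS_MV', method => 'F');
    -- Complete refresh of a whole refresh group
    EXEC DBMS_REFRESH.REFRESH('NODE_GRP');
    The first can finish in seconds when little has changed; the second can mean pulling the full 25 GB across the network.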
    Justin

  • Multi-master replication between 5.2 and 6.3.1

    I have a setup with a master running version 5.2 and about 15 consumers (slaves), all of which have been upgraded to 6.3.1. I now want to create a multi-master topology by promoting one of these consumers to be a master, while still keeping the 5.2 in use, as we have a bunch of other applications that depend on the 5.2 instance. Our master has two suffixes. The master server is also the CA cert authority for all the consumers. After reading the docs I narrowed down the procedure to:
    1. Promote one of the 6.3.1 consumers to hub and then to master using the dsconf promote-repl commands. The problem here is that I am not sure how I can create a single consumer that can slave both the suffixes. We currently have them being slaved to different consumers.
    Also, do I need to stop the existing replication between the 5.2 master and the would-be 6.3.1 master before promoting it to hub and master?
    2. Set the replication manager manually or using dsconf set-server-prop on the new 6.3.1 master .
    3. Create a new replication agreement from 5.2 to 6.3.1 master without initializing. (using java console)
    4. Create new replication agreement from 6.3.1 to 5.2 (using command line)
    5. Create new repl agreements between the new 6.3.1 master and all the other consumers. For this do I need to first disable all the agreements between 5.2 and 6.3 or can I create new agreements without disabling the old ones?
    6. Initialize 6.3.1 from the 5.2 master.
    My biggest concern at this point is the SSL certs and the existing trust the consumers have with the 5.2 master. Currently my 5.2 server acts as the CA authority for our certificate management with the LDAP slaves. How can I migrate this functionality to the new server, and will this affect how the slaves communicate with the new master server?
    Thanks in advance.

    Thanks Marco and Chris for the replies.
    I was able to get around the message by first manually initializing the new slave using an LDIF of the ou from the master, using DSCC to change the default replication manager account to connect, and finally editing dse.ldif to enter the correct crypt hash for the new replication manager password. After these steps I was able to successfully set up replication to the second ou and also promote it to hub and master (I had to repeat the steps after promoting the slave to master, as that somehow reset the replication manager settings).
    So right now, I have a 5.2 master with two ou's replicating to about 15 consumers.
    I promoted one of these to be a second master (from consumer to hub to master). Replication is setup from 5.2 to 6.3 master but not the other way round.
    I am a little bit nervous setting up replication the other way round, as this is our production environment and I do not want to end up blowing up my production instance. The steps I plan on taking, from the new master server, are:
    1. dsconf create-repl-agmt -p 389 dc=xxxxx,dc=com <5.2-master>:389
    2. dsconf set-repl-agmt-prop -p 389 dc=xxxxx,dc=com <5.2-master>:389 auth-pwd-file:<passwd_file.txt>
    I am assuming I can do all of this while the instances are up. Also, in the above, does create-repl-agmt just create the agreement, or does it also initialize the consumer with the data? I want to ensure I do not initialize my 5.2 master with my 6.3 data.
    Thanks again

  • Single-node file system to 3-node RAC and ASM migration

    Hi,
    We have several UTL_FILE and external table applications running on a 10.2 single-node Veritas file system, and we want to migrate to a 3-node RAC ASM environment. What are the best practices for making this migration succeed? Thanks.

    1. Patch to 10.2.0.3 or 10.2.0.4 if not already there.
    2. Dump Veritas from any future consideration.
    3. Build and validate the new RAC environment and then plug in your data using transportable tablespaces.
    Do not expect the first part of step 3 to work perfectly the first time if you do not have experience building RAC clusters. This means having appropriate hardware in place for perfecting your skills.
    Be sure, too, that you are not trying to do this with blade or 1U servers. You need a minimum of 2U servers to be able to plug in sufficient hardware to have redundant paths to storage and for cache fusion and public access (a minimum of 6 ports). And don't let any network admin try to convince you that they can virtualize the network paths: they cannot do so successfully for RAC.
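    As a rough sketch of the transportable tablespace move in step 3 (tablespace, directory, and disk group names are hypothetical; the ASM file name in the import is a placeholder for the name RMAN reports when it makes the copy):
    -- On the source: make the tablespace read-only, then export its metadata
    ALTER TABLESPACE app_data READ ONLY;
    $ expdp system DIRECTORY=dpump DUMPFILE=app_data.dmp TRANSPORT_TABLESPACES=app_data
    -- Copy the datafile into ASM, e.g. with RMAN
    RMAN> BACKUP AS COPY DATAFILE '/vxfs/oradata/app_data01.dbf' FORMAT '+DATA';
    -- On the RAC side: plug the tablespace in and reopen it for writes
    $ impdp system DIRECTORY=dpump DUMPFILE=app_data.dmp TRANSPORT_DATAFILES='+DATA/devdb/datafile/app_data01.dbf'
    ALTER TABLESPACE app_data READ WRITE;
    Note that UTL_FILE and external table DIRECTORY objects will also need a location visible to all three RAC nodes (shared or clustered storage), so plan those as part of the move.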

  • SCCM 2012 Replication between Central Admin Site and all Primary Sites is failing

    Let me start by saying I have made a mistake and now I am paying for it and attempting to fix it. All of our SCCM servers are virtual and exist in an ESX environment. The mistake I made is that I restored our Central Admin Site from a backup without also restoring the two Primary Sites at the same time. Now the databases between the sites simply refuse to synchronize. I can run the Replication Link Analyzer until I'm blue in the face, and even though the data gets replicated once, the replication immediately breaks and fails after that.
    Regrettably, I no longer have access to backups that would take me back to a point where the three servers were happy. The problem there is that our ESX administrator only keeps a limited number of backups per server (we have in excess of 180 virtual servers in our ESX environment), and the backups from a point in time where they worked are no longer available.
    As I have said, I have tried running the Replication Link Analyzer many times. I have also tried going into the SQL Server console and running the stored procedure spDrsSendReplicationInvalid.
    Can anyone provide me with any assistance on how best to restore replication between the Central Admin server and the two Primary servers?

    http://blogs.msdn.com/b/scstr/archive/2012/05/31/how_2d00_to_2d00_site_2d00_server_2d00_recovery_2d00_central_2d00_or_2d00_primary.aspx
    Just an addition: the option called "Recover central administration site: then specify the FQDN of a reference primary site" is the one to try first.
    Torsten Meringer | http://www.mssccmfaq.de
