Two-way real-time replication

We are trying to set up a true distributed database system. We need to be able to load balance between two remote locations to run transactions that are heavily dependent upon the database. We are trying to replicate between the databases to keep them continually in sync (we have a dedicated T1 between our locations). It is not going to be a primary/secondary situation.
Our databases have up to 275 tables, and our volume can at times be as high as 100 transactions/sec.
Does Oracle support this? Any idea how it might compare to SQL Server or DB2 (cost, performance, ...)? Any info is greatly appreciated.
-Greg

Hi
Would you please answer my question or guide me to any relevant resource.
The case is that we have a centralized 9i Enterprise Edition database (DB A)
holding the DB structure, while the data is stored on a SAN.
We have another 9i backup database on another server (DB B) that is connected to the same storage as well.
In the case of replication between (DB A) and (DB B), what is being replicated exactly?
- Is it a sort of pointer?
- Or is there no need for replication at all, as long as we are relying on third-party storage and no modification is made to the database structure?
Thank you so much

Similar Messages

  • Real time replication

    I think my initial load finished and I have a couple of questions.
    1. I checked the data on both the source and the replicat side, and they show the same amount of data. Is there a way to check from GGSCI to ensure that the data is in sync?
    2. I checked the extract status and it is stopped, which makes perfect sense. When I checked the replicat status, it is still running. Is it supposed to stop as well when the initial load finishes?
    3. After the initial load is finished, besides building the new parameter files for both extract/replicat and starting them up, is there anything else I need to watch out for?
    thx

    user550338 wrote:
    I think my initial load finished and I have a couple of questions.
    1. I checked the data on both the source and the replicat side, and they show the same amount of data. Is there a way to check from GGSCI to ensure that the data is in sync?
    Log in to SQL*Plus and do a count(*) on each table that is being replicated.
    2. I checked the extract status and it is stopped, which makes perfect sense. When I checked the replicat status, it is still running. Is it supposed to stop as well when the initial load finishes?
    Not if you have started it.
    3. After the initial load is finished, besides building the new parameter files for both extract/replicat and starting them up, is there anything else I need to watch out for?
    Make sure both databases are reachable. Do a tnsping to each, and if that checks out, start the extracts/replicats.
    thx
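
    A minimal sketch of that count check, assuming a hypothetical SCOTT.EMP table and a database link SRC_DB pointing back at the source (adjust both names to your environment):

        -- Run on the target: compare row counts on both sides over a DB link.
        -- scott.emp and src_db are hypothetical placeholders.
        SELECT (SELECT COUNT(*) FROM scott.emp)        AS target_rows,
               (SELECT COUNT(*) FROM scott.emp@src_db) AS source_rows
          FROM dual;

    Note that matching counts only prove row volume, not content; for a stricter check you would compare checksums of the data as well.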

  • Update real time account general data into a custom table

    Hi,
    I have created a Z table for storing account general data for a business requirement. I have created a program, executed by a periodic batch job, to update this Z table. Can anyone suggest an alternative solution to update this Z table whenever an account gets created or modified in the CRM system? Instead of running a batch job periodically, I need real-time replication of the account data into this custom table.
    Thanks and Regards,
    Sneha.

    Hi,
    Thanks for your reply.
    But can you elaborate on the solution? Can we use Business Transaction Events (BTEs) for updating partner data? Will these be triggered on account creation/modification? If so, can you suggest a BTE that can be used for my requirement?
    Thanks and Regards,
    Sneha.

  • Best data provisioning tool for a very large amount of data updated in real time?

    About a few hundred million entries of data a day must be replicated to SAP HANA in real time; what would be the best option?

    Hi Wayne,
    If you are looking for real time replication, then SLT is the best option. What is the source system for this replication?
    Regards,
    Chandu.

  • Handling sequences in real time data transfer.

    Hi,
    We are merging two databases (real time data transfer).
    One is Oracle 8i and the other is 9i.
    Both databases have some tables with the same structure.
    We want to transfer data from a table in one database to the corresponding table in the other database.
    We are implementing this functionality using Advanced Queuing.
    The tables on both sides use different sequences for generating primary key values and currently contain a large volume of data.
    The problem with inserting is that when we transfer a record from one table to the other, the primary key value (sequence value) generated in each table will be different.
    So, if a particular record is updated in one table, how can we update the same record in the other table?
    (As the primary key value would be different).
    Thanks,
    Shailesh

    There must be some common data in the two tables that identifies the record. That is, ignoring the sequence-generated PK, how would you tell that two records refer to the same thing? You need to ignore the PK values, since they are meaningless surrogate keys, and use the natural key in the data to match the two tables.
    HTH
    John
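
    For illustration, a minimal sketch of an update matched on a natural key, assuming hypothetical tables EMP_A and EMP_B, a database link SITE_A, and EMP_NO plus HIRE_DATE as the natural key:

        -- All object names here are hypothetical placeholders.
        -- The sequence-generated IDs on both sides are ignored entirely;
        -- rows are matched on the natural key instead.
        UPDATE emp_b b
           SET b.salary = :new_salary
         WHERE (b.emp_no, b.hire_date) IN (SELECT a.emp_no, a.hire_date
                                             FROM emp_a@site_a a
                                            WHERE a.id = :changed_id);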

  • TWO VERY TOUGH SUPPORT PROBLEMS FACED AND SOLVED IN REAL TIME

    Please, can anybody post two very tough support problems faced and solved in real time?
    my id is: [email protected]

    Hi Priya,
    For example, the client wants his customer legacy number in the XD01 master data, which is not there in standard SAP. We can satisfy the requirement with a user exit.
    2) If the client wants the customer phone number in the sales order, we can use a user exit to satisfy the requirement.
    These are not provided in standard SAP, so we have to work out an alternative.
    Reward if helpful.
    Thanks & Regards
    Narayana

  • Data replication in real time

    For security reasons, we have two Oracle 9i database servers (Windows Server 2003 as the OS):
    We have database server A in our internal network, which is also our Intranet.
    We have database server B, outside our Intranet but in our internal network, which can be accessed through the Internet; any Internet user with a certain password can insert, update, or delete records.
    Both database servers A and B have the same structure, but their data is different. Database A can be accessed from our Intranet and its records can only be updated by our Intranet users. Database B is only available to Internet users (outside our Intranet), and they can also update its records.
    **We need both databases to have the same data at all times.**
    Is it possible to REPLICATE both database servers in real time? Is there any setup or configuration that we must follow to achieve that?
    Thanks in advance!

    If you want absolute transactional consistency, you have to have one system go down if the other goes down. You can configure multi-master replication to replicate data synchronously, but that requires that the two databases participate in the same distributed transaction, so every commit incurs the overhead of the two-phase commit protocol. If either one of the databases cannot commit the transaction, the transaction cannot be committed.
    Most folks go with an asynchronous replication scenario, since that allows the two databases to be very closely aligned data-wise, has a lower overhead, and does not have the same availability limitations.
    I would strongly advise against it, but you could build your own replication system that attempted to repeat the transaction on the other system and, if it failed, queued the request until the other system was available. Theoretically, this would allow you to have a transactionally consistent system when both databases were up and to allow one system to continue processing if the other were down. Practically, though, I wouldn't consider this a viable approach unless the alternatives were hugely unsatisfactory, you have gobs of cash, time, and programmers to throw at the problem, and you are willing to deal with significant administrative headaches for the next few years as you go through the first few releases.
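
    For reference, a minimal sketch of what a synchronous multi-master setup looks like with the DBMS_REPCAT package, assuming a hypothetical replication group REP_GRP, table APP.ORDERS, and a second master DBB.EXAMPLE.COM (untested; see the Advanced Replication guide for the full procedure):

        BEGIN
          -- Create the master replication group (all names are placeholders)
          DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'rep_grp');
          -- Add a table to the group
          DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
            gname => 'rep_grp', type => 'TABLE',
            sname => 'app', oname => 'orders');
          -- Add the second master with synchronous propagation;
          -- this is what ties every commit to a two-phase commit
          DBMS_REPCAT.ADD_MASTER_DATABASE(
            gname => 'rep_grp', master => 'dbb.example.com',
            propagation_mode => 'SYNCHRONOUS');
          -- Generate replication support and resume activity
          DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
            sname => 'app', oname => 'orders', type => 'TABLE');
          DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'rep_grp');
        END;
        /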
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Is there a way to create dependency on the real-time jobs

    Hi,
    We have around 80 real-time services running and loading the changed data into the target.
    The process being used is
    IBM Informix > IBM CDC > JMS (xml messages) > DS real-time services > Oracle EDW.
    With the above process, whenever there is a change in both the fact table and a dimension table, both real-time services load the data into the target at the same time. This causes lookup problems because of the timing.
    Is there a way to create a dependency to resolve the timing issue, so that the lookup table is loaded before the master table?
    Please let me know.
    Thanks,
    C

    Hello
    With the design you currently have, you will have potential sequencing issues. There is no magic in Data Services to solve this.
    You might want to consider building more complex real-time jobs that accept more complex data structures and have logic to process the data in dependency order.
    Michael

  • Could you please help me find a simple example of how to save a data file to a spreadsheet, or any other way, on the real-time controller for the sbRIO-9642, using onboard memory or a USB flash drive?


    Here are a few links to a helpful Knowledge Base article and a white paper that should help you out: http://digital.ni.com/public.nsf/allkb/BBCAD1AB08F1B6BB8625741F0082C2AF and http://www.ni.com/white-paper/10435/en/ . The methods for file I/O in Real-Time are the same for all Real-Time targets. The white paper covers best practices for file I/O and goes over how to do it.
    Alex D
    Applications Engineer
    National Instruments

  • HT1553 What is the best system for real-time cloud backup of documents? My MacBook crashed, and I lost two hours of writing and could not find a way to restore it.

    My MacBook Pro crashed while I was rewriting a book; I lost more than an hour of work and could not find a way to restore it. I did not have Time Machine set up, but it appears that Time Machine does not do real-time backup and documents must be saved manually.
    I need an automatic, real-time backup to keep this from happening - I'm not happy my MacBook has crashed twice now. What is the best cloud system for real-time backup? Thanks to anyone who can help me; I'm not the most astute computer guy... James

    One way would be to use Dropbox, or a similar sync service, and just keep your critical documents in the appropriate folder. Dropbox, at least, keeps a local copy of everything and syncs automatically to the cloud whenever a change is made. Dropbox is free for up to 2GB of data.
    There are also true backup services such as CrashPlan+:
    http://www.crashplan.com/consumer/crashplan-plus.html
    which provide automatic backups whenever a change is detected. It's not free, but usually such services aren't too expensive unless you need to back up a lot of data.
    Regards.

  • What is the best way to create shared variables for multiple PXI (Real-Time) targets and a GUI PC?

    What is the best way to create shared variables for multiple real-time (PXI) targets and a GUI PC? I have 16 PXI systems on the network and one GUI PC. I want to send commands to all the PXI systems using a single variable from the GUI PC (like start data acquisition, stop data acquisition), and I also want data from each PXI system on the GUI PC for display purposes. Can anybody suggest the best-performing configuration? Where should the variables be created: on the host PC or on each individual PXI system?

    Dear Ravens,
    I want to control the real-time application from the host (commands from the GUI PC to the PXIs). The host PC should have access to all 16 sets of PXI variables. During a communication failure with a PXI, the host will stop the data display for that particular station.
    Ravens Fan wrote:
    Either.  For the best performance, you need to determine what that means.  Is it more important for each PXI machine to have access to the shared variable, or for the host PC to have access to all 16 sets of variables?  If you have slowdown or issue with the network communication, what kinds of problems would it cause for each machine?
    You want to locate the shared variable library on whichever machine is more critical. That is probably each PXI machine, but only you know your application.

  • Best way to acquire data from both serial port and D/A board in real time?

    In my experiment, I have two kinds of data: analog and digital. Now I have to write a program to acquire both kinds of data, not only in real time but also in synchrony. My colleague tried to write a program for this purpose; however, the digital part failed. For example, the data length found in the data buffer is correct for the first 10 seconds, but the format becomes wrong later.
    Can one program involve two different data acquisition methods?

    Hi,
    You need to figure out, by some technique, when the serial port sample occurred, and then obtain the equivalent sample from the acquisition board, probably from a circular buffer. If you know the sample rate (pretty implicit really) you can 'cherry pick' the appropriate measurement from the buffer to be synchronous with the serial port measurement.
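
    As a rough sketch of the idea (the variable names below are made up): if the acquisition started at time t0 and runs at sample rate fs, the buffer position matching a serial reading timestamped t_serial is approximately

        index = round((t_serial - t0) * fs) mod buffer_length

    where the mod accounts for wrap-around in the circular buffer.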

  • MultiProvider with two real-time InfoCubes?

    Hi,
    I created a MultiProvider with two real-time InfoCubes. I need to create a formula in the FOX editor that updates only one InfoCube. Is that possible? If so, I am looking for the syntax to reference the InfoCube name in the formula. The key figures have the same names in both InfoCubes.
    Any help would be appreciated.
    Thank you,
    Vidya

    Hi Pratyush,
    I am using IP. I created the aggregation level, filter, etc. This created a characteristic "0INFOPROV", and I used that characteristic to read and to write the data. It did not work.
    Here is the problem.
    - We have one basic cube (history data: HCUBE) and two transactional cubes (actual data: ACUBE and plan data: PCUBE)
    - Created a multiprovider (MULTI) for HCUBE, ACUBE, PCUBE
    - Also created Agg level and filter
    - Note: Three cubes have the same Key Figure (0Amount)
    - Created a planning function and tried something like {KF, PCUBE} = {KF, MULTI} (as suggested by Andrey)
    - Task is to read data from HCUBE and ACUBE and only write to PCUBE.
    - I am able to read data but unable to write it.
    Any ideas?
    Thank you,
    Vidya

  • Could I use two regular computers to achieve real-time communication using EtherCAT and the LabVIEW Real-Time Module?

    Could I use two regular computers (one acting as master, the other acting as slave using an EtherCAT network card) to achieve real-time communication using EtherCAT and the LabVIEW Real-Time Module? If so, what hardware should I purchase from NI?
    Thank you!

    Hi Xiaolin,
    NI doesn't offer Windows-based EtherCAT master or slave software; only LabVIEW RT can run the EtherCAT driver.
    However, you could use a LabVIEW RT target as an EtherCAT master and use the EtherCAT network card with a slave PC (note: I think this will work based on the Beckhoff description of the card you are describing, and the card should integrate like any other non-NI slave; however, I haven't tested the setup and wouldn't claim it will work until you have tried it).
    You can use any NI RT target with two Ethernet ports as the EtherCAT master. This could be a cRIO, PXI, or RT Desktop.
    Jesse Dennis
    Design Engineer
    Erdos Miller

  • Best way to do near real-time?

    Hello,
    What is the best way to do near real-time data transfer from:
    an Oracle DB to another Oracle DB: Streams, CDC + ETL, ...?
    any DB to an Oracle DB?
    The idea is to load a DWH in near real-time.
    Thanks in advance for your answers.

    Dr. Google:
    http://www.orafaq.com/node/957
    1. Physical Standby Database
    A standby database is called “physical” if its physical structure exactly matches the primary's structure. Archived redo logs transferred from the primary database are applied directly to the standby database.
    2. Logical Standby Database
    A standby database is called “logical” when the physical structures of the two databases do not match; SQL statements are reconstructed from the archived redo logs and then applied to the standby database.
    Basically, your second database is a logical (ETL-style) copy of your main DB, updated via archived logs. It is not an easy thing to set up, but if you manage to get it right, it's probably the least maintenance over time.
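
    If you go the Streams route instead, a minimal sketch of the capture side looks like this (the schema, queue, and database names are hypothetical placeholders; propagation and apply rules are configured with similar calls):

        BEGIN
          -- Capture DML changes for the SALES schema on the source database
          DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
            schema_name     => 'sales',
            streams_type    => 'capture',
            streams_name    => 'capture_sales',
            queue_name      => 'strmadmin.streams_queue',
            include_dml     => TRUE,
            include_ddl     => FALSE,
            source_database => 'src.example.com');
        END;
        /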
