Oracle Streams and Data Guard

Does anyone know how to configure Oracle Streams and Data Guard to work together? I've set up an environment that successfully captures and applies records in a Streams environment using
log_archive_config='SEND, RECEIVE, NODG_CONFIG'
As soon as I try to introduce the Streams setup into a database that already has Data Guard set up, it gets convoluted: setting DG_CONFIG requires you to then set db_unique_name, and it seems that Streams log mining will not work without being fully incorporated into the Data Guard configuration.
Has anyone set up streams in a dataguard environment?
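For reference, the DG_CONFIG form I'm struggling with looks roughly like this (a sketch only; prod_db and stby_db are hypothetical unique names):
-- dynamic parameter: declare every unique name taking part in the configuration
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prod_db,stby_db)' SCOPE=BOTH;
-- db_unique_name is static, so it needs an SPFILE change and a restart
ALTER SYSTEM SET db_unique_name='prod_db' SCOPE=SPFILE;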

Let's see what's missing from your request:
1. Oracle version number?
2. Physical or logical Data Guard?
3. Which Data Guard mode? Sync or async?
4. Which Data Guard protection level?
5. ARCH or LGWR?
6. Which Streams mode? Sync or async?
7. Hotlog or Autolog?
8. Streams on the production server or the standby server?
I'm all out of guesses tonight. Please provide enough information for someone to help you.

Similar Messages

  • Setup between Oracle Streams and MQ

    Hi All,
    I'm trying to create the setup between Oracle Streams and Message Queue (MQ). I have already installed the MQ client on my machine and the Messaging Gateway is also working fine. Following the setup docs, I created one user having all the grants they provide in the docs, and created the queue table, queue, DML handler, and table rules, and started the queue. After that we set up the capture and apply processes for that user; in other words, the entire setup. But our data is not propagating the message to MQ.
    So if anyone has specific setup code, please provide it, along with any suggestions you may have.
    Thanks & regards
    Sanjeev

    Hi Sanjeev,
    There is a special forum dedicated to Oracle Streams: Streams
    You may want to try there.
    Extra tip: post the code you used. That way it is easier for people to see what you might have done wrong.
    Regards,
    Rob.

  • Difference between Oracle Streams and Oracle Replication

    Hi all,
    Can anyone tell me the difference between Oracle Streams and Oracle Replication?
    Regards

    Refer to the link: Difference Between Oracle Replication & Oracle Streams.
    Oracle Replication is designed to replicate exact copies of data sets to various databases. Oracle Streams is designed to propagate individual data changes to various databases. Thus, Replication is probably easier if the end goal is to maintain identical copies of data, where Streams is easier if the end goal is to allow different databases to react differently to data changes.
    Oracle Replication is a significantly more mature product-- it is quite usable with older databases. Oracle Streams is a much newer technology and is only usable among different 9i databases. Most competent Oracle developers and DBAs are familiar with Oracle Replication, while many fewer have any real experience with Streams.
    The Streams architecture strikes me as a lot more flexible than Oracle Replication's. This leads me to suspect that Oracle will be pushing Streams over Replication in subsequent releases, so I would expect new features in Streams, like DDL changes, that aren't in Oracle Replication. Realistically, though, I don't expect any serious movement away from Replication for at least a few releases, so I wouldn't tend to be overly concerned on this front.
    Answered by Justin, Distributed Database Consulting, Inc.
    Reference: forum thread "Difference Between Oracle Replication & Oracle Streams."

  • Oracle Streams and Oracle Apps 11i

    Hi,
    I am looking for an Oracle solution to build a reporting instance off an E-Business Suite 11i instance and offload Discoverer reporting to the reporting instance. I have tried a Data Guard logical standby, but there were too many issues and I cannot use it. I am wondering if anyone has tried using Oracle Streams to build a reporting instance from an Oracle Apps instance?
    Thanks

    Hi,
    Is Streams supported in an E-Business Suite database? We once had a scenario where some parameters for Streams were conflicting with parameters required for E-Business Suite performance improvement. I could not find any documents from Oracle to verify whether Streams on an E-Business Suite database was a supported configuration or not.
    So if you have any such document, could you send the link to me?
    Also please provide the link to your white paper/presentation.
    Regards,
    Sujoy

  • Oracle Streams and CLOB column

    Hi there,
    We are using "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit". My question is "Does Oracle Streams captures, propagates (source capture method) and applies CLOB column changes?"
    If yes, is this default behavior? Can we tell Streams to exclude the CLOB column from the whole (capture-propage-apply) process?
    Thanks in advance!

    You can exclude columns via a rule (dbms_streams_adm.delete_column).
    CLOBs are captured.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17069/strms_capture.htm#i1006263
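    For example, a hedged sketch of excluding a CLOB column with the declarative API (the rule, table, and column names here are hypothetical):
    BEGIN
      -- sketch: strip hypothetical column DOC_BODY from row LCRs on APP.DOCS
      DBMS_STREAMS_ADM.DELETE_COLUMN(
        rule_name   => 'strmadmin.docs_capture_rule',
        table_name  => 'app.docs',
        column_name => 'doc_body',
        value_type  => '*',        -- drop both old and new column values
        step_number => 0,
        operation   => 'ADD');     -- attach the transformation to the rule
    END;
    /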

  • Oracle Streams and Database Encryption

    I am looking for an encryption method for an OLTP database.
    Oracle Streams will be used to synchronize the data between the source and remote databases.
    The business process requires all remote databases to be fully encrypted.
    What can I use?
    - TDE is not supported with Oracle Streams,
    - I can't use DBMS_CRYPTO since it needs big database modifications.
    Are there any other Oracle-supported techniques, or do I need to consider hardware encryption (like hard disk encryption) or OS-level encryption?
    Thanks,

    TDE is not supported with LogMiner-based technologies such as Streams and Data Guard (logical standby); therefore you cannot use TDE to replicate encrypted content. You will get the exception "Unsupported data type".
    But you might replicate decrypted data into an encrypted destination. This means your destination might have TDE-encrypted columns.
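    For instance, a column at the apply site could be encrypted like this (a sketch only; it assumes a configured wallet, and the table and column names are hypothetical):
    -- sketch: TDE column encryption at the destination; requires an open wallet
    ALTER TABLE app.customers MODIFY (ssn ENCRYPT USING 'AES192');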
    Unofficially, Oracle will support TDE with LogMiner-based technologies in the next database version, 11.
    I am waiting for this.
    Regards,
    Mike

  • Infrastructure suggestion for Oracle RAC and Data Guard

    Hello,
    We are working on a migration business proposal and need to suggest an HA and DR infrastructure solution with Oracle 11g databases. I am thinking of suggesting Oracle 11g RAC and Data Guard to meet this requirement. We also want to cover Dev, Test, and Pre-Prod along with Prod in the new environment. Can someone suggest a cost-effective, stable, and scalable infrastructure for this requirement?
    Best regards,
    Venkat

    In terms of cost? Perhaps.
    In terms of performance? NO.
    Bear in mind I'm not trying to sell anything here, but in my experience hardly anything matches the performance of Exadata. It has specific features for the DB, such as the iDB protocol, which enables the offloading of queries to the Exadata storage servers. Other unique features include storage indexes, HCC compression, Smart Flash Cache... It's all a matter of how much you want to spend.
    Yes, you can have all your servers on the same box if you want, just make sure it is not a single point of failure. (Have appropriate safeguards in place in case of power failure, data center catastrophes, etc.) In the case of LPARs on the same box, be aware that one LPAR over-utilizing resources can affect the other LPARs (configurable).
    More info on Exadata: http://www.oracle.com/technetwork/server-storage/engineered-systems/exadata/dbmachine-x3-twp-1867467.pdf
    If your company has a tight budget you might also look at your options with Database appliance: http://www.oracle.com/technetwork/server-storage/engineered-systems/database-appliance/documentation/oracle-db-appliance…
    The Database Appliance has more flexibility in terms of licensing, and it's also a complete hardware solution from Oracle for databases.
    Regards

  • Oracle Streams and Heterogeneous Environment

    Hi,
    Since we can use Streams in a heterogeneous environment, let's say we implement Streams for replication between MS SQL Server and Oracle, and data has to be replicated from Oracle to SQL Server. What sort of configuration do we need on the SQL Server side?
    And if we want to replicate both data and user messages, what would we do?
    Regards,
    Abbasi

    Viacheslav Ostapenko wrote:
    Sorry, Aman,
    I couldn't find any info about replication to MS SQL. Is it possible at all? Could you provide a link where we can read about this? It could be very interesting.
    Sorry Viacheslav, even I couldn't find anything for the same. I am not sure whether it can be done or not; I haven't heard of anyone in my contacts doing so. The only place where I have seen Streams being used around me is within Oracle databases only. Maybe someone else can help if he/she has done it.
    Aman....

  • Oracle Streams and ASM

    Does Oracle 11g Streams run together with ASM?

    They are independent products, working at different layers for different purposes.
    You can have Streams with ASM.
    You can have Streams without ASM.
    You can have ASM with Streams.
    You can have ASM without Streams.
    Hemant K Chitale

  • Oracle Streams and standby database

    I want to implement Streams (table level) in the following scenario.
    What must I set up for the standby so that when I switch to the standby database, Streams keeps working correctly, just as when the primary was active?
    DB1--------------------stream---------------------DB2
    DB1standby .......................................DB2standby
    Narges.

    Your diagram above does not tell us whether this is Streams master/master. It is also important to know whether you are doing full schema replication or only some tables.
    For the db_name, think global name: how do you set up a propagation when the source DB and target DB have the same TNS entry, whose name is driven by the rule dblink name = global_name = TNS entry?
    Here it is a slightly different case: the standby will succeed the source and appear as the same DB to the remote site, even though they are on different hostnames. The only way is failover in the TNS entry, where you set multiple hosts for the same service, as sketched below.
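    For illustration, such a TNS entry might look like this (hostnames and service name are hypothetical):
    SRCDB =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (FAILOVER = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = primary-host)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
        )
        (CONNECT_DATA = (SERVICE_NAME = srcdb))
      )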
    The rest should be OK, except for the data lost from the source, which requires a re-sync.
    On the Streams side, problems remain and I am still skeptical about the feasibility: on paper there are no problems, but problems come fast from the data lost during the switch to the standby (this is to be expected with async Data Guard; the last SCNs will be missing on the standby but not necessarily on the remote target site).
    Then both sites will start suffering from the loss of this data, most probably with ORA-01403 (no data found) errors or pending Streams transactions, and there is no way to predict the extent of this in advance. The re-sync of this state is crucial, and I am working on it; I will come back to the community with a generic procedure for master/master re-sync. One of the main problems is identifying the remote table correspondents: these are easy to determine for declarative transformations, but for transformations done within an apply handler or a transformation function associated with a context, I have found no better option than to go back to the DBA and ask, pitifully, which are the remote target tables. This correspondence is precious information that does not appear in any data dictionary view; it exists only in the text of a PL/SQL block.
    As for my comments on the other thread: when the former source comes back, I had not considered the 11g possibility of re-syncing the former master with the standby being the new master, and it sounds like I should have.
    But this is all new for me too, and I need to test it. If it works, then you don't have the problem of setting up a downstream capture between two databases having the same db_name.

  • Using OWB mappings with Oracle CDC/Streams and LCRs

    Hi,
    Has anyone worked with Oracle Streams and OWB? We're looking to leverage Streams to update our data warehouse, applying changes from the transactional/source DB. At some point we seem to remember hearing that OWB could leverage Streams, perhaps even using the Logical Change Records (LCRs) from Streams as input to mappings?
    Any thoughts much appreciated.
    Thanks,
    Jim Carter

    Hi Jim,
    We've built a fairly complex solution based on streams. We wanted to break up the various components into separate entities so that any network failure or individual component failure wouldn't cause issues for the other components. So, here goes:
    1) The OLTP source database is streaming LCRs to our data warehouse, where we keep an operational copy of production, updated daily from those streams. This allows various operational reports to be run/rerun in a given day with the end-of-yesterday picture without impacting performance on the source system.
    2) Our apply process on the datamart side actually updates TWO copies of data. It does a default apply to our operational copy of production, and each of those tables has triggers that put a second copy of the data into daily partitioned tables. So, yesterday's partition has only the data that was actually changed yesterday. After the default apply, we walk the Oracle dependency tree to fill in all of the supporting information so that yesterday's partition includes all the data needed to run our ETL queries for that day.
    Example: Suppose yesterday an address for a customer was updated. Streams only knows about the change to the address record, so the automated process would only put that address record into the daily partition. The dependency walk fills in the associated customer, date of birth, etc. data into that partition so that the partition holds all of the related data to that address record for updates, without having to query against the complete tables. By the same token, a change to some other customer info will backfill the address record for this customer too.
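    To make point 2 concrete, here is a minimal sketch of that trigger mechanism (all table and column names are hypothetical, not our actual schema):
    -- sketch: daily range-partitioned change table plus copy trigger
    CREATE TABLE stg.address_chg (
      chg_date   DATE    NOT NULL,  -- partition key: day of the change
      chg_type   CHAR(1) NOT NULL,  -- I/U/D from apply, B for dependency backfill
      address_id NUMBER,
      line1      VARCHAR2(100)
    )
    PARTITION BY RANGE (chg_date)
      (PARTITION p_seed VALUES LESS THAN (DATE '2010-01-01'));
    CREATE OR REPLACE TRIGGER ods.address_to_stg
    AFTER INSERT OR UPDATE OR DELETE ON ods.address
    FOR EACH ROW
    BEGIN
      -- copy every applied change into today's partition
      INSERT INTO stg.address_chg (chg_date, chg_type, address_id, line1)
      VALUES (TRUNC(SYSDATE),
              CASE WHEN INSERTING THEN 'I' WHEN UPDATING THEN 'U' ELSE 'D' END,
              NVL(:NEW.address_id, :OLD.address_id),
              NVL(:NEW.line1, :OLD.line1));
    END;
    /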
    Now, our ETL queries run against views created against these partitioned tables so that they are only looking at the data for that day (the view s_address joins from our control tables to the partitioned address table so that we are only seeing one day's address records). This means that the ETL is running against the minimal subset of data required to update dimensions and create facts. It also means that, for example, if there is a problem with the ETL we can suspend running ETL while we fix the problem, and the streaming process will just go on filling partitions until we are ready to re-launch ETL and catch up - one day at a time. We also back up the data mart after each load so that, if we discover an error in ETL logic and need to rebuild, we can restore the datamart to a given day and then reprocess the daily partitions in order very simply.
    We have added control fields in those partitioned tables that show which record was inserted/updated/deleted in production, and which was added by the dependency walk so, if necessary, our ETL can determine which data elements were the ones that changed. As we do daily updates to the data mart as our finest grain, this process may update a given record in a given partition multiple times, so that the status of the record at the end of the day in that daily partition shows the final version for the day. So, for example, if you add an address record and then update it on the same day, the partition for that day will show the final updated version of the record, and the control field will show it to be a newly inserted record for the day.
    This satisfies our business requirements. Yours may be different.
    We have a set of control tables which manage what partition is being loaded from streams, and which have been loaded via ETL to the datamart. The only limitation is that, of course, the ETL load can only go as far as the last partition completely loaded and closed from streams. And we manage the sizing of this staging system by pruning partitions.
    Now, this process IS complex, and requires a fair chunk of storage, but it provides us with the local daily static copy of the OLTP system for running operational reports against without impacting production, and a guaranteed minimal subset of the OLTP system for speedy ETL runs.
    As for referencing LCRs themselves, we did not go that route due to the dependency issues (one single LCR will almost never include all of the dependent data from which to update a dimension record or build a fact record, so we would have had to constantly link each one with the full data set to get all of that other info).
    Anyway - just thought our approach might give you some ideas as you work out your own approach.
    Cheers,
    Mike

  • Oracle Streams configuration problem

    Hi all,
    I'm trying to configure Oracle Streams on my source database (Oracle 9.2), and when I execute the procedure DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS'); I get the error below:
    ERROR at line 1:
    ORA-01353: existing Logminer session
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2238
    ORA-06512: at line 1
    Checking some docs, they say I have to destroy all LogMiner sessions, but when I look in the v$session view I cannot identify any LogMiner session. I need this Streams tooling for schema synchronization between my production database and data warehouse database, so any help is appreciated.
    What I want to know is how to destroy or stop the LogMiner session.
    Thanks for your help
    regards
    raitsarevo

    Thanks Werner, it's OK now, my problem is solved; below is the output of your script.
    While I'm at it: if you have some docs or advice for my database schema synchronization, is Oracle Streams the best choice, or can I use anything else (but not the Data Guard concept or a standby database, because I only want to apply DML changes, not DDL)? I'd appreciate any docs for Oracle Streams, especially for schema-level synchronization rather than individual tables.
    many thanks again, and please send to my email address [email protected] if needed
    ABILLITY>DELETE FROM system.logmnr_uid$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnr_session$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtcs;
    0 rows deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtlo;
    13 rows deleted.
    ABILLITY>EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    PL/SQL procedure successfully completed.
    regards
    raitsarevo

  • Data Guard with Oracle 9i and 10g -- have you done it?

    We have an ERP system (JDEdwards, db size approx 600 GB) and we'd like to copy the data in this system to a database on another computer. We'd use the remote database for reporting, so we need it to be available. We are looking at using Data Guard running in Logical Standby mode.
    The ERP database is Oracle 9i, and we are considering having the Logical Standby database be 10g. Have you done this and if so, what was your experience?
    Here's why we want to do this. There are many tables in the ERP database that contain descending indexes. We can't change this, and it is not supported with Data Guard under Oracle 9.2. Tables with descending indexes do not copy to the logical standby database.
    We were told that this is corrected in a newer release of Oracle, and we were wondering if a 10g database would properly copy tables with descending indexes.
    Thanks in advance for any experiences you can share.
    Best Regards,
    Mike

    What you are trying to do might not be possible, because when you create a logical standby you have to create a physical standby first and convert it. Primary and standby have to be the same version.
    In the future it might be possible, because 10g will support rolling upgrades of a logical standby.
    However, even if it's possible, you would have to go through a lot of pain to set it up and maintain it, because Oracle doesn't support the setup.
    What you could try are Streams and Replication; I would say Streams is the interesting one, because Oracle says: "Oracle Streams and Oracle Data Guard (including Data Guard SQL Apply) are independent features based on some common underlying infrastructure and technology."
    http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardStreams.html

  • Oracle Streams Update conflict handler not working

    Hello,
    I've been working on Oracle Streams, and this time we have to come up with an update conflict handler.
    We are using Oracle 11g in a Solaris 10 environment.
    So far, we have implemented bi-directional Oracle Streams replication and it is working fine.
    Now, when I try to implement the update conflict handler, it executes successfully but does not fulfill the desired functionality.
    Here are the steps i performed:
    Step-1:
    create table jas23.test73 (first_name varchar2(20), last_name varchar2(20), salary number(7));
    ALTER TABLE jas23.test73 ADD (time TIMESTAMP WITH TIME ZONE);
    insert into jas23.test73 values ('gugg','qwer',2000,SYSTIMESTAMP);
    insert into jas23.test73 values ('papa','sdds',2050,SYSTIMESTAMP);
    insert into jas23.test73 values ('jaja','xzxc',2075,SYSTIMESTAMP);
    insert into jas23.test73 values ('kaka','cvdxx',2095,SYSTIMESTAMP);
    insert into jas23.test73 values ('mama','rfgy',1900,SYSTIMESTAMP);
    insert into jas23.test73 values ('tata','jaja',1950,SYSTIMESTAMP);
    commit;
    Step-2:
    Connect as strmadmin/strmadmin on server1:
    SQL> ALTER TABLE jas23.test73 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    Step-3
    SQL>
    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'first_name';
      cols(2) := 'last_name';
      cols(3) := 'salary';
      cols(4) := 'time';
      -- MAXIMUM resolution: the row with the greater "time" value wins
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'jas23.test73',
        method_name       => 'MAXIMUM',
        resolution_column => 'time',
        column_list       => cols);
    END;
    /
    Step-4
    Connect as strmadmin/strmadmin on server2:
    SQL>
    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'first_name';
      cols(2) := 'last_name';
      cols(3) := 'salary';
      cols(4) := 'time';
      -- same handler on the second site, so both ends resolve identically
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'jas23.test73',
        method_name       => 'MAXIMUM',
        resolution_column => 'time',
        column_list       => cols);
    END;
    /
    Step-5
    And now, if I try to update the value of salary, it is not getting handled by the update conflict handler.
    update jas23.test73 set salary = 1500,time=SYSTIMESTAMP where first_name='papa'; --server1
    update jas23.test73 set salary = 2500,time=SYSTIMESTAMP where first_name='papa'; --server2
    commit; --server1
    commit; --server2
    Note: the two servers are in different time zones (I hope that won't be a problem).
    Now, after performing all these steps, the data is not the same at both sites.
    Error (DBA_APPLY_ERROR):
    ORA-26787: The row with key ("FIRST_NAME", "LAST_NAME", "SALARY", "TIME") = (papa, sdds, 2000, 23-DEC-10 05.46.18.994233000 PM +00:00) does not exist in table JAS23.TEST73
    ORA-01403: no data found
    Please help.
    Thanks.
    Edited by: gags on Dec 23, 2010 12:30 PM

    Hi,
    When I tried to do it on server2:
    SQL> ALTER TABLE jas23.test73 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    it throws an error:
    ERROR at line 1:
    ORA-32588: supplemental logging attribute all column exists
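    One hedged observation: the ORA-26787 key list names every column, which suggests TEST73 has no primary key, so the apply process treats all columns (including SALARY) as the row key, and the already-updated salary no longer matches. A substitute key declared on both sites might be worth trying (a sketch, not a verified fix):
    BEGIN
      -- sketch: identify rows by first_name/last_name instead of every column
      DBMS_APPLY_ADM.SET_KEY_COLUMNS(
        object_name => 'jas23.test73',
        column_list => 'first_name,last_name');
    END;
    /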

  • BLOB in Oracle Streams

    Oracle 10.2.0.4:
    I am new to Oracle Streams and just reading the docs at this point. I read in http://download.oracle.com/docs/cd/B19306_01/server.102/b14229.pdf that BLOBs are not supported by Streams. I am just looking for a basic Streams configuration with some rule processing that will send LCRs from a source queue to a destination queue. And as I understand it, I can do that by using an ANYDATA payload.
    We have some tables with BLOB data.
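    From the docs, the basic ANYDATA queue setup appears to be a single call, along these lines (a sketch; the queue names are hypothetical):
    BEGIN
      -- sketch: create an ANYDATA queue and queue table for Streams
      DBMS_STREAMS_ADM.SET_UP_QUEUE(
        queue_table => 'strmadmin.streams_qt',
        queue_name  => 'strmadmin.streams_q',
        queue_user  => 'strmadmin');
    END;
    /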

    It's all a balancing act. If you absolutely need both data centers processing transactions simultaneously, you'll need Streams.
    Let's start with the simplest possible case of this: two data centers A and B, with databases 1 and 2. Database 1 is in data center A, database 2 is in data center B. If database 1 fails, would you be able to shift traffic to database 2 relatively easily? Assuming that you're building in functionality to shift load between databases, which is normally the case when you're building this sort of distributed application, it may be easier to do this sort of shift regardless of the reason that database 1 fails.
    If you have a standby database in each data center (1A as the standby for database 1, 2A as the standby for database 2), when 1 fails, you have to figure out whether whatever caused 1 to fail will also cause 1A to fail. If data center A is having connectivity or power issues, for example, you would have to shift traffic to 2 rather than failing 1 over to 1A. On the other hand, if it was an isolated server failure, you could either shift traffic to 2 or fail over to 1A. There is some risk that having a more complex failure scenario makes it more likely that someone makes a mistake-- there will be a number of failover steps that you'd do only if you're failing from 1 to 1A, other steps that you'd do if you were shifting traffic from 1 to 2, and some steps that are common-- and makes it more difficult to fully test all the scenarios. On the other hand, there may well be benefits to having more options to respond to different sorts of failures. And politics/reporting structure as well as geography play a role here-- if the data centers are on different continents, shifting traffic is probably much less desirable than if you have two US data centers.
    If, rather than having standbys 1A and 2A, database 1 and 2 were really multi-node RAC clusters, both database 1 and database 2 would be able to survive most sorts of localized hardware failure (i.e. one node can fail on database 1 without affecting whether database 1 is up and processing transactions). If there was a data center wide failure, you'd still have to shift traffic. But one server dying in a pile wouldn't be an issue. Of course, there would be a handful of events that could take down the entire RAC cluster without affecting the data center where a standby could potentially be used (i.e. the SAN for the cluster fails but the standby is using a different SAN). Those may not be particularly likely, however, so it may make sense not to bother with contingency planning for them and just assume that anything that knocks out all the nodes of the cluster forces traffic to be shifted to 2 and that it wouldn't be worth trying to maintain a standby for those scenarios.
    There are lots of trade-offs here. You have simplicity of setup, you have simplicity of failover, you have robustness, etc. And there are going to be cases where you realistically need to take a stab at predicting how likely various events are, which gets pretty deeply into hardware, setup, and politics (i.e. how likely a server is to fail depends on whether you've bought a high-end server with doubly-redundant-everything or a commodity Linux box, how likely a data center is to fail depends on the data center's redundancy measures and your level of confidence in those measures, etc.)
    Justin
