Oracle Streams PIT on Source DB

I'm testing backup/recovery in my Streams environment, simulating a PITR on the source. The objective is to get both the source and destination to the same point in time, driven by the source. That being said, if I recover to a time 5 hours prior on both DBs, what are the steps to recover the Streams processes? I have a 'best practices' document, but I'm not sure of the details of the steps. It says to:
1). create a new capture
2). set the maximum_scn of the capture to the resetlogs SCN and set the start_scn of the capture to the oldest_scn of the apply. My question is...what capture? The new or the original?
3). Start the capture...again, start the new or the original capture?
Can anyone help clarify?
Thanks.

If there's only one destination site in the replication, you can reset the SCNs on the existing capture.
If there are multiple destination sites and only one of them needs recovery, it's suggested to create a new capture and propagation to the crashed site and reset the SCNs on that new capture. This approach avoids re-applying transactions to the sites that didn't crash.
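As a rough sketch (the capture name and the SCN values are hypothetical placeholders; take the actual numbers from your environment and from the Streams PITR best-practices note), the SCN reset on whichever capture you are going to start would look something like this:
-- at DST: the oldest SCN still needed by the apply process
SELECT apply_name, oldest_message_number FROM dba_apply_progress;
-- at SRC: the resetlogs SCN after the point-in-time recovery
SELECT resetlogs_change# FROM v$database;
-- on the chosen capture ('PITR_CAPTURE' is a placeholder name)
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'PITR_CAPTURE',
    start_scn    => 1234567);              -- the oldest_scn taken from the apply
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'PITR_CAPTURE',
    parameter    => 'maximum_scn',
    value        => '2345678');            -- the resetlogs SCN of the source
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'PITR_CAPTURE');
END;
/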

Similar Messages

  • Oracle streams configuration problem

    Hi all,
    I'm trying to configure Oracle Streams on my source database (Oracle 9.2), and when I execute the package DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS'); I get the error below:
    ERROR at line 1:
    ORA-01353: existing Logminer session
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2238
    ORA-06512: at line 1
    Some docs say I have to destroy all LogMiner sessions, but when I check the v$session view I cannot identify any LogMiner session. Can someone help me? I need the Streams tools for schema synchronization between my production database and my data warehouse database.
    What I want to know is how to destroy or stop the LogMiner session.
    Thanks for your help
    regards
    raitsarevo

    Thanks Werner, it's OK now, my problem is solved, and below is the output of your script.
    I'd also appreciate any docs or advice on database schema synchronisation: is Oracle Streams the best option, or can I use anything else (not the Data Guard concept or a standby database, since I only want to apply DML changes, not DDL)? Please share any docs on Oracle Streams, especially for schema-level (not table-level) synchronization.
    Many thanks again, and please send them to my email address [email protected] if needed
    ABILLITY>DELETE FROM system.logmnr_uid$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnr_session$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtcs;
    0 rows deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtlo;
    13 rows deleted.
    ABILLITY>EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    PL/SQL procedure successfully completed.
    regards
    raitsarevo

  • Oracle streams configuration

    Hi,
    Our organization is planning to implement Oracle Streams. I have a couple of fundamental questions:
    1. Can you configure Oracle Streams while the source database is up and running?
    2. I think the answer is yes, but please confirm that the LCR data can be extracted and transformed before applying to the target
    3. Is there any performance impact on the source database if Oracle Streams is enabled? If yes, how much?
    That’s all for now…
    Thanks…

    See my answers inline.
    1. Can you configure Oracle Streams while the source database is up and running?
    YES. Your database must be up and running to create streams processes.
    2. I think the answer is yes, but please confirm that the LCR data can be extracted and transformed before applying to the target
    You have to instantiate the target database so that the source and target are in sync.
    3. Is there any performance impact on the source database if Oracle Streams is enabled? If yes, how much?
    There is a small overhead from running the capture process on the source, but it should be negligible. It also depends on what is being streamed, e.g. whether LOBs are being streamed, high transaction volume, etc.
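    For the instantiation mentioned in the answer to question 2, a minimal sketch (the schema name 'SCOTT' and the global name 'SRC.WORLD' are hypothetical) of recording the instantiation SCN so the apply process knows where to start would be something like:
    -- 1) At the source, note the current SCN once the export for the target is taken:
    SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER() FROM dual;
    -- 2) At the target, record that SCN so the apply process skips anything older:
    BEGIN
      DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
        source_schema_name   => 'SCOTT',
        source_database_name => 'SRC.WORLD',
        instantiation_scn    => 1234567);  -- the SCN noted at the source
    END;
    /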

  • Oracle Streams and CLOB column

    Hi there,
    We are using "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit". My question is "Does Oracle Streams captures, propagates (source capture method) and applies CLOB column changes?"
    If yes, is this default behavior? Can we tell Streams to exclude the CLOB column from the whole (capture-propage-apply) process?
    Thanks in advance!

    You can exclude columns via a rule (dbms_streams_adm.delete_column).
    CLOBs are captured.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17069/strms_capture.htm#i1006263
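    A hedged sketch of excluding a column with that procedure (the rule, table and column names are hypothetical; DELETE_COLUMN is documented for 11g, so on 10.2.0.4 you may need a custom rule-based transformation instead):
    BEGIN
      DBMS_STREAMS_ADM.DELETE_COLUMN(
        rule_name   => 'strmadmin.hr_capture_rule',  -- hypothetical capture rule name
        table_name  => 'hr.employees',               -- hypothetical table
        column_name => 'resume_text',                -- the CLOB column to exclude
        value_type  => '*',                          -- strip both old and new values
        operation   => 'ADD');                       -- add the declarative transformation
    END;
    /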

  • Oracle Streams b/w MS-Access 2007 and Oracle 10g.

    Can we set up Oracle Streams between MS-Access 2007 and Oracle 10g, with MS-Access as the source and Oracle 10g as the destination database? If so, can anyone please give me a little heads-up with supporting docs or any other source of info?

    Help Help....!!!

  • Help on Oracle streams 11g configuration

    Hi Streams experts
    Can you please validate the following creation process steps?
    What I need Streams to do is one-way replication of the AR
    schema from one database to another database. Both DML and DDL
    changes shall be replicated.
    This is an Oracle Streams 11g configuration. I would also need your help
    on the maintenance steps, controls and procedures.
    2 databases
    1 src as source database
    1 dst as destination database
    replication type 1 way of the entire schema FaeterBR
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src:
    SQL> create tablespace strepadm01 datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace strepadm01 datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace strepadm01
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_name set to True, the db_link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control what schema shall be replicated
    FaeterBR is the schema to be replicated
    Step 7. Add supplemental logging to the FaeterBR schema on all the
    tables?
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src_queue');
    END;
    ---At DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_dst_queue_table',
    queue_name => 'streams_adm.FaeterBR_dst_queue');
    END;
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    Step 10. Instantiate the FaeterBR schema at DST by doing export/import.
    Can I use Data Pump to do that instead?
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create FaeterBR tablespaces and user:
    create tablespace FaeterBR_data datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR
    default tablespace FaeterBR_data
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'streams_adm.FaeterBR_src_queue',
    destination_queue_name=>'streams_adm.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 13. Create substitution key columns for all the tables of the
    FaeterBR schema on DST that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
    Step 14. Configure conflict resolution at the replication db (DST).
    Is there an easier method applicable to the whole schema?
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'id',
    column_list =>cols);
    END;
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    Step 17. Test streams propagation of rows from source (src) to
    replication (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    commit;
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
    Any other test that can be made?

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.

  • Oracle Streaming in same database - 10g

    Hello,
    I am trying to do streaming at table level for all DML changes from one schema (source) to another schema (target), controlled by an admin schema (stream_admin). I have all these schemas in the same database (10g Enterprise Edition).
    As given in the documentation, I created
    1. two queues(in_Q and out_Q) in Stream_admin,
    2. two process( capture and apply process) in Stream_admin ,
    3. A propagation rule for propagation between in_Q and out_Q,
    4. did instantiation,
    5. started both capture and apply process.
    Having done that, I insert into the source table and check for the same in the target, but alas, nothing happens. I fail to achieve streaming.
    I am not getting any error, neither in the processes, nor in the propagation or queues. And all queues, rules and processes are enabled.
    Please help.

    datapump uses dbms_metadata extensively.
    The problem is twofold:
    - the amount of data
    - why on earth do you need to 'copy' these 1.2 Tb a 'number of times'
    One would expect you would have tested the upgrade on a smaller test database, and then you wouldn't need to fix bugs in a 1.2 TB database.
    A better idea would be to duplicate the complete database using RMAN. This doesn't perform conventional INSERTs and doesn't create redo.
    Sybrand Bakker
    Senior Oracle DBA
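    For reference, a minimal sketch of the RMAN duplication Sybrand suggests (the connect strings and the DUPDB name are hypothetical, and the auxiliary instance must already be started NOMOUNT):
    rman TARGET sys/***@prod AUXILIARY sys/***@dupdb
    RMAN> DUPLICATE TARGET DATABASE TO dupdb NOFILENAMECHECK;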

  • Oracle Streams 'ORA-25215: user_data type and queue type do not match'

    I am trying replication between two databases (10.2.0.3) using Oracle Streams.
    I have followed the instructions at http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
    The main steps are:
    1. Set up ARCHIVELOG mode.
    2. Set up the Streams administrator.
    3. Set initialization parameters.
    4. Create a database link.
    5. Set up source and destination queues.
    6. Set up supplemental logging at the source database.
    7. Configure the capture process at the source database.
    8. Configure the propagation process.
    9. Create the destination table.
    10. Grant object privileges.
    11. Set the instantiation system change number (SCN).
    12. Configure the apply process at the destination database.
    13. Start the capture and apply processes.
    For step 5, I have used 'set_up_queue' in the 'dbms_streams_adm' package. This procedure creates a queue table and an associated queue.
    The problem is that, in the propagation process, I get this error:
    'ORA-25215: user_data type and queue type do not match'
    I have checked it, and the queue table and its associated queue are created as shown:
    sys.dbms_aqadm.create_queue_table (
    queue_table => 'CAPTURE_SFQTAB'
    , queue_payload_type => 'SYS.ANYDATA'
    , sort_list => ''
    , COMMENT => ''
    , multiple_consumers => TRUE
    , message_grouping => DBMS_AQADM.TRANSACTIONAL
    , storage_clause => 'TABLESPACE STREAMSTS LOGGING'
    , compatible => '8.1'
    , primary_instance => '0'
    , secondary_instance => '0');
    sys.dbms_aqadm.create_queue(
    queue_name => 'CAPTURE_SFQ'
    , queue_table => 'CAPTURE_SFQTAB'
    , queue_type => sys.dbms_aqadm.NORMAL_QUEUE
    , max_retries => '5'
    , retry_delay => '0'
    , retention_time => '0'
    , COMMENT => '');
    The capture process is 'capturing changes' but it seems that these changes cannot be enqueued into the capture queue because the data type is not correct.
    As far as I know, 'sys.anydata' payload type and 'normal_queue' type are the right parameters to get a successful configuration.
    I would be really grateful for any idea!

    Hi
    You need to run a VERIFY to make sure that the queues are compatible. At least on my 10.2.0.3/4 I need to do it.
    DECLARE
    rc BINARY_INTEGER;
    BEGIN
    DBMS_AQADM.VERIFY_QUEUE_TYPES(
    src_queue_name => 'np_out_onlinex',
    dest_queue_name => 'np_out_onlinex',
    destination => 'scnp.pfa.dk',
    transformation => 'TransformDim2JMS_001x',
    rc => rc);
    DBMS_OUTPUT.PUT_LINE('Compatible: '||rc);
    END;
    /
    If you don't have transformations and/or a remote destination, then delete those parameters.
    Check the table: SYS.AQ$_MESSAGE_TYPES there you can see what are verified or not
    regards
    Mette

  • Oracle streams versus oracle goldengate

    Hi all,
    I just found out about Oracle GoldenGate and was wondering if any of you could share the differences between it and Oracle Streams when it comes to change data capture capabilities. Also, how does OWB come into play with Oracle GoldenGate? For instance, OWB 11gR2 has CDC capabilities, so does that mean its CDC capabilities are based on Oracle Streams?

    Hi,
    With CDC/Streams you have two choices:
    process the Oracle logfiles in the source database/server and read the resulting change records from the target database/server, or
    transport the logfiles to the target database/server and process them there.
    The advantage of the latter case is that you relieve the source from the load of processing the logfiles, but target and source then need to have the same database and server versions. GoldenGate, if I understand correctly, converts the logfiles to its own format (with minimal load) and these can be processed by GoldenGate on a target database and server of a different version from the source.
    So you have the advantage (little load on the source) without the disadvantage (source and target have to be of equal versions).
    Regards,
    Jaap.

  • BLOB in Oracle Streams

    Oracle 10.2.0.4:
    I am new to Oracle Streams and just reading the docs at this point. I read in the http://download.oracle.com/docs/cd/B19306_01/server.102/b14229.pdf doc that BLOBs are not supported by Streams. I am just looking for a basic Streams configuration with some rule processing which will send LCRs from a source queue to a destination queue. And as I understand it, I can do that by using an ANYDATA payload.
    We have some tables with BLOB data.

    It's all a balancing act. If you absolutely need both data centers processing transactions simultaneously, you'll need Streams.
    Lets start with the simplest possible case of this, two data centers A and B, with databases 1 and 2. Database 1 is in data center A, database 2 is in data center B. If database 1 fails, would you be able to shift traffic to database 2 relatively easily? Assuming that you're building in functionality to shift load between databases, which is normally the case when you're building this sort of distributed application, it may be easier to do this sort of shift regardless of the reason that database 1 fails.
    If you have a standby database in each data center (1A as the standby for database 1, 2A as the standby for database 2), when 1 fails, you have to figure out whether whatever caused 1 to fail will also cause 1A to fail. If data center A is having connectivity or power issues, for example, you would have to shift traffic to 2 rather than failing 1 over to 1A. On the other hand, if it was an isolated server failure, you could either shift traffic to 2 or fail over to 1A. There is some risk that having a more complex failure scenario makes it more likely that someone makes a mistake-- there will be a number of failover steps that you'd do only if you're failing from 1 to 1A and other steps that you'd do if you were shifting traffic from 1 to 2 and some steps that are common-- and makes it more difficult to fully test all the scenarios. On the other hand, there may well be benefits to having more options to respond to different sorts of failures. And politics/ reporting structure as well as geography plays a role here-- if the data centers are on different continents, shifting traffic is probably much less desirable than if you have two US data centers.
    If, rather than having standbys 1A and 2A, database 1 and 2 were really multi-node RAC clusters, both database 1 and database 2 would be able to survive most sorts of localized hardware failure (i.e. one node can fail on database 1 without affecting whether database 1 is up and processing transactions). If there was a data center wide failure, you'd still have to shift traffic. But one server dying in a pile wouldn't be an issue. Of course, there would be a handful of events that could take down the entire RAC cluster without affecting the data center where a standby could potentially be used (i.e. the SAN for the cluster fails but the standby is using a different SAN). Those may not be particularly likely, however, so it may make sense not to bother with contingency planning for them and just assume that anything that knocks out all the nodes of the cluster forces traffic to be shifted to 2 and that it wouldn't be worth trying to maintain a standby for those scenarios.
    There are lots of trade-offs here. You have simplicity of setup, you have simplicity of failover, you have robustness, etc. And there are going to be cases where you realistically need to take a stab at predicting how likely various events are, which gets pretty deeply into hardware, setup, and politics (i.e. how likely a server is to fail depends on whether you've bought a high-end server with doubly-redundant everything or a commodity Linux box; how likely a data center is to fail depends on the data center's redundancy measures and your level of confidence in those measures, etc.).
    Justin

  • Oracle Streams - Dataguard Configuration

    Dataguard<------Streams<----Production------> Dataguard
    I'm planning to implement a 4-way system where my production database, with its own physical standby, will be streaming (Streams database) to a reporting database with its own physical standby. So, effectively, my production database, and especially its redo logs, will be put under severe load. I would like some light shed on the feasibility of such a setup. What parameters can I take care of so as to make it a profitable high-availability, high-performance system?
    Any suggestions and advice will be highly appreciated..

    Remember that Streams checks the source DB name of the LCR. Thus the db_name of each standby must be the same as that of the open DB, or the remote DB will reject the LCRs coming from the standby when it is activated.
    Also, Streams, Data Guard and crashes don't fit together so well with respect to Streams consistency. At crash time, some transactions will be lost on the standby that have already been sent by Streams, since Streams reacts within a second. Thus when you activate the Data Guard standby, with its loss of some data, you are going to be missing some source transactions that have already been replicated. You may end up with errors on the target site, either a duplicate value on a transaction or 'OLD value in target does not match new value in LCR'.
    You can't avoid this 100%, but you can decrease its extent. Use 'LGWR ASYNC' as the Data Guard destination method.
    LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC'
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_transport.htm#i1265762
    This requires creating standby redo logs on the Data Guard DB (and also on the source DB, since it may itself become the standby) so that LGWR updates the remote redo as soon as it can ('async', because 'sync' means that the commit on the source is done AFTER the commit into the Data Guard standby, and you don't want that).
    From my own observation, the 'LGWR async nowait' lag is usually under 1 second behind production, which is very good.
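    A minimal sketch of the standby redo log part (group numbers, paths and sizes are hypothetical; size them like the online redo logs and create one group more per thread):
    -- run on the standby (and also on the primary, so the roles can switch)
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
      ('/u01/oradata/boston/srl04.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
      ('/u01/oradata/boston/srl05.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 6
      ('/u01/oradata/boston/srl06.log') SIZE 50M;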

  • Execution of Row level trigger in Oracle Streams.

    Hi All,
    Oracle Database version : 9.2.0.4 on windows NT/2000 environment.
    We managed to install and configure the Oracle Streams technology.
    Oracle Streams seems to be working fine for replication of DML & DDL changes from the source database to the target database.
    Following is detail at source end.
    Source Sid = acc
    Source Schema = stream
    Source Table = dept
    structure of dept table.
    Name Null? Type
    DEPTNO NOT NULL NUMBER(5)
    DNAME NOT NULL VARCHAR2(10)
    LOC NOT NULL VARCHAR2(10)
    Streamadmin user = strmadmin
    Following is detail at target end.
    Target Sid = fin
    Target Schema = stream
    Target Table = dept
    structure of dept table.
    Name Null? Type
    DEPTNO NOT NULL NUMBER(5)
    DNAME NOT NULL VARCHAR2(10)
    LOC NOT NULL VARCHAR2(10)
    TRAN_DATE                    NULL DATE DEFAULT SYSDATE
    I checked on insert/update/delete of rows into dept table at source database, changes are correctly replicated to target table dept.
    I wrote a simple trigger which is as follows on dept table at target database.
    create or replace trigger dept_upd_del
    before delete or update of dname,loc on stream.dept
    for each row
    begin
    dbms_output.put_line('Inside Trigger');
    if updating then
    dbms_output.put_line('Update');     
    insert into stream.dept_change values (:old.deptno,'U',sysdate);
    end if;
    if deleting then
    dbms_output.put_line('Delete');
    insert into stream.dept_change values (:old.deptno,'D',sysdate);
    end if;
    end;
    I expect this trigger to be executed whenever changes occur in the dept table at the target database, i.e. whenever DML changes are propagated from the source to the target table. However, I found that the above trigger is not executed at all.
    I was further surprised, since in case I update/delete rows in the target table dept directly, the above trigger executes correctly.
    Can someone please explain this?
    I believe the Streams technology uses INSERT/UPDATE/DELETE statements when changes are applied at the target table, but this doesn't seem to be the case.
    Thanks in Advance.
    Regards,
    Vidyanand

    The trigger at the destination will not fire because, as far as Streams is concerned, it has already fired at the source site. Read about that in the Streams documentation on page 4-25. To change the "fire once" property of the trigger, use the procedure SET_TRIGGER_FIRING_PROPERTY in the DBMS_DDL package.
    Hope this helps.
    Claudine
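    A minimal sketch of that call for the trigger described above (assuming it is owned by the STREAM schema):
    BEGIN
      DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
        trig_owner => 'STREAM',
        trig_name  => 'DEPT_UPD_DEL',
        fire_once  => FALSE);  -- let the trigger also fire for rows applied by the apply process
    END;
    /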

  • Oracle Streams 10gR2 Schedule Propagation or Application

    Hi all,
    Is there a way to schedule the propagation and apply when configuring Oracle Streams? I mean, I don't want to run continuous online replication, because I have other objects outside Oracle that need to be replicated along with the db to keep my application (read only) in sync.
    So, where can I do that?
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'shm',
    streams_type => 'apply',
    streams_name => 'apply_from_db1',
    queue_name => 'strmadmin.from_db1',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1.world',
    inclusion_rule => true);
    END;
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'shm',
    streams_name => 'db1_to_db2',
    source_queue_name => 'strmadmin.captured_db1',
    destination_queue_name => '[email protected]',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1.world',
    inclusion_rule => true,
    queue_to_queue => true);
    END;
    /

    Hello, am I not being clear with my question? Or maybe I'm doing everything the wrong way with the Streams propagation and apply processes?
    I looked into all the documentation available and I can't find how to schedule the processes ... what part am I misunderstanding?
    As long as I have other non-Oracle objects (file system objects, say some ECM file system objects) to replicate along with the data schema, I can't automatically replicate all the changes that occur in the source schema to the destination database schema. So, I'm replicating the ECM file system objects with another external tool, which has Windows scheduler integration ... but how can I schedule the propagation and/or apply processes in a 2-way Streams environment?
    Please, I need a hint from somebody.
    Thanks.
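    One lever worth looking at (a hedged sketch, not from this thread; the destination link and the timing values are hypothetical) is the propagation schedule, which can be altered so that propagation runs in windows instead of continuously:
    BEGIN
      DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE(
        queue_name        => 'strmadmin.captured_db1',  -- source queue from the post above
        destination       => 'db2.world',               -- hypothetical destination db link
        destination_queue => 'strmadmin.from_db1',      -- needed for queue_to_queue propagations
        duration          => 3600,                      -- propagate for one hour per window
        next_time         => 'SYSDATE + 1/24',          -- then wait an hour before the next window
        latency           => 60);
    END;
    /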

  • Oracle Streams vs. Updatable Materialized View

    Does anyone have an idea in which cases Oracle Streams is better than updatable MVs, or vice versa?

    Are you really talking about Updatable Materialized Views? Or multi-master replication? Personally, I'm rather hard-pressed to come up with a situation where updatable materialized views would be useful unless you're taking the next step and doing multi-master replication.
    In general, Streams is going to put less load on the source system than materialized views and is going to replicate data more quickly. The downside tends to be that it's a relatively new technology, so it's not appropriate for environments that have older versions of Oracle. Going along with that, you'll find a lot more people/ organizations/ setups using materialized views than Streams, which can be a good thing if you need to hire new staff/ get support from a local user group/ etc. Streams also tends to be more flexible, which can be a good thing, but also tends to make things a bit more complicated.
    If you can outline the particular problem you're trying to solve, we can probably be a lot more specific...
    Justin

  • Oracle Streams - First Load

    Hi,
    I have an Oracle Streams environment working well. I replicate and transform the data.
    My problem is:
    Initially I have a source database with 3 million records and my destination database with no records.
    I have to equalize the source and destination databases before starting to synchronize them.
    Do you know how I can replicate (and transform) this data for my first load?
    It's not only copying all the data, it's copying and transforming it.
    Is it possible to use the same transformation process for this first load?
    If I didn't have to transform the data I would use the Data Pump tool (for example), but I have to transform the data for my destination database.
    Thanks

    I am in DAC and trying to run the Informatica ETL for one of the prebuilt execution plans (HR - Oracle R12). I built the project plan and ran it. I got a failed status for all of the ETL steps (starting from the first one, 'Load Row into Run table'). I have attached the error log for 'Load Row into Run table' below.
    I took a closer look at all the steps, and it seems like they all have this common "fail parent if this task fails" error message.
    Error log for
    pmcmd startworkflow -u Administrator -p **** -s SOBI:4006 -f SILOS -lpf C:\Informatica\PowerCenter8.1.1\server\infa_shared\SrcFiles\SILOS.SIL_InsertRowInRunTable.txt SIL_InsertRowInRunTable
    Status Desc : Failed
    WorkFlowMessage :
    =====================================
    STD OUTPUT
    =====================================
    Informatica(r) PMCMD, version [8.1.1 SP5], build [186.0822], Windows 32-bit
    Copyright (c) Informatica Corporation 1994 - 2008
    All Rights Reserved.
    Invoked at Thu May 07 14:46:04 2009
    Connected to Integration Service at [SOBI:4006]
    Folder: [SILOS]
    Workflow: [SIL_InsertRowInRunTable] version [1].
    Workflow run status: [Failed]
    Workflow run error code: [36331]
    Workflow run error message: [WARNING: Session task instance [SIL_InsertRowInRunTable] failed and its "fail parent if this task fails" setting is turned on. So, Workflow [SIL_InsertRowInRunTable] will be failed.]
    Start time: [Thu May 07 14:45:43 2009]
    End time: [Thu May 07 14:45:47 2009]
    Workflow log file: [C:\Informatica\PowerCenter8.1.1\server\infa_shared\WorkflowLogs\SIL_InsertRowInRunTable.log]
    Workflow run type: [User request]
    Run workflow as user: [Administrator]
    Integration Service: [Oracle_BI_DW_Base_Integration_Service]
    Disconnecting from Integration Service
    Completed at Thu May 07 14:46:04 2009
    =====================================
    ERROR OUTPUT
    =====================================
    Error Message : Unknown reason for error code 36331
    ErrorCode : 36331
    If you have any input on how to fix this issue, please let me know.
