Oracle Streams Capture/Propagate with Synonyms

In Source:
CISADM -- stores all objects
CISUSER -- stores synonyms of CISADM
In Target:
CISADM -- stores all objects
CISUSER -- stores synonyms of CISADM
Streams was implemented successfully on the CISADM schema.
Whatever changes happen in the source CISADM are propagated successfully to the target CISADM.
Now the issue: any changes made through the source CISUSER (synonyms of CISADM) are not propagating to the target CISADM.
Thanks,
San

Hi San,
When you make changes to CISADM objects of the source through synonyms from CISUSER, the changes are applied to the CISADM objects of the source and replicated successfully to CISADM of the destination (the replicated database/schema).
See the following test case for reference.
(Consider apps as your primary schema and testuser as the schema holding the synonyms; the apps schema has been replicated using Streams.)
From the primary database's main schema:
SQL> grant insert,update,delete, select on xyz to testuser;
Grant succeeded.
SQL> select * from xyz;
N
1
2
3
4
From testuser schema of primary database:
SQL> select * from apps.xyz;
N
1
2
3
4
SQL> create synonym xyz for apps.xyz;
Synonym created.
SQL> select * from xyz;
N
1
2
3
4
SQL> insert into xyz values(6);
1 row created.
SQL> /
1 row created.
SQL> /
1 row created.
SQL> commit;
Commit complete.
SQL> select * from xyz;
N
6
6
6
1
2
3
4
7 rows selected.
From apps schema of primary database:
SQL> select * from xyz;
N
6
6
6
1
2
3
4
7 rows selected.
SQL>
From apps schema of replicated database:
SQL> select * from xyz;
N
6
6
6
1
2
3
4
7 rows selected.
SQL>
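The reason this works: a synonym is only a name alias in the data dictionary, so DML through testuser.xyz is recorded against apps.xyz in the redo, which is what the capture process mines. A quick way to confirm the alias resolution (a sketch using the standard dictionary view):
SQL> SELECT owner, synonym_name, table_owner, table_name
  2    FROM dba_synonyms
  3   WHERE owner = 'TESTUSER' AND synonym_name = 'XYZ';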
Hope this helps.
Regards,
Dipali

Similar Messages

  • Oracle Streams and CLOB column

    Hi there,
    We are using "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit". My question is "Does Oracle Streams captures, propagates (source capture method) and applies CLOB column changes?"
    If yes, is this default behavior? Can we tell Streams to exclude the CLOB column from the whole (capture-propage-apply) process?
    Thanks in advance!

    You can exclude columns via a declarative rule-based transformation (DBMS_STREAMS_ADM.DELETE_COLUMN).
    CLOBs are captured.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17069/strms_capture.htm#i1006263
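    A minimal sketch of that exclusion (the rule, table, and column names here are hypothetical); DELETE_COLUMN attaches a declarative transformation to the rule so the CLOB is stripped from the LCRs before they are propagated or applied:
    BEGIN
      DBMS_STREAMS_ADM.DELETE_COLUMN(
        rule_name   => 'strmadmin.docs_capture_rule',
        table_name  => 'hr.documents',
        column_name => 'doc_body',   -- the CLOB column to exclude
        value_type  => '*',          -- strip it from both old and new values
        step_number => 0,
        operation   => 'ADD');       -- add the transformation to the rule
    END;
    /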

  • Oracle streams versus oracle goldengate

    Hi all,
    I just found out about Oracle GoldenGate and was wondering if anyone could share the differences between it and Oracle Streams when it comes to change data capture capabilities. Also, how does OWB come into play with Oracle GoldenGate? For instance, OWB 11gR2 has CDC capabilities, so does that mean its CDC capabilities are based on Oracle Streams?

    Hi,
    With CDC/Streams you have two choices:
    process the Oracle log files on the source database/server and read the resulting change records from the target database/server, or
    transport the log files to the target database/server and process them there.
    The advantage of the latter case is that you relieve the source from the load of processing the log files, but the target and source then need to have the same database and server versions. GoldenGate, if I understand correctly, converts the log files to its own format (with minimal load), and these can be processed by GoldenGate on a target database and server of a different version from the source.
    So you have the advantage (little load on the source) without the disadvantage (source and target having to be of equal versions).
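    For the second option, here is a minimal sketch of creating a downstream capture process on the target, assuming 10gR2 and redo transport already shipping logs; the queue name, capture name and source database name are hypothetical:
    BEGIN
      DBMS_CAPTURE_ADM.CREATE_CAPTURE(
        queue_name         => 'strmadmin.downstream_q',   -- queue on the target
        capture_name       => 'downstream_capture',
        source_database    => 'SRCDB.WORLD',              -- global name of the source
        use_database_link  => FALSE,
        logfile_assignment => 'implicit');                -- mine logs registered by redo transport
    END;
    /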
    Regards,
    Jaap.

  • Oracle stream not working as Logminer is down

    Hi,
    The Oracle Streams capture process is not capturing updates made on a table for which the capture & apply processes are configured.
    The capture process & apply process are running fine, showing ENABLED as status & no errors. But no new records are captured in 'streams_queue_table' when I update a record in the table that is configured for capturing changes.
    This setup was working until I got 'ORA-01341: LogMiner out-of-memory' in the alert.log file. I guess LogMiner is no longer capturing the updates from the redo log.
    The current alert log shows the following lines for the LogMiner init process:
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
    LOGMINER: Memory Size = 10M, Checkpoint interval = 10M
    But the same log looked like this before:
    LOGMINER: Parameters summary for session# = 1
    LOGMINER: Number of processes = 3, Transaction Chunk Size = 1
    LOGMINER: Memory Size = 10M, Checkpoint interval = 10M
    LOGMINER: session# = 1, reader process P002 started with pid=18 OS id=5812
    LOGMINER: session# = 1, builder process P003 started with pid=36 OS id=3304
    LOGMINER: session# = 1, preparer process P004 started with pid=37 OS id=1496
    We can clearly see that the reader, builder & preparer processes are not starting after I got the out-of-memory exception in LogMiner.
    To allocate more space to LogMiner, I tried to assign a tablespace to it, but I got two exceptions that contradict each other.
    SQL> exec DBMS_LOGMNR.END_LOGMNR();
    BEGIN DBMS_LOGMNR.END_LOGMNR(); END;
    *ERROR at line 1:
    ORA-01307: no LogMiner session is currently active
    ORA-06512: at "SYS.DBMS_LOGMNR", line 76
    ORA-06512: at line 1
    SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts');
    BEGIN DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts'); END;
    *ERROR at line 1:
    ORA-01356: active logminer sessions found
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 232
    ORA-06512: at line 1
    When I tried stopping LogMiner, the exception was 'no LogMiner session is currently active', but when I tried to set the tablespace, the exception was 'active LogMiner sessions found'. I am not sure how to resolve this contradiction.
    Please let me know how to resolve this issue.
    Thanks
    siva

    The LogMiner session associated with a capture process is a special kind of session called a "persistent session". You will not be able to stop it using DBMS_LOGMNR; that package controls only non-persistent sessions.
    To stop the persistent LogMiner session you must stop the capture process.
    However, I think your problem is related to a lack of RAM rather than tablespace (i.e., disk) space. Try increasing the size of the SGA allocated to LogMiner by setting the capture parameter _SGA_SIZE. I can see you are using the default of 10M, which may not be enough in your case. Of course, you will have to increase the values of the init parameters streams_pool_size and sga_target/sga_max_size accordingly, to avoid other memory problems.
    To set the _SGA_SIZE parameter, use the PL/SQL procedure DBMS_CAPTURE_ADM.SET_PARAMETER. The example below would set it to 100 MB:
    BEGIN
      DBMS_CAPTURE_ADM.SET_PARAMETER('<name of capture process>', '_SGA_SIZE', '100');
    END;
    /
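    A sketch of the accompanying init parameter changes (the sizes are illustrative only, not recommendations; sga_max_size requires an instance restart):
    SQL> alter system set streams_pool_size=200M scope=both;
    SQL> alter system set sga_target=2G scope=spfile;
    SQL> alter system set sga_max_size=2G scope=spfile;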
    I hope this helps.
    Ilidio.

  • Capturing data of the previous time interval with Oracle Stream(HotLog)

    I read in the Oracle 10g manual that Oracle Streams can capture data within a specified time interval using the begin_date and end_date options.
    For example :
    BEGIN
    DBMS_CDC_PUBLISH.CREATE_CHANGE_SET(
    change_set_name => 'set_cns',
    description => 'set_cns...',
    change_source_name => 'HOTLOG_SOURCE',
    stop_on_ddl => 'y',
    begin_date => sysdate,
    end_date => sysdate + 1);
    END;
    However, if I set begin_date to a time in the past, Oracle doesn't capture data anymore (HotLog method).
    (I set begin_date => sysdate - 1/24 and
    end_date => sysdate + 1/24.)
    Does anybody know how to capture a previous time interval with Oracle Streams?

    Change C2 to:
    cursor c2(passing_date IN date) IS
      SELECT MONITOR_ID, SAMPLE_ID,
             COLL_TIME, DEW_POINT
        FROM ARCHIVE_DATA
       WHERE COLL_TIME < passing_date
       ORDER BY COLL_TIME desc;
    And rather than populating a table with the three records, you could just select the three records using: where COLL_TIME between Prev3_time and Prev1_time
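    A minimal sketch of that single-query form (table and column names are from the post; the two boundary times are assumed to be bind variables):
    SELECT monitor_id, sample_id, coll_time, dew_point
      FROM archive_data
     WHERE coll_time BETWEEN :prev3_time AND :prev1_time
     ORDER BY coll_time DESC;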

  • Capture Changes from Sql Server  using Oracle Streams  - Destination Oracle

    Is it possible to capture changes made to tables in a SQL Server database and propagate the changes to an Oracle database using Oracle Streams and the Heterogeneous Gateway? I see plenty of information about pushing data from Oracle to SQL Server, but I haven't been able to find much about going the other way. Currently we are using SQL Server 2005 replication to accomplish this. We are looking into the possibility of replacing it with Streams.

    The brief understanding I have is that Oracle provides nothing out of the box to stream between SQL Server and Oracle. The scenario is documented in the Oracle docs, however, and says you need to implement the SQL Server side to grab changes and submit them to Oracle Streams queues.
    I'm sure I've seen third parties who sell software to do this.
    If you know otherwise, please let me know. I also wasn't aware one could push from SQL Server to Oracle. Is this something only available in SQL Server 2005 or does 2000 also have it? How are you doing this?
    Cheers

  • Problem while Creating MVLOG with synonym in Oracle 9i: Is it an Oracle Bug?

    Hi All,
    I am facing a problem while creating an MVLOG with a synonym in Oracle 9i, but on 10g it works fine. Is it an Oracle bug, or am I missing something?
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    SQL>
    SQL> create table t ( name varchar2(20), id varchar2(1) primary key);
    Table created.
    SQL> create materialized view log on t;
    Materialized view log created.
    SQL> create public synonym syn_t for t;
    Synonym created.
    SQL> CREATE MATERIALIZED VIEW MV_t
      2  REFRESH ON DEMAND
      3  WITH PRIMARY KEY
      4  AS
      5  SELECT name,id
      6  FROM syn_t;
    Materialized view created.
    SQL> CREATE MATERIALIZED VIEW LOG ON  MV_t
      2  WITH PRIMARY KEY
      3   (name)
      4    INCLUDING NEW VALUES;
    Materialized view log created.
    SQL> select * from v$version;
    BANNER
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
    PL/SQL Release 9.2.0.6.0 - Production
    CORE    9.2.0.6.0       Production
    TNS for Solaris: Version 9.2.0.6.0 - Production
    NLSRTL Version 9.2.0.6.0 - Production
    SQL>
    SQL> create table t ( name varchar2(20), id varchar2(1) primary key);
    Table created.
    SQL> create materialized view log on t;
    Materialized view log created.
    SQL> create public synonym syn_t for t;
    Synonym created.
    SQL> CREATE MATERIALIZED VIEW MV_t
      2  REFRESH ON DEMAND
      3  WITH PRIMARY KEY
      4  AS
      5  SELECT name,id
      6  FROM syn_t;
    Materialized view created.
    SQL> CREATE MATERIALIZED VIEW LOG ON  MV_t
      2  WITH PRIMARY KEY
      3  (name)
      4  INCLUDING NEW VALUES;
    CREATE MATERIALIZED VIEW LOG ON  MV_t
    ERROR at line 1:
    ORA-12014: table 'MV_T' does not contain a primary key constraint
    Regards,
    Avinash Tripathi

    Hi Nicloei,
    Thanks for the reply. Actually, I don't want a workaround (creating the MVLOG on the table rather than the synonym is fine with me). I just wanted to know whether it is actually an Oracle bug or something else.
    Regards
    Avinash

  • Register Oracle Streams with OID

    I'm trying to set up an Oracle Streams environment that registers queues with OID. I have the db registered, and set global_topic_enabled=true. The DB is in archivelog mode. When I try to set up a queue with:
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    I get
    Error report:
    ORA-00600: internal error code, arguments: [kcbgtcr_5], [52583], [4], [0], [], [], [], []
    ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 739
    ORA-06512: at line 2
    00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
    *Cause:    This is the generic internal error number for Oracle program
    exceptions.     This indicates that a process has encountered an
    exceptional condition.
    *Action:   Report as a bug - the first argument is the internal error number
    Has anyone run into this? I searched metalink, but couldn't find anything. I'm running 10.2.0.1 on Windows 2K.
    Thanks in advance.

    I never used Streams with OID, but check Bug 4996133 - OERI[kcbgtcr_5] updating an IOT in RAC environment.
    I would consider upgrading the db to 10.2.0.3 - it is the first really stable release of 10gR2.
    Regards,
    Serge

  • Oracle stream with rac

    hi ,
    I'm trying to configure one-way Oracle Streams (table level).
    My source and destination databases are 10.2.0.4; the destination is RAC (three nodes) and the source database is a single node.
    Please help if there is some configuration required specifically for RAC.

    Hello
    Please find below the Oracle RAC-specific configuration for implementing a bidirectional Oracle Streams setup.
    #Propagation
    queue_to_queue parameter
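    The queue_to_queue parameter is set when the propagation is created, so that the propagation job follows the owning instance of the destination queue. A minimal sketch (schema, queue and database names are hypothetical):
    BEGIN
      DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
        schema_name            => 'SCOTT',
        streams_name           => 'src_to_dst_prop',
        source_queue_name      => 'strmadmin.capture_srcq',
        destination_queue_name => 'strmadmin.apply_dstq@dstdb',
        include_dml            => TRUE,
        include_ddl            => TRUE,
        queue_to_queue         => TRUE);
    END;
    /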
    -- Assign Primary / Secondary Instance IDs
    BEGIN
      dbms_aqadm.alter_queue_table(
        queue_table        => 'capture_srctab',
        primary_instance   => 1,
        secondary_instance => 2);
      dbms_aqadm.alter_queue_table(
        queue_table        => 'apply_srctab',
        primary_instance   => 1,
        secondary_instance => 2);
    END;
    /
    All Streams processing is done at the owning instance of the queue used by
    the Streams client. To determine the owning instance of each ANYDATA queue
    in a database, run the following query:
    SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, t.OWNER_INSTANCE
    FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
    WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
    q.QUEUE_TABLE = t.QUEUE_TABLE AND
    q.OWNER = t.OWNER;
    #tnsnames.ora
    service_name = global_name = db_name
    Please find the metalink document
    10gR2 Streams Recommended Configuration [ID 418755.1]
    Regards
    Hitgon

  • Help with Oracle Streams. How to uniquely identify LCRs in queue?

    We are using Streams for data replication in our shop.
    When an error occurs in our processing procedures, the LCR is moved to the error queue.
    The problem we are facing is that we don't know how to uniquely identify LCRs in that queue, so we can run them again once we think the error is corrected.
    LCRs contain an SCN, but as I understand it, the SCN is not unique.
    What is an easy way to keep track of LCRs? Any information is helpful.
    Thanks

    Hi,
    When you correct the data, you will have to execute the failed transactions in order.
    To see what information the apply process tried to apply, you have to print the LCR. Depending on the size (MESSAGE_COUNT) of the failed transaction, it may make sense to print the whole transaction or just a single LCR.
    To do this print you can make use of procedures print_transaction, print_errors, print_lcr and print_any documented on :
    Oracle Streams Concepts and Administration
      Chapter - Monitoring Streams Apply Processes
         Section - Displaying Detailed Information About Apply Errors
    These procedures are also available through Note 405541.1 - Procedure to Print LCRs
    To print the whole transaction you can use the print_transaction procedure, to print the errors on the error queue you can use print_errors, and to print a single LCR you can do it as follows:
    SET SERVEROUTPUT ON
    DECLARE
       lcr SYS.AnyData;
    BEGIN
       lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE(
                  <MESSAGE_NUMBER>, <LOCAL_TRANSACTION_ID>);
       print_lcr(lcr);
    END;
    /
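    The two placeholders come from the error queue itself: the local transaction ID plus the message number uniquely identify an LCR there, which also answers the original question. A sketch using the standard view (the transaction ID below is hypothetical):
    SELECT local_transaction_id, message_count, error_message
      FROM dba_apply_error;
    Once the data is corrected, the whole failed transaction can be re-executed with:
    BEGIN
      DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '13.16.334');
    END;
    /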
    Thanks

  • Difference Between Oracle Replication & Oracle Streams.

    Hi All,
    I am trying to evaluate Oracle Streams & Oracle Replication technologies.
    I need to know what are exact differences between 2 technologies? Advantages (if any) of using Oracle Streams against Oracle Replication.
    After reading Oracle Streams documentation, i found that it can capture DML as well as DDL changes, whereas Oracle Replication can capture DML changes.
    Can someone provide more information?
    Thanks in Advance.
    Regards,
    Vidyanand

    Oracle Replication is designed to replicate exact copies of data sets to various databases. Oracle Streams is designed to propagate individual data changes to various databases. Thus, Replication is probably easier if the end goal is to maintain identical copies of data, while Streams is easier if the end goal is to allow different databases to react differently to data changes.
    Oracle Replication is a significantly more mature product-- it is quite usable with older databases. Oracle Streams is a much newer technology and is only usable among different 9i databases. Most competent Oracle developers and DBAs are familiar with Oracle Replication, while many fewer have any real experience with Streams.
    The Streams architecture strikes me as a lot more flexible than Oracle Replication's. This leads me to suspect that Oracle will be pushing Streams over Replication in subsequent releases, so I would expect new features in Streams, like DDL changes, that aren't in Oracle Replication. Realistically, though, I don't expect any serious movement away from Replication for at least a few releases, so I wouldn't tend to be overly concerned on this front.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Help on Oracle streams 11g configuration

    Hi Streams experts
    Can you please validate the following creation process steps?
    What I need Streams to do is a one-way replication of the AR
    schema from one database to another. Both DML and DDL shall be replicated.
    I would also need your help on the maintenance steps, controls and procedures.
    2 databases
    1 src as source database
    1 dst as destination database
    replication type: one-way, of the entire schema FaeterBR
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace streamsdm
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_names set to true, the db_link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the view global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control what schema shall be replicated
    FaeterBR is the schema to be replicated
    Step 7. Add supplemental logging to all the tables of the FaeterBR
    schema:
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src_queue');
    END;
    ---At DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_dst_queue_table',
    queue_name => 'streams_adm.FaeterBR_dst_queue');
    END;
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    Step 10. Instantiate the FaeterBR schema at DST by doing export/import.
    Can I use Data Pump for that now? (A sketch follows the imp command below.)
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create FaeterBR tablespaces and user:
    create tablespace FaeterBR datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR
    default tablespace FaeterBR
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
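    On the Data Pump question in Step 10, a hedged sketch of the equivalent (the directory object is an assumption; an SCN-consistent export lets the import set the Streams instantiation SCN):
    ---AT SRC:
    expdp system/superman schemas=FaeterBR directory=DPUMP_DIR dumpfile=FaeterBR.dmp flashback_scn=<current SCN at SRC>
    ---AT DST:
    impdp system/123db schemas=FaeterBR directory=DPUMP_DIR dumpfile=FaeterBR.dmp
    The current SCN for flashback_scn can be read at SRC with:
    SELECT dbms_flashback.get_system_change_number FROM dual;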
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'streams_adm.FaeterBR_src_queue',
    destination_queue_name=>'streams_adm.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 13. Create substitution key columns for all the tables of the
    FaeterBR schema on DST that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
    Step 14. Configure conflict resolution at the replication db (DST).
    Is there an easier method applicable to the whole schema?
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'FaeterBR',
    column_list =>cols);
    END;
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    Step 17. Test Streams propagation of rows from the source (SRC) to the
    replica (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
    Are there any other tests that can be made?
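    A few quick post-test health checks (a sketch using the standard Streams dictionary views):
    ---at SRC:
    SELECT capture_name, status FROM dba_capture;
    ---at DST:
    SELECT apply_name, status FROM dba_apply;
    SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error;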

    Check the Metalink doc 301431.1 and validate against it:
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.

  • Streams capture aborted in 9.2

    Good Day ,
    Streams Capture process was aborted.
    This is the trace file:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    Windows 2000 Version 5.2 , CPU type 586
    Instance name: rmi
    Redo thread mounted by this instance: 1
    Oracle process number: 16
    Windows thread id: 5396, image: ORACLE.EXE (CP01)
    *** SESSION ID:(28.841) 2009-04-09 09:29:45.000
    knlcfuusrnm: = STRMADMIN
    knlqeqi()
    knlcReadLogDictSCN: Got scn from logmnr_dictstate$:0x0001.4bda483d
    knlcInitCtx:
    captured_scn, applied_scn, logminer_start, enqueue_filter
    0x0000.00051828 0x0001.4be1945a 0x0001.4be1945a 0x0001.4be1945a
    last_enqueued, last_acked
    0x0000.00000000 0x0001.4be1945a
    knlrgdpctx
    DefProc streams$_defproc# 555
    DefProc: (defproc col,segcol num)=(1,1)
    DefProc: (defproc col,segcol num)=(2,2)
    DefProc: (defproc col,segcol num)=(3,3)
    DefProc: (defproc col,segcol num)=(4,4)
    DefProc: (defproc col,segcol num)=(5,5)
    DefProc: (defproc col,segcol num)=(6,6)
    DefProc: (defproc col,segcol num)=(7,7)
    DefProc: (defproc col,segcol num)=(8,8)
    DefProc: (defproc col,segcol num)=(9,9)
    DefProc: (defproc col,segcol num)=(10,10)
    DefProc: (defproc col,segcol num)=(11,11)
    DefProc: (defproc col,segcol num)=(12,12)
    DefProc: (defproc col,segcol num)=(13,13)
    DefProc: (defproc col,segcol num)=(14,14)
    DefProc: (defproc col,segcol num)=(15,15)
    knlcgetobjcb: obj# 4
    knlcgetobjcb: obj# 16
    knlcgetobjcb: obj# 18
    knlcgetobjcb: obj# 19
    knlcgetobjcb: obj# 20
    knlcgetobjcb: obj# 21
    knlcgetobjcb: obj# 22
    knlcgetobjcb: obj# 31
    knlcgetobjcb: obj# 32
    knlcgetobjcb: obj# 156
    knlcgetobjcb: obj# 230
    knlcgetobjcb: obj# 234
    knlcgetobjcb: obj# 240
    knlcgetobjcb: obj# 245
    knlcgetobjcb: obj# 249
    knlcgetobjcb: obj# 253
    knlcgetobjcb: obj# 258
    knlcgetobjcb: obj# 283
    knlcgetobjcb: obj# 288
    knlcgetobjcb: obj# 298
    knlcgetobjcb: obj# 306
    capturing objects for queue STRMADMIN.STREAMS_QUEUE 240 31 234 16 19 230 4 283 21 20 258 298 249 22 306 32 156 253 245 288 18
    Starting persistent Logminer Session : 1
    krvxats retval : 0
    krvxssp retval : 0
    krvxsda retval : 0
    krvxcfi retval : 0
    #2: krvxcfi retval : 0
    About to call krvxpsr : startscn: 0x0001.4be1945a
    knluSetStatus()+{
    knlcapUpdate()+{
    Updated streams$_capture_process
    finished knlcapUpdate()+ }
    finished knluSetStatus()+ }
    error 308 in STREAMS process
    ORA-00308: cannot open archived log 'C:\ORACLE\ORA92\RDBMS\1_893.DBF'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    OPIRIP: Uncaught error 447. Error stack:
    ORA-00447: fatal error in background process
    ORA-00308: cannot open archived log 'C:\ORACLE\ORA92\RDBMS\1_893.DBF'
    ORA-27041: unable to open file
    OSD-04002: unable to open file
    O/S-Error: (OS 2) The system cannot find the file specified.
    The archived logs are in the correct location. Why is it taking the path C:\ORACLE\ORA92\RDBMS\1_893.DBF? The archived log file 1_893.dbf is available in the archive log destination Z:\archivedlog\..
    I also got an error in the propagation process:
    ORA-12500: TNS:listener failed to start a dedicated server process

    Hello,
    The Streams capture reader slave reads the control file and picks the first archive log destination it sees. You cannot force Streams to read the archived logs from one specific archive destination; it just picks the first one it finds, since the archived logs in all destinations are mirrored copies.
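    A sketch for checking which destinations are recorded in the control file (standard dynamic view):
    SELECT dest_id, destination, status
      FROM v$archive_dest
     WHERE status <> 'INACTIVE';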
    As already stated in the previous post, 9.2.0.1 is just the base release; you would have to upgrade to at least 9.2.0.3 (the first stable version for Streams) or 9.2.0.4 (for the few platforms where 9.2.0.3 does not exist). However, if you are using 9iR2, I would suggest upgrading to 9.2.0.8, which is the terminal patchset for 9iR2, where the majority of the bugs are fixed. On top of that you need to apply all the recommended one-off patches (included in the RDBMS bundle patches for Windows) as per Note:437838.1.
    Hope this clarifies your questions.
    Thanks,
    Rijesh

  • Oracle Streams versus GoldenGate

    We are testing a system where Oracle Streams has been very problematic. There are multiple source databases pointing to a single target, and there is a need to change the value of a field so the software on the target recognizes the source.
    Oracle Streams has been slow during large transaction sets. If a large delete is done on a source, it sometimes brings Streams down and ends up needing a full refresh.
    That being said, one of the solutions suggested is GoldenGate. Does anyone have experience with GoldenGate?

    Hello
    If you are using RDBMS 10gR2, then this may not be the case. There are several options to enhance performance and to skip large transactions. If you are sure a large delete is coming, you can make use of TAGs in Oracle Streams so that Streams does not capture those transactions:
    execute dbms_streams.set_tag(hextoraw('99'));
    delete from abc.xyz where ----
    execute dbms_streams.set_tag(NULL);
    There are a lot of options for handling large transactions, like apply spilling, in Oracle 10gR2. So basically you should optimize your environment for using Streams.
    There are also tools like SharePlex, but I am not very familiar with them. Please feel free to share if you have any thoughts.
    Thanks,
    Rijesh

  • Differenece between oracle streams and oracle Replication

    Hi all,
    Can anyone tell me the difference between Oracle Streams and Oracle Replication?
    Regards

    Refer to the earlier thread "Difference Between Oracle Replication & Oracle Streams." above; Justin's answer there (Distributed Database Consulting, Inc.) covers this question in full.

    Everybody knows that age old platitude that a watched kettle never boils. Similarly I have the impression that an unwatched import process never properly finishes or sometimes even starts! I can leave the import process for about two minutes, got bac