Oracle Streams Queue Data

Hi,
I have set up a Streams environment and can see statistics on changes being enqueued, but I cannot figure out how to see the actual LCR data. I have queried my queue table and get no rows. I am hoping this is something very simple that I just cannot find late on a Friday. Please enlighten me.
Thanks
Tom

Captured LCRs are maintained in memory in the buffer queue and the contents of the LCR are not available for viewing.
Use the dynamic views V$STREAMS_CAPTURE, V$STREAMS_APPLY_COORDINATOR, V$STREAMS_APPLY_READER, and V$STREAMS_APPLY_SERVER to see the progress of LCRs through the system.
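A quick health check against those views might look like this (a sketch; the counter columns are cumulative since the process started):
SELECT capture_name, state, total_messages_captured, total_messages_enqueued
FROM V$STREAMS_CAPTURE;
SELECT apply_name, state, total_messages_dequeued
FROM V$STREAMS_APPLY_READER;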
If an error is encountered on apply, the error transaction is placed in the error queue. Once it is there, you can use the scripts in the documentation (Display details of error transaction) to see the information in the LCR.
Only user-enqueued LCRs will be visible in the Streams queue table. A user-enqueued LCR is an LCR that is constructed and explicitly enqueued into the streams queue, as opposed to an LCR implicitly captured by a streams capture process.
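User-enqueued messages, when present, are best browsed through the AQ$&lt;queue_table&gt; view rather than the queue table itself; a minimal sketch, assuming the default queue table name STREAMS_QUEUE_TABLE:
SELECT msg_id, msg_state, consumer_name
FROM AQ$STREAMS_QUEUE_TABLE;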

Similar Messages

  • Oracle Streaming Queues in Oracle 10G standard Edition

    I would like to configure and implement Oracle Streaming Queues in Oracle 10g Standard Edition. If it is possible, please guide me and give me some clues; if not, please advise me of an alternate method.

    Here is the guidance you requested.
    License information:
    http://download.oracle.com/docs/cd/B19306_01/license.102/b14199/toc.htm
    Technical information:
    http://tahiti.oracle.com/
    Since I don't even know what version you have ... this is as far as I can take you.

  • Publishing SYS.aq$_jms_text_message to Oracle Streams Queue

    I've created a streams queue using dbms_streams_adm, and by default the payload type for the queue created is Sys.AnyData. How do I publish a message of type aq$_jms_text_message in PL/SQL to this Streams queue? I guess it all comes down to converting aq$_jms_text_message to AnyData in PL/SQL. Sys.AnyData does not have anything to convert aq$_jms_text_message.
    Any help would be appreciated.
    Thanks,
    Das

    This has been asked a lot of times - I'm not sure how my initial searching missed all of the other questions/answers related to this topic.
    In our case, the solution was to:
    1) Leave the queue as a sys.aq$_jms_text_message type
    2) Construct a sys.xmltype object with our desired payload
    3) Do a getStringVal() on the xmltype object and use that string as the payload for our queue message
    - Nathan
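    As a minimal sketch of that workaround, assuming a queue named STRMADMIN.JMS_TEXT_QUEUE whose payload type is SYS.AQ$_JMS_TEXT_MESSAGE (both names hypothetical):
    DECLARE
      msg       SYS.AQ$_JMS_TEXT_MESSAGE;
      enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      msg_id    RAW(16);
      doc       XMLTYPE;
    BEGIN
      doc := XMLTYPE('<order><id>42</id></order>');   -- desired payload
      msg := SYS.AQ$_JMS_TEXT_MESSAGE.CONSTRUCT;
      msg.SET_TEXT(doc.getStringVal());               -- step 3 of the workaround
      DBMS_AQ.ENQUEUE(
        queue_name         => 'STRMADMIN.JMS_TEXT_QUEUE',
        enqueue_options    => enq_opts,
        message_properties => msg_props,
        payload            => msg,
        msgid              => msg_id);
      COMMIT;
    END;
    /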

  • Capture Changes from SQL Server using Oracle Streams - Destination Oracle

    Is it possible to capture changes made to tables in a SQL Server database and propagate the changes to an Oracle database using Oracle Streams and the Heterogeneous Gateway? I see plenty of information about pushing data from Oracle to SQL Server, but I haven't been able to find much about going the other way. Currently we are using SQL Server 2005 replication to accomplish this. We are looking into the possibility of replacing it with Streams.

    My brief understanding is that Oracle provides nothing out of the tin to stream between SQL Server and Oracle. The scenario is documented in the Oracle docs, however, and says you need to implement the SQL Server side to grab changes and submit them to Oracle Streams queues (see the sketch below).
    I'm sure I've seen third parties who sell software to do this.
    If you know otherwise, please let me know. I also wasn't aware one could push from SQL Server to Oracle. Is this something only available in SQL Server 2005, or does 2000 also have it? How are you doing this?
    Cheers
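    For reference, the explicit enqueue that the docs describe amounts to constructing a row LCR and user-enqueuing it as AnyData on the Oracle side; a minimal sketch (queue, schema and column names hypothetical):
    DECLARE
      row_lcr SYS.LCR$_ROW_RECORD;
    BEGIN
      -- describe a change grabbed on the SQL Server side
      row_lcr := SYS.LCR$_ROW_RECORD.CONSTRUCT(
        source_database_name => 'SQLSERVER_SRC',
        command_type         => 'INSERT',
        object_owner         => 'SCOTT',
        object_name          => 'DEPT');
      row_lcr.ADD_COLUMN('new', 'DEPTNO', ANYDATA.ConvertNumber(50));
      row_lcr.ADD_COLUMN('new', 'DNAME',  ANYDATA.ConvertVarchar2('MARKETING'));
      -- user-enqueue the LCR into the Streams queue
      DBMS_STREAMS_MESSAGING.ENQUEUE(
        queue_name => 'strmadmin.streams_queue',
        payload    => ANYDATA.ConvertObject(row_lcr));
      COMMIT;
    END;
    /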

  • Data is not replicated on target database - oracle stream

    I have set up Streams replication on 2 databases running Oracle 10.1.0.2 on Windows,
    following the Metalink doc's steps for setting up one-way replication between two Oracle databases using Streams at the schema level.
    I entered a few records in the source db, and the data is not getting replicated to the destination db. Could you please guide me on how to analyse this problem to reach a solution?
    Steps for configuration, as followed from the Metalink doc:
    ==================
    Set up ARCHIVELOG mode.
    Set up the Streams administrator.
    Set initialization parameters.
    Create a database link.
    Set up source and destination queues.
    Set up supplemental logging at the source database.
    Configure the capture process at the source database.
    Configure the propagation process.
    Create the destination table.
    Grant object privileges.
    Set the instantiation system change number (SCN).
    Configure the apply process at the destination database.
    Start the capture and apply processes.
    Section 2 : Create user and grant privileges on both Source and Target
    2.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN;
    2.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    In 10g :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
    2.3 Create streams queue :
    connect STRMADMIN/STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    Section 3 : Steps to be carried out at the Destination Database PLUTO
    3.1 Add apply rules for the Schema at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_APPLY',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    3.2 Specify an 'APPLY USER' at the destination database:
    This is the user who would apply all DML statements and DDL statements.
    The user specified in the APPLY_USER parameter must have the necessary
    privileges to perform DML and DDL changes on the apply objects.
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    3.3 Start the Apply process :
    DECLARE
    v_started number;
    BEGIN
    SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
    FROM DBA_APPLY WHERE APPLY_NAME = 'STRMADMIN_APPLY';
    if (v_started = 0) then
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    end if;
    END;
    Section 4 :Steps to be carried out at the Source Database REP2
    4.1 Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace.
    It is a good practice to create an alternate tablespace for the LogMiner
    tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
    MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    4.2 Turn on supplemental logging for DEPT and EMPLOYEES table :
    connect SYS/password as SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk(deptno) ALWAYS;
    ALTER TABLE scott.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP dep_pk(empno) ALWAYS;
    Note: If the number of tables is large, supplemental logging can be
    set at the database level.
    4.3 Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN
    CREATE DATABASE LINK PLUTO connect to
    STRMADMIN identified by STRMADMIN using 'PLUTO';
    Test that the database link is working properly by querying against the
    destination database.
    E.g.: select * from global_name@PLUTO;
    4.4 Add capture rules for the schema SCOTT at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'CAPTURE',
    streams_name => 'STREAM_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    4.5 Add propagation rules for the schema SCOTT at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'SCOTT',
    streams_name => 'STREAM_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@PLUTO',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    Section 5 : Export, import and instantiation of tables from
    Source to Destination Database
    5.1 If the objects are not present in the destination database, perform
    an export of the objects from the source database and import them
    into the destination database
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each
    individual object at a particular system change number (SCN).
    exp USERID=SYSTEM/manager@rep2 OWNER=SCOTT FILE=scott.dmp
    LOG=exportTables.log OBJECT_CONSISTENT=Y STATISTICS = NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate
    information in the destination database corresponding to the SCN that
    is recorded in the export file.
    imp USERID=SYSTEM@pluto FULL=Y CONSTRAINTS=Y FILE=scott.dmp IGNORE=Y
    COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
    5.2 If the objects are already present in the destination database, there
    are two ways of instantiating the objects at the destination site.
    1. By means of metadata-only export/import:
    Specify ROWS=N during export.
    Specify IGNORE=Y during import along with the above import parameters.
    2. By manually instantiating the objects:
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@source
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    Instantiate the objects at the destination database with
    this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure
    controls which LCRs for a table are to be applied by the
    apply process. If the commit SCN of an LCR from the source
    database is less than or equal to this instantiation SCN,
    then the apply process discards the LCR; otherwise, the apply
    process applies the LCR.
    connect STRMADMIN/STRMADMIN@destination
    BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    SOURCE_SCHEMA_NAME => 'SCOTT',
    source_database_name => 'REP2',
    instantiation_scn => &iscn );
    END;
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
    Note: In 9i, you must instantiate each table individually.
    In 10g, the recursive=>true parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN
    can be used for instantiation.
    Section 6 : Start the Capture process
    begin
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
    end;
    /

    You must have imported a JKM (Journalizing Knowledge Module), and after that these are the steps:
    1. Go to the source datastore and click on CDC --> Add to CDC
    2. Click on CDC --> Start Journal
    3. Now go to the interface, choose the source table, select "Journalized data only", and then click OK
    4. Now execute the interface
    If it still doesn't work, are you using transactions in your interface?
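    To analyse why rows are not replicating in a pure Streams setup like the one above, a first pass is usually to check the status of each Streams component and the apply error queue; a minimal sketch against the data dictionary (column availability may differ slightly on 10.1):
    SELECT capture_name, status, error_number, error_message FROM DBA_CAPTURE;
    SELECT propagation_name, status FROM DBA_PROPAGATION;
    SELECT apply_name, status, error_number, error_message FROM DBA_APPLY;
    SELECT apply_name, local_transaction_id, error_message FROM DBA_APPLY_ERROR;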

  • Oracle Streams 'ORA-25215: user_data type and queue type do not match'

    I am trying replication between two databases (10.2.0.3) using Oracle Streams.
    I have followed the instructions at http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
    The main steps are:
    1. Set up ARCHIVELOG mode.
    2. Set up the Streams administrator.
    3. Set initialization parameters.
    4. Create a database link.
    5. Set up source and destination queues.
    6. Set up supplemental logging at the source database.
    7. Configure the capture process at the source database.
    8. Configure the propagation process.
    9. Create the destination table.
    10. Grant object privileges.
    11. Set the instantiation system change number (SCN).
    12. Configure the apply process at the destination database.
    13. Start the capture and apply processes.
    For step 5, I have used the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package. This procedure creates a queue table and an associated queue.
    The problem is that, in the propagation process, I get this error:
    'ORA-25215: user_data type and queue type do not match'
    I have checked it, and the queue table and its associated queue are created as shown:
    sys.dbms_aqadm.create_queue_table (
    queue_table => 'CAPTURE_SFQTAB'
    , queue_payload_type => 'SYS.ANYDATA'
    , sort_list => ''
    , COMMENT => ''
    , multiple_consumers => TRUE
    , message_grouping => DBMS_AQADM.TRANSACTIONAL
    , storage_clause => 'TABLESPACE STREAMSTS LOGGING'
    , compatible => '8.1'
    , primary_instance => '0'
    , secondary_instance => '0');
    sys.dbms_aqadm.create_queue(
    queue_name => 'CAPTURE_SFQ'
    , queue_table => 'CAPTURE_SFQTAB'
    , queue_type => sys.dbms_aqadm.NORMAL_QUEUE
    , max_retries => '5'
    , retry_delay => '0'
    , retention_time => '0'
    , COMMENT => '');
    The capture process is 'capturing changes' but it seems that these changes cannot be enqueued into the capture queue because the data type is not correct.
    As far as I know, 'sys.anydata' payload type and 'normal_queue' type are the right parameters to get a successful configuration.
    I would be really grateful for any idea!

    Hi
    You need to run a VERIFY to make sure that the queues are compatible. At least on my 10.2.0.3/4 I need to do it.
    DECLARE
      rc BINARY_INTEGER;
    BEGIN
      DBMS_AQADM.VERIFY_QUEUE_TYPES(
        src_queue_name  => 'np_out_onlinex',
        dest_queue_name => 'np_out_onlinex',
        destination     => 'scnp.pfa.dk',
        transformation  => 'TransformDim2JMS_001x',
        rc              => rc);
      DBMS_OUTPUT.PUT_LINE('Compatible: '||rc);
    END;
    /
    If you don't have transformations and/or a remote destination, then delete those parameters.
    Check the table SYS.AQ$_MESSAGE_TYPES; there you can see what has been verified and what has not.
    regards
    Mette

  • Help with Oracle Streams. How to uniquely identify LCRs in queue?

    We are using Streams for data replication in our shop.
    When an error occurs in our processing procedures, the LCR is moved to the error queue.
    The problem we are facing is that we don't know how to uniquely identify LCRs in that queue, so we can run them again once we think the error is corrected.
    LCRs contain an SCN, but as I understand it, the SCN is not unique.
    What is the easy way to keep track of LCRs? Any information is helpful.
    Thanks

    Hi,
    When you correct the data, you will have to execute the failed transactions in order.
    To see what information the apply process has tried to apply, you have to print that LCR. Depending on the size (MESSAGE_COUNT) of the transaction that has failed, it could be interesting to print the whole transaction or a single LCR.
    To do this print you can make use of procedures print_transaction, print_errors, print_lcr and print_any documented on :
    Oracle Streams Concepts and Administration
      Chapter - Monitoring Streams Apply Processes
         Section - Displaying Detailed Information About Apply Errors
    These procedures are also available through Note 405541.1 - Procedure to Print LCRs
    To print the whole transaction, you can use the print_transaction procedure; to print the errors in the error queue, you can use print_errors; and to print a single LCR, you can do it as follows:
    SET SERVEROUTPUT ON;
    DECLARE
       lcr SYS.AnyData;
    BEGIN
        lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE
                    (<MESSAGE_NUMBER>, <LOCAL_TRANSACTION_ID>);
        print_lcr(lcr);
    END;
    Thanks
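    Worth adding: the stable handle for re-running a failed transaction is its LOCAL_TRANSACTION_ID in DBA_APPLY_ERROR (the SCN, as noted, is not unique). Once the underlying data is corrected, the transaction can be retried with DBMS_APPLY_ADM.EXECUTE_ERROR; a minimal sketch (transaction id hypothetical):
    BEGIN
      DBMS_APPLY_ADM.EXECUTE_ERROR(
        local_transaction_id => '5.4.312',   -- value taken from DBA_APPLY_ERROR
        execute_as_user      => FALSE);
    END;
    /
    DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS can be used to retry the whole error queue in commit-SCN order.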

  • Capturing data of the previous time interval with Oracle Streams (HotLog)

    I read in the Oracle 10g manual that Oracle Streams can capture data within a specified time interval using the begin_date and end_date options.
    For example :
    BEGIN
    DBMS_CDC_PUBLISH.CREATE_CHANGE_SET(
    change_set_name => 'set_cns',
    description => 'set_cns...',
    change_source_name => 'HOTLOG_SOURCE',
    stop_on_ddl => 'y',
    begin_date => sysdate,
    end_date => sysdate + 1);
    END;
    However, if I set begin_date to a time in the past, Oracle doesn't capture data anymore in this case (HotLog method).
    (I set begin_date => sysdate - 1/24,
    end_date => sysdate + 1/24.)
    Does anybody know how to capture the previous time interval with Oracle Streams?

    Change C2 to:
    cursor c2(passing_date IN date) IS
      SELECT MONITOR_ID, SAMPLE_ID,
             COLL_TIME, DEW_POINT
        FROM ARCHIVE_DATA
       WHERE COLL_TIME < passing_date
       ORDER BY COLL_TIME desc;
    And rather than populating a table with the three records, you could just select the three records using:
    where COLL_TIME between Prev3_time and Prev1_time

  • Advice on implementing oracle streams on RAC 11.2 data warehouse database

    Hi,
    I would like a high-level overview of implementing one-way schema-level replication within the same database using Oracle Streams on an 11.2 RAC data warehouse database.
    Are there any points that should be kept in mind before drafting the implementation plan?
    Please share your thoughts and experiences.
    Thanks in advance
    srh


  • Oracle Streams for Archiving Data

    We are considering using Oracle Streams for archiving data that is older than six months. Is this the right option? Has anybody done archiving with Oracle Streams, or would you recommend another option?

    The tasks are:
    1. Oracle checks every day, at a scheduled time, the required tables for records whose created_date is more than 180 days old, i.e. (sysdate - created_date > 180).
    2. The results matching the criteria should then be sent to the archive database or written out as files.
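    A minimal sketch of such a nightly job, assuming a table ORDERS with a CREATED_DATE column and a database link ARCHDB to the archive database (all names hypothetical):
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'ARCHIVE_OLD_ROWS',
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[
          INSERT INTO orders_arch@archdb
            SELECT * FROM orders WHERE sysdate - created_date > 180;
          DELETE FROM orders WHERE sysdate - created_date > 180;
          COMMIT;]',
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',
        enabled         => TRUE);
    END;
    /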

  • Oracle Streams VS Oracle Data Guard

    Hello,
    Could you please explain the difference between Oracle Streams and Oracle Data Guard?
    Do they serve completely different or similar purposes?
    Thanks.

    812322 wrote:
    Could you please explain the difference between Oracle Streams and Oracle Data Guard? Do they serve completely different or similar purposes?
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:14672061404704

  • Data guard vs. oracle stream

    Our db is a 10g 2-node RAC db, and the db server is Windows 2003 Server. Our standby db was fed from the primary using Data Guard. Since our standby db is not up to date, we are now planning to rebuild it. We are debating whether we should use Data Guard or Oracle Streams to feed data to this new standby db. Can anybody give us some insights on which one is better for this purpose?
    Thanks a lot in advance!!
    Shirley

    One has:
    1. physical standby
    2. logical standby
    3. streams
    Only 1 is a zero-loss, high-availability solution.
    2 and 3 do not support all data types, and will automatically suppress unsupported datatypes.
    3, apart from that, is also asynchronous, whereas 1 and 2 can be set up to be synchronous.
    3 will also be much more difficult to troubleshoot. Basically: when you are out of sync, you have to rebuild; you can't re-ship redo log files.
    1 and 2 ship redo log files to the standby server; 1 uses them to recover the database, 2 mines them and re-executes the transactions.
    3 mines redo log files at the source, and sends statements to re-execute them.
    Only 1 is a true HA solution.
    You cannot use Streams to build a standby database: the purpose of Streams is replication, not standby.
    Sybrand Bakker
    Senior Oracle DBA
    It is just what you want.

  • Will Oracle Data Guard be replaced by Oracle Streams soon?

    Will Oracle Data Guard be replaced by Oracle Streams soon?
    In my opinion, Oracle Streams can replace Oracle Data Guard completely.

    While some of the technologies that underpin Streams are being increasingly incorporated into DataGuard, it's quite unlikely that DataGuard will go away.
    Streams is the successor to Advanced Replication, which is designed to allow a source database to propagate data to a distinct database in a different environment without necessarily having to have the two databases tightly coupled. You can have different databases in different regions managed by different DBA groups who don't necessarily care whether any of the other systems are up using Streams (or Advanced Replication before it). Failing over between these systems, while possible, requires a fair amount of custom scripting, but is certainly possible.
    DataGuard, on the other hand, is designed to allow you to have multiple copies of the same database that are tightly coupled for high availability. Similar in concept, but there are very different trade-offs in the design.
    That said, Streams and Logical Standby both use very similar technologies to mine the redo information for change records. As DataGuard uses Logical Standby more and more, potentially as a replacement for physical standby, they'll use more and more of the same underlying technologies. They'll still be very different products.
    Justin

  • Supplemental logging with Oracle 10gR2 Streams and Data Guard

    Hello,
    I have an environment with Oracle DB 10gR2 and a physical standby with a Data Guard DR configuration. Right now, this environment is going to be extended to a replication setup using 2-way Oracle Streams replication (for replication to the central office from this branch office; other branches will be added soon). The primary DB will be replicated to the other primary DB (in the remote central office).
    So, here is my question: is it completely necessary to specify supplemental logging on the source (primary) databases for setting up 2-way Streams replication? And if it is, can I set supplemental logging on the primaries without affecting their physical standbys, or do I need to do something special?
    Thanks in advance.

    Sorry, it's a duplicate post, due to a browser connection problem.
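    On the actual question: supplemental logging is required on any database where a Streams capture process runs, so for 2-way replication both primaries need it. It is written as additional redo, which a physical standby simply applies, so the standbys should not require anything special. The database-level forms, for reference:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;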

  • Help on Oracle streams 11g configuration

    Hi Streams experts
    Can you please validate the following creation process steps?
    What I need Streams to do is one-way replication of the AR
    schema from one database to another. Both DML and DDL shall be
    replicated.
    Help on Oracle Streams 11g configuration. I would also need your help
    on the maintenance steps, controls and procedures.
    2 databases
    1 src as source database
    1 dst as destination database
    replication type: 1-way of the entire schema FaeterBR
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src:
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace streamsdm
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_names set to true, the db_link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control what schema shall be replicated
    FaeterBR is the schema to be replicated
    Step 7. Add supplemental logging to all the tables of the FaeterBR
    schema (a generator query for the whole schema follows this step):
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
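    If listing every table by hand gets tedious, the statements can be generated from the dictionary (a sketch; run the statements it outputs):
    SELECT 'ALTER TABLE FaeterBR.' || table_name ||
           ' ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;'
      FROM dba_tables
     WHERE owner = 'FAETERBR';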
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src_queue');
    END;
    ---At DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_dst_queue_table',
    queue_name => 'streams_adm.FaeterBR_dst_queue');
    END;
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    Step 10. Instantiate the FaeterBR schema at DST by doing export/
    import. Can I use Data Pump now to do that? (see the sketch after this step)
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create FaeterBR tablespaces and user:
    create tablespace FaeterBR_ datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR_
    default tablespace FaeterBR_
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
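    On the Data Pump question: in 10g you can indeed use expdp/impdp instead of exp/imp. The usual Streams pattern is to export consistent as of the instantiation SCN with FLASHBACK_SCN, after which the import records that SCN for the imported objects. A sketch, assuming a directory object DPUMP_DIR exists at both sites (names and credentials as used above):
    ---AT SRC:
    expdp system/superman schemas=FaeterBR directory=DPUMP_DIR
    dumpfile=FaeterBR.dmp flashback_scn=<iscn>
    ---AT DST:
    impdp system/123db schemas=FaeterBR directory=DPUMP_DIR
    dumpfile=FaeterBR.dmp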
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'streams_adm.FaeterBR_src_queue',
    destination_queue_name=>'streams_adm.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 13. Create substitution key columns for all the tables of the
    FaeterBR schema on DST that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
    Step 14. Configure conflict resolution at the replication db (DST).
    Is there any easier method applicable to the whole schema? (see the sketch after this step)
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'id',
    column_list =>cols);
    END;
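    On the question of an easier schema-wide method: SET_UPDATE_CONFLICT_HANDLER is per-object, so the usual shortcut is to loop over the dictionary. A sketch, run at DST, assuming every column of each table goes into the handler's column list:
    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
      i    PLS_INTEGER;
    BEGIN
      FOR t IN (SELECT table_name FROM dba_tables
                 WHERE owner = 'FAETERBR') LOOP
        cols.DELETE;   -- reset the list for each table
        i := 0;
        FOR c IN (SELECT column_name FROM dba_tab_columns
                   WHERE owner = 'FAETERBR'
                     AND table_name = t.table_name) LOOP
          i := i + 1;
          cols(i) := c.column_name;
        END LOOP;
        DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
          object_name       => 'FaeterBR.' || t.table_name,
          method_name       => 'OVERWRITE',
          resolution_column => cols(1),
          column_list       => cols);
      END LOOP;
    END;
    /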
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    Step 17. Test Streams propagation of rows from the source (SRC) to
    the replica (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
    Any other test that can be made?

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.
