Oracle Streams - Apply Rules

I want to replicate two tables (A, B) from one database to another using Oracle Streams. One column in table A contains salary information for both hourly and salaried employees. I do not want to replicate the salary information for salaried employees to the target system. Is there any way I can set the salary to null for salaried employees while still applying all other changes to the table in the target database?
Any advice is appreciated.
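A common way to do this is a custom rule-based transformation on the apply side: a PL/SQL function that receives each row LCR and blanks out the salary before the row is applied, while every other column passes through untouched. Below is a minimal sketch, assuming a table HR.A with columns EMP_TYPE (where the value 'SALARIED' marks salaried employees) and SALARY, and an apply rule named STRMADMIN.A_APPLY_RULE; all of those names are assumptions, not taken from this thread, and SET_RULE_TRANSFORM_FUNCTION needs 10.2 or later.

CREATE OR REPLACE FUNCTION strmadmin.null_salary(in_any IN ANYDATA)
RETURN ANYDATA
IS
  lcr      SYS.LCR$_ROW_RECORD;
  rc       PLS_INTEGER;
  emp_type ANYDATA;
  typ_val  VARCHAR2(30);
  null_num NUMBER := NULL;
BEGIN
  rc := in_any.GETOBJECT(lcr);
  -- Only touch row LCRs for the table carrying the salary column (names assumed)
  IF lcr.GET_OBJECT_OWNER = 'HR' AND lcr.GET_OBJECT_NAME = 'A' THEN
    emp_type := lcr.GET_VALUE('NEW', 'EMP_TYPE');
    IF emp_type IS NOT NULL
       AND emp_type.GETVARCHAR2(typ_val) = DBMS_TYPES.SUCCESS
       AND typ_val = 'SALARIED' THEN
      -- Null out the new value of SALARY; all other columns pass through unchanged
      lcr.SET_VALUE('NEW', 'SALARY', ANYDATA.CONVERTNUMBER(null_num));
    END IF;
  END IF;
  RETURN ANYDATA.CONVERTOBJECT(lcr);
END;
/

-- Attach the function to the apply rule (rule name assumed; look up the real
-- one in DBA_STREAMS_RULES for your apply process)
BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'STRMADMIN.A_APPLY_RULE',
    transform_function => 'strmadmin.null_salary');
END;
/

The declarative transformations (DBMS_STREAMS_ADM.DELETE_COLUMN and friends) can drop a column entirely, but conditional nulling like this needs the custom function.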

Yes, I did:
SQL> select name, queue_table from user_queues;

NAME                            QUEUE_TABLE
------------------------------- -----------------------
AQ$_STREAMS_QUEUE_TABLE_E       STREAMS_QUEUE_TABLE
STREAMS_QUEUE                   STREAMS_QUEUE_TABLE
AQ$_SYS$SERVICE_METRICS_TAB_E   SYS$SERVICE_METRICS_TAB
SYS$SERVICE_METRICS             SYS$SERVICE_METRICS_TAB
AQ$_KUPC$DATAPUMP_QUETAB_E      KUPC$DATAPUMP_QUETAB
AQ$_AQ_PROP_TABLE_E             AQ_PROP_TABLE
AQ_PROP_NOTIFY                  AQ_PROP_TABLE
AQ$_AQ_SRVNTFN_TABLE_E          AQ_SRVNTFN_TABLE
AQ_SRVNTFN_TABLE_Q              AQ_SRVNTFN_TABLE
AQ$_AQ_EVENT_TABLE_E            AQ_EVENT_TABLE
AQ_EVENT_TABLE_Q                AQ_EVENT_TABLE

The second row in the table above has the entry..

Similar Messages

  • Applying subset rules in Oracle streams

    Hi All,
    I am working on configuring Streams. I am able to do replication on a table, unidirectional and bidirectional. I am facing a problem with add_subset_rules, as the capture, propagation and apply processes are not showing any error. The following is the script I am using to configure add_subset_rules. Please guide me on what is wrong and how to go about it.
    The Global Database Name of the Source Database is POCSRC. The Global Database Name of the Destination Database is POCDESTN. In the example setup, the DEPT table belonging to the SCOTT schema has been used for demonstration purposes.
    Section 1 - Initialization Parameters Relevant to Streams
    •     COMPATIBLE: 9.2.0.
    •     GLOBAL_NAMES: TRUE
    •     JOB_QUEUE_PROCESSES : 2
    •     AQ_TM_PROCESSES : 4
    •     LOGMNR_MAX_PERSISTENT_SESSIONS : 4
    •     LOG_PARALLELISM: 1
    •     PARALLEL_MAX_SERVERS:4
    •     SHARED_POOL_SIZE: 350 MB
    •     OPEN_LINKS : 4
    •     Database running in ARCHIVELOG mode.
    Steps to be carried out at the Destination Database (POCDESTN.)
    1. Create Streams Administrator :
    connect SYS/pocdestn@pocdestn as SYSDBA
    create user STRMADMIN identified by STRMADMIN default tablespace users;
    2. Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
    GRANT SELECT ANY DICTIONARY TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'ENQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'DEQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'MANAGE_ANY',
    grantee => 'STRMADMIN',
    admin_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    3. Create streams queue :
    connect STRMADMIN/STRMADMIN@POCDESTN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    4. Add apply rules for the table at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    TABLE_NAME=>'SCOTT.EMP',
    STREAMS_TYPE=>'APPLY',
    STREAMS_NAME=>'STRMADMIN_APPLY',
    QUEUE_NAME=>'STRMADMIN.STREAMS_QUEUE',
    DML_CONDITION=>'empno =7521',
    INCLUDE_TAGGED_LCR=>FALSE,
    SOURCE_DATABASE=>'POCSRC');
    END;
    5. Specify an 'APPLY USER' at the destination database:
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    6. Set the DISABLE_ON_ERROR apply parameter :
    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STRMADMIN_APPLY',
    parameter => 'DISABLE_ON_ERROR',
    value => 'N' );
    END;
    7. Start the Apply process :
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    END;
    Section 3 - Steps to be carried out at the Source Database (POCSRC.)
    1. Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace. It is a good practice to create an alternate tablespace for the LogMiner tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'd:\oracle\oradata\POCSRC\logmnrts.dbf' SIZE 25M AUTOEXTEND ON MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    2. Turn on supplemental logging for DEPT table :
    connect SYS/password as SYSDBA
    ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP emp_pk
    (empno) ALWAYS;
    3. Create Streams Administrator and Grant the necessary privileges :
    3.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN default tablespace users;
    3.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
    GRANT SELECT ANY DICTIONARY TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQ TO STRMADMIN;
    GRANT EXECUTE ON DBMS_AQADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_FLASHBACK TO STRMADMIN;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_RULE_ADM TO STRMADMIN;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO STRMADMIN;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'ENQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'DEQUEUE_ANY',
    grantee => 'STRMADMIN',
    admin_option => FALSE);
    END;
    BEGIN
    DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => 'MANAGE_ANY',
    grantee => 'STRMADMIN',
    admin_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
    grantee => 'STRMADMIN',
    grant_option => TRUE);
    END;
    4. Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN@pocsrc
    CREATE DATABASE LINK POCDESTN connect to
    STRMADMIN identified by STRMADMIN using 'POCDESTN';
    Test the database link to be working properly by querying against the destination database.
    Eg : select * from global_name@POCDESTN;
    5. Create streams queue:
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name => 'STREAMS_QUEUE',
    queue_table =>'STREAMS_QUEUE_TABLE',
    queue_user => 'STRMADMIN');
    END;
    6. Add capture rules for the table at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'SCOTT.EMP',
    streams_type => 'CAPTURE',
    streams_name => 'STRMADMIN_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'POCSRC');
    END;
    7. Add propagation rules for the table at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name => 'SCOTT.emp',
    streams_name => 'STRMADMIN_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@POCDESTN',
    include_dml => true,
    include_ddl => true,
    source_database => 'POCSRC');
    END;
    Section 4 - Export, import and instantiation of tables from Source to Destination Database
    1. If the objects are not present in the destination database, perform an export of the objects from the source database and import them into the destination database
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each individual object at a particular system change number (SCN).
    exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=DEPT.dmp GRANTS=Y ROWS=Y LOG=exportDEPT.log OBJECT_CONSISTENT=Y INDEXES=Y STATISTICS = NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate information in the destination database corresponding to the SCN that is recorded in the export file.
    imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y CONSTRAINTS=Y FILE=DEPT.dmp IGNORE=Y GRANTS=Y ROWS=Y COMMIT=Y LOG=importDEPT.log STREAMS_INSTANTIATION=Y
    2. If the objects are already present in the destination database, check that they are also consistent at the data level; otherwise the apply process may fail with error ORA-1403 when applying DML to an inconsistent row. There are 2 ways of instantiating the objects at the destination site.
    1. By means of Metadata-only export/import :
    Export from the Source Database by specifying ROWS=N
    exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.DEPT FILE=tables.dmp
    ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
    exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.EMP FILE=tables.dmp
    ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
    For Test table -
    exp USERID=SYSTEM/POCSRC@POCSRC TABLES=SCOTT.TEST FILE=tables.dmp
    ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y
    Import into the destination database using IGNORE=Y
    imp USERID=SYSTEM/POCDESTN@POCDESTN FULL=Y FILE=tables.dmp IGNORE=Y
    LOG=importTables.log STREAMS_INSTANTIATION=Y
    2. By manually instantiating the objects
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@POCSRC
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    Instantiate the objects at the destination database with this SCN value.
    The SET_TABLE_INSTANTIATION_SCN procedure controls which LCRs for a table are applied by the apply process. If the commit SCN of an LCR from the source database is less than or equal to this instantiation SCN, then the apply process discards the LCR; otherwise, the apply process applies it.
    connect STRMADMIN/STRMADMIN@POCDESTN
    BEGIN
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'SCOTT.DEPT',
    source_database_name => 'POCSRC',
    instantiation_scn => &iscn);
    END;
    connect STRMADMIN/STRMADMIN@POCDESTN
    BEGIN
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'SCOTT.EMP',
    source_database_name => 'POCSRC',
    instantiation_scn => &iscn);
    END;
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
    Finally start the Capture Process:
    connect STRMADMIN/STRMADMIN@POCSRC
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
    END;
    Please mail me [email protected]
    Thanks.
    Raghunath

    It is unclear what it is you are trying to do that you cannot do. You wrote:
    "I am facing problem in add_subset rules as capture,propagation & apply process is not showing error."
    Personally, I don't consider it a problem when my code doesn't raise an error. So what is not working the way you think it should? Also, what version are you on (to 4 decimal places)?

  • Granularity of change rules in CDC (using Oracle Streams) at column level.

    Is it possible to implement granularity of change rules in CDC (using Oracle Streams) at the column level?
    E.g. table abc with columns a1, b1, c1, where c1 is some kind of auditing column. I want to use CDC on table abc but want to ignore changes on c1.
    Is it possible? My other option is to split table abc into a child table that has the primary key and c1 only, but that needs an additional table and joins.
    Thanks
    Shyam

    The requirement can be implemented by a simple trigger; a sketch follows below.
    You seem to be planning to kill a mosquito with a nuclear bomb.
    Sybrand Bakker
    Senior Oracle DBA
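    For completeness, here is a minimal sketch of that trigger idea, assuming a1 is the key column and b1 is the only other tracked column; the change-log table ABC_CHANGES and all datatypes are made up for the example.
    CREATE TABLE abc_changes (
      a1          NUMBER,
      old_b1      VARCHAR2(100),
      new_b1      VARCHAR2(100),
      change_type VARCHAR2(1),
      change_time DATE
    );
    -- UPDATE OF a1, b1 makes the trigger skip updates that touch only c1
    CREATE OR REPLACE TRIGGER abc_cdc_trg
    AFTER INSERT OR UPDATE OF a1, b1 OR DELETE ON abc
    FOR EACH ROW
    BEGIN
      INSERT INTO abc_changes (a1, old_b1, new_b1, change_type, change_time)
      VALUES (COALESCE(:NEW.a1, :OLD.a1), :OLD.b1, :NEW.b1,
              CASE WHEN INSERTING THEN 'I' WHEN UPDATING THEN 'U' ELSE 'D' END,
              SYSDATE);
    END;
    /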

  • Data is not replicated on target database - oracle stream

    I have set up Streams replication on 2 databases running Oracle 10.1.0.2 on Windows, following the Metalink doc's steps for setting up one-way replication between two Oracle databases using Streams at the schema level.
    I entered a few records in the source db, and the data is not getting replicated to the destination db. Could you please guide me on how to analyse this problem to reach a solution?
    Steps for configuration, as followed from the Metalink doc:
    ==================
    Set up ARCHIVELOG mode.
    Set up the Streams administrator.
    Set initialization parameters.
    Create a database link.
    Set up source and destination queues.
    Set up supplemental logging at the source database.
    Configure the capture process at the source database.
    Configure the propagation process.
    Create the destination table.
    Grant object privileges.
    Set the instantiation system change number (SCN).
    Configure the apply process at the destination database.
    Start the capture and apply processes.
    Section 2 : Create user and grant privileges on both Source and Target
    2.1 Create Streams Administrator :
    connect SYS/password as SYSDBA
    create user STRMADMIN identified by STRMADMIN;
    2.2 Grant the necessary privileges to the Streams Administrator :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    In 10g :
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
    2.3 Create streams queue :
    connect STRMADMIN/STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'STREAMS_QUEUE_TABLE',
    queue_name => 'STREAMS_QUEUE',
    queue_user => 'STRMADMIN');
    END;
    Section 3 : Steps to be carried out at the Destination Database PLUTO
    3.1 Add apply rules for the Schema at the destination database :
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_APPLY',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    3.2 Specify an 'APPLY USER' at the destination database:
    This is the user who would apply all DML statements and DDL statements.
    The user specified in the APPLY_USER parameter must have the necessary
    privileges to perform DML and DDL changes on the apply objects.
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'STRMADMIN_APPLY',
    apply_user => 'SCOTT');
    END;
    3.3 Start the Apply process :
    DECLARE
    v_started number;
    BEGIN
    SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
    FROM DBA_APPLY WHERE APPLY_NAME = 'STRMADMIN_APPLY';
    if (v_started = 0) then
    DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY');
    end if;
    END;
    Section 4 :Steps to be carried out at the Source Database REP2
    4.1 Move LogMiner tables from SYSTEM tablespace:
    By default, all LogMiner tables are created in the SYSTEM tablespace.
    It is a good practice to create an alternate tablespace for the LogMiner
    tables.
    CREATE TABLESPACE LOGMNRTS DATAFILE 'logmnrts.dbf' SIZE 25M AUTOEXTEND ON
    MAXSIZE UNLIMITED;
    BEGIN
    DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    END;
    4.2 Turn on supplemental logging for DEPT and EMPLOYEES table :
    connect SYS/password as SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP dept_pk(deptno) ALWAYS;
    ALTER TABLE scott.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP dep_pk(empno) ALWAYS;
    Note: If the number of tables is large, supplemental logging can be
    set at the database level instead; see the sketch below.
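    A sketch of that database-level alternative: one statement instead of a per-table log group, logging the primary-key columns of every table.
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;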
    4.3 Create a database link to the destination database :
    connect STRMADMIN/STRMADMIN
    CREATE DATABASE LINK PLUTO connect to
    STRMADMIN identified by STRMADMIN using 'PLUTO';
    Test the database link to be working properly by querying against the
    destination database.
    Eg : select * from global_name@PLUTO;
    4.4 Add capture rules for the schema SCOTT at the source database:
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'SCOTT',
    streams_type => 'CAPTURE',
    streams_name => 'STREAM_CAPTURE',
    queue_name => 'STRMADMIN.STREAMS_QUEUE',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    4.5 Add propagation rules for the schema SCOTT at the source database.
    This step will also create a propagation job to the destination database.
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name => 'SCOTT',
    streams_name => 'STREAM_PROPAGATE',
    source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
    destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@PLUTO',
    include_dml => true,
    include_ddl => true,
    source_database => 'REP2');
    END;
    Section 5 : Export, import and instantiation of tables from
    Source to Destination Database
    5.1 If the objects are not present in the destination database, perform
    an export of the objects from the source database and import them
    into the destination database
    Export from the Source Database:
    Specify the OBJECT_CONSISTENT=Y clause on the export command.
    By doing this, an export is performed that is consistent for each
    individual object at a particular system change number (SCN).
    exp USERID=SYSTEM/manager@rep2 OWNER=SCOTT FILE=scott.dmp
    LOG=exportTables.log OBJECT_CONSISTENT=Y STATISTICS = NONE
    Import into the Destination Database:
    Specify STREAMS_INSTANTIATION=Y clause in the import command.
    By doing this, the streams metadata is updated with the appropriate
    information in the destination database corresponding to the SCN that
    is recorded in the export file.
    imp USERID=SYSTEM@pluto FULL=Y CONSTRAINTS=Y FILE=scott.dmp IGNORE=Y
    COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
    5.2 If the objects are already present in the destination database, there
    are two ways of instantiating the objects at the destination site.
    1. By means of Metadata-only export/import :
    Specify ROWS=N during Export
    Specify IGNORE=Y during Import along with above import parameters.
    2. By manually instantiating the objects
    Get the Instantiation SCN at the source database:
    connect STRMADMIN/STRMADMIN@source
    set serveroutput on
    DECLARE
    iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
    END;
    Instantiate the objects at the destination database with
    this SCN value. The SET_TABLE_INSTANTIATION_SCN procedure
    controls which LCRs for a table are to be applied by the
    apply process. If the commit SCN of an LCR from the source
    database is less than or equal to this instantiation SCN,
    then the apply process discards the LCR. Else, the apply
    process applies the LCR.
    connect STRMADMIN/STRMADMIN@destination
    BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    SOURCE_SCHEMA_NAME => 'SCOTT',
    source_database_name => 'REP2',
    instantiation_scn => &iscn );
    END;
    Enter value for iscn:
    <Provide the value of SCN that you got from the source database>
    Note: In 9i, you must instantiate each table individually.
    In 10g, the recursive => true parameter of DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN
    can be used for instantiation, as sketched below.
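    A sketch of that 10g shortcut, using the names from the setup above; note that recursive => true needs a database link from the destination back to the source so the list of tables can be read.
    BEGIN
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name => 'SCOTT',
    source_database_name => 'REP2',
    instantiation_scn => &iscn,
    recursive => true); -- also sets the SCN for every table in the schema
    END;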
    Section 6 : Start the Capture process
    begin
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
    end;
    /
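    To analyse why rows entered at the source never show up at the destination, a first pass is usually to query each stage of the pipeline in order. A hedged sketch (10g dictionary views; object names taken from the setup above):
    -- Is the capture process running, or did it abort with an error?
    SELECT capture_name, status, error_message FROM dba_capture;
    -- Is the propagation schedule running, and is it failing?
    SELECT qname, destination, schedule_disabled, failures, last_error_msg
    FROM dba_queue_schedules;
    -- Is the apply process running, or did it abort?
    SELECT apply_name, status, error_message FROM dba_apply;
    -- Are there transactions stuck in the apply error queue?
    SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error;
    -- Are messages piling up in the Streams queue?
    SELECT queue, msg_state, COUNT(*)
    FROM strmadmin.aq$streams_queue_table
    GROUP BY queue, msg_state;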

    You must have imported a JKM, and after that these are the steps:
    1. Go to the source datastore and click on CDC --> Add to CDC
    2. Click on CDC --> Start Journal
    3. Now go to the interface, choose the source table, select "Journalized data only" and then click OK
    4. Now execute the interface
    If it still doesn't work, are you using transactions in your interface?

  • Oracle Streaming in same database - 10g

    Oracle Streaming in 10g database
    Hello,
    I am trying to do streaming at the table level for all DML changes from one schema (source) to another schema (target), controlled by an admin schema (stream_admin). I have all these schemas in the same database (10g Enterprise Edition).
    As given in the documentation, I created:
    1. two queues (in_Q and out_Q) in stream_admin,
    2. two processes (capture and apply) in stream_admin,
    3. a propagation rule for propagation between in_Q and out_Q,
    4. did instantiation,
    5. started both capture and apply processes.
    Having done that, I insert into the source table and check for the same in the target, but alas, nothing happens. I fail to achieve streaming.
    I am not getting any error, neither in the processes nor in the propagation or queues. And all queues, rules and processes are enabled.
    Please help.

    Datapump uses dbms_metadata extensively.
    The problem is twofold:
    - the amount of data
    - why on earth do you need to 'copy' these 1.2 Tb a 'number of times'
    One would expect you to have tested the upgrade on a smaller test database, so you wouldn't need to fix bugs in a 1.2 Tb database.
    A better idea would be to duplicate the complete database using RMAN. This doesn't perform conventional INSERTs and doesn't create redo.
    Sybrand Bakker
    Senior Oracle DBA

  • Oracle Streams setup for multiple schemas in a same database

    We are on 11.1.0.7 and will be using Oracle 11g Streams to replicate data in real time for two schemas between a source and a target set of schemas within the same database. We will be doing DDL as well as DML replication.
    I created the following plan and want your inputs. After implementing this, I created a table in SCOTT but it did not get replicated to RPT_SCOTT; later I tried inserting a row into the table created under SCOTT, but that too didn't get replicated to RPT_SCOTT.
    Here are the steps that I used to set up my STREAMS -
    Database Instance: TESTDB
    Schemas:
      Source: SCOTT, HR
      Target: RPT_SCOTT, RPT_HR
    Configuring Streams:
    1.     Database is in Archive log mode
    2.     Set up the Streams administrator.
    create user STRMADMIN identified by STRMADMIN default tablespace USERS temporary tablespace temp;
    grant resource, dba, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
    BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'STRMADMIN',
    grant_privileges => TRUE);
    END;
    3.     Set up Streams queues
    CONNECT STRMADMIN/****
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUETAB',
    queue_user => 'STRMADMIN');
    END;
    4.     Add the Apply rule
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name      => 'RPT_SCOTT',
    streams_type     => 'APPLY',
    streams_name     => 'APPLY_CC_STREAM',
    queue_name          => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule     => true,
    source_database     => 'TESTDB');
    END;
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name      => 'RPT_HR',
    streams_type     => 'APPLY',
    streams_name     => 'APPLY_AB_STREAM',
    queue_name          => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule     => true,
    source_database     => 'TESTDB');
    END;
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name     => 'APPLY_CC_STREAM',
    apply_user     => 'STRMADMIN');
    END;
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name     => 'APPLY_AB_STREAM',
    apply_user     => 'STRMADMIN');
    END;
    5.     Add the Capture Rule
    CONNECT STRMADMIN/*****
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type     => 'CAPTURE',
    streams_name     => 'CAPTURE_CC_STREAM',
    queue_name          => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule     => true,
    source_database     => 'TESTDB');
    END;
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'HR',
    streams_type     => 'CAPTURE',
    streams_name     => 'CAPTURE_AB_STREAM',
    queue_name          => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule     => true,
    source_database     => 'TESTDB');
    END;
    6.     Set the instantiation system change number (SCN)
    CONNECT STRMADMIN/******
    DECLARE
    source_scn NUMBER;
    BEGIN
    source_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN (
    source_schema_name => 'SCOTT',
    source_database_name => 'TESTDB',
    instantiation_scn => source_scn);
    END;
    7.     Start the Apply
    CONNECT STRMADMIN/******
    BEGIN
    DBMS_APPLY_ADM.START_APPLY('APPLY_CC_STREAM');
    END;
    BEGIN
    DBMS_APPLY_ADM.START_APPLY('APPLY_AB_STREAM');
    END;
    8.     Start the Capture
    CONNECT STRMADMIN/******
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_CC_STREAM');
    END;
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_AB_STREAM');
    END;
    Waiting for your inputs!

    If I understand your code, you want to do this on the same DB:
    SCOTT --> RPT_SCOTT
    HR --> RPT_HR
    So there is a schema transformation; where is it coded?
    General info : http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_transform.htm
    More specific on schema rename : http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_mtransform.htm#CHDGDHDE
    Next: where is the initialisation of both capture schemas?
    Missing:
    execute DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION( schema_name => 'SCOTT');
    execute DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION( schema_name => 'HR');
    This tells Streams from which SCN to start capturing.
    Also, there is a (WRONG) instantiation of SCOTT for the apply, which tells the apply to consider as valid candidates all LCRs after source_scn, but where is the code for RPT_HR? Alas, you put 'SCOTT' as the APPLY target schema while it should have been 'RPT_SCOTT'.
    Whether that is correct or false depends on where you put the schema transformation. If you put the transformation at apply time, then use the SOURCE schema names (SCOTT, HR), since the LCRs will contain those names. If you put the transformation at capture time, then put the target schema names, since the LCRs will contain those names (RPT_SCOTT, RPT_HR).
    Let's say you put the schema transformation at capture time. Then this is missing:
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN (
    source_schema_name => 'RPT_HR',
    source_database_name => 'TESTDB',
    instantiation_scn => source_scn);
    If you attach the transformation to the apply process, then the code is:
    DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN (
    source_schema_name => 'HR',
    source_database_name => 'TESTDB',
    instantiation_scn => source_scn);
    And this is useless:
    -- useless code
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'APPLY_CC_STREAM',
    apply_user => 'STRMADMIN');
    END;
    BEGIN
    DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'APPLY_AB_STREAM',
    apply_user => 'STRMADMIN');
    END;
    /
    Last: you are using the same queue for 2 separate capture/apply combinations.
    Do yourself a favor and give each capture/transform/apply pair its own queue.
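    To make that concrete, a hedged sketch of a declarative schema-rename transformation attached to the capture-side DML rule, so LCRs are already renamed to RPT_SCOTT when they are enqueued; the rule lookup assumes the names used in the question above, and RENAME_SCHEMA exists in 10.2 and later.
    DECLARE
    v_rule VARCHAR2(100);
    BEGIN
    SELECT rule_owner || '.' || rule_name INTO v_rule
    FROM dba_streams_rules
    WHERE streams_name = 'CAPTURE_CC_STREAM'
    AND streams_type = 'CAPTURE'
    AND rule_type = 'DML';
    DBMS_STREAMS_ADM.RENAME_SCHEMA(
    rule_name => v_rule,
    from_schema_name => 'SCOTT',
    to_schema_name => 'RPT_SCOTT');
    END;
    /
    The same would be repeated for the DDL rule and for the HR capture rule with to_schema_name => 'RPT_HR'.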

  • Oracle stream - first_scn and start_scn

    Hi,
    My first_scn is 7669917207423 and start_scn is 7669991182403 in the DBA_CAPTURE view.
    Once I start the capture, from which SCN will it start capturing from the archived logs?
    Regards,

    I am using Oracle Streams on version 10.2.0.4. It's an Oracle downstream setup; the capture as well as the apply runs on the target database.
    Below is the setup doc.
    Regards,
    1.1 Create the Streams Queue
    conn STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'NIG_Q_TABLE',
    queue_name => 'NIG_Q',
    queue_user => 'STRMADMIN');     
    END;
    1.2 Create apply process for the Schema
    BEGIN
    DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name => 'NIG_Q',
    apply_name => 'NIG_APPLY',
    apply_captured => TRUE);
    END;
    1.3 Setting up parameters for Apply
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'disable_on_error','n');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'parallelism','6');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_dynamic_stmts','Y');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_hash_table_size','1000000');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_TXN_BUFFER_SIZE',10);
    /********** STEP 2.- Downstream capture process *****************/
    2.1 Create the downstream capture process
    BEGIN
    DBMS_CAPTURE_ADM.CREATE_CAPTURE (
    queue_name => 'NIG_Q',
    capture_name => 'NIG_CAPTURE',
    rule_set_name => null,
    start_scn => null,
    source_database => 'PNID.LOUDCLOUD.COM',
    use_database_link => true,
    first_scn => null,
    logfile_assignment => 'IMPLICIT');
    END;
    2.2 Setting up parameters for Capture
    exec DBMS_CAPTURE_ADM.ALTER_CAPTURE (capture_name=>'NIG_CAPTURE',checkpoint_retention_time=> 2);
    exec DBMS_CAPTURE_ADM.SET_PARAMETER ('NIG_CAPTURE','_SGA_SIZE','250');
    2.3 Add the table level rule for capture
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'NIG.BUILD_VIEWS',
    streams_type => 'CAPTURE',
    streams_name => 'NIG_CAPTURE',
    queue_name => 'STRMADMIN.NIG_Q',
    include_dml => true,
    include_ddl => true,
    source_database => 'PNID.LOUDCLOUD.COM');
    END;
    /**** Step 3 : Initializing SCN on Downstream database—start from here *************/
    import
    =================
    impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part1_srm_expdp_%U.dmp table_exists_action=replace exclude=grant,statistics,ref_constraint logfile=NIG1.log status=300
    /********** STEP 4.- Start the Apply process ********************/
    sqlplus STRMADMIN
    exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'NIG_APPLY');

  • Setup between Oracle streams and MQ

    Hi All,
    I'm trying to create the setup between Oracle Streams and Message Queuing (MQ). I have already installed the MQ client on my machine and the Messaging Gateway is also working fine. According to the setup docs, I created one user having all the grants they provide in the docs and created the queue table, queue, DML handler, table rules, and started the queue; after that we set up the capture and apply processes on that user. In other words, the entire setup; but our data is not propagating the message to MQ.
    So if anyone has specific setup code, please provide it, or give any suggestions you can.
    Thanks & regards,
    Sanjeev

    Hi Sanjeev,
    There is a special forum dedicated to Oracle Streams: Streams
    You may want to try there.
    Extra tip: post the code you used. That way it is easier for people to see what you might have done wrong.
    Regards,
    Rob.

  • Oracle Streams and CLOB column

    Hi there,
    We are using "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit". My question is "Does Oracle Streams captures, propagates (source capture method) and applies CLOB column changes?"
    If yes, is this default behavior? Can we tell Streams to exclude the CLOB column from the whole (capture-propage-apply) process?
    Thanks in advance!

    You can exclude columns via a rule (dbms_streams_adm.delete_column); a sketch follows below.
    CLOBs are captured.
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17069/strms_capture.htm#i1006263
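    A minimal sketch of that suggestion, with assumed names (the rule, table and column are placeholders; find the real capture rule in DBA_STREAMS_RULES):
    BEGIN
    DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name => 'STRMADMIN.DOCS_CAPTURE_RULE', -- assumed rule name
    table_name => 'SCOTT.DOCS', -- assumed table
    column_name => 'DOC_BODY', -- the CLOB column to exclude
    value_type => '*', -- strip both old and new values
    operation => 'ADD'); -- attach the transformation to the rule
    END;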

  • Oracle stream - Downstream new table setup

    Hi,
    I want to add a new table to my existing Oracle Streams setup. Below are the steps. Is this OK?
    1) Stop apply/capture
    2) Add a new rule to the existing capture, which internally will call DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION too (I guess):
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'NIG.BUILD_VIEWS',
    streams_type => 'CAPTURE',
    streams_name => 'NIG_CAPTURE',
    queue_name => 'STRMADMIN.NIG_Q',
    include_dml => true,
    include_ddl => true,
    source_database => 'PNID.LOUDCLOUD.COM');
    END;
    3) Import (which will instantiate the table):
    impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part2_srm_expdp_%U.dmp exclude=grant,statistics,ref_constraint
    4) Start apply/capture

    Have you tried applying it this way? What was the result?
    regards

  • Doubt in Oracle streams

    I have a doubt in Oracle Streams. Can you please help me understand the following terms:
    1. Message
    2. User-defined event
    3. Event
    4. Rules
    5. Oracle-supplied PL/SQL packages
    6. Subscriber, consumer

    Hi,
    Message
    A message is the smallest unit of information that is inserted into and retrieved from a queue.
    Queue
    A queue is a repository for messages. Queues are stored in queue tables.
    Enqueue
    To place a message in a queue.
    Dequeue
    To consume a message.
    Agent
    An agent is an end user or an application that uses a queue. A small working sketch follows below.
    Thanks
    Venkat
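    To make the terms concrete, a self-contained sketch (all names are invented for the example) that creates a queue, enqueues a message and dequeues (consumes) it:
    BEGIN
    DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table => 'strmadmin.demo_qt',
    queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE');
    DBMS_AQADM.CREATE_QUEUE('strmadmin.demo_q', 'strmadmin.demo_qt');
    DBMS_AQADM.START_QUEUE('strmadmin.demo_q');
    END;
    /
    DECLARE
    enq_opts DBMS_AQ.ENQUEUE_OPTIONS_T;
    deq_opts DBMS_AQ.DEQUEUE_OPTIONS_T;
    props DBMS_AQ.MESSAGE_PROPERTIES_T;
    msg_id RAW(16);
    msg SYS.AQ$_JMS_TEXT_MESSAGE;
    BEGIN
    msg := SYS.AQ$_JMS_TEXT_MESSAGE.CONSTRUCT;
    msg.SET_TEXT('hello'); -- the message payload
    DBMS_AQ.ENQUEUE('strmadmin.demo_q', enq_opts, props, msg, msg_id); -- enqueue
    COMMIT;
    DBMS_AQ.DEQUEUE('strmadmin.demo_q', deq_opts, props, msg, msg_id); -- dequeue (consume)
    COMMIT;
    END;
    /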

  • Help on Oracle streams 11g configuration

    Hi Streams experts
    Can you please validate the following creation process steps ?
    What I need Streams to do is a one-way replication of the AR
    schema from one database to another database. Both DML and DDL
    shall be replicated.
    I would also need your help on the maintenance steps, controls
    and procedures.
    2 databases
    1 src as source database
    1 dst as destination database
    replication type 1 way of the entire schema FaeterBR
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src:
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace streamsdm
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_name set to True, the db_link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control what schema shall be replicated
    FaeterBR is the schema to be replicated
    Step 7. Add supplemental logging to all the tables of the FaeterBR
    schema:
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src_queue');
    END;
    ---At DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_dst_queue_table',
    queue_name => 'streams_adm.FaeterBR_dst_queue');
    END;
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    Step 10. Instantiate the FaeterBR schema at DST by doing
    export/import. Can I use Data Pump to do that?
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create FaeterBR tablespaces and user:
    create tablespace FaeterBR datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR
    default tablespace FaeterBR
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'streams_adm.FaeterBR_src_queue',
    destination_queue_name=>'streams_adm.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 13. Create substitution key columns for all the tables of the
    FaeterBR schema on DST that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
    Step 14. Configure conflict resolution at the replication db (DST).
    Is there an easier method applicable to the whole schema? (See the sketch after the block below.)
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'FaeterBR',
    column_list =>cols);
    END;
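    On the step 14 question about an easier schema-wide method: one hedged approach is to generate a handler per table from the data dictionary. The sketch below is illustrative only; a real handler needs a deliberate column list per table (typically excluding key columns, and LOB columns are not allowed), so the datatype filter here is just a guard.
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    i PLS_INTEGER;
    BEGIN
    FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'FAETERBR') LOOP
    i := 0;
    cols.DELETE;
    FOR c IN (SELECT column_name FROM all_tab_columns
    WHERE owner = 'FAETERBR'
    AND table_name = t.table_name
    AND data_type IN ('VARCHAR2', 'NUMBER', 'DATE')) LOOP
    i := i + 1;
    cols(i) := c.column_name;
    END LOOP;
    IF i > 0 THEN
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name => 'FAETERBR.' || t.table_name,
    method_name => 'OVERWRITE',
    resolution_column => cols(1),
    column_list => cols);
    END IF;
    END LOOP;
    END;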
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    Step 17. Test streams propagation of rows from source (src) to
    replication (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
    Any other tests that could be made?

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.
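    On the step 10 question about Data Pump: yes, it can replace the exp/imp pair. A hedged sketch, assuming a DPUMP_DIR directory object exists on both databases and <iscn> is the instantiation SCN taken at the source; with a FLASHBACK_SCN-consistent export, Data Pump import sets the Streams instantiation SCN without a separate flag.
    expdp system/*** SCHEMAS=FaeterBR DIRECTORY=DPUMP_DIR DUMPFILE=FaeterBR.dmp FLASHBACK_SCN=<iscn> LOGFILE=expFaeterBR.log
    impdp system/*** SCHEMAS=FaeterBR DIRECTORY=DPUMP_DIR DUMPFILE=FaeterBR.dmp LOGFILE=impFaeterBR.log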

  • Oracle streams configuration problem

    Hi all,
    I'm trying to configure Oracle Streams on my source database (Oracle 9.2), and when I execute the package DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS'); I get the error below:
    ERROR at line 1:
    ORA-01353: existing Logminer session
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2238
    ORA-06512: at line 1
    When checking some docs, they say I have to destroy all LogMiner sessions, but when I query the V$SESSION view I cannot identify any LogMiner session. Can someone help me? I need this Streams tool for schema synchronization between my production database and my data warehouse database.
    What I want to know is how to destroy or stop the LogMiner session.
    Thanks for your help
    regards
    raitsarevo

    Thanks Werner, it's OK now, my problem is solved, and below is the output of your script.
    While I'm at it: if you have some docs or advice for my database schema synchronisation, is using Oracle Streams the best option, or can I use anything else? Not the Data Guard concept or a standby database, though, because I only want to apply DML changes, not DDL. If you have some docs for Oracle Streams, especially for schema synchronization (not just tables), please share.
    Many thanks again, and please send them to my email address [email protected] if needed.
    ABILLITY>DELETE FROM system.logmnr_uid$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnr_session$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtcs;
    0 rows deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtlo;
    13 rows deleted.
    ABILLITY>EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    PL/SQL procedure successfully completed.
    regards
    raitsarevo

  • DATA Guard Logical Standby v.s. Streams Apply (10gR2, rac -- rac )

    Greetings -
    We currently have a 3-node cluster doing a 'schema' capture, with a 2-node cluster serving as the apply side. Both are on 10gR2 (Solaris 10).
    We have been experiencing apply latency due to large transactions. The way LogMiner/Streams evaluates the archived logs, it converts each updated row into a statement within the transaction set, using 'decode (' statements.
    I am under the impression a physical standby will do the same thing. But what about a logical standby?
    (this from the Oracle Documentation)
    [10g Concepts|http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm#SBYDB00010]
    ...Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes the SQL statements on the standby database.
    Does anyone know firsthand if SQL APPLY re-issues the original sql, or if it relies on the 'decode' command as well ?
    Thanks
    The Nets Edge

    A physical standby does not do anything with SQL. It is running media recovery under the covers. A logical standby uses LogMiner to read the redo, converts it to SQL and data, and applies those transactions to the logical standby.
    Both build logical change records, figure out ordering, dependencies, etc., and apply the transactions. Both are susceptible to long-running transactions on the primary (source) database.
    Streams uses LogMiner, and Logical Standby uses part of the Streams apply, so as Mr. Morgan said, they are very similar :^)
    If you are having apply performance issues you may want to look at Active data Guard.
    Larry

  • Oracle Streams 9i Database Link Error

    Can anyone be help me regarding Oracle Streams 9i
    Oracle Version: 9.2.0.5.0
    Initialization parameters:
    global_names=true
    aq_tm_processes=1
    job_queue_processes=10
    log_parallelism=1 scope=spfile
    database is in archivelog;
    I executed the following SQL statements:
    CONNECT sys/xxxxx@db1 AS SYSDBA
    CREATE USER strmadmin IDENTIFIED BY strmadminpw
    DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin;
    GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    CONNECT strmadmin/strmadminpw@db1
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
    CREATE DATABASE LINK db2 CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'DB2';
    CONNECT sys/xxxxx@db2 AS SYSDBA
    CREATE USER strmadmin IDENTIFIED BY strmadminpw
    DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin;
    GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    CONNECT strmadmin/strmadminpw@db2
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
    CONNECT sys/xxxxx@db2 as sysdba
    GRANT ALL ON scott.dept TO strmadmin;
    CONNECT sys/xxxxx@db1 AS SYSDBA
    CREATE TABLESPACE logmnr_ts DATAFILE 'D:\oracle\oradata\db1\logmnr01.dbf'
    SIZE 25 M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
    EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnr_ts');
    CONNECT sys/xxxxx@db1 AS SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP log_group_dept_pk (deptno) ALWAYS;
    CONNECT strmadmin/strmadminpw@db1
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name => 'scott.dept',
    streams_name => 'db1_to_db2',
    source_queue_name => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@db2',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1');
    END;
    CONNECT strmadmin/strmadminpw@db1
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'scott.dept',
    streams_type => 'capture',
    streams_name => 'capture_simp',
    queue_name => 'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true);
    END;
    I did this exp/imp:
    exp userid=scott/tiger@db1 FILE=dept_instant.dmp TABLES=dept OBJECT_CONSISTENT=y ROWS=n
    imp userid=scott/tiger@db2 FILE=dept_instant.dmp IGNORE=y COMMIT=y LOG=import.log STREAMS_INSTANTIATION=y
    and then
    CONNECT sys/xxxxx@db2 AS SYSDBA
    ALTER TABLE scott.dept DROP SUPPLEMENTAL LOG GROUP log_group_dept_pk;
    CONNECT strmadmin/strmadminpw@db2
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'scott.dept',
    streams_type => 'apply',
    streams_name => 'apply_simp',
    queue_name => 'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1');
    END;
    CONNECT strmadmin/strmadminpw@db2
    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_simp',
    parameter => 'disable_on_error',
    value => 'n');
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_simp');
    END;
    CONNECT strmadmin/strmadminpw@db1
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_simp');
    END;
    Here is my problem:
    After I configured it, Streams still cannot replicate. When I check the EM GUI for Streams monitoring, it displays a broken database link and says that the database link is not active.
    Can anyone please help?
    Thanks.

    I don't know what Datacomp is, but you should be able
    to connect to any non-Oracle database if there is
    an ODBC driver for it. You would use Generic
    Heterogeneous Services to do this. See the following
    link to the documentation for details:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96544/gencon.htm#1656
    There are also several articles on the Net. Do a google
    search on oracle generic heterogeneous services /
    connectivity.
    Hope this helps.
    Kailash.
