Oracle stream - Downstream

I am now getting the error below on the downstream database. The constraint doesn't exist on the downstream side because I am excluding constraints
when importing.
ORA-00001: unique constraint (PCAT_NT.PK01_DCS_CAT_CATINFO) violated
Is it mandatory to include constraints in an Oracle Streams setup?
My steps are below:
1) Create the downstream capture process
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'STRMPCAT_QUEUE',
    capture_name       => 'DOWNSTRMPCAT_CAPTURE',
    rule_set_name      => NULL,
    start_scn          => NULL,
    source_database    => 'PCAT',
    use_database_link  => TRUE,
    first_scn          => NULL,
    logfile_assignment => 'IMPLICIT');
END;
2) Add the schema rule on the downstream database
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'PCAT_NT',
    streams_type    => 'CAPTURE',
    streams_name    => 'DOWNSTRMPCAT_CAPTURE',
    queue_name      => 'STRMPCAT.STRMPCAT_QUEUE',
    include_dml     => TRUE,
    include_ddl     => FALSE,
    source_database => 'PCAT');
END;
3) Instantiate the schema (initialize the SCN) on the downstream database
--on source
expdp system SCHEMAS=PCAT_NT DIRECTORY=DATA_PUMP_DIR DUMPFILE=PCAT_expdp.dmp exclude=user,grant,statistics,synonyms,procedure,constraint FLASHBACK_SCN=7620529799615
--on target
impdp system SCHEMAS=PCAT_NT DIRECTORY=DATA_PUMP_DIR DUMPFILE=PCAT_expdp.dmp
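(For reference, the FLASHBACK_SCN used in an export like the one above is normally read from the source just before running expdp; a minimal sketch:)
-- On the source, immediately before the export:
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM dual;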
Regards

Are you saying that the constraint PCAT_NT.PK01_DCS_CAT_CATINFO does not exist on the destination database, and yet you are getting a constraint violation error there? That strikes me as exceptionally unlikely.
How do you know that the constraint was not created on the destination database? I would tend to suspect that you created it, perhaps unintentionally.
I would also be interested in why you are getting duplicate rows at all. If there are no duplicate rows in the source database, duplicates in the destination imply either that something in the setup is wrong (e.g. the SCN in your export is incorrect) or that something in the Streams configuration is creating duplicates (e.g. a custom DML handler).
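A quick way to settle the first question is to query the dictionary on the destination; a sketch, assuming a privileged session there. Note that ORA-00001 reports the name of the unique index that detected the duplicate, so a unique index imported without its constraint would raise it too:
SELECT owner, constraint_name, constraint_type, status
FROM dba_constraints
WHERE owner = 'PCAT_NT' AND constraint_name = 'PK01_DCS_CAT_CATINFO';
SELECT owner, index_name, uniqueness
FROM dba_indexes
WHERE owner = 'PCAT_NT' AND index_name = 'PK01_DCS_CAT_CATINFO';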
Justin

Similar Messages

  • Oracle stream - Downstream new table setup

    Hi,
I want to add a new table to my existing Oracle Streams setup. Below are the steps. Are they OK?
1) Stop apply/capture.
2) Add a new rule to the existing capture, which internally will call DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION too (I guess; see the sketch after these steps).
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'NIG.BUILD_VIEWS',
    streams_type => 'CAPTURE',
    streams_name => 'NIG_CAPTURE',
    queue_name => 'STRMADMIN.NIG_Q',
    include_dml => true,
    include_ddl => true,
source_database => 'PNID.LOUDCLOUD.COM');
END;
3) Import (which will instantiate the table):
    impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part2_srm_expdp_%U.dmp exclude=grant,statistics,ref_constraint
    4) start apply/capture
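On the guess in step 2, a sketch of doing the preparation and the instantiation explicitly instead of relying on the import (the SCN is a placeholder; use the SCN your copy of the data is consistent with):
BEGIN
  -- On the source: explicitly prepare the table (ADD_TABLE_RULES normally
  -- does this implicitly when it creates capture rules):
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'NIG.BUILD_VIEWS');
END;
BEGIN
  -- On the apply side: set the instantiation SCN by hand:
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'NIG.BUILD_VIEWS',
    source_database_name => 'PNID.LOUDCLOUD.COM',
    instantiation_scn    => 1234567890);  -- placeholder SCN
END;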

    Have you applied this way? What was the result?
    regards

  • Oracle Streams & Oracle Real Application Clusters

Hello... I am developing a new replication system for my company using Oracle Streams. I have already achieved data replication to a downstream database, but now I would like to do the same in a RAC environment. I will appreciate any help you can give me. Best regards, walny

I've been researching, but now I have another doubt. I have a cluster of five instances, two of them downstream: one primary and one secondary. I don't know whether the standby redo log files configured for the primary instance will be the same for the secondary instance. What I want to achieve is HA for the replication environment, so if I configure standby redo log files in both instances with just one group the problem is solved, but I would be wasting resources.
I hope you can help me.
regards

  • Oracle stream - first_scn and start_scn

    Hi,
My first_scn is 7669917207423 and my start_scn is 7669991182403 in the DBA_CAPTURE view.
Once I start the capture, from which SCN will it start capturing from the archive logs?
Regards,
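As I understand it, first_scn is the lowest SCN from which the capture could ever be (re)started (the dictionary build), while start_scn is where this capture actually begins capturing changes, so yours should start at 7669991182403. The dictionary also shows the oldest SCN whose logs must still be available; a quick check:
SELECT capture_name, first_scn, start_scn, required_checkpoint_scn
FROM dba_capture;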

I am using Oracle Streams on version 10.2.0.4. It is an Oracle downstream setup: both the capture and the apply run on the target database.
Regards,
Below is the setup doc.
    1.1 Create the Streams Queue
    conn STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'NIG_Q_TABLE',
    queue_name => 'NIG_Q',
queue_user => 'STRMADMIN');
    END;
    1.2 Create apply process for the Schema
    BEGIN
    DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name => 'NIG_Q',
    apply_name => 'NIG_APPLY',
apply_captured => TRUE);
END;
    1.3 Setting up parameters for Apply
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'disable_on_error','n');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'parallelism','6');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_dynamic_stmts','Y');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_hash_table_size','1000000');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_TXN_BUFFER_SIZE',10);
    /********** STEP 2.- Downstream capture process *****************/
    2.1 Create the downstream capture process
    BEGIN
    DBMS_CAPTURE_ADM.CREATE_CAPTURE (
    queue_name => 'NIG_Q',
    capture_name => 'NIG_CAPTURE',
    rule_set_name => null,
    start_scn => null,
    source_database => 'PNID.LOUDCLOUD.COM',
    use_database_link => true,
    first_scn => null,
    logfile_assignment => 'IMPLICIT');
    END;
    2.2 Setting up parameters for Capture
    exec DBMS_CAPTURE_ADM.ALTER_CAPTURE (capture_name=>'NIG_CAPTURE',checkpoint_retention_time=> 2);
    exec DBMS_CAPTURE_ADM.SET_PARAMETER ('NIG_CAPTURE','_SGA_SIZE','250');
    2.3 Add the table level rule for capture
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'NIG.BUILD_VIEWS',
    streams_type => 'CAPTURE',
    streams_name => 'NIG_CAPTURE',
    queue_name => 'STRMADMIN.NIG_Q',
    include_dml => true,
    include_ddl => true,
source_database => 'PNID.LOUDCLOUD.COM');
END;
/**** Step 3: Initialize the SCN on the downstream database (start from here) ****/
    import
    =================
    impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part1_srm_expdp_%U.dmp table_exists_action=replace exclude=grant,statistics,ref_constraint logfile=NIG1.log status=300
    /********** STEP 4.- Start the Apply process ********************/
    sqlplus STRMADMIN
    exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'NIG_APPLY');

  • 3 10g db's not connected by network, Oracle Streams?

Can I use Oracle Streams to replicate just by swapping hard drives? What is the best method to replicate like this? Roll my own?
    Rob

You can use an archived-log downstream capture process to achieve this.
You can copy the archive logs to the other site and configure the capture and apply processes on the same database.
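A sketch of that approach, using explicit (manual) log registration instead of redo transport; all names are placeholders:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'STRMADMIN.STREAMS_QUEUE',
    capture_name       => 'OFFLINE_CAPTURE',
    source_database    => 'SRCDB',
    use_database_link  => FALSE,
    logfile_assignment => 'EXPLICIT');  -- mine only the logs you register
END;
-- After each batch of archive logs arrives (e.g. on a swapped drive):
ALTER DATABASE REGISTER LOGICAL LOGFILE '/copies/arch_1_1234.arc'
FOR 'OFFLINE_CAPTURE';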

  • Oracle Streams b/w MS-Access 2007 and Oracle 10g.

Can we set up Oracle Streams between MS-Access 2007 and Oracle 10g, with MS-Access as the source and Oracle 10g as the destination database? If so, can anyone please give me a little heads-up with supporting docs or any other source of info?

    Help Help....!!!

  • Doubt in Oracle streams

I have a doubt about Oracle Streams. Can you please help me understand these terms?
1. Message
2. User-defined event
3. Event
4. Rules
5. Oracle-supplied PL/SQL packages
6. Subscriber, consumer

    Hi
    Message
    A message is the smallest unit of information that is inserted into and retrieved from a queue.
    Queue
A queue is a repository for messages. Queues are stored in queue tables.
Enqueue
To place a message in a queue.
Dequeue
To consume a message.
Agent
An agent is an end user or an application that uses a queue.
    Thanks
    Venkat
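A minimal sketch tying those terms together with plain AQ calls (all object names hypothetical); dequeuing mirrors the enqueue via DBMS_AQ.DEQUEUE:
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'demo_qt',       -- queues live in queue tables
    queue_payload_type => 'SYS.ANYDATA');
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'demo_q',
    queue_table => 'demo_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'demo_q');
END;
DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  msg_id    RAW(16);
BEGIN
  -- Enqueue: place a message (the smallest unit of information) in the queue
  DBMS_AQ.ENQUEUE(
    queue_name         => 'demo_q',
    enqueue_options    => enq_opts,
    message_properties => msg_props,
    payload            => SYS.ANYDATA.CONVERTVARCHAR2('hello'),
    msgid              => msg_id);
  COMMIT;
END;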

  • Help on Oracle streams 11g configuration

Hi Streams experts,
Can you please validate the following creation process steps?
What I need Streams to do is a one-way replication of the FaeterBR schema from one database to another. The replication shall cover both DML and DDL.
I would also need your help on the maintenance steps, controls, and procedures.
2 databases:
1 src as the source database
1 dst as the destination database
Replication type: one-way replication of the entire schema FaeterBR
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
---at the replica:
SQL> create tablespace streamsdm datafile
'/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace strepadm01
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_name set to True, the db_link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to stream_admin identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
Step 6. Decide which schema shall be replicated:
FaeterBR is the schema to be replicated.
Step 7. Add supplemental logging to all the tables of the FaeterBR schema? (See the generation sketch after the examples below.)
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
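On the question in step 7: yes, it is per table; a sketch that generates the statements instead of typing them, assuming every table in the schema should be logged:
BEGIN
  FOR t IN (SELECT table_name FROM dba_tables
            WHERE owner = 'FAETERBR') LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE FaeterBR.' || t.table_name ||
                      ' ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS';
  END LOOP;
END;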
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect stream_admin/stream_admin@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src__queue');
    END;
    ---At DST (replica):
    SQL> connect stream_admin/stream_admin@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'stream_admin.FaeterBR_dst_queue_table',
    queue_name => 'stream_admin.FaeterBR_dst_queue');
    END;
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
Step 10. Instantiate the FaeterBR schema at DST by doing export/import. Can I use Data Pump now to do that? (See the sketch after this step.)
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create FaeterBR tablespaces and user:
create tablespace FaeterBR_ datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR_
    default tablespace FaeterBR_
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR_.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
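On the Data Pump question in step 10: yes; a sketch of the equivalent calls (directory object, passwords, and the SCN are placeholders). A FLASHBACK_SCN export gives a consistent image, and importing such a dump with Data Pump should set the instantiation SCNs for the imported objects:
---AT SRC:
expdp system/superman SCHEMAS=FaeterBR DIRECTORY=DATA_PUMP_DIR DUMPFILE=FaeterBR.dmp FLASHBACK_SCN=<scn_from_source>
---AT DST (after creating the tablespaces and user as above):
impdp system/123db SCHEMAS=FaeterBR DIRECTORY=DATA_PUMP_DIR DUMPFILE=FaeterBR.dmp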
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'stream_admin.FaeterBR_src_queue',
    destination_queue_name=>'stream_admin.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
Step 13. Create substitution key columns for all the tables of the FaeterBR schema on DST that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
Step 14. Configure conflict resolution at the replication db (DST).
Is there an easier method that covers the whole schema? (See the sketch after this step.)
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'FaeterBR',
    column_list =>cols);
    END;
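On the "easier method" question in step 14: there is no schema-level call, but the per-table handlers can be generated; a sketch, assuming each table's full column list goes into one OVERWRITE handler (you may want to exclude key columns, and the resolution column shown is a placeholder that must belong to the list):
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
  i    PLS_INTEGER;
BEGIN
  FOR t IN (SELECT table_name FROM dba_tables
            WHERE owner = 'FAETERBR') LOOP
    cols.DELETE;
    i := 0;
    FOR c IN (SELECT column_name FROM dba_tab_columns
              WHERE owner = 'FAETERBR' AND table_name = t.table_name) LOOP
      i := i + 1;
      cols(i) := c.column_name;
    END LOOP;
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
      object_name       => 'FaeterBR.' || t.table_name,
      method_name       => 'OVERWRITE',
      resolution_column => cols(1),  -- placeholder: any column in the list
      column_list       => cols);
  END LOOP;
END;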
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    Step 17. Test streams propagation of rows from source (src) to
    replication (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
Is there any other test that can be made?

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.

  • Oracle streams configuration problem

    Hi all,
I'm trying to configure Oracle Streams on my source database (Oracle 9.2), and when I execute DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS'); I get the error below:
    ERROR at line 1:
    ORA-01353: existing Logminer session
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2238
    ORA-06512: at line 1
When checking some docs, they said I have to destroy all LogMiner sessions, but when I look at the v$session view I cannot identify any LogMiner session. If someone can help me: I need this Streams tool for schema synchronization between my production database and my data warehouse database.
What I want to know is how to destroy or stop the LogMiner session.
Thanks for your help
    regards
    raitsarevo

Thanks Werner, my problem is solved now, and below is the output of your script.
I would appreciate any docs or advice on database schema synchronisation: is Oracle Streams the best option, or can I use something else (not the Data Guard concept or a standby database, because I only want to apply DML changes, not DDL)? Especially docs for Oracle Streams schema-level synchronization, not table-level.
Many thanks again, and please send them to my email address [email protected] if needed.
    ABILLITY>DELETE FROM system.logmnr_uid$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnr_session$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtcs;
    0 rows deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtlo;
    13 rows deleted.
    ABILLITY>EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    PL/SQL procedure successfully completed.
    regards
    raitsarevo

  • Oracle Streaming in same database - 10g

    Oracle Streaming in 10g database
    Hello,
I am trying to do streaming at the table level for all DML changes from one schema (source) to another schema (target), controlled by an admin schema (stream_admin). All these schemas are in the same database (10g Enterprise Edition).
As given in the documentation, I created:
1. two queues (in_Q and out_Q) in stream_admin,
2. two processes (capture and apply) in stream_admin,
3. a propagation rule for propagation between in_Q and out_Q,
4. did the instantiation,
5. started both the capture and apply processes.
Having done that, I insert into the source table and check for the same row in the target, but alas, nothing happens. I fail to achieve streaming.
I am not getting any error: not in the processes, nor in the propagation or queues. And all queues, rules, and processes are enabled.
Please help. (Some diagnostic checks are sketched below.)
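A few checks that usually narrow this down; a sketch, run as the admin user:
-- Is the capture running and enqueuing anything?
SELECT capture_name, state, total_messages_captured, total_messages_enqueued
FROM v$streams_capture;
-- Is the apply reader dequeuing?
SELECT apply_name, state, total_messages_dequeued
FROM v$streams_apply_reader;
-- Did anything land in the error queue?
SELECT apply_name, source_transaction_id, error_message
FROM dba_apply_error;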

Datapump uses dbms_metadata extensively.
The problem is twofold:
- the amount of data
- why on earth you need to 'copy' these 1.2 Tb a 'number of times'
One would expect you to have tested the upgrade on a smaller test database, so you wouldn't need to fix bugs in a 1.2 Tb database.
A better idea would be to duplicate the complete database using RMAN. This doesn't perform conventional INSERTs and doesn't create redo.
Sybrand Bakker
Senior Oracle DBA

  • Oracle Streams 9i Database Link Error

Can anyone help me regarding Oracle Streams 9i?
Oracle Version: 9.2.0.5.0
Initialization parameters:
global_names=true
aq_tm_processes=1
job_queue_processes=10
log_parallelism=1 scope=spfile
The database is in archivelog mode.
I executed the following SQL statements:
    CONNECT sys/xxxxx@db1 AS SYSDBA
    CREATE USER strmadmin IDENTIFIED BY strmadminpw
    DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin;
    GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    CONNECT strmadmin/strmadminpw@db1
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
    CREATE DATABASE LINK db2 CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'DB2';
    CONNECT sys/xxxxx@db2 AS SYSDBA
    CREATE USER strmadmin IDENTIFIED BY strmadminpw
    DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    GRANT CONNECT, RESOURCE, SELECT_CATALOG_ROLE TO strmadmin;
    GRANT EXECUTE ON DBMS_AQADM TO strmadmin;
    GRANT EXECUTE ON DBMS_CAPTURE_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_STREAMS_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_APPLY_ADM TO strmadmin;
    GRANT EXECUTE ON DBMS_FLASHBACK TO strmadmin;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    BEGIN
    DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
    privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
    grantee => 'strmadmin',
    grant_option => FALSE);
    END;
    CONNECT strmadmin/strmadminpw@db2
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE();
    CONNECT sys/xxxxx@db2 as sysdba
    GRANT ALL ON scott.dept TO strmadmin;
    CONNECT sys/xxxxx@db1 AS SYSDBA
    CREATE TABLESPACE logmnr_ts DATAFILE 'D:\oracle\oradata\db1\logmnr01.dbf'
    SIZE 25 M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
    EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnr_ts');
    CONNECT sys/xxxxx@db1 AS SYSDBA
    ALTER TABLE scott.dept ADD SUPPLEMENTAL LOG GROUP log_group_dept_pk (deptno) ALWAYS;
    CONNECT strmadmin/strmadminpw@db1
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name => 'scott.dept',
    streams_name => 'db1_to_db2',
    source_queue_name => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@db2',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1');
    END;
    CONNECT strmadmin/strmadminpw@db1
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'scott.dept',
    streams_type => 'capture',
    streams_name => 'capture_simp',
    queue_name => 'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true);
    END;
    Did this exp/imp
    exp userid=scott/tiger@db1 FILE=dept_instant.dmp TABLES=dept OBJECT_CONSISTENT=y ROWS=n
    imp userid=scott/tiger@db2 FILE=dept_instant.dmp IGNORE=y COMMIT=y LOG=import.log STREAMS_INSTANTIATION=y
    and then
    CONNECT sys/xxxxx@db2 AS SYSDBA
    ALTER TABLE scott.dept DROP SUPPLEMENTAL LOG GROUP log_group_dept_pk;
    CONNECT strmadmin/strmadminpw@db2
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'scott.dept',
    streams_type => 'apply',
    streams_name => 'apply_simp',
    queue_name => 'strmadmin.streams_queue',
    include_dml => true,
    include_ddl => true,
    source_database => 'db1');
    END;
    CONNECT strmadmin/strmadminpw@db2
    BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply_simp',
    parameter => 'disable_on_error',
    value => 'n');
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_simp');
    END;
    CONNECT strmadmin/strmadminpw@db1
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_simp');
    END;
Here is my problem:
After I configured it, Streams still cannot replicate, and when I check the EM GUI for Streams monitoring it displays a broken database link.
It also says that the database link is not active.
Can anyone please help?
Thanks.
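Two quick checks for the link itself, from the capture side (a sketch; with global_names=true the link name must match the remote GLOBAL_NAME):
SELECT * FROM global_name@db2;  -- must connect and return the remote name
SELECT db_link, username, host FROM dba_db_links;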

    I don't know what Datacomp is, but you should be able
    to connect to any non-Oracle database if there is
    an ODBC driver for it. You would use Generic
    Heterogeneous Services to do this. See the following
    link to the documentation for details:
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96544/gencon.htm#1656
    There are also several articles on the Net. Do a google
    search on oracle generic heterogeneous services /
    connectivity.
    Hope this helps.
    Kailash.

  • Oracle Streaming Queues in Oracle 10G standard Edition

I would like to configure and implement Oracle Streams queues in Oracle 10g Standard Edition. If it is possible, please guide me and give me some clues; if not, please advise me of an alternate method.

    Here is the guidance you requested.
    License information:
    http://download.oracle.com/docs/cd/B19306_01/license.102/b14199/toc.htm
    Technical information:
    http://tahiti.oracle.com/
    Since I don't even know what version you have ... this is as far as I can take you.

  • Performance tuning and Periodic Maintance in Oracle Streaming Environment

We have set up bi-directional Oracle Streams between two remote sites, each a 2-node RAC database.
This is our environment summary:
Database: Oracle 10g R2, version 10.2.0.4.0
OS: Solaris[tm] OE (64-bit)
Currently Oracle Streams is working successfully and I monitor it daily.
As mentioned in the Master Note for Streams Performance Recommendations [ID 335516.1]:
Purging Streams Checkpoints
10.2: Alter the capture parameter CHECKPOINT_RETENTION_TIME from the default retention of 60 days to a realistic value for your database.
A typical setting might be to retain 7 days' worth of checkpoint metadata:
exec dbms_capture_adm.alter_capture(capture_name=>'your_capture', checkpoint_retention_time=> 7);
My query:
Currently in my environment CHECKPOINT_RETENTION_TIME is at the default of 60 days. I want to change it to 7, so what should I take care of before executing the above command in my live Streams environment?
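A couple of sanity checks before lowering it (a sketch). The change itself only allows older checkpoint metadata, and with it older archive logs, to become purgeable, so make sure the archive-log retention/backup policy agrees with the new 7-day window:
SELECT capture_name, checkpoint_retention_time, required_checkpoint_scn
FROM dba_capture;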

  • Oracle Streams 'ORA-25215: user_data type and queue type do not match'

    I am trying replication between two databases (10.2.0.3) using Oracle Streams.
    I have followed the instructions at http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
    The main steps are:
    1. Set up ARCHIVELOG mode.
    2. Set up the Streams administrator.
    3. Set initialization parameters.
    4. Create a database link.
    5. Set up source and destination queues.
    6. Set up supplemental logging at the source database.
    7. Configure the capture process at the source database.
    8. Configure the propagation process.
    9. Create the destination table.
    10. Grant object privileges.
    11. Set the instantiation system change number (SCN).
    12. Configure the apply process at the destination database.
    13. Start the capture and apply processes.
For step 5, I have used 'set_up_queue' in the DBMS_STREAMS_ADM package. This procedure creates a queue table and an associated queue.
    The problem is that, in the propagation process, I get this error:
    'ORA-25215: user_data type and queue type do not match'
    I have checked it, and the queue table and its associated queue are created as shown:
    sys.dbms_aqadm.create_queue_table (
    queue_table => 'CAPTURE_SFQTAB'
    , queue_payload_type => 'SYS.ANYDATA'
    , sort_list => ''
    , COMMENT => ''
    , multiple_consumers => TRUE
    , message_grouping => DBMS_AQADM.TRANSACTIONAL
    , storage_clause => 'TABLESPACE STREAMSTS LOGGING'
    , compatible => '8.1'
    , primary_instance => '0'
    , secondary_instance => '0');
    sys.dbms_aqadm.create_queue(
    queue_name => 'CAPTURE_SFQ'
    , queue_table => 'CAPTURE_SFQTAB'
    , queue_type => sys.dbms_aqadm.NORMAL_QUEUE
    , max_retries => '5'
    , retry_delay => '0'
    , retention_time => '0'
    , COMMENT => '');
    The capture process is 'capturing changes' but it seems that these changes cannot be enqueued into the capture queue because the data type is not correct.
    As far as I know, 'sys.anydata' payload type and 'normal_queue' type are the right parameters to get a successful configuration.
    I would be really grateful for any idea!
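A dictionary check for the payload type actually recorded for the queue table may help (a sketch, using the names from the listing above); a propagation target whose recorded OBJECT_TYPE is not SYS.ANYDATA would be consistent with ORA-25215:
SELECT owner, queue_table, type, object_type
FROM dba_queue_tables
WHERE queue_table = 'CAPTURE_SFQTAB';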

    Hi
    You need to run a VERIFY to make sure that the queues are compatible. At least on my 10.2.0.3/4 I need to do it.
DECLARE
rc BINARY_INTEGER;
BEGIN
DBMS_AQADM.VERIFY_QUEUE_TYPES(
src_queue_name => 'np_out_onlinex',
dest_queue_name => 'np_out_onlinex',
rc => rc,
destination => 'scnp.pfa.dk',
transformation => 'TransformDim2JMS_001x');
DBMS_OUTPUT.PUT_LINE('Compatible: '||rc);
END;
If you don't have transformations and/or a remote destination, then delete those params.
Check the table SYS.AQ$_MESSAGE_TYPES; there you can see what has been verified or not.
    regards
    Mette

  • Oracle Streams Update conflict handler not working

    Hello,
I've been working with Oracle Streams, and this time we have to come up with an update conflict handler.
We are using Oracle 11g in a Solaris 10 environment.
So far we have implemented bi-directional Oracle Streams replication, and it is working fine.
Now, when I try to implement the update conflict handler, it executes successfully but does not fulfill the desired functionality.
Here are the steps I performed:
Step 1:
    create table test73 (first_name varchar2(20),last_name varchar2(20), salary number(7));
    ALTER TABLE jas23.test73 ADD (time TIMESTAMP WITH TIME ZONE);
    insert into jas23.test73 values ('gugg','qwer',2000,SYSTIMESTAMP);
    insert into jas23.test73 values ('papa','sdds',2050,SYSTIMESTAMP);
    insert into jas23.test73 values ('jaja','xzxc',2075,SYSTIMESTAMP);
    insert into jas23.test73 values ('kaka','cvdxx',2095,SYSTIMESTAMP);
    insert into jas23.test73 values ('mama','rfgy',1900,SYSTIMESTAMP);
    insert into jas23.test73 values ('tata','jaja',1950,SYSTIMESTAMP);
    commit;
Step 2:
Connect as strmadmin/strmadmin to server1:
    SQL> ALTER TABLE jas23.test73 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Step 3:
    SQL>
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'first_name';
    cols(2) := 'last_name';
    cols(3) := 'salary';
    cols(4) := 'time';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name => 'jas23.test73',
    method_name => 'MAXIMUM',
    resolution_column => 'time',
    column_list => cols);
    END;
Step 4:
Connect as strmadmin/strmadmin to server2:
    SQL>
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'first_name';
    cols(2) := 'last_name';
    cols(3) := 'salary';
    cols(4) := 'time';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name => 'jas23.test73',
    method_name => 'MAXIMUM',
    resolution_column => 'time',
    column_list => cols);
    END;
Step 5:
Now, if I update the value of salary, the change is not handled by the update conflict handler.
    update jas23.test73 set salary = 1500,time=SYSTIMESTAMP where first_name='papa'; --server1
    update jas23.test73 set salary = 2500,time=SYSTIMESTAMP where first_name='papa'; --server2
    commit; --server1
    commit; --server2
Note: the two servers are in different time zones (I hope that won't be a problem).
Now, after performing all these steps, the data is not the same at both sites.
Error (DBA_APPLY_ERROR):
ORA-26787: The row with key ("FIRST_NAME", "LAST_NAME", "SALARY", "TIME") = (papa, sdds, 2000, 23-DEC-10 05.46.18.994233000 PM +00:00) does not exist in table JAS23.TEST73
    ORA-01403: no data found
    Please help.
    Thanks.
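Once the row mismatch is resolved (for example, after correcting the destination row), the parked transactions can be retried; a sketch, with the apply name as a placeholder:
-- Inspect the parked errors, then retry them:
SELECT apply_name, local_transaction_id, error_message
FROM dba_apply_error;
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'YOUR_APPLY_NAME');
END;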

    Hi,
    When i tried to do it on Server-2:
    SQL> ALTER TABLE jas23.test73 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
it throws an error:
    ERROR at line 1:
    ORA-32588: supplemental logging attribute all column exists
