Execution of Row level trigger in Oracle Streams.

Hi All,
Oracle Database version : 9.2.0.4 on windows NT/2000 environment.
We managed to install and configure the Oracle Streams technology.
Oracle Streams seems to be working fine for replication of DML and DDL changes from the source database to the target database.
Following is detail at source end.
Source Sid = acc
Source Schema = stream
Source Table = dept
structure of dept table.
Name Null? Type
DEPTNO NOT NULL NUMBER(5)
DNAME NOT NULL VARCHAR2(10)
LOC NOT NULL VARCHAR2(10)
Streamadmin user = strmadmin
Following is detail at target end.
Target Sid = fin
Target Schema = stream
Target Table = dept
structure of dept table.
Name Null? Type
DEPTNO NOT NULL NUMBER(5)
DNAME NOT NULL VARCHAR2(10)
LOC NOT NULL VARCHAR2(10)
TRAN_DATE                    NULL DATE DEFAULT SYSDATE
I verified that on insert/update/delete of rows in the dept table at the source database, the changes are correctly replicated to the target dept table.
I then wrote the following simple trigger on the dept table at the target database.
create or replace trigger dept_upd_del
before delete or update of dname, loc on stream.dept
for each row
begin
  dbms_output.put_line('Inside Trigger');
  if updating then
    dbms_output.put_line('Update');
    insert into stream.dept_change values (:old.deptno, 'U', sysdate);
  end if;
  if deleting then
    dbms_output.put_line('Delete');
    insert into stream.dept_change values (:old.deptno, 'D', sysdate);
  end if;
end;
/
I expected this trigger to be executed whenever DML changes propagated from the source are applied to the dept table at the target database. However, I found that the trigger is not executed at all.
I was further surprised because, if I update/delete rows in the target dept table directly, the trigger executes correctly.
Can someone please explain this to me?
I believed Streams uses INSERT / UPDATE / DELETE statements when changes are applied to the target table, but this doesn't seem to be the case.
Thanks in Advance.
Regards,
Vidyanand

The trigger at the destination will not fire for the applied changes because, by default, Streams treats triggers as "fire once": the assumption is that the trigger has already fired at the source site. Read about that in the Streams documentation on page 4-25. To change the "fire once" property of the trigger, use the procedure SET_TRIGGER_FIRING_PROPERTY in the DBMS_DDL package.
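For example, a minimal call for the trigger above (assuming it is owned by the STREAM schema at the target; fire_once => FALSE makes the trigger fire for changes made by the apply process as well) would look something like this:
BEGIN
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'STREAM',
    trig_name  => 'DEPT_UPD_DEL',
    fire_once  => FALSE);  -- FALSE: also fire for rows applied by the Streams apply process
END;
/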
Hope this helps.
Claudine

Similar Messages

  • Getting iteration number in row level trigger

    Hi. Is there a way to get the iteration number in a row-level trigger? Or to access data inserted by a statement-level trigger from a row-level trigger (statement-level triggers are supposed to execute before row-level triggers, but I cannot access their data).
    I'm using Oracle 10g.

    My oracle version:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    And the business problem is this:
    I need two log tables for some tables in my database:
    The first log table is a statement-level log. After an insert, update or delete it should get one new row recording the date, time, SID, query type and some additional information.
    The second table should include all columns from the logged table, plus date, time, SID and operation type.
    The problem is that I need exactly the same date and time for each row in both log tables.
    Someone said that SYSDATE should return the same value for the duration of query execution, but that has nothing to do with the triggers fired by the query.
    So you could say I'm looking for a way to get exactly the same date and time in the one statement-level trigger and in each execution of the row-level trigger.
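    One common workaround is to capture SYSDATE once in a before-statement trigger, keep it in a package variable, and reuse it in the row-level trigger. A minimal sketch, assuming a hypothetical table emp and log tables emp_stmt_log and emp_row_log (all names and columns here are placeholders):
    create or replace package log_pkg is
      g_change_time date;  -- one timestamp shared by all triggers of the current statement
    end log_pkg;
    /
    create or replace trigger emp_log_bs
    before insert or update or delete on emp
    begin
      log_pkg.g_change_time := sysdate;  -- fixed once, before any row-level trigger fires
      insert into emp_stmt_log (log_date, sid, op)
      values (log_pkg.g_change_time,
              sys_context('userenv', 'sid'),
              case when inserting then 'I' when updating then 'U' else 'D' end);
    end;
    /
    create or replace trigger emp_log_ar
    after insert or update or delete on emp
    for each row
    begin
      insert into emp_row_log (log_date, sid, empno, op)
      values (log_pkg.g_change_time,  -- same value as in the statement-level log
              sys_context('userenv', 'sid'),
              nvl(:new.empno, :old.empno),
              case when inserting then 'I' when updating then 'U' else 'D' end);
    end;
    /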

  • After update row level trigger help

    Hello,
    I have to update some data in a table. I need to be able to undo the update in case something goes wrong after the update is committed. I decided to keep track of updated rows by inserting the new and old values into an audit table using an UPDATE row-level trigger. Everything is working as I wished, but I am having a problem separating each bulk update with a unique ID. I am not talking about a primary key or an auto-generated sequence key here.
    The audit table stores: the primary key of the table, the old value before the update, the new value after the update, and a timestamp. Now I want to add one more field to the audit table that identifies each bulk UPDATE operation. I tried using the session ID, which works fine, but sometimes I may update two or three times, perhaps around the same time, on the same day, from the same session (the timestamp doesn't help me). In that case each UPDATE operation inserts the same session ID into the audit table, and I won't know which update operation populated which row of the audit table.
    Can somebody tell me how I can resolve this situation? Again, this has to be inside the trigger. Is there any other ID that I can use? I would appreciate your help. Thanks,

    Can you add a statement-level trigger in addition to your row-level trigger? If so, you could do what you want there. In the new trigger, formulate a unique value (such as session id || sysdate) and store it using dbms_application_info.set_module, setting MODULE to that value. Then, in your row-level trigger code, call dbms_application_info.read_module, pull the MODULE value and insert it into your audit table.
    The use of session id || sysdate should be fine (and unique) in this context. You'd just have to know at what time the UPDATE batch you want to undo occurred.
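    A rough sketch of that idea (my_tbl, my_tbl_audit and their columns are placeholders for your actual names; note that this temporarily overwrites whatever MODULE the session had set):
    create or replace trigger my_tbl_bu_stmt
    before update on my_tbl
    begin
      -- one marker per UPDATE statement: session id plus timestamp
      dbms_application_info.set_module(
        module_name => sys_context('userenv', 'sessionid') || '-' ||
                       to_char(sysdate, 'YYYYMMDDHH24MISS'),
        action_name => null);
    end;
    /
    create or replace trigger my_tbl_au_row
    after update on my_tbl
    for each row
    declare
      v_module varchar2(64);
      v_action varchar2(64);
    begin
      dbms_application_info.read_module(v_module, v_action);  -- read the batch marker back
      insert into my_tbl_audit (batch_id, pk_id, old_val, new_val, chg_date)
      values (v_module, :old.id, :old.val, :new.val, sysdate);
    end;
    /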
    By the way, you could use LogMiner to do what I think you're trying to achieve with your trigger code and table entries. Recall that Oracle keeps the undo and redo data for every row that is updated in the redo/archive logs. Using LogMiner, you can find and undo any change from any time. Just like your method, you'd have to know when the "bad" thing occurred in order to find the correct log and "mine" it, but all the functionality is there. There's a somewhat old, but very good, article by Arup Nanda at http://www.oracle.com/technology/oramag/oracle/05-jul/o45dba.html that reviews how it all works. You may want to look at it to see if you can avoid re-inventing the wheel to meet your needs. Just a thought....
    Karen

  • Capturing value in after insert or update row level trigger

    Hi Experts,
    I'm working on EBS 11.5.10 and database 11g. I have trigger A on the wip_discrete_jobs table and trigger B on the wip_requirement_operations table. Whenever I create a discrete job, it inserts a record into both wip_discrete_jobs and wip_requirement_operations.
    Note: the 2 tables have a master-child relationship.
    Trigger A: after-insert, row-level trigger on wip_discrete_jobs
    Trigger B: after-insert, row-level trigger on wip_requirement_operations
    In trigger A:
    I'm capturing wip_entity_id and holding it in a global variable:
    package.variable := :new.wip_entity_id;
    In trigger B:
    I'm using the above global variable.
    Issue: Let's say I create a discrete job whose wip_entity_id is 27, but the global variable is holding the previous wip_entity_id (26), not the current value. It looks like trigger B is already processing before trigger A's event is complete; I think this could be why the current wip_entity_id is not stored in the global variable.
    I need your help to get the current value into the global variable so that I can use it in trigger B.
    Awaiting a response at the earliest.
    Thanks

    798616 wrote:
    Hi Experts,
    I'm working on EBS 11.5.10 and database 11g. I have trigger A on the wip_discrete_jobs table and trigger B on the wip_requirement_operations table. Whenever I create a discrete job, it inserts a record into both wip_discrete_jobs and wip_requirement_operations.
    Note: the 2 tables have a master-child relationship.
    Trigger A: after-insert, row-level trigger on wip_discrete_jobs
    Trigger B: after-insert, row-level trigger on wip_requirement_operations
    In trigger A:
    I'm capturing wip_entity_id and holding it in a global variable:
    package.variable := :new.wip_entity_id;
    In trigger B:
    I'm using the above global variable.
    Issue: Let's say I create a discrete job whose wip_entity_id is 27, but the global variable is holding the previous wip_entity_id (26), not the current value. It looks like trigger B is already processing before trigger A's event is complete; I think this could be why the current wip_entity_id is not stored in the global variable.
    I need your help to get the current value into the global variable so that I can use it in trigger B.
    Awaiting a response at the earliest.
    Thanks
    My head hurts just thinking about how this is being implemented.
    What's stopping you from creating a nice and simple procedure to perform all this magic?
    Continue with the global/trigger magic at your own peril, as you can hopefully already see ... nothing good will come from it.
    Cheers,

  • Row level trigger updating the entire table instead of affected rows

    I am using Oracle 8.1.7. My problem is that I have a row-level trigger that should fire only once (and insert a row into my auditing table), but it is doing this for the entire table. This only happens when I have more than two columns in an UPDATE clause. Has anyone run into this problem before? Any help would be highly appreciated.
    thanks,
    dinesh

    create or replace trigger contact_audit
    before update or delete on contact
    for each row
    declare
      v_audit_type char(1);
      v_audit_item varchar2(64) := 'CONTACT';
      v_acct_seqid number;
    begin
      if inserting then
        v_audit_type := 'I';
      elsif updating then
        v_audit_type := 'U';
      elsif deleting then
        v_audit_type := 'D';
      end if;
      select acct_seqid
        into v_acct_seqid
        from account_contact
       where email = :old.contact_email;
      insert into audit_event ( id, audit_item, audit_type, audit_date, acct_seqid,
        col_1, col_2, col_3, col_4, col_5, col_6, col_7, col_8, col_9, col_10,
        col_11, col_12, col_13, col_14, col_15, col_16, col_17, col_18, col_19, col_20)
      values (audit_event_sq.nextval, v_audit_item, v_audit_type, sysdate, v_acct_seqid,
        :old.contact_email, :old.contact_type, :old.contact_last, :old.contact_first,
        :old.title, :old.address1, :old.address2, :old.address3, :old.city, :old.state,
        :old.zip_code5, :old.zip_code4, :old.zip_code_barcode, :old.country, :old.phone_no,
        :old.phone_ext, :old.receive_info_email_yn, :old.pwd_encrypted, :old.pwd_question, :old.pwd_answer);
    end;

  • Form level v/s item level trigger in oracle forms

    Hello Experts,
    I am new to Oracle Forms. I am using Forms 11g with WebLogic Server 10.3.5 on Windows 7. I am very confused between form-level and item-level triggers. What is the sense of using a when-button-pressed trigger at item level vs. form level? If I have this trigger at form level, how can I check that it fired?
    Thank  You
    regards
    aaditya

    979801 wrote:
    Hello Experts,
    I am new to Oracle Forms. I am using Forms 11g with WebLogic Server 10.3.5 on Windows 7. I am very confused between form-level and item-level triggers. What is the sense of using a when-button-pressed trigger at item level vs. form level? If I have this trigger at form level, how can I check that it fired?
    Thank  You
    regards
    aaditya
    You need to get the concept clear first.
    Form-level trigger: the code applies to all respective items within the form.
    Item-level trigger: the code applies only to the item that has the code.
    Try it in a form and you will see the difference.
    Hamid

  • Problems with row level trigger.

    Hi there. I'm trying to create a trigger for a table ... it's an attempt to audit all changes made to the table data. However, I'm getting an error message for my new and old InterviewDate. The specific message is:
    Error(12,92): PLS-00049: bad bind variable 'NEW.INTERVIEWDATE'
    Error(12,72): PLS-00049: bad bind variable 'OLD.INTERVIEWDATE'
    here's the code:
    CREATE OR REPLACE TRIGGER AUDITCHANGES
    BEFORE INSERT OR DELETE OR UPDATE
    ON WTB
    FOR EACH ROW
    DECLARE
    BEGIN
    IF :new.InterviewDate != :old.InterviewDate THEN
    INSERT INTO WITAudit (DTCHG,FLDNAME, userid, oldval, newval)
    VALUES (SYSDATE, “InterviewDate”, SYS_CONTEXT('USERENV','OS_USER'),:old.InterviewDate, :new.InterviewDate);
    END IF;
    END;
    Can you point me in the right direction?
    Thanks.

    Remove the double quotes from this “InterviewDate”.
    Also, try posting your code between {code} ... {code} tags.
    SS
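    In other words, the word-processor style double quotes need to become ordinary single quotes, since the column name is being inserted as a character value, e.g.:
    VALUES (SYSDATE, 'InterviewDate', SYS_CONTEXT('USERENV','OS_USER'), :old.InterviewDate, :new.InterviewDate);
    If the PLS-00049 errors remain after that, it is also worth confirming that WTB really has an INTERVIEWDATE column, since that error generally means a :OLD/:NEW bind refers to a column that does not exist on the triggering table.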

  • Confusion in Order of row and statement level trigger

    Hi. Can anyone tell me: if I create triggers on table emp in the order below,
    BEFORE INSERT .. ROW LEVEL
    BEFORE INSERT .. STATEMENT LEVEL
    AFTER INSERT .. ROW LEVEL
    AFTER INSERT .. STATEMENT LEVEL
    then what will be the order of execution of the triggers?
    How will Oracle decide the order?
    Please point me to some documents about trigger execution order. Thanks in advance!

    PC wrote:
    Hi. I got the answer about the order, but one point feels strange to me: for BEFORE INSERT, the statement-level trigger fires first and then the row-level trigger, but for AFTER INSERT, the row-level trigger fires first and then the statement-level trigger. Can you explain this as well?
    Why is it strange?
    You've got a statement you are executing.
    The first thing possible is that you are 'before' the statement.
    The next thing possible is that the statement executes for each row.
    Thus, for each row, there is a point 'before' each row and a point 'after' each row.
    Once the statement has executed, you are 'after' the statement.
    So it is only logical that the statement triggers surround the statement and the row triggers sit within the statement, and of course 'before' comes before 'after'.
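    A quick way to see the order for yourself (a throwaway sketch using a scratch table t):
    create table t (n number);
    create or replace trigger t_bs before insert on t
    begin dbms_output.put_line('BEFORE STATEMENT'); end;
    /
    create or replace trigger t_br before insert on t for each row
    begin dbms_output.put_line('  BEFORE EACH ROW'); end;
    /
    create or replace trigger t_ar after insert on t for each row
    begin dbms_output.put_line('  AFTER EACH ROW'); end;
    /
    create or replace trigger t_as after insert on t
    begin dbms_output.put_line('AFTER STATEMENT'); end;
    /
    set serveroutput on
    insert into t select rownum from all_objects where rownum <= 2;
    -- prints BEFORE STATEMENT once, then BEFORE/AFTER EACH ROW per row, then AFTER STATEMENT once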

  • Using "v" function in row-level triggers

    I have row-level triggers that do
    l_user := nvl(v('APP_USER'),user);
    to get the user that is running the app and record that in the table or wherever.
    Since it is a row-level trigger, the "v" function will be called for each and every row even though it returns the same value every time.
    Is there a way to avoid this repeated execution of the v function?
    Thanks

    To avoid calls to the V function, put the values in hidden database text items on the HTML DB page and let the normal HTML DB insert/update process pass them to your triggers. You then refer to their values in the table triggers with the ":new" syntax. Done this way, you remove the V function invocations from your table triggers, so the triggers do not have to determine from what environment they were fired.
    Example: In our applications we track the user ID and date/time for row creation and last row modification. The HTML DB page has a default value of :APP_USER for the row-creation user ID text item and a default of TO_CHAR(SYSDATE,'DD-MON-RR') for the row-creation date text item. It also has conditional computations with the same values for the last-modification user ID and last-modification date.
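    The corresponding trigger then only needs a fallback, along these lines (my_tab and the created_by/created_on columns are placeholders for whatever your page items map to):
    create or replace trigger my_tab_bi
    before insert on my_tab
    for each row
    begin
      -- values normally arrive from the page items via :new; fall back only when they are null
      :new.created_by := nvl(:new.created_by, nvl(v('APP_USER'), user));
      :new.created_on := nvl(:new.created_on, sysdate);
    end;
    /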

  • Column level trigger

    Hi All -
    I have a table which has 6 columns
    create table main_tbl
    (p_id integer,
    p_lname varchar2(20),
    p_fname varchar2(20),
    p_dept varchar2(15),
    p_office varchar2(15),
    p_ind char(1)
    );
    And I have a corresponding audit/history table
    create table main_tbl_audit
    (p_id integer,
    p_lname varchar2(20),
    p_fname varchar2(20),
    p_dept varchar2(15),
    p_office varchar2(15),
    p_ind char(1)
    );
    As part of the application audit process, I want to move a record from main_tbl to main_tbl_audit only when a column value changes. I can do this using a column-level trigger, but if I use a column-level trigger and, for example, 3 values are changed in main_tbl at once, then it will create 3 different rows in the main_tbl_audit table.
    Is there a way to always create one row in main_tbl_audit even if a record in main_tbl has one or more column value changes?
    Please share your expertise.
    Thanks,
    Seenu001

    I'm not quite sure what you mean by a "column-level trigger" since there is no such thing in Oracle. You can specify a list of columns in the OF clause of a row-level trigger, so I'm guessing that's what you're talking about. But then I don't understand why you would get three rows in the audit table. Unless you created three different row-level triggers each of which specified a single column?
    Why wouldn't you simply have a single row-level trigger that compared the old and new values, i.e. (ignoring NULLs)
    CREATE OR REPLACE TRIGGER trg_audit_main
      BEFORE UPDATE ON main_tbl
      FOR EACH ROW
    BEGIN
      IF( :new.p_lname  != :old.p_lname or
          :new.p_fname  != :old.p_fname or
          :new.p_dept   != :old.p_dept or
          :new.p_office != :old.p_office or
          :new.p_ind    != :old.p_ind )
      THEN
        INSERT INTO main_tbl_audit ...
      END IF;
    END;
    Justin
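    If NULLs also need to be caught, one null-safe variant of the same idea (just a sketch; NVL maps NULL to a sentinel value that must never occur in the real data, so changes to or from NULL are detected as well) could be:
    CREATE OR REPLACE TRIGGER trg_audit_main
      BEFORE UPDATE ON main_tbl
      FOR EACH ROW
    BEGIN
      IF nvl(:old.p_lname,  chr(0)) != nvl(:new.p_lname,  chr(0))
      OR nvl(:old.p_fname,  chr(0)) != nvl(:new.p_fname,  chr(0))
      OR nvl(:old.p_dept,   chr(0)) != nvl(:new.p_dept,   chr(0))
      OR nvl(:old.p_office, chr(0)) != nvl(:new.p_office, chr(0))
      OR nvl(:old.p_ind,    chr(0)) != nvl(:new.p_ind,    chr(0))
      THEN
        -- archive the pre-update image in the audit table
        INSERT INTO main_tbl_audit (p_id, p_lname, p_fname, p_dept, p_office, p_ind)
        VALUES (:old.p_id, :old.p_lname, :old.p_fname, :old.p_dept, :old.p_office, :old.p_ind);
      END IF;
    END;
    /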

  • Row level constraint

    Is there any row-level constraint in Oracle?
    If it is possible, please send me the statement for that constraint.
    Or, how can I implement row-level security?
    Best wishes

    Duplicate ... let's stick with the other one in this forum.

  • Granularity of change rules in CDC (using Oracle streams) at columns level.

    Is it possible to implement granularity of change rules in CDC (using Oracle Streams) at the column level?
    E.g. table abc with columns a1, b1, c1, where c1 is some kind of auditing column. I want to use CDC on table abc but want to ignore changes to c1.
    Is it possible? My other option is to split table abc into a child table that holds only the primary key and c1, but that needs an additional table and joins.
    Thanks
    Shyam

    The requirement can be implemented by a simple trigger (a sketch follows below).
    You seem to plan to kill a mosquito by using a nuclear bomb.
    Sybrand Bakker
    Senior Oracle DBA
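    For the example given (log changes to abc but ignore updates that touch only c1), a minimal trigger-based sketch could look like the following, assuming a change-log table abc_chg(a1, change_type, change_date) that you create yourself and that a1 is the key column:
    create or replace trigger abc_cdc_trg
    after insert or update of a1, b1 or delete on abc   -- no "of c1": updates that set only c1 do not fire it
    for each row
    begin
      if inserting then
        insert into abc_chg values (:new.a1, 'I', sysdate);
      elsif updating then
        insert into abc_chg values (:new.a1, 'U', sysdate);
      else
        insert into abc_chg values (:old.a1, 'D', sysdate);
      end if;
    end;
    /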

  • How to check the row level security in TOAD for oracle

    Hi,
    For example, I have 2 types of users: a normal user and a super user.
    The super user can see the group set (some column name) created by the normal user, but the normal user cannot see the set created by the super user.
    This set creation also has 3 types: 'U', 'P', 'S'.
    'P' and 'S' can be viewed even by the normal user, but 'U' should not be.
    So here we have some row-level security for the normal user.
    So, in TOAD for Oracle, how do I check that?
    Let me know if I'm not clear.

    Like this:
    I'm the super user, and some records are inserted into a table by different users ('a', 'b', etc.).
    So, if user 'a' logs in, he should be able to see only the records inserted by 'a'.
    How do I see in TOAD where such scripts (filter conditions) are written?
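    If the filtering is implemented with Oracle's built-in row-level security (VPD / DBMS_RLS policies), the filter conditions are generated by policy functions rather than stored as plain scripts. From any SQL editor window in TOAD you can locate them with a dictionary query (assuming you have access to the DBA_ views; otherwise try ALL_POLICIES):
    select *
    from   dba_policies
    order  by object_owner, object_name;
    The PF_OWNER, PACKAGE and FUNCTION columns point to the function that builds the WHERE-clause fragment for each table; open that function to see the actual filter condition. If nothing shows up there, the filtering is probably done in application code or views rather than with database-level row security.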

  • Mutating error : row level BEFORE UPDATE trigger

    Hi,
    I had an issue with a mutating table error (I was trying to write a row-level BEFORE UPDATE trigger); however, I got it resolved after referring to Tom Kyte's web site. I thought I would share it with everyone, as it might be helpful for a few. Thanks!
    I will be more than happy to learn of further, better ways of resolving this issue.
    Below I have posted the trigger that was causing this error and the workaround for the issue. I created a package and three other triggers to replace the row-level BEFORE UPDATE trigger:
    ++trigger that was causing this issue:++
    CREATE OR REPLACE TRIGGER C_F_BI
    BEFORE INSERT ON CONTACT_FUNCTION FOR EACH ROW
    declare
    cursor function_code_cur ( cur_contact_id CONTACT.Contact_Id%type,
    cur_function_type_code CONTACT_FUNCTION.Function_Type_Code%type)
    is
    select
    cft.function_type_code,
    cft.multiples_permitted
    from
    CONTACT_FUNCTION cf,
    CONTACT c,
    CONTACT_FUNCTION_TYPE cft
    where
    c.acct_id = (select acct_id from contact where contact_id = cur_contact_id)
    and c.contact_id = cf.contact_id
    and cf.function_type_code = cft.function_type_code
    and cft.function_type_code = cur_function_type_code;
    v_function_type_code contact_function_type.function_type_code%type;
    v_multiples_permitted contact_function_type.multiples_permitted%type;
    E_Multiples_Not_Permitted Exception;
    begin
    if not function_code_cur%isopen then
    open function_code_cur(:new.contact_Id,:new.function_type_code);
    end if;
    loop
    fetch function_code_cur into v_function_type_code, v_multiples_permitted;
    exit when not function_code_cur%found;
    end loop;
    -- if the fetch returns a v_multiples_permitted of 'Y' or no record is found, then it is
    -- ok to add the record. Otherwise raise an error because that function_type is already
    -- being used at the current acct.
    if v_multiples_permitted = 'N' then
    raise E_Multiples_Not_Permitted;
    end if;
    close function_code_cur;
    EXCEPTION
    when E_Multiples_Not_Permitted then
    raise_application_error( -20001,'Multiples not allowed for function type ' ||
    v_function_type_code || '. sqlerrm - ' || sqlerrm );
    when others then
    raise;
    end;
    ++solution for above issue :++
    create or replace package state_pkg
    is
    type ridArray_rec is record(rid rowid, cont_id number, cont_fn_type varchar2(20));
    type ridArray is table of ridArray_rec index by binary_integer;
    newRows ridArray;
    empty ridArray;
    end state_pkg;
    create or replace trigger contact_function_bu1
    before update on contact_function
    begin
    state_pkg.newRows := state_pkg.empty;
    end;
    create or replace trigger contact_function_bu2
    before update ON CONTACT_FUNCTION FOR EACH ROW
    DECLARE
    I NUMBER:=0;
    begin
    I := state_pkg.newRows.count+1;
    state_pkg.newRows( I ).rid := :new.rowid;
    state_pkg.newRows( I ).cont_id := :new.contact_id;
    state_pkg.newRows( I ).cont_fn_type := :new.function_type_code;
    end;
    create or replace trigger contact_function_bu
    after update ON CONTACT_FUNCTION
    declare
    cursor function_code_cur ( cur_contact_id CONTACT.Contact_Id%type,
    cur_function_type_code CONTACT_FUNCTION.Function_Type_Code%type,
    rid2 rowid)
    is
    select
    cft.function_type_code,
    cft.multiples_permitted
    from
    CONTACT_FUNCTION cf,
    CONTACT c,
    CONTACT_FUNCTION_TYPE cft
    where
    c.acct_id = (select acct_id from contact where contact_id = cur_contact_id)
    and c.contact_id = cf.contact_id
    and cf.function_type_code = cft.function_type_code
    and cft.function_type_code = cur_function_type_code
    and cf.rowid = rid2;
    v_function_type_code contact_function_type.function_type_code%type;
    v_multiples_permitted contact_function_type.multiples_permitted%type;
    E_Multiples_Not_Permitted Exception;
    begin
    for i in 1 .. state_pkg.newRows.count loop
    if not function_code_cur%isopen then
    open function_code_cur(state_pkg.newRows(i).cont_id, state_pkg.newRows(i).cont_fn_type, state_pkg.newRows(i).rid);
    end if;
    loop
    fetch function_code_cur into v_function_type_code, v_multiples_permitted;
    exit when not function_code_cur%found;
    end loop;
    -- if the fetch returns a v_multiples_permitted of 'Y' or no record is found, then it is
    -- ok to add the record. Otherwise raise an error because that function_type is already
    -- being used at the current acct.
    if v_multiples_permitted = 'N' then
    raise E_Multiples_Not_Permitted;
    end if;
    close function_code_cur;
    end loop;
    EXCEPTION
    when E_Multiples_Not_Permitted then
    raise_application_error( -20001,'Multiples not allowed for function type ' ||
    v_function_type_code || '. sqlerrm - ' || sqlerrm );
    when others then
    raise;
    end;
    /

    It seems you could have solved the issue otherwise:
    CREATE OR REPLACE TRIGGER c_f_bi
       BEFORE INSERT
       ON contact_function
       FOR EACH ROW
    DECLARE
       v_function_type_code        contact_function_type.function_type_code%TYPE;
       v_multiples_permitted       contact_function_type.multiples_permitted%TYPE;
       e_multiples_not_permitted   EXCEPTION;
    BEGIN
       BEGIN
          SELECT cft.function_type_code, cft.multiples_permitted
            INTO v_function_type_code, v_multiples_permitted
            FROM contact c, contact_function_type cft
           WHERE c.acct_id = (SELECT acct_id
                                FROM contact
                               WHERE contact_id = :NEW.contact_id)
             AND c.contact_id = :NEW.contact_id
             AND cft.function_type_code = :NEW.function_type_code;
       EXCEPTION
          WHEN NO_DATA_FOUND
          THEN
             v_function_type_code := :NEW.function_type_code;
             v_multiples_permitted := '?';
       END;
    -- if the query returns a v_multiples_permitted of 'Y' or no record is found, then it is
    -- ok to add the record. Otherwise raise an error because that function_type is already
    -- being used at the current acct.
       IF v_multiples_permitted = 'N'
       THEN
          RAISE e_multiples_not_permitted;
       END IF;
    EXCEPTION
       WHEN e_multiples_not_permitted
       THEN
          raise_application_error (-20001,
                                      'Multiples not allowed for function type '
                                   || v_function_type_code
                                   || '. sqlerrm - '
                                   || SQLERRM);
       WHEN OTHERS
       THEN
          RAISE;
    END;
    /

  • Help on Oracle streams 11g configuration

    Hi Streams experts
    Can you please validate the following creation process steps ?
    What I need Streams to do is one-way replication of the AR schema from
    one database to another database. Both DML and DDL changes shall be
    replicated.
    I would also need your help with the maintenance steps, controls and
    procedures.
    2 databases
    1 src as source database
    1 dst as destination database
    Replication type: one-way, of the entire FaeterBR schema
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace strepadm01
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_name set to True, the db_link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to stream_admin identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control what schema shall be replicated
    FaeterBR is the schema to be replicated
    Step 7. Add supplemental logging to the FaeterBR schema on all the
    tables?
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect stream_admin/stream_admin@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src__queue');
    END;
    ---At DST (replica):
    SQL> connect stream_admin/stream_admin@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'stream_admin.FaeterBR_dst_queue_table',
    queue_name => 'stream_admin.FaeterBR_dst_queue');
    END;
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    Step 10. Instantiate the FaeterBR schema at DST by doing export/import.
    Can I use Data Pump to do that instead?
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create FaeterBR tablespaces and user:
    create tablespace FaeterBR_datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR_
    default tablespace FaeterBR_
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR_.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'stream_admin.FaeterBR_src_queue',
    destination_queue_name=>'stream_admin.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 13. Create substitution key columns for all the tables of the
    FaeterBR schema on DST that do not have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
    Step 14. Configure conflict resolution at the replication db (DST).
    Is there any easier method applicable to the whole schema?
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'FaeterBR',
    column_list =>cols);
    END;
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    Step 17. Test streams propagation of rows from source (src) to
    replication (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
    Any other test that can be made?

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.
