Oracle Streams - How to exclude constraints from the downstream database

Hi,
I have set up Oracle downstream database capture with the rule below.
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'PCAT_NT_QA1',
streams_type => 'CAPTURE',
streams_name => 'DOWNSTRMQA1_CAPTURE',
queue_name => 'STRMADMIN.STRMQA1_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'PCATQA');
END;
BEGIN
DBMS_RULE_ADM.ALTER_RULE (
rule_name => 'PCAT_NBM_QA164',
condition => '((:ddl.get_object_owner() = ''PCAT_NBM_QA1'' or :ddl.get_base_table_owner() = ''PCAT_NBM_QA1'') and
:ddl.is_null_tag() = ''Y'' and
:ddl.get_source_database_name() = ''PCATQA'' and
:ddl.get_object_type() = ''TABLE'') ',
evaluation_context => NULL);
END;
But it still tries to replicate any constraint creation to the downstream database. I only want "column changes" and "truncate table" to go to the downstream as DDL, and nothing else: no indexes, constraints, packages, procedures, etc.
Regards,

Hi,
With my current rule condition, I only want the DDL capture to pick up table structure changes (such as adding or modifying a column) for the downstream database.
How can I do that? Apart from DDL, I am capturing all DML.
Regards,
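A minimal sketch of one possible fix (an untested assumption on my part): DDL rule conditions can call :ddl.get_command_type(), so the condition can be narrowed to just the ALTER TABLE and TRUNCATE TABLE command types.
BEGIN
DBMS_RULE_ADM.ALTER_RULE(
rule_name => 'PCAT_NBM_QA164',
condition => '((:ddl.get_object_owner() = ''PCAT_NBM_QA1'' or
:ddl.get_base_table_owner() = ''PCAT_NBM_QA1'') and
:ddl.is_null_tag() = ''Y'' and
:ddl.get_source_database_name() = ''PCATQA'' and
:ddl.get_command_type() in (''ALTER TABLE'', ''TRUNCATE TABLE''))',
evaluation_context => NULL);
END;
One caveat: ALTER TABLE ... ADD CONSTRAINT also carries the command type 'ALTER TABLE', so constraint DDL issued that way would still match; screening those out would need a DDL handler that inspects the DDL text.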

Similar Messages

  • Advice on implementing oracle streams on RAC 11.2 data warehouse database

    Hi,
    I would like a high-level overview of implementing one-way schema-level replication within the same database using Oracle Streams on a RAC 11.2 data warehouse database.
    Are there any points that should be kept in mind before drafting the implementation plan?
    Please share your thoughts and experiences.
    Thanks in advance
    srh


  • Oracle stream - Downstream

    I am now getting the error below on the downstream database. This constraint shouldn't exist on the downstream side, because I exclude constraints when importing.
    ORA-00001: unique constraint (PCAT_NT.PK01_DCS_CAT_CATINFO) violated
    Is it mandatory to include constraints in an Oracle Streams setup?
    My steps are as below:
    1) Create the downstream capture process
    BEGIN
    DBMS_CAPTURE_ADM.CREATE_CAPTURE (
    queue_name => 'STRMPCAT_QUEUE',
    capture_name => 'DOWNSTRMPCAT_CAPTURE',
    rule_set_name => null,
    start_scn => null,
    source_database => 'PCAT',
    use_database_link => true,
    first_scn => null,
    logfile_assignment => 'IMPLICIT');
    END;
    2) Add schema rule on Downstream database
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => 'PCAT_NT',
    streams_type => 'CAPTURE',
    streams_name => 'DOWNSTRMPCAT_CAPTURE',
    queue_name => 'STRMPCAT.STRMPCAT_QUEUE',
    include_dml => true,
    include_ddl => false,
    source_database => 'PCAT');
    END;
    3) Initializing SCN on Downstream database
    --on source
    expdp system SCHEMAS=PCAT_NT DIRECTORY=DATA_PUMP_DIR DUMPFILE=PCAT_expdp.dmp exclude=user,grant,statistics,synonyms,procedure,constraint FLASHBACK_SCN=7620529799615
    --on target
    impdp system SCHEMAS=PCAT_NT DIRECTORY=DATA_PUMP_DIR DUMPFILE=PCAT_expdp.dmp
    Regards

    Are you stating that the constraint PCAT_NT.PK01_DCS_CAT_CATINFO does not exist on the destination database and that you are getting a constraint violation error on that same database? That strikes me as exceptionally unlikely.
    How do you know that the constraint was not created on the destination database? I would tend to suspect that you created the constraint, potentially unintentionally.
    I would also be interested in why you're getting duplicate rows. If there are no duplicate rows in the source database, duplicates in the destination would imply that you're doing something wrong in the setup (e.g. the SCN in your export is incorrect) or that you have something in the Streams config which is causing duplicates (e.g. a custom DML handler).
    Justin
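    One way to check the first point is a simple dictionary query on the destination (a sketch; the owner and constraint name are taken from the error message above):
    SELECT owner, constraint_name, constraint_type, table_name, status
    FROM dba_constraints
    WHERE owner = 'PCAT_NT'
    AND constraint_name = 'PK01_DCS_CAT_CATINFO';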

  • Reconfigure Oracle Streams in case of a server move from 192 to 191

    Hi All,
    We have a bidirectional Oracle Streams setup between two databases.
    Recently, we moved our server from cin192 to cin191. After the server move we checked all the Streams processes.
    The capture process shows "Waiting for Dictionary Redo: First SCN XXXXXXXXX" on both databases.
    When I checked this SCN, it pointed to a cin192 archive log.
    Can you please help me resolve this issue?
    Do we need to reconfigure Streams, or can we assign a new SCN to the capture process, without dropping anything, using the cin191 server's archive log files?
    In other words, how do we point the capture process at the new server's archive log files?
    Any help would be appreciated. It's urgent, please help.
    Thanks,
    Singh

    Hi Singh,
    If I knew what cin191 and cin192 are, I would probably be able to redirect you to the right forum.
    If you are looking for Oracle streams, I suggest you to try the Database - General forum: General Database Discussions
    If is Oracle replication what you are looking for, please check here: Replication
    This is the Berkeley DB High Availability forum ( http://www.oracle.com/technology/documentation/berkeley-db/db/ref/rep/intro.html )
    Bogdan
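    For what it's worth, a hedged sketch of queries that might help diagnose the "Waiting for Dictionary Redo" state; the thing to check is whether the logs Streams knows about still point at cin192 paths:
    -- which SCN each capture process still needs redo from
    SELECT capture_name, first_scn, required_checkpoint_scn FROM dba_capture;
    -- which archived logs are registered, and where they live
    SELECT name, first_scn, next_scn FROM dba_registered_archived_log;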

  • Oracle Streams roadmap

    I heard that Oracle Streams will not be developed further in future database versions because of GoldenGate. Could you provide any official link that confirms this? I tried Google but can't find any official document that says so.

    Hi;
    Please check the 5th slide of the link below:
    http://www.oracle.com/technetwork/database/features/availability/312833-129009.pdf
    Regards
    Helios

  • How to exclude Oracle job creation during Oracle imp

    Hi Expert,
    I would like to know how to exclude Oracle job creation during an Oracle import. It is a schema export. Thanks.
    Regard
    Liang

    Oracle attempts to reimport job definitions as well. However, if you have an existing job with the same JOB_ID, the import of the job definition fails (as there is a unique constraint on it).
    So, one "workaround" is to precreate dummy jobs before the import (which also means that the database account must be created in advance). To ensure that the JOB_ID is the same, you may have to keep incrementing the JOB_ID sequence.
    Hemant K Chitale
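    A hypothetical sketch of precreating such a dummy job: DBMS_JOB.ISUBMIT lets you pick the JOB id directly instead of walking the sequence (the id 42 and the far-future next_date are illustrative assumptions):
    BEGIN
    DBMS_JOB.ISUBMIT(
    job => 42, -- the JOB id to occupy so the imported definition collides and is skipped
    what => 'NULL;', -- dummy body
    next_date => SYSDATE + 3650, -- far enough away that it never actually runs
    interval => NULL);
    COMMIT;
    END;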

  • How to filter the "delete" command in oracle streams ?

    How to filter the "delete" command in Oracle Streams? (Oracle 9i R2)

    Hello
    Write a procedure that does "nothing" for DELETE transactions, and register it as a plain DML handler (not an error handler), so it runs for every DELETE:
    connect adminstrm/..
    -- DML handler that consumes DELETE LCRs without applying them
    CREATE OR REPLACE PROCEDURE write_delete_lcr (
    message IN SYS.ANYDATA)
    AS
    BEGIN
    NULL; -- do nothing: the DELETE is discarded
    END;
    Then, for your apply process, do:
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'YOUR_SCHEMA.YOUR_TABLE', -- fully qualified table name
    object_type => 'TABLE',
    operation_name => 'DELETE',
    error_handler => false,
    user_procedure => 'write_delete_lcr',
    apply_database_link => NULL);
    END;
    Hafedh KADDACHI
    Oracle DBA

  • Help with Oracle Streams. How to uniquely identify LCRs in queue?

    We are using Streams for data replication in our shop.
    When an error occurs in our processing procedures, the LCR is moved to the error queue.
    The problem we are facing is that we don't know how to uniquely identify LCRs in that queue, so we can run them again once we think the error is corrected.
    LCRs contain an SCN, but as I understand it, the SCN is not unique.
    What is an easy way to keep track of LCRs? Any information is helpful.
    Thanks

    Hi,
    When you correct the data, you will have to execute the failed transactions in order.
    To see what information the apply process has tried to apply, you have to print the LCR. Depending on the size (MESSAGE_COUNT) of the failed transaction, it can make sense to print either the whole transaction or a single LCR.
    To do this you can make use of the procedures print_transaction, print_errors, print_lcr and print_any, documented in:
    Oracle Streams Concepts and Administration
      Chapter - Monitoring Streams Apply Processes
         Section - Displaying Detailed Information About Apply Errors
    These procedures are also available through Note 405541.1 - Procedure to Print LCRs
    To print the whole transaction, you can use the print_transaction procedure; to print the errors in the error queue, you can use print_errors; and to print a single LCR you can do it as follows:
    SET SERVEROUTPUT ON;
    DECLARE
       lcr SYS.AnyData;
    BEGIN
        lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE
                    (<MESSAGE_NUMBER>, <LOCAL_TRANSACTION_ID>);
        print_lcr(lcr);
    END;
    Thanks
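    As a sketch of the identification side (standard views and APIs; the transaction id below is a made-up example): each failed transaction in the error queue is keyed by its LOCAL_TRANSACTION_ID, which you can list and, once the data is fixed, re-execute:
    SELECT apply_name, local_transaction_id, message_count, error_message
    FROM dba_apply_error;
    BEGIN
    DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '13.16.334');
    END;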

  • Oracle stream - Downstream new table setup

    Hi,
    I want to add a new table to my existing Oracle Streams setup. Below are the steps. Is this OK?
    1) Stop apply/capture
    2) Add a new rule to the existing capture, which internally will call DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION too (I guess).
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'NIG.BUILD_VIEWS',
    streams_type => 'CAPTURE',
    streams_name => 'NIG_CAPTURE',
    queue_name => 'STRMADMIN.NIG_Q',
    include_dml => true,
    include_ddl => true,
    source_database => 'PNID.LOUDCLOUD.COM');
    END;
    3) Import (which will instantiate the table)
    impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part2_srm_expdp_%U.dmp exclude=grant,statistics,ref_constraint
    4) start apply/capture

    Have you applied this way? What was the result?
    regards
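    One detail worth double-checking (a hedged sketch, not from the thread): if the import does not carry an instantiation SCN for the new table, it can be set explicitly on the apply side, e.g.:
    DECLARE
    iscn NUMBER;
    BEGIN
    -- read the current SCN at the source over the database link (link name assumed)
    iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@PNID.LOUDCLOUD.COM;
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'NIG.BUILD_VIEWS',
    source_database_name => 'PNID.LOUDCLOUD.COM',
    instantiation_scn => iscn);
    END;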

  • How to exclude tables from Schema level replication

    Hi All,
    I am currently trying to set up Oracle Streams (Oracle 11.1.0.6 on RHEL5) to replicate a schema from one instance to another.
    The basic schema-level replication is working well, copying DDL and DML changes without any problems. However, there are a couple of tables that I need to exclude from the stream due to incompatible datatypes.
    Does anybody have any ideas or notes on how I could achieve this? I have been reading the Oracle documentation and find it difficult to follow and confusing, and I have not found any examples on the internet.
    Thanks heaps.
    Gavin

    When you use SCHEMA level rules for capture and need to skip the replication of a few tables, you create rules in the negative rule set for the table.
    Here is an example of creating table rules in the negative rule set for the capture process.
    begin
    dbms_streams_adm.add_table_rules(
    table_name => 'schema.table_to_be_skipped',
    streams_type => 'CAPTURE',
    streams_name => 'your_capture_name',
    queue_name => 'strmadmin.capture_queue_name',
    include_dml => true,
    include_ddl => true,
    inclusion_rule => false);
    end;
    The table_name parameter identifies the fully qualified table name (schema.table).
    streams_name identifies the capture process to which the rules are to be added.
    queue_name specifies the name of the queue associated with the capture process.
    inclusion_rule => false indicates that the created rules are to be placed in the negative rule set (i.e., skip this table).
    include_dml => true refers to DML changes for the table (i.e., skip DML changes for this table).
    include_ddl => true refers to DDL changes for the table (i.e., skip DDL changes for this table).

  • Oracle stream - first_scn and start_scn

    Hi,
    My first_scn is 7669917207423 and my start_scn is 7669991182403 in the DBA_CAPTURE view.
    Once I start the capture, from which SCN will it begin capturing from the archive logs?
    Regards,

    I am using Oracle Streams on Oracle 10.2.0.4. It's an Oracle downstream setup: both the capture and the apply run on the target database.
    Regards,
    Below is the setup doc.
    1.1 Create the Streams Queue
    conn STRMADMIN
    BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'NIG_Q_TABLE',
    queue_name => 'NIG_Q',
    queue_user => 'STRMADMIN');     
    END;
    1.2 Create apply process for the Schema
    BEGIN
    DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name => 'NIG_Q',
    apply_name => 'NIG_APPLY',
    apply_captured => TRUE);
    END;
    1.3 Setting up parameters for Apply
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'disable_on_error','n');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'parallelism','6');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_dynamic_stmts','Y');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_hash_table_size','1000000');
    exec dbms_apply_adm.set_parameter('NIG_APPLY' ,'_TXN_BUFFER_SIZE',10);
    /********** STEP 2.- Downstream capture process *****************/
    2.1 Create the downstream capture process
    BEGIN
    DBMS_CAPTURE_ADM.CREATE_CAPTURE (
    queue_name => 'NIG_Q',
    capture_name => 'NIG_CAPTURE',
    rule_set_name => null,
    start_scn => null,
    source_database => 'PNID.LOUDCLOUD.COM',
    use_database_link => true,
    first_scn => null,
    logfile_assignment => 'IMPLICIT');
    END;
    2.2 Setting up parameters for Capture
    exec DBMS_CAPTURE_ADM.ALTER_CAPTURE (capture_name=>'NIG_CAPTURE',checkpoint_retention_time=> 2);
    exec DBMS_CAPTURE_ADM.SET_PARAMETER ('NIG_CAPTURE','_SGA_SIZE','250');
    2.3 Add the table level rule for capture
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'NIG.BUILD_VIEWS',
    streams_type => 'CAPTURE',
    streams_name => 'NIG_CAPTURE',
    queue_name => 'STRMADMIN.NIG_Q',
    include_dml => true,
    include_ddl => true,
    source_database => 'PNID.LOUDCLOUD.COM');
    END;
    /**** Step 3: Initializing SCN on Downstream database - start from here *************/
    import
    =================
    impdp system DIRECTORY=DBA_WORK_DIRECTORY DUMPFILE=nig_part1_srm_expdp_%U.dmp table_exists_action=replace exclude=grant,statistics,ref_constraint logfile=NIG1.log status=300
    /********** STEP 4.- Start the Apply process ********************/
    sqlplus STRMADMIN
    exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'NIG_APPLY');
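    On the original first_scn/start_scn question, as a hedged note: the capture process needs redo to be available from first_scn (where the LogMiner dictionary build starts), but it only captures changes from start_scn onwards. Both can be watched with:
    SELECT capture_name, first_scn, start_scn, required_checkpoint_scn, captured_scn
    FROM dba_capture;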

  • How to exclude some tables from schema level replication?

    Hi,
    I am working on Oracle 10g Streams replication.
    My replication type is "Schema Based".
    Can anyone help me understand how to exclude some tables from schema-based replication?
    Thanks,
    Faziarain

    You can use rules and include them in the rule set. Let's say you don't want LCRs to be queued for table TABLE_1 in schema SALES: write two rules, one for DML and another for DDL, with a NOT logical condition.
    DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'admin.sales_not_table_1_dml',
    condition => ' (:dml.get_object_owner() = ''SALES'' AND NOT ' ||
    ' :dml.get_object_name() = ''TABLE_1'') AND ' ||
    ' :dml.is_null_tag() = ''Y'' ');
    DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'admin.sales_not_table_1_ddl',
    condition => ' (:ddl.get_object_owner() = ''SALES'' AND NOT ' ||
    ' :ddl.get_object_name() = ''TABLE_1'') AND ' ||
    ' :ddl.is_null_tag() = ''Y'' ');
    Just go through this document once: http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_rules.htm#i1017376

  • Help on Oracle streams 11g configuration

    Hi Streams experts,
    Can you please validate the following creation process steps?
    What I need Streams to do is one-way replication of the AR schema (FaeterBR in the steps below) from one database to another. Both DML and DDL shall be replicated.
    I would also need your help on the maintenance steps, controls, and procedures.
    2 databases:
    1 src as the source database
    1 dst as the destination database
    Replication type: one-way, of the entire FaeterBR schema
    Step 1. Set all databases in archivelog mode.
    Step 2. Change initialization parameters for Streams. The Streams pool
    size and NLS_DATE_FORMAT require a restart of the instance.
    SQL> alter system set global_names=true scope=both;
    SQL> alter system set undo_retention=3600 scope=both;
    SQL> alter system set job_queue_processes=4 scope=both;
    SQL> alter system set streams_pool_size= 20m scope=spfile;
    SQL> alter system set NLS_DATE_FORMAT=
    'YYYY-MM-DD HH24:MI:SS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup
    Step 3. Create Streams administrators on the src and dst databases,
    and grant required roles and privileges. Create default tablespaces so
    that they are not using SYSTEM.
    ---at the src:
    SQL> create tablespace streamsdm datafile
    '/u01/product/oracle/oradata/orcl/strepadm01.dbf' size 100m;
    ---at the replica:
    SQL> create tablespace streamsdm datafile
    '/u02/oracle/oradata/str10/strepadm01.dbf' size 100m;
    ---at both sites:
    SQL> create user streams_adm
    identified by streams_adm
    default tablespace streamsdm
    temporary tablespace temp;
    SQL> grant connect, resource, dba, aq_administrator_role to
    streams_adm;
    SQL> BEGIN
    DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE (
    grantee => 'streams_adm',
    grant_privileges => true);
    END;
    Step 4. Configure the tnsnames.ora at each site so that a connection
    can be made to the other database.
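    For example, a sketch of the tnsnames.ora entry at SRC pointing to DST (host, port, and service name are placeholders):
    DST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dst-host.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = DST))
      )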
    Step 5. With the tnsnames.ora squared away, create a database link for
    the streams_adm user at both SRC and DST. With the init parameter
    global_name set to True, the db_link name must be the same as the
    global_name of the database you are connecting to. Use a SELECT from
    the table global_name at each site to determine the global name.
    SQL> select * from global_name;
    SQL> connect streams_adm/streams_adm@SRC
    SQL> create database link DST
    connect to streams_adm identified by streams_adm
    using 'DST';
    SQL> select sysdate from dual@DST;
    SQL> connect streams_adm/streams_adm@DST
    SQL> create database link SRC
    connect to streams_adm identified by streams_adm
    using 'SRC';
    SQL> select sysdate from dual@SRC;
    Step 6. Control what schema shall be replicated
    FaeterBR is the schema to be replicated
    Step 7. Add supplemental logging to the FaeterBR schema on all the
    tables?
    SQL> Alter table FaeterBR.tb1 add supplemental log data
    (ALL) columns;
    SQL> alter table FaeterBR.tb2 add supplemental log data
    (ALL) columns;
    etc...
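    Rather than issuing one statement per table by hand, a sketch of a loop over the whole schema (assumes the FaeterBR schema as above):
    SQL> BEGIN
    FOR t IN (SELECT table_name FROM dba_tables WHERE owner = 'FAETERBR') LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE FaeterBR."' || t.table_name ||
    '" ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS';
    END LOOP;
    END;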
    Step 8. Create Streams queues at the primary and replica database.
    ---at SRC (primary):
    SQL> connect streams_adm/streams_adm@ORCL
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_src_queue_table',
    queue_name => 'streams_adm.FaeterBR_src_queue');
    END;
    ---At DST (replica):
    SQL> connect streams_adm/streams_adm@STR10
    SQL> BEGIN
    DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'streams_adm.FaeterBR_dst_queue_table',
    queue_name => 'streams_adm.FaeterBR_dst_queue');
    END;
    Step 9. Create the capture process on the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'capture',
    streams_name =>'FaeterBR_src_capture',
    queue_name =>'FaeterBR_src_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database => NULL,
    inclusion_rule => true);
    END;
    Step 10. Instantiate the FaeterBR schema at DST by doing export/import. Can I use Data Pump for that instead?
    ---AT SRC:
    exp system/superman file=FaeterBR.dmp log=FaeterBR.log
    object_consistent=y owner=FaeterBR
    ---AT DST:
    ---Create FaeterBR tablespaces and user:
    create tablespace FaeterBR_data datafile
    '/u02/oracle/oradata/str10/FaeterBR_01.dbf' size 100G;
    create tablespace ws_app_idx datafile
    '/u02/oracle/oradata/str10/ws_app_idx_01.dbf' size 100G;
    create user FaeterBR identified by FaeterBR
    default tablespace FaeterBR_data
    temporary tablespace temp;
    grant connect, resource to FaeterBR;
    imp system/123db file=FaeterBR.dmp log=FaeterBR.log fromuser=FaeterBR
    touser=FaeterBR streams_instantiation=y
    Step 11. Create a propagation job at the source database (SRC).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name =>'FaeterBR',
    streams_name =>'FaeterBR_src_propagation',
    source_queue_name =>'stream_admin.FaeterBR_src_queue',
    destination_queue_name=>'stream_admin.FaeterBR_dst_queue@dst',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 12. Create an apply process at the destination database (DST).
    SQL> BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name =>'FaeterBR',
    streams_type =>'apply',
    streams_name =>'FaeterBR_Dst_apply',
    queue_name =>'FaeterBR_dst_queue',
    include_dml =>true,
    include_ddl =>true,
    include_tagged_lcr =>false,
    source_database =>'SRC',
    inclusion_rule =>true);
    END;
    Step 13. Create substitution key columns for all the tables of the FaeterBR schema on DST that don't have a primary key.
    The column combination must provide a unique value for Streams.
    SQL> BEGIN
    DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name =>'FaeterBR.tb2',
    column_list =>'id1,names,toys,vendor');
    END;
    Step 14. Configure conflict resolution at the replica database (DST).
    Is there an easier method that applies to the whole schema?
    DECLARE
    cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
    cols(1) := 'id';
    cols(2) := 'names';
    cols(3) := 'toys';
    cols(4) := 'vendor';
    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name =>'FaeterBR.tb2',
    method_name =>'OVERWRITE',
    resolution_column=>'FaeterBR',
    column_list =>cols);
    END;
    Step 15. Enable the capture process on the source database (SRC).
    BEGIN
    DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'FaeterBR_src_capture');
    END;
    Step 16. Enable the apply process on the replication database (DST).
    BEGIN
    DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'FaeterBR_DST_apply');
    END;
    Step 17. Test Streams propagation of rows from the source (SRC) to the replica (DST).
    AT ORCL:
    insert into FaeterBR.tb2 values (
    31000, 'BAMSE', 'DR', 'DR Lejetoej');
    AT STR10:
    connect FaeterBR/FaeterBR
    select * from FaeterBR.tb2 where vendor= 'DR Lejetoej';
    Any other test that can be made?

    Check the metalink doc 301431.1 and validate
    How To Setup One-Way SCHEMA Level Streams Replication [ID 301431.1]
    Oracle Server Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
    Cheers.

  • Oracle streams configuration problem

    Hi all,
    I'm trying to configure Oracle Streams on my source database (Oracle 9.2), and when I execute DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS'); I get the error below:
    ERROR at line 1:
    ORA-01353: existing Logminer session
    ORA-06512: at "SYS.DBMS_LOGMNR_D", line 2238
    ORA-06512: at line 1
    When checking some docs, they said I have to destroy all LogMiner sessions, but when I look at the v$session view I cannot identify any LogMiner session. I need this Streams tooling for schema synchronization between my production database and my data warehouse database.
    What I want to know is how to destroy or stop the LogMiner session.
    Thanks for your help
    regards
    raitsarevo
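    A hedged sketch of how one might look for the leftover session before deleting anything (these are the LogMiner dictionary tables the reply below cleans up; the column layout varies by version, so SELECT * keeps it safe):
    SELECT * FROM system.logmnr_session$;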

    Thanks Werner, my problem is solved now; below is the output of your script.
    While I'm at it: if you have any docs or advice on database schema synchronization, is Oracle Streams the best option, or can I use anything else? Not the Data Guard concept or a standby database, though, because I only want to apply DML changes, not DDL. If you have docs for Oracle Streams, especially for schema synchronization (not individual tables), please send them.
    Many thanks again; if needed, my email address is [email protected]
    ABILLITY>DELETE FROM system.logmnr_uid$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnr_session$;
    1 row deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtcs;
    0 rows deleted.
    ABILLITY>DELETE FROM system.logmnrc_gtlo;
    13 rows deleted.
    ABILLITY>EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');
    PL/SQL procedure successfully completed.
    regards
    raitsarevo

  • How to exclude tables while doing import in Traditional import/export

    Hi OTN,
    Please advise: with Oracle Data Pump we can exclude tables as shown below. If I use traditional export/import, how can I exclude tables?
    impdp tech_test10/[email protected] directory=EAMS04_DP_DIR dumpfile=EXP_DEV6_05092012.dmp logfile=IMP_TECH_TEST10_29MAY.log REMAP_SCHEMA=DEV6:TECH_TEST10 remap_tablespace=DEV6_DATA:DATA_TECH_TEST10 remap_tablespace=DEV6_INDX:INDEX_TECH_TEST10 EXCLUDE=TABLE:\"IN \(\'R5WSMESSAGES_ARCHIVE\',\'R5WSMESSAGES\',\'R5AUDVALUES\'\)\"
    Your suggestions will help me find the way.

    You cannot exclude tables with traditional export/import, but you can use the TABLES parameter to list all the tables that need to be exported.
    exp scott/tiger file=emp.dmp tables=(emp,dept)
    similarly for import
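    For example (a sketch; the TABLES list syntax mirrors exp):
    imp scott/tiger file=emp.dmp tables=(emp,dept)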
