Schema Level Problem!

Hi,

I have successfully configured schema-level replication, but I have run into a few issues. Let me explain my scenario:

1: Site1 has an Oracle 10g DB with two schema-level processes: capture and propagation.
2: Site2 has an Oracle 10g DB with two schema-level processes: capture and propagation.
3: Central_DB has an Oracle 10g DB with two apply processes, one for Site1 and one for Site2.

The plan is to connect a total of 18 branches to our Central_DB in the future.
From all sites to Central_DB we will configure schema-level replication using Streams (all sites have the same schema; DML + DDL).
From Central_DB back to all sites we will configure table-level, rule-based replication (DML only).

My questions are:

1: Is this a good model? Please advise.
2: When Site1 makes DML or DDL changes locally, the changes are captured by the capture process,
propagated from the source queue (Site1) to the destination queue (Central_DB), and
applied by the apply process at Central_DB.
     > If Site1 is making changes and Central_DB is shut down and restarted, the changes are then applied from Site1 to Central_DB (normal behaviour).
     > If Site1 is making changes while Central_DB is shut down, Site1 keeps making changes locally, and after a few minutes Site1 is also shut down, then Site1 and Central_DB are both down.
     > After one day, both machines are up and running again, but the propagation process reports connection errors and no further changes are replicated, even though the status of the processes is ENABLED.

ERROR:
     ORA-12545: Connect failed because target host or object does not exist

Please assist me in troubleshooting why no further changes are replicated.

Thanks,
Fazi

That would happen if an IP or DNS lookup (depending on what is specified as HOST in the tnsnames.ora entry) is failing.
Possibly the lookups before the servers went down were resolved via DNS and the DNS server is no longer available (has it gone down?), or someone has recreated the hosts file and the earlier entry for the target host is missing.
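
Once name resolution works again (for example, tnsping of the alias from the source host succeeds), it is worth checking whether the propagation schedule has accumulated failures and, if so, restarting it. A minimal sketch, assuming the propagation is called PROP_SITE1_TO_CENTRAL (a placeholder name) and you are connected as the Streams administrator on Site1:

-- Any recorded propagation error? (view columns vary slightly by release)
SELECT propagation_name, status, error_message
FROM   dba_propagation;

-- Failure count per schedule; after 16 consecutive failures a schedule is disabled
SELECT qname, destination, failures, last_error_msg
FROM   dba_queue_schedules;

-- Restarting the propagation clears the failure counter and resumes sending
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(propagation_name => 'PROP_SITE1_TO_CENTRAL');
  DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'PROP_SITE1_TO_CENTRAL');
END;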

Similar Messages

  • How to exclude tables from Schema level replication

    Hi All,
    I am currently trying to setup Oracle Streams (Oracle 11.1.0.6 on RHEL5) to replicate a schema from one instance to another.
    The basic schema level replication is working well, and copying DDL and DML changes without any problems. However there are a couple of tables that I need to exclude from the stream, due to incompatible datatypes.
    Does anybody have any ideas or notes on how I could achieve this? I have been reading the Oracle documentation and find it difficult to follow and confusing, and I have not found any examples on the internet.
    Thanks heaps.
    Gavin

    When you use SCHEMA level rules for capture and need to skip the replication of a few tables, you create rules in the negative rule set for the table.
    Here is an example of creating table rules in the negative rule set for the capture process.
    begin
    dbms_streams_adm.add_table_rules(
    table_name => 'schema.table_to_be_skipped',
    streams_type => 'CAPTURE',
    streams_name => 'your_capture_name',
    queue_name => 'strmadmin.capture_queue_name',
    include_dml => true,
    include_ddl => true,
    inclusion_rule => false);
    end;
    The table_name parameter identifies the fully qualified table name (schema.table).
    streams_name identifies the capture process for which the rules are to be added.
    queue_name specifies the name of the queue associated with the capture process.
    inclusion_rule => false indicates that the created rules are to be placed in the negative rule set (i.e., skip this table).
    include_dml => true covers DML changes for the table (i.e., skip DML changes for this table).
    include_ddl => true covers DDL changes for the table (i.e., skip DDL changes for this table).
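    To confirm the rules landed where intended, you can query DBA_STREAMS_RULES afterwards. A minimal sketch (the capture name is whatever you used above; column names are per 10.2/11.1):
    -- Rules in the NEGATIVE rule set are the ones that skip objects
    SELECT rule_name, rule_set_type, streams_type, streams_name, schema_name, object_name
    FROM   dba_streams_rules
    WHERE  streams_name = 'YOUR_CAPTURE_NAME'
    ORDER  BY rule_set_type, rule_name;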

  • Ask about DML Handler for Streams at the Schema level ?

    Hi all !
    I use Oracle version 10.2.0.
    I have two DB is A (at machine A, and it used as source database) and B (at machine B - destination database). Some changes from A will apply to B.
    At B, I installed the Oracle client and used the EMC (Enterprise Manager Console) tool to generate some scripts, which I then used to configure the Streams environment. I configured Streams at the schema level (DML and DDL) and it worked. But I have two problems:
    + I wrote a DML handler called "emp_dml_handler" and want to set it on the EMP table only. So must I also call DBMS_STREAMS_ADM.ADD_TABLE_RULES? (I configured DBMS_STREAMS_ADM.ADD_SCHEMA_RULES), such as:
    BEGIN
    DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name => '"HOSE"',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_BOSCHOSE_REGRES',
    queue_name => 'apply_dest_hose',
    include_dml => true,
    include_ddl => true,
    source_database => 'DEVELOP.REGRESS.RDBMS.DEV.US.ORACLE.COM');
    END;
    and after:
    DECLARE
    emp_rule_name_dml VARCHAR2(50);
    emp_rule_name_ddl VARCHAR2(50);
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name => 'HOSE.EMP',
    streams_type => 'APPLY',
    streams_name => 'STRMADMIN_BOSCHOSE_REGRES',
    queue_name => 'apply_dest_hose',
    include_dml => true,
    include_ddl => true,
    source_database => 'DEVELOP.REGRESS.RDBMS.DEV.US.ORACLE.COM',
    dml_rule_name => emp_rule_name_dml,
    ddl_rule_name => emp_rule_name_ddl);
    DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name => emp_rule_name_dml,
    destination_queue_name => 'apply_dest_hose');
    END;
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'HOSE.EMP',
    object_type => 'TABLE',
    operation_name => 'UPDATE',
    error_handler => false,
    user_procedure => 'strmadmin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name => NULL);
    END;
    ... similar for INSERT and DELETE...
    I think I should only configure Streams at the schema level and exclude the EMP table; am I right?
    + At the source, the EMP table has a primary key, and I configured:
    ALTER TABLE HOSE.EMP ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    ==> So, at the destination, is there any work I must do to configure a substitute key for the EMP table?
    Any ideas on these problems?
    Thanks
    Edited by: changemylife on Sep 24, 2009 10:45 PM

    If you want to exclude EMP from the schema rule, then just add a negative rule, either on the capture or on the apply.
    What is the purpose of :
    DBMS_APPLY_ADM.SET_ENQUEUE_DESTINATION(
    rule_name => emp_rule_name_dml,
    destination_queue_name => 'apply_dest_hose');
    It sounds like you are enqueuing into 'apply_dest_hose' all the rows for this table that come from ... 'apply_dest_hose'.
    Next you declare a DML_HANDLER that is attached to nobody :
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'HOSE.EMP',
    object_type => 'TABLE',
    operation_name => 'UPDATE',
    error_handler => false,
    user_procedure => 'strmadmin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name => NULL);           <----- nobody rules the world!
    END;
    The sequence of evaluation is normally:
    APPLY_PROCESS (reader)
              |
              | -->  RULE SET
                          |
                          | --> RULE .....
                          | --> RULE
                                     |
                                     | --> evaluates OK --> does a DML_HANDLER exist?
                                               |                    |
                                               | NO                 | YES --> call the DML_HANDLER --> on LCR.EXECUTE, hand over to the coordinator
                                               |
                                          implicit apply (the LCR is given to the coordinator, which dispatches it to one apply server)
    Since your dml_handler is attached to a NULL apply process, it will never be called by anybody, and your LCRs for table EMP will be implicitly applied by the apply process.
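    A minimal sketch of attaching the handler to the actual apply process instead (the apply name is taken from the rules above; adjust to your environment):
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'HOSE.EMP',
    object_type => 'TABLE',
    operation_name => 'UPDATE',
    error_handler => false,
    user_procedure => 'strmadmin.emp_dml_handler',
    apply_database_link => NULL,
    apply_name => 'STRMADMIN_BOSCHOSE_REGRES');  -- the apply process that should call the handler
    END;
    With apply_name set, the handler is invoked only by that apply process; repeat for INSERT and DELETE as in your original script.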

  • EXPORT at schema level, but exclude some tables within the export

    I have been searching, but had no luck in finding the correct syntax for my situation.
    I'm simply trying to export at the schema level, but I want to omit certain tables from the export.
    exp cltest/cltest01@clprod file=exp_CLPROD092508.dmp log=exp_CLPROD092508.log statistics=none compress=N
    Thanks!

    Hi,
    Think simple first: you can use the TABLES clause.
    Example:
    exp scott/tiger file=empdept.expdat tables=(EMP,DEPT) log=empdept.log
    That works when your schema contains only a small number of tables.
    If you have a large number of tables (hundreds) in your schema, an alternative workaround is:
    Create a new schema and, using CTAS, create in it the tables that are to be skipped from the current schema.
    Do the export, and once the job is done, recreate them from the new schema
    and import into the destination DB.
    - Pavan Kumar N
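    If Data Pump is available (10g and later), it can exclude specific tables directly. A minimal sketch, with the schema, table names, directory and file names as placeholders:
    expdp cltest/cltest01@clprod schemas=CLTEST directory=DATA_PUMP_DIR dumpfile=exp_CLPROD.dmp logfile=exp_CLPROD.log exclude=TABLE:"IN ('TABLE_TO_SKIP1','TABLE_TO_SKIP2')"
    Quoting of the EXCLUDE clause is shell-dependent; putting it in a parfile avoids the escaping issues.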

  • Schema Level Bidirectional streams - works only in one direction

    Hi All,
    we are implementing bidirectional streams at schema level(using scott schema for testing).
    Our environment and different parameters are:
    Source:
    OS =Win2003 64bit
    DB Version= 10.2.0.5.0 64bit
    DB SID=CIBSPROD
    log_archive_dest_1 LOCATION=E:\DBFILES\ArchiveLog\CIBSPROD\PrimaryRole VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=CIBSPROD
    log_archive_dest_2 SERVICE=CIBSREP LGWR ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CIBSREP
    job_queue_processes=2
    Dest:
    OS =Win2003 64bit
    DB Version= 10.2.0.5.0 64bit
    DB SID=CIBSREP
    log_archive_dest_1 LOCATION=E:\DBFILES\ArchiveLog\CIBSREP\PrimaryRole VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=CIBSREP
    log_archive_dest_2 SERVICE=CIBSPROD LGWR ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CIBSPROD
    job_queue_processes=2
    We followed the "Streams Bi-Directional Setup [ID 471845.1]" article.
    The problem we are facing is that changes propagate from Source (CIBSPROD) to Destination (CIBSREP) but NOT from Destination to Source (although archived logs are shipping from Destination to Source).
    Executed below script for configuration:
    SET ECHO ON
    SPOOL strm-reconfig-scott.out
    conn sys/&sys_pwd_source@CIBSPROD as sysdba
    EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
    DROP USER STRMADMIN CASCADE;
    CREATE USER strmadmin IDENTIFIED BY strmadmin DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs;
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    EXECute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
    drop database link CIBSREP;
    create public database link CIBSREP connect to strmadmin identified by strmadmin using 'CIBSREP';
    conn sys/&sys_pwd_downstream@CIBSREP as sysdba
    EXEC DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
    DROP USER STRMADMIN CASCADE;
    CREATE USER strmadmin IDENTIFIED BY strmadmin DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs;
    GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
    EXECute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
    drop user SCOTT CASCADE;
    create user SCOTT identified by scott175;
    GRANT CONNECT, RESOURCE to SCOTT;
    drop database link CIBSPROD;
    create public database link CIBSPROD connect to strmadmin identified by strmadmin using 'CIBSPROD';
    --Set up 2 queues for Capture and apply in CIBSPROD Database
    conn strmadmin/strmadmin@CIBSPROD
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'APPLY_CIBSPROD_TAB', queue_name => 'APPLY_CIBSPROD', queue_user => 'strmadmin'); 
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'CAPTURE_CIBSPROD_TAB', queue_name => 'CAPTURE_CIBSPROD', queue_user => 'strmadmin');
    --Set up 2 queues for Capture and apply in CIBSREP Database
    conn strmadmin/strmadmin@CIBSREP
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'APPLY_CIBSREP_TAB', queue_name => 'APPLY_CIBSREP', queue_user => 'strmadmin'); 
    EXEC DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_table => 'CAPTURE_CIBSREP_TAB', queue_name => 'CAPTURE_CIBSREP', queue_user => 'strmadmin');
    --Configure capture,apply and propagation process on CIBSPROD database.
    conn strmadmin/strmadmin@CIBSPROD
    EXEC dbms_streams_adm.add_schema_rules ( schema_name => 'scott', streams_type => 'CAPTURE', streams_name => 'CAPTURE_CIBSPROD', queue_name => 'CAPTURE_CIBSPROD', include_dml => true, include_ddl => true, inclusion_rule => true);
    EXEC dbms_streams_adm.add_schema_rules ( schema_name => 'scott', streams_type => 'APPLY', streams_name => 'APPLY_CIBSPROD', queue_name => 'APPLY_CIBSPROD', include_dml => true, include_ddl => true, source_database => 'CIBSREP');
    EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSPROD', parameter => 'PARALLELISM', value => '5');
    EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSPROD', parameter => '_HASH_TABLE_SIZE', value => '10000000');
    EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSPROD', parameter => 'TXN_LCR_SPILL_THRESHOLD', value => '1000000');
    EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSPROD', parameter => 'DISABLE_ON_ERROR', value => 'N');
    EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSPROD', parameter => '_dynamic_stmts',value => 'Y');
    EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSPROD', parameter => 'COMMIT_SERIALIZATION',value => 'NONE');
    EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSPROD', parameter => '_RESTRICT_ALL_REF_CONS',value => 'N');
    EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSPROD', parameter => 'ALLOW_DUPLICATE_ROWS',value => 'Y');
    EXEC dbms_streams_adm.add_schema_propagation_rules ( schema_name => 'scott', streams_name => 'PROP_CIBSPROD_to_CIBSREP', source_queue_name => 'CAPTURE_CIBSPROD', destination_queue_name => 'APPLY_CIBSREP@CIBSREP', include_dml => true, include_ddl => true, source_database => 'CIBSPROD');
    --Configure capture, apply and propagation processes on CIBSREP Database
    conn strmadmin/strmadmin@CIBSREP
    EXEC dbms_streams_adm.add_schema_rules ( schema_name => 'scott', streams_type => 'CAPTURE', streams_name => 'CAPTURE_CIBSREP', queue_name => 'CAPTURE_CIBSREP', include_dml => true, include_ddl => true);
    EXEC dbms_streams_adm.add_schema_rules ( schema_name => 'scott', streams_type => 'APPLY', streams_name => 'APPLY_CIBSREP', queue_name => 'APPLY_CIBSREP', include_dml => true, include_ddl => true, source_database => 'CIBSPROD');
    EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSREP', parameter => 'PARALLELISM', value => '5');
    EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSREP', parameter => '_HASH_TABLE_SIZE', value => '10000000');
    EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSREP', parameter => 'TXN_LCR_SPILL_THRESHOLD', value => '1000000');
    EXEC DBMS_APPLY_ADM.SET_PARAMETER(apply_name => 'APPLY_CIBSREP', parameter => 'DISABLE_ON_ERROR', value => 'N');
    EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSREP', parameter => '_dynamic_stmts',value => 'Y');
    EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSREP', parameter => 'COMMIT_SERIALIZATION',value => 'NONE');
    EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSREP', parameter => '_RESTRICT_ALL_REF_CONS',value => 'N');
    EXEC DBMS_APPLY_ADM.Set_parameter(apply_name => 'APPLY_CIBSREP', parameter => 'ALLOW_DUPLICATE_ROWS',value => 'Y');
    EXEC dbms_streams_adm.add_schema_propagation_rules ( schema_name => 'scott', streams_name => 'PROP_CIBSREP_to_CIBSPROD', source_queue_name => 'CAPTURE_CIBSREP', destination_queue_name => 'APPLY_CIBSPROD@CIBSPROD', include_dml => true, include_ddl => true, source_database => 'CIBSREP');
    --Import export schema
    host exp USERID=SYSTEM/cibsmgr@CIBSPROD parfile=expparfile-scott.txt
    host imp USERID=SYSTEM/cibsmgr@CIBSREP parfile=impparfile-scott.txt
    --Start capture and apply processes on CIBSREP
    conn strmadmin/strmadmin@CIBSREP
    EXEC dbms_capture_adm.start_capture (capture_name=>'CAPTURE_CIBSREP');
    EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_CIBSREP');
    --Start capture and apply processes on CIBSPROD
    conn strmadmin/strmadmin@CIBSPROD
    EXEC dbms_capture_adm.start_capture (capture_name=>'CAPTURE_CIBSPROD');
    EXEC DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_CIBSPROD');
    SPOOL OFF
    What have we missed in the configuration?
    Regards,
    Asim

    We find this error on the SRC database: "ORA-26687: no instantiation SCN provided for SCOTT.STRMTEST in source database CIBSREP".
    SCOTT.STRMTEST is the heartbeat table used for Streams replication on the source and destination databases.
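    ORA-26687 means the apply site has no instantiation SCN recorded for that object and source. A minimal sketch of setting it manually at the site that raises the error (here CIBSPROD, for source CIBSREP), reusing the CIBSREP database link from the script above:
    DECLARE
    v_scn NUMBER;
    BEGIN
    -- current SCN at the source of the missing instantiation (CIBSREP)
    v_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@CIBSREP;
    DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'SCOTT.STRMTEST',
    source_database_name => 'CIBSREP',
    instantiation_scn    => v_scn);
    END;
    Run it as the Streams administrator on CIBSPROD; DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN with recursive => TRUE covers the whole schema instead of a single table.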

  • Xml schema validation problem

    Hi All
    How do I tackle this XML schema validation problem?
    I am using the sample code provided by Oracle TechNet for XML schema validation in the Oracle database (8.1.7).
    The sample code works perfectly fine.
    The sample is provided at http://otn.oracle.com/tech/xml/xdk_sample/archive/xdksample_093001.zip.
    It works fine for normal XML files validated against an XML schema (XSD),
    but in this case my validation is failing. Can you let me know why?
    I have this main schema
    Comany.xsd
    ===========
    <?xml version="1.0"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    targetNamespace="http://www.company.org"
    xmlns="http://www.company.org"
    elementFormDefault="qualified">
    <xsd:include schemaLocation="Person.xsd"/>
    <xsd:include schemaLocation="Product.xsd"/>
    <xsd:element name="Company">
    <xsd:complexType>
    <xsd:sequence>
    <xsd:element name="Person" type="PersonType" maxOccurs="unbounded"/>
    <xsd:element name="Product" type="ProductType" maxOccurs="unbounded"/>
    </xsd:sequence>
    </xsd:complexType>
    </xsd:element>
    </xsd:schema>
    ================
    which includes the following 2 schemas
    Product.xsd
    ============
    <?xml version="1.0"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    elementFormDefault="qualified">
    <xsd:complexType name="ProductType">
    <xsd:sequence>
    <xsd:element name="Type" type="xsd:string"/>
    </xsd:sequence>
    </xsd:complexType>
    </xsd:schema>
    ==============
    Person.xsd
    ===========
    <?xml version="1.0"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    elementFormDefault="qualified">
    <xsd:complexType name="PersonType">
    <xsd:sequence>
    <xsd:element name="Name" type="xsd:string"/>
    <xsd:element name="SSN" type="xsd:string"/>
    </xsd:sequence>
    </xsd:complexType>
    </xsd:schema>
    =================
    Now when I try to validate an XML file against Company.xsd,
    it throws an error saying it is unable to find Person.xsd
    ("no protocol" error).
    Where do I place these two schemas (.xsd files), Person and Product,
    so that the Java schema-validation program running inside the Oracle
    database can locate these files?
    Rgrds
    Sushant

    Hi Jinyu
    This is the Java code loaded into the database using loadjava and called by a wrapper Oracle stored procedure:
    import oracle.xml.parser.schema.*;
    import oracle.xml.parser.v2.*;
    import java.net.*;
    import java.io.*;
    import org.w3c.dom.*;
    import java.util.*;
    import oracle.sql.CHAR;
    import java.sql.SQLException;
    public class SchemaUtil {
      public static String validation(CHAR xml, CHAR xsd) throws Exception {
        // Build the schema object
        XSDBuilder builder = new XSDBuilder();
        byte[] docbytes = xsd.getBytes();
        ByteArrayInputStream in = new ByteArrayInputStream(docbytes);
        XMLSchema schemadoc = (XMLSchema) builder.build(in, null);
        // Parse the input XML document with schema validation
        docbytes = xml.getBytes();
        in = new ByteArrayInputStream(docbytes);
        DOMParser dp = new DOMParser();
        // Set the schema object for validation
        dp.setXMLSchema(schemadoc);
        dp.setValidationMode(XMLParser.SCHEMA_VALIDATION);
        dp.setPreserveWhitespace(true);
        StringWriter sw = new StringWriter();
        dp.setErrorStream(new PrintWriter(sw));
        try {
          dp.parse(in);
          sw.write("The input XML parsed without errors.\n");
        } catch (XMLParseException pe) {
          sw.write("Parser Exception: " + pe.getMessage());
        } catch (Exception e) {
          sw.write("NonParserException: " + e.getMessage());
        }
        return sw.toString();
      }
    }
    This is the code I used initially for validating an XML file against a single XML schema (.xsd) file.
    In the above code, could you tell me how to specify the second schema for validating the incoming XML?
    Say I create another schema object for the second XML schema,
    something like this, with another parameter (CHAR xsd1) passed to the method:
    byte [] docbytes1 = xsd1.getBytes();
    ByteArrayInputStream in1 = new ByteArrayInputStream(docbytes1);
    XMLSchema schemadoc1 = (XMLSchema)builder.build(in1,null);
    DOMParser dp = new DOMParser();
    How do I set up validation against the second XML schema in the above code, or can I combine the two XML schemas?
    How should I go about it?
    Rgrds
    Sushant

  • How to add new tables in Streams for Schema level replication ( 10.2.0.3 )

    Hi,
    I am in the process of setting up Oracle Streams schema-level replication on version 10.2.0.3. I was able to set up replication for one table properly. Now I want to add 10 more new tables to the schema-level replication. A few questions regarding this:
    1. If I create the new tables in the source, do I have to create the tables in the target database manually, or do I have to do an export/import with STREAMS_INSTANTIATION=Y?
    2. Can you tell me the Metalink note ID to read more on this topic?
    thanks & regards
    parag

    The same capture and apply process can be used to replicate other tables. The following steps should meet your need:
    Say table NEW is the new table to be added with owner SANTU
    downstr_cap is the capture process which is already running
    downstr_apply is the apply process which is already there
    1. Now stop the apply process
    2. Stop the capture process
    3. Add the new table in the capture process using +ve rule
    BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'SANTU.NEW',
    streams_type    => 'capture',
    streams_name    => 'downstr_cap',
    queue_name      => 'strmadmin.DOWNSTREAM_Q',
    include_dml     => true,
    include_ddl     => true,
    source_database => ' Name of the source database ',
    inclusion_rule  => true);
    END;
    4. Take an export of the new table with the "OBJECT_CONSISTENT=Y" option
    5. Import the table at the destination with the "STREAMS_INSTANTIATION=Y" option (see the sketch after these steps)
    6. Start the apply process
    7. Start the capture process
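    A minimal sketch of steps 4 and 5 with classic exp/imp (usernames, TNS aliases and file names are placeholders):
    exp system/password@SOURCE_DB tables=SANTU.NEW object_consistent=y file=new_tab.dmp log=exp_new_tab.log
    imp system/password@TARGET_DB fromuser=SANTU touser=SANTU streams_instantiation=y file=new_tab.dmp log=imp_new_tab.log
    Importing with STREAMS_INSTANTIATION=Y records the instantiation SCN for the table at the destination, so the apply process knows from which point to start applying changes.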

  • SQL Trace for schema level

    Hi
    Database 10.1.0.4
    I have tried SQL trace but did not get the trace file. I tried tracing by session ID but could not get the trace file either; whenever a user logs into the application, around 6 sessions get logged in and you never know which one belongs to the user. So I have decided to capture the trace for the schema.
    I have used these for tracing:
    SQL> ALTER SESSION SET sql_trace=TRUE;
    SQL> ALTER SESSION SET sql_trace=FALSE;
    Or
    SQL> EXEC DBMS_SESSION.set_sql_trace(sql_trace => TRUE);
    SQL> EXEC DBMS_SESSION.set_sql_trace(sql_trace => FALSE);
    or
    SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>TRUE);
    SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>FALSE);
    I want to get trace files for a schema; can anyone suggest how I get tracing at the schema level?
    Thanks for help

    Hi,
    With instance-level tracing (setting the init.ora/spfile parameter SQL_TRACE=TRUE), all processes against the instance will create their own trace files. This method of tracing should be used with care since it creates a great deal of overhead on the system. In addition, the default value for this parameter is FALSE.
    Cheers
    Legatti
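    A narrower alternative is a logon trigger that enables tracing only for sessions of the schema in question. A sketch, assuming the schema is named XYZ and you are allowed to create database-level triggers:
    CREATE OR REPLACE TRIGGER trace_xyz_logon
    AFTER LOGON ON DATABASE
    BEGIN
      IF USER = 'XYZ' THEN
        -- tag the trace files so they are easy to find in USER_DUMP_DEST
        EXECUTE IMMEDIATE 'ALTER SESSION SET tracefile_identifier = ''XYZ_TRACE''';
        EXECUTE IMMEDIATE 'ALTER SESSION SET sql_trace = TRUE';
      END IF;
    END;
    Remember to drop or disable the trigger when finished, since tracing every XYZ session adds noticeable overhead.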

  • AWR REPORT AT SCHEMA LEVEL

    Hello,
    Can anybody guide me on how to generate an AWR (Automatic Workload Repository) report at the schema level? I have created one user named xyz and imported some 1000 objects (tables, views etc.).
    I have a small doubt here: when we create a user, is a schema for that user created automatically? If so, I need to generate an AWR report for that user/schema.

    I don't think this is possible: AWR only works at database/instance level and not at schema level.
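    The instance-level report can still be generated from SQL and then read with the schema's objects in mind. A sketch, assuming you look up the DBID, instance number and snapshot IDs in DBA_HIST_SNAPSHOT first:
    -- arguments: dbid, instance number, begin snapshot id, end snapshot id
    SELECT output
    FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(&dbid, &inst_num, &begin_snap, &end_snap));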

  • What level suplemental logging requires to setup Streams at Schema level

    Hi,
    I am working on setting up Streams from a 10g to an 11g DB at the schema level. The session hangs on the statement "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" while running the following command, which was generated using DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS.
    Begin
    dbms_streams_adm.add_schema_rules(
    schema_name => '"DPX1"',
    streams_type => 'CAPTURE',
    streams_name => '"CAPTURE_DPX1"',
    queue_name => '"STRMADMIN"."CAPTURE_QUEUE"',
    include_dml => TRUE,
    include_ddl => TRUE,
    include_tagged_lcr => TRUE,
    source_database => 'DPX1DB',
    inclusion_rule => TRUE,
    and_condition => get_compatible);
    END;
    The generated script also sets up table-level logging for each table ('ALTER TABLE "DPX1"."DEPT" ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, FOREIGN KEY, UNIQUE INDEX) COLUMNS').
    So my question is: is database-level supplemental logging required to set up schema-level replication? If the answer is no, then why does the generated script invoke the "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" command?
    Thanks in advance.
    Regards,
    Sridhar

    Hi Sridhar,
    From what I found, "ALTER DATABASE ADD SUPPLEMENTAL LOG DATA" is required for the first capture process you create in a database. Once it has been run, you'll see V$DATABASE with the column SUPPLEMENTAL_LOG_DATA_MIN set to YES. It requires a strong level of locking; for example, you cannot run this ALTER DATABASE while an index rebuild is running (perhaps an online rebuild is the exception?).
    I know it is called implicitly by DBMS_STREAMS_ADM.ADD_TABLE_RULES for the first rule created.
    So, you can just run the statement once in a maintenance window and you'll be all set.
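    A minimal sketch of checking and, if needed, enabling it during that maintenance window:
    SELECT supplemental_log_data_min FROM v$database;
    -- if the result is NO, enable minimal supplemental logging once:
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;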
    Minimal Supplemental Logging - http://www.oracle.com/pls/db102/to_URL?remark=ranked&urlname=http:%2F%2Fdownload.oracle.com%2Fdocs%2Fcd%2FB19306_01%2Fserver.102%2Fb14215%2Flogminer.htm%23sthref2006
    NOT to be confused with database level supplemental log group.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/mon_rep.htm#BABHHCCC
    Hope this helps,
    Regards,

  • Schema level and table level supplemental logging

    Hello,
    I'm setting up bi-directional DML replication between two Oracle databases. I have enabled supplemental logging at the database level by running this command:
    SQL>alter database add supplemental log data (primary key) columns;
    Database altered.
    SQL> select SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI from v$database;
    SUPPLEME SUP SUP
    IMPLICIT YES NO
    My question is: should I also enable supplemental logging at the table level (for DML-only replication)? Should I run the command below as well?
    GGSCI (db1) 1> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
    Successfully logged into database.
    GGSCI (db1) 2> ADD TRANDATA schema.<table-name>
    What is the difference between schema-level and table-level supplemental logging?

    For Oracle, ADD TRANDATA by default enables table-level supplemental logging. The supplemental log group includes one of the following sets of columns, in the listed order of priority, depending on what is defined on the table:
    1. Primary key
    2. First unique key alphanumerically with no virtual columns, no UDTs, no function-based columns, and no nullable columns
    3. First unique key alphanumerically with no virtual columns, no UDTs, or no function-based columns, but can include nullable columns
    4. If none of the preceding key types exist (even though there might be other types of keys
    defined on the table) Oracle GoldenGate constructs a pseudo key of all columns that
    the database allows to be used in a unique key, excluding virtual columns, UDTs,
    function-based columns, and any columns that are explicitly excluded from the Oracle
    GoldenGate configuration.
    The command issues an ALTER TABLE command with an ADD SUPPLEMENTAL LOG DATA clause that
    is appropriate for the type of unique constraint (or lack of one) that is defined for the table.
    When to use ADD TRANDATA for an Oracle source database
    Use ADD TRANDATA only if you are not using the Oracle GoldenGate DDL replication feature.
    If you are using the Oracle GoldenGate DDL replication feature, use the ADD SCHEMATRANDATA command to log the required supplemental data. It is possible to use ADD
    TRANDATA when DDL support is enabled, but only if you can guarantee one of the following:
    ● You can stop DML activity on any and all tables before users or applications perform DDL on them.
    ● You cannot stop DML activity before the DDL occurs, but you can guarantee that:
    ❍ There is no possibility that users or applications will issue DDL that adds new tables whose names satisfy an explicit or wildcarded specification in a TABLE or MAP
    statement.
    ❍ There is no possibility that users or applications will issue DDL that changes the key definitions of any tables that are already in the Oracle GoldenGate configuration.
    ADD SCHEMATRANDATA ensures replication continuity should DML ever occur on an object for which DDL has just been performed.
    You can use ADD TRANDATA even when using ADD SCHEMATRANDATA if you need to use the COLS option to log any non-key columns, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters.
    Additional requirements when using ADD TRANDATA
    Besides table-level logging, minimal supplemental logging must be enabled at the database level in order for Oracle GoldenGate to process updates to primary keys and
    chained rows. This must be done through the database interface, not through Oracle GoldenGate. You can enable minimal supplemental logging by issuing the following DDL
    statement:
    SQL> alter database add supplemental log data;
    To verify that supplemental logging is enabled at the database level, issue the following statement:
    SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
    The output of the query must be YES or IMPLICIT. LOG_DATA_MIN must be explicitly set, because it is not enabled automatically when other LOG_DATA options are set.
    If you required more details refer Oracle® GoldenGate Windows and UNIX Reference Guide 11g Release 2 (11.2.1.0.0)

  • Dml_Handler at the Schema Level?

    Hi:
    I'm using 11g R2 and doing a one-way Streams replication within the same database. I've got a subset of tables within the same schema set up now for capture, and I'm using dml_handlers on apply. The handlers are specified on each table with a package that takes each trapped LCR and writes its data out to a different table than the one the LCR was captured from. This was done for a bolt-on reporting issue that popped up. This is all working great. Streams rocks!
    Here's my next issue. I want to expand/morph the above approach in the following way. I want to do my capture at the schema level for all tables and also run just one dml_handler to take all LCRs for the specified schema and write them out as XML into a CLOB column. I've got the XML portion working and I pretty much know how I can get the Streams part going as well, using a dml_handler-per-table approach similar to what I did above. What I would like to know is whether there's a way to avoid having to set up a dml_handler for each INSERT, UPDATE, and DELETE LCR on every table within the specified schema.
    Instead of doing this....
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'schema.table_a',
    object_type => 'TABLE',
    operation_name => 'INSERT',
    error_handler => false,
    user_procedure => 'package.procedure',
    apply_database_link => NULL,
    apply_name => 'apply_name');
    END;
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'schema.table_a',
    object_type => 'TABLE',
    operation_name => 'UPDATE',
    error_handler => false,
    user_procedure => 'package.procedure',
    apply_database_link => NULL,
    apply_name => 'apply_name');
    END;
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name => 'schema.table_a',
    object_type => 'TABLE',
    operation_name => 'DELETE',
    error_handler => false,
    user_procedure => 'package.procedure',
    apply_database_link => NULL,
    apply_name => 'apply_name');
    END;
    Once for each table in the schema
    I'd like to be able to do the following:
    BEGIN
    DBMS_APPLY_ADM.SET_DML_HANDLER(
    schema_name => 'schema', ---This line is totally made up by me. The real argument is object_name
    object_type => 'TABLE',
    operation_name => 'ALL', ---This line is also totally made up by me. The real allowed options are Insert, Update, and Delete.
    error_handler => false,
    user_procedure => 'package.procedure',
    apply_database_link => NULL,
    apply_name => 'apply_name');
    END;
    Is there a way to do this? I don't see a procedure in dbms_apply_adm to accomplish it, or I just missed it. I could also do this within a loop using dynamic SQL but I'm hoping I won't have to.
    Thanks for any help!
    Cheers,
    Mike

    SCHEMA level is not possible with this procedure.
    However, you can set the operation_name to 'DEFAULT' - which indicates all operations (INSERT/UPDATE/DELETE/LOB_UPDATE)
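    You still need one SET_DML_HANDLER call per table, but with operation_name => 'DEFAULT' it is a single call per table rather than three, so the per-table loop stays short. A sketch, reusing the placeholder handler and apply names from the question:
    BEGIN
      FOR t IN (SELECT owner, table_name
                FROM   dba_tables
                WHERE  owner = 'SCHEMA') LOOP   -- placeholder schema name
        DBMS_APPLY_ADM.SET_DML_HANDLER(
          object_name    => t.owner || '.' || t.table_name,
          object_type    => 'TABLE',
          operation_name => 'DEFAULT',          -- covers INSERT/UPDATE/DELETE/LOB_UPDATE
          error_handler  => false,
          user_procedure => 'package.procedure',
          apply_name     => 'apply_name');
      END LOOP;
    END;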

  • Schema level with particular partition tables

    Hi All,
    I need to export all objects, i.e. a schema-level export, but I need to exclude particular partitions of a table.
    That is, I need to EXCLUDE particular partition data from the schema-level backup.
    Kindly suggest how to achieve the above.
    Thanks & Regards
    Sami
    Edited by: Sami on Jul 6, 2012 4:41 PM

    Hi All,
    I have used the following options to export at the schema level with partitioned tables:
    YYYYY/********@devchn schemas=YYYYY EXCLUDE=TABLE:"IN('ACCOUNT_STATEMENT_HISTORY','CUSTOMER_IMAGE','XAPI_ACTIVITY_HISTORY','GL_ACCOUNT_SUMMARY$AUD','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART1','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART2','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART3','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART4','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART5','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART6','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART7','DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST1','DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST2', 'DEPOSIT_ACCOUNT_HISTORY:DEP_ACCT_HIST3','TXN_JOURNAL:TRX_JOURN_PART1','TXN_JOURNAL:TRX_JOURN_PART2')" directory=DUMPDIR1 dumpfile=MSB_06-July-2012.dmp logfile=exp_MSB_06-July-2012.log
    But it does not work.
    Log file
    . . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART3"       1.801 GB 6650371 rows
    . . exported "YYYYY."."EVENT_JOURNAL"                       1.533 GB 15930785 rows
    . . exported "YYYYY."."LOAN_ACCOUNT$AUD"                    1.287 GB 6339368 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART6"  1.212 GB 5860272 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART5"  1.102 GB 5363721 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST3"  1.055 GB 5530280 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART4"  929.4 MB 4513889 rows
    . . exported "YYYYY."."DP_ACCT_INTEREST_HISTORY"            925.0 MB 7002553 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART2"  909.8 MB 4441940 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART3"  768.3 MB 3786671 rows
    . . exported "YYYYY."."ACCOUNT$AUD"                         709.8 MB 4348526 rows
    . . exported "YYYYY."."EVENT_CHARGE_JOURNAL"                663.9 MB 5303756 rows
    . . exported "YYYYY."."DP_ACCT_CYCLE_STAT_HIST"             655.4 MB 4389715 rows
    . . exported "YYYYY."."DP_ACCT_PERIOD_CYCLE_STAT_HIST"      569.1 MB 3733176 rows
    . . exported "YYYYY."."DP_ACCT_CHARGE_CYCLE_STAT_HIST"      535.4 MB 3712447 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART7"  473.3 MB 2238240 rows
    . . exported "YYYYY."."WF_WORK_ITEM_HISTORY"                474.4 MB 1887956 rows
    . . exported "YYYYY."."OFFLINE_OUTBOUND_TXN_LOG"            478.4 MB   32803 rows
    . . exported "YYYYY."."OPERATIONAL_SERVICE_ERROR_LOG"       291.8 MB   55333 rows
    . . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART1"       350.3 MB 1352267 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_STAT"                313.6 MB  414942 rows
    . . exported "YYYYY."."CORRESPONDENCE_QUEUE_BK"             295.6 MB 1816383 rows
    . . exported "YYYYY."."EXT_TXN_JOURNAL"                     255.6 MB 1234290 rows
    . . exported "YYYYY."."PERSONAL_CUSTOMER$AUD"               244.7 MB 1018705 rows
    . . exported "YYYYY."."GL_ACCOUNT_STAT"                     228.3 MB  873915 rows
    . . exported "YYYYY."."TXN_JOURNAL":"TRX_JOURN_PART2"       228.8 MB  855180 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST1"  210.5 MB 1119932 rows
    . . exported "YYYYY."."USER_ROLE_ALERT"                     180.9 MB 3059052 rows
    . . exported "YYYYY."."CUSTOMER$AUD"                        172.7 MB 1005897 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_HISTORY":"DEP_ACCT_HIST2"  162.3 MB  837967 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_SUMMARY"             160.7 MB  414942 rows
    . . exported "YYYYY."."CUSTOMER_IMAGE_HISTORY"              142.0 MB   10085 rows
    . . exported "YYYYY."."SYSUSER$AUD"                         137.9 MB  986069 rows
    . . exported "YYYYY."."TXN_BATCH_ITEM$AUD"                  143.9 MB  893961 rows
    . . exported "YYYYY."."ALERT"                               130.8 MB  652573 rows
    . . exported "YYYYY."."EXT_DP_ACCOUNT_SUMMARY"              132.3 MB  336115 rows
    . . exported "YYYYY."."GL_ACCOUNT_HISTORY":"GL_ACCT_HIST_PART1"  132.7 MB  681899 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_INTEREST"            113.0 MB  835126 rows
    . . exported "YYYYY."."EXT_DP_ACCOUNT_INTEREST"             112.3 MB  625117 rows
    . . exported "YYYYY."."GL_ACCOUNT_SUMMARY"                  103.5 MB  873885 rows
    . . exported "YYYYY."."DP_ACCT_INTEREST_TIER_HISTORY"       102.4 MB 1413589 rows
    . . exported "YYYYY."."GL_ACCOUNT_MONTHLY_STAT"             98.52 MB  852631 rows
    . . exported "YYYYY."."GL_ACCOUNT_QUARTERLY_STAT"           98.45 MB  852630 rows
    . . exported "YYYYY."."GL_ACCOUNT_YEARLY_STAT"              98.47 MB  852630 rows
    . . exported "YYYYY."."LN_ACCT_REPMNT_EVENT"                91.53 MB  902496 rows
    . . exported "YYYYY."."WF_WORK_ITEM_CHKLST_RESP"            83.03 MB  538855 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT$AUD"                 79.53 MB  450801 rows
    . . exported "YYYYY."."ACCOUNT_CHEQUE_INVENTORY"            73.38 MB  910879 rows
    . . exported "YYYYY."."EXT_DP_ACCOUNT_INTEREST_TIER"        71.44 MB  670870 rows
    . . exported "YYYYY."."PENDING_TXN_JOURNAL"                 44.14 MB    9578 rows
    . . exported "YYYYY."."CUSTOMER"                            67.22 MB  405785 rows
    . . exported "YYYYY."."TXN_BATCH_ITEM"                      66.98 MB  458442 rows
    . . exported "YYYYY."."ACCOUNT"                             58.82 MB  443064 rows
    . . exported "YYYYY."."ACCOUNT_CYCLIC_CHARGE"               56.00 MB  405963 rows
    . . exported "YYYYY."."EXT_CUSTOMER"                        57.16 MB  321364 rows
    . . exported "YYYYY."."EXT_LN_ACCT_REPMNT_EVENT"            61.50 MB  437790 rows
    . . exported "YYYYY."."ORGANISATION_CUSTOMER$AUD"           49.84 MB  241014 rows
    . . exported "YYYYY."."OFFLINE_ASYNCH_QUEUE"                52.14 MB    3562 rows
    . . exported "YYYYY."."OPERATIONAL_SVCE_MAN_RUN_HIST"       47.91 MB  766864 rows
    . . exported "YYYYY."."DEPOSIT_ACCOUNT_INTEREST_TIER"       46.44 MB  670938 rows
    . . exported "YYYYY."."LN_ACCT_PERIOD_CYCLE_STAT_HIST"      42.09 MB  260703 rows
    . . exported "YYYYY."."ADDRESS"                             41.39 MB  411753 rows
    . . exported "YYYYY."."EXT_ACCOUNT_RELATIONSHIP"            41.98 MB  305353 rowsEXCLUDE option is not working for partition tables.. But its working fine for other tables
    examples
    EXCLUDE=TABLE:"IN('ACCOUNT_STATEMENT_HISTORY','CUSTOMER_IMAGE','XAPI_ACTIVITY_HISTORY','GL_ACCOUNT_SUMMARY$AUD'
    {code}
    the above part is working fine..
    But.
    {code}
    EXCLUDE=TABLE:"IN('GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART5','GL_ACCOUNT_HISTORY:GL_ACCT_HIST_PART6')the above exclude option is not working.. data's from the table are exported into the dump..
    Thanks & Regards
    Sami.

  • Schema Level DML

    I have about 200 tables, and each table has two columns : ( table_name_ID, local_id ).
    example :
    COST ( cost_id, local_id )
    RATES ( rates_id, local_id )
    SALARY ( salary_id, local_id )
    I want the column local_id to be set to the same value as the table's own ID column (local_id = cost_id in the COST table; local_id = salary_id in the SALARY table, etc.).
    The code for a trigger on these tables would be generic and can be automatically generated.
    But we are talking about 200 tables, which means a huge amount of testing with the existing triggers on all these tables.
    Is there any simpler method?
    BTW - I perused the following thread, and it talks about using AUDITING.
    Re: Schema level Database triggers
    But I wonder how AUDITING can be used in place of a trigger to modify data?

    If you are running 11g, you could also replace the "local_id" physical columns with virtual ones (below). Not very pretty, but way cheaper than triggers.
    SQL> CREATE TABLE COST ( cost_id  number
      2                    , local_id number generated always as (cost_id+0) )
      3  /
    Table created
    SQL> insert into cost (cost_id) values (1);
    1 row inserted
    SQL> select * from cost;
       COST_ID   LOCAL_ID
             1          1

  • Schema level tiggers

    hi all,
    Is it possible to write a schema-level trigger with DCL and DDL commands in it?
    Actually, in my database there is more than one schema (X and Y) with different privileges.
    If I create a table in the 'X' schema, I have to create a synonym for that table in the other schema 'Y', and I want to grant the SELECT privilege on that table to 'Y'. My version is 10.2.0.4.
    create or replace
    TRIGGER bcs_trigger
    after  create ON X.SCHEMA
    declare
    Cursor table_cur is
    Select object_id, object_name, object_type, owner
    from DBA_OBJECTS
    where to_date(created,'DD/MM/YYYY')= to_date(SYSDATE,'DD/MM/YYYY');
      type TABLE_collect is table of table_cur %rowtype;
    TACOLL TABLE_collect;
    v_msg varchar2(1000) := 'SYNONYM HAS BEEN CREATED';
    V_error varchar2(1000) := 'ALREADY EXIST';
    BEGIN
    open table_cur;
    loop
              fetch table_cur bulk collect into TACOLL;
               exit when table_cur %notfound;
               end loop;
               close table_cur;
    for i in 1.. TACOLL.count
    loop
    IF
    TACOLL(i). OBJECT_TYPE ='TABLE'
    THEN
    execute immediate 'create synonym '||TACOLL(i).OBJECT_NAME||' for '||TACOLL(i).OBJECT_NAME;
    execute immediate 'grant select on '||TACOLL(i).OBJECT_NAME||' to Y ';
    dbms_output.put_line ('v_msg');
    end if;
    end loop;
    end ;

    Hi Suresh.
    Welcome to OTN Forums!
    I think this will help you:
    CREATE OR REPLACE TRIGGER bcs_trigger AFTER
    CREATE ON X.SCHEMA
      DECLARE
        V_MSG VARCHAR2(1000) := 'SYNONYM HAS BEEN CREATED';
      BEGIN
        FOR OBJ IN
        (SELECT object_id,
          object_name,
          object_type,
          owner
        FROM DBA_OBJECTS
        WHERE TRUNC(created) = TRUNC(SYSDATE))
        LOOP
          IF OBJ.OBJECT_TYPE = 'TABLE' THEN
            EXECUTE IMMEDIATE 'create or replace synonym '||OBJ.OBJECT_NAME||' for '||OBJ.OBJECT_NAME;
            EXECUTE IMMEDIATE 'grant select on '||OBJ.OBJECT_NAME||' to Y ';
            dbms_output.put_line (V_MSG|| ' : ' ||OBJ.OBJECT_NAME);
          END IF;
        END LOOP;
      END ;
