Replicate to a different schema using streams

All,
I'm performing replication using Streams; I have read the documentation and the step-by-step notes on Metalink and got the replication working successfully.
But now I want to do the same across different schemas, i.e.
source DB: Schema= A Table=test
Destination DB: Schema= B Table=test
On both databases the schemas are basically the same but with different names.
What changes or additional steps do I need to perform in order to complete the replication?
Thanks in advance!

Here is the procedure:
1) Follow the instructions from the documentation to create the Streams administrator user on both source and target (e.g. streams_adm).
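On the 10.2 side, creating that user might look like the sketch below (the user name and password are placeholders matching the examples that follow, and dbms_streams_auth is a 10g package; on the 9.2 source the required grants are different, so follow the 9.2 Streams documentation there):

```sql
-- placeholder user name/password; adjust tablespaces to your environment
create user streams_adm identified by <password>
  default tablespace users
  temporary tablespace temp;
grant connect, resource, dba to streams_adm;
begin
  dbms_streams_auth.grant_admin_privilege(grantee => 'streams_adm');
end;
/
```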
2) Connect to the streams administrator user at the TARGET (10.2) database
--Create a queue
Begin
Dbms_streams_adm.set_up_queue(
queue_name => 'APPLY_Q',queue_table => 'APPLY_Q_QT');
End;
--Create rules for apply process
begin
dbms_rule_adm.create_rule(rule_name =>'APPLY_RULE',
condition => ':dml.get_object_owner()=''<put your source schema owner>'' ');
end;
begin
dbms_rule_adm.create_rule_set(rule_set_name => 'streams_adm.apply_ruleset',
evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT');
dbms_rule_adm.add_rule(rule_set_name => 'APPLY_RULESET',rule_name =>'APPLY_RULE');
end;
--Set the rule to rename the schema
begin
dbms_streams_adm.rename_schema(
rule_name=>'APPLY_RULE',
from_schema_name =>'<put the source schema>',
to_schema_name => '<put the target schema>');
end;
--Create the apply
begin
dbms_apply_adm.create_apply(apply_name => 'APPLY_CHANGES',
rule_set_name => 'APPLY_RULESET',
queue_name => 'APPLY_Q',
source_database => '<source database name>',
apply_captured => true);
end;
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'APPLY_CHANGES',
parameter => 'DISABLE_ON_ERROR',
value => 'N' );
END;
--Start apply
begin
dbms_apply_adm.start_apply('APPLY_CHANGES');
end;
3) Connect as streams_adm on the SOURCE (9.2) database:
--Create a database link to the TARGET database, connecting as streams_adm. The name of the dblink must match the target database's global name, since you have to set global_names=true in the init files of both databases
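A sketch of that link, assuming global_names=true (the link name, password and TNS alias below are placeholders; the link name must equal the target database's global name):

```sql
-- run as streams_adm on the SOURCE database
create database link <target global name>
  connect to streams_adm identified by <password>
  using '<target tns alias>';
```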
--Create a queue
Begin
Dbms_streams_adm.set_up_queue(
queue_name => 'CAPTURE_Q',queue_table => 'CAPTURE_Q_QT');
End;
--Create the rule and rulesets
*** Here replace the schema owner and the list of tables you'd like to capture changes from ***
begin
dbms_rule_adm.create_rule(rule_name =>'FILTER_TABLES',
condition => ':dml.get_object_owner()=''<put your schema owner>'' and :dml.get_object_name() in (''TAB1'',''TAB2'')');
end;
begin
dbms_rule_adm.create_rule_set(rule_set_name => 'streams_adm.capture_ruleset',
evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT');
dbms_rule_adm.add_rule(rule_set_name => 'CAPTURE_RULESET',rule_name =>'FILTER_TABLES');
end;
--Prepare the schema for instantiation
*** Replace the schema name with your schema name ***
BEGIN
DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(
schema_name => '<put your schema name here>');
END;
--Create propagation
BEGIN
DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
propagation_name => 'streams_propagation',
source_queue => 'streams_adm.capture_q',
destination_queue => 'streams_adm.apply_q',
destination_dblink => '<database link>',
rule_set_name => 'streams_adm.capture_ruleset');
END;
--Create capture process
BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'streams_adm.capture_q',
capture_name => 'capture_tables',
rule_set_name => 'streams_adm.CAPTURE_RULESET',
first_scn => NULL);
END;
--start capture
begin
dbms_capture_adm.start_capture('CAPTURE_TABLES');
end;
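Once both ends are started, a quick sanity check might look like this (a sketch using the object names created above; DBA_CAPTURE, DBA_APPLY and DBA_APPLY_ERROR are the standard Streams dictionary views):

```sql
-- on the SOURCE: the capture process should be ENABLED
select capture_name, status from dba_capture;

-- on the TARGET: the apply process should be ENABLED,
-- and the error queue should be empty
select apply_name, status from dba_apply;
select apply_name, error_message from dba_apply_error;
```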
Let me know if you have any questions

Similar Messages

  • Is it possible to export tables from different schemas using expdp?

    Hi,
    We can export tables from different schemas using exp, e.g. exp user/pass file=sample.dmp log=sample.log tables=scott.dept,system.sales ... But is it possible in expdp?
    Thanks in advance.

    Hi,
    you have to use schemas=user1,user2 include=TABLE:"IN ('table1','table2')", or use a parfile, e.g.: expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log{quote}
    I am not able to perform it using a parfile either. Using a parfile it shows "UDE-00010: multiple job modes requested, schema and tables."
    When trying the below, I get an error:
    {code}
    bash-3.00$ expdp directory=EXP_DUMP dumpfile=test.dmp logfile=test.log SCHEMAS=(\'MM\',\'MMM\') include=TABLE:\"IN\(\'EA_EET_TMP\',\'WS_DT\'\)\"
    Export: Release 10.2.0.4.0 - 64bit Production on Friday, 15 October, 2010 18:34:32
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Username: / as sysdba
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"."SYS_EXPORT_SCHEMA_01": /******** AS SYSDBA directory=EXP_DUMP dumpfile=test.dmp logfile=test.log SCHEMAS=('MM','MMM') include=TABLE:"IN('EA_EET_TMP','WS_DT')"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "MM"."EA_EET_TMP" 0 KB 0 rows
    ORA-39165: Schema MMM was not found.
    Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
    /export/home/nucleus/dump/test.dmp
    Job "SYS"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at 18:35:19
    {code}
    Checking expdp help=y shows:
    {code}TABLES Identifies a list of tables to export - one schema only.{code}
    Based on my testing, tables from different schemas cannot be exported using expdp in a single command.
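Given that one-schema-per-job restriction, one workable pattern is simply one export job per schema; a parfile sketch (directory and table names taken from the post above, dump/log file names are placeholders):

```
# mm.par -- export only the listed tables from schema MM
DIRECTORY=EXP_DUMP
DUMPFILE=mm_tables.dmp
LOGFILE=mm_tables.log
SCHEMAS=MM
INCLUDE=TABLE:"IN ('EA_EET_TMP','WS_DT')"
```

Then run expdp parfile=mm.par once per schema, adjusting SCHEMAS and the file names each time.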
    Anand

  • Replication environment from non-Oracle to Oracle using Streams

    Hi guys,
    I'm trying to establish a replication environment from a non-Oracle database to Oracle using Streams.
    I've checked Oracle's documentation about this. The answer is that I have to write a custom application that captures changes in the heterogeneous database, uses PL/SQL to format these changes into LCRs, and enqueues those LCRs into Oracle Streams.
    Theoretically we can all understand this idea, but I don't know if there are any more detailed notes on it.
    If anyone has such experience, please share it here; we could work through an example like replicating from SQL Server to Oracle using Streams, or even another technique.
    Assume we have established the HS connectivity using Transparent Gateway.
    Followups are welcomed.

    Hi Yukun,
    I'm currently developing an environment that will replicate changes from an Adabas (mainframe) database to Oracle. The greatest challenge is to get your (SQL Server or other database) data over to the Oracle db. Once that is done you must have (as the documentation says) a manual process that converts your changes into LCRs and then enqueues them. From there, it is a normal Streams environment. I can (try to) answer more detailed questions if you need it. I don't have any experience with the SQL Server database.
    Claudine

  • Replicate using streams to a different schema

    All,
    I'm performing replication using Streams; I have read the documentation and the step-by-step notes on Metalink and got the replication working successfully.
    But now I want to do the same across different schemas, i.e.
    source DB: Schema= A Table=test
    Destination DB: Schema= B Table=test
    On both databases the schemas are basically the same but with different names.
    What changes or additional steps do I need to perform in order to complete the replication?
    Thanks in advance!

    There are demos of both Change Data Capture (superior for what you appear to be doing) and Streams in Morgan's Library at www.psoug.org. Some of them duplicate exactly what you describe.
    PS: There is a specific Streams forum. In the future please post Streams related requests there. Thank you.

  • How streams replicate to a different table?

    How can I replicate a shared object from the source database to a table that has a different name from the source table?
    If anyone has this experience, please give me an example. Thanks.

    In our environment:
    OS: Solaris 5.9
    DB: Oracle 10.2.1
    we use Oracle Streams to replicate tables from DB1 to DB2. For example:
    DB1: table site in the test schema
    replicates to
    DB2: table site_opc in the xxrpt schema
    Because the shared database object (site) on DB1 has a different name and is in a different schema at the source and destination databases, I tried to configure a rule-based transformation to convert it.
    on destination database(DB2),I execute following
    BEGIN
    DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name => 'STRMADMIN.WBXSITE102',
    from_table_name => 'test.SITE',
    to_table_name => 'XXRPT.SITE_OPC',
    step_number => 0,
    operation => 'ADD');
    END;
    After configuring capture and propagation on the source and apply on the destination database, I make an update on the source database, but I don't see any change applied on DB2. And when I query dba_apply_error, I can't find any error message. Why?
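One thing worth checking (a guess based on the call shown above, not a confirmed diagnosis): Oracle stores object names in uppercase, and the declarative transformation records the names exactly as passed, so 'test.SITE' with a lowercase owner may never match LCRs whose owner is TEST. The transformation actually attached to the rule can be inspected on DB2, for example:

```sql
-- run as STRMADMIN on the destination; check that the stored
-- from/to names are uppercase (TEST.SITE, XXRPT.SITE_OPC)
select *
from   dba_streams_transformations
where  rule_name = 'WBXSITE102';
```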

  • How do you join two tables from different Oracle schemas using a subquery

    I am trying to join two tables from different Oracle schemas using a subquery. I can extract data from each of the tables without a problem. However, when I combine the select statements using a subquery I get the Oracle error *'ORA-00936: missing expression'*. Since each SELECT statement executes on its own without error, I don't understand what is missing. The result set I am trying to get should match up the LINE_ID from PDTABLE_12_1 in schema DD_12809 with the MAT_DESCRIPTION from table PDTABLE_201 in schema RA_12809.
    The query is as follows:
    sql = "SELECT [DD_12809].[PDTABLE_12_1].LINE_ID FROM [DD_12809].[PDTABLE_12_1] JOIN " _
    + "(SELECT [RA_12809].[PDTABLE_201].MAT_DESCRIPTION " _
    + "FROM [RA_12809].[PDTABLE_201]) AS FAB " _
    + "ON [DD_12809].[PDTABLE_12_1].PIPING_MATER_CLASS = FAB.PIPING_MATER_CLASS"
    The format of the query is copied from a SQL programming manual.
    I also tried executing the query using a straight JOIN on the two tables but got the same results. Any insight would be helpful. Thanks!
    Edited by: user11338343 on Oct 19, 2009 6:55 AM

    I believe you are receiving the error because you are trying to JOIN on a column that doesn't exist. For example you are trying to join on FAB.PIPING_MATER_CLASS but that column does not exist in the subquery.
    If you want to do a straight join without a subquery you could do the following
    SELECT  DD_12809.PDTABLE_12_1.LINE_ID
    ,       FAB.MAT_DESCRIPTION
    FROM    DD_12809.PDTABLE_12_1
    JOIN    RA_12809.PDTABLE_201 FAB
    ON      DD_12809.PDTABLE_12_1.PIPING_MATER_CLASS = FAB.PIPING_MATER_CLASS
    (Note that Oracle does not accept the AS keyword for table aliases.) HTH!

  • Different Schemas in different environments for same tables used in a Universe

    Hi,
    I have a Universe in Development whose tables point to a schema (DW) in DEV, but in TEST/INT the same tables live under a different schema (TESTDW). So when I promote the Universe & reports from DEV to TEST, I get errors because the schema is incorrect. We need a way to define the schema globally instead of at the individual table level.
    I know we can repoint the schemas, etc., but I need to avoid extra work in the other environments.
    Known solution: promote the reports to TEST, then select all the tables & change the schema by right-clicking the selected tables and choosing Change Qualifier/Owner.
    One of the reasons I don't want to follow this route is that I have a lot of derived tables which I would need to change manually by editing the SQL statement, & also if I add new tables or columns in the future & promote them to TEST, I have to change the schema again.
    Has anyone faced this kind of issue? Is there any other way we can use, like Begin_Sql, etc.?
    (FYI, I am using BO 4.0 SP5)

    Mark, thanks a lot for your concern. We actually have the same schema name across all 3 environments, but there is a huge project going on in my company which is hard to explain; our team has decided to go with different names for the schemas, as 2 different teams will be working on them in parallel and will combine them after a year or so. (I know this is not a solution.)
    Thanks Swapnil for the synonyms solution. It might have worked if we were using BO 4.1 SP5, but unfortunately we are using BO 4.0, and in this version I can't view any synonym tables in the universe.

  • Applying Streams changes to a different schema

    We're trying to set up a Streams environment between two 9.2.0.3 databases.
    We have two schemas, one in each DB, called TEST_HQ and TEST_CO, and both contain the same objects, tables, and procedures. What we're interested in is applying DML/DDL changes on the destination DB even though the schema has a different name but the same structure.
    Right now the CAPTURE and PROPAGATE processes are working fine, while the apply process is unable to apply any change. We've also tried creating a new schema in the destination DB called TEST_HQ, just like the source schema, with synonyms to the real tables contained in TEST_CO, but this hits ORA-23416 'missing primary key'.
    Any help or hint will be greatly appreciated!
    Thanks in advance!
    Max

    Thanks for your reply!
    Actually there is a primary key, but Streams seems to ignore it. I've also tried the SET_KEY_COLUMNS approach plus supplemental logging on the source DB, but this didn't help at all. I think this is happening because the tables are not in the TEST_HQ schema; there are only synonyms to the real tables contained in TEST_CO.
    Is there any other easy way to get Streams working between two different schemas?
    Thanks again for your help!!
    Max

  • Create tables in different database schemas using EJB 3 Entity Persistence

    Hi All,
    I would like to find out how to get the following tasks done using EJB 3.0 Java Persistence:
    ( i ) Create tables in different schemas, such as table STUDENT under the EDUCATION schema and table PATIENT in the HOSPITAL schema. We can then reference them in SQL as EDUCATION.STUDENT and HOSPITAL.PATIENT.
    ( ii ) Reference these tables uniquely once they are created.
    There are no pre-existing tables or naming conventions that need to be adhered to in this situation.
    I have no problem creating tables in the current schema with EJB 3.0 Java Persistence.
    Any suggestions would be appreciated.
    Thanks,
    Jack

    Use the schema attribute of the Table annotation:
    package javax.persistence;

    @Target({TYPE}) @Retention(RUNTIME)
    public @interface Table {
        String name() default "";
        String catalog() default "";
        String schema() default "";
        UniqueConstraint[] uniqueConstraints() default {};
    }

  • Any way to use a single application but point it to different schemas?

    From the searching I have done, it appears that when an end user runs an application, it can only be associated with a pre-defined schema, which I guess I just need confirmation of. What I was hoping to do was either dynamically switch to a different schema at run time, or create an end user associated with a different schema than the one the application is tied to, so the user could use the one application but access a given schema.
    The scenario is that in our production environment we need to maintain a separate schema for each client we manage data for. But we'd like to maintain one application that end users could run against any one of the client schemas. However, it seems that we will need to make a copy of the application within the production workspace for each client schema (which would be the owning schema, if I understand how this works). Thus, if we have 7 different schemas we would have to have 7 copies of the application, one associated with each schema.
    Thanks in advance!
    Monte Malenke

    Thanks Scott for quick response.
    We will go with different workspace for each schema.
    Just to give you a quick background of my requirement: we have 3 Oracle E-Business Suite versions (11.5.8, 11.5.9 and R12 in the future) and APEX installed on a separate 11g database. We don't want to install APEX on the EBS database because of DB patching issues, because our EBS 11i database is version 9i, and for future Oracle EBS support; at the same time we want to use APEX 3.1.2 with 11g. We want to use APEX for custom EBS web UI development.
    We are planning to create a separate schema on the APEX database for each EBS instance, plus a DB link which points to the EBS database. Under each APEX schema we plan to create views (on our custom tables, which are the same across all EBS instances) using the DB link.
    We will develop the APEX application under one workspace and then deploy it to the other workspaces. We have also looked into creating APEX pages based on PL/SQL procedures so we can use dynamic SQL to query data from the EBS instances. But that is a lot of code (a PL/SQL API for every table) and we cannot take advantage of some of the APEX built-in wizards like master-detail, APEX record locking, etc.
    Based on your APEX experience, do you think this is the way we should go, or is there a better way?
    Thanks in advance
    RK

  • DB2 to Oracle conversion using SQL Developer Migration Wizard - different schemas

    I am performing a conversion from DB2 to Oracle 11 XE using the SQL Developer Migration Wizard. Specifically, I am trying to migrate the DB2User schema over to Oracle.
    Using the migration wizard, when I pick the Oracle target connection to be the same schema (DB2User) the migration is successful and all data is converted.
    However, if I pick a different Oracle target connection (say OracleUser), I run into issues.
    First, the table schema is not created. When I check the project output directory, the .out file has the following errors:
       CREATE USER DB2User IDENTIFIED BY DB2User DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP
            SQL Error: ORA-01031: insufficient privileges
            01031. 00000 -  "insufficient privileges"
        connect DB2User/DB2User
        Error report:
        Connection Failed
        Commit
        Connection created by CONNECT script command disconnected
    I worked around this by manually executing the .sql in the project output directory using the OracleUser id in the new DB.
    Then I continued with the migration wizard and performed the Move Data step.
    Now the step is reported as successful; however, when I review the Migrationlog.xml file, I see errors as follows:
    <level>SEVERE</level>
      <class>oracle.dbtools.migration.workbench.core.logging.MigrationLogUtil</class>
      <message>Failed to disable constraints: Data Move</message>
      <key>DataMove.DISABLE_CONSTRAINTS_FAILED</key>
      <catalog>&lt;null&gt;</catalog>
      <param>Data Move</param>
      <param>oracle.dbtools.migration.workbench.core.logging.LogInfo@753f827a</param>
      <exception>
        <message>java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist</message>
      <level>WARNING</level>
      <class>oracle.dbtools.migration.datamove.online.TriggerHandler</class>
      <message>ORA-01031: insufficient privileges
    </message>
    I think what is happening is that the wizard is attempting to perform the 'move data' process using the DB2User id.
    How do I tell the wizard that the target schema is different from the source schema?
    My requirement is that I need to be able to migrate the DB2User schema to different schemas in the same Oracle database
    (since we will have multiple test environments under the same database).
    Thanks in advance .
    K.

    Perhaps the following from the SQL Developer documentation is helpful for you:
    Command-Line Interface for Migration
    As an alternative to using the SQL Developer graphical interface for migration operations, you can use the migration batch file (Windows) or shell script (Linux) on the operating system command line. These files are located in the sqldeveloper\sqldeveloper\bin folder or sqldeveloper/sqldeveloper/bin directory under the location where you installed SQL Developer.
    migration.bat or migration.sh accepts these commands: capture, convert, datamove, delcaptured, delconn, delconverted, driver, generate, guide, help, idmap, info, init, lscaptured, lsconn, lsconverted, mkconn, qm, runsql, and scan. For information about the syntax and options, start by running migration without any parameters at the system command prompt. For example:
    C:\Program Files\sqldeveloper\sqldeveloper\bin>migration
    You can use the -help option for information about one or more actions. For the most detailed information, including some examples, use the -help=guide option. For example:
    C:\Program Files\sqldeveloper\sqldeveloper\bin>migration -help=guide
    Regards
    Wolfgang

  • Using different schemas

    Is there some easy way of switching between two database schemas?
    In the .jdo metadata files we have table references like this:
    <schema>.<table>
    Let's say that we for some reason have to change the name of
    the schema or have to execute the JDO code against different
    schemas. In these situations, we don't want to change the
    metadata files. Is there some property with a "default schema"
    or something similar? I tried to use the SchemaName property,
    but it didn't help.
    Thanks,
    Joakim Bissmark

    Joakim-
    The SchemaName property is meant to do that. Otherwise, if your driver allows you to choose the schema via a Driver property, you can use the com.solarmetric.kodo.ConnectionProperties property to configure the Connection (see the docs on this property for more details).
    If this does not help, can you provide more information about your
    database to us? The term "schema" means different things to different
    databases.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • HTML DB application using table in a different schema

    I need to create a new HTML DB application. The table is in a different schema than mine. The DBA granted me rights to see it, and I can see it from iSQL*Plus, but HTML DB does not offer this table for selection.

    This thread covers your issue.
    New to HTML DB, have questions about service admin and schemas
    Ask your DBA to add a workspace-to-schema mapping.
    Sergio

  • Can we capture changes made to the objects other than tables using streams

    Hello All,
    I have set up schema-level Streams replication using a local capture process. I can capture all the DML changes on tables but have some issues capturing DDL. Since Streams is used for sharing data between or within databases, I was wondering if we can replicate changes made to objects like views, procedures, functions and triggers at the source database. I am not able to replicate changes made to views in my setup.
    Also, when I run "select source_database, source_object_type, instantiation_scn from dba_apply_instantiated_objects", under the column 'object_type' I just see TABLE in all the rows selected.
    Thanks,
    Sunny boy

    Hello
    This could be a problem with your rules configured for capture, propagation or apply, or a problem with your instantiation.
    You can replicate functions, views, procedures, triggers etc. using Streams schema-level replication or by configuring the rules.
    Please note that objects like functions, views, procedures and triggers will not appear in the DBA_APPLY_INSTANTIATED_OBJECTS view. The reason is that when you do a schema-level instantiation, only the INSTANTIATION_SCN in DBA_APPLY_INSTANTIATED_SCHEMAS is recorded for these objects. At the same time the tables get recursively instantiated, so you see an entry for them in DBA_APPLY_INSTANTIATED_OBJECTS.
    It works fine for me. Please see the below from my database (database is 10.2.0.3):
    On the capture site:
    SQL> connect strmadmin/strmadmin
    Connected.
    SQL> select capture_name,rule_set_name,status from dba_capture;
    CAPTURE_NAME RULE_SET_NAME STATUS
    STREAMS_CAPTURE RULESET$_33 ENABLED
    SQL> select rule_name from dba_rule_set_rules where rule_set_name='RULESET$_33';
    RULE_NAME
    TEST41
    TEST40
    SQL> set long 100000
    SQL> select rule_condition from dba_rules where rule_name='TEST41';
    RULE_CONDITION
    ((:ddl.get_object_owner() = 'TEST' or :ddl.get_base_table_owner() = 'TEST') and
    :ddl.is_null_tag() = 'Y' and :ddl.get_source_database_name() = 'SOURCE.WORLD')
    SQL> select rule_condition from dba_rules where rule_name='TEST40';
    RULE_CONDITION
    ((:dml.get_object_owner() = 'TEST') and :dml.is_null_tag() = 'Y' and :dml.get_so
    urce_database_name() = 'SOURCE.WORLD')
    SQL> select * from global_name;
    GLOBAL_NAME
    SOURCE.WORLD
    SQL> conn test/test
    Connected.
    SQL> select object_name,object_type,status from user_objects;
    OBJECT_NAME OBJECT_TYPE STATUS
    TEST_NEW_TABLE TABLE VALID
    TEST_VIEW VIEW VALID
    PRC1 PROCEDURE VALID
    TRG1 TRIGGER VALID
    FUN1 FUNCTION VALID
    5 rows selected.
    On the apply site:
    SQL> connect strmadmin/strmadmin
    Connected.
    SQL> col SOURCE_DATABASE for a22
    SQL> select source_database,source_object_owner,source_object_name,source_object_type,instantiation_scn
    2 from dba_apply_instantiated_objects;
    SOURCE_DATABASE SOURCE_OBJ SOURCE_OBJECT_NAME SOURCE_OBJE INSTANTIATION_SCN
    SOURCE.WORLD TEST TEST_NEW_TABLE TABLE 9886497863438
    SQL> select SOURCE_DATABASE,SOURCE_SCHEMA,INSTANTIATION_SCN from
    2 dba_apply_instantiated_schemas;
    SOURCE_DATABASE SOURCE_SCHEMA INSTANTIATION_SCN
    SOURCE.WORLD TEST 9886497863438
    SQL> select * from global_name;
    GLOBAL_NAME
    TARGET.WORLD
    SQL> conn test/test
    Connected.
    SQL> select object_name,object_type,status from user_objects;
    OBJECT_NAME OBJECT_TYPE STATUS
    TEST_VIEW VIEW VALID
    PRC1 PROCEDURE VALID
    TRG1 TRIGGER VALID
    FUN1 FUNCTION VALID
    TEST_NEW_TABLE TABLE VALID
    5 rows selected.
    These functions, views, procedures and triggers were created on the source and got replicated automatically to the target site TARGET.WORLD. Note that none of these objects appear in the DBA_APPLY_INSTANTIATED_OBJECTS view.
    I used the rules given above for capture. For propagation I don't have a rule set at all, and for apply I have the same rules as the capture rules.
    Please verify your environment and let me know if you need further help.
    Thanks,
    Rijesh

  • MapViewer metadata problem - accessing spatial data in a different schema.

    I have a MapViewer application that uses data from three different schemas.
    1. Dynamic Themes come from schema A.
    2. Static Themes come from schema B.
    3. A newly added static theme in B whose data comes from schema C.
    The MapViewer datasource points to schema B, where the static themes, data and metadata are defined, while the dynamic themes have their own datasource specified as part of addJDBCTheme(...).
    To get the newly added map to work I've had to add a view in schema B that points to C instead of referencing the table directly, and I've had to add the metadata twice, once for schema B and once for schema C.
    If I put the metadata in just one of the two schemas I get the following errors.
    08/11/21 13:58:57 ERROR [oracle.sdovis.ThemeTable] cannot find entry in ALL_SDO_GEOM_METADATA table for theme: AMBITOS_REST
    08/11/21 13:58:57 ERROR [oracle.sdovis.ThemeTable] java.sql.SQLException: Invalid column index
    OR
    08/11/21 13:53:39 ERROR [oracle.sdovis.theme.pgtp] java.sql.SQLException: ORA-29902: error in executing ODCIIndexStart() routine
    ORA-13203: failed to read USER_SDO_GEOM_METADATA view
    It's not a big deal but I'd like to know if anyone else has had similar problems.
    Saludos,
    Lew.
    Edited by: Lew2 on Nov 21, 2008 6:42 AM

    Hi Lew,
    if you are using a recent version (10.1.3.1 or later) there is no need to use a view or to create the metadata in both schemas.
    You just need to grant select on the tables between the schemas.
    You can try the following. Assume you have the MVDEMO schema (from the MapViewer kit) and the SCOTT schema.
    1) Grant select on the MVDEMO Counties table to SCOTT:
    SQL> grant select on counties to scott;
    2) Now you are ready to create a predefined theme in schema SCOTT using the MVDEMO Counties table.
    - Open MapBuilder and load the SCOTT schema.
    - In the Data navigator (bottom-left tree), go to Geometry tables and you should see the MVDEMO node with the COUNTIES node inside it.
    - Start the wizard to create a geometry theme based on this Counties table.
    - At the end you should see that the base table name is MVDEMO.COUNTIES. MapViewer will therefore use the metadata in the MVDEMO schema, and there is no need to replicate it in the SCOTT schema.
    Joao
