CMR across Schemas

Hello
Is it possible to set up a CMR across different datasources? I'm trying to set up a CMR between two entities. The datasources for the entities are different. I can deploy the EAR, but when I try to access the child relation from the parent I get no results. No exceptions are thrown. It just seems as if the server cannot find the data.
Thanks, Tom

carrin wrote:
Does anyone know if you can have a container-managed relationship between entity beans in different packages (packages meaning jar files)?

No, it's a requirement of the EJB 2.0 spec that they be packaged together.
-- Rob
>
I have class A in a.jar and it has a logical relationship with class B in b.jar.
Both a.jar and b.jar have their own -ejb-.xml files.
I thought I read in WLS6 documentation that in order to have a CMR, both entity
beans needed to be declared in the same deployment descriptor. Has that changed
with WLS7?
Thanks.
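Since the reply above says the EJB 2.0 spec requires both entities in the same ejb-jar, a minimal sketch of how a CMR would be declared once both beans live in one descriptor may help; all bean and field names below are hypothetical, and vendor-specific mapping (e.g. WebLogic's RDBMS descriptor) is omitted:

```xml
<!-- ejb-jar.xml: both entity beans packaged in the SAME jar and descriptor.
     Names are placeholders, not taken from the original posts. -->
<ejb-jar>
  <enterprise-beans>
    <entity><ejb-name>ParentEJB</ejb-name><!-- ...bean details... --></entity>
    <entity><ejb-name>ChildEJB</ejb-name><!-- ...bean details... --></entity>
  </enterprise-beans>
  <relationships>
    <ejb-relation>
      <ejb-relation-name>Parent-Child</ejb-relation-name>
      <ejb-relationship-role>
        <ejb-relationship-role-name>parent-has-children</ejb-relationship-role-name>
        <multiplicity>One</multiplicity>
        <relationship-role-source><ejb-name>ParentEJB</ejb-name></relationship-role-source>
        <cmr-field>
          <cmr-field-name>children</cmr-field-name>
          <cmr-field-type>java.util.Collection</cmr-field-type>
        </cmr-field>
      </ejb-relationship-role>
      <ejb-relationship-role>
        <ejb-relationship-role-name>child-belongs-to-parent</ejb-relationship-role-name>
        <multiplicity>Many</multiplicity>
        <relationship-role-source><ejb-name>ChildEJB</ejb-name></relationship-role-source>
        <cmr-field><cmr-field-name>parent</cmr-field-name></cmr-field>
      </ejb-relationship-role>
    </ejb-relation>
  </relationships>
</ejb-jar>
```

Because both roles must name beans declared in the same `<enterprise-beans>` section, splitting the entities across two jars (each with its own descriptor) leaves the relationship with nothing to reference.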

Similar Messages

  • FK across schemas in Data Modeling

    Hello,
    how can I create a foreign key across schemas in SQL Developer Data Modeling? Must both tables be on the same relational model? Otherwise I cannot create the relation because both tables must be displayed on the same diagram in order to draw a FK relation between them. Or is there another way? If not, then this is very inconvenient as I must include all the tables from different schemas in one relational model.
    TIA,
    Peter

    Peter,
    Yes you need all the tables in one and the same relational diagram.
    However you can use the SUBVIEW facility.
    When you import one or more schemas from a catalog, one global relational model will be created, plus one subview per subschema. You can create the foreign key on the main relational diagram, or you can drag and drop the table you want the foreign key with onto the subview and create the relation there. A subview and the main view are always kept in sync.
    When you have created your own main relational diagram you can create one or more subviews yourself. To populate a subview you can drag and drop tables to it or you can use the "select neighbors" facility (click right on table) and then use "create subview from selected". You can also do a multiple select on the diagram (hold shift key) and create a subview from selected.
    Kind regards,
    René De Vleeschauwer
    SQL Developer Data Modeling team

  • Spatial Query across schemas: one version-enabled table, another not - hangs

    Hi,
    I am executing a PL/sql procedure where a Spatial query run across two schemas. One table(in x schema) is version enabled and second table(in y schema) is Unversioned. Add to that complexity I am running the procedure from third user logon. I think I have enough previleges, as I won't get any error message.
    But, Procedure worked fine when there is no table is version enabled. It started giving problem when one table got version enabled.
    I have tried by setting " DBMS_WM.Gotoworkspace('LIVE');" before running spatial query. But still no luck, process just hangs on the spatial query.
    I tried by using physical name of the Table (table1_LT) which is making it to work. But, as per Workspace manager guide, applications, programs should NOT use, this physical tables(because it is not the correct way on versioned table).
    1. How can I hint to my query, to use a table from only live version?
    2. Why Query is hanging forever (even tried by leaving it over night....)
    Normally it used to take one or two minutes(before versioning..)
    I have posted it Workspace manager forum, But No Luck (people seems to be shy away after seeing "Spatial query" )
    Any help is highly appriciated

    Hi,
    I will need to know more details about the specific query you are performing. So, please do the following:
    1. List the actual query that you are using.
    2. Generate an explain plan of the query both before and after the table was version-enabled. Use @?/rdbms/admin/utlxpls or anything that generates the predicate information.
    3. Also give any pertinent details about the table (size of the table, number of rows expected to be returned, column types in the table, etc.).
    Based on that, I will see if I can suggest a possible hint that may be able to improve the performance of your query.
    Regards,
    Ben

  • How to find invalid views across schemas

    I just started on a project that, well let's put it this way, has a messy database. It has multiple schemas and contains many views, synonyms and database links.
    I want to add a column to a table, but I want to make sure
    this doesn't invalidate any views somewhere in the database.
    Is there a way to check for invalid views across multiple schemas? Or better yet, how to find out beforehand what views in what schemas look at the table?
    Thanks,
    Tim

    To find out where the table is used, you can select:
    select owner, name, type
    from all_dependencies
    where referenced_owner = (select user from dual)
    and referenced_name = 'TABLENAME'
    and referenced_type = 'TABLE';
    It not only shows views but also shows if the table is used inside a trigger, and so on.
    To find out what objects are invalid:
    select owner, object_name, object_type
    from all_objects
    where status = 'INVALID';
    regards
    Anna
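    Combining the two queries above, a sketch that recompiles any invalid views that depend on the table is possible; the table name is a placeholder, and you need privileges to alter the views:

    ```sql
    -- Hypothetical sketch: recompile invalid views that depend on MYTABLE.
    -- Owner/table names are placeholders, not from the original post.
    begin
      for v in (select d.owner, d.name
                from   all_dependencies d
                join   all_objects o
                       on  o.owner       = d.owner
                       and o.object_name = d.name
                       and o.object_type = 'VIEW'
                where  d.referenced_name = 'MYTABLE'
                  and  d.referenced_type = 'TABLE'
                  and  d.type            = 'VIEW'
                  and  o.status          = 'INVALID')
      loop
        execute immediate 'alter view "'||v.owner||'"."'||v.name||'" compile';
      end loop;
    end;
    /
    ```

    Running this after adding the column shows immediately which dependent views, if any, fail to recompile.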

  • Table does not exist when creating FK Constraint across schemas

    Hi all,
    This will probably boil down to a permissions issue since I'm sketchy on the various levels....
    I'm testing a conversion to Oracle from our legacy system. There are 4 schemas which I've created, and each of those schema users has been granted the DBA role.
    After creating a number of tables, I wrote the SQL to create the FK constraints. Most of them went in, but the ones crossing schemas don't. Logged in as SYS, I can do a select from each table; I can even JOIN the two in the SELECT. However, when I try creating the constraint it gives me: ORA-00942: table or view does not exist
    ALTER TABLE USERA.TABLEA ADD FOREIGN KEY (COLA) REFERENCES USERB.TABLEB (COLA) ON DELETE CASCADE
    Again, I have scads of commands that went in correctly so this must be a permissions type thing. I'm the only one logged into the database since it's my own test system. This is 10g BTW.
    If you have any suggestions as what to look into, please explain how to actually perform the checks since I'm still learning how to get around.
    Thanks very much!

    To bulk grant, you can use dynamic SQL; something like this:
    SQL> declare
      2    l_grantor VARCHAR2(30) := 'USERA';
      3    l_grantee VARCHAR2(30) := 'USERB';
      4  begin
      5    for table_rec in (select owner,table_name from all_tables where OWNER=l_grantor) loop
      6      execute immediate 'GRANT REFERENCES ON '||table_rec.OWNER||'.'||table_rec.TABLE_NAME||' TO '||l_grantee;
      7    end loop;
      8  end;
      9  /
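    For a single table, the equivalent one-off grant would look something like the sketch below (the REFERENCES privilege must be granted directly to the user, not via a role, for it to work in DDL):

    ```sql
    -- Connected as USERB: allow USERA to reference TABLEB in a foreign key.
    GRANT REFERENCES ON USERB.TABLEB TO USERA;

    -- Then the original statement from the question should succeed:
    ALTER TABLE USERA.TABLEA ADD FOREIGN KEY (COLA)
      REFERENCES USERB.TABLEB (COLA) ON DELETE CASCADE;
    ```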

  • Blocking DDL Queries Across Schemas

    I am experiencing a problem where DDL statements executed on two different schemas seem to be blocking each other and hence taking longer to execute. If they are not executed in parallel, they complete within the expected time.
    What pointers should I research or look into?
    P.S. We are using a 64-bit Oracle 11gR2 Standard Edition RAC server.

    You are saying that parallel execution of DDL statements blocks the other session's request; so, as sb has said, post your DDL statements, how and when you are running them, and what error (if any) you are getting. Provide as much information as you can (none of us here can see your monitor, what exactly you are doing, or what exactly Oracle is saying). There are many ready-made SQL scripts available on this forum and/or Google; search first and post your effort to get the solution, then we will try our best to give you a solution.
    Not something like, "Doctor (forum members), today i am not feeling well (DDL queries are blocking each other), please give me some medicine (some sql to solve the question)"
    Regards
    Girish Sharma

  • Call procedure across schema

    I have Oracle 8i.
    I have multiple identical schemas for different countries.
    I wrote one procedure in one schema (country_1) and granted EXECUTE on it to all the others.
    Now I am connected to country_2, which doesn't have the procedure, and I am calling the procedure of country_1 (which has the procedure).
    Up to this point everything is fine, but the problem is that I want the procedure to execute against the currently connected schema (country_2); instead it executes against country_1 (where the actual procedure is).
    Thanks,
    prathesh

    You need to add AUTHID CURRENT_USER to your create procedure command.
    From the documentation
    invoker_rights_clause
    The invoker_rights_clause lets you specify whether the procedure executes with the privileges and in the schema of the user who owns it or with the privileges and in the schema of CURRENT_USER.
    This clause also determines how Oracle resolves external names in queries, DML operations, and dynamic SQL statements in the procedure.
    TTFN
    John
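    A minimal sketch of the change, with a hypothetical procedure name and body:

    ```sql
    -- Invoker-rights procedure: unqualified names resolve in the schema of
    -- whoever calls it (e.g. COUNTRY_2), not the owner (COUNTRY_1).
    -- Procedure and table names are hypothetical.
    CREATE OR REPLACE PROCEDURE update_rates
      AUTHID CURRENT_USER
    AS
    BEGIN
      -- "rates" resolves to the caller's RATES table at run time
      UPDATE rates SET updated_on = SYSDATE;
    END;
    /
    ```

    Without AUTHID CURRENT_USER the default is definer's rights, which is exactly the behavior described: the procedure always runs against the owning schema.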

  • Different TIMEZONE Set up across schemas

    Hi,
    Is it possible to have different TIMEZONE or date/time settings for different schemas of the same database?
    Thanks
    Kapil

    Hi,
    Thanks for the reply.
    By different schemas here I mean different users and their respective tables.
    Actually we will have one database and multiple schemas (SCHEMA1 and SCHEMA2).
    Now a user in country X will make changes in SCHEMA1, and the time of country X should be reflected in the tables.
    And a user in country Y will make changes in SCHEMA2, and the time of country Y should be reflected in the tables.
    I hope this clarifies my question.
    Thanks
    Kapil
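    For context: in Oracle the time zone is a property of the session (and of the database for TIMESTAMP WITH LOCAL TIME ZONE columns), not of a schema. One common approach, sketched below with hypothetical zone and table names, is to have each country's application set its own session time zone at connect time and store time-zone-aware timestamps:

    ```sql
    -- Connection used by the country-X application (writes to SCHEMA1):
    ALTER SESSION SET TIME_ZONE = 'Europe/London';

    -- Connection used by the country-Y application (writes to SCHEMA2):
    ALTER SESSION SET TIME_ZONE = 'Asia/Kolkata';

    -- A column of this type is normalized to the database time zone on
    -- storage and rendered in each session's time zone on retrieval:
    CREATE TABLE schema1.audit_log (
      changed_at TIMESTAMP WITH LOCAL TIME ZONE DEFAULT SYSTIMESTAMP
    );
    ```

    Note that SYSDATE always reflects the database server's operating-system time regardless of session settings; CURRENT_TIMESTAMP and LOCALTIMESTAMP honor the session time zone.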

  • Moving Subpartitions to a duplicate table in a different schema.

    NOTE: I asked this question on the PL/SQL and SQL forum, but have moved it here as I think it's more appropriate to this forum. I've placed a pointer to this post on the original post.
    Hello Ladies and Gentlemen.
    We're currently involved in an exercise at my workplace where we are in the process of attempting to logically organise our data by global region. For information, our production database is currently at version 10.2.0.3 and will shortly be upgraded to 10.2.0.5.
    At the moment, all our data 'lives' in the same schema. We are in the process of producing a proof of concept to migrate this data to identically structured (and named) tables in separate database schemas; each schema to represent a global region.
    In our current schema, our data is range-partitioned on date, and then list-partitioned on a column named OFFICE. I want to move the OFFICE subpartitions from one schema into an identically named and structured table in a new schema. The tablespace will remain the same for both identically-named tables across both schemas.
    Do any of you have an opinion on the best way to do this? Ideally in the new schema, I'd like to create each new table as an empty table with the appropriate range and list partitions defined. I have been doing some testing in our development environment with the EXCHANGE PARTITION statement, but this requires the destination table to be non-partitioned.
    I just wondered if, for partition migration across schemas with the table name and tablespace remaining constant, there is an official "best practice" method of accomplishing such a subpartition move neatly, quickly and elegantly?
    Any helpful replies welcome.
    Cheers.
    James

    You CAN exchange a subpartition into another table using a "temporary" (staging) table as an intermediary.
    See :
    SQL> drop table part_subpart purge;
    Table dropped.
    SQL> drop table NEW_part_subpart purge;
    Table dropped.
    SQL> drop table STG_part_subpart purge;
    Table dropped.
    SQL>
    SQL> create table part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition p_1 values less than (10) (subpartition p_1_s_1 values ('A'), subpartition p_1_s_2 values ('B'), subpartition p_1_s_3 values ('C'))
      5  ,
      6  partition p_2 values less than (20) (subpartition p_2_s_1 values ('A'), subpartition p_2_s_2 values ('B'), subpartition p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create index part_subpart_ndx on part_subpart(col_1) local;
    Index created.
    SQL>
    SQL>
    SQL> insert into part_subpart values (1,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'C');
    1 row created.
    SQL> insert into part_subpart values (11,'A');
    1 row created.
    SQL> insert into part_subpart values (11,'C');
    1 row created.
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create table NEW_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition n_p_1 values less than (10) (subpartition n_p_1_s_1 values ('A'), subpartition n_p_1_s_2 values ('B'), subpartition n_p_1_s_3 values ('C'))
      5  ,
      6  partition n_p_2 values less than (20) (subpartition n_p_2_s_1 values ('A'), subpartition n_p_2_s_2 values ('B'), subpartition n_p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create table STG_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  /
    Table created.
    SQL>
    SQL> -- ensure that the Staging table is empty
    SQL> truncate table STG_part_subpart;
    Table truncated.
    SQL> -- exchanging a subpart out of part_subpart
    SQL> alter table part_subpart exchange subpartition
      2  p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL> -- exchanging the subpart into NEW_part_subpart
    SQL> alter table NEW_part_subpart exchange subpartition
      2  n_p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL>
    SQL>
    SQL> select * from NEW_part_subpart subpartition (n_p_2_s_1);
         COL_1 COL_2
            11 A
    SQL>
    SQL> select * from part_subpart subpartition (p_2_s_1);
    no rows selected
    SQL>
    I have exchanged subpartition p_2_s_1 out of the table part_subpart into the table NEW_part_subpart -- even with a different name for the subpartition (n_p_2_s_1) if so desired.
    NOTE : Since your source and target tables are in different schemas, you will have to move (or copy) the staging table STG_part_subpart from the first schema to the second schema after the first "exchange subpartition" is done. You will have to do this for every subpartition to be exchanged.
    Hemant K Chitale
    Edited by: Hemant K Chitale on Apr 4, 2011 10:19 AM
    Added clarification for cross-schema exchange.

  • Create materialized view throws insufficient privileges in single schema

    I'm trying to create a complex materialized view in my own schema, using only tables from my own schema:
    CREATE MATERIALIZED VIEW MYSCHEMA.MYMVIEW AS
        SELECT
            A.ONE_THING,
            B.ANOTHER_THING,
            C.A_THIRD_THING
        FROM MYSCHEMA.TABLEA A
        JOIN MYSCHEMA.TABLEB B
            ON A.COL1 = B.COL1
        JOIN MYSCHEMA.TABLEC C
            ON A.COL2 = C.COL2
    The line JOIN MYSCHEMA.TABLEB B throws an ORA-01031: insufficient privileges error, highlighting TABLEB.
    I understand that grants need to be explicit on tables when creating objects across schemas, but this code operates within my own schema only; I created and own all the tables in this schema.
    What's going on?
    Thanks!

    Perhaps it is the tool that I am using that highlights the wrong item, because as it turns out, I don't have the CREATE MATERIALIZED VIEW privilege after all (the privilege was mistakenly granted to a different user instead of to me).
    SELECT * FROM SESSION_PRIVS; returns CREATE VIEW, but does not return CREATE MATERIALIZED VIEW.
    Sorry to have wasted your time.

  • Anyone know of a good DATA diff tool? assume schemas are the same...

    Hi,
    I was wondering if anyone has used or come across any tools that compare data across schemas. We can assume the table structures in the 2 schemas are identical and we just want to detect data differences between them.
    I can think of a way to do this with some PL/SQL but thought the problem was common enough that a tool might already be available.
    Oh - and I need to wrap processing around this so it would need some sort of API (i.e. I'm not looking for a GUI)
    Thanks!
    Message was edited by:
    ennisb

    Thanks.
    I'm on 10G and this looks to be an 11G tool?
    But this comment made me unsure...
    Can be used to compare tables, views, and materialized views backward compatible to 10gR1 due to need for ORA_HASH.
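    For reference, a plain-SQL symmetric difference between two identically structured tables can be built from MINUS, which works on 10g with no extra tools; table names below are hypothetical, and note that set operators do not support LOB columns:

    ```sql
    -- Rows present in one schema's copy but not the other, tagged with
    -- their origin. Schema and table names are placeholders.
    SELECT 'ONLY_IN_SCHEMA_A' AS src, t.*
    FROM   (SELECT * FROM schema_a.customers
            MINUS
            SELECT * FROM schema_b.customers) t
    UNION ALL
    SELECT 'ONLY_IN_SCHEMA_B' AS src, t.*
    FROM   (SELECT * FROM schema_b.customers
            MINUS
            SELECT * FROM schema_a.customers) t;
    ```

    Wrapped in a procedure that loops over USER_TABLES and builds the statement dynamically, this gives a crude but API-driven diff without a GUI tool.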

  • Single schema or multiple schemas

    Hello
    A few years ago I worked on a greenfield project where we were building a system to serve 20 or so different departments with 1200+ users. The approach the existing DBA had taken was to use a single schema for all objects and for all apps. The big advantage I found with this approach was for development, we could have one database and each developer could develop whatever they wanted in their own schema. Name resolution meant that they could override the main copy of whichever object they were working on just by having it in their schema. Unfortunately the project went nowhere and I left after 3 months so I never got to see what issues were raised with the system in production. So I'm wondering, has anyone else taken this approach and if so what would you say are the main things to be wary of? Especially things that aren't a problem when the objects are distributed between multiple schemas.
    Thank you in advance
    David

    user12065404 wrote:
    Hi Ed
    Thank you for your reply. I think I need to clarify what I meant a little.
    On a number of sites I've been to, there have been multiple applications spread out over multiple schemas for the same business unit. To draw on the emp example, it would be like having the EMP schema containing the emp table and the DEPT schema containing the dept table, with a single HR application (for a single business unit/department) selecting from both. If you scale this up to 000s of tables and 30-odd schemas, it's a bit of a nightmare from the perspective of code location and permissions, because each department needs to look across schemas to get all of the data to do their job. Effectively, tables and code had become segregated into separate schemas by business "function", and it means that the applications are so intertwined with the different schemas that to have more than one business unit you either need a completely separate database or you have to look at VPD.
    App         SCHEMA      Table
    HR          emp         emp
                dept        dept
    Invoicing   emp         emp
                invoicing   inv
                contracts   contract
                orders      ORDER
    Ordering    emp         emp
                contracts   contract
                orders      ORDER
                orders      orderitem
    ...
    The advantage I saw with a single schema for all the data for a single business unit is the one you have mentioned - i.e. you can very easily set up a new business unit that is totally separate from the others by having a new schema and pointing the same version of the same application at it. Or, failing that, using VPD. And on top of that, security is simplified - no need for direct grants from one schema to the other, and especially no need for WITH GRANT OPTION in the case of a view that pulls data from various schemas where rights to select from the view have to be granted to a role.
    App         SCHEMA      Table
    HR          abc_ltd     emp
                abc_ltd     dept
    Invoicing   abc_ltd     emp
                abc_ltd     inv
                abc_ltd     contract
                abc_ltd     ORDER
    Ordering    abc_ltd     emp
                abc_ltd     contract
                abc_ltd     ORDER
                abc_ltd     orderitem
    ...
    The other big advantage I saw was from a development and testing perspective. You can have one testing database supporting lots of developers working on lots of different projects against the same core data set and code. The multiple-schema setup I described above means that effectively you need one testing database per project - which becomes unmanageable when databases start growing to the TB range.
    David
    Edited by: user12065404 on 26-Mar-2010 08:30
    Saw a typo in my first example

    OK, now it is a lot more clear.
    What you have here is a problem of some data (employee) being enterprise data, and other data being application data.
    I take it your "invoicing" app is used by the sales side of the business, and "ordering" is used by the purchasing side (my last assignment when I was an apps analyst was purchasing/inventory control for an auto manufacturer).
    With what I know at this point, I'd have an HR schema, an INVOICING schema, and an ORDERING schema. (Well, actually, I'd have SALES and PURCHASING, because INVOICING and ORDERING are going to have to integrate with other stuff in those areas.) Even though invoicing and ordering both have a CONTRACT table, I'd think the properties of a contract for one would be different from the other, so the table design itself would be different. How about tables SALES_CONTRACT and PURCHASING_CONTRACT? Same concept for ORDERS, and both sales and purchasing would need header and details: SALES_ORDER_HEADER, SALES_ORDER_DETAIL, PURCH_ORDER_HEADER, PURCH_ORDER_DETAIL.
    These two apps' need for employee data should be met by pulling directly from the HR emp table, with access limited to stored procedures that control what data is exposed, or from materialized views. Never try to maintain the same data in two places at once; it will never stay consistent.
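    One way to sketch the "pull from HR, don't copy" advice is a read-only view owned by HR, with SELECT granted to the application schemas; all names below are hypothetical:

    ```sql
    -- Owned by HR: expose only the employee columns the other apps need.
    CREATE OR REPLACE VIEW hr.emp_directory AS
    SELECT empno, ename, deptno
    FROM   hr.emp;

    -- Let the application schemas read the view, never the base table.
    GRANT SELECT ON hr.emp_directory TO invoicing, ordering;
    ```

    The HR data then lives in exactly one place, and what each application can see is controlled at the view boundary.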

  • Constraints From another schema

    Hi Guys,
    I have a table referral_contacts which has a column customer_id. This column comes from a synonym whose base table is in another schema.
    The current requirement is to establish a referential constraint to that table. I think there is no way to add a constraint against a synonym whose base table is in another schema. Is there any method to achieve this without creating the same table?
    Any suggestions or comments are highly appreciated.
    Thanks,
    Prafulla

    Hi Prafulla
    It will be very tough to maintain in the long term if you create cross-reference constraints across schemas. That won't be a good design unless the same application owns both schemas. If each schema is used by a different application, and only the second schema is populated in real time with the data needed by the other application, then you can create a new table and refresh it from the other schema in real time as well. You can check materialized views (fast refresh, incremental refresh, using rowid, etc.) for this purpose, or you can look at various other options (a trigger on the second schema to populate the new table in your schema and keep it in sync).
    Just for your information, materialized views are a feature of EE. Tell us more about your environment.
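    A sketch of the materialized-view option mentioned above, assuming the source schema has granted SELECT on the base table and a materialized view log exists for fast refresh (all names are hypothetical):

    ```sql
    -- In the source schema (OTHER_SCHEMA): enable fast refresh.
    CREATE MATERIALIZED VIEW LOG ON other_schema.customers
      WITH PRIMARY KEY;

    -- In your schema: a local, refreshable copy that a real foreign key
    -- on referral_contacts could reference.
    CREATE MATERIALIZED VIEW customers_mv
      REFRESH FAST ON DEMAND
    AS SELECT customer_id, name FROM other_schema.customers;

    -- Refresh incrementally as needed (or schedule it):
    EXEC DBMS_MVIEW.REFRESH('CUSTOMERS_MV', 'F');
    ```

    ON COMMIT refresh is also possible within one database, but when the master table belongs to another schema it additionally requires the ON COMMIT REFRESH object privilege.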

  • Copying tables between schema owners

    In Timesten, can you copy tables across schemas/owners?
    i.e. OWNER_A.TABLE_Y to OWNER_B.TABLE_Y
    Where TABLE_Y has the same definition? Basically, I'd like to be able to backup one datastore and restore it in another datastore that has the same table definitions, but may have a different owner name.
    Thanks,
    Larry

    I'm not completely clear on exactly what you are looking to do. On one hand you ask about copying tables between schemas. This is easily done:
    CREATE TABLE OWNER_B.TABLE_Y AS SELECT * FROM OWNER_A.TABLE_Y;
    This only works for TimesTen tables that are not part of a cache group; specifically, the source table can be part of a cache group but the target table cannot. If the target table is part of a cache group then you need to:
    1. Create the cache group containing the target (cached) table.
    2. INSERT INTO OWNER_B.TABLE_Y SELECT * FROM OWNER_A.TABLE_Y;
    But then you mention backup and restore. Since TimesTen backup/restore (ttBackup/ttRestore) works at a physical level, you cannot rename/copy tables as part of that. You might be able to use ttMigrate with the -rename oldOwner:newOwner option, but there are some constraints around this (one being that PL/SQL cannot be enabled in the database).
    Chris

  • Using DBMS_METADATA

    I'm trying to get to grips with DBMS_METADATA to transfer table DDL across schemas, making some modifications in the process.
    Firstly, I'm not achieving table recreation using DBMS_METADATA.PUT. Example 18-7 from the Utilities manual is giving an "ORA-00942: Table does not exist" error, and my own attempt below does not result in a copy of the table being created. Can anyone point out what the problem is?
    xdba@ora10gr2 > select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    PL/SQL Release 10.2.0.1.0 - Production
    CORE    10.2.0.1.0      Production
    TNS for Linux: Version 10.2.0.1.0 - Production
    NLSRTL Version 10.2.0.1.0 - Production
    xdba@ora10gr2 > create user king identified by lion;
    User created.
    xdba@ora10gr2 > grant connect, resource to king;
    Grant succeeded.
    xdba@ora10gr2 > declare
      2
      3    src   number;
      4    destn number;
      5
      6    xml   xmltype;
      7    ddls  sys.ku$_ddls;
      8
      9    txh number;
    10
    11    created boolean;
    12    errs    sys.ku$_SubmitResults := sys.ku$_SubmitResults();
    13
    14    dbms_metadata_failure exception;
    15
    16  begin
    17
    18    src := dbms_metadata.open('TABLE');
    19
    20    dbms_metadata.set_filter(src, 'SCHEMA', 'SCOTT');
    21    dbms_metadata.set_count(src, 3);
    22
    23    txh := dbms_metadata.add_transform(src, 'MODIFY');
    24    dbms_metadata.set_transform_param(txh, 'OBJECT_ROW', 3);
    25    dbms_metadata.set_remap_param(txh, 'REMAP_SCHEMA', 'SCOTT', 'KING');
    26
    27    txh := dbms_metadata.add_transform(src, 'DDL');
    28
    29    dbms_metadata.set_transform_param(txh, 'REF_CONSTRAINTS', false);
    30    dbms_metadata.set_transform_param(txh, 'CONSTRAINTS', false);
    31    dbms_metadata.set_transform_param(txh, 'STORAGE', false);
    32    dbms_metadata.set_transform_param(txh, 'SEGMENT_ATTRIBUTES', false);
    33
    34    ddls := dbms_metadata.fetch_ddl(src);
    35
    36    for i in ddls.first()..ddls.last()
    37    loop
    38      dbms_output.put_line(ddls(i).ddlText);
    39    end loop;
    40
    41    destn := dbms_metadata.openw('TABLE');
    42
    43    created := dbms_metadata.put(destn, xml, 0, errs);
    44
    45    dbms_metadata.close(destn);
    46
    47    if not created
    48    then
    49      raise dbms_metadata_failure;
    50    end if;
    51
    52    dbms_metadata.close(src);
    53
    54  exception
    55
    56    when dbms_metadata_failure
    57    then
    58      for i in errs.first..errs.last
    59      loop
    60        for j in errs(i).errorlines.first..errs(i).errorlines.last
    61        loop
    62          dbms_output.put_line(errs(i).errorlines(j).errortext);
    63        end loop;
    64      end loop;
    65      raise;
    66
    67  end;
    68  /
    CREATE TABLE "KING"."EMP"
       (    "EMPNO" NUMBER(4,0),
            "ENAME" VARCHAR2(10),
            "JOB" VARCHAR2(9),
            "MGR" NUMBER(4,0),
            "HIREDATE"
    DATE,
            "SAL" NUMBER(7,2),
            "COMM" NUMBER(7,2),
            "DEPTNO" NUMBER(2,0)
    PL/SQL procedure successfully completed.
    xdba@ora10gr2 > connect king/lion@ora10gr2
    Connected.
    king@ora10gr2 > select dbms_metadata.get_ddl('TABLE', 'EMP') from dual;
    ERROR:
    ORA-31603: object "EMP" of type TABLE not found in schema "KING"
    ORA-06512: at "SYS.DBMS_METADATA", line 1546
    ORA-06512: at "SYS.DBMS_METADATA", line 1583
    ORA-06512: at "SYS.DBMS_METADATA", line 1901
    ORA-06512: at "SYS.DBMS_METADATA", line 2792
    ORA-06512: at "SYS.DBMS_METADATA", line 4333
    ORA-06512: at line 1
    Secondly, does anyone have experience of performing more complex transformations with DBMS_METADATA using external XSLT? Are there any examples anywhere?

    I've now achieved table recreation in example 18-7 and my own code above. This does not work when XMLType storage is used, but does when CLOB is used. It is not clear to me from these experiments and the documentation if there is intended to be a difference between XMLType/dbms_metadata.fetch_xml() and CLOB/dbms_metadata.fetch_clob, and if so, what this is?
    I have also succeeded in applying more complex transformations using XSLT (changing table names, tablespaces; adding columns). However, I'm concerned that the dbms_metadata XML format seems to be undocumented.
    Has there been any advance since XML Schema for Oracle DDL statements??
    Can anyone shed any light on the use of the various <SPAREn> elements, for example? Valid values for <COL_LIST_ITEM>...<TYPE_NUM>?
