How to handle Index Organized Tables in schema capture?

I'm trying to set up schema-level Streams between two databases (source 10.2.0.4, destination 11g), one-way Streams.
The source has index-organized tables with a ROWID column, and Oracle Streams doesn't support the ROWID data type.
Is there a workaround?
Thanks

At the moment, you can't do it declaratively. You have to do it in an event handler. Assuming you have the userid set up as a query parameter in the view object, something like this should get you started:
public EventResult handleEvent(
    BajaContext context, Page page, PageEvent event) throws Throwable
{
  HttpSession session = context.getServletRequest().getSession(true);
  ViewObject view = ServletBindingUtils.getViewObject(context);
  String userid = (String) session.getAttribute("userid");
  view.setWhereClauseParam(0, userid);
  view.executeQuery();
  return null;  // return an appropriate EventResult for the page flow if needed
}

Similar Messages

  • Index Organized Tables

    What is a logical rowid in an IOT? Are logical rowids stored somewhere physically, just like physical rowids?
    What are secondary indexes?
    What is meant by leaf block splits? When and how do they happen?
    Also, the primary key constraint for an index-organized table cannot be dropped, deferred, or disabled. Is that true? If yes, then why?
    How does overflow work? How are the two clauses PCTTHRESHOLD and INCLUDING implemented, and how do they work?

    I'm sort-of tempted to just point you in the direction of the official documentation (the concepts guide would be a start. See http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#sthref759)
    But I would say one or two other things.
    First, physical rowids are not stored physically. I don't know why you'd think they were. The ROWID data type can certainly be used to store a rowid if you choose to do so, but if you do something like 'select rowid from scott.emp', for example, you'll see rowids that are generated on-the-fly. ROWID is a pseudo-column, not physically stored anywhere, but computed whenever needed.
    The difference between a physical rowid and a logical one used with IOTs comes down to a bit of relational database theory. It is a cast-iron rule of relational databases that a row, once inserted into a table, must never move. That is, the rowid it is assigned at the moment of its first insertion, must be the rowid it 'holds onto' for ever and ever. If you ever want to change the rowids assigned to rows in an ordinary table, you have to export them, truncate the table and then re-insert them: fresh insert, fresh rowid. (Oracle bends this rule for various maintenance and management purposes, whereby 'enable row movement' permits rows to move within a table, but the general case still applies mostly).
    That rule is obviously hopeless for index structures. Were it true, an index entry for 'Bob' who gets updated to 'Robert' would find itself next to entries for 'Adam' and 'Charlie', even though it now has an 'R' value. Effectively, a 'b' "row" in an index must be allowed to "move" to an 'r' sort of block if that's the sort of update that takes place. (In practice, an update to an index entry consists of performing a delete followed by a re-insert, but the physicalities don't change the principle: "rows" in an index must be allowed to move if their value changes; rows in a table don't move, whatever happens to their values)
    An IOT is, at the end of the day, simply an index with a lot more columns in it than a "normal" index would have -so it, too, has to allow its entries (its 'rows', if you like) to move. Therefore, an IOT cannot use a standard ROWID, which is assigned once and forever. Instead, it has to use something which takes account of the fact that its rows might wander. That is the logical rowid. It's no more "physical" than a physical rowid -neither is physically stored anywhere. But a 'physical' rowid is invariant; a logical one is not. The logical one is actually constructed in part from the primary key of the IOT -and that's the main reason why you cannot ever get rid of the primary key constraint on the IOT. Being allowed to do so would equate to allowing you to destroy the one organising principle for its contents that an IOT possesses.
    (See the section entitled "The ROWID Pseudocolumn" and following on this page: http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1845)
    So IOTs have their data stored in them in primary key order. But they don't contain just the primary key: they contain every other column in the 'table definition' too. Therefore, just like with an ordinary table, you might sometimes want to search for data on columns which are NOT part of the primary key -and in that case, you might well want these non-primary key columns to be indexed. Therefore, you will create ordinary indexes on these columns -at this point, you're creating an index on an index, really, but that's a side issue, too! These extra indexes are called 'secondary indexes', simply because they are 'subsidiary indexes' to the main one, which is the "table" itself arranged in primary key order.
    Finally, a leaf block split is simply what happens when you have to make room for new data in an index block which is already packed to the rafters with existing data. Imagine an index block can only contain four entries, for example. You fill it with entries for Adam, Bob, Charlie, David. You now insert a new record for 'Brian'. If this was a table, you could throw Brian into any new block you like: data in a table has no positional significance. But entries in an index MUST have positional significance: you can't just throw Brian in amongst the middle of a lot of Roberts, Susans and Tanyas. Brian HAS to go in between the existing entries for Bob and Charlie. Yet you can't just put him in the middle of those two, because then you'd have five entries in a block, not four, which we imagined for the moment to be the maximum allowed. So what to do? What you do is: obtain a new, empty block. Move Charlie and David's entries into the new block. Now you have two blocks: Adam-Bob and Charlie-David. Each only has two entries, so each has two 'spaces' to accept new entries. Now you have room to add in the entry for Brian... and so you end up with Adam-Bob-Brian and Charlie-David.
    The process of moving some index entries out of one block into a new one so that there's room to allow new entries to be inserted in the middle of existing ones is called a block split. They happen for other reasons, too, so this is just a gloss treatment of them, but they give you the basic idea. It's because of block splits that indexes (and hence IOTs) see their "rows" move: Charlie and David started in one block and ended up in a completely different block because of a new (and completely unrelated to them) insert.
    Very finally, overflow is simply a way of splitting off data into a separate table segment that wouldn't sensibly be stored in the main IOT segment itself. Suppose you create an IOT containing four columns: one, a numeric sequence number; two, a varchar2(10); three, a varchar2(15); and four, a blob. Column 1 is the primary key.
    The first three columns are small and relatively compact. The fourth column is a blob data type -so it could be storing entire DVD movies, multi-gigabyte-sized monsters. Do you really want your index segment (for that is what an IOT really is) to balloon to huge sizes every time you add a new row? Probably not. You probably want columns 1 to 3 stored in the IOT, but column 4 can be bumped off over to some segment on its own (the overflow segment, in fact), and a link (actually, a physical rowid pointer) can link from the one to the other. Left to its own devices, an IOT will chop off every column after the primary key one when a record which threatens to consume more than 50% of a block gets inserted. However, to keep the main IOT small and compact and yet still contain non-primary key data, you can alter these default settings. INCLUDING, for example, allows you to specify which last non-primary key column should be the point at which a record is divided between 'keep in IOT' and 'move out to overflow segment'. You might say 'INCLUDING COL3' in the earlier example, so that COL1, COL2 and COL3 stay in the IOT and only COL4 overflows. And PCTTHRESHOLD can be set to, say, 5 or 10 so that you try to ensure an IOT block always contains 10 to 20 records -instead of the 2 you'd end up with if the default 50% kicked in.
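    To make the overflow discussion concrete, here is a minimal sketch (the table and column names are made up for illustration) of an IOT that keeps its first three columns in the index segment, pushes the BLOB out to an overflow segment, and has a secondary index on a non-key column:
    CREATE TABLE demo_iot
    ( id     NUMBER(10)    NOT NULL,
      code   VARCHAR2(10),
      descr  VARCHAR2(15),
      doc    BLOB,
      CONSTRAINT demo_iot_pk PRIMARY KEY (id)
    )
    ORGANIZATION INDEX
    PCTTHRESHOLD 10      -- a row may occupy at most 10% of an index block
    INCLUDING descr      -- columns up to and including DESCR stay in the IOT
    OVERFLOW;            -- everything after DESCR (here, DOC) goes to the overflow segment
    CREATE INDEX demo_iot_code_ix ON demo_iot (code);   -- a 'secondary index'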

  • Finding Parent Table of Index Organized Table

    I am using Oracle 10g Release 2 and faced an error message while granting select privileges: ORA-25191: cannot reference overflow table of an index-organized table.
    I searched for a solution and found: "Issue the statement against the parent index-organized table containing the specified overflow table."
    The question is: how can I find the parent index-organized table containing the specified overflow table?
    Further, I faced another error while granting select privileges: ORA-22812: cannot reference nested table column's storage table.
    The solution is the same, and I found the parent table, first granted select privileges on that parent table, then tried to grant select privileges on the required table to public, and it did not work.
    If somebody could help me!

    And just what does this problem have to do with either the SQL and/or PL/SQL languages?
    Try the [url http://forums.oracle.com/forums/forum.jspa?forumID=61&start=0]Community Discussion Forums » Database » Database - General forum.
    Also suggest that you read up on just what an IOT is, and where and when overflows apply. See the [url http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref1060]Oracle® Database Concepts guide.
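    For what it's worth, the parent IOT of a given overflow segment can usually be read from the data dictionary: IOT_NAME in the *_TABLES views names the owning index-organized table. A sketch (the overflow table name below is a made-up example):
    SELECT table_name, iot_name
      FROM all_tables
     WHERE iot_type   = 'IOT_OVERFLOW'
       AND table_name = 'SYS_IOT_OVER_12345';   -- hypothetical overflow table name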

  • Why Index-organized Table (IOT) is so slow during bulk/initial insert?

    Tested in 11.1.0.7.0 RAC on RHEL 5 with ASM and 16KB block size.
    Table is not wide: PK contains 4 columns and the leading 2 are compressed because they have relatively low cardinality; 2 other columns are included; the table contains another 4 audit columns; an overflow tablespace is defined.
    Created 2 tables, one is IOT, the other is a normal heap-organized table with "COMPRESS FOR ALL OPERATIONS". Both tables have been range partitioned by the first column into 8 partitions, and DOP is set to 8.
    Initial load volume is about 160M rows. Direct Path insert is used with parallel degree 8.
    After initial load, create PK for the 4 columns with the leading 2 compressed on the normal table. The IOT occupied about 7GB storage; the normal table occupied 9GB storage (avg_row_len = 80 bytes) and the PK occupied 5.8GB storage.
    The storage saving of IOT is significant, but it took about 60 minutes to load the IOT, while it only took 10 minutes to load the heap-organized table and then 6 minutes to create the PK. Overall, the bulk insert for IOT is about 4 times slower than the equivalent heap-organized table.
    I have ordered the 4 columns in the PK for the best compression ratio (lower cardinality comes first) and only compress the most repetitive leading columns (this matches Oracle's recommendation in index_stats after validate structure), partitioning is used to reduce contention, the parallel degree is ample, /*+ append */ is used for the insert, and the ASM system is backed by a high-end SAN with a lot of I/O bandwidth.
    So it seems that such table is good candidate for IOT and I've tried a few tricks to get the best out of IOT, but the insert performance is quite disappointing. Please advise me if I missed anything, or you have some tips to share.
    Thanks a lot.
    CREATE TABLE IOT_IS_SLOW
    (
      GROUP_ID           NUMBER(2)   NOT NULL,
      BATCH_ID           NUMBER(4)   NOT NULL,
      KEY1               NUMBER(10)  NOT NULL,
      KEY2               NUMBER(10)  NOT NULL,
      STATUS_ID          NUMBER(2)   NOT NULL,
      VERSION            NUMBER(10),
      SRC_LAST_UPDATED   DATE,
      SRC_CREATION_DATE  DATE,
      DW_LAST_UPDATED    DATE,
      DW_CREATION_DATE   DATE,
      CONSTRAINT PK_IOT_IS_SLOW
        PRIMARY KEY (GROUP_ID, BATCH_ID, KEY1, KEY2)
    )
    ORGANIZATION INDEX COMPRESS 2
    INCLUDING VERSION
    NOLOGGING
    PCTFREE 20
    OVERFLOW
    PARALLEL ( DEGREE 8 )
    PARTITION BY RANGE(GROUP_ID)
    (
      PARTITION P01 VALUES LESS THAN (2),
      PARTITION P02 VALUES LESS THAN (3),
      PARTITION P03 VALUES LESS THAN (4),
      PARTITION P04 VALUES LESS THAN (5),
      PARTITION P05 VALUES LESS THAN (6),
      PARTITION P06 VALUES LESS THAN (7),
      PARTITION P07 VALUES LESS THAN (8),
      PARTITION P08 VALUES LESS THAN (MAXVALUE)
    );
    Even if /*+ APPEND */ is ignored for IOT, it is too slow, isn't it?

    David_Aldridge wrote:
    oftengo wrote:
    >
    Direct-path INSERT into a single partition of an index-organized table (IOT), or into a partitioned IOT with only one partition, will be done serially, even if the IOT was created in parallel mode or you specify the APPEND or APPEND_VALUES hint. However, direct-path INSERT operations into a partitioned IOT will honor parallel mode as long as the partition-extended name is not used and the IOT has more than one partition.
    >
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_9014.htm
    Hmmm, that's very interesting. I'm still a bit cynical though -- in order for direct path to work on an index organized table by appending blocks I would think that some extra conditions would have to be satisfied:
    * the table would have to be empty, or the lowest-sorting row of the new data would have to be higher than the highest-sorting row of the existing data
    * the data would have to be sorted
    ... that sort of thing. Maybe I'm suffering a failure of imagination though.
    Could be. From a Tanel Poder post:
    >
    The “direct path loader” (KCBL) module is used for performing direct path IO in Oracle, such as direct path segment scans and reading/writing spilled over workareas in temporary tablespace. Direct path IO is used whenever you see “direct path read/write*” wait events reported in your session. This means that IOs aren’t done from/to buffer cache, but from/to PGA directly, bypassing the buffer cache.
    This KCBL module tries to dynamically scale up the number of asynch IO descriptors (AIO descriptors are the OS kernel structures, which keep track of asynch IO requests) to match the number of direct path IO slots a process uses. In other words, if the PGA workarea and/or spilled-over hash area in temp tablespace gets larger, Oracle also scales up the number of direct IO slots. Direct IO slots are PGA memory structures helping to do direct IO between files and PGA.
    >
    So I'm reading into this that somehow these temp segments handle it, perhaps because with parallelism you have to be able to deal anyway. I speculate the data is inserted past the high water mark, then any ordering issues left can be resolved before moving the high water mark(s). Maybe examining where segments wind up in the data files can show how this works.
    >
    I can't find anything in the documentation that speaks to this, so I wonder whether the docs are really talking about a form of conventional path parallel insert into an IOT and not true direct path inserts.
    One way to check, I think, would be to get the wait events for the insert and see whether the writes are direct.
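    As a rough sketch of that check (the SID below is a placeholder for the loading session's SID, looked up in v$session): run the insert, then from another session see whether the waits are direct path:
    SELECT event, total_waits, time_waited
      FROM v$session_event
     WHERE sid = 123                     -- hypothetical SID of the loading session
       AND event LIKE 'direct path%';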

  • Problem with a 2 columns Range Partitioning for a indexed organized table

    I have an index-organized table with a 2-column PK. The first field (datum) is a date field; the second field (installatieid) is a number(2) field.
    Every minute 7 records are inserted (installatieid 0-6).
    I'd like to partition this table with one partition per year per installatieid.
    I tried to do it with:
    partition by range(datum,installatieid)
    (partition P_2004_0 values less than (to_date('2004-01','yyyy-mm'),1)
    ,partition P_2004_6 values less than (to_date('2004-01','yyyy-mm'),7)
    ,partition P_2005_0 values less than (to_date('2005-01','yyyy-mm'),1)
    ,partition P_2005_6 values less than (to_date('2005-01','yyyy-mm'),7)
    )
    but now only P_2004_0 and P_2005_0 are filled.
    I thought about combining a range partition on datum with a list subpartition on installatieid, but I read this is not allowed with an index-organized table.
    How can I solve this problem?

    partition by range(datum)
    (partition P_2004_0 values less than
    (to_date('2004-01','yyyy-mm'))
    ,partition P_2004_6 values less than
    (to_date('2004-07','yyyy-mm'))
    ,partition P_2005_0 values less than
    (to_date('2005-01','yyyy-mm'))
    ,partition P_2005_6 values less than
    (to_date('2005-07','yyyy-mm'))
    )
    ? Sorry haven't got time to test it this morning ;0)
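    A note on why the original attempt fills only the *_0 partitions: with multi-column range partitioning the key is compared to the bound column by column, and the second column is only examined when the first column is exactly equal to the bound value. Any row whose datum is strictly below a boundary date therefore goes into the first partition whose boundary date exceeds its datum, and installatieid is never consulted - which is why only P_2004_0 and P_2005_0 fill up. A quick, hypothetical way to see where the rows actually went (substitute the real table name):
    SELECT 'P_2005_0' AS part, COUNT(*) AS cnt FROM mytable PARTITION (P_2005_0)
    UNION ALL
    SELECT 'P_2005_6', COUNT(*) FROM mytable PARTITION (P_2005_6);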

  • Heap tables and index organized tables

    I am performing a migration from MS SQL Server to Oracle 10gR2; in MS SQL Server all tables have a clustered PK index. Is it necessary to use index-organized tables for that migration, or are ordinary heap-organized tables enough? And what are the differences between those tables and the MS SQL Server tables?
    Thanx

    In Oracle, the typical table is a standard 'heap' table. Stuff goes into the heap table randomly, and randomly comes out.
    An index-organized table is somewhat similar to a clustered index in SQL Server. It can have some performance advantages over heap tables - when the heap table has an associated index on the primary key.
    The IOT can also have some disadvantages, such as the need for an Overflow table to handle the extra data when a row doesn't conveniently fit in a block (implying multiple I/Os), and an extra translation table if bitmap indexes are required (implying extra I/Os).
    An unintelligent developer will generally believe that Oracle and SQL Server are the same - after all they both run SQL - and will attempt to port by a simple translation of syntax.
    An intelligent developer will test both styles of tables, during a port. Such a developer will also be quick to learn about the changes in internals (such as locking mechanisms) and will realize that different styles of coding are required for many application situations.
    I recommend reading Tom Kyte's books to get a handle on the pros and cons as well as testing techniques to help a developer become intelligent.
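    As a minimal, hypothetical illustration of the two styles being compared during such a port (names invented for the example):
    -- Heap table with a PK index: the usual starting point for a port
    CREATE TABLE emp_heap
    ( empno  NUMBER(6)  PRIMARY KEY,
      ename  VARCHAR2(30),
      sal    NUMBER(8,2)
    );
    -- The same structure as an IOT: closer in spirit to a SQL Server clustered PK
    CREATE TABLE emp_iot
    ( empno  NUMBER(6),
      ename  VARCHAR2(30),
      sal    NUMBER(8,2),
      CONSTRAINT emp_iot_pk PRIMARY KEY (empno)
    )
    ORGANIZATION INDEX;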

  • Creation of context index on index-organized table

    I encountered a problem when creating a domain index (an interMedia Text context index) on an index-organised table in Oracle 8i.
    The description of the error is stated below:
    "ORA-29866: cannot create domain index on a column of index-organized table"
    I have configured interMedia Text properly and it even worked for those tables which are not index-organised (ordinary tables).
    This problem has occurred only when I made the tables index-organised.
    Please provide us a solution to this problem as early as possible.
    In case if you require any more details i shall provide them.

    Please ask questions about Oracle Text (formerly interMedia text) in the Oracle Text forum. You will get a quicker, more expert answer there.

  • Creation of context index on index-organized tables

    I encountered a problem when creating a domain index (an interMedia Text context index) on an index-organised table in Oracle 8i.
    The description of the error is stated below:
    "ORA-29866: cannot create domain index on a column of index-organized table"
    I have configured interMedia Text properly and it even worked for those tables which are not index-organised (ordinary tables).
    This problem has occurred only when I made the tables index-organised.
    Please provide us a solution to this problem as early as possible.
    In case if you require any more details i shall provide them.

    Creation of domain indexes (such as Context) on IOTs is not currently supported in Oracle.

  • Indexes on Index Organized Table

    I remember seeing a note regarding problems in using indexes on index-organized tables. Of course, the index would be something other than the one implied in the table definition. I can't seem to remember what the problem was.
    Has anyone used indexes on index organized tables successfully? (unsuccessfully?)
    Oracle 8i on a Tru64 platform
    Thanks
    -dave


  • Primary key constraint for index-organized tables or sorted hash cluster

    We had a few tables dropped without using cascade constraints. Now when we try to recreate the table we get an error message stating that "name already used by an existing constraint". We cannot delete the constraint because it gives us an error "ORA-25188: cannot drop/disable/defer the primary key constraint for index-organized tables or sorted hash cluster" Is there some sort of way around this? What can be done to correct this problem?

    What version of Oracle are you on?
    And have you searched for the constraint to see what it's currently attached to?
    select * from all_constraints where constraint_name = :NAME;

  • Who know how to handle pl/sql table return from stored procedure calling from jsp

    I have some stored procedures which return a PL/SQL table (index-by table). It looks like an array. How does JDBC handle this?
    CallableStatement cs = con.prepareCall("EXECUTE bill.getcountry(?,?)");
    cs.setInt(1, cid);
    cs.registerOutParameter(2, java.sql.Types.VARCHAR);// ARRAY?
    ResultSet rs = cs.executeQuery();
    Array array = (Array) rs.getObject (1);
    ResultSet array_rset = array.getResultSet ();

    Not that familiar with the OCI (Oracle Call Interface), but I think this call will be problematic - the OCI deals with SQL data types and not with PL/SQL structures.
    The OCI has since Oracle 8i sported an object call interface (see OCI Runtime Environment for Objects for details).
    This allows you to use the CREATE TYPE command to create advance user data types - and these are supported by the SQL engine, PL/SQL engine and external languages via the OCI.
    So you need to have a look at the Perl-DBI documentation to see how it supports Oracle object types, and consider using these. As for internal PL/SQL data structures: these are not supported by the SQL engine, and I would expect limited or no support in the OCI for these. Anyway, using SQL data types makes a lot more sense in terms of flexibility and transparency across languages and environments.
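    As a rough sketch of that suggestion (all names here are hypothetical), the index-by table would be replaced with a SQL collection type that client drivers can see:
    -- SQL type, visible to the SQL engine and to client drivers
    CREATE OR REPLACE TYPE country_tab AS TABLE OF VARCHAR2(100);
    /
    -- Procedure returning the SQL type instead of a PL/SQL index-by table
    CREATE OR REPLACE PROCEDURE getcountry(p_cid IN NUMBER, p_list OUT country_tab)
    AS
    BEGIN
      SELECT country_name BULK COLLECT INTO p_list
        FROM countries
       WHERE continent_id = p_cid;
    END;
    /
    On the JDBC side, the OUT parameter can then be registered as an ARRAY (with the type name) rather than as VARCHAR.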

  • How to set index to tables?

    Hi,
    My database has more than 50,000 tables. I would like to add indexes to all my tables; please suggest how I should go about indexing 50,000 tables...?
    Thanks
    Giri

    user10737570 wrote:
    Hi,
    My DB version is 10.2.0.3.0 running on IBM-AIX.... Thanks for it.
    As I said earlier, my DB consists of 50k tables.
    Each table consists of millions of rows.
    So, when I try to access some rows, it takes more time (especially while running a long query).
    Now I have decided to index the tables by issuing
    create index fbind on fdtab(owner);
    and that one index took nearly 5 hours to create, so there are thousands of tables left to index.
    It's an utterly bad idea to create indexes assuming that their presence will make the queries faster. The data selection, predicates, conditions, statistics of the tables, good/bad SQL and, lastly, optimizer issues all play a very important role in performance. Just assuming that with index creation your queries will be faster is not, I think, a reasonable thought to have. You have got a lot of tables, as you say; it would take huge resources and time to create indexes on all of them. In addition, not all the tables would require similar types of indexes: some may require B-tree while others may require bitmap. Also, you must note that indexes will make DML slower. So I shall again suggest that you benchmark the creation of indexes with a little more care.
    HTH
    Aman....
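    If it helps, one inexpensive way to benchmark whether a particular index would even be used is to look at the plan of a representative query before and after creating it (using the table and column from the post above):
    EXPLAIN PLAN FOR
      SELECT * FROM fdtab WHERE owner = :own;
    SELECT * FROM TABLE(dbms_xplan.display);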

  • How to handle a repeating table in FrameMaker 8 with varying columns

    Hello,
    I have a unstructred FrameMaker 8 table question that I hope someone can answer for me.
    I have an RTF document that I have brought into FrameMaker 8.
    The table came into FrameMaker without any major problems.
    From FrameMaker 8, I now have to apply a special template to generate the special "Table Title" field, which allows the table to break across several pages using the "Table Continuation Variable".
    Also, I must select the "Add Rows or Columns" option and choose "Add 1 Row to Heading", so that my "Table/Column/Head" row repeats on every table page.
    With that said, here is where it gets interesting!
    ISSUE #1
    Usually, my next step is to add 20-30 rows to the last row on the first table page and then paste in the table data from the second (original) table into these new table cells. The end result is that the table will bump from page 1 to page 2 and "Table Title (Continued)" will automatically appear on the second table page - and so forth...
    It is not the fastest way to handle RTF tables, but this is the only approach I know off-hand.
    Does anyone know if there is a way to simply merge the legacy breaking tables together and then apply the "Table Continuation" variable and repeating Table Column Header row?
    ISSUE #2
    As mentioned above in Issue #1, I must manually add extra rows to the bottom of the first table page and then paste in the table data that flows over the next few pages.
    But, my multi-page table is based on procedural steps and therefore some table pages have 4 columns, while others have 5 columns.
    So, using the process stated above, I simply can't copy and paste.
    I must now manually add another column to the affected table page. But when I do this, FrameMaker 8 naturally adds an additional column to the first table page. When that happens, I must then go back to the first table page and straddle the new column into the original column.
    Does anyone know if there is a better way to handle these legacy (RTF) tables?
    Overall, I am hoping that there is an easier way of handling a legacy (RTF) table that breaks over several pages and has different columns, based on the section in the table.
    Thanks in advance for any solutions offered.
    Regards,
    Jim

    Rick Quatro has a plugin that handles some of your issues:
    http://www.frameexpert.com/plugins/tablecleaner/index.htm

  • How to handle pay scale tables in infotype 0008?

    Hi friends,
    I have 2 questions:
    1) What's the difference between "Pay Scale Groups And Levels" (view v_t510) and "Pay Grades And Levels" (view v_t710), and in which case does each apply?
    2) I have a pay scale table with different levels; for each level I have a minimum and a maximum salary. I think I can handle this by using "pay grades and levels", but in infotype 0008, in the pay scale data, I indicate the type and the area, and when I go to select the group, the search help displays the values of the "pay scale groups and levels". I would like to know how to display the "pay grades and levels" instead.
    Thanks in advance,
    Albio Vivas.-

    1. Pay scales -> These are used when you have logical step increases in an employee's salary: fixed increases that can be defined in configuration. These are good because they are easily maintained through configuration, and standard programs update the employee master data for you.
        Pay grades -> These are used when you do not have fixed compensation for employees and need a salary range to go by, so you use these as a range.
    2. Whether an employee uses a scale or a grade list on infotype 0008 is dictated by feature TARIF, located in transaction PE03.
    thanks.
    JB

  • How to handle include tag in xml schema

    My XML schema makes reference to other schema, something like:
    <xsd:include schemaLocation="../LOCCommon/LOCCommon.xsd"/>
    How do I handle this when using XML DB?

    XDB does not currently understand relative URLs. We are looking at this since their use is becoming more common. You will need to register both schemas under absolute URLs and then adjust the URL of the include or import statement to reflect this...
    The following code sample may help...
    procedure fixRelativeURLs(xmlschema in out xmltype, schemaLocationHint varchar2)
    as
      -- NAMESPACES is presumably a constant defined elsewhere in the original package,
      -- e.g. 'xmlns:xsd="http://www.w3.org/2001/XMLSchema"', used as the namespace
      -- argument of updateXML below.
      cursor getImports is
        select SCHEMA_LOCATION
          from xmlTable(
                 xmlnamespaces(
                   default 'http://www.w3.org/2001/XMLSchema'
                 ),
                 '/schema/import'
                 passing xmlSchema
                 columns
                   SCHEMA_LOCATION varchar2(700) path '@schemaLocation'
               );
      cursor getIncludes is
        select SCHEMA_LOCATION
          from xmlTable(
                 xmlnamespaces(
                   default 'http://www.w3.org/2001/XMLSchema'
                 ),
                 '/schema/include'
                 passing xmlSchema
                 columns
                   SCHEMA_LOCATION varchar2(700) path '@schemaLocation'
               );
      baseURL        varchar2(700);
      schemaLocation varchar2(700);
      targetURL      varchar2(700);
    begin
      if instr(schemaLocationHint,'/',-1) > 0 then
        baseURL := substr(schemaLocationHint,1,instr(schemaLocationHint,'/',-1)-1);
      else
        baseURL := '/';
      end if;
      for import in getImports loop
        targetURL := baseURL;
        schemaLocation := import.SCHEMA_LOCATION;
        -- The following are treated as relative URLs:
        --   URLs with no '/' character
        --   URLs which do not start with '/' and which do not contain '://'
        if ((instr(schemaLocation,'://') = 0) and (instr(schemaLocation,'/') <> 1)) then
          if (instr(schemaLocation,'..') = 1) then
            while instr(schemaLocation,'..') = 1 loop
              schemaLocation := substr(schemaLocation,4);
              targetURL := substr(targetURL,1,instr(targetURL,'/',-1)-1);
            end loop;
          end if;
          schemaLocation := targetURL || '/' || schemaLocation;
          -- dbms_output.put_line('Import : re-mapping "' || import.SCHEMA_LOCATION || '" to "' || schemaLocation || '".');
          select updateXML(
                   xmlSchema,
                   '/xsd:schema/xsd:import[@schemaLocation="' || import.SCHEMA_LOCATION || '"]/@schemaLocation',
                   schemaLocation,
                   NAMESPACES
                 )
            into xmlSchema
            from dual;
        else
          dbms_output.put_line('Import : skipping "' || import.SCHEMA_LOCATION || '".');
        end if;
      end loop;
      for include in getIncludes loop
        targetURL := baseURL;
        schemaLocation := include.SCHEMA_LOCATION;
        -- The following are treated as relative URLs:
        --   URLs with no '/' character
        --   URLs which do not start with '/' and which do not contain '://'
        if ((instr(schemaLocation,'://') = 0) and (instr(schemaLocation,'/') <> 1)) then
          if (instr(schemaLocation,'..') = 1) then
            while instr(schemaLocation,'..') = 1 loop
              schemaLocation := substr(schemaLocation,4);
              targetURL := substr(targetURL,1,instr(targetURL,'/',-1)-1);
            end loop;
          end if;
          schemaLocation := targetURL || '/' || schemaLocation;
          -- dbms_output.put_line('Include : re-mapping "' || include.SCHEMA_LOCATION || '" to "' || schemaLocation || '".');
          select updateXML(
                   xmlSchema,
                   '/xsd:schema/xsd:include[@schemaLocation="' || include.SCHEMA_LOCATION || '"]/@schemaLocation',
                   schemaLocation,
                   NAMESPACES
                 )
            into xmlSchema
            from dual;
        else
          dbms_output.put_line('Include : skipping "' || include.SCHEMA_LOCATION || '".');
        end if;
      end loop;
    end;
    --
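    For completeness, a rough sketch of the registration step the reply describes (the URLs and XDB repository paths below are invented for illustration): register the included schema first under an absolute URL, then the parent schema whose include now points at that URL:
    begin
      dbms_xmlschema.registerSchema(
        'http://xmlns.example.com/LOCCommon/LOCCommon.xsd',
        xdbURIType('/home/SCOTT/LOCCommon.xsd').getClob()
      );
      dbms_xmlschema.registerSchema(
        'http://xmlns.example.com/LOC/LOC.xsd',
        xdbURIType('/home/SCOTT/LOC.xsd').getClob()
      );
    end;
    /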
