Indexing an XML DB table of type XMLType

Hi
I have created a table of type XMLType in Oracle XML DB.
Can anyone tell me how I can index this table?
Oracle version is 9.2.0.3.0

You should ask this question in the Oracle XML DB forum, or perhaps the Oracle Text forum (formerly interMedia Text).
You will get a better, quicker answer there.
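In the meantime, a minimal sketch of one common approach in 9.2: a function-based index over an XPath expression. The table name myxml_tab and the path /Root/Name are made-up placeholders here; for a schema-based table with object-relational storage, Oracle can often rewrite such an index onto the underlying column.
CREATE INDEX myxml_name_idx ON myxml_tab x
    (extractValue(VALUE(x), '/Root/Name'));
For full-text searching of the document content, an Oracle Text (interMedia) index is the other common option.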

Similar Messages

  • Help on Index creation on complex type

    Hi:
I have a problem creating indexes on columns related to complex types.
Here are the steps I performed:
    1)
I registered a schema with complex types. The relevant part of the schema is below:
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
         <xs:element name="ProteinEntry" type="ProteinEntryType"/>
         <xs:complexType name="ProteinEntryType">
              <xs:sequence>
                   <xs:element name="header" type="headerType"/>
                   <xs:element name="protein" type="proteinType"/>
                   <xs:element name="organism" type="organismType"/>
              </xs:sequence>
              <xs:attribute name="id" type="xs:ID" use="required"/>
         </xs:complexType>
    <xs:complexType name="organismType">
              <xs:sequence>
                   <xs:element ref="source"/>
                   <xs:element ref="common" minOccurs="0"/>
              </xs:sequence>
         </xs:complexType>
    </xs:schema>
    2) I registered this schema and I created a table "bioseq" of xmltype based on the schema.
3) When I do a describe of bioseq I get the following:
    SQL> desc bioseq
    Name Null? Type
    TABLE of SYS.XMLTYPE(XMLSchema "http://accelrys.com/pir_edited_pir3.xsd" Element "ProteinEntry") STORAGE Object-relational TYPE "ProteinEntryType551_T"
    4) when I do a describe of "ProteinEntryType551_T" I get the following :
    SQL> desc "ProteinEntryType551_T";
    "ProteinEntryType551_T" is NOT FINAL
    Name Null? Type
    SYS_XDBPD$ XDB.XDB$RAW_LIST_T
    id VARCHAR2(4000)
    organism organismType511_T
    reference reference552_COLL
    desc "organismType511_T" gives the following:
    SYS_XDBPD$ XDB.XDB$RAW_LIST_T
    source VARCHAR2(4000)
    common VARCHAR2(4000)
What I want to do is create an index on the column source. I tried several syntaxes but failed. Is it possible to do this?
    Sudhakar

    Look at the Whitepaper on XML DB and the latest version of the XML DB Demo. It shows how to do this.
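As a sketch of the kind of index the demo shows, using this thread's own names (the XPath is assumed from the posted schema fragment): for object-relational storage, an extractValue-based index on the path to source can be rewritten by Oracle onto the underlying relational column.
CREATE INDEX bioseq_source_idx ON bioseq b
    (extractValue(VALUE(b), '/ProteinEntry/organism/source'));
The path must match the registered schema exactly; if the rewrite cannot be applied, Oracle creates an ordinary function-based index instead.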

  • Difference between an XMLType table and a table with an XMLType column?

    Hi all,
    Still trying to get my mind around all this XML stuff.
    Can someone concisely explain the difference between:
create table this_is_xmltype_tab of xmltype;
and
create table this_is_tab_w_xmltype_col (id number, document xmltype);
What are the relative advantages and disadvantages of each approach? How do they really differ?
    Thanks,
    -Mark

There is another pointer, Mark, that I realized when I was thinking about the differences...
If you look up "xdb:annotations" in the manual, you will learn about a method of using an XML Schema to generate your whole design out of the box, in terms of physical layout and/or design principles. In my mind this should be the preferred solution if you are dealing with very complex XML Schema environments. Taking your XML Schema as your single design source, which during implementation automatically generates and builds all the database objects and physical structures you need, has great advantages for design version management etc., but...
...it will automatically create an XMLType table (based on object-relational, binary XML or "hybrid" storage principles, i.e. the ones that are XML Schema driven) and not, AFAIK, an XMLType column structure: so, as in "our" case, a table with an id column and an xmltype column.
In relational terms you could think of it as: "I have created an EER diagram and a physical diagram, and I mix the content of those two into one diagram. Then I execute it in the database, and the end result is a database user/schema that has all the physical objects I need, the way I want them to be..."
...but it will be in the form of an XMLType table structure...
xdb:annotations can be used to do things like:
    xdb:annotations can be used to create things like:
    - enforce database/company naming conventions
    - DOM validation enabled or not
    - automatic IOT or BTree index creation (for instance in OR XMLType storage)
    - sort search order enforced or not
    - default tablenames and owners
    - extra column or table property settings like for partitioning XML data
    - database encoding/mapping used for SQL and binary storage
    - avoid automatic creation of Oracle objects (tables/types/etc), for instance, via xdb:defaultTable="" annotations
    - etc...
    See here for more info: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#ADXDB4519
    and / or for more detailed info:
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030452
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030995
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#CHDCEBAG
    ...
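For illustration, a minimal annotated schema fragment (the element name PurchaseOrder and table name PO_TAB are invented here; xdb:defaultTable is the annotation mentioned in the list above):
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:xdb="http://xmlns.oracle.com/xdb">
     <xs:element name="PurchaseOrder" type="POType"
                 xdb:defaultTable="PO_TAB"/>
     <!-- xdb:defaultTable names the XMLType table generated at schema
          registration; xdb:defaultTable="" suppresses its creation -->
</xs:schema>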

  • Index Vs table partition

I have a table whose growth is about 1 million rows per month, and this may increase in the future. I currently have an index on a column which is frequently used in the WHERE clause. There is another column which contains the month, so it may be possible to make 12 partitions on that. I want to know what is suitable. Is there any connection between indexing and table partitioning?
    Message was edited by:
    user459835

I think the question is more about what type of queries are answered by this table.
Is it that most of the time the results returned span several months?
Is there any relation between the column you use in the WHERE clause and the data belonging to a particular month (or range thereof)?
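If monthly range partitioning does fit the access pattern, here is a minimal sketch (table and column names are made up); it also shows the connection between the two features, since the index is declared LOCAL to the table's partitioning:
CREATE TABLE txn_history (
    txn_id    NUMBER,
    txn_month DATE,
    cust_id   NUMBER
)
PARTITION BY RANGE (txn_month) (
    PARTITION p200701 VALUES LESS THAN (TO_DATE('2007-02-01','YYYY-MM-DD')),
    PARTITION p200702 VALUES LESS THAN (TO_DATE('2007-03-01','YYYY-MM-DD')),
    PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);
-- a LOCAL index is equipartitioned with the table: queries that also
-- filter on txn_month only touch the index partitions for those months
CREATE INDEX txn_cust_ix ON txn_history (cust_id) LOCAL;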

  • Pl/sql table - row type records

    Hi,
Is there any limit on the number of records that a PL/SQL table (of row type) can accommodate? I am using Oracle 10g.

user11200499 wrote:
I have gone through that URL; there is nothing there on the maximum number of records that can be present in a PL/SQL table. It will be very helpful if you can let me know if there is any such limitation.

There is no such thing as a PL/SQL "table". A table, in Oracle terminology, means columns and rows and indexes and the ability to scale: to effectively read, process, filter and aggregate data.
A so-called PL/SQL "table" is nothing at all like this.
The correct terms for it, as used in all other programming languages, are array (the procedural term) and collection (the object-oriented term).
An array/collection is a local memory structure in the unit of code. In PL/SQL, that means PGA (process global area) memory. And as this uses server memory, you should not abuse it, and should only use as much as is truly needed.
Make a PL/SQL array/collection too large and the PGA grows... which can have a very negative impact on performance. It can even cause the server to crawl to a halt, where you will struggle to enter a command-line command on the server because it is spending 99% of CPU time trying to deal with memory requests and page swapping.
    So what do we then use arrays/collections for in PL/SQL?
For the very same reason we use them in any other programming language - for managing local program data in a more effective memory structure, such as bulk processing when a buffer variable is needed that can be passed to and from the PL and SQL engines.
This does NOT mean using it as if it were a SQL table. It is not.
So to answer your question of how large a PL/SQL array or collection can be: that depends entirely on the problem you are trying to solve. If it is, for example, bulk processing, then typically a collection of 100 rows provides the best balance between the amount of (expensive) PGA memory used and the performance gained by reducing context switching between the PL and SQL engines.
If the rows are quite small, perhaps even a 1,000-row collection. More than that seldom decreases context switching enough to justify the increase in expensive PGA.
    So what should then be used to store larger data structures in PL/SQL? GTT or Global Temporary Tables. As this is a proper SQL table structure. Can be indexed. Natively supports SQL. Can scale with data volumes.
    And most importantly, it does not consume dedicated process memory and will not blow server memory.
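A minimal sketch of that bulk-processing pattern with a bounded buffer (scott.emp stands in here for any source table):
DECLARE
    CURSOR c IS SELECT * FROM scott.emp;
    TYPE emp_tab_t IS TABLE OF c%ROWTYPE;
    l_buf emp_tab_t;
BEGIN
    OPEN c;
    LOOP
        -- LIMIT caps the collection at 100 rows, keeping PGA use bounded
        FETCH c BULK COLLECT INTO l_buf LIMIT 100;
        EXIT WHEN l_buf.COUNT = 0;
        FOR i IN 1 .. l_buf.COUNT LOOP
            NULL;  -- per-row processing goes here
        END LOOP;
    END LOOP;
    CLOSE c;
END;
/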

  • Creation of secondary indexes for table "RSBATCHCTRL_PAR" failed

    Hi ,
We have installed EHP1 on our BI 7.0 system successfully. Later, while applying SPS01 for this EHP, we got the following error during the TBATG conversion.
    2 EGT092 Conversion of table "RSBATCHCTRL_PAR" was restarted
    2 EGT241 The conversion is continued at step "6"
    2 EGT246 Type of conversion: "T" -> "T"
    2 EGT240XBegin step "RSBATCHCTRL_PAR-STEP6":
    4 EGT281 sql:
    4 ED0314 CREATE
    4 ED0314 INDEX [RSBATCHCTRL_PAR~DB] ON [RSBATCHCTRL_PAR]
    4 ED0314 ( [JOBNAME] ,
    4 ED0314 [JOBCOUNT] ,
    4 ED0314 [SERVER] ,
    4 ED0314 [HOST] ,
    4 ED0314 [WP_NO] ,
    4 ED0314 [WP_PID] ,
    4 ED0314 [PROCESS_TYPE] )
    4 ED0314 WITH ( ONLINE=OFF )
    4 ED0314 ON [PRIMARY]
    2 ED0314 Line 1: Incorrect syntax near '('.
    3 EDA093 "DDL time(___1):" ".........6" milliseconds
    2EEGT236 The SQL statement was not executed
    2EEDI006 Index " " could not be created completely in the database
    2EEGT221 Creation of secondary indexes for table "RSBATCHCTRL_PAR" failed
    2EEGT239 Error in step "RSBATCHCTRL_PAR-STEP6"
    2 EGT253XTotal time for table "RSBATCHCTRL_PAR": "000:00:00"
    2EEGT094 Conversion could not be restarted
    2 EGT067 Request for "RSBATCHCTRL_PAR" could not be executed
    1 ED0327XProcess..................: "ferrari_12"
    1 ED0302X=========================================================================
    1 ED0314 DD: Execution of Database Operations
    1 ED0302 =========================================================================
    1 ED0327 Process..................: "ferrari_12"
    1 ED0319 Return code..............: "0"
    1 ED0314 Phase 001................: < 1 sec. (Preprocessing of TBATG)
    1 ED0314 Phase 002................: < 1 sec. (Partitioning)
    1 ED0309 Program runtime..........: "< 1 sec."
    1 ED0305 Date, time...............: "03.06.2009", "12:47:21"
    1 ED0318 Program end==============================================================
    1 ETP166 CONVERSION OF DD OBJECTS (TBATG)
    1 ETP110 end date and time   : "20090603124721"
    1 ETP111 exit code           : "8"
    1 ETP199 ######################################
    System properties:
    SAP - BI7.0 with EHP1
    Database - MSSQL 2000
    OS - Windows2003
    Please suggest.
    Thanks in advance,
    Pavan.

> We have installed EHP1 on our BI 7.0 system successfully. Later, while applying SPS01 for this EHP, we got the following error during the TBATG conversion.
    > 2 ED0314 Line 1: Incorrect syntax near '('.
    > 3 EDA093 "DDL time(___1):" ".........6" milliseconds
    > 2EEGT236 The SQL statement was not executed
    This is a known problem with SQL Server 2000, see
    Note 1180553 - Syntax error 170 during index creation on SQL 2000
    I highly suggest upgrading to SQL Server 2005 or 2008.
    Markus

  • Index Organized Tables

What is a logical rowid in an IOT? Are they stored somewhere physically, just like physical rowids?
What are secondary indexes?
What is meant by leaf block splits? When and how do they happen?
Is it true that the primary key constraint for an index-organized table cannot be dropped, deferred, or disabled? If yes, then why?
How does overflow work? How are the two clauses PCTTHRESHOLD and INCLUDING implemented, and how do they work?
    Edited by: Juhi on Oct 22, 2008 1:09 PM

    I'm sort-of tempted to just point you in the direction of the official documentation (the concepts guide would be a start. See http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/schema.htm#sthref759)
    But I would say one or two other things.
    First, physical rowids are not stored physically. I don't know why you'd think they were. The ROWID data type can certainly be used to store a rowid if you choose to do so, but if you do something like 'select rowid from scott.emp', for example, you'll see rowids that are generated on-the-fly. ROWID is a pseudo-column, not physically stored anywhere, but computed whenever needed.
    The difference between a physical rowid and a logical one used with IOTs comes down to a bit of relational database theory. It is a cast-iron rule of relational databases that a row, once inserted into a table, must never move. That is, the rowid it is assigned at the moment of its first insertion, must be the rowid it 'holds onto' for ever and ever. If you ever want to change the rowids assigned to rows in an ordinary table, you have to export them, truncate the table and then re-insert them: fresh insert, fresh rowid. (Oracle bends this rule for various maintenance and management purposes, whereby 'enable row movement' permits rows to move within a table, but the general case still applies mostly).
    That rule is obviously hopeless for index structures. Were it true, an index entry for 'Bob' who gets updated to 'Robert' would find itself next to entries for 'Adam' and 'Charlie', even though it now has an 'R' value. Effectively, a 'b' "row" in an index must be allowed to "move" to an 'r' sort of block if that's the sort of update that takes place. (In practice, an update to an index entry consists of performing a delete followed by a re-insert, but the physicalities don't change the principle: "rows" in an index must be allowed to move if their value changes; rows in a table don't move, whatever happens to their values)
An IOT is, at the end of the day, simply an index with a lot more columns in it than a "normal" index would have -so it, too, has to allow its entries (its 'rows', if you like) to move. Therefore, an IOT cannot use a standard ROWID, which is assigned once and forever. Instead, it has to use something which takes account of the fact that its rows might wander. That is the logical rowid. It's no more "physical" than a physical rowid -neither are physically stored anywhere. But a 'physical' rowid is invariant; a logical one is not. The logical one is actually constructed in part from the primary key of the IOT -and that's the main reason why you cannot ever get rid of the primary key constraint on the IOT. Being allowed to do so would equate to allowing you to destroy the one organising principle for its contents that an IOT possesses.
(See the section entitled "The ROWID Pseudocolumn" and following on this page: http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm#CNCPT1845)
    So IOTs have their data stored in them in primary key order. But they don't just contain the primary key, but every other column in the 'table definition' too. Therefore, just like with an ordinary table, you might want sometimes to search for data on columns which are NOT part of the primary key -and in that case, you might well want these non-primary key columns to be indexed. Therefore, you will create ordinary indexes on these columns -at this point, you're creating an index on an index, really, but that's a side issue, too! These extra indexes are called 'secondary indexes', simply because they are 'subsidiary indexes' to the main one, which is the "table" itself arranged in primary key order.
Finally, a leaf block split is simply what happens when you have to make room for new data in an index block which is already packed to the rafters with existing data. Imagine an index block can only contain four entries, for example. You fill it with entries for Adam, Bob, Charlie, David. You now insert a new record for 'Brian'. If this was a table, you could throw Brian into any new block you like: data in a table has no positional significance. But entries in an index MUST have positional significance: you can't just throw Brian in amongst the middle of a lot of Roberts, Susans and Tanyas. Brian HAS to go in between the existing entries for Bob and Charlie. Yet you can't just put him in the middle of those two, because then you'd have five entries in a block, not four, which we imagined for the moment to be the maximum allowed. So what to do? What you do is: obtain a new, empty block. Move Charlie and David's entries into the new block. Now you have two blocks: Adam-Bob and Charlie-David. Each only has two entries, so each has two 'spaces' to accept new entries. Now you have room to add in the entry for Brian... and so you end up with Adam-Bob-Brian and Charlie-David.
    The process of moving some index entries out of one block into a new one so that there's room to allow new entries to be inserted in the middle of existing ones is called a block split. They happen for other reasons, too, so this is just a gloss treatment of them, but they give you the basic idea. It's because of block splits that indexes (and hence IOTs) see their "rows" move: Charlie and David started in one block and ended up in a completely different block because of a new (and completely unrelated to them) insert.
    Very finally, overflow is simply a way of splitting off data into a separate table segment that wouldn't sensibly be stored in the main IOT segment itself. Suppose you create an IOT containing four columns: one, a numeric sequence number; two, a varchar2(10); three, a varchar2(15); and four, a blob. Column 1 is the primary key.
The first three columns are small and relatively compact. The fourth column is a blob data type -so it could be storing entire DVD movies, multi-gigabyte-sized monsters. Do you really want your index segment (for that is what an IOT really is) to balloon to huge sizes every time you add a new row? Probably not. You probably want columns 1 to 3 stored in the IOT, but column 4 can be bumped off over to some segment on its own (the overflow segment, in fact), and a link (actually, a physical rowid pointer) can link from the one to the other. Left to its own devices, an IOT will chop off every column after the primary key one when a record which threatens to consume more than 50% of a block gets inserted. However, to keep the main IOT small and compact and yet still contain non-primary key data, you can alter these default settings. INCLUDING, for example, allows you to specify which last non-primary key column should be the point at which a record is divided between 'keep in IOT' and 'move out to overflow segment'. You might say 'INCLUDING COL3' in the earlier example, so that COL1, COL2 and COL3 stay in the IOT and only COL4 overflows. And PCTTHRESHOLD can be set to, say, 5 or 10 so that you try to ensure an IOT block always contains 10 to 20 records -instead of the 2 you'd end up with if the default 50% kicked in.
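A minimal sketch of that four-column example in DDL (all names are invented here):
CREATE TABLE docs_iot (
    id   NUMBER,
    col2 VARCHAR2(10),
    col3 VARCHAR2(15),
    col4 BLOB,
    CONSTRAINT docs_iot_pk PRIMARY KEY (id)
)
ORGANIZATION INDEX
PCTTHRESHOLD 10      -- a row may use at most 10% of an index block
INCLUDING col3       -- id through col3 stay in the IOT; col4 goes to overflow
OVERFLOW;
-- and a secondary index on a non-key column of the IOT
CREATE INDEX docs_iot_col2_ix ON docs_iot (col2);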

  • Need suggestion on adding Index on table

    Hi,
There is a table called customer_locations in my database which has more than 5,000 rows. Select queries that fetch data from this table take too much time to return.
I need a suggestion on how to add an index to improve performance on this table, and also on which type of index to add.
    table sql script is mentioned below
    CREATE TABLE "CUSTOMER_LOCATIONS"
    (     "LOCATION_ID" NUMBER NOT NULL ENABLE,
         "COMPANY_NAME" VARCHAR2(512),
         "ADDRESS_LINE_1" VARCHAR2(512),
         "ADDRESS_LINE_2" VARCHAR2(512),
         "PHONE_NUMBER" VARCHAR2(255),
         "FAX_NUMBER" VARCHAR2(255),
         "CITY" VARCHAR2(512),
         "STATE" VARCHAR2(512),
         "ZIP" VARCHAR2(100),
         "COUNTRY" VARCHAR2(255),
         "CREATED_BY" VARCHAR2(512),
         "CREATED_DATE" TIMESTAMP (6),
         "MODIFIED_BY" VARCHAR2(512),
         "MODIFIED_DATE" TIMESTAMP (6),
         "DOMAIN_ID" NUMBER,
         "LOCATION_TYPE" VARCHAR2(255),
         "STATUS" VARCHAR2(50),
         "IB_STATUS" VARCHAR2(100),
         "OLD_LOCATION_ID" VARCHAR2(50),
         CONSTRAINT "SS_CUSTOMER_LOCATIONS_PK" PRIMARY KEY ("LOCATION_ID") ENABLE
    Please suggest
    Thanks
    Sudhir

    Hi Sudhir,
Since you have no predicates, a full table scan is unavoidable. But let me tell you that full table scans are not necessarily bad.
Just to help you, the code below demonstrates the use of indexes:
    drop table test_table;
    create table test_Table as select * from all_objects where rownum < 5001;
    select * from user_ind_columns where table_name = 'TEST_TABLE';
    --No rows fetched.
    explain plan for
    select object_name || ' is a ' || object_type as OBJ_DESC, object_id
      from test_table;
    --5000 Rows fetched
    select operation, options, object_name, object_alias, object_instance, object_type, optimizer, depth, position, cost, cardinality, cpu_cost, io_cost
    from plan_table;
OPERATION         OPTIONS  OBJECT_NAME  OBJECT_ALIAS      OBJECT_INSTANCE  OBJECT_TYPE  OPTIMIZER  DEPTH  POSITION  COST  CARDINALITY  CPU_COST  IO_COST
SELECT STATEMENT  (NULL)   (NULL)       (NULL)            (NULL)           (NULL)       ALL_ROWS   0      19        19    5000         1698651   19
TABLE ACCESS      FULL     TEST_TABLE   TEST_TABLE@SEL$1  1                TABLE        (NULL)     1      1         19    5000         1698651   19
    alter table test_Table add constraint pk_object_id PRIMARY KEY (object_id);
explain plan set statement_id = 'WITH_PK' for
    select object_name || ' is a ' || object_type as OBJ_DESC, object_id
      from test_table
    where object_id = 26;
    select operation, options, object_name, object_alias, object_instance, object_type, optimizer, depth, position, cost, cardinality, cpu_cost, io_cost
    from plan_table
    where statement_id = 'WITH_PK';
OPERATION         OPTIONS         OBJECT_NAME   OBJECT_ALIAS      OBJECT_INSTANCE  OBJECT_TYPE     OPTIMIZER  DEPTH  POSITION  COST  CARDINALITY  CPU_COST  IO_COST
SELECT STATEMENT  (NULL)          (NULL)        (NULL)            (NULL)           (NULL)          ALL_ROWS   0      2         2     1            15543     2
TABLE ACCESS      BY INDEX ROWID  TEST_TABLE    TEST_TABLE@SEL$1  1                TABLE           (NULL)     1      1         2     1            15543     2
INDEX             UNIQUE SCAN     PK_OBJECT_ID  TEST_TABLE@SEL$1  (NULL)           INDEX (UNIQUE)  ANALYZED   2      1         1     1            8171      1
Let me know if this helps or if you still have any concerns.
    Regards,
    P
    Edited by: PurveshK on May 29, 2012 12:40 PM
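As a concrete sketch for this particular table: the right index depends on the predicates your slow queries actually use. If, hypothetically, they filter on CITY and STATUS, a plain B-tree composite index would be the usual starting point:
CREATE INDEX customer_locations_ix1
    ON customer_locations (city, status);
Gather statistics afterwards (DBMS_STATS.GATHER_TABLE_STATS) so the optimizer can choose between the index and a full scan; at 5,000 rows a full scan may legitimately win.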

  • Modify Syntax on Internal table of type ANY TABLE

    Hi,
I have declared an internal table which is of type ANY TABLE.
In a LOOP statement, I am trying to MODIFY that internal table from a work area.
I then get this error message:
"You cannot use explicit or implicit index operations on tables with types "HASHED TABLE" or "ANY TABLE". "C_T_DATA" has the type "ANY TABLE"."
The code is placed in a method of a class.
Can you please advise me how to modify the internal table?
    Thanks and Regards,
    K.Krishna Chaitanya.

    Hi Krishna,
the MODIFY statement is obsolete here.
You can always LOOP AT itab ASSIGNING <field-symbol>.
This never makes the loop slower and, depending on the table structure, makes it faster or much faster.
If you know the table structure at run time, you can use a field-symbol of that type. If not, you can use a field-symbol TYPE any. Then you have to assign the components of the structure to field-symbols to modify them, i.e.
field-symbols:
  <table_line> type any,
  <matnr>      type mara-matnr.
loop at itab assigning <table_line>.
  " address the MATNR component of the generically typed line
  assign component 'MATNR' of structure <table_line> to <matnr>.
  clear <matnr>.
endloop.
This technique (available for more than ten years) works incredibly fast. My estimate is that if SAP changed all the old standard programs this way and used it consistently in the new ones, the whole system would be 20% faster, because myriads of unnecessary copy operations from LOOP INTO would not happen.
    Regards,
    Clemens.

  • Issue with index on table

    Hi,
We have created an index (call it Z2) on table CATSDB with 2 fields. There is another index (call it Z1) with the same fields in the same order. A report accessing the table takes more time to run when index Z2 is on the table, but after Z2 is deleted, the report runs quickly. Is this caused by the duplicate index?
    Please let me know
    Regards
    Shiva

    Hi
I am giving the complete index and buffering concept details below; from this you can understand how to achieve performance through them.
Performance during table access
Indexes
Primary and secondary indexes
Structure of an index
Accessing tables using indexes
Table buffering
Advantages of buffering
Concept of buffering
Buffering types
Buffer synchronization
Primary and secondary indexes
    Index: Technical key of a database table.
    Primary index: The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
    Secondary index: Additional indexes could be created considering the most frequently accessed dimensions of the table.
Structure of an Index
    An index can be used to speed up the selection of data records from a table.
    An index can be considered to be a copy of a database table reduced to certain fields. The data is stored in sorted form in this copy. This sorting permits fast access to the records of the table (for example using a binary search). Not all of the fields of the table are contained in the index. The index also contains a pointer from the index entry to the corresponding table entry to permit all the field contents to be read.
    When creating indexes, please note that:
    An index can only be used up to the last specified field in the selection! The fields which are specified in the WHERE clause for a large number of selections should be in the first position.
    Only those fields whose values significantly restrict the amount of data are meaningful in an index.
    When you change a data record of a table, you must adjust the index sorting. Tables whose contents are frequently changed therefore should not have too many indexes.
    Make sure that the indexes on a table are as disjunctive as possible.
    (That is they should contain as few fields in common as possible. If two indexes on a table have a large number of common fields, this could make it more difficult for the optimizer to choose the most selective index.)
Accessing tables using Indexes
    The database optimizer decides which index on the table should be used by the database to access data records.
    You must distinguish between the primary index and secondary indexes of a table. The primary index contains the key fields of the table. The primary index is automatically created in the database when the table is activated. If a large table is frequently accessed such that it is not possible to apply primary index sorting, you should create secondary indexes for the table.
    The indexes on a table have a three-character index ID. '0' is reserved for the primary index. Customers can create their own indexes on SAP tables; their IDs must begin with Y or Z.
    If the index fields have key function, i.e. they already uniquely identify each record of the table, an index can be called a unique index. This ensures that there are no duplicate index fields in the database.
    When you define a secondary index in the ABAP Dictionary, you can specify whether it should be created on the database when it is activated. Some indexes only result in a gain in performance for certain database systems. You can therefore specify a list of database systems when you define an index. The index is then only created on the specified database systems when activated
Database access using the buffer concept
    Buffering allows you to access data quicker by letting you
    access it from the application server instead of the database.
Advantages of buffering
    Table buffering increases the performance when the records of the table are read.
    As records of a buffered table are read directly from the local buffer of the application server on which the accessing transaction is running, time required to access data is greatly reduced. The access improves by a factor of 10 to 100 depending on the structure of the table and on the exact system configuration.
    If the storage requirements in the buffer increase due to further data, the data that has not been accessed for the longest time is displaced. This displacement takes place asynchronously at certain times which are defined dynamically based on the buffer accesses. Data is only displaced if the free space in  the buffer is less than a predefined value or the quality of the access is not satisfactory at this time.
    Entering $TAB in the command field resets the table buffers on the corresponding application server. Only use this command if there are inconsistencies in the buffer. In large systems, it can take several hours to fill the buffers. The performance is considerably reduced during this time.
Concept of buffering
    The R/3 System manages and synchronizes the buffers on the individual application servers. If an application program accesses data of a table, the database interfaces determines whether this data lies in the buffer of the application server. If this is the case, the data is read directly from the buffer. If the data is not in the buffer of the application server, it is read from the database and loaded into the buffer. The buffer can therefore satisfy the next access to this data.
    The buffering type determines which records of the table are loaded into the buffer of the application server when a record of the table is accessed. There are three different buffering types.
    With full buffering, all the table records are loaded into the buffer when one record of the table is accessed.
    With generic buffering, all the records whose left-justified part of the key is the same are loaded into the buffer when a table record is accessed.
    With single-record buffering, only the record that was accessed is loaded into the buffer.
Buffering types
    With full buffering, the table is either completely or not at all in the buffer. When a record of the table is accessed, all the records of the table are loaded into the buffer.
    When you decide whether a table should be fully buffered, you must take the table size, the number of read accesses and the number of write accesses into consideration. The smaller the table is, the more frequently it is read and the less frequently it is written, the better it is to fully buffer the table.
    Full buffering is also advisable for tables having frequent accesses to records that do not exist. Since all the records of the table reside in the buffer, it is already clear in the buffer whether or not a record exists.
    The data records are stored in the buffer sorted by table key. When you access the data with SELECT, only fields up to the last specified key field can be used for the access. The left-justified part of the key should therefore be as large as possible for such accesses. For example, if the first key field is not defined, the entire table is scanned in the buffer. Under these circumstances, a direct access to the database could be more efficient if there is a suitable secondary index there.
    With generic buffering, all the records whose generic key fields agree with this record are loaded into the buffer when one record of the table is accessed. The generic key is a left-justified part of the primary key of the table that must be defined when the buffering type is selected. The generic key should be selected so that the generic areas are not too small, which would result in too many generic areas. If there are only a few records for each generic area, full buffering is usually preferable for the table. If you choose too large a generic key, too much data will be invalidated if there are changes to table entries, which would have a negative effect on the performance.
    A table should be generically buffered if only certain generic areas of the table are usually needed for processing.
    Client-dependent, fully buffered tables are automatically generically buffered. The client field is the generic key. It is assumed that not all of the clients are being processed at the same time on one application server. Language-dependent tables are a further example of generic buffering. The generic key includes all the key fields up to and including the language field.
    The generic areas are managed in the buffer as independent objects. The generic areas are managed analogously to fully buffered tables. You should therefore also read the information about full buffering.
    Single-record buffering is recommended particularly for large tables in which only a few records are accessed repeatedly with SELECT SINGLE. All the accesses to the table that do not use SELECT SINGLE bypass the buffer and directly access the database.
If you access a record that was not yet buffered using SELECT SINGLE, there is a database access to load the record. If the table does not contain a record with the specified key, this record is recorded in the buffer as non-existent. This prevents a further database access if you make another access with the same key.
    You only need one database access to load a table with full buffering, but you need several database accesses with single-record buffering. Full buffering is therefore generally preferable for small tables that are frequently accessed.
Synchronizing local buffers
    The table buffers reside locally on each application server in the system. However, this makes it necessary for the buffer administration to transfer all changes made to buffered objects to all the application servers of the system.
    If a buffered table is modified, it is updated synchronously in the buffer of the application server from which the change was made. The buffers of the whole network, that is, the buffers of all the other application servers, are synchronized with an asynchronous procedure.
    Entries are written in a central database table (DDLOG) after each table modification that could be buffered. Each application server reads these entries at fixed time intervals.
    If entries are found that show a change to the data buffered by this server, this data is invalidated. If this data is accessed again, it is read directly from the database. In such an access, the table can then be loaded to the buffer again.

  • How to handle Index Organized Tables in schema capture?

I am trying to set up schema-level Streams between two databases, source 10.2.0.4 and target 11g, one-way Streams.
The source has index-organized tables with a ROWID column; Oracle Streams doesn't support the ROWID data type.
Is there a workaround?
    Thanks

At the moment, you can't do it declaratively. You have to do it in an event handler. Assuming you have the userid set up as a query parameter in the view object, something like this should get you started:
public EventResult handleEvent(
    BajaContext context, Page page, PageEvent event) throws Throwable
{
    HttpSession session = context.getServletRequest().getSession(true);
    ViewObject view = ServletBindingUtils.getViewObject(context);
    String userid = (String) session.getAttribute("userid"); // session attributes are Objects, so cast
    view.setWhereClauseParam(0, userid);
    view.executeQuery();
    return null;
}

Why do we need varrays, index-by tables, PL/SQL tables etc. when cursors are available?

    hi,
Why do we need composite data types like index-by tables, varrays etc. when we have cursors and can do all these things with a cursor?
    Thanks
    Ram

I would have to create a collection type for each column in the select statement.
No.
    SQL> select count(*) from scott.emp ;
      COUNT(*)
            14
    1 row selected.
    SQL> DECLARE
      2      TYPE my_Table IS TABLE OF scott.emp%ROWTYPE;
      3      my_tbl my_Table;
      4  BEGIN
      5      SELECT * BULK COLLECT INTO my_tbl FROM scott.emp;
      6      dbms_output.put_line('Bulk Collect rows:'||my_tbl.COUNT) ;
      7  END;
      8  /
    Bulk Collect rows:14
    PL/SQL procedure successfully completed.
    SQL> disc
    Disconnected from Oracle9i Enterprise Edition Release 9.2.0.7.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.7.0 - Production
SQL>
Message was edited by:
Kamal Kishore

  • Reg: PLS-00418: array bind type must match PL/SQL table row type error

    I am trying to access a table of records through JDBC OracleCallableStatement. I am able to do it fine for all mappings except for the ones below
    TYPE CAT_CD_TYPE IS TABLE OF A.B %TYPE INDEX BY BINARY_INTEGER;
    TYPE ORG_CD_TYPE IS TABLE OF C.D %TYPE INDEX BY BINARY_INTEGER;
Column B is CHAR(1) and Column D is CHAR(2). I am trying to register the out parameters of the OracleCallableStatement as
    cstmt.registerIndexTableOutParameter(2, 2000, OracleTypes.CHAR, 0);
    cstmt.registerIndexTableOutParameter(3, 2000, OracleTypes.CHAR, 0);
    All the other mappings work fine. These two fail with the error
    SQLException in invokeDBPackage() : ORA-06550: line 1, column 32:
    PLS-00418: array bind type must match PL/SQL table row type
    ORA-06550: line 1, column 35:
    PLS-00418: array bind type must match PL/SQL table row type
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    I tried other OracleTypes mappings too but no luck so far.
    Any advice on this would be greatly appreciated.

    Hi,
    I'm not sure it's reasonable to expect someone to sift through that much stuff.
    Which parameter is it having a problem with?
    Can you modify the following to reproduce the behavior?
    Thanks
    Greg
    create package mypack5 as
    TYPE v2array is table of emp.ename%type index by BINARY_INTEGER;
    PROCEDURE test_it(thearray IN v2array, numrecs out number);
    END;
    CREATE or replace PACKAGE BODY MYPACK5 AS
    PROCEDURE test_it(thearray IN v2array, numrecs out number)
    IS
    begin
    numrecs := thearray.count;
    END;
    END;
    using System;
    using System.Data;
    using Oracle.DataAccess.Client;
public class indexby
{
    public static void Main()
    {
        OracleConnection con = new OracleConnection("data source=orcl;user id=scott;password=tiger;");
        con.Open();
        OracleCommand cmd = new OracleCommand("mypack5.test_it", con);
        cmd.CommandType = CommandType.StoredProcedure;
        OracleParameter Param1 = cmd.Parameters.Add("param1", OracleDbType.Varchar2);
        Param1.Direction = ParameterDirection.Input;
        Param1.CollectionType = OracleCollectionType.PLSQLAssociativeArray;
        Param1.Size = 3;
        string[] vals = { "foo", "bar", "baz" };
        Param1.Value = vals;
        OracleParameter Param2 = cmd.Parameters.Add("param2", OracleDbType.Int32, DBNull.Value, ParameterDirection.Output);
        cmd.ExecuteNonQuery();
        Console.WriteLine("{0} records passed in", Param2.Value);
        con.Close();
    }
}

  • PLS-00418: array bind type must match PL/SQL table row type

If a PL/SQL table is indexed by CHAR and is a parameter in a stored program, we are not able to call the stored program from the Java code.
    We get the following error code.
    java.sql.SQLException: ORA-06550: line 1, column 62:
    PLS-00418: array bind type must match PL/SQL table row type
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    But if we change the CHAR into VARCHAR2 then it works.
    We are using Oracle9i Enterprise Edition Release 9.2.0.5.0 -64bit Production ,
    JServer Release 9.2.0.5.0 - Production
    and JDK1.4.
    Thanks
    Push..

    Hi,
    I'm not sure it's reasonable to expect someone to sift through that much stuff.
    Which parameter is it having a problem with?
    Can you modify the following to reproduce the behavior?
    Thanks
    Greg
    create package mypack5 as
    TYPE v2array is table of emp.ename%type index by BINARY_INTEGER;
    PROCEDURE test_it(thearray IN v2array, numrecs out number);
    END;
    CREATE or replace PACKAGE BODY MYPACK5 AS
    PROCEDURE test_it(thearray IN v2array, numrecs out number)
    IS
    begin
    numrecs := thearray.count;
    END;
    END;
    using System;
    using System.Data;
    using Oracle.DataAccess.Client;
public class indexby
{
    public static void Main()
    {
        OracleConnection con = new OracleConnection("data source=orcl;user id=scott;password=tiger;");
        con.Open();
        OracleCommand cmd = new OracleCommand("mypack5.test_it", con);
        cmd.CommandType = CommandType.StoredProcedure;
        OracleParameter Param1 = cmd.Parameters.Add("param1", OracleDbType.Varchar2);
        Param1.Direction = ParameterDirection.Input;
        Param1.CollectionType = OracleCollectionType.PLSQLAssociativeArray;
        Param1.Size = 3;
        string[] vals = { "foo", "bar", "baz" };
        Param1.Value = vals;
        OracleParameter Param2 = cmd.Parameters.Add("param2", OracleDbType.Int32, DBNull.Value, ParameterDirection.Output);
        cmd.ExecuteNonQuery();
        Console.WriteLine("{0} records passed in", Param2.Value);
        con.Close();
    }
}

  • How can I use Indexed Temp tables to optimize performance?

Instead of joining two CTEs together, I am now going to attempt to join two indexed temp tables.
The first temp table is a set of encounters and returns 147 rows in 2 seconds. The second temp table is progress notes for all those encounters, returning 136 rows in 18 seconds. Joining the two indexed temp tables comes back in 3 1/2 minutes.
What can I do to optimize performance?
Code is below. Thanks in advance!
    if object_id('tempdb..#arpb') is not null begin drop table #arpb end;
    if object_id('tempdb..#progress_notes') is not null begin drop table #progress_notes end;
    SELECT DISTINCT
    ARPB.PAT_ENC_CSN_ID, ARPB.SERVICE_DATE, ARPB.BILLING_PROV_ID, ARPB.DEPARTMENT_ID,
    SER.PROV_NAME, DEP.DEPARTMENT_NAME, E.APPT_TIME, ZC_APPT.NAME AS APPT_STATUS
    INTO #arpb
    FROM ARPB_TRANSACTIONS ARPB
    LEFT OUTER JOIN CLARITY_SER AS SER ON SER.PROV_ID = ARPB.BILLING_PROV_ID
    LEFT OUTER JOIN CLARITY_DEP AS DEP ON DEP.DEPARTMENT_ID = ARPB.DEPARTMENT_ID
    LEFT OUTER JOIN PAT_ENC AS E ON E.PAT_ENC_CSN_ID = ARPB.PAT_ENC_CSN_ID
    LEFT OUTER JOIN ZC_APPT_STATUS AS ZC_APPT ON ZC_APPT.APPT_STATUS_C = E.APPT_STATUS_C
    WHERE ARPB.DEPARTMENT_ID = xxxx
    AND ARPB.TX_TYPE_C = 1
    AND ARPB.VOID_DATE IS NULL
    AND ARPB.BILLING_PROV_ID = xxxx
    AND ARPB.SERVICE_DATE BETWEEN
    'xxxx' AND 'xxxxx'
    create clustered index idx_temp_arpb on #arpb(PAT_ENC_CSN_ID)
    SELECT DISTINCT
    ARPB.PAT_ENC_CSN_ID,
    ZCNT.NAME AS NOTE_TYPE, PR.NAME AS PURPOSE, HNO_INFO.NOTE_ID,
    EMP.NAME AS EMP_NAME, STS.NAME AS NOTE_STATUS
    INTO #progress_notes
    FROM ARPB_TRANSACTIONS ARPB
    LEFT OUTER JOIN ENC_NOTE_INFO AS ENC_NOTE_INFO ON ARPB.PAT_ENC_CSN_ID = ENC_NOTE_INFO.PAT_ENC_CSN_ID
    LEFT OUTER JOIN HNO_INFO AS HNO_INFO ON ENC_NOTE_INFO.ENCOUNTER_NOTE_ID = HNO_INFO.NOTE_ID
    LEFT OUTER JOIN ZC_NOTE_TYPE AS ZCNT ON ZCNT.NOTE_TYPE_C = ENC_NOTE_INFO.NOTE_TYPE_C
    LEFT OUTER JOIN ZC_NOTE_PURPOSE AS PR ON PR.NOTE_PURPOSE_C = HNO_INFO.NOTE_PURPOSE_C
    LEFT OUTER JOIN CLARITY_EMP AS EMP ON EMP.EPIC_EMP_ID = HNO_INFO.CURRENT_AUTHOR_ID
    LEFT OUTER JOIN ZC_NOTE_STATUS AS STS ON STS.NOTE_STATUS_C = ENC_NOTE_INFO.NOTE_STATUS_C
    WHERE ARPB.DEPARTMENT_ID = xxxx
    AND ARPB.TX_TYPE_C = 1
    AND ARPB.VOID_DATE IS NULL
    AND ARPB.BILLING_PROV_ID = xxxx
    AND ZCNT.NAME = 'xxxx'
    AND ARPB.SERVICE_DATE BETWEEN
    'xxxx' AND 'xxxx'
    AND PR.NAME = 'xxxxx'
    create index idx_temp_pn on #progress_notes(PAT_ENC_CSN_ID)
    SELECT
    #arpb.PAT_ENC_CSN_ID,
    #arpb.APPT_TIME,
    #arpb.SERVICE_DATE,
    #arpb.BILLING_PROV_ID,
    #arpb.PROV_NAME,
    #arpb.DEPARTMENT_ID,
    #arpb.DEPARTMENT_NAME,
    #progress_notes.EMP_NAME as NoteEmp,
    CASE #progress_notes.NOTE_ID
    WHEN null THEN 'No Progress Note'
    ELSE #progress_notes.NOTE_ID
    END as NoteId,
    #progress_notes.NOTE_STATUS,
    #progress_notes.NOTE_TYPE,
    #progress_notes.PURPOSE
    FROM #ARPB
    LEFT JOIN #PROGRESS_NOTES ON #ARPB.PAT_ENC_CSN_ID = #PROGRESS_NOTES.PAT_ENC_CSN_ID
    To err is human, to REALLY foul things up requires a computer

Something is not right here. What is the type of that column in both temp tables?
I would assume that joining 147 and 136 rows should take less than 1 second even without indexes.
How did you measure the time?
BTW, I suggest using aliases in your last query - it will be easier to read. Also use the COALESCE function instead of CASE for the NOTE_ID (BTW, is NOTE_ID a character column? Otherwise you are supposed to get an error).
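A sketch of that suggested rewrite of the final query, with aliases and COALESCE (this assumes NOTE_ID is a character column):
SELECT
    a.PAT_ENC_CSN_ID,
    a.APPT_TIME,
    a.SERVICE_DATE,
    a.BILLING_PROV_ID,
    a.PROV_NAME,
    a.DEPARTMENT_ID,
    a.DEPARTMENT_NAME,
    pn.EMP_NAME AS NoteEmp,
    -- CASE ... WHEN NULL never matches; COALESCE handles NULL correctly
    COALESCE(pn.NOTE_ID, 'No Progress Note') AS NoteId,
    pn.NOTE_STATUS,
    pn.NOTE_TYPE,
    pn.PURPOSE
FROM #arpb AS a
LEFT JOIN #progress_notes AS pn
    ON a.PAT_ENC_CSN_ID = pn.PAT_ENC_CSN_ID;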
    For every expert, there is an equal and opposite expert. - Becker's Law
