Performance on Z*** Tables

Hi,
How can I increase performance when creating Z**** tables?

Hi
There is no limit on the number of primary key fields, but I have come across a situation where the primary key becomes an issue once its total length exceeds 255 characters.
When you say length, do you mean the sum of the lengths of each field, in table order, that make up the primary key?
You do not need to supply all the primary key fields in a SELECT statement, but you will get the best performance if you do.
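A minimal sketch of that point about supplying the full primary key, assuming an illustrative custom table ZORDERS with key fields MANDT and VBELN and a non-key field NETWR (these names are not from this thread):

DATA: ls_order  TYPE zorders,
      lt_orders TYPE STANDARD TABLE OF zorders.

SELECT SINGLE * FROM zorders
  INTO ls_order
  WHERE vbeln = '0000004711'.    " full primary key supplied: the database
                                 " resolves the row via the primary index
                                 " (the client field MANDT is added automatically)

SELECT * FROM zorders
  INTO TABLE lt_orders
  WHERE netwr > 1000.            " only a non-key field supplied: still valid,
                                 " but may fall back to a secondary index
                                 " or a full table scan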

Similar Messages

  • How to improve Query performance on large table in MS SQL Server 2008 R2

    I have a table with 20 million records. What is the best option to improve query performance on this table: partitioning the table into filegroups, or splitting the table into multiple smaller tables?

    Hi bala197164,
    First, I want to point out that both partitioning the table into filegroups and splitting the table into multiple smaller tables can improve query performance; they simply fit different situations. For example, suppose our table has one hundred columns and some of them are not directly related to the table's object (say a table named userinfo that stores user information and also has address_street, address_zip and address_province columns; we can create a new table named Address and add a foreign key in the userinfo table that references it). In that situation, splitting the large table into smaller, individual tables lets queries that access only a fraction of the data run faster because there is less data to scan. Another situation is when the table's records can be grouped easily, for example by a column named year that stores the product release date; here we can partition the table into filegroups to improve query performance. Usually we apply both methods together. Additionally, we can add indexes to the table to improve query performance. For more detailed information, please refer to the following documents:
    Partitioning:
    http://msdn.microsoft.com/en-us/library/ms178148.aspx
    CREATE INDEX (Transact-SQL):
    http://msdn.microsoft.com/en-us/library/ms188783.aspx
    Allen Li
    TechNet Community Support

  • Performance on huge tables

    Hi,
    I have an application that should contain 100 million records.
    Each record has a primary key.
    The application fetches a row using the record primary key.
    Can anyone tell me what is the problem when using such a big table?
    What is the performance of the index on 100 million records?
    What is the performance of updates?
    Thanks
    dyahav

    user10952094 wrote:
    Can anyone tell me what is the problem when using such a big table?
    What is the performance of the index on 100 million records?
    What is the performance of updates?
    It is not about the size of the table.
    It is about the size of the I/O.
    In other words, how efficient the I/O paths are for getting to the required rows. A small table can cause worse performance problems than a table 10x its size due to the way the smaller table has been defined and is used.
    Simple (real world) example:
    SQL> select count(*) from daily_xxxxx;
      COUNT(*)
    2255362806
    Elapsed: 00:00:12.03
    SQL>
    Same database, a select against the data dictionary (containing only a couple of rows in comparison):
    SQL> select count(*) from all_objects;
      COUNT(*)
         50908
    Elapsed: 00:00:49.17
    The difference is caused by the amount and nature of I/O that was done - not by the sizes of the tables.
    There are however certain features in Oracle that can be used to effectively scale large tables for performance... and make data management significantly easier. The Partitioning Option is an Oracle Enterprise Edition feature that can be considered an essential, if not mandatory, feature for effectively dealing with and scaling very large tables (VLTs).
    However, such a feature aside, the same rules that give good performance on a small table apply to a large table. So do not treat a VLT differently. The fundamentals for performance and scalability do not change.

  • Performance difference between tables and materialized views

    Hi,
    I created a materialized view on a query that involves a partitioned table.
    When I used the same query to create a table (create table xyz as select * from (the query)), the table was created quickly.
    Does that mean that, performance-wise, creating a table is faster than creating/refreshing the materialized view, or is that due to the refresh method I use? Currently I use a complete refresh.

    Well, for starters, if you created the materialized view first and then the standard table, the data for the second one has already been fetched recently and so is cached, which reduces your I/O and therefore makes it quicker. There are also other factors, such as the materialized view creating the internal structures required to allow refreshes to be done quickly, such as the primary key, which you have not created on your second creation.
    What you have shown is that two completely different statements, run at different times, appear to operate at different speeds. It is not a comparison of whether the materialized view is slower or quicker than the create table statement.

  • Performance with internal tables

    Hi everybody,
    I'm having trouble with the performance of a report.
    The report selects data out of many different tables and merges it into one extraction file. (I would really like it to be executable as a dialog task, because I don't want to write the file on the application server.)
    I've done what I could using runtime analysis to improve the performance of the DB accesses. Now 97% of the runtime is consumed by ABAP.
    I'm using several internal tables which will have many entries, and at the moment I'm using nested loops to gather the information out of the internal tables into one extraction file. (Reading the internal tables with a key is not always possible; I do this whenever possible.)
    I've tried to use LOOP ... ASSIGNING <fs> instead of LOOP ... INTO, but it doesn't really help much.
    Any other suggestions?
    Thank you for your help, regards, Kathrin!

    Hi,
    Some of these might help improve the performance:
    1. Read the internal table WITH KEY ... BINARY SEARCH (see the sketch after this list).
    2. When comparing two internal tables, use the kernel method, e.g. IF I_TAB1[] = I_TAB2[]. ENDIF.
    3. Instead of appending records within a LOOP/ENDLOOP, use APPEND LINES OF.
    4. When populating different internal tables, try to create them with key fields, which makes a big difference when accessing the data.
    5. Sort the internal tables.
    6. Use CASE/ENDCASE in place of IF/ENDIF.
    7. When making values negative, use 0 - <field> in place of <field> * -1.
    8. If the field structures of two internal tables are similar, append records with I_TAB2[] = I_TAB1[].
    9. After you have completely finished with an internal table, release its resources using FREE/REFRESH <internal table>.
    10. For data extraction, select only the fields you need from the database.
    11. Try to avoid FOR ALL ENTRIES. If you do use it, make sure the reference internal table has entries.
    12. Try an inner join on foreign key fields.
    13. To get sums or counts, use aggregate functions.
    14. Use views where appropriate.
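    A minimal sketch of tips 1, 3, 5, 8 and 9, assuming an illustrative structure TY_DOC (the names below are not from the post):
    TYPES: BEGIN OF ty_doc,
             belnr TYPE belnr_d,
             dmbtr TYPE dmbtr,
           END OF ty_doc.
    DATA: lt_doc1 TYPE STANDARD TABLE OF ty_doc,
          lt_doc2 TYPE STANDARD TABLE OF ty_doc,
          ls_doc  TYPE ty_doc.
    SORT lt_doc1 BY belnr.                      " tip 5: sort once
    READ TABLE lt_doc1 INTO ls_doc
         WITH KEY belnr = '0000000001'
         BINARY SEARCH.                         " tip 1: read with key, binary search
    IF sy-subrc = 0.
      " row found without scanning the whole table
    ENDIF.
    lt_doc2[] = lt_doc1[].                      " tip 8: copy compatible tables in one statement
    APPEND LINES OF lt_doc1 TO lt_doc2.         " tip 3: append a block of rows at once
    FREE: lt_doc1, lt_doc2.                     " tip 9: release the memory when done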
    Hope this helps.
    Gopakumar

  • HOW TO USE A SINGLE PERFORM FOR VARIOUS TABLES ?

    perform test TABLES t_header.
    select
           KONH~KNUMH
           konh~datab
           konh~datbi
           konp~kbetr
           konp~konwa
           konp~kpein
           konp~kmein
           KONP~KRECH
           FROM konh INNER JOIN konp
                  ON konp~knumh = konh~knumh
           into table iTABXXX
            "ANY TEMPORARY INTERNAL TABLE.
           for all entries in t_header
           where
                 konh~kschl = t_header-kschl
             AND konh~knumh = t_header-knumh.
    endform.
    How can I use the above PERFORM for various internal tables with DIFFERENT LINE TYPES that all have the fields KSCHL and KNUMH?

    You can use a single PERFORM.
    Just see this example; hope this is what you are expecting.
    tables : pa0001.
    parameters : p_pernr like pa0001-pernr.
    data : itab1 like pa0001 occurs 0 with header line.
    data : itab2 like pa0002 occurs 0 with header line.
    perform get_data tables itab1 itab2.
    if not itab1[] is initial.
    loop at itab1.
    write :/ itab1-pernr.
    endloop.
    endif.
    if not itab2[] is initial.
    loop at itab2.
    write :/ itab2-pernr.
    endloop.
    endif.
    *&---------------------------------------------------------------------*
    *&      Form  get_data
    *&---------------------------------------------------------------------*
    *       text
    *      -->P_ITAB1  text
    *      -->P_ITAB2  text
    form get_data  tables   itab1 structure pa0001
                            itab2 structure pa0002.
    select * from pa0001 into table itab1 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
    select * from pa0002 into table itab2 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
    endform.                    " get_data
    Regards
    vasu
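    The example above still fixes the line types (PA0001 and PA0002) in the FORM's TABLES parameters. If the internal tables really have different line types that only share the fields KSCHL and KNUMH, one possible approach is sketched below; the helper names GET_COND, LTY_KEY, LT_KEYS and LT_COND are illustrative, while KONH/KONP and the field list come from the original question:
    FORM get_cond TABLES pt_any TYPE STANDARD TABLE.  " accepts any line type
      TYPES: BEGIN OF lty_key,
               kschl TYPE konh-kschl,
               knumh TYPE konh-knumh,
             END OF lty_key,
             BEGIN OF lty_cond,
               knumh TYPE konh-knumh,
               datab TYPE konh-datab,
               datbi TYPE konh-datbi,
               kbetr TYPE konp-kbetr,
               konwa TYPE konp-konwa,
               kpein TYPE konp-kpein,
               kmein TYPE konp-kmein,
               krech TYPE konp-krech,
             END OF lty_cond.
      DATA: lt_keys TYPE STANDARD TABLE OF lty_key,
            ls_key  TYPE lty_key,
            lt_cond TYPE STANDARD TABLE OF lty_cond.
      FIELD-SYMBOLS: <ls_row>   TYPE any,
                     <lv_kschl> TYPE any,
                     <lv_knumh> TYPE any.
      " Copy the two shared fields out of whatever line type was passed in.
      LOOP AT pt_any ASSIGNING <ls_row>.
        ASSIGN COMPONENT 'KSCHL' OF STRUCTURE <ls_row> TO <lv_kschl>.
        CHECK sy-subrc = 0.
        ASSIGN COMPONENT 'KNUMH' OF STRUCTURE <ls_row> TO <lv_knumh>.
        CHECK sy-subrc = 0.
        ls_key-kschl = <lv_kschl>.
        ls_key-knumh = <lv_knumh>.
        APPEND ls_key TO lt_keys.
      ENDLOOP.
      CHECK lt_keys IS NOT INITIAL.  " never run FOR ALL ENTRIES on an empty table
      SELECT konh~knumh konh~datab konh~datbi
             konp~kbetr konp~konwa konp~kpein konp~kmein konp~krech
        FROM konh INNER JOIN konp ON konp~knumh = konh~knumh
        INTO TABLE lt_cond
        FOR ALL ENTRIES IN lt_keys
        WHERE konh~kschl = lt_keys-kschl
          AND konh~knumh = lt_keys-knumh.
    ENDFORM.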

  • Regarding performance on cluster tables.vvv.urgent!

    Friends,
    I know that cluster tables cannot be joined with transparent tables.
    However, I need a performance improvement for the following code.
    If possible, is there a way to join BKPF and BSEG to improve performance? Can we create a view for BKPF and BSEG, and if yes, how?
    Please modify the code below to improve its performance.
      START-OF-SELECTION.
      SELECT bukrs belnr gjahr budat FROM bkpf INTO TABLE i_bkpf
      WHERE bukrs = p_bukrs AND "COMPANY CODE
              gjahr = p_gjahr AND "FISCAL YEAR
              budat IN s_budat. "POSTING DATE IN DOC
      IF sy-subrc = 0.
      SELECT bukrs belnr gjahr hkont shkzg dmbtr FROM bseg INTO TABLE
            i_bseg FOR ALL ENTRIES IN  i_bkpf
            WHERE bukrs = i_bkpf-bukrs AND "COMPANY CODE
                  belnr = i_bkpf-belnr AND "A/CING DOC NO
                  gjahr = i_bkpf-gjahr AND "FISCAL YEAR
                  hkont = p_hkont. "General Ledger Account"
        IF sy-subrc = 0.
         SELECT bukrs belnr gjahr hkont shkzg dmbtr FROM bseg INTO TABLE
                i_bseg1 FOR ALL ENTRIES IN  i_bseg
                WHERE bukrs = i_bseg-bukrs AND  "COMPANY CODE
                      belnr = i_bseg-belnr AND  "A/CING DOC NO
                      gjahr = i_bseg-gjahr.   "FISCAL YEAR
        ENDIF.
      ENDIF.
      IF NOT i_bseg1[] IS INITIAL.
        LOOP AT i_bseg1.
          IF i_bseg1-hkont = p_hkont AND i_bseg1-shkzg = 'S'.
            v_sumgl = v_sumgl + i_bseg1-dmbtr.
          ELSEIF i_bseg1-hkont = p_hkont AND i_bseg1-shkzg = 'H'.
            v_sumgl = v_sumgl - i_bseg1-dmbtr.
          ELSEIF i_bseg1-hkont NE p_hkont .
            IF i_bseg1-shkzg = 'H'.
              i_bseg1-dmbtr = - i_bseg1-dmbtr.
            ENDIF.
            i_alv-hkont = i_bseg1-hkont.
            i_alv-dmbtr = i_bseg1-dmbtr.
            APPEND i_alv.
            v_sumoffset = v_sumoffset + i_bseg1-dmbtr.
          ENDIF.
        ENDLOOP.
    regards
    Essam.([email protected])

    Hi there ...
    I have read the note - that's where I found the link to the trace note 286496.1 - on how to set up a trace.
    But I still need an explanation for the methods (1, 2, 4 etc.).
    regards
    Mette

  • Performance problem with table COSS...

    Hi
    Has anyone encountered performance problems with these tables: COSS, COSB, COEP?
    Best Regards

    >
    gsana sana wrote:
    > Hi Guru's
    >
    > this is the select query which is taking much time in Production, so please help me to improve the performance with BSEG.
    >
    > this is my select query:
    >
    > select  bukrs
    >               belnr
    >               gjahr
    >               bschl
    >               koart
    >               umskz
    >               shkzg
    >               dmbtr
    >               ktosl
    >               zuonr
    >               sgtxt
    >               kunnr
    >         from  bseg
    >         into  table gt_bseg1
    >          for  all entries in gt_bkpf
    >        where  bukrs eq p_bukrs
    >          and  belnr eq gt_bkpf-belnr
    >          and  gjahr eq p_gjahr
    >          and  buzei in gr_buzei
    >          and  bschl eq  '40'
    >          and  ktosl  ne  'BSP'.
    >
    > UR's
    > GSANA
    Hi,
    This is what I know; if any expert thinks it is incorrect, please do correct me.
    BSEG is a cluster table with BUKRS, BELNR, GJAHR and BUZEI as its key; the other fields are stored in the database as raw data, so the application server first has to unpack that raw data when other fields are used in the WHERE condition. Hence I suggest using only the key fields up to BUZEI in the WHERE condition and filtering the other conditions at internal table level, for example with a DELETE statement (a small sketch follows below). Hope it helps.
    Regards,
    Abraham
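    A minimal sketch of that suggestion, reusing the names from the quoted code (GT_BKPF, GT_BSEG1, P_BUKRS, P_GJAHR, GR_BUZEI) and assuming GT_BSEG1 contains the listed fields:
    IF gt_bkpf[] IS NOT INITIAL.
      SELECT bukrs belnr gjahr bschl koart umskz shkzg
             dmbtr ktosl zuonr sgtxt kunnr
        FROM bseg
        INTO TABLE gt_bseg1
        FOR ALL ENTRIES IN gt_bkpf
        WHERE bukrs = p_bukrs          " cluster key field 1
          AND belnr = gt_bkpf-belnr    " cluster key field 2
          AND gjahr = p_gjahr          " cluster key field 3
          AND buzei IN gr_buzei.       " cluster key field 4
      " Apply the remaining, non-key conditions on the application server.
      DELETE gt_bseg1 WHERE bschl <> '40' OR ktosl = 'BSP'.
    ENDIF.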

  • Unable to perform "insert into table select ...." using DBF_JDBC30

    Below I post my code:
    import java.util.*;
    import java.sql.*;
    import java.io.*;
    import com.hxtt.sql.*;
    public class backofficeDbfVerification {
        Connection connection;
        public backofficeDbfVerification(String path) throws SQLException {
            Properties prop = new Properties();
            prop.setProperty("user", "");
            prop.setProperty("OtherExtensions", "true");
            prop.setProperty("Version Number", "03");
            try {
                Class.forName("com.hxtt.sql.dbf.DBFDriver").newInstance();
                connection = DriverManager.getConnection("jdbc:DBF:/" + path, prop);
            } catch (ClassNotFoundException classnotfoundexception) {
                System.out.println("could not find HXTT class, make sure the classpath has been defined in your system!");
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
        public void createBadRecord(String table, String newtable) throws SQLException {
            Statement stmt = connection.createStatement();
            String insert = "insert into \"" + newtable + "\" select * from \"" + table + "\"";
            boolean bInsert = stmt.execute(insert);
            stmt.close();
        }
        public static void main(String args[]) {
            String path = "C:\\MASTER\\SOURCEDATA\\AREA";
            try {
                backofficeDbfVerification test = new backofficeDbfVerification(path);
                test.createBadRecord("source\\area\\123.AA2", "source\\area\\others\\89964568.AA1");
            } catch (SQLException sqx) {
                System.out.println("Error " + sqx.getMessage());
            }
        }
    }
    I am trying to perform a simple query, insert into tablea select * from tableb, where both tables have the same structure. The source table has only 9 rows, but the process ran for a very long time; no error message was raised, it just ran and never ended (which upset me for the whole day).
    I have tried another simple query, insert into table1 values (1,2), using the same driver, and it executed successfully.
    Is it because of a function limitation, since I am using an evaluation copy?

    Copied answer from HXTT's support forum:
    No function limitation. I just tested:
    create table testa1 select * from test;
    insert into testa1 select * from test;
    Passed.
    I also ran your backofficeDbfVerification.java sample. Passed too. I'm using the same code as you. The only difference is that my 89964568.AA1 and 123.AA2 are simulated table files. If possible, please zip your 89964568.AA1 and 123.AA2 and email them to [email protected] Thanks.
    BTW, I pasted the slightly modified backofficeDbfVerification.java below:
    import java.sql.*;
    import java.util.*;
    public class backofficeDbfVerification {
        Connection connection;
        public backofficeDbfVerification(String path) throws SQLException {
            Properties prop = new Properties();
            prop.setProperty("user", "");
            prop.setProperty("OtherExtensions", "true");
            prop.setProperty("Version Number", "03");
            try {
                Class.forName("com.hxtt.sql.dbf.DBFDriver").newInstance();
                connection = DriverManager.getConnection("jdbc:DBF:/" + path, prop);
            } catch (ClassNotFoundException classnotfoundexception) {
                System.out.println("could not find HXTT class, make sure the classpath has been defined in your system!");
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
        public void createBadRecord(String table, String newtable) throws SQLException {
            Statement stmt = connection.createStatement();
            String insert = "insert into \"" + newtable + "\" select * from \"" + table + "\"";
            boolean bInsert = stmt.execute(insert);
            stmt.close();
        }
        public static void main(String args[]) {
            String path = "f:\\dbffiles";
            try {
                backofficeDbfVerification test = new backofficeDbfVerification(path);
                test.createBadRecord("source\\area\\123.AA2", "source\\area\\others\\89964568.AA1");
            } catch (SQLException sqx) {
                System.out.println("Error " + sqx.getMessage());
            }
        }
    }

  • Query performance on same table with many DML operations

    Hi all,
    I have one table with 100 rows of data. After that, I inserted, deleted and modified data many times.
    The select statement after the DML operations takes much more time than before the DML operations (there is not much difference in the data).
    If I create the same table again with the same data and fire the same select statement, it takes less time.
    My question is: is there any command, like compress or re-indexing or something like that, to improve the performance without creating the table again?
    Thanks in advance,
    Pal

    Try searching "rebuilding indexes" on http://asktom.oracle.com. You will get lots of hits and many lively discussions. Certainly Tom's opinion is that rebuilds are very rarely required.
    As far as I know, Oracle has always re-used deleted rows in indexes as long as the new row belongs in that place in the index. The only situation I am aware of where deleted rows do not get re-used is where you have a monotonically increasing key (e.g. one generated by a sequence), and most, but not all, of the older rows are deleted over time.
    For example, if you had a table like this, where seq_no is populated by a sequence and indexed:
    seq_no         NUMBER
    processed_flag VARCHAR2(1)
    trans_date     DATE
    and then did deletes like:
    DELETE FROM t
    WHERE processed_flag = 'Y' and
          trans_date <= ADD_MONTHS(sysdate, -24);
    that deleted 99% of the rows in the time period that were processed, leaving only a few, then the index leaf blocks would be very sparsely populated (i.e. lots of deleted rows in them), but since the current seq_no values are much larger than those old ones remaining, the space could not be re-used. Any leaf block that had all of its rows deleted would be reused in another part of the index.
    HTH
    John

  • SELECT query performance : One big table Vs many small tables

    Hello,
    We are using BDB 11g with SQLite support. I have a question about 'select' query performance when we have one huge table vs. multiple small tables.
    Basically, in our application we need to run the select query multiple times, and today we have one huge table. Do you think breaking it into multiple small tables will help?
    For test purposes we tried creating multiple tables, but the performance of the 'select' query was more or less the same. Would that be because all tables map to only one database in the backend as key/value pairs, so a lookup (select query) on a small table or a big table won't make a difference?
    Thanks.

    Hello,
    There is some information on this topic in the FAQ at:
    http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
    If this does not address your question, please just let me know.
    Thanks,
    Sandra

  • Performance issues involving tables S031 and S032

    Hello gurus,
    I am having some performance issues. The program involves accessing data from S031 and S032.  I have pasted the SELECT statements below.  I have read through the forums for past postings regarding performance, but I wanted to know if there is anything that stands out as being the culprit of very poor performance, and how it can be corrected.  I am fairly new to SAP, so I apologize if I've missed an obvious error.  From debugging the program, it seems the 2nd select statement is taking a very long time to process. 
    GT_S032: approx. 40,000 entries
    S031:    approx. 90,000 entries
    MSEG:    approx. 115,000 entries
    MKPF:    approx. 100,000 entries
    MARA:    approx. 90,000 entries
    SELECT
      vrsio          "Version
      werks          "Plant
      lgort          "Storage Location
      matnr          "Material
      ssour          "Statistic(s) origin                  
    FROM s032
    INTO TABLE gt_s032
    WHERE ssour = space
    AND   vrsio = c_000
    AND   werks = gw_werks.
    IF sy-subrc = 0.
      SELECT
        vrsio        "Version
        werks        "Plant
        spmon        "Period to analyze - month
        matnr        "Material
        lgort        "Storage Location
        wzubb        "Valuated stock receipts value
        wagbb        "Value of valuated stock being issued
      FROM s031
      INTO TABLE gt_s031
      FOR ALL ENTRIES IN gt_s032
      WHERE ssour = gt_s032-ssour                                     
      AND   vrsio = gt_s032-vrsio                                     
      AND   spmon IN r_spmon
      AND   sptag = '00000000'                                      
      AND   spwoc = '000000'                                          
      AND   spbup = '000000'                               
      AND   werks = gt_s032-werks
      AND   matnr = gt_s032-matnr
      AND   lgort = gt_s032-lgort
      AND   ( wzubb <> 0 OR wagbb <> 0 ).
    ELSE.
      WRITE: 'No data selected'(m01).
      EXIT.
    ENDIF.
    SORT gt_s032 BY vrsio werks lgort matnr.
    SORT gt_s031 BY vrsio werks spmon matnr lgort.
    SELECT
      p~werks          "Plant
      p~matnr          "Material
      p~mblnr          "Document Number
      p~mjahr          "Document Year
      p~bwart          "Movement type
      p~dmbtr          "Amount in local currency
      t~shkzg          "Debit/Credit indicator
    INTO TABLE gt_scrap
    FROM mkpf AS h
    INNER JOIN mseg AS p
       ON h~mblnr = p~mblnr
      AND h~mjahr = p~mjahr
    INNER JOIN mara AS m
       ON p~matnr = m~matnr
    INNER JOIN t156 AS t
       ON p~bwart = t~bwart
    WHERE h~budat >= gw_duepr-begda
      AND h~budat <= gw_duepr-endda
      AND p~werks = gw_werks.
    Thanks so much for your help,
    Jayesh

    Issue with table s031 and with for all entries.
    Hi,
    I have the following code in which the select statement on S031 is taking a long time, after which the program dumps. What should I do to avoid exceeding the time limit for execution of an ABAP program?
    TYPES:
      BEGIN OF TY_MTL,  " Material Master
        MATNR TYPE MATNR,   " Material Code
        MTART TYPE MTART,   " Material Type
        MATKL TYPE MATKL,   " Material Group
        MEINS TYPE MEINS,   " Base unit of Measure
        WERKS TYPE WERKS_D, " Plant
        MAKTX TYPE MAKTX,   " Material description (Short Text)
        LIFNR TYPE LIFNR,   " vendor code
        NAME1 TYPE NAME1_GP, " vendor name
        CITY  TYPE ORT01_GP, " City of Vendor
        Y_RPT TYPE P DECIMALS 3, "Yearly receipt
        Y_ISS TYPE P DECIMALS 3, "Yearly Consumption
        M_OPG TYPE P DECIMALS 3, "Month opg
        M_OPG1 TYPE P DECIMALS 3,
        M_RPT TYPE P DECIMALS 3, "Month receipt
        M_ISS TYPE P DECIMALS 3, "Month issue
        M_CLG TYPE P DECIMALS 3, "Month Closing
        D_BLK TYPE P DECIMALS 3, "Block Stock,
        D_RPT TYPE P DECIMALS 3, "Today receipt
        D_ISS TYPE P DECIMALS 3, "Day issues
        TL_FL(2) TYPE C,
        STATUS(4) TYPE C,
    END OF TY_MTL,
    BEGIN OF TY_OPG     , " Opening File
           SPMON TYPE SPMON,   " Period to analyze - month
           WERKS TYPE WERKS_D, " Plant
           MATNR TYPE MATNR,   " Material No
           BASME TYPE MEINS,
           MZUBB TYPE MZUBB,   " Receipt Quantity
           WZUBB TYPE WZUBB,
           MAGBB TYPE MAGBB,   " Issues Quantity
           WAGBB TYPE WAGBB,
    END OF TY_OPG.
    DATA :
           T_M  TYPE STANDARD TABLE OF TY_MTL INITIAL SIZE 0,
           WA_M TYPE TY_MTL,
           T_O  TYPE STANDARD TABLE OF TY_OPG INITIAL SIZE 0,
           WA_O TYPE TY_OPG.
    DATA: smonth1      TYPE spmon.  
    SELECT
      a~matnr
      a~mtart
      a~matkl
      a~meins
      b~werks
      INTO TABLE t_m FROM mara AS a
      INNER JOIN marc AS b
      ON a~matnr = b~matnr
    *  WHERE a~mtart EQ s_mtart
      WHERE a~matkl IN s_matkl
      AND b~werks IN s_werks
      AND b~matnr IN s_matnr   .
    *  endif.
    SELECT spmon
           werks
           matnr
           basme
           mzubb
           WZUBB
           magbb
           wagbb
            FROM s031 INTO TABLE t_o
            FOR ALL ENTRIES IN t_m
            WHERE matnr = t_m-matnr
            AND werks IN s_werks
              AND spmon le smonth1
              AND basme = t_m-meins.

  • Performance with having tables in different schema's

    Hi all,
    We have a requirement to segregate tables into different schemas based on their sources. But everything is going to be on the same instance and database.
    Is there a performance hit (when querying between tables) from having the tables in different schemas as opposed to having them in the same schema?
    Thanks
    Narasimha

    Most likely there is a bit of a performance impact if your application refers to tables from different schemas. If you access the other schemas through a database link, the network also comes into the picture even when the schemas are in the same instance: queries are routed through the network connection to get the data, and distributed transactions can also have issues at times. So, as far as possible, avoid distributing objects across different schemas.

  • How to perform operations on table control

    Hello experts,
    Will you please tell me how to perform operations like delete and update on a table control in module pool programming?
    Thanks in advance.

    Hey Aravind,
    In case you want to delete rows just from your table control and not from the database table, you can use the commands:
    CLEAR <workarea_name>
    DELETE <workarea_name>
    for your selected rows (a small sketch follows below).
    And for deleting from the database table as well, use:
    DELETE FROM <database_table_name> WHERE <where_clause>.
    CLEAR <workarea_name>
    DELETE <workarea_name>
    This will delete from both the table control and the database table.
    Reward if it proved useful to you.
    Regards
    Natasha Garg
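    A minimal sketch of the "delete just from your table control" case, assuming a table control whose rows come from an internal table GT_ITEMS and whose selection column is bound to the field MARK (all names here are illustrative):
    TYPES: BEGIN OF ty_item,
             matnr TYPE matnr,
             menge TYPE menge_d,
             mark  TYPE xfeld,        " selection column of the table control
           END OF ty_item.
    DATA gt_items TYPE STANDARD TABLE OF ty_item.
    MODULE user_command_0100 INPUT.
      CASE sy-ucomm.
        WHEN 'DELETE'.
          " Drop the marked rows; the control is refilled from GT_ITEMS
          " at the next PBO, so they disappear from the screen.
          DELETE gt_items WHERE mark = 'X'.
      ENDCASE.
    ENDMODULE.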

  • Insert performance on a table with schema based XMLType column

    Hi,
    We are inserting around 500K rows into a table which has one XMLType column (schema based). The schema is simple and the size of the XMLType column is not very large (on average only around 100 bytes; the max might be around 1k-2k), but it takes around 1 hr for every 20K rows, which seems very slow.
    The schema is like this :
    <schema targetNamespace="http://www.citadon.com/xml/test.xsd"
    xmlns="http://www.w3.org/2001/XMLSchema"
    xmlns:xdb="http://xmlns.oracle.com/xdb"
    xdb:storeVarrayAsTable="true"
    version="1.0" elementFormDefault="qualified">
    <element name="cas">
    <complexType>
    <sequence>
    <element name="ca" minOccurs="0" maxOccurs="unbounded">
    <complexType>
    <sequence>
    <element name="id" type="string"/>
    <element name="value" type="string"/>
    </sequence>
    </complexType>
    </element>
    </sequence>
    </complexType>
    </element>
    </schema>
    Any thoughts on how to improve performance?
    -Srini

    You need to have sufficient data. Also, the event shown in the following code may help, depending on the nature of the query.
    Note that in the PurchaseOrder example, if I only have 133 docs instead of 10,000, I get table scans and index full scans.
    C:\oracle\xdb\bugs\xdbBasicDemo>sqlplus /nolog @testcase XDBTEST XDBTEST
    SQL*Plus: Release 10.1.0.3.0 - Production on Fri Aug 27 22:57:36 2004
    Copyright (c) 1982, 2004, Oracle. All rights reserved.
    SQL> spool testcase.log
    SQL> set trimspool on
    SQL> connect &1/&2
    Connected.
    SQL> --
    SQL> set timing on
    SQL> set long 10000
    SQL> set pages 10000
    SQL> set feedback on
    SQL> set lines 132
    SQL> set pages 50
    SQL> --
    SQL> drop index iPartNumberIndex
    2 /
    Index dropped.
    Elapsed: 00:00:02.25
    SQL> alter index LINEITEM_LIST rebuild
    2 /
    Index altered.
    Elapsed: 00:00:02.15
    SQL> desc PURCHASEORDER
    Name Null? Type
    TABLE of SYS.XMLTYPE(XMLSchema "http://localhost:8080/home/SCOTT/poSource/xsd/purchaseOrder.xsd" Element "Pu
    ject-relational TYPE "PURCHASEORDER_T"
    SQL> --
    SQL> col level format 99999
    SQL> col parent_table_column format A32
    SQL> col table_name format A32
    SQL> col table_type_name format A32
    SQL> --
    SQL> select level, PARENT_TABLE_COLUMN, TABLE_TYPE_NAME, TABLE_NAME
    2 from USER_NESTED_TABLES
    3 connect by PRIOR TABLE_NAME = PARENT_TABLE_NAME
    4 start with PARENT_TABLE_NAME = 'PURCHASEORDER'
    5 /
    LEVEL PARENT_TABLE_COLUMN TABLE_TYPE_NAME TABLE_NAME
    1 "XMLDATA"."ACTIONS"."ACTION" ACTION_V ACTION_TABLE
    1 "XMLDATA"."LINEITEMS"."LINEITEM" LINEITEM_V LINEITEM_TABLE
    2 rows selected.
    Elapsed: 00:00:13.60
    SQL> desc LINEITEM_T
    LINEITEM_T is NOT FINAL
    Name Null? Type
    SYS_XDBPD$ XDB.XDB$RAW_LIST_T
    ITEMNUMBER NUMBER(38)
    DESCRIPTION VARCHAR2(256 CHAR)
    PART PART_T
    SQL> --
    SQL> desc PART_T
    PART_T is NOT FINAL
    Name Null? Type
    SYS_XDBPD$ XDB.XDB$RAW_LIST_T
    PART_NUMBER VARCHAR2(14 CHAR)
    QUANTITY NUMBER(12,2)
    UNITPRICE NUMBER(8,4)
    SQL> --
    SQL> select count(*)
    2 from purchaseorder
    3 /
    COUNT(*)
    10000
    1 row selected.
    Elapsed: 00:00:05.31
    SQL> select count(*)
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 /
    COUNT(*)
    148814
    1 row selected.
    Elapsed: 00:09:40.54
    SQL> create index iPartNumberIndex
    2 on LINEITEM_TABLE l
    3 ( l.PART.PART_NUMBER,NESTED_TABLE_ID)
    4 /
    Index created.
    Elapsed: 00:00:36.11
    SQL> explain plan for
    2 select count(*)
    3 from purchaseorder
    4 where existsNode(object_value,'/PurchaseOrder/LineItems/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    Explained.
    Elapsed: 00:00:01.14
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2571550067
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 116 | 93 (2)| 00:00:02 |
    | 1 | SORT AGGREGATE | | 1 | 116 | | |
    | 2 | NESTED LOOPS | | 25 | 2900 | 93 (2)| 00:00:02 |
    | 3 | SORT UNIQUE | | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 4 | INDEX UNIQUE SCAN | LINEITEM_DATA | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 5 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | | 3 (0)| 00:00:01 |
    |* 6 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 49 | 1 (0)| 00:00:01 |
    |* 7 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("SYS_NC00011$"='717951002372')
    5 - access("SYS_NC00011$"='717951002372')
    6 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-in
    stance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-propert
    ies/><read-contents/></privilege>''))=1)
    7 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    26 rows selected.
    Elapsed: 00:00:03.12
    SQL> select count(*)
    2 from purchaseorder
    3 where existsNode(object_value,'/PurchaseOrder/LineItems/LineItem[Part/@Id="717951002372"]') = 1
    4 /
    COUNT(*)
    33
    1 row selected.
    Elapsed: 00:00:04.63
    SQL> select count(*)
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    COUNT(*)
    33
    1 row selected.
    Elapsed: 00:00:00.32
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2571550067
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 116 | 93 (2)| 00:00:02 |
    | 1 | SORT AGGREGATE | | 1 | 116 | | |
    | 2 | NESTED LOOPS | | 25 | 2900 | 93 (2)| 00:00:02 |
    | 3 | SORT UNIQUE | | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 4 | INDEX UNIQUE SCAN | LINEITEM_DATA | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 5 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | | 3 (0)| 00:00:01 |
    |* 6 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 49 | 1 (0)| 00:00:01 |
    |* 7 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("SYS_NC00011$"='717951002372')
    5 - access("SYS_NC00011$"='717951002372')
    6 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-in
    stance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-propert
    ies/><read-contents/></privilege>''))=1)
    7 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    26 rows selected.
    Elapsed: 00:00:00.04
    SQL> explain plan for
    2 select extractValue(object_value,'/PurchaseOrder/Reference')
    3 from purchaseorder,
    4 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    5 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    6 /
    Explained.
    Elapsed: 00:00:00.07
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 713363872
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 8000 | 104 (0)| 00:00:02 |
    | 1 | NESTED LOOPS | | 25 | 8000 | 104 (0)| 00:00:02 |
    |* 2 | INDEX UNIQUE SCAN | LINEITEM_DATA | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | | 3 (0)| 00:00:01 |
    |* 4 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 253 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("SYS_NC00011$"='717951002372')
    3 - access("SYS_NC00011$"='717951002372')
    4 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-i
    nstance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-proper
    ties/><read-contents/></privilege>''))=1)
    5 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    24 rows selected.
    Elapsed: 00:00:00.04
    SQL> select extractValue(object_value,'/PurchaseOrder/Reference')
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    EXTRACTVALUE(OBJECT_VALUE,'/PU
    MWEISS-20030616154327385GMT
    NSARCHAN-20030703170041824GMT
    HBAER-20030206173836987GMT
    LOZER-20031110131149107GMT
    WTAYLOR-20030120174534374GMT
    MHARTSTE-20031103172937613GMT
    KGEE-20030919215826550GMT
    PSULLY-20030712141634504GMT
    JPATEL-20030630175356693GMT
    RMATOS-2003072920455000GMT
    DRAPHEAL-20030528180033254GMT
    JRUSSEL-20031121213026539GMT
    PTUCKER-20030918160532301GMT
    SVOLLMAN-20031027120838903GMT
    WGIETZ-20030208185026303GMT
    TFOX-20030110164614994GMT
    JPATEL-20030304214301386GMT
    GGEONI-20030606135257846GMT
    STOBIAS-20030817120358785GMT
    COLSEN-20030525200717658GMT
    SBAIDA-20030224182546606GMT
    IMIKKILI-20030118180347537GMT
    ABULL-20030429162730766GMT
    NSARCHAN-20031113183134873GMT
    LBISSOT-20030809134114505GMT
    JKING-20030420162058859GMT
    JMALLIN-20030506152048261GMT
    AFRIPP-20030311153808601GMT
    SHIGGINS-20030831151756257GMT
    DBERNSTE-20030626122725631GMT
    KPARTNER-20031021160248962GMT
    ABANDA-2003062721524842GMT
    DOCONNEL-20030904214708637GMT
    33 rows selected.
    Elapsed: 00:00:00.07
    SQL> explain plan for
    2 select extractValue(object_value,'/PurchaseOrder/Reference')
    3 from purchaseorder
    4 where existsNode
    5 (
    6 object_value,
    7 '/PurchaseOrder/LineItems/LineItem/Part[@Id="717951002372"]'
    8 ) = 1
    9 /
    Explained.
    Elapsed: 00:00:00.02
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 849879259
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 8000 | 93 (2)| 00:00:02 |
    | 1 | NESTED LOOPS | | 25 | 8000 | 93 (2)| 00:00:02 |
    | 2 | SORT UNIQUE | | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 3 | INDEX UNIQUE SCAN | LINEITEM_DATA | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 4 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | | 3 (0)| 00:00:01 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 253 | 1 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("SYS_NC00011$"='717951002372')
    4 - access("SYS_NC00011$"='717951002372')
    5 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-i
    nstance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-proper
    ties/><read-contents/></privilege>''))=1)
    6 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    25 rows selected.
    Elapsed: 00:00:00.03
    SQL> select extractValue(object_value,'/PurchaseOrder/Reference')
    2 from purchaseorder
    3 where existsNode
    4 (
    5 object_value,
    6 '/PurchaseOrder/LineItems/LineItem/Part[@Id="717951002372"]'
    7 ) = 1
    8 /
    EXTRACTVALUE(OBJECT_VALUE,'/PU
    MWEISS-20030616154327385GMT
    NSARCHAN-20030703170041824GMT
    HBAER-20030206173836987GMT
    LOZER-20031110131149107GMT
    WTAYLOR-20030120174534374GMT
    MHARTSTE-20031103172937613GMT
    KGEE-20030919215826550GMT
    PSULLY-20030712141634504GMT
    JPATEL-20030630175356693GMT
    RMATOS-2003072920455000GMT
    DRAPHEAL-20030528180033254GMT
    JRUSSEL-20031121213026539GMT
    PTUCKER-20030918160532301GMT
    SVOLLMAN-20031027120838903GMT
    WGIETZ-20030208185026303GMT
    TFOX-20030110164614994GMT
    JPATEL-20030304214301386GMT
    GGEONI-20030606135257846GMT
    STOBIAS-20030817120358785GMT
    COLSEN-20030525200717658GMT
    SBAIDA-20030224182546606GMT
    IMIKKILI-20030118180347537GMT
    ABULL-20030429162730766GMT
    NSARCHAN-20031113183134873GMT
    LBISSOT-20030809134114505GMT
    JKING-20030420162058859GMT
    JMALLIN-20030506152048261GMT
    AFRIPP-20030311153808601GMT
    SHIGGINS-20030831151756257GMT
    DBERNSTE-20030626122725631GMT
    KPARTNER-20031021160248962GMT
    ABANDA-2003062721524842GMT
    DOCONNEL-20030904214708637GMT
    33 rows selected.
    Elapsed: 00:00:00.04
    SQL> alter session set events ='19027 trace name context forever, level 0x800000'
    2 /
    Session altered.
    Elapsed: 00:00:00.00
    SQL> explain plan for
    2 select count(*)
    3 from purchaseorder
    4 where existsNode(object_value,'/PurchaseOrder/LineItems/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    Explained.
    Elapsed: 00:00:00.03
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 3049344732
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 69 | 17 (6)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 69 | | |
    | 2 | NESTED LOOPS | | 25 | 1725 | 17 (6)| 00:00:01 |
    | 3 | SORT UNIQUE | | 25 | 750 | 3 (0)| 00:00:01 |
    |* 4 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | 750 | 3 (0)| 00:00:01 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 39 | 1 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("SYS_NC00011$"='717951002372')
    5 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-in
    stance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-propert
    ies/><read-contents/></privilege>''))=1)
    6 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    24 rows selected.
    Elapsed: 00:00:00.03
    SQL> select count(*)
    2 from purchaseorder
    3 where existsNode(object_value,'/PurchaseOrder/LineItems/LineItem[Part/@Id="717951002372"]') = 1
    4 /
    COUNT(*)
    33
    1 row selected.
    Elapsed: 00:00:00.01
    SQL> select count(*)
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    COUNT(*)
    33
    1 row selected.
    Elapsed: 00:00:00.01
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 3049344732
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 69 | 17 (6)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 69 | | |
    | 2 | NESTED LOOPS | | 25 | 1725 | 17 (6)| 00:00:01 |
    | 3 | SORT UNIQUE | | 25 | 750 | 3 (0)| 00:00:01 |
    |* 4 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | 750 | 3 (0)| 00:00:01 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 39 | 1 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("SYS_NC00011$"='717951002372')
    5 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-in
    stance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-propert
    ies/><read-contents/></privilege>''))=1)
    6 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    24 rows selected.
    Elapsed: 00:00:00.03
    SQL> explain plan for
    2 select extractValue(object_value,'/PurchaseOrder/Reference')
    3 from purchaseorder,
    4 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    5 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    6 /
    Explained.
    Elapsed: 00:00:00.06
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 1516269755
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 2450 | 28 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 25 | 2450 | 28 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | 750 | 3 (0)| 00:00:01 |
    |* 3 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 68 | 1 (0)| 00:00:01 |
    |* 4 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("SYS_NC00011$"='717951002372')
    3 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-i
    nstance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-proper
    ties/><read-contents/></privilege>''))=1)
    4 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    22 rows selected.
    Elapsed: 00:00:00.04
    SQL> select extractValue(object_value,'/PurchaseOrder/Reference')
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    EXTRACTVALUE(OBJECT_VALUE,'/PU
    MWEISS-20030616154327385GMT
    NSARCHAN-20030703170041824GMT
    HBAER-20030206173836987GMT
    LOZER-20031110131149107GMT
    WTAYLOR-20030120174534374GMT
    MHARTSTE-20031103172937613GMT
    KGEE-20030919215826550GMT
    PSULLY-20030712141634504GMT
    JPATEL-20030630175356693GMT
    RMATOS-2003072920455000GMT
    DRAPHEAL-20030528180033254GMT
    JRUSSEL-20031121213026539GMT
    PTUCKER-20030918160532301GMT
    SVOLLMAN-20031027120838903GMT
    WGIETZ-20030208185026303GMT
    TFOX-20030110164614994GMT
    JPATEL-20030304214301386GMT
    GGEONI-20030606135257846GMT
    STOBIAS-20030817120358785GMT
    COLSEN-20030525200717658GMT
    SBAIDA-20030224182546606GMT
    IMIKKILI-20030118180347537GMT
    ABULL-20030429162730766GMT
    NSARCHAN-20031113183134873GMT
    LBISSOT-20030809134114505GMT
    JKING-20030420162058859GMT
    JMALLIN-20030506152048261GMT
    AFRIPP-20030311153808601GMT
    SHIGGINS-20030831151756257GMT
    DBERNSTE-20030626122725631GMT
    KPARTNER-20031021160248962GMT
    ABANDA-2003062721524842GMT
    DOCONNEL-20030904214708637GMT
    33 rows selected.
    Elapsed: 00:00:00.01
    SQL> explain plan for
    2 select extractValue(object_value,'/PurchaseOrder/Reference')
    3 from purchaseorder
    4 where existsNode
    5 (
    6 object_value,
    7 '/PurchaseOrder/LineItems/LineItem/Part[@Id="717951002372"]'
    8 ) = 1
    9 /
    Explained.
    Elapsed: 00:00:00.03
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 1197255270
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 2450 | 17 (6)| 00:00:01 |
    | 1 | NESTED LOOPS | | 25 | 2450 | 17 (6)| 00:00:01 |
    | 2 | SORT UNIQUE | | 25 | 750 | 3 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | 750 | 3 (0)| 00:00:01 |
    |* 4 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 68 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("SYS_NC00011$"='717951002372')
    4 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-i
    nstance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-proper
    ties/><read-contents/></privilege>''))=1)
    5 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    23 rows selected.
    Elapsed: 00:00:00.03
    SQL> select extractValue(object_value,'/PurchaseOrder/Reference')
    2 from purchaseorder
    3 where existsNode
    4 (
    5 object_value,
    6 '/PurchaseOrder/LineItems/LineItem/Part[@Id="717951002372"]'
    7 ) = 1
    8 /
    EXTRACTVALUE(OBJECT_VALUE,'/PU
    MWEISS-20030616154327385GMT
    NSARCHAN-20030703170041824GMT
    HBAER-20030206173836987GMT
    LOZER-20031110131149107GMT
    WTAYLOR-20030120174534374GMT
    MHARTSTE-20031103172937613GMT
    KGEE-20030919215826550GMT
    PSULLY-20030712141634504GMT
    JPATEL-20030630175356693GMT
    RMATOS-2003072920455000GMT
    DRAPHEAL-20030528180033254GMT
    JRUSSEL-20031121213026539GMT
    PTUCKER-20030918160532301GMT
    SVOLLMAN-20031027120838903GMT
    WGIETZ-20030208185026303GMT
    TFOX-20030110164614994GMT
    JPATEL-20030304214301386GMT
    GGEONI-20030606135257846GMT
    STOBIAS-20030817120358785GMT
    COLSEN-20030525200717658GMT
    SBAIDA-20030224182546606GMT
    IMIKKILI-20030118180347537GMT
    ABULL-20030429162730766GMT
    NSARCHAN-20031113183134873GMT
    LBISSOT-20030809134114505GMT
    JKING-20030420162058859GMT
    JMALLIN-20030506152048261GMT
    AFRIPP-20030311153808601GMT
    SHIGGINS-20030831151756257GMT
    DBERNSTE-20030626122725631GMT
    KPARTNER-20031021160248962GMT
    ABANDA-2003062721524842GMT
    DOCONNEL-20030904214708637GMT
    33 rows selected.
    Elapsed: 00:00:00.03
    SQL> quit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options

  • The performance of the tables in the PSAPSTABD is poor

    Hi:
    After I reorganized the tablespace "PSAPSTABD" with its datafiles, I
    found that the performance of accessing the tables in PSAPSTABD is
    very poor, and the transactions for maintaining customer master data and
    material master data are very slow. The tool used for the reorganization was
    SAPDBA 6.40 and the reorganization type was offline. Why, and how can I solve it?
    THANKS.
    sap envirement: aix 5.3 + oracle 9.2.0.7 + sap r/3 46c sr2
    sap kernel : 46d_ext
    dbatools: I used dbatools 640 to overwrite dbatools 620 and "sapdba" to reorganize the tablespace.

    Hi,
    It is not easy to understand the problem from your analysis.
    Try to analyze a single transaction that you consider very slow and, through ST04 (Oracle session), let me know on which table the transaction hangs or waits; then you should look at the estimated cost of the query that the transaction is running. I mean, it could be an index that is not correctly configured.
    It could also be a problem with table statistics: did you regenerate the statistics after the reorganization of the tablespace?
    Let me know also the database and tablespace sizes.
    Bye bye
    Nick
