OLTP insert performance

Hi Experts,
1. Could someone guide me on understanding which things impact insert performance in an OLTP application with ~25 concurrent sessions doing 20 inserts/session into table X? (Env: Oracle 11g, 3-node RAC, ASSM tablespace; table X is range partitioned.)
2. If any storage parameter is not properly set, how do I identify which one needs to be fixed?
Note: current insert performance is 0.02 sec/insert.

Hi Garry,
Thanks for your response.
Some more info regarding the app: DB version 11.2.0.3. Below is the AWR info during peak load for a 1-hour snapshot. Any suggestions are helpful.
Cache Sizes                       Begin        End
~~~~~~~~~~~                  ---------- ----------
               Buffer Cache:    18,624M    18,624M  Std Block Size:         8K
           Shared Pool Size:     3,200M     3,200M      Log Buffer:    25,888K
Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):                 4.9                0.0       0.01       0.00
       DB CPU(s):                 0.5                0.0       0.00       0.00
       Redo size:           585,778.7            2,339.6
   Logical reads:            24,046.6               96.0
   Block changes:             2,374.5                9.5
  Physical reads:             1,101.6                4.4
 Physical writes:               394.6                1.6
      User calls:             2,086.6                8.3
          Parses:                 9.5                0.0
     Hard parses:                 0.5                0.0
W/A MB processed:                 5.8                0.0
          Logons:                 0.6                0.0
        Executes:               877.7                3.5
       Rollbacks:               218.6                0.9
    Transactions:               250.4
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   99.99       Redo NoWait %:   99.99
            Buffer  Hit   %:   95.44    In-memory Sort %:  100.00
            Library Hit   %:   99.81        Soft Parse %:   95.16
         Execute to Parse %:   98.92         Latch Hit %:   99.89
Parse CPU to Parse Elapsd %:   92.50     % Non-Parse CPU:   97.31
Shared Pool Statistics        Begin    End
             Memory Usage %:   75.36   74.73
    % SQL with executions>1:   90.63   90.41
  % Memory for SQL w/exec>1:   83.10   85.49
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Event                          Waits     Time(s)   Avg(ms)   %DB time   Wait Class
db file sequential read    3,686,200      15,658         4       87.7   User I/O
DB CPU                                     1,802                 10.1
db file parallel read         19,646         189        10        1.1   User I/O
gc current grant 2-way       842,079         145         0         .8   Cluster
gc current block 2-way       425,663         106         0         .6   Cluster
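
One way (a sketch, not a prescription) to see which statements are behind the db file sequential read time in that window is to group the ASH history between the two snapshots by SQL_ID. The snap_id range below is a placeholder, and querying the DBA_HIST views assumes a Diagnostics Pack licence:
-- Top SQL_IDs waiting on db file sequential read between two AWR snapshots
SELECT *
  FROM (SELECT sql_id, COUNT(*) AS samples
          FROM dba_hist_active_sess_history
         WHERE snap_id BETWEEN 1234 AND 1235        -- placeholder snapshot range
           AND event = 'db file sequential read'
         GROUP BY sql_id
         ORDER BY samples DESC)
 WHERE ROWNUM <= 10;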

Similar Messages

  • Bad INSERT performance when using GUIDs for indexes

    Hi,
we use an Oracle 9.2.0.6 DB on Win XP Pro. The application (.NET v1.1) is using ODP.NET. All PKs of the tables are GUIDs represented in Oracle as RAW(16) columns.
When testing with mass data we see more and more a problem with bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index, also on a RAW(16) column (both are standard B*tree). A PerfStat report shows heavy activity on the index tablespace.
When I analyze the related table and its indexes I see a very high clustering factor.
Is there a way to improve the insert performance in that case? Use another type of index? Generally avoid indexed RAW columns?
    Please help.
    Daniel
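    A sketch of how one could quantify the clustering-factor issue Daniel describes (the table name is a placeholder): a CLUSTERING_FACTOR close to NUM_ROWS rather than to BLOCKS confirms that index entry order bears no relation to row order, which is typical of random GUID keys.
    -- Compare clustering factor with table size ('MY_GUID_TABLE' is a placeholder)
    SELECT i.index_name,
           i.clustering_factor,
           t.blocks,
           t.num_rows
      FROM user_indexes i
      JOIN user_tables  t ON t.table_name = i.table_name
     WHERE i.table_name = 'MY_GUID_TABLE';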

    Hi
    After my last tests I conclude the following:
    The query returns 1-30 records
    Test 1: Using Form Builder
    -     Execution time 7-8 seconds
    Test 2: Using Jdeveloper/Toplink/EJB 3.0/ADF and Oracle AS 10.1.3.0
    -     Execution time 25-27 seconds
    Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
    - Execution time 17-18 seconds
    When I use:
    session.setLogLevel(SessionLog.FINE) and
    session.setProfiler(new PerformanceProfiler())
    I don’t see any improvement in the execution time of the query.
    Thank you
    Thanos

  • Jdbc thin driver bulk binding slow insertion performance problem

    Hello All,
    We have a third party application reporting slow insertion performance. When I traced the session, I found that most of the elapsed time for one insert execution is spent on "SQL*Net more data from client"; it appears bulk binding is being used here, because one execution inserts 200 rows. I am wondering whether this has something to do with their JDBC thin driver (version 10.1.0.2) and our database version 9.2.0.5. Do you have any similar experience with this? What other possible directions should I explore?
    Here is the trace report from a 10046 event; I hid the table name for privacy reasons.
    Besides, I tested bulk binding in PL/SQL to insert 200 rows in one execution, with no problem at all. Network folks confirm that the network should not be an issue either; ping time from app server to DB server is sub-millisecond and they are in the same data center.
    INSERT INTO ...
    values
    (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
    :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
    :33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45)
    call     count    cpu    elapsed    disk    query    current    rows
    Parse        1   0.00       0.00       0        0          0       0
    Execute      1   0.02      14.29       1       94       2565     200
    Fetch        0   0.00       0.00       0        0          0       0
    total        2   0.02      14.29       1       94       2565     200
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 25
    Elapsed times include waiting on following events:
      Event waited on                          Times Waited   Max. Wait   Total Waited
      SQL*Net more data from client                      28        6.38          14.19
      db file sequential read                             1        0.02           0.02
      SQL*Net message to client                           1        0.00           0.00
      SQL*Net message from client                         1        0.00           0.00
    ********************************************************************************

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but no luck; I also ran the process on my laptop, which produced better and faster performance.
    Therefore I made a special (not practical) workaround by creating flat files and defining the data as an external table; Oracle reads the data in those files as if it were data inside a table, which gave me very fast insertion into the database. But I am still looking for an answer to your question here. Using Oracle on an AIX machine is a normal business setup followed by a lot of companies, and there must be a solution for this.
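    For reference, a minimal sketch of the external-table workaround described above (the directory path, file name and columns are made up):
    -- Directory object pointing at the flat files (path is a placeholder)
    CREATE DIRECTORY load_dir AS '/data/loads';
    -- External table over the flat file; the column list is illustrative only
    CREATE TABLE orders_ext (
      order_id  NUMBER,
      cust_id   NUMBER,
      order_ref VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY load_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('orders.csv')
    )
    REJECT LIMIT UNLIMITED;
    -- One set-based load instead of many single-row round trips
    INSERT /*+ APPEND */ INTO orders
    SELECT order_id, cust_id, order_ref FROM orders_ext;
    COMMIT;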

  • XMLTYPE insert performance

    I am experiencing performance problems when inserting a 30 MB XML file into an XMLTYPE column. Under Oracle 11, with the schema I am using, the minimum time I can achieve is around 9 minutes, which is too long. Can anyone comment on whether this performance is normal, and possibly suggest how it could be improved while retaining the benefits of structured storage? Thanks in advance for the help :)

    sorry for the late reply - I didn't notice that you had replied to my earlier post...
    To answer your questions in order:
    - I am using "structured" storage because I read ( in this article: [http://www.oracle.com/technology/pub/articles/jain-xmldb.html] ) that this would result in higher xquery performance.
    - the schema isn't very large but it is complex. ( as discussed in above article )
    I built my table by first registering the schema and then adding the xml elements to the table such that they would be stored in structured storage. i.e.
    --// Register schema /////////////////////////////////////////////////////////////
    begin
      dbms_xmlschema.registerSchema(
        schemaurl => 'fof_fob.xsd',
        schemadoc => bfilename('XFOF_DIR','fof_fob.xsd'),
        local     => TRUE,
        gentypes  => TRUE,
        genbean   => FALSE,
        force     => FALSE,
        owner     => 'FOF',
        csid      => nls_charset_id('AL32UTF8'));
    end;
    /
    COMMIT;
    and then created the table using ...
    --// Create the XCOMP table /////////////////////////////////////////////////////////////
    create table "XCOMP" (
         "type" varchar(128) not null,
         "id" int not null,
         "idstr1" varchar(50),
         "idstr2" varchar(50),
         "name" varchar(255),
         "rev" varchar(20) not null,
         "tstamp" varchar(30) not null,
         "xmlfob" xmltype)
    XMLTYPE "xmlfob" STORE AS OBJECT RELATIONAL
    XMLSCHEMA "fof_fob.xsd"
    ELEMENT "FOB";
    No indexing was specified for this table. Then I inserted the offending 30 MB XML file using (in C#, using ODP.NET under .NET 3.5):
    void test(string myName, XElement myXmlElem)
    {
        OracleConnection connection = new OracleConnection();
        connection.Open();
        // bind the name and the XMLType document as parameters :1 and :2
        string statement = "INSERT INTO XCOMP (\"name\", \"xmlfob\") VALUES (:1, :2)";
        XDocument xDoc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"), myXmlElem);
        OracleCommand insCmd = new OracleCommand(statement, connection);
        OracleXmlType xmlinfo = new OracleXmlType(connection, xDoc.CreateReader());
        insCmd.Parameters.Add(FofDbCmdInsert.Name, OracleDbType.Varchar2, 255);
        insCmd.Parameters.Add(FofDbCmdInsert.Xmldoc, OracleDbType.XmlType);
        insCmd.Parameters[0].Value = myName;
        insCmd.Parameters[1].Value = xmlinfo;
        insCmd.ExecuteNonQuery();
        connection.Close();
    }
    It took around 9 minutes to execute the ExecuteNonQuery statement, using Oracle 11 Standard Edition running under Windows 2008 x64 with 8 GB RAM and a single 2.5 GHz core (of a quad-core running under VMware).
    I would much appreciate any suggestions that could speed up the insert performance here. As a temporary solution I chopped some of the information out of the XML document and store it separately in another table, but this approach has the disadvantage that using XQuery becomes a bit inflexible, although the performance is now in seconds rather than minutes.
    I can't see any reason why Oracle's shredding mechanism should be less efficient than manually shredding the information.
    Thanks in advance for any helpful hints you can provide!

  • Single record insert performance problems

    Hi,
    we have on a production environment a Java-based application that makes approx. 40,000 single-record inserts per hour into a table.
    We have traced the performance of this insert and the average time is 3 ms, which is fine. Our Java architecture is based on WebSphere Application Server and we access Oracle 10g through a WAS datasource.
    But we have detected that 3 or 4 times a day, for approx. 30 seconds, the Java service is not able to make any insert into that table. And then suddenly it performs all the "queued" inserts in only 1 second. That pause in the insertion causes navigation problems, because the top layer is a web application.
    We are sure that it is not a problem with WAS or the Java code. We are sure that it is a problem with the Oracle configuration, or some tuning action for this kind of application that we don't know about. We first thought it could be a problem with a sequence field in the table, or a problem when the redo log switch occurs. But we've checked it with our DBA and this is not the problem.
    Has anybody any idea of what could be the origin of this strange behaviour?
    Thanks a lot in advance.
    Jose.

    There are a couple of things you'd need to look at to diagnose this - As Joe says it's not really a JDBC issue from what we know.
    I've seen issues with Oracle's automatic SGA resizing causing sporadic latency in OLTP systems. Another suspect would be log file sync wait events, which are associated with commits. Don't discount the impact of well meaning people using tools like TOAD to query the DB - they can sometimes cause more harm than good.
    Right now I'd suggest you run AWR at 10 minute intervals and compare reports from when you had your problem with a time when you didn't.
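    A sketch of the AWR change suggested above (10-minute snapshots, 7 days of history; assumes a Diagnostics Pack licence), plus an on-demand snapshot you can take right after a stall is observed:
    BEGIN
      DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
        interval  => 10,          -- snapshot every 10 minutes
        retention => 7 * 1440);   -- keep 7 days of snapshots (value in minutes)
    END;
    /
    EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;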

  • Suggestions to improve the INSERT performance

    Hi All,
    I have a table which has 170 columns .
    I am inserting a large amount of data, 50K and more records, into this table.
    My insert looks like this:
    INSERT INTO /*+ append */ REPORT_DATA(COL1,COL2,COL3,COL4,COL5,COL6)
    SELECT  DATA1,DATA2,DATA3,DATA4,DATA5,DATA5 FROM TXN_DETAILS
    WHERE COL1='CA';
    Here I want to insert values for only a few columns, hence I specified only those column names in the insert statement.
    But when a large amount of data (50k+ rows) is returned by the select query, this statement takes a very long time to execute (approximately 10 to 15 minutes).
    Please suggest how to improve this insert statement's performance. I am also using the 'append' hint.
    Thanks in advance.

    a - Disable/drop indexes and constraints - It's far faster to rebuild indexes after the data load, all at once. Indexes will also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.
    b - Manage segment header contention for parallel inserts - Make sure to define multiple freelists (or freelist groups) to remove contention on the table's segment header. Multiple freelists add additional segment header blocks, removing the bottleneck. You can also use Automatic Segment Space Management (http://www.dba-oracle.com/art_dbazine_ts_mgt.htm) (bitmap freelists) to support parallel DML, but ASSM has some limitations.
    c - Parallelize the load - You can invoke parallel DML (i.e. using the PARALLEL and APPEND hints) to have multiple inserts into the same table; see the sketch after this list. For this INSERT optimization, make sure to define multiple freelists and use the SQL APPEND option. If you submit parallel jobs to insert against the table at the same time, using the APPEND hint may cause serialization, removing the benefit of parallel job streams.
    d - APPEND into tables - By using the APPEND hint, you ensure that Oracle always grabs "fresh" data blocks by raising the high-water mark for the table. If you are doing parallel insert DML, append mode is the default and you don't need to specify an APPEND hint. Also, if you're going with APPEND, consider putting the table into NOLOGGING mode, which will allow Oracle to avoid almost all redo logging.
    insert /*+ append */ into customer values ('hello', 'there');
    e - Use a large blocksize - By defining large (i.e. 32k) blocksizes for the target table, you reduce I/O because more rows fit onto a block before a "block full" condition (as set by PCTFREE) unlinks the block from the freelist.
    f - Use NOLOGGING.
    g - RAM disk - You can use high-speed solid state disk (RAM-SAN) to make Oracle inserts run up to 300x faster than platter disk.
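    A sketch of items c and d applied to the poster's statement (column names and the degree of parallelism are illustrative; note that the APPEND hint belongs immediately after the INSERT keyword):
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(r, 4) */ INTO report_data r
           (col1, col2, col3, col4, col5, col6)
    SELECT data1, data2, data3, data4, data5, data6
      FROM txn_details
     WHERE col1 = 'CA';
    COMMIT;  -- after a direct-path insert the session cannot read the table again until commit (ORA-12838)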

  • Truncate Table before Insert--Performance

    HI All,
    This post is about a special requirement where a table is truncated before inserting records into it.
    Now, when a table is truncated, the high-water mark (HWM) is reset to the minimum space allocated for the table in the tablespace. After this, would an insert with APPEND boost the performance of the insert query?
    In a simple insert, the Oracle engine consults the freelist to look for free space.
    But in an insert with APPEND, the engine starts above the HWM. And the argument is: once a truncate has been executed on the table, would the freelist still be used by a simple insert?
    I just need to know if there are any benefits of using an APPEND insert on a truncated table, or whether a simple insert would perform the same as an insert with APPEND.
    Regards
    Nits

    Hi,
    if you don't need the data, truncate the table. There is no negative impact whether you use a conventional path or a direct path insert.
    If you use APPEND, less redo is written for the table if the table is in NOLOGGING mode, but redo is still written for all indexes. I would recommend creating a full backup after that (if needed), because your table will not be recoverable otherwise (no redo information).
    Dim
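    A sketch of the pattern Dim describes (table names are placeholders); switching back to LOGGING and taking a backup afterwards is the usual follow-up, since the direct-path load itself generates no redo for the table:
    TRUNCATE TABLE target_tab;
    ALTER TABLE target_tab NOLOGGING;
    INSERT /*+ APPEND */ INTO target_tab
    SELECT * FROM source_tab;
    COMMIT;
    ALTER TABLE target_tab LOGGING;
    -- then back up the affected tablespace, as the loaded rows cannot be recovered from redo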

  • Can insert performance be improved playing with env parameters?

    Below is the environment configuration and the results of my bulk-load insert experiments. The results are from two scenarios, also described below. The values for the two scenarios are separated by a space.
    Environment Configuration:
    setTxn     N
    DeferredWrite Y     
    Sec Bulk Load     Y
    Post Build SecIndex Y
    Sync Y
    Column 1 values are for the scenario of two databases, each with 2,500,000 records.
    Column 2 values are for the scenario of two databases, each with 25,000,000 records.
    1. Is there good documentation describing what the environment statistics mean?
    2. Looking at the statistics below, can you make any suggestions for performance improvement?
    The statistics are:
    Eviction Stats                    
    nEvictPasses               3929          146066
    nNodesSelected               309219          17351997
    nNodesScanned          3150809     176816544
    nNodesExplicitlyEvicted     152897     8723271
    nBINsStripped          156322     8628726
    requiredEvictBytes     524323     530566
    CheckPoint Stats     
    nCheckpoints     55     1448
    lastCheckpointID     55     1448
    nFullINFlush     54     1024
    nFullBINFlush     26     494
    nDeltaINFlush     116     2661
    lastCheckpointStart     0x6f/0x2334f8     0xb6a/0x82fd83
    lastCheckpointEnd     0x6f/0x33c2d6     0xb6a/0x8c4a6b
    endOfLog     0xb/0x6f22e     0x6f/0x75a843     0xb6a/0x23d8f
    Cache Stats     
    nNotResident     4591918     57477898
    nCacheMiss     4583077     57469807
    nLogBuffers     3     3
    bufferBytes     3145728     3145728
    (MB)     3.00     3.00
    cacheDataBytes     563450470     370211966
    (MB)     537.35     353.06
    adminBytes     29880     16346272
    lockBytes     1113     1113
    cacheTotalBytes     566596198     373357694
    (MB)     540.35     356.06
    Logging Stats          
    nFSyncs 59     1452
    nFSyncRequest     59     1452
    nFSyncTimeouts     0     0
    nRepeatFaultReads     31513     6525958
    nTempBufferForWrite     0     0
    nRepeatIteratorReads     0     0
    totalLogSize     1117658932     29226945317
    (MB)     1065.88     27872.99
    lockBytes     1113     1113

    Hello Linda,
    I am inserting 25,000,000 records of the type:
    Database 1
    Key --> Data
    [long,String,long] --> [{long,long}, {String}]
    The secondary keys are on {long,long} and {String}
    Database 2
    Key --> Data
    [long,Integer,long] --> [{long,long}, {Integer}]
    The secondary keys are on {long,long} and {Integer}
    I set the env parameters to non-transactional and setDeferredWrite(true),
    used setSecondaryBulkLoad(true), and then built two secondary indexes on {long,long} and {String} of the data portion.
    private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
        try {
            SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord> secondaryIndex =
                    store.getSecondaryIndex(
                            dataAccessLayer.getPrimaryIndex(),
                            TDetailSecondaryKey.class,
                            SECONDARY_KEY_NAME);
        } catch (DatabaseException e) {
            throw new RuntimeException(e);
        }
    }
    We are inserting to 2 databases  as mentioned above.
    NumRecs         250,000 x2    2,500,000 x2    25,000,000 x2
    TotalTime(ms)       16877         673623         30225781
    PutTime(ms)          7684          76636          1065030
    BuildSec(ms)         4952         590207         29125773
    Sync(ms)             4241           6780            34978
    Why does building the secondary index (2 secondary databases in this case) take so much longer than inserting into the primary database - 27 times longer!
    It's hard to believe that building the tree for the secondary database takes so much longer.
    Why doesn't building the tree for the primary database take as long? The data in the primary database is the same as its key, in order to be able to search on these values.
    Hence it's surprising it takes so long.
    The cache stats mentioned above relate to these runs.
    Can you try explaining this? We are trying to figure out whether it is worth building the secondary indexes later when bulk loading.

  • Improve Database adapter insert performance

    Hopefully this is an easy question to answer. I'm getting over 8,000 records passed to my BPEL process and I need to take those records and insert them into an Oracle database. I've been trying to tune the insert by using properties like inMemoryOptimization, but the load still takes several hours. Any suggestions on how to get the Database adapter to perform better, or to load all 8,000 records at once? Thanks in advance.

    Hello.
    8,000 records doesn't sound "huge"; but if a record is, say, 1 kB, then you have 8 MB, which is a large payload to move around in one piece.
    A DB merge is typically slower than an insert, though you did say you were using an insert.
    If you are inserting each row one at a time that seems like it would be pretty slow.
    Normally the input to a DB adapter insert is a collection (of rows) vs. a single row. If you have been handed 8000 individual rows you can assemble them into a collection with an iteration - tedious in BPEL but works fine.
    Daren

  • Oracle 10g Merge Insert performance

    Hi All,
    Performance-wise, is it better to use a regular insert statement or a MERGE (insert only) statement in Oracle 10g? (No updates are used in this merge statement.)
    Thanks for the input.

    Thanks for the comment. Here is more info on using MERGE for INSERT alone; I thought Oracle had a reason for adding this in 10g:
    http://www.oracle-developer.net/display.php?id=310
    I am looking for the right answer about the performance.
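    For comparison, a minimal insert-only MERGE of the kind discussed in that article (table and column names are made up); in 10g the WHEN MATCHED branch can simply be omitted:
    MERGE INTO target_tab t
    USING source_tab s
       ON (t.id = s.id)
     WHEN NOT MATCHED THEN
       INSERT (t.id, t.val)
       VALUES (s.id, s.val);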

  • Materialized View Logs on OLTP DB- Performance issues

    Hi All,
    We have a request to check what the performance impact would be of having materialized views (with FAST refresh every 5 and every 30 minutes).
    We have been using some APIs (I don't have full details of this job) to refresh tables in the reporting DB, and want to switch to MVIEWS in the next release.
    The base tables for these MVs are in DB1, which has high DML activity.
    We are planning to create 7 MVs on a reporting DB pointing to the corresponding tables in DB1.
    I am setting up the environment with the required tables to test this, and also want to know your experiences implementing MVIEWS with fast refresh pointing to a typical OLTP DB, as I am new to MVIEWS.
    How does it affect the performance of DML statements on the base tables?
    How often did you have to do a complete refresh because of invalid/outdated MVIEW logs?
    Other maintenance overheads?
    Any possible workarounds?
    Oracle Version: 9.2.0.8
    OS : HP-UX
    Thank you for sharing your experiences.

    Doing incremental refreshes every 5 minutes will add some amount of load to the OLTP system. Again, depending on your environment, that may or may not be significant.
    "What factors can affect this?"
    Among other things, the size of the materialized view logs that need to be read, which in turn depends on the number of transactions and the setup of the materialized views, the current load on the OLTP system, etc. If you're struggling with an I/O or CPU bottleneck at peak processing now, for example, having a dozen materialized view refresh processes running as well would probably generate noticeable delays in the system. If you have plenty of spare CPU & I/O bandwidth, the refreshes will be less noticeable.
    "Is it the same in 10g R2 too? We are upgrading to the same in the coming October."
    Streams is far easier to deal with in 10.2.
    Justin
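    A sketch of the two objects involved (table, column and dblink names are placeholders): the MV log lives in DB1 next to the base table, and the fast-refresh MV in the reporting DB reads it over a database link every 5 minutes.
    -- In DB1 (the OLTP database)
    CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;
    -- In the reporting database
    CREATE MATERIALIZED VIEW orders_mv
      REFRESH FAST
      START WITH SYSDATE
      NEXT SYSDATE + 5/1440    -- every 5 minutes
    AS
    SELECT order_id, status, amount
      FROM orders@db1_link;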

  • Insert Performance is poor

    Hi,
    In my database there is one table whose size is 500 MB, and on that table there are 5 indexes (2 are composite indexes).
    Through SQL*Loader, 15 to 20 batch files are running, and those jobs are inserting into this table, which means there is a high insertion rate on this table. PCTFREE of this table is 10% and PCTUSED is the default.
    But insertion into this table is taking some time; even 10,000 rows are taking a long time to insert.
    Please help.
    Anand

    You can improve the performance of SQL*Loader on conventional loads in a number of ways:
    *Increase the readsize, I use 20971520 which may be the maximum
    *Increase the number of rows per commit to 1000 or even 10000 (default 64)
    *Increase the bindsize used to hold the values read from the data file, again I use 20971520
    SQL*Loader will use array inserts, so that one INSERT statement will be sent to the database server with many data records in a single round trip, rather than one round trip per data record. This is a big performance boost. Increasing the parameters I have listed will increase the array size, increasing efficiency and reducing the number of separate array inserts issued by SQL*Loader.
    Another option to test, is to drop the 5 indexes on the table, load the new data, then recreate the 5 indexes. Without the 5 indexes the load and insert of the new records will happen much faster. And the updating of the indexes could be a cause of contention between multiple, concurrent SQL*Loaders and slowing down the inserts. Depending on how big the table is, it might not take that long to recreate the indexes.
    Of course, if any triggers fired by the data being loaded need the indexes for fast execution, you cannot remove those indexes. And of course, no other part of the application should be running against the table during the load either.
    John
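    A sketch of John's settings expressed in the SQL*Loader control file (file, table and column names are placeholders); the same OPTIONS values can equally be passed on the sqlldr command line:
    OPTIONS (ROWS=10000, BINDSIZE=20971520, READSIZE=20971520)
    LOAD DATA
    INFILE 'batch01.dat'
    APPEND
    INTO TABLE target_tab
    FIELDS TERMINATED BY ','
    (col1, col2, col3)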

  • Question on PL/SQL / Insert Performance

    So I have a table (TABLEA) with one column that has approximately 420k records, and I have a second table (TABLEB) that stores data identified by a procedure.
    I have a PL/SQL package with the two procedures.
    I call the package passing it two parameters (start and stop number):
    execute id_pkg.mrs(0,10000);
    These numbers are used to capture information into a CURSOR like this.
    I then have another procedure GET_CV that takes the value passed from empi_cur_mrs.ID and loops through each record to populate TABLEB. TABLEB is created with NOLOGGING.
    for empi_cur_mrs in
      (select id from sourcemrns where id > startid and id < endid)
    loop
      get_cv(empi_cur_mrs.id);
    end loop;
    When I run this against my first 10k records it takes approximately 22 seconds to complete. As I continue to identify the next 10k records (e.g. execute id_pkg.mrs(10000,20000)), the performance begins to drop.
    Here is the insert code that is stored within the GET_CV procedure:
    insert /*+ APPEND */ into TABLEB(fullrec) values(v_fullrecord);
    Can anyone provide me with any ideas on why this could be happening?

    Okay, this is the basic structure of what I pull and the end result of GET_CV. I want to populate the end result into a table.
    MEMBER_TBL (identifies members from all sources; INTERNALID is unique in MEMBER_TBL and ID could be the same in multiple sources):
    INTERNALID   SRCID   ID
    10           100     1200
    13           120     3543
    14           140     1354
    15           300     10980
    MEMLINKED_TBL (identifies which members are linked):
    INTERNALID   LINKID
    10           10
    13           10
    14           10
    15           12
    MEMNAME_TBL (NAMEID: 12 = member name, 13 = emergency contact name; I only want the member name. STATUS: A = active, I = inactive):
    INTERNALID   NAMEID   NAME           STATUS   MODIFIED_KEY
    10           12       SMITH,JOHN     A        222
    10           12       SMIT,JON       I        099
    10           13       JONES,JIM      A        222
    13           12       SMITH,J        A        111
    14           12       SMITH,JON      A        212
    13           13       Thomas,Train   A        345
    The highest MODIFIED_KEY tells us the latest updated row (e.g. for INTERNALID 10, the row with MODIFIED_KEY 222 would be the latest for NAMEID=12).
    MEMPHONE_TBL (PHONEID: 11 is home phone and 55 is emergency phone):
    INTERNALID   PHONEID   PHONE_AREA   PHONE_NUMBER   MODIFIED_KEY   STATUS
    10           11        800          8889999        133123         A
    10           11        800          8880000        000001         I
    10           55        888          7729999        323431         A
    13           11        888          7739999        123243         A
    14           55        888          7769999        454534         A
    I pass the PL/SQL or SQL the SRCID and the ID; I then need to look at all the members linked to that one and get the latest information.
    For example, I pass it INTERNALID of '10', ID of '1200' and SRCID of '100' and expect to get back the LINKID from MEMLINKED_TBL and the most recent NAME from MEMNAME_TBL and phone from MEMPHONE_TBL.
    I would get the following:
    LINKID   NAME         HOMEPHONE
    10       SMITH,JOHN   8008889999
    If I could pull this all together without passing in an "ID", that would be great. I could not figure out how without actually passing it the ID and SRCID.
    Thanks for any guidance!
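    Not knowing the internals of GET_CV, here is a rough set-based sketch of "latest member name and home phone per link" over the tables above; the KEEP (DENSE_RANK LAST) aggregates just pick the row with the highest MODIFIED_KEY:
    SELECT l.linkid,
           MAX(n.name) KEEP (DENSE_RANK LAST ORDER BY n.modified_key) AS name,
           MAX(p.phone_area || p.phone_number)
               KEEP (DENSE_RANK LAST ORDER BY p.modified_key)         AS homephone
      FROM memlinked_tbl l
      JOIN memname_tbl   n ON n.internalid = l.internalid
                          AND n.nameid = 12 AND n.status = 'A'
      JOIN memphone_tbl  p ON p.internalid = l.internalid
                          AND p.phoneid = 11 AND p.status = 'A'
     GROUP BY l.linkid;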

  • Insert performance on a table with schema based XMLType column

    Hi,
    We are inserting around 500K rows into a table which has one XMLType column (schema-based). The schema is simple and the size of the XMLType column is also not very large (on average only around 100 bytes; the max might be around 1k-2k), but it takes around 1 hour for every 20K rows, which seems very slow.
    The schema is like this :
    <schema targetNamespace="http://www.citadon.com/xml/test.xsd"
    xmlns="http://www.w3.org/2001/XMLSchema"
    xmlns:xdb="http://xmlns.oracle.com/xdb"
    xdb:storeVarrayAsTable="true"
    version="1.0" elementFormDefault="qualified">
    <element name="cas">
    <complexType>
    <sequence>
    <element name="ca" minOccurs="0" maxOccurs="unbounded">
    <complexType>
    <sequence>
    <element name="id" type="string"/>
    <element name="value" type="string"/>
    </sequence>
    </complexType>
    </element>
    </sequence>
    </complexType>
    </element>
    </schema>
    Any thoughts on how to improve performance?
    -Srini

    You need to have sufficient data. Also, the event shown in the following code may help, depending on the nature of the query.
    Note that in the PurchaseOrder example, if I only have 133 docs instead of 10,000, I get table scans and index full scans.
    C:\oracle\xdb\bugs\xdbBasicDemo>sqlplus /nolog @testcase XDBTEST XDBTEST
    SQL*Plus: Release 10.1.0.3.0 - Production on Fri Aug 27 22:57:36 2004
    Copyright (c) 1982, 2004, Oracle. All rights reserved.
    SQL> spool testcase.log
    SQL> set trimspool on
    SQL> connect &1/&2
    Connected.
    SQL> --
    SQL> set timing on
    SQL> set long 10000
    SQL> set pages 10000
    SQL> set feedback on
    SQL> set lines 132
    SQL> set pages 50
    SQL> --
    SQL> drop index iPartNumberIndex
    2 /
    Index dropped.
    Elapsed: 00:00:02.25
    SQL> alter index LINEITEM_LIST rebuild
    2 /
    Index altered.
    Elapsed: 00:00:02.15
    SQL> desc PURCHASEORDER
    Name Null? Type
    TABLE of SYS.XMLTYPE(XMLSchema "http://localhost:8080/home/SCOTT/poSource/xsd/purchaseOrder.xsd" Element "Pu
    ject-relational TYPE "PURCHASEORDER_T"
    SQL> --
    SQL> col level format 99999
    SQL> col parent_table_column format A32
    SQL> col table_name format A32
    SQL> col table_type_name format A32
    SQL> --
    SQL> select level, PARENT_TABLE_COLUMN, TABLE_TYPE_NAME, TABLE_NAME
    2 from USER_NESTED_TABLES
    3 connect by PRIOR TABLE_NAME = PARENT_TABLE_NAME
    4 start with PARENT_TABLE_NAME = 'PURCHASEORDER'
    5 /
    LEVEL PARENT_TABLE_COLUMN TABLE_TYPE_NAME TABLE_NAME
    1 "XMLDATA"."ACTIONS"."ACTION" ACTION_V ACTION_TABLE
    1 "XMLDATA"."LINEITEMS"."LINEITEM" LINEITEM_V LINEITEM_TABLE
    2 rows selected.
    Elapsed: 00:00:13.60
    SQL> desc LINEITEM_T
    LINEITEM_T is NOT FINAL
    Name Null? Type
    SYS_XDBPD$ XDB.XDB$RAW_LIST_T
    ITEMNUMBER NUMBER(38)
    DESCRIPTION VARCHAR2(256 CHAR)
    PART PART_T
    SQL> --
    SQL> desc PART_T
    PART_T is NOT FINAL
    Name Null? Type
    SYS_XDBPD$ XDB.XDB$RAW_LIST_T
    PART_NUMBER VARCHAR2(14 CHAR)
    QUANTITY NUMBER(12,2)
    UNITPRICE NUMBER(8,4)
    SQL> --
    SQL> select count(*)
    2 from purchaseorder
    3 /
    COUNT(*)
    10000
    1 row selected.
    Elapsed: 00:00:05.31
    SQL> select count(*)
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 /
    COUNT(*)
    148814
    1 row selected.
    Elapsed: 00:09:40.54
    SQL> create index iPartNumberIndex
    2 on LINEITEM_TABLE l
    3 ( l.PART.PART_NUMBER,NESTED_TABLE_ID)
    4 /
    Index created.
    Elapsed: 00:00:36.11
    SQL> explain plan for
    2 select count(*)
    3 from purchaseorder
    4 where existsNode(object_value,'/PurchaseOrder/LineItems/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    Explained.
    Elapsed: 00:00:01.14
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2571550067
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 116 | 93 (2)| 00:00:02 |
    | 1 | SORT AGGREGATE | | 1 | 116 | | |
    | 2 | NESTED LOOPS | | 25 | 2900 | 93 (2)| 00:00:02 |
    | 3 | SORT UNIQUE | | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 4 | INDEX UNIQUE SCAN | LINEITEM_DATA | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 5 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | | 3 (0)| 00:00:01 |
    |* 6 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 49 | 1 (0)| 00:00:01 |
    |* 7 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("SYS_NC00011$"='717951002372')
    5 - access("SYS_NC00011$"='717951002372')
    6 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-in
    stance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-propert
    ies/><read-contents/></privilege>''))=1)
    7 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    26 rows selected.
    Elapsed: 00:00:03.12
    SQL> select count(*)
    2 from purchaseorder
    3 where existsNode(object_value,'/PurchaseOrder/LineItems/LineItem[Part/@Id="717951002372"]') = 1
    4 /
    COUNT(*)
    33
    1 row selected.
    Elapsed: 00:00:04.63
    SQL> select count(*)
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    COUNT(*)
    33
    1 row selected.
    Elapsed: 00:00:00.32
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2571550067
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 116 | 93 (2)| 00:00:02 |
    | 1 | SORT AGGREGATE | | 1 | 116 | | |
    | 2 | NESTED LOOPS | | 25 | 2900 | 93 (2)| 00:00:02 |
    | 3 | SORT UNIQUE | | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 4 | INDEX UNIQUE SCAN | LINEITEM_DATA | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 5 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | | 3 (0)| 00:00:01 |
    |* 6 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 49 | 1 (0)| 00:00:01 |
    |* 7 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("SYS_NC00011$"='717951002372')
    5 - access("SYS_NC00011$"='717951002372')
    6 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-in
    stance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-propert
    ies/><read-contents/></privilege>''))=1)
    7 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    26 rows selected.
    Elapsed: 00:00:00.04
    SQL> explain plan for
    2 select extractValue(object_value,'/PurchaseOrder/Reference')
    3 from purchaseorder,
    4 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    5 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    6 /
    Explained.
    Elapsed: 00:00:00.07
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 713363872
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 8000 | 104 (0)| 00:00:02 |
    | 1 | NESTED LOOPS | | 25 | 8000 | 104 (0)| 00:00:02 |
    |* 2 | INDEX UNIQUE SCAN | LINEITEM_DATA | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | | 3 (0)| 00:00:01 |
    |* 4 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 253 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("SYS_NC00011$"='717951002372')
    3 - access("SYS_NC00011$"='717951002372')
    4 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-i
    nstance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-proper
    ties/><read-contents/></privilege>''))=1)
    5 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    24 rows selected.
    Elapsed: 00:00:00.04
    SQL> select extractValue(object_value,'/PurchaseOrder/Reference')
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    EXTRACTVALUE(OBJECT_VALUE,'/PU
    MWEISS-20030616154327385GMT
    NSARCHAN-20030703170041824GMT
    HBAER-20030206173836987GMT
    LOZER-20031110131149107GMT
    WTAYLOR-20030120174534374GMT
    MHARTSTE-20031103172937613GMT
    KGEE-20030919215826550GMT
    PSULLY-20030712141634504GMT
    JPATEL-20030630175356693GMT
    RMATOS-2003072920455000GMT
    DRAPHEAL-20030528180033254GMT
    JRUSSEL-20031121213026539GMT
    PTUCKER-20030918160532301GMT
    SVOLLMAN-20031027120838903GMT
    WGIETZ-20030208185026303GMT
    TFOX-20030110164614994GMT
    JPATEL-20030304214301386GMT
    GGEONI-20030606135257846GMT
    STOBIAS-20030817120358785GMT
    COLSEN-20030525200717658GMT
    SBAIDA-20030224182546606GMT
    IMIKKILI-20030118180347537GMT
    ABULL-20030429162730766GMT
    NSARCHAN-20031113183134873GMT
    LBISSOT-20030809134114505GMT
    JKING-20030420162058859GMT
    JMALLIN-20030506152048261GMT
    AFRIPP-20030311153808601GMT
    SHIGGINS-20030831151756257GMT
    DBERNSTE-20030626122725631GMT
    KPARTNER-20031021160248962GMT
    ABANDA-2003062721524842GMT
    DOCONNEL-20030904214708637GMT
    33 rows selected.
    Elapsed: 00:00:00.07
    SQL> explain plan for
    2 select extractValue(object_value,'/PurchaseOrder/Reference')
    3 from purchaseorder
    4 where existsNode
    5 (
    6 object_value,
    7 '/PurchaseOrder/LineItems/LineItem/Part[@Id="717951002372"]'
    8 ) = 1
    9 /
    Explained.
    Elapsed: 00:00:00.02
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 849879259
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 8000 | 93 (2)| 00:00:02 |
    | 1 | NESTED LOOPS | | 25 | 8000 | 93 (2)| 00:00:02 |
    | 2 | SORT UNIQUE | | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 3 | INDEX UNIQUE SCAN | LINEITEM_DATA | 25 | 1675 | 79 (0)| 00:00:01 |
    |* 4 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | | 3 (0)| 00:00:01 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 253 | 1 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("SYS_NC00011$"='717951002372')
    4 - access("SYS_NC00011$"='717951002372')
    5 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-i
    nstance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-proper
    ties/><read-contents/></privilege>''))=1)
    6 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    25 rows selected.
    Elapsed: 00:00:00.03
    SQL> select extractValue(object_value,'/PurchaseOrder/Reference')
    2 from purchaseorder
    3 where existsNode
    4 (
    5 object_value,
    6 '/PurchaseOrder/LineItems/LineItem/Part[@Id="717951002372"]'
    7 ) = 1
    8 /
    EXTRACTVALUE(OBJECT_VALUE,'/PU
    MWEISS-20030616154327385GMT
    NSARCHAN-20030703170041824GMT
    HBAER-20030206173836987GMT
    LOZER-20031110131149107GMT
    WTAYLOR-20030120174534374GMT
    MHARTSTE-20031103172937613GMT
    KGEE-20030919215826550GMT
    PSULLY-20030712141634504GMT
    JPATEL-20030630175356693GMT
    RMATOS-2003072920455000GMT
    DRAPHEAL-20030528180033254GMT
    JRUSSEL-20031121213026539GMT
    PTUCKER-20030918160532301GMT
    SVOLLMAN-20031027120838903GMT
    WGIETZ-20030208185026303GMT
    TFOX-20030110164614994GMT
    JPATEL-20030304214301386GMT
    GGEONI-20030606135257846GMT
    STOBIAS-20030817120358785GMT
    COLSEN-20030525200717658GMT
    SBAIDA-20030224182546606GMT
    IMIKKILI-20030118180347537GMT
    ABULL-20030429162730766GMT
    NSARCHAN-20031113183134873GMT
    LBISSOT-20030809134114505GMT
    JKING-20030420162058859GMT
    JMALLIN-20030506152048261GMT
    AFRIPP-20030311153808601GMT
    SHIGGINS-20030831151756257GMT
    DBERNSTE-20030626122725631GMT
    KPARTNER-20031021160248962GMT
    ABANDA-2003062721524842GMT
    DOCONNEL-20030904214708637GMT
    33 rows selected.
    Elapsed: 00:00:00.04
    SQL> alter session set events ='19027 trace name context forever, level 0x800000'
    2 /
    Session altered.
    Elapsed: 00:00:00.00
    SQL> explain plan for
    2 select count(*)
    3 from purchaseorder
    4 where existsNode(object_value,'/PurchaseOrder/LineItems/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    Explained.
    Elapsed: 00:00:00.03
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 3049344732
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 69 | 17 (6)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 69 | | |
    | 2 | NESTED LOOPS | | 25 | 1725 | 17 (6)| 00:00:01 |
    | 3 | SORT UNIQUE | | 25 | 750 | 3 (0)| 00:00:01 |
    |* 4 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | 750 | 3 (0)| 00:00:01 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 39 | 1 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("SYS_NC00011$"='717951002372')
    5 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-in
    stance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-propert
    ies/><read-contents/></privilege>''))=1)
    6 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    24 rows selected.
    Elapsed: 00:00:00.03
    SQL> select count(*)
    2 from purchaseorder
    3 where existsNode(object_value,'/PurchaseOrder/LineItems/LineItem[Part/@Id="717951002372"]') = 1
    4 /
    COUNT(*)
    33
    1 row selected.
    Elapsed: 00:00:00.01
    SQL> select count(*)
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    COUNT(*)
    33
    1 row selected.
    Elapsed: 00:00:00.01
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 3049344732
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 69 | 17 (6)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 69 | | |
    | 2 | NESTED LOOPS | | 25 | 1725 | 17 (6)| 00:00:01 |
    | 3 | SORT UNIQUE | | 25 | 750 | 3 (0)| 00:00:01 |
    |* 4 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | 750 | 3 (0)| 00:00:01 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 39 | 1 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    4 - access("SYS_NC00011$"='717951002372')
    5 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-in
    stance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-propert
    ies/><read-contents/></privilege>''))=1)
    6 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    24 rows selected.
    Elapsed: 00:00:00.03
    SQL> explain plan for
    2 select extractValue(object_value,'/PurchaseOrder/Reference')
    3 from purchaseorder,
    4 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    5 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    6 /
    Explained.
    Elapsed: 00:00:00.06
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 1516269755
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 2450 | 28 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 25 | 2450 | 28 (0)| 00:00:01 |
    |* 2 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | 750 | 3 (0)| 00:00:01 |
    |* 3 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 68 | 1 (0)| 00:00:01 |
    |* 4 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("SYS_NC00011$"='717951002372')
    3 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-i
    nstance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-proper
    ties/><read-contents/></privilege>''))=1)
    4 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    22 rows selected.
    Elapsed: 00:00:00.04
    SQL> select extractValue(object_value,'/PurchaseOrder/Reference')
    2 from purchaseorder,
    3 table (xmlsequence(extract(object_value,'/PurchaseOrder/LineItems/LineItem'))) l
    4 where existsNode(value(l),'/LineItem[Part/@Id="717951002372"]') = 1
    5 /
    EXTRACTVALUE(OBJECT_VALUE,'/PU
    MWEISS-20030616154327385GMT
    NSARCHAN-20030703170041824GMT
    HBAER-20030206173836987GMT
    LOZER-20031110131149107GMT
    WTAYLOR-20030120174534374GMT
    MHARTSTE-20031103172937613GMT
    KGEE-20030919215826550GMT
    PSULLY-20030712141634504GMT
    JPATEL-20030630175356693GMT
    RMATOS-2003072920455000GMT
    DRAPHEAL-20030528180033254GMT
    JRUSSEL-20031121213026539GMT
    PTUCKER-20030918160532301GMT
    SVOLLMAN-20031027120838903GMT
    WGIETZ-20030208185026303GMT
    TFOX-20030110164614994GMT
    JPATEL-20030304214301386GMT
    GGEONI-20030606135257846GMT
    STOBIAS-20030817120358785GMT
    COLSEN-20030525200717658GMT
    SBAIDA-20030224182546606GMT
    IMIKKILI-20030118180347537GMT
    ABULL-20030429162730766GMT
    NSARCHAN-20031113183134873GMT
    LBISSOT-20030809134114505GMT
    JKING-20030420162058859GMT
    JMALLIN-20030506152048261GMT
    AFRIPP-20030311153808601GMT
    SHIGGINS-20030831151756257GMT
    DBERNSTE-20030626122725631GMT
    KPARTNER-20031021160248962GMT
    ABANDA-2003062721524842GMT
    DOCONNEL-20030904214708637GMT
    33 rows selected.
    Elapsed: 00:00:00.01
    SQL> explain plan for
    2 select extractValue(object_value,'/PurchaseOrder/Reference')
    3 from purchaseorder
    4 where existsNode
    5 (
    6 object_value,
    7 '/PurchaseOrder/LineItems/LineItem/Part[@Id="717951002372"]'
    8 ) = 1
    9 /
    Explained.
    Elapsed: 00:00:00.03
    SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'))
    2 /
    PLAN_TABLE_OUTPUT
    Plan hash value: 1197255270
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 25 | 2450 | 17 (6)| 00:00:01 |
    | 1 | NESTED LOOPS | | 25 | 2450 | 17 (6)| 00:00:01 |
    | 2 | SORT UNIQUE | | 25 | 750 | 3 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | IPARTNUMBERINDEX | 25 | 750 | 3 (0)| 00:00:01 |
    |* 4 | TABLE ACCESS BY INDEX ROWID| PURCHASEORDER | 1 | 68 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | LINEITEM_LIST | 1 | | 0 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("SYS_NC00011$"='717951002372')
    4 - filter(SYS_CHECKACL("ACLOID","OWNERID",xmltype(''<privilege
    xmlns="http://xmlns.oracle.com/xdb/acl.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-i
    nstance" xsi:schemaLocation="http://xmlns.oracle.com/xdb/acl.xsd
    http://xmlns.oracle.com/xdb/acl.xsd DAV:http://xmlns.oracle.com/xdb/dav.xsd"><read-proper
    ties/><read-contents/></privilege>''))=1)
    5 - access("NESTED_TABLE_ID"="PURCHASEORDER"."SYS_NC0003400035$")
    23 rows selected.
    Elapsed: 00:00:00.03
    SQL> select extractValue(object_value,'/PurchaseOrder/Reference')
    2 from purchaseorder
    3 where existsNode
    4 (
    5 object_value,
    6 '/PurchaseOrder/LineItems/LineItem/Part[@Id="717951002372"]'
    7 ) = 1
    8 /
    EXTRACTVALUE(OBJECT_VALUE,'/PU
    MWEISS-20030616154327385GMT
    NSARCHAN-20030703170041824GMT
    HBAER-20030206173836987GMT
    LOZER-20031110131149107GMT
    WTAYLOR-20030120174534374GMT
    MHARTSTE-20031103172937613GMT
    KGEE-20030919215826550GMT
    PSULLY-20030712141634504GMT
    JPATEL-20030630175356693GMT
    RMATOS-2003072920455000GMT
    DRAPHEAL-20030528180033254GMT
    JRUSSEL-20031121213026539GMT
    PTUCKER-20030918160532301GMT
    SVOLLMAN-20031027120838903GMT
    WGIETZ-20030208185026303GMT
    TFOX-20030110164614994GMT
    JPATEL-20030304214301386GMT
    GGEONI-20030606135257846GMT
    STOBIAS-20030817120358785GMT
    COLSEN-20030525200717658GMT
    SBAIDA-20030224182546606GMT
    IMIKKILI-20030118180347537GMT
    ABULL-20030429162730766GMT
    NSARCHAN-20031113183134873GMT
    LBISSOT-20030809134114505GMT
    JKING-20030420162058859GMT
    JMALLIN-20030506152048261GMT
    AFRIPP-20030311153808601GMT
    SHIGGINS-20030831151756257GMT
    DBERNSTE-20030626122725631GMT
    KPARTNER-20031021160248962GMT
    ABANDA-2003062721524842GMT
    DOCONNEL-20030904214708637GMT
    33 rows selected.
    Elapsed: 00:00:00.03
    SQL> quit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
