ArrayList insert performance, Java 1.5 vs. 1.6

Hi,
Does anyone know what the deal is here?
import java.util.*;

public class Test {
    public static void main(String[] args) {
        insert(null);
        insert(10000000);
        insert(null);
        insert(10000000);
        insert(null);
        insert(10000000);
    }

    private static void insert(Integer init) {
        // Pre-size the list when an initial capacity is given.
        List<Integer> list;
        if (init != null) {
            list = new ArrayList<Integer>(init);
        } else {
            list = new ArrayList<Integer>();
        }
        System.out.println("using initial size of " + ((init != null) ? init : "default"));
        long now = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            list.add(i);
        }
        System.out.println(System.currentTimeMillis() - now + " ms");
    }
}
+ /usr/jdk/java-1.6/bin/java -cp . -server Test
using initial size of default
2536 ms
using initial size of 10000000
3346 ms
using initial size of default
2430 ms
using initial size of 10000000
1253 ms
using initial size of default
3017 ms
using initial size of 10000000
1203 ms
(and 1.5)
+ /usr/jdk/java-1.5/bin/java -cp . -server Test
using initial size of default
2575 ms
using initial size of 10000000
1613 ms
using initial size of default
3952 ms
using initial size of 10000000
1508 ms
using initial size of default
3994 ms
using initial size of 10000000
1468 ms
Once HotSpot kicks in, the performance advantage is huge, but why does it take so long for this to happen on 1.6? Is there any way to tune it? Also, why does the default-sized run slow down so much over time? Is it the GC?
thx.
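A minimal sketch of one way to separate JIT warm-up from the measurement, reusing the insert logic above; the untimed warm-up passes and the System.gc() hint between runs are my assumptions about how to isolate the effect, not part of the original benchmark:

import java.util.ArrayList;
import java.util.List;

public class TestWarmup {
    private static final int N = 10000000;

    public static void main(String[] args) {
        // Untimed warm-up passes give HotSpot a chance to compile the hot
        // loop before measurement starts.
        for (int i = 0; i < 3; i++) {
            fill(new ArrayList<Integer>(N));
        }
        // Hint (not a guarantee) that garbage from the warm-up be collected
        // now, so it isn't charged to the timed run.
        System.gc();
        long now = System.currentTimeMillis();
        fill(new ArrayList<Integer>(N));
        System.out.println(System.currentTimeMillis() - now + " ms");
    }

    private static void fill(List<Integer> list) {
        for (int i = 0; i < N; i++) {
            list.add(i);
        }
    }
}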

Is Java 1.5 considered officially released (no longer beta)?
Yes.
If so, and I install it on my machine, could it break any 1.4.2 apps?
Yes - in Java 5, read about the javac options -source and -target, and the Compatibility document for changes.
If so, is there a way to install 1.5 and keep my installation of 1.4.2?
Yes - install them in separate directories.
If I do that, how do I specify which one I want to run? That is, make the command
java MyAppName
unambiguous?
Use batch or shell files to set the directories and classpath, and use the full path to the executables (for example, /usr/jdk/java-1.5/bin/java MyAppName).
Thanks for any info,
John

Similar Messages

  • Array Inserts with Java?

    Hey,
    Just doing some forward thinking for a project and ran into a
    question that I can't seem to find a straightforward answer to.
    I'll be importing data from a flat-file structure and want to
    make it as fast as possible to process, as it may be a large set
    of records.
    Can Java (JDev 2.0) do array inserts? I'd like to achieve
    something like Pro*C, in that I want to be able to pass an array
    of records to insert, batching them so that it's not a single
    network (or database) hit per record inserted.
    Any help appreciated!
    Doug

    Doug,
    I think any kind of mass inserting is not going to be optimal
    over JDBC.
    I am pretty sure the batch update just saves you some network
    roundtrips. I believe each insert/update would still be a
    separate transaction once on the server side, if nothing else for
    the purposes of rollbacks, etc.
    -L
    Doug Gault (guest) wrote:
    : Guys,
    : Thanks for the quick response.
    : I had run into the Batch Update facility, but on my initial
    : reading it seemed to be more about compacting network traffic
    : rather than enhancing mass insert performance. Did I get the
    : wrong end of the stick here?
    : What (roughly) happens to the 'batched' set of transactions once
    : the database gets it? Is the set processed as a single
    : transaction (all or nothing)?
    : I'll re-read this section, and take a look at the TechNet
    : examples.
    : Thanks
    : Doug
    : JDeveloper Team (guest) wrote:
    : : Doug,
    : : I ran into problems with trying to pass arrays. Namely, with
    : : stored procedures, JDBC treats an array argument as an attempt
    : : to declare an IN/OUT type parameter, and only allows that array
    : : to contain one object.
    : : The JDBC doc has a section on performance enhancements that you
    : : may want to investigate, specifically the Batch Update facility
    : : described in Chapter 4 in the 'Additional Oracle Extensions'
    : : section. You can use this to 'batch up sets of inserts or
    : : updates' rather than send them one at a time.
    : : Alternately, I recommend you check out the JDBC sample code
    : : provided on OTN. They have some better 'real world' examples
    : : than the JDBC docs.
    : : -L
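    A minimal sketch of the batching the replies describe, using the standard JDBC batch API; the connection URL, credentials, table, and chunk size are hypothetical placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BatchInsertSketch {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details - substitute your own.
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");
            conn.setAutoCommit(false);
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO imported_records (id, payload) VALUES (?, ?)");
            try {
                for (int i = 0; i < 10000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();                 // queue the row client-side
                    if ((i + 1) % 500 == 0) {
                        ps.executeBatch();         // one round trip per 500 rows
                    }
                }
                ps.executeBatch();                 // flush the remainder
                conn.commit();                     // with autocommit off, the whole
                                                   // load commits as one transaction
            } finally {
                ps.close();
                conn.close();
            }
        }
    }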

  • How to insert a Java object into a Derby database

    hi,
    I have a problem: I want to insert my Java object into a Derby database, and I also need to retrieve that object from the database whenever I need it. Can anybody help me do that in Derby?
    Thanks

    Alternatively, you could design a table whose columns correspond to the attributes of the object. Then you would create a row by writing each attribute out to its corresponding column.
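    A rough sketch of that second approach (one table column per object attribute), assuming a hypothetical Student object and a matching Derby table; the schema, URL, and values are made up for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class StudentDao {
        public static void main(String[] args) throws SQLException {
            // Embedded Derby; creates the database on first use.
            Connection conn = DriverManager.getConnection(
                    "jdbc:derby:studentDB;create=true");
            conn.createStatement().executeUpdate(
                    "CREATE TABLE students (id INT PRIMARY KEY, "
                    + "name VARCHAR(100), age INT)");
            // Write the object's attributes out to their columns.
            PreparedStatement ins = conn.prepareStatement(
                    "INSERT INTO students (id, name, age) VALUES (?, ?, ?)");
            ins.setInt(1, 1);
            ins.setString(2, "Alice");
            ins.setInt(3, 20);
            ins.executeUpdate();
            ins.close();
            // Rebuild the object later by reading the columns back.
            PreparedStatement sel = conn.prepareStatement(
                    "SELECT name, age FROM students WHERE id = ?");
            sel.setInt(1, 1);
            ResultSet rs = sel.executeQuery();
            if (rs.next()) {
                System.out.println(rs.getString("name") + ", " + rs.getInt("age"));
            }
            rs.close();
            sel.close();
            conn.close();
        }
    }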

  • Bad INSERT performance when using GUIDs for indexes

    Hi,
    we use an Oracle 9.2.0.6 DB on Win XP Pro. The application (.NET v1.1) uses ODP.NET. All PKs of the tables are GUIDs represented in Oracle as RAW(16) columns.
    When testing with mass data we see more and more of a problem with bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index which is also on a RAW(16) column (both are standard B*tree). A PerfStat report shows that there is much activity on the index tablespace.
    When I analyze the related table and its indexes I see a very high clustering factor.
    Is there a way to improve the insert performance in that case? Use another type of index? Generally avoid indexed RAW columns?
    Please help.
    Daniel

    Hi
    After my last tests I conclude the following:
    The query returns 1-30 records.
    Test 1: Using Form Builder
    - Execution time 7-8 seconds
    Test 2: Using JDeveloper/TopLink/EJB 3.0/ADF and Oracle AS 10.1.3.0
    - Execution time 25-27 seconds
    Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
    - Execution time 17-18 seconds
    When I use:
    session.setLogLevel(SessionLog.FINE) and
    session.setProfiler(new PerformanceProfiler())
    I don't see any improvement in the execution time of the query.
    Thank you
    Thanos

  • Oltp insert performance

    Hi Experts,
    1. Could someone help me understand what impacts insert performance in an OLTP application with ~25 concurrent sessions doing 20 inserts/session into table X? (env: Oracle 11g, 3-node RAC, ASSM tablespace; table X is range partitioned)
    2. If a storage parameter is not properly set, how do I identify which one needs to be fixed?
    Note: current insert performance is 0.02 sec/insert.

    Hi Garry,
    Thanks for your response.
    Some more info regarding the app: DB version 11.2.0.3. Below is the AWR info during peak load for a 1-hour snapshot. Any suggestions are helpful.
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:    18,624M    18,624M  Std Block Size:         8K
               Shared Pool Size:     3,200M     3,200M      Log Buffer:    25,888K
    Load Profile           Per Second   Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~          -----------   ---------------   --------   --------
          DB Time(s):             4.9               0.0       0.01       0.00
           DB CPU(s):             0.5               0.0       0.00       0.00
           Redo size:       585,778.7           2,339.6
       Logical reads:        24,046.6              96.0
       Block changes:         2,374.5               9.5
      Physical reads:         1,101.6               4.4
     Physical writes:           394.6               1.6
          User calls:         2,086.6               8.3
              Parses:             9.5               0.0
         Hard parses:             0.5               0.0
    W/A MB processed:             5.8               0.0
              Logons:             0.6               0.0
            Executes:           877.7               3.5
           Rollbacks:           218.6               0.9
        Transactions:           250.4
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:   99.99       Redo NoWait %:   99.99
                Buffer  Hit   %:   95.44    In-memory Sort %:  100.00
                Library Hit   %:   99.81        Soft Parse %:   95.16
             Execute to Parse %:   98.92         Latch Hit %:   99.89
    Parse CPU to Parse Elapsd %:   92.50     % Non-Parse CPU:   97.31
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.36   74.73
        % SQL with executions>1:   90.63   90.41
      % Memory for SQL w/exec>1:   83.10   85.49
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Event                         Waits   Time(s)   Avg(ms)   %DBtime   Wait Class
    db file sequential read   3,686,200    15,658         4      87.7   User I/O
    DB CPU                                  1,802                10.1
    db file parallel read        19,646       189        10       1.1   User I/O
    gc current grant 2-way      842,079       145         0        .8   Cluster
    gc current block 2-way      425,663       106         0        .6   Cluster

  • Jdbc thin driver bulk binding slow insertion performance problem

    Hello All,
    We have a third-party application reporting slow insertion performance. When I traced the session I found that most of the elapsed time for one insert execution is "SQL*Net more data from client". It appears bulk binding is being used here, because one execution inserts 200 rows. I am wondering whether this has something to do with their JDBC thin driver (version 10.1.0.2) and our database version 9.2.0.5. Do you have any similar experience with this? What other possible directions should I explore?
    Here is the trace report from the 10046 event; I hid the table name for privacy reasons.
    Besides, I tested bulk binding in PL/SQL to insert 200 rows in one execution - no problem at all. The network folks confirm that the network should not be an issue either: ping time from the app server to the DB server is sub-millisecond, and they are in the same data center.
    INSERT INTO ...
    values
    (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
    :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
    :33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45)
    call       count     cpu   elapsed     disk    query   current     rows
    Parse          1    0.00      0.00        0        0         0        0
    Execute        1    0.02     14.29        1       94      2565      200
    Fetch          0    0.00      0.00        0        0         0        0
    total          2    0.02     14.29        1       94      2565      200
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 25
    Elapsed times include waiting on following events:
    Event waited on                   Times Waited   Max. Wait   Total Waited
    SQL*Net more data from client               28        6.38          14.19
    db file sequential read                      1        0.02           0.02
    SQL*Net message to client                    1        0.00           0.00
    SQL*Net message from client                  1        0.00           0.00
    ********************************************************************************

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but no luck. I also ran the process on my laptop, which produced better and faster performance.
    Therefore I made a special (not practical) workaround by creating flat files and defining the data as an external table; Oracle reads the data in those files as if it were data inside a table. This gave me very fast insertion into the database, but I am still looking for an answer to your question here. Using Oracle on an AIX machine is a normal business practice followed by a lot of companies, and there must be a solution for this.
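    A hedged diagnostic sketch: if the "SQL*Net more data from client" waits come from shipping one very large bind batch, timing the same rows sent as smaller standard-JDBC batches may help isolate the problem; the table, column, and chunk sizes below are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ChunkedBatchProbe {
        // Inserts 'rows' rows in chunks of 'chunkSize' and returns the elapsed
        // time, so different chunk sizes can be compared against the one-shot
        // 200-row bind the application uses.
        static long timeInsert(Connection conn, int rows, int chunkSize)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO probe_table (val) VALUES (?)");
            long start = System.currentTimeMillis();
            try {
                for (int i = 0; i < rows; i++) {
                    ps.setInt(1, i);
                    ps.addBatch();
                    if ((i + 1) % chunkSize == 0) {
                        ps.executeBatch();   // one network round trip per chunk
                    }
                }
                ps.executeBatch();
                conn.commit();
            } finally {
                ps.close();
            }
            return System.currentTimeMillis() - start;
        }
    }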

  • XMLTYPE insert performance

    I am experiencing performance problems when inserting a 30 MB XML file into an XMLTYPE column. Under Oracle 11, with the schema I am using, the minimum time I can achieve is around 9 minutes, which is too long. Can anyone comment on whether this performance is normal, and possibly suggest how it could be improved while retaining the benefits of structured storage? Thanks in advance for the help :)

    sorry for the late reply - I didn't notice that you had replied to my earlier post...
    To answer your questions in order:
    - I am using "structured" storage because I read (in this article: [http://www.oracle.com/technology/pub/articles/jain-xmldb.html]) that this would result in higher XQuery performance.
    - The schema isn't very large, but it is complex (as discussed in the above article).
    I built my table by first registering the schema and then adding the XML elements to the table such that they would be stored in structured storage, i.e.
    --// Register schema /////////////////////////////////////////////////////////////
    begin
      dbms_xmlschema.registerSchema(
        schemaurl => 'fof_fob.xsd',
        schemadoc => bfilename('XFOF_DIR','fof_fob.xsd'),
        local     => TRUE,
        gentypes  => TRUE,
        genbean   => FALSE,
        force     => FALSE,
        owner     => 'FOF',
        csid      => nls_charset_id('AL32UTF8')
      );
    end;
    COMMIT;
    and then created the table using ...
    --// Create the XCOMP table /////////////////////////////////////////////////////////////
    create table "XCOMP" (
         "type" varchar(128) not null,
         "id" int not null,
         "idstr1" varchar(50),
         "idstr2" varchar(50),
         "name" varchar(255),
         "rev" varchar(20) not null,
         "tstamp" varchar(30) not null,
         "xmlfob" xmltype)
    XMLTYPE "xmlfob" STORE AS OBJECT RELATIONAL
    XMLSCHEMA "fof_fob.xsd"
    ELEMENT "FOB";
    No indexing was specified for this table. Then I inserted the offending 30 MB XML file using (in C#, using ODP.NET under .NET 3.5):
    void test(string myName, XElement myXmlElem)
    {
        OracleConnection connection = new OracleConnection();
        connection.Open();
        string statement = "INSERT INTO XCOMP ( \"name\", \"xmlfob\" ) values( :1, :2 )";
        XDocument xDoc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"), myXmlElem);
        OracleCommand insCmd = new OracleCommand(statement, connection);
        OracleXmlType xmlinfo = new OracleXmlType(connection, xDoc.CreateReader());
        insCmd.Parameters.Add(FofDbCmdInsert.Name, OracleDbType.Varchar2, 255);
        insCmd.Parameters.Add(FofDbCmdInsert.Xmldoc, OracleDbType.XmlType);
        insCmd.Parameters[0].Value = myName;
        insCmd.Parameters[1].Value = xmlinfo;
        insCmd.ExecuteNonQuery();
        connection.Close();
    }
    It took around 9 minutes to execute the ExecuteNonQuery statement, using Oracle 11 Standard Edition running under Windows 2008 x64 with 8 GB RAM and a 2.5 GHz single core (of a quad-core running under VMware).
    I would much appreciate any suggestions that could speed up the insert performance here. As a temporary solution I chopped some of the information out of the XML document and stored it separately in another table, but this approach has the disadvantage that using XQueries becomes a bit inflexible, although the performance is now in seconds rather than minutes...
    I can't see any reason why Oracle's shredding mechanism should be less efficient than manually shredding the information.
    Thanks in advance for any helpful hints you can provide!

  • Single record insert performance problems

    Hi,
    we have, in a production environment, a Java-based application that makes approx 40,000 single-record inserts per hour into a table.
    We have traced the performance of this insert and the average time is 3 ms, which is OK. Our Java architecture is based on WebSphere Application Server, and we access Oracle 10g through a WAS datasource.
    But we have detected that 3 or 4 times a day, for approx 30 seconds, the Java service is not able to make any insertion into that table. And then suddenly it makes all the "queued" inserts in only 1 second. That pause in the insertions causes navigation problems, because the top layer is a web application.
    We are sure it is not a problem with the WAS or the Java code. We believe it is a problem with the Oracle configuration, or some tuning action for this kind of application that we don't know about. We first thought it could be a problem with a sequence field in the table, or a problem occurring at the redo log switch. But we've checked with our DBA and these are not the problem.
    Does anybody have any idea what could be the origin of this strange behaviour?
    Thanks a lot in advance.
    Jose.

    There are a couple of things you'd need to look at to diagnose this - As Joe says it's not really a JDBC issue from what we know.
    I've seen issues with Oracle's automatic SGA resizing causing sporadic latency in OLTP systems. Another suspect would be log file sync wait events, which are associated with commits. Don't discount the impact of well meaning people using tools like TOAD to query the DB - they can sometimes cause more harm than good.
    Right now I'd suggest you run AWR at 10 minute intervals and compare reports from when you had your problem with a time when you didn't.

  • How to Perform Java Vector Difference?

    Hello All,
    I need some help implementing the difference between 2 Vectors.
    I have written a short program to explain exactly what I want. Basically, I am looking for a method that returns a Vector containing the elements of vect1 minus the elements of vect2.
    I tried using 2 for loops but it's not possible. Is there a built-in or easier way to achieve this difference between 2 Vectors?
    import java.util.Vector;

    public class VectorDifference {
        static Student stud1 = new Student("Test1", 15);
        static Student stud2 = new Student("Test2", 15);
        static Student stud3 = new Student("Test3", 15);
        static Student stud4 = new Student("Test1", 15);
        static Student stud5 = new Student("Test3", 15);
        static Student stud6 = new Student("Test7", 15);
        static Vector vect1 = new Vector();
        static Vector vect2 = new Vector();

        public static void main(String[] args) {
            vect1.add(stud1);
            vect1.add(stud2);
            vect1.add(stud3);
            vect1.add(stud4);
            vect1.add(stud5);
            vect1.add(stud6);
            System.out.println("Vector 1 - Vector 2");
            // (The post stops here; the difference itself is never computed.)
        }
    }

    class Student {
        String name;
        int age;

        public Student(String name, int age) {
            this.name = name;
            this.age = age;
        }

        public int getAge() {
            return age;
        }

        public void setAge(int age) {
            this.age = age;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public boolean equals(Object obj) {
            if (!(obj instanceof Student)) {
                return false;
            }
            Student student2 = (Student) obj;
            return name.equals(student2.getName()) && age == student2.getAge();
        }
    }

    Brynjar wrote:
    Btw, vectors went out of fashion many many years ago. Use ArrayList instead.
    Around '98, with the introduction of ArrayList.
    The OP could also consider using generics, and using a local variable instead of a field, especially a static field, whenever possible.
    Edited by: Peter__Lawrey on 04-Jul-2009 14:44
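    A sketch of the built-in route the replies point at (ArrayList plus generics): List.removeAll computes exactly this kind of difference, relying on the Student.equals above. The contents of the second list are made up here, since the original post never populated vect2:

    import java.util.ArrayList;
    import java.util.List;

    public class VectorDifferenceSketch {
        public static void main(String[] args) {
            List<Student> list1 = new ArrayList<Student>();
            list1.add(new Student("Test1", 15));
            list1.add(new Student("Test2", 15));
            list1.add(new Student("Test3", 15));

            List<Student> list2 = new ArrayList<Student>();
            list2.add(new Student("Test2", 15));

            // removeAll uses Student.equals(...) to drop every element of
            // list2 from the copy, leaving list1 minus list2.
            List<Student> difference = new ArrayList<Student>(list1);
            difference.removeAll(list2);
            System.out.println(difference.size());   // prints 2
        }
    }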

  • Suggestions to improve the INSERT performance

    Hi All,
    I have a table which has 170 columns.
    I am inserting a large volume of data, 50K and more records, into this table.
    My insert looks like this (note the hint must immediately follow the INSERT keyword):
    INSERT /*+ APPEND */ INTO REPORT_DATA (COL1, COL2, COL3, COL4, COL5, COL6)
    SELECT DATA1, DATA2, DATA3, DATA4, DATA5, DATA6 FROM TXN_DETAILS
    WHERE COL1 = 'CA';
    Here I want to insert values for only a few columns, hence I specify only those column names in the insert statement.
    But when a large data set (50k+) is returned by the select query, this statement takes very long to execute (approximately 10 to 15 minutes).
    Please suggest how to improve this insert statement's performance. I am already using the APPEND hint.
    Thanks in advance.

    a - Disable/drop indexes and constraints - It's far faster to rebuild indexes after the data load, all at once. Indexes will also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.
    b - Manage segment header contention for parallel inserts - Make sure to define multiple freelists (or freelist groups) to remove contention on the table header. Multiple freelists add additional segment header blocks, removing the bottleneck. You can also use Automatic Segment Space Management (http://www.dba-oracle.com/art_dbazine_ts_mgt.htm) (bitmap freelists) to support parallel DML, but ASSM has some limitations.
    c - Parallelize the load - You can invoke parallel DML (i.e. using the PARALLEL and APPEND hints) to have multiple inserts into the same table. For this INSERT optimization, make sure to define multiple freelists and use the SQL APPEND option. If you submit parallel jobs to insert against the table at the same time, using the APPEND hint may cause serialization, removing the benefit of parallel jobstreams.
    d - APPEND into tables - By using the APPEND hint, you ensure that Oracle always grabs "fresh" data blocks by raising the high-water mark for the table. If you are doing parallel insert DML, append mode is the default and you don't need to specify an APPEND hint. Also, if you're going with APPEND, consider putting the table into NOLOGGING mode, which will allow Oracle to avoid almost all redo logging.
    insert /*+ append */ into customer values ('hello', 'there');
    e - Use a large blocksize - By defining large (i.e. 32k) blocksizes for the target table, you reduce I/O because more rows fit onto a block before a "block full" condition (as set by PCTFREE) unlinks the block from the freelist.
    f - Use NOLOGGING
    g - RAM disk - You can use high-speed solid state disk (RAM-SAN) to make Oracle inserts run up to 300x faster than platter disk.
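    A minimal JDBC sketch of points (c) and (d), assuming a table pair like the original post's REPORT_DATA/TXN_DETAILS; note that a direct-path (APPEND) insert must be committed before the same session can read the table again:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class DirectPathInsert {
        static void load(Connection conn) throws SQLException {
            Statement stmt = conn.createStatement();
            try {
                // The hint must immediately follow the INSERT keyword;
                // anywhere else it is silently ignored.
                stmt.executeUpdate(
                        "INSERT /*+ APPEND */ INTO report_data (col1, col2) " +
                        "SELECT data1, data2 FROM txn_details WHERE col1 = 'CA'");
                // Direct-path loads write above the high-water mark; commit
                // before the session queries the table again.
                conn.commit();
            } finally {
                stmt.close();
            }
        }
    }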

  • Slow Performance - Java Related?

    This is an old box that I bought used recently, but the system install is recent. The system performs very slowly - about 50% of the speed of comparable Macs in the XBench database. If I use Xupport to manually run "All" of the system maintenance crons I get some improvement, but it quickly goes back to being slow.
    Looking at my logs I have a boatload of Java errors under CrashReporter; there will be a string of "JavaNativeCrash_pidXXX.log" entries - many of them, as follows:
    An unexpected exception has been detected in native code outside the VM.
    Unexpected Signal : Bus Error occurred at PC=0x908611EC
    Function=[Unknown.]
    Library=/usr/lib/libobjc.A.dylib
    NOTE: We are unable to locate the function name symbol for the error
    just occurred. Please refer to release documentation for possible
    reason and solutions.
    Many of the line entries that follow, but not all of them, refer to SargentD2OL, a Java app which I installed, but it did not work properly so I removed it. Yet I continue to get Java errors that refer to this now non-existent app.
    I have read that Java apps use a lot of resources, and that D2OL in particular uses a lot of resources. Could my slow performance problem be Java related? If so, any idea how I can fix this problem?
    G4 AGP Graphics   Mac OS X (10.3.9)   500 MHz, 512M RAM

    Sorry to take so long to respond, but other issues in life have demanded my attention.
    None of the solutions given has had any effect. My Java folder has both a 1.3.1 and a 1.4.2 app - Java Update 2 will not reinstall because it sees an up-to-date app in the folder. But reading the update file, it says the older Java will be removed - yet it is still there. Problem?
    On XBench the system scores a 9 to 10, while similar boxes in the XBench database score around 18 to 20. My CPU, memory, and video scores are very low. The HD throughput scores are the only ones that are normal. TechTool Pro 4 finds no problems. I have removed the memory sticks one at a time and retested after each cycle - no difference.
    I have two drives, each with a 10.3.9 install. One works fine, scoring around a 17 on XBench; the other scores a 9 to 10. So it appears to be a software problem. The slower install is on a drive from an iMac G3 that has been moved to the G4 - are there issues with this?
    My favored drive is the former G3 one (newer and faster than the other drive, which system-tests faster in XBench) - it has my profile and all my info on it. It worked fine in the G3 - no problems.
    Thanks for the help,
    G4 AGP Graphics Mac OS X (10.3.9) 500 MHz, 512M RAM, ATI 8500

  • Truncate Table before Insert--Performance

    HI All,
    This post focuses on a special requirement where a table is truncated before records are inserted into it.
    Now, when a table is truncated, the high-water mark (HWM) is reset to the lowest amount of space allocated for the table in the tablespace. After this, would an insert with APPEND boost the performance of the insert query?
    In a simple insert query, the Oracle engine consults the freelist to look for free space.
    But in an insert with APPEND, the engine starts above the HWM. And the question is: when TRUNCATE has been executed on a table, would the freelist be used in a simple insert?
    I just need to know whether there is any benefit to using an APPEND insert on a truncated table, or whether a simple insert would perform the same as an insert with APPEND.
    Regards
    Nits

    Hi,
    if you don't need the data, truncate the table. There is no negative impact whether you use a conventional-path or a direct-path insert.
    If you use APPEND, less redo is written for the table if the table is in NOLOGGING mode, but redo is still written for all indexes. I would recommend creating a full backup after that (if needed), because your table will not be recoverable otherwise (no redo information).
    Dim

  • Can insert performance be improved playing with env parameters?

    Below are the environment configuration and the results of my bulk-load insert experiments. The results cover two scenarios, described below. The values for the two scenarios are separated by a space.
    Environment Configuration:
    setTxn              N
    DeferredWrite       Y
    Sec Bulk Load       Y
    Post Build SecIndex Y
    Sync                Y
    Column 1 values reflect the scenario:
    Two databases
    a. Database with 2,500,000 records
    b. Database with 2,500,000 records
    Column 2 values reflect the scenario:
    Two databases
    a. Database with 25,000,000 records
    b. Database with 25,000,000 records
    1. Is there good documentation describing what the environment statistics mean?
    2. Looking at the statistics below, can you make any suggestions for performance improvement?
    Looking at the statistics below:

    Eviction Stats
    nEvictPasses                   3929          146066
    nNodesSelected               309219        17351997
    nNodesScanned               3150809       176816544
    nNodesExplicitlyEvicted      152897         8723271
    nBINsStripped                156322         8628726
    requiredEvictBytes           524323          530566

    Checkpoint Stats
    nCheckpoints                     55            1448
    lastCheckpointID                 55            1448
    nFullINFlush                     54            1024
    nFullBINFlush                    26             494
    nDeltaINFlush                   116            2661
    lastCheckpointStart   0x6f/0x2334f8   0xb6a/0x82fd83
    lastCheckpointEnd     0x6f/0x33c2d6   0xb6a/0x8c4a6b
    endOfLog              0xb/0x6f22e     0x6f/0x75a843    0xb6a/0x23d8f

    Cache Stats
    nNotResident                4591918        57477898
    nCacheMiss                  4583077        57469807
    nLogBuffers                       3               3
    bufferBytes                 3145728         3145728
    (MB)                           3.00            3.00
    cacheDataBytes            563450470       370211966
    (MB)                         537.35          353.06
    adminBytes                    29880        16346272
    lockBytes                      1113            1113
    cacheTotalBytes           566596198       373357694
    (MB)                         540.35          356.06

    Logging Stats
    nFSyncs                          59            1452
    nFSyncRequest                    59            1452
    nFSyncTimeouts                    0               0
    nRepeatFaultReads             31513         6525958
    nTempBufferForWrite               0               0
    nRepeatIteratorReads              0               0
    totalLogSize             1117658932     29226945317
    (MB)                        1065.88        27872.99
    lockBytes                      1113            1113

    Hello Linda,
    I am inserting 25,000,000 records of the type:
    Database 1
    Key --> Data
    [long,String,long] --> [{long,long}, {String}]
    The secondary keys are on {long,long} and {String}.
    Database 2
    Key --> Data
    [long,Integer,long] --> [{long,long}, {Integer}]
    The secondary keys are on {long,long} and {Integer}.
    I set the env parameters to non-transactional and setDeferredWrite(true),
    using setSecondaryBulkLoad(true), and then build two secondary indexes on the {long,long} and {String} portions of the data.
    private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
        try {
            SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord>
                    secondaryIndex = store.getSecondaryIndex(
                            dataAccessLayer.getPrimaryIndex(),
                            TDetailSecondaryKey.class,
                            SECONDARY_KEY_NAME);
        } catch (DatabaseException e) {
            throw new RuntimeException(e);
        }
    }
    We are inserting into 2 databases, as mentioned above.

    NumRecs          250,000x2   2,500,000x2   25,000,000x2
    TotalTime(ms)        16877        673623       30225781
    PutTime(ms)           7684         76636        1065030
    BuildSec(ms)          4952        590207       29125773
    Sync(ms)              4241          6780          34978

    Why does building the secondary index (2 secondary databases in this case) take so much longer than inserting into the primary database - 27 times longer?
    It's hard to believe that building the tree for the secondary database takes so much longer. Why doesn't building the tree for the primary database take as long? The data in the primary database is the same as its key, to be able to search on these values, hence it is surprising that it takes so long.
    The cache stats mentioned above relate to these runs.
    Can you try explaining this? We are trying to figure out whether it is worth building the secondary index later for bulk loading.
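    A minimal sketch of the deferred-write, secondary-bulk-load setup described in this thread, using Berkeley DB Java Edition's DPL; the store name, home directory, and cache size are assumptions (the eviction stats above suggest the cache was under pressure):

    import java.io.File;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.StoreConfig;

    public class BulkLoadSetup {
        public static EntityStore open(File home) throws DatabaseException {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(false);            // setTxn N
            envConfig.setCacheSize(1024L * 1024 * 1024);  // hypothetical 1 GB cache
            Environment env = new Environment(home, envConfig);

            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setAllowCreate(true);
            storeConfig.setDeferredWrite(true);       // DeferredWrite Y
            storeConfig.setSecondaryBulkLoad(true);   // Sec Bulk Load Y; secondaries
                                                      // are built later, as in
                                                      // buildSecondaryIndex above
            return new EntityStore(env, "bulkStore", storeConfig);
        }
    }

    After the load, calling store.sync() flushes the deferred writes (the Sync Y step above).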

  • Improve Database adapter insert performance

    Hopefully this is an easy question to answer. I'm getting over 8,000 records passed to my BPEL process, and I need to take those records and insert them into an Oracle database. I've been trying to tune the insert by using properties like inMemoryOptimization, but the load still takes several hours. Any suggestions on how to get the Database adapter to perform better, or to load all 8,000 records at once? Thanks in advance.

    Hello.
    8,000 records doesn't sound "huge"; unless a record is, say, 1 kB, in which case you have 8 MB, which is a large payload to move around in one piece.
    A DB merge is typically slower than an insert, though you did say you were using an insert.
    If you are inserting each row one at a time, that would be pretty slow.
    Normally the input to a DB adapter insert is a collection (of rows) rather than a single row. If you have been handed 8,000 individual rows, you can assemble them into a collection with an iteration - tedious in BPEL, but it works fine.
    Daren
