Why does inserting a row cause a sort (as can be seen from autotrace)?

When I simply insert a row into a table, I can also see a sort in memory in the autotrace statistics. But why does an insert need a sort?
C:\Documents and Settings\qiwu>sqlplus qihua/qihua
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jul 27 19:51:51 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> drop table t2;
Table dropped.
SQL> create table t2 (a int, b int);
Table created.
SQL> set autotrace on;
SQL> insert into t2 values(1,1);
1 row created.
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
Statistics
3 recursive calls
8 db block gets
2 consistent gets
0 physical reads
0 redo size
670 bytes sent via SQL*Net to client
558 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
2 sorts (memory)   <- on the first run there are 2 sorts; after running it several times, only 1 sort
0 sorts (disk)
1 rows processed
SQL> insert into t2 values(1,1);
1 row created.
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
Statistics
0 recursive calls
1 db block gets
1 consistent gets
0 physical reads
288 redo size
671 bytes sent via SQL*Net to client
558 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> insert into t2 values(1,1);
1 row created.
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | INSERT STATEMENT | | 1 | 100 | 1 (0)| 00:00:01 |
Statistics
0 recursive calls
1 db block gets
1 consistent gets
0 physical reads
288 redo size
672 bytes sent via SQL*Net to client
558 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

At a guess...
The first execution makes a number of recursive calls - that'll be SQL run against the data dictionary, and might well account for one of your sorts.
After that, probably one sort for the EXPLAIN PLAN part of the autotrace output.
Regards
Tony Berrington
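
One way to double-check where the sorts come from (a minimal sketch, not from the original thread; it assumes SELECT privilege on V$MYSTAT and V$STATNAME) is to switch autotrace off and read the session's own sort counters before and after an insert. If the counters barely move, the sorts reported above belong to autotrace's recursive dictionary work and its EXPLAIN PLAN step rather than to the INSERT itself:
set autotrace off
select sn.name, ms.value
  from v$mystat ms
  join v$statname sn on sn.statistic# = ms.statistic#
 where sn.name in ('sorts (memory)', 'sorts (disk)');
insert into t2 values (2, 2);
-- re-run the v$mystat query and compare the two sets of values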

Similar Messages

  • The inserted SD card will not allow me to upload or delete from it. Also with other SD cards

    I have placed multiple SD cards into the SD slot of my iMac and all are unlocked (enabled to be edited). Not a single one allows me to add or delete documents. I am starting to think this is an iMac issue. Please help.

    Hi, in Finder, do a Get Info on them, what rights do you have?

  • Why doesn't/When will Apple TV support 24/192 audio streaming from iTunes???!  this is needed badly.

    Apple TV truncates and resamples all files from iTunes to 16-bit/48k. This loses a large amount of audio information and is very unwelcome even for neophyte audiophiles - just those of us who enjoy good music. As music lovers we are forced to hop, skip and jump around iTunes and use other audio interfaces that don't play well with AirPlay or iTunes and aren't nearly as flexible (like Sonos or Naim, or similar), all just to get our higher-res music files to play at their intended bit depth and sample rate. I have tried many solutions for this issue, and still the very best would be for Apple to make the Apple TV (which many of us use to stream audio) and the AirPort Express output digital audio at full rates, i.e. 24/192. Even 24/96 would be a very, very welcome upgrade.
    SO, APPLE, PLEASE PLEASE.  LET'S DO THIS.  WITH THE VERY NEXT ITERATION OF ATV.  PLEASE!!!!
    thanks.

    You have posted to the iTunes Match forum which your comment is not related to. You might want to do a search on the iTunes for Mac forum to see the previous discussions on this topic.
    Long story short, though, you are not addressing anyone from Apple here. You may submit your polite feedback here: http://www.apple.com/feedback/appletv.html

  • Inserting multiple rows in child table

    I have two entity beans (main and child) with a one-to-many relationship. When I insert one row into the main table (i.e. when I create one object for the main entity bean), how do I insert multiple rows into the child table?

    Can anyone please provide some sample code for the above - how to pass a collection and populate it in the child table?
    1. Where should the collection be passed: to the child bean directly, or to the parent bean, which then iterates over the collection and creates the child beans?
    Much obliged if you could paste some code for the above.

  • How to check if a row to be inserted will cause a circular loop?

    Hi-
    I'm using the CONNECT BY clause to do some processing and to check that the data is not causing circular loops (which raise SQLCODE = -1436).
    Customers can add data via an API, and this data, once inserted, can cause the problem explained above (a loop). Therefore I need to find a way to check that these new values don't collide with the existing data in the table.
    One way that currently works is to insert the data and use my PL/SQL function to check that there are no loops; if there is a loop, I catch the exception, delete the rows I just inserted, and throw the error back to the user.
    I find it very ugly and inefficient, but I can't find a way to use CONNECT BY with data that is not present in the table. Example:
    table my_table contains
    parent_id | child_id
    111 | 777
    777 | 333
    and now customer wants to insert:
    parent_id | child_id
    777 | 111
    Also, if the customer wants to insert
    333 | 111
    and I insert the row and run my script, it will work OK, but only because I actually insert the row. Is there any way to validate this without inserting/removing the row? (One possible approach is sketched at the end of this thread.)
    the script I'm using is similar to the one posted here:
    problems using CONNECT BY
    thanks

    This approach may fail to detect loops introduced by concurrent transactions if one of the transactions uses a serializable isolation level.
    For example, assume there are two sessions (A and B), the table and trigger have been created, and the table is empty. Consider the following scenario.
    First, session A starts a transaction with serializable isolation level:
    A> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    Transaction set.
    Next, session B inserts a row and commits:
    B> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
    1 row created.
    B> COMMIT;
    Commit complete.
    Now, session A successfully inserts a conflicting row and commits:
    A> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
    1 row created.
    A> COMMIT;
    Commit complete.
    A> SELECT * FROM MY_TABLE;
    PARENT_ID  CHILD_ID
          111       777
          777       111
    Session A "sees" the table "as of" a point in time before session B inserted. Also, once session B commits, the lock it acquired is released.
    An alternative approach that would prevent this could use SELECT...FOR UPDATE and a "table of locks" like this:
    SQL> DROP TABLE MY_TABLE;
    Table dropped.
    SQL> CREATE TABLE MY_TABLE
      2  (
      3      PARENT_ID NUMBER,
      4      CHILD_ID NUMBER
      5  );
    Table created.
    SQL> CREATE TABLE LOCKS
      2  (
      3      LOCK_ID INTEGER PRIMARY KEY
      4  );
    Table created.
    SQL> INSERT INTO LOCKS(LOCK_ID) VALUES(123);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    SQL> CREATE OR REPLACE TRIGGER MY_TABLE_AI
      2      AFTER INSERT ON my_table
      3  DECLARE
      4      v_count NUMBER;
      5      v_lock_id INTEGER;
      6  BEGIN
      7      SELECT
      8          LOCK_ID
      9      INTO
    10          v_lock_id
    11      FROM
    12          LOCKS
    13      WHERE
    14          LOCKS.LOCK_ID = 123
    15      FOR UPDATE;
    16         
    17      SELECT
    18          COUNT (*)
    19      INTO  
    20          v_count
    21      FROM  
    22          MY_TABLE
    23      CONNECT BY
    24          PRIOR PARENT_ID = CHILD_ID;
    25 
    26  END MY_TABLE_AI;
    27  /
    Trigger created.
    Now the scenario plays out like this.
    First, session A starts a transaction with serializable isolation level:
    A> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    Transaction set.
    Next, session B inserts a row and commits:
    B> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
    1 row created.
    B> COMMIT;
    Commit complete.
    Now, when session A tries to insert a conflicting row:
    A> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
    INSERT INTO my_table (parent_id, child_id) VALUES (777, 111)
    ERROR at line 1:
    ORA-08177: can't serialize access for this transaction
    ORA-06512: at "TEST.MY_TABLE_AI", line 5
    ORA-04088: error during execution of trigger 'TEST.MY_TABLE_AI'
    To show that this still handles other cases:
    1. Conflicting inserts in the same transaction:
    SQL> TRUNCATE TABLE MY_TABLE;
    Table truncated.
    SQL> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
    1 row created.
    SQL> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
    INSERT INTO my_table (parent_id, child_id) VALUES (777, 111)
    ERROR at line 1:
    ORA-01436: CONNECT BY loop in user data
    ORA-06512: at "TEST.MY_TABLE_AI", line 15
    ORA-04088: error during execution of trigger 'TEST.MY_TABLE_AI'
    2. Read-committed inserts that conflict with previously committed transactions:
    SQL> TRUNCATE TABLE MY_TABLE;
    Table truncated.
    SQL> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
    1 row created.
    SQL> COMMIT;
    Commit complete.
    SQL> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
    INSERT INTO my_table (parent_id, child_id) VALUES (777, 111)
    ERROR at line 1:
    ORA-01436: CONNECT BY loop in user data
    ORA-06512: at "TEST.MY_TABLE_AI", line 15
    ORA-04088: error during execution of trigger 'TEST.MY_TABLE_AI'
    3. Conflicting inserts in concurrent, read-committed transactions:
    a) First, empty out the table and start a read-committed transaction in one session (A):
    A> TRUNCATE TABLE MY_TABLE;
    Table truncated.
    A> SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    Transaction set.
    b) Now, start a read-committed transaction in another session (B) and insert a row:
    B> SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    Transaction set.
    B> INSERT INTO my_table (parent_id, child_id) VALUES (111, 777);
    1 row created.
    c) Now, try to insert a conflicting row in session A:
    A> INSERT INTO my_table (parent_id, child_id) VALUES (777, 111);
    This is blocked until session B commits, and when it does:
    B> COMMIT;
    Commit complete.
    the insert in session A fails:
    INSERT INTO my_table (parent_id, child_id) VALUES (777, 111)
    ERROR at line 1:
    ORA-01436: CONNECT BY loop in user data
    ORA-06512: at "TEST.MY_TABLE_AI", line 15
    ORA-04088: error during execution of trigger 'TEST.MY_TABLE_AI'
    If updates are permitted on the table, then they could cause loops also, but the trigger could be modified to test for this.
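    Coming back to the original question of validating a candidate row without inserting it: one possibility (a minimal sketch, not from this thread; :new_parent and :new_child are hypothetical bind variables) is to run the CONNECT BY check over the existing rows UNION ALL the candidate row. If the candidate would close a cycle, the query raises ORA-01436 and nothing ever has to be inserted or deleted:
    SELECT COUNT(*)
      FROM (SELECT parent_id, child_id FROM my_table
            UNION ALL
            SELECT :new_parent, :new_child FROM dual)  -- the row the customer wants to add
    CONNECT BY PRIOR parent_id = child_id;
    Note that this has the same concurrency caveat discussed above: without something like the SELECT ... FOR UPDATE on the LOCKS row, two concurrent sessions could each validate successfully and then both insert.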

  • Question: why does "1 row(s) inserted" display even though nothing was inserted?

    My question is: when I run the code below in SQL Workshop / SQL Commands, even though the insert statement is commented out, I still get the following message:
    1 row(s) inserted.
    Why? I checked the table and I can see that nothing was inserted. So why do I get the message "1 row(s) inserted"?
    I will be running this on an APEX form. Should I worry about this?
    Howard
    declare
        getFm411ID number;
        distCount  number;
    begin
        select count(*) into distCount from test where no = :P_NO and active = 'A';
        if distCount > 0 then
            -- insert into tbl_test(fm411_id,dist_id) values (getFm411ID,my411dist.dist_id);
            dbms_output.put_line('test');
        end if;
    end;

    It is a dummy message; there is no need to worry about it.
    begin
      null;
      -- insert into no_such_table (x) values ('a');
    end;
    The above dummy code produces the "1 row(s) inserted" message.
    Ravi

  • Why is it so laborious to insert table rows?

    I have a table that is at least 1,000 rows in length. I've
    used the shortcut Alt+A+I+R to insert a single table row, and I've
    found that if I select existing rows, such as five or ten at a
    time, I can insert that same number of rows in one action. However,
    to insert five rows at a time, it takes about 3-5 minutes. To
    insert 10 table rows at a time, it takes about 5-10 minutes.
    Is there any way I can eliminate the time lag to insert table
    rows? I tried to create one new table of 1,000 rows initially, but
    RoboHelp limits the table creation to 100 rows at a time.
    thanks in advance,
    ChristyG

    Hi ChristyG
    I think what you are seeing is something inherent with the
    RoboHelp HTML editor. 1,000 rows is a pret-ty large table IMHO. The
    RoboHelp HTML WYSIWYG editor just wasn't designed to handle things
    that size. I'm just guessing here, but you probably also notice
    that you can easily "type ahead" in this topic and it looks as if
    you are typing i n s l o w m o t i o n.
    So unfortunately, I'm going to have to say that no, there is
    simply no way to "speed it up". The only possibility that comes to
    mind is to use a different HTML editor for such topics. I have no
    idea if Dreamweaver or other editors are up to that sort of task.
    Perhaps someone with more experience with different editors can
    comment on that.
    Sincerely... Rick

  • Will a different optimizer_mode cause a different result (i.e. row count)?

    Hello experts,
    In my stored procedure, I change "optimizer_mode" several times between "rule" and "all_rows". I am wondering, for the same statement, whether using a different optimizer_mode can cause a different result in terms of data volume (such as row count)?
    Many thanks,

    What version of Oracle?
    The RULE hint should probably not be used in or after 10g except in very precise and very limited circumstances, which usually involve querying data where no statistics exist - an unlikely event in 10g or 11g. Maybe with collection-to-table CAST() or TABLE() conversions and similar situations.
    The cost-based optimizer is almost always to be preferred to the rule-based optimizer in recent versions, because almost all of the useful new features the database implements require it.
    Like BluShadow, I wonder what the need to switch between the optimizers could possibly be.
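    On the original worry: optimizer_mode only changes how Oracle executes a statement, not which rows it returns, so for the same data and the same statement the row counts should match even if the plans differ. A minimal sketch (not from this thread; the table and column names are made up):
    ALTER SESSION SET optimizer_mode = ALL_ROWS;
    SELECT COUNT(*) FROM some_table WHERE some_col = 1;
    ALTER SESSION SET optimizer_mode = RULE;
    SELECT COUNT(*) FROM some_table WHERE some_col = 1;  -- same count expected, possibly via a different plan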

  • Inserting a Row in to a JTable

    I have been looking and looking in the tutorials and other topics but cannot find exactly what I need. I just need to start with a blank table, and once something happens it will trigger inserting a row into the table for updates.
    Here is the code on how I start the table.
    1. Object[] columnNames = {"Type", "Date", "Time", "Computer"};
    2. Object[][] rowData = {};
    3. errorTable = new JTable(rowData, columnNames);
    4. DefaultTableModel dtm = (DefaultTableModel)errorTable.getModel();
    5. dtm.setDataVector(rowData, columnNames);
    I get a ClassCastException on line 4.....why is that???

    Okay, so that worked to use DefaultTableModel... but when I try to insert a row later on, I get a null pointer exception when hitting line 7 below:
    //Getting computer name from the tree node
    1. String computer = getName(((DefaultMutableTreeNode) serverTree.getLastSelectedPathComponent()).getParent().toString(), "/");
    //Adding information to a vector to add to table
    2. dataVector.addElement("Information");
    3. dataVector.addElement("");
    4. dataVector.addElement("");
    5. dataVector.addElement(computer);
    6. holdVector.addElement(dataVector);
    //Hoping to insert the row here (but get null pointer)
    7. errorTableModel.setDataVector(dataVector, holdVector);
    What is the issue??? Do I need to do an insert row now or what? Thank you in advance.

  • Inserting Multiple Rows into Database Table using JDBC Adapter - Efficiency

    I need to insert multiple rows into a database table using the JDBC adapter (receiver).
    I understand the traditional way of repeating the statement multiple times, each having its <access> element. However, I am just wondering whether this might be performance-inefficient, as it might insert records one by one.
    Is there a way to ensure that the records are inserted into the table as a block, rather than record-by-record?

    Hi Bhavesh/Kanwaljit,
    If we have multiple ACCESS tags, then what happens is that the connection to the database is made only once, but the data is inserted row by row.
    Why am I saying this?
    If we add logSQLStatement = true in the JDBC adapter, then in the case of multiple inserts we can see that there are multiple statements:
    Insert into tablename(EMP_NAME,EMP_ID) VALUES('J','1000')
    Insert into tablename(EMP_NAME,EMP_ID) VALUES('J','2000')
    Doesn't this mean that rows are inserted one by one?
    Correct me if I am wrong.
    This does not mean that the transaction is not guaranteed. Either all the rows will be inserted or all will be rolled back.
    Regards,
    Sumit

  • Insert table row - moving Selection

    I have a table row selected, and when new data is inserted the table rows move down. When the first row is NOT selected and a new table row is inserted, the selected row moves down programmatically as expected.
    However, if the first row is selected and new data is added to the first row, the first and second rows both end up selected, even though programmatically it should work just as it does when the selected row is the second, third, etc.
    What's up with the first table row being selected and new data inserted into the first table row causing selection unions, even though programmatically it shouldn't?
    Thanks
    Abraham Khalil

    After the row is inserted, have you tried
    table.setRowSelectionInterval(row1, row2)?
    And in case it is the first row,
    table.setRowSelectionInterval(0, 0);
    And you can even scroll to show the selection if you want by calling
    table.scrollRectToVisible(table.getCellRect(selectedRow, 0, true));
    which will show the selectedRow.
    HTH
    Bing
              

  • Fixed first row regardless of column sort

    I have a report where I need the first displayed row to always be a specific row that has the input fields (text fields, select lists, etc.); all columns in the report are sortable. The problem is that when a user sorts the report by any column, the row with the input fields is no longer the first row because of the sort.
    Does anyone have an idea how to keep the input-fields row at the top of the report (first row) regardless of sort?

    Why include the input fields in the report and not use page items at the top of the report region?

  • Database trigger to insert duplicated rows on audit table

    Hi
    Is it possible to insert duplicate rows (at the moment the database raises a PK violation for one specific table) into an audit table?
    Code like the following is not working; the whole transaction always rolls back and the audit table ends up empty:
    CREATE OR REPLACE TRIGGER USER.audit_TABLE_TRG
    before INSERT ON USER.TABLE
    REFERENCING NEW AS NEW OLD AS OLD
    FOR EACH ROW
    BEGIN
      declare
        V_conteo number(1) := 0;
        duplicate_record EXCEPTION;
      begin
        select count(*)
          into V_conteo
          from USER.TABLE
         where <PK conditions>;
        if V_conteo > 0 then
          begin
            INSERT INTO USER.AUDIT_TABLE
            (<>)
            VALUES
            (<>);
            raise duplicate_record;
          exception
            when duplicate_record then
              INSERT INTO USER.AUDIT_TABLE
              (<>)
              VALUES
              (<>);
              raise_application_error(-20019,'Duplicated column1/column2:'||:NEW.column1||'/'||:NEW.column2);
            when others then
              dbms_output.put_line('Error ...'||sqlerrm);
          end;
        end if;
      end;
    END;
    /

    >
    Exactly, this is my problem - one single transaction (insert into the audit table and try to insert into the target table). The reason for this post is to know whether there exists another way to insert all the attempted duplicate records for the target table into an audit table. You're right, maybe I can add a date column and modify the PK on the audit table, but my main problem still happens.
    >
    Can I ask why you want to go the trigger route for this if your intention is only to capture the duplicate records? Assuming that you are at least on 10gR2, you can look at the DML error logging feature. If you go this route, there is no need for the additional overhead of a trigger, code failures, etc.
    Oracle can automatically store the failed records in an error table which you can later investigate and fix or ignore.
    Simple example:
    SQL> create table emp (empno number primary key, ename varchar2(10), sal number);
    Table created.
    SQL> insert into emp values(1010, 'Venkat', 100);
    1 row created.
    SQL> commit;
    Commit complete.
    Create an error table to log the failed records:
    BEGIN
          DBMS_ERRLOG.create_error_log (dml_table_name => 'emp');
    END;
    /
    Now let's insert a duplicate record
    SQL> insert into emp values(1010, 'John', 200) ;
    insert into emp values(1010, 'John', 200)
    ERROR at line 1:
    ORA-00001: unique constraint (VENKATB.SYS_C002813299) violated
    Now use the log table to capture
    SQL> insert into emp values(1010, 'John', 200) LOG ERRORS INTO err$_emp ('INSERT') REJECT LIMIT UNLIMITED;
    0 rows created.
    Now check the error log table and do whatever you want
    SQL> r
      1* select ORA_ERR_MESG$, empno, ename, sal from err$_EMP
    ORA_ERR_MESG$                                                      EMPNO ENAME   SAL
    ORA-00001: unique constraint (VENKATB.SYS_C002813467) violated     1010  John    200
    1 row selected.
    This will also capture when you do
    INSERT INTO EMP SELECT * FROM EMP_STAGE LOG ERRORS INTO err$_emp ('INSERT') REJECT LIMIT UNLIMITED;
    You can capture the whole record into the log table. All columns.
    Regards
    Edited : Code

  • Inserting one row with a FileInputStream bigger than 30 MB always gives OutOfMemoryError

    Hi, can somebody help me solve my problem, as follows:
    The following file 'insertTest.jsp' runs with Tomcat-5.0.18 and MySQL-4.0.17.
    If I insert a row with a file bigger than 30 MB, I always get the OutOfMemoryError messages shown below.
    In fact, I have adjusted the following variables:
    set-variable = max_allowed_packet=100M
    set-variable = max_heap_table_size=100M
    set-variable = tmp_table_size=100M
    set-variable = key_buffer_size=100M
    set-variable = innodb_buffer_pool_size=100M
    set-variable = max_connections=200
    set-variable = bulk_insert_buffer_size=100M
    set-variable = sort_buffer_size=100M
    set-variable = myisam_sort_buffer_size=100M
    'insertTest.jsp' :
    <%@ page language="java" contentType="text/html;charset=Big5" errorPage="" %>
    <%@ page import="java.sql.*,java.io.*" %>
    <%
    Connection conn = null;
    java.sql.Statement stmt = null;
    String tableName = "goodtable";
    boolean tableNameExist = false;
    String createtable = "Create table "+tableName+" (sn INT AUTO_INCREMENT NOT NULL PRIMARY KEY ,fileImage longblob)";
    Class.forName("com.mysql.jdbc.Driver");
    conn = DriverManager.getConnection("jdbc:mysql://thisIpAddress/thisDatabaseName?user=thisUser&password=thisPsw");
    try {
        DatabaseMetaData dbmd = conn.getMetaData();
        String[] types = {"TABLE"};
        ResultSet resultSet = dbmd.getTables(null, null, "%", types);
        while (resultSet.next()) {
            String tName = resultSet.getString(3);
            if (tName.equals(tableName)) {
                tableNameExist = true;
                break;
            }
            String tableCatalog = resultSet.getString(1);
            String tableSchema = resultSet.getString(2);
        }
    } catch (SQLException e) {}
    out.print("<html><body bgcolor='#669999'>");
    if (!tableNameExist) {
        stmt = conn.createStatement();
        stmt.execute(createtable);
        out.print("<p>&#12288;<p><center>CreateTable="+tableName+" Ok!<hr>");
    } else {
        out.print("<p>&#12288;<p><center>' "+tableName+" ' already exist!<hr>");
    }
    // The test file (at C:\fileSize30M.zip) is put here.
    File file = new File("C:\\fileSize30M.zip");
    FileInputStream fis = new FileInputStream(file);
    String qry = "Insert into "+tableName+"(fileImage) values(?)";
    java.sql.PreparedStatement pstmt = conn.prepareStatement(qry);
    pstmt.setBinaryStream(1, fis, fis.available());
    out.print("'before pstmt.executeUpdate()' is ok<p>");
    System.out.println("'before pstmt.executeUpdate()' is ok<p>");
    int ok = pstmt.executeUpdate();
    if (ok == 1) {
        out.print("'after pstmt.executeUpdate()' is ok");
        System.out.println("'after pstmt.executeUpdate()' is ok<p>");
    }
    fis.close();
    pstmt.close();
    conn.close();
    out.print("</body></html>");
    %>
    type: Exception report
    description: The server encountered an internal error () that prevented it from fulfilling this request.
    exception :
    javax.servlet.ServletException
    org.apache.jasper.runtime.PageContextImpl.doHandlePageException(PageContextImpl.java:867)
    org.apache.jasper.runtime.PageContextImpl.handlePageException(PageContextImpl.java:800)
    org.apache.jsp.insertTest_jsp._jspService(insertTest_jsp.java:101)
    org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:133)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:311)
    org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:301)
    org.apache.jasper.servlet.JspServlet.service(JspServlet.java:248)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    root cause :
    java.lang.OutOfMemoryError

    Increase your heap size on Tomcat startup. I haven't used Tomcat in some years, but I think you can just set a JAVA_OPTS environment variable containing -Xmx128M and your problems will go away. I'm thinking this is an issue because I'm assuming you are running Tomcat on a Windows box, where the default heap for Java is relatively small - like 32M or 64M or something like that.
    Cliff

  • Sorting and inserting the rows of a BitSet matrix....Brain teaser!!!

    Initially, I have two BitSet matrices:
    1. temp_matrix: initially, it has all the rows, say n, in sorted order based on their cardinality.
    2. Final_matrix: initially, empty.
    Operation being performed on the BitSet matrices:
    1. Take each row of temp_matrix and perform an "and" operation of it with all the other rows underneath it: take the first row and "and" it with every row below it, then take the second row and "and" it with every row below that, and repeat until you reach the (n-1)th row, which is "anded" with the nth row.
    2. The "anded" result of each pair of rows has to be stored in Final_matrix in sorted order based on its cardinality.
    For example: say at any instant Final_matrix had the following state:
    {1}
    {3}
    {1,2,3}
    {3,4,5,6}
    {1,2,3,4,5,6}
    And suppose I have to insert a new row, say {4}, into it at the 3rd position.
    How should I do that? I mean, which sorting method will be time-efficient, and how should I implement it in terms of inserting a new row into the list, given the issues of shifting rows, etc.?
    The data structure I am using to store the bits is BitSet.
    Any suggestions !!!

    Can you tell me how to wrap the BitSet entries in linked lists?
    I am pasting a snapshot of my code, if that helps:
    for (i = 0; i < count_bit_vector_global - n; i = i + n) {
        long initial = 0;
        initial = System.currentTimeMillis();
        count_j = 0;
        for (int j = i; j < i + n; j++) {
            if (temp_vector1.get(j) == true)
                result_row_vector.set(count_j);
            else
                result_row_vector.clear(count_j);
            count_j++;
        } /* End for j: result row created */
        if (result_row_vector.cardinality() > num_one) {
            for (int k = (i + n); k < count_bit_vector_global; k = k + n) {
                check = 0;
                if ((temp_vector1.get(i, i + n)).intersects(temp_vector1.get(k, k + n)) == true) { /* All result elements are not zero */
                    count_j = 0;
                    for (int j = i; j < i + n; j++) {
                        if (temp_vector1.get(j) == true)
                            result_row_vector.set(count_j);
                        else
                            result_row_vector.clear(count_j);
                        count_j++;
                    }
                    result_row_vector.and(temp_vector1.get(k, k + n));
                    while (check < count_bit_vector) {
                        if ((result_row_vector.get(0, count_j)).equals(bit_vector1.get(check, check + n))) {
                            flag = true;
                            s2 = track_vector.get(check / n).toString();
                            s1 = temp_track_vector.get(i / n).toString().concat(b).concat(temp_track_vector.get(k / n).toString());
                            track_vector.set(check / n, concat_string(s2, s1));
                            break;
                        } else {
                            flag = false;
                            check = check + n;
                        }
                    } /* End while */
                }
            }
        }
    }
    In this code n decides the length of one row. You can see I am simulating a 1D BitSet as a 2D BitSet matrix.
    Don't pay any heed to the track_vector.
    But I will summarize what I am doing here.
    Temp_vector1 is the initial BitSet storing the original matrix.
    bit_vector1 is the BitSet storing the new "anded" rows.
    "i" decides which row will be compared to all the other rows after it.
    "k" decides the rows which come after the row determined by "i".
    I am taking the row decided by "i", storing it in result_row_vector, and then "anding" it with the other rows decided by "k".
    "count_bit_vector" tells the number of bit elements in temp_vector.
    "check" tells the position of the row in bit_vector1 which is similar to the row obtained after the "and" operation.
    If I find the row, I don't add it to bit_vector1, but if it's not there then it is added.
    Right now, before rejecting a row I have to compare it against all the rows already in bit_vector1, and that is taking time.
