Significance of ResultSet.setFetchSize()

We are using Oracle 8i as the DB server, WebLogic 6.1 as the app server, and IIS 5.0 as the web server, each on a different physical machine. We are using the OCI8 driver.
We have a utility that acts as a cache manager: we give it a query and it holds the result in an ArrayList.
We are measuring how much time the DB server takes to execute the query and how much the app server takes to fetch the rows using while(rs.next()).
Now in non-peak hours
     Query Execution Time : 1000
     Row Fetching Time : 100
But during the peak hours
     Query Execution Time : 2000
     Row Fetching Time : 2000 (sometimes 3000).
I understand that during peak hours the DB server is very busy, so the query takes longer to execute. Also, during peak hours network traffic is very high and the DB server is already busy, so the row fetching time goes very high.
During peak hours our app server is always about 40% idle.
I believe that each time we call rs.next(), the app server makes a network trip to the DB server to fetch a single row. So if my query returns 100 rows, there will be 100 network trips from the app server to the DB server, and performance degrades.
Now, ResultSet has a setFetchSize() method that determines how many records the OCI8 driver will fetch per call.
So suppose I set the fetch size to 10...
(1) Will I gain performance?
(2) Will I lose performance anywhere else across the application?
Regards,
Chetan Parekh
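
For reference, a minimal sketch of the idea (assuming the usual java.sql/java.util imports; the Connection conn and the table name are illustrative). The fetch size is only a hint to the driver about how many rows to pull per network round trip:

    Statement stmt = conn.createStatement();
    stmt.setFetchSize(10);                                   // ask for ~10 rows per round trip instead of 1
    ResultSet rs = stmt.executeQuery("SELECT * FROM orders");
    List<Object[]> cache = new ArrayList<Object[]>();        // the cache manager's ArrayList
    int cols = rs.getMetaData().getColumnCount();
    while (rs.next()) {                                      // the driver refills its row buffer as needed
        Object[] row = new Object[cols];
        for (int i = 1; i <= cols; i++) {
            row[i - 1] = rs.getObject(i);
        }
        cache.add(row);
    }
    rs.close();
    stmt.close();

The trade-off is described further down in this collection: fewer round trips versus the memory needed to buffer the prefetched rows on the app server.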

Hi,
Have you got a solution? If yes, please tell me what it is and how to do it.
Thanks,
Suresh Jamma.

Solution to what?

Similar Messages

  • Resultset return takes long time

    I have a query that returns about 1500 rows. When I process the records, I find that it takes about 1 second to get the first 10 rows back, then 2 minutes to get the next 100 rows; getting all the rows back took more than 10 minutes. I have set resultSet.setFetchSize(100).
    Has anyone seen this problem before? Any idea how I could speed it up?
    thanks
    Kelly

    If I run the query directly from SQL*Plus, it only takes about 40-50 seconds. Why is it taking so long when using the ResultSet?

    Because your code is processing the ResultSet, and doing so in an inefficient way. You're almost certainly adding your results to a data structure which becomes less and less efficient as you add items to it.
    Show us the code please.
    Dave.
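
    One common pattern that degrades exactly this way (illustrative only, not necessarily the poster's code) is building the output by String concatenation: each append copies everything accumulated so far, so the loop is fast for the first rows and crawls for the later ones.

    String report = "";
    while (rs.next()) {
        report += rs.getString(1) + "\n";   // O(n^2) overall: every += copies the whole report
    }
    // The usual fix is a StringBuilder (or a pre-sized ArrayList for objects),
    // where each append is amortised O(1):
    //     StringBuilder report = new StringBuilder();
    //     while (rs.next()) { report.append(rs.getString(1)).append('\n'); }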

  • Retrieving large numbers of records over the network takes too much time. How can I improve it?

    Hi all,
    I have an Oracle 10g server and I have to fetch around 200 thousand (2 lakh) records. I am using a servlet deployed on JBoss 4.0, and the records come over the network.
    I used the plain rs.next() loop, but it takes too much time: I get only about 30 records within 1 second, so fetching all 2 lakh records would take more than 40 minutes, and my requirement is that they must be retrieved within 40 minutes.
    Is there another way around this problem? Is there any way to make the ResultSet get, say, 1000 records in one call?
    I read somewhere: "If we use a normal ResultSet, data isn't retrieved until you do the next call. The ResultSet isn't a memory table; it's a cursor into the database which loads the next row on request (though drivers are at liberty to anticipate the request)."
    So if we could request around 1000 records in one call, maybe we could reduce the time.
    Does anyone have an idea how to improve this?
    Regards,
    Shailendra Soni

    That's true...
    I solved my problem by invoking setFetchSize on the ResultSet object, e.g. resultSet.setFetchSize(1000).
    But that only sorted out the problem for fewer than 1 lakh records; I still have to test with more than 1 lakh records.
    I read a nice article on the net:
    [http://www.precisejava.com/javaperf/j2ee/JDBC.htm#JDBC114]
    They describe solutions for this type of problem, but they don't give any examples, and without examples I don't see how to apply them.
    They give two solutions, i.e. "Fetch small amounts of data iteratively instead of fetching the whole data at once":
    Applications often need to retrieve huge amounts of data from the database using JDBC, for example when searching. If the client requests a search, the application might return the whole result set at once. This takes a lot of time and hurts performance. The solutions to the problem are:
    1. Cache the search data on the server side and return it to the client iteratively. For example, if the search returns 1000 records, return the data to the client in 10 iterations where each iteration has 100 records.
    // But I don't understand how I can do this in Java.
    2. Use stored procedures to return data iteratively. This does not use server-side caching; rather, the server-side application uses stored procedures to return small amounts of data iteratively.
    // But I don't understand how I can do this in Java.
    Thanks in Advance,
    Shailendra
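
    A rough sketch of option 1 above, assuming a servlet environment (all class, attribute, and table names here are hypothetical): run the query once, cache the rows on the server side in the HttpSession, and hand the client one page per request.

    import java.sql.*;
    import java.util.*;
    import javax.servlet.http.HttpSession;

    public class SearchCache {
        private static final int PAGE_SIZE = 100;

        // First request: execute the query, read everything into a list, and cache it.
        public static void loadIntoSession(Connection conn, String sql, HttpSession session)
                throws SQLException {
            List<Object[]> rows = new ArrayList<Object[]>();
            Statement stmt = conn.createStatement();
            stmt.setFetchSize(1000);                      // fewer round trips while filling the cache
            ResultSet rs = stmt.executeQuery(sql);
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                Object[] row = new Object[cols];
                for (int i = 0; i < cols; i++) {
                    row[i] = rs.getObject(i + 1);
                }
                rows.add(row);
            }
            rs.close();
            stmt.close();
            session.setAttribute("searchResult", rows);
        }

        // Later requests: return page N (0-based) from the cached list, with no database access.
        public static List<Object[]> getPage(HttpSession session, int page) {
            List<Object[]> rows = (List<Object[]>) session.getAttribute("searchResult");
            int from = Math.min(page * PAGE_SIZE, rows.size());
            int to = Math.min(from + PAGE_SIZE, rows.size());
            return rows.subList(from, to);
        }
    }

    Note the memory trade-off: with 2 lakh rows the cached list itself can be large, so this fits option 1 only if the server has the heap to hold one search result per session.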

  • Problem in loading jsp page

    Hi,
    I'm facing a problem displaying a JSP page. I'm showing records from the database on the page, and I want to display only 20 records at a time. I have provided navigation for first, previous, next, last, etc.
    Currently I fetch all the records at once and display them at once, but show only 20 records at a time, hiding the rest.
    This works fine for a small number of records, but for more records, say 2000-3000, it gets quite slow, as expected.
    I don't want to fire the query again.
    Is there any other way possible using this technique?
    It's very urgent, so kindly help.
    Thanks.

    Just get a scrollable ResultSet with the row amount for one page, e.g.:

    /**
     * Returns a result set for the given query that is read-only and scrollable
     * (the cursor can move forward and backward). The default fetch size is used and
     * can be changed at any time by calling resultSet.setFetchSize(newFetchSize).
     * <p>
     * Note: be sure that the JDBC driver in use supports scrollable result sets via
     * <code>DatabaseMetaData.supportsResultSetType(ResultSet.TYPE_SCROLL_SENSITIVE)</code>.
     *
     * @param select SQL select query.
     * @param fetchSize Number of rows to prefetch.
     * @return Scrollable, read-only result set.
     */
    private ResultSet getScrollableResultSet(String select, int fetchSize) {
        ResultSet result = null;
        if (select != null) {
            try {
                // TYPE_SCROLL_SENSITIVE can throw SQLException "Unsupported syntax for refreshRow()"
                // with the current ORACLE driver (8.1.7.0.0).
                // Statement stmt = createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY);
                Statement stmt = createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
                int maxRows = stmt.getMaxRows();
                if (fetchSize < 0 || (maxRows > 0 && fetchSize > maxRows)) {
                    fetchSize = 0; // ignore the fetch size (getMaxRows() == 0 means "no limit")
                }
                stmt.setFetchSize(fetchSize);
                long t = System.currentTimeMillis();
                result = stmt.executeQuery(select);
                logger.debug(select + " (" + (System.currentTimeMillis() - t) + ")");
                if (result == null) {
                    stmt.close();
                }
            } catch (SQLException sqle) {
                logger.error("Error getting scrollable ResultSet for '" + select + "'.", sqle);
                throw new BavException(sqle);
            }
        }
        return result;
    } // getScrollableResultSet()

    Fetch the next/previous page of records via the ResultSet's next()/previous() methods. But beware of SQL statements that use an ORDER BY or GROUP BY, which force the database to process all rows anyway.
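
    A short usage sketch (table and column names are illustrative): position directly on the first row of a page with absolute() and read just that page.

    ResultSet rs = getScrollableResultSet("SELECT id, name FROM customers", 20);
    int page = 3, pageSize = 20;                         // show the fourth page of 20 rows
    if (rs.absolute(page * pageSize + 1)) {              // absolute() is 1-based
        int shown = 0;
        do {
            System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            shown++;
        } while (shown < pageSize && rs.next());
    }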

  • Is there any row limit in JDBC?

    Hi all,
    I have a DB table that I use for logging in my application, and as you know a log table can have many rows.
    In my application an administrator can see the log.
    So my question is: is there any limit to the number of rows returned from a query?
    I am afraid that in the future my program will throw an OutOfMemoryError or some other error.
    Is there any way to get the best performance in such a situation?
    Thanks.
    Omar Dawod.

    > I can start, or give me sample codes?
    Well, there are many threads in this forum addressing the same issue; if you are still not satisfied, use Google with the right keywords...
    http://onesearch.sun.com/search/onesearch/index.jsp?qt=pagination&qp_name=null&subCat=siteforumid%3Ajava48&site=dev&dftab=siteforumid%3Ajava48&chooseCat=javaall&col=developer-forums
    http://onesearch.sun.com/search/onesearch/index.jsp?qt=pagination&qp_name=null&subCat=siteforumid%3Ajava45&site=dev&dftab=siteforumid%3Ajava45&chooseCat=javaall&col=developer-forums
    http://www.google.co.in/search?hl=en&q=pagination+jdbc&meta=
    > About the "pagination": it would be very nice to include pagination, but does JDBC offer this to me?
    It is not a ready-made feature; you design it as suits your application, because it is highly application (and database) specific.
    Just to quote an example, say
    select COUNT(*) from TableName
    gives you the total number of records. In a database like Oracle you can restructure the query along these lines (note that a bare "rownum >= LIMIT1" predicate returns no rows for LIMIT1 > 1, so the usual pattern wraps the ordered query in a subquery):
    SELECT * FROM (SELECT t.*, ROWNUM rn FROM (SELECT * FROM TableName ORDER BY some_key) t WHERE ROWNUM <= LIMIT2) WHERE rn > LIMIT1
    where LIMIT1 < LIMIT2 <= COUNT(*). Fix the page size (LIMIT2 - LIMIT1) depending on your system parameters, and you can design a bean that does this task for you.
    Different databases offer similar functionality in their own way:
    MySQL --> LIMIT & OFFSET clauses
    SQL Server 2005 --> ROW_NUMBER()
    and so on...
    Other than this approach there are a few other methods by which one can achieve it.
    Please go through the link below:
    http://www.devx.com/Java/Article/21383/1763
    > Any way to limit the rows in JDBC?
    You can do it to a certain extent using the ResultSet.setFetchSize(int rows) method, as said by my fellow forum mate cotton.m.
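
    A rough sketch (hypothetical table and column names) of driving the Oracle-style pagination above from JDBC, with bind variables for the page boundaries:

    int pageSize = 50;
    int page = 3;                                        // 0-based page index
    PreparedStatement ps = conn.prepareStatement(
            "SELECT * FROM (" +
            "  SELECT t.*, ROWNUM rn FROM (SELECT * FROM log_table ORDER BY log_id) t" +
            "  WHERE ROWNUM <= ?" +
            ") WHERE rn > ?");
    ps.setInt(1, (page + 1) * pageSize);                 // upper bound (LIMIT2)
    ps.setInt(2, page * pageSize);                       // lower bound (LIMIT1)
    ResultSet rs = ps.executeQuery();
    while (rs.next()) {
        // render one row of the requested page
    }
    rs.close();
    ps.close();

    Only one page of rows ever crosses the network, which also addresses the OutOfMemory concern in the original question.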

  • Reg Increasing the EJB Response Time

    We are facing the following performance problems in our application:
    1. Response time from the EJB (stateless session bean) to the client (Swing).
    2. Time taken to loop through the ResultSet.
    For 5000 records, our query takes 0.2 seconds, but looping through the ResultSet takes around 16 seconds, and the response time for transferring 160 KB (object data) from the EJB to the client is around 30 seconds.
    We have achieved some improvement in the ResultSet looping time by setting the fetch size of the ResultSet to 250.
    Can anyone suggest:
    1. Is there any way to increase the data transfer time (response time) from the EJB to the client?
    2. What is the ideal value for ResultSet.setFetchSize()? (The number of records we fetch varies from 1 to 300,000.)

    > 1. Response time from EJB (Stateless Session Bean) to Client (Swing).
    Consider non-EJB options. They might prove more efficient in your case.
    > 2. Time taken for looping through the Result Set.
    Try to design your code so that it does not require all the records at a time.
    > For 5000 records, our query is taking 0.2 secs but looping through the ResultSet is taking around 16 seconds, and the response time for transferring 160 KB (object data) from EJB to Client is taking around 30 secs. We have achieved some improvement in ResultSet looping time by setting the setFetchSize of the ResultSet to 250.
    There is a limit to what you can achieve with that.
    > 1. Is there any way to increase the Data Transfer time (Response time) from EJB to Client?
    I suppose you would want to reduce the response time.
    > 2. What is the ideal value for setting ResultSet.setFetchSize()? (No. of records we fetch varies from 1 to 300,000.)
    *shrug*
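
    There is no universal "ideal" fetch size; it is usually picked empirically. A minimal, illustrative sketch (table name and candidate sizes are assumptions) for measuring how the fetch size affects the time spent in the rs.next() loop:

    int[] candidates = {10, 50, 250, 1000};
    for (int size : candidates) {
        Statement stmt = conn.createStatement();
        stmt.setFetchSize(size);                         // hint: rows per network round trip
        long start = System.currentTimeMillis();
        ResultSet rs = stmt.executeQuery("SELECT * FROM orders");
        int rows = 0;
        while (rs.next()) {
            rows++;                                      // touch every row, no per-row work
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("fetchSize=" + size + ": " + rows + " rows in " + elapsed + " ms");
        rs.close();
        stmt.close();
    }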

  • ResultSet processed at the DB server or App Server???

    Well,
    If I have a ResultSet object that has been fetched with a couple of records, where do the actual records stay?
    Are these records fetched into your app server, or do they stay with the DB? If the latter is true, does the app server then need to fetch each and every record from the DB when I call rs.next()? And if the former is true, how do you explain that ResultSets and other related objects are scarce DB resources?
    Thanks for your time.
    fun_one

    A ResultSet is a Java object, so it resides on the application server. A ResultSet is associated with a database cursor, which resides on the database server.
    When you open a cursor, i.e. execute a query, the database figures out which rows match the query. The database server builds a data structure of some sort in its memory, containing the selected data. What that data structure is... cough... don't ask me, I don't know. It must be fairly significant to allow for transaction isolation, sorting, joins, ... If you just do "select * from foo" without a "where" clause, the db server may get by with a simpler data structure.
    The database then sends the first, say, 10 rows to the application server. The db server also says, "here's the data on the newly opened cursor, and let's call this cursor #22."
    After the application server has looped 10 times in while(res.next()), res.next() says to the db server, "dude, I have this cursor, #22, send me more data on it." The db server sends the next batch of 10 rows. This repeats until all rows are processed, or you close the ResultSet (aka close the cursor).
    Closing the ResultSet tells the database server that it can release the data structure that holds the stuff in the cursor. If you don't close the cursor, the data structure needs to stay there, reserving memory, in case you rewind it and start reading it over.
    So, a ResultSet + cursor take space on both the application server and at the db server.
    The number of rows that are fetched at a time can be adjusted; see setFetchSize(). It's a tradeoff between the number of times a round trip has to be made, vs. the memory it takes to keep the 10 or whatever rows in memory before res.next() gets to them.
    All of this depends on how the db server and the JDBC driver are implemented, but I'd guess the above is a pretty typical way of doing it.
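
    A minimal sketch of that lifecycle in code (the query and the per-row work are illustrative): the query opens a cursor on the database server, each batch of rows crosses the network as rs.next() needs it, and closing the ResultSet lets the server release the cursor's resources.

    Statement stmt = conn.createStatement();
    stmt.setFetchSize(10);                               // ask the driver for ~10 rows per round trip
    ResultSet rs = stmt.executeQuery("SELECT * FROM foo");
    try {
        while (rs.next()) {                              // may trigger a fetch of the next batch
            System.out.println(rs.getString(1));         // per-row work happens on the app server
        }
    } finally {
        rs.close();                                      // frees the cursor on the db server
        stmt.close();
    }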

  • ResultSet merge

    Does anyone know if there is a way of creating one result set from two?
    I need to write a query to return results from two databases that can not talk to each other (!). The results are only seen after some java fantasticness has turned them into an excel spreadsheet using POI. I'm using a class I wrote that takes the ResultSet data and turns this into an excel spreadsheet. As I already have this class, I'd rather use it than write something new.
    Basically however I look at it I get two ResultSet objects. I have three options - either find a way to merge the ResultSets (which I like the sound of, but can find no easy method to do this), use one of the ResultSets to write the sheet (leaving gaps) and fill in the gaps with the data from the other ResultSet, or write lots of new stuff that does this in another way.
    Incidentally, whilst looking through the API (java 1.3, so depressing to be so backwards!) I cannot find an implementing class of the ResultSet interface. Obviously one must exist... but where is it and what is it called (as while I'm happy to consider extending this as an option, implementing all the methods of the interface sounds like a time-consuming struggle and a tremendous bore to boot!)??

    The dbs are of two different types (Sybase and Oracle) and are both huge.
    Also, the db team is based mainly in Pune, India, so while they are very good at what they do, we are separated by their general belief that they understand what I'm saying. As we're miles from them (London), I can't tell them what I want them to do with any degree of confidence in their accuracy at following my instructions. Basically I'm pre-empting human errors, and I also have no idea how you would start connecting the two different dbs (if I cared about that sort of thing, I'd be a DBA).
    I don't know what your experience of dealing with dedicated DBAs is, but mine is that if I have to rewrite this part of the program (so far a month and a half has been spent on this part), it will still be quicker than getting any major change done to an existing and robust db, let alone two. The number of other systems that would have to be altered (albeit in a minor way) would be significant too. Of course, not being a DBA, I'm probably missing a very quick and easy way of effecting this change without these problems, but I generally find it easier to stick to what I know.
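
    A rough sketch of the simplest in-memory merge (all names are hypothetical; it assumes the two queries return the same columns in the same order): copy the rows of both ResultSets into one List and pass that to the existing spreadsheet-writing code instead of a ResultSet.

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public final class ResultSetMerger {

        // Appends every row of rs to target as an Object[] (one element per column).
        public static void appendRows(List<Object[]> target, ResultSet rs) throws SQLException {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                Object[] row = new Object[cols];
                for (int i = 0; i < cols; i++) {
                    row[i] = rs.getObject(i + 1);
                }
                target.add(row);
            }
        }

        // Usage: merge the Sybase and Oracle results, then feed the list to the POI writer.
        public static List<Object[]> merge(ResultSet sybaseRs, ResultSet oracleRs) throws SQLException {
            List<Object[]> merged = new ArrayList<Object[]>();
            appendRows(merged, sybaseRs);
            appendRows(merged, oracleRs);
            return merged;
        }
    }

    The catch is memory: with two huge tables the merged list has to fit in the heap, so this only works if the queries themselves restrict the rows.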

  • Resultset - retrieves all rows or only one at a time

    Hey guys,
    I am using an Oracle DB. Let's say the query I run returns about 1000 records.
    When I call:
    prepareStmt(cstmt);
    ResultSet rs = cstmt.executeQuery();
    do I get all the records, or do I get them one by one as I call
    rs.next()?
    Is this DB specific?
    This is a simple question, but I've been working with Oracle and Java for a while and for some reason this never came up.
    Thank you.

    The driver implementation is free to retrieve as many or as few rows at a time from the database as it wants. You can suggest to the driver how many rows at a time it should retrieve with setFetchSize(); the driver is free to ignore your suggestion (but many drivers honor it). The default fetch size varies wildly from driver to driver, from 1 row at a time to all rows at once. For the Oracle thin driver, the default is 10, which is usually too small to give the best performance (I use between 100 and 1,000, usually 250).
    Note that when you're working with a very large ResultSet, a driver that does "all rows at once" can cause you to run out of memory. Also, some drivers will cache every single row when using a scrollable ResultSet (and again run you out of memory) but not do so with a forward-only ResultSet (Oracle has this issue).

  • Trying to get the last row from a resultset

    Hi,
    I'm trying to run a query against PostgreSQL and have it return the last updated value (the last row).
    My prepared statement is returning the correct results, but I'm having a problem getting the latest value.
    I'm using a combo box to drive a text field, loading the last entered values depending on which item in the combo box is selected.
    I've tried a variety of things and most seem to return the first row, not showing the updated values.
    Or, if it does work, it takes too long to load and I get an error.
    Here is the working code:
    Object m = machCBX.getSelectedItem();
    try {
        PreparedStatement last = conn.prepareStatement(
                "SELECT part, count FROM production WHERE machine = ?",
                ResultSet.TYPE_SCROLL_INSENSITIVE,   // tried both INSENSITIVE and SENSITIVE
                ResultSet.CONCUR_READ_ONLY);
        last.setString(1, String.valueOf(m));
        rs = last.executeQuery();
        if (rs.isAfterLast() == false) {
            rs.afterLast();
            while (rs.previous()) {
                String p = rs.getString("part");
                int c = rs.getInt("count");
                partJTX.setText(p);
                countJTX.setText(String.valueOf(c));
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
    This grabs values, but they are not the last entered values.
    Now if I try to use rs.last() it returns the value I'm looking for, but it takes too long and I get:
    Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
    I also know that using rs.last() isn't the best way to go.
    I'm just wondering if there is another way, other than getting into vectors and row counts? Or am I better off going with the latter?
    thanks
    -PD

    OK, you've got a major misunderstanding...
    The relational database model is built on the storage of sets - UNORDERED sets. In other words, when you hand a database a SELECT statement without an ORDER BY clause, the database is free to return the results in any order.
    Now it so happens that most databases will return data retrieved by an unordered SELECT, at least for a while, in the same order it was inserted, especially if no UPDATE or DELETE activity has occurred and no database maintenance has occurred. However, eventually most tables see some operation that creates a "space" in the underlying storage, or causes a row to expand and have to be moved or extended, or something. Then the database will start returning unordered results in a different order. If you (or other people) never ever UPDATE or DELETE a table, then on some databases the data might well come out in insertion order for a very long time; given human nature and the way projects tend to work, relying on that is a sucker's bet, IMHO.
    In other words, if you want the "most recent" something, you need to store a timestamp with your data. (With some databases you might be able to take advantage of some non-standard feature to get "last updates" or "row change timestamps", but I know of no such feature for Postgres.)
    While this won't solve your major problem above, your issue with rs.last() is probably occurring because Postgres by default will prefetch your entire ResultSet. Use Statement.setFetchSize() to change that (PreparedStatement inherits the method, of course).
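
    A minimal sketch of the timestamp approach (the created_at column is hypothetical; the rest mirrors the code above): let the database order by the insert timestamp and return only one row, instead of scrolling through the whole result set on the client.

    PreparedStatement last = conn.prepareStatement(
            "SELECT part, count FROM production " +
            "WHERE machine = ? ORDER BY created_at DESC LIMIT 1");
    last.setString(1, String.valueOf(m));
    ResultSet rs = last.executeQuery();
    if (rs.next()) {                                     // at most one row comes back
        partJTX.setText(rs.getString("part"));
        countJTX.setText(String.valueOf(rs.getInt("count")));
    }
    rs.close();
    last.close();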

  • Heap space error while creating XML document from Resultset

    I am getting a heap space error while creating an XML document from a ResultSet.
    It works fine for a small result set, but when the result set has more than 25,000 rows I get the heap space error.
    I am already using -Xms32m -Xmx1024m.
    Is there a way to write the XML file directly from the ResultSet, instead of creating the whole document first and then writing it to the file? Code examples please?
    Here is my code:
    stmt = conn.prepareStatement(sql);
    result = stmt.executeQuery();
    result.setFetchSize(999);
    Document doc = JDBCUtil.toDocument(result, Application.BANK_ID, interfaceType, Application.VERSION);
    JDBCUtil.write(doc, fileName);

    public static Document toDocument(ResultSet rs, String bankId, String interfaceFileType, String version)
            throws ParserConfigurationException, SQLException {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder builder = factory.newDocumentBuilder();
        Document doc = builder.newDocument();
        Element results = doc.createElement("sims");
        results.setAttribute("bank", bankId);
        results.setAttribute("record_type", "HEADER");
        results.setAttribute("file_type", interfaceFileType);
        results.setAttribute("version", version);
        doc.appendChild(results);
        ResultSetMetaData rsmd = rs.getMetaData();
        int colCount = rsmd.getColumnCount();
        String columnName = "";
        Object value;
        while (rs.next()) {
            Element row = doc.createElement("rec");
            results.appendChild(row);
            for (int i = 1; i <= colCount; i++) {
                columnName = rsmd.getColumnLabel(i);
                value = rs.getObject(i);
                Element node = doc.createElement(columnName);
                if (value != null)
                    node.appendChild(doc.createTextNode(value.toString()));
                else
                    node.appendChild(doc.createTextNode(""));
                row.appendChild(node);
            }
        }
        return doc;
    }

    public static void write(Document document, String filename) {
        //long start = System.currentTimeMillis();
        // let's write to a file
        OutputFormat format = new OutputFormat(document); // Serialize DOM
        format.setIndent(2);
        format.setLineSeparator(System.getProperty("line.separator"));
        format.setLineWidth(80);
        try {
            FileWriter writer = new FileWriter(filename);
            BufferedWriter buf = new BufferedWriter(writer);
            XMLSerializer fileSerial = new XMLSerializer(buf, format);
            fileSerial.asDOMSerializer(); // as a DOM serializer
            fileSerial.serialize(document);
            buf.close();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
        //long end = System.currentTimeMillis();
        //System.err.println("W3C File write time :" + (end - start) + "  " + filename);
    }

    You can increase your heap size. Try setting this as an environment variable:
    variable: JAVA_OPTS
    value: -Xms512m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m
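
    To answer the original question about writing directly from the ResultSet, a rough sketch using the streaming javax.xml.stream API (the method name is hypothetical; element and attribute names mirror the DOM code above; error handling omitted): each row is written to the file as it is read, so no DOM for 25,000+ rows is ever held in memory.

    import java.io.FileWriter;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    public static void streamToXml(ResultSet rs, String fileName, String bankId,
                                   String interfaceFileType, String version) throws Exception {
        FileWriter out = new FileWriter(fileName);
        XMLStreamWriter xml = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
        xml.writeStartDocument();
        xml.writeStartElement("sims");
        xml.writeAttribute("bank", bankId);
        xml.writeAttribute("record_type", "HEADER");
        xml.writeAttribute("file_type", interfaceFileType);
        xml.writeAttribute("version", version);
        ResultSetMetaData rsmd = rs.getMetaData();
        int colCount = rsmd.getColumnCount();
        while (rs.next()) {                              // each row is serialized and then discarded
            xml.writeStartElement("rec");
            for (int i = 1; i <= colCount; i++) {
                xml.writeStartElement(rsmd.getColumnLabel(i));
                Object value = rs.getObject(i);
                xml.writeCharacters(value != null ? value.toString() : "");
                xml.writeEndElement();
            }
            xml.writeEndElement();                       // </rec>
        }
        xml.writeEndElement();                           // </sims>
        xml.writeEndDocument();
        xml.close();
        out.close();
    }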

  • Statement.setFetchSize is not working properly

    I am using setFetchSize to limit the rows returned by a query.
    Here is the code without setFetchSize:
    Statement stmt = dao.createStatement();
    ResultSet rset = stmt.executeQuery("SELECT * FROM temp");
    while (rset.next())
        System.out.println(rset.getString(1));
    And here is the code using setFetchSize:
    Statement stmt = dao.createStatement();
    stmt.setFetchSize(20);
    ResultSet rset = stmt.executeQuery("SELECT * FROM temp");
    while (rset.next())
        System.out.println(rset.getString(1));
    Both versions return the same result: temp has 50 records, and all the records are returned in both cases.
    Edited by: seeking_solution on Jul 22, 2009 5:25 AM

    seeking_solution wrote:
    Actually, we have a process in which we process records from a table and send an email for each record.
    This table can have more than 250,000 records. We create a collection of VOs after getting the records, and with so many VOs in memory the machine goes out of memory.
    I want to fetch some number of records at a time, process them and send the emails, then fetch the next batch and process those.
    Option 1
    Process them in the database (probably much faster anyway). Put an entry in a task table for each one processed, using a sequential key of some sort (date or integer) and a marker indicating whether notification has occurred. Your Java app retrieves rows that do not have the marker set and can limit the number, which ordering via the sequential key allows. It sends and marks each as sent, and repeats until no rows are left.
    Option 2
    Same as option 1, but you do the processing in Java. You still create the task table in the database and send the email just as in option 1.
    The advantage of both of these is that you have an ongoing record of completion (the row itself) and notification (which can include a timestamp).
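
    A rough sketch of option 2 (all table, column, and helper names are hypothetical; ROWNUM is Oracle syntax, other databases have LIMIT/TOP equivalents): process the work in fixed-size chunks and mark each row as sent, so only one chunk of value objects is in memory at a time. Note that setFetchSize() only hints how many rows cross the network per round trip; it never limits the result - that is what Statement.setMaxRows(int) does.

    int chunk = 500;
    while (true) {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, email FROM notification_task WHERE sent = 'N' AND ROWNUM <= ?");
        ps.setInt(1, chunk);
        ResultSet rs = ps.executeQuery();
        int processed = 0;
        while (rs.next()) {
            sendEmail(rs.getString("email"));            // hypothetical helper
            markAsSent(conn, rs.getLong("id"));          // hypothetical: UPDATE ... SET sent = 'Y'
            processed++;
        }
        rs.close();
        ps.close();
        if (processed == 0) {
            break;                                       // nothing left to notify
        }
    }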

  • JTable and ResultSet TableModel with big resultset

    Hi, I have a question about JTable and a ResultSet TableModel.
    I have to develop a Swing JTable application that gets its data from a ResultSetTableModel, and the user can update the JTable data.
    The problem is the following:
    the JTable has to contain the whole data of the source database table. Currently I have defined a TYPE_SCROLL_SENSITIVE & CONCUR_UPDATABLE statement.
    The problem is that when I execute the query, the whole ResultSet is "downloaded" on the client-side application (my JTable), and with big result sets I could receive an "out of memory" error...
    I have investigated the possibility of loading only a small subset of the result set on the client side, but with no luck. In the mailing lists I see that the only way to load the result set incrementally is to define a forward-only result set with autocommit off and use setFetchSize(...). But this solution doesn't solve my problem, because if the user scrolls the entire table, the whole result set will be downloaded anyway...
    In my opinion, there is only one solution:
    - create a small JTable "cache structure" and update the structure with "remote calls" to the server...
    In other words, I would have to define a "servlet environment" on the server side that queries the database, creates the result set, and gives the JTable only the data subsets it needs... (Alternatively I could build an RMI client/server distributed application...)
    This is my solution; can somebody help me?
    Are there other solutions for my problem?
    Thanks in advance,
    Stefano

    The database table currently has about 80,000 rows, but next year it will be 200,000, and so on...
    I know that Excel has this limit, but my JTable has to display more data than a simple Excel worksheet.
    Let me explain my solution in more detail:
    with a distributed TableModel, the whole TableModel data lives on the server side and not on the client (JTable).
    The local JTable TableModel gets its values from a local, limited structure (1000 rows, for example), and when the user scrolls up and down the JTable, the TableModel updates this structure...
    For example: initially the local JTable structure contains rows 0 to 1000;
    the user scrolls down, and when cell 800 (for example) has to be displayed, the method
    getValueAt(800, ...)
    is called.
    This method will update the table structure. Now, for example, the structure will contain data from row 500 to row 1500 (the data from 0 to 499 is dropped).
    In this way the local table model size is independent of the real database table size...
    I hope my solution is clearer now...
    Under these conditions, the only solutions that can work have to implement a local TableModel with a limited size...
    Another solution, without servlets or RMI, that I have found is the following:
    update the local limited TableModel structure by querying the database server with select ... limit ... offset.
    But select ... limit ... offset is very dangerous when the offset is high, because the database server has to do a sequential scan of all the previous records...
    With the servlet (or RMI) solution, instead, the entire result set is on the server and I only have to request the data from the current result set from row N to row N+1000, without new queries...
    Thanks
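
    A minimal sketch of the sliding-window TableModel idea described above (all names are hypothetical; the actual loading could be a paged query, a servlet call, or RMI): the model keeps only windowSize rows in memory and reloads the window when getValueAt() asks for a row outside it.

    import java.util.List;
    import javax.swing.table.AbstractTableModel;

    public class WindowedTableModel extends AbstractTableModel {

        public interface RowLoader {
            List<Object[]> loadRows(int offset, int count);  // e.g. servlet/RMI call or paged query
            int totalRowCount();
            String[] columnNames();
        }

        private final RowLoader loader;
        private final int windowSize;
        private int windowStart = 0;
        private List<Object[]> window;

        public WindowedTableModel(RowLoader loader, int windowSize) {
            this.loader = loader;
            this.windowSize = windowSize;
            this.window = loader.loadRows(0, windowSize);
        }

        public int getRowCount() { return loader.totalRowCount(); }

        public int getColumnCount() { return loader.columnNames().length; }

        public String getColumnName(int col) { return loader.columnNames()[col]; }

        public Object getValueAt(int row, int col) {
            if (row < windowStart || row >= windowStart + window.size()) {
                // Centre the new window on the requested row and reload it from the server.
                windowStart = Math.max(0, row - windowSize / 2);
                window = loader.loadRows(windowStart, windowSize);
            }
            return window.get(row - windowStart)[col];
        }
    }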

  • No. Of Rows in a resultset

    Is there a method on ResultSet which one could use to get the number of rows in a result set? I currently get the count by looping through the result set and summing the number of times the loop runs.
    Cheers,
    Havasen

    I don't know if it's driver specific - who can say what any driver does inside?
    But I think the most common approach is for the driver to do the same thing I and others formerly did in C programs with embedded SQL:
    Let the DBMS create a cursor for the query.
    Open this cursor - so the DBMS creates a temporary result set internally.
    Position that cursor inside this result set (inside the DBMS).
    Fetch the actual row.
    After finishing your logic, close the cursor, so the DBMS cleans up its temporary result set.
    Now, what JDBC presents to you as a scrollable result set should be nothing more than an opened cursor, ready to be positioned on a single row to fetch it, ... and at the end - when you call ResultSet.close() - the cursor is closed.
    So each row can stay inside the DBMS until you cause JDBC to fetch it. I think this happens when you position on it, and JDBC does some caching and retrieves a couple of rows together - there are methods to control this to a certain degree, like Statement.setFetchSize().
    But the DBMS must hold the cursor and maintain its temporary result set.
    So it does matter whether you cause JDBC to retrieve a complete result set - and to position on the last row, it has to retrieve them all, I fear.
    As I said - I can't promise that your driver works this way.
    If I were to write a JDBC driver, I would probably try it this way. But I haven't, and why should I?
    That was the explanation.
    And to the original question:
    For counting the records only, I would never suggest the last() trick, but a simple query:
    "SELECT COUNT(*) FROM table".
