How to handle large result sets?

Hi All,
I have a large result set that must be displayed to the user through JSPs. The result set is too big to show all the records at once, so I want to display the results page by page, say 25 per page. If I fetch the data from the database for every page, there will be many database calls, which is not advisable. Alternatively, I could cache the data in a CachedRowSet to reduce database calls, but then all the data has to be held in memory, which is not a good solution for very large data sets. Can anybody suggest a solution to this problem?

The best thing for you to do is to implement paging logic in conjunction with a scrollable result set (JDBC 2.0+).
The logic would go like this, assuming 30 rows per page:
- keep track of which page the user is on (e.g. page 3, counting from zero)
- issue the full SQL query
- scroll through only the rows of the current page (e.g. rows 91-120)
- copy the page's rows into value objects
- close the result set, statement, and connection
In the above example, you would scroll to row 91 using rs.absolute(91).
The efficiency comes from using a scrollable result set: only the rows you actually scroll through are pulled out of the database. I did some simple testing, and with my data the scrollable result set was roughly 10x faster.
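For illustration, here is a rough Java sketch of that paging logic; the table, column names, and DataSource are placeholders rather than anything from the original post:

import java.sql.*;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.sql.DataSource;

public final class ResultPager {
    private static final int PAGE_SIZE = 30;

    // Returns one page of rows as simple value objects (maps); pageIndex is zero-based.
    public static List<Map<String, Object>> fetchPage(DataSource ds, int pageIndex)
            throws SQLException {
        List<Map<String, Object>> page = new ArrayList<>();
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE,
                                                ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = st.executeQuery("SELECT id, name FROM items ORDER BY id")) {
            // Jump straight to the first row of the requested page (JDBC rows are 1-based).
            if (rs.absolute(pageIndex * PAGE_SIZE + 1)) {
                do {
                    Map<String, Object> row = new HashMap<>();
                    row.put("id", rs.getLong("id"));
                    row.put("name", rs.getString("name"));
                    page.add(row);
                } while (page.size() < PAGE_SIZE && rs.next());
            }
        } // result set, statement, and connection are all closed here
        return page;
    }
}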
Good luck!

Similar Messages

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle large result set of a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter: if I don't specify it, the query result returns only 100 records by default, and if I do specify it, the result is limited to that number.
    Is there any way to get around this row count restriction? I don't want any limit on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data...in a grid, a chart, or a direct-connected link to the brain. 
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on the data of relevance, rather than forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model so that you only deal with one time "slice" per request (e.g. an hour, a day, etc.) and provide a scrolling window. This is commonly how large datasets are handled in applications.
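    As a rough illustration of such time-sliced paging, here is a minimal JDBC sketch; the table and column names are assumptions for the example only:

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    public final class TimeSlicePager {
        // Fetches one time "slice" of rows, e.g. one hour or one day at a time.
        public static List<String> fetchSlice(Connection con, Timestamp from, Timestamp to)
                throws SQLException {
            List<String> rows = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT event_time, payload FROM events "
                    + "WHERE event_time >= ? AND event_time < ? ORDER BY event_time")) {
                ps.setTimestamp(1, from);
                ps.setTimestamp(2, to);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rows.add(rs.getTimestamp("event_time") + " " + rs.getString("payload"));
                    }
                }
            }
            return rows;
        }
    }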
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accommodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well-equipped server;
    unfortunately, my development machine is not as powerful, so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo try to release objects that are no longer needed and allow them to
    be garbage collected.
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GCs, namely the parallel GC.
    Regards
    Marius
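    (For reference, a command line combining the flags discussed above might look roughly like this; the heap size and main class are placeholders.)

    java -Xmx8g -XX:+UseParallelGC -XX:-UseGCOverheadLimit com.example.Main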

  • Web Services with Large Result Sets

    Hi,
    We have an application in which a call to a web service could potentially yield a large result set. For the sake of argument, let's say that we cannot limit the result set size, i.e., by narrowing criteria or some other means.
    Have any of you handled paging when using web services? If so, can you please share your experiences, considering that web services are stateless? Any patterns that have worked? I am aware of the Value List pattern but am looking for previous experiences here.
    Thanks

    Joseph Weinstein wrote:
    Aswin Dinakar wrote:
    I ran the test again without ResultSet.FETCH_FORWARD, and it still gave
    me the same OutOfMemoryError.
    The problem seems similar to what Slava has described. I am parsing the
    result set in memory, storing the results in a hash map, and then
    emptying the post-processed results into a table.
    The hash map turns out to be very big, and the JVM throws an
    OutOfMemoryError.
    I am not sure how I can turn this around. I could partition my query so
    that it returns smaller chunks or "blocks" of data each time (say a page
    or two of data), and then store a page of data in the table. The problem
    with this approach is that it is not exactly transactional; recovery
    would be very difficult.
    I could do this in a try/catch block page by page, and the catch could
    then delete the rows that got committed. The question then becomes: what
    if that transaction fails?
    It sounds like you're committing the 'cardinal performance sin of DBMS
    processing': shovelling lots of raw data out of the DBMS, processing it
    in some small way, and sending it (or some of it) back. You should
    instead do this processing in a stored procedure or procedures, so the
    data is manipulated where it is. The DBMS was written from the ground up
    to be a fast, efficient set-based processor. Using clever SQL will pay
    off greatly. Build your saw-mills where the trees are.
    Joe
    Yes, we did think of stored procedures. Like I mentioned yesterday, some
    of the post-processing depends on Unicode and specific character sets.
    Java seemed ideally suited to this since it handles Unicode characters
    very well and has all the libraries we can use. Moving this to the DBMS
    would make it proprietary (not that we won't do it if it becomes
    absolutely essential), but that's one of the reasons why the
    post-processing happens in Java. Now that you mention it, stored
    procedures seem the best option.

  • Handling Multiple Result Sets

    Hi
    I have a problem while handling multiple result sets. To fix it I need to upgrade from JDBC 2.0 to 4.0.
    My problem is that I don't know which jars need to be downloaded to upgrade my JDBC version.

    Mallika_23 wrote:
    Are there any more ideas???
    1. Learn how Google works.
    2. Find the drivers.
    3. Read the documentation.
    4. Ask questions here once you have actually read the documentation.
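    For what it's worth, the standard JDBC pattern for walking through multiple result sets from a single call looks roughly like the sketch below; the URL, credentials, and procedure name are placeholders:

    import java.sql.*;

    public final class MultiResultSetExample {
        public static void readAll(String url, String user, String pass) throws SQLException {
            try (Connection con = DriverManager.getConnection(url, user, pass);
                 CallableStatement cs = con.prepareCall("{call multi_result_proc()}")) {
                boolean isResultSet = cs.execute();
                while (true) {
                    if (isResultSet) {
                        try (ResultSet rs = cs.getResultSet()) {
                            while (rs.next()) {
                                // process the current result set row by row
                            }
                        }
                    } else if (cs.getUpdateCount() == -1) {
                        break; // neither a result set nor an update count: no more results
                    }
                    isResultSet = cs.getMoreResults();
                }
            }
        }
    }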

  • How to handle national character set datatypes in oracle?

    Hi
    Can anyone tell me how to handle national character set datatypes in Oracle?
    Thanks in advance

    And for data manipulation, prefix the literal values used in the command with "N".
    The "N" indicates that the string is to be treated as Unicode Text.
    For Example: insert into TableName (ColumnName) values (N'ValueToBeInserted');
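    A JDBC counterpart of the N'...' literal, if it helps, is to bind the value as national character data via PreparedStatement.setNString (JDBC 4.0); the table and column names below are placeholders:

    import java.sql.*;

    public final class NStringInsertExample {
        public static void insertUnicode(Connection con, String value) throws SQLException {
            try (PreparedStatement ps =
                     con.prepareStatement("insert into TableName (ColumnName) values (?)")) {
                ps.setNString(1, value); // sends the value as national character (Unicode) data
                ps.executeUpdate();
            }
        }
    }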

  • How can we generate result set report?

    How can we generate a result set report? I mean, how can the output of one query be the input of another query?

    Hi
    You have to use the APD (Analysis Process Designer) to use the results of one query as the input for other queries.
    Check this link
    http://help.sap.com/saphelp_nw70/helpdata/en/49/7e960481916448b20134d471d36a6b/frameset.htm
    Regards

  • I-bot not emailing when report returns large result set..

    Hi,
    I am trying to set up an i-bot to run daily and email the results to the user. Assuming the report in question is Report_A.
    Report_A returns around 60,000 rows of data without any filter condition. When I set up the i-bot for Report_A (no filter conditions on the report), the i-bot publishes the results to the dashboard but does not deliver them via email. When I introduce a filter in Report_A to reduce the data returned, everything works fine and the email is sent out successfully.
    So
    1) Is there a size limit for i-bots to deliver by email?
    2) Is there a way to increase the limit, if any, so the report can be emailed even when it returns a large result set?
    Please let me know.

    Sorry for the late reply.
    Below is the log file for one of the i-bots. Now I am getting an error message "***kmsgPortalGoRequestHasBeenCancelled: message text not found ***" and the i-bot alert message shows as "Cancelled".
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.551
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.553
    iBotID: /shared/_ibots/common/TM/Claims Report
    ...Trying iBot Get Response Content loop again.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:04.554
    ... Sleeping for 8 seconds.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.642
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.644
    iBotID: /shared/_ibots/common/TM/Claims Report
    ...Trying iBot Get Response Content loop again.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:12.644
    ... Sleeping for 6 seconds.
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:18.730
    [nQSError: 77006] Oracle BI Presentation Server Error: A fatal error occurred while processing the request. The server responded with: ***kmsgPortalGoRequestHasBeenCancelled: message text not found ***
    Error Codes: YLKKAV7S
    Error Codes: AGEGTYVF
    +++ ThreadID: f3c6cb90 : 2010-12-17 23:55:18.734
    iBotID: /shared/_ibots/common/TM/Claims Report
    Exceeded number of request retries.

  • How to save memory when processing large result set

    I need to dump many millions of rows of data into Excel files.
    I query the tables and open Excel to write into.
    The problem is that even though I chopped the result into a hundred files and close Excel completely after 65536 rows, the memory usage keeps going up as the result set is looped over, and at some point it hits the heap size limit.
    How can I release the memory that has been used by the result set?
    Thank you

    mycoffee wrote:
    936517 wrote:
    I think resultSet.close() will do what you want (you shouldn't have to set resultSet = null when you're done with it).
    You can't force the garbage collector to run and reclaim memory; it uses an intelligent algorithm to do so.
    I question why your project is sending millions of records to Excel. Who is going to read a 10,000-page Excel document?
    Instead, I suggest you provide an (intelligent) filter mechanism to allow users to get a subset of the data to send to an Excel document rather than all of it. For example: instead of sending the entire telephone book, have the user search for results based on lastName and/or firstName. That will cut down on the number of records returned. Next, does the user really need all the columns of data in each record? That will cut it down further.
    You can search Google for 'java heap size' to increase the memory for your program. However, your 65536 limit is probably due to Excel's limitation and not your Java program.
    Sorry, I could not explain the need. No, that is not the issue here; I already use the maximum heap size I can. But I can handle it now: I open the files and write directly to them instead of holding the data and dumping it all at once. I save all the overhead, and it works fine even though the result set still consumes almost all the memory.
    Is it possible you are using MySQL? The MySQL JDBC driver has a terrible default setup in that it keeps all the results for the result set in memory! I think some of the latest drivers finally allow you to stream results sensibly, but you have to use the correct options.
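    A minimal sketch of those "correct options" with MySQL Connector/J follows; the URL, credentials, table name, and writeRow() helper are placeholders:

    import java.sql.*;

    public final class StreamingDumpExample {
        public static void dump(String url, String user, String pass) throws SQLException {
            try (Connection con = DriverManager.getConnection(url, user, pass);
                 Statement st = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                    ResultSet.CONCUR_READ_ONLY)) {
                // Integer.MIN_VALUE asks Connector/J to stream rows one at a time
                // instead of buffering the whole result set in memory.
                st.setFetchSize(Integer.MIN_VALUE);
                try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        writeRow(rs); // write each row out immediately; nothing is accumulated
                    }
                }
            }
        }

        private static void writeRow(ResultSet rs) throws SQLException {
            // placeholder: stream the current row to a file
        }
    }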

  • Displaying large result sets in Table View – request for patterns

    When providing a table of results from a large data set from SAP, care needs to be taken not to tax the R/3 database or the R/3 and WAS application servers.  Additionally, in terms of performance, results need to be displayed quickly in order to provide sub-second response times to users.
    This post contains my thoughts on how to do this, based on my finding that the Table UI element cannot send an event to retrieve more data when paging down through data in the table (hopefully a future feature of the Table UI element).
    Approach:
    For data retrieval, we need to have an RFC with search parameters that retrieves a maximum number of records (say 200) and a flag indicating whether 200 results were returned.
    In terms of display, we use a table UI Element, and bind the result set to the table.
    For sorting, when they sort by a column, if we have less than the maximum search results, we sort the result set we already have (no need to go to SAP), but otherwise the RFC also needs to have sort information as parameters so that sorting can take place during the database retrieval.  We sort it during the SQL select so that we stop as soon as we hit 200 records.
    For filtering, again, if less than 200 results, we just filter the results internally, otherwise, we need to go to SAP, and the RFC needs to have this parameterized also.
    If the requirement is that the user must look at more than 200 results, we need to have a button on the screen to fetch the next 200 results.  This implies that the RFC will also need to have a start point to return results from.  Similarly, a previous 200 results button would need to be enabled once they move beyond the initial result set.
    Limitations of this are:
    1.     We need to use a custom RFC function, as BAPIs don't generally provide this type of sorting and limiting of data.
    2.     Functions need to directly access tables in order to do sorting at the database level (to reduce memory consumption).
    3.     It’s not a great interface to add buttons to “Get next/previous set of 200”.
    4.     Obviously, based on where you are getting the data from, it may be better to load the data completely into an internal table in SAP, and do sorting and filtering on this, rather than use the database to do it.
    Does anyone have a proven pattern for doing this or any improvements to the above design?  I’m sure SAP-CRM must have to do this, or did they just go with a BSP view when searching for customers?
    Note – I noticed there is a pattern for search results in some documentation, but it does not exist in the sneak preview edition of Developer Studio.  Has anyone had any exposure to this?
    Update - I'm currently investigating whether we can create a new value node and use a supply function to fill the data. It may be that when we bind this to the table UI element, it will be called incrementally as more data is required, and hence could be a better solution.

    Hi Matt,
    I'm afraid the supply function will not help you here, because it is only called if the node is invalid or gets invalidated again. The number of elements a node contains determines the number of rows the table reports overall. Something quite similar to what you want does already exist in the WD runtime for internal use: as you've surely noticed, only "visibleRowCount" elements are initially transferred to the client, and if you scroll down one or more lines, the following rows are transferred internally on demand. But this doesn't really help you, since:
    1. You don't get this event at all, and
    2. Even if you did get the event, since the number of node elements determines the table's overall row count, the event would never request elements with an index greater than the number of node elements - 1.
    You can mimic the desired behaviour by hiding the table footer and creating your own buttons for pagination and scrolling.
    Assume you have 10 displayed rows and 200 overall rows. What you need in order to implement the desired behaviour is:
    1. A context attribute "maxNumberOfExpectedRows" type int, which you would set to 200.
    2. A context attribute "visibleRowCount" type int, which you would set to 10 and bind to table's visibleRowCount property.
    3. A context attribute "firstVisibleRow" type int, which you would set to 0 and bind to table's firstVisibleRow property.
    4. The actions PageUp, PageDown, RowUp, RowDown, FirstRow and LastRow, which are used for scrolling and the corresponding buttons.
    The action handlers do the following:
    PageUp: firstVisibleRow -= visibleRowCount (must be >=0 of course)
    PageDown: firstVisibleRow += visibleRowCount (first + visible must be < maxNumberOfExpectedRows)
    RowDown/Up: firstVisibleRow++/-- with the same restrictions as in page "mode"
    FirstRow/LastRow is easy, isn't it?
    Since you know which sections of elements have already been "loaded" into the dataSource node, you can fill the necessary sections on demand when the corresponding action is triggered.
    For example, if you initially display elements 0..9 and go to the last row, you load from maxNumberOfExpectedRows (200) - visibleRowCount (10), so you would request entries 190 to 199 from the backend.
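    A plain-Java sketch of that action-handler arithmetic might look like the following; the fields stand in for the context attributes, and loadSectionIfNeeded() is a hypothetical backend fetch, since the Web Dynpro wiring is omitted:

    public final class TablePagingState {
        private final int maxNumberOfExpectedRows = 200;
        private final int visibleRowCount = 10;
        private int firstVisibleRow = 0;

        public void onPageDown() {
            firstVisibleRow = Math.min(firstVisibleRow + visibleRowCount,
                                       maxNumberOfExpectedRows - visibleRowCount);
            loadSectionIfNeeded(firstVisibleRow, visibleRowCount);
        }

        public void onPageUp() {
            firstVisibleRow = Math.max(firstVisibleRow - visibleRowCount, 0);
            loadSectionIfNeeded(firstVisibleRow, visibleRowCount);
        }

        public void onLastRow() {
            firstVisibleRow = maxNumberOfExpectedRows - visibleRowCount; // e.g. 190
            loadSectionIfNeeded(firstVisibleRow, visibleRowCount);
        }

        private void loadSectionIfNeeded(int from, int count) {
            // placeholder: request rows [from, from + count) from the backend if not yet loaded
        }
    }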
    A drawback is that the BAPIs/RFCs still have to be capable of processing such "section selecting".
    Best regards,
    Stefan
    PS: And this is meant as a workaround and does not really replace your pattern request.

  • How to use same RESULT SET for two different events

    Hello friends,
    I need to use the same result set for two different events. How do I do it?
    Here is my code:
    private void jComboBox1ItemStateChanged(java.awt.event.ItemEvent evt) {
        // Requires java.sql.*, javax.swing.ImageIcon and javax.swing.JOptionPane imports.
        try {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            Connection con = DriverManager.getConnection("jdbc:odbc:tourismdatasource", "sa", "");
            String selstate = "select * from tab_places where state=?";
            PreparedStatement ps = con.prepareStatement(selstate);
            ps.setString(1, jComboBox1.getSelectedItem().toString().trim());
            ResultSet rs = ps.executeQuery();
            if (rs.next()) {
                jTextField1.setText(rs.getString("place_ID"));
                jTextField2.setText(rs.getString("place_name"));
                jTextField3.setText(rs.getString("category"));
                byte[] ba = rs.getBytes("image");
                jLabel6.setIcon(new ImageIcon(ba));
                in = true; // 'in' is presumably a boolean field of the form class
            }
            rs.close();
            ps.close();
            con.close();
        } catch (ClassNotFoundException cfe) {
            JOptionPane.showMessageDialog(null, cfe.getMessage());
        } catch (SQLException sqe) {
            JOptionPane.showMessageDialog(null, sqe.getMessage());
        }
    }
    Now I need the same ResultSet (rs) in another event (jButton6ActionPerformed):
    private void jButton6ActionPerformed(java.awt.event.ActionEvent evt) {
        // TODO add your handling code here:
    }
    How do I get that result set? Help me out.

    One post wasn't enough?
    {color:0000ff}http://forum.java.sun.com/thread.jspa?threadID=5246634{color}
    db

  • How to handle large heap requirement

    Hi,
    Our application requires a large amount of heap memory to load data into memory for further processing.
    The application is load balanced, and we want to share the heap across all servers so that one server can use another server's heap.
    Server1 and Server2 have 8 GB of RAM and Server3 has 16 GB of RAM.
    If a request comes to Server1 and it requires more heap memory to load data, can Server1 use Server3's heap memory in this scenario?
    Is there any mechanism/product that allows us to share the heap across all the servers? Or is there any other way to handle the large heap requirement?
    Thanks,
    Atul

    user13640648 wrote:
    Is there any mechanism/product which allows us to share heap across all the servers? Or is there any other way to handle the large heap requirement issue?
    That isn't how you design it (based on your brief description).
    For any transaction A you need a set of data X.
    For another transaction B you need a set of data Y which might or might not overlap with X.
    The set of data (X or Y) is represented by discrete hunks of data (form is irrelevant) which must be loaded.
    One can preload the server with this data or do a load on demand.
    Once in memory it is cached.
    One can refine this further with alternative caching strategies that define when loaded data is unloaded and how it is unloaded.
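    A minimal load-on-demand cache sketch, with illustrative names (DataBlock and loadFromDatabase() are not from the original post):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public final class DataCache {
        private final Map<String, DataBlock> cache = new ConcurrentHashMap<>();

        public DataBlock get(String key) {
            // load the block the first time it is requested, then serve it from memory
            return cache.computeIfAbsent(key, this::loadFromDatabase);
        }

        private DataBlock loadFromDatabase(String key) {
            // placeholder: fetch the hunk of data identified by 'key' from the database
            return new DataBlock(key);
        }

        public static final class DataBlock {
            final String key;
            DataBlock(String key) { this.key = key; }
        }
    }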
    JEE servers normally support this in a variety of forms. But one can custom code it as well.
    JEE servers can also replicate cached data across server instances. Custom code can do this but it is more complicated than doing the custom caching.
    A load balanced system exists for performance and failover scenarios.
    Obviously in a failover situation a "shared heap" would fail completely (as asked about) because the other server would be gone.
    One might also need to support very large data sets. In that case something like Memcached (google for it) can be used. There are commercial solutions in this space as well. This allows for distributed caching solutions which can be scaled.

  • How to get the result set in batches

    I have a query that returns a large amount of data. I want to display this data in groups of 20, and after every 20 records I want to add a header and footer.
    Is it possible to get the result set data in batches of 20? I mean, can I specify the start and end index of the query?
    regards
    Manisha

    What I am saying is that a big query with lots of joins will probably be slow, and as such would be a ripe candidate for batching the responses, if it were not possible to speed it up or optimize it. Batching is nice to look at for the user, but it is not a solution for performance problems. In essence it is irrelevant that it adds a small performance deficit, as the query appears to run a lot quicker and gives more feedback to the user.
    Then let me say it again...
    - "Join" is a term that applies to a method of doing queries in the database.
    - Query 1 uses a join and returns 15 rows.
    - Query 2 does not use a join and returns 1500 rows.
    Given the above, Query 1 will provide better overall performance for the system than Query 2 in a properly configured database.
    If it doesn't, then the database is not set up correctly.
    And again, this is irrespective of whether the query is scrollable or not.

  • How to handle large payloads in WCF services?

    Hi All,
    I have developed a WCF service which takes a list of employee IDs and returns employee details.
    The service works fine with 500 employee IDs, but when we receive a request with 5000 employee IDs the service takes a long time to complete.
    Inside the method I am doing the following operations:
    1) For each employee ID I call a stored procedure, get the employee object, and add it to the list of employees, looping through all the employee IDs and constructing the list of all the employee objects.
    Could you please suggest the best/fastest way to retrieve the employee details for more than 5000 employee IDs?
    2) Can we use multithreading here? If possible, please give me an idea.
    Thanks
    Amar

    You can try using a table-valued parameter with a stored procedure.
    Basically:
    Create a stored procedure that takes the list of CustomerIDs as a table parameter.
    Generate the schemas using the WCF Adapter Wizard.
    Map the request message of CustomerIDs to the SQL request message.
    In the stored procedure, select the customers by JOINing to the table parameter.  In the SP, the table parameter will behave just like a physical table.
    Return the result set of all customers in whatever format you need.
    This way, SQL Server does the heavy data operation in one shot instead of many individual queries.
    Here's two articles that describe how to use Table Value Parameters:
    http://msdn.microsoft.com/en-us/library/dd787869.aspx
    http://connectedcircuits.wordpress.com/2011/08/17/using-sql2008-data-table-type-biztalk-to-insert-parent-child-rows/

  • How to return the result set of multiple select statements as one result set?

    Hi All,
    I have multiple select statements in my stored procedure that I want to return as one result set, for instance:
    select id from tableA
    union
    select name from tableb
    but the union will not work because the result sets' datatypes are not identical, so how do I go about this?
    Thanks

    You have to CAST or CONVERT (or implicitly convert) the columns to the same datatype.  You must find a datatype that both columns can be converted to without error.  In your example I'm guessing id is an int and name is a varchar or nvarchar. 
    Since you didn't convert the datatypes, SQL will use its data type precedence rules and attempt to convert name to an int.  If any row has a value in name that cannot be converted to an int, you will get an error.  The solution is to force SQL to convert the int to varchar.  So you want something like
    select cast(id as varchar(12)) from tableA
    union
    select name from tableb
    If the datatypes are something other than int or varchar, you must find a compatible datatype and then convert one (or both) of the columns to that datatype.
    Tom
