Paging a large ResultSet in a WD Table

Hi,
Is there any standard way to handle this?
Let's say we have 100 000 records. We have a method
getData(startingRow, rowsCount)
which can be used for paging.
We have a context node for results and a table bound to it.
What we need is
1. to see that there are actually 100 000 results,
2. but to keep only the visible data in memory as the user scrolls.
Is there any automatic way - for instance, some event that is fired when the table needs more data to display?
Regards, Konstantin

Hi Valery,
Thanks, that is what I supposed. I just wanted to make sure that there is no standard solution.
Yes, I agree, no user can browse millions of records. But the problem is that any user can request them, and neither the application nor the Web AS itself should crash.
For instance, if you go to some search engine and search just for the word "is" - you will get a message "Results 1 - 10 from about 12,820,000,000", and everything keeps on working.
We don't have to reinvent the wheel; paging has already been invented.
Regards, Konstantin

Similar Messages

  • PL/SQL Dev query session lost on large resultsets after db update

    We have a problem with our PL/SQL Developer tool (www.allroundautomations.nl) since updating our database.
    So far we had Oracle DB 10.1.0.5 Patch 2 on W2k3 and XP clients using Instant Client 10.1.0.5 and PL/SQL Developer 5.16 or 6.05 to query our DB. This scenario worked well.
    Now we upgraded to Oracle 10g 10.1.0.5 Patch 25, and our PL/SQL Developer 5.16 or 6.05 (on IC 10.1.0.5) can still log on to the DB and query small tables. But as soon as the resultset reaches a certain size, the query on a table never finishes and keeps showing "Executing...". We can only press "Break", which results in "ORA-12152: TNS: unable to send break message" and "ORA-03114: not connected to ORACLE".
    If I narrow the resultset down on the same table, it works like before.
    If I watch the sessions on small resultset queries, I see the corresponding session, but on large resultset queries the session seems to close immediately.
    To solve this issue I already tried installing the newest PL/SQL Developer 7.1.5 (trial) and/or a newer Instant Client version (10.2.0.4); neither solved the problem.
    Is there a new option in 10.1.0.5 Patch 25 (or earlier) which closes sessions if the resultset gets too large over a slower internet connection?
    Btw. using sqlplus from the Instant Client directory, or even Excel over ODBC on the same client, returns the full resultset without problems. Could this be some kind of timeout problem?
    Edit:
    Here is a snippet of the tracefile on the client right after executing the SELECT statement. Some data seems to be retrieved and then it ends with these lines:
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 2D 20 49 6E 74 72 61 6E |-.Intran|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 74 2D 47 72 75 6E 64 |et-Grund|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 6C 61 67 65 6E 02 C1 04 |lagen...|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 02 C1 03 02 C1 0B 02 C1 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 51 00 02 C1 03 02 C1 2D |Q......-|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 05 48 4B 4F 50 50 01 80 |.HKOPP..|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 03 3E 64 66 01 80 07 78 |.>df...x|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 0B 0F 01 01 01 07 76 |e......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 09 01 01 07 76 |.......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 18 01 01 07 78 |.......x|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 0B 0F 01 01 01 07 76 |e......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 09 01 01 07 76 |.......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 18 01 01 02 C1 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 3B 02 C1 02 01 80 00 00 |;.......|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 00 00 00 00 00 00 00 00 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 00 00 01 80 15 0C 00 |....... |
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: got NSPTDA packet
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: NSPTDA flags: 0x0
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: what=1, bl=2001
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: nsctxrnk=0
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nioqrc: exit
    (1992) [20-AUG-2008 17:13:00:953] nioqrc: entry
    (1992) [20-AUG-2008 17:13:00:953] nsdo: entry
    (1992) [20-AUG-2008 17:13:00:953] nsdo: cid=0, opcode=85, bl=0, what=0, uflgs=0x0, cflgs=0x3
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: rank=64, nsctxrnk=0
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: nsctx: state=8, flg=0x100400d, mvd=0
    (1992) [20-AUG-2008 17:13:00:953] nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: switching to application buffer
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: entry
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: recving a packet
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: entry
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: reading from transport...
    (1992) [20-AUG-2008 17:13:00:968] nttrd: entry

    Found nothing in the \bdump alert.log or \bdump trace files. I only have the DEFAULT profile and everything is set to UNLIMITED there.
    But \udump generates a trace file the moment I execute the query:
    Dump file <path>\udump\<sid>ora4148.trc
    Fri Aug 22 09:12:18 2008
    ORACLE V10.1.0.5.0 - Production vsnsta=0
    vsnsql=13 vsnxtr=3
    Oracle Database 10g Release 10.1.0.5.0 - Production
    With the OLAP and Data Mining options
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:898M/3071M, Ph+PgF:2675M/4967M, VA:812M/2047M
    Instance name: <SID>
    Redo thread mounted by this instance: 1
    Oracle process number: 33
    Windows thread id: 4148, image: ORACLE.EXE (SHAD)
    *** 2008-08-22 09:12:18.731
    *** ACTION NAME:(SQL Window - select * from stude) 2008-08-22 09:12:18.731
    *** MODULE NAME:(PL/SQL Developer) 2008-08-22 09:12:18.731
    *** SERVICE NAME:(<service-name>) 2008-08-22 09:12:18.731
    *** SESSION ID:(145.23131) 2008-08-22 09:12:18.731
    opitsk: network error occurred while two-task session server trying to send break; error code = 12152
    This trace is only generated when the query with the expected large resultset fails. If I narrow down the resultset, no trace is written, and the query then works, of course.

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
    I have a performance problem displaying a large/huge resultset of data on a Crystal report. The report takes about 4 minutes or more depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
    Java Reporting Component (JRC),Crystal Report Viewer (CRV)
    Firefox
    DETAILS
    I use the CRXI thick client to build my report (.rpt) and then use it in my web application (webapp) under Jboss.
    The user specifies the filter criteria to generate a report (date range etc.) and submits the request to the webapp. The webapp queries the database and gets a "resultset".
    I initialize the JRC and CRV according to all the specifications and finally call the "processHttpRequest" method of the Crystal Report Viewer to display the report in the browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
    The performance problem is within the last step. I put logs everywhere and noticed that the database query doesn't take too long to return the resultset. Everything processes pretty quickly until I call processHttpRequest of the CRV. This method just hangs for a long time before displaying the report in the browser.
    The CRV runs pretty fast when the resultset is smaller, but for a large resultset it takes a very long time.
    I do have subreports and use Crystal Report formulas on the reports. Some of them are used for grouping as well. But I don't think subreports are the real culprit here, because I have some other reports that don't have any subreports, and they too get really slow displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
    I have thought of some half-baked ideas.
    A) Use external pagination and fetch data only for the current page being displayed. But for this, CRXI must allow me to create my own buttons (previous, next, last), so I can control the click event and fetch data accordingly. I tried capturing events by registering the event handler "addToolbarCommandEventListener" of the CRV, but my listener gets invoked "after" the processHttpRequest method completes, which doesn't help.
    Somehow I need to be able to control the UI by adding my own previous-page, next-page and last-page buttons and handling their click events.
    B) Automagically have CRXI use javascript functionality to allow browser-side page navigation. So maybe the first time it'll take 5 minutes to display the report, but once it's displayed the user can go to any page without sending the request back to the server.
    C) Try using Crystal Reports 2008. I'm open to using this version, but I couldn't figure out whether it has any features that can help me do external pagination or anything else that can handle large resultsets.
    D) Will using the Crystal Reports servers (cache server/application server etc.) help in any way? I read a little on the Crystal Page Viewer, Interactive Viewer, Part Viewer etc., but I'm not sure whether any of these will solve the issue.
    I'd appreciate it if someone can point me in the right direction.

    Essentially, the answer is to use smaller resultsets, or to have the report pull from the database directly instead of handing it a resultset.

  • Returning a Large ResultSet

    At the moment we use our own queryTableModel to fetch data from the database. Although we use the traditional method (looping on ResultSet.next()) of loading the data into a Vector, we find that large ResultSets (1000+ rows) take a considerable amount of time to load into the Vector.
    Is there a more efficient way of storing the ResultSet other than using a Vector? We believe the addElement method constantly expanding the Vector is the cause of the slowdown.
    Any tips appreciated.

    One more thing:
    "We believe the addElement method constantly expanding the vector is the cause of the slowdown."
    You are probably right, but this is easy to avoid: both Vector and ArrayList have a constructor in which you can specify the initial size, so you can save much of the time spent growing the list. Vector, like other collection classes such as HashSet, also has a constructor in which you can specify the load factor in addition to the initial size.
    Abraham.
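
    As a rough illustration of that pre-sizing idea (the COUNT(*) query, table and column names here are placeholders, and the extra round trip only pays off when the load itself is expensive):
    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    public class PreSizedLoad {
        public static List<Object[]> load(Connection con) throws SQLException {
            // Ask the database how many rows to expect so the list is allocated once.
            int expected;
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM big_table")) {
                rs.next();
                expected = rs.getInt(1);
            }
            List<Object[]> rows = new ArrayList<>(expected);   // no incremental growth
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT col1, col2, col3 FROM big_table")) {
                while (rs.next()) {
                    rows.add(new Object[] { rs.getObject(1), rs.getObject(2), rs.getObject(3) });
                }
            }
            return rows;
        }
    }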

  • Processing large Resultsets quickly or parallely

    How do I process a large resultset that contains the purchase entries of, say, 20K users? Each user may have one or more purchase entries. The resultset is ordered by userid, and the other fields are itemname, quantity and price.
    Mine is a quad-processor machine.
    Thanks.

    You're going to need to provide a lot more details. For instance, is the slow part reading the data from the database, or the processing that you are going to do on the data? If the former, then in order to do work in parallel you probably need separate threads with their own resultsets. If the latter, then you could parallelize the work by having one thread read the resultset and push the data onto a shared work queue from which multiple worker threads read, as sketched below. These are just a few of the possibilities.
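
    A hedged sketch of that second option: one thread drains the ResultSet onto a bounded queue, and a pool sized to the number of processors does the per-row work. The connection details, table name and the PurchaseRow holder are assumptions; the columns mirror the ones named in the question (userid, itemname, quantity, price).
    import java.sql.*;
    import java.util.concurrent.*;

    public class ParallelResultSetProcessing {
        // Hypothetical row holder for the columns mentioned in the question.
        record PurchaseRow(long userId, String itemName, int quantity, double price) {}

        public static void main(String[] args) throws Exception {
            BlockingQueue<PurchaseRow> queue = new ArrayBlockingQueue<>(10000);
            PurchaseRow poison = new PurchaseRow(-1, "", 0, 0);     // end-of-data marker
            int workers = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(workers);

            for (int i = 0; i < workers; i++) {
                pool.submit(() -> {
                    for (PurchaseRow row; (row = queue.take()) != poison; ) {
                        // expensive per-row processing goes here
                    }
                    return null;
                });
            }

            // Placeholder URL/credentials; a single reader thread feeds the queue.
            try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT userid, itemname, quantity, price FROM purchases ORDER BY userid")) {
                while (rs.next()) {
                    queue.put(new PurchaseRow(rs.getLong(1), rs.getString(2),
                                              rs.getInt(3), rs.getDouble(4)));
                }
            }
            for (int i = 0; i < workers; i++) {
                queue.put(poison);                                  // one marker per worker
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        }
    }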

  • Best way to return large resultsets

    Hi everyone,
    I have a servlet that searches a (large) database by complex queries and sends the results to an applet. Since the database is quite large and the queries can be quite general, it is entirely possible that a particular query can generate a million rows.
    My question is, how do I approach this problem from a design standpoint? For instance, should I send the query without limits and get all the results (possibly a million) back? Or should I get only a few rows, say 50,000, at a time by using the SQL LIMIT construct or some other method? Or should I use a totally different approach?
    The reason I am asking this question is that I have never had to deal with such large results, and the expertise on this group will help me avoid some of the design pitfalls at the very outset. Of course, there is the question of whether the servlet should send so many results at once to the applet, but that's probably for another forum.
    Thanks in advance,
    Alan

    If you are using one of the premier databases (Oracle, SQL Server, Informix), I am fairly confident that it would be best to allow the database to manage both the efficiency of the query and the efficiency of the transport.
    QUERY EFFICIENCY
    Query efficiency in all databases is optimized by the DBMS according to general algorithms. That means there are assumptions made by the DBMS as to the 'acceptable' number of rows to process, the number of tables to join, the number of rows that will be returned, etc. These general algorithms do an excellent job on 95+% of queries run against a database. However, if you fall outside the bounds of these general algorithms, you will run into escalating performance problems. Luckily, SQL syntax provides enormous flexibility in how to get your data from the database, and you can code the SQL to 'help' the database do a better job when SQL performance becomes a problem. At the extreme, it is possible that you will issue a query that overwhelms the database and the physical resources available to it (memory, CPU, I/O channels, etc.). Sometimes this can happen even when a ResultSet returns only a single row; in that case it is the intermediate processing (table joins, sorts, etc.) that overwhelms the resources. You can manage the memory resource issue by purchasing more memory (obviously), or by re-coding the SQL to a more efficient algorithm (make the optimizer do a better job), or, as a last resort, you may have to break the SQL up into separate statements using a more granular approach (this is your "where id < 1000"). BTW: if you do have to use this approach, in most cases using BETWEEN is more efficient; a sketch of that range-based fallback appears at the end of this reply.
    TRANSPORT
    Most if not all of the JDBC drivers return the ResultSet data in 'blocks' of rows that are delivered on an as-needed basis to your program. Some databases allow you to specify the size of these 'blocks' to aid in the optimization of your batch-style processes. Assuming that this is true for your JDBC driver, you cannot manage it better than the JDBC driver implementation, so you should not try. In all cases, you should allow the database to handle as much of the data manipulation and transport logic as possible. They have thousands of programmers working overtime to optimize that code; they just have you outnumbered, and while it's possible that you can code an efficiency, it's also possible that you will be unable to take advantage of future efficiencies within the database because of your proprietary optimizations.
    You have some interesting and important decisions to make. I'm not sure how much control of the architecture is available, but you may want to consider alternatives to moving these large amounts of data around through the JDBC architecture. Is it possible to store this information on the server and have it fetched using FTP or some other simple transport? Far less CPU usage, and more efficient use of your bandwidth.
    So, in case it wasn't clear: no, I don't think you should break up the SQL initially. If it were me, I would probably spend the time putting out some metric-based information to allow you to better judge where the slowdowns are, when or if any occur. With something like this, I have seen I.T. spend hours and hours tuning SQL just to find out that the network was the problem (or vice versa). I would also go ahead and run the expected queries outside of the application and determine what kind of problems there are before coding of the application is finished.
    Hey, this got a bit wordy, sorry. Hopefully there is something in here that can help you...Joel
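
    For completeness, a small sketch of that granular, range-based fallback, chunking on a numeric key with BETWEEN; the table and column names (orders, id, item, qty) and the connection details are placeholders:
    import java.sql.*;

    public class RangeChunkedQuery {
        public static void main(String[] args) throws SQLException {
            final int chunk = 50000;                               // rows per range
            try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pass")) {
                long maxId;
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT MAX(id) FROM orders")) {
                    rs.next();
                    maxId = rs.getLong(1);
                }
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT id, item, qty FROM orders WHERE id BETWEEN ? AND ?")) {
                    for (long lo = 0; lo <= maxId; lo += chunk) {
                        ps.setLong(1, lo);
                        ps.setLong(2, lo + chunk - 1);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                // hand this manageable slice to the applet / next stage
                            }
                        }
                    }
                }
            }
        }
    }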

  • Unable to fetch large resultset

    I got this error when I was trying to retrieve records from a SQL Server table which consists of around 2 lakh (200,000) records:
    com.microsoft.sqlserver.jdbc.SQLServerException: The system is out of memory. Use server side cursors for large result sets:Java heap space. Result set size:78,585,571. JVM total memory size:66,650,112.
    How can I fetch the above-mentioned records?

    3 choices:
    1) make your Java heap size big enough to hold the entire ResultSet. If you're using Sun's Java (almost everyone does), then you increase the maximum heap with the -Xmx parameter when the JVM is started. It looks like you're using the default maximum, which is pretty small. See:
    http://java.sun.com/j2se/1.3/docs/tooldocs/solaris/java.html
    2) retrieve the ResultSet from the database incrementally, and process it incrementally. Statement.setFetchSize() suggests to the driver how much of the ResultSet to keep in memory (and to retrieve in one chunk from the database), but it's common for scrollable ResultSets to ignore this hint and try to keep everything in memory. A forward-only ResultSet (the default) is more likely to work incrementally, but it depends totally on your driver (see the sketch after this list).
    3) break the data into multiple queries....
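
    A minimal sketch of option 2, with placeholder connection details and table name; whether the driver honors the fetch-size hint (or needs a server-side cursor option) depends entirely on the driver:
    import java.sql.*;

    public class IncrementalRead {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pass");
                 Statement st = con.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                st.setFetchSize(1000);             // hint: buffer ~1000 rows per round trip
                try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // process the current row, then let it go; never collect all rows
                    }
                }
            }
        }
    }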

  • OutOfMemoryError when retrieving large resultset

    In my application I have one type of object (call it O for now) that has a lot of data in the database (50000+ rows). Now I need to create an object of class T that has an m-n binding to 40000 instances of O. In the database this is mapped to a link table between the tables for O and T.
    Now I get an OutOfMemoryError when I perform the following code to add one T to the aforementioned O's:
    PersistenceManager pm = VRFUtils.getPM(getServlet());
    T t = new T();
    // Fill arbitrary fields
    t.setToelichting(mtForm.getToelichting());
    // Add T to a set of O's
    Set os = new HashSet();
    Query q = pm.newQuery(O.class, "aangemaakt == parmaanmaak");
    q.declareParameters("java.util.Date parmaanmaak");
    os.addAll((Collection) q.execute(field));   // 'field' holds the date parameter
    t.setOs(os);
    // Make T persistent
    pm.currentTransaction().begin();
    pm.makePersistent(t);
    pm.currentTransaction().commit();
    pm.close();
    After debugging, I've found that the OutOfMemoryError occurs even when I don't make anything persistent, but simply run the query that retrieves the 40000 records and do a c.size() on the result.
    I'm running Kodo against MySQL using optimistic transactions.
    What I do appreciate is that the OutOfMemoryError does not upset the rest of the Kodo system.
    Please advise,
    Martin van Dijken
    PS: The c.size() issue I've been able to resolve using KodoQuery's setResult("count(this)"), but I can't think of anything like that which would apply to this situation.

    As you may know, Kodo has several settings that allow you to use large
    result sets without running out of memory (assuming your driver supports
    advanced features):
    http://www.solarmetric.com/Software/Documentation/latest/docs/ref_guide_dbsetup_lrs.html
    Kodo also allows you to apply these settings to persistent fields, so
    that the entire collection or map field does not reside in memory:
    http://www.solarmetric.com/Software/Documentation/latest/docs/ref_guide_pc_scos.html#ref_guide_pc_scos_proxy_lrs
    Unfortunately, you are trying to create a relation containing more
    objects than your system can handle, apparently, and when creating a
    relation you pretty much have to load all the related objects into
    memory at once right now.
    In the next beta release of 3.1, there is a solution to this problem.
    Our next release no longer holds hard references to objects that have
    been flushed. So by using the proper large result set settings as noted
    above, and by making your relation a large result set relation also as
    noted above, and by adding only a few hundred objects at a time from the
    query result to the relation and then manually flushing, you should be
    able to perform a transaction in which all the queried objects are
    transferred from the query result to the large result set relation
    without exhausting memory.
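
    A rough sketch of that batching loop, assuming the large-result-set settings above are in place and that a flush() call is available on the PersistenceManager (JDO 2 style; on Kodo 3.x the equivalent lives on the Kodo-specific PersistenceManager). The classes O and T and the query filter come from the original post; the getOs() accessor is an assumption:
    import java.util.Collection;
    import java.util.Iterator;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    public class BatchedRelationBuilder {
        public void attachAll(PersistenceManager pm, T t, java.util.Date parmaanmaak) {
            pm.currentTransaction().begin();
            pm.makePersistent(t);                        // persist T first, then grow the relation

            Query q = pm.newQuery(O.class, "aangemaakt == parmaanmaak");
            q.declareParameters("java.util.Date parmaanmaak");
            Collection results = (Collection) q.execute(parmaanmaak);

            int added = 0;
            for (Iterator it = results.iterator(); it.hasNext();) {
                t.getOs().add(it.next());                // assumed accessor for the m-n field
                if (++added % 500 == 0) {
                    pm.flush();                          // let flushed objects be reclaimed
                }
            }
            pm.currentTransaction().commit();
            pm.close();
        }
    }
    Persisting t before the loop lets each flush write the newly added links and release them from memory instead of holding all 40000 in the transaction cache.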

  • Problems sorting larger sets of XMLType columns/tables

    Hi,
    Here's something I tried to wrap my head around:
    I'm using Oracle 11g r1 for Windows Server 2003.
    I've imported a fairly large set of XML files into a temporary XMLType table. Now
    I want to sort the contents of the table and put them into another table which
    uses a sequence to get primary keys (would be a longer story to explain, basically
    sorting by this non-XML primary key is super-fast as opposed to everything else
    I've tried, plus I need something unique):
    INSERT INTO realtable SELECT 0, object_value
    FROM tmptable ORDER BY extractValue(object_value, '/some/*/field');
    It works fine for a very small number of rows, but when I tried it with about 30000 rows, still not too much, two kinds of things can happen. Either Oracle gobbles up a huge amount of memory (>1.5 GB) until the statement breaks:
    ORA-04030: out of process memory when trying to allocate 33292 bytes (callheap,kllcqgf:kllsltba)
    or I get something like this:
    ORA-00600: internal error code, arguments: [kqludp2], [0x1F31EC94], [0], [], [], [], [], []
    I haven't wasted too much time looking into this. I tried the storage options CLOB and binary XML for tmptable. CLOB seems to induce the latter problem and binary XML the former, but I haven't made further experiments.
    I can create a workaround outside Oracle, I think, so it's not serious; however, it would be interesting to know what happened, or whether there is a better way to do this.
    Thanks!

    Unfortunately, the problems are not reproducible in a meaningful way. All I can say is that once a statement results in a kqludp2, it will always fail with exactly the same error message until I reinstall the database from scratch. On the other hand, when I found a configuration that works, I could delete and rebuild/refill a table multiple times without the thing ever breaking.
    As the most recent example: after the latest trouble I deleted the database and changed my DDL scripts to the last configuration I wanted to try, the one that broke the last time (several tables with two normal columns and an XMLType column each), and everything worked like a breeze.
    I can file a TAR when I get a support ID from my employer; however, I installed the database on a virtual machine (I should have mentioned that earlier) and Oracle doesn't officially support that configuration as far as I know, so I doubt they'll do anything about it. I've procured a physical computer now and will try to reproduce the problems when I get to it.

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.

    hillelhalevi wrote:
    I don't have TOAD nor any other tool for querying the database.  I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel or Access or some other tool using ODBC or not using ODBC if that's possible.  I need results to be in a .csv file or something similar. Speed is what is important here.  I'm looking to load more than 1 million but less than 10 million records at once.   Thanks.
    Use Oracle's free Sql Developer
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then just save the results to a file
    See this article by Jeff Smith for other options
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/
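
    If a tiny standalone program is also acceptable, a plain JDBC loop writing a .csv avoids Excel/Access altogether. A hedged sketch, assuming the Oracle thin driver is on the classpath, with placeholder connection details and a naive quoting scheme:
    import java.io.PrintWriter;
    import java.sql.*;

    public class TableToCsv {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM scott.emp");
                 PrintWriter out = new PrintWriter("emp.csv", "UTF-8")) {
                rs.setFetchSize(5000);                   // fewer round trips for big tables
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();
                for (int i = 1; i <= cols; i++) {        // header row
                    out.print(md.getColumnName(i));
                    out.print(i < cols ? "," : "\n");
                }
                while (rs.next()) {
                    for (int i = 1; i <= cols; i++) {
                        String v = rs.getString(i);
                        out.print(v == null ? "" : "\"" + v.replace("\"", "\"\"") + "\"");
                        out.print(i < cols ? "," : "\n");
                    }
                }
            }
        }
    }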

  • DbKona and large resultsets

    Hi,
    I'm working on an application that will allow a user to run ad-hoc queries against a database. These queries can return huge resultsets (between 50,000 and 450,000 rows). Per our requirements, we can't limit (through the database, anyway) the number of rows returned from a random query. In trying to deal with the large number of results, I've been looking at the dbKona classes. I can't find a solution with the regular JDBC stuff, and CachedRowSet is just not meant to handle that much data.
    While running a test (based on an example given in the dbKona manual), I keep running into out-of-memory issues. The code follows:
    QueryDataSet qds = new TableDataSet(resultSet);
    while (!qds.allRecordsRetrieved()) {
        DataSet currentData = qds.fetchRecords(1000);
        // Process the 1000 records . . .
        currentData.clearRecords();
        currentData = null;
    }
    qds.clearRecords();
    I'm currently not doing any processing with the records returned for this trivial test. I just get them and clear them immediately to see if I can actually get them all. On a resultset of about 45,000 rows I get an out-of-memory error about halfway through the fetches. Are the records still being held in memory? Am I doing something incorrectly?
    Thanks for any help,
    K Lewis

    I think I found the problem. From some old test, the Statement object I made returned a scrollable ResultSet. I couldn't find a restriction on this immediately in the docs (or maybe it's just a problem of the Oracle driver?). As soon as I moved back to the default type of ResultSet (FORWARD_ONLY), I was able to process 150,000 records just fine.
    "Sree Bodapati" <[email protected]> wrote:
    Hi
    Can you tell me what JDBC driver you are using?
    sree
    "K Lewis" <[email protected]> wrote in message
    news:[email protected]..
    Hi,
    I'm working on a application that will allow a user to run ad-hoc queriesagainst
    a database. These queries can return huge resultsets (between 50,000&
    450,000
    rows). Per our requirements, we can't limit (through the databaseanyway)
    the
    number of rows returned from a random query. In trying to deal withthe
    large
    number of results, I've been looking at the dbKona classes. I can'tfind
    a solution
    with the regular JDBC stuff and CachedRowSet is just not meant to handlethat
    much data.
    While running a test (based on an example given in the dbKona manual),I
    keep
    running into out of memory issues. The code follows:
    QueryDataSet qds = new TableDataSet(resultSet);
    while (!qds.allRecordsRetrieved()) {
    DataSet currentData = qds.fetchRecords(1000);
    // Process the hundred records . . .
    currentData.clearRecords();
    currentData = null;
    qds.clearRecords();
    I'm currently not doing any processing with the records returned forthis
    trivial
    test. I just get them and clear them immediately to see if I can actuallyget
    them all. On a resultset of about 45,000 rows I get an out of memoryerror about
    halfway through the fetches. Are the records still being held in memory?Am
    I doing something incorrectly?
    Thanks for any help,
    K Lewis

  • How to insert large XML data into database tables

    Hi all,
    I am new to XML. I want to insert data from an XML file into my database tables, but the XML file size is very large. Performance is also an issue. Can anybody please tell me the procedure to take the XML file from the server and insert it into my database tables?
    Thanks in advance

    Unfortunately, posting very generic questions like this in the forum tends not to be very productive for you, me or the other people who read the forum. It really helps everyone if you take a little time to review existing posts and their answers before starting new threads which replicate subjects that have already been discussed extensively in previous threads. This allows you to ask more sensible questions (e.g., "I'm using this approach and encountering this problem") rather than extremely generic questions that you can answer yourself by spending a little time reviewing existing posts or using the forum's search feature.
    Also, in future you might want to try being a little more specific before posting questions.
    E.g., define "very large". I know of customers who think very large is 100K, and customers who think 4G is medium. I cannot tell from your post what size your files are.
    What is the content of the file? Is it going to be loaded into a single record, or a single table, or will it need to be loaded into multiple records in a single table or multiple records in multiple tables?
    Do you really need to load the data into existing relational tables, or could your application work with relational views of the XML content?
    Finally, which release of the database are you working with?
    Define performance. Is it reasonable to expect to process this kind of document on this machine (make, memory, number of CPUs, CPU speed, number of discs) in this period of time?
    WRT your original question: if you take a few minutes to search this forum you will find a very large number of threads with very similar titles to yours. These threads document a number of different approaches that can be used to solve this problem.
    I suggest you start by looking for threads that cover topics like DBMS_XMLSTORE, XMLTable(), relational views of XML content, and loading XML content into relational tables.

  • Large gaps in Sharepoint Page table

    I am amending some pages on our company SharePoint, and in one page I am putting a 2-column table with hyperlinks in some of the cells. When I check the page in, the display leaves huge gaps between the cells, or something is resizing the cells.
    So in edit page the links might looks like this:
    link1 link2
    Link3 link4
    But after checking in they look like this:
    link1 link2
    Link3 link4
    I've tried resetting the table to the same settings as other tables that don't have this problem, and also the cell widths and heights, but still no joy.
    Thanks
    Edit: I also checked the code for <br> tags, and I have found that when viewing the page in Chrome there is no large whitespace and it looks fine, so perhaps it is something with IE?
    Andrew (MCDST)

    Hi Konstantin,
    There is a property in the Table UI element called firstVisibleRow. So bind it to a calculated attribute and do your swapping logic in the setter method.
    Best regards, Maksim Rashchynski.
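
    To make the swapping idea concrete, here is a small framework-neutral sketch of the windowing logic such a setter could delegate to; DataSource, getData(startingRow, rowsCount) and totalCount() stand in for the backend calls from the original question:
    import java.util.Collections;
    import java.util.List;

    public class ResultWindow<Row> {
        public interface DataSource<R> {
            int totalCount();                                  // e.g. a SELECT COUNT(*) call
            List<R> getData(int startingRow, int rowsCount);   // the paging method from the question
        }

        private final DataSource<Row> source;
        private final int pageSize;
        private List<Row> window = Collections.emptyList();
        private int firstVisibleRow = -1;

        public ResultWindow(DataSource<Row> source, int pageSize) {
            this.source = source;
            this.pageSize = pageSize;
        }

        // Called from the firstVisibleRow setter; refetches only when the window actually moves.
        public List<Row> scrollTo(int newFirstVisibleRow) {
            if (newFirstVisibleRow != firstVisibleRow) {
                window = source.getData(newFirstVisibleRow, pageSize);
                firstVisibleRow = newFirstVisibleRow;
            }
            return window;
        }

        public int totalRows() {
            return source.totalCount();                        // lets the UI report "100 000 results"
        }
    }
    The firstVisibleRow setter would call scrollTo() and copy the returned rows into the context node bound to the table, so only one page of rows lives in the context at any time, while totalRows() still lets the UI report the full count.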

  • Handling large ResultSets

    I want to retrieve about 30 rows at a time from our DB2/AS400. The table contains over 4,000,000 rows. I would like to begin at the first row and drag 30 rows over the network, then get the next 30 if the user requests them. I know the answer is to use cursors, but I cannot use these statements within my code on the AS400. WebSphere Studio allows me to create JSPs using a <tsx:repeat> tag to iterate over the result set, but the instructions on using these are pretty vague.
    Can anyone direct me to some informative sites with examples, or recommend a way to go about this?

    That would be fantastic and my ideal approach, but the manager of the department wants to keep the functionality of what they presently use, which he and his team wrote 20 years ago in RPG with lovely green screens. They scroll 10 rows at a time, jump to the top or the bottom of the table, and type in the first two, three or four letters of the search parameter to get results which can also be scrolled. Some tables are even worse; one contains over 10 million rows.

    We have a lot of those green screen applications in our AS/400 systems too. So I can tell you (and your manager should be able to confirm this) that the "subfiles" that they scroll cannot contain more than 9,999 records. But in real life, even in our AS/400 environment, nobody ever starts at the beginning of our customer file (which does have more than 9,999 records) and scrolls through it looking for something. They put something into the search fields first.
    So displaying the first 10 records of the file before allowing somebody to enter the search criteria is pointless. And jumping to the end of the table is pointless too -- unless the table is ordered by date and you want to get recent transactions, in which case you should be sorting it by descending date anyway. My point is that those AS/400 programs were written that way because it was easy to write them that way, not necessarily because people would ever use those features. When you have hundreds of tables (as we do), it's easier just to copy and paste an old program to produce a maintenance program for a new table than it is to start from scratch and ask the users what they really need. That's why all the programs look alike there. It's not because the requirements are all the same, it's because it's easier for the programmers to write them.
    Here's another example: Google. When you send it a query it comes back with something like "Results 1 - 10 of about 2,780,000". But you can't patiently page through all of those 2,780,000 results: Google only saves the first 1000 for you to look at, and won't show you more than that.
    So I agree, a program that's simply designed to let somebody page through millions of records needs to be redesigned. If you want to write a generic program that lets people page through small files (less than 1000 records, let's say) there's nothing wrong with that, but your users will curse you if you make them use it for large files.

  • Internal server error after query. Large amount of data in table.

    Hello All,
    I have created a custom search page. Before executing a query I had to call a PL/SQL procedure (I had to parametrize the table by the search parameters so that the data in the table are calculated and current before showing them to the user). This is quick and should not cause any problems, but when there is a lot of data in the table, I receive an internal server error.
    How can I limit the query to take only 200 rows, and stop or throw an exception when the 200-row limit is reached?
    Any clue how to solve it?
    Edited by: user11986623 on 2013-04-15 10:52

    Hi,
    Try using the method below to restrict the number of rows the VO returns.
    vo.setMaxFetchSize(xyz);
    public void setMaxFetchSize(int max)
    Maximum number of rows to fetch for this View Object. This number takes effect the next time the query for this View Object is executed.
    Passing -1 to this method will retrieve an unlimited number of rows. This is the default.
    Br, 903096
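
    A short sketch of how that might look, using the plain BC4J interfaces and a hypothetical view object instance name ("SearchVO1"):
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.ViewObject;

    public class CappedSearchHelper {
        public void executeCappedSearch(ApplicationModule am) {
            ViewObject vo = am.findViewObject("SearchVO1");   // hypothetical VO instance name
            vo.setMaxFetchSize(200);    // fetch at most 200 rows; -1 (the default) means unlimited
            vo.executeQuery();
        }
    }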
