Large Servlet Results... Pagination?

OK, here is my problem... it's kind of a two-parter. I have a servlet that generates about 150 items and creates an HTML table. It takes about 45 - 60 seconds to run.
First, I would like to create pages with 20 - 30 items per page. I thought of a way to do this with JavaScript and DIV tags, but we know why that isn't the best solution for multi-browser support, and it doesn't help with the issue below.
Second, I would like to provide the illusion of speed: as soon as the servlet is done with the first 20 results, throw up the HTML and let the process keep going with the other 130 while the user looks at the first 20.
I'm using Tomcat as the server and don't have an issue with JSP, if that helps. My attempts have failed. It seems that everything I try to send while the servlet is running just gets buffered until it finishes.
Thanks!
Ross

> First, I would like to create pages with 20 - 30 items per page. I thought of a way to do this with JavaScript and DIV tags, but we know why that isn't the best solution for multi-browser support, and it doesn't help with the issue below.

Put all of the items into an array. Create a variable called beginRow; in the first instance, set it to 0. Also have a variable called endRow, set to beginRow + 20. Loop through the array and display all items between beginRow and endRow. Create a "Next" link that passes a dynamic link to the same servlet with a parameter of beginRow=21 (you must dynamically insert the number). Now it will loop through and display items 21-40, with a "Next" link that passes beginRow=41.

How does the array persist after the results are generated?
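The beginRow/endRow windowing described above can be sketched in plain Java, separate from any servlet plumbing (the item list, zero-based offsets, and parameter names are illustrative assumptions, not from the original posts):

```java
import java.util.ArrayList;
import java.util.List;

public class PageWindow {
    // Returns the sub-list [beginRow, beginRow + pageSize), clamped to the
    // list bounds. beginRow is the value the "Next" link passes back.
    static List<String> page(List<String> items, int beginRow, int pageSize) {
        int from = Math.max(0, Math.min(beginRow, items.size()));
        int to = Math.min(from + pageSize, items.size());
        return items.subList(from, to);
    }

    // The dynamically built "Next" link: emitted only while rows remain.
    static String nextLink(int beginRow, int pageSize, int total) {
        int next = beginRow + pageSize;
        return next < total ? "<a href=\"list?beginRow=" + next + "\">Next</a>" : "";
    }

    public static void main(String[] args) {
        List<String> items = new ArrayList<>();
        for (int i = 1; i <= 150; i++) items.add("item" + i);
        System.out.println(page(items, 0, 20).size());   // first page: 20 items
        System.out.println(page(items, 140, 20).size()); // last page: the remaining 10
        System.out.println(nextLink(140, 20, 150));      // empty: no further page
    }
}
```

As to persistence: the array has to survive between requests somehow (session, cache, or re-query), which is exactly the follow-up question.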
> Second, I would like to provide the illusion of speed where, as soon as the servlet is done with the first 20 results, it throws up the HTML and lets the process keep going with the other 130 while the user looks at the first 20.

Why is it taking so long? We are doing a query on a database to populate the array I mentioned above. The database contains hundreds of thousands of items, we display possibly thousands at a time, and it runs quickly.

It's not coming from the database; the data comes from a dozen HTTP requests to other servers. The network time is what takes a few seconds, and the sorting seems to take a few seconds as well.

Ross
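On the buffering problem: the container holds output in a response buffer (Tomcat's default is 8 KB) and nothing reaches the browser until that buffer fills, the response is flushed, or the servlet returns. Calling flush() on the writer (or response.flushBuffer()) after the first 20 rows commits the response and pushes what has been written so far, while the servlet keeps producing the rest. A minimal sketch of the idea, with the row-producing logic reduced to a placeholder (in the real servlet this method would be called from doGet with response.getWriter()):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StreamingTable {
    // Writes table rows to `out`, flushing once `flushAfter` rows are written.
    // In a servlet, the flush commits the response, so the first rows reach
    // the browser while the remaining (slow) rows are still being produced.
    static void writeTable(PrintWriter out, int totalRows, int flushAfter) {
        out.println("<table>");
        for (int row = 1; row <= totalRows; row++) {
            // the slow per-row work (HTTP fetches, sorting) would happen here
            out.println("<tr><td>item " + row + "</td></tr>");
            if (row == flushAfter) {
                out.flush(); // first page of rows goes out now
            }
        }
        out.println("</table>");
        out.flush();
    }

    public static void main(String[] args) {
        StringWriter sink = new StringWriter(); // stands in for the response stream
        writeTable(new PrintWriter(sink), 150, 20);
        System.out.println(sink.toString().split("<tr>", -1).length - 1); // 150 rows written
    }
}
```

Two caveats: once flushed, the response is committed, so headers, status, and redirects can no longer change; and if the sorting happens up front, all items must be gathered before the first row can be written anyway, so the flush only hides rendering time, not gathering time.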

Similar Messages

  • Configuring Servlet Result Cache

    Greetings.
    I am trying to configure the iAS servlet result cache feature under iAS 6.0 SP2. I have the following entry for a servlet in my ias-web.xml file:
    <servlet>
      <servlet-name>MessageRouter</servlet-name>
      <guid>{CAC10848-06B4-1C5F-848B-08002085745B}</guid>
      <validation-required>false</validation-required>
      <servlet-info>
        <sticky>false</sticky>
        <encrypt>false</encrypt>
        <number-of-singles>10</number-of-singles>
        <disable-reload>false</disable-reload>
      </servlet-info>
      <caching>
        <cache-timeout>300</cache-timeout>
        <cache-size>10</cache-size>
        <cache-criteria>Xml</cache-criteria>
        <cache-option>TIMEOUT_LASTACCESS</cache-option>
      </caching>
    </servlet>
    However, it does not appear to be caching the results of this servlet because my request takes about 10 seconds to run (it's a database query). I would expect that subsequent requests would return results much faster than 10 seconds if the servlet results are in fact cached.
    So, I am hoping that someone can help me with the following questions:
    1. An example ias-web.xml with a cached servlet.
    2. Any instructions on how to monitor from iASTAT or logs or something so I can see that the servlet result is read from iAS cache.
    Thanks
    Jeffery Cann

    Check out the samples included with the appserver. There is a servlet caching sample that includes four different examples of deployment descriptors.
    Here is the servlet tag from the default:
    <servlet>
      <servlet-name>ServCache</servlet-name>
      <guid>{DD6402C6-CC11-1AA1-CF6F-080020CFEAC8}</guid>
      <validation-required>false</validation-required>
      <error-handler></error-handler>
      <servlet-info>
        <sticky>true</sticky>
        <encrypt>false</encrypt>
        <number-of-singles>10</number-of-singles>
        <disable-reload>false</disable-reload>
        <caching>
          <cache-timeout>900</cache-timeout>
          <cache-size>64</cache-size>
          <cache-criteria>inputtext</cache-criteria>
          <cache-option>TIMEOUT_CREATE</cache-option>
        </caching>
      </servlet-info>
    </servlet>
    The KXS log (assuming that you have info messages) will report servlet cache hits.
    David
    Shameless plug for my iAS book : http://www.amazon.com/exec/obidos/ASIN/076454909X/
    P.S.
    I just realized that the caching sample was only added in sp3. Even if you don't upgrade your server (the functionality is unchanged) you may want to download sp3 so that you can check out the sample. The sample goes into a reasonable amount of depth about this feature.

  • Java servlet: how to store large data result across multiple web session

    Hi, I am writing a java servlet to process some large data.
    Here is the process
    1) the user submits a query,
    2) the servlet returns a lot of results for the user to make a selection,
    3) the user submits their selections (with checkboxes),
    4) the servlet sends back the complete selected items in a file.
    The part I have trouble with (I'm new to servlets) is how I can store the results ArrayList (or Vector) after step 2 so I needn't re-run the search in step 4.
    I think a Session may be helpful here. But from what I read in the tutorial, sessions seem to only store small items instead of large datasets. Is it possible for a session to store a large dataset? Can you point me to an example or provide some example code?
    Thanks for your attention.
    Mike

    Do you connect to the database and store the ResultSet?
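    A session attribute can hold any Java object, so the ArrayList from step 2 can simply be stored under a key after the search and fetched again in step 4; server memory (and session replication, if clustered) is the real constraint, not the API. A runnable sketch, with a plain Map standing in for the HttpSession (in the servlet you would call request.getSession().setAttribute(...) and getAttribute(...) with the same keys; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SessionCacheSketch {
    // Stands in for HttpSession in this sketch.
    static final Map<String, Object> session = new HashMap<>();

    // Step 2: run the (expensive) search once and cache the results.
    static List<String> search(String query) {
        List<String> results = new ArrayList<>();
        for (int i = 1; i <= 1000; i++) results.add(query + "-" + i); // query stand-in
        session.put("searchResults", results);
        return results;
    }

    // Step 4: serve the user's selections from the cached list,
    // without re-running the search.
    @SuppressWarnings("unchecked")
    static List<String> selected(int[] indexes) {
        List<String> cached = (List<String>) session.get("searchResults");
        List<String> out = new ArrayList<>();
        for (int i : indexes) out.add(cached.get(i));
        return out;
    }

    public static void main(String[] args) {
        search("widgets");
        System.out.println(selected(new int[] {0, 2})); // [widgets-1, widgets-3]
    }
}
```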

  • Search result pagination losing/doubling items

    Hi,
    my client is running Portal 10.1.2.0.2 (Build: 139) and has a custom search portlet on a page.
    It's set up to search for items of a specific custom item type in one page group only.
    When users search, the results sometimes (maybe always) lack some items and double others (the same item shown twice).
    I am a fairly experienced Portal developer and have verified this myself.
    If I search for some word X I get 408 document hits. If the portlet is set to paginate the result at 200 hits per page I get 15 items doubled. The last 14 on page 1 (1-200) are the first 14 on page 2 (201-400). The last item on page 2 is the first on page 3 (401-408).
    Next I set pagination to 150 and search for the same word. Still 408 hits. This time 9 on the first page (1-150) showed up on the second page as well (151-300) (this time not last/first but 4 "single" items after/before the doubles). The last 12 items on page 2 was the same as the first 12 on page 3 (301-408). A total of 21 doubles.
    So I set pagination to 500 (all on one page) and searched for the word again. This time I'm unable to find any doubles.
    As the total number of hits was constant, this means that the first pagination did not show 15 items and the second pagination lost 21.
    I have searched the forums and MetaLink but found nothing.
    Has anyone heard of this before?
    Kind regards
    Tomas Albinsson
    Stockholm, Sweden

    OK, a month later...
    The same search word now gives 410 document hits (more docs have been added).
    A pagination of 200 now gives 15 doubles on page one (the last 15 are the same as the first 15 on page two) and one double on page two (the last item is the same as the first on page three).
    I set up a new custom search portlet, restricted it to just find items of a certain subtype and let the user search for text and select a year (custom attribute of the item type). Same result as the original search portlet.
    If I select another year I just get 26 docs. If pagination is set to 50 I see them all and no doubles. Changing pagination to 5 items per page produces two doubles. The last two on page 3 (items 14-15) are the same as the first two on page 4.
    Compared to the complete list I can see that items 14 and 15 are wrong, they hide two docs in those places in the complete list.
    Next I turned off the subtype restriction (ie all types are OK), leaving just search text and year for the user.
    The same search word that gave 26 above now gave 40 hits.
    A pagination of 15 was fine.
    A pagination of 10 was fine.
    A pagination of 8 gave two doubles: the last on page 3 (item 24) was the same as the first on page 4 (item 25) and the last on page 4 (item 32) was the same as the first on page 5 (item 33).
    A pagination of 24 gave one double: the last item on page 1 (item 24) was the same as the first on page 2 (item 25).
    A pagination of 23 gave two doubles: the last two on page 1 (items 22-23) was the same as the first two on page 2 (items 24-25).
    Next I turned off the year custom attribute LOV restriction, leaving only the search text (for items in one page group).
    A search for three words gave 42 hits.
    A pagination of 17 gave one double, the last item on page 1 (item 17) was the same as the first on page 2 (item 18).
    This seems fairly easy to reproduce. A custom search portlet restricted to items in a page group.
    Find search terms that give a reasonable number of hits. Set the pagination higher then the amount first, then to something that gives a few pages. The doubles seem to show up at the page "borders" (last/first on a page).
    As I have only one environment I'd be happy if someone could test this.
    Kind regards
    Tomas
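    I can't verify this against Portal internals, but duplicates and omissions that appear exactly at page borders are the classic symptom of paginating over a sort key that contains ties: if several items share the same key value, the backend is free to return them in a different relative order on each query, so an item near a boundary can appear on two pages while its neighbour appears on none. Deterministic pagination needs a unique tiebreaker appended to the ordering. A sketch of the principle (field names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class StablePaging {
    static class Doc {
        final String year; // sort key with ties
        final int id;      // unique
        Doc(String year, int id) { this.year = year; this.id = id; }
    }

    // Ordering by year alone is ambiguous when years repeat; appending the
    // unique id as a tiebreaker makes every page boundary deterministic.
    static final Comparator<Doc> STABLE =
            Comparator.comparing((Doc d) -> d.year).thenComparingInt(d -> d.id);

    static List<Doc> page(List<Doc> docs, int pageNum, int perPage) {
        List<Doc> sorted = new ArrayList<>(docs);
        sorted.sort(STABLE);
        int from = Math.min(pageNum * perPage, sorted.size());
        return sorted.subList(from, Math.min(from + perPage, sorted.size()));
    }

    public static void main(String[] args) {
        List<Doc> docs = new ArrayList<>();
        for (int i = 0; i < 26; i++) docs.add(new Doc("2005", i)); // all tied on year
        Set<Integer> seen = new HashSet<>();
        Random rnd = new Random(42);
        for (int p = 0; p * 5 < docs.size(); p++) {
            // shuffle to mimic the backend returning tied rows in a new order per query
            Collections.shuffle(docs, rnd);
            for (Doc d : page(docs, p, 5)) seen.add(d.id);
        }
        System.out.println(seen.size()); // 26: every item lands on exactly one page
    }
}
```

Without the id tiebreaker, the shuffled ties would produce exactly the border doubles and losses described above.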

  • Large servlet or several small ones??

    I am building a servlet for a web application which is becoming quite large. I am beginning to wonder about the resources this will use on the server.
    Does anyone have any views on whether it would be better to split the servlet into several smaller ones or keep one large do-it-all servlet?
    cheers
    chris

    I read these questions and answers, and I'm sure small servlets are better programming practice, but my question is: what is faster?
    I mean, one big servlet needs time to load and initialize, but only the first time; many small servlets or a framework need time to instantiate objects on every call.
    Am I wrong?

  • Query result pagination performance

    Hi
    I have CQ5.4 code (extract below) which uses QueryBuilder to create a query.  The result set is quite large (~2000 nodes) and ordered. I set the hits per page and the start page as I display the results using a pager.
    Performance is good for the first page of results, but performance degrades quite significantly as the page start value is increased towards the end of the result set.  I find this strange as all nodes must always be accessed as the result set is sorted.
    Does anyone have suggestions as to how I might resolve this performance issue?
    Thanks
    Simon
     QueryBuilder queryBuilder = resource.getResourceResolver().adaptTo(QueryBuilder.class);
     Session session = resource.getResourceResolver().adaptTo(Session.class);
     Query query = queryBuilder.createQuery(PredicateGroup.create(map), session);
     if (query != null) {
         int hitLimit = (pageMaximum > 0) ? pageMaximum : limit;
         if (limit > 0 && hitLimit > limit)
             hitLimit = limit;
         query.setHitsPerPage(hitLimit);
         query.setStart(pageStart);
         SearchResult result = query.getResult();
         totalMatches = result.getTotalMatches();
         actLimit = (limit > 0) ? Math.min(limit, (int) totalMatches) : (int) totalMatches;
     }

    Sure, I can provide lots more information.  I've tried many different styles of query, but found the behaviour to always be the same.  The log extracts below show an example query and how increasing the offset using query.setStart() impacts the time taken by query.getResult().
    I modified the code a little for some extra debug so it's clear where the time is being spent, snippet below.
    I guess my question really is: has anyone else used query.setStart() to set the offset for paging, did they see similar increases in response times, and did they find a resolution?
    Does anyone else agree that it's strange that all the hard work of searching and sorting is fast, and, as expected, takes the same time irrespective of the offset, but returning the results takes longer when the offset is increased ?
    Regards
    Simon
     Query query = queryBuilder.createQuery(PredicateGroup.create(map), session);
     if (query != null) {
         int hitLimit = (pageMaximum > 0) ? pageMaximum : limit;
         if (limit > 0 && hitLimit > limit)
             hitLimit = limit;
         query.setHitsPerPage(hitLimit);
         query.setStart(pageStart);
         log.debug("Time to prepare : " + ((new Date()).getTime() - start) + "ms");
         SearchResult result = query.getResult();
         log.debug("Prep and getResult : " + ((new Date()).getTime() - start) + "ms");
     }
    Example 1:  Offset 0, query takes 328 ms, results returned in 328 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.xxx.wcm.components.List Time to prepare : 0ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (URL):
    group.0_property=jcr%3acontent%2fpathTags&group.0_property.0_value=testing&group.0_propert y.1_value=portal&group.0_property.2_value=de&group.0_property.and=true&group.p.or=true&ord erby=%40jcr%3acontent%2fcq%3alastModified&orderby.index=true&orderby.sort=desc&p.limit=25& p.offset=0
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (predicate tree):
    ROOT=group: limit=25, offset=0[
        {group=group: or=true[
            {0_property=property: 1_value=portal, 2_value=de, property=jcr:content/pathTags, 0_value=testing, and=true}
        {orderby=orderby: index=true, sort=desc, orderby=@jcr:content/cq:lastModified}
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query: //*[((jcr:content/@pathTags = 'portal' and jcr:content/@pathTags = 'de' and jcr:content/@pathTags = 'testing'))] order by jcr:content/@cq:lastModified descending
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query took 328 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl >> xpath query returned 2643 results
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl entire query execution took 328 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=0.html HTTP/1.1] com.xxx.wcm.components.List Prep and getResult : 328ms
    Example 2:  Offset 50, query takes 312ms, results returned in 890ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.xxx.wcm.components.List Time to prepare : 0ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (URL):
    group.0_property=jcr%3acontent%2fpathTags&group.0_property.0_value=testing&group.0_propert y.1_value=portal&group.0_property.2_value=de&group.0_property.and=true&group.p.or=true&ord erby=%40jcr%3acontent%2fcq%3alastModified&orderby.index=true&orderby.sort=desc&p.limit=25& p.offset=50
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (predicate tree):
    ROOT=group: limit=25, offset=50[
        {group=group: or=true[
            {0_property=property: 1_value=portal, 2_value=de, property=jcr:content/pathTags, 0_value=testing, and=true}
        {orderby=orderby: index=true, sort=desc, orderby=@jcr:content/cq:lastModified}
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query: //*[((jcr:content/@pathTags = 'portal' and jcr:content/@pathTags = 'de' and jcr:content/@pathTags = 'testing'))] order by jcr:content/@cq:lastModified descending
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query took 312 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl >> xpath query returned 2643 results
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl entire query execution took 312 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=50.html HTTP/1.1] com.xxx.wcm.components.List Prep and getResult : 890ms
    Example 3: Offset 2625, query takes 359ms, results returned in 10250ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.xxx.wcm.components.List Time to prepare : 0ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (URL):
    group.0_property=jcr%3acontent%2fpathTags&group.0_property.0_value=testing&group.0_propert y.1_value=portal&group.0_property.2_value=de&group.0_property.and=true&group.p.or=true&ord erby=%40jcr%3acontent%2fcq%3alastModified&orderby.index=true&orderby.sort=desc&p.limit=25& p.offset=2625
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl executing query (predicate tree):
    ROOT=group: limit=25, offset=2625[
        {group=group: or=true[
            {0_property=property: 1_value=portal, 2_value=de, property=jcr:content/pathTags, 0_value=testing, and=true}
        {orderby=orderby: index=true, sort=desc, orderby=@jcr:content/cq:lastModified}
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query: //*[((jcr:content/@pathTags = 'portal' and jcr:content/@pathTags = 'de' and jcr:content/@pathTags = 'testing'))] order by jcr:content/@cq:lastModified descending
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl xpath query took 359 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl >> xpath query returned 2643 results
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.day.cq.search.impl.builder.QueryImpl entire query execution took 359 ms
    GET /cm/content/testing/portal/de/simon/article2.contentparsys_list_start=2625.html HTTP/1.1] com.xxx.wcm.components.List Prep and getResult : 10250ms

  • Servlet results image - intermittent error

    Hi!
    I have an HTML page that calls a servlet from the src of an image tag. The servlet returns an image as its result. The intermittent error is that the image doesn't appear the very first time. Only when I refresh the page does the image appear. If I call the page from another browser, the image always appears. But if I restart the application server, the image doesn't appear the first time, again.
    I use WebSphere Application Server 3.5 and develop the applications with VisualAge for Java 3.5. The error occurs in both environments.
    Can someone help me?
    thanks
    html: index.html
    <HTML>
    <HEAD>
    </HEAD>
    <BODY bgcolor="#FFFFFF">
    This is your logo:
    <p><img src="servlet/pLog.Counter"></p>
    </BODY>
    </HTML>
    servlet: pLog.Counter
    package pLog;
    import java.io.*;
    import java.net.*;
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.util.*;
    import com.sun.image.codec.jpeg.*;
    import java.awt.image.*;
    import java.awt.*;
     public class Counter extends HttpServlet {
          public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, java.io.IOException {
               doPost(request, response);
          }
          public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, java.io.IOException {
               response.setContentType("image/jpeg");
               ServletOutputStream out = response.getOutputStream();
               String strImagePath = "http://localhost:8080/images/logogedas.jpg";
               Toolkit toolkit = Toolkit.getDefaultToolkit();
               URL urlIMAGE = new URL(strImagePath);
               Image image1 = toolkit.getImage(urlIMAGE);
               int iWid = image1.getWidth(null);
               int iHei = image1.getHeight(null);
               BufferedImage image = new BufferedImage(iWid, iHei, BufferedImage.TYPE_INT_RGB);
               Graphics g = image.getGraphics();
               g.drawImage(image1, 0, 0, null);
               JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
               encoder.encode(image);
               out.close();
          }
     }

    Hi,
    You'll need to define a MediaTracker to wait for the image to be loaded completely. Here's the modified code:
    package pLog;
    import java.io.*;
    import java.net.*;
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.util.*;
    import com.sun.image.codec.jpeg.*;
    import java.awt.image.*;
    import java.awt.*;
     public class Counter extends HttpServlet {
          public void doGet(HttpServletRequest request, HttpServletResponse response)
               throws ServletException, java.io.IOException {
               doPost(request, response);
          }
          public void doPost(HttpServletRequest request, HttpServletResponse response)
               throws ServletException, java.io.IOException {
               response.setContentType("image/jpeg");
               ServletOutputStream out = response.getOutputStream();
               String strImagePath = "http://localhost:8080/images/logogedas.jpg";
               Toolkit toolkit = Toolkit.getDefaultToolkit();
               URL urlIMAGE = new URL(strImagePath);
               Image image1 = toolkit.getImage(urlIMAGE);
               Frame frame = new Frame();
               frame.addNotify();
               MediaTracker tracker = new MediaTracker(frame);
               tracker.addImage(image1, 0);
               try {
                    tracker.waitForAll();
               } catch (InterruptedException e) {
                    log("Interrupted while loading image");
                    throw new ServletException(e.getMessage());
               }
               int iWid = image1.getWidth(null);
               int iHei = image1.getHeight(null);
               BufferedImage image = new BufferedImage(iWid, iHei, BufferedImage.TYPE_INT_RGB);
               Graphics g = image.getGraphics();
               g.drawImage(image1, 0, 0, null);
               JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
               encoder.encode(image);
               out.close();
          }
     }
     Another possibility is to implement the ImageObserver interface (although it's a little more complicated).
    Hope this helps,
    Kurt.
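    As a side note, on JDK 1.4 and later the Frame/MediaTracker dance can be avoided entirely: javax.imageio.ImageIO.read blocks until the image is fully decoded, and ImageIO.write replaces the non-portable com.sun.image.codec.jpeg classes. A sketch of the servlet's core, written against an InputStream so it is testable outside a container (in the servlet, pass new URL(strImagePath).openStream() and write the bytes to response.getOutputStream()):

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.imageio.ImageIO;

public class ImageFetch {
    // ImageIO.read returns only after the whole image is decoded, so there
    // is no load/encode race as with Toolkit.getImage.
    static byte[] fetchAsJpeg(InputStream source) throws IOException {
        BufferedImage image = ImageIO.read(source);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(image, "jpeg", out);
        return out.toByteArray();
    }
}
```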

  • Large query result set

    Hi all,
    At the moment we have some Java classes (not EJB CMP/BMP) for search in our EJB application.
    Now we have a problem: records have grown too high (millions) and sometimes a query results in the retrieval of millions of records. This results in too much memory consumption in our EJB application. What is the best way to address this issue?
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez

    You can think of the following options:
    1) paging: read only a few thousand at a time and maintain an index to page through the complete dataset
    2) caching!
    a) you can create a serialized data file on the server to cache the result set and use that to browse through. You may do on-the-fly compression/decompression while sending data to the client.
    b) an applet-based solution where caching could be on the client side. Look at
    http://www.sitraka.com/software/jclass/cs_ims.html
    thanks,
    Srinivas
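    Option 1 (paging) keeps only one block in memory at a time. How the block is fetched is database-specific (LIMIT/OFFSET on MySQL, ROWNUM ranges on Oracle, or a scrollable cursor with Statement.setFetchSize as a driver hint), but the reading pattern itself is generic. A sketch with the query reduced to a pluggable fetch function (the names and the in-memory "table" are illustrative):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.BiFunction;

public class BlockFetcher {
    // Iterates a large result set one block at a time. fetch(offset, limit)
    // stands in for re-running the query with that window, e.g.
    // "SELECT ... ORDER BY id LIMIT ? OFFSET ?" (syntax varies by database).
    static <T> Iterator<T> blocks(BiFunction<Integer, Integer, List<T>> fetch,
                                  int blockSize) {
        return new Iterator<T>() {
            List<T> block = fetch.apply(0, blockSize);
            int offset = 0, pos = 0;

            public boolean hasNext() {
                if (pos < block.size()) return true;
                if (block.size() < blockSize) return false; // short block: end of data
                offset += blockSize;
                block = fetch.apply(offset, blockSize); // next window; old one is freed
                pos = 0;
                return !block.isEmpty();
            }

            public T next() {
                if (!hasNext()) throw new NoSuchElementException();
                return block.get(pos++);
            }
        };
    }

    public static void main(String[] args) {
        List<Integer> table = new ArrayList<>(); // stands in for the database table
        for (int i = 0; i < 25; i++) table.add(i);
        Iterator<Integer> it = blocks(
                (off, lim) -> table.subList(Math.min(off, table.size()),
                                            Math.min(off + lim, table.size())),
                10);
        int count = 0;
        while (it.hasNext()) { it.next(); count++; }
        System.out.println(count); // 25: all rows seen, at most 10 in memory at once
    }
}
```

    Note that offset-based windows assume a stable ORDER BY (ideally over a unique key), otherwise rows can repeat or vanish between blocks.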
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
    Thanks Slava Imeshev,
    We already have search criteria and a limit. When records exceed that limit we prompt the user that it may take some time: do you want to proceed? If he clicks yes, then we retrieve those records. This results in a lot of memory consumption.
    I was thinking there might be some way to retrieve a block of records at a time from the database, rather than all records of a query. I wonder how internet search sites work, where thousands of sites/pages match the criteria and the client can move back and forth over any page.
    Regards,
    Parvez
    "Slava Imeshev" <[email protected]> wrote in message
    news:[email protected]...
    Hi chauhan,
    You may want to narrow search criteria along with processing a
    limited number of resulting records. I.e. if the size of the result
    is bigger than a limit, you stop fetching results and notify the client
    that search criteria should be narrowed.
    HTH.
    Regards,
    Slava Imeshev

  • PHP results pagination

    Hi All,
    I'm using the following script to display search results on various pages, but there are so many pages of results the list of pages takes up too much room, see;
    http://dev.merco.co.uk/node/9
    I would like the list to look more like this;
    1  2 3 ....  57 58 59 [Next] Last Page
    With dots to show there are more pages between the first and last pages.
    Any advice on how to do this would be much appreciated!
    Isabelle
    ***** CODE START *****
    <?php
    // how many rows to show per page
    $rowsPerPage = 5;
    // by default we show first page
    $pageNum = 1;
    // if $_GET['page'] defined, use it as page number
    if (isset($_GET['page'])) {
        $pageNum = (int) $_GET['page'];
    }
    // counting the offset
    $offset = ($pageNum - 1) * $rowsPerPage;
    $user_name = "xxxx";
    $password = "xxxx";
    $database = "xxxx";
    $server = "xxxx";
    $db_handle = mysql_connect($server, $user_name, $password);
    $db_found = mysql_select_db($database, $db_handle);
    // how many rows we have in database
    $query   = "SELECT COUNT(job_title) AS numrows FROM jobs";
    $result  = mysql_query($query) or die('Error, query failed');
    $row     = mysql_fetch_array($result, MYSQL_ASSOC);
    $numrows = $row['numrows'];
    // how many pages we have when using paging?
    $maxPage = ceil($numrows / $rowsPerPage);
    // print the link to access each page
    $self = $_SERVER['PHP_SELF'];
    $nav  = '';
    for ($page = 1; $page <= $maxPage; $page++) {
        if ($page == $pageNum) {
            $nav .= " $page "; // no need to create a link to current page
        } else {
            $nav .= " <a href=\"9?page=$page\">$page</a> ";
        }
    }
    // creating previous and next links, plus the links to
    // go straight to the first and last page (the braces matter:
    // without them only the first statement is conditional)
    if ($pageNum > 1) {
        $page  = $pageNum - 1;
        $prev  = " <a href=\"9?page=$page\">[Prev]</a> ";
        $first = " <a href=\"9?page=1\">First Page</a> ";
    } else {
        $prev  = ''; // we're on page one, don't print previous link
        $first = ''; // nor the first page link
    }
    if ($pageNum < $maxPage) {
        $page = $pageNum + 1;
        $next = " <a href=\"9?page=$page\">[Next]</a> ";
        $last = " <a href=\"9?page=$maxPage\">Last Page</a> ";
    } else {
        $next = ''; // we're on the last page, don't print next link
        $last = ''; // nor the last page link
    }
    ?>

    Hi,
    if you have the parameters in the $_GET array, which you most probably do, you can just do this:
    $myget = $_GET;
    $myget['offset'] = (insert your offset here);
    $newurl = 'http://www.mysite.com/results.php?' . http_build_query($myget);
    Also, since this is an Oracle/PHP forum, you might have better luck asking these kinds of questions in a generic PHP forum, as your question doesn't have much to do with Oracle itself.
    Hope this helps,
    Michal
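    As for the actual question (the "1 2 3 ... 57 58 59" layout), the link list just needs to be limited to a window at each end, with an ellipsis for the gap. The selection logic is independent of PHP or the output markup; a sketch (the window size is arbitrary):

```java
import java.util.ArrayList;
import java.util.List;

public class PageLinks {
    // Labels for a condensed pager: the first `edge` pages, an ellipsis,
    // and the last `edge` pages. A variant could also open a window
    // around the current page.
    static List<String> labels(int maxPage, int edge) {
        List<String> out = new ArrayList<>();
        if (maxPage <= 2 * edge) { // few pages: no ellipsis needed
            for (int p = 1; p <= maxPage; p++) out.add(String.valueOf(p));
            return out;
        }
        for (int p = 1; p <= edge; p++) out.add(String.valueOf(p));
        out.add("...");
        for (int p = maxPage - edge + 1; p <= maxPage; p++) out.add(String.valueOf(p));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(labels(59, 3)); // [1, 2, 3, ..., 57, 58, 59]
        System.out.println(labels(4, 3));  // [1, 2, 3, 4]
    }
}
```

    In the PHP script above, this shape would replace the unconditional loop that builds $nav: emit a link for each numeric label and the literal dots otherwise.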

  • Applet-to-Servlet results in a new browser window

    Hi Everyone,
    I am trying to show a page generated by some servlet in a new browser window when the button is clicked on the applet. I can accomplish that by using AppletContext as in:
    myApplet.getAppletContext().showDocument(servletUrl, "_blank");
    The only problem with this is that I don't have any control over how the new browser window is displayed. Using JavaScript I can set size, hide toolbar and address bar, etc. But how can I hide address bar on a new browser window generated in this scenario?
    I tried calling an intermediate servlet from my applet which in turn generates a new window by writing out JavaScript code. This way I lose my previous page with the applet, even though the new window is displayed the way I wanted; I want the new browser window to act as a pop-up box and my applet page to stay on screen. Also, is there a way to send a POST request from an applet using the AppletContext.showDocument() method?
    Any help will be greatly appreciated!
    Thanks in advance,
    Y.M.

    Hi,
    You can specify an intermediary HTML page which in turn makes the call to the servlet:
    myApplet.getAppletContext().showDocument(html, "_blank");
    In the HTML's onload, make the call to the servlet URL using
    self.open(servletURL, "_self", options);
    In the options you can specify size, toolbar, etc.
    Hope this helps.
    KD

  • ValueListHandler, Large Results and Clustering

    Has anybody got experience of using the ValueListHandler pattern with a session facade and potentially very large query results, e.g. millions of results (even when filtered)?
    How did this solution scale with many users each with a stateful session bean containing all of the results? How did state replication over a cluster scale? Are there any better solutions you have implemented?
    Any experience/tips would be much appreciated.
    Duncan Eley

    Ah, ValueListHandler, a pattern whose sole existence is due to the limitations of entity beans. Ah, the old painful days of EJB. (I digress)
    Yes, there are several solutions. Do you need millions of rows? There are a few ways to get around this, depending on your requirements:
    Unfortunately, the current implementation of the system could result in millions of rows, paged of course, being delivered to the end user. I am yet to discuss how useful this could be to the end user -
    - it is quite possibly useless but that's for our
    users to decide.
    There are business requirements, and there are also technical realities. First approach them with, "How would you even scroll through a million records?" Then, if they persist, "Well, it doesn't matter anyway because a million records will either break the server or require you to buy ten for every one you would have purchased otherwise."
    If you are storing all those rows to perform a series of calculations, perform the calculations 'close' to the ResultSet itself, meaning read each row and update your calculations accordingly. You should then simply have to return the calculation results. This would typically be done during a batch process run or a report.
    If you are only displaying, say, a hundred at a time, implement pagination. This would be like in Google where you see 1 ... n for however many pages of data there are. Rather than returning a million rows, SELECT the count first and then SELECT however many rows are appropriate for a page. You can use ROWNUM (for Oracle) or LIMIT (for ANSI-compliant RDBMS) to 'page' the results returned by the database.
    This approach would require two queries to begin
    with (count and first page) then a query for each
    page. What worries me about this approach is that if
    the query consists of multiple joins on tables with
    millions of rows, the queries can be quite slow. And
    having used this technique once before on a complex
    query with GROUP BY, ORDER BY and HAVING, using LIMIT
    was not much quicker than not using LIMIT (in MySQL
    4.0).
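To make the count-then-page idea concrete, here is a minimal sketch that just builds the SQL strings. The table and column names are invented, and the LIMIT/OFFSET form shown is for MySQL/PostgreSQL-style databases; Oracle's ROWNUM wrapping would look different.

```java
public class PageQuery {
    // Builds one page's query for a database that supports LIMIT/OFFSET
    // (MySQL, PostgreSQL). Oracle would wrap the query with ROWNUM instead.
    public static String pageQuery(String baseQuery, int page, int pageSize) {
        int offset = page * pageSize;
        return baseQuery + " LIMIT " + pageSize + " OFFSET " + offset;
    }

    // The initial count query used to compute how many pages exist.
    public static String countQuery(String table, String where) {
        return "SELECT COUNT(*) FROM " + table
             + (where.isEmpty() ? "" : " WHERE " + where);
    }

    public static void main(String[] args) {
        System.out.println(countQuery("items", ""));
        System.out.println(pageQuery("SELECT id, name FROM items ORDER BY id", 2, 100));
    }
}
```

Each "Next" link then just carries the page number back to the servlet, which re-runs only the one-page query.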
    You can always serialize the results to the file system or store the query results in a temporary table. The latter is nice because LIMIT works on that (smaller with fewer joins) query. The issue you will run into is how and when to clear out 'stale' query results. Depending on how much disk you have, you could conceivably dedicate a parent record for each user. When the user requested another query, the existing one would be overwritten. A batch process could expire all stored results at the end of the night, week, month, etc.
    If all else fails, and this is a very rare requirement that literally millions of rows must be sent to either the app server or the client, then store the results temporarily in the file system of the app server. This is a last resort. I would be shocked to find real, valid business requirements to actually hold onto millions of rows.
    I agree this would be a last resort: it would not work
    in a cluster, clean up issues etc. I've seen one
    solution where results were stored back in the
    database as a BLOB!? See:
    http://wldj.sys-con.com/read/45563.htm
    BLOB is a possibility, but I think a dedicated temporary table is more elegant. How would you paginate a BLOB without loading it into memory first?
    Hope that stimulates a few ideas. Why do you have
    millions of rows? (BTW, regarding state replication, this would make a horrendous situation that much worse; it would in all likelihood gum up your network and cause all your machines to run out of memory soon.)
    Thanks for your input. If the requirements
    cannot change then I guess at the moment I'll have to
    compare the 'one query, page through results in
    stateful session bean' approach with the 'multiple
    but limited queries approach'.
    I think the former has memory scaling issues and the latter may have performance issues.
    Has anybody already compared these two approaches?
    What do people think of the 'results stored in the
    database' approach?
    - Saish
    Duncan Eley
    Interesting discussion!
    - Saish

  • Browser refresh causes results to increment

    Hi,
    I have a servlet that is using a database to query results from a survey. The survey has 3 questions and each one of them has 4 possible answers. The query runs fine and displays in the web browser, but once I do a refresh some of the data increments its value at a constant rate.
    Here is the code:
    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.sql.*;

    public class Survey extends HttpServlet {
        private Connection con = null;
        private Statement stmt = null;
        private String url = "jdbc:odbc:survey";
        private String table = "results";
        private int numQues = 3;
        private int[] numAns = {4, 4, 4};
        private int num = 0;

        public void init() throws ServletException {
            try {
                // loading the jdbc-odbc bridge
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                // making a connection
                con = DriverManager.getConnection(url, "anonymous", "guest");
            } catch (Exception e) {
                e.printStackTrace();
                con = null;
            }
        }

        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            String[] results = new String[numQues];
            for (int i = 0; i < numQues; i++) {
                results[i] = req.getParameter("q" + i);
            }
            // test if the user has answered all the questions
            String resultsDb = "";
            for (int i = 0; i < numQues; i++) {
                if (i + 1 != numQues) {
                    resultsDb += "'" + results[i] + "',";
                } else {
                    resultsDb += "'" + results[i] + "'";
                }
            }
            boolean success = insertIntoDb(resultsDb);
            // print a thank-you message
            res.setContentType("text/html");
            PrintWriter output = res.getWriter();
            StringBuffer buffer = new StringBuffer();
            buffer.append("<HTML>");
            buffer.append("<HEAD>");
            buffer.append("</HEAD>");
            buffer.append("<BODY BGCOLOR=\"#FFFFFF\">");
            buffer.append("<P>");
            if (success) {
                buffer.append("Thank you for participating!");
            } else {
                buffer.append("An error has occurred. Please press the back button of your browser");
                buffer.append(" and try again.");
            }
            buffer.append("</P>");
            buffer.append("</BODY>");
            buffer.append("</HTML>");
            output.println(buffer.toString());
            output.close();
        }

        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws IOException {
            res.setContentType("text/html");
            PrintWriter output = res.getWriter();
            StringBuffer buffer = new StringBuffer();
            buffer.append("<HTML>");
            buffer.append("<HEAD>");
            buffer.append("</HEAD>");
            buffer.append("<BODY BGCOLOR=\"#FFFFFF\">");
            buffer.append("<P>");
            try {
                stmt = con.createStatement();
                // find the number of participants
                for (int i = 0; i < 1; i++) {
                    String query = "SELECT q" + i + " FROM " + table;
                    ResultSet rs = stmt.executeQuery(query);
                    while (rs.next()) {
                        rs.getInt("q" + i);
                        num++;
                    }
                }
                // loop through each question
                for (int i = 0; i < numQues; i++) {
                    int[] results = new int[num];
                    String query = "SELECT q" + i + " FROM " + table;
                    ResultSet rs = stmt.executeQuery(query);
                    int j = 0;
                    while (rs.next()) {
                        results[j] = rs.getInt("q" + i);
                        j++;
                    }
                    // call method
                    int[] total = percent(results, 4);
                    buffer.append("Question " + i + ":<BR>");
                    for (int k = 0; k < 4; k++) {
                        buffer.append(" > Answer " + k + ": " + total[k]);
                        buffer.append("<BR>");
                    }
                    buffer.append("\n");
                }
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
            // display the results
            buffer.append("</P>");
            buffer.append("</BODY>");
            buffer.append("</HTML>");
            output.println(buffer.toString());
            output.close();
        }

        public void destroy() {
            try {
                con.close();
            } catch (Exception e) {
                System.err.println("Problem closing the database");
            }
        }

        public boolean insertIntoDb(String results) {
            String query = "INSERT INTO " + table + " VALUES (" + results + ");";
            try {
                stmt = con.createStatement();
                stmt.execute(query);
                stmt.close();
            } catch (Exception e) {
                System.err.println("ERROR: Problems with adding new entry");
                e.printStackTrace();
                return false;
            }
            return true;
        }

        public int[] percent(int[] array, int numOptions) {
            System.out.println("==============================================");
            int[] total = new int[numOptions];
            // initialize array
            for (int i = 0; i < total.length; i++) {
                total[i] = 0;
            }
            for (int j = 0; j < numOptions; j++) {
                for (int i = 0; i < array.length; i++) {
                    System.out.println("j=" + j + "\t" + "i=" + i + "\ttotal[j]=" + total[j]);
                    if (array[i] == j) {
                        total[j]++;
                    }
                }
            }
            System.out.println("==============================================");
            return total;
        }
    }
    Thanks!

    Hi,
    I encountered a similar problem. The root cause was that the URL in the browser's location bar still pointed to the same servlet (which handles the HTTP POST/GET request to update the database) after the result was returned by the servicing servlet.
    For example, in your case the "Survey" servlet URL.
    The scenario is like this.
    1.Browser (HTML form) ---HTTP post ---> Survey Servlet (service & update database)
    2.Survey Servlet -- (Result in HTML via HttpServletResponse)---> Browser (display the result)
    Note that after step 2, the browser's location bar still points to the Survey Servlet URL (used in the HTML form's action value). So if a refresh is performed here, the same action will be repeated and, as a result, two identical records will be created in your database table instead of one.
    A way to work around this is to split the servlet into two: one performing the update of the database and one responsible for displaying the result.
    Things then become as follows:
    1. Browser (HTML form) ---HTTP post ---> Survey Servlet (service & update database)
    2. Survey Servlet -- (Redirect the HTTP request) ---> Display Result Servlet
    3. Display Result Servlet -- (Acknowledgement in HTML via HttpServletResponse) ---> Browser (display the result)
    Note that now the browser's location bar will point to the Display Result Servlet. A refresh will be posted to the Display Result Servlet, which will not duplicate the database update activity.
    To redirect the request to another servlet, use res.sendRedirect(url), where res is the HttpServletResponse object and url is the effective URL of the target servlet.
    Hope this helps.
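Here is a runnable sketch of this Post/Redirect/Get flow. To keep it standalone it uses the JDK's built-in com.sun.net.httpserver instead of the servlet API; the paths and messages are made up, and the database insert is only indicated by a comment.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class PrgDemo {
    public static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // Steps 1-2: the "Survey Servlet" role. It would update the database,
        // then redirect instead of rendering the result page itself.
        server.createContext("/survey", ex -> {
            // insertIntoDb(...) would happen here
            ex.getResponseHeaders().add("Location", "/result");
            ex.sendResponseHeaders(303, -1);   // 303 See Other: redirect after POST
            ex.close();
        });
        // Step 3: the "Display Result Servlet" role. Read-only, safe to refresh.
        server.createContext("/result", ex -> {
            byte[] body = "Thank you for participating!".getBytes("UTF-8");
            ex.sendResponseHeaders(200, body.length);
            OutputStream os = ex.getResponseBody();
            os.write(body);
            ex.close();
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        int port = server.getAddress().getPort();
        HttpURLConnection c = (HttpURLConnection)
                new URL("http://localhost:" + port + "/survey").openConnection();
        c.setRequestMethod("POST");
        c.setInstanceFollowRedirects(false);
        c.setDoOutput(true);
        c.getOutputStream().close();
        // prints: 303 -> /result
        System.out.println(c.getResponseCode() + " -> " + c.getHeaderField("Location"));
        server.stop(0);
    }
}
```

After the redirect, the browser's location bar points at /result, so a refresh only re-runs the read-only handler.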

  • Can we pass a temporary result set to the procedure?

    Hi,
    The result set is stored in a temporary storage. Can we pass that resultset to the procedure?
    Thanks and regards
    Gowtham Sen.

    I'm still unclear just what this result set is... Is it a table, a cursor, or something else?
    If you mean the physical result set of a SQL query, there is no such thing in Oracle. Oracle does not create and store a result set in memory containing the rows of the SQL SELECT. Creating such sets in memory does not scale. A single SELECT on such a database could kill the performance of the entire database by exhausting all memory with a single large physical result set.
    Oracle "results" live in the database cache (residing in the SGA). Rows (as data blocks) are paged into and out of this cache as demand dictates. When PL/SQL code, for example, fetches a row, the SQL engine grabs the row from the db cache (SGA) and copies it to the PGA (the private memory area of the PL/SQL process). The row also may not yet exist in the db cache - in which case it needs a physical read from disk to get the data block containing that row into the db cache (after which it is copied to the PGA).
    A PL/SQL process can do a bulk fetch - e.g. fetch 100 rows from the SQL query/cursor at a time. In that case, 100 rows are copied from the SGA db cache to the PGA.
    At no time is there a single large, unique and dedicated memory structure in the SGA that contains the complete "result set" of a SQL query.
    Once you have fetched that row, that is it. Deal is done. You cannot reverse the cursor and fetch the row again. After you have fetched the last row in that cursor, you cannot pass that cursor to another process - the cursor is now empty. That other process cannot rewind the cursor and start fetching from the 1st row again. You will need to pass the SQL to that process in order for it to create its own cursor - also keeping in mind that in the meantime the rows can have changed, and that this other process could then see different results with its cursor.
    If you want to create such a physical temporary result set that is consistent and re-usable, you can use a temporary table - and insert the results of the SELECT into this temp table for further processing. This temp table is session specific and is auto destroyed when the session terminates.
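As a rough illustration of this temp-table idea, here are the two statements built as Java strings (all table and column names are invented; the syntax shown is Oracle-style):

```java
public class TempTableSql {
    // Creates a session-private global temporary table shaped like the source
    // query but empty (WHERE 1=0). Rows inserted later are visible only to the
    // inserting session. Names here are examples, not from the original post.
    public static String createTempTable() {
        return "CREATE GLOBAL TEMPORARY TABLE my_results "
             + "ON COMMIT PRESERVE ROWS "
             + "AS SELECT * FROM source_table WHERE 1=0";
    }

    // Materializes the (possibly expensive) SELECT once; later pagination
    // queries then run against the much smaller my_results table.
    public static String fillTempTable(String selectSql) {
        return "INSERT INTO my_results " + selectSql;
    }

    public static void main(String[] args) {
        System.out.println(createTempTable());
        System.out.println(fillTempTable("SELECT * FROM source_table WHERE region = 'EU'"));
    }
}
```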
    A comment though - it sounds like you're approaching the data warehouse processing (scrubbing, transformation and loading of data) as a row-by-row process.
    That is a flawed approach. Row-by-row processing does not scale. One should deal with data sets, especially when the volumes are large. One should also attempt to perform minimal passes through a data set. Processing a bunch of rows, then passing those rows to another process to do some processing on the same rows... this means multiple passes through the same data. That is very inefficient, performance- and resource-wise.
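The "minimal passes" point can be shown in miniature: one loop computing several aggregates at once, instead of scanning the data once per aggregate. This is just an illustration of the principle, not code from the original discussion.

```java
import java.util.Arrays;

public class SinglePass {
    // One pass over the data computes count, sum and max together,
    // where a naive approach would scan the array three times.
    public static long[] stats(int[] rows) {
        long count = 0, sum = 0, max = Long.MIN_VALUE;
        for (int r : rows) {
            count++;
            sum += r;
            if (r > max) max = r;
        }
        return new long[] { count, sum, max };
    }

    public static void main(String[] args) {
        // prints: [5, 14, 5]
        System.out.println(Arrays.toString(stats(new int[] {3, 1, 4, 1, 5})));
    }
}
```

The same principle applies at the SQL level: compute aggregates in the database in one statement rather than shipping rows out and scanning them repeatedly.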

  • How to create a Servlet

    Hello All,
    I am trying to learn how to create a servlet in OAF. Although I kind of understand the concept of a servlet, I have doubts about where the servlet class file lives/resides. Does it matter where it resides, at least for now in the JDeveloper environment?
    The way I understand it is:
    1) In the oaf page need to place the item in which the servlet result is displayed
    2) In the PR of the CO of the above page, need to find the above item
    2.1) set the source of the above item to be the servlet class while passing in the params into the set source call.
    2.2) Does it matter where the servlet class file resides? ie., can it live anywhere in the project??
    Pls. help me understand this concept.
    Practically, in steps 2 and 3 of the servlet wizard of JDeveloper, what do these do?
    Step #2: URL Pattern
    Step #3: Parameters. Do I have to necessarily set up the params here in this step, or can they be dynamically set when the servlet is invoked in the PR of the CO? If yes, could you pls. provide a sample code. If params can be done either way, which way is effective? Reason for asking is, in my other post I mentioned that while trying to create a dynamic VO, it seemed to force me to explicitly set the SQL statement instead of simply running the SQL statement associated with the VO of the page (I am not sure why it is like that, but that worked).
    Thanks,
    Edited by: OAF_Monkey on Nov 16, 2012 4:11 PM

    The way I understand it (after following several older posts) is:
    1) When my destination page (say page2) renders, in the CO I need to find the image item/bean (I am assuming a bean and an item mean the same)
    2) After finding the item/bean, I need to set its source as "ReadImage", telling it to go get it from the web.xml while passing in the id
    for which it needs to find an image/blob in the db.
    At this point, if it finds the class "ReadImage" then it should get the
    2.1)db connection
    2.2)get the parameter passed in using
          String image_id = request.getParameter("s1");
         2.3)get the sql statement provided
    2.4)get the image/blob as an input stream and write to the output stream using the below
            InputStream in = null;
            in = rs.getBinaryStream("IMAGE");
         Below is my code in the PR of CO
          System.out.println("I am back in the IMG CO : primary key val is " + s1);
    BP1-->  OAImageBean imageItem = (OAImageBean)webBean.findIndexedChildRecursive("SJImage");
          System.out.println("I am back in the IMG CO1 : after finding the image item" );
    BP2-->     if (imageItem !=null ){
          System.out.println("I am back in the IMG CO2 : before setting the source of the image item" );    
    BP3-->     imageItem.setSource("/OA_HTML/WEB-INF/ReadImage?id=" + s1);
          System.out.println("I am back in the IMG CO3 : after setting the source of the image item" + imageItem.getSource());
          }
    When I ran it in debug mode (break points shown above as BP1, BP2, BP3), I can see that the value is getting passed in OK with the 's1' at BP1 and BP2.
    However, when it reaches BP3, if I expand all the nodes of the debug window, I can see that in the static fields of the CO it shows
    "Failed to establish the database connection." for the "DBCONNECT_ERROR_MSG" node.
    In the "ReadImage" class file the code to get connection is shown below:
        public void init(ServletConfig config) throws ServletException {
            super.init(config);
            try {
                Class.forName("oracle.jdbc.driver.OracleDriver");
            } catch (Exception exception) {
                exception.printStackTrace();
                throw new UnavailableException(exception.getMessage());
            }
        }
    In the doPost method:
            if (conn == null) {
                try {
                    WebAppsContext ctx = null;
                    ctx = WebRequestUtil.validateContext(request, response);
                    ctx = WebRequestUtil.createWebAppsContext(request, response);
                    conn = ctx.getJDBCConnection();
                    stmt = conn.createStatement();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
    Is this code correct to get a db connection? Pls. share thoughts/ideas/suggestions...
    Thanks,

  • ObjectInputStream with large files?

    I write two arrays to their own separate files using:
    FileOutputStream fos = new FileOutputStream("C:/location");
    ObjectOutputStream oos = new ObjectOutputStream(fos);
    oos.writeObject(array);
    oos.close();
    One file is 2MB and the other is 600MB.
    When attempting to read them using:
    InputStream fis = classname.class.getResourceAsStream("resources/filename");
    ObjectInputStream ois = new ObjectInputStream(fis);
    array = (double[][][]) ois.readObject();
    ois.close();
    The smaller array is loaded normally, but the larger one results in a java.io.EOFException.
    The code for reading and writing is exactly the same for each array with the exception of the array name.
    Why is this exception being thrown, and how might I correct this?

    It seems to work fine. Some streams detect end-of-file just by catching EOFException. Try to look here.
    Edited by: elOpalo on Dec 23, 2009 2:19 PM
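One way to narrow this down is a self-contained round trip that writes and reads through buffered file streams, taking getResourceAsStream out of the picture (that is one of the variables worth eliminating, since resources are often copied or filtered by the build). The file is a temp file and the array sizes are just examples; this is a sketch, not a diagnosis.

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class RoundTrip {
    // Serializes the array to a temp file. Buffered streams matter for large
    // files; try-with-resources guarantees close() (and thus a flush) runs.
    public static File write(double[][][] array) throws IOException {
        File f = File.createTempFile("array", ".ser");
        try (ObjectOutputStream oos = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)))) {
            oos.writeObject(array);
        }
        return f;
    }

    // Reads the array back from a plain FileInputStream.
    public static double[][][] read(File f) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(f)))) {
            return (double[][][]) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        double[][][] a = new double[2][3][4];
        a[1][2][3] = 42.0;
        File f = write(a);
        double[][][] b = read(f);
        System.out.println(b[1][2][3]);   // prints: 42.0
        f.delete();
    }
}
```

If this round trip works for the large array but getResourceAsStream still fails, compare the on-disk size of the original file with the copy the classloader actually sees.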
