Retrieve HTML page before parsing it

Hi:
I'm trying to parse a query result returned from a web site. The following code returns me nothing, not even the HTML header and the pre-filled tags. When I replace "while(s2!=null){" with a finite number of loops, say 100, it works; but when I increase the number of loops to 300 (the actual page returns 327 lines of HTML code), it gives me nothing again. Could anybody please let me know what's wrong with my code, or what I should do to retrieve an HTML page before I parse it? Thanks.
          String s1 = new String();
          String s2 = new String();
          try{
               URL u = new URL(url);
               InputStream ins = u.openStream();
               InputStreamReader isr = new InputStreamReader(ins);
               BufferedReader br = new BufferedReader(isr);
               while(s2!=null){
                    s2 = br.readLine();
                    s1 = s1.concat(s2);
               }
               //test part
               response.setContentType("text/html");
               PrintWriter out = response.getWriter();
               out.print(ServletUtilities.headWithTitle("Hello WWW") +
                    "<BODY>\n" + s1 +
                    "MANUALLY-ADDED" +
                    "</BODY></HTML>");
          } catch(Exception e){
               e.printStackTrace();
          }

Here is a simple [url http://forum.java.sun.com/thread.jsp?forum=31&thread=285107]example. Don't use the String.concat(..) method; use a StringBuffer and convert it to a String once the entire file has been read. Also note that at end of stream readLine() returns null, and String.concat(null) throws a NullPointerException, which is likely why you get no output at all.
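A minimal sketch of that advice (class and method names are my own, and the loop condition is corrected so null never reaches the accumulator):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

public class PageReader {
    /** Reads everything from the reader, line by line, into one String. */
    public static String readAll(Reader source) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(source);
        String line;
        // Assign and test in one step so the null that signals end-of-stream
        // is never appended (String.concat(null) would throw a NullPointerException).
        while ((line = br.readLine()) != null) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }
}
```

With a live page you would pass `new InputStreamReader(u.openStream())`; the loop shape is the important part.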

Similar Messages

  • Retrieving html pages back to Muse?

    I made some changes to a Muse site and uploaded the revisions as HTML. Now I can't find the original Muse revisions that are now up on the site. Is there any way to "bring back" the uploaded pages into Muse? I need to make more changes, but don't want to have to recreate the changes I already made.
    Any thoughts, recommendations?

    Hi,
    You can open only .muse files in Muse. There isn't a way to open the .html files in Muse.
    Regards,
    Aish

  • How does IAS integrate with Snacktory to get the main text from an HTML page

    Hi All,
    I am new to Endeca and IAS. I have a requirement: I need to get the main text from the whole HTML page before IAS saves the text to the Endeca_Document_Text property. Since IAS saves all the text on the page to Endeca_Document_Text, it is not readable when shown in a web page, so I use a third-party API to filter the main text out of the original page; now I want to save that text to the Endeca_Document_Text property.
    Another question:
    I get zero pages when doing the main-text filtering on the original HTML in my ParseFilter (HTMLMetatagFilter implements ParseFilter) using Snacktory.
    If it only does a little work it runs fine, but when it does more, the crawler fails to crawl the page. Does anyone know how to fix this?
    Log from the crawler:
    Successfully set recordstore configuration.
    INFO    2013-09-03 00:56:42,743    0    com.endeca.eidi.web.Main    [main]    Reading seed URLs from: /home/oracle/oracle/endeca/IAS/3.0.0/sample/myfirstcrawl/conf/endeca.lst
    INFO    2013-09-03 00:56:42,744    1    com.endeca.eidi.web.Main    [main]    Seed URLs: [http://www.liferay.com/community/forums/-/message_boards/category/]
    INFO    2013-09-03 00:56:43,497    754    com.endeca.eidi.web.db.CrawlDbFactory    [main]    Initialized crawldb: com.endeca.eidi.web.db.BufferedDerbyCrawlDb
    INFO    2013-09-03 00:56:43,498    755    com.endeca.eidi.web.Crawler    [main]    Using executor settings: numThreads = 100, maxThreadsPerHost=1
    INFO    2013-09-03 00:56:44,163    1420    com.endeca.eidi.web.Crawler    [main]    Fetching seed URLs.
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:46,519    3776    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:56:52,889    10146    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:52,889    10146    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:52,890    10147    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-1]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:56:59,184    16441    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:56:59,185    16442    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:56:59,185    16442    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into EndecaHtmlParser getParse
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    come into HTMLMetatagFilter
    INFO    2013-09-03 00:57:07,057    24314    com.endeca.eidi.web.parse.HTMLMetatagFilter    [pool-1-thread-2]    meta tag viewport ==minimum-scale=1.0, width=device-width
    INFO    2013-09-03 00:57:07,058    24315    com.endeca.eidi.web.Crawler    [main]    Seeds complete.
    INFO    2013-09-03 00:57:07,090    24347    com.endeca.eidi.web.Crawler    [main]    Starting crawler shut down
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    Waiting for running threads to complete
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    Progress: Level: Cumulative crawl summary (level)
    INFO    2013-09-03 00:57:07,095    24352    com.endeca.eidi.web.Crawler    [main]    host-summary: www.liferay.com to depth 1
    host    depth    completed    total    blocks
    www.liferay.com    0    0    1    1
    www.liferay.com    1    0    0    0
    www.liferay.com    all    0    1    1
    INFO    2013-09-03 00:57:07,096    24353    com.endeca.eidi.web.Crawler    [main]    host-summary: total crawled: 0 completed. 1 total.
    INFO    2013-09-03 00:57:07,096    24353    com.endeca.eidi.web.Crawler    [main]    Shutting down CrawlDb
    INFO    2013-09-03 00:57:07,160    24417    com.endeca.eidi.web.Crawler    [main]    Progress: Host: Cumulative crawl summary (host)
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]   Host: www.liferay.com:  0 fetched. 0.0 mB. 0 records. 0 redirected. 4 retried. 0 gone. 0 filtered.
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]    Progress: Perf: All (cumulative) 23.6s. 0.0 Pages/s. 0.0 kB/s. 0 fetched. 0.0 mB. 0 records. 0 redirected. 4 retried. 0 gone. 0 filtered.
    INFO    2013-09-03 00:57:07,162    24419    com.endeca.eidi.web.Crawler    [main]    Crawl complete.
    ======================================
    Source code for the ParseFilter:
    package com.endeca.eidi.web.parse;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.log4j.Logger;
    import org.apache.nutch.metadata.Metadata;
    import org.apache.nutch.parse.HTMLMetaTags;
    import org.apache.nutch.parse.Parse;
    import org.apache.nutch.parse.ParseData;
    import org.apache.nutch.parse.ParseFilter;
    import org.apache.nutch.protocol.Content;
    import de.jetwick.snacktory.ArticleTextExtractor;
    import de.jetwick.snacktory.JResult;
    public class HTMLMetatagFilter implements ParseFilter {
        public static String METATAG_PROPERTY_NAME_PREFIX = "Endeca.Document.HTML.MetaTag.";
        public static String CONTENT_TYPE = "text/html";
        private static final Logger logger = Logger.getLogger(HTMLMetatagFilter.class);
        public Parse filter(Content content, Parse parse) throws Exception {
            logger.info("come into EndecaHtmlParser getParse");
            logger.info("come into HTMLMetatagFilter");
            //update the content with the main text in the html page
            //content.setContent(HtmlExtractor.extractMainContent(content));
            parse.getData().getParseMeta().add("FILTER-HTMLMETATAG", "ACTIVE");
            ParseData parseData = parse.getData();
            if (parseData == null) return parse;
            extractText(content, parse);
            logger.info("update the content with the main text content");
            return parse;
        }
        private void extractText(Content content, Parse parse) {
            try {
                ParseData parseData = parse.getData();
                if (parseData == null) return;
                Metadata md = parseData.getParseMeta();
                ArticleTextExtractor extractor = new ArticleTextExtractor();
                String sourceHtml = new String(content.getContent());
                JResult res = extractor.extractContent(sourceHtml);
                String text = res.getText();
                md.set("Endeca_Document_Text", text);
            } catch (Exception e) {
                // TODO: handle exception
            }
        }
        public static void log(String msg) {
            System.out.println(msg);
        }
        public Configuration getConf() {
            return null;
        }
        public void setConf(Configuration conf) {
        }
    }

    but it only extracts URLs from <A> (anchor) tags. I want to be able to extract URLs from <MAP> tags as well.
    Gee, do you think you could modify the code to check for "map" attributes as well?
    Can someone maybe point me to a page containing info on the HTML toolkit?
    It's called the API. Since you are using the HTMLEditorKit, an ElementIterator and an AttributeSet, I would start there.
    There is no such API call that says "get me all the links", so you have to do a little work on your own.
    Maybe you could use a ParserCallback: every time you get a new tag, check for the "href" attribute.
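    A rough sketch of that ParserCallback approach (class and method names are my own): image-map links live on <AREA> tags inside <MAP>, and since AREA is an empty element it arrives via handleSimpleTag rather than handleStartTag:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.swing.text.MutableAttributeSet;
import javax.swing.text.html.HTML;
import javax.swing.text.html.HTMLEditorKit;
import javax.swing.text.html.parser.ParserDelegator;

public class LinkCollector extends HTMLEditorKit.ParserCallback {
    final List<String> links = new ArrayList<>();

    private void grabHref(MutableAttributeSet a) {
        Object href = a.getAttribute(HTML.Attribute.HREF);
        if (href != null) links.add(href.toString());
    }

    @Override
    public void handleStartTag(HTML.Tag t, MutableAttributeSet a, int pos) {
        if (t == HTML.Tag.A) grabHref(a);    // ordinary anchor links
    }

    @Override
    public void handleSimpleTag(HTML.Tag t, MutableAttributeSet a, int pos) {
        if (t == HTML.Tag.AREA) grabHref(a); // <AREA> inside <MAP> image maps
    }

    public static List<String> collect(String html) throws IOException {
        LinkCollector cb = new LinkCollector();
        new ParserDelegator().parse(new StringReader(html), cb, true);
        return cb.links;
    }
}
```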

  • Firefox downloads html page instead of opening it

    I have an Apache web server (2.2.24) on a Red Hat Linux server (RHEL 5.8). On the web site I have a link to an HTML page that is not in the web directory, but is on the file system. When I click on the link to go to the web page, Firefox will not render the HTML page under the web site's URL; it instead wants to download the HTML page before it displays it. So, instead of getting http://mywebsite.com/help.html I get file:///C:Users/myuser/AppData/Local/Temp/Help.html. In Internet Explorer, the web page opens just fine: http://mywebsite.com/help.html; it does not try to download the file.

    Our httpd.conf file has DefaultType text/plain
    The web page in question has a header of:
    <META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
    <META NAME="GENERATOR" CONTENT="Mozilla/4.04 [en] (X11; I; HP-UX B.10.20 9000/879) [Netscape]">
    And yet Firefox still wants to download the page instead of render it directly in the browser :-(

  • Parsing the FRAME tag from HTML pages

    Hello to everybody,
    I am trying to parse the A tags & the FRAME tags from HTML pages. I have developed the code below, which works for the A tags but does not work for the FRAME tags. Does anyone have an idea about this?
    private void getLinks() throws Exception {
        System.out.println(diskName);
        links = new ArrayList();
        frames = new ArrayList();
        BufferedReader rd = new BufferedReader(new FileReader(diskName));
        // Parse the HTML
        EditorKit kit = new HTMLEditorKit();
        HTMLDocument doc = (HTMLDocument) kit.createDefaultDocument();
        doc.putProperty("IgnoreCharsetDirective", Boolean.TRUE);
        try {
            kit.read(rd, doc, 0);
        }
        catch (RuntimeException e) { return; }
        // Find all the FRAME elements in the HTML document; it finds nothing
        HTMLDocument.Iterator it = doc.getIterator(HTML.Tag.FRAME);
        while (it.isValid()) {
            SimpleAttributeSet s = (SimpleAttributeSet) it.getAttributes();
            String frameSrc = (String) s.getAttribute(HTML.Attribute.SRC);
            frames.add(frameSrc);
            it.next();
        }
        // Find all the A elements in the HTML document; it works ok
        it = doc.getIterator(HTML.Tag.A);
        while (it.isValid()) {
            SimpleAttributeSet s = (SimpleAttributeSet) it.getAttributes();
            String link = (String) s.getAttribute(HTML.Attribute.HREF);
            int endOfSet = it.getEndOffset(),
                startOfSet = it.getStartOffset();
            String text = doc.getText(startOfSet, endOfSet - startOfSet);
            if (link != null)
                links.add(new Link(link, text));
            it.next();
        }
    }
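    One hedged workaround, assuming the FRAME elements never make it into the HTMLDocument element tree: use the streaming parser callback instead of the document iterator. FRAME has no end tag, so depending on the DTD it may be reported as a start tag or a simple tag; handling both is safe (class name here is my own):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.swing.text.MutableAttributeSet;
import javax.swing.text.html.HTML;
import javax.swing.text.html.HTMLEditorKit;
import javax.swing.text.html.parser.ParserDelegator;

public class FrameSrcCollector extends HTMLEditorKit.ParserCallback {
    final List<String> frames = new ArrayList<>();

    private void maybeFrame(HTML.Tag t, MutableAttributeSet a) {
        if (t == HTML.Tag.FRAME) {
            Object src = a.getAttribute(HTML.Attribute.SRC);
            if (src != null) frames.add(src.toString());
        }
    }

    // FRAME has no end tag, so it may arrive through either callback.
    @Override
    public void handleStartTag(HTML.Tag t, MutableAttributeSet a, int pos) {
        maybeFrame(t, a);
    }

    @Override
    public void handleSimpleTag(HTML.Tag t, MutableAttributeSet a, int pos) {
        maybeFrame(t, a);
    }

    public static List<String> collect(String html) throws IOException {
        FrameSrcCollector cb = new FrameSrcCollector();
        new ParserDelegator().parse(new StringReader(html), cb, true);
        return cb.frames;
    }
}
```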


  • How to parse HTML page

    What API or package can I use to parse an HTML page and obtain HTML DOM interfaces?

    Use JTidy to make the HTML well-formed, then use the DOM parser in the Xerces API:
    JTidy (recommended by W3C, so it's probably pretty good):
    http://www.w3.org/People/Raggett/tidy/
    http://sourceforge.net/projects/jtidy

  • Parse table data from HTML page

    Hello. I have a program that creates an HTML page with several tables to present some data. What I would like to do is extract the data row by row from one of the tables, and parse the data I'm interested in from each row. Can anyone suggest how I should approach this problem?

    Andrew,
    1. If you want to append these data to the existing ones, you have to read the XMP of the file.
    2. You have to add or modify the Dublin Core Description field <dc:description>
    For example:
    <rdf:Description rdf:about='uuid:d659be9a-21d7-11d9-9b6a-c1fd593acb83'
      xmlns:dc='http://purl.org/dc/elements/1.1/'>
     <dc:format>image/jpeg</dc:format>
     <dc:description>
      <rdf:Alt>
       <rdf:li xml:lang='x-default'>Image Caption</rdf:li>
      </rdf:Alt>
     </dc:description>
    </rdf:Description>
    3. You have to replace the APP1 block in the JPG with the new XMP
    Regards,
    Juan Pablo

  • Parsing a html page

    I want to parse a page for specified content.
    I feel that should be easy to do, but my problem is that there are many URLs in the HTML page: the program has to follow each link, grab the specified content from that page, and parse all of the links in the same way.
    Can anyone help me with this? Also, if anyone has code for it, please share it; I have been trying a lot.
    Thanks, Swetha.

    Sounds like you are making a web spider. Here are a few open source spiders you could "dissect" (pun intended).
    http://java-source.net/open-source/crawlers
    Also there are a fair amount of tutorials on this kind of project if you poke around google for little bit. There are several ways you could approach this, most likely the one you choose will be based on how many urls you plan on visiting and what you plan on doing there.
    If you only plan on visiting a small number of URLs you could simply maintain a list of unvisited pages and a list of visited pages. These could be LinkedLists if you don't care about seeing the same page twice, or perhaps a HashSet if you do. So you pull off your first URL, read the contents of the page, find the occurrences of http://, and add each such URL to your unvisited list. When you are done with the current page, move its URL to the visited list.
    When you are parsing out the URLs you could do something as simple as using a StringTokenizer to break the HTML code into words; then you could tell a token is a link by calling something like s.startsWith("href=") and go from there.
    If you are going to visit many pages you might want to investigate using multiple threads. In that case you'll need to use a thread-safe list, and you might want to throttle the threads (have them sleep a little between URL requests) so you don't go blasting their/your bandwidth.
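    The URL-scanning step described above can be sketched roughly like this (a naive string scan, not a real HTML parser; class name is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class HrefScanner {
    /** Naively pulls href="..." targets out of raw HTML text. */
    public static List<String> scan(String html) {
        List<String> urls = new ArrayList<>();
        int i = 0;
        while ((i = html.indexOf("href=\"", i)) != -1) {
            int start = i + "href=\"".length();
            int end = html.indexOf('"', start);  // closing quote of the attribute
            if (end == -1) break;
            urls.add(html.substring(start, end));
            i = end + 1;
        }
        return urls;
    }
}
```

Each URL found would go onto the unvisited list if it has not been seen before.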

  • Flash plays before html page is completely loaded

    A Flash intro for a website's homepage starts to play before the HTML page elements are completely loaded and visible. The HTML page is not complicated, so it would seem the Flash is even slowing down the loading of the HTML, and it looks bad, but not terrible. Is there a way to, like, prioritize the loading of the page elements?

    Sorry, I didn't design the HTML page, but I'm sure it was tested in a few different browsers. I'm not sure if it happened on all or some. The site is already live, so the designer did not think it was a problem. Just thought there may be a stacking order or something that would load the background before the Flash starts playing. Thanks for your help.

  • How to parse an HTML page to get its variables and values?

    Hi, everyone, here is my situation:
    I need to parse an HTML page to get the variables and their associated values between the <form>...</form> tags. For example, if you have a piece of HTML as below:
    <form>
    <input type = "hidden" name = "para1" value = "value1">
    <select name = "para2">
    <option>value2</option>
    </form>
    The actual page is much more complex than this. I want to retrieve para1 = value1 and para2 = value2. I tried JTidy but it doesn't recognize select; could you recommend a good package for this purpose, preferably with sample code?
    Thanks a lot
    Kevin

    See for example Request taglib from Coldtags suite:
    http://www.servletsuite.com/jsp.htm
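    If a taglib is not an option, the standard Swing parser can also pull those name/value pairs out. A rough sketch (class name is mine, and the select handling here simply takes the first option's text as the value):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.swing.text.MutableAttributeSet;
import javax.swing.text.html.HTML;
import javax.swing.text.html.HTMLEditorKit;
import javax.swing.text.html.parser.ParserDelegator;

public class FormFieldCollector extends HTMLEditorKit.ParserCallback {
    final Map<String, String> fields = new LinkedHashMap<>();
    private String pendingSelect;  // name of the <select> we are inside, if any
    private boolean inOption;

    @Override
    public void handleStartTag(HTML.Tag t, MutableAttributeSet a, int pos) {
        if (t == HTML.Tag.SELECT) {
            Object name = a.getAttribute(HTML.Attribute.NAME);
            pendingSelect = name == null ? null : name.toString();
        } else if (t == HTML.Tag.OPTION) {
            inOption = true;
        }
    }

    @Override
    public void handleSimpleTag(HTML.Tag t, MutableAttributeSet a, int pos) {
        // <input> is an empty element, so it arrives here
        if (t == HTML.Tag.INPUT) {
            Object name = a.getAttribute(HTML.Attribute.NAME);
            Object value = a.getAttribute(HTML.Attribute.VALUE);
            if (name != null && value != null) fields.put(name.toString(), value.toString());
        }
    }

    @Override
    public void handleText(char[] data, int pos) {
        // option text becomes the select's value (first option wins in this sketch)
        if (inOption && pendingSelect != null && !fields.containsKey(pendingSelect)) {
            fields.put(pendingSelect, new String(data).trim());
        }
    }

    @Override
    public void handleEndTag(HTML.Tag t, int pos) {
        if (t == HTML.Tag.OPTION) inOption = false;
        else if (t == HTML.Tag.SELECT) pendingSelect = null;
    }

    public static Map<String, String> collect(String html) throws IOException {
        FormFieldCollector cb = new FormFieldCollector();
        new ParserDelegator().parse(new StringReader(html), cb, true);
        return cb.fields;
    }
}
```

On the sample form above this would yield para1 mapped to value1 and para2 mapped to value2.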

  • In 10.7.3, do i Have to go back to prev ver to get my Appleworks files so I can put them in Pages? Do I install Pages before or after I retrieve the old files?

    In 10.7.3, do I have to go back to the prev ver to get my Appleworks files so I can put them in Pages? Do I install Pages before or after I retrieve the old files?

    I would get Pages, if they are word-processing AppleWorks documents, and try to open them without changing the system back to Snow Leopard. You could have Snow Leopard on an external hard drive if you really need to have Rosetta and AppleWorks.

  • Parsing an html page

    i am trying to parse an html page read from the internet.
    i assume i need to create a URL from its address, but after that i'm not sure how to go about reading the lines of that html page.
    i would like to load each line as a String into an array or a Vector so that i can easily parse each line from there...
    any tips?
    thanks.

    haha...this is what i ended up doing...and i was able to parse it all pretty easy...
    BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
    in.readLine();
    thanks!
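    That readLine() idea, fleshed out into a loop that collects every line into a list for later parsing (a sketch; with a live page the reader would wrap url.openStream()):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public class LineLoader {
    /** Loads every line from the reader into a List for later parsing. */
    public static List<String> lines(Reader source) throws IOException {
        List<String> lines = new ArrayList<>();
        BufferedReader in = new BufferedReader(source);
        String line;
        // readLine() returns null at end of stream, which ends the loop
        while ((line = in.readLine()) != null) {
            lines.add(line);
        }
        return lines;
    }
}
```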

  • Displaying content on multiple html pages

    I’m building a basic website for a business/charity I work in. I’m no pro, so all my pages and templates are written in HTML. For convenience it would be nice to have certain bits of info appear throughout the website on different pages. As I understand it, the best way to do this is to create an RSS feed and then have the relevant web pages display the feed.
    However, I’ve been reading up on how to do this, and I am finding it very complicated and am not even sure if it can be done with an HTML page. All the examples I have come across seem to be done in PHP, and I’m not even sure what PHP is.
    My question therefore is: firstly, is RSS what I need, or is there a simpler way of having bits of text appear on multiple web pages? And, if so, can I have RSS feeds display in an HTML page? And, again, if so, can someone point me in the right direction to do this in the most simple but still efficient/reliable way?
    Thank you.
    PS: Merry Christmas and Happy New Year.

    > Firstly, the web page displaying the SSI code seems to require a .shtml
    > extension, is this correct?
    Yes. It is true *unless* the host enables server parsing of all extensions.
    > Secondly, I don't seem to have to change the SSI source file's extension from
    > .html to .ssi. And in fact I think it will make updating the website easier for
    > my colleagues if I keep the extension to .html as, when I change the file
    > extension to .ssi, Dreamweaver's properties inspector and CSS panels become
    > inactive, meaning that the only way to edit the file is going straight into the
    > code rather than using a point and click interface. Is there any reason the SSI
    > source file should not keep an .html extension?
    Name the file being included anything you want. It doesn't matter to its
    functionality as an included file.
    > Is there any open source or low cost software out there that would make it
    > easier for my colleagues to update the website's SSI files and still be able to
    > format the text with the same CSS file the whole website uses. Is this what the
    > Contribute program in Macromedia Studio does?
    A properly constructed include file should only contain references to CSS rules
    specified in the parent page. That being the case, if you are editing the
    include file directly, you cannot style the text unless you are doing it in
    Dreamweaver, or unless you make reference to the existing styles specified in
    the parent page.
    Contribute does lots more than what you ask. Go to Adobe's site and read about it.
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.dreamweavermx-templates.com - Template Triage!
    http://www.projectseven.com/go - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs, Tutorials & Resources
    http://www.macromedia.com/support/search/ - Macromedia (MM) Technotes
    ==================
    "Chopo^2" <[email protected]> wrote in message
    news:[email protected]...
    > Ok, cool, thanks for all the help, I've even managed to get the text in the
    > .ssi document to obey the same CSS rules as the rest of my web pages. Just a
    > few last questions concerning extensions and user friendliness before I'm
    > perfectly comfortable with using SSI.
    >
    > Firstly, the web page displaying the SSI code seems to require a .shtml
    > extension, is this correct?
    >
    > Secondly, I don't seem to have to change the SSI source file's extension from
    > .html to .ssi. And in fact I think it will make updating the website easier for
    > my colleagues if I keep the extension to .html as, when I change the file
    > extension to .ssi, Dreamweaver's properties inspector and CSS panels become
    > inactive, meaning that the only way to edit the file is going straight into the
    > code rather than using a point and click interface. Is there any reason the SSI
    > source file should not keep an .html extension?
    >
    > Is there any open source or low cost software out there that would make it
    > easier for my colleagues to update the website's SSI files and still be able to
    > format the text with the same CSS file the whole website uses. Is this what the
    > Contribute program in Macromedia Studio does?
    >
    > Thanks a lot everyone,
    >
    > Chopo

  • Passing form data from html page to JSP page

    Hi,
    I have a simple HTML page with a form on it that sends the information to a JSP page that stores the form data in a JSP session. I created a simple form that asks for the user's name, sends it to the JSP page, stores that in a session, then sends it to another JSP page which displays the name. This worked fine.
    However, I added another input box to my form that asks for the user's age to do the same steps as outlined above. This does not work, and I'm not sure why. Here's the code from my HTML page:
    <form method=post action="savename.jsp">
    What's your name?
    <input type=text name=username size=20 />
    <p />
    What's your age?
    <input type=text name=age size=20 />
    <p />
    <input type=submit />
    </form>
    Here's the code from my JSP page, savename.jsp (later on in the JSP page it links to another page, but that is not relevant):
    <%
    String name = request.getParameter("username");
    String age = request.getParamater("age");
    session.setAttribute("theName", name);
    session.setAttribute("theAge", age);
    %>
    Finally, here is the error message from Tomcat 6.0.9:
    HTTP Status 500 -
    type Exception report
    message
    description The server encountered an internal error () that prevented it from fulfilling this request.
    exception
    org.apache.jasper.JasperException: Unable to compile class for JSP:
    An error occurred at line: 3 in the jsp file: /savename.jsp
    The method getParamater(String) is undefined for the type HttpServletRequest
    1: <%
    2: String name = request.getParameter("username");
    3: String age = request.getParamater("age");
    4: session.setAttribute("theName", name);
    5: session.setAttribute("theAge", age);
    6: %>
    Stacktrace:
         org.apache.jasper.compiler.DefaultErrorHandler.javacError(DefaultErrorHandler.java:85)
         org.apache.jasper.compiler.ErrorDispatcher.javacError(ErrorDispatcher.java:330)
         org.apache.jasper.compiler.JDTCompiler.generateClass(JDTCompiler.java:415)
         org.apache.jasper.compiler.Compiler.compile(Compiler.java:308)
         org.apache.jasper.compiler.Compiler.compile(Compiler.java:286)
         org.apache.jasper.compiler.Compiler.compile(Compiler.java:273)
         org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:566)
         org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:308)
         org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:320)
         org.apache.jasper.servlet.JspServlet.service(JspServlet.java:266)
         javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
    I do not understand the error, as it works fine when I simply try to retrieve "theName", but breaks when I try to retrieve both.
    What is it I am missing here?
    Thanks,
    Dan

    Ummm.... you misspelled "parameter" the second time you tried to use it.......

    That is incredibly embarrassing. Sorry if I made anyone think it was something more serious. Holy mackerel, I think it's time for me to read over my code much more carefully before posting about a problem.
    Thanks for the help DrClap.

  • Reading an HTML page in Java

    I have done some server-side Java coding before, but nothing quite like this. I want to parse an HTML page and extract information from it to process and create a new HTML page. This sounds like it should be easy enough; I just don't know where to start. Can anyone give me a pointer to the correct package/class(es) to research?
    If you are curious to know what I am planning on doing, read on. I am part of a Yahoo NFL Picks competition and I think it would greatly benefit from having a "What If?" scenario analyzer. Currently it does not. However, I can access each member's public picks page, extract their predictions, compare against my own predictions, and then enter in results for games yet to be played. There are 46 members, so I would need to read in 46 HTML pages, collect the predictions, and then process the information. I'm pretty good at figuring out how to use classes, etc.; I am just a little unsure of where to start looking.
    TIA, Max

    i know you could use JEditorPane
    here is some code i found somewhere; i already tried it, it works:
    import javax.swing.*;
    import java.awt.*;
    import java.awt.event.*;
    import java.io.*;
    import javax.swing.event.*;
    import java.net.*;
    import javax.swing.text.*;
    public class Browser extends JFrame {
         Browser() {
              getContentPane().setLayout(new BorderLayout(5, 5));
              final JEditorPane jt = new JEditorPane();
              final JTextField input =
                   new JTextField("http://java.sun.com");
              // make read-only
              jt.setEditable(false);
              // follow links
              jt.addHyperlinkListener(new HyperlinkListener() {
                   public void hyperlinkUpdate(
                        final HyperlinkEvent e) {
                        if (e.getEventType() ==
                             HyperlinkEvent.EventType.ACTIVATED) {
                             SwingUtilities.invokeLater(new Runnable() {
                                  public void run() {
                                       // Save original
                                       Document doc = jt.getDocument();
                                       try {
                                            URL url = e.getURL();
                                            jt.setPage(url);
                                            input.setText(url.toString());
                                       } catch (IOException io) {
                                            JOptionPane.showMessageDialog(
                                                 Browser.this, "Can't follow link",
                                                 "Invalid Input",
                                                 JOptionPane.ERROR_MESSAGE);
                                            jt.setDocument(doc);
                                       }
                                  }
                             });
                        }
                   }
              });
              JScrollPane pane = new JScrollPane();
              pane.setBorder(
                   BorderFactory.createLoweredBevelBorder());
              pane.getViewport().add(jt);
              getContentPane().add(pane, BorderLayout.CENTER);
              input.addActionListener(new ActionListener() {
                   public void actionPerformed(ActionEvent e) {
                        try {
                             jt.setPage(input.getText());
                        } catch (IOException ex) {
                             JOptionPane.showMessageDialog(
                                  Browser.this, "Invalid URL",
                                  "Invalid Input",
                                  JOptionPane.ERROR_MESSAGE);
                        }
                   }
              });
              getContentPane().add(input, BorderLayout.SOUTH);
         }
         public static void main(String args[]) {
              Browser bro = new Browser();
              bro.setSize(500, 500);
              bro.setVisible(true);
         }
    }
    hope that helps
