URL openStream problem. URL String length

Hi,
My problem is about the length of the URL string. I'm simply creating a URL and calling openStream() to get the content, but if the length of the URL string exceeds 118 characters I get an exception saying "Server returned HTTP response code 505".
If I try the same URL string in a browser, it works.
Thanks in advance for all replies...

The problem is solved; it was the white space, not the length... I didn't notice the white space there, sorry!
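For reference, a space in a URL produces exactly this failure: HTTP 505 means "HTTP Version Not Supported", and it typically comes back because the space splits the request line the client sends. A minimal sketch of one way to guard against it, assuming the raw string may contain spaces, is to let java.net.URI percent-encode the path and query before opening the stream (the host, path and query below are placeholders):
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import java.net.URL;

    public class EncodedUrlDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL containing a space in the query string.
            // The multi-argument URI constructor percent-encodes illegal
            // characters (the space becomes %20) before we open the stream.
            URI uri = new URI("http", "www.example.com", "/path/page",
                              "name=some value", null);
            URL url = uri.toURL();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }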

Similar Messages

  • URL.openStream() works in Windows but not in Linux

    I am having a problem with this line:
    BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
    in the code sample further below.
    A simple program using this line works when compiled in my Windows XP:
    java version "1.6.0_03"
    Java(TM) SE Runtime Environment (build 1.6.0_03-b05)
    Java HotSpot(TM) Client VM (build 1.6.0_03-b05, mixed mode, sharing)
    but not when compiled on my RedHat FC 4 server:
    java version "1.4.2"
    gij (GNU libgcj) version 4.0.2 20051125 (Red Hat 4.0.2-8)
    The program (making use of a previous forum example and pared down to minimize tangent topics) is below.
    The code works for all 3 URLs in Windows. In Linux it only works for the 1st one (the bbc.co.uk site).
    Error is listed below the code:
    import java.net.*;
    import java.io.*;
    public class BBC {
        public static void main(String[] args) throws Exception {
    //        URL url = new URL("http://news.bbc.co.uk/sport1/hi/football/eng_prem/6205747.stm");
    //        URL url = new URL("http://www.weatherunderground.com/global/stations/71265.html");
            URL url = new URL("http://www.weatherunderground.com");
            BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
            int nLineCnt = 0;
            String inputLine;
            while ((inputLine = in.readLine()) != null)
                nLineCnt++;
            System.out.println("nLineCnt=" + nLineCnt);
        }
    }
    Exception in thread "main" java.lang.StringIndexOutOfBoundsException
    at java.lang.String.substring(int, int) (/usr/lib/libgcj.so.6.0.0)
    at gnu.java.net.protocol.http.Request.readResponse(gnu.java.net.LineInputStream) (/usr/lib/libgcj.so.6.0.0)
    at gnu.java.net.protocol.http.Request.dispatch() (/usr/lib/libgcj.so.6.0.0)
    at gnu.java.net.protocol.http.HTTPURLConnection.connect() (/usr/lib/libgcj.so.6.0.0)
    at gnu.java.net.protocol.http.HTTPURLConnection.getInputStream() (/usr/lib/libgcj.so.6.0.0)
    at java.net.URL.openStream() (/usr/lib/libgcj.so.6.0.0)
    at BBC.main(java.lang.String[]) (Unknown Source)
    at gnu.java.lang.MainThread.call_main() (/usr/lib/libgcj.so.6.0.0)
    at gnu.java.lang.MainThread.run() (/usr/lib/libgcj.so.6.0.0)
    Can anyone please suggest what I can do to be able to process the weatherunderground URL?
    Claude

    To me it would suggest a bug in the VM that you are using.
    Solutions
    1. Use a different VM
    2. Write your own code to process the HTTP exchange. Depending on the licensing of the VM in use, you might be able to find
    the bug in that code, fix it yourself, and then use your fix (via start-up command-line options for the VM). Otherwise
    you have to duplicate the functionality. You might look to Jakarta Commons; there may be code there that does that.
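    If you do end up duplicating the functionality (option 2), a minimal sketch could issue the HTTP request over a plain socket and bypass the gij protocol handler entirely; the host is the one from the example above, and real code would still need to handle redirects, chunked transfer encoding and so on:
    import java.io.*;
    import java.net.Socket;

    public class RawHttpGet {
        public static void main(String[] args) throws IOException {
            // Plain-socket HTTP/1.0 GET, avoiding the library's protocol handler.
            Socket socket = new Socket("www.weatherunderground.com", 80);
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            out.print("GET / HTTP/1.0\r\n");
            out.print("Host: www.weatherunderground.com\r\n\r\n");
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            int nLineCnt = 0;
            while (in.readLine() != null) {
                nLineCnt++;
            }
            System.out.println("nLineCnt=" + nLineCnt);
            socket.close();
        }
    }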

  • JSP bug? url.openStream() = server redirected too many times

    I tried something the other day that works in Java, so it should work as a JSP scriptlet, but I received an error message that others have posted elsewhere without a complete resolution. Specifically, given a URL, say u, one ought to be able to do u.openStream() and eventually read the remote page. Typically, one might want to try
    URL u = new URL("http://someserver.com/path/file.xxx");
    BufferedReader bfr = new BufferedReader(new InputStreamReader(u.openStream()));
    and then read bfr line-by-line. The problem that seems to be fairly common is that the openStream() call throws a ProtocolException claiming "server redirected too many times (20)".
    What I've seen is that this exception occurs whenever the URL is outside the Tomcat server whence the call is being made; in our case, we're running "out-of-the-box" Jakarta Tomcat 4.1.29 on port 8080 of a w2k server. The code works perfectly in native Java, and in JSP for a URL of the form "/anotherpage.jsp".
    Is this a bug in JSP, or in our version of Tomcat, or is there just some configuration parameter that needs to be changed from its default? As I said, I've seen similar posts (with less detailed analysis) in the Usenet newsgroups, but not one has generated a response that explains and resolves the matter.
    Perhaps a JSP guru out there could set the record straight? Thanks.
    P.S. I know that the use of scriptlets in JSP is being discouraged, but they are still supported AFAIK.

    Sure scriptlets are still supported. Most times though you can do things better with a custom tag. Why reinvent the wheel?
    Just as a suggestion, you might try the JSTL <c:import> tag.
    It basically does just this behind the scenes.
    However I don't think that will help you in the end - it will probably hit the same error message.
    My guess would be that the problem is not caused by java/JSP as such, but by a firewall, or configuration somewhere.
    The following works fine for me (ignoring broken images of course)
    <%@ page import="java.net.*, java.io.*" %>
    <%
    URL u = new URL("http://www.google.com");
    BufferedReader bfr = new BufferedReader(new InputStreamReader(u.openStream()));
    String line = null;
    while ((line = bfr.readLine()) != null){
      out.println(line);
    }
    %>
    Hope this helps,
    evnafets
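    For what it's worth, one common cause of the "server redirected too many times" ProtocolException (an assumption here, since we can't see the poster's actual URL) is a server that sets a session cookie and then redirects: URLConnection follows the redirect but does not resend the cookie, so the server keeps redirecting. A rough sketch that follows redirects by hand and carries the first cookie along (it assumes absolute Location headers and a single Set-Cookie header, so it is only a starting point):
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ManualRedirectDemo {
        public static void main(String[] args) throws Exception {
            String address = "http://someserver.com/path/file.xxx"; // placeholder endpoint
            String cookie = null;
            HttpURLConnection conn;
            int hops = 0;
            while (true) {
                conn = (HttpURLConnection) new URL(address).openConnection();
                conn.setInstanceFollowRedirects(false); // we follow redirects ourselves
                if (cookie != null) {
                    conn.setRequestProperty("Cookie", cookie);
                }
                int code = conn.getResponseCode();
                String setCookie = conn.getHeaderField("Set-Cookie");
                if (setCookie != null) {
                    cookie = setCookie.split(";", 2)[0]; // keep only the name=value part
                }
                if (code == HttpURLConnection.HTTP_MOVED_PERM
                        || code == HttpURLConnection.HTTP_MOVED_TEMP) {
                    address = conn.getHeaderField("Location"); // assumes an absolute URL
                    if (++hops > 10) {
                        throw new IllegalStateException("too many redirects");
                    }
                    continue;
                }
                break;
            }
            BufferedReader bfr = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = bfr.readLine()) != null) {
                System.out.println(line);
            }
            bfr.close();
        }
    }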

  • URL.openstream doesn't work

    Hi,
    I run the demo program from the Java tutorial. Code is as below:
    import java.net.*;
    import java.io.*;
    public class urltest {
        public static void main(String[] args) throws Exception {
            URL yahoo = new URL("http://www.yahoo.com/");
            BufferedReader in = new BufferedReader(
                           new InputStreamReader(
                           yahoo.openStream()));
            String inputLine;
            while ((inputLine = in.readLine()) != null)
                System.out.println(inputLine);
            in.close();
        }
    }
    but it gives the following output:
    Exception in thread "main" java.net.NoRouteToHostException: Operation timed out:
    no further information
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(Unknown Source)
    at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
    at java.net.PlainSocketImpl.connect(Unknown Source)
    at java.net.Socket.<init>(Unknown Source)
    at java.net.Socket.<init>(Unknown Source)
    at sun.net.NetworkClient.doConnect(Unknown Source)
    at sun.net.www.http.HttpClient.openServer(Unknown Source)
    at sun.net.www.http.HttpClient.openServer(Unknown Source)
    at sun.net.www.http.HttpClient.<init>(Unknown Source)
    at sun.net.www.http.HttpClient.<init>(Unknown Source)
    at sun.net.www.http.HttpClient.New(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
    at java.net.URL.openStream(Unknown Source)
    at urltest.main(urltest.java:30)
    Please advise.

    Thanks Jean, I changed the User-Agent to report Windows 95, but the program still doesn't work.
    code:
    import java.net.*;
    import java.io.*;
    public class urltest {
        public static void main(String[] args) throws Exception {
            URL yahoo = new URL("http://www.yahoo.com/");
            HttpURLConnection uc = (HttpURLConnection) yahoo.openConnection();
            uc.setRequestProperty("User-Agent", "Mozilla/4.0 (compatible; MSIE 5.5; Windows 95)");
            uc.setRequestProperty("Connection", "Keep-Alive");
            uc.setDoOutput(true);
            uc.setDoInput(true);
            uc.setUseCaches(false);
            uc.setRequestMethod("GET");
            HttpURLConnection.setFollowRedirects(false);
            uc.setInstanceFollowRedirects(false);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(
                    uc.getInputStream()));
            String inputLine = in.readLine();
            //while ((inputLine = in.readLine()) != null)
            System.out.println(inputLine);
            in.close();
        }
    }
    The output is:
    Exception in thread "main" java.net.NoRouteToHostException: Operation timed out:
    no further information
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(Unknown Source)
    at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
    at java.net.PlainSocketImpl.connect(Unknown Source)
    at java.net.Socket.<init>(Unknown Source)
    at java.net.Socket.<init>(Unknown Source)
    at sun.net.NetworkClient.doConnect(Unknown Source)
    at sun.net.www.http.HttpClient.openServer(Unknown Source)
    at sun.net.www.http.HttpClient.openServer(Unknown Source)
    at sun.net.www.http.HttpClient.<init>(Unknown Source)
    at sun.net.www.http.HttpClient.<init>(Unknown Source)
    at sun.net.www.http.HttpClient.New(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
    at urltest.main(urltest.java:39)
    Please advise.
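    A NoRouteToHostException at socketConnect() usually means the machine cannot reach the remote host directly at all (a firewall, or a required HTTP proxy), rather than anything wrong with the Java code itself. If a proxy is needed, a minimal sketch like the following sets the standard proxy system properties before opening the connection (the proxy host and port are placeholders for whatever your network actually uses):
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class ProxyDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder proxy settings; substitute your real proxy host and port.
            System.setProperty("http.proxyHost", "proxy.mycompany.example");
            System.setProperty("http.proxyPort", "8080");
            URL yahoo = new URL("http://www.yahoo.com/");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(yahoo.openStream()));
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                System.out.println(inputLine);
            }
            in.close();
        }
    }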

  • URL  access problem

    I have JDK1.5.0_06 (build 1.5.0_06-b05) on my PC with Windows 2000 OS and when I am running the following code
    import java.net.*;
    public class A {
        public static void main(String[] args) {
            try {
                URL u = new URL("http://www.aaaa.com/xxxx%F3P%F3Rblabla");
                URLConnection ur = u.openConnection();
                ur.getInputStream().close();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
            }
        }
    }
    the following exception appears:
    java.lang.IllegalArgumentException
         at sun.net.www.ParseUtil.decode(ParseUtil.java:179)
         at sun.net.www.ParseUtil.toURI(ParseUtil.java:253)
         at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:738)
         at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:669)
         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:913)
    I think the problem is the use of "%F3" in the URL. But I have a site where this code appears in the URL, and from IE it works fine.
    If somebody knows how to solve this, please give me a sign.
    If I use 1.5.0_02 (build 1.5.0_02-b09) it works fine.
    Best regards,
    Leo
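    For what it's worth, the stack trace shows sun.net.www.ParseUtil.decode() rejecting the URL, because %F3 is a raw Latin-1 escape that is not a valid UTF-8 sequence on its own. One hedged workaround is to build the URL from the original (unescaped) characters with java.net.URI, which does the percent-encoding itself; note that URI produces UTF-8 escapes (%C3%B3 rather than %F3), so this only helps if the server accepts that form. The host and path below are just the ones from the example:
    import java.net.URI;
    import java.net.URL;
    import java.net.URLConnection;

    public class EncodingDemo {
        public static void main(String[] args) throws Exception {
            // "\u00f3" is the Latin-1 character behind the %F3 escape; URI will
            // percent-encode it (as UTF-8) when building the URL.
            URI uri = new URI("http", "www.aaaa.com", "/xxxx\u00f3P\u00f3Rblabla", null);
            URL u = uri.toURL();
            URLConnection ur = u.openConnection();
            ur.getInputStream().close();
        }
    }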

    Are regular users able to access Files? Is the user seeing the SSO Login page or do they just get a HTTP 500?
    Please have the user delete all their cookies and temporary internet files and try it again.
    In addition please see if the users request makes it to the Files server by checking the HTTP access_log and the Files servlet's application.log.
    Hope that helps,
    -sancho
    I have a user who is having trouble accessing the administrator and files URLs. He is using the same URL as other administrators but keeps getting an HTTP 500 error or "page cannot be displayed". The problem appears to be with his browser settings in IE version 5.5. He has reinstalled IE and is still getting this error. He can use someone else's PC and is able to log in using the URLs, but on his PC he keeps getting an error.
    Does anyone have any suggestions on where to start looking for a problem resolution?
    Thanks for any help...

  • Help with URL.openStream

    Hi,
    This small Java program hangs if run on 1.3.1 but not 1.2.2 or some version of 1.4:
    import java.net.*;
    import java.io.*;
    import java.security.*;
    public class V {
        public static void main(String args[]) throws MalformedURLException, IOException, AccessControlException {
            String u = "http://www.rsmas.miami.edu/~angel/a.sh";
            InputStream     inp;
            BufferedReader  dis;
            try {
                int                 i;
                String              s;
                URL                 url;
                URLConnection       urlConn;
                url = new URL(u);
                System.out.println("URL constructor");
                inp = url.openStream();
                System.out.println("stream opened");
                dis = new BufferedReader(new InputStreamReader(inp));
                System.out.println("BufferedReader created");
                //  skip leading html headers
                while ((s = dis.readLine()) != null) {
                    if (s.startsWith("<body"))
                        break;
                }
                System.out.println("found <body>");
            } catch (MalformedURLException mue) {
                System.out.println("Bad URL: " + mue);
                throw mue;
            } catch (IOException ioe) {
                System.out.println("Error reading from remote: " + ioe);
                throw ioe;
            } catch (AccessControlException e) {
                System.out.println("Access Control Exception");
                throw e;
            }
        }
    }
    The CGI script is
        #!/bin/sh
        echo "Content-type: text/html\r"
        echo "\r"
        echo "<html>"
        echo "<body>"
        echo "<pre>"
        while true; do
            echo "Hi there"
            sleep 3
        done
    Any thoughts?
    Thanks,
    Angel

    What is it hanging after? Does it print any of your lines out? You don't need to call connect() on the URLConnection, do you? Maybe not if you're using openStream().
    BTW, is that script going to loop forever? The URL isn't working.
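    One way to avoid an indefinite hang on a connection like this, if you can require Java 5 or later, is to open the connection explicitly and set connect and read timeouts before getting the stream. A minimal sketch (the URL is the one from the question; the 5-second values are arbitrary):
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;

    public class TimeoutDemo {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://www.rsmas.miami.edu/~angel/a.sh");
            URLConnection conn = url.openConnection();
            conn.setConnectTimeout(5000); // fail if no TCP connection within 5 s
            conn.setReadTimeout(5000);    // fail if a read blocks longer than 5 s
            BufferedReader dis = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            String s;
            while ((s = dis.readLine()) != null) {
                if (s.startsWith("<body"))
                    break;
            }
            System.out.println("found <body>");
        }
    }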

  • Extract URL from a href string

    Greetings,
    I am trying to solve a problem (in a specific way) that will lead to the solution for another problem (which is similar).
    I am trying to extract a URL from an HTML <a> tag like this:
    <a href="www.someurl.com">Go to www.someurl.com</a>Can anyone help with the best solution? I've tried it using String.split() and StringTokenizer (which might work for this example), but, for my real problem, they seem quite inadequate. I'm guessing the solution involves some regex but I don't know where to look (and am quite unfamiliar with regex to know how).
    Thank you in advance.

    Actually, I am trying to break down many sections of the URL.
    The URL I am trying to breakdown has a lot of data that I need.
    I guess a better example would be:
    <a href="www.someurl.com" id="url-id-43612">Link Text - data here</a>In my program, I need to grab three things: the href URL, the id, and "Link Text - data here."
    Here is an SSCCE of how I am currently dealing with this:
    import java.util.StringTokenizer;
    /*
     * To change this template, choose Tools | Templates
     * and open the template in the editor.
     * @author Ryan
     */
    public class htmlparser {
        public static void main(String[] args) {
            String str = "<a href=\"www.someurl.com\" id=\"url-id-43612\">Link Text - data here</a>";
            str = str.replaceAll("<a href=\"", "\n")
                .replaceAll("\" id=\"url-id-", "\n")
                .replaceAll("\">", "\n")
                .replaceAll("</a>", "\n");
            StringTokenizer tokenizer = new StringTokenizer(str, "\n");
            String url = tokenizer.nextToken();
            String id = tokenizer.nextToken();
            String text = tokenizer.nextToken();
            System.out.println(url);
            System.out.println(id);
            System.out.println(text);
        }
    }
    But I am looking for a more elegant solution to the problem. Is there one?
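    Since the question mentions regex, here is a minimal sketch using java.util.regex instead of the chained replaceAll() calls. It assumes the attributes always appear in the order href then id, exactly as in the sample string; for anything messier, a real HTML parser is the better tool:
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class HrefRegexDemo {
        public static void main(String[] args) {
            String str = "<a href=\"www.someurl.com\" id=\"url-id-43612\">Link Text - data here</a>";
            // Three capturing groups: the href value, the id value, and the link text.
            Pattern p = Pattern.compile("<a href=\"([^\"]*)\" id=\"([^\"]*)\">([^<]*)</a>");
            Matcher m = p.matcher(str);
            if (m.find()) {
                System.out.println(m.group(1)); // www.someurl.com
                System.out.println(m.group(2)); // url-id-43612
                System.out.println(m.group(3)); // Link Text - data here
            }
        }
    }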

  • Passing user/pass by URL (url.openstream())

    Hi guys,
    I have a tough one for you...hehe
    In a JSP, I create a URL... open it with URL.openStream()... read each line and write it into another file.
    1) Since the calling JSP is in a directory that asks for a username and password, when I call the URL it cannot be opened, because no username and password are supplied to give access to the page.
    How can I pass the username and password?
    2) I could leave the called JSP in a public zone, but the problem is that even if I use the jsessionid, the session changes... do you know why?
    Thanks guys!

    You should have session variables attached to your page. Check Connection and Session; you should be able to use something like getProperties("userid").
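    If the protected directory uses HTTP Basic authentication (an assumption; container form-based login would need a different approach), one sketch is to send the Authorization header on the connection yourself. java.util.Base64 requires Java 8 or later, and the URL and credentials below are placeholders:
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;
    import java.util.Base64;

    public class BasicAuthDemo {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://localhost:8080/protected/page.jsp");
            URLConnection conn = url.openConnection();
            // Encode "user:password" and send it as an HTTP Basic Authorization header.
            String credentials = "myuser:mypassword";
            String encoded = Base64.getEncoder().encodeToString(credentials.getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + encoded);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }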

  • How to use "url.openStream()". What does this function do?

    how to use "url.openStream()" . What this function does?
    Edited by: sahil1287 on Apr 16, 2009 10:02 PM

    http://java.sun.com/javase/6/docs/api/java/net/URL.html#openStream()
    http://java.sun.com/docs/books/tutorial/networking/urls/readingWriting.html
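    In short, url.openStream() is shorthand for url.openConnection().getInputStream(): it opens a connection to the resource the URL points to and returns an InputStream for reading its bytes. A minimal usage sketch (example.com is just a placeholder):
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class OpenStreamDemo {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://www.example.com/");
            // openStream() is equivalent to openConnection().getInputStream().
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }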

  • Set "url" in HTTPService using String variable

    Hi,
    How can I set "url" in HTTPService using String variable (see below). I've tried different formats of string & variable concatenations.
    privtate var myurl:String = "http://localhost/";
        <mx:HTTPService id="post_submit_service"
            url="{myurl+'test.php'}"
            method="POST"
            resultFormat="text"
          result="result_handler(event)"
          fault="fault_handler(event)">
            <mx:request xmlns="">
          </mx:request>
        </mx:HTTPService>
    Thanks,
    ASM

    Try the following:
    url="{myurl}test.php"

  • URL cache problem in UCM and sitestudio

    Hi,
    We have created the metadata field xPageName and have configured an entry in config.cfg as SSUrlFieldName=PageName. The scenario is: we assigned abc.html as the xPageName of a content item on, say, the /oracle/cms1/ URL, and by mistake we also assigned abc.html as the xPageName of another content item on, say, the /oracle/cms2/ URL. The problem is that abc.html was registered with two content items on two URLs. We have now changed it for the /oracle/cms2/ URL to xyz.html, but when we hit /oracle/cms2/abc.html it is still served as a blank page. Please suggest how we can resolve this ASAP.
    Thanks,

    That is definitely a limitation: each custom page name must be unique.
    I think what you are saying is that you have resolved this issue, but the URL is still taking you to the wrong page...
    This stuff is cached, IIRC, in some JS files in the weblayout.
    Have you tried clearing the browser cache, recreating the navigation in SS Designer, or restarting UCM and the web server?

  • URL cut before query string

    Hi,
    I am writing a standalone client making SOAP calls to a server. The endpoint I am sending the SOAP message to is a URL of the form
    http://myserver.org/vdir/app?x=1&y=2
    I create this URL with
    URL endpoint = new URL("http://myserver.org/vdir/app?x=1&y=2");
    Then I do the SOAP call via
    SOAPMessage response = con.call(request, endpoint);
    Tracing down what the web server receives, I see the following line:
    POST /vdir/app HTTP/1.1
    but actually, what I am expecting is
    POST /vdir/app?x=1&y=2 HTTP/1.1
    Any idea why the URL is truncated before the query string?
    TIA,
    Chris

    OK, thank you.
    Is there any trick to make the request NOT ignore the URL parameters?
    Chris
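    One hedged workaround, since SOAPConnection.call() accepts a plain String endpoint as well as a java.net.URL, is to pass the endpoint as a String and let the SAAJ implementation parse it itself; whether the query string survives still depends on the implementation, so this is only a sketch to try, using the endpoint from the question:
    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPConnection;
    import javax.xml.soap.SOAPConnectionFactory;
    import javax.xml.soap.SOAPMessage;

    public class SoapEndpointDemo {
        public static void main(String[] args) throws Exception {
            SOAPConnection con = SOAPConnectionFactory.newInstance().createConnection();
            SOAPMessage request = MessageFactory.newInstance().createMessage();
            // Pass the endpoint as a String instead of a java.net.URL object.
            SOAPMessage response = con.call(request, "http://myserver.org/vdir/app?x=1&y=2");
            response.writeTo(System.out);
            con.close();
        }
    }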

  • Url.openstream():strange result of available().

    I am testing my first network program, and the result is very strange. It seems that the result of available() is random!
    URL url= new URL("ftp://ftp.pku.edu.cn/pub/proxy.txt");
    InputStream is=url.openStream();
    /***** the output is 0 ********/
    System.out.println(is.available());
    URL url1= new URL("ftp://ftp.pku.edu.cn/pub/proxy.txt");
    is=url1.openStream();
    /******* the result is 4092 ************/
    System.out.println(is.available());
    Why does this happen? The results should be equal!

    Is that the exact code that shows those results? Anyway, it doesn't matter; available() is not random, it just may return different values as it reads in more bytes, so if the other end keeps sending data, available() will keep returning larger and larger values!
    Perhaps the first time the connection was a little slow, the server may not have responded very quickly, or the underlying buffer might not have been full enough for it to be flushed out to you. In any case, available() is not random.
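    To add to that: available() only reports how many bytes can be read right now without blocking, so it is not a way to get the file size. A minimal sketch of reading everything regardless of what available() says, using the same FTP URL as the question:
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.URL;

    public class ReadFullyDemo {
        public static void main(String[] args) throws Exception {
            URL url = new URL("ftp://ftp.pku.edu.cn/pub/proxy.txt");
            InputStream is = url.openStream();
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            // Keep reading until the stream reports end-of-file (-1),
            // instead of trusting a single available() call.
            while ((n = is.read(chunk)) != -1) {
                buffer.write(chunk, 0, n);
            }
            is.close();
            System.out.println("total bytes read: " + buffer.size());
        }
    }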

  • Url.openStream() 5x

    Hello
    Please, I know this is impossible, and how quickly one can get into a recursive rut; however, I have an application where, when I connect to a URL that is a Struts app, it makes 5 calls per single url.openStream(). I know it is impossible for little stupid me to have "discovered" this; however, I tried to make sure that it was not multiple threads, put in probes, tested the Struts app, and even worked it up using Apache's HttpClient, with no difference.
    I am calling a subclass of HttpServlet within which I am making the URL call. I know the atomicity of the call going in, and can see the call in doPost, but I see five calls against the Struts app. The Struts app works just fine when executed from a web page; one call, one pass. Is it just that somehow 5 threads are being created? Can I somehow monitor the main thread in my subclassed servlet and see it spawn the threads after the url.openStream()? I have tried 1.4.2_14 and 1.5.0_09 to the same effect. Ideas? TIA.

    Nevermind. The sound of chagrin.

  • Yet another url mapping problem - wildcards

    Hello
    I have one servlet which I want to handle a range of URLs. If I configure the web.xml with
    <url-pattern>/servlet</url-pattern>
    then /prjServ/servlet works in a browser, but of course any other URLs (e.g. /prjServ/servlet/fred) do not.
    If I change the web.xml to be
    <url-pattern>/servlet/*</url-pattern>
    and try going to /prjServ/servlet, /prjServ/servlet/ or /prjServ/servlet/fred, they all fail with "requested resource is not available".
    Any ideas what I am doing wrong? The servlet class is in a subdirectory of WEB-INF/classes and works fine for the url-pattern which does not use a wildcard.

    Does /servlet/* already have a global mapping perhaps? In Tomcat you can turn this off in the global web.xml (look for the "invoker" servlet), other servlet containers may have a similar option, or you could simply choose a different name for your mapping.
