HTTP persistent connection

Hello,
I know this subject has already been touched on, but I did not manage to find a practical solution.
I would like to send more than one HTTP request (and check each response) to the same URL without having to open a new HttpURLConnection each time. The following piece of code does not seem to give any error, but the server does not seem to receive more than one request. Could you give me any help?
Thanks,
Francesco.
public void exec() {
    try {
        url = new java.net.URL(m_hostURL);
        connection = (java.net.HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Connection", "Keep-Alive");
        // enable output so a request body can be written
        connection.setDoOutput(true);
        out = new java.io.OutputStreamWriter(connection.getOutputStream(), "ASCII");
        for (int i = 1; i <= 10; i++) {
            // place the request data in the output stream
            out.write("hello world!!");
            out.flush();
            // send the request and read the response code
            System.out.println(" Code : " + connection.getResponseCode());
        }
        out.close();
        connection.disconnect();
    } catch (Exception e) {
        System.err.println("ERROR: Failed to send to URL - " + m_hostURL);
        e.printStackTrace();
    }
}

I have also been searching the web for how to implement
an applet-to-servlet communication link over a persistent HTTP connection.
There is a working implementation available for download at
http://www.ustobe.com/
After clicking the 'news' item in the menu, there are links
to download HttpKeepalive.java, with javadoc explaining
how to use the class.
Mike
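
For what it's worth, the stock JDK client already reuses the underlying socket when you let it: open a fresh HttpURLConnection per request against the same host and read each response to the end, and the keep-alive cache keeps the TCP connection open behind the scenes. A minimal sketch of that pattern (the class name and http://example.com/ URL are placeholders, and it assumes the server speaks HTTP/1.1 keep-alive):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/"); // placeholder URL
        for (int i = 1; i <= 10; i++) {
            // a new HttpURLConnection object per request; the JDK reuses the socket underneath
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            OutputStreamWriter out = new OutputStreamWriter(conn.getOutputStream(), "ASCII");
            out.write("hello world!!");
            out.close();                          // finishes the request body
            System.out.println(" Code : " + conn.getResponseCode());
            // read the response to the end so the socket can go back to the keep-alive pool
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            while (in.readLine() != null) { /* discard */ }
            in.close();
        }
    }
}

Whether the socket is really being reused can only be confirmed with a packet monitor, as one of the posts below points out.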

Similar Messages

  • How to get and maintain Http persistent connection to get pushed data from

    At MangoSpring (www.mangospring.com) we are working on an application that uses the IMPS protocol, where we always need to receive data pushed by the server.
    To achieve this we have to maintain one persistent HTTP connection on the client side, so that we can be notified whenever the server pushes data.

    Our problem is: "How to get and maintain an HTTP persistent connection to receive data pushed from the server?"
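
    One common way to approximate server push with a plain HttpURLConnection is a long-lived GET whose response the server never finishes, with the client reading the body as data arrives. A rough sketch, assuming a line-oriented push format; the class name and URL are placeholders:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class PushListener {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/push");   // placeholder endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setReadTimeout(0);                          // 0 = block indefinitely waiting for pushed data
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {         // each line is treated as one pushed message
                System.out.println("pushed: " + line);
            }
            in.close();
        }
    }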

  • HTTP Persistent Connection - Cannot Change Timeout

    On an HTTP persistent connection, the client and server can both have timeouts on a connection. In the case of a Java client, I can find no way to set the client's timeout value. It is always 5 seconds (assuming you jump through exactly the hoops necessary to get it to work at all). According to my reading of decompiled code, this will be the case unless it is a proxy connection, in which case 10 seconds will be used.
    For a client-server app, the best performance occurs if the connection is truly persistent - as long as the server has resources. This means one wants to minimize or eliminate reconnections. But the Java implementation appears to always drop the connection after the 5 or 10 second limit.
    Has anybody gotten around this? All of this logic and these parameters are buried deep in sun.net classes, so I can't get at them by subclassing... and the only factories I can find to override would require essentially rewriting what I gain by using the URL/HttpURLConnection framework in the first place.
    NOTE... because the persistent connection transparently hides the persistence, you can only tell how it is working if you have a packet monitor.
    I use Ethereal (open source) on Windows to do this. It can be found at http://www.ethereal.com/
    For more details: http://www.tinyvital.com/JavaHTTPProblem.html
    Thanks!
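
    For reference, the only knobs I know of in the stock client are JVM-wide system properties, and as far as I can tell the idle timeout itself is not among them; at best the server can try to stretch it by sending a Keep-Alive: timeout=... header in its responses. A minimal sketch of the documented properties (the values are examples, not recommendations):

    public class KeepAliveSettings {
        public static void main(String[] args) {
            // set before the first HTTP request is made
            System.setProperty("http.keepAlive", "true");    // reuse sockets at all (default: true)
            System.setProperty("http.maxConnections", "5");  // idle sockets kept alive per destination (default: 5)
            // the 5/10 second idle limit discussed above has no documented property of its own in JDKs of this era
        }
    }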


  • Http persistent connection, several request-response same http connection??

    I have a problem: I need to make a server-side Java application that communicates over HTTP/1.1 using persistent connections. In other words: an HTTP client sends a request to my application, my application receives the request and sends a response, the client receives the response and sends a second request, and my application receives that second request over the same connection and sends a second response. All of this should happen over the same connection, without closing any socket.
    I have been trying to do this for three weeks, but as soon as the server-side application responds to the first request, the connection closes.
    Can anybody help, please?
    I don't speak English very well, sorry!

    HTTP has a keep-alive feature. If you are using a standard API for HTTP communication and a standard HTTP server, you will have this feature enabled already. But it is not a good idea to depend on keep-alive, because HTTP is not designed to keep track of session state on its own.
    Why do you want to have it that way? Can't you implement the same requirement using HTTP sessions/cookies?
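
    As a small illustration of the "standard HTTP server" point: the server built into the JDK (com.sun.net.httpserver, Java 6+) keeps the TCP connection open across requests on its own, so a handler like the sketch below answers repeated requests from the same client over one socket, provided the client itself uses HTTP/1.1 keep-alive. The class name, port and response text are placeholders:

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class KeepAliveServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", (HttpExchange exchange) -> {
                byte[] body = ("you asked for " + exchange.getRequestURI()).getBytes();
                // a known Content-Length lets the connection stay open for the next request
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();   // connection reuse is handled by the server, not by the handler
        }
    }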

  • HTTP persistent connection (keep Alive) problem

    Hi,
    I'm trying to deploy a RPC web service and I would like to use the HTTP 1.1 keep alive feature (which should be the default).
    I generated the server side and client side code using jdev 903. The AS is OC4j 903.
    The problem is that the server is sending a ‘Connection: close’ header in the SOAP response, despite the client sending a ‘Connection: keep-alive’ header in the request. This is obviously causing the client to close the connection and open another for the next message.
    The documentation implies that the server will always try to use persistent connections, even so, in the file:
    <OC4J_HOME>/j2ee/home/config/http-web-site.xml
    there is a config option ‘use-keep-alives’. Setting this to true has no effect.
    I’m guessing there must be another option somewhere to turn on connection persistence.
    Can anyone please help ?

    Does anyone know whether Oracle supports persistent connections?
    I tested this on JBoss and I seem to get better performance.
    cheers.

  • Network.http.max-persistent-connections-per-server keeps reverting to user set 4

    I didn't use to have this problem as far as I'm aware, but recently I've noticed that if I try to download more than 4 of one thing at a time from the same site, it fails. For instance, if I have 5 web pages of streaming content open, only 4 will load; the fifth will do nothing until I finish one of the others. Upon looking up some information on this, I saw that network.http.max-persistent-connections-per-server is set at "user set = 4", and if I change that number to anything else, it corresponds. If I click to 'reset' the number, it goes to the default of 6 - which is still less than what it used to be. I used to not even have a limit on this as far as I'm aware, or if I did, it was well beyond 10 and that wasn't a product of me manipulating it.
    What I'm asking is this:
    1) How do I make this so it permanently switches to a higher number? Every time I reset it to 6 or I manually change the number to something higher, once I close it, those settings go back to 4.
    2) Why is it reverting and not staying the way I change it?
    3) What's the purpose of this and why is it just recently doing it when I haven't put any new add-ons or made any other refinements to Firefox that I'm aware of?
    Any help would be appreciated

    Two possible causes for it resetting to 4 that I can think of:
    1. You have an add-on handling that pref, and it is resetting it to 4 when Firefox is restarted.
    2. The pref is set via user.js, and that causes it to go back to that value as Firefox starts.
    http://support.mozilla.com/en-US/kb/troubleshooting+extensions+and+themes
    http://kb.mozillazine.org/User.js_file
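
    For example, a user.js file in the profile folder is re-applied at every start-up, so a leftover line like the one sketched below would keep snapping the pref back to 4; changing the value, or deleting the line (or the whole file, if you never created it on purpose), stops the reset. This is only a guess at what such a line would look like:

    user_pref("network.http.max-persistent-connections-per-server", 4);

    To pin a higher limit instead, the same syntax with a larger value works, e.g. user_pref("network.http.max-persistent-connections-per-server", 8);.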

  • HTTP/1.1 Persistent Connections Revisited!

    Persistent Connections Revisited!
    There have been many posts on this subject but no real answers; I have, however, narrowed the problem down to some extent.
    Problem: Open a connection using a URL and keep the socket open so it can be reused. This is especially important for HTTPS connections.
    What works: The following simple code works from an application. It will open a single socket to www.xyz.com:1234 and reuse it for all three operations.
    What doesn't work: If this code is used in a signed applet running under Java Plug-in 1.4.0 in IE, it will open three separate connections and hold them open, but not reuse them.
    Same code, but it doesn't work where you really need it to.
    If anyone has any real answer to this PLEASE HELP!
    Code snippet:
    URL base = new URL("http://www.xyz.com:1234/");
    // Do a GET
    System.out.println("Doing GET");
    URL url = new URL(base, "index.html");
    HttpURLConnection con = (HttpURLConnection) url.openConnection();
    BufferedInputStream in = new BufferedInputStream(con.getInputStream());
    int c;
    while ((c = in.read()) != -1) {
        System.out.print((char) c);
    }
    in.close();
    // Do a POST
    System.out.println("Doing POST");
    con = (HttpURLConnection) url.openConnection();
    con.setDoOutput(true);
    byte[] somedata = "xyz=1234\n".getBytes();
    con.setRequestProperty("Content-Length", String.valueOf(somedata.length));
    BufferedOutputStream bo = new BufferedOutputStream(con.getOutputStream());
    bo.write(somedata);
    bo.flush();   // make sure the buffered body is actually sent
    in = new BufferedInputStream(con.getInputStream());
    while ((c = in.read()) != -1) {
        System.out.print((char) c);
    }
    in.close();
    // Do a GET of an image
    System.out.println("Doing GET image");
    url = new URL(base, "TestImage.gif");
    in = new BufferedInputStream(url.openConnection().getInputStream());
    int sz = 0;
    while ((c = in.read()) != -1) {
        sz++;
    }
    System.out.println("image size = " + sz);
    in.close();

    More Info:
    Upgraded to Java Plugin 1.4.1_01. This made matters worse. Now the last "GET" actually does two "GETs" creating two sockets. I have no clue as to what's up with that! Either SUN has some real serious bugs or I'm totally lost.
    Somebody please give me a hand here; it looks like my ship is sinking fast.

  • Persistent HTTPS/SSL connections

    Dear all,
    Does anybody know how to make an HTTPS/SSL connection persistent?
    We need to make multiple HTTPS requests to a server, and we found that most of the time it gets a new SSL session ID and redoes all the master-secret processing whenever it gets the new session ID.
    I have seen (with the -Djavax.net.debug=ssl option) that JSSE tries to resume, but the server sends a new session ID. Do I need to set/force anything on my side?
    Thanks in advance for the answer!
    Vijay

    We have solved the problem!
    For those who wish to know what happened
    I have done some debugging with the JRE option -Djavax.net.debug=ssl, and I could see the JSSE libraries (1.0.2, with JDK 1.3.1) trying to resume the session with the SSL session ID obtained from the previous communication; however, the server gave back a new session ID, so it had to do all the computations for the secret exchange from scratch.
    Then we found that the load balancer was sending each request to a different server, causing new session IDs to be created. The problem was solved by making our requests "sticky" at the load balancer and the SSL accelerator.
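
    A quick way to check from the client side whether a server (or the load balancer in front of it) is resuming sessions, independent of the debug output, is to compare the session IDs of two consecutive handshakes. A minimal sketch with a modern JDK; the class name and example.com host are placeholders, and it only applies to ID-based resumption:

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;
    import java.util.Arrays;

    public class SessionReuseCheck {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            byte[] first, second;
            try (SSLSocket s = (SSLSocket) factory.createSocket("example.com", 443)) {
                s.startHandshake();
                first = s.getSession().getId();    // session ID from the first handshake
            }
            try (SSLSocket s = (SSLSocket) factory.createSocket("example.com", 443)) {
                s.startHandshake();
                second = s.getSession().getId();   // matches the first one if the session was resumed
            }
            System.out.println("resumed: " + Arrays.equals(first, second));
        }
    }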

  • How do i change http max connections, http persistent, pipling to true, and proxy pipling to true

    How do I change HTTP max connections, HTTP max persistent connections, pipelining to true, and proxy pipelining to true?

    Simpler question: Can I change the max connections in Flex?
    Thank you.

  • I can't cap the number of active downloads (via persistent-connection-to-server cap) AND still be able to load pages hosted on the same server, *because*...?

    I'm aware that, by setting "network.http.max-persistent-connections-per-server" (in about:config) to 1, I'm effectively preventing myself from being able to download a single file from a server AND load a page hosted on the same server simultaneously.
    My question is, why can't Firefox tell the difference between a file download and loading a website?
    The reason I ask is that I feel I should be able to create a download queue and have my browser download one file at a time (to minimize the impact downloads have on available bandwidth and my own downstream, while allowing me to set up a long chain of sequential downloads), YET still be able to browse websites hosted on the same server.
    I do not want to use an add-on or a separate download manager to solve this problem. I think that's stupid. Firefox should be able to tell the difference between a request to load an html file and a request to download any other type of file.
    After all, I can already configure Firefox's file handling behavior for each type separately (I have, for example, previously set PDF files to prompt me to "Save as...", in order to prevent exploits from triggering when I click on a link, by preventing Firefox from opening them in-browser).
    There is no reason I shouldn't be able to do what I have said I would like to. I should not HAVE to limit the number of max server connections in order to ACHIEVE filling a simple download queue with a 1-file-at-a-time limit.

    In your Keychain under 'login' delete the VeriSign certificates and then quit and restart all browsers/itunes/app store.
    http://apple.stackexchange.com/questions/180570/invalid-certificate-after-security-update-2015-004-in-mavericks

  • Persistent connections on Cisco ACE

    Hello Team,
    Currently I have the persistence settings below for the VIPs. They are cookie-based, which matches where a session cookie is used. The cookie is set to "browser-expire", so the client's browser will expire the cookie when the session ends.
    ASH
    ====
    sticky http-cookie http-cookie-default 206.80.50.110:80
    cookie insert browser-expire
    serverfarm 206.80.50.110:80
    sticky http-cookie http-cookie-default 206.80.50.110:443
    cookie insert browser-expire
    serverfarm 206.80.50.110:443
    Please see the following snippet of information from Akamai:
    ============================================
    "It has come to our attention that your Origin does not support pconns. This may possibly result in a significant overhead. Can you enable pconns on the load balancer and also back to the Origin with a idle timeout of 301 or 302 seconds? Our default is 300 secs of idling before terminating a pconn, so one sec more will avoid a race condition (assuming your server and NLB clock is sync'ed using NTP)"
    I am assuming that this refers to TCP connections from the Akamai Edge Server to our end Load Balancer
    I am presuming that they make TCP connection(s) to the LB from the Akamai Edge Server, use them for multiple requests from clients, and need these connections to remain active for at least 301 seconds.
    Can this request be accommodated? Please assist. Thanks!
    regards,
    Karthik

    Hi Karthik,
    I believe you are confusing persistence with stickiness. What they mean by a persistent connection is one in which multiple HTTP requests are sent inside the same TCP flow, and ACE will always accept those.
    I would suggest going back to Akamai to clarify exactly what the issue is, along with some more details on how they reached that conclusion, and then we can see if there is any way to avoid the problem.
    Regards
    Daniel

  • There appears to be a persistent connection limit of 2 for cross domain requests

    I am building a service that allows anybody to add real-time push updates to their website or web application. To do this I'm using the Access-Control-Allow-Origin response header from a cloud-hosted server.
    When establishing a connection we create one persistently connected HTTP connection and use a second connection to send requests (subscriptions/commands).
    The problem I'm seeing looks very similar to the old two-sockets-per-server connection limit (related to the about:config pref "network.http.max-persistent-connections-per-server").
    Is there a limitation on the number of persistent cross-domain connections that can be established from a single Firefox process? If so, where are the configuration options to override this?
    This problem can be easily duplicated by going to the following page in two tabs within the same Firefox process:
    http://kwwika.com/Standalone/Demos/WorldCup2010/
    You will notice that one page connects and starts to receive updates. The second will not connect until you close the first tab/page.
    I'd like to be able to provide feedback to the users of Kwwika about this problem so any information you can provide me with would be very much appreciated.
    == URL of affected sites ==
    http://kwwika.com/Standalone/Demos/WorldCup2010/
    == User Agent ==
    Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.70 Safari/533.4

    [bump]

  • Does HttpURLConnection use persistent Connection ?

    Hi
    Does anyone know how to make persistent connections using HttpURLConnection, or how to reuse an HttpURLConnection so that I don't need to create a new connection every time for the requested URL? Say I am currently connecting to some website using urlobject.openConnection(). I configure a request and then read the response from that website. Now suppose I want to send another request to the same website; how can I do that? Do I need to create a new connection to the same website with urlobject.openConnection() every time, or is there a trick by which I can reuse the HttpURLConnection?
    Also, how can a client make pipelined requests in Java? I am using the HttpURLConnection class.
    Any further comment or suggestions will be appreciated.
    Thanks.

    Yes, it uses persistent connections by default: open a new HttpURLConnection for each request and read every response to the end, and the underlying socket is reused for you. It does not support pipelining (many HTTP servers don't support that either).

  • Java.io.IOException: Persistent connection dropped

    Hi all,
    I am currently testing, in the emulator, a MIDlet that uses an HTTP connection to talk to a server. However, I keep getting the following exception about a minute after the connection is established:
    java.io.IOException: Persistent connection dropped after first chunk sent, cannot retry
    at com.sun.midp.io.j2me.http.Protocol.sendRequest(+102)
    at com.sun.midp.io.j2me.http.Protocol.sendRequest(+6)
    at com.sun.midp.io.j2me.http.Protocol.closeOutputStream(+4 )
    at com.sun.midp.io.BaseOutputStream.close(+14)
    at java.io.DataOutputStream.close(+7)
    at Conn.sendData(+25)
    at Conn.doPost(+17)
    at Conn.upload(+4)
    at UploadThread.run(+7)
    Here is the code for my sendData() method
    public static void sendData(HttpConnection connection, String data) throws IOException {
        byte[] dataOut = data.getBytes();
        DataOutputStream os = connection.openDataOutputStream();
        try {
            os.write(dataOut);
            os.flush();
        } finally {
            os.close();
        }
    }
    Any help or suggestions as to why I am getting this exception would be greatly appreciated.

    "Should I use a persistent connection, and could you try to explain what the exception actually means?" I just thought that the connection may be getting dropped somehow (maybe a timeout), and that it may be related to closing the OutputStream you obtain from the connection.
    "Every time I send data I create a new thread that opens up a new connection, sends the data and receives the response. I do this as I am only sending and receiving very small amounts of data." I am thinking that there could be a better design for your application. Is it possible?
    1. If you are using the same connection (I mean the socket connection, not the OutputStream), why do you close the OutputStream after sending each piece of data? As far as I know, flush() already does that job. So why don't you close both the connection and the OutputStream only when your program's job ends?
    2. If you are not using the same connection for each thread, then (again) why don't you open and close both the connection and the OutputStream in the finally block of your thread?
    Regards.
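
    A rough sketch of that second suggestion, using the MIDP Generic Connection Framework classes already visible in the stack trace; the class name, URL and payload are placeholders, not the poster's actual code:

    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class UploadTask extends Thread {
        public void run() {
            HttpConnection conn = null;
            DataOutputStream os = null;
            try {
                conn = (HttpConnection) Connector.open("http://example.com/upload");  // placeholder URL
                conn.setRequestMethod(HttpConnection.POST);
                os = conn.openDataOutputStream();
                os.write("data".getBytes());                                          // placeholder payload
                os.flush();
                System.out.println("Response code: " + conn.getResponseCode());       // forces the exchange to complete
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                // open and close everything in one place, once per thread
                try { if (os != null) os.close(); } catch (IOException ignored) { }
                try { if (conn != null) conn.close(); } catch (IOException ignored) { }
            }
        }
    }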

