URLConnection POST to external server requiring keep-alive fails because request is HTTP/1.0

          I have a class that, when run as a "main", transmits an HTTP/1.1 POST successfully
          to an external server. This external server requires keep-alive connections.
          However, when instantiated inside a WebLogic servlet container, the POST fails
          because the HTTP protocol is set to HTTP/1.0. I have tried this with V5.1 SP11
          and then with V6.1 SP2, with the same result. The code works under Tomcat.
          I can find no way to force HTTP/1.1 in the URLConnection. Any suggestions?

Great. I have a question for the BEA folks, if they ever read this newsgroup:
          what is the reason for installing the WLS protocol handlers, and, if there is
          one, why is the implementation still buggy? I have seen many, many instances where
          code making outgoing connections failed to work in WLS, and the solution is
          always the same: use the handler which comes with the JVM.
          Bob Bowman <[email protected]> wrote:
          > <[email protected]> wrote:
          >> If it works as a standalone application and fails inside WebLogic, most likely
          >> this is caused by the WebLogic http handler implementation. You can try to
          >> modify your code like this:
          >>
          >> URL url = new URL(null, "http://some_url", new sun.net.www.protocol.http.Handler());
          >> HttpURLConnection conn = (HttpURLConnection)url.openConnection();
          >>
          >> (You will need to modify weblogic.policy to allow your code to specify a
          >> protocol handler.)
          >>
          >> Bob Bowman <[email protected]> wrote:
          >>
          >>> I have a class that when run as a "main" transmits an HTTP/1.1 POST
          >>> successfully to an external server. This external server requires
          >>> keep-alive connections. However when instantiated inside a weblogic
          >>> servlet container, the POST fails because the HTTP protocol is set to
          >>> HTTP/1.0. I have tried this with V5.1 SP11 and then with V6.1 SP2 with
          >>> the same result. The code works under Tomcat.
          >>
          >>> I can find no way to force HTTP/1.1 in the URLConnection. Any suggestions?
          >>
          >> --
          >> Dimitri
          > Worked like a champ! Thanks.
          Dimitri
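The fix quoted above hinges on the three-argument URL constructor, which lets the caller pick the protocol handler explicitly instead of taking whatever handler the container has installed. Here is a minimal, self-contained sketch of that mechanism using a made-up stand-in handler; in the WebLogic workaround you would pass `new sun.net.www.protocol.http.Handler()` (the JDK's own http handler) instead, and adjust weblogic.policy as described.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class HandlerDemo {
    // Stand-in handler that records whether it was asked to open the connection.
    // In the WebLogic workaround you would pass new sun.net.www.protocol.http.Handler()
    // (the JDK's own http handler) instead of this class.
    static class RecordingHandler extends URLStreamHandler {
        boolean used = false;

        @Override
        protected URLConnection openConnection(URL u) {
            used = true;
            return new URLConnection(u) {
                @Override
                public void connect() {
                    // no-op: the demo never actually talks to the network
                }
            };
        }
    }

    public static boolean handlerIsUsed() {
        RecordingHandler handler = new RecordingHandler();
        try {
            // Three-argument constructor: context, spec, and an explicit handler
            // that overrides whatever handler the container has installed.
            URL url = new URL(null, "http://example.invalid/post", handler);
            url.openConnection();
        } catch (IOException e) {
            return false;
        }
        return handler.used;
    }

    public static void main(String[] args) {
        System.out.println("custom handler used: " + handlerIsUsed());
    }
}
```

Note that under a security manager this constructor requires the `specifyStreamHandler` runtime permission, which is exactly why the weblogic.policy change is needed.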
          

Similar Messages

  • The document could not be saved. The server said: "The operation failed because an unexpected error occurred. (Result code 0x80020005)" Please ensure you have completed all required properties with the correct information and try again.

    I am having problems saving documents back to SharePoint when any of the document properties (metadata columns) are set to be "managed metadata". The check-in/save fails with the error:
    The document could not be saved. The server said:
    "The operation failed because an unexpected error occurred. (Result code 0x80020005)"
    Please ensure you have completed all required properties with the correct information and try again.
    I have seen similar threads that suggest this is a known issue with this version of Acrobat, but I would like confirmation from Adobe that this is a known issue, and whether it is fixed in a newer version.
    Adobe Acrobat version 10.1.13
    SharePoint 2010

    Hi quodd,
    We are sorry for the issue you are facing. I need some information from you so that I can take further steps:
    1. Which Adobe product are you using, Acrobat or Adobe Reader, and what is the complete version?
    2. How are you opening and saving the PDF (the exact workflow)?
         Are you doing it from within the Adobe Reader/Acrobat application, or opening it from the browser, making changes, and saving it using the browser itself?
    3. Can you try to save a PDF to the library with a custom template and managed metadata columns using the browser directly?
    4. Please verify that the column names do not contain spaces or other special characters.
       Can you try to save a PDF to the library with a custom template and just a single managed metadata column with a simple name?
    Thanks,
    Nikhil Gupta

  • Seagate external drive: Partition map check failed because no slices were found

    I have a 3TB Seagate Expansion external backup drive connected to my Retina Macbook Pro via USB. The disk will not eject from a regular Finder window - it will only eject from Disk Utility. Backups seem to otherwise be running fine. I tried to verify/repair the disk in Disk Utility, and I got the following error: "Partition map check failed because no slices were found." I am able to verify/repair the partition without any problem, and no errors are found.
    I'm concerned about relying on a backup drive that may be heading south. Here is the output from diskutil info:
    diskutil info disk3
       Device Identifier:        disk3
       Device Node:              /dev/disk3
       Part of Whole:            disk3
       Device / Media Name:      Seagate Expansion Desk Media
       Volume Name:              Not applicable (no file system)
       Mounted:                  Not applicable (no file system)
       File System:              None
       Content (IOContent):      GUID_partition_scheme
       OS Can Be Installed:      No
       Media Type:               Generic
       Protocol:                 USB
       SMART Status:             Not Supported
       Total Size:               3.0 TB (3000592977920 Bytes) (exactly 5860533160 512-Byte-Units)
       Volume Free Space:        Not applicable (no file system)
       Device Block Size:        4096 Bytes
       Read-Only Media:          No
       Read-Only Volume:         Not applicable (no file system)
       Ejectable:                Yes
       Whole:                    Yes
       Internal:                 No
       OS 9 Drivers:             No
       Low Level Format:         Not supported

    @ Allan Eckert: Unfortunately, reformatting is out of the question, I have 5 years of work on this!
    @ Loner T: Yes, the firmware did successfully upgrade after a router reboot and a bit of troubleshooting.
    Thanks for the quick replies. Keep 'em coming!

  • Using SSL Module to Encrypt HTTP post to external Server

    I would like to know if it's possible for a CSM with its SSL module to receive an HTTP POST from our internal web servers, encrypt that POST with SSL, and forward the newly created SSL transmission to a remote external SSL server. If it is possible, is this good practice, or is it better to let the web server do the encryption?

    Yes, this is possible.
    It is good practice if you do not want to overload your server with the heavy task of encryption/decryption.
    If your server is very powerful and far from being used to its maximum capacity, you can do it on the server.
    Another advantage of using an SSL module is that the CSM will see your request in clear text and can therefore perform some *smart* load balancing before it gets encrypted by the SSL module.
    [i.e.: cookie stickiness, url hashing, ...]
    Regards,
    Gilles.

  • Http keep-alive with SOAP webservices

    Just had an interesting experience with a web service set up behind UAG. Under low load conditions all SOAP responses were coming across with no issue. Once a high load was introduced by the client app, the behavior changed: an initial request was processed and subsequent requests were rejected by UAG with a "Bad Parameter : Length" message, yet a (relatively) high load from a web browser ran without issue. The client app had to wait ~100 seconds before the next request would process correctly; browser works, app has problems. It turns out it was the keep-alive "default" settings on both the client app server and my web service in IIS. Apparently UAG treated the 2nd (and subsequent, within 100 seconds) requests as a parameter of the first request, which was way too long and got rejected. I unchecked the keep-alive enabled box on the HTTP response header (Set Common Headers) and all is fine now! Apparently the browsers were closing the connection with each page load and the service responded appropriately, so it looked like all was well. Using SOAP UI revealed the error, but it also had its connection set to keep-alive; once that was disabled in SOAP UI, all requests ran correctly. That's when I poked around and found that response header setting.

    ejp wrote:
    > and also an incorrect implementation of the timeout period. This code will still wait forever if no data arrives.
    You're right... actually I didn't mean to put that while loop there! What I meant was simply:
    if (!reader.ready())
        wait(keepAliveTimeout);
    if (!reader.ready())
        break mainLoop;
    // If we get here there is a new request to read
    ...and I agree that it's ugly; that's why I'm asking you guys for help!
    setSoTimeout() is of course a way to go... I didn't think of that, although I have kind of already added it to my code, but with a different timeout. Thanks!
    Last question then:
    does reader.readLine() block like reader.read(), or do I have to use the latter?
    I would test for myself if I could, but at the moment I can't...
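To the last question: readLine() blocks just like read(), and both honor setSoTimeout(), which makes a blocked read throw SocketTimeoutException instead of waiting forever. A small self-contained sketch (loopback sockets only, nothing external assumed; the 200 ms value is an arbitrary stand-in for a keep-alive timeout):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SoTimeoutDemo {
    // Returns true if a blocking read on an idle socket is interrupted by
    // SocketTimeoutException once setSoTimeout() has been set.
    public static boolean readTimesOut() {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            client.setSoTimeout(200); // stand-in keep-alive timeout: 200 ms
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            reader.readLine();        // blocks exactly like read(); no data ever arrives
            return false;             // only reached if something was actually sent
        } catch (SocketTimeoutException e) {
            return true;              // the timeout fired instead of waiting forever
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("read timed out: " + readTimesOut());
    }
}
```

So the while loop (and the ready() polling) can be dropped entirely; a plain readLine() with SO_TIMEOUT set gives the same keep-alive cutoff.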

  • Just FYI, new blog post "More Windows Server 2008 Guides Available in TechNet Gallery"

    Just FYI, new blog post "More Windows Server 2008 Guides Available in TechNet Gallery" at
    http://aka.ms/Sqatv1
    Thanks -
    James McIllece

    Hi James,
    Thanks for sharing!
    Best Regards.
    Steven Lee
    TechNet Community Support

  • Test-ActiveSyncConnectivity fails with The remote server returned an error: (400) Bad Request.

    Hi all,
    I'm in the process of transitioning from Exchange 2003 to 2010, and everything is going perfectly alright; however, ActiveSync is bugging me!
    When I try to test ActiveSync I get the following error:
    [PS] C:\>Test-ActiveSyncConnectivity -MailboxCredential $user -TrustAnySSLCertificate |FL
    RunspaceId                  : 136b8f68-26ec-4e29-a5bb-cf5ee816e04b
    LocalSite                   : SITE
    SecureAccess                : True
    VirtualDirectoryName        :
    Url                         :
    UrlType                     : Unknown
    Port                        : 0
    ConnectionType              : Plaintext
    ClientAccessServerShortName : cas01
    LocalSiteShortName          : SITE
    ClientAccessServer          : CASSERVERNAME
    Scenario                    : Options
    ScenarioDescription         : Issue an HTTP OPTIONS command to retrieve the Exchange ActiveSync protocol version.
    PerformanceCounterName      :
    Result                      : Success
    Error                       :
    UserName                    : user1
    StartTime                   : 12/12/2012 1:02:23 PM
    Latency                     : 00:00:00.0312496
    EventType                   : Success
    LatencyInMillisecondsString : 31.25
    Identity                    :
    IsValid                     : True
    RunspaceId                  : 136b8f68-26ec-4e29-a5bb-cf5ee816e04b
    LocalSite                   : Reckon_NS
    SecureAccess                : True
    VirtualDirectoryName        :
    Url                         :
    UrlType                     : Unknown
    Port                        : 0
    ConnectionType              : Plaintext
    ClientAccessServerShortName : CASSERVERNAME
    LocalSiteShortName          : SITE
    ClientAccessServer          : CASSERVERNAME
    Scenario                    : FolderSync
    ScenarioDescription         : Issue a FolderSync command to retrieve the folder hierarchy.
    PerformanceCounterName      : DirectPush Latency
    Result                      : Failure
    Error                       : [System.Net.WebException]: The remote server returned an error: (400) Bad Request.
                                  HTTP response headers:
                                  MS-Server-ActiveSync: 6.5.7638.1
                                  Content-Length: 46
                                  Cache-Control: private
                                  Content-Type: text/html
                                  Date: Wed, 12 Dec 2012 02:02:23 GMT
                                  Server: Microsoft-IIS/7.5
                                  X-AspNet-Version: 2.0.50727
                                  X-Powered-By: ASP.NET
    UserName                    : user1
    StartTime                   : 12/12/2012 1:02:23 PM
    Latency                     : -00:00:01
    EventType                   : Error
    LatencyInMillisecondsString :
    Identity                    :
    IsValid                     : True
    environment: 
    Ex 2003 'Exchange' virtual directory permission: Integrated Windows Authentication, Basic 
    Ex 2003 'OMA' permission: Basic Authentication
    Ex 2003 'ActiveSync' permission: Integrated, Basic
    Ex 2010 successfully redirects users from 2010 to 2003 webmail if you login to OWA with a mailbox on 2003

    Yes Martina,
    It has been done through ESM 
    I cannot test using testexchangeconnectivity.com since I cannot put the 2010 one into production, I will get into trouble if I change the DNS record to the new mail server!
    Yes, EAS works perfectly fine with 2010 mailboxes.
    OK.
    It might be that it's not possible to run Test-ActiveSyncConnectivity against a mailbox stored in Exchange 2003.
    Installing KB937031 and enabling Windows Authentication is really all that needs to be done in EX03, in order for Exchange 2010 to proxy the EAS requests.
    Martina Miskovic

  • HTTP POST from SAP to an external server

    Experts.
    I have an XML file encased in MIME and SOAP format. Essentially it's a .xml file.
    I need to post this to an external server (I have the IP address and logon credentials) using HTTP POST functionality.
    Can this be accomplished in SAP using ABAP or a function module? I need to post the entire file.
    If anyone has done this, can you please post the steps needed?
    Thank you so much.
    Raj

    Hi Raj,
    a good starting point for you would be the SAP Help. [Here|http://help.sap.com/saphelp_nw04/helpdata/en/1f/93163f9959a808e10000000a114084/frameset.htm] is some sample code showing how to make an HTTP call from ABAP.
    Cheers
    Graham Robbo
    Edited by: Graham Robinson on Oct 28, 2009 2:44 PM

  • 3270 Emu program keep alive setup cause 7507(TN3270 Server) Down

    Hi,
    I need all of your suggestions.
    My cx uses a 3270 emulator program which enables a 30-second keep-alive in its default config.
    When a client idles for 30 seconds, it auto-triggers a keepalive to the TN3270 server (7507).
    But the 7507 sends back TN_DEV_IN_USE (TN device in use) to the client, and the client must reconnect to the server.
    Does anyone know how to stop the 7507 sending this to the client?
    My cx has over four hundred 3270 clients, and it takes too long to configure each client individually,
    so is there any command that can be issued on the 7507 to bypass this?
    tks

    I don't know if these commands will bypass the problem, only that they might. I'm really guessing here, because the result you describe doesn't add up. My guess is that your emulator is missing a timing-mark or no-op keepalive from the server, and when the client then sends its own keepalive the server views it as a sequence error and closes the session. The best way to work this out is probably to start with a trace of a complete session.

  • URLConnection send data to server

    I am using the following code to upload large binary data to an http
    server.
    URL u = new URL(urlString);
    URLConnection c = u.openConnection();
    c.setDoOutput(true);
    c.setDoInput(true);
    c.setUseCaches(false);
    // set some request headers
    c.setRequestProperty("Connection", "Keep-Alive");
    // get codebase of this (the applet) to use for referer
    c.setRequestProperty("HTTP_REFERER", codebase);
    c.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);
    DataOutputStream dstream = new DataOutputStream(c.getOutputStream());
    for (int i = 0; i < 1000; i++) {
        byte[] data = new byte[1000];
        data = ....
        dstream.write(data, 0, 1000);
    }
    dstream.writeBytes("\r\n--" + boundary + "--\r\n");
    dstream.flush();
    dstream.close();
    I found that the data was never sent. If I put the following line after
    the code above, however, the data got sent. Why?
    c.getInputStream();

    Thank you for your post.
    I have another question. If the data is not sent until I call c.getInputStream(), how can I monitor the progress of the upload and display the progress with a JProgressBar?
    Thanks,
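The behavior described above is expected: by default, HttpURLConnection buffers the entire request body in memory and only transmits it when the response is requested (getInputStream() or getResponseCode()), so nothing hits the wire during the write loop and there is no real progress to observe. Enabling a streaming mode makes each write() go out immediately, so a byte counter inside the loop can drive the JProgressBar. Here is a self-contained sketch against a throwaway local server (the /upload path and chunk sizes are invented for the demo):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class StreamingUploadDemo {
    // Uploads 10 chunks of 1000 bytes with fixed-length streaming mode, so each
    // write() hits the wire immediately and 'sent' can drive a progress bar.
    public static int upload() {
        try {
            // Throwaway local server that just drains the body and answers 200.
            HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
            server.createContext("/upload", exchange -> {
                exchange.getRequestBody().readAllBytes();
                exchange.sendResponseHeaders(200, -1); // 200, no response body
                exchange.close();
            });
            server.start();
            try {
                URL url = new URL("http://localhost:"
                        + server.getAddress().getPort() + "/upload");
                HttpURLConnection c = (HttpURLConnection) url.openConnection();
                c.setDoOutput(true);
                byte[] chunk = new byte[1000];
                int total = 10 * chunk.length;
                // Without this call the connection buffers everything and only
                // sends it when getInputStream()/getResponseCode() is called.
                c.setFixedLengthStreamingMode(total);
                int sent = 0;
                try (OutputStream out = c.getOutputStream()) {
                    for (int i = 0; i < 10; i++) {
                        out.write(chunk);
                        sent += chunk.length; // report this value to the JProgressBar
                    }
                }
                return c.getResponseCode(); // still needed to complete the exchange
            } finally {
                server.stop(0);
            }
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println("response code: " + upload());
    }
}
```

setChunkedStreamingMode() works the same way when the total length is not known in advance; in both cases you must still read the response afterwards to finish the request.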

  • Workstation keep-alive and connectivity failure detection

    Dear all,
    I intend to automatically detect network failures from my workstation client and have it attempt to reconnect to the server on its own.
    Browsing the documentation for related information I saw the keep-alive option. If I understood correctly you can have the workstation generate periodic keep-alive messages to the server and if these are not acknowledged, all subsequent ATMI calls will fail with TPESYSTEM.
    My doubts regarding this are the following:
    1) Is there some dummy ATMI function I can invoke periodically to check for a disconnection? Otherwise network failures might go by unnoticed if there are periods where my WSC doesn't require to communicate with the server.
    2) Once I get a TPESYSTEM failure, is there some way to programmatically discriminate whether the cause is a network disconnection? It is a pretty generic error after all.
    3) Once all ATMI functions start failing due to keep-alive detecting a network failure, can the system recover on its own, or must I reconnect to the server? What I mean is: maybe some keep-alive messages were lost and the system passes to the 'disconnected' state, but then the K.A. messages keep being generated, and upon correct acknowledgement of these, subsequent ATMI calls will stop failing.
    4) Does the WSC go to the 'disconnected' state after the first unanswered keep-alive message, or is there some sort of tolerance like, say, 'If N messages in a row are not answered then assume the connection to be dead'?
    I'm sorry if it was too long a post, but neither searching the documentation nor this forum clarified any of these issues for me.
    Many thanks for your attention.
    Regards.

    Cacho,
    1. I assume that you're talking about the WSL CLOPT option "-K
    {client|handler|both|none}", which causes periodic keepalive messages to be
    sent. When this option is used, there is no need for the WSC to perform any
    periodic operations to keep the network connection alive.
    However, if you are worried that the network might suffer a transient
    failure and you want to perform your first ATMI call after an extended
    period of inactivity as quickly as possible without waiting for another
    tpinit(), you may want to perform periodic dummy operations so that you can
    reissue the tpinit() if needed as soon as the network connection is
    restored.
    One option is to make a tpcall() to the ".TMIB" service with an FML32 buffer
    where TA_OPERATION is set to "GET" and the other fields are set to retrieve
    some system attribute. There is an example of how to call .TMIB at the end
    of the TM_MIB(5) manual page. You would want to perform a GET operation
    rather than the SET operation shown in that example.
    Another option is to write a null service in one of your application servers
    which just calls tpreturn() and have your client make periodic calls to that
    service.
    Calling a verb such as tpalloc() or tpconvert() would not be sufficient,
    since these calls do not result in any network traffic.
    2. The tperrordetail() function is available in newer releases of Tuxedo to
    provide additional information about certain Tuxedo errors. When a
    workstation client is disconnected, tperrordetail(0) will return TPED_TERM.
    3. If there is a network failure, workstation clients must call tpinit() to
    reauthenticate to the system. This is necessary for security reasons.
    4. The keepalive option is passed down to the underlying network provider
    and Tuxedo has no further involvement with keepalive once it informs the
    network of this option. For Sockets the option passed is SO_KEEPALIVE; you
    can do a web search on SO_KEEPALIVE to see exactly how the network treats
    keepalive messages on your particular platform.
    Ed
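    Ed's point 4 (the keepalive option being handed straight to the network provider as SO_KEEPALIVE) has a direct equivalent at the sockets level. As an illustration only, assuming nothing about Tuxedo itself, this is what enabling SO_KEEPALIVE looks like in Java; the probe interval and retry count are then governed by the OS, exactly as described above:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class KeepAliveDemo {
    // Enables SO_KEEPALIVE on a loopback client socket and reports whether
    // the option stuck; the OS handles the actual keepalive probes from here.
    public static boolean keepAliveEnabled() {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            client.setKeepAlive(true);   // hands SO_KEEPALIVE down to the OS
            return client.getKeepAlive();
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("SO_KEEPALIVE: " + keepAliveEnabled());
    }
}
```

On most systems the default probe interval is long (often two hours), which is why application-level keepalives like the WSL -K option are still useful for fast failure detection.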

  • HttpConnection keep-alive

    Hello!
    Guys, I need your help to create a connection to a web server with DataOutputStream and DataInputStream.
    The problem is that I need a connection that stays open all the time so that I can write data to and read data from the server.
    I made one variant, but it worked only the first time; after that it says the connection is already open.
    Here is the code, a little bit different; now it says "Write attempted after request finished".
    Here is the source code:
    package super_7_blackjack;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;
    public class HTTPQuery {
        private HttpConnection connection;
        private DataOutputStream dos;
        private DataInputStream dis;
        private String URL;
        private String sessionId;
        public HTTPQuery(String URL, String sessionId) {
            System.out.println("URL=" + URL);
            this.URL = URL;
            this.sessionId = sessionId;
            try {
                connection = (HttpConnection) Connector.open(URL);
                connection.setRequestProperty("Content-type", "application/x-www-form-urlencoded");
                connection.setRequestProperty("cookie", "JSESSIONID=" + sessionId + "; AnyJavaPresent=1.4.2_05");
                connection.setRequestProperty("Cache-Control", "no-cache");
                connection.setRequestProperty("Pragma", "no-cache");
                connection.setRequestProperty("User-Agent", "Mozilla/4.0 (Windows XP 5.1) Java/1.4.2_05");
                connection.setRequestProperty("Accept", "text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2");
                connection.setRequestProperty("Connection", "keep-alive");
                connection.setRequestMethod("POST");
                if (connection.getResponseCode() == HttpConnection.HTTP_OK) {
                    dos = new DataOutputStream(connection.openOutputStream());
                    dis = new DataInputStream(connection.openInputStream());
                }
            } catch (IOException ieox) {
                System.out.println("Error while opening connection.");
            }
        }
        public synchronized String executeRequestQuery(String query) {
            StringBuffer b = new StringBuffer();
            long len = 0;
            try {
                dos.write(query.getBytes());
                dos.flush();
                DataInputStream dis = new DataInputStream(connection.openInputStream());
                len = connection.getLength();
                if (len != -1) {
                    byte[] servletData = new byte[(int) len];
                    dis.readFully(servletData);
                    b.append(new String(servletData));
                }
            } catch (Exception e) {
                System.out.println("Error while closing: " + e.getMessage());
            }
            System.out.println(b.toString());
            return b.toString();
        }
    }
    and I use it like this:
    httpQuery = new HTTPQuery(URL, sessionId);
    httpQuery.executeRequestQuery("CREATESESSION=NO&CMD=ENTER&DBLINK=DBLINKFREE1");

    package super_7_blackjack;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;
    public class HTTPQuery {
        private HttpConnection connection;
        private DataOutputStream dos;
        private DataInputStream dis;
        private String URL;
        private String sessionId;
        public HTTPQuery(String URL, String sessionId) {
            System.out.println("URL=" + URL);
            this.URL = URL;
            this.sessionId = sessionId;
            try {
                connection = (HttpConnection) Connector.open(URL);
                connection.setRequestProperty("Content-type", "application/x-www-form-urlencoded");
                connection.setRequestProperty("cookie", "JSESSIONID=" + sessionId + "; AnyJavaPresent=1.4.2_05");
                connection.setRequestProperty("Cache-Control", "no-cache");
                connection.setRequestProperty("Pragma", "no-cache");
                connection.setRequestProperty("User-Agent", "Mozilla/4.0 (Windows XP 5.1) Java/1.4.2_05");
                connection.setRequestProperty("Accept", "text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2");
                connection.setRequestProperty("Connection", "keep-alive");
                connection.setRequestMethod("POST");
                if (connection.getResponseCode() == HttpConnection.HTTP_OK) {
                    dos = new DataOutputStream(connection.openOutputStream());
                    dis = new DataInputStream(connection.openInputStream());
                }
            } catch (IOException ieox) {
                System.out.println("Error while opening connection.");
            }
        }
        public synchronized String executeRequestQuery(String query) {
            StringBuffer b = new StringBuffer();
            long len = 0;
            try {
                dos = new DataOutputStream(connection.openOutputStream());
                dos.write(query.getBytes());
                dos.flush();
                DataInputStream dis = new DataInputStream(connection.openInputStream());
                len = connection.getLength();
                if (len != -1) {
                    byte[] servletData = new byte[(int) len];
                    dis.readFully(servletData);
                    b.append(new String(servletData));
                }
                dos.close();
                dis.close();
            } catch (Exception e) {
                System.out.println("Error while closing: " + e.getMessage());
            }
            System.out.println(b.toString());
            return b.toString();
        }
    }
    See in the block above what you need to change: this will not close your HttpConnection, it will only close the stream connections, which are not needed any more.
    Try this code; I hope it helps you.
    Regards,
    Jasmit Vala
    [email protected]

  • Weblogic proxy plugin closes keep-alive connections to clients randomly

    In short, we have the following architecture:
    clients ---> wl proxy plugin 1 ----> weblogic 1
    clients ---> wl proxy plugin 2 ----> weblogic 2
    Because of application/installation-specific requirements, we are not using failover; one wl proxy always forwards requests to one weblogic (simple configuration).
    The application is TR-069 protocol based (SOAP over HTTP), so it relies very much on persistent TCP connections (Connection: keep-alive). This TCP persistence has to work correctly so that TR-069 messages are exchanged in the required order; otherwise we get an error at the application layer.
    Here and there we've noticed application errors which suggest that we have some problems in the TCP connection between the client and the weblogic server. After sniffing, we've noticed that the weblogic proxy plugin (Apache) randomly, or because of some other reason we do not know, decides to close the TCP connection to the client, even though the app on weblogic did not request it.
    As a result, the client opens a new connection to the server with a new TR-069 session and gets bounced because it already has one open on the weblogic server.
    We've sniffed and traced everything we could, and we searched for patterns in time, etc., but we cannot find the reason why the proxy plugin decides to close the connection to the client (not to the weblogic server).
    Trace (replaced sensitive information):
    Thu Apr 29 15:05:50 2010 <958012725463463784> URL::parseHeaders: CompleteStatusLine set to [HTTP/1.1 200 OK]
    Thu Apr 29 15:05:50 2010 <958012725463463784> URL::parseHeaders: StatusLine set to [200 OK]
    Thu Apr 29 15:05:50 2010 <958012725463463784> parsed all headers OK
    Thu Apr 29 15:05:50 2010 <958012725463463784> sendResponse() : r->status = '200'
    Thu Apr 29 15:05:50 2010 <958012725463463784> canRecycle: conn=1 status=200 isKA=1 clen=545 isCTE=0
    Thu Apr 29 15:05:50 2010 <958012725463463784> closeConn: pooling for '$IP$/$PORT$'
    Thu Apr 29 15:05:50 2010 <958012725463463784> request [$URL$] processed successfully..................
    !!!! Now it closes the TCP connection and inserts "Connection: close" HTTP header !!!
    WL proxy plugin conf params are:
    WebLogicCluster $IP$:$PORT$
    DynamicServerList OFF
    KeepAliveTimeout 90
    MaxKeepAliveRequests 0
    KeepAliveSecs 55
    Apache worker configuration is:
    <IfModule mpm_worker_module>
    PidFile var/run/httpd-worker.pid
    LockFile var/run/accept-worker.lock
    StartServers 2
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadLimit 200
    ThreadsPerChild 200
    MaxClients 2000
    MaxRequestsPerChild 0
    AcceptMutex pthread
    </IfModule>
    Why does the weblogic proxy plugin ignore the Keep-Alive directive and decide to close the connection to the client by itself?
    Any help?

    If a WebLogic Server instance listed in either the WebLogicCluster parameter or a dynamic cluster list returned from WebLogic Server fails, the failed server is marked as "bad" and the plug-in attempts to connect to the next server in the list.
    MaxSkipTime sets the amount of time after which the plug-in will retry the server marked as "bad." The plug-in attempts to connect to a new server in the list each time a unique request is received (that is, a request without a cookie).
    Note: The MaxSkips parameter has been deprecated and replaced by the MaxSkipTime parameter.
    See also here: http://download-llnw.oracle.com/docs/cd/E13222_01/wls/docs81/plugins/plugin_params.html
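    For reference, connection pooling between the plug-in and WebLogic Server is governed by plug-in parameters such as KeepAliveEnabled and KeepAliveSecs. A sketch with illustrative values (check them against the parameter reference linked above before using):

```apache
WebLogicCluster $IP$:$PORT$
DynamicServerList OFF
# Pool and reuse connections from the plug-in to WebLogic (ON is the default)
KeepAliveEnabled ON
# Idle seconds before a pooled connection to WebLogic is closed
KeepAliveSecs 55
# Seconds before retrying a server marked "bad" (illustrative value)
MaxSkipTime 10
```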
    You said the problem arises under significant load. It may be wise to tune the number of file descriptors on your operating system. HTTP connections are nothing more than TCP sockets, and all modern operating systems treat sockets as a specialized form of file access, using data structures called file descriptors to track open sockets and files for each process. To control resource usage, the operating system restricts the number of open file descriptors per process. Also be aware that every TCP connection gracefully closed by an application goes into the TIME_WAIT state before being discarded by the operating system.
    On most Unix systems you can use netstat -a | grep TIME_WAIT | wc -l to determine the number of sockets in the TIME_WAIT state. Check with your system administrator how to tune the tcp_time_wait_interval. On Solaris you can use: /usr/sbin/ndd -set /dev/tcp tcp_time_wait_interval 60000

  • How to Keep Alive a specific HFM application

    My PRD landscape has 2-3 applications. Whenever an application loads into memory for the first time, it takes quite some time. Once the application is loaded, response times are quick and acceptable.
    During specific parts of the day / month, a specific application becomes active. During this time, other applications may or may not be used, but one specific application is definitely used. I want to keep this application's cubes loaded in the RAM, so that response times are quick, whenever the user needs it - whether for the first time or otherwise.
    How can I keep a specific application's cubes loaded in RAM? I am not looking for a general solution like setting the "FM Service" to Automatic (currently it is set to Manual), since that would affect all applications. I am looking to target a specific application during a specific period of time.
    I am ok with a custom solution if required. Let me know, what are my options.

    I think you are looking for an inappropriate solution to your problem. Most HFM applications take a few seconds to start up. If your application takes such a long time to start up you really need to examine the root cause. Start by removing the Sub NoInput and Sub Input routines in your rules to see if this is the reason the application takes so long to start.
    If the removal of those routines does not significantly change the startup time, look through the event logs for possible causes. It is possible you have a connection problem with the Oracle database for example, or you have an environment configuration problem where the HFM application servers have a very slow connection to the database server, or possibly the database server performs poorly.
    I strongly suggest you investigate these root causes first before trying to make HFM behave in a non-standard way. Certainly you can start the HFM Management Service, which will start all of your HFM applications and keep them up and running. This approach does not allow the applications to shut down on their own, though. Only use it if you have exhausted the investigations suggested above.
    --Chris

  • How do I change the http connection type from close to keep-alive

    I am using a browser that appears to need a connection type of keep-alive. When a page is requested, the server sends back a connection type of close. It appears images are not requested from the server when this connection type is returned.

    Some older web servers provide inaccurate Content-Length information. If the Content-Length value is less than the amount of data sent, the web server treats the difference as a new request; this creates problems with iPlanet Web Server.
    If you are using a browser with HTTP/1.1 enabled, choose the option to enable it manually and try posting the request again.
    Hope it helps.
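    The usual server-side fix is to make the Content-Length header match the body byte count exactly, so the receiver has no leftover bytes to misinterpret as a new request. A minimal sketch of building such a response (class name and values are illustrative, not iPlanet-specific):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ContentLengthDemo {
    // Build a minimal HTTP response whose Content-Length matches the body
    // byte count exactly; an inaccurate value is what breaks keep-alive.
    static byte[] buildResponse(String body) throws IOException {
        byte[] bodyBytes = body.getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(("HTTP/1.1 200 OK\r\n"
                + "Content-Type: text/html; charset=UTF-8\r\n"
                + "Content-Length: " + bodyBytes.length + "\r\n"
                + "Connection: keep-alive\r\n"
                + "\r\n").getBytes(StandardCharsets.US_ASCII));
        out.write(bodyBytes);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] resp = buildResponse("<html>ok</html>");
        System.out.println(new String(resp, StandardCharsets.UTF_8));
    }
}
```

    Note that the length is computed from the encoded bytes, not the character count, which matters for non-ASCII content.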
