HTTP 1.1 Chunked Encoding

I have a problem when using Documentum's UCF to transfer large files (4 GB) when deployed on WebLogic 8.1 SP4. After some investigation, I have found that this is likely a problem with WebLogic. Has anyone else had problems transferring large files using chunked encoding? How were these issues resolved? Is this problem fixed in WL 8.1 SP5? I would appreciate any solution other than turning off chunked encoding.

HTTP 1.1 chunked encoding is supported in Documentum 5.3 SP2 and above. I would strongly suggest that you open a case with EMC Documentum on this particular issue. There were a lot of issues with UCF in both 5.3 and 5.3 SP1.
Good luck.

Similar Messages

  • WebLogic 10.0 sends chunked encoding with an HTTP 1.0 request

    This took ages to find; it was only mod_weblogic's Debug ALL parameter that pointed me in the right direction, because it printed all the headers at each stage.
    Server:
    Apache 2.0 with mod_weblogic and SSL (Dev: Windows XP SP2)
    Apache 2.2 with mod_weblogic and SSL (Prod: Unix/Solaris/Linux)
    Weblogic 10.0 running on SPARC Solaris 9
    Proxy:
    Squid/2.6.STABLE22 (Prod: Unix/Solaris, Dev:Windows XP SP2)
    Client:
    IE 6.0.2900.2180
    Windows XP SP2
    Under IE, if a user doesn't have both "Use HTTP 1.1" and "Use HTTP 1.1 through proxy connections" selected, they will get an error message when trying to download a file sent with a "Content-Disposition: attachment" header through a proxy server and over SSL.
    This is because IE sends an HTTP 1.0 request to WebLogic and WebLogic responds with an HTTP 1.1 "Transfer-Encoding: chunked" response, which IE 6 fails to handle. It manifests itself as an 'apache bridge error'.
    If either of the other two factors (a proxy server or SSL) is absent, IE handles it properly.
    The workaround is to disable chunked transfer for that particular server. In WLST, set /Servers/<server>/WebServer/<server>/ChunkedTransferDisabled = 'true'.
    Any further information on a patch or additional settings would be appreciated.
    This seems less than ideal, as some of the reports are quite large and would benefit from chunked transfer, particularly for users who have the correct IE settings and those using Firefox.

    Does your MDB require transactions? If so, you need to XA-enable the connection factory. It looks like you are using a transactional MDB with a non-XA connection factory.

  • HTTP chunked encoding

    Hi there
    Does anyone know if there is an issue with HTTP chunked encoding passing through an ASA even when HTTP inspection is not enabled?
    Thanks
    Naresh


  • Corrupt chunked encoding, corrupt HTML code

    We had corrupt chunked encoding on longish web pages (HTTP 1.1, HTTPS, HTML code > 2 MB) and finally found out that the F5 BIG-IP load balancer had a problem with the HTTPS encoding. We updated the device and the problem was solved.
    Hope this helps.

    Can you please share the solution? What did you need to update in the load balancer?

  • Chunked encoding

    Hi
    Does anyone know how chunked encoding works, and how to implement it in Java?
    Any guidance would be greatly appreciated.

    Thanks for the reply
    the URLConnection is great but it can't do what want, eg I can get or set any cookie, which I needed to maintain the persistent state that so call session, I can't keep the connection alive as it close every time, that why I need to implement my own http connection class, I have overcome the chunked problem, the program work for some web server eg tomcat apache, but with the IIS I have some problem with the received content length, which the server tell me is more then what I actually read, anyway the real prblem that keep boring me is that even I send exact same header (except the Accept-Encoding: gzip, deflat, which I think is not important) as the browser to the server, but it just response me a 404 not found or 400 bad request, the same request work on both IE and firefox, I wonder is there any secret between them, anyone who has the similar experience would like to share .
    Thanks in advence.
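
    For what it's worth, a plain HttpURLConnection can carry cookies for session handling by copying them between the response and request headers, so a hand-rolled socket client isn't strictly required for that part. A minimal sketch, assuming hypothetical URLs (example.com) and that the server issues a session cookie via Set-Cookie:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.List;
    import java.util.Map;

    public class CookieExample {
        public static void main(String[] args) throws Exception {
            // First request: capture the Set-Cookie header(s) from the response.
            URL loginUrl = new URL("http://example.com/login"); // hypothetical URL
            HttpURLConnection first = (HttpURLConnection) loginUrl.openConnection();
            Map<String, List<String>> headers = first.getHeaderFields();
            List<String> setCookies = headers.get("Set-Cookie");
            String sessionCookie = null;
            if (setCookies != null && !setCookies.isEmpty()) {
                // Keep only the "name=value" part before the first ';'.
                sessionCookie = setCookies.get(0).split(";", 2)[0];
            }

            // Second request: send the cookie back so the server sees the same session.
            URL dataUrl = new URL("http://example.com/data"); // hypothetical URL
            HttpURLConnection second = (HttpURLConnection) dataUrl.openConnection();
            if (sessionCookie != null) {
                second.setRequestProperty("Cookie", sessionCookie);
            }
            System.out.println("Status: " + second.getResponseCode());
        }
    }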

  • Content-length and chunked-encoding

    Hi,
    I need to include Content-Length and chunked-encoding headers with the response. I am using Sun ONE Web Server SP 6.5.
    Thanks in anticipation,
    Adnan

    Upgrade to Apache 2.x and you'll be good (I've dealt with this issue; it's a pain).

  • How to generate a chunked-encoding HttpServletResponse

    Hi all,
    Tomcat 6.0, JDK 5.0.
    I need to implement a servlet that returns a very long string as the response to a POST request.
    So I hope to divide the long string into several chunked HTTP response messages instead of one huge response message. I tried the method below:
    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        StringBuffer buf = new StringBuffer();
        int i = 0;
        while (i < 1000) {
            buf.append("some words");
            i++;
        }
        String encoding = request.getCharacterEncoding();
        response.setCharacterEncoding(encoding);
        response.setStatus(200);
        response.addDateHeader("Date", System.currentTimeMillis());
        response.addHeader("Transfer-Encoding", "chunked");
        response.getOutputStream().write(buf.toString().getBytes());
    }
    But it failed; the client can't understand the response from the server.
    Could anyone tell me the right way to implement a chunked HTTP servlet response?
    Thanks a lot.
    Rare

    You could write your own (indeed, as you mentioned in option 2), for instance, as shown here:
    Re: Concatenation, Attributes, and Processing Instruction
    and/or use XMLROOT
    but that said, you can't use XMLROOT in 9.2 yet.
    Based on your requirements it is not easy and a lot of work to do, although I wonder what you are trying to achieve. Don't forget that if you pick the database character set you will probably overrule situations based on:
    (http://download.oracle.com/docs/cd/B19306_01/server.102/b14225/ch3globenv.htm#i1006415)
    NLS_SESSION_PARAMETERS shows the NLS parameters and their values for the session that is querying the view. It does not show information about the character set.
    NLS_INSTANCE_PARAMETERS shows the current NLS instance parameters that have been explicitly set and the values of the NLS instance parameters.
    NLS_DATABASE_PARAMETERS shows the values of the NLS parameters for the database. The values are stored in the database.
    Or, in other words, whether NLS settings are manually changed within a session, instance, or database context.
    Message was edited by:
    Marco Gralike
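
    Coming back to the original servlet question: a servlet normally should not set the Transfer-Encoding header itself. If no Content-Length is set and the output grows past the response buffer (or flush() is called), a Servlet 2.x container such as Tomcat 6 applies chunked encoding on its own. A minimal sketch under that assumption:

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LongStringServlet extends HttpServlet {
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/plain");
            response.setCharacterEncoding("UTF-8");
            // Do NOT call setContentLength() and do NOT add a Transfer-Encoding
            // header manually; leaving the length unknown lets the container
            // chunk the response when the buffer fills or flush() is called.
            PrintWriter out = response.getWriter();
            for (int i = 0; i < 1000; i++) {
                out.println("some words, part " + i);
                if (i % 100 == 0) {
                    out.flush(); // each flush can push a chunk to the client
                }
            }
        }
    }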

  • Tips or tools for handling very large file uploads and downloads?

    I am working on a site that has a document repository feature. The documents are stored as BLOBs in an Oracle database, and for reasonably sized files it's no problem to stream them out directly from the database. For file uploads, I am using the Struts module to get them onto disk and am then putting the BLOB in the database.
    We are now being asked to support very large files of 250 MB+. I am concerned about problems I've heard of with HTTP not being reliable for files over 256 MB. I'd also like a solution that would give the user a status bar and allow broken uploads or downloads to be restarted.
    Does anyone know of an off-the-shelf module that might help in this regard? I suspect an ActiveX control or Applet on the client side would be necessary. Freeware or Commercial software would be ok.
    Thanks in advance for any help/ideas.

    Hi. There is nothing wrong with HTTP handling 250 MB+ files per se.
    However, connections can get reset.
    Consider offering the files via FTP; most FTP clients are good about resuming transfers.
    Or, if you want to keep using HTTP, try supporting chunked encoding. Then a user can use something like GetRight to auto-resume HTTP downloads.
    Hope that helps,
    Peter
    http://rimuhosting.com - JBoss EJB/JSP hosting specialists
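
    On the server side, streaming the BLOB through a fixed-size buffer keeps memory use flat no matter how large the file is. A rough sketch, assuming a plain JDBC connection and made-up table/column names (DOCUMENTS, DOC_BLOB):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.Blob;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.servlet.http.HttpServletResponse;

    public class BlobStreamer {
        // Stream one BLOB row to the HTTP response in 8 KB buffers so that
        // memory use stays flat even for very large files.
        // Table/column names (DOCUMENTS, DOC_BLOB) are illustrative only.
        public static void stream(Connection db, long docId, HttpServletResponse response)
                throws Exception {
            PreparedStatement ps = db.prepareStatement(
                    "SELECT DOC_BLOB FROM DOCUMENTS WHERE ID = ?");
            ps.setLong(1, docId);
            ResultSet rs = ps.executeQuery();
            if (rs.next()) {
                Blob blob = rs.getBlob(1);
                response.setContentType("application/octet-stream");
                // Advertising the length avoids chunked encoding and lets
                // download managers show progress.
                response.setHeader("Content-Length", String.valueOf(blob.length()));
                InputStream in = blob.getBinaryStream();
                OutputStream out = response.getOutputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                in.close();
            }
            rs.close();
            ps.close();
        }
    }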

  • Help me...How to read the content if "Transfer-Encoding:chunked" is used?

    I am doing a project for Internet control using Java, PHP and MySQL. All sites should go through the proxy server only. If the HTTP header contains Content-Length, I get the content length as below:
    public class HTTPResponseReader extends HTTPMessageReader {
        String statusCode;

        public HTTPResponseReader(InputStream istream) throws IOException, NoSuchElementException {
            BufferedInputStream distream = new BufferedInputStream(istream);
            retrieveHeader(distream);
            StringTokenizer st = new StringTokenizer(new String(HTTPMessageReader.toArray(header)));
            versionProtocol = st.nextToken();
            statusCode = st.nextToken();
            String s;
            while (st.hasMoreTokens()) {
                s = st.nextToken();
                if (s.equals("Transfer-Encoding:"))
                    transferEncoding = new String(st.nextToken());
                if (s.equals("Content-Length:"))
                    contentLength = Integer.parseInt(st.nextToken());
                if (s.equals("Connection:")) {
                    connection = new String(st.nextToken());
                    if (connection.equals("keep-alive")) mustCloseConnection = false;
                }
            }
            retrieveBody(distream);
        }
    }
    After getting the Content-Length, I use the read method to read the content up to that length. Then I concatenate the HTTP header and body, and the requested site is opened. But some sites don't have Content-Length; Transfer-Encoding is used instead. I got the HTTP response header "Transfer-Encoding: chunked" for some sites. If this encoding is used, how do I get the length of the message body and how do I read the content?
    Can anybody help me?
    Thanks in advance...
    Message was edited by:
    VeeraLakshmi

    Why don't you use the HttpURLConnection class to retrieve data from the HTTP server? This class already supports chunked encoding...
    If you want to do everything yourself, then you need to read the HTTP RFC and find all the required information. In two words: you may either avoid the advanced encoding by specifying HTTP 1.0 in your request, or download the chunked answer manually. Read the RFC anyway :)
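
    If you do decode it manually, the loop is: read the chunk-size line (hex digits), read exactly that many bytes, consume the trailing CRLF, and repeat until a chunk size of 0, after which any trailer headers and a final empty line follow. A minimal sketch under those assumptions (no chunk-extensions or trailer fields are interpreted), starting from a stream positioned just after the response headers:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ChunkedDecoder {
        // Reads a chunked body from 'in' (positioned just after the header block)
        // and returns the de-chunked bytes. Chunk extensions and trailers are
        // read but ignored; this is a sketch, not a full RFC 2616 implementation.
        public static byte[] decode(InputStream in) throws IOException {
            ByteArrayOutputStream body = new ByteArrayOutputStream();
            while (true) {
                String sizeLine = readLine(in);
                // Strip any ";extension" part, then parse the hex size.
                int semi = sizeLine.indexOf(';');
                if (semi >= 0) sizeLine = sizeLine.substring(0, semi);
                int size = Integer.parseInt(sizeLine.trim(), 16);
                if (size == 0) break;                // last-chunk
                byte[] chunk = new byte[size];
                int read = 0;
                while (read < size) {
                    int n = in.read(chunk, read, size - read);
                    if (n == -1) throw new IOException("Unexpected end of stream");
                    read += n;
                }
                body.write(chunk);
                readLine(in);                        // CRLF after chunk-data
            }
            // Skip optional trailer headers up to the final empty line.
            while (readLine(in).length() > 0) { /* ignore trailer field */ }
            return body.toByteArray();
        }

        private static String readLine(InputStream in) throws IOException {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = in.read()) != -1 && c != '\n') {
                if (c != '\r') sb.append((char) c);
            }
            return sb.toString();
        }
    }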

  • Turn off chunked transfer-encoding

    Hi. I have to interface with another company's client that has a broken implementation
    of http/1.1 and does not understand chunked transfer encoding. Is there some way
    I can tell Weblogic not to use chunked encoding for a particular servlet response?

    Nagesh Susarla <[email protected]> wrote:
    Joe Humphreys wrote:
    Hi. I have to interface with another company's client that has a broken implementation of http/1.1 and does not understand chunked transfer encoding. Is there some way I can tell Weblogic not to use chunked encoding for a particular servlet response?
    >
    The easiest way would be to set Content-Length on the response,
    and then the response wouldn't be chunked.
    -nagesh
    Thanks, but that is not an option here because the content is dynamic and may be in excess of 100K. I can't afford to buffer that much data in memory just to count its length. Since the server doesn't use chunked transfer-coding for HTTP/1.0 responses, I was hoping there would be some way to just turn it off (but only for particular servlets).
    Joe H
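
    One way to act on the Content-Length suggestion without knowing the size in advance is to render the dynamic output into memory first and set the length just before writing; for responses around 100K the buffering cost is usually modest. A sketch of that idea (renderReport is a hypothetical placeholder), not a WebLogic-specific switch:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class NonChunkedServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Render the dynamic content into memory first so its exact size is known.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            renderReport(buffer);                       // hypothetical rendering method
            byte[] body = buffer.toByteArray();

            // With Content-Length set, the container has no need to chunk the response.
            response.setContentType("text/html");
            response.setContentLength(body.length);
            response.getOutputStream().write(body);
        }

        private void renderReport(ByteArrayOutputStream out) throws IOException {
            // Placeholder for the real dynamic-content generation.
            out.write("report contents".getBytes("UTF-8"));
        }
    }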

  • Transfer-Encoding: chunked

    Hi
    I'm creating an HTTP client, but I've got problems with the chunked data encoding. Does anybody know a link where there is more info about it? How much should I read from each chunk, and where is that written? Did anybody face such a problem before? My client tells the server it's Firefox:
    GET / HTTP/1.1
    Host: mail.yahoo.com
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.7.10) Gecko/20050716 Firefox/1.0.6
    Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    and the server sends this:
    HTTP/1.1 200 OK
    Date: Sun, 25 Sep 2005 13:45:31 GMT
    P3P: policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
    Cache-Control: private
    Connection: close
    Transfer-Encoding: chunked
    Content-Type: text/html
    Content-Encoding: gzip
    Set-Cookie: B=9dikf6d1jdafr&b=3&s=5m; expires=Tue, 02-Jun-2037 20:00:00 GMT; path=/; domain=.yahoo.com
    1562
    please help!
    hmm.. looks like nobody is gonna answer.. :(

    3.6.1 Chunked Transfer Coding
    The chunked encoding modifies the body of a message in order to
    transfer it as a series of chunks, each with its own size indicator,
    followed by an OPTIONAL trailer containing entity-header fields. This
    allows dynamically produced content to be transferred along with the
    information necessary for the recipient to verify that it has
    received the full message.
    Chunked-Body   = *chunk
                     last-chunk
                     trailer
                     CRLF
    chunk          = chunk-size [ chunk-extension ] CRLF
                     chunk-data CRLF
    chunk-size     = 1*HEX
    last-chunk     = 1*("0") [ chunk-extension ] CRLF
    chunk-extension= *( ";" chunk-ext-name [ "=" chunk-ext-val ] )
    chunk-ext-name = token
    chunk-ext-val  = token | quoted-string
    chunk-data     = chunk-size(OCTET)
    trailer        = *(entity-header CRLF)
    The chunk-size field is a string of hex digits indicating the size of
    the chunk. The chunked encoding is ended by any chunk whose size is
    zero, followed by the trailer, which is terminated by an empty line.
    The trailer allows the sender to include additional HTTP header
    fields at the end of the message. The Trailer header field can be
    used to indicate which header fields are included in a trailer (see
    section 14.40).
    A server using chunked transfer-coding in a response MUST NOT use the
    trailer for any header fields unless at least one of the following is
    true:
    a)the request included a TE header field that indicates "trailers" is
    acceptable in the transfer-coding of the response, as described in
    section 14.39; or,
    b)the server is the origin server for the response, the trailer
    fields consist entirely of optional metadata, and the recipient
    could use the message (in a manner acceptable to the origin server)
    without receiving this metadata. In other words, the origin server
    is willing to accept the possibility that the trailer fields might
    be silently discarded along the path to the client.
    This requirement prevents an interoperability failure when the
    message is being received by an HTTP/1.1 (or later) proxy and
    forwarded to an HTTP/1.0 recipient. It avoids a situation where
    compliance with the protocol would have necessitated a possibly
    infinite buffer on the proxy.
    An example process for decoding a Chunked-Body is presented in
    appendix 19.4.6.
    All HTTP/1.1 applications MUST be able to receive and decode the
    "chunked" transfer-coding, and MUST ignore chunk-extension extensions
    they do not understand.
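
    Relating this to the capture above: the line "1562" after the Yahoo headers is the first chunk-size in hexadecimal, so 0x1562 = 5474 bytes of (gzip-compressed) chunk-data follow, then a CRLF and the next chunk-size line. As a small illustration (not from that capture), the chunked body
    4
    Wiki
    5
    pedia
    0
    decodes to "Wikipedia": "4" and "5" are hex sizes, each chunk of data is followed by CRLF, and the "0" line plus a final empty line terminates the body.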

  • Transfer-encoding: chunk

    Hello,
    We are using Sun ONE Web Server 6.1 SP3.
    We have some problems sending PDFs to MS IE: it seems that not the whole document is sent to the browser, or that there are some errors in the connection between server and browser. Is it possible that IE has a problem with Transfer-Encoding: chunked?
    Can we switch it off, or can we change it for some MIME types?
    Thanks for your help.

    That seems unlikely; chunked encoding is a central part of HTTP/1.1.
    Perhaps you could be more specific about the problem. For example, why do you think there are problems sending PDF files? What appears in MSIE? Which versions of MSIE and Acrobat appear to be affected? What do the corresponding access log entries look like? Is anything written to the errors log?

  • JSP transfer-encoding: other than chunked?

    Is it somehow possible not to use chunked transfer encoding in JSP pages? I am using Tomcat 4.0.3 and there seem to be problems with Java Web Start and chunked encoding (see my other topic "Chunked encoding").

    I found the workaround that I was looking for:
    I added the attribute allowChunking="false" to the Tomcat configuration file.
    The actual bug that causes the problem is most probably in JRE 1.3.

  • HTTP POST to PHP server problem

    Hi, I'm trying to POST a long string to PHP from a MIDlet, but I have some problems. When I send the whole string, my PHP server can't receive the request (I don't get any response), but if the string I send is 1/5 of the original, the process completes successfully. Does anybody have any idea?
    thx

    this is my problem, extracted from another topic on this forum:
    "Hi everyone.
    I have a problem, and hope someone may help me.
    My MIDlet is uploading sizeable data via HTTP POST.
    I'm using WTK 1.0.4, since I need MIDP 1.0.
    The code has been tried on the DefaultGrayPhone emulator
    and the add-on Nokia Series 60 emulator.
    Both emulators chunk the data, but in different ways.
    The default one simply produces a wrong chunk length (possibly a bug);
    Nokia's one always chunks at equal offsets of 2016 bytes.
    I'm not using flushing, just close. All the data is being sent
    at once by one output stream write call.
    So I believe (after proper investigation) that MIDP will use the chunked Transfer-Encoding method regardless
    on data as sizeable as mine (up to 50 KB), and there's no way to override this behaviour.
    Here the main problem appears: Apache refuses to accept chunked encoding in the request. The corresponding message given in the error log is
    *chunked transfer encoding forbidden*. The returned code is 411 - Content-Length required. I see no way to override this behaviour of Apache. I was trying to upload my data to the Zope web server, which is my primary goal, but it doesn't handle chunked requests either.
    Has anyone faced the same problem? Who has managed to POST sizeable data from a MIDlet? Which web servers did you use for that?
    Any input is highly appreciated!
    Anton"
    Another:
    "> So I believe (after proper investigation) that MIDP
    will use chunked Transfer-Encoding method whatever
    on such sizeable a data as mine is (up to 50KB) and
    there's no way to override this behaviour.Is this true? When I try to set the content-length headers and then write a large byte[] to the output stream I got from an HttpConnection, the HttpConnection appears to remove the content-length header altogether and automatically sets the transfer-encoding to chunked.
    Note- I am not calling flush on the outputstream, but I am calling httpconnection.getResponseCode, which I believe calls flush on the outputstream.
    Abraham"
    I have the identical problem.

  • Chunking and Tomcat Servlets

    Hi all, I've got a method which transmits data to a Java servlet. I've added Java 1.5's chunking capability on the client, and it works on one machine with no problems, yet a whole lot of others break...
    Client side:
    URLConnection con = null;
    con = servletAddress.openConnection();
    con.setDoInput(true);
    con.setDoOutput(true);
    con.setUseCaches(false);
    if (chunked && supportsChunking()) {
        ((HttpURLConnection) con).setChunkedStreamingMode(0);
    }
    After this, I open a DataInputStream from the connection and send an object...
    I get an exception on the reply:
    java.io.IOException: Server returned HTTP response code: 501 for URL: http://xxxxxx.xxxxxxxxx.xxxxxxxx
         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1133)
    There is a bit of stuff on the forums about chunking, but my problem seems to be on the server: it's generating a 501 "not supported" error, yet all versions of Tomcat and all JREs are identical. What could the reason be for one machine working and a whole bunch of others breaking?
    Dan

    Hi all, it's been a while since I touched this, but I just found the cause - posting it here in case anyone else has the same trouble.
    Most of our servers here are using the Squid proxy. This does not support HTTP 1.1 and falls back to HTTP 1.0. This means that Tomcat and my client are 100% fine, but going through Squid causes Java to throw an exception, as HTTP 1.1, and especially chunked encoding, is not supported.
    Dan
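
    A possible client-side mitigation, sketched below under the assumption that the payload can be pre-serialized: when the request has to cross an HTTP 1.0-only proxy such as older Squid, fall back from chunked streaming to fixed-length streaming so an ordinary Content-Length request is sent. The serialize helper and class names are illustrative:

    import java.io.ByteArrayOutputStream;
    import java.io.ObjectOutputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ChunkAwareClient {
        // Send a serializable object via POST. If useChunking is false (e.g. the
        // request must cross an HTTP/1.0-only proxy), the body is serialized up
        // front so a plain Content-Length request can be sent instead.
        public static int post(URL servletAddress, Object payload, boolean useChunking)
                throws Exception {
            byte[] body = serialize(payload);
            HttpURLConnection con = (HttpURLConnection) servletAddress.openConnection();
            con.setDoOutput(true);
            con.setUseCaches(false);
            con.setRequestMethod("POST");
            if (useChunking) {
                con.setChunkedStreamingMode(0);               // default chunk size
            } else {
                con.setFixedLengthStreamingMode(body.length); // ordinary Content-Length
            }
            OutputStream out = con.getOutputStream();
            out.write(body);
            out.close();
            return con.getResponseCode();
        }

        private static byte[] serialize(Object payload) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(buf);
            oos.writeObject(payload);
            oos.close();
            return buf.toByteArray();
        }
    }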
