Upload of very big files (300MB+)

Hello all,
I am trying to create an application in HTML DB to store files in the database (BLOB) via a web browser. I created all the needed components and am now stuck with the problem of uploading big files. Basically, when a file is over 100MB the upload becomes unreliable, and for really big files it does not work at all. Can somebody help me figure out how to upload big files into an HTML DB application? Any hints and suggestions are welcome. Examples will be even more appreciated.
Sincerely,
Ian

Ian,
When you say "big files does not work at all", what do you see in the browser? Is no page returned at all?
When a file is uploaded, it takes some amount of time to simply transfer the file from the client to modplsql. If you're on a local Gbit network, this is probably fast. If you're doing this over a WAN or over the Internet, this is probably fairly slow. As modplsql gets this uploaded file, it writes it to a temporary BLOB. Once fully received, modplsql will then insert this into the HTML DB upload table.
I suspect that the TimeOut directive in Apache/Oracle HTTP Server is kicking in here. The default setting for this is 300 (5 minutes).
I believe the timeout is reset by modplsql during file transfer to avoid a timeout operation while data is still being sent. Hence, I believe the insertion of your large file into the file upload table is taking longer than the TimeOut directive.
The easy answer is to consider increasing your TimeOut directive for Apache/Oracle HTTP Server.
The not so easy answer is to investigate why it takes so long for this insert, and tune the database accordingly.
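If you go the configuration route, the TimeOut directive lives in the httpd.conf of your Apache/Oracle HTTP Server; a minimal sketch (the 1800-second value is only an example, size it to your slowest expected upload plus the insert time):
# httpd.conf -- default is 300 seconds (5 minutes);
# raise it so the transfer to modplsql and the follow-on insert can both finish
TimeOut 1800
Restart the HTTP server afterwards so the new value takes effect.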
Hope this helps.
Joel

Similar Messages

  • How do I open a VERY big file?

    I hope someone can help.
    I did some testing using a LeCroy LT342 in segment mode. Using the LabVIEW driver I downloaded the data over GPIB and saved it to a spreadsheet file. Unfortunately this created very big files (ranging from 200MB to 600MB). I now need to process them but LabVIEW doesn't like them. I would be very happy to split the files into an individual file for each row (I can do this quite easily), but LabVIEW just sits there when I try to open the file.
    I don't know enough about computers and memory (my spec is a 1.8GHz Pentium 4 with 384MB RAM) to figure out whether it will do the job if I just leave it for long enough.
    Has anyone any experience or help they could offer?
    Thanks,
    Phil

    When you open (and read) a file you usually move it from your hard disk (permanent storage) into RAM. This allows you to manipulate it at high speed using fast RAM memory. If you don't have enough memory (RAM) to read the whole file, you will be forced to use virtual memory (which uses swap space on the HD as "virtual" RAM), and that is very slow. Since you only have 384 MB of RAM and want to process huge files (200MB-600MB), you could easily and inexpensively upgrade to 1GB of RAM and see a large speed increase. A better option is to load the file in chunks, looking at some number of lines at a time, processing that amount of data, and repeating until the file is complete. This is more programming but will let you use much less RAM at any instant; a sketch of that loop is shown below.
    Paul
    Paul Falkenstein
    Coleman Technologies Inc.
    CLA, CPI, AIA-Vision
    Labview 4.0- 2013, RT, Vision, FPGA
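    Paul's chunk-at-a-time idea, sketched in Java purely for illustration (LabVIEW itself is graphical, so treat this as the shape of the loop, not a drop-in solution; the file name and chunk size are assumptions):
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkedProcessor {
        public static void main(String[] args) throws IOException {
            int chunkSize = 10000;                       // lines per chunk -- tune to available RAM
            List<String> chunk = new ArrayList<>(chunkSize);
            try (BufferedReader in = new BufferedReader(new FileReader("capture.txt"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    chunk.add(line);
                    if (chunk.size() == chunkSize) {     // process and forget before reading more
                        process(chunk);
                        chunk.clear();
                    }
                }
                if (!chunk.isEmpty()) process(chunk);    // leftover lines at the end of the file
            }
        }
        private static void process(List<String> lines) {
            // placeholder: write each chunk to its own file, average the samples, etc.
        }
    }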

  • VERY big files (!!) created by QuarkXPress 7

    Hi there!
    I have a "problem" with QuarkXPress 7.3 and I don't know if this is the right forum to ask...
    Anyway, I have created a document, about 750 pages, with 1000 pictures placed in it. I have divided it into 3 layouts.
    I'm saving the file and the file created is 1.20 GB!!!
    Isn't that a very big file for QuarkXPress??
    In that project there are 3 layouts. I tried to make a copy of that file and delete 2 of 3 layouts and the project's file size is still the same!!
    (Last year, I had created (almost) the same document and as I checked that document now, its size is about 280 MB!!)
    The problem is that I have "autosave" on (every 5 or 10 minutes) and it takes some time to save it !
    Can anyone help me with that??
    Why has Quark made SO big a file???
    Thank you all for your time!

    This is really a Quark issue and better asked in their forum areas. However, have you tried to do a Save As and see how big the resultant document is?

  • Question about reading a very big file into a buffer.

    Hi, everyone!
    I want to randomly load several characters from
    GB2312 charset to form a string.
    I have two questions:
    1. Where can I find the charset table file? I have googled for hours but failed to find the GB2312 charset file.
    2. I think the charset table file is very big, and I doubt whether I can load it into a String or StringBuffer. Does anyone have a solution? How do I load a very big file and randomly select several characters from it?
    Have I made myself understood?
    Thanks in advance,
    George

    The following can give the correspondence between GB2312 encoded byte arrays and characters (in hexadecimal integer expression).
    import java.nio.charset.*;
    import java.io.*;
    public class GBs {
        static String convert() throws java.io.UnsupportedEncodingException {
            StringBuffer buffer = new StringBuffer();
            String l_separator = System.getProperty("line.separator");
            Charset chset = Charset.forName("EUC_CN"); // GB2312 is an alias of this encoding
            CharsetEncoder encoder = chset.newEncoder();
            int[] indices = new int[Character.MAX_VALUE + 1];
            for (int j = 0; j < indices.length; j++) {
                indices[j] = 0;
            }
            for (int j = 0; j <= Character.MAX_VALUE; j++) {
                if (encoder.canEncode((char) j)) indices[j] = 1;
            }
            byte[] encoded;
            for (int j = 0; j < indices.length; j++) {
                if (indices[j] == 1) {
                    encoded = (Character.toString((char) j)).getBytes("EUC_CN");
                    for (int q = 0; q < encoded.length; q++) {
                        buffer.append(Byte.toString(encoded[q]));
                        buffer.append(" ");
                    }
                    buffer.append(": 0x");
                    buffer.append(Integer.toHexString(j));
                    buffer.append(l_separator);
                }
            }
            return buffer.toString();
        }
        // the following is for testing
        /*
        public static void main(String[] args) throws java.lang.Exception {
            String str = GBs.convert();
            System.out.println(str);
        }
        */
    }

  • A very big file

    I have a very big text file, almost 800 MB. In some lines a special pattern "from the end" is present, and I want to pick up those lines.
    What are the possible efficient solutions?
    As the file is large, I hesitate to use BufferedReader + readLine + indexOf("from the end") because it is not efficient: it reads all the lines.
    Would RandomAccessFile be a good solution? But RAF doesn't have readLine + indexOf("from the end"). How do I solve it then?
    I have Java 1.4. In Java 1.4, what is the best solution for this kind of problem?

    > Under all circumstances you'll have to read all bytes of the file once.
    Not true! If the file consists of bytes with a character representation that uses only one byte (e.g. ASCII or ISO-8859-x), then the following class is very efficient.
    import java.io.*;
    import java.util.*;

    public class GetLinesFromEndOfFile {
        static public class BackwardsFileInputStream extends InputStream {
            public BackwardsFileInputStream(File file) throws IOException {
                assert (file != null) && file.exists() && file.isFile() && file.canRead();
                raf = new RandomAccessFile(file, "r");
                currentPositionInFile = raf.length();
                currentPositionInBuffer = 0;
            }

            public int read() throws IOException {
                if (currentPositionInFile <= 0) {
                    return -1;
                }
                if (--currentPositionInBuffer < 0) {
                    currentPositionInBuffer = buffer.length;
                    long startOfBlock = currentPositionInFile - buffer.length;
                    if (startOfBlock < 0) {
                        currentPositionInBuffer = buffer.length + (int) startOfBlock;
                        startOfBlock = 0;
                    }
                    raf.seek(startOfBlock);
                    raf.readFully(buffer, 0, currentPositionInBuffer);
                    return read();
                }
                currentPositionInFile--;
                return buffer[currentPositionInBuffer] & 0xff; // mask so read() stays in the 0..255 range required by InputStream
            }

            public void close() throws IOException {
                raf.close();
            }

            private final byte[] buffer = new byte[4096];
            private final RandomAccessFile raf;
            private long currentPositionInFile;
            private int currentPositionInBuffer;
        }

        public static List<String> head(File file, int numberOfLinesToRead) throws IOException {
            return head(file, "ISO-8859-1", numberOfLinesToRead);
        }

        public static List<String> head(File file, String encoding, int numberOfLinesToRead) throws IOException {
            assert (file != null) && file.exists() && file.isFile() && file.canRead();
            assert numberOfLinesToRead > 0;
            assert encoding != null;
            LinkedList<String> lines = new LinkedList<String>();
            BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(file), encoding));
            for (String line = null; (numberOfLinesToRead-- > 0) && (line = reader.readLine()) != null;) {
                lines.addLast(line);
            }
            reader.close();
            return lines;
        }

        public static List<String> tail(File file, int numberOfLinesToRead) throws IOException {
            return tail(file, "ISO-8859-1", numberOfLinesToRead);
        }

        public static List<String> tail(File file, String encoding, int numberOfLinesToRead) throws IOException {
            assert (file != null) && file.exists() && file.isFile() && file.canRead();
            assert numberOfLinesToRead > 0;
            assert (encoding != null) && encoding.matches("(?i)(iso-8859|ascii|us-ascii).*");
            LinkedList<String> lines = new LinkedList<String>();
            BufferedReader reader = new BufferedReader(new InputStreamReader(new BackwardsFileInputStream(file), encoding));
            for (String line = null; (numberOfLinesToRead-- > 0) && (line = reader.readLine()) != null;) {
                // Reverse the order of the characters in the string
                char[] chars = line.toCharArray();
                for (int j = 0, k = chars.length - 1; j < k; j++, k--) {
                    char temp = chars[j];
                    chars[j] = chars[k];
                    chars[k] = temp;
                }
                lines.addFirst(new String(chars));
            }
            reader.close();
            return lines;
        }

        public static void main(String[] args) {
            try {
                File file = new File("/usr/share/dict/words");
                int n = 10;
                {
                    System.out.println("Head of " + file);
                    int index = 0;
                    for (String line : head(file, n)) {
                        System.out.println(++index + "\t[" + line + "]");
                    }
                }
                {
                    System.out.println("Tail of " + file);
                    int index = 0;
                    for (String line : tail(file, "us-ascii", n)) {
                        System.out.println(++index + "\t[" + line + "]");
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

  • How can I store a very big file in Oracle XML DB?

    Hello,
    I'm looking for a fast method to store an XML file in an Oracle 10g XE database. I have tried to store the 500 KB file in the database as an XMLType or a CLOB, but I keep getting the same error: "ORA-01704: string literal too long". I have been looking for a long time for a way to store this file, and another one (113 MB), in the database. I searched Google for solutions, and the only one I found is to split the document in a loop statement (because of the 32 KB limit). But that solution doesn't allow any storage with an XML Schema and is too slow.
    Here is an example of how I did it (but it didn't work):
    create table Mondial(Nr int, xmldata xmltype);
    INSERT INTO Mondial VALUES (1, 'big xml file');
    I also tried the alternative with a bind variable, like this:
    create or replace PROCEDURE ProcMondial IS
    poXML CLOB;
    BEGIN
    poXML := 'big xml file';
    INSERT INTO Mondial VALUES (1, XMLTYPE(poXML));
    EXCEPTION
    WHEN OTHERS THEN
    raise_application_error(-20101, 'Exception occurred in Mondial procedure :'||SQLERRM);
    END ProcMondial;
    I also get the same error: string literal too long!
    I am using SQL Developer for the query.
    Please help me, I'm desperate.
    thanks!
    Michael

    If you use the suggested statement
    create table Mondial(Nr int, xmldaten xmltype) TABLESPACE mybigfile;
    then I hope for your sake that this XML data content will not be used for content-driven procedures like selecting, updating or deleting parts of the XML data. The default for "xmltype" is a CLOB physical representation which, unless you are on 11g and combine it with an XMLIndex, is only useful for document-driven XML storage. That means you ALWAYS delete, select, update or insert the WHOLE XML content (per record). If that is not your intent, you will run into performance problems.
    Instead, use Object Relational or Binary XML (11g) storage for the XMLType datatype, or XMLType with CLOB-based storage AND an XMLIndex if you are on 11g. Carefully read the first and/or second chapter of the 10g/11g XML DB Developer's Guide. Carefully choose the XMLType storage you need (and/or the XML design) if you don't want to be disappointed with the end result and/or have to redesign your XML tables.
    A short introduction on the possibilities can be found here (not only 11g related btw): http://www.liberidu.com/blog/?p=203
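    On the ORA-01704 error itself: the 32 KB limit applies to SQL string literals, not to bind variables, so binding the document from a stream avoids it entirely. A rough JDBC sketch under assumed names (the Mondial table from the question, a hypothetical mondial.xml file and connect string; needs a JDBC 4.0 / Java 6+ driver); this is only an illustration, separate from the storage-model advice above:
    import java.io.File;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class InsertBigXml {
        public static void main(String[] args) throws Exception {
            File xml = new File("mondial.xml");                                     // assumed file name
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//localhost:1521/XE", "scott", "tiger")) {  // assumed connect data
                con.setAutoCommit(false);
                try (FileReader reader = new FileReader(xml);
                     PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO Mondial (Nr, xmldata) VALUES (?, XMLTYPE(?))")) {
                    ps.setInt(1, 1);
                    // bind the document as a character stream instead of building a string literal
                    ps.setCharacterStream(2, reader);
                    ps.executeUpdate();
                }
                con.commit();
            }
        }
    }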

  • Split a big file into 3 files??

    Hello,
    Is there any software I can use to split a very big file into 3 or 4 smaller files, please?
    Thanks.

    Use one of these applications to split the file.

  • How to read a very BIG XML-File

    Hello everyone,
    how can I read an XML file of e.g. 2 GB in ABAP (Release 4.6C)? The file should be read from the application server (not from the front end).
    Problem: too much memory is needed to load the complete file into an internal table, and in order to produce a stream the complete file must be loaded. The parser works with parse_event (event-triggered) and does not create a DOM. The parsing itself is not the problem.
    Possible solution: Feed the stream with parts of the file. But how !?
    Here is some coding of the program:
      data: l_xml_filename    type localfile,
            l_event_sub       type i,
            l_boolean         type c,
            l_subrc           type i.
    * Read the file (and its size) into the XML table:
      l_xml_filename = 'I:\TEMP\GeboIsp-PSGK37.xml'.
      perform read_file_in_xml_table using    p_xml_filename
                                     changing g_xml_table
                                              g_xml_size.
    * Build the XML document from the table:
    * Create the iXML factory:
      go_ixml = cl_ixml=>create( ).
    * Create the stream factory:
      p_streamfactory = go_ixml->create_stream_factory( ).
    * Create the XML document:
      p_xml_document = go_ixml->create_document( ).
    * Create the input stream:
      p_istream = p_streamfactory->create_istream_itable(
                          table = g_xml_table
                          size  = g_xml_size ).
      p_parser = go_ixml->create_parser( stream_factory = p_streamfactory
                                         istream        = p_istream
                                         document       = p_xml_document ).
      l_event_sub = if_ixml_event=>co_event_element_pre +
                    if_ixml_event=>co_event_element_post.
      call method p_parser->set_event_subscription( l_event_sub ).
      l_boolean = p_parser->set_dom_generating( ' ' ).
    The program works fine but needs too much memory because of the big internal table.
    I would be very happy if somebody could help me!
    Thanks in advance!
    Thomas
    P.s.: German answers are welcome!  

    Hello myself,
    nobody has answered my question, so now I answer myself!!  
    The wrong part is to read the file with "open dataset" and to create the inputstream with
    p_istream = p_streamfactory->create_istream_itable(
    table = g_xml_table
    size = g_xml_size ).
    Better is to create the input stream with
    p_istream = p_streamfactory->create_istream_uri(
                           PUBLIC_ID = ''
                           SYSTEM_ID = '\\applserver\I$\TEMP\Datei.XML' ).
    In this way no memory is needed to hold the file.
    Best regards,
    Thomas
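    For readers outside ABAP, the same stream-instead-of-internal-table idea (event/pull parsing, no DOM, so the whole document never sits in memory) looks roughly like this in Java with StAX, added purely as an illustration (the file name and per-element handling are assumptions):
    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class StreamHugeXml {
        public static void main(String[] args) throws Exception {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            // The reader pulls events from the stream; the 2 GB file is never fully loaded.
            XMLStreamReader reader =
                factory.createXMLStreamReader(new FileInputStream("huge.xml"));
            long elementCount = 0;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    elementCount++;                 // react to each element here instead of building a DOM
                }
            }
            reader.close();
            System.out.println("Elements seen: " + elementCount);
        }
    }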

  • FTP Delivery error when uploading big file

    Hi,
    I've set up a schedule with delivery through SFTP. Usually (when the output size is not very big) it works fine.
    However, when my report (pdf) is too big, the job's status remains "Running" forever. When I go to my FTP directory the file's size is always 91.906.794 bytes. I checked BI Publisher's cache and the generated output is there but its size is always bigger than 91.906.794 bytes. That file works fine when I open it in Adobe Reader.
    Somehow, it seems that BI Publisher can't upload files bigger than 91.906.794 bytes but I can't find any maximum size in the documentation. And besides, I guess that if there was any maximum size, it should throw an error instead of getting stuck in "Running" status.
    I'm running Oracle BI Publisher version 11.1.1.6.0 in Red Hat Enterprise Linux Server release 5.5 (Tikanga). I've tested this feature in 2 different servers and the result is exactly the same.
    Anyone had this problem?
    Thank you,
    Tiago

    Hello,
    Here are all the possible fixes for uploading large files; please check and verify them against your settings:
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    http://blogs.technet.com/b/sammykailini/archive/2013/11/06/how-to-increase-the-maximum-upload-size-in-sharepoint-2013.aspx

  • Upload very large file to SharePoint Online

    Hi,
    I tried uploading very large files to SharePoint Online via the REST API using Java. Uploading files with a size of ~160 MB works well, but if the file size is between 400 MB and 1.6 GB I either receive a SocketException (connection reset by peer) or an error from SharePoint telling me "the security validation for the page is invalid".
    So first of all I want to ask you how to work around the security validation error. As far as I understand, the token which I added to the X-RequestDigest header of the HTTP POST request seems to have expired (by default it expires after 1800 seconds).
    Uploading such large files is time consuming, so I can't change the token in my header while uploading the file. How can I extend the expiration time of the token (if that is possible)? Could it help to send POST requests with the same token continuously while uploading the file and thereby prevent the token from expiring? Is there any other strategy for uploading such large files, which need much more than 1800 seconds to upload?
    Additionally any thoughts on the socket exception? It happens quite sporadically. So sometimes it happens after uploading 100 mb, and the other time I have already uploaded 1 gb.
    Thanks in advance!

    Hi,
    Thanks for the reply. The reason I'm looking into this is so users can migrate their files to SharePoint Online/Office 365. The max file limit is 2 GB, so I thought I would try and cover this.
    I've looked into that link before, and when I try to use those endpoints, i.e. StartUpload, I get the following response:
    {"error":{"code":"-1, System.NotImplementedException","message":{"lang":"en-US","value":"The method or operation is not implemented."}}}
    I can only presume that these endpoints have not been implemented yet, even though the documentation says they are only available for Office 365, which is what I am using.
    Also, the other strange thing is that I can actually upload a file of 512 MB to the server, as I can see it in the web UI. The problem I am experiencing is getting a response; it hangs on this line until the timeout is reached.
    WebResponse wresp = wreq.GetResponse();
    I've tried flushing and closing the stream, and SendChunked, but still with no success.

  • Database Log File becomes very big, What's the best practice to handle it?

    The log file of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server. Can anybody give me advice on the best practice to handle this issue?
    Should I Shrink the Database?
    I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and it gets cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get transactional file back in normal shape:
    1.) Take a transactional backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB (a recommended size for highly transactional systems).
    Finke Xie wrote:
    > Should I Shrink the Database?
    "NEVER SHRINK DATA FILES", shrink only the log file.
    3.) Schedule log backups every 15 minutes.
    Thanks
    Mush

  • Very large file upload 2 GB with Adobe Flash

    Hello, does anyone know how I can upload very large files with Adobe Flash, or how to use SWFUpload?
    Thanks in advance, all help will be very much appreciated.

    1. yes
    2. I'm getting an error message from PHP:
    if( $_FILES['Filedata']['error'] == 0 ){
      if( move_uploaded_file( $_FILES['Filedata']['tmp_name'], $uploads_dir.$_FILES['Filedata']['name'] ) ){
        echo 'ok';
        exit();
      }
      echo 'error';    // Overhere
      exit();
    }

  • Keeping "CS Web Service session" alive while uploading big files.

    Hi.
    I have a problem when uploading big files that take longer than the session timeout value, which causes the upload to fail.
    As you all know uploading a file is a three step process:
    1). Create a new DocumentDefinition Item on the server as a placeholder.
    2). Open an HTTP connection to the created placeholder and transfer the data using the HTTPConnection.put() method.
    3). Create the final document using the FileManager by passing in the destination folder and the document definition.
    The problem is that step 2 takes so long that the "CS Web Service Session" times out, and thus step 3 cannot be completed. The Developer Guide gives a utility method for creating an HTTP connection for step 2 and states the following: "..you must create a cookie for the given domain and path in order to keep the session alive while transferring data." But this only keeps the session of the HTTP connection alive and not the "CS Web Service Session". In my case step 2 completes successfully, and the moment I perform step 3 it throws an ORACLE.FDK.SessionError:ORACLE.FDK.SessionNotConnected exception.
    How does one keep the "CS Web Service Session" alive?
    Thanks in advance
    Regards.

    Okay, even a thread that pushes dummy stuff through once in a while doesn't help. I'm getting the following when the keep alive thread kicks in while uploading a big file.
    "AxisFault
    faultCode: {http://xml.apache.org/axis/}HTTP
    faultSubcode:
    faultString: (409)Conflict
    faultActor:
    faultNode:
    faultDetail:
    {}:return code: 409
     <HTML><HEAD><TITLE>409 Conflict</TITLE></HEAD><BODY><H1>409 Conflict</H1>Concurrent Requests On The Same Session Not Supported</BODY></HTML>
    {http://xml.apache.org/axis/}HttpErrorCode:409
    (409)Conflict
         at org.apache.axis.transport.http.HTTPSender.readFromSocket(HTTPSender.java:732)
         at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.java:143)
         at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
         at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
         at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
         at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165)
         at org.apache.axis.client.Call.invokeEngine(Call.java:2765)
         at org.apache.axis.client.Call.invoke(Call.java:2748)
         at org.apache.axis.client.Call.invoke(Call.java:2424)
         at org.apache.axis.client.Call.invoke(Call.java:2347)
         at org.apache.axis.client.Call.invoke(Call.java:1804)
         at oracle.ifs.fdk.FileManagerSoapBindingStub.existsRelative(FileManagerSoapBindingStub.java:1138)"
    I don't understand this: the exception talks about "Concurrent Requests On The Same Session", but if there is already a request going on, why is the session timing out in the first place?!
    I must be doing something really stupid somewhere. Aia ajay jay, what an unproductive day...
    Any help? It will be greatly appreciated...

  • Tips or tools for handling very large file uploads and downloads?

    I am working on a site that has a document repository feature. The documents are stored as BLOBs in an Oracle database, and for reasonably sized files it's no problem to stream the files out directly from the database. For file uploads, I am using the Struts module to get them onto disk and am then putting the BLOB in the database.
    We are now being asked to support very large files of 250MB+. I am concerned about problems I've heard of with HTTP not being reliable for files over 256MB. I'd also like a solution that would give the user a status bar and allow for restarts of broken uploads or downloads.
    Does anyone know of an off-the-shelf module that might help in this regard? I suspect an ActiveX control or Applet on the client side would be necessary. Freeware or Commercial software would be ok.
    Thanks in advance for any help/ideas.

    Hi. There is nothing wrong with HTTP handling 250MB+ files (per se).
    However, connections can get reset.
    Consider offering the files via FTP. Most FTP clients are good about resuming transfers.
    Or if you want to keep using HTTP, try supporting chunked encoding. Then a user can use something like 'GetRight' to auto resume HTTP downloads.
    Hope that helps,
    Peter
    http://rimuhosting.com - JBoss EJB/JSP hosting specialists
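    Resumable HTTP downloads in the style Peter describes hinge on the Range header; a rough Java sketch of the client side (URL, local file name and offsets are assumptions, and the server must answer 206 Partial Content for this to work):
    import java.io.File;
    import java.io.InputStream;
    import java.io.RandomAccessFile;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ResumeDownload {
        public static void main(String[] args) throws Exception {
            File partial = new File("report.pdf.part");                            // assumed local file
            long alreadyHave = partial.exists() ? partial.length() : 0;

            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/docs/report.pdf").openConnection();    // assumed URL
            conn.setRequestProperty("Range", "bytes=" + alreadyHave + "-");        // ask only for the rest

            if (conn.getResponseCode() == HttpURLConnection.HTTP_PARTIAL) {
                try (InputStream in = conn.getInputStream();
                     RandomAccessFile out = new RandomAccessFile(partial, "rw")) {
                    out.seek(alreadyHave);                     // append after what we already downloaded
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) > 0;) {
                        out.write(buf, 0, n);
                    }
                }
            } else {
                System.out.println("Server did not honour the Range request; restart from zero.");
            }
        }
    }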

  • Uploading Very Large Files via HTTP

    I am developing some classes that must upload files to a web server via HTTP and multipart/form-data. I am using Apache's Tomcat FileUpload library contained within the commons-fileupload-1.0.jar file on the server side. My code fails on large files or large quantities of small files because of the memory restriction of the VM. For example when uploading a 429 MB file I get this exception:
    java.lang.OutOfMemoryError
    Exception in thread "main"
    I have never been successful in uploading, regardless of the server-side component, more than ~30 MB.
    In a production environment I cannot alter the client's VM memory settings, so I must code my client classes to handle such cases.
    How can this be done in Java? This is the method that reads in a selected file and immediately writes it upon the output stream to the web resource as referenced by bufferedOutputStream:
    private void write(File file) throws IOException {
      byte[] buffer = new byte[bufferSize];
      BufferedInputStream fileInputStream = new BufferedInputStream(new FileInputStream(file));
      // read in the file
      if (file.isFile()) {
        System.out.print("----- " + file.getName() + " -----");
        while (fileInputStream.available() > 0) {
          if (fileInputStream.available() >= 0 &&
              fileInputStream.available() < bufferSize) {
            buffer = new byte[fileInputStream.available()];
          }
          fileInputStream.read(buffer, 0, buffer.length);
          bufferedOutputStream.write(buffer);
          bufferedOutputStream.flush();
        }
        // close the file's input stream
        try {
          fileInputStream.close();
        } catch (IOException ignored) {
          fileInputStream = null;
        }
      } else {
        // do nothing for now
      }
    }
    The problem is, the entire file, and any subsequent files being read in, are all being packed onto the output stream and don't begin actually moving until close() is called. Eventually the VM gives way.
    I require my client code to behave no differently from a typical web browser when uploading or downloading a file via HTTP. I know of several commercial applets that can do this; why can't I? Can someone please educate me or at least point me to a useful resource?
    Thank you,
    Henryiv

    Are you guys suggesting that the failures I'm experiencing in my client code are a direct result of the web resource's (servlet) caching of my request (files)? Because the exception that I am catching is on the client machine and is not generated by the web server.
    trumpetinc, your last statement intrigues me. It sounds as if you are suggesting having the client code and the servlet code open sockets and talk directly with one another. I don't think our customers would like that too much.
    Answering your first question:
    Your original post made it sound like the server is running out of memory. Is the out of memory error happening in your client code???
    If so, then the code you provided is a bit confusing - you don't tell us where you are getting the bufferedOutputStream - I guess I'll just assume that it is a properly configured member variable.
    OK - so now, on to what is actually causing your problem:
    You are sending the stream in a very odd way. I highly suspect that your call to
    buffer = new byte[fileInputStream.available()];
    is resulting in a massive buffer (fileInputStream.available() probably just returns the size of the file).
    This is what is causing your out of memory problem.
    The proper way to send a stream is as follows:
         static public void sendStream(InputStream is, OutputStream os, int bufsize)
                     throws IOException {
              byte[] buf = new byte[bufsize];
              int n;
              while ((n = is.read(buf)) > 0) {
                   os.write(buf, 0, n);
              }
         }
         static public void sendStream(InputStream is, OutputStream os)
                     throws IOException {
              sendStream(is, os, 2048);
         }
    The simple implementation with the hard-coded 2048 buffer size is fine for almost any situation.
    Note that in your code, you are allocating a new buffer every time through your loop. The purpose of a buffer is to have a block of memory allocated that you then move data into and out of.
    Answering your second question:
    No - actually, I'm suggesting that you use an HTTPUrlConnection to connect to your servlet directly - no need for special sockets or ports, or even custom protocols.
    Just emulate what your browser does, but do it in the applet instead.
    There's nothing that says that you can't send a large payload to an http servlet without multi-part mime encoding it. It's just that is what browsers do when uploading a file using a standard HTML form tag.
    I can't see that a customer would have anything to say on the matter at all - you are using standard ports and standard communication protocols... Unless you are not in control of the server side implementation, and they've already dictated that you will mime-encode the upload. If that is the case, and they are really supporting uploads of huge files like this, then their architect should be encouraged to think of a more efficient upload mechanism (like the one I describe) that does NOT mime encode the file contents.
    - K
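    A rough sketch of the HttpURLConnection approach trumpetinc describes, streaming the file so the client never buffers it whole (the URL, servlet and content type are assumptions; chunked streaming needs Java 5+, and the servlet must read the raw request body rather than expect multipart encoding):
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class StreamingUpload {
        public static void main(String[] args) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                new URL("http://example.com/app/uploadServlet").openConnection();  // assumed servlet URL
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/octet-stream");
            // Chunked streaming: data is sent out as it is written,
            // so the whole 429 MB file is never held in client memory.
            conn.setChunkedStreamingMode(8192);

            try (InputStream in = new FileInputStream("big-upload.bin");           // assumed file name
                 OutputStream out = conn.getOutputStream()) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) > 0;) {
                    out.write(buf, 0, n);
                }
            }
            System.out.println("Server answered HTTP " + conn.getResponseCode());
        }
    }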
