Sending a large amount of data over a POST

Hi.
I'm trying to send a large amount of text (about 400,000 characters) with a POST to a servlet running in Tomcat. For debugging, my servlet just writes the data to standard out and a text file.
The problem is, I get about 2048 characters printed correctly and then a long stream of garbage.
Has anyone seen a problem like this, or does anyone know of any limitations on the length of a POST?
Thanks,
Jerry

There might be a limitation on how much input text can be read in a single call.
You would be better off using an ObjectInputStream for reading.
If you have any more ideas, mail me at
[email protected]
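
A likely cause of the truncation is reading the request body with a single read() call: read() returns only the bytes that have already arrived (often one network buffer's worth, around 2 KB), and anything printed beyond that is whatever happened to be left in the byte array. Below is a minimal sketch of a doPost() that loops until the whole body has been read; the class name, buffer size, and UTF-8 charset are assumptions for illustration, not taken from the original post.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LargePostServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        InputStream in = req.getInputStream();
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        // read() may return only part of the body on each call, so loop until EOF
        while ((n = in.read(buf)) != -1) {
            body.write(buf, 0, n);
        }
        String text = body.toString("UTF-8");   // charset is an assumption
        System.out.println("received " + text.length() + " characters");
    }
}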

Similar Messages

  • Sending a large amount of data by "streaming"

    Hey guys,
    I'm writing an app that requires the user to download a large (a few GBs potentially) amount of data that will be stored in a database.  These are essentially thousands of records that we're transferring from the remote database to the local AIR SQLite database.  I'd like to store the records in the database as they're transferred from the server rather than waiting for the full response before storing them.  I think I'd like to transfer them using AMF.  Is there a way to do this using AMF?  Ideas?  Tips?  Thanks.

    Oh shnaps!  It looks like URLStream gives some support for AMF.  I think that's what I'm after.

  • Sending large amounts of data spontaneously

    In my normal experience with the internet connection, the bits of data sent are about 50 to 80% of those received, but occasionally Firefox starts transmitting large amounts of data spontaneously; what it is, I don't know, and where it's going, I don't know. For example, today the AT&T status screen showed about 19 MB received and about 10 MB sent after about an hour of on-line time. A few minutes later, I looked down at the status screen and it showed 19.5 MB received and 133.9 MB sent, and the number was steadily increasing. Just before I went on line today, I ran a complete scan of the computer with McAfee and it reported nothing needing attention. I ran the scan because a similar effusion of sending data had happened yesterday. When I noticed the data pouring out today, I closed Firefox and it stopped. When I opened Firefox right afterward, the transmission of data did not recommence. My first thought was that my computer had been captured by the bad guys and I was now a robot, but McAfee says not to worry. But should I worry anyway? What's going on, or, if that doesn't have a good answer now, how can I find out what's going on? And how can I make it stop, unless I'm seeing some kind of maintenance operation that Mozilla or Microsoft is subjecting me to?

    Instead of using URLConnection, open a Socket to the server port (probably 80) and send a POST HTTP request followed by the data; you may then (optionally) receive data from the server to check that the servlet is OK. This is the same protocol URLConnection speaks, but you have control over when the data is actually sent...
    Socket sock = new Socket(getHost(), 80);
    DataOutputStream dos = new DataOutputStream(sock.getOutputStream());
    dos.writeBytes("POST /servletname HTTP/1.0\r\n");
    dos.writeBytes("Content-Type: text/plain\r\n");   // optional, but good if you know it
    dos.writeBytes("Content-Length: " + lengthOfData + "\r\n");   // again optional, but good if you can know it without caching the data first
    dos.writeBytes("\r\n");   // gotta have a blank line before the data
      // send data now
    DataInputStream dis = new DataInputStream(sock.getInputStream());   // optional, if you want to receive
      // receive any feedback from the servlet if you want
    dis.close();
    dos.close();
    sock.close();
    I'm guessing that URLConnection caches the data so it can fill in "Content-Length".
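    If you would rather stay with the URL classes, here is a minimal sketch of the same idea using HttpURLConnection with fixed-length streaming, which avoids buffering the whole body just to fill in Content-Length. It is a fragment in the same style as the snippet above; the URL path and lengthOfData are placeholders, not taken from the original post:
    HttpURLConnection con = (HttpURLConnection) new URL("http://" + getHost() + "/servletname").openConnection();
    con.setDoOutput(true);
    con.setRequestMethod("POST");
    con.setRequestProperty("Content-Type", "text/plain");
    con.setFixedLengthStreamingMode(lengthOfData);   // body is streamed, not cached
    OutputStream out = con.getOutputStream();
    // write the data to out here, then:
    out.close();
    int status = con.getResponseCode();   // forces the request to complete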

  • Store agent started to transfer large amounts of data, eating all my bandwidth.  Seemed to just start 5-6 days ago.

    About 5-6 days ago, storeagent started to run continuously, sending and receiving large amounts of data.  This eats all my bandwidth quickly, essentially rendering my internet access worthless since I have to use satellite internet.  I have tried stopping it in Activity Monitor, but it restarts again. I thought I might have had a virus or something.  I downloaded Trend Micro for Mac, but found its core services essentially did the same thing. I uninstalled it, but found that storeagent is still running non-stop. Ideas?

    The storeagent process is a normal part of Mac OS X, not a virus. Remove Trend Micro, which is a quite poor choice for protecting yourself against malware in the first place (see the results of my Mac anti-virus testing 2014), and which isn't really necessary anyway (see my Mac Malware Guide).
    As for what it might be doing, as babowa points out, it should be present when the App Store app is open, and at that time, it might be occupied with downloading updates or something similar. If you keep force-quitting it in Activity Monitor, that probably ruins whatever download it was working on, so it has to start all over again, perpetuating the cycle. In general, it is a very bad idea to force-quit processes that are part of Mac OS X without a very good reason and an understanding of what they are.
    Go to System Preferences -> App Store:
    You will probably want to turn off automatic download of newly available updates, as well as automatic download of apps purchased on other Macs (if you have other Macs). I do not advise turning off the master "Automatically check for updates" box, or the one for installing security updates, as disabling those will reduce the security of your system. These security updates are typically small, so they should have very little impact on your total internet usage.

  • Bex Report Designer - Large amount of data issue

    Hi Experts,
    I am trying to execute (on the Portal) a report made in BEx Report Designer, with about 30,000 pages, and the only thing I am getting is a blank page. Everything works fine at about 3,000 pages. Do I need to set something to allow processing such a large amount of data?
    Regards
    Vladimir

    Hi Sauro,
    I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
    Good luck,
    Ethan

  • Azure Cloud service fails when sent large amount of data

    This is the error;
    Exception in AZURE Call: An error occurred while receiving the HTTP response to http://xxxx.cloudapp.net/Service1.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being
    aborted by the server (possibly due to the service shutting down). See server logs for more details.
    Calls with smaller amounts of data work fine. Large amounts of data cause this error.
    How can I fix this??

    Go to the web.config file, look for the <binding> that is being used for your service, and adjust the various parameters that limit the maximum length of the messages, such as
    maxReceivedMessageSize.
    http://msdn.microsoft.com/en-us/library/system.servicemodel.basichttpbinding.maxreceivedmessagesize(v=vs.100).aspx
    Make sure that you specify a size that is large enough to accommodate the amount of data that you are sending (the default is 64 KB).
    Note that even if you set a very large value here, you won't be able to go beyond the maximum request length that is configured in IIS. If I recall correctly, the default limit in IIS is 8 megabytes.

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data. One publishes approximately 50 items in a rate of 50ms, another about 40 items with 100ms publishing rate.
    I send a command to a subprogram (125ms) that reads and publishes the answer on a DSS URL (approx. 125 ms). So that is one item on the DSS for about 250ms. But this data is not seen in my main GUI window that reads the DSS URL.
    My questions are
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can the DSS become unstable if loaded too much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Profesional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated with VISA serial comm. It looks so neat and it's
    > fantastic what it's supposed to do for a developer, but sometimes one
    > runs into trouble very deep.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard?
    If so, then get a better serial adapter. If not, look more closely at
    VISA.
    Some programs have some issues on serial adapters but run fine on a
    regular serial port. We've had that problem recently.
    Best, Mark

  • Is there any way to connect a Time Capsule to a MacBook Pro directly via USB? I have a large amount of data that I want to back up and it is taking a very long time (35GB is taking 3 hrs; I have 2TB of files in total).

    Perhaps via USB? I have a large amount of data that I want to back up and it is taking a very long time (35GB is taking 3 hrs; I have 2TB of files in total). I want to use the Time Capsule as backup for an archive which is currently stored on a 2 TB WESC HD.

    No, you cannot back up via a direct USB connection.
    But gigabit ethernet is much faster anyway. Are you connected directly by ethernet?
    Is the drive you are backing up from plugged into the TC? That will slow it down something chronic; plug that drive in by its fastest connection method. WESC, sorry, I have no idea. If it has ethernet, use that; otherwise USB direct to the computer. Always think about which way the files come and go. Since you are copying from the computer, everything has to go that way, and it makes things slower if they go over the same cable, if you catch the drift.

  • How do I pause an iCloud restore for app with large amounts of data?

    I am using an iPhone app which is holding 10 Gb of data (media files) .
    Unfortunately, although all data was backed up, my iPhone 4 was faulty and needed to be replaced with a new handset. On restore, the 10Gb of data takes a very long time to restore over wi-fi. If the restore is interrupted (I reached the halfway point during the night) so I can go to work or take the dog for a walk, I of course end up on 3G for a short period of time.
    The next time I am in a wi-fi zone, the app starts restoring again right from the beginning.
    How does anyone restore an app with large amounts of data or pause a restore?

    You can use classifications, but there is no auto feature to archive like that on web apps.
    In terms of the blog, as I have said to everyone who has posted about blog preview images:
    http://www.prettypollution.com.au/business-catalyst-blog
    Just one example of an image at the start of the blog post rendering out, not hard at all.

  • How can I edit large amount of data using Acrobat X Pro

    Hello all,
    I need to edit a catalog that contains a large amount of data - mainly the product prices. Currently I can only export the document into an Excel file and then paste the new prices onto the catalog using Acrobat X Pro one by one, which is extremely time-consuming. I am sure there's a better way to make this faster while keeping the accuracy of the data. Thanks a lot in advance if anyone's able to help! 

    Hi Chauhan,
    Yes, I am able to edit text/images via the tool box, but the thing is the catalog contains more than 20,000 price entries and all I can do is delete the original price info from the catalog and replace it with the revised data from Excel. Repeating this process over 20,000 times would be a waste of time and manpower... Not sure if I've made my situation clear enough? Please just ask away, I really hope to sort it out. Thanks! 

  • Query about clustering unrelated large amounts of data together vs. keeping it separate.

    I would like to ask the talented enthusiasts who frequent the developer network to tell me if I have understood how Labview deals with clusters. A generic description of a situation involving clusters and what I believe Labview does is shown below. An example of this type of situation, generating the Fibonacci sequence, is attached to illustrate what I am saying.
    A description of the general situation:
    A cluster containing several different variables (mostly unrelated) has one or two of these variables unbundled for immediate use and then the modified values bundled back into the cluster for later use.
    What I think Labview does:
    As the original cluster is going into the unbundle (to get original variable values) and the bundle (to update stored variable values) a duplicate of the entire cluster is made before picking out the individual values chosen to be unbundled. This means that if the cluster also contains a large amount of unrelated data then processor time is wasted duplicating this data.
    If on the other hand this large amount of data is kept separate then this would not happen and no processor time is wasted.
    In the attached file the good method does have the array (large amount of unrelated data) within the cluster and does not use the array in more than one place, so it is not duplicated. If tunnels were used instead, I believe at least one duplicate is made.
    Am I correct in thinking that this is the behaviour Labview uses with clusters? (I expected Labview only to duplicate the variable values chosen in the unbundle code object only. As this choice is fixed at compile time it would seem to me that the compiler should be able to recognise that the other cluster variables are never used.)
    Is there a way of keeping the efficiency of using many separate variables (potentially ~50) whilst keeping the ease of using a single cluster variable rather than separate variables?
    The attachment:
    A vi that generates the Fibonacci sequence (the I32 used wraps at about the 44th value, so values at that point and later are wrong) is attached. The calculation is iterative, using a for loop. Two variables are needed to perform the iteration, and they are stored in a cluster (and passed from iteration to iteration within the cluster). To provide the large amount of unrelated data, a large array of reasonably sized strings is provided.
    The bad way is to have the array stored within the cluster (causing massive overhead). The good way is to have the array separate from the other pieces of data, even if it passes through the for loop (no massive overhead).
    Try replacing the array shift registers with tunnels in the good case and see if you can repeat my observation that using tunnels causes overhead in comparison to shift registers whenever there is no other reason to duplicate the array.
    I am running Labview 7 on Windows 2000 with sufficient memory so that the page file is not used in this example.
    Thank you all very much for your time and for sharing your Labview experience,
    Richard Dwan
    Attachments:
    Fibonacci_test.vi ‏71 KB

    > That is an interesting observation you have made and seems to me to be
    > quite inexplicable. The trick is interesting but not practical for me
    > to use in developing a large piece of software. Thanks for your input
    > - I think I'll be contacting technical support for an explanation
    > along with some other anomalies involving large arrays that I have
    > spotted.
    >
    The deal here is that the bundle and unbundle nodes must be very careful
    when they are swapping elements around. This used to make copies in the
    normal cases, but that has been improved. The reason that the sequence
    affects it is that it changes the order of the element movement, which
    lets the algorithm succeed in avoiding a copy.
    Another, more obvious way is to use a regular bundle and unbundle, not
    the named variety. These tend to have an easier time in the algorithm also.
    Technically, I'd report the diagram to tech support to see if the named
    bundle/unbundle case can be handled as well. In the meantime, you can
    leave the data unbundled, as in the faster version.
    Greg McKaskle

  • Streaming large amounts of data over a socket causes corruption?

    I'm writing an app to transfer large amounts of data via a simple client/server architecture between two machines.
    Problem: If I send the data too 'fast', the data arrives corrupted:
    - Calls to read() returns wrong data (wrong 'crc')
    - Subsequent calls to read() do not return -1 but allow me to read e.g. another 60 or 80 KBytes.
    - available() always returns '0'; but I'll get rid of that method anyway (as recommended in other forum entries).
    The behaviour is somewhat difficult to repeat, but it fails for me reliably when transferring the data between two separate machines and when setting the number of packets (Sender.TM) to 1000 or larger.
    Workaround: Reduce the number of packets sent to e.g. 1, or introduce the 'sleep' on the Sender side. Another workaround: Switching to java.nio.* alone did not help, but when I got rid of the Streams and used solely ByteBuffers, the problem disappeared. Unfortunately the Streams are required by other parts of my application.
    I'm running the code on two dual-CPU machines connected via
    Below is the code for the Sender and the Listener. Please excuse the style as this is only to demonstrate the problem.
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.channels.Channels;
    import java.nio.channels.SocketChannel;
    import java.util.Arrays;

    public class SenderBugStreams {

        public static void main(String[] args) throws IOException {
            InetSocketAddress targetAdr = new InetSocketAddress(args[0], ListenerBugStreams.DEFAULT_PORT);
            System.out.println("connecting to: " + targetAdr);
            SocketChannel socket = SocketChannel.open(targetAdr);
            sendData(socket);
            socket.close();
            System.out.println("Finished.");
        }

        static final int TM = 10000;
        static final int TM_SIZE = 1000;
        static final int CRC = 2;
        static int k = 5;

        private static void sendData(SocketChannel socket) throws IOException {
            OutputStream out = Channels.newOutputStream(socket);
            byte[] ba = new byte[TM_SIZE];
            Arrays.fill(ba, (byte) (k++ % 127));
            System.out.println("Sending..." + k);
            for (int i = 0; i < TM; i++) {
                out.write(ba);
    //            try {
    //                Thread.sleep(10);   // workaround: slowing the sender hides the problem
    //            } catch (InterruptedException e) {
    //                throw new RuntimeException(e);
    //            }
            }
            out.write(CRC);
            out.flush();
            out.close();
        }
    }
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.nio.channels.Channels;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class ListenerBugStreams {

        static int DEFAULT_PORT = 44521;

        /**
         * @param args
         * @throws IOException
         */
        public static void main(String[] args) throws IOException {
            ServerSocketChannel serverChannel = ServerSocketChannel.open();
            serverChannel.socket().bind(new InetSocketAddress(DEFAULT_PORT));
            System.out.print("Waiting...");
            SocketChannel clientSocket = serverChannel.accept();
            System.out.println(" starting, IP=" + clientSocket.socket().getInetAddress() +
                ", Port=" + clientSocket.socket().getLocalPort());
            // read data from the socket
            readData(clientSocket);
            clientSocket.close();
            serverChannel.close();
            System.out.println("Closed.");
        }

        private static void readData(SocketChannel clientSocket) throws IOException {
            InputStream in = Channels.newInputStream(clientSocket);
            // read and ingest objects
            byte[] ba = null;
            for (int i = 0; i < SenderBugStreams.TM; i++) {
                ba = new byte[SenderBugStreams.TM_SIZE];
                in.read(ba);   // note: may read fewer than TM_SIZE bytes -- see the reply below
                System.out.print("*");
            }
            // verify checksum
            int crcIn = in.read();
            if (SenderBugStreams.CRC != crcIn) {
                System.out.println("ERROR: Invalid checksum: " + SenderBugStreams.CRC + "/" + crcIn);
            }
            System.out.println(ba[0]);
            int x = in.read();
            int remaining = 0;
            while (x != -1) {
                remaining++;
                x = in.read();
            }
            System.out.println("Remaining:" + in.available() + "/" + remaining);
            System.out.println(" " + SenderBugStreams.TM + " objects ingested.");
            in.close();
        }
    }

    Here is your trouble:
        in.read(ba);
    read(byte[]) does not read N bytes, it reads up to N bytes. If one byte has arrived then it reads and returns that one byte. You always need to check the return value of read(byte[]) to see how much you got (and also check for EOF). TCP chops up the written data into whatever packets it feels like, and that makes read(byte[]) pretty random.
    You can use DataInputStream, which has a readFully() method; it loops calling read() until it gets the full buffer's worth. Or you can write a little static utility readFully() like so:
        // Returns false if it hits EOF immediately. Otherwise reads the full buffer's
        // worth. If it encounters EOF in mid-packet it throws an IOException.
        public static boolean readFully(InputStream in, byte buf[])
            throws IOException
        {
            return readFully(in, buf, 0, buf.length);
        }

        public static boolean readFully(InputStream in, byte buf[], int pos, int len)
            throws IOException
        {
            int got_total = 0;
            while (got_total < len) {
                int got = in.read(buf, pos + got_total, len - got_total);
                if (got == -1) {
                    if (got_total == 0)
                        return false;
                    throw new EOFException("readFully: end of file; expected " +
                                           len + " bytes, got only " + got_total);
                }
                got_total += got;
            }
            return true;
        }
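
    For example, the listener's read loop from the post above could then be written as follows (a hypothetical adaptation, assuming the static readFully() utility above is in scope along with the original in variable):
        for (int i = 0; i < SenderBugStreams.TM; i++) {
            byte[] ba = new byte[SenderBugStreams.TM_SIZE];
            if (!readFully(in, ba)) {
                throw new EOFException("stream ended after " + i + " packets");
            }
            System.out.print("*");
        }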

  • Storing large amounts of data

    Hello,
    I'd like to use Berkeley DB for logging large amounts of data - i.e. structures that are ~400KB in size which I need to store ~10 times per second for up to several hours, but I run into quite big performance issues the more records I insert into the database. I've set the pagesize to its maximum (64KB - I split my data into several packages so it doesn't get stored on an overflow page) and experimented with several cache sizes (8MB, 64MB, 2GB, 4GB), but I haven't managed to get rid of the performance issues, independent of which access method I use (although I got the "best" results when using DB_QUEUE, but that varies heavily from day to day).
    To get to the point: performance starts at "0" seconds per insert (where 1 "insert" = 7 real inserts because of splitting up the data), between the 16,750th and 17,000th insertion it takes ~0.00352 seconds per insert, and by the 36,000th insertion it already takes about 0.0074 seconds per insert, and so on ...
    Does anyone have an idea on how I can increase my performance? Because when the time needed for each insertion keeps increasing over time, it's not possible to keep the program running at its intended speed at some point.
    Thanks,
    Thomas

    Hello,
    A good starting point are the suggestions in the Berkeley DB Reference Guide at:
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/am_misc_tune.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_tune.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_throughput.html
    Thanks,
    Sandra

  • Java NIO - reading large amount of data

    Hi,
    I am having difficulties reading large amounts of data with SocketChannel (using a directly allocated buffer and a normal allocated one). Files greater than 300KB are cut even though I try to write the data into a FileChannel.
    My Code:
    ByteBuffer directBlockBuffer = ByteBuffer.allocateDirect(150000);
    buffer = ByteBuffer.allocate(6000000);
    out = new FOS("d:\\msgData.tmp");
    fc = out.getFOS().getChannel();   // FileChannel
    int fileLength = (int) fc.size();
    while (clientChannel.read(directBlockBuffer) > 0) {
        directBlockBuffer.flip();
        buffer.put(directBlockBuffer);
        directBlockBuffer.compact();
    }
    // close data file
    buffer.flip();
    fc.write(buffer);
    fc.close();
    out.close();
    // end of code
    Any ideas?
    Thanks
    AST

    I don't understand how the "write" result will help read the whole data.
    Anyway, I changed the code so the SocketChannel reads in smaller chunks (~8KB) and the FileChannel writes on every read,
    but the data stream is cut again (to ~5KB, no matter what size of file I send).
    In the updated code, when I try to compare socketChannel.read() to -1, I get an endless loop.
    I'm basically trying to write a POP3/SMTP server program; this part of the code handles attachments that are received by the SocketChannel in one unit (i.e. 1+ MB of data; the other SMTP commands/lines are no more than 27 chars and simple to handle).
    Therefore I need to be ready to accept a large amount of data into the buffer and write it to the FileChannel. (In the POP3 thread I'm using MappedByteBuffer successfully.)
    Updated code:
    ByteBuffer directBlockBuffer = ByteBuffer.allocateDirect(8192);
    while (clientChannel.read(directBlockBuffer) > 0 && directBlockBuffer.hasRemaining()) {
        directBlockBuffer.flip();
        fc.write(directBlockBuffer);
        directBlockBuffer.clear();
    }
    I think that, based on the API, my code is logical (and good for small files), but what about handling bigger files (up to 5MB)?
    Thanks,
    AST 
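
    In case it helps, here is a minimal sketch of the usual SocketChannel-to-FileChannel copy loop, where clientChannel and fc are assumed to be the variables from the snippets above. Note that this loop ends only when the sender closes or shuts down its side of the connection; on a long-lived SMTP connection, the end of the DATA section (the line with a single '.') has to terminate the read instead:
    ByteBuffer buf = ByteBuffer.allocateDirect(8192);
    while (clientChannel.read(buf) != -1) {   // -1 means the sender closed the connection
        buf.flip();
        while (buf.hasRemaining()) {
            fc.write(buf);   // FileChannel.write() can also be partial, so loop
        }
        buf.clear();
    }
    fc.close();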

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use a CachedRowSet on the bean for the page to get the data. This works very well at the moment in my development environment, where I am testing on a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has to do with the logic of paging. How do you specify which set of 20 records to extract via SQL?
    Thanks for your help!!
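
    For what it's worth, the paging itself usually ends up as a bounded query per page. Here is a hypothetical JDBC sketch; the ORDERS table, its columns, the MySQL-style LIMIT/OFFSET syntax, and the dataSource variable are all assumptions for illustration:
    int pageSize = 20;
    int pageIndex = 3;   // zero-based page requested by the table component
    String sql = "SELECT ID, CUSTOMER, TOTAL FROM ORDERS ORDER BY ID LIMIT ? OFFSET ?";
    Connection con = dataSource.getConnection();
    PreparedStatement ps = con.prepareStatement(sql);
    ps.setInt(1, pageSize);
    ps.setInt(2, pageIndex * pageSize);
    ResultSet rs = ps.executeQuery();
    while (rs.next()) {
        // copy only this page's 20 rows into the list backing the table
    }
    rs.close();
    ps.close();
    con.close();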
