Packaging large files (>200 MB) using dataPath mode

Hi...
I have a problem when trying to package an eBook larger than 200 MB into my test environment with Adobe Content Server.
Here's the XML I have constructed for the eBook:
<?xml version="1.0" encoding="UTF-8"?>
<package xmlns="http://ns.adobe.com/adept">
     <action>add</action>
     <dataPath>complete path to PDF/EPUB file</dataPath>
</package>
Here's the command used for trying to package the book:
java.exe -Xms512M -Xmx1024M -jar C:\AdobeContentServer\uploadtest\UploadTest-1_2.jar http://localhost:8080/packaging/Package C:\eBooks -datapath -xml -pass password -verbose
Here's the error response (with -verbose) given in the console window:
<error xmlns="http://ns.adobe.com/adept" data="E_ADEPT_UNKNOWN http://localhost:
8080/packaging/Package"/>
In packaging.log I can read that the Java process complains about heap size, which is weird, because I specifically told Java to use between 512 MB and 1024 MB.
How can this be fixed?

Anyone?
P.S. It works with files smaller than 200 MB...
Edit:
Increasing Tomcat's memory in the "Apache Tomcat Configuration" tool helped. The -Xms/-Xmx flags on the command line only size the UploadTest client's JVM; the packaging itself runs inside Tomcat's JVM, which has its own heap settings.
Baaaah
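For anyone else who lands here: a minimal sketch of the equivalent setting for a non-service Tomcat install (the heap values are just an assumption mirroring the client flags; on a Windows service install, the Java tab of the "Apache Tomcat Configuration" tool exposes the same initial/maximum memory options):

rem %CATALINA_HOME%\bin\setenv.bat -- picked up by catalina.bat at startup
set CATALINA_OPTS=-Xms512m -Xmx1024m

Restart Tomcat afterwards so the packaging service picks up the new heap limits.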

Similar Messages

  • Problem in Receiver side File Adapter using FTPS mode

    Hello,
    Here I am facing a problem on the receiver side while using FTPS mode in the channel configuration.
    The error message which I can see in the audit log is:
    Message processing failed. Cause: com.sap.aii.af.ra.ms.api.RecoverableException: Error when getting an FTP connection from connection pool: com.sap.aii.af.service.util.concurrent.ResourcePoolException: Unable to create new pooled resource: java.lang.NullPointerException
    Can anyone help me out in solving this issue ASAP?
    I am using 'Per File Transfer' mode in the receiver channel.
    Thanks in advance,
    Yours
    Soorya

    Hi Soorya,
    First check whether the FTP server has started, and then check that you can reach it:
    go to Run -> cmd, ping the IP address used in the channel, and see whether you get a response from the FTP server.
    Try to log in to the FTP server you have mentioned in the communication channel with the same user name and password, to check whether you have permission to log in to the server.
    Also check whether the folder you are trying to access has read/write/delete permissions.
    Restart the FTP server and try it again.
    Regards
    Sridhar Goli

  • Which format on a flash drive for large files for use by Mac and PC

    I need to copy large files (a 9 GB movie file exported from iMovie of a school graduation) onto 16 GB flash drives so they can be used by school parents who may have a Mac, a PC or even a TV.
    My first attempt says the files are too large.
    A lot of googling tells me this is a recognised problem, but I am confused about which advice to take.
    I am on a 2012 iMac running OS X version 10.7.4.
    Do I need to download some software to run NTFS? It all sounds so confusing!
    I ended up in this predicament because the quality of the photo slideshows I copied to DVD-R was so bad. Thought a flash drive would be easy. Ha, not at all.
    Please answer in layman's terms - I could barely follow some of the advice I found...

    Format the flash drives with Disk Utility in ExFAT format.

  • How to package .json files for use in an ANE for iOS

    I am creating an Adobe Native Extension (ANE) for use with iOS. The native code in the main .a library file used for the ANE depends on 3rd party frameworks which themselves depend on the definitions of JSON objects defined in several .json files. I can package the 3rd party frameworks with the ANE just fine - it's the .json files that I'm having difficulties with.
    I've tried packaging the .json files into the main .a library file, although I don't know if I did it the right way.
    Please help.

    You can read through the resources documentation for iOS here:
    http://help.adobe.com/en_US/air/extensions/WSf268776665d7970d-2e74ffb4130044f3619-7ff8.html
    Of particular interest is the line:
    "3. It moves the resource files to the top-level directory of the application."
    I'm not sure whether you are using an Ant build script or something else to build your ANE, but as long as you place these .json files in the folder with your library when you build the ANE, they will end up getting copied to the top-level directory of the application when an application is built using this ANE.
    This is exactly how I include an iOS .bundle of resources along with a .a in an ANE and it works fine. Nice clean packaging when distributing the ANE too.

  • Trying to transfer a large file using new iPod touch

    I am trying to transfer a large file (17 GB) using my new iPod touch. I got a new laptop and I am trying to get some things onto it. I was able to do this last time with my old iPod classic; it would let me copy/paste the file or drag it. But with the iPod touch I cannot do either of those. Any help will be appreciated. Thank you very much.
    Message was edited by: usagisailormoon

    ah really. ok thank you very much.

  • File Splitting for Large File processing in XI using EOIO QoS.

    Hi
    I am currently working on a scenario to split a large file (700 MB) using the sender file adapter's "Recordset Structure" property (e.g. Row,5000). As the files are split and mapped, they are appended to a destination file. In an example scenario, a 700 MB file comes in (say with 20000 records) and the destination file should end up with 20000 records.
    To ensure no records are missed during processing through XI, EOIO QoS is used. A trigger record is appended to the incoming file (the trigger record structure is the same as the main payload recordset) using a UNIX shell script before it is read by the sender file adapter.
    XPath conditions are evaluated in the receiver determination to either append the record to the main destination file or create a trigger file with only the trigger record in it.
    The problem we are facing is that the "Recordset Structure" (e.g. Row,5000) splits in chunks of 5000, and when the remaining records of the main payload are fewer than 5000 (say 1300), those remaining 1300 lines get grouped with the trigger record and written to the trigger file instead of the actual destination file.
    For the sake of this forum I have listed a sample XML file representing the inbound file, with the last record (Duns = "9999") as the trigger record that will be used to mark the end of the file after splitting and appending.
    <?xml version="1.0" encoding="utf-8"?>
    <ns:File xmlns:ns="somenamespace">
    <Data>
         <Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
         <Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Cage_Code>"3NQN1"</Cage_Code>
              <Extract_Code>"A"</Extract_Code>
         </Row>
    </Data>
    </ns:File>
    For test purposes, in the sender file adapter I have changed the "Recordset Structure" to "Row,5" for the sample inbound XML file above.
    I have two XPath expressions in the receiver determination to take the last recordset with Duns = "9999" and send it to the receiver (communication channel) that creates the trigger file.
    In my test case the first 5 records get appended to the correct destination file. But the last two records (the 6th and 7th) get sent to the receiver channel that is only supposed to take the trigger record (the last record, with Duns = "9999").
    Destination file (this is where all the records with Duns NE "9999" are supposed to get appended):
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
         <R3Row>
              <Duns>"001001924"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001925"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
         <R3Row>
              <Duns>"001001926"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
               <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001927"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"001001928"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Extract_Code>"A"</Extract_Code>
         </R3Row>
    </R3File>
    Trigger File:
    <?xml version="1.0" encoding="UTF-8"?>
    <R3File>
              <R3Row>
              <Duns>"001001929"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
              <R3Row>
              <Duns>"9999"</Duns>
              <Duns_Plus_4>""</Duns_Plus_4>
              <Ccr_Extract_Code>"A"</Ccr_Extract_Code>
         </R3Row>
    </R3File>
    I've tested the XPath conditions in XMLSpy and they work fine. My doubts are about the "Recordset Structure" property set as "Row,5".
    Any suggestions on this will be very helpful.
    Thanks,
    Mujtaba

    Hi Debnilay,
    We do have a 64-bit architecture and we still have the file processing problem. Currently we are splitting the file into smaller chunks and processing them, but we want to process it as a whole file.
    Thanks
    Steve

  • NIO - how to best transfer large binary files (>200 MB)

    Hi there,
    For my dissertation I am in the process of implementing a LAN NIO-based distributed system. Most of it is working, except I can't figure out how to best transfer large files (>200 MB).
    1. Client registers as available
    2. Server notifies that a job is available on file x
    3. Client requests file x
    4. Server transfers file x- without reading the whole file into memory-
    5. Client does its thing and tells the server the results are ready.
    6. Server requests results.
    This is all implemented, except for the actual file transfers, which just won't work the way I want. I have access to a shared drive that I could copy to and from, but Java still reads all the bytes fully into memory. I did a naughty workaround of calling the Windows native copy command (via Runtime), which does the trick, though I would prefer that my app did the file handling itself, since I need to know when things are done.
    Can anyone provide me with a link to an example / tutorial of how to do this, or is there a better way of transferring large files, like tunneling, streaming, FTP, or the old way of splitting into chunks with headers (1024 bits, byteFlip() etc.)? How is this usually done out there in the proper world?
    I did search the forum and found hints and tips, but I thought I would see if there is a best practice for this problem.
    Thanks,
    Thorsan

    Hi,
    I have tried various approaches without much luck; I think part of the problem is that I have never done file transfers in Java over a network before...
    At the moment I let the user select files from a JFileChooser and store the String path+filename in a list. When a file is to be transferred I do the following:
    static void copyNIO(File f, String newFile) throws Exception {
        File f2 = new File(newFile);
        // Channel-to-channel copy: the OS moves the bytes, nothing is pulled into the Java heap
        FileChannel src = new FileInputStream(f).getChannel();
        FileChannel dst = new FileOutputStream(f2).getChannel();
        dst.transferFrom(src, 0, src.size());
        src.close();
        dst.close();
    }
    This lets me copy the file to wherever I want without going through a socket; since I am developing in a LAN environment this was OK for debugging. Though sooner or later it has to go through the network.
    I have also played around with ObjectOutputStream and ObjectInputStream, but again I just can't figure out how to stop my source from loading the whole file into memory. I want to read / write in blocks to keep memory usage to a minimum.
    Thorsan
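    Not from the original thread, but here is a minimal sketch of the kind of chunked socket transfer being asked about, assuming an already-connected blocking SocketChannel on the sending side (the class name and 64 KB buffer size are illustrative, not from the poster's code):

    import java.io.File;
    import java.io.FileInputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    public class FileSender {
        // Streams a file over an already-connected channel while never holding
        // more than one 64 KB buffer of it in the Java heap.
        public static void send(File f, SocketChannel socket) throws Exception {
            FileChannel src = new FileInputStream(f).getChannel();
            try {
                ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
                while (src.read(buf) != -1) {
                    buf.flip();
                    while (buf.hasRemaining()) {
                        socket.write(buf); // loop until the block is fully written
                    }
                    buf.clear();
                }
            } finally {
                src.close();
            }
            // Alternatively, src.transferTo(position, count, socket) lets the OS do the
            // copy, but it may transfer fewer bytes than requested, so loop over it too.
        }
    }

    The receiving side is the mirror image: read into a buffer of the same size and write it to a FileChannel, so neither end ever holds more than one block of the file in memory.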

  • Large File Encryption is Too Slow!! Help

    I am trying to write some methods to handle encryption of zip files, most of which are very large (over 200 MB). At first I attempted to load the file into memory, create a Base64-encoded string and encrypt that back out to a file, but there is nowhere near enough memory to do that, and as files get larger it would never be possible to support it. So instead I am opting for something like CipherOutputStream: read in, write out, with no buffer. On small files (smaller than 1 MB) this is great, as it happens instantly. But when it gets up to the 200 MB minimum size of our zips, it takes hours. Is there any way I can speed this up? (See code below.)
    import java.util.*;
    import java.io.*;
    import java.util.zip.*;
    import javax.crypto.*;
    import javax.crypto.spec.*;

    public class EncryptedZipUtil {
        private static final String ALGO = "DESede";
        private static final String CIPHER_ALGO = "DESede/CBC/PKCS5Padding";
        private static final byte[] IV_PARAMS = new byte[] {
                127, 111, 13, 120, 12, 34, 56, 78
        };
        private static final String ENCODING = "UTF-8";
        private static final byte[] ENCRIPTION_KEY = "Encryption key must be at least 30 characters long".getBytes();

        // Builds a DESede/CBC cipher for the given mode (ENCRYPT_MODE or DECRYPT_MODE)
        private static Cipher getCipher(int mode) throws Throwable {
            DESedeKeySpec spec = new DESedeKeySpec(ENCRIPTION_KEY);
            SecretKeyFactory keyFactory = SecretKeyFactory.getInstance(ALGO);
            SecretKey theKey = keyFactory.generateSecret(spec);
            Cipher dcipher = Cipher.getInstance(CIPHER_ALGO);
            IvParameterSpec IvParameters = new IvParameterSpec(IV_PARAMS);
            dcipher.init(mode, theKey, IvParameters);
            return dcipher;
        }

        public static ZipInputStream getDecryptedZipStream(File f) throws Throwable {
            FileInputStream fin = new FileInputStream(f);
            return new ZipInputStream(new CipherInputStream(fin, getCipher(Cipher.DECRYPT_MODE)));
        }

        public static void streamEncryptZipFile(File zipFile, File outzip) throws Throwable {
            FileInputStream fin = new FileInputStream(zipFile);
            CipherOutputStream cos = new CipherOutputStream(new FileOutputStream(outzip), getCipher(Cipher.ENCRYPT_MODE));
            // Unbuffered, one byte per call -- this loop is where the hours go on large files
            while (fin.available() != 0)
                cos.write(fin.read());
            fin.close();
            cos.close();
        }

        public static void streamDecryptZipFile(File e_zipFile, File outzip) throws Throwable {
            CipherInputStream cin = new CipherInputStream(new FileInputStream(e_zipFile), getCipher(Cipher.DECRYPT_MODE));
            FileOutputStream fos = new FileOutputStream(outzip);
            int the_byte = -1;
            while ((the_byte = cin.read()) != -1)
                fos.write(the_byte);
            cin.close();
            fos.close();
        }

        public static void main(String[] args) throws Throwable {
            EncryptedZipUtil.streamEncryptZipFile(new File("D:\\ziptest\\original.zip"), new File("D:\\ziptest\\encrypted.zip"));
            EncryptedZipUtil.streamDecryptZipFile(new File("D:\\ziptest\\encrypted.zip"), new File("D:\\ziptest\\decrypted.zip"));
        }
    }
    Like I said, the code here works great with small files: quick, correct and simple. My machine is a 2.8 GHz P4 with 1024 MB RAM, and when I run this on my larger files it uses 99% CPU and no memory in addition to the standard JVM heap.
    Any advice would be greatly appreciated

    Nice answer, but for additional duke points I've thought of an issue with it: although this definitely speeds up encrypt/decrypt time, there is no longer any way for me to access this file as a stream directly. If you look at the method getDecryptedZipStream(File f): since the file is generally a zip, getting a stream in this manner won't work if we want to thumb through the contents and extract entries on the fly, because now the encryption is done in chunks rather than byte by byte, which is how the underlying stream would be read if we wanted to browse the zip. So this solution doesn't solve all aspects of the issue. My alternative was to encrypt the individual contents of the zip prior to zipping and leave the zip structure intact. I could then browse the zip but cannot access any of the data without decrypting each file first. I don't know if that is the best solution as of yet.
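    For what it's worth, a minimal sketch of the buffered, chunked copy being discussed above. It expects a Cipher such as the one returned by the poster's getCipher helper; the 64 KB block size is just an assumption:

    import java.io.*;
    import javax.crypto.Cipher;
    import javax.crypto.CipherOutputStream;

    public class ChunkedEncrypt {
        // Pushes the file through the cipher in 64 KB blocks instead of one byte
        // per call; the speed-up comes from doing far fewer stream operations.
        public static void encrypt(File in, File out, Cipher cipher) throws IOException {
            InputStream fin = new BufferedInputStream(new FileInputStream(in));
            OutputStream cos = new CipherOutputStream(new FileOutputStream(out), cipher);
            try {
                byte[] block = new byte[64 * 1024];
                int n;
                while ((n = fin.read(block)) != -1) {
                    cos.write(block, 0, n);
                }
            } finally {
                fin.close();
                cos.close(); // flushes the final padded cipher block
            }
        }
    }

    Decryption is symmetric: wrap the input side in a CipherInputStream and copy with the same block loop.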

  • Exception while reading a very large file (300 MB)

    Hi
    I want to read a file which has a size of more than 300 MB.
    While I'm executing my code, I'm getting:
    Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    Below is my code
    1. FileChannel fc = new FileInputStream(fFile).getChannel();
    2.CharBuffer chrBuff = Charset.forName("8859_1").newDecoder().decode(fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size()));
    3. fc.close();
    I'm getting the exception at line 2.
    Even though I increased the heap space up to 900 MB, I'm still getting this error.
    (FYI)
    I executed it from the command prompt like below:
    java -Xms128m -Xmx1024m readLargeFile
    Kindly give a solution for this. Is there a better way to read a large text file?
    I'm waiting for your reply.
    Thanks in advance

    Thanks for your reply.
    My task is to open a large file in read/write mode; I need to search for a portion of text in that file by finding its start and end points.
    Then I need to write that searched area of text to a new file and delete that portion from the original file.
    I will do the above process many times.
    So I thought that for this it would be easy to load the file into memory in a CharBuffer and search it with the Matcher class.
    That is my task; please suggest some efficient approaches.
    Note that my files will be large, and I have to do this using Java only.
    Thanks in Advance...
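    Not from the thread, but a minimal sketch of one way to keep memory bounded for this kind of scan, assuming a single-byte charset such as 8859_1 so character offsets map back to byte positions (all names here are illustrative):

    import java.io.FileInputStream;
    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.Charset;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class WindowedScan {
        public static void main(String[] args) throws Exception {
            Pattern marker = Pattern.compile(args[1]); // text to search for
            FileChannel fc = new FileInputStream(args[0]).getChannel();
            try {
                ByteBuffer window = ByteBuffer.allocate(8 * 1024 * 1024); // 8 MB at a time
                long pos = 0;
                while (fc.read(window, pos) > 0) {
                    window.flip();
                    // Decode only this window; with a single-byte charset, chars == bytes.
                    CharBuffer chars = Charset.forName("8859_1").decode(window);
                    Matcher m = marker.matcher(chars);
                    while (m.find()) {
                        System.out.println("match at byte " + (pos + m.start()));
                    }
                    pos += window.limit();
                    window.clear();
                    // Caveat: a match straddling two windows is missed; overlap the
                    // windows by the maximum match length if that matters.
                }
            } finally {
                fc.close();
            }
        }
    }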

  • Loading large files in Java Swing GUI

    Hello Everyone!
    I am trying to load large files (more than 70 MB of XML text) in a Java Swing GUI. I have tried several approaches:
    1) Byte-based loading with a loop similar to:
    pane.setText("");
    InputStream file_reader = new BufferedInputStream(new FileInputStream(file));
    int BUFFER_SIZE = 4096;
    byte[] buffer = new byte[BUFFER_SIZE];
    int bytesRead;
    String line;
    // Append the file to the text pane one 4 KB block at a time
    while ((bytesRead = file_reader.read(buffer, 0, BUFFER_SIZE)) != -1) {
        line = new String(buffer, 0, bytesRead);
        pane.append(line);
    }
    But this gives me unacceptable response times for large files and runs out of Java heap memory.
    2) I read in several places that I could load only small chunks of the file at a time, and when the user scrolls up or down the next/previous chunk is loaded. To achieve this I am guessing extensive manipulation of the scroll bar in the JScrollPane will be needed, or perhaps adding an external JScrollBar? Can anyone provide sample code for that approach? (Bear in mind that I am writing code for an editor, so I will need to interact via clicks, mouse wheel rotation, keyboard buttons and so on...)
    If anyone can help me, post sample code or point me to useful links that deal with this issue or with writing code for editors in general, I would be very grateful.
    Thank you in advance.

    Hi,
    I'm replying to your question from another thread.
    To handle large files I used the new IO library. I'm trying to remember off the top of my head, but the classes involved were RandomAccessFile, FileChannel and MappedByteBuffer. The MappedByteBuffer was the best way for me to read and write to the file.
    When opening the file I had to scan through its contents using a Swing worker thread and a progress monitor. Whilst doing this I indexed the file into manageable chunks. I also created a cache to further optimise file access.
    In all it worked really well and I was surprised by the performance of the new IO libraries. I remember loading 1 GB files, and whilst you had to wait a few seconds for the indexing, you wouldn't know that the data for the JList was being retrieved from a file while the application was running.
    Good Luck,
    Martin.
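    Not from the thread, but a minimal sketch of the mapped, chunk-at-a-time access Martin describes, assuming a single-byte charset so offsets stay simple (the class name and 1 MB chunk size are illustrative):

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class ChunkReader {
        private static final int CHUNK = 1 << 20; // 1 MB per mapped window
        private final FileChannel channel;

        public ChunkReader(File f) throws IOException {
            channel = new RandomAccessFile(f, "r").getChannel();
        }

        /** Number of chunks the file divides into. */
        public int chunkCount() throws IOException {
            return (int) ((channel.size() + CHUNK - 1) / CHUNK);
        }

        /** Returns the text of chunk n; only this window is touched. */
        public String chunk(int n) throws IOException {
            long start = (long) n * CHUNK;
            long len = Math.min(CHUNK, channel.size() - start);
            MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, start, len);
            byte[] bytes = new byte[(int) len];
            buf.get(bytes);
            return new String(bytes, "ISO-8859-1"); // one byte per char keeps indexing simple
        }
    }

    A text component's model can then ask for chunk(n) as the user scrolls, and a small cache of recently used chunks keeps it responsive, which is essentially the indexing-plus-cache scheme described above.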

  • Compressing large file into several small files

    What can I use to compress a 5 GB file into several smaller files that can easily be rejoined at a later date?
    thanks

    Hi, Simon.
    Actually, what it sounds like you want to do is take a large file and break it up into several compressed files that can later be rejoined.
    Two ideas for you:
    1. Put a copy of the file in a folder of its own, then create a disk image of that folder. You can then create a segmented disk image using the segment verb of the hdiutil command in Terminal. Disk Utility provides a graphical user interface (GUI) to some of the functions in hdiutil, but unfortunately not the segment verb, so you have to use hdiutil in Terminal to segment a disk image.
    2. If you have StuffIt Deluxe, you can create a segmented archive. This takes one large StuffIt archive and breaks it into smaller segments of a size you define.
    2.1. You first make a StuffIt archive of the large file, then use StuffIt's Segment function to break this into segments.
    2.2. Copying all the segments back to your hard drive and unstuffing the first segment (which is readily identifiable) will unpack all the segments and recreate the original, large file.
    I'm not sure if StuffIt Standard Edition supports creating segmented archives, but I know StuffIt Deluxe does, as I have that product.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X

  • How do I email large files

    What is the best way of emailing some large files?

    Email is typically limited to 1-5 MB of attachments per email - depends on your ISP's limits.
    For larger files, consider using Dropbox.  Free.  Easy.

  • Processing Large Files using Chunk Mode with ICO

    Hi All,
    I am trying to process large files using an ICO. I am on PI 7.3 and I am using the new PI 7.3 feature to split the input file into chunks.
    I know that we cannot use mapping while using chunk mode.
    While trying this I noticed the points below:
    1) I created a Data Type, Message Type and interfaces in the ESR and used them in my scenario (no mapping was defined); the sender and receiver data types were the same.
    Result: the scenario did not work. It created only one chunk file (a .tmp file) and terminated.
    2) I used a dummy interface in my scenario and it worked fine.
    So, please confirm whether we should always use dummy interfaces in the scenario while using chunk mode in PI 7.3, or is there something I am missing?
    Thanks in Advance,
    - Pooja.

    Hello,
    While trying this I noticed the points below:
    1) I created a Data Type, Message Type and interfaces in the ESR and used them in my scenario (no mapping was defined); the sender and receiver data types were the same.
    Result: the scenario did not work. It created only one chunk file (a .tmp file) and terminated.
    2) I used a dummy interface in my scenario and it worked fine.
    So, please confirm whether we should always use dummy interfaces in the scenario while using chunk mode in PI 7.3, or is there something I am missing?
    According to this blog:
    File/FTP Adapter - Large File Transfer (Chunk Mode)
    The following limitations apply to the chunk mode in File Adapter
    As per the above screenshots, the split never considers the payload. It's just a binary split, so the following limitations apply:
    Only for File Sender to File Receiver
    No Mapping
    No Content Based Routing
    No Content Conversion
    No Custom Modules
    You are probably doing content conversion; that is why it is not working.
    Hope this helps,
    Mark
    Edited by: Mark Dihiansan on Mar 5, 2012 12:58 PM

  • Process large file using BPEL

    My project has a requirement to process a large file (10 MB) all at once. In the project, the file adapter reads the file, then calls 5 other BPEL processes to do 10 different validations before delivering to an Oracle database. I can't use the adapter's debatching feature because of the header and detail record validation requirement. I did some performance tuning (e.g. audit level to minimum, logging level to error, JVM size to 2 GB, etc.) as described in the Oracle BPEL user guide. We are using a 4-CPU, 4 GB RAM IBM AIX 5L server. I observed that the Receive activity at the beginning of each process takes a lot of time, while the other transient activities perform as expected.
    The following are statistics for the Receive activity per BPEL process:
    500KB: 40 Sec
    3MB: 1 Hour
    Because we have 5 BPEL processes, a lot of time is wasted in the Receive activity.
    I didn't try 10 MB so far, because of the poor performance figures for the 3 MB file.
    Does anyone have an idea how to improve the performance of the initial Receive activity of a BPEL process?
    Thanks
    -Simanchal

    I believe the limit in SOA Suite is 7 MB if you want to use the full payload and perform some kind of orchestration. Otherwise you need to do some kind of debatching, which you stated will not work.
    SOA Suite is not really designed for your kind of use case, as it needs to process this file in memory, and when any transformation occurs it can increase the message size by 3 to 10 times. If you are writing to a database, why can't you read the rows one by one?
    If you want to perform this kind of action, have a look at ODI (Oracle Data Integrator). I also believe that OSB (AquaLogic) can handle files up to 200 MB, so this can be an option as well, but it may require debatching.
    cheers
    James

  • Reading large files -- use FileChannel or BufferedReader?

    Question --
    I need to read files and get their content. The issue is that I have no idea how big the files will be. My best guess is that most are less than 5 KB, but some will be huge.
    I have it set up using a BufferedReader, which is working fine. It's not the fastest thing (using readLine() and StringBuffer.append()), but so far it's usable. However, I'm worried that if I need to deal with large files, such as a PDF or other binary, BufferedReader won't be so efficient if I do it line by line. (And will I run into issues trying to put a binary file into a String?)
    I found a post that recommended FileChannel and ByteBuffer, but I'm running into a java.lang.UnsupportedOperationException when trying to get the byte[] from the ByteBuffer.
    File f = new File(binFileName);
    FileInputStream fis = new FileInputStream(f);
    FileChannel fc = fis.getChannel();
    // Get the file's size and then map it into memory
    int sz = (int) fc.size();
    MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
    fc.close();
    // Blows up here: a mapped (direct) buffer has no accessible backing array,
    // so bb.array() throws UnsupportedOperationException
    String contents = new String(bb.array());
    Thanks in advance.
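    For what it's worth, a minimal sketch of one way around that particular exception, assuming the file fits in the address space: decode the mapped buffer instead of asking it for a backing array, which direct buffers do not expose (the class name and charset here are assumptions):

    import java.io.FileInputStream;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.Charset;

    public class MapAndDecode {
        public static void main(String[] args) throws Exception {
            FileChannel fc = new FileInputStream(args[0]).getChannel();
            try {
                MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
                // bb.array() fails because a mapped buffer is not array-backed;
                // decoding it through a Charset works and yields the text as a String.
                String contents = Charset.forName("ISO-8859-1").decode(bb).toString();
                System.out.println(contents.length() + " chars read");
            } finally {
                fc.close();
            }
        }
    }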

    If all you are doing is reading data, I don't think you're going to get much faster than InfoFetcher.
    You are welcome to use and modify this class, but please don't change the package or take credit for it as your own work.
    InfoFetcher.java
    ==============
    package tjacobs.io;

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.Iterator;

    // Note: relies on other helpers in the tjacobs.io package (TimeOut, IOUtils,
    // InputStreamListener, InputStreamEvent, PartialReadException) that are not
    // included in this post.
    /**
     * InfoFetcher is a generic way to read data from an input stream (file, socket, etc).
     * InfoFetcher can be set up with a thread so that it reads from an input stream
     * and reports to registered listeners as it gets more information. This vastly
     * simplifies the process of always re-writing the same code for reading from an
     * input stream.
     * <p>
     * I use this all over.
     */
    public class InfoFetcher implements Runnable {
        public byte[] buf;
        public InputStream in;
        public int waitTime;
        private ArrayList mListeners;
        public int got = 0;
        protected boolean mClearBufferFlag = false;

        public InfoFetcher(InputStream in, byte[] buf, int waitTime) {
            this.buf = buf;
            this.in = in;
            this.waitTime = waitTime;
        }

        public void addInputStreamListener(InputStreamListener fll) {
            if (mListeners == null) {
                mListeners = new ArrayList(2);
            }
            if (!mListeners.contains(fll)) {
                mListeners.add(fll);
            }
        }

        public void removeInputStreamListener(InputStreamListener fll) {
            if (mListeners == null) {
                return;
            }
            mListeners.remove(fll);
        }

        // Blocks until the stream is exhausted and returns the filled buffer.
        public byte[] readCompletely() {
            run();
            return buf;
        }

        public int got() {
            return got;
        }

        public void run() {
            if (waitTime > 0) {
                TimeOut to = new TimeOut(waitTime);
                Thread t = new Thread(to);
                t.start();
            }
            int b;
            try {
                while ((b = in.read()) != -1) {
                    if (got + 1 > buf.length) {
                        buf = IOUtils.expandBuf(buf);
                    }
                    int start = got;
                    buf[got++] = (byte) b;
                    // Grab whatever else is already available in one bulk read
                    int available = in.available();
                    //System.out.println("got = " + got + " available = " + available + " buf.length = " + buf.length);
                    if (got + available > buf.length) {
                        buf = IOUtils.expandBuf(buf, Math.max(got + available, buf.length * 2));
                    }
                    got += in.read(buf, got, available);
                    signalListeners(false, start);
                    if (mClearBufferFlag) {
                        mClearBufferFlag = false;
                        got = 0;
                    }
                }
            } catch (IOException iox) {
                throw new PartialReadException(got, buf.length);
            } finally {
                buf = IOUtils.trimBuf(buf, got);
                signalListeners(true);
            }
        }

        private void setClearBufferFlag(boolean status) {
            mClearBufferFlag = status;
        }

        public void clearBuffer() {
            setClearBufferFlag(true);
        }

        private void signalListeners(boolean over) {
            signalListeners(over, 0);
        }

        private void signalListeners(boolean over, int start) {
            if (mListeners != null) {
                Iterator i = mListeners.iterator();
                InputStreamEvent ev = new InputStreamEvent(got, buf, start);
                //System.out.println("got: " + got + " buf = " + new String(buf, 0, 20));
                while (i.hasNext()) {
                    InputStreamListener fll = (InputStreamListener) i.next();
                    if (over) {
                        fll.gotAll(ev);
                    } else {
                        fll.gotMore(ev);
                    }
                }
            }
        }
    }
