InputStreamReader and OutputStreamWriter

How can I use InputStreamReader and OutputStreamWriter without using a file?
Currently, I key in a Japanese word (Shift_JIS) from a JSP page and then store it in a database.
I also want to extract the Japanese data from my database (UTF-8) and display it in a JSP.
But I have no idea how to use InputStreamReader and OutputStreamWriter if I don't want to use a file.
Can anybody show me an example?
Currently I'm getting ??? both when storing data into the database
and when displaying it in the JSP.

To display Japanese, use 'response.setContentType("text/html;charset=Shift_JIS")' before getting the writer from the response object.
While inserting the string into the database, convert it using the appropriate encoding.
For example, before inserting a String variable named test, use:
<code>
// Get test from the request object
test = new String(test.getBytes(), "Shift_JIS");
// insert test into database
</code>
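For the "without using a file" part of the question: InputStreamReader and OutputStreamWriter wrap any InputStream/OutputStream, not just file streams. A minimal sketch using in-memory byte-array streams (the class name and sample string are just for illustration):
<code>
import java.io.*;

public class InMemoryEncodingDemo {
    public static void main(String[] args) throws IOException {
        String original = "\u65e5\u672c\u8a9e"; // "Japanese" in Japanese

        // Encode characters to Shift_JIS bytes without touching a file
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        Writer writer = new OutputStreamWriter(baos, "Shift_JIS");
        writer.write(original);
        writer.close();
        byte[] sjisBytes = baos.toByteArray();

        // Decode the Shift_JIS bytes back into characters
        Reader reader = new InputStreamReader(
                new ByteArrayInputStream(sjisBytes), "Shift_JIS");
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = reader.read()) != -1) {
            sb.append((char) c);
        }
        reader.close();

        System.out.println(original.equals(sb.toString())); // prints true
    }
}
</code>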

Similar Messages

  • Javaagent - does not instrument InputStreamReader and BufferedReader

    I have created a Java agent as described below and ran it as:
    java -javaagent:rewriteagent.jar MainToBeRewritten
    Could anyone give me a hint why the classes InputStreamReader and BufferedReader were not instrumented, but the class PrintWriter was?
    Thanks in advance.
    import java.lang.instrument.Instrumentation;
    public class AgentMain {
         public static void premain(String agentArguments,
                   Instrumentation instrumentation) {
              instrumentation.addTransformer(new SimpleTransformer());
         }
    }
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    public class MainToBeRewritten {
         public static void main(String[] args) {
              PrintWriter p = new PrintWriter(System.out);
              p.write("Hello world.\n");
              p.flush();
              InputStreamReader s = new InputStreamReader(System.in);
              BufferedReader br = new BufferedReader(s);
              try {
                   int i = br.read();
              } catch (IOException e1) {
                   e1.printStackTrace();
              }
              try {
                   br.close();
                   s.close();
              } catch (IOException e) {
                   e.printStackTrace();
              }
         }
    }
    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.IllegalClassFormatException;
    import java.security.ProtectionDomain;
    public class SimpleTransformer implements ClassFileTransformer {
         public byte[] transform(ClassLoader loader, String className,
                   Class<?> redefiningClass, ProtectionDomain domain, byte[] bytes)
                   throws IllegalClassFormatException {
              System.out.println("REWRITING: " + className);
              return bytes;
         }
    }
    rewriteagent.jar contains the file:
    META-INF/MANIFEST.MF
    Manifest-Version: 1.0
    Premain-Class: AgentMain
    Output after running:
    REWRITING: MainToBeRewritten
    REWRITING: java/io/IOException
    REWRITING: java/io/PrintWriter
    Hello world.
    My keybord input
    REWRITING: java/util/AbstractList$Itr
    REWRITING: java/util/IdentityHashMap$KeySet
    REWRITING: java/util/IdentityHashMap$KeyIterator
    REWRITING: java/util/IdentityHashMap$IdentityHashMapIterator
    REWRITING: java/io/DeleteOnExitHook
    REWRITING: java/util/LinkedHashSet
    REWRITING: java/util/HashMap$KeySet
    REWRITING: java/util/LinkedHashMap$KeyIterator
    REWRITING: java/util/LinkedHashMap$LinkedHashIterator
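    No answer was recorded in this thread, but a plausible explanation is that InputStreamReader and BufferedReader were already loaded before the transformer was registered; a ClassFileTransformer is only invoked for classes loaded after it is added. A hedged sketch of one way to reach already-loaded classes, using the retransformation support added in Java 6 (it also requires Can-Retransform-Classes: true in the agent manifest):
    <code>
    import java.lang.instrument.Instrumentation;
    import java.lang.instrument.UnmodifiableClassException;

    public class AgentMain {
        public static void premain(String agentArguments,
                Instrumentation instrumentation) {
            // true = this transformer may also be used for retransformation
            instrumentation.addTransformer(new SimpleTransformer(), true);
            // Push classes that were loaded before the agent back through it
            for (Class<?> clazz : instrumentation.getAllLoadedClasses()) {
                if (instrumentation.isModifiableClass(clazz)) {
                    try {
                        instrumentation.retransformClasses(clazz);
                    } catch (UnmodifiableClassException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
    </code>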


  • InputStreamReader and TextBoxes

    I am trying to save the contents of two text boxes into a text file using the menu bar (File > Save). I know I have to use the InputStreamReader.
    Most code examples I have seen read from standard input. I have not found any that read from a text box.
    I then am trying to read the text file back into the text boxes. I have to paste the right text into the right text box.
    Ideas?

    You can probably use getText and setText on your text boxes.
    And you can use
    BufferedReader br = new BufferedReader(new FileReader("text.txt"));
    to read from a text file, and
    PrintWriter pw = new PrintWriter(new FileWriter("text.txt"), true);
    to write to a file, just like you are used to with System.out.
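    A minimal sketch of the save/load round trip, assuming two Swing JTextField components named field1 and field2 (the names and the one-line-per-field file format are assumptions for illustration):
    <code>
    import java.io.*;
    import javax.swing.JTextField;

    public class TextBoxPersistence {
        // Write each field on its own line
        static void save(JTextField field1, JTextField field2, File file)
                throws IOException {
            PrintWriter pw = new PrintWriter(new FileWriter(file), true);
            pw.println(field1.getText());
            pw.println(field2.getText());
            pw.close();
        }

        // Read the lines back in the same order
        static void load(JTextField field1, JTextField field2, File file)
                throws IOException {
            BufferedReader br = new BufferedReader(new FileReader(file));
            field1.setText(br.readLine());
            field2.setText(br.readLine());
            br.close();
        }
    }
    </code>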

  • Encode data

    I want to encode US-ASCII data into UTF-8. What is the procedure for that?
    I have to use either:
    1. InputStreamReader and OutputStreamWriter
    2. Charset/CharsetEncoder/CharsetDecoder
    What is the difference between the two processes?
    Process 1:
    FileInputStream fis = new FileInputStream(inputFile);
    InputStreamReader isr = new InputStreamReader(fis,"UTF-8");
    FileOutputStream fos = new FileOutputStream(outputFile);
    OutputStreamWriter osw = new OutputStreamWriter(fos, "UTF-8");
    Process 2:
    Charset charset = Charset.forName(encode);
    CharsetDecoder decoder = charset.newDecoder();
    CharsetEncoder encoder = charset.newEncoder();

    InputStreamReader is for converting non-Unicode character bytes to Unicode. OutputStreamWriter is for converting Unicode characters to a non-Unicode byte stream. Is that correct?
    Yep, that's correct...
    Are you sure? Maybe it's just badly worded, but it doesn't sound correct to me. A CharsetDecoder converts a ByteBuffer to a CharBuffer, while an InputStreamReader converts a stream of bytes to a stream of characters. You couldn't replace one with the other--in fact, an InputStreamReader may use a CharsetDecoder under the hood, depending on how you construct it.
    I think it's a mistake to talk about converting "to (or from) Unicode", especially in this case. Sure, Java uses one of the Unicode-standard encodings internally, but which one it uses is an implementation detail that you don't need to know. All you need to know is which encoding was used to create the byte stream you're trying to read at the moment.
    @ssa_sbobba, if I understand you correctly, you want to read a file that you believe to be in the US-ASCII encoding, and write the text back out using the UTF-8 encoding. That's simple enough:
      FileInputStream fis = new FileInputStream(inputFile);
      InputStreamReader isr = new InputStreamReader(fis, "US-ASCII");
      FileOutputStream fos = new FileOutputStream(outputFile);
      OutputStreamWriter osw = new OutputStreamWriter(fos, "UTF-8");
    If that's all you're doing, though, it's pointless; as Jos pointed out, US-ASCII is a subset of UTF-8, so the new file will be identical to the old one.
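    For comparison, a sketch of "Process 2" using the NIO charset classes directly; how the decode/encode pair is wired up here is an illustrative assumption, not code from the thread:
    <code>
    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;

    public class TranscodeDemo {
        public static void main(String[] args) throws Exception {
            byte[] asciiBytes = "plain ASCII text".getBytes("US-ASCII");

            // Decode the source bytes into characters...
            CharBuffer chars = Charset.forName("US-ASCII")
                    .newDecoder()
                    .decode(ByteBuffer.wrap(asciiBytes));

            // ...then encode the characters into the target encoding
            ByteBuffer utf8 = Charset.forName("UTF-8")
                    .newEncoder()
                    .encode(chars);

            System.out.println("UTF-8 bytes produced: " + utf8.remaining());
        }
    }
    </code>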

  • Input a string and assign to a variable in the console window

    I'm absolutely and completely new to Java.
    I need to be able to input a string and then assign it to a variable, entirely using the console window; basically I'm looking for the Java alternative to C++'s "cin".
    I've tried the System.in method but nothing seems to work.
    I know this should be simple, but I can't find it anywhere.
    cheers

    i've tried the System.in. method
    That thing is not a method: it's an object, an instantiation of the InputStream class. If you read the API documentation for that class you'll notice that it isn't much more sophisticated than C's file IO. You can wrap an object of this class in a more high-level class (see InputStreamReader and BufferedReader).
    The latter allows you to read an entire line of characters. If you really want to go fancy you can wrap the first class in a Scanner, which is only available in Java 1.5.
    kind regards,
    Jos
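    A minimal sketch of both approaches Jos mentions: reading a line with BufferedReader, and the Scanner alternative:
    <code>
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.Scanner;

    public class ConsoleInput {
        public static void main(String[] args) throws IOException {
            // Option 1: wrap System.in in a Reader and buffer it
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            System.out.print("Enter a line: ");
            String line = br.readLine();
            System.out.println("You typed: " + line);

            // Option 2: Scanner (Java 1.5+)
            Scanner sc = new Scanner(System.in);
            System.out.print("Enter another line: ");
            String line2 = sc.nextLine();
            System.out.println("You typed: " + line2);
        }
    }
    </code>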

  • File I/O and encoding (J2SDK 1.4.2 on Windows)

    I encountered a strange behavior using the FileReader / Writer classes for serializing the contents of a java string. What I did was basically this:
    String string = "some text";
    FileWriter out = new FileWriter(new File("C:/foo.txt"));
    out.write(string);
    out.flush();
    out.close();
    In a different method, I read the contents of the file back:
    FileReader in = new FileReader(new File("C:/foo.txt"));
    StringWriter out = new StringWriter();
    char[] buf = new char[128];
    for (int len = in.read(buf); len > 0; len = in.read(buf)) {
        out.write(buf, 0, len);
    }
    out.flush(); out.close(); in.close();
    return out.toString();
    Problems arise as soon as the string contains non-ASCII characters. After writing and reading, the value of the string differs from the original. It seems that different character encodings are used when reading and writing, although the doc states that, if no explicit encoding is specified, the platform's default encoding (in my case CP1252) will be used.
    If I use streams directly instead of writers, it does not work, either, as long as I do not specify the encoding when converting bytes to strings and vice versa.
    When I specify the encoding (no matter which one, as long as I specify the same for reading as for writing), the resulting string is equal to the original one.
    If I replace the FileReader and Writer by StringReader and StringWriter (bypassing the serialization), it works, too (without specifying the encoding).
    Is this a bug in the file i/o classes or did I miss something?
    Thanks for your help
    Ralph

    First: if you are writing String objects via serialization, encoding doesn't matter whatsoever. Not sure you were saying you tried that, but just for future reference.
    For String.getBytes() and String(byte[]), or InputStreamReader and OutputStreamWriter: if you don't specify an encoding, the system default (or a default specified on the command line or set in some other way) will be used in all cases.
    For byte streams: if you are reading/writing bytes through streams, then the character conversion is up to you. You call getBytes on a string or create a string with the byte[] constructor.
    For readers/writers: if you are reading/writing characters through readers/writers, then the character conversion is done by that class.
    However, StringReader and StringWriter just read from/write to String objects, and they handle Unicode chars directly, so they're really a special case.
    Okay...
    So if you have a string with characters outside the range of the encoding being used (default or explicitly specified), then when it's written to the file, those characters are messed up. Say you have a Chinese character which needs 2 bytes: generally the 2 bytes are written, but when read back, that one character shows up as 2. Whether 2 bytes are written or 1 probably depends on the encoding, but the result is the same: you get a munged-up string.
    Generally speaking, you'll get better storage for most text by using UTF-8 as your encoding. You need to specify it always, for reads and writes, or set it as the default. The reason is that chars are written in as many bytes as needed, and it'll support anything Unicode supports, thus anything String supports.
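    A sketch of the explicit-encoding round trip this reply recommends; UTF-8 is assumed here, per the last paragraph:
    <code>
    import java.io.*;

    public class Utf8RoundTrip {
        public static void main(String[] args) throws IOException {
            String original = "caf\u00e9 \u4e2d\u6587"; // non-ASCII content

            // Write with an explicit encoding...
            Writer out = new OutputStreamWriter(
                    new FileOutputStream("foo.txt"), "UTF-8");
            out.write(original);
            out.close();

            // ...and read back with the same encoding
            Reader in = new InputStreamReader(
                    new FileInputStream("foo.txt"), "UTF-8");
            StringWriter sw = new StringWriter();
            char[] buf = new char[128];
            for (int len = in.read(buf); len > 0; len = in.read(buf)) {
                sw.write(buf, 0, len); // write only the chars actually read
            }
            in.close();

            System.out.println(original.equals(sw.toString())); // prints true
        }
    }
    </code>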

  • Problems about reading Japanese and writing Chinese

    Hi all experts,
    Now I have an encoding problem.
    My task is: read some Japanese in a text-based file, and then replace that Japanese with a Chinese string.
    That means the program must be able to understand both the Japanese encoding and the Chinese encoding. But the dilemma is that, in the OS I am using (Windows), you can only have one locale at a time. If you set your OS locale to Chinese, the program is able to write Chinese, but it cannot properly detect the Japanese. On the other hand, if you set your OS locale to Japanese, the program is able to detect Japanese, but it cannot properly write Chinese.
    So can any one help me get around this problem (for instance, there is some special API dealing with encodings and so on)?
    Thanks a lot!

    InputStreamReader and OutputStreamWriter constructors both accept parameters to specify the encoding of the data stream.
    The New IO (NIO) classes have these and additional capabilities, including converting from one encoding to another using code like this:
    Charset charset = Charset.forName("ISO-8859-1");
    CharsetDecoder decoder = charset.newDecoder();
    CharBuffer charBuffer = decoder.decode(buffer);
    Look at the java.nio.charset package.
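    A sketch of the first suggestion: read with an explicit Japanese charset and write with an explicit Chinese one, so neither depends on the OS locale. The file names, the Shift_JIS/GB2312 pairing, and the replacement strings are assumptions for illustration:
    <code>
    import java.io.*;

    public class LocaleIndependentIO {
        public static void main(String[] args) throws IOException {
            // Read the Japanese file with an explicit charset...
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream("japanese.txt"), "Shift_JIS"));

            // ...and write the Chinese output with another explicit charset
            PrintWriter out = new PrintWriter(new OutputStreamWriter(
                    new FileOutputStream("chinese.txt"), "GB2312"));

            String line;
            while ((line = in.readLine()) != null) {
                // The actual replacement logic is application-specific
                out.println(line.replace("\u65e5\u672c\u8a9e", "\u4e2d\u6587"));
            }
            in.close();
            out.close();
        }
    }
    </code>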

  • Java files and the Unicode!!!

    How do I read from a Unicode (UTF) file in a Java program?
    (Give me an example, please.)

    Use InputStreamReader and pass in the appropriate constructor arguments.
    Drake
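    A minimal sketch of what Drake is describing, assuming the file is UTF-8:
    <code>
    import java.io.BufferedReader;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class ReadUtf8File {
        public static void main(String[] args) throws IOException {
            // The encoding argument is the key: it tells the reader
            // how to turn the file's bytes into characters.
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream("unicode.txt"), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
        }
    }
    </code>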

  • FileInput and OutputStream problem

    Hi,
    I want to read a text file and copy its contents into another file using the UTF-8 encoding scheme. Below is my source code, but I don't know how to output the contents of the copied file. I used input2.read(), input2.readUTF(), etc., but I can't get character/string output; it prints the ASCII equivalent or other kinds of numbers, but not characters.
    By the way, the name of the files are sent through console arguments.
    Any helpful input is greatly appreciated. Thanks.
    /**
     * Introduction to Java Programming: Comprehensive, 6th Ed.
     * Exercise 18.4
     * @author Kaka Kaka
     */
    import java.io.File;
    import java.io.IOException;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    public class Ex18_04 {
        public static void main(String[] args) throws IOException {
            // Check the console arguments
            if (args.length != 2) {
                new Exception("Usage: java Ex18_04 sourceFile destinationFile!");
                System.exit(0);
            }
            File file1 = new File(args[0]);
            File file2 = new File(args[1]);
            // Check if the files exist
            if (!file1.exists() || !file2.exists()) {
                new IOException("Usage: java Ex18_04 sourceFile destinationFile!");
                System.exit(0);
            }
            // Declare an input and an output file stream
            DataInputStream input =
                    new DataInputStream(new FileInputStream(file1));
            DataOutputStream output =
                    new DataOutputStream(new FileOutputStream(file2));
            int value = 0;
            while ((value = input.read()) != -1) {
                output.writeUTF(String.valueOf(value));
            }
            // Close the files
            input.close();
            output.close();
            DataInputStream input2 =
                    new DataInputStream(new FileInputStream(file2));
            for (int i = 0; i <= input2.available(); i++) {
                System.out.println(input2.readUTF()); // I used everything here
            }
            input2.close();
        }
    }

    Do not use DataInputStream or DataOutputStream when working with text files. It's a common mistake, but those are specialized classes meant to be used with a particular kind of binary file. It's entirely possible that you will never run into a situation where you need to use those classes.
    In order to read/write text files in an encoding that you specify, you want to use InputStreamReader and OutputStreamWriter.
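    A sketch of the recommended approach for this exercise: read with the source encoding, write with UTF-8. The platform default is assumed for the source here; pass an explicit charset to the InputStreamReader if you know what the source really is:
    <code>
    import java.io.*;

    public class CopyAsUtf8 {
        public static void main(String[] args) throws IOException {
            if (args.length != 2) {
                System.err.println("Usage: java CopyAsUtf8 sourceFile destinationFile");
                System.exit(1);
            }
            // Decode the source with the platform default encoding
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream(args[0])));
            // Encode the copy as UTF-8
            Writer out = new OutputStreamWriter(
                    new FileOutputStream(args[1]), "UTF-8");
            char[] buf = new char[4096];
            for (int len = in.read(buf); len > 0; len = in.read(buf)) {
                out.write(buf, 0, len);
            }
            in.close();
            out.close();
        }
    }
    </code>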

  • How can the InputStreamReader do the translation!?

    import java.io.*;
    public class Copy {
        public static void main(String[] args) throws IOException {
            File inputFile = new File("a.txt");
            File outputFile = new File("b.txt");
            FileReader in = new FileReader(inputFile);
            FileWriter out = new FileWriter(outputFile);
            int c;
            while ((c = in.read()) != -1)
                out.write(c);
            in.close();
            out.close();
        }
    }
    Here a.txt is the source file and b.txt is the destination file. This little
    program copies a.txt to b.txt.
    My question is:
    FileReader is a subclass of InputStreamReader, and the read() method used here is implemented in InputStreamReader. According to the API, the class InputStreamReader reads bytes and translates them into characters according to a specified character encoding. But if the file a.txt is a file full of characters, how come the InputStreamReader can do the translation? I mean, everything would be fine if a.txt were a file full of bytes; but what if a.txt is a file full of characters?

    Readers and Writers are provided to make it easier to use text files. Suppose you want to write "This is a line of text" to a file. If you use a FileWriter to do it, the string will be written to the file using the character encoding specific to the OS your application is running on. So your application will work on different platforms and will always use the appropriate encoding. This also means that if you read a file that was created with a different encoding, it will be automatically converted to the current encoding; when saving that file back to disk, it might not be the same as the original.
    If you try to write the same string to a file using a FileOutputStream, it will not be converted to the correct encoding format and will become unreadable in the file (when you open that file with Notepad, for example). This is because Output/InputStreams are made for byte (binary file) access and Readers/Writers are made for char (text file) access.
    But files can always be considered as a sequence of bytes, no matter what they really contain. So if you use the Input/OutputStream classes to copy a file, the copy will be an exact copy of the original.
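    A sketch of the byte-stream copy the reply describes; because no character decoding happens, it copies any file exactly, text or binary:
    <code>
    import java.io.*;

    public class ByteExactCopy {
        // Copies any file byte for byte; no character conversion occurs,
        // so the result is always identical to the original.
        static void copyBytes(File src, File dst) throws IOException {
            InputStream in = new FileInputStream(src);
            OutputStream out = new FileOutputStream(dst);
            byte[] buf = new byte[4096];
            for (int len = in.read(buf); len > 0; len = in.read(buf)) {
                out.write(buf, 0, len);
            }
            in.close();
            out.close();
        }
    }
    </code>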

  • How do I save images downloaded from the Internet?

    I have the URL to an image stored on the Internet. Is it possible to open a connection and store the image on my computer? I'm not totally new to this; I've written a small program that could read text files from an FTP server, but it's my first try at downloading images. Here's the code I'm using:
    String wwwfile = ....;
    System.out.println(protocol + host + wwwfile);
    URL url = new URL(protocol + host + wwwfile);
    URLConnection urlc = url.openConnection();
    InputStreamReader reader = new InputStreamReader(urlc.getInputStream());
    File file = new File(path, fileName);
    OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream(file));
    int c;
    while ((c = reader.read()) != -1)
        writer.write(c);
    reader.close();
    writer.flush();
    writer.close();
    When I read text files, I used BufferedReader/PrintWriter instead of InputStreamReader/OutputStreamWriter. The picture I want to download seems to be downloaded and is stored on the computer, but upon viewing the picture, the browser displays the icon telling me the picture is absent.
    What is it that I'm doing wrong?
    regards
    simon

    Ah, this might be the problem. You're using an InputStreamReader and OutputStreamWriter to wrap the streams with reader/writers. This is bad. This will corrupt binary data.
    Try this:
            System.out.println(protocol + host + wwwfile);
            URL url = new URL(protocol + host + wwwfile);
            URLConnection urlc = url.openConnection();
            // Use InputStream/OutputStreams instead of Reader/Writers for binary data.
            // Otherwise it will get corrupted.
            InputStream in = urlc.getInputStream();
            File file = new File(path, fileName);
            OutputStream out = new BufferedOutputStream(new FileOutputStream(file));
            int c;
            while ((c = in.read()) != -1) {
                out.write(c);
            }
            in.close();
            // You don't have to flush the file if you're
            // closing it right after.
            out.close();
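    A follow-up on the design: reading one byte at a time works, but a buffered copy loop is the more common idiom. A self-contained sketch of the same idea:
    <code>
    import java.io.*;

    public class StreamCopy {
        // Buffered variant of the byte-copy loop above (sketch)
        static void copy(InputStream in, OutputStream out) throws IOException {
            byte[] buf = new byte[4096];
            int len;
            while ((len = in.read(buf)) != -1) {
                out.write(buf, 0, len); // write only the bytes actually read
            }
        }
    }
    </code>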

  • How to do it?  Alternate readLine(), read bytes?

    Problem: How to alternate between an InputStreamReader and a raw InputStream when reading data from an InputStream. The catch is that the InputStreamReader may read ahead a few bytes and so subsequent reading from the underlying InputStream will miss those buffered bytes.
    Background:
    I have an application that communicates with a Tomcat servlet with Http protocol.
    I want to send a request, and receive a response as a Java Object. The problem is receiving the response as a Java Object.
    Actually, I have no problem if I use a URLConnection and create an ObjectInputStream from the URLConnection InputStream. But the URLConnection is very slow to use for uploading large files.
    There is a code project called JUpload which uploads files very fast, at least 10 times as fast as URLConnection. It uses sockets and their input/output streams. So I'm adapting some of that code for my application.
    The problem is that the JUpload code is harder to use. It parses the HTTP headers in both directions. When receiving the response, it parses the HTTP headers using an InputStreamReader wrapped in a BufferedReader to read lines, and continues reading lines for any other responses destined to the application.
    However, I need to read a Java object after the header. Therefore, I need to get the underlying InputStream so I can create my ObjectInputStream.
    There is a conflict here. The BufferedReader may (and does) read ahead, so I can't get a "clean" InputStream.
    I have a workaround, but I'm hoping there is a better way. It is this: After reading the HTTP header lines, I read all the remaining characters from the BufferedReader and write them to a ByteArrayOutputStream wrapped in an OutputStreamWriter. Then I create a new ByteArrayInputStream by getting the bytes from the ByteArrayOutputStream. I then "wrap" my ObjectInputStream around the ByteArrayInputStream, and I can read my Java Object as a response.
    This technique is not only clumsy, but it requires buffering everything in memory after the header lines are processed (which happens to be OK for now, as my response Object is not large). It also gets really clumsy if the returned HTTP response is chunked, because the BufferedReader is needed alternately to read the chunk headers. Fortunately, I haven't encountered a chunked response.
    It feels like what I need is a special Reader object that allows you to alternately read characters or read bytes without "stuttering" (missing buffered bytes).
    I would think the authors of the URLConnection would have encountered this same problem.
    Any thoughts on this would be appreciated.
    -SB

    Yes, that is the problem as you noted. My solution is to continue reading all the characters from the BufferedReader and to convert them back to a byte stream using the ByteArrayOutputStream, OutputStreamWriter, etc.
    It works, just seems clumsy and not scalable.
    I was hoping there might exist a special "CharByteReader" (invented name) class somewhere that would be like a BufferedReader, but that would implement a read(byte[], offset, length) type of method that could access the read-ahead buffer in the same way the read(char[], offset, length) method does.
    URLConnection is just too slow, including chunked mode.
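    One common way around the read-ahead problem, not mentioned in the thread, is to skip the Reader entirely and parse the header lines byte by byte from the InputStream; HTTP headers are ASCII, so this is safe, and the stream position is exact when the body starts. A sketch with a hypothetical helper class:
    <code>
    import java.io.IOException;
    import java.io.InputStream;

    public class HeaderLineReader {
        // Reads one header line directly from the InputStream, so no
        // bytes beyond the line terminator are consumed or buffered.
        static String readLine(InputStream in) throws IOException {
            StringBuilder sb = new StringBuilder();
            int b;
            while ((b = in.read()) != -1) {
                if (b == '\n') {
                    break; // end of line
                }
                if (b != '\r') {
                    sb.append((char) b); // headers are ASCII, cast is safe
                }
            }
            return sb.toString();
        }
        // After the blank line that ends the headers, the same InputStream
        // can be handed straight to an ObjectInputStream with no bytes missing.
    }
    </code>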

  • Issue while parsing the chinese character from Mime Message

    Hi,
    I have an issue with Chinese characters while parsing a MIME message (MimeBodyPart). In the MimeMessage the charset is given as "gb2312". I am using MimeBodyPart.getContent() to get the content. When the MIME type is html, it is uploaded as a file to an FTP site (Apache Commons Net FTP client). When the uploaded file is viewed, the content is displayed as garbage text.
    I tried the following but it didn't work: I got the InputStream from the MimeBodyPart, then created an InputStreamReader and used the encoding "GB18030" while reading the content. I got the String out of it and stored it in a file; I just replaced "gb2312" with "UTF-8" in the html string. While creating the output file, I used the UTF-8 encoding. When I open this file in IE, it displays the characters without any issues. I examined the file and its encoding is UTF-8 as expected.
    But when I upload the file to the FTP site and view it, the text is not displayed correctly. It seems the file encoding is ANSI. I used Notepad++ to examine these files. Please note that we use the Apache Commons Net FTP client to upload the file.
    Below are my questions:
    Am I doing the right thing? It seems the MIME message was created using Outlook.
    How do I upload a file to FTP with the file encoding as "UTF-8" or something else?
    Below are a few references:
    below are few references
    http://www.anyang-window.com.cn/tag/java-gb2312/
    JavaMail: Chinese Simplified Character Problem

    Thank you for the replies. I am using binary mode and it works fine for most of the files. I found that the issue here is not the uploading but the content itself: the characters present in the MimeMessage are not as per the declared charset, hence I could not upload the content as-is. This happens only when the charset is GB2312 (Chinese). It seems that the MimeMessage contains characters which cannot be represented by GB2312 but can be represented by GB18030. Hence I converted the content from GB18030 to UTF-8 and created a file. I used setControlEncoding("utf-8") to upload the file and it is working fine.
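    A sketch of the fix described, assuming Apache Commons Net; the host, credentials, and file names are placeholders, not values from the thread:
    <code>
    import java.io.*;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    public class TranscodeAndUpload {
        public static void main(String[] args) throws IOException {
            // Re-decode the content as GB18030 and re-encode it as UTF-8
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream("body.html"), "GB18030"));
            Writer out = new OutputStreamWriter(
                    new FileOutputStream("body-utf8.html"), "UTF-8");
            char[] buf = new char[4096];
            for (int len = in.read(buf); len > 0; len = in.read(buf)) {
                out.write(buf, 0, len);
            }
            in.close();
            out.close();

            // Upload in binary mode so the bytes are not altered in transit
            FTPClient ftp = new FTPClient();
            ftp.setControlEncoding("utf-8"); // affects the control channel
            ftp.connect("ftp.example.com");
            ftp.login("user", "password");
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            InputStream upload = new FileInputStream("body-utf8.html");
            ftp.storeFile("body-utf8.html", upload);
            upload.close();
            ftp.logout();
            ftp.disconnect();
        }
    }
    </code>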

  • CONVERSION FROM ANSI ENCODED FILE TO UTF-8 ENCODED FILE

    Hi All,
    I have some issues with conversion of an ANSI-encoded file to a UTF-8-encoded file. Let me tell you in detail.
    I have installed language support for the Thai language on my operating system.
    Now, when I open Notepad, add Thai characters to a file, and save it with ANSI encoding, it saves perfectly and I am also able to see it on opening the file again.
    This file needs to be read by my application, stored in the database, and the Thai characters should be displayed on a JSP after fetching the data from the database. Currently it is showing junk characters on the JSP, the reason being that my database (a UTF-8-compliant database) has junk data; it has junk data because my application is not able to read it correctly from the file.
    If I save the file with UTF-8 encoding, it works fine. But my business requirement is that the file is system-generated and by default it is encoded in ANSI format, so I need to convert the encoding from ANSI to UTF-8. Can any of you guide me on how to do this conversion?
    Regards
    Gaurav Nigam

    Guessing the encoding of a text file by examining its contents is tricky at best, and should only be done as a last resort. If the file is auto-generated, I would first try reading it using the system default encoding. That's what you're doing whenever you read a file with a FileReader. If that doesn't work, try using an InputStreamReader and specifying a Thai encoding like TIS-620 or cp838 (I don't really know anything about Thai encodings; I just picked those out of a quick Google search). Once you've read the file correctly, you can write the text to a new file using an OutputStreamWriter and specifying UTF-8 as the encoding. It shouldn't really be necessary to transcode files like this, but without knowing a lot more about your situation, that's all I can suggest.
    As for native2ascii, it isn't for encoding conversions. All it does is replace each non-ASCII character with its six-character Unicode escape, so "voilá" becomes "voil\u00e1". In other words, it avoids the problem of character encodings by converting the file's contents to a form that can be stored as ASCII. It's mainly used for converting property or resource files to a form that can be read by the Properties and ResourceBundle classes.
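    A sketch of the transcoding step suggested above; TIS-620 is assumed for the source, per the reply's own caveat that the actual Thai encoding needs to be confirmed first:
    <code>
    import java.io.*;

    public class ThaiToUtf8 {
        public static void main(String[] args) throws IOException {
            // Read assuming the source is TIS-620 (verify this first!)
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    new FileInputStream("input-ansi.txt"), "TIS-620"));
            // Write the same text back out as UTF-8
            Writer out = new OutputStreamWriter(
                    new FileOutputStream("output-utf8.txt"), "UTF-8");
            char[] buf = new char[4096];
            for (int len = in.read(buf); len > 0; len = in.read(buf)) {
                out.write(buf, 0, len);
            }
            in.close();
            out.close();
        }
    }
    </code>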

  • CFMX 6.1's Virtual Memory Use problem!!

    I apologise for the long post in advance...
    Ok... so I have this script that, using cfdirectory, will
    check a directory for any files that may have been uploaded, if
    there are files, it loops through the results and reads the files
    one at a time, line by line, using the FileReader.cfc (Uses the
    Java FileInputStream, InputStreamReader, and BufferedReader to
    provide a way to incrementally read large files). The files are
    just pipe "|" delimited data, each line represents a record for a
    db table.
    Now as it's reading each line, it will perform some basic
    string parsing to clean up the file line to make sure the data is
    valid, blah blah blah and then it will write that "cleaned" line to
    another file using FileWriter.cfc (Java component once again). Once
    it's completely done reading the original file, it will close it
    and it will open the new "cleaned" version of the file, read it
    (FileReader.cfc), create an INSERT statement and then update the
    database table.
    Now... this all works GREAT... until it has to loop through
    more than a few files... 3 - 4 files are NO problem! works like a
    charm, but throw 6 - 8 files at it and it dies; not a timeout, mind you, but an actual "java.lang.OutOfMemoryError". Now, I've tried making all the files exactly the same (just changed the names), and the weird thing is, it takes longer and longer to process each one as it goes through the loop... I have the script write some stats as it's looping:
    FILE 1 STATS
    Name: COA0607_Intranet1.DAT
    Status: Import Successfull
    Line Count: 32,971
    Processing Time: 74,237ms
    FILE 2 STATS
    Name: COA0607_Intranet2.DAT
    Status: Import Successfull
    Line Count: 32,971
    Processing Time: 82,018ms
    FILE 3 STATS
    Name: COA0607_Intranet3.DAT
    Status: Import Successfull
    Line Count: 32,971
    Processing Time: 94,476ms
    FILE 4 STATS
    Name: COA0607_Intranet4.DAT
    Status: Import Successfull
    Line Count: 32,971
    Processing Time: 145,108ms
    I know what you guys are probably thinking: "Woah man... CF
    isn't really meant to do that kind of processing...", I know, trust
    me I know... however, I really neeeeeed it too lol.
    Ok, so as the script is running, I watch the Virtual Memory
    use of jrun.exe, processing say 3 - 4 of these files brings up the
    usage to approx 300,000k which yes, is a LOT but that's fine...
    this process is meant to run at night via a Scheduled Task...
    When I run more than 4 files, things start to get ugly, keep
    in mind that these are EXACTLY the same files just re-named
    differently. The script will start lagging BIG time and on the last
    file (usually the last file) I'll see the memory usage spike from
    350,000K all the way up to 600,000K and that's when it throws the
    "java.lang.OutOfMemoryError" and dies... I've tried commenting out
    the part of the script that updates the db, but still get the same
    problem...
    So... what gives? How come CF Server does this??? I mean, it
    runs fine for the first few files... and then WAM, it dies... sorry
    for the long post... any insight here is VERY much appreciated...
    it would be AWESOME if the wonderful folks at Adobe could shed some
    light on this for me : )
    CFMX 6.1 version: 6,1,0,83762
    Windows XP Pro SP2
    Intel P4 2.8Ghz
    1Gb of Ram

    quote:
    Originally posted by:
    Mr Black
    300M memory usage while using "incremental" file reader??
    Looks like it is "incremental" only in the sense that it increments
    memory usage. Did you try non-Java C/C++ file reader tags?
    Well I did try cffile originally... and it didn't even run...
    lol
