Problems with read-in of UTF-16LE file

hi all,
the following code works only on a small UTF-16LE file, but not on a file of, say, more than 100 KB... with such a file the first isr.read() causes the program to hang! Wrapping with a BufferedReader does not solve the problem...
InputStreamReader isr = new InputStreamReader( new FileInputStream("filename"), "UTF-16LE");
for( int i = 0; i < 50; i++ ){
    int ch = isr.read();
    System.out.println( "char " + i + ": " + ch );
}
isr.close();
FLUMMOXED... PLS HELP!!

Simple example
try {
    BufferedReader in = new BufferedReader(new FileReader("FileName.txt"));
    String str;
    while ((str = in.readLine()) != null) {
        // do whatever with str
    }
    in.close();
} catch (IOException e) {
    // Handle exception...
}
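
Note that the "Simple example" above relies on FileReader, which always uses the platform default encoding, so it will not decode a UTF-16LE file correctly. A minimal sketch for the original question, assuming the file really is UTF-16LE without a byte-order mark ("filename" is a placeholder):

import java.io.*;

public class ReadUtf16Le {
    public static void main(String[] args) throws IOException {
        // name the charset explicitly instead of relying on the platform default
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream("filename"), "UTF-16LE"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}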

Similar Messages

  • Manually adding BOM to UTF-16LE file?

    hi.
    I have a bash script that needs to perform something on a string from standard input, save it in a file, and convert the file to UTF-16LE with a BOM for further processing by another application.
    I use iconv to convert the text file to UTF-16LE, but iconv actually creates a little-endian file WITHOUT the BOM (converting to UTF-16 creates a big-endian file WITH a BOM).
    I see no way of creating LE with a BOM using iconv, so I thought maybe I could simply add the byte-order mark (FF FE) to the beginning of the Unicode file. How can I do that?
    many thanks in advance
    tench

    If you want to do everything from within bash script, then you can use something like
    {code}
    #!/bin/bash
    # xpg_echo makes echo honour the \x and \c escapes below; enable it just in case
    shopt -s xpg_echo
    cat > infile
    # assume the input is in UTF-8
    (echo '\xFF\xFE\c'
    iconv -f UTF-8 -t UTF-16LE infile) > outfile
    {code}
    Of course use of infile can be omitted if you don't need it.
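
    If the BOM has to be added from Java instead of the shell, a sketch of the same idea (this assumes the text to write is already available as a Java String; the class and file names are placeholders):

    import java.io.*;

    public class AddUtf16LeBom {
        public static void main(String[] args) throws IOException {
            FileOutputStream fos = new FileOutputStream("outfile");
            // write the little-endian byte-order mark first: FF FE
            fos.write(0xFF);
            fos.write(0xFE);
            // then write the text itself encoded as UTF-16LE
            Writer w = new OutputStreamWriter(fos, "UTF-16LE");
            w.write("text converted from UTF-8");
            w.close(); // flushes and closes the underlying stream as well
        }
    }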

  • Assistance with reading the contents of a file, taking some of these contents to make a new file

    Hello,
    I have multiple .txt files in a folder, each containing Move-ClusterGroup instructions like the following for each VM in our cluster:
    Move-ClusterGroup "SCVMM MYVM1 Resources" -Node HYPERVA01
    Move-ClusterGroup "SCVMM MYVM2 Resources" -Node HYPERVA01
    Move-ClusterGroup "SCVMM MYVM3 Resources" -Node HYPERVA01
    I want to read through each .txt file in that folder location and create the same number of .txt files to start each VM.  The resulting files would have this data in them:
    Start-ClusterResource "SCVMM MYVM1 Resources"
    Start-ClusterResource "SCVMM MYVM2 Resources"
    Start-ClusterResource "SCVMM MYVM3 Resources"
    Any tips on where I can start with this task?
    Thank you.

    This will read all the .txt files in the specified folder and create new .txt files in the same folder with the desired content, appending _new to each name:
    $folder="c:\myFolder"
    $files=Get-ChildItem $folder -Filter "*.txt"
    foreach ($file in $files){
        $outFileName=[IO.Path]::GetFileNameWithoutExtension($file.Name) + "_new.txt"
        Get-Content $file.FullName | foreach{
            Add-Content $folder\$outFileName "Start-ClusterResource ""$($_.Split('"')[1])"""
        }
    }

  • Having trouble with reading hex from an input file - please help

    Hi, I have a txt file with rows of hex, and I need to read each line and add it to an int array. So far I have:
    BufferedReader fileIn = new BufferedReader(new FileReader("memory.txt"));
                    int count = 0 ;
                    String temp = fileIn.readLine();
                    int file_in = Integer.parseInt(temp) ;
                    while(temp!=null) {
                         data[count] = file_in;
                         temp = fileIn.readLine();
                         file_in = Integer.parseInt(temp,16);// Integer.parseInt() ;
                         count++ ; /* increment counter */
                }
    memory.txt:
    4004000
    4008000
    3FDF4018
    4108200
    3C104001
    FFFFFFE8
    4010C6C0
    FFFFFFE8
    94000000
    The above code crashes on the third input (my guess is because there are letters in it and parseInt can't handle letters).
    I think I need to parse it into an array of chars instead, however I don't know how to get from the string (temp) to the char array.
    can anyone help?

    OK, turns out it's just a NullPointerException on the data[count] line, because I've only initialised the first two slots of data. I didn't see it before because it was just throwing an error and I never printed it out.
    here's how I've defined data:
    in the class
    int[] data;
    just before the code to input the file data:
    for ( int i = 0; i < length; i++ )       
        data[i] = 0;
    thinking this should go through the whole array and initialise it, but it gives a NullPointerException on the data[i] = 0; line.
    any ideas?
    Edited by: rudeboymcc on Feb 6, 2008 10:50 PM
    Edited by: rudeboymcc on Feb 6, 2008 10:51 PM
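
    For the record, the NullPointerException comes from declaring int[] data without ever allocating it with new. A minimal sketch of the whole read loop under that assumption (the class name is hypothetical; memory.txt is the file from the post):

    import java.io.*;
    import java.util.ArrayList;
    import java.util.List;

    public class HexFileReader {
        public static void main(String[] args) throws IOException {
            BufferedReader fileIn = new BufferedReader(new FileReader("memory.txt"));
            List<Integer> values = new ArrayList<Integer>();
            String temp;
            while ((temp = fileIn.readLine()) != null) {
                // parse every line as hex; Long.parseLong avoids overflow on values
                // like FFFFFFE8 that do not fit in a signed int
                values.add((int) Long.parseLong(temp.trim(), 16));
            }
            fileIn.close();
            int[] data = new int[values.size()]; // allocate the array: the missing step
            for (int i = 0; i < data.length; i++) {
                data[i] = values.get(i);
            }
        }
    }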

  • Issue with Reading and Writing to a File

    Hello all,
    I'm having trouble when I run the following example;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.BufferedReader;
    import java.io.PrintWriter;
    import java.io.IOException;
    public class ReadWriteTextFile {
        private static void doReadWriteTextFile() {
            try {
                // input/output file names
                String inputFileName  = "README_InputFile.txt";
                String outputFileName = "ReadWriteTextFile.out";
                // Create FileReader Object
                FileReader inputFileReader   = new FileReader(inputFileName);
                FileWriter outputFileReader  = new FileWriter(outputFileName);
                // Create Buffered/PrintWriter Objects
                BufferedReader inputStream   = new BufferedReader(inputFileReader);
                PrintWriter    outputStream  = new PrintWriter(outputFileReader);
                outputStream.println("+---------- Testing output to a file ----------+");
                outputStream.println();
                String inLine = null;
                while ((inLine = inputStream.readLine()) != null) {
                    outputStream.println(inLine);
                }
                outputStream.println();
                outputStream.println("+---------- Testing output to a file ----------+");
                outputStream.close();
                inputStream.close();
            } catch (IOException e) {
                System.out.println("IOException:");
                e.printStackTrace();
            }
        }

        public static void main(String[] args) {
            doReadWriteTextFile();
        }
    }
    I'm getting the error:
    java.io.FileNotFoundException: README_InputFile.txt (The system cannot find the file specified)
    However, the file README_InputFile.txt is definitely in the same folder as the class file. So why is this not working?
    Any help would be greatly appreciated.
    Jaz

    Sorry you've lost me. All I get are error messages when I try to compile that statement. What am I missing?
    I don't know, it should work:
    import java.io.*;
    public class Test {
        public static void main(String[] args) throws IOException {
            System.out.println(new File(".").getCanonicalPath());
        }
    }
    Sorry I forgot to add the "throws IOException" bit. It works and told me that the path is;
    C:\Documents and Settings\Jaz\workspace\Tutorial
    I've amended the code so it now looks like this;
    String inputFileName  = "C:/Documents and Settings/Jaz/workspace/TutorialREADME_InputFile.out";
    String outputFileName = "C:/Documents and Settings/Jaz/workspace/TutorialReadWriteTextFile.out";
    but I still get the error below even though the files are present in that directory:
    IOException:
    java.io.FileNotFoundException: C:\Documents and Settings\Jaz\workspace\TutorialREADME_InputFile.out (The system cannot find the file specified)
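
    The exception message shows the actual problem: the folder and the file name were concatenated without a path separator (TutorialREADME_InputFile.out). A small sketch of one way to avoid that kind of slip, using the two-argument File constructor (the class name is hypothetical; the paths are the ones from the thread):

    import java.io.*;

    public class PathDemo {
        public static void main(String[] args) throws IOException {
            // File(parent, child) inserts the separator for you
            File dir = new File("C:/Documents and Settings/Jaz/workspace/Tutorial");
            File input = new File(dir, "README_InputFile.txt");
            System.out.println(input.getCanonicalPath());
            System.out.println("exists: " + input.exists());
        }
    }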

  • Read in a file in UTF-16LE

    hi all,
    the following code works only on a small UTF-16LE file, but not on a file of, say, more than 100 KB... with such a file the FIRST isr.read() causes the program to hang! Wrapping with a BufferedReader does not solve the problem...
    InputStreamReader isr = new InputStreamReader( new FileInputStream("filename"), "UTF-16LE");
    for( int i = 0; i < 50; i++ ){
        int ch = isr.read();
        System.out.println( "char " + i + ": " + ch );
    }
    isr.close();
    FLUMMOXED... PLS HELP!!

    the file is a normal file with normal line lengths...
    it can be read in to something like TextPad and viewed...
    so somehow this app is able to decode it OK...
    question is what exactly is happening between BufferedReader, InputStreamReader and FileInputStream... and how to buffer the whole process so that manageable chunks can be decoded...
    I have used BufferedReader the way you have, but on (UTF-8) log files with lengths in excess of 100 MBytes, without any problem like this. I have to believe that BufferedReader is not able to detect the end of line.
    I would create a wrapped Reader that prints (in hex?) the characters it is reading before passing them to the BufferedReader.
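
    A sketch of the kind of wrapped Reader suggested above, assuming the goal is just to see each decoded character in hex as the BufferedReader consumes it (the class names and "filename" are placeholders):

    import java.io.*;

    // logs every character that passes through it, for debugging the decoding step
    class TracingReader extends FilterReader {
        TracingReader(Reader in) {
            super(in);
        }

        @Override
        public int read() throws IOException {
            int ch = super.read();
            System.out.println("char: " + (ch == -1 ? "EOF" : Integer.toHexString(ch)));
            return ch;
        }

        @Override
        public int read(char[] cbuf, int off, int len) throws IOException {
            int n = super.read(cbuf, off, len);
            for (int i = 0; i < n; i++) {
                System.out.println("char: " + Integer.toHexString(cbuf[off + i]));
            }
            return n;
        }
    }

    public class TraceUtf16Le {
        public static void main(String[] args) throws IOException {
            BufferedReader in = new BufferedReader(new TracingReader(
                    new InputStreamReader(new FileInputStream("filename"), "UTF-16LE")));
            String line;
            while ((line = in.readLine()) != null) {
                // lines arrive here once the decoder delivers the line terminators
            }
            in.close();
        }
    }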

  • How to read / convert UTF-16 file

    Does anyone have a piece of code to read a Unicode UTF-16 file and convert it (either to UTF-8 or to a non-Unicode code page), possibly using CL_ABAP_CONV_IN_CE?
    Thankx
    Norbert

    outdated now - and never answered as you can see....

  • Indesign CS3-JS - Problem in reading text from a text file

    Can anyone help me...
    I have a problem with reading text from a txt file. With the "readln" method I can read only the first line of the text; is there any method to read the subsequent lines from the text file?
    Currently I am using Indesign CS3 with Java Script (for PC).
    My JavaScript is as follows:
    var myNewLinksFile = myFindFile("/Links/NewLinks.txt");
    var myNewLinks = File(myNewLinksFile);
    var a = myNewLinks.open("r", undefined, undefined);
    myLine = myNewLinks.readln();
    alert(myLine);
    function myFindFile(myFilePath){
        var myScriptFile = myGetScriptPath();
        var myScriptFile = File(myScriptFile);
        var myScriptFolder = myScriptFile.path;
        myFilePath = myScriptFolder + myFilePath;
        if(File(myFilePath).exists == false){
            //Display a dialog.
            myFilePath = File.openDialog("Choose the file containing your find/change list");
        }
        return myFilePath;
    }
    function myGetScriptPath(){
        try{
            myFile = app.activeScript;
        }
        catch(myError){
            myFile = myError.fileName;
        }
        return myFile;
    }
    Thanks,
    Bharath Raja G

    Hi Bharath Raja G,
    If you want to use readln, you'll have to iterate. I don't see a for loop in your example, so you're not iterating. To see how it works, take a closer look at FindChangeByList.jsx--you'll see that that script iterates to read the text file line by line (until it reaches the end of the file).
    Thanks,
    Ole

  • Reading UCS-2LE or UTF-16LE from file

    Hello,
    I am writing a program to compile some simple statistics from an exported iTunes library .txt file. I have determined that when iTunes exports the file, it exports it in either the UCS-2LE or UTF-16LE encoding. I have viewed these files in a hex editor and the first 4 digits are ff fe.
    I would like to read text from this file into my program, but I can't because it is in a weird encoding. Right now, I have to "Save as" the file to a different encoding (usually I use ASCII), to be able to read it.
    This is the code I am currently using to read the input.
    FileReader freader = new FileReader(file_name);
    BufferedReader input_file = new BufferedReader(freader);
    I am fairly new to Java so any help is appreciated, and please try to be as specific as possible.
    Thanks.
    Message was edited by:
    imdandman

    Hello,
    I am writing a program to compile some simple
    statistics from an exported iTunes library .txt file.
    I have determined that when iTunes exports the file,
    it exports it in either the UCS-2LE or UTF-16LE
    encoding. I have viewed these files in a hex editor
    and the first 4 digits are ff fe.
    iTunes also exports an XML file at the library root, in case that would be a more attractive option. I think there's a parsing library here, but it may have been retired:
    http://www.macdevcenter.com/pub/a/mac/2003/09/03/mytunes.html
    There's also an iTunesFileChooser that parses the XML file quite well and returns arrays of Song and Playlist objects. I've used it with iTunes 7, and it works for me.
    http://www.robbiehanson.com/iTunesJava.html
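
    If reading the exported .txt directly is still preferable, the ff fe prefix seen in the hex editor is a little-endian byte-order mark, so the file can be decoded by naming the charset instead of using FileReader's platform default. A minimal sketch (the file name is a placeholder); Java's "UTF-16" charset consumes the BOM itself, whereas "UTF-16LE" would leave it as a leading \uFEFF character:

    import java.io.*;

    public class ReadItunesExport {
        public static void main(String[] args) throws IOException {
            // "UTF-16" detects the byte order from the BOM (ff fe means little-endian)
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("exported_library.txt"), "UTF-16"));
            String line;
            while ((line = in.readLine()) != null) {
                // compile the statistics from each tab-separated line here
            }
            in.close();
        }
    }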

  • Message Mapping Problem with UTF-16LE Encoded XML

    Hello,
    we have the following scenario:
    IDoc > BPM > HTTP Sync Call > BPM > IDoc
    The response message of the HTTP call is an XML file whose XML declaration specifies UTF-16LE. This response should then be mapped to a SYSTAT IDoc. However, the message mapping fails with "...XML Parser: No data allowed here ...".
    So obviously the XML is not considered as well-formed.
    When taking a look at SXMB_MONI the following message appears: "Switch from current encoding to specific encoding not supported.....".
    The strange thing, however, is that if I save the response as an XML file and use that same XML file in the test tab, the message mapping executes successfully.
    I also tried to use a Java Mapping to switch encodings before executing message mapping, but the error remains.
    Could the problem be that the code page UTF-16LE is not installed on the PI system? Any ideas on that?
    Thank you!
    Edited by: Florian Guppenberger on Feb 2, 2010 2:29 PM
    Edited by: Florian Guppenberger on Feb 2, 2010 2:29 PM

    Hi,
    thank you for your answer.
    This is what I have tried to achieve. I apply the Java conversion mapping when receiving the response message - I tried to convert the response to UTF-16 and to UTF-8, but neither helped to solve the problem.
    I guess that using adapter modules is not an option either, as it would modify the request message but not the response, right?
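
    For reference, the byte-level work such a Java mapping has to do is small in itself; a sketch assuming the payload arrives as raw UTF-16LE bytes and the target parser expects UTF-8 with a matching XML declaration (the class and method names are hypothetical, not part of the PI mapping API):

    import java.io.UnsupportedEncodingException;

    public class EncodingSwitch {
        // re-encode a UTF-16LE payload as UTF-8 and adjust the declared encoding
        static byte[] utf16leToUtf8(byte[] payload) throws UnsupportedEncodingException {
            String xml = new String(payload, "UTF-16LE");
            if (xml.startsWith("\uFEFF")) {
                xml = xml.substring(1); // drop a leading BOM if present
            }
            xml = xml.replaceFirst("encoding=\"UTF-16LE\"", "encoding=\"UTF-8\"");
            return xml.getBytes("UTF-8");
        }
    }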

  • Encoding Problem - can't read UTF-8 file correctly

    Windows XP, JDK 7, same with JDK 6
    I can't read a UTF-8 file correctly:
    Content of File (utf-8, thai string):
    เม็ดเลือดขาว
    When opened in Editor and copy pasted to JTextField, characters are displayed correctly:
    String text = jtf.getText();
    text.getBytes("utf-8");
    -32 -71 -128 -32 -72 -95 -32 -71 -121 -32 -72 -108 -32 -71 -128 -32 -72 -91 -32 -72 -73 -32 -72 -83 -32 -72 -108 -32 -72 -126 -32 -72 -78 -32 -72 -89
    Read file with FileReader/BufferedReader:
    line = br.readLine();
    buffs = line.getBytes("utf-8"); //get bytes with UTF-8 encoding
    -61 -65 -61 -66 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes(); // get bytes with default encoding
    -1 -2 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    Read file with:
    FileInputStream fis...
    InputStreamReader isr = new InputStreamReader(fis,"utf-8");
    BufferedReader brx = new BufferedReader(isr);
    line = br.readLine();
    buffs = line.getBytes("utf-8");
    -17 -65 -67 -17 -65 -67 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes();
    63 63 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    Anybody has an idea? The file seems to be UTF-8 encoded. What could be wrong here?

    akeiser wrote:
    Windows XP, JDK 7, same with JDK 6
    I can't read a UTF-8 file correctly:
    Content of File (utf-8, thai string):
    เม็ดเลือดขาว
    When opened in Editor and copy pasted to JTextField, characters are displayed correctly:
    String text = jtf.getText();
    text.getBytes("utf-8");
    -32 -71 -128 -32 -72 -95 -32 -71 -121 -32 -72 -108 -32 -71 -128 -32 -72 -91 -32 -72 -73 -32 -72 -83 -32 -72 -108 -32 -72 -126 -32 -72 -78 -32 -72 -89
    These values are the bytes of your original string "เม็ดเลือดขาว" UTF-8 encoded with no BOM (Byte Order Mark) prefix.
    >
    Read file with FileReader/BufferedReader:
    line = br.readLine();
    buffs = line.getBytes("utf-8"); //get bytes with UTF-8 encoding
    -61 -65 -61 -66 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes(); // get bytes with default encoding
    -1 -2 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    Read file with:
    FileInputStream fis...
    InputStreamReader isr = new InputStreamReader(fis,"utf-8");
    BufferedReader brx = new BufferedReader(isr);
    line = br.readLine();
    buffs = line.getBytes("utf-8");
    -17 -65 -67 -17 -65 -67 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    buffs = line.getBytes();
    63 63 32 0 64 14 33 14 71 14 20 14 64 14 37 14 55 14 45 14 20 14 2 14 50 14 39 14
    These values are the bytes of your original string UTF-16LE encoded with a UTF-16LE BOM prefix.
    This means that there is nothing wrong (the String has been read correctly) with the code and that your default encoding is UTF-16LE .
    Edited by: sabre150 on Aug 1, 2008 5:48 PM
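
    One way to check what a file actually contains, independent of any Reader, is to dump its first few bytes: FF FE at the start indicates UTF-16LE, FE FF indicates UTF-16BE, and EF BB BF indicates UTF-8 with a BOM. A short sketch (the class and file names are placeholders):

    import java.io.*;

    public class SniffEncoding {
        public static void main(String[] args) throws IOException {
            FileInputStream fis = new FileInputStream("input.txt");
            byte[] head = new byte[4];
            int n = fis.read(head);
            fis.close();
            for (int i = 0; i < n; i++) {
                // print each byte as unsigned hex, e.g. "ff fe 40 0e"
                System.out.printf("%02x ", head[i] & 0xff);
            }
            System.out.println();
        }
    }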

  • Need help to read and write using UTF-16LE

    Hello,
    I am in need of yr help.
    In my application I am using UTF-16LE to export and import the data when I do it immediately.
    Sometimes I also need to do the import on a schedule, i.e. the export and import happen at a specified time.
    But in my application, when I do a scheduled import, the URL class is used to build the URL for that file and copy the data to a temp file, to run the event later.
    The file being imported is in UTF-16LE format, and I need to write the code for that encoding.
    The problem is that for a scheduled import I need to copy the data of the file into a temp location before doing the import.
    When copying the data from the file to the temp location I cannot apply the UTF-16LE encoding through the URL. And if I get the path from the URL and create the reader and writer directly, it throws a FileNotFoundException.
    Here is the existing code:
    protected void copyFile(String rootURL, String fileName) {
        URL url = null;
        try {
            url = new URL(rootURL);
        } catch(java.net.MalformedURLException ex) {
        }
        if(url != null) {
            BufferedWriter out = null;
            BufferedReader in = null;
            try {
                out = new BufferedWriter(new FileWriter(fileName));
                in = new BufferedReader(new InputStreamReader(url.openStream()));
                String line;
                do {
                    line = in.readLine();
                    if(line != null) {
                        out.write(line, 0, line.length());
                        out.newLine();
                    }
                } while(line != null);
                in.close();
                out.close();
            } catch(Exception ex) {
            }
        }
    }
    Here, String rootURL is the real file name from which I have to get the data, and that file is in UTF-16LE format. String fileName is the temp file name, and it is a logical one.
    I think I have described the problem.
    Please, can anyone help me?
    Thanks in advance.

    Hello,
    thanks for your reply...
    I did it as per your words using a StreamWriter, but the problem is I need a temp file name to create a writer to write into.
    But it is a logical name and not a real file, so if I create the StreamWriter on it, it throws a FileNotFoundException.
    The only problem is that the existing code is built using URL, and I cannot change all the lines; it is very difficult because there is a vast amount of data.
    Is there any other way to solve this issue?
    Once again, thanks.
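
    A minimal sketch of the copy step with the encoding preserved, assuming the source really is UTF-16LE and the temp file should stay UTF-16LE (the URL and file names are placeholders). Copying the raw bytes with InputStream/OutputStream instead of readers would work just as well and sidesteps the encoding question entirely:

    import java.io.*;
    import java.net.URL;

    public class CopyUtf16Le {
        public static void main(String[] args) throws IOException {
            URL url = new URL("file:///tmp/source.txt");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream(), "UTF-16LE"));
            BufferedWriter out = new BufferedWriter(
                    new OutputStreamWriter(new FileOutputStream("temp.txt"), "UTF-16LE"));
            String line;
            while ((line = in.readLine()) != null) {
                out.write(line);
                out.newLine();
            }
            in.close();
            out.close();
        }
    }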

  • Why are newer versions of Firefox having problems with UTF-16le (Windows default unicode set)

    I have a website that has multiple languages on it, and so I've been using UTF-16le for it. Everything was working well on multiple browsers until the last few months, when only Firefox stopped displaying it properly. I can force the page into UTF-16le, but then some of my graphical links no longer work and I cannot navigate through the pages unless I force every single page to UTF-16le EVERY SINGLE TIME. This problem is not unique to my computer, either, as this has happened with every computer I have tried in the last few months.

    As answered a few weeks back (/questions/770955): the server sends the pages as UTF-8, and that is what Firefox uses to display the pages. You need to reconfigure the server and make it send the pages with the correct content type (UTF-16), or with no content type at all if you want Firefox to use the content type (BOM) in the file.
    A good place to ask questions and get advice about web development is the mozillaZine Web Development/Standards Evangelism forum.
    The helpers at that forum are more knowledgeable about web development issues.
    You need to register at the mozillaZine forum site in order to post at that forum.
    See http://forums.mozillazine.org/viewforum.php?f=25

  • Can I obtain a CD-ROM with the latest revision of Adobe Reader for a Windows XP system w/ Service Pack 3. I do not want to go online with this system. I have dedicated it to read all of my PDF Files only.

    I have 4 computer systems, 2 of which run under Windows XP w/ Service Pack 3. I have dedicated these systems to the task of reading all of my PDF Files which I have collected from my recent college career. The desktop system I want to use is an old Dell Optiplex GX240 with Acrobat Reader 4.0. The other Windows XP system I have is an old HP Laptop with Adobe Reader 8.1.4 installed. I want to update both systems to the latest version that is available for Windows XP w/Service Pack 3 installed. So, because I do not want to place these system online, would it be possible for me to obtain a copy of the Adobe Reader software I need on a CD-ROM? - Ken DeWitt, a 68-Year-Young Vietnam Veteran and recent college graduate...Summa Cum Laude.

    You can use an online computer to download the full offline Reader installer from
    http://get.adobe.com/reader/enterprise/

  • I am trying to get help with WMPlayer tech support Adobe Reader cant open wmpsupport.htm files why?

    I am trying to get help with WMPlayer tech support; Adobe Reader can't open wmpsupport.htm files. Why?

    Adobe Reader opens PDF files, nothing else.
