Barcode 128B and wrong character encoding?

Hello.
I'm using Barcode Type Code 128B in an Adobe Form. The barcode is linked to the data field MATNR.
If MATNR is numeric, the barcode information is set correctly. But if MATNR is e.g. 012236-602-26,
then our barcode reader recognizes the "-" characters as the character "ß".
In the properties of the barcode I can't adjust the codepage or font type.
Does anyone have experience with this issue?
Best regards,
Sebastian
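
One way to narrow this down: Code 128 subset B covers the printable ASCII range 32-127, so "-" (ASCII 45) is perfectly encodable, and an "ß" appearing at the reader end usually points at a keyboard-wedge scanner whose configured layout doesn't match the PC's (on a German QWERTZ layout, "ß" sits on the key that produces "-" on a US layout). Below is a minimal sketch of the data-side check in plain Java; the helper is hypothetical, not an SAP or Adobe API:

    public class Code128BCheck {
        // Code 128 subset B encodes exactly the printable ASCII range 32-127.
        static boolean isEncodableInCode128B(String value) {
            for (int i = 0; i < value.length(); i++) {
                char c = value.charAt(i);
                if (c < 32 || c > 127) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            // '-' is ASCII 45, so this prints true: the data itself is fine.
            System.out.println(isEncodableInCode128B("012236-602-26"));
        }
    }

If the check passes, the fix is usually on the scanner side (set its keyboard emulation to match the PC's layout), not in the form.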

Similar Messages

  • NetBeans problem: Issue with servlets and Chinese character encoding

    Java Version: JDK1.5.0_01, JRE1.5.0_01 (International version)
    Netbeans Version: Netbeans IDE 4.0
    OS: Windows XP Personal Edition
    Dear Sirs,
    First of all, thanks for reading this post. I am having the following issue: I am creating an application using html pages and servlets, using both Chinese and English on them (html encoding UTF-8).
    I created a project in Netbeans and added an index.html screen reporting to a servlet. Both index.html and the servlet-generated html page contain the line:
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    Additionally, I set up the character code settings in Netbeans
    (Tools - Options - Java Sources - Expert - Default Encoding = UTF-8).
    When I run the project, index.html displays itself perfectly, with the Chinese characters shown properly. The problem comes when the servlet-generated html is displayed: instead of the Chinese characters, some strange characters are displayed (�� instead of Chinese).
    I have tried different encodings from http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html without any luck. I also set the encoding of the file itself (using right click - Properties in the project menu of Netbeans).
    Also, when I am editing the servlet, the characters are displayed properly. I can type them directly without any issue, but then the display is wrong at runtime.
    Also, just in case this has something to do with the problem: my PC was bought in the US, so the default character set is not Chinese; I had to install the Chinese input support later on. But like I said earlier, the html page is displayed properly, so I really think it is some problem with Netbeans.
    After a week trying to find a solution, I decided to post it here in the hopes that someone will show me the way of the light.
    Thanks in advance for any ideas or help provided
    Aral.

    Ok, I found out some problems with Netbeans as well.
        public void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException, ServletException {
            response.setCharacterEncoding("UTF-8");
            request.setCharacterEncoding("UTF-8");
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            // UTF-8 bytes of a short Chinese string, decoded explicitly below
            byte[] st = {-25,-75,-124,-27,-100,-106,-17,-68,-102,-27,-80,-113,-27,-72,-125,-26,-118,-75,-26,-105,-91,-27,-82,-93};
            out.println("this works: ");
            out.println(new String(st, "UTF-8"));
            out.println("<br>");
            out.println("this doesn't: ");
            out.println("some chinese copied from the Internet<br>");
        }
    Right-click the .java file and choose Properties -> Encoding = UTF-8.
    Then I make a copy of the .java file, rename it to .html and open it with IE; sure enough the Chinese is already unreadable (note it's still readable in the IDE).
    When I compile the file with F9 I get the following warning:
    whatever.java:101: warning: unmappable character for encoding Cp1252
    I tried setting the encoding to UNICODE, but then the file doesn't compile.
    I guess you have to download the Japanese version for it to work correctly.
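    For what it's worth, that Cp1252 warning means javac is reading the source with the platform default encoding. Assuming the file really is saved as UTF-8, telling the compiler so explicitly should make it compile cleanly (a sketch only; substitute your own file name):
        javac -encoding UTF-8 whatever.java
    This is the command-line equivalent of the NetBeans default-encoding option mentioned above.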

  • Wrong character encoding from flash to mysql

    Hi, I'm experiencing problems with character encoding not
    functioning correctly when sending from Flash to MySQL. What I am
    doing is a contact form in Flash which sends the values
    to a PHP file, which takes the values and inserts them into a table.
    As I'm using Icelandic characters I need the character encoding to be
    either latin1 or utf8 in MySQL, or at least I think so. But it
    seems that Flash or the PHP document isn't sending in the same
    format as I have selected in MySQL, because all special Icelandic
    characters come out scrambled in the MySQL table. Firefox tells me,
    though, that the html document containing the Flash movie is using
    utf-8.

    I don't know anything about Icelandic characters, but Flash
    generally really likes UTF-8, so it should be sending that if that
    is what it is starting with.
    You aren't using any kind of useCodePage? That will mess it
    up.
    Are you sure that the input method is Icelandic?
    In the testing environment can you list variables (from the
    debug menu) and see if they look proper? If they do, then Flash is
    reading them correctly and the problem must be coming in further
    downstream.
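    A common culprit in this setup is the MySQL connection character set rather than Flash itself. A sketch of the usual first check, assuming the PHP script can issue a statement right after connecting, is to force the connection to the table's charset:
        SET NAMES 'utf8';
    If the scrambling stops for new inserts, the older rows were stored through a mismatched connection charset.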

  • Wrong character encoding in error messages

    The Java compiler can be adjusted to source file encoding with the option javac -encoding ...
    The Java runtime can be adjusted to terminal encoding with java -Dfile.encoding=...
    While this appears somewhat inconsistent, it works and can be used e.g. when running the tools from Cygwin (the POSIX layer on Windows), which uses UTF-8 by default, while Java, following the Windows mechanism, uses some other character encoding by default (this works more seamlessly on Unix/Linux, by the way).
    Now if I compile UTF-8 source with non-ASCII characters, and there is an error message related to them, the error message printed to the console will not be UTF-8 encoded, resulting in mangled text output.
    (Arguably, source and terminal encoding could be different, but then there is no option available to the compiler to adjust this;
    it does not accept -Dfile.encoding=....)
    Example: Error message looks like this:
    FM.java:1: error: class, interface, or enum expected
    b▒h
    While the string is actually "bäh" in the source.
    This is a bug. Any proper place to actually report a bug?

    Leaving aside that you are just blatantly assuming it is a bug because you say so: did you not think to type "java report bug" into Google?
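    For what it's worth, javac does not accept -Dfile.encoding directly, but it does forward JVM options through its -J prefix, so a workaround worth trying (a sketch, not verified against this setup) is:
        javac -J-Dfile.encoding=UTF-8 -encoding UTF-8 FM.java
    That runs the compiler's own JVM with a UTF-8 default encoding, which is what its message output uses.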

  • Internationalisation ServletFilter and UTF8 Character-Encoding

    Hello to all
    I use an uncommon but nice way to internationalize my web application:
    a ServletFilter that replaces the text keys. That way static resources stay static resources that can be cached and don't need to be translated each time they are requested.
    But there is a little problem getting it to work with utf-8.
    In my opinion there is only one way to read the response content: I have to use my own HttpServletResponseWrapper, as recommended under [http://java.sun.com/j2ee/tutorial/1_3-fcs/doc/Servlets8.html#82361].
    If I do so, it is no longer possible to use ServletOutputStream (wrapper.getOutputStream()) to write the modified/internationalized content (e.g. with German umlauts/Umlaute) back to the response in the right encoding.
        Writer out = new BufferedWriter(new OutputStreamWriter(wrapper.getOutputStream(), "UTF8"));
    Using PrintWriter for writing does not work, because umlauts are sent in the wrong encoding (iso-8859-1). With the network sniffer Wireshark I've seen that ü comes across as FC, i.e. as an iso-8859-1 encoded character.
    It obviously uses the platform's default encoding, although the documentation does not mention this explicitly for the constructor ([PrintWriter(java.io.Writer,boolean)|http://java.sun.com/j2se/1.4.2/docs/api/java/io/PrintWriter.html#PrintWriter(java.io.Writer,%20boolean)]).
    So my questions:
    1. Is there a way to get the response content without losing the option to call wrapper.getOutputStream()?
    or
    2. Can I set the encoding for my PrintWriter?
    or
    3. Can I encode the content before writing it to the PrintWriter, and will this solve the problem?
    new String(Charset.forName("UTF8").encode(content).array(), "UTF8") did not work.
    Here comes my code:
    The filter to translate the resources/response:
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import de.modima.util.lang.Language;
    public class TranslationFilter implements Filter {
        private static final Log log = LogFactory.getLog(TranslationFilter.class);

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
            String lang = Language.setLanguage((HttpServletRequest) request);
            CharResponseWrapper wrapper = new CharResponseWrapper((HttpServletResponse) response, "UTF8");
            PrintWriter out = response.getWriter();
            chain.doFilter(request, wrapper);
            String content = wrapper.toString();
            content = Language.translateContent(content, lang);
            content += "                                                                                  ";
            wrapper.setContentLength(content.length());
            out.write(content);
            out.flush();
            out.close();
        }

        public void destroy() {}

        public void init(FilterConfig filterconfig) throws ServletException {}
    }
    The response wrapper to get access to the response content:
    import java.io.CharArrayWriter;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    public class CharResponseWrapper extends TypedSevletResponse {
        private static final Log log = LogFactory.getLog(CharResponseWrapper.class);
        private CharArrayWriter output;

        public String toString() {
            return output.toString();
        }

        public CharResponseWrapper(HttpServletResponse response, String charsetName) {
            super(response, charsetName);
            output = new CharArrayWriter();
        }

        public PrintWriter getWriter() {
            return new PrintWriter(output, true);
        }
    }
    The TypedResponse that takes care of setting the right HTTP header information according to the given charset:
    import java.nio.charset.Charset;
    import java.util.StringTokenizer;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;
    public class TypedSevletResponse extends HttpServletResponseWrapper {
        private String type;
        private String charsetName;

        /**
         * @param response
         * @param charsetName the java or non-java name of the charset like utf-8
         */
        public TypedSevletResponse(HttpServletResponse response, String charsetName) {
            super(response);
            this.charsetName = charsetName;
        }

        public void setContentType(String type) {
            if (this.type == null && type != null) {
                StringTokenizer st = new StringTokenizer(type, ";");
                type = st.hasMoreTokens() ? st.nextToken() : "text/html";
                type += "; charset=" + getCharset().name();
                this.type = type;
            }
            getResponse().setContentType(this.type);
        }

        public String getContentType() {
            return type;
        }

        public String getCharacterEncoding() {
            try {
                return getCharset().name();
            } catch (Exception e) {
                return super.getCharacterEncoding();
            }
        }

        protected Charset getCharset() {
            return Charset.forName(charsetName);
        }
    }
    Some information about the environment:
    OS: Linux Debian 2.6.18-5-amd64
    Java: IBMJava2-amd64-142
    Apserver: JBoss 3.2.3
    Regards
    Markus Liebschner

    Hello cndvg
    yes I did.
    I found the solution in this forum at [Filter inconsistency Windows-Solaris?|http://forum.java.sun.com/thread.jspa?threadID=520067&messageID=2518948]
    You have to use your own implementation of ServletOutputStream.
    public class TypedServletOutputStream extends ServletOutputStream {
        CharArrayWriter buffer;

        public TypedServletOutputStream(CharArrayWriter aCharArrayWriter) {
            super();
            buffer = aCharArrayWriter;
        }

        public void write(int aInt) {
            buffer.write(aInt);
        }
    }
    Now the CharResponseWrapper looks like this.
    public class CharResponseWrapper extends TypedSevletResponse {
        private static final Log log = LogFactory.getLog(CharResponseWrapper.class);
        private CharArrayWriter output;

        public String toString() {
            return output.toString();
        }

        public CharResponseWrapper(HttpServletResponse response, String charsetName) {
            super(response, charsetName);
            output = new CharArrayWriter();
        }

        public PrintWriter getWriter() throws IOException {
            return new PrintWriter(output, true);
        }

        public ServletOutputStream getOutputStream() {
            return new TypedServletOutputStream(output);
        }
    }
    Regards
    MaLie

  • Wls 6.1 and wrong document encoding

    Hi
    In weblogic 6.0 sp2 we use the following startup parameters for weblogic: -Dfile.encoding=ISO8859_1
    -Duser.region=SE -Duser.language=sv. We do this to make the Swedish characters
    Å, Ä and Ö work in .jsp pages.
    But in weblogic 6.1 sp1 it does not work. My å, ä and ö look like a ?.
    Anyone know what to do ?
    best regards Anders Pettersson

    Anders,
    I have had the same problems with german characters.
    With the "Edit WebApplication Descriptor" I have set -encoding
    ISO8859_1 in the JSP Descriptor Section under Compile Flags and so You
    will have the swedish characters.
    György
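    For reference, the same flag can be set directly in weblogic.xml instead of through the console. A minimal sketch, assuming the 6.x jsp-descriptor format (check the exact element names against the DTD for your service pack):
        <jsp-descriptor>
            <jsp-param>
                <param-name>compileFlags</param-name>
                <param-value>-encoding ISO8859_1</param-value>
            </jsp-param>
        </jsp-descriptor>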

  • Wrong character encoding in MS Excel

    Hi!
    I get and update data in the database (8i) using oo4o in MS Excel VBA. Data is in Latvian (NLS_LANG is AMERICAN_AMERICA.WE8ISO8859P1)
    The code is:
    Set OSes = CreateObject("OracleInProcServer.XOraSession")
    Set ODb = OSes.OpenDatabase("test", "scott/tiger", 0&)
    Set ODy = ODb.createdynaset("select ename from emp ", 0)
    The problem is, when I get data (ename), or put data into an OraParameter object, all Latvian letters are converted to strange symbols. An example of the result is 'KredØta', where in place of the Ø there should be the letter ī.
    I had used this code for some years with Office 2000 and Oracle Client 8i and everything was OK, but now I have Office 2003 and Client 10g, and it does not work :(

    Try to format the column in your query. Use to_char, i.e. SELECT to_char(ename) ename FROM emp
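    The to_char workaround masks the symptom rather than fixing it. A more direct diagnostic (the same query suggested in the Advanced Queuing thread below) is to check whether the database character set can hold Latvian at all, and whether the client's NLS_LANG matches it: WE8ISO8859P1 has no ī, for instance, while Baltic data normally needs something like BLT8MSWIN1257 or a Unicode character set.
        SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';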

  • Why differing Character Encoding and how to fix it?

    I have PRS-950 and PRS-350 readers, both since 2011.  
    In the last year, I've been getting books with Character Encoding that is not easy to read.  In playing around with my browsers and View -> Encoding menus, I have figured out that it has something to do with the character encoding within the epub files.
    I buy books from several ebook stores and I borrow from the library.
    The problem may be the entire book, but it is usually restricted to a few chapters, with rare occasion where the encoding changes within a chapter.  Usually it is for a whole chapter, not part, and it can be seen in chapters not consecutive to each other.
    It occurs whether the book is downloaded directly to my 950 reader or if I load it to either reader from my computer(s), which are all Mac OS X of several versions from 10.4 to Mountain Lion.  Since it happens when the book is downloaded directly, I figure the operating system of my computer is not relevant.
    There are several publishers involved, though Baen (no DRM ebooks) has not so far been one of them.
    If I look at the books with viewers on the computer, the encoding is the same.  I've read them in Calibre, in the Sony Reader App, and in Adobe Digital Editions 2.0.  It's always the same.
    I believe the encoding is inherent to the files.  I would like to fix this if I can to make the books I've purchased, many of them in paper and electronically, more enjoyable to read on my readers.
    Example: I’ve is printed instead of I've.
    ’ for apostrophe
    “ the opening of a quotation,
    â€?  for closing the quotation,
    and I think — is for a hyphen.
    When a sentence had “’m  for " 'm at the beginning of a speech (when the character was slurring his words) it took me a while to figure out how it was supposed to read.
    “’Sides, â€™tis only for a moon.  That ain’t long.â€?
    was in one recent book.
    Translation: " 'Sides, 'tis only for a moon. That ain't long."
    See what I mean? 
    Any ideas?

    Hi
    I wonder if it’s possible to download a free ebook with such an issue, in order to run some tests.
    Perhaps it’s possible, on free ebooks (without DRM), to add fonts by using software like Sigil.
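    The “ and ’ patterns in the question are the classic signature of UTF-8 bytes decoded as Windows-1252, which suggests the affected chapters declare (or are assumed to have) the wrong encoding. A minimal Java sketch (hypothetical class name) reproduces the effect:
        public class Mojibake {
            public static void main(String[] args) throws Exception {
                String original = "I\u2019ve";                     // "I've" with a curly apostrophe (U+2019)
                byte[] utf8 = original.getBytes("UTF-8");          // the apostrophe encodes as E2 80 99
                String garbled = new String(utf8, "windows-1252"); // decode with the wrong charset
                System.out.println(garbled);                       // prints: I’ve
            }
        }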

  • Reading Advance Queuing with XMLType payload and JDBC Driver character encoding

    Hi
    I've got a problem retrieving the message from the queue with XMLType payload in Java.
    It was working fine on the 10g database, but after the switch to 11g it returns a corrupted string instead of the real XML message. The database NLS_LANG setting is AL32UTF8.
    It is said that the JDBC driver should deal with that automatically, but it obviously doesn't in this case. When I dequeue the message using database functionality (the DBMS_AQ package) it looks fine, but not when using the JDBC driver, so I think it is a character encoding issue or similar. The message itself is enqueued by the database and is supposed to be retrieved by a dedicated EJB.
    Driver file used: ojdbc6.jar
    Additional libraries: aqapi.jar, xdb.jar
    All files taken from the 11g database installation.
    What should I do to get the xml message correctly?

    Do you mean NLS_LANG is AL32UTF8 or the database character set is AL32UTF8? What is the database character set (SELECT value FROM nls_database_parameters WHERE parameter='NLS_CHARACTERSET')?
    Thanks,
    Sergiusz

  • Character Encoding for JSPs and HTML forms

    After having read loads of postings on character encoding problems I'm still puzzled about the following problem:
    I have an instance (A) of WL 8.1 SP3 on a WinXP machine and another instance (B) of WL 8.1 without any SP on a Win2K machine. The underlying Windows locale is english(US) in both cases.
    The same application deployed as a war file to these instances does not behave in the same way when it comes to displaying non-Latin1-characters like the Euro symbol: Whereas (A) shows and accepts these characters as request-parameters, (B) does not.
    Since the war file is the same (weblogic.xml, jsps and everything), the reason for this must either be the service-pack-level or some other configuration setting I overlooked.
    Any hints are appreciated!

    Try this:
    Preferences -> Content -> Fonts & Colors -> Advanced
    At the bottom, choose your Encoding.

  • Problems with Forms and character encoding

    I'm having problems trying to read unicode data inputted into a Form on my JSP page.
    I've used the meta tag <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> to set the charset of the page to UTF-8. I've inputted some Chinese characters into my form, and when I try to read the subsequent request parameter in my servlet using request.getParameter() the string returned is this
    "&#26469;&#28304;" which is the escape sequence required by HTML to display these characters.
    From what I've read on the subject this doesn't seem like the expected value. I've tried other ways of getting the correct string value such as setting the character encoding request.setCharacterEncoding("UTF-8") and then converting the bytes using this encoding value but it doesn't seem to work.
    I could write a method to split up the string using the ; as a token and working out the correct unicode character but this doesn't seem like the right thing to do.
    Any help on how to pass the correct information from the Form in the JSP page to the servlet would be greatly appreciated

    I don't believe that is correct, but if it's returning HTML escapes instead of URL-encoded characters, then it's the browser doing it. This is my test page for playing with Chinese...
    <%@ page language="java" contentType="text/html; charset=UTF-8" %>
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
    <html>
    <head>
         <title></title>
         <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    </head>
    <body bgcolor="#ffffff" background="" text="#000000" link="#ff0000" vlink="#800000" alink="#ff00ff">
    <%
    request.setCharacterEncoding("UTF-8");
    String str = "\u7528\u6237\u540d";
    String name = request.getParameter("name");
    %>
    req enc: <%= request.getCharacterEncoding() %><br />
    rsp enc: <%= response.getCharacterEncoding() %><br />
    str: <%= str %><br />
    name: <%= name %><br />
    <form method="GET" action="_lang.jsp" encoding="UTF-8">
    Name: <input type="text" name="name" value="" >
    <input type="submit" name="submit" value="GET Submit" />
    </form>
    <form method="POST" action="_lang.jsp" encoding="UTF-8">
    Name: <input type="text" name="name" value="" >
    <input type="submit" name="submit" value="POST Submit" />
    </form>
    </body>
    </html>

  • Seeing � etc despite having View--Character encoding as unicode and auto-detect universal

    On viewing some web pages I see characters such as �, ,  (for example). But View - Character Encoding is set to Unicode (UTF-8) or Western (ISO8859-1), and Tools - Options - Content - Fonts - Advanced Encoding is set to either of those as well.

    example of page:
    http://scienceofdoom.com/2010/09/17/on-missing-the-point-by-chilingar-et-al-2008/
    - a little over halfway down, in the section headed "Anthropogenic Imact on the Earth’s Climate – Tiny", from the paragraph "And continue:" there are these non-characters in equation (12) and subsequently.
    Another page: http://www.zimbabwesituation.com/sep26_2010.html in the topic "Red warning lights".
    Most web-pages I read are without problem.
    I contacted the writer of the first page and s/he had no idea why it happens.

  • Web pages display OK, but print with garbage characters. I think it's character encoding, but don't know WHICH I should use. Have tried all Western and UTF options. Firefox 3.6.12

    I used to only have trouble with headers & footers printing out as garbage characters. I tried changing the Character Encoding; now entire pages have garbage characters, even though the pages view OK when browsing.

    If the pages look OK when you are browsing, then it is not a problem with the encoding.
    It can be a problem with the font that is used; you can try to disable website fonts and possibly try a few different default fonts to see if that helps.
    Tools > Options > Content : Fonts & Colors: Advanced (Allow pages to choose their own fonts, instead of my selections above)

  • Locale and character encoding. What to do about these dreadful ÅÄÖ??

    It's time for me to get it into my head how this works. Please, help me understand before I go nuts.
    I'm from Sweden and we use a few of these weird characters like ÅÄÖ.
    If I create a file called "övrigt.txt" in Windows, then the file will turn up as "?vrigt.txt" on my Linux pc (at least in the console; sometimes it looks OK in other apps in X). The same is true if I create the file in Linux and copy it to Windows; it will look just as weird on the other side.
    As I (probably) can't change the way Windows works, my question is what I have to do to make these two systems play nicely with each other.
    This is the output from locale:
    LANG=en_US.utf8
    LC_CTYPE="en_US.utf8"
    LC_NUMERIC="en_US.utf8"
    LC_TIME="en_US.utf8"
    LC_COLLATE=C
    LC_MONETARY="en_US.utf8"
    LC_MESSAGES="en_US.utf8"
    LC_PAPER="en_US.utf8"
    LC_NAME="en_US.utf8"
    LC_ADDRESS="en_US.utf8"
    LC_TELEPHONE="en_US.utf8"
    LC_MEASUREMENT="en_US.utf8"
    LC_IDENTIFICATION="en_US.utf8"
    LC_ALL=
    Is there anything here I should change? I have tried using ISO-8859-1 with no luck. Mind you that I want to keep the system-wide language set to English. The only thing I want to achieve is that "Ö" on Windows should turn up as "Ö" in Linux as well, and vice versa.
    Please save my hair from being torn off, I'm going bald here...

    Hey, thanks for all the answers!
    I share my files in a number of ways, but mainly through a web application called Ajaxplorer (very nice btw...). The thing is that as soon as a Windows user uploads anything with special characters in the file name, my programs (xbmc, the console etc.) refuse to read them correctly. Other ways of sharing are file copying with usb sticks, ssh etc. It's really not the way of sharing that is the problem, I think, but rather the special characters being used sometimes.
    I could probably convert the filenames with the suggested applications (see the sketch after this post), but then I'll put the Windows users in trouble when they want to download the files again, won't I?
    I realize that it's cp1252 that is the bad guy in this drama. Is there no way to set/use cp1252 as a character encoding in Linux? It's probably a bad idea, as utf8 seems like the way forward, but the fact that these two OSes can't communicate too well in this area is pretty useless if you ask me.
    To wrap this up I'll answer some questions...
    @EVRAMP: I'm actually using pcmanfm, but that is only for me and I'm not dealing very often with vfat partitions, to be honest.
    @pkervien: Well, I think I mentioned my forms of sharing above. (Nice to see some Arch Swedes here!)
    @quarkup: locale.gen is edited and both sv_SE and en_US have utf-8 and ISO-8859 enabled and generated.
    ...and to clarify things even further: it doesn't matter if I get or provide a file via a usb stick, samba, ftp or on paper. All I want is for "Ö" to always be "Ö", everywhere.
    I can't believe how hard this is to get around. Linus is Finnish, for crying out loud. I thought he'd have sorted this out first thing. Maybe he doesn't deal with Windows or its users at all.
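    On converting the existing filenames (the sketch referred to above): the usual tool is convmv, which by default only prints what it would rename. The share path here is hypothetical:
        convmv -f cp1252 -t utf8 -r /srv/share            # dry run: show the planned renames
        convmv -f cp1252 -t utf8 -r --notest /srv/share   # actually rename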

  • Character encoding: Ansi, ascii, and mac, oh my!

    I'm writing a program which has to search & replace data in user-supplied Rich Text documents (.rtf). Ideally, I would like to read the whole thing into a StringBuffer, so that I can use all of the functionality built into String and StringBuffer, and so that I can easily compare with constant Strings and chars.
    The trouble that I have is with character encoding. According to the rtf spec, RTFs can be encoded in four different character encodings: "ansi", "mac", IBM PC code page 437, and IBM PC code page 850, none of which are supported by Java (see http://impulzus.sch.bme.hu/tom/szamitastechnika/file/rtfspec/rtfspec_6.htm#rtfspec_8 for the RTF spec and http://java.sun.com/j2se/1.3/docs/api/java/lang/package-summary.html#charenc for the character encodings supported by Java).
    I believe, from a bit of googling, that they are all 8 bits/character, so I could read everything into a byte array and manipulate that directly. However, that would be rather nasty. I would have to be careful with the changes that I make to the document, so that I do not insert values that do not encode correctly in the document's character encoding. Overall, a large hassle.
    So my question is - has anyone done something like this before? Any libraries that will make my job easier? Or am I missing something built into Java that will allow me to easily decode and reencode these documents?

    DrClap, thanks for the response.
    If I could map from the encodings listed above (which are given in the rtf document) to a Java encoding name from the page that you listed, that would solve all my problems. However, there are a couple of problems:
    a) According to this page - http://orwell.ru/info/diffs.htm - ANSI is a superset of ISO-8859-1. That page isn't exactly authoritative, but I can't afford to lose data.
    b) I'm not sure what to do about the other character encodings. "mac" may correspond to "MacRoman", but that page lists a dozen or so other Macintosh encodings. Gotta love crystal-clear MS documentation.
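    Building on that, the RTF keywords map reasonably well onto charsets most JREs actually ship, taking "ansi" to mean windows-1252 per the page cited above. A sketch (charset availability varies by JRE; "MacRoman" may be registered as "x-MacRoman"):
        import java.nio.charset.Charset;

        public class RtfCharsets {
            // Map the four RTF character-set keywords (\ansi, \mac, \pc, \pca)
            // to Java charset names.
            static Charset forRtfKeyword(String keyword) {
                if ("ansi".equals(keyword)) return Charset.forName("windows-1252");
                if ("mac".equals(keyword))  return Charset.forName("MacRoman");
                if ("pc".equals(keyword))   return Charset.forName("Cp437");  // IBM code page 437
                if ("pca".equals(keyword))  return Charset.forName("Cp850");  // IBM code page 850
                throw new IllegalArgumentException("unknown RTF charset: " + keyword);
            }

            public static void main(String[] args) {
                System.out.println(forRtfKeyword("ansi")); // windows-1252
            }
        }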
