File encoding cp1252 problem

Hi there,
I have a problem concerning the file encoding in a web application.
I'll sketch the problem for you.
I'm working on adjustments and bug fixes for an e-mail archive at the company I work for. With this archive, users can search e-mails through a Struts 1.0 / JSP web application, read them, and send them back to their mail inbox.
Recently a bug has appeared, concerning character sets.
We have mails with french characters or other uncommon characters in it.
Like the following mail:
Subject: Test E-mail archief coördinatie Els
Content: Test coördinatie rédémarrage ... test weird characters � � �
In the web application itself everything is fine, but when I send this mail back to my inbox, the subject gets all messed up:
=?ANSI_X3.4-1968?Q?EMAILARCHIVE_*20060419007419*_Tes?=
=?ANSI_X3.4-1968?Q?t_E-maill_archief_co=3Frdinatie_Els?=
The content appears to be fine.
We discovered this problem recently, and a lot of effort and searching went into solving it.
Our solution was to put the following line in catalina.sh, the script our Tomcat 4.1 web server starts with:
CATALINA_OPTS="-server -Dfile.encoding=cp1252"
On my local Win2K computer the encoding didn't pose a problem, so catalina.sh wasn't changed. It was only a problem (during testing) on our Linux test server, a VMware machine which is a copy of our production environment.
On the VMware machine I added the line to catalina.sh, and it worked fine.
Problem solved!
Yesterday we put the archive into production. On our production server: BANG --> NullPointerException.
We thought it had something to do with jars it couldn't find, older jars, or Tomcat's cache, but none of that solved the problem.
We put the old version back into production, but the same NullPointerException occurred.
We then commented out the CATALINA_OPTS="-server -Dfile.encoding=cp1252" line, and then it worked again.
We put the new version into production (without the file encoding line), and it worked perfectly, except for those weird ANSI characters.
Anyone have any experience with this?
I use that same file encoding to start a batch job, but there I spell it Cp1252 (with a capital C). Might that be the problem? I have to be sure, because the problem doesn't occur in the test environment, and I can't just experiment in production and switch off the server whenever I'd like to.
Does anyone see whether changing cp1252 --> Cp1252 might be a solution, or does anyone have another one?
Thanks in advance.
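For what it's worth, charset name lookups in Java are case-insensitive, so the cp1252 vs. Cp1252 spelling is unlikely to matter. A minimal check (class name is mine) that can be run on any machine without touching the server:

```java
import java.nio.charset.Charset;

public class CharsetNameCheck {
    public static void main(String[] args) {
        // Charset names are case-insensitive, so both spellings
        // resolve to the same canonical charset.
        Charset lower = Charset.forName("cp1252");
        Charset upper = Charset.forName("Cp1252");
        System.out.println(lower.name());        // canonical name of the charset
        System.out.println(lower.equals(upper)); // true
        // The JVM-wide default picked up from file.encoding at startup:
        System.out.println(Charset.defaultCharset().name());
    }
}
```

So the NullPointerException in production almost certainly has a different cause than the capitalization.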

Similar Messages

  • File.encoding = Cp1252

    Dear List,
    I have noticed in my system.properties that my windows xp 1.4 jdk has
    file.encoding = Cp1252
    What does this mean, generally? In my JSP web application I have noticed that the JSP pages themselves, which have HTML company headers, have become somewhat corrupted; e.g. the copyright symbol becomes something else (looks like UTF-8!). Could this have something to do with it? Does this mean that text stream I/O readers etc. will adopt this encoding as the default?
    What effects can it have if I change this using the setProperty method?
    regards
    Ben

    Setting it on the command line with java -Dfile.encoding= works.

    public class a {
        public static void main(String[] args) {
            System.out.println(System.getProperty("file.encoding") + " \u0150\u0151\u0170\u0171");
        }
    }

    D:\doku\source\colors\src\web>java a
    Cp1252 ????
    D:\doku\source\colors\src\web>java -Dfile.encoding=utf8 a
    utf8 ┼�┼�┼░┼▒
    D:\doku\source\colors\src\web>java -Dfile.encoding=latin2 a
    latin2 ╒⌡█√
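The broader lesson from this demo is that code which passes an explicit charset never depends on file.encoding at all. A hedged sketch (file name is made up) of writing and reading with explicit charsets:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class ExplicitEncoding {
    public static void main(String[] args) throws IOException {
        File f = new File("demo.txt"); // hypothetical scratch file
        // Write with an explicit charset instead of the platform default...
        try (Writer w = new OutputStreamWriter(new FileOutputStream(f), StandardCharsets.UTF_8)) {
            w.write("\u0150\u0151\u0170\u0171");
        }
        // ...and read it back with the same charset: the text round-trips
        // regardless of what -Dfile.encoding was set to.
        try (Reader r = new InputStreamReader(new FileInputStream(f), StandardCharsets.UTF_8)) {
            char[] buf = new char[4];
            int n = r.read(buf);
            System.out.println(n + " chars read back");
        }
        f.delete();
    }
}
```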

  • How can I convert a file encoded in Cp1252 (Windows Latin-1) to a text file

    Please, this is urgent.
    Could someone tell me how I can convert a file encoded in Cp1252 (Windows Latin-1) to a text file.

    I need to convert a file encoded in Cp1252
    (well, that is what I get when I invoke the method
    in.getEncoding()) to a plain text file in a readable format (UTF-8?) so that I can parse it with StringTokenizer.
    Sorry for the missing parts of the first message.
    thanks
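A transcoding sketch along those lines: decode with the source charset, encode with the target one. The helper and file names here are mine, and the demo uses temp files so it is self-contained:

```java
import java.io.*;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Transcode {
    // Copies src (read as `from`) to dst (written as `to`), chunk by chunk.
    public static void transcode(File src, File dst,
                                 Charset from, Charset to) throws IOException {
        try (Reader r = new InputStreamReader(new FileInputStream(src), from);
             Writer w = new OutputStreamWriter(new FileOutputStream(dst), to)) {
            char[] buf = new char[8192];
            int n;
            while ((n = r.read(buf)) != -1) {
                w.write(buf, 0, n);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("cp1252-", ".txt");
        File dst = File.createTempFile("utf8-", ".txt");
        try (OutputStream o = new FileOutputStream(src)) {
            o.write(new byte[]{(byte) 0xE9}); // 'é' as a single Cp1252 byte
        }
        transcode(src, dst, Charset.forName("windows-1252"), StandardCharsets.UTF_8);
        System.out.println("converted to " + dst.length() + " bytes"); // é is 2 bytes in UTF-8
        src.delete();
        dst.delete();
    }
}
```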

  • Problem with file.encoding system property

    Hi all
    I develop a web application with Tomcat app server.
    I have to set the file.encoding system property to "Cp1252", but when I set it programmatically using System.setProperty("file.encoding","Cp1252") it doesn't affect my program's outcome, whereas when I put it in catalina.bat with set JAVA_OPTS=-Dfile.encoding=Cp1252 it works fine.
    What is the difference, and why can't I change that property programmatically?
    thanks

    As to why you can't change that property programmatically: it is a system property reflecting the initial settings read when the JVM started. Resetting the system property java.home will not change your home directory either.
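That behaviour can be demonstrated directly; in this sketch the property flip happens after startup, and the default charset stays whatever it was:

```java
import java.nio.charset.Charset;

public class FileEncodingIsReadOnly {
    public static void main(String[] args) {
        String before = Charset.defaultCharset().name();
        System.setProperty("file.encoding", "Cp1252"); // too late...
        String after = Charset.defaultCharset().name();
        // ...the default charset was cached when the JVM started.
        System.out.println(before + " -> " + after
                + " (unchanged: " + before.equals(after) + ")");
    }
}
```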

  • Jinitiator 1.3.1.2.6 on win 7 64 and win xp (different file.encoding)

    Hello,
    our customer has moved from Windows XP to Windows 7, and he uses Jinitiator 1.3.1.2.6.
    In some Forms I have implemented a PJC to save data from a clob to the local file system.
    But there is a problem:
    If I run the application on Windows XP I get file.encoding=Cp1250, which is OK.
    If I run the same application on Windows 7 (64-bit) I get file.encoding=Cp1252, and there is the problem.
    Is there any way to run Jinitiator with file.encoding set to Cp1250?
    Maybe this is a locale problem with Windows?
    thank you..

    First, I will start by saying that JInitiator was not intended to run on Win7, especially 64bit. So, it may be time to think about moving to the Java Plugin. Preferably one which is certified with your Forms version.
    To your issue, I suspect you need to change the "Region and Language" settings on the client machine. This can be found on the Control Panel. If that doesn't help, take a look at this:
    http://stackoverflow.com/questions/4850557/convert-string-from-codepage-1252-to-1250
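Independently of the Region and Language settings, a PJC that writes with an explicit charset sidesteps file.encoding entirely. A sketch under that assumption (class and method names are mine, not the actual PJC API):

```java
import java.io.*;
import java.nio.charset.Charset;

public class ClobSaver {
    private static final Charset CP1250 = Charset.forName("windows-1250");

    // Writes the clob text in Cp1250 regardless of the JVM's file.encoding.
    public static void save(String clobText, File target) throws IOException {
        try (Writer w = new OutputStreamWriter(new FileOutputStream(target), CP1250)) {
            w.write(clobText);
        }
    }
}
```

With this, the same bytes come out on XP and Win7, whatever the client's default encoding happens to be.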

  • How to set File Encoding to UTF-8 On Save action in JDeveloper 11G R2?

    Hello,
    I am facing an issue when modifying a file using JDeveloper 11g R2: JDeveloper changes the encoding of the file to the system default encoding (ANSI) instead of UTF-8. I have set the encoding to UTF-8 in the "Tools | Preferences | Environment | Encoding" option and restarted JDeveloper. I have also set the "Project Properties | Compiler | Character Encoding" option to UTF-8. Neither of them works.
    I am using below version of JDeveloper,
    Oracle JDeveloper 11g Release 2 11.1.2.3.0
    Studio Edition Version 11.1.2.3.0
    Product Version: 11.1.2.3.39.62.76.1
    I created a file in UTF-8 encoding, opened it, made some changes, and saved it.
    When I open the "Properties" tab using the "Help | About" menu, I can see that JDeveloper's properties show the encoding as Cp1252. Is that related?
    Properties
    sun.jnu.encoding
    Cp1252
    file.encoding
    Cp1252
    Any idea how to make sure JDeveloper saves the File in UTF-8 always?
    - Sujay

    I have already done that. That is the first thing I did as mentioned in my Thread. I have also added below 2 options in jdev.conf and restarted JDeveloper, but that also did not work.
    AddVMOption -Dfile.encoding=UTF-8
    AddVMOption -Dsun.jnu.encoding=UTF-8
    - Sujay

  • File.encoding in windows influence  by the locale

    How can I set file.encoding on the Windows platform so that it is not influenced by the locale?
    For example, in Control Panel -> Regional Options the locale is set to Russian, and what I get is file.encoding Cp1251 even though I pass the parameter on the command line:
    -Dfile.encoding=Cp1252
    (I want to keep Cp1252; Cp1252 is the US/Western default and Cp1251 is Windows Cyrillic.)
    I run a Java program to see which encoding I actually get:
    D:\ProgramFiles\jdk1.3.1\bin\java -Dfile.encoding=Cp1252 TestEncoding
    The locale on my PC is Russian and the result is:
    System.getProperty("file.encoding") == Cp1252
    Default ByteToChar Class == sun.io.ByteToCharCp1251
    Default CharToByte Class == sun.io.CharToByteCp1251
    Default CharacterEncoding == Cp1251
    OutputStreamWriter encoding == Cp1251
    InputStreamReader encoding == Cp1251
    TestEncoding.java
    import java.io.PrintStream;
    import java.io.ByteArrayOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.InputStream;
    import java.io.ByteArrayInputStream;
    import java.io.InputStreamReader;
    class TestEncoding {
        public static void main(String[] args) {
            String encProperty = System.getProperty("file.encoding");
            System.out.println("System.getProperty(\"file.encoding\") == " + encProperty);
            String byteToCharClass = sun.io.ByteToCharConverter.getDefault().getClass().getName();
            System.out.println("Default ByteToChar Class == " + byteToCharClass);
            String charToByteClass = sun.io.CharToByteConverter.getDefault().getClass().getName();
            System.out.println("Default CharToByte Class == " + charToByteClass);
            String defaultCharset = sun.io.ByteToCharConverter.getDefault().getCharacterEncoding();
            System.out.println("Default CharacterEncoding == " + defaultCharset);
            ByteArrayOutputStream buf = new ByteArrayOutputStream(10);
            OutputStreamWriter writer = new OutputStreamWriter(buf);
            System.out.println("OutputStreamWriter encoding == " + writer.getEncoding());
            byte[] byteArray = new byte[10];
            InputStream inputStream = new ByteArrayInputStream(byteArray);
            InputStreamReader reader = new InputStreamReader(inputStream);
            System.out.println("InputStreamReader encoding == " + reader.getEncoding());
        }
    }

    What are you really trying to accomplish? Applications should avoid relying on undocumented or implementation dependent features, such as the file.encoding property and sun.* classes (see http://java.sun.com/products/jdk/faq/faq-sun-packages.html).
    On the other hand, there's plenty of documented public API that lets you work with specific character encodings. For example, you can specify the character encoding for conversion between byte arrays and String objects (see the String class specification) or when reading or writing files (see the InputStreamReader and OutputStreamWriter classes in java.io).
    The default encoding is needed by the Java runtime when accessing the Windows file system, for example file names, so changing it would likely result in erroneous behavior.
    Norbert Lindenberg
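In practice, the documented API he refers to looks like this (a minimal sketch; the Cyrillic sample string is mine):

```java
import java.nio.charset.Charset;

public class ExplicitCharsets {
    public static void main(String[] args) {
        String text = "\u041F\u0440\u0438\u0432\u0435\u0442"; // "Привет", Cyrillic sample
        Charset cp1251 = Charset.forName("windows-1251");
        // String <-> byte[] with an explicit charset; file.encoding is never consulted.
        byte[] encoded = text.getBytes(cp1251);
        String decoded = new String(encoded, cp1251);
        System.out.println(decoded.equals(text)); // round-trips cleanly
        // InputStreamReader/OutputStreamWriter take the same Charset (or its
        // name) in their constructors, so readers and writers can be pinned too.
    }
}
```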

  • Problem with file.encoding in Linux

    Hello,
    I am currently migrating a java project from Windows to Linux. The project is finally shaping up now, except for some encoding problems.
    All configuration files are saved in ISO-8859-1/Cp1252 format. When reading and displaying these files in Swing (e.g. JTextArea), the special characters ���� are displayed wrong. I have tried to start the VM with -Dfile.encoding=ISO-8859-1 and -Dfile.encoding=Cp1252 without success (this is done in Eclipse under Linux).
    I then tried the opposite. I created some UTF-8 files, started the application under Windows/Eclipse, read the files and displayed them in a JTextArea. Garbage characters were shown instead of ���� (as expected). I then used -Dfile.encoding=UTF-8, and voila, all characters were displayed correctly.
    Why doesn't -Dfile.encoding work for ISO-8859-1 on Linux, while it does work for UTF-8 on Windows? Does anyone here know?
    The JRE I have been using is 1.4.2_06.
    The Linux is a SuSE 10.0

    Continue the "discussion" here:
    http://forum.java.sun.com/thread.jspa?threadID=737153

  • XI File Adapter Custom File Encoding for  issues between SJIS and CP932

    Dear SAP Forum,
    Has anybody found a solution for the difference between the JVM (IANA) SJIS and MS SJIS implementation ?
    When users enter characters in SAPGUI, the MS SJIS implementation is used, but when the XI file adapter writes SJIS, the JVM SJIS implementation is used, which causes issues for 7 characters:
    1. FULLWIDTH TILDE/EFBD9E                 8160     ~     〜     
    2. PARALLEL TO/E288A5                          8161     ∥     ‖     
    3. FULLWIDTH HYPHEN-MINUS/EFBC8D     817C     -     −     
    4. FULLWIDTH CENT SIGN/EFBFA0             8191     ¢     \u00A2     
    5. FULLWIDTH POUND SIGN/EFBFA1            8192     £     \u00A3     
    6. FULLWIDTH NOT SIGN/EFBFA2              81CA     ¬     \u00AC     
    7. REVERSE SOLIDUS                             815F     \     \u005C
    The following line of code can solve the problem (either in an individual mapping or in a module):
    String sOUT = myString.replace('~','〜').replace('∥','‖').replace('-','−').replace('¢','\u00A2').replace('£','\u00A3').replace('¬','\u00AC');
    But I would prefer to add a custom character set to the file encoding. Has anybody tried this?
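The chained replace() calls can also be written as a single pass over a small mapping table, which is easier to extend if more problem characters turn up. A sketch (class and constant names are mine) covering the first six mappings from the list above:

```java
public class SjisFixup {
    // MS-SJIS characters (left) remapped to their JVM/IANA SJIS equivalents (right):
    // fullwidth tilde, parallel-to, fullwidth hyphen-minus, cent, pound, not sign.
    private static final String FROM = "\uFF5E\u2225\uFF0D\uFFE0\uFFE1\uFFE2";
    private static final String TO   = "\u301C\u2016\u2212\u00A2\u00A3\u00AC";

    public static String fix(String in) {
        StringBuilder sb = new StringBuilder(in.length());
        for (int i = 0; i < in.length(); i++) {
            char c = in.charAt(i);
            int idx = FROM.indexOf(c);
            sb.append(idx >= 0 ? TO.charAt(idx) : c); // map or pass through
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(fix("\uFF5E test \uFFE0")); // fullwidth forms remapped
    }
}
```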


  • File encoding problen (charset) on glassfish / Sun App Server

    Hi all!
    I hope someone here can point me the right way, since I have been trying to solve my problem for quite some time now.
    First my setup: SuSE Linux box with GlassFish V2. I am creating files for users of our website with an enterprise bean in an EJB module. The users are supposed to be able to choose the file encoding themselves. The files are created from Lucene index files that are UTF-8 encoded.
    When writing a result file I use an OutputStreamWriter with a CharsetEncoder object and the user-chosen encoding. This works perfectly when the result is UTF-8 too. But whenever I try to generate ISO-8859-1 files, the encoding in the output files is messed up. It's neither UTF-8 nor Latin-1 nor any other valid encoding. On my development Windows machine it seemed to work just fine.
    So thanks in advance, and many greetings from Germany!
    Phil

    For future references:
    this happens to me too and I found that the cause is that the AM server you are going to configure, is already registered into the directory server.
    Try running this command (with the obvious parameters substituted)
    ldapsearch -B -T -D 'cn=directory manager' -w YOUR_CREDENTIALS -b ou=1.0,ou=iPlanetAMPlatformService,ou=services,YOUR_BASEDN -s base objectclass=* | grep YOUR_SERVERNAME
    If you find that the server you are configuring is listed here, try going to the AM server console (if you have another AM server configured) and browse to Configuration->System Properties->Platforms. If the server is there, remove it; if not, just hit Save (very important).
    If this is your first AM and is a first installation you can just get rid of the Directory Server suffix and recreate it with the Top Entry alone.
    Edited by: flistello on Mar 27, 2008 4:30 PM

  • File.encoding won't swap

    Trying to help someone who has a Windows file server originally installed for the Russian locale. They managed to swap it back to what they require (en_GB), but the JVM file encoding is still stuck on Cp1251. They say they've changed the JVM config's file encoding ("-Dfile.encoding=cp1252"), yet the system still reports Cp1251 for the file encoding (even after several reboots). I don't see anything in the bug parade, and I'm at a loss as to why this box won't swap its JVM file encoding.
    any ideas?
    thanks.

    which version of the videoencoder are you using?
    2.0.0.494 edition (Brand New)
    Remember that you can't crop all files at once . .
    Oh, didn't see that in the manual!
    Where the heck is it listed?
    I tried to save a profile with the compression setting -
    including cropping - but upon trying to use that saved profile - a
    dialog box comes up with:
    Warning: some of the settings (crop, trim, cue point . . . .
    Too bad, so sad!
    2,000+ movies is gonna take a long time!
    They're huge files - broadcast DVC-Pro - and the overscan and
    interlace top & bottom show up . . . . Aaarrrgggg!
    By the way - is it 4 pixels cropped - top & bottom - to
    clean it up?
    And shouldn't a few (maybe 2 pixels) on each side be cropped
    as well?
    Any other suggestions on how to automate what I need to do
    (deinterlace, crop 4 pixels top & bottom & 2 pixels sides -
    exported "High Quality / large)
    Thanks for responding so far - sure helps clarify things!
    Luke

  • File encoding with UTF-8

    Hello all,
    My scenario is IDoc -> XI -> File (txt).
    Everything was working fine until I had to handle an Eastern European language with unusual symbols.
    So in my receiver file adapter I'm using the file encoding UTF-8, and when I look at my fields in the output, everything is fine.
    BUT when I look at the binary, the length of these fields is no longer fixed, because a special character takes 2 bytes instead of one.
    I would like to know if it's possible to handle those characters with file encoding UTF-8 in a fixed-length field of 40 characters, for example; I don't want a variable length for my fields...
    Thanks by advance,
    JP

    I agree with you. In XI I don't have this problem; I have it in my output file when I open the text file in binary mode!
    My fields should be 40 characters, but the special symbols which take 2 bytes instead of 1 make the length of my output fields variable!
    My question was whether there is a way to have a fixed length in my output file.
    Sorry if i wasn't clear in my first post.
    JP
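The mismatch JP describes is characters vs. bytes: a 40-character field is only a 40-byte field in a single-byte encoding. If the downstream system needs a fixed byte width, the padding has to be computed over the encoded bytes. A sketch, assuming space-padding is acceptable (class and method names are mine):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class FixedByteField {
    // Encodes value as UTF-8 and pads with spaces to exactly `width` bytes.
    // Throws if the encoded text is already longer than the field.
    public static byte[] pad(String value, int width) {
        byte[] encoded = value.getBytes(StandardCharsets.UTF_8);
        if (encoded.length > width) {
            throw new IllegalArgumentException("value exceeds " + width + " bytes");
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream(width);
        out.write(encoded, 0, encoded.length);
        for (int i = encoded.length; i < width; i++) {
            out.write(' ');
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // "café" is 4 characters but 5 UTF-8 bytes: é encodes to two bytes.
        System.out.println("caf\u00E9".getBytes(StandardCharsets.UTF_8).length); // 5
        System.out.println(pad("caf\u00E9", 40).length);                         // 40
    }
}
```

Note that a reader on the other side then has to cut the record by byte offsets, not character counts.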

  • File adapter, File encoding national characters

    Hi,
    I have a problem with national characters (ÅÄÖ) when sending files with the file adapter (receiver adapter).
    When I specify Transfer Mode = Binary and File Type = Binary everything works fine, but when I use Transfer Mode = Text the national characters get converted to "?". I have tried to set File Type = Text and tried File Encoding UTF-8 and ISO-8859-1 without success.
    Please help!
    Regards
    Claes

    Hi,
    Check this out: <a href="http://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/502991a2-45d9-2910-d99f-8aba5d79fb42">How To… Work with Character Encodings in Process Integration</a>
    Regards,
    Jakub

  • The source file encoding may be different with this platform encoding.

    I got the error below after upgrading my iPlanet from 4.1 SP3 to 6.0 SP5, so what seems to be the problem here? The code works perfectly under 4.1 SP3.
    [02/Jan/2003:00:01:35] info (10457): JSP: JSP1x compiler threw exception
    org.apache.jasper.JasperException: Unable to compile class for JSP/software/data
    /stage/mainBean/printParser2.java:1: The source file encoding may be different w
    ith this platform encoding. Please use -encoding option to adjust file encoding,
    or apply native2ascii utility to make source file ASCII encoding.

    Looks like the file has been saved in a non-ASCII format. Perhaps someone opened and saved the file as a Word document and the compiler is unable to recompile it. Try opening the file in WordPad and resaving it in text format.

  • Source file encoding

    Hi all,
    Here is my problem :
    Page 1:
    <%@ page contentType="text/html;charset=UTF8" %>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
    </head>
    <body>
    ����@�
    </body>
    </html>
    All the special characters display fine.
    If I write:
    Page 1:
    <%@ include file="/page2.jsp" %>
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
    </head>
    <body>
    ����@�
    </body>
    </html>
    Page 2 : <%@ page contentType="text/html;charset=UTF8" %>
    The display is ugly !!
    Why ??
    I understand that the directive
    <%@ page contentType="text/html;charset=UTF8" %> defines the source file encoding but obviously the include directive does something I don't understand. Any help appreciated.
    Thanks in advance,

    The server/compiler you're using may need to see the content type of the source file you're using BEFORE it is able to process any of the code. Basically, when it gets to the<%@ include file="/page2.jsp" %> line, it doesn't know what content type it's supposed to be using yet (because it's in that separate JSP), and it pukes. You probably have to declare the content type before you can do anything else.
