FOI Servlet non-unicode characters cannot be processed

Hello,
I'm using the Oracle MapViewer 10.1.3.1 quickstart kit to test some map features.
My database is in the CL8MSWIN1251 charset.
I made a simple map application to display some data using the JavaScript API.
When I define a theme-based FOI layer in the map and the predefined theme has some non-Unicode characters in the labeling or in the hidden info fields, I get the following error:
Cannot process the following response from FOI server:
{"foiarray":[{"id":"AAARiqAAEAAAzFgAAA","name":"\u422\u414","gtype":"2001","imgurl":"http://localhost:8888/mapviewer/images/foi/p_16_13_MVDEMO_M.IMAGE131_BW.png","x":"50.0","y":"50.0","width":"16","height":"13","attrs":["987654321","100"]}],"attrnames":["BBB","Osn"]}
As you can see, "\u422\u414" should be "\u0422\u0414", otherwise JavaScript cannot display the characters correctly. I think the FOIServlet is the problem here.
Has anyone had the same problem, or does anyone have a solution? Please help.
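Until the servlet itself is fixed, one possible workaround is to post-process the FOI response and pad each short \uXXX escape to four hex digits before the JavaScript client parses it. The sketch below is only an illustration (the class name is hypothetical, and it assumes the broken escapes merely drop leading zeros):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FoiEscapeFixer {
    // Matches \u followed by 1-3 hex digits that are not followed by
    // another hex digit, i.e. an escape shorter than the required 4 digits.
    static final Pattern SHORT_ESCAPE =
            Pattern.compile("\\\\u([0-9A-Fa-f]{1,3})(?![0-9A-Fa-f])");

    static String fix(String json) {
        Matcher m = SHORT_ESCAPE.matcher(json);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Left-pad the hex digits with zeros to a width of four.
            String padded = String.format("%4s", m.group(1)).replace(' ', '0');
            m.appendReplacement(sb, Matcher.quoteReplacement("\\u" + padded));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String broken = "{\"name\":\"\\u422\\u414\"}";
        System.out.println(fix(broken)); // prints {"name":"\u0422\u0414"}
    }
}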


Similar Messages

  • Scanning files for non-unicode characters.

    Question: I have a web application that allows users to enter data and generate an XML file on the server's filesystem containing the entered data. The code of this application cannot be altered (outside vendor). I have a second webapp, written by yours truly, that has to parse through these XML files to build a dataset used elsewhere.
    Unfortunately I'm having a serious problem. Many of the web application's users are apparently cutting and pasting their information from other sources (frequently MS Word) and in the process are embedding non-Unicode characters in the XML files. When my application attempts to open these files (using DocumentBuilder), I get a SAXParseException: "Document root element is missing".
    I'm sure others have run into this sort of thing, so I'm trying to figure out the best way to tackle it. Obviously I'm going to have to start pre-scanning the files for invalid characters, but finding an efficient method for doing so has proven to be a challenge. I can load the file into a String array and search it character by character, but that is both extremely slow (we're talking thousands of long XML files) and would require that I predefine the invalid characters (so anything new would slip through).
    I'm hoping there's a faster, easier way to do this that I'm just not familiar with.

    "require that I predefine the invalid characters"
    This isn't hard to do, and it isn't subject to change. The XML recommendation tells you exactly which characters are valid in XML documents.
    However if your problems extend to the sort of case where users paste code including the "&" character into a text node without escaping it properly, or they drop in MS Word "smart quotes" in the incorrect encoding, then I think you'll just have to face up to the fact that allowing naive users to generate uncontrolled wannabe-XML documents is not really a viable idea.
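    For reference, the valid ranges from the XML 1.0 recommendation can be checked in a single streaming pass. A minimal Java sketch (class and method names are mine, not from the thread):
    import java.io.*;

    public class XmlCharScrubber {
        // XML 1.0 valid characters: #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD,
        // and #x10000-#x10FFFF. Reader.read() returns UTF-16 code units, so
        // supplementary characters arrive as surrogate pairs (0xD800-0xDFFF),
        // which this simple check drops; handling them would need a lookahead.
        static boolean isValidXmlChar(int c) {
            return c == 0x9 || c == 0xA || c == 0xD
                    || (c >= 0x20 && c <= 0xD7FF)
                    || (c >= 0xE000 && c <= 0xFFFD);
        }

        // Copies input to output, dropping characters XML 1.0 forbids.
        public static void scrub(Reader in, Writer out) throws IOException {
            int c;
            while ((c = in.read()) != -1) {
                if (isValidXmlChar(c)) {
                    out.write(c);
                }
            }
        }
    }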

  • Function for non-unicode characters

    Hi,
    Is there a function that can translate Unicode characters to non-Unicode characters?
    For example, with this function "à" must become "a".
    Thank you for your help.

    Copy and paste the code below and execute it. This could solve your problem.
    DATA: BEGIN OF trans OCCURS 0,
            auml  TYPE x VALUE 'C4', "'Ä'
            c_8e  TYPE c VALUE 'A',
            gra   TYPE x VALUE 'E0', "'à'
            c_gra TYPE c VALUE 'a',
          END OF trans.
    DATA: input(40).
    DATA: output(40).
    input = 'ÄBàp'.
    output = input.
    TRANSLATE output USING trans.
    CONDENSE output NO-GAPS.
    WRITE: / input.
    WRITE: / output.
    Thanks,
    Senthil
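    For comparison, the same idea is a short sketch in Java rather than ABAP: java.text.Normalizer decomposes accented characters so the combining marks can be stripped, turning "à" into "a". The class name is illustrative:
    import java.text.Normalizer;

    public class Deaccent {
        // Decompose (NFD) so that "à" becomes "a" + combining grave accent,
        // then strip all combining marks (Unicode category M).
        public static String deaccent(String s) {
            return Normalizer.normalize(s, Normalizer.Form.NFD)
                    .replaceAll("\\p{M}", "");
        }

        public static void main(String[] args) {
            System.out.println(deaccent("ÄBàp")); // prints ABap
        }
    }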

  • A Download servlet: non-ASCII characters not working

    This is my servlet used for file download:
    public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
      String filepath = request.getParameter("filepath");
      String filename = request.getParameter("filename");
      response.setContentType("application/zip");
      response.setHeader("Content-Disposition", "attachment;filename=\""+filename+"\";");
      ServletOutputStream sos = null;
      BufferedInputStream bis = null;
      try {
        sos = response.getOutputStream();
        // build the source file from the request parameters
        File source = new File(filepath, filename);
        bis = new BufferedInputStream(new FileInputStream(source));
        byte buffer[] = new byte[2048];
        int c;
        while((c = bis.read(buffer)) != -1)
          sos.write(buffer, 0, c);
      } finally {
        if (bis != null) bis.close();
        if (sos != null) sos.close();
      }
    }
    It does not work when the filename contains non-ASCII characters (e.g., extended ASCII, CJK ...).
    How do I fix this? Thanks!

    One possibility that occurs to me is that you have too many encoding things going on and you are sorta "over-encoding" things, as it were....
    All I can think to do is give you this sample JSP page that I created when I was trying to figure out all this web encoding stuff with forms back in the day. So perhaps you can use this as a basis for your own page.
    // _lang.jsp
    <%@ page language="java" contentType="text/html; charset=UTF-8" %>
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
    <html>
    <head>
         <title></title>
         <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    </head>
    <body bgcolor="#ffffff" background="" text="#000000" link="#ff0000" vlink="#800000" alink="#ff00ff">
    <%
    request.setCharacterEncoding("UTF-8");
    String str = "\u7528\u6237\u540d";
    String name = request.getParameter("name");
    %>
    req enc: <%= request.getCharacterEncoding() %><br />
    rsp enc: <%= response.getCharacterEncoding() %><br />
    str: <%= str %><br />
    name: <%= name %><br />
    <br />
    <a href="_lang.jsp?name=<%= java.net.URLEncoder.encode(str, "UTF-8") %>">as link</a>
    <br />
    <br />
    <form method="GET" action="_lang.jsp" encoding="UTF-8">
    Name: <input type="text" name="name" value="" >
    <input type="submit" name="submit" value="GET Submit" />
    </form>
    <form method="POST" action="_lang.jsp" encoding="UTF-8">
    Name: <input type="text" name="name" value="" >
    <input type="submit" name="submit" value="POST Submit" />
    </form>
    </body>
    </html>
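    Back to the original question of non-ASCII filenames in the Content-Disposition header: one common approach is the RFC 5987 filename* form, which carries the name as UTF-8 percent-encoded data. This is a sketch, not a tested solution; the helper class is hypothetical, and older browsers fall back to the plain ASCII filename:
    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    public class ContentDispositionUtil {
        // Builds a Content-Disposition value with an ASCII fallback name and
        // an RFC 5987 filename* parameter carrying the real UTF-8 name.
        public static String attachment(String filename)
                throws UnsupportedEncodingException {
            String encoded = URLEncoder.encode(filename, "UTF-8")
                    .replace("+", "%20"); // URLEncoder encodes spaces as '+'
            return "attachment; filename=\"download.zip\"; filename*=UTF-8''" + encoded;
        }
    }
    // usage in the servlet above:
    // response.setHeader("Content-Disposition", ContentDispositionUtil.attachment(filename));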

  • java.io.File and non-unicode characters in file name

    Unix filesystem object names are byte sequences. These byte sequences are not required to correspond to any character sequence in the current or any other locale. How do I open a file if its name has bytes that do not correspond to a valid Unicode encoding for the current locale? Unless I am missing something, if I list a parent directory that has some file names like this, those file names do not get added to the list. Hmmm....
    R.

    OK, create.c is a program that will create a file whose name is not a valid character in the 'ja' locale.
    Lister.java defines a class that lists files in the current directory. For each file, it prints the 'toString()' version of the file name, the char array of the name as hex, and the 'getBytes' byte array of the name.
    So, compile and run create.c, which will create a file whose name is a single byte with hex value 0x99. Then compile and run Lister.java, which will give you the following output (shown for two different locales):
    $ export LANG=
    $ java Lister
    name:?; chars:99,; bytes:99,
    $ export LANG=ja
    $ java Lister
    name:?; chars:fffd,; bytes:3f,
    Note that when running in the ja locale, there is no character corresponding to byte value 0x99. So Java uses the replacement character 0xFFFD, and the '?' character 0x3F, as replacements.
    The point is that there are files which Java cannot uniquely represent as a straight String. I suppose we could get the filename via JNI, do the conversion ourselves, and then use the private-use area of Unicode to encode all our strings, but ugh.
    // create.c
    #include <stdio.h>
    int main()
    {
       const char* name = "\x99";
       FILE* file = fopen( name, "w" );
       if( file == NULL )
       {
          printf( "could not open file %s\n", name );
          return 1;
       }
       fclose( file );
       return 0;
    }
    // Lister.java
    import java.io.*;
    public class Lister {
        public static void main( String[] args ) {
            new Lister().run();
        }
        public void run() {
            try {
                doRun();
            } catch( Exception e ) {
                System.out.println( "Encountered exception: " + e );
            }
        }
        private void doRun() throws Exception {
            File cwd = new File( "." );
            String[] children = cwd.list();
            for( int i = 0; i < children.length; ++i )
                printName( children[ i ] );
        }
        private void printName( String s ) {
            System.out.print( "name:" );
            System.out.print( s );
            System.out.print( "; chars:" );
            printCharsAsHex( s );
            System.out.print( "; bytes:" );
            printBytesAsHex( s );
            System.out.println();
        }
        private void printCharsAsHex( String s ) {
            for( int i = 0; i < s.length(); ++i ) {
                char ch = s.charAt( i );
                System.out.print( Integer.toHexString( ch ) + "," );
            }
        }
        private void printBytesAsHex( String s ) {
            byte[] bytes = s.getBytes();
            for( int i = 0; i < bytes.length; ++i ) {
                byte b = bytes[ i ];
                System.out.print( Integer.toHexString( unsignedExtension( b ) ) + "," );
            }
        }
        private int unsignedExtension( byte b ) {
            return (int)b & 0xFF;
        }
    }

  • Non-unicode program support?

    Hi, I'm new to mac. In Windows' international setting, I can select a language to use for non-unicode programs, so that non-unicode characters in the selected language (such as song tags) can be displayed properly. But in Leopard, I cannot find the equivalent setting. Some songs with non-unicode tags are not displayed properly in iTunes, although they are fine in iTunes on my Windows computers. Also, I have some non-unicode contact information in Palm Desktop synchronized from my Treo phone. It looks OK in Palm Desktop on Windows, but not on Leopard.
    Does such a setting exist in Mac OS X?
    Thank you.

    Does such a setting exist in Mac OS X?
    No, OS X only uses Unicode, so you have to convert the legacy character set stuff to that.
    Of course some apps, like TextEdit and Safari and Mail, can read things in legacy charsets, but they convert it to Unicode when they do. TextEdit can also save things in legacy charsets.
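    In code, that conversion is straightforward. A Java sketch of decoding legacy single-byte data (for example, old song tags) into Unicode text; the windows-1251 charset is an illustrative assumption:
    import java.nio.charset.Charset;

    public class LegacyToUnicode {
        public static void main(String[] args) {
            // Bytes for "При" in the windows-1251 code page.
            byte[] legacyBytes = { (byte) 0xCF, (byte) 0xF0, (byte) 0xE8 };
            String text = new String(legacyBytes, Charset.forName("windows-1251"));
            System.out.println(text); // prints При
        }
    }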

  • Issue with Data flow between Unicode and Non Unicode systems

    Hello,
    I have scenario as below,
    We have a Unicode ECC 6.0 system and a UTF-7 legacy system.
    A message flows from the legacy system to the ECC 6.0 system, and the data is about 700 KB in size.
    Will there be any issue, given that one system is Unicode and the other is non-Unicode?
    Kindly let me know.
    Thanks & Regards
    Vivek

    Hi,
    To add to Mike's post...
    You indicate that your legacy system is non-Unicode and the ERP system is Unicode. You also said that the data flow is only from the legacy system to the ERP system. In this case, you should have no data issues, since the Unicode system is the receiving system. There are data issues when the data flow is in the other direction: from a Unicode system to a non-Unicode system. Here, the non-Unicode system can only process characters that exist on its codepage, and care must be taken by sending systems to ensure that they only send characters that are on the receiving system's codepage (as Mike says above).
    Best Regards,
    Matt
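    That "care must be taken" step can be automated: before sending, check that every string is representable in the receiver's codepage. A Java sketch (windows-1252 is an illustrative choice for the receiving codepage):
    import java.nio.charset.Charset;
    import java.nio.charset.CharsetEncoder;

    public class CodepageCheck {
        public static void main(String[] args) {
            CharsetEncoder encoder = Charset.forName("windows-1252").newEncoder();
            String[] samples = { "plain ASCII", "Grüße", "Здравствуйте" };
            for (String s : samples) {
                // canEncode reports whether the string survives the codepage.
                System.out.println(s + " -> "
                        + (encoder.canEncode(s) ? "safe to send" : "not representable"));
            }
        }
    }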

  • Unicode and non-unicode string data types Issue with 2008 SSIS Package

    Hi All,
    I am converting a 2005 SSIS package to 2008. I have a task which has SQL Server as the source and Oracle as the destination. I copy the data from a SQL Server view with an nvarchar(10) field to a varchar(10) field of an Oracle table. The package executes fine on my local machine when I use the data conversion task to convert to DT_STR, but when I deploy the dtsx file on the server and try to run it from a SQL Agent job, it gives me the unicode and non-unicode string data types error for the field. I have checked the registry settings, and they are the same on my local machine and the server. I tried both the Data Conversion task and the Derived Column task, but with no luck. Please suggest what changes are required in my package to run it from the SQL Agent job.
    Thanks.

    What are Unicode and non-Unicode data formats?
    Unicode:
    A Unicode character takes more bytes to store in the database. As we all know, many global companies want to increase their business worldwide, and to grow they widen their business by providing services to customers worldwide, supporting different languages like Chinese, Japanese, Korean and Arabic. Many websites these days support international languages to do business and attract more customers, and that makes life easier for both parties.
    To store customer data, the database must support a mechanism to store international characters. Storing these characters is not easy, and many database vendors had to revise their strategies and come up with new mechanisms to support or store them. Big vendors like Oracle, Microsoft and IBM, among other database vendors, started providing international character support so that the data can be stored and retrieved accordingly, avoiding any hiccups while doing business with international customers.
    The difference in storing character data between Unicode and non-Unicode depends on whether the non-Unicode data is stored using double-byte character sets. All non-East-Asian languages and Thai store non-Unicode characters in single bytes, so storing these languages as Unicode uses twice the space that a non-Unicode code page uses. On the other hand, the non-Unicode code pages of many other Asian languages specify character storage in double-byte character sets (DBCS), so for these languages there is almost no difference in storage between non-Unicode and Unicode.
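    To make the storage arithmetic concrete, a small Java sketch comparing a single-byte code page with UTF-16, the encoding behind nchar/nvarchar; the charset choices are illustrative:
    import java.nio.charset.StandardCharsets;

    public class StorageSizes {
        public static void main(String[] args) throws Exception {
            String s = "Hello";
            // Single-byte code page: one byte per character.
            System.out.println(s.getBytes("windows-1252").length);            // 5
            // UTF-16 (as used by nchar/nvarchar): two bytes per character.
            System.out.println(s.getBytes(StandardCharsets.UTF_16LE).length); // 10
        }
    }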
    Encoding Formats: 
    Some of the common encoding formats for Unicode, such as UCS-2, UTF-8, UTF-16 and UTF-32, have been made available by database vendors to their customers. For SQL Server 7.0 and higher versions, Microsoft uses the UCS-2 encoding format to store Unicode data. Under this mechanism, all Unicode characters are stored using 2 bytes.
    Unicode data can be encoded in many different ways. UCS-2 and UTF-8 are two common ways to store bit patterns that represent Unicode characters. Microsoft Windows NT, SQL Server, Java, COM, and the SQL Server ODBC driver and OLEDB
    provider all internally represent Unicode data as UCS-2.
    The options for using SQL Server 7.0 or SQL Server 2000 as a backend server for an application that sends and receives UTF-8-encoded Unicode data depend on the application stack. For example, if your business is using a website with ASP pages, this is what happens:
    If your application uses Active Server Pages (ASP) and you are using Internet Information Server (IIS) 5.0 and Microsoft Windows 2000, you can add "<% Session.Codepage=65001 %>" to your server-side ASP script.
    This instructs IIS to convert all dynamically generated strings (example: Response.Write) from UCS-2 to UTF-8 automatically before sending them to the client.
    If you do not want to enable sessions, you can alternatively use the server-side directive "<%@ CodePage=65001 %>".
    Any UTF-8 data sent from the client to the server via GET or POST is also converted to UCS-2 automatically. The Session.Codepage property is the recommended method to handle UTF-8 data within a web application. This Codepage
    setting is not available on IIS 4.0 and Windows NT 4.0.
    Sorting and other operations:
    The effect of Unicode data on performance is complicated by a variety of factors that include the following:
    1. The difference between Unicode sorting rules and non-Unicode sorting rules 
    2. The difference between sorting double-byte and single-byte characters 
    3. Code page conversion between client and server
    Operations like >, <, and ORDER BY are resource-intensive, and it is difficult to get correct results if codepage conversion between client and server is not available.
    Sorting lots of Unicode data can be slower than non-Unicode data, because the data is stored in double bytes. On the other hand, sorting Asian characters in Unicode is faster than sorting Asian DBCS data in a specific code page,
    because DBCS data is actually a mixture of single-byte and double-byte widths, while Unicode characters are fixed-width.
    Non-Unicode:
    Non-Unicode is exactly the opposite of Unicode. With non-Unicode types it is easy to store languages like English, but not Asian languages that need more bytes to store correctly; otherwise truncation occurs.
    Now, let’s see some of the advantages of not storing the data in Unicode format:
    1. It takes less space to store the data in the database, so we save a lot of hard disk space.
    2. Moving database files from one server to another takes less time.
    3. Backup and restore of the database take less time, which is good for DBAs.
    Non-Unicode vs. Unicode Data Types: Comparison Chart
    The primary difference between non-Unicode and Unicode data types is the ability of Unicode to easily handle the storage of foreign-language characters, which also requires more storage space.
    Data types: non-Unicode uses char, varchar, text; Unicode uses nchar, nvarchar, ntext.
    Length: both store data in fixed or variable length.
    Padding: char data is padded with blanks to fill the field size (for example, if a char(10) field contains 5 characters, the system pads it with 5 blanks); nchar behaves the same. varchar stores the actual value and does not pad with blanks; nvarchar behaves the same.
    Storage per character: non-Unicode requires 1 byte; Unicode requires 2 bytes.
    Maximum length: char and varchar can store up to 8000 characters; nchar and nvarchar up to 4000 characters.
    Best suited for: non-Unicode is best suited for US English: "One problem with data types that use 1 byte to encode each character is that the data type can only represent 256 different characters. This forces multiple encoding specifications (or code pages) for different alphabets such as European alphabets, which are relatively small. It is also impossible to handle systems such as the Japanese Kanji or Korean Hangul alphabets that have thousands of characters." Unicode is best suited for systems that need to support at least one foreign language: "The Unicode specification defines a single encoding scheme for most characters widely used in businesses around the world. All computers consistently translate the bit patterns in Unicode data into characters using the single Unicode specification. This ensures that the same bit pattern is always converted to the same character on all computers. Data can be freely transferred from one database or computer to another without concern that the receiving system will translate the bit patterns into characters incorrectly."
    https://irfansworld.wordpress.com/2011/01/25/what-is-unicode-and-non-unicode-data-formats/
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • CRVS2010 Beta - Cannot export report to PDF with unicode characters

    My report has some Unicode data (Chinese); it can be previewed properly in the Windows Forms report viewer. However, if I export the report document to a PDF file, the Unicode characters in the exported file are all displayed as squares.
    Crystal Reports 2008 R2 can export the Chinese characters to PDF when I select a Chinese font in the report, but the VS2010 beta cannot export the Chinese characters even when a Chinese font is selected.

    Barry, what is the specific font you are using?
    The below is a reformatted response from Program Management:
    Using a non-Chinese font with Unicode (Chinese) characters, the issue is reproducible: it occurs when the Arial font is used in the Unicode characters field. After changing the field's font to SimSun (a Chinese font named 宋体 in the report), the problem is solved in both Cortez and CR.
    Ludek

  • Cannot convert between unicode and non-unicode string data types.

    I'm trying to copy the data from 21 tables in a SQL 2005 database to a MS Access database using SSIS. Before converting the SQL database from 2000 to 2005 we had this process set up as a DTS package that ran every month for years with no problem.  The only way I can get it to work now is to delete all of the tables from the Access DB and have SSIS create new tables each time. But when I try to create an SSIS package using the SSIS Import and Export Wizard to copy the SQL 2005 data to the same tables that SSIS itself created in Access I get the "cannot convert between unicode and non-unicode string data types" error message. The first few columns I hit this problem on were created by SSIS as the Memo datatype in Access and when I changed them to Text in Access they started to work. The column I'm stuck on now is defined as Text in the SQL 2005 DB and in Access, but it still gives me the "cannot convert" error.

    I was getting the same error while transferring data from SQL 2005 to Excel, but using the following methods I was able to transfer the data. Hopefully they may also help you.
    1) Using a Data Conversion transformation
       the data type you need to select is DT_WSTR (Unicode, in SQL 2005 terms)
    2) Using a Derived Column transformation
       the expression you need to use is:
        (DT_WSTR, 20) (note: 20 can be replaced by your character size)
    Note:
    The above two methods create a replica of your existing column (the default name will be "copy of <column name>").
    When mapping the data, do not map the actual column to the destination; instead select the column created by either of the above transformations (the replicated column).

  • Non-English characters processed correctly by XML Parser 2 XSLT?

    I'm trying to transform an XML document (parsed as an XMLDocument) using an XSL stylesheet (parsed as an XSLStylesheet) and the XSLProcessor class in Java, and I encounter the following problem:
    Non-US characters such as German umlauts, stored in the XML in &#xxx; form, are not processed properly. "ü" (&#252;), for example, comes out as "C<". Is this a bug in the XSLProcessor class or am I doing something wrong? I'm using this stylesheet declaration:
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns="http://www.w3.org/1999/xhtml1/strict"> Or should I mess with the encoding attribute of the <?xml ...?> PI?
    tia
    John Smith

    I have not specified any encoding.
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="completeproduct.xsl"?>
    <PRODUCT connection="demosample" xmlns:xsql="urn:oracle-xsql">
    <xsql:query>
    select * from products
    </xsql:query>
    Should I specify an encoding?

  • Unable to load German characters in NON Unicode Essbase Cube

    Hi Guys,
    This is what we want to do:
    Build a Cube for Germany on our Essbase server in US. Our users will access cube using Excel Add-In from Germany. But since the Essbase server is in US, system environment variable ESSLANG is set to English_UnitedStates.Latin1@Binary.
    The version of Essbase we are using is 7.1.3.
    What we tried (and what failed):
    To load German characters from our dimension build text file, we added a header: //ESS_LOCALE German_Germany.Latin1@Default
    at the beginning of the dim build text file, hoping the rule file would understand that the file contains German characters and load it correctly. Then, using EAS, I load the dimensions using the corresponding rule file.
    Essbase loads the dimensions correctly, with no error, but when it encounters German characters it replaces them with a question mark "?".
    Some of the German characters are: ß Ü ü Ö ö Ä ä Å Ä Ö
    Lastly, the reason we do not want to build a Unicode cube is that the Excel Add-In will not work with Unicode cubes.
    It's urgent. Please help.
    Thanks.

    The simple and easy way to check:
    "non-unicode character sets are not supported on unicode systems any longer. Am I right?"
    Transaction code I18N:
    Select Troubleshooting --> Printing test --> Smartforms --> Multiple scripts, select your output device, and see the print preview. It will display all supported characters.
    I guess the above information will be useful for closing the thread.
    Regards,
    SaiRam

  • Cannot see folder in non-English characters.

    I have set up PySDM to mount one of my hard drives on boot. The problem is that I cannot see a folder there (with ls or Nautilus) whose name is in non-English (Russian) characters. Ubuntu was able to see it properly, and I verified it was there by booting back to Windows.

    *vitali* wrote: Ok, what should it be?
    UTF-8 maybe (or utf8, I'm never sure which one to use).
    Try mounting it manually, check for errors, and see if it works as you want.
    Last edited by R00KIE (2009-12-27 12:49:57)

  • Cannot create file with non-Latin characters - I/O

    I'm trying to create a file with Greek (or any other non-Latin) characters for use in a RegEx demo.
    I can't seem to create the characters, and I'm thinking I'm doing something wrong with the I/O.
    The code follows. Any insight would be appreciated. - Thanks
    import java.util.regex.*;
    import java.io.*;
    public class GreekChars {
         public static void main(String[] args) throws Exception {
              int c;
              createInputFile();
    //          String input = new BufferedReader(new FileReader("GreekChars.txt")).readLine();
    //          System.out.println(input);
              FileReader fr = new FileReader("GreekChars.txt");
              while( (c = fr.read()) != -1 )
                   System.out.println( (char)c );
         }
         public static void createInputFile() throws Exception {
              PrintStream ps = new PrintStream(new FileOutputStream("GreekChars.txt"));
              ps.println("\u03A9\u0398\u03A0\u03A3"); // omega,theta,pi,sigma
              System.out.println("\u03A9\u0398\u03A0\u03A3"); // omega,theta,pi,sigma
              ps.flush();
              ps.close();
              FileWriter fw = new FileWriter("GreekChars.txt");
              fw.write("\u03A9\u0398\u03A0\u03A3", 0, 4);
              fw.flush();
              fw.close();
         }
    }
    /*
    // using a printstream to create file ... and BufferedReader to read
    C:> java GreekChars
    // using a Filewriter to create files .. and FileReader to read
    C:> java GreekChars
    */

    "Construct your file writer using a unicode format. If you don't then the file is written using the platform "default" format - probably ascii. example: FileWriter fw = new FileWriter("GreekChars.txt", "UTF-8");"
    I don't know what version of FileWriter you are using, but none that I know of takes two String parameters. You should try checking the API before trying to help someone, instead of just making things up.
    To the OP:
    The proper way to produce a file in UTF-8 format would be this:
    OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream("filename"), "UTF-8");
    Then to read the file, you would use:
    InputStreamReader reader = new InputStreamReader(new FileInputStream("filename"), "UTF-8");

  • We cannot type Polish (non-Latin) characters in WebDynpro applications

    We cannot type Polish (non-Latin) characters in a WebDynpro application (at runtime) because 'Browser Help Shortcuts' are fired.
    To type a Polish character on a Polish keyboard you need to press AltGr + letter (i.e. AltGr + a/c/e/s/o/l/z/x/n). To type an uppercase Polish character you need to press AltGr + Shift + letter. This combination is in fact the same as pressing Alt + Ctrl + Shift + letter (because AltGr produces Alt + Ctrl), and it fires some of the 'Browser Help Shortcuts'. For example, AltGr + Shift + O should produce a letter O with an accent on its top, but instead it fires 'Show nesting of HTML containers'.
    We tried to turn off sap-wd-lightspeed, but then other key combinations are reserved for 'Browser Help Shortcuts'.
    We need to be able to use AltGr + Shift + a/c/e/s/o/l/z/x/n at runtime.
    Product: SAP NW 7.11 SP04
    WebDynpro for Java
    I hope there is a hidden parameter somewhere that solves our problem. Maybe we're in some kind of debug mode?
    Thanks for your help!!

    The funny thing is that the bold font [when a message is unread in the message list] shows OK, i.e. in Greek, but when I click on an unread message it is assumed to have been read, so it changes over to medium [non-bold] and the encoding changes as well into one that is not Greek and thus unreadable. In ~/.sylpheed/sylpheedrc the fonts are:
    widget_font=
    message_font=-microsoft-sylfaenarm-medium-r-normal-*-*-160-*-*-p-*-iso8859-7
    normal_font=-monotype-arial-medium-r-normal-*-12-*-*-*-*-*-iso8859-7
    bold_font=-monotype-arial-bold-r-normal-*-12-*-*-*-*-*-iso8859-7
    small_font=-monotype-arial-medium-r-normal-*-12-*-*-*-*-*-iso8859-7
    In /etc/gtk, for gtk1.2 apps, the file referring to Greek encoding [el] seems to be fine [exactly the same as in Slackware 9.1].
