Charset problem on E75

Hello,
I'm using an Exchange mail account on my phone. It seems like the default charset for the phone is ISO-8859-1, but I need it to be UTF-8. How do I change this on an E75?
/Zarre

Hi
I have the same problem with the E75 and the embedded mail client for Exchange; updating to the latest software doesn't solve the issue. Using an E66 or E71 I have an option, when creating a new e-mail message, to change the charset from ISO to UTF-8. Does anyone know how to resolve this issue?
Thanks!

Similar Messages

  • [Gnome - GDM] Locale/Charset Problem

    Hi everybody,
    I'm using Gnome with GDM as display manager.
    I have a problem with my keyboard settings, but I don't know why.
    The first charset/locale option is set in the /boot/grub/menu.lst
    kernel /vmlinuz26 lang=de locale=de_DE.UTF8 root=/dev.....
    I use a LUKS-encrypted /home partition and need a German locale/charset for the passphrase during bootup.
    The second charset option is in my /etc/rc.conf:
    LOCALE="de_DE.UTF8"
    KEYMAP="de"
    With those settings I get an almost "normal" keyboard behaviour.
    The umlaut characters, for example, aren't working during bootup or in a tty. In a shell started later inside Gnome everything works fine.
    The second charset problem is during password input in GDM. The layout isn't working right there either.
    I already tried the .dmrc settings for Language and Keys, but nothing works.
    Anyone got a clue how to fix that?
    Greets
    Flo

    Maybe you can change the subject of this thread, as the issue is not only related to gdm, IMHO.
    I don't get it either. Settings are LOCALE="sv_SE" and KEYMAP="sv-latin1" in rc.conf, and locale -a gives me
    [root@localhost ~]# locale -a
    C
    POSIX
    sv_SE
    sv_SE.iso88591
    swedish
    But something is still wrong, because calling up some manpages gives me
    Cannot open the message catalog "man" for locale "sv_SE"
    (NLSPATH="<none>")
    I ran locale-gen, but I'm pretty lost in this locale business too...

  • Mail attachment charset problem

    Hello,
    I have made a program which is able to send iCalendar files as an attachment. I get the data as an InputStream.
    My problem is that the iCalendar file doesn't show certain accented letters (they come through garbled). I have tried to use iso-8859-1 in the MimeBodyPart header line and in the ByteArrayDataSource, but it doesn't work?!
    Where can I specify which charset I want to use?
    MimeBodyPart mbp3 = new MimeBodyPart();
    mbp3.setFileName( m.getAttachmentFileName() );
    mbp3.setHeader("Content-Class", "urn:content-classes:calendarmessage");
    mbp3.setHeader("Content-ID", "calendar_message");
    mbp3.addHeaderLine("charset=iso-8859-1");
    java.io.InputStream inputStream = null;
    try {
        inputStream = m.getAttachmentFile().getBinaryStream();
        // declare on the data source the charset the bytes actually use
        mbp3.setDataHandler( new DataHandler( new javax.mail.util.ByteArrayDataSource( inputStream, "text/calendar;charset=iso-8859-1;method=REQUEST" ) ) );
    } catch ( Exception e ) {
        e.printStackTrace(); // don't swallow exceptions silently
    }
    mpRoot.addBodyPart(mbp3);

    Yes, you are right... Thank you.
    I removed the line
    mbp3.addHeaderLine("charset=iso-8859-1");
    and now the letters are shown correctly when opening the iCalendar file in a text editor.
    But when opening the file in Outlook the accented letters are removed?! I know the problem isn't in my mail code, so it must be in the iCal file?!
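
    A note for anyone hitting the same wall: the charset declared on the attachment has to match the encoding the bytes were actually written in. If the .ics bytes are UTF-8 but the part advertises iso-8859-1 (or the other way round), a text editor and Outlook will each mangle it differently. A minimal sketch of keeping the two in sync; the helper name, the UTF-8 choice and the file name are assumptions, not from the thread:

    import java.nio.charset.StandardCharsets;
    import javax.activation.DataHandler;
    import javax.mail.MessagingException;
    import javax.mail.internet.MimeBodyPart;
    import javax.mail.util.ByteArrayDataSource;

    // Hypothetical helper: encode the calendar text once and declare that exact charset.
    static MimeBodyPart buildCalendarPart(String icsText) throws MessagingException {
        String mime = "text/calendar; charset=UTF-8; method=REQUEST";
        byte[] icsBytes = icsText.getBytes(StandardCharsets.UTF_8); // bytes really are UTF-8
        MimeBodyPart part = new MimeBodyPart();
        part.setDataHandler(new DataHandler(new ByteArrayDataSource(icsBytes, mime)));
        part.setHeader("Content-Type", mime);
        part.setFileName("invite.ics");
        return part;
    }

    Outlook tends to be stricter than a text editor about the declared charset actually matching the bytes, so checking that pairing is usually the first step.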

  • JSP, Javabean charset problem

    I have some JSP pages where I try to dynamically present some drop-down
    menus for the users to select values. I use a simple bean to manage it.
    The problem is that those values are in non-iso8859-1 charset and I only
    get ?????? rendered in the select box. I define an array (inline in the
    JSP page as code scriptlet), write all possible (String) options for the
    drop-down menu there and in the bean I do some calculations and render
    the drop-down menu.
    String label[]={"something in iso-8859-7 encoding in here","something in
    iso-8859-7 encoding in here","something in iso-8859-7 encoding in here"};
    and in the bean I have a for-loop to access this.
    The page directive is set to iso-8859-7.
    I think there is some kind of transparent translation, that has to do
    with Java language, and after the rendering I only get ???? instead of
    the correct iso-8859-7 value in the browser.
    Any help appreciated.
    (Tomcat, Apache web server, JDK 1.3)
    PS: This JSP page is used to submit some data in an Oracle database
    (according to the selection of the user in the drop-down box), so I also
    use JDBC 1.3, but I don't think that's relevant at all with my problem...
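
    For what it's worth, "????" at render time usually means the strings were encoded or decoded with the wrong charset somewhere between the .jsp source file and the HTTP response, because the ISO-8859-7 letters have no representation in the charset actually used. A minimal sketch of the two directives that control this (ISO-8859-7 as in the question; pageEncoding is a JSP 1.2 feature, so whether it is available on this Tomcat/JDK 1.3 setup is an assumption):

    <%@ page contentType="text/html; charset=ISO-8859-7" pageEncoding="ISO-8859-7" %>
    <%
        // These literals survive only if the .jsp file itself is saved in
        // ISO-8859-7 and the container reads it with that same encoding.
        String[] label = { "option one", "option two", "option three" };
    %>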

  • JDev 1013 EA1: embedded OC4J default-charset problem

    Hi
    I'm a web-application programmer interested in JDeveloper.
    I had been developing with JDeveloper 10.1.2.
    When I used 10.1.2, I added [default-charset="EUC-KR"] to [JDEV_HOME/systemXX/config/global-application.xml]
    - or [deployment-application/XXX/XXX/orion-web.xml] -
    because I don't use the Latin alphabet.
    Anyway, it had effect.
    But I ran into a problem after switching to JDeveloper 10.1.3 EA1:
    I set default-charset in the embedded OC4J,
    but it had no effect.
    If that is a bug, please tell me how to use an installed OC4J in JDeveloper.
    Please help me.
    Thanks for reading this message.
    Message was edited by:
    user453010

    This looks like a bug in OC4J.
    Can you try this and let us know if it works?
    Set the charset in the application-level orion-web.xml, deploy it to the standalone OC4J, and see if it works.
    1. Add a new OC4J Deployment Descriptor to your project
    2. Choose orion-web.xml
    3. Open the file in editor
    4. Set the default-charset setting
    <orion-web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://xmlns.oracle.com/oracleas/schema/orion-web-10_0.xsd"
    schema-major-version="10" schema-minor-version="0"
    servlet-webdir="/servlet/" default-charset="EUC-KR"></orion-web-app>
    (Make sure all the other settings are set right. Similar to what you see in [deployment-application/XXX/XXX/orion-web.xml] )
    5. Create a WAR deployment profile
    6. Deploy the application to the standalone OC4J connection

  • Prime Infrastructure 2.0 email charset problem

    Dear all,
    I have a question regarding the guest portal managed through PI. Everything is set up, but one thing doesn't work as I expected.
    When I make an account and print it, everything looks fine. When I send it via email, the encoding of the mail is wrong.
    The problem is that PI uses the charset ANSI_X3.4-1968 (you can see it in the email header).
    I would like to have CP1252 (or UTF-8) in the email instead - basically something that can represent national characters.
    I have found these posts about a similar problem, but have absolutely no idea where to set it up:
    http://thwack.solarwinds.com/thread/58101
    http://helpdesk.ibs-aachen.de/2006/05/13/javamail-error-in-subject-ansi_x34-1968qtestmail_-_best3ftigung_ihrer_email-adresse/
    In the old WCS there was no problem with this.
    Does anyone have a solution for this?
    Thank you
    Pavel

    I see now, this is a poor carry over from NCS. The option that says include "controller" in email does provide the switch hostname. Hopefully by version 3.0 they will have all the wireless naming fixed when it applies to converged.
    Thanks Rob!
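
    Some background on the charset name itself (general JavaMail/JVM behaviour, not PI-specific): ANSI_X3.4-1968 is the JVM's name for plain US-ASCII, and JavaMail falls back to it when the Java process is started without a usable locale. Where you control the code or the JVM options, the standard knob looks like the sketch below; whether Prime Infrastructure exposes a supported way to set it is an assumption.

    // mail.mime.charset overrides the file.encoding-derived default that
    // JavaMail uses for message headers and text parts.
    System.setProperty("mail.mime.charset", "UTF-8");
    // Equivalent JVM flags at startup: -Dmail.mime.charset=UTF-8 -Dfile.encoding=UTF-8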

  • Charset problems

    I also have problems with the charset.
    I installed the German translation, and IdM doesn't save German special characters in my MySQL DB but shows me cryptic characters instead.
    When deleting a user on the English admin site there's no problem, but IdM shows me an error message when deleting a user in the German UI. Although the user is deleted, IdM isn't able to handle the special German characters...

    Hi,
    I migrated a production environment with 40k user IDs and 160k accounts from MySQL 4.0.x to 4.1.x two weeks ago (unsure about the exact subversions atm).
    We did have some issues (unable to delete IdM orgs) with collations, as there were major character set and collation changes between those MySQL versions. This could be fixed with a few ALTER TABLEs though.
    Oh, and I forgot to mention that 80% of our 380 admins are located in Germany, just as the servers are. So there are plenty of umlauts (ä, ö, ü, ß).
    As it is a production environment, dozens of users get deleted on a daily basis, and I didn't encounter your problem. What JDBC driver (version) are you using?
    Regards,
    Patrick
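
    For reference, with MySQL Connector/J the connection URL decides which character set the driver negotiates, so after a 4.0.x -> 4.1.x migration it is worth pinning it explicitly. A sketch; host, database and credentials are placeholders:

    // Force UTF-8 on the wire so German characters survive the round trip
    // (standard Connector/J properties; the URL itself is hypothetical).
    String url = "jdbc:mysql://dbhost:3306/idm?useUnicode=true&characterEncoding=UTF-8";
    java.sql.Connection con = java.sql.DriverManager.getConnection(url, "idmuser", "secret");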

  • Charset problem!!

    Hello everybody, there is something I need help with.
    I have XML files, created by me in Java, in different charsets. The problem is when I try to read a UTF-8 file with:
    StreamSource sts = new StreamSource(new BufferedReader(new InputStreamReader(new FileInputStream(filename), "UTF-8")));
    When I read an ISO-8859-1 file it works fine, but with UTF-8 there seems to be a problem, because I get an error parsing the XML. But if I use:
    StreamSource sts = new StreamSource(filename);
    it works for UTF-8 but not for ISO (I think UTF-8 is Java's default).
    So, any ideas??

    If you pass the XML parser an InputStream or
    something else where it can read the bytes, it will
    get things right (assuming the XML declares its
    encoding correctly). If you pass the XML parser a
    Reader where you apply the wrong encoding, the parser
    will not be able to get things right. Choose one of
    those two.

    That's why I used:
    StreamSource sts = new StreamSource(new BufferedReader(new InputStreamReader(new FileInputStream(filename), "UTF-8")));
    I read that InputStreamReader is the best way to read characters correctly using a given encoding.
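
    A minimal sketch of the byte-oriented route the first reply recommends (filename is the same variable as above): hand the parser the raw stream and let it honour each file's own encoding declaration, instead of hard-coding a Reader charset.

    import java.io.FileInputStream;
    import javax.xml.transform.stream.StreamSource;

    // Works for UTF-8 and ISO-8859-1 files alike, provided each XML file starts
    // with a correct declaration such as <?xml version="1.0" encoding="ISO-8859-1"?>.
    StreamSource sts = new StreamSource(new FileInputStream(filename));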

  • WebService response charset problems

    Some time ago I posted a problem regarding unicode characters in WebService
    responses from WL7.0. I received a patch that helped partially - after the
    patch was applied, WL no longer threw any exception when a UTF character
    was included in the response. So far so good.
    However, a problem arises when I call the WebService from a .NET client;
    .NET doesn't understand that the response is utf-8 encoded, so when the
    response is deserialized on the client side the encoded characters (such as
    å, ä, ö) come out as question marks. It seems that the Content-Type header
    doesn't specify the correct charset (I would expect something like
    'Content-Type:text/xml; charset=utf-8', but the charset=... part seems to be
    missing)
    By fiddling about a bit with the .NET generated proxy class I managed to
    force .NET to think that the Content-Type mime header does in fact contain
    the correct value (quite messy - I can supply the code if anyone should be
    interested). However, this should not be necessary - the solution I came up
    with is awkward and the only thing needed is that the correct Content-Type
    header be included in the WebService response. Is there a way to specify a
    default value for this?
    I tried creating a handler to intercept the response and set this specific
    MIME header, but no luck - the value I set seems to be ignored (I tried
    ctx.getMessage().getMimeHeaders().setHeader("Content-Type", "text/xml;
    charset=utf-8"); as well as ...addHeader()). Besides, even if this did work
    it would seem unnecessarily complicated to create a handler and set it to
    handle all the methods in my WebService (there are quite a few).
    Any ideas?
    /Mattias Arthursson
    Compost Marketing

    This problem should be fixed in SP1. If the system property
    user.lang is not English, then SP1 will use utf-8 as the charset
    (I think this will be your case).
    In SP1 you can also set a system property to change the charset:
    weblogic.webservice.i18n.charset="my-char-set"
    regards,
    -manoj
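
    A sketch of how that property would be applied; the property name comes from the reply above, but setting it programmatically rather than as a -D flag is an assumption:

    // Must run before the WebLogic web-service runtime initializes,
    // e.g. first thing in main(); otherwise pass it at startup as
    // -Dweblogic.webservice.i18n.charset=utf-8
    System.setProperty("weblogic.webservice.i18n.charset", "utf-8");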
    "Mattias Arthursson" <[email protected]> wrote in message
    news:[email protected]...
    Some time ago I posted a problem regarding unicode characters inWebService
    responses from WL7.0. I received a patch that helped partially - after the
    patch was applied, WL no longer threw any exception when an utf character
    was included in the response. So far so good.
    However, a problem arises when I call the WebService from a .NET client;
    .NET doesn't understand that the response is utf-8 encoded, so when the
    response is deserialized on the client side the encoded characters (suchas
    å, ä, ö) come out as question marks. It seems that the Content-Type header
    doesn't specify the correct charset (I would expect something like
    'Content-Type:text/xml; charset=utf-8', but the charset=... part seems tobe
    missing)
    By fiddling about a bit with the .NET generated proxy class I managed to
    force .NET to think that the Content-Type mime header does in fact contain
    the correct value (quite messy - I can supply the code if anyone should be
    interested). However, this should not be necessary - the solution I cameup
    with is awkward and the only thing needed is that the correct Content-Type
    header be included in the WebService response. Is there a way to specify a
    default value for this?
    I tried creating a handler to intercept the response and set this specific
    mime header, but no luck - the value I set seems to be ignored (i tried
    ctx.getMessage().getMimeHeaders().setHeader("Content-Type", "text/xml;
    charset=utf-8");, as well as ...addHeader()). Besides, even if this didwork
    it would seem unnecessarity complicated to create a handler and set it to
    handle all the methods in my WebService (there are quite a few).
    Any ideas?
    /Mattias Arthursson
    Compost Marketing

  • Charset problems using SQLLDR

    Hello!
    My task is to import data from a Microsoft Access database into an Oracle database. I have a script which creates flat files and control files with the data from Access. My problem is the character set. There are some characters (e.g. the typical German quotation mark at the BOTTOM, or the "longer dash" you get from MS Word by typing "blank+dash+blank") which obviously cannot be converted, so I end up with upside-down question marks. I tried many different charsets: different NLS charset settings on the Oracle side and many charsets as the parameter in the control files. But no combination gave me the right result. Does anybody know how I can get rid of this conversion disaster?
    Best regards, Sascha Meyer

    Hello!
    Of course I can give you that information:
    Code points:
    201C, 201D, 201E, 2013, 2014
    nls_database_parameters:
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT DD.MM.RR HH24:MI:SSXFF
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    NLS_TIMESTAMP_TZ_FORMAT DD.MM.RR HH24:MI:SSXFF TZR
    NLS_DUAL_CURRENCY ?
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_RDBMS_VERSION 10.2.0.2.0
    nls_session_parameter:
    PARAMETER VALUE
    NLS_LANGUAGE GERMAN
    NLS_TERRITORY GERMANY
    NLS_CURRENCY €
    NLS_ISO_CURRENCY GERMANY
    NLS_NUMERIC_CHARACTERS ,.
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD.MM.RR
    NLS_DATE_LANGUAGE GERMAN
    NLS_SORT GERMAN
    NLS_TIME_FORMAT HH24:MI:SSXFF
    NLS_TIMESTAMP_FORMAT DD.MM.RR HH24:MI:SSXFF
    NLS_TIME_TZ_FORMAT HH24:MI:SSXFF TZR
    NLS_TIMESTAMP_TZ_FORMAT DD.MM.RR HH24:MI:SSXFF TZR
    NLS_DUAL_CURRENCY €
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    Default value is WE8ISO8859P15
    I tried these settings:
    Database: WE8ISO8859P15
    Charset used in control file: WE8MSWIN1252, UTF8, WE8ISO8859P15, WE8ISO8859P1, AL32UTF8
    Database: UTF8
    Charset used in control file: WE8ISO8859P15, UTF8, WE8ISO8859P1, AL32UTF8
    Database: WE8MSWIN1252
    Charset used in control file: WE8ISO8859P15, UTF8, WE8MSWIN1252, WE8ISO8859P1, AL32UTF8
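
    Those code points explain the symptom (an observation from the character tables, not from the thread): U+201C-U+201E (curly quotes) and U+2013/U+2014 (dashes) exist in windows-1252 (WE8MSWIN1252) but not in ISO-8859-15 (WE8ISO8859P15), so any route into a WE8ISO8859P15 database has to drop them. A quick Java check, offered as a sketch:

    import java.nio.charset.Charset;

    public class QuoteCheck {
        public static void main(String[] args) {
            // The Word "smart" quotes and dashes from the post above.
            String chars = "\u201C\u201D\u201E\u2013\u2014";
            for (char c : chars.toCharArray()) {
                System.out.printf("U+%04X  ISO-8859-15: %b  windows-1252: %b%n",
                        (int) c,
                        Charset.forName("ISO-8859-15").newEncoder().canEncode(c),
                        Charset.forName("windows-1252").newEncoder().canEncode(c));
            }
        }
    }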

  • Russian charset problem

    I've got an Oracle 8.1.7 database and Oracle 9iAS installed on the same machine, Windows NT.
    When I try to create a form containing Russian field names with Portal, it stores them (and then displays them) in the wrong charset. The NLS_LANG parameters are all set to RUSSIAN_CIS.CL8MSWIN1251.
    Does anyone know how to avoid this problem?

    It's a bug.
    When you create a new form, edit it and add labels in Russian; before clicking OK you should click on the root of the component tree (Form). In this case all Russian characters will be saved with the proper encoding.

  • Charset problem -- NLS_LANG the reason

    I have the following problem:
    I use a Unix server running a 64-bit Oracle Server 10.2.0.2.0.
    On this database I store my SAP data.
    I also have a Windows 2000 server where I run a 32-bit Oracle 10.2.0.1.0 client.
    From this client I want to run queries with MS Access 2003 via ODBC.
    There is no connection problem, but there is a problem with the characters shown in the linked table in Access:
    in SAP on the server I see, for example, "Moneriő"; in my Access table I see only "Monerio" ---> a modified "o".
    Where could the error be?
    Is the reason a wrongly adjusted NLS_LANG in the Windows registry on the client???
    There I can read "GERMAN_GERMANY.WE8MSWIN1252".
    On the server I can see with "select * from nls_database_parameters" the following:
    NLS_NCHAR_CHARSET ---> UTF8
    NLS_LANGUAGE --> AMERICAN
    NLS_TERRITORY --> AMERICA
    Would it solve my problem if I changed NLS_LANG on the client to the same values as those on the server?
    Please help me, thanks.

    Hi, thanks for your response.
    I had a look at the NLS_CHARACTERSET of my database. It is also UTF8.
    This charset is optimal for my requirements; it contains all the characters I want to use.
    Now my question again: is it necessary to change
    NLS_NCHAR_CHARSET to UTF8
    NLS_LANGUAGE to AMERICAN
    NLS_TERRITORY to AMERICA
    Or what else could be the reason??
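
    One observation that may explain the "Monerio" symptom (drawn from the character tables, not from this thread): 'ő' (U+0151) does not exist in WE8MSWIN1252 at all, so as long as the client-side NLS_LANG declares windows-1252, the conversion from the UTF8 database cannot preserve it. A quick check in Java, as a sketch:

    import java.nio.charset.Charset;

    // U+0151 (o with double acute) has no windows-1252 code point.
    boolean fits = Charset.forName("windows-1252").newEncoder().canEncode('\u0151');
    System.out.println(fits); // prints false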

  • Informix 2 Oracle... charset problem?

    Hi,
    I'm trying to migrate data from Informix 9 to Oracle 9.2, but while migrating two tables I get these errors:
    ORA-01401: inserted value too large for column
    and
    ORA-01461: can bind a LONG value only for insert into a LONG column
    The same procedure works fine if I migrate the data to another Oracle DB server... the differences between these two Oracle instances are:
    WORKS:
    NLS_RDBMS_VERSION = 10.2.0.1.0
    NLS_CHARACTERSET = WE8MSWIN1252
    DOES NOT WORK:
    NLS_CHARACTERSET = UTF8
    NLS_RDBMS_VERSION = 9.2.0.5.0
    Any suggestions? Is it only a problem of character set??

    I reckon that's exactly what it is: a problem of charset and the storage requirements of the data therein. Characters that take one byte in WE8MSWIN1252 can take two or three bytes in UTF8, so with byte-length semantics the same strings can overflow the column. Do you know which table in the repository is having this problem inserting data?
    Message was edited by:
    Barry McGillin

  • Charset problems in LC

    Hi all,
    I have the following problem:
    I've made a form in LC Designer that is filled with data from an XML file.
    When I request this form from the LiveCycle server I get the right PDF, except for the following charset issue:
    all '№' symbols (AKA Numero sign) in the data from the XML file that merges with the PDF get changed to '¹' (AKA superscript one).
    Seems like somewhere on the server I've got a utf-8 --> cp1251 conversion.
    Both the resulting PDF and the XML file with the data seem to be OK and have the utf-8 charset. The only place I could find "CP1251" is in the config.xml that I exported from the LC Server AdminUI page:
    <node name="Output">
    <entry key="RenderedOutputCacheEnabled" value="true"/>
    <entry key="RenderedOutputCachePolicy" value="LRU"/>
    <entry key="charset" value="CP1251"/>
    ...etc
    But when I tried to change this to "utf-8" in config.xml and import it back to the server, nothing happens, and another export gives me the same config.xml with "CP1251" in it.
    Maybe someone can help me to solve this problem and get my '№' back? )
    Thank you!

  • Can't get file size, possibly because of charset problem

    I have an application that needs to recursively find directory sizes, I have a class extending java.io.File with an internal method to do this:
    private static long getSize(File file) {
        long size = file.length();
        if (file.exists()) {
            if (file.isDirectory()) {
                File[] files = file.listFiles();
                for (int i = 0; i < files.length; i++) {
                    size += getSize(files[i]); // recurse into each entry
                }
            }
        } else {
            System.out.println("Problem checking size on \"" + file + "\", reported file size is: " + file.length());
        }
        return size;
    }
    It works (matches du -b output) with the exception of some problematic files I have.
    Output from ls:
    $ ls -l
    total 5461
    -rw-r--r--  1 xyz users    1003 Feb 16 22:06 FileWRS.class
    -rw-r--r--  1 xyz users     831 Feb 16 22:04 FileWRS.java
    -rw-r--r--  1 xyz users     489 Feb 16 22:07 FolderSizes.class
    -rw-r--r--  1 xyz users     198 Feb 16 21:23 FolderSizes.java
    -rwxrwxrwx  1 xyz users 5568138 Apr 25  2004 test.?.abc
    Output from du -b:
    $ du -b
    5583203 .
    Output from my code:
    $ java FolderSizes .
    Problem checking size on "./test.?.abc", reported file size is: 0
    15065
    The character displaying as "?" is actually an accented e and displays correctly in Konqueror (my GUI file browser).
    I don't really know why it can't pick up the character, but I'm more interested in just getting the file size and ignoring the character problem. Any solutions/suggestions for places to look? Do I need to supply more information?

    Try:
    LC_ALL=en_US.ISO8859-1 java FolderSizes .
    Does this help?

    Yes, that does fix it for my terminal, but if possible I would like a solution that can determine this at run time, so that when it's running on some other terminal, with some other charset, with some other file system charset, it will still work.
    Any ideas for doing this from within the VM?
    P.S. Thanks for the fast reply.
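
    For later readers: on those JDKs the file-name decoding is fixed by the locale the VM started with, so a pure in-VM fix is awkward. On Java 7+ the total can be computed with NIO's tree walk instead; whether that sidesteps the locale issue depends on the platform, so treat this as an alternative sketch rather than a guaranteed fix.

    import java.io.IOException;
    import java.nio.file.*;
    import java.nio.file.attribute.BasicFileAttributes;

    public class FolderSizesNio {
        public static void main(String[] args) throws IOException {
            final long[] total = { 0 };
            Files.walkFileTree(Paths.get(args[0]), new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                    total[0] += attrs.size(); // size taken from attributes read during the walk
                    return FileVisitResult.CONTINUE;
                }
                @Override
                public FileVisitResult visitFileFailed(Path file, IOException exc) {
                    System.err.println("Problem checking size on \"" + file + "\"");
                    return FileVisitResult.CONTINUE; // keep going instead of aborting
                }
            });
            System.out.println(total[0]);
        }
    }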
