[SOLVED] Non english chars kdemod 4 problem

Hello, I have a little problem with KDE and non-English characters.
If I open a file with non-English chars in its name, I get something like this:
(In this case KWrite opens the "other" file, but in other applications it fails with a file-not-found error.)
Another symptom is that in the KDE menu my name has bad chars too:
(It should be López.)
And the third symptom is that if I try to rename a file on the desktop, I can't type accented chars (á é í ó ú). At the beginning the keyboard in this rename dialog was totally English, but now I have a semi-Spanish keyboard (I can type ñ) thanks to the appropriate /etc/hal/fdi/policy/10-keymap.fdi file (see the sketch below).
But the strangest thing is that, in general, non-English chars work fine in all KDE and non-KDE applications and even in the console. I can go to the File->Open menu of an application and open a file with non-English chars in its name. The problem seems to be in the part of KDE that passes the file name to the application (kwin?).
My locale is es_ES@UTF8 and, as I said, I have configured the 10-keymap.fdi file correctly.
I have read in some forums that something like this could be a KDE or Qt bug, but it's not clear to me, as I don't see people complaining about this in general.
Any idea will be appreciated.
Thanks in advance,
Christian.
Last edited by christian (2009-03-27 14:52:17)
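For reference, a 10-keymap.fdi of the kind mentioned above looks roughly like this (a sketch only; the 'es' layout value and the exact keys are assumptions, not copied from the poster's actual file):
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="info.capabilities" contains="input.keymap">
      <append key="info.callouts.add" type="strlist">hal-setup-keymap</append>
    </match>
    <match key="info.capabilities" contains="input.keys">
      <merge key="input.xkb.layout" type="string">es</merge>
    </match>
  </device>
</deviceinfo>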

SanskritFritz wrote:
That should be "es_ES.utf8"
Sorry, I misspelled it in the post.
Of course, my locale is es_ES.utf8:
LANG=es_ES.utf8
LC_CTYPE="es_ES.utf8"
LC_NUMERIC="es_ES.utf8"
LC_TIME="es_ES.utf8"
LC_COLLATE=C
LC_MONETARY="es_ES.utf8"
LC_MESSAGES="es_ES.utf8"
LC_PAPER="es_ES.utf8"
LC_NAME="es_ES.utf8"
LC_ADDRESS="es_ES.utf8"
LC_TELEPHONE="es_ES.utf8"
LC_MEASUREMENT="es_ES.utf8"
LC_IDENTIFICATION="es_ES.utf8"
LC_ALL=
I don't think this is the source of the problem because, except in the places I mentioned in the first post, the rest of my system works perfectly.
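For completeness, on Arch of that era the usual way to make sure the locale really exists is roughly the following (a sketch, assuming the stock /etc/locale.gen and rc.conf layout):
# uncomment the matching line in /etc/locale.gen:
#   es_ES.UTF-8 UTF-8
locale-gen
# then set LOCALE="es_ES.UTF-8" in /etc/rc.conf and log in again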

Similar Messages

  • How to load file thru reader which contains non-english char in file name

    Hi ,
    I want to know how to load a file whose name contains non-English chars (e.g. 置顶.pdf) through Reader on an English machine,
    as LoadFile gives an error when I pass the Unicode-converted file name.
    Regards,
    Arvind

    You don't mention what version of Reader?  And you are using the AcroPDF.dll, yes?
    Sent from my iPad

  • [SOLVED!] On USB drives, problems with non-English chars and HAL

    Hello,
    I am having a problem with non-English characters (áãàçéẽê...) in files stored on my USB drive.
    On Windows they're created with the correct name, but on Linux the files have the non-English characters replaced by '?' and are not accessible.
    If I manually mount the drives using 'mount -o iocharset=utf8 /dev/sdb1 /media/usbdisk' the characters are OK, so I think I just need to get HAL to pass the correct parameters to mount. However, I don't know how to do that and haven't found any good solution.
    I tried to build a custom kernel with the default charset set to UTF-8, and it didn't work.
    Any ideas? I'm using x86-64, HAL 0.5.13-3 and my locale is pt_BR.UTF-8.
    Thanks!
    EDIT: Actually, this is not a HAL problem, but a problem with 'exo'. For the solution, I edited /etc/xdg/xfce4/mount.rc and added iocharset=utf8 to the [vfat] category (see the snippet below).
    Last edited by Renan Birck (2009-11-28 20:54:23)
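    Based on the EDIT above, the relevant entry in /etc/xdg/xfce4/mount.rc would look roughly like this (a sketch; only the iocharset line comes from the post, any keys already present in that section stay as they are):
    [vfat]
    iocharset=utf8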

    I don't use Thunar presently, but I looked in the Thunar Volume Manager doc and I didn't find anything to change the mount options of removable drives. I am not quite sure if it's possible or not. Maybe someone using it can tell for sure.
    But if it is not possible to change the mount options, a possible solution is to disable the Thunar Volume Manager plugin and to use something else more configurable to manage the automount function.
    Personally I use the halevt package from the AUR, which uses configuration files in XML format.
    It's not so easy to use, but it is highly configurable.
    There are other tools as well, though.
    I can help you with halevt if you choose that way...

  • Search for description containing non-English chars -- ?

    Hello!
    I've implemented a search class, which allows customers to search folders/documents by name, owner, description, etc.
    And here's the problem: if the description contains non-English (Russian, in my case) characters, search does not work! Everything (AS infrastructure, CM SDK DB, etc.) was installed using the UTF-8 Unicode charset. When I debug the code, I see that when I build the AttributeQualification and later compose a complex SearchQualification, the value in them is correct, but when I call getSQL(), I see a string like this:
    ... ( nls_upper(ALIASDOCUMENT.DESCRIPTION) LIKE nls_upper('????') ) ...
    So it seems as if the passed Unicode value was converted into an ANSI string, and since the server's system language is English, my Russian letters were lost?
    Can anybody shed some light here? Is there a way to search for UNICODE descriptions (and content, for that matter)?
    Thanks,
    Sasha.

    Hi Sasha,
    I want you to try the following code. It should output the file description and the query to a text file. Use Internet Explorer or Notepad to open this file and make sure you specify that the file encoding is UTF8.
    thanks,
    matt.
    java -classpath ...blah blah.. RussianSearch parameterfile=c\cmsdkparameters.txt
    cmsdkparameters.txt contains:
    Username = system
    Password = oracle9i
    SchemaPassword = cmsdk
    Domain = ifs://ifspm-sun2.us.oracle.com:1521:mjs92.us.oracle.com:cmsdk903
    ServiceConfiguration = SmallServiceConfiguration
    Service = TestService
    import oracle.ifs.beans.LibraryService;
    import oracle.ifs.beans.LibrarySession;
    import oracle.ifs.beans.ClassObject;
    import oracle.ifs.beans.Document;
    import oracle.ifs.beans.DocumentDefinition;
    import oracle.ifs.beans.Folder;
    import oracle.ifs.beans.FolderDefinition;
    import oracle.ifs.beans.LibraryObject;
    import oracle.ifs.beans.PublicObject;
    import oracle.ifs.beans.Search;
    import oracle.ifs.beans.SearchResultObject;
    import oracle.ifs.common.IfsException;
    import oracle.ifs.common.AttributeValue;
    import oracle.ifs.common.CleartextCredential;
    import oracle.ifs.common.Credential;
    import oracle.ifs.common.ParameterTable;
    import oracle.ifs.search.AttributeQualification;
    import oracle.ifs.search.AttributeSearchSpecification;
    import oracle.ifs.search.SearchClassSpecification;
    import oracle.ifs.search.SearchSortSpecification;
    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.PrintWriter;
    import java.util.Hashtable;
    import java.util.Vector;
    /**
     * Copyright (c) 2003 Oracle Corporation. All rights reserved.
     * Matt Shannon.
     * Description:
     *  Test searching in Russian Language
     *  View output file in notepad or IE - make sure to specify character
     *  set of document to be UTF8 when opening.
     */
    public class RussianSearch implements Runnable
    {
      // set to 'false' to prevent the class from freeing objects that it creates
      public static final boolean performCleanup = true;

      protected ParameterTable m_parametertable;
      private Vector m_ObjectsRequiringCleanup;

      public RussianSearch(String[] args)
      {
        // parameter file is retrieved through command line argument parameterfile=
        m_parametertable = new ParameterTable(args, "parameterfile");
      }

      public static void main(String[] args)
      {
        new Thread(new RussianSearch(args)).start();
      }

      /**
       *   This is where you write your test program.
       */
      public void run()
      {
        LibraryService service = startService();
        LibrarySession session = establishSession(service);
        if (session == null)
          return;
        try
        {
          DocumentDefinition ddef = new DocumentDefinition(session);
          ddef.setAttribute(PublicObject.NAME_ATTRIBUTE,
            AttributeValue.newAttributeValue("blah.txt"));
          ddef.setAttribute(PublicObject.DESCRIPTION_ATTRIBUTE,
            AttributeValue.newAttributeValue("Я скучаю по родине"));
          ddef.setEmptyContent();
          Document newdoc = (Document) session.createPublicObject(ddef);
          addObjectRequiringCleanup(newdoc);

          /*  Construct AttributeSearchSpecification.
           *  Attribute based conditions are allowed, context conditions are not!
           */
          AttributeSearchSpecification attrSrchSpec =
            new AttributeSearchSpecification();

          /*  Construct SearchClassSpecification.
           *  This represents the FROM and SELECT clauses of the query.
           */
          SearchClassSpecification srchClsSpec = new SearchClassSpecification();
          srchClsSpec.addSearchClass(Document.CLASS_NAME);      // from clause
          srchClsSpec.addResultClass(Document.CLASS_NAME);      // select clause

          /*  Construct SearchSortSpecification.
           *  This represents the ORDER BY clause of the query.
           */
          SearchSortSpecification srchSortSpec = new SearchSortSpecification();
          //  upper case ascending sort on Name
          srchSortSpec.add(Document.CLASS_NAME, PublicObject.NAME_ATTRIBUTE,
            SearchSortSpecification.ASCENDING, "nls_upper");

          /*  AttributeQualification is a WHERE clause component representing an
           *  attribute condition.
           */
          // scalar AttributeQualification - description like '%родине'
          AttributeQualification aq = new AttributeQualification();
          aq.setAttribute(Document.CLASS_NAME, PublicObject.DESCRIPTION_ATTRIBUTE);
          aq.setOperatorType(AttributeQualification.LIKE);
          aq.setValue("%родине");

          // set SELECT & FROM clauses
          attrSrchSpec.setSearchClassSpecification(srchClsSpec);
          // set ORDER BY clause
          attrSrchSpec.setSearchSortSpecification(srchSortSpec);
          // set WHERE clause
          attrSrchSpec.setSearchQualification(aq);

          /* Construct Search, supply SearchSpecification */
          Search s = new Search(session, attrSrchSpec);

          System.out.println("File encoding system property: "+System.getProperty("file.encoding"));
          boolean append = false;
          FileOutputStream fos = new FileOutputStream("c:/test.txt", append);
          OutputStreamWriter osw = new OutputStreamWriter(fos);
          System.out.println("Default character encoding: "+osw.getEncoding());
          osw = new OutputStreamWriter(fos, "UTF8");
          System.out.println("New character encoding: "+osw.getEncoding());
          PrintWriter out = new PrintWriter(osw, true);
          out.println(s.getSQL());

          SearchResultObject obj = null;
          // Open Search!
          s.open();
          try
          {
            /*
             * A SearchResultObject encapsulates a row of a search result.  It
             * contains 1 or more LibraryObjects (depending on number of result
             * classes specified).
             */
            while ( (obj = s.next()) != null )
            {
              Document d = (Document)(obj.getLibraryObject(Document.CLASS_NAME));
              out.println(d.getName() + " " + d.getDescription());
            }
          }
          catch (Throwable e)
          {
            if ((e instanceof IfsException) &&
              (((IfsException)e).containsErrorCode(22000)))
            {
              // this error code is expected when the cursor is exhausted; ignore it
            }
            else
            {
              System.out.println("Unexpected exception occurred in selector cursor");
              System.out.println((e instanceof IfsException)
                ? ((IfsException)e).toLocalizedString()
                : e.toString());
            }
          }
          finally
          {
            out.close();
            if (performCleanup)
              cleanup();
            s.close();
            s.dispose();
          }
        }
        catch (Throwable e)
        {
          System.out.println("Fatal exception occurred in run():");
          System.out.println((e instanceof IfsException)
            ? ((IfsException)e).toLocalizedString()
            : e.toString());
        }
        finally
        {
          disconnectSession(session);
        }
      }

      public LibraryService startService()
      {
        String schemapassword = m_parametertable.getString("SchemaPassword");
        String domain = m_parametertable.getString("Domain");
        String servicename = m_parametertable.getString("Service", domain);
        String serviceconfiguration =
          m_parametertable.getString(
            "ServiceConfiguration", "SmallServiceConfiguration");
        LibraryService service = null;
        try
        {
          if (servicename != null &&
            LibraryService.isServiceStarted(servicename))
          {
            // The service name was specified, and is already running.
            // So just use it.
            System.out.println("Service already running: "+servicename);
            service = LibraryService.findService(servicename);
            System.out.println("Existing service retrieved");
          }
          else
          {
            service = LibraryService.startService(
              servicename, schemapassword, serviceconfiguration, domain);
            System.out.println("Service started: '"+servicename+
              "' (version: "+service.getVersionString()+")");
          }
        }
        catch (Throwable e)
        {
          System.out.println("Unable to start service:");
          System.out.println((e instanceof IfsException)
            ? ((IfsException)e).toLocalizedString()
            : e.toString());
        }
        return service;
      }

      public LibrarySession establishSession(LibraryService service)
      {
        String username = m_parametertable.getString("Username");
        String password = m_parametertable.getString("Password");
        return establishSession(service, username, password);
      }

      public LibrarySession establishSession(
        LibraryService service,
        String username,
        String password)
      {
        LibrarySession session = null;
        try
        {
          CleartextCredential cred = new CleartextCredential(username,
            password);
          session = establishSession(service, cred);
        }
        catch (Throwable e)
        {
          System.out.println("Unable to create credential:");
          System.out.println((e instanceof IfsException)
            ? ((IfsException)e).toLocalizedString()
            : e.toString());
        }
        return session;
      }

      public LibrarySession establishSession(
        LibraryService service,
        Credential cred)
      {
        LibrarySession session = null;
        if (service != null)
        {
          try
          {
            String username = cred.getName();
            session = service.connect(cred, null);
            System.out.println("Session established for " + username);
          }
          catch (Throwable e)
          {
            System.out.println("Unable to create session:");
            System.out.println((e instanceof IfsException)
              ? ((IfsException)e).toLocalizedString()
              : e.toString());
          }
        }
        return session;
      }

      public void disconnectSession(LibrarySession session)
      {
        System.out.println("Disconnecting session");
        try
        {
          session.disconnect();
        }
        catch (Throwable e)
        {
          System.out.println("Error disconnecting session:");
          System.out.println((e instanceof IfsException)
            ? ((IfsException)e).toLocalizedString()
            : e.toString());
        }
      }

      public void addObjectRequiringCleanup(LibraryObject lo)
      {
        Vector v = getObjectsRequiringCleanupVector();
        v.addElement(lo);
      }

      private Vector getObjectsRequiringCleanupVector()
      {
        if (m_ObjectsRequiringCleanup == null)
          m_ObjectsRequiringCleanup = new Vector();
        return m_ObjectsRequiringCleanup;
      }

      /**
       * Frees objects that were marked as requiring clean up
       */
      public void cleanup()
      {
        Vector v = getObjectsRequiringCleanupVector();
        System.out.println("Cleanup - delete objects created during the session");
        int count = (v == null) ? 0 : v.size();
        System.out.println("# of objects to free: "+count);
        // Free the objects in reverse order from which they were added
        for (int i = count - 1; i >= 0; i--)
        {
          LibraryObject lo = (LibraryObject)v.elementAt(i);
          try
          {
            discardObject(lo);
          }
          catch (Exception e)
          {
            System.out.println("Unable to discard an object during cleanup - continuing...");
          }
        }
      }

      public void discardObject(LibraryObject lo) throws IfsException
      {
        if (lo != null)
        {
          try
          {
            System.out.println("Attempting to free: "+getDisplayName(lo));
            LibrarySession session = lo.getSession();
            if (lo instanceof Folder)
            {
              System.out.println("Attempting to free Folder with Deep Option!");
              // free Folder using "Deep" option to free
              // all items in the folder, and all of their items, etc.
              Folder folder = (Folder)lo;
              FolderDefinition def = new FolderDefinition(session);
              def.setFolderDepthOption(
                Folder.SYSTEMOPTIONVALUE_FOLDER_DEPTH_DEEPEST);
              folder.free(def); // removes object from the repository, with options
            }
            else
            {
              // just a regular free
              lo.free();
            }
          }
          catch (Exception e)
          {
            System.out.println("Unable to free an object during cleanup - continuing");
            System.out.println((e instanceof IfsException)
              ? ((IfsException)e).toLocalizedString()
              : e.toString());
          }
        }
      }

      public String getDisplayName(LibraryObject lo)
        throws IfsException
      {
        String displayName;
        if (lo != null)
          displayName = lo.getClassObject().getName()
            + " '" + lo.getName() + "'";
        else
          displayName = "<null object>";
        return displayName;
      }
    }

  • Reading .txt file and non-english chars

    I added .txt files to my app for translations of text messages.
    The problem is that when I read the translations, non-English characters are read wrong on my Nokia. In the Sun Wireless Toolkit it works.
    The trouble is that I don't even know what encoding the phone expects...
    UTF-8, ISO Latin 2 or Windows CP1250?
    I'm using CLDC 1.0 and MIDP 1.0.
    What's the right way to do it?
    Here's what I have...
    String locale =System.getProperty("microedition.locale");
    String language = locale.substring(0,2);
    String localefile="lang/"+language+".txt";
    InputStream r= getClass().getResourceAsStream("/lang/"+language+".txt");
    byte[] filetext=new byte[2000];
    int len = 0;
    try {
    len=r.read(filetext);
    Then I get the translation with:
    value = new String(filetext,start, i-start).trim();
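    If the files are saved as UTF-8, the decoding step could look roughly like this (a sketch, not from the thread; the resource path and buffer size follow the snippet above, and the "UTF-8" name is an assumption that has to match how the .txt files were actually saved):
    import java.io.IOException;
    import java.io.InputStream;
    public class LangLoader {
        // Sketch: decode the resource with an explicit encoding instead of
        // whatever the phone's default happens to be.
        public String load(String language) throws IOException {
            InputStream r = getClass().getResourceAsStream("/lang/" + language + ".txt");
            byte[] filetext = new byte[2000];
            int len = r.read(filetext);
            r.close();
            return new String(filetext, 0, len, "UTF-8"); // the encoding is the assumption
        }
    }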

    Not sure what the issue is with the runtime. How are you outputting the file and accessing the lists? Here is a more complete sample:
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    public class Foo {
         final private List colons = new ArrayList();
         final private List nonColons = new ArrayList();
         static final public void main(final String[] args)
              throws Throwable {
              Foo foo = new Foo();
              foo.input();
              foo.output();
         }
         private void input()
              throws IOException {
             BufferedReader reader = new BufferedReader(new FileReader("/temp/foo.txt"));
             String line = reader.readLine();
             while (line != null) {
                 List target = line.indexOf(":") >= 0 ? colons : nonColons;
                 target.add(line);
                 line = reader.readLine();
             }
             reader.close();
         }
         private void output() {
              System.out.println("Colons:");
              Iterator itorColons = colons.iterator();
              while (itorColons.hasNext()) {
                   String current = (String) itorColons.next();
                   System.out.println(current);
              }
              System.out.println("Non-Colons");
              Iterator itorNonColons = nonColons.iterator();
              while (itorNonColons.hasNext()) {
                   String current = (String) itorNonColons.next();
                   System.out.println(current);
              }
         }
    }
    The output generated is:
    Colons:
    a:b
    b:c
    Non-Colons
    a
    b
    c
    My guess is that you are iterating through your lists incorrectly. But glad I could help.
    - Saish

  • 'Chinese letters' printed instead of non-english chars

    Hi ,
    I use a cmd window in which I execute the Java programs I have developed... Although I write non-English (Greek) letters in my source programs, the cmd window does not display them... only 'Chinese letters' or non-understandable symbols...
    Of course... I can type Greek letters in the cmd window...
    Is there something I can do other than writing the messages with English/Latin chars?
    Note: I use Win XP and JDK 1.4.
    Thanks,
    Sim

    public class switchPr {
      public static void main(String[] args) {
        int a = (int)(12*Math.random());
        switch (a) {
          case 1:
            System.out.println("Ιανουάριος");
            break;
          case 2:
            System.out.println("Φεβρουάριος");
            break;
          case 3:
            System.out.println("Μάρτιος");
            break;
          case 4:
            System.out.println("Απρίλιος");
            break;
          default:
            System.out.println("other...");
            break;
        }
      }
    }
    C:\oracle_files\Java\Examples\switch>C:\oracle\product\10.2.0\database10g\jdk\bin\javac -classpath . switchPr.java
    C:\oracle_files\Java\Examples\switch>C:\oracle\product\10.2.0\database10g\jdk\bin\java -classpath . switchPr
    other...
    C:\oracle_files\Java\Examples\switch>C:\oracle\product\10.2.0\database10g\jdk\bin\java -classpath . switchPr
    other...
    C:\oracle_files\Java\Examples\switch>C:\oracle\product\10.2.0\database10g\jdk\bin\java -classpath . switchPr
    ╔άΊΎΫ▄?ώΎ≥
    The line just above is non-understandable..... Can you decrypt it...???
    Greetings...
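    For what it's worth, on JDK 1.4 one workaround is to print through a PrintStream created with the console's own code page instead of the JVM default (a sketch, not from the thread; "Cp737", the DOS Greek code page, is an assumption -- check the real one with 'chcp'):
    import java.io.PrintStream;
    import java.io.UnsupportedEncodingException;
    public class GreekConsole {
        public static void main(String[] args) throws UnsupportedEncodingException {
            // Wrap System.out so println() encodes with the cmd window's code page.
            PrintStream out = new PrintStream(System.out, true, "Cp737"); // assumed code page
            out.println("Ιανουάριος");
        }
    }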

  • Attachment File non-English Name DISAPPEARS problem in BizTalk 2010 SMTP Adapter

    Hello ,
    I'm using BizTalk 2010 SMTP Adapter for sending mail with attachments by setting them via property SMTP.Attachments
    //Attachment
    msgEmail(SMTP.Attachments)= AttachmentList;
    I have files in several languages (partly in English and partly in Russian), for example.
    My attachment list looks like this:
    "C:\Temp\Files\EnglishNameFile.xml | C:\Temp\Files\RussianFileName_РусскоеИмя.xml";
    After sending the mail with these attachments, the second file (its name is partly Russian) is received without that part of the name
    (the non-English part of the name DISAPPEARS),
    like this:
    RussianFileName_.xml (it should be RussianFileName_РусскоеИмя.xml)
    The non-English part DISAPPEARS!!!
    And if I have a file whose name has no Latin (English) letters at all, then the BizTalk SMTP Adapter changes the name
    to a default one like ATT41233.xml.
    I found this behaviour occurs with other non-English languages also!!!
    Unfortunately I haven't found any info about this.
    Any help would be very much appreciated.
    Vadim

    Refer to this link -
    http://social.msdn.microsoft.com/Forums/en-US/163a47cf-db31-49a5-9ee3-ce9272ba24ff/setting-contenttransferencoding-in-dynamic-smtp-port?forum=biztalkgeneral
    There is an option on the multipart message that controls the filename and the charset used for the attachment, including the content-transfer encoding.
    Regards.

  • Problem with Freehand eps and non english char

    Hi, I'm new in this forum and a newbie in FreeHand.
    I'm trying to make an EPS file with FreeHand. This EPS is for a fax cover; I make it and it works fine, but the problem comes when I want to use a special, non-English character, for example Ñ.
    Does somebody know what I can do so that the EPS I make in FreeHand accepts special characters?
    Any idea?
    Thanks

    Special characters can't be done, as FreeHand doesn't support Unicode (extended letters with accents, etc.).
    This may help: http://freefreehand.org/forum/viewtopic.php?f=5&t=268

  • Problem with  Non English Chars

    OS : Mac OS
    Java : 1.5.0_07
    Hi,
    I have a Swing application that reads data from a database and shows it in a Swing GUI. The text returned by the database is in Arabic and stored in a TextField object.
    But once displayed, the Arabic chars are screwed up, or actually they are not Arabic chars at all!!
    For debugging I also write the result of the query to the console and to a log4j log file.
    There, it is printed in the right form.
    Here is the code:
    System.out.println("D3"+java.nio.charset.Charset.defaultCharset().name());
    System.out.println("singular "+dit.getData().getSingular());
    log4j.debug("singular "+dit.getData().getSingular());
    Font font = Font.decode("Geeza Pro");
    textl.setFont(font);
    textl.setText(dit.getData().getSingular());
    The output in the console (and log4j) is:
    D3MacRoman
    singular صوف
    The output in the Swing Textfield is
    ������
    If I configure log4j to use UTF8, then even in the log4j log file the same screwed-up chars are written.
    It looks like I have to tell Swing to use MacRoman, which is the default of the OS and the one used by the console & log4j, but I don't know how.
    Any clue??
    Thanks,
    Chris.

    convert your strings to unicode:
    example 1
    import java.awt.*;
    import java.awt.event.*;
    public class ApplicationFrame
        extends Frame {
      public ApplicationFrame() { this("ApplicationFrame v1.0"); }
      public ApplicationFrame(String title) {
        super(title);
        createUI();
      }
      protected void createUI() {
        setSize(500, 400);
        center();
        addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) {
            dispose();
            System.exit(0);
          }
        });
      }
      public void center() {
        Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
        Dimension frameSize = getSize();
        int x = (screenSize.width - frameSize.width) / 2;
        int y = (screenSize.height - frameSize.height) / 2;
        setLocation(x, y);
      }
    }
    import java.awt.*;
    public class BidirectionalText {
      public static void main(String[] args) {
        Frame f = new ApplicationFrame("BidirectionalText v1.0") {
          public void paint(Graphics g) {
            Graphics2D g2 = (Graphics2D)g;
            g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                RenderingHints.VALUE_ANTIALIAS_ON);
            Font font = new Font("Lucida Sans Regular", Font.PLAIN, 32);
            g2.setFont(font);
            g2.drawString("Please \u062e\u0644\u0639 slowly.", 40, 80);
          }
        };
        f.setVisible(true);
      }
    }
    example2
    Java Internationalization
    By Andy Deitsch, David Czarnecki
    ISBN: 0-596-00019-7
    O'Reilly
    import java.awt.event.*;
    import java.awt.*;
    import java.text.*;
    import javax.swing.*;
    public class ArabicDigits extends JPanel {
      static JFrame frame;
      public ArabicDigits() {
        NumberFormat nf = NumberFormat.getInstance();
        if (nf instanceof DecimalFormat) {
          DecimalFormat df = (DecimalFormat)nf;
          DecimalFormatSymbols dfs = df.getDecimalFormatSymbols();
          // set the beginning of the range to Arabic digits
          dfs.setZeroDigit('\u0660');
          df.setDecimalFormatSymbols(dfs);
        }
        // create a label with the formatted number
        JLabel label = new JLabel(nf.format(1234567.89));
        // set the font with a large enough size so we can easily
        // read the numbers
        label.setFont(new Font("Lucida Sans", Font.PLAIN, 22));
        add(label);
      }
      public static void main(String [] argv) {
        ArabicDigits panel = new ArabicDigits();
        frame = new JFrame("Arabic Digits");
        frame.addWindowListener(new WindowAdapter() {
          public void windowClosing(WindowEvent e) { System.exit(0); }
        });
        frame.getContentPane().add("Center", panel);
        frame.pack();
        frame.setVisible(true);
      }
    }
    To avoid having to type all the \u... notation manually, use the native2ascii tool (included with the SDK).
    http://java.sun.com/developer/technicalArticles/Intl/HTTPCharset/

  • KDE (or GNOME?) non-latin chars input problem

    After a recent upgrade (pacman -Syu) I can't type non-Latin characters anymore in GNOME apps (like gEdit, Evolution, Thunderbird etc.) under KDE. OpenOffice is still OK.
    Keyboard layout switching works (I can see the difference between the US and GB layouts), but when switching to non-Latin layouts I get Latin chars instead of non-Latin ones.
    In KDE apps everything is fine.
    I have current KDE and GNOME installed.
    BTW, when trying to run the Epiphany web browser (under KDE again) I get an error message (not sure, but maybe this is related to the first problem):
    "Could not start GNOME Web Browser
    Startup failed because of the following error:
    Unable to determine the address of the message bus (try 'man
    dbus-launch' and 'man dbus-daemon' for help)"
    Have dbus in DAEMONS line in rc.conf.
    Any suggestions?

    Fixed first problem... rollback of inputproto and libxi did the trick (not sure how safe it was).
    scarecrow, thank you for your reply but it didn't help for the second problem. Changed DAEMONS to hal only, added dbus-python and even dbus-sharp (just in case, all other dbus related stuff was already there) but - no luck. Perhaps dbus-1.0 might be a solution who knows.
    Thanks anyway.

  • NON English chars are saved as "?"

    I am trying to write a large amount of text to both SQL Server (or MSDE) and
    MS Access.
    When I use the following:
    byte[] data = str.getBytes("ISO-8859-1");
    ByteArrayInputStream bais = new ByteArrayInputStream(data);
    ps.setAsciiStream(parameterIndex,bais,data.length);
    the text is saved correctly to MS Access, but in MSDE it is saved as "?"
    (I have installed it with the correct collation=hebrew_ci_ai and the regional settings are correct).
    When I used ps.setCharacterStream() it saved the data OK in MSDE, but with MS Access an exception occurred when saving a large amount of text.
    Any ideas what can work for both DBs?
    THANKS !
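    One thing worth trying (a sketch, not from the thread): skip the manual byte conversion and hand the driver the Java string, so it does the Unicode conversion for the target column type itself:
    // Bind the string directly and let the JDBC driver convert it:
    ps.setString(parameterIndex, str);
    // Or, for long text, stream it as characters instead of ASCII bytes:
    java.io.Reader reader = new java.io.StringReader(str);
    ps.setCharacterStream(parameterIndex, reader, str.length());
    Whether this survives on the MSDE side also depends on the column being a Unicode type (nvarchar/ntext rather than varchar/text); that part is an assumption worth checking.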


  • GetFirstFile() and non-english chars

    Hi,
    When using GetFirstFile() / GetNextFile(), if a file is encountered with Chinese chars in its filename, each of these chars is replaced with a "?".
    As a result, I can't open the file, as I don't know its full name.
    Does anyone know of a way around this? Some Windows SDK function maybe?
    cheers,
    Darrin.

    Hi Diz@work,
    Thanks for posting on the NI Discussion Forums.
    I have a couple of questions for you in order to troubleshoot this issue:
    Which language is your Windows operating system set to? Chinese or English?
    When you say that the filename returned contains '?' characters instead of the Chinese characters, do you mean you see this when you output to a message popup panel or print to the console? Are you looking at the values in fileName as you're debugging? Can you take a look at the actual numerical values in the fileName array and see which characters they map to? It's possible that the Chinese characters are being returned correctly, but the function you're using to output them doesn't understand the codes they use.
    Which function are you using to open the file with the fileName you get from GetFirstFile()? Can you take a look at what's being passed to it?
    CVI does include support for multi-byte characters. Take a look at this introduction:
    http://zone.ni.com/reference/en-XX/help/370051V-01/cvi/programmerref/programmingmultibytechars/
    As far as the Windows SDK goes, I did find that the GetFirstFile() and GetNextFile() functions are based on the Windows functions, FindFirstFile() and FindNextFile(). According to MSDN, these functions are capable of handling Unicode characters as well as ASCII:
    http://msdn.microsoft.com/en-us/library/windows/desktop/aa364418(v=vs.85).aspx
    There may be a discrepancy between how these functions are being called and/or what they're returning to the CVI wrapper functions.
    Frank L.
    Software Product Manager
    National Instruments

  • Keyboard input bug with non-english chars

    Hi community, I'm facing a weird problem with the following Flex application (something very simple, it couldn't be simpler):
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
         <mx:TextInput/>
    </mx:Application>
    The bug occurs only on Google Chrome (on Mac): when I try to enter French special chars in the input field, let's say the letters éèçà, I get ÈËÁ‡ instead (see the screenshots for Safari and Chrome respectively).
    I tried these browsers on Mac: Safari, Firefox, OmniWeb, Opera. It works like a charm in all of them. The bug is only in Google Chrome on Mac.
    I also tried on Windows with IE6, IE7, Firefox, Opera and Google Chrome. No bug for any of them on Windows either.
    I've read on the Internet that other people on Linux sometimes have the same bug (again, it's just for a couple of browsers and never all the browsers on their platform). I hope I don't have to tell my users to fiddle with their OS configuration. It's our job to make our apps fit the visitor, not theirs!
    Does anyone know a workaround for it? Some special configuration (compatibility mode with an older Flash version, 8 or 9, something like that)?...

    I just tested Firefox and Chrome on Linux; it doesn't work either, but I get different weird chars: éèça
    However, on both Mac and Linux, if I copy the chars and paste them into the input field, it works.

  • [Solved] Can't install kdemod - problem with xine.desktop

    Hi,
    Today I tried to install kdemod, but I ran into some problems. It couldn't be installed because of a file conflict:
    /usr/share/kde4/services/phononbackends/xine.desktop exists in both 'phonon' and 'kdemod-kdebase-runtime'. What's wrong with it?
    Thanks for your help!
    Edit: Used -f and it's done
    Last edited by blackrain (2009-01-15 22:45:47)
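    For reference, the "-f" mentioned in the edit is pacman's force flag, i.e. something along the lines of the following (a sketch; the package name is taken from the conflict message above):
    pacman -Sf kdemod-kdebase-runtime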

    N30N wrote:
    schuschu wrote:Could someone put a PKGBUILD with patch into the AUR please?
    I've added the patch to the linuxwacom-cvs package.
    Thanks, but I still can't build it:
    patching file src/xdrv/xf86Wacom.c
    Hunk #1 succeeded at 100 with fuzz 2 (offset 10 lines).
    Hunk #2 succeeded at 367 (offset -36 lines).
    Hunk #3 succeeded at 397 (offset -33 lines).
    Hunk #4 succeeded at 555 (offset -33 lines).
    Hunk #5 succeeded at 611 (offset -22 lines).
    Hunk #6 succeeded at 649 (offset -22 lines).
    Hunk #7 succeeded at 667 (offset -22 lines).
    Hunk #8 succeeded at 683 (offset -22 lines).
    Hunk #9 succeeded at 723 (offset -22 lines).
    Hunk #10 succeeded at 750 (offset -22 lines).
    Hunk #11 succeeded at 781 (offset -22 lines).
    Hunk #12 succeeded at 791 (offset -22 lines).
    Hunk #13 succeeded at 823 (offset -22 lines).
    Hunk #14 succeeded at 849 (offset -22 lines).
    Hunk #15 FAILED at 881.
    1 out of 15 hunks FAILED -- saving rejects to file src/xdrv/xf86Wacom.c.rej

  • Non-english characters input problem on remote device

    Hello.
    I have ZCM 11 SP1. In remote management I can only input English characters on the remote device.
    When I change the keyboard layout to Russian there is no input at all. But I need the Russian keyboard working. Help, please.
    The managed device is a Windows XP SP3 box; the admin devices are Windows 7 SP1 and Windows XP SP3 boxes.

    9113060,
    > I have ZCM 11 SP1. In remote management I can input on remote device
    > only English characters.
    > When I change keyboard layuot to Russian - no input at all. But I need
    > Russian keyboard working. Help, please.
    >
    > Management device is Windows XP SP3 box, admin device - Windows 7 SP1
    > and Windows XP SP3 boxes
    Need more info here. I just tested it on Swedish and it works just fine,
    but when you say "When I change keyboard layuot to Russian" does that
    mean that the default on your machine or the target machine is not
    Russian?
    - Anders Gustafsson (NKP)
    The Aaland Islands (N60 E20)
    Novell has a new enhancement request system,
    or what is now known as the requirement portal.
    If customers would like to give input in the upcoming
    releases of Novell products then they should go to
    http://www.novell.com/rms
