Non-English input in Wine [SOLVED]

Hi,
Non-English input characters in Windows apps under Wine appear either as blanks or as question marks. Any how-to is welcome.
Meanwhile, I've found an interesting piece of advice on Google (for Cyrillic):
$ sudo ln -s en_US.UTF-8 /usr/share/X11/locale/ru_RU.UTF-8
The most interesting part remains a mystery: where am I supposed to create such a link?
Last edited by Llama (2009-05-29 19:23:46)

Peanut wrote:In other words, that command would overwrite the Russian UTF-8 locale with an English UTF-8 locale. I can't see any reason why that should fix anything.
For one thing, the ln command refused to overwrite anything. And yes, I've been curious about the reason.
Peanut wrote:Have you:
1) Installed and enabled fonts that support cyrillic characters?
(1) ru_RU.UTF-8 alongside en_US.UTF-8:
$ locale
LANG=en_US.utf8
LC_CTYPE="en_US.utf8"
LC_NUMERIC="en_US.utf8"
LC_TIME="en_US.utf8"
LC_COLLATE=C
LC_MONETARY="en_US.utf8"
LC_MESSAGES="en_US.utf8"
LC_PAPER="en_US.utf8"
LC_NAME="en_US.utf8"
LC_ADDRESS="en_US.utf8"
LC_TELEPHONE="en_US.utf8"
LC_MEASUREMENT="en_US.utf8"
LC_IDENTIFICATION="en_US.utf8"
LC_ALL=
Peanut wrote:Have you:
2) Tried using other encodings than UTF8?
(2) No, I avoid non-UTF locales.
Luckily, in this day and age the solution turned out to be fairly straightforward.
Launch command:
LANG=ru_RU.UTF-8 wine ...
An alias, they say, is also possible:
echo "alias wine='LANG=ru_RU.UTF-8 wine'" >> ~/.bashrc
echo "alias wine='LANG=ru_RU.UTF-8 wine'" >> ~/.profile

Similar Messages

  • Types of microSD card + non-English input language

    Hello all,
    I'm considering buying the BlackBerry Passport but have a couple of questions. Please consider that I have never owned or tried a BlackBerry before. Also, it is not possible for me to try it out before buying, as they are not sold where I live.
    Which class of microSD card is supported? I would like to buy the fastest 128 GB card available and, as I understand it, that would be a UHS-II class 3. However, it seems that this card has some physical changes compared to previous versions, so I'm not sure it will work in the BB Passport.
    The physical keyboard has the disadvantage of not supporting non-English input. Will I be able to input non-English characters anyway? I imagine they would appear in the top row of virtual keys, but I'm not sure.
    Kind regards
    Peter

    Newer OS versions support the exFAT format.
    I'm not sure about a different physical format for faster microSD cards; do you have a link?

  • JSP - How to truncate or replace non-English input?

    Hi,
    I have a JSP form in which users need to input certain values, and those values get stored in our database. My concern is that users from different geographies often input non-English values, which come through as weird characters and get stored in my database.
    I do not want non-English text to be stored in my database.
    What I want is that if a user enters any NON-ENGLISH character, it should either be truncated or be replaced with its nearest matching English character.
    How can I achieve this?
    Awaiting help!
    Thanks

    Hey guys, I'm still awaiting your response... Is this not achievable? If it is, please suggest a way to do it.
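    A rough sketch of the replace-then-truncate idea: java.text.Normalizer can decompose accented Latin letters so the accent can be stripped off, and anything still outside ASCII (which has no near-English equivalent) can then be dropped. The class and method names below are illustrative, not from the thread.

    ```java
    import java.text.Normalizer;

    public class AsciiFilter {
        // Map accented Latin letters to their base letter, then drop any
        // remaining non-ASCII characters (they have no near-English match).
        static String toAscii(String s) {
            // NFD splits "é" into "e" plus a combining accent mark
            String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
            // \p{M} matches the combining marks left over from decomposition
            String stripped = decomposed.replaceAll("\\p{M}", "");
            // Whatever is still non-ASCII (e.g. Chinese) is truncated away
            return stripped.replaceAll("[^\\x00-\\x7F]", "");
        }

        public static void main(String[] args) {
            System.out.println(toAscii("Café résumé"));  // prints "Cafe resume"
        }
    }
    ```

    This keeps the nearest English letter for accented input ("é" becomes "e") and removes characters such as Chinese outright, which matches the truncate-or-replace behaviour asked for above.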

  • SMS input box is covered by candidate bar when using non-English input. Any idea?

    Any ideas for a workaround?
    Same for other apps too...

    Hi,
    do you have a test case and instructions for me to try and run? Just create a simple page-to-page use case and send it in a ZIP file (renamed to ZIP) for me to test on 11.1.2.4, and I'll file a bug if required. If you have a support contract, you can file a service request with support. My mail address is in my OTN profile; just click my name link.
    Frank

  • INPUT textfield does not show non-English letters with transparent mode

    The INPUT textfield does not show non-English letters when I
    type, if transparent mode is turned on.
    Is this a bug in Flash Player 9?
    Will this bug be fixed?

    I just tested Firefox and Chrome on Linux; it doesn't work either, but I get different weird chars: éèça
    However, on both Mac and Linux, if I copy the chars and paste them into the input field, it works.

  • [SOLVED] mkvmerge: making MKVs from AVIs and non-English SRT files

    Hi! I'm converting my huge collection of .avi files to .mkv with mkvmerge, but I found that with non-English characters, mkvmerge produces wrong subtitles in the resulting MKV file: letters such as á, é, í, ó, ú, ñ and ¿/¡ don't display correctly. Any advice?
    I think I need something like the "Options that only apply to text subtitle tracks: --sub-charset <TID:charset>" option, but I don't know how to use it on the CLI.
    I currently just run "mkvmerge -v -o mkvfile avifile srtfile".
    Last edited by leo2501 (2008-09-01 23:40:53)

    Well, I came up with this bash script.
    avi2mkv:
    #!/bin/bash
    # Usage: avi2mkv file.avi
    # Muxes the AVI with a matching subtitle file, if one exists,
    # forcing UTF-8 for .srt subtitles so accented characters survive.
    INPUT=$(basename "$1" .avi)
    if [ -f "$INPUT.srt" ]; then
        # --sub-charset tells mkvmerge the encoding of the subtitle track
        mkvmerge -v -o "$INPUT.mkv" "$1" --sub-charset 0:UTF8 -s 0 -D -A "$INPUT.srt"
    elif [ -f "$INPUT.sub" ]; then
        mkvmerge -v -o "$INPUT.mkv" "$1" "$INPUT.sub"
    elif [ -f "$INPUT.ssa" ]; then
        mkvmerge -v -o "$INPUT.mkv" "$1" "$INPUT.ssa"
    else
        mkvmerge -v -o "$INPUT.mkv" "$1"
    fi

  • [SOLVED!] On USB drives, problems with non-English chars and HAL

    Hello,
    I am having a problem with non-English characters (áãàçéẽê...) in files stored on my USB drive.
    On Windows they're created with the correct name, but on Linux the files have the non-English characters replaced by '?' and are not accessible.
    If I manually mount the drive using 'mount -o iocharset=utf8 /dev/sdb1 /media/usbdisk' the characters are OK, so I think I just need to get HAL to pass the correct parameters to mount. However, I don't know how to do that and haven't found any good solution.
    I tried building a custom kernel with the default charset set to UTF-8, and it didn't work.
    Any ideas? I'm using x86-64, HAL 0.5.13-3, and my locale is pt-BR.UTF-8.
    Thanks!
    EDIT: Actually, this is not a HAL problem, but a problem with 'exo'. For the solution, I edited /etc/xdg/xfce4/mount.rc and added iocharset=utf8 to the [vfat] category.
    Last edited by Renan Birck (2009-11-28 20:54:23)

    I don't use Thunar at present, but I looked in the Thunar Volume Manager documentation and didn't find anything for changing the mount options of removable drives. I am not quite sure whether it's possible; maybe someone who uses it can say for sure.
    If it is not possible to change the mount options, a possible solution is to disable the Thunar Volume Manager plugin and use something else, more configurable, to manage automounting.
    Personally I use the halevt package from the AUR, which uses configuration files in XML format.
    It's not so easy to use, but it is highly configurable.
    Other tools exist as well.
    I can help you with halevt if you choose that way...

  • [SOLVED] Non-English chars KDEmod 4 problem

    Hello, I have a little problem with KDE and non-English characters.
    If I open a file with non-English chars in its name, I get something like this:
    (In this case kwrite opens the "wrong" file, but other applications fail with a file-not-found error)
    Another symptom is that in the KDE menu my name has bad chars too:
    (It should be López)
    And the third symptom is that if I try to rename a file on the desktop, I can't type accented chars (á é í ó ú). At the beginning the keyboard in this rename dialog was totally in English, but now I have a semi-Spanish keyboard (I can type ñ) with the appropriate /etc/hal/fdi/policy/10-keymap.fdi file.
    But the strangest thing is that, in general, in all KDE and non-KDE applications and even in the console, non-English chars work fine. I can go to an application's File->Open menu and open a file with non-English chars in its name. The problem seems to reside in the part of KDE that passes the file name to the application (kwin?).
    My locale is es_ES@UTF8 and, as I said, I have configured the 10-keymap.fdi file correctly.
    I have read in some forums that something like this could be a KDE or Qt bug, but that's not clear to me, as I don't see widespread complaints about it.
    Any idea will be appreciated.
    Thanks in advance,
    Christian.
    Last edited by christian (2009-03-27 14:52:17)

    SanskritFritz wrote:
    That should be "es_ES.utf8"
    Sorry, I misspelled it in the post.
    Of course, my locale is es_ES.utf8:
    LANG=es_ES.utf8
    LC_CTYPE="es_ES.utf8"
    LC_NUMERIC="es_ES.utf8"
    LC_TIME="es_ES.utf8"
    LC_COLLATE=C
    LC_MONETARY="es_ES.utf8"
    LC_MESSAGES="es_ES.utf8"
    LC_PAPER="es_ES.utf8"
    LC_NAME="es_ES.utf8"
    LC_ADDRESS="es_ES.utf8"
    LC_TELEPHONE="es_ES.utf8"
    LC_MEASUREMENT="es_ES.utf8"
    LC_IDENTIFICATION="es_ES.utf8"
    LC_ALL=
    I don't think this could be the source of the problem because, except in the places I mentioned in the first post, the rest of my system works perfectly.

  • Removing non-English characters from my string input source

    Guys,
    I have a problem where I need to remove all non-English (non-Latin) characters from a string. What is the right API for this?
    The one I'm using right now is:
    s.replaceAll("[^\\x00-\\x7F]", ""); // s is a string containing Chinese characters
    I'm looking for a standard solution to such problems, where we deal with multilingual input.
    TIA
    Nitin

    Nitin_tiwari wrote:
    I have a string which has Chinese as well as Japanese characters, and I only want to remove the Chinese characters. What's the best way to go about it?

    Oh, I see!
    Well, the problem here is that Strings don't carry any information about the language. What you can get out of a String (provided you have the necessary data from the Unicode standard) is the script that is used.
    A script can be used by multiple languages (for example, English and German use mostly the same script, even though a few characters are used only in German).
    A language can use multiple scripts (for example, Japanese uses Kanji, Hiragana and Katakana).
    And if I remember correctly, Japanese and Chinese texts share some characters in Unicode (I might be wrong, though, since I speak/write neither of those languages).
    These two facts make this kind of detection hard. In some cases it is easy (separating Latin-script text from anything else); in others it may be much tougher or even impossible (Chinese/Japanese).
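    To make the script-versus-language point concrete, here is a minimal sketch using Character.UnicodeScript (available since Java 7). The class name is mine, and note the caveat: filtering by the HAN script removes Japanese kanji along with Chinese characters, exactly as described above.

    ```java
    public class ScriptFilter {
        // Remove all code points belonging to the given Unicode script.
        // Caveat: HAN covers both Chinese hanzi and Japanese kanji, so a
        // per-language split cannot be done from the characters alone.
        static String removeScript(String s, Character.UnicodeScript script) {
            StringBuilder sb = new StringBuilder();
            s.codePoints()
             .filter(cp -> Character.UnicodeScript.of(cp) != script)
             .forEach(sb::appendCodePoint);
            return sb.toString();
        }

        public static void main(String[] args) {
            // Kanji are removed; Latin letters and hiragana survive
            System.out.println(removeScript("abc漢字かな", Character.UnicodeScript.HAN));
        }
    }
    ```

    Working on code points rather than chars matters here, since many CJK characters live outside the Basic Multilingual Plane and occupy two Java chars.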

  • Support issue for non-English characters (in html forms)

    Hi group!
    I just want to post an issue here and see if anyone else has the same problem. First off, I'm running Windows XP MCE, but the French version (not the English version). This may help find out where the problem really is.
    Second, I know a bit of HTML and such, and I'm referring to HTML character entities in this thread; there's a fairly complete list here for reference: http://www.faqs.org/docs/htmltut/characterentitiesfamsupp69.html
    I noticed that some, though not all, non-English characters written in a textarea (which is, basically, a multi-line input box) don't pass well, or at all, to the server when the form is sent from Safari. Most of the time, the content of the textarea is cut off where the first accented character is met.
    The most-used French accents (&eacute;, &agrave;) are usually interpreted well by Safari (though they may occasionally trigger the bug too), but &ocirc; and &icirc; don't do as well.
    Oddly, this bug doesn't happen every time and doesn't "crash" in the same manner each time.
    So I started this thread just to see if anyone else is having issues with non-English characters, mostly in forms. Flash/Shockwave probably works, but I'm not sure; I have not tested it yet.

    Yes, it is a known issue. I also noticed that it sometimes works, but most of the time it does not. It will hopefully be solved in the future. According to http://www.apple.com/safari/download/ changes that will come include:
    # Support for International users
    # International text input methods
    # Advanced text (contextual forms, international scripts)

  • PDF generation for non-English characters from ADF

    Hi
    We are using the piece of code below to generate a PDF from an ADF managed bean. It works fine; however, for non-English characters (e.g. Japanese, Vietnamese, Arabic) it puts '???' in the output.
    I found a few blogs:
    https://blogs.oracle.com/BIDeveloper/entry/non-english_characters_appears
    However, we are not using the BI Publisher product itself; we are using its APIs.
    Can anyone tell me where we need to set up fonts: in ADF, in WebLogic, or on the server?
    The input parameters are:
    a) XML data
    b) an InputStream, i.e. the RTF template
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import oracle.apps.xdo.XDOException;
    import oracle.apps.xdo.template.FOProcessor;
    import oracle.apps.xdo.template.RTFProcessor;

        public static byte[] genPdfRep(String pOutFileType, byte[] pXmlOut, InputStream pTemplate) {
            byte[] dataBytes = null;
            try {
                // Process the RTF template to convert it to XSL-FO format
                RTFProcessor rtfp = new RTFProcessor(pTemplate);
                ByteArrayOutputStream xslOutStream = new ByteArrayOutputStream();
                rtfp.setOutput(xslOutStream);
                rtfp.process();
                // Use the XSL template and the data from the VO to generate
                // the report, and return its bytes
                ByteArrayInputStream xslInStream = new ByteArrayInputStream(xslOutStream.toByteArray());
                FOProcessor processor = new FOProcessor();
                ByteArrayInputStream dataStream = new ByteArrayInputStream(pXmlOut);
                processor.setData(dataStream);
                processor.setTemplate(xslInStream);
                ByteArrayOutputStream pdfOutStream = new ByteArrayOutputStream();
                processor.setOutput(pdfOutStream);
                processor.setOutputFormat(FOProcessor.FORMAT_PDF); // or FOProcessor.FORMAT_HTML
                processor.generate();
                dataBytes = pdfOutStream.toByteArray();
            } catch (XDOException e) {
                e.printStackTrace();
            }
            return dataBytes;
        }
    Appreciate your help.
    Thanks,
    Abhijit

    Fonts are defined in the template you use to generate the PDF. Your application adds the data, and both are processed by the FOP processor. Now there are two possible causes of the '???':
    1. The data you send to the template already contains the '???'.
    2. The template can't digest the data (the special characters) and puts '???' in the PDF.
    Before going on, you have to find out which one is your problem. If it's the 2nd, you'd better ask in a FOP forum, as you have to solve it by changing the template.
    Timo

  • Fonts messing up on non-English Windows

    I'm using Director 10.1 and the Flash 8 Xtra for a 5-language DVD-ROM: English, Spanish, French, German and Italian. The text is stored in Director code and sent to several Flash 8 movies for display.
    I got results from my testing house today saying certain characters do not appear when Spanish or Simplified Chinese is set as the OS language on Win XP SP2. Both my Flash movies and Director text fields display ? instead of the relevant character. Sometimes the character has an accent above it; other times it is just a regular letter like e or g. I am using embedded fonts in both Director and Flash. Avenir LT 85 Heavy seems to generate these ?, and on some screens in Director using regular text members the layout breaks (words appearing one character at a time on a line) when I use Avenir 45 Book.
    On my UK PC all 5 languages display fine; it's just on these test machines.
    Any thoughts as to what might be causing this?
    Thanks
    Kevin Boyd

    > Have you tried to install the corresponding fonts on your test machines?
    > I believe that if a font is missing from a machine, Director will replace
    > it automatically with a default font, something which usually results in
    > the effect that you are describing.
    Shouldn't be an issue if the font is embedded. The only thing I can think of is to make sure that you're using the embedded version of the font for every text/field object. (There should be an * next to the font name.) Also, is this text set by the program, or is it entered by the user? If it's user-entered text, a non-English keyboard may be inputting foreign characters which your embedded font may not support. (Particularly the Chinese one.) If it's necessary for the foreign-language user to input text, you may have to write some sort of conversion code to make sure it's fully compatible.
    I really hope that Adobe can do what Macromedia never would and put Unicode support into the next version of Director. Most problems like this would go away if they did that.

  • Sending non-English parameter values to rwservlet for PDF reports

    Hi friends,
    I have developed a report which I open by calling rwservlet from an HTML form, posting all form inputs to rwservlet to use as parameters to my report.
    The problem arises when I try to pass non-English (Arabic) values: the report comes out, but the passed Arabic values appear corrupted in the PDF file, although other boilerplate Arabic fields appear very well.
    The problem is with the values I pass dynamically when posting to rwservlet, not with static Arabic boilerplate!
    thanks for help.


  • Non-English characters

    Hello, I have read several times that since Java uses Unicode, it solves the problems of non-English characters automatically, or something like that.
    But my app is not working as expected. Would someone please help?
    I have a client/server combo written in Java. The server can send messages in English or Japanese. The Japanese messages are hard-coded as String literals in the server source code. On the client side they are displayed in a JEditorPane, but the Japanese characters are all garbled. The OSes on the server side and client side are, of course, different.
    My supposition, which is obviously wrong since it isn't working, was that because both ends of the communication are Java apps, I need not worry about any encoding conversions for String literals.
    Can you suggest what is wrong here?

    How is the required encoding/decoding supposed to be done?
    When I didn't worry about non-English characters, I did the following, which WORKED.
    // SENDER side
    Socket socket ;
    PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
    String outMessage = "my message";
    out.println(outMessage);

    // RECEIVER side
    Socket socket ;
    BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    String inMessage = in.readLine();

    When non-English characters are involved, I did the following, which DID NOT WORK. Please someone correct me.

    // SENDER side
    Socket socket ;
    PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
    String outMessage = "my message";
    String utfString = new String(outMessage.getBytes(), "UTF-8");
    out.println(utfString);

    // RECEIVER side
    Socket socket ;
    InputStreamReader ins = new InputStreamReader(clientSocket.getInputStream(), "UTF-8");
    BufferedReader in = new BufferedReader(ins);
    String inMessage = in.readLine();

    The received message is still garbled.

  • Non-English characters display as square boxes

    Hi all,
    I'm not quite sure whether this question should be asked here or on the JRE board, so I'm trying here as well.
    I have been trying an open-source application called Alliancep2p (available from www.alliancep2p.com) with JRE 1.6 on an English Windows XP Pro machine.
    The problem: all Chinese input is displayed as "square boxes". The program seems to get the correct characters; it is only that everything is displayed as a square box.
    It looks like a font issue, though I'm not certain. Is there any way to change the default fonts, or to get the characters displayed correctly?
    Note: I have East Asian fonts installed, and the Java control panel can display Chinese and other non-English characters correctly.
    I tried the same application under GNU/Linux (locale UTF-8) and Chinese input/display works correctly without any problem at all. Does that mean it is not a problem with the application, or?
    The original question in the JRE board:
    http://forum.java.sun.com/thread.jspa?threadID=5265369&tstart=0
    Thanks for all the input.

    I'm not really sure whether it's a problem with the application or not. But the fact that it works perfectly under Linux makes me think it may not be; their developers said that Unicode is used throughout the program, and it seems they are not CJK users either.
    I'm not a Java guru, so I can't really tell from the source whether anything is wrong.
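    If you want to confirm it's a font-fallback problem, Java itself can report which installed fonts can render the text: Font.canDisplayUpTo returns -1 when a font covers every character in a string. The sample string here is just an illustration, not from the thread:

    ```java
    import java.awt.Font;
    import java.awt.GraphicsEnvironment;

    public class CjkFontCheck {
        public static void main(String[] args) {
            String sample = "中文測試";  // hypothetical sample of the affected text

            // List every installed font able to render the whole sample.
            // If the application's chosen font is not among them, those
            // characters fall back to the "square box" missing-glyph symbol.
            for (Font f : GraphicsEnvironment.getLocalGraphicsEnvironment().getAllFonts()) {
                if (f.canDisplayUpTo(sample) == -1) {
                    System.out.println(f.getFontName());
                }
            }
        }
    }
    ```

    If the list is non-empty but the application still shows boxes, the application is simply not selecting one of those fonts, which would match the "works on Linux, fails on English XP" behaviour.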
