Losing non-Latin symbols with X-clipboard in urxvt

I seem to have a problem with x-clipboard and urxvt. Whenever I copy explicitly via x-clipboard, all of my non-Latin characters turn into strange symbols. Interestingly, selecting text and pasting with the middle mouse button works correctly, and the text is copied with no loss of non-Latin symbols.
I've found a similar post on the forum:
https://bbs.archlinux.org/viewtopic.php?id=141785
but my situation is not quite the same. Even if I copy from urxvt into urxvt, say from one instance of vim to another, the effect I described occurs.
This is exactly what's happening to me:
http://lists.gnu.org/archive/html/emacs … 00177.html
but that thread seems to have died out.
By the way, here's my locale:
[~]%locale
LANG=en_GB.utf8
LC_CTYPE="en_GB.utf8"
LC_NUMERIC="en_GB.utf8"
LC_TIME="en_GB.utf8"
LC_COLLATE="en_GB.utf8"
LC_MONETARY="en_GB.utf8"
LC_MESSAGES="en_GB.utf8"
LC_PAPER="en_GB.utf8"
LC_NAME="en_GB.utf8"
LC_ADDRESS="en_GB.utf8"
LC_TELEPHONE="en_GB.utf8"
LC_MEASUREMENT="en_GB.utf8"
LC_IDENTIFICATION="en_GB.utf8"
LC_ALL=
And locale -a
locale -a
C
en_GB
en_GB.iso88591
en_GB.utf8
pl_PL
pl_PL.iso88592
pl_PL.utf8
polish
POSIX
Any hints would be of great assistance.
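
For reference, this exact symptom is usually the text being stored as UTF-8 bytes but decoded somewhere along the way as a single-byte encoding such as Latin-1 (or the reverse), not the characters being dropped by X itself. A rough illustration of that failure mode, in Java only because it's compact; the class name and sample word are made up for the demo, and it assumes the source file is saved as UTF-8:

import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "Région"; // any non-ASCII sample text
        // Encode as UTF-8, then (wrongly) decode the same bytes as Latin-1:
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);
        String mangled = new String(utf8, StandardCharsets.ISO_8859_1);
        System.out.println(original + " -> " + mangled); // Région -> RÃ©gion
    }
}

If the bytes in the CLIPBOARD selection are intact and only the interpretation differs, the fix is normally making sure both ends run under a UTF-8 LC_CTYPE, rather than anything clipboard-specific.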

Similar Messages

  • Question marks in non-Latin characters when searching with Google

    Hello,
    After installing Firefox I encountered a problem.
    I am on the Hebrew version of the browser, updated to the latest (33.1) version.
    When searching in the search bar with Google using non-Latin characters (Hebrew), I am taken to the Google search page, but instead of the search term I get question mark symbols (???????). I then need to type the search term again (which then works).
    This doesn't happen with DuckDuckGo or Yahoo search.
    A solution would be appreciated (and it is also important for other users).
    Thank you

    I can't seem to duplicate this.  Are you typing in the same place I am? I see I only have 6.0.1...

  • Non-Latin character sets and accented Latin characters with reFind

    I need to use reFind to deal with strings containing accented characters like žittâ lísu, but it doesn't seem to find them. Also, when using it with Cyrillic characters, it won't find individual characters, but if I test for [\w] it'll work.
    I found a livedocs page that says CF uses the Java Unicode standard for characters. Is it possible to use reFind with non-Latin characters or accented characters, or do I have to write my own Java?

    ogre11 wrote:
    > I need to use reFind to deal with strings containing accented characters like
    > žittâ lísu, but it doesn't seem to find them. Also when using it with Cyrillic
    > characters, it won't find individual characters, but if I test for [\w] it'll
    > work.
    works fine for me using unicode data:
    <cfprocessingdirective pageencoding="utf-8">
    <cfscript>
    t="Tá mé in ann gloine a ithe; Ní chuireann sé isteach nó amach orm";
    s="á";
    writeoutput("search:=#t#<br>for:=#s#<br>found at:=#reFind(s,t,1,false)#");
    </cfscript>
    what's the encoding for your data?
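
    (The same check is easy to reproduce outside CF, and it shows why the encoding of the data matters: once the text has been decoded with the right charset, a literal accented character is found like any other; if the same bytes are read under the wrong charset, the literal quietly stops matching. A rough Java sketch of that idea; the strings are just the ones from the reply above, and it assumes the source file is compiled as UTF-8:)

    import java.nio.charset.StandardCharsets;
    import java.util.regex.Pattern;

    public class AccentSearch {
        public static void main(String[] args) {
            String text = "Tá mé in ann gloine a ithe; Ní chuireann sé isteach nó amach orm";

            // Correctly decoded text: the literal "á" is found like any other character.
            System.out.println(Pattern.compile("á").matcher(text).find());    // true

            // The same UTF-8 bytes read back under the wrong charset no longer contain "á":
            String mangled = new String(text.getBytes(StandardCharsets.UTF_8),
                                        StandardCharsets.ISO_8859_1);
            System.out.println(Pattern.compile("á").matcher(mangled).find()); // false
        }
    }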

  • How to display feeds with non-Latin UTF-8 characters in Raggle?

    Has anyone tried to use Raggle to read feeds containing non-Latin UTF-8 characters?
    If you have managed it, how did you do it?
    Thanks

    I have this problem too...
    Last edited by vdo (2008-09-02 12:19:31)

  • Regex with strings that contain non-latin chars

    I am having difficulty with a regex when testing for words that contain non-Latin characters (specifically Japanese; I haven't tested other scripts).
    My code:
    keyword = StringUtil.trim(keyword);
    //if(keywords.indexOf(keyword) == -1)
    regex = new RegExp("\\b"+keyword+"\\s*;","i");
    if(!regex.test(keywords))
    {Alert.show('"'+keywords+'" does not contain "'+keyword+'"'); keywords += keyword + "; ";}
    Where keyword is
    日本国
    and keywords is
    Chion-in; 知恩院; Lily Pond; Bridge; 納骨堂; Nōkotsu-dō; Asia; Japan; 日本国; Nihon-koku; Kansai region; 関西地方; Kansai-chihō; Kyoto Prefecture; 京都府; Kyōto-fu; Kyoto; Higashiyama-ku; 東山区; Places;
    When the function is run, it will alert that keywords does not contain keyword, even though it does:
    "Chion-in; 知恩院; Lily Pond; Bridge; 納骨堂; Nōkotsu-dō; Asia; Japan; 日本国; Nihon-koku; Kansai region; 関西地方; Kansai-chihō; Kyoto Prefecture; 京都府; Kyōto-fu; Kyoto; Higashiyama-ku; 東山区; Places; " does not contain "日本国"
    Previously I was using indexOf, which doesn't have this problem, but I can't use that since it doesn't match the whole word.
    Is this a problem with my regex, is there a modifier I need to add to enable unicode support or something?
    Thanks
    Dave
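
    (One note on the pattern itself: ActionScript's RegExp follows the ECMAScript definition, where \w - and therefore \b - only covers [A-Za-z0-9_], so as far as I can tell there is simply no "word boundary" next to a kanji character and the test can never succeed. A way around it is to anchor on the "; " delimiters instead of \b. A rough sketch of that idea, written in Java rather than ActionScript, with the variable names borrowed from the post:)

    import java.util.regex.Pattern;

    public class KeywordCheck {
        public static void main(String[] args) {
            String keyword = "日本国";
            String keywords = "Chion-in; 知恩院; Japan; 日本国; Nihon-koku; Places; ";

            // Anchor on the start of the string or a "; " delimiter instead of \b,
            // so the match does not depend on what the engine considers a word character.
            Pattern p = Pattern.compile("(^|;\\s*)" + Pattern.quote(keyword) + "\\s*;");
            System.out.println(p.matcher(keywords).find()); // true
        }
    }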

  • Cannot create file with non-Latin characters - I/O

    I'm trying to create a file w/ Greek (or any other non-latin) characters ... for use in a RegEx demo.
    I can't seem to create the characters. I'm thinking I'm doing something wrong w/ IO.
    The code follows. Any insight would be appreciated. - Thanks
    import java.util.regex.*;
    import java.io.*;
    public class GreekChars {
        public static void main(String[] args) throws Exception {
            int c;
            createInputFile();
    //      String input = new BufferedReader(new FileReader("GreekChars.txt")).readLine();
    //      System.out.println(input);
            FileReader fr = new FileReader("GreekChars.txt");
            while ((c = fr.read()) != -1)
                System.out.println((char) c);
            fr.close();
        }
        public static void createInputFile() throws Exception {
            PrintStream ps = new PrintStream(new FileOutputStream("GreekChars.txt"));
            ps.println("\u03A9\u0398\u03A0\u03A3"); // omega, theta, pi, sigma
            System.out.println("\u03A9\u0398\u03A0\u03A3"); // omega, theta, pi, sigma
            ps.flush();
            ps.close();
            FileWriter fw = new FileWriter("GreekChars.txt");
            fw.write("\u03A9\u0398\u03A0\u03A3", 0, 4);
            fw.flush();
            fw.close();
        }
    }
    /*
    using a PrintStream to create the file ... and a BufferedReader to read
    C:> java GreekChars
    using a FileWriter to create the file ... and a FileReader to read
    C:> java GreekChars
    */

    Construct your file writer using a Unicode format. If you don't, the file is written using the platform "default" format - probably ASCII.
    Example:
    FileWriter fw = new FileWriter("GreekChars.txt", "UTF-8");

    I don't know what version of FileWriter you are using, but none that I know of takes two String parameters. You should try checking the API before trying to help someone, instead of just making things up.
    To the OP:
    The proper way to produce a file in UTF-8 format would be this:
    OutputStreamWriter writer = new OutputStreamWriter(new FileOutputStream("filename"), "UTF-8");
    Then to read the file, you would use:
    InputStreamReader reader = new InputStreamReader(new FileInputStream("filename"), "UTF-8");
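
    Putting those two lines together, a minimal self-contained version of the original program might look like this (the class name is mine; the file name is the one from the post, and UTF-8 is forced explicitly on both the writing and the reading side):

    import java.io.*;

    public class GreekCharsUtf8 {
        public static void main(String[] args) throws Exception {
            String greek = "\u03A9\u0398\u03A0\u03A3"; // omega, theta, pi, sigma

            // Write with an explicit UTF-8 encoder instead of the platform default.
            Writer out = new OutputStreamWriter(new FileOutputStream("GreekChars.txt"), "UTF-8");
            out.write(greek);
            out.close();

            // Read back with a matching UTF-8 decoder.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(new FileInputStream("GreekChars.txt"), "UTF-8"));
            System.out.println(in.readLine()); // the four Greek letters, if the console can show them
            in.close();
        }
    }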

  • Embedded glyphs for alpha not working for non-Latin letters

    Even after reading through all the posts here, I'm still
    confused about why embedding is necessary for alphas on text and
    how embedding works, so maybe someone can sort it out and help me
    with my problem:
    I have a FLA that is trying to present the same script in
    (the user's choice of) several languages -- including non-Latin
    alphabets like Korean, Japanese, Russian, Chinese and Arabic. I'm
    using the Strings library to load translations into my text movie
    clips on the stage.
    The language stuff works great except that alpha tweens
    weren't working. So I selected the movie clip symbols (three of
    them) in the library and told Flash to embed all glyphs. Each of
    these symbols is using Trebuchet MS, so my thinking is that I'm not
    exceeding the 65K limit because I'm embedding the same glyphs for
    each. (But I'm not sure if that limit is for the SWF or per
    symbol.) I only have one other font in the FLA and for that one,
    I'm just embedding English alpha characters.
    (Perhaps as an aside, I also included these fonts Stone Sans
    and Trebuchet MS as fonts in my library, but I don't understand
    whether this is absolutely necessary or if embedding glyphs
    eliminates the need for these.)
    So with those glyphs embedded, my text alpha tweens work
    beautifully. However -- and this is where I begin to shed tears --
    all the Korean and Cyrillic text is gone. It's just punctuation and
    no characters. I don't have Chinese, Japanese or Arabic text loaded
    yet, but I imagine that these would suffer from the same problem.
    What am I doing wrong? Do I need to create more than one
    movie to achieve my multilanguage goal? (Or worse, render each
    language separately? -- yuck!) In my old version of Flash (just
    up'd to CS3) I could tween alpha text with no problem, so why is
    all this embedding even necessary?
    Thanks,
    Tim

    Is this just impossible? Or really hard?

  • Non-latin characters in photo books

    Has anybody used symbols or non-latin alphabets when designing photo books? They come out on screen and on proofs just fine (well... it's an Apple, isn't it?) and I assume iPhoto just goes ahead and puts Unicode characters into .pdf verbatim before uploading, however, online printing services may have a problem when processing text.
    I am particularly interested in using Greek/Cyrillic alphabets when printing in the UK.


  • Non-Latin Characters

    I am getting an error that my file name has non-Latin characters. The file is named 01h_backM.jpg and I am using Save for Web.

    Are you saving to a folder with symbols in the name? Try saving to a different folder.
    Benjamin

  • Urxvt non-latin copy-paste

    Hello.
    I've used a tutorial from the Arch Linux wiki to get easy copy (Ctrl+C) and paste (Ctrl+V) in urxvt. The problem now is that urxvt can't paste non-Latin characters correctly.
    For example, I need to paste this to terminal:
    Région parisienne
    Guake terminal can render the above text correctly, while urxvt displays
    Région parisienne
    The same error also happens with Japanese characters, although I can type Japanese directly into urxvt using ibus-anthy.
    The copy-paste extension uses a perl plugin shipped with standard urxvt installation.

    @Ranmaru;
    already have it xD. I can't use middle click; I have no mouse, and my touchpad's buttons are a bit off.
    @Mr. Pedobear:
    [hinagiku@alice-tan urxvt-clipboard]# locale
    LANG=en_US.UTF-8
    LC_CTYPE=ja_JP.UTF8
    LC_NUMERIC="en_US.UTF-8"
    LC_TIME="en_US.UTF-8"
    LC_COLLATE=C
    LC_MONETARY="en_US.UTF-8"
    LC_MESSAGES="en_US.UTF-8"
    LC_PAPER="en_US.UTF-8"
    LC_NAME="en_US.UTF-8"
    LC_ADDRESS="en_US.UTF-8"
    LC_TELEPHONE="en_US.UTF-8"
    LC_MEASUREMENT="en_US.UTF-8"
    LC_IDENTIFICATION="en_US.UTF-8"
    LC_ALL=
    [hinagiku@alice-tan urxvt-clipboard]# locale -a
    C
    POSIX
    de_DE.utf8
    en_US
    en_US.iso88591
    en_US.utf8
    ja_JP
    ja_JP.eucjp
    ja_JP.ujis
    ja_JP.utf8
    japanese
    japanese.euc
    ko_KR
    ko_KR.euckr
    ko_KR.utf8
    korean
    korean.euc
    Should I set LANG to ja_JP? Even without a Japanese locale, backslashes are displayed as yen signs in the terminal and in Qt apps.

  • Waiting for the best solution to strange characters in non-Latin songs?

    I've tried to find a solution to the "strange characters" that appear when importing songs with non-Latin characters (I'm a fan of Greek music)... I've changed the ID3 tags to v2.4 and nothing happened... should I wait for the upgrade?

    On re-reading and following your directions, I really should have given you 10 pts. for that answer, Tom, and not 5.  I'd somehow misunderstood the glyph view, thinking it was only a matrix of a certain standard, limited set of symbols, like arrow dingbats and such. I see the way it works now, and it's quite practical.
    Thanks very much.
    Rob

  • [Solved] Ctrl+V not working in LibreOffice on GNOME (non-Latin layout)

    When using GNOME Shell with LibreOffice or Calligra, universal keyboard shortcuts such as Ctrl+V or Ctrl+Z don't do anything while on a non-Latin layout (Hebrew in my case).
    If I want to paste something with Ctrl+V, I have to switch the layout to English and only then will it work.
    Under MATE the shortcuts work fine regardless of the layout in both applications (and all others I have tried).
    Under GNOME Shell all other applications I tried accept the shortcuts regardless of the layout (Firefox, gedit, Empathy, Nautilus).
    Does anyone have an idea as to what might cause this behavior when using GNOME?
    Thanks.
    EDIT: Solved for LibreOffice by removing the package "libreoffice-gnome". The UI is not as pretty now, but at least the keyboard shortcuts work.
    Last edited by shimi (2013-08-09 09:00:50)

    After months of switching layouts and banging my head against this bug, I thought I should check the LibreOffice settings (I'm using 4.1.5.3 now). And what do you know? I did find something. In just a few clicks.
    This is not a bug! It's simply a matter of configuration.
    For the regular keyboard shortcuts (like Ctrl+C, Ctrl+V, etc.) to remain operational in LibreOffice applications while using a non-Latin keyboard layout (like Greek or Russian), go to Tools -> Options -> Language Settings -> Languages, check the "Ignore system input language" option, save, and Bob's your uncle.
    Hope this helps.
    Cheers!
    PS
    Technically, though, shortcuts still remain language-dependent. This means if you enable this option, you will have to set your document languages manually.

  • How to send non-latin unicode characters from Flex application to a web service?

    Hi,
    I am creating an XML containing data entered by the user into a TextInput. The XML is then sent to HTTPService.
    I've tried this
    var xml : XML = <title>{_title}</title>;
    and this
    var xml : XML = new XML("<title>" + _title + "</title>");
    _title variable is filled with string taken from TextInput.
    When the user enters non-Latin characters (e.g. Russian), the web service responds that the XML contains characters that are not UTF-8.
    I ran a sniffer and found that garbled characters are sent to the web service, like
    <title>����</title>
    How can I encode non-Latin characters to UTF-8?
    I have an idea to use ByteArray and the pair of functions readMultiByte / writeMultiByte (to write in the current character set and read back as UTF-8), but I need to determine the current character set Flex (or TextInput) is using.
    Can anyone help convert the characters?
    Thanks in advance,
    best regards,
    Sergey.

    Found the answer myself: set System.useCodePage to false.
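
    (What useCodePage toggles is, in effect, which byte sequence the string is turned into on the wire: a legacy code page produces completely different bytes from UTF-8 for the same Russian text, and the service only understands the latter. A small illustration of the difference, written in Java rather than ActionScript; the class name is made up, and it assumes the JRE ships the windows-1251 charset used here as the example legacy code page:)

    import java.nio.charset.Charset;

    public class EncodingBytes {
        public static void main(String[] args) {
            String title = "Привет"; // sample Russian text

            printBytes(title, Charset.forName("windows-1251")); // legacy single-byte code page
            printBytes(title, Charset.forName("UTF-8"));        // what the web service expects
        }

        static void printBytes(String s, Charset cs) {
            StringBuilder hex = new StringBuilder();
            for (byte b : s.getBytes(cs)) {
                hex.append(String.format("%02X ", b & 0xFF));
            }
            System.out.println(cs + ": " + hex);
        }
    }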

  • Non latin characters in Safari search

    In Safari 6 the search and URL fields are combined. That's fine, except...
    We can no longer search using non-Latin characters, because the field accepts only Latin characters. I was trying to search for a Japanese term, and when I switch to Hiragana input and move to the search field, the input switches back to English.
    What's the workaround??

    The problem has gone away. I suspect it was a problem with corrupted prefs. I trashed the Safari prefs and rebooted to clear another problem and no longer have the problem with search using Japanese characters.
    FWIW, the problem I had when I trashed the prefs was with trying to mail a Safari page. I ran in to this originally a week or two ago and called Apple who told me to delete the Safari prefs and reboot. (Actually they gave an alternate procedure to try first, but I didn't bother.) That appears to be a recurrent problem and since there was no hesitation on the solution when I called I would guess that it will be fixed in an early patch. I had already tried trashing the mail prefs since that's where the problem actually appeared (an extra copy of Mail would open, and then hang) but it was in fact the Safari prefs that was causing the problem. I've had to do the delete-and-reboot routine every few days. Not sure why the reboot is required, but it obviously is since just quitting Safari or even logging out doesn't fix it.

  • Non latin characters in .cfm filename

    Hi - I have users who want to name files with non-Latin characters, e.g.
    Логотип_БелРусь_2500x1.cfm
    We get a file-not-found error; it is not an IIS issue, we have UTF-8 encoding, and we are running CF8.
    Yes, we can rename the files, but for now we would like to know whether non-Latin characters are allowed in .cfm file names.
    Thank you!
    Sapna

    PaulH wrote:
    en_US is the JRE locale. Is that the same as the OS? And what file encoding? (Check via cfadmin.)
    I ask because I'm pretty sure you can't use non-ASCII file names w/ CF. There's an open bug on that:
    http://cfbugs.adobe.com/cfbugreport/flexbugui/cfbugtracker/main.html#bugId=77177
    I can only guess that the file encoding isn't Latin-1, etc., and/or that the OS locale matches the language of the file name.
    cfadmin gives pretty much the same information. Here's a direct copy:
    Server Product: ColdFusion
    Version: 9,0,0,241018
    Edition: Developer
    Serial Number:
    Operating System: Windows 2000
    OS Version: 5.0
    Update Level: /C:/ColdFusion9/lib/updates/hf900-78588.jar
    Adobe Driver Version: 4.0 (Build 0005)
    JVM Details
    Java Version: 1.6.0_12
    Java Vendor: Sun Microsystems Inc.
    Java Vendor URL: http://java.sun.com/
    Java Home: C:\ColdFusion9\runtime\jre
    Java File Encoding: Cp1252
    Java Default Locale: en_US
    File Separator:
    Path Separator:
    Line Separator: Chr(13)
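
    (If anyone wants to reproduce those last values outside cfadmin: the JVM's default file encoding and default locale - presumably what cfadmin reports as "Java File Encoding" and "Java Default Locale" - can be dumped from any small Java class; the class name here is made up:)

    import java.nio.charset.Charset;
    import java.util.Locale;

    public class JvmDefaults {
        public static void main(String[] args) {
            System.out.println("file.encoding  = " + System.getProperty("file.encoding"));
            System.out.println("defaultCharset = " + Charset.defaultCharset());
            System.out.println("default locale = " + Locale.getDefault());
        }
    }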
