[SOLVED] Urxvt + Inconsolata Displaying UTF-8 Characters Incorrectly

I use the Inconsolata font with urxvt and have noticed that some characters are displayed improperly. For example, an "en dash" (U+2013) appears as a capital N with a tilde over it. It appears that this is because Inconsolata doesn't support anything other than the basic Latin characters. Is there any way to allow urxvt to "fall back" on another font for rendering those specific characters?
Last edited by aclindsa (2012-03-16 16:42:35)

Unfortunately, the "-fn" switch (or *font: in .Xresources) does not seem to work properly with Inconsolata. For example, if I do the following:
urxvt -fn "xft:Terminus"
The characters display properly, but if I add Inconsolata ahead of that, like so:
urxvt -fn "xft:Inconsolata,xft:Terminus"
I get the garbage characters. I have experimented with different "fallback" fonts in all different orders, sizes, etc. and whenever Inconsolata is first, I see the garbage chars. The more I look into it, the more it looks like the wrong characters are being reported to urxvt for Inconsolata.
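For reference, the .Xresources equivalent of what I'm trying is along these lines (the sizes are just an example):
URxvt.font: xft:Inconsolata:size=11,xft:DejaVu Sans Mono:size=11,xft:Terminus:size=11
As a side note, assuming a reasonably recent fontconfig, you can list which installed fonts actually contain U+2013 with:
fc-list ':charset=2013' family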

Similar Messages

  • Displaying UTF-8 characters

    Hello all - I'm having a bit of trouble with my current project. I'm adding international support to it, I have an XML file with all the translated strings in UTF-8 format. I read it in using the following code:
              InputStream in = new FileInputStream(name);
              InputStreamReader isr = new InputStreamReader(in, "UTF8");
              BufferedReader br = new BufferedReader(isr);
              StringBuffer buf = new StringBuffer();
              String line = new String();
              while ((line = br.readLine()) != null) {
                   buf.append(line);
                   buf.append('\n');
              }
              return buf.toString();
    The string buffer is then sent off to be chopped up and parsed and displayed. However, for all non-Latin characters, I get a question mark in a diamond. This code works in a very similar project, so I'm unsure why it is not working now, when almost all the program components are the same.
    Any ideas?
    thanks!
    Jake

    So the only thing that has changed is the operating system on which you are running this system?
    If that's the case then it's possible that you were (accidentally or otherwise) converting between bytes and strings using the default charset on the non-Apple machine, and this happened to work for some reason. And on the Apple machine perhaps the default charset is different, and that applecart (sorry!) got upset.
    And have you checked that you can actually display non-ASCII characters on your GUI setup from a simple program where you just hard-code those characters?
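    As a quick sanity check along those lines, a minimal sketch of such a hard-coded test (the class name and sample characters here are purely illustrative) could be:
              import javax.swing.JFrame;
              import javax.swing.JLabel;
              public class NonAsciiDisplayTest {
                   public static void main(String[] args) {
                        // Unicode escapes keep the source-file encoding out of the picture
                        String sample = "\u00e9 \u00fc \u2013 \u4f60\u597d";
                        JFrame frame = new JFrame("Non-ASCII display test");
                        frame.add(new JLabel(sample));
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.pack();
                        frame.setVisible(true);
                   }
              }
    If the label shows the accented letters and the dash correctly, the problem is more likely in the byte-to-string conversion than in the GUI fonts.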

  • [Solved] URXVT cannot display Japanese Characters

    Solved:
    I had a typo in my locale.conf, setting to an invalid locale - apparently that did it.
    Thanks for the help!
    Hi everybody!
    I just now re-installed Arch because I switched hard-drives (to an SSD) and everything seems to be working again, apart from one thing:
    urxvt doesn't display Japanese characters, just question marks when using ls and garbage characters otherwise.
    I literally copied and pasted my ~/.Xresources from my old install, so I'm not quite sure what went wrong.
    This is said file:
    Urxvt.urgentOnBell: True
    urxvt*cursorBlink: false
    !urxvt*internalBorder: 0
    !urxvt*externalBorder: 0
    URxvt*.depth: 32
    URxvt*.background: [85]#000000
    ! URxvt.scrollstyle: plain
    URxvt.scrollBar: false
    URxvt.foreground: grey
    ! red
    URxvt.color1: #CC0000
    URxvt.color9: #B33838
    ! blue
    URxvt.color4: #3465A4
    URxvt.color12: #729FCF
    ! yellow
    Urxvt.color3: #b48363
    URxvt.color11: #d49b4e
    !URxvt.font: 8x13
    urxvt*font: xft:DejaVu Sans Mono:size=8:antialas=true,xft:Kochi Gothic:size=8
    This is what fc-list has to say:
    % fc-list | grep "Kochi\|DejaVuSansMono"
    /usr/share/fonts/TTF/DejaVuSansMono.ttf: DejaVu Sans Mono:style=Book
    /usr/share/fonts/TTF/kochi-mincho-subst.ttf: Kochi Mincho,東風明朝:style=Regular,標準
    /usr/share/fonts/TTF/kochi-gothic-subst.ttf: Kochi Gothic,東風ゴシック:style=Regular,標準
    /usr/share/fonts/TTF/DejaVuSansMono-Oblique.ttf: DejaVu Sans Mono:style=Oblique
    /usr/share/fonts/TTF/DejaVuSansMono-Bold.ttf: DejaVu Sans Mono:style=Bold
    /usr/share/fonts/TTF/DejaVuSansMono-BoldOblique.ttf: DejaVu Sans Mono:style=Bold Oblique
    I already tried re-installing the fonts and I also tried out alternative fonts, but nothing seems to work.
    All the other settings from the ~/.Xresources file are applied perfectly, so I'm not quite sure where to look for the error.
    My browser (dwb) displays Japanese characters just fine.
    Any help is greatly appreciated
    Edit: I just realized that urxvt seems to completely ignore the fonts line - I had that problem once before, when I used the AMD Catalyst driver and not the open source one.
    I now have an Nvidia card and started using the proprietary driver - maybe that has something to do with it?
    Last edited by lorizean (2013-12-02 13:16:14)

    Works here:
    URxvt*depth: 32
    URxvt*buffered: true
    URxvt*termName: rxvt-256color
    URxvt.font: xft:Terminus:pixelsize=12:antialias=false
    urxvt.imLocale: pl_PL.ISO8859-2
    What's the output of 'localectl'?
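    For comparison, the relevant part of a working /etc/locale.conf is just a single line like the one below - use a locale that 'locale -a' actually lists, and watch for typos, since an invalid value generally makes programs fall back to the C locale:
    LANG=en_US.UTF-8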

  • X11 won't display UTF-8 characters (using Kbabel)

    Hello everyone
    Boy, the discussion lists have changed! Forums are a good idea, though.
    I'm a volunteer open-source translator, and the most effective translation editor for us is Kbabel. (LocFactoryEditor [1] for OSX is very good, though. I use it continually, and BBEdit [2] for CVS/SVN management of translation files.)
    When I installed Kbabel, everything went OK, but when I tried to open a PO (Portable Object, translation file format, basically a text file) file, the translations in my language (Vietnamese) were just gibberish. None of the accented characters displayed correctly. Since my language is pretty much all accented characters, this was a critical problem. Characters entered had the same problem. No readable data in, no readable data out, no translation possible.
    I had set my X11 prefs to inherit my keyboard choice, and had chosen the right keyboard. I have Lucida Grande set as my default font, and it handles Vietnamese very well.
    This was months ago. I reported it as a bug against Kbabel, but with investigation, we found it was an X11 problem. I was told at the time (sorry, I can't remember the reference) that this was a known X11 bug, which Apple had not yet fixed. Judging by the continuing mess when I try to use Kbabel now, it hasn't been fixed.
    How do we track bugs reported to Apple, or continuing problems of this type? Do you think this is really the problem, or is there another way to solve it?
    Any help very much appreciated.
    from Clytie
    [1] http://www.triplespin.com/en/products/locfactoryeditor.html
    [2] http://www.barebones.com/index.shtml

    I would suggest you repeat your query in the unix forum, where the experts on such stuff are more likely found, and where there have been other threads about using accented chars in terminal, etc.
    http://discussions.apple.com/forum.jspa?forumID=735

  • TextEdit doesn't display UTF-8 characters correctly

    I have a plain text file with some German umlauts, encoded in UTF-8. When I select this file in the Finder, the umlauts are displayed correctly in the Preview. However, if I double click on that file to have it loaded into the TextEdit program, the umlauts are displayed incorrectly.
    The screenshot at ftp://ftp.cadsoft.de/pub/etc/mac-osx-utf-8-bug.png shows the Finder's window in the background, and the TextEdit window in the foreground.
    When I explicitly load the file into the TextEdit program, with "Plain Text Encoding" set to "Unicode (UTF-8)" the text is displayed correctly. Only with "Automatic" it doesn't work.
    Am I doing something wrong here, or is this an actual bug in TextEdit?
    Franz
    Mac mini   Mac OS X (10.4.6)  

    Well, the "Notepad" editor on Windows XP does it
    correctly with both UTF-8 and ISO8859-1 umlauts.
    I think Notepad identifies UTF-8 correctly because Windows (unlike other OS's) puts a BOM at the start of UTF-8 files. Normally you only see this at the start of UTF-16 files, which many text editors can identify correctly.
    However, if the Mac user clicks on our German README text file and has "Plain text file encoding" set to "Automatic" (which apparently is the default), he sees broken umlauts.
    Yes I think Automatic means MacRoman. One possible solution is to use either rtf or html for the readme. I think the encoding for these will be recognized correctly. You could also try putting a BOM at the start of your UTF-8 plain text file.
    I was under the impression that "UTF-8" was the plain text format on the Mac.
    No, OS X stores data internally as UTF-16. Anything using xml will be UTF-8, and that's now the default encoding in Mail for lots of non-Roman scripts, but otherwise the other encodings seem to have pretty equal status.
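    If you want to experiment with the BOM suggestion, one way to prepend it to a plain text file (assuming a shell whose printf understands \x escapes, such as bash) is:
    printf '\xEF\xBB\xBF' | cat - README.txt > README-with-bom.txt
    which should give TextEdit's "Automatic" setting a better chance of detecting UTF-8.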

  • Displaying UTF Chinese characters

    Here's my code
          final StringBuilder sb = new StringBuilder();
          DropWindow dw = new DropWindow() {
               public void runFile(File f, Object[] extras) {
                    try {
                         String str = IOUtils.readFileAsString(f);
                         for (int i = 0; i < str.length(); i++) {
                              System.out.println("char[" + i + " ] = " + (int)str.charAt(i));
                         }
                         sb.append(str);
                    } catch (IOException iox) {
                         iox.printStackTrace();
                    }
               }
          };
          JPanel jp = new JPanel() {
               public void paintComponent(Graphics g) {
                    super.paintComponent(g);
                    g.drawString(sb.toString(), getWidth() / 2, getHeight() / 2);
               }
          };
          dw.add(jp);
          WindowUtilities.visualize(dw);
    When I run this I get
    char[0 ] = 20320
    char[1 ] = 22909
    char[2 ] = 13
    char[3 ] = 10
    in the command line. On the screen I get 2 squares where I'm hoping my Chinese characters will be.
    Is it just that I'm not using an international version of Java? I just went to the download page and it seems like all Java versions are international now. Could it be that my font can't represent the characters?
    How can I paint Chinese characters?

    I don't understand. 'Arial' will never display Chinese. In JRE1.6, the only font I know that displays Chinese is "AR PL ShanHeiSun Uni". In 1.5 there was a second one but that seems to have been removed.
    When I want to find a font for a particular character set I use the following simple Swing application -
    import java.awt.BorderLayout;
    import java.awt.Font;
    import java.awt.GraphicsEnvironment;
    import java.awt.GridLayout;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.util.ArrayList;
    import java.util.Iterator;
    import javax.swing.BorderFactory;
    import javax.swing.JCheckBox;
    import javax.swing.JComboBox;
    import javax.swing.JComponent;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.JList;
    import javax.swing.JPanel;
    import javax.swing.JScrollPane;
    import javax.swing.ListSelectionModel;
    import javax.swing.UIManager;
    import javax.swing.event.ListSelectionEvent;
    import javax.swing.event.ListSelectionListener;
    public class FontAndCharDisplay extends JFrame
    {
        private JComponent createStyleSelector()
        {
            JPanel panel = new JPanel();
            panel.add(boldButton);
            panel.add(italicButton);
            ActionListener styleListener = new ActionListener()
            {
                public void actionPerformed(ActionEvent event)
                {
                    updateDisplayOfChars();
                }
            };
            boldButton.addActionListener(styleListener);
            italicButton.addActionListener(styleListener);
            panel.setBorder(BorderFactory.createTitledBorder("Style"));
            return panel;
        }
        private JComponent createSizeSelector()
        {
            int[] sizes =
            {8,9,10,11,12,14,16,18,20,24,28,32, 36, 40, 48, 56, 64, 72, 84,100};
            final JComboBox sizeSelector = new JComboBox();
            for (int index = 0; index < sizes.length; index++)
                sizeSelector.addItem(new Integer(sizes[index]));
            fontSize = 14;
            sizeSelector.setSelectedItem(new Integer(fontSize));
            sizeSelector.addActionListener(new ActionListener()
            {
                public void actionPerformed(ActionEvent event)
                {
                    fontSize = ((Integer)sizeSelector.getSelectedItem()).intValue();
                    updateDisplayOfChars();
                }
            });
            sizeSelector.setBorder(BorderFactory.createTitledBorder("Size"));
            sizeSelector.setOpaque(false);
            return sizeSelector;
        }
        private JComponent createPageSelector()
        {
            String[] pageAddresses = new String[256];
            for (int row = 0; row < 16; row++)
                for (int col = 0; col < 16; col++)
                    pageAddresses[row*16+col] = HEX_CHARS[row] + (HEX_CHARS[col] + "00");
            final JComboBox addressSelector = new JComboBox(pageAddresses);
            addressSelector.addActionListener(new ActionListener()
            {
                public void actionPerformed(ActionEvent event)
                {
                    updateAddressBase(Integer.parseInt((String)addressSelector.getSelectedItem(), 16));
                }
            });
            addressSelector.setBorder(BorderFactory.createTitledBorder("Page"));
            addressSelector.setOpaque(false);
            return addressSelector;
        }
        private FontAndCharDisplay()
        {
            super("Font Display");
            setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            JPanel upperPanel = new JPanel(new BorderLayout());
            JPanel controlsPanel = new JPanel(new GridLayout(1,0));
            controlsPanel.add(createPageSelector());
            controlsPanel.add(createSizeSelector());
            controlsPanel.add(createStyleSelector());
            upperPanel.add(controlsPanel, BorderLayout.NORTH);
            fontSelector = new JList(GraphicsEnvironment.getLocalGraphicsEnvironment().getAvailableFontFamilyNames());
            fontSelector.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
            fontSelector.addListSelectionListener(new ListSelectionListener()
            {
                public void valueChanged(ListSelectionEvent e)
                {
                    if (!e.getValueIsAdjusting())
                        updateDisplayOfChars();
                }
            });
            JScrollPane fontNameDisplay = new JScrollPane(fontSelector);
            fontNameDisplay.setBorder(BorderFactory.createTitledBorder("Name"));
            upperPanel.add(fontNameDisplay, BorderLayout.CENTER);
            fontSelector.setSelectedIndex(0);
            getContentPane().add(upperPanel, BorderLayout.NORTH);
            // Build the set of components to display the characters
            for (int index = 0; index < 256; index++)
                charDisplayFields.add(new JLabel(""));
            // Build the main character display area
            int startPoint = 0;
            final JPanel charDisplayPanel = new JPanel(new GridLayout(0, 17));
            charDisplayPanel.add(new JLabel(""));
            for (int col = 0; col < 16; col++)
                charDisplayPanel.add(new JLabel(Character.toString(HEX_CHARS[col])));
            for (int row = 0; row < 16; row++)
            {
                charDisplayPanel.add(new JLabel(Character.toString(HEX_CHARS[row])));
                for (int col = 0; col < 16; col++)
                    charDisplayPanel.add((JComponent)charDisplayFields.get(startPoint++));
            }
            JScrollPane characterDisplay = new JScrollPane(charDisplayPanel);
            characterDisplay.setBorder(BorderFactory.createTitledBorder("Page Display"));
            getContentPane().add(characterDisplay, BorderLayout.CENTER);
            updateAddressBase(0);
            updateDisplayOfChars();
            pack();
        }
        private void updateAddressBase(int start)
        {
            for (Iterator it = charDisplayFields.iterator(); it.hasNext();)
            {
                JLabel label = (JLabel)it.next();
                label.setText(Character.toString((char)start++));
            }
        }
        private void updateDisplayOfChars()
        {
            // Calculate the style
            int style = 0;
            if (italicButton.isSelected())
                style |= Font.ITALIC;
            if (boldButton.isSelected())
                style |= Font.BOLD;
            // Build the font
            Font font = new Font((String)fontSelector.getSelectedValue(), style, fontSize);
            System.out.println(font);
            // Update all the char labels to use the new font
            for (Iterator it = charDisplayFields.iterator(); it.hasNext();)
            {
                JLabel label = (JLabel)it.next();
                label.setFont(font);
            }
        }
        public static void main(String[] args)
        {
            try
            {
                UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
                new FontAndCharDisplay().setVisible(true);
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
        private int fontSize = 10;
        private ArrayList charDisplayFields = new ArrayList(256);
        private JCheckBox boldButton = new JCheckBox("Bold");
        private JCheckBox italicButton = new JCheckBox("Italic");
        private JList fontSelector;
        private static final char[] HEX_CHARS =
        {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
    }
    Please don't look too close at the code - it was one of my earliest Swing applications.
    P.S. These values
    char[2 ] = 13
    char[3 ] = 10
    are just CR and LF and not Chinese.
    Edited by: sabre150 on Aug 29, 2008 9:21 PM
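    If you just want a quick, non-interactive list of installed fonts that can render a given string, Font.canDisplayUpTo is also handy. A minimal sketch (the sample string is the two characters 20320 and 22909 from the output above; the class name is made up):
              import java.awt.Font;
              import java.awt.GraphicsEnvironment;
              public class ChineseCapableFonts {
                   public static void main(String[] args) {
                        String sample = "\u4f60\u597d"; // code points 20320 and 22909
                        String[] families = GraphicsEnvironment.getLocalGraphicsEnvironment()
                                  .getAvailableFontFamilyNames();
                        for (String family : families) {
                             Font font = new Font(family, Font.PLAIN, 14);
                             // canDisplayUpTo returns -1 when the font can show the whole string
                             if (font.canDisplayUpTo(sample) == -1) {
                                  System.out.println(family);
                             }
                        }
                   }
              }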

  • UTF-8 characters not displaying in IE6

    Dear Sirs,
    I have an issue displaying UTF-8 characters in Internet Explorer 6.
    I set up all my JSP pages with the encoding UTF-8.
    Characters from any language (Chinese, Tamil, etc.) display perfectly in the Firefox browser.
    But in Internet Explorer, the characters are not displayed; it shows things like ?! instead.
    Could any body help me out?
    Thanks
    mulaimaran

    Thanks Viravan,
    But I have added this line in my JSP before the html tag:
    <%@ page contentType="text/html;charset=UTF-8" pageEncoding="UTF-8" %>
    After the html tag, I added this meta tag:
    <META http-equiv="Content-Type" content="text/html;charset=UTF-8">
    So the UTF-8 encoding is able to show characters from different languages in the Firefox browser.
    But in Internet Explorer 6, characters from other languages are still not displayed.
    > jsp sends out the UTF-8 BOM (hex: EF BB BF) before the HTML tag.
    I can't understand this line. I'm new to Java.
    So, please help me out.
    Thanks
    mullaimaran
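    In case it helps, one thing people commonly do in addition to the page directive and the meta tag (this is a generic sketch, not something from this thread) is force the charset on the HTTP response itself with a servlet filter mapped to /* in web.xml, so the browser cannot fall back to a wrong default:
              import java.io.IOException;
              import javax.servlet.Filter;
              import javax.servlet.FilterChain;
              import javax.servlet.FilterConfig;
              import javax.servlet.ServletException;
              import javax.servlet.ServletRequest;
              import javax.servlet.ServletResponse;
              public class ForceUtf8Filter implements Filter {
                   public void init(FilterConfig config) throws ServletException {}
                   public void destroy() {}
                   public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                             throws IOException, ServletException {
                        // Decode request parameters as UTF-8 and declare UTF-8 in the Content-Type header
                        request.setCharacterEncoding("UTF-8");
                        response.setContentType("text/html; charset=UTF-8");
                        chain.doFilter(request, response);
                   }
              }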

  • [SOLVED] VIM not displaying many glyphs

    I'm struggling to get the full glyph set to display in VIM. Particularly missed are the mathematical super- and sub-script. I've worked on the problem and tried various fixes for displaying UTF-8 characters without success.
    (Code block bad form? OK, T. IMHO, there's that type of post where the OP doesn't know why something isn't working and quotes a dump truck of details, even though they're not sure what's relevant and what isn't. And the longer the quote, the harder it is to figure out what they're doing wrong. I didn't want to be that guy, so I was hiding the verbosity, in case my font problems are simpler than I've made them out to be.)
    1) Applied settings from these WIKI pages:
    Fonts
    Xterm
    2) Apps I've installed
    $ pacman -Qs font
    local/dina-font 2.92-4
        A monospace bitmap font, primarily aimed at programmers
    local/fontconfig 2.11.1-1
        A library for configuring and customizing font access
    local/fontsproto 2.1.3-1
        X11 font extension wire protocol
    local/freetype2 2.5.3-2
        TrueType font rendering library
    local/gsfonts 1.0.7pre44-4
        Standard Ghostscript Type1 fonts from URW
    local/libfontenc 1.1.2-1
        X11 font encoding library
    local/libotf 0.9.13-2
        OpenType Font library
    local/libxfont 1.4.7-3
        X11 font rasterisation library
    local/libxft 2.3.2-1
        FreeType-based font drawing library for X
    local/t1lib 5.1.2-5
        Library for generating character- and string-glyphs from Adobe Type 1 fonts
    local/tamsyn-font 1.10-1
        A monospaced bitmap font for the console and X11
    local/terminus-font 4.39-1
        Monospace bitmap font (for X11 and console)
    local/ttf-bitstream-vera 1.10-10
        Bitstream vera fonts
    local/ttf-droid 20121017-3
        General-purpose fonts released by Google as part of Android
    local/xorg-bdftopcf 1.0.4-2 (xorg xorg-apps)
        Convert X font from Bitmap Distribution Format to Portable Compiled Format
    local/xorg-font-util 1.3.0-2 (xorg-fonts xorg)
        X.Org font utilities
    local/xorg-font-utils 7.6-4
        Transitional package depending on xorg font utilities
    local/xorg-fonts-100dpi 1.0.1-5 (xorg)
        X.org 100dpi fonts
    local/xorg-fonts-alias 1.0.3-1
        X.org font alias files
    local/xorg-fonts-encodings 1.0.4-4 (xorg-fonts xorg)
        X.org font encoding files
    local/xorg-fonts-misc 1.0.1-3
        X.org misc fonts
    local/xorg-fonts-type1 7.4-3
        X.org Type1 fonts
    local/xorg-mkfontdir 1.0.7-2 (xorg xorg-apps)
        Create an index of X font files in a directory
    local/xorg-mkfontscale 1.1.1-1 (xorg-apps xorg)
        Create an index of scalable font files for X
    3) A forum search found some common issues: 
    unicode symbols not working in my terminal
    $ localectl
       System Locale: LANG=en_US.UTF-8
                      LC_COLLATE=C
           VC Keymap: US
          X11 Layout: n/a
    $ locale -a
    C
    en_US.utf8
    POSIX
    $ echo $TERM
    xterm-256color
    4) ~/.Xresources
    ! wiki.archlinux.org/.../Xterm
    xterm*termName:           xterm-256color
    xterm*locale:             true
    xterm*saveLines:          4096
    xterm*bellIsUrgent:       false
    xterm*VT100.geometry:     80x25
    xterm*faceName:           Droid:style=Regular:size=12
    xterm*dynamicColors:      true
    xterm*utf8:               2
    xterm*toolBar:            false
    5) ~/.xinitrc
    # X11 Fonts
    xset +fp /usr/share/fonts
    Last edited by xtian (2014-07-27 14:43:23)

    Linux fonts are a muddle.  Consoles can only display 256 characters, maybe 512.  You simply cannot display many texts in a console. To navigate through the font mess in X, you need some familiarity with fontconfig.  Xft uses fontconfig to select fonts.  Fontconfig documentation is not user-friendly.
    Droid is a family of fonts. My installation of the Droid family includes 27 different fonts. The command fc-list will list fonts matching a pattern.  I usually filter the output by piping through grep. To list the Droid fonts, file name first followed by the fontconfig name, I use:
    $ fc-list | grep Droid
    Your fc-match results for Droid are from fontconfig doing its best to give you a readable display.  Fontconfig cannot find a matching font for the name 'Droid', so it falls back to a "safe" font, 'Bitstream Vera Sans'.
    XTerm or UXTerm or URxvt
    I have my locale correctly configured, I think. I do not see any real advantage for uxterm over xterm. In my X resources, I include the lines,
    xterm*termName: xterm-256color
    XTerm*locale: true
    For good glyph coverage with xterm, I have found 'DejaVu Sans Mono' to be among the better fonts.  If I truly need utf8 coverage, I use urxvt. Urxvt allows one to use a ladder of fonts. If the character is not found in the first font listed, urxvt will search through the other listed fonts until it finds a glyph that can be displayed.
    urxvt*font: xft:DejaVu Sans Mono:style=Book:antialias=false:size=8, \
    xft:WenQuanYi Bitmap Song:size=8, \
    xft:FreeSerif:style=Regular, \
    xft:unifont:style=Medium:antialias=false
    Here's a screenshot with three xterms using Droid, DejaVu Sans Mono, and Liberation Mono, plus one urxvt using the fonts in the code above.  They all show the same portion of Markus Kuhn's utf8 test text.

  • [SOLVED] urxvt doesn't display correctly some special characters

    Hello everyone,
    I have a weird issue with urxvt. For some reason it doesn't display some special characters correctly.
    Here is a comparison between xfce4-terminal and rxvt-unicode (I used these characters as an example):
    xfce4-terminal :
    urxvt :
    And here is my .XDefaults file (without the color and the plugin part, since it's irrelevant):
    !Font
    URxvt.font: xft:PragmataPro:pixelsize=11:antialias=false
    !General
    URxvt.scrollBar: false
    URxvt*imLocale: fr_CH.UTF-8
    URxvt.saveLines: 5000
    URxvt.geometry: 95x26+50+50
    Does someone have an idea what the problem could be?
    Thanks in advance.
    Last edited by mwm (2013-11-13 13:15:48)

    This is what I think is happening.
    PragmataPro may not contain those glyphs.  It appears to have a wide array of glyphs but it is not unicode complete.
    Xfce-terminal is a vte terminal.  When a glyph cannot be found in the desired font, it will find the glyph in the 'closest' font.  Urxvt will only use the glyphs in the font or fonts specified.  If PragmataPro does not contain the glyphs, urxvt will display boxes.
    You can give urxvt a series of fonts to search. It will search for a glyph through the listed fonts in the order you specify.  Here's an example from my urxvt configs:
    urxvt*font: xft:DejaVu Sans Mono:style=Book:antialias=false:size=8, \
    xft:WenQuanYi Bitmap Song:size=8, \
    xft:FreeSerif:style=Regular, \
    xft:unifont:style=Medium:antialias=false
    I couldn't use FreeSerif or unifont as a main font, but for an occasional glyph, it works for me.  This file, http://www.cl.cam.ac.uk/~mgk25/ucs/exam … 8-demo.txt, can be displayed in urxvt correctly, with only a few unknown glyphs showing as boxes in the Amharic section.

  • [SOLVED] urxvt does not display special characters

    Okay so I've been scouring the internet, Googling my face off. I am an intermediate linux user, first-time Arch user (just came over from Mint and love it). Perhaps I'm missing something obvious, but I am unable to get urxvt to display special characters. E.g.:






    displays in the terminal as:
    Relevant information...
    .Xresources:
    !color0 (black) = Black
    !color1 (red) = Red3
    !color2 (green) = Green3
    !color3 (yellow) = Yellow3
    !color4 (blue) = Blue3
    !color5 (magenta) = Magenta3
    !color6 (cyan) = Cyan3
    !color7 (white) = AntiqueWhite
    !color8 (bright black) = Grey25
    !color9 (bright red) = Red
    !color10 (bright green) = Green
    !color11 (bright yellow) = Yellow
    !color12 (bright blue) = Blue
    !color13 (bright magenta) = Magenta
    !color14 (bright cyan) = Cyan
    !color15 (bright white) = White
    !foreground = Black
    !background = White
    !URxvt*termName: rxvt-256color
    URxvt*termName: rxvt-unicode
    URxvt*transparent: true
    URxvt*depth: 32
    URxvt*shading: 70
    URxvt*saveLines: 12000
    URxvt*foreground: #BABABA
    URxvt.font: xft:terminus:pixelsize=11:antialias=false
    URxvt*scrollBar: false
    URxvt*borderLess: false
    URxvt*inheritPixmap: true
    URxvt.urlLauncher: google-chrome
    URxvt.imLocale: en_US.utf8
    URxvt*color0: #000000
    URxvt*color4: #005577
    URxvt*color6: #89b6e2
    URxvt*color7: #cccccc
    URxvt*color8: #555753
    URxvt*color12: #0075A3
    URxvt*color14: #46a4ff
    URxvt*color15: #ffffff
    Output of locale and locale -a:
    [dusty] [~]
    $ locale
    LANG=C
    LC_CTYPE="C"
    LC_NUMERIC="C"
    LC_TIME="C"
    LC_COLLATE="C"
    LC_MONETARY="C"
    LC_MESSAGES="C"
    LC_PAPER="C"
    LC_NAME="C"
    LC_ADDRESS="C"
    LC_TELEPHONE="C"
    LC_MEASUREMENT="C"
    LC_IDENTIFICATION="C"
    LC_ALL=
    [dusty] [~]
    $ locale -a
    C
    POSIX
    en_US.utf8
    I don't believe it's a font issue, as those characters display fine in a text editor using Terminus.
    Last edited by rollhax (2012-09-23 01:04:16)

    Nisstyre56 wrote: Did you install "rxvt-unicode" or just "rxvt"? I'm sorry, but I have to ask
    Yes
    stlarch wrote: Check the wiki page for locale.
    Read over it ten times. The only thing that looks remotely helpful is the section on "My terminal doesn't support UTF-8" ... it then tells me to use rxvt-unicode, which I am using. Feel free to point out something that may be obvious to you. I'm pretty oblivious at times.
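    For what it's worth, the locale output above shows LANG=C everywhere, which by itself is enough to make urxvt treat input as plain ASCII rather than UTF-8. The usual Arch fix (a sketch assuming en_US is the locale you want; the thread doesn't record the actual resolution) is to uncomment the locale in /etc/locale.gen, run locale-gen as root, and set it system-wide:
    # /etc/locale.gen
    en_US.UTF-8 UTF-8
    # /etc/locale.conf
    LANG=en_US.UTF-8
    then log out and back in so the new environment is picked up.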

  • [Bug Report] CR4E V2: Exported PDF displays Japanese characters incorrectly

    We now plan to port a legacy application from VB to Java with Crystal Reports for Eclipse. It is required to export reports as PDF files, but the resulting PDFs display Japanese characters incorrectly for fields using some of the most commonly used Japanese fonts (MS Gothic & Mincho).
    Here is our sample Crystal Reports project:   [download related resources here|http://sites.google.com/site/cr4eexportpdf/example-of-cr4e-export-pdf]
    1. PDFExportSample.rpt located under ..\src contains fields with different Japanese fonts.
    2. Run SampleViewerFrameClient#main(..) to open a Java Report Viewer:
        a) At zoom rate 100%, everything is ok.
        b) Change zoom rate to 200% or 50%, some fields in Japanese font collapse.
        c) Export to PDF file,
             * Fonts "MS Gothic & Mincho": both ASCII & Japanese characters failed.
             * Fonts "Meiryo & HGKyokashotai": everything works well.
             * Open PDF properties, you will see all fonts are embedded with built-in encoding.
             * Interestingly, if you copy the collapsed Japanese characters from Acrobat Reader and
               paste them into a Notepad window, Notepad will show the correct Japanese characters anyway.
               It seems the PDF export in CR4E picks the wrong typeface for the Japanese characters
               from some TTF file.
    3. Open PDFExportSample.rpt in Crystal Report 2008 Designer (trial version), and export it as PDF.
        The result PDF displays both ASCII & Japanese characters without any problem.
    Test environment as below:
    * Windows XP Professional SP3 (Japanese) with MS Office, which includes extra fonts (e.g. HGKyokashotai)
    * Font version: MS Gothic, Mincho, Meiryo, all in Version 5.0
        You can download MS Meiryo from Microsoft's Site:
        http://www.microsoft.com/downloads/details.aspx?familyid=F7D758D2-46FF-4C55-92F2-69AE834AC928&displaylang=en)
    * Eclipse 3.5.2
    * Crystal Reports for Eclipse, V2, 12.2.207.r916
    Can this problem be fixed? If yes, how long will it take to release a patch?
    We are really looking forward to a solution before abandoning CR4E.
    Thanks for any reply.

    I have created a [simple PDF file|http://sites.google.com/site/cr4eexportpdf/inside-the-pdf/simple.pdf?attredirects=0&d=1] exported from CR4E. It is expected to display "漢字" (in Unicode, "\u6F22\u5B57"), but it is instead rendered as the different characters "殱塸" (in Unicode, "\u6BB1\u5878").
    Look inside into this simple PDF file (you can just open it with your favorite text editor), here is its page content:
    8 0 obj
    <</Filter [ /FlateDecode ] /Length 120>>
    stream ... endstream
    endobj
    Decoding this stream, we get:
    /DeviceRGB cs
    /DeviceRGB CS
    q
    1 0 0 1 0 841.7 cm
    13 -13 569.2 -815.7  re W n
    BT
    1 0 0 1 25.75 -105.6 Tm     <-- text position
    0 Tr
    /ttf0 10 Tf                 <-- apply font
    0 0 0 sc
    ( !)Tj                      <-- show glyphs [20, 21], whose indices refer to the embedded TrueType font subset
    ET
    Q
    The only embedded font subset is defined as:
    9 0 obj /ttf0 endobj
    10 0 obj /AAAAAA+MSGothic endobj
    11 0 obj
    << /BaseFont /AAAAAA+MSGothic
    /FirstChar 32
    /FontDescriptor 13 0 R
    /LastChar 33
    /Subtype /TrueType
    /ToUnicode 18 0 R                            <-- point to a CMap object
    /Type /Font
    /Widths 17 0 R >>
    endobj
    12 0 obj [ 0 -140 1000 859 ] endobj
    13 0 obj
    << /Ascent 860
    /CapHeight 1001
    /Descent -141
    /Flags 4
    /FontBBox 12 0 R
    /FontFile2 14 0 R                            <-- point to an embedded TrueType font subset
    /FontName /AAAAAA+MSGothic
    /ItalicAngle 0
    /MissingWidth 1000
    /StemV 0
    /Type /FontDescriptor >>
    endobj
    The CMap object after decoded is:
    18 0 obj
    /CIDInit /ProcSet findresource begin 12 dict begin begincmap /CIDSystemInfo <<
    /Registry (AAAAAB+MSGothic) /Ordering (UCS) /Supplement 0 >> def
    /CMapName /AAAAAB+MSGothic def
    1 begincodespacerange <20> <21> endcodespacerange
    2 beginbfrange
    <20> <20> <6f22>                         <-- "u6F22"
    <21> <21> <5b57>                         <-- "u5B57"
    endbfrange
    endcmap CMapName currentdict /CMap defineresource pop end end
    endobj
    I can write out the embedded TrueType font subset (= "14 0 obj") to a file named "[embedded.ttc|http://sites.google.com/site/cr4eexportpdf/inside-the-pdf/embedded.ttf?attredirects=0&d=1]", which is really a tiny TrueType font file containing only the wrong typefaces for "漢" & "字". It seems everything is OK except that CR4E failed to choose the right typefaces from the TrueType file (msgothic.ttc).
    Is this of any help? I am looking forward to any solution.

  • UTF-8 characters in database

    Hi there,
    I'm having problems with UTF-8 characters displaying incorrectly. The problem seems to be that the Content-Type HTTP headers have the character set as "Windows 1252" when it should be UTF-8. There is a demonstration of the problem here:
    http://teasel.homeunix.net/~rah/screenshots-unicode-db/apex-unicode-display.html
    I thought I'd solved this previously, because I changed the nls_lang variable in wdbsvr.app to use single quotes instead of double quotes. After I did this, the server started sending Content-Type HTTP headers with UTF-8 in them and the characters in the example above displayed fine. For some reason, it seems to have reverted to sending "Windows 1252" and I don't know why.
    I noticed that the logs contain the following lines:
    [Wed Nov  1 09:50:45 2006] [alert] mod_plsql: Wrong language for NLS_LANG 'ENGLISH_UNITED KINGDOM.AL32UTF8' for apex DAD
    [Wed Nov  1 09:50:45 2006] [alert] mod_plsql: Wrong charset for NLS_LANG 'ENGLISH_UNITED KINGDOM.AL32UTF8' for apex DAD
    Any help getting it to send UTF-8 would be greatly appreciated.
    Thanks,
    Robert

    > However I believe that it is correct to use double quotes rather than single quotes where the setting in the nls_lang contains a space.
    Well, this is odd; using double quotes gives the same error, but with double quotes instead of single quotes in the error message:
    [Thu Nov  2 09:43:42 2006] [alert] mod_plsql: Wrong language for NLS_LANG "ENGLISH_UNITED KINGDOM.AL32UTF8" for apex DAD
    [Thu Nov  2 09:43:42 2006] [alert] mod_plsql: Wrong charset for NLS_LANG "ENGLISH_UNITED KINGDOM.AL32UTF8" for apex DAD
    This implies that the parser is pulling the string out verbatim and including the quotes when it shouldn't. Lo and behold, removing any quotes:
    nls_lang = ENGLISH_UNITED KINGDOM.AL32UTF8
    causes the error to go away, and causes the HTTP headers to declare UTF-8; problem solved.
    I'm loath to step ahead again and say that the docs need updating, but it certainly looks that way.
    Robert

  • [Solved] urxvt does not search fallback fonts for emojis while gnome-terminal does

    Hi, I'm using urxvt and am having some problems setting up fonts. These are my URxvt parameters specifying the fonts
    URxvt*font: xft:Inconsolata-dz for Powerline:style=Semibold:pixelsize=14:antialias=true:hinting=slight, \
    xft:PowerlineSymbols:pixelsize=14:antialias=true:hinting=slight, \
    xft:Symbola:pixelsize=14:antialias=true:hinting=slight
    Now, the emojis from Symbola do not appear; they only show up as boxes.
    The same thing opened in Gnome-terminal displays the emojis properly.
    Also, I have tried starting urxvt with the following options:
    urxvt -fn "xft:Symbola"
    and it works, but as you can see, Symbola alone is not a suitable font for your terminal.
    What am I doing wrong here? I've checked my previous lines and they are right. Do you guys have any ideas? Much appreciated. I have also gone through other posts stating that urxvt cannot display symbols properly and have tried their suggestions. If I've missed any, please let me know. The strange thing is that it works for Gnome-terminal, and for urxvt with only the Symbola font.
    Last edited by decryptedepsilon (2014-07-12 02:40:24)

    bch24 wrote:
    Make sure the System Locale is set to utf-8 locale.
    This sometimes is the issue.
    Many thanks for your reply, I've checked the system locale is proper. I think I have found the solution.
    I think the official rxvt-unicode FAQ and documentation state that rxvt-unicode drops pixels while showing some fonts because different fonts use different heights and widths. In this particular case, the Symbola font is too wide and the default letterSpace does not allow it to display correctly. If the -letsp parameter is set to 4, it displays, but with about half a pixel missing.
    The specific parameters for .Xresources
    URxvt*font: xft:Inconsolata-dz for Powerline,xft:Symbola
    URxvt*letterSpace: 4
    And this gives a result that technically solves the issue.
    The documentation clearly states the following
    All of this requires that fonts do not lie about character sizes, however: Xft fonts often draw glyphs larger than their acclaimed bounding box, and rxvt-unicode has no way of detecting this (the correct way is to ask for the character bounding box, which unfortunately is wrong in these cases).
    I guess the first thing one should do is to check the documentation of the official project page thoroughly rather than searching countless forums where people have discussed stuff based on trial and error. I guess that's a classic rookie mistake, and I feel very good that I'm moving past the rookie phase.

  • Crystal Report 9 - Can't Display UTF-8 Data

    Dear Sir,
    I am using PHP with Crystal Report 9.2 and have a problem displaying UTF-8 data from an MSSQL field. It shows weird characters instead. I am very sure the data stored in the database is in UTF-8, because when I output it to the web browser with the encoding set to UTF-8, it shows correctly. I am currently using an ADODB recordset to pass data into CR. I tried using ODBC to connect directly to the database and preview the data - same issue.
    Now I wonder whether there are any settings that need to be done in CR in order to display Unicode correctly? From what I read, CR 9 and above should be able to support Unicode.
    Thanks
    AL

    To be honest, I have no idea if TTX files support UTF-8... I kinda doubt it. I'd check, but the only person I know of that may have a clue is not in until September 28...
    Re. ODBC, see if adding the following option to your DSN entry in the odbc.ini file will help:
    stmt = SET CHARACTER SET utf8
    Also, if I remember right, MS SQL does not install UTF-8 support by default (at least at one time or another it did not...?).
    Oh - one more thing. I've attached a non CR app that I'd like you to use to see what the data it returns looks like. Please give that a try.
    - Ludek
    Edited by: Ludek Uher on Sep 7, 2011 10:43 AM

  • Create HTML file that can display Unicode (Japanese) characters

    Hi,
    Product:           Java Web Application
    Operating system:     Windows NT/2000 server, Linux, FreeBSD
    Web Server:          IIS, Apache etc
    Application server:     Tomcat 3.2.4, JRun, WebLogic etc
    Database server:     MySQL 3.23.49, MS-SQL, Oracle etc
    Java Architecture:     JSP (presentation) + Java Bean (Business logic)
    Language:          English, Japanese, chinese, italian, arabic etc
    Through our Java application we need to create HTML files that have to display Unicode text. Our present code works well with English and most of the European character sets. But when we tried to create HTML files that display Unicode text, say Japanese, only ???? gets displayed. Following is the code we have used. The output in the browser displays the Japanese characters correctly, but the created file displays only ??? in place of the Japanese characters. Can anybody tell us how we can do it?
    <%
    String s = request.getParameter( "txt1" );
    out.println("Orignial Text " + s);
    //for html output
    String f_str_content="";
    f_str_content = f_str_content +"<HTML><HEAD>";
    f_str_content = f_str_content +"<META content=\"text/html; charset=utf-8\" http-equiv=Content-Type></HEAD>";
    f_str_content = f_str_content +"<BODY> ";
    f_str_content = f_str_content +s;
    f_str_content = f_str_content +"</BODY></HTML>";
    f_str_content = new String(f_str_content.getBytes("8859_9"),"Shift_JIS");
    out.println("file = " + f_str_content);
              byte f_arr_c_buffer1[] = new byte[f_str_content.length()];
    f_str_content.getBytes(0,f_str_content.length(),f_arr_c_buffer1,0);
              f_arr_c_buffer1 = f_str_content.getBytes();
    FileOutputStream l_obj_fout; //file object
    //file object for html file
    File l_obj_f5 = new File("jap127.html");
    if(l_obj_f5.exists()) //for dir check
    l_obj_f5.delete();
    l_obj_f5.createNewFile();
    l_obj_fout = new FileOutputStream(l_obj_f5); //file output stream for writing
    for(int i = 0;i<f_arr_c_buffer1.length;i++ ) //for writing
    l_obj_fout.write(f_arr_c_buffer1);
    l_obj_fout.close();
    %>
    thanx.

    Try changing the charset attribute within the META tag from 'utf-8' to 'SHIFT_JIS' or 'utf-16'. One of those two ought to do the trick for you.
    Hope that helps,
    Martin Hughes
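    Another approach worth mentioning (a generic sketch, not from the original post; the helper class name is made up) is to skip the byte-level re-encoding entirely and write the file through a Writer that encodes as UTF-8, so the bytes on disk match the charset declared in the META tag:
              import java.io.FileOutputStream;
              import java.io.IOException;
              import java.io.OutputStreamWriter;
              import java.io.Writer;
              public class Utf8HtmlWriter {
                   // Writes the HTML wrapper around "body" as UTF-8 bytes, so the declared
                   // charset in the META tag matches what is actually on disk.
                   public static void write(String fileName, String body) throws IOException {
                        Writer w = new OutputStreamWriter(new FileOutputStream(fileName), "UTF-8");
                        try {
                             w.write("<HTML><HEAD>");
                             w.write("<META content=\"text/html; charset=utf-8\" http-equiv=Content-Type></HEAD>");
                             w.write("<BODY>");
                             w.write(body);
                             w.write("</BODY></HTML>");
                        } finally {
                             w.close();
                        }
                   }
              }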
