[SOLVED] Unicode characters from keyboard in terminal

I'm not sure if this is the right section, but I couldn't find a more suitable one, so I'm posting here. I have quite a strange problem.
As I've lately been writing a lot about bridge, I use the ♣ ♦ ♥ ♠ symbols a lot. So I thought it would be nice to map some Ctrl+Alt combinations on my keyboard to enter them quickly. Using this page I did something like this in the keymap I load into my X server on startup:
key <AC02> {
    type= "CTRL+ALT",
    symbols[Group1]= [ s, S, sacute, Sacute, U2660 ]
};
... and similarly for the <c>, <d> and <h> keys.
It works like a charm in all my applications except for the terminal (Termite), where I happen to write the most (in Vim). Instead of the proper character, Termite shows two unreadable characters. The Ctrl+Shift+u combination, on the other hand, works in Termite too.
I use plain XMonad as my window manager, no GNOME/KDE or such things. Does anyone have an idea why that is, or how to fix it?
Last edited by Sventimir (2014-09-04 09:07:04)

Remapping the Alt key in Vim on Termite may or may not work.  See
:h map-alt-keys
If you always have a space between the suit glyph and the next character, or follow the suit symbol with a newline character, I would suggest using an abbreviation.  In your .vimrc, add:
iab ,c ♣
iab ,d ♦
iab ,h ♥
iab ,s ♠
In Insert mode, typing a comma, then an 's', and then a space (or pressing 'Enter') prints the spade glyph.  Or choose another character instead of the comma, perhaps the backslash, ' \ '.
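If the required trailing space or Enter ever gets in the way, insert-mode mappings fire as soon as the two keys are typed; a minimal sketch for the .vimrc, using the backslash prefix suggested above:

" no trailing space needed; note Vim will pause briefly after a literal
" backslash while it waits to see whether the next key completes a mapping
inoremap \c ♣
inoremap \d ♦
inoremap \h ♥
inoremap \s ♠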

Similar Messages

  • Unable to pick Unicode characters from input file using "Outside In"

    Hi,
    I am using your product "Outside In" to read Unicode text from an input source file. For reading the text I am using TReadFirst and TReadNext, but it is not picking up the Unicode characters from the input source file and is returning junk characters in the buffer. How can I retrieve Unicode characters from the input source using the "Outside In" product? Your help makes me learn more stuff.
    Regards,
    Naresh.D

    I am trying to use CAReadFirst and CAReadNext to read Unicode characters, but they are not being picked up either. I think there may be some flags we need to set. Can anyone help with this?

  • Problem with absent characters from keyboard

    I installed Solaris 10 on a Sun Fire X4100 dual AMD Opteron and everything went correctly. I used JavaRconsole to install the OS.
    But, for instance, when I want to type the ":" character in a term, it's absent from its place and I get the "<" or ">" character instead. There are various other characters like this (it's the same from any editor and even from the console).
    I tried to use the graphical utility to modify that behavior, but nothing changed. What could I do to solve this?
    Thank you in advance for your responses.

    Sounds as if the system is configured to use a US-English keyboard layout, and you need the French (?) layout.
    What is configured in the "kbd-type" eeprom variable?
    Check with the command "eeprom kbd-type".
    You probably want to change it to "eeprom kbd-type=French",
    followed by a reboot.
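    In shell form (the layout name is the one suggested above; check eeprom(1M) for the exact names your firmware accepts):
    # show the current keyboard layout setting
    eeprom kbd-type
    # switch to the French layout, then reboot for it to take effect
    eeprom kbd-type=French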

  • [SOLVED] Special characters not working in terminal

    Hi everybody, my problem here is that when I log in from a tty I can see characters like áéíóúñ and so on, but when I start a terminal in E17, I can't see them. I just see diamonds with a ?, or just the ?.
    Here's my output of locale and locale -a
    Yes, Master earth? > 
    locale
    locale: Cannot set LC_ALL to default locale: No such file or directory
    LANG=en_US.UTF-8
    LC_CTYPE="en_US.UTF-8"
    LC_NUMERIC="en_US.UTF-8"
    LC_TIME=es_US
    LC_COLLATE="en_US.UTF-8"
    LC_MONETARY="en_US.UTF-8"
    LC_MESSAGES="en_US.UTF-8"
    LC_PAPER="en_US.UTF-8"
    LC_NAME="en_US.UTF-8"
    LC_ADDRESS="en_US.UTF-8"
    LC_TELEPHONE="en_US.UTF-8"
    LC_MEASUREMENT="en_US.UTF-8"
    LC_IDENTIFICATION="en_US.UTF-8"
    LC_ALL=
    locale -a
    C
    en_US
    en_US.iso88591
    en_US.utf8
    es_SV
    es_SV.iso88591
    es_SV.utf8
    POSIX
    PS: I use Sakura as my main terminal, but also UXterm and Xterm.
    Any ideas?
    thanks!
    TheEdward
    Last edited by TheEdwardRCT (2012-12-19 23:44:48)

    Oh I see, you don't use setfont to set an X terminal's font.
    Edit: I realized that answer was a bit sparse.  It depends on the terminal how you set the font.  But you should know that the fonts that X uses are not the same as the fonts that the console/tty uses.  There are a number of wiki pages dedicated to fonts (including one appropriately titled "Fonts").  When in doubt, use the wiki.  If that fails, come ask here.
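    For example, for xterm/uxterm the font goes in your X resources; a minimal sketch (the font name is just an assumption, pick any installed one that covers your glyphs, e.g. check with fc-list):
    ! ~/.Xresources -- load it with: xrdb -merge ~/.Xresources
    xterm*faceName: DejaVu Sans Mono
    xterm*faceSize: 11
    Other terminals such as Sakura keep the font in their own configuration instead.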
    Last edited by WonderWoofy (2012-12-18 05:23:01)

  • How to send non-latin unicode characters from Flex application to a web service?

    Hi,
    I am creating an XML containing data entered by user into a TextInput. The XML is sent then to HTTPService.
    I've tried this
    var xml : XML = <title>{_title}</title>;
    and this
    var xml : XML = new XML("<title>" + _title + "</title>");
    The _title variable is filled with the string taken from the TextInput.
    When the user enters non-latin characters (e.g. Russian), the web service responds that the XML contains characters that are not UTF-8.
    I ran a sniffer and found that non-printable characters are sent to the web service, like
    <title>����</title>
    How can I encode non-latin characters to UTF-8?
    I have an idea to use ByteArray and the pair of functions readMultiByte / writeMultiByte (to write in the current character set and read back as UTF-8), but I need to determine the current character set Flex (or the TextInput) is using.
    Can anyone help convert the characters?
    Thanks in advance,
    best regards,
    Sergey.

    Found the answer myself: set System.useCodePage to false.
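    In code it is a one-liner, set before the text is handled; a minimal sketch:
    // treat text as Unicode rather than the system code page, so the
    // XML built from the TextInput goes out as proper UTF-8
    import flash.system.System;
    System.useCodePage = false;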

  • Passing unicode characters from ExtendScript to HTML5 extension

    I need to pass some Japanese text from ExtendScript to the HTML5 extension. But when I receive the string from the extension it's broken in the middle.
    Is there a way to fix this?

    Hi Mylenium,
    The program just loads uselessly, to use your words, when I try to pass a file from Illustrator to Photoshop, and when I click anywhere in Illustrator I get "Not responding". It doesn't crash (stop); it stays open but doesn't do anything, so I need to close it by force from the task manager because I can't do anything with it... Once I left it overnight to save an important file and only in the morning did I manage to do that, so after a while it comes round, but there are always these periods of hanging. Anyway, this is not normal, and I should be able to use the program within normal parameters on a powerful system.
    I don't see any processor overload in the task manager, and nothing unusual that indicates an overload of information; that's my problem. My guess is that I have a system incompatibility, but I don't know where, so that's why I asked for Adobe's help. Again, I didn't have this problem using the same pack (CS5) with a less powerful system...
    If there are any crash logs, please tell me where I can find them?

  • Terminal.app and the European Unicode characters?

    Does anyone have the unicode characters working properly in Terminal.app?
    If I try to write, for instance, "örrör" in GNU nano 1.2.4, it translates into:
    (one empty line)
    örr
    ör
    which certainly isn't right. This is especially awkward when editing a Unicode text file, where the text then easily becomes more or less garbled. Usually more.
    It doesn't seem to make any difference whether I use the Finnish extended (Unicode) keyboard layout or the conventional one in nano. If the Terminal.app window preferences are set to UTF-8, it says:
    ?rr
    ?r
    which looks even more garbled.
    In plain bash the characters print like this:
    å = \345
    ä = \344
    ö = \366
    so my mighty apple translates the example string "örrör" as "\366rr\366r".
    Any ideas, anyone?
    PowerBook G4 @ 1.5 GHz   Mac OS X (10.4.4)   1.25 GB DDR SDRAM
    Debian Sarge 3.1 as a slave fetchmail server.

    Hi solarflare,
       My first (and essentially only) language is English as well. However enough folks have asked that I have experimented with multibyte characters. There are so many apps and options involved, it's difficult to get consistent results. However, I'll recount as many settings as I can recall.
       To begin with, you are right about the LC settings. It helps many apps to have:
    export LC_ALL=en_US.UTF-8
    export LANG=en_US.UTF-8
    set in your shell startup scripts. Then the system should be set to produce unicode when you type. In the "Input Menu" tab of the "International" pane of "System Preferences", you should select a unicode keyboard layout, such as U.S. Extended.
   To configure the Terminal, you need to open the "Terminal Inspector" by selecting "Window Settings..." in the "Terminal" menu. To type many multibyte characters, you need the option key. To use it that way, you must have the "Use option key as meta key" checkbox unchecked, although I personally find the meta key too important in UNIX to leave it unchecked. In the dropdown menu in the "Display" pane of the "Terminal Inspector", you should set the "Character Set Encoding" to "Unicode (UTF-8)". In the "Emulation" pane of the same window, you must uncheck the "Escape non-ASCII characters" checkbox. That is important, as I've read that it is checked by default, and that can produce some pretty strange results.
       Now it's helpful to use a very modern shell. For instance, the latest beta version of zsh-4.3 has the best unicode support of all versions of zsh. After you've chosen a good shell, you're at the mercy of the application that you're using. As I gather you've noticed, vim has excellent unicode support and picks up on the LC settings. I have no idea about nano but it is meant to be a minimal text editor.
       I know that my settings allow me to type extended characters and the "Character Palette" lets me insert more. As far as other command line utilities go, the best you can do is to choose well and keep your apps as up-to-date as possible. Fink or Darwin Ports can often help in that regard.
    Gary
    ~~~~
       This generation doesn't have emotional baggage.
       We have emotional moving vans.
             -- Bruce Feirstein

  • Displaying unicode or HTML escaped characters from HTTPService in Flex components.

    Here is a solution on the Flex Cookbook I developed for displaying data in Flex components when the data comes back from HTTPService as Unicode or HTML escaped data:
    Displaying unicode or HTML escaped characters from HTTPService in Flex components.

    Hi again Greg,
    I have just been adapting your idea for encountering occasional escaped characters within a body of "normal" text, eg something like
    hell&ocirc; sun&scaron;ine
    Now, the handy String.fromCharCode(charCode) call works a dream if instead of the above I have
    hell&#244; sun&#353;ine
    Do you know if there is an equivalent call that takes the named entities rather than the numeric ones? Clearly I can just do some text substitution to get the mapping, but this means rather more by-hand work than I had hoped. However, this is definitely a step in a useful direction for me.
    Thanks,
    Richard
    PS hoping that the web page won't simply outguess me and replace all the above! Basically, the first line uses named entities and the second the equivalent numbers...

  • [SOLVED] Conky Not Displaying Unicode Characters Correctly

    I have some music whose tags need Unicode characters in order to be displayed properly. The artist name is Röyksopp, but it is displayed like this:
    http://i.imgur.com/rRBDxZn.png
    Does anyone know how to either make it display correctly (Röyksopp), or at least remove the umlauts to make it almost correct (Royksopp)? I've set the locale, so it displays correctly in both i3bar and urxvt, but that had no effect on conky.
    EDIT: Here is my conkyrc.
    Last edited by ProfessorProcrastinator (2014-05-14 22:50:16)

    x33a wrote: Try a different font.
    I know that this font works, because I tried it in urxvt and it displayed correctly.
    Also, I tried "override_utf8_locale yes" and it still was incorrect.
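    For reference, these are the conkyrc lines usually involved; a sketch assuming the old 1.x config syntax and an Xft font that actually covers the umlauts:
    # render with Xft and force UTF-8 regardless of the detected locale
    use_xft yes
    xftfont DejaVu Sans Mono:size=10
    override_utf8_locale yes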

  • CRVS2010 Beta - Cannot export report to PDF with unicode characters

    My report has some Unicode data (Chinese). It can be previewed properly in the Windows Forms report viewer. However, if I export the report document to a PDF file, the Unicode characters in the exported file are all displayed as squares.
    Crystal Reports 2008 R2 can export the Chinese characters to PDF when I select a Chinese font in the report. But the VS2010 beta cannot export the Chinese characters even when a Chinese font is selected.

    Barry, what is the specific font you are using?
    The below is a reformatted response from Program Management:
    The issue is reproducible when a non-Chinese font (Arial) is used on a field containing Unicode (Chinese) characters. After changing the field's font to SimSun (a Chinese font, named 宋体 in the report), the problem is solved in both Cortez and CR.
    Ludek

  • How to insert Unicode characters from SQL*Plus

    Hi all,
    My problem is the following:
    I want to store Unicode characters in a column of a table, so I created the table with the command: CREATE TABLE PERSON( ID NUMBER(4) NOT NULL, NAME NVARCHAR2(64) NOT NULL).
    NLS_CHARACTERSET is set in the DB:
    SQL> SELECT * FROM NLS_DATABASE_PARAMETERS;
    PARAMETER                   VALUE
    NLS_LANGUAGE                AMERICAN
    NLS_TERRITORY               AMERICA
    NLS_CURRENCY                $
    NLS_ISO_CURRENCY            AMERICA
    NLS_NUMERIC_CHARACTERS      .,
    NLS_CHARACTERSET            WE8ISO8859P1
    NLS_CALENDAR                GREGORIAN
    NLS_DATE_FORMAT             DD-MON-RR
    NLS_DATE_LANGUAGE           AMERICAN
    NLS_SORT                    BINARY
    NLS_TIME_FORMAT             HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT        DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT          HH.MI.SSXFF AM TZH:TZM
    NLS_TIMESTAMP_TZ_FORMAT     DD-MON-RR HH.MI.SSXFF AM TZH:TZM
    NLS_DUAL_CURRENCY           $
    NLS_COMP                    BINARY
    NLS_NCHAR_CHARACTERSET      UTF8
    NLS_RDBMS_VERSION           8.1.7.0.0
    18 rows selected.
    When I insert data into the above table (PERSON) with non-Unicode characters it is OK, but I fail with Unicode characters.
    Can anyone help me solve this problem?
    Thank you so much for any hints.
    Pls email me at : [email protected]

    NLS_CHARACTERSET WE8ISO8859P1 is a Western European character set (without support for the Euro symbol and a few other characters; use WE8ISO8859P15 if you want the Euro symbol, sorry, just a sidenote). Anyway, this will need to be changed to UTF8 before it can store Unicode characters. There are various ways to do this, but the best option is probably to export the entire database, create scripts to recreate all tablespaces and datafiles, drop the database, recreate the database in UTF8, then import the export file. There are a lot of little 'gotchas' involved, so be prepared for it not to work the first time. But if it doesn't work, you can always create the database again with NLS_CHARACTERSET WE8ISO8859P1 and reimport, so everything is back to how it was.
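    Another sidenote on the original table: since NAME is NVARCHAR2 and NLS_NCHAR_CHARACTERSET is already UTF8, Unicode may fit in that particular column even without converting the database, provided the client side cooperates; a sketch (the value is illustrative, and NLS_LANG must match your terminal encoding):
    -- N'' marks a national-character-set literal, stored as UTF8 here
    INSERT INTO PERSON (ID, NAME) VALUES (1, N'Björk');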

  • Direct Execution of query having Unicode Characters

    Hi All,
    In my application I am firing a SELECT query having Unicode characters in the WHERE clause, under a condition like LIKE '%%', against an Oracle 10g DB, from an interface written in VC6.0...
    The application functionality is working fine for ANSI characters, and the SELECT returns its results properly.
    But in the case of Unicode characters, VC says 'No Data Found'.
    I know where the exact problem is in my code, but I am not getting the exact solution for resolving my issue...
    Herewith I am adding my code snippet, with comments on what I understand and what I want to understand...
    The DBPROCESS structure used in the functions:
    typedef struct
    {
        HENV hEnv;
        HDBC hDbc;
        HSTMT hStmt;
        char CmdBuff[8192];
        char RpcParamName[255];
        SQLINTEGER SpRetVal;
        SQLINTEGER ColIndPtr[255];
        SQLINTEGER ParamIndPtr[255];
        SQLPOINTER pOutputParam;
        SQLUSMALLINT CurrentParamNo;
        SQLUSMALLINT OutputParamNo;
        SQLUSMALLINT InputParamCtr;
        SQLINTEGER BatchStmtNo;
        SQLINTEGER CmdBuffLen;
        short CurrentStmtType;
        SQLRETURN LastStmtRetcode;
        SQLCHAR SqlState[10];
        int ShowDebug;
        SQLCHAR* ParameterValuePtr;
        int ColumnSize;
        DBTYPE DatabaseType;
        DRVTYPE OdbcDriverType;
        BLOCKBIND *ptrBlockBind;
    } DBPROCESS;
    BOOL CDynamicPickList::GetResultSet(DBPROCESS *pDBProc, bstrt& pQuery, short pNumOdbcBindParams, COdbcBindParameter pOdbcBindParams[], CQueryResultSet& pQueryResultSet)
    {
        int          lRetVal,
                     lNumRows;
        bstrt        lResultSet;
        wchar_t      lColName[256];
        SQLUINTEGER  lColSize;
        SQLSMALLINT  lColNameLen,
                     lColDataType,
                     lColNullable,
                     lColDecDigits,
                     lNumResultCols;
        wchar_t      lResultRow[32][256];

        OdbcCmdW(pDBProc, (wchar_t *)pQuery);  // Query is perfectly fine till this point, all the Unicode characters are preserved...
        if ( OdbcSqlExec(pDBProc) != SUCCEED )
        {
            LogAppError(L"Error In Executing Query %s", (wchar_t *)pQuery);
            return FALSE;
        }
    Function OdbcCmdW:
    // From this point I have no idea what exactly happens to the Unicode characters...
    // Actually, I have tried printing the query that gets stored in CmdBuff... it shows junk for the Unicode characters...
    // CmdBuff is a char-type variable and hence must be showing junk for Unicode data.
    // I have also tried printing the hexadecimal of the query... I am not getting the proper output... but as far as I understand, the hexadecimal value is correct & preserved.
    // After the execution of this function, the call goes to OdbcSqlExec, where the actual execution of the query on the DB takes place.
    SQLRETURN OdbcCmdW( DBPROCESS *p_ptr_dbproc, WCHAR *p_sql_command )
    {
        char *p_sql_commandMBCS;
        int l_ret_val;
        int l_size = wcslen(p_sql_command);
        int l_org_length,
            l_newcmd_length;

        p_sql_commandMBCS = (char *)calloc(sizeof(char) * MAX_CMD_BUFF, 1);
        l_ret_val = WideCharToMultiByte(
                        CP_UTF8,
                        NULL,                      // performance and mapping flags
                        p_sql_command,             // wide-character string
                        -1,                        // number of chars in string
                        (LPSTR)p_sql_commandMBCS,  // buffer for new string
                        MAX_CMD_BUFF,              // size of buffer
                        NULL,                      // default for unmappable chars
                        NULL );                    // set when default char used

        l_org_length = p_ptr_dbproc->CmdBuffLen;
        l_newcmd_length = strlen(p_sql_commandMBCS);
        p_ptr_dbproc->CmdBuff[l_org_length] = '\0';
        if( l_org_length )
            l_org_length++;
        if( (l_org_length + l_newcmd_length) >= MAX_CMD_BUFF )
            if( l_org_length == 0 )
                OdbcReuseStmtHandle( p_ptr_dbproc );
            else
            {
                strcat(p_ptr_dbproc->CmdBuff, " ");
                l_org_length += 2;
            }
        strcat(p_ptr_dbproc->CmdBuff, p_sql_commandMBCS);
        p_ptr_dbproc->CmdBuffLen = l_org_length + l_newcmd_length;
        if (p_sql_commandMBCS != NULL)
            free(p_sql_commandMBCS);
        return( SUCCEED );
    }
    Function OdbcSqlExec:
    // SQLExecDirect requires data of unsigned char type, thus the above process is valid...
    // But I am not getting what the exact problem is...
    SQLRETURN OdbcSqlExec( DBPROCESS *p_ptr_dbproc )
    {
        SQLRETURN l_ret_val;
        SQLINTEGER l_db_error_code = 0;
        int i, l_occur = 1;
        char *token_list[50][2] =
        {   /*"to_date(","convert(datetime,",
            "'yyyy-mm-dd hh24:mi:ss'","1",*/
            "nvl","isnull",
            "to_number(","convert(int,",
            /*"to_char(","convert(char,",*/
            /*"'yyyymmdd'","112",
            "'hh24miss'","108",*/
            "sysdate","getdate()",
            "format_date","dbo.format_date",
            "format_amount","dbo.format_amount",
            "to_char","dbo.to_char",
            "to_date","dbo.to_date",
            "unique","distinct",
            "\0","\0" };
        char *l_qry_lwr;
        l_qry_lwr = (char *)calloc(sizeof(char) * (MAX_CMD_BUFF), 1);
        l_ret_val = SQLExecDirect( p_ptr_dbproc->hStmt,
                                   (SQLCHAR *)p_ptr_dbproc->CmdBuff,
                                   SQL_NTS );
        switch( l_ret_val )
        {
        case SQL_SUCCESS :
        case SQL_NO_DATA :
            ClearCmdBuff( p_ptr_dbproc );
            p_ptr_dbproc->LastStmtRetcode = l_ret_val;
            if (l_qry_lwr != NULL)
                free(l_qry_lwr);
            return( SUCCEED );
        case SQL_NEED_DATA :
        case SQL_ERROR :
        case SQL_SUCCESS_WITH_INFO :
        case SQL_STILL_EXECUTING :
        case SQL_INVALID_HANDLE :
    I do not see much issue in the code... the process flow is quite valid...
    But now I am not getting whether:
    1) storing the string in CmdBuff is creating the issue,
    2) SQLExecDirect is creating the issue (and some other function could be used here), or
    3) the ODBC driver is creating the issue and needs some client setting (though I have tried various permutations and combinations)...
    Any kind of help would be appreciated,
    Thanks & Regards,
    Pratik
    Edited by: prats on Feb 27, 2009 12:57 PM

    Hey Sergiusz,
    You were bang on target...
    Though it took some time for me to resolve the issue...
    To use SQLExecDirectW I need my query as SQLWCHAR *, while it is stored as char * in my case...
    So I converted the incoming query using a MultiByteToWideChar conversion with the code page CP_UTF8, and
    then passed it on to SQLExecDirectW...
    It solved my problem.
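    In outline the fix looks like this; a minimal sketch with illustrative names, not the original code (needs windows.h, sql.h and sqlext.h):
    /* convert the stored UTF-8 command back to wide characters and
       run it through the wide ODBC entry point */
    SQLRETURN ExecWide( SQLHSTMT h_stmt, const char *p_utf8_cmd )
    {
        WCHAR l_wide_cmd[MAX_CMD_BUFF];
        if ( MultiByteToWideChar( CP_UTF8, 0, p_utf8_cmd, -1,
                                  l_wide_cmd, MAX_CMD_BUFF ) == 0 )
            return SQL_ERROR;
        return SQLExecDirectW( h_stmt, (SQLWCHAR *)l_wide_cmd, SQL_NTS );
    }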
    Thanks,
    Pratik...
    Edited by: prats on Mar 3, 2009 2:41 PM

  • What table column size is needed to accommodate Unicode characters

    Hi guys,
    I have encountered something which I don't understand, and I hope the gurus here can shed some light on it.
    I am running a non-Unicode database and decided to port the data over to a Unicode database.
    So:
    1) I export the schema out --> data.dmp
    2) then I create the Unicode database + create a user
    3) then I import the schema into the database
    During the imp I can see that character conversion takes place.
    While importing the data into the Unicode database, I encountered an error saying a column size is too small,
    so I went to check the row that has the column value that is too large to fit in the table.
    I realised it has some [][][][] data... so I went to the live non-Unicode database and found the row. Indeed it has some [][][][] rubbish data, which makes me think that someone has inserted a language other than English into the database.
    But regardless,
    I went to modify the column to a larger size, and now the row can be accommodated. However the data is still [][][].
    q1) Why so? Since my database is now Unicode, this column data [][][] should have been converted to Unicode during the import, but I still have problems seeing what language it is.
    q2) Why can the [][][] data fit into the table column on the non-Unicode database, while on the Unicode database the same table column size needs to be increased?
    q3) While doing more research on Unicode, I read that a Unicode character takes up 2 bytes per character. A lot of my table data is exactly the same size as the table column.
    E.g. Name VARCHAR2(5);
    value - 'Peter'
    Now, if converting to Unicode, characters take 2 bytes instead of 1; isn't 'Peter' going to take up 10 bytes (2 bytes per character)?
    Why is it that I can still accommodate the data in the table column?
    q4) Now with the Unicode database up, I will be supporting different language characters from around the world. How big should I set my column sizes? The longest a name can get? Or?
    Thanks guys!

    /// does oracle automatically "look" at each individual character in a word and determine how many bytes it should take?
    Characters usually originate from a keyboard, which has an associated keyboard layout and an associated character set encoding (a.k.a. code page, a.k.a. encoding). This means the keyboard driver knows that when a key with the letter "á" on it is pressed on a French keyboard, and the associated character set encoding is MS Code Page 1252 (Oracle name WE8MSWIN1252), then one byte with the value 225 is generated. If the associated character set encoding is UTF-16LE (the standard internal Windows encoding), two bytes 225 and 0 are generated. When the generated bytes travel through APIs, they may undergo character set conversions from one encoding to another. The conversion algorithms use translation tables to find out how to translate a given byte sequence from one encoding to another. In the case of translation from WE8MSWIN1252 to AL32UTF8, Oracle knows that the byte sequence resulting from conversion of the code 225 should be 195 followed by 161. For a Chinese character, for example when converting from ZHS16GBK, Oracle knows the resulting sequence as well, and this sequence is usually 3 bytes.
    This is how AL32UTF8 data gets into a database. Now, when Oracle processes a multibyte string and needs to look at individual characters, for example to count them with LENGTH, or take a substring with SUBSTR, it uses the information it has about the structure of the character set. Multibyte character sets are of two types: fixed-width and variable-width. Currently, Oracle supports only one fixed-width multibyte character set in the database: AL16UTF16, which is Oracle's name for the Unicode UTF-16BE encoding. It supports this character set for the NCHAR/NVARCHAR2/NCLOB data types only. This character set uses two bytes per character code. To find the next code, 2 is simply added to the string pointer.
    All other Oracle multibyte character sets are variable-width character sets, including AL32UTF8. In most cases, the length of each character code can be determined by looking at its first byte. In AL32UTF8, the number of 1-bits in the most significant positions of the first byte, before the first 0-bit, tells how many bytes a character has: 0 such bits means 1 byte (such codes are identical to 7-bit ASCII), 2 such bits mean two bytes, 3 bits mean three bytes, 4 bits mean four bytes. A single such bit (i.e. the bit sequence 10) starts each second, third or fourth byte of a code.
    In other ASCII-based multibyte character sets, the number of bytes is usually determined by the value range of the first byte. Bytes below 128 mean a one-byte code; bytes above 128 begin a two- or three-byte sequence, depending on the range.
    There are also EBCDIC-based (mainframe) multibyte character sets, a.k.a. shift-sensitive character sets, where a sequence of two-byte codes is introduced by inserting the SO character (code 14=0x0e) and ended by inserting the SI character (code 15=0x0f). There are also character sets, like ISO-2022-JP, which use more complicated byte sequences to define the length and meaning of byte sequences, but Oracle supports them only in a limited number of places.
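    The AL32UTF8 first-byte rule is compact enough to write down in code; a minimal C sketch of it:
    /* length in bytes of an AL32UTF8 (UTF-8) code, read off its first byte:
       the number of leading 1-bits before the first 0-bit */
    int utf8_char_len(unsigned char first_byte)
    {
        if (first_byte < 0x80) return 1;            /* 0xxxxxxx: 7-bit ASCII */
        if ((first_byte & 0xE0) == 0xC0) return 2;  /* 110xxxxx */
        if ((first_byte & 0xF0) == 0xE0) return 3;  /* 1110xxxx */
        if ((first_byte & 0xF8) == 0xF0) return 4;  /* 11110xxx */
        return -1;  /* 10xxxxxx continuation byte, or invalid lead byte */
    }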
    /// e.g. I have a word with 4 characters, where the 3rd character is a Chinese character and the rest are ASCII characters
    /// will oracle use 4 bytes per character, regardless of whether it's ASCII (English) or Chinese?
    No.
    /// or will it use 1 byte per English character and then 3 bytes for the Chinese character? e.g. total - 6 bytes taken
    It will use 6 bytes.
    Thnx,
    Sergiusz

  • Displaying unicode characters

    Dear all,
    I am currently dealing with a character display problem in the MAM.
    We will soon go live in China. Until now we only had European countries, with a Latin alphabet.
    Now however this changes, so we need to use Unicode to display all characters correctly.
    Therefore I have converted all our custom language files to language files with Unicode escape characters.
    e.g.:
    EQUIPMENTS_EQU_MAT_NR=设备材料号码
    Now the strange thing is that when we log in in Chinese, everything is displayed correctly, but when we log in in German or Polish (languages which also have some special characters), we don't see everything displayed correctly.
    This is how we display an entry from the language file on the screen:
    <%@page language="java" contentType="text/html; charset=UTF-8"%>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <jsp:useBean id="res" class="com.sap.ip.me.api.services.MEResourceBundle" scope="session"></jsp:useBean>
    <%=PageUtil.ConvertISO8859toUTF8(res.getString("CONFIRMATIONS_HEADER_DETAIL"))%>
    For Chinese language, the characters are displayed correctly in this way.
    e.g.: 最后一次同步时间
    However, Polish and German characters are not (always) displayed correctly.
    e.g.: Wskaźnik pierwszego usuniÄu2122cia usterki
    The only 'difference' I can see is that for Chinese, every character in the language file has the special Unicode notation, while for Polish and German, only the special characters are in Unicode notation.
    e.g.:
    EQUIPMENTS_EQU_MAT_NR=Numer materia\u00c5‚u sprz\u00c4™tu
    FYI, I converted the files to Unicode escape characters with the Java utility native2ascii.exe.
    Is there anyone who knows how to solve this issue?
    Thanks already in advance!
    Best regards,
    Diederik
    Edited by: Diederik Van Wassenhove on Jul 6, 2009 2:34 PM

    Dear all,
    I've found the reason for this problem.
    Thanks anyway for your time!
    The problem was that when converting the language files to Unicode escape characters, the source format was wrong. The files were saved as UTF-8, but the Java tool native2ascii does not take UTF-8 as its default input format, so the resulting files did not contain the correct Unicode escapes.
    I've re-generated the language files with the parameter -encoding UTF-8, and now everything is displayed correctly!
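    For the record, the working invocation looks like this (file names are illustrative):
    native2ascii -encoding UTF-8 messages_pl.properties messages_pl_escaped.properties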
    Have a good day!
    Diederik

  • Writing Unicode characters to scripting parameters on Windows

    I am trying to read/write a file path that supports Unicode characters to/from scripting parameters (PIDescriptorParameters) with an Export plug-in. This works fine on OS X by using AliasHandle together with the "typeAlias" resource type in the "aete" section of the plugin resource file.
    On Windows I am having trouble to make Photoshop correctly display paths with Unicode characters. I have tried:
    - Writing null-terminated char* (Windows-1252) in a "typePath" parameter -- this works but obviously does not support Unicode.
    - Writing null-terminated wchar* (UTF-16) in a "typePath" parameter -- this causes the saved path in the Action palette to be truncated to the first character, caused by null bytes in UTF-16. It appears PS does not understand UTF-16 in this case?
    - Creating an alias record with sPSAlias->WinNewAliasFromWidePath and storing in a "typePath" or "typeAlias" parameter -- this causes the Action palette to show "txtu&", which does not make sense to me at all.
    The question is: what is the correct scripting parameter resource type (typePath, typeAlias, ... ?) for file paths on Windows, and how do I write to it in such way that Photoshop will correctly display Unicode characters in the Actions palette?

    Hi
    Skip the first 4 or 6 characters and you'll get the Unicode value.
    regards
    Bartek
