To TYPE Unicode characters

Hi!
I use FrameMaker 8 on Windows XP.
I have scanned a two-volume Greek book and run it through an OCR program. Not all of the transcription is correct, so I have to fix it. Some of the text is quoted from older books, so there are a lot of interesting combinations of diacritical marks creating special characters. Most of them I have found in the font I use (Alkaios) and have no big trouble typing them with the corresponding key combinations.
Running charmap from the Run... field under Start, I can see that the rest of the characters are also present in the Alkaios font. But I can't seem to find the key combinations in FrameMaker to produce them (probably those particular combinations are not implemented). And I can't use Alt+number, since that only works with ASCII characters, and the ones I'm after are far beyond those.
Searching the web, I found that what I'm supposed to type is U+number. The problem is that in any editor, typing a 'U' will present me with a 'U'. (Very logical and practical, since you sometimes also want to be able to type a 'U'!) Searching some more, I found that in OpenOffice, for example, the 'U' in the typing sequence should be translated as Ctrl+Shift (I haven't tried it there, though), and I also found another editor in which it works that way. And in Word 2007 I actually CAN use Alt+number (the decimal numbers 0912 and 0944, in this case).
But in FM I can't use Alt+number, because I get a question mark or a ring, or something else. And I can't use Ctrl+Shift+number, because when I hit the zero key, FM tries to copy whatever is highlighted in the document (which is nothing, and hence FM protests).
As you've probably guessed by now, I would like to know what key combination (or other trick) I can use to somehow type or enter these special characters into my document. Or am I stuck with copy-and-paste from charmap?
Regards, Mikael Persson!

I think using the Windows calculator is a perfectly acceptable way to convert between hex and decimal :)
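For what it's worth, the same conversion is a one-liner in most languages; a minimal Java sketch (the class name is mine), using the decimal Alt codes from above:

    public class CodePointConvert {
        public static void main(String[] args) {
            // Alt code 0912 is U+0390 (GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS)
            System.out.println(Integer.toHexString(912));    // prints "390"
            System.out.println(Integer.parseInt("3B0", 16)); // prints 944, i.e. Alt code 0944
        }
    }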
Here's another issue:
there are two stages to getting a unicode character to display: you have to specify the correct character, and then the font involved has to actually have a glyph for that character in it.
I expect, to put it rather anthropomorphically, it goes something like:
1. You type/paste the character
2. The application asks the font if it has a glyph for that character
3. If not, the application asks the operating system if it has a font that has that character
4. Once one is found, it gets displayed.
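Step 2, at least, is easy to observe from code; a minimal Java sketch (the font name is just an example) that asks a font whether it has a glyph for a given code point:

    import java.awt.Font;

    public class GlyphCheck {
        public static void main(String[] args) {
            // A named font falls back to a default family if it isn't installed.
            Font font = new Font("Alkaios", Font.PLAIN, 12);
            int codePoint = 0x0390; // GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS
            // true only if this particular font has a glyph for the character
            System.out.println(font.canDisplay(codePoint));
        }
    }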
My hunch from what I've seen is that applications like Word, and most web browsers these days, will liaise with the Windows operating system until a font is found that has the necessary glyphs for the required characters.
However, some applications give up at step 2: if, for example, your doc in some Adobe application is using Helvetica, and you ask for a very obscure Unicode character, and Helvetica doesn't have a glyph for it, you may just get a blank square or "missing character" symbol.
This also used to be the case with the Firefox browser - if I was writing a webpage and put in an obscure character which wasn't present in default fonts like "Times New Roman" or "Verdana", I'd get a missing-character symbol instead, even though my PC had other fonts which *did* have glyphs for it. I'd need to explicitly put font="Arial Unicode" or something like that into the webpage to make it talk to Windows and retrieve a glyph.
However, over the past couple of years, Firefox has stopped this and will just go get whatever glyphs it needs.
Perhaps programs like Frame and InDesign actually see this kind of behaviour as a virtue?
(All the above is totally speculative based on my own experience, mind you...)

Similar Messages

  • I cannot type Unicode characters (Vietnamese for instance) in Flash input textfields.

    Given the same website containing a Flash Input TextField, I can:
    1. Type Vietnamese properly on IE
    2. Type Vietnamese properly on Google Chrome, but only when the Flash wmode property is set to "window" (not transparent, nor opaque, etc.)
    3. Type Vietnamese properly on Firefox 3.6
    4. CANNOT type Vietnamese properly at all on Firefox 4 Final, no matter what wmode is set to

    There are two "serial" numbers (the product number) that came on the product packaging.
    The software is Adobe Flash Professional CS5.5 Student and Teacher Edition, purchased from Amazon.com and needed student verification to obtain the license.
    [ I figured out what was wrong, but I'll just leave the information here for anyone in the future who may have short-term memory as well ]
    I have just gone through the records that I recovered from my laptop, and it seems that the serial number on the product package DOES IN FACT have letters in it. But I think that I had to register it with Adobe and verify that I was a student before Adobe would give me my legitimate serial number that contained only numbers. So this is why there are tons of these threads about this that never get answered: the people asking and the people helping are both right and wrong at the same time. The students are trying to use the only serial that was provided to them, but it isn't the true serial. Maybe Adobe could be clearer that it isn't a serial number for the product underneath it? "Product Code" is synonymous with "Serial Number" to most people.
    You use this code (the code with letters in it; the "product code") to contact Adobe and verify your status as a student/teacher, and then they give you the true serial number that is all numbers. Search your email for mail from Adobe to find your serial if you have already done this. I had completely forgotten how the process worked since it was a while ago.
    Thanks for the help Mylenium, I hope this helps whoever comes across it in the future.

  • Help me: How to change the character set of a created DB...

    I have an Oracle DB configured with a Unicode character set. How can I change that configuration to another one, for example AL32UTF8?

    I see some risks (e.g. with archiving, number range conflicts),
    but I see no reason for this change.
    Copying the contract types and then moving all existing contracts from the old type to the new type ... is just adding a lot of work for no change.
    You have the same situation for less money if you keep the existing setup.
    Copy the contract types to new ones and use the new ones for new processes.
    And change just the description of the old types.

  • The JSP WYSIWYG Editor can't display most Unicode characters

    Eclipse has supported the display of Unicode characters very well since version 3. However, NitroX can't display most of them. Besides characters from other non-Western-European languages, NitroX can't even display characters that it's supposed to support. At least, that's what I think. I mean, when we type the & character, we get the whole list of character entity references, amongst which we can find ∧ ∇ ∨ → but these are not displayed correctly. And many more are in this case.
    Is this a feature or a bug? By "feature", I mean that we can't get them in the free version.

    I have exactly the same problem. I support web pages for 25 European countries. I've not seen NitroX support any Unicode characters. Until M7 answers this question or fixes the editor, you can use the Eclipse editor to see and edit the text.

  • Is there a list of Unicode characters that can be used in Acrobat bookmarks?

    I can add Greek characters to Acrobat bookmarks using hexadecimal strings. For example, to print a lower-case gamma symbol I use <FEFF03B3>. FEFF is the required Unicode flag and 03B3 is the Unicode code for gamma. This works fine. However, there are no Unicode entries for Greek characters in the PDF 32000-1:2008 PDF specification manual. Table D.2 - PDFDocEncoding Character Set on page 656 lists Unicode characters, and these also work when added to bookmarks, but no Greek codes are in this table. Since I can successfully use Greek Unicode characters in the range of 0x0391 - 0x03CE, and these characters are not listed in the PDF manual, I am assuming there are additional Unicode characters that will work in bookmarks. Therefore, I am looking for a complete list of Unicode characters that can be used in Acrobat's bookmarks. Does such a list exist?

    Thank you for the response.
    I'm sorry to hear there is no list available. I'm building the Acrobat bookmarks automatically. The input data contains entity encodings (for example, a lower-case Greek gamma is coded as &#x03B3;) and I was hoping to be able to just pass these through with an automatic conversion to a hexadecimal string (for example <FEFF03B3>). If I had a list of valid Unicode characters that will display in an Acrobat bookmark, I could validate each entity before the conversion and catch the ones that won't display correctly. I know these types of characters are out there because I have already come across them. For example, a superscript 5 (0x2075) displays fine in MS Word but shows as a white box in a bookmark. Now I'll need to proof the output PDFs and look for white boxes in the bookmarks so that I can build my list of Unicode characters that do not work in Acrobat bookmarks.
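    For the conversion step itself, a minimal Java sketch (the method name is mine) of turning a numeric character reference into the <FEFF...> hex string, for BMP characters:

        // Convert an entity like "&#x03B3;" into the <FEFFxxxx> string
        // used for Unicode text in bookmark definitions.
        static String entityToBookmarkHex(String entity) {
            String hex = entity.replaceAll("&#x([0-9A-Fa-f]+);", "$1");
            int codePoint = Integer.parseInt(hex, 16);
            return String.format("<FEFF%04X>", codePoint);
        }
        // entityToBookmarkHex("&#x03B3;") returns "<FEFF03B3>"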
    Again, thanks for your help.

  • Direct Execution of query having Unicode Characters

    Hi All,
    In my application I am firing a SELECT query having Unicode characters in the WHERE clause, under a condition like '%%',
    against an Oracle 10g DB from an interface written in VC6.0...
    The application functionality is working fine for ANSI characters, and it gets the result of the SELECT properly.
    But in the case of Unicode characters, VC says 'No Data Found'.
    I know where the exact problem is in my code, but I am not getting the exact solution for resolving my issue...
    Herewith I am adding my code snippet, with comments on what I understand and what I want to understand...
    DBPROCESS structure used in the functions:
    typedef struct
    {
        HENV hEnv;
        HDBC hDbc;
        HSTMT hStmt;
        char CmdBuff[8192];
        char RpcParamName[255];
        SQLINTEGER SpRetVal;
        SQLINTEGER ColIndPtr[255];
        SQLINTEGER ParamIndPtr[255];
        SQLPOINTER pOutputParam;
        SQLUSMALLINT CurrentParamNo;
        SQLUSMALLINT OutputParamNo;
        SQLUSMALLINT InputParamCtr;
        SQLINTEGER BatchStmtNo;
        SQLINTEGER CmdBuffLen;
        short CurrentStmtType;
        SQLRETURN LastStmtRetcode;
        SQLCHAR SqlState[10];
        int ShowDebug;
        SQLCHAR* ParameterValuePtr;
        int ColumnSize;
        DBTYPE DatabaseType;
        DRVTYPE OdbcDriverType;
        BLOCKBIND *ptrBlockBind;
    } DBPROCESS;
    BOOL CDynamicPickList::GetResultSet(DBPROCESS *pDBProc, bstrt& pQuery, short pNumOdbcBindParams, COdbcBindParameter pOdbcBindParams[], CQueryResultSet& pQueryResultSet)
    {
        int          lRetVal,
                     lNumRows;
        bstrt        lResultSet;
        wchar_t      lColName[256];
        SQLUINTEGER  lColSize;
        SQLSMALLINT  lColNameLen,
                     lColDataType,
                     lColNullable,
                     lColDecDigits,
                     lNumResultCols;
        wchar_t      lResultRow[32][256];

        OdbcCmdW(pDBProc, (wchar_t *)pQuery); // Query is perfectly fine till this point; all the Unicode characters are preserved...
        if ( OdbcSqlExec(pDBProc) != SUCCEED )
        {
            LogAppError(L"Error In Executing Query %s", (wchar_t *)pQuery);
            return FALSE;
        }
    Function OdbcCmdW:
    // From this point I have no idea what exactly is happening to the Unicode characters...
    // Actually, I have tried printing the query that gets stored in CmdBuff... it shows junk for the Unicode characters...
    // CmdBuff is a char-type variable and hence must show junk for Unicode data
    // I have also tried printing the hexadecimal of the query... I am not getting the proper output... but as far as I understand, the hexadecimal value is correct and preserved
    // After the execution of this function, the call goes to OdbcSqlExec where the actual execution of the query takes place on the DB
    SQLRETURN OdbcCmdW( DBPROCESS *p_ptr_dbproc, WCHAR *p_sql_command )
    {
        char *p_sql_commandMBCS;
        int l_ret_val;
        int l_size = wcslen(p_sql_command);
        int l_org_length,
            l_newcmd_length;

        p_sql_commandMBCS = (char *)calloc(sizeof(char) * MAX_CMD_BUFF, 1);
        l_ret_val = WideCharToMultiByte(
                        CP_UTF8,
                        0,                        // performance and mapping flags
                        p_sql_command,            // wide-character string
                        -1,                       // number of chars in string
                        (LPSTR)p_sql_commandMBCS, // buffer for new string
                        MAX_CMD_BUFF,             // size of buffer
                        NULL,                     // default for unmappable chars
                        NULL );                   // set when default char used
        l_org_length = p_ptr_dbproc->CmdBuffLen;
        l_newcmd_length = strlen(p_sql_commandMBCS);
        p_ptr_dbproc->CmdBuff[l_org_length] = '\0';
        if( l_org_length )
            l_org_length++;
        if( (l_org_length + l_newcmd_length) >= MAX_CMD_BUFF )
        {
            if( l_org_length == 0 )
                OdbcReuseStmtHandle( p_ptr_dbproc );
            else
            {
                strcat(p_ptr_dbproc->CmdBuff, " ");
                l_org_length += 2;
            }
        }
        strcat(p_ptr_dbproc->CmdBuff, p_sql_commandMBCS);
        p_ptr_dbproc->CmdBuffLen = l_org_length + l_newcmd_length;
        if (p_sql_commandMBCS != NULL)
            free(p_sql_commandMBCS);
        return( SUCCEED );
    }
    Function OdbcSqlExec:
    // SQLExecDirect requires data of unsigned char type, thus the above process is valid...
    // But I am not getting what the exact problem is...
    SQLRETURN OdbcSqlExec( DBPROCESS *p_ptr_dbproc )
    {
        SQLRETURN l_ret_val;
        SQLINTEGER l_db_error_code = 0;
        int i, l_occur = 1;
        char *token_list[50][2] =
        {   /*"to_date(","convert(datetime,",
              "'yyyy-mm-dd hh24:mi:ss'","1",*/
            "nvl","isnull",
            "to_number(","convert(int,",
            /*"to_char(","convert(char,",*/
            /*"'yyyymmdd'","112",
              "'hh24miss'","108",*/
            "sysdate","getdate()",
            "format_date","dbo.format_date",
            "format_amount","dbo.format_amount",
            "to_char","dbo.to_char",
            "to_date","dbo.to_date",
            "unique","distinct",
            "\0","\0" };
        char *l_qry_lwr;

        l_qry_lwr = (char *)calloc(sizeof(char) * (MAX_CMD_BUFF), 1);
        l_ret_val = SQLExecDirect( p_ptr_dbproc->hStmt,
                                   (SQLCHAR *)p_ptr_dbproc->CmdBuff,
                                   SQL_NTS );
        switch( l_ret_val )
        {
        case SQL_SUCCESS :
        case SQL_NO_DATA :
            ClearCmdBuff( p_ptr_dbproc );
            p_ptr_dbproc->LastStmtRetcode = l_ret_val;
            if (l_qry_lwr != NULL)
                free(l_qry_lwr);
            return( SUCCEED );
        case SQL_NEED_DATA :
        case SQL_ERROR :
        case SQL_SUCCESS_WITH_INFO :
        case SQL_STILL_EXECUTING :
        case SQL_INVALID_HANDLE :
    I do not see much of an issue in the code... the process flow is quite valid...
    But now I am not getting whether:
    1) storing the string in CmdBuff is creating the issue,
    2) SQLExecDirect is creating the issue (and some other function could be used here), or
    3) the ODBC driver is creating the issue and needs some client setting to be done (though I have tried some permutations and combinations)...
    Any kind of help would be appreciated,
    Thanks & Regards,
    Pratik
    Edited by: prats on Feb 27, 2009 12:57 PM

    Hey Sergiusz,
    You were bang on target...
    Though it took some time for me to resolve the issue...
    To use SQLExecDirectW I need my query as SQLWCHAR *, but it is stored as char * in my case...
    So I converted the incoming query using a MultiByteToWideChar conversion with code page CP_UTF8 and
    then passed it on to SQLExecDirectW...
    It solved my problem.
    Thanks,
    Pratik...
    Edited by: prats on Mar 3, 2009 2:41 PM

  • Illustrator CS6 is not displaying unicode characters properly.

    I need to access certain unicode characters (namely the x/8 fractions). I have the appropriate font installed, and the characters display in every other program on the machine, but they are not displaying correctly in Illustrator. If I type the characters in using alt codes, the 3/8 character displays as "\", the 1/8 character displays as "[", etc. Is there some setting I need to change to enable support for Unicode characters?
    *UPDATE*
    If I type the character in another program, copy it, and paste it into Illustrator, it shows up as a box with an X through it. If I highlight the box and change the font to one that supports the character, then the character does display correctly. This is, however, quite inconvenient. Is there a way to type the character directly into Illustrator?
    Message was edited by: NovakDamien

    I use the following method...
    Mike

  • What table column size is needed to accommodate Unicode characters

    Hi guys,
    I have encountered something which I don't understand, and I hope the gurus here will shed some light on it for me.
    I am running a non-Unicode database and I decided to port the data over to a Unicode database.
    So:
    1) I export the schema out --> data.dmp
    2) then I create the Unicode database + create a user
    3) then I import the schema into the database
    During the import I can see that character conversion will take place.
    While importing the data into the Unicode database
    I encountered an error
    saying a column size is too small,
    so I went to check the row that has the column value that is too large to fit in the table.
    I realise it has some [][][][] data... so I went to the live non-Unicode database and found the row. Indeed it has some [][][][] rubbish data, which makes me feel that someone has inserted another language than English into the database.
    But regardless,
    I went to modify the column to a larger size, and now the row can be accommodated. However the data is still [][][].
    q1) Why so? Since my database is now Unicode, during the import this column data [][][] should have been converted to Unicode already, but I still have a problem seeing what language it is.
    q2) Why can the [][][] data fit into the table column size in the non-Unicode database, while on the Unicode database the same table column size needs to be increased?
    q3) While doing more research on Unicode, I read that a Unicode character takes up 2 bytes per character. A lot of my table data is exactly the same size as the table column size.
    E.g. Name VARCHAR2(5);
    value - 'Peter'
    Now if converting to Unicode, characters will take 2 bytes instead of 1, so isn't 'Peter' going to take up 10 bytes (2 bytes per character)?
    Why is it that I can still accommodate the data in the table column?
    q4) Now with the Unicode database up, I will be supporting different language characters from around the world. How big should I set my column size? The longest a name can get? Or?
    Thanks guys!

    /// does oracle automatically "look" at the each and individual characters in a word and determine how much byte it should take.
    Characters usually originate from a keyboard, which has an associated keyboard layout and an associated character set encoding (a.k.a. code page, a.k.a. encoding). This means the keyboard driver knows that when a key with the letter "á" on it is pressed on a French keyboard, and the associated character set encoding is MS Code Page 1252 (Oracle name WE8MSWIN1252), then one byte with the value 225 is generated. If the associated character set encoding is UTF-16LE (the standard internal Windows encoding), two bytes 225 and 0 are generated. When the generated bytes travel through APIs, they may undergo character set conversions from one encoding to another encoding. The conversion algorithms use translation tables to find out how to translate a given byte sequence from one encoding to another encoding. In the case of translation from WE8MSWIN1252 to AL32UTF8, Oracle knows that the byte sequence resulting from conversion of the code 225 should be 195 followed by 161. For a Chinese character, for example when converting it from ZHS16GBK, Oracle knows the resulting sequence as well, and this sequence is usually 3 bytes.
    This is how AL32UTF8 data gets into a database. Now, when Oracle processes a multibyte string, and needs to look at individual characters, for example to count them with LENGTH, or take a substring with SUBSTR, it uses information it has about the structure of the character set. Multibyte character sets are of two types: fixed-width and variable-width. Currently, Oracle supports only one fixed-width multibyte character set in the database: AL16UTF16, which is Oracle's name for the Unicode UTF-16BE encoding. It supports this character set for NCHAR/NVARCHAR2/NCLOB data types only. This character set uses two bytes per character code. To find the next code, 2 is simply added to the string pointer.
    All other Oracle multibyte character sets are variable-width character sets, including AL32UTF8. In most cases, the length of each character code can be determined by looking at its first byte. In AL32UTF8, the number of 1-bits in the most significant positions in the first byte before the first 0-bit tells how many bytes a character has. 0 such bits means 1 byte (such codes are identical to 7-bit ASCII), 2 such bits mean two bytes, 3 bits mean 3 bytes, 4 bits mean four bytes. 1 bit (e.g. the bit sequence 10) starts each second, third or fourth byte of a code.
    In other ASCII-based multibyte character sets, the number of bytes is usually determined by the value range of the first byte. Bytes below 128 means a one-byte code, bytes above 128 begin a two- or three-byte sequence, depending on the range.
    There are also EBCDIC-based (mainframe) multibyte character sets, a.k.a. shift-sensitive character sets, where a sequence of two-byte codes is introduced by inserting the SO character (code 14=0x0e) and ended by inserting the SI character (code 15=0x0f). There are also character sets, like ISO-2022-JP, which use more complicated byte sequences to define the length and meaning of byte sequences, but Oracle supports them only in a limited number of places.
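    The AL32UTF8 rule above is easy to express in code; a minimal Java sketch (the method name is mine) that derives the byte length of a character from its lead byte:

        // Length in bytes of an AL32UTF8 (UTF-8) character, derived from its
        // first byte, per the leading-bits rule described above.
        static int utf8CharLen(int leadByte) {
            if ((leadByte & 0x80) == 0x00) return 1; // 0xxxxxxx - 7-bit ASCII
            if ((leadByte & 0xE0) == 0xC0) return 2; // 110xxxxx
            if ((leadByte & 0xF0) == 0xE0) return 3; // 1110xxxx
            if ((leadByte & 0xF8) == 0xF0) return 4; // 11110xxx
            throw new IllegalArgumentException("10xxxxxx is a continuation byte, not a lead byte");
        }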
    /// e.g. I have a word with 4 characters; the 3rd character is a Chinese character, and the rest are ASCII characters
    /// will Oracle use 4 bytes per character regardless of whether it is ASCII (English) or Chinese
    No.
    /// or will it use 1 byte per English character and then 3 bytes for the Chinese character? e.g. total - 6 bytes taken
    It will use 6 bytes.
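    That arithmetic is easy to confirm from code; a minimal Java sketch (the sample string is mine):

        import java.nio.charset.StandardCharsets;

        public class Utf8Len {
            public static void main(String[] args) {
                String s = "ab\u6F22c"; // 4 characters, the 3rd is Chinese (U+6F22)
                // Prints 6 in AL32UTF8/UTF-8 terms: 1 + 1 + 3 + 1 bytes
                System.out.println(s.getBytes(StandardCharsets.UTF_8).length);
            }
        }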
    Thnx,
    Sergiusz

  • Terminal.app and the European Unicode characters?

    Does anyone have the unicode characters working properly in Terminal.app?
    If I try to write in GNU nano 1.2.4 for instance "örrör" it translates into:
    (one empty line)
    örr
    ör
    which certainly isn't right. This is especially awkward when editing a Unicode text file, where the text then easily becomes more or less garbled. Usually more.
    It doesn't seem to make any difference whether I use the Finnish extended (Unicode) keyboard layout or the conventional one in nano. If the Terminal.app window preferences are set to UTF-8, it says:
    ?rr
    ?r
    which looks even more garbled.
    In plain bash the characters print like this:
    å = \345
    ä = \344
    ö = \366
    so my mighty apple translates the example string "örrör" as "\366rr\366r".
    Any ideas, anyone?
    PowerBook G4 @ 1.5 GHz   Mac OS X (10.4.4)   1.25 GB DDR SDRAM
    Debian Sarge 3.1 as a slave fetchmail server.

    Hi solarflare,
       My first (and essentially only) language is English as well. However enough folks have asked that I have experimented with multibyte characters. There are so many apps and options involved, it's difficult to get consistent results. However, I'll recount as many settings as I can recall.
       To begin with, you are right about the LC settings. It helps many apps to have:
    export LC_ALL=en_US.UTF-8
    export LANG=en_US.UTF-8
    set in your shell startup scripts. Then the system should be set to produce unicode when you type. In the "Input Menu" tab of the "International" pane of "System Preferences", you should select a unicode keyboard layout, such as U.S. Extended.
       To configure the Terminal, you need to open the "Terminal Inspector" by selecting "Window Settings..." in the "Terminal" menu. To type many multibyte characters, you need the option key. To use it, you must have the "Use option key as meta key" checkbox unchecked, although I find the meta key too important in UNIX to leave that unchecked. In the dropdown menu in the "Display" pane of the "Terminal Inspector", you should set the "Character Set Encoding" to "Unicode (UTF-8)". In the "Emulation" pane of the same window, you must uncheck the "Escape non-ASCII characters" checkbox. That is important as I've read that it is checked by default and that can produce some pretty strange results.
       Now it's helpful to use a very modern shell. For instance, the latest beta version of zsh-4.3 has the best unicode support of all versions of zsh. After you've chosen a good shell, you're at the mercy of the application that you're using. As I gather you've noticed, vim has excellent unicode support and picks up on the LC settings. I have no idea about nano but it is meant to be a minimal text editor.
       I know that my settings allow me to type extended characters and the "Character Palette" lets me insert more. As far as other command line utilities go, the best you can do is to choose well and keep your apps as up-to-date as possible. Fink or Darwin Ports can often help in that regard.
    Gary
    ~~~~
       This generation doesn't have emotional baggage.
       We have emotional moving vans.
             -- Bruce Feirstein

  • Displaying unicode characters

    Dear all,
    I am currently dealing with a character displaying problem on the MAM.
    We will soon go live in China. Until now we only had European countries, with a Latin alphabet.
    Now however this changes, so we need to use Unicode to display all characters correctly.
    Therefore I have converted all our custom language files to language files with Unicode escape characters.
    e.g.:
    EQUIPMENTS_EQU_MAT_NR=设备材料号码
    Now the strange thing is that when we log in in Chinese, everything is displayed correctly, but when we log in in German or Polish (languages which also have some special characters), we don't see everything displayed correctly.
    This is the code how we display an entry from the language file on the screen:
    <%@page language="java" contentType="text/html; charset=UTF-8"%>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <jsp:useBean id="res" class="com.sap.ip.me.api.services.MEResourceBundle" scope="session"></jsp:useBean>
    <%=PageUtil.ConvertISO8859toUTF8(res.getString("CONFIRMATIONS_HEADER_DETAIL"))%>
    For Chinese language, the characters are displayed correctly in this way.
    e.g.: 最后一次同步时间
    However Polish characters and German characters are not (always) displayed correctly.
    e.g.: Wskaźnik pierwszego usuniÄu2122cia usterki
    The only 'difference' that I can see is that for China, every character in the language file has a special Unicode notation, while for Polish and German characters, only the special characters are displayed in special Unicode notation.
    e.g.:
      EQUIPMENTS_EQU_MAT_NR=Numer materia\u00c5‚u sprz\u00c4™tu
    FYI, I've converted the files to Unicode escape characters with the java util native2ascii.exe.
    Is there anyone who knows how to solve this issue?
    Thanks already in advance!
    Best regards,
    Diederik
    Edited by: Diederik Van Wassenhove on Jul 6, 2009 2:34 PM

    Dear all,
    I've found the reason for this problem.
    Thanks anyway for your time!
    The problem was that when converting the language files to Unicode escape characters, the source format was wrong. The files were saved as UTF-8, but the Java tool native2ascii does not take UTF-8 as its default input format. So the resulting Unicode file did not contain the correct Unicode characters.
    I've re-generated the language files with the parameter -encoding UTF-8, and now everything is displayed correctly!
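    For anyone curious, what native2ascii does can be sketched in a few lines of Java (the class and method names are mine); the crucial part, as above, is decoding the source file with the right character set before escaping:

        public class ToAsciiEscapes {
            // Escape every non-ASCII char as \uXXXX, as native2ascii does.
            static String escape(String s) {
                StringBuilder out = new StringBuilder();
                for (char c : s.toCharArray()) {
                    out.append(c < 128 ? String.valueOf(c) : String.format("\\u%04x", c));
                }
                return out.toString();
            }

            public static void main(String[] args) {
                System.out.println(escape("EQUIPMENTS_EQU_MAT_NR=设备材料号码"));
                // -> EQUIPMENTS_EQU_MAT_NR=\u8bbe\u5907\u6750\u6599\u53f7\u7801
            }
        }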
    Have a good day!
    Diederik

  • Function for non unicode-characters

    Hi
    is there a function that permits translating a Unicode character to a non-Unicode character?
    For example, with this function "à" must become "a".
    thank you for your help

    Copy paste the below code and execute. This could also solve your problem.
    DATA: BEGIN OF trans OCCURS 0,
    auml TYPE x VALUE 'C4', "'Ä'
    c_8e TYPE c VALUE 'A',
    gra TYPE x VALUE 'E0', "'à'
    c_gra TYPE c VALUE 'a',
    END OF trans.
    DATA : input(40).
    DATA : output(40).
    input = 'ÄBàp'.
    output = input.
    TRANSLATE output USING trans.
    condense output no-gaps.
    write :/ input.
    write:/ output.
    Thanks,
    Senthil
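    Outside ABAP, the usual trick for this kind of accent stripping is Unicode decomposition; for comparison, a minimal Java sketch (not the ABAP solution above) using the standard Normalizer API:

        import java.text.Normalizer;

        public class StripAccents {
            public static void main(String[] args) {
                String input = "ÄBàp";
                // Decompose into base letters + combining marks, then drop the marks.
                String stripped = Normalizer.normalize(input, Normalizer.Form.NFD)
                                            .replaceAll("\\p{M}+", "");
                System.out.println(stripped); // ABap
            }
        }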

  • Unicode characters appear as boxes in AIR for iOS

    I am developing an application for iPhone and it has an input text box (placed on the stage from Flash CS6). In the desktop emulator everything works fine. But on the actual phone, everything I type appears as boxes. Surprisingly, while I am inputting the text (i.e. the textbox is in edit mode) the characters are displayed correctly, but as soon as I leave the box they appear as boxes again. Is there some way I can display the text the same way it is displayed during edit mode?
    I am trying to input Devanagari देवनागरी characters.
    P.S. I did some research on this and found that if I specify the font for the textbox as "DevanagariSangamMN" then there is some improvement.
    I say improvement because although the boxes are replaced by actual characters, they aren't correctly formatted.
    For e.g. during edit mode the text appears as this: कार्यकर्ता
    But as soon as I leave the textbox it appears as this: कार्यकर्ता (I typed this here by adding special Unicode ZWNJ characters so that the characters won't join)
    Anyway, I don't like the idea of having to specify font names. What if some user would like to input Chinese - how would I know what Chinese font to use?
    Isn't there some way to let iOS handle things (just like how it handles things when I am inputting text)?
    Thanks.

    Thanks to everyone who replied.
    The conclusive answer is that there are only 2 ways to display H.264 video in AIR for iOS
    (more info here: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/NetStream.html#play%28%29)
    1. Progressive download
    2. HLS format (slight caveat: in my tests at least, OSMF 1.6.1 doesn't handle this, but if you use the NetStream directly with StageVideo enabled it works)
    The updated matrix is:
    FMS 4.5 H.264 Streaming test matrix
                                 RTMP   HDS   HLS   HTTP Progressive Download
    AIR for Android              Yes    Yes   No    Yes
    AIR on Windows (Desktop)     Yes    Yes   No    Yes
    AIR on iOS                   No     No    Yes   Yes
    Safari Browser on iOS        No     No    Yes   No

  • How to insert unicode characters in oracle

    Hi... I want to add special Unicode characters to an Oracle database... can anyone guide me on how to do this?
    I know we have the NVARCHAR2 datatype, which supports multilingual languages... but I am unable to insert the values from the SQL prompt... can anyone guide me on how to insert the values.
    Also, please tell me whether there is special care that has to be taken if we are accessing it through .NET?

    output of
    select * from nls_database_parameters where parameter like '%SET';
    is PARAMETER VALUE
    NLS_CHARACTERSET WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET AL16UTF16
    when I query: select testmsg, dump(testmsg,1016) from test;
    I get
    TESTMSG DUMP(TESTMSG,1016)
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    dsdas Typ=1 Len=10 CharacterSet=AL16UTF16: 0,64,0,73,0,64,0,61,0,73
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    éµOF¿¿ad¿ Typ=1 Len=18 CharacterSet=AL16UTF16: 0,e9,0,b5,0,4f,0,46,0,bf,0,bf,0,61,0,64,0,bf
    What I basically want is to store some special characters like éµΩΦЛήαδӨװΘ§³¼αγ in my Oracle database, but I am unable to do that...
    Edited by: [email protected] on Jun 28, 2010 10:19 PM
    Edited by: [email protected] on Jun 28, 2010 10:54 PM
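    Since NLS_CHARACTERSET here is WE8MSWIN1252, those characters get mangled unless the value reaches the server as national-character (AL16UTF16) data. A minimal JDBC sketch of that idea (connection details are placeholders; the table and column come from the dump above; the .NET provider has an analogous NVARCHAR2 parameter type):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class InsertNchar {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@localhost:1521:orcl", "user", "password");
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO test (testmsg) VALUES (?)");
                // setNString binds the value as NVARCHAR2, so it travels as
                // AL16UTF16 instead of being forced through WE8MSWIN1252.
                ps.setNString(1, "éµΩΦЛ");
                ps.executeUpdate();
                ps.close();
                conn.close();
            }
        }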

  • Exporting unicode characters to PDF using JRC not working.

    We have a requirement to support Unicode characters (Russian) in our reports. We are using the JRC with the R2 release. When I view the report in the viewer, the characters are correct, but when I export to PDF, they show as ???'s. Is this a bug? When I export from the report designer to PDF, they show correctly, but I have heard it uses a different reporting engine from the JRC.

    The solution is quite simple, don't worry too much about it.
    The JRC PDF export engine only supports the windows-1252 encoding scheme. If your character set uses an encoding scheme other than windows-1252, you will get a bunch of ????. There is a simple way to convert between encoding schemes in Java.
    As an example, the Arabic character scheme uses windows-1256, and we can convert this to the JRC-supported windows-1252 by
    JRCSupportedCharacterString = new String((InputCharacterString.toString()).getBytes("windows-1256"), "windows-1252");
    InputCharacterString - windows-1256 encoded
    JRCSupportedCharacterString - windows-1252 encoded (JRC supported)
    Now JRC will correctly process your character string.
    Note: make sure to set the font type of the fields in your report template to the relevant font style (e.g. Arabic, Chinese or whatever)
    Java encoding names and more information about conversion are available at
    http://mindprod.com/jgloss/encoding.html#CONVERSION
    Happy coding............

  • How to display unicode characters

    hi
    How can we save, display, write or read Unicode characters?

    Thanks a lot. I fully understand what you have said.
    My test is running under a Traditional Chinese "Big5" Windows XP system.
    Chinese "Big5" fonts can be displayed in a Console window. I can even input Chinese into the console window.
    Or, in other words, I can TYPE a text file which contains "Big5" Chinese characters and show it in the Console window. I assume this proves that the Console window does know how to handle Chinese characters.
    Now, my test is trying to CONVERT a Unicode string into a Chinese character string, expecting that the resulting Chinese string can be output to the Console window, letting the XP Console window handle the Chinese character display.
    My main question is "How can I CONVERT a Unicode string into a Big5 Chinese character string?"
    String tstr = new String("\u39321\u28207","big5");
    It seems this statement is INCORRECT, isn't it?
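    It is incorrect on two counts: Java \u escapes take exactly four hex digits (39321 and 28207 look like decimal code points, i.e. U+9999 and U+6E2F), and String has no (String, encoding) constructor; an encoding only applies when converting to or from bytes. A minimal Java sketch of what was probably intended (assuming the console really is in Big5):

        import java.io.PrintStream;

        public class Big5Console {
            public static void main(String[] args) throws Exception {
                String tstr = "\u9999\u6E2F"; // the two characters, written as proper Java escapes
                // Encode the Unicode string as Big5 bytes and let the
                // Traditional Chinese Windows XP console render them.
                PrintStream out = new PrintStream(System.out, true, "Big5");
                out.println(tstr);
            }
        }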
