Displaying non-ASCII numerals (digits) in a native language

Hello
I have been looking through the forums and docs on OTN and haven't been able to determine whether or not Oracle has built-in support for converting numerals.
I know that it has support for Date and Time formats, Monetary and Numeric formats (position of decimal point and comma), and Calendar formats. Also, according to the NLS FAQ...
What does the language part of NLS_LANG control?
The language used for Oracle messages, and for day and month names, etc.
...Oracle can even automatically translate messages and day/month names. My question is: can it also automatically translate numerals? If my database character set is UTF-8 and I insert the number 1285 into a NUMBER datatype column, can I pull that number out in a different language than the one it was inserted in? If a user of my site is using a Thai locale, for instance, I want him to see the digits 0-9 not as 0123456789, but as the digits of the Thai script.
If it doesn't support that, then how do multi-language applications deal with numerals? Do they always store numerals in char or varchar2 datatypes? If non-ascii numerals are stored as char/varchar2, how are mathematical equations done? For instance, how would a multi-language calculator work?
I'd appreciate any insight that can be offered.
Thanks
-Matt

There is some support for numeric month conversion for Asian languages, but not for Thai as far as I know.
Arabic, Thai, and Indic languages have classical shapes for numbers that are different from the conventional Western digits most often used on computers.
Unicode provides separate code points for each digit shape, so it would be straightforward to substitute Thai digits for the digits 0-9. Displaying these with the appropriate shape is subject to the available fonts, and usually means the front-end GUI must be able to handle Unicode.
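Since the NUMBER column stores a locale-independent value, the substitution can happen entirely at display time. Here is a sketch of the idea in Java (nothing Oracle-specific, and the class name is made up): DecimalFormat derives the digits 1-9 from the ten consecutive characters following the configured zero digit, so pointing it at THAI DIGIT ZERO (U+0E50) renders any number with Thai digits.

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;

public class ThaiDigits {
    // Format a number using Thai digits (U+0E50..U+0E59) instead of 0-9.
    static String toThai(long n) {
        DecimalFormatSymbols sym = new DecimalFormatSymbols();
        // DecimalFormat takes the digits 1-9 to be the nine characters
        // immediately following the configured zero digit.
        sym.setZeroDigit('\u0E50'); // THAI DIGIT ZERO
        return new DecimalFormat("0", sym).format(n);
    }

    public static void main(String[] args) {
        System.out.println(toThai(1285)); // Thai digits for 1, 2, 8, 5
    }
}
```

The value itself stays a plain NUMBER that you can do arithmetic on; only the presentation layer changes per locale, which also answers the question about multi-language calculators.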

Similar Messages

  • Displaying non-ascii characters in SQL Plus Windows

    Hi,
    is it possible to display data in more than one language at the same time in SQL*Plus? Or can you do this for only one language at a time?
    Thanks,
    Cornee van der Linden

    I think this question is better suited to the globalization discussion forum, Technologies -> Architecture -> Globalization and NLS. Sorry I can't be of more help.
    Alison

  • Non-ASCII chars in applets?

    hi,
    Spent 4 hours to find a way to use non-ASCII chars in applets (buttons, textareas), but didn't make it.
    Simply saying
    TextFieldObj.setText("\uxxxx");
    //or any equivalent obj. Ex. of \uxxxx: \u015F
    doesn't work. I even went into Graphics.paint() example, but it too can paint only ASCII chars.
    My hunch is that it is something to do with Character.Subset, but I still can't figure out how to do it.
    Please SOS,
    Reshat.

    Hi,
    I just managed to get Buttons to show Greek characters, so it appears that static buttons are fine.
    However, I still face the same problem for TextFields:
    TextFields work fine in IE, but in NN they sometimes convert into ASCII and sometimes give '?'. The same happens in HotJava.
    So there are 2 questions in my head:
    1. why can't NN use the fonts used by IE to display Non-ASCII chars?
    2. What is the safest font to use for Non-ASCII chars, to cover the widest possible audience.
    P.S. Java solves most cross-platform browser problems, but the font issue still seems to depend on the user and his/her browser. It appears Java is not font-independent in a non-ASCII context. If so, it would be nice to develop a plug-in to ensure that if the user doesn't have the font, a Java-standardized Unicode-based font is used instead. Otherwise, the non-ASCII world is still without a real Java.
    Thank you for your feedback,
    Reshat.
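    For what it's worth, the "\uxxxx" escape itself is resolved by the compiler, so the string really does contain the intended character; what varies per browser is whether the font used by the peer component can render it. A small check along those lines (Font.canDisplay is standard java.awt; the font name here is just the usual logical default):

    ```java
    import java.awt.Font;

    public class GlyphCheck {
        public static void main(String[] args) {
            String s = "\u015F"; // LATIN SMALL LETTER S WITH CEDILLA
            // The escape produces exactly one char with that code point,
            // regardless of platform or browser.
            System.out.println((int) s.charAt(0)); // 351 (0x015F)

            // Whether it actually renders depends on the font in use.
            Font f = new Font("Dialog", Font.PLAIN, 12);
            System.out.println(f.canDisplay('\u015F'));
        }
    }
    ```

    If canDisplay returns false for the font a given browser's VM picks, you get the boxes or '?' behaviour described above.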

  • Folders with non-ASCII chars in their names are not displayed on Mac using JFileChooser

    On Mac OS X 10.8.2, I have the latest Java 1.7.25 installed. When I run my simple Java program, which allows me to browse the files and folders of my native file system using JFileChooser, it does not show folders that have non-ASCII chars in their names. According to this link, this bug was reported for Java 7 Update 6 and was fixed in 7 Update 10. But I am getting this issue in Java 1.7.21 and Java 1.7.25.
    Sample Code-
    {code}
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import javax.swing.JFileChooser;
    import javax.swing.UIManager;
    import javax.swing.UnsupportedLookAndFeelException;

    public class Encoding {
        public static void main(String[] arg) {
            // NOTE: There is a folder DKF}æßj on the desktop with special chars
            // in its name. It is not shown in the file chooser, and trying to
            // read it as a File fails with FileNotFoundException.
            try {
                UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
            } catch (IllegalAccessException | UnsupportedLookAndFeelException
                    | ClassNotFoundException | InstantiationException ex) {
                Logger.getLogger(Encoding.class.getName()).log(Level.SEVERE, null, ex);
            }
            JFileChooser chooser = new JFileChooser(".");
            chooser.showOpenDialog(null);
        }
    }
    {code}

    Hi,
    Did you try this link - osx - File.list() retrieves file names with NON-ASCII characters incorrectly on Mac OS X when using Java 7 from Oracle -…
    set the LANG environment variable. It's a GUI application that I want to deploy as an Mac OS X application, and doing so, the LSEnvironment setting
    <key>LSEnvironment</key>
    <dict>
        <key>LANG</key>
        <string>en_US.UTF-8</string>
    </dict>

  • Why does non-ASCII text display improperly?

    One of the things that has long baffled me about OS X is the occasionally improper display of text on web sites. Sometimes, though less than before, the Mac still can't properly display non-ASCII characters. Today, for instance, I bought a GPS from Amazon, and the word nüvi has junk characters where the umlaut "ü" should be, as the text image below should show. Why is this? Is there a setting that corrects the problem?

    Hi Yawder, do you want to file a bug report on the problem that when Firefox generates the faux bold face for Droid Sans Mono it is doing a bad job compared with other browsers?
    You can submit that here: https://bugzilla.mozilla.org/

  • How to create a native KeyEvent for non-ASCII characters

    Hello
    I need to create a native KeyEvent from my application. I know that it is possible to send such an event with the Robot class. But how do I send an event for a non-ASCII character such as the German Ö (O with diaeresis)?
    I also know that the combination KeyEvent.VK_DEAD_DIAERESIS + KeyEvent.VK_O will get me the desired result, but I just want to pass the character Ö and let Java create the correct KeyCode(s).
    Does anyone know a solution?
    BTW: AWTKeyStroke always returns 0 for the Ö
    Thanks
    Matt
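    Not from the original thread, but since Java 7 there is a lookup that maps a character to an (extended) key code, which can avoid hand-building dead-key combinations. A sketch (whether Robot can actually post the resulting code still depends on the active keyboard layout):

    ```java
    import java.awt.event.KeyEvent;

    public class KeyCodeLookup {
        public static void main(String[] args) {
            // For plain ASCII letters this returns the familiar VK_ constants.
            System.out.println(KeyEvent.getExtendedKeyCodeForChar('a') == KeyEvent.VK_A);

            // For non-ASCII characters such as 'ö' it returns an extended key
            // code, or VK_UNDEFINED if none is known.
            int code = KeyEvent.getExtendedKeyCodeForChar('\u00F6');
            System.out.println(code != KeyEvent.VK_UNDEFINED);
        }
    }
    ```

    Robot.keyPress/keyRelease can then be tried with that code; on layouts without a direct Ö key it may still fail, in which case the dead-key combination remains the fallback.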

    Hi, James.
    Unfortunately, none of the F-keys can be set to the functions of the Volume keys using the Keyboard Shortcuts preference pane.
    Mac OS X does not include native support for assigning a macro or script to either an Fkey or other keyboard shortcut. For that, you need a third-party tool, like iKey, Quickeys, or Spark, the latter suggested by DPSG-Scout.
    You'd also need a set of scripts to assign to the keys, e.g. scripts to increase, decrease, or mute the volume. Those are tricky to write since System Preferences does not expose much to AppleScript. One would need to use the "GUI Scripting (System Events)" technique to create a script to change the settings in the Sound preferences pane of System Preferences.
    Automator does offer a "Set Computer Volume" action, which allows you to adjust the various Volume settings (Output Volume, Alert Volume, Input Volume) to a specific set of levels, including Mute for Output Volume. However, it won't handle the idea of pressing the same key multiple times to either increase or decrease the volume, à la the Volume Adjustment keys. In your case, it's most useful for Mute. Automator Workflows can be saved as applications, which I believe can then be assigned to F-keys in iKey or QuicKeys.
    Someone may have written a more advanced Automator action for adjusting volume than that provided with Mac OS X. Search Apple's Automator Action Downloads. If you find one, instructions on adding actions to Automator can be found in Automator Help.
    You could search ScriptBuilders to see if someone has already written scripts for the functions you need, then use iKey or Quickeys to assign such to keyboard shortcuts.
    Finally, you could solve the problem another way: buy a set of external speakers that has its own separate volume and mute control. For example, my trusty old Monsoon MM1000s have their own volume control with a mute button.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X

  • Non-ascii charsets in an applet woes

    I'm having trouble getting russian, chinese, japanese, and greek charactersets to display correctly in an applet (plugin is jre 1.4.2_02).
    1) yes the strings are in i18n'ized properties
    2) yes the non-ascii characters are unicode escaped (i.e. \u####) in the properties file
    3) yes the computer displaying the applet has its locale and languages set to the correct language
    4) yes the computer displaying the applet has the fonts available to display
    All the above is done and works great for ascii or extended ascii sets, but it shows junk for the 4 languages listed above.
    Anyone run into this problem before?
    Please help me out.
    Thanks,
    Andrew

    Ok, I got Russian and Greek to work. I had to switch all my awt controls to swing.
    However, I cannot get the Chinese/Japanese characters to display. I double and triple checked the encodings to make sure it was correct (it is).
    However, the font.properties.ja (Japanese) file lists a font called MSMINCHO.TTC and one called MSGOTHIC.TTC. Both are actually in TTF files on my system. I tried correcting font.properties.ja to use TTF, but to no avail.
    Also, font.properties.zh (Chinese) lists SIMSUN.TTC, which I do NOT have, but this shouldn't be a problem because both font.properties.zh_TW (trad. Chinese) and font.properties.zh_CN (simplified Chinese) list fonts that I DO have (and the only Chinese locales I support are zh_TW and zh_CN). Is this a problem? If so, where can I get simsun.ttc? (I searched and searched and could not find the file.)
    Please help,
    Andrew

  • Problem with non-ASCII characters on TTY

    Although I'm not a native speaker, I want my system language to be English (US), since that's what I'm used to. However, I have a lot of files with German names.
    My /etc/locale.gen has en_US.UTF-8 and de_DE.UTF-8 enabled. My /etc/locale.conf contains only the line
    LANG=en_US.UTF-8
    The German file names show up fine within Dolphin and Konsole (ls -a). But they look weird on either of the TTYs (the "console" you get to by pressing e.g. ctrl+alt+F1). They have other characters like '>>' or the paragraph symbol where non-ASCII characters should be. Is it possible to fix this?

    I don't think the console font is the problem. I use Lat2-Terminus16 because I read the Beginner's Guide on the wiki while installing the system.
    My /etc/vconsole.conf:
    KEYMAP=de
    FONT=Lat2-Terminus16
    showconsolefont even shows me the characters missing in the file names; e.g.: Ö, Ä, Ü

  • Re: Native Language Support in Forte

    [email protected] wrote:
    I've been posed a question in the abstract about Forte's native language
    support. Does Forte support any languages other than C/C++? And if so,
    what are the limitations or caveats?
    Native Language Support could also mean the NLS standard, which Forte supports. This
    provides for Internationalization (I18N) of a Forte application. It means that client
    applications deployed in French, German, and English (for example) could all be making
    requests of the same Forte shared service and getting responses in the native language.
    We provide for changing the displayed language/character set both statically, before the
    application starts, and dynamically, while the application is running.
    If native language support means the ability for Forte to call existing application
    logic written in C/C++, then today you can "wrapper" C functions in a Forte class,
    instantiate the class, and "call out" to the member functions (your C functions) directly
    from the 4GL object.
    C++ functions are a challenge due to cross-platform C++ compiler issues, which at a
    minimum include "name mangling" being non-standard across C++ compilers.
    However, you can "export" C++ class member functions as external "C" functions so that C
    code can call the function as if it were regular K&R or ANSI C.
    If other language support is required, I have customers on the East Coast who have
    successfully wrappered MicroFocus Cobol on HP-UX, and the Ada language. This works
    because practically all 3GL languages today support being called from 'C' (other
    possibilities are Pascal, Fortran, etc.).
    The only caveat here is that the other language's "boot code" may be registering for
    operating system "signals" and not handling them appropriately. This is rarely an issue
    these days.
    If other language support means "do we generate our 4GL code to any language other than
    C++", the answer is currently "no".
    However, we do support exporting service object definitions to environments like DCE,
    CORBA, Encina (a TP monitor), and the WWW. In our next release we will complete our
    support for exporting service objects to Java. This will allow Java applications to
    call upon the power of Forte's Shared Services architecture. Using this exporting
    concept, applications written in various other languages would be able to "call in" to
    the Forte shared service from the outside world.
    Didn't know what you were looking for. Hope the above hit the mark. If not, write me, or
    give me a call.
    Regards,
    jim

    Not exactly sure what you are asking. Can you rephrase your question?
    If your other server's locale is same as the one you configured then it should be ok.

  • Problem searching some PDF files in Acrobat Reader – Non-ASCII characters

    Acrobat Reader cannot search some .pdf files.  I have put an example document up on Scribd here.
    Any attempt to search for any word that can be clearly seen to be in the document fails with “No matches were found.”
    This example document is NOT a scanned document – words and characters can be selected.
    A hex display tool shows that the characters in a PDF document that can be successfully searched are in the ASCII/1252 range (A=0x41, etc).
    Copying and pasting characters in the example document to a hex display tool shows that the characters in the document are not in the ASCII range.
    For example the letters A to Z in the example document are in the range ‘A’ = 0xDF (decimal 223), ‘B’ = 0xDE (decimal 222), through to ‘Z’ = 0xC6 (decimal 198).
    However, characters in these non-ASCII ranges are displayed perfectly by Acrobat Reader, as can be see if the example document is opened.
    Therefore, as Acrobat Reader knows what these characters are, it doesn’t seem unreasonable to say that it should be able to search for and find them.
    Tests were performed using Acrobat Reader X v10.1.4.
    Can anyone say what this problem is?

    Hi Pat, thanks for your reply. 
    Your reference to the title of that page being 'HARNESSES' indicates that, when you view that document in Adobe Reader, you are seeing 'HARNESSES', not
    "ØßÎÒÛÍÍÛÍ".  And that the remainder of the document is similarly being displayed in readable English language.
    Yes as you say, you can search for 'ß' and get hits on 'A' (to use that as an example) in the example document.
    But the need to form a word to be searched for into whatever code mapping this is using (for example having to enter "ØßÎÒÛÍÍ" for HARNESSES - I'm not even sure how that would be entered from a keyboard) doesn't seem to be very convenient.
    It's clear the example document is using some code mapping other than ASCII / Windows-1252 (which has 'A' as 0x41).  But it is also clear that Adobe Reader knows what that mapping is, and knows to use it, as it's displaying (for example) 'A' for the code 0xDF. 
    So I guess the question is - why isn't Adobe Reader's knowledge of this mapping being extended to its search input? 

  • Why can TestStand not display characters with codes above 128?

    Hello All,
    I have encountered a problem in an application regarding characters beyond ASCII: why can TestStand not display characters whose code is above 128?
    For example, with the expression Local.xx = Chr(164),
    where xx is a string, I cannot get the correct string.
    Any ideas about this?
    OS: WinXP, TestStand 2012 SP1.
    Thanks a lot.
    Solved!
    Go to Solution.

    dug9000 wrote:
    [...]On Windows 7 at least, the code page setting for the operating system is located in the "Region and Language" control panel in the "Administrative" tab where it says "Language for non-Unicode programs".
    Hope this helps,
    -Doug
    Ah, that explains why I see "European Set"....
    One question, Doug: obviously, you can select only the localization there. Is this PostScript characters for all languages? Or is it possible to switch to something like TrueType fonts, e.g. "Wingdings"? (I know, bad example, but I hope you get the point)
    thanks,
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.
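    To make the code-page dependence concrete: a byte value above 127, such as the 164 in Chr(164), has no fixed meaning on its own; it is interpreted through whichever ANSI code page the "Language for non-Unicode programs" setting selects. A small illustration (in Java, just as a neutral way to decode bytes; TestStand itself is not involved):

    ```java
    import java.nio.charset.Charset;

    public class CodePageDemo {
        public static void main(String[] args) {
            byte[] raw = { (byte) 0xE8 }; // one byte above the ASCII range

            // Under Western European windows-1252 this byte is 'è' (U+00E8).
            System.out.println(new String(raw, Charset.forName("windows-1252")));

            // Under Central European windows-1250 the same byte is 'č' (U+010D).
            System.out.println(new String(raw, Charset.forName("windows-1250")));
        }
    }
    ```

    The same byte decodes to two different characters, which is why a string built from codes above 128 only displays correctly when the expected and actual code pages match.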

  • Is it possible to display the static numerals as Hindi?

    Trying to generate a report using XML Publisher; the report is composed of two columns.
    I need to display the numerals in the English-language column as Arabic numerals, and the numerals in the Arabic-language column as Hindi numerals. Currently they both appear as Arabic numerals in both columns.
    Question:
    Is it possible to display the static numerals as Hindi, as they had been entered in the template, or not?
    I understood that on generation these should be variables, not static.
    I would welcome any input on this issue.
    Many thanks

    Hmm,
    In templates, I guess, you can use multiple-font text.
    Same is the case here.
    Arabic and Hindi: are you using different fonts for these?
    If so, are these fonts available on the server too?
    The text displayed can be static or dynamic, but if the server doesn't know which font to apply, then it's going to apply the default font to the text.
    At runtime, the server has to know which font is to be applied to this text. For that to happen, the font has to be understood by the server.

  • Replacing non-ASCII characters with HTML character references

    Hi All,
    In Oracle 10g or greater is there a built-in function that will convert a string with non-ASCII characters like this
    a b č 뮼
    into an ASCII string with HTML character references like this?
    a b & # x 0 1 0 D ; & # x B B B C ;
    (note I had to include spaces between each character in the sample code for message to prevent the forum software from converting my text)
    I tried using
    utl_i18n.escape_reference( val, 'us7ascii' )
    but for some reason it returns
    a b c & # x B B B C ;
    Note how it converted the Western European character "č" to its unaccented counterpart "c", not "& # x 0 1 0 D ;" (is this a bug?).
    I also tried a custom solution using regexp_replace and asciistr (which I can't include here because the forum software chokes on it) but it only returns the correct result for values <=4000 characters long. Unfortunately asciistr doesn't appear to accept CLOB values larger than 4000 characters. It returns an error message like
    (ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 30251, maximum: 4000) ).
    I'm looking for a solution that works on CLOB data of any size.
    Thanks in advance for any insight you can provide.
    Joe Fuda
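    For comparison, the intended transformation is simple to state outside the database: leave code points below 128 alone and turn everything else into a hexadecimal character reference. A minimal Java sketch (the class name, zero-padding, and uppercase hex are illustrative choices, not anything utl_i18n promises):

    ```java
    public class NcrEscape {
        // Replace every non-ASCII code point with an &#xNNNN; reference.
        static String escape(String s) {
            StringBuilder out = new StringBuilder();
            s.codePoints().forEach(cp -> {
                if (cp < 128) {
                    out.appendCodePoint(cp);
                } else {
                    out.append(String.format("&#x%04X;", cp));
                }
            });
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(escape("a b \u010D \uBBBC"));
        }
    }
    ```

    Working on code points rather than bytes is what keeps 'č' from collapsing to its unaccented 'c', which is exactly the behaviour being reported against utl_i18n.escape_reference with us7ascii.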

    So with that (UTF8) in mind, let's take another look.....
    As shown below, I used a AL32UTF8 database.
    Note: I did not use a unicode capable tool for querying. So I set console mode code page to 1250 just to have č displayed properly (instead of posing as an è).
    Also, as a result of using windows-1250 for client character set, in the val column and in the second select's ncr column (iso8859-1), è (00e8) has been replaced with e through character set conversion going from server back to client.
    Running the same code on a database with a db character set such as we8mswin1252, that doesn't define the č (latin small c with caron) character, would yield results with a c in the ncr column.
    C:\>chcp 1250
    Aktuell teckentabell: 1250
    C:\>set nls_lang=.ee8mswin1250
    C:\>sqlplus test/test
    SQL*Plus: Release 11.1.0.6.0 - Production on Fri May 23 21:25:29 2008
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the OLAP option
    SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET';
    PARAMETER              VALUE
    NLS_CHARACTERSET       AL32UTF8
    NLS_NCHAR_CHARACTERSET AL16UTF16
    SQL> select unistr('\010d \00e8') val, utl_i18n.escape_reference(unistr('\010d \00e8'),'us7ascii') NCR from dual;
    VAL  NCR
    č e  c e
    SQL> select unistr('\010d \00e8') val, utl_i18n.escape_reference(unistr('\010d \00e8'),'we8iso8859p1') NCR from dual;
    VAL  NCR
    č e  &# x10d; e     <- "è"
    SQL> select unistr('\010d \00e8') val, utl_i18n.escape_reference(unistr('\010d \00e8'),'ee8iso8859p2') NCR from dual;
    VAL  NCR
    č e  č &# xe8;
    SQL> select unistr('\010d \00e8') val, utl_i18n.escape_reference(unistr('\010d \00e8'),'cl8iso8859p5') NCR from dual;
    VAL  NCR
    č e  &# x10d; &# xe8;
    In the US7ASCII case, where it should be possible for all non-ASCII characters to be escaped, it seems as if the actual escape step is skipped over.
    Hope this helps to understand whether utl_i18n is usable or not in your case.
    Message was edited by:
    orafad
    Fixed replaced character references :)

  • Unable to play videos with non-ASCII-characters in filename

    Hi!
    I use a MediaPlayer to display MP4 videos in my application. This works quite well. Unfortunately, I have a problem if the filename of the video to be shown contains non-ASCII characters.
    I get the following message:
    -->file:D:\daten\avi\��� ����.MPG
    Error: Unable to realize com.sun.media.amovie.AMController@4b7651
    Failed to realize
    The first line shows the filename I pass to the setMediaLocation() method of the MediaPlayer object.
    What's wrong? If I rename the file to ABC.mpg it works fine.
    Thanks for your help
    Thomas
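    One workaround worth trying (an assumption on my part, not something confirmed in this thread) is to hand the player a percent-encoded file: URL instead of a raw path, since unescaped non-ASCII bytes in a media locator are a common failure point with JMF-based players. java.io.File.toURI() does the escaping (the path below is illustrative):

    ```java
    import java.io.File;

    public class MediaLocatorDemo {
        public static void main(String[] args) {
            // A filename containing a non-ASCII character.
            File f = new File("t\u00EBst.mpg");

            // toURI() builds a file: URI; toASCIIString() percent-encodes
            // non-ASCII characters using UTF-8, e.g. 'ë' -> %C3%AB.
            String locator = f.toURI().toASCIIString();
            System.out.println(locator);
        }
    }
    ```

    If the player still refuses the escaped locator, renaming to ASCII, as you observed, remains the pragmatic fallback.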


  • Linux or JVM: cannot display non english character

    hi,
    I am trying to implement a GUI that supports both Turkish and English. The user can switch between them on the fly.
    public class SampleGUI {
        JButton trTranslate = new JButton(); /* button to translate into Turkish */
        /* label whose text will be translated */
        JLabel label = new JLabel("Text to Be Translated!");

        void init() {
            trTranslate.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    String language = "tr";
                    String country = "TR";
                    Locale currentLocale = new Locale(language, country);
                    ResourceBundle messages =
                        ResourceBundle.getBundle("TranslateMessages", currentLocale);
                    /* get the Turkish match of "TextToTranslate" from the properties file */
                    label.setText(messages.getString("TextToTranslate"));
                }
            });
        }
    }
    Finally, my problem is that my application does not display non-English characters like "� ş � ğ � i" in the GUI after triggering the translation. However, if I do not use ResourceBundle and instead assign the Turkish text to the label directly (i.e. label.setText("şşşşş")), the GUI successfully displays Turkish characters. What may be the problem? Which encoding does not conform?
    P.S.: I am using Red Hat Linux 8.0 and j2sdk1.4.1. The current locale is "tr_TR.UTF-8". In /etc/sysconfig/keyboard, keyTable = "trq". There seems to be no problem there, as I can input and output
    Turkish characters; the OS supports this. Also, the JVM gets the current encoding from the OS. It seems as if there is a problem with the properties file being read in an inappropriate encoding.
    thanx for dedicating ur time and effort,
    hELin
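    One likely culprit on that setup: java.util.Properties (and therefore PropertyResourceBundle) reads .properties files as ISO-8859-1, so Turkish characters must either be \uXXXX-escaped (the native2ascii tool does this) or the bundle must be built from a Reader with an explicit encoding. A sketch of the Reader approach (the key and file names are made up; note the PropertyResourceBundle(Reader) constructor needs Java 6+, newer than the poster's 1.4):

    ```java
    import java.io.*;
    import java.nio.charset.StandardCharsets;
    import java.util.PropertyResourceBundle;

    public class Utf8Bundle {
        public static void main(String[] args) throws IOException {
            // Write a UTF-8 properties file containing Turkish characters.
            File f = File.createTempFile("TranslateMessages", ".properties");
            try (Writer w = new OutputStreamWriter(
                    new FileOutputStream(f), StandardCharsets.UTF_8)) {
                w.write("TextToTranslate=\u015F\u011F\u0131\n"); // şğı
            }

            // Loading through a Reader bypasses the default ISO-8859-1 decoding.
            try (Reader r = new InputStreamReader(
                    new FileInputStream(f), StandardCharsets.UTF_8)) {
                PropertyResourceBundle bundle = new PropertyResourceBundle(r);
                System.out.println(bundle.getString("TextToTranslate")); // şğı
            }
        }
    }
    ```

    On 1.4 the equivalent fix is running the properties file through native2ascii so the bundle contains only \uXXXX escapes, which survive the ISO-8859-1 read intact.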

    I would suspect it would work in vim only if vim supported the UTF8 character set. I have no idea if it does.
    Here is one blurb I found on google:
    USING UNICODE IN THE GUI
    The nice thing about Unicode is that other encodings can be converted to it
    and back without losing information. When you make Vim use Unicode
    internally, you will be able to edit files in any encoding.
    Unfortunately, the number of systems supporting Unicode is still limited.
    Thus it's unlikely that your language uses it. You need to tell Vim you want
    to use Unicode, and how to handle interfacing with the rest of the system.
    Let's start with the GUI version of Vim, which is able to display Unicode
    characters. This should work:
         :set encoding=utf-8
         :set guifont=-misc-fixed-medium-r-normal--18-120-100-100-c-90-iso10646-1
    The 'encoding' option tells Vim the encoding of the characters that you use.
    This applies to the text in buffers (files you are editing), registers, Vim
    script files, etc. You can regard 'encoding' as the setting for the internals
    of Vim.
    This example assumes you have this font on your system. The name in the
    example is for X-Windows. This font is in a package that is used to enhance
    xterm with Unicode support. If you don't have this font, you might find it
    here:
         http://www.cl.cam.ac.uk/~mgk25/download/ucs-fonts.tar.gz
