Scandinavian characters with non-Scandinavian SW o...

Hey there, I've got an E61i with English & Chinese; is there any way to input Scandinavian characters (å, ä & ö) with this version? FWIW, I'm running SW v. 1.0633.22.05.

"ä" is an entity code for ä and that's what the source actually has (nothing to do with Java); there's nothing you can do but just parse them. It's pretty straight - the code always starts with a & and ends with a semicolon (;) or white space (that's invalid html but that's what some sites have). You'll find the complete list from http://www.w3.org/TR/REC-html40/sgml/entities.html
You'll also have to look for things like &#XXX; - for example é is an entity code for é (233 is the unicode value of é in decimal).
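
A minimal parsing sketch along those lines, in Java; the three-entry map is a stand-in for the full W3C entity list linked above, and only the decimal &#NNN; form is handled, matching the example given (hex forms would need one more branch):

    import java.util.HashMap;
    import java.util.Map;

    public class EntityDecoder {
        // Tiny stand-in for the full entity list at the W3C URL above.
        private static final Map<String, Character> NAMED = new HashMap<String, Character>();
        static {
            NAMED.put("aring", 'å');
            NAMED.put("auml", 'ä');
            NAMED.put("ouml", 'ö');
        }

        static String decode(String s) {
            StringBuilder out = new StringBuilder();
            int i = 0;
            while (i < s.length()) {
                char c = s.charAt(i);
                if (c != '&') { out.append(c); i++; continue; }
                // Scan to the terminator: ';' normally, or whitespace on sloppy pages.
                int j = i + 1;
                while (j < s.length() && s.charAt(j) != ';' && !Character.isWhitespace(s.charAt(j))) j++;
                String body = s.substring(i + 1, j);
                if (body.matches("#\\d+")) {
                    out.appendCodePoint(Integer.parseInt(body.substring(1))); // &#233; -> é
                } else if (NAMED.containsKey(body)) {
                    out.append(NAMED.get(body));                              // &auml; -> ä
                } else {
                    out.append('&'); i++; continue;                           // unknown: keep '&' and rescan
                }
                i = (j < s.length() && s.charAt(j) == ';') ? j + 1 : j;
            }
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(decode("sm&ouml;rg&aring;sbord, &#233;")); // smörgåsbord, é
        }
    }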

Similar Messages

  • Replacing non-ASCII characters with HTML character references

    Hi All,
    In Oracle 10g or greater is there a built-in function that will convert a string with non-ASCII characters like this
    a b č 뮼
    into an ASCII string with HTML character references like this?
    a b & # x 0 1 0 D ; & # x B B B C ;
    (note I had to include spaces between each character in the sample code for message to prevent the forum software from converting my text)
    I tried using
    utl_i18n.escape_reference( val, 'us7ascii' )
    but for some reason it returns
    a b c & # x B B B C ;
    Note how it converted the Western European character "č" to its unaccented counterpart "c", not "& # x 0 1 0 D ;" (is this a bug?).
    I also tried a custom solution using regexp_replace and asciistr (which I can't include here because the forum software chokes on it) but it only returns the correct result for values <=4000 characters long. Unfortunately asciistr doesn't appear to accept CLOB values larger than 4000 characters. It returns an error message like
    (ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 30251, maximum: 4000) ).
    I'm looking for a solution that works on CLOB data of any size.
    Thanks in advance for any insight you can provide.
    Joe Fuda

    So with that (UTF8) in mind, let's take another look.....
    As shown below, I used an AL32UTF8 database.
    Note: I did not use a unicode capable tool for querying. So I set console mode code page to 1250 just to have č displayed properly (instead of posing as an è).
    Also, as a result of using windows-1250 for client character set, in the val column and in the second select's ncr column (iso8859-1), è (00e8) has been replaced with e through character set conversion going from server back to client.
    Running the same code on a database with a db character set such as we8mswin1252, that doesn't define the č (latin small c with caron) character, would yield results with a c in the ncr column.
    C:\>chcp 1250
    Active code page: 1250
    C:\>set nls_lang=.ee8mswin1250
    C:\>sqlplus test/test
    SQL*Plus: Release 11.1.0.6.0 - Production on Fri May 23 21:25:29 2008
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the OLAP option
    SQL> select * from nls_database_parameters where parameter like '%CHARACTERSET';
    PARAMETER              VALUE
    NLS_CHARACTERSET       AL32UTF8
    NLS_NCHAR_CHARACTERSET AL16UTF16
    SQL> select unistr('\010d \00e8') val, utl_i18n.escape_reference(unistr('\010d \00e8'),'us7ascii') NCR from dual;
    VAL  NCR
    č e  c e
    SQL> select unistr('\010d \00e8') val, utl_i18n.escape_reference(unistr('\010d \00e8'),'we8iso8859p1') NCR from dual;
    VAL  NCR
    č e  &#x10d; e     <- "è"
    SQL> select unistr('\010d \00e8') val, utl_i18n.escape_reference(unistr('\010d \00e8'),'ee8iso8859p2') NCR from dual;
    VAL  NCR
    č e  č &#xe8;
    SQL> select unistr('\010d \00e8') val, utl_i18n.escape_reference(unistr('\010d \00e8'),'cl8iso8859p5') NCR from dual;
    VAL  NCR
    č e  &#x10d; &#xe8;
    In the US7ASCII case, where it should be possible for all non-ASCII characters to be escaped, it seems as if the actual escape step is skipped over.
    Hope this helps to understand whether utl_i18n is usable or not in your case.
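
    For comparison, the escaping the original poster was after is easy to express outside the database; a minimal Java sketch (hex NCRs, mirroring the &#x...; examples above) that is independent of the database character set and has no 4000-character limit:

        public class NcrEscape {
            // Replace every non-ASCII code point with a hex NCR, e.g. č -> &#x10d;
            static String toAsciiNcr(String s) {
                StringBuilder out = new StringBuilder();
                for (int i = 0; i < s.length(); ) {
                    int cp = s.codePointAt(i);
                    if (cp < 128) {
                        out.append((char) cp);
                    } else {
                        out.append("&#x").append(Integer.toHexString(cp)).append(';');
                    }
                    i += Character.charCount(cp); // handles supplementary characters
                }
                return out.toString();
            }

            public static void main(String[] args) {
                System.out.println(toAsciiNcr("a b \u010d \ubbbc")); // a b &#x10d; &#xbbbc;
            }
        }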

  • Upload text files with non-English characters

    I use an Apex page to upload text files. Then I retrieve the contents of the files from wwv_flow_files.blob_content and convert them to varchar2 with utl_raw.cast_to_varchar2, but characters like ò, à, ù become garbage.
    What could be the problem? Are the characters lost when the files are stored in wwv_flow_files, or when I do the conversion?
    Some other info:
    * I see wwv_flow_files.DAD_CHARSET is set to "ascii", and wwv_flow_files.FILE_CHARSET is null.
    * Trying utl_raw.cast_to_varchar2( utl_raw.cast_to_raw('àòèù') ) returns 'àòèù' correctly;
    * The NLS_CHARACTERSET parameter is AL32UTF8 (not just English ASCII)

    Hi
    Have a look at "csv upload -- suggestion needed with non-English character in csv file"; it might help you.
    Thanks,
    Manish
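
    For what it's worth, the symptom can be reproduced outside the database: if the uploaded bytes arrive in the client's encoding but are then interpreted as UTF-8 (the AL32UTF8 database character set), accented characters break. A minimal Java illustration, with ISO-8859-1 as the assumed client encoding:

        import java.nio.charset.Charset;

        public class CharsetMismatchDemo {
            public static void main(String[] args) {
                // Bytes of "àòèù" as a Latin-1 client would store them (one byte each).
                byte[] uploaded = "àòèù".getBytes(Charset.forName("ISO-8859-1"));
                // Reading those bytes back as UTF-8 produces the "garbage" described
                // above, because 0xE0, 0xF2, ... are not valid UTF-8 sequences.
                System.out.println(new String(uploaded, Charset.forName("UTF-8")));
            }
        }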

  • Not all Chinese characters display correctly with non-embedded text

    Hi all,
      I've updated to the latest beta 2 of Flash Player 10.1 (10,1,51,66) and compiled this simple Flex application to illustrate that not all Chinese characters display correctly with non-embedded text (device fonts).
    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                   xmlns:s="library://ns.adobe.com/flex/spark"
                   xmlns:mx="library://ns.adobe.com/flex/halo"
                   minWidth="1024" minHeight="768"
                   creationComplete="resourceManager.localeChain = ['zh_CN'];"
                   layout="{new VerticalLayout()}">           
        <fx:Script>
            <![CDATA[
                import spark.layouts.VerticalLayout;
            ]]>
        </fx:Script>
        <s:Label text="伜-伞伟传伡伢伣伤伥伦伧伨伩伪伫伬伭-伝">
        </s:Label>
        <mx:Label text="伜-伞伟传伡伢伣伤伥伦伧伨伩伪伫伬伭-伝"/>       
    </s:Application>
      Notice that characters in the Unicode range 0x4F1E .. 0x4F2D are not displayed within the <s:Label> component, which uses the Flash Text Engine (FTE) by default to display text, but at the same time those characters display just fine within the older <mx:Label> component, which relies on a flash.text.TextField instance to render the text.

        OK, I did not know that the "Arial Unicode MS" font is only distributed with Microsoft Office. I was looking into relying on this font in case the "simsun.ttc" font is not available on the user's system, by using the following:
    <?xml version="1.0" encoding="utf-8"?>
    <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                   xmlns:s="library://ns.adobe.com/flex/spark"
                   xmlns:mx="library://ns.adobe.com/flex/mx"
                   minWidth="1024" minHeight="768"
                   creationComplete="resourceManager.localeChain = ['zh_CN']"
                   layout="{new VerticalLayout()}">               
        <fx:Style>
            @namespace s "library://ns.adobe.com/flex/spark";
            @namespace mx "library://ns.adobe.com/flex/halo";
        s|Label {
            fontFamily: "Arial Unicode MS, SimSun, Arial";
        }
    </fx:Style>
        <fx:Script>
            <![CDATA[
                import spark.layouts.VerticalLayout;
            ]]>
        </fx:Script>
        <s:Label id="sparkLabel" text="伜-伞伟传伡伢伣伤伥伦伧伨伩伪伫伬伭-伝"/>   
    </s:Application>  
    Tough decision ahead, but according to the survey at codestyle.org (2000 participants from 2007 onwards), Arial Unicode MS is present on 62.53% of the systems surveyed.
    PS:
    My current install of Windows XP already includes Microsoft Office 2007, and I've also enabled/disabled support for East Asian languages, so the contents of my "fonts" folder now differ significantly from a "clean" Windows XP install.
    Nevertheless, I found a page with comprehensive lists of the standard fonts installed with different releases of Windows, which looks viable: http://www.kayskreations.net/fonts/fonttb.html

  • Problems with non-ASCII characters on Linux Unit Test Import

    I found a problem with non-ASCII characters in the Unit Test Import for Linux.  This problem does not appear in the Unit Test Import for Windows.
    I have attached a Unit Test export called PROC1.XML. It tests a procedure that is included in another attachment called PROC1.txt. The unit test includes 2 implementations. Both implementations pass non-ASCII characters to the procedure and return them unchanged.
    In Linux, the unit test import changes the non-ASCII characters in the XML file to xFFFD. If I copy/paste the non-ASCII characters into the unit test after the import, they are stored and executed correctly.
    Amazon Ubuntu 3.13.0-45-generic / lubuntu-core
    Oracle 11g Express Edition - AL32UTF8
    SQL*Developer 4.0.3.16 Build MAIN-16.84
    Java(TM) SE Runtime Environment (build 1.7.0_76-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.76-b04, mixed mode)
    In Windows, the unit test will import the non-ASCII characters unchanged from the XML file.
    Windows 7 Home Premium, Service Pack 1
    Oracle 11g Express Edition - AL32UTF8
    SQL*Developer 4.0.3.16 Build MAIN-16.84
    Java(TM) SE Runtime Environment (build 1.8.0_31-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 25.31-b07, mixed mode)
    If SQL*Developer is coded the same between Windows and Linux, the JVM must be causing the problem.
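
    One way to test that theory: the default charset differs between the two environments, and any read that omits an explicit charset inherits it. A minimal Java sketch (the final decode is an assumed stand-in for however the importer actually reads the XML):

        import java.nio.charset.Charset;

        public class DefaultCharsetDemo {
            public static void main(String[] args) throws Exception {
                // The two environments report different defaults (e.g. windows-1252
                // vs UTF-8 or POSIX); anything reading without an explicit charset
                // inherits them.
                System.out.println("file.encoding  = " + System.getProperty("file.encoding"));
                System.out.println("defaultCharset = " + Charset.defaultCharset());

                byte[] utf8 = "é".getBytes("UTF-8");
                // Decoding UTF-8 bytes under a non-UTF-8 default yields U+FFFD or
                // mojibake, matching the xFFFD symptom described above.
                System.out.println(new String(utf8)); // uses the platform default charset
            }
        }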

  • Naming files with non-English characters

    I'm using FileMaker to create PDFs through Acrobat 10.1.12. I need to use Polish, Hungarian, Czech and Slovak characters in the file name, but the characters are not recognised and so the file cannot be created. This is on Windows; the problem does not occur on a Mac.

  • How to display feeds with non-latin utf8 characters in Raggle?

    Has anyone tried to use raggle to read feeds with non-Latin UTF-8 characters?
    If you have been successful, how did you do it?
    Thanks

    I have this problem too...

  • Playlists containing songs with titles starting with non-alphabetical characters do not get transferred at all

    I was really happy about how the desktop manager lets me choose songs from my iTunes library by playlist! This is smart! I'm also happy to see the PlayBook render Asian characters properly (unlike my experience on the Bold!)
    However, I have a lot of songs whose titles begin with non-alphabetical characters, i.e. Asian characters. For any playlist containing such songs, those songs do not get transferred onto the PlayBook AT ALL. For example, I have one playlist that contains only songs with titles beginning with Asian characters; this playlist was transferred, but it shows as containing zero songs! When I browse my song library by artist/album, I can find all the songs that should be in those playlists.
    Can this be fixed?
    thanks.

    As it turns out, it kind of got fixed by itself. What I did (I think, since I have not definitively verified) was go to browse "all songs", where I noticed that all those songs starting with Asian characters seemed to show correctly. Then I went back to the playlists in question, and voilà!
    Clearly there is a bug or two in the music player somewhere regarding these title characters. But RIM has more urgent things to resolve, I guess; like my auto screen lock still not working!

  • Filling CLOB with non-ASCII characters

    Hello,
    I have had some problems with CLOBs and the usage of German umlauts (ä, ö, ü, ß). I wasn't able to insert or update strings containing umlauts in combination with string binding. After inserting or updating, the umlaut characters were replaced by strange upside-down (Spanish) question marks.
    However, it was working when I did not use string binding.
    I tried various things; after some time I tracked the problem down to oracle.toplink.queryframework.SQLCall.java. In prepareStatement(...) you find something like
    ByteArrayInputStream inputStream = new ByteArrayInputStream(((String) parameter).getBytes());
    // Binding starts with a 1 not 0.
    statement.setAsciiStream(index + 1, inputStream,((String) parameter).getBytes().length);
    I replaced the usage of ByteArrayInputStream with CharArrayReader:
    // TH changed, 26.11.2003, Umlaut will not work with this.
    CharArrayReader reader = new CharArrayReader(((String) parameter).toCharArray());     
    statement.setCharacterStream(index + 1, reader, ((String) parameter).length() );
    and this worked.
    and this worked.
    Is there any other way of achieving this? Did anyone get CLOBs with non-ASCII characters to work?
    Regards -- Tobias
    (TopLink 9.0.3, CLOB was mapped to String, driver was Oracle OCI)

    I don't think the console font is the problem. I use Lat2-Terminus16 because I read the Beginner's Guide on the wiki while installing the system.
    My /etc/vconsole.conf:
    KEYMAP=de
    FONT=Lat2-Terminus16
    showconsolefont even shows me the characters that are missing from the file names, e.g. Ö, Ä, Ü

  • Can't get the attachment filename out of a Part (with non-ASCII characters)

    Hello all, and happy new year :)
    My issue is with non-ASCII filenames in attachments... Yes, I've read the FAQ: http://www.oracle.com/technetwork/java/faq-135477.html#encodefilename
    I can't get the filename out of the BodyPart for those kinds of attachments.
    Here's my unit test:
     // contains various parts from various mailers, encoded in different ways...
     private enum EncodedFileNamePart{
          OUTLOOK("Content-Type: text/plain;\n name=\"=?iso-8859-1?Q?c'estd=E9j=E0no=EBl=E7ac'estcool.txt?=\" \nContent-Transfer-Encoding: 7bit\nContent-Disposition: attachment;\n filename=\"=?iso-8859-1?Q?c'estd=E9j=E0no=EBl=E7ac'estcool.txt?=\" \n\nnoel 2010\n","c'estdéjànoëlçac'estcool.txt"),
          GMAIL("Content-Type: text/plain; charset=US-ASCII; name=\"=?ISO-8859-1?B?ZOlq4G5v62znYWNlc3Rjb29sLnR4dA==?=\"\nContent-Disposition: attachment; filename=\"=?ISO-8859-1?B?ZOlq4G5v62znYWNlc3Rjb29sLnR4dA==?=\"\nContent-Transfer-Encoding: base64\nX-Attachment-Id: f_giityr5r0\n\namluZ2xlIGJlbGxzIQo=\n","déjànoëlçacestcool.txt"),
          THUNDERBIRD("Content-Type: text/plain;\n name=\"=?ISO-8859-1?Q?d=E9j=E0no=EBl=E7acestcool=2Etxt?=\"\nContent-Transfer-Encoding: 7bit\nContent-Disposition: attachment;\n filename*0*=ISO-8859-1''%64%E9%6A%E0%6E%6F%EB%6C%E7%61%63%65%73%74%63%6F;\n filename*1*=%6F%6C%2E%74%78%74\n\njingle bells!\n","déjànoëlçacestcool.txt"),
          EVOLUTION("Content-Disposition: attachment; filename*=ISO-8859-1''d%E9j%E0no%EBl.txt\nContent-Type: text/plain; name*=ISO-8859-1''d%E9j%E0no%EBl.txt; charset=\"UTF-8\" \nContent-Transfer-Encoding: 7bit\n\njingle bells\n","déjànoël.txt");
          String content=null;
          String target=null;
          private EncodedFileNamePart(String content,String target){
               this.content=content;
               this.target=target;
          }
          public Part get(){
               try{
                    ByteArrayInputStream bis = new ByteArrayInputStream(this.content.getBytes());
                    Part part = new MimeBodyPart(bis);
                    bis.close();
                    return part;
               }
               catch(Throwable e){
                    return null;
               }
          }
          public String getTarget(){
               return this.target;
          }
     }
     @Test
     public void testJavamailDecode() throws MessagingException, UnsupportedEncodingException{
          System.setProperty("mail.mime.encodefilename", "true");
          System.setProperty("mail.mime.decodefilename", "true");
          for(EncodedFileNamePart part : EncodedFileNamePart.values())
               assertEquals(part.name(), MimeUtility.decodeText(part.get().getFileName()), part.getTarget());
     }
    I get a NullPointerException in decodeText because getFileName() returns null for the EVOLUTION case; it works fine with OUTLOOK, THUNDERBIRD and GMAIL.
    Evolution's header is "Content-Disposition: attachment; filename*=ISO-8859-1''d%E9j%E0no%EBl.txt", which doesn't look like the others (it is the RFC 2231 / RFC 5987 style of parameter encoding).
    How can I handle this situation other than by writing my own decoder?
    Thanks for your answers!

    Set the System property "mail.mime.decodeparameters" to "true" to enable the RFC 2231 support.
    See the javadocs for the javax.mail.internet package for the list of properties.
    Yes, the FAQ entry should contain those details as well.
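
    A minimal sketch of that suggestion applied to the EVOLUTION sample from the unit test above (exact decoding behavior varies by JavaMail version, so treat this as illustrative):

        import java.io.ByteArrayInputStream;
        import javax.mail.internet.MimeBodyPart;

        public class Rfc2231Demo {
            public static void main(String[] args) throws Exception {
                // Enable RFC 2231 parameter decoding before any MIME parsing happens.
                System.setProperty("mail.mime.decodeparameters", "true");
                System.setProperty("mail.mime.decodefilename", "true");

                String evolution = "Content-Disposition: attachment; "
                        + "filename*=ISO-8859-1''d%E9j%E0no%EBl.txt\n"
                        + "Content-Type: text/plain; charset=\"UTF-8\"\n"
                        + "Content-Transfer-Encoding: 7bit\n\njingle bells\n";
                MimeBodyPart part = new MimeBodyPart(
                        new ByteArrayInputStream(evolution.getBytes("ISO-8859-1")));
                System.out.println(part.getFileName()); // expected: déjànoël.txt
            }
        }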

  • cp and tar files with non-printable characters

    Hi all,
    Maybe it's a silly question, but I just got stuck on this.
    We have an XSan with diverse material from various departments. Besides having a backup on tape, I was trying to do a plain copy of all the files to another disk from a terminal, just using cp or tar.
    But whenever cp or tar encounters a file with a non-printable character in its name, it doesn't copy it.
    Let's say in the client Finder they named the file "opción.txt".
    The file shows up in the terminal with a ?, but cp or tar won't get the file.
    Any clues?
    thanks!

  • Problem with Non-English Fields Output to PDF by JASPER in JDev10.1.3

    I am using jsprx files (designed in iReport) to generate PDF reports out of an Oracle database.
    The non-English fields are shown correctly when I output the report to HTML or when I view it with JasperView.
    If I try making PDF files (JasperExportManager.exportReportToPdfFile), the static fields containing e.g. Arabic/Chinese characters won't be displayed, and dynamic fields from the database with non-English contents will be shown as ??? or null.
    I received some suggestions about using PARAMETERS to feed the report instead of FIELDS, which I think cannot be helpful in this case or in general.
    I think this should be a common problem. These are the components I am using:
    itext-1.4.7.jar
    commons-digester-1.7.zip
    jasperreports-1.2.8.jar
    Any comment or help is appreciated.
    Thanks
    Farbod
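
    One commonly suggested direction (not verified here): the HTML output falls back to system fonts, but the PDF exporter needs an explicit PDF font and encoding on each text element, plus the iTextAsian.jar font pack on the classpath for CJK. The attribute names below follow the jasperreports-1.x jrxml schema; the font choice is illustrative and $F{name} is a placeholder field (an embedded Arabic-capable TTF would be needed for the Arabic fields):

        <textField>
            <reportElement x="0" y="0" width="200" height="20"/>
            <textElement>
                <font fontName="Arial" pdfFontName="STSong-Light"
                      pdfEncoding="UniGB-UCS2-H" isPdfEmbedded="false"/>
            </textElement>
            <textFieldExpression class="java.lang.String">$F{name}</textFieldExpression>
        </textField>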

  • Text Messaging in Japanese (or other language with non-Roman script)

    Is it possible to send/receive text messages written in non-Roman characters with Verizon? 
    More specifically, I'm using a Droid 2 with Verizon, and I'm trying to text a friend in Japanese.  She has an iPhone (AT&T), and is physically in the U.S. (so I'm not asking about international text messaging).  I've already installed apps on my Droid (e.g. OpenWNN, or Simeji) to input Japanese, which work fine in and of themselves, and the Droid of course displays Japanese text or other languages just fine on the Browser or in E-mails.
    However, I'm having lots of trouble with Text Messaging.  When I send a message containing Japanese text (typed in perfectly fine with OpenWNN or whatever), she either never gets the message, or the text comes out unreadable (as ???????).  Messages with a mix of Roman and non-Roman characters typically show the Roman characters ok, but the non-Roman ones (i.e. Japanese) are garbled.  If she sends me a message containing any non-Roman characters, I generally don't get the message at all (i.e. not even garbled-- just no message at all).   This deficiency seems to be specific to Text Messaging, as far as I can tell.  I can send and receive e-mails containing Japanese just fine, read foreign language web pages, and type in Japanese into search boxes on those web pages and so on with the Droid.  However, sending an e-mail message to (myphone#)@vtext.com, predictably, results in any Japanese text being garbled, though Roman characters come through just fine.
    Is this perhaps something specific to Verizon's network?  Or is text messaging in non-Roman scripts just inherently impossible?  My friend says she knows others who can successfully text in non-Roman scripts (e.g. Korean Hangul, Japanese Kana or Kanji), and claims some of these folks are Verizon customers too.  A search on droidforums.com yielded someone who supposedly could text in Korean within Verizon.  So, I'm hoping this is possible, and I'm just missing something. However, this is perhaps all secondhand information and rumor.
    (Incidentally, I know for certain that messaging with non-Roman scripts is possible in general.  During my last vacation in Japan, I rented a phone there, and could send messages in English or Japanese scripts.  However, I believe the phones there actually have their own e-mail addresses, and so the service there is more like regular e-mail-- my understanding is that text messaging as we know it in the U.S. is a distinct technology, though I could well be wrong.)
    So, to restate my question: is it possible to text message with non-Roman scripts with Verizon?  Has anyone out there done this successfully-- if so what did you do, or what app did you use?  Or is it in fact impossible? (e.g. Maybe text messaging here only uses 7 or 8 bits, instead of 14 or 16 needed to encode all the various Asian, European, and other scripts?)
    Thanks for any help available.

    I'm trying to figure this out as well, but my understanding on the matter is that Verizon's CDMA network uses an encoding that will display Roman characters, Lao, Thai, and several other scripts, but not hiragana, kanji, katakana, or simplified or traditional Chinese. Romaji should work, however. It would be nice to hear some input from a Verizon rep on the subject, because like you, my info is all second hand.
    If my understanding is correct, the simple answer is no, it's not possible.
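
    A rough Java illustration of the encoding constraint raised in the question (7-bit vs. 16-bit); the alphabet below is an abbreviated stand-in for the real GSM 03.38 basic table, so treat it as a sketch rather than a reference:

        public class SmsEncodingCheck {
            // Abbreviated stand-in for the GSM 03.38 basic character set
            // (the real table has 128 entries plus an extension table).
            private static final String GSM_BASIC =
                "@£$¥èéùìòÇØøÅåΔΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
                + "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà\n\r_";

            static boolean fitsGsm7(String message) {
                for (int i = 0; i < message.length(); i++) {
                    if (GSM_BASIC.indexOf(message.charAt(i)) < 0) return false;
                }
                return true;
            }

            public static void main(String[] args) {
                // A message that fits GSM-7 travels as 7-bit text (160 chars/segment);
                // one Japanese character forces the whole message to UCS-2 (70 chars/segment).
                System.out.println(fitsGsm7("Hello!"));    // true
                System.out.println(fitsGsm7("こんにちは")); // false -> needs UCS-2
            }
        }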

  • [SOLVED!] On USB drives, problems with non-English chars and HAL

    Hello,
    I am having a problem with non-English characters (áãàçéẽê...) in files stored on my USB drive.
    On Windows they're created with the correct names. But on Linux the files have the non-English characters replaced by '?' and are not accessible.
    If I manually mount the drives using 'mount -o iocharset=utf8 /dev/sdb1 /media/usbdisk' the characters are OK, so I think I just need to get HAL to pass the correct parameters to mount. However, I don't know how to do that, and haven't found any good solution.
    I tried building a custom kernel with the default charset set to UTF-8, and it didn't work.
    Any ideas? I'm using x86-64, HAL 0.5.13-3, and my locale is pt_BR.UTF-8.
    Thanks!
    EDIT: Actually, this is not a HAL problem, but a problem with 'exo'. For the solution, I edited /etc/xdg/xfce4/mount.rc and added iocharset=utf8 to the [vfat] category.
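
    For reference, the resulting section of /etc/xdg/xfce4/mount.rc would look roughly like this (ini-style layout assumed; any other existing [vfat] options stay alongside):

        [vfat]
        iocharset=utf8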

    I don't use Thunar presently, but I looked in the Thunar Volume Manager docs and I didn't find anything to change the mount options of removable drives. I am not quite sure whether it's possible or not. Maybe someone using it can tell for sure.
    But if it is not possible to change the mount options, a possible solution is to disable the Thunar Volume Manager plugin and use something else more configurable to manage the automount function.
    Personally I use the halevt package from the AUR, which uses configuration files in XML format.
    It's not so easy to use, but it is highly configurable.
    But other tools exist as well.
    I can help you with halevt if you choose that way...

  • Logon with non-English language: LSMW does not display characters

    Hi, when I log on in a non-English language and use LSMW, the screen does not display the characters.
    In SMLT I have set the supplementation language to English.
    Who can help me? Thanks.

    Hi Benson,
    Can you please elaborate on the issue? Which characters are missing?
    Regards.
    Ruchit.

Maybe you are looking for

  • App Error 606

    Hello, my blackberry is giving me "App Error 606" with a white screen and "Reset" I reset the phone but nothing happened. I couldn't connect the phone with the Blackberry Desktop software ! How can I fix the problem without losing my data. note: I di

  • Help...Where Do I Put My Mail Backup File?

    I backed up my entire MAIL folder, as the HELP file instructed off my old MacPro using 10.4.11 Today I got my new Leopard equipped Mac Pro, and am trying to figure out where I put this MAIL folder that has all my old info and mail etc in it, on the n

  • [SOLVED] GDM/Gnome Issues after Updates

    hi everyone! ive run into some pesky problems (possibly nvidia related) after a full update this morning. after booting up, gdm tries to run, but it crashes and retries over and over again with this message (shown in systemctl status gdm while runnin

  • Infoset Query Restriction

    Is there a way to restrict master data within an infoset query? I only want master data for a certain sales organization in the infoset query because this is joined to a cube. The cube does not contain sales org. If I could only have the infoset quer

  • Is there any way to recover a rar archive?

    Multipart archive downloaded at merge files into one file, the contents appear as "# * &%" in Notepad.