Support issue for non-English characters (in HTML forms)

Hi group!
I just want to post an issue here and see if anyone else has the same problem. First off, I'm running Windows XP MCE, but the French version (not the English version). This may help pin down where the problem really is.
Second, I know a bit of HTML, and I'm referring to HTML character entities in this thread; there's a fairly complete list here for reference: http://www.faqs.org/docs/htmltut/characterentitiesfamsupp69.html
I noticed that some, though not all, non-English characters typed in a textarea (which is, basically, a multi-line input box) don't reach the server correctly, or at all, when the form is submitted from Safari. Most of the time, the content of the textarea is truncated at the first accented character.
The most common French accented characters (é, à) are usually interpreted correctly by Safari (though once in a while they trigger the bug too), but ô and î don't fare as well.
Oddly, this bug doesn't happen every time and doesn't fail in the same manner each time.
So I started this thread just to see if anyone else is having issues with non-English characters, mostly in forms. Flash/Shockwave probably works, but I'm not sure; I haven't tested it yet.
Acer Aspire 5044   Windows XP   Turion 1.8GHz, 1Gb SDRam, ATI 200M xpress

Yes, it is a known issue. I also noticed that it sometimes works, but most of the time it does not. It will hopefully be solved in the future. According to http://www.apple.com/safari/download/, upcoming changes include:
# Support for International users
# International text input methods
# Advanced text (contextual forms, international scripts)
Sony Vaio   Windows XP  

Similar Messages

  • SetMnemonic for non-english characters

    Does anybody know how to set a JButton's mnemonic for non-English characters?
    My mnemonic is loaded from a resource bundle, and according to the documentation setMnemonic(char) is limited to English; it says the user should call setMnemonic(int) instead.
    So what value should this int contain in order to display the non-English character loaded from the resource bundle?
    Thanks in advance,
    Hanoch

    It seems that this is an issue that has popped up in various forums before, here's one example from last year:
    http://forum.java.sun.com/thread.jspa?forumID=16&threadID=490722
    This entry has some suggestions for handling mnemonics in resource bundles, and they would take care of translated mnemonics - as long as the translated values are restricted to the values contained in the VK_XXX keycodes.
    And since those values are basically the English (ASCII) character set plus a bunch of function keys, it doesn't solve the original problem - how to specify mnemonics that are not part of the English character set. The more I look at this, the less I understand the reason for making setMnemonic(char mnemonic) obsolete and making setMnemonic(int mnemonic) the default. If anything, this has made the method more difficult to use.
    I also don't understand the statement in the API about setMnemonic (char mnemonic):
    "This method is only designed to handle character values which fall between 'a' and 'z' or 'A' and 'Z'."
    If the type is "char", why would the character values be restricted to values between 'a' and 'z' or 'A' and 'Z'? I understand the need for the value to be restricted to one keystroke (eliminating the possibility of using ideographic characters), but why make it impossible to use all the Latin-1 and Latin-2 characters, for instance? (And is that in fact the case?) It is established practice on other platforms to be able to use accented characters (such as those in Latin-1) as mnemonics, for instance.
    And if changes were made, why not enable the simple way of specifying a mnemonic that other platforms have implemented, by adding an '&' in front of the character?
    Sorry if this disintegrated into a rant - didn't mean to... :-) I'm sure there must be good reasons for the changes, would love to understand them.
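    For what it's worth, a minimal sketch of one way to feed a bundle-loaded mnemonic through setMnemonic(int), assuming Java 7+ (for KeyEvent.getExtendedKeyCodeForChar) and a hypothetical bundle with "save.label" / "save.mnemonic" keys; whether a non-Latin mnemonic actually fires still depends on the platform and keyboard layout:
    import java.awt.event.KeyEvent;
    import java.util.ResourceBundle;
    import javax.swing.JButton;
    public class MnemonicSketch {
        public static JButton makeSaveButton() {
            ResourceBundle bundle = ResourceBundle.getBundle("labels"); // hypothetical bundle name
            JButton save = new JButton(bundle.getString("save.label"));
            char mnemonicChar = bundle.getString("save.mnemonic").charAt(0);
            // setMnemonic(int) expects a KeyEvent.VK_* keycode; for characters outside
            // A-Z, Java 7+ can derive an extended keycode from the character itself.
            int keyCode = KeyEvent.getExtendedKeyCodeForChar(mnemonicChar);
            if (keyCode != KeyEvent.VK_UNDEFINED) {
                save.setMnemonic(keyCode);
                // Underline the matching character in the label, if it occurs there.
                int idx = save.getText().toLowerCase().indexOf(Character.toLowerCase(mnemonicChar));
                if (idx >= 0) {
                    save.setDisplayedMnemonicIndex(idx);
                }
            }
            return save;
        }
    }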

  • Word Replacements for Non- English Characters

    Hi
    Does anyone have an idea on implementing word replacements for non-English characters in TCA-DQM 11i?
    We are trying to identify, capture and cleanse common accented characters like à, â, ê.
    However, the default language for replacement is American English, so even if we add these to the existing lists it has no effect.
    Is creating a new word replacement list for every language the solution? Any patch recommendations?
    Thanks in advance

  • PDF generation for Non English Characters from ADF

    Hi
    We are using the piece of code below to generate a PDF from an ADF managed bean. It works fine. However, for non-English characters (e.g. Japanese, Vietnamese, Arabic) it puts '???' in the output.
    I found a few blogs:
    https://blogs.oracle.com/BIDeveloper/entry/non-english_characters_appears
    However, we are not using the BI Publisher product; we are using its APIs.
    Can anyone tell me where we need to set up fonts: within ADF, WebLogic, or on the server?
    Input parameters are:
    a) XML data
    b) InputStream, i.e. the RTF template
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import oracle.apps.xdo.XDOException;
    import oracle.apps.xdo.template.FOProcessor;
    import oracle.apps.xdo.template.RTFProcessor;
        public static byte[] genPdfRep(String pOutFileType, byte[] pXmlOut, InputStream pTemplate) {
            byte[] dataBytes = null;
            try {
                // Process the RTF template to convert it to XSL-FO format
                RTFProcessor rtfp = new RTFProcessor(pTemplate);
                ByteArrayOutputStream xslOutStream = new ByteArrayOutputStream();
                rtfp.setOutput(xslOutStream);
                rtfp.process();
                // Use the XSL template and the data from the VO to generate the report,
                // then return the bytes of the generated output
                ByteArrayInputStream xslInStream = new ByteArrayInputStream(xslOutStream.toByteArray());
                FOProcessor processor = new FOProcessor();
                ByteArrayInputStream dataStream = new ByteArrayInputStream(pXmlOut);
                processor.setData(dataStream);
                processor.setTemplate(xslInStream);
                ByteArrayOutputStream pdfOutStream = new ByteArrayOutputStream();
                processor.setOutput(pdfOutStream);
                byte outFileTypeByte = FOProcessor.FORMAT_PDF;
                processor.setOutputFormat(outFileTypeByte); // FOProcessor.FORMAT_HTML for HTML output
                processor.generate();
                dataBytes = pdfOutStream.toByteArray();
            } catch (XDOException e) {
                e.printStackTrace();
            }
            return dataBytes;
        }
    Appreciate your help.
    Thanks,
    Abhijit

    Fonts are defined in the template you use to generate the PDF. Your application adds the data, and both are processed by the FOP processor. Now there are two possible causes of the '???':
    1. The data you send to the template already contains the '???'.
    2. The template can't digest the data (the special characters) and puts '???' in the PDF.
    Before going on, you have to find out which one is your problem. If the second is the problem, you'd better ask in an FOP forum, as you have to solve it by changing the template.
    Timo
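    For the first of the two checks above, a minimal diagnostic sketch, assuming Java 7+ and that the XML bytes were produced as UTF-8: dump the data as text before it reaches FOProcessor and see whether the non-English characters are already broken there.
    import java.nio.charset.StandardCharsets;
    public class InspectXmlData {
        public static void dump(byte[] pXmlOut) {
            // If the Japanese/Vietnamese/Arabic characters already show up as '?'
            // here, the problem is upstream of the template and FO processing.
            String xml = new String(pXmlOut, StandardCharsets.UTF_8);
            System.out.println(xml.substring(0, Math.min(xml.length(), 500)));
        }
    }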

  • Prevent Non-English Characters on JSP forms

    I was hoping to get programming tips/ideas to prevent users from entering non-English text on web forms.
    Any feedback would be greatly appreciated. Thanks.

    I have a jsp page something like:
    <tr>
    <td colspan=2> </td>
    <td colspan=2>
    <textarea name="title" cols="<%=cols%>" rows="3" wrap><%= form.getTitle()%></textarea>
    </td>
    </tr>
    When the user submits the page, I do the form validation in the Java form handler. I was hoping that I could somehow compare the ASCII codes of the characters to ensure the user is entering only English characters.
    The following is the code I have written in the form handler:
    for (int i = 0; i < title.length(); i++) {
         char c = title.charAt(i);
         System.out.println("c = " + c + ", ascii = " + (int)'c');
         if (int(c) > 127) {
              setErrorMessage(ID.QUESTION.TITLE, "Non-English characters are not allowed. Please enter the required information only in English.");
         }
    }
    But for some reason that I am not able to debug, no matter what character I enter, English or non-English, its ASCII equivalent, i.e. the int(c) value getting printed out, is always 99. Moreover, even if I enter a non-English character, the System.out prints its English equivalent... if that makes any sense...
    I hope I was able to explain my problem...Any help/feedback would be greatly appreciated.
    Thanks.
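    The always-99 symptom follows from (int)'c' in the println, which casts the literal character 'c' (ASCII 99) rather than the variable c; int(c) is also not valid Java. A hedged sketch of the corrected loop (a fragment of the same form handler, so setErrorMessage and ID.QUESTION.TITLE are the poster's own symbols); it assumes the request parameters were decoded with the right encoding in the first place, e.g. request.setCharacterEncoding("UTF-8") before reading them:
    for (int i = 0; i < title.length(); i++) {
        char c = title.charAt(i);
        // Cast the variable c, not the literal 'c' (whose code is always 99).
        System.out.println("c = " + c + ", code = " + (int) c);
        if (c > 127) {
            setErrorMessage(ID.QUESTION.TITLE, "Non-English characters are not allowed. Please enter the required information only in English.");
            break;
        }
    }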

  • Support for non-English characters in Safari?

    When I browse Japanese and Chinese websites, Safari only shows blank lines and empty blocks; it seems Safari doesn't support non-English encodings or UTF-8?
      Windows XP Pro  

    It (usually) works if you use an English version of Windows. Not if you use a Chinese or Japanese version.

  • UTL_RAW.REVERSE for non english characters

    I'm trying to reverse a non-English word like the following and it does not work...
    SELECT '中国' Original, UTL_RAW.cast_to_varchar2(UTL_RAW.REVERSE (UTL_RAW.cast_to_raw ('中国'))) Not_correctly_Reversed FROM DUAL;
    ORIGINAL NOT_CORRECTLY_REVERSED
    中国 ���
    Any thoughts please ?
    Appreciate responses. Thanks !

    chris227 wrote: "Works well for me"
    No, it does not. It will fail if the table has duplicate strings of two-digit length but, what is even worse, it will produce wrong results if the table has small duplicate strings:
    SQL> with testdata as (
      2                    select  '19 character string' str from dual union all
      3                    select  '19 character string' str from dual
      4                   )
      5  select distinct
      6    listagg(substr(str,level,1))within group ( order by level desc) over (partition by str) r
      7  from testdata
      8  connect by
      9  level <= length(str)
    10  and str = prior str
    11  and prior sys_guid() is not null
    12  /
    from testdata
    ERROR at line 7:
    ORA-01489: result of string concatenation is too long
    Elapsed: 00:00:36.08
    SQL> with testdata as (
      2                    select  'ABC' str from dual union all
      3                    select  'ABC' str from dual
      4                   )
      5  select distinct
      6    listagg(substr(str,level,1))within group ( order by level desc) over (partition by str) r
      7  from testdata
      8  connect by
      9  level <= length(str)
    10  and str = prior str
    11  and prior sys_guid() is not null
    12  /
    R
    CCCCCCCCBBBBAA
    Elapsed: 00:00:00.00
    You need to identify rows uniquely. With a real table it is easy - rowid. With subquery factoring we need another view with, for example, ROW_NUMBER. However, here we use subquery factoring to create a sample table on the fly and assume the OP will have a real table. So I'd leave your example as is, but would let the OP know to use:
    select distinct
      listagg(substr(str,level,1))within group ( order by level desc) over (partition by str) r
    from testdata
    connect by
    level <= length(str)
    and rowid = prior rowid
    and prior sys_guid() is not null
    /
    But this still will not work. Why? Same answer - duplicates:
    SQL> create table testdata as (
      2      select  'ABC' str from dual union all
      3      select  'ABC' str from dual
      4     );
    Table created.
    Elapsed: 00:00:00.42
    SQL> select distinct
      2    listagg(substr(str,level,1))within group ( order by level desc) over (partition by str) r
      3  from testdata
      4  connect by
      5  level <= length(str)
      6  and rowid = prior rowid
      7  and prior sys_guid() is not null
      8  /
    R
    CCBBAA
    Elapsed: 00:00:00.01
    SQL>
    Again, partition by str doesn't identify rows uniquely. We need to partition by rowid. But even this will not help:
    SQL> select distinct
      2    listagg(substr(str,level,1))within group ( order by level desc) over (partition by rowid) r
      3  from testdata
      4  connect by
      5  level <= length(str)
      6  and rowid = prior rowid
      7  and prior sys_guid() is not null
      8  /
    R
    CBA
    Elapsed: 00:00:00.00
    SQL>
    We got one row back instead of two. You probably put DISTINCT there trying to resolve all these issues caused by building the hierarchy and partitions on a non-unique basis. So now, when we identify rows uniquely by rowid, DISTINCT is not needed and should be replaced by GROUP BY (along with using aggregate LISTAGG instead of analytic LISTAGG). So the final solution would be:
    select listagg(substr(str,level,1))within group ( order by level desc) r
    from testdata
    connect by
    level <= length(str)
    and rowid = prior rowid
    and prior sys_guid() is not null
    group by rowid
    R
    CBA
    CBA
    Elapsed: 00:00:00.00
    SQL>
    And with a 19 character string:
    SQL> insert
      2    into testdata
      3  select  '19 character string' str from dual union all
      4                    select  '19 character string' str from dual;
    2 rows created.
    Elapsed: 00:00:00.00
    SQL> select listagg(substr(str,level,1))within group ( order by level desc) r
      2  from testdata
      3  connect by
      4  level <= length(str)
      5  and rowid = prior rowid
      6  and prior sys_guid() is not null
      7  group by rowid
      8  /
    R
    CBA
    CBA
    gnirts retcarahc 91
    gnirts retcarahc 91
    Elapsed: 00:00:00.00
    SQL>
    SY.

  • Question marks in PDF for non-english characters.

    I'm getting a report from APEX 3.0.1 (Default Report Layout) with BI Publisher 10.1.3.3.1 Base.
    In Adobe Reader 7.0.8, I see question marks instead of non-English (Cyrillic) characters.
    How do I tune BI Publisher?

    After installing BI Publisher 10.1.3.3.1 Base (standalone, OC4J):
    Directory of F:\bip\jdk\lib\fonts
    13/10/2007 21:16 15 196 128R00.TTF
    13/10/2007 21:16 18 473 348 ALBANWTJ.ttf
    13/10/2007 21:16 18 777 132 ALBANWTK.ttf
    13/10/2007 21:16 18 676 084 ALBANWTS.ttf
    13/10/2007 21:16 18 788 600 ALBANWTT.ttf
    13/10/2007 21:16 276 384 ALBANYWT.ttf
    13/10/2007 21:16 12 860 B39R00.TTF
    13/10/2007 21:16 18 800 MICR____.TTF
    13/10/2007 21:16 6 580 UPCR00.TTF
    Directory of F:\bip\jdk\jre\lib\fonts
    01/08/2006 19:25 75 144 LucidaBrightDemiBold.ttf
    01/08/2006 19:25 75 124 LucidaBrightDemiItalic.ttf
    01/08/2006 19:25 80 856 LucidaBrightItalic.ttf
    01/08/2006 19:25 344 908 LucidaBrightRegular.ttf
    01/08/2006 19:25 317 896 LucidaSansDemiBold.ttf
    01/08/2006 19:25 698 236 LucidaSansRegular.ttf
    01/08/2006 19:25 234 068 LucidaTypewriterBold.ttf
    01/08/2006 19:25 242 700 LucidaTypewriterRegular.ttf
    Directory of F:\bip\jre\1.4.2\lib\fonts
    24/03/2004 19:12 75 144 LucidaBrightDemiBold.ttf
    24/03/2004 19:12 75 124 LucidaBrightDemiItalic.ttf
    24/03/2004 19:12 80 856 LucidaBrightItalic.ttf
    24/03/2004 19:12 344 908 LucidaBrightRegular.ttf
    24/03/2004 19:12 317 896 LucidaSansDemiBold.ttf
    24/03/2004 19:12 698 236 LucidaSansRegular.ttf
    24/03/2004 19:12 234 068 LucidaTypewriterBold.ttf
    24/03/2004 19:12 242 700 LucidaTypewriterRegular.ttf
    What is wrong?
    In Adobe Reader's Document Properties -> Fonts
    +Helvetica:
    Type: Type1
    Encoding: Ansi
    Actual Font: ArialMT
    Actual Font Type: TrueType
    I feel BIP uses the wrong encoding...

  • Problem in converting Spool to PDF file, having non-English characters

    Hi All,
    I have a problem converting a spool to PDF format.
    Scenario: I have a spool which has non-English characters. I am using the CONVERT_ABAPSPOOLJOB_2_PDF FM to perform the conversion, but my output has junk values (i.e. #) for non-English characters. Any pointers to solve this issue will be appreciated.
    I even tried report RSTXPDFT4; it also gives me the same junk characters.
    Regards,
    Navin.

  • Non-English characters in URL for rwservlet

    I'm having a problem when I try to use non-English characters in a URL request to generate a report.
    This works fine:
    http://...rwservlet?report=r1.jsp&m1=Fred
    But if I try Fréd (e with an acute accent) the report does not return any data, even though the SQL by itself would find data.
    I tried UTF-8 encoding
    http://...rwservlet?report=r1.jsp&m1=Fr%C3%A9d
    8859-1 encoding
    http://...rwservlet?report=r1.jsp&m1=Fr%E9d
    Or just spell it out (not sure what that gets encoded as):
    http://...rwservlet?report=r1.jsp&m1=Fréd
    But nothing works. Any ideas?
    Thanks, Andreas

    Suggestions
    1) Try with NLS_LANG as
    SWEDISH_SWEDEN.WE8DEC
    2) Make a paramform and enter via paramform (unencoded)
    (This is just for testing purpose)
    3) Change machine locale to swedish and try
    4) Which reports version is this ?
    Please see
    BUG 2713695 - NLS CHARACTERS FOR PARAMETERS CHANGE TO QUESTION MARKS WHEN PASSED ON URL BAR
    Get in touch with Support to see if this is the issue and if "yes" get a one-off patch.
    [    All Docs for all versions    ]
    http://otn.oracle.com/documentation/reports.html
    [     Publishing reports to web  - 10G  ]
    http://download.oracle.com/docs/html/B10314_01/toc.htm (html)
    http://download.oracle.com/docs/pdf/B10314_01.pdf (pdf)
    [   Building reports  - 10G ]
    http://download.oracle.com/docs/pdf/B10602_01.pdf (pdf)
    http://download.oracle.com/docs/html/B10602_01/toc.htm (html)
    [   Forms Reports Integration whitepaper  9i ]
    http://otn.oracle.com/products/forms/pdf/frm9isrw9i.pdf
    ---------------------------------------------------------------------------------

  • Can't CF use non English characters in URLs ???? (critical for SEO)

    Hi all,
    I want to use non-English characters (Greek characters) for the folders in URLs.
    eg   http://www.mysite.com/Φάκελος/index.cfm
    where "Φάκελος" is a non English word (Greek).
    When the called page is simple HTML  eg   www.mysite.com/Φάκελος/index.HTM
    it's displayed just fine.
    When the called page is CF page  eg   www.mysite.com/Φάκελος/index.CFM
    I get a "FILE NOT FOUND" error.
    In the page where the link exists everything is UTF-8.
    What's the problem? Can't CF use non-English characters in URLs?
    It's critical for SEO issues.
    I use CF9. Any ideas?
    Thanks in advance.
    Anastassios

    I don't have this setting in the email application. But as far as I know, HTML with Exchange works only with the 2007 version; my server is still 2003, so I think in my case it's plain text only.
    But I'll say it again: the good old E60 with MfE (which I'm now starting to miss) worked very well!

  • URLEneQuery encoding is failing for some non english characters

    While creating a URLEneQuery we are getting the error com.endeca.navigation.InternalException: No support for 8-bit urls.
    This error happens when the query string has some non-English characters (e.g. Á).

    UrlENEQuery is designed around processing URL data, and URLs are not permitted to contain non-ASCII characters. To represent non-ASCII characters they must be %-encoded in the URL according to their byte representation in a particular character encoding, and you should prefer UTF-8 for URLs. So your LATIN CAPITAL LETTER A WITH ACUTE (U+00C1) should appear as %C3%81 in your URL; UrlENEQuery should then be able to process that character.
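    A minimal sketch of producing that %-encoding in Java with URLEncoder, with a made-up parameter name and value:
    import java.net.URLEncoder;
    public class EncodeParam {
        public static void main(String[] args) throws Exception {
            String value = "\u00C1lvaro";                       // "Álvaro", starts with U+00C1
            String encoded = URLEncoder.encode(value, "UTF-8"); // yields "%C3%81lvaro"
            System.out.println("http://host/path?q=" + encoded);
        }
    }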

  • Non-English characters not displaying correctly - Serious Issue

    My corporate email is on a Lotus Domino server with Lotus Traveler installed.
    I have set my PlayBook (with OS 2) up to synchronize with the corporate email through ActiveSync (see http://alturl.com/qh3nn), which works perfectly.
    I have, however, noticed that in some emails special non-English characters are displayed correctly, but in others they are displayed as a black diamond with a question mark inside.
    This is of course a serious issue, as most non-English-speaking countries use special characters.
    To understand this problem, how can I analyse the emails and see what character set is being used?
    And of course better; has someone solved this?

    I am having the same problem. Is there any update available?
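    As for how to see which character set an email uses: a hedged sketch with JavaMail, assuming the message can be exported as a raw .eml/MIME file (the file name is made up):
    import java.io.FileInputStream;
    import java.util.Properties;
    import javax.mail.Session;
    import javax.mail.internet.MimeMessage;
    public class CharsetCheck {
        public static void main(String[] args) throws Exception {
            Session session = Session.getDefaultInstance(new Properties());
            try (FileInputStream in = new FileInputStream("message.eml")) {
                MimeMessage msg = new MimeMessage(session, in);
                // Typically something like "text/plain; charset=ISO-8859-1" or "text/html; charset=UTF-8"
                System.out.println(msg.getContentType());
                System.out.println(msg.getHeader("Content-Transfer-Encoding", null));
            }
        }
    }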

  • Non English characters conversion issue in LSMW BAPI Inbound IDOCs

    Hi Experts,
    We have some fields in the customer master LSMW data load program which can contain non-English characters. We are facing issues with non-English character conversion in the LSMW BAPI method. The LSMW read and conversion steps show the non-English characters properly, without any issue. But while creating the inbound IDocs, most of the non-English characters are replaced with '#', and this causes issues in creating the customer master data in the system. In our scenario the customer data has non-English characters in the first name, last name and address details. Does any specific setting need to be done on our side? Please suggest how to resolve this issue.
    Thanks
    Rajesh Yadla

    If your language is Unicode, then you need to change the options: in SAP you need to change it to Unicode on the initial screen via Customize Local Layout (Alt F12) --> Options --> I18N --> Encoding ....

  • Non English characters in BIP email

    Hi, my report contains Japanese characters. When I view the output in HTML format, it is displayed properly. But when I click on the send button, enter email parameters like to, cc, bcc, subject, etc. and send it, the Japanese characters are not displayed properly in the mail I receive. The same problem occurs for Spanish and Portuguese text, and in general for all non-English characters. I am using Oracle Business Intelligence Publisher Release 10.1.3.4. If someone has faced a similar issue, kindly help. Thanks in advance.
