JSPs, UTF-8 & multibyte characters

          In our project we have a situation where we must output some multibyte characters
          to a JSP page. The data is retrieved from an Oracle database using BEA ELink and
          XML (don't ask why). The XML-data is UTF-8 encoded, and the data seems to be ok
          down to the JSP level, because I can output it to a file and it's properly UTF-8
          encoded.
          But when I try to write the data to the final reply (using <%=dataObject.getData()%>),
          the results definitely are not UTF-8 encoded. On the client browser they show
          up as garbage, occupying more than twice the actual length of the data. The response
          headers and META-tags are all set to UTF-8 encoding, and the browser is set to
          use UTF-8.
          The funny part is that the string seems to be encoded twice, or something similar,
          as shown by the following example.
          This is the correct UTF-8 byte sequence for the first two characters (they are
          just generated data for debugging purposes):
          C3 89 C3 A5
          Which translates to Unicode characters 00C9 and 00E5.
          But on the final page that is sent to the client this sequence has been changed
          to:
          C3 83 E2 80 B0 C3 83 C2 A5
          Which just doesn't make sense since it shows up as five different garbage characters.
          Does anyone have any ideas what is causing the problem and any suggestions? What
          are those extra characters in the final encoding?
          .Pete.
          

It sounds like the Object.toString is coming back already encoded in UTF8,
          and thus the JSP writer encodes that UTF8 using UTF8 again, which is what
          you see. Try making the String value be:
          > ... characters 00C9 and 00E5.
          ... instead of:
          > C3 89 C3 A5
          Then it will be encoded correctly.
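          To see the effect concretely, here is a minimal, self-contained Java sketch of what such double encoding looks like (modern charset APIs are used purely for illustration; the windows-1252 decode step is an inference from the exact byte values in the post, not something stated by either poster):
          import java.nio.charset.Charset;
          import java.nio.charset.StandardCharsets;

          public class DoubleEncodingDemo {
              public static void main(String[] args) {
                  String original = "\u00C9\u00E5";  // the two debug characters, 00C9 and 00E5

                  // Correct single encoding: C3 89 C3 A5
                  byte[] encodedOnce = original.getBytes(StandardCharsets.UTF_8);

                  // The bug: those UTF-8 bytes end up inside a String as if they were
                  // characters (decoding them as windows-1252 reproduces the post's bytes)...
                  String misread = new String(encodedOnce, Charset.forName("windows-1252"));

                  // ...and the JSP writer then UTF-8-encodes that String a second time,
                  // giving C3 83 E2 80 B0 C3 83 C2 A5 -- the "garbage" sequence.
                  byte[] encodedTwice = misread.getBytes(StandardCharsets.UTF_8);

                  // The fix: decode the incoming UTF-8 bytes exactly once, so the String
                  // holds the characters 00C9 and 00E5 and the page encodes them only once.
                  String fixed = new String(encodedOnce, StandardCharsets.UTF_8);
                  System.out.println(fixed.equals(original));  // true
              }
          }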
          Peace,
          Cameron Purdy
          Tangosol Inc.
          << Tangosol Server: How Weblogic applications are customized >>
          << Download now from http://www.tangosol.com/download.jsp >>
          "Petteri Räisänen" <[email protected]> wrote in message
          news:[email protected]...
          >
          > In our project we have a situation where we must output some multibyte
          characters
          > to a JSP page. The data is retrieved from an Oracle database using BEA
          ELink and
          > XML (don't ask why). The XML-data is UTF-8 encoded, and the data seems to
          be ok
          > down to the JSP level, because I can output it to a file and it's properly
          UTF-8
          > encoded.
          >
          > But when I try to write the data to the final reply (using
          <%=dataObject.getData()%>
          > the results definitely are not UTF-8 encoded. On the client browser they
          show
          > up as garbage, occupying more than twice the actual length of the data.
          The response
          > headers and META-tags are all set to UTF-8 encoding, and the browser is
          set to
          > use UTF-8.
          >
          > The funny part is, that the string seems to be encoded twice or something
          similar
          > as is shown by the next example:
          >
          > This is the correct UTF-8 byte sequence for the first twice characters
          (they are
          > just generated data for debugging purposes):
          >
          > C3 89 C3 A5
          >
          > Which translates to Unicode characters 00C9 and 00E5.
          >
          > But on the final page that is sent to the client this sequence has been
          changed
          > to:
          >
          > C3 83 E2 80 B0 C3 83 C2 A5
          >
          > Which just doesn't make sense since it shows up as five different garbage
          characters.
          >
          >
          > Does anyone have any ideas what is causing the problem and any
          suggestions? What
          > are those extra characters in the final encoding?
          >
          > Pete.
          

Similar Messages

  • IMPDP SQLFILE : multibyte characters in constraint_name leads to ORA-00972

    Hi,
    I'm actually dealing with a constraint_name made of multibyte characters (for example: constraint_name='VALIDA_CONFIRMAÇÃO_PREÇO13').
    Of course this Bad Idea® is inherited (I'm against all the fancy stuff like éàù in filenames and/or directories on my filesystem....)
    The scenario is as follows :
    0 - I'm supposed to do a "remap_schema". Everything in the schema SCOTT should now be in a schema NEW_SCOTT.
    1 - The scott schema is exported via datapump
    2 - I do an impdp with SQLFILE in order to get all the DDL (table, packages, synonyms, etc...)
    3 - I do some sed on the generated sqlfile to change every occurrence of SCOTT to NEW_SCOTT (this part is OK)
    4 - Once the modified sqlfile is executed, I do an impdp with DATA_ONLY.
    (The scenario was imagined from this thread : {message:id=10628419} )
    I'm getting some ORA-00972: identifier is too long at step 4 when executing the sqlfile.
    I see that some DDL for constraint creation in the file (generated at step 2) is written as follows:
    ALTER TABLE "TW_PRI"."B_TRANSC" ADD CONSTRAINT "VALIDA_CONFIRMAÃÃO_PREÃO14" CHECK ...
    Obviously, the original name of the constraint with cedilla and tilde gets translated to something else which is longer than 30 chars/bytes...
    As the original name is from Brazil, I also tried to add an export LANG=pt_BR.UTF-8 in my script before running the impdp for the sqlfile. This didn't change anything. (The original $LANG is en_US.UTF-8.)
    In order to create a testcase for this thread, I tried to reproduce on my sandbox database... but, there, I don't have the issue. :-(
    The real system is a 4-node database on Exadata (11.2.0.3) with NLS_CHARACTERSET=AL32UTF8.
    My sandbox database is a (non-RAC) 11.2.0.1 on RHEL4, also AL32UTF8.
    The constraint_name is the same on both systems: I checked byte by byte using DUMP() on the constraint_name.
    Feel free to shed any light and/or ask for clarification if needed.
    Thanks in advance to those who'll take the time to read all this.
    I decided to include my testcase from my sandbox database, even if it does NOT reproduce the issue (maybe I'm missing something obvious...)
    I use the following files.
    - createTable.sql:
    $ cat createTable.sql
    drop table test purge;
    create table test
    (id integer,
    val varchar2(30));
    alter table test add constraint VALIDA_CONFIRMAÇÃO_PREÇO13 check (id<=10000000000);
    select constraint_name, lengthb(constraint_name) lb, lengthc(constraint_name) lc, dump(constraint_name) dmp
    from user_constraints where table_name='TEST';
    - expdpTest.sh:
    $ cat expdpTest.sh
    expdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp tables=test
    - impdpTest.sh:
    $ cat impdpTest.sh
    impdp scott/tiger directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
    This is the run:
    [oracle@Nicosa-oel test_nonAsciiColName]$ sqlplus scott/tiger
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 12 18:58:27 2013
    Copyright (c) 1982, 2009, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> @createTable
    Table dropped.
    Table created.
    Table altered.
    CONSTRAINT_NAME                  LB       LC
    DMP
    VALIDA_CONFIRMAÇÃO_PREÇO13             29         26
    Typ=1 Len=29: 86,65,76,73,68,65,95,67,79,78,70,73,82,77,65,195,135,195,131,79,95,80,82,69,195,135,79,49,51
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./expdpTest.sh
    Export: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:12 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp tables=test
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    . . exported "SCOTT"."TEST"                                  0 KB       0 rows
    Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
      /home/oracle/scott_dir/testNonAscii.dmp
    Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 19:00:22
    [oracle@Nicosa-oel test_nonAsciiColName]$ ./impdpTest.sh
    Import: Release 11.2.0.1.0 - Production on Tue Feb 12 19:00:26 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully loaded/unloaded
    Starting "SCOTT"."SYS_SQL_FILE_TABLE_01":  scott/******** directory=scottdir dumpfile=testNonAscii.dmp sqlfile=scottdir:test.sqlfile.sql tables=test
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Job "SCOTT"."SYS_SQL_FILE_TABLE_01" successfully completed at 19:00:32
    [oracle@Nicosa-oel test_nonAsciiColName]$ cat scott_dir/test.sqlfile.sql
    -- CONNECT SCOTT
    ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
    -- new object type path: TABLE_EXPORT/TABLE/TABLE
    CREATE TABLE "SCOTT"."TEST"
       (     "ID" NUMBER(*,0),
         "VAL" VARCHAR2(30 BYTE)
       ) SEGMENT CREATION DEFERRED
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS FOR OLTP LOGGING
      TABLESPACE "MYTBSCOMP" ;
    -- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ALTER TABLE "SCOTT"."TEST" ADD CONSTRAINT "VALIDA_CONFIRMAÇÃO_PREÇO13" CHECK (id<=10000000000) ENABLE;I was expecting to have the cedilla and tilde characters displayed incorrectly....

    Srini Chavali wrote:
    If I understand you correctly, you are unable to reproduce the issue in the test instance, while it occurs in the production instance. Is the "schema move" being done on the same database - i.e. you are "moving" from SCOTT to NEW_SCOTT on the same database (test to test, and prod to prod)? Do you have to physically move/copy the dmp file?
    Hi Srini,
    On the real system, the schema move will be to and from different machines (but same DBversion).
    I'm not doing the real move for the moment, just trying to validate a way to do it, but I guess it's important to say that the dump being used for the moment comes from the same database (the long story being that, due to a column using an object datatype which caused errors in the remap, I had to reload the dump with the "schema rename", drop the object column, and recreate a dump file without the object datatype...).
    So Yes, the file will have to move, but in the current test, it doesn't.
    Srini Chavali wrote:
    Obviously something is different in production than test - can you post the output of this command from both databases ?
    SQL> select * from NLS_DATABASE_PARAMETERS;
    Yes Srini, something is obviously different: I'm starting to think that the difference might be on the Linux/shell side rather than in impdp, as Data Pump is supposed to be NLS_LANG/charset-proof (whereas traditional imp/exp was really sensitive on those points).
    The result on the Exadata where I have the issue:
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               AL32UTF8
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              11.2.0.3.0
    The result on my sandbox DB:
    PARAMETER                      VALUE
    NLS_LANGUAGE                   AMERICAN
    NLS_TERRITORY                  AMERICA
    NLS_CURRENCY                   $
    NLS_ISO_CURRENCY               AMERICA
    NLS_NUMERIC_CHARACTERS         .,
    NLS_CHARACTERSET               AL32UTF8
    NLS_CALENDAR                   GREGORIAN
    NLS_DATE_FORMAT                DD-MON-RR
    NLS_DATE_LANGUAGE              AMERICAN
    NLS_SORT                       BINARY
    NLS_TIME_FORMAT                HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY              $
    NLS_COMP                       BINARY
    NLS_LENGTH_SEMANTICS           BYTE
    NLS_NCHAR_CONV_EXCP            FALSE
    NLS_NCHAR_CHARACTERSET         AL16UTF16
    NLS_RDBMS_VERSION              11.2.0.1.0
    ------
    Richard Harrison .  wrote:
    Hi,
    Did you set NLS_LANG also when you did the import?
    Yes, that is one of the differences between the Exadata and my sandbox.
    My environment in the sandbox has NLS_LANG=AMERICAN_AMERICA.AL32UTF8, whereas the Exadata doesn't have the variable set.
    I tried to add it, but it didn't change anything.
    Richard Harrison .  wrote:
    Also not sure why you are doing the sed part? Do you have hard-coded schema references inside some of the PL/SQL?
    Yes, that is why I chose to sed. The (ugly) code has:
    - Procedures inside the same package that references one another with the schema prepended
    - Triggers with PL/SQL codes referencing tables with schema prepended
    - Dynamic SQL that "builds" queries with schema prepended
    - Object Type that does some %ROWTYPE on tables with schema prepended (that will be solved by dropping the column based on those types as they obviously are not needed...)
    - Data model with objects whose names use non-ASCII characters
    (In France we usually call this a "gas plant", to convey what a mess it is: pipes everywhere going who-knows-where...)
    The big picture is that this kind of "schema move & rename" should be as automatic as possible, as the project is to actually consolidate several existing databases on the Exadata :
    One schema for each country, hence the rename of the schemas to include country-code.
    I actually have a workaround already: rename the objects that have funky characters in their names before doing the export.
    But I was curious to understand why the SQLFILE messes up the constraint_name on one system when it doesn't on another...

  • Handling multibyte characters

    Hi ,
    I have created a procedure which sends e-mail using UTL_SMTP.
    The procedure has a part in which we add the attachments to e-mail.
    Now, the issue is that when I am adding an attachment which contains multibyte characters, these characters are replaced with '?'.
    Can anyone provide any guidance on this?

    First, you should not append 'charset="us-asci"' in this line:
      UTL_SMTP.WRITE_DATA(L_MAIL_CONN, 'Content-Type: ' || IN_ATT_MIME_TYPE ||'charset="us-ascii"'||'; name="' || IN_ATT_FILE_NAME || '"' || UTL_TCP.CRLF);
    The default IN_ATT_MIME_TYPE has this clause already, hence you would have a duplicate. Moreover, you add it without the required preceding semicolon. Further, in the Content-Type, you should pass the original character set of the file, not "us-ascii". This character set must support characters included in the file.
    Second, the NCLOB is not written correctly either. UTL_ENCODE.BASE64_ENCODE expects a RAW value. If you give it an NVARCHAR2 value returned by DBMS_LOB.SUBSTR, then PL/SQL will implicitly apply HEXTORAW to the value. HEXTORAW fails if the NCLOB content is not a valid sequence of hex digits. Treating the content of the NCLOB as a string of hex digits is obviously not your goal. You should use UTL_I18N.STRING_TO_RAW to convert the NVARCHAR2 from DBMS_LOB.SUBSTR to the desired target encoding (the one specified in Content-Type) and cast it to RAW at the same time. UTF-8 (i.e. AL32UTF8) is usually the best choice for the target encoding. You should then apply UTL_RAW.CAST_TO_VARCHAR2 to change the RAW representation of the base64-encoded value to the VARCHAR2 expected by UTL_SMTP.WRITE_DATA.
    Of course, passing DBMS_LOB.SUBSTR result directly to UTL_ENCODE.BASE64_ENCODE would make sense for a BLOB attachment. However, even then the encoded result should be passed to UTL_RAW.CAST_TO_VARCHAR2, not UTL_RAW.CAST_TO_RAW.
    Third, if you use UTF-8 as the Content-Type encoding, you may want to prepend three bytes (0xEF 0xBB 0xBF) to the NCLOB value before base64 encoding. This three-byte sequence is the UTF-8 Byte Order Mark. It helps some editors, such as Notepad, to recognize the file as encoded in UTF-8.
    Fourth, if the target encoding is UTF-8, l_step should be no more than 8191. This is to avoid intermediate values exceeding 32767 bytes.
    Fifth, the whole procedure will not work well on an EBCDIC platform. Contrary to what the documentation says, UTL_SMTP.WRITE_DATA does not seem to convert data to US7ASCII before sending (unless the package is ported separately by platform vendors). I guess this is not your worry but I thought I would mention it, just in case.
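    For readers on the Java side, a rough sketch of the same pipeline (text, then target-encoding bytes, then base64 with MIME line wrapping); this is only an analogy and does not use the UTL_* packages:
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class AttachmentEncodingSketch {
        public static void main(String[] args) {
            String content = "Grüße, olá";  // sample multibyte attachment text

            // 1. Convert the text to the encoding declared in Content-Type (UTF-8 here).
            byte[] utf8Bytes = content.getBytes(StandardCharsets.UTF_8);

            // 2. Base64-encode the raw bytes; the MIME encoder wraps output at 76
            //    characters with CRLF, which is what an SMTP message body expects.
            String base64Body = Base64.getMimeEncoder().encodeToString(utf8Bytes);

            // 3. This ASCII-only string is what would be written into the mail body,
            //    analogous to passing the encoded chunks to UTL_SMTP.WRITE_DATA.
            System.out.println(base64Body);
        }
    }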
    Thanks,
    Sergiusz

  • Multibyte characters not displaying on report

    Hi there,
    I am having a problem displaying multibyte characters on my report (Oracle Reports 6i). These characters are needed for barcode encoding, e.g. chr(203). But when I run my report they are missing.
    Also, when I run the following SQL in Oracle 11g (the version of the database the report is working against):
    select chr(203) from dual;
    I get the error:
    ORA-29275: partial multibyte character
    though it works fine on Oracle 8.
    Any help much appreciated.

    Everything depends on your NLS parameters. If I do this on Oracle 11g I get:
    select chr(203) from dual;
    C
    Ë
    For bar coding you should use a special bar code font, e.g.:
    http://www.idautomation.com/font-encoders/oracle-reports/

  • [SOLVED] Problems opening folders with UTF-8 encoded characters

    Hello everyone, I'm having an issue when I access folders in all my programs (except the Dolphin file manager). Every time I open the folder navigation window in my programs, folders with UTF-8 encoded characters (such as "ç", "á", "ó", "í", etc.) are not shown, or the folder names don't show these characters; therefore, I cannot open documents inside these folders.
    However, as you can see, I can type these characters normally. Here's my "locale.conf":
    LANG="en_US.UTF-8:ISO-8859-1"
    LC_TIME="pt_BR.UTF-8:ISO-8859-1"
    And here's the output of the command "locale -a" :
    C
    en_US.utf8
    POSIX

    Thing is, when I run locale -a, I get
    $ locale -a
    C
    de_DE@euro
    de_DE.iso885915@euro
    de_DE.utf8
    en_US
    en_US.iso88591
    en_US.utf8
    ja_JP
    ja_JP.eucjp
    ja_JP.ujis
    ja_JP.utf8
    japanese
    japanese.euc
    POSIX
    So an entry for every locale I have uncommented in my locale.conf. Just making sure, by "following the steps in the beginner's guide", you also mean running locale-gen?
    Are those folders on a linux filesystem like ext4 or on a windows (ntfs?)

  • Adobe AIR help; breadcrumb navigation doesn't work with multibyte characters?

    Hi there,
    I created an Adobe AIR application with RoboHelp 9 (using FrameMaker files,
    which are written in Japanese and English), only to find that the breadcrumb navigation at the top doesn't work.
    Is it a known limitation that breadcrumb navigation doesn't support multibyte characters? Or is there any workaround?
    Many thanks for your kind support in advance,

    See my reply to your other post. You can also test this in the new project and raise it with Adobe Support at the same time as the other problem.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Adobe AIR help; question regarding search criteria with multibyte characters

    Hi,
    I created an Adobe AIR application with RoboHelp 9 (using FM 10 files as source; the texts are written in Japanese and English),
    and happened to find that the search function in the AIR application doesn't catch keywords correctly.
    For example,
    1. If you type "文字" and "スタイル" with a single-byte space in the search window, results appear for both "文字" and "スタイル".
    2. If you type "文字" and "スタイル" with a double-byte space in the search window, nothing matches.
    3. If you type "文字スタイル" (as one word) in the search window, nothing matches.
    The same thing happens for "文字種" (literally "文字"+"種"; the meaning is almost the same).
    But if you type search words that are all in Katakana, the results seem to be fine.
    Is there any limitation on multibyte character support? Or is this behaviour a feature??
    If so, how can I make the AIR application "hit" the correct words?
    Thank you very much for your kind help in advance!

    On this one your best course of action is to contact Adobe Support. They will likely require your project and there is one thing I would suggest you do first. Create a new project with just a few topics to prove the problem exists there as well. If it does it will be a simpler upload and you will know the problem is repeatable.
    See www.grainge.org for RoboHelp and Authoring tips
    @petergrainge

  • Unable to call report from jsp - password contains special characters

    Hi
    I used the following URL to call my Oracle report from my JSP web page but got the error mentioned below. It seems that this error only occurs when I use a login id whose password contains special characters. How can I overcome this problem?
    Any help appreciated. Thx.
    Regards,
    Siti
    URL used: -
    "http://pc-325:8889/reports/rwservlet?server=pc-325&report=prodeff80120i&P_JDBCPDS="+vlogin1+"&destype=cache&desformat=pdf&paramform=no&p_type="+p_type;
    Error encountered: -
    REP-163: Invalid value for keyword DESTYPE.
    Valid options are FILE, PRINTER, MAIL, INTEROFFICE, or CACHE.
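    One possible cause, not raised in the replies below, is that unescaped characters such as '&' or '=' in the password split the query string, so the report server sees mangled keywords. A small Java sketch of URL-encoding the credential value before building the URL (the helper name and the sample values are made up):
    import java.net.URLEncoder;

    public class ReportUrlSketch {
        // Hypothetical helper: percent-encodes the credential string so characters
        // like '&', '=' or '@' in the password cannot be mistaken for separators.
        static String buildUrl(String login) throws Exception {
            String encodedLogin = URLEncoder.encode(login, "UTF-8");
            return "http://pc-325:8889/reports/rwservlet?server=pc-325"
                    + "&report=prodeff80120i"
                    + "&P_JDBCPDS=" + encodedLogin
                    + "&destype=cache&desformat=pdf&paramform=no";
        }

        public static void main(String[] args) throws Exception {
            // Sample login with special characters in the password (made up).
            System.out.println(buildUrl("scott/p@ss&word@mydb"));
        }
    }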

    Hi Stefan,
    Many of the customers are located in Hungary and they have created the userid using their keyboard. Hence for now I already have a userid with Hungarian characters in the SAP system.
    I would only request help on how to handle these characters in SAP Business Connector to call the RFC.
    Thanks,

  • CSV file encoded as UTF-8 loses characters when displayed with Excel 2010

    Hello everybody,
    I have adapted a customer report to be able to send certain data via mail as a CSV attachment.
    For that purpose I am using class cl_bcs.
    Everything goes fine, but since the mail attachment contains certain German characters such as Ü, those characters appear corrupted when the file is displayed with Excel.
    It seems the problem is with Excel, because when opening the same file with Notepad, the Ü is there. If I import the file into Excel with the import wizard, it is correct too.
    Anyway, is there any solution to this problem?
    I have tried concatenating byte_order_mark_utf8 at the beginning of the file, but Excel still does not recognize it.
    Thanks in advance,
    Pablo.

    - Does MS Excel actually support UTF-8?
    Yes. I believe we installed some international add-on which is not in the default installation. Anyway, other UTF-8 or UTF-16 files can be opened and viewed by Excel without any problem.
    - Have you verified that the file is viewable as a UTF-8 encoded file?
    I think so. If I open it in Notepad and choose "Save as", the file type shown is UTF-8.
    - Try opening the file in a program you are confident
    that it supports UTF-8 - e.g. Mozilla...
    I will try that.
    - Check that your UTF-8 encoded file has a UTF-8 identifier (0xFEFF?)
    as the first character.
    The Unicode-16 (LE or BE) files I got from the internet always have two bytes at the front (0xFEFF or 0xFFFE). My UTF-8 file generated by Java doesn't have that. But should a UTF-8 file also have this kind of special bytes at the front? If I manually add these bytes at the front of my file using UltraEdit and open it in Excel 2000, it doesn't help.
    - Try using another spreadsheet program that supports UTF-8.
    Do you know of any other spreadsheet program that supports CSV files and UTF-8?
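    Since the reply mentions a Java-generated file, here is a minimal Java sketch of writing the three UTF-8 BOM bytes (EF BB BF) before the CSV content; many Excel versions use this marker to detect UTF-8 (the file name and sample rows are made up):
    import java.io.FileOutputStream;
    import java.io.OutputStreamWriter;
    import java.io.Writer;
    import java.nio.charset.StandardCharsets;

    public class Utf8BomCsv {
        public static void main(String[] args) throws Exception {
            try (FileOutputStream out = new FileOutputStream("report.csv")) {
                // Write the UTF-8 byte order mark first; Excel uses it to recognize
                // the file as UTF-8 instead of the local ANSI code page.
                out.write(new byte[] {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF});
                try (Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8)) {
                    w.write("Name;Stadt\r\n");
                    w.write("Müller;Würzburg\r\n");
                }
            }
        }
    }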

  • Question marks in outgoing emails for multibyte characters

    I have an application that stores and displays Japanese characters correctly on the screen. But when I try to email them, the characters come out as ?????.
    I am on version 4.02.
    I am setting the following items at the top:
    MIME-Version: 1.0
    Content-Type: text/html; charset=utf-8
    The body is started with a <html> tag.
    Any ideas?

    After installation BI Publisher 10.1.3.3.1 Base (standalone, OC4J) :
    Directory of F:\bip\jdk\lib\fonts
    13/10/2007 21:16 15 196 128R00.TTF
    13/10/2007 21:16 18 473 348 ALBANWTJ.ttf
    13/10/2007 21:16 18 777 132 ALBANWTK.ttf
    13/10/2007 21:16 18 676 084 ALBANWTS.ttf
    13/10/2007 21:16 18 788 600 ALBANWTT.ttf
    13/10/2007 21:16 276 384 ALBANYWT.ttf
    13/10/2007 21:16 12 860 B39R00.TTF
    13/10/2007 21:16 18 800 MICR____.TTF
    13/10/2007 21:16 6 580 UPCR00.TTF
    Directory of F:\bip\jdk\jre\lib\fonts
    01/08/2006 19:25 75 144 LucidaBrightDemiBold.ttf
    01/08/2006 19:25 75 124 LucidaBrightDemiItalic.ttf
    01/08/2006 19:25 80 856 LucidaBrightItalic.ttf
    01/08/2006 19:25 344 908 LucidaBrightRegular.ttf
    01/08/2006 19:25 317 896 LucidaSansDemiBold.ttf
    01/08/2006 19:25 698 236 LucidaSansRegular.ttf
    01/08/2006 19:25 234 068 LucidaTypewriterBold.ttf
    01/08/2006 19:25 242 700 LucidaTypewriterRegular.ttf
    Directory of F:\bip\jre\1.4.2\lib\fonts
    24/03/2004 19:12 75 144 LucidaBrightDemiBold.ttf
    24/03/2004 19:12 75 124 LucidaBrightDemiItalic.ttf
    24/03/2004 19:12 80 856 LucidaBrightItalic.ttf
    24/03/2004 19:12 344 908 LucidaBrightRegular.ttf
    24/03/2004 19:12 317 896 LucidaSansDemiBold.ttf
    24/03/2004 19:12 698 236 LucidaSansRegular.ttf
    24/03/2004 19:12 234 068 LucidaTypewriterBold.ttf
    24/03/2004 19:12 242 700 LucidaTypewriterRegular.ttf
    What is wrong?
    In Adobe Reader's Document Properties -> Fonts
    +Helvetica:
    Type: Type1
    Encoding: Ansi
    Actual Font: ArialMT
    Actual Font Type: TrueType
    I feel BIP uses the wrong encoding...

  • CALL TRANSFORMATION - UTF-8 - Result characters é è replaced by #

    hi,
    I'm trying to import XML data into an internal table in an ABAP program (6.20).
    But after the CALL TRANSFORMATION, all the accented characters (é, è, à) in the internal table are replaced by the character #.
    OPEN DATASET MyFile FOR INPUT IN TEXT MODE ENCODING UTF-8
    IGNORING CONVERSION ERRORS.
    DO.
    READ DATASET MyFile INTO xmlfield.
    IF sy-subrc <> 0. EXIT. ENDIF.
    ENDDO.
    LOOP AT xmlfield.
    CONCATENATE xmlfile xmlfield-fiel INTO xmlfile.
    ENDLOOP.
    CALL TRANSFORMATION MyXSLT
    SOURCE XML xmlfile
    RESULT tab2 = TAB2.
    Can you please help me on this.

    Hi,
    Try using ENCODING Default.
    Asvhen

  • How to get utf-8 Chinese Characters from Oracle DB by EJB

    We have found that after disabling the JIT in WebLogic, the Chinese characters are displayed correctly; otherwise it doesn't work. Why does this happen?

    Thanks for all of your suggestions. It still refuses to work.
    I entered the following: ���^�E on the HTML form using the Chinese (PRC) keyboard on my Win2K box.
    I checked and verified the correct encoding in the servlet request (GB2312 for Chinese characters):
    request.getParameter(xxx) yields ???
    new String(request.getParameter(xxx).getBytes("GB2312")) yields three boxes (values 20309, 27946 and 23380)
    new String(request.getParameter(xxx).getBytes("GB2312"), "UTF-8") yields nothing
    Any ideas?
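    A minimal servlet sketch of the usual approach (assuming the form page itself is served with a declared encoding; the parameter name is made up): declare the request encoding before the first getParameter() call instead of re-decoding the returned String afterwards.
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ChineseFormServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Tell the container how the POSTed form data is encoded *before* the
            // first call to getParameter(); otherwise it falls back to ISO-8859-1.
            request.setCharacterEncoding("UTF-8");       // or "GB2312", matching the form page
            String value = request.getParameter("name"); // hypothetical field name

            // Declare the response encoding so the browser decodes the page correctly.
            response.setContentType("text/html; charset=UTF-8");
            response.getWriter().println(value);
        }
    }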

  • Entering MultiByte characters using OA Extension JDev forms

    Hello
    We have certain custom OA extension JDev forms running on Oracle Applications 11.5.10 / a 10g database. Recently we converted our DB to be UTF-8 compliant. While I can query Chinese characters through the custom JDev forms, when I try to enter Chinese characters using the custom forms, they are stored as junk. In the preferences I did set the Client Character Encoding to UTF-8, but that did not help. Any clues as to how to get this working would be appreciated.
    Thanks
    PHK

    When you say it stores junk, do you see the junk characters from the back end, or on the OA page on further retrieval of the same value?
    --Shiv

  • Multibyte characters are not printing correctly.

    Hi all,
    When I read a multibyte character from an InputStream I am getting a negative value. The code is something like this:
    int c = in.read();
    System.out.println((char) c);
    It is printing ? instead of �. I know the reason why it is printing the ? mark: when I read a multibyte character from the InputStream it returns a negative value, so when I cast it and try to print the negative value it prints ?. My question is why it returns a negative value when I read a multibyte character.
    Please help me.
    Thanks

    What encoding are you using?
    A multibyte character is composed of two or more bytes,
    and the first bit of each of those bytes is 1.
    As a result, you get a negative value when you cast the bytes of the multibyte character to int.
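    A minimal sketch of the usual fix, assuming the stream really contains UTF-8 text (substitute the actual encoding): wrap the byte stream in an InputStreamReader so read() returns whole decoded characters instead of raw bytes.
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.Reader;
    import java.nio.charset.StandardCharsets;

    public class MultibyteReadSketch {
        public static void main(String[] args) throws IOException {
            // Stand-in for the real stream: the UTF-8 bytes of "é" (0xC3 0xA9).
            InputStream in = new ByteArrayInputStream(new byte[] {(byte) 0xC3, (byte) 0xA9});

            // The Reader decodes byte sequences into chars with the given charset,
            // so a single read() returns the complete character, never a raw byte.
            try (Reader reader = new InputStreamReader(in, StandardCharsets.UTF_8)) {
                int c;
                while ((c = reader.read()) != -1) {
                    System.out.println((char) c);  // prints é
                }
            }
        }
    }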

  • Utf-8 filename characters in Motif applications

    Hello,
    1) My Xpdf/Xdvi apps work quite well, but I am unable to save files with non-Latin characters in their filenames.
    2) In addition, Xpdf fails to search for non-Latin characters/words in PDF files.
    Unfortunately I can't google an appropriate way to solve this. Any clue?
    Thanks.

    Hello
    It's a long shot, but if it/they rely on properly configured linux console settings, directly or not - this might help you.
    Good luck
