Japanese Characters in Oracle BI Publisher

We are using Oracle BI Publisher (10.1.3.4). We have a report in Japanese. The output of the report is HTML and is displayed properly. But when we send it as mail, using either the Schedule button or the Send button, the Japanese text is not displayed properly. It shows up like
å®šæœŸå•¥å ´æ¤œæŸ»ã¯ã€ä»Šå›žã®å•¥å >´æ¤œæŸ»ã®ç´„1年後ã«å®Ÿæ–½ã„ãŸã—ã¾ã™
If someone has dealt with this before, kindly help. The encoding used is UTF-8 (Unicode). It works fine for Spanish and Portuguese, but not for Japanese.
Edited by: 975903 on Dec 11, 2012 1:56 AM

see link
https://blogs.oracle.com/BIDeveloper/entry/non-english_characters_appears
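The garbage pattern quoted in the question is the classic signature of UTF-8 bytes being decoded as a Western single-byte code page (windows-1252). A minimal diagnostic sketch (not the BI Publisher fix itself, just a reproduction of the symptom):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    // Encode Japanese text as UTF-8, then (wrongly) decode the same bytes
    // as windows-1252 -- this reproduces the garbage seen in the mail body.
    public static String misdecode(String s) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        return new String(utf8, Charset.forName("windows-1252"));
    }

    public static void main(String[] args) {
        System.out.println(misdecode("定期")); // prints å®šæœŸ
    }
}
```

So the report content itself is fine; somewhere in the mail path the UTF-8 bytes are being labelled or decoded with the wrong charset.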

Similar Messages

  • Handling Multi-byte/Unicode (Japanese) characters in Oracle Database

    Hello,
    How do I handle Japanese characters with an Oracle database?
    I have a Java application which retrieves some values from the database, makes some changes to them [e.g. changes the value of a status column, adds comments to a VARCHAR2 column, etc.] and then performs an UPDATE back to the database.
    Everything works fine for English, but NOT for Japanese, which uses multi-byte/Unicode characters. The Japanese characters are garbled after performing the database UPDATE.
    I verified that Java by default uses UTF-16 encoding, so there shouldn't be any problem with Java/JDBC.
    What do I need to change at #1 - the Oracle (database) side, or #2 - the OS (Linux) side?
    I tried changing the NLS_LANG value in the OS and the NLS_SESSION_PARAMETERS settings in the database, and tried a test insert from SQL*Plus. But SQL*Plus converts all Japanese characters to a question mark (?), so I could not test it via SQL*Plus on my XP (English) edition.
    Any help will be really appreciated.
    Thanks

    Hello Sergiusz,
    Here are the values before & after Update:
    --BEFORE update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,65,74,61,6c,69,6e,6b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    --AFTER Update:
    select tar_sid, DUMP(col_name, 1016) from table_name where tar_sid in ('6997593.880');
    /* Output copied from SQL-Developer: */
    6997593.88 Typ=1 Len=144 CharacterSet=UTF8: 54,45,53,54,5f,41,42,53,54,52,41,43,54,e3,81,ab,e3,81,a6,4f,52,41,2d,30,31,34,32,32,e7,99,ba,e7,94,9f,29,a,4d,45,54,41,4c,49,4e,4b,20,e3,81,a7,e7,a2,ba,e8,aa,8d,e3,81,84,e3,81,9f,e3,81,97,e3,81,be,e3,81,97,e3,81,9f,e3,81,8c,e3,80,81,52,31,30,2e,32,2e,30,2e,34,20,a,e3,81,a7,e3,81,af,e4,bf,ae,e6,ad,a3,e6,b8,88,e3,81,bf,e3,81,ae,e4,ba,8b,e4,be,8b,e3,81,97,e3,81,8b,e7,a2,ba,e8,aa,8d,e3,81,a7,e3,81,8d,e3,81,be,e3,81,9b,e3,82,93,2a
    So the Japanese byte values BEFORE & AFTER the update are the same; the dumps differ only where 'Metalink' became 'METALINK'!
    The problem is that sometimes, the Japanese data in VARCHAR2 (abstract) column gets corrupted. What could be the problem here? Any clues?
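A quick way to double-check what is stored is to decode the DUMP byte list by hand. A small sketch (DumpDecoder is a made-up helper, not an Oracle API) that turns the comma-separated hex bytes from DUMP(col, 1016) back into a string:

```java
import java.nio.charset.StandardCharsets;

public class DumpDecoder {
    // Parse Oracle's DUMP(col, 1016) hex byte list and decode it as UTF-8,
    // so you can see the text the database really holds.
    public static String decode(String dump) {
        String[] parts = dump.split(",");
        byte[] bytes = new byte[parts.length];
        for (int i = 0; i < parts.length; i++) {
            bytes[i] = (byte) Integer.parseInt(parts[i].trim(), 16);
        }
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // First bytes of the dump above: 54,45,53,54,... = "TEST..."
        System.out.println(decode("54,45,53,54,5f,41,42,53,54,52,41,43,54"));
    }
}
```

If the decoded text is already correct in the database, the corruption is happening on the way in or out (client NLS settings or JDBC conversion), not in storage.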

  • How to print Arabic characters in Oracle BI Publisher report

    Dear Experts,
    Kindly suggest how to print Arabic characters in BI Publisher.
    Regards,
    Mohan

    see link
    https://blogs.oracle.com/BIDeveloper/entry/non-english_characters_appears

  • How Oracle tables can be used to display Chinese/Japanese characters

    If anyone knows how to display Chinese/Japanese characters from Oracle tables, please reply. Thanks.
    Regards,
    Preston

    hi
    -> Also, please let me know how Oracle Lite is licensed if I have 300-odd users of this offline application.
    You should speak to your local Oracle rep about that; for us, for example, they gave a pretty cheap package for an Olite 10gR3 + 11g repository.
    -> Need to use only the database part of Oracle Lite and not the mobile server.
    You can't do that. The mobile server is the application server that handles the synchronization process on the server side. When a client tries to sync, he actually connects to the mobile server and asks for the database changes; the mobile server knows what the client must get, so it packs the changes and hands them over.
    You can of course make lightweight .NET apps using Olite. We make Olite apps even for WinCE handhelds, so yes, of course you can. Olite also had a Win32 client.
    -> Can it run from USB?
    OK, here, to be honest, I've never tried that myself. It looks like a kind of weird requirement, but I don't see why it shouldn't run. If you set up the paths correctly you shouldn't have a problem, I think.
    -> The offline application will have more or less similar data entry forms and storage structure.
    Yes, of course. If I have 3 tables on the server I can choose to have 2 (or all) of them on the client too. I can even separate which client gets what: for instance, if client A sells houses in New York he will get only the house table rows for New York; if another sells for Chicago he will get those instead, and so on.
    -> All client apps are offline and sync periodically (when you choose) to the server.

  • Showing Japanese characters from Report Builder

    Hi, we have a problem showing Japanese characters using Oracle Report Builder 6i (6.0.8.17.0).
    Our database version is Oracle9i 9.0.1.4.0, and
    we're using US7ASCII as the DB character set and UTF8 as the national character set (we also tried AL16UTF16, going by what Oracle Metalink says is supported). We're using Solaris 9 (OS 5.9) to invoke Report Builder.
    We are able to insert and select Japanese characters from the SQL prompt using a Japanese terminal.
    But when we run the report from Report Builder, we see some junk characters. Please help if anybody has worked on showing Japanese characters from Report Builder.
    Thanks
    -TN Reddy

    Hi Vinod and Sripathy,
    Thanks very much for your feedback. We did some testing and managed to get correct results from the txt/html/xml outputs, but we still have a problem with the PostScript format. We're checking this now.
    The problem was that UTF-8 was installed at the OS level (Solaris 9).
    Anyway, Thank you very much.
    Regards
    -TN Reddy

  • Oracle Report Server Issue with Japanese Characters

    We are trying to set up an Oracle Report Server to print Japanese characters in PDF format.
    We have separate Oracle Report Servers for printing English, Chinese and Vietnamese characters in PDF format using Oracle Reports, which are running properly in production on Unix AIX version 5.3. Now we have a requirement to print Japanese characters, so we set up a new server with the same configuration as the Chinese/Vietnamese report servers. But we are not able to print the Japanese characters.
    Here are the details of how we configured this new server.
    1. We modified reports.sh to map the proper NLS_LANG (JAPANESE_AMERICA.UTF8) and other Admin folder settings.
    2. We configured the new report server via OPMN admin.
    3. We copied arialuni.ttf to the Printers folder and converted this same .ttf file to AFM format. This AFM file was copied to the $ORACLE_HOME/guicommon/gk/JP_Admin/AFM folder.
    4. We modified the uifont.ali file (JP_Admin folder) for font subsetting.
    5. We put an entry in JP_Admin/PPD/datap462.ppd as *Font ArialUnicodeMS: Standard "(Version 1.01)" Standard ROM
    6. We modified the Tk2Motif.rgb file (JP_Admin folder) for character set mapping (Tk2Motif*fontMapCs: iso8859-1=UTF8), as we have enabled this for the other report servers as well.
    Environment Details:-
    Unix AIX version : 5300-07-05-0831
    Oracle Version : 10.1.0.4.2
    NLS_LANG : JAPANESE_AMERICA.UTF8
    Font Mapping : Font Sub Setting in uifont.ali
    Font Used for Printing : arialuni.ttf (Font Name : Arial Unicode MS)
    The error thrown in the rwEng trace (rwEng-0.trc) file is as below
    [2011/9/7 8:11:4:488] Error 50103 (C Engine): 20:11:04 ERR REP-3000: Internal error starting Oracle Toolkit.
    The error thrown when trying to execute the reports is…
    REP-0177: Error while running in remote server
    Engine rwEng-0 crashed, job Id: 67
    Our investigations and findings:
    1. We disabled the entry Tk2Motif*fontMapCs: iso8859-1=UTF8 in Tk2Motif.rgb and then started the server. We found that no error is thrown in the rwEng trace file and we are able to print the report in PDF format (please see the attached japarial.pdf for verification), but we see only junk characters. We checked the document settings in the PDF file and confirmed that font subsetting is being used.
    2. If we enable the above entry, then the rwEng trace throws the above error (Oracle Toolkit error) and the reports engine crashes.
    It would be a great help if you can assist us in resolving this issue.
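For reference, the font-subsetting entry mentioned in step 4 typically looks like the fragment below (a hypothetical sketch of a uifont.ali section; the face name on the left must match the font the report actually uses):

```
[ PDF:Subset ]
"Arial Unicode MS" = "arialuni.ttf"
```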

    Maybe 7zip or another tool has workarounds for broken file names; you could try that.
    Or you could try to go over the files in the zip archive one by one and write them to files out-1, out-2, ..., out-$n without concerning yourself with the file names. You could get the file endings back via the mimetype.
    This program might work:
    #include <stdio.h>
    #include <zip.h>
    static const char *template = "./out-%04d.bin";
    int main(int argc, char **argv)
    {
        int err = 0;
        zip_t *arc = zip_open(argv[1], ZIP_RDONLY, &err);
        if (arc == NULL) {
            printf("Failed to open ZIP, error %d\n", err);
            return -1;
        }
        zip_int64_t n = zip_get_num_entries(arc, 0);
        printf("%s: # of packed files: %lld\n", argv[1], (long long)n);
        for (zip_int64_t i = 0; i < n; i++) {
            zip_stat_t stat;
            zip_stat_index(arc, i, ZIP_FL_UNCHANGED, &stat);
            char buf[stat.size];                      /* VLA holding one entry */
            char oname[sizeof("./out-0000.bin") + 8]; /* room for the name */
            zip_file_t *f = zip_fopen_index(arc, i, ZIP_FL_UNCHANGED);
            zip_fread(f, buf, stat.size);
            snprintf(oname, sizeof(oname), template, (int)i);
            FILE *of = fopen(oname, "wb");
            fwrite(buf, stat.size, 1, of);
            printf("%s: %s => %llu bytes\n", argv[1], oname,
                   (unsigned long long)stat.size);
            zip_fclose(f);
            fclose(of);
        }
        zip_close(arc);
        return 0;
    }
    Compile with
    gcc -std=gnu99 -O3 -o unzip unzip.c -lzip
    and run as
    ./unzip $funnyzipfile
    You should get template-named, numbered output files in the current directory.
    Last edited by 2ion (2015-05-21 23:09:29)

  • Specify File Encoding (Japanese Characters) for UTL_FILE in Oracle 10g

    Hi All,
    I am creating a text file using the UTL_FILE package. The database is Oracle 10g and the character set of the DB is UTF-8.
    The file is created on the DB server machine itself, which is a Windows 2003 machine with a Japanese OS. Further, some tables contain Japanese characters which I need to write to the file.
    When these Japanese characters are written to the text file they occupy 3 bytes instead of 1, which distorts the fixed format of the file I need to stick to.
    Can somebody suggest whether there is a way to write the Japanese characters in fewer bytes, or to change the encoding of the file to something else, viz. Shift-JIS etc.?
    Thanking in advance,
    Regards,
    Tushar

    Are you using the UTL_FILE.FOPEN_NCHAR function to open the files?
    Cheers, APC
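As background to the 3-byte observation: the width is a property of the encoding, not of UTL_FILE. Each kanji takes 3 bytes in UTF-8 and 2 bytes in Shift-JIS (never 1), so a fixed byte-width layout has to budget for that either way. A quick sketch illustrating the difference:

```java
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

public class ByteWidth {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String s = "日本語"; // three kanji
        // 3 bytes per kanji in UTF-8, 2 bytes per kanji in Shift-JIS
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 9
        System.out.println(s.getBytes("Shift_JIS").length);            // 6
    }
}
```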

  • Oracle Translation Builder: Japanese characters as question marks (????)

    Hi All,
    We need to translate a custom form's labels from English to Japanese, so that both the US and Japan can use that form.
    So we have used Oracle Translation Builder to specify the translation. As an initial step, we just changed the labels to some other English words, and we have now generated an American .fmx and a Japanese .fmx and are able to see the difference.
    Now in Oracle Translation Builder (OTB), if I paste Japanese characters into the translation editor, we are able to see the Japanese characters. But if we save & close OTB and re-open it, we see all question marks (???). Even if we generate/upload the Japanese .fmx to the applications, we see the same ??? marks in the applications.
    Can you please let me know what I am missing...
    (I have installed the Asian language font (ARIALUNI.ttf) on my local system.)

    Maybe you should set the environment variable NLS_LANG to "JAPANESE_JAPAN.WE8MSWIN1252" or "AMERICAN_AMERICA.JA16SJIS". This will allow you to store Japanese, provided the input data is truly JA16SJIS and the database is also in a character set that can store Japanese (like UTF8 or JA16SJIS).
    Or use Oracle Translation Hub; it is a useful tool. It replaces the OTB :-)
    Edited by: user9212008 on 2.4.2010 1:09

  • Migrating Japanese Characters from MS SQL Server 2000 to Oracle 9i using JSP

    Hi,
    I have a situation where Japanese characters are to be migrated from MS SQL Server 2000 to Oracle 9i, and then rendered using JSP.
    I followed the approach below:
    1. Extract the Japanese data from MS SQL Server and generate an XML file.
    2. Parse the XML and store it in the Oracle 9i database, which uses UTF-8 encoding.
    3. On retrieving and rendering using Shift-JIS, a few extra junk characters are added.
    When I try to copy and paste the Japanese contents from the XML to a text file, it works fine.
    Could someone help me in resolving this issue?
    It is very urgent, and any help would be greatly appreciated.

    There is documentation in the reference guide shipped with the Workbench, there is this discussion forum, the support web page (which includes tech notes and FAQs), and the company-specific procedural language documentation.
    There is also an older document for use with the old Sybase toolkits, which may be obsolete, and there are some internal documents which were for internal consumption.
    Turloch
    Oracle Migration Workbench Team
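On step 3 above: decoding UTF-8 stored bytes through Shift-JIS is by itself enough to produce extra junk characters, because the two encodings disagree about multibyte sequences. A small sketch of the mismatch (an assumption about where the junk comes from, based on the description):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class WrongRedecode {
    public static void main(String[] args) {
        String original = "日本語";
        byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);
        // Decoding UTF-8 bytes as Shift_JIS does not round-trip:
        String garbled = new String(utf8, Charset.forName("Shift_JIS"));
        System.out.println(garbled.equals(original)); // false
    }
}
```

The usual cure is to keep one declared encoding (UTF-8) end to end: the JDBC fetch, the JSP page encoding, and the response content type.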

  • Oracle 6i Report Output Japanese Characters to XML on UNIX?

    I have a 6i report which reads text from the database and outputs everything to XML. There is now a requirement for it to handle Japanese characters.
    I was able to do this successfully in Windows by changing my NLS_LANG to UTF8 and changing the field font to a Unicode font. I then ran the report, and the XML that was output contains the correct Japanese characters from the underlying database query.
    When I port the report to UNIX, the report runs, but the Japanese characters all show up as upside-down question marks in the XML. If I open the report in Reports Builder from the server, I do not see any Unicode fonts, but it changed my selected font to "clear".
    Can anyone suggest the fix to get this working in UNIX? Does a new font need to be installed on the server to support this?
    Thanks in advance for any suggestions!

    We used UTF8, which worked fine for that.

  • Displaying Japanese characters in JSP page

    Hi,
    I am calling an application which returns Japanese characters from my JSP. I am getting the captions in Japanese characters from the application and I am able to display the Japanese captions. After displaying them, the user selects particular captions by ticking the check box against each caption and presses the Save button. Then I store the captions in a JavaScript string separated by :: and pass it to another JSP.
    The action JSP retrieves that string, splits it using a tokenizer, and stores the pieces in the database. When I retrieve them again from the database and display them, I am not able to see the Japanese characters; it shows some other characters, maybe characters encoded by ISO.
    My database is UTF-8 enabled and on my server I set UTF-8 as the default encoding. In my JSP pages I also set the charset and encoding type to UTF-8.
    I shall appreciate it if you can help me in resolving the issue.

    Post the encoding-related statements from your JSPs - there are a number of different ones that may be relevant.
    It may also be relevant which database you store the strings in (Oracle, DB2, etc.), since some require an encoding parameter to be passed.
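One common way to keep multibyte captions intact while they travel through a JavaScript string and request parameters between JSPs is to URL-encode them with an explicit UTF-8 charset and decode them with the same charset on the receiving page. A hedged sketch of the round trip:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;

public class CaptionRoundTrip {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String caption = "日本語";
        // Encode before embedding in the JavaScript string / request...
        String wire = URLEncoder.encode(caption, "UTF-8");
        // ...and decode with the SAME charset in the action JSP.
        String back = URLDecoder.decode(wire, "UTF-8");
        System.out.println(back.equals(caption)); // true
    }
}
```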

  • Japanese Characters are showing as Question Marks '?'

    Hi Experts,
    We are using an Oracle database with the nls_database_parameters below:
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET WE8MSWIN1252
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_CSMIG_SCHEMA_VERSION 3
    NLS_RDBMS_VERSION 11.1.0.7.0
    When we try to view Japanese characters (on Windows 7) in SQL Developer, Toad or SQL*Plus, we get data like '????'.
    Can anybody please explain the setup required to view Japanese characters from the local machine and database?
    Thanks in advance.

    user542601 wrote:
    [Note: If I insert the Japanese characters from SQL Developer or Toad, I am unable to see proper results.]
    For JDBC connections in Oracle SQL Developer, I believe a different parameter setting is required. Try running SQL Developer with the JVM option -Doracle.jdbc.convertNcharLiterals=true.
    I need to use this data in Oracle 6i Reports now. When I create reports using the table where I have Japanese characters stored in an NVARCHAR2 column, the value is not displayed correctly in the report.
    Regardless of Reports' support for NCHAR columns, 6i is very, very old and based on equally ancient database client libraries (8.0.x, if memory serves me). The earliest version of the Oracle database software that supports the N literal replacement feature is 10.2, so it is obviously not available for Reports 6i.
    I'm guessing the only way to fully support Japanese language symbols is to move to a UTF8 database (if not migrating to a current version of Report Services).
    Please help to provide a workaround for this. Or do I need to post this question in any other forums?
    There is a Reports forum around here somewhere. Look in the dev tools section or maybe the Middleware categories.
    Edit: here it is: {forum:id=84}
    Edited by: orafad on Feb 25, 2012 11:12 PM
    Edited by: orafad on Feb 25, 2012 11:16 PM
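The underlying cause in this thread is the database character set: WE8MSWIN1252 has no code points for Japanese, so every unmappable character degrades to '?' at insert time, and no client setting can bring the data back. The effect is easy to reproduce in Java (windows-1252 standing in for WE8MSWIN1252):

```java
import java.nio.charset.Charset;

public class QuestionMarks {
    public static void main(String[] args) {
        Charset cp1252 = Charset.forName("windows-1252");
        // Encoding Japanese into a Latin-1-family code page replaces each
        // unmappable character with '?', which is what gets stored.
        byte[] stored = "日本語".getBytes(cp1252);
        System.out.println(new String(stored, cp1252)); // prints ???
    }
}
```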

  • Create HTML file that can display Unicode (Japanese) characters

    Hi,
    Product: Java Web Application
    Operating system: Windows NT/2000 Server, Linux, FreeBSD
    Web server: IIS, Apache, etc.
    Application server: Tomcat 3.2.4, JRun, WebLogic, etc.
    Database server: MySQL 3.23.49, MS-SQL, Oracle, etc.
    Java architecture: JSP (presentation) + Java Bean (business logic)
    Language: English, Japanese, Chinese, Italian, Arabic, etc.
    Through our Java application we need to create HTML files that have to display Unicode text. Our present code works well with English and most of the European character sets. But when we tried to create HTML files that display Unicode text, say Japanese, only ???? is displayed. Following is the code we have used. The out on the browser displays the Japanese characters correctly, but the created file displays only ??? in place of the Japanese chars. Can anybody tell how we can do it?
    <%@ page import="java.io.*" %>
    <%
    String s = request.getParameter("txt1");
    out.println("Original Text " + s);
    // build the HTML output; the META charset must match the encoding
    // actually used when writing the file (UTF-8 here)
    String f_str_content = "<HTML><HEAD>"
        + "<META content=\"text/html; charset=utf-8\" http-equiv=Content-Type></HEAD>"
        + "<BODY>"
        + s
        + "</BODY></HTML>";
    out.println("file = " + f_str_content);
    // file object for the html file
    File l_obj_f5 = new File("jap127.html");
    if (l_obj_f5.exists())
        l_obj_f5.delete();
    // write the characters out as UTF-8; round-tripping the string through
    // 8859_9/Shift_JIS (as before) is what corrupted the Japanese text
    Writer l_obj_fout = new OutputStreamWriter(new FileOutputStream(l_obj_f5), "UTF-8");
    l_obj_fout.write(f_str_content);
    l_obj_fout.close();
    %>
    thanks.

    Try changing the charset attribute within the META tag from 'utf-8' to 'SHIFT_JIS' or 'utf-16'. One of those two ought to do the trick for you.
    Hope that helps,
    Martin Hughes

  • Japanese characters retrieved from UTF8 DB shown wrongly in Excel

    Hi All,
    I am generating a CSV file (comma-separated) through a query from an Oracle 9i database. There is one field which is Japanese. Our database is UTF8-enabled, and when the CSV file is opened from Notepad/TextPad the Japanese characters show properly, but when we open the file from Excel (which is our requirement) the data does not come out properly.
    I am copying the data below directly from the Excel sheet. CLIENT_NAME_LOCAL (an NVARCHAR2 field) is the field which captures Japanese. It can be seen that the data for FUND_CODE=811018 comes through correctly, but for 809985 the CLIENT_NAME_LOCAL and FUND_CODE columns get concatenated with a middle-dot sign between them, so the FROM_DATE value ends up in the FUND_CODE column, even though the ',' delimiter can be seen between the two fields when I open the file in Notepad. Note that I also tried the CONVERT function in my query to change the CLIENT_NAME_LOCAL column to the 'JA16SJIS' character set, but nothing changed.
    N.B. I copied and pasted the data from Excel, so in the HTML format it seems that the FUND_CODE and FROM_DATE values are on the same vertical line, but it is not so.
    ==========================================================
    TYPE CLIENT_NAME_LOCAL FUND_CODE FROM_DATE
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35674;&#65393;&#39501;&#65382;&#36881;&#65382;&#35649;&#65391;&#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406; 811018 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    AN &#35692;&#65386;&#34833;&#19976;&#65404;&#22786;&#65380;&#65406;&#32306;€&#34656;&#12539;&#39156;&#33651;&#25105;&#65402;&#12539;809985 01/09/2005
    Data in Notepad
    ==========================================================
    TYPE,CLIENT_NAME_LOCAL,FUND_CODE,FROM_DATE,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,東邦瓦斯株式会社,811018,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    AN,株式会社　商船三井,809985,01/09/2005,
    Thanks & Regards,
    Sudipta

    You can open UTF-8 files in Excel:
    1. Change the file extension to .txt
    2. In Excel: File/Open -> point to your file
    3. Excel opens the file conversion dialog; in the "File origin" field choose "65001 : Unicode (UTF-8)"
    4. Proceed with the other settings - you've got it!
    This procedure works for sure in Excel 2003.
    Regards
    Pawel
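An alternative to the import dialog is to write the CSV with a UTF-8 byte-order mark, which many Excel versions use to auto-detect the encoding when the file is double-clicked. A sketch (data.csv is a made-up file name, and whether your Excel version honours the BOM should be verified):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class BomCsv {
    public static void main(String[] args) throws IOException {
        FileOutputStream fos = new FileOutputStream("data.csv");
        // UTF-8 byte-order mark (EF BB BF), written before any text
        fos.write(new byte[] { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF });
        Writer w = new OutputStreamWriter(fos, StandardCharsets.UTF_8);
        w.write("TYPE,CLIENT_NAME_LOCAL,FUND_CODE,FROM_DATE\n");
        w.write("AN,東邦瓦斯株式会社,811018,01/09/2005\n");
        w.close(); // also closes fos
    }
}
```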

  • Japanese Characters Encoding Problem

    Hi All,
    I have been looking at the problems posted in this forum and quite a few describe the issue I am facing currently but none has been able to provide a solution.
    The problem I am facing is as follows:
    Step 1: I am retrieving Japanese data from Oracle DB 9i (Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit) using standard JDBC API calls. [NLS_CHARACTERSET : AL32UTF8,  NLS_NCHAR_CHARACTERSET : AL16UTF16]
    byte[] title = resultSet.getBytes("COLUMN_NAME");
    Step 2: I pass the retrieved bytes to a method that returns an SJIS-encoded String.
    private String getStringSJIS(byte[] bytesToBeEncoded) {
        StringBuffer sb = new StringBuffer();
        try {
            if (bytesToBeEncoded != null) {
                ByteArrayInputStream bais = new ByteArrayInputStream(bytesToBeEncoded);
                InputStreamReader isr = new InputStreamReader(bais, "SJIS");
                for (int c = isr.read(); c != -1; c = isr.read()) {
                    sb.append((char) c);
                }
            }
        } catch (Exception ex) {
            // note: swallowing the exception hides decoding failures
        }
        return sb.toString();
    }
    Step 3: I am using an HTML parser JAR to print the decimal values of the encoded String.
    String after = getStringSJIS(title);
    System.out.println(Translate.encode(after));
    I get an output of String 1: &#65410;&#31167;&#65402;&#65410;&#26412;&#65410;&#21476;&#65386;&#65410;&#12469;&#65410;&#12452;&#65410;&#12488;
    which contains 14 decimal character codes.
    The same data is being read by another application that also uses JDBC, connects to the same DB, and returns the decimal values as String 2: &#26085;&#26412;&#35486;&#12469;&#12452;&#12488; (i.e. 日本語サイト)
    The display of these two Strings differs significantly when viewed in the browser.
    It seems String 1 contains single-byte half-width characters and String 2 does not. Does anyone know why the bytes are getting modified while being retrieved from the database for the same column value?

    The encoding for the bytes being returned from the database is Cp1252, but this encoding, I understand, depends on the underlying platform I am using.
    If indeed the data from the DB is in UTF-8 or UTF-16, shouldn't it be displayed correctly in the browser? No encoding/decoding should be required on the data then. In the browser it gets displayed as “ú–{ŒêƒTƒCƒg. (The encoding of the JSP page is set to UTF-8.)
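The garbled string quoted above is itself a diagnostic clue: it is exactly what the Shift-JIS bytes of 日本語サイト look like when decoded as Cp1252, which suggests the bytes reaching the page are Shift-JIS rather than UTF-8. A sketch reproducing the quoted garbage:

```java
import java.io.UnsupportedEncodingException;

public class Cp1252View {
    public static void main(String[] args) throws UnsupportedEncodingException {
        byte[] sjis = "日本語サイト".getBytes("Shift_JIS");
        // Viewing Shift_JIS bytes through Cp1252 yields the familiar mojibake
        System.out.println(new String(sjis, "windows-1252")); // “ú–{ŒêƒTƒCƒg
    }
}
```

So the SJIS conversion in step 2 appears to be happening earlier than expected, and the page then shows those bytes through the platform default charset.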
