Storing Japanese Characters

Hi,
I need to store Japanese characters in the database, so I have used NCHAR and NVARCHAR2 columns to hold the Unicode data. I am using a VC++ dialog-based application that connects to the database through ODBC. When I store Japanese characters they are not stored properly, and I get garbage values back when I query them. How can I solve this? How should Japanese characters be stored in the database? My database character set is WE8ISO8859P1 and I do not want to change it; instead I have used NCHAR to store the data, but it is still not working. Please suggest a solution.
Thanks & Regards,
K. Venkata Ramana.

Use the UTF8 (Unicode) character set in Oracle. Assuming you are using the Database Configuration Assistant, you would need to choose a 'Custom' install rather than 'Typical (Recommended)' in order to be given the chance to specify your database language settings.
There is also an NLS guide in Oracle's documentation which you might find of interest.
Jason.
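
As a quick sanity check (not part of the original replies), it can help to confirm which character sets the target database actually uses before deciding whether NCHAR/NVARCHAR2 columns can hold Japanese text. A minimal JDBC sketch; the connection URL, user and password are placeholders:

    import java.sql.*;

    public class CharsetCheck {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT parameter, value FROM nls_database_parameters " +
                     "WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET')")) {
                while (rs.next()) {
                    // For N-columns, NLS_NCHAR_CHARACTERSET (AL16UTF16 or UTF8) is what matters
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }

On a WE8ISO8859P1 database the national character set is what has to carry the Japanese data, and the client side (ODBC driver and NLS settings) must then be Unicode-capable as well.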
quote: Originally posted by Dara:
Hello.
I wish to store Chinese, Japanese, German and English characters in the same database. Is this possible?
However, I would like to manage the database in English.
If so, how do I specify the character set when creating the database?
If not, what should I do?
Your help is greatly appreciated.
Thank you.

Similar Messages

  • Displaying Japanese characters in JSP page

    Hi,
    I am calling an application that returns Japanese characters from my JSP. I get the captions in Japanese from the application and I am able to display them. After the Japanese captions are displayed, the user selects particular captions by ticking the check box against each caption and presses the Save button. I then store the selected captions in a JavaScript string separated by :: and pass it to another JSP.
    The action JSP retrieves that string, splits it with a tokenizer, and stores the pieces in the database. When I retrieve them again from the database and display them, I can no longer see the Japanese characters; some other characters are shown instead, possibly characters decoded as ISO-8859-1.
    My database is UTF-8 enabled and on my server I have set UTF-8 as the default encoding. In my JSP pages I also set the charset and encoding type to UTF-8.
    I would appreciate any help in resolving this issue.

    Post the encoding-related statements from your JSPs - there are a number of different ones that may be relevant.
    It may also be relevant which database you store the strings in (Oracle, DB2, etc.), since some require an encoding parameter to be passed.
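
    For reference, here is a rough sketch (not taken from the thread) of how the encoding-related pieces usually have to line up between the form post and the database write, assuming a servlet container and an Oracle JDBC connection; the table and column names are made up:

        import java.io.IOException;
        import java.sql.*;
        import javax.servlet.http.*;

        public class SaveCaptions extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                req.setCharacterEncoding("UTF-8");                // decode the posted form data as UTF-8
                resp.setContentType("text/html; charset=UTF-8");  // and answer in UTF-8 as well
                String joined = req.getParameter("captions");     // e.g. "caption1::caption2"
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pw");
                     PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO captions (caption_text) VALUES (?)")) {
                    for (String caption : joined.split("::")) {
                        ps.setString(1, caption);                 // the driver converts to the DB charset
                        ps.executeUpdate();
                    }
                } catch (SQLException e) {
                    throw new IOException(e);
                }
            }
        }

    The JSPs themselves would also carry <%@ page contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %> so the captions are rendered and posted as UTF-8 in the first place.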

  • Japanese Characters are showing as Question Marks '?'

    Hi Experts,
    We are using Oracle Database with the following nls_database_parameters:
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET WE8MSWIN1252
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_CSMIG_SCHEMA_VERSION 3
    NLS_RDBMS_VERSION 11.1.0.7.0
    When we try to view the Japanese characters (Windows 7) in SQL Developer, Toad or SQL*Plus, we get data like '????'.
    Can anybody please explain the setup required to view the Japanese characters from the local machine and the database?
    Thanks in advance.

    user542601 wrote:
    [Note: If I insert the Japanese characters from SQL Developer or Toad, I am unable to see proper results.]
    For JDBC connections in Oracle SQL Developer, I believe a different parameter setting is required. Try running SQL Developer with the JVM option -Doracle.jdbc.convertNcharLiterals=true.
    [I need to use this data in Oracle 6i Reports now. When I create reports using the table where the Japanese characters are stored in an NVARCHAR2 column, the value is not displayed correctly in the report.]
    Regardless of Reports' support for NCHAR columns, 6i is very, very old and based on equally ancient database client libraries (8.0.x, if memory serves). The earliest version of the Oracle database software that supports the N literal replacement feature is 10.2, so it is obviously not available for Reports 6i. I'm guessing the only way to fully support Japanese language symbols is to move to a UTF8 database (if not migrating to a current version of Reports Services).
    [Please help to provide a workaround for this. Or do I need to post this question in any other forums?]
    There is a Reports forum around here somewhere. Look in the dev tools section or maybe the Middleware categories.
    Edit: here it is: {forum:id=84}
    Edited by: orafad on Feb 25, 2012 11:12 PM
    Edited by: orafad on Feb 25, 2012 11:16 PM
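
    For completeness, a hedged sketch of the same idea in plain JDBC (not from the thread): on a WE8MSWIN1252 database the Japanese text has to live in the N-columns, and the driver property below plays the role of the SQL Developer JVM option mentioned above. Connection details and table/column names are placeholders:

        import java.sql.*;
        import java.util.Properties;

        public class NcharInsert {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.setProperty("user", "scott");
                props.setProperty("password", "tiger");
                // Plays the same role as running SQL Developer with
                // -Doracle.jdbc.convertNcharLiterals=true
                props.setProperty("oracle.jdbc.convertNcharLiterals", "true");
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);
                     PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO emp_jp (ename_jp) VALUES (?)")) {
                    // ename_jp is assumed to be an NVARCHAR2 column
                    ps.setNString(1, "\u65e5\u672c\u8a9e");   // "Japanese" written in Japanese
                    ps.executeUpdate();
                }
            }
        }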

  • Problem viewing Japanese characters in Excel sent via Email attachment

    Hi All
    I am using FM 'SO_DOCUMENT_SEND_API1' to send out an e-mail attachment (an Excel file). I am able to receive the Excel file successfully; however, I am not able to display the Japanese characters properly in the Excel file.
    I tried displaying some Japanese characters in the e-mail body and I have no problem viewing them in my e-mail client (MS Outlook 2003). The same text becomes illegible when I transfer it to Excel as an attachment. In my internal table these characters are displayed correctly as well.
    Anyone has any advice to solve this issue?
    Thanks for your replies.

    Hi Divya,
    refer to the link below:
    http://www.sapdevelopment.co.uk/reporting/email/attach_xls.htm
    The code in this demonstrates how to send an email to an external email address where the data is stored within a .xls attachment.
    Hope this helps.
    Reward if helpful.
    Regards,
    Sipra

  • Japanese characters in left pane

    I have a project that has been translated into Japanese, and
    the left pane of my FlashHelp system does not render the characters
    correctly. The funny thing is that I got it to work in another
    project that was translated, one that I think was built in X5 (or
    possibly earlier) and has a skin that RoboHelp says should be
    updated when I generate the output. I don't dare change it now, and
    it seems to work fine.
    Unfortunately, I didn't make a record of exactly how I got
    that to work, but I seem to remember it had to do with embedding
    Japanese characters in the Flash skin files. Fonts are also
    declared in the skin .fhs file and the accompanying XML file, and
    I'm not sure how they interact, but it seems that the fonts in the
    Flash files supersede those in the skin file.
    In my newer project, the toolbar buttons show up in Japanese,
    as do the "terms" and "definitions" headings in the glossary and
    the index keyword prompt. It's the TOC entries, glossary terms, and
    index terms that have the problem, and they are all driven by the
    file skin_textnode.swf. As long as the other files are set to Arial
    or Arial Unicode MS in Flash, they display Japanese correctly, even
    if Japanese isn't embedded (using "anti-alias for animation").
    Without Japanese embedded in skin_textnode.fla, I get this kind of
    nonsense in the left pane:
    検索する.
    But with Japanese embedded, I get symbols like paragraph symbols,
    plus-or-minus signs, and daggers. I have also saved the .hhc file
    (the TOC) and the .hhk (index) in UTF-8 format using Notepad, but
    no change there. The junk characters also show up in the overlaying
    window when clicking a term in the index, and that's driven by
    skin_index.fla. I've tried the font and character embedding with
    that file. I have tried changing "font-family:Arial" in the skin
    file to both "font:"Arial Unicode MS"" and "font:"Arial"". No
    difference.
    I have compared files with that earlier project. As far as I
    can tell, I have done things comparably, but the junk characters
    persist in the left pane. It doesn't appear that there are
    substantial differences between the way the old and new skins are
    executed to cause the Japanese to work in one and not the other.
    Any ideas that may help me make this work again? I'm using RoboHelp
    6, Windows XP SP2, IE 6. (In theory, making this work for Japanese
    will also solve my problem for Russian. I'm hoping the language
    capabilities of RH7 handle all of this better so I don't have to
    use these work-arounds for non-Roman characters.)
    I know this is a load of information, but I've tried to
    describe the circumstances adequately without writing the great
    American novel. I'll clarify anything that's needed. Thanks,
    Ben

    Solved: I found once more that the TOC, glossary, and index
    information is pulled from files in the whxdata folder:
    whtdata...xml, whgdata...xml, and whidata...xml, respectively,
    which are all in UTF-8 format. The Japanese characters have to be
    changed in these files, but they get overwritten during a build, so
    they have to be stored with the correct characters in another
    location. Fortunately, our glossary will probably not be changing,
    but the TOC and index will grow as the project moves forward, so
    this will take some babysitting.
    This all leads me to wonder why RoboHelp generates copies of
    the .glo, .hhk, and .hhc files into the output folder when it's the
    XML files that are used instead...

  • Japanese characters in SQL Developer  Version 1.5.4

    I am using SQL Developer version 1.5.4 with Oracle 11g. There are Japanese characters stored in a VARCHAR2 field.
    When I execute a SELECT query, SQL Developer does not display the Japanese characters in the Results window; it displays rows of small square characters instead.
    (When I execute the same query as a script, it does display the Japanese characters in the Script Output window.)
    What should I do to have SQL Developer display Japanese characters in the Results window?
    Thank you!
    Mark.
    Edited by: MarcoPolo on Jul 7, 2009 11:16 AM

    Hi there, Have you fixed this issue?
    I'm having the same issue, albeit with SQL Developer version 1.5.5.
    The SELECT statement displays empty square boxes in the Results window. I've set the encoding preference to UTF8 in the menu and the font to Arial Unicode MS (there is no option to set the script to Japanese). How can I display Japanese characters in the Results window?
    When I export these empty square boxes from the Results window to Notepad (with the encoding set to Unicode, the font set to Arial Unicode MS and the script set to Japanese), the Japanese values are displayed correctly in Notepad.
    Kindly provide some input.
    Many Thanks

  • Japanese Characters in Tables

    I have a table which stores the name, address and country of employees.
    I see that the address for most of the employees from Japan is shown as something like SII¿¿5F.
    It looks like a combination of English and Japanese characters where the Japanese characters are displayed as ¿.
    Is there any way to read these records?

    Thanks.
    Changing my question a little:
    Can I view already stored data (Japanese characters) with the following NLS settings?
    SQL> SELECT * FROM v$nls_parameters;
    PARAMETER VALUE
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_CHARACTERSET UTF8
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_NCHAR_CHARACTERSET AL16UTF16
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    Note: the DB had the same settings when the data was stored.
    Thanks,
    Coolguy
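
    A small diagnostic sketch (not from the thread) that can settle whether the ¿ characters are really stored in the table or are only a display problem: dump the Unicode code points of the stored value. Connection details and table/column names are placeholders:

        import java.sql.*;

        public class InspectStoredText {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pw");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                         "SELECT address FROM employees WHERE country = 'JP'")) {
                    while (rs.next()) {
                        String addr = rs.getString(1);
                        // Print every character of the address as a Unicode code point
                        addr.codePoints().forEach(cp -> System.out.printf("U+%04X %c%n", cp, cp));
                        System.out.println("----");
                    }
                }
            }
        }

    If the column already contains U+00BF ('¿') or U+003F ('?'), the data was corrupted at insert time (the client character set could not represent Japanese) and no NLS setting will recover it; if CJK code points come back, the data is intact and only the display side needs fixing.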

  • Reading Japanese characters from a JSP/HTML form.

    I have a JSP/Struts/WebLogic/Oracle setup. I am able to get Japanese characters from the database shown on the screen (HTML). However, the users now want to enter Japanese characters on the screen and save these Unicode characters in the DB. How should I go about it?
    I am using the html:text tag for the input fields in the JSP. No matter what I try I am getting invalid characters... Thanks in advance.

    hi debo_nair,
    if i am not mistaken the japanese characters might be getting stored in the database as '?????' and other junk characters....
    well I used the following technique:
    1. retrieve the string from the text field, and
    2. convert it along these lines:
    new String(inputValue.getBytes("ISO-8859-1"), "UTF-8")
    3. then store the converted string in the database
    the syntax may not be exact, but just refer to any Java book to get the correct form
    regds
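
    A runnable sketch of the re-decoding trick described above (illustrative only; the exact charset to re-decode from depends on the container's default, commonly ISO-8859-1 when no request encoding is set):

        import java.io.UnsupportedEncodingException;

        public class FormDecode {
            // The workaround from the reply above: the container decoded UTF-8 bytes
            // as ISO-8859-1, so turn them back into bytes and decode them as UTF-8.
            static String fix(String raw) throws UnsupportedEncodingException {
                return new String(raw.getBytes("ISO-8859-1"), "UTF-8");
            }

            public static void main(String[] args) throws Exception {
                // Simulate what the container hands us for a UTF-8 form post
                String mojibake = new String("\u65e5\u672c\u8a9e".getBytes("UTF-8"), "ISO-8859-1");
                System.out.println(fix(mojibake));   // prints the original Japanese again
            }
        }

    The cleaner fix is usually to call request.setCharacterEncoding("UTF-8") (for example in a servlet filter) before any parameter is read, so no re-decoding is needed at all.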

  • Converting garbled characters for JAPANESE characters in a custom table

    Hi all,
    I have a custom table that stores Japanese characters.
    After my company upgraded to ECC 6.0, the data in this custom table has become garbled, and a lot of it is garbled.
    Is there any SAP tool I can use to correct those garbled Japanese characters?
    Thanks,
    William Wilstroth

    Hi Nils,
    I really had a field day reading and testing around Unicode conversion... To my disappointment, I do not have the authorization to use SUMG and SCP, as well as a few of the other transaction codes...
    I finally told higher-level technical management that this table might need some changes...
    Does my problem have anything to do with MDMP, since it is no longer supported in ECC 6? I also found a piece of code that searches for MDMP in RSVTPROT...
    My colleagues suggest that the data be corrected from table DBTABLOG, which, in my opinion, is not the right way...
    Thanks,
    William

  • Oracle Report Server Issue with Japanese Characters

    We are trying to set up an Oracle Report Server to print Japanese characters in PDF format.
    We have separate Oracle Report Servers in production for printing English, Chinese and Vietnamese characters in PDF format using Oracle Reports, all running properly on Unix AIX version 5.3. We now have a requirement to print Japanese characters, so we tried to set up a new server with the same configuration as the Chinese/Vietnamese report servers, but we are not able to print the Japanese characters.
    These are the steps we followed to configure the new server:
    1. We modified reports.sh to map the proper NLS_LANG (JAPANESE_AMERICA.UTF8) and other Admin folder settings.
    2. We configured the new report server via OPMN admin.
    3. We copied arialuni.ttf to the Printers folder and converted the same .ttf file to AFM format. This AFM file was copied to the $ORACLE_HOME/guicommon/gk/JP_Admin/AFM folder.
    4. We modified the uifont.ali file (JP_Admin folder) for font subsetting.
    5. We put an entry in JP_Admin/PPD/datap462.ppd as *Font ArialUnicodeMS: Standard "(Version 1.01)" Standard ROM
    6. We modified the Tk2Motif.rgb file (JP_Admin folder) for character set mapping (Tk2Motif*fontMapCs: iso8859-1=UTF8), as we have enabled this for the other report servers as well.
    Environment Details:-
    Unix AIX version : 5300-07-05-0831
    Oracle Version : 10.1.0.4.2
    NLS_LANG : JAPANESE_AMERICA.UTF8
    Font Mapping : Font Sub Setting in uifont.ali
    Font Used for Printing : arialuni.ttf (Font Name : Arial Unicode MS)
    The error thrown in the rwEng trace (rwEng-0.trc) file is as below
    [2011/9/7 8:11:4:488] Error 50103 (C Engine): 20:11:04 ERR REP-3000: Internal error starting Oracle Toolkit.
    The error thrown when trying to execute the reports is…
    REP-0177: Error while running in remote server
    Engine rwEng-0 crashed, job Id: 67
    Our investigations and findings:
    1. We disabled the entry Tk2Motif*fontMapCs: iso8859-1=UTF8 in Tk2Motif.rgb and then started the server. We found that no error is thrown in the rwEng trace file and we are also able to print the report in PDF format (please see the attached japarial.pdf for verification), but we can see only junk characters. We verified the document settings in the PDF file and confirmed that the font subset is being used.
    2. If we enable the above entry, the rwEng trace throws the above error (Oracle Toolkit error) and the Reports engine crashes.
    It would be a great help if you could assist us in resolving this issue.

    Maybe 7zip or another tool has workarounds for broken file names; you could try that.
    Or you could try to go over the files in the zip archive one-by-one and write them to files out-1, out-2, ..., out-$n without concerning yourself with the file names. You could get file endings back via the mimetype.
    This program might work:
    #include <stdio.h>
    #include <zip.h>

    /* Output file name pattern for the extracted entries */
    static const char *template = "./out-%04d.bin";

    int main(int argc, char **argv)
    {
        int err = 0;
        zip_t *arc = zip_open(argv[1], ZIP_RDONLY, &err);
        if (arc == NULL) {
            printf("Failed to open ZIP, error %d\n", err);
            return -1;
        }
        zip_int64_t n = zip_get_num_entries(arc, 0);
        printf("%s: # of packed files: %lld\n", argv[1], (long long)n);
        for (zip_int64_t i = 0; i < n; i++) {
            zip_stat_t stat;
            zip_stat_index(arc, i, ZIP_FL_UNCHANGED, &stat);
            char buf[stat.size];                       /* VLA holding one uncompressed entry */
            char oname[sizeof("./out-0000.bin") + 16]; /* room for the formatted index */
            zip_file_t *f = zip_fopen_index(arc, (zip_uint64_t)i, ZIP_FL_UNCHANGED);
            zip_fread(f, buf, stat.size);
            snprintf(oname, sizeof(oname), template, (int)i);
            FILE *of = fopen(oname, "wb");
            fwrite(buf, stat.size, 1, of);
            printf("%s: %s => %llu bytes\n", argv[1], oname, (unsigned long long)stat.size);
            zip_fclose(f);
            fclose(of);
        }
        zip_close(arc);
        return 0;
    }
    Compile with
    gcc -std=gnu99 -O3 -o unzip unzip.c -lzip
    and run as
    ./unzip $funnyzipfile
    You should get template-named, numbered output files in the current directory.
    Last edited by 2ion (2015-05-21 23:09:29)

  • How can I get Japanese characters to show up for my music in iTunes?

    I am not exactly sure what generation my iPod is, but it says copyright 2004 on the back. It is 20 GB. It has no problems displaying Japanese characters if a particular song of mine is Japanese. I have my iPod set up to manually manage music. I like to carry my iPod between home and work. At both home and work, I use PCs with Windows XP Pro installed. I also use the latest version of iTunes (ver. 7.1.1.5).
    However, I have run into a weird problem. On my home computer, my iTunes displays Japanese characters perfectly fine, but on my work computer, whenever there is a Japanese character, iTunes does not recognize it and puts an ugly "square" character in its place. How do I get my iTunes to display these Japanese characters properly?
    Thanks.

    Well I just answered my own question. I needed to install the files for East Asian languages via the WinXP Control Panel. So if anyone else runs into this problem, there ya go!

  • How to store japanese characters in mysql 5.0

    I want to store Japanese characters in a MySQL 5.0 database through a Java program and then retrieve the same characters through the program. The Java program is a form containing first name, last name and address. I enter the corresponding Japanese translations in these fields when inserting into the database. In another form I retrieve those Japanese characters, and they should be displayed
    in that form.

    How do I handle Unicode for the Japanese characters? Please give me more hints and any reference links that would help me get to the answer.
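
    A minimal sketch of one way to do this (not a definitive answer), assuming MySQL 5.x with Connector/J and a table created with a utf8 character set; the URL, credentials and table/column names are placeholders:

        import java.sql.*;

        public class JapaneseMySql {
            public static void main(String[] args) throws Exception {
                Class.forName("com.mysql.jdbc.Driver");   // needed for old Connector/J versions
                // useUnicode/characterEncoding make the JDBC connection speak UTF-8;
                // the table itself must also use a utf8 character set, e.g.
                //   CREATE TABLE person (first_name VARCHAR(100), last_name VARCHAR(100),
                //                        address VARCHAR(200)) DEFAULT CHARSET=utf8;
                String url = "jdbc:mysql://localhost:3306/testdb"
                           + "?useUnicode=true&characterEncoding=UTF-8";
                try (Connection con = DriverManager.getConnection(url, "user", "pw")) {
                    try (PreparedStatement ins = con.prepareStatement(
                             "INSERT INTO person (first_name, last_name, address) VALUES (?, ?, ?)")) {
                        ins.setString(1, "\u592a\u90ce");         // first name in Japanese
                        ins.setString(2, "\u5c71\u7530");         // last name in Japanese
                        ins.setString(3, "\u6771\u4eac\u90fd");   // address in Japanese
                        ins.executeUpdate();
                    }
                    try (Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery(
                             "SELECT first_name, last_name, address FROM person")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1) + " " + rs.getString(2)
                                               + " " + rs.getString(3));
                        }
                    }
                }
            }
        }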

  • Creating a PDF from a SAAS app creates boxes instead of Japanese characters

    I'm using an online app (Unleashed Software) to "print" invoices, and the printed invoices show boxes instead of Japanese characters. The really weird thing about this problem is that it occurs only on certain devices. I've tested on Macs, Windows, Android, and iOS, and on some devices I get the problem, and on some devices I don't. It's not just a Windows problem or an iOS problem. Additionally, I have used different browsers: Chrome, IE, Firefox, and Safari. Changing the browser doesn't seem to help on a device that won't output Japanese characters in a PDF properly.
    I'm wondering how PDFs are generated when using online software. Since I can't reproduce the problem on certain devices, it seems to me that the software is using some local settings to render the PDF incorrectly.
    Any ideas of how I could go about troubleshooting this problem?

    Hi,
    Could you please answer the following questions:
    1.What version of Crystal Reports are you using?
    Go to Help-> About to find out.
    2.What is the font you are using on the report?
    Try to change the font style to MS Gothic or Arial Unicode MS, most preferably MS Gothic.
    And export the report to PDF format.
    This may help you
    Thanks,
    Praveen G

  • Specify File Encoding(Japanese Characters) for UTL_FILE in Oracle 10g

    Hi All,
    I am creating a text file using the UTL_FILE package. The database is Oracle 10g and the database character set is UTF-8.
    The file is created on the DB server machine itself, which is a Windows 2003 machine with a Japanese OS. Some tables contain Japanese characters which I need to write to the file.
    When these Japanese characters are written to the text file they occupy 3 bytes instead of 1 and distort the fixed format of the file that I need to stick to.
    Can somebody suggest whether there is a way to write the Japanese characters in 1 byte each, or to change the encoding of the file to something else, e.g. Shift-JIS?
    Thanking in advance,
    Regards,
    Tushar

    Are you using the UTL_FILE.FOPEN_NCHAR function to open the files?
    Cheers, APC

  • Problem with Gui_download using ASC File type - japanese characters

    Hi,
    During an upgrade, while downloading data containing Japanese characters using the GUI_DOWNLOAD function module with file type 'ASC', the space between the data of two fields becomes much wider compared to the output of the 4.6C version's WS_DOWNLOAD function module.
    Example: the gap between the first field's data and the second field's data in ECC 6.0 is 6 characters long, but in 4.6C it is 2 characters long.
    Is there any possibility to get results similar to the 4.6C version? Please give your valuable suggestions.
    Thanks
    BalaNarasimman

    Hi Sandra
    Please find the detailed information for your questions.
    1. Internal table content before download: during debugging it was observed that the internal table content was the same in both versions. For testing, I used only brand-new data (a transaction entry).
    2. Download with code page conversion: yes, code page parameter 4103 was explicitly passed to the GUI_DOWNLOAD function module. The front-end code page used by the system is 4110. No errors occurred.
    3. The system is a Unicode system.
    4. The 6 characters do not refer to a byte value; it is only the gap between the two fields' data that is wider in ECC 6.0. Example file data after download:
    ECC 6.0 (GUI_DOWNLOAD): Field1 data <Japanese text>, then a gap of 6 characters, then Field2 data EN
    4.6C (WS_DOWNLOAD): Field1 data <Japanese text>, then a gap of 2 characters, then Field2 data EN
    Note: the special characters are Japanese characters that do not display correctly here.
