# is appearing instead of Japanese characters.

Hi All,
While loading data that includes Japanese characters, '#' values appear in the InfoCube (InfoProvider) instead of the Japanese characters in the production system. When I load the same data in the development system, I can see the Japanese characters. Is there any setting we need to make in the production system so that we see the Japanese characters instead of '#'? If not, please suggest another method.
Thanks,
Ishdeep.

Yogesh,
Actually, the thing is that we don't have any code page setting option during second-level loading. In our case we have 2 ODS objects and 2 master tables as base tables. On these tables we built a view, above the view we have an InfoCube (the second-level load), and we report on the InfoCube. If you know of any code page setting or any other setting for this, please let me know.
Ishdeep.

Similar Messages

  • Square boxes instead of Japanese characters in pdf output of a crystal rep

    Hi All,
Has anyone ever faced an issue where the PDF output of a Crystal Report shows square boxes instead of Japanese characters? The Crystal Report output itself looks perfect, and when I save the output in XLS and RTF formats the characters also look correct as required; the issue occurs only when the output is saved as PDF.
    I have the language pack installed on my machine.
    My guess is that the width of a few characters is not sufficient, since characters in other fields appear fine in the PDF. I still have to test this; it might take a few days before I can access this report, so before that I want to gather information. If anyone has a solution to this issue, please let me know.
    Thanks,
    Ravi

    Hi,
Could you please answer the following questions:
    1. What version of Crystal Reports are you using? (Go to Help > About to find out.)
    2. What font are you using on the report?
    Try changing the font to MS Gothic or Arial Unicode MS (preferably MS Gothic), then export the report to PDF format.
    This may help you.
    Thanks,
    Praveen G

  • Creating a PDF from a SAAS app creates boxes instead of Japanese characters

I'm using an online app (Unleashed Software) to "print" invoices, and the printed invoices show boxes instead of Japanese characters. The really strange thing about this problem is that it occurs only on certain devices. I've tested on Mac, Windows, Android, and iOS; on some devices I get the problem and on some I don't, so it's not just a Windows problem or an iOS problem. I've also tried different browsers, from Chrome to IE to Firefox to Safari; changing the browser doesn't seem to help on a device that won't output Japanese characters in a PDF properly.
    I'm wondering how PDFs are generated when using online software. Since I can't reproduce the problem on certain devices, it seems to me that the software is using some local settings to render the PDF incorrectly.
    Any ideas of how I could go about troubleshooting this problem?


  • PDF file displays boxes instead of Japanese characters

    Hi,
I am creating PDF files with a custom third-party set of software tools (Infragistics NetAdvantage).
    This particular PDF document contains Japanese text using the Meiryo font and this font is embedded into the PDF file. It works fine in Windows (using the Adobe Acrobat Reader application), even on a machine where Meiryo is not installed.
    But when viewed on the iPad, the text is displayed as a bunch of empty boxes.
    Also, I'm not sure if it's related, but when viewing the same document on Window XP using FoxIt (instead of the Adobe Acrobat Reader), the text is not shown at all - there's just an empty space and not even boxes.
    I would love to attach a sample PDF file here, but I don't see any way to do that... is there a way to attach files to these questions?

So the empty boxes are indeed shown by the built-in preview, and Adobe Reader is also unable to render the content; it appears blank.
    After investigation, it turns out that the system fonts on the iPad do not have the required glyphs (the ~ symbol) to render these characters. This is Apple's bug, and you may have to report the issue, indicating that the content is not rendered in Preview/iBooks on the iPad.
    Sorry for the inconvenience.
    Thanks,
    -vaibhav

  • Japanese Characters on MI client display.

    Hi All,
I have a PDA with a Japanese OS, but I am not able to see Japanese characters in the SAP MI Client interface.
    On a Mobile 5 device with an English OS, I am able to see Japanese characters in the SAP MI Client interface by making some changes at the registry level.
    I am using a Softbank Mobile 5 device. Please suggest.
    Thanks in advance.
    Thanks
    Regards
    Devendra..

    Hi All,
To be more precise, I am not familiar with the Japanese language. There are many options for selecting the language type on the keyboard, like katakana, hiragana, and kanji. We can also set the charset value under the Extra option in the PIE menu.
    I have tried many permutations and combinations, but I am still getting squares instead of Japanese characters. I think we might not have to make changes at the registry level, since it is a Japanese OS PDA, so it is probably something to do with the settings for PIE. Please suggest.
    Thanks
    Regards
    Devendra

  • Chinese/Japanese characters not appearing on smartforms PDF output

    Hi,
The print preview of the Smart Forms output layout correctly displays the characters of native/local languages like Chinese and Japanese, but when I try to print this output, it prints junk characters.
    The same printer is able to print Chinese and Japanese characters when printing from MS Word.
    So this issue occurs only when printing from SAP.
    In the spool I can see the Chinese/Japanese characters appearing correctly, but when I try to convert it to PDF using program RSTXPDFT4, the PDF again shows junk characters in place of the Chinese characters.
    Thanks!

This could be due to several reasons:
    1) Make sure your printer is Unicode-enabled.
    2) Make sure your Unicode-enabled printer is configured in SAP.
    3) Make sure your printer device is supported by SAP. (You can find the list of SAP-recommended printers at www.service.sap.com.)
    4) Check whether the correct device type is used for printing Chinese and Japanese characters.
    5) Check the code pages.
    6) Make sure you use a font family with CJK glyph support for printing Chinese and Japanese characters.
    Regards,
    SaiRam

  • Japanese characters in left pane

    I have a project that has been translated into Japanese, and
    the left pane of my FlashHelp system does not render the characters
    correctly. The funny thing is that I got it to work in another
    project that was translated, one that I think was built in X5 (or
    possibly earlier) and has a skin that RoboHelp says should be
    updated when I generate the output. I don't dare change it now, and
    it seems to work fine.
Unfortunately, I didn't make a record of exactly how I got
    that to work, but I seem to remember it had to do with embedding
    Japanese characters in the Flash skin files. Fonts are also
    declared in the skin .fhs file and the accompanying XML file, and
    I'm not sure how they interact, but it seems that the fonts in the
    Flash files supersede those in the skin file.
    In my newer project, the toolbar buttons show up in Japanese,
    as do the "terms" and "definitions" headings in the glossary and
    the index keyword prompt. It's the TOC entries, glossary terms, and
    index terms that have the problem, and they are all driven by the
    file skin_textnode.swf. As long as the other files are set to Arial
    or Arial Unicode MS in Flash, they display Japanese correctly, even
    if Japanese isn't embedded (using "anti-alias for animation").
    Without Japanese embedded in skin_textnode.fla, I get this kind of
    nonsense in the left pane:
    検索する.
    But with Japanese embedded, I get symbols like paragraph symbols,
    plus-or-minus signs, and daggers. I have also saved the .hhc file
    (the TOC) and the .hhk (index) in UTF-8 format using Notepad, but
    no change there. The junk characters also show up in the overlaying
    window when clicking a term in the index, and that's driven by
    skin_index.fla. I've tried the font and character embedding with
    that file. I have tried changing "font-family:Arial" in the skin
    file to both "font:"Arial Unicode MS"" and "font:"Arial"". No
    difference.
    I have compared files with that earlier project. As far as I
    can tell, I have done things comparably, but the junk characters
    persist in the left pane. It doesn't appear that there are
    substantial differences between the way the old and new skins are
    executed to cause the Japanese to work in one and not the other.
    Any ideas that may help me make this work again? I'm using RoboHelp
    6, Windows XP SP2, IE 6. (In theory, making this work for Japanese
    will also solve my problem for Russian. I'm hoping the language
    capabilities of RH7 handle all of this better so I don't have to
    use these work-arounds for non-Roman characters.)
    I know this is a load of information, but I've tried to
    describe the circumstances adequately without writing the great
    American novel. I'll clarify anything that's needed. Thanks,
    Ben

    Solved: I found once more that the TOC, glossary, and index
    information is pulled from files in the whxdata folder:
    whtdata...xml, whgdata...xml, and whidata...xml, respectively,
    which are all in UTF-8 format. The Japanese characters have to be
    changed in these files, but they get overwritten during a build, so
    they have to be stored with the correct characters in another
    location. Fortunately, our glossary will probably not be changing,
    but the TOC and index will grow as the project moves forward, so
    this will take some babysitting.
    This all leads me to wonder why RoboHelp generates copies of
    the .glo, .hhk, and .hhc files into the output folder when it's the
    XML files that are used instead...

  • Japanese Characters Reading Errors on WinXP - Japanese language mode ...

    I have an application that reads japanese characters encoded in Shift-JIS from a web page.
    I use the following method :
    BufferedReader dis = new BufferedReader(
         new InputStreamReader( urlConnection.getInputStream(),"SJIS"));
On WinXP, Win98, and Win2000 (English versions) there are no problems: the characters are read and displayed CORRECTLY, and the program works correctly.
    BUT, on the Japanese version of WinXP, THE SAME APPLICATION gives the following ERRORS:
    Warning : Default charset MS932 not supported, using ISO-8859-1 instead.
    java.io.UnsupportedEncodingException: SJIS
    Because of these errors (I suppose), the colours of the Swing components are changed (I mean there are a lot of strange colours instead of the colours I set in the program; for example, red is changed to green, yellow to blue, ...).
    Can you help me ?
    Regards,
    Cata

I have written a Java program that writes Japanese and English text to a tab-delimited .xls file, using SJIS as the encoding scheme. I can see the Japanese data in a browser, but the same data appears as junk characters when I view the .xls file in the Microsoft Excel 2000 application on my Windows 2000 machine.
    What am I missing here?
    Thanks and Regards,
    Kumar.
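A likely cause of the `UnsupportedEncodingException: SJIS` above is that the `"SJIS"` alias is not guaranteed to exist on every JRE, while the canonical names usually are ("windows-31j" is the canonical name for the MS932 code page mentioned in the warning). A minimal sketch (not from the thread) that probes the Shift_JIS family before opening the reader:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;

public class SjisReadDemo {
    // Pick the first Shift_JIS-family charset this JVM actually supports,
    // instead of hard-coding the "SJIS" alias (absent on some JREs).
    static Charset shiftJis() {
        for (String name : new String[] {"windows-31j", "Shift_JIS", "SJIS"}) {
            if (Charset.isSupported(name)) {
                return Charset.forName(name);
            }
        }
        throw new IllegalStateException("No Shift_JIS charset available");
    }

    public static void main(String[] args) throws IOException {
        Charset cs = shiftJis();
        // "日本" (Japan) encoded in Shift_JIS is the byte sequence 93 FA 96 7B
        byte[] sjisBytes = {(byte) 0x93, (byte) 0xFA, (byte) 0x96, (byte) 0x7B};
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new ByteArrayInputStream(sjisBytes), cs))) {
            System.out.println(cs.name() + " -> " + in.readLine());
        }
    }
}
```

Passing a `Charset` object (rather than a name string) to `InputStreamReader` also avoids the checked `UnsupportedEncodingException` entirely.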

  • Japanese characters, outputstreamwriter, unicode to utf-8

    Hello,
    I have a problem with OutputStreamWriter's encoding of japanese characters into utf-8...if you have any ideas please let me know! This is what is going on:
static public String convert2UTF8(String iso2022Str) {
       String utf8Str = "";
       try {
          // convert string to byte array stream
          ByteArrayInputStream is = new ByteArrayInputStream(iso2022Str.getBytes());
          ByteArrayOutputStream os = new ByteArrayOutputStream();
          // decode iso2022Str byte stream with iso-2022-jp
          InputStreamReader in = new InputStreamReader(is, "ISO2022JP");
          // re-encode to utf-8
          OutputStreamWriter out = new OutputStreamWriter(os, "UTF-8");
          // get each character c from the input stream (will be in unicode) and write to output stream
          int c;
          while ((c = in.read()) != -1) out.write(c);
          out.flush();
          // get the utf-8 encoded output byte stream as string
          utf8Str = os.toString();
          is.close();
          os.close();
          in.close();
          out.close();
       } catch (UnsupportedEncodingException e1) {
          return e1.toString();
       } catch (IOException e2) {
          return e2.toString();
       }
       return utf8Str;
    }
    I am passing a string received from a database query to this function, and the string it returns is saved in an XML file. Opening the XML file in my browser, some Japanese characters are converted, but some, particularly hiragana characters, come up as ???. For example:
    屋台骨田家は時間目離れ拠り所那覇市矢田亜希子ナタハアサカラマ楢葉さマヤア
    shows up as this:
    屋�?�骨田家�?�時間目離れ拠り所那覇市矢田亜希�?ナタ�?アサカラマ楢葉�?�マヤア
    (sorry that's absolute nonsense in Japanese but it was just an example)
    To note:
    - i am specifying the utf-8 encoding in my xml header
    - my OS, browser, etc... everything is set to support japanese characters (to the best of my knowledge)
    Also, I ran a test with a string, looking at its characters' hex values at several points and comparing them with iso-2022-jp, unicode, and utf-8 mapping tables. Basically:
    - if I don't use this function at all...write the original iso-2022-jp string to an xml file...it IS iso-2022-jp
- I also looked at the hex values of "c" being read from the InputStreamReader here:
    while((c=in.read())!=-1) out.write(c);
    and have verified (using a character value mapping table) that in a problem string, all characters are still being properly converted from iso-2022-jp to unicode
    - I checked another table (http://www.utf8-chartable.de/) for the unicode values received and all of them have valid mappings to a utf-8 value
    So it appears that when characters are written to the OutputStreamWriter, not all characters can be mapped from Unicode to utf-8 even though their Unicode values are correct and there should be utf-8 equivalents. Instead they are converted to (hex value) EF BF BD 3F EF BF BD which from my understanding is utf-8 for "I don't know what to do with this one".
    The characters that are not working - most hiragana (thought not all) and a few kanji characters. I have yet to find a pattern/relationship between the characters that cannot be converted.
If I am missing something, or someone has a clue, please let me know. Oh, and I am developing in Eclipse but really don't have a clue about it beyond setting up a project, editing it, and hitting build/run. Is it possible that I may have missed some needed configuration?
    Thank you!!

It's worse than that, Rene; the OP is trying to create a UTF-8 encoded string from a (supposedly) iso-2022 encoded string. The whole method would be just an expensive no-op if it weren't for this line: utf8Str = os.toString(); That line converts the (apparently valid) UTF-8 encoded byte array to a string using the system default encoding (which seems to be iso-2022-jp, BTW). Result: garbage.
    @meggomyeggo, many people make this kind of mistake when they first start dealing with encodings and charset conversions. Until you gain a good understanding of these matters, a few rules of thumb will help steer you away from frustrating dead ends.
    * Never do charset conversions within your application. Only do them when you're communicating with an external entity like a filesystem, a socket, etc. (i.e., when you create your InputStreamReaders and OutputStreamWriters).
    * Forget that the String/byte[] conversion methods (new String(byte[]), getBytes(), etc.) exist. The same advice applies to the ByteArray[Input/Output]Stream classes.
    * You don't need to know how Java strings are encoded. All you need to know is that they always use the same encoding, so phrases like "iso-2022-jp string" or "UTF-8 string" (or even "UTF-16 string") are meaningless and misleading. Streams and byte arrays have encodings, strings do not.
    You will of course run into situations where one or more of these rules don't apply. Hopefully, by then you'll understand why they don't apply.
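Following those rules of thumb, here is a minimal sketch (hypothetical, not the OP's code) of the correct shape: the String is never "converted" at all; UTF-8 enters the picture only at the output stream boundary.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class Utf8WriteDemo {
    // Encode a String as UTF-8 bytes at the output boundary only.
    static byte[] toUtf8(String s) {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        try (Writer out = new OutputStreamWriter(os, StandardCharsets.UTF_8)) {
            out.write(s); // the String has no encoding; the writer does
        } catch (IOException e) {
            // cannot happen for an in-memory stream
            throw new UncheckedIOException(e);
        }
        return os.toByteArray();
    }

    public static void main(String[] args) {
        // 日 (U+65E5) encodes to E6 97 A5 in UTF-8
        for (byte b : toUtf8("日")) {
            System.out.printf("%02X ", b);
        }
        System.out.println();
    }
}
```

In the OP's situation the `Writer` would wrap the XML file's `FileOutputStream` instead of a byte array; the key point is that no `os.toString()` (which re-decodes with the platform default charset) ever happens.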

  • Specify File Encoding(Japanese Characters) for UTL_FILE in Oracle 10g

    Hi All,
    I am creating a text file using the UTL_FILE package. The database is Oracle 10G and the charset of DB is UTF-8.
    The file is created on the DB Server machine itself which is a Windows 2003 machine with Japanese OS. Further, some tables contain Japanese characters which I need to write to the file.
When these Japanese characters are written to the text file, they occupy 3 bytes instead of 1 and distort the fixed-width format of the file that I need to stick to.
    Can somebody suggest whether there is a way to write the Japanese characters in fewer bytes, or to change the encoding of the file to something else, e.g. Shift-JIS?
    Thanking in advance,
    Regards,
    Tushar

    Are you using the UTL_FILE.FOPEN_NCHAR function to open the files?
    Cheers, APC

  • Japanese Characters working as URL parameters, turning to question marks when in URL string itself

    I'm having some trouble getting coldfusion to see japanese
    characters in the URL string.
    To clarify, if I have something like this:
http://my.domain.com/index.cfm?categorylevel0=Search&categorylevel1=%E3%82%A2%E3%82%B8%E3%82%A2%E3%83%BB%E3%83%93%E3%82%B8%E3%83%8D%E3%82%B9%E9%96%8B%E7%99%BA
    All of my code works correctly and the server is able to pass
    the japanese characters to the database and retrieve the correct
    data.
    If I have this instead:
http://my.domain.com/index.cfm/Search/%E3%82%A2%E3%82%B8%E3%82%A2%E3%83%BB%E3%83%93%E3%82%B8%E3%83%8D%E3%82%B9%E9%96%8B%E7%99%BA
    My script (which works fine with English characters) parses
    CGI variables and converts these to the same URL parameters that I
    had in the first URL using a loop and a CFSET url.etc..
    In the first example, looking at the CF debug info shows me
    what I expect to see:
    URL Parameters:
    CATEGORYLEVEL0=Search
    CATEGORYLEVEL1=アジア・ビジネス開発
    In the second example it shows me this:
    URL Parameters:
    CATEGORYLEVEL0=Search
    CATEGORYLEVEL1=???·??????
    Can anyone suggest means for debugging this? I'm not sure if
    this is a CF problem, an IIS problem, a JRUN problem or something
    else altogether that causes it to lose the characters if they are
    in the URL string but NOT as a parameter.

My suggestion was that you test with the first URL, not the second. However, I can see a source of confusion: I overlooked your delimiter, "/". It should be "?" and "=" in this case. With these modifications, we get:
    <cfif Len(cgi.query_string) neq 0>
    <cfset i = 1>
    <cfloop list="#cgi.query_string#" delimiters="&" index="currentcatname">
    <cfoutput>categorylevel#i# = #ListGetAt(currentcatname,2,"=")#</cfoutput><br>
    <cfset i = i + 1>
    </cfloop>
    </cfif>
    If it is a failing of ColdFusion, the above test should fail, too.
    Now, an adaptation of the same test to your second URL:
    <cfset url2 = "http://my.domain.com/index.cfm/Search/%E3%82%A2%E3%82%B8%E3%82%A2%E3%83%BB%E3%83%93%E3%82%B8%E3%83%8D%E3%82%B9%E9%96%8B%E7%99%BA">
    <cfset query_str = ListGetAt(replacenocase(url2,".cfm/","?"),2,"?")>
    <cfif Len(query_str) neq 0>
    <cfset i = 1>
    <cfloop list="#query_str#" delimiters="/" index="currentcatname">
    <cfoutput>categorylevel#i# = #currentcatname#</cfoutput><br>
    <cfset i = i + 1>
    </cfloop>
    </cfif>
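One possible explanation for the ???·?????? symptom (an assumption, not a confirmed diagnosis): web servers typically decode query-string parameters as UTF-8 but decode *path* segments with a Latin-1 default, and the resulting mojibake collapses to '?' when re-encoded later. A small Java sketch showing what the same percent-encoded bytes yield under each charset:

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class UrlDecodeDemo {
    public static void main(String[] args) {
        // first three characters of the category value from the post
        String encoded = "%E3%82%A2%E3%82%B8%E3%82%A2";
        // Decoded as UTF-8, the bytes form the intended Japanese text: アジア
        System.out.println(URLDecoder.decode(encoded, StandardCharsets.UTF_8));
        // Decoded as ISO-8859-1, each kanji/kana becomes three Latin-1
        // characters; re-encoding that mojibake to a legacy charset later
        // produces the '?' marks seen in the debug output.
        System.out.println(URLDecoder.decode(encoded, StandardCharsets.ISO_8859_1));
    }
}
```

If this is the cause, the fix is server-side configuration (telling IIS/JRun to decode URIs as UTF-8) rather than anything in the CFML itself.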

  • Issue with Japanese characters in files/filenames in terminal.

    I recently downloaded a zip file with Japanese characters in the archive and in the files within the archive. The name of the archive is "【批量下载】パノプティコン労働歌 第一等.zip"
    The characters are properly displayed in firefox, chrome, and other applications, but in my terminal some of the characters appear corrupted. Screenshot: https://i.imgur.com/4R22m0D.png
    Additionally, this leads to corruption of the files in the archive. When I try to extract the files, this is what happens:
    % unzip 【批量下载】パノプティコン労働歌 第一等.zip
    Archive: 【批量下载】パノプティコン労働歌 第一等.zip
    extracting: +ii/flac/Let's -+-ʦ1,000,000-.flac bad CRC 5f603d51 (should be debde980)
    extracting: +ii/flac/+ѦѾP++ -instrumental-.flac bad CRC 78b93a2d (should be 3501d555)
    extracting: +ii/flac/----.flac bad CRC ddeb1d3e (should be c05ae84f)
    extracting: +ii/flac/+ѦѾP++.flac bad CRC 0ccf2725 (should be be2b58f1)
    extracting: +ii/flac/Let's -+-ʦ1,000,000--instrumental-.flac bad CRC 67a39f8e (should be ece37917)
    extracting: +ii/flac/.flac bad CRC f90f3aa0 (should be 41756c2c)
    extracting: +ii/flac/ -instrumental-.flac bad CRC 3be03344 (should be 0b7a9cea)
    extracting: +ii/flac/---- -instrumental-.flac bad CRC 569b6194 (should be adb5d5fe)
I'm not sure what could be the cause of this. I'm using uxterm with Terminus as my main font and IPA Gothic (a Japanese font) as my secondary font. I have a Japanese locale set up and have tried setting LANG=ja_JP.utf8 before, but the results never change.
    Also, this issue isn't just with this file; it happens with nearly all archives that have Japanese characters in their file names.
    Has anyone encountered this issue before or knows what might be wrong?
    Last edited by Sanbanyo (2015-05-21 03:12:56)

    Maybe 7zip or another tool has workarounds for broken file names, you could try that.
    Or you could try to go over the files in the zip archive one-by-one and write it to files out-1, out-2, ..., out-$n without concerning yourself with the file names. You could get file endings back via the mimetype.
This C program (using libzip) might work:
    #include <stdio.h>
    #include <zip.h>

    static const char *template = "./out-%04d.bin";

    int main(int argc, char **argv)
    {
        int err = 0;
        zip_t *arc = zip_open(argv[1], ZIP_RDONLY, &err);
        if (arc == NULL) {
            printf("Failed to open ZIP, error %d\n", err);
            return -1;
        }
        zip_int64_t n = zip_get_num_entries(arc, 0);
        printf("%s: # of packed files: %lld\n", argv[1], (long long)n);
        for (int i = 0; i < n; i++) {
            zip_stat_t stat;
            zip_stat_index(arc, i, ZIP_FL_UNCHANGED, &stat);
            char buf[stat.size];
            char oname[32];
            zip_file_t *f = zip_fopen_index(arc, (zip_int64_t)i, ZIP_FL_UNCHANGED);
            zip_fread(f, (void *)&buf[0], stat.size);
            snprintf(oname, sizeof(oname), template, i);
            FILE *of = fopen(oname, "wb");
            fwrite(&buf[0], stat.size, 1, of);
            printf("%s: %s => %llu bytes\n", argv[1], oname, (unsigned long long)stat.size);
            zip_fclose(f);
            fclose(of);
        }
        zip_close(arc);
        return 0;
    }
    Compile with
    gcc -std=gnu99 -O3 -o unzip unzip.c -lzip
    and run as
    ./unzip $funnyzipfile
    You should get template-named, numbered output files in the current directory.
    Last edited by 2ion (2015-05-21 23:09:29)
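The usual root cause of names like "+ѦѾP++" is that the archive was created on Japanese Windows, which stores entry names in code page 932 (Shift_JIS), while unzip assumes CP437 or UTF-8. Another way to recover the names, sketched in Java (the in-memory archive here is only to keep the example self-contained; for the real file you would pass the charset when opening it):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.Charset;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipNameDemo {
    // Build a tiny in-memory zip whose entry name is stored in the given charset.
    static byte[] makeZip(String entryName, Charset nameCharset) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos, nameCharset)) {
            zos.putNextEntry(new ZipEntry(entryName));
            zos.write("hello".getBytes(nameCharset));
            zos.closeEntry();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // in-memory stream: cannot happen
        }
        return bos.toByteArray();
    }

    // Read back the first entry name, decoding it with the given charset.
    static String firstEntryName(byte[] zip, Charset nameCharset) {
        try (ZipInputStream zis =
                new ZipInputStream(new ByteArrayInputStream(zip), nameCharset)) {
            return zis.getNextEntry().getName();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Charset cp932 = Charset.forName("windows-31j"); // Java's name for CP932
        byte[] zip = makeZip("労働歌.flac", cp932);
        // Reading with the matching charset restores the Japanese name intact.
        System.out.println(firstEntryName(zip, cp932));
    }
}
```

The same idea is available from the command line with `unzip -O cp932 file.zip` in builds of Info-ZIP that support `-O`, though availability of that flag varies by distribution.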

  • Acrobat Pro 9.3.1 does not convert certain Japanese characters

    I have a text document that contains a mix of Roman and Japanese characters - when I do Create PDF From File and read that text document in, there is a sequence of 2 Japanese characters that disappear - the text before them and after them appear in the PDF, but there's a void between.
    The sequence is (don't know if I can insert Japanese in here...)
    before監査証跡after
    When the PDF is generated, the first 2 Japanese characters (after the last 'e' in before) do not appear in the PDF.
    Here is the source text document (UTF-8 encoded with BOM): http://www.scribd.com/doc/28158046
    and here is the resulting PDF: http://www.scribd.com/doc/28158121
    Anyone seen this before?

If I paste your "before監査証跡after" into Notepad and save it as UTF-8 text, I can print the file to the Acrobat 9.3.1 Pro "Adobe PDF" printer with no problems at all: the 4 kanji appear in a font Acrobat calls "MS-UIGothic". If I right-click on the saved *.txt file in Windows Explorer (Vista 64) and select "Convert to Adobe PDF" I still get all the kanji, although the first shows up in Adobe Ming, the 2nd in Adobe Song, and the last 2 in KozGoPr6N.
    I can't explain what's going on here, but perhaps this can help point you down a useful path.
    David

  • Problem in displaying Japanese characters in SAPScripts

    Hi All,
I am facing a strange problem in one of my SAPscripts. I have one script in both English and Japanese; the scripts already existed. I had to make some minor changes in a logo window, which I did, and I made no other changes in any of the other windows.
    When the output was previously seen for the Japanese version of the script, it looked OK, displaying all the Japanese characters in the various windows. Now, during testing on the same server, the Japanese characters are not shown; instead, some '#' (hash) symbols are displayed.
    How could this happen? Has anybody faced such a problem? If so, can anybody please help me out with the solution?
    What should I do to get the Japanese characters back in my script?
    Regards,
    Priya

Priya,
    This is not an ABAP problem. Ask your BASIS team to set the printer configuration in SPAD; don't worry, it's not an ABAP issue at all.
    Sometimes a printer doesn't support special characters, so it needs to be configured on the printer side.
    Amit.

  • MySQL Japanese characters

    Hello,
I already searched the forums for a solution to my problem but couldn't find one, though I tried some of the things that were proposed. I have a MySQL database where I store Japanese characters in SJIS. I tested whether they are in SJIS, and they are.
    I am using MySQL 4.1.3b-beta, mysql-connector-java-3.0.14, JBuilderX (JDK 1.4.2), and Windows XP.
    When I try to retrieve the data I only get a square, then some character, then a square, then another character, and so on, but not the actual kanji/kana. I can display Japanese characters correctly when I retrieve them from a file, so I don't think it's a problem with the font.
    My code is as follows:
public void testDB() {
        String result = "";
        Connection con = null;
        Statement st = null;
        try {
            Properties prop = new java.util.Properties();
            prop.put("user", "");
            prop.put("password", "");
            prop.put("useUnicode", "true");
            prop.put("characterEncoding", "SJIS");
            String url = "jdbc:mysql://localhost/japanese";
            //Class.forName("org.gjt.mm.mysql.Driver");
            Class.forName("com.mysql.jdbc.Driver").newInstance();
            con = DriverManager.getConnection(url, prop);
            result = "Database connection established";
            st = con.createStatement();
            ResultSet rs = null;
            String query1 = "SELECT * FROM jp";
            PreparedStatement pstmt = con.prepareStatement(query1);
            rs = pstmt.executeQuery();
            int i = 0;
            while (rs.next()) {
                result = rs.getString(1);
                jTable1.setFont(new Font("Arial Unicode MS", 0, 15));
                jTable1.setValueAt(result, i, 0);
                i++;
            }
        } catch (Exception e) {
            // result = "Cannot connect to database server";
        } finally {
            if (con != null) {
                try {
                    con.close();
                    // result = "Database connection terminated";
                } catch (Exception e) { /* ignore close errors */ }
            }
        }
    }
I hope someone can help me, as MySQL is my last option after discovering that I couldn't use MS Access or FoxPro, because the ODBC-JDBC bridge doesn't work correctly with Unicode.
    On a side note, as I am new to MySQL: after installing version 4.1.3b I no longer have mysqld.exe, only mysqld-opt.exe, so I am using that instead. But I don't think that can be responsible for my problem.
    Thanks for any help.
    S

I see your original question was sent in August, so maybe you have found an answer already; it would be interesting to see how you fixed it.
    I was having a similar sort of problem but managed to fix it by looking at the fonts available to Java on my machine and setting the font used by the graphics object to one of the Japanese fonts. (NB: I see in your code you are setting the font to Arial Unicode MS; maybe setting it to a Japanese font name would be enough.)
    Hopefully the following will give you some help.
// This line puts the name of every font in the graphics environment into an array.
    String[] nameArray = GraphicsEnvironment
                   .getLocalGraphicsEnvironment()
                   .getAvailableFontFamilyNames();
    // I know that the last two fonts are for Japanese characters;
    // I chose the font name second from the end of the array.
    int listfonts = nameArray.length - 2;
    // Then I set the font on the g2 object.
    Font fontselec = new Font(nameArray[listfonts], Font.PLAIN, 24);
    g2.setFont(fontselec);
    best regards
    Graham
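Rather than assuming the Japanese fonts sit at the end of the array, one can ask each installed font whether it can actually render a Japanese sample string. A sketch (a hypothetical helper, not from the thread) using Font.canDisplayUpTo, which returns -1 when the whole string is displayable:

```java
import java.awt.Font;
import java.awt.GraphicsEnvironment;
import java.util.ArrayList;
import java.util.List;

public class JapaneseFontFinder {
    // Return the installed font families able to render every char of the sample.
    static List<String> fontsFor(String sample) {
        List<String> ok = new ArrayList<>();
        String[] families = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getAvailableFontFamilyNames();
        for (String family : families) {
            Font f = new Font(family, Font.PLAIN, 12);
            // canDisplayUpTo returns -1 iff the font covers the whole string
            if (f.canDisplayUpTo(sample) == -1) {
                ok.add(family);
            }
        }
        return ok;
    }

    public static void main(String[] args) {
        // 日本語 = "Japanese"; list every family that can draw it
        for (String name : fontsFor("日本語")) {
            System.out.println(name);
        }
    }
}
```

The list may be empty on a machine with no Japanese-capable fonts installed, which is itself a useful diagnostic for the squares-instead-of-kanji symptom.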
